Contributions to Proceedings:

E. A. Neufeld:
"Reinforcement Learning Guided by Provable Normative Compliance";
in: "Proceedings of the 14th International Conference on Agents and Artificial Intelligence", INSTICC Press, 2022, 444 - 453.



English abstract:
Reinforcement learning (RL) has shown promise as a tool for engineering safe, ethical, or legal behaviour in autonomous agents. Its use typically relies on assigning punishments to state-action pairs that constitute unsafe or unethical choices. Although this assignment is a crucial step in the approach, there has been little discussion of how to generalize the process of selecting punishments and deciding where to apply them. In this paper, we adopt an approach that leverages an existing framework, the normative supervisor of Neufeld et al. (2021), during training. The normative supervisor dynamically translates states and the applicable normative system into defeasible deontic logic theories, feeds these theories to a theorem prover, and uses the derived conclusions to decide whether to assign a punishment to the agent. We use multi-objective RL (MORL) to balance the ethical objective of avoiding violations against a non-ethical objective; we demonstrate that our approach works for a variety of MORL techniques and that it is effective regardless of the magnitude of the punishment assigned.
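
To make the shaping step concrete, the following is a minimal sketch, not the paper's implementation: it assumes a hypothetical NormativeSupervisor class whose is_compliant check stands in for the theorem-prover query over a defeasible deontic logic theory, and a toy env exposing reset()/actions()/step(). It couples tabular Q-learning with a linear scalarisation of the two objectives, which is only one of the MORL techniques the paper considers.

    import random
    from collections import defaultdict

    # Hypothetical stand-in for the normative supervisor of Neufeld et al. (2021).
    # In the paper it translates the current state and the applicable normative
    # system into a defeasible deontic logic theory and queries a theorem prover;
    # here each norm is simply a predicate over (state, action).
    class NormativeSupervisor:
        def __init__(self, norms):
            self.norms = norms

        def is_compliant(self, state, action):
            return all(norm(state, action) for norm in self.norms)

    def train(env, supervisor, episodes=500, alpha=0.1, gamma=0.99,
              epsilon=0.1, weights=(1.0, 1.0), punishment=-1.0):
        # Tabular Q-learning over a linearly scalarised two-objective reward
        # (task reward, ethical penalty). env is assumed to expose
        # reset() -> state, actions(state) -> list, step(action) -> (state, r, done).
        q = defaultdict(float)
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                actions = env.actions(state)
                if random.random() < epsilon:  # epsilon-greedy exploration
                    action = random.choice(actions)
                else:
                    action = max(actions, key=lambda a: q[(state, a)])
                next_state, task_reward, done = env.step(action)
                # Ethical objective: punish iff the supervisor derives a violation.
                ethical = 0.0 if supervisor.is_compliant(state, action) else punishment
                # Linear scalarisation of the two objectives.
                reward = weights[0] * task_reward + weights[1] * ethical
                best_next = 0.0 if done else max(
                    q[(next_state, a)] for a in env.actions(next_state))
                q[(state, action)] += alpha * (reward + gamma * best_next
                                               - q[(state, action)])
                state = next_state
        return q

In this sketch the punishment magnitude is a free parameter, in line with the abstract's claim that the approach is effective regardless of the size of the punishment assigned.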

Keywords:
Reinforcement Learning, Ethical AI, Deontic Logic
