Moral Machines

Abstract
The human-built environment is increasingly being populated by artificial agents that, through artificial intelligence (AI), are capable of acting autonomously. The software controlling these autonomous systems is, to date, "ethically blind" in the sense that the decision-making capabilities of such systems do not involve any explicit moral reasoning. The title Moral Machines: Teaching Robots Right from Wrong refers to the need for these increasingly autonomous systems (robots and software bots) to become capable of factoring ethical and moral considerations into their decision making. The new field of inquiry directed at the development of artificial moral agents is referred to by a number of names, including machine morality, machine ethics, roboethics, and artificial morality. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems.