On morals and machines

As artificial devices and systems become more sophisticated and flexible, and as they are entrusted with a widening range of tasks under decreasing human supervision, it seems prudent to investigate embedding moral values in automatic agents. This idea is not new: a form of it appears in Asimov's Laws of Robotics, a very simple set of laws intended to keep humans safe in human-robot interactions. However, interactions between moral beings in the real world are far more complex than can be captured by a set of basic safety rules for a system of conscious masters and non-conscious slaves; the value systems underlying real ethics do not form a simple complete ordering, even though evaluating a particular situation may yield a very clear-cut choice between possible actions.

Of course, it could be said that actual ethics apply only to conscious or ensouled entities, and that a machine cannot have ethics (and neither, perhaps, can a lower animal). I myself am inclined to take this view; but it is possible to model some part of ethics logically and build this into a computational system, and I'd find the experiment very interesting...

An experimental system

How could we go about building such a system? We could allow it to emerge from an evolutionary system, or we could encode established ideas (translate the Book of Proverbs into Prolog? ;-), or use a mixture of these, perhaps with other approaches besides.
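To make the rule-encoding option slightly more concrete, here is a toy sketch, not taken from any real system, of what a few moral maxims might look like as Prolog clauses; the situation facts and the predicates promised/3, did/2, benefits/2, harms/2, incompatible/2 and unfair/2 are all hypothetical, invented purely for illustration:

    % Hypothetical facts describing one small situation.
    promised(alice, bob, return_book).
    did(alice, keep_book).
    benefits(keep_book, alice).
    harms(keep_book, bob).

    % Which acts are incompatible with which promised acts.
    incompatible(keep_book, return_book).

    % A broken promise: the agent promised one act but did something incompatible with it.
    broke_promise(Agent, Other) :-
        promised(Agent, Other, Act),
        did(Agent, Done),
        incompatible(Done, Act).

    % A very crude notion of unfairness: an act that benefits the agent while harming someone else.
    unfair(Agent, Act) :-
        did(Agent, Act),
        benefits(Act, Agent),
        harms(Act, Other),
        Other \= Agent.

Loading this and asking ?- broke_promise(alice, Who). or ?- unfair(alice, Act). gives Who = bob and Act = keep_book; the hard part, of course, is getting real situations into such a representation at all.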

An automatic advisor?

Having built such a system, it should also be possible for people to query it about a situation (once our representation of real-life situations becomes good enough); the ethical decision would still, ultimately, be up to the person, but the machine could point out (non-exhaustively) what appears to be unfair about the situation.
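As a minimal sketch of that advisory role, and still assuming the hypothetical unfair/2 relation from the toy rules above, the machine need not decide anything; it could simply collect whatever apparent unfairness it can derive and hand the list back to the person:

    % The advisor does not decide; it merely enumerates (non-exhaustively)
    % the apparently unfair acts it can derive for an agent.
    advise(Agent, Concerns) :-
        findall(unfair(Agent, Act), unfair(Agent, Act), Concerns).

    % ?- advise(alice, Concerns).
    % Concerns = [unfair(alice, keep_book)].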

For other essays, see the index to this collection; and for some other thoughts, my thoughts index.
