Are machines capable of understanding and abiding by moral principles? How would we go about teaching them ethics? These questions and more are explored in a New York Times article entitled "The Future of Moral Machines."
The article explores the possibility that, as computing power grows exponentially, we will soon reach a "technological singularity" that could spawn computers resembling the artificial intelligence constructs of many popular works of science fiction. More importantly, if and when this does happen, what are the moral implications? Author Colin Allen draws an important distinction between "autonomy" in the sense of a self-functioning machine (like cars that drive themselves) and "autonomy" in the sense of a machine capable of free will. Ultimately, it is the absence of the latter that precludes the possibility of our machines being capable of any sort of moral thought, and Allen isn't fully convinced we're there yet:
A bar-robbing robot would have to be instructed or constructed to do exactly that. On this view, morality is an issue only for creatures like us who can choose to do wrong.
While true A.I. may still be the stuff of movies, one thing is for sure: we share a closer relationship with our technology now than ever before, and that makes for a future where these very Hollywood topics will be at the forefront of the technology industry. If you don't believe me, just ask your friend Siri.