Robot Morality: Can a Machine Have a Conscience? Can It Be On a Higher Moral Ground than Humans?

The SCU Markkula Center for Applied Ethics was fortunate to have Professor George Lucas of the Naval Postgraduate School (Monterey, California) address this topic on September 30, 2013. The event was co-sponsored by the Commonwealth Club's Silicon Valley section.

Professor Lucas's emphasis was on how robot morality pertains to the U.S. military's increasing reliance on robots, drones, and unmanned electronic systems. Lucas believes that a robot can be designed to comply with the demands and laws of morality. The Naval Postgraduate School (NPS) is doing research on unmanned, underwater craft for military intelligence, surveillance, and reconnaissance missions, of the sort that might become necessary were the Navy asked to support allies like Japan and South Korea in their conflicts with China over disputed territory and resources in the South China Sea.

It's simply too expensive to use multiple manned submarines for this and similar missions, according to Prof. Lucas. Instead, one manned submarine is dispatched to a strategic area as a command center, carrying the unmanned, underwater vehicles as cargo. The underwater vehicles are released to explore the deep-sea environment and alert the command sub to any imminent dangers. Such highly scripted, well-defined missions for undersea electronic systems were said to be "dull, boring and dangerous," and so ideal in principle for the use of unmanned systems.

Prof. Lucas alluded to the pioneering work of roboticist Ronald C. Arkin and a test for "machine morality" based upon his research that might apply to any type of robot (not just the underwater variety). Arkin, a computer scientist at Georgia Tech, argues that machines can be "more ethical than humans" since, devoid of feelings, they can act rationally, according to preprogrammed rules which alone govern their behavior. Such robots would be wholly dependent on their embedded software, which means they would remain under external control and human supervision to some degree. Should we arm such machines to launch military attacks on the enemy (e.g., drone strikes) or to return enemy fire if attacked?
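Arkin's approach is often described as an "ethical governor": a layer of preprogrammed constraints that can veto any proposed action. Purely as a loose illustration, and not Arkin's actual system, a rule-governed controller of this kind might look like the Python sketch below; the rule set, field names, and action names are hypothetical.

    # Hypothetical sketch only (not Arkin's actual system): a rule-based
    # "governor" that vetoes any proposed action violating preprogrammed rules.
    from dataclasses import dataclass

    @dataclass
    class ProposedAction:
        kind: str                  # e.g. "observe", "report", "return_fire"
        target_is_combatant: bool
        human_authorization: bool

    # Each rule returns True if the action is permissible.
    RULES = [
        lambda a: a.kind != "return_fire" or a.target_is_combatant,  # discrimination
        lambda a: a.kind != "return_fire" or a.human_authorization,  # human-in-the-loop
    ]

    def governor_allows(action: ProposedAction) -> bool:
        # The robot may only execute actions that every rule permits.
        return all(rule(action) for rule in RULES)

    # Returning fire without human authorization is vetoed:
    print(governor_allows(ProposedAction("return_fire", True, False)))  # False

The point of the sketch is simply that the rules, not any feeling or judgment, are what constrain the machine, which is exactly the dependence on embedded software noted above.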

Professor Lucas said that we can design robots to detect the presence of AK-47 assault rifles in enemy territory (i.e., above ground). How should such a robot behave upon such a detection? Lucas said it should perform as "the least-experienced member of the human military," e.g., a newly recruited Army Private, who would be expected to follow internationally recognized "rules of engagement" for armed conflicts.
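As a rough, hypothetical illustration of that "least-experienced Private" standard (the function and message names below are invented for illustration, not an actual NPS design), the robot's default on detecting a weapon would be to report and await orders rather than act on its own:

    # Toy decision procedure: detect, report, and defer to command.
    # Names and cases are illustrative assumptions only.
    def on_detection(detected_object: str, fired_upon: bool) -> str:
        if detected_object == "AK47":
            if fired_upon:
                # Even under fire, defer to the rules of engagement issued by command.
                return "request_authorization_to_return_fire"
            return "report_position_and_hold"
        return "continue_patrol"

    print(on_detection("AK47", fired_upon=False))  # report_position_and_hold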


Key issue:  If we deliberately design robots that are devoid of emotions or feelings, we deprive them of any opportunity to act with genuine moral autonomy.  Is that a good or bad thing?  For example, should intelligent robots be permitted to supersede orders or override programmed instructions if they perceive them to be morally wrong?  If intelligent, armed robots can discriminate between combatants and non-combatants in a given area, should they act on that information to reduce civilian casualties and collateral damage?


Robots have a much faster reaction time than humans, which is a key advantage, according to Lucas.  A more important advantage is that their use avoids putting U.S. military personnel in harm's way.  However, they are still not very reliable and may not yet be safe to deploy in many environments.  Another disadvantage of military robots was said to be a "lack of situational awareness" of the kind that might justify modifying the rules of engagement.

Two examples of their unarmed military use (beyond drones) are border patrols and autonomous surveillance of other targeted geographical areas by robot helicopters.

“Machines don’t care,” said Prof. Lucas.  “They have no emotions, conscience, or feelings, ambitions, or self-regard, and they don’t need these properties in order to comply with basic elements of law and morality,” he added.

Another key question:

Should we deconstruct human consciousness, guilt, and other feelings and incorporate them into some kind of digital model for use in future "human-like" robotic designs?  Or should we develop some kind of analog of that human behavior to build intelligent robots that are "moral agents"?  In other words, what are the "N" states for robots to implement so that their programmed actions are legally issued and morally justified?
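One way to read the "N states" question, purely as a hypothetical sketch (the state and transition names below are invented for illustration), is as a finite-state model in which the robot may only move along transitions that have been reviewed in advance as legally and morally permissible:

    # Toy finite-state sketch: only explicitly reviewed transitions are allowed.
    ALLOWED_TRANSITIONS = {
        "patrol":            {"observe"},
        "observe":           {"report", "patrol"},
        "report":            {"await_orders"},
        "await_orders":      {"engage_authorized", "patrol"},
        "engage_authorized": {"report"},
    }

    def next_state(current: str, requested: str) -> str:
        # Refuse any transition outside the reviewed set; stay put instead.
        if requested in ALLOWED_TRANSITIONS.get(current, set()):
            return requested
        return current

    print(next_state("observe", "engage_authorized"))  # "observe" -- not permitted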

Lucas’ answer:

“Set aside these speculative and highly ambitious goals of Artificial Intelligence (AI) for now, and instead build something we know how to build.  Make it simple, do it right and let’s see what we learn and where that leads us,”  he said.

Unanswered question:

Is the U.S. relying too much on drones for targeted killings (e.g., in Yemen, northern Pakistan, etc.) in light of the collateral damage?  Of course those missions are too dangerous for manned flight, but are they morally and ethically justified, whether in killing perceived terrorists without trial or in inadvertently killing innocent civilians who are nearby at the time of the drone strike?

Lucas’ answer (post-presentation):

“Targeted killing is a policy, with very troubling moral and legal implications.  The policy is carried out by using Special Forces (killing of Osama bin Laden), sharp-shooters (Navy SEALS killing of Somali pirates), assassins (covert intelligence operatives), and of course, with unmanned systems.  It is the policy, not the platform, which needs examination.  I do suspect we may have come to rely on it too much, and the use of remotely-piloted aircraft to carry it out adds a kind of Star Wars/evil empire dimension, which further clouds public perception and analysis of this policy.”


Reference:

http://www.scu.edu/ethics/conscience/robot-morality.html

Comments:

  1. Thanks Alan for reporting on this very important talk. At least in the near term, it appears that it is the policies, and the resulting software that people create, that will determine the morality of these machines' actions. It is probably more important than ever that policy makers and programmers get the moral part right, as the machines are efficient executors of that policy.

    Wish I had been at this thought-provoking discussion.

  2. Alan, a good article.
    I personally don't understand why robot-assisted killings have become so controversial or contentious. Every offensive weapon that man has invented since the beginning of time has extended his capability to kill. Weapons of any age have reflected the available technology, and there is no reason to think that trend will not hold for the foreseeable future. Let me ask a couple of blunt questions. Would Truman have hesitated more if he could have dispatched an unmanned bomber to drop the nuclear bombs? Would any opposing (a.k.a. enemy) country hesitate to send drones if it could?

    We have to distinguish robots capable of making better or more nearly perfect decisions from robots with a conscience. In my opinion we are very far from robots with a conscience or emotions. Whose conscience would be embedded in the robots? Still, there is no denying it is a thought-provoking issue.
