As robots and artificially intelligent software become more capable and autonomous, many experts say robots will at some point need the ability to make moral decisions. There currently are two broad ideas for how to instill a robot with morality. The first, a top-down strategy, would involve explicitly programming the robot with moral guidelines to follow. However, experts say this approach has potential downsides: computing the moral consequences of every action could overwhelm a robot's processor, and even the most rigorous guidelines have flaws and loopholes that could lead to undesirable consequences. The other approach, bottom-up, would have the robot use a form of machine learning to "learn" moral behavior, possibly by observing human media.

Moral issues also will crop up around how people perceive machines as machines become more human-like. The law will have to determine the legal status of such robots and how to handle the potential legal repercussions of their autonomous actions. This will be especially true as robots and artificial intelligence grow more complex and begin to exhibit emergent behavior, meaning actions they were not specifically programmed to carry out.
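The top-down/bottom-up contrast can be made concrete with a deliberately toy sketch. Everything below is hypothetical illustration, not anything from the article: hand-written rules stand in for the top-down strategy, and a bare-bones nearest-example lookup stands in for machine learning. The example also shows the loophole problem the experts warn about, since any action the rule author forgot to list slips through.

```python
# Toy sketch (entirely hypothetical): contrasting the two strategies
# on a trivial "action screening" task.

# Top-down: explicitly programmed rules. Loopholes are easy to create --
# any action not covered by a rule is silently permitted.
FORBIDDEN = {"deceive", "harm"}

def top_down_permits(action: str) -> bool:
    """Permit any action not explicitly forbidden by the hand-written rules."""
    return action not in FORBIDDEN

# Bottom-up: judge actions from labeled examples instead of fixed rules.
# A crude nearest-neighbour lookup stands in for real machine learning here.
EXAMPLES = {
    "help": True, "rescue": True, "share": True,
    "deceive": False, "harm": False, "steal": False,
}

def bottom_up_permits(action: str) -> bool:
    """Judge an action by the closest labeled example (by shared letters)."""
    def overlap(a: str, b: str) -> int:
        return len(set(a) & set(b))
    nearest = max(EXAMPLES, key=lambda ex: overlap(ex, action))
    return EXAMPLES[nearest]

# The top-down rules have a loophole: "steal" was never listed as forbidden.
print(top_down_permits("steal"))   # -> True: a gap in the explicit rules
print(bottom_up_permits("steal"))  # -> False: covered by a labeled example
```

The sketch is only a caricature, but it mirrors the article's point: explicit rules fail exactly where their author's foresight ends, while learned judgments are only as good as the examples (or media) the system was trained on.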
More info here: California Magazine (06/04/15), Coby McDonald