Researchers Teaching Robots How to Best Reject Orders From Humans

As robotics researchers develop more sophisticated and natural ways for humans to interact with robots, they are also seeking to ensure these interactions do not prove dangerous for the robots themselves. Gordon Briggs and Matthias Scheutz of Tufts University's Human-Robot Interaction Lab are working on techniques that enable robots to reject human orders that could harm them. Their system borrows the concept of "felicity conditions" from linguistic theory, which reflect a person's understanding of, and capability to fulfill, an instruction. Briggs and Scheutz's framework allows a robot to use felicity conditions to determine whether it is able to carry out an instruction it receives, and whether it should do so. For example, the robot can refuse an order to walk forward if it detects that doing so would cause it to run into a wall or off a table. The system also lets human operators clarify a command after it has been rejected, such as by saying that they will catch the robot if it falls. Briggs and Scheutz's research was presented at the AI for Human-Robot Interaction Symposium in Washington, D.C., earlier this month.
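The decision flow described above can be sketched in a few lines of code. The condition names, the dialogue step, and the whole structure below are illustrative assumptions for this summary, not Briggs and Scheutz's actual implementation:

```python
# Hypothetical sketch of felicity-condition checking before obeying a command.
# The specific conditions and the "assurance" mechanism are assumptions made
# for illustration; they are not the researchers' published code.

class Robot:
    def __init__(self):
        # World knowledge the robot consults before acting.
        self.facts = {"wall_ahead": False, "at_table_edge": True}
        # Assurances given by the operator, e.g. "I will catch you."
        self.assurances = set()

    def capabilities(self):
        # Commands the robot knows how to execute.
        return {"walk_forward", "stop"}

    def unsafe(self, command):
        # Safety check: walking forward at a table edge is unsafe
        # unless the operator has promised to catch the robot.
        if command == "walk_forward" and self.facts["at_table_edge"]:
            return "catch_me" not in self.assurances
        return False

    def felicity_conditions(self, command):
        """Check each felicity condition in turn; all must hold."""
        checks = [
            (command in self.capabilities(), "I do not know how to do that."),
            (not self.unsafe(command), "I cannot: I would fall."),
        ]
        for ok, reason in checks:
            if not ok:
                return False, reason
        return True, None

    def obey(self, command):
        ok, reason = self.felicity_conditions(command)
        return "OK." if ok else reason


robot = Robot()
print(robot.obey("walk_forward"))   # refused: the robot is at a table edge
robot.assurances.add("catch_me")    # operator clarifies: "I will catch you."
print(robot.obey("walk_forward"))   # the assurance discharges the condition
```

The key design point this sketch tries to capture is that rejection is not final: the operator's clarification updates the robot's state, and the same felicity check then succeeds.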

More info: Evan Ackerman, IEEE Spectrum (11/19/15).
