Living with robots: Interactive environmental knowledge acquisition
BLOISI, Domenico Daniele;
2016-01-01
Abstract
To properly interact with people and effectively perform the requested tasks, robots should have a deep and specific knowledge of the environment they live in. Despite recent progress in robotic perception, the current capabilities of robotic platforms in understanding the surrounding environment and the assigned tasks remain limited. Moreover, recent advances in human-robot interaction support the view that robots should be regarded as intelligent agents that can request the user's help to improve their knowledge and performance.

In this paper, we present a novel approach to semantic mapping. Instead of requiring our robots to autonomously learn every possible aspect of the environment, we propose a shift in perspective, allowing non-expert users to shape robot knowledge through human-robot interaction. To this end, we present a fully operational prototype system that incrementally builds, online, a rich and specific representation of the environment. This representation combines the metric information needed for navigation tasks with the symbolic information that conveys meaning to the elements of the environment and the objects therein. Thanks to this representation, we can exploit multiple AI techniques to resolve spatial referring expressions and support task execution. The proposed approach has been experimentally validated in different kinds of environments, by several users, and on multiple robotic platforms.
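As a rough illustration of the kind of representation described in the abstract (not the authors' actual implementation), the minimal Python sketch below pairs metric poses usable for navigation with symbolic labels attached to environment elements; all class, field, and label names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MapElement:
    """A single element of the environment, e.g. an object, a door, or a room."""
    label: str                         # symbolic name used in dialogue, e.g. "kitchen table"
    category: str                      # symbolic type, e.g. "table", "room", "door"
    pose: tuple[float, float, float]   # metric pose (x, y, theta) in the map frame

@dataclass
class SemanticMap:
    """Combines metric information for navigation with symbolic knowledge."""
    elements: dict[str, MapElement] = field(default_factory=dict)

    def add(self, element: MapElement) -> None:
        # Incrementally extend the map, e.g. after the user names a new object.
        self.elements[element.label] = element

    def locate(self, label: str) -> tuple[float, float, float]:
        # Resolve a symbolic reference ("go to the kitchen table") to a metric goal.
        return self.elements[label].pose

# Hypothetical usage: a user tags an object during a guided tour of the environment.
semantic_map = SemanticMap()
semantic_map.add(MapElement(label="kitchen table", category="table", pose=(2.1, 0.5, 0.0)))
print(semantic_map.locate("kitchen table"))
```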