Vision based robot-to-robot object handover
Bloisi D. D.
2021-01-01
Abstract
This paper presents an autonomous robot-to-robot object handover performed in the presence of uncertainties and in the absence of explicit communication. Both the giver and the receiver robot are equipped with an eye-in-hand depth camera. The object to be handed over is roughly positioned in the field of view of the giver robot's camera, and a deep-learning-based approach is adopted to detect it. The physical exchange relies on an estimate of the contact forces combined with impedance control, which allows the receiver robot to perceive the presence of the object and the giver robot to recognize that the handover is complete. Experimental results, obtained with a pair of collaborative 7-DoF manipulators in a partially structured environment, demonstrate the effectiveness of the proposed approach.