Accurate grasp detection learning using oriented regression loss

Abstract
Automatic robotic grasping has important application value in industry. Recent works have explored the performance of deep learning for robotic grasp detection. They usually use oriented anchor boxes (OABs) as the detection prior and achieve better performance than previous works. However, the parameters of their regression losses belong to different coordinate systems, which may degrade regression accuracy. This paper proposes an oriented regression loss to resolve this inconsistency among the loss parameters. In the oriented loss, the center-coordinate errors between the ground-truth and the predicted grasp rectangles are rotated into the vertical and horizontal directions of the OAB. The orientation error is then used as an orientation factor and combined with the errors of the rotated center coordinates and of the width and height of the predicted grasp rectangle. The proposed oriented regression loss is evaluated within a YOLO-v3 framework on the grasp detection task. It yields state-of-the-art performance on the Cornell grasping dataset, with an accuracy of 98.8% at 71 frames per second on a GTX 1080Ti. The authors further apply the proposed deep grasp network to a visual-servo intelligent crane. The experimental results indicate that the approach is accurate and robust enough for real-time grasping applications.
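The following is a minimal sketch, not the authors' implementation, of the two ideas the abstract describes: rotating the center-coordinate errors into the local frame of the oriented anchor box, and using the orientation error as a factor combined with the remaining regression terms. The function name, the log-space width/height terms, the squared-error combination, and the exact form of the orientation factor are all assumptions made for illustration.

```python
import math

def oriented_regression_loss(pred, gt, anchor_theta):
    """Hedged sketch of an oriented regression loss for grasp detection.

    pred, gt: dicts with keys x, y, w, h, theta (grasp-rectangle center,
    size, and orientation). anchor_theta: orientation of the oriented
    anchor box (OAB), in radians.
    """
    # Center-coordinate errors in the image frame
    dx = gt["x"] - pred["x"]
    dy = gt["y"] - pred["y"]

    # (1) Rotate the center errors into the horizontal/vertical axes of the OAB
    cos_t, sin_t = math.cos(anchor_theta), math.sin(anchor_theta)
    du = cos_t * dx + sin_t * dy   # error along the OAB's width axis
    dv = -sin_t * dx + cos_t * dy  # error along the OAB's height axis

    # Width/height errors in log space (a common choice in box regression;
    # the paper's exact parameterization may differ)
    dw = math.log(gt["w"] / pred["w"])
    dh = math.log(gt["h"] / pred["h"])

    # (2) Orientation error acts as a factor on the combined regression error
    dtheta = gt["theta"] - pred["theta"]
    orientation_factor = 1.0 + abs(dtheta)  # assumed form, not from the paper

    return orientation_factor * (du**2 + dv**2 + dw**2 + dh**2)
```

Because du and dv are expressed in the anchor's own frame, all regression terms share one coordinate system, which is the inconsistency the proposed loss is meant to remove.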
