Abstract
In this article we describe a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but many of these manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. Experimental results are presented to illustrate the functioning of the system.
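The pipeline described above — extract visual features, apply heuristic rules to generate candidate grasps, then execute a two-stage reach and preshape — can be sketched as follows. This is a minimal illustration under assumed interfaces, not the paper's implementation: the feature set, rule thresholds, and all names (`ObjectFeatures`, `plan_grasps`, `execute_grasp`, `max_aperture_cm`) are hypothetical.

```python
# Hypothetical sketch of a two-stage heuristic grasping pipeline.
# None of these names or thresholds come from the paper itself.
from dataclasses import dataclass

@dataclass
class ObjectFeatures:
    """Visual features extracted from an unmodeled object (assumed set)."""
    width_cm: float            # extent across the candidate grasp axis
    height_cm: float           # extent along the vertical axis
    principal_axis_deg: float  # orientation of the major axis in the image

def plan_grasps(f: ObjectFeatures, max_aperture_cm: float = 8.0):
    """Expert-system-style rules mapping features to valid reach/grasps."""
    grasps = []
    if f.width_cm <= max_aperture_cm:
        # Side grasp: align the wrist with the object's principal axis.
        grasps.append({"type": "side",
                       "wrist_deg": f.principal_axis_deg,
                       "preshape_cm": f.width_cm + 1.0})
    if f.height_cm <= max_aperture_cm:
        # Top grasp: approach from above with a vertical wrist orientation.
        grasps.append({"type": "top",
                       "wrist_deg": 90.0,
                       "preshape_cm": f.height_cm + 1.0})
    return grasps

def execute_grasp(grasp):
    """Two-stage execution: ballistic reach, then preshape and close."""
    # Stage one: orient hand/wrist and make a ballistic reach to the object.
    steps = [f"reach wrist={grasp['wrist_deg']:.0f}deg"]
    # Stage two: preshape the hand, then adjust/close until finger contact.
    steps.append(f"preshape aperture={grasp['preshape_cm']:.1f}cm")
    steps.append("close until contact")
    return steps
```

For an object 5 cm wide and 12 cm tall, the width rule fires but the height rule does not, so only a side grasp is proposed; execution then emits the stage-one reach followed by the stage-two preshape and closure.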