Abstract
Current three-dimensional vision algorithms can generate depth maps or vector maps from images, but few algorithms extract high-level information from these depth maps. This paper presents an algorithm that determines an object's orientation by matching object models to depth-map data. The object models are constructed by mapping surface orientation data onto spheres. This process is based on a mathematical theorem that applies only to convex objects, but some extensions for nonconvex objects are presented. The paper shows that a global approach can succeed when objects do not touch one another. Another important result illustrates the size of the space of rotations: even when 6,000 rotations are distributed almost uniformly for matching, errors of 17 degrees are still possible.
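The model construction described above, mapping surface orientation data onto a sphere, can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it is a hypothetical Extended-Gaussian-Image-style histogram that bins facet normals into latitude/longitude cells on the unit sphere, weighted by facet area (the lat/lon tessellation is an assumption chosen for simplicity):

```python
import numpy as np

def orientation_histogram(normals, areas, n_lat=8, n_lon=16):
    """Accumulate facet areas into a latitude/longitude histogram over
    the unit sphere of normal directions (an EGI-style object model)."""
    normals = np.asarray(normals, dtype=float)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    # Spherical coordinates of each unit normal.
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))          # polar angle in [0, pi]
    phi = np.mod(np.arctan2(normals[:, 1], normals[:, 0]), 2 * np.pi)
    # Map angles to histogram bins, clamping the poles into the last row.
    i = np.minimum((theta / np.pi * n_lat).astype(int), n_lat - 1)
    j = np.minimum((phi / (2 * np.pi) * n_lon).astype(int), n_lon - 1)
    hist = np.zeros((n_lat, n_lon))
    np.add.at(hist, (i, j), areas)   # sum areas falling into the same cell
    return hist

# Unit cube: six face normals, each face of area 1.
cube_normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
hist = orientation_histogram(cube_normals, np.ones(6))
print(hist.sum())  # total histogram mass equals total surface area: 6.0
```

For a convex object this mapping is invertible (the theorem the paper invokes), so the histogram serves as a model that can be rotated and compared against orientation data extracted from a depth map.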
