A Computational Model for the Stereoscopic Optics of a Head-Mounted Display

Abstract
For stereoscopic photography or telepresence, orthostereoscopy occurs when the perceived size, shape, and relative position of objects in the three-dimensional scene being viewed match those of the physical objects in front of the camera. In virtual reality, the simulated scene has no physical counterpart, so orthostereoscopy must instead be defined as constancy of the perceived size, shape, and relative positions of the simulated objects as the head moves around. Achieving this constancy requires that the computational model used to generate the graphics match the physical geometry of the head-mounted display being used. This geometry includes the optics used to image the displays and the placement of the displays with respect to the eyes. The model may fail to match the geometry because model parameters are difficult to measure accurately, or because the model itself is in error. Two common modeling errors are ignoring the distortion caused by the optics and ignoring the variation in interpupillary distance across users. A computational model for the geometry of a head-mounted display is presented, and the parameters of this model for the VPL EyePhone are calculated.
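The abstract notes that ignoring optical distortion is a common modeling error. As an illustration only (not the paper's exact formulation), HMD optics are often modeled with a third-order radial distortion polynomial, which the renderer must invert to predistort the image; the coefficient `k` below is hypothetical, not a measured value for the VPL EyePhone:

```python
def distort(r_s: float, k: float) -> float:
    """Radius in the virtual image that the optics produce for a point
    at normalized screen radius r_s (assumed cubic distortion model):
        r_v = r_s + k * r_s**3
    """
    return r_s + k * r_s ** 3


def predistort(r_v: float, k: float, iters: int = 30) -> float:
    """Numerically invert the cubic model by fixed-point iteration,
    giving the screen radius to draw at so the optics map it to r_v.
    Converges for the moderate k and radii typical of HMD optics."""
    r_s = r_v
    for _ in range(iters):
        r_s = r_v - k * r_s ** 3
    return r_s
```

Predistorting by the inverse and then passing through the optics recovers the intended radius, e.g. `distort(predistort(0.5, 0.3), 0.3)` returns 0.5 to within numerical tolerance.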