Analysis of eyepoint locations and accuracy of rendered depth in binocular head-mounted displays

Abstract
Accuracy of rendered depth in virtual environments requires the correct specification of the eyepoints from which a stereoscopic pair of images is rendered. Rendered depth errors should be minimized for any virtual environment; minimizing them is critical, however, when perception is the object of study in such environments, or when augmented reality environments are created in which virtual objects must be registered with their real counterparts. Based on fundamental optical principles, the center of the entrance pupil is the eyepoint location that minimizes rendered depth errors over the entire field of view if eyetracking is enabled. Because binocular head-mounted displays (HMDs) typically have no eyetracking capability, the change in eyepoint location associated with eye vergence in HMDs is not accounted for. To predict the types and magnitudes of rendered depth errors that result, we conducted a theoretical investigation of rendered depth errors linked to natural eye movements in virtual environments for three possible eyepoint locations: the center of the entrance pupil, the nodal point, and the center of rotation of the eye. Results show that, while the center of rotation yields minimal rendered depth errors at the gaze point, it also yields rendered angular errors around the gaze point that have not previously been reported.