While it is common knowledge that humans visually judge depth when the brain compares images from the two eyes, scientists have now found out that the brain can also judge depth when a person sees with only one eye.
As to how the brain accomplishes this feat, researchers at the University of Rochester say that the answer may lie in a small part of the brain that processes both the image from a single eye and the motion of the body.
"It looks as though in this area of the brain, the neurons are combining visual cues and non-visual cues to come up with a unique way to determine depth," Nature magazine quoted lead researcher Greg DeAngelis, professor in the Department of Brain and Cognitive Sciences at the university, as saying.
The findings of the study suggest that the brain uses a whole array of methods to gauge depth, says DeAngelis.
The researcher says that the brain creates an approximation of the three-dimensional world in our minds by employing neurons to specifically measure our motion, perspective, and how objects pass in front of or behind each other.
DeAngelis believes that the findings may prove helpful in treating children born with misaligned eyes, helping them restore more normal binocular vision function in the brain.
Once researchers gain a deeper understanding of how the brain constructs three-dimensional perception, they may also be able to create more compelling virtual reality environments.
DeAngelis says that the newly identified neural mechanism exploits motion parallax: objects at different distances move across our field of view at different speeds.
The researcher says that neurons in the middle temporal area of the brain combine visual information with physical movement to extract depth information.
According to him, if the eye moves to track the overall motion of a group of objects, the middle temporal neurons have enough information to determine that the object moving fastest in the same direction must be the closest, and the one moving slowest must be the farthest.
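The geometry behind this cue can be sketched with a toy calculation. In the following, assumptions not stated in the article include: a small-angle approximation, a purely lateral observer translation, and the standard motion-parallax relation in which a point's retinal angular speed during pursuit of a fixation target is proportional to the difference between the inverse of its distance and the inverse of the fixation distance. The sign of that speed (its direction relative to the pursuit eye movement) is the non-visual cue that disambiguates near from far:

```python
def retinal_angular_speed(observer_speed, object_distance, fixation_distance):
    """Approximate retinal angular velocity (rad/s) of a point while an
    observer translates laterally at `observer_speed` (m/s) and the eye
    tracks a fixation point at `fixation_distance` (m).

    Small-angle approximation: omega = T * (1/d - 1/D_f).
    Positive sign -> nearer than fixation; negative -> farther.
    """
    return observer_speed * (1.0 / object_distance - 1.0 / fixation_distance)

# Observer walks sideways at 1 m/s while fixating a point 4 m away.
T, D_f = 1.0, 4.0
distances = [1.0, 2.0, 4.0, 8.0, 16.0]  # metres
speeds = {d: retinal_angular_speed(T, d, D_f) for d in distances}

for d in distances:
    print(f"distance {d:5.1f} m -> retinal speed {speeds[d]:+.3f} rad/s")

# The fastest-moving point (largest magnitude) is the nearest one, and
# the point at the fixation distance barely moves at all.
nearest = max(speeds, key=lambda d: abs(speeds[d]))
assert nearest == min(distances)
```

Visual motion alone gives only the magnitudes here; without the pursuit eye-movement signal (the sign convention in the sketch), a fast-moving point could equally be very near or, with the image mirrored, beyond fixation, which is why combining the two cues matters.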
"We use binocular disparity, occlusion, perspective, and our own motion all together to create a representation of the real, 3D world in our minds," says DeAngelis.