
Mel Goodale, Brain and Mind Institute

Latest Research

Size constancy for perception and action

Imagine a car driving away from you down the road. Even though the image of the car on your retina is becoming smaller and smaller, you continue to see it as staying the same size. Our brain combines perspective and other distance cues in the visual scene with the size of the car’s image on the eye to work out its real size, a phenomenon known as size constancy.  But it is important to remember that size constancy not only allows us to make sense of the world, it also allows us to make skilled and fluid movements when we interact with objects – and it is here where important differences in the way size constancy is maintained begin to emerge.
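To make the geometry concrete, here is a minimal sketch (written in Python, with made-up example numbers) of the computation that size constancy implies: an object’s real size can be recovered by combining its angular size on the retina with an estimate of how far away it is.

```python
import math

def physical_size(visual_angle_deg, distance_m):
    """Recover an object's real-world size from its angular size on the
    retina and an estimate of its distance (simple trigonometry)."""
    return 2 * distance_m * math.tan(math.radians(visual_angle_deg) / 2)

# A 4.5 m long car viewed at increasing distances: the angle it subtends
# shrinks, but combining that angle with the distance estimate always
# returns the same real size.
for d in (10, 20, 40, 80):
    angle = 2 * math.degrees(math.atan(4.5 / (2 * d)))  # retinal-image size
    print(f"distance {d:3d} m  angle {angle:5.2f} deg  "
          f"recovered size {physical_size(angle, d):.2f} m")
```

Run it and the recovered size stays at 4.5 m at every distance, even though the retinal angle keeps shrinking.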

These cars are all exactly the same size on your retina, but perspective makes the one on the right appear much smaller than the one on the left.

To examine the differences between size constancy for perception and size constancy for action, we asked observers to view a glowing ball in the dark through a tiny pinhole. The ball changed in size from trial to trial and was presented at different distances. As a result, the participants had no idea how big the ball was, because they had no visual information about its distance.  Even when observers held a small pedestal on which the ball was resting, and therefore knew exactly where their hand was, they were still unsure of the ball’s size. Remarkably, however, if they reached out with their other hand and tried to grasp the ball, not only did they move to the right place, but their hand opening was scaled to the real size of the ball!

This shows that size (grip) constancy is preserved in this action task because the brain can use the sensed position of the body, known as proprioception, to maintain size constancy in the grasping hand, whereas the neural networks that govern size constancy when we estimate the size of objects rely mainly on visual distance cues. These new findings have important implications not only for understanding how input from different sensory systems can be combined, particularly when one sensory system is compromised, but also for the design of autonomous robots, in which rapid, accurate and precise calculations are required.
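As a rough illustration of the robotics point, the sketch below (the function names and numbers are ours, purely for illustration, not code from the study) shows how the same size computation can fall back on a proprioceptive estimate of distance when visual distance cues are unavailable – which is essentially what the grasping hand appears to do.

```python
import math

def object_size(visual_angle_deg, distance_m):
    # Same trigonometric relation as in the sketch above.
    return 2 * distance_m * math.tan(math.radians(visual_angle_deg) / 2)

def grip_aperture(visual_angle_deg, visual_distance_m=None,
                  proprioceptive_distance_m=None, margin_m=0.02):
    """Open the hand to the object's estimated size plus a small safety
    margin. When visual distance cues are missing (the pinhole-in-the-dark
    condition), fall back on the felt position of the other hand."""
    distance = (visual_distance_m if visual_distance_m is not None
                else proprioceptive_distance_m)
    if distance is None:
        raise ValueError("no distance estimate available")
    return object_size(visual_angle_deg, distance) + margin_m

# A 6 cm ball held at 40 cm subtends about 8.6 degrees; with only the
# proprioceptive distance signal the grip still opens to about 8 cm.
angle = 2 * math.degrees(math.atan(0.06 / (2 * 0.40)))
print(round(grip_aperture(angle, proprioceptive_distance_m=0.40), 3))
```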

In a related paper, we show that, even though primary visual cortex is necessary for perceptual size constancy, it is not required for grip constancy. We tested a patient (MC) with bilateral lesions of primary visual cortex and much of the ventral stream.  Her perceptual estimates of object size co-varied with retinal-image size rather than real-world size as viewing distance varied. In contrast, she showed near-normal scaling of in-flight grasp aperture to object size despite changes in viewing distance. This suggests that grip constancy is mediated instead by separate visual inputs to dorsal-stream visuomotor areas.

For more information, see:

Chen, J., Sperandio, I., & Goodale, M.A. (2018). Proprioceptive distance cues restore perfect size constancy in grasping, but not perception, when vision is limited. Current Biology, 28, 927-932.

Whitwell, R. L., Sperandio, I., Buckingham, G., Chouinard, P. A., & Goodale, M. A. (2020). Grip constancy but not perceptual size constancy survives lesions of early visual cortex. Current Biology, 30, 3680-3686. 


De-blurring a visual image without glasses!


Prof. Derek Arnold and his fellow researchers at the University of Queensland, working together with our lab, have shown that the ability to see fine visual detail can be sharpened simply by staring for a few seconds at a rapidly flickering display.

This counter-intuitive result arises from the fact that there are two major pathways that carry information from the eyes to the visual areas of the brain.  One pathway is fast and carries out coarse processing of the visual scene; the other is slower but provides more detailed and fine-grained information.  Staring for a while at a flickering field of visual ‘noise’ tires out the fast coarse-grained pathway and allows the pathway carrying fine-grained and detailed information to dominate. 

It has long been thought that the fast pathway allows us to see visual motion or to detect the rapid appearance of an object, but contributes little to our perception of the form of objects.  The research team’s findings suggest, however, that this is not the case.  The improvement in one’s ability to see fine detail after the coarse-grained pathway has been taken offline strongly suggests that both pathways contribute to our ability to see visual shapes and patterns.  Reducing the input from the coarse pathway effectively ‘de-blurs’ the image!
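One way to picture the idea is a toy two-channel sketch – not a model of the visual pathways themselves, just an illustration of the arithmetic: if the signal the brain works with is a blend of a coarse (blurred) component and a fine-detail component, then turning down the gain on the coarse channel leaves the fine detail relatively more prominent.

```python
import numpy as np

def box_blur(signal, width=9):
    """Coarse channel: a simple moving-average (low-pass) filter."""
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

# A 1-D 'image': a sharp edge plus fine-grained texture.
x = np.linspace(0, 1, 200)
image = (x > 0.5).astype(float) + 0.05 * np.sin(80 * np.pi * x)

coarse = box_blur(image)   # fast, coarse-grained channel
fine = image - coarse      # slower, fine-grained channel

before = coarse + fine        # normal viewing: the original blend
after = 0.3 * coarse + fine   # after adaptation: coarse channel attenuated

# The fine-detail component accounts for a larger share of the combined
# signal once the coarse channel is turned down.
print("fine-detail share before:", round(np.var(fine) / np.var(before), 3))
print("fine-detail share after: ", round(np.var(fine) / np.var(after), 3))
```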

So the next time you want to read the fine print on a form or on the back of a medicine bottle (a difficult task demanding fine spatial vision) – and you don’t have a magnifying glass to hand – you might want to first view a flickering field of dynamic noise!

For more information, see:

Arnold, D.H., Williams, J.D., Phipps, N.E., & Goodale, M.A. (2016). Sharpening vision by adapting to flicker. Proceedings of the National Academy of Sciences (USA), 113, 12556-12561.