Bayesian Model of Dynamic Image Stabilization in the Visual System
Citation: Burak, Yoram, Uri Rokni, Markus Meister, and Haim Sompolinsky. 2010. Bayesian Model of Dynamic Image Stabilization in the Visual System. Proceedings of the National Academy of Sciences 107, no. 45: 19525–19530.
Abstract: Humans can resolve the fine details of visual stimuli even though the image projected on the retina is constantly drifting relative to the photoreceptor array. Here we demonstrate that the brain must take this drift into account when performing high acuity visual tasks. Further, we propose a decoding strategy for interpreting the spikes emitted by the retina, which takes into account the ambiguity caused by retinal noise and the unknown trajectory of the projected image on the retina. A main difficulty, addressed in our proposal, is the exponentially large number of possible stimuli, which renders the ideal Bayesian solution to the problem computationally intractable. In contrast, the strategy that we propose suggests a realistic implementation in the visual cortex. The implementation involves two populations of cells, one that tracks the position of the image and another that represents a stabilized estimate of the image itself. Spikes from the retina are dynamically routed to the two populations and are interpreted in a probabilistic manner. We consider the architecture of neural circuitry that could implement this strategy and its performance under measured statistics of human fixational eye motion. A salient prediction is that in high acuity tasks, fixed features within the visual scene are beneficial because they provide information about the drifting position of the image. Therefore, complete elimination of peripheral features in the visual scene should degrade performance on high acuity tasks involving very small stimuli.
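The two-population decoding scheme sketched in the abstract can be caricatured in a toy simulation. Everything below is an illustrative assumption rather than the paper's actual model: a 1-D circular binary image, Bernoulli "spikes," a random-walk drift prior, and a mean-field factorization of the posterior into a position belief (`p_pos`) and a per-pixel image belief (`p_img`), with retinal spikes routed to image coordinates weighted by the position belief.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                    # pixels / receptors on a circular 1-D "retina" (toy assumption)
T = 300                   # time steps
r_on, r_off = 0.8, 0.1    # per-step spike probability for bright vs. dark pixels
sigma = 0.05              # per-step probability of a one-pixel eye drift

image = rng.integers(0, 2, N)   # unknown binary image to be decoded
pos = 0                         # true (unknown to the decoder) image shift

# Decoder state: factorized (mean-field) posterior over shift and image
p_pos = np.ones(N) / N          # "position" population: belief over image shift
p_img = np.full(N, 0.5)         # "image" population: per-pixel P(bright)

for t in range(T):
    # True eye drift: lazy circular random walk
    if rng.random() < sigma:
        pos = (pos + rng.choice([-1, 1])) % N
    # Retina: Bernoulli spikes from the shifted image
    rates = np.where(np.roll(image, -pos) == 1, r_on, r_off)
    spikes = (rng.random(N) < rates).astype(int)

    # --- Position update: likelihood of the spike vector under each shift,
    #     marginalizing over the current image belief ---
    lik = np.empty(N)
    for s in range(N):
        p_bright = np.roll(p_img, -s)
        p_spike = p_bright * r_on + (1 - p_bright) * r_off
        lik[s] = np.prod(np.where(spikes == 1, p_spike, 1 - p_spike))
    # Diffuse the position belief with the drift prior, then weight by likelihood
    p_pos = (1 - sigma) * p_pos + sigma * 0.5 * (np.roll(p_pos, 1) + np.roll(p_pos, -1))
    p_pos *= lik
    p_pos /= p_pos.sum()

    # --- Image update: route spikes back to image coordinates, with each
    #     candidate shift's evidence weighted by the position belief ---
    log_odds = np.log(p_img / (1 - p_img))
    for s in range(N):
        obs = np.roll(spikes, s)                      # spikes in image frame for shift s
        l1 = np.where(obs == 1, r_on, 1 - r_on)       # likelihood if pixel is bright
        l0 = np.where(obs == 1, r_off, 1 - r_off)     # likelihood if pixel is dark
        log_odds += p_pos[s] * (np.log(l1) - np.log(l0))
    log_odds = np.clip(log_odds, -30, 30)             # keep the sigmoid numerically safe
    p_img = 1 / (1 + np.exp(-log_odds))

decoded = (p_img > 0.5).astype(int)
```

Because the setup is translation-symmetric, the decoded image is only recoverable up to a circular shift; this mirrors the abstract's prediction that fixed peripheral features help by anchoring the position estimate.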
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:12724041
Collections: FAS Scholarly Articles