Perceptual learning in a non-human primate model of artificial vision
Citation: Killian, Nathaniel J., Milena Vurro, Sarah B. Keith, Margee J. Kyada, and John S. Pezaris. 2016. "Perceptual learning in a non-human primate model of artificial vision." Scientific Reports 6 (1): 36329. doi:10.1038/srep36329. http://dx.doi.org/10.1038/srep36329.
Abstract: Visual perceptual grouping, the process of forming global percepts from discrete elements, is experience-dependent. Here we show that the learning time course in an animal model of artificial vision is predicted primarily from the density of visual elements. Three naïve adult non-human primates were tasked with recognizing the letters of the Roman alphabet presented at variable size and visualized through patterns of discrete visual elements, specifically, simulated phosphenes mimicking a thalamic visual prosthesis. The animals viewed a spatially static letter using a gaze-contingent pattern and then chose, by gaze fixation, between a matching letter and a non-matching distractor. Months of learning were required for the animals to recognize letters using simulated phosphene vision. Learning rates increased in proportion to the mean density of the phosphenes in each pattern. Furthermore, skill acquisition transferred from trained to untrained patterns, not depending on the precise retinal layout of the simulated phosphenes. Taken together, the findings suggest that learning of perceptual grouping in a gaze-contingent visual prosthesis can be described simply by the density of visual activation.
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:29625969
Collections: HMS Scholarly Articles