Perceptual Annotation: Measuring Human Vision to Improve Computer Vision
Citation: Scheirer, Walter, Samuel Anthony, Ken Nakayama, and David Cox. 2014. Perceptual annotation: measuring human vision to improve computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence PP(99): 1–8.
Abstract: For many problems in computer vision, human learners are considerably better than machines. Humans possess highly accurate internal recognition and learning mechanisms that are not yet understood, and they frequently have access to more extensive training data through a lifetime of unbiased experience with the visual world. We propose to use visual psychophysics to directly leverage the abilities of human subjects to build better machine learning systems. First, we use an advanced online psychometric testing platform to make new kinds of annotation data available for learning. Second, we develop a technique for harnessing these new kinds of information – “perceptual annotations” – for support vector machines. A key intuition for this approach is that while it may remain infeasible to dramatically increase the amount of data and high-quality labels available for the training of a given system, measuring the exemplar-by-exemplar difficulty and pattern of errors of human annotators can provide important information for regularizing the solution of the system at hand. A case study for the problem of face detection demonstrates that this approach yields state-of-the-art results on the challenging FDDB data set.
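The abstract's key intuition – using per-exemplar human difficulty to regularize an SVM – can be illustrated with a minimal sketch. The code below is not the paper's actual formulation; it simply shows one plausible way to fold per-sample weights (standing in for hypothetical human-accuracy scores) into the hinge-loss penalty of a linear SVM trained by subgradient descent. All function names, the toy data, and the weighting scheme are illustrative assumptions.

```python
import numpy as np

def train_weighted_svm(X, y, sample_weight, C=1.0, lr=0.01, epochs=200):
    """Linear SVM via subgradient descent on a per-sample-weighted hinge loss:
        0.5*||w||^2 + C * sum_i s_i * max(0, 1 - y_i*(w.x_i + b)).
    The weights s_i are a stand-in for 'perceptual annotations': exemplars
    humans classify reliably could receive a larger penalty weight, pushing
    the classifier hardest to get those right. (Illustrative only.)"""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1  # samples violating the margin
        sw = (sample_weight * y)[active]
        grad_w = w - C * (sw @ X[active])  # subgradient w.r.t. w
        grad_b = -C * np.sum(sw)           # subgradient w.r.t. b
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data: two linearly separable clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)), rng.normal(2.0, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
# Hypothetical human-accuracy scores in [0, 1] used as penalty weights.
weights = np.full(40, 0.9)
w, b = train_weighted_svm(X, y, weights)
acc = float(np.mean(np.sign(X @ w + b) == y))
print(acc)
```

On this easy, separable toy set the weighted classifier reaches perfect training accuracy; the point of the sketch is only where the per-exemplar weights enter the loss, not the specific values used.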
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:12111387