Learning Generic Prior Models for Visual Computation



Title: Learning Generic Prior Models for Visual Computation
Author: Zhu, Song Chun; Mumford, David Bryant

Note: Order does not necessarily reflect citation order of authors.

Citation: Zhu, Song Chun, and David Bryant Mumford. 1997. Learning generic prior models for visual computation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: June 17-19, San Juan, Puerto Rico, ed. IEEE Computer Society, 463-469. Los Alamitos, CA: IEEE Computer Society.
Abstract: This paper presents a novel theory for learning generic prior models from a set of observed natural images, based on the minimax entropy theory the authors studied in modeling textures. We start by studying the statistics of natural images, including their scale-invariant properties; generic prior models are then learned to duplicate the observed statistics. The learned Gibbs distributions confirm and improve the forms of existing prior models. More interestingly, inverted potentials are found to be necessary, and such potentials form patterns and enhance preferred image features. The learned model is compared with existing prior models in image restoration experiments.
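The learning scheme the abstract describes fits a Gibbs distribution whose expected feature statistics match those observed in natural images. A minimal sketch of that idea on a toy discrete state space (the seven-state space, the |d| feature, and the observed value are all illustrative assumptions, not data or filters from the paper):

```python
import numpy as np

# Toy maximum-entropy / Gibbs-model fitting in the spirit of the paper:
# choose potentials so the model's expected statistic matches an
# observed one. States here are intensity differences d in {-3..3};
# the feature |d| stands in for a filter-response statistic.
states = np.arange(-3, 4)
phi = np.abs(states).astype(float)
obs_mean = 1.0  # hypothetical observed E[|d|] (an assumption)

lam = 0.0
for _ in range(2000):
    w = np.exp(lam * phi)          # unnormalized Gibbs weights
    p = w / w.sum()                # model distribution p(d) ∝ exp(lam*|d|)
    model_mean = (p * phi).sum()   # E_model[|d|]
    # Gradient ascent on log-likelihood: move lam until the model
    # statistic equals the observed statistic.
    lam += 0.1 * (obs_mean - model_mean)
```

On real images the expectation under the model cannot be enumerated and is estimated by Gibbs sampling, and many filter histograms are matched jointly; the fixed point, where model and observed statistics agree, is the same.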
Published Version: doi:10.1109/CVPR.1997.609366
Other Sources: http://www.dam.brown.edu/people/mumford/Papers/DigitizedVisionPapers--forNonCommercialUse/97a--LearningPriors-Zhu.pdf
Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:3627119