Depth and Deblurring from a Spectrally-varying Depth-of-Field

Title: Depth and Deblurring from a Spectrally-varying Depth-of-Field
Author: Chakrabarti, Ayan; Zickler, Todd

Note: Order does not necessarily reflect citation order of authors.

Citation: Chakrabarti, Ayan, and Todd Zickler. 2012. Depth and Deblurring from a Spectrally-varying Depth-of-Field. Lecture Notes in Computer Science 7576: 648-661.
Abstract: We propose modifying the aperture of a conventional color camera so that the effective aperture size for one color channel is smaller than that for the other two. This produces an image where different color channels have different depths-of-field, and from this we can computationally recover scene depth, reconstruct an all-focus image and achieve synthetic re-focusing, all from a single shot. These capabilities are enabled by a spatio-spectral image model that encodes the statistical relationship between gradient profiles across color channels. This approach substantially improves depth accuracy over alternative single-shot coded-aperture designs, and since it avoids introducing additional spatial distortions and is light efficient, it allows high-quality deblurring and lower exposure times. We demonstrate these benefits with comparisons on synthetic data, as well as results on images captured with a prototype lens.
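The core intuition in the abstract — that a channel captured through a smaller effective aperture stays sharper than the others, and that comparing gradient profiles across channels reveals defocus (and hence depth) — can be illustrated with a toy sketch. This is not the paper's actual spatio-spectral model; the Gaussian blur kernel, the gradient-energy sharpness proxy, and the simulated step edge are all assumptions made purely for illustration:

```python
import numpy as np

def gradient_energy(signal):
    """Mean squared finite difference: a simple sharpness proxy."""
    return np.mean(np.diff(signal) ** 2)

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel, truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

# Simulate a step edge seen by two color channels: the hypothetical
# narrow-aperture channel stays sharp, while the wide-aperture channel
# is defocused (modeled here as a Gaussian blur of sigma = 3 pixels).
edge = np.zeros(101)
edge[50:] = 1.0
sharp = edge
blurred = np.convolve(edge, gaussian_kernel(3.0), mode="same")

# A larger ratio of gradient energies means the wide-aperture channel
# lost more high-frequency content, i.e. more defocus; with calibration,
# the amount of relative blur maps to scene depth.
ratio = gradient_energy(sharp) / gradient_energy(blurred)
print(ratio > 1.0)  # True: the defocused channel has lower gradient energy
```

In the paper itself, a learned statistical model of gradient profiles across color channels plays the role of this crude energy ratio, which is what enables per-pixel depth recovery and deblurring from a single shot.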
Terms of Use: This article is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#OAP
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:12006818