Unstructured lumigraph rendering

Title: Unstructured lumigraph rendering
Author: Gortler, Steven; Buehler, Chris; Bosse, Michael; McMillan, Leonard; Cohen, Michael F.

Note: Order does not necessarily reflect citation order of authors.

Citation: Buehler, Chris, Michael Bosse, Leonard McMillan, Steven J. Gortler, and Michael Cohen. 2001. Unstructured lumigraph rendering. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2001), August 12-17, 2001, Los Angeles, Calif., ed. SIGGRAPH and Eugene L. Fiume, 425-432. New York, N.Y.: Association for Computing Machinery.
Abstract: We describe an image based rendering approach that generalizes many current image based rendering algorithms, including light field rendering and view-dependent texture mapping. In particular, it allows for lumigraph-style rendering from a set of input cameras in arbitrary configurations (i.e., not restricted to a plane or to any specific manifold). In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. When presented with fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. The algorithm achieves this flexibility because it is designed to meet a set of specific goals that we describe. We demonstrate this flexibility with a variety of examples.
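The flexibility described in the abstract comes from blending input cameras per desired ray according to how well each camera observes that ray. As a rough illustration only (not the authors' implementation), the sketch below computes k-nearest angular blending weights with an adaptive threshold, so weights fall smoothly to zero at the edge of the selection; the function name and the choice of a purely angular penalty are assumptions for this example.

```python
import numpy as np

def ulr_blend_weights(desired_ray, camera_rays, k=3):
    """Illustrative k-nearest angular blending (hypothetical simplification).

    desired_ray: unit 3-vector of the ray to be rendered.
    camera_rays: (N, 3) unit vectors from the surface point toward each input camera.
    Returns per-camera weights summing to 1, non-zero only for the
    k angularly closest cameras.
    """
    # Angular penalty: angle between the desired ray and each camera's ray.
    cos_sim = np.clip(camera_rays @ desired_ray, -1.0, 1.0)
    penalty = np.arccos(cos_sim)

    # The (k+1)-th smallest penalty serves as an adaptive threshold, so a
    # camera's weight reaches exactly zero as it leaves the k-nearest set.
    order = np.argsort(penalty)
    thresh = penalty[order[k]] if len(penalty) > k else penalty.max() + 1e-6

    w = np.zeros(len(penalty))
    sel = order[:k]
    w[sel] = 1.0 - penalty[sel] / thresh
    w = np.clip(w, 0.0, None)
    return w / w.sum()
```

With many cameras and small k this behaves like light-field interpolation between nearby views; with few cameras and good geometry the same weights concentrate on the best-aligned view, in the spirit of view-dependent texture mapping.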
Published Version: http://dx.doi.org/10.1145/383259.383309
Terms of Use: This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:2641679