Publication: Segmentation fusion for connectomics
Date
2011
Publisher
IEEE
Citation
Vazquez-Reina, Amelio, Michael Gelbart, Daniel Huang, Jeff Lichtman, Eric Miller, and Hanspeter Pfister. 2011. “Segmentation Fusion for Connectomics.” IEEE International Conference on Computer Vision: 177-184, Barcelona, Spain, November 6-13, 2011.
Abstract
We address the problem of automatic 3D segmentation of a stack of electron microscopy sections of brain tissue. Unlike previous efforts, where the reconstruction is usually done on a section-to-section basis or by agglomerative clustering of 2D segments, we leverage information from the entire volume to obtain a globally optimal 3D segmentation. To do this, we formulate the segmentation as the solution to a fusion problem. We first enumerate multiple possible 2D segmentations for each section in the stack, along with a set of 3D links that may connect segments across consecutive sections. We then identify the fusion of segments and links that provides the most globally consistent segmentation of the stack. We show that this two-step approach of pre-enumeration and subsequent fusion yields significant advantages and provides state-of-the-art reconstruction results. Finally, as part of this method, we introduce a robust, rotationally invariant set of features that we use to learn and enumerate the above 2D segmentations. Our features outperform previous connectomics-specific descriptors without relying on a large set of heuristics or manually designed filter banks.
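To make the fusion step more concrete, the sketch below poses the selection of pre-enumerated 2D segments and 3D links as a small binary linear program solved with PuLP. This is an illustrative reading of the abstract, not the paper's exact formulation: the candidate scores, the mutual-exclusion constraint between overlapping 2D candidates, and the rule that a link is active only if both endpoint segments are selected are all assumptions made for the example.

```python
# Illustrative sketch (assumed formulation, not the authors' exact model):
# one binary variable per candidate 2D segment and per candidate 3D link,
# maximize the total score subject to simple consistency constraints.
import pulp

# Hypothetical candidates: (section, segment_id) -> score from some classifier.
segments = {
    (0, "a"): 0.9, (0, "b"): 0.4,   # "a" and "b" overlap in section 0
    (1, "c"): 0.8,
}
# Candidate 3D links between segments in consecutive sections, with scores.
links = {
    ((0, "a"), (1, "c")): 0.7,
    ((0, "b"), (1, "c")): 0.3,
}
overlapping = [[(0, "a"), (0, "b")]]   # mutually exclusive 2D candidates

prob = pulp.LpProblem("segmentation_fusion", pulp.LpMaximize)
x = {s: pulp.LpVariable(f"seg_{i}", cat="Binary") for i, s in enumerate(segments)}
y = {l: pulp.LpVariable(f"link_{i}", cat="Binary") for i, l in enumerate(links)}

# Objective: total score of the selected segments and links.
prob += (pulp.lpSum(segments[s] * x[s] for s in segments)
         + pulp.lpSum(links[l] * y[l] for l in links))

# A link can only be active if both of its endpoint segments are selected.
for (s_from, s_to), var in y.items():
    prob += var <= x[s_from]
    prob += var <= x[s_to]

# Overlapping candidate segments within a section are mutually exclusive.
for group in overlapping:
    prob += pulp.lpSum(x[s] for s in group) <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen_segments = [s for s in segments if x[s].value() > 0.5]
chosen_links = [l for l in links if y[l].value() > 0.5]
print("selected segments:", chosen_segments)
print("selected links:", chosen_links)
```

In this toy instance the solver keeps segment (0, "a"), drops the overlapping candidate (0, "b"), and activates the link to (1, "c"), which mirrors the idea of choosing the most globally consistent combination of pre-enumerated hypotheses rather than committing to one 2D segmentation per section up front.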
Terms of Use
This article is made available under the terms and conditions applicable to Open Access Policy Articles (OAP), as set forth at Terms of Service