Segmentation fusion for connectomics
Author
Vazquez-Reina, Amelio
Miller, Eric
Published Version
https://doi.org/10.1109/iccv.2011.6126240
Citation
Vazquez-Reina, Amelio, Michael Gelbart, Daniel Huang, Jeff Lichtman, Eric Miller, and Hanspeter Pfister. 2011. "Segmentation Fusion for Connectomics." IEEE International Conference on Computer Vision: 177-184, Barcelona, Spain, November 6-13, 2011.
Abstract
We address the problem of automatic 3D segmentation of a stack of electron microscopy sections of brain tissue. Unlike previous efforts, where the reconstruction is usually done on a section-to-section basis, or by the agglomerative clustering of 2D segments, we leverage information from the entire volume to obtain a globally optimal 3D segmentation. To do this, we formulate the segmentation as the solution to a fusion problem. We first enumerate multiple possible 2D segmentations for each section in the stack, and a set of 3D links that may connect segments across consecutive sections. We then identify the fusion of segments and links that provides the most globally consistent segmentation of the stack. We show that this two-step approach of pre-enumeration and posterior fusion yields significant advantages and provides state-of-the-art reconstruction results. Finally, as part of this method, we also introduce a robust rotationally-invariant set of features that we use to learn and enumerate the above 2D segmentations. Our features outperform previous connectomics-specific descriptors without relying on a large set of heuristics or manually designed filter banks.
Terms of Use
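The "fusion" step described in the abstract amounts to a global selection problem: given pre-enumerated candidate 2D segments (with scores) and candidate 3D links between segments in consecutive sections, pick the subset that maximizes total score while staying consistent. The sketch below illustrates that general idea with a brute-force solver; the segment names, scores, and conflict encoding are illustrative assumptions, not the paper's actual formulation, which solves the selection at scale rather than by enumeration of subsets.

```python
# Illustrative fusion-as-selection sketch (not the authors' implementation):
# choose candidate 2D segments and 3D links maximizing a total score, subject to
# two consistency rules: (1) mutually overlapping candidates exclude each other,
# (2) a 3D link counts only if both of its endpoint segments are selected.
from itertools import product

def fuse(segments, links, conflicts):
    """segments: {seg_id: score}; links: {(seg_a, seg_b): score};
    conflicts: set of frozensets of seg_ids that cannot co-occur."""
    best_score, best_choice = float("-inf"), None
    seg_ids = list(segments)
    for bits in product([0, 1], repeat=len(seg_ids)):
        chosen = {s for s, b in zip(seg_ids, bits) if b}
        # rule (1): skip selections containing a conflicting pair
        if any(c <= chosen for c in conflicts):
            continue
        score = sum(segments[s] for s in chosen)
        # rule (2): a link contributes only when both endpoints are kept
        score += sum(w for (a, b), w in links.items()
                     if a in chosen and b in chosen)
        if score > best_score:
            best_score, best_choice = score, chosen
    return best_choice, best_score

# Toy example: s1 and s2 are overlapping candidates for the same region in one
# section; s3 lies in the next section; one candidate link connects s1 to s3.
segments = {"s1": 1.0, "s2": 0.8, "s3": 0.5}
conflicts = {frozenset({"s1", "s2"})}
links = {("s1", "s3"): 0.7}
chosen, score = fuse(segments, links, conflicts)
print(chosen, score)  # the link makes {s1, s3} beat {s2, s3}
```

Brute force is exponential in the number of candidates, which is why a global fusion like the one in the paper must instead be posed as a tractable optimization over the whole stack.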
This article is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#OAP
Citable link to this page
http://nrs.harvard.edu/urn-3:HUL.InstRepos:12374865
Collections
- FAS Scholarly Articles [18256]