Publication: Analyzing Brain Connectivity and Computing Machine Perception
Date
2019-04-25
Authors
Haehn, Daniel F.
Citation
Haehn, Daniel F. 2019. Analyzing Brain Connectivity and Computing Machine Perception. Doctoral dissertation, Harvard University, Graduate School of Arts & Sciences.
Abstract
Artificial intelligence is loosely inspired by neuroscientific discoveries. However, existing computational intelligence methods are fragile and do not generalize well. In contrast, the brain allows humans to reliably recognize, extrapolate, and classify enormous amounts of stimuli, seemingly without effort. This difference in performance is likely the result of limited architectural correspondences between neurobiology and current machine learning models.
This dissertation presents progress on the grand challenge of reducing the gap between natural and artificial intelligence: first, by investigating how bottom-up visual computing methods can aid brain connectivity analysis, and second, by exploring how top-down perceptual studies can increase our understanding of machine learning models.
Our first three contributions target connectomics, a research field that studies neurons and their connections using modern electron microscopes. The resulting image volumes can be petabytes in size, and we need automatic segmentation methods to label individual cells. However, all available segmentation algorithms produce errors, and their output, on average, requires hundreds of manual corrections per cubic micron of tissue. We introduce Dojo, web-based visual proofreading software that allows novice users to find and correct segmentation errors interactively. Our experiments show that the visual search for errors typically takes over 30 seconds. To reduce this time, we propose the Guided Proofreading method: a machine learning algorithm that automatically finds and recommends potential segmentation errors to the user. This system allows novices and experts to proofread with a series of yes/no decisions and results in 7.5x faster error correction. All steps of the general connectomics processing workflow require interactive visual exploration by researchers. To establish a common processing framework, we present the scalable Butterfly middleware, which unifies data management and storage, semantic queries, 2D and 3D visualization, interactive editing, and graph-based analysis for massive neurobiological datasets.
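To illustrate the interaction model only, the following minimal Python sketch shows how classifier-ranked error candidates can be resolved through a series of yes/no decisions; the names (rank_candidates, apply_correction, ask_user) are hypothetical placeholders, not the dissertation's actual API.

    # Sketch of a guided-proofreading loop. The classifier and correction
    # interfaces are placeholders; only the yes/no interaction pattern is shown.
    def guided_proofread(segmentation, error_classifier, ask_user, max_candidates=100):
        # Rank potential split/merge errors, most likely mistakes first.
        candidates = error_classifier.rank_candidates(segmentation)
        for candidate in candidates[:max_candidates]:
            if ask_user(candidate):
                # "Yes": apply the suggested correction to the labeling.
                segmentation = candidate.apply_correction(segmentation)
            # "No": keep the current labeling and show the next suggestion.
        return segmentation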
As the fourth contribution, we study machine perception and answer a series of questions by replicating Cleveland and McGill's seminal studies of human perception. We train neural networks to estimate elementary perceptual encodings including angles, curvature, volumes, and textures. We further evaluate the networks' perceptual capabilities with more complex visualizations such as pie charts, bar charts, and point clouds. Under limited circumstances, modern neural network architectures can meet or outperform human task performance, but in the majority of cases, they do not match human graphical perception capabilities.
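As a rough illustration of this experimental setup (not the specific architectures evaluated in the dissertation), a small convolutional network can be trained to regress a single encoded quantity, such as an angle, from a rasterized stimulus image; PyTorch and a 100x100 grayscale input are assumed here purely for the example.

    import torch
    import torch.nn as nn

    # Small CNN that maps a 100x100 grayscale stimulus to one scalar estimate,
    # e.g. the angle encoded in the image. Architecture chosen for brevity only.
    class PerceptionRegressor(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
            self.head = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 25 * 25, 64), nn.ReLU(), nn.Linear(64, 1))

        def forward(self, x):  # x: (batch, 1, 100, 100)
            return self.head(self.features(x))

    model = PerceptionRegressor()
    loss_fn = nn.MSELoss()  # regression against the true encoded value
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)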
Finally, we discuss the outlook and foreseeable limitations of biologically inspired artificial general intelligence.
Keywords
connectomics, brain connectivity, neuroscience, machine perception, artificial intelligence
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth at Terms of Service