Show simple item record

dc.contributor.advisor: Valiant, Leslie G. (en_US)
dc.contributor.advisor: Adams, Ryan P. (en_US)
dc.contributor.author: Linderman, Scott Warren (en_US)
dc.date.accessioned: 2017-07-25T14:40:05Z
dc.date.created: 2016-05 (en_US)
dc.date.issued: 2016-05-18 (en_US)
dc.date.submitted: 2016 (en_US)
dc.identifier.citation: Linderman, Scott Warren. 2016. Bayesian Methods for Discovering Structure in Neural Spike Trains. Doctoral dissertation, Harvard University, Graduate School of Arts & Sciences. (en_US)
dc.identifier.uri: http://nrs.harvard.edu/urn-3:HUL.InstRepos:33493391
dc.description.abstract: Neuroscience is entering an exciting new age. Modern recording technologies enable simultaneous measurements of thousands of neurons in organisms performing complex behaviors. Such recordings offer an unprecedented opportunity to glean insight into the mechanistic underpinnings of intelligence, but they also present an extraordinary statistical and computational challenge: how do we make sense of these large-scale recordings? This thesis develops a suite of tools that instantiate hypotheses about neural computation in the form of probabilistic models, along with a corresponding set of Bayesian inference algorithms that efficiently fit these models to neural spike trains. From the posterior distribution over model parameters and variables, we seek to advance our understanding of how the brain works. Concretely, the challenge is to hypothesize latent structure in neural populations, encode that structure in a probabilistic model, and efficiently fit the model to neural spike trains. To surmount this challenge, we introduce a collection of structural motifs: the design patterns from which we construct interpretable models. In particular, we focus on random network models, which provide an intuitive bridge between the latent types and features of neurons and the temporal dynamics of neural populations. To reconcile these models with the discrete nature of spike trains, we build on the Hawkes process, a multivariate generalization of the Poisson process, and its discrete-time analogue, the linear autoregressive Poisson model. By leveraging the linear nature of these models and the Poisson superposition principle, we derive elegant auxiliary variable formulations and efficient inference algorithms. We then generalize these to nonlinear and nonstationary models of neural spike trains and take advantage of the Pólya-gamma augmentation to develop novel Markov chain Monte Carlo (MCMC) inference algorithms.
In a variety of real neural recordings, we show how our methods reveal interpretable structure underlying neural spike trains. In the latter chapters, we shift our focus from autoregressive models to latent state space models of neural activity. We perform an empirical study of Bayesian nonparametric methods for hidden Markov models of neural spike trains. Then, we develop an MCMC algorithm for switching linear dynamical systems with discrete observations, along with a novel algorithm for sampling Pólya-gamma random variables that enables efficient annealed importance sampling for model comparison. Finally, we consider the "Bayesian brain" hypothesis, the idea that neural circuits themselves perform Bayesian inference. We show how one particular implementation of this hypothesis implies autoregressive dynamics of the form studied in earlier chapters, thereby providing a theoretical interpretation of our probabilistic models. This closes the loop, connecting top-down theory with bottom-up inferences, and suggests a path toward translating large-scale recording capabilities into new insights about neural computation. (en_US)
dc.description.sponsorship: Engineering and Applied Sciences - Computer Science (en_US)
dc.format.mimetype: application/pdf (en_US)
dc.language.iso: en (en_US)
dash.license: LAA (en_US)
dc.subject: Computer Science (en_US)
dc.subject: Biology, Neuroscience (en_US)
dc.subject: Statistics (en_US)
dc.title: Bayesian Methods for Discovering Structure in Neural Spike Trains (en_US)
dc.type: Thesis or Dissertation (en_US)
dash.depositing.author: Linderman, Scott Warren (en_US)
dc.date.available: 2017-07-25T14:40:05Z
thesis.degree.date: 2016 (en_US)
thesis.degree.grantor: Graduate School of Arts & Sciences (en_US)
thesis.degree.level: Doctoral (en_US)
thesis.degree.name: Doctor of Philosophy (en_US)
dc.contributor.committeeMember: Sompolinsky, Haim (en_US)
dc.contributor.committeeMember: Gershman, Samuel J. (en_US)
dc.type.material: text (en_US)
thesis.degree.department: Engineering and Applied Sciences - Computer Science (en_US)
dash.identifier.vireo: http://etds.lib.harvard.edu/gsas/admin/view/702 (en_US)
dc.description.keywords: computational neuroscience; neural spike train; Bayesian inference; machine learning; Markov chain Monte Carlo; Bayesian brain; Polya-gamma; random network model (en_US)
dash.author.email: scott.linderman@gmail.com (en_US)
dash.identifier.orcid: 0000-0002-3878-9073 (en_US)
dash.contributor.affiliated: Linderman, Scott Warren
dc.identifier.orcid: 0000-0002-3878-9073

