Show simple item record

dc.contributor.advisor	Cash, Sydney S.
dc.contributor.author	Chan, Alexander Mark
dc.date.accessioned	2012-09-13T19:16:12Z
dash.embargo.terms	2014-06-21	en_US
dc.date.issued	2012-09-13
dc.date.submitted	2012
dc.identifier.citation	Chan, Alexander Mark. 2012. Extracting Spatiotemporal Word and Semantic Representations from Multiscale Neurophysiological Recordings in Humans. Doctoral dissertation, Harvard University.	en_US
dc.identifier.other	http://dissertations.umi.com/gsas.harvard:10251	en
dc.identifier.uri	http://nrs.harvard.edu/urn-3:HUL.InstRepos:9549941
dc.description.abstract	With the recent advent of neuroimaging techniques, the majority of research on the neural basis of language processing has focused on the localization of various lexical and semantic functions. Unfortunately, the limited temporal resolution of functional neuroimaging prevents a detailed analysis of the dynamics of word recognition, and the hemodynamic basis of these techniques prevents study of the underlying neurophysiology. Compounding this problem, current techniques for the analysis of high-dimensional neural data are mainly sensitive to large effects in a small area, preventing a thorough study of the distributed processing involved in representing semantic knowledge. This thesis demonstrates the use of multivariate machine-learning techniques for studying the neural representation of semantic and speech information in electro-/magneto-physiological recordings with high temporal resolution. Support vector machines (SVMs) allow the decoding of semantic category and word-specific information from non-invasive electroencephalography (EEG) and magnetoencephalography (MEG), demonstrating the consistent but spatially and temporally distributed nature of such information. Moreover, the anteroventral temporal lobe (avTL) may be important for coordinating these distributed representations, as supported by the presence of supramodal category-specific information in intracranial recordings from the avTL as early as 150 ms after auditory or visual word presentation. Finally, to study the inputs to this lexico-semantic system, recordings from a high-density microelectrode array in the anterior superior temporal gyrus (aSTG) are obtained, and the recorded spiking activity demonstrates the presence of single neurons that respond specifically to speech sounds. The successful decoding of word identity from this firing-rate information suggests that the aSTG may be involved in the population coding of acousto-phonetic speech information that likely lies on the pathway mapping speech sounds to meaning in the avTL. The feasibility of extracting semantic and phonological information from multichannel neural recordings using machine-learning techniques provides a powerful method for studying language with large datasets and has potential implications for the development of fast and intuitive communication prostheses.	en_US
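The abstract's core method can be illustrated with a minimal sketch: training a linear SVM to decode a binary category from multichannel features. This is not the dissertation's actual pipeline; the synthetic channel-by-time features, the Pegasos subgradient training rule, and all names below are illustrative assumptions.

```python
# Illustrative sketch only: linear SVM decoding of a binary category from
# synthetic multichannel features (stand-ins for channel x time EEG/MEG data).
# Trained with the Pegasos stochastic-subgradient rule on the hinge loss.
import random

random.seed(0)

N_CHANNELS, N_TIMES = 8, 10
DIM = N_CHANNELS * N_TIMES

def make_trial(label):
    # One trial: flattened channel-by-time features; the class signal is a
    # small mean shift buried in noise, mimicking a distributed effect.
    return [random.gauss(0.3 * label, 1.0) for _ in range(DIM)]

def train_svm(X, y, lam=0.01, epochs=50):
    # Pegasos: at step t, decay w toward zero (L2 penalty) and, when the
    # example violates the margin, add a scaled copy of it.
    w = [0.0] * DIM
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            for j in range(DIM):
                w[j] *= (1.0 - eta * lam)
                if margin < 1:
                    w[j] += eta * yi * xi[j]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Two word categories coded +1 / -1; hold out the last 40 trials for testing.
labels = [1, -1] * 100
trials = [make_trial(l) for l in labels]
w = train_svm(trials[:160], labels[:160])
test = list(zip(trials[160:], labels[160:]))
acc = sum(predict(w, x) == y for x, y in test) / len(test)
print(f"decoding accuracy: {acc:.2f}")
```

Because the class difference is spread weakly across all 80 features, single-feature tests would miss it, while the multivariate decoder pools the distributed evidence, which is the motivation the abstract gives for this approach.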
dc.description.sponsorship	Engineering and Applied Sciences	en_US
dc.language.iso	en_US	en_US
dash.license	LAA
dc.subject	decoding	en_US
dc.subject	language	en_US
dc.subject	machine learning	en_US
dc.subject	neuroscience	en_US
dc.subject	semantics	en_US
dc.subject	speech processing	en_US
dc.subject	neurosciences	en_US
dc.subject	biomedical engineering	en_US
dc.title	Extracting Spatiotemporal Word and Semantic Representations from Multiscale Neurophysiological Recordings in Humans	en_US
dc.type	Thesis or Dissertation	en_US
dc.date.available	2014-06-21T07:30:41Z
thesis.degree.date	2012	en_US
thesis.degree.discipline	Medical Engineering and Medical Physics and Engineering and Applied Sciences	en_US
thesis.degree.grantor	Harvard University	en_US
thesis.degree.level	doctoral	en_US
thesis.degree.name	Ph.D.	en_US
dc.contributor.committeeMember	Smith, Maurice	en_US
dc.contributor.committeeMember	Halgren, Eric	en_US
dc.contributor.committeeMember	Brockett, Roger	en_US

