Show simple item record

dc.contributor.advisor: King, Gary
dc.contributor.author: Lucas, Christopher
dc.date.accessioned: 2019-08-08T12:43:37Z
dash.embargo.terms: 2028-05-01
dc.date.created: 2018-05
dc.date.issued: 2018-05-12
dc.date.submitted: 2018
dc.identifier.citation: Lucas, Christopher. 2018. Three Models for Audio-Visual Data in Politics. Doctoral dissertation, Harvard University, Graduate School of Arts & Sciences.
dc.identifier.uri: http://nrs.harvard.edu/urn-3:HUL.InstRepos:41127192
dc.description.abstract: Audio-visual data is ubiquitous in politics. Campaign advertisements, political debates, and the news cycle all constantly generate sound bites and imagery, which in turn inform and affect voters. Though these sources of information have been a topic of research in political science for decades, their study has been limited by the cost of human coding. To name but one example, to answer questions about the effects of negative campaign advertisements, humans must watch tens of thousands of advertisements and manually label them. And even if the necessary resources can be mustered for such a study, future researchers may be interested in a different set of labels, and so must either recode every advertisement or discard the exercise entirely. Through three separate models, this dissertation resolves this limitation by developing automated methods to study the most common types of audio-visual data in political science. The first two models are neural networks; the third is a hierarchical hidden Markov model. In Chapter 1, I introduce neural networks, and the complications they raise, to political science, building up from familiar statistical methods. I then develop a novel neural network for classifying newspaper articles, using both the text of the article and the imagery as data. The model is applied to an original data set of articles about fake news, which I collected by developing and deploying bots to concurrently crawl the online pages of newspapers and download news text and images. This is a novel engineering effort that future researchers can leverage to collect effectively limitless amounts of data about the news. Building on the methodological foundations established in Chapter 1, in Chapter 2 I develop a second neural network for classifying political video and demonstrate that the model can automate the classification of campaign advertisements, using both the visual and the audio information. In Chapter 3 (joint with Dean Knox), I develop a hierarchical hidden Markov model for speech classification and demonstrate it with an application to speech on the Supreme Court. Finally, in Chapter 4 (joint with Volha Charnysh and Prerna Singh), I demonstrate the behavioral effects of imagery through a dictator game in which a visual image reduces out-group bias. In sum, this dissertation introduces a new type of data to political science, validates its substantive importance, and develops models for its study in the substantive context of politics.
dc.description.sponsorship: Government
dc.format.mimetype: application/pdf
dc.language.iso: en
dash.license: LAA
dc.subject: Politics
dc.title: Three Models for Audio-Visual Data in Politics
dc.type: Thesis or Dissertation
dash.depositing.author: Lucas, Christopher
dash.embargo.until: 2028-05-01
dc.date.available: 2019-08-08T12:43:37Z
thesis.degree.date: 2018
thesis.degree.grantor: Graduate School of Arts & Sciences
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
dc.contributor.committeeMember: Tingley, Dustin
dc.contributor.committeeMember: Zhou, Xiang
dc.type.material: text
thesis.degree.department: Government
dash.identifier.vireo
dash.author.email: lucas.christopherd@gmail.com

