Publication: Learning World Dynamics With Structured Representations in a Computational Model of Event Cognition
Date
2018-06-29
The Harvard community has made this article openly available.
Abstract
The segmentation of the sensory stream into discrete events is a central process in cognition, with implications for memory, learning, and attention. Event cognition can be modeled computationally as the process of inferring underlying events from an unlabeled sequence of scenes, learning the dynamics of different classes of events using the event schema abstraction, and using trained event schemata to predict future scenes and recall past events. In this paper we explore the problem of representing scenes and event schemata so that an event schema can be trained on sequences of generated scenes to learn the dynamics of a single class of events, providing a benchmark for the segmentation process. In particular, scenes are encoded with structured representations to capture their semantic content, allowing for the decodable vectorization of complex semantic relationships. Event schemata, which are transition functions that learn event dynamics, are represented as recurrent neural networks that operate on sequences of observed scene vectors to predict the next observed scene. This paper addresses problems in scene representation, event schema architecture, and training methods, arriving at a framework for learning world dynamics that can be used as a foundation for event segmentation models.
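The core idea in the abstract — an event schema as a recurrent transition function over scene vectors — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the dimensions, the plain Elman-style recurrence, and the helper name `predict_next` are all illustrative.

```python
import numpy as np

# Hypothetical sketch of an "event schema" as a simple recurrent network:
# it consumes a sequence of observed scene vectors and emits a prediction
# for the next scene. All sizes and weights here are illustrative.

rng = np.random.default_rng(0)
D, H = 8, 16                      # scene-vector and hidden sizes (assumptions)
Wxh = rng.normal(0.0, 0.1, (H, D))  # input-to-hidden weights
Whh = rng.normal(0.0, 0.1, (H, H))  # hidden-to-hidden (recurrent) weights
Why = rng.normal(0.0, 0.1, (D, H))  # hidden-to-output (next-scene) weights

def predict_next(scenes):
    """Run the recurrence over observed scene vectors; return the
    predicted vector for the next scene."""
    h = np.zeros(H)
    for x in scenes:
        h = np.tanh(Wxh @ x + Whh @ h)  # update event-schema state
    return Why @ h                      # decode state into a scene vector

# A short sequence of (random stand-in) scene vectors:
scenes = [rng.normal(size=D) for _ in range(5)]
pred = predict_next(scenes)  # a D-dimensional predicted next scene
```

In the paper's framing, training would adjust these weights so the prediction matches the next observed scene, and the decodable structured encoding would let the predicted vector be mapped back to semantic content; both steps are omitted here.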
Keywords
Computer Science, Psychology, Cognitive, Artificial Intelligence
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth at Terms of Service