Publication: Simulating the evolution and control of dynamical systems
Date
2021-11-16
The Harvard community has made this article openly available.
Citation
Mishra, Shruti. 2021. Simulating the evolution and control of dynamical systems. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
Abstract
This dissertation presents a series of investigations into the evolution and control of dynamical systems, made possible by the tools of numerical simulation. Our first study, described in Chapter 2, considers the passive fluid dynamical system of a liquid drop impacting a solid surface in the presence of an intervening layer of air. Toward understanding the spatiotemporal evolution of this system, we develop a computational model that captures the essential dynamics, enabling us to carry out simulations that resolve the discrepancy between existing theory and experimental observations. Chapters 3–5 consider active dynamical systems, where we use reinforcement learning to develop control policies for agents navigating their respective environments. In Chapter 3, we consider the locomotion of a segmented crawler, with minimal descriptions of its neuronal system, musculature, and passive body mechanics, interacting with an external environment via friction. Embedding this system in a reinforcement learning algorithm, we recover the biological gait of peristaltic motion that is ubiquitous in nature, both in the whole bodies of animals as they move and in their body parts as they execute bodily functions, such as the digestion of food. In Chapter 4, we modify a state-of-the-art reinforcement learning algorithm to incorporate structure in the mechanics of a simulated quadruped, in the form of symmetry about the sagittal plane. In this study, we observe a speed-up in the learning of control policies via reinforcement learning when the stream of experiences is similar to that of a realistic embodied robot. Finally, in Chapter 5, we use multi-objective reinforcement learning to incorporate pre-existing knowledge of how to execute certain tasks while learning control policies for new tasks.
We observe that the achievable speed-up in policy learning naturally depends on the relationship between the pre-existing policies and the success metrics for the new task. We also observe that control policies learned with pre-existing knowledge bear a closer resemblance to the pre-existing policies than they otherwise would.
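The sagittal-symmetry idea from Chapter 4 can be illustrated with a minimal sketch of mirrored-experience augmentation, one common way to exploit bilateral symmetry in reinforcement learning. This is not the dissertation's actual algorithm; the index permutations and sign masks below are hypothetical placeholders for how a quadruped's left/right joints might be paired.

```python
import numpy as np

def mirror(vec, perm, signs):
    """Reflect a state or action vector about the sagittal plane:
    permute left/right-paired components, then flip the sign of
    laterally antisymmetric ones (e.g. lateral velocities)."""
    return vec[perm] * signs

def augment(transitions, s_perm, s_signs, a_perm, a_signs):
    """Double a batch of (state, action, reward, next_state) transitions
    with their mirror images. Rewards are left unchanged, assuming the
    task objective (e.g. forward speed) is symmetric under reflection."""
    out = list(transitions)
    for s, a, r, s2 in transitions:
        out.append((mirror(s, s_perm, s_signs),
                    mirror(a, a_perm, a_signs),
                    r,
                    mirror(s2, s_perm, s_signs)))
    return out
```

Because the reflection is an involution (applying it twice returns the original vector when the permutation pairs indices and the sign mask is consistent), the augmented replay data stays physically valid, effectively doubling the experience stream at no simulation cost.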
Keywords
Applied mathematics
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.