Simulating the evolution and control of dynamical systems
Access Status: Full text of the requested work is not available in DASH at this time ("dark deposit"). For more information on dark deposits, see our FAQ.
Citation: Mishra, Shruti. 2021. Simulating the evolution and control of dynamical systems. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
Abstract: This dissertation presents a series of investigations into the evolution and control of dynamical systems, each made possible by the tools of numerical simulation. Our first study, described in Chapter 2, considers the passive fluid dynamical system of a liquid drop impacting a solid surface in the presence of an intervening layer of air. To understand the spatiotemporal evolution of this system, we develop a computational model that captures the essential dynamics, enabling us to carry out simulations that resolve the discrepancy between existing theory and experimental observations. Chapters 3–5 consider active dynamical systems, where we use reinforcement learning to develop control policies for agents navigating their respective environments. In Chapter 3, we consider the locomotion of a segmented crawler, with minimal descriptions of its neuronal system, musculature, and passive body mechanics, interacting with an external environment via friction. Embedding this system in a reinforcement learning algorithm, we recover the biological gait of peristaltic motion, which is ubiquitous in nature, both in the whole bodies of animals as they move and in their body parts as they execute bodily functions such as the digestion of food. In Chapter 4, we modify a state-of-the-art reinforcement learning algorithm to incorporate structure in the mechanics of a simulated quadruped, in the form of symmetry about the sagittal plane. In this study, we observe a speed-up in the learning of control policies via reinforcement learning when the stream of experiences is similar to that of a realistic embodied robot. Finally, in Chapter 5, we use multi-objective reinforcement learning to incorporate pre-existing knowledge of how to execute certain tasks while learning control policies for new tasks.
We observe that the achievable speed-up in the learning of policies naturally depends on the relationship between the pre-existing policies and the success metrics for the new task. We also observe that control policies learned with pre-existing knowledge bear a closer resemblance to the pre-existing policies than would otherwise be the case.
Citable link to this page: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37370217
- FAS Theses and Dissertations