Publication:
Adding to and building up very small nervous systems

Date

2024-09-04

The Harvard community has made this article openly available.

Citation

Li, Chenguang. 2024. Adding to and building up very small nervous systems. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.

Abstract

This dissertation asks, and attempts to answer, two questions: first, how might one add to the living nervous system of a small animal using artificial methods? Second, how might one design an algorithm that reproduces some of the hallmark characteristics of natural intelligence, many of which are still missing from current artificial neural networks?

To address the first question, we present an approach that integrates deep reinforcement learning agents with the nervous system of the model organism Caenorhabditis elegans, a nematode with 302 neurons. We connect the artificial and biological networks using optogenetics and design our artificial agent to navigate animals to specified targets. We find that agents can learn appropriate strategies for different sets of neurons, even when the neurons' roles in behavior differ substantially. We show several possible applications of the reinforcement learning (RL)-C. elegans system, including mapping neural policies that are sufficient to drive target behaviors, studying the behavior of RL agents in biologically relevant environments, and accomplishing goals that draw on the strengths of both artificial and biological intelligence.

To answer the second question, we adopt a view of nervous systems in which behaviors and computations emerge from many small interacting components. We combine this emergent view with three other strongly supported features of all nervous systems: neural function relies heavily on predictive principles, neurons are noisy, and many diverse and fundamental computations in the brain are implemented via attractor dynamics. The resulting model uses only local prediction and noise updates, yet it can seek out and optimize reward, balance exploration and exploitation, and adapt to dramatic changes in architecture or environment. When networks have a choice between different tasks, they can form preferences that depend on patterns of noise and initialization, and we show that these preferences can be biased by network architecture or by changing learning rates. Our algorithm offers a flexible, biologically plausible way of interacting with environments without requiring an explicit environmental reward function, allowing behavior that is both highly adaptable and autonomous.
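
As a rough, hypothetical illustration of the first approach (an RL agent that observes a tracked animal and issues optogenetic stimulation commands to steer it toward a target), the Python sketch below pairs a toy environment with tabular Q-learning. The SteeringEnvStub dynamics, the bearing discretization, and the use of tabular Q-learning in place of deep RL are all illustrative assumptions, not the dissertation's actual code or interfaces.

```python
# Hypothetical sketch of a closed-loop RL/optogenetics setup.
# Nothing here is the dissertation's code; it only shows the shape of the loop.
import numpy as np

class SteeringEnvStub:
    """Toy stand-in for a tracking microscope plus optogenetics rig."""
    def __init__(self, target=(1.0, 0.0)):
        self.target = np.array(target, dtype=float)
        self.pos = np.zeros(2)
        self.heading = 0.0
        self.rng = np.random.default_rng()

    def observe(self):
        # State: distance to target and target bearing relative to heading.
        delta = self.target - self.pos
        bearing = np.arctan2(delta[1], delta[0]) - self.heading
        return np.linalg.norm(delta), bearing

    def step(self, light_on):
        # Toy dynamics: stimulation biases turning; the animal always moves forward.
        self.heading += 0.3 if light_on else self.rng.normal(0.0, 0.1)
        self.pos += 0.05 * np.array([np.cos(self.heading), np.sin(self.heading)])
        dist, bearing = self.observe()
        return (dist, bearing), -dist  # reward shrinking distance to target

def train(episodes=200, bins=8, eps=0.1, alpha=0.1, gamma=0.95):
    """Tabular Q-learning over a discretized bearing; actions are light off/on."""
    Q = np.zeros((bins, 2))
    rng = np.random.default_rng(0)

    def bucket(bearing):
        # Map a bearing in radians to one of `bins` discrete states.
        return int(((bearing % (2 * np.pi)) / (2 * np.pi)) * bins) % bins

    for _ in range(episodes):
        env = SteeringEnvStub()
        s = bucket(env.observe()[1])
        for _ in range(100):
            a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
            (dist, bearing), r = env.step(bool(a))
            s2 = bucket(bearing)
            Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
            s = s2
    return Q
```

In the real system described in the abstract, the stub would be replaced by live animal tracking, optogenetic stimulation hardware, and deep RL agents; the learned table here merely maps each relative-bearing bin to a light-on or light-off decision.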
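The second set of ingredients (local predictive updates, noisy neurons, attractor dynamics) can likewise be caricatured in a few lines. The sketch below is a guess at the flavor of such a rule, not the dissertation's algorithm: each unit's recurrent drive acts as a prediction of its next activity, weights change only in proportion to the local prediction error, and the injected noise grows when prediction fails (exploration) and shrinks when it succeeds (settling toward an attractor). Reward-seeking and task preferences are not modeled here, and the specific constants are arbitrary.

```python
# Minimal, hypothetical sketch of a "local prediction plus noise" update rule.
import numpy as np

rng = np.random.default_rng(0)
n = 20                            # network size (arbitrary)
W = rng.normal(0.0, 0.1, (n, n))  # recurrent weights
x = rng.normal(0.0, 1.0, n)       # unit activities
noise_scale = 0.5                 # global exploration noise level

for t in range(1000):
    drive = W @ x                 # each unit's prediction of its next input
    x_next = np.tanh(drive) + noise_scale * rng.normal(0.0, 1.0, n)
    err = x_next - drive          # purely local prediction error per unit
    # Delta-rule update: postsynaptic error times presynaptic activity,
    # which moves each unit's prediction toward its realized activity.
    W += 0.01 * np.outer(err, x)
    # Noise shrinks when the network predicts itself well and grows otherwise;
    # the 0.2 target error level is an arbitrary choice for this sketch.
    noise_scale = float(np.clip(noise_scale + 0.05 * (np.abs(err).mean() - 0.2),
                                0.01, 1.0))
    x = x_next
```

Every quantity each unit needs (its own drive, its own realized activity, its presynaptic inputs) is locally available, which is the biological-plausibility property the abstract emphasizes.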

Keywords

Caenorhabditis elegans, emergence, intelligence, neural networks, reinforcement learning, Neurosciences, Artificial intelligence

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the Terms of Service.
