|dc.description.abstract||The ease and grace of animal movements belie the remarkable challenge the motor system meets in controlling them. To maintain this fine control, the motor system must adapt to changes in the required movement dynamics over both the long term, such as a growing body, and the short term, such as muscle fatigue or wearing a jacket. Moreover, the motor system can learn entirely new dynamics, as it does when learning to use a new tool such as a spear or a tennis racquet. Each of these adaptations is encoded in a motor memory, but the stability of these memories remains unclear. Here, we took three approaches to examining the stability of motor memories.
First, we examined whether new motor memories are intrinsically stable or volatile. A recent theory proposes a bank of highly stable memories, each activated only in a specific context. We show that the evidence for this hypothesis was statistically biased and instead find that memories are intrinsically volatile and are applied across a range of contexts.
Second, we examined whether new motor memories consolidate to become more stable within 24h. In a novel approach to examining consolidation, we dissected motor memories into three components based on their intrinsic stability over time and through memory retrieval, then tracked those three components for 24h. We found that all three memory components retained their original stability properties after 24h rather than consolidating into more stable memories.
Finally, we examined whether reinforcement learning stabilizes motor memories, as has been proposed. However, when we limited the use of cognitive strategies in our tasks, reinforcement training produced no learning on its own and was unable to stabilize existing motor memories, suggesting that reinforcement does not influence the stability of human motor memories within a single training session.
Together, these results indicate that new motor memories are activated over a range of contexts and consist of three distinct components, two of which are intrinsically volatile and do not consolidate within 24h, and none of which achieves stability through reinforcement learning.||