Publication: Understanding and Collapsing Symmetries in Neural Network Parameter Spaces
Date
2020-06-18
Authors
Sorensen, Hikari
The Harvard community has made this article openly available.
Citation
Sorensen, Hikari. 2020. Understanding and Collapsing Symmetries in Neural Network Parameter Spaces. Bachelor's thesis, Harvard College.
Abstract
It has been mentioned numerous times in the deep learning research field that neural network parameter spaces contain many redundancies. However, there seems to be little work that specifically addresses whence this redundancy arises, and those papers that do consider redundant parameterizations by and large address the matter from a statistical perspective, in terms of the frequency at which local optima sampled from the loss surface appear to have identical or near-identical loss values.
Here I consider the redundancy in neural network parameter spaces from a combinatorial perspective, as a matter of symmetries arising from permutations of nodes within layers of a neural network. Moreover, I present a way to identify networks that are equivalent under such symmetries by establishing a notion of a "universal basis" with respect to which networks can be uniquely expressed. This becomes of particular interest when considering weight averaging.
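To make the permutation symmetry concrete, the following minimal NumPy sketch (not taken from the thesis) shows that reordering the hidden units of a two-layer ReLU network, together with the matching reordering of its weight matrices, leaves the computed function unchanged. The network sizes, the ReLU nonlinearity, and the norm-based canonical ordering at the end are illustrative assumptions, not the thesis's "universal basis" construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer MLP: x -> ReLU(W1 x + b1) -> W2 h + b2
d_in, d_hidden, d_out = 4, 5, 3
W1 = rng.normal(size=(d_hidden, d_in))
b1 = rng.normal(size=d_hidden)
W2 = rng.normal(size=(d_out, d_hidden))
b2 = rng.normal(size=d_out)

def forward(W1, b1, W2, b2, x):
    h = np.maximum(W1 @ x + b1, 0.0)  # hidden activations
    return W2 @ h + b2

# Permute the hidden units: reorder rows of W1 and b1, and columns of W2.
perm = rng.permutation(d_hidden)
W1_p, b1_p, W2_p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=d_in)
# The permuted parameters compute exactly the same function.
assert np.allclose(forward(W1, b1, W2, b2, x),
                   forward(W1_p, b1_p, W2_p, b2, x))

# One illustrative canonicalization (hypothetical, not the thesis's method):
# sort hidden units by the norm of their incoming weight vectors, so that
# permutation-equivalent networks map to a common representative before
# their weights are averaged.
order = np.argsort(np.linalg.norm(W1, axis=1))
W1_c, b1_c, W2_c = W1[order], b1[order], W2[:, order]
```

Because the hidden-unit ordering is arbitrary, naively averaging the weights of two trained networks can average mismatched units; mapping each network to a common canonical ordering first, as sketched above, is one way such symmetries matter for weight averaging.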
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth at Terms of Service