Understanding and Collapsing Symmetries in Neural Network Parameter Spaces
Access Status: Full text of the requested work is not available in DASH at this time ("dark deposit").
Citation: Sorensen, Hikari. 2020. Understanding and Collapsing Symmetries in Neural Network Parameter Spaces. Bachelor's thesis, Harvard College.
Abstract: It has been noted numerous times in the deep learning literature that neural network parameter spaces contain many redundancies. However, little work addresses specifically whence this redundancy arises, and the papers that do consider redundant parameterizations by and large treat the matter statistically, in terms of how frequently local optima sampled from the loss surface appear to have identical or near-identical loss values.
Here I consider the redundancy in neural network parameter spaces from a combinatorial perspective, as a matter of symmetries among permutations of nodes within the layers of a neural network. Moreover, I present a way to identify networks that are symmetric in this sense by establishing a notion of a "universal basis" with respect to which networks can be uniquely expressed. This representation becomes of particular interest when considering weight averaging.
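
To make the permutation symmetry in the abstract concrete, below is a minimal NumPy sketch (not taken from the thesis). It shows that permuting the hidden units of a two-layer MLP, together with the matching rows and columns of the adjacent weight matrices, leaves the computed function unchanged, so the permuted parameters are a distinct but functionally identical point in parameter space. The canonicalize helper is a hypothetical stand-in for the "universal basis" idea: sorting hidden units by a fixed permutation-invariant key maps permutation-equivalent networks to one representative, which is what makes operations like weight averaging meaningful after alignment.

import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2):
    """Two-layer MLP: x -> relu(W1 @ x + b1) -> W2 @ h."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return W2 @ h

# Random parameters: 4 inputs, 5 hidden units, 3 outputs.
W1 = rng.normal(size=(5, 4))
b1 = rng.normal(size=5)
W2 = rng.normal(size=(3, 5))
x = rng.normal(size=4)

# Permute the hidden layer: reorder the rows of W1 (and entries of b1)
# and the columns of W2 by the same permutation.
perm = rng.permutation(5)
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

# Different parameter vectors, same function.
assert np.allclose(mlp(x, W1, b1, W2), mlp(x, W1p, b1p, W2p))

def canonicalize(W1, b1, W2):
    """Hypothetical canonical form: sort hidden units by the row norm
    of W1. Any permutation-invariant key works; ties (equal norms)
    would need extra care in a real implementation."""
    order = np.argsort(np.linalg.norm(W1, axis=1))
    return W1[order], b1[order], W2[:, order]

# Both equivalent networks collapse to the same representative, so
# weight averaging can be performed after alignment rather than naively.
c1 = canonicalize(W1, b1, W2)
c2 = canonicalize(W1p, b1p, W2p)
assert all(np.allclose(a, b) for a, b in zip(c1, c2))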
Citable link to this page: https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37364688
Collections: FAS Theses and Dissertations