dc.contributor.advisor    Ba, Demba
dc.contributor.author    Tankala, Pranay Bennett
dc.date.accessioned    2021-07-19T04:09:40Z
dc.date.created    2021
dc.date.issued    2021-06-04
dc.date.submitted    2021
dc.identifier.citation    Tankala, Pranay Bennett. 2021. Clustering via Deep Dictionary Learning. Bachelor's thesis, Harvard College.
dc.identifier.other    28410942
dc.identifier.uri    https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37368572
dc.description.abstract    Clustering, the process of discovering hidden groups in data, is a fundamentally important problem in statistical data analysis, with applications ranging from image segmentation to sequence analysis to anomaly detection. Although clustering algorithms have traditionally drawn on techniques from statistics, optimization, and linear algebra, in recent years they have increasingly relied on deep learning to achieve state-of-the-art performance. Deep learning, the subfield of machine learning based on artificial neural networks, has recently revolutionized a wide range of scientific disciplines. Unfortunately, the striking gains in accuracy and efficiency achieved by modern deep clustering algorithms have come at the cost of transparency and interpretability, with neural networks often making predictions and learning weights that are difficult to analyze rigorously. In this work, we attempt to bridge the gap between state-of-the-art deep clustering methods and older but better-understood optimization algorithms for clustering. To this end, we propose, analyze, and evaluate novel, domain-agnostic clustering algorithms that offer the performance gains of deep learning while retaining the interpretability of existing clustering algorithms based on spectral methods and sparse self-representation. In contrast to black-box deep learning approaches, our algorithms facilitate rigorous analysis and provable performance guarantees. Our first contribution is a novel theoretical analysis of a clustering algorithm developed by Theodosis et al. (2021), which uses a highly structured recurrent neural network to perform subspace clustering, the task of partitioning a data set into clusters corresponding to low-dimensional linear subspaces of a high-dimensional ambient space. Using analytical tools from convex optimization and compressed sensing, we relate the convergence of this network's hidden states to properties of the underlying data distribution. We also propose a novel manifold clustering algorithm (Tankala et al., 2021) that draws on techniques from sparse dictionary learning and nonlinear dimensionality reduction. Our method is based on algorithm unrolling, an emerging technique that uses deep learning to accelerate iterative optimization algorithms. Our algorithm enjoys substantial computational advantages over related approaches and is both interpretable and flexible. We also prove theoretical performance guarantees and conduct experiments showing that our algorithm is highly efficient and performs competitively on synthetic and real-world data sets.
dc.format.mimetype    application/pdf
dc.language.iso    en
dash.license    LAA
dc.subject    Computer science
dc.subject    Mathematics
dc.title    Clustering via Deep Dictionary Learning
dc.type    Thesis or Dissertation
dash.depositing.author    Tankala, Pranay Bennett
dc.date.available    2021-07-19T04:09:40Z
thesis.degree.date    2021
thesis.degree.grantor    Harvard College
thesis.degree.level    Bachelor's
thesis.degree.level    Undergraduate
thesis.degree.name    AB
dc.contributor.committeeMember    Barak, Boaz
dc.contributor.committeeMember    Pehlevan, Cengiz
dc.type.material    text
thesis.degree.department    Computer Science
dc.identifier.orcid    0000-0002-4424-0853
dash.author.email    pbt2017@gmail.com
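
The abstract above contrasts the thesis's unrolled, dictionary-learning-based methods with the classical family of clustering algorithms built on sparse self-representation. For readers unfamiliar with that baseline, the following is a minimal illustrative sketch in Python of sparse-self-representation (subspace) clustering, not the thesis's algorithm; the regularization weight, function name, and synthetic two-line example are assumptions chosen purely for illustration.

# Illustrative sketch only (assumed parameters, not the thesis's method):
# classical sparse self-representation for subspace clustering. Each point
# x_i is approximated as a sparse combination of the other points; the
# resulting coefficients define an affinity graph for spectral clustering.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_self_representation_clustering(X, n_clusters, alpha=0.01):
    """X: (n_samples, n_features) data matrix."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Solve min_c 0.5 * ||x_i - X_{-i}^T c||_2^2 + alpha * ||c||_1,
        # excluding x_i itself from its own representation.
        mask = np.arange(n) != i
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        lasso.fit(X[mask].T, X[i])
        C[i, mask] = lasso.coef_
    # Symmetrize the coefficient magnitudes into an affinity matrix.
    W = np.abs(C) + np.abs(C).T
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed",
        assign_labels="kmeans", random_state=0,
    ).fit_predict(W)
    return labels

# Toy example: points drawn from two 1-D subspaces (lines) in 3-D space.
rng = np.random.default_rng(0)
U1, U2 = rng.normal(size=(3, 1)), rng.normal(size=(3, 1))
X = np.vstack([(U1 @ rng.normal(size=(1, 40))).T,
               (U2 @ rng.normal(size=(1, 40))).T])
print(sparse_self_representation_clustering(X, n_clusters=2)[:10])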

