Show simple item record

dc.contributor.advisor  Lu, Yue M.
dc.contributor.author  Dhifallah, Oussama
dc.date.accessioned  2022-03-18T04:24:33Z
dc.date.created  2022
dc.date.issued  2022-01-14
dc.date.submitted  2022-03
dc.identifier.citation  Dhifallah, Oussama. 2022. Estimation and Learning via Convex Optimization: Asymptotics, Phase Transitions, and New Algorithms. Doctoral dissertation, Harvard University Graduate School of Arts and Sciences.
dc.identifier.other  28963946
dc.identifier.uri  https://nrs.harvard.edu/URN-3:HUL.INSTREPOS:37371145
dc.description.abstract  Modern datasets contain large numbers of both samples and features. This regime presents new challenges and opportunities. One major challenge in working with high-dimensional datasets is the curse of dimensionality, one aspect of which is data sparsity: the number of samples required to guarantee reliable learning grows exponentially with the number of features. Despite this challenge, high dimensionality enables the design and application of effective asymptotic approaches that yield exact predictions. These precise predictions help compare different estimation and learning methods and identify valuable properties such as the phase transition phenomenon, in which a sharp performance transition occurs as a hyperparameter moves past a specific threshold. This asymptotic property is beneficial because it precisely characterizes the regime in which different estimation and learning algorithms succeed. Moreover, such precise characterizations can guide the design of novel estimation and learning algorithms. This dissertation considers the high-dimensional setting and exploits powerful asymptotic tools to derive precise analyses of convex formulations, with applications in signal estimation and statistical learning. In the first part of this dissertation, we present a precise analysis of a recently proposed convex formulation of a signal estimation problem known as phase retrieval. This scheme, known as PhaseMax, is computationally efficient compared to standard convex relaxation methods based on lifting techniques. We begin with an exact performance analysis of PhaseMax under generic datasets in the high-dimensional setting. In contrast to previously known performance bounds in the literature, our results are asymptotically exact, revealing a sharp phase transition phenomenon.
Furthermore, the geometrical insights gained from our analysis led us to a novel nonconvex formulation of the phase retrieval problem and an accompanying iterative algorithm based on successive linearization and maximization over a polytope. This new algorithm, which we call PhaseLamp, has provably superior recovery performance over the original PhaseMax method. The second part of this dissertation presents a precise theoretical analysis of popular convex statistical learning methods. We first consider general convex formulations for learning an unknown function using random feature models. Our main contribution is to characterize the asymptotic training and generalization errors corresponding to these formulations. We then turn to transfer learning methods, which are commonly used to improve the generalization performance of a target task by exploiting knowledge learned from a related source task. We analyze these methods by studying a pair of related convex perceptron learning tasks. Our analysis reveals a phase transition from negative to positive transfer as the similarity of the two tasks moves past a well-defined threshold. Finally, we present a theoretical study of a data augmentation method applied to a random feature model. This method injects artificial noise into the training data and is known to induce implicit regularization effects on the training process. Our analysis precisely characterizes the form of this regularization for convex learning formulations. In particular, we show that noise injection during training is equivalent to introducing a weighted ridge regularization as the number of noise injections tends to infinity.
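The PhaseMax formulation mentioned in the abstract can, in the real-valued case, be written as a linear program: maximize the inner product with an anchor vector subject to the magnitude constraints |a_i^T x| <= b_i. The sketch below illustrates this under illustrative assumptions (problem sizes, Gaussian measurements, and a noisy-copy anchor are all choices of this example, not the dissertation's setup), solved with SciPy's linprog.

```python
# Minimal real-valued PhaseMax sketch: recover a unit signal from
# magnitude-only Gaussian measurements via a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
d, m = 20, 120                       # signal dimension, number of measurements
x_star = rng.standard_normal(d)
x_star /= np.linalg.norm(x_star)     # unit-norm ground truth
A = rng.standard_normal((m, d))      # Gaussian sensing vectors
b = np.abs(A @ x_star)               # magnitude-only measurements

# Anchor: any vector sufficiently correlated with x_star (here, a noisy copy;
# how the anchor is obtained is outside the scope of this sketch).
anchor = x_star + 0.1 * rng.standard_normal(d)
anchor /= np.linalg.norm(anchor)

# PhaseMax: maximize <anchor, x> subject to |a_i^T x| <= b_i, i.e. a linear
# program over the polytope { x : -b <= A x <= b }.
res = linprog(c=-anchor,                       # linprog minimizes, so negate
              A_ub=np.vstack([A, -A]),
              b_ub=np.concatenate([b, b]),
              bounds=[(None, None)] * d)
x_hat = res.x
# Global sign is unidentifiable from magnitudes, so compare up to sign.
err = min(np.linalg.norm(x_hat - x_star), np.linalg.norm(x_hat + x_star))
```

With enough measurements per dimension and a well-correlated anchor, the LP solution coincides with the true signal up to sign, which is the sharp-phase-transition regime the abstract refers to.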
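The final claim of the abstract, that noise injection becomes a ridge-type regularizer as the number of injections grows, can be checked numerically in the simplest case of least squares with Gaussian input noise (a plain linear model rather than the dissertation's random feature model; all sizes and names below are illustrative assumptions).

```python
# Sketch: averaging the squared loss over many Gaussian noise injections
# into the inputs matches ridge regression with lambda = n * sigma^2, since
# E ||y - (X + E)w||^2 = ||y - Xw||^2 + n * sigma^2 * ||w||^2.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

sigma = 0.3                     # std of the injected input noise
lam = n * sigma**2              # equivalent ridge penalty
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Monte Carlo with finitely many injections: minimizing the averaged loss
# amounts to averaging the normal equations of each noisy problem.
K = 5000
G = np.zeros((d, d))
r = np.zeros(d)
for _ in range(K):
    Xk = X + sigma * rng.standard_normal((n, d))
    G += Xk.T @ Xk
    r += Xk.T @ y
w_noisy = np.linalg.solve(G / K, r / K)

print(np.max(np.abs(w_noisy - w_ridge)))  # small: the two solutions agree
```

As K grows, the averaged Gram matrix converges to X.T @ X + n * sigma**2 * I, which is exactly the ridge normal equation, consistent with the infinite-injection limit stated above.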
dc.format.mimetype  application/pdf
dc.language.iso  en
dash.license  LAA
dc.subject  Electrical engineering
dc.title  Estimation and Learning via Convex Optimization: Asymptotics, Phase Transitions, and New Algorithms
dc.type  Thesis or Dissertation
dash.depositing.author  Dhifallah, Oussama
dc.date.available  2022-03-18T04:24:33Z
thesis.degree.date  2022
thesis.degree.grantor  Harvard University Graduate School of Arts and Sciences
thesis.degree.level  Doctoral
thesis.degree.name  Ph.D.
dc.contributor.committeeMember  Calmon, Flavio
dc.contributor.committeeMember  Sen, Subhabrata
dc.type.material  text
thesis.degree.department  Engineering and Applied Sciences - Engineering Sciences
dc.identifier.orcid  0000-0002-8961-3553
dash.author.email  dhifallahoussama@gmail.com

