Projection-Free and Accelerated Methods for Constrained Optimization and Saddle-Point Problems
Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
This dissertation investigates primal-dual optimization methods for solving nonconvex problems in both stochastic and deterministic settings. The proposed methods address challenges in feasibility, scalability, and computational efficiency, offering robust solutions with theoretical guarantees. In the first part of the dissertation, we propose an inexact-proximal accelerated gradient method for nonconvex stochastic composite optimization problems, where the objective function is composed of smooth and nonsmooth components and the constraints are deterministic. The proximal map for the nonsmooth part is computed inexactly at each iteration. By employing an increasing sample-size strategy and ensuring that the error in the proximal operator decreases at an appropriate rate, we establish a convergence rate guarantee in the stochastic setting. Furthermore, we extend the proposed method to stochastic nonconvex optimization problems with nonlinear constraints, again providing a convergence rate guarantee. In the second part, we focus on stochastic nonconvex-concave min-max saddle-point problems, which arise in machine learning applications such as distributionally robust optimization and adversarial learning. We examine a class of nonconvex saddle-point problems in which the objective function satisfies the Polyak-Łojasiewicz condition with respect to the minimization variable and is concave with respect to the maximization variable. To address these problems, we propose a novel single-loop accelerated primal-dual algorithm and establish its convergence rate. Finally, in the last part of this dissertation, we consider constrained nonconvex-concave saddle-point problems with applications in robust classification and dictionary learning. To overcome the limitations of projection-based methods, we propose efficient projection-free algorithms that leverage regularization and nested approximation techniques.
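To give a flavor of two ingredients named above, here is a minimal sketch, on a toy composite problem of our own choosing (not the dissertation's algorithm or setting), of a proximal gradient step whose prox map is computed only inexactly, combined with a minibatch gradient whose sample size grows over the iterations:

```python
import numpy as np

# Toy stochastic composite problem (an illustrative stand-in):
#   min_x  E_i[ 0.5*(a_i^T x - b_i)^2 ] + lam*||x||_1
rng = np.random.default_rng(0)
n, d, lam, step = 200, 5, 0.1, 0.05
A = rng.standard_normal((n, d))
x_true = np.array([1.0, -2.0, 0.0, 0.0, 3.0])
b = A @ x_true + 0.1 * rng.standard_normal(n)

def inexact_prox_l1(v, t, inner_iters):
    """Approximate prox of t*||.||_1 at v by a few subgradient steps on the
    prox subproblem; more inner iterations mean a smaller prox error."""
    z = v.copy()
    for _ in range(inner_iters):
        z -= 0.5 * ((z - v) + t * np.sign(z))
    return z

x = np.zeros(d)
for k in range(1, 60):
    batch = rng.choice(n, size=min(n, 4 * k), replace=False)  # increasing sample size
    g = A[batch].T @ (A[batch] @ x - b[batch]) / len(batch)   # minibatch gradient
    # the prox error is driven down as k grows, mirroring a decreasing-error schedule
    x = inexact_prox_l1(x - step * g, step * lam, inner_iters=5 + k)
```

The two knobs that the abstract's analysis controls jointly, the sample size and the prox error, both tighten with the iteration counter `k` here; all problem data and constants are hypothetical.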
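The projection-free idea can likewise be illustrated with a hypothetical stand-alone example (again not the dissertation's method): a conditional-gradient (Frank-Wolfe) loop replaces each projection with a call to a linear minimization oracle (LMO), which for sets like the l1 ball is much cheaper than projecting:

```python
import numpy as np

# Toy problem:  min_{||x||_1 <= 1}  0.5 * ||x - c||^2

def lmo_l1_ball(g, r):
    """argmin_{||s||_1 <= r} <g, s>: put all mass on the largest-|g| coordinate."""
    s = np.zeros_like(g)
    i = int(np.argmax(np.abs(g)))
    s[i] = -r * np.sign(g[i])
    return s

c = np.array([2.0, -1.0, 0.5])
x = np.zeros_like(c)
for k in range(200):
    g = x - c                    # gradient of the smooth objective
    s = lmo_l1_ball(g, 1.0)      # only a linear subproblem, no projection
    gamma = 2.0 / (k + 2)        # standard Frank-Wolfe step size
    x = x + gamma * (s - x)
# x converges to the constrained optimum (1, 0, 0)
```

Each iterate is a convex combination of LMO outputs, so feasibility is maintained without ever computing a projection; this is the basic mechanism that projection-free saddle-point methods build on.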
Assuming that the constraint set in the maximization is strongly convex, our method finds a stationary solution with a theoretical convergence guarantee. When the projection onto the maximization constraint set is easy to compute, we propose a one-sided projection-free method with improved convergence guarantees. Moreover, we show that our methods achieve improved convergence rates under a strong concavity assumption. Building on this framework, we extend our methods to the stochastic setting by incorporating a variance reduction technique. Our stochastic projection-free algorithm achieves theoretical convergence guarantees for both the primal and dual gaps in nonconvex-concave problems, with further improvements under a strong concavity assumption.
Type
text
Electronic Dissertation
Degree Name
Ph.D.
Degree Level
doctoral
Degree Program
Graduate College
Systems & Industrial Engineering
