On Nonconvex Optimization for Machine Learning
Open Access
- 24 February 2021
- Research article
- Published by Association for Computing Machinery (ACM) in Journal of the ACM
- Vol. 68 (2), 1-29
- https://doi.org/10.1145/3418526
Abstract
Gradient descent (GD) and stochastic gradient descent (SGD) are the workhorses of large-scale machine learning. While classical theory focused on analyzing the performance of these methods in convex optimization problems, the most notable successes in machine learning have involved nonconvex optimization, and a gap has arisen between theory and practice. Indeed, traditional analyses of GD and SGD show that both algorithms converge to stationary points efficiently. But these analyses do not take into account the possibility of converging to saddle points. More recent theory has shown that GD and SGD can avoid saddle points, but the dependence on dimension in these analyses is polynomial. For modern machine learning, where the dimension can be in the millions, such dependence would be catastrophic. We analyze perturbed versions of GD and SGD and show that they are truly efficient: their dimension dependence is only polylogarithmic. Indeed, these algorithms converge to second-order stationary points in essentially the same time as they take to converge to classical first-order stationary points.
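As a concrete illustration of the kind of algorithm the abstract describes, below is a minimal Python sketch of perturbed gradient descent: ordinary gradient steps, plus an occasional random perturbation when the gradient is small, so the iterate can escape strict saddle points. The function signature, parameter names (grad_f, eta, r, g_thresh, t_thresh), and their default values are illustrative assumptions, not the paper's exact algorithm statement or constants.

```python
import numpy as np

def perturbed_gradient_descent(grad_f, x0, eta=1e-3, g_thresh=1e-4,
                               r=1e-3, t_thresh=50, max_iter=10_000):
    """Illustrative sketch of perturbed gradient descent.

    Takes plain gradient steps, but when the gradient is small (a candidate
    first-order stationary point) and no perturbation has been added recently,
    injects noise drawn uniformly from a ball of radius r so the iterate can
    slide off strict saddle points. All parameters are assumptions for the
    sketch, not the settings analyzed in the paper.
    """
    x = np.asarray(x0, dtype=float)
    last_perturb = -np.inf  # no perturbation added yet
    for t in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) <= g_thresh and t - last_perturb > t_thresh:
            # Sample uniformly from the ball of radius r:
            # random direction scaled by r * U^(1/d).
            xi = np.random.randn(*x.shape)
            xi *= r * np.random.rand() ** (1.0 / x.size) / np.linalg.norm(xi)
            x = x + xi
            last_perturb = t
        else:
            x = x - eta * g  # standard gradient step
    return x
```

For example, on f(x) = x1^2 - x2^2 with an initial point on the x2 = 0 axis, plain gradient descent converges to the saddle at the origin and stalls there, whereas the occasional perturbation pushes the iterate off that axis and lets it follow the negative-curvature direction.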
Funding Information
- Mathematical Data Science program of the Office of Naval Research (N00014-18-1-2764)
- NSF (CCF-1704656)