
Statistics > Machine Learning

[Submitted on 5 Jun 2018 (v1), last revised 19 Apr 2021 (this version, v8)]

Title: AdaGrad stepsizes: Sharp convergence over nonconvex landscapes

Authors: Rachel Ward, Xiaoxia Wu, Leon Bottou
Abstract: Adaptive gradient methods such as AdaGrad and its variants update the stepsize in stochastic gradient descent on the fly according to the gradients received along the way; such methods have gained widespread use in large-scale optimization for their ability to converge robustly, without the need to fine-tune the stepsize schedule. Yet, the theoretical guarantees to date for AdaGrad are for online and convex optimization. We bridge this gap by providing theoretical guarantees for the convergence of AdaGrad for smooth, nonconvex functions. We show that the norm version of AdaGrad (AdaGrad-Norm) converges to a stationary point at the O(log(N)/√N) rate in the stochastic setting, and at the optimal O(1/N) rate in the batch (non-stochastic) setting -- in this sense, our convergence guarantees are 'sharp'. In particular, the convergence of AdaGrad-Norm is robust to the choice of all hyper-parameters of the algorithm, in contrast to stochastic gradient descent whose convergence depends crucially on tuning the step-size to the (generally unknown) Lipschitz smoothness constant and level of stochastic noise on the gradient. Extensive numerical experiments are provided to corroborate our theory; moreover, the experiments suggest that the robustness of AdaGrad-Norm extends to state-of-the-art models in deep learning, without sacrificing generalization.
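
For readers unfamiliar with the norm version of AdaGrad, the following minimal Python sketch illustrates the kind of update the abstract refers to: a single scalar stepsize, shared by all coordinates, is adapted from the running sum of squared gradient norms. The function and parameter names (grad_fn, eta, b0, num_steps) are illustrative assumptions, not the authors' notation or reference implementation.

import numpy as np

def adagrad_norm(grad_fn, x0, eta=1.0, b0=1e-2, num_steps=1000):
    # Illustrative AdaGrad-Norm sketch (not the paper's reference code):
    # one scalar stepsize is adapted from accumulated squared gradient norms.
    x = np.asarray(x0, dtype=float)
    b_sq = b0 ** 2                         # accumulator, initialized at b0^2
    for _ in range(num_steps):
        g = grad_fn(x)                     # stochastic or full gradient at x
        b_sq += float(np.dot(g, g))        # add ||g||^2 to the accumulator
        x = x - (eta / np.sqrt(b_sq)) * g  # stepsize eta / sqrt(b_sq) requires
                                           # no tuning to smoothness or noise
    return x

# Toy usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is x itself.
x_final = adagrad_norm(lambda x: x, x0=np.ones(5))

Because the accumulator only grows, the effective stepsize shrinks automatically as gradients are received, which is the mechanism behind the robustness to hyper-parameter choice discussed in the abstract.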
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
Cite as: arXiv:1806.01811 [stat.ML]
  (or arXiv:1806.01811v8 [stat.ML] for this version)
  https://doi.org/10.48550/arXiv.1806.01811
arXiv-issued DOI via DataCite
Journal reference: Journal of Machine Learning Research, 21(219):1-30, 2020. http://jmlr.org/papers/v21/18-352.html

Submission history

From: Xiaoxia Wu
[v1] Tue, 5 Jun 2018 16:59:08 UTC (788 KB)
[v2] Thu, 7 Jun 2018 02:47:02 UTC (795 KB)
[v3] Sun, 10 Jun 2018 04:54:41 UTC (781 KB)
[v4] Thu, 14 Jun 2018 04:51:29 UTC (788 KB)
[v5] Thu, 21 Jun 2018 16:20:44 UTC (801 KB)
[v6] Wed, 10 Apr 2019 15:50:16 UTC (802 KB)
[v7] Sun, 21 Feb 2021 17:25:21 UTC (2,058 KB)
[v8] Mon, 19 Apr 2021 02:15:24 UTC (2,058 KB)