Mathematics > Optimization and Control

[Submitted on 10 Jul 2017]

Title: Accelerated Stochastic Power Iteration

Authors: Christopher De Sa, Bryan He, Ioannis Mitliagkas, Christopher Ré, Peng Xu
Abstract: Principal component analysis (PCA) is one of the most powerful tools in machine learning. The simplest method for PCA, the power iteration, requires O(1/Δ) full-data passes to recover the principal component of a matrix with eigen-gap Δ. Lanczos, a significantly more complex method, achieves an accelerated rate of O(1/√Δ) passes. Modern applications, however, motivate methods that only ingest a subset of available data, known as the stochastic setting. In the online stochastic setting, simple algorithms like Oja's iteration achieve the optimal sample complexity O(σ²/Δ²). Unfortunately, they are fully sequential, and also require O(σ²/Δ²) iterations, far from the O(1/√Δ) rate of Lanczos. We propose a simple variant of the power iteration with an added momentum term that achieves both the optimal sample and iteration complexity. In the full-pass setting, standard analysis shows that momentum achieves the accelerated rate O(1/√Δ). We demonstrate empirically that naively applying momentum to a stochastic method does not result in acceleration. We perform a novel, tight variance analysis that reveals the "breaking-point variance" beyond which this acceleration does not occur. By combining this insight with modern variance reduction techniques, we construct stochastic PCA algorithms, for the online and offline settings, that achieve an accelerated iteration complexity of O(1/√Δ). Due to the embarrassingly parallel nature of our methods, this acceleration translates directly to wall-clock time if deployed in a parallel environment. Our approach is very general and applies to many non-convex optimization problems that can now be accelerated using the same technique.
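
For concreteness, here is a minimal sketch (not the authors' reference implementation) of the momentum update the abstract describes, w_{t+1} = A w_t − β w_{t−1}, together with a mini-batch stochastic variant. The function names, the joint rescaling, the batch-sampling scheme, and the demo matrix are illustrative assumptions; the tuning β = λ₂²/4 used in the demo is the standard heavy-ball choice for this recurrence, stated here as an assumption rather than quoted from the paper.

    import numpy as np

    def power_iteration_momentum(A, beta, num_iters, seed=0):
        # Full-pass sketch of the momentum update: w_{t+1} = A w_t - beta * w_{t-1}.
        rng = np.random.default_rng(seed)
        w = rng.standard_normal(A.shape[0])
        w /= np.linalg.norm(w)
        w_prev = np.zeros_like(w)
        for _ in range(num_iters):
            w_next = A @ w - beta * w_prev
            # Rescale both iterates by the same factor: the recurrence is linear,
            # so this preserves the direction dynamics while preventing overflow.
            scale = np.linalg.norm(w_next)
            w_prev = w / scale
            w = w_next / scale
        return w  # unit-norm estimate of the top eigenvector

    def minibatch_power_momentum(X, beta, batch_size, num_iters, seed=0):
        # Stochastic sketch: replace A = X^T X / n with a fresh mini-batch
        # estimate at each step. batch_size is a free knob in this sketch,
        # not a value prescribed by the paper; the abstract's point is that
        # acceleration survives only when the update variance stays below
        # the "breaking-point variance".
        rng = np.random.default_rng(seed)
        n, d = X.shape
        w = rng.standard_normal(d)
        w /= np.linalg.norm(w)
        w_prev = np.zeros(d)
        for _ in range(num_iters):
            batch = X[rng.choice(n, size=batch_size, replace=False)]
            w_next = batch.T @ (batch @ w) / batch_size - beta * w_prev
            scale = np.linalg.norm(w_next)
            w_prev = w / scale
            w = w_next / scale
        return w

    # Demo: top eigenvalue 1.0, second eigenvalue 0.9 (eigen-gap 0.1),
    # so the hedged tuning gives beta = 0.9**2 / 4 = 0.2025.
    A = np.diag([1.0, 0.9, 0.5, 0.1])
    w = power_iteration_momentum(A, beta=0.2025, num_iters=100)
    print(abs(w[0]))  # ~1.0: aligned with the top eigenvector of A

Setting β = 0 recovers the plain power iteration; the momentum term is what buys the O(1/√Δ) iteration count in the full-pass analysis, and the variance-reduction machinery the abstract mentions (not sketched here) is what preserves that rate in the stochastic setting.
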
Comments: 37 pages, 5 figures
Subjects: Optimization and Control (math.OC); Data Structures and Algorithms (cs.DS); Machine Learning (cs.LG); Numerical Analysis (math.NA); Machine Learning (stat.ML)
Cite as: arXiv:1707.02670 [math.OC]
  (or arXiv:1707.02670v1 [math.OC] for this version)
  https://doi.org/10.48550/arXiv.1707.02670

Submission history

From: Peng Xu
[v1] Mon, 10 Jul 2017 01:13:33 UTC (1,062 KB)