Computer Science > Machine Learning

arXiv:2005.08854 (cs)
[Submitted on 18 May 2020 (v1), last revised 31 Aug 2020 (this version, v2)]

Title: Scaling-up Distributed Processing of Data Streams for Machine Learning

Authors: Matthew Nokleby, Haroon Raja, Waheed U. Bajwa
Abstract: Emerging applications of machine learning in numerous areas involve continuous gathering of and learning from streams of data. Real-time incorporation of streaming data into the learned models is essential for improved inference in these applications. Further, these applications often involve data that are either inherently gathered at geographically distributed entities or that are intentionally distributed across multiple machines for memory, computational, and/or privacy reasons. Training of models in this distributed, streaming setting requires solving stochastic optimization problems in a collaborative manner over communication links between the physical entities. When the streaming data rate is high compared to the processing capabilities of compute nodes and/or the rate of the communications links, this poses a challenging question: how can one best leverage the incoming data for distributed training under constraints on computing capabilities and/or communications rate? A large body of research has emerged in recent decades to tackle this and related problems. This paper reviews recently developed methods that focus on large-scale distributed stochastic optimization in the compute- and bandwidth-limited regime, with an emphasis on convergence analysis that explicitly accounts for the mismatch between computation, communication and streaming rates. In particular, it focuses on methods that solve: (i) distributed stochastic convex problems, and (ii) distributed principal component analysis, which is a nonconvex problem with geometric structure that permits global convergence. For such methods, the paper discusses recent advances in terms of distributed algorithmic designs when faced with high-rate streaming data. Further, it reviews guarantees underlying these methods, which show there exist regimes in which systems can learn from distributed, streaming data at order-optimal rates.
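The first class of methods surveyed, distributed stochastic convex optimization over streaming data, can be illustrated with a minimal simulation of synchronous distributed mini-batch SGD. This is a sketch for intuition only, not a specific algorithm from the paper: it assumes a hypothetical setup of M nodes that each draw fresh streaming samples from a common linear model, compute local mini-batch gradients, and average them in an idealized, delay-free communication round before a shared iterate is updated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: M nodes each observe a stream of (x, y) samples from a
# common linear model y = x @ w_true + noise, and collaborate on the convex
# least-squares problem min_w E[(y - x @ w)^2].
M, d, T, B = 4, 5, 2000, 8        # nodes, dimension, iterations, mini-batch per node
w_true = rng.standard_normal(d)
w = np.zeros(d)                    # shared iterate (idealized exact averaging)

for t in range(1, T + 1):
    grads = []
    for _ in range(M):             # each node draws B fresh streaming samples
        X = rng.standard_normal((B, d))
        y = X @ w_true + 0.1 * rng.standard_normal(B)
        grads.append(2 * X.T @ (X @ w - y) / B)   # local mini-batch gradient
    # One communication round: average the M local gradients, then take a
    # diminishing-step-size SGD step on the shared iterate.
    w -= (1.0 / t) * np.mean(grads, axis=0)

print(np.linalg.norm(w - w_true))  # small: averaged iterate approaches w_true
```

Averaging gradients across nodes multiplies the effective mini-batch size by M, which is the basic mechanism behind the order-optimal rates discussed in the abstract; the interesting regimes the paper analyzes arise when communication or computation is too slow to perform this averaging on every fresh sample.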
Comments: 45 pages, 9 figures; preprint of a journal paper published in Proceedings of the IEEE (Special Issue on Optimization for Data-driven Learning and Control)
Subjects: Machine Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC); Signal Processing (eess.SP); Optimization and Control (math.OC); Machine Learning (stat.ML)
Cite as: arXiv:2005.08854 [cs.LG]
  (or arXiv:2005.08854v2 [cs.LG] for this version)


Submission history

From: Waheed Bajwa
[v1] Mon, 18 May 2020 16:28:54 UTC (2,160 KB)
[v2] Mon, 31 Aug 2020 23:48:59 UTC (2,161 KB)