arXiv:2103.06406
Computer Science > Machine Learning (cs.LG)

[Submitted on 11 Mar 2021 (v1), last revised 12 Oct 2021 (this version, v3)]

Title: Distributed Principal Subspace Analysis for Partitioned Big Data: Algorithms, Analysis, and Implementation

Authors: Arpita Gang, Bingqing Xiang, Waheed U. Bajwa
Abstract: Principal Subspace Analysis (PSA) -- and its sibling, Principal Component Analysis (PCA) -- is one of the most popular approaches for dimensionality reduction in signal processing and machine learning. But centralized PSA/PCA solutions are fast becoming irrelevant in the modern era of big data, in which the number of samples and/or the dimensionality of samples often exceed the storage and/or computational capabilities of individual machines. This has led to the study of distributed PSA/PCA solutions, in which the data are partitioned across multiple machines and an estimate of the principal subspace is obtained through collaboration among the machines. It is in this vein that this paper revisits the problem of distributed PSA/PCA under the general framework of an arbitrarily connected network of machines that lacks a central server. The main contributions of the paper in this regard are threefold. First, two algorithms are proposed in the paper that can be used for distributed PSA/PCA, one for data partitioned across samples and the other for data partitioned across (raw) features. Second, in the case of sample-wise partitioned data, the proposed algorithm and a variant of it are analyzed, and their convergence to the true subspace at linear rates is established. Third, extensive experiments on both synthetic and real-world data are carried out to validate the usefulness of the proposed algorithms. In particular, in the case of sample-wise partitioned data, an MPI-based distributed implementation is carried out to study the interplay between network topology and communication cost, as well as to study the effects of straggler machines on the proposed algorithms.
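The paper's actual algorithms and their linear-rate convergence proofs are in the full text. As an illustrative stand-in for the sample-wise partitioned setting the abstract describes, the sketch below implements a generic consensus-plus-orthogonal-iteration scheme: each machine holds its own sample partition, multiplies its iterate by its local covariance, averages iterates with its neighbors via a doubly stochastic mixing matrix, and re-orthonormalizes. The function name `distributed_psa`, the mixing matrix `W`, and the update schedule are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def distributed_psa(local_data, W, K, iters=300, seed=0):
    """Illustrative consensus-based distributed subspace estimation (not the
    paper's exact algorithm).

    Each node i holds a sample-wise partition local_data[i] (n_i x d) and a
    d x K iterate. One round = a local covariance multiply, a consensus
    average over the doubly stochastic mixing matrix W, then a QR
    re-orthonormalization of each node's iterate.
    """
    rng = np.random.default_rng(seed)
    n_nodes = len(local_data)
    d = local_data[0].shape[1]
    # Local sample covariances (never communicated; only d x K iterates are).
    covs = [A.T @ A / A.shape[0] for A in local_data]
    # Common random initialization shared by all nodes.
    X0 = rng.standard_normal((d, K))
    X = [X0.copy() for _ in range(n_nodes)]
    for _ in range(iters):
        # Local power-iteration step at each node.
        Y = [covs[i] @ X[i] for i in range(n_nodes)]
        # Consensus mixing: each node averages its neighbors' iterates.
        X = [sum(W[i, j] * Y[j] for j in range(n_nodes)) for i in range(n_nodes)]
        # Keep each iterate on the Stiefel manifold (orthonormal columns).
        X = [np.linalg.qr(Xi)[0] for Xi in X]
    return X
```

With a complete-graph mixing matrix (`W[i, j] = 1/n`) and equal-sized partitions, this reduces exactly to orthogonal iteration on the global sample covariance, so the iterates converge to the top-K principal subspace of the pooled data; over a sparser network, mixing is inexact per round and convergence behavior depends on the topology, which is part of what the paper's MPI experiments study.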
Comments: 16 pages; Final accepted version; To appear in IEEE Transactions on Signal and Information Processing Over Networks
Subjects: Machine Learning (cs.LG); Distributed, Parallel, and Cluster Computing (cs.DC); Signal Processing (eess.SP); Optimization and Control (math.OC)
Cite as: arXiv:2103.06406 [cs.LG]
  (or arXiv:2103.06406v3 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2103.06406
Journal reference: IEEE Trans. Signal Inform. Proc. over Netw., vol. 7, pp. 699-715, Oct. 2021
Related DOI: https://doi.org/10.1109/TSIPN.2021.3122297

Submission history

From: Waheed Bajwa [view email]
[v1] Thu, 11 Mar 2021 01:33:38 UTC (9,857 KB)
[v2] Wed, 22 Sep 2021 21:30:27 UTC (16,501 KB)
[v3] Tue, 12 Oct 2021 18:01:16 UTC (17,422 KB)

