
arXiv:2502.07977 (cs)
[Submitted on 11 Feb 2025]

Title: RESIST: Resilient Decentralized Learning Using Consensus Gradient Descent

Authors: Cheng Fang, Rishabh Dixit, Waheed U. Bajwa, Mert Gurbuzbalaban
Abstract: Empirical risk minimization (ERM) is a cornerstone of modern machine learning (ML). Advances in optimization theory ensure efficient ERM solutions with provable algorithmic convergence rates, which measure the speed at which optimization algorithms approach a solution, and statistical learning rates, which characterize how well the solution generalizes to unseen data. Privacy, memory, computational, and communication constraints increasingly necessitate data collection, processing, and storage across network-connected devices. In many applications, these networks operate in decentralized settings where a central server cannot be assumed, requiring decentralized ML algorithms that are both efficient and resilient. Decentralized learning, however, faces significant challenges, including an enlarged attack surface for adversarial interference. This paper focuses on the man-in-the-middle (MITM) attack, which can cause models to deviate significantly from their intended ERM solutions. To address this challenge, we propose RESIST (Resilient dEcentralized learning using conSensus gradIent deScenT), an optimization algorithm designed to be robust against adversarially compromised communication links. RESIST achieves algorithmic and statistical convergence for strongly convex, Polyak-Łojasiewicz, and nonconvex ERM problems. Experimental results demonstrate the robustness and scalability of RESIST for real-world decentralized learning in adversarial environments.
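The abstract describes RESIST only at a high level, so the following is a minimal, illustrative sketch of plain decentralized consensus gradient descent with a generic robust-aggregation step, not the RESIST algorithm itself. The ring topology, least-squares local losses, step size, and the coordinate-wise trimmed-mean screening rule are all assumptions made purely for illustration; the paper's actual link-screening rule and convergence guarantees may differ.

```python
# Illustrative sketch: decentralized consensus gradient descent with a
# generic robust-aggregation step. This is NOT the RESIST algorithm from
# the paper; the ring topology, step size, least-squares losses, and the
# coordinate-wise trimmed-mean screening are assumptions for illustration.
import numpy as np

def local_gradient(x, A, b):
    """Gradient of the local least-squares loss 0.5*||A x - b||^2 (toy ERM)."""
    return A.T @ (A @ x - b)

def trimmed_mean(vectors, trim=1):
    """Coordinate-wise trimmed mean: drop the `trim` largest and smallest
    values per coordinate before averaging (a generic robust aggregator)."""
    V = np.sort(np.stack(vectors), axis=0)
    return V[trim:len(vectors) - trim].mean(axis=0)

def decentralized_cgd(A_list, b_list, neighbors, steps=200, lr=0.01, trim=1):
    """Each node robustly averages the models received from its neighbors,
    then takes a gradient step on its own local data shard."""
    n, d = len(A_list), A_list[0].shape[1]
    x = [np.zeros(d) for _ in range(n)]
    for _ in range(steps):
        x_new = []
        for i in range(n):
            # Messages arriving over possibly compromised links, plus own model.
            received = [x[j] for j in neighbors[i]] + [x[i]]
            consensus = trimmed_mean(received, trim=trim)  # screen outliers
            x_new.append(consensus - lr * local_gradient(x[i], A_list[i], b_list[i]))
        x = x_new
    return x

# Toy usage: 5 nodes on a ring, each with its own least-squares problem.
rng = np.random.default_rng(0)
n, d = 5, 3
A_list = [rng.normal(size=(20, d)) for _ in range(n)]
x_true = rng.normal(size=d)
b_list = [A @ x_true + 0.01 * rng.normal(size=20) for A in A_list]
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
models = decentralized_cgd(A_list, b_list, neighbors)
```

Here the trimmed mean simply stands in for "discard suspicious messages before averaging": under a MITM attack on a communication link, the corrupted model arriving over that link is treated as an outlier and filtered out before the consensus step.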
Comments: preprint of a journal paper; 100 pages and 17 figures
Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)
Cite as: arXiv:2502.07977 [cs.LG]
  (or arXiv:2502.07977v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2502.07977
arXiv-issued DOI via DataCite

Submission history

From: Cheng Fang
[v1] Tue, 11 Feb 2025 21:48:10 UTC (1,666 KB)