Statistics > Machine Learning

arXiv:1908.08098 (stat)
[Submitted on 21 Aug 2019 (v1), last revised 15 Jun 2022 (this version, v3)]

Title: BRIDGE: Byzantine-resilient Decentralized Gradient Descent

Authors: Cheng Fang, Zhixiong Yang, Waheed U. Bajwa
Abstract: Machine learning has begun to play a central role in many applications. Many of these applications involve datasets that are distributed across multiple computing devices/machines due to either design constraints (e.g., multiagent systems) or computational/privacy reasons (e.g., learning on smartphone data). Such applications often require the learning tasks to be carried out in a decentralized fashion, in which there is no central server directly connected to all nodes. In real-world decentralized settings, nodes are prone to undetected failures due to malfunctioning equipment, cyberattacks, etc., which are likely to crash non-robust learning algorithms. The focus of this paper is on robustification of decentralized learning in the presence of nodes that have undergone Byzantine failures. The Byzantine failure model allows faulty nodes to deviate arbitrarily from their intended behaviors, and algorithms designed to tolerate it are therefore among the most robust possible. But the study of Byzantine resilience within decentralized learning, in contrast to distributed learning, is still in its infancy. In particular, existing Byzantine-resilient decentralized learning methods either do not scale well to large-scale machine learning models, or they lack statistical convergence guarantees that help characterize their generalization errors. In this paper, a scalable, Byzantine-resilient decentralized machine learning framework termed Byzantine-resilient decentralized gradient descent (BRIDGE) is introduced. Algorithmic and statistical convergence guarantees for one variant of BRIDGE are also provided in the paper for both strongly convex problems and a class of nonconvex problems. In addition, large-scale decentralized learning experiments are used to establish that the BRIDGE framework is scalable and that it delivers competitive results for Byzantine-resilient convex and nonconvex learning.
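The abstract does not spell out the screening rule BRIDGE uses, but the general recipe it describes (each node filters its neighbors' models before taking a local gradient step) can be illustrated with a minimal sketch. The sketch below assumes a coordinate-wise trimmed-mean screen, one of the robust aggregation rules commonly used in this line of work; the function names, the trimming parameter `b`, and the inclusion of the node's own iterate in the screen are illustrative choices, not details confirmed by this page.

```python
import numpy as np

def trimmed_mean_screen(params, b):
    """Coordinate-wise trimmed mean: at each coordinate, discard the b
    largest and b smallest values across the received models, then
    average the remaining values. Tolerates up to b Byzantine neighbors."""
    X = np.sort(np.asarray(params, dtype=float), axis=0)  # sort per coordinate
    return X[b:len(X) - b].mean(axis=0)

def decentralized_step(own_params, neighbor_params, own_grad, lr, b):
    """One Byzantine-resilient decentralized gradient-descent update:
    screen the neighbors' models (together with the node's own model),
    then descend along the local gradient."""
    screened = trimmed_mean_screen(list(neighbor_params) + [own_params], b)
    return screened - lr * np.asarray(own_grad, dtype=float)
```

For example, with one Byzantine neighbor sending the outlier `[100, -100]`, the trimmed mean with `b = 1` ignores the extreme value in each coordinate, so the update stays close to where honest gradient descent would go.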
Comments: 20 pages, 10 figures, 2 tables; some expanded discussion as well as additional numerical experiments using the CIFAR-10 dataset
Subjects: Machine Learning (stat.ML); Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG); Multiagent Systems (cs.MA); Signal Processing (eess.SP)
Cite as: arXiv:1908.08098 [stat.ML]
  (or arXiv:1908.08098v3 [stat.ML] for this version)
  https://doi.org/10.48550/arXiv.1908.08098

Submission history

From: Waheed Bajwa
[v1] Wed, 21 Aug 2019 19:49:56 UTC (36 KB)
[v2] Wed, 26 Jan 2022 03:46:00 UTC (421 KB)
[v3] Wed, 15 Jun 2022 02:06:56 UTC (1,171 KB)