{"quality_controlled":"1","title":"Asynchronous decentralized SGD with quantized and local updates","related_material":{"record":[{"status":"public","relation":"dissertation_contains","id":"10429"}]},"acknowledgement":"We gratefully acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML). PD partly conducted this work while at IST Austria and was supported by the European Union’s Horizon 2020 programme under the Marie Skłodowska-Curie grant agreement No. 754411. SL was funded in part by European Research Council (ERC) under the European Union’s Horizon 2020 programme (grant agreement DAPP, No. 678880, and EPiGRAM-HS, No. 801039).\r\n","status":"public","project":[{"name":"ISTplus - Postdoctoral Fellowships","_id":"260C2330-B435-11E9-9278-68D0E5697425","grant_number":"754411","call_identifier":"H2020"},{"call_identifier":"H2020","grant_number":"805223","name":"Elastic Coordination for Scalable Machine Learning","_id":"268A44D6-B435-11E9-9278-68D0E5697425"}],"oa":1,"oa_version":"Published Version","user_id":"2DF688A6-F248-11E8-B48F-1D18A9856A87","date_created":"2021-12-09T10:59:12Z","publisher":"Neural Information Processing Systems Foundation","year":"2021","ec_funded":1,"citation":{"short":"G. Nadiradze, A. Sabour, P. Davies, S. Li, D.-A. Alistarh, in:, 35th Conference on Neural Information Processing Systems, Neural Information Processing Systems Foundation, 2021.","ama":"Nadiradze G, Sabour A, Davies P, Li S, Alistarh D-A. Asynchronous decentralized SGD with quantized and local updates. In: 35th Conference on Neural Information Processing Systems. Neural Information Processing Systems Foundation; 2021.","ista":"Nadiradze G, Sabour A, Davies P, Li S, Alistarh D-A. 2021. Asynchronous decentralized SGD with quantized and local updates. 35th Conference on Neural Information Processing Systems. NeurIPS: Neural Information Processing Systems.","apa":"Nadiradze, G., Sabour, A., Davies, P., Li, S., & Alistarh, D.-A. (2021). Asynchronous decentralized SGD with quantized and local updates. In 35th Conference on Neural Information Processing Systems. Sydney, Australia: Neural Information Processing Systems Foundation.","chicago":"Nadiradze, Giorgi, Amirmojtaba Sabour, Peter Davies, Shigang Li, and Dan-Adrian Alistarh. “Asynchronous Decentralized SGD with Quantized and Local Updates.” In 35th Conference on Neural Information Processing Systems. Neural Information Processing Systems Foundation, 2021.","ieee":"G. Nadiradze, A. Sabour, P. Davies, S. Li, and D.-A. Alistarh, “Asynchronous decentralized SGD with quantized and local updates,” in 35th Conference on Neural Information Processing Systems, Sydney, Australia, 2021.","mla":"Nadiradze, Giorgi, et al. 
“Asynchronous Decentralized SGD with Quantized and Local Updates.” 35th Conference on Neural Information Processing Systems, Neural Information Processing Systems Foundation, 2021."},"conference":{"end_date":"2021-12-14","location":"Sydney, Australia","name":"NeurIPS: Neural Information Processing Systems","start_date":"2021-12-06"},"date_published":"2021-12-01T00:00:00Z","publication_status":"published","language":[{"iso":"eng"}],"department":[{"_id":"DaAl"}],"_id":"10435","main_file_link":[{"url":"https://papers.nips.cc/paper/2021/hash/362c99307cdc3f2d8b410652386a9dd1-Abstract.html","open_access":"1"}],"abstract":[{"text":"Decentralized optimization is emerging as a viable alternative for scalable distributed machine learning, but also introduces new challenges in terms of synchronization costs. To this end, several communication-reduction techniques, such as non-blocking communication, quantization, and local steps, have been explored in the decentralized setting. Due to the complexity of analyzing optimization in such a relaxed setting, this line of work often assumes \\emph{global} communication rounds, which require additional synchronization. In this paper, we consider decentralized optimization in the simpler, but harder to analyze, \\emph{asynchronous gossip} model, in which communication occurs in discrete, randomly chosen pairings among nodes. Perhaps surprisingly, we show that a variant of SGD called \\emph{SwarmSGD} still converges in this setting, even if \\emph{non-blocking communication}, \\emph{quantization}, and \\emph{local steps} are all applied \\emph{in conjunction}, and even if the node data distributions and underlying graph topology are both \\emph{heterogenous}. Our analysis is based on a new connection with multi-dimensional load-balancing processes. We implement this algorithm and deploy it in a super-computing environment, showing that it can outperform previous decentralized methods in terms of end-to-end training time, and that it can even rival carefully-tuned large-batch SGD for certain tasks.","lang":"eng"}],"type":"conference","day":"01","author":[{"orcid":"0000-0001-5634-0731","id":"3279A00C-F248-11E8-B48F-1D18A9856A87","full_name":"Nadiradze, Giorgi","last_name":"Nadiradze","first_name":"Giorgi"},{"last_name":"Sabour","first_name":"Amirmojtaba","id":"bcc145fd-e77f-11ea-ae8b-80d661dbff67","full_name":"Sabour, Amirmojtaba"},{"id":"11396234-BB50-11E9-B24C-90FCE5697425","full_name":"Davies, Peter","last_name":"Davies","first_name":"Peter","orcid":"0000-0002-5646-9524"},{"first_name":"Shigang","last_name":"Li","full_name":"Li, Shigang"},{"first_name":"Dan-Adrian","last_name":"Alistarh","id":"4A899BFC-F248-11E8-B48F-1D18A9856A87","full_name":"Alistarh, Dan-Adrian","orcid":"0000-0003-3650-940X"}],"publication":"35th Conference on Neural Information Processing Systems","external_id":{"arxiv":["1910.12308"]},"article_processing_charge":"No","month":"12","date_updated":"2023-10-17T11:48:56Z"}