Federated Optimization in Heterogeneous Networks (MLSys '20)
Fair Resource Allocation in Federated Learning (ICLR '20)
FedTorch is a generic repository for benchmarking different federated and distributed learning algorithms using the PyTorch Distributed API.
DISROPT: A Python framework for distributed optimization
Implementation of (overlap) local SGD in PyTorch
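The core idea behind local SGD is that each worker runs several local gradient steps and the workers only periodically average their models, cutting communication frequency. A minimal sketch of plain (non-overlapped) periodic averaging on a 1-D least-squares toy problem; all names and hyperparameters here are illustrative, not taken from the repository:

```python
import random

def local_sgd(workers_data, lr=0.1, rounds=5, local_steps=10):
    """Periodic-averaging local SGD on a toy 1-D least-squares problem.

    Each worker takes `local_steps` SGD steps on its own data, then all
    local models are averaged into the shared parameter `w`.
    """
    w = 0.0  # shared model parameter
    for _ in range(rounds):
        local_models = []
        for data in workers_data:
            w_local = w
            for _ in range(local_steps):
                x, y = random.choice(data)
                grad = 2 * (w_local * x - y) * x  # d/dw (w*x - y)^2
                w_local -= lr * grad
            local_models.append(w_local)
        w = sum(local_models) / len(local_models)  # communication round
    return w
```

The "overlap" variant hides the averaging step behind the next round of local computation; the sketch above keeps the two phases sequential for clarity.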
FedDANE: A Federated Newton-Type Method (Asilomar Conference on Signals, Systems, and Computers '19)
Communication-efficient decentralized SGD (PyTorch)
Scalable, structured, dynamically-scheduled hyperparameter optimization.
A Ray-based library of Distributed POPulation-based OPtimization for Large-Scale Black-Box Optimization.
tvopt is a prototyping and benchmarking Python framework for time-varying (or online) optimization.
We present a set of all-reduce compatible gradient compression algorithms which significantly reduce the communication overhead while maintaining the performance of vanilla SGD. We empirically evaluate the performance of the compression methods by training deep neural networks on the CIFAR10 dataset.
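For a compression scheme to remain all-reduce compatible, every worker must transmit the same coordinates, so the sparse vectors align and can simply be summed. One common way to achieve this is shared-seed random-k sparsification with local error feedback. A minimal sketch under those assumptions (the function and its signature are illustrative, not this repository's API):

```python
import random

def shared_randk_allreduce(grads, k, seed, residuals):
    """Shared-seed random-k sparsification with error feedback.

    All workers draw the same k coordinates from a shared seed, so their
    sparse updates align and can be summed like an all-reduce. Coordinates
    that are not transmitted accumulate in each worker's local residual.
    """
    dim = len(grads[0])
    rng = random.Random(seed)
    idx = set(rng.sample(range(dim), k))  # identical on every worker
    avg = [0.0] * dim
    for g, r in zip(grads, residuals):
        corrected = [gi + ri for gi, ri in zip(g, r)]  # error feedback
        for i in range(dim):
            if i in idx:
                avg[i] += corrected[i] / len(grads)
                r[i] = 0.0              # transmitted: clear residual
            else:
                r[i] = corrected[i]     # keep for a later round
    return avg, idx
```

The residual memory ensures that coordinates skipped in one round are eventually applied, which is what lets compressed training track the accuracy of vanilla SGD.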
Distributed approach of scheduling residential EV charging to maintain reliability of power distribution grids.
Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees
Implementation of Redundancy Infused SGD for faster distributed SGD.
We present an algorithm to dynamically adjust the data assigned for each worker at every epoch during the training in a heterogeneous cluster. We empirically evaluate the performance of the dynamic partitioning by training deep neural networks on the CIFAR10 dataset.
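In a heterogeneous cluster, synchronous training is bottlenecked by the slowest worker, so a natural partitioning rule is to size each worker's shard proportionally to its measured throughput. A minimal sketch of that idea (the function name and interface are hypothetical, not the repository's):

```python
def dynamic_partition(num_samples, throughputs):
    """Assign each worker a data share proportional to its measured
    throughput (e.g. samples/sec from the previous epoch), so that all
    workers finish an epoch at roughly the same time."""
    total = sum(throughputs)
    sizes = [int(num_samples * t / total) for t in throughputs]
    sizes[-1] += num_samples - sum(sizes)  # hand the rounding remainder to one worker
    return sizes
```

Re-running this at every epoch with fresh throughput measurements is what makes the partitioning dynamic: a worker that slows down (e.g. due to contention) automatically receives less data in the next epoch.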
Distributed Multidisciplinary Design Optimization
Code for "Distributed Online Optimization with Coupled Inequality Constraints over Unbalanced Directed Networks" (CDC 2023)
We present UDP-based aggregation algorithms for federated learning. We also present a scalable framework for practical federated learning. We empirically evaluate the performance by training deep convolutional neural networks on the MNIST dataset and the CIFAR10 dataset.
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization. NeurIPS, 2022
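The error-feedback mechanism that EF-BV generalizes is simple: compress the gradient plus a residual memory, apply the compressed part, and store what was left out for the next step. A minimal sketch of classic error feedback with a biased top-1 compressor; this illustrates the basic mechanism only, not the EF-BV algorithm itself, and all names are illustrative:

```python
def top1(v):
    """Biased compressor: keep only the largest-magnitude coordinate."""
    i = max(range(len(v)), key=lambda j: abs(v[j]))
    out = [0.0] * len(v)
    out[i] = v[i]
    return out

def ef_step(grad, memory, lr=0.1):
    """One classic error-feedback step: compress grad + memory, apply the
    compressed part as the update, and store the leftover in memory."""
    corrected = [g + m for g, m in zip(grad, memory)]
    compressed = top1(corrected)
    new_memory = [c - q for c, q in zip(corrected, compressed)]
    update = [-lr * q for q in compressed]
    return update, new_memory
```

Because the leftover is replayed in later steps, even a biased compressor like top-1 does not permanently drop information, which is the key to convergence guarantees for this family of methods.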
Implementation of Local Updates Periodic Averaging (LUPA) SGD