This repository contains the code and resources to replicate the simulations presented in the paper:
"Communication-Efficient Federated Learning via Clipped Uniform Quantization".
The method integrates clipped uniform quantization with federated learning to enhance communication efficiency without sacrificing accuracy.
In the main function, you can specify the algorithm variant as well as the dataset to use. Additional implementation details and documentation will be added here.
This work presents a novel framework to reduce communication costs in federated learning using clipped uniform quantization. The key contributions include:
- Optimal Clipping Thresholds: Balances quantization and clipping noise to minimize information loss.
- Stochastic Quantization: Enhances robustness by introducing diversity in client model initialization.
- Privacy Preservation: Obviates the need for disclosing client-specific dataset sizes during aggregation.
- Error-Weighted Averaging: Aggregates client updates with weights inversely proportional to their mean quantization error, as an alternative to FedAvg.
The proposed method achieves near-full-precision accuracy with significant communication savings, demonstrated through extensive simulations on the MNIST and CIFAR-10 datasets.
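To make the pipeline concrete, below is a minimal NumPy sketch of clipped uniform quantization with deterministic or stochastic rounding. The function names, the grid search used to pick the clipping threshold (a stand-in for the paper's analytical selection), and the reading of configuration strings such as "4-2-2-4" as per-layer bit widths are all illustrative assumptions, not the repository's exact implementation.

```python
import numpy as np

def clipped_uniform_quantize(w, c, bits, stochastic=False, rng=None):
    """Clip w to [-c, c], then uniformly quantize to 2**bits levels."""
    w_clipped = np.clip(w, -c, c)
    step = 2.0 * c / (2 ** bits - 1)       # width of one quantization bin
    scaled = (w_clipped + c) / step        # map to [0, 2**bits - 1]
    if stochastic:
        rng = rng or np.random.default_rng()
        low = np.floor(scaled)
        # round up with probability equal to the fractional part,
        # which makes the quantizer unbiased in expectation
        scaled = low + (rng.random(w.shape) < (scaled - low))
    else:
        scaled = np.round(scaled)
    return scaled * step - c               # dequantize back to [-c, c]

def find_clipping_threshold(w, bits, num_grid=100):
    """Illustrative grid search for the threshold c that minimizes total
    distortion (clipping noise + quantization noise) in mean-squared error."""
    candidates = np.linspace(1e-6, np.abs(w).max(), num_grid)
    mse = [np.mean((w - clipped_uniform_quantize(w, c, bits)) ** 2)
           for c in candidates]
    return candidates[int(np.argmin(mse))]

# Example: quantize four layers with the "4-2-2-4" bit-width configuration.
rng = np.random.default_rng(0)
layers = [rng.normal(size=n) for n in (256, 512, 512, 128)]
for w, bits in zip(layers, [int(b) for b in "4-2-2-4".split("-")]):
    c = find_clipping_threshold(w, bits)
    w_q = clipped_uniform_quantize(w, c, bits, stochastic=True, rng=rng)
    print(f"{bits}-bit layer: c={c:.2f}, MSE={np.mean((w - w_q) ** 2):.5f}")
```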
- Enhanced Communication Efficiency:
  - Optimal clipping of model weights before transmission.
  - Stochastic quantization for increased robustness.
- Privacy-Aware Design:
  - Avoids sharing client dataset sizes with the server.
- Versatile Aggregation Methods:
  - Supports both FedAvg and error-weighted aggregation (see the sketch after this list).
- Code for MNIST and CIFAR-10 Simulations:
  - Implements various quantization configurations (e.g., "4-2-2-4", "2-2-2-2").
  - Includes training scripts and evaluation metrics.
- Clipping Threshold Optimization:
  - Analytical method for optimal threshold selection.
- Quantization Strategies:
  - Deterministic and stochastic quantization.
- Communication Savings:
  - Reduces communication cost relative to full-precision transmission.
- Scalability and Performance Metrics:
  - Demonstrated through simulations on MNIST and CIFAR-10.
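The toy sketch below contrasts plain FedAvg with the error-weighted aggregation listed above: each client's model is weighted by the inverse of its mean quantization error, so heavily quantized (noisier) updates contribute less, and no client dataset sizes are required. The function names and the simulated error values are assumptions for illustration; the repository's exact weighting rule may differ.

```python
import numpy as np

def fedavg(client_models):
    """Plain unweighted average of the client model vectors."""
    return np.mean(client_models, axis=0)

def error_weighted_average(client_models, mean_quant_errors, eps=1e-12):
    """Weight each client by the inverse of its mean quantization error
    (illustrative rule; needs no client dataset sizes)."""
    weights = 1.0 / (np.asarray(mean_quant_errors) + eps)
    weights /= weights.sum()               # normalize weights to sum to 1
    return np.tensordot(weights, np.stack(client_models), axes=1)

# Toy example: three clients whose updates carry different quantization noise.
rng = np.random.default_rng(1)
true_model = rng.normal(size=1000)
errors = [1e-4, 4e-4, 1.6e-3]              # mean quantization error per client
clients = [true_model + rng.normal(scale=np.sqrt(e), size=1000) for e in errors]
print("FedAvg MSE:         ", np.mean((fedavg(clients) - true_model) ** 2))
print("Error-weighted MSE: ", np.mean((error_weighted_average(clients, errors) - true_model) ** 2))
```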
- Python 3.7 or later
- Required libraries (see `requirements.txt`)
- Clone the repository:

      git clone https://github.com/username/ClippedQuantFL.git
      cd ClippedQuantFL
If you find this repository helpful, please consider citing:
@inproceedings{bozorgasl2025communication,
  title={Communication-Efficient Federated Learning via Clipped Uniform Quantization},
  author={Bozorgasl, Zavareh and Chen, Hao},
  booktitle={2025 59th Annual Conference on Information Sciences and Systems (CISS)},
  pages={1--6},
  year={2025},
  organization={IEEE}
}