Oct 4, 2023
Enhancing Federated Learning: Leveraging Weight Distribution Analysis for Superior Aggregation
Federated learning has recently been proposed as a way to train a central deep model on private or sensitive data without exchanging the local data itself. In federated learning, local models are trained on the client side using the locally available data, while a server aggregates the weights of these models into a global model. However, the traditional weight-averaging approach does not account for how important the different weights are to a model's performance. To this end, this work proposes a novel federated learning weight aggregation method that estimates the statistical distance of each client's parameters from Gaussianity and weighs each client's contribution to the global model accordingly, so that the most significant information is retained and enhanced. To create an accurate global model, a weighted averaging of the clients' model parameters is performed at the layer level, treating parameters that follow a Gaussian distribution as low quality. The proposed method can be applied to both convolutional and linear layers and is based on the notion that parameters following a Gaussian distribution do not significantly affect the output of a model. Experiments with different network architectures, and comparisons with a wide range of state-of-the-art approaches on three well-known image classification datasets, demonstrate the superiority of the proposed method for federated learning weight aggregation.
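To make the idea concrete, below is a minimal Python sketch of Gaussianity-weighted aggregation for a single layer. This is an illustration under assumptions, not the paper's implementation: it uses SciPy's Shapiro-Wilk statistic as a stand-in for the statistical distance from Gaussianity, and the helper names gaussianity_weight and aggregate are hypothetical.

import numpy as np
from scipy import stats

def gaussianity_weight(layer_params: np.ndarray) -> float:
    """Score a layer's parameters by their distance from Gaussianity.

    Uses the Shapiro-Wilk statistic W as an illustrative proxy: values of W
    near 1.0 indicate near-Gaussian (here treated as low-quality) parameters,
    so 1 - W serves as the aggregation weight. The paper's actual distance
    measure may differ.
    """
    flat = layer_params.ravel()
    # Shapiro-Wilk is intended for modest sample sizes; subsample large layers.
    if flat.size > 5000:
        flat = np.random.default_rng(0).choice(flat, 5000, replace=False)
    w_statistic, _ = stats.shapiro(flat)
    return 1.0 - w_statistic

def aggregate(client_layers: list[np.ndarray]) -> np.ndarray:
    """Weighted average of one layer across clients, favoring clients whose
    parameters deviate most from a Gaussian distribution."""
    scores = np.array([gaussianity_weight(p) for p in client_layers])
    if scores.sum() == 0.0:
        # If every client looks perfectly Gaussian, fall back to plain averaging.
        scores = np.ones_like(scores)
    weights = scores / scores.sum()
    return sum(w * p for w, p in zip(weights, client_layers))

# Example: three clients' versions of the same convolutional layer.
rng = np.random.default_rng(42)
clients = [rng.normal(0.0, 0.1, size=(64, 3, 3, 3)) for _ in range(3)]
global_layer = aggregate(clients)
print(global_layer.shape)  # (64, 3, 3, 3)

In a full pipeline this scoring would run per layer in every communication round, with the server assembling the resulting layer-wise averages into the global model before redistributing it to the clients.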
Read the full publication: https://www.researchgate.net/publication/374440973_Federated_Learning_Aggregation_based_on_Weight_Distribution_Analysis