Please use this identifier to cite or link to this item:
http://hdl.handle.net/10397/117003
Title: Robust federated learning with contrastive learning and meta-learning
Authors: Zhang, H; Chen, Y; Li, K; Li, Y; Zhou, S; Liang, W; Poniszewska-Maranda, A
Issue Date: 2025
Source: International journal of interactive multimedia and artificial intelligence, published 2025-10-09, in press, https://doi.org/10.9781/ijimai.2025.09.004
Abstract: Federated learning is regarded as an effective approach to addressing data privacy concerns in the era of artificial intelligence, yet it still faces the challenges of unbalanced data distribution and client vulnerability to attacks. Existing work addresses these challenges but overlooks the case where abnormal updates account for a large proportion of all updates; the aggregated model may then absorb so much abnormal information that it deviates from the normal update direction, degrading model performance. Other methods are unsuitable for non-independent and identically distributed (non-IID) settings, where the resulting lack of information on small-category data leads to inaccurate predictions. In this work, we propose a robust federated learning architecture, called FedCM, which integrates contrastive learning and meta-learning to mitigate the impact of poisoned client data on global model updates. The approach improves feature representations by contrasting extracted data characteristics against the previous round of local models, increasing accuracy. Additionally, a meta-learning method based on Gaussian-noise-perturbed model parameters is employed to fine-tune the local model with the global model, addressing the challenges posed by non-IID data and thereby enhancing the model's robustness. Experimental validation is conducted on real datasets, including CIFAR10, CIFAR100, and SVHN. The results show that FedCM achieves the highest average model accuracy across all proportions of attacked clients.
In the case of a non-IID distribution with parameter 0.5 on CIFAR10, under attacked-client proportions of 0.2, 0.5, and 0.8, FedCM improves average accuracy over the baseline methods by 8.2%, 7.9%, and 4.6%, respectively. Across different proportions of attacked clients, FedCM achieves improvements in average accuracy of at least 4.6%, 5.2%, and 0.45% on the CIFAR10, CIFAR100, and SVHN datasets, respectively. FedCM also converges faster in all training groups, with a particularly clear advantage on the SVHN dataset, where the number of training rounds required for convergence is reduced by approximately 34.78% compared to other methods.
Keywords: Contrastive learning; Federated learning; Meta-learning; Non-independent and identically distributed (non-IID)
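The abstract names two ingredients: a contrastive term that compares current local features against the previous round's local model, and local fine-tuning initialized from Gaussian-perturbed global parameters. A minimal NumPy sketch of those two ideas is given below, assuming a MOON-style model-contrastive loss; the function names, the temperature `tau`, and the noise scale `sigma` are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def model_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """Contrastive loss over per-sample feature rows of shape (n, d)."""
    # Cosine similarity between matching rows of two (n, d) arrays.
    def cos(a, b):
        num = np.sum(a * b, axis=1)
        den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
        return num / den

    # Positive pair: current local features vs. global-model features;
    # negative pair: current local features vs. previous-round local features.
    pos = np.exp(cos(z_local, z_global) / tau)
    neg = np.exp(cos(z_local, z_prev) / tau)
    return float(np.mean(-np.log(pos / (pos + neg))))

def meta_init_with_noise(global_params, sigma=0.01, rng=None):
    """Start local fine-tuning from a Gaussian-perturbed copy of the
    global weights rather than from the exact global weights."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return [p + sigma * rng.standard_normal(p.shape) for p in global_params]
```

In a round of training under these assumptions, each client would add `model_contrastive_loss` to its supervised loss and begin its local epochs from `meta_init_with_noise(global_params)` instead of the raw global weights.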
Publisher: IMAI Software
Journal: International journal of interactive multimedia and artificial intelligence
EISSN: 1989-1660
DOI: 10.9781/ijimai.2025.09.004
Rights: Authors transfer copyright of the article to the publisher UNIR and agree that the article will be distributed under the terms of the Creative Commons Attribution 3.0 Unported License (http://creativecommons.org/licenses/by/3.0/), which permits at any time to make digital or hard copies of the work, share, link, distribute, remix, transform, and build upon the work, giving the appropriate credit to the Authors and IJIMAI, providing a link to the license and indicating if changes were made. The following publication Zhang, H., Chen, Y., Li, K., Li, Y., Zhou, S., Liang, W., and Poniszewska-Maranda, A. (2025). Robust Federated Learning With Contrastive Learning and Meta-Learning. International Journal of Interactive Multimedia and Artificial Intelligence is available at https://doi.org/10.9781/ijimai.2025.09.004.
Appears in Collections: Journal/Magazine Article



