FLDP-DM: Defense Method Against Poisoning Attacks with Local Differential Privacy in Federated Learning Systems
| Published in | IEEE transactions on cognitive communications and networking p. 1 |
|---|---|
| Main Authors | , , , |
| Format | Journal Article |
| Language | English |
| Published | IEEE, 2025 |
| ISSN | 2332-7731 |
| DOI | 10.1109/TCCN.2025.3561316 |
| Summary: | With the rapid development of big data and artificial intelligence technologies, large-scale machine learning models have become mainstream, widely applied in both civilian and military fields. Federated learning (FL) is a distributed machine learning framework that enables multiple participants to collaboratively train a shared model without sharing their local data. This approach effectively addresses the issue of data silos, particularly in high-privacy domains such as healthcare, where data sharing is restricted due to privacy concerns. However, the involvement of multiple participants and the sharing of training updates introduce risks related to model security and privacy leakage, making enhanced security and privacy protection in federated learning an urgent priority. To address these challenges, we constructed a sleep posture detection dataset using custom-built devices and designed a weight allocation-based algorithm to detect poisoning attacks on computational nodes. Additionally, we proposed a hybrid detection method, built two attack models, and developed corresponding defense strategies. Simulation experiments demonstrate that the proposed method effectively defends against poisoning attacks even at high infection rates, while the introduced optimizer enhances detection stability. Furthermore, we integrated a differential privacy mechanism during training to protect participant data privacy while maintaining superior model performance. |
|---|---|
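The record above gives only a high-level summary of the paper. The sketch below illustrates, under stated assumptions, how a local differential privacy step is commonly combined with federated averaging: each client clips its model update and adds calibrated Gaussian noise locally before the server aggregates. The function names, the clipping bound `clip_norm`, and the noise scale `noise_std` are illustrative placeholders, not details taken from the article.

```python
import numpy as np

def ldp_perturb_update(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a client's model update (list of ndarrays) and add Gaussian noise locally.

    Illustrative local-DP-style perturbation; clip_norm and noise_std are
    placeholder hyperparameters, not values reported in the paper.
    """
    rng = rng or np.random.default_rng()
    flat = np.concatenate([w.ravel() for w in update])
    scale = min(1.0, clip_norm / (np.linalg.norm(flat) + 1e-12))  # L2 clipping factor
    return [w * scale + rng.normal(0.0, noise_std, size=w.shape) for w in update]

def federated_average(client_updates):
    """Plain FedAvg over the already-perturbed client updates."""
    return [np.mean(np.stack(layers), axis=0) for layers in zip(*client_updates)]
```

In such a setup the server never sees raw updates: each client calls `ldp_perturb_update` before transmission, and the server only runs `federated_average`. The paper's actual mechanism, privacy budget, and parameter choices may differ.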
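Likewise, the weight allocation-based poisoning detection mentioned in the summary is not specified in this record. The following is a generic robust-aggregation sketch in the same spirit: clients whose updates lie far from the coordinate-wise median receive lower aggregation weights. The distance-based scoring rule and the `temperature` parameter are assumptions for illustration only, not the authors' algorithm.

```python
import numpy as np

def allocate_weights(client_updates, temperature=1.0):
    """Down-weight outlier client updates before aggregation.

    Generic robust-aggregation sketch: score each client by its L2 distance
    to the coordinate-wise median update, then softmax the negated scores.
    The scoring rule and `temperature` are illustrative assumptions.
    """
    flats = np.stack([np.concatenate([w.ravel() for w in u]) for u in client_updates])
    dists = np.linalg.norm(flats - np.median(flats, axis=0), axis=1)
    logits = -dists / temperature
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    return weights / weights.sum()

def weighted_average(client_updates, weights):
    """Aggregate layer-wise using the allocated weights."""
    return [sum(w * layer for w, layer in zip(weights, layers))
            for layers in zip(*client_updates)]
```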