
Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-ray Data

  • Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against data privacy attacks. To the best of our knowledge, we are the first to directly compare the impact of differentially private training on two different neural network architectures, DenseNet121 and ResNet50. Extending the federated learning environments previously analyzed in terms of privacy, we simulated a heterogeneous and imbalanced federated setting by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients. Both non-private baseline models achieved an area under the receiver operating characteristic curve (AUC) of 0.94 on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages. To mitigate the risk of a privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets ε ∈ {1, 3, 6, 10}. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of 0.94 for ε = 6. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of 0.76 in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training.
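The abstract describes integrating Rényi differential privacy with a Gaussian noise mechanism into each client's local training. As a rough illustration only, the sketch below shows what one client's differentially private local round could look like using the Opacus library for PyTorch; the choice of Opacus, the delta, the clipping norm, the learning rate, and the toy data loader are assumptions made for this sketch and are not the authors' published configuration.

    # Minimal sketch of differentially private local training for one federated
    # client, assuming the Opacus library (PyTorch). Hyper-parameters (delta,
    # clipping norm, learning rate) and the placeholder data are illustrative
    # assumptions, not the paper's exact setup.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset
    from torchvision.models import densenet121
    from opacus import PrivacyEngine
    from opacus.validators import ModuleValidator

    # DenseNet121 backbone with a single-logit head for the binary
    # "finding / no finding" task described in the abstract.
    model = densenet121(weights=None)
    model.classifier = nn.Linear(model.classifier.in_features, 1)
    # DP-SGD is incompatible with BatchNorm; ModuleValidator swaps it for GroupNorm.
    model = ModuleValidator.fix(model)

    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    criterion = nn.BCEWithLogitsLoss()

    # Placeholder local shard; a real client would load its own chest X-rays here.
    images = torch.randn(32, 3, 224, 224)
    labels = torch.randint(0, 2, (32, 1)).float()
    train_loader = DataLoader(TensorDataset(images, labels), batch_size=8)

    # Calibrate the Gaussian noise via the RDP accountant so this client stays
    # within a budget of epsilon = 6, one of the budgets evaluated in the paper.
    privacy_engine = PrivacyEngine(accountant="rdp")
    model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
        module=model,
        optimizer=optimizer,
        data_loader=train_loader,
        epochs=1,
        target_epsilon=6.0,
        target_delta=1e-5,   # assumed delta
        max_grad_norm=1.0,   # assumed per-sample clipping norm
    )

    # One local round: per-sample gradients are clipped and Gaussian noise is
    # added before the private update that would be sent to the federated server.
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

    print(f"spent epsilon: {privacy_engine.get_epsilon(delta=1e-5):.2f}")

In a full federated run, each client would perform such a private round on its own shard and only the noised model update would leave the client, which is what limits the image reconstruction attacks discussed in the abstract.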

Download full-text files

  • pde_14.pdf (eng)
    (2024 KB)

    SHA-512:f7268e0e5f3d73b7d7e26f1a2ccd3d1a164701d83650f6a03cc73c93f257e8e89304e4827caa64803ad5d5e0f88e2cfccd6e6395bc8649050a9d0a2d90cdec23

Metadata
Author details: Joceline Ziegler, Bjarne Pfitzner, Heinrich Schulz, Axel Saalbach, Bert Arnrich
URN: urn:nbn:de:kobv:517-opus4-581322
DOI: https://doi.org/10.25932/publishup-58132
Title of the parent work (German): Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät
Series (volume number): Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät (14)
Publication type: Postprint
Language: English
Date of first publication: 24.02.2023
Year of publication: 2022
Publishing institution: Universität Potsdam
Release date: 24.02.2023
Free keyword / tag: X-ray; federated learning; privacy and security; privacy attack
Issue: 14
Number of pages: 25
Organizational units: Digital Engineering Fakultät / Hasso-Plattner-Institut für Digital Engineering GmbH
DDC classification: 6 Technology, medicine, applied sciences / 62 Engineering / 620 Engineering and allied operations
Peer review: Refereed
Publication route: Open Access / Green Open Access
License (German): CC BY – Attribution 4.0 International
External note: Bibliographic entry of the original publication/source