TY - JOUR A1 - Oster, Simon A1 - Fritsch, Tobias A1 - Ulbricht, Alexander A1 - Mohr, Gunther A1 - Bruno, Giovanni A1 - Maierhofer, Christiane A1 - Altenburg, Simon T1 - On the registration of thermographic in situ monitoring data and computed tomography reference data in the scope of defect prediction in laser powder bed fusion JF - Metals : open access journal N2 - The detection of internal irregularities is crucial for quality assessment in metal-based additive manufacturing (AM) technologies such as laser powder bed fusion (L-PBF). The utilization of in-process thermography as an in situ monitoring tool in combination with post-process X-ray micro computed tomography (XCT) as a reference technique has shown great potential for this aim. Due to the small irregularity dimensions, a precise registration of the datasets is necessary as a requirement for correlation. In this study, the registration of thermography and XCT reference datasets of a cylindrical specimen containing keyhole pores is carried out for the development of a porosity prediction model. The considered datasets show variations in shape, data type and dimensionality, especially due to shrinkage and material elevation effects present in the manufactured part. Since the resulting deformations are challenging for registration, a novel preprocessing methodology is introduced that involves an adaptive volume adjustment algorithm which is based on the porosity distribution in the specimen. Thus, the implementation of a simple three-dimensional image-to-image registration is enabled. The results demonstrate the influence of the part deformation on the resulting porosity location and the importance of registration in terms of irregularity prediction.
KW - selective laser melting (SLM) KW - laser powder bed fusion (L-PBF) KW - additive KW - manufacturing (AM) KW - process monitoring KW - infrared thermography KW - X-ray KW - micro computed tomography (XCT) KW - defect detection KW - image registration Y1 - 2022 U6 - https://doi.org/10.3390/met12060947 SN - 2075-4701 VL - 12 IS - 6 PB - MDPI CY - Basel ER - TY - GEN A1 - Ziegler, Joceline A1 - Pfitzner, Bjarne A1 - Schulz, Heinrich A1 - Saalbach, Axel A1 - Arnrich, Bert T1 - Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-ray Data T2 - Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät N2 - Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against data privacy attacks. To the best of our knowledge, we are the first to directly compare the impact of differentially private training on two different neural network architectures, DenseNet121 and ResNet50. Extending the federated learning environments previously analyzed in terms of privacy, we simulated a heterogeneous and imbalanced federated setting by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients. Both non-private baseline models achieved an area under the receiver operating characteristic curve (AUC) of 0.94 on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages.
To mitigate the risk of a privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets ε ∈ {1, 3, 6, 10}. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of 0.94 for ε = 6. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of 0.76 in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training. T3 - Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät - 14 KW - federated learning KW - privacy and security KW - privacy attack KW - X-ray Y1 - 2023 U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-581322 IS - 14 ER - TY - JOUR A1 - Ziegler, Joceline A1 - Pfitzner, Bjarne A1 - Schulz, Heinrich A1 - Saalbach, Axel A1 - Arnrich, Bert T1 - Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-ray Data JF - Sensors N2 - Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against data privacy attacks. To the best of our knowledge, we are the first to directly compare the impact of differentially private training on two different neural network architectures, DenseNet121 and ResNet50.
Extending the federated learning environments previously analyzed in terms of privacy, we simulated a heterogeneous and imbalanced federated setting by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients. Both non-private baseline models achieved an area under the receiver operating characteristic curve (AUC) of 0.94 on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages. To mitigate the risk of a privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets ε ∈ {1, 3, 6, 10}. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of 0.94 for ε = 6. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of 0.76 in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training. KW - federated learning KW - privacy and security KW - privacy attack KW - X-ray Y1 - 2022 U6 - https://doi.org/10.3390/s22145195 SN - 1424-8220 VL - 22 PB - MDPI CY - Basel, Schweiz ET - 14 ER -