
Piloting a Survey-Based Assessment of Transparency and Trustworthiness with Three Medical AI Tools

  • Artificial intelligence (AI) offers the potential to support healthcare delivery, but poorly trained or validated algorithms bear risks of harm. Ethical guidelines state transparency about model development and validation as a requirement for trustworthy AI. Abundant guidance exists on providing transparency through reporting, but poorly reported medical AI tools are common. To close this transparency gap, we developed and piloted a framework to quantify the transparency of medical AI tools with three use cases. Our framework comprises a survey to report on the intended use, training and validation data and processes, ethical considerations, and deployment recommendations. The transparency of each response was scored with 0, 0.5, or 1 to reflect whether the requested information was not, partially, or fully provided. Additionally, we assessed on an analogous three-point scale whether the provided responses fulfilled the transparency requirement for a set of trustworthiness criteria from ethical guidelines. The degree of transparency and trustworthiness was calculated on a scale from 0% to 100%. Our assessment of three medical AI use cases pinpointed reporting gaps and resulted in transparency scores of 67% for two use cases and 59% for the third. We report anecdotal evidence that business constraints and limited information from external datasets were major obstacles to providing transparency for the three use cases. The observed transparency gaps also lowered the degree of trustworthiness, indicating compliance gaps with ethical guidelines. All three pilot use cases faced challenges in providing transparency about medical AI tools, but more studies are needed to investigate those in the wider medical AI sector. Applying this framework for an external assessment of transparency may be infeasible if business constraints prevent the disclosure of information. New strategies may be necessary to enable audits of medical AI tools while preserving business secrets.
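    A minimal sketch of how such a transparency degree might be computed from the per-item scores the abstract describes. It assumes the overall degree is the unweighted mean of item scores mapped to 0%–100%; the survey item names and weighting below are hypothetical illustrations, not taken from the paper.

    ```python
    from typing import Dict

    # Allowed per-item transparency scores, per the abstract:
    # 0 = information not provided, 0.5 = partially provided, 1 = fully provided
    VALID_SCORES = {0.0, 0.5, 1.0}

    def transparency_degree(item_scores: Dict[str, float]) -> float:
        """Return the degree of transparency on a 0%-100% scale,
        assumed here to be the unweighted mean of all item scores."""
        for item, score in item_scores.items():
            if score not in VALID_SCORES:
                raise ValueError(f"Invalid score {score} for item '{item}'")
        return 100.0 * sum(item_scores.values()) / len(item_scores)

    # Hypothetical survey items; the real survey covers intended use,
    # training/validation data and processes, ethics, and deployment.
    example = {
        "intended_use": 1.0,
        "training_data": 0.5,
        "validation_process": 0.5,
        "ethical_considerations": 1.0,
        "deployment_recommendations": 0.0,
    }
    print(f"Transparency: {transparency_degree(example):.0f}%")  # -> 60%
    ```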

Download full-text files

  • pde15.pdf (eng)
    (803KB)

    SHA-512: 6c3778128185f74299de0e7b6470689774e27e7806e31b61a8210d1a13daa7878373ddd25b9e76a757dea1daacd96dca5e211ca674b9e12f540785a1d8b527e6



Metadata
Author details: Jana Fehr, Giovanna Jaramillo-Gutierrez, Luis Oala, Matthias I. Gröschel, Manuel Bierwirth, Pradeep Balachandran, Alixandro Werneck-Leite, Christoph Lippert
URN: urn:nbn:de:kobv:517-opus4-583281
DOI: https://doi.org/10.25932/publishup-58328
Title of superordinate work (German): Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät
Series (volume number): Zweitveröffentlichungen der Universität Potsdam : Reihe der Digital Engineering Fakultät (15)
Publication type: Postprint
Language: English
Date of first publication: 13.03.2023
Year of publication: 2022
Publishing institution: Universität Potsdam
Release date: 13.03.2023
Keywords / tags: artificial intelligence for health; quality assessment; transparency; trustworthiness
Issue: 15
Number of pages: 30
Organizational units: Digital Engineering Fakultät / Hasso-Plattner-Institut für Digital Engineering GmbH
DDC classification: 6 Technology, medicine, applied sciences / 61 Medicine and health / 610 Medicine and health
Peer review: Refereed
Publication route: Open Access / Green Open Access
License: CC-BY - Attribution 4.0 International
External note: Bibliography entry of the original publication/source