TY - JOUR
A1 - Fehr, Jana
A1 - Jaramillo-Gutierrez, Giovanna
A1 - Oala, Luis
A1 - Gröschel, Matthias I.
A1 - Bierwirth, Manuel
A1 - Balachandran, Pradeep
A1 - Werneck-Leite, Alixandro
A1 - Lippert, Christoph
T1 - Piloting a Survey-Based Assessment of Transparency and Trustworthiness with Three Medical AI Tools
JF - Healthcare
N2 - Artificial intelligence (AI) offers the potential to support healthcare delivery, but poorly trained or validated algorithms bear risks of harm. Ethical guidelines state transparency about model development and validation as a requirement for trustworthy AI. Abundant guidance exists to provide transparency through reporting, but poorly reported medical AI tools are common. To close this transparency gap, we developed and piloted a framework to quantify the transparency of medical AI tools with three use cases. Our framework comprises a survey to report on the intended use, training and validation data and processes, ethical considerations, and deployment recommendations. The transparency of each response was scored with either 0, 0.5, or 1 to reflect if the requested information was not, partially, or fully provided. Additionally, we assessed on an analogous three-point scale if the provided responses fulfilled the transparency requirement for a set of trustworthiness criteria from ethical guidelines. The degree of transparency and trustworthiness was calculated on a scale from 0% to 100%. Our assessment of three medical AI use cases pinpointed reporting gaps and resulted in transparency scores of 67% for two use cases and 59% for the third. We report anecdotal evidence that business constraints and limited information from external datasets were major obstacles to providing transparency for the three use cases. The observed transparency gaps also lowered the degree of trustworthiness, indicating compliance gaps with ethical guidelines. All three pilot use cases faced challenges in providing transparency about medical AI tools, but more studies are needed to investigate those in the wider medical AI sector. Applying this framework for an external assessment of transparency may be infeasible if business constraints prevent the disclosure of information. New strategies may be necessary to enable audits of medical AI tools while preserving business secrets.
KW - artificial intelligence for health
KW - quality assessment
KW - transparency
KW - trustworthiness
Y1 - 2022
U6 - https://doi.org/10.3390/healthcare10101923
SN - 2227-9032
VL - 10
IS - 10
PB - MDPI
CY - Basel, Switzerland
ER -

TY - JOUR
A1 - Ewelt-Knauer, Corinna
A1 - Schwering, Anja
A1 - Winkelmann, Sandra
T1 - Probabilistic audits and misreporting
BT - the influence of audit process design on employee behavior
JF - European accounting review
N2 - We investigate how the design of audit processes influences employees’ reporting decisions. We focus specifically on detective employee audits, for which several employees are randomly selected after a defined period to audit their ex-post behavior. We investigate two design features of the audit process, namely, employee anonymity and process transparency, and analyze their impact on misreporting. Overall, we find that both components influence the extent of individuals’ misreporting. A nonanonymous audit decreases performance misreporting more than an audit in which the employee remains anonymous. Furthermore, the high incidence of performance misreporting in the case of anonymous audits can be decreased when the process transparency is low. Thus, our study informs accountants about how the two design features of employee anonymity and transparency of the audit process can be used to constrain performance misreporting and increase the efficiency of audits.
KW - Performance misreporting
KW - Employee audits
KW - Employee anonymity
KW - Process transparency
Y1 - 2021
U6 - https://doi.org/10.1080/09638180.2021.1899014
SN - 1468-4497
SN - 0963-8180
VL - 30
IS - 5
SP - 989
EP - 1012
PB - Routledge
CY - London
ER -