
Using interpretability approaches to update "black-box" clinical prediction models

Despite advances in machine learning-based clinical prediction models, only a few such models are actually deployed in clinical contexts, due, among other reasons, to a lack of validation studies. In this paper, we present and discuss the validation results of a machine learning model for the prediction of acute kidney injury in cardiac surgery patients, initially developed on the MIMIC-III dataset, when applied to an external cohort of an American research hospital. To help account for the performance differences observed, we used interpretability methods based on feature importance, which allowed experts to scrutinize model behavior at both the global and local level, providing further insight into why the model did not behave as expected on the validation cohort. The knowledge gleaned at derivation time can help guide model updates during validation toward more generalizable and simpler models. We argue that practitioners should consider interpretability methods as a further tool to help explain performance differences and inform model updates in validation studies.
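
The global and local feature-importance analysis the abstract describes can be illustrated in a few lines. The following is a minimal sketch, assuming scikit-learn's permutation importance for the global view and the shap package for per-patient attributions; the synthetic data, model choice, and feature set are placeholders, not the paper's actual pipeline:

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the derivation cohort (e.g., MIMIC-III features).
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global view: permutation importance computed on the (external) validation
# cohort. Features whose importance ranking shifts between derivation and
# validation data can point to cohort differences behind a performance drop.
global_imp = permutation_importance(
    model, X_val, y_val, n_repeats=10, random_state=0
)
for i in np.argsort(global_imp.importances_mean)[::-1]:
    print(f"feature {i}: {global_imp.importances_mean[i]:.3f}")

# Local view: per-patient attributions via SHAP, one common choice for local
# feature importance (the paper's concrete method may differ).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_val[:1])  # attributions for one patient
print(shap_values)
```

Comparing the two views as sketched here lets experts check whether a single feature dominates globally while behaving implausibly for individual validation patients.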

Metadata
Author details:Harry Freitas da Cruz, Boris Pfahringer, Tom Martensen, Frederic Schneider, Alexander Meyer, Erwin Böttinger, Matthieu-Patrick Schapranow
DOI:https://doi.org/10.1016/j.artmed.2020.101982
ISSN:0933-3657
ISSN:1873-2860
PubMed ID:https://pubmed.ncbi.nlm.nih.gov/33461682
Title of parent work (English):Artificial Intelligence in Medicine: AIM
Subtitle (English):an external validation study in nephrology
Publisher:Elsevier
Place of publishing:Amsterdam
Publication type:Article
Language:English
Date of first publication:2021/01/01
Publication year:2021
Release date:2024/02/28
Tag:Clinical predictive modeling; Interpretability; Nephrology; Validation methods
Volume:111
Article number:101982
Number of pages:13
Funding institution:European Union's Horizon 2020 research and innovation program [780495]; Office of Research Infrastructure of the National Institutes of Health [S10OD026880]
Organizational units:Digital Engineering Fakultät / Hasso-Plattner-Institut für Digital Engineering GmbH
DDC classification:0 Computer science, information and general works / 00 Computer science, knowledge, systems / 004 Data processing; computer science
6 Technology, medicine, applied sciences / 61 Medicine and health / 610 Medicine and health
Peer review:Refereed