Toward a principled Bayesian workflow in cognitive science
Experiments in research on memory, language, and in other areas of cognitive science are increasingly being analyzed using Bayesian methods. This has been facilitated by the development of probabilistic programming languages such as Stan, and easily accessible front-end packages such as brms. The utility of Bayesian methods, however, ultimately depends on the relevance of the Bayesian model, in particular whether or not it accurately captures the structure of the data and the data analyst's domain expertise. Even with powerful software, the analyst is responsible for verifying the utility of their model. To demonstrate this point, we introduce a principled Bayesian workflow (Betancourt, 2018) to cognitive science. Using a concrete working example, we describe basic questions one should ask about the model: prior predictive checks, computational faithfulness, model sensitivity, and posterior predictive checks. The running example for demonstrating the workflow is data on reading times with a linguistic manipulation of object versus subject relative clause sentences. This principled Bayesian workflow also demonstrates how to use domain knowledge to inform prior distributions. It provides guidelines and checks for valid data analysis, avoiding overfitting complex models to noise, and capturing relevant data structure in a probabilistic model. Given the increasing use of Bayesian methods, we aim to discuss how these methods can be properly employed to obtain robust answers to scientific questions.
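The first step the abstract names, prior predictive checks, means simulating entire datasets from the priors alone, before any data are touched, to see whether the model generates a priori plausible observations. The paper works in Stan/brms; the following is only a minimal NumPy sketch of the same idea for a lognormal model of reading times, with illustrative priors that are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical priors for a lognormal model of reading times in ms
# (values chosen for illustration, not the paper's actual priors).
n_sims, n_obs = 1000, 100
mu = rng.normal(6.0, 0.6, size=n_sims)              # prior draws for the log-scale mean
sigma = np.abs(rng.normal(0.0, 0.5, size=n_sims))   # prior draws for the log-scale sd

# Prior predictive distribution: one simulated dataset per prior draw,
# generated without reference to any observed data.
sim = rng.lognormal(mean=mu[:, None], sigma=sigma[:, None], size=(n_sims, n_obs))

# Summarize each simulated dataset and inspect whether typical reading
# times fall in a plausible range (say, tens to a few thousand ms).
medians = np.median(sim, axis=1)
print(np.percentile(medians, [5, 50, 95]))
```

If the simulated medians routinely fall far outside what reading times can plausibly be, the priors should be revised before fitting; this is the sense in which the workflow uses domain expertise to inform prior distributions.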
Author details: | Daniel Schad, Michael Betancourt, Shravan Vasishth |
DOI: | https://doi.org/10.1037/met0000275 |
ISSN (print): | 1082-989X |
ISSN (online): | 1939-1463 |
Pubmed ID: | https://pubmed.ncbi.nlm.nih.gov/32551748 |
Title of parent work (English): | Psychological methods |
Publisher: | American Psychological Association |
Place of publishing: | Washington |
Publication type: | Article |
Language: | English |
Date of first publication: | 2021/02/01 |
Publication year: | 2021 |
Release date: | 2023/11/27 |
Tag: | Bayesian data analysis; model building; posterior predictive checks; prior predictive checks; workflow |
Volume: | 26 |
Issue: | 1 |
Number of pages: | 24 |
First page: | 103 |
Last Page: | 126 |
Funding institution: | Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) [317633480-SFB 1287] |
Organizational units: | Humanwissenschaftliche Fakultät / Strukturbereich Kognitionswissenschaften / Department Psychologie |
DDC classification: | 1 Philosophy and Psychology / 15 Psychology / 150 Psychology |
Peer review: | Peer-reviewed |