This first volume of the DIGAREC Series holds the proceedings of the conference “The Philosophy of Computer Games”, held at the University of Potsdam from May 8 to 10, 2008. The contributions address three fields of computer game research that are philosophically relevant and to which philosophical reflection is, in turn, crucial: ethics and politics, the action-space of games, and the magic circle. All three topics are interlinked and constitute the paradigmatic object of computer games: the first describes computer games from the outside, looking at the cultural effects of games as well as at the moral practices acted out with them; the second describes computer games from the inside, i.e. how they are constituted as a medium; and the third discusses the way in which the border between these two realms, games and non-games, persists or is already transgressed with respect to a general performativity.
This study advances our understanding of research reliability by reproducing and replicating claims from 110 papers in leading economics and political science journals. The analysis involves computational reproducibility checks and robustness assessments, and it reveals several patterns. First, we uncover a high rate of fully computationally reproducible results (over 85%). Second, excluding minor issues such as missing packages or broken file paths, we uncover coding errors in about 25% of studies, with some studies containing multiple errors. Third, we test the robustness of the results across 5,511 re-analyses and find a robustness reproducibility of about 70%. Robustness reproducibility rates are higher for re-analyses that introduce new data and lower for re-analyses that change the sample or the definition of the dependent variable. Fourth, 52% of re-analysis effect size estimates are smaller than the original published estimates, and the average statistical significance of a re-analysis is 77% of the original. Lastly, we rely on six teams of researchers working independently to answer eight additional research questions on the determinants of robustness reproducibility. Most teams find a negative relationship between replicators' experience and reproducibility, and no relationship between reproducibility and the provision of intermediate or even raw data combined with the necessary cleaning code.
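To make the abstract's headline quantities concrete, the following is a minimal sketch of how such robustness-reproducibility bookkeeping could be computed. The toy data frame, the column names, and the "same sign and p < 0.05" robustness criterion are illustrative assumptions, not the study's actual data or definitions.

```python
# A minimal sketch of robustness-reproducibility aggregation.
# The records below are invented; the robustness criterion (same sign as the
# original estimate and p < 0.05) is one possible operationalisation only.
import pandas as pd

# Hypothetical re-analysis records: one row per (paper, re-analysis) pair.
reanalyses = pd.DataFrame({
    "paper_id":       [1, 1, 2, 3, 3],
    "original_est":   [0.40, 0.40, -0.15, 0.22, 0.22],
    "reanalysis_est": [0.35, 0.10, -0.12, 0.25, 0.05],
    "reanalysis_p":   [0.01, 0.20, 0.03, 0.04, 0.30],
})

# A re-analysis counts as "robust" here if it keeps the original sign and
# remains significant at the 5% level.
same_sign = (reanalyses["reanalysis_est"] * reanalyses["original_est"]) > 0
robust = same_sign & (reanalyses["reanalysis_p"] < 0.05)
print(f"Robustness reproducibility rate: {robust.mean():.0%}")

# Share of re-analysis estimates smaller (in absolute size) than the original,
# and the average relative magnitude of the re-analysed effect.
smaller = reanalyses["reanalysis_est"].abs() < reanalyses["original_est"].abs()
relative_size = reanalyses["reanalysis_est"].abs() / reanalyses["original_est"].abs()
print(f"Share smaller than original: {smaller.mean():.0%}")
print(f"Average relative effect size: {relative_size.mean():.0%}")
```

With real re-analysis records in place of the toy rows, the same kind of aggregation would produce summary rates of the sort the abstract reports (share of robust re-analyses, share of estimates smaller than the original, average relative magnitude).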