Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams.
We consider the specific transformation of a Wiener process {X(t), t >= 0} in the presence of an absorbing barrier a that results when this process is "time-locked" with respect to its first passage time T_a through a criterion level a, and the evolution of X(t) is considered backwards (retrospectively) from T_a. Formally, we study the random variables defined by Y(t) = X(T_a - t) and derive explicit results for their density and mean, and also for their asymptotic forms. We discuss how our results can aid interpretations of time series "response-locked" to their times of crossing a criterion level.
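The retrospective process described above can be illustrated by simulation: generate discretized Wiener paths until they first reach the barrier a, then read each path backwards from its crossing time T_a and average Y(t) = X(T_a - t) across trials. This is only an illustrative sketch, not the paper's analytical derivation; the drift mu, step size dt, and trial counts below are arbitrary assumptions added so that first passage occurs in reasonable time.

```python
import numpy as np

rng = np.random.default_rng(0)

def first_passage_path(a=1.0, mu=1.0, dt=1e-3, block=4096):
    """Simulate X(t) = mu*t + W(t) in steps of dt until X first reaches
    level a; return the discretized path up to and including the crossing.
    (Positive drift mu is an added assumption so T_a is finite in mean.)"""
    x = np.array([0.0])
    while True:
        inc = mu * dt + np.sqrt(dt) * rng.standard_normal(block)
        x = np.concatenate([x, x[-1] + np.cumsum(inc)])
        hit = np.nonzero(x >= a)[0]
        if hit.size:
            return x[: hit[0] + 1]

# Time-lock each path to its crossing and average the retrospective
# process Y(t) = X(T_a - t) over trials, for t = 0, dt, 2*dt, ...
a, dt, n_back, n_trials = 1.0, 1e-3, 200, 500
y_sum = np.zeros(n_back)
counts = np.zeros(n_back)
for _ in range(n_trials):
    path = first_passage_path(a=a, dt=dt)
    k = min(n_back, len(path))        # steps available before the crossing
    y_sum[:k] += path[-1 : -k - 1 : -1]  # the path, read backwards from T_a
    counts[:k] += 1
mean_Y = y_sum / counts

# Y(0) = X(T_a) sits at (or just above) the barrier; looking further
# back in time, the mean of Y(t) falls away below a.
print(f"mean Y(0) = {mean_Y[0]:.3f}, mean Y({n_back*dt:.2f}) = {mean_Y[-1]:.3f}")
```

Averaging backwards from the crossing in this way mimics "response-locked" averaging of empirical time series at their criterion-crossing times.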
Many perceptual and cognitive tasks permit or require the integrated cooperation of specialized sensory channels, detectors, or other functionally separate units. In compound detection or discrimination tasks, 1 prominent general mechanism to model the combination of the output of different processing channels is probability summation. The classical example is the binocular summation model of Pirenne (1943), according to which a weak visual stimulus is detected if at least 1 of the 2 eyes detects this stimulus; as we review briefly, exactly the same reasoning is applied in numerous other fields. It is generally accepted that this mechanism necessarily predicts performance based on 2 (or more) channels to be superior to single channel performance, because 2 separate channels provide "2 chances" to succeed with the task. We argue that this reasoning is misleading because it neglects the increased opportunity with 2 channels not just for hits but also for false alarms and that there may well be no redundancy gain at all when performance is measured in terms of receiver operating characteristic curves. We illustrate and support these arguments with a visual detection experiment involving different spatial uncertainty conditions. Our arguments and findings have important implications for all models that, in one way or another, rest on, or incorporate, the notion of probability summation for the analysis of detection tasks, 2-alternative forced-choice tasks, and psychometric functions.
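The abstract's central point can be made numerically: under an OR-type probability-summation rule, two independent channels raise not only the hit rate but also the false-alarm rate, so the hit-rate gain alone overstates the redundancy benefit. The sketch below uses an equal-variance Gaussian signal-detection channel with illustrative sensitivity and criterion values (d' = 1, c = 0.5 are assumptions, not values from the paper).

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

d_prime, c = 1.0, 0.5  # illustrative sensitivity and criterion (assumed)

# Single channel: respond "yes" if the channel output exceeds c.
hit1 = phi(d_prime - c)   # P(N(d',1) > c)
fa1 = 1.0 - phi(c)        # P(N(0,1) > c)

# Probability summation (OR rule): "yes" if at least one of two
# independent channels exceeds c. Both hits AND false alarms grow,
# because two channels also give "2 chances" for a false alarm.
hit2 = 1.0 - (1.0 - hit1) ** 2
fa2 = 1.0 - (1.0 - fa1) ** 2

print(f"single channel: hit = {hit1:.3f}, false alarm = {fa1:.3f}")
print(f"two channels  : hit = {hit2:.3f}, false alarm = {fa2:.3f}")
```

Whether the dual-channel point (fa2, hit2) lies above the single-channel ROC curve, and hence whether any genuine redundancy gain remains, depends on the underlying detection model, which is exactly the issue the paper examines.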