Aggregate and individual replication probability within an explicit model of the research process
(2011)
We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.
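The structure of this model lends itself to a small Monte Carlo sketch. The parameter values below (mean and spread of true effect sizes, jitter, and measurement error) are illustrative assumptions, not values from the paper, and only the weaker criterion of an effect in the same direction is simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical research-context parameters (illustrative, not from the paper):
mu_delta, sd_delta = 0.3, 0.2   # distribution of true effect sizes in the context
sd_jitter = 0.1                 # replication jitter from procedural changes
sd_error = 0.15                 # statistical error of effect size measurement

n = 100_000
true_effect = rng.normal(mu_delta, sd_delta, n)

# Initial study measures the true effect with statistical error;
# the replication measures a jittered effect with fresh error.
initial = true_effect + rng.normal(0, sd_error, n)
replication = true_effect + rng.normal(0, sd_jitter, n) + rng.normal(0, sd_error, n)

# Aggregate probability of a replication effect in the same direction:
same_direction = np.mean(np.sign(initial) == np.sign(replication))
print(f"aggregate same-direction replication probability ~ {same_direction:.3f}")
```

Conditioning on a significant initial result, or varying the context parameters, shows how strongly the aggregate probability depends on quantities that would rarely be known in practice.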
Whereas many cognitive tasks show pronounced aging effects, even in healthy older adults, other tasks seem more resilient to aging. A small number of recent studies suggest that number comparison is possibly one of the abilities that remain unaltered across the life span. We investigated the ability to compare single-digit numbers in young (19-39 years; n = 39) and healthy older (65-79 years; n = 39) adults in considerable detail, analyzing accuracy as well as mean and variance of their response time, together with several other well-established hallmarks of numerical comparison. Using a recent comprehensive process model that parsimoniously accounts quantitatively for many aspects of number comparison (Reike & Schwarz, 2016), we address two fundamental problems in the comparison of older to young adults in numerical comparison tasks: (a) to adequately correct speed measures for different levels of accuracy (older participants were significantly more accurate than young participants), and (b) to distinguish general sensory and motor slowing, on the one hand, from a specific age-related decline in the efficiency of retrieving and comparing numerical magnitude representations, on the other. Our results represent strong evidence that healthy older adults compare magnitudes as efficiently as young adults when the measure of efficiency is uncontaminated by strategic speed-accuracy trade-offs and by sensory and motor stages that are not related to numerical comparison per se. At the same time, older adults aim at a significantly higher accuracy level (risk aversion), which necessarily prolongs processing time, and they also show the well-documented general decline in sensory and/or motor functions.
Following the classical work of Moyer and Landauer (1967), experimental studies investigating the way in which humans process and compare symbolic numerical information regularly used one of two experimental designs. In selection tasks, two numbers are presented, and the task of the participant is to select (for example) the larger one. In classification tasks, a single number is presented, and the participant decides whether it is smaller or larger than a predefined standard. Many findings obtained with these paradigms fit in well with the notion of a mental analog representation, or an Approximate Number System (ANS; e.g., Piazza 2010). The ANS is often conceptualized metaphorically as a mental number line, and data from both paradigms are well accounted for by diffusion models based on the stochastic accumulation of noisy partial numerical information over time. The present study investigated a categorization paradigm in which participants decided whether a presented number falls into a numerically defined central category. We show that number categorization yields a highly regular, yet considerably more complex pattern of decision times and error rates as compared to the simple monotone relations obtained in traditional selection and classification tasks. We also show that (and how) standard diffusion models of number comparison can be adapted so as to account for mean and standard deviations of all RTs and for error rates in considerable quantitative detail. We conclude that, just as traditional number comparison, the more complex process of categorizing numbers conforms well with basic notions of the ANS.
Comparing continuous and discrete birthday coincidences: "Same-Day" versus "Within 24 Hours"
(2010)
In its classical form the famous birthday problem (Feller 1968; Mosteller 1987) addresses coincidences within a discrete sample space, looking at births that fall on the same calendar day. However, coincidence phenomena often arise in situations in which it is more natural to consider a continuous-time parameter. We first describe an elementary variant of the classical problem in continuous time, and then derive and illustrate close approximate relations that exist between the discrete and the continuous formulations.
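The discrete/continuous contrast can be illustrated numerically. The sketch below computes the textbook same-calendar-day probability exactly and estimates the within-24-hours probability by Monte Carlo on a circular year; it is an illustration of the two formulations, not the paper's derivation:

```python
import numpy as np

rng = np.random.default_rng(1)
k, year = 23, 365.0

# Discrete: probability that at least two of k people share a calendar day.
p_same_day = 1.0 - np.prod((year - np.arange(k)) / year)

# Continuous: birth times uniform on a circular year; a coincidence occurs
# if some pair of births falls within 24 hours (1 day) of each other.
trials = 20_000
t = np.sort(rng.uniform(0, year, (trials, k)), axis=1)
gaps = np.diff(t, axis=1, append=t[:, :1] + year)  # k gaps, incl. wraparound
p_within_24h = (gaps.min(axis=1) < 1.0).mean()

print(f"same calendar day (k=23): {p_same_day:.3f}")
print(f"within 24 hours  (k=23): {p_within_24h:.3f}")
```

The within-24-hours probability is markedly higher, because a 24-hour window can straddle a calendar-day boundary, roughly doubling the per-pair coincidence chance.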
Using a large representative database (12,902 matches from the top professional football league in Germany), I show that the number (441) of two-penalty matches is larger than expected by chance, and that among these 441 matches there are considerably more matches in which each team is awarded one penalty than would be expected on the basis of independent penalty kick decisions (odds ratio = 11.2, relative risk = 6.34). Additional analyses, based on the score in the match before a penalty is awarded and on the timing of penalties, suggest that awarding a first penalty to one team raises the referee's penalty evidence criterion for the same team and lowers the corresponding criterion for the other team.
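Effect-size measures of the kind quoted above are computed from a 2x2 contingency table. The helpers below use hypothetical counts purely for illustration; the study's actual cell counts are not given in the abstract:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for the 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

def relative_risk(a, b, c, d):
    """Risk of the outcome in row 1 relative to row 2."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts (illustrative only, not the study's data):
a, b, c, d = 30, 10, 15, 45
print(odds_ratio(a, b, c, d))      # (30*45)/(10*15) = 9.0
print(relative_risk(a, b, c, d))   # (30/40)/(15/60) = 3.0
```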
We describe a mathematically simple yet precise model of activation suppression that can explain the negative-going delta plots often observed in standard Simon tasks. The model postulates a race between the identification of the relevant stimulus attribute and the suppression of irrelevant location-based activation; the irrelevant activation has an effect only if it is still present at the moment when central processing of the relevant attribute starts. The model can be fitted by maximum likelihood to observed distributions of RTs in congruent and incongruent trials, and it provides good fits to two previously reported data sets with plausible parameter values. R and MATLAB software for use with the model is provided.
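Delta plots of the kind the model addresses are obtained by plotting the congruency effect at matched RT quantiles against overall RT. The sketch below illustrates only the computation, on arbitrary simulated distributions that do not come from the model or the reported data sets:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated RTs in ms (arbitrary distributions, for illustration only):
congruent = rng.gamma(shape=9, scale=45, size=5000) + 150
incongruent = congruent + rng.normal(30, 20, size=5000)  # crude congruency effect

qs = np.linspace(0.1, 0.9, 5)
qc = np.quantile(congruent, qs)
qi = np.quantile(incongruent, qs)

# Delta plot: effect size (qi - qc) as a function of mean quantile RT.
for x, d in zip((qi + qc) / 2, qi - qc):
    print(f"mean RT {x:6.1f} ms  ->  congruency effect {d:5.1f} ms")
```

A negative-going delta plot corresponds to the effect shrinking (or reversing) at the slower quantiles, which the race account above attributes to irrelevant activation having been suppressed by the time slow central processing starts.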
Dissociations between reaction times and temporal order judgments: a diffusion model approach
(2006)
A diffusion model for simple reaction time (RT) and temporal order judgment (TOJ) tasks was developed to account for a commonly observed dissociation between these 2 tasks: Most stimulus manipulations (e.g., intensity) have larger effects in RT tasks than in TOJ tasks. The model assumes that a detection criterion determines the level of sensory evidence needed to conclude that a stimulus has been presented. Analysis of the performance that would be achieved with different possible criterion settings revealed that performance was optimal with a lower criterion setting for the TOJ task than for the RT task. In addition, the model predicts that effects of stimulus manipulations should increase with the size of the detection criterion. Thus, the model suggests that commonly observed dissociations between RT and TOJ tasks may simply be due to performance optimization in the face of conflicting task demands.
Physical size modulates the efficiency of digit comparison, depending on whether the relation of numerical magnitude and physical size is congruent or incongruent (Besner & Coltheart, Neuropsychologia, 17, 467–472, 1979), the number-size congruency effect (NSCE). In addition, Henik and Tzelgov (Memory & Cognition, 10, 389–395, 1982) first reported an NSCE for the reverse task of comparing the physical size of digits, such that the numerical magnitude of digits modulated the time required to compare their physical sizes. Does the NSCE in physical comparisons simply reflect a number-mediated bias mechanism related to making decisions and selecting responses about the digits’ sizes? Alternatively, or in addition, the NSCE might indicate a true increase in the ability to discriminate small and large font sizes when these sizes are congruent with the digit’s symbolic numerical meaning, over and above response bias effects. We present a new research design that permits us to apply signal detection theory to a task that required observers to judge the physical size of digits. Our results clearly demonstrate that the NSCE cannot be reduced to mere response bias effects, and that genuine sensitivity gains for congruent number-size pairings contribute to the NSCE.
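The sensitivity/bias decomposition that motivates this design can be sketched with the standard equal-variance signal detection formulas. The hit and false-alarm rates below are hypothetical, chosen only to show how sensitivity (d') and response bias (c) are separated:

```python
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse standard normal CDF

def dprime(hit_rate, fa_rate):
    """Sensitivity d' = z(H) - z(FA) under the equal-variance Gaussian model."""
    return _z(hit_rate) - _z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response bias c = -(z(H) + z(FA)) / 2."""
    return -(_z(hit_rate) + _z(fa_rate)) / 2

# Hypothetical rates for congruent vs incongruent number-size pairings:
print(f"congruent:   d' = {dprime(0.85, 0.20):.2f}, c = {criterion(0.85, 0.20):.2f}")
print(f"incongruent: d' = {dprime(0.70, 0.30):.2f}, c = {criterion(0.70, 0.30):.2f}")
```

A pure response-bias account would predict equal d' across conditions with only c shifting; a genuine sensitivity gain for congruent pairings shows up as a larger d'.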
We present a new quantitative process model (GSDT) of visual search that seeks to integrate various processing mechanisms suggested by previous studies within a single, coherent conceptual frame. It incorporates and combines 4 distinct model components: guidance (G), a serial (S) item inspection process, diffusion (D) modeling of individual item inspections, and a strategic termination (T) rule. For this model, we derive explicit closed-form results for response probability and mean search time (reaction time [RT]) as a function of display size and target presence/absence. The fit of the model is compared in detail to data from 4 visual search experiments in which the effects of target/distractor discriminability and of target prevalence on performance (present/absent display size functions for mean RT and error rate) are studied. We describe how GSDT accounts for various detailed features of our results such as the probabilities of hits and correct rejections and their mean RTs; we also apply the model to explain further aspects of the data, such as RT variance and mean miss RT.
Most psychological models are intended to describe processes that operate within each individual. In many research areas, however, models are tested by looking at results averaged across many individuals, despite the fact that such averaged results may give a misleading picture of what is true for each one. We consider this conundrum with respect to the interpretation of on-average null effects. Specifically, even though an experimental manipulation might have no effect on average across individuals, it might still have demonstrable effects, albeit in opposite directions, for many or all of the individuals tested. We discuss several examples of research questions for which it would be theoretically crucial to determine whether manipulations really have no effect at the individual level, and we present a method of testing for individual-level effects.
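The core point, that a zero average effect can coexist with sizable individual-level effects, is easy to demonstrate by simulation. All numbers below are illustrative assumptions, not the paper's method or data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Half the individuals show a +20 ms effect, half a -20 ms effect,
# so the group-average effect is near zero by construction.
n_subj, n_trials = 40, 200
true_effects = np.where(np.arange(n_subj) % 2 == 0, 20.0, -20.0)

# Each subject's observed effect = true effect + trial-averaged noise
# (trial-level SD 60 ms, averaged over n_trials trials).
observed = true_effects + rng.normal(0, 60 / np.sqrt(n_trials), n_subj)

print(f"group mean effect: {observed.mean():.2f} ms")
print(f"subjects with |effect| > 10 ms: {(np.abs(observed) > 10).sum()} / {n_subj}")
```

A conventional group-level t test on these data would retain the null, even though nearly every individual shows a reliable effect, which is exactly the interpretive trap the abstract describes.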