When researchers carry out a null hypothesis significance test, it is tempting to assume that a statistically significant result lowers Prob(H0), the probability of the null hypothesis being true. Technically, such a statement is meaningless for various reasons: e.g., the null hypothesis does not have a probability associated with it. However, it is possible to relax certain assumptions to compute the posterior probability Prob(H0) under repeated sampling. We show in a step-by-step guide that the intuitively appealing belief, that Prob(H0) is low when significant results have been obtained under repeated sampling, is in general incorrect and depends greatly on: (a) the prior probability of the null being true; (b) the type-I error rate; (c) the type-II error rate; and (d) replication of a result. Through step-by-step simulations using open-source code in the R System for Statistical Computing, we show that uncertainty about the null hypothesis being true often remains high despite a significant result. To help the reader develop intuitions about this common misconception, we provide a Shiny app (https://danielschad.shinyapps.io/probnull/). We expect that this tutorial will help researchers better understand and judge results from null hypothesis significance tests.
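The posterior probability described above follows directly from Bayes' rule under the relaxed assumptions. A minimal sketch (the tutorial's own code is in R; the prior, alpha, and power values here are illustrative, not taken from the paper):

```python
def prob_null_given_sig(prior_h0, alpha, power):
    """Posterior P(H0 | one significant result) by Bayes' rule.

    prior_h0: prior probability that H0 is true
    alpha:    type-I error rate, P(significant | H0)
    power:    1 - type-II error rate, P(significant | H1)
    """
    p_sig = alpha * prior_h0 + power * (1 - prior_h0)
    return alpha * prior_h0 / p_sig

# Illustrative values: an even-odds prior and a moderately powered study.
post_one = prob_null_given_sig(prior_h0=0.5, alpha=0.05, power=0.4)  # ~0.11
# Feeding the posterior back in as the prior models an independent replication:
post_two = prob_null_given_sig(prior_h0=post_one, alpha=0.05, power=0.4)
```

Under these settings a single significant result still leaves roughly an 11% chance that the null is true; a successful replication lowers it further, which is the point the tutorial makes about replication.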
Inferences about hypotheses are ubiquitous in the cognitive sciences. Bayes factors provide one general way to compare different hypotheses by their compatibility with the observed data. Those quantifications can then also be used to choose between hypotheses. While Bayes factors provide an immediate approach to hypothesis testing, they are highly sensitive to details of the data/model assumptions, and it is unclear whether the details of the computational implementation (such as bridge sampling) are unbiased for complex analyses. Here, we study how Bayes factors misbehave under different conditions. This includes a study of errors in the estimation of Bayes factors; the first-ever use of simulation-based calibration to test the accuracy and bias of Bayes factor estimates using bridge sampling; a study of the stability of Bayes factors against different MCMC draws and sampling variation in the data; and a look at the variability of decisions based on Bayes factors using a utility function. We outline a Bayes factor workflow that researchers can use to study whether Bayes factors are robust for their individual analysis. Reproducible code is available from https://osf.io/y354c/.

Translational Abstract

In psychology and related areas, scientific hypotheses are commonly tested by asking questions like "is [some] effect present or absent." Such hypothesis testing is most often carried out using frequentist null hypothesis significance testing (NHST). The NHST procedure is very simple: It usually returns a p-value, which is then used to make binary decisions like "the effect is present/absent." For example, it is common to see studies in the media that draw simplistic conclusions like "coffee causes cancer," or "coffee reduces the chances of getting cancer." However, a powerful and more nuanced alternative approach exists: Bayes factors. Bayes factors have many advantages over NHST.
However, for the complex statistical models that are commonly used for data analysis today, computing Bayes factors is not at all a simple matter. In this article, we discuss the main complexities associated with computing Bayes factors. This is the first article to provide a detailed workflow for understanding and computing Bayes factors in complex statistical models. The article provides a statistically more nuanced way to think about hypothesis testing than the overly simplistic tendency to declare effects as being "present" or "absent".
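The prior sensitivity mentioned above can be illustrated without bridge sampling in a toy case where the Bayes factor is analytic: a point null H0: mu = 0 against H1: mu ~ Normal(0, tau^2) with known residual sd. All numbers are illustrative and do not come from the article:

```python
import math

def bf01_point_null(ybar, sigma, n, tau):
    """Analytic Bayes factor BF01 (evidence for H0 over H1) given a sample
    mean ybar of n observations with known sd sigma, where H1 places a
    Normal(0, tau^2) prior on the effect."""
    se2 = sigma ** 2 / n  # sampling variance of the mean
    def normpdf(x, var):
        return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)
    # Marginal likelihood of ybar under H0 vs. under H1 (prior integrated out).
    return normpdf(ybar, se2) / normpdf(ybar, se2 + tau ** 2)

# Identical data, three prior scales tau: the Bayes factor changes several-fold.
bfs = {tau: bf01_point_null(ybar=0.2, sigma=1.0, n=50, tau=tau)
       for tau in (0.1, 0.5, 2.0)}
```

Widening the prior under H1 increasingly favors the null for the very same data (the Jeffreys–Lindley effect), one concrete reason why checking the sensitivity of Bayes factors to model assumptions matters.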
Language production ultimately aims to convey meaning. Yet words differ widely in the richness and density of their semantic representations, and these differences impact conceptual and lexical processes during speech planning. Here, we replicated the recent finding that semantic richness, measured as the number of associated semantic features according to semantic feature production norms, facilitates object naming. In contrast, intercorrelational semantic feature density, measured as the degree of intercorrelation of a concept's features, presumably resulting in the coactivation of closely related concepts, has an inhibitory influence. We replicated the behavioral effects and investigated their relative time course and electrophysiological correlates. Both the facilitatory effect of high semantic richness and the inhibitory influence of high feature density were reflected in an increased posterior positivity starting at about 250 ms, in line with previous reports of posterior positivities in paradigms employing contextual manipulations to induce semantic interference during language production. Furthermore, amplitudes at the same posterior electrode sites were positively correlated with object naming times between about 230 and 380 ms. The observed effects follow naturally from the assumption of conceptual facilitation and simultaneous lexical competition and are difficult to explain by language production theories dismissing lexical competition.
Much work has shown that differences in the timecourse of language processing are central to comparing native (L1) and non-native (L2) speakers. However, estimating the onset of experimental effects in timecourse data presents several statistical problems including multiple comparisons and autocorrelation. We compare several approaches to tackling these problems and illustrate them using an L1-L2 visual world eye-tracking dataset. We then present a bootstrapping procedure that allows not only estimation of an effect onset, but also of a temporal confidence interval around this divergence point. We describe how divergence points can be used to demonstrate timecourse differences between speaker groups or between experimental manipulations, two important issues in evaluating L2 processing accounts. We discuss possible extensions of the bootstrapping procedure, including determining divergence points for individual speakers and correlating them with individual factors like L2 exposure and proficiency. Data and an analysis tutorial are available at https://osf.io/exbmk/.
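The divergence-point idea can be sketched on simulated data (Python rather than the authors' R tutorial; the toy data, the consecutive-bins criterion, and all settings are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fixation data: trials x time bins for two conditions.
# A true difference begins at bin 30 (illustrative, not the OSF data).
n_trials, n_bins = 40, 100
a = rng.normal(0.0, 1.0, (n_trials, n_bins))
b = rng.normal(0.0, 1.0, (n_trials, n_bins))
b[:, 30:] += 1.5

def divergence_point(x, y, n_consecutive=10, crit=2.0):
    """First bin opening a run of n_consecutive bins with |t| > crit."""
    t = (x.mean(0) - y.mean(0)) / np.sqrt(
        x.var(0, ddof=1) / len(x) + y.var(0, ddof=1) / len(y))
    sig = np.abs(t) > crit
    for i in range(len(sig) - n_consecutive + 1):
        if sig[i:i + n_consecutive].all():
            return i
    return None

onset = divergence_point(a, b)

# Bootstrap over trials to get a temporal CI around the divergence point.
boots = []
for _ in range(200):
    ia = rng.integers(0, n_trials, n_trials)
    ib = rng.integers(0, n_trials, n_trials)
    dp = divergence_point(a[ia], b[ib])
    if dp is not None:
        boots.append(dp)
ci = (np.percentile(boots, 2.5), np.percentile(boots, 97.5))
```

Resampling trials within each condition and recomputing the onset each time yields a distribution of divergence points, whose percentiles give the confidence interval the abstract describes.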
Experiments in research on memory, language, and in other areas of cognitive science are increasingly being analyzed using Bayesian methods. This has been facilitated by the development of probabilistic programming languages such as Stan, and easily accessible front-end packages such as brms. The utility of Bayesian methods, however, ultimately depends on the relevance of the Bayesian model, in particular whether or not it accurately captures the structure of the data and the data analyst's domain expertise. Even with powerful software, the analyst is responsible for verifying the utility of their model. To demonstrate this point, we introduce a principled Bayesian workflow (Betancourt, 2018) to cognitive science. Using a concrete working example, we describe basic questions one should ask about the model: prior predictive checks, computational faithfulness, model sensitivity, and posterior predictive checks. The running example for demonstrating the workflow is data on reading times with a linguistic manipulation of object versus subject relative clause sentences. This principled Bayesian workflow also demonstrates how to use domain knowledge to inform prior distributions. It provides guidelines and checks for valid data analysis, avoiding overfitting complex models to noise, and capturing relevant data structure in a probabilistic model. Given the increasing use of Bayesian methods, we aim to discuss how these methods can be properly employed to obtain robust answers to scientific questions.
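One step of such a workflow, the prior predictive check, can be sketched in a few lines (Python here, whereas the tutorial works with Stan/brms; the lognormal likelihood matches the reading-time setting, but the priors and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate reading times from the priors alone, before seeing any data.
n_sims, n_obs = 1000, 100
intercept = rng.normal(6.0, 0.5, n_sims)        # log-ms scale (assumed prior)
effect = rng.normal(0.0, 0.1, n_sims)           # relative-clause effect (assumed)
sigma = np.abs(rng.normal(0.0, 0.5, n_sims))    # residual sd (assumed)
cond = np.tile([-0.5, 0.5], n_obs // 2)         # sum-coded RC type

mu = intercept[:, None] + effect[:, None] * cond
sim_rt = np.exp(rng.normal(mu, sigma[:, None]))  # lognormal reading times (ms)

# Inspect whether the simulated RTs fall in a plausible range for reading
# (e.g., tens of ms to a few seconds); implausible draws signal bad priors.
```

If the prior predictive distribution produces mostly impossible reading times, the priors do not encode the analyst's domain knowledge and should be revised before fitting.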
Pavlovian-to-instrumental transfer (PIT) tasks examine the influence of Pavlovian stimuli on ongoing instrumental behaviour. Previous studies reported associations between a strong PIT effect, high-risk drinking and alcohol use disorder. This study investigated whether susceptibility to interference between Pavlovian and instrumental control is linked to risky alcohol use in a community sample of 18-year-old male adults. Participants (N = 191) were instructed to 'collect good shells' and 'leave bad shells' during the presentation of appetitive (monetary reward), aversive (monetary loss) or neutral Pavlovian stimuli. We compared instrumental error rates (ER) and functional magnetic resonance imaging (fMRI) brain responses between the congruent and incongruent conditions, as well as among high-risk and low-risk drinking groups. On average, individuals showed a substantial PIT effect, that is, increased ER when Pavlovian cues and instrumental stimuli were in conflict compared with congruent trials. Neural PIT correlates were found in the ventral striatum and the dorsomedial and lateral prefrontal cortices (lPFC). Importantly, high-risk drinking was associated with a stronger behavioural PIT effect, a decreased lPFC response and an increased neural response in the ventral striatum on the trend level. Moreover, high-risk drinkers showed weaker connectivity from the ventral striatum to the lPFC during incongruent trials. Our study links interference during PIT to drinking behaviour in healthy, young adults. High-risk drinkers showed higher susceptibility to Pavlovian cues, especially when they conflicted with instrumental behaviour, indicating lower interference control abilities. Increased activity in the ventral striatum (bottom-up), decreased lPFC response (top-down), and their altered interplay may contribute to poor interference control in the high-risk drinkers.
Factorial experiments in research on memory, language, and in other areas are often analyzed using analysis of variance (ANOVA). However, for effects with more than one numerator degree of freedom, e.g., for experimental factors with more than two levels, the ANOVA omnibus F-test is not informative about the source of a main effect or interaction. Because researchers typically have specific hypotheses about which condition means differ from each other, a priori contrasts (i.e., comparisons planned before the sample means are known) between specific conditions or combinations of conditions are the appropriate way to represent such hypotheses in the statistical model. Many researchers have pointed out that contrasts should be "tested instead of, rather than as a supplement to, the ordinary 'omnibus' F test" (Hays, 1973, p. 601). In this tutorial, we explain the mathematics underlying different kinds of contrasts (i.e., treatment, sum, repeated, polynomial, custom, nested, interaction contrasts), discuss their properties, and demonstrate how they are applied in the R System for Statistical Computing (R Core Team, 2018). In this context, we explain the generalized inverse which is needed to compute the coefficients for contrasts that test hypotheses that are not covered by the default set of contrasts. A detailed understanding of contrast coding is crucial for successful and correct specification in linear models (including linear mixed models). Contrasts defined a priori yield far more useful confirmatory tests of experimental hypotheses than standard omnibus F-tests. Reproducible code is available from https://osf.io/7ukf6/.
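The generalized-inverse construction can be sketched numerically (Python/NumPy rather than the tutorial's R; the three-level factor, the repeated contrasts, and the condition means are made up for illustration):

```python
import numpy as np

# Hypothesis matrix for a three-level factor: each row states one planned
# comparison between condition means (weights sum to zero).
hypotheses = np.array([
    [-1.0, 1.0, 0.0],   # mu2 - mu1
    [0.0, -1.0, 1.0],   # mu3 - mu2  (together: repeated contrasts)
])

# Prepend the intercept row (grand mean) and take the generalized inverse
# to obtain the contrast matrix used in the linear model.
Xh = np.vstack([np.ones(3) / 3, hypotheses])
contrasts = np.linalg.pinv(Xh)  # columns: intercept, contrast 1, contrast 2

# Sanity check with hypothetical condition means: the model coefficients
# are exactly the hypothesized comparisons.
means = np.array([10.0, 12.0, 15.0])
coefs = Xh @ means  # [grand mean, mu2 - mu1, mu3 - mu2]
```

Writing the hypotheses first and inverting, rather than guessing contrast weights directly, guarantees that each regression coefficient estimates exactly the comparison one intended.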
Background Aversive stimuli in the environment influence human actions. This includes valence-dependent influences on action selection, e.g., increased avoidance but decreased approach behavior. However, it is yet unclear how aversive stimuli interact with complex learning and decision-making in the reward and avoidance domain. Moreover, the underlying computational mechanisms of these decision-making biases are unknown. Methods To elucidate these mechanisms, 54 healthy young male subjects performed a two-step sequential decision-making task, which allows computational modeling of different aspects of learning, e.g., model-free (habitual) and model-based (goal-directed) learning. We used a within-subject design, crossing task valence (reward vs. punishment learning) with emotional context (aversive vs. neutral background stimuli). We analyzed choice data, applied a computational model, and performed simulations. Results Whereas model-based learning was not affected, aversive stimuli interacted with model-free learning in a way that depended on task valence: aversive stimuli increased model-free avoidance learning but decreased model-free reward learning. The computational model confirmed this effect: the parameter lambda, which indicates the influence of reward prediction errors on decision values, was increased in the punishment condition but decreased in the reward condition when aversive stimuli were present. Further, simulating choice data from the inferred computational parameters captured our effects. Exploratory analyses revealed that the observed biases were associated with subclinical depressive symptoms. Conclusion Our data show that aversive environmental stimuli affect complex learning and decision-making in a task-valence-dependent way. Further, we provide a model of the underlying computations of this affective modulation.
Finally, our finding of increased decision-making biases in subjects reporting subclinical depressive symptoms matches recent reports of amplified Pavlovian influences on action selection in depression and suggests a potential vulnerability factor for mood disorders. We discuss our findings in the light of the involvement of the neuromodulators serotonin and dopamine.
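The role of the lambda parameter can be illustrated with the standard model-free value update used for two-step tasks (a generic TD(lambda)-style sketch, not the paper's fitted model; all values are illustrative):

```python
def model_free_update(q1, q2, reward, alpha, lam):
    """One trial of model-free learning in a two-step task.

    q1, q2: first- and second-stage action values
    alpha:  learning rate; lam: eligibility-trace parameter lambda
    """
    delta1 = q2 - q1       # first-stage prediction error
    delta2 = reward - q2   # outcome (reward) prediction error
    # lambda controls how strongly the outcome prediction error is passed
    # back to the first-stage value -- the influence the abstract describes.
    q1_new = q1 + alpha * (delta1 + lam * delta2)
    q2_new = q2 + alpha * delta2
    return q1_new, q2_new
```

With lam = 1 the reward prediction error fully updates the first-stage value; with lam = 0 it does not reach it at all, so a condition-dependent shift in lambda changes how directly outcomes drive model-free choice.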
In animals and humans, behavior can be influenced by irrelevant stimuli, a phenomenon called Pavlovian-to-instrumental transfer (PIT). In subjects with substance use disorder, PIT is even enhanced, with functional activation in the nucleus accumbens (NAcc) and amygdala. Having previously observed enhanced behavioral and neural PIT effects in alcohol-dependent subjects, we here aimed to determine whether behavioral PIT is enhanced in young men with high-risk compared to low-risk drinking, and subsequently related it to functional activation in an a-priori region of interest encompassing the NAcc and amygdala, and to polygenic risk for alcohol consumption. A representative sample of 18-year-old men (n = 1937) was contacted: 445 were screened and 209 assessed, resulting in 191 valid behavioral, 139 imaging and 157 genetic datasets. None of the subjects fulfilled criteria for alcohol dependence according to the Diagnostic and Statistical Manual of Mental Disorders-IV-Text Revision (DSM-IV-TR). We measured how instrumental responding for rewards was influenced by background Pavlovian conditioned stimuli predicting action-independent rewards and losses. Behavioral PIT was enhanced in high- compared to low-risk drinkers (b = 0.09, SE = 0.03, z = 2.7, p < 0.009). Across all subjects, we observed PIT-related neural blood oxygen level-dependent (BOLD) signal in the right amygdala (t = 3.25, p(SVC) = 0.04, x = 26, y = -6, z = -12), but not in the NAcc. The strength of the behavioral PIT effect was positively correlated with polygenic risk for alcohol consumption (r(s) = 0.17, p = 0.032). We conclude that behavioral PIT and polygenic risk for alcohol consumption might be biomarkers for a subclinical phenotype of risky alcohol consumption, even if no drug-related stimulus is present. The association between behavioral PIT effects and the amygdala might point to habitual processes related to our PIT task.
In non-dependent young social drinkers, the amygdala rather than the NAcc is activated during PIT; possible different involvement in association with disease trajectory should be investigated in future studies.
Drunk decisions (2018)
Background: Studies in humans and animals suggest a shift from goal-directed to habitual decision-making in addiction. We therefore tested whether acute alcohol administration reduces goal-directed and promotes habitual decision-making, and whether these effects are moderated by self-reported drinking problems. Methods: Fifty-three socially drinking males completed the two-step task in a randomised crossover design while receiving an intravenous infusion of ethanol (blood alcohol level = 80 mg%) or placebo. To minimise potential bias by long-standing heavy drinking and subsequent neuropsychological impairment, we tested 18- to 19-year-old adolescents. Results: Alcohol administration consistently reduced habitual, model-free decisions, while its effects on goal-directed, model-based behaviour varied as a function of drinking problems measured with the Alcohol Use Disorders Identification Test. While adolescents with low risk for drinking problems (scoring <8) exhibited an alcohol-induced numerical reduction in goal-directed choices, intermediate-risk drinkers showed a shift away from habitual towards goal-directed decision-making, such that alcohol possibly even improved their performance. Conclusions: We assume that alcohol disrupted basic cognitive functions underlying habitual and goal-directed decisions in low-risk drinkers, thereby enhancing hasty choices. Further, we speculate that intermediate-risk drinkers benefited from alcohol as a negative reinforcer that reduced unpleasant emotional states, possibly displaying a novel risk factor for drinking in adolescence.