Inferences about hypotheses are ubiquitous in the cognitive sciences. Bayes factors provide one general way to compare different hypotheses by their compatibility with the observed data. Those quantifications can then also be used to choose between hypotheses. While Bayes factors provide an immediate approach to hypothesis testing, they are highly sensitive to details of the data/model assumptions, and it is unclear whether the details of the computational implementation (such as bridge sampling) are unbiased for complex analyses. Here, we study how Bayes factors misbehave under different conditions. This includes a study of errors in the estimation of Bayes factors; the first-ever use of simulation-based calibration to test the accuracy and bias of Bayes factor estimates using bridge sampling; a study of the stability of Bayes factors against different MCMC draws and sampling variation in the data; and a look at the variability of decisions based on Bayes factors using a utility function. We outline a Bayes factor workflow that researchers can use to study whether Bayes factors are robust for their individual analysis. Reproducible code is available from https://osf.io/y354c/.

Translational Abstract

In psychology and related areas, scientific hypotheses are commonly tested by asking questions like "is [some] effect present or absent." Such hypothesis testing is most often carried out using frequentist null hypothesis significance testing (NHST). The NHST procedure is very simple: It usually returns a p-value, which is then used to make binary decisions like "the effect is present/absent." For example, it is common to see studies in the media that draw simplistic conclusions like "coffee causes cancer," or "coffee reduces the chances of getting cancer." However, a powerful and more nuanced alternative approach exists: Bayes factors. Bayes factors have many advantages over NHST.
However, for the complex statistical models that are commonly used for data analysis today, computing Bayes factors is not at all a simple matter. In this article, we discuss the main complexities associated with computing Bayes factors. This is the first article to provide a detailed workflow for understanding and computing Bayes factors in complex statistical models. The article provides a statistically more nuanced way to think about hypothesis testing than the overly simplistic tendency to declare effects as being "present" or "absent".
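As a concrete illustration of what a Bayes factor quantifies, the sketch below works through a textbook case with a closed-form answer: a point null against a uniform prior on a binomial success rate. This is a minimal illustration for intuition only (the function name and example data are ours), not the bridge-sampling estimation for complex models that the article's workflow addresses.

```python
from math import comb

def bayes_factor_01(k, n):
    """BF01 = P(data | H0) / P(data | H1) for k successes in n trials.

    H0: theta = 0.5 (point null); H1: theta ~ Uniform(0, 1).
    Under the uniform prior, the marginal likelihood integrates to
    1 / (n + 1) regardless of k (a beta-binomial identity)."""
    m0 = comb(n, k) * 0.5 ** n  # marginal likelihood under the point null
    m1 = 1.0 / (n + 1)          # marginal likelihood under the uniform prior
    return m0 / m1

# 50 heads in 100 flips favours the null (BF01 ~ 8), whereas
# 90 heads in 100 flips is overwhelming evidence against it (BF01 << 1).
print(bayes_factor_01(50, 100))
print(bayes_factor_01(90, 100))
```

For models without such closed-form marginal likelihoods, the integral must be approximated numerically (e.g., by bridge sampling), which is exactly where the estimation error and stability issues discussed above arise.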
In the picture-word interference paradigm, participants name pictures while ignoring a written or spoken distractor word. Naming times to the pictures are slowed down by the presence of the distractor word. The present study investigates in detail the impact of distractor and target word properties on picture naming times, building on the seminal study by Miozzo and Caramazza. We report the results of several Bayesian meta-analyses based on 26 datasets. These analyses provide estimates of effect sizes and their precision for several variables and their interactions. They show the reliability of the distractor frequency effect on picture naming latencies (latencies decrease as the frequency of the distractor increases) and demonstrate for the first time the impact of distractor length, with longer naming latencies for trials with longer distractors. Moreover, distractor frequency interacts with target word frequency to predict picture naming latencies. The methodological and theoretical implications of these findings are discussed.
The Final-over-Final Condition has emerged as a robust and explanatory generalization for a wide range of phenomena (Biberauer, Holmberg, and Roberts 2014, Sheehan et al. 2017). In this article, we argue that it also holds in another domain, nominalization. In languages that show overt nominalization of VPs, one word order is routinely unattested, namely, a head-initial VP with a suffixal nominalizer. This typological gap can be accounted for by the Final-over-Final Condition, if we allow it to hold within mixed extended projections. This view also makes correct predictions about agentive nominalizations and nominalized serial verb constructions.
Background:
Aphasia therapy software applications (apps) can help achieve recommendations regarding aphasia treatment intensity and duration.
However, we currently know very little about speech and language therapists' (SLTs') preferences with regard to these apps.
This may be problematic, as clinician acceptance of novel treatments and technology is a key factor in successful translation from research evidence to practice.
Aim:
This research aimed to increase our understanding of clinicians' experiences with aphasia therapy apps and their perceived barriers and facilitators to the use of aphasia apps. Furthermore, we wanted to explore the influence of some demographic factors (age, country, and SLT availability in the client's hometown) on SLTs' attitudes towards these apps.
Method & Procedures:
Thirty-five Dutch and 29 Australian SLTs completed an online survey. The survey contained 9 closed-ended questions and 3 open-ended questions. Responses to the closed-ended questions were summarised using descriptive statistics. Responses to the open-ended questions were analysed and coded into recurring themes derived from the data. Logistic regression analyses were performed to explore the relationship between the demographic variables and the responses to the closed-ended questions.
Outcomes & results:
Participants were overwhelmingly positive about aphasia therapy apps and saw the potential for their clients to use apps independently. As facilitators of app use, participants reported accessibility and inclusion of different language modalities, while high costs, absence of a compatible device, and clients' potential computer illiteracy were listed as barriers. None of the analysed demographic factors consistently influenced differences in participants' attitudes towards aphasia therapy apps.
Conclusions:
The positive, extensive and insightful feedback from speech and language therapists is both useful and encouraging for app developers and aphasia researchers, and should facilitate the development of appropriate, high-quality therapy apps.
Intuitively, strongly constraining contexts should lead to stronger probabilistic representations of sentences in memory. Encountering unexpected words could therefore be expected to trigger costlier shifts in these representations than expected words. However, psycholinguistic measures commonly used to study probabilistic processing, such as the N400 event-related potential (ERP) component, are sensitive to word predictability but not to contextual constraint. Some research suggests that constraint-related processing cost may be measurable via an ERP positivity following the N400, known as the anterior post-N400 positivity (PNP). The PNP is argued to reflect update of a sentence representation and to be distinct from the posterior P600, which reflects conflict detection and reanalysis. However, constraint-related PNP findings are inconsistent. We sought to conceptually replicate Federmeier et al. (2007) and Kuperberg et al. (2020), who observed that the PNP, but not the N400 or the P600, was affected by constraint at unexpected but plausible words. Using a pre-registered design and statistical approach maximising power, we demonstrated a dissociated effect of predictability and constraint: strong evidence for predictability but not constraint in the N400 window, and strong evidence for constraint but not predictability in the later window. However, the constraint effect was consistent with a P600 and not a PNP, suggesting increased conflict between a strong representation and unexpected input rather than greater update of the representation. We conclude that either a simple strong/weak constraint design is not always sufficient to elicit the PNP, or that previous PNP constraint findings could be an artifact of smaller sample size.
When researchers carry out a null hypothesis significance test, it is tempting to assume that a statistically significant result lowers Prob(H0), the probability of the null hypothesis being true. Technically, such a statement is meaningless for various reasons: e.g., the null hypothesis does not have a probability associated with it. However, it is possible to relax certain assumptions to compute the posterior probability Prob(H0) under repeated sampling. We show in a step-by-step guide that the intuitively appealing belief, that Prob(H0) is low when significant results have been obtained under repeated sampling, is in general incorrect and depends greatly on (a) the prior probability of the null being true; (b) the type-I error rate; (c) the type-II error rate; and (d) replication of a result. Through step-by-step simulations using open-source code in the R System for Statistical Computing, we show that uncertainty about the null hypothesis being true often remains high despite a significant result. To help the reader develop intuitions about this common misconception, we provide a Shiny app (https://danielschad.shinyapps.io/probnull/). We expect that this tutorial will help researchers better understand and judge results from null hypothesis significance tests.
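The dependence described above follows directly from Bayes' rule. The sketch below is our own minimal illustration with illustrative parameter values (the paper's actual simulations live in its R code and the linked Shiny app); it shows why Prob(H0) can remain substantial after a single significant result.

```python
def prob_h0_given_significant(prior, alpha, power):
    """Posterior Prob(H0 | significant result) under repeated sampling.

    By Bayes' rule:
      P(H0 | sig) = P(sig | H0) * P(H0) / P(sig)
                  = alpha * prior / (alpha * prior + power * (1 - prior))
    where alpha is the type-I error rate and power = 1 - type-II error rate."""
    return alpha * prior / (alpha * prior + power * (1.0 - prior))

# With a sceptical prior (null likely true a priori, prior = 0.9), a
# conventional alpha of .05 and power of .80, one significant result
# still leaves Prob(H0) at 0.36:
print(prob_h0_given_significant(prior=0.9, alpha=0.05, power=0.8))
```

Lowering power or raising the prior pushes the posterior probability of the null even higher, which is the core intuition behind points (a) to (c) above.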
Language processing requires memory retrieval to integrate current input with previous context and to make predictions about upcoming input. We propose that prediction and retrieval are two sides of the same coin, i.e., functionally the same, as they both activate memory representations. Under this assumption, memory retrieval and prediction should interact: Retrieval interference can only occur at a word that triggers retrieval, and a fully predicted word would not do that. The present study investigated the proposed interaction with event-related potentials (ERPs) during the processing of sentence pairs in German. Predictability was measured via cloze probability. Memory retrieval was manipulated via the position of a distractor inducing proactive or retroactive similarity-based interference. Linear mixed model analyses provided evidence for the hypothesised interaction in a broadly distributed negativity, which we discuss in relation to the interference ERP literature. Our finding supports the proposal that memory retrieval and prediction are functionally the same.
In this study, we investigated the cognitive-emotional interplay by measuring the effects of executive competition (Pessoa, 2013), i.e., how inhibitory control is influenced when emotional information is encountered. Sixty-three children (8 to 9 years of age) participated in an inhibition task (central task) accompanied by happy, sad, or neutral emoticons (displayed in the periphery). Typical interference effects were found in the main task for speed and accuracy, but in general, these effects were not additionally modulated by the peripheral emoticons, indicating that processing of the main task exhausted the limited capacity such that interference from the task-irrelevant, peripheral information did not emerge (Pessoa, 2013). Further analyses revealed that the magnitude of interference effects depended on the order of congruency conditions: when incongruent conditions preceded congruent ones, there was greater interference. This effect was smaller in sad conditions, and particularly so at the beginning of the experiment. These findings suggest that the bottom-up perception of task-irrelevant emotional information influenced the top-down process of inhibitory control among children in the sad condition when processing demands were particularly high. We discuss whether the salience and valence of the emotional stimuli as well as task demands are the decisive characteristics that modulate the strength of this relation.
Stimulus data and experimental design for a self-paced reading study on emoji-word substitutions
(2022)
This data paper presents the experimental design and stimuli from an online self-paced reading study on the processing of emojis substituting lexically ambiguous nouns. We recorded reading times for the target ambiguous nouns and for emojis depicting either the intended target referent or a contextually inappropriate homophonous noun. Furthermore, we recorded comprehension accuracy, demographics and a self-assessment of the participants' emoji usage frequency. The data includes all stimuli used, the raw data, the full JavaScript code for the online experiment, as well as Python and R code for the data analysis. We believe that our dataset may give important insights related to the comprehension mechanisms involved in the cognitive processing of emojis. For interpretation and discussion of the experiment, please see the original article entitled "The processing of emoji-word substitutions: A self-paced-reading study".