The temporal dynamics of climate processes are spread across different timescales, so studying these processes at a single selected timescale may not reveal the complete mechanisms and interactions within and between the (sub-)processes. To capture non-linear interactions between climatic events, the method of event synchronization has recently attracted increasing attention. The main drawback of current event-synchronization estimation is that it is restricted to analysing time series at one reference timescale only. Studying event synchronization at multiple scales would therefore be of great interest for understanding the dynamics of the investigated climate processes. In this paper, the wavelet-based multi-scale event synchronization (MSES) method is proposed by combining the wavelet transform with event synchronization. Wavelets are used extensively to study multi-scale processes and the dynamics of processes across timescales. The proposed method allows the study of spatio-temporal patterns across different timescales. The method is tested on synthetic and real-world time series in order to check its replicability and applicability. The results indicate that MSES is able to capture relationships that exist between processes at different timescales.
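The single-scale event-synchronization measure that MSES builds on can be sketched in a few lines. The following is a minimal illustrative version of the Quiroga-style measure (the function name and the fixed time tolerance `tau` are simplifications, not the authors' MSES code); the multi-scale method would apply such a measure to wavelet-filtered copies of the series.

```python
import numpy as np

def event_synchronization(ex, ey, tau):
    """Single-scale event synchronization between two event-time lists:
    Q = (c(x|y) + c(y|x)) / sqrt(m_x * m_y), with Q = 1 for fully
    synchronized trains and Q = 0 for unrelated ones. Sketch only."""
    def c(a, b):
        # count events in `a` that follow an event in `b` within tau;
        # simultaneous events count 1/2 to avoid double counting
        total = 0.0
        for ta in a:
            for tb in b:
                d = ta - tb
                if 0.0 < d <= tau:
                    total += 1.0
                elif d == 0.0:
                    total += 0.5
        return total

    return (c(ex, ey) + c(ey, ex)) / np.sqrt(len(ex) * len(ey))

print(event_synchronization([1.0, 5.0, 9.0], [1.0, 5.0, 9.0], tau=0.5))  # 1.0
print(event_synchronization([1.0, 5.0, 9.0], [100.0, 200.0], tau=0.5))   # 0.0
```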
The topic of synchronization forms a link between nonlinear dynamics and neuroscience. On the one hand, neurobiological research has shown that the synchronization of neuronal activity is an essential aspect of the brain's working principle. On the other hand, recent advances in physical theory have led to the discovery of the phenomenon of phase synchronization. A method of data analysis motivated by this finding, phase synchronization analysis, has already been applied successfully to empirical data. The present doctoral thesis ties in with these converging lines of research. Its subject is methodological contributions to the further development of phase synchronization analysis, as well as its application to event-related potentials, a form of EEG data that is especially important in the cognitive sciences. The methodological contributions of this work consist, first, of a number of specialized statistical tests for a difference in synchronization strength between two states of a system of two oscillators. Second, in view of the many-channel character of EEG data, an approach to multivariate phase synchronization analysis is presented. For the empirical investigation of neuronal synchronization, a classic experiment on language processing was replicated, comparing the effect of a semantic violation in a sentence context with that of a manipulation of physical stimulus properties (font color). Here phase synchronization analysis detects a decrease of global synchronization for the semantic violation as well as an increase for the physical manipulation. In the latter case, the multivariate analysis traces the global synchronization effect back to an interaction of symmetrically located brain areas.

The findings presented show that the physics-motivated method of phase synchronization analysis can make a relevant contribution to the investigation of event-related potentials in the cognitive sciences.
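The core quantity behind phase synchronization analysis is compact: given the instantaneous phases of two oscillators, the phase-locking value (PLV) is the magnitude of the mean phase-difference vector. A minimal numpy sketch follows (the synthetic phases are illustrative; the thesis additionally develops statistical tests for condition differences in this quantity, which are not shown here).

```python
import numpy as np

def plv(phase1, phase2):
    """Phase-locking value: |<exp(i * dphi)>|. Equals 1 for a constant
    phase lag and approaches 0 for independent phases."""
    return np.abs(np.mean(np.exp(1j * (phase1 - phase2))))

t = np.linspace(0, 10, 1000)
p1 = 2 * np.pi * 5 * t                    # phase of a 5 Hz oscillator
p2 = 2 * np.pi * 5 * t + 0.7              # same frequency, fixed lag
rng = np.random.default_rng(0)
p3 = rng.uniform(0, 2 * np.pi, t.size)    # unrelated random phases

print(round(plv(p1, p2), 3))  # 1.0
print(round(plv(p1, p3), 3))  # small (no phase locking)
```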
How do late proficient bilinguals process morphosyntactic and lexical-semantic information in their non-native language (L2)? How is this information represented in the L2 mental lexicon? And what are the neural signatures of L2 morphosyntactic and lexical-semantic processing? We addressed these questions in one behavioral and two ERP priming experiments on inflected German adjectives testing a group of advanced late Russian learners of German in comparison to native speaker (L1) controls. While in the behavioral experiment, the L2 learners performed native-like, the ERP data revealed clear L1/L2 differences with respect to the temporal dynamics of grammatical processing. Specifically, our results show that L2 morphosyntactic processing yielded temporally and spatially extended brain responses relative to L1 processing, indicating that grammatical processing of inflected words in an L2 is more demanding and less automatic than in the L1. However, this group of advanced L2 learners showed native-like lexical-semantic processing.
In this thesis, sentence processing was investigated using a psychophysiological measure known as pupillometry as well as event-related potentials (ERPs). The scope of the thesis was broad, investigating the processing of several different movement constructions with native speakers of English and second-language learners of English, as well as word order and case marking in German-speaking adults and children. Pupillometry and ERPs allowed us to test competing linguistic theories and to use novel methodologies to investigate the processing of word order. In doing so, we also aimed to establish pupillometry as an effective way to investigate the processing of word order, thus broadening the methodological spectrum.
Moving Beyond ERP Components
(2018)
Relationships between neuroimaging measures and behavior provide important clues about brain function and cognition in healthy and clinical populations. While electroencephalography (EEG) provides a portable, low-cost measure of brain dynamics, it has been somewhat underrepresented in the emerging field of model-based inference. We seek to address this gap in this article by highlighting the utility of linking EEG and behavior, with an emphasis on approaches for EEG analysis that move beyond focusing on peaks or "components" derived from averaging EEG responses across trials and subjects (generating the event-related potential, ERP). First, we review methods for deriving features from EEG in order to enhance the signal within single trials. These methods include filtering based on user-defined features (i.e., frequency decomposition, time-frequency decomposition), filtering based on data-driven properties (i.e., blind source separation, BSS), and generating more abstract representations of data (e.g., using deep learning). We then review cognitive models which extract latent variables from experimental tasks, including the drift diffusion model (DDM) and reinforcement learning (RL) approaches. Next, we discuss ways to assess associations among these measures, including statistical models, data-driven joint models, and cognitive joint modeling using hierarchical Bayesian models (HBMs). We think that these methodological tools are likely to contribute to theoretical advancements, and will help inform our understanding of brain dynamics that contribute to moment-to-moment cognitive function.
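One of the single-trial filtering approaches the article reviews, time-frequency decomposition, can be illustrated with a complex Morlet wavelet. The sketch below is a hedged minimal version (the function name, parameters, and amplitude normalization are my own choices, not taken from the article): convolving a trial with a complex wavelet yields an analytic signal whose squared magnitude is the power at that frequency.

```python
import numpy as np

def morlet_power(signal, fs, freq, n_cycles=7):
    """Single-trial power at one frequency via complex Morlet wavelet
    convolution. Illustrative sketch of time-frequency filtering."""
    sigma_t = n_cycles / (2 * np.pi * freq)          # temporal width
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.abs(wavelet).sum()                 # unit gain at center freq
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

fs = 250.0
t = np.arange(0, 2, 1 / fs)
trial = np.sin(2 * np.pi * 10 * t)        # a 10 Hz "alpha"-like trial
power_10 = morlet_power(trial, fs, 10.0)
power_40 = morlet_power(trial, fs, 40.0)
# power concentrates at the signal's own frequency
print(power_10.mean() > power_40.mean())  # True
```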
This study focuses on the ability of the adult sound system to reorganise as a result of experience. Participants were exposed to existing and novel syllables in either a listening task or a production task over the course of two days. On the third day, they named disyllabic pseudowords while their electroencephalogram was recorded. The first syllable of these pseudowords had either been trained in the auditory modality, trained in production or had not been trained. The EEG response differed between existing and novel syllables for untrained but not for trained syllables, indicating that training novel sound sequences modifies the processes involved in the production of these sequences to make them more similar to those underlying the production of existing sound sequences. Effects of training on the EEG response were observed both after production training and mere auditory exposure.
In reading, word frequency is commonly regarded as the major bottom-up determinant of the speed of lexical access. Moreover, language processing depends on top-down information, such as the predictability of a word from its previous context. However, the exact role of top-down predictions in visual word recognition is poorly understood: they may rapidly affect lexical processes or, alternatively, influence only late post-lexical stages. To add evidence about the nature of top-down processes and their relation to bottom-up information in the timeline of word recognition, we examined influences of frequency and predictability on event-related potentials (ERPs) in several sentence reading studies. The results were related to eye movements from natural reading as well as to models of word recognition. As a first and major finding, interactions of frequency and predictability on ERP amplitudes consistently revealed top-down influences on lexical levels of word processing (Chapters 2 and 4). Second, frequency and predictability mediated relations between N400 amplitudes and fixation durations, pointing to their sensitivity to a common stage of word recognition; further, larger N400 amplitudes entailed longer fixation durations on the next word, providing evidence for ongoing processing beyond a fixation (Chapter 3). Third, influences of presentation rate on ERP frequency and predictability effects demonstrated that the time available for word processing critically co-determines the course of bottom-up and top-down influences (Chapter 4). Fourth, at a near-normal reading speed, an early predictability effect suggested the rapid comparison of top-down hypotheses with the actual visual input (Chapter 5). The present results are compatible with interactive models of word recognition, which assume that early lexical processes depend on the concerted impact of bottom-up and top-down information. We offer a framework that reconciles the findings on a timeline of word recognition, taking into account influences of frequency, predictability, and presentation rate (Chapter 4).
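The frequency-by-predictability interaction at the heart of this account can be made concrete with a toy regression. Everything below is fabricated for illustration (variable names, coefficients, and data are hypothetical; the thesis analyzed real ERP amplitudes): an amplitude is simulated with main effects plus an interaction term, and ordinary least squares recovers the interaction.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
log_freq = rng.normal(0, 1, n)           # word frequency (log, centered)
pred = rng.uniform(0, 1, n)              # cloze predictability
# simulated ERP amplitude: main effects plus a frequency x predictability
# interaction (true coefficients -1.0, -2.0, and 1.5), plus noise
amp = (-1.0 * log_freq - 2.0 * pred + 1.5 * log_freq * pred
       + rng.normal(0, 0.5, n))

# design matrix: intercept, both main effects, and the interaction
X = np.column_stack([np.ones(n), log_freq, pred, log_freq * pred])
beta, *_ = np.linalg.lstsq(X, amp, rcond=None)
print(beta)  # interaction coefficient beta[3] recovered near 1.5
```

The key point mirrored here is that the effect of frequency on the amplitude depends on predictability, which is exactly what an interaction term expresses.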
During natural reading, a parafoveal preview of the upcoming word facilitates its subsequent recognition (e.g., shorter fixation durations compared with a masked preview), but nothing is known about the neural correlates of this so-called preview benefit. Furthermore, while the evidence is strong that readers preprocess orthographic features of upcoming words, it is controversial whether word meaning can also be accessed parafoveally. We investigated the timing, scope, and electrophysiological correlates of parafoveal information use in reading by simultaneously recording eye movements and fixation-related brain potentials (FRPs) while participants read word lists fluently from left to right. For one word, the target (e.g., "blade"), parafoveal information was manipulated by showing an identical ("blade"), semantically related ("knife"), or unrelated ("sugar") word as preview. In boundary trials, the preview was shown parafoveally but changed to the correct target word during the incoming saccade. Replicating classic findings, target words were fixated for a shorter time after identical previews. In the EEG, this benefit was reflected in an occipitotemporal preview positivity between 200 and 280 ms. In contrast, there was no facilitation from related previews. In parafoveal-on-foveal trials, preview and target were embedded at neighboring list positions without a display change. Consecutive fixation of two related words produced N400 priming effects, but only shortly (160 ms) after the second word was directly fixated. The results demonstrate that neural responses to words are substantially altered by parafoveal preprocessing under normal reading conditions. We found no evidence that word meaning contributes to these effects. Saccade-contingent display manipulations can thus be combined with EEG recordings to study extrafoveal perception in vision.
Brain-electric correlates of reading have traditionally been studied with word-by-word presentation, a condition that eliminates important aspects of the normal reading process and precludes direct comparisons between neural activity and oculomotor behavior. In the present study, we investigated effects of word predictability on eye movements (EMs) and fixation-related brain potentials (FRPs) during natural sentence reading. Electroencephalogram (EEG) and EMs (via video-based eye tracking) were recorded simultaneously while subjects read heterogeneous German sentences, moving their eyes freely over the text. FRPs were time-locked to first-pass reading fixations and analyzed according to the cloze probability of the currently fixated word. We replicated robust effects of word predictability on EMs and on the N400 component in FRPs. The data were then used to model the relation among fixation duration, gaze duration, and N400 amplitude, and to trace the time course of EEG effects relative to effects in EM behavior. In an extended Methodological Discussion section, we review four technical and data-analytical problems that need to be addressed when FRPs are recorded in free-viewing situations (such as reading, visual search, or scene perception) and propose solutions. Results suggest that EEG recordings during normal vision are feasible and useful to consolidate findings from EEG and eye-tracking studies.
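Time-locking EEG to fixation onsets, the basic move behind FRPs, reduces to epoching the continuous signal at eye-tracker-supplied event times. Below is a bare-bones sketch (function and variable names are mine; real FRP pipelines must additionally handle overlapping fixations and oculomotor covariates, the kind of problems the paper's Methodological Discussion addresses).

```python
import numpy as np

def fixation_locked_epochs(eeg, fix_onsets, fs, tmin=-0.1, tmax=0.6):
    """Cut fixation-related epochs from a continuous single-channel EEG
    trace (array of samples), time-locked to fixation-onset times given
    in seconds; epochs extending past the recording are dropped."""
    pre, post = int(round(tmin * fs)), int(round(tmax * fs))
    epochs = []
    for t0 in fix_onsets:
        i = int(round(t0 * fs))
        if i + pre >= 0 and i + post <= len(eeg):
            epochs.append(eeg[i + pre : i + post])
    return np.array(epochs)

fs = 500.0
eeg = np.zeros(int(5 * fs))
eeg[int(1.2 * fs)] = 1.0                       # a "deflection" at 1.2 s
ep = fixation_locked_epochs(eeg, [1.0, 2.0, 3.5], fs)
print(ep.shape)        # (3, 350) -> 3 fixations, 0.7 s of samples each
print(ep[0].argmax())  # 150 -> the deflection sits 0.2 s after fixation onset
```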