When the mind wanders, attention turns away from the external environment and cognitive processing is decoupled from perceptual information. Mind wandering is usually treated as a dichotomy (dichotomy-hypothesis), and is often measured using self-reports. Here, we propose the levels of inattention hypothesis, which postulates attentional decoupling to graded degrees at different hierarchical levels of cognitive processing. To measure graded levels of attentional decoupling during reading we introduce the sustained attention to stimulus task (SAST), which is based on psychophysics of error detection. Under experimental conditions likely to induce mind wandering, we found that subjects were less likely to notice errors that required high-level processing for their detection as opposed to errors that only required low-level processing. Eye tracking revealed that before errors were overlooked influences of high- and low-level linguistic variables on eye fixations were reduced in a graded fashion, indicating episodes of mindless reading at weak and deep levels. Individual fixation durations predicted overlooking of lexical errors 5 s before they occurred. Our findings support the levels of inattention hypothesis and suggest that different levels of mindless reading can be measured behaviorally in the SAST. Using eye tracking to detect mind wandering online represents a promising approach for the development of new techniques to study mind wandering and to ameliorate its negative consequences.
Following up on an exchange about the relation between microsaccades and spatial attention (Horowitz, Fencsik, Fine, Yurgenson, & Wolfe, 2007; Horowitz, Fine, Fencsik, Yurgenson, & Wolfe, 2007; Laubrock, Engbert, Rolfs, & Kliegl, 2007), we examine the effects of selection criteria and response modality. We show that for Posner cuing with saccadic responses, microsaccades go with attention in at least 75% of cases (almost 90% if probability matching is assumed) when they are first (or only) microsaccades in the cue target interval and when they occur between 200 and 400 msec after the cue. The relation between spatial attention and the direction of microsaccades drops to chance level for unselected microsaccades collected during manual-response conditions. Analyses of data from four cross-modal cuing experiments demonstrate an above-chance, intermediate link for visual cues, but no systematic relation for auditory cues. Thus, the link between spatial attention and direction of microsaccades depends on the experimental condition and time of occurrence, but it can be very strong.
Reading requires the orchestration of visual, attentional, language-related, and oculomotor processing constraints. This study replicates previous effects of frequency, predictability, and length of fixated words on fixation durations in natural reading and demonstrates new effects of these variables related to previous and next words. Results are based on fixation durations recorded from 222 persons, each reading 144 sentences. Such evidence for distributed processing of words across fixation durations challenges psycholinguistic immediacy-of-processing and eye-mind assumptions. Most of the time the mind processes several words in parallel at different perceptual and cognitive levels. Eye movements can help to unravel these processes.
The zoom lens of attention: simulating shuffled versus normal text reading using the SWIFT model
(2012)
Assumptions on the allocation of attention during reading are crucial for theoretical models of eye guidance. The zoom lens model of attention postulates that attentional deployment can vary from a sharp focus to a broad window. The model is closely related to the foveal load hypothesis, i.e., the assumption that the perceptual span is modulated by the difficulty of the fixated word. However, these important theoretical concepts for cognitive research have not been tested quantitatively in eye movement models. Here we show that the zoom lens model, implemented in the SWIFT model of saccade generation, captures many important patterns of eye movements. We compared the model's performance to experimental data from normal and shuffled text reading. Our results demonstrate that the zoom lens of attention might be an important concept for eye movement control in reading.
The aim of this work was to examine the processing of pronominal anaphora by children with attention deficit hyperactivity disorder or dyslexia. The sample consisted of 75 German-speaking children, who read two 80-word texts containing pronominal anaphora. The eye movements of all participants were recorded and, to ensure that they were reading attentively, two reading comprehension tasks were administered. The analysis of eye movements, specifically of fixations, indicates that children with these disorders have difficulty processing pronominal anaphora, especially dyslexic children.
Fixation durations in reading are longer for within-word fixation positions close to word center than for positions near word boundaries. This counterintuitive result was termed the Inverted-Optimal Viewing Position (IOVP) effect. We proposed an explanation of the effect based on error-correction of mislocated fixations [Nuthmann, A., Engbert, R., & Kliegl, R. (2005). Mislocated fixations during reading and the inverted optimal viewing position effect. Vision Research, 45, 2201-2217], which suggests that the IOVP effect is not related to word processing. Here we demonstrate the existence of an IOVP effect in "mindless reading", a z-string scanning task. We compare the results from experimental data with results obtained from computer simulations of a simple model of the IOVP effect and discuss alternative accounts. We conclude that oculomotor errors, which often induce mislocated fixations, represent the most important source of the IOVP effect.
Computational models such as E-Z Reader and SWIFT are ideal theoretical tools to test quantitatively our current understanding of eye-movement control in reading. Here we present a mathematical analysis of word skipping in the E-Z Reader model by semianalytic methods, to highlight the differences in current modeling approaches. In E-Z Reader, the word identification system must outperform the oculomotor system to induce word skipping. In SWIFT, there is competition among words to be selected as a saccade target. We conclude that it is the question of competitors in the "game" of word skipping that must be solved in eye movement research.
The fast and the slow of skilled bimanual rhythm production: parallel versus integrated timing
(2000)
When watching the image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image—a reliable experimental finding termed central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. In four scene-viewing experiments we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to the onset of an image. We analyzed the distance to image center over time and show that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target strongly depended on the first saccade latency. A previously published model of saccade generation was extended with a central activation map on the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias from our experiments. Our results suggest that the central fixation bias is generated by default activation as a response to the sudden image onset and that this default activation pattern decreases over time. Thus, it may often be preferable to use a modified version of the scene viewing paradigm that decouples image onset from the start signal for scene exploration to explicitly reduce the central fixation bias.
We investigate cognitive control of polyrhythmic hand movements as a model paradigm for bimanual coordination. Using a symbolic coding of the recorded time series, we demonstrate the existence of qualitative transitions induced by experimental manipulation of the tempo. A nonlinear model with delayed feedback control is proposed, which accounts for these dynamical transitions in terms of bifurcations resulting from variation of the external control parameter. Furthermore, it is shown that transitions can also be observed due to fluctuations in the timing control level. We conclude that the complexity of coordinated bimanual movements results from interactions between nonlinear control mechanisms with delayed feedback and stochastic timing components.
Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-world scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control from the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device and raw gaze data were transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm.
We analyse time series from a study on bimanual rhythmic movements in which the speed of performance (the external control parameter) was experimentally manipulated. Using symbolic transformations as a visualization technique we observe qualitative changes in the dynamics of the timing patterns. Such phase transitions are quantitatively described by measures of complexity. Using these results we develop an advanced symbolic coding which enables us to detect important dynamical structures. Furthermore, our analysis raises new questions concerning the modelling of the underlying human cognitive-motor system.
SWIFT explorations
(2003)
Mathematical models have become an important tool for understanding the control of eye movements during reading. Main goals of the development of the SWIFT model (R. Engbert, A. Longtin, & R. Kliegl, 2002) were to investigate the possibility of spatially distributed processing and to implement a general mechanism for all types of eye movements observed in reading experiments. The authors present an advanced version of SWIFT that integrates properties of the oculomotor system and effects of word recognition to explain many of the experimental phenomena faced in reading research. They propose new procedures for the estimation of model parameters and for the test of the model's performance. They also present a mathematical analysis of the dynamics of the SWIFT model. Finally, within this framework, they present an analysis of the transition from parallel to serial processing.
Scene viewing is used to study attentional selection in complex but still controlled environments. One of the main observations on eye movements during scene viewing is the inhomogeneous distribution of fixation locations: While some parts of an image are fixated by almost all observers and are inspected repeatedly by the same observer, other image parts remain unfixated by observers even after long exploration intervals. Here, we apply spatial point process methods to investigate the relationship between pairs of fixations. More precisely, we use the pair correlation function, a powerful statistical tool, to evaluate dependencies between fixation locations along individual scanpaths. We demonstrate that aggregation of fixation locations within 4 degrees is stronger than expected from chance. Furthermore, the pair correlation function reveals stronger aggregation of fixations when the same image is presented a second time. We use simulations of a dynamical model to show that a narrower spatial attentional span may explain differences in pair correlations between the first and the second inspection of the same image.
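The pair correlation function used in this study can be estimated quite simply for a 2D point pattern. The following Python sketch is a minimal, edge-correction-free version (not the authors' implementation; the function name, bin edges, and point counts are illustrative assumptions): values near 1 indicate complete spatial randomness (CSR), while values above 1 indicate aggregation of fixations.

```python
import numpy as np

def pair_correlation(points, r_edges, area):
    """Naive pair correlation estimate for a 2D point pattern.

    Counts unique inter-point distances in annuli defined by r_edges and
    normalizes by the expected counts under complete spatial randomness
    (CSR). Edge corrections are deliberately omitted in this sketch."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    dists = dists[np.triu_indices(n, k=1)]           # each pair counted once
    counts, _ = np.histogram(dists, bins=r_edges)
    intensity = n / area                             # points per unit area
    annulus = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    expected = 0.5 * n * intensity * annulus         # pair counts under CSR
    return counts / expected                         # ~1: CSR, >1: clustering

# Demo: uniformly random points should give values near 1.
rng = np.random.default_rng(0)
pts = rng.random((2000, 2))                          # unit square, area = 1
g = pair_correlation(pts, np.array([0.02, 0.05, 0.08, 0.11]), area=1.0)
```

A full analysis of fixation data would add edge corrections and kernel smoothing; the sketch only shows the core normalization against the CSR expectation.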
In humans and in foveated animals visual acuity is highly concentrated at the center of gaze, so that choosing where to look next is an important example of online, rapid decision-making. Computational neuroscientists have developed biologically-inspired models of visual attention, termed saliency maps, which successfully predict where people fixate on average. Using point process theory for spatial statistics, we show that scanpaths contain, however, important statistical structure, such as spatial clustering on top of distributions of gaze positions. Here, we develop a dynamical model of saccadic selection that accurately predicts the distribution of gaze positions as well as spatial clustering along individual scanpaths. Our model relies on, first, activation dynamics via spatially-limited (foveated) access to saliency information and, second, a leaky memory process controlling the re-inspection of target regions. This theoretical framework models a form of context-dependent decision-making, linking neural dynamics of attention to behavioral gaze data.
Visuospatial attention and gaze control depend on the interaction of foveal and peripheral processing. The foveal and peripheral regions of the visual field are differentially sensitive to parts of the spatial frequency spectrum. In two experiments, we investigated how the selective attenuation of spatial frequencies in the central or the peripheral visual field affects eye-movement behavior during real-world scene viewing. Gaze-contingent low-pass or high-pass filters with varying filter levels (i.e., cutoff frequencies; Experiment 1) or filter sizes (Experiment 2) were applied. Compared to unfiltered control conditions, mean fixation durations increased most with central high-pass and peripheral low-pass filtering. Increasing filter size prolonged fixation durations with peripheral filtering, but not with central filtering. Increasing filter level prolonged fixation durations with low-pass filtering, but not with high-pass filtering. These effects indicate that fixation durations are not always longer under conditions of increased processing difficulty. Saccade amplitudes largely adapted to processing difficulty: amplitudes increased with central filtering and decreased with peripheral filtering; the effects strengthened with increasing filter size and filter level. In addition, we observed a trade-off between saccade timing and saccadic selection, since saccade amplitudes were modulated when fixation durations were unaffected by the experimental manipulations. We conclude that interactions of perception and gaze control are highly sensitive to experimental manipulations of input images as long as the residual information can still be accessed for gaze control.
Natural vision is characterized by alternating sequences of rapid gaze shifts (saccades) and fixations. During fixations, microsaccades and slower drift movements occur spontaneously, so that the eye is never motionless. Theoretical models of fixational eye movements predict that microsaccades are dynamically coupled to slower drift movements generated immediately before microsaccades, which might be used as a criterion to distinguish microsaccades from small voluntary saccades. Here we investigate a sequential scanning task, where participants generate goal-directed saccades and microsaccades with overlapping amplitude distributions. We show that properties of microsaccades are correlated with precursory drift motion, while amplitudes of goal-directed saccades do not depend on previous drift epochs. We develop and test a mathematical model that integrates goal-directed and fixational eye movements, including microsaccades. Using model simulations, we reproduce the experimental finding of correlations within fixational eye movement components (i.e., between physiological drift and microsaccades) but not between goal-directed saccades and fixational drift motion. These results lend support to a functional difference between microsaccades and goal-directed saccades, while, at the same time, both types of behavior may be part of an oculomotor continuum that is quantitatively described by our mathematical model.
Sequential data assimilation of the stochastic SEIR epidemic model for regional COVID-19 dynamics
(2021)
Newly emerging pandemics like COVID-19 call for predictive models to implement precisely tuned responses to limit their deep impact on society. Standard epidemic models provide a theoretically well-founded dynamical description of disease incidence. For COVID-19 with infectiousness peaking before and at symptom onset, the SEIR model explains the hidden build-up of exposed individuals which creates challenges for containment strategies. However, spatial heterogeneity raises questions about the adequacy of modeling epidemic outbreaks on the level of a whole country. Here, we show that by applying sequential data assimilation to the stochastic SEIR epidemic model, we can capture the dynamic behavior of outbreaks on a regional level. Regional modeling, with relatively low numbers of infected and demographic noise, accounts for both spatial heterogeneity and stochasticity. Based on adapted models, short-term predictions can be achieved. Thus, with the help of these sequential data assimilation methods, more realistic epidemic models are within reach.
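As a rough illustration of the model class described here (not the paper's implementation; parameter values, time step, and population sizes are assumptions), a stochastic SEIR model can be advanced in daily steps with binomial transition counts, whose demographic noise becomes important at the small regional case numbers the abstract discusses:

```python
import numpy as np

def seir_step(state, rng, beta=0.3, sigma=1 / 5.5, gamma=1 / 3.0):
    """One day of a stochastic SEIR model with binomial transitions.

    beta: transmission rate, sigma: 1/latent period, gamma: 1/infectious
    period. All values are illustrative assumptions, not from the paper.
    The binomial draws introduce demographic noise, which is pronounced
    for small regional populations."""
    S, E, I, R = state
    N = S + E + I + R
    new_E = rng.binomial(S, 1.0 - np.exp(-beta * I / N))  # S -> E (exposure)
    new_I = rng.binomial(E, 1.0 - np.exp(-sigma))         # E -> I (onset)
    new_R = rng.binomial(I, 1.0 - np.exp(-gamma))         # I -> R (removal)
    return (S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R)

# Simulate a hypothetical region of 100,000 inhabitants for 90 days.
rng = np.random.default_rng(1)
state = (99_900, 60, 40, 0)
trajectory = [state]
for _ in range(90):
    state = seir_step(state, rng)
    trajectory.append(state)
```

In a sequential data assimilation setting, an ensemble of such trajectories would be re-weighted or adjusted whenever new regional case counts arrive; the sketch only shows the stochastic forward model being assimilated.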
When searching for a target in a natural scene, both the target's visual properties and its similarity to the background have been shown to influence whether and how fast humans are able to find it. So far, it has been unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequency content throughout eight consecutive sessions. High-spatial frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial frequency targets if target identity was known. If a saccade was programmed in the same direction as the previous saccade, fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical fixation density at the endpoints of saccades which maintain direction were comparatively low, indicating that these saccades were less selective. Our results suggest that searchers adjust their eye movement dynamics to the search target efficiently, since previous research has shown that low-spatial frequencies are visible farther into the periphery than high-spatial frequencies. We interpret the saccade direction specificity of our effects as an underlying separation into a default scanning mechanism and a selective, target-dependent mechanism.
Eye movements during fixation of a stationary target prevent the adaptation of the visual system to continuous illumination and inhibit fading of the image. These random, involuntary, small movements are restricted at long time scales so as to keep the target at the center of the field of view. Here we use detrended fluctuation analysis in order to study the properties of fixational eye movements at different time scales. Results show different scaling behavior between horizontal and vertical movements. When the small ballistic movements, i.e., microsaccades, are removed, the scaling exponents in both planes become similar. Our findings suggest that microsaccades enhance the persistence at short time scales mostly in the horizontal component and much less in the vertical component. This difference may be due to the need for continuously moving the eyes in the horizontal plane, in order to match the stereoscopic image for different viewing distances.
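Detrended fluctuation analysis, the method used in this study, can be sketched in a few lines of Python. This simplified version (the function name and window sizes are illustrative; the published analysis may differ in detrending order and scale selection) integrates the signal, removes a linear trend within windows of size n, and reads the scaling exponent off the slope of log F(n) versus log n:

```python
import numpy as np

def dfa(x, window_sizes):
    """Simplified detrended fluctuation analysis (first-order detrending).

    Returns the root-mean-square fluctuation F(n) for each window size n.
    The DFA scaling exponent is the slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))                   # integrated profile
    F = []
    for n in window_sizes:
        n_win = len(y) // n
        segments = y[: n_win * n].reshape(n_win, n)
        t = np.arange(n)
        residuals = []
        for seg in segments:
            coef = np.polyfit(t, seg, 1)            # linear fit per window
            residuals.append(seg - np.polyval(coef, t))
        F.append(np.sqrt(np.mean(np.concatenate(residuals) ** 2)))
    return np.array(F)

# Demo: white noise should yield a scaling exponent near 0.5;
# persistent signals give exponents above 0.5.
rng = np.random.default_rng(2)
sizes = np.array([8, 16, 32, 64, 128])
F = dfa(rng.standard_normal(4096), sizes)
alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
```

The study's comparison of horizontal and vertical components amounts to computing such an exponent separately for each coordinate, with and without microsaccades removed.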
Saccades move objects of interest into the center of the visual field for high-acuity visual analysis. White, Stritzke, and Gegenfurtner (Current Biology, 18, 124-128, 2008) have shown that saccadic latencies in the context of a structured background are much shorter than those with an unstructured background at equal levels of visibility. This effect has been explained by possible preactivation of the saccadic circuitry whenever a structured background acts as a mask for potential saccade targets. Here, we show that background textures modulate rates of microsaccades during visual fixation. First, after a display change, structured backgrounds induce a stronger decrease of microsaccade rates than do uniform backgrounds. Second, we demonstrate that the occurrence of a microsaccade in a critical time window can delay a subsequent saccadic response. Taken together, our findings suggest that microsaccades contribute to the saccadic facilitation effect, due to a modulation of microsaccade rates by properties of the background.
Microsaccades - i.e., small fixational saccades generated in the superior colliculus (SC) - have been linked to spatial attention. While maintaining fixation, voluntary shifts of covert attention toward peripheral targets result in a sequence of attention-aligned and attention-opposing microsaccades. In most previous studies the direction of the voluntary shift is signaled by a spatial cue (e.g., a leftwards pointing arrow) that presents the most informative part of the cue (e.g., the arrowhead) in the to-be attended visual field. Here we directly investigated the influence of cue position and tested the hypothesis that microsaccades align with cue position rather than with the attention shift. In a spatial cueing task, we presented the task-relevant part of a symmetric cue either in the to-be attended visual field or in the opposite field. As a result, microsaccades were still weakly related to the covert attention shift; however, they were strongly related to the position of the cue even if that required a movement opposite to the cued attention shift. Moreover, if microsaccades aligned with cue position, we observed stronger cueing effects on manual response times. Our interpretation of the data is supported by numerical simulations of a computational model of microsaccade generation that is based on SC properties, where we explain our findings by separate attentional mechanisms for cue localization and the cued attention shift. We conclude that during cueing of voluntary attention, microsaccades are related to both the overt attentional selection of the task-relevant part of the cue stimulus and the subsequent covert attention shift.
During reading, saccadic landing positions within words show a pronounced peak close to the word center, with an additional systematic error that is modulated by the distance from the launch site and the length of the target word. Here we show that the systematic variation of fixation positions within words, the saccadic range error, can be derived from Bayesian decision theory. We present the first mathematical model for the saccadic range error; this model makes explicit assumptions regarding underlying visual and oculomotor processes. Analyzing a corpus of eye movement recordings, we obtained results that are consistent with the view that readers use Bayesian estimation for saccade planning. Furthermore, we show that alternative models fail to reproduce the experimental data.
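The abstract does not reproduce the model's equations; as a generic sketch of Bayesian estimation in this setting (all numbers, the letter-position units, and the Gaussian assumptions are ours, not the paper's), combining a Gaussian prior over landing positions with noisy sensory evidence yields a posterior mean shrunk toward the prior, qualitatively reproducing the undershoot of far targets and overshoot of near targets characteristic of the saccadic range error:

```python
import numpy as np

def posterior_landing(sensory_target, sensory_sd, prior_mean, prior_sd):
    """Conjugate Gaussian update: a prior over landing positions is
    combined with a noisy sensory estimate of the target position.

    The posterior mean is shrunk from the sensory estimate toward the
    prior mean, a bias qualitatively similar to the saccadic range
    error. All units here are hypothetical letter positions."""
    w = prior_sd ** 2 / (prior_sd ** 2 + sensory_sd ** 2)  # weight on evidence
    mean = w * sensory_target + (1.0 - w) * prior_mean
    sd = np.sqrt((prior_sd ** 2 * sensory_sd ** 2)
                 / (prior_sd ** 2 + sensory_sd ** 2))
    return mean, sd

# Demo with assumed values: prior centered on a typical 7-letter saccade.
far, _ = posterior_landing(12.0, sensory_sd=2.0, prior_mean=7.0, prior_sd=3.0)
near, _ = posterior_landing(3.0, sensory_sd=2.0, prior_mean=7.0, prior_sd=3.0)
```

Launch-site distance enters such a model naturally through the sensory uncertainty: the farther the target from the launch site, the larger the sensory noise and hence the stronger the shrinkage toward the prior.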
Skilled reading requires information processing of the fixated and the not-yet-fixated words to generate precise control of gaze. Over the last 30 years, experimental research provided evidence that word processing is distributed across the perceptual span, which permits recognition of the fixated (foveal) word as well as preview of parafoveal words to the right of fixation. However, theoretical models have been unable to differentiate the specific influences of foveal and parafoveal information on saccade control. Here we show how parafoveal word difficulty modulates spatial and temporal control of gaze in a computational model to reproduce experimental results. In a fully Bayesian framework, we estimated model parameters for different models of parafoveal processing and carried out large-scale predictive simulations and model comparisons for a gaze-contingent reading experiment. We conclude that mathematical modeling of data from gaze-contingent experiments permits the precise identification of pathways from parafoveal information processing to gaze control, uncovering potential mechanisms underlying the parafoveal contribution to eye-movement control.
Portal Wissen = Exzellenz
(2023)
What is not merely good or very good, we like to call excellent. But what does that actually mean? Derived from the Latin "excellere," it describes things, persons, or actions that "stand out" or "rise above" the crowd and "distinguish" themselves from others. It does not get any better than that. Excellence is the means of choice when it comes to being the first or the best. And research is no exception. Anyone who looks at the University of Potsdam will find numerous outstanding researchers, excellent projects, and, time and again, remarkable findings, publications, and results.
But is the UP itself excellent? A question that certainly makes quite different waves in 2023 than it might have 20 years ago. Since the launch of the Excellence Initiative in 2005, those universities have been considered, quite literally, excellent that succeed in securing funding in Germany's most comprehensive research funding program. Whether in the form of graduate schools, research clusters, or, since the program's continuation in 2019 under the title "Excellence Strategy," entire universities of excellence: Anyone who wants to be among the best research universities needs the seal of excellence. In the newly opened competition round of the "Excellence Strategy of the German Federal and State Governments," the University of Potsdam is applying for funding with three draft cluster proposals.
One proposal comes from ecology and biodiversity research. Its aim is to draw a complex picture of ecological processes, considering the role of single individuals as well as the interplay of many species in an ecosystem, in order to determine the function of biodiversity more precisely. A second outline was submitted by the cognitive sciences. Here, the complex coexistence of language and cognition, development and learning, and motivation and behavior is to be studied as a dynamic interplay, in cooperation with the educational sciences so that linked learning and educational processes are always taken into account. The third proposal, from the geo- and environmental sciences, focuses on extreme and particularly consequential natural hazards and processes such as floods and droughts. The researchers study these extreme events with a special focus on their interaction with society, in order to better assess the associated risks and damages and to be able to take timely measures in the future.
"All three proposals paint an excellent picture of our capabilities," emphasizes University President Prof. Oliver Günther, Ph.D. "The outlines impressively document our commitment, our existing research excellence, and the potential of the University of Potsdam as a whole. The very fact that three strong consortia have come together in quite different subject areas shows that we have taken a good step forward on our way into the top group of German universities."
In this issue, we look at what lies in and behind these proposals: We talked to the researchers who wrote them and asked what they intend to do should they succeed and bring a cluster to the university. But we also looked at the research that led to the proposals, research that has long shaped the university's profile and earned it national and international recognition. We present a small selection of projects, methods, and researchers to show why these proposals really do contain excellent research! By the way, even "excellence" is not the end of the line; after all, the adjective "excellent" can be intensified even further. With that in mind, we wish you most excellent reading pleasure!
Portal Wissen = Excellence
(2023)
When something is not just good or very good, we often call it excellent. But what does that really mean? Coming from the Latin word “excellere,” it describes things, persons, or actions that are outstanding or superior and distinguish themselves from others. It cannot get any better. Excellence is the top choice for being the first or the best. Research is no exception.
At the university, you will find numerous exceptional researchers, outstanding projects, and, time and again, sensational findings, publications, and results. But is the University of Potsdam also excellent? A question that will certainly create a different stir in 2023 than it did perhaps 20 years ago. Since the launch of the Excellence Initiative in 2005, universities that succeed in winning the most comprehensive funding program for research in Germany have been considered – literally – excellent. Whether in the form of graduate schools, research clusters, or – since the program was continued in 2019 under the title “Excellence Strategy” – entire universities of excellence: Anyone who wants to be among the best research universities needs the seal of excellence.
The University of Potsdam is applying for funding with three cluster proposals in the recently launched new round of the “Excellence Strategy of the German Federal and State Governments.” One proposal comes from ecology and biodiversity research. The aim is to paint a comprehensive picture of ecological processes by examining the role of single individuals as well as the interactions among many species in an ecosystem to precisely determine the function of biodiversity. A second proposal has been submitted by the cognitive sciences. Here, the complex interplay of language and cognition, development and learning, as well as motivation and behavior will be researched as a dynamic interrelation. The projects will cooperate with the educational sciences so that related learning and educational processes are taken into account throughout. The third proposal, from the geo and environmental sciences, concentrates on extreme and particularly devastating natural hazards and processes such as floods and droughts. The researchers examine these extreme events, focusing on their interaction with society, in order to better assess the risks and damages they entail and to initiate timely countermeasures in the future.
“All three proposals highlight the excellence of our performance,” emphasizes University President Prof. Oliver Günther, Ph.D. “The outlines impressively document our commitment, existing research excellence, and the potential of the University of Potsdam as a whole. The fact that three powerful consortia have come together in different subject areas shows that we have taken a good step forward on our way to becoming one of the top German universities.”
In this issue, we are looking at what is in and behind these proposals: We talked to the researchers who wrote them. We asked them about their plans in case their proposals are successful and they bring a cluster of excellence to the university. But we also looked at the research that has led to the proposals, has long shaped the university’s profile, and earned it national and international recognition. We present a small selection of projects, methods, and researchers to illustrate why there really is excellent research in these proposals!
By the way, even “excellence” is not the end of the line. After all, the adjective “excellent” can still be intensified: it has a comparative and a superlative. With this in mind, we wish you the most excellent pleasure reading this issue!
When we fixate our gaze on a stationary object, our eyes nevertheless move continuously, performing extremely small involuntary and autonomous movements of which we are unaware while they occur. One role of these fixational eye movements is to prevent the visual system from adapting to continuous illumination and thus to inhibit perceptual fading of the image. These small, random movements are constrained at long time scales so as to keep the target at the centre of the field of view. In addition, the synchronisation between the two eyes is related to binocular coordination, which underlies stereopsis. We investigated the behaviour at different time scales, in particular how it is expressed in the different spatial directions (vertical versus horizontal), and we tested the synchronisation between the two eyes. The results show different scaling behaviour for horizontal and vertical movements. When the small ballistic movements, i.e., microsaccades, are removed, the scaling behaviour along both axes becomes similar. Our findings suggest that microsaccades enhance persistence at short time scales mostly in the horizontal component and much less in the vertical component. We also applied the phase-synchronisation decay method to study the synchronisation between six combinations of binocular fixational eye-movement components. We found that the vertical-vertical components of the right and left eyes are significantly more synchronised than the horizontal-horizontal components. These differences may reflect the need to continuously move the eyes in the horizontal plane in order to match the stereoscopic images at different viewing distances.
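The scaling behaviour described in this abstract is typically quantified by how the displacement of the eye grows with the time lag. A minimal sketch of such an analysis, using a simulated random walk as a stand-in for real eye-position data (the trace and all parameters here are illustrative, not the authors' data or code):

```python
import numpy as np

rng = np.random.default_rng(0)
# simulated 1-D eye-position trace: a plain random walk stands in
# for a real fixational eye-movement recording (illustrative only)
pos = np.cumsum(rng.normal(0.0, 1.0, 10_000))

def msd(trace, lags):
    """Mean-squared displacement of the trace at each time lag."""
    return np.array([np.mean((trace[lag:] - trace[:-lag]) ** 2) for lag in lags])

lags = np.unique(np.logspace(0, 3, 20).astype(int))
d = msd(pos, lags)

# scaling exponent = slope of log(MSD) versus log(lag);
# ~1 for an uncorrelated walk, >1 persistent, <1 anti-persistent
alpha = np.polyfit(np.log(lags), np.log(d), 1)[0]
```

Comparing such exponents for the horizontal and vertical components, with and without microsaccades, is one way to express the persistence differences the abstract reports.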
The launch-site effect, a systematic variation of within-word landing position as a function of launch-site distance, is among the most important oculomotor phenomena in reading. Here we show that the launch-site effect is strongly modulated in word skipping, a finding which is inconsistent with the view that the launch-site effect is caused by a saccadic-range error. We observe that distributions of landing positions in skipping saccades show an increased leftward shift compared to non-skipping saccades at equal launch-site distances. Using an improved algorithm for the estimation of mislocated fixations, we demonstrate the reliability of our results.
Using a serial search paradigm, we observed several effects of within-object fixation position on the spatial and temporal control of eye movements: the preferred viewing location, the launch-site effect, the optimal viewing position, and the inverted optimal-viewing-position effect on fixation duration. While these effects were first identified in eye-movement studies of reading, our approach permits an analysis of the functional relationships between the effects in a different paradigm. Our results demonstrate that fixation position is an important predictor of the subsequent saccade, influencing both fixation duration and the selection of the next saccade target.
In this paper we apply symbolic transformations as a visualisation technique for analysing rhythm production. It is shown that qualitative information can be extracted from the experimental data. This approach may provide new insights into the organisation of temporal order by the brain on different levels of description. A simple phenomenological model for the explanation of the observed phenomena is proposed.
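A common form of symbolic transformation is to encode a time series of produced intervals into a small alphabet and inspect the frequencies of short symbol "words." This sketch illustrates the general idea only; the data, alphabet, and word length are assumptions, not the paper's method:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
# hypothetical inter-tap intervals (ms) from a rhythm-production task
intervals = rng.normal(500, 30, 200)

# symbolic transformation: encode each interval relative to the median
# '0' = shorter than the median, '1' = longer
med = np.median(intervals)
symbols = ''.join('1' if x > med else '0' for x in intervals)

# frequencies of all 3-symbol words reveal qualitative temporal structure
words = Counter(symbols[i:i + 3] for i in range(len(symbols) - 2))
```

Over- or under-represented words relative to a shuffled baseline would indicate temporal order beyond what the raw intervals show at a glance.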
Active motor processes are present in many sensory systems to enhance perception. In the human visual system, miniature eye movements are produced involuntarily and unconsciously when we fixate a stationary target. These fixational eye movements represent self-generated noise which serves important perceptual functions. Here we investigate fixational eye movements under the influence of external noise. In a two-choice discrimination task, the target stimulus performed a random walk with varying noise intensity. We observe noise-enhanced discrimination of the target stimulus characterized by a U-shaped curve of manual response times as a function of the diffusion constant of the stimulus. Based on the experiments, we develop a stochastic information-accumulator model for stimulus discrimination in a noisy environment. Our results provide a new explanation for the constructive role of fixational eye movements in visual perception.
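A stochastic information-accumulator of the general kind described here can be sketched as a noisy evidence variable that drifts toward a decision threshold; the first-passage time yields a response time. This is a generic drift-diffusion sketch with made-up parameters, not the authors' model:

```python
import numpy as np

def accumulate(drift, noise_sd, threshold=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one trial of a noisy evidence accumulator.
    Returns (first-passage time in s, choice sign +1/-1)."""
    if rng is None:
        rng = np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        # Euler-Maruyama step: deterministic drift plus diffusion noise
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.normal()
        t += dt
    return t, np.sign(x)

rng = np.random.default_rng(2)
rts = [accumulate(0.8, 1.0, rng=rng)[0] for _ in range(200)]
```

Varying the noise intensity in such a model and plotting mean response time against it is one way to reproduce a U-shaped, noise-enhanced discrimination curve.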
In an influential theoretical model, human sensorimotor control is achieved by a Bayesian decision process, which combines noisy sensory information and learned prior knowledge. A ubiquitous signature of prior knowledge and Bayesian integration in human perception and motor behavior is the frequently observed bias toward an average stimulus magnitude (i.e., a central-tendency bias, range effect, regression-to-the-mean effect). However, in the domain of eye movements, there is a recent controversy about the fundamental existence of a range effect in the saccadic system. Here we argue that the problem of the existence of a range effect is linked to the availability of prior knowledge for saccade control. We present results from two prosaccade experiments that both employ an informative prior structure (i.e., a nonuniform Gaussian distribution of saccade target distances). Our results demonstrate the validity of Bayesian integration in saccade control, which generates a range effect in saccades. According to Bayesian integration principles, the saccadic range effect depends on the availability of prior knowledge and varies in size as a function of the reliability of the prior and the sensory likelihood.
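The Bayesian integration the abstract describes reduces, for Gaussian prior and likelihood, to a reliability-weighted average: the posterior mean is pulled toward the prior mean, producing exactly the central-tendency bias discussed. A minimal sketch with illustrative numbers (eccentricities and standard deviations are assumptions, not the experiment's values):

```python
def posterior_mean(sensory, sensory_sd, prior_mean, prior_sd):
    """Posterior mean for Gaussian prior x Gaussian likelihood:
    a precision-weighted average of sensory evidence and prior."""
    w = (1 / sensory_sd**2) / (1 / sensory_sd**2 + 1 / prior_sd**2)
    return w * sensory + (1 - w) * prior_mean

# with a prior centred at 8 deg, a far target (10 deg) is pulled down
# and a near target (6 deg) is pulled up -> central-tendency bias
far = posterior_mean(10.0, 1.0, 8.0, 2.0)   # 9.6, undershoots 10
near = posterior_mean(6.0, 1.0, 8.0, 2.0)   # 6.4, overshoots 6
```

Note how the size of the bias depends on the two standard deviations: a broader prior (larger `prior_sd`) shrinks the weight on the prior and hence the range effect, matching the reliability dependence stated in the abstract.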
Saccades to single targets in peripheral vision are typically characterized by an undershoot bias. Putting this bias to a test, Kapoula [1] used a paradigm in which observers were presented with two different sets of target eccentricities that partially overlapped each other. Her data were suggestive of a saccadic range effect (SRE): There was a tendency for saccades to overshoot close targets and undershoot far targets in a block, suggesting that there was a response bias towards the center of eccentricities in a given block. Our Experiment 1 was a close replication of the original study by Kapoula [1]. In addition, we tested whether the SRE is sensitive to top-down requirements associated with the task, and we also varied the target presentation duration. In Experiments 1 and 2, we expected to replicate the SRE for a visual discrimination task. The simple visual saccade-targeting task in Experiment 3, entailing minimal top-down influence, was expected to elicit a weaker SRE. Voluntary saccades to remembered target locations in Experiment 3 were expected to elicit the strongest SRE. Contrary to these predictions, we did not observe a SRE in any of the tasks. Our findings complement the results reported by Gillen et al. [2] who failed to find the effect in a saccade-targeting task with a very brief target presentation. Together, these results suggest that unlike arm movements, saccadic eye movements are not biased towards making saccades of a constant, optimal amplitude for the task.