Portal Wissen = Excellence
(2023)
When something is not just good or very good, we often call it excellent. But what does that really mean? Coming from the Latin word “excellere,” it describes things, persons, or actions that are outstanding or superior and distinguish themselves from others. It cannot get any better. Excellence is the top choice for being the first or the best. Research is no exception.
At the university, you will find numerous exceptional researchers, outstanding projects, and, time and again, sensational findings, publications, and results. But is the University of Potsdam also excellent? A question that will certainly create a different stir in 2023 than it did perhaps 20 years ago. Since the launch of the Excellence Initiative in 2005, universities that succeed in winning the most comprehensive funding program for research in Germany have been considered – literally – excellent. Whether in the form of graduate schools, research clusters, or – since the program was continued in 2019 under the title “Excellence Strategy” – entire universities of excellence: Anyone who wants to be among the best research universities needs the seal of excellence.
The University of Potsdam is applying for funding with three cluster proposals in the recently launched new round of the “Excellence Strategy of the German Federal and State Governments.” One proposal comes from ecology and biodiversity research. The aim is to paint a comprehensive picture of ecological processes by examining the role of single individuals as well as the interactions among many species in an ecosystem to precisely determine the function of biodiversity. A second proposal has been submitted by the cognitive sciences. Here, the complex coexistence of language and cognition, development and learning, as well as motivation and behavior will be researched as a dynamic interrelation. The projects will include cooperation with the educational sciences to constantly consider linked learning and educational processes. The third proposal from the geo and environmental sciences concentrates on extreme and particularly devastating natural hazards and processes such as floods and droughts. The researchers examine these extreme events, focusing on their interaction with society, to be able to better assess the risks and damages they might involve and to initiate timely measures in the future.
“All three proposals highlight the excellence of our performance,” emphasizes University President Prof. Oliver Günther, Ph.D. “The outlines impressively document our commitment, existing research excellence, and the potential of the University of Potsdam as a whole. The fact that three powerful consortia have come together in different subject areas shows that we have taken a good step forward on our way to becoming one of the top German universities.”
In this issue, we are looking at what is in and behind these proposals: We talked to the researchers who wrote them. We asked them about their plans in case their proposals are successful and they bring a cluster of excellence to the university. But we also looked at the research that has led to the proposals, has long shaped the university’s profile, and earned it national and international recognition. We present a small selection of projects, methods, and researchers to illustrate why there really is excellent research in these proposals!
By the way, “excellence” is not the end of the line either. After all, the adjective “excellent” even has a comparative and a superlative. With this in mind, we wish you the most excellent pleasure reading this issue!
Dynamical models make specific assumptions about the cognitive processes that generate human behavior. In data assimilation, these models are tested against time-ordered data. Recent progress on Bayesian data assimilation demonstrates that this approach combines the strengths of statistical modeling of individual differences with those of dynamical cognitive models.
Sequential data assimilation of the stochastic SEIR epidemic model for regional COVID-19 dynamics
(2021)
Newly emerging pandemics like COVID-19 call for predictive models to implement precisely tuned responses to limit their deep impact on society. Standard epidemic models provide a theoretically well-founded dynamical description of disease incidence. For COVID-19 with infectiousness peaking before and at symptom onset, the SEIR model explains the hidden build-up of exposed individuals which creates challenges for containment strategies. However, spatial heterogeneity raises questions about the adequacy of modeling epidemic outbreaks on the level of a whole country. Here, we show that by applying sequential data assimilation to the stochastic SEIR epidemic model, we can capture the dynamic behavior of outbreaks on a regional level. Regional modeling, with relatively low numbers of infected and demographic noise, accounts for both spatial heterogeneity and stochasticity. Based on adapted models, short-term predictions can be achieved. Thus, with the help of these sequential data assimilation methods, more realistic epidemic models are within reach.
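As a rough illustration of the modeling approach described above, the following sketch simulates a discrete-time stochastic SEIR model with demographic (binomial) noise. All parameter values are illustrative and not taken from the paper, and the sequential data-assimilation step itself is omitted.

```python
import numpy as np

def seir_step(S, E, I, R, beta, sigma, gamma, N, rng):
    """One day of a stochastic SEIR model with demographic (binomial) noise.

    beta: transmission rate, sigma: 1/latent period, gamma: 1/infectious period.
    """
    new_E = rng.binomial(S, 1.0 - np.exp(-beta * I / N))  # S -> E (exposure)
    new_I = rng.binomial(E, 1.0 - np.exp(-sigma))         # E -> I (symptom onset)
    new_R = rng.binomial(I, 1.0 - np.exp(-gamma))         # I -> R (recovery)
    return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

rng = np.random.default_rng(1)
N = 100_000                      # regional population (illustrative)
S, E, I, R = N - 20, 10, 10, 0   # small seeded outbreak
for day in range(60):
    S, E, I, R = seir_step(S, E, I, R, beta=0.4, sigma=1/5.5, gamma=1/3,
                           N=N, rng=rng)
assert S + E + I + R == N        # compartments conserve the population
```

With small regional counts, the binomial noise makes each run different, which is exactly the stochasticity that sequential data assimilation must handle.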
Skilled reading requires information processing of the fixated and the not-yet-fixated words to generate precise control of gaze. Over the last 30 years, experimental research provided evidence that word processing is distributed across the perceptual span, which permits recognition of the fixated (foveal) word as well as preview of parafoveal words to the right of fixation. However, theoretical models have been unable to differentiate the specific influences of foveal and parafoveal information on saccade control. Here we show how parafoveal word difficulty modulates spatial and temporal control of gaze in a computational model to reproduce experimental results. In a fully Bayesian framework, we estimated model parameters for different models of parafoveal processing and carried out large-scale predictive simulations and model comparisons for a gaze-contingent reading experiment. We conclude that mathematical modeling of data from gaze-contingent experiments permits the precise identification of pathways from parafoveal information processing to gaze control, uncovering potential mechanisms underlying the parafoveal contribution to eye-movement control.
In eye-movement control during reading, advanced process-oriented models have been developed to reproduce behavioral data. So far, model complexity and large numbers of model parameters prevented rigorous statistical inference and modeling of interindividual differences. Here we propose a Bayesian approach to both problems for one representative computational model of sentence reading (SWIFT; Engbert et al., Psychological Review, 112, 2005, pp. 777-813). We used experimental data from 36 subjects who read the text in a normal layout and in one of four manipulated text layouts (e.g., mirrored and scrambled letters). The SWIFT model was fitted to subjects and experimental conditions individually to investigate between-subject variability. Based on posterior distributions of model parameters, fixation probabilities and durations are reliably recovered from simulated data and reproduced for withheld empirical data, at both the experimental-condition and subject levels. A subsequent statistical analysis of model parameters across reading conditions generates model-driven explanations for observable effects between conditions.
Lisa Schwetlick et al. present a computational model linking visual scan path generation in scene viewing to physiological and experimental work on perisaccadic covert attention, the act of attending to an object visually without overtly moving the eyes toward it. They find that integrating covert attention into predictive models of visual scan paths greatly improves the model's agreement with experimental data.
How we perceive a visual scene depends critically on the selection of gaze positions. In this selection process, visual attention is known to play a key role in two ways. First, image features attract visual attention, a fact that is captured well by time-independent fixation models. Second, millisecond-level attentional dynamics around the time of a saccade drive our gaze from one position to the next. These two related research areas on attention are typically treated as separate, both theoretically and experimentally. Here we link the two research areas by demonstrating that perisaccadic attentional dynamics improve predictions of scan path statistics. In a mathematical model, we integrated perisaccadic covert attention with dynamic scan path generation. Our model reproduces saccade amplitude distributions, angular statistics, intersaccadic turning angles, and their impact on fixation durations, as well as inter-individual differences, using Bayesian inference. Therefore, our results lend support to the relevance of perisaccadic attention to gaze statistics.
The interplay between cognitive and oculomotor processes during reading can be explored when the spatial layout of text deviates from the typical display. In this study, we investigate various eye-movement measures during reading of text with experimentally manipulated layout (word-wise and letter-wise mirrored-reversed text as well as inverted and scrambled text). While typical findings (e.g., longer mean fixation times, shorter mean saccade lengths) in reading manipulated texts compared to normal texts were reported in earlier work, little is known about changes in oculomotor targeting, as observed in within-word landing positions, under these text layouts. Here we carry out precise analyses of landing positions and find substantial changes in the so-called launch-site effect in addition to the expected overall slow-down of reading performance. Specifically, during reading of our manipulated text conditions with reversed letter order (against overall reading direction), we find a reduced launch-site effect, while in all other manipulated text conditions, we observe an increased launch-site effect. Our results clearly indicate that the oculomotor system is highly adaptive when confronted with unusual reading conditions.
In an influential theoretical model, human sensorimotor control is achieved by a Bayesian decision process, which combines noisy sensory information and learned prior knowledge. A ubiquitous signature of prior knowledge and Bayesian integration in human perception and motor behavior is the frequently observed bias toward an average stimulus magnitude (i.e., a central-tendency bias, range effect, regression-to-the-mean effect). However, in the domain of eye movements, there is a recent controversy about the fundamental existence of a range effect in the saccadic system. Here we argue that the problem of the existence of a range effect is linked to the availability of prior knowledge for saccade control. We present results from two prosaccade experiments that both employ an informative prior structure (i.e., a nonuniform Gaussian distribution of saccade target distances). Our results demonstrate the validity of Bayesian integration in saccade control, which generates a range effect in saccades. According to Bayesian integration principles, the saccadic range effect depends on the availability of prior knowledge and varies in size as a function of the reliability of the prior and the sensory likelihood.
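The Bayesian integration principle behind this account fits in a few lines: with a Gaussian prior and a Gaussian sensory likelihood, the posterior mean is a reliability-weighted average of measurement and prior mean, which produces the central-tendency bias. The numbers below (prior mean, variances) are purely illustrative, not values from the experiments.

```python
# Posterior mean for a Gaussian prior over target distance combined with a
# noisy Gaussian sensory measurement; weights are inverse variances.
def posterior_mean(x, sigma2, mu0, tau2):
    w = (1 / sigma2) / (1 / sigma2 + 1 / tau2)  # reliability of the likelihood
    return w * x + (1 - w) * mu0

mu0 = 8.0            # prior mean of target eccentricity (deg), illustrative
far = posterior_mean(x=12.0, sigma2=4.0, mu0=mu0, tau2=4.0)
assert mu0 < far < 12.0    # planned amplitude regresses toward the prior mean
# A noisier measurement (larger sigma2) yields a stronger central-tendency bias:
noisy = posterior_mean(x=12.0, sigma2=16.0, mu0=mu0, tau2=4.0)
assert abs(noisy - mu0) < abs(far - mu0)
```

The size of the bias thus depends on the relative reliability of prior and likelihood, which is the prediction tested in the two prosaccade experiments.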
During reading, rapid eye movements (saccades) shift the reader's line of sight from one word to another for high-acuity visual information processing. While experimental data and theoretical models show that readers aim at word centers, the eye-movement (oculomotor) accuracy is low compared to other tasks. As a consequence, distributions of saccadic landing positions indicate large (i) random errors and (ii) systematic over- and undershoot of word centers, which additionally depend on saccade lengths (McConkie et al., Vision Research, 28(10), 1107-1118, 1988). Here we show that both error components can be simultaneously reduced by reading German texts from right to left (N = 32). We used our experimental data to test a Bayesian model of saccade planning. First, experimental data are consistent with the model. Second, the model makes specific predictions of the effects of the precision of the prior and the (sensory) likelihood. Our results suggest that a more precise sensory likelihood can explain the reduction of both random and systematic error components.
When studying how people search for objects in scenes, the inhomogeneity of the visual field is often ignored. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters. Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
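A gaze-contingent low-pass manipulation of the kind used in this study can be sketched as a blend between the original image and a blurred copy, weighted by distance from the current gaze position. This is a minimal stand-in (box blur, Gaussian blending weight), not the actual spatial-frequency filters used in the experiment.

```python
import numpy as np

def box_blur(img, k=7):
    """Crude separable box blur as a stand-in for low-pass filtering."""
    ker = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, ker, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode="same"), 0, out)

def gaze_contingent(img, gaze, radius=30.0):
    """Keep high frequencies near gaze; low-pass the periphery."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    d2 = (ys - gaze[0]) ** 2 + (xs - gaze[1]) ** 2
    w = np.exp(-d2 / (2 * radius ** 2))   # 1 at gaze, -> 0 in the periphery
    return w * img + (1 - w) * box_blur(img)

img = np.random.default_rng(0).uniform(size=(128, 128))
out = gaze_contingent(img, gaze=(64, 64))
# Near the gaze the image is nearly unfiltered; far away it is smoothed.
center_err = np.abs(out - img)[60:68, 60:68].mean()
periph_err = np.abs(out - img)[:16, :16].mean()
assert center_err < periph_err
```

In the real paradigm the filter window is updated online from the eye tracker's gaze sample on every display refresh; the central high-pass condition would swap the roles of the filtered and unfiltered regions.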
Process-oriented theories of cognition must be evaluated against time-ordered observations. Here we present a representative example for data assimilation of the SWIFT model, a dynamical model of the control of fixation positions and fixation durations during natural reading of single sentences. First, we develop and test an approximate likelihood function of the model, which is a combination of a spatial, pseudo-marginal likelihood and a temporal likelihood obtained by probability density approximation. Second, we implement a Bayesian approach to parameter inference using an adaptive Markov chain Monte Carlo procedure. Our results indicate that model parameters can be estimated reliably for individual subjects. We conclude that approximate Bayesian inference represents a considerable step forward for computational models of eye-movement control, where modeling of individual data on the basis of process-based dynamic models has not been possible so far.
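A generic random-walk Metropolis sampler conveys the flavor of the parameter-inference step. The toy Gaussian log-likelihood below merely stands in for SWIFT's approximate likelihood, and the adaptive proposal tuning used in the paper is omitted.

```python
import numpy as np

# Toy stand-in for an (approximate) log-likelihood of a scalar model
# parameter given data -- here a Gaussian centered on the true value 2.0.
def log_like(theta, data):
    return -0.5 * np.sum((data - theta) ** 2)

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=200)

theta, samples, scale = 0.0, [], 0.5
for _ in range(5000):
    prop = theta + rng.normal(0, scale)     # symmetric random-walk proposal
    # Metropolis acceptance: compare log-likelihoods (flat prior assumed)
    if np.log(rng.uniform()) < log_like(prop, data) - log_like(theta, data):
        theta = prop
    samples.append(theta)

estimate = np.mean(samples[1000:])          # posterior mean after burn-in
```

For this toy target the posterior mean coincides with the sample mean of the data; in the paper, each likelihood evaluation instead requires simulating the dynamical model, which is what makes the pseudo-marginal construction necessary.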
Author summary: Switching between local and global attention is a general strategy in human information processing. We investigate whether this strategy is a viable approach to model sequences of fixations generated by a human observer in a free-viewing task with natural scenes. Variants of the basic model are used to predict the experimental data based on Bayesian inference. Results indicate a high predictive power for both aggregated data and individual differences across observers. The combination of a novel model with state-of-the-art Bayesian methods lends support to our two-state model using local and global internal attention states for controlling eye movements.
Understanding the decision process underlying gaze control is an important question in cognitive neuroscience, with applications in diverse fields ranging from psychology to computer vision. The decision for choosing an upcoming saccade target can be framed as a selection process between two states: Should the observer further inspect the information near the current gaze position (local attention) or continue with exploration of other patches of the given scene (global attention)? Here we propose and investigate a mathematical model motivated by switching between these two attentional states during scene viewing. The model is derived from a minimal set of assumptions that generates realistic eye-movement behavior. We implemented a Bayesian approach to model parameter inference based on the model's likelihood function. To simplify the inference, we applied data augmentation methods that allowed the use of conjugate priors and the construction of an efficient Gibbs sampler. This approach turned out to be numerically efficient and permitted fitting interindividual differences in saccade statistics. Thus, the main contribution of our modeling approach is twofold: First, we propose a new model for saccade generation in scene viewing. Second, we demonstrate the use of novel methods from Bayesian inference in the field of scan path modeling.
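The local/global switching idea can be caricatured in a few lines: a two-state Markov chain that draws short saccades in the local state and long ones in the global state. Transition probabilities and mean amplitudes below are invented for illustration and are not the fitted model.

```python
import numpy as np

rng = np.random.default_rng(3)
p_stay = {"local": 0.7, "global": 0.6}   # illustrative state-persistence probs
ampl   = {"local": 1.0, "global": 6.0}   # mean saccade amplitude (deg) per state

state, pos, amplitudes = "local", np.zeros(2), []
for _ in range(500):
    if rng.uniform() > p_stay[state]:                 # switch attentional state
        state = "global" if state == "local" else "local"
    step = rng.exponential(ampl[state])               # saccade amplitude
    angle = rng.uniform(0, 2 * np.pi)                 # saccade direction
    pos = pos + step * np.array([np.cos(angle), np.sin(angle)])
    amplitudes.append((state, step))

local_mean = np.mean([a for s, a in amplitudes if s == "local"])
global_mean = np.mean([a for s, a in amplitudes if s == "global"])
assert local_mean < global_mean   # inspection vs. exploration signature
```

The resulting amplitude distribution is a mixture of a short-saccade and a long-saccade component, which is the qualitative pattern the two-state model is designed to capture; the actual model infers state sequences and parameters via the Gibbs sampler described above.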
Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-word scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control from the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device and raw gaze data were transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm.
Scene viewing is used to study attentional selection in complex but still controlled environments. One of the main observations on eye movements during scene viewing is the inhomogeneous distribution of fixation locations: While some parts of an image are fixated by almost all observers and are inspected repeatedly by the same observer, other image parts remain unfixated by observers even after long exploration intervals. Here, we apply spatial point process methods to investigate the relationship between pairs of fixations. More precisely, we use the pair correlation function, a powerful statistical tool, to evaluate dependencies between fixation locations along individual scanpaths. We demonstrate that aggregation of fixation locations within 4 degrees is stronger than expected from chance. Furthermore, the pair correlation function reveals stronger aggregation of fixations when the same image is presented a second time. We use simulations of a dynamical model to show that a narrower spatial attentional span may explain differences in pair correlations between the first and the second inspection of the same image.
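The basic signature that the pair correlation analysis picks up, more fixation pairs at short distances than expected under complete spatial randomness (CSR), can be illustrated without the full point-process machinery. The clustered point pattern below is synthetic; the 4-degree radius echoes the aggregation scale reported above.

```python
import numpy as np

rng = np.random.default_rng(7)

def frac_pairs_within(points, r):
    """Fraction of distinct point pairs closer than r."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    n = len(points)
    return (np.sum(d < r) - n) / (n * (n - 1))  # drop the zero self-distances

# Clustered "fixations": points scattered around a few attractor locations
# in a 30 x 30 deg image region.
centers = rng.uniform(0, 30, size=(5, 2))
fix = centers[rng.integers(0, 5, 200)] + rng.normal(0, 1.5, size=(200, 2))
# CSR surrogate: same number of points, uniform over the same region.
csr = rng.uniform(0, 30, size=(200, 2))

assert frac_pairs_within(fix, 4.0) > frac_pairs_within(csr, 4.0)
```

The pair correlation function refines this single number into a curve over distance (with edge corrections), but the comparison against a CSR baseline is the same in spirit.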
Bottom-up and top-down as well as low-level and high-level factors influence where we fixate when viewing natural scenes. However, the importance of each of these factors and how they interact remains a matter of debate. Here, we disentangle these factors by analyzing their influence over time. For this purpose, we develop a saliency model that is based on the internal representation of a recent early spatial vision model to measure the low-level, bottom-up factor. To measure the influence of high-level, bottom-up features, we use a recent deep neural network-based saliency model. To account for top-down influences, we evaluate the models on two large data sets with different tasks: first, a memorization task and, second, a search task. Our results lend support to a separation of visual scene exploration into three phases: the first saccade, an initial guided exploration characterized by a gradual broadening of the fixation density, and a steady state that is reached after roughly 10 fixations. Saccade-target selection during the initial exploration and in the steady state is related to similar areas of interest, which are better predicted when including high-level features. In the search data set, fixation locations are determined predominantly by top-down processes. In contrast, the first fixation follows a different fixation density and contains a strong central fixation bias. Nonetheless, first fixations are guided strongly by image properties, and as early as 200 ms after image onset, fixations are better predicted by high-level information. We conclude that any low-level, bottom-up factors are mainly limited to the generation of the first saccade. All saccades are better explained when high-level features are considered, and later, this high-level, bottom-up control can be overruled by top-down influences.
When searching a target in a natural scene, it has been shown that both the target’s visual properties and similarity to the background influence whether and how fast humans are able to find it. So far, it was unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequency content throughout eight consecutive sessions. High-spatial frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial frequency targets if target identity was known. If a saccade was programmed in the same direction as the previous saccade, fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical fixation density at the endpoints of saccades which maintain direction were comparatively low, indicating that these saccades were less selective. Our results suggest that searchers adjust their eye movement dynamics to the search target efficiently, since previous research has shown that low-spatial frequencies are visible farther into the periphery than high-spatial frequencies. We interpret the saccade direction specificity of our effects as an underlying separation into a default scanning mechanism and a selective, target-dependent mechanism.
When watching the image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image—a reliable experimental finding termed central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. In four scene-viewing experiments we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to the onset of an image. We analyzed the distance to image center over time and show that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target strongly depended on the first saccade latency. A previously published model of saccade generation was extended with a central activation map on the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias from our experiments. Our results suggest that the central fixation bias is generated by default activation as a response to the sudden image onset and that this default activation pattern decreases over time. Thus, it may often be preferable to use a modified version of the scene viewing paradigm that decouples image onset from the start signal for scene exploration to explicitly reduce the central fixation bias.