TY - JOUR
A1 - Adam, Maurits
A1 - Elsner, Birgit
T1 - The impact of salient action effects on 6-, 7-, and 11-month-olds’ goal-predictive gaze shifts for a human grasping action
JF - PLOS ONE
N2 - When infants observe a human grasping action, experience-based accounts predict that all infants familiar with grasping actions should be able to predict the goal regardless of additional agency cues such as an action effect. Cue-based accounts, however, suggest that infants use agency cues to identify and predict action goals when the action or the agent is not familiar. From these accounts, we hypothesized that younger infants would need additional agency cues such as a salient action effect to predict the goal of a human grasping action, whereas older infants should be able to predict the goal regardless of agency cues. In three experiments, we presented 6-, 7-, and 11-month-olds with videos of a manual grasping action presented either with or without an additional salient action effect (Exp. 1 and 2), or we presented 7-month-olds with videos of a mechanical claw performing a grasping action with a salient action effect (Exp. 3). The 6-month-olds showed tracking gaze behavior, and the 11-month-olds showed predictive gaze behavior, regardless of the action effect. However, the 7-month-olds showed predictive gaze behavior in the action-effect condition, but tracking gaze behavior in the no-action-effect condition and in the action-effect condition with a mechanical claw. The results therefore support the idea that salient action effects are especially important for infants' goal predictions from 7 months on, and that this facilitating influence of action effects is selective for the observation of human hands.
KW - attention
KW - eye movements
KW - infants perception
KW - mechanisms
KW - origins
Y1 - 2020
U6 - https://doi.org/10.1371/journal.pone.0240165
SN - 1932-6203
VL - 15
IS - 10
PB - Public Library of Science
CY - San Francisco
ER -
TY - JOUR
A1 - Barthelme, Simon
A1 - Trukenbrod, Hans Arne
A1 - Engbert, Ralf
A1 - Wichmann, Felix A.
T1 - Modeling fixation locations using spatial point processes
JF - Journal of vision
N2 - Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they did. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation.
KW - eye movements
KW - fixation locations
KW - saliency
KW - modeling
KW - point process
KW - spatial statistics
Y1 - 2013
U6 - https://doi.org/10.1167/13.12.1
SN - 1534-7362
VL - 13
IS - 12
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -
TY - JOUR
A1 - Cajar, Anke
A1 - Engbert, Ralf
A1 - Laubrock, Jochen
T1 - Potsdam Eye-Movement Corpus for Scene Memorization and Search With Color and Spatial-Frequency Filtering
JF - Frontiers in psychology
KW - eye movements
KW - corpus dataset
KW - scene viewing
KW - object search
KW - scene memorization
KW - spatial frequencies
KW - color
KW - central and peripheral vision
Y1 - 2022
U6 - https://doi.org/10.3389/fpsyg.2022.850482
SN - 1664-1078
VL - 13
SP - 1
EP - 7
PB - Frontiers Research Foundation
CY - Lausanne, Schweiz
ER -
TY - JOUR
A1 - Cajar, Anke
A1 - Engbert, Ralf
A1 - Laubrock, Jochen
T1 - How spatial frequencies and color drive object search in real-world scenes
BT - a new eye-movement corpus
JF - Journal of vision
N2 - Studies of how people search for objects in scenes often ignore the inhomogeneity of the visual field. Due to physiological limitations, peripheral vision is blurred and mainly uses coarse-grained information (i.e., low spatial frequencies) for selecting saccade targets, whereas high-acuity central vision uses fine-grained information (i.e., high spatial frequencies) for analysis of details. Here we investigated how spatial frequencies and color affect object search in real-world scenes. Using gaze-contingent filters, we attenuated high or low frequencies in central or peripheral vision while viewers searched color or grayscale scenes. Results showed that peripheral filters and central high-pass filters hardly affected search accuracy, whereas accuracy dropped drastically with central low-pass filters.
Peripheral filtering increased the time to localize the target by decreasing saccade amplitudes and increasing number and duration of fixations. The use of coarse-grained information in the periphery was limited to color scenes. Central filtering increased the time to verify target identity instead, especially with low-pass filters. We conclude that peripheral vision is critical for object localization and central vision is critical for object identification. Visual guidance during peripheral object localization is dominated by low-frequency color information, whereas high-frequency information, relatively independent of color, is most important for object identification in central vision.
KW - scene viewing
KW - eye movements
KW - object search
KW - central and peripheral vision
KW - spatial frequencies
KW - color
KW - gaze-contingent displays
Y1 - 2020
U6 - https://doi.org/10.1167/jov.20.7.8
SN - 1534-7362
VL - 20
IS - 7
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -
TY - JOUR
A1 - Cunnings, Ian
A1 - Patterson, Clare
A1 - Felser, Claudia
T1 - Structural constraints on pronoun binding and coreference: evidence from eye movements during reading
JF - Frontiers in psychology
N2 - A number of recent studies have investigated how syntactic and non-syntactic constraints combine to cue memory retrieval during anaphora resolution. In this paper we investigate how syntactic constraints and gender congruence interact to guide memory retrieval during the resolution of subject pronouns. Subject pronouns are always technically ambiguous, and the application of syntactic constraints on their interpretation depends on properties of the antecedent that is to be retrieved. While pronouns can freely corefer with non-quantified referential antecedents, linking a pronoun to a quantified antecedent is only possible in certain syntactic configurations via variable binding.
We report the results from a judgment task and three online reading comprehension experiments investigating pronoun resolution with quantified and non-quantified antecedents. Results from both the judgment task and participants' eye movements during reading indicate that comprehenders freely allow pronouns to corefer with non-quantified antecedents, but that retrieval of quantified antecedents is restricted to specific syntactic environments. We interpret our findings as indicating that syntactic constraints constitute highly weighted cues to memory retrieval during anaphora resolution.
KW - pronoun resolution
KW - memory retrieval
KW - quantification
KW - eye movements
KW - reading
KW - English
Y1 - 2015
U6 - https://doi.org/10.3389/fpsyg.2015.00840
SN - 1664-1078
VL - 6
PB - Frontiers Research Foundation
CY - Lausanne
ER -
TY - JOUR
A1 - Engbert, Ralf
A1 - Trukenbrod, Hans Arne
A1 - Barthelme, Simon
A1 - Wichmann, Felix A.
T1 - Spatial statistics and attentional dynamics in scene viewing
JF - Journal of vision
N2 - In humans and in foveated animals visual acuity is highly concentrated at the center of gaze, so that choosing where to look next is an important example of online, rapid decision-making. Computational neuroscientists have developed biologically-inspired models of visual attention, termed saliency maps, which successfully predict where people fixate on average. Using point process theory for spatial statistics, we show that scanpaths contain, however, important statistical structure, such as spatial clustering on top of distributions of gaze positions. Here, we develop a dynamical model of saccadic selection that accurately predicts the distribution of gaze positions as well as spatial clustering along individual scanpaths. Our model relies on, first, activation dynamics via spatially-limited (foveated) access to saliency information and, second, a leaky memory process controlling the re-inspection of target regions.
This theoretical framework models a form of context-dependent decision-making, linking neural dynamics of attention to behavioral gaze data.
KW - scene perception
KW - eye movements
KW - attention
KW - saccades
KW - modeling
KW - spatial statistics
Y1 - 2015
U6 - https://doi.org/10.1167/15.1.14
SN - 1534-7362
VL - 15
IS - 1
PB - Association for Research in Vision and Ophthalmology
CY - Rockville
ER -
TY - JOUR
A1 - Fernandez, Gerardo
A1 - Shalom, Diego E.
A1 - Kliegl, Reinhold
A1 - Sigman, Mariano
T1 - Eye movements during reading proverbs and regular sentences: the incoming word predictability effect
JF - Language, cognition and neuroscience
KW - eye movements
KW - reading
KW - proverbs
KW - incoming word predictability effect
Y1 - 2014
U6 - https://doi.org/10.1080/01690965.2012.760745
SN - 2327-3798
SN - 2327-3801
VL - 29
IS - 3
SP - 260
EP - 273
PB - Routledge, Taylor & Francis Group
CY - Abingdon
ER -
TY - JOUR
A1 - Hartmann, Matthias
A1 - Mast, Fred W.
A1 - Fischer, Martin H.
T1 - Spatial biases during mental arithmetic: evidence from eye movements on a blank screen
JF - Frontiers in psychology
N2 - While the influence of spatial-numerical associations in number categorization tasks has been well established, their role in mental arithmetic is less clear. It has been hypothesized that mental addition leads to rightward and upward shifts of spatial attention (along the "mental number line"), whereas subtraction leads to leftward and downward shifts. We addressed this hypothesis by analyzing spontaneous eye movements during mental arithmetic. Participants solved verbally presented arithmetic problems (e.g., 2 + 7, 8 - 3) aloud while looking at a blank screen. We found that eye movements reflected spatial biases in the ongoing mental operation: Gaze position shifted more upward when participants solved addition compared to subtraction problems, and the horizontal gaze position was partly determined by the magnitude of the operands. Interestingly, the difference between addition and subtraction trials was driven by the operator (plus vs. minus) but was not influenced by the computational process.
Thus, our results do not support the idea of a mental movement toward the solution during arithmetic but indicate a semantic association between operation and space.
KW - mental arithmetic
KW - eye movements
KW - mental number line
KW - operational momentum
KW - embodied cognition
KW - grounded cognition
Y1 - 2015
U6 - https://doi.org/10.3389/fpsyg.2015.00012
SN - 1664-1078
VL - 6
PB - Frontiers Research Foundation
CY - Lausanne
ER -
TY - JOUR
A1 - Hohenstein, Sven
A1 - Kliegl, Reinhold
T1 - Semantic preview benefit during reading
JF - Journal of experimental psychology: Learning, memory, and cognition
N2 - Word features in parafoveal vision influence eye movements during reading. The question of whether readers extract semantic information from parafoveal words was studied in 3 experiments by using a gaze-contingent display change technique. Subjects read German sentences containing 1 of several preview words that were replaced by a target word during the saccade to the preview (boundary paradigm). In the 1st experiment the preview word was semantically related or unrelated to the target. Fixation durations on the target were shorter for semantically related than unrelated previews, consistent with a semantic preview benefit. In the 2nd experiment, half the sentences were presented following the rules of German spelling (i.e., previews and targets were printed with an initial capital letter), and the other half were presented completely in lowercase. A semantic preview benefit was obtained under both conditions. In the 3rd experiment, we introduced 2 further preview conditions, an identical word and a pronounceable nonword, while also manipulating the text contrast. Whereas the contrast had negligible effects, fixation durations on the target were reliably different for all 4 types of preview.
Semantic preview benefits were greater for pretarget fixations closer to the boundary (large preview space) and, although not as consistently, for long pretarget fixation durations (long preview time). The results constrain theoretical proposals about eye movement control in reading.
KW - eye movements
KW - reading
KW - semantic preview benefit
KW - parafoveal processing
KW - display change awareness
Y1 - 2014
U6 - https://doi.org/10.1037/a0033670
SN - 0278-7393
SN - 1939-1285
VL - 40
IS - 1
SP - 166
EP - 190
PB - American Psychological Association
CY - Washington
ER -