TY - JOUR
A1 - Tseng, Chiao-I
A1 - Laubrock, Jochen
A1 - Bateman, John A.
T1 - The impact of multimodal cohesion on attention and interpretation in film
JF - Discourse, context & media
N2 - This article presents results of an exploratory investigation combining multimodal cohesion analysis and eye-tracking studies. Multimodal cohesion, as a tool of multimodal discourse analysis, goes beyond linguistic cohesive mechanisms to enable the construction of cross-modal discourse structures that systematically relate technical details of audio, visual and verbal modalities. Patterns of multimodal cohesion from these discourse structures were used to design eye-tracking experiments and questionnaires in order to empirically investigate how auditory and visual cohesive cues affect attention and comprehension. We argue that the cross-modal structures of cohesion revealed by our method offer a strong methodology for addressing empirical questions concerning viewers' comprehension of narrative settings and the comparative salience of visual, verbal and audio cues. Analyses are presented of the beginning of Hitchcock's The Birds (1963) and a sketch from Monty Python filmed in 1971. Our approach balances the narrative-based issue of how narrative elements in film guide meaning interpretation and the recipient-based question of where a film viewer's attention is directed during viewing and how this affects comprehension.
KW - Film
KW - Cohesion
KW - Discourse semantics
KW - Multimodality
KW - Eye-tracking
KW - Attention
Y1 - 2021
U6 - https://doi.org/10.1016/j.dcm.2021.100544
SN - 2211-6958
VL - 44
PB - Amsterdam [u.a.]
CY - Oxford
ER -
TY - JOUR
A1 - Koc-Januchta, Marta
A1 - Höffler, Tim
A1 - Thoma, Gun-Brit
A1 - Prechtl, Helmut
A1 - Leutner, Detlev
T1 - Visualizers versus verbalizers
BT - Effects of cognitive style on learning with texts and pictures - An eye-tracking study
JF - Computers in human behavior
N2 - This study was conducted in order to examine the differences between visualizers and verbalizers in the way they gaze at pictures and texts while learning. Using a collection of questionnaires, college students were classified according to their visual or verbal cognitive style and were asked to learn about two topics, differing in subject matter and type of knowledge, by means of text-picture combinations. Eye-tracking was used to investigate their gaze behavior. The results show that visualizers spent significantly more time inspecting pictures than verbalizers, while verbalizers spent more time inspecting texts. Results also suggest that both visualizers' and verbalizers' way of learning is active but mostly within areas providing the source of information in line with their cognitive style (pictures or text). Verbalizers tended to enter non-informative, irrelevant areas of pictures sooner than visualizers. The comparison of learning outcomes showed that the group of visualizers achieved better results than the group of verbalizers on a comprehension test.
KW - Cognitive style
KW - Verbalizer
KW - Visualizer
KW - Eye-tracking
KW - Multimedia learning
Y1 - 2016
U6 - https://doi.org/10.1016/j.chb.2016.11.028
SN - 0747-5632
SN - 1873-7692
VL - 68
SP - 170
EP - 179
PB - Elsevier
CY - Oxford
ER -
TY - JOUR
A1 - Söchting, Maximilian
A1 - Trapp, Matthias
T1 - Controlling image-stylization techniques using eye tracking
JF - Science and Technology Publications
N2 - With the spread of smartphones capable of taking high-resolution photos and the development of high-speed mobile data infrastructure, digital visual media is becoming one of the most important forms of modern communication. With this development, however, also comes a devaluation of images as a media form, with the focus becoming the frequency at which visual content is generated instead of the quality of the content. In this work, an interactive system using image-abstraction techniques and an eye-tracking sensor is presented, which allows users to experience diverting and dynamic artworks that react to their eye movement. The underlying modular architecture enables a variety of different interaction techniques that share common design principles, making the interface as intuitive as possible. The resulting experience allows users to engage in a game-like interaction in which they aim for a reward, the artwork, while being held under constraints, e.g., not blinking. The conscious eye movements that are required by some interaction techniques hint at an interesting possible future extension of this work into the field of relaxation exercises and concentration training.
KW - Eye-tracking
KW - Image Abstraction
KW - Image Processing
KW - Artistic Image Stylization
KW - Interactive Media
Y1 - 2020
SN - 2184-4321
PB - Springer
CY - Berlin
ER -
TY - GEN
A1 - Marwecki, Sebastian
A1 - Wilson, Andrew D.
A1 - Ofek, Eyal
A1 - Franco, Mar Gonzalez
A1 - Holz, Christian
T1 - Mise-Unseen
BT - Using Eye-Tracking to Hide Virtual Reality Scene Changes in Plain Sight
T2 - UIST '19: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology
N2 - Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low-fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and found that while gaze data indeed supports obfuscating changes inside the field of view, a change is rendered unnoticeable by using gaze in combination with common masking techniques.
KW - Eye-tracking
KW - virtual reality
KW - change blindness
KW - inattentional blindness
KW - staging
Y1 - 2019
SN - 978-1-4503-6816-2
U6 - https://doi.org/10.1145/3332165.3347919
SP - 777
EP - 789
PB - Association for Computing Machinery
CY - New York
ER -
TY - JOUR
A1 - von der Malsburg, Titus Raban
A1 - Angele, Bernhard
T1 - False positives and other statistical errors in standard analyses of eye movements in reading
JF - Journal of memory and language
N2 - In research on eye movements in reading, it is common to analyze a number of canonical dependent measures to study how the effects of a manipulation unfold over time. Although this gives rise to the well-known multiple comparisons problem, i.e. an inflated probability that the null hypothesis is incorrectly rejected (Type I error), it is accepted standard practice not to apply any correction procedures. Instead, there appears to be a widespread belief that corrections are not necessary because the increase in false positives is too small to matter. To our knowledge, no formal argument has ever been presented to justify this assumption. Here, we report a computational investigation of this issue using Monte Carlo simulations. Our results show that, contrary to conventional wisdom, false positives are increased to unacceptable levels when no corrections are applied. Our simulations also show that counter-measures like the Bonferroni correction keep false positives in check while reducing statistical power only moderately. Hence, there is little reason why such corrections should not be made a standard requirement. Further, we discuss three statistical illusions that can arise when statistical power is low, and we show how power can be improved to prevent these illusions. In sum, our work renders a detailed picture of the various types of statistical errors that can occur in studies of reading behavior, and we provide concrete guidance about how these errors can be avoided. (C) 2016 Elsevier Inc.
All rights reserved.
KW - Statistics
KW - False positives
KW - Null-hypothesis testing
KW - Eye-tracking
KW - Reading
KW - Sentence processing
Y1 - 2017
U6 - https://doi.org/10.1016/j.jml.2016.10.003
SN - 0749-596X
SN - 1096-0821
VL - 94
SP - 119
EP - 133
PB - Elsevier
CY - San Diego
ER -
TY - JOUR
A1 - Garoufi, Konstantina
A1 - Staudte, Maria
A1 - Koller, Alexander
A1 - Crocker, Matthew W.
T1 - Exploiting Listener Gaze to Improve Situated Communication in Dynamic Virtual Environments
JF - Cognitive science : a multidisciplinary journal of anthropology, artificial intelligence, education, linguistics, neuroscience, philosophy, psychology ; journal of the Cognitive Science Society
N2 - Beyond the observation that both speakers and listeners rapidly inspect the visual targets of referring expressions, it has been argued that such gaze may constitute part of the communicative signal. In this study, we investigate whether a speaker may, in principle, exploit listener gaze to improve communicative success. In the context of a virtual environment where listeners follow computer-generated instructions, we provide two kinds of support for this claim. First, we show that listener gaze provides a reliable real-time index of understanding even in dynamic and complex environments, and on a per-utterance basis. Second, we show that a language generation system that uses listener gaze to provide rapid feedback improves overall task performance in comparison with two systems that do not use gaze. Aside from demonstrating the utility of listener gaze in situated communication, our findings open the door to new methods for developing and evaluating multi-modal models of situated interaction.
KW - Listener gaze
KW - Eye-tracking
KW - Referential understanding
KW - Virtual environments
KW - Situated communication
Y1 - 2016
U6 - https://doi.org/10.1111/cogs.12298
SN - 0364-0213
SN - 1551-6709
VL - 40
SP - 1671
EP - 1703
PB - Wiley-Blackwell
CY - Hoboken
ER -
TY - JOUR
A1 - Hanne, Sandra
A1 - Burchert, Frank
A1 - De Bleser, Ria
A1 - Vasishth, Shravan
T1 - Sentence comprehension and morphological cues in aphasia: What eye-tracking reveals about integration and prediction
JF - Journal of neurolinguistics : an international journal for the study of brain function in language behavior and experience
N2 - Comprehension of non-canonical sentences can be difficult for individuals with aphasia (IWA). It is still unclear to what extent morphological cues like case marking or verb inflection may influence IWA's performance or even help to override deficits in sentence comprehension. Until now, studies have mainly used offline methods to draw inferences about syntactic deficits and, so far, only a few studies have looked at online syntactic processing in aphasia. We investigated sentence processing in German-speaking IWA by combining an offline (sentence-picture matching) and an online (eye-tracking in the visual-world paradigm) method. Our goal was to determine whether IWA are capable of using inflectional morphology (number-agreement markers on verbs and case markers in noun phrases) as a cue to sentence interpretation. We report results of two visual-world experiments using German reversible SVO and OVS sentences. In each study, there were eight IWA and 20 age-matched controls. Experiment 1 targeted the role of unambiguous case morphology, while Experiment 2 looked at processing of number-agreement cues at the verb in case-ambiguous sentences.
IWA showed deficits in using both types of morphological markers as a cue to non-canonical sentence interpretation, and the results indicate that in aphasia, processing of case-marking cues is more vulnerable as compared to verb-agreement morphology. We ascribe this finding to the higher cue reliability of agreement cues, which renders them more resistant to impairment in aphasia. However, the online data revealed that IWA are in principle capable of successfully computing morphological cues, but the integration of morphological information is delayed as compared to age-matched controls. Furthermore, we found striking differences between controls and IWA regarding subject-before-object parsing predictions. While in case-unambiguous sentences IWA showed evidence for early subject-before-object parsing commitments, they exhibited no straightforward subject-first prediction in case-ambiguous sentences, although controls did so for ambiguous structures. IWA delayed their parsing decisions in case-ambiguous sentences until unambiguous morphological information, such as a subject-verb number-agreement cue, was available. We attribute the results for IWA to deficits in predictive processes based on morphosyntactic cues during sentence comprehension. The results indicate that IWA adopt a wait-and-see strategy and initiate prediction of upcoming syntactic structure only when unambiguous case or agreement cues are available. (C) 2015 Elsevier Ltd. All rights reserved.
KW - Aphasia
KW - Sentence comprehension deficits
KW - Prediction
KW - Eye-tracking
KW - Online morpho-syntactic processing
KW - Morphological cues
Y1 - 2015
U6 - https://doi.org/10.1016/j.jneuroling.2014.12.003
SN - 0911-6044
VL - 34
SP - 83
EP - 111
PB - Elsevier
CY - Oxford
ER -
TY - JOUR
A1 - Brandt-Kobele, Oda-Christina
A1 - Höhle, Barbara
T1 - The detection of subject-verb agreement violations by German-speaking children: An eye-tracking study
JF - Lingua : international review of general linguistics
N2 - This study examines the processing of sentences with and without subject-verb agreement violations in German-speaking children at three and five years of age. An eye-tracking experiment was conducted to measure whether children's looking behavior was influenced by the grammaticality of the test sentences. The older group of children turned their gaze faster towards a target picture and looked longer at it when the object noun referring to the target was presented in a grammatical sentence with subject-verb agreement than when the object noun was presented in a sentence in which an agreement violation occurred. The younger group of children displayed less conclusive results, with a tendency to look longer but not faster towards the target picture in the grammatical compared to the ungrammatical condition. This is the first experimental evidence that German-speaking five-year-old children are sensitive to subject-verb agreement and violations thereof. Our results additionally substantiate that the eye-tracking paradigm is suitable for examining children's sensitivity to subtle grammatical violations.
KW - Subject-verb agreement
KW - Eye-tracking
KW - Language acquisition
KW - German
Y1 - 2014
U6 - https://doi.org/10.1016/j.lingua.2013.12.008
SN - 0024-3841
SN - 1872-6135
VL - 144
SP - 7
EP - 20
PB - Elsevier
CY - Amsterdam
ER -
TY - JOUR
A1 - Bos, Laura S.
A1 - Hanne, Sandra
A1 - Wartenburger, Isabell
A1 - Bastiaanse, Roelien
T1 - Losing track of time?
     Processing of time reference inflection in agrammatic and healthy speakers of German
JF - Neuropsychologia : an international journal in behavioural and cognitive neuroscience
N2 - Background: Individuals with agrammatic aphasia (IWAs) have problems with grammatical decoding of tense inflection. However, these difficulties depend on the time frame that the tense refers to. Verb morphology with reference to the past is more difficult than with reference to the non-past, because a link needs to be made to the past event in discourse, as captured in the PAst DIscourse LInking Hypothesis (PADILIH; Bastiaanse, R., Bamyaci, E., Hsu, C., Lee, J., Yarbay Duman, T., Thompson, C. K., 2011. Time reference in agrammatic aphasia: A cross-linguistic study. J. Neurolinguist. 24, 652-673). With respect to reference to the (non-discourse-linked) future, data so far indicate that IWAs experience fewer difficulties as compared to past time reference (Bastiaanse et al., 2011), supporting the assumptions of the PADILIH. Previous online studies of time reference in aphasia used methods such as reaction-time analysis (e.g., Faroqi-Shah, Y., Dickey, M. W., 2009. On-line processing of tense and temporality in agrammatic aphasia. Brain Lang. 108, 97-111). So far, no such study has used eye-tracking, even though this technique can bring additional insights (Burchert, F., Hanne, S., Vasishth, S., 2013. Sentence comprehension disorders in aphasia: the concept of chance performance revisited. Aphasiology 27, 112-125, doi:10.1080/02687038.2012.730603). Aims: This study investigated (1) whether processing of future and past time reference inflection differs between non-brain-damaged individuals (NBDs) and IWAs, and (2) the mechanisms underlying time reference comprehension failure in IWAs.
     Results and discussion: NBDs scored at ceiling and significantly higher than the IWAs. IWAs performed below ceiling in the future condition, and both participant groups were faster to respond to the past than to the future condition. These differences are attributed to a pre-existing preference to look at a past picture, which has to be overcome. Eye movement patterns suggest that both groups interpret future time reference similarly, while IWAs show a delay relative to NBDs in interpreting past time reference inflection. The eye-tracking results support the PADILIH, because processing reference to the past in discourse syntax requires additional resources and thus is problematic and delayed for people with aphasia. (C) 2014 Elsevier Ltd. All rights reserved.
KW - Visual-world paradigm
KW - Spoken language comprehension
KW - Time reference
KW - Morphology
KW - Agrammatism
KW - Eye-tracking
KW - Aphasia
KW - Discourse linking
Y1 - 2014
U6 - https://doi.org/10.1016/j.neuropsychologia.2014.10.026
SN - 0028-3932
SN - 1873-3514
VL - 65
SP - 180
EP - 190
PB - Elsevier
CY - Oxford
ER -