
Mise-Unseen

  • Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view: (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low-fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and find that while gaze data indeed supports obfuscating changes inside the field of view, a change is rendered unnoticeable by using gaze in combination with common masking techniques.
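The core mechanism the abstract describes, applying a scene change only while gaze data suggests the user is not attending to the affected region, can be illustrated with a minimal sketch. This is not code from the paper; the eye-tracker and scene callbacks (`get_gaze_dir`, `get_dir_to_target`, `apply_change`), the dwell time, and the foveal threshold are all hypothetical placeholders.

```python
# Illustrative sketch only -- not the paper's implementation.
# Assumes a hypothetical eye tracker that reports a normalized gaze
# direction each frame and a hypothetical scene API for the change.

import math
import time

DWELL_AWAY_S = 0.3        # assumed: gaze must stay off the target this long
FOVEAL_RADIUS_DEG = 5.0   # assumed: within ~5 degrees of gaze counts as attended


def angle_between(gaze_dir, to_target_dir):
    """Angle in degrees between the gaze direction and the direction to the target."""
    dot = sum(g * t for g, t in zip(gaze_dir, to_target_dir))
    dot = max(-1.0, min(1.0, dot))
    return math.degrees(math.acos(dot))


def covert_change_gate(get_gaze_dir, get_dir_to_target, apply_change):
    """Fire `apply_change` only after gaze has stayed away from the target
    region for DWELL_AWAY_S seconds, approximating an inattention window."""
    away_since = None
    while True:
        angle = angle_between(get_gaze_dir(), get_dir_to_target())
        if angle > FOVEAL_RADIUS_DEG:
            away_since = away_since or time.monotonic()
            if time.monotonic() - away_since >= DWELL_AWAY_S:
                apply_change()
                return
        else:
            away_since = None
        time.sleep(1 / 90)  # poll at roughly headset frame rate
```

The paper additionally combines such gaze gating with common masking techniques (and models of intention and spatial memory), which this sketch does not attempt to reproduce.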

Metadata
Author details:Sebastian Marwecki, Andrew D. Wilson, Eyal Ofek, Mar Gonzalez Franco, Christian Holz
DOI:https://doi.org/10.1145/3332165.3347919
ISBN:978-1-4503-6816-2
Title of parent work (English):UIST '19: Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology
Subtitle (English):Using Eye-Tracking to Hide Virtual Reality Scene Changes in Plain Sight
Publisher:Association for Computing Machinery
Place of publishing:New York
Publication type:Other
Language:English
Year of first publication:2019
Publication year:2019
Release date:2021/04/23
Tag:Eye-tracking; change blindness; inattentional blindness; staging; virtual reality
Number of pages:13
First page:777
Last Page:789
Organizational units:An-Institute / Hasso-Plattner-Institut für Digital Engineering gGmbH
DDC classification:0 Computer science, information, general works / 00 Computer science, knowledge, systems / 000 Computer science, information, general works
Peer review:Refereed