TY - JOUR
A1 - Söchting, Maximilian
A1 - Trapp, Matthias
T1 - Controlling image-stylization techniques using eye tracking
JF - Science and Technology Publications
N2 - With the spread of smart phones capable of taking high-resolution photos and the development of high-speed mobile data infrastructure, digital visual media is becoming one of the most important forms of modern communication. With this development, however, also comes a devaluation of images as a media form, with the focus shifting to the frequency at which visual content is generated instead of the quality of the content. In this work, an interactive system using image-abstraction techniques and an eye-tracking sensor is presented, which allows users to experience diverting and dynamic artworks that react to their eye movement. The underlying modular architecture enables a variety of different interaction techniques that share common design principles, making the interface as intuitive as possible. The result is a game-like interaction in which users aim for a reward, the artwork, while being held under constraints, e.g., not blinking. The conscious eye movements required by some interaction techniques hint at an interesting possible future extension of this work into the field of relaxation exercises and concentration training.
KW - Eye-tracking
KW - Image Abstraction
KW - Image Processing
KW - Artistic Image Stylization
KW - Interactive Media
Y1 - 2020
SN - 2184-4321
PB - Springer
CY - Berlin
ER -
TY - THES
A1 - Trapp, Matthias
T1 - Analysis and exploration of virtual 3D city models using 3D information lenses
N2 - This thesis addresses real-time rendering techniques for 3D information lenses based on the focus & context metaphor. It analyzes, conceives, implements, and reviews their applicability to objects and structures of virtual 3D city models. In contrast to digital terrain models, the application of focus & context visualization to virtual 3D city models is barely researched. However, the purposeful visualization of contextual data is of extreme importance for the interactive exploration and analysis in this field. Programmable hardware enables the implementation of new lens techniques that increase the perceptive and cognitive quality of the visualization compared to classical perspective projections. A set of 3D information lenses is integrated into a 3D scene-graph system: • Occlusion lenses modify the appearance of virtual 3D city model objects to resolve their occlusion and consequently facilitate navigation. • Best-view lenses display city model objects in a priority-based manner and mediate their meta information. Thus, they support exploration and navigation of virtual 3D city models. • Color and deformation lenses modify the appearance and geometry of 3D city models to facilitate their perception. The presented techniques for 3D information lenses and their application to virtual 3D city models clarify their potential for interactive visualization and form a basis for further development.
N2 - This diploma thesis covers real-time rendering techniques for 3D information lenses based on the focus & context metaphor. Their applicability to objects and structures of virtual 3D city models is analyzed, conceived, implemented, and evaluated. In contrast to the application domain of 3D terrain models, focus & context visualization for virtual 3D city models has barely been researched. Here, however, a targeted visualization of context-related data on objects is of great importance for interactive exploration and analysis. Programmable computer hardware allows the implementation of new lens techniques that aim to increase the perceptive and cognitive quality of the visualization compared to classical perspective projections. A selection of 3D information lenses is integrated into a 3D scene-graph system: • Occlusion lenses modify the appearance of virtual 3D city model objects to resolve their occlusions and thus facilitate navigation. • Best-view lenses display city model objects in a priority-defined manner and convey meta information of virtual 3D city models, thereby supporting their exploration and navigation. • Color and deformation lenses modify the appearance and geometry of 3D city model regions to improve their perception. The techniques for 3D information lenses presented in this work and their application to virtual 3D city models demonstrate their potential for interactive visualization and form a basis for further developments.
KW - Virtuelles 3D Stadtmodell
KW - 3D Linsen
KW - Shader
KW - Echtzeitanwendung
KW - virtual 3D city model
KW - 3D lenses
KW - shader
KW - real-time application
Y1 - 2007
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-13930
ER -
TY - JOUR
A1 - Semmo, Amir
A1 - Trapp, Matthias
A1 - Jobst, Markus
A1 - Döllner, Jürgen Roland Friedrich
T1 - Cartography-Oriented Design of 3D Geospatial Information Visualization - Overview and Techniques
JF - The cartographic journal
N2 - In economy, society, and personal life, map-based interactive geospatial visualization is becoming a natural element of a growing number of applications and systems. The visualization of 3D geospatial information, however, raises the question of how to represent the information in an effective way. Considerable research has been done in technology-driven directions in the fields of cartography and computer graphics (e.g., design principles, visualization techniques). Here, non-photorealistic rendering (NPR) represents a promising visualization category - situated between both fields - that offers a large number of degrees of freedom for the cartography-oriented visual design of complex 2D and 3D geospatial information for a given application context. Still today, however, specifications and techniques for mapping cartographic design principles to the state-of-the-art rendering pipeline of 3D computer graphics remain to be explored. This paper revisits cartographic design principles for 3D geospatial visualization and introduces an extended 3D semiotic model that complies with the general, interactive visualization pipeline. Based on this model, we propose NPR techniques to interactively synthesize cartographic renditions of basic feature types, such as terrain, water, and buildings. In particular, it includes a novel iconification concept to seamlessly interpolate between photorealistic and cartographic representations of 3D landmarks. Our work concludes with a discussion of open challenges in this field of research, including topics such as user interaction and evaluation.
KW - 3D information visualization
KW - 3D semiotic model
KW - cartographic design
KW - user interaction
KW - real-time rendering
Y1 - 2015
U6 - https://doi.org/10.1080/00087041.2015.1119462
SN - 0008-7041
SN - 1743-2774
VL - 52
IS - 2
SP - 95
EP - 106
PB - Routledge, Taylor & Francis Group
CY - Leeds
ER -
TY - GEN
A1 - Stojanovic, Vladeta
A1 - Trapp, Matthias
A1 - Richter, Rico
A1 - Döllner, Jürgen Roland Friedrich
T1 - A service-oriented approach for classifying 3D point clouds by example of office furniture classification
T2 - Web3D 2018: Proceedings of the 23rd International ACM Conference on 3D Web Technology
N2 - The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record, and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. This approach can be used primarily to track changes of objects over time for comparison, allowing for routine classification and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a service-oriented methodology.
KW - Indoor Models
KW - 3D Point Clouds
KW - Machine Learning
KW - BIM
KW - Service-Oriented
Y1 - 2018
SN - 978-1-4503-5800-2
U6 - https://doi.org/10.1145/3208806.3208810
SP - 1
EP - 9
PB - Association for Computing Machinery
CY - New York
ER -
TY - GEN
A1 - Reimann, Max
A1 - Klingbeil, Mandy
A1 - Pasewaldt, Sebastian
A1 - Semmo, Amir
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
ED - Sourin, A
ED - Sourina
T1 - MaeSTrO: A Mobile App for Style Transfer Orchestration using Neural Networks
T2 - International Conference on Cyberworlds (CW)
N2 - Mobile expressive rendering has gained increasing popularity among users seeking casual creativity by image stylization and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. This work enhances state-of-the-art neural style transfer techniques by a generalized user interface with interactive tools to facilitate a creative and localized editing process. To this end, we first propose a problem characterization representing trade-offs between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. Initial user tests indicate different levels of satisfaction with the implemented techniques and interaction design.
KW - non-photorealistic rendering
KW - style transfer
Y1 - 2018
SN - 978-1-5386-7315-7
U6 - https://doi.org/10.1109/CW.2018.00016
SP - 9
EP - 16
PB - IEEE
CY - New York
ER -
TY - GEN
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
ED - Banissi, E
ED - Ursyn
T1 - Interactive Close-Up Rendering for Detail plus Overview Visualization of 3D Digital Terrain Models
T2 - 2019 23rd International Conference Information Visualisation (IV)
N2 - This paper presents an interactive rendering technique for detail+overview visualization of 3D digital terrain models using interactive close-ups. A close-up is an alternative presentation of input data that varies with respect to geometrical scale, mapping, appearance, as well as the Level-of-Detail (LOD) and Level-of-Abstraction (LOA) used. The presented 3D close-up approach enables in-situ comparison of multiple Regions-of-Interest (ROIs) simultaneously. We describe a GPU-based rendering technique for the image synthesis of multiple close-ups in real time.
KW - Terrain Visualization
KW - Detail plus Overview
KW - Close-Up
KW - Coordinated and Multiple Views
Y1 - 2019
SN - 978-1-7281-2838-2
SN - 978-1-7281-2839-9
U6 - https://doi.org/10.1109/IV.2019.00053
SN - 2375-0138
SN - 1550-6037
SP - 275
EP - 280
PB - Institute of Electrical and Electronics Engineers
CY - Los Alamitos
ER -
TY - JOUR
A1 - Semmo, Amir
A1 - Hildebrandt, Dieter
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
T1 - Concepts for cartography-oriented visualization of virtual 3D city models
JF - Photogrammetrie, Fernerkundung, Geoinformation
N2 - Virtual 3D city models serve as an effective medium with manifold applications in geoinformation systems and services. To date, most 3D city models are visualized using photorealistic graphics. But an effective communication of geoinformation significantly depends on how important information is designed and cognitively processed in the given application context. One possibility to visually emphasize important information is based on non-photorealistic rendering, which comprehends artistic depiction styles and is characterized by its expressiveness and communication aspects. However, a direct application of non-photorealistic rendering techniques primarily results in monotonic visualization that lacks cartographic design aspects. In this work, we present concepts for cartography-oriented visualization of virtual 3D city models. These are based on coupling non-photorealistic rendering techniques and semantics-based information for a user-, context-, and media-dependent representation of thematic information. This work highlights challenges for cartography-oriented visualization of 3D geovirtual environments, presents stylization techniques, and discusses their applications and ideas for a standardized visualization. In particular, the presented concepts enable a real-time and dynamic visualization of thematic geoinformation.
KW - 3D city models
KW - cartography-oriented visualization
KW - style description languages
KW - real-time rendering
Y1 - 2012
U6 - https://doi.org/10.1127/1432-8364/2012/0131
SN - 1432-8364
IS - 4
SP - 455
EP - 465
PB - Schweizerbart
CY - Stuttgart
ER -
TY - JOUR
A1 - Semmo, Amir
A1 - Trapp, Matthias
A1 - Kyprianidis, Jan Eric
A1 - Döllner, Jürgen Roland Friedrich
T1 - Interactive visualization of generalized virtual 3D city models using level-of-abstraction transitions
JF - Computer graphics forum : journal of the European Association for Computer Graphics
N2 - Virtual 3D city models play an important role in the communication of complex geospatial information in a growing number of applications, such as urban planning, navigation, tourist information, and disaster management. In general, homogeneous graphic styles are used for visualization. For instance, photorealism is suitable for detailed presentations, and non-photorealism or abstract stylization is used to facilitate guidance of a viewer's gaze to prioritized information. However, to adapt visualization to different contexts and contents and to support saliency-guided visualization based on user interaction or dynamically changing thematic information, a combination of different graphic styles is necessary. Design and implementation of such combined graphic styles pose a number of challenges, specifically from the perspective of real-time 3D visualization. In this paper, the authors present a concept and an implementation of a system that enables different presentation styles, their seamless integration within a single view, and parametrized transitions between them, which are defined according to tasks, camera view, and image resolution. The paper outlines potential usage scenarios and application fields together with a performance evaluation of the implementation.
Y1 - 2012
U6 - https://doi.org/10.1111/j.1467-8659.2012.03081.x
SN - 0167-7055
VL - 31
IS - 3
SP - 885
EP - 894
PB - Wiley-Blackwell
CY - Hoboken
ER -
TY - JOUR
A1 - Reimann, Max
A1 - Klingbeil, Mandy
A1 - Pasewaldt, Sebastian
A1 - Semmo, Amir
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
T1 - Locally controllable neural style transfer on mobile devices
JF - The Visual Computer
N2 - Mobile expressive rendering has gained increasing popularity among users seeking casual creativity by image stylization and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. In this work, we first propose a problem characterization of interactive style transfer representing a trade-off between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. To this end, we enhance state-of-the-art neural style transfer techniques by mask-based loss terms that can be interactively parameterized by a generalized user interface to facilitate a creative and localized editing process. We report on a usability study and an online survey that demonstrate the ability of our app to transfer styles at improved semantic plausibility.
KW - Non-photorealistic rendering
KW - Style transfer
KW - Neural networks
KW - Mobile devices
KW - Interactive control
KW - Expressive rendering
Y1 - 2019
U6 - https://doi.org/10.1007/s00371-019-01654-1
SN - 0178-2789
SN - 1432-2315
VL - 35
IS - 11
SP - 1531
EP - 1547
PB - Springer
CY - New York
ER -
TY - JOUR
A1 - Vollmer, Jan Ole
A1 - Trapp, Matthias
A1 - Schumann, Heidrun
A1 - Döllner, Jürgen Roland Friedrich
T1 - Hierarchical spatial aggregation for level-of-detail visualization of 3D thematic data
JF - ACM transactions on spatial algorithms and systems
N2 - Thematic maps are a common tool to visualize semantic data with a spatial reference. Combining thematic data with a geometric representation of their natural reference frame aids the viewer's ability to gain an overview, as well as to perceive patterns with respect to location; however, as the amount of data for visualization continues to increase, problems such as information overload and visual clutter impede perception, requiring data aggregation and level-of-detail visualization techniques. While existing aggregation techniques for thematic data operate in a 2D reference frame (i.e., a map), we present two aggregation techniques for 3D spatial and spatiotemporal data mapped onto virtual city models that hierarchically aggregate thematic data in real time during rendering to support on-the-fly and on-demand level-of-detail generation. An object-based technique performs aggregation based on scene-specific objects and their hierarchy to facilitate per-object analysis, while the scene-based technique aggregates data solely based on spatial locations, thus supporting visual analysis of data with arbitrary reference geometry. Both techniques can apply different aggregation functions (mean, minimum, and maximum) for ordinal, interval, and ratio-scaled data and can be easily extended with additional functions. Our implementation utilizes the programmable graphics pipeline and requires suitably encoded data, i.e., textures or vertex attributes. We demonstrate the application of both techniques using real-world datasets, including solar potential analyses and the propagation of pressure waves in a virtual city model.
KW - Level-of-detail visualization
KW - spatial aggregation
KW - real-time rendering
Y1 - 2018
U6 - https://doi.org/10.1145/3234506
SN - 2374-0353
SN - 2374-0361
VL - 4
IS - 3
PB - Association for Computing Machinery
CY - New York
ER -