TY - GEN
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
ED - Banissi, E
ED - Ursyn, A
T1 - Interactive Close-Up Rendering for Detail plus Overview Visualization of 3D Digital Terrain Models
T2 - 2019 23rd International Conference Information Visualisation (IV)
N2 - This paper presents an interactive rendering technique for detail+overview visualization of 3D digital terrain models using interactive close-ups. A close-up is an alternative presentation of the input data that varies with respect to geometric scale, mapping, appearance, as well as the Level-of-Detail (LOD) and Level-of-Abstraction (LOA) used. The presented 3D close-up approach enables in-situ comparison of multiple Regions-of-Interest (ROIs) simultaneously. We describe a GPU-based rendering technique for the image synthesis of multiple close-ups in real-time.
KW - Terrain Visualization
KW - Detail plus Overview
KW - Close-Up
KW - Coordinated and Multiple Views
Y1 - 2019
SN - 978-1-7281-2838-2
SN - 978-1-7281-2839-9
U6 - https://doi.org/10.1109/IV.2019.00053
SN - 2375-0138
SN - 1550-6037
SP - 275
EP - 280
PB - Inst. of Electr. and Electronics Engineers
CY - Los Alamitos
ER -
TY - JOUR
A1 - Reimann, Max
A1 - Klingbeil, Mandy
A1 - Pasewaldt, Sebastian
A1 - Semmo, Amir
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
T1 - Locally controllable neural style transfer on mobile devices
JF - The Visual Computer
N2 - Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate the characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. In this work, we first propose a problem characterization of interactive style transfer representing a trade-off between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for the orchestration of neural style transfer techniques using iterative, multi-style generative, and adaptive neural networks that can be locally controlled by on-screen painting metaphors. To this end, we enhance state-of-the-art neural style transfer techniques with mask-based loss terms that can be interactively parameterized via a generalized user interface to facilitate a creative and localized editing process. We report on a usability study and an online survey that demonstrate the ability of our app to transfer styles with improved semantic plausibility.
KW - Non-photorealistic rendering
KW - Style transfer
KW - Neural networks
KW - Mobile devices
KW - Interactive control
KW - Expressive rendering
Y1 - 2019
U6 - https://doi.org/10.1007/s00371-019-01654-1
SN - 0178-2789
SN - 1432-2315
VL - 35
IS - 11
SP - 1531
EP - 1547
PB - Springer
CY - New York
ER -
TY - GEN
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
T1 - Real-time Screen-space Geometry Draping for 3D Digital Terrain Models
T2 - 2019 23rd International Conference Information Visualisation (IV)
N2 - A fundamental task in 3D geovisualization and GIS applications is the visualization of vector data that can represent features such as transportation networks or land-use coverage. Mapping or draping vector data represented by geometric primitives (e.g., polylines or polygons) onto 3D digital elevation or 3D digital terrain models is a challenging task.
We present an interactive GPU-based approach that performs geometry-based draping of vector data on a per-frame basis, using only an image-based representation of a 3D digital elevation or terrain model.
KW - Geometry Draping
KW - Geovisualization
KW - GPU-based Real-time Rendering
Y1 - 2019
SN - 978-1-7281-2838-2
SN - 978-1-7281-2839-9
U6 - https://doi.org/10.1109/IV.2019.00054
SN - 2375-0138
SN - 1550-6037
SP - 281
EP - 286
PB - Inst. of Electr. and Electronics Engineers
CY - Los Alamitos
ER -
TY - GEN
A1 - Florio, Alessandro
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
T1 - Semantic-driven Visualization Techniques for Interactive Exploration of 3D Indoor Models
T2 - 2019 23rd International Conference Information Visualisation (IV)
N2 - The availability of detailed virtual 3D building models, including representations of indoor elements, enables a wide range of applications that require effective exploration and navigation functionality. Depending on the application context, users should be enabled to focus on specific Objects-of-Interest (OOIs) or important building elements. This requires approaches for filtering building parts as well as techniques for visualizing important building objects and their relations. To this end, this paper explores the application and combination of interactive rendering techniques as well as their semantically-driven configuration in the context of 3D indoor models.
KW - Building Information Models
KW - BIM
KW - Industry Foundation Classes
KW - IFC
KW - Interactive Visualization
KW - Real-time Rendering
Y1 - 2019
SN - 978-1-7281-2838-2
SN - 978-1-7281-2839-9
U6 - https://doi.org/10.1109/IV.2019.00014
SN - 2375-0138
SN - 1550-6037
SP - 25
EP - 30
PB - Inst. of Electr. and Electronics Engineers
CY - Los Alamitos
ER -
TY - JOUR
A1 - Stojanovic, Vladeta
A1 - Trapp, Matthias
A1 - Richter, Rico
A1 - Döllner, Jürgen Roland Friedrich
T1 - Service-oriented semantic enrichment of indoor point clouds using octree-based multiview classification
JF - Graphical Models
N2 - The use of Building Information Modeling (BIM) for Facility Management (FM) in the Operation and Maintenance (O&M) stages of the building life-cycle is intended to bridge the gap between operations and digital data, but lacks the functionality of assessing the state of the built environment due to the non-automated generation of associated semantics. 3D point clouds can be used to capture the physical state of the built environment, but they also lack these associated semantics. A prototypical implementation of a service-oriented architecture for the classification of indoor point cloud scenes of office environments is presented, using multiview classification. The multiview classification approach is tested using a retrained Convolutional Neural Network (CNN) model, Inception V3. The presented approach for classifying common office furniture objects (chairs, sofas, and desks) contained in 3D point cloud scans is tested and evaluated. The results show that the presented approach can classify common office furniture up to an acceptable degree of accuracy and is suitable for quick and robust semantics approximation, based on RGB (red, green, and blue color channel) cubemap images of the octree-partitioned areas of the 3D point cloud scan. Additional methods for web-based 3D visualization, editing, and annotation of point clouds are also discussed.
Using the described approach, captured scans of indoor environments can be semantically enriched using object annotations derived from the multiview classification results. Furthermore, the presented approach is suited for the semantic enrichment of lower-resolution indoor point clouds acquired using commodity mobile devices.
KW - Semantic enrichment
KW - 3D point clouds
KW - Multiview classification
KW - Service-oriented
KW - Indoor environments
Y1 - 2019
U6 - https://doi.org/10.1016/j.gmod.2019.101039
SN - 1524-0703
SN - 1524-0711
VL - 105
PB - Elsevier
CY - San Diego
ER -