TY - JOUR
A1 - Stojanovic, Vladeta
A1 - Trapp, Matthias
A1 - Richter, Rico
A1 - Döllner, Jürgen Roland Friedrich
T1 - Service-oriented semantic enrichment of indoor point clouds using octree-based multiview classification
JF - Graphical Models
N2 - The use of Building Information Modeling (BIM) for Facility Management (FM) in the Operation and Maintenance (O&M) stages of the building life-cycle is intended to bridge the gap between operations and digital data, but lacks the functionality for assessing the state of the built environment due to the non-automated generation of associated semantics. 3D point clouds can be used to capture the physical state of the built environment, but also lack these associated semantics. A prototypical implementation of a service-oriented architecture for the classification of indoor point cloud scenes of office environments is presented, using multiview classification. The multiview classification approach is tested using a retrained Convolutional Neural Network (CNN) model, Inception V3. The presented approach for classifying common office furniture objects (chairs, sofas, and desks) contained in 3D point cloud scans is tested and evaluated. The results show that the presented approach can classify common office furniture to an acceptable degree of accuracy and is suitable for quick and robust semantics approximation, based on RGB (red, green, and blue color channel) cubemap images of the octree-partitioned areas of the 3D point cloud scan. Additional methods for web-based 3D visualization, editing, and annotation of point clouds are also discussed. Using the described approach, captured scans of indoor environments can be semantically enriched with object annotations derived from multiview classification results. Furthermore, the presented approach is suited for the semantic enrichment of lower-resolution indoor point clouds acquired using commodity mobile devices.
KW - Semantic enrichment
KW - 3D point clouds
KW - Multiview classification
KW - Service-oriented
KW - Indoor environments
Y1 - 2019
U6 - https://doi.org/10.1016/j.gmod.2019.101039
SN - 1524-0703
SN - 1524-0711
VL - 105
PB - Elsevier
CY - San Diego
ER -

TY - GEN
A1 - Florio, Alessandro
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
T1 - Semantic-driven Visualization Techniques for Interactive Exploration of 3D Indoor Models
T2 - 2019 23rd International Conference Information Visualisation (IV)
N2 - The availability of detailed virtual 3D building models, including representations of indoor elements, allows for a wide range of applications requiring effective exploration and navigation functionality. Depending on the application context, users should be enabled to focus on specific Objects-of-Interest (OOIs) or important building elements. This requires approaches to filtering building parts as well as techniques to visualize important building objects and their relations. To this end, this paper explores the application and combination of interactive rendering techniques as well as their semantically-driven configuration in the context of 3D indoor models.
KW - Building Information Models
KW - BIM
KW - Industry Foundation Classes
KW - IFC
KW - Interactive Visualization
KW - Real-time Rendering
Y1 - 2019
SN - 978-1-7281-2838-2
SN - 978-1-7281-2839-9
U6 - https://doi.org/10.1109/IV.2019.00014
SN - 2375-0138
SN - 1550-6037
SP - 25
EP - 30
PB - Institute of Electrical and Electronics Engineers
CY - Los Alamitos
ER -
TY - GEN
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
T1 - Real-time Screen-space Geometry Draping for 3D Digital Terrain Models
T2 - 2019 23rd International Conference Information Visualisation (IV)
N2 - A fundamental task in 3D geovisualization and GIS applications is the visualization of vector data that can represent features such as transportation networks or land-use coverage. Mapping or draping vector data represented by geometric primitives (e.g., polylines or polygons) onto 3D digital elevation or 3D digital terrain models is a challenging task. We present an interactive GPU-based approach that performs geometry-based draping of vector data on a per-frame basis using only an image-based representation of a 3D digital elevation or terrain model.
KW - Geometry Draping
KW - Geovisualization
KW - GPU-based Real-time Rendering
Y1 - 2019
SN - 978-1-7281-2838-2
SN - 978-1-7281-2839-9
U6 - https://doi.org/10.1109/IV.2019.00054
SN - 2375-0138
SN - 1550-6037
SP - 281
EP - 286
PB - Institute of Electrical and Electronics Engineers
CY - Los Alamitos
ER -

TY - JOUR
A1 - Pasewaldt, Sebastian
A1 - Semmo, Amir
A1 - Trapp, Matthias
A1 - Döllner, Jürgen
T1 - Multi-perspective 3D panoramas
JF - International journal of geographical information science
N2 - This article presents multi-perspective 3D panoramas that focus on visualizing 3D geovirtual environments (3D GeoVEs) for navigation and exploration tasks. Their key element, a multi-perspective view (MPV), seamlessly combines what is seen from multiple viewpoints into a single image. This approach facilitates the presentation of information for virtual 3D city and landscape models, particularly by reducing occlusions, increasing screen-space utilization, and providing additional context within a single image. We complement MPVs with cartographic visualization techniques to stylize features according to their semantics and highlight important or prioritized information. When combined, both techniques constitute the core implementation of interactive, multi-perspective 3D panoramas. They offer a large number of effective means for visual communication of 3D spatial information, a high degree of customization with respect to cartographic design, and manifold applications in different domains. We discuss design decisions of 3D panoramas for the exploration of and navigation in 3D GeoVEs. We also discuss a preliminary user study that indicates that 3D panoramas are a promising approach for navigation systems using 3D GeoVEs.
KW - multi-perspective visualization
KW - panorama
KW - focus plus context visualization
KW - 3D geovirtual environments
KW - cartographic design
Y1 - 2014
U6 - https://doi.org/10.1080/13658816.2014.922686
SN - 1365-8816
SN - 1362-3087
VL - 28
IS - 10
SP - 2030
EP - 2051
PB - Routledge, Taylor & Francis Group
CY - Abingdon
ER -

TY - GEN
A1 - Limberger, Daniel
A1 - Scheibel, Willy
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
T1 - Mixed-projection treemaps
BT - a novel approach mixing 2D and 2.5D treemaps
T2 - 21st International Conference Information Visualisation (IV)
N2 - This paper presents a novel technique for combining 2D and 2.5D treemaps using multi-perspective views to leverage the advantages of both treemap types. It enables a new form of overview+detail visualization for tree-structured data and contributes new concepts for real-time rendering of and interaction with treemaps. The technique operates by tilting the graphical elements representing inner nodes using affine transformations and animated state transitions. We explain how to mix orthogonal and perspective projections within a single treemap. Finally, we show application examples that benefit from the reduced interaction overhead.
KW - Information Visualization
KW - Overview plus Detail
KW - Treemaps
KW - 2.5D Treemaps
KW - Multi-perspective Views
Y1 - 2017
SN - 978-1-5386-0831-9
U6 - https://doi.org/10.1109/iV.2017.67
SN - 2375-0138
SP - 164
EP - 169
PB - Institute of Electrical and Electronics Engineers
CY - Los Alamitos
ER -
TY - GEN
A1 - Reimann, Max
A1 - Klingbeil, Mandy
A1 - Pasewaldt, Sebastian
A1 - Semmo, Amir
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
ED - Sourin, A.
ED - Sourina, O.
T1 - MaeSTrO: A Mobile App for Style Transfer Orchestration using Neural Networks
T2 - International Conference on Cyberworlds (CW)
N2 - Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. This work enhances state-of-the-art neural style transfer techniques with a generalized user interface and interactive tools that facilitate a creative and localized editing process. To this end, we first propose a problem characterization representing the trade-offs between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for the orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. Initial user tests indicate different levels of satisfaction with the implemented techniques and interaction design.
KW - non-photorealistic rendering
KW - style transfer
Y1 - 2018
SN - 978-1-5386-7315-7
U6 - https://doi.org/10.1109/CW.2018.00016
SP - 9
EP - 16
PB - IEEE
CY - New York
ER -

TY - JOUR
A1 - Reimann, Max
A1 - Klingbeil, Mandy
A1 - Pasewaldt, Sebastian
A1 - Semmo, Amir
A1 - Trapp, Matthias
A1 - Döllner, Jürgen Roland Friedrich
T1 - Locally controllable neural style transfer on mobile devices
JF - The Visual Computer
N2 - Mobile expressive rendering has gained increasing popularity among users seeking casual creativity through image stylization and supports the development of mobile artists as a new user group. In particular, neural style transfer has advanced as a core technology to emulate characteristics of manifold artistic styles. However, when it comes to creative expression, the technology still faces inherent limitations in providing low-level controls for localized image stylization. In this work, we first propose a problem characterization of interactive style transfer representing a trade-off between visual quality, run-time performance, and user control. We then present MaeSTrO, a mobile app for the orchestration of neural style transfer techniques using iterative, multi-style generative and adaptive neural networks that can be locally controlled by on-screen painting metaphors. In doing so, we enhance state-of-the-art neural style transfer techniques with mask-based loss terms that can be interactively parameterized via a generalized user interface to facilitate a creative and localized editing process. We report on a usability study and an online survey that demonstrate the ability of our app to transfer styles at improved semantic plausibility.
KW - Non-photorealistic rendering
KW - Style transfer
KW - Neural networks
KW - Mobile devices
KW - Interactive control
KW - Expressive rendering
Y1 - 2019
U6 - https://doi.org/10.1007/s00371-019-01654-1
SN - 0178-2789
SN - 1432-2315
VL - 35
IS - 11
SP - 1531
EP - 1547
PB - Springer
CY - New York
ER -
TY - JOUR
A1 - Semmo, Amir
A1 - Trapp, Matthias
A1 - Kyprianidis, Jan Eric
A1 - Döllner, Jürgen Roland Friedrich
T1 - Interactive visualization of generalized virtual 3D city models using level-of-abstraction transitions
JF - Computer graphics forum : journal of the European Association for Computer Graphics
N2 - Virtual 3D city models play an important role in the communication of complex geospatial information in a growing number of applications, such as urban planning, navigation, tourist information, and disaster management. In general, homogeneous graphic styles are used for visualization. For instance, photorealism is suitable for detailed presentations, and non-photorealism or abstract stylization is used to facilitate the guidance of a viewer's gaze to prioritized information. However, to adapt visualization to different contexts and contents, and to support saliency-guided visualization based on user interaction or dynamically changing thematic information, a combination of different graphic styles is necessary. The design and implementation of such combined graphic styles pose a number of challenges, specifically from the perspective of real-time 3D visualization. In this paper, the authors present a concept and an implementation of a system that enables different presentation styles, their seamless integration within a single view, and parametrized transitions between them, which are defined according to tasks, camera view, and image resolution. The paper outlines potential usage scenarios and application fields together with a performance evaluation of the implementation.
Y1 - 2012
U6 - https://doi.org/10.1111/j.1467-8659.2012.03081.x
SN - 0167-7055
VL - 31
IS - 3
SP - 885
EP - 894
PB - Wiley-Blackwell
CY - Hoboken
ER -

TY - THES
A1 - Trapp, Matthias
T1 - Interactive rendering techniques for focus+context visualization of 3D geovirtual environments
T1 - Interaktive Rendering-Techniken für die Fokus-&-Kontext-Visualisierung von geovirtuellen 3D-Umgebungen
N2 - This thesis introduces a collection of new real-time rendering techniques and applications for focus+context visualization of interactive 3D geovirtual environments such as virtual 3D city and landscape models. These environments are generally characterized by a large number of objects and are of high complexity with respect to geometry and textures. For these reasons, their interactive 3D rendering represents a major challenge. Their 3D depiction implies a number of weaknesses such as occlusions, cluttered image contents, and partial screen-space usage. To overcome these limitations and, thus, to facilitate the effective communication of geo-information, principles of focus+context visualization can be used for the design of real-time 3D rendering techniques for 3D geovirtual environments. In general, detailed views of a 3D geovirtual environment are combined seamlessly with abstracted views of the context within a single image. To perform the real-time image synthesis required for interactive visualization, dedicated parallel processors (GPUs) for the rasterization of computer graphics primitives are used. For this purpose, the design and implementation of appropriate data structures and rendering pipelines are necessary. The contribution of this work comprises the following five real-time rendering methods:
• The rendering technique for 3D generalization lenses enables the combination of different 3D city geometries (e.g., generalized versions of a 3D city model) in a single image in real time. The method is based on a generalized and fragment-precise clipping approach, which uses a compressible, raster-based data structure. It enables the combination of detailed views in the focus area with the representation of abstracted variants in the context area.
• The rendering technique for the interactive visualization of dynamic raster data in 3D geovirtual environments facilitates the rendering of 2D surface lenses. It enables a flexible combination of different raster layers (e.g., aerial images or videos) using projective texturing for decoupling image and geometry data. Thus, various overlapping and nested 2D surface lenses of different contents can be visualized interactively.
• The interactive rendering technique for image-based deformation of 3D geovirtual environments enables the real-time image synthesis of non-planar projections, such as cylindrical and spherical projections, as well as multi-focal 3D fisheye lenses and the combination of planar and non-planar projections.
• The rendering technique for view-dependent multi-perspective views of 3D geovirtual environments, based on the application of global deformations to the 3D scene geometry, can be used for synthesizing interactive panorama maps that combine detailed views close to the camera (focus) with abstract views in the background (context). This approach reduces occlusions, increases the usage of the available screen space, and reduces the overload of image contents.
• The object-based and image-based rendering techniques for highlighting objects and focus areas inside and outside the view frustum facilitate preattentive perception.
The concepts and implementations of interactive image synthesis for focus+context visualization and their selected applications enable a more effective communication of spatial information, and provide building blocks for the design and development of new applications and systems in the field of 3D geovirtual environments.
N2 - Die Darstellung immer komplexerer raumbezogener Information durch Geovisualisierung stellt die existierenden Technologien und den Menschen ständig vor neue Herausforderungen. In dieser Arbeit werden fünf neue, echtzeitfähige Renderingverfahren und darauf basierende Anwendungen für die Fokus-&-Kontext-Visualisierung von interaktiven geovirtuellen 3D-Umgebungen – wie virtuelle 3D-Stadt- und Landschaftsmodelle – vorgestellt. Die große Menge verschiedener darzustellender raumbezogener Information in 3D-Umgebungen führt oft zu einer hohen Anzahl unterschiedlicher Objekte und somit zu einer hohen Geometrie- und Texturkomplexität. In der Folge verlieren 3D-Darstellungen durch Verdeckungen, überladene Bildinhalte und eine geringe Ausnutzung des zur Verfügung stehenden Bildraumes an Informationswert.
Um diese Beschränkungen zu kompensieren und somit die Kommunikation raumbezogener Information zu verbessern, kann das Prinzip der Fokus-&-Kontext-Visualisierung angewendet werden. Hierbei wird die für den Nutzer wesentliche Information als detaillierte Ansicht im Fokus nahtlos mit abstrahierter Kontextinformation kombiniert. Um das für die interaktive Visualisierung notwendige Echtzeit-Rendering durchzuführen, können spezialisierte Parallelprozessoren für die Rasterisierung von computergraphischen Primitiven (GPUs) verwendet werden. Dazu ist die Konzeption und Implementierung von geeigneten Datenstrukturen und Rendering-Pipelines notwendig. Der Beitrag dieser Arbeit umfasst die folgenden fünf Renderingverfahren:
• Das Renderingverfahren für interaktive 3D-Generalisierungslinsen: Hierbei wird die Kombination unterschiedlicher 3D-Szenengeometrien, z. B. generalisierte Varianten eines 3D-Stadtmodells, in einem Bild ermöglicht. Das Verfahren basiert auf einem generalisierten Clipping-Ansatz, der es erlaubt, unter Verwendung einer komprimierbaren, rasterbasierten Datenstruktur beliebige Bereiche einer 3D-Szene freizustellen bzw. zu kappen. Somit lässt sich eine Kombination von detaillierten Ansichten im Fokusbereich mit der Darstellung einer abstrahierten Variante im Kontextbereich implementieren.
• Das Renderingverfahren zur Visualisierung von dynamischen Raster-Daten in geovirtuellen 3D-Umgebungen zur Darstellung von 2D-Oberflächenlinsen: Die Verwendung von projektiven Texturen zur Entkoppelung von Bild- und Geometriedaten ermöglicht eine flexible Kombination verschiedener Rasterebenen (z. B. Luftbilder oder Videos). Somit können verschiedene überlappende sowie verschachtelte 2D-Oberflächenlinsen mit unterschiedlichen Dateninhalten interaktiv visualisiert werden.
• Das Renderingverfahren zur bildbasierten Deformation von geovirtuellen 3D-Umgebungen: Neben der interaktiven Bildsynthese von nicht-planaren Projektionen, wie beispielsweise zylindrischen oder sphärischen Panoramen, lassen sich mit diesem Verfahren multifokale 3D-Fischaugen-Linsen erzeugen sowie planare und nicht-planare Projektionen miteinander kombinieren.
• Das Renderingverfahren für die Generierung von sichtabhängigen multiperspektivischen Ansichten von geovirtuellen 3D-Umgebungen: Das Verfahren basiert auf globalen Deformationen der 3D-Szenengeometrie und kann zur Erstellung von interaktiven 3D-Panoramakarten verwendet werden, welche beispielsweise detaillierte Ansichten nahe der virtuellen Kamera (Fokus) mit abstrakten Ansichten im Hintergrund (Kontext) kombinieren. Dieser Ansatz reduziert Verdeckungen, nutzt den zur Verfügung stehenden Bildraum in verbesserter Weise aus und reduziert das Überladen von Bildinhalten.
• Objekt- und bildbasierte Renderingverfahren für die Hervorhebung von Fokus-Objekten und Fokus-Bereichen innerhalb und außerhalb des sichtbaren Bildausschnitts, um die präattentive Wahrnehmung eines Benutzers besser zu unterstützen.
Die in dieser Arbeit vorgestellten Konzepte, Entwürfe und Implementierungen von interaktiven Renderingverfahren zur Fokus-&-Kontext-Visualisierung sowie deren ausgewählte Anwendungen ermöglichen eine effektivere Kommunikation raumbezogener Information und repräsentieren softwaretechnische Bausteine für die Entwicklung neuer Anwendungen und Systeme im Bereich der geovirtuellen 3D-Umgebungen.
KW - 3D Computer Grafik
KW - Interaktives Rendering
KW - Fokus-&-Kontext Visualisierung
KW - 3D Computer Graphics
KW - Interactive Rendering
KW - Focus+Context Visualization
Y1 - 2013
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus-66824
ER -

TY - JOUR
A1 - Shekhar, Sumit
A1 - Reimann, Max
A1 - Mayer, Maximilian
A1 - Semmo, Amir
A1 - Pasewaldt, Sebastian
A1 - Döllner, Jürgen
A1 - Trapp, Matthias
T1 - Interactive photo editing on smartphones via intrinsic decomposition
JF - Computer graphics forum : journal of the European Association for Computer Graphics
N2 - Intrinsic decomposition refers to the problem of estimating scene characteristics, such as albedo and shading, when one view or multiple views of a scene are provided. The inverse problem setting, where multiple unknowns are solved given a single known pixel value, is highly under-constrained. When provided with correlated image and depth data, intrinsic scene decomposition can be facilitated using depth-based priors; such depth data is nowadays easy to acquire with high-end smartphones by utilizing their depth sensors. In this work, we present a system for the intrinsic decomposition of RGB-D images on smartphones and the algorithmic as well as design choices therein. Unlike state-of-the-art methods that assume only diffuse reflectance, we consider both diffuse and specular pixels. For this purpose, we present a novel specularity extraction algorithm based on a multi-scale intensity decomposition and chroma inpainting. The diffuse component is then further decomposed into albedo and shading components. We use an inertial proximal algorithm for non-convex optimization (iPiano) to ensure albedo sparsity. Our GPU-based visual processing is implemented on iOS via the Metal API and enables interactive performance on an iPhone 11 Pro. Further, a qualitative evaluation shows that we are able to obtain high-quality outputs. Furthermore, our proposed approach for specularity removal outperforms state-of-the-art approaches for real-world images, while our albedo and shading layer decomposition is faster than prior work at a comparable output quality. Manifold applications such as recoloring, retexturing, relighting, appearance editing, and stylization are shown, each using the intrinsic layers obtained with our method and/or the corresponding depth data.
KW - Computing methodologies
KW - Image-based rendering
KW - Image processing
KW - Computational photography
Y1 - 2021
U6 - https://doi.org/10.1111/cgf.142650
SN - 0167-7055
SN - 1467-8659
VL - 40
SP - 497
EP - 510
PB - Blackwell
CY - Oxford
ER -