TY - THES
A1 - Discher, Sören
T1 - Real-Time Rendering Techniques for Massive 3D Point Clouds
T1 - Echtzeit-Rendering-Techniken für massive 3D-Punktwolken
N2 - Today, point clouds are among the most important categories of spatial data, as they constitute digital 3D models of the as-is reality that can be created at unprecedented speed and precision. However, their unique properties, i.e., the lack of structure, order, or connectivity information, necessitate specialized data structures and algorithms to leverage their full precision. In particular, this holds true for the interactive visualization of point clouds, which requires balancing hardware limitations regarding GPU memory and bandwidth against a naturally high susceptibility to visual artifacts. This thesis focuses on concepts, techniques, and implementations of robust, scalable, and portable 3D visualization systems for massive point clouds. To that end, a number of rendering, visualization, and interaction techniques are introduced that extend several basic strategies for decoupling rendering efforts from data management: First, a novel visualization technique that facilitates context-aware filtering, highlighting, and interaction within point cloud depictions. Second, hardware-specific optimization techniques that improve rendering performance and image quality in an increasingly diversified hardware landscape. Third, natural and artificial locomotion techniques for nausea-free exploration with state-of-the-art virtual reality devices. Fourth, a framework for web-based rendering that enables collaborative exploration of point clouds across device ecosystems and facilitates integration into established workflows and software systems. In cooperation with partners from industry and academia, the practicability and robustness of the presented techniques are showcased in several case studies using representative application scenarios and point cloud data sets.
In summary, the work shows that the interactive visualization of point clouds can be implemented by a multi-tier software architecture with a number of domain-independent, generic system components that rely on optimization strategies specific to large point clouds. It demonstrates the feasibility of interactive, scalable point cloud visualization as a key component for distributed IT solutions that operate on spatial digital twins, providing arguments in favor of using point clouds as a universal type of spatial base data that can be used directly for visualization purposes.
KW - 3D Point Clouds
KW - Real-Time Rendering
KW - Visualization
KW - Virtual Reality
KW - Web-Based Rendering
KW - 3D-Punktwolken
KW - Echtzeit-Rendering
KW - Visualisierung
KW - Virtuelle Realität
KW - Webbasiertes Rendering
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-601641
ER -
TY - JOUR
A1 - Discher, Sören
A1 - Richter, Rico
A1 - Döllner, Jürgen Roland Friedrich
T1 - Interactive and View-Dependent See-Through Lenses for Massive 3D Point Clouds
JF - Advances in 3D Geoinformation
N2 - 3D point clouds are a digital representation of our world and are used in a variety of applications.
They are captured with LiDAR or derived by image-matching approaches to obtain surface information of objects, e.g., indoor scenes, buildings, infrastructures, cities, and landscapes. We present novel interaction and visualization techniques for heterogeneous, time-variant, and semantically rich 3D point clouds. Interactive and view-dependent see-through lenses are introduced as exploration tools to enhance the recognition of objects, semantics, and temporal changes within 3D point cloud depictions. We also develop filtering and highlighting techniques that dissolve occlusion to give context-specific insights. All techniques can be combined with an out-of-core real-time rendering system for massive 3D point clouds. We have evaluated the presented approach with 3D point clouds from different application domains. The results show the usability and how different visualization and exploration tasks can be improved for a variety of domain-specific applications.
KW - 3D point clouds
KW - LiDAR
KW - Visualization
KW - Point-based rendering
Y1 - 2016
SN - 978-3-319-25691-7
SN - 978-3-319-25689-4
U6 - https://doi.org/10.1007/978-3-319-25691-7_3
SN - 1863-2246
SP - 49
EP - 62
PB - Springer
CY - Cham
ER -
TY - JOUR
A1 - Discher, Sören
A1 - Richter, Rico
A1 - Döllner, Jürgen Roland Friedrich
T1 - Concepts and techniques for web-based visualization and processing of massive 3D point clouds with semantics
JF - Graphical Models
N2 - 3D point cloud technology facilitates the automated and highly detailed acquisition of real-world environments such as assets, sites, and countries. We present a web-based system for the interactive exploration and inspection of arbitrarily large 3D point clouds. Our approach is able to render 3D point clouds with billions of points using spatial data structures and level-of-detail representations. Point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering, e.g., based on semantics.
A set of interaction techniques allows users to collaboratively work with the data (e.g., measuring distances and annotating). Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate processing and analysis operations. We have evaluated the presented techniques in case studies with different data sets from aerial, mobile, and terrestrial acquisition with up to 120 billion points to show their practicality and feasibility.
KW - 3D Point clouds
KW - Web-based rendering
KW - Point-based rendering
KW - Processing strategies
Y1 - 2019
U6 - https://doi.org/10.1016/j.gmod.2019.101036
SN - 1524-0703
SN - 1524-0711
VL - 104
PB - Elsevier
CY - San Diego
ER -
TY - GEN
A1 - Discher, Sören
A1 - Richter, Rico
A1 - Döllner, Jürgen Roland Friedrich
ED - Spencer, SN
T1 - A scalable WebGL-based approach for visualizing massive 3D point clouds using semantics-dependent rendering techniques
T2 - Web3D 2018: The 23rd International ACM Conference on 3D Web Technology
N2 - 3D point cloud technology facilitates the automated and highly detailed digital acquisition of real-world environments such as assets, sites, cities, and countries; the acquired 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications and systems. In this paper, we present a web-based system for the interactive and collaborative exploration and inspection of arbitrarily large 3D point clouds. Our approach is based on standard WebGL on the client side and is able to render 3D point clouds with billions of points. It uses spatial data structures and level-of-detail representations to manage the 3D point cloud data and to deploy out-of-core and web-based rendering concepts. By providing functionality for both thin-client and thick-client applications, the system scales to client devices with vastly different computing capabilities.
Different 3D point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering and highlighting, e.g., based on per-point surface categories or temporal information. A set of interaction techniques allows users to collaboratively work with the data, e.g., by measuring distances and areas, by annotating, or by selecting and extracting data subsets. Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate task-specific processing and analysis operations. We have evaluated the presented techniques and the prototype system with different data sets from aerial, mobile, and terrestrial acquisition campaigns with up to 120 billion points to show their practicality and feasibility.
KW - 3D Point Clouds
KW - web-based rendering
KW - point-based rendering
Y1 - 2018
SN - 978-1-4503-5800-2
U6 - https://doi.org/10.1145/3208806.3208816
SP - 1
EP - 9
PB - Association for Computing Machinery
CY - New York
ER -
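Several of the records above describe systems that use spatial data structures and level-of-detail representations to render point clouds with billions of points out-of-core. As an illustrative, hypothetical sketch only (it is not taken from any of the cited publications, and all names such as `OctreeNode` and `select_nodes` are invented for this example), a view-dependent level-of-detail selection over an octree might pick the coarsest nodes whose stored point spacing projects below a screen-space error budget:

```python
import math
from dataclasses import dataclass, field

@dataclass
class OctreeNode:
    """One node of a point-cloud octree; each node stores a subsampled point set."""
    center: tuple          # (x, y, z) of the node's bounding-cube center
    half_size: float       # half the edge length of the bounding cube
    spacing: float         # minimum distance between points stored in this node
    children: list = field(default_factory=list)

def projected_spacing(node, cam_pos, fov_y, screen_height):
    """Approximate on-screen size (in pixels) of the node's point spacing."""
    # Conservative distance: camera to the nearest corner of the bounding cube.
    dist = max(1e-6, math.dist(node.center, cam_pos) - node.half_size * math.sqrt(3))
    # Pinhole projection: pixels per world unit at this distance.
    pixels_per_unit = screen_height / (2.0 * dist * math.tan(fov_y / 2.0))
    return node.spacing * pixels_per_unit

def select_nodes(root, cam_pos, fov_y, screen_height, max_error_px=1.5):
    """Collect the coarsest nodes whose point spacing projects below the
    screen-space error budget; descend only where more detail is needed."""
    selected, stack = [], [root]
    while stack:
        node = stack.pop()
        if projected_spacing(node, cam_pos, fov_y, screen_height) <= max_error_px or not node.children:
            selected.append(node)        # dense enough on screen (or a leaf): render as-is
        else:
            stack.extend(node.children)  # gaps would be visible: refine into children
    return selected
```

Because the traversal descends only where the projected point spacing exceeds the budget, the per-frame rendering load depends on the view rather than on the total data size, which is the basic idea behind the out-of-core strategies the abstracts refer to.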