Concepts and techniques for integration, analysis and visualization of massive 3D point clouds
(2014)
Remote sensing methods, such as LiDAR and image-based photogrammetry, are established approaches for capturing the physical world. Professional and low-cost scanning devices are capable of generating dense 3D point clouds. Typically, these 3D point clouds are preprocessed by GIS and are then used as input data in a variety of applications such as urban planning, environmental monitoring, disaster management, and simulation. The availability of area-wide 3D point clouds will drastically increase in the future due to the availability of novel capturing methods (e.g., driver assistance systems) and low-cost scanning devices. Applications, systems, and workflows will therefore face large collections of redundant, up-to-date 3D point clouds and have to cope with massive amounts of data. Hence, approaches are required that will efficiently integrate, update, manage, analyze, and visualize 3D point clouds. In this paper, we define requirements for a system infrastructure that enables the integration of 3D point clouds from heterogeneous capturing devices and different timestamps. Change detection and update strategies for 3D point clouds are presented that reduce storage requirements and offer new insights for analysis purposes. We also present an approach that attributes 3D point clouds with semantic information (e.g., object class category information), which enables more effective data processing, analysis, and visualization. Out-of-core real-time rendering techniques then allow for an interactive exploration of the entire 3D point cloud and the corresponding analysis results. Web-based visualization services are utilized to make 3D point clouds available to a large community. The proposed concepts and techniques are designed to establish 3D point clouds as base datasets, as well as rendering primitives for analysis and visualization tasks, which allow operations to be performed directly on the point data. Finally, we evaluate the presented system, report on its applications, and discuss further research challenges.
The use of Building Information Modeling (BIM) for Facility Management (FM) in the Operation and Maintenance (O&M) stages of the building life-cycle is intended to bridge the gap between operations and digital data, but lacks the functionality of assessing the state of the built environment due to non-automated generation of associated semantics. 3D point clouds can be used to capture the physical state of the built environment, but also lack these associated semantics. A prototypical implementation of a service-oriented architecture for classification of indoor point cloud scenes of office environments is presented, using multiview classification. The multiview classification approach is tested using a retrained Convolutional Neural Network (CNN) model, Inception V3. The presented approach for classifying common office furniture objects (chairs, sofas, and desks), contained in 3D point cloud scans, is tested and evaluated. The results show that the presented approach can classify common office furniture up to an acceptable degree of accuracy and is suitable for quick and robust semantics approximation, based on RGB (red, green, and blue color channel) cubemap images of the octree-partitioned areas of the 3D point cloud scan. Additional methods for web-based 3D visualization, editing, and annotation of point clouds are also discussed. Using the described approach, captured scans of indoor environments can be semantically enriched using object annotations derived from multiview classification results. Furthermore, the presented approach is suited for semantic enrichment of lower-resolution indoor point clouds acquired using commodity mobile devices.
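The partition-then-classify pipeline described above can be sketched as follows. This is a minimal, in-core illustration (uniform-grid leaf cells standing in for the octree partition, and a majority vote fusing per-view labels), not the paper's implementation; all function names are illustrative, and the CNN inference step is abstracted away as a list of per-view label strings.

```python
import numpy as np
from collections import Counter

def octree_cells(points, depth=2):
    """Partition a point cloud into uniform leaf cells at a given depth.

    Illustrative stand-in for the paper's octree partitioning step:
    returns a dict mapping integer cell-index tuples to point arrays.
    """
    lo, hi = points.min(axis=0), points.max(axis=0)
    n = 2 ** depth                                    # cells per axis
    # Normalize into [0, n); clip the upper boundary into the last cell.
    idx = np.floor((points - lo) / (hi - lo + 1e-9) * n).astype(int)
    idx = np.clip(idx, 0, n - 1)
    cells = {}
    for key, p in zip(map(tuple, idx), points):
        cells.setdefault(key, []).append(p)
    return {k: np.asarray(v) for k, v in cells.items()}

def fuse_views(view_labels):
    """Majority vote over per-view CNN predictions for one cell."""
    return Counter(view_labels).most_common(1)[0][0]
```

In the actual approach, each cell would be rendered as RGB cubemap images, each image classified by the retrained Inception V3 model, and the per-view predictions fused per cell, e.g. `fuse_views(["chair", "chair", "desk"])` yields `"chair"`.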
The rapid digitalization of the Facility Management (FM) sector has increased the demand for mobile, interactive analytics approaches concerning the operational state of a building. These approaches provide the key to increasing stakeholder engagement associated with Operation and Maintenance (O&M) procedures of living and working areas, buildings, and other built environment spaces. We present a generic and fast approach to process and analyze given 3D point clouds of typical indoor office spaces to create corresponding up-to-date approximations of classified segments and object-based 3D models that can be used to analyze, record, and highlight changes of spatial configurations. The approach is based on machine-learning methods used to classify the scanned 3D point cloud data using 2D images. This approach can be used to primarily track changes of objects over time for comparison, allowing for routine classification, and presentation of results used for decision making. We specifically focus on classification, segmentation, and reconstruction of multiple different object types in a 3D point-cloud scene. We present our current research and describe the implementation of these technologies as a web-based application using a service-oriented methodology.
If sites, cities, and landscapes are captured at different points in time using technology such as LiDAR, large collections of 3D point clouds result. Their efficient storage, processing, analysis, and presentation constitute a challenging task because of limited computation, memory, and time resources. In this work, we present an approach to detect changes in massive 3D point clouds based on an out-of-core spatial data structure that is designed to store data acquired at different points in time and to efficiently attribute 3D points with distance information. Based on this data structure, we present and evaluate different processing schemes optimized for performing the calculation on the CPU and GPU. In addition, we present a point-based rendering technique adapted for attributed 3D point clouds, to enable effective out-of-core real-time visualization of the computation results. Our approach enables conclusions to be drawn about temporal changes in large highly accurate 3D geodata sets of a captured area at reasonable preprocessing and rendering times. We evaluate our approach with two data sets from different points in time for the urban area of a city, describe its characteristics, and report on applications.
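The distance-attribution step at the core of this change-detection approach can be illustrated with a brute-force, in-core sketch. The paper's system instead streams tiles of an out-of-core spatial data structure and accelerates the neighbour queries (on CPU or GPU); function and parameter names here are illustrative only.

```python
import numpy as np

def attribute_change(epoch_a, epoch_b, threshold=0.5):
    """Attribute each point of epoch B with its distance to the nearest
    point of epoch A; distances above `threshold` flag candidate changes.

    Brute-force pairwise distances (|B| x |A| matrix) -- suitable only
    for small clouds; a real system would use spatial indexing.
    """
    d = np.linalg.norm(epoch_b[:, None, :] - epoch_a[None, :, :], axis=2)
    dist = d.min(axis=1)          # per-point distance attribute
    return dist, dist > threshold # attribute array + change mask
```

The returned per-point distance attribute is exactly the kind of information the rendering technique described above maps to color for out-of-core visualization of the results.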
Integrated real-time visualisation of massive 3D point clouds and geo-referenced textured data
(2011)
Bridge damage
(2020)
Building Information Modeling (BIM) representations of bridges enriched by inspection data will add tremendous value to future Bridge Management Systems (BMSs). This paper presents an approach for point cloud-based detection of spalling damage, as well as integrating damage components into a BIM via semantic enrichment of an as-built Industry Foundation Classes (IFC) model. An approach for generating the as-built BIM, geometric reconstruction of detected damage point clusters, and semantic enrichment of the corresponding IFC model is presented. Multiview classification is used and evaluated for the detection of spalling damage features. The semantic enrichment of as-built IFC models is based on injecting classified and reconstructed damage clusters back into the as-built IFC, thus generating an accurate as-is IFC model compliant with the BMS inspection requirements.
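As a rough geometric illustration of what spalling detection in a point cloud involves, the sketch below flags points that deviate from the best-fit plane of a locally planar surface patch (e.g., a bridge deck face). This is a simplified, hypothetical stand-in for the paper's multiview-classification-based detection, not its actual method.

```python
import numpy as np

def spalling_candidates(points, depth_tol=0.01):
    """Flag points deviating from the patch's best-fit plane by more
    than `depth_tol` (in the cloud's units, e.g. metres).

    The plane normal is the right singular vector belonging to the
    smallest singular value of the centred point matrix.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    depth = (points - centroid) @ normal   # signed distance to plane
    return np.abs(depth) > depth_tol
```

Flagged points would then be clustered into damage components, reconstructed geometrically, and injected into the as-built IFC model as described above.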
Context. The large jet kinetic power and non-thermal processes occurring in the microquasar SS 433 make this source a good candidate for a very high-energy (VHE) gamma-ray emitter. Gamma-ray fluxes above the sensitivity limits of current Cherenkov telescopes have been predicted for both the central X-ray binary system and the interaction regions of SS 433 jets with the surrounding W50 nebula. Non-thermal emission at lower energies has been previously reported, indicating that efficient particle acceleration is taking place in the system.
Aims. We explore the capability of SS 433 to emit VHE gamma rays during periods in which the expected flux attenuation due to periodic eclipses (P_orb ∼ 13.1 days) and precession of the circumstellar disk (P_pre ∼ 162 days) periodically covering the central binary system is expected to be at its minimum. The eastern and western SS 433/W50 interaction regions are also examined using the whole data set available. We aim to constrain some theoretical models previously developed for this system with our observations.
Methods. We made use of dedicated observations of SS 433 from the Major Atmospheric Gamma Imaging Cherenkov telescopes (MAGIC) and the High Energy Spectroscopic System (H.E.S.S.) taken from 2006 to 2011. These observations were combined for the first time and accounted for a total effective observation time of 16.5 h, which was scheduled considering the expected phases of minimum absorption of the putative VHE emission. Gamma-ray attenuation does not affect the jet/medium interaction regions; in this case, the analysis of a larger data set amounting to ∼40–80 h, depending on the region, was employed.
Results. No evidence of VHE gamma-ray emission either from the central binary system or from the eastern/western interaction regions was found. Upper limits were computed for the combined data set. Differential fluxes from the central system are found to be ≲ 10^−12–10^−13 TeV^−1 cm^−2 s^−1 in an energy interval ranging from ∼few × 100 GeV to ∼few TeV. Integral flux limits down to ∼10^−12–10^−13 ph cm^−2 s^−1 and ∼10^−13–10^−14 ph cm^−2 s^−1 are obtained at 300 and 800 GeV, respectively. Our results are used to place constraints on the particle acceleration fraction at the inner jet regions and on the physics of the jet/medium interactions.
Conclusions. Our findings suggest that the fraction of the jet kinetic power that is transferred to relativistic protons must be relatively small in SS 433, q_p ≤ 2.5 × 10^−5, to explain the lack of TeV and neutrino emission from the central system. At the SS 433/W50 interface, the presence of magnetic fields ≳ 10 μG is derived assuming a synchrotron origin for the observed X-ray emission. This also implies the presence of high-energy electrons with E_e up to 50 TeV, preventing an efficient production of gamma-ray fluxes in these interaction regions.
3D point cloud technology facilitates the automated and highly detailed acquisition of real-world environments such as assets, sites, and countries. We present a web-based system for the interactive exploration and inspection of arbitrary large 3D point clouds. Our approach is able to render 3D point clouds with billions of points using spatial data structures and level-of-detail representations. Point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering, e.g., based on semantics. A set of interaction techniques allows users to collaboratively work with the data (e.g., measuring distances and annotating). Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate processing and analysis operations. We have evaluated the presented techniques in case studies with different data sets from aerial, mobile, and terrestrial acquisition with up to 120 billion points to show their practicality and feasibility.
3D point cloud technology facilitates the automated and highly detailed digital acquisition of real-world environments such as assets, sites, cities, and countries; the acquired 3D point clouds represent an essential category of geodata used in a variety of geoinformation applications and systems. In this paper, we present a web-based system for the interactive and collaborative exploration and inspection of arbitrary large 3D point clouds. Our approach is based on standard WebGL on the client side and is able to render 3D point clouds with billions of points. It uses spatial data structures and level-of-detail representations to manage the 3D point cloud data and to deploy out-of-core and web-based rendering concepts. By providing functionality for both thin-client and thick-client applications, the system scales for client devices that are vastly different in computing capabilities. Different 3D point-based rendering techniques and post-processing effects are provided to enable task-specific and data-specific filtering and highlighting, e.g., based on per-point surface categories or temporal information. A set of interaction techniques allows users to collaboratively work with the data, e.g., by measuring distances and areas, by annotating, or by selecting and extracting data subsets. Additional value is provided by the system's ability to display additional, context-providing geodata alongside 3D point clouds and to integrate task-specific processing and analysis operations. We have evaluated the presented techniques and the prototype system with different data sets from aerial, mobile, and terrestrial acquisition campaigns with up to 120 billion points to show their practicality and feasibility.
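The level-of-detail selection such renderers perform can be sketched as a traversal that refines an octree node only while its projected point spacing exceeds a screen-space error budget. This is a minimal illustration with hypothetical node fields ("center", "spacing", "children"), not the system's actual code; a real out-of-core renderer would additionally fetch node payloads asynchronously and cap the total point budget per frame.

```python
import math

def _dist(a, b):
    """Euclidean distance between two 3D positions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_nodes(node, cam_pos, fov_px, threshold_px, out):
    """Collect the set of octree nodes to render.

    A node is refined (its children visited) only while its projected
    point spacing on screen exceeds `threshold_px` pixels; otherwise
    it is coarse enough and rendered as-is.
    """
    d = max(_dist(node["center"], cam_pos), 1e-6)
    err_px = node["spacing"] / d * fov_px   # projected spacing, in pixels
    if err_px <= threshold_px or not node["children"]:
        out.append(node)                    # coarse enough, or a leaf
    else:
        for child in node["children"]:
            select_nodes(child, cam_pos, fov_px, threshold_px, out)
```

Moving the camera closer increases the projected spacing of a node, so the traversal descends deeper and selects finer tiles, which is what makes rendering billions of points with a bounded per-frame workload feasible.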
3D point clouds are a digital representation of our world and are used in a variety of applications. They are captured with LiDAR or derived by image-matching approaches to get surface information of objects, e.g., indoor scenes, buildings, infrastructures, cities, and landscapes. We present novel interaction and visualization techniques for heterogeneous, time-variant, and semantically rich 3D point clouds. Interactive and view-dependent see-through lenses are introduced as exploration tools to enhance recognition of objects, semantics, and temporal changes within 3D point cloud depictions. We also develop filtering and highlighting techniques that are used to dissolve occlusion and to give context-specific insights. All techniques can be combined with an out-of-core real-time rendering system for massive 3D point clouds. We have evaluated the presented approach with 3D point clouds from different application domains. The results show its usability and how different visualization and exploration tasks can be improved for a variety of domain-specific applications.
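At its core, a see-through lens of this kind reduces to a screen-space membership test: points whose projection falls inside the lens receive an alternate filter (e.g., another epoch or semantic class made visible), while the rest keep the default rendering. A minimal 2D sketch with illustrative names, not the paper's API:

```python
import math

def lens_mask(points_2d, lens_center, lens_radius):
    """Membership test for a circular screen-space lens.

    Returns True for points whose projected position lies inside the
    lens and should therefore receive the lens's alternate
    filtering/highlighting.
    """
    cx, cy = lens_center
    return [math.hypot(x - cx, y - cy) <= lens_radius
            for (x, y) in points_2d]
```

In a GPU renderer this test would run per fragment or per point in a shader; evaluating it against projected coordinates each frame is what makes the lens view-dependent and interactive.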