A fundamental task in 3D geovisualization and GIS applications is the visualization of vector data, which can represent features such as transportation networks or land-use coverage. Mapping, or draping, vector data represented by geometric primitives (e.g., polylines or polygons) onto 3D digital elevation or terrain models is a challenging task. We present an interactive GPU-based approach that performs geometry-based draping of vector data on a per-frame basis, using only an image-based representation of the 3D digital elevation or terrain model.
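The core of such draping is sampling terrain elevation from an image-based representation (a heightmap) at each vector vertex. The GPU variant would do this in a shader with texture lookups; as a minimal CPU-side sketch of the same idea (all names and the bilinear-lookup formulation are illustrative, not taken from the paper), assuming a regular-grid heightmap:

```python
import numpy as np

def drape_polyline(vertices_xy, heightmap, cell_size):
    """Drape 2D polyline vertices onto a terrain heightmap.

    vertices_xy: (N, 2) array of x/y world coordinates.
    heightmap:   (H, W) array of elevation samples (image-based DEM).
    cell_size:   world-space spacing between adjacent heightmap samples.
    Returns an (N, 3) array with bilinearly interpolated z values.
    """
    # Convert world coordinates to fractional grid indices.
    u = vertices_xy[:, 0] / cell_size
    v = vertices_xy[:, 1] / cell_size
    x0 = np.clip(np.floor(u).astype(int), 0, heightmap.shape[1] - 2)
    y0 = np.clip(np.floor(v).astype(int), 0, heightmap.shape[0] - 2)
    fx = np.clip(u - x0, 0.0, 1.0)
    fy = np.clip(v - y0, 0.0, 1.0)
    # Bilinear interpolation of the four surrounding elevation samples,
    # analogous to a filtered heightmap texture fetch on the GPU.
    z = (heightmap[y0, x0] * (1 - fx) * (1 - fy)
         + heightmap[y0, x0 + 1] * fx * (1 - fy)
         + heightmap[y0 + 1, x0] * (1 - fx) * fy
         + heightmap[y0 + 1, x0 + 1] * fx * fy)
    return np.column_stack([vertices_xy, z])
```

Performing this lookup per frame, rather than precomputing draped geometry, is what allows the vector data to follow a terrain model that changes resolution or level of detail interactively.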
The availability of detailed virtual 3D building models, including representations of indoor elements, allows for a wide range of applications requiring effective exploration and navigation functionality. Depending on the application context, users should be enabled to focus on specific Objects-of-Interest (OOIs) or important building elements. This requires approaches to filtering building parts as well as techniques to visualize important building objects and their relations. To this end, this paper explores the application and combination of interactive rendering techniques, as well as their semantically driven configuration, in the context of 3D indoor models.
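A semantically driven configuration of this kind amounts to selecting OOIs by their semantic attributes and assigning focus versus context rendering styles accordingly. A minimal sketch, assuming hypothetical element records with IFC-like class names (the attribute names and sample data are illustrative, not from the paper):

```python
# Hypothetical indoor-model elements; "ifc_class" and "storey" are
# illustrative semantic attributes, not an actual IFC API.
elements = [
    {"id": 1, "ifc_class": "IfcDoor", "storey": 2},
    {"id": 2, "ifc_class": "IfcWall", "storey": 2},
    {"id": 3, "ifc_class": "IfcStair", "storey": 1},
]

def select_ooi(elements, classes, storey=None):
    """Return elements whose semantic class (and optional storey) matches
    the Object-of-Interest filter; the rest form the visual context."""
    return [e for e in elements
            if e["ifc_class"] in classes
            and (storey is None or e["storey"] == storey)]

# Focus on doors and stairs of storey 2; context elements would then be
# rendered with a de-emphasizing style (e.g., ghosted or transparent).
focus = select_ooi(elements, {"IfcDoor", "IfcStair"}, storey=2)
```

The same filter result can drive several of the rendering techniques the paper combines, e.g., cut-aways or transparency for occluding context geometry around the selected OOIs.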
The use of Building Information Modeling (BIM) for Facility Management (FM) in the Operation and Maintenance (O&M) stages of the building life-cycle is intended to bridge the gap between operations and digital data, but it lacks the functionality of assessing the state of the built environment because the associated semantics are not generated automatically. 3D point clouds can capture the physical state of the built environment but also lack these associated semantics. A prototypical implementation of a service-oriented architecture for multiview classification of indoor point cloud scenes of office environments is presented. The multiview classification approach is tested using a retrained Convolutional Neural Network (CNN) model, Inception V3. The presented approach for classifying common office furniture objects (chairs, sofas, and desks) contained in 3D point cloud scans is tested and evaluated. The results show that the approach can classify common office furniture with acceptable accuracy and is suitable for quick and robust semantics approximation based on RGB (red, green, and blue channel) cubemap images of the octree-partitioned areas of the 3D point cloud scan. Additional methods for web-based 3D visualization, editing, and annotation of point clouds are also discussed. Using the described approach, captured scans of indoor environments can be semantically enriched with object annotations derived from multiview classification results. Furthermore, the approach is suited for semantic enrichment of lower-resolution indoor point clouds acquired with commodity mobile devices.
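The first step of such a pipeline is partitioning the scanned point cloud into octree cells, each of which is then rendered to cubemap images and passed to the CNN. A minimal sketch of a fixed-depth octree partition (the function and its interface are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def octree_cells(points, depth):
    """Group points of a normalized cloud into leaf cells of a
    fixed-depth octree.

    points: (N, 3) array with coordinates normalized to [0, 1).
    depth:  subdivision depth; the grid has 2**depth cells per axis.
    Returns a dict mapping cell index triples to point-index arrays.
    """
    n = 2 ** depth
    # Quantize each coordinate to its cell index along each axis.
    idx = np.clip((points * n).astype(int), 0, n - 1)
    cells = {}
    for i, key in enumerate(map(tuple, idx)):
        cells.setdefault(key, []).append(i)
    return {k: np.array(v) for k, v in cells.items()}
```

Each non-empty cell would then be rendered from several viewpoints (e.g., the six cubemap faces) and the resulting RGB images classified; aggregating the per-view predictions yields the multiview label that is attached back to the cell's points as a semantic annotation.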