Klassifikationen von Computerspielen addresses the terms used to classify computer games. A representative selection of such classification models, covering the work of designers, journalists, educators, laypeople, and dedicated computer game researchers, is presented and evaluated with respect to its applicability to the unambiguous characterization of specific games. Two fundamentally different approaches to the problem emerge: "categorizations" establish fixed categories into which individual games are to be sorted unambiguously, whereas "typologies" examine and classify the individual elements of games. Both approaches are analyzed and their respective advantages and disadvantages are pointed out. Since it becomes apparent that the classification of computer games depends to a significant degree on the underlying understanding of what a "computer game" is, the examination of the classification models is preceded by a discussion of this problematic definition, carried out by way of example on four selected aspects.
Remote sensing technologies, such as airborne, mobile, or terrestrial laser scanning and photogrammetric techniques, are fundamental approaches for the efficient, automatic creation of digital representations of spatial environments. For example, they allow us to generate 3D point clouds of landscapes, cities, infrastructure networks, and sites. As an essential and universal category of geodata, 3D point clouds are used and processed by a growing number of applications, services, and systems, for example in urban planning, landscape architecture, environmental monitoring, disaster management, and virtual geographic environments, as well as for spatial analysis and simulation.
While the acquisition processes for 3D point clouds are becoming more reliable and more widely used, applications and systems are confronted with ever-growing volumes of 3D point cloud data. Moreover, 3D point clouds are, by their very nature, raw data: they contain no structural or semantic information. Many processing strategies common in GIS, such as deriving polygon-based 3D models, generally do not scale to billions of points. GIS typically reduce the data density and precision of 3D point clouds to cope with the sheer amount of data, but this comes at the cost of a significant loss of valuable information.
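To make the trade-off concrete, here is a minimal, hypothetical sketch (plain numpy; the function name and parameters are illustrative, not from the thesis) of the voxel-grid downsampling that typical GIS pipelines apply, showing why density reduction irreversibly discards detail:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel.

    points: (N, 3) float array of x, y, z coordinates.
    Returns a much smaller (M, 3) array; all fine-grained structure
    inside each voxel is irreversibly lost.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel key and average the members of each group.
    _, inverse, counts = np.unique(keys, axis=0,
                                   return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

cloud = np.random.rand(1_000_000, 3) * 100.0   # synthetic 100 m cube
reduced = voxel_downsample(cloud, voxel_size=1.0)
print(cloud.shape[0], "->", reduced.shape[0], "points")
```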
This thesis proposes concepts and techniques for efficiently storing and processing massive 3D point clouds. To this end, object-class segmentation approaches are presented that attribute semantics to 3D point clouds, used, for example, to identify building, vegetation, and ground structures and thus to enable more effective and efficient processing, analysis, and visualization of 3D point clouds. Similarly, change detection and update strategies for 3D point clouds are introduced that reduce storage requirements and allow 3D point cloud databases to be updated incrementally. In addition, this thesis presents out-of-core, real-time rendering techniques for interactively exploring 3D point clouds and related analysis results. All techniques have been implemented on the basis of specialized spatial data structures, out-of-core algorithms, and GPU-based processing schemes to cope with massive 3D point clouds comprising billions of points.
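The abstract does not spell out the thesis's data structures, so the following is only an illustrative sketch of the general idea behind such specialized spatial indexing: a linear octree key (Morton order) that groups points into nodes, so an out-of-core renderer can fetch on-disk node chunks instead of raw points. The function name and parameters are assumptions:

```python
import numpy as np

def octree_keys(points, bbox_min, bbox_max, depth):
    """Assign each point the Morton-style key of its octree node at `depth`.

    Points sharing a key belong to the same node and can be stored as one
    on-disk chunk, so a viewer loads nodes on demand rather than all points.
    depth must be <= 21 so that 3 * depth bits fit into a uint64 key.
    """
    # Normalize coordinates into integer cells in [0, 2^depth) per axis.
    scale = (2 ** depth) / (bbox_max - bbox_min)
    cells = np.clip((points - bbox_min) * scale,
                    0, 2 ** depth - 1).astype(np.uint64)
    keys = np.zeros(len(points), dtype=np.uint64)
    for level in range(depth):
        # Interleave one bit of x, y, z per level (Morton order).
        for axis in range(3):
            bit = (cells[:, axis] >> np.uint64(level)) & np.uint64(1)
            keys |= bit << np.uint64(3 * level + axis)
    return keys

keys = octree_keys(np.random.rand(1_000_000, 3),
                   np.zeros(3), np.ones(3), depth=5)
```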
All proposed techniques have been evaluated and have demonstrated their applicability to geospatial applications and systems, in particular for tasks such as classification, processing, and visualization. Case studies on 3D point clouds of entire cities with up to 80 billion points show that the presented approaches open up new ways to manage and apply large-scale, dense, and time-variant 3D point clouds, as required by a rapidly growing number of applications and systems.
The Himalayas are a region that depends heavily on changing meltwater resources, but is also frequently exposed to the hazards they pose. This mountain belt hosts the highest peaks on Earth, holds the largest reserves of ice outside the polar regions, and has been home to a rapidly growing population in recent decades. One source of hazard has attracted particular scientific attention over the past two decades: glacial lake outburst floods (GLOFs) occur rarely, but mostly with fatal and catastrophic consequences for downstream communities and infrastructure. GLOFs can suddenly release several million cubic meters of water from naturally impounded meltwater lakes. Glacial lakes have grown in number and size as a result of ongoing glacial mass losses in the Himalayas. Theory holds that enhanced meltwater production may increase GLOF frequency, but this notion has never been tested. The key challenge in testing it is the high altitude (>4000 m) at which these lakes occur, which makes fieldwork impractical. Moreover, flood waves can attenuate rapidly in mountain channels downstream, so many GLOFs have likely gone unnoticed in past decades. Our knowledge of GLOFs is hence likely biased towards larger, destructive cases, which hampers a detailed quantification of their frequency and their response to atmospheric warming. Robustly quantifying the magnitude and frequency of GLOFs is essential for risk assessment and management along mountain rivers, not least for incorporating their return periods into building design codes.
Motivated by this limited knowledge of GLOF frequency and hazard, I developed an algorithm that efficiently detects GLOFs in satellite images. In essence, the algorithm classifies land cover in 30 years (~1988–2017) of continuously recorded Landsat images over the Himalayas and calculates, within this stack of land-cover images, the likelihood of rapidly shrinking water bodies. I visually checked the detected candidate sites for sediment fans in the river channel downstream, a second key diagnostic of GLOFs. Rigorous tests and validation against known cases from roughly 10% of the Himalayas suggested that the algorithm is robust against frequent image noise and hence capable of identifying previously unknown GLOFs. Extending the search to the entire Himalayan mountain range revealed 22 newly detected GLOFs. I thus more than doubled the existing GLOF count of 16 cases known since 1988, and found a dominant cluster of GLOFs in the Central and Eastern Himalayas (Bhutan and Eastern Nepal), whereas the ranges in the North were affected more rarely. Yet the total of 38 GLOFs showed no change in annual frequency, so the activity of GLOFs per unit glacial-lake area has decreased over the past 30 years. I discussed possible drivers of this finding, but left further attribution to distinct GLOF-triggering mechanisms open for future research.
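The abstract gives no formulas for the detection step; purely as a hypothetical illustration of the idea, a per-pixel score for a sudden, persistent water-to-land transition in a stack of annual water masks could look like this (function name and thresholds are assumptions, not the thesis's method):

```python
import numpy as np

def shrinkage_likelihood(water_masks):
    """Score each pixel for a sudden, persistent water-to-land transition.

    water_masks: (T, H, W) boolean stack of annual water classifications,
    e.g. derived from Landsat land-cover maps (T ~ 30 years).
    A GLOF candidate pixel is water for several years, then land afterwards.
    """
    t = water_masks.shape[0]
    best = np.zeros(water_masks.shape[1:])
    for split in range(3, t - 3):
        # Fraction of 'water' years before the split minus the fraction
        # after it: ~1 means permanently drained, ~0 means no change.
        before = water_masks[:split].mean(axis=0)
        after = water_masks[split:].mean(axis=0)
        best = np.maximum(best, before - after)
    return best  # high values flag rapidly shrinking water bodies
```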
This updated GLOF frequency was the key input for assessing GLOF hazard for the entire Himalayan mountain belt and several subregions. Following standard definitions in flood hydrology, I describe hazard as the annual probability that a given flood peak discharge [m³ s⁻¹] at the breach location is reached or exceeded. I coupled the empirical frequency of GLOFs per region with simulations of physically plausible peak discharges from all ~5,000 existing lakes in the Himalayas. Using an extreme-value model, I could hence calculate flood return periods. I found that the contemporary 100-year GLOF discharge (the flood level that is reached or exceeded on average once in 100 years) is 20,600 +2,200/−2,300 m³ s⁻¹ for the entire Himalayas. Given the spatial and temporal distribution of historic GLOFs, contemporary GLOF hazard is highest in the Eastern Himalayas and lower in regions where GLOFs are rarer. I also calculated GLOF hazard for some 9,500 overdeepenings that could become exposed and fill with water if all Himalayan glaciers eventually melt. Assuming that the current GLOF rate remains unchanged, the 100-year GLOF discharge could double (41,700 +5,500/−4,700 m³ s⁻¹), while the regional GLOF hazard may increase most strongly in the Karakoram.
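The thesis's exact extreme-value model is not specified in the abstract; as a minimal sketch of the general technique, one can fit a GEV distribution to annual maxima of peak discharges with scipy and read the 100-year return level off the (1 − 1/100) quantile. The synthetic data here are purely illustrative:

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical annual maxima of simulated GLOF peak discharges [m^3 s^-1].
rng = np.random.default_rng(42)
annual_max_q = rng.gumbel(loc=8000, scale=3000, size=30)

# Fit a generalized extreme-value (GEV) distribution to the annual maxima.
shape, loc, scale = genextreme.fit(annual_max_q)

# The T-year return level is the (1 - 1/T) quantile of the fitted GEV:
# the discharge reached or exceeded on average once in T years.
q100 = genextreme.ppf(1 - 1 / 100, shape, loc, scale)
print(f"100-year discharge: {q100:,.0f} m^3 s^-1")
```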
To conclude, these three stages (GLOF detection, frequency analysis, and regional hazard estimation) provide a framework for modern GLOF hazard assessment. Given the rapidly growing population, infrastructure, and hydropower projects in the Himalayas, this thesis helps to quantify the purely climate-driven contribution to hazard and risk from GLOFs.
Laser-induced breakdown spectroscopy (LIBS) analysers are becoming increasingly common for material classification. However, to achieve good classification accuracy, mostly non-compact units are used, owing to their stability and reproducibility. In addition, computational algorithms that demand significant hardware resources are commonly applied. For measurement campaigns in hard-to-access environments, such as mining sites, there is a need for compact, portable, or even handheld devices capable of high measurement accuracy. The optics and hardware of small (i.e., handheld) devices are limited by space and power consumption and force a compromise in the achievable spectral quality. As long as the size of such a device is a major constraint, the software is the primary avenue for improvement. In this study, we propose a novel combination of handheld LIBS with non-negative tensor factorisation and investigate its ability to classify copper minerals. The proposed approach extracts source spectra for each mineral (using tensor methods) and labels them based on their percentage contribution within the dataset. These latent spectra are then used in a regression model for validation. This approach increases the classification score by approximately 5% compared with commonly used classifiers such as support vector machines, linear discriminant analysis, and the k-nearest neighbours algorithm.
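The study's actual pipeline and data are not reproduced here; the following is a hypothetical sketch of the factorisation step using tensorly's non_negative_parafac (numpy backend assumed, and the tensor layout of samples × shots × wavelengths is an assumption): decompose a LIBS tensor into non-negative components and read each component's percentage contribution per sample from the sample-mode factor, which is then used for labelling:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_parafac

# Hypothetical LIBS data: 60 samples x 20 laser shots x 2048 wavelength bins.
rng = np.random.default_rng(0)
spectra = tl.tensor(rng.random((60, 20, 2048)))

# Non-negative PARAFAC: each of `rank` components couples a sample factor,
# a shot factor, and a latent source spectrum over the wavelength mode.
rank = 4  # e.g. one component per expected copper mineral
weights, (samples, shots, sources) = non_negative_parafac(
    spectra, rank=rank, n_iter_max=200)

# Percentage contribution of each latent source spectrum to each sample,
# used to label components before feeding them into a regression model.
contrib = samples / samples.sum(axis=1, keepdims=True)
print(contrib[0])  # component shares for the first sample
```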