In a changing world facing numerous direct and indirect anthropogenic pressures, freshwater resources are endangered in both quantity and quality. An excessive supply of nutrients, for example, can cause disproportionate phytoplankton development and oxygen deficits in large rivers, leading to failure to meet the objectives of the Water Framework Directive (WFD). Such problems can be observed in many European river catchments, including the Elbe basin, and effective measures for improving water quality status are urgently needed.
In water resources management and protection, modelling tools can help to understand the dominant nutrient processes and to identify the main sources of nutrient pollution in a watershed. They can be effective instruments for impact assessments investigating the effects of changing climate or socio-economic conditions on the status of surface water bodies, and for testing the usefulness of possible protection measures. Due to the high number of interrelated processes, ecohydrological model approaches containing water quality components are more complex than purely hydrological ones, and their setup and calibration require more effort. Such models, including the Soil and Water Integrated Model (SWIM), still need further development and improvement.
Therefore, this cumulative dissertation focuses on two main objectives: 1) approach-related objectives, aimed at improving and further developing the SWIM model with regard to the description of nutrient (nitrogen and phosphorus) processes, and 2) application-related objectives, aimed at applications in the meso- to large-scale basins of the Elbe river to support adaptive river basin management in view of possible future changes. The dissertation is based on five scientific papers published in international journals that address these research questions.
Several adaptations were implemented in the model code to improve the representation of nutrient processes: a simple wetland approach, a soil nitrogen cycle extended by ammonium, and a detailed in-stream module simulating algal growth, nutrient transformation processes and oxygen conditions in the river reaches, driven mainly by water temperature and light. Although these new approaches created a highly complex ecohydrological model with a large number of additional calibration parameters and increased uncertainty, calibration and validation of the enhanced SWIM model in selected subcatchments and in the entire Elbe river basin delivered satisfactory to good results in terms of goodness-of-fit criteria. The calibrated and validated model thus provided a sound basis for assessing possible future changes and impacts of climate, land use and management in the Elbe river (sub)basin(s).
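The core of such an in-stream module is a biomass balance for algae whose growth rate is modulated by water temperature and light. A minimal sketch, assuming a Gaussian temperature-limitation term, a saturating light term and first-order respiration loss (all parameter values and functional shapes below are illustrative, not SWIM's actual formulation):

```python
import numpy as np

def algal_biomass(temp, light, a0=1.0, mu_max=1.5, k_resp=0.1,
                  t_opt=22.0, k_light=150.0, dt=1.0):
    """Toy in-stream algal biomass model: growth limited by water
    temperature and light, first-order respiration loss, explicit
    Euler integration. All parameter values are illustrative."""
    a = a0
    trajectory = []
    for t, l in zip(temp, light):
        f_temp = np.exp(-((t - t_opt) / 10.0) ** 2)  # Gaussian temperature limitation
        f_light = l / (l + k_light)                  # saturating light limitation
        a += dt * (mu_max * f_temp * f_light - k_resp) * a
        trajectory.append(a)
    return np.array(trajectory)

# ten warm, bright days vs. ten cold, dark days of synthetic forcing
summer = algal_biomass(temp=[22.0] * 10, light=[300.0] * 10)
winter = algal_biomass(temp=[5.0] * 10, light=[50.0] * 10)
print(summer[-1] > summer[0], winter[-1] < winter[0])
```

Under favourable forcing the biomass grows, while under cold, dark conditions respiration dominates and the biomass decays, which is the qualitative behaviour such a module has to reproduce.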
The enhanced modelling approach improved the applicability of the SWIM model to WFD-related research questions, for which the ability to consider biological water quality components (such as phytoplankton) is important. It also improved the model's ability to simulate the behaviour of nutrients originating mainly from point sources (e.g. phosphate phosphorus). Scenario results can be used by decision makers and stakeholders to identify and understand future challenges and possible adaptation measures in the Elbe river basin.
Systems biology aims to investigate biological systems in their entirety by gathering and analyzing large-scale data sets about the underlying components. Computational systems biology approaches use these large-scale data sets to create models at different scales and cellular levels. In addition, systems biology is concerned with generating and testing hypotheses about biological processes. However, such approaches inevitably lead to computational challenges due to the high dimensionality of the data and the differences in dimensionality between data from different cellular layers.
This thesis focuses on the investigation and development of computational approaches for analyzing metabolite profiles in the context of cellular networks, with the goal of determining which aspects of network functionality are reflected in metabolite levels. With these methods at hand, this thesis aims to answer three questions: (1) how the observability of biological systems is manifested in metabolite profiles, and whether it can be used for phenotypic comparisons; (2) how couplings of reaction rates can be identified from metabolic profiles alone; and (3) which regulatory mechanisms affecting metabolite levels can be distinguished by integrating transcriptomic and metabolomic read-outs.
I showed that sensor metabolites, identified by an approach from observability theory, are more strongly correlated with each other than non-sensors. The stronger correlations between sensor metabolites were detected both in publicly available metabolite profiles and in synthetic data simulated from a medium-scale kinetic model. Through robustness analysis, I demonstrated that the correlation was due to the position of the sensor metabolites in the network and persisted irrespective of the experimental conditions. Sensor metabolites are therefore potential candidates for phenotypic comparisons between conditions through targeted metabolic analysis.
Furthermore, I demonstrated that the coupling of metabolic reaction rates can be investigated from a purely data-driven perspective, assuming that metabolic reactions can be described by mass action kinetics. Employing metabolite profiles from domesticated and wild wheat and tomato species, I showed that the process of domestication is associated with a loss of regulatory control on the level of reaction rate coupling. I also found that the same metabolic pathways in Arabidopsis thaliana and Escherichia coli exhibit differences in the number of reaction rate couplings.
I designed a novel method for the identification and categorization of transcriptional effects on metabolism by combining data on gene expression and metabolite levels. The approach determines the partial correlation between metabolites while controlling for the principal components of the transcript levels. The principal components contain the majority of the transcriptomic information, making it possible to partial out the effect of the transcriptional layer from the metabolite profiles. Depending on whether the correlation between metabolites persists upon controlling for the effect of the transcriptional layer, the approach groups metabolite pairs into those associated due to post-transcriptional and those associated due to transcriptional regulation, respectively. I showed that this classification of metabolite pairs is in agreement with the existing literature and with findings from a Bayesian inference approach.
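The idea behind this partial-correlation step can be sketched as follows: remove the transcriptional signal, captured by the leading principal components of the transcript matrix, from a pair of metabolite profiles and check whether their correlation persists. All data below are synthetic and the variable names are illustrative; this is a sketch of the principle, not the thesis's actual pipeline.

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation of x and y after regressing out the control variables."""
    A = np.column_stack([np.ones(len(x)), controls])
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n_samples, n_genes = 200, 50
latent = rng.normal(size=n_samples)              # shared transcriptional driver
transcripts = (np.outer(latent, rng.normal(size=n_genes))
               + 0.3 * rng.normal(size=(n_samples, n_genes)))

# leading principal components via SVD of the centered transcript matrix
Tc = transcripts - transcripts.mean(axis=0)
pcs = np.linalg.svd(Tc, full_matrices=False)[0][:, :10]

# two metabolites driven by the same transcriptional signal
m1 = latent + 0.1 * rng.normal(size=n_samples)
m2 = latent + 0.1 * rng.normal(size=n_samples)

r_raw = np.corrcoef(m1, m2)[0, 1]     # strong marginal correlation
r_part = partial_corr(m1, m2, pcs)    # shrinks after controlling for PCs
print(round(r_raw, 2), round(abs(r_part), 2))
```

Because the pair's association is purely transcriptional, its correlation collapses once the principal components are partialled out; a pair associated through post-transcriptional regulation would retain its correlation.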
The approaches developed, implemented, and investigated in this thesis open novel ways to jointly study metabolomics and transcriptomics data as well as to place metabolic profiles in the network context. The results from these approaches have the potential to provide further insights into the regulatory machinery in a biological system.
Magnetotellurics (MT) is a geophysical method that images the electrical conductivity structure of the subsurface by recording time series of natural electromagnetic (EM) field variations. During data processing, these time series are divided into short segments; for each segment, spectral values are computed, which are typically averaged statistically to obtain MT transfer functions. Unfortunately, man-made EM noise sources often contaminate a significant portion of the recorded time series, resulting in distorted transfer functions. Many advanced processing techniques, e.g. robust statistics, pre-stack data selection or remote reference, have been developed to tackle this problem. The first two techniques reduce the amount of outliers and noise in the data, whereas the latter removes noise by using data from another MT station. However, especially in populated regions, data processing remains challenging even with these approaches. In this thesis, I present two novel pre-stack data confinement and selection criteria for the detection of outliers and noise-affected data, based on (i) a distance measure of each data segment with regard to the entire sample distribution and (ii) the evaluation of the magnetic polarisation direction of all segments. The first criterion removes data points that scatter around the desired MT distribution and can, under some circumstances, even reject complete data clusters originating from noise sources. The second criterion eliminates data points caused by strongly polarised magnetic signals. Both criteria have been successfully applied to many stations with different types of noise contamination, showing that they can significantly improve the transfer function estimation. The novel criteria were used to evaluate an MT data set from the Eastern Karoo Basin in South Africa.
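One common way to realize a distance measure of each segment with regard to the sample distribution is the Mahalanobis distance. The sketch below flags segments far from the bulk of the distribution; the two-dimensional features, threshold and data are illustrative assumptions, not the thesis's actual algorithm.

```python
import numpy as np

def mahalanobis_flags(segments, threshold=3.0):
    """Flag data segments whose Mahalanobis distance from the sample
    distribution exceeds a threshold (sketch of a distance-based
    pre-stack selection criterion; the threshold is an assumption)."""
    mean = segments.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(segments, rowvar=False))
    diffs = segments - mean
    # squared Mahalanobis distance of every segment at once
    d2 = np.einsum('ij,jk,ik->i', diffs, inv_cov, diffs)
    return np.sqrt(d2) > threshold

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 2))  # segments following the MT distribution
noisy = rng.normal(8.0, 0.5, size=(20, 2))   # a cluster of disturbed segments
flags = mahalanobis_flags(np.vstack([clean, noisy]), threshold=3.0)
print(flags[-20:].all(), flags[:500].mean() < 0.1)
```

Because the covariance is estimated from the full sample, a compact noise cluster well away from the MT distribution is rejected as a whole, while only a small fraction of the clean segments is discarded.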
The corresponding field experiment is part of an extensive research programme to collect baseline information on, for example, the geological setting of this region prior to a potential shale gas exploitation. The aim was to investigate whether a three-dimensional (3D) inversion of the newly measured data enables a more realistic mapping of the physical properties of the target horizon. For this purpose, a comprehensive 3D model was derived using all available data. In a second step, I analysed parameters of the target horizon, e.g. its conductivity, that serve as proxies for physical properties such as thermal maturity and porosity.
Over the past few decades, gamma-ray astronomy has provided unique insights into cosmic-ray accelerators. By combining information at the highest photon energies with the entire electromagnetic spectrum in multi-wavelength studies, detailed knowledge of non-thermal particle populations in astronomical objects and systems has been gained: many classes of gamma-ray sources have been identified both inside and outside our galaxy. Different sources were found to exhibit a wide range of temporal behaviour, from variability on time scales of seconds to stability over many years of observations. With the dawn of both neutrino and gravitational-wave astronomy, additional messengers have come into play in recent years. This development marks the advent of multi-messenger astronomy: a novel approach not only to the search for sources of cosmic rays, but to astronomy in general.
In this thesis, both traditional multi-wavelength studies and multi-messenger studies are presented. They were carried out with the H.E.S.S. experiment, an imaging atmospheric Cherenkov telescope array located in the Khomas Highland of Namibia. H.E.S.S. entered its second phase in 2012 with the addition of a large, fifth telescope. While the initial array was limited to the study of gamma rays with energies above 100 GeV, the new instrument provides access to gamma rays with energies down to a few tens of GeV. The strengths of the multi-wavelength approach are demonstrated using the example of the galaxy NGC 253, which is undergoing an episode of enhanced star formation. Its gamma-ray emission is discussed in light of all the information on this system available from radio, infrared and X-ray observations. These wavelengths reveal detailed information on the population of supernova remnants, which are suspected cosmic-ray accelerators. A broad-band gamma-ray spectrum is derived from H.E.S.S. and Fermi-LAT data. The improved analysis of the H.E.S.S. data provides a measurement that is no longer dominated by systematic uncertainties. Finally, the long-term behaviour of cosmic rays in the starburst galaxy NGC 253 is characterised.
In contrast to the long time-scale evolution of a starburst galaxy, multi-messenger studies are especially intriguing when shorter time scales are probed. A prime example of a short time-scale transient is the Gamma-Ray Burst; the effort to understand this phenomenon effectively founded the branch of gamma-ray astronomy. The multi-messenger approach allows elusive phenomena such as Gamma-Ray Bursts and other transients to be studied contemporaneously with electromagnetic radiation, neutrinos, cosmic rays and gravitational waves. Although contemporaneous observations have only recently gained in importance, the execution of such observation campaigns still presents a major challenge due to the different limitations and strengths of the participating infrastructures.
Over the course of this thesis, an alert system for transient phenomena has been developed for H.E.S.S. It addresses many follow-up challenges in order to maximise the science return of the new large telescope, which can repoint much faster than the initial four telescopes. The system enables fully automated observations based on scientific alerts from any wavelength or messenger and allows H.E.S.S. to participate in multi-messenger campaigns. Utilising this new system, many interesting multi-messenger observation campaigns have been performed. Several highlight observations with H.E.S.S. are analysed, presented and discussed in this work. Among them are observations of Gamma-Ray Bursts with low latency and low energy threshold, the follow-up of a neutrino candidate in spatial coincidence with a flaring active galactic nucleus, and the follow-up of a merger of two neutron stars, which was revealed by the coincidence of gravitational waves and a Gamma-Ray Burst.
Modern gamma-ray telescopes provide the main stream of data for astrophysicists in their quest to detect sources of gamma rays such as active galactic nuclei (AGN). Many blazars have been detected as sources of gamma rays with energies E ≥ 100 GeV by telescopes such as H.E.S.S., VERITAS, MAGIC and the Fermi satellite. These very-high-energy photons interact with the extragalactic background light (EBL), producing ultra-relativistic electron-positron pairs. Observations with Fermi-LAT indicate that the GeV gamma-ray flux from some blazars is lower than that predicted from the full electromagnetic cascade. The pairs can induce electrostatic and electromagnetic instabilities, in which case wave-particle interactions can reduce the energy of the pairs. Collective plasma effects can therefore substantially suppress the GeV-band gamma-ray emission, also affecting constraints on the intergalactic magnetic field (IGMF). Using particle-in-cell (PIC) simulations, we have revisited the issue of plasma instabilities induced by electron-positron beams in the fully ionized intergalactic medium, a problem relevant to pair beams produced by the TeV radiation of blazars. The main objective of our study is to clarify the feedback of the beam-driven instabilities on the pairs. The present dissertation provides new results regarding the plasma instabilities of blazar-induced pair beams interacting with the intergalactic medium. This clarifies the relevance of plasma instabilities and improves our understanding of blazars.
The purpose of Probabilistic Seismic Hazard Assessment (PSHA) at a construction site is to provide engineers with a probabilistic estimate of the ground-motion level that could be equaled or exceeded at least once in the structure's design lifetime. Confidence in the predicted ground motion allows engineers to optimize structural design and mitigate the risk of extensive damage or, in the worst case, collapse. It is therefore in the interest of engineering, insurance, disaster mitigation, and the security of society at large to reduce uncertainties in the prediction of design ground-motion levels.
In this study, I am concerned with quantifying and reducing the prediction uncertainty of regression-based Ground-Motion Prediction Equations (GMPEs). Essentially, GMPEs are regressed best-fit formulae relating event, path, and site parameters (predictor variables) to observed ground-motion values at the site (prediction variable). GMPEs are characterized by a parametric median (μ) and a non-parametric variance (σ) of prediction. μ captures the known ground-motion physics, i.e., scaling with earthquake rupture properties (event), attenuation with distance from the source (region/path), and amplification due to local soil conditions (site), while σ quantifies the natural variability of the data that eludes μ. In a broad sense, the GMPE prediction uncertainty combines 1) the uncertainty of the estimated regression coefficients (the uncertainty of μ, σ_μ) and 2) the inherent natural randomness of the data (σ). The extent of μ parametrization, together with the quantity and quality of the ground-motion data used in a regression, governs the size of its prediction uncertainty: σ_μ and σ.
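In its simplest form, such a regression can be sketched as an ordinary least-squares fit of a parametric μ to observations, with σ estimated from the residuals. The functional form, coefficients and synthetic data below are illustrative assumptions, not the GMPEs developed in this study:

```python
import numpy as np

# Illustrative GMPE-style regression on synthetic data:
#   ln(PGA) = c0 + c1*M + c2*ln(R) + eps,   eps ~ N(0, sigma)
# M: magnitude (event term), R: source distance (path term); site terms omitted.
rng = np.random.default_rng(42)
n = 1000
mag = rng.uniform(4.0, 7.5, n)
dist = rng.uniform(10.0, 200.0, n)
true_c = np.array([-2.0, 1.2, -1.6])
sigma_true = 0.6                          # natural variability of the data
ln_pga = (true_c[0] + true_c[1] * mag + true_c[2] * np.log(dist)
          + rng.normal(0.0, sigma_true, n))

# least-squares estimate of the median model mu and of sigma
X = np.column_stack([np.ones(n), mag, np.log(dist)])
coef, *_ = np.linalg.lstsq(X, ln_pga, rcond=None)
sigma = (ln_pga - X @ coef).std(ddof=X.shape[1])
print(np.round(coef, 2), round(float(sigma), 2))
```

With ample, clean data the recovered coefficients and σ are close to the generating values; the uncertainty of the coefficients (σ_μ) grows as the data shrink or as more coefficients are added, which is exactly the trade-off discussed next.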
In the first step, I present the impact of μ parametrization on the size of σ_μ and σ. Over-parametrization appears to increase σ_μ, because a large number of regression coefficients (in μ) must be estimated from insufficient data. Under-parametrization mitigates σ_μ, but the reduced explanatory strength of μ is reflected in an inflated σ. For an optimally parametrized GMPE, a ~10% reduction in σ is attained by discarding low-quality data from pan-European events with incorrect parametric values (of predictor variables).
For regions with scarce ground-motion recordings, the only way to mitigate σ_μ without under-parametrization is to substitute long-term earthquake data at a single location with short-term samples of data across several locations, known as the ergodic assumption. However, the price of the ergodic assumption is an increased σ, owing to region-to-region and site-to-site differences in ground-motion physics. The σ of an ergodic GMPE developed from a generic ergodic dataset is much larger than that of non-ergodic GMPEs developed from region- and site-specific non-ergodic subsets, which on their own were too sparse to support dedicated GMPEs. Fortunately, with the dramatic increase in recorded ground-motion data at several sites across Europe and the Middle East, I could quantify the region- and site-specific differences in ground-motion scaling and upgrade the GMPEs with 1) substantially more accurate region- and site-specific μ for sites in Italy and Turkey, and 2) significantly smaller prediction variance σ. The benefit of such enhancements is evident in my comparison of PSHA estimates from ergodic versus region- and site-specific GMPEs, where the differences in predicted design ground-motion levels at several sites in Europe and the Middle East are as large as ~50%.
Resolving the ergodic assumption with mixed-effects regressions is feasible when the quantified region- and site-specific effects are physically meaningful and the non-ergodic subsets (regions and sites) are defined a priori through expert knowledge. In the absence of expert definitions, I demonstrate the potential of machine learning techniques for identifying efficient clusters of site-specific non-ergodic subsets based on latent similarities in their ground-motion data. Clustered site-specific GMPEs bridge the gap between site-specific and fully ergodic GMPEs: their μ is partially non-ergodic, and their σ is ~15% smaller than the ergodic variance.
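The clustering step can be sketched with a tiny k-means on one-dimensional site terms (average site residuals). The data, the choice of feature and the number of clusters below are illustrative assumptions; the actual technique and features used in the study may differ.

```python
import numpy as np

def kmeans_1d(x, k, iters=50):
    """Minimal k-means for 1-D features, initialised at data quantiles."""
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # assign each site to its nearest center, then update the centers
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return labels, centers

# synthetic site terms: two latent groups of sites with systematically
# negative (rock-like) and positive (soft-soil-like) average residuals
rng = np.random.default_rng(3)
site_terms = np.concatenate([rng.normal(-0.5, 0.1, 30),
                             rng.normal(0.5, 0.1, 30)])
labels, centers = kmeans_1d(site_terms, k=2)
print(np.sort(centers).round(1))
```

Sites in the same cluster share a GMPE with a cluster-specific term, so each cluster pools more data than a single site while remaining far less heterogeneous than the full ergodic dataset.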
The methodological refinements to GMPE development produced in this study are applicable to new ground-motion datasets, further enhancing the certainty of ground-motion prediction and, thereby, of seismic hazard assessment. Advanced statistical tools show great potential for improving the predictive capabilities of GMPEs, but the fundamental requirement remains: a large quantity of high-quality ground-motion data from several sites over an extended time period.