The scientific drilling campaign PALEOVAN was conducted in the summer of 2010 as part of the International Continental Scientific Drilling Program (ICDP). The main goal of the campaign was the recovery of a sensitive climate archive in eastern Anatolia: the lacustrine deposits underneath the floor of Lake Van. The core material was recovered from two locations, the Ahlat Ridge and the Northern Basin. A composite core, constructed from the material of seven parallel boreholes at the Ahlat Ridge, covers an almost complete lacustrine history of Lake Van. The composite record provided sensitive climate proxies such as variations in total organic carbon, K/Ca ratios, and the relative abundance of arboreal pollen. These proxies revealed patterns similar to climate proxy variations from Greenland ice cores. The Greenland ice-core records have been dated by modelling the timing of orbital forcing on climate; volatiles from melted ice aliquots are often taken as high-resolution proxies and provide the basis for fitting the corresponding temporal models.
The ICDP PALEOVAN scientific team fitted proxy data from the lacustrine drilling record to the ice-core data and constructed an age model. Embedded volcaniclastic layers had to be dated radiometrically in order to provide independent age constraints for this climate-stratigraphic age model. Solving this task by applying the 40Ar/39Ar method was the main objective of this thesis. Earlier efforts to apply 40Ar/39Ar dating had resulted in inaccuracies that could not be explained satisfactorily.
The absence of K-rich feldspars in suitable tephra layers implied that feldspar crystals needed to be at least 500 μm in size for single-crystal 40Ar/39Ar dating. Some of the samples contained no crystals of that size, or only very few. To overcome this problem, this study applied a combined single-crystal and multi-crystal approach using different crystal fractions from the same sample. The preferred method, stepwise heating analysis of an aliquot of feldspar crystals, was applied to three samples. The Na-rich crystals and their young geological age required 20 mg of inclusion-free, non-corroded feldspars. Small sample volumes (usually 25 % aliquots of 5 cm3 of sample material, a spoonful of tephra) and the widespread presence of melt inclusions led to the application of combined single- and multigrain total fusion analyses. 40Ar/39Ar analyses on single crystals have the advantage that the presence of excess 40Ar and of detrital or xenocrystic contamination can be monitored; multigrain analyses may hide these effects. The results of the multigrain analyses are therefore discussed with respect to the findings from the respective cogenetic single-crystal ages. Some of the samples in this study were dated by 40Ar/39Ar on multigrain feldspar separates, combined (where available) with a few single crystals. The 40Ar/39Ar ages of two samples deviated statistically from the age model; all other samples yielded ages consistent with it. The deviating ages were older than those obtained from the age model. t-Tests compared the radiometric ages with the available age control points from various proxies and from the relative paleointensity of the Earth's magnetic field within a stratigraphic range of ± 10 m. Concordant age control points from different relative chronometers indicated that the deviations result from erroneous 40Ar/39Ar ages.
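The statistical comparison described above can be sketched as a two-sample Welch t-test between replicate radiometric ages and nearby age-model tie points. The test itself is standard; the ages below are invented for illustration and are not the thesis's data:

```python
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two samples with unequal variances."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical ages in ka (illustrative only): replicate 40Ar/39Ar ages of
# one sample vs. climate-stratigraphic control points within +/- 10 m.
ar_ages = [101.2, 102.5, 100.8]
control = [98.0, 98.9, 97.5]

t = welch_t(ar_ages, control)
# A large positive t flags a radiometric age significantly older than the
# age model, as was observed for two of the PALEOVAN samples.
```

Comparing |t| against the critical value for the Welch-Satterthwaite degrees of freedom then decides whether a sample deviates significantly.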
The thesis discusses two potential causes of these erroneous ages: (1) the irregular occurrence of excess 40Ar from rare melt and fluid inclusions, and (2) the contamination of the samples with older crystals due to a rapid succession of assimilation and ejection.
Another aliquot of feldspar crystals that underwent separation for 40Ar/39Ar dating was investigated for geochemical inhomogeneities. Magmatic zoning is ubiquitous in the volcaniclastic feldspar crystals; four different types were detected: compositional zoning (C-type), pseudo-oscillatory zoning of trace element concentrations (PO-type), chaotic and patchy zoning of major and trace element concentrations (R-type), and concentric zoning of trace elements (CC-type). Samples with deviating 40Ar/39Ar ages showed C-type zoning, R-type zoning, or a mix of different zoning types (C-type and PO-type). Feldspars showing PO-type zoning typically represent the smallest grain-size fractions in the samples. The constant major element compositions of these crystals are interpreted to represent the latest stages in the compositional evolution of feldspars in a peralkaline melt. PO-type crystals contain fewer melt inclusions than the other zoning types and are rarely corroded. This thesis concludes that feldspars showing PO-type zoning are the most promising chronometers for the 40Ar/39Ar method if samples provide mixed zoning types of Quaternary anorthoclase feldspars.
Five samples were dated by applying the 40Ar/39Ar method to volcanic glass. High fractions of atmospheric Ar (typically > 98 %) significantly hampered the precision of the 40Ar/39Ar ages and resulted in rough age estimates that widely overlap the age model. The Ar isotopes indicated that the glasses contained a chlorine-rich Ar end-member. The chlorine-derived 38Ar points to chlorine-rich fluid inclusions or to hydration of the volcanic glass shards. This strengthened the evidence that irregularly distributed melt inclusions, and thus irregularly distributed excess 40Ar, influenced the problematic feldspar 40Ar/39Ar ages. Whether a connection exists between the corrected initial 40Ar/36Ar ratio of the glasses and the 40Ar/36Ar ratios of the pore waters remains unclear.
This thesis offers an alternative age model, similarly based on the interpolation of temporal tie points from geophysical and climate-stratigraphic data. The new model used a PCHIP interpolation (piecewise cubic Hermite interpolating polynomial), whereas the older age model used a spline interpolation. Samples whose feldspar 40Ar/39Ar ages match the earlier published age model were additionally assigned an age from the PCHIP interpolation. These modelled ages allowed a recalculation of the Alder Creek sanidine mineral standard. The climate-stratigraphic calibration of a 40Ar/39Ar mineral standard demonstrated that the age-versus-depth interpolations from the PALEOVAN drilling cores are accurate and that the applied chronometers recorded the temporal evolution of Lake Van synchronously.
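The practical difference between the two interpolants is that a cubic spline can overshoot between tie points, while PCHIP preserves the monotonicity that an age-depth relation must obey. A minimal sketch of a monotone (Fritsch-Carlson) PCHIP, with invented depth-age tie points:

```python
def pchip_slopes(x, y):
    """Monotonicity-preserving node slopes (Fritsch-Carlson scheme)."""
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    d = [(y[i + 1] - y[i]) / h[i] for i in range(n - 1)]  # secant slopes
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]            # simple one-sided end conditions
    for i in range(1, n - 1):
        if d[i - 1] * d[i] <= 0:
            m[i] = 0.0                   # local extremum: flat slope
        else:                            # weighted harmonic mean of secants
            w1, w2 = 2 * h[i] + h[i - 1], h[i] + 2 * h[i - 1]
            m[i] = (w1 + w2) / (w1 / d[i - 1] + w2 / d[i])
    return m

def pchip_eval(x, y, m, xq):
    """Evaluate the cubic Hermite interpolant at a single query point."""
    i = max(j for j in range(len(x) - 1) if x[j] <= xq)
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[i] + h10 * h * m[i] + h01 * y[i + 1] + h11 * h * m[i + 1]

# Hypothetical depth (m) -> age (ka) tie points (illustrative values only).
depth = [0.0, 5.0, 10.0, 20.0, 30.0]
age = [0.0, 10.0, 12.0, 40.0, 90.0]
m = pchip_slopes(depth, age)
age_at_7m = pchip_eval(depth, age, m, 7.0)
```

At 7 m depth the interpolated age stays between the bracketing tie-point ages of 10 and 12 ka, which a standard cubic spline need not guarantee.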
A petrochemical discrimination of the sampled volcaniclastic material is also given in this thesis. 41 of the 57 sampled volcaniclastic layers indicate Nemrut as their provenance. The criteria used for the provenance assignment are provided and reviewed critically. Detailed correlations of selected PALEOVAN volcaniclastics with onshore samples described in detail by earlier studies are also discussed. The sampled volcaniclastics are mostly < 40 cm thick and were ejected by small- to medium-sized eruptions. Onshore deposits from such eruptions are potentially eroded owing to the predominant strong winds on the Nemrut and Süphan slopes; an exact correlation with the data presented here is therefore equivocal or not possible at all.
Deviating feldspar 40Ar/39Ar ages could possibly be explained by inherited 40Ar from feldspar xenocrysts contaminating the samples. To test this hypothesis, Ba diffusion couples were investigated in compositionally zoned feldspar crystals. The diffusive behaviour of Ba in feldspar is known, and the concentration gradients allowed the duration of a crystal's magmatic development since the formation of the zoning interface to be calculated. These durations were compared with degassing scenarios that model the Ar loss during assimilation and subsequent ejection of the xenocrysts. Diffusive equilibration of the contrasting Ba concentrations is assumed to yield maximum durations, as the gradient could have developed over several growth and heating stages. The modelling shows no indication of an involvement of inherited 40Ar in any of the deviating samples. However, the analytical set-up represents the lower limit of the required spatial resolution; it therefore cannot be excluded that the degassing modelling relies on a significant overestimation of the maximum duration of the magmatic history. Nevertheless, the modelling of xenocryst degassing indicates that the irregular incorporation of excess 40Ar by melt and fluid inclusions is the most critical problem to be overcome in dating volcaniclastic feldspars from the PALEOVAN drill cores. This thesis provides the complete background for generating and presenting 40Ar/39Ar ages that are compared with age data from a climate-stratigraphic model. Deviations are identified statistically and then discussed in order to find explanations in the age model and/or in the 40Ar/39Ar geochronology. Most of the PALEOVAN stratigraphy is covered by several chronometers whose synchronicity has been demonstrated. The lacustrine deposits of Lake Van represent a key archive for reconstructing the climate evolution of the eastern Mediterranean and the Near East.
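The order of magnitude of such duration estimates follows from the characteristic diffusion time t ≈ w²/(4D), where w is the measured width of a Ba gradient and the diffusivity D follows an Arrhenius law. The D0, Ea, gradient width, and temperature below are illustrative placeholders, not the values measured in the thesis:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_diffusivity(d0, ea, temp_k):
    """Arrhenius law D = D0 * exp(-Ea / (R * T)), in m^2/s."""
    return d0 * math.exp(-ea / (R * temp_k))

def max_duration_years(width_m, diffusivity):
    """Characteristic time t ~ w^2 / (4 D) to relax a concentration step
    over width w; an upper bound if several growth stages contributed."""
    seconds = width_m ** 2 / (4.0 * diffusivity)
    return seconds / (365.25 * 24 * 3600)

# Illustrative numbers: a 10 um wide Ba gradient annealed at 1100 K.
D = arrhenius_diffusivity(d0=1.0e-6, ea=250e3, temp_k=1100.0)
t_years = max_duration_years(10e-6, D)
```

Because t scales with w², halving the resolvable gradient width shortens the inferred maximum duration fourfold, which is why the spatial resolution of the analytical set-up matters for the degassing comparison.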
The PALEOVAN record offers a climate-stratigraphic age model with remarkable accuracy and resolution.
The aim of this doctoral thesis was to establish a technique for the analysis of biomolecules with infrared matrix-assisted laser desorption/ionization (IR-MALDI) ion mobility (IM) spectrometry. The main components of the work were the characterization of the IR-MALDI process, the development and characterization of different ion mobility spectrometers, the use of IR-MALDI-IM spectrometry as a robust standalone analytical method, and the development of a collision cross-section estimation approach for peptides based on molecular dynamics and thermodynamic reweighting.
First, the IR-MALDI source was studied with atmospheric pressure ion mobility spectrometry and shadowgraphy. It consisted of a metal capillary, at the tip of which a self-renewing droplet of analyte solution was met by an IR laser beam. A relationship between peak shape, ion desolvation, diffusion, and extraction pulse delay time (pulse delay) was established. First-order desolvation kinetics were observed and related to peak broadening by diffusion, both influenced by the pulse delay. The transport mechanisms in IR-MALDI were then studied by relating different laser impact positions on the droplet surface to the corresponding ion mobility spectra. Two transport mechanisms were identified: phase explosion due to the laser pulse and electrical transport due to delayed ion extraction. The velocity of the ions stemming from the phase explosion was then measured by ion mobility and shadowgraphy at different time scales and distances from the source capillary, showing an initially very high but rapidly decaying velocity. Finally, the anatomy of the dispersion plume was observed in detail with shadowgraphy and general conclusions about the process were drawn.
Understanding the IR-MALDI process enabled the optimization of the different IM spectrometers at atmospheric and reduced pressure (AP and RP, respectively). At reduced pressure, both an AP and an RP IR-MALDI source were used. The influence of the pulsed ion extraction parameters (pulse delay, width and amplitude) on peak shape, resolution and area was systematically studied in both AP and RP IM spectrometers and discussed in the context of the IR-MALDI process. Under RP conditions, the influence of the closing field and of the pressure was also examined for both AP and RP sources. For the AP ionization RP IM spectrometer, the influence of the inlet field (IF) in the source region was also examined. All of these studies led to the determination of the optimal analytical parameters as well as to a better understanding of the initial ion cloud anatomy.
The analytical performance of the spectrometer was then studied. Limits of detection (LOD) and linear ranges were determined under static and pulsed ion injection conditions and interpreted in the context of the IR-MALDI mechanism. Applications in the separation of simple mixtures were also illustrated, demonstrating good isomer separation capabilities and the advantages of singly charged peaks. The possibility to couple high performance liquid chromatography (HPLC) to IR-MALDI-IM spectrometry was also demonstrated. Finally, the reduced pressure spectrometer was used to study the effect of high reduced field strength on the mobility of polyatomic ions in polyatomic gases.
The last focus point was the study of peptide ions. A dataset obtained with electrospray IM spectrometry was characterized and used for the calibration of a collision cross-section (CCS) determination method based on molecular dynamics (MD) simulations at high temperature. Instead of producing candidate structures which are evaluated one by one, this semi-automated method uses the simulation as a whole to determine a single average collision cross-section value by reweighting the CCS of a few representative structures. The method was compared to the intrinsic size parameter (ISP) method and to experimental results. Additional data obtained from the MD simulations were also used to further analyze the peptides and understand the experimental results, an advantage over the ISP method. Finally, the CCS of peptide ions analyzed by IR-MALDI were also evaluated with both the ISP and MD methods and the results compared to experiment, providing a first validation of the MD method. Thus, this thesis brings together the soft ionization technique IR-MALDI, which produces mostly singly charged peaks, ion mobility spectrometry, which can distinguish between isomers, and a collision cross-section determination method which also provides structural information on the analyte at hand.
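The reweighting step can be illustrated as a Boltzmann-weighted average of the CCS values of a few representative structures. The structures, relative energies, CCS values, and temperature below are invented for illustration and do not come from the thesis:

```python
import math

R_KJ = 8.314e-3  # gas constant in kJ/(mol K)

def reweighted_ccs(energies_kj, ccs_a2, temp_k=300.0):
    """Single average CCS from representative structures, weighted by
    Boltzmann factors of their relative energies."""
    e_min = min(energies_kj)
    weights = [math.exp(-(e - e_min) / (R_KJ * temp_k)) for e in energies_kj]
    z = sum(weights)
    return sum(w * c for w, c in zip(weights, ccs_a2)) / z

# Hypothetical representative clusters from a high-temperature MD run:
energies = [0.0, 2.5, 8.0]    # relative energies, kJ/mol
ccs = [240.0, 252.0, 265.0]   # per-structure CCS, A^2

avg_ccs = reweighted_ccs(energies, ccs)
# The low-energy structure dominates, pulling the average toward 240 A^2.
```

The design choice here mirrors the abstract: rather than ranking candidate structures one by one, the weighted sum collapses the whole ensemble into a single comparable CCS value.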
Spatio-temporal data denotes a category of data that contains spatial as well as temporal components. For example, time-series of geo-data, thematic maps that change over time, or tracking data of moving entities can be interpreted as spatio-temporal data.
In today's automated world, an increasing number of data sources exist, which constantly generate spatio-temporal data. This includes for example traffic surveillance systems, which gather movement data about human or vehicle movements, remote-sensing systems, which frequently scan our surroundings and produce digital representations of cities and landscapes, as well as sensor networks in different domains, such as logistics, animal behavior study, or climate research.
For the analysis of spatio-temporal data, in addition to automatic statistical and data mining methods, exploratory analysis methods are employed, which are based on interactive visualization. These analysis methods let users explore a data set by interactively manipulating a visualization, thereby employing the human cognitive system and knowledge of the users to find patterns and gain insight into the data.
This thesis describes a software framework for the visualization of spatio-temporal data, which consists of GPU-based techniques to enable the interactive visualization and exploration of large spatio-temporal data sets. The developed techniques cover data management, processing, and rendering, facilitating real-time processing and visualization of large geo-temporal data sets. The thesis makes three main contributions:
- Concept and Implementation of a GPU-Based Visualization Pipeline.
The developed visualization methods are based on the concept of a GPU-based visualization pipeline, in which all steps -- processing, mapping, and rendering -- are implemented on the GPU. With this concept, spatio-temporal data is represented directly in GPU memory, using shader programs to process and filter the data, apply mappings to visual properties, and finally generate the geometric representations for a visualization during the rendering process. Data processing, filtering, and mapping are thereby executed in real-time, enabling dynamic control over the mapping and a visualization process which can be controlled interactively by a user.
- Attributed 3D Trajectory Visualization.
A visualization method has been developed for the interactive exploration of large numbers of 3D movement trajectories. The trajectories are visualized in a virtual geographic environment, supporting basic geometries such as lines, ribbons, spheres, or tubes. Interactive mapping can be applied to visualize the values of per-node or per-trajectory attributes, supporting shape, height, size, color, texturing, and animation as visual properties. Using the dynamic mapping system, several kinds of visualization methods have been implemented, such as focus+context visualization of trajectories using interactive density maps, and space-time cube visualization to focus on the temporal aspects of individual movements.
- Geographic Network Visualization.
A method for the interactive exploration of geo-referenced networks has been developed, which enables the visualization of large numbers of nodes and edges in a geographic context. Several geographic environments are supported, such as a 3D globe, as well as 2D maps using different map projections, to enable the analysis of networks in different contexts and at different scales. Interactive filtering, mapping, and selection can be applied to analyze these geographic networks, and visualization methods for specific types of networks, such as coupled 3D networks or temporal networks, have been implemented.
As a demonstration of the developed visualization concepts, interactive visualization tools for two distinct use cases have been developed. The first covers the visualization of attributed 3D movement trajectories of airplanes around an airport. It allows users to explore and analyze the trajectories of approaching and departing aircraft, recorded over the period of a month. By applying the interactive methods for trajectory visualization and interactive density maps, analysts can derive insight from the data, such as common flight paths, regular and irregular patterns, or uncommon incidents such as missed approaches at the airport.
The second use case involves the visualization of climate networks, which are geographic networks from the climate research domain. They represent the dynamics of the climate system using a network structure that expresses statistical interrelationships between different regions. The interactive tool allows climate analysts to explore these large networks, analyzing a network's structure and relating it to the geographic background. Interactive filtering and selection enable them to find patterns in the climate data and to identify, for example, clusters or flow patterns in the networks.
In recent years, the ever-growing number of documents on the Web, as well as in closed systems for private or business contexts, has led to a considerable increase in valuable textual information about topics, events, and entities. It is a truism that the majority of information (i.e., business-relevant data) is only available in unstructured textual form. The text mining research field comprises various practice areas that share the common goal of harvesting high-quality information from textual data. This information helps address users' information needs.
In this thesis, we utilize the knowledge represented in user-generated content (UGC) originating from various social media services to improve text mining results. These social media platforms provide a plethora of information with varying focuses. In many cases, an essential feature of such platforms is to share relevant content with a peer group. Thus, the data exchanged in these communities tend to be focused on the interests of the user base. The popularity of social media services is growing continuously and the inherent knowledge is available to be utilized. We show that this knowledge can be used for three different tasks.
Initially, we demonstrate that when searching for persons with ambiguous names, information from Wikipedia can be bootstrapped to group web search results according to the individuals occurring in the documents. We introduce two models and different means to handle persons missing from the UGC source. We show that the proposed approaches outperform traditional algorithms for search result clustering. Secondly, we discuss how the categorization of texts according to continuously changing community-generated folksonomies helps users identify new information related to their interests. We specifically target temporal changes in the UGC and show how they influence the quality of different tag recommendation approaches. Finally, we introduce an algorithm to tackle the entity linking problem, a necessity for harvesting entity knowledge from large text collections. The goal is the linkage of mentions within the documents to their real-world entities. A major focus lies on the efficient derivation of coherent links.
For each of the contributions, we provide a wide range of experiments on various text corpora as well as different sources of UGC.
The evaluation demonstrates the added value these sources provide and confirms the appropriateness of leveraging user-generated content to serve different information needs.
In this thesis we provide a construction of the operator framework starting from the functional formulation of group field theory (GFT). We define operator algebras on Hilbert spaces whose expectation values in specific states provide the correlation functions of the functional formulation. Our construction allows us to give a direct relation between the ingredients of the functional GFT and its operator formulation in a perturbative regime. Using this construction, we provide an example of GFT states that cannot be formulated as states in a Fock space and that lead to mathematically inequivalent representations of the operator algebra. We show that such inequivalent representations can be grouped together by their symmetry properties and sometimes break the left translation symmetry of the GFT action. We interpret these groups of inequivalent representations as phases of GFT, similar to the classification of phases in QFTs on space-time.
In this thesis, we treat the extreme Newman-Penrose components of both the Maxwell field (s=±1) and of linearized gravitational perturbations ("linearized gravity" for short, s=±2) in the exterior of a slowly rotating Kerr black hole. Upon different rescalings, we obtain spin s components which satisfy the separable Teukolsky master equation (TME). For each of these spin s components, defined in the Kinnersley tetrad, the equations resulting from applying a certain first-order differential operator once and twice (twice only for s=±2), together with the TME, take the form of an "inhomogeneous spin-weighted wave equation" (ISWWE) with different potentials and constitute a linear spin-weighted wave system. We then prove energy and integrated local energy decay (Morawetz) estimates for this type of ISWWE, and utilize them to obtain both a uniform bound on a positive definite energy and a Morawetz estimate for the regular extreme Newman-Penrose components defined in the regular Hawking-Hartle tetrad.
We also present some brief discussions of mode stability for the TME in the case of real frequencies. This states that in a fixed subextremal Kerr spacetime, there are no nontrivial separated mode solutions to the TME which are purely ingoing at the horizon and purely outgoing at infinity. This yields a representation formula for solutions to inhomogeneous Teukolsky equations and will play a crucial role in generalizing the above energy and Morawetz estimates to the full subextremal Kerr case.
Be Creative, Now! (2018)
Purpose – This thesis set out to explore, describe, and evaluate the reality behind the rhetoric of freedom and control in the context of creativity. The overarching subject is concerned with the relationship between creativity, freedom, and control, considering freedom is also seen as an element of control to manage creativity.
Design/methodology/approach – In-depth qualitative data were gathered at two innovative start-ups through two ethnographic studies. The data are based on participatory observations, interviews, and secondary sources, comprising a three-month field study at each organization and a total of 41 interviews across both organizations.
Findings – The thesis provides explanations for the practice of freedom and the control of creativity within organizations and expands the existing theory of neo-normative control. The findings indicate that organizations use complex control systems that allow a high degree of freedom, and that this freedom paradoxically leads to more control. Freedom serves as a cover for control, which in turn enables creativity. Covert control even results in a felt responsibility to be creative outside working hours.
Practical implications – Organizations that rely on creativity may draw on the results of this thesis. Positive workplace control of creativity provides both freedom and structure for creative work: freedom leads to organizational members being more motivated and committing themselves more strongly to their own and the organization's goals, while a specific structure helps to provide the requirements for creativity.
Originality/value – The thesis provides insight into an approach to workplace control that has mostly been neglected in creativity research and proposes a modified concept of neo-normative control. It serves to further the understanding of freedom for creativity and to challenge the liberal claims of new control forms.
Microswimmers, i.e. swimmers of micron size operating at low Reynolds numbers, have received a great deal of attention in recent years, since many applications are envisioned in medicine and bioremediation. A promising field is that of magnetic swimmers, since magnetism is biocompatible and can be used to direct or actuate the swimmers. This thesis studies two examples of magnetic microswimmers from a physics point of view.
The first system studied are magnetic cells, which can be magnetic biohybrids (a swimming cell coupled with a magnetic synthetic component) or magnetotactic bacteria (naturally occurring bacteria that produce an intracellular chain of magnetic crystals). A magnetic cell can passively interact with external magnetic fields, which can be used to direct it. The aim of the thesis is to understand how magnetic cells couple this magnetic interaction to their swimming strategies, mainly how they combine it with chemotaxis (the ability to sense external gradients of chemical species and to bias their walk along these gradients). In particular, one open question concerns the advantage that these magnetic interactions give magnetotactic bacteria in a natural environment, such as porous sediments. In the thesis, a modified Active Brownian Particle model is used to perform simulations and to reproduce experimental data for different systems, such as bacteria swimming in the bulk, in a capillary, or in confined geometries. I show that magnetic fields speed up chemotaxis under special conditions, depending on parameters such as the swimming strategy (run-and-tumble or run-and-reverse), the aerotactic strategy (axial or polar), and the magnetic field (intensity and orientation), but that they can also hinder bacterial chemotaxis depending on the system.
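The Active Brownian Particle picture described above can be sketched in a few lines: self-propelled particles in 2D with rotational diffusion, run-and-tumble reorientations, and a torque that relaxes the swimming direction toward an external field along +x. All parameter values are illustrative, not the fitted values from the thesis:

```python
import math
import random

def simulate_swimmers(n=200, steps=500, dt=0.01, v=1.0,
                      d_rot=1.0, b_align=20.0, tumble_rate=1.0):
    """Mean x-displacement of 2D active particles with self-propulsion,
    rotational diffusion, run-and-tumble events, and an alignment
    torque toward a magnetic field pointing along +x (angle 0)."""
    rng = random.Random(42)
    xs = []
    for _ in range(n):
        x, theta = 0.0, rng.uniform(-math.pi, math.pi)
        for _ in range(steps):
            # torque relaxes theta toward the field; noise perturbs it
            theta += (-b_align * math.sin(theta) * dt
                      + math.sqrt(2 * d_rot * dt) * rng.gauss(0, 1))
            if rng.random() < tumble_rate * dt:   # run-and-tumble event
                theta = rng.uniform(-math.pi, math.pi)
            x += v * math.cos(theta) * dt
        xs.append(x)
    return sum(xs) / n

mean_drift = simulate_swimmers()  # positive: field biases motion along +x
```

Weakening `b_align` or raising `tumble_rate` reduces the drift, mirroring the competition between magnetic steering and the bacterial swimming strategy that the thesis explores.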
The second example of magnetic microswimmers are rigid magnetic propellers, such as helices or randomly shaped propellers. These propellers are actuated and directed by an external rotating magnetic field. One open question is how shape and magnetic properties influence the propeller behavior; the goal of this research field is to design the best propeller for a given situation. The aim of the thesis is to propose a simulation method that reproduces the behavior of experimentally realized propellers and determines their magnetic properties. The hydrodynamic simulations are based on the use of the mobility matrix. As the main result, I propose a method to match the experimental data, showing that not only the shape but also the magnetic properties influence the propellers' swimming characteristics.
Neuroinflammatory and neurodegenerative diseases such as Parkinson's disease (PD) and multiple sclerosis (MS) often result in a severe impairment of the patient's quality of life. Effective therapies are currently not available, which results in a high socio-economic burden. Due to the heterogeneity of the disease subtypes, stratification is particularly difficult in the early phase of the disease and is mainly based on clinical parameters such as neurophysiological tests and central nervous imaging. Owing to their good accessibility and stability, blood and cerebrospinal fluid (CSF) metabolite markers could serve as surrogates for neurodegenerative processes. This can lead to an improved mechanistic understanding of these diseases and can further be used for "treatment response" biomarkers in preclinical and clinical development programs. Therefore, plasma and CSF metabolite profiles were identified that allow differentiation of PD from healthy controls, association of PD with dementia (PDD), and differentiation of PD subtypes such as akinetic-rigid and tremor-dominant PD patients. In addition, plasma metabolites for the diagnosis of primary progressive MS (PPMS) were investigated and tested for their specificity with respect to relapsing-remitting MS (RRMS) and their development during PPMS progression.
By applying untargeted high-resolution metabolomics to PD patient samples and using random forest and partial least squares machine learning algorithms, this study identified 20 plasma and 14 CSF metabolite biomarkers. These differentiate PD patients from healthy individuals with AUCs of 0.8 and 0.9, respectively. Ten PDD-specific serum metabolites were also identified, which differentiate PDD patients both from healthy individuals and from PD patients without dementia, each with an AUC of 1.0. Furthermore, 23 akinetic-rigid-specific plasma markers were identified, which differentiate against tremor-dominant PD patients with an AUC of 0.94 and against healthy individuals with an AUC of 0.98. These findings also suggest a more severe disease pathology in akinetic-rigid than in tremor-dominant PD. In the analysis of MS patient samples, a partial least squares analysis yielded predictive models for the classification of PPMS and resulted in 20 PPMS-specific metabolites. In another MS study, previously unknown changes in human metabolism were identified after administration of the multiple sclerosis drug dimethyl fumarate, which is used for the treatment of RRMS. These results make it possible to describe and understand the hitherto unknown mechanism of action of this drug and to use these findings for the further development of new drugs and targets against RRMS.
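An AUC value such as the reported 0.8 can be read as a rank probability: the chance that a randomly chosen patient receives a higher classifier score than a randomly chosen healthy control. A minimal sketch with invented scores (not the study's data):

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the Mann-Whitney statistic: the probability that a
    positive case outscores a negative case; ties count one half."""
    wins = 0.0
    for p in pos_scores:
        for h in neg_scores:
            if p > h:
                wins += 1.0
            elif p == h:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores from a metabolite panel (illustrative).
pd_scores = [0.91, 0.85, 0.72, 0.64]   # PD patients
hc_scores = [0.70, 0.55, 0.41]         # healthy controls

auc = roc_auc(pd_scores, hc_scores)
```

An AUC of 1.0, as reported for the PDD serum panel, corresponds to perfect separation: every patient outscores every control.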
In conclusion, these results have the potential to improve the diagnosis of these diseases and the mechanistic understanding of them, as multiple deregulated pathways were identified. Moreover, the novel dimethyl fumarate targets can be used to aid drug development and treatment efficiency. Overall, metabolite profiling in combination with machine learning was identified as a promising approach for biomarker discovery and mode-of-action elucidation.
Global climate change is one of the greatest challenges of the 21st century, influencing the environment, societies, politics, and economies. The (semi-)arid areas of Southern Africa already suffer from water scarcity. There is a great variety of ongoing research on global climate history, but important questions on regional differences remain open.
In southern African regions, terrestrial climate archives are rare, which makes paleoclimate studies challenging. Based on the assumption that continental pans (sabkhas) represent a suitable geo-archive for climate history, two different pans were studied in the southern and western Kalahari Desert. A combined approach of molecular biological and biogeochemical analyses is used to investigate the diversity and abundance of microorganisms and to trace temporal and spatial changes in paleoprecipitation in arid environments. The present PhD thesis demonstrates the applicability of pan sediments as a late Quaternary geo-archive based on microbial signature lipid biomarkers, such as archaeol, branched and isoprenoid glycerol dialkyl glycerol tetraethers (GDGTs), as well as phospholipid fatty acids (PLFAs). The microbial signatures contained in the sediment provide information on the current and past microbial community from the Last Glacial Maximum to the recent epoch, the Holocene. The results are discussed in the context of regional climate evolution in southwestern Africa. The seasonal shift of the Intertropical Convergence Zone (ITCZ) along the equator influences the distribution of precipitation and climate zones. The different extents of the winter and summer rainfall zones in southern Africa were confirmed by the frequency of certain microbial biomarkers. A period of increased precipitation in the southwestern Kalahari could be attributed to the extension of the winter rainfall zone during the Last Glacial Maximum (21 ± 2 ka). In contrast, a period of increased paleoprecipitation in the western Kalahari was indicated during the Late Glacial to Holocene transition. This was possibly caused by a southwestern shift in the position of the summer rainfall zone associated with the southward movement of the ITCZ.
Furthermore, this study characterizes for the first time the bacterial and archaeal life in continental pan sediments based on 16S rRNA gene high-throughput sequencing and provides an insight into the recent microbial community structure. Near-surface processes play an important role for the modern microbial ecosystem in the pans. Water availability as well as salinity might determine the abundance and composition of the microbial communities. The microbial community of pan sediments is dominated by halophilic and dry-adapted archaea and bacteria. Frequently occurring microorganisms such as Halobacteriaceae, Bacillus, and Gemmatimonadetes are described in more detail in this study.
In this thesis, new quantizations are constructed for pseudo-differential boundary value problems (BVPs) on manifolds with edge. The form of the operators derives from Boutet de Monvel's calculus, which exists on smooth manifolds with boundary. The singular case, here with edge and boundary, is much more complicated. The present approach simplifies the operator-valued symbolic structures by using suitable Mellin quantizations on infinite stretched model cones of wedges with boundary. The Mellin symbols themselves are, modulo smoothing symbols with asymptotics, holomorphic in the complex Mellin covariable. One of the main results is the construction of parametrices of elliptic elements in the corresponding operator algebra, including elliptic edge conditions.
Basaltic fissure eruptions, such as on Hawai'i or on Iceland, are thought to be driven by the lateral propagation of feeder dikes and graben subsidence. Associated solid earth processes, such as deformation and structural development, are well studied by means of geophysical and geodetic technologies. The eruptions themselves, in turn, with their lava fountaining and venting dynamics, have been much less investigated due to hazardous access, small spatial scales, fast processes, and the resulting poor data availability.
This thesis provides a detailed quantitative understanding of the shape and dynamics of lava fountains and the morphological changes at their respective eruption sites. For this purpose, I apply image processing techniques to sequences of video frames recorded by drones and fixed-installed cameras during two well-known fissure eruptions in Hawai'i and Iceland. In this way I extract the dimensions of multiple lava fountains visible in all frames. By combining these results and considering the acquisition times of the frames, I quantify the variations in height, width, and eruption velocity of the lava fountains. I then analyse these time series in both the time and frequency domains and investigate the similarities and correlations between adjacent lava fountains. Following this procedure, I am able to link the dynamics of the individual lava fountains to physical parameters of the magma transport in the feeder dyke of the fountains.
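The time- and frequency-domain analysis described above can be illustrated with a minimal sketch (not the thesis code; the pulsation frequency, sampling rate, and noise level below are invented for the example): extract the dominant pulsation frequency of one fountain-height time series via the power spectrum, and quantify the coupling between two adjacent fountains via their correlation.

```python
# Illustrative sketch: spectral and correlation analysis of two synthetic
# lava-fountain height time series, mimicking the workflow described above.
import numpy as np

fs = 2.0                       # assumed video frame rate, frames per second
t = np.arange(0, 600, 1 / fs)  # 10 minutes of observation
pulse = 0.05                   # shared pulsation frequency in Hz (assumption)

rng = np.random.default_rng(1)
h1 = 20 + 5 * np.sin(2 * np.pi * pulse * t) + rng.normal(0, 1, t.size)
h2 = 18 + 4 * np.sin(2 * np.pi * pulse * t + 0.3) + rng.normal(0, 1, t.size)

# Frequency domain: dominant pulsation frequency from the power spectrum.
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
power = np.abs(np.fft.rfft(h1 - h1.mean())) ** 2
f_dom = freqs[np.argmax(power)]
print(f"dominant frequency: {f_dom:.3f} Hz")

# Time domain: correlation between adjacent fountains as a coupling proxy.
r = np.corrcoef(h1, h2)[0, 1]
print(f"correlation between adjacent fountains: r = {r:.2f}")
```

A high correlation and a shared spectral peak across fountains are the kind of evidence used to argue for a hydraulic connection in the underlying feeder dyke.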
The first case study in this thesis focuses on the March 2011 Pu'u'O'o eruption, Hawai'i, where a continuous pulsating behaviour was observed at all eight lava fountains. The lava fountains, even those from different parts of the fissure, show a similar frequency content and eruption behaviour, indicating that they are closely connected. The regular pattern in the heights of the lava fountains suggests a controlling process within the magma feeder system, such as a hydraulic connection in the underlying dyke, affecting or even controlling the pulsating behaviour.
The second case study addresses the 2014-2015 Holuhraun fissure eruption, Iceland. In this case, the feeder dyke is highlighted by the surface expressions of graben-like structures and fault systems. At the eruption site, the activity decreased from a continuous line of fire with ~60 vents to a limited number of lava fountains. This can be explained by preferred upward magma movement through vertical structures of the pre-eruptive morphology. Seismic tremor during the eruption reveals vent opening at the surface and/or pressure changes in the feeder dyke. The topography of the cinder cones, evolving during the eruption, interacts with the lava fountain behaviour. Local variations in lava fountain height and width are controlled by the conduit diameter, the depth of the lava pond, and the shape of the crater. Modelling of the fountain heights shows that the long-term eruption behaviour is controlled mainly by pressure changes in the feeder dyke.
This research consists of six chapters with four papers, including two first-author and two co-author papers. It establishes a new method to analyse lava fountain dynamics by video monitoring. The comparison with the seismicity and with the geomorphologic and structural expressions of fissure eruptions shows a complex relationship between focussed flow through dykes, the morphology of the cinder cones, and the lava fountain dynamics at the vents of a fissure eruption.
Fluvial terraces, floodplains, and alluvial fans are the main landforms that store sediments and decouple hillslopes from eroding mountain rivers. Such low-relief landforms are also preferred locations for humans to settle in otherwise steep and poorly accessible terrain. Abundant water and sediment, as essential resources for buildings and infrastructure, make these areas appealing places to live. Yet valley floors are also prone to rare and catastrophic sedimentation that can overload river systems by abruptly increasing the volume of sediment supply, thus causing massive floodplain aggradation, lateral channel instability, and increased flooding. Some valley-fill sediments should thus record these catastrophic sediment pulses, allowing insights into their timing, magnitude, and consequences.
This thesis pursues this theme and focuses on a prominent ~150 km² valley fill in the Pokhara Valley just south of the Annapurna Massif in central Nepal. The Pokhara Valley is conspicuously broad and gentle compared to the surrounding dissected mountain terrain, and is filled with locally more than 70 m of clastic debris. The area's main river, the Seti Khola, descends from the Annapurna Sabche Cirque at 3500-4500 m asl down to 900 m asl, where it incises into this valley fill. Humans began to settle on this extensive fan surface in the 1750s, when the Trans-Himalayan trade route connected the Higher Himalayas, passing Pokhara city, with the subtropical lowlands of the Terai. High and unstable river terraces, steep gorges undermined by fast-flowing rivers with highly seasonal (monsoon-driven) discharge, a high earthquake risk, and a growing population make the Pokhara Valley an ideal place to study the recent geological and geomorphic history of its sediments and the implications for natural hazard appraisals.
The objective of this thesis is to quantify the timing, the sedimentologic and geomorphic processes, and the fluvial response to a series of strong sediment pulses. I report diagnostic sedimentary archives, lithofacies of the fan terraces, their geochemical provenance, radiocarbon dating, and the stratigraphic relationships between them. All these independent lines of evidence consistently show that multiple sediment pulses filled the Pokhara Valley in medieval times, most likely in connection with, if not triggered by, strong seismic ground shaking. The geomorphic and sedimentary evidence is consistent with catastrophic fluvial aggradation tied to the timing of three medieval Himalayan earthquakes in ~1100, 1255, and 1344 AD. Sediment provenance and calibrated radiocarbon ages are the key to distinguishing three individual sediment pulses, as these are not evident from their sedimentology alone. I explore various measures of adjustment and fluvial response of the river system following these massive aggradation pulses. Using proxies such as net volumetric erosion, incision and erosion rates, clast provenance on active river banks, geomorphic markers such as re-exhumed tree trunks in growth position, and knickpoint locations in tributary valleys, I estimate the response of the river network in the Pokhara Valley to earthquake disturbance over several centuries. Estimates of the volumes removed since catastrophic valley filling began require average net sediment yields of up to 4200 t km⁻² yr⁻¹, rates that are consistent with those reported for Himalayan rivers. The lithological composition of the active channel-bed load differs from that of the local bedrock, confirming that rivers have adjusted by 30-50%, depending on the tributary catchment, locally incising at rates of 160-220 mm yr⁻¹. In many tributaries to the Seti Khola, most of the contemporary river load comes from a Higher Himalayan source, thus excluding local hillslopes as sources. This imbalance in sediment provenance emphasizes how the medieval sediment pulses must have rapidly traversed up to 70 km downstream and invaded the lower reaches of the tributaries up to 8 km upstream, thereby blocking the local drainage and reinforcing, or locally creating, floodplain lakes still visible in the landscape today.
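The structure of a net sediment-yield estimate like the one quoted above can be sketched as a back-of-the-envelope calculation. All input values below (volume, area, density, time span) are illustrative placeholders, not the figures from the thesis:

```python
# Back-of-the-envelope sketch of a net sediment-yield estimate in t/km^2/yr.
# Every input number is an assumed placeholder for illustration only.
RHO_BULK = 2.0e9     # bulk sediment density in t/km^3 (i.e. ~2.0 t/m^3, assumed)
volume_km3 = 1.5     # eroded sediment volume since valley filling, km^3 (assumed)
area_km2 = 1000.0    # contributing catchment area, km^2 (assumed)
years = 700.0        # time since medieval aggradation to present (assumed)

# yield = mass removed / (area * time)
yield_t_km2_yr = RHO_BULK * volume_km3 / (area_km2 * years)
print(f"net sediment yield: {yield_t_km2_yr:.0f} t km^-2 yr^-1")
```

With these placeholder inputs the result lands in the few-thousand t km⁻² yr⁻¹ range, the order of magnitude reported for Himalayan rivers; the thesis value of course rests on the measured volumes and catchment areas.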
Understanding the formation, origin, mechanisms, and geomorphic processes of this valley fill is crucial to understanding the landscape evolution and response to catastrophic sediment pulses. Several earthquake-triggered long-runout rock-ice avalanches or catastrophic dam bursts in the Higher Himalayas are the only plausible mechanisms to explain both the geomorphic and the sedimentary legacy that I document here. In any case, the Pokhara Valley was most likely hit by a cascade of extremely rare processes over some two centuries starting in the early 11th century. Nowhere else in the Himalayas do we find valley fills of comparable size and equally well documented depositional history, making the Pokhara Valley one of the most extensively dated valley fills in the Himalayas to date. Judging from the growing record of historic Himalayan earthquakes in Nepal that were traced and dated in fault trenches, this thesis shows that sedimentary archives can directly aid reconstructions and predictions of both earthquake triggers and impacts from a sedimentary-response perspective. The knowledge about the timing, evolution, and response of the Pokhara Valley and its river system to earthquake-triggered sediment pulses is important for addressing the seismic and geomorphic risk for the city of Pokhara. This thesis demonstrates how geomorphic evidence of catastrophic valley infill can help to independently verify paleoseismological fault-trench records and may initiate re-thinking of post-seismic hazard assessments in active mountain regions.
Causes for slow weathering and erosion in the steep, warm, monsoon-subjected Highlands of Sri Lanka
(2018)
In the Highlands of Sri Lanka, erosion and chemical weathering rates are among the lowest reported for global mountain denudation. In this tropical humid setting, highly weathered, deep saprolite profiles have developed from high-grade metamorphic charnockite during spheroidal weathering of the bedrock. The spheroidal weathering produces rounded corestones and spalled rindlets at the rock-saprolite interface. I used detailed textural, mineralogical, chemical, and electron-microscopic (SEM, FIB, TEM) analyses to identify the factors limiting the rate of weathering front advance in the profile, the sequence of weathering reactions, and the underlying mechanisms. The first mineral attacked by weathering was found to be pyroxene, initiated by in situ Fe oxidation, followed by in situ biotite oxidation. Bulk dissolution of the primary minerals is best described by a dissolution and re-precipitation process, as no chemical gradients towards the mineral surface, but sharp structural boundaries, are observed at the nm scale. Only the local oxidation in pyroxene and biotite is better described by an ion-by-ion process. The first secondary phases are oxides and amorphous precipitates, from which secondary minerals (mainly smectite and kaolinite) form. Only for biotite is a direct solid-state transformation to kaolinite likely. The initial oxidation of pyroxene and biotite takes place in locally restricted areas and is relatively fast: log J = -11 mol(mineral) m⁻² s⁻¹. However, calculated corestone-scale mineral oxidation rates are comparable to corestone-scale mineral dissolution rates: log R = -13 mol(pyroxene) m⁻² s⁻¹ and log R = -15 mol(biotite) m⁻² s⁻¹. The oxidation reaction results in a volume increase. Volumetric calculations suggest that this observed oxidation generates porosity through the formation of micro-fractures in the minerals and the bedrock, allowing fluid transport and subsequent dissolution of plagioclase.
At the scale of the corestone, this fracturing is responsible for the larger fractures that lead to spheroidal weathering and to the formation of rindlets. Since these fractures originate from the initial oxidation-induced volume increase, oxidation is the rate-limiting step for weathering to take place. The ensuing plagioclase weathering leads to the formation of high secondary porosity in the corestone over a distance of only a few cm and eventually to the final disaggregation of bedrock to saprolite. As oxidation is the first weathering reaction, the supply of O2 is a rate-limiting factor for chemical weathering. Hence, the supply of O2 and its consumption at depth connect processes at the weathering front with erosion at the surface in a feedback mechanism. The strength of the feedback depends on the relative weight of advective versus diffusive transport of O2 through the weathering profile; the feedback is stronger when diffusive transport dominates. The low weathering rate ultimately depends on the transport of O2 through the whole regolith and on lithological factors such as low bedrock porosity and the amount of Fe-bearing primary minerals. In this regard, the low-porosity charnockite with its low content of Fe(II)-bearing minerals impedes fast weathering reactions. Fresh weatherable surfaces are a prerequisite for chemical weathering. However, in the case of the charnockite found in the Sri Lankan Highlands, the only process that generates these surfaces is the fracturing induced by oxidation. Tectonic quiescence in this region and a low pre-anthropogenic erosion rate (attributed to a dense vegetation cover) minimize the rejuvenation of the thick and cohesive regolith column and lower weathering through the feedback with erosion.
Characterization of altered inflorescence architecture in Arabidopsis thaliana BG-5 x Kro-0 hybrid
(2018)
A reciprocal cross between two A. thaliana accessions, Kro-0 (Krotzenburg, Germany) and BG-5 (Seattle, USA), displays purple rosette leaves and a dwarf-bushy phenotype in F1 hybrids when grown at 17 °C and a parental-like phenotype when grown at 21 °C. This temperature-dependent dwarf-bushy F1 phenotype is characterized by reduced growth of the primary stem together with an increased number of branches. The reduction in stem growth was strongest at the first internode. In addition, we found that a temperature switch from 21 °C to 17 °C induced the phenotype only before the formation of the first internode of the stem. Similarly, the F1 dwarf-bushy phenotype could not be reversed when plants were shifted from 17 °C to 21 °C after the first internode was formed. Metabolic analysis showed that the F1 phenotype was associated with a significant upregulation of anthocyanins, kaempferols, salicylic acid, jasmonic acid, and abscisic acid. As it had previously been shown that the dwarf-bushy phenotype is linked to two loci, one on chromosome 2 from Kro-0 and one on chromosome 3 from BG-5, an artificial micro-RNA approach was used to identify the necessary genes in these intervals. From the results obtained, two genes on chromosome 2, AT2G14120, which encodes DYNAMIN RELATED PROTEIN3B, and AT2G14100, which encodes a member of the cytochrome P450 family, CYP705A13, were found to be necessary for the appearance of the F1 phenotype. It was also discovered that AT3G61035, which encodes another cytochrome P450 family protein, CYP705A13, and AT3G60840, which encodes MICROTUBULE-ASSOCIATED PROTEIN65-4, on chromosome 3 were both necessary for the induction of the F1 phenotype. To prove the causality of these genes, genomic constructs of the Kro-0 candidate genes on chromosome 2 were transferred to BG-5, and genomic constructs of the chromosome 3 candidate genes from BG-5 were transferred to Kro-0.
The T1 lines showed that these genes alone are not sufficient to induce the phenotype. In addition to the F1 phenotype, more severe phenotypes were observed in the F2 generations, which were grouped into five phenotypic classes. Whilst seed yield was comparable between F1 hybrids and parental lines, three phenotypic classes in the F2 generation exhibited hybrid breakdown in the form of reproductive failure. This F2 hybrid breakdown was less sensitive to temperature and showed a dose-dependent effect of the loci involved in the F1 phenotype. The most severe class of hybrid breakdown phenotypes was observed only in the population of the backcross with the parent Kro-0, which indicates a stronger contribution of the BG-5 allele, compared to the Kro-0 allele, to the hybrid breakdown phenotypes. Overall, the findings of my thesis provide a further understanding of the genetic and metabolic factors underlying altered shoot architecture in hybrid dysfunction.
Remote sensing technologies, such as airborne, mobile, or terrestrial laser scanning and photogrammetric techniques, are fundamental approaches for the efficient, automatic creation of digital representations of spatial environments. For example, they allow us to generate 3D point clouds of landscapes, cities, infrastructure networks, and sites. As an essential and universal category of geodata, 3D point clouds are used and processed by a growing number of applications, services, and systems, for example in urban planning, landscape architecture, environmental monitoring, disaster management, and virtual geographic environments, as well as for spatial analysis and simulation.
While the acquisition processes for 3D point clouds become increasingly reliable and widely used, applications and systems are confronted with ever-growing amounts of 3D point cloud data. In addition, 3D point clouds are, by their very nature, raw data, i.e., they do not contain any structural or semantic information. Many processing strategies common to GIS, such as deriving polygon-based 3D models, generally do not scale to billions of points. GIS typically reduce the data density and precision of 3D point clouds to cope with the sheer amount of data, but at the cost of a significant loss of valuable information.
This thesis proposes concepts and techniques designed to efficiently store and process massive 3D point clouds. To this end, object-class segmentation approaches are presented to attribute semantics to 3D point clouds, used, for example, to identify building, vegetation, and ground structures and, thus, to enable processing, analyzing, and visualizing 3D point clouds in a more effective and efficient way. Similarly, change detection and updating strategies for 3D point clouds are introduced that allow for reducing storage requirements and incrementally updating 3D point cloud databases. In addition, this thesis presents out-of-core, real-time rendering techniques used to interactively explore 3D point clouds and related analysis results. All techniques have been implemented based on specialized spatial data structures, out-of-core algorithms, and GPU-based processing schemas to cope with massive 3D point clouds having billions of points.
All proposed techniques have been evaluated and have demonstrated their applicability to geospatial applications and systems, in particular for tasks such as classification, processing, and visualization. Case studies for 3D point clouds of entire cities with up to 80 billion points show that the presented approaches open up new ways to manage and apply large-scale, dense, and time-variant 3D point clouds, as required by a rapidly growing number of applications and systems.
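The core idea behind the specialized spatial data structures mentioned above (bucketing points into independently storable and processable spatial cells) can be sketched minimally. This is an assumed, simplified illustration, not the thesis implementation, which relies on out-of-core algorithms and GPU processing:

```python
# Minimal sketch of spatial chunking for point clouds: points are grouped by
# fixed-size grid cell so that each cell can be stored, loaded, and processed
# independently -- the basic prerequisite for out-of-core pipelines.
import numpy as np
from collections import defaultdict

def chunk_points(points: np.ndarray, cell_size: float) -> dict:
    """Group an (N, 3) array of points by integer grid-cell index."""
    cells = np.floor(points / cell_size).astype(int)
    chunks = defaultdict(list)
    for point, cell in zip(points, cells):
        chunks[tuple(cell)].append(point)
    return {cell: np.array(pts) for cell, pts in chunks.items()}

rng = np.random.default_rng(42)
cloud = rng.uniform(0, 100, size=(10_000, 3))   # synthetic 100 m cube of points
chunks = chunk_points(cloud, cell_size=25.0)    # 4 x 4 x 4 = 64 cells

print(f"{len(chunks)} occupied cells")
assert sum(len(c) for c in chunks.values()) == len(cloud)   # nothing lost
```

Real systems replace the flat grid with hierarchical structures (e.g. octrees) so that coarse levels can serve as levels of detail for rendering, but the partition-then-process pattern is the same.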
This project was focused on exploring the phase behavior of poly(styrene)187000-block-poly(2-vinylpyridine)203000 (SV390) with high molecular weight (390 kg/mol) in thin films, in which the self-assembly of block copolymers (BCPs) was realized via thermo-solvent annealing. The advanced processing technique of solvent vapor treatment provides controlled and stable conditions.
In Chapter 3, the factors influencing the annealing process and the swelling behavior of homopolymers are presented and discussed. The swelling behavior of BCP films is controlled by the temperature of the vapor and of the substrate on the one hand, and by the saturation of the solvent vapor atmosphere (different solvents) on the other hand. Additional factors, such as the geometry and material of the chamber and the type of flow inside the chamber, also influence the reproducibility and stability of the processing. The slightly selective solvent vapor of chloroform yields 10% more swelling of P2VP than of PS in films with a thickness of ~40 nm.
The tunable morphology in ultrathin films of the high-molecular-weight BCP (SV390) was investigated in Chapter 4. First, the swelling behavior can be precisely tuned by temperature and/or vapor flow separately, which provided information for exploring the segmental chain mobility of polymer films as influenced by multiple parameters. The equilibrium state of SV390 thin films was realized at various temperatures with the same degree of swelling. Various methods, including characterization with SFM, metallization, and RIE, were used to identify the morphology of the films as a porous half-layer with PS dots in a P2VP matrix. The kinetic investigations demonstrate that, on substrates with either weak or strong interaction, the original morphology of the high-molecular-weight BCP changes very fast, within 5 min, and further annealing serves to annihilate defects.
The morphological development of symmetric BCP films with thicknesses increasing from half a layer to one layer, as influenced by the confinement factors of gradient film thickness and various substrate surface properties, was studied in Chapter 5. SV390 and SV99 films show the bulk lamella-forming morphology after treatment with a slightly selective solvent vapor (chloroform). SV99 films show a cylinder-forming morphology under strongly selective solvent vapor (toluene) treatment, owing to the asymmetric swelling (toluene uptake in the PS blocks only) of the SV99 block copolymer during annealing. Both kinds of morphology (lamellae and cylinders) are influenced by the film thickness. The annealed morphology of SV390 and SV99 under the combined influence of film confinement and substrate properties is similar to the morphology on flat silicon wafers. In this chapter, the gradients in film thickness and in the surface properties of the substrates are presented with regard to their influence on the morphological development in thin BCP films. Directed self-assembly (graphoepitaxy) of SV390 was also investigated for comparison with the systematically studied SV99.
In Chapter 6, an approach to induce oriented microphase separation in thick block copolymer films via treatment with an oriented vapor flow using a mini-extruder is envisaged as an alternative to existing methodologies, e.g. non-solvent-induced phase separation. The preliminary tests performed in this study confirm the potential of this method, which alters the structure throughout the bulk of the film (as revealed by SAXS measurements), but more detailed studies have to be conducted in order to optimize the preparation.
To reach its climate targets, the European Union has to implement a major sustainability transition in the coming decades. While the socio-technical change required for this transition is well discussed in the academic literature, the economics that go along with it are often reduced to a cost-benefit perspective on climate policy measures. By investigating climate change mitigation as a coordination problem, this thesis offers a novel perspective: it integrates the economic and the socio-technical dimensions and thus allows a better understanding of the opportunities of a sustainability transition in Europe.
First, a game theoretic framework is developed to illustrate coordination on green or brown investment from an agent perspective. A model based on the coordination game "stag hunt" is used to discuss the influence of narratives and signals for green investment as a means to coordinate expectations towards green growth. Public and private green investment impulses – triggered by credible climate policy measures and targets – serve as an example for a green growth perspective for Europe in line with a sustainability transition. This perspective also embodies a critical view on classical analyses of climate policy measures.
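The coordination logic of the "stag hunt" can be made concrete with a small numerical illustration. The payoff matrix below is a hypothetical example (the thesis's actual payoffs are not given here): coordinated green investment yields the highest payoff, brown investment is the safe but inferior choice, and a lone green investor is worst off.

```python
# Hedged illustration of the "stag hunt" coordination game with two
# strategies per investor: 0 = green, 1 = brown. Payoff numbers are
# invented placeholders, not values from the thesis.
import numpy as np

# payoffs[i][j] = (row payoff, column payoff) for strategy pair (i, j)
payoffs = np.array([
    [[4, 4], [0, 3]],   # both green: best outcome; green alone: stranded
    [[3, 0], [3, 3]],   # brown: safe payoff regardless of the other player
])

def is_nash(i, j):
    """Neither investor gains by unilaterally deviating from (i, j)."""
    row_ok = payoffs[i][j][0] >= payoffs[1 - i][j][0]
    col_ok = payoffs[i][j][1] >= payoffs[i][1 - j][1]
    return row_ok and col_ok

equilibria = [(i, j) for i in range(2) for j in range(2) if is_nash(i, j)]
print(equilibria)   # both (green, green) and (brown, brown) are equilibria
```

Because both all-green and all-brown are equilibria, expectations decide which one is reached; this is exactly where the narratives and policy signals discussed above enter, by making coordination on the payoff-dominant green equilibrium more likely.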
Secondly, this analysis is enriched with empirical results derived from stakeholder involvement. Through interviews and a survey among European insurance companies, coordination mechanisms such as market and policy signals are identified and evaluated with respect to their impact on investment strategies for green infrastructure. The latter, here defined as renewable energy, electricity distribution and transmission, as well as energy efficiency improvements, is considered a central element of the transition to a low-carbon society.
Thirdly, this thesis identifies and analyzes major criticisms raised towards stakeholder involvement in sustainability science. On a conceptual level, different ways of conducting such qualitative research are classified. This conceptualization is then evaluated by scientists, thereby generating empirical evidence on ideals and practices of stakeholder involvement in sustainability science.
Through the combination of theoretical and empirical research on coordination problems, this thesis makes several contributions: On the one hand, it outlines an approach for assessing the economic opportunities of sustainability transitions. This is helpful for policy makers in Europe who are striving to implement climate policy measures addressing the targets of the Paris Agreement and to encourage a shift of investments towards green infrastructure. On the other hand, this thesis contributes to stabilizing the theoretical foundations of sustainability science. It can therefore aid researchers who involve stakeholders when studying sustainability transitions.
Systems biology aims at investigating biological systems in their entirety by gathering and analyzing large-scale data sets about the underlying components. Computational systems biology approaches use these large-scale data sets to create models at different scales and cellular levels. In addition, systems biology is concerned with generating and testing hypotheses about biological processes. However, such approaches inevitably lead to computational challenges due to the high dimensionality of the data and the differences in dimension between data from different cellular layers.
This thesis focuses on the investigation and development of computational approaches for analyzing metabolite profiles in the context of cellular networks, with the aim of determining which aspects of network functionality are reflected in metabolite levels. With these methods at hand, this thesis aims to answer three questions: (1) how the observability of biological systems is manifested in metabolite profiles, and whether it can be used for phenotypical comparisons; (2) how to identify couplings of reaction rates from metabolic profiles alone; and (3) which regulatory mechanisms affecting metabolite levels can be distinguished by integrating transcriptomic and metabolomic read-outs.
I showed that sensor metabolites, identified by an approach from observability theory, are more strongly correlated with each other than non-sensors. The greater correlations between sensor metabolites were detected both with publicly available metabolite profiles and with synthetic data simulated from a medium-scale kinetic model. I demonstrated through robustness analysis that the correlation was due to the position of the sensor metabolites in the network and persisted irrespective of the experimental conditions. Sensor metabolites are therefore potential candidates for phenotypical comparisons between conditions through targeted metabolic analysis.
Furthermore, I demonstrated that the coupling of metabolic reaction rates can be investigated from a purely data-driven perspective, assuming that metabolic reactions can be described by mass action kinetics. Employing metabolite profiles from domesticated and wild wheat and tomato species, I showed that the process of domestication is associated with a loss of regulatory control on the level of reaction rate coupling. I also found that the same metabolic pathways in Arabidopsis thaliana and Escherichia coli exhibit differences in the number of reaction rate couplings.
I designed a novel method for the identification and categorization of transcriptional effects on metabolism by combining data on gene expression and metabolite levels. The approach determines the partial correlation of metabolites while controlling for the principal components of the transcript levels. The principal components contain the majority of the transcriptomic information, making it possible to partial out the effect of the transcriptional layer from the metabolite profiles. Depending on whether the correlation between metabolites persists upon controlling for the effect of the transcriptional layer, the approach allows us to classify metabolite pairs as associated due to post-transcriptional or transcriptional regulation, respectively. I showed that the resulting classification of metabolite pairs into those associated due to transcriptional and those due to post-transcriptional regulation is in agreement with existing literature and with findings from a Bayesian inference approach.
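The partial-correlation idea can be sketched on synthetic data (this is a conceptual illustration under assumed details, not the thesis implementation): two metabolites are made to correlate only through a transcriptional driver, and regressing the leading transcript principal components out of both profiles should collapse their correlation.

```python
# Conceptual sketch: partial correlation of two metabolites controlling for
# the principal components of a transcript matrix. All data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_genes = 50, 300

# A transcriptional "driver" that regulates many genes, so it dominates PC1.
driver = rng.normal(size=n_samples)
transcripts = rng.normal(size=(n_samples, n_genes))
transcripts[:, :100] += driver[:, None]

# Two metabolites correlated only through the transcriptional driver.
m1 = driver + 0.3 * rng.normal(size=n_samples)
m2 = driver + 0.3 * rng.normal(size=n_samples)

# Principal-component scores via SVD of the centered transcript matrix.
Tc = transcripts - transcripts.mean(axis=0)
U, S, _ = np.linalg.svd(Tc, full_matrices=False)
pcs = U[:, :10] * S[:10]                 # scores of the first 10 PCs

def residual(y, X):
    """Residuals of y after least-squares regression on X plus an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

r_raw = np.corrcoef(m1, m2)[0, 1]
r_partial = np.corrcoef(residual(m1, pcs), residual(m2, pcs))[0, 1]
print(f"raw r = {r_raw:.2f}, partial r = {r_partial:.2f}")
```

A large drop from the raw to the partial correlation flags the pair as transcriptionally mediated, whereas a persisting correlation points to post-transcriptional regulation, mirroring the classification described above.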
The approaches developed, implemented, and investigated in this thesis open novel ways to jointly study metabolomics and transcriptomics data as well as to place metabolic profiles in the network context. The results from these approaches have the potential to provide further insights into the regulatory machinery in a biological system.
More than a billion people rely on water from rivers sourced in High Mountain Asia (HMA), a significant portion of which is derived from snow and glacier melt. Rural communities are heavily dependent on the consistency of runoff, and are highly vulnerable to shifts in their local environment brought on by climate change. Despite this dependence, the impacts of climate change in HMA remain poorly constrained due to poor process understanding, complex terrain, and insufficiently dense in-situ measurements.
HMA's glaciers contain more frozen water than any region outside of the poles. Their extensive retreat is a highly visible and much studied marker of regional and global climate change. However, in many catchments, snow and snowmelt represent a much larger fraction of the yearly water budget than glacial meltwaters. Despite their importance, climate-related changes in HMA's snow resources have not been well studied.
Changes in the volume and distribution of snowpack have complex and extensive impacts on both local and global climates. Eurasian snow cover has been shown to impact the strength and direction of the Indian Summer Monsoon -- which is responsible for much of the precipitation over the Indian Subcontinent -- by modulating earth-surface heating. Shifts in the timing of snowmelt have been shown to limit the productivity of major rangelands, reduce streamflow, modify sediment transport, and impact the spread of vector-borne diseases. However, a large-scale regional study of climate impacts on snow resources had yet to be undertaken.
Passive Microwave (PM) remote sensing is a well-established empirical method of studying snow resources over large areas. Since 1987, there have been consistent daily global PM measurements which can be used to derive an estimate of snow depth, and hence snow-water equivalent (SWE) -- the amount of water stored in snowpack. The SWE estimation algorithms were originally developed for flat and even terrain -- such as the Russian and Canadian Arctic -- and have rarely been used in complex terrain such as HMA.
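The empirical retrieval described above is typically based on a brightness-temperature difference between microwave channels. The sketch below shows a Chang-type estimator; the 4.8 mm/K coefficient and the 19/37 GHz channel choice are textbook assumptions for flat terrain, not the specific algorithm evaluated in the thesis.

```python
def swe_chang(tb19h, tb37h, coeff=4.8):
    """Chang-type passive-microwave SWE retrieval (mm).

    Deeper dry snow scatters more 37 GHz radiation than 19 GHz, so SWE is
    taken proportional to the brightness-temperature difference between the
    two horizontally polarised channels. The 4.8 mm/K coefficient is the
    classic flat-terrain calibration; operational retrievals adjust it for
    forest cover and snow density.
    """
    diff = tb19h - tb37h
    return max(0.0, coeff * diff)   # negative differences imply no dry snow

swe = swe_chang(250.0, 230.0)       # 20 K difference -> 96 mm SWE
```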
This dissertation first examines factors present in HMA that could impact the reliability of SWE estimates. Forest cover, absolute snow depth, long-term average wind speeds, and hillslope angle were found to be the strongest controls on SWE measurement reliability. While forest density and snow depth are factors accounted for in modern SWE retrieval algorithms, wind speed and hillslope angle are not. Despite uncertainty in absolute SWE measurements and differences in the magnitude of SWE retrievals between sensors, single-instrument SWE time series were found to be internally consistent and suitable for trend analysis.
Building on this finding, this dissertation tracks changes in SWE across HMA using a statistical decomposition technique. An aggregate decrease in SWE was found (10.6 mm/yr), despite large spatial and seasonal heterogeneities. Winter SWE increased in almost half of HMA, despite general negative trends throughout the rest of the year. The elevation distribution of these negative trends indicates that while changes in SWE have likely impacted glaciers in the region, climate change impacts on these two pieces of the cryosphere are somewhat distinct.
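The statistical decomposition used in the thesis is not detailed here; as a minimal illustration of how an aggregate SWE trend such as -10.6 mm/yr can be estimated, the following sketch fits an ordinary least-squares slope to a synthetic annual-mean series (all numbers are invented for illustration).

```python
def ols_slope(y):
    """Least-squares slope of y against the time index 0..n-1."""
    n = len(y)
    xm = (n - 1) / 2
    ym = sum(y) / n
    num = sum((i - xm) * (yi - ym) for i, yi in enumerate(y))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

# synthetic annual-mean SWE series (mm) declining by 10 mm per year
swe = [900.0 - 10.0 * t for t in range(30)]
trend = ols_slope(swe)              # mm/yr
```

In practice the seasonal cycle would first be removed (e.g. as anomalies from a monthly climatology) before estimating such a trend.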
Following the discussion of relative changes in SWE, this dissertation explores changes in the timing of the snowmelt season in HMA using a newly developed algorithm. The algorithm is shown to accurately track the onset and end of the snowmelt season (70% within 5 days of a control dataset, 89% within 10). Using a 29-year time series, changes in the onset, end, and duration of snowmelt are examined. While nearly the entirety of HMA has experienced an earlier end to the snowmelt season, large regions of HMA have seen a later start to the snowmelt season. Snowmelt periods have also decreased in almost all of HMA, indicating that the snowmelt season is generally shortening and ending earlier across HMA.
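Agreement statistics of the form quoted above (70% within 5 days, 89% within 10) can be computed as the fraction of melt-onset dates falling within a tolerance of a control dataset; the day-of-year values below are toy numbers, not the thesis's data.

```python
def within_tolerance_fraction(detected, control, tol_days):
    """Fraction of detected melt-onset dates within tol_days of the control."""
    pairs = [(d, c) for d, c in zip(detected, control)
             if d is not None and c is not None]   # skip missing retrievals
    hits = sum(1 for d, c in pairs if abs(d - c) <= tol_days)
    return hits / len(pairs)

# day-of-year melt onsets: algorithm output vs. a control product
detected = [95, 101, 88, 110, 99, 120, 85, 102, 97, 90]
control  = [97, 100, 92, 104, 99, 135, 86, 101, 96, 91]

frac5 = within_tolerance_fraction(detected, control, 5)    # 8 of 10 pixels
frac10 = within_tolerance_fraction(detected, control, 10)  # 9 of 10 pixels
```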
By examining shifts in both the spatio-temporal distribution of SWE and the timing of the snowmelt season across HMA, we provide a detailed accounting of changes in HMA's snow resources. The overall trend in HMA is towards less SWE storage and a shorter snowmelt season. However, long-term and regional trends conceal distinct seasonal, temporal, and spatial heterogeneity, indicating that changes in snow resources are strongly controlled by local climate and topography, and that inter-annual variability plays a significant role in HMA's snow regime.
In this thesis, deficits in theory of mind (ToM) and executive function (EF) were examined in tandem and separately as risk factors for conduct problems, including different forms and functions of aggressive behavior. All three reported studies and the additional analyses were based on a large community sample of N = 1,657 children, including three waves of a longitudinal study covering middle childhood and the transition to early adolescence (range 6 to 13 years) over a total of about three years. All data were analyzed with structural equation modeling.
Altogether, the results of all the conducted studies in this thesis extend previous research and confirm the propositions of the SIP model (Crick & Dodge, 1994) and of the amygdala theory of violent behavior (e.g., Blair et al., 2014), among other accounts. Considering the three main research questions, the results of the thesis suggest first that deficits in ToM are a risk factor for relational and physical aggression from a mean age of 8 to 11 years under the control of stable between-person differences in aggression. In addition, earlier relationally aggressive behavior predicts later deficits in ToM in this age range, which confirms transactional relations between deficits in ToM and aggressive behavior in children (Crick & Dodge, 1994). Further, deficits in ToM seem to be a risk factor for parent-rated conduct problems cross-sectionally in an age range from 9 to 13 years. Second, deficits in cool EF are a risk factor for later physical, relational, and reactive aggression but not for proactive aggression over a course of three years from middle childhood to early adolescence. Habitual anger seems to mediate the relation between cool EF and physical, and as a trend also relational, aggression. Deficits in emotional and inhibitory control and planning have a direct effect on the individual level of conduct problems under the control of interindividual differences in conduct problems at a mean age of 8 years, but not on the trajectory of conduct problems over the course from age 8 to 11. Third, when deficits in cool EF and ToM are studied in tandem cross-sectionally at the transition from middle childhood to early adolescence, deficits in cool EF seem to play only an indirect role through deficits in ToM as a risk factor for conduct problems. Finally, all results hold equally for females and males in the conducted studies.
The results of this thesis emphasize the need to intervene in the transactional processes between deficits in ToM and in EF and conduct problems, including different forms and functions of aggression, particularly in the socially sensitive period from middle and late childhood to early adolescence.
Poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) ferroelectric thin films of different molar ratios have been studied with regard to data memory applications. To this end, films with thicknesses of 200 nm and below have been spin-coated from solution. Observations gained from single layers have been extended to multilayer capacitors and three-terminal transistor devices.
Besides conventional hysteresis measurements, the measurement of dielectric non-linearities has been used as a main tool of characterisation. Being a very sensitive and non-destructive method, non-linearity measurements are well suited for polarisation readout and property studies. Samples have been excited using a high quality, single-frequency sinusoidal voltage with an amplitude significantly smaller than the coercive field of the samples. The response was then measured at the excitation frequency and its higher harmonics. Using the measurement results, the linear and non-linear dielectric permittivities ɛ₁, ɛ₂ and ɛ₃ have been determined. The permittivities have been used to derive the temperature-dependent polarisation behaviour as well as the polarisation state and the order of the phase transitions.
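The harmonic analysis can be illustrated with a short numerical sketch: assuming a polarisation response P = ɛ₀(ɛ₁E + ɛ₂E² + ɛ₃E³) to a sub-coercive sinusoidal field, projecting the response onto the excitation frequency and its harmonics recovers the linear and non-linear permittivities. All permittivity values are illustrative and the field amplitude is normalised; this is a didactic reconstruction, not the measurement procedure of the thesis.

```python
import math

eps0 = 8.854e-12                       # vacuum permittivity (F/m)
eps1, eps2, eps3 = 12.0, 0.8, 0.05     # illustrative permittivities
E0 = 1.0                               # normalised sub-coercive amplitude

N = 4096
ts = [2 * math.pi * i / N for i in range(N)]    # one excitation period
E = [E0 * math.sin(t) for t in ts]
P = [eps0 * (eps1 * e + eps2 * e ** 2 + eps3 * e ** 3) for e in E]

def fourier(sig, k, fn):
    """Fourier coefficient of sig at harmonic k (basis fn = sin or cos)."""
    return (2.0 / N) * sum(s * fn(k * t) for s, t in zip(sig, ts))

# harmonic amplitudes of the polarisation response:
s1 = fourier(P, 1, math.sin)   # eps0*(eps1*E0 + (3/4)*eps3*E0**3)
c2 = fourier(P, 2, math.cos)   # -eps0*eps2*E0**2/2  (from sin^2 identity)
s3 = fourier(P, 3, math.sin)   # -eps0*eps3*E0**3/4  (from sin^3 identity)

# invert the harmonic relations to recover the permittivities
eps3_rec = -4.0 * s3 / (eps0 * E0 ** 3)
eps2_rec = -2.0 * c2 / (eps0 * E0 ** 2)
eps1_rec = (s1 / eps0 - 0.75 * eps3_rec * E0 ** 3) / E0
```

The second harmonic appears only when the polarisation state breaks the symmetry of the response, which is why ɛ₂ serves as a non-destructive probe of the remanent polarisation.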
The coercive field in VDF-TrFE copolymers is high compared to that of their ceramic competitors. Therefore, the film thickness had to be reduced significantly. Considering a switching voltage of 5 V and a coercive field of 50 MV/m, the film thickness has to be 100 nm or below. If the thickness becomes substantially smaller than the other dimensions, surface and interface layer effects become more pronounced. For thicker films of P(VDF-TrFE) with a molar fraction of 56/44, a second-order phase transition without a thermal hysteresis for an ɛ₁(T) temperature cycle has been predicted and observed. This, however, could not be confirmed by the measurements of thinner films. A shift of transition temperatures as well as a temperature-independent, non-switchable polarisation and a thermal hysteresis have been observed for P(VDF-TrFE) 56/44. The impact of static electric fields on the polarisation and the phase transition has therefore been studied and simulated, showing that all aforementioned phenomena, including a linear temperature dependence of the polarisation, might originate from intrinsic electric fields.
In further experiments, the knowledge gained from single-layer capacitors has been extended to bilayer copolymer thin films of different molar composition. Bilayers have been deposited by successive cycles of spin coating from solution. Single layers and their bilayer combination have been studied individually in order to verify the layers' stability. The individual layers have been found to be physically stable. While the bilayers reproduced the main ɛ₁(T) properties of the single layers qualitatively, the quantitative values could not be explained by a simple serial connection of capacitors. Furthermore, a linear behaviour of the polarisation throughout the measured temperature range has been observed. This was found to match the behaviour predicted for a constant electric field.
Retention time is an important quantity for memory applications. Hence, the retention behaviour of VDF-TrFE copolymer thin films has been determined using dielectric non-linearities. The polarisation loss in P(VDF-TrFE) poled samples has been found to be less than 20% if recorded over several days. The loss increases significantly if the samples have been poled with lower amplitudes, causing an unsaturated polarisation. The main loss was attributed to injected charges. Additionally, measurements of dielectric non-linearities have been proven to be a sensitive and non-destructive tool to measure the retention behaviour.
Finally, a ferroelectric field-effect transistor using mainly organic materials (FerrOFET) has been successfully fabricated and studied. Dinaphtho[2,3-b:2',3'-f]thieno[3,2-b]thiophene (DNTT) has proven to be a stable, suitable organic semiconductor for building ferroelectric memory devices. Furthermore, an oxidised aluminium bottom electrode and additional dielectric layers, i.e. parylene C, have proven to reduce the leakage current and therefore enhance the performance significantly.
Business process management is an acknowledged asset for running an organization in a productive and sustainable way. One of the most important aspects of business process management, occurring on a daily basis at all levels, is decision making. In recent years, a number of decision management frameworks have appeared in addition to existing business process management systems. More recently, Decision Model and Notation (DMN) was developed by the OMG consortium with the aim of complementing the widely used Business Process Model and Notation (BPMN). One of the reasons for the emergence of DMN is the increasing interest in the evolving paradigm known as the separation of concerns. This paradigm states that modeling decisions complementary to processes reduces process complexity by externalizing decision logic from process models and importing it into a dedicated decision model. Such an approach increases the agility of model design and execution. This provides organizations with the flexibility to adapt to the ever increasing rapid and dynamic changes in the business ecosystem. The research gap we identified is that the separation of concerns recommended by DMN prescribes the externalization of the decision logic of process models into one or more separate decision models, but it does not specify how this can be achieved.
The goal of this thesis is to overcome the presented gap by developing a framework for discovering decision models in a semi-automated way from information about existing process decision making. Thus, in this thesis we develop methodologies to extract decision models from: (1) control flow and data of process models that exist in enterprises; and (2) event logs recorded by enterprise information systems, encapsulating day-to-day operations. Furthermore, we provide an extension of the methodologies to discover decision models from event logs enriched with fuzziness, a tool dealing with partial knowledge of the process execution information. All the proposed techniques are implemented and evaluated in case studies using real-life and synthetic process models and event logs. The evaluation of these case studies shows that the proposed methodologies provide valid and accurate output decision models that can serve as blueprints for executing decisions complementary to process models. Thus, these methodologies are applicable in the real world and can be used, for example, for compliance checks, which could improve an organization's decision making and hence its overall performance.
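As a minimal illustration of decision-model discovery from an event log (the thesis's methodologies are far richer), the following sketch induces a DMN-style decision table by majority vote over observed input combinations; the log, attributes, and outcomes are invented examples.

```python
from collections import Counter

# toy event log: each case records input attributes and the routed decision
log = [
    {"amount": "low",  "vip": "no",  "decision": "auto_approve"},
    {"amount": "low",  "vip": "yes", "decision": "auto_approve"},
    {"amount": "high", "vip": "no",  "decision": "manual_review"},
    {"amount": "high", "vip": "yes", "decision": "auto_approve"},
    {"amount": "high", "vip": "no",  "decision": "manual_review"},
]

def discover_decision_table(log, inputs, output):
    """Derive a DMN-style decision table from an event log: one rule per
    observed input combination, choosing the most frequent outcome so that
    occasional noise in the log is absorbed by majority vote."""
    votes = {}
    for case in log:
        key = tuple(case[a] for a in inputs)
        votes.setdefault(key, Counter())[case[output]] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in votes.items()}

table = discover_decision_table(log, ["amount", "vip"], "decision")
```

Each key of `table` corresponds to one row of a discovered decision table that could complement the process model.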
Nanophotonics is the field of science and engineering aimed at studying the light-matter interactions on the nanoscale. One of the key aspects in studying such optics at the nanoscale is the ability to assemble the material components in a spatially controlled manner. In this work, DNA origami nanostructures were used to self-assemble dye molecules and DNA coated plasmonic nanoparticles. Optical properties of dye nanoarrays, where the dyes were arranged at distances where they can interact by Förster resonance energy transfer (FRET), were systematically studied according to the size and arrangement of the dyes using fluorescein (FAM) as the donor and cyanine 3 (Cy 3) as the acceptor. The optimized design, based on steady-state and time-resolved fluorometry, was utilized in developing a ratiometric pH sensor with pH-inert coumarin 343 (C343) as the donor and pH-sensitive FAM as the acceptor. This design was further applied in developing a ratiometric toxin sensor, where the donor C343 is unresponsive and FAM is responsive to thioacetamide (TAA) which is a well-known hepatotoxin. The results indicate that the sensitivity of the ratiometric sensor can be improved by simply arranging the dyes into a well-defined array. The ability to assemble multiple fluorophores without dye-dye aggregation also provides a strategy to amplify the signal measured from a fluorescent reporter, and was utilized here to develop a reporter for sensing oligonucleotides. By incorporating target capturing sequences and multiple fluorophores (ATTO 647N dye molecules), a reporter for microbead-based assay for non-amplified target oligonucleotide sensing was developed. Analysis of the assay using VideoScan, a fluorescence microscope-based technology capable of conducting multiplex analysis, showed the DNA origami nanostructure based reporter to have a lower limit of detection than a single stranded DNA reporter. 
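The distance dependence underlying such FRET array designs follows the standard Förster relation; the sketch below uses a 5 nm Förster radius as a typical order-of-magnitude assumption, not a measured value for the dye pairs studied in the thesis.

```python
def fret_efficiency(r_nm, r0_nm=5.0):
    """Förster transfer efficiency as a function of donor-acceptor distance.

    E = 1 / (1 + (r/R0)^6), where R0 is the Förster radius at which
    transfer is 50% efficient. The steep sixth-power dependence is why
    placing dyes at defined distances on a DNA origami template gives
    such precise control over the FRET signal.
    """
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)
```

For example, a pair at half the Förster radius transfers almost all excitation energy, while a pair at twice the radius transfers almost none.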
Lastly, plasmonic nanostructures were assembled on DNA origami nanostructures as substrates to study interesting optical behaviors of molecules in the near-field. Specifically, DNA coated gold nanoparticles, silver nanoparticles, and gold nanorods, were placed on the DNA origami nanostructure aiming to study surface-enhanced fluorescence (SEF) and surface-enhanced Raman scattering (SERS) of molecules placed in the hotspot of coupled plasmonic structures.
The Sun is the nearest star to the Earth. It consists of an interior and an atmosphere. The convection zone is the outermost layer of the solar interior. A flux rope may emerge as a coherent structure from the convection zone into the solar atmosphere or be formed by magnetic reconnection in the atmosphere. A flux rope is a bundle of magnetic field lines twisting around an axis field line, creating a helical shape by which dense filament material can be supported against gravity. The flux rope is also considered as the key structure of the most energetic phenomena in the solar system, such as coronal mass ejections (CMEs) and flares. These magnetic flux ropes can produce severe geomagnetic storms. In particular, to improve the ability to forecast space weather, it is important to enrich our knowledge about the dynamic formation of flux ropes and the underlying physical mechanisms that initiate their eruption, such as a CME.
A confined eruption consists of a filament eruption and usually an associated flare, but does not evolve into a CME; rather, the moving plasma is halted in the solar corona and usually seen to fall back. The first detailed observations of a confined filament eruption were obtained on 2002 May 27 by the TRACE satellite in the 195 Å band. In Chapter 3, we therefore focus on a flux rope instability model. A twisted flux rope can become unstable by entering the kink instability regime. We show that the kink instability, which occurs if the twist of a flux rope exceeds a critical value, is capable of initiating an eruption. This model is tested against the well-observed confined eruption of 2002 May 27 in a parametric magnetohydrodynamic (MHD) simulation study that comprises all phases of the event. Very good agreement with the essential observed properties is obtained, except for a relatively poor match of the initial filament height.
Therefore, in Chapter 4, we submerge the center point of the flux rope deeper below the photosphere to obtain a flatter coronal rope section and a better matching with the initial height profile of the erupting filament. This implies a more realistic inclusion of the photospheric line tying. All basic assumptions and the other parameter settings are kept the same as in Chapter 3. This complement of the parametric study shows that the flux rope instability model can yield an even better match with the observational data. We also focus in Chapters 3 and 4 on the magnetic reconnection during the confined eruption, demonstrating that it occurs in two distinct locations and phases that correspond to the observed brightenings and changes of topology, and consider the fate of the erupting flux, which can reform a (less twisted) flux rope.
The Sun also produces series of homologous eruptions, i.e. eruptions which occur repetitively in the same active region and are of similar morphology. Therefore, in Chapter 5, we employ the reformed flux rope as a new initial condition, to investigate the possibility of subsequent homologous eruptions. Free magnetic energy is built up by imposing motions in the bottom boundary, such as converging motions, leading to flux cancellation. We apply converging motions in the sunspot area, such that a small part of the flux from the sunspots with different polarities is transported toward the polarity inversion line (PIL) and cancels with each other. The reconnection associated with the cancellation process forms more helical magnetic flux around the reformed flux rope, which leads to a second and a third eruption. In this study, we obtain the first MHD simulation results of a homologous sequence of eruptions that show a transition from a confined to two ejective eruptions, based on the reformation of a flux rope after each eruption.
Microbial processing of organic matter (OM) in the freshwater biosphere is a key component of global biogeochemical cycles. Freshwaters receive and process considerable amounts of leaf OM from their terrestrial landscape. These terrestrial subsidies provide an essential source of energy and nutrients to the aquatic environment as a function of heterotrophic processing by fungi and bacteria. Particularly in freshwaters with low in-situ primary production from algae (microalgae, cyanobacteria), microbial turnover of leaf OM significantly contributes to the productivity and functioning of freshwater ecosystems and, not least, to their role in global carbon cycling.
Based on differences in their chemical composition, it is believed that leaf OM is less bioavailable to microbial heterotrophs than OM photosynthetically produced by algae. Especially particulate leaf OM, consisting predominantly of structurally complex and aromatic polymers, is assumed to be highly resistant to enzymatic breakdown by microbial heterotrophs. However, recent research has demonstrated that OM produced by algae promotes the heterotrophic breakdown of leaf OM in aquatic ecosystems, with profound consequences for the metabolism of leaf carbon (C) within microbial food webs. In my thesis, I aimed at investigating the underlying mechanisms of this so-called priming effect of algal OM on the use of leaf C in natural microbial communities, focusing on fungi and bacteria.
The work in my thesis underlines that algal OM provides highly bioavailable compounds to the microbial community that are quickly assimilated by bacteria (Paper II). The substrate composition of OM pools determines the proportion of fungi and bacteria within the microbial community (Paper I). Thereby, the fraction of algal OM in the aquatic OM pool stimulates the activity, and hence contribution, of bacterial communities to leaf C turnover by providing an essential energy and nutrient source for the assimilation of the structurally complex leaf OM substrate. In contrast, the assimilation of algal OM remains limited for fungal communities as a function of nutrient competition between fungi and bacteria (Paper I, II). In addition, the results provide evidence that environmental conditions determine the strength of interactions between microalgae and heterotrophic bacteria during leaf OM decomposition (Paper I, III). However, the stimulatory effect of algal photoautotrophic activity on leaf C turnover remained significant even under highly dynamic environmental conditions, highlighting its functional role for ecosystem processes (Paper III).
The results of my thesis provide insights into the mechanisms by which algae affect the microbial turnover of leaf C in freshwaters. This in turn contributes to a better understanding of the function of algae in freshwater biogeochemical cycles, especially with regard to their interaction with the heterotrophic community.
The last years have shown an increasing sophistication of attacks against enterprises. Traditional security solutions like firewalls, anti-virus systems and generally Intrusion Detection Systems (IDSs) are no longer sufficient to protect an enterprise against these advanced attacks. One popular approach to tackle this issue is to collect and analyze events generated across the IT landscape of an enterprise. This task is achieved by the utilization of Security Information and Event Management (SIEM) systems. However, the majority of the currently existing SIEM solutions is not capable of handling the massive volume of data and the diversity of event representations. Even if these solutions can collect the data at a central place, they are neither able to extract all relevant information from the events nor correlate events across various sources. Hence, only rather simple attacks are detected, whereas complex attacks, consisting of multiple stages, remain undetected. Undoubtedly, security operators of large enterprises are faced with a typical Big Data problem.
In this thesis, we propose and implement a prototypical SIEM system named Real-Time Event Analysis and Monitoring System (REAMS) that addresses the Big Data challenges of event data with common paradigms, such as data normalization, multi-threading, in-memory storage, and distributed processing. In particular, a mostly stream-based event processing workflow is proposed that collects, normalizes, persists and analyzes events in near real-time. In this regard, we have made various contributions in the SIEM context. First, we propose a high-performance normalization algorithm that is highly parallelized across threads and distributed across nodes. Second, we persist normalized events in an in-memory database for fast querying and correlation in the context of attack detection. Third, we propose various analysis layers, such as anomaly- and signature-based detection, that run on top of the normalized and correlated events. As a result, we demonstrate our capabilities to detect previously known as well as unknown attack patterns. Lastly, we have investigated the integration of cyber threat intelligence (CTI) into the analytical process, for instance, for correlating monitored user accounts with previously collected public identity leaks to identify possible compromised user accounts.
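A minimal sketch of such a stream-based normalization stage, assuming regex-based parsers and a thread pool; the log patterns and the target schema below are invented for illustration and are not those of REAMS.

```python
import re
from concurrent.futures import ThreadPoolExecutor

# hypothetical raw-event patterns; a real SIEM normalizer covers many formats
PATTERNS = [
    (re.compile(r"(?P<ts>\S+ \S+ \S+) sshd\[\d+\]: Failed password "
                r"for (?P<user>\S+) from (?P<src>\S+)"), "auth_failure"),
    (re.compile(r"(?P<ts>\S+ \S+ \S+) sshd\[\d+\]: Accepted password "
                r"for (?P<user>\S+) from (?P<src>\S+)"), "auth_success"),
]

def normalize(raw):
    """Map a raw log line onto a common event schema for correlation."""
    for pattern, event_type in PATTERNS:
        m = pattern.search(raw)
        if m:
            return {"type": event_type, **m.groupdict()}
    return {"type": "unparsed", "raw": raw}   # keep unknown events for audit

lines = [
    "Jan 12 10:01:02 sshd[811]: Failed password for root from 10.0.0.5",
    "Jan 12 10:01:09 sshd[811]: Accepted password for alice from 10.0.0.6",
]

# normalization parallelised across worker threads, as in a stream pipeline
with ThreadPoolExecutor(max_workers=4) as pool:
    events = list(pool.map(normalize, lines))
```

Once events share a schema, correlation rules (e.g. many `auth_failure` events followed by an `auth_success` from the same source) can run across heterogeneous log sources.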
In summary, we show that a SIEM system can indeed monitor a large enterprise environment with a massive load of incoming events. As a result, complex attacks spanning across the whole network can be uncovered and mitigated, which is an advancement in comparison to existing SIEM systems on the market.
This dissertation consists of four self-contained papers that deal with the implications of financial market imperfections and heterogeneity. The analysis mainly relates to the class of incomplete-markets models but covers different research topics.
The first paper deals with the distributional effects of financial integration for developing countries. Based on a simple heterogeneous-agent approach, it is shown that capital owners experience large welfare losses while only workers moderately gain due to higher wages. The large welfare losses for capital owners contrast with the small average welfare gains from representative-agent economies and indicate that a strong opposition against capital market opening has to be expected.
The second paper considers the puzzling observation of capital flows from poor to rich countries and the accompanying changes in domestic economic development. Motivated by the mixed results from the literature, we employ an incomplete-markets model with different types of idiosyncratic risk and borrowing constraints. Based on different scenarios, we analyze under what conditions the presence of financial market imperfections contributes to explain the empirical findings and how the conditions may change with different model assumptions.
The third paper deals with the interplay of incomplete information and financial market imperfections in an incomplete-markets economy. In particular, it analyzes the impact of incomplete information about idiosyncratic income shocks on aggregate saving. The results show that the effect of incomplete information is not only quantitatively substantial but also qualitatively ambiguous and varies with the influence of the income risk and the borrowing constraint.
Finally, the fourth paper analyzes the influence of different types of fiscal rules on the response of key macroeconomic variables to a government spending shock. We find that a strong temporary increase in public debt contributes to stabilizing consumption and leisure in the first periods following the change in government spending, whereas a non-debt-intensive fiscal rule leads to a faster recovery of consumption, leisure, capital and output in later periods. Regarding optimal debt policy, we find that a debt-intensive fiscal rule leads to the largest aggregate welfare benefit and that the individual welfare gain is particularly high for wealth-poor agents.
This dissertation consists of five self-contained essays, addressing different aspects of career choices, especially the choice of entrepreneurship, under risk and ambiguity. In Chapter 2, the first essay develops an occupational choice model with boundedly rational agents, who lack information, receive noisy feedback, and are restricted in their decisions by their personality, to analyze and explain puzzling empirical evidence on entrepreneurial decision processes. In the second essay, in Chapter 3, I contribute to the literature on entrepreneurial choice by constructing a general career choice model on the basis of the assumption that outcomes are partially ambiguous. The third essay, in Chapter 4, theoretically and empirically analyzes the impact of media on career choices, where information on entrepreneurship provided by the media is treated as an informational shock affecting prior beliefs. The fourth essay, presented in Chapter 5, contains an empirical analysis of the effects of cyclical macro variables (GDP and unemployment) on innovative start-ups in Germany. In the fifth, and last, essay in Chapter 6, we examine whether information on personality is useful for advice, using the example of career advice.
Eta Carinae (2018)
The exceptional binary star Eta Carinae has been fascinating scientists and the people of the Southern hemisphere alike for hundreds of years. It survived an enormous outbreak, comparable in energy to a supernova, and for a short period became the brightest star of the night sky. From observations spanning the radio regime to X-rays, the system's characteristics and its emission at photon energies up to ~ 50 keV are well studied today. The binary is composed of two massive stars of ~ 30 and ~ 100 solar masses. Each star drives a strong stellar wind that continuously carries away a fraction of its mass. The collision of these winds leads to a shock on each side of the encounter. In the wind-wind-collision region, plasma gets heated when it is overrun by the shocks. Part of the emission seen in X-rays can be attributed to this plasma. Above ~ 50 keV the emission is no longer of thermal origin: the required plasma temperature exceeds the available mechanical energy input of the stellar winds. In contrast to its long observational history at thermal energies, observational evidence of Eta Carinae's non-thermal emission has only recently built up. In high-energy gamma-rays, Eta Carinae is the only binary of its kind that has been detected unambiguously. Its energy spectrum reaches up to ~ 100 GeV, a regime where satellite-based gamma-ray experiments run out of statistics. Ground-based gamma-ray experiments have the advantage of large photon collection areas. H.E.S.S. is the only gamma-ray experiment located in the Southern hemisphere and thus able to observe Eta Carinae in this energy range. H.E.S.S. measures gamma-rays via the electromagnetic showers of particles that very-high-energy gamma-rays initiate in the atmosphere. The main challenge in observations of Eta Carinae with H.E.S.S. is the UV emission of the Carina nebula, which leads to a background that is up to 10 times stronger than usual for H.E.S.S.
This thesis presents the first detection of a colliding-wind binary in very-high-energy gamma-rays and documents the studies that led to it. The differential gamma-ray energy spectrum of Eta Carinae is measured up to 700 GeV. A hadronic and leptonic origin of the gamma-ray emission is discussed and based on the comparison of cooling times a hadronic scenario is favoured.
This doctoral dissertation aims at elucidating the development of hot and cool executive functions (EF) in middle childhood and at gaining insight into their role in childhood overweight. The dissertation is based on three empirical studies that have been published in peer-reviewed journals. Data from a large 3-year longitudinal study (the “PIER study”) were used.
The findings presented in the dissertation demonstrated that both hot and cool EF abilities increase during middle childhood. They also supported the notion that hot and cool EF facets are distinguishable from each other in middle childhood and that they have distinct developmental trajectories and different predictors.
Evidence was found for associations of hot and cool EF with body weight in middle childhood, which is in line with the notion that they might play a role in the self-regulation of eating and the multifactorial etiology of childhood overweight.
The topic of this thesis is the experimental investigation of evaporating thin films on planar solid substrates and of the enrichment, crystal growth, and Marangoni flows near the three-phase contact line in the case of partially wetting mixtures of volatile and non-volatile liquids. In short, it deals with the properties of planar liquid films and with those of thin liquid sections near the three-phase contact line. In both cases the liquid continuously loses one component by evaporation. One topic is the rupture behavior of ultra-thin films of binary mixtures of a volatile solvent and a non-volatile solute. It is studied how the thickness at which the film ruptures is related to solute crystallization at the liquid/substrate interface as soon as the solute reaches supersaturation. A universal relation between the rupture thickness and the saturation behaviour is presented. The second research subject is individual nanoparticles embedded in molecularly thin films on planar substrates. It is found that the nanoparticles cause an unexpectedly large distortion of the film surface (a meniscus). This distortion can be measured quantitatively by conventional reflective microscopy, although the nanoparticles are much smaller than the Rayleigh diffraction limit. Investigations with binary mixtures of volatile solvents and non-volatile solutes (polymers) aim at a better understanding and prediction of the final solute coverage, the time-resolved film thinning, the time-resolved solvent evaporation, and the evolution of the solute concentration within the thinning film. A quantitative theoretical description of the experimental findings is derived. Experiments with completely miscible binary mixtures of volatile liquids, which individually form continuous planar films, show unexpectedly that films of mixtures are not necessarily continuous and planar. Rather, they may form surface undulations or even rupture. This is explained by surface Marangoni flows. A new method for the exceptionally fast fabrication (mm/min) of ultralong aligned diphenylalanine single crystals via dip casting is presented. It is shown how the specific evaporation conditions at the three-phase line can be used for a controlled peptide crystal growth process. It is further demonstrated how confinement inside a small capillary affects the peptide crystallization and how this can be understood (and used).
Arctic warming has implications for the functioning of terrestrial Arctic ecosystems, the global climate, and the socioeconomic systems of northern communities. A research gap exists in high-spatial-resolution monitoring and understanding of the seasonality of permafrost degradation, spring snowmelt, and vegetation phenology. This thesis explores the diversity and utility of dense TerraSAR-X (TSX) X-band time series for monitoring ice-rich riverbank erosion, snowmelt, and phenology of Arctic vegetation at long-term study sites in the central Lena Delta, Russia, and on Qikiqtaruk (Herschel Island), Canada. The thesis addresses the following three research questions:
• Are TSX time series capable of monitoring the dynamics of rapid permafrost degradation in ice-rich permafrost on an intra-seasonal scale, and can these datasets, in combination with climate data, identify the climatic drivers of permafrost degradation?
• Can multi-pass and multi-polarized TSX time series adequately monitor seasonal snow cover and snowmelt in small Arctic catchments, and how do they perform compared to optical satellite data and field-based measurements?
• Do TSX time series reflect the phenology of Arctic vegetation, and how does the recorded signal compare to in-situ greenness data from RGB time-lapse cameras and vegetation height from field surveys?
To answer these research questions, three years of TSX backscatter data — from 2013 to 2015 for the Lena Delta study site and from 2015 to 2017 for the Qikiqtaruk study site — were analysed quantitatively and qualitatively, complemented by optical satellite data and in-situ time-lapse imagery.
The dynamics of intra-seasonal ice-rich riverbank erosion in the central Lena Delta, Russia, were quantified using TSX backscatter data at 2.4 m spatial resolution in HH polarization and validated with 0.5 m spatial resolution optical satellite data and field-based time-lapse camera data. Cliff-top lines were automatically extracted from TSX intensity images using threshold-based segmentation and vectorization, and combined in a geoinformation system with manually digitized cliff-top lines from the optical satellite data and rates of erosion extracted from the time-lapse cameras. The results suggest that the cliff top eroded at a constant rate throughout the entire erosional season. Linear mixed models confirmed that erosion was coupled with air temperature and precipitation at an annual scale, while seasonal fluctuations did not influence 22-day erosion rates. The results highlight the potential of HH-polarized X-band backscatter data for high-temporal-resolution monitoring of rapid permafrost degradation.
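The threshold-based segmentation and cliff-top-line extraction described above can be sketched in a few lines. This is an illustrative simplification under stated assumptions (a fixed intensity threshold and land assumed brighter than water), not the processing chain used in the thesis:

```python
import numpy as np

def extract_cliff_line(intensity, threshold):
    """Segment a backscatter image by a simple intensity threshold and
    return, for each image column, the row index of the first land pixel
    from the top (a proxy for the cliff-top line), or -1 if none is found.
    """
    land = intensity >= threshold          # boolean land mask
    rows, cols = land.shape
    line = np.full(cols, -1, dtype=int)
    for c in range(cols):
        idx = np.flatnonzero(land[:, c])
        if idx.size:
            line[c] = idx[0]               # first land pixel from the top
    return line

# Toy scene: bright "land" from row 3 downward in every column.
img = np.zeros((6, 4))
img[3:, :] = 1.0
print(extract_cliff_line(img, 0.5))  # -> [3 3 3 3]
```

Comparing the extracted line between acquisition dates then yields per-column retreat rates, which can be related to climate variables as in the linear mixed models above.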
The distinct signature of wet snow in backscatter intensity images of TSX data was exploited to generate wet snow cover extent (SCE) maps of Qikiqtaruk at high temporal resolution. TSX SCE showed high similarity to Landsat 8-derived SCE when using cross-polarized VH data. Fractional snow cover (FSC) time series were extracted from the TSX and optical SCE maps and compared to FSC estimates from in-situ time-lapse imagery. The TSX products showed strong agreement with the in-situ data and significantly improved the temporal resolution compared to the Landsat 8 time series. The final combined FSC time series revealed two topography-dependent snowmelt patterns that corresponded to in-situ measurements. Additionally, TSX was able to detect snow patches later in the season than Landsat 8, underlining the advantage of TSX for the detection of old snow. The TSX-derived snow information provided valuable insights into snowmelt dynamics on Qikiqtaruk that were previously not available.
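Deriving a fractional snow cover value from a binary SCE map reduces, per scene, to the share of valid pixels classified as snow. A minimal sketch (an illustrative helper, not the thesis processing chain; the optional validity mask stands in for cloud or layover masking):

```python
import numpy as np

def fractional_snow_cover(sce_mask, valid_mask=None):
    """Fractional snow cover (FSC) from a binary snow-cover-extent map:
    the fraction of valid pixels classified as snow."""
    sce = np.asarray(sce_mask, dtype=bool)
    if valid_mask is None:
        valid = np.ones_like(sce)          # all pixels usable
    else:
        valid = np.asarray(valid_mask, dtype=bool)
    return sce[valid].sum() / valid.sum()

# 8x8 scene with the upper half snow-covered -> FSC = 0.5
scene = np.zeros((8, 8), dtype=bool)
scene[:4, :] = True
print(fractional_snow_cover(scene))  # -> 0.5
```

Computing this per acquisition date for each catchment gives the FSC time series compared against the in-situ camera estimates above.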
The sensitivity of TSX to vegetation structure associated with phenological changes was explored on Qikiqtaruk. Backscatter and coherence time series were compared to greenness data extracted from in-situ digital time-lapse cameras and to detailed vegetation parameters on 30 areas of interest. Supporting previous results, vegetation height corresponded to backscatter intensity in the co-polarized HH/VV channels at an incidence angle of 31°. The dry, tall-shrub-dominated ecological class showed increasing backscatter with increasing greenness when using the cross-polarized VH/HH channel at a 32° incidence angle, likely driven by volume scattering from emerging and expanding leaves. Ecological classes with more prostrate vegetation and higher bare-ground contributions showed decreasing backscatter trends over the growing season in the co-polarized VV/HH channels, likely a result of surface drying rather than a vegetation structure signal. The results from shrub-dominated areas are promising and provide a complementary data source for high-temporal-resolution monitoring of vegetation phenology.
Overall, this thesis demonstrates that dense TSX time series, optical remote sensing, and in-situ time-lapse data are complementary and can be used to monitor rapid and seasonal processes in Arctic landscapes at high spatial and temporal resolution.
There are numerous situations in which people ask for something or make a request, e.g. asking a favor, asking for help, or requesting compliance with specific norms. For this reason, how to ask for something so as to increase people's willingness to fulfill such requests is one of the most important questions for people working in fields as diverse as charitable giving, marketing, management, and policy making.
This dissertation consists of four chapters that deal with the effects of small changes in the decision-making environment on altruistic decision-making and compliance behavior. Most notably, written communication as an influencing factor is the focus of the first three chapters. The starting point was the question of how to phrase a request in order to maximize its chance of success (Chapter 1). The results of the first chapter gave rise to the ideas for the second and third chapters. Chapter 2 analyzes how communication by a neutral third party — i.e. a text from the experimenters that either reminds potential benefactors of their responsibility or highlights their freedom of choice — affects altruistic decision-making. Chapter 3 elaborates on the effect of thanking people in advance when asking them for help. Although not as closely related to the other chapters as the first three are to each other, the fourth chapter likewise deals with the question of how compliance (here: compliance with norms and rules) is affected by subtle manipulations of the environment in which decisions are made. This chapter analyzes the effect of default settings in a tax return on tax compliance.
To study the research questions outlined above, controlled experiments were conducted. Chapter 1, which analyzes the effect of text messages on the decision to give something to another person, employs a mini-dictator game. The recipient sends a free-form text message to the dictator before the latter makes a binary decision on whether or not to give part of his or her endowment to the recipient. We find that putting effort into the message, by writing a long note without spelling mistakes, increases dictators' willingness to give. Moreover, writing in a humorous way and mentioning reasons why the money is needed pays off. Furthermore, men and women seem to react differently to some message categories: only men react positively to efficiency arguments, while only women react to messages that emphasize the dictator's power and responsibility.
Building on this last result, Chapter 2 attempts to disentangle the effect of reminding potential benefactors of their responsibility for the potential beneficiary from the effect of highlighting their decision power and freedom of choice, by studying how two different texts affect giving in a dictator game. We find that only men react positively to a text that stresses their responsibility for the recipient, by giving more to him or her, whereas only women seem to react positively to a text that emphasizes their decision power and freedom of choice.
Chapter 3 focuses on the compliance with a request. In the experiment, participants are asked to provide a detailed answer to an open question. Compliance is measured by the effort participants spend on answering the question. The treatment variable is whether or not they see the text “thanks in advance.” We find that participants react negatively by putting less effort into complying with the request in response to the phrase “thanks in advance.”
Chapter 4 studies the effect of prefilled tax returns with mostly inaccurate default values on tax compliance. In a laboratory experiment, participants earn income by performing a real-effort task and must subsequently file a tax return, for three consecutive rounds. In the main treatment, the tax return is prefilled with a default value resulting from participants' own performance in previous rounds, which varies in its relative size. The results suggest that there is no lasting effect of a default value on tax honesty, neither for relatively low nor for relatively high defaults. However, participants who face a default that is lower than their true income in the first round evade significantly and substantially more taxes in that round than participants in the control treatment without a default.
Physical computing covers the design and realization of interactive objects and installations and allows learners to develop concrete, tangible products of the real world that arise from their imagination. This can be used in computer science education to provide learners with interesting and motivating access to the different topic areas of the subject in constructionist and creative learning environments. However, physical computing has so far mostly been taught, if at all, in afternoon clubs or other extracurricular settings. Thus, the majority of students have had no opportunity to design and create their own interactive objects in regular school lessons.
Despite its increasing popularity in schools as well, the topic has not yet been clearly and sufficiently characterized in the context of computer science education. The aim of this doctoral thesis is therefore to clarify physical computing from the perspective of computer science education and to adequately prepare the topic, both in terms of content and methodology, for secondary school teaching. For this purpose, teaching examples, activities, materials, and guidelines for classroom use are developed, implemented, and evaluated in schools.
In the theoretical part of the thesis, the topic is first examined from a technical point of view. A structured literature analysis shows that the basic concepts used in physical computing can be derived from embedded systems, which are at the core of a large field of different application areas and disciplines. Typical methods of physical computing in professional settings are analyzed and, from an educational perspective, elements suitable for computer science teaching in secondary schools are extracted, e.g. tinkering and prototyping. The investigation and classification of suitable tools for school teaching show that microcontrollers and mini-computers, often with extensions that greatly facilitate the handling of additional components, are particularly attractive tools for secondary education. Considering the perspectives of science, teachers, students, and society, general design principles as well as exemplary teaching approaches for school education and suitable learning materials are developed, and the design, production, and evaluation of a physical computing construction kit suitable for teaching are described.
In the practical part of this thesis, with “My Interactive Garden”, an exemplary approach to integrating physical computing into computer science teaching is tested and evaluated in different courses and refined, based on the findings, in a design-based research approach. In a series of workshops on physical computing, based on a concept for constructionist professional development developed specifically for this purpose, teachers are empowered and encouraged to develop and conduct physical computing lessons suited to their particular classroom settings. Based on their in-class experiences, a process model of physical computing teaching is derived. Interviews with these teachers illustrate that the benefits of physical computing, including the tangibility of crafted objects and creativity in the classroom, outweigh possible drawbacks such as longer preparation times, technical difficulties, or difficult assessment. Hurdles in the classroom are identified and possible solutions discussed.
Empirical investigations in the different settings reveal that “My Interactive Garden” and physical computing in general have a positive impact on, among other things, learner motivation, fun and interest in class, and perceived competencies.
Finally, the results from all evaluations are combined to evaluate the design principles for physical computing teaching and to provide a perspective on the development of decision-making aids for physical computing activities in school education.
Due to challenging population growth and environmental changes, a need arises for new routes to provide the chemicals required for human necessities. An effective solution discussed in this thesis is industrial heterogeneous catalysis. The development of an advanced industrial heterogeneous catalyst is investigated herein by considering porous carbon nanomaterials as supports and modifying their surface chemistry with heteroatoms. Such modifications showed a significant influence on the performance of the catalyst and provided deeper insight into the interaction between the surface structure of the catalyst and the surrounding phase. This thesis contributes to the few existing studies on the effect of heteroatoms on catalyst performance and emphasizes the importance of understanding surface functionalization of a catalyst in different phases (liquid and gaseous) and for different reactions (hydrogenolysis, oxidation, and hydrogenation/polymerization). Herein, the heteroatoms utilized for the modifications are hydrogen (H), oxygen (O), and nitrogen (N). The effect of the heteroatoms on the metal particle size, on the polarity of the support and the catalyst, on the catalytic performance (activity, selectivity, and stability), and on the interaction with the surrounding phase has been explored. First, hierarchical porous carbon nanomaterials functionalized with nitrogen are synthesized and applied as supports for nickel nanoparticles in the hydrogenolysis of kraft lignin in the liquid phase. This reaction has been performed in batch and flow reactors for three different catalysts: two of comparable hierarchical porosity, one modified with N and one not, and a third prepared from a commercial carbon support. The reaction products and analyses show that the catalysts with hierarchical porosity perform catalytically much better than the one based on a commercial carbon support with lower surface area.
Moreover, the modification with N-heteroatoms enhanced the catalytic performance: the nickel catalyst on heteroatom-modified porous carbon (Ni-NDC) performed best among the catalysts tested. In the flow reactor, Ni-NDC selectively degraded the ether bonds (β-O-4) in kraft lignin with an activity of 2.2 x 10^-4 mg lignin (mg Ni)^-1 s^-1 for 50 h at 350 °C and 3.5 mL min^-1 flow, providing ~99% conversion to shorter-chained chemicals (mainly guaiacol derivatives). The functionalization of the carbon surface was then further studied in the selective oxidation of glucose to gluconic acid, using <1 wt% of gold (Au) deposited on the previously mentioned synthesized carbon supports with different functionalities (Au-CGlucose, Au-CGlucose-H, Au-CGlucose-O, Au-CGlucoseamine). Except for Au-CGlucose-O, the catalysts achieved full glucose conversion within 40-120 min and 100% selectivity towards gluconic acid, with a maximum activity of 1.5 mol glucose (mol Au)^-1 s^-1 in an aqueous phase at 45 °C and pH 9. Each heteroatom influenced the polarity of the carbon differently, thereby affecting the deposition of Au on the support and thus the activity and selectivity of the catalyst. The heteroatom effect was further investigated in the gas phase. The Fischer-Tropsch reaction was applied to convert synthesis gas (CO and H2) to short olefins and paraffins using surface-functionalized carbon nanotubes (CNTs) as supports for iron (Fe) deposition in the presence and absence of promoters (Na and S). The results showed the promoted, nitrogen-doped Fe-CNT catalyst to be stable for up to 180 h and selective to the formation of olefins (~47%) and paraffins (~6%), with a CO conversion of ~92% at a maximum activity of 94 x 10^-5 mol CO (g Fe)^-1 s^-1. The insights gained on this topic can open up a wide range of applications, not only in catalysis but in other fields as well.
In conclusion, the incorporation of heteroatoms can be the next approach not only for advanced industrial heterogeneous catalysts but also for other applications (e.g. electrocatalysis, gas adsorption, or supercapacitors).
Climate change affects societies across the globe in various ways. In addition to gradual changes in temperature and other climatic variables, global warming is likely to increase intensity and frequency of extreme weather events.
Beyond biophysical impacts, these also directly affect societal and economic activity. Additionally, indirect effects can occur; spatially, economic losses can spread along global supply-chains; temporally, climate impacts can change the economic development trajectory of countries.
This thesis first examines how climate change alters river flood risk and its local socio-economic implications. Then, it studies the global economic response to river floods in particular, and to climate change in general.
Changes in high-end river flood risk are calculated for the next three decades on a global scale with high spatial resolution. To account for uncertainties, this assessment makes use of an ensemble of climate and hydrological models as well as a river routing model that is found to perform well regarding peak river discharge. The results show an increase in high-end flood risk in many parts of the world, which requires profound adaptation efforts. This pressure to adapt is measured as the enhancement of the protection level necessary to stay at the historical high-end risk. A high pressure to adapt is observed in developing countries as well as in industrialized regions — the former need to increase low protection levels, the latter to maintain the low risk levels experienced in the past.
Further, in this thesis the global agent-based dynamic supply-chain model acclimate is developed. It models the cascading of indirect losses in the global supply network. As an anomaly model, its agents — firms and consumers — maximize their profit locally in order to respond optimally to local perturbations. Incorporating quantities as well as prices on a daily basis, it is suitable for dynamically resolving the impacts of unanticipated climate extremes.
The model is further complemented by a static measure that captures the inter-dependencies between sectors across regions that are only connected indirectly. These higher-order dependencies are shown to be important for a comprehensive assessment of loss propagation and the overall costs of local disasters.
To study the economic response to river floods, the acclimate model is driven by flood simulations. Within the next two decades, the increase in direct losses can only partially be compensated by market adjustments, and total losses are projected to increase by 17% without further adaptation efforts. The US and the EU are both shown to receive indirect losses from China, which itself is strongly affected directly. However, recent trends in trade relations leave the EU in a better position to compensate for these losses.
Finally, this thesis takes a broader perspective by determining the investment response to climate change damages using the integrated assessment model DICE. On an optimal economic development path, the increase in damages is anticipated as emissions, and consequently temperatures, increase. This leads to a significant devaluation of investment returns, and the income losses from climate damages almost double.
Overall, the results highlight the need to adapt to extreme weather events: local physical adaptation measures have to be combined with regional and global policy measures to prepare the global supply-chain network for climate change.
The prediction of the ground shaking that can occur at a site of interest due to an earthquake is crucial in any seismic hazard analysis. Usually, empirically derived ground-motion prediction equations (GMPEs) are employed within a logic-tree framework to account for this step. This is, however, challenging if the area under consideration has only low seismicity and lacks enough recordings to develop a region-specific GMPE. It is then usual practice to adapt GMPEs from data-rich regions (host areas) to the area with insufficient ground-motion recordings (target area). Host GMPEs must be adjusted in such a way that they capture the specific ground-motion characteristics of the target area. To do so, seismological parameters of the target region have to be provided — for example, the site-specific attenuation factor kappa0. This is again an intricate task if the available data are too sparse to derive these parameters.
In this thesis, I explore methods that can facilitate the selection of non-endemic GMPEs in a logic-tree analysis or their adjustment to a data-poor region. I follow two different strategies towards this goal.
The first approach addresses the setup of a ground-motion logic tree when no indigenous GMPE is available. In particular, I propose a method to derive an optimized backbone model that captures the median ground-motion characteristics in the region of interest. This is done by aggregating several foreign GMPEs as weighted components of a mixture model in which the weights are inferred from observed data. The approach is applied to Northern Chile, a region for which no indigenous GMPE existed at the time of the study. Mixture models are derived for interface- and intraslab-type events using eight subduction-zone GMPEs originating from different parts of the world. The derived mixtures provide satisfactory results in terms of average residuals and average sample log-likelihoods. They outperform all individual non-endemic GMPEs and are comparable to a regression model that was specifically derived for that area.
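Inferring mixture weights from observed data can be sketched with the EM algorithm: given each component GMPE's likelihood for each observed record, the weights that maximize the average sample log-likelihood are found iteratively. This is a generic illustration under stated assumptions (precomputed per-record likelihoods, equal starting weights), not the estimation procedure of the thesis:

```python
import numpy as np

def fit_mixture_weights(likelihoods, n_iter=200):
    """EM estimation of mixture weights for K component models.

    `likelihoods` is an (N, K) array: likelihoods[i, k] is the likelihood
    of observed record i under component GMPE k. Returns the weight vector
    maximizing the sample log-likelihood of the mixture.
    """
    n, k = likelihoods.shape
    w = np.full(k, 1.0 / k)                     # start from equal weights
    for _ in range(n_iter):
        resp = likelihoods * w                  # E-step: responsibilities
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)                   # M-step: update weights
    return w

# Toy data: 70 records better explained by component 0, 30 by component 1.
lik = np.vstack([np.tile([0.5, 0.1], (70, 1)),
                 np.tile([0.1, 0.5], (30, 1))])
w = fit_mixture_weights(lik)   # converges to w ≈ [0.8, 0.2]
```

The resulting weights can then serve directly as branch weights of a backbone logic tree built from the component GMPEs.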
The second approach is concerned with the derivation of the site-specific attenuation factor kappa0. kappa0 is one of the key parameters in host-to-target adjustments of GMPEs but is hard to derive if data are sparse. I explore methods to estimate kappa0 from ambient seismic noise. Seismic noise is, in contrast to earthquake recordings, continuously available. The rapidly emerging field of seismic interferometry makes it possible to infer velocity and attenuation information from the cross-correlation or deconvolution of long noise recordings. The extraction of attenuation parameters from diffuse wavefields is, however, not straightforward, especially for frequencies above 1 Hz and at shallow depth. In this thesis, I show the results of two studies. In the first one, data from a small-scale array experiment in Greece are used to derive Love wave quality factors in the frequency range 1-4 Hz. In the second study, frequency-dependent quality factors of S-waves (5-15 Hz) are estimated by deconvolving noise recorded in a borehole and at a co-located surface station in West Bohemia/Vogtland. These two studies can be seen as preliminary steps towards the estimation of kappa0 from seismic noise.
The rapid development and integration of information technologies over the last decades has influenced all areas of our lives, including the business world. Yet not only have modern enterprises become digitalised; security and criminal threats have also moved into the digital sphere. To withstand these threats, modern companies must be aware of all activities within their computer networks.
The keystone of such continuous security monitoring is a Security Information and Event Management (SIEM) system that collects and processes all security-related log messages from the entire enterprise network. However, digital transformations and technologies such as network virtualisation and the widespread usage of mobile communications lead to a constantly increasing number of monitored devices and systems. As a result, the amount of data that has to be processed by a SIEM system is increasing rapidly. Moreover, in-depth security analysis of the captured data requires the application of rather sophisticated outlier detection algorithms with high computational complexity. Existing outlier detection methods often suffer from performance issues and are not directly applicable to high-speed and high-volume analysis of heterogeneous security-related events, which has become a major challenge for modern SIEM systems.
This thesis provides a number of solutions to the aforementioned challenges. First, it proposes a new SIEM system architecture for high-speed processing of security events that implements parallel, in-memory, and in-database processing principles. The proposed architecture also utilises the most efficient log format for high-speed data normalisation. Next, the thesis offers several novel high-speed outlier detection methods, including a generic Hybrid Outlier Detection method that can efficiently be used for Big Data analysis. Finally, a specialised User Behaviour Outlier Detection method is proposed for better threat detection and analysis of particular user behaviour cases.
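As a generic illustration of outlier flagging on security event streams, a minimal standardised-deviation baseline can be sketched as follows. This is not the Hybrid or User Behaviour Outlier Detection method developed in the thesis; the feature layout and threshold are hypothetical:

```python
import numpy as np

def zscore_outliers(events, threshold=3.0):
    """Flag events whose feature vector deviates strongly from the mean.

    An event is flagged as an outlier if any standardised feature exceeds
    `threshold` standard deviations from the column mean.
    """
    x = np.asarray(events, dtype=float)
    mu = x.mean(axis=0)
    sigma = x.std(axis=0)
    sigma[sigma == 0] = 1.0               # avoid division by zero
    z = np.abs((x - mu) / sigma)
    return np.any(z > threshold, axis=1)  # one boolean flag per event

# 200 simulated normal events (duration, bytes) plus one with an
# extreme byte count, which should be flagged.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[10, 500], scale=[1, 20], size=(200, 2))
events = np.vstack([normal, [[10, 5000]]])
flags = zscore_outliers(events)
```

Such a single-pass baseline scales to high event rates, which is the main constraint the thesis architecture addresses; the methods proposed there go well beyond this sketch.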
The proposed architecture and methods were evaluated in terms of both performance and accuracy, and compared with a classical architecture and existing algorithms. These evaluations were performed on multiple data sets, including simulated data, a well-known public intrusion detection data set, and real data from a large multinational enterprise. The evaluation results demonstrate the high performance and efficacy of the developed methods.
All concepts proposed in this thesis were integrated into a prototype SIEM system capable of high-speed analysis of Big Security Data, which makes this integrated SIEM platform highly relevant for modern enterprise security applications.