Earthquake modeling is the key to a profound understanding of a rupture. Its kinematics or dynamics are derived from advanced rupture models that allow one, for example, to reconstruct the direction and velocity of the rupture front or the evolving slip distribution behind it. Such models are often parameterized by a lattice of interacting sub-faults with many degrees of freedom, where, for example, the time history of slip and rake on each sub-fault is inverted. To avoid overfitting or other numerical instabilities during a finite-fault estimation, most models are stabilized by geometric rather than physical constraints, such as smoothing.
As a basis for the inversion approach of this study, we build on a new pseudo-dynamic rupture model (PDR) with only a few free parameters and a simple geometry as a physics-based representation of an earthquake rupture. The PDR derives the instantaneous slip from a given stress drop on the fault plane, with boundary conditions on the developing crack surface enforced at all times via a boundary element approach. As a side product, the source time function at each point on the rupture plane is not constrained and develops by itself without additional parametrization. The code was made publicly available as part of the Pyrocko and Grond Python packages. The approach was compared with conventional modeling for different earthquakes. For example, for the Mw 7.1 2016 Kumamoto, Japan, earthquake, the effects of geometric changes in the rupture surface on the slip and slip-rate distributions could be reproduced by simply projecting stress vectors. For the Mw 7.5 2018 Palu, Indonesia, strike-slip earthquake, we also modelled rupture propagation using the 2D Eikonal equation and assuming a linear relationship between rupture and shear-wave velocity. This allowed us to propose a deeper, faster-propagating rupture front and the resulting upward refraction as a new possible explanation for the apparent supershear rupture observed at the Earth's surface.
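The rupture-front calculation mentioned for the Palu case can be illustrated with a minimal fast-marching solver of the 2D Eikonal equation |∇T| = 1/v_r, where the rupture velocity v_r is taken as a fixed fraction of the shear-wave velocity. This is a hedged sketch of the idea, not the Pyrocko/Grond implementation; grid size, spacing and velocity values are hypothetical.

```python
import heapq
import numpy as np

def eikonal_travel_time(speed, src, h=1.0):
    """First-order fast-marching solution of |grad T| = 1/speed on a 2D grid.

    speed : 2D array of local rupture velocities (e.g. 0.8 * shear-wave speed)
    src   : (i, j) grid index of the rupture nucleation point
    h     : grid spacing
    """
    ny, nx = speed.shape
    T = np.full((ny, nx), np.inf)
    T[src] = 0.0
    done = np.zeros((ny, nx), dtype=bool)
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue
        done[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and not done[ni, nj]:
                # best upwind neighbour in each axis direction
                tx = min(T[ni, nj - 1] if nj > 0 else np.inf,
                         T[ni, nj + 1] if nj < nx - 1 else np.inf)
                ty = min(T[ni - 1, nj] if ni > 0 else np.inf,
                         T[ni + 1, nj] if ni < ny - 1 else np.inf)
                s = h / speed[ni, nj]
                a, b = sorted((tx, ty))
                if b < np.inf and b - a < s:
                    # two-sided quadratic update: (T-a)^2 + (T-b)^2 = s^2
                    new_t = 0.5 * (a + b + np.sqrt(2 * s**2 - (a - b)**2))
                else:
                    new_t = a + s
                if new_t < T[ni, nj]:
                    T[ni, nj] = new_t
                    heapq.heappush(heap, (new_t, (ni, nj)))
    return T
```

With a depth-dependent speed field (v_r increasing downwards), the computed isochrones bend upwards, which is the refraction effect invoked above to explain the apparent supershear at the surface.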
The thesis investigates three aspects of earthquake inversion using PDR: (1) to test whether implementing a simplified rupture model with few parameters into a probabilistic Bayesian scheme without constraining geometric parameters is feasible, and whether this leads to fast and robust results that can be used for subsequent fast information systems (e.g., ground motion predictions); (2) to investigate whether combining broadband and strong-motion seismic records together with near-field ground deformation data improves the reliability of estimated rupture models in a Bayesian inversion; (3) to investigate whether a complex rupture can be represented by the inversion of multiple PDR sources, and for which types of earthquakes this is recommended.
I developed the PDR inversion approach and applied the joint data inversions to two seismic sequences in different tectonic settings. Using multiple frequency bands and a multiple source inversion approach, I captured the multi-modal behaviour of the Mw 8.2 2021 South Sandwich subduction earthquake, with a large, curved, slowly rupturing shallow event bounded by two faster, deeper and smaller events. I could cross-validate the results with other methods, i.e., P-wave energy back-projection, a clustering analysis of aftershocks, and a simple tsunami forward model.
The joint analysis of ground deformation and seismic data within a multiple source inversion also shed light on an earthquake triplet that occurred in July 2022 in SE Iran. From the inversion and aftershock relocation, I found indications of a vertical separation between the shallower mainshocks within the sedimentary cover and deeper aftershocks at the sediment-basement interface. The vertical offset could be caused by the ductile response of the evident salt layer to stress perturbations from the mainshocks.
The applications highlight the versatility of the simple PDR in probabilistic seismic source inversion, capturing features of rather different, complex earthquakes. Limitations, such as the evident focus on the major slip patches of the rupture, are discussed, as well as differences from other finite-fault modeling methods.
In the present thesis, AC electrokinetic forces, such as dielectrophoresis and AC electroosmosis, were demonstrated as a simple and fast method to functionalize the surface of nanoelectrodes with submicrometer-sized biological objects. These nanoelectrodes have a cylindrical shape with a diameter of 500 nm and are arranged in an array of 6256 electrodes. Due to their medical relevance, influenza viruses as well as anti-influenza antibodies were chosen as model systems. Common methods to bring antibodies or proteins to biosensor surfaces are complex and time-consuming. In the present work, it was demonstrated that by applying AC electric fields, influenza viruses and antibodies can be immobilized onto the nanoelectrodes within seconds, without prior chemical modification of either the surface or the immobilized biological object. The distribution of these immobilized objects is not uniform over the entire array; it exhibits a decreasing gradient from the outer rows to the inner ones. Different causes for this gradient are discussed, such as the vortex-shaped fluid motion above the nanoelectrodes generated by, among others, electrothermal fluid flow. It was demonstrated that part of the accumulated material is permanently immobilized on the electrodes. This is a unique characteristic of the presented system, since in the literature AC electrokinetic immobilization is almost exclusively presented as a method for temporary immobilization. The spatial distribution of the immobilized viral material or anti-influenza antibodies at the electrodes was observed either by the combination of fluorescence microscopy and deconvolution or by super-resolution microscopy (STED). On-chip immunoassays were performed to examine the suitability of the functionalized electrodes as a potential affinity-based biosensor. Two approaches were pursued: (A) the influenza virus as the bio-receptor, or (B) the influenza virus as the analyte.
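For orientation, the trapping mechanism named above can be summarized by the standard first-order expression for the time-averaged dielectrophoretic force on a spherical particle (a textbook relation, not a formula quoted from the thesis):

```latex
\mathbf{F}_{\mathrm{DEP}}
  = 2\pi \varepsilon_m r^{3}\,
    \operatorname{Re}\!\left[K(\omega)\right]\,
    \nabla \lvert \mathbf{E}_{\mathrm{rms}} \rvert^{2},
\qquad
K(\omega)
  = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}
         {\varepsilon_p^{*}+2\varepsilon_m^{*}}
```

Here r is the particle radius, ε_m the permittivity of the medium, and K(ω) the Clausius-Mossotti factor built from the complex permittivities of particle and medium. The r³ scaling illustrates why immobilizing submicrometer objects such as virus particles requires the strong field gradients that form at nanoelectrode edges.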
Different sources of error were eliminated by ELISA and passivation experiments. Hence, the activity of the immobilized object was inspected by incubation with the analyte. This resulted in the successful detection of anti-influenza antibodies by the immobilized viral material. On the other hand, detection of influenza virus particles by the immobilized anti-influenza antibodies was not possible. The latter might be due to lost activity or wrong orientation of the antibodies. Thus, further examination of the activity of antibodies immobilized by AC electric fields should follow. When combined with microfluidics and an electrical read-out system, the functionalized chips have the potential to serve as a rapid, portable, and cost-effective point-of-care (POC) device. Such a device could serve as a basis for diverse applications in diagnosing and treating influenza, as well as various other pathogens.
Stars under influence: evidence of tidal interactions between stars and substellar companions
(2023)
Tidal interactions occur between gravitationally bound astrophysical bodies. If their spatial separation is sufficiently small, the bodies can induce tides on each other, leading to angular momentum transfer and altering the evolutionary path the bodies would have followed as single objects. Tidal processes are well established in Solar System planet-moon systems and in close stellar binary systems. But how do stars behave if they are orbited by a substellar companion (e.g., a planet or a brown dwarf) on a tight orbit?
Typically, a substellar companion inside the corotation radius of a star will migrate toward the star as it loses orbital angular momentum. On the other hand, the star will gain angular momentum which has the potential to increase its rotation rate. The effect should be more pronounced if the substellar companion is more massive. As the stellar rotation rate and the magnetic activity level are coupled, the star should appear more magnetically active under the tidal influence of the orbiting substellar companion. However, the difficulty in proving that a star has a higher magnetic activity level due to tidal interactions lies in the fact that (I) substellar companions around active stars are easier to detect if they are more massive, leading to a bias toward massive companions around active stars and mimicking the tidal interaction effect, and that (II) the age of a main-sequence star cannot be easily determined, leaving the possibility that a star is more active due to its young age.
In our work, we overcome these issues by employing wide stellar binary systems in which one star hosts a substellar companion and the other star provides the magnetic activity baseline for the host star, assuming they have coevolved, and thereby provides the host's activity level if tidal interactions have no effect on it. First, we find that extrasolar planets can noticeably increase the host star's X-ray luminosity and that the effect is more pronounced if the exoplanet is at least Jupiter-like in mass and close to the star. Further, we find that a brown dwarf has an even stronger effect, as expected, and that the X-ray surface flux difference between the host star and the wide stellar companion is a significant outlier when compared to a large sample of similar wide binary systems without any known substellar companions. This result proves that wide binary systems hosting substellar companions can be good tools to reveal the tidal effect on host stars, and also shows that typical stellar age indicators such as activity or rotation cannot be used for these stars. Finally, knowing that the activity difference is a good tracer of the substellar companion's tidal impact, we develop an analytical method to calculate the modified tidal quality factor Q' of individual host stars, which quantifies the tidal dissipation efficiency in the convective envelope of a given main-sequence star.
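To illustrate what Q' controls, a commonly used equilibrium-tide expression (for a circular, coplanar orbit decaying inside corotation; not necessarily the form derived in the thesis) relates the orbital decay rate of a companion of mass m_c to the star's modified tidal quality factor:

```latex
\frac{da}{dt} \;=\; -\,\frac{9}{2}\,
  \sqrt{\frac{G}{M_*}}\;
  \frac{R_*^{5}\, m_c}{Q'_*}\; a^{-11/2},
\qquad
Q'_* \equiv \frac{3\,Q_*}{2\,k_2}
```

A smaller Q'_* means more efficient tidal dissipation in the stellar convective envelope, a faster inspiral of the companion, and hence a stronger spin-up and activity enhancement of the host star.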
Digitalisation in industry – also called “Industry 4.0” – is seen by numerous actors as an opportunity to reduce the environmental impact of the industrial sector. The scientific assessments of the effects of digitalisation in industry on environmental sustainability, however, are ambivalent. This cumulative dissertation uses three empirical studies to examine the expected and observed effects of digitalisation in industry on environmental sustainability. The aim of this dissertation is to identify opportunities and risks of digitalisation at different system levels and to derive options for action in politics and industry for a more sustainable design of digitalisation in industry. I use an interdisciplinary, socio-technical approach and look at selected countries of the Global South (Study 1) and the example of China (all studies). In the first study (section 2, joint work with Marcel Matthess), I use qualitative content analysis to examine digital and industrial policies from seven different countries in Africa and Asia for expectations regarding the impact of digitalisation on sustainability and compare these with the potentials of digitalisation for sustainability in the respective country contexts. The analysis reveals that the documents express a wide range of vague expectations that relate more to positive indirect impacts of information and communication technology (ICT) use, such as improved energy efficiency and resource management, and less to negative direct impacts of ICT, such as electricity consumption through ICT. In the second study (section 3, joint work with Marcel Matthess, Grischa Beier and Bing Xue), I conduct and analyse interviews with 18 industry representatives of the electronics industry from Europe, Japan and China on digitalisation measures in supply chains using qualitative content analysis. 
I find that while there are positive expectations regarding the effects of digital technologies on supply chain sustainability, their actual use and observable effects are still limited. Interview partners could provide only a few examples from their own companies in which sustainability goals had already been pursued through digitalisation of the supply chain or in which sustainability effects, such as resource savings, had been demonstrably achieved. In the third study (section 4, joint work with Peter Neuhäusler, Melissa Dachrodt and Marcel Matthess), I conduct an econometric panel data analysis. I examine the relationship between the degree of Industry 4.0, energy consumption and energy intensity in ten manufacturing sectors in China between 2006 and 2019. The results suggest that overall, there is no significant relationship between the degree of Industry 4.0 and energy consumption or energy intensity in Chinese manufacturing sectors. However, differences can be found in subgroups of sectors. I find a negative correlation between Industry 4.0 and energy intensity in highly digitalised sectors, indicating an efficiency-enhancing effect of Industry 4.0 in these sectors. On the other hand, there is a positive correlation between Industry 4.0 and energy consumption for sectors with low energy consumption, which could be explained by the fact that digitalisation, such as the automation of previously mainly labour-intensive sectors, requires energy and also induces growth effects. In the discussion section (section 6) of this dissertation, I use the classification scheme of the three levels macro, meso and micro, as well as of direct and indirect environmental effects, to classify the empirical observations into opportunities and risks, for example with regard to the probability of rebound effects of digitalisation at the three levels.
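The panel analysis described above can be sketched with the textbook within (fixed-effects) estimator. The data, slope value and sector structure below are synthetic and purely illustrative; they are not the thesis' dataset or model specification.

```python
import numpy as np

def within_estimator(y, x, entity):
    """Fixed-effects (within) estimator for y_it = beta * x_it + alpha_i + e_it.

    y, x   : 1-D arrays stacked over entities and years
    entity : integer entity (sector) label per observation
    """
    yd = y.astype(float).copy()
    xd = x.astype(float).copy()
    for g in np.unique(entity):
        m = entity == g
        yd[m] -= yd[m].mean()   # demean within each sector removes alpha_i
        xd[m] -= xd[m].mean()
    return float(xd @ yd / (xd @ xd))

# Synthetic example: 10 sectors, 14 years, true slope -0.3 (a hypothetical
# "Industry 4.0 degree vs. energy intensity" relationship).
rng = np.random.default_rng(0)
sectors = np.repeat(np.arange(10), 14)
alpha = np.repeat(rng.normal(5.0, 1.0, 10), 14)   # sector fixed effects
x = rng.uniform(0.0, 1.0, 140)                    # degree of Industry 4.0
y = -0.3 * x + alpha + rng.normal(0.0, 0.01, 140)
beta_hat = within_estimator(y, x, sectors)
```

Demeaning within each sector absorbs all time-invariant sector characteristics, so the slope is identified only from within-sector variation over time.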
I link the investigated actor perspectives (policy makers, industry representatives), statistical data and additional literature across the system levels and consider political economy aspects to suggest fields of action for more sustainable (digitalised) industries. The dissertation thus makes two overarching contributions to the academic and societal discourse. First, my three empirical studies expand the limited state of research at the interface between digitalisation in industry and sustainability, especially by considering selected countries in the Global South and the example of China. Second, exploring the topic through data and methods from different disciplinary contexts and taking a socio-technical point of view enables an analysis of (path) dependencies, uncertainties, and interactions in the socio-technical system across different system levels, which have often not been sufficiently considered in previous studies. The dissertation thus aims to create a scientifically and practically relevant knowledge basis for a value-guided, sustainability-oriented design of digitalisation in industry.
Late-type stars are by far the most frequent stars in the universe and of fundamental interest to various fields of astronomy – most notably to Galactic archaeology and exoplanet research. However, such stars barely change during their main sequence lifetime; their temperature, luminosity, or chemical composition evolve only very slowly over the course of billions of years. As such, it is difficult to obtain the age of such a star, especially when it is isolated and no other indications (like cluster association) can be used. Gyrochronology offers a way to overcome this problem.
Stars, like all other objects in the universe, rotate, and the rate at which stars rotate impacts many aspects of their appearance and evolution. Gyrochronology leverages the observed rotation rate of a late-type main-sequence star and its systematic evolution to estimate the star's age. Unlike the parameters mentioned above, the rotation rate of a main-sequence star changes drastically throughout its main-sequence lifetime; stars spin down. The youngest stars complete a rotation in a few hours, whereas much older stars rotate only about once a month or, in the case of some late M stars, once in a hundred days. Given that this spindown is systematic (with an additional mass dependence), it gave rise to the idea of using the observed rotation rate of a star (and its mass or a suitable proxy thereof) to estimate the star's age. This has been explored widely in young stellar open clusters but remains essentially unconstrained for stars older than the Sun, and for K and M stars older than 1 Gyr.
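The period-to-age conversion can be sketched with a published gyrochronology relation of the Barnes (2007) form, P = a (B-V - c)^b t^n, using the commonly quoted coefficient fits. These values illustrate the method only; they are not the calibration developed in this thesis.

```python
import numpy as np

# Barnes (2007)-style coefficients (P in days, t in Myr, B-V as colour proxy
# for mass); taken from the published fit, used here purely for illustration.
A, B, C, N = 0.7725, 0.601, 0.40, 0.5189

def rotation_period(t_myr, bv):
    """Predicted rotation period (days) of a star of colour B-V at age t."""
    return A * (bv - C) ** B * t_myr ** N

def gyro_age(period_days, bv):
    """Invert the relation: gyrochronological age in Myr from P and B-V."""
    return (period_days / (A * (bv - C) ** B)) ** (1.0 / N)
```

For a solar-colour star (B-V ≈ 0.65) at the solar age (~4570 Myr), the relation predicts a period in the vicinity of the observed ~25-day solar rotation, which is the kind of sanity check such calibrations are held to.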
This thesis focuses on the continued exploration of the spindown behavior to assess whether gyrochronology remains applicable for old stars, whether it is universal for late-type main-sequence stars (including field stars), and to provide calibration mileposts for spindown models. To accomplish this, I have analyzed data from the Kepler space telescope for the open clusters Ruprecht 147 (2.7 Gyr old) and M 67 (4 Gyr old). Time series photometry data (light curves) were obtained for both clusters during Kepler's K2 mission. However, due to technical limitations and telescope malfunctions, extracting usable data from the K2 mission to identify (especially long) rotation periods requires extensive data preparation.
For Ruprecht 147, I compiled a list of about 300 cluster members from the literature and adopted preprocessed light curves from the Kepler archive where available. These had been cleaned of the most severe data artifacts but still contained systematics. After correcting for those, I was able to identify rotation periods in 31 of them.
For M 67, more effort was required. My work on Ruprecht 147 had shown the limitations imposed by the preselection of Kepler targets. Therefore, I worked directly with the time series of full-frame images and performed photometry at a much higher spatial resolution to obtain data for as many stars as possible. This also meant that I had to deal with the ubiquitous artifacts in Kepler data. For that, I devised a method that correlates the artificial flux variations with the ongoing drift of the telescope pointing in order to remove them. This process was largely successful, and I was able to create light curves whose quality matches and even exceeds those created by the Kepler mission, all while operating at higher spatial resolution and processing fainter stars. Ultimately, I was able to identify signs of periodic variability in the light curves of 31 and 47 stars in Ruprecht 147 and M 67, respectively. My data connect well to bluer stars of clusters of the same age and extend for the first time to stars redder than early K and older than 1 Gyr. The cluster data show a clear flattening in the distribution for Ruprecht 147 and even a downturn for M 67, resulting in a somewhat sinusoidal shape. With that, I have shown that the systematic spindown of stars continues at least until 4 Gyr and that stars continue to live on a single surface in age-rotation period-mass space, which allows gyrochronology to be used at least up to that age. However, the shape of the spindown, as exemplified by the newly discovered sinusoidal shape of the cluster sequence, deviates strongly from expectations.
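The drift-decorrelation idea can be sketched as follows: fit the flux as a low-order function of a pointing-drift coordinate and divide the fit out, so that only variability uncorrelated with the pointing survives. This is a simplified stand-in for the method devised in the thesis; the drift model, cadence and injected signal below are synthetic.

```python
import numpy as np

def detrend_against_drift(flux, drift, order=3):
    """Remove flux variations correlated with telescope pointing drift.

    Fits a low-order polynomial of flux versus a drift coordinate (e.g. the
    centroid arclength) and divides it out, keeping the astrophysical signal
    that is uncorrelated with the pointing.
    """
    coeffs = np.polyfit(drift, flux, order)
    model = np.polyval(coeffs, drift)
    return flux / model          # normalized, systematics-reduced light curve

# Hypothetical K2-like data: sawtooth roll drift plus an injected spot signal.
t = np.linspace(0.0, 80.0, 4000)                       # time in days
drift = (t % 6.0) - 3.0                                # ~6-day thruster cycle
systematic = 1.0 + 0.02 * drift + 0.01 * drift**2      # pointing-dependent flux
signal = 1.0 + 0.004 * np.sin(2 * np.pi * t / 25.0)    # 25-day rotation signal
flux = systematic * signal
clean = detrend_against_drift(flux, drift)
```

Because the drift repeats on a much shorter cycle than the rotation signal, the polynomial absorbs the pointing-dependent flux while leaving the slow periodic variability largely intact.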
I then compiled an extensive sample of rotation data in open clusters, very much including my own work, and used the resulting cluster skeleton (with each cluster forming a rib in color-rotation period-mass space) to investigate whether field stars follow the same spindown as cluster stars. For the field stars, I used wide binaries, which, with their shared origin and coevality, are in a sense the smallest possible open clusters. I devised an empirical method to evaluate the consistency between the rotation rates of the wide binary components and found that the vast majority of them are in fact consistent with what is observed in open clusters. This leads me to conclude that gyrochronology, calibrated on open clusters, can be applied to determine the ages of field stars.
Reactive eutectic media based on ammonium formate for the valorization of bio-sourced materials
(2023)
In the last several decades, eutectic mixtures of different compositions have been successfully used as solvents for a vast number of chemical processes, and only relatively recently were they discovered to be widespread in nature. As such, they are discussed as a third liquid medium of the living cell, composed of common cell metabolites. Such media may also incorporate water as a eutectic component in order to regulate properties such as enzyme activity or viscosity. Taking inspiration from such sophisticated use of eutectic mixtures, this thesis explores the use of reactive eutectic media (REM) for organic synthesis. Such unconventional media are characterized by the reactivity of their components, which means that the mixture may assume the role of the solvent as well as of the reactant itself.
The thesis focuses on novel REM based on ammonium formate and investigates their potential for the valorization of bio-sourced materials. The use of REM enables a number of solvent-free reactions, which entails the benefits of superior atom and energy economy, higher yields and faster rates compared to reactions in solution. This is evident for the Maillard reaction between ammonium formate and various monosaccharides for the synthesis of substituted pyrazines, as well as for a Leuckart-type reaction between ammonium formate and levulinic acid for the synthesis of 5-methyl-2-pyrrolidone. Furthermore, the reaction of ammonium formate with citric acid for the synthesis of previously unknown fluorophores shows that synthesis in REM can open up unexpected reaction pathways.
Another focus of the thesis is the study of water as a third component in the REM. Here, the concept of two different dilution regimes (ternary REM and REM in solvent) proves useful for understanding the influence of water. It is shown that small amounts of water can greatly benefit the reaction by reducing viscosity while at the same time increasing reaction yields.
REM based on ammonium formate and organic acids are employed for lignocellulosic biomass treatment. The thesis thereby introduces an alternative approach towards lignocellulosic biomass fractionation that promises a considerable process intensification by the simultaneous generation of cellulose and lignin as well as the production of value-added chemicals from REM components. The thesis investigates the generated cellulose and the pathway to nanocellulose generation and also includes the structural analysis of extracted lignin.
Finally, the thesis investigates the potential of microwave heating to run chemical reactions in REM and describes the synergy between these two approaches. Microwave heating for chemical reactions and the use of eutectic mixtures as alternative reaction media are two research fields that are often described in the scope of green chemistry. The thesis therefore also contains a closer inspection of this terminology and its greater goal of sustainability.
The Lyman-𝛼 (Ly𝛼) line commonly assists in the detection of high-redshift galaxies, the so-called Lyman-alpha emitters (LAEs). LAEs are useful tools to study the baryonic matter distribution of the high-redshift universe. Exploring their spatial distribution not only reveals the large-scale structure of the universe at early epochs, but also provides insight into the early formation and evolution of the galaxies we observe today. Because dark matter halos (DMHs) serve as sites of galaxy formation, the LAE distribution also traces that of the underlying dark matter. However, the details of this relation and their co-evolution over time remain unclear. Moreover, theoretical studies predict that the spatial distribution of LAEs also impacts their own circumgalactic medium (CGM) by influencing their extended Ly𝛼 gaseous halos (LAHs), whose origin is still under investigation. In this thesis, I make several contributions to improving our knowledge of these fields using samples of LAEs observed with the Multi Unit Spectroscopic Explorer (MUSE) at redshifts of 3 < 𝑧 < 6.
Solar photocatalysis is one of the leading concepts of research in the current paradigm of a sustainable chemical industry. For the actual practical implementation of sunlight-driven catalytic processes in organic synthesis, a cheap, efficient, versatile and robust heterogeneous catalyst is necessary. Carbon nitrides are a class of organic semiconductors that are known to fulfill these requirements.
First, current state of solar photocatalysis in economy, industry and lab research is overviewed, outlining EU project funding, prospective synthetic and reforming bulk processes, small scale solar organic chemistry, and existing reactor designs and prototypes, concluding feasibility of the approach.
Then, the photocatalytic aerobic cleavage of oximes to corresponding aldehydes and ketones by anionic poly(heptazine imide) carbon nitride is discussed. The reaction provides a feasible method of deprotection and formation of carbonyl compounds from nitrosation products and serves as a convenient model to study chromoselectivity and photophysics of energy transfer in heterogeneous photocatalysis.
Afterwards, the ability of mesoporous graphitic carbon nitride to conduct proton-coupled electron transfer was utilized for the direct oxygenation of 1,3-oxazolidin-2-ones to the corresponding 1,3-oxazolidine-2,4-diones. This reaction provides easier access to a key scaffold of diverse types of drugs and agrochemicals.
Finally, a series of novel carbon nitrides based on poly(triazine imide) and poly(heptazine imide) structure was synthesized from cyanamide and potassium rhodizonate. These catalysts demonstrated a good performance in a set of photocatalytic benchmark reactions, including aerobic oxidation, dual nickel photoredox catalysis, hydrogen peroxide evolution and chromoselective transformation of organosulfur precursors.
In conclusion, the scope of carbon nitride utilization for net-oxidative and net-neutral photocatalytic processes was expanded, and a new tunable platform for catalyst synthesis was discovered.
Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries are also susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, population exposed in the floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlement in attractive yet hazardous flood-prone areas has led to a higher degree of population exposure. Finally, the vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, little is known about the risk of displacement under global change.
This thesis aims to improve our understanding of flood-induced displacement risk under global climate and socio-economic change. This objective is tackled by addressing the following three research questions. First, focusing on the choice of input data, how well can a global flood modeling chain reproduce the flood hazard of historic events that led to displacement? Second, what are the socio-economic characteristics that shape the vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events?
To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and global hydrological models, which serve as input for the global hydraulic model. An evaluation scheme of performance scores shows that simulated flood extent is mostly overestimated when flood protection is not considered, and depends on the choice of global hydrological model for only a few events. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from a third. In contrast, incorporating flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection level estimates or the flood frequency distribution within the modeling chain.
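Performance scores for such a binary comparison of simulated and observed flood extent are typically built from the contingency table of wet/dry cells. The function below is a generic sketch of this kind of evaluation; the thesis' exact scoring scheme may differ.

```python
import numpy as np

def flood_scores(sim, obs):
    """Contingency-table scores for binary flood-extent maps.

    sim, obs : boolean arrays (True = inundated) on a common grid, e.g. a
               simulated extent versus a satellite-derived flood footprint.
    Returns (hit rate, false-alarm ratio, critical success index).
    """
    hits = np.sum(sim & obs)            # wet in both maps
    misses = np.sum(~sim & obs)         # observed wet, simulated dry
    false_alarms = np.sum(sim & ~obs)   # simulated wet, observed dry
    hit_rate = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    csi = hits / (hits + misses + false_alarms)
    return hit_rate, far, csi
```

A systematic overestimation of extent shows up as a high hit rate combined with a high false-alarm ratio, while overly strong protection assumptions depress the hit rate.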
Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the number of affected people, assets, and critical infrastructure, as well as socio-economic indicators, are computed. The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple (national) dependencies on the socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly people, population density and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability.
Besides examining the contextual drivers of vulnerability, the role of climate change in the context of human displacement is also explored. An impact attribution approach is applied to the example of Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows investigating the isolated or combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding. The results suggest that displacement risk has increased by 3.1 to 3.5% due to the total effects of climate change on coastal flooding, with increasing wind speed being the dominant factor.
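The storyline attribution quantifies the risk change by contrasting factual and counterfactual exposure. A minimal sketch of that arithmetic follows; the exposure counts are made up for illustration (the thesis reports a 3.1 to 3.5% increase for Cyclone Idai, not these numbers).

```python
def displacement_risk_change(exposed_factual, exposed_counterfactual):
    """Relative change (in %) of flood-exposed population between the factual
    event and its counterfactual without climate-change effects."""
    return 100.0 * (exposed_factual - exposed_counterfactual) / exposed_counterfactual

# Hypothetical exposure counts from factual and counterfactual flood maps:
change = displacement_risk_change(103_300, 100_000)
```

The same contrast can be evaluated separately for the sea-level-rise-only and wind-intensification-only counterfactuals to isolate each driver's contribution.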
In conclusion, this thesis highlights the potential and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or the attribution of other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.
The Andean Cordillera is a mountain range located at the western South American margin and is part of the eastern Circum-Pacific orogenic belt. The ~7000 km long mountain range is one of the longest on Earth and hosts the second-largest orogenic plateau in the world, the Altiplano-Puna plateau. The Andes are known as a non-collisional subduction-type orogen that developed as a result of the interaction between the subducting oceanic Nazca plate and the South American continental plate. The different Andean segments exhibit along-strike variations of morphotectonic provinces characterized by different elevations, volcanic activity, deformation styles, crustal thickness, shortening magnitude and oceanic plate geometry. Most of the present-day elevation can be explained by crustal shortening in the last ~50 Ma, with the shortening magnitude decreasing from ~300 km in the central (15°S-30°S) segment to less than half that in the southern part (30°S-40°S). Several factors have been proposed that might control the magnitude and acceleration of shortening of the Central Andes in the last 15 Ma. One important factor is likely the slab geometry. At 27-33°S, the slab dips horizontally at ~100 km depth due to the subduction of the buoyant Juan Fernandez Ridge, forming the Pampean flat-slab. This horizontal subduction is thought to influence the thermo-mechanical state of the Sierras Pampeanas foreland, for instance, by strengthening the lithosphere and promoting the thick-skinned propagation of deformation to the east, resulting in the uplift of the Sierras Pampeanas basement blocks. The flat-slab has migrated southwards from the Altiplano latitude at ~30 Ma to its present-day position, and the processes associated with its passage, as well as their consequences for the contemporaneous acceleration of the shortening rate in the Central Andes, remain unclear.
Although the passage of the flat-slab could offer an explanation for the acceleration of the shortening, the timing does not explain the two pulses of shortening at about 15 Ma and 4 Ma that are suggested by geological observations. I hypothesize that deformation in the Central Andes is controlled by a complex interaction between the subduction dynamics of the Nazca plate and the dynamic strengthening and weakening of the South American plate due to several upper-plate processes. To test this hypothesis, a detailed investigation into the role of the flat-slab, the structural inheritance of the continental plate, and the subduction dynamics in the Andes is needed. Therefore, I have built two classes of numerical thermo-mechanical models: (i) The first class of models is a series of generic E-W-oriented high-resolution 2D subduction models that include flat subduction, in order to investigate the role of the subduction dynamics in the temporal variability of the shortening rate in the Central Andes at Altiplano latitudes (~21°S). The shortening rate from the models was then validated against the observed tectonic shortening rate in the Central Andes. (ii) The second class of models is a series of 3D data-driven models of the present-day Pampean flat-slab configuration and the Sierras Pampeanas (26-42°S). These models aim to investigate the relative contributions of the present-day flat subduction and of inherited structures in the continental lithosphere to strain localization. Both model classes were built using the advanced finite-element geodynamic code ASPECT.
The first main finding of this work is that the temporal variability of shortening in the Central Andes is primarily controlled by the subduction dynamics of the Nazca plate as it penetrates into the mantle transition zone. These dynamics depend on the westward velocity of the South American plate, which provides the main crustal shortening force to the Andes and forces the trench to retreat. When the subducting plate reaches the lower mantle, it buckles on itself until the forced trench retreat causes the slab to steepen in the upper mantle, in contrast with the classical slab-anchoring model. The steepening of the slab hinders the trench, causing it to resist the advancing South American plate and resulting in pulsatile shortening. This buckling-and-steepening subduction regime could have been initiated by the overall decrease in the westward velocity of the South American plate. In addition, the passage of the flat-slab is required to promote the shortening of the continental plate, because flat subduction scrapes the mantle lithosphere, thus weakening the continental plate. This process contributes to efficient shortening while the trench is hindered, followed by mantle lithosphere delamination at ~20 Ma. Finally, the underthrusting of the Brazilian cratonic shield beneath the orogen occurs at ~11 Ma due to the mechanical weakening caused by the thick sediments covering the shield margin, and due to the decreasing resistance of the weakened lithosphere of the orogen.
The second main finding of this work is that the cold flat-slab strengthens the overriding continental lithosphere and prevents strain localization. The deformation is therefore transmitted to the eastern front of the flat-slab segment by the shear stress operating at the subduction interface; thus, the flat-slab acts like an indenter that “bulldozes” the mantle keel of the continental lithosphere. The offset in the eastward propagation of deformation between the flat-slab segment and the steeper slab segment in the south causes the formation of a transpressive dextral shear zone. Here, inherited faults from past tectonic events are reactivated and further localize the deformation in an en-echelon strike-slip shear zone, through a mechanism that I refer to as the “flat-slab conveyor”. Specifically, the shallowing of the flat-slab causes the lateral deformation, which explains the timing of multiple geological events preceding the arrival of the flat-slab at 33°S. These include the onset of compression and the transition from thin- to thick-skinned deformation styles resulting from contraction of the crust in the Sierras Pampeanas, some 10 and 6 Myr before the Juan Fernandez Ridge collision at that latitude, respectively.
Air pollution has been a persistent global problem for the past several hundred years. While some industrialized nations have improved their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO's 2021 update of its recommended air pollution limit values reflects the substantial impacts on human health of pollutants such as NO2 and O3, as recent epidemiological evidence suggests substantial long-term health impacts of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, the new technology of low-cost sensors (LCS) has been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of applications, including the development of higher-resolution measurement networks, source identification, and measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS against reference instrumentation with various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist, and most proprietary calibration algorithms are black boxes, inaccessible to the public. This work seeks to expand the knowledge base on LCS in several ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability to measure microscale changes in urban air pollution; 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on the resulting changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing the raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating the associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work advanced the effort towards standardization of calibration methodologies. In addition, the open-source publication of code and data for the seven-step methodology made progress towards reforming the largely black-box nature of LCS calibrations.
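As a rough illustration of how such a seven-step workflow can be organized in code, the sketch below stands in a simple ordinary-least-squares model for the (unspecified) model family used in the thesis; all sensor and reference values are invented for illustration.

```python
# Minimal sketch of a seven-step LCS calibration workflow (illustrative only;
# the thesis' actual model selection and data are not reproduced here).

def fit_linear(x, y):
    # Step 4: model selection and tuning -- here a simple ordinary least squares fit
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Step 1: raw sensor signal co-located with a reference monitor; None marks
# values flagged for removal (Steps 2-3: cleaning and flagging)
raw = [0.8, 1.1, None, 1.5, 2.0, 2.4, 3.0, 3.1]
ref = [10.0, 13.0, 12.0, 17.0, 22.0, 26.0, 32.0, 33.0]

pairs = [(r, f) for r, f in zip(raw, ref) if r is not None]
train, valid = pairs[:4], pairs[4:]                     # Step 5: hold-out validation split

a, b = fit_linear([p[0] for p in train], [p[1] for p in train])
pred = [a * x + b for x, _ in valid]                    # Step 6: final predictions
rmse = (sum((p - y) ** 2 for p, (_, y) in zip(pred, valid)) / len(valid)) ** 0.5  # Step 7: uncertainty
```

In practice the model in step 4 would typically be a more flexible regressor that also takes temperature and humidity as covariates, but the reporting structure around it stays the same.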
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Using two types of LCS, metal oxide (MOS) and electrochemical (EC) sensors, their performance in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched general diurnal patterns in NO2 and O3 pollution measured using reference instruments. While MOS proved to be unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies on NO2 and O3 pollution distribution in street canyons. As such, it was concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second concerned the temporary implementation of a community space on Böckhstrasse, and the last focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess changes in air quality resulting from these policies. Results from the Kottbusser Damm experiment showed that the bike lane reduced the NO2 concentrations that cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies' success and future, highlighting the ability of LCS to provide policy-relevant results.
As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first of its kind to connect LCS measurements directly with mobility policies to understand their influence on local air quality, resulting in policy-relevant findings valuable for decision-makers. It serves as an example of the potential for LCS to expand our understanding of air pollution at various scales, as well as their ability to serve as valuable tools in transdisciplinary research.
In recent decades, astronomy has seen a boom in large-scale stellar surveys of the Galaxy. The detailed information obtained about millions of individual stars in the Milky Way is bringing us a step closer to answering one of the most outstanding questions in astrophysics: how do galaxies form and evolve? The Milky Way is the only galaxy in which we can dissect many stars into their high-dimensional chemical compositions and complete phase-space coordinates, which, like fossil records, can unveil the past history of the genesis of the Galaxy. The processes that lead to the formation of large structures such as the Milky Way are critical for constraining cosmological models; we call this line of study Galactic archaeology or near-field cosmology.
At the core of this work, we present a collection of efforts to chemically and dynamically characterise the disks and bulge of our Galaxy. The results we present in this thesis have only been possible thanks to the advent of the Gaia astrometric satellite, which has revolutionised the field of Galactic archaeology by precisely measuring the positions, parallax distances and motions of more than a billion stars. Another, no less important, breakthrough is the APOGEE survey, which has observed near-infrared spectra, peering into the dusty regions of the Galaxy and allowing us to determine detailed chemical abundance patterns in hundreds of thousands of stars. To accurately depict the Milky Way structure, we use and further develop the Bayesian isochrone-fitting code StarHorse; this software can predict stellar distances, extinctions and ages by combining astrometry, photometry and spectroscopy based on stellar evolutionary models. The StarHorse code is pivotal for calculating distances where Gaia parallaxes alone do not allow accurate estimates.
We show that by combining Gaia, APOGEE and photometric surveys, and using StarHorse, we can produce a chemical cartography of the Milky Way disks from their outermost to innermost parts. Such a map is unprecedented in the inner Galaxy. It reveals a continuity of the bimodal chemical pattern previously detected in the solar neighbourhood, indicating two populations with distinct formation histories. Furthermore, the data reveal a chemical gradient within the thin disk, where the content of 𝛼-process elements and metals is higher towards the centre. Focusing on a sample in the inner MW, we confirm the extension of the chemical duality to the innermost regions of the Galaxy. We find stars with bar-shaped orbits showing both high- and low-𝛼 abundances, suggesting the bar formed by secular evolution, trapping stars that already existed. By analysing the chemical-orbital space of the inner Galactic regions, we disentangle the multiple populations that inhabit this complex region. We reveal the presence of the thin disk, thick disk, bar, and a counter-rotating population, which resembles the outcome of a perturbed proto-Galactic disk. Our study also finds that the inner Galaxy holds a large number of super-metal-rich stars, with metallicities up to three times solar, suggesting it is a possible repository of the old super-metal-rich stars found in the solar neighbourhood.
We also take on the complicated task of deriving individual stellar ages. With StarHorse, we calculate the ages of main-sequence turn-off and sub-giant stars for several public spectroscopic surveys. We validate our results by investigating linear relations between chemical abundances and time, since the 𝛼- and neutron-capture elements are sensitive to age, reflecting the different enrichment timescales of these elements. To further study the disks in the solar neighbourhood, we use an unsupervised machine learning algorithm to delineate a multidimensional separation of chrono-chemical stellar groups, revealing the chemical thick disk, the thin disk, and young 𝛼-rich stars. The thick disk shows a small age dispersion, indicating its fast formation, in contrast to the thin disk, which spans a wide range of ages.
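The abstract does not name the specific unsupervised algorithm, so the following toy two-cluster k-means on invented (age, 𝛼-abundance) pairs is only a generic stand-in for how an unsupervised method can separate an old, 𝛼-rich group from a younger, 𝛼-poor one.

```python
def kmeans2(points, iters=20):
    """Two-cluster k-means with deterministic initialization (first and last
    point); a generic illustrative stand-in, not the thesis' actual algorithm."""
    centers = [points[0], points[-1]]
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        # update each center to its cluster mean (keep old center if cluster is empty)
        centers = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Hypothetical (age in Gyr, [alpha/Fe]) pairs: an old alpha-rich group
# and a younger alpha-poor group (values invented for illustration)
stars = [(11.5, 0.30), (12.0, 0.28), (10.8, 0.25),
         (3.0, 0.02), (5.5, 0.05), (7.0, 0.08)]
clusters = kmeans2(stars)
```

Real analyses work in many more abundance dimensions and with more sophisticated clustering, but the principle of grouping stars by proximity in chrono-chemical space is the same.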
With groundbreaking data, this thesis presents a detailed chemo-dynamical view of the disk and bulge of our Galaxy. Our findings on the Milky Way can be linked to the evolution of high-redshift disk galaxies, helping to solve the conundrum of galaxy formation.
The global climate crisis is significantly contributing to changing ecosystems, loss of biodiversity and is putting numerous species on the verge of extinction. In principle, many species are able to adapt to changing conditions or shift their habitats to more suitable regions. However, change is progressing faster than some species can adjust, or potential adaptation is blocked and disrupted by direct and indirect human action. Unsustainable anthropogenic land use in particular is one of the driving factors, besides global heating, for these ecologically critical developments. Precisely because land use is anthropogenic, it is also a factor that could be quickly and immediately corrected by human action.
In this thesis, I therefore assess the impact of three climate change scenarios of increasing intensity in combination with differently scheduled mowing regimes on the long-term development and dispersal success of insects in Northwest German grasslands. The large marsh grasshopper (LMG, Stethophyma grossum, Linné 1758) is used as a species of reference for the analyses. It inhabits wet meadows and marshes and has a limited, yet fairly good ability to disperse. Mowing and climate conditions affect the development and mortality of the LMG differently depending on its life stage.
The specifically developed simulation model HiLEG (High-resolution Large Environmental Gradient) serves as a tool for investigating and projecting viability and dispersal success under different climate conditions and land use scenarios. It is a spatially explicit, stage- and cohort-based model that can be individually configured to represent the life cycle and characteristics of terrestrial insect species, as well as high-resolution environmental data and the occurrence of external disturbances. HiLEG is freely available, adjustable software that can be used to support conservation planning in cultivated grasslands.
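To illustrate the kind of stage- and cohort-based bookkeeping such a model performs, here is a deliberately simplified sketch; the stage names, survival rates, fecundity and mowing mortality are all invented for illustration and are not taken from HiLEG.

```python
# Toy stage-structured cohort model with a mowing disturbance
# (all parameter values invented; not HiLEG's actual implementation).

STAGES = ["egg", "larva", "adult"]
SURVIVAL = {"egg": 0.9, "larva": 0.8, "adult": 0.7}   # per-step survival
TRANSITION = {"egg": "larva", "larva": "adult"}       # stage progression
FECUNDITY = 2.0                                       # eggs per surviving adult

def step(pop, mowing_stage=None, mowing_mortality=0.5):
    """Advance all cohorts one time step; an optional mowing event removes
    a fraction of one vulnerable stage."""
    new = {s: 0.0 for s in STAGES}
    for stage, n in pop.items():
        n *= SURVIVAL[stage]
        if stage == mowing_stage:
            n *= 1.0 - mowing_mortality
        new[TRANSITION.get(stage, stage)] += n        # adults remain adults
    new["egg"] += pop["adult"] * SURVIVAL["adult"] * FECUNDITY
    return new

pop = {"egg": 100.0, "larva": 0.0, "adult": 0.0}
for t in range(3):  # mow in the second step, when the vulnerable larvae are present
    pop = step(pop, mowing_stage="larva" if t == 1 else None)
```

Shifting the mowing step relative to the vulnerable life stage changes the surviving population, which is the core mechanism behind evaluating differently scheduled mowing regimes.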
In the three case studies of this thesis, I explore various aspects related to the structure of simulation models per se, their importance in conservation planning in general, and insights regarding the LMG in particular. It became apparent that the detailed resolution of model processes and components is crucial to project the long-term effect of spatially and temporally confined events. Taking into account conservation measures at the regional level has further proven relevant, especially in light of the climate crisis. I found that the LMG is benefiting from global warming in principle, but continues to be constrained by harmful mowing regimes. Land use measures could, however, be adapted in such a way that they allow the expansion and establishment of the LMG without overly affecting agricultural yields.
Overall, simulation models like HiLEG can make an important contribution and add value to conservation planning and policy-making. Properly used, simulation results shed light on aspects that might be overlooked by subjective judgment and the experience of individual stakeholders. Even though it is in the nature of models that they are subject to limitations and only represent fragments of reality, this should not keep stakeholders from using them, as long as these limitations are clearly communicated. Similar to HiLEG, models could further be designed in such a way that not only the parameterization can be adjusted as required, but also the implementation itself can be improved and changed as desired. This openness and flexibility should become more widespread in the development of simulation models.
Recurrences in past climates
(2023)
Our ability to predict the state of a system relies on its tendency to recur to states it has visited before. Recurrence also pervades common intuitions about the systems we are most familiar with: daily routines, social rituals and the return of the seasons are just a few relatable examples. To this end, recurrence plots (RPs) provide a systematic framework to quantify the recurrence of states. Despite their conceptual simplicity, they are a versatile tool in the study of observational data. The global climate is a complex system for which an understanding based on observational data is not only of academic relevance, but vital for the persistence of human societies within the planetary boundaries. Contextualizing current global climate change, however, requires observational data far beyond the instrumental period. The palaeoclimate record offers a valuable archive of proxy data but demands methodological approaches that adequately address its complexities. In this regard, the following dissertation aims at devising novel methods and further developing existing ones in the framework of recurrence analysis (RA). The proposed research questions focus on using RA to capture scale-dependent properties in nonlinear time series and on tailoring recurrence quantification analysis (RQA) to characterize seasonal variability in palaeoclimate records (‘Palaeoseasonality’).
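At its core, a recurrence plot is a binary matrix marking which pairs of states lie closer together than a threshold. A minimal sketch for a scalar series follows; real applications typically use delay-embedded state vectors and the archive-specific corrections this dissertation develops.

```python
import math

# Minimal recurrence plot: R[i][j] = 1 when states i and j are closer than
# a threshold eps (scalar series here, for illustration only).

def recurrence_matrix(x, eps):
    n = len(x)
    return [[1 if abs(x[i] - x[j]) < eps else 0 for j in range(n)]
            for i in range(n)]

# A short periodic toy series recurs after every full cycle of length 8
series = [math.sin(2 * math.pi * t / 8) for t in range(16)]
R = recurrence_matrix(series, eps=0.1)
print(R[0][8])  # the state at t=0 recurs at t=8, one period later → 1
```

Diagonal and vertical line structures in such matrices are what RQA measures quantify to characterize determinism, laminarity and, in this dissertation, seasonal variability.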
In the first part of this thesis, we focus on the methodological development of novel approaches in RA. The predictability of nonlinear (palaeo)climate time series is limited by abrupt transitions between regimes that exhibit entirely different dynamical complexity (e.g. the crossing of ‘tipping points’). These possibly depend on characteristic time scales. RPs are well established for detecting transitions and capturing scale-dependencies, yet few approaches have combined both aspects. We apply existing concepts from the study of self-similar textures to RPs to detect abrupt transitions, considering the most relevant time scales. This combination of methods further results in the definition of a novel recurrence-based nonlinear dependence measure. Quantifying lagged interactions between multiple variables is a common problem, especially in the characterization of high-dimensional complex systems. The proposed ‘recurrence flow’ measure of nonlinear dependence offers an elegant way to characterize such couplings. For spatially extended complex systems, the coupled dynamics of local variables result in the emergence of spatial patterns. These patterns tend to recur in time. Based on this observation, we propose a novel method that identifies dynamically distinct regimes of atmospheric circulation based on their recurrent spatial patterns. Bridging the two parts of this dissertation, we next turn to methodological advances of RA for the study of Palaeoseasonality. Observational series of palaeoclimate ‘proxy’ records involve inherent limitations, such as irregular temporal sampling. We reveal biases in the RQA of time series with a non-stationary sampling rate and propose a correction scheme.
In the second part of this thesis, we proceed with applications in Palaeoseasonality. A review of common and promising time series analysis methods shows that numerous valuable tools exist, but their sound application requires adaptation to archive-specific limitations and the consolidation of transdisciplinary knowledge. Next, we study stalagmite proxy records from the Central Pacific as sensitive recorders of mid-Holocene El Niño-Southern Oscillation (ENSO) dynamics. The records’ remarkably high temporal resolution allows us to draw links between ENSO and seasonal dynamics, quantified by RA. The final study presented here examines how seasonal predictability could play a role in the stability of agricultural societies. The Classic Maya underwent a period of sociopolitical disintegration that has been linked to drought events. Based on seasonally resolved stable isotope records from Yok Balum cave in Belize, we propose a measure of seasonal predictability. It unveils the potential role that declining seasonal predictability could have played in destabilizing the agricultural and sociopolitical systems of Classic Maya populations.
The methodological approaches and applications presented in this work reveal multiple exciting future research avenues, both for RA and the study of Palaeoseasonality.
Technologically important, environmentally friendly InP quantum dots (QDs), typically used as green and red emitters in display devices, can achieve exceptional photoluminescence quantum yields (PL QYs) of near-unity (95-100%) when the state-of-the-art core/shell heterostructure with a ZnSe inner and ZnS outer shell is carefully applied. Nevertheless, this has so far led to only a few industrial applications, such as QD liquid crystal displays (QD–LCD) with blue backlight units, even though QDs, owing to their functionalizable characteristics, hold the potential for further industrially feasible applications such as QD light-emitting diodes (QD‒LEDs) and luminescent solar concentrators (LSC).
Before introducing the main research, the theoretical basis and fundamentals of QDs are described in detail on the basis of quantum mechanics and experimental synthetic results, where the concepts of QDs and colloidal QDs, the type-I core/shell structure, transition-metal-doped semiconductor QDs, the surface chemistry of QDs, and their applications (LSC, QD‒LEDs, and EHD jet printing) are sequentially elucidated for better understanding. This doctoral thesis mainly focuses on the connectivity between QD materials and QD devices, based on the synthesis of InP QDs that are composed of an inorganic core (core/shell heterostructure) and an organic shell (surface ligands on the QD surface). In particular, as for the former (core/shell heterostructure), a ZnCuInS mid-shell is newly introduced as an intermediate layer between a Cu-doped InP core and a ZnS shell for LSC devices. As for the latter (surface ligands), the effects of 1-octanethiol and chloride-ion ligands are investigated with respect to device stability in QD‒LEDs and the printability of the electro-hydrodynamic (EHD) jet printing system; here, the behavior of surface ligands is explored based on a proton-transfer mechanism on the QD surface.
Chapter 3 demonstrates the synthesis of strain-engineered, highly emissive Cu:InP/Zn–Cu–In–S (ZCIS)/ZnS core/shell/shell heterostructure QDs via a one-pot approach. When this unconventional ZCIS/ZnS double-shelling scheme is applied to a series of Cu:InP cores with different sizes, the resulting Cu:InP/ZCIS/ZnS QDs, with a tunable near-IR PL range of 694–850 nm, yield the highest PL QYs reported to date, of 71.5–82.4%. These outcomes strongly point to the efficacy of the ZCIS interlayer, which effectively alleviates the core/shell interfacial strain, toward high emissivity. The presence of such an intermediate ZCIS layer is further examined by comparative size, structural, and compositional analyses. The end of this chapter briefly introduces the research related to the LSC devices, fabricated from Cu:InP/ZCIS/ZnS QDs, currently in progress.
Chapter 4 mainly deals with the ligand effect of 1-octanethiol passivation of InP/ZnSe/ZnS QDs in terms of incomplete surface passivation during synthesis. This chapter demonstrates the lack of anionic carboxylate ligands on the surface of InP/ZnSe/ZnS quantum dots (QDs), where zinc carboxylate ligands can be converted to carboxylic acid or carboxylate ligands via proton transfer by 1-octanethiol. The as-synthesized QDs initially have an under-coordinated vacancy surface, which is passivated by solvent ligands such as ethanol and acetone. Upon exposure of the QD surface to 1-octanethiol, the thiol effectively induces the surface binding of anionic carboxylate ligands (derived from zinc carboxylate ligands) by proton transfer, which consequently exchanges the ethanol and acetone ligands bound to the incomplete QD surface. Systematic chemical analyses, such as thermogravimetric analysis‒mass spectrometry and proton nuclear magnetic resonance spectroscopy, directly show the interplay of surface ligands, and these findings are linked to QD light-emitting diodes (QD‒LEDs).
Chapter 5 shows the relation between the material stability of QDs and the device stability of QD‒LEDs through an investigation of surface chemistry and shell thickness. In typical III–V colloidal InP QDs, an inorganic ZnS outermost shell is used to provide stability when overcoated onto the InP core. However, this work presents a faster photodegradation of InP/ZnSe/ZnS QDs with a thick ZnS shell than of those with a thin ZnS shell when 1-octanethiol is applied as the sulfur source to form the ZnS outermost shell. Here, 1-octanethiol induces the formation of weakly bound carboxylate ligands via proton transfer on the QD surface, resulting in faster degradation under UV light even though a thicker ZnS shell was formed onto the InP/ZnSe QDs. Detailed insight into the surface chemistry was obtained from proton nuclear magnetic resonance spectroscopy and thermogravimetric analysis–mass spectrometry. However, the lifetimes of electroluminescence devices fabricated from InP/ZnSe/ZnS QDs with a thick or a thin ZnS shell surprisingly show the opposite trend to the material stability of the QDs: the QD‒LEDs with thick-ZnS-shelled QDs maintained their luminance more stably than those with thin-ZnS-shelled QDs. This study elucidates the degradation mechanisms of the QDs and the QD‒LEDs based on these results and discusses why the material stability of the QDs differs from the lifetime of the QD‒LEDs.
Chapter 6 suggests a method to improve the printability of EHD jet printing when QD materials are used in QD ink formulations, introducing GaP mid-shelled InP QDs and highlighting the role of surface charge in the EHD jet printing technique. In general, a GaP intermediate shell has been introduced into III–V colloidal InP QDs to enhance their thermal stability and quantum efficiency, as in the type-I core/shell/shell heterostructure of InP/GaP/ZnSeS QDs. Here, these highly luminescent InP/GaP/ZnSeS QDs were synthesized and applied to EHD jet printing, by which this study demonstrates that unreacted Ga and Cl ions on the QD surface reduce the operating voltage of the cone jet and stabilize cone-jet formation. This result indicates that the GaP intermediate shell not only improves the PL QY and thermal stability of InP QDs but also adjusts the critical flow rate required for cone-jet formation. In other words, the surface charges of quantum dots can play a significant role in forming the cone apex in the EHD capillary nozzle. For an industrially convenient validation of the surface charges on the QD surface, zeta-potential analyses of QD solutions were performed as a simple method, alongside inductively coupled plasma optical emission spectrometry (ICP-OES) for elemental composition.
Beyond the generation of highly emissive InP QDs with narrow FWHM, these studies address the connection between QD materials and QD devices, not only to provide a vital jumping-off point for industrially feasible applications but also to reveal, from chemical and physical standpoints, the experimentally and theoretically identified origins that obstruct the improvement of device performance.
This dissertation aimed to determine differentially expressed miRNAs in the context of chronic pain in polyneuropathy. For this purpose, patients with chronic painful polyneuropathy were compared with age-matched healthy controls. Taken together, all pre-library-preparation miRNA quality controls were successful, and none of the samples was identified as an outlier or excluded from library preparation. Pre-sequencing quality control showed that library preparation worked for all samples, that all samples were free of adapter dimers after BluePippin size selection, and that they reached the minimum molarity for further processing. Thus, all samples were subjected to sequencing. The sequencing control parameters were in their optimal ranges and resulted in valid sequencing results with strong sample-to-sample correlation for all samples. The resulting FASTQ file of each miRNA library was analyzed and used to perform a differential expression analysis. The differentially expressed and filtered miRNAs were subjected to miRDB for target prediction. Three of the four miRNAs were downregulated (hsa-miR-3135b, hsa-miR-584-5p and hsa-miR-12136), while one was upregulated (hsa-miR-550a-3p). miRNA target prediction showed that chronic pain in polyneuropathy might result from a combination of miRNA-mediated high blood flow/pressure and dysregulations/imbalances of neural activity. This leads to the promising conclusion that these four miRNAs could serve as potential biomarkers for the diagnosis of chronic pain in polyneuropathy.
Since TRPV1 seems to be one of the major contributors to nociception and is associated with neuropathic pain, the influence of PKA-phosphorylated ARMS on the sensitivity of TRPV1, as well as the role of AKAP79 during PKA phosphorylation of ARMS, was characterized. To this end, possible PKA sites in the sequence of ARMS were identified. This revealed five canonical PKA sites: S882, T903, S1251/52, S1439/40 and S1526/27. The single PKA-site mutants of ARMS revealed that PKA-mediated ARMS phosphorylation does not seem to influence the TRPV1/ARMS interaction rate. While phosphorylation of ARMS-T903 does not increase the interaction rate with TRPV1, ARMS-S1526/27 is probably not phosphorylated and leads to an increased interaction rate. The calcium flux measurements indicated that the higher the TRPV1/ARMS interaction rate, the lower the EC50 of TRPV1 for capsaicin, independent of the PKA phosphorylation status of ARMS. In addition, western blot analysis confirmed the previously observed TRPV1/ARMS interaction. More importantly, AKAP79 seems to be involved in the TRPV1/ARMS/PKA signaling complex. To overcome the problem of ARMS-mediated TRPV1 sensitization by interaction, ARMS was silenced by shRNA. ARMS silencing restored TRPV1 desensitization without affecting TRPV1 expression and could therefore be used as a new topical analgesic alternative to stop ARMS-mediated TRPV1 sensitization.
In this work, binding interactions between biomolecules were analyzed by a technique that is based on electrically controllable DNA nanolevers. The technique was applied to virus-receptor interactions for the first time. As receptors, primarily peptides on DNA nanostructures and antibodies were utilized. The DNA nanostructures were integrated into the measurement technique and enabled the presentation of the peptides in a controllable geometrical order. The number of peptides could be varied to be compatible with the binding sites of the viral surface proteins.
Influenza A virus served as a model system, on which the general measurability was demonstrated. Variations of the receptor peptide, the surface ligand density, the measurement temperature and the virus subtypes showed the sensitivity and applicability of the technology. Additionally, the immobilization of virus particles enabled the measurement of differences in oligovalent binding of DNA-peptide nanostructures to the viral proteins in their native environment.
When the coronavirus pandemic broke out in 2020, work on binding interactions of a peptide from the hACE2 receptor and the spike protein of the SARS-CoV-2 virus revealed that oligovalent binding can be quantified with the switchSENSE technology. It could also be shown that small changes in the amino acid sequence of the spike protein resulted in a complete loss of binding. Interactions of the peptide with inactivated virus material as well as pseudo-virus particles could be measured. Additionally, the switchSENSE technology was utilized to rank six antibodies by their binding affinity towards the nucleocapsid protein of SARS-CoV-2 for the development of a rapid antigen test device.
The technique was furthermore employed to show binding of a non-enveloped virus (adenovirus) and a virus-like particle (norovirus-like particle) to antibodies. Apart from binding interactions, the use of DNA origami levers with a length of around 50 nm enabled the switching of virus material. This proved that the technology is also able to size objects with a hydrodynamic diameter larger than 14 nm.
A theoretical work on diffusion and reaction-limited binding interactions revealed that the technique and the chosen parameters enable the determination of binding rate constants in the reaction-limited regime.
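The distinction between reaction- and diffusion-limited binding can be illustrated with a back-of-the-envelope Damköhler estimate, comparing the reactive flux at the sensor surface with the diffusive supply of analyte. This is my own sketch with purely illustrative values, not the thesis' actual calculation:

```python
# Damkoehler number Da = k_on * b_max * h / D compares surface reaction
# flux to diffusive flux; Da << 1 indicates the reaction-limited regime,
# in which rate constants extracted from binding curves are reliable.
# All parameter values below are invented for illustration.

def damkoehler(k_on, b_max, h, D):
    """k_on [m^3/(mol*s)], b_max [mol/m^2], h [m], D [m^2/s] -> Da."""
    return k_on * b_max * h / D

# illustrative virus-scale parameters: slow diffusion, sparse receptors
Da = damkoehler(k_on=1e3, b_max=1e-9, h=1e-6, D=5e-12)
regime = "reaction" if Da < 1 else "diffusion"
print(f"Da = {Da:.2g} -> {regime}-limited")
```

For Da well below one, the measured on-rate reflects the intrinsic binding kinetics rather than mass transport.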
Overall, the applicability of the switchSENSE technique to virus-receptor binding interactions was demonstrated on multiple examples. While challenges remain, the setup enables the determination of affinities between viruses and receptors in their native environment. In particular, the possibilities for quantifying oligo- and multivalent binding interactions were demonstrated.
The central gas in half of all galaxy clusters shows short cooling times. Assuming unimpeded cooling, this should lead to high star formation and mass cooling rates, which are not observed. Instead, it is believed that condensing gas is accreted by the central black hole, which powers an active galactic nucleus jet that heats the cluster. The detailed heating mechanism remains uncertain. A promising mechanism invokes cosmic ray protons that scatter on self-generated magnetic fluctuations, i.e. Alfvén waves. Continuous damping of Alfvén waves provides heat to the intracluster medium. Previous work has found steady-state solutions for a large sample of clusters where cooling is balanced by Alfvénic wave heating. To verify modeling assumptions, we set out to study cosmic ray injection in three-dimensional magnetohydrodynamical simulations of jet feedback in an idealized cluster with the moving-mesh code arepo. We analyze the interaction of jet-inflated bubbles with the turbulent magnetized intracluster medium.
Furthermore, jet dynamics and heating are closely linked to the largely unconstrained jet composition. Interactions of electrons with photons of the cosmic microwave background result in observational signatures that depend on the bubble content. Recent observations provided evidence for underdense bubbles with a relativistic filling, while adopting simplifying modeling assumptions for the bubbles. By reproducing the observations with our simulations, we confirm the validity of these modeling assumptions and, as such, the important finding of low-(momentum) density jets.
In addition, the velocity and magnetic field structure of the intracluster medium have profound consequences for bubble evolution and heating processes. As velocity and magnetic fields are physically coupled, we demonstrate that numerical simulations can help link, and thereby constrain, their respective observables. Finally, we implement the currently preferred accretion model, cold accretion, into the moving-mesh code arepo and study feedback by light jets in a radiatively cooling magnetized cluster. While self-regulation is attained independently of the accretion model, jet density and feedback efficiencies, we find that light jets are preferred in order to reproduce the observed cold-gas morphology.
Facing the environmental crisis, new technologies are needed to sustain our society. In this context, this thesis aims to describe the properties and applications of carbon-based sustainable materials. In particular, it reports the synthesis and characterization of a wide set of porous carbonaceous materials with high nitrogen content obtained from nucleobases. These materials are used as cathodes for Li-ion capacitors, and a major focus is put on the cathode preparation, highlighting the oxidation resistance of nucleobase-derived materials. Furthermore, their catalytic properties for acid/base and redox reactions are described, pointing to the role of nitrogen speciation on their surfaces. Finally, these materials are used as supports for highly dispersed nickel loading, activating the materials for carbon dioxide electroreduction.
Selenium (Se) is an essential trace element that is ubiquitously present in the environment in small concentrations. The essential functions of Se in the human body are manifested through a wide range of proteins containing selenocysteine as their active center. Such proteins, called selenoproteins, are found in multiple physiological processes such as antioxidative defense and the regulation of thyroid hormone functions. Therefore, Se deficiency is known to cause a broad spectrum of physiological impairments, especially in endemic regions with low Se content. Nevertheless, despite being an essential trace element, Se can exhibit toxic effects if its intake exceeds tolerable levels. Accordingly, the range between deficiency and overexposure represents the optimal Se supply. However, this range was found to be narrower than for any other essential trace element. Together with significantly varying Se concentrations in soil and the presence of specific bioaccumulation factors, this represents a noticeable difficulty in the assessment of the Se epidemiological status. While Se acts in the body through multiple selenoproteins, its intake occurs mainly in the form of small organic or inorganic low-molecular-mass species. Thus, Se exposure depends not only on daily intake but also on the respective chemical form in which it is present.
The essential functions of selenium have been known for a long time, and its primary forms in different food sources have been described. Nevertheless, analytical capabilities for a comprehensive investigation of Se species and their derivatives have been introduced only in the last decades. A new Se compound was identified in 2010 in the blood and tissues of bluefin tuna. It was called selenoneine (SeN), since it is an isologue of the naturally occurring antioxidant ergothioneine (ET) in which Se replaces sulfur. In the following years, SeN was identified in a number of edible fish species and attracted attention as a new dietary Se source and a potentially strong antioxidant. Studies in populations whose diet relies largely on fish revealed that SeN represents the main non-protein-bound Se pool in their blood. First studies, conducted with enriched fish extracts, already demonstrated the high antioxidative potential of SeN and its possible function in the detoxification of methylmercury in fish. Cell culture studies demonstrated that SeN can utilize the same transporter as ergothioneine, and a SeN metabolite was found in human urine.
Until recently, studies on SeN properties were severely limited by the lack of ways to obtain the pure compound. A precondition for this work was, first, a successful approach to SeN synthesis at the University of Graz utilizing genetically modified yeasts. In the current study, using HepG2 liver carcinoma cells, it was demonstrated that SeN does not cause toxic effects in hepatocytes up to a 100 μM concentration. Uptake experiments showed that SeN is not bioavailable to the used liver cells.
In the next part, a blood-brain barrier (BBB) model based on capillary endothelial cells from the porcine brain was used to describe the possible transfer of SeN into the central nervous system (CNS). The assessment of toxicity markers in these endothelial cells and the monitoring of barrier conditions during transfer experiments demonstrated the absence of toxic effects of SeN on the BBB endothelium up to a 100 μM concentration. Transfer data for SeN showed slow but substantial transfer. A statistically significant increase was observed 48 hours after SeN incubation from the blood-facing side of the barrier. However, an increase in Se content was clearly visible already after 6 hours of incubation with 1 μM SeN. While the transfer rate of SeN after application of a 0.1 μM dose was very close to that for 1 μM, incubation with 10 μM SeN resulted in a significantly decreased transfer rate. Double-sided application of SeN caused no side-specific transfer of SeN, suggesting a passive diffusion mechanism of SeN across the BBB. These data are in accordance with animal studies, where ET accumulation was observed in the rat brain even though the rat BBB does not have the primary ET transporter, OCTN1. Investigation of capillary endothelial cell monolayers after incubation with SeN and reference selenium compounds showed no significant increase of intracellular selenium concentration. Species-specific Se measurements in medium samples from the apical and basolateral compartments, as well as in cell lysates, showed no SeN metabolization. Therefore, it can be concluded that SeN may reach the brain without significant transformation.
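Transfer rates in such in vitro barrier models are commonly condensed into an apparent permeability coefficient, Papp = (dQ/dt) / (A · C0). The sketch below is a generic illustration with invented numbers, not data from this work:

```python
# Apparent permeability across a cell-culture insert:
# Papp = (dQ/dt) / (A * C0), where dQ/dt is the rate of compound
# appearing in the receiver compartment, A the insert area, and C0
# the initial donor concentration.  All values below are illustrative.

def apparent_permeability(dQ_dt, area_cm2, c0):
    """dQ/dt in nmol/s, area in cm^2, c0 in nmol/cm^3 -> Papp in cm/s."""
    return dQ_dt / (area_cm2 * c0)

# e.g. 0.02 nmol transferred per hour, a 1.12 cm^2 insert,
# and a 1 uM donor solution (1 uM = 1 nmol/cm^3)
papp = apparent_permeability(dQ_dt=0.02 / 3600, area_cm2=1.12, c0=1.0)
print(f"Papp = {papp:.2e} cm/s")
```

Comparing Papp across donor concentrations (0.1, 1 and 10 μM) is what reveals the concentration-dependent behavior described above.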
In the third part of this work, the assessment of SeN antioxidant properties was performed in Caco-2 human colorectal adenocarcinoma cells. Previous studies demonstrated that the intestinal epithelium is able to actively transport SeN from the intestinal lumen to the blood side and to accumulate SeN. Further investigation within the current work showed a much higher antioxidant potential of SeN compared to ET. The radical scavenging activity after incubation with SeN was close to that observed for selenite and selenomethionine. However, the effect of SeN on the viability of intestinal cells under oxidative conditions was close to that caused by ET. To answer the question of whether SeN can be used as a dietary Se source and induce the activity of selenoproteins, the activity of glutathione peroxidase (GPx) and the secretion of selenoprotein P (SelenoP) were additionally measured in Caco-2 cells. As expected, the reference selenium compounds selenite and selenomethionine caused efficient induction of GPx activity. In contrast to these, SeN had no effect on GPx activity. To examine the possibility of SeN being embedded into the selenoproteome, SelenoP was measured in the culture medium. Even though Caco-2 cells effectively take up SeN in quantities much higher than selenite or selenomethionine, no secretion of SelenoP was observed after SeN incubation.
In summary, we can conclude that SeN can hardly serve as a Se source for selenoprotein synthesis. However, SeN exhibits strong antioxidative properties, which arise when the sulfur in ET is exchanged for Se. Therefore, SeN is of particular interest for research not as a part of Se metabolism but as an important endemic dietary antioxidant.
Among the multitude of geomorphological processes, aeolian shaping processes are of special character because, even though their immediate impact can be considered low (exceptions exist), their constant and large-scale force makes them a powerful player in the Earth system. Pedogenic dust is one of the most important sources of atmospheric aerosols and is therefore regarded as a key player in atmospheric processes. Soil dust emissions, being complex in composition and properties, influence atmospheric processes and air quality and have impacts on other ecosystems. In this dissertation, we develop a novel scientific understanding of this complex system based on a holistic dataset acquired during a series of field experiments on arable land in La Pampa, Argentina. The field experiments as well as the generated data provide information about topography, various soil parameters, and the atmospheric dynamics in the lowermost atmosphere (up to 4 m height), as well as measurements of aeolian particle movement across a wide range of particle size classes from 0.2 μm up to coarse sand.
The investigations focus on three topics: (a) the effects of low-scale landscape structures on aeolian transport processes of the coarse particle fraction, (b) the horizontal and vertical fluxes of the very fine particles and (c) the impact of wind gusts on particle emissions.
Among other considerations presented in this thesis, it could in particular be shown that, even though the small-scale topology has a clear impact on erosion and deposition patterns, physical soil parameters also need to be taken into account for a robust statistical modelling of the latter. Furthermore, the vertical fluxes of particulate matter show different characteristics for the different particle size classes. Finally, a novel statistical measure was introduced to quantify the impact of wind gusts on particle uptake, and it was applied to the provided data set. The aforementioned measure shows significantly increased particle concentrations during points in time defined as gust events.
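The idea behind conditioning particle concentrations on gust events can be sketched as follows. This is a simplified illustration of the concept with a made-up threshold definition and toy data; the thesis' actual statistical measure is not reproduced here:

```python
# Toy gust-conditioning sketch: flag time steps whose wind speed exceeds
# the record mean by a chosen factor, then compare mean particle
# concentration during gusts against the rest of the record.
from statistics import mean

def gust_ratio(wind, conc, factor=1.5):
    """Mean concentration during gusts / mean concentration otherwise."""
    u_mean = mean(wind)
    gust = [c for u, c in zip(wind, conc) if u > factor * u_mean]
    calm = [c for u, c in zip(wind, conc) if u <= factor * u_mean]
    return mean(gust) / mean(calm)

wind = [2.0, 2.2, 6.5, 2.1, 7.0, 2.0]       # m/s, two gust events
conc = [10.0, 11.0, 40.0, 10.0, 55.0, 9.0]  # arbitrary units
print(gust_ratio(wind, conc))
```

A ratio well above one, as in this toy record, corresponds to the finding that concentrations are significantly elevated during gust events.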
With its holistic approach, this thesis further contributes to the fundamental understanding of how atmosphere and pedosphere are intertwined and affect each other.
The increasing demand for energy in the current technological era and recent political decisions to give up nuclear energy have turned humanity's focus to alternative, environmentally friendly energy sources such as solar energy. Although silicon solar cells are the product of a mature technology, the search for highly efficient and easily applicable materials is still ongoing. The excellent optoelectronic properties of halide perovskites made their single-junction efficiency comparable with silicon solar cells within a decade of research. However, the downsides of halide perovskites are poor stability and, for the most stable ones, lead toxicity.
Chalcogenide perovskites, on the other hand, are among the most promising absorber materials for the photovoltaic market due to their elemental abundance and chemical stability against moisture and oxygen. In the search for the ultimate solar absorber material, combining the good optoelectronic properties of halide perovskites with the stability of chalcogenides could yield a promising candidate.
Thus, this work investigates new techniques for the synthesis and design of these novel chalcogenide perovskites, that contain transition metals as cations, e.g., BaZrS3, BaHfS3, EuZrS3, EuHfS3 and SrHfS3. There are two stages in the deposition techniques of this study: In the first stage, the binary compounds are deposited via a solution processing method. In the second stage, the deposited materials are annealed in a chalcogenide atmosphere to form the perovskite structure by using solid-state reactions.
The research also focuses on the optimization of a generalized recipe for a molecular ink to deposit precursors of chalcogenide perovskites with different binaries. Sulfurization of the precursors resulted either in binaries without perovskite formation or in distorted perovskite structures; for some of these materials, the literature indeed reports that the needle-like non-perovskite configuration is more favorable.
Lastly, there are two categories for the evaluation of the produced materials: The first category is about the determination of the physical properties of the deposited layer, e.g., crystal structure, secondary phase formation, impurities, etc. For the second category, optoelectronic properties are measured and compared to an ideal absorber layer, e.g., band gap, conductivity, surface photovoltage, etc.
In the last century, several astronomical measurements have supported that a significant fraction (about 22%) of the total mass of the Universe, on galactic and extragalactic scales, is composed of a mysterious "dark" matter (DM). DM does not interact via the electromagnetic force; in other words, it does not reflect, absorb or emit light. It is possible that DM particles are weakly interacting massive particles (WIMPs) that can annihilate (or decay) into Standard Model (SM) particles, and modern very-high-energy (VHE; > 100 GeV) instruments such as imaging atmospheric Cherenkov telescopes (IACTs) can play an important role in constraining the main properties of such DM particles by detecting these products. Among the most promising targets in which to look for a DM signal are dwarf spheroidal galaxies (dSphs), as they are expected to be highly DM-dominated objects with a clean, gas-free environment. Given the angular resolution of IACTs, some dSphs should be considered extended sources, and this resolution is adequate to detect extended emission from dSphs. For this reason, we performed an extended-source analysis, taking into account both the energy and the angular-extension dependency of observed events in the unbinned maximum likelihood estimation. The goal was to set more constraining upper limits on the velocity-averaged annihilation cross-section of WIMPs with VERITAS data. VERITAS is an array of four IACTs, able to detect γ-ray photons ranging between 100 GeV and 30 TeV. The results of this extended analysis were compared against the traditional spectral analysis. We found that a 2D analysis may lead to more constraining results, depending on the DM mass, channel, and source. Moreover, the results of a multi-instrument project are also presented in this thesis.
Its goal was to combine already published data on 20 dSphs from five different experiments (Fermi-LAT, MAGIC, H.E.S.S., VERITAS and HAWC) in order to set upper limits on the WIMP annihilation cross-section in the widest mass range ever reported.
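Combining limits across experiments rests on the fact that independent datasets multiply in the likelihood, so their log-likelihood curves add in the shared parameter, here the annihilation cross-section. The sketch below is purely schematic, with parabolic toy curves standing in for the real per-experiment likelihoods; it is not the collaboration's analysis pipeline:

```python
# Schematic combined upper limit: sum per-experiment -lnL curves on a
# shared parameter grid and find where the total rises 2.71/2 = 1.355
# above its minimum (one-sided 95% CL for one parameter of interest).

def combined_upper_limit(curves, grid):
    """curves: callables x -> -lnL (toy); grid: ascending candidate values."""
    total = [sum(c(x) for c in curves) for x in grid]
    lmin = min(total)
    best = grid[total.index(lmin)]
    for x, l in zip(grid, total):
        if x > best and l >= lmin + 1.355:
            return x  # first grid point past the best fit above threshold
    return None

# two toy "experiments" with different sensitivities to the parameter
toy = [lambda x: (x - 1.0) ** 2, lambda x: 0.5 * (x - 2.0) ** 2]
grid = [i * 0.01 for i in range(0, 1001)]
ul = combined_upper_limit(toy, grid)
print(round(ul, 2))
```

Note that the combined limit lands between the individual best fits and is tighter than either toy curve alone would give at the same confidence level.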
Abzug unter Beobachtung (Withdrawal under Surveillance)
(2022)
For more than four decades, the armed forces and military intelligence services of the NATO states observed the Soviet troops stationed in the GDR. In the Federal Republic of Germany, the Bundesnachrichtendienst (BND) took on this military foreign intelligence task, applying intelligence means and methods. The Bundeswehr, by contrast, conducted tactical signals and electronic intelligence and primarily intercepted the radio traffic of the "Group of Soviet Forces in Germany" (GSSD). With the establishment of a central agency for military intelligence, the Amt für Nachrichtenwesen der Bundeswehr, the Federal Ministry of Defence consolidated and at the same time expanded its analytical capacities in the 1980s. The BND's monopoly on military foreign intelligence was thereby increasingly called into question by the Bundeswehr.
After German reunification on 3 October 1990, more than 300,000 Soviet soldiers were still present on German territory. According to the Two Plus Four Treaty, the GSSD, renamed the Western Group of Forces (WGT) in 1989, was to withdraw completely by 1994. The treaty also prohibited the three Western powers from engaging in military activity in the new federal states. The Western powers' military liaison missions, until then indispensable for military intelligence, had to cease their services. But what happened to this "Allied legacy"? Who on the German side took over the surveillance of the Soviet troops, and who monitored the withdrawal?
The study examines the role of the Bundeswehr and the BND during the withdrawal of the WGT between 1990 and 1994, asking about cooperation and competition between the armed forces and the intelligence services. Which military and intelligence means and capabilities did the Federal Government provide to manage the troop withdrawal after the Western military liaison missions had been dissolved? How did the requirements placed on the BND's military foreign intelligence change? To what extent did the competition and cooperation between the Bundeswehr and the BND continue during the troop withdrawal? What role did the former Western powers play? The study understands itself as a contribution not only to military history but also to the history of the German intelligence services.
Complex emulsions are dispersions of kinetically stabilized multiphasic emulsion droplets composed of two or more immiscible liquids that provide a novel material platform for the generation of active and dynamic soft materials. In recent years, the intrinsic reconfigurable morphological behavior of complex emulsions, which can be attributed to the unique force equilibrium between the interfacial tensions acting at the various interfaces, has become of fundamental and applied interest. In particular, biphasic Janus droplets have been investigated as structural templates for the generation of anisotropic precision objects and dynamic optical elements, or as transducers and signal amplifiers in chemo- and bio-sensing applications. In the present thesis, switchable internal morphological responses of complex droplets, triggered by stimuli-induced alterations of the balance of interfacial tensions, have been explored as a universal building block for the design of multiresponsive, active, and adaptive liquid colloidal systems. A series of underlying principles and mechanisms that influence the equilibrium of interfacial tensions have been uncovered, which allowed the targeted design of emulsion bodies that can alter their shape, bind to and roll on surfaces, or change their geometrical shape in response to chemical stimuli. Consequently, combining the unique triggerable behavior of Janus droplets with designer surfactants, such as a stimuli-responsive photosurfactant (AzoTAB), resulted for instance in shape-changing soft colloids that exhibited a jellyfish-inspired buoyant motion behavior, holding great promise for the design of biologically inspired active material architectures and transformable soft robotics.
In situ observations of spherical Janus emulsion droplets using a customized side-view microscopic imaging setup with accompanying pendant drop measurements disclosed the sensitivity regime of the unique chemical-morphological coupling inside complex emulsions and enabled the recording of calibration curves for the extraction of critical parameters of surfactant effectiveness. The deduced new "responsive drop" method permitted a convenient and cost-efficient quantification and comparison of the critical micelle concentrations (CMCs) and effectiveness of various cationic, anionic, and nonionic surfactants. Moreover, the method allowed insightful characterization of stimuli-responsive surfactants and monitoring of the impact of inorganic salts on the CMC and surfactant effectiveness of ionic and nonionic surfactants. Droplet functionalization with synthetic crown ether surfactants yielded a synthetically minimal material platform capable of autonomous and reversible adaptation to its chemical environment through different supramolecular host-guest recognition events. Addition of metal or ammonium salts resulted in the uptake of the resulting hydrophobic complexes into the hydrocarbon hemisphere, whereas addition of hydrophilic ammonium compounds such as amino acids or polypeptides resulted in supramolecular assemblies at the hydrocarbon-water interface of the droplets. The multiresponsive material platform enabled interfacial complexation and
thus triggered responses of the droplets to a variety of chemical triggers including metal ions, ammonium compounds, amino acids, antibodies, carbohydrates as well as amino-functionalized solid surfaces.
In the final chapter, the first documented optical logic gates and combinatorial logic circuits based on complex emulsions are presented. More specifically, the unique reconfigurable and multiresponsive properties of complex emulsions were exploited to realize droplet-based logic gates of varying complexity using different stimuli-responsive surfactants in combination with diverse readout methods. In summary, different designs for multiresponsive, active, and adaptive liquid colloidal systems were presented and investigated, enabling the design of novel transformative chemo-intelligent soft material platforms.
Cosmic rays (CRs) constitute an important component of the interstellar medium (ISM) of galaxies and are thought to play an essential role in governing their evolution. In particular, they are able to impact the dynamics of a galaxy by driving galactic outflows or heating the ISM, thereby affecting the efficiency of star formation. Hence, in order to understand galaxy formation and evolution, we need to accurately model this non-thermal constituent of the ISM. Yet apart from our local environment within the Milky Way, we do not have the ability to measure CRs directly in other galaxies. However, there are many ways to observe CRs indirectly via the radiation they emit due to their interaction with magnetic and interstellar radiation fields as well as with the ISM.
In this work, I develop a numerical framework to calculate the spectral distribution of CRs in simulations of isolated galaxies where a steady-state between injection and cooling is assumed. Furthermore, I calculate the non-thermal emission processes arising from the modelled CR proton and electron spectra ranging from radio wavelengths up to the very high-energy gamma-ray regime.
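The steady-state balance between injection and cooling can be illustrated with a one-line spectral argument. This is my own minimal sketch of the general principle, not the thesis' numerical framework: for a power-law injection Q(E) ∝ E^(-γ) balanced by cooling on a timescale τ(E), the steady-state spectrum is N(E) = Q(E) · τ(E), so a cooling time scaling as τ ∝ E^(-1) (as for synchrotron and inverse-Compton losses of electrons) steepens the spectrum by one power of E:

```python
# Steady-state spectral index: N(E) = Q(E) * tau(E) with
# Q ~ E^-gamma_inj and tau ~ E^-cooling_index gives
# N ~ E^-(gamma_inj + cooling_index).

def steady_state_index(gamma_inj, cooling_index):
    """Spectral index of the steady-state distribution N(E)."""
    return gamma_inj + cooling_index

# e.g. an (assumed) injection index of 2.2 with synchrotron/IC cooling
print(steady_state_index(2.2, 1.0))
```

The injection index of 2.2 is an illustrative, textbook-like value for shock acceleration, not a result quoted from this work.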
I apply this code to a number of high-resolution magneto-hydrodynamical (MHD) simulations of isolated galaxies, where CRs are included. This allows me to study their CR spectra and compare them to observations of the CR proton and electron spectra by the Voyager-1 satellite and the AMS-02 instrument in order to reveal the origin of the measured spectral features.
Furthermore, I provide detailed emission maps, luminosities and spectra of the non-thermal emission from our simulated galaxies, which range from dwarfs to Milky Way analogues to starburst galaxies at different evolutionary stages. I successfully reproduce the observed relations between the radio and gamma-ray luminosities and the far-infrared (FIR) emission of star-forming (SF) galaxies, where the latter is a good tracer of the star-formation rate. I find that highly SF galaxies are close to the calorimetric limit, where their CR population would lose all of its energy to the emission of radiation, whereas CRs tend to escape low-SF galaxies more quickly. On top of that, I investigate the properties of CR transport that are needed in order to match the observed gamma-ray spectra.
Furthermore, I uncover the underlying processes that enable the FIR-radio correlation (FRC) to be maintained even in starburst galaxies and find that thermal free-free emission naturally explains the observed radio spectra in SF galaxies like M82 and NGC 253, thus solving the riddle of flat radio spectra that had been proposed to contradict the observed tight FRC.
Lastly, I scrutinise the steady-state modelling of the CR proton component by investigating for the first time the influence of spectrally resolved CR transport in MHD simulations on the hadronic gamma-ray emission of SF galaxies revealing new insights into the observational signatures of CR transport both spectrally and spatially.
Stimuli-promoted in situ formation of hydrogels with thiol/thioester containing peptide precursors
(2022)
Hydrogels are potential synthetic ECM-like substitutes since they provide functional and structural similarities to soft tissues. They can be prepared by crosslinking of macromolecules or by polymerizing suitable precursors. The crosslinks are not necessarily covalent bonds but can also be formed by physical interactions such as π-π interactions, hydrophobic interactions, or H-bonding. On-demand, in situ forming hydrogels have garnered increased interest over preformed gels, especially for biomedical applications, due to the relative ease of in vivo delivery and filling of cavities. The thiol-Michael addition reaction provides a straightforward and robust strategy for in situ gel formation owing to its fast reaction kinetics and its ability to proceed under physiological conditions. The incorporation of a trigger function into a crosslinking system becomes even more interesting, since gelling can then be controlled with the stimulus of choice. The use of small-molar-mass crosslinker precursors with active groups orthogonal to the thiol-Michael-type electrophile provides the opportunity to implement on-demand in situ crosslinking without compromising the fast reaction kinetics.
It was postulated that short peptide sequences, owing to the broad range of structure-function relations available with the different constituent amino acids, can be exploited for the realisation of stimuli-promoted in situ covalent crosslinking and gelation applications. The advantage of this system over conventional polymer-polymer hydrogel systems is the ability to tune and predict material properties at the molecular level.
The main aim of this work was to develop a simplified and biologically-friendly stimuli-promoted in situ crosslinking and hydrogelation system using peptide mimetics as latent crosslinkers. The approach aims at using a single thiodepsipeptide sequence to achieve separate pH- and enzyme-promoted gelation systems with little modification to the thiodepsipeptide sequence. The realization of this aim required the completion of three milestones.
In the first place, after deciding on the thiol-Michael reaction as an effective in situ crosslinking strategy, a thiodepsipeptide, Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH (TDP), with an expected propensity towards pH-dependent thiol-thioester exchange (TTE) activation, was proposed as a suitable crosslinker precursor for the pH-promoted gelation system. Prior to the synthesis of the proposed peptide mimetic, knowledge of the thiol-Michael reactivity of the would-be activated thiol moiety, SH-Leu, which is internally embedded in the thiodepsipeptide, was required. In line with the pKa requirements for a successful TTE, the reactivity of a more acidic thiol, SH-Phe, was also investigated to aid the selection of the best thiol to be incorporated into the thioester-bearing peptide-based crosslinker precursor. Using 'pseudo'-2D-NMR investigations, it was found that only reactions involving SH-Leu yielded the expected thiol-Michael product, an observation attributed to the steric hindrance of the bulkier SH-Phe. The fast reaction rates and complete acrylate/maleimide conversion obtained with SH-Leu at pH 7.2 and higher supported the direct elimination of SH-Phe as a potential thiol for the synthesis of the peptide mimetic.
Based on these initial studies, the proposed Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH was kept unmodified for the pH-promoted gelation system. The subtle difference in pKa values between SH-Leu (the thioester thiol) and the terminal cysteamine thiol should, from theoretical considerations, be sufficient to effect a 'pseudo' intramolecular TTE. In polar protic solvents and under basic aqueous conditions, TDP successfully undergoes a 'pseudo' intramolecular TTE reaction to yield an α,ω-dithiol tripeptide, HSLeu-Leu-Gly-NEtSH. The pH dependence of thiolate ion generation by the cysteamine thiol provided the needed stimulus (pH) for the overall success of the TTE activation step followed by thiol-Michael addition crosslinking.
Secondly, with potential biomedical applications in focus, the susceptibility of TDP, like other thioesters, to intermolecular TTE was probed with a group of thiols of varying pKa values, since biological milieus characteristically contain peptide/protein thiols. L-cysteine, a biologically relevant thiol, and a small-molecular-weight thiol, methylthioglycolate, both with relatively similar thiol pKa values, led to an increased concentration of the dithiol crosslinker when reacted with TDP. In the presence of acidic thiols (p-NTP and 4MBA), a decrease in the dithiol concentration was observed, which can be attributed to the inability of the TTE tetrahedral intermediate to dissociate into exchange products and is in line with the pKa requirements for a successful TTE reaction. These results make TDP additionally attractive and potentially the first crosslinker precursor for applications in biologically relevant media.
Finally, the ability of TDP to promote pH-sensitive in situ gel formation was probed with maleimide-functionalized 4-arm polyethylene glycol polymers in tris-buffered media of varying pH. At a 1:1 thiol:maleimide molar ratio, TDP-PEG4MAL hydrogels formed within 3, 12 and 24 hours at pH values of 8.5, 8.0 and 7.5, respectively. However, gelation times of 3, 5 and 30 min were observed for the same pH trend when the thiol:maleimide molar ratio was increased to 2:1.
A direct correlation of thiol content with the storage modulus (G') of the gels at each pH could also be drawn by comparing gels with 1:1 and 2:1 thiol:maleimide molar ratios. This is consistent with the linear dependence of G' on the crosslinking density of the polymer network. The initial G' of all gels ranged from 200 to 5000 Pa, which falls within the range of elasticities of certain tissue microenvironments, for example brain tissue (200-1000 Pa) and adipose tissue (2500-3500 Pa).
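The linear dependence of G' on crosslink density invoked here comes from rubber-elasticity theory, where G' ≈ ν k_B T for ν elastically active crosslinks per unit volume. A minimal sketch of this back-of-the-envelope conversion (the affine rubber-elasticity model and the temperature are assumptions for illustration, not part of the thesis analysis):

```python
# Estimate the elastically active crosslink number density from the
# storage modulus via affine rubber elasticity: G' = nu * k_B * T.
# Illustrative only; the model choice and temperature are assumptions.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def crosslink_density(g_prime_pa, temperature_k=298.0):
    """Return the crosslink number density nu in crosslinks per m^3."""
    return g_prime_pa / (K_B * temperature_k)

# Span of moduli reported for the TDP-PEG4MAL gels (200-5000 Pa)
for g in (200.0, 1000.0, 5000.0):
    print(f"G' = {g:6.0f} Pa  ->  nu = {crosslink_density(g):.2e} m^-3")
```

Doubling the thiol:maleimide ratio roughly doubles the crosslink density in this picture, matching the observed rise in G'.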
The knowledge gained so far from the study on the ability to design and tune the exchange reaction of thioester-containing peptide mimetics will give those working in the field further insight into the development of new sequences tailored towards specific applications.
TTE substrate design using peptide mimetics as presented in this work has revealed interesting new insights relative to the state of the art. Using the results obtained as a reference, the strategy offers the possibility to extend the concept to the controlled delivery of active molecules needed for other robust and high-yielding crosslinking reactions in biomedical applications. Applications for this sequentially coupled functional system could be envisaged, for example, in the treatment of inflamed tissues associated with the urinary tract, such as bladder infections, for which pH levels above 7 have been reported. By the inclusion of cell-adhesion peptide motifs, the hydrogel network formed at this pH could act as a new support layer for the healing of damaged epithelium, as shown in interfacial gel formation experiments using TDP and PEG4MAL droplets.
The versatility of the thiodepsipeptide sequence Ac-Pro-Leu-Gly-SLeu-Leu-Gly (TDPo) was extended to the design and synthesis of an MMP-sensitive 4-arm PEG-TDPo conjugate. The purported cleavage of TDPo at the Gly-SLeu bond yields active thiol units for subsequent reaction with orthogonal Michael-acceptor moieties. One of the advantages of stimuli-promoted in situ crosslinking systems using short peptides should be the ease of design of the required peptide molecules, owing to the predictability of peptide function from sequence structure. Consequently, the functionalisation of a 4-arm PEG core with the collagenase-active TDPo sequence yielded an MMP-sensitive 4-arm thiodepsipeptide-PEG conjugate (PEG4TDPo) substrate.
Cleavage studies using a thiol fluorometric assay in the presence of MMP-2 and MMP-9 confirmed the susceptibility of PEG4TDPo towards these enzymes. The time-dependent increase in fluorescence intensity signifies the successful cleavage of TDPo at the Gly-SLeu bond, as expected. It was observed, however, that cleavage studies with the thiol fluorometric assay introduce a sigmoidal, non-Michaelis-Menten-type kinetic profile, making it difficult to accurately determine the enzyme kinetic parameters kcat and KM.
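The fitting difficulty can be illustrated by comparing the hyperbolic Michaelis-Menten rate law with a sigmoidal profile. A Hill equation is used below purely as a stand-in for the observed sigmoid, with invented parameters; it is not the kinetic model of the thesis:

```python
# Hyperbolic Michaelis-Menten kinetics versus a sigmoidal Hill-type
# profile; all parameters are hypothetical, for illustration only.
def michaelis_menten(s, vmax=1.0, km=10.0):
    """Classical hyperbolic rate law: v = vmax*[S]/(KM + [S])."""
    return vmax * s / (km + s)

def hill(s, vmax=1.0, k_half=10.0, n=3.0):
    """Sigmoidal rate law with cooperativity-like exponent n."""
    return vmax * s**n / (k_half**n + s**n)

# At low substrate the hyperbola already rises steeply, while the
# sigmoid lags near zero -- the lag that frustrates a plain MM fit.
for s in (1.0, 5.0, 10.0, 50.0):
    print(f"[S]={s:5.1f}  MM={michaelis_menten(s):.3f}  Hill={hill(s):.3f}")
```

Forcing a Michaelis-Menten fit onto such a lagged curve biases kcat and KM, which is why the sigmoidal assay response prevented a reliable parameter estimate.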
Gelation studies with PEG4MAL at 10 wt.% concentration revealed faster gelation with MMP-2 than with MMP-9, with gelation times of 28 and 40 min, respectively. Possible contributions from hydrolytic cleavage of PEG4TDPo resulted in the gelation of PEG4MAL blank samples, but only after 60 minutes of reaction. From theoretical considerations, the simultaneous gelation reaction would be expected to impact the enzymatic more negatively than the hydrolytic cleavage. Quantifying the exact contribution of hydrolytic cleavage of PEG4TDPo would, however, require additional studies.
In summary, this new and simplified in situ crosslinking system using peptide-based crosslinker precursors with tuneable properties exhibited in situ gelation kinetics on a level similar to that reported for already active dithiols. The advantageous on-demand functionality associated with its pH sensitivity and physiological compatibility makes it a strong candidate for further research as far as biomedical applications in general and on-demand material synthesis are concerned.
The results from the MMP-promoted gelation system unveil a simple but unexplored approach for the in situ synthesis of covalently crosslinked soft materials, which could lead to an alternative pathway for addressing cancer metastasis by using MMP overexpression as a trigger. This goal has so far not been reached with MMP inhibitors, despite extensive work in this regard.
The Greenland Ice Sheet is the second-largest mass of ice on Earth. Being almost 2000 km long, more than 700 km wide, and more than 3 km thick at the summit, it holds enough ice to raise global sea levels by 7 m if melted completely. Despite its massive size, it is particularly vulnerable to anthropogenic climate change: temperatures over the Greenland Ice Sheet have increased by more than 2.7 °C in the past 30 years, twice as much as the global mean temperature. Consequently, the ice sheet has been significantly losing mass since the 1980s and the rate of loss has increased sixfold since then. Moreover, it is one of the potential tipping elements of the Earth System, which might undergo irreversible change once a warming threshold is exceeded. This thesis aims at extending the understanding of the resilience of the Greenland Ice Sheet against global warming by analyzing processes and feedbacks relevant to its centennial to multi-millennial stability using ice sheet modeling.
One of these feedbacks, the melt-elevation feedback, is driven by the rise of temperature with decreasing altitude: as the ice sheet melts, its thickness and surface elevation decrease, exposing the ice surface to warmer air and thus increasing the melt rates even further. Glacial isostatic adjustment (GIA) can partly mitigate this melt-elevation feedback, as the bedrock lifts in response to a decreasing ice load, forming the negative GIA feedback. In my thesis, I show that the interaction between these two competing feedbacks can lead to qualitatively different dynamical responses of the Greenland Ice Sheet to warming, from permanent loss to incomplete recovery, depending on the feedback parameters. My research shows that the interaction of these feedbacks can initiate self-sustained oscillations of the ice volume while the climate forcing remains constant.
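The competing feedbacks can be caricatured in two coupled equations, reduced far below the thesis's PISM simulations: melt strengthens as the surface lowers (melt-elevation feedback), while the bedrock relaxes with a delay towards the equilibrium depression set by the ice load (GIA feedback). All parameters below are invented for illustration; whether such a system recovers, oscillates, or loses the ice depends on exactly these feedback parameters:

```python
# Conceptual sketch: ice thickness h and bedrock depression b.
# Surface s = h + b; melt grows as s drops (melt-elevation feedback);
# b relaxes towards -f*h with timescale tau (delayed GIA feedback).
# Invented parameters; NOT the PISM configuration of the thesis.
def simulate(h0=3000.0, b0=-900.0, years=20000, dt=1.0):
    accum = 0.5                    # accumulation rate, m/yr
    gamma, s_ref = 0.001, 2500.0   # melt sensitivity, reference elevation
    f, tau = 0.3, 3000.0           # load factor, GIA relaxation time (yr)
    h, b = h0, b0
    traj = []
    for _ in range(int(years / dt)):
        s = h + b                              # surface elevation (m)
        melt = max(0.0, gamma * (s_ref - s))   # stronger melt at low surface
        h = max(0.0, h + dt * (accum - melt))  # mass balance update
        b += dt * (-f * h - b) / tau           # bedrock relaxes with delay
        traj.append((h, b))
    return traj

traj = simulate()
```

The delay tau is the crucial ingredient: an instantaneous bedrock response would simply damp the melt-elevation feedback, whereas a lagged response can over- and undershoot.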
Furthermore, the increased surface melt changes the optical properties of the snow or ice surface, e.g. by lowering their albedo, which in turn enhances melt rates – a process known as the melt-albedo feedback. Process-based ice sheet models often neglect this melt-albedo feedback. To close this gap, I implemented a simplified version of the diurnal Energy Balance Model, a computationally efficient approach that can capture the first-order effects of the melt-albedo feedback, into the Parallel Ice Sheet Model (PISM). Using the coupled model, I show in warming experiments that the melt-albedo feedback almost doubles the ice loss until the year 2300 under the low greenhouse gas emission scenario RCP2.6, compared to simulations where the melt-albedo feedback is neglected,
and adds up to 58% additional ice loss under the high emission scenario RCP8.5. Moreover, I find that the melt-albedo feedback dominates the ice loss until 2300, compared to the melt-elevation feedback.
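The role of albedo in such an energy-balance melt scheme can be sketched as follows. The structure (absorbed shortwave plus a temperature-dependent term, divided by the latent heat of melting) loosely follows the diurnal Energy Balance Model, but the coefficients are invented placeholders, not the calibrated dEBM values:

```python
# Schematic energy-balance melt rate: shortwave absorption scales with
# (1 - albedo), plus a temperature-driven longwave/turbulent term.
# Coefficients are invented for illustration, not calibrated values.
RHO_W, L_MELT = 1000.0, 3.34e5      # water density kg/m^3, latent heat J/kg

def melt_rate(insolation, albedo, temp_c, transmissivity=0.75,
              c1=10.0, c0=-30.0):
    """Melt rate in metres water equivalent per second (clipped at 0)."""
    energy = transmissivity * (1.0 - albedo) * insolation + c1 * temp_c + c0
    return max(0.0, energy) / (RHO_W * L_MELT)

fresh_snow = melt_rate(300.0, 0.85, 1.0)   # bright, high-albedo surface
bare_ice = melt_rate(300.0, 0.45, 1.0)     # darkened, melt-affected surface
```

The darker surface melts several times faster at the same forcing, which is the essence of the melt-albedo feedback: melting darkens the surface, and the darker surface melts more.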
Another process that could influence the resilience of the Greenland Ice Sheet is the warming-induced softening of the ice and the resulting increase in flow. In my thesis, I show with PISM how the uncertainty in Glen's flow law impacts the simulated response to warming. In a flow-line setup at fixed climatic mass balance, the uncertainty in flow parameters leads to a range of ice loss comparable to the range caused by different warming levels.
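Glen's flow law relates the shear strain rate to the deviatoric stress as strain_rate = A * tau^n, with n commonly taken as 3. A small sketch illustrates how modest uncertainty in the exponent translates into an order-of-magnitude spread in deformation rate (A and tau below are typical textbook values, not the thesis's calibrated parameters):

```python
# Glen's flow law: strain_rate = A * tau^n. A small change in the
# exponent n produces a large spread in deformation rate at a fixed
# driving stress. Typical textbook values, for illustration only.
def strain_rate(tau_pa, a=2.4e-24, n=3.0):
    """Shear strain rate (1/s) for deviatoric stress tau (Pa)."""
    return a * tau_pa**n

tau = 1.0e5  # 100 kPa driving stress
for n in (2.8, 3.0, 3.2):
    print(f"n = {n}:  strain rate = {strain_rate(tau, n=n):.2e} 1/s")
```

At 100 kPa, each step of 0.2 in n shifts the strain rate by about a factor of ten, which is why the flow-law uncertainty alone spans an ice-loss range comparable to that of different warming levels.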
While I focus on fundamental processes, feedbacks, and their interactions in the first three projects of my thesis, I also explore the impact of specific climate scenarios on the sea level rise contribution of the Greenland Ice Sheet. To increase the carbon budget flexibility, some warming scenarios, while still staying within the limits of the Paris Agreement, include a temporal overshoot of global warming. I show that an overshoot by 0.4 °C increases the short-term and long-term ice loss from Greenland by several centimeters. The long-term increase is driven by the warming at high latitudes, which persists even when global warming is reversed. This leads to a substantial long-term commitment of the sea level rise contribution from the Greenland Ice Sheet.
Overall, in my thesis I show that the melt-albedo feedback is most relevant for the ice loss of the Greenland Ice Sheet on centennial timescales. In contrast, the melt-elevation feedback and its interplay with the GIA feedback become increasingly relevant on millennial timescales. All of these processes influence the resilience of the Greenland Ice Sheet against global warming, both in the near future and in the long term.
The Arctic is changing rapidly and permafrost is thawing. Especially ice-rich permafrost, such as the late Pleistocene Yedoma, is vulnerable to rapid and deep thaw processes such as surface subsidence after the melting of ground ice. Due to permafrost thaw, the permafrost carbon pool is becoming increasingly accessible to microbes, leading to increased greenhouse gas emissions, which further enhance climate warming.
The assessment of the molecular structure and biodegradability of permafrost organic matter (OM) is therefore urgently needed. My research revolves around the question "how does permafrost thaw affect its OM storage?" More specifically, I assessed (1) how molecular biomarkers can be applied to characterize permafrost OM, (2) greenhouse gas production rates from thawing permafrost, and (3) the quality of OM of frozen and (previously) thawed sediments.
I studied deep (max. 55 m) Yedoma and thawed Yedoma permafrost sediments from Yakutia (Sakha Republic). I analyzed sediment cores taken below thermokarst lakes on the Bykovsky Peninsula (southeast of the Lena Delta) and in the Yukechi Alas (Central Yakutia), and headwall samples from the permafrost cliff Sobo-Sise (Lena Delta) and the retrogressive thaw slump Batagay (Yana Uplands). I measured biomarker concentrations of all sediment samples. Furthermore, I carried out incubation experiments to quantify greenhouse gas production in thawing permafrost.
I showed that the biomarker proxies are useful to assess the source of the OM and to distinguish between OM derived from terrestrial higher plants, aquatic plants and microbial activity. In addition, I showed that some proxies help to assess the degree of degradation of permafrost OM, especially when combined with sedimentological data in a multi-proxy approach. The OM of Yedoma is generally better preserved than that of thawed Yedoma sediments. The greenhouse gas production was highest in the permafrost sediments that thawed for the first time, meaning that the frozen Yedoma sediments contained the most labile OM. Furthermore, I showed that methanogenic communities had become established in the recently thawed sediments, but not yet in the still-frozen sediments.
My research provided the first molecular biomarker distributions and organic carbon turnover data, as well as insights into the state of and processes in deep frozen and thawed Yedoma sediments. These findings show the relevance of studying OM in deep permafrost sediments.
Climate change is one of the greatest challenges to humanity in this century, and the most noticeable consequences are expected to be impacts on the water cycle, in particular the distribution and availability of water, which is fundamental for all life on Earth. In this context, it is essential to better understand where and when water is available and what processes influence variations in water storages. While estimates of the overall terrestrial water storage (TWS) variations are available from the GRACE satellites, these represent the vertically integrated signal over all water stored in ice, snow, soil moisture, groundwater and surface water bodies. Therefore, complementary observational data and hydrological models are still required to determine the partitioning of the measured signal among different water storages and to understand the underlying processes. However, the application of large-scale observational data is limited by their specific uncertainties and the incapacity to measure certain water fluxes and storages. Hydrological models, on the other hand, vary widely in their structure and process representation, and rarely incorporate additional observational data to minimize uncertainties that arise from their simplified representation of the complex hydrologic cycle.
In this context, this thesis aims to contribute to improving the understanding of global water storage variability by combining simple hydrological models with a variety of complementary Earth observation-based data. To this end, a model-data integration approach is developed, in which the parameters of a parsimonious hydrological model are calibrated against several observational constraints, including GRACE TWS, simultaneously, while taking into account each data set's specific strengths and uncertainties. This approach is used to investigate three specific aspects that are relevant for modelling and understanding the composition of large-scale TWS variations.
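Calibrating simultaneously against several observational constraints amounts to minimizing a sum of uncertainty-weighted misfits, so that no single data set dominates. The sketch below uses an invented one-parameter toy model and placeholder data; it only illustrates the weighting idea, not the thesis's actual model, data, or calibration algorithm:

```python
# Multi-constraint calibration cost: each observational data set
# contributes a misfit term scaled by its own uncertainty sigma.
# Model, data and weights are invented placeholders.
def cost(params, constraints):
    """Sum of uncertainty-weighted squared misfits over all data sets."""
    total = 0.0
    for model_fn, obs, sigma in constraints:
        sim = model_fn(params)
        total += sum((s - o) ** 2 for s, o in zip(sim, obs)) / sigma**2
    return total

# Toy "model": TWS anomaly = amplitude * seasonal cycle (hypothetical)
season = [0.0, 1.0, 0.0, -1.0]
tws_model = lambda p: [p[0] * x for x in season]
constraints = [
    (tws_model, [0.1, 2.1, -0.1, -1.9], 0.5),   # GRACE-like TWS, tighter sigma
    (tws_model, [0.0, 2.0, 0.0, -2.0], 1.0),    # second, looser constraint
]
# Brute-force search over the single amplitude parameter
best = min((cost([a / 10], constraints), a / 10) for a in range(0, 51))
```

In the full approach the parameter search is of course multi-dimensional, but the principle is the same: the sigma values encode each data set's trustworthiness.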
The first study focusses on Northern latitudes, where snow and cold-region processes define the hydrological cycle. While the study confirms previous findings that seasonal dynamics of TWS are dominated by the cyclic accumulation and melt of snow, it reveals that inter-annual TWS variations, on the contrary, are determined by variations in liquid water storages. Additionally, it is found to be important to consider the impact of compensatory effects of spatially heterogeneous hydrological variables when aggregating the contribution of different storage components over large areas. Hence, the determinants of TWS variations are scale-dependent, and the underlying driving mechanisms cannot simply be transferred between spatial and temporal scales. These findings are supported by the second study for the global land areas beyond the Northern latitudes as well.
This second study further identifies the considerable impact of how vegetation is represented in hydrological models on the partitioning of TWS variations. Using spatio-temporal varying fields of Earth observation-based data to parameterize vegetation activity not only significantly improves model performance, but also reduces parameter equifinality and process uncertainties. Moreover, the representation of vegetation drastically changes the contribution of different water storages to overall TWS variability, emphasizing the key role of vegetation for water allocation, especially between sub-surface and delayed water storages. However, the study also identifies parameter equifinality regarding the decay of sub-surface and delayed water storages by either evapotranspiration or runoff, and thus emphasizes the need for further constraints hereof.
The third study focuses on the role of river water storage, in particular whether it is necessary to include computationally expensive river routing for model calibration and validation against the integrated GRACE TWS. The results suggest that river routing is not required for model calibration in such a global model-data integration approach, due to the larger influence of other observational constraints; instead, the determinability of certain model parameters and associated processes is identified as an issue of greater relevance. In contrast to model calibration, considering river water storage derived from routing schemes can already significantly improve modelled TWS compared to GRACE observations, and thus should be considered for model evaluation against GRACE data.
Beyond these specific findings that contribute to improved understanding and modelling of large-scale TWS variations, this thesis demonstrates the potential of combining simple modeling approaches with diverse Earth observational data to improve model simulations, overcome inconsistencies of different observational data sets, and identify areas that require further research. These findings encourage future efforts to take advantage of the increasing number of diverse global observational data.
Biofilms are complex living materials that form as bacteria get embedded in a matrix of self-produced protein and polysaccharide fibres. The formation of a network of extracellular biopolymer fibres contributes to the cohesion of the biofilm by promoting cell-cell attachment and by mediating biofilm-substrate interactions. This sessile mode of bacteria growth has been well studied by microbiologists to prevent the detrimental effects of biofilms in medical and industrial settings. Indeed, biofilms are associated with increased antibiotic resistance in bacterial infections, and they can also cause clogging of pipelines or promote bio-corrosion. However, biofilms also gained interest from biophysics due to their ability to form complex morphological patterns during growth. Recently, the emerging field of engineered living materials investigates biofilm mechanical properties at multiple length scales and leverages the tools of synthetic biology to tune the functions of their constitutive biopolymers.
This doctoral thesis aims at clarifying how the morphogenesis of Escherichia coli (E. coli) biofilms is influenced by their growth dynamics and mechanical properties. To address this question, I used methods from cell mechanics and materials science. I first studied how biological activity in biofilms gives rise to non-uniform growth patterns. In a second study, I investigated how E. coli biofilm morphogenesis and its mechanical properties adapt to an environmental stimulus, namely the water content of their substrate. Finally, I estimated how the mechanical properties of E. coli biofilms are altered when the bacteria express different extracellular biopolymers.
On nutritive hydrogels, micron-sized E. coli cells can build centimetre-large biofilms. During this process, bacterial proliferation and matrix production introduce mechanical stresses in the biofilm, which are released through the formation of macroscopic wrinkles and delaminated buckles. To relate these biological and mechanical phenomena, I used time-lapse fluorescence imaging to track cell and matrix surface densities through the early and late stages of E. coli biofilm growth. Colocalization of high cell and matrix densities at the periphery precedes the onset of mechanical instabilities in this annular region. Early growth, analysed by adding fluorescent microspheres to the bacterial inoculum, is detected at this outer annulus. Only when high rates of matrix production are present in the biofilm centre, however, does overall biofilm spreading initiate along the solid-air interface. By tracking larger fluorescent particles over long times, I could distinguish several kinematic stages of E. coli biofilm expansion and observed a transition from non-linear to linear velocity profiles, which precedes the emergence of wrinkles at the biofilm periphery. Decomposing the particle velocities into their radial and circumferential components revealed a last kinematic stage, in which biofilm movement is mostly directed towards the radial delaminated buckles, which verticalize. The compressive strains computed in these regions were observed to substantially deform the underlying agar substrates. The co-localization of high cell and matrix densities in an annular region and the succession of several kinematic stages are thus expected to promote the emergence of mechanical instabilities at the biofilm periphery. These experimental findings are expected to inform future modelling approaches of biofilm morphogenesis.
E. coli biofilm morphogenesis is further anticipated to depend on external stimuli from the environment. To clarify how water content could be used to tune biofilm material properties, we quantified E. coli biofilm growth, wrinkling dynamics and rigidity as a function of the water content of the nutritive substrates. Time-lapse microscopy and computational image analysis revealed that substrates with high water content promote biofilm spreading kinetics, while substrates with low water content promote biofilm wrinkling. The wrinkles observed on biofilm cross-sections appeared more bent on substrates with high water content, while they tended to be more vertical on substrates with low water content. Both wet and dry biomass, accumulated over 4 days of culture, were larger in biofilms cultured on substrates with high water content, despite extra porosity within the matrix layer. Finally, micro-indentation analysis revealed that substrates with low water content supported the formation of stiffer biofilms. This study shows that E. coli biofilms respond to the water content of their substrate, which might be used for tuning their material properties in view of further applications.
Biofilm material properties further depend on the composition and structure of the matrix of extracellular proteins and polysaccharides. In particular, E. coli biofilms were suggested to present tissue-like elasticity due to a dense fibre network consisting of amyloid curli and phosphoethanolamine-modified cellulose. To understand the contribution of these components to the emergent mechanical properties of E. coli biofilms, we performed micro-indentation on biofilms grown from bacteria of several strains. Besides showing higher dry masses, larger spreading diameters and slightly reduced water contents, biofilms expressing both main matrix components also presented high rigidities in the range of several hundred kPa, similar to biofilms containing only curli fibres. In contrast, a lack of amyloid curli fibres provides much higher adhesive energies and more viscoelastic fluid-like material behaviour. Therefore, the combination of amyloid curli and phosphoethanolamine-modified cellulose fibres implies the formation of a composite material whereby the amyloid curli fibres provide rigidity to E. coli biofilms, whereas the phosphoethanolamine-modified cellulose rather acts as a glue. These findings motivate further studies involving purified versions of these protein and polysaccharide components to better understand how their interactions benefit biofilm functions.
All three studies depict different aspects of biofilm morphogenesis, which are interrelated. The first work reveals the correlation between non-uniform biological activities and the emergence of mechanical instabilities in the biofilm. The second work acknowledges the adaptive nature of E. coli biofilm morphogenesis and its mechanical properties to an environmental stimulus, namely water. Finally, the last study reveals the complementary role of the individual matrix components in the formation of a stable biofilm material, which not only forms complex morphologies but also functions as a protective shield for the bacteria it contains. Our experimental findings on E. coli biofilm morphogenesis and their mechanical properties can have further implications for fundamental and applied biofilm research fields.
This cumulative doctoral thesis deals with high-achieving students, who since 2015 have regained prominence in German education policy, for example through support programmes, after the "PISA shock" of 2000 had initially shifted the focus more strongly onto at-risk groups. While higher-achieving students are often equated with the "(highly) gifted" in public perception, this thesis goes beyond traditional giftedness research, which conceives of and studies a general intelligence as the basis of students' achievement capacity. Instead, it is better placed within talent research, which shifts the focus away from general abilities towards specific predictors and outcomes along individual developmental trajectories. The thesis therefore focuses not on intelligence as potential, but on current school achievement, which carries a double significance as both the result and the starting point of developmental processes in an achievement domain.
The thesis acknowledges the multifaceted nature of the achievement construct and seeks to create new occasions for discussing the construct and its operationalisation in research. To this end, the first part presents a systematic review of the operationalisation of high achievement (Article I). Factors are identified on which operationalisations can differ, and an overview is given of how studies on high achievers since 2000 can be located along these dimensions. It emerges that clear conventions for defining high academic achievement do not yet exist, with the consequence that results from studies of high-achieving students are only comparable to a limited extent. Building on this, the second part of the thesis comprises two further articles, on the achievement development (Article II) and the social integration (Article III) of high-achieving students, which pursue the approach of making explicit the variability of results across different operationalisations of high achievement. Among other things, this also facilitates future comparability with other studies. The concept used here is multiverse analysis (Steegen et al., 2016), in which many parallel specifications, each representing a sensible alternative operationalisation, are juxtaposed and compared in their effects (Jansen et al., 2021). Conceptually, multiverse analysis ties in with the research programme of critical multiplism developed some time ago (Patry, 2013; Shadish, 1986, 1993), but as a specific method it is currently gaining particular importance in the context of the replication crisis in psychology.
The present thesis relies on secondary analyses of large-scale school achievement studies, which have the advantage that a large number of data points (variables and persons) is available for comparing the effects of different operationalisations.
In terms of content, Articles II and III take up topics that recur in the scientific and societal discussion of high achievers and their public perception. Article II first asks whether high achievers already enjoy a cumulative advantage over their lower-achieving classmates in regular instruction (Matthew effect). The results show that at academic-track schools (Gymnasien) there is no evidence of widening gaps. On the contrary, the distance between the groups narrowed over the course of secondary school, as learning rates were higher among lower-achieving students. Article III, in turn, concerns the social perception of high-achieving students. Here, too, the public discussion persists in assuming that higher achievement may come with disadvantages in social integration, which is also reflected in studies on adolescents' gender stereotypes regarding school achievement. Article III again exploits, among other things, the potential of multiverse analysis to describe how the association varies across operationalisations of high achievement. Across different operationalisations of high achievement and different facets of social integration, the associations between achievement and social integration turn out to be slightly positive overall. Assumptions of differential effects for boys and girls, or for different school subjects, are not supported by these analyses.
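The multiverse-analysis idea (Steegen et al., 2016) invoked in this abstract can be sketched as recomputing one effect of interest under every reasonable operationalisation. The sketch below varies only an achievement cutoff on invented data (German grades, where lower is better); it is a toy illustration, not the analyses of Articles II and III:

```python
# Minimal multiverse-analysis sketch: apply every reasonable
# operationalisation of "high achievement" in parallel and recompute
# the effect of interest for each. Data and cutoffs are invented.
import statistics

students = [  # (grade point: lower is better, social integration score)
    (1.0, 3.9), (1.3, 3.5), (1.7, 3.6), (2.0, 3.2),
    (2.3, 3.4), (2.7, 3.0), (3.0, 3.1), (3.3, 2.8),
]

def effect(cutoff):
    """Mean integration difference: high achievers minus the rest."""
    high = [s for g, s in students if g <= cutoff]
    rest = [s for g, s in students if g > cutoff]
    return statistics.mean(high) - statistics.mean(rest)

# One dimension of the multiverse: vary only the achievement cutoff.
multiverse = {c: round(effect(c), 3) for c in (1.3, 1.7, 2.0, 2.3)}
```

A full multiverse would cross this cutoff dimension with other specification choices (measure, subject, grouping), reporting the whole distribution of effects rather than a single estimate.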
The dissertation shows that comparing different approaches to operationalising high achievement, applied within a framework of critical multiplism, can deepen the understanding of phenomena and also has the potential to advance theory development.
The complex hierarchical structure of bone undergoes a lifelong remodeling process, in which it adapts to mechanical needs. In this process, bone resorption by osteoclasts and bone formation by osteoblasts must be balanced to sustain a healthy and stable organ. Osteocytes orchestrate this interplay by sensing mechanical strains and translating them into biochemical signals. The osteocytes are located in lacunae and are connected to one another and to other bone cells via cell processes through small channels, the canaliculi. Lacunae and canaliculi form a network (LCN) of extracellular spaces that is able to transport ions and enables cell-to-cell communication. Osteocytes might also contribute to mineral homeostasis by direct interactions with the surrounding matrix. If the LCN is acting as a transport system, this should be reflected in the mineralization pattern. The central hypothesis of this thesis is that osteocytes are actively changing their material environment. Characterization methods of materials science are used to achieve the aim of detecting traces of this interaction between osteocytes and the extracellular matrix. First, healthy murine bones were characterized. The properties analyzed were then compared with three murine model systems: 1) a loading model, where a bone of the mouse was loaded during its lifetime; 2) a healing model, where a bone of the mouse was cut to induce a healing response; and 3) a disease model, where the Fbn1 gene is dysfunctional, causing defects in the formation of the extracellular tissue.
The measurement strategy included routines that make it possible to analyze the organization of the LCN and the material components (i.e., the organic collagen matrix and the mineral particles) in the same bone volumes and to compare the spatial distributions of the different data sets. The three-dimensional network architecture of the LCN is visualized by confocal laser scanning microscopy (CLSM) after rhodamine staining and subsequently quantified. The calcium content is determined via quantitative backscattered electron imaging (qBEI), while small- and wide-angle X-ray scattering (SAXS and WAXS) are employed to determine the local thickness and length of the mineral particles.
First, tibial cortices of healthy mice were characterized to investigate how changes in LCN architecture can be attributed to interactions of osteocytes with the surrounding bone matrix. The tibial mid-shaft cross-sections showed two main regions: a band with unordered LCN surrounded by a region with ordered LCN. The unordered region is a remnant of early bone formation and exhibited short and thin mineral particles. The surrounding, more aligned bone showed an ordered and dense LCN as well as thicker and longer mineral particles. The calcium content did not differ between the two regions.
In the mouse loading model, the left tibia underwent two weeks of mechanical stimulation, which results in increased bone formation and decreased resorption in skeletally mature mice. The specific research question addressed here was how bone material characteristics change at (re)modeling sites. The new bone formed in response to mechanical stimulation showed mineral particle properties similar to those of the ordered region, but a lower calcium content compared to the right, non-loaded control bone of the same mice. There was a clearly recognizable border between mature and newly formed bone. Nevertheless, some canaliculi crossed this border, connecting the LCN of mature and newly formed bone.
Additionally, the question was addressed whether the LCN topology and the bone matrix material properties adapt to loading. Although mechanically stimulated bones did not show differences in calcium content compared to controls, different correlations were found between the local LCN density and the local Ca content depending on whether the bone was loaded or not. These results suggest that the LCN may serve as a mineral reservoir.
For the healing model, the femurs of mice underwent an osteotomy, stabilized with an external fixator, and were allowed to heal for 21 days. In this model, the spatial variations of the LCN topology and the mineral properties within different tissue types and at their interfaces, namely calcified cartilage, bony callus and cortex, could be simultaneously visualized and compared. All tissue types showed structural differences across multiple length scales. The calcium content increased and became more homogeneous from calcified cartilage to bony callus to lamellar cortical bone. The degree of LCN organization increased as well, while the lacunae became smaller and the lacunar density decreased across these tissue types that make up the callus. In the calcified cartilage, the mineral particles were short and thin. The newly formed callus exhibited thicker mineral particles, which still had a low degree of orientation. While most of the callus had a woven-like structure, it also served as a scaffold for more lamellar tissue at the edges. The lamellar bone of the callus showed thinner mineral particles, but a higher degree of alignment in both the mineral particles and the LCN. The cortex showed the highest values for mineral particle length, thickness and degree of orientation. At the same time, the lacunar number density was 34% lower and the lacunar volume 40% smaller compared to the bony callus. The transition zone between cortical and callus regions showed a continuous convergence of bone mineral properties and lacunar shape. Although only a few canaliculi connected the callus and the cortical region, this indicates that communication between osteocytes of both tissues should be possible. The presented correlations between LCN architecture and mineral properties across tissue types suggest that osteocytes may play an active role in the mineralization processes of healing.
A mouse model for the disease Marfan syndrome, which involves a genetic defect in the fibrillin-1 gene, was investigated. In humans, Marfan syndrome is characterized by a range of clinical symptoms such as long bone overgrowth, loose joints, reduced bone mineral density, compromised bone microarchitecture, and increased fracture rates. Thus, fibrillin-1 seems to play a role in skeletal homeostasis. Therefore, the present work studied how Marfan syndrome alters the LCN architecture and the surrounding bone matrix. The mice with Marfan syndrome showed longer tibiae than their healthy littermates from an age of seven weeks onwards. In contrast, the cortical development appeared retarded, which was observed across all measured characteristics, i.e., lower endocortical bone formation, a looser and less organized lacuno-canalicular network, less collagen orientation, and thinner and shorter mineral particles.
In each of the three model systems, this study found that changes in the LCN architecture spatially correlated with bone matrix material parameters. While the exact mechanism remains unknown, these results provide indications that osteocytes can actively manipulate a mineral reservoir located around the canaliculi to make a quickly accessible contribution to mineral homeostasis. However, this interaction is most likely not one-sided, but could be understood as an interplay between osteocytes and the extracellular matrix, since the bone matrix contains biochemical signaling molecules (e.g., non-collagenous proteins) that can change osteocyte behavior. Bone (re)modeling can therefore be understood not only as a method for removing defects or adapting to external mechanical stimuli, but also as a way of increasing the efficiency of possible osteocyte-mineral interactions during bone homeostasis. With these findings, it seems reasonable to consider osteocytes as a target for drug development related to bone diseases that cause changes in bone composition and mechanical properties. It will most likely require the combined effort of materials scientists, cell biologists, and molecular biologists to gain a deeper understanding of how bone cells respond to their material environment.
In the present thesis I investigate the lattice dynamics of thin-film heterostructures of magnetically ordered materials upon femtosecond laser excitation as a probing and manipulation scheme for the spin system. The quantitative assessment of laser-induced thermal dynamics, as well as of generated picosecond acoustic pulses and their respective impact on the magnetization dynamics of thin films, is a challenging endeavor. Hence, the development and implementation of effective experimental tools and comprehensive models are paramount to propel future academic and technological progress.
In all experiments in the scope of this cumulative dissertation, I examine the crystal lattice of nanoscale thin films upon excitation with femtosecond laser pulses. The relative change of the lattice constant due to thermal expansion or picosecond strain pulses is directly monitored by an ultrafast X-ray diffraction (UXRD) setup with a femtosecond laser-driven plasma X-ray source (PXS). Phonons and spins alike exert stress on the lattice, which responds according to the elastic properties of the material, rendering the lattice a versatile sensor for all sorts of ultrafast interactions. On the one hand, I investigate materials with strong magneto-elastic properties: the highly magnetostrictive rare-earth compound TbFe2, elemental dysprosium, and the technologically relevant Invar material FePt. On the other hand, I conduct a comprehensive study of the lattice dynamics of Bi1Y2Fe5O12 (Bi:YIG), which, according to the literature, exhibits high-frequency coherent spin dynamics upon femtosecond laser excitation. Higher-order standing spin waves (SSWs) are triggered by coherent and incoherent motion of atoms, in other words phonons, which I quantified with UXRD. We are able to unite the experimental observations of the lattice and magnetization dynamics qualitatively and quantitatively. This is done with a combination of multi-temperature, elastic, magneto-elastic, anisotropy and micro-magnetic modeling.
The collective data from UXRD, to probe the lattice, and time-resolved magneto-optical Kerr effect (tr-MOKE) measurements, to monitor the magnetization, were previously collected at different experimental setups. To improve the precision of the quantitative assessment of lattice and magnetization dynamics alike, our group implemented a combination of UXRD and tr-MOKE in a single experimental setup, which is, to my knowledge, the first of its kind. I helped with the conception and commissioning of this novel experimental station, which allows the simultaneous observation of lattice and magnetization dynamics on an ultrafast timescale under identical excitation conditions. Furthermore, I developed a new X-ray diffraction measurement routine which significantly reduces the measurement time of UXRD experiments, by up to an order of magnitude. It is called reciprocal space slicing (RSS) and utilizes an area detector to monitor the angular motion of X-ray diffraction peaks, which is associated with lattice constant changes, without a time-consuming scan of the diffraction angles with the goniometer. RSS is particularly useful for ultrafast diffraction experiments, since measurement time at large-scale facilities like synchrotrons and free-electron lasers is a scarce and expensive resource. However, RSS is not limited to ultrafast experiments and can even be extended to other diffraction techniques with neutrons or electrons.
One aspect of achieving a more sustainable chemical industry is minimizing the use of solvents and chemicals. Thus, the optimization and development of chemical processes for large-scale production is preferably performed in small batches. The critical step in this approach is upscaling from the small reaction systems to the large reactors mandatory for cost-efficient production in an industrial environment. Scaling up the bulk volume always goes along with increasing the surface where the reaction medium is in contact with the confining vessel. Since the volume scales with the cube of the linear dimension while the surface scales with its square, their ratio is size-dependent. The influence of the reaction vessel walls can thus change the reaction performance. A number of phenomena occurring at the surface-liquid interface can affect reaction rates and yields, making it difficult to predict and extrapolate from small production scales to large industrial processes. The application of levitated droplets as containerless reaction vessels provides a promising way to avoid the above-mentioned issues.
In the presented work, an efficient coupling of acoustically levitated droplets to an ion mobility (IM) spectrometer operating at ambient conditions was designed for real-time monitoring of chemical reactions. The design of the system comprises noncontact sampling and ionization of the droplet, realised by laser desorption/ionization at 2.94 µm. The scope of the work includes fundamental studies on the laser irradiation of droplets enclosed in an acoustic field. Understanding this phenomenon is crucial to comprehend the temporal and spatial evolution of the generated ion plume, which influences the resolution of the system.
The set-up includes an acoustic trap, laser irradiation, and ion-manipulation electrostatic lenses operating at high voltage at ambient pressure. The complexity of the design needs to be fully considered for an effective ion transfer at the interface region between the levitated droplet and the IM spectrometer. For sampling and ionization, two distinct laser pulse lengths were evaluated, ns and µs. Irradiation via µs laser pulses provides several advantages: i) the droplet volume is not extensively impinged, as in the case of ns laser pulses, allowing only a small volume of the droplet to be sampled; ii) the lower fluence results in less pronounced oscillations of the droplet confined in the acoustic field, so the droplet is not expelled from the acoustic field, which would lead to loss of the sample; iii) the mild laser irradiation results in better spatial and temporal confinement of the ion plume, leading to better resolution of the detected ion packets. Finally, this knowledge allows the application of the ion optics necessary to induce ion flow between the droplet suspended in the acoustic field and the IM spectrometer. The ion optics, composed of two electrostatic lenses placed in the near vicinity of the droplet, allow effective focusing of the ion plume and its redirection directly to the IM spectrometer entrance. This novel coupling has proved successful for the detection of simple molecules ionizable at the 2.94 µm wavelength. To further demonstrate the applicability of the system, a proof-of-principle reaction was selected, fulfilling the requirements of the system, and was subjected to a comprehensive investigation of its performance. Herein, the reaction between N-Boc cysteine methyl ester and allyl alcohol was performed in a batch reactor and monitored on-line via 1H NMR to establish the reaction progress.
This additional assessment confirmed that the thiol-ene coupling proceeds within the first 20 minutes of irradiation with a reaction yield above 50%, proving that the reaction can serve as a study case to assess the capabilities of the developed system.
What are the consequences of unemployment and precarious employment for individuals' health in Europe? What are the moderating factors that may offset (or increase) the health consequences of labor-market risks? How do the effects of these risks vary across different contexts, which differ in their institutional and cultural settings? Does gender, regarded as a social structure, play a role, and how? To answer these questions is the aim of my cumulative thesis. This study aims to advance our knowledge about the health consequences that unemployment and precariousness cause over the life course. In particular, I investigate how several moderating factors, such as gender, the family, and the broader cultural and institutional context, may offset or increase the impact of employment instability and insecurity on individual health.
In my first paper, 'The buffering role of the family in the relationship between job loss and self-perceived health: Longitudinal results from Europe, 2004-2011', my co-authors and I measure the causal effect of job loss on health and the role of the family and welfare states (regimes) as moderating factors. Using EU-SILC longitudinal data (2004-2011), we estimate the probability of experiencing 'bad health' following a transition to unemployment by applying linear probability models, undertaking separate analyses for men and women. Firstly, we measure whether changes in the independent variable 'job loss' lead to changes in the dependent variable 'self-rated health' for men and women separately. Then, by adding different interaction terms to the model, we measure the moderating effect of the family, both in terms of emotional and economic support, and how much it varies across welfare regimes. As an identification strategy, we first implement static fixed-effects panel models, which control for observed time-varying variables and for indirect health selection, i.e., constant unobserved heterogeneity. Secondly, to control for reverse causality and path dependency, we implement dynamic fixed-effects panel models, adding a lagged dependent variable to the model.
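The static fixed-effects strategy described above can be sketched as a within transformation: demeaning each person's outcome and regressors over their observed waves before running OLS sweeps out any time-constant unobserved heterogeneity. The following is a minimal sketch on synthetic data; the variable names (`job_loss`, `bad_health`) and the data-generating process are illustrative assumptions, not the EU-SILC variables.

```python
import numpy as np

def within_fe_ols(y, X, ids):
    """Fixed-effects (within) estimator: demean y and X per individual,
    then run OLS on the demeaned data. This removes constant unobserved
    heterogeneity (the indirect-selection channel)."""
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    yd = np.empty_like(y)
    Xd = np.empty_like(X)
    for i in np.unique(ids):
        m = ids == i
        yd[m] = y[m] - y[m].mean()
        Xd[m] = X[m] - X[m].mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xd, yd, rcond=None)
    return beta

# Synthetic panel: 500 persons x 6 waves with person-specific intercepts.
rng = np.random.default_rng(0)
n, t = 500, 6
ids = np.repeat(np.arange(n), t)
alpha = rng.normal(0.0, 1.0, n)[ids]        # constant unobserved heterogeneity
job_loss = rng.binomial(1, 0.2, n * t)      # hypothetical binary regressor
bad_health = 0.15 * job_loss + alpha + rng.normal(0.0, 0.5, n * t)

beta = within_fe_ols(bad_health, job_loss[:, None], ids)
print(beta[0])  # close to the true within effect of 0.15
```

The pooled OLS estimate on the same data would be biased whenever `job_loss` correlates with the person-specific intercept; the within transformation removes that bias by construction.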
We explore the role of the family by focusing on close ties within households: we consider the presence of a stable partner and his/her working status as a source of social and economic support. According to previous literature, having a partner should reduce the stress from adverse events, thanks to the symbolic and emotional dimensions that such a relationship entails, regardless of any economic benefits. Our results, however, suggest that benefits linked to the presence of a (female) partner also come from the financial stability that (s)he can provide in terms of a second income. Furthermore, we find partners' employment to be at least as important as the mere presence of the partner in reducing the negative effect of job loss on the individual's health by maintaining the household's standard of living and decreasing economic strain on the family. Our results are in line with previous research, which has highlighted that some people cope better than others with adverse life circumstances, and the support provided by the family is a crucial resource in that regard.
We also report an important interaction between the family and the welfare state in moderating the health consequences of unemployment, showing how the compensating effect of the family varies across welfare regimes. The family plays a decisive role in cushioning the adverse consequences of labor market risks in Southern and Eastern welfare states, which are characterized by less developed social protection systems and, especially in the South, a high level of familialism.
The first paper also found important gender differences concerning job loss, family and welfare effects. Of particular interest is the evidence suggesting that health selection works differently for men and women, playing a more prominent role for women than for men in explaining the relationship between job loss and self-perceived health. The second paper, 'Gender roles and selection mechanisms across contexts: A comparative analysis of the relationship between unemployment, self-perceived health, and gender', investigates the unemployment-driven gender differential in health in more depth.
As this is a highly contested issue in the literature, we study whether men are more penalized than women or the other way around, and which mechanisms may explain the gender difference. To do that, we rely on two theoretical arguments: the availability of alternative roles and social selection. The first argument builds on the idea that men and women may compensate for the detrimental health consequences of unemployment through commitment to 'alternative roles', which can provide the resources needed to fulfill people's socially constructed needs. Notably, the availability of alternative options depends on the different positions that men and women have in society.
Further, we merge the 'alternative roles' argument with the health selection argument. We assume that health selection could be contingent on people's social position as defined by gender and thus explain the gender differential in the relationship between unemployment and health. Ill people might be less reluctant to fall into or remain in unemployment (i.e., to self-select) if they have alternative roles. In Western societies, women generally have more alternative roles than men and thus more discretion in their labor market attachment. Therefore, health selection should be stronger for them, explaining why unemployment is less of a menace for women than for their male counterparts.
Finally, relying on the idea of different gender regimes, we extend these arguments to comparisons across contexts. For example, in contexts where being a caregiver is assumed to be women's traditional and primary role and the primary breadwinner role is reserved for men, unemployment is less stigmatized and taking up alternative roles is more socially accepted for women than for men (Hp.1). Accordingly, social (self-)selection should be stronger for women than for men in traditional contexts, where, in the case of ill health, the separation from work is eased by the availability of alternative roles (Hp.2).
By focusing on contexts that are representative of different gender regimes, we implement a multiple-step comparative approach. Firstly, using EU-SILC longitudinal data (2004-2015), our analysis tests gender roles and selection mechanisms for Sweden and Italy, representing radically different gender regimes and thus providing institutional and cultural variation. Then, we limit institutional heterogeneity by focusing on Germany, comparing East and West Germany and, for West Germany, older and younger cohorts (SOEP data 1995-2017). Next, to assess the differential impact of unemployment for men and women, we compare (unemployed and employed) men with (unemployed and employed) women. To do so, we calculate predicted probabilities and average marginal effects from two distinct random-effects probit models. Our first step is estimating random-effects models that assess the association between unemployment and self-perceived health, controlling for observable characteristics. In the second step, our fully adjusted model controls for both direct and indirect selection. We do this using dynamic correlated random-effects (CRE) models. Further, based on the fully adjusted model, we test our hypothesis on alternative roles (Hp.1) by comparing several contexts, with models estimated separately for each context. For this hypothesis, we pool men and women and include an interaction term between unemployment and gender, which allows directly testing whether gender differences in the effect of unemployment exist and are statistically significant. Finally, we test the role of selection mechanisms (Hp.2) using the KHB method to compare coefficients across nested nonlinear models. Specifically, we test the role of selection in the relationship between unemployment and health by comparing the partially adjusted and fully adjusted models. To allow selection mechanisms to operate differently between genders, we estimate separate models for men and women.
We found support for our first hypothesis: the context in which people are embedded structures the relationship between unemployment, health, and gender. We found no gendered effect of unemployment on health in the egalitarian context of Sweden. Conversely, in the traditional context of Italy, we observed substantive and statistically significant gender differences in the effect of unemployment on bad health, with women suffering less than men. We found the same pattern when comparing East and West Germany and younger and older cohorts in West Germany.
On the contrary, our results did not support our theoretical argument on social selection. We found that in Sweden, women are more strongly selected out of employment than men. In Italy, by contrast, health selection does not seem to be the primary mechanism behind the gender differential: Italian men and women seem to be selected out of employment to the same extent. In other words, we do not find any evidence that health selection is stronger for women in more traditional countries (Hp.2), despite the fact that the institutional and cultural context would offer them a wider range of 'alternative roles' than men. Moreover, our second hypothesis is also rejected in the second and third comparisons, where cross-country heterogeneity is reduced to maximize cultural differences within the same institutional context. Further research that addresses selection into inactivity is needed to evaluate the interplay between selection and social roles across gender regimes.
While the health consequences of unemployment have been on the research agenda for a long time, interest in precarious employment—defined as the linking of the vulnerable worker to work that is characterized by uncertainty and insecurity concerning pay, the stability of the work arrangement, limited access to social benefits, and statutory protections—emerged only later. Since the 1980s, scholars from different disciplines have raised concerns about the social consequences of the de-standardization of employment relationships. However, while work has undoubtedly become more precarious, very little is known about its causal effect on individual health and about the role of gender as a moderator. These questions are at the core of my third paper: 'Bad job, bad health? A longitudinal analysis of the interaction between precariousness, gender and self-perceived health in Germany'. Herein, I investigate the multidimensional nature of precarious employment and its causal effect on health, focusing particularly on gender differences.
With this paper, I aim to overcome three major shortcomings of earlier studies. The first regards the cross-sectional nature of the data, which prevents the authors from ruling out unobserved heterogeneity as a mechanism behind the association between precarious employment and health. Indeed, several unmeasured individual characteristics—such as cognitive abilities—may confound the relationship between precarious work and health, leading to biased results. Secondly, only a few studies have directly addressed the role of gender in shaping the relationship. Moreover, the available results on the gender differential are mixed and inconsistent: some found precarious employment to be more detrimental to women's health, while others found no gender differences or a stronger negative association for men. Finally, previous attempts at an empirical translation of the employment precariousness (EP) concept have not always been coherent with their theoretical framework. EP is usually assumed to be a multidimensional and continuous phenomenon; it is characterized by different dimensions of insecurity that may overlap in the same job and lead to different "degrees of precariousness." However, researchers have predominantly focused on one-dimensional indicators—e.g., temporary employment or subjective job insecurity—to measure EP and study its association with health. Besides the fact that this approach only partially grasps the phenomenon's complexity, the major problem is the inconsistency of the evidence it has produced. Indeed, this line of inquiry generally reveals an ambiguous picture, with some studies finding substantial adverse effects of temporary over permanent employment, while others report only minor differences.
To measure the (causal) effect of precarious work on self-rated health and its variation by gender, I focus on Germany and use four waves from SOEP data (2003, 2007, 2011, and 2015). Germany is a suitable context for my study. Indeed, since the 1980s, the labor market and welfare system have been restructured in many ways to increase the German economy's competitiveness in the global market. As a result, the (standard) employment relationship has been de-standardized: non-standard and atypical employment arrangements—i.e., part-time work, fixed-term contracts, mini-jobs, and work agencies—have increased over time while wages have lowered, even among workers with standard work. In addition, the power of unions has also fallen over the last three decades, leaving a large share of workers without collective protection. Because of this process of de-standardization, the link between wage employment and strong social rights has eroded, making workers more powerless and more vulnerable to labor market risks than in the past. EP refers to this uneven distribution of power in the employment relationship, which can be detrimental to workers' health. Indeed, by affecting individuals' access to power and other resources, EP puts precarious workers at risk of experiencing health shocks and influences their ability to gain and accumulate health advantages (Hp.1).
Further, the focus on Germany allows me to investigate my second research question on the gender differential. Germany is usually regarded as a traditionalist gender regime: a context characterized by a traditional configuration of gender roles. Here, being a caregiver is assumed to be women's primary role, whereas the primary breadwinner role is reserved for men. Although much progress has been made over the last decades towards a greater equalization of opportunities and more egalitarianism, the breadwinner model has changed only into a modified version. Thus, women usually take on the double role of workers (the so-called secondary earners) and caregivers, while men still devote most of their time to paid work. Moreover, the overall upward trend towards more egalitarian gender ideologies has leveled off over the last decades, moving notably back towards more traditional gender ideologies.
In this setting, two alternative hypotheses are possible. Firstly, I assume that the negative relationship between EP and health is stronger for women than for men. This is because women are systematically more disadvantaged than men in the public and private spheres of life, having less access to formal and informal sources of power. These gender-related power asymmetries may interact with EP-related power asymmetries resulting in a stronger effect of EP on women's health than on men's health (Hp.2).
An alternative way of looking at the gender differential is to consider the interaction that precariousness might have with men's and women's gender identities. According to this view, the negative relationship between EP and health is weaker for women than for men (Hp.2a). In a society with a gendered division of labor and a strong link between masculine identities and a stable and well-rewarded job—i.e., a job that confers the role of primary family provider—a male worker in precarious employment might violate the traditional male gender role. Men in precarious jobs may be perceived by themselves (and by others) as possessing a socially undesirable characteristic, which conflicts with the stereotypical idea of themselves as the male breadwinner. Engaging in behaviors that contradict one's stereotypical gender identity may decrease self-esteem and foster feelings of inferiority, helplessness, and jealousy, leading to poor health.
I develop a new indicator of EP that empirically translates a definition of EP as a multidimensional and continuous phenomenon. I assume that EP is a latent construct composed of seven dimensions of insecurity chosen according to the theory and previous empirical research: Income insecurity, social insecurity, legal insecurity, employment insecurity, working-time insecurity, representation insecurity, worker's vulnerability. The seven dimensions are proxied by eight indicators available in the four waves of the SOEP dataset. The EP composite indicator is obtained by performing a multiple correspondence analysis (MCA) on the eight indicators. This approach aims to construct a summary scale in which all dimensions contribute jointly to the measured experience of precariousness and its health impact.
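Building a continuous composite scale from categorical insecurity indicators via multiple correspondence analysis can be sketched as follows: one-hot encode the indicators into a disjunctive matrix, run correspondence analysis on it, and take the row scores on the first dimension as the EP scale. This is a minimal textbook MCA implementation on synthetic data; the eight binary indicators and their data-generating process are hypothetical stand-ins, not the SOEP items.

```python
import numpy as np

def mca_first_dim(indicators):
    """Minimal MCA: correspondence analysis of the one-hot (disjunctive)
    matrix; returns row principal coordinates on the first dimension."""
    # Build the disjunctive matrix: one column per category of each indicator.
    cols = []
    for j in range(indicators.shape[1]):
        vals = np.unique(indicators[:, j])
        cols.append((indicators[:, j][:, None] == vals[None, :]).astype(float))
    Z = np.hstack(cols)
    P = Z / Z.sum()                             # correspondence matrix
    r = P.sum(axis=1, keepdims=True)            # row masses
    c = P.sum(axis=0, keepdims=True)            # column masses
    S = (P - r @ c) / np.sqrt(r) / np.sqrt(c)   # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return (U[:, 0] * s[0]) / np.sqrt(r[:, 0])  # row scores, dimension 1

# Synthetic data: 300 workers, 8 binary insecurity indicators all driven
# by one latent precariousness level (hypothetical generating process).
rng = np.random.default_rng(1)
latent = rng.uniform(0.0, 1.0, 300)
codes = np.column_stack([
    (latent + rng.normal(0.0, 0.3, 300) > 0.5).astype(int) for _ in range(8)
])
ep_score = mca_first_dim(codes)
# The first MCA dimension should recover the latent ordering (up to sign).
print(abs(np.corrcoef(ep_score, latent)[0, 1]))
```

Because all dimensions load jointly on the first axis, the resulting score behaves as the kind of summary scale described above, in which overlapping insecurities produce higher "degrees of precariousness."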
Further, the relationship between EP and 'general self-perceived health' is estimated by applying ordered probit random-effects estimators and calculating average marginal effects (AME). Then, to control for unobserved heterogeneity, I implement correlated random-effects models, which add the within-individual means of the time-varying independent variables to the model. To test the significance of the gender differential, I add an interaction term between EP and gender to the fully adjusted model in the pooled sample.
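The correlated random-effects (Mundlak) device described above can be illustrated in a linear toy version (the paper itself uses an ordered probit): adding the within-person mean of a time-varying regressor to a pooled model lets the coefficient on the regressor capture the within-person effect, purged of constant unobserved heterogeneity. Variable names and the data-generating process below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 400, 4
ids = np.repeat(np.arange(n), t)
alpha = rng.normal(0.0, 1.0, n)                    # person-specific heterogeneity
ep = rng.uniform(0.0, 1.0, n * t) + 0.3 * alpha[ids]  # EP correlated with alpha
bad_health = 0.2 * ep + alpha[ids] + rng.normal(0.0, 0.5, n * t)

# Mundlak device: within-person means of the time-varying regressor.
ep_mean = np.bincount(ids, weights=ep)[ids] / t

# Pooled OLS with the within-person mean added: the coefficient on ep is
# then the within (fixed-effects-like) effect, free of the bias from alpha.
X = np.column_stack([np.ones(n * t), ep, ep_mean])
beta, *_ = np.linalg.lstsq(X, bad_health, rcond=None)
print(beta[1])  # close to the true within effect of 0.2
```

By the Frisch-Waugh-Lovell logic, including the person means is algebraically equivalent to the within transformation in this linear case, which is what makes the CRE specification a practical route to controlling unobserved heterogeneity in nonlinear models.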
My correlated random-effects models showed a negative and substantial 'effect' of EP on self-perceived health for both men and women. Although nonsignificant, the evidence seems in line with previous cross-sectional literature and supports the hypothesis that employment precariousness can be detrimental to workers' health. Further, my results showed the crucial role of unobserved heterogeneity in shaping the health consequences of precarious employment. This is particularly important because the accumulating evidence is still mostly descriptive.
Moreover, my results revealed a substantial difference between men and women in the relationship between EP and health: when EP increases, the risk of experiencing poor health increases much more for men than for women. This evidence falsifies previous theory according to which the gender differential is contingent on the structurally disadvantaged position of women in western societies. In contrast, my results seem to confirm the idea that men in precarious work could experience role conflict to a larger extent than women, as their self-standard is supposed to be the stereotypical breadwinner worker with a good and well-rewarded job. Finally, results from the multiple correspondence analysis contribute to the methodological debate on precariousness, showing that a multidimensional and continuous indicator can express a latent variable of EP.
All in all, complementarities are revealed in the results on unemployment and employment precariousness, which have two implications: Policy-makers need to be aware that the total costs of unemployment and precariousness go far beyond the economic and material realm, penetrating other fundamental life domains such as individual health. Moreover, they need to balance the trade-off between adequately protecting unemployed people and fostering high-quality employment in reaction to the highlighted market pressures. In this sense, the further development of a (universalistic) welfare state certainly helps mitigate the adverse health effects of unemployment and, therefore, the future costs of both individuals' health and welfare spending. In addition, the presence of a working partner is crucial for reducing the health consequences of employment instability. Therefore, policies aiming to increase female labor market participation should be promoted, especially in those contexts where the welfare state is less developed.
Moreover, my results support the significance of taking a gender perspective into account in health research. The findings of the three articles show that job loss, unemployment, and precarious employment, in general, have adverse effects on men's health but weaker or absent consequences for women's health. This suggests the importance of labor and health policies that consider and further distinguish the specific needs of the male and female labor force in Europe. Nevertheless, a further implication emerges: the health consequences of employment instability and de-standardization need to be investigated in light of the gender arrangements and the transforming gender relationships in specific cultural and institutional contexts. My results indeed seem to suggest that women's health advantage may be a transitory phenomenon, contingent on the predominant gendered institutional and cultural context. As the structural difference between men's and women's position in society erodes and egalitarianism becomes the dominant normative standard, the gender difference in the health consequences of job loss and precariousness will probably erode as well. Therefore, while gender equality in opportunities and roles is a desirable goal for contemporary societies and a political objective that cannot be postponed further, this thesis raises a further and maybe more crucial question: What kind of equality should be pursued to provide men and women with both a good quality of life and equal chances in the public and private spheres? In this sense, I believe that social and labor policies aiming to reduce gender inequality in society should focus on improving women's integration into the labor market, implementing policies targeting men, and facilitating men's involvement in the private sphere of life. An equal redistribution of social roles could activate a crucial transformation of gender roles and of the cultural models that sustain and still legitimate gender inequality in Western societies.
Technological progress allows for producing ever more complex predictive models on the basis of increasingly big datasets. For the risk management of natural hazards, a multitude of models is needed as a basis for decision-making, e.g. in the evaluation of observational data, for the prediction of hazard scenarios, or for statistical estimates of expected damage. The question arises of how modern modelling approaches like machine learning or data mining can be meaningfully deployed in this thematic field. In addition, with respect to data availability and accessibility, the trend is towards open data. The topic of this thesis is therefore to investigate the possibilities and limitations of machine learning and open geospatial data in the field of flood risk modelling in the broad sense. As this overarching topic is broad in scope, individual relevant aspects are identified and inspected in detail.
A prominent data source in the flood context is satellite-based mapping of inundated areas, made openly available, for example, by the Copernicus service of the European Union. Great expectations are placed on these products in the scientific literature, both for immediate support of relief forces during emergency response and for modelling via hydrodynamic models or for damage estimation. Therefore, a focus of this work was set on evaluating these flood masks. Based on the observation that the quality of these products is insufficient in forested and built-up areas, a procedure for subsequent improvement via machine learning was developed. This procedure is based on a classification algorithm that only requires training data from a particular class to be predicted, in this specific case data of flooded areas, but not of the negative class (dry areas). The application to hurricane Harvey in Houston shows the high potential of this method, which, however, depends on the quality of the initial flood mask.
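The one-class setup described here, training only on the flooded class, can be illustrated generically with scikit-learn's OneClassSVM; the features and values below are synthetic stand-ins, not the specific classifier or data used in the thesis:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# hypothetical per-pixel features (e.g. SAR backscatter, elevation) for
# pixels the initial flood mask labels as flooded: the only training class
flooded_train = rng.normal(loc=[-15.0, 2.0], scale=0.5, size=(500, 2))

# fit a one-class model on the positive class alone; no "dry" labels needed
clf = OneClassSVM(nu=0.05, gamma="scale").fit(flooded_train)

# apply to unlabeled pixels: +1 -> flooded-like, -1 -> dry-like
candidates = np.array([[-15.2, 2.1],   # close to the training distribution
                       [-5.0, 30.0]])  # far away from it
labels = clf.predict(candidates)
```

Pixels whose feature vectors resemble the flooded training set are kept, while dissimilar pixels are rejected, which is the essence of refining a flood mask without negative-class labels.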
Next, it is investigated how much the statistical risk predicted by a process-based model chain depends on implemented physical process details. Thereby it is demonstrated what a risk study based on established models can deliver. Even for fluvial flooding, however, such model chains are already quite complex and are hardly available for compound or cascading events comprising torrential rainfall, flash floods, and other processes. In the fourth chapter of this thesis it is therefore tested whether machine learning based on comprehensive damage data can offer a more direct path towards damage modelling that avoids the explicit conception of such a model chain. For that purpose, a state-collected dataset of damaged buildings from the severe El Niño event of 2017 in Peru is used. In this context, the possibilities of data mining for extracting process knowledge are explored as well. It can be shown that various openly available geodata sources contain useful information for flood hazard and damage modelling for complex events, e.g. satellite-based rainfall measurements, topographic and hydrographic information, mapped settlement areas, as well as indicators derived from spectral data. Further, insights into damaging processes are discovered, which are mainly in line with prior expectations. The maximum rainfall intensity, for example, has a stronger effect in cities and steep canyons, while the rainfall sum was found to be more informative in low-lying river catchments and forested areas. Rural areas of Peru exhibited higher vulnerability in the presented study compared to urban areas. However, the general limitations of the methods and the dependence on specific datasets and algorithms also become obvious.
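The data-mining step, extracting process knowledge from heterogeneous geodata, can be illustrated with feature importances from a random forest; the feature names, the synthetic damage rule, and all values below are illustrative stand-ins, not the thesis data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 2000

# synthetic stand-ins for openly available geodata features
rain_max = rng.uniform(0, 50, n)    # maximum rainfall intensity
rain_sum = rng.uniform(0, 300, n)   # rainfall sum
slope    = rng.uniform(0, 40, n)    # terrain slope
X = np.column_stack([rain_max, rain_sum, slope])

# hypothetical damage rule: damage mostly driven by rainfall intensity
y = (rain_max + 0.02 * rain_sum + rng.normal(0, 2, n)) > 28

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = model.feature_importances_  # relative contribution per feature
```

On such data the intensity feature dominates the importance ranking, which is the kind of signal that, on real damage data, can be read as a hint about the underlying damaging process.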
In the overarching discussion, the different methods – process-based modelling, predictive machine learning, and data-mining – are evaluated with respect to the overall research questions. In the case of hazard observation it seems that a focus on novel algorithms makes sense for future research. In the subtopic of hazard modelling, especially for river floods, the improvement of physical models and the integration of process-based and statistical procedures is suggested. For damage modelling the large and representative datasets necessary for the broad application of machine learning are still lacking. Therefore, the improvement of the data basis in the field of damage is currently regarded as more important than the selection of algorithms.
Enhanced geothermal systems (EGS) are considered a cornerstone of future sustainable energy production. In such systems, high-pressure fluid injections break the rock to provide pathways for water to circulate in and heat up. This approach inherently induces small seismic events that, in rare cases, are felt or can even cause damage. Controlling and reducing the seismic impact of EGS is crucial for broader public acceptance. To evaluate the applicability of hydraulic fracturing (HF) in EGS and to improve the understanding of fracturing processes and the hydromechanical relation to induced seismicity, six in-situ, meter-scale HF experiments with different injection schemes were performed under controlled conditions in crystalline rock at a depth of 410 m at the Äspö Hard Rock Laboratory (Sweden).
I developed a semi-automated, full-waveform-based detection, classification, and location workflow to extract and characterize the acoustic emission (AE) activity from the continuous recordings of 11 piezoelectric AE sensors. Based on the resulting catalog of 20,000 AEs, with rupture sizes of cm to dm, I mapped and characterized the fracture growth in great detail. The injection using a novel cyclic injection scheme (HF3) had a lower seismic impact than the conventional injections. HF3 induced fewer AEs with a reduced maximum magnitude and significantly larger b-values, implying a decreased number of large events relative to the number of small ones. Furthermore, HF3 showed an increased fracture complexity with multiple fractures or a fracture network. In contrast, the conventional injections developed single, planar fracture zones (Publication 1).
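The b-value comparison rests on a standard estimate such as the Aki-Utsu maximum-likelihood formula; a minimal sketch on a synthetic Gutenberg-Richter catalog (the magnitudes are simulated and illustrative, not the Äspö AE catalog):

```python
import numpy as np

def b_value(magnitudes, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value estimate for events above the
    magnitude of completeness mc, with binning correction dm/2."""
    m = np.asarray(magnitudes)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

# synthetic catalog drawn from a Gutenberg-Richter law with b = 1:
# magnitudes above mc are exponentially distributed with scale log10(e)/b
rng = np.random.default_rng(42)
mags = -4.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=20000)
b = b_value(mags, mc=-4.0, dm=0.0)
```

A larger b-value, as reported for the cyclic injection HF3, means the catalog is relatively richer in small events, i.e. the frequency-magnitude distribution falls off more steeply.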
An independent, complementary approach based on a comparison of modeled and observed tilt exploits transient long-period signals recorded on the horizontal components of two broad-band seismometers a few tens of meters away from the injections. It validated the efficient creation of hydraulic fractures and verified the AE-based fracture geometries. The innovative joint analysis of AEs and tilt signals revealed different phases of the fracturing process, including the (re-)opening, growth, and aftergrowth of fractures, and provided evidence for the reactivation of a preexisting fault in one of the experiments (Publication 2). A newly developed network-based waveform-similarity analysis applied to the massive AE activity supports the latter finding.
To validate whether the reduction of the seismic impact as observed for the cyclic injection schemes during the Äspö mine-scale experiments is transferable to other scales, I additionally calculated energy budgets for injection experiments from previously conducted laboratory tests and from a field application. Across all three scales, the cyclic injections reduce the seismic impact, as depicted by smaller maximum magnitudes, larger b-values, and decreased injection efficiencies (Publication 3).
Localisation of deformation is a ubiquitous feature of continental rift dynamics and is observed across drastically different time and length scales. This thesis comprises one experimental and two numerical modelling studies investigating strain localisation in (1) a ductile shear zone induced by a material heterogeneity and (2) an active continental rift setting. The studies are related by the fact that weakening mechanisms on the crystallographic and grain-size scale enable bulk rock weakening, which fundamentally enables the formation of shear zones, continental rifts, and hence plate tectonics. Aiming to investigate the mechanisms controlling the initiation and evolution of a shear zone, the torsion experiments of the experimental study were conducted in a Paterson-type apparatus with strong Carrara marble cylinders containing a weak, planar Solnhofen limestone inclusion. Using state-of-the-art numerical modelling software, the torsion experiments were simulated to answer questions regarding the localisation process, such as the stress distribution or the impact of rheological weakening. 2D numerical models were also employed to integrate geophysical and geological data to explain the characteristic tectonic evolution of the Southern and Central Kenya Rift. Key elements of the numerical tools are a randomized initial strain distribution and the usage of strain softening. During the torsion experiments, deformation begins to localise at the limestone inclusion tips in a process zone, which propagates into the marble matrix with increasing deformation until a ductile shear zone is established. Minor indicators of coexisting brittle deformation are found close to the inclusion tips and are presumed to slightly facilitate strain localisation alongside the dominant ductile deformation processes.
The 2D numerical model of the torsion experiment successfully predicts local stress concentration and strain rate amplification ahead of the inclusion, in first-order agreement with the experimental results. A simple linear parametrization of strain weakening enables a high-accuracy reproduction of phenomenological aspects of the observed weakening. The torsion experiments suggest that loading conditions do not affect strain localisation during high-temperature deformation of multiphase material with high viscosity contrasts. A numerical simulation can provide a way of analysing the process zone evolution virtually and extend the examinable frame. Furthermore, the nested structure and anastomosing shape of an ultramylonite band was mimicked with an additional second softening step. Rheological weakening is necessary to establish a shear zone in a strong matrix around a weak inclusion and for ultramylonite formation.
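A linear strain-weakening parametrization of the kind mentioned above can be written as a simple clamped interpolation (a generic sketch, not the exact formulation used in the models; the parameter values are illustrative):

```python
def linear_strain_softening(strain, s0, s1, w0, w1):
    """Linear strain-weakening parametrization: a material parameter
    (e.g. a viscosity prefactor or friction coefficient) drops linearly
    from w0 to w1 as accumulated strain grows from s0 to s1, and is
    clamped to the end values outside that interval."""
    if strain <= s0:
        return w0
    if strain >= s1:
        return w1
    return w0 + (w1 - w0) * (strain - s0) / (s1 - s0)

# halfway through the weakening interval, the parameter is halfway reduced
w = linear_strain_softening(0.5, 0.0, 1.0, 1.0, 0.2)
```

A second softening step, as used for the ultramylonite band, would simply chain a second such ramp over a higher strain interval.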
Such strain weakening laws are also incorporated into the numerical models of the Southern and Central Kenya Rift that capture its characteristic tectonic evolution. A three-stage early rift evolution is suggested that starts with (1) the accommodation of strain by a single border fault and flexure of the hanging-wall crust, after which (2) faulting in the hanging wall and the basin centre increases before (3) the early-stage asymmetry is lost and basinward localisation of deformation occurs. Along-strike variability of rifts can be produced by modifying the initial random noise distribution. In summary, the three studies address selected aspects of the broad range of mechanisms and processes that fundamentally enable the deformation of rock and govern localisation patterns across the scales. In addition to the aforementioned results, the first and second manuscripts combined demonstrate a procedure to find new, or improve on existing, numerical formulations for specific rheologies and their dynamic weakening. These formulations are essential in addressing rock deformation from the grain to the global scale. This is exemplified by the third study of this thesis, in which geodynamic controls on the evolution of a rift were examined by integrating geological and geophysical data into a numerical model.
Sustainable urban growth (2022)
This dissertation explores the determinants of sustainable and socially optimal growth in a city. Two general equilibrium models establish the base for this evaluation, each adding its puzzle piece to the urban sustainability discourse and examining the role of non-market-based and market-based policies for balanced growth and welfare improvements in different theory settings. Sustainable urban growth either calls for policy actions or a green energy transition. Further, R&D market failures can pose severe challenges to the sustainability of urban growth and the social optimality of decentralized allocation decisions. Still, a careful (holistic) combination of policy instruments can achieve sustainable growth and even be first best.
The development of novel programmable materials aiming to control friction in real time holds the potential to facilitate innovative lubrication solutions for reducing wear and energy losses. This work describes the integration of light-responsiveness into two lubricating materials, silicone oils and polymer brush surfaces.
The first part focuses on the assessment of 9-anthracene ester-terminated polydimethylsiloxanes (PDMS-A) and, in particular, on the variability of rheological properties and the implications that arise with UV light as an external trigger. The applied rheometer setup contains a UV-transparent quartz plate, which enables irradiation and simultaneous measurement of the dynamic moduli. UV-A radiation (354 nm) triggers the cycloaddition reaction between the terminal functionalities of linear PDMS, resulting in chain extension. The newly formed anthracene dimers cleave under UV-C radiation (254 nm) or at elevated temperatures (T > 130 °C). Sequential UV-A radiation and thermal reprogramming over three cycles demonstrate high conversions and reproducible programming of rheological properties. In contrast, the photochemical back reaction by UV-C is incomplete and can only partially restore the initial rheological properties. The dynamic moduli increase with each photochemical programming cycle, presumably resulting from a chain segment rearrangement caused by the repeated partial photocleavage and subsequent chain length-dependent dimerization. In addition, long irradiation periods cause photooxidative degradation, which damages the photo-responsive functions and consequently reduces the programming range. The absence of oxygen, however, reduces undesired side reactions. Anthracene-functionalized PDMS and native PDMS mix depending on the anthracene ester content and chain length, respectively, and allow fine-tuning of programmable rheological properties. The work shows the influence of mixing conditions during the photoprogramming step on the rheological properties, indicating that material property gradients induced by light attenuation along the beam path have to be considered. Accordingly, thin lubricant films are suggested as a potential application for light-programmable silicone fluids.
The second part compares strategies for the grafting of spiropyran (SP)-containing copolymer brushes from Si wafers and evaluates the light-responsiveness of the surfaces. Pre-experiments on the kinetics of the thermally initiated RAFT copolymerization of 2-hydroxyethyl acrylate (HEA) and spiropyran acrylate (SPA) in solution show, first, a strong retardation by SP and, second, the dependence of SPA polymerization on light. Surprisingly, the copolymerization of SPA is inhibited in the dark. These findings contribute to improving the synthesis of polar, spiropyran-containing copolymers. The comparison between initiator systems for the grafting-from approach indicates that PET-RAFT is superior to thermally initiated RAFT, suggesting a more efficient initiation of surface-bound CTA by light. Surface-initiated polymerization via PET-RAFT with an initiator system of Eosin Y (EoY) and ascorbic acid (AscA) facilitates copolymer synthesis from HEA and 5-25 mol% SPA. The resulting polymer film, with a thickness of a few nanometers, was detected by atomic force microscopy (AFM) and ellipsometry. Water contact angle (CA) measurements demonstrate photo-switchable surface polarity, which is attributed to the photoisomerization between the non-polar spiropyran and zwitterionic merocyanine isomers. Furthermore, the obtained spiropyran brushes show potential for further studies on light-programmable properties. In this context, it would be interesting to investigate whether swollen spiropyran-containing polymers change their configuration, and thus their film thickness, under the influence of light. In addition, further experiments using an AFM or microtribometer should evaluate whether light-programmable solvation enables a change in the frictional properties between polymer brush surfaces.
The echo chamber model describes the development of groups in heterogeneous social networks. By a heterogeneous social network we mean a set of individuals, each of whom holds exactly one opinion. The existing relationships between individuals can then be represented by a graph. The echo chamber model is a time-discrete model which, like a board game, is played in rounds. In each round, an existing relationship is selected uniformly at random from the network and the two connected individuals interact. If the opinions of the individuals involved are sufficiently similar, they move closer together in their opinions, whereas in the case of opinions that are too far apart, they break off their relationship and one of the individuals seeks a new relationship. In this thesis we examine the building blocks of this model. We start from the observation that changes in the structure of relationships in the network can be described by a system of interacting particles in a more abstract space.
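The round-based dynamics described above can be sketched as a short simulation; the concrete update rule (opinions move a fraction mu toward each other, edges between opinions further apart than eps break and rewire) and all parameter values are illustrative assumptions:

```python
import random

def echo_chamber_step(opinions, edges, eps=0.3, mu=0.25):
    """One round of the echo chamber model (a sketch with assumed update
    rules): pick a uniformly random edge; if the two opinions differ by
    at most eps, both move a fraction mu toward each other; otherwise
    the edge breaks and one endpoint rewires to a fresh partner."""
    i, j = random.choice(sorted(edges))
    if abs(opinions[i] - opinions[j]) <= eps:
        oi, oj = opinions[i], opinions[j]
        opinions[i] += mu * (oj - oi)
        opinions[j] += mu * (oi - oj)
    else:
        edges.remove((i, j))
        taken = {i, j} | {b for a, b in edges if a == i} | {a for a, b in edges if b == i}
        free = [v for v in range(len(opinions)) if v not in taken]
        if free:  # rewire only if an unconnected partner exists
            edges.add(tuple(sorted((i, random.choice(free)))))

# tiny demo: two like-minded individuals and one distant one
ops = [0.0, 0.1, 1.0]
E = {(0, 1), (1, 2)}
random.seed(3)
for _ in range(50):
    echo_chamber_step(ops, E)
```

In this small example the two similar opinions contract toward their common mean while the conflicting relationship is eventually severed, the prototypical formation of an echo chamber.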
These reflections lead to the definition of a new abstract graph that encompasses all possible relational configurations of the social network. This provides us with the geometric understanding necessary to analyse the dynamic components of the echo chamber model in Part III. As a first step, in Part 7, we leave aside the opinions of the individuals and assume that the position of the edges changes with each move as described above, in order to obtain a basic understanding of the underlying dynamics. Using Markov chain theory, we find upper bounds on the speed of convergence of an associated Markov chain to its unique stationary distribution and show that there are mutually identifiable networks that the dynamics under analysis cannot distinguish, in the sense that the stationary distribution of the associated Markov chain gives equal weight to these networks.
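The two objects analysed here, the stationary distribution and the convergence speed of a finite Markov chain, can be computed numerically for a toy chain; the transition matrix below (a lazy random walk on a three-state path) is an illustrative stand-in for the edge-dynamics chain, not the chain studied in the thesis:

```python
import numpy as np

# toy reversible chain on 3 network states; rows sum to 1
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

# stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

# the spectral gap 1 - |lambda_2| controls the convergence speed,
# which is what Cheeger-constant bounds estimate from below
lams = np.sort(np.abs(np.real(vals)))[::-1]
gap = 1.0 - lams[1]
```

For this chain the stationary distribution is (0.25, 0.5, 0.25) and the spectral gap is 0.5; networks assigned equal stationary weight are exactly the ones the long-run dynamics cannot tell apart.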
In the reversible cases, we focus in particular on the explicit form of the stationary distribution as well as on the lower bounds of the Cheeger constant to describe the convergence speed.
The final result of Section 8, based on absorbing Markov chains, shows that in a reduced version of the echo chamber model, a hierarchical structure of the number of conflicting relations can be identified.
We can use this structure to determine an upper bound on the expected absorption time, using a quasi-stationary distribution. This hierarchical structure also provides a bridge to classical theories of pure death processes. We conclude by showing how future research can exploit this link and by discussing the importance of the results as building blocks for a full theoretical understanding of the echo chamber model. Finally, Part IV presents a published paper on the birth-death process with partial catastrophe. The paper is based on the explicit calculation of the first moment of a catastrophe. The first part is entirely based on an analytical approach to second-degree recurrences with linear coefficients. The convergence to 0 of the resulting sequence as well as the speed of convergence are proved. On the other hand, upper bounds on the expected population size and its variance are determined, as well as the difference between the determined upper bound and the actual expected value. For these results we use almost exclusively the theory of ordinary nonlinear differential equations.
Hydraulically driven fractures play a key role in subsurface energy technologies across several scales. By injecting fluid at high hydraulic pressure into rock with intrinsically low permeability, the in-situ stress field and fracture development patterns can be characterised, and the rock permeability can be enhanced. Hydraulic fracturing is a standard commercial procedure in the petroleum industry for enhanced oil and gas production from rock reservoirs with low permeability. However, in EGS utilization, a major geological concern is the unsolicited generation of earthquakes due to fault reactivation, referred to as induced seismicity, with a magnitude large enough to be felt at the surface or to damage facilities and buildings. Furthermore, the reliable interpretation of hydraulic fracturing tests for stress measurement is a great challenge for these energy technologies. Therefore, in this cumulative doctoral thesis the following research questions are investigated: (1) How do hydraulic fractures grow in hard rock at various scales? (2) Which parameters control hydraulic fracturing and hydro-mechanical coupling? (3) How can hydraulic fracturing in hard rock be modelled?
In the laboratory-scale study, several laboratory hydraulic fracturing experiments, performed on intact cubic Pocheon granite samples from South Korea with different injection protocols, are investigated numerically using Irazu2D. The goal of the laboratory experiments is to test the concept of cyclic soft stimulation, which may enable sustainable permeability enhancement (Publication 1).
In the borehole-scale study, hydraulic fracturing tests are reported that were performed in boreholes located in central Hungary to determine the in-situ stress for a geological site investigation. At a depth of about 540 m, the recorded pressure-versus-time curves in mica schist with low-dip-angle foliation show an atypical evolution. In order to explain this observation, a series of discrete element computations using Particle Flow Code 2D is performed (Publication 2).
In the reservoir scale study, the hydro-mechanical behaviour of fractured crystalline rock due to one of the five hydraulic stimulations at the Pohang Enhanced Geothermal site in South Korea is studied. Fluid pressure perturbation at faults of several hundred-meter lengths during hydraulic stimulation is simulated using FracMan (Publication 3).
The doctoral research shows that the resulting hydraulic fracture geometry depends “locally”, i.e. at the length scale of the representative elementary volume (REV) and below it (sub-REV), on the geometry and strength of natural fractures, and “globally”, i.e. at super-REV domain volumes, on the far-field stresses. Regarding hydro-mechanical coupling, it is suggested to define separate coupling relationships for the intact rock mass and for natural fractures. Furthermore, the relative importance of the parameters affecting the magnitude of the formation breakdown pressure, a parameter characterising hydro-mechanical coupling, is defined. It can also be concluded that there is a clear gap between the capacity of the simulation software and the complexity of the studied problems. Therefore, the computational time of the simulation of complex hydraulic fracture geometries must be reduced while maintaining high-fidelity simulation results. This can be achieved either by extending the computational resources via parallelization techniques or by using time-scaling techniques. The ongoing development of the numerical models used focuses on tackling these methodological challenges.
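For orientation, the formation breakdown pressure mentioned above is classically related to the far-field stresses, tensile strength, and pore pressure via the Hubbert-Willis relation (shown here with illustrative values, not parameters from the thesis):

```python
def breakdown_pressure_hw(sigma_h, sigma_H, tensile_strength, pore_pressure):
    """Classical Hubbert-Willis estimate of the formation breakdown
    pressure for a vertical borehole in impermeable rock: the injection
    pressure at which a tensile fracture initiates at the borehole wall,
    Pb = 3*sigma_h - sigma_H + T - Pp."""
    return 3.0 * sigma_h - sigma_H + tensile_strength - pore_pressure

# illustrative values in MPa: Pb = 3*20 - 30 + 10 - 5 = 35 MPa
pb = breakdown_pressure_hw(sigma_h=20.0, sigma_H=30.0,
                           tensile_strength=10.0, pore_pressure=5.0)
```

The formula makes explicit which quantities compete in setting the breakdown pressure, which is the parameter sensitivity question addressed numerically in the thesis.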
In my doctoral thesis, I examine continuous gravity measurements for monitoring the geothermal site at Þeistareykir in North Iceland. With the help of high-precision superconducting gravity meters (iGravs), I investigate underground mass changes that are caused by the operation of the geothermal power plant (i.e. by the extraction of hot water and the reinjection of cold water). The overall goal of this research project is to make a statement about the sustainable use of the geothermal reservoir, from which the Icelandic energy supplier and power plant operator Landsvirkjun should also benefit.
As a first step, to investigate the performance and measurement stability of the gravity meters, I performed comparative measurements at the gravimetric observatory J9 in Strasbourg in summer 2017. From the three-month gravity time series, I examined the calibration, noise, and drift behaviour of the iGravs in comparison to the stable long-term time series of the observatory's superconducting gravity meters. After preparatory work in Iceland (setup of gravity stations, additional measuring equipment and infrastructure, discussions with Landsvirkjun, and meetings with the Icelandic partner institute ISOR), gravity monitoring at Þeistareykir was started in December 2017. With the help of the iGrav records from the first 18 months of measurements, I carried out the same investigations (of calibration, noise, and drift behaviour) as at J9 to understand how the transport of the superconducting gravity meters to Iceland may influence instrumental parameters.
In the further course of this work, I focus on modelling and reduction of local gravity contributions at Þeistareykir. These comprise additional mass changes due to rain, snowfall and vertical surface displacements that superimpose onto the geothermal signal of the gravity measurements. For this purpose, I used data sets from additional monitoring sensors that are installed at each gravity station and adapted scripts for hydro-gravitational modelling. The third part of my thesis targets geothermal signals in the gravity measurements.
Together with my PhD colleague Nolwenn Portier from France, I carried out additional gravity measurements with a Scintrex CG5 gravity meter at 26 measuring points within the geothermal field in the summers of 2017, 2018 and 2019. These annual time-lapse gravity measurements are intended to increase the spatial coverage of gravity data from the three continuous monitoring stations to the entire geothermal field. The combination of CG5 and iGrav observations, as well as annual reference measurements with an FG5 absolute gravity meter represent the hybrid gravimetric monitoring method for Þeistareykir. Comparison of the gravimetric data to local borehole measurements (of groundwater levels, geothermal extraction and injection rates) is used to relate the observed gravity changes to the actually extracted (and reinjected) geothermal fluids. An approach to explain the observed gravity signals by means of forward modelling of the geothermal production rate is presented at the end of the third (hybrid gravimetric) study. Further modelling with the help of the processed gravity data is planned by Landsvirkjun. In addition, the experience from time-lapse and continuous gravity monitoring will be used for future gravity measurements at the Krafla geothermal field 22 km south-east of Þeistareykir.
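The order of magnitude of the gravity signal expected from extracted fluid mass can be estimated with a simple point-mass approximation (a first-order sketch with illustrative numbers, not Þeistareykir data or the forward model used in the thesis):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def delta_g(mass_change_kg, depth_m):
    """Vertical gravity change (in nm/s^2) at the surface directly above
    a point-mass change at the given depth: dg = G * dM / r^2.
    Note 10 nm/s^2 = 1 microGal."""
    return G * mass_change_kg / depth_m**2 * 1e9  # convert m/s^2 to nm/s^2

# e.g. a net extraction of 1e9 kg (one million tonnes) of fluid at 1 km depth
dg = delta_g(-1.0e9, 1000.0)
# roughly -67 nm/s^2, well within reach of superconducting gravimeters
```

Linking observed gravity changes to extraction and injection rates is, in essence, an inversion of relations of this kind, refined with realistic reservoir geometry.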
Plate tectonics describes the movement of rigid plates at the surface of the Earth as well as their complex deformation at three types of plate boundaries: 1) divergent boundaries such as rift zones and mid-ocean ridges, 2) strike-slip boundaries where plates grind past each other, such as the San Andreas Fault, and 3) convergent boundaries that form large mountain ranges like the Andes. The generally narrow deformation zones that bound the plates exhibit complex strain patterns that evolve through time. During this evolution, plate boundary deformation is driven by tectonic forces arising from Earth’s deep interior and from within the lithosphere, but also by surface processes, which erode topographic highs and deposit the resulting sediment into regions of low elevation. Through the combination of these factors, the surface of the Earth evolves in a highly dynamic way with several feedback mechanisms. At divergent boundaries, for example, tensional stresses thin the lithosphere, forcing uplift and subsequent erosion of rift flanks, which creates a sediment source. Meanwhile, the rift center subsides and becomes a topographic low where sediments accumulate. This mass transfer from foot- to hanging wall plays an important role during rifting, as it prolongs the activity of individual normal faults. When rifting continues, continents are eventually split apart, exhuming Earth’s mantle and creating new oceanic crust. Because of the complex interplay between deep tectonic forces that shape plate boundaries and mass redistribution at the Earth’s surface, it is vital to understand feedbacks between the two domains and how they shape our planet.
In this study I aim to provide insight into two primary questions: 1) How do divergent and strike-slip plate boundaries evolve? 2) How is this evolution, on a large temporal scale and a smaller structural scale, affected by the alteration of the surface through erosion and deposition? This is done in three chapters that examine the evolution of divergent and strike-slip plate boundaries using numerical models. Chapter 2 takes a detailed look at the evolution of rift systems using two-dimensional models. Specifically, I extract faults from a range of rift models and correlate them through time to examine how fault networks evolve in space and time. By implementing a two-way coupling between the geodynamic code ASPECT and the landscape evolution code FastScape, I investigate how the fault network and rift evolution are influenced by the system's erosional efficiency, which subsumes many factors such as lithology and climate. In Chapter 3, I examine rift evolution from a three-dimensional perspective, studying linkage modes for offset rifts to determine when fast-rotating plate-boundary structures known as continental microplates form. Chapter 4 uses the two-way numerical coupling between tectonics and landscape evolution to investigate how a strike-slip boundary responds to large sediment loads, and whether this is sufficient to form an entirely new type of flexural strike-slip basin.
The Andes are a ~7000 km long N-S trending mountain range developed along the western continental margin of South America. Driven by the subduction of the oceanic Nazca plate beneath the continental South American plate, the formation of the northern and central parts of the orogen is a type case for non-collisional orogeny. In the southern Central Andes (SCA, 29°S-39°S), the subduction angle of the oceanic plate changes between 33°S and 35°S from almost horizontal (< 5° dip) in the north to a steeper angle (~30° dip) in the south. This sector of the Andes also displays remarkable along- and across-strike variations in the tectonic deformation patterns. These include a systematic decrease in topographic elevation, crustal shortening, and foreland and orogenic width, as well as an alternation of the foreland deformation style between thick-skinned and thin-skinned, recorded along and across the strike of the subduction zone. Moreover, the SCA are a seismically very active region. The continental plate is characterized by relatively shallow seismicity (< 30 km depth), which is mainly focused at the transition from the orogen to the lowland areas of the foreland and the forearc; in contrast, deeper seismicity occurs below the interior of the northern foreland. Additionally, frequent seismicity is also recorded in the shallow parts of the oceanic plate and in a sector of the flat-slab segment between 31°S and 33°S. The observed spatial heterogeneity of tectonic and seismic deformation in the SCA has been attributed to multiple causes, including variations in sediment thickness, the presence of inherited structures, and changes in the subduction angle of the oceanic slab. However, no study has yet investigated the relationship between the long-term rheological configuration of the SCA and these spatial deformation patterns.
Moreover, the effects of the density and thickness configuration of the continental plate and of variations in the slab dip angle on the rheological state of the lithosphere have not yet been thoroughly investigated. Since rheology depends on composition, pressure and temperature, a detailed characterization of the compositional, structural and thermal fields of the lithosphere is needed. Therefore, by using multiple geophysical approaches and data sources, I constructed the following 3D models of the SCA lithosphere: (i) a seismically-constrained structural and density model that was tested against the gravity field; (ii) a thermal model integrating the conversion of mantle shear-wave velocities to temperature with steady-state conductive calculations in the uppermost lithosphere (< 50 km depth), validated against temperature and heat-flow measurements; and (iii) a rheological model of the long-term lithospheric strength using the previously generated models as input.
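The steady-state conductive calculation in the uppermost lithosphere presumably solves the standard heat conduction equation with radiogenic sources (the specific formulation is not given in this abstract and is quoted here only for orientation):

```latex
\nabla \cdot \left( \lambda(\vec{x}) \, \nabla T(\vec{x}) \right) + H(\vec{x}) = 0
```

where $\lambda$ is the thermal conductivity and $H$ the volumetric radiogenic heat production; in such a setup, measured surface temperatures provide the upper boundary condition, while the temperatures converted from mantle shear-wave velocities constrain the lower boundary at 50 km depth.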
The results of this dissertation indicate that the present-day thermal and rheological fields of the SCA are controlled by different mechanisms at different depths. At shallow depths (< 50 km), the thermomechanical field is modulated by the heterogeneous composition of the continental lithosphere. The overprint of the oceanic slab is detectable where the oceanic plate is shallow (< 85 km depth) and the radiogenic crust is thin, resulting in overall lower temperatures and higher strength compared to regions where the slab is steep and the radiogenic crust is thick. At depths > 50 km, the largest temperature variations occur where the descending slab is detected, which implies that the deep thermal field is mainly controlled by the slab dip geometry.
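As an illustration of how a long-term strength model separates brittle from ductile conditions, lithospheric strength is commonly taken as the minimum of a frictional (Byerlee-type) and a dislocation-creep yield stress at each depth, with the crossover marking the brittle-ductile transition. The flow laws and parameters actually used in the dissertation are not given in this abstract; everything below is a generic sketch with invented values.

```python
import math

R = 8.314                    # gas constant, J mol^-1 K^-1
RHO, GRAV = 2800.0, 9.81     # crustal density (kg/m^3), gravity (m/s^2)
FRIC = 0.6                   # friction coefficient (Byerlee-like)
A, N, Q = 1e-26, 3.0, 1.9e5  # creep parameters (Pa^-n s^-1, -, J/mol), quartz-like
STRAIN_RATE = 1e-15          # background strain rate, s^-1

def brittle_strength(z_m):
    # Frictional strength grows linearly with lithostatic pressure.
    return FRIC * RHO * GRAV * z_m

def ductile_strength(T_kelvin):
    # Dislocation creep: sigma = (e_dot / A)**(1/n) * exp(Q / (n R T)).
    return (STRAIN_RATE / A) ** (1.0 / N) * math.exp(Q / (N * R * T_kelvin))

def bdt_depth(geotherm_K_per_km=20.0, surface_T=283.0):
    # March downward in 100 m steps until creep becomes weaker than friction.
    z = 0.0
    while z < 100e3:
        T = surface_T + geotherm_K_per_km * z / 1000.0
        if ductile_strength(T) < brittle_strength(z):
            return z
        z += 100.0
    return None

print(bdt_depth())  # brittle-ductile transition depth in metres
```

With these illustrative parameters the transition falls near 20 km; a colder geotherm pushes it deeper, which is the sense in which cold, strong domains confine seismicity to greater depths.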
The outcomes of this thesis suggest that the long-term thermomechanical state of the lithosphere influences the spatial distribution of seismic deformation. Most of the seismicity within the continental plate occurs above the modelled transition from brittle to ductile conditions. Additionally, there is a spatial correlation between the location of these events and the transition from the mechanically strong domains of the forearc and foreland to the weak domain of the orogen. In contrast, seismicity within the oceanic plate is also detected where long-term ductile conditions are expected. I therefore analysed the possible influence of additional mechanisms triggering these earthquakes, including the compaction of sediments at the subduction interface and dehydration reactions in the slab. To that end, I carried out a qualitative analysis of the state of hydration of the mantle using the ratio between compressional- and shear-wave velocities (vp/vs ratio) from a previous seismic tomography. The results of this analysis indicate that the majority of the seismicity spatially correlates with hydrated areas of the slab and the overlying continental mantle, with the exception of the cluster within the flat-slab segment. In this region, earthquakes are likely triggered by flexural processes where the slab changes from a flat to a steep subduction angle.
First-order variations in the observed tectonic patterns also appear to be influenced by the thermomechanical configuration of the lithosphere. The mechanically strong domains of the forearc and foreland, due to their resistance to deformation, display smaller amounts of shortening than the relatively weak orogenic domain. In addition, the structural and thermomechanical characteristics modelled in this dissertation confirm previous geodynamic modelling results indicating that these characteristics control the observed heterogeneities in the deformation style of the orogen and the foreland. They include the lithospheric and crustal thickness, the presence of weak sediments and variations in gravitational potential energy.
Specific conditions occur in the cold and strong northern foreland, which is characterized by active seismicity and thick-skinned structures, although the modelled crustal strength exceeds the typical values of externally-applied tectonic stresses. The additional mechanisms that could explain the strain localization in a region that should resist deformation are: (i) increased tectonic forces coming from the steepening of the slab and (ii) enhanced weakening along inherited structures from pre-Andean deformation events. Finally, the thermomechanical conditions of this sector of the foreland could be a key factor influencing the preservation of the flat subduction angle at these latitudes of the SCA.
The NAC transcription factor (TF) JUNGBRUNNEN1 (JUB1) is an important negative regulator of plant senescence, as well as of gibberellic acid (GA) and brassinosteroid (BR) biosynthesis, in Arabidopsis thaliana. Overexpression of JUB1 promotes longevity and enhances tolerance to drought and other abiotic stresses. A similar role of JUB1 has been observed in other plant species, including tomato and banana. Our data show that JUB1 overexpressors (JUB1-OXs) accumulate higher levels of proline than WT plants under control conditions, during the onset of drought stress, and thereafter. We found that overexpression of JUB1 induces key proline biosynthesis genes and suppresses key proline degradation genes. Furthermore, bZIP63, a transcription factor involved in proline metabolism, was identified as a novel downstream target of JUB1 by Yeast One-Hybrid (Y1H) analysis and chromatin immunoprecipitation (ChIP). However, based on an Electrophoretic Mobility Shift Assay (EMSA), direct binding of JUB1 to bZIP63 could not be confirmed. Our data indicate that JUB1-OX plants exhibit reduced stomatal conductance under control conditions. However, selective overexpression of JUB1 in guard cells did not improve drought stress tolerance in Arabidopsis. Moreover, the drought-tolerant phenotype of JUB1 overexpressors does not solely depend on the transcriptional control of the DREB2A gene. Thus, our data suggest that JUB1 confers tolerance to drought stress by regulating multiple components. To date, none of the previous studies on JUB1's regulatory network has focused on identifying protein-protein interactions. We therefore performed a yeast two-hybrid (Y2H) screen, which identified several protein interactors of JUB1, two of which are the calcium-binding proteins CaM1 and CaM4. Both proteins interact with JUB1 in the nucleus of Arabidopsis protoplasts. Moreover, JUB1 is expressed under the same conditions as CaM1 and CaM4.
Since CaM1.1 and CaM4.1 encode proteins with identical amino acid sequences, all further experiments were performed with constructs involving the CaM4 coding sequence. Our data show that JUB1 harbors multiple CaM-binding sites, localized in both the N-terminal and C-terminal regions of the protein. One of these sites, located in the DNA-binding domain of JUB1, was identified as a functional CaM-binding site, since its mutation strongly reduced the binding of CaM4 to JUB1. Furthermore, JUB1 transactivates expression of the stress-related gene DREB2A in mesophyll cells; this effect is significantly reduced when the calcium-binding protein CaM4 is co-expressed. Overexpression of both genes in Arabidopsis results in early senescence, evidenced by lower chlorophyll content and enhanced expression of senescence-associated genes (SAGs) compared with single JUB1 overexpressors. Our data also show that JUB1 and CaM4 proteins interact in senescent leaves, which have increased Ca2+ levels compared to young leaves. Collectively, our data indicate that JUB1 activity towards its downstream targets is fine-tuned by calcium-binding proteins during leaf senescence.
Salt deposits offer a variety of usage types. These include the mining of rock salt and potash salt as important raw materials, the storage of energy in man-made underground caverns, and the disposal of hazardous substances in former mines. The most serious risk associated with any of these usage types arises from contact with groundwater or surface water, which causes an uncontrolled dissolution of the salt rock and, in the worst case, can result in the flooding or collapse of underground facilities. Cavernous structures can spread quickly along potash seams in particular, because potash salts have a much higher solubility than rock salt. However, as their chemical behavior is quite complex, previous models do not account for these highly soluble interlayers. The objective of the present thesis is therefore to describe the evolution of cavernous structures along potash seams in space and time, in order to improve hazard mitigation during the utilization of salt deposits.
The formation of cavernous structures represents an interplay of chemical and hydraulic processes. Hence, the first step is to systematically investigate the dissolution and precipitation reactions that occur when water and potash salt come into contact. For this purpose, a geochemical reaction model is used. The results show that the minerals are only partially dissolved, resulting in a porous, sponge-like structure. As the saturation of the solution increases, various secondary minerals form, whose number and type depend on the original rock composition. Field data confirm a correlation between the degree of saturation and the distance from the center of the cavern, where the solution enters. Subsequently, the reaction model is coupled with a flow and transport code and supplemented by a novel approach called ‘interchange’. The latter enables the exchange of solution and rock between areas of different porosity and mineralogy, and thus ultimately the growth of the cavernous structure. By means of several scenario analyses, cavern shape, growth rate and mineralogy are systematically investigated, also taking heterogeneous potash seams into account. The results show that essentially four different cases can be distinguished, with mixed forms occurring frequently in nature. The classification scheme is based on the dimensionless Péclet and Damköhler numbers and allows for a first assessment of the hazard potential. In the future, the model can be applied to any field case, using measurement data for calibration.
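As a hedged illustration of how such a dimensionless classification might be expressed (the exact definitions, thresholds and regime names used in the thesis are not given in this abstract; the values below are invented), the two numbers and a four-case lookup could be sketched as:

```python
# Pe compares advective to diffusive transport; Da compares reaction rate to
# advective transport. Definitions and threshold are the textbook forms, not
# necessarily those of the thesis.

def peclet(v, L, D):
    """Pe = v L / D, with flow velocity v, length scale L, diffusivity D."""
    return v * L / D

def damkoehler(k, L, v):
    """Da = k L / v, with reaction rate constant k."""
    return k * L / v

def regime(pe, da, threshold=1.0):
    """Map (Pe, Da) onto four end-member cases."""
    transport = "advection" if pe > threshold else "diffusion"
    kinetics = "reaction-limited" if da < threshold else "transport-limited"
    return f"{transport}-dominated, {kinetics}"

# Fast flow, slow reaction -> elongated dissolution along the seam:
print(regime(peclet(1e-5, 10.0, 1e-9), damkoehler(1e-7, 10.0, 1e-5)))
```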
The presented research work provides a reactive transport model that, for the first time, is able to characterize the propagation of cavernous structures along potash seams in space and time. Furthermore, it makes it possible to determine the thickness and composition of the transition zones between the cavern center and the unaffected salt rock. The latter is particularly important in potash mining, so that natural cavernous structures can be located at an early stage and the risk of mine flooding thus reduced. The models may also contribute to improved hazard prevention in the construction of storage caverns and the disposal of hazardous waste in salt deposits. Predictions regarding the characteristics and evolution of cavernous structures enable a better assessment of potential hazards, such as loss of integrity or stability, as well as of suitable mitigation measures.
In a world fighting dramatic global warming caused by human activities, research towards the development of renewable energies plays a crucial role. Solar energy is one of the most important clean energy sources, and its role in satisfying the global energy demand is set to increase. In this context, a particular class of materials has captured the attention of the scientific community for its attractive properties: halide perovskites. Devices with perovskites as light absorbers have seen an impressive development within the last decade, nowadays reaching efficiencies comparable to mature photovoltaic technologies such as silicon solar cells. Yet several roadblocks remain before widespread commercialization of such devices becomes possible. One of the critical points lies at the interfaces: perovskite solar cells (PSCs) are made of several layers with different chemical and physical features. For the device to function properly, these properties have to be well matched.
This dissertation deals with some of the challenges related to interfaces in PSCs, with a focus on the interface between the perovskite material itself and the subsequent charge transport layer. In particular, molecular assemblies with specific properties are deposited on the perovskite surface to functionalize it. The functionalization adjusts the energy level alignment, reduces interfacial losses, and improves stability.
First, a strategy to tune the perovskite’s energy levels is introduced: self-assembled monolayers of dipolar molecules are used to functionalize the surface, simultaneously shifting the vacuum level position and saturating the dangling bonds at the surface. A shift in the vacuum level corresponds to an equal change in work function, ionization energy, and electron affinity. The direction of the shift depends on the direction of the collective interfacial dipole, while its magnitude can be tailored by controlling deposition parameters such as the concentration of the solution used for the deposition. The shift for different molecules is characterized by several non-invasive techniques, in particular Kelvin probe measurements. Overall, it is shown that the perovskite energy levels can be shifted in both directions by several hundred meV. Moreover, interesting insights into the deposition dynamics of the molecules are revealed.
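The link between the collective interfacial dipole and the vacuum-level shift can be summarized by the standard Helmholtz relation, quoted here for orientation only (it is not part of the abstract):

```latex
\Delta\Phi = \frac{e \, N \, \mu_{\perp}}{\varepsilon_0 \, \varepsilon_r}
```

where $N$ is the areal density of adsorbed dipolar molecules, $\mu_{\perp}$ the component of the molecular dipole normal to the surface, and $\varepsilon_r$ the effective dielectric constant of the monolayer. In this picture, increasing the deposition concentration raises $N$ and thus the magnitude of the shift, while reversing the dipole orientation reverses its sign.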
Secondly, the application of this strategy in perovskite solar cells is explored. Devices with different perovskite compositions (“triple cation perovskite” and MAPbBr3) are prepared. The two resulting model systems present different energetic offsets at the perovskite/hole-transport layer interface. Upon tailored perovskite surface functionalization, devices with MAPbBr3 show a stabilized open-circuit voltage (Voc) enhancement of approximately 60 mV on average, while the impact on triple-cation solar cells is limited. This suggests that the proposed energy level tuning method is valid, but that its effectiveness depends on factors such as the significance of the energetic offset compared to the other losses in the devices.
Finally, the method presented above is further developed by incorporating the ability to interact with the perovskite surface directly into a novel hole-transport material (HTM), named PFI. This HTM can anchor to the perovskite halide ions via halogen bonding (XB). Its behaviour is compared to that of another HTM (PF) with the same chemical structure and properties, except for the ability to form XB. The interaction of the perovskite with PFI and PF is characterized through UV-Vis spectroscopy, atomic force microscopy and Kelvin probe measurements combined with simulations. Compared to PF, PFI exhibits enhanced resilience against solvent exposure and improved energy level alignment with the perovskite layer. As a consequence, devices comprising PFI show enhanced Voc and operational stability during maximum-power-point tracking, in addition to reduced hysteresis. By anchoring to the halide ions and forming a stable and ordered interfacial layer, XB promotes the formation of a high-quality interface, making it a particularly interesting candidate interaction for the development of tailored charge transport materials in PSCs.
Overall, the results presented in this dissertation introduce and discuss a versatile tool to functionalize the perovskite surface and tune its energy levels. The application of this method in devices is explored, and insights into its challenges and advantages are given. Within this frame, the results shed light on XB as an ideal interaction for enhancing stability and efficiency in perovskite-based devices.
Major challenges during geothermal exploration and exploitation include the structural-geological characterization of the geothermal system and the application of sustainable monitoring concepts to explain changes in a geothermal reservoir during production and/or reinjection of fluids. In the absence of sufficiently permeable reservoir rocks, faults and fracture networks are preferred drilling targets because they can facilitate the migration of hot and/or cold fluids. In volcanic-geothermal systems, considerable amounts of gas can be released at the Earth's surface, often along these fluid-releasing structures.
In this thesis, I developed and evaluated different methodological approaches and measurement concepts to determine the spatial and temporal variation of several soil gas parameters, in order to understand the structural control on fluid flow. To validate their potential as innovative geothermal exploration and monitoring tools, these approaches were applied to three different volcanic-geothermal systems, with an individual survey design developed for the site-specific questions at each site.
The first study presents results of the combined measurement of CO2 flux, ground temperatures, and the analysis of isotope ratios (δ13CCO2, 3He/4He) across the main production area of the Los Humeros geothermal field, to identify locations with a connection to its supercritical (T > 374 °C and P > 221 bar) geothermal reservoir. The systematic and large-scale (25 x 200 m) CO2 flux scouting survey proved to be a fast and flexible way to identify areas of anomalous degassing. Subsequent high-resolution surveys revealed the actual extent and heterogeneous pattern of the anomalous degassing areas. These were related to the internal hydraulic architecture of the faults and allowed favourable structural settings for fluid flow, such as fault intersections, to be assessed. Finally, areas of previously unknown structurally controlled permeability with a connection to the superhot geothermal reservoir were determined, which represent promising targets for future geothermal exploration and development.
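For orientation, accumulation-chamber CO2 flux measurements of the kind used in such surveys typically convert the initial rate of concentration rise in a closed chamber into a molar flux via the ideal gas law. The formula is the standard closed-chamber relation, but the chamber dimensions and rates below are invented for illustration:

```python
# Closed-chamber flux: F = (dC/dt) * P * V / (R * T * A), in mol m^-2 s^-1,
# where dC/dt is the rate of change of the CO2 mole fraction. Values are
# illustrative, not from the Los Humeros surveys.
R_GAS = 8.314  # J mol^-1 K^-1

def chamber_flux(dcdt_ppm_s, volume_m3, area_m2,
                 pressure_pa=101325.0, temp_k=298.15):
    dcdt = dcdt_ppm_s * 1e-6  # ppm/s -> mole fraction per second
    return dcdt * pressure_pa * volume_m3 / (R_GAS * temp_k * area_m2)

# A rise of 1 ppm/s in a 5 L chamber covering 0.03 m^2 of soil:
flux = chamber_flux(1.0, 5e-3, 0.03)
print(round(flux * 86400, 2))  # mol m^-2 day^-1 -> 0.59
```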
In the second study, I introduce a novel monitoring approach that examines the variation of CO2 flux to monitor changes in the reservoir induced by fluid reinjection. For that purpose, an automated, multi-chamber CO2 flux system was deployed across the damage zone of a major normal fault crossing the Los Humeros geothermal field. Based on the results of the CO2 flux scouting survey, a suitable site with a connection to the geothermal reservoir was selected, as identified by hydrothermal CO2 degassing and hot ground temperatures (> 50 °C). The results revealed a response of the gas emissions to changes in reinjection rates within 24 h, proving active hydraulic communication between the geothermal reservoir and the Earth's surface. This is a promising monitoring strategy that provides near real-time, in-situ data about changes in the reservoir and allows a timely reaction to unwanted changes (e.g., pressure decline, seismicity).
The third study presents results from the Aluto geothermal field in Ethiopia, where an area-wide, multi-parameter analysis, consisting of measurements of CO2 flux, 222Rn and 220Rn activity concentrations, and ground temperatures, was conducted to detect hidden permeable structures. The 222Rn and 220Rn activity concentrations are evaluated as soil gas parameters complementary to the CO2 flux, to investigate their potential for understanding tectono-volcanic degassing. The combined measurement of all parameters enabled the development of soil gas fingerprints, a novel visualization approach. Depending on the magnitude of gas emissions and their migration velocities, the study area was divided into volcanic (heat), tectonic (structures), and volcano-tectonic dominated areas. Based on these concepts, volcano-tectonic dominated areas, where hot hydrothermal fluids migrate along permeable faults, present the most promising targets for future geothermal exploration and development in this field. Two of these areas were identified in the south and south-east, which have not yet been targeted for geothermal exploitation. Furthermore, two previously unknown areas of structurally related permeability could be identified from the 222Rn and 220Rn activity concentrations.
Eventually, the fourth study presents a novel measurement approach to detect structurally controlled CO2 degassing in the Ngapouri geothermal area, New Zealand. For the first time, the tunable diode laser (TDL) method was applied in a low-degassing geothermal area to evaluate its potential as a geothermal exploration method. Although the sampling approach is based on profile measurements, which leads to low spatial resolution, the results showed a link between known or inferred faults and increased CO2 concentrations. Thus, the TDL method proved successful in determining structurally related permeability, even in areas with no obvious geothermal activity. Once an area of anomalous CO2 concentrations has been identified, it can easily be complemented by CO2 flux grid measurements to determine the extent and orientation of the degassing segment.
With the results of this work, I was able to demonstrate the applicability of systematic and area-wide soil gas measurements for geothermal exploration and monitoring purposes. In particular, the combination of different soil gas parameters measured with different survey designs enables the identification and characterization of fluid-bearing structures, yet has not so far been used or tested as standard practice. The different studies present efficient and cost-effective workflows and demonstrate a hands-on approach to successful and sustainable exploration and monitoring of geothermal resources, which minimizes the resource risk during geothermal project development. Finally, to advance the understanding of the complex structure and dynamics of geothermal systems, a combination of comprehensive and cutting-edge geological, geochemical, and geophysical exploration methods is essential.