Doctoral Theses (2022)
While estimated numbers of past and future climate migrants are alarming, the growing empirical evidence suggests that the association between adverse climate-related events and migration is not universally positive. This dissertation seeks to advance our understanding of when and how climate migration emerges by analyzing heterogeneous climatic influences on migration in low- and middle-income countries. To this end, it draws on established economic theories of migration, datasets from the physical and social sciences, causal inference techniques, and systematic literature review approaches. In three of its five chapters, I estimate the causal effects of climate change processes on inequality and migration in India and Sub-Saharan Africa. By employing interaction terms and by analyzing sub-samples of the data, I explore how these relationships differ for various segments of the population. In the remaining two chapters, I present two systematic literature reviews. First, I undertake a comprehensive meta-regression analysis of the econometric climate migration literature to summarize general climate migration patterns and explain the conflicting findings. Second, motivated by the broad range of approaches in the field, I examine the literature from a methodological perspective to provide best-practice guidelines for studying climate migration empirically. Overall, the evidence from this dissertation shows that climatic influences on human migration are highly heterogeneous. Whether adverse climate-related impacts materialize in migration depends on the socio-economic characteristics of individual households, such as wealth, level of education, agricultural dependence, or access to adaptation technologies and insurance.
For instance, I show that while adverse climatic shocks are generally associated with an increase in migration in rural India, they reduce migration in the agricultural context of Sub-Saharan Africa, where average wealth levels are much lower, so that households largely cannot afford the upfront costs of moving. I find that, unlike local climatic shocks, which primarily enhance internal migration to cities and hence accelerate urbanization, shocks transmitted via agricultural producer prices increase migration to neighboring countries, likely due to the simultaneous decrease in real income in nearby urban areas. These findings advance our current understanding by showing when and how economic agents respond to climatic events, thus providing explicit contexts and mechanisms for the effects of climate change on future migration. The resulting collection of findings can guide policy interventions to avoid or mitigate present and future welfare losses from climate change-related migration choices.
Distances affect economic decision-making in numerous situations. The time at which we make a decision about future consumption has an impact on our consumption behavior. The spatial distance to an employer, school, or university affects where we live, and vice versa. The emotional closeness to other individuals influences our willingness to give money to them. This cumulative thesis aims to enrich the literature on the role of distance in economic decision-making. Each of my research projects sheds light on the impact of one kind of distance on efficient decision-making.
Biofilms are complex living materials that form as bacteria become embedded in a matrix of self-produced protein and polysaccharide fibres. The formation of a network of extracellular biopolymer fibres contributes to the cohesion of the biofilm by promoting cell-cell attachment and by mediating biofilm-substrate interactions. This sessile mode of bacterial growth has been well studied by microbiologists seeking to prevent the detrimental effects of biofilms in medical and industrial settings. Indeed, biofilms are associated with increased antibiotic resistance in bacterial infections, and they can also clog pipelines or promote bio-corrosion. However, biofilms have also attracted interest from biophysicists due to their ability to form complex morphological patterns during growth. Recently, the emerging field of engineered living materials has begun to investigate biofilm mechanical properties at multiple length scales and to leverage the tools of synthetic biology to tune the functions of their constitutive biopolymers.
This doctoral thesis aims to clarify how the morphogenesis of Escherichia coli (E. coli) biofilms is influenced by their growth dynamics and mechanical properties. To address this question, I used methods from cell mechanics and materials science. I first studied how biological activity in biofilms gives rise to non-uniform growth patterns. In a second study, I investigated how E. coli biofilm morphogenesis and mechanical properties adapt to an environmental stimulus, namely the water content of the substrate. Finally, I estimated how the mechanical properties of E. coli biofilms are altered when the bacteria express different extracellular biopolymers.
On nutritive hydrogels, micron-sized E. coli cells can build centimetre-large biofilms. During this process, bacterial proliferation and matrix production introduce mechanical stresses in the biofilm, which are released through the formation of macroscopic wrinkles and delaminated buckles. To relate these biological and mechanical phenomena, I used time-lapse fluorescence imaging to track cell and matrix surface densities through the early and late stages of E. coli biofilm growth. The colocalization of high cell and matrix densities at the periphery precedes the onset of mechanical instabilities in this annular region. Early growth, analysed by adding fluorescent microspheres to the bacterial inoculum, is detected at this outer annulus. Only when high rates of matrix production are present in the biofilm centre, however, does overall biofilm spreading initiate along the solid-air interface. By tracking larger fluorescent particles over long times, I could distinguish several kinematic stages of E. coli biofilm expansion and observed a transition from non-linear to linear velocity profiles, which precedes the emergence of wrinkles at the biofilm periphery. Decomposing particle velocities into their radial and circumferential components revealed a last kinematic stage, in which biofilm movement is mostly directed towards the radial delaminated buckles, which verticalize. The compressive strains computed in these regions were observed to substantially deform the underlying agar substrates. The colocalization of higher cell and matrix densities in an annular region and the succession of several kinematic stages are thus expected to promote the emergence of mechanical instabilities at the biofilm periphery. These experimental findings are expected to inform future modelling approaches to biofilm morphogenesis.
E. coli biofilm morphogenesis is further anticipated to depend on external stimuli from the environment. To clarify how water content could be used to tune biofilm material properties, we quantified E. coli biofilm growth, wrinkling dynamics, and rigidity as a function of the water content of the nutritive substrates. Time-lapse microscopy and computational image analysis revealed that substrates with high water content promote biofilm spreading, while substrates with low water content promote biofilm wrinkling. The wrinkles observed on biofilm cross-sections appeared more bent on substrates with high water content, while they tended to be more vertical on substrates with low water content. Both wet and dry biomass, accumulated over 4 days of culture, were larger in biofilms cultured on substrates with high water content, despite extra porosity within the matrix layer. Finally, micro-indentation analysis revealed that substrates with low water content supported the formation of stiffer biofilms. This study shows that E. coli biofilms respond to the water content of their substrate, which might be used to tune their material properties for further applications.
Biofilm material properties further depend on the composition and structure of the matrix of extracellular proteins and polysaccharides. In particular, E. coli biofilms have been suggested to present tissue-like elasticity due to a dense fibre network consisting of amyloid curli and phosphoethanolamine-modified cellulose. To understand the contribution of these components to the emergent mechanical properties of E. coli biofilms, we performed micro-indentation on biofilms grown from several bacterial strains. Besides showing higher dry masses, larger spreading diameters, and slightly reduced water contents, biofilms expressing both main matrix components also presented high rigidities in the range of several hundred kPa, similar to biofilms containing only curli fibres. In contrast, biofilms lacking amyloid curli fibres showed much higher adhesive energies and a more viscoelastic, fluid-like material behaviour. The combination of amyloid curli and phosphoethanolamine-modified cellulose fibres therefore results in a composite material in which the amyloid curli fibres provide rigidity to E. coli biofilms, whereas the phosphoethanolamine-modified cellulose rather acts as a glue. These findings motivate further studies involving purified versions of these protein and polysaccharide components to better understand how their interactions benefit biofilm functions.
All three studies depict different, interrelated aspects of biofilm morphogenesis. The first work reveals the correlation between non-uniform biological activity and the emergence of mechanical instabilities in the biofilm. The second work demonstrates that E. coli biofilm morphogenesis and mechanical properties adapt to an environmental stimulus, namely water. Finally, the last study reveals the complementary roles of the individual matrix components in the formation of a stable biofilm material, which not only forms complex morphologies but also functions as a protective shield for the bacteria it contains. Our experimental findings on E. coli biofilm morphogenesis and mechanics can have further implications for fundamental and applied biofilm research.
In the present thesis, I investigate the lattice dynamics of thin-film heterostructures of magnetically ordered materials upon femtosecond laser excitation as a probing and manipulation scheme for the spin system. The quantitative assessment of laser-induced thermal dynamics, as well as of generated picosecond acoustic pulses and their respective impact on the magnetization dynamics of thin films, is a challenging endeavor. Therefore, the development and implementation of effective experimental tools and comprehensive models are paramount to propelling future academic and technological progress.
In all experiments within the scope of this cumulative dissertation, I examine the crystal lattice of nanoscale thin films upon excitation with femtosecond laser pulses. The relative change of the lattice constant due to thermal expansion or picosecond strain pulses is directly monitored by an ultrafast X-ray diffraction (UXRD) setup with a femtosecond laser-driven plasma X-ray source (PXS). Phonons and spins alike exert stress on the lattice, which responds according to the elastic properties of the material, rendering the lattice a versatile sensor for all sorts of ultrafast interactions. On the one hand, I investigate materials with strong magneto-elastic properties: the highly magnetostrictive rare-earth compound TbFe2, elemental dysprosium, and the technologically relevant Invar material FePt. On the other hand, I conduct a comprehensive study of the lattice dynamics of Bi1Y2Fe5O12 (Bi:YIG), which, according to the literature, exhibits high-frequency coherent spin dynamics upon femtosecond laser excitation. Higher-order standing spin waves (SSWs) are triggered by the coherent and incoherent motion of atoms (in other words, phonons), which I quantified with UXRD. We are able to unite the experimental observations of the lattice and magnetization dynamics qualitatively and quantitatively, using a combination of multi-temperature, elastic, magneto-elastic, anisotropy, and micro-magnetic modeling.
The collective data from UXRD, to probe the lattice, and time-resolved magneto-optical Kerr effect (tr-MOKE) measurements, to monitor the magnetization, were previously collected at different experimental setups. To improve the precision of the quantitative assessment of lattice and magnetization dynamics alike, our group implemented a combination of UXRD and tr-MOKE in a single experimental setup, which is, to my knowledge, the first of its kind. I helped with the conception and commissioning of this novel experimental station, which allows the simultaneous observation of lattice and magnetization dynamics on an ultrafast timescale under identical excitation conditions. Furthermore, I developed a new X-ray diffraction measurement routine, called reciprocal space slicing (RSS), which reduces the measurement time of UXRD experiments by up to an order of magnitude. It utilizes an area detector to monitor the angular motion of X-ray diffraction peaks, which is associated with lattice constant changes, without a time-consuming scan of the diffraction angles with the goniometer. RSS is particularly useful for ultrafast diffraction experiments, since measurement time at large-scale facilities like synchrotrons and free-electron lasers is a scarce and expensive resource. However, RSS is not limited to ultrafast experiments and can even be extended to other diffraction techniques with neutrons or electrons.
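The link between a Bragg-peak angular shift and a lattice-constant change, which RSS reads off an area detector, follows from differentiating Bragg's law. A minimal sketch of that conversion (all numerical values are illustrative, not measurements from the thesis):

```python
import math

# Bragg's law: n * lambda = 2 * d * sin(theta). Differentiating at fixed
# wavelength gives delta_d / d = -cot(theta) * delta_theta, i.e. a peak
# shifting to smaller angles signals an expanding lattice. This is the
# relation that lets an angular peak position on an area detector be
# translated into strain without a goniometer scan.

def relative_lattice_change(theta_deg, delta_theta_deg):
    """First-order relative lattice-constant change from a peak shift."""
    theta = math.radians(theta_deg)
    delta_theta = math.radians(delta_theta_deg)
    return -delta_theta / math.tan(theta)

# Illustrative example: a Bragg angle of 22 degrees and a peak shift of
# -0.01 degrees correspond to a tensile strain of roughly 4.3e-4.
strain = relative_lattice_change(22.0, -0.01)
```

The negative sign encodes the usual thermal-expansion signature: heating expands the lattice and moves diffraction peaks to smaller angles.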
The Greenland Ice Sheet is the second-largest mass of ice on Earth. Almost 2000 km long, more than 700 km wide, and more than 3 km thick at the summit, it holds enough ice to raise global sea levels by 7 m if melted completely. Despite its massive size, it is particularly vulnerable to anthropogenic climate change: temperatures over the Greenland Ice Sheet have increased by more than 2.7 °C in the past 30 years, twice as much as the global mean temperature. Consequently, the ice sheet has been losing mass significantly since the 1980s, and the rate of loss has increased sixfold since then. Moreover, it is one of the potential tipping elements of the Earth system, which might undergo irreversible change once a warming threshold is exceeded. This thesis aims to extend the understanding of the resilience of the Greenland Ice Sheet against global warming by analyzing processes and feedbacks relevant to its centennial to multi-millennial stability using ice sheet modeling.
One of these feedbacks, the melt-elevation feedback, is driven by the rise of air temperature with decreasing altitude: as the ice sheet melts, its thickness and surface elevation decrease, exposing the ice surface to warmer air and thus increasing melt rates even further. Glacial isostatic adjustment (GIA) can partly mitigate this melt-elevation feedback, as the bedrock lifts in response to a decrease in ice load, forming the negative GIA feedback. In my thesis, I show that the interaction between these two competing feedbacks can lead to qualitatively different dynamical responses of the Greenland Ice Sheet to warming – from permanent loss to incomplete recovery, depending on the feedback parameters. My research shows that the interaction of these feedbacks can initiate self-sustained oscillations of the ice volume while the climate forcing remains constant.
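The runaway character of the melt-elevation feedback can be illustrated with a deliberately simple toy model, assuming a typical atmospheric lapse rate and melt proportional to temperature above melting. This is a sketch for intuition only; the parameters are illustrative and not taken from the thesis or from PISM:

```python
# Toy melt-elevation feedback: as the surface lowers, the air at the
# surface warms with the lapse rate, which raises melt further.
# All parameter values are illustrative assumptions.

LAPSE_RATE = 6.5e-3   # K per m of elevation drop (typical atmospheric value)
MELT_PER_K = 0.5      # m ice per year per K above melting (illustrative)
ACCUMULATION = 0.3    # m ice per year of snowfall (illustrative)

def evolve(h0, t0, warming, years=5000):
    """Surface elevation after `years` under a fixed warming offset.

    The surface is capped at its initial height h0 to keep the toy bounded.
    """
    h = h0
    for _ in range(years):
        # Surface air temperature rises as the surface drops below h0.
        t = t0 + warming + LAPSE_RATE * (h0 - h)
        melt = max(0.0, MELT_PER_K * t)
        h = min(h0, max(0.0, h + ACCUMULATION - melt))
    return h

# Modest warming: the surface stays below melting and the ice holds.
stable = evolve(h0=3000.0, t0=-5.0, warming=1.0)
# Stronger warming: melt exceeds accumulation, lowering the surface,
# which warms it further -- the feedback runs away until the ice is gone.
collapsed = evolve(h0=3000.0, t0=-5.0, warming=6.0)
```

The qualitative point is the threshold behavior: below a critical warming the elevation is self-stabilizing, above it the same feedback amplifies every increment of loss, which is why the GIA-driven bedrock uplift described above matters as a counteracting term.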
Furthermore, increased surface melt changes the optical properties of the snow or ice surface, e.g. by lowering its albedo, which in turn enhances melt rates – a process known as the melt-albedo feedback. Process-based ice sheet models often neglect this feedback. To close this gap, I implemented a simplified version of the diurnal Energy Balance Model, a computationally efficient approach that can capture the first-order effects of the melt-albedo feedback, into the Parallel Ice Sheet Model (PISM). Using the coupled model, I show in warming experiments that the melt-albedo feedback almost doubles the ice loss until the year 2300 under the low greenhouse gas emission scenario RCP2.6, compared to simulations in which the melt-albedo feedback is neglected, and adds up to 58% additional ice loss under the high-emission scenario RCP8.5. Moreover, I find that the melt-albedo feedback dominates the ice loss until 2300 when compared to the melt-elevation feedback.
Another process that could influence the resilience of the Greenland Ice Sheet is the warming-induced softening of the ice and the resulting increase in flow. In my thesis, I show with PISM how the uncertainty in Glen's flow law impacts the simulated response to warming. In a flow-line setup at fixed climatic mass balance, the uncertainty in the flow parameters leads to a range of ice loss comparable to the range caused by different warming levels.
While I focus on fundamental processes, feedbacks, and their interactions in the first three projects of my thesis, I also explore the impact of specific climate scenarios on the sea level rise contribution of the Greenland Ice Sheet. To increase the carbon budget flexibility, some warming scenarios – while still staying within the limits of the Paris Agreement – include a temporary overshoot of global warming. I show that an overshoot of 0.4 °C increases the short-term and long-term ice loss from Greenland by several centimeters. The long-term increase is driven by the warming at high latitudes, which persists even when global warming is reversed. This leads to a substantial long-term commitment of the sea level rise contribution from the Greenland Ice Sheet.
Overall, in my thesis I show that the melt-albedo feedback is most relevant for the ice loss of the Greenland Ice Sheet on centennial timescales. In contrast, the melt-elevation feedback and its interplay with the GIA feedback become increasingly relevant on millennial timescales. All of these feedbacks influence the resilience of the Greenland Ice Sheet against global warming, both in the near future and in the long term.
World energy consumption has increased every year, driven by economic development and population growth. This has inevitably caused vast amounts of CO2 emissions, and the CO2 concentration in the atmosphere keeps increasing with economic growth. Various methods have been developed to reduce CO2 emissions, but many bottlenecks remain. Solvents that readily absorb CO2, such as monoethanolamine (MEA) and diethanolamine, for example, suffer from solvent loss, amine degradation, vulnerability to heat, toxicity, and the high cost of regeneration, which arises especially from the chemisorption process. Although some of these drawbacks can be compensated through physisorption with zeolites and metal-organic frameworks (MOFs), which display significant adsorption selectivity and capacity even at ambient conditions, these materials have limitations of their own. Zeolites demand relatively high regeneration energy and have limited adsorption kinetics due to their exceptionally narrow pore structure. MOFs have low stability against heat and moisture and high manufacturing costs.
Nanoporous carbons have recently received attention as attractive functional porous materials due to their unique properties. These materials are crucial in many applications of modern science and industry, such as water and air purification, catalysis, gas separation, and energy storage/conversion, owing to their high chemical and thermal stability and, in particular, their electronic conductivity in combination with high specific surface areas. Nanoporous carbons can be used to adsorb environmental pollutants or small gas molecules such as CO2, and to power electrochemical energy storage devices such as batteries and fuel cells. In all of these fields, their pore structure or electrical properties can be modified to suit the purpose.
This thesis provides an in-depth look at novel nanoporous carbons from both the synthetic and the application point of view. The interplay between the pore structure, atomic constitution, and adsorption properties of nanoporous carbon materials is investigated. Novel nanoporous carbon materials are synthesized from simple, heteroatom-containing precursor molecules through a facile templating method. The affinity, and in turn the adsorption capacity, of the carbon materials toward polar gas molecules (CO2 and H2O) is enhanced by modifying their chemical constitution. These properties are also shown to be important in electrochemical energy storage, here especially for supercapacitors with aqueous electrolytes, which are essentially based on the physisorption of ions on carbon surfaces. This shows that nanoporous carbons can be a “functional” material with specific physical or chemical interactions with guest species, just like zeolites and MOFs.
The synthesis of sp2-conjugated materials with high heteroatom content from a mixture of citrazinic acid and melamine, in which heteroatoms are already bonded in specific motifs, is illustrated. By controlling the removal procedure of the salt template and the condensation temperature, the role of the salts in forming porosity and in providing coordination sites for the stabilization of heteroatoms is demonstrated. Nitrogen contents of up to 20 wt.%, oxygen contents of up to 19 wt.%, and a high CO2/N2 selectivity, with a maximum CO2 uptake of 5.31 mmol g−1 at 273 K, are achieved. In addition, the further controlled thermal condensation of the precursor molecules and the advanced functional properties of the synthesized porous carbons in applications are described. These materials combine different porosities and atomic constitutions, exhibiting a high nitrogen content of up to 25 wt.%, a high porosity with a specific surface area of more than 1800 m2 g−1, and a high CO2/N2 adsorption selectivity of 62.7. The pore structure and surface properties also affect water adsorption, with a remarkably high Qst of over 100 kJ mol−1, even higher than that of well-known adsorbents such as zeolites or CaCl2. Furthermore, the evolution of the pore structure of HAT-CN-derived carbon materials during condensation in vacuum is elucidated, which is essential for maximizing the utilization of the porous system: the materials show a significant difference in pore volume, 0.5 cm3 g−1 without and 0.25 cm3 g−1 with vacuum.
The presented thesis introduces molecular designs for heteroatom-containing porous carbons derived from abundant and simple molecules. Abundant precursors that already contain high amounts of nitrogen or oxygen are beneficial for achieving enhanced interactions with adsorptives. The physical and chemical properties of these heteroatom-doped porous carbons are mainly governed by two parameters: the porosity arising from the pore structure and the polarity arising from the atomic composition of the surface. In other words, controlling both the porosity and the polarity of the carbon materials is studied to understand their interactions with different guest species, providing fundamental knowledge for their utilization in various applications.
On January 1, 2015, Germany introduced a general statutory minimum wage of €8.50 gross per hour. This thesis analyses the effects of the minimum wage introduction in Germany as well as wage floors in the European context, contributing to national and international research.
The second chapter of this dissertation summarizes the short-run effects of the minimum wage reform found in previous studies.
We show that the introduction of the minimum wage had a positive effect on wages at the bottom of the distribution. Yet, there was still a significant amount of non-compliance shortly after the reform. Additionally, previous evidence points to small negative employment effects, mainly driven by a reduction in mini-jobs. Contrary to expectations, though, no short-run effects on poverty and general inequality were found. This is mostly because working hours were reduced, so that the increase in hourly wages was not reflected in monthly earnings.
The third chapter identifies whether the job losses predicted in ex-ante studies materialized in the short run and, if so, which type of employment was affected the most. To identify the effects, this chapter (as well as chapter four) uses a regional difference-in-differences approach to estimate the effects on regular employment (part- and full-time) and mini-jobs.
Our results suggest that the minimum wage has slightly reduced overall employment, mainly due to a decline in mini-jobs.
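The regional difference-in-differences logic described above can be reduced to its canonical 2x2 form: the treatment effect is the change in high-bite regions minus the change in low-bite regions. A minimal sketch with purely illustrative numbers (not estimates from the thesis):

```python
# Canonical 2x2 difference-in-differences estimator: subtracting the
# control-region trend removes common shocks, leaving the effect
# attributable to the treatment (here: minimum-wage exposure).

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Change in treated group minus change in control group."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Illustrative employment indices before/after the reform:
effect = did_estimate(treated_pre=100.0, treated_post=98.0,
                      control_pre=100.0, control_post=99.5)
# A negative value means employment fell by more in high-bite regions
# than the common trend would predict.
```

The identifying assumption, as in any difference-in-differences design, is that high-bite and low-bite regions would have followed parallel trends absent the reform.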
The fourth chapter follows the same methodological approach as the previous one. It is motivated by the fact that women are often overrepresented among low-wage employees. Thus, the primary research question in this chapter is whether the minimum wage has led to a narrowing of the gender wage gap. To answer it, we identify the effects on the wage gap at the 10th and 25th percentiles and at the mean of the underlying gender-specific wage distributions. Our results imply that, for eligible employees, the gender wage gap at the 10th percentile decreased by 4.6 percentage points between 2014 and 2018 in high-bite regions compared to low-bite regions. We estimate this to be a reduction of 32% relative to 2014. Higher up the distribution – i.e. at the 25th percentile and the mean – the effects are smaller and less robust.
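The two figures quoted above can be cross-checked with simple arithmetic, assuming the 32% is measured relative to the 2014 gap: a drop of 4.6 percentage points that equals 32% of the initial gap implies a 2014 gender wage gap at the 10th percentile of roughly 14.4 percentage points.

```python
# Back-of-the-envelope consistency check of the quoted estimates.
# The interpretation of 32% as a share of the 2014 gap is an assumption
# made here for illustration.

drop_pp = 4.6          # estimated reduction, in percentage points
relative_drop = 0.32   # stated reduction as a share of the 2014 gap

implied_2014_gap_pp = drop_pp / relative_drop      # about 14.4 pp
remaining_gap_pp = implied_2014_gap_pp - drop_pp   # about 9.8 pp by 2018
```

So even under this estimate, a sizeable gap at the bottom of the distribution remains after the reform.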
The fifth chapter keeps the gender-specific emphasis on minimum wage effects. However, in contrast to the rest of the dissertation, it widens the scope to other European Union countries. Following the rationale of the previous chapter, women could potentially benefit particularly from a minimum wage. However, they could also be more prone to suffer from possibly induced job losses or reductions in working hours. This chapter therefore summarizes existing evidence from EU member states on the relationship between wage floors and the gender wage gap. In addition, it provides a systematic summary of studies that examine the impact of minimum wages on employment losses or changes in working hours that particularly affect women. The evidence shows that higher wage floors are often associated with smaller gender wage gaps. With respect to employment, women do not appear to experience greater employment losses than men per se. However, studies show that the minimum wage has a particular impact on part-time workers. It therefore cannot be ruled out that the negative correlation between the minimum wage and the gender wage gap is related to the job losses of these lower-paid, often female, part-time workers. Particular attention should therefore be paid to this working arrangement in the context of minimum wages.
Plant metabolism is the main process converting assimilated carbon into the various compounds crucial for plant growth and therefore crop yield, which makes it an important research topic. Although major advances have been made in understanding the genetic principles contributing to metabolism and yield, little is known about the genetics responsible for trait variation or canalization, although these concepts have been known for a long time. In light of a growing global population and progressing climate change, understanding the canalization of metabolism and yield seems ever more important to ensure food security. Our group has recently found canalization metabolite quantitative trait loci (cmQTL) for tomato fruit metabolism, showing that the concept of canalization applies to metabolism. In this work, two approaches to investigate plant metabolic canalization and one approach to investigate yield canalization are presented.
In the first project, primary and secondary metabolic data from Arabidopsis thaliana and Phaseolus vulgaris leaf material, obtained from plants grown under different conditions, were used to calculate cross-environment coefficients of variation (CV) or fold changes of metabolite levels per genotype, which served as input for genome-wide association studies. While primary metabolites have a lower CV across conditions and show few, mostly weak associations to genomic regions, secondary metabolites have a higher CV and show more, and stronger, metabolite-to-genome associations. Both potential regulatory genes and metabolic genes are found as candidates, although the metabolic genes are rarely directly related to the target metabolites, suggesting a role for both regulatory mechanisms and the structure of the metabolic network in the canalization of metabolism.
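The cross-environment canalization measure used above is a simple statistic: the coefficient of variation of a metabolite's level for one genotype across growth conditions. A minimal sketch with illustrative values (not data from the project):

```python
import statistics

# Coefficient of variation across environments: standard deviation divided
# by the mean of a metabolite's levels for one genotype. A low CV marks a
# canalized (environmentally stable) trait; a high CV marks a plastic one.

def cross_environment_cv(levels):
    """CV of metabolite levels measured under different conditions."""
    return statistics.stdev(levels) / statistics.mean(levels)

# Illustrative genotypes: a canalized metabolite vs. a variable one.
stable_cv = cross_environment_cv([10.0, 10.5, 9.8, 10.2])
variable_cv = cross_environment_cv([10.0, 4.0, 16.0, 7.0])
```

Computed per genotype, such CV values (or fold changes) can then serve directly as the phenotype in a genome-wide association scan, as described above.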
In the second project, candidate genes from the Solanum lycopersicum cmQTL mapping are selected, and CRISPR/Cas9-mediated gene-edited tomato lines are created to validate the genes' roles in the canalization of metabolism. The obtained mutants either showed strongly aberrant developmental phenotypes or appeared wild type-like. One phenotypically inconspicuous mutant of a pantothenate kinase, selected as a candidate for malic acid canalization, shows a significant increase in CV across different watering conditions. Another such mutant, of a protein putatively involved in amino acid transport and selected as a candidate for phenylalanine canalization, shows a similar tendency towards increased CV, without statistical significance. This potential role of two genes involved in metabolism supports the hypothesis that the structure of metabolism is relevant for its own stability.
In the third project, a mutant of a putative disulfide isomerase, important for thylakoid biogenesis, is characterized by a multi-omics approach. The mutant had been characterized previously in a yield stability screening and showed a variegated leaf phenotype, ranging from green leaves with wild-type chlorophyll levels, through differently patterned variegated leaves, to completely white leaves almost entirely devoid of photosynthetic pigments. White mutant leaves show wild-type transcript levels of photosystem assembly factors, with the exception of ELIP and DEG orthologs, indicating stagnation at an etioplast-to-chloroplast transition state. Green mutant leaves show an upregulation of these assembly factors, possibly overcompensating for the partially defective disulfide isomerase, which seems sufficient for proper chloroplast development, as confirmed by a wild type-like proteome. Likely as a result of this phenotype, a general stress response, a shift to a sink-like tissue, and abnormal thylakoid membranes strongly alter the metabolic profile of white mutant leaves. As the severity and pattern of variegation vary from plant to plant and may be affected by external factors, the resulting yield instability may stem from a decanalized ability to fully exploit the whole leaf surface area for photosynthetic activity.
The NAC transcription factor (TF) JUNGBRUNNEN1 (JUB1) is an important negative regulator of plant senescence, as well as of gibberellic acid (GA) and brassinosteroid (BR) biosynthesis, in Arabidopsis thaliana. Overexpression of JUB1 promotes longevity and enhances tolerance to drought and other abiotic stresses. A similar role of JUB1 has been observed in other plant species, including tomato and banana. Our data show that JUB1 overexpressors (JUB1-OXs) accumulate higher levels of proline than WT plants under control conditions, during the onset of drought stress, and thereafter. We found that overexpression of JUB1 induces key proline biosynthesis genes and suppresses key proline degradation genes. Furthermore, bZIP63, a transcription factor involved in proline metabolism, was identified as a novel downstream target of JUB1 by yeast one-hybrid (Y1H) analysis and chromatin immunoprecipitation (ChIP). However, direct binding of JUB1 to bZIP63 could not be confirmed by electrophoretic mobility shift assay (EMSA). Our data indicate that JUB1-OX plants exhibit reduced stomatal conductance under control conditions. However, selective overexpression of JUB1 in guard cells did not improve drought stress tolerance in Arabidopsis. Moreover, the drought-tolerant phenotype of JUB1 overexpressors does not solely depend on the transcriptional control of the DREB2A gene. Thus, our data suggest that JUB1 confers tolerance to drought stress by regulating multiple components. To date, none of the previous studies on JUB1's regulatory network has focused on identifying protein-protein interactions. We therefore performed a yeast two-hybrid (Y2H) screen, which identified several protein interactors of JUB1, two of which are the calcium-binding proteins CaM1 and CaM4. Both proteins interact with JUB1 in the nucleus of Arabidopsis protoplasts. Moreover, JUB1 is co-expressed with CaM1 and CaM4 under the same conditions.
Since CaM1.1 and CaM4.1 encode proteins with identical amino acid sequences, all further experiments were performed with constructs involving the CaM4 coding sequence. Our data show that JUB1 harbors multiple CaM-binding sites, which are localized in both the N-terminal and C-terminal regions of the protein. One of the CaM-binding sites, localized in the DNA-binding domain of JUB1, was identified as a functional CaM-binding site, since its mutation strongly reduced the binding of CaM4 to JUB1. Furthermore, JUB1 transactivates expression of the stress-related gene DREB2A in mesophyll cells; this effect is significantly reduced when the calcium-binding protein CaM4 is expressed as well. Overexpression of both genes in Arabidopsis results in early senescence, evident from lower chlorophyll content and enhanced expression of senescence-associated genes (SAGs) compared with single JUB1 overexpressors. Our data also show that JUB1 and CaM4 proteins interact in senescent leaves, which have increased Ca2+ levels compared to young leaves. Collectively, our data indicate that JUB1 activity towards its downstream targets is fine-tuned by calcium-binding proteins during leaf senescence.
A task-based parallel elliptic solver for numerical relativity with discontinuous Galerkin methods
(2022)
Elliptic partial differential equations are ubiquitous in physics. In numerical relativity---the study of computational solutions to the Einstein field equations of general relativity---elliptic equations govern the initial data that seed every simulation of merging black holes and neutron stars. In the quest to produce detailed numerical simulations of these most cataclysmic astrophysical events in our Universe, numerical relativists resort to the vast computing power offered by current and future supercomputers. To leverage these computational resources, numerical codes for the time evolution of general-relativistic initial value problems are being developed with a renewed focus on parallelization and computational efficiency. Their capability to solve elliptic problems for accurate initial data must keep pace with the increasing detail of the simulations, but elliptic problems are traditionally hard to parallelize effectively.
In this thesis, I develop new numerical methods to solve elliptic partial differential equations on computing clusters, with a focus on initial data for orbiting black holes and neutron stars. I develop a discontinuous Galerkin scheme for a wide range of elliptic equations, and a stack of task-based parallel algorithms for their iterative solution. The resulting multigrid-Schwarz preconditioned Newton-Krylov elliptic solver proves capable of parallelizing over 200 million degrees of freedom to at least a few thousand cores, and already solves initial data for a black hole binary about ten times faster than the numerical relativity code SpEC. I also demonstrate the applicability of the new elliptic solver across physical disciplines, simulating the thermal noise in thin mirror coatings of interferometric gravitational-wave detectors to unprecedented accuracy. The elliptic solver is implemented in the new open-source SpECTRE numerical relativity code, and set up to support simulations of astrophysical scenarios for the emerging era of gravitational-wave and multimessenger astronomy.
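The Newton-Krylov structure at the heart of such elliptic solvers can be illustrated in a few lines: an outer Newton iteration linearizes the nonlinear elliptic problem, and an inner Krylov method (here LGMRES, via SciPy) solves each linearized system matrix-free. The following toy sketch solves a small nonlinear Poisson problem on a uniform grid; it is only an illustration of the iteration scheme, not the thesis's equations, discontinuous Galerkin discretization, or the SpECTRE implementation:

```python
import numpy as np
from scipy.optimize import newton_krylov

# Small 2-D model problem on the unit square, uniform finite-difference grid.
n = 40
h = 1.0 / (n + 1)

def residual(u):
    """Discrete residual of the nonlinear Poisson problem
    -laplace(u) + u**2 = 1 with zero Dirichlet boundary values."""
    u_pad = np.pad(u, 1)  # pads with zeros: the Dirichlet boundary condition
    lap = (u_pad[2:, 1:-1] + u_pad[:-2, 1:-1] +
           u_pad[1:-1, 2:] + u_pad[1:-1, :-2] - 4.0 * u) / h**2
    return -lap + u**2 - 1.0

# Outer Newton iteration with an inner Krylov (LGMRES) linear solve per step.
u0 = np.zeros((n, n))
sol = newton_krylov(residual, u0, method='lgmres')
```

In production codes the inner Krylov solve is where preconditioning (here: multigrid-Schwarz) enters; SciPy's `newton_krylov` accepts an `inner_M` preconditioner argument for exactly that purpose.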
Heimat
(2022)
This research proposes a transareal study of the autofictional series of the Austrian writer Thomas Bernhard and the Colombian Fernando Vallejo, two authors whose work is characterized by harsh criticism of their countries of origin, their Heimaten, but also by a complex rootedness. The interpretive analyses show that in Die Autobiographie and El río del tiempo, Heimat is presented as a construct that encompasses not only joyful elements but also negative, dissolutive and destructive ones, whereby both authors distance themselves from a traditional conception of Heimat as a necessarily harmonious territory to which the subject feels positively bound. Instead, Heimat is conceived as a dissimilar whole to which the subject necessarily relates in an ambivalent and problematic way. In both authors, literary narration is configured as an act in which this ambivalence is not merely represented but in which, above all, the forms of hostility that give Heimat its inhospitable character are contested. To this end, both authors resort to two fundamental devices: mimesis and movement. The research shows how, in the works studied, Heimat is presented as a space of continuous movements, exchanges and interactions, in which mechanisms of oppression operate, but also devices of opposition, practices of intersubjective openness and aspirations of communal integration.
Climate change is one of the greatest challenges to humanity in this century, and its most noticeable consequences are expected to be impacts on the water cycle – in particular the distribution and availability of water, which is fundamental for all life on Earth. In this context, it is essential to better understand where and when water is available and what processes influence variations in water storages. While estimates of the overall terrestrial water storage (TWS) variations are available from the GRACE satellites, these represent the vertically integrated signal over all water stored in ice, snow, soil moisture, groundwater and surface water bodies. Therefore, complementary observational data and hydrological models are still required to determine the partitioning of the measured signal among different water storages and to understand the underlying processes. However, the application of large-scale observational data is limited by their specific uncertainties and the incapacity to measure certain water fluxes and storages. Hydrological models, on the other hand, vary widely in their structure and process representation, and rarely incorporate additional observational data to minimize uncertainties that arise from their simplified representation of the complex hydrologic cycle.
In this context, this thesis aims to contribute to improving the understanding of global water storage variability by combining simple hydrological models with a variety of complementary Earth observation-based data. To this end, a model-data integration approach is developed, in which the parameters of a parsimonious hydrological model are calibrated against several observational constraints, including GRACE TWS, simultaneously, while taking into account each dataset's specific strengths and uncertainties. This approach is used to investigate three specific aspects that are relevant for modelling and understanding the composition of large-scale TWS variations.
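The essence of such a calibration, adjusting model parameters jointly against several observations while weighting each constraint by its uncertainty, can be sketched with a toy bucket model. Everything below (the single-bucket model, the synthetic forcing, the stand-ins for GRACE-like storage anomalies and runoff, and the error levels) is invented for illustration and is far simpler than the thesis's model and data:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def bucket_model(params, precip, pet):
    """Toy single-bucket water balance; returns storage and runoff series."""
    cap, k = params  # storage capacity [mm], runoff coefficient [1/step]
    s, storage, runoff = 0.0, [], []
    for p, e in zip(precip, pet):
        s = min(max(s + p - e, 0.0), cap)  # fill, bounded by capacity
        q = k * s                          # linear-reservoir outflow
        s -= q
        storage.append(s)
        runoff.append(q)
    return np.array(storage), np.array(runoff)

# Synthetic forcing and "observations" with known uncertainties.
precip = rng.gamma(2.0, 2.0, 365)
pet = np.full(365, 1.5)
s_true, q_true = bucket_model((120.0, 0.05), precip, pet)
tws_obs = (s_true - s_true.mean()) + rng.normal(0, 2.0, 365)  # GRACE-like anomaly
q_obs = q_true + rng.normal(0, 0.3, 365)
sigma_tws, sigma_q = 2.0, 0.3

def cost(params):
    """Sum of uncertainty-weighted misfits against both constraints."""
    s, q = bucket_model(params, precip, pet)
    tws_sim = s - s.mean()  # only storage *variations* are constrained
    return (np.mean(((tws_sim - tws_obs) / sigma_tws) ** 2) +
            np.mean(((q - q_obs) / sigma_q) ** 2))

res = minimize(cost, x0=(80.0, 0.1), method='Nelder-Mead')
```

A side effect worth noting: because the bucket rarely fills, the capacity parameter is poorly constrained while the runoff coefficient is recovered well, a miniature version of the parameter equifinality discussed in the thesis.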
The first study focuses on Northern latitudes, where snow and cold-region processes define the hydrological cycle. While the study confirms previous findings that seasonal dynamics of TWS are dominated by the cyclic accumulation and melt of snow, it reveals that inter-annual TWS variations, in contrast, are determined by variations in liquid water storages. Additionally, it is found to be important to consider the impact of compensatory effects of spatially heterogeneous hydrological variables when aggregating the contribution of different storage components over large areas. Hence, the determinants of TWS variations are scale-dependent, and the underlying driving mechanisms cannot simply be transferred between spatial and temporal scales. These findings are supported by the second study for the global land areas beyond the Northern latitudes as well.
This second study further identifies the considerable impact of how vegetation is represented in hydrological models on the partitioning of TWS variations. Using spatio-temporally varying fields of Earth observation-based data to parameterize vegetation activity not only significantly improves model performance, but also reduces parameter equifinality and process uncertainties. Moreover, the representation of vegetation drastically changes the contribution of different water storages to overall TWS variability, emphasizing the key role of vegetation for water allocation, especially between sub-surface and delayed water storages. However, the study also identifies parameter equifinality regarding the decay of sub-surface and delayed water storages by either evapotranspiration or runoff, and thus emphasizes the need for further constraints on these processes.
The third study focuses on the role of river water storage, in particular whether it is necessary to include computationally expensive river routing for model calibration and validation against the integrated GRACE TWS. The results suggest that river routing is not required for model calibration in such a global model-data integration approach, due to the larger influence of other observational constraints; instead, the determinability of certain model parameters and associated processes is identified as an issue of greater relevance. In contrast to model calibration, considering river water storage derived from routing schemes can already significantly improve modelled TWS compared to GRACE observations, and thus should be considered for model evaluation against GRACE data.
Beyond these specific findings that contribute to improved understanding and modelling of large-scale TWS variations, this thesis demonstrates the potential of combining simple modeling approaches with diverse Earth observational data to improve model simulations, overcome inconsistencies of different observational data sets, and identify areas that require further research. These findings encourage future efforts to take advantage of the increasing number of diverse global observational data.
Due to the major role of greenhouse gas emissions in global climate change, the development of non-fossil energy technologies is essential. Deep geothermal energy represents such an alternative, which offers promising properties such as a high base load capability and a large untapped potential. The present work addresses barite precipitation within geothermal systems and the associated reduction in rock permeability, which is a major obstacle to maintaining high efficiency. In this context, hydro-geochemical models are essential to quantify and predict the effects of precipitation on the efficiency of a system.
The objective of the present work is to quantify the induced injectivity loss using numerical and analytical reactive transport simulations. For the calculations, the fractured-porous reservoirs of the German geothermal regions North German Basin (NGB) and Upper Rhine Graben (URG) are considered.
Similar depth-dependent precipitation potentials could be determined for both investigated regions (2.8–20.2 g/m³ of fluid). However, the reservoir simulations indicate that the injectivity loss due to barite deposition in the NGB is significant (1.8%-6.4% per year) and the longevity of the system is affected as a result; this is especially true for deeper reservoirs (3000 m). In contrast, simulations of URG sites indicate a minor role of barite (< 0.1%-1.2% injectivity loss per year). The key differences between the investigated regions are reservoir thicknesses and the presence of fractures in the rock, as well as the ionic strength of the fluids. The URG generally has fractured-porous reservoirs with much higher thicknesses, resulting in a greater distribution of precipitates in the subsurface. Furthermore, ionic strengths are higher in the NGB, which accelerates barite precipitation, causing it to occur more concentrated around the wellbore. The more concentrated the precipitates occur around the wellbore, the higher the injectivity loss.
In this work, a workflow was developed within which numerical and analytical models can be used to estimate and quantify the risk of barite precipitation within the reservoir of geothermal systems. A key element is a newly developed analytical scaling score that provides a reliable estimate of induced injectivity loss. The key advantage of the presented approach compared to fully coupled reservoir simulations is its simplicity, which makes it more accessible to plant operators and decision makers. Thus, in particular, the scaling score can find wide application within geothermal energy, e.g., in the search for potential plant sites and the estimation of long-term efficiency.
Cosmic rays (CRs) are a ubiquitous and an important component of astrophysical environments such as the interstellar medium (ISM) and intracluster medium (ICM). Their plasma physical interactions with electromagnetic fields strongly influence their transport properties. Effective models which incorporate the microphysics of CR transport are needed to study the effects of CRs on their surrounding macrophysical media. Developing such models is challenging because of the conceptual, length-scale, and time-scale separation between the microscales of plasma physics and the macroscales of the environment. Hydrodynamical theories of CR transport achieve this by capturing the evolution of the CR population in terms of statistical moments. In the well-established one-moment hydrodynamical model for CR transport, the dynamics of the entire CR population are described by a single statistical quantity such as the commonly used CR energy density. In this work, I develop a new hydrodynamical two-moment theory for CR transport that expands the well-established hydrodynamical model by including the CR energy flux as a second independent hydrodynamical quantity. I detail how this model accounts for the interaction between CRs and gyroresonant Alfvén waves. The small-scale magnetic fields associated with these Alfvén waves scatter CRs, which fundamentally alters CR transport along large-scale magnetic field lines. This leads to the effects of CR streaming and diffusion, which are both captured within the presented hydrodynamical theory. I use an Eddington-like approximation to close the hydrodynamical equations and investigate the accuracy of this closure relation by comparing it to higher-order approximations of CR transport. In addition, I develop a finite-volume scheme for the new hydrodynamical model and adapt it to the moving-mesh code Arepo. This scheme is applied using a simulation of a CR-driven galactic wind.
I investigate how CRs launch the wind and perform a statistical analysis of CR transport properties inside the simulated circumgalactic medium (CGM). I show that the new hydrodynamical model can be used to explain the morphological appearance of a particular type of radio filamentary structures found inside the central molecular zone (CMZ). I argue that these harp-like features are synchrotron-radiating CRs which are injected into braided magnetic field lines by a point-like source such as a stellar wind of a massive star or a pulsar. Lastly, I present the finite-volume code Blinc that uses adaptive mesh refinement (AMR) techniques to perform simulations of radiation and magnetohydrodynamics (MHD). The mesh of Blinc is block-structured and represented in computer memory using a graph-based approach. I describe the implementation of the mesh graph and how a diffusion process is employed to achieve load balancing in parallel computing environments. Various test problems are used to verify the accuracy and robustness of the employed numerical algorithms.
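The general idea behind diffusion-based load balancing on a partition graph can be sketched in a few lines: at each step, neighboring partitions exchange a fixed fraction of their load difference until the block counts even out. The ring topology, exchange factor and step count below are illustrative assumptions, not Blinc's actual scheme:

```python
import numpy as np

def diffuse_loads(adjacency, loads, alpha=0.25, steps=200):
    """First-order diffusive load balancing: each edge moves
    alpha * (load difference) per step; total load is conserved."""
    loads = np.asarray(loads, dtype=float)
    for _ in range(steps):
        flux = np.zeros_like(loads)
        for i, neighbors in enumerate(adjacency):
            for j in neighbors:
                flux[i] += alpha * (loads[j] - loads[i])
        loads = loads + flux  # symmetric exchanges, so the sum is unchanged
    return loads

# Ring of six compute partitions with a very uneven initial block count.
ring = [[(i - 1) % 6, (i + 1) % 6] for i in range(6)]
balanced = diffuse_loads(ring, [60, 0, 0, 0, 0, 0])
```

The scheme converges geometrically, with the slowest-decaying mode set by the smallest nonzero eigenvalue of the graph Laplacian; on this ring every partition ends up with ten blocks.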
Modern technologies enable the actors involved in a production process to carry out information acquisition, decision-making and decision execution autonomously. Hierarchical control relationships are dissolved and decision-making is distributed across a multitude of actors. Positive consequences include the use of local competencies and fast on-site action without (time-)consuming cross-process planning runs by a central control instance. Assessing the decentrality of a process helps to compare different control strategies and thus contributes to mastering more complex production processes.
Although the communication structure of the actors involved in decision-making is becoming increasingly important, no method exists that uses it as a basis for operationalizing decentrality. This is where this thesis comes in. A three-stage evaluation model is developed that determines the decentrality of a production process on the basis of the communication and decision structure of the autonomous actors involved in the process.
Building on a definition of the decentrality of production processes, requirements for a metric are derived and, on the basis of the communication structure, a measure from social network analysis is identified that determines the structural autonomy of the actors. The necessity of additionally considering the decision structure is justified by the possibility of integrating decision-making and decision execution.
The differentiation of these two factors forms the basis for classifying the actors; multiplying the two values yields the metric "actual autonomy", which describes an actor's autonomy and constitutes the result of the first stage of the model. Homogeneous actor values characterize a high decentrality of the process step, which is the object of consideration of the second stage. By comparing the existing with the maximum possible decentrality of the process steps, the third stage determines the autonomy index, which operationalizes the decentrality of the process.
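The three stages of the model can be sketched as follows. The abstract specifies only the multiplication in stage 1 and the comparison logic in stages 2 and 3, so the homogeneity measure (1 minus the normalized mean absolute deviation) and the example values below are illustrative assumptions, not the thesis's actual formulas:

```python
import numpy as np

def actual_autonomy(structural, decision):
    """Stage 1: per-actor actual autonomy as the product of structural
    autonomy (from the communication network) and the decision factor."""
    return np.asarray(structural, dtype=float) * np.asarray(decision, dtype=float)

def step_decentrality(autonomy):
    """Stage 2 (illustrative): homogeneity of the actor values, here
    1 - normalized mean absolute deviation; 1.0 = fully homogeneous."""
    a = np.asarray(autonomy, dtype=float)
    if a.mean() == 0:
        return 0.0
    return max(0.0, 1.0 - np.mean(np.abs(a - a.mean())) / a.mean())

def autonomy_index(steps):
    """Stage 3: observed decentrality relative to the maximum possible
    (here normalized to 1.0 per step), averaged over all process steps."""
    return float(np.mean([step_decentrality(s) for s in steps]))

# One process with near-homogeneous actors, one dominated by a single actor.
decentral = autonomy_index([actual_autonomy([0.8, 0.9, 0.85], [1, 1, 1])])
central = autonomy_index([actual_autonomy([0.9, 0.1, 0.1], [1, 0, 0])])
```

With these illustrative choices, the homogeneous process scores close to 1 and the centrally controlled process scores near 0, matching the intended ordering of the index.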
The evaluation model is validated by means of a simulation study in the Zentrum Industrie 4.0. To this end, the model is applied to two simulation experiments, one with a central and one with a decentral control strategy, and the results are compared. In addition, the model is applied to a comprehensive production process from industrial practice.
In plant cells, subcellular transport of cargo proteins relies to a large extent on post-Golgi transport pathways, many of which are mediated by clathrin-coated vesicles (CCVs). Vesicle formation is facilitated by different factors such as accessory proteins and adaptor protein complexes (APs), the latter serving as a bridge between cargo proteins and the coat protein clathrin. One type of accessory protein is defined by a conserved EPSIN N-TERMINAL HOMOLOGY (ENTH) domain and interacts with APs and clathrin via motifs in its C-terminal part. In Arabidopsis thaliana, there are three closely related ENTH domain proteins (EPSIN1, 2 and 3) and one highly conserved but phylogenetically distant outlier, termed MODIFIED TRANSPORT TO THE VACUOLE1 (MTV1). In the case of the trans-Golgi network (TGN)-located MTV1, clathrin association and a role in vacuolar transport had been shown previously (Sauer et al. 2013). In contrast, only limited functional and localization data were available for EPSIN1 and EPSIN2, and EPSIN3 remained completely uncharacterized prior to this study (Song et al. 2006; Lee et al. 2007). The molecular details of ENTH domain proteins in plants are thus still largely unknown. In order to systematically characterize all four ENTH proteins in planta, we first investigated expression and subcellular localization by analysis of stable reporter lines under their endogenous promoters. Although all four genes are ubiquitously expressed, their subcellular distribution differs markedly. EPSIN1 and MTV1 are located at the TGN, whereas EPSIN2 and EPSIN3 are associated with the plasma membrane (PM) and the cell plate. To examine potential functional redundancy, we isolated knockout T-DNA mutant lines and created all higher-order mutant combinations. The clearest evidence for functional redundancy was observed in the epsin1 mtv1 double mutant, which is a dwarf displaying overall growth reduction. These findings are in line with the TGN localization of both MTV1 and EPSIN1.
In contrast, loss of EPSIN2 and EPSIN3 does not result in a growth phenotype compared to wild type; however, a triple knockout of EPSIN1, EPSIN2 and EPSIN3 produces partially sterile plants. We focused mainly on the epsin1 mtv1 double mutant and addressed the functional role of these two genes in clathrin-mediated vesicle transport by comprehensive molecular, biochemical, and genetic analyses. Our results demonstrate that EPSIN1 and MTV1 promote vacuolar transport and secretion of a subset of cargo. However, they do not seem to be involved in endocytosis and recycling. Importantly, employing high-resolution imaging and genetic and biochemical experiments probing the relationship with the AP complexes, we found that EPSIN1/AP1 and MTV1/AP4 define two spatially and molecularly distinct subdomains of the TGN. The AP4 complex is essential for MTV1 recruitment to the TGN, whereas EPSIN1 is independent of AP4 but presumably acts in an AP1-dependent framework. Our findings suggest that this ENTH/AP pairing preference is conserved between animals and plants.
What are the consequences of unemployment and precarious employment for individuals' health in Europe? What are the moderating factors that may offset (or increase) the health consequences of labor-market risks? How do the effects of these risks vary across contexts that differ in their institutional and cultural settings? Does gender, regarded as a social structure, play a role, and how? Answering these questions is the aim of my cumulative thesis. This study aims to advance our knowledge of the health consequences that unemployment and precariousness cause over the life course. In particular, I investigate how several moderating factors, such as gender, the family, and the broader cultural and institutional context, may offset or increase the impact of employment instability and insecurity on individual health.
In my first paper, 'The buffering role of the family in the relationship between job loss and self-perceived health: Longitudinal results from Europe, 2004-2011', my co-authors and I measure the causal effect of job loss on health and the role of the family and welfare states (regimes) as moderating factors. Using EU-SILC longitudinal data (2004-2011), we estimate the probability of experiencing 'bad health' following a transition to unemployment by applying linear probability models, and undertake separate analyses for men and women. Firstly, we measure whether changes in the independent variable 'job loss' lead to changes in the dependent variable 'self-rated health' for men and women separately. Then, by adding different interaction terms to the model, we measure the moderating effect of the family, both in terms of emotional and economic support, and how much it varies across welfare regimes. As an identification strategy, we first implement static fixed-effect panel models, which control for time-varying observables and indirect health selection, i.e., constant unobserved heterogeneity. Secondly, to control for reverse causality and path dependency, we implement dynamic fixed-effect panel models, adding a lagged dependent variable to the model.
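The within transformation behind such static fixed-effect models can be sketched on synthetic data. The data-generating process, variable names and effect size below are invented for illustration (this is not EU-SILC data or the paper's full specification); the sketch only shows how person-demeaning removes constant unobserved heterogeneity before estimating the linear probability model:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical person-year panel with a job-loss indicator and a binary
# bad-health outcome; alpha is person-level unobserved heterogeneity.
n, t = 500, 6
pid = np.repeat(np.arange(n), t)
alpha = rng.normal(0, 1, n)[pid]
job_loss = (rng.random(n * t) < 0.15).astype(float)
latent = 0.10 * job_loss + 0.1 * alpha + rng.normal(0, 0.3, n * t)
bad_health = (latent > np.quantile(latent, 0.8)).astype(float)
df = pd.DataFrame({'pid': pid, 'job_loss': job_loss, 'bad_health': bad_health})

def fe_lpm(df, y, x, entity):
    """Static fixed-effects linear probability model via the within
    transformation: demean outcome and regressor by person, then OLS."""
    yd = df[y] - df.groupby(entity)[y].transform('mean')
    xd = df[x] - df.groupby(entity)[x].transform('mean')
    return float((xd * yd).sum() / (xd ** 2).sum())

beta = fe_lpm(df, 'bad_health', 'job_loss', 'pid')
```

Because alpha is constant within each person, demeaning sweeps it out, and the estimated coefficient recovers the simulated effect of job loss on the probability of bad health; a dynamic variant would additionally include the lagged outcome as a regressor.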
We explore the role of the family by focusing on close ties within households: we consider the presence of a stable partner and his/her working status as a source of social and economic support. According to previous literature, having a partner should reduce the stress from adverse events, thanks to the symbolic and emotional dimensions that such a relationship entails, regardless of any economic benefits. Our results, however, suggest that benefits linked to the presence of a (female) partner also come from the financial stability that (s)he can provide in terms of a second income. Furthermore, we find partners' employment to be at least as important as the mere presence of the partner in reducing the negative effect of job loss on the individual's health by maintaining the household's standard of living and decreasing economic strain on the family. Our results are in line with previous research, which has highlighted that some people cope better than others with adverse life circumstances, and the support provided by the family is a crucial resource in that regard.
We also found an important interaction between the family and the welfare state in moderating the health consequences of unemployment, showing how the compensation effect of the family varies across welfare regimes. The family plays a decisive role in cushioning the adverse consequences of labor market risks in Southern and Eastern welfare states, which are characterized by less developed social protection systems and, especially in the Southern regime, a high level of familialism.
The first paper also found important gender differences concerning job loss, family and welfare effects. Of particular interest is the evidence suggesting that health selection works differently for men and women, playing a more prominent role for women than for men in explaining the relationship between job loss and self-perceived health. The second paper, 'Gender roles and selection mechanisms across contexts: A comparative analysis of the relationship between unemployment, self-perceived health, and gender', investigates the unemployment-driven gender differential in health in more depth.
Being a highly contested issue in literature, we aim to study whether men are more penalized than women or the other way around and the mechanisms that may explain the gender difference. To do that, we rely on two theoretical arguments: the availability of alternative roles and social selection. The first argument builds on the idea that men and women may compensate for the detrimental health consequences of unemployment through the commitment to 'alternative roles,' which can provide for the resources needed to fulfill people's socially constructed needs. Notably, the availability of alternative options depends on the different positions that men and women have in society.
Further, we merge the availability-of-alternative-roles argument with the health selection argument. We assume that health selection could be contingent on people's social position as defined by gender and could thus explain the gender differential in the relationship between unemployment and health. Ill people might be less reluctant to fall into or remain in unemployment (i.e., to self-select) if they have alternative roles. In Western societies, women generally have more alternative roles than men and thus more discretion in their labor market attachment. Therefore, health selection should be stronger for them, explaining why unemployment is less of a threat to women than to their male counterparts.
Finally, relying on the idea of different gender regimes, we extend these arguments to comparisons across contexts. For example, in contexts where being a caregiver is assumed to be a woman's traditional and primary role and the primary breadwinner role is reserved for men, unemployment is less stigmatized, and taking up alternative roles is more socially accepted for women than for men (Hp.1). Accordingly, social (self-)selection should be stronger for women than for men in traditional contexts, where, in the case of ill health, separation from work is eased by the availability of alternative roles (Hp.2).
By focusing on contexts that are representative of different gender regimes, we implement a multiple-step comparative approach. Firstly, using EU-SILC longitudinal data (2004-2015), our analysis tests gender roles and selection mechanisms for Sweden and Italy, which represent radically different gender regimes and thus provide institutional and cultural variation. Then, we limit institutional heterogeneity by focusing on Germany, comparing East- and West-Germany as well as older and younger cohorts for West-Germany (SOEP data 1995-2017). Next, to assess the differential impact of unemployment for men and women, we compare (unemployed and employed) men with (unemployed and employed) women. To do so, we calculate predicted probabilities and average marginal effects from two distinct random-effects probit models. Our first step is estimating random-effects models that assess the association between unemployment and self-perceived health, controlling for observable characteristics. In the second step, our fully adjusted model controls for both direct and indirect selection. We do this using dynamic correlated random-effects (CRE) models. Further, based on the fully adjusted model, we test our hypothesis on alternative roles (Hp.1) by comparing several contexts; models are estimated separately for each context. For this hypothesis, we pool men and women and include an interaction term between unemployment and gender, which has the advantage of allowing a direct test of whether gender differences in the effect of unemployment exist and are statistically significant. Finally, we test the role of selection mechanisms (Hp.2) using the KHB method to compare coefficients across nested nonlinear models. Specifically, we test the role of selection in the relationship between unemployment and health by comparing the partially adjusted and fully adjusted models. To allow selection mechanisms to operate differently between genders, we estimate separate models for men and women.
We found support for our first hypothesis: the context in which people are embedded structures the relationship between unemployment, health, and gender. We found no gendered effect of unemployment on health in the egalitarian context of Sweden. Conversely, in the traditional context of Italy, we observed substantive and statistically significant gender differences in the effect of unemployment on bad health, with women suffering less than men. We found the same pattern when comparing East and West Germany and younger and older cohorts in West Germany.
On the contrary, our results did not support our theoretical argument on social selection. We found that in Sweden, women are more strongly selected out of employment than men. In contrast, in Italy, health selection does not seem to be the primary mechanism behind the gender differential; Italian men and women seem to be selected out of employment to the same extent. Namely, we do not find any evidence that health selection is stronger for women in more traditional countries (Hp.2), despite the fact that the institutional and cultural context would offer them a wider range of 'alternative roles' relative to men. Moreover, our second hypothesis is also rejected in the second and third comparisons, where cross-country heterogeneity is reduced to maximize cultural differences within the same institutional context. Further research addressing selection into inactivity is needed to evaluate the interplay between selection and social roles across gender regimes.
While the health consequences of unemployment have been on the research agenda for a long time, interest in precarious employment—defined as the linking of the vulnerable worker to work that is characterized by uncertainty and insecurity concerning pay, the stability of the work arrangement, limited access to social benefits, and statutory protections—has emerged only later. Since the 1980s, scholars from different disciplines have raised concerns about the social consequences of the de-standardization of employment relationships. However, while work has undoubtedly become more precarious, very little is known about its causal effect on individual health and the role of gender as a moderator. These questions are at the core of my third paper: 'Bad job, bad health? A longitudinal analysis of the interaction between precariousness, gender and self-perceived health in Germany'. Herein, I investigate the multidimensional nature of precarious employment and its causal effect on health, particularly focusing on gender differences.
With this paper, I aim to overcome three major shortcomings of earlier studies. The first concerns the cross-sectional nature of the data, which prevents authors from ruling out unobserved heterogeneity as a mechanism behind the association between precarious employment and health. Indeed, several unmeasured individual characteristics, such as cognitive abilities, may confound the relationship between precarious work and health, leading to biased results. Secondly, only a few studies have directly addressed the role of gender in shaping this relationship. Moreover, available results on the gender differential are mixed and inconsistent: some found precarious employment to be more detrimental to women's health, while others found no gender differences or a stronger negative association for men. Finally, previous attempts at an empirical translation of the employment precariousness (EP) concept have not always been coherent with their theoretical framework. EP is usually assumed to be a multidimensional and continuous phenomenon; it is characterized by different dimensions of insecurity that may overlap in the same job and lead to different "degrees of precariousness." However, researchers have predominantly focused on one-dimensional indicators, e.g., temporary employment or subjective job insecurity, to measure EP and study the association with health. Besides the fact that this approach only partially grasps the phenomenon's complexity, the major problem is the inconsistency of the evidence it has produced. Indeed, this line of inquiry generally reveals an ambiguous picture, with some studies finding substantial adverse effects of temporary over permanent employment, while others report only minor differences.
To measure the (causal) effect of precarious work on self-rated health and its variation by gender, I focus on Germany and use four waves of SOEP data (2003, 2007, 2011, and 2015). Germany is a suitable context for my study. Since the 1980s, the labor market and welfare system have been restructured in many ways to increase the German economy's competitiveness in the global market. As a result, the (standard) employment relationship has been de-standardized: non-standard and atypical employment arrangements—i.e., part-time work, fixed-term contracts, mini-jobs, and temporary agency work—have increased over time, while wages have fallen even among workers in standard employment. In addition, the power of unions has declined over the last three decades, leaving a large share of workers without collective protection. Because of this process of de-standardization, the link between wage employment and strong social rights has eroded, making workers more powerless and more vulnerable to labor market risks than in the past. EP refers to this uneven distribution of power in the employment relationship, which can be detrimental to workers' health. Indeed, by affecting individuals' access to power and other resources, EP puts precarious workers at risk of experiencing health shocks and influences their ability to gain and accumulate health advantages (Hp.1).
Further, the focus on Germany allows me to investigate my second research question, on the gender differential. Germany is usually regarded as a traditionalist gender regime: a context characterized by a traditional configuration of gender roles. Here, being a caregiver is assumed to be women's primary role, whereas the primary breadwinner role is reserved for men. Although much progress has been made over the last decades towards a greater equalization of opportunities and more egalitarianism, the breadwinner model has only shifted towards a modified version. Thus, women usually take on the double role of worker (the so-called secondary earner) and caregiver, while men still devote most of their time to paid work. Moreover, the overall upward trend towards more egalitarian gender ideologies has leveled off over the last decades, in some respects even moving back towards more traditional gender ideologies.
In this setting, two alternative hypotheses are possible. First, I assume that the negative relationship between EP and health is stronger for women than for men. This is because women are systematically more disadvantaged than men in the public and private spheres of life, having less access to formal and informal sources of power. These gender-related power asymmetries may interact with EP-related power asymmetries, resulting in a stronger effect of EP on women's health than on men's health (Hp.2).
An alternative way of looking at the gender differential is to consider the interaction that precariousness might have with men's and women's gender identities. According to this view, the negative relationship between EP and health is weaker for women than for men (Hp.2a). In a society with a gendered division of labor and a strong link between masculine identity and a stable, well-rewarded job—i.e., a job that confers the role of primary family provider—a male worker in precarious employment might violate the traditional male gender role. Men in precarious jobs may perceive themselves (and be perceived by others) as possessing a socially undesirable characteristic, which conflicts with the stereotypical idea of themselves as the male breadwinner. Occupying a position that contradicts one's stereotypical gender identity may decrease self-esteem and foster feelings of inferiority, helplessness, and jealousy, leading to poor health.
I develop a new indicator of EP that empirically translates a definition of EP as a multidimensional and continuous phenomenon. I assume that EP is a latent construct composed of seven dimensions of insecurity chosen according to theory and previous empirical research: income insecurity, social insecurity, legal insecurity, employment insecurity, working-time insecurity, representation insecurity, and worker's vulnerability. The seven dimensions are proxied by eight indicators available in the four waves of the SOEP dataset. The EP composite indicator is obtained by performing a multiple correspondence analysis (MCA) on the eight indicators. This approach aims to construct a summary scale in which all dimensions contribute jointly to the measured experience of precariousness and its health impact.
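The MCA step described above can be sketched in a few lines: one-hot encode the categorical insecurity indicators, run a correspondence analysis on the resulting indicator matrix, and take the first-dimension row coordinates as a single continuous scale. This is a minimal, self-contained illustration with made-up toy data, not the SOEP indicators or the thesis's actual implementation.

```python
import numpy as np

def mca_scores(X_cat):
    """Minimal multiple correspondence analysis (MCA): one-hot encode
    categorical indicators, run correspondence analysis on the indicator
    matrix, and return first-dimension row coordinates as a composite scale."""
    onehot = []
    for j in range(X_cat.shape[1]):
        col = X_cat[:, j]
        for level in np.unique(col):
            onehot.append((col == level).astype(float))
    Z = np.column_stack(onehot)          # indicator matrix (n x total levels)
    P = Z / Z.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, sing, _ = np.linalg.svd(S, full_matrices=False)
    return U[:, 0] * sing[0] / np.sqrt(r)  # principal row coordinates, dim 1

# toy data: 6 workers scored on 3 hypothetical binary insecurity indicators
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(6, 3))
ep = mca_scores(X)
print(ep.shape)  # one composite EP score per worker
```

Each worker then carries a single continuous score, which is what allows "degrees of precariousness" rather than a binary precarious/non-precarious split.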
Further, the relationship between EP and general self-perceived health is estimated by applying ordered probit random-effects estimators and calculating average marginal effects (AME). Then, to control for unobserved heterogeneity, I implement correlated random-effects models, which add the within-individual means of the time-varying independent variables to the model. To test the significance of the gender differential, I add an interaction term between EP and gender to the fully adjusted model in the pooled sample.
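The correlated random-effects (Mundlak) device mentioned here amounts to augmenting the regressors with each person's time-averaged covariates, so the estimator separates within-person variation from stable between-person differences. A minimal sketch with made-up panel data (the variable names and values are illustrative, not SOEP):

```python
import pandas as pd

# Hypothetical three-person, two-wave panel of EP scores.
panel = pd.DataFrame({
    "pid":  [1, 1, 2, 2, 3, 3],                  # person identifier
    "wave": [2003, 2007, 2003, 2007, 2003, 2007],
    "ep":   [0.2, 0.4, 0.8, 0.9, 0.1, 0.3],      # made-up EP scores
})

# Mundlak device: each person's time-averaged EP enters as an extra
# regressor, absorbing time-constant unobserved heterogeneity that is
# correlated with EP; 'ep' then captures within-person variation.
panel["ep_mean"] = panel.groupby("pid")["ep"].transform("mean")
print(panel)
```

Both `ep` and `ep_mean` would then enter the ordered probit random-effects model alongside the other controls.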
My correlated random-effects models show a negative and substantial 'effect' of EP on self-perceived health for both men and women. Although not statistically significant, this evidence is in line with the previous cross-sectional literature and supports the hypothesis that employment precariousness can be detrimental to workers' health. Further, my results show the crucial role of unobserved heterogeneity in shaping the health consequences of precarious employment. This is particularly important because the accumulating evidence is still mostly descriptive.
Moreover, my results reveal a substantial difference between men and women in the relationship between EP and health: when EP increases, the risk of experiencing poor health increases much more for men than for women. This evidence speaks against previous theory, according to which the gender differential is contingent on the structurally disadvantaged position of women in Western societies. In contrast, my results seem to confirm the idea that men in precarious work experience role conflict to a larger extent than women, as their self-standard is the stereotypical breadwinner with a good and well-rewarded job. Finally, the results of the multiple correspondence analysis contribute to the methodological debate on precariousness, showing that a multidimensional and continuous indicator can express a latent EP variable.
All in all, the results on unemployment and employment precariousness are complementary, which has two implications. Policy-makers need to be aware that the total costs of unemployment and precariousness go far beyond the economic and material realm, penetrating other fundamental life domains such as individual health. Moreover, they need to balance the trade-off between adequately protecting unemployed people and fostering high-quality employment in reaction to the highlighted market pressures. In this sense, the further development of a (universalistic) welfare state certainly helps mitigate the adverse health effects of unemployment and, therefore, the future costs in terms of both individuals' health and welfare spending. In addition, the presence of a working partner is crucial for reducing the health consequences of employment instability. Therefore, policies aiming to increase female labor market participation should be promoted, especially in contexts where the welfare state is less developed.
Moreover, my results support the importance of adopting a gender perspective in health research. The findings of the three articles show that job loss, unemployment, and precarious employment generally have adverse effects on men's health but weaker or absent consequences for women's health. This underlines the importance of labor and health policies that consider and distinguish the specific needs of the male and female labor force in Europe. Nevertheless, a further implication emerges: the health consequences of employment instability and de-standardization need to be investigated in light of gender arrangements and the transforming gender relationships in specific cultural and institutional contexts. Indeed, my results suggest that women's health advantage may be a transitory phenomenon, contingent on the prevailing gendered institutional and cultural context. As the structural difference between men's and women's positions in society erodes and egalitarianism becomes the dominant norm, the gender difference in the health consequences of job loss and precariousness will probably erode as well. Therefore, while gender equality in opportunities and roles is desirable for contemporary societies and a political goal that cannot be postponed further, this thesis raises a further and perhaps more crucial question: What kind of equality should be pursued to provide men and women with both a good quality of life and equal chances in the public and private spheres? In this sense, I believe that social and labor policies aiming to reduce gender inequality should focus not only on improving women's integration into the labor market, but also on implementing policies that target men and facilitate their involvement in the private sphere of life. An equal redistribution of social roles could activate a crucial transformation of gender roles and of the cultural models that sustain and still legitimate gender inequality in Western societies.
Carbohydrates are found in every living organism, where they are responsible for numerous essential biological functions and processes. Synthetic polymers with pendant saccharides, called glycopolymers, mimic natural glycoconjugates in their special properties and functions. Employing such biomimetics furthers the understanding and control of biological processes. Hence, glycopolymers are valuable and interesting for applications in the medical and biological fields. However, the synthesis of carbohydrate-based materials can be very challenging. In this thesis, the synthesis of biofunctional glycopolymers is presented, focusing on aqueous-based, protecting-group-free, and short synthesis routes to advance the field of glycopolymer synthesis.
Glycosylamines are practical and versatile precursors for glycopolymers. To maintain the biofunctionality of the saccharides after amination, regioselective functionalization was performed. This frequently performed synthesis was optimized for different sugars. The optimization was facilitated by a design of experiments (DoE) approach, which reduced the number of necessary experiments and made the procedure more efficient. The utility of DoE for optimizing the synthesis of glycosylamines is discussed.
The glycosylamines were converted to glycomonomers, which were then polymerized to yield biofunctional glycopolymers. Here, the glycopolymers were designed to be applicable as layer-by-layer (LbL) thin-film coatings for drug delivery systems. To enable the LbL technique, complementary glycopolymer electrolytes were synthesized by polymerization of the glycomonomers and subsequent modification, or by post-polymerization modification. For drug delivery, liposomes were embedded into the glycopolymer coating as potential cargo carriers. The stability as well as the integrity of the glycopolymer layers and liposomes were investigated in the physiological pH range.
Different glycopolymers were also synthesized to serve as anti-adhesion therapeutics by providing advanced architectures with multivalent presentations of saccharides, which can inhibit the binding of pathogenic lectins. Here, the synthesis of glycopolymer hydrogel particles based on biocompatible poly(N-isopropylacrylamide) (PNIPAm) was established using the free-radical precipitation polymerization technique. The influence of synthesis parameters on the sugar content in the gels and on the hydrogel morphology is discussed. The accessibility of the saccharides to model lectins and their enhanced, multivalent interaction were investigated.
At the end of this work, the synthesis strategies for the glycopolymers are generally discussed as well as their potential application in medicine.
The demand for learning Design Thinking (DT) as a path towards acquiring 21st-century skills has increased globally in the last decade. Because DT education originated in the Silicon Valley context of the d.school at Stanford, it is important to evaluate how the teaching of the methodology adapts to different cultural contexts. The thesis explores the impact of the socio-cultural context on DT education.
DT institutes in Cape Town, South Africa, and Kuala Lumpur, Malaysia, were visited to observe their programs and conduct 22 semi-structured interviews with local educators regarding their adaptation strategies. Grounded theory methodology was used to develop a model of Socio-Cultural Adaptation of Design Thinking Education that maps these strategies onto five dimensions: Planning, Process, People, Place, and Presentation. Based on this model, a list of recommendations is provided to help DT educators and practitioners design and deliver culturally inclusive DT education.
Writing travel, writing life
(2022)
The book compares the texts of three Swiss authors: Ella Maillart, Annemarie Schwarzenbach and Nicolas Bouvier. The focus is on the trip from Geneva to Kabul that Ella Maillart and Annemarie Schwarzenbach made together in 1939/1940, and that Nicolas Bouvier made in 1953/1954 with the artist Thierry Vernet. The comparison shows the strong connection between the journey and life, and between ars vivendi and travel literature.
This book also gives an overview of and organises the numerous terms, genres, and categories that already exist to describe various travel texts and proposes the new term travelling narration. The travelling narration looks at the text from a narratological perspective that distinguishes the author, narrator, and protagonist within the narration.
In the examination, ten motifs could be found to characterise the travelling narration: Culture, Crossing Borders, Freedom, Time and Space, the Aesthetics of Landscapes, Writing and Reading, the Self and/as the Other, Home, Religion and Spirituality as well as the Journey. The importance of each individual motif does not only apply in the 1930s or 1950s but also transmits important findings for living together today and in the future.
Objective: The behaviors of endothelial cells and mesenchymal stem cells are markedly influenced by the mechanical properties of their surrounding microenvironments. Here, electrospun fiber meshes with various mechanical characteristics were developed from polyetheresterurethane (PEEU) copolymers. The goal of this study was to explore how fiber mesh stiffness affects the shape, growth, migration, and angiogenic potential of endothelial cells. Furthermore, the effect of the E-modulus of the fiber meshes on the osteogenic potential of human adipose-derived stem cells (hADSCs) was investigated.
Methods: Polyetheresterurethane (PEEU) polymers with various poly(p-dioxanone) (PPDO) to poly(ε-caprolactone) (PCL) weight percentages (40 wt.%, 50 wt.%, 60 wt.%, and 70 wt.%) were synthesized and termed PEEU40, PEEU50, PEEU60, and PEEU70, accordingly. The fiber meshes were prepared by electrospinning. The effects of PEEU fiber meshes of varying elasticity on the shape, growth, migration, and angiogenic potential of human umbilical vein endothelial cells (HUVECs) were characterized. To determine how the E-modulus of the fiber meshes affects the osteogenic potential of hADSCs, cellular and nuclear morphologies and osteogenic differentiation ability were evaluated.
Results: With increasing stiffness of the PEEU fiber meshes, the aspect ratio of HUVECs cultivated on the materials increased. HUVECs cultivated on stiffer fiber meshes (4.5 ± 0.8 MPa) displayed a considerably greater proliferation rate and migratory velocity, and demonstrated increased tube-formation capability, compared with cells cultivated on softer fiber meshes (2.6 ± 0.8 MPa). Furthermore, hADSCs adhering to the stiffest fiber meshes (PEEU70) had a more elongated shape than those cultivated on softer meshes. The hADSCs grown on the softer PEEU40 fiber meshes showed a lower nuclear aspect ratio (width to height) than those cultivated on the stiffer fiber meshes. Culturing hADSCs on stiffer fibers improved their osteogenic differentiation potential: compared with cells cultured on PEEU40, osteocalcin expression and alkaline phosphatase (ALP) activity increased by 73 ± 10% and 43 ± 16%, respectively, in cells cultured on PEEU70.
Conclusion: The mechanical characteristics of the substrate are crucial in modulating cell behavior. These findings indicate that adjusting the elasticity of fiber meshes might be a useful method for controlling blood vessel development and regeneration. Furthermore, the mechanical characteristics of PEEU fiber meshes might be modified to control the osteogenic potential of hADSCs.
With the rapid rise of cloud computing adoption in the past few years, more companies are migrating their confidential files from private data centers to the cloud as part of their digital transformation. Enterprise file synchronization and sharing (EFSS) is one of the solutions that lets enterprises store their files in the cloud with secure and easy file sharing and collaboration among employees. However, the rapidly increasing number of cyberattacks on the cloud means that a company's files may be stolen or leaked to the public. It is then the responsibility of the EFSS system to ensure that the company's confidential files are accessible only to authorized employees.
CloudRAID is a secure personal cloud storage research collaboration project that provides data availability and confidentiality in the cloud. It combines erasure and cryptographic techniques to securely store files as multiple encrypted file chunks in various cloud service providers (CSPs). However, several aspects of CloudRAID's concept are unsuitable for secure and scalable enterprise cloud storage solutions, particularly key management system, location-based access control, multi-cloud storage management, and cloud file access monitoring.
This Ph.D. thesis focuses on CloudRAID for Business (CfB) as it resolves four main challenges of CloudRAID's concept for a secure and scalable EFSS system. First, the key management system is implemented using the attribute-based encryption scheme to provide secure and scalable intra-company and inter-company file-sharing functionalities. Second, an Internet-based location file access control functionality is introduced to ensure files could only be accessed at pre-determined trusted locations. Third, a unified multi-cloud storage resource management framework is utilized to securely manage cloud storage resources available in various CSPs for authorized CfB stakeholders. Lastly, a multi-cloud storage monitoring system is introduced to monitor the activities of files in the cloud using the generated cloud storage log files from multiple CSPs.
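The CloudRAID idea of combining erasure techniques with encryption can be illustrated with a toy sketch of the availability half only: split a file into chunks plus an XOR parity chunk, so a single lost chunk (e.g. an unreachable CSP) can be rebuilt. This is a hedged illustration, not the actual CfB scheme, which uses proper erasure coding and encrypts each chunk under the attribute-based scheme described above.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data: bytes, k: int = 3):
    """Toy CloudRAID-style split: k equal-sized chunks plus one XOR
    parity chunk, so any single missing chunk can be reconstructed."""
    size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    return chunks, reduce(xor_bytes, chunks)

def recover(chunks, parity, lost: int) -> bytes:
    """Rebuild the chunk at index `lost` from the survivors and the parity."""
    survivors = [c for i, c in enumerate(chunks) if i != lost]
    return reduce(xor_bytes, survivors + [parity])

chunks, parity = split_with_parity(b"confidential company file", k=3)
rebuilt = recover(chunks, parity, lost=1)
print(rebuilt == chunks[1])
```

In a real deployment each chunk would additionally be encrypted before upload, so that no single CSP ever holds readable plaintext.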
In summary, this thesis helps the CfB system provide holistic security for a company's confidential files at the cloud, system, and file levels, ensuring that only an authorized company and its employees can access the files.
The current COVID-19 pandemic vividly demonstrates how infectious diseases can spread worldwide. In addition to viral diseases, multi-resistant bacterial pathogens are also spreading globally. Accordingly, there is a great need to identify infected individuals through early detection and to interrupt chains of infection.
Conventional culture-based methods require minimally invasive or invasive samples and take too long for screening purposes. Fast, non-invasive methods are therefore needed.
In classical Greece, physicians relied, among other things, on their sense of smell to differentiate infections and other diseases. These characteristic odors are volatile organic compounds (VOCs) that arise from an organism's metabolism. Animals with a better sense of smell can be trained to distinguish particular pathogens by their odor. However, the use of animals in everyday clinical practice is impractical, so it is natural to analyze these VOCs by technical means.
One technical method for differentiating these VOCs is ion mobility spectrometry coupled with a multi-capillary gas chromatography column (MCC-IMS). This has proven to be a fast, sensitive, and reliable method.
It is known that different bacteria produce different VOCs, and hence their own specific odors, as a result of their metabolism. In the first step of this work, it was shown that different bacteria can be differentiated in vitro on the basis of their VOCs after a short incubation time of 90 minutes. Analogous to diagnosis by biochemical test series, a hierarchical classification of the bacteria was possible.
In contrast to bacteria, viruses have no metabolism of their own. Whether virus-infected cells release different VOCs than uninfected cells was examined in cell cultures. It was shown that the VOC fingerprints of cell cultures infected with respiratory syncytial virus (RSV) differ from those of uninfected cells.
Viral infections in an intact organism differ from cell cultures in that, in addition to changes in cellular metabolism, VOCs can also be released by the host's defense mechanisms.
To examine whether infections in an intact organism can likewise be distinguished on the basis of VOCs, the breath of patients with and without confirmed influenza A infection, as well as of patients with suspected SARS-CoV-2 (severe acute respiratory syndrome coronavirus type 2) infection, was analyzed. Both influenza-infected and SARS-CoV-2-infected patients could be distinguished from each other and from uninfected patients by MCC-IMS analysis of their breath.
In summary, MCC-IMS yields encouraging results for the rapid, non-invasive detection of infections both in vitro and in vivo.
The current generation of ground-based instruments has rapidly extended the limits of the range accessible to us with very-high-energy (VHE) gamma-rays, and more than a hundred sources have now been detected in the Milky Way. These sources represent only the tip of the iceberg, but their number has reached a level that allows population studies. In this work, a model of the global population of VHE gamma-ray sources is presented, based on the most comprehensive census of Galactic sources in this energy regime, the H.E.S.S. Galactic plane survey (HGPS). A population synthesis approach was followed in the construction of the model. Particular attention was paid to correcting for the strong observational bias inherent in the sample of detected sources. The methods developed for estimating the model parameters have been validated with extensive Monte Carlo simulations and are shown to provide unbiased estimates. With these methods, five models for different spatial distributions of sources have been constructed. To test the validity of these models, their predictions for the composition of sources within the sensitivity range of the HGPS are compared with the observed sample. With one exception, similar results are obtained for all spatial distributions: the predicted longitude profile and the source distribution over photon flux are in fair agreement with observation. Regarding the latitude profile and the source distribution over angular extent, however, the model needs further adjustment to bring its predictions into agreement with observation. Based on the model, predictions of the global properties of the Galactic population of VHE gamma-ray sources and the prospects of the Cherenkov Telescope Array (CTA) are presented.
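The core population-synthesis step with an observational-bias correction can be sketched as follows: draw sources from assumed luminosity and distance distributions, convert to photon flux, and apply a survey sensitivity cut. All numbers here (luminosity function, distance range, threshold) are illustrative placeholders, not the HGPS model parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw a synthetic source population (hypothetical distributions):
# a power-law luminosity function and uniform distances for simplicity;
# a realistic model would use a Galactic disk geometry.
n_sources = 100_000
lum = 1e33 * (rng.pareto(1.5, n_sources) + 1.0)       # luminosity [ph/s]
dist = rng.uniform(0.5, 20.0, n_sources) * 3.086e21    # distance 0.5-20 kpc [cm]
flux = lum / (4.0 * np.pi * dist**2)                   # photon flux [ph/cm^2/s]

# Apply a sensitivity cut to mimic the survey's observational bias:
# only the bright/nearby tip of the population is "detected".
sensitivity = 1e-12                                    # hypothetical threshold
detected_fraction = float(np.mean(flux > sensitivity))
print(f"fraction of the simulated population detected: {detected_fraction:.3f}")
```

Comparing the properties of the "detected" subsample with the full simulated population is what allows the bias correction to be estimated.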
CTA will significantly increase our knowledge of VHE gamma-ray sources by lowering the threshold for source detection, primarily through a larger detection area compared to current-generation instruments. In ground-based gamma-ray astronomy, the sensitivity of an instrument depends strongly, in addition to the detection area, on the ability to distinguish images of air showers produced by gamma-rays from those produced by cosmic rays, which constitute a strong background. The number of detectable sources therefore depends on the background-rejection algorithm used and may also be increased by improving the performance of such algorithms. In this context, in addition to the population model, this work presents a study on the application of deep-learning techniques to the task of gamma-hadron separation in the analysis of data from ground-based gamma-ray instruments. Based on a systematic survey of different neural-network architectures, it is shown that robust classifiers can be constructed with performance competitive with the best existing algorithms. Despite the broad coverage of neural-network architectures discussed, only part of the potential offered by the application of deep-learning techniques to the analysis of gamma-ray data is exploited in this study. Nevertheless, it provides an important basis for further research on this topic.
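At its core, gamma-hadron separation is binary classification of shower events. A minimal, self-contained sketch of the idea follows, with synthetic two-feature events standing in for real shower parameters and a single hidden layer standing in for the deep architectures surveyed in the thesis; everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for shower features (e.g. image width/length):
# gamma-like and hadron-like events cluster in different regions.
n = 400
gammas  = rng.normal([0.10, 0.30], 0.05, size=(n, 2))
hadrons = rng.normal([0.25, 0.50], 0.08, size=(n, 2))
X = np.vstack([gammas, hadrons])
y = np.hstack([np.ones(n), np.zeros(n)])   # 1 = gamma, 0 = hadron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer network trained with full-batch gradient descent
# on the cross-entropy loss (a toy classifier, not the thesis models).
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    g = (p - y) / len(y)                       # d(loss)/d(logit)
    dW2 = h.T @ g[:, None]; db2 = g.sum(keepdims=True)
    dh = g[:, None] @ W2.T * (1 - h**2)        # backprop through tanh
    dW1 = X.T @ dh;         db1 = dh.sum(axis=0)
    for P, dP in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.5 * dP

h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
accuracy = float((((p > 0.5) == y)).mean())
print(f"training accuracy: {accuracy:.2f}")
```

Real analyses work on full camera images with convolutional architectures; the point of the sketch is only the classification setup and training loop.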
Salt deposits offer a variety of usage types. These include the mining of rock salt and potash salt as important raw materials, the storage of energy in man-made underground caverns, and the disposal of hazardous substances in former mines. The most serious risk with any of these usage types comes from the contact with groundwater or surface water. It causes an uncontrolled dissolution of salt rock, which in the worst case can result in the flooding or collapse of underground facilities. Especially along potash seams, cavernous structures can spread quickly, because potash salts show a much higher solubility than rock salt. However, as their chemical behavior is quite complex, previous models do not account for these highly soluble interlayers. Therefore, the objective of the present thesis is to describe the evolution of cavernous structures along potash seams in space and time in order to improve hazard mitigation during the utilization of salt deposits.
The formation of cavernous structures represents an interplay of chemical and hydraulic processes. Hence, the first step is to systematically investigate the dissolution and precipitation reactions that occur when water and potash salt come into contact. For this purpose, a geochemical reaction model is used. The results show that the minerals are only partially dissolved, resulting in a porous, sponge-like structure. As the saturation of the solution increases, various secondary minerals form, whose number and type depend on the original rock composition. Field data confirm a correlation between the degree of saturation and the distance from the center of the cavern, where solution enters. Subsequently, the reaction model is coupled with a flow and transport code and supplemented by a novel approach called 'interchange'. The latter enables the exchange of solution and rock between areas of different porosity and mineralogy, and thus ultimately the growth of the cavernous structure. By means of several scenario analyses, cavern shape, growth rate, and mineralogy are systematically investigated, also taking heterogeneous potash seams into account. The results show that essentially four different cases can be distinguished, with mixed forms occurring frequently in nature. The classification scheme is based on the dimensionless Péclet and Damköhler numbers and allows a first assessment of the hazard potential. In future, the model can be applied to any field case, using measurement data for calibration.
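A Péclet/Damköhler classification of this kind can be expressed as a small helper function. The threshold of 1 and the regime labels below are the generic textbook convention (advection vs. diffusion, reaction-limited vs. transport-limited), used here for illustration; they are not the calibrated boundaries of the thesis's four cases.

```python
def classify_regime(Pe: float, Da: float, threshold: float = 1.0) -> str:
    """Classify a dissolution regime by the dimensionless Peclet number
    (advective vs. diffusive transport) and Damkoehler number (reaction
    rate vs. transport rate). Threshold and labels are illustrative."""
    transport = "advection-dominated" if Pe > threshold else "diffusion-dominated"
    kinetics = "transport-limited" if Da > threshold else "reaction-limited"
    return f"{transport}, {kinetics}"

# fast flow past slowly reacting rock vs. stagnant, fast-reacting system
print(classify_regime(Pe=10.0, Da=0.1))
print(classify_regime(Pe=0.1, Da=10.0))
```

Crossing the two axes yields the four basic cases, with field situations typically falling between them as mixed forms.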
The presented research provides a reactive transport model that, for the first time, can characterize the propagation of cavernous structures along potash seams in space and time. Furthermore, it allows the thickness and composition of transition zones between the cavern center and the unaffected salt rock to be determined. The latter is particularly important in potash mining, so that natural cavernous structures can be located at an early stage and the risk of mine flooding thus reduced. The models may also contribute to improved hazard prevention in the construction of storage caverns and the disposal of hazardous waste in salt deposits. Predictions of the characteristics and evolution of cavernous structures enable a better assessment of potential hazards, such as loss of integrity or stability, as well as of suitable mitigation measures.
Li and B in ascending magmas: an experimental study on their mobility and isotopic fractionation
(2022)
This research study focuses on the behaviour of Li and B during magmatic ascent, and decompression-driven degassing related to volcanic systems. The main objective of this dissertation is to determine whether it is possible to use the diffusion properties of the two trace elements as a tool to trace magmatic ascent rate. With this objective, diffusion-couple and decompression experiments have been performed in order to study Li and B mobility in intra-melt conditions first, and then in an evolving system during decompression-driven degassing.
Synthetic glasses were prepared with rhyolitic composition and an initial water content of 4.2 wt%, and all the experiments were performed using an internally heated pressure vessel, in order to ensure a precise control on the experimental parameters such as temperature and pressure.
Diffusion-couple experiments were performed at a fixed pressure of 300 MPa. The temperature was varied in the range of 700-1250 °C, with durations between 0 seconds and 24 hours. The diffusion-couple results show that Li diffusion is very fast and starts already at very low temperature. Significant isotopic fractionation occurs due to the faster mobility of 6Li compared to 7Li. Boron diffusion is also accelerated by the presence of water, but the isotopic-ratio results are unclear, and further investigation would be necessary to properly constrain the isotopic fractionation of boron in hydrous silicate melts. The isotopic ratios suggest that boron isotopic fractionation might be affected by the speciation of boron in the silicate melt structure, as 10B and 11B tend to adopt tetrahedral and trigonal coordination, respectively.
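Diffusion-couple experiments of this geometry are commonly evaluated against the standard analytical solution c(x,t) = c̄ + (Δc/2)·erf(x / 2√(Dt)), with the interface at x = 0. A minimal sketch follows; the diffusivity, anneal time, and concentrations are placeholder values for illustration, not the fitted parameters of this study.

```python
import math

def diffusion_couple_profile(x_um, t_s, D_um2_s, c_left, c_right):
    """Concentration across a diffusion couple after time t_s, from the
    standard error-function solution for two semi-infinite halves joined
    at x = 0. Units: micrometers, seconds, um^2/s."""
    mid = 0.5 * (c_left + c_right)
    amp = 0.5 * (c_right - c_left)
    width = 2.0 * math.sqrt(D_um2_s * t_s)  # diffusive length scale
    return [mid + amp * math.erf(x / width) for x in x_um]

# hypothetical numbers: D = 100 um^2/s, 1-hour anneal, Li-rich left half
xs = [-400, -100, 0, 100, 400]
profile = diffusion_couple_profile(xs, t_s=3600, D_um2_s=100,
                                   c_left=10.0, c_right=2.0)
print([round(c, 2) for c in profile])
```

Fitting measured SIMS profiles to this curve (with D as the free parameter) is the usual route from a concentration traverse to a diffusivity.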
Several decompression experiments were performed at 900 °C and 1000 °C, with pressure decreasing from 300 MPa to 71-77 MPa over durations of 30 minutes, two, five, and ten hours, in order to trigger water exsolution and the formation of vesicles in the samples. Textural observations and the calculated bubble number density confirmed that bubble size and distribution after decompression are directly related to the decompression rate.
The overall SIMS results show that the Li and B concentrations progressively decrease with decreasing decompression rate. This is because, for longer decompression times, the diffusion of Li and B into the bubbles has more time to progress, and the melt continuously loses volatiles as the bubbles expand.
For fast decompression, the Li and B profiles show a concentration increase, with a decrease in δ7Li and δ11B close to the bubble interface, related to the sudden formation of the gas bubble and to a diffusion process in the opposite direction, from the bubble meniscus into the unaltered melt. Once bubble growth becomes dominant and Li and B start to exsolve into the gas phase, the silicate melt close to the bubble becomes depleted in Li and B because of the stronger diffusion of the trace elements into the bubble.
Our data are being applied to different models that aim to combine the dynamics of bubble nucleation and growth with the evolution of trace element concentrations and isotopic ratios. First considerations on these models are presented here, together with concluding remarks on this research study. These results are a promising basis for further investigation of this process and show that Li and B can indeed record clear dependences on decompression-related magma ascent rates in volcanic systems.
The increasing demand for energy in the current technological era, together with recent political decisions to phase out nuclear energy, has turned attention towards alternative, environmentally friendly energy sources such as solar energy. Although silicon solar cells are the product of a mature technology, the search for highly efficient and easily processable materials is still ongoing. Halide perovskites combine these properties, which made their single-junction efficiency comparable to that of silicon solar cells within a decade of research. However, the downsides of halide perovskites are poor stability and, for the most stable compositions, lead toxicity.
Chalcogenide perovskites, on the other hand, are among the most promising absorber materials for the photovoltaic market due to their elemental abundance and chemical stability against moisture and oxygen. In the search for the ultimate solar absorber material, combining the good optoelectronic properties of halide perovskites with the stability of chalcogenides could yield a promising candidate.
Thus, this work investigates new techniques for the synthesis and design of these novel chalcogenide perovskites that contain transition metals as cations, e.g., BaZrS3, BaHfS3, EuZrS3, EuHfS3 and SrHfS3. The deposition technique of this study has two stages: in the first stage, the binary compounds are deposited via a solution-processing method; in the second stage, the deposited materials are annealed in a chalcogenide atmosphere to form the perovskite structure via solid-state reactions.
The research also focuses on the optimization of a generalized recipe for a molecular ink to deposit precursors of chalcogenide perovskites with different binaries. Sulfurization of the precursors resulted either in binaries without perovskite formation or in distorted perovskite structures, consistent with literature reports that some of these materials are more favorable in the needle-like non-perovskite configuration.
Lastly, the produced materials are evaluated in two categories: the first category concerns the physical properties of the deposited layer, e.g., crystal structure, secondary phase formation and impurities. In the second category, optoelectronic properties such as band gap, conductivity and surface photovoltage are measured and compared to those of an ideal absorber layer.
Among the multitude of geomorphological processes, aeolian shaping processes are of a special character: even though their immediate impact can be considered low (exceptions exist), their constant and large-scale force makes them a powerful player in the Earth system. Pedogenic dust is one of the most important sources of atmospheric aerosols and is therefore regarded as a key player in atmospheric processes. Soil dust emissions, being complex in composition and properties, influence atmospheric processes and air quality and have impacts on other ecosystems. In this dissertation, we develop a novel scientific understanding of this complex system based on a holistic dataset acquired during a series of field experiments on arable land in La Pampa, Argentina. The field experiments and the generated data provide information about topography, various soil parameters and the atmospheric dynamics in the lowermost atmosphere (up to 4 m height), as well as measurements of aeolian particle movement across a wide range of particle size classes, from 0.2 μm up to coarse sand.
The investigations focus on three topics: (a) the effects of low-scale landscape structures on aeolian transport processes of the coarse particle fraction, (b) the horizontal and vertical fluxes of the very fine particles and (c) the impact of wind gusts on particle emissions.
Among other findings presented in this thesis, it could in particular be shown that, even though the small-scale topography has a clear impact on erosion and deposition patterns, physical soil parameters also need to be taken into account for a robust statistical modelling of the latter. Furthermore, the vertical fluxes of particulate matter show different characteristics for the different particle size classes. Finally, a novel statistical measure was introduced to quantify the impact of wind gusts on particle uptake and applied to the provided data set; it shows significantly increased particle concentrations during points in time defined as gust events.
With its holistic approach, this thesis further contributes to the fundamental understanding of how atmosphere and pedosphere are intertwined and affect each other.
Vegetation change at high latitudes is one of the central issues today with respect to ongoing climate change and the potential feedbacks it triggers. In high-latitude ecosystems, the expected changes include boreal treeline advance as well as changes in composition, phenology, plant physiology, biomass (phytomass) and productivity. However, the rate and extent of these changes under climate change are still poorly understood, and projections are necessary for effective adaptive strategies and proactive minimisation of possible negative feedbacks.
The vegetation itself, and the environmental conditions that play a major role in its development and distribution, are diverse across the Subarctic and Arctic. Among the least investigated areas is central Chukotka in north-eastern Siberia, Russia. Chukotka has mountainous terrain and a wide variety of vegetation types along the gradient from treeless tundra to northern taiga forests. In contrast to subarctic North America and north-western and central Siberia, the treeline there is formed by a deciduous conifer, Larix cajanderi Mayr. The vegetation varies from prostrate lichen Dryas octopetala L. tundra to open graminoid (hummock and non-hummock) tundra to tall Pinus pumila (Pall.) Regel shrublands to sparse and dense larch forests.
Hence, this thesis presents investigations of recent compositional and above-ground biomass (AGB) changes, as well as potential future changes in AGB, in central Chukotka. The aim is to assess how tundra-taiga vegetation develops under changing climate conditions, particularly in Far East Russia, central Chukotka. Three main research questions were considered:
1) What changes in vegetation composition have recently occurred in central Chukotka?
2) How have AGB rates and distribution changed in central Chukotka?
3) What are the spatial dynamics and rates of tree AGB change in the upcoming millennia in the northern tundra-taiga of central Chukotka?
Remote sensing provides information on the spatial and temporal variability of vegetation. I used Landsat satellite data together with field data (foliage projective cover and AGB) from two expeditions to Chukotka in 2016 and 2018 to upscale vegetation types and AGB for the study area. More specifically, I used Landsat spectral indices (Normalised Difference Vegetation Index (NDVI), Normalised Difference Water Index (NDWI) and Normalised Difference Snow Index (NDSI)) and constrained ordination (redundancy analysis, RDA) for a subsequent k-means-based land-cover classification and generalized additive model (GAM)-based AGB maps for 2000/2001/2002 and 2016/2017. I also used TanDEM-X DEM data for a topographical correction of the Landsat satellite data and to derive slope, aspect and Topographic Wetness Index (TWI) data for forecasting AGB.
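The three spectral indices named above are all simple normalised band differences. A minimal sketch, assuming surface reflectance inputs and the Gao (NIR/SWIR) formulation of NDWI (the thesis does not specify here which NDWI variant was used):

```python
import numpy as np

def spectral_indices(green, red, nir, swir):
    """Normalised-difference indices from Landsat reflectance bands."""
    ndvi = (nir - red) / (nir + red)        # vegetation greenness
    ndwi = (nir - swir) / (nir + swir)      # vegetation water content (Gao variant)
    ndsi = (green - swir) / (green + swir)  # snow cover
    return ndvi, ndwi, ndsi

# Toy reflectance values for a single vegetated pixel
ndvi, ndwi, ndsi = spectral_indices(
    green=np.float64(0.08), red=np.float64(0.06),
    nir=np.float64(0.40), swir=np.float64(0.20))
```

Passing NumPy arrays of whole Landsat bands instead of scalars yields per-pixel index maps, which is how such indices typically enter an ordination or classification workflow.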
Firstly, in 2016, taxa-specific projective cover data were collected during a Russian-German expedition. I processed the field data and coupled them with Landsat spectral indices in the RDA model that was used for the k-means classification. I could establish four meaningful land-cover classes: (1) larch closed-canopy forest, (2) forest tundra and shrub tundra, (3) graminoid tundra and (4) prostrate herb tundra and barren areas, and accordingly I produced land-cover maps for 2000/2001/2002 and 2016/2017. Changes in land-cover classes between the beginning of the century (2000/2001/2002) and the present time (2016/2017) were estimated and interpreted as recent compositional changes in central Chukotka. The transition from graminoid tundra to forest tundra and shrub tundra was interpreted as shrubification and amounts to a 20% area increase in the tundra-taiga zone and a 40% area increase in the northern taiga. Major contributors to shrubification are alder, dwarf birch and some species of the heather family. Land-cover change from the forest tundra and shrub tundra class to the larch closed-canopy forest class is interpreted as tree infilling and is notable in the northern taiga. We find almost no land-cover changes in the presently treeless tundra.
Secondly, total AGB state and change were investigated for the same areas. In addition to the total vegetation AGB, I provided estimates for the different taxa present at the field sites. AGB in the study region of central Chukotka ranged from 0 kg m-2 in barren areas to 16 kg m-2 in closed-canopy forests, with larch trees contributing the most. A comparison of changes in AGB within the investigated period from 2000 to 2016 shows that the greatest changes (up to 1.25 kg m-2 yr-1) occurred in the northern taiga and in areas where land cover changed to larch closed-canopy forest. Our estimates indicate a general increase in total AGB throughout the investigated tundra-taiga and northern taiga, whereas the tundra showed no evidence of change in AGB within the 15 years from 2002 to 2017.
In the third manuscript, potential future AGB changes were estimated based on simulations with the individual-based, spatially explicit vegetation model LAVESI under different climate scenarios following the Representative Concentration Pathways (RCPs) RCP 2.6, RCP 4.5 and RCP 8.5, with or without cooling after 2300 CE. LAVESI-based AGB was simulated from the current state until 3000 CE for larch in the northern tundra-taiga study area, because we expect the most notable changes to be associated with forest expansion in the treeline ecotone. The spatial distribution and current state of tree AGB were validated against AGB field data, AGB extracted from Landsat satellite data and a high-spatial-resolution image with distinctive trees visible. The simulation results indicate plot-wise differences in tree AGB dynamics, depending on the distance to the current treeline. The simulated tree AGB dynamics are in concordance with fundamental ecological (migrational and successional) processes: tree stand formation in the simulations starts with seed dispersal, followed by tree stand establishment, tree stand densification and episodic thinning. Our results suggest mostly densification of existing tree stands in the study region within the current century and a lagged forest expansion (up to 39% of the total area under RCP 8.5) under all considered climate scenarios without cooling, in different local areas depending on the proximity to the current treeline. In scenarios with cooling air temperatures after 2300 CE, forests stopped expanding at 2300 CE (up to 10% of the total area, RCP 8.5) and then gradually retreated to their pre-21st-century position. The average rates of tree AGB increase are strongest in the first 300 years from the start of the 21st century. The rates depend on the RCP scenario and, as expected, are highest under RCP 8.5.
Overall, this interdisciplinary thesis shows a successful integration of field data, satellite data and modelling for tracking recent and predicting future vegetation changes in mountainous subarctic regions. The obtained results are unique for the focus area in central Chukotka and overall, for mountainous high latitude ecosystems.
Deep geological repositories represent a promising solution for the final disposal of nuclear waste. Due to its low permeability, high sorption capacity and self-sealing potential, Opalinus Clay (OPA) is considered a suitable host rock formation for the long-term storage of nuclear waste in Switzerland and Germany. However, the clay formation is characterized by compositional and structural variabilities including the occurrence of carbonate- and quartz-rich layers, pronounced bedding planes as well as tectonic elements such as pre-existing fault zones and fractures, suggesting heterogeneous rock mass properties.
Characterizing the heterogeneity of host rock properties is therefore essential for safety predictions of future repositories. This includes a detailed understanding of the mechanical and hydraulic properties, deformation behavior and the underlying deformation processes for an improved assessment of the sealing integrity and long-term safety of a deep repository in OPA. Against this background, this thesis presents the results of deformation experiments performed on intact and artificially fractured specimens of the quartz-rich, sandy and clay-rich, shaly facies of OPA. The experiments focus on the influence of mineralogical composition on the deformation behavior as well as the reactivation and sealing properties of pre-existing faults and fractures at different boundary conditions (e.g., pressure, temperature, strain rate).
The anisotropic mechanical properties of the sandy facies of OPA are presented in the first section. They were determined from triaxial deformation experiments using dried and resaturated samples loaded at 0°, 45° and 90° to the bedding plane orientation. A Paterson-type deformation apparatus was used, which allowed us to investigate how the deformation behavior is influenced by variations in confining pressure (50 – 100 MPa), temperature (25 – 200 °C) and strain rate (1 × 10-3 – 5 × 10-6 s-1). Constant strain rate experiments revealed brittle to semi-brittle deformation behavior of the sandy facies at the applied conditions. The deformation behavior showed a strong dependence on confining pressure, degree of water saturation and bedding orientation, whereas variations in temperature and strain rate had no significant effect. Furthermore, the sandy facies displays higher strength and stiffness compared to the clay-rich shaly facies deformed at similar conditions by Nüesch (1991). From the obtained results it can be concluded that cataclastic mechanisms dominate the short-term deformation behavior of dried samples from both facies up to elevated pressure (<200 MPa) and temperature (<200 °C) conditions.
The second part presents triaxial deformation tests that were performed to investigate how structural discontinuities affect the deformation behavior of OPA and how the reactivation of preexisting faults is influenced by mineral composition and confining pressure. To this end, dried cylindrical samples of the sandy and shaly facies of OPA were used, which contained a saw-cut fracture oriented at 30° to the long axis. After hydrostatic pre-compaction at 50 MPa, constant strain rate deformation tests were performed at confining pressures of 5, 20 or 35 MPa. With increasing confinement, a gradual transition from brittle, highly localized fault slip including a stress drop at fault reactivation to semi-brittle deformation behavior, characterized by increasing delocalization and non-linear strain hardening without dynamic fault reactivation, can be observed. Brittle localization was limited by the confining pressure at which the fault strength exceeded the matrix yield strength, above which strain partitioning between localized fault slip and distributed matrix deformation occurred. The sandy facies displayed a slightly higher friction coefficient (≈0.48) compared to the shaly facies (≈0.4). In addition, slide-hold-slide tests were conducted, revealing negative or negligible frictional strengthening, which suggests stable creep and long-term weakness of faults in both facies of OPA. The conducted experiments demonstrate that dilatant brittle fault reactivation in OPA may be favored at high overconsolidation ratios and shallow depths, increasing the risk of seismic hazard and the creation of fluid pathways.
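For orientation, the friction coefficient reported for such a saw-cut can be related to the applied stresses by resolving the axial and confining stresses onto the 30° fault plane. A minimal sketch with hypothetical stress values (the actual reactivation stresses are not given here):

```python
import math

def resolved_stresses(sigma1, sigma3, fault_angle_deg):
    """Normal and shear stress (same units as inputs) on a plane inclined at
    fault_angle_deg to the sigma1 (sample long-axis) direction; the plane's
    normal therefore makes alpha = 90 - fault_angle_deg with sigma1."""
    alpha = math.radians(90.0 - fault_angle_deg)
    sigma_n = sigma1 * math.cos(alpha) ** 2 + sigma3 * math.sin(alpha) ** 2
    tau = 0.5 * (sigma1 - sigma3) * math.sin(2.0 * alpha)
    return sigma_n, tau

# Hypothetical stress state at fault reactivation (MPa): axial stress 100,
# confining pressure 20, saw-cut at 30 degrees to the long axis
sigma_n, tau = resolved_stresses(100.0, 20.0, 30.0)
mu = tau / sigma_n  # apparent friction coefficient at reactivation
```

Applying this to the measured stresses at reactivation is the standard way such friction coefficients (≈0.48 for the sandy, ≈0.4 for the shaly facies) are obtained, here simplified by neglecting any cohesion on the saw-cut.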
The final section illustrates how the sealing capacity of fractures in OPA is affected by mineral composition. Triaxial flow-through experiments using argon gas were performed on dried samples from the sandy and shaly facies of OPA containing a roughened, artificial fracture. Slate, graywacke, quartzite, natural fault gouge and granite samples were also tested to highlight the influence of normal stress, mineralogy and diagenesis on the sustainability of fracture transmissivity. With increasing normal stress, a non-linear decrease of fracture transmissivity was observed, resulting in a permanent reduction of transmissivity after stress release. The transmissivity of rocks with a high proportion of strong minerals (e.g., quartz) and high unconfined compressive strength was less sensitive to stress changes. In accordance with this, the sandy facies of OPA displayed a higher initial transmissivity that was less sensitive to stress changes compared to the shaly facies. However, the transmissivity of rigid slate was less sensitive to stress changes than that of the sandy facies of OPA, although the slate is characterized by a higher phyllosilicate content. This demonstrates that, in addition to mineral composition, other factors such as the degree of metamorphism, cementation and consolidation have to be considered when evaluating the sealing capacity of phyllosilicate-rich rocks.
The results of this thesis highlight the role of confining pressure in the failure behavior of intact and artificially fractured OPA. Although the quartz-rich sandy facies may be considered more favorable for underground construction due to its higher shear strength and stiffness compared to the shaly facies, the results indicate that when fractures develop in the sandy facies, they are more conductive and remain more permeable than fractures in the clay-dominated shaly facies at a given stress. The results may provide the basis for constitutive models to predict the integrity and evolution of a future repository. Clearly, the influence of composition and consolidation, e.g., by geological burial and uplift, on the mechanical sealing behavior of OPA highlights the need for a detailed site-specific material characterization for a future repository.
This dissertation aims to extend and refine the diagnostic options for the disorder of acquired dyslexia in German-speaking persons with dyslexia (PmD).
The literature discusses various language-processing models that attempt to explain the cognitive process of written-language processing. All considerations, data collections and analyses in this dissertation are based on the theoretical assumptions of the cognitive dual-route reading model, which distinguishes between lexical-semantic and segmental, sub-lexical processing during reading and can thus represent the mutually independent abilities to read known and unknown words. The cognitively oriented diagnostic instrument DYMO (Dyslexie Modellorientiert), developed as part of this dissertation, is intended to assess the reading abilities of PmD, localize the reading impairment as precisely as possible within the model, and provide a basis for planning reading-related therapy. It also takes into account components of the dual-route reading model that have not yet been established in the German-speaking context. These include subcomponents of visual analysis, which are responsible for identifying letters and encoding letter positions, and subcomponents of the segmental reading route, which map the step-by-step reading process along this model route. The item material of DYMO is controlled for various psycholinguistic variables, including variables that could not previously be assessed systematically in dyslexia diagnostics for German-speaking PmD, such as word length and the graphemic complexity of pseudowords.
The first publication underlying this dissertation (Original Paper I) addresses the parameters and model components that are decisive for comprehensive, model-based diagnostics of acquired dyslexia. It also presents considerations on the categorization of error types.
The second publication (Original Paper II) presents the test instrument DYMO. The accompanying manual provides detailed information on the structure and construction of the test, on administering and scoring the individual subtests, and on classifying a performance into a performance range. Administration, scoring, interpretation and the derivation of therapy goals are illustrated with detailed case examples of two PmD. The results of these case descriptions demonstrate the diagnostic contribution of DYMO and show that explicitly examining the subcomponents of visual analysis and the segmental reading route, as well as including the variables word length and graphemic complexity, can make the reading assessment more specific and the start of therapy more concrete.
The third publication (Original Paper III) uses a systematic comparison study based on a case series of twelve PmD to show the differences between DYMO and another cognitively based diagnostic instrument. It discusses to what extent DYMO can be a useful addition to the diagnostic process for acquired dyslexias. In addition, mildly and severely impaired PmD are compared in group analyses to examine whether DYMO offers an added benefit especially for mildly impaired PmD. Because of the more complex item material of DYMO (for example, due to the control of word length), it was assumed that mildly impaired PmD would show more conspicuous reading performance in DYMO subtests than in tasks of the other diagnostic instrument. This hypothesis was partially confirmed: mildly impaired PmD showed length effects more frequently than severely impaired PmD. Overall, however, the group difference was not as pronounced as expected.
Seventeen PmD were tested with the criterion-referenced, normed and finalized DYMO material. Detailed findings for each individual PmD, with subsequent therapy implications, show that in particular the specification of a segmental reading deficit in cases of severely impaired pseudoword reading can contribute to a more precise statement about the model-based locus of the disorder. This underlines the high informative value of the DYMO subtests and the relevance of specific, detailed, model-based assessment for explicit, individual therapy planning in acquired dyslexias.
The goal of this work is the development of an Industrie 4.0 maturity index for manufacturing companies (SMEs and mid-sized companies) with discrete production. The motivation for this work arose from the hesitation of many companies, especially SMEs and mid-sized companies, in the transformation towards Industrie 4.0. A market study showed that 86 percent of the surveyed manufacturing companies had not found an Industrie 4.0 maturity model suitable for their company with which they could assess their status quo and derive measures towards a higher degree of maturity. An evaluation of existing maturity models revealed deficits regarding Industrie 4.0 coverage, the consideration of the socio-technical dimensions of people, technology and organization, and the consideration of management and corporate culture. Based on current Industrie 4.0 technologies and fields of action, a new, modular Industrie 4.0 maturity model was developed that rests on a holistic view of all socio-technical dimensions (people, technology and organization) and their interfaces. In addition to the Overall Industry 4.0 Maturity Index (OI4MI), the model determines four further indices for assessing a company's Industrie 4.0 maturity. The model was validated at one company and is now available as a template for subsequent research.
The deciduous needle tree larch (Larix Mill.) covers more than 80% of the Asian boreal forests. Only a few Larix species constitute these vast forests, and these species differ markedly in their ecological traits, most importantly in their ability to grow on and stabilize underlying permafrost. The pronounced dominance of the summergreen larches makes the Asian boreal forests unique, as the rest of the northern hemisphere's boreal forests is almost exclusively dominated by evergreen needle-leaf forests. Global warming is impacting the whole world but is especially pronounced in the arctic and boreal regions. Although adapted to extreme climatic conditions, larch forests are sensitive to varying climatic conditions. By their sheer size, changes in Asian larch forests, such as range shifts or changes in species composition, and the resulting vegetation-climate feedbacks are of global relevance. It is, however, still uncertain whether larch forests will persist under the ongoing warming climate or whether they will be replaced by evergreen forests. It is therefore of great importance to understand how these ecosystems will react to future climate warming and whether they will maintain their dominance. One step towards a better understanding of larch dynamics is to study how these vast dominant forests developed and why they established only in northern Asia. A second step is to study how the species reacted to past changes in climate.
The first objective of this thesis was to review and identify factors promoting Asian larch dominance. I achieved this by synthesizing and comparing reported larch occurrences and influencing factors across the northern hemisphere continents, in the present and in the past. The second objective was to find a way to directly study past Larix populations in Siberia, and specifically their genetic variation, enabling the study of geographic movements. For this, I established chloroplast enrichment by hybridization capture from sedimentary ancient DNA (sedaDNA) isolated from lake sediment records. The third objective was to use the established method to track past larch populations, their glacial refugia during the Last Glacial Maximum (LGM) around 21,000 years before present (21 ka BP), and their post-glacial migration patterns.
To study larch-promoting factors, I compared the present state of larch species ranges, areas of dominance, their bioclimatic niches, and their distribution on permafrost of different extents and thaw depths. The species comparison showed that the bioclimatic niches of the American and Asian species greatly overlap and that only the Asian larch species can persist in the extremely continental climates. I revealed that the area of dominance is strongly connected to permafrost extent but less linked to seasonal permafrost thaw depth. Comparisons of the larch paleorecord between the continents suggest differences in recolonization history. Outside of northern Asia and Alaska, glacial refugial populations of larch were confined to southern regions, and thus recolonization could only occur as migration from south to north. Alaskan larch populations could not establish wide-range dominant forests, which could be related to genetic depletion as a separated refugial population. In Asia, it is still unclear whether the northern refugial populations contributed to and enhanced the post-glacial colonization or whether they were replaced by populations invading from the south in the course of climate warming. Asian larch dominance is thus promoted partly by adaptations to extremely continental climates and to growth on continuous permafrost, but could also be connected to differences in glacial survival and the recolonization history of Larix species.
Except for extremely rare macrofossil findings of fossilized cones, traditional methods for studying past vegetation are not able to distinguish between larch species or populations. Within the scope of this thesis, I therefore established a method to retrieve genetic information on past larch populations and distinguish between species. Using the Larix chloroplast genome as target, I successfully applied DNA target enrichment by hybridization capture to sedaDNA samples from lake records and showed that it can distinguish between larch species. I then used the method on samples from lake records from across Siberia dating back up to 50 ka BP. The results allowed me to address the question of glacial survival and the post-glacial recolonization mode of Siberian larch species. The observed pattern showed that LGM refugia were almost exclusively constituted by L. gmelinii, even at sites within the current L. sibirica distribution. At the included study sites, L. sibirica migrated into its extant northern distribution area only in the Holocene. Consequently, the post-glacial recolonization of L. sibirica was not enhanced by northern glacial refugia. For sites in the extant distribution area of L. gmelinii, the absence of a genetic turnover points to a continuous population rather than an invasion from southern refugia. The results suggest that climate has a strong influence on the distribution of Larix species and that the species may also respond differently to future climate warming. Because the species differ in their ecological characteristics, species distribution is also relevant with respect to further feedbacks between vegetation and climate.
With this thesis, I give an overview of present and past larch occurrences and evaluate which factors promote their dominance. Furthermore, I provide the tools to study past Larix species and give first important insights into the glacial history of Larix populations.
The present dissertation conducts empirical research on the relationship between urban life and its economic costs, especially for the environment. On the one hand, existing gaps in research on the influence of population density on air quality are closed and, on the other hand, innovative policy measures in the transport sector are examined that are intended to make metropolitan areas more sustainable. The focus is on air pollution, congestion and traffic accidents, which are important for general welfare issues and represent significant cost factors for urban life. They affect a significant proportion of the world's population. While 55% of the world's people already lived in cities in 2018, this share is expected to reach approximately 68% by 2050.
The four self-contained chapters of this thesis can be divided into two sections: Chapters 2 and 3 provide new causal insights into the complex interplay between urban structures and air pollution. Chapters 4 and 5 then examine policy measures to promote non-motorised transport and their influence on air quality as well as congestion and traffic accidents.
The Antarctic ice sheet is the largest freshwater reservoir worldwide. If it were to melt completely, global sea levels would rise by about 58 m. Calculating projections of the Antarctic contribution to sea level rise under global warming conditions is an ongoing effort which yields large ranges in predictions. Among the reasons for this are uncertainties related to the physics of ice sheet modeling. These uncertainties include two processes that could lead to runaway ice retreat: the Marine Ice Sheet Instability (MISI), which causes rapid grounding line retreat on retrograde bedrock, and the Marine Ice Cliff Instability (MICI), in which tall ice cliffs become unstable and calve off, exposing even taller ice cliffs.
In my thesis, I investigated both marine instabilities (MISI and MICI) using the Parallel Ice Sheet Model (PISM), with a focus on MICI.
Data profiling is the extraction of metadata from relational databases. An important class of metadata are multi-column dependencies. They come associated with two computational tasks. The detection problem is to decide whether a dependency of a given type and size holds in a database. The discovery problem instead asks to enumerate all valid dependencies of that type. We investigate the two problems for three types of dependencies: unique column combinations (UCCs), functional dependencies (FDs), and inclusion dependencies (INDs).
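The detection task for UCCs can be illustrated with a short sketch (a toy example of our own, not code from the thesis): a set of columns is a unique column combination exactly when projecting the relation onto those columns yields no duplicate tuples.

```python
# Toy sketch of UCC detection (illustrative, not from the thesis):
# a column set is a unique column combination (UCC) iff the projection
# of the relation onto these columns contains no duplicate tuples.
def is_ucc(rows, columns):
    seen = set()
    for row in rows:
        key = tuple(row[c] for c in columns)
        if key in seen:
            return False
        seen.add(key)
    return True

rows = [
    {"name": "Ada", "city": "Berlin",  "zip": "10115"},
    {"name": "Max", "city": "Berlin",  "zip": "10115"},
    {"name": "Ada", "city": "Potsdam", "zip": "14467"},
]
print(is_ucc(rows, ["name"]))          # False: "Ada" appears twice
print(is_ucc(rows, ["name", "city"]))  # True: all projected pairs are distinct
```

Checking a single candidate like this is cheap; the hardness results below concern deciding existence of small dependencies and enumerating all minimal ones, where the number of candidate column sets is exponential.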
We first treat the parameterized complexity of the detection variants. We prove that the detection of UCCs and FDs, respectively, is W[2]-complete when parameterized by the size of the dependency. The detection of INDs is shown to be one of the first natural W[3]-complete problems. We further settle the enumeration complexity of the three discovery problems by presenting parsimonious equivalences with well-known enumeration problems. Namely, the discovery of UCCs is equivalent to the famous transversal hypergraph problem of enumerating the hitting sets of a hypergraph. The discovery of FDs is equivalent to the simultaneous enumeration of the hitting sets of multiple input hypergraphs. Finally, the discovery of INDs is shown to be equivalent to enumerating the satisfying assignments of antimonotone, 3-normalized Boolean formulas.
In the remainder of the thesis, we design and analyze discovery algorithms for unique column combinations. Since this is as hard as the general transversal hypergraph problem, it is an open question whether the UCCs of a database can be computed in output-polynomial time in the worst case. For the analysis, we therefore focus on instances that are structurally close to databases in practice, most notably, inputs that have small solutions. The equivalence between UCCs and hitting sets transfers the computational hardness, but also allows us to apply ideas from hypergraph theory to data profiling. We devise a discovery algorithm that runs in polynomial space on arbitrary inputs and achieves polynomial delay whenever the maximum size of any minimal UCC is bounded. Central to our approach is the extension problem for minimal hitting sets, that is, to decide for
a set of vertices whether they are contained in any minimal solution. We prove that this is yet another problem that is complete for the complexity class W[3], when parameterized by the size of the set that is to be extended. We also give several conditional lower bounds under popular hardness conjectures such as the Strong Exponential Time Hypothesis (SETH). The lower bounds suggest that the running time of our algorithm for the extension problem is close to optimal.
We further conduct an empirical analysis of our discovery algorithm on real-world databases to confirm that the hitting set perspective on data profiling has merits also in practice. We show that the resulting enumeration times undercut their theoretical worst-case bounds on practical data, and that the memory consumption of our method is much smaller than that of previous solutions. During the analysis we make two observations about the connection between databases and their corresponding hypergraphs. On the one hand, the hypergraph representations containing all relevant information are usually significantly smaller than the original inputs. On the other hand, obtaining those hypergraphs is the actual bottleneck of any practical application. The latter often takes much longer than enumerating the solutions, which is in stark contrast to the fact that the preprocessing is guaranteed to be polynomial while the enumeration may take exponential time.
To make the first observation rigorous, we introduce a maximum-entropy model for non-uniform random hypergraphs and prove that their expected number of minimal hyperedges undergoes a phase transition with respect to the total number of edges. The result also explains why larger databases may have smaller hypergraphs. Motivated by the second observation, we present a new kind of UCC discovery algorithm called Hitting Set Enumeration with Partial Information and Validation (HPIValid). It utilizes the fast enumeration times in practice in order to speed up the computation of the corresponding hypergraph. This way, we sidestep the bottleneck while maintaining the advantages of the hitting set perspective. An exhaustive empirical evaluation shows that HPIValid outperforms the current state of the art in UCC discovery. It is capable of processing databases that were previously out of reach for data profiling.
The complex hierarchical structure of bone undergoes a lifelong remodeling process in which it adapts to mechanical needs. In this process, bone resorption by osteoclasts and bone formation by osteoblasts must be balanced to sustain a healthy and stable organ. Osteocytes orchestrate this interplay by sensing mechanical strains and translating them into biochemical signals. The osteocytes are located in lacunae and are connected to one another and to other bone cells via cell processes running through small channels, the canaliculi. Lacunae and canaliculi form a network (LCN) of extracellular spaces that is able to transport ions and enables cell-to-cell communication. Osteocytes might also contribute to mineral homeostasis by direct interactions with the surrounding matrix. If the LCN acts as a transport system, this should be reflected in the mineralization pattern. The central hypothesis of this thesis is that osteocytes actively change their material environment. Characterization methods of materials science are used to detect traces of this interaction between osteocytes and the extracellular matrix. First, healthy murine bones were characterized. The properties analyzed were then compared with three murine model systems: 1) a loading model, in which a bone of the mouse was loaded during its lifetime; 2) a healing model, in which a bone of the mouse was cut to induce a healing response; and 3) a disease model, in which the Fbn1 gene is dysfunctional, causing defects in the formation of the extracellular tissue.
The measurement strategy included routines that make it possible to analyze the organization of the LCN and the material components (i.e., the organic collagen matrix and the mineral particles) in the same bone volumes and compare the spatial distribution of different data sets. The three-dimensional network architecture of the LCN is visualized by confocal laser scanning microscopy (CLSM) after rhodamine staining and is then subsequently quantified. The calcium content is determined via quantitative backscattered electron imaging (qBEI), while small- and wide-angle X-ray scattering (SAXS and WAXS) are employed to determine the thickness and length of local mineral particles.
First, the tibial cortices of healthy mice were characterized to investigate how changes in LCN architecture can be attributed to interactions of osteocytes with the surrounding bone matrix. The tibial mid-shaft cross-sections showed two main regions, consisting of a band with unordered LCN surrounded by a region with ordered LCN. The unordered region is a remnant of early bone formation and exhibited short and thin mineral particles. The surrounding, more aligned bone showed an ordered and dense LCN as well as thicker and longer mineral particles. The calcium content did not differ between the two regions.
In the mouse loading model, the left tibia underwent two weeks of mechanical stimulation, which results in increased bone formation and decreased resorption in skeletally mature mice. The specific research question addressed here was how bone material characteristics change at (re)modeling sites. The new bone formed in response to mechanical stimulation showed mineral particle properties similar to those of the ordered region, but a lower calcium content compared to the right, non-loaded control bone of the same mice. There was a clearly recognizable border between mature and newly formed bone. Nevertheless, some canaliculi crossed this border, connecting the LCN of mature and newly formed bone.
A further question was whether the LCN topology and the bone matrix material properties adapt to loading. Although mechanically stimulated bones did not show differences in calcium content compared to controls, the correlation between local LCN density and local calcium content differed depending on whether the bone was loaded or not. These results suggest that the LCN may serve as a mineral reservoir.
For the healing model, the femurs of mice underwent an osteotomy, stabilized with an external fixator, and were allowed to heal for 21 days. Thus, the spatial variations in LCN topology and mineral properties within different tissue types and at their interfaces, namely calcified cartilage, bony callus and cortex, could be simultaneously visualized and compared in this model. All tissue types showed structural differences across multiple length scales. Calcium content increased and became more homogeneous from calcified cartilage to bony callus to lamellar cortical bone. The degree of LCN organization increased as well, while the lacunae became smaller, as did the lacunar density, between these different tissue types that make up the callus. In the calcified cartilage, the mineral particles were short and thin. The newly formed callus exhibited thicker mineral particles, which still had a low degree of orientation. While most of the callus had a woven-like structure, it also served as a scaffold for more lamellar tissue at the edges. The lamellar bone of the callus showed thinner mineral particles, but a higher degree of alignment in both the mineral particles and the LCN. The cortex showed the highest values for mineral particle length, thickness and degree of orientation. At the same time, the lacunar number density was 34% lower and the lacunar volume 40% smaller compared to the bony callus. The transition zone between cortical and callus regions showed a continuous convergence of bone mineral properties and lacunae shape. Only a few canaliculi connected the callus and the cortical region, but this indicates that communication between osteocytes of both tissues should be possible. The presented correlations between LCN architecture and mineral properties across tissue types suggest that osteocytes may have an active role in the mineralization processes of healing.
A mouse model of Marfan syndrome, a disease involving a genetic defect in the fibrillin-1 gene, was investigated. In humans, Marfan syndrome is characterized by a range of clinical symptoms such as long-bone overgrowth, loose joints, reduced bone mineral density, compromised bone microarchitecture, and increased fracture rates. Fibrillin-1 thus seems to play a role in skeletal homeostasis. Therefore, the present work studied how Marfan syndrome alters the LCN architecture and the surrounding bone matrix. The mice with Marfan syndrome showed longer tibiae than their healthy littermates from an age of seven weeks onwards. In contrast, cortical development appeared retarded, which was observed across all measured characteristics, i.e. lower endocortical bone formation, a looser and less organized lacuno-canalicular network, less collagen orientation, and thinner and shorter mineral particles.
In each of the three model systems, this study found that changes in the LCN architecture spatially correlated with bone matrix material parameters. Although the exact mechanism is not known, these results provide indications that osteocytes can actively manipulate a mineral reservoir located around the canaliculi to make a quickly accessible contribution to mineral homeostasis. However, this interaction is most likely not one-sided, but could be understood as an interplay between osteocytes and the extracellular matrix, since the bone matrix contains biochemical signaling molecules (e.g. non-collagenous proteins) that can change osteocyte behavior. Bone (re)modeling can therefore be understood not only as a method for removing defects or adapting to external mechanical stimuli, but also as a means of increasing the efficiency of possible osteocyte-mineral interactions during bone homeostasis. With these findings, it seems reasonable to consider osteocytes as a target for drug development related to bone diseases that cause changes in bone composition and mechanical properties. It will most likely require the combined effort of materials scientists, cell biologists, and molecular biologists to gain a deeper understanding of how bone cells respond to their material environment.
The geomagnetic main field is vital for life on Earth, as it shields our habitat against the solar wind and cosmic rays. It is generated by the geodynamo in the Earth's outer core and exhibits rich dynamics on various timescales. Global models of the field are used to study the interaction of the field and incoming charged particles, but also to infer core dynamics and to feed numerical simulations of the geodynamo. Modern satellite missions, such as the Swarm or the CHAMP mission, support high-resolution reconstructions of the global field. Since the 19th century, a global network of magnetic observatories has been established. It has been growing ever since, and global models can be constructed from the data it provides. Geomagnetic field models that extend further back in time rely on indirect observations of the field, i.e. thermoremanent records such as burnt clay or volcanic rocks and sediment records from lakes and seas. These indirect records come with (partially very large) uncertainties, introduced by the complex measurement methods and the dating procedure.
Focusing on thermoremanent records only, the aim of this thesis is the development of a new modeling strategy for the global geomagnetic field during the Holocene, which takes the uncertainties into account and produces realistic estimates of the reliability of the model. This aim is approached by first considering snapshot models, in order to address the irregular spatial distribution of the records and the non-linear relation of the indirect observations to the field itself. In a Bayesian setting, a modeling algorithm based on Gaussian process regression is developed and applied to binned data. The modeling algorithm is then extended to the temporal domain and expanded to incorporate dating uncertainties. Finally, the algorithm is sequentialized to deal with numerical challenges arising from the size of the Holocene dataset.
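The Gaussian process regression ingredient of the modeling algorithm can be sketched generically (this is the textbook closed-form posterior, not the actual ArchKalmag14k algorithm, and the kernel choice and all names here are our own):

```python
import numpy as np

# Generic Gaussian process regression sketch (textbook version, not the
# ArchKalmag14k algorithm itself): squared-exponential kernel, noisy
# observations, closed-form posterior mean and standard deviation.
def rbf(x1, x2, length=1.0, variance=1.0):
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = rbf(x_test, x_test) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Noisy 1-D toy data; the posterior interpolates and reports uncertainty.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = np.sin(x)
mu, sd = gp_posterior(x, y, np.array([0.5]))
# mu[0] is close to sin(0.5); sd[0] is small because data lie nearby
```

The appeal of this Bayesian setting for sparse, noisy paleomagnetic data is exactly the second return value: the posterior uncertainty grows wherever observations are scarce, which is what allows the model's reliability to be estimated.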
The central result of this thesis, including all of the aspects mentioned, is a new global geomagnetic field model. It covers the whole Holocene, back until 12000 BCE, and we call it ArchKalmag14k. When considering the uncertainties that are produced together with the model, it is evident that before 6000 BCE the thermoremanent database is not sufficient to support global models. For times more recent, ArchKalmag14k can be used to analyze features of the field under consideration of posterior uncertainties. The algorithm for generating ArchKalmag14k can be applied to different datasets and is provided to the community as an open source python package.
The estimation of financial losses is an integral part of flood risk assessment. The application of existing flood loss models to locations or events different from the ones used to train the models has led to low performance, showing that characteristics of the flood damaging process have not yet been sufficiently well represented. To improve flood loss model transferability, I explore various model structures aiming at incorporating different (inland water) flood types and pathways. The analysis is based on a large survey dataset of approximately 6000 flood-affected households, which addresses several aspects of each flood event: not only the hazard characteristics but also information on the affected building, socioeconomic factors, the household's preparedness level, early warning, and impacts. Moreover, the dataset reports the coincidence of different flood pathways. Whilst flood types are a classification of flood events reflecting their generating process (e.g. fluvial, pluvial), flood pathways represent the route the water takes to reach the receptors (e.g. buildings). In this work, the following flood pathways are considered: levee breaches, river floods, surface water floods, and groundwater floods.
The coincidence of several hazard processes at the same time and place characterises a compound event. In fact, many flood events develop through several pathways, such as the ones addressed in the survey dataset used. Earlier loss models, although developed with one or multiple predictor variables, commonly use loss data from a single flood event which is attributed to a single flood type, disregarding specific flood pathways or the coincidence of multiple pathways. This gap is addressed by this thesis through the following research questions: 1. In which aspects do flood pathways of the same (compound inland) flood event differ? 2. How much do factors which contribute to the overall flood loss in a building differ in various settings, specifically across different flood pathways? 3. How well can Bayesian loss models learn from different settings? 4. Do compound, that is, coinciding flood pathways result in higher losses than a single pathway, and what does the outcome imply for future loss modelling?
Statistical analysis found that households affected by different flood pathways also differ, in general, in the characteristics of the affected building, preparedness, and early warning, in addition to the hazard characteristics. Forecasting and early warning capabilities and the preparedness of the population are dominated by the general flood type, but the characteristics of the hazard at the object level, the impacts, and the recovery are more closely related to specific flood pathways, indicating that risk communication and loss models could benefit from the inclusion of flood-pathway-specific information.
For the development of the loss model, several potentially relevant predictors are analysed: water depth, duration, velocity, contamination, early warning lead time, perceived knowledge about self-protection, warning information, warning source, gap between warning and action, emergency measures, implementation of property-level precautionary measures (PLPMs), perceived efficacy of PLPMs, previous flood experience, awareness of flood risk, ownership, building type, number of flats, building quality, building value, house/flat area, building area, cellar, age, household size, number of children, number of elderly residents, income class, socioeconomic status, and insurance against floods. After a variable selection, descriptors of the hazard, building, and preparedness were deemed significant, namely: water depth, contamination, duration, velocity, building area, building quality, cellar, PLPMs, perceived efficacy of PLPMs, emergency measures, insurance, and previous flood experience. The inclusion of the indicators of preparedness is relevant, as they are rarely involved in loss datasets and in loss modelling, although previous studies have shown their potential in reducing losses. In addition, the linear model fit indicates that the explanatory factors are, in several cases, differently relevant across flood pathways.
Next, Bayesian multilevel models were trained, which intrinsically incorporate uncertainties and allow for partial pooling (i.e. different groups of data, such as households affected by different flood pathways, can learn from each other), increasing the statistical power of the model. A new variable selection was performed for this new model approach, reducing the number of predictors from twelve to seven variables but keeping factors of the hazard, building, and preparedness, namely: water depth, contamination, duration, building area, PLPMs, insurance, and previous flood experience. The new model was trained not only across flood pathways but also across regions of Germany, divided according to general socioeconomic factors and insurance policies, and across flood events. The distinction across regions and flood events did not improve loss modelling and led to a large overlap of regression coefficients, with no clear trend or pattern. The distinction of flood pathways showed credibly distinct regression coefficients, leading to a better understanding of flood loss modelling and indicating one potential reason why model transferability has been challenging.
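The partial-pooling idea can be illustrated with a minimal numerical sketch (an empirical-Bayes-style shrinkage estimator, not the thesis' actual Bayesian multilevel model; the data and parameter names are ours): group means, e.g. mean losses per flood pathway, are pulled toward the grand mean, more strongly for small or noisy groups.

```python
import numpy as np

# Partial-pooling sketch (illustrative shrinkage estimator, not the thesis'
# Bayesian multilevel model): each group mean is shrunk toward the grand
# mean with a weight that grows with the group's sample size.
def partial_pool(groups, tau2=1.0):
    """groups: list of 1-D arrays, e.g. losses per flood pathway.
    tau2: assumed between-group variance (a hyperparameter)."""
    grand = np.mean(np.concatenate(groups))
    pooled = []
    for g in groups:
        n = len(g)
        sigma2 = np.var(g, ddof=1) if n > 1 else 1.0  # within-group variance
        w = tau2 / (tau2 + sigma2 / n)  # weight on the group's own mean
        pooled.append(w * np.mean(g) + (1.0 - w) * grand)
    return pooled

big   = np.array([0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 0.9, 1.1])  # many samples
small = np.array([3.0, 5.0])                                 # few samples
est_big, est_small = partial_pool([big, small])
# est_big stays near 1.0; est_small is pulled from 4.0 toward the grand mean
```

This is the "groups learn from each other" effect described above: sparsely observed pathways borrow statistical strength from the rest of the data instead of relying on their own noisy mean.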
Finally, new model structures were trained to include the possibility of compound inland floods (i.e. when multiple flood pathways coincide on the same affected asset). The dataset does not allow for verifying in which sequence the flood pathway waves occurred and predictor variables reflect only their mixed or combined outcome. Thus, two Bayesian models were trained: 1. a multi-membership model, a structure which learns the regression coefficients for multiple flood pathways at the same time, and 2. a multilevel model wherein the combination of coinciding flood pathways makes individual categories. The multi-membership model resulted in credibly different coefficients across flood pathways but did not improve model performance in comparison to the model assuming only a single dominant flood pathway. The model with combined categories signals an increase in impacts after compound floods, but due to the uncertainty in model coefficients and estimates, it is not possible to ascertain such an increase as credible. That is, with the current level of uncertainty in differentiating the flood pathways, the loss estimates are not credibly distinct from individual flood pathways.
To overcome the challenges faced, non-linear or mixed models could be explored in the future. Interactions, moderation, and mediation effects, as well as non-linear effects, should also be further studied. Loss data collection should regularly include preparedness indicators, and either data collection or hydraulic modelling should focus on the distinction of coinciding flood pathways, which could inform loss models and further improve estimates. Flood pathways show distinct (financial) impacts, and their inclusion in loss modelling proves relevant, for it helps in clarifying the different contribution of influencing factors to the final loss, improving understanding of the damaging process, and indicating future lines of research.
Boolean Satisfiability (SAT) is one of the problems at the core of theoretical computer science. It was the first problem proven to be NP-complete by Cook and, independently, by Levin. Nowadays it is conjectured that SAT cannot be solved in sub-exponential time. Thus, it is generally assumed that SAT and its restricted version k-SAT are hard to solve. However, state-of-the-art SAT solvers can solve even huge practical instances of these problems in a reasonable amount of time.
Why is SAT hard in theory, but easy in practice? One approach to answering this question is investigating the average runtime of SAT. In order to analyze this average runtime, the random k-SAT model was introduced. The model generates all k-SAT instances with n variables and m clauses with uniform probability. Research on random k-SAT led to a multitude of insights and tools for analyzing random structures in general. One major observation was the emergence of the so-called satisfiability threshold: a phase transition point in the number of clauses at which the generated formulas go from asymptotically almost surely satisfiable to asymptotically almost surely unsatisfiable. Additionally, instances around the threshold seem to be particularly hard to solve.
In this thesis we analyze a more general model of random k-SAT that we call non-uniform random k-SAT. In contrast to the classical model, each of the n Boolean variables now has a distinct probability of being drawn. For each of the m clauses we draw k variables according to the variable distribution and choose their signs uniformly at random. Non-uniform random k-SAT gives us more control over the distribution of Boolean variables in the resulting formulas. This allows us to tailor distributions to the ones observed in practice. Notably, non-uniform random k-SAT contains the previously proposed models random k-SAT, power-law random k-SAT and geometric random k-SAT as special cases.
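A generator in the spirit of this model can be sketched as follows (a simplified illustration with our own parameter names; drawing k *distinct* variables by rejection is one of several possible sampling conventions):

```python
import random

# Sketch of a non-uniform random k-SAT generator (simplified illustration):
# n variables with a given probability distribution, m clauses of k distinct
# variables each, literal signs chosen uniformly at random.
def non_uniform_ksat(n, m, k, probs, seed=0):
    assert len(probs) == n and abs(sum(probs) - 1.0) < 1e-9
    rng = random.Random(seed)
    formula = []
    for _ in range(m):
        clause_vars = set()
        while len(clause_vars) < k:  # redraw until k distinct variables
            clause_vars.add(rng.choices(range(1, n + 1), weights=probs)[0])
        formula.append([v if rng.random() < 0.5 else -v
                        for v in sorted(clause_vars)])
    return formula

# Power-law variable weights, as in power-law random k-SAT (a special case):
n = 10
w = [1.0 / (i + 1) ** 2 for i in range(n)]
probs = [x / sum(w) for x in w]
formula = non_uniform_ksat(n, m=20, k=3, probs=probs)
# 20 clauses of 3 distinct variables; low-index variables are drawn more
# often in expectation, mimicking the skewed distributions seen in practice
```

With uniform `probs` this collapses to the classical random k-SAT model, which is how the generalization relates to the special cases listed above.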
We analyze the satisfiability threshold in non-uniform random k-SAT depending on the variable probability distribution. Our goal is to derive conditions on this distribution under which an equivalent of the satisfiability threshold conjecture holds. We start with the arguably simpler case of non-uniform random 2-SAT. For this model we show under which conditions a threshold exists, whether it is sharp or coarse, and what the leading constant of the threshold function is. These are exactly the three ingredients needed to prove or disprove the satisfiability threshold conjecture. For non-uniform random k-SAT with k ≥ 3, we only prove sufficient conditions under which a threshold exists. We also show some properties of the variable probabilities under which the threshold is sharp in this case. These are the first results on the threshold behavior of non-uniform random k-SAT.
The Andes are a ~7000 km long, N-S trending mountain range developed along the western continental margin of South America. Driven by the subduction of the oceanic Nazca plate beneath the continental South American plate, the formation of the northern and central parts of the orogen is a type case for non-collisional orogeny. In the southern Central Andes (SCA, 29°S-39°S), the oceanic plate changes its subduction angle between 33°S and 35°S from almost horizontal (< 5° dip) in the north to a steeper angle (~30° dip) in the south. This sector of the Andes also displays remarkable along- and across-strike variations in tectonic deformation patterns. These include a systematic decrease in topographic elevation, crustal shortening, and foreland and orogenic width, as well as an alternation of the foreland deformation style between thick-skinned and thin-skinned along and across the strike of the subduction zone. Moreover, the SCA are a seismically very active region. The continental plate is characterized by relatively shallow seismicity (< 30 km depth), which is mainly concentrated at the transition from the orogen to the lowland areas of the foreland and the forearc; in contrast, deeper seismicity occurs below the interior of the northern foreland. Additionally, frequent seismicity is recorded in the shallow parts of the oceanic plate and in a sector of the flat-slab segment between 31°S and 33°S. The observed spatial heterogeneity of tectonic and seismic deformation in the SCA has been attributed to multiple causes, including variations in sediment thickness, the presence of inherited structures and changes in the subduction angle of the oceanic slab. However, no study has yet investigated the relationship between the long-term rheological configuration of the SCA and the spatial deformation patterns.
Moreover, the effects of the density and thickness configuration of the continental plate and of variations in the slab dip angle on the rheological state of the lithosphere have not yet been thoroughly investigated. Since rheology depends on composition, pressure and temperature, a detailed characterization of the compositional, structural and thermal fields of the lithosphere is needed. Therefore, using multiple geophysical approaches and data sources, I constructed the following 3D models of the SCA lithosphere: (i) a seismically-constrained structural and density model that was tested against the gravity field; (ii) a thermal model integrating the conversion of mantle shear-wave velocities to temperature with steady-state conductive calculations in the uppermost lithosphere (< 50 km depth), validated against temperature and heat-flow measurements; and (iii) a rheological model of the long-term lithospheric strength using the previously-generated models as input.
The results of this dissertation indicate that the present-day thermal and rheological fields of the SCA are controlled by different mechanisms at different depths. At shallow depths (< 50 km), the thermomechanical field is modulated by the heterogeneous composition of the continental lithosphere. The overprint of the oceanic slab is detectable where the oceanic plate is shallow (< 85 km depth) and the radiogenic crust is thin, resulting in overall lower temperatures and higher strength compared to regions where the slab is steep and the radiogenic crust is thick. At depths > 50 km, the largest temperature variations occur where the descending slab is detected, which implies that the deep thermal field is mainly affected by the slab dip geometry.
The outcomes of this thesis suggest that the long-term thermomechanical state of the lithosphere influences the spatial distribution of seismic deformation. Most of the seismicity within the continental plate occurs above the modelled transition from brittle to ductile conditions. Additionally, there is a spatial correlation between the location of these events and the transition from the mechanically strong domains of the forearc and foreland to the weak domain of the orogen. In contrast, seismicity within the oceanic plate is also detected where long-term ductile conditions are expected. I therefore analysed the possible influence of additional mechanisms triggering these earthquakes, including the compaction of sediments at the subduction interface and dehydration reactions in the slab. To that end, I carried out a qualitative analysis of the state of hydration of the mantle using the ratio between compressional- and shear-wave velocity (vp/vs ratio) from a previous seismic tomography. The results of this analysis indicate that the majority of the seismicity spatially correlates with hydrated areas of the slab and the overlying continental mantle, with the exception of the cluster within the flat-slab segment. In this region, earthquakes are likely triggered by flexural processes where the slab changes from a flat to a steep subduction angle.
First-order variations in the observed tectonic patterns also seem to be influenced by the thermomechanical configuration of the lithosphere. The mechanically strong domains of the forearc and foreland, due to their resistance to deformation, display smaller amounts of shortening than the relatively weak orogenic domain. In addition, the structural and thermomechanical characteristics modelled in this dissertation confirm previous analyses from geodynamic models pointing to the control of the observed heterogeneities in the orogen and foreland deformation style. These characteristics include the lithospheric and crustal thickness, the presence of weak sediments and the variations in gravitational potential energy.
Specific conditions occur in the cold and strong northern foreland, which is characterized by active seismicity and thick-skinned structures, although the modelled crustal strength exceeds the typical values of externally-applied tectonic stresses. The additional mechanisms that could explain the strain localization in a region that should resist deformation are: (i) increased tectonic forces coming from the steepening of the slab and (ii) enhanced weakening along inherited structures from pre-Andean deformation events. Finally, the thermomechanical conditions of this sector of the foreland could be a key factor influencing the preservation of the flat subduction angle at these latitudes of the SCA.
With the development of unmanned ships, which are monitored only by personnel in shore-based control centres and otherwise operate largely autonomously, powered by electric motors and solar energy and equipped with self-learning navigation software, the international shipping industry hopes to cut transport costs by more than 20 %. This advancing technology will in future pose challenges in particular for international maritime law. Against this background, the work primarily examines the compatibility of such ships with the United Nations Convention on the Law of the Sea. First, a definition of "ship" for the treaty is developed and the applicability of the convention to autonomous ships is examined. The study then addresses problem areas such as the ships' compliance with duties, the need for special protective rights, above all with regard to coercive measures taken on board by coastal states, and the applicability of the existing piracy provisions to these ships. The work further raises the question of whether the community of states has a duty under the convention to promote unmanned ships, particularly with a view to marine environmental protection. Finally, it discusses the cyber-security measures required for this special type of ship. Overall, the analysis shows that the Convention on the Law of the Sea can, with manageable adjustments, be applied well to autonomous ships.
Localisation of deformation is a ubiquitous feature of continental rift dynamics and is observed across drastically different time and length scales. This thesis comprises one experimental and two numerical modelling studies investigating strain localisation (1) in a ductile shear zone induced by a material heterogeneity and (2) in an active continental rift setting. The studies are linked by the fact that weakening mechanisms on the crystallographic and grain-size scales enable bulk rock weakening, which fundamentally enables the formation of shear zones and continental rifts, and hence plate tectonics. To investigate the mechanisms controlling the initiation and evolution of a shear zone, the torsion experiments of the experimental study were conducted in a Paterson-type apparatus on strong Carrara marble cylinders containing a weak, planar Solnhofen limestone inclusion. Using state-of-the-art numerical modelling software, the torsion experiments were simulated to answer questions regarding the localisation process, such as the stress distribution or the impact of rheological weakening. 2D numerical models were also employed to integrate geophysical and geological data in order to explain the characteristic tectonic evolution of the Southern and Central Kenya Rift. Key elements of the numerical tools are a randomized initial strain distribution and the use of strain softening. During the torsion experiments, deformation begins to localise at the tips of the limestone inclusion in a process zone, which propagates into the marble matrix with increasing deformation until a ductile shear zone is established. Minor indicators of coexisting brittle deformation are found close to the inclusion tip and are presumed to slightly facilitate strain localisation alongside the dominant ductile deformation processes.
The 2D numerical model of the torsion experiment successfully predicts local stress concentration and strain-rate amplification ahead of the inclusion, in first-order agreement with the experimental results. A simple linear parametrization of strain weakening reproduces the phenomenological aspects of the observed weakening with high accuracy. The torsion experiments suggest that loading conditions do not affect strain localisation during high-temperature deformation of multiphase material with high viscosity contrasts. A numerical simulation provides a way of analysing the process-zone evolution virtually and extends the examinable frame. Furthermore, the nested structure and anastomosing shape of an ultramylonite band were mimicked with an additional second softening step. Rheological weakening is necessary both to establish a shear zone in a strong matrix around a weak inclusion and for ultramylonite formation.
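A linear strain-weakening parametrization of the kind mentioned above is commonly written as a strength factor that drops linearly from 1 to a residual value between two accumulated-strain thresholds. The following minimal Python sketch illustrates the idea; the onset strain, saturation strain and residual factor are hypothetical example values, not those of the study.

```python
import numpy as np

def linear_strain_weakening(strain, e0=0.5, e1=1.5, residual=0.2):
    """Strength factor: 1 below onset strain e0, linearly decreasing to
    the residual value at saturation strain e1, constant thereafter.
    (e0, e1 and residual are illustrative assumptions.)"""
    strain = np.asarray(strain, dtype=float)
    t = np.clip((strain - e0) / (e1 - e0), 0.0, 1.0)
    return 1.0 - t * (1.0 - residual)

# Factors at accumulated strains 0, 1 and 2: 1.0, 0.6 and 0.2.
print(linear_strain_weakening([0.0, 1.0, 2.0]))
```

In a geodynamic code, this factor would multiply a material parameter such as the friction coefficient or viscosity prefactor, so that highly strained cells progressively weaken and deformation localises.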
Such strain-weakening laws are also incorporated into the numerical models of the Southern and Central Kenya Rift, which capture its characteristic tectonic evolution. A three-stage early rift evolution is suggested that starts with (1) the accommodation of strain by a single border fault and flexure of the hanging-wall crust, after which (2) faulting in the hanging wall and the basin centre increases before (3) the early-stage asymmetry is lost and basinward localisation of deformation occurs. Along-strike variability of rifts can be produced by modifying the initial random noise distribution. In summary, the three studies address selected aspects of the broad range of mechanisms and processes that fundamentally enable the deformation of rock and govern localisation patterns across the scales. In addition to the aforementioned results, the first and second manuscripts combined demonstrate a procedure for finding new, or improving on existing, numerical formulations for specific rheologies and their dynamic weakening. Such formulations are essential for addressing rock deformation from the grain to the global scale. This is exemplified by the third study of this thesis, in which geodynamic controls on the evolution of a rift were examined through the integration of geological and geophysical data into a numerical model.
Text collections, such as corpora of books, research articles, news, or business documents, are an important resource for knowledge discovery. Exploring large document collections by hand is a cumbersome but necessary task to gain new insights and find relevant information. Our digitised society allows us to utilise algorithms to support the information-seeking process, for example with the help of retrieval or recommender systems. However, these systems only provide selective views of the data and require some prior knowledge to issue meaningful queries and assess a system's response. The advancements of machine learning allow us to reduce this gap and better assist the information-seeking process. For example, instead of sighting countless business documents by hand, journalists and investigators can employ natural language processing techniques, such as named entity recognition. Although this greatly improves the capabilities of a data exploration platform, the wealth of information is still overwhelming. An overview of the entirety of a dataset in the form of a two-dimensional, map-like visualisation may help to circumvent this issue. Such overviews enable novel interaction paradigms for users, which are similar to the exploration of digital geographical maps. In particular, they can provide valuable context by indicating how a piece of information fits into the bigger picture. This thesis proposes algorithms that appropriately pre-process heterogeneous documents and compute the layout for datasets of all kinds. Traditionally, given high-dimensional semantic representations of the data, so-called dimensionality reduction algorithms are used to compute a layout of the data on a two-dimensional canvas. In this thesis, we focus on text corpora and go beyond merely projecting the inherent semantic structure itself.
Therefore, we propose three dimensionality reduction approaches that incorporate additional information into the layout process: (1) a multi-objective dimensionality reduction algorithm to jointly visualise semantic information with inherent network information derived from the underlying data; (2) a comparison of initialisation strategies for different dimensionality reduction algorithms to generate a series of layouts for corpora that grow and evolve over time; and (3) an algorithm that updates existing layouts by incorporating user feedback provided through pointwise drag-and-drop edits. This thesis also presents system prototypes that demonstrate the proposed technologies, covering pre-processing, layout computation, and presentation in interactive user interfaces.
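One family of initialisation strategies for evolving corpora keeps already-seen documents at their previous 2D coordinates and seeds each new document near its closest neighbour in the high-dimensional space. The sketch below is an illustrative assumption of how such a strategy could look, not the thesis implementation; the resulting array could then be passed as the initial embedding (e.g. via an `init` argument) to a dimensionality reduction algorithm such as t-SNE or UMAP.

```python
import numpy as np

def init_growing_layout(X_old, layout_old, X_new):
    """Initial 2D layout for a grown corpus: old documents keep their
    previous coordinates; each new document starts at the 2D position of
    its nearest old neighbour in the high-dimensional space."""
    init_new = np.empty((len(X_new), 2))
    for i, x in enumerate(X_new):
        j = int(np.argmin(np.linalg.norm(X_old - x, axis=1)))
        init_new[i] = layout_old[j]  # inherit the neighbour's position
    return np.vstack([layout_old, init_new])

# Two old documents with known positions; one new document close to the
# second old document inherits its layout coordinates.
X_old = np.array([[0.0, 0.0], [10.0, 10.0]])
layout_old = np.array([[0.0, 0.0], [5.0, 5.0]])
print(init_growing_layout(X_old, layout_old, np.array([[9.0, 9.0]])))
```

Because the optimiser only refines this seed layout, documents that were already on the map tend to stay where the user last saw them, which is the stability property such a series of layouts is after.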
The development of speaking competence is widely regarded as a central aspect of second language (L2) learning. It may be questioned, however, whether the currently predominant ways of conceptualising the term fully capture the complexity of the construct: Although there is growing recognition that language primarily constitutes a tool for communication and participation in social life, as yet it is rare for conceptualisations of speaking competence to incorporate the ability to interact and co-construct meaning with co-participants. Accordingly, skills allowing for the successful accomplishment of interactional tasks (such as orderly speaker change and the resolution of hearing and understanding trouble) also remain largely unrepresented in language teaching and assessment. As fostering the ability to successfully use the L2 within social interaction should arguably be a main objective of language teaching, it appears pertinent to broaden the construct of speaking competence by incorporating interactional competence (IC). Despite growing research interest in the conceptualisation and development of (L2) IC, much of the materials and instruments required for its teaching and assessment, and thus for fostering a broader understanding of speaking competence in the L2 classroom, still await development. This book introduces an approach to the identification of candidate criterial features for the assessment of EFL learners' L2 repair skills. Based on a corpus of video-recorded interaction between EFL learners, and following conversation-analytic and interactional-linguistic methodology as well as drawing on basic premises of research in the framework of Conversation Analysis for Second Language Acquisition, differences between (groups of) learners in terms of their L2 repair conduct are investigated through qualitative and inductive analyses. Candidate criterial features are derived from the analysis results.
This book not only contributes to the operationalisation of L2 IC (and of L2 repair skills in particular), but also lays the groundwork for the construction of assessment scales and rubrics geared towards the evaluation of EFL learners' L2 interactional skills.
Antibodies are used in a wide variety of fields, for therapeutic as well as diagnostic and research purposes. Before an antibody is used, its properties must be characterized with respect to its epitope and its binding behaviour towards the paratope. At the same time, depending on the application, the antibody must be validated for the intended use. To this end, bead-based multiplex test systems were designed, tested and established in the present work, with the aim of developing a simple screening method capable of measuring a large number of samples or analytes simultaneously. Three different approaches were established for this purpose.
A phospho-PKA substrate antibody, which recognizes phosphorylated PKA binding motifs of the form RRxpS, was tested simultaneously against a series of peptides containing point mutations relative to the consensus sequence, in order to study the influence of individual amino acids on antibody binding. It could be shown in multiplex that differences in antibody binding behaviour depending on the amino acid at various P-positions were detectable. With the bead-based multiplex approach, binding kinetics were recorded by measuring concentration series of the antibody and compared with already established methods.
Furthermore, various antibodies that constitute essential components of bead-based test systems were validated. Several antibodies specifically recognizing THC and CBD were tested; subsequently, a competitive assay for the detection of THC and CBD in human serum was established and the detection limits were determined.
In addition, horse sera from animals suffering from summer eczema were to be assayed for their IgE content. For this purpose, relevant proteins were produced recombinantly and, immobilized on beads, incubated with serum in multiplex, in order to make the specific binding of IgE to the allergens measurable. For the overall validation of the test system, all individual steps were first validated separately before measuring in the multiplex screening.
The use of bead-based multiplex measurements as a platform technology facilitates the characterization of antibodies as well as their validation for different test systems.
Poly(vinylidene fluoride) (PVDF)-based homo-, co- and ter-polymers are well-known for their ferroelectric and relaxor-ferroelectric properties. Their semi-crystalline morphology consists of crystalline and amorphous phases, plus interface regions in between, and governs the relevant electro-active properties. In this work, the influence of chemical, thermal and mechanical treatments on the structure and morphology of PVDF-based polymers and on the related ferroelectric/relaxor-ferroelectric properties is investigated. Polymer films were prepared in different ways and subjected to various treatments such as annealing, quenching and stretching. The resulting changes in the transitions and relaxations of the polymer samples were studied by means of dielectric, thermal, mechanical and optical techniques. In particular, the origins of the still poorly understood mid-temperature transition (T_{mid}) that is observed in all PVDF-based polymers were assessed. A new hypothesis is proposed that describes the T_{mid} transition as the result of multiple processes taking place within the temperature range of the transition. The contribution of the individual processes to the observed overall transition depends on both the chemical structure of the monomer units and the processing conditions, which also affect the melting transition. Quenching results in a decrease of the overall crystallinity and in smaller crystallites. On samples quenched after annealing, notable differences in the fractions of the different crystalline phases have been observed when compared to samples that had been slowly cooled. Stretching of poly(vinylidene fluoride-tetrafluoroethylene) (P(VDF-TFE)) films causes an increase in the fraction of the ferroelectric β-phase with simultaneous increases in the melting point (T_m) and the crystallinity (\chi_c) of the copolymer.
While an increase in the stretching temperature does not have a profound effect on the amount of the ferroelectric phase, its stability appears to improve.
Measurements of the non-linear dielectric permittivity \varepsilon_2^\prime in a poly(vinylidene fluoride-trifluoroethylene-chlorofluoroethylene) (P(VDF-TrFE-CFE)) relaxor-ferroelectric (R-F) terpolymer reveal peaks at 30 and 80 °C that cannot be identified in conventional dielectric spectroscopy. The former peak is associated with T_{mid} and may help to understand the non-zero \varepsilon_2^\prime values that are found for the paraelectric terpolymer phase. The latter peak can also be observed at 100 °C during cooling of P(VDF-TrFE) copolymer samples and is due to conduction processes and space-charge polarization as a result of the accumulation of real charges at the electrode-sample interface. Annealing lowers the Curie-transition temperature of the terpolymer as a consequence of its smaller ferroelectric-phase fraction, which by default exists even in terpolymers with relatively high CFE content. Changes in the transition temperatures are in turn related to the behavior of the hysteresis curves observed on differently heat-treated samples. Upon heating, the hysteresis curves evolve from those known for a ferroelectric to those of a typical relaxor-ferroelectric material. Comparing dielectric-hysteresis loops obtained at various temperatures, we find that annealed terpolymer films show higher electric-displacement values and lower coercive fields than the non-annealed samples, irrespective of the measurement temperature, and also exhibit ideal relaxor-ferroelectric behavior at ambient temperatures, which makes them excellent candidates for related applications at or near room temperature. However, non-annealed films, by virtue of their higher ferroelectric activity, show a larger and more stable remanent polarization at room temperature, while annealed samples need to be poled below 0 °C to induce a well-defined polarization.
Overall, by modifying the three phases in PVDF-based polymers, it has been demonstrated how the preparation steps and processing conditions can be tailored to achieve the desired properties that are optimal for specific applications.
Primary-school educators take part in specific movement-oriented continuing-education programmes. Numerous studies in the context of further and continuing education describe the conditions under which participation has beneficial or detrimental effects. Didactic-conceptual considerations frequently discuss how external circumstances, for instance temporal, spatial or content-related dimensions, should be designed so that educational offerings achieve particular effects within the school system. Under which conditions, in other words, is specific content best conveyed to teachers so that (system-)relevant knowledge can be transferred into the school system through them?
This research project does not foreground a discourse on conditions as a basis for discussing effective teaching strategies for educational offerings. Instead, it centres on the question of educators' own reasons for participating and learning, and how they position themselves in relation to their continuing education. This approach shifts the perspective on the topic and allows an engagement with subjects within the framework of a discourse on reasons. In an empirical-qualitative study, narrative interviews were conducted with eleven graduates of a movement-oriented continuing-education programme; the data were analysed using the documentary method. The results of the reconstruction are presented in the form of two case descriptions and four typical figures of reasoning developed in the study: the figure of learning, the figure of knowledge management, the figure of curious searching, and the figure of physical activity. Besides the reconstruction of patterns of reasons for participation and learning, it becomes clear that participating and learning do not follow different logics of access with regard to meaning-reason relationships. Rather, both expansive and defensive reasons for learning can be identified within the reasons for participation.
The post-antiretroviral-therapy era has transformed HIV into a chronic disease, and non-HIV comorbidities (i.e., cardiovascular and mental diseases) are more prevalent in people living with HIV (PLWH). The sources of these non-HIV comorbidities, aside from traditional risk factors, include HIV infection, inflammation, distorted immune activation, the burden of chronic diseases, and unhealthy lifestyle factors such as sedentarism. Exercise is known for its beneficial effects on mental and physical health, which is why it is recommended to prevent and treat different cardiovascular and mental diseases in the general population. This cumulative thesis aimed to understand the relation of exercise to non-HIV comorbidities in German PLWH. Four studies were conducted to 1) understand the effects of exercise on cardiorespiratory fitness and muscle strength in PLWH through a systematic review and meta-analyses and 2) determine the likelihood of German PLWH developing non-HIV comorbidities in a cross-sectional study. The meta-analytic examination indicates that the cardiorespiratory fitness (VO2max SMD = 0.61 ml·kg⁻¹·min⁻¹, 95% CI: 0.35-0.88, z = 4.47, p < 0.001, I² = 50%) and strength (notably lower-body strength, by 16.8 kg, 95% CI: 13-20.6, p < 0.001) of PLWH improve after an exercise intervention in comparison to a control group. Cross-sectional data suggest that exercise has a positive effect on the mental health of German PLWH (fewer anxiety and depressive symptoms) and protects against the development of anxiety (PR: 0.57, 95% CI: 0.36-0.91, p = 0.01) and depression (PR: 0.62, 95% CI: 0.41-0.94, p = 0.01). Likewise, exercise duration is related to a lower likelihood of reporting heart arrhythmias (PR: 0.20, 95% CI: 0.10-0.60, p < 0.01) and exercise frequency to a lower likelihood of reporting diabetes mellitus (PR: 0.40, 95% CI: 0.10-1, p < 0.01) in German PLWH.
A preliminary recommendation for German PLWH who want to engage in exercise is to exercise ≥ 1 time per week, at an intensity of 5 METs per session or > 103 MET·min·day⁻¹, with a duration of ≥ 150 minutes per week. Nevertheless, further research is needed to understand the dose-response and protective effects of exercise for cardiovascular diseases, anxiety, and depression in German PLWH.
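A prevalence ratio (PR), as reported above, compares the prevalence of an outcome between exposed and unexposed groups. The sketch below shows a minimal crude PR calculation from a 2x2 table with a Wald-type 95% CI on the log scale; the study's estimates would typically come from adjusted regression models, and the counts used here are made-up illustration values, not the study's data.

```python
import math

def prevalence_ratio(a, n1, c, n0):
    """Crude PR of an outcome in exposed (a cases of n1) versus
    unexposed (c cases of n0), with a Wald 95% CI on the log scale."""
    pr = (a / n1) / (c / n0)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)  # SE of log(PR)
    lo = math.exp(math.log(pr) - 1.96 * se)
    hi = math.exp(math.log(pr) + 1.96 * se)
    return pr, lo, hi

# Illustrative table: 20/100 exposed vs 40/100 unexposed report the
# outcome, giving PR = 0.5 with a CI of roughly (0.32, 0.79).
print(prevalence_ratio(20, 100, 40, 100))
```

A PR below 1 with a CI excluding 1, as for anxiety and depression above, indicates a lower prevalence of the outcome among the exposed (here, exercising) group.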
The doctoral thesis presented provides a comprehensive view of laser-based ablation techniques extended to new fields of operation, including, but not limited to, size, composition, and concentration analyses. It covers various applications of laser ablation techniques over a wide range of sizes, from single molecules all the way to aerosol particles. The research for this thesis started with broadening and deepening the field of application and the fundamental understanding of liquid-phase IR-MALDI. Here, the hybridization of ion mobility spectrometry and microfluidics was realized for the first time by using IR-MALDI as the coupling technique. The setup was used for monitoring the photocatalytic performance of the E-Z isomerization of olefins. Using this hybrid, measurement times were so drastically reduced that such photocatalyst screenings became a matter of minutes rather than hours. With this at hand, triplicate screening measurements could be performed not only within ten minutes, but also with a minimum amount of resources, highlighting the method's potential as a green-chemistry alternative to batch-sized reactions. Along with the optimization of the IR-MALDI source for microfluidics came its application to another liquid sample supply method, the hanging drop. This marked one of the first applications of IR-MALDI for charging sub-micron particles directly from suspensions via their gas-phase transfer, followed by their characterization with differential mobility analysis. Given the high spectral quality of the data, up to octuply charged particles became experimentally accessible, which laid the foundation for deriving a new charge-distribution model for IR-MALDI in that size regime. Moving on to even larger analyte sizes, LIBS and LII were employed as ablation techniques for the solid phase, namely the aerosol particles themselves. Both techniques produce light-emitting events and were used to quantify and classify different aerosols.
The unique combination of stroboscopic imaging, photoacoustics, LII, and LIBS measurements opened new realms for analytical synergies and their potential application in industry. The concept of using low fluences, below 100 J/cm², and high repetition rates of up to 500 Hz makes for an excellent phase-selective LIBS setup. This concept was combined with a new approach to the photoacoustic normalization of LIBS. It was also possible to acquire statistically relevant amounts of data in a matter of seconds, demonstrating the method's potential as a real-time optimization technique. On the same time axis, but at much lower fluences, LII was used with a similar methodology to quickly quantify and classify airborne particles of different compositions. For the first time, aerosol particles were evaluated for their LII susceptibility using a fluence-screening approach.