Life on Earth is diverse and ranges from unicellular organisms to multicellular creatures like humans. Although there are theories about how these organisms might have evolved, we understand little about how ‘life’ started from molecules. Bottom-up synthetic biology aims to create minimal cells by combining different modules, such as compartmentalization, growth, division, and cellular communication.
All living cells have a membrane that separates them from the surrounding aqueous medium and helps to protect them. In addition, all eukaryotic cells have organelles that are enclosed by intracellular membranes. Each cellular membrane is primarily made of a lipid bilayer with membrane proteins. Lipids are amphiphilic molecules that assemble into molecular bilayers consisting of two leaflets. The hydrophobic chains of the lipids in the two leaflets face each other, and their hydrophilic headgroups face the aqueous surroundings. Giant unilamellar vesicles (GUVs) are model membrane systems that form large compartments, many micrometers in size, enclosed by a single lipid bilayer. The size of GUVs is comparable to that of cells, making them good membrane models that can be studied using an optical microscope. However, after the initial preparation, GUV membranes lack membrane proteins, which have to be reconstituted into these membranes by subsequent preparation steps. Depending on the protein, it can either be attached via anchor lipids to one of the membrane leaflets or inserted into the lipid bilayer via its transmembrane domains.
The first step is to prepare the GUVs, which are then exposed to an exterior solution containing proteins. Various protocols have been developed for the initial preparation of GUVs. For the second step, the GUVs can be exposed to a bulk solution of protein or can be trapped in a microfluidic device and then supplied with the protein solution. To minimize the amount of solution and allow more precise measurements, I have designed a microfluidic device that has a main channel and several dead-end side channels perpendicular to the main channel. The GUVs are trapped in the dead-end channels. This design exchanges the solution around the GUVs via diffusion from the main channel, thus shielding the GUVs from the flow within the main channel. The device has a small volume of just 2.5 μL, can be used without a pump, and can be combined with a confocal microscope, enabling uninterrupted imaging of the GUVs during the experiments. I used this device for most of the experiments on GUVs that are discussed in this thesis.
In the first project of the thesis, a lipid mixture doped with an anchor lipid was used that can bind to a histidine chain (referred to as His-tag(ged) or 6H) via the metal cation Ni2+. This method is widely used for the biofunctionalization of GUVs by attaching proteins without a transmembrane domain. Fluorescently labeled His-tags that are bound to a membrane can be observed with a confocal microscope. Using the same lipid mixture, I prepared the GUVs with different protocols and investigated the membrane composition of the resulting GUVs by evaluating the amount of fluorescently labeled His-tagged molecules bound to their membranes. I used the microfluidic device described above to expose the outer leaflet of the vesicle to a constant concentration of the His-tagged molecules. Two fluorescent molecules with a His-tag were studied and compared: green fluorescent protein (6H-GFP) and fluorescein isothiocyanate (6H-FITC). Although the quantum yield in solution is similar for both molecules, the brightness of the membrane-bound 6H-GFP is higher than the brightness of the membrane-bound 6H-FITC. The observed difference in the brightness reveals that the fluorescence of the 6H-FITC is quenched by the anchor lipid via the Ni2+ ion. Furthermore, my measurements also showed that the fluorescence intensity of the membrane-bound His-tagged molecules depends on microenvironmental factors such as pH. For both 6H-GFP and 6H-FITC, the interaction with the membrane is quantified by evaluating the equilibrium dissociation constant. The membrane fluorescence is measured as a function of the fluorophores’ molar concentration. Theoretical analysis of these data leads to the equilibrium dissociation constants of (37.5 ± 7.5) nM for 6H-GFP and (18.5 ± 3.7) nM for 6H-FITC.
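Dissociation constants of this kind are commonly obtained by fitting a one-site (Langmuir-type) binding isotherm, F(c) = Fmax · c / (Kd + c), to the measured membrane fluorescence. The sketch below illustrates such a fit with SciPy on synthetic data; it is a simplified stand-in for the theoretical analysis used in the thesis, and all data values and names are illustrative assumptions.

```python
# Minimal sketch: fit a one-site binding isotherm F(c) = Fmax * c / (Kd + c)
# to membrane-fluorescence data. Data values are synthetic, not from the thesis.
import numpy as np
from scipy.optimize import curve_fit

def isotherm(c, fmax, kd):
    """One-site (Langmuir-type) binding: membrane fluorescence vs. bulk concentration."""
    return fmax * c / (kd + c)

# Hypothetical measurements: concentration in nM, membrane fluorescence in a.u.
c_nM = np.array([2, 5, 10, 20, 40, 80, 160, 320], dtype=float)
f_au = np.array([5.0, 11.2, 19.6, 31.0, 42.5, 51.8, 57.9, 61.0])

popt, pcov = curve_fit(isotherm, c_nM, f_au, p0=(60.0, 30.0))
fmax_fit, kd_fit = popt
kd_err = np.sqrt(np.diag(pcov))[1]
print(f"Kd = {kd_fit:.1f} +/- {kd_err:.1f} nM, Fmax = {fmax_fit:.1f} a.u.")
```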
The anchor lipid mentioned previously used the metal cation Ni2+ to mediate the bond between the anchor lipid and the His-tag. The Ni2+ ion can be replaced by other transition metal ions. Studies have shown that Co3+ forms the strongest bonds with the His-tags attached to proteins. In these studies, strong oxidizing agents were used to oxidize the Co2+-mediated complex with the His-tagged protein to a Co3+-mediated complex. This procedure puts the proteins at risk of being oxidized as well. In this thesis, the vesicles were first prepared with anchor lipids without any metal cation. Co3+ was then added to these anchor lipids, and finally the His-tagged protein was added to the GUVs to form the Co3+-mediated bond. This system was also established using the microfluidic device.
The different preparation procedures of GUVs usually lead to vesicles with a spherical morphology. On the other hand, many cell organelles have a more complex architecture with a non-spherical topology. One fascinating example is provided by the endoplasmic reticulum (ER), which consists of a continuous membrane and extends throughout the cell in the form of tubes and sheets. The tubes are connected by three-way junctions and form a tubular network of irregular polygons. The formation and maintenance of these reticular networks requires membrane proteins that hydrolyze guanosine triphosphate (GTP). One of these membrane proteins is atlastin. In this thesis, I reconstituted the atlastin protein in GUV membranes using detergent-assisted reconstitution protocols to insert the proteins directly into lipid bilayers.
This thesis focuses on protein reconstitution by binding His-tagged proteins to anchor lipids and by detergent-assisted insertion of proteins with transmembrane domains. It also provides the design of a microfluidic device that can be used in various experiments; one example is the evaluation of the equilibrium dissociation constant for membrane-protein interactions. The results of this thesis will help other researchers to understand the protocols for preparing GUVs, to reconstitute proteins in GUVs, and to perform experiments using the microfluidic device. This knowledge should be beneficial for the long-term goal of combining the different modules of synthetic biology to make a minimal cell.
Housing in metabolic cages can induce a pronounced stress response. Metabolic cage systems involve housing mice on metal wire mesh for the collection of urine and feces in addition to monitoring food and water intake. Moreover, mice are single-housed, and no nesting, bedding, or enrichment material is provided, which is often argued to have a non-negligible impact on animal welfare due to cold stress. We therefore attempted to reduce stress during metabolic cage housing for mice by comparing an innovative metabolic cage (IMC) with a commercially available metabolic cage from Tecniplast GmbH (TMC) and a control cage. Substantial refinement measures were incorporated into the IMC cage design. Within the framework of a multifactorial approach to severity assessment, parameters such as body weight, body composition, food intake, cage and body surface temperature (thermal imaging), mRNA expression of uncoupling protein 1 (Ucp1) in brown adipose tissue (BAT), fur score, and fecal corticosterone metabolites (CMs) were included. Female and male C57BL/6J mice were single-housed for 24 h in either conventional Macrolon cages (control), IMC, or TMC for two sessions. Body weight decreased less in the IMC (females—1st restraint: 6.94%; 2nd restraint: 6.89%; males—1st restraint: 8.08%; 2nd restraint: 5.82%) than in the TMC (females—1st restraint: 13.2%; 2nd restraint: 15.0%; males—1st restraint: 13.1%; 2nd restraint: 14.9%), and the IMC had a higher cage temperature (females—1st restraint: 23.7 °C; 2nd restraint: 23.5 °C; males—1st restraint: 23.3 °C; 2nd restraint: 23.5 °C) than the TMC (females—1st restraint: 22.4 °C; 2nd restraint: 22.5 °C; males—1st restraint: 22.6 °C; 2nd restraint: 22.4 °C). The concentration of fecal corticosterone metabolites in the TMC (females—1st restraint: 1376 ng/g dry weight (DW); 2nd restraint: 2098 ng/g DW; males—1st restraint: 1030 ng/g DW; 2nd restraint: 1163 ng/g DW) was higher than in control cage housing (females—1st restraint:
640 ng/g DW; 2nd restraint: 941 ng/g DW; males—1st restraint: 504 ng/g DW; 2nd restraint: 537 ng/g DW). Our results show the stress potential induced by metabolic cage restraint, which is markedly influenced by the lower housing temperature. The IMC represents a first attempt to reduce cold stress during metabolic cage application, thereby producing more animal-welfare-friendly data.
Sulfur is essential for the functionality of several important biomolecules in humans, such as iron-sulfur clusters, tRNAs, the molybdenum cofactor, and some vitamins. The trafficking of sulfur involves proteins collectively called sulfurtransferases, among them TUM1, MOCS3, and NFS1.
This research investigated the role of TUM1 in molybdenum cofactor (Moco) biosynthesis and cytosolic tRNA thiolation in humans. The rhodanese-like protein MOCS3 and the L-cysteine desulfurase NFS1 have previously been demonstrated to interact with TUM1. These interactions suggested a dual function of TUM1 in sulfur transfer for Moco biosynthesis and cytosolic tRNA thiolation. TUM1 deficiency has been implicated in a rare inheritable disorder known as mercaptolactate cysteine disulfiduria (MCDU), which is associated with mental disorders whose symptoms resemble those of sulfite oxidase deficiency, a condition characterised by neurological disorders. Therefore, the role of TUM1 as a sulfurtransferase in humans was investigated in CRISPR/Cas9-generated TUM1 knockout HEK 293T cell lines.
For the first time, TUM1 was implicated in Moco biosynthesis in humans by quantifying the intermediate product cPMP and Moco using HPLC. Comparing the TUM1 knockout cell lines to the wild type, an accumulation of cPMP and a reduction of Moco were observed. The effect of the TUM1 knockout on the activity of a Moco-dependent enzyme, sulfite oxidase, was also investigated. Sulfite oxidase is essential for the detoxification of sulfite to sulfate. Sulfite oxidase activity and protein abundance were reduced due to the lower availability of Moco. This shows that TUM1 is essential for efficient sulfur transfer for Moco biosynthesis. A reduction in cystathionine γ-lyase in TUM1 knockout cells was also quantified, a possible coping mechanism of the cell against sulfite production through cysteine catabolism.
Secondly, the involvement of TUM1 in tRNA thio-modification at the wobble uridine (U34) was shown by quantifying the amounts of mcm5s2U and mcm5U via HPLC. The nucleoside analysis revealed a reduction of mcm5s2U and an accumulation of mcm5U in TUM1 knockout cells. Herein, exogenous treatment with NaHS, a hydrogen sulfide donor, rescued the Moco biosynthesis, cytosolic tRNA thiolation, and cell proliferation deficits in TUM1 knockout cells.
Further, TUM1 was shown to impact mitochondrial bioenergetics, as assessed by measuring the oxygen consumption rate and the extracellular acidification rate (ECAR) with the Seahorse Cell Mito Stress analyzer. A reduction in total ATP production was also measured. This reveals how important TUM1 is for H2S biosynthesis in the mitochondria of HEK 293T cells.
Finally, the inhibition of NFS1 by 2-methylene-3-quinuclidinone (MQ) was demonstrated in HEK 293T cells and with purified NFS1 protein via spectrophotometric and radioactivity-based quantification. Inhibition of NFS1 by MQ further affected the activity of the iron-sulfur cluster-dependent enzyme aconitase.
Due to anthropogenic greenhouse gas emissions, Earth’s average surface temperature is steadily increasing. As a consequence, many weather extremes are likely to become more frequent and intense. This poses a threat to natural and human systems, with local impacts capable of destroying exposed assets and infrastructure, and disrupting economic and societal activity. Yet, these effects are not locally confined to the directly affected regions, as they can trigger indirect economic repercussions through loss propagation along supply chains. As a result, local extremes yield a potentially global economic response. To build economic resilience and design effective adaptation measures that mitigate adverse socio-economic impacts of ongoing climate change, it is crucial to gain a comprehensive understanding of indirect impacts and the underlying economic mechanisms.
Presenting six articles in this thesis, I contribute towards this understanding. To this end, I expand on local impacts under current and future climate, the resulting global economic response, as well as the methods and tools to analyze this response.
Starting with a traditional assessment of weather extremes under climate change, the first article investigates extreme snowfall in the Northern Hemisphere until the end of the century. Analyzing an ensemble of global climate model projections reveals an increase of the most extreme snowfall, while mean snowfall decreases.
Assessing repercussions beyond local impacts, I employ numerical simulations to compute indirect economic effects from weather extremes with the numerical agent-based shock propagation model Acclimate. This model is used in conjunction with the recently emerged storyline framework, which involves analyzing the impacts of a particular reference extreme event and comparing them to impacts in plausible counterfactual scenarios under various climate or socio-economic conditions. Using this approach, I introduce three primary storylines that shed light on the complex mechanisms underlying economic loss propagation.
In the second and third articles of this thesis, I analyze storylines for the historical Hurricanes Sandy (2012) and Harvey (2017) in the USA. For this, I first estimate local economic output losses and then simulate the resulting global economic response with Acclimate. The storyline for Hurricane Sandy thereby focuses on global consumption price anomalies and the resulting changes in consumption. I find that the local economic disruption leads to a global wave-like economic price ripple, with upstream effects propagating in the supplier direction and downstream effects in the buyer direction. Initially, an upstream demand reduction causes consumption price decreases, followed by a downstream supply shortage and increasing prices, before the anomalies decay in a normalization phase. A dominant upstream or downstream effect leads to net consumption gains or losses of a region, respectively. Moreover, I demonstrate that a longer direct economic shock intensifies the downstream effect for many regions, leading to an overall consumption loss.
The third article of my thesis builds upon the developed loss estimation method by incorporating projections to future global warming levels. I use these projections to explore how the global production response to Hurricane Harvey would change under further increased global warming. The results show that, while the USA is able to nationally offset direct losses in the reference configuration, other countries have to compensate for increasing shares of counterfactual future losses. This compensation is mainly achieved by large exporting countries, but gradually shifts towards smaller regions. These findings not only highlight the economy’s ability to flexibly mitigate disaster losses to a certain extent, but also reveal the vulnerability and economic disadvantage of regions that are exposed to extreme weather events.
The storyline in the fourth article of my thesis investigates the interaction between global economic stress and the propagation of losses from weather extremes. I examine indirect impacts of weather extremes — tropical cyclones, heat stress, and river floods — worldwide under two different economic conditions: an unstressed economy and a globally stressed economy, as seen during the Covid-19 pandemic. I demonstrate that the adverse effects of weather extremes on global consumption are strongly amplified when the economy is under stress. Specifically, consumption losses in the USA and China double and triple, respectively, due to the global economy’s decreased capacity for disaster loss compensation. An aggravated scarcity intensifies the price response, causing consumption losses to increase.
Advancing on the methods and tools used here, the final two articles in my thesis extend the agent-based model Acclimate and formalize the storyline approach. With the model extension described in the fifth article, regional consumers make rational choices on the goods bought such that their utility is maximized under a constrained budget. In an out-of-equilibrium economy, these rational consumers are shown to temporarily increase consumption of certain goods in spite of rising prices.
The sixth article of my thesis proposes a formalization of the storyline framework, drawing on multiple studies including storylines presented in this thesis. The proposed guideline defines eight central elements that can be used to construct a storyline.
Overall, this thesis contributes towards a better understanding of economic repercussions of weather extremes. It achieves this by providing assessments of local direct impacts, highlighting mechanisms and impacts of loss propagation, and advancing on methods and tools used.
Magmatic-hydrothermal systems form a variety of ore deposits at different proximities to upper-crustal hydrous magma chambers, ranging from greisenization in the roof zone of the intrusion, through porphyry mineralization at intermediate depths, to epithermal vein deposits near the surface. The physical transport processes and chemical precipitation mechanisms vary between deposit types and are often still debated.
The majority of magmatic-hydrothermal ore deposits are located along the Pacific Ring of Fire, whose eastern part is characterized by the Mesozoic to Cenozoic orogenic belts of western North and South America, namely the American Cordillera. Major magmatic-hydrothermal ore deposits along the American Cordillera include (i) porphyry Cu(-Mo-Au) deposits (along the western cordilleras of Mexico, the western U.S., Canada, Chile, Peru, and Argentina); (ii) Climax- (and sub-) type Mo deposits (Colorado Mineral Belt and northern New Mexico); and (iii) porphyry and IS-type epithermal Sn(-W-Ag) deposits of the Central Andean Tin Belt (Bolivia, Peru and northern Argentina).
The individual studies presented in this thesis primarily focus on the formation of different styles of mineralization located at different proximities to the intrusion in magmatic-hydrothermal systems along the American Cordillera. This includes (i) two individual geochemical studies on the Sweet Home Mine in the Colorado Mineral Belt (a potential endmember of peripheral Climax-type mineralization); (ii) one numerical modeling study set up in a generic porphyry Cu environment; and (iii) a numerical modeling study on the Central Andean Tin Belt-type Pirquitas Mine in NW Argentina.
Microthermometric data of fluid inclusions trapped in greisen quartz and fluorite from the Sweet Home Mine (Detroit City Portal) suggest that the early-stage mineralization precipitated from low- to medium-salinity (1.5-11.5 wt.% equiv. NaCl), CO2-bearing fluids at temperatures between 360 and 415°C and at depths of at least 3.5 km. Stable isotope and noble gas isotope data indicate that greisen formation and base metal mineralization at the Sweet Home Mine was related to fluids of different origins. Early magmatic fluids were the principal source for mantle-derived volatiles (CO2, H2S/SO2, noble gases), which subsequently mixed with significant amounts of heated meteoric water. Mixing of magmatic fluids with meteoric water is constrained by δ2Hw-δ18Ow relationships of fluid inclusions. The deep hydrothermal mineralization at the Sweet Home Mine shows features similar to deep hydrothermal vein mineralization at Climax-type Mo deposits or on their periphery. This suggests that fluid migration and the deposition of ore and gangue minerals in the Sweet Home Mine was triggered by a deep-seated magmatic intrusion.
The second study on the Sweet Home Mine presents Re-Os molybdenite ages of 65.86±0.30 Ma from a Mo-mineralized major normal fault, namely the Contact Structure, and multimineral Rb-Sr isochron ages of 26.26±0.38 Ma and 25.3±3.0 Ma from gangue minerals in greisen assemblages. The age data imply that mineralization at the Sweet Home Mine formed in two separate events: Late Cretaceous (Laramide-related) and Oligocene (Rio Grande Rift-related). Thus, the age of Mo mineralization at the Sweet Home Mine clearly predates that of the Oligocene Climax-type deposits elsewhere in the Colorado Mineral Belt. The Re-Os and Rb-Sr ages also constrain the age of the latest deformation along the Contact Structure to between 62.77±0.50 Ma and 26.26±0.38 Ma; this structure was exploited and/or crosscut by Late Cretaceous and Oligocene fluids. Along the Contact Structure, Late Cretaceous molybdenite is spatially associated with Oligocene minerals in the same vein system, a feature that precludes molybdenite recrystallization or reprecipitation by Oligocene ore fluids.
Ore precipitation in porphyry copper systems is generally characterized by metal zoning (Cu-Mo to Zn-Pb-Ag), which is suggested to be variably related to solubility decreases during fluid cooling, fluid-rock interactions, partitioning during fluid phase separation, and mixing with external fluids. The numerical modeling study, set up in a generic porphyry Cu environment, presents new advances to a numerical process model by considering published constraints on the temperature- and salinity-dependent solubility of Cu, Pb and Zn in the ore fluid. This study investigates the roles of vapor-brine separation, halite saturation, initial metal contents, fluid mixing, and remobilization as first-order controls of the physical hydrology on ore formation. The results show that the magmatic vapor and brine phases ascend with different residence times but as miscible fluid mixtures, with salinity increases generating metal-undersaturated bulk fluids. The release rates of magmatic fluids affect the location of the thermohaline fronts, leading to contrasting mechanisms for ore precipitation: higher rates result in halite saturation without significant metal zoning, whereas lower rates produce zoned ore shells due to mixing with meteoric water. Varying metal contents can affect the order of the final metal precipitation sequence. Redissolution of precipitated metals results in zoned ore-shell patterns in more peripheral locations and also decouples halite saturation from ore precipitation.
The epithermal Pirquitas Sn-Ag-Pb-Zn mine in NW Argentina is hosted in a domain of metamorphosed sediments without geological evidence for volcanic activity within a distance of about 10 km from the deposit. However, recent geochemical studies of ore-stage fluid inclusions indicate a significant contribution of magmatic volatiles. This study tested different formation models by applying an existing numerical process model for porphyry-epithermal systems with a magmatic intrusion located either underneath the nearest active volcano, at a distance of about 10 km from the deposit, or hidden underneath the deposit itself. The results show that migration of the ore fluid over a 10-km distance results in metal precipitation by cooling before the deposit site is reached. In contrast, simulations with a hidden magmatic intrusion beneath the Pirquitas deposit are in line with field observations, which include mineralized hydrothermal breccias in the deposit area.
Hybrid nanomaterials offer the combination of the individual properties of different types of nanoparticles. Some strategies for the development of new nanostructures at larger scale rely on the self-assembly of nanoparticles as a bottom-up approach. The use of templates provides ordered assemblies in defined patterns. In a typical soft-template approach, nanoparticles and other surface-active agents are incorporated into non-miscible liquids. The resulting self-organized dispersions mediate nanoparticle interactions to control the subsequent self-assembly. In particular, interactions between nanoparticles of very different dispersibility and functionality can be directed at a liquid-liquid interface.
In this project, water-in-oil microemulsions were formulated from quasi-ternary mixtures with Aerosol-OT as surfactant. Oleyl-capped superparamagnetic iron oxide and/or silver nanoparticles were incorporated in the continuous organic phase, while polyethyleneimine-stabilized gold nanoparticles were confined in the dispersed water droplets. Each type of nanoparticle can modulate the surfactant film and the inter-droplet interactions in diverse ways, and their combination causes synergistic effects. Interfacial assemblies of nanoparticles resulted after phase separation. On the one hand, from a biphasic Winsor type II system at low surfactant concentration, drop-casting of the upper phase afforded thin films of ordered nanoparticles in filament-like networks. Detailed characterization proved that this templated assembly over a surface is based on the controlled clustering of nanoparticles and the elongation of the microemulsion droplets. This process offers the versatility to use different nanoparticle compositions with the same surface functionalization, in different solvents and over different surfaces. On the other hand, a magnetic heterocoagulate was formed at higher surfactant concentration, whose phase transfer from oleic acid to water was possible with another auxiliary surfactant in an ethanol-water mixture. When the original components were initially mixed under heating, defined oil-in-water, magnetically responsive nanostructures were obtained, consisting of water-dispersible nanoparticle domains embedded in a matrix-shell of oil-dispersible nanoparticles.
Herein, two different approaches were demonstrated to form diverse hybrid nanostructures from reverse microemulsions as self-organized dispersions of the same components. This shows that microemulsions are versatile soft templates not only for the synthesis of nanoparticles, but also for their self-assembly, which suggests new approaches towards the production of new sophisticated nanomaterials at larger scale.
Volcanoes are among the Earth’s most dynamic zones and are responsible for many changes on our planet. Volcano seismology aims to provide an understanding of the physical processes in volcanic systems and to anticipate the style and timing of eruptions by analyzing the seismic records. Volcanic tremor signals are usually observed in the seismic records before or during volcanic eruptions. Their analysis contributes to evaluating the evolving volcanic activity and potentially predicting eruptions. Years of continuous seismic monitoring now provide useful information for operational eruption forecasting. The continuously growing amount of seismic recordings, however, poses a challenge for analysis, information extraction, and interpretation to support timely decision making during volcanic crises. Furthermore, the complexity of eruption processes and precursory activities makes the analysis challenging.
A challenge in studying seismic signals of volcanic origin is the coexistence of transient signal swarms and long-lasting volcanic tremor signals. Separating transient events from volcanic tremors can, therefore, contribute to improving our understanding of the underlying physical processes. Similar issues (data reduction, source separation, extraction, and classification) are addressed in the context of music information retrieval (MIR), and the signal characteristics of acoustic and seismic recordings share a number of similarities. This thesis goes beyond the classical signal analysis techniques usually employed in seismology by exploiting these similarities and building the information retrieval strategy on the expertise developed in the field of MIR.
First, inspired by the idea of harmonic–percussive separation (HPS) in musical signal processing, I have developed a method to extract harmonic volcanic tremor signals and to detect transient events from seismic recordings. This provides a clean tremor signal suitable for tremor investigation along with a characteristic function suitable for earthquake detection. Second, using HPS algorithms, I have developed a noise reduction technique for seismic signals. This method is especially useful for denoising ocean-bottom seismometer recordings, which are highly contaminated by noise. The advantage of this method compared to other denoising techniques is that it does not introduce distortion to broadband earthquake waveforms, which makes it reliable for different applications in passive seismological analysis. Third, to address the challenge of extracting information from high-dimensional data and investigating complex eruptive phases, I have developed an advanced machine learning model that results in a comprehensive signal processing scheme for volcanic tremors. Using this method, seismic signatures of major eruptive phases can be detected automatically. This helps to provide a chronology of the volcanic system. The model is also capable of detecting weak precursory volcanic tremors prior to an eruption, which could serve as an indicator of imminent eruptive activity. The extracted patterns of seismicity and their temporal variations finally provide an explanation for the transition mechanism between eruptive phases.
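The core HPS idea is to split a spectrogram into a slowly varying, narrow-band ("harmonic") part and a short-lived, broadband ("percussive") part by median filtering along the time and frequency axes, respectively. The sketch below applies the standard median-filtering HPSS implementation from librosa to a synthetic seismic-like trace; it only illustrates the general principle, and the parameters and preprocessing are assumptions rather than the thesis' adapted method.

```python
# Minimal sketch of harmonic-percussive separation (HPS) on a seismic-like trace,
# using librosa's standard median-filtering HPSS. Illustrative only.
import numpy as np
import librosa

fs = 100.0                        # sampling rate in Hz (typical for seismic data)
t = np.arange(0, 600, 1 / fs)     # 10 minutes of synthetic data

# "Harmonic" part: narrow-band tremor-like oscillation; "percussive" part: transients.
tremor = 0.5 * np.sin(2 * np.pi * 2.0 * t)
transients = np.zeros_like(t)
transients[::6000] = 5.0          # an impulsive event every 60 s
trace = tremor + transients + 0.05 * np.random.randn(t.size)

# Separate in the time-frequency domain and transform back.
S = librosa.stft(trace.astype(np.float32), n_fft=256, hop_length=64)
S_harm, S_perc = librosa.decompose.hpss(S, kernel_size=31)
tremor_estimate = librosa.istft(S_harm, hop_length=64, length=trace.size)
transient_estimate = librosa.istft(S_perc, hop_length=64, length=trace.size)
```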
Predator-forager interactions are a major factor in the evolutionary adaptation of many species, as predators need to gain energy by consuming prey, and foragers need to avoid mortality while still consuming resources for energetic gains. In this evolutionary arms race, foragers have constantly evolved anti-predator behaviours (e.g. changes in foraging activity). To describe these complex changes, researchers developed the framework of the landscape of fear, that is, the spatio-temporal variation of perceived predation risk. This concept simplifies all the involved ecological processes into one framework by integrating animal biology and distribution with habitat characteristics. Researchers can then evaluate how prey species perceive predation risk and how they respond behaviourally, and thus understand the cascading effects of landscapes of fear at the resource level (tri-trophic effects). Although tri-trophic effects are well studied at the predator-prey interaction level, little is known about how forager-resource interactions contribute to the overall cascading effects of landscapes of fear, even though the changes in forager feeding behaviour that occur with perceived predation risk directly affect resource levels.
This thesis aimed to evaluate the cascading effects of the landscape of fear on biodiversity of resources, and how the feeding behaviour and movement of foragers shaped the final resource species composition (potential coexistence mechanisms). We studied the changes caused by landscapes of fear on wild and captive rodent communities and evaluated: the cascading effects of different landscapes of fear on a tri-trophic system (I), the effects of fear on a forager’s movement patterns and dietary preferences (II) and cascading effects of different types of predation risk (terrestrial versus avian, III).
In Chapter I, we applied a novel measure to evaluate the cascading effects of fear at the level of resources by quantifying the diversity of resources left after the foragers gave up foraging (diversity at the giving-up density). We tested the measure at different spatial levels (local and regional) and observed that with decreased perceived predation risk, the density and biodiversity of resources also decreased. Foragers left a very dissimilar community of resources based on perceived risk and resource functional traits, and therefore acted as an equalising mechanism.
In Chapter II, we aimed to further understand the decision-making processes of rodents in different landscapes of fear, namely, which resource species rodents decided to forage on (based on three functional traits: size, nutrients, and shape) and how they moved depending on perceived predation risk. In safe landscapes, individuals increased their feeding activity and movements and, despite the increased costs, more often visited patches that were further away from their central place. Although individuals preferred larger resources regardless of risk, they shifted their preference towards fat-rich resources when perceived predation risk was low.
In Chapter III, we evaluated the cascading effects of two different types of predation risk on rodents: terrestrial (raccoon) versus avian predation risk. Raccoon presence or absence did not alter the rodents’ feeding behaviour in different landscapes of fear. Rodents showed risk avoidance behaviours towards avian predators (spatial risk avoidance), but not towards raccoons (lack of temporal risk avoidance).
By analysing the effects of fear in tri-trophic systems, we were able to deepen the knowledge of how non-consumptive effects of predators affect the behaviour of foragers, and to quantitatively measure the cascading effects at the level of resources with a novel measure. Foragers are at the core of the ecological processes and responses to the landscape of fear, acting as variable coexistence agents for resource species depending on perceived predation risk. These newly developed measures and insights can be applied to further trophic chains and inform researchers about biodiversity patterns originating from landscapes of fear.
Continental rifts are key geodynamic regions where the complex interplay of magmatism and faulting activity can be studied to understand the driving forces of extension and the formation of new divergent plate boundaries. Well-preserved rift morphology can provide a wealth of information on the growth, interaction, and linkage of normal-fault systems through time. If rift basins are preserved over longer geologic time periods, sedimentary archives generated during extensional processes may mirror tectonic and climatic influences on erosional and sedimentary processes that have varied over time. Rift basins are furthermore strategic areas for hydrocarbon and geothermal energy exploration, and they play a central role in species dispersal and evolution as well as providing or inhibiting hydrologic connectivity along basins at emerging plate boundaries.
The Cenozoic East African rift system (EARS) is one of the most important continental extension zones, reflecting a range of evolutionary stages from an early rift stage with isolated basins in Malawi to an advanced stage of continental extension in southern Afar. Consequently, the EARS is an ideal natural laboratory that lends itself to the study of different stages in the breakup of a continent. The volcanically and seismically active eastern branch of the EARS is characterized by multiple, laterally offset tectonic and magmatic segments where adjacent extensional basins facilitate crustal extension either across a broad deformation zone or via major transfer faulting. The Broadly Rifted Zone (BRZ) in southern Ethiopia is an integral part of the eastern branch of the EARS; in this region, rift segments of the southern Ethiopian Rift (sMER) and northern Kenyan Rift (nKR) propagate in opposite directions in a region with one of the earliest manifestations of volcanism and extensional tectonism in East Africa. The basin margins of the Chew-Bahir Basin and the Gofa Province, characterized by a semi-arid climate and largely uniform lithology, provide ideal conditions for studying the tectonic and geomorphologic features of this complex kinematic transfer zone, but more importantly, this area is suitable for characterizing and quantifying the overlap between the propagating structures of the sMER and nKR and the resulting deformation patterns of the BRZ transfer zones.
In this study, I have combined data from thermochronology, thermal modeling, morphometry, paleomagnetic analysis, geochronology, and geomorphological field observations with information from published studies to reconstruct the spatiotemporal relationship between volcanism and fault activity in the BRZ and quantify the deformation patterns of the overlapping rift segments. I present the following results: (1) new thermochronological data from the en-échelon basin margins and footwall blocks of the rift flanks and morphometric results verified in the field to link different phases of magmatism and faulting during extension and infer geomorphological landscape features related to the current tectonic interaction between the nKR and the sMER; (2) temporally constrained paleomagnetic data from the BRZ overlap zone between the Ethiopian and Kenyan rifts to quantitatively determine block rotation between the two segments. Combining the collected data, time-temperature histories of thermal modeling results from representative samples show well-defined deformation phases between 25–20 Ma, 15–9 Ma, and ~5 Ma to the present. Each deformation phase is characterized by the onset of rapid cooling (>2 °C/Ma) of the crust associated with uplift or exhumation of the rift shoulder. After an initial, spatially very diffuse phase of extension, the rift has gradually evolved into a system of connected structures formed in an increasingly focused rift zone during the last 5 Ma. Regarding the morphometric analysis of the rift structures, it can be shown that normalized slope indices of the river courses, the spatial arrangement of knickpoints in the river longitudinal profiles of the footwall blocks, local relief values, and the average maximum values of the slope of the river profiles indicate a gradual increase in the extension rate from north (Sawula basin: mature) to south (Chew Bahir: young). The complexity of the structural evolution of the BRZ overlap zone between nKR and sMER is further emphasized by the documentation of crustal block rotations around a vertical axis. A comparison of the mean directions obtained for the Eo-Oligocene (Ds=352.6°, Is=-17.0°, N=18, α95=5.5°) and Miocene (Ds=2.9°, Is=0.9°, N=9, α95=12.4°) volcanics relative to the pole for stable South Africa and with respect to the corresponding ages of the analyzed units records a significant counterclockwise rotation of ~11.1° ± 6.4° and an insignificant CCW rotation of ~3.2° ± 11.5°, respectively.
Modern datasets often exhibit diverse, feature-rich, unstructured data, and they are massive in size. This is the case for social networks, the human genome, and e-commerce databases. As Artificial Intelligence (AI) systems are increasingly used to detect patterns in data and predict future outcomes, there are growing concerns about their ability to process large amounts of data. Motivated by these concerns, we study the problem of designing AI systems that are scalable to very large and heterogeneous datasets.
Many AI systems need to solve combinatorial optimization problems in the course of their operation. These optimization problems are typically NP-hard, and they may exhibit additional side constraints. However, the underlying objective functions often exhibit additional properties. These properties can be exploited to design suitable optimization algorithms. One of these properties is the well-studied notion of submodularity, which captures diminishing returns. Submodularity is often found in real-world applications. Furthermore, many relevant applications exhibit generalizations of this property.
In this thesis, we propose new scalable optimization algorithms for combinatorial problems with diminishing returns. Specifically, we focus on three problems: the Maximum Entropy Sampling problem, Video Summarization, and Feature Selection. For each problem, we propose new algorithms that work at scale. These algorithms are based on a variety of techniques, such as forward step-wise selection and adaptive sampling. Our proposed algorithms yield strong approximation guarantees, and they perform well experimentally.
We first study the Maximum Entropy Sampling problem. This problem consists of selecting a subset of random variables from a larger set that maximizes the entropy. By using diminishing return properties, we develop a simple forward step-wise selection optimization algorithm for this problem. Then, we study the problem of selecting a subset of frames that represents a given video. Again, this problem corresponds to a submodular maximization problem. We provide a new adaptive sampling algorithm for this problem, suitable to handle the complex side constraints imposed by the application. We conclude by studying Feature Selection. In this case, the underlying objective functions generalize the notion of submodularity. We provide a new adaptive sequencing algorithm for this problem, based on the Orthogonal Matching Pursuit paradigm.
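For a Gaussian vector, the entropy of a subset of variables grows with the log-determinant of the corresponding covariance submatrix, so forward step-wise selection repeatedly adds the variable with the largest marginal gain. The sketch below illustrates this greedy scheme on synthetic data under that Gaussian assumption; it is a simplified illustration, not the exact algorithms or guarantees developed in the thesis.

```python
# Minimal sketch of forward step-wise (greedy) selection for Maximum Entropy Sampling:
# pick k variables whose covariance submatrix has maximal log-determinant.
import numpy as np

def greedy_max_entropy(cov: np.ndarray, k: int) -> list[int]:
    """Greedily select k indices maximizing log det of the covariance submatrix."""
    selected: list[int] = []
    for _ in range(k):
        best_idx, best_logdet = None, -np.inf
        for j in range(cov.shape[0]):
            if j in selected:
                continue
            idx = selected + [j]
            sign, logdet = np.linalg.slogdet(cov[np.ix_(idx, idx)])
            # Comparing logdet is equivalent to comparing marginal gains here,
            # since the contribution of the already selected set is fixed.
            if sign > 0 and logdet > best_logdet:
                best_idx, best_logdet = j, logdet
        selected.append(best_idx)
    return selected

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
cov = A.T @ A / 50 + 0.1 * np.eye(20)   # a well-conditioned covariance matrix
print(greedy_max_entropy(cov, k=5))
```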
Overall, we study practically relevant combinatorial problems, and we propose new algorithms to solve them. We demonstrate that these algorithms are suitable to handle massive datasets. Moreover, our analysis is not problem-specific, and our results can be applied to other domains where diminishing return properties hold. We hope that the flexibility of our framework inspires further research into scalability in AI.
Solar photocatalysis is one of the leading research concepts in the current paradigm of a sustainable chemical industry. For the actual practical implementation of sunlight-driven catalytic processes in organic synthesis, a cheap, efficient, versatile, and robust heterogeneous catalyst is necessary. Carbon nitrides are a class of organic semiconductors that are known to fulfill these requirements.
First, the current state of solar photocatalysis in the economy, industry, and laboratory research is reviewed, outlining EU project funding, prospective synthetic and reforming bulk processes, small-scale solar organic chemistry, and existing reactor designs and prototypes, and concluding that the approach is feasible.
Then, the photocatalytic aerobic cleavage of oximes to the corresponding aldehydes and ketones by anionic poly(heptazine imide) carbon nitride is discussed. The reaction provides a feasible method for deprotection and formation of carbonyl compounds from nitrosation products and serves as a convenient model to study chromoselectivity and the photophysics of energy transfer in heterogeneous photocatalysis.
Afterwards, the ability of mesoporous graphitic carbon nitride to conduct proton-coupled electron transfer was utilized for the direct oxygenation of 1,3-oxazolidin-2-ones to the corresponding 1,3-oxazolidine-2,4-diones. This reaction provides easier access to a key scaffold of diverse types of drugs and agrochemicals.
Finally, a series of novel carbon nitrides based on poly(triazine imide) and poly(heptazine imide) structure was synthesized from cyanamide and potassium rhodizonate. These catalysts demonstrated a good performance in a set of photocatalytic benchmark reactions, including aerobic oxidation, dual nickel photoredox catalysis, hydrogen peroxide evolution and chromoselective transformation of organosulfur precursors.
In conclusion, the scope of carbon nitride utilization for net-oxidative and net-neutral photocatalytic processes was expanded, and a new tunable platform for catalyst synthesis was discovered.
Essays in public economics
(2023)
This cumulative dissertation uses economic theory and micro-econometric tools and evaluation methods to analyse public policies and their impact on welfare and individual behaviour. In particular, it focuses on policies in two distinct areas that represent fundamental societal challenges in the 21st century: the ageing of society and life in densely populated urban agglomerations. Together, these areas shape important financial decisions in a person's life, impact welfare, and are driving forces behind many of the challenges in today's societies. The five self-contained research chapters of this thesis analyse the forward-looking effects of pension reforms, affordable housing policies, as well as a public transport subsidy and its effect on air pollution.
The Andes reflect Cenozoic deformation and uplift along the South American margin in the context of regional shortening associated with the interaction between the subducting Nazca plate and the overriding continental South American plate. Simultaneously, multiple levels of uplifted marine terraces constitute laterally continuous geomorphic features related to the accumulation of permanent forearc deformation in the coastal realm. However, the mechanisms responsible for permanent coastal uplift and the persistence of current/decadal deformation patterns over millennial timescales are still not fully understood. This dissertation presents a continental-scale database of last interglacial terrace elevations and uplift rates along the South American coast that provides the basis for an analysis of a variety of mechanisms that are possibly responsible for the accumulation of permanent coastal uplift. Regional-scale mapping and analysis of multiple, late Pleistocene terrace levels in central Chile furthermore provide valuable insights regarding the persistence of current seismic asperities, the role of upper-plate faulting, and the impact of bathymetric ridges on permanent forearc deformation.
The database of last interglacial terrace elevations reveals an almost continuous signal of background-uplift rates along the South American coast at ~0.22 mm/yr that is modified by various short- to long-wavelength changes. Spatial correlations with crustal faults and subducted bathymetric ridges suggest long-term deformation to be affected by these features, while the latitudinal variability of climate forcing factors has a profound impact on the generation and preservation of marine terraces. Systematic wavelength analyses and comparisons of the terrace-uplift rate signal with different tectonic parameters reveal short-wavelength deformation to result from crustal faulting, while intermediate- to long-wavelength deformation might indicate various extents of long-term seismotectonic segments on the megathrust, which are at least partially controlled by the subduction of bathymetric anomalies. The observed signal of background-uplift rate is likely accumulated by moderate earthquakes near the Moho, suggesting multiple, spatiotemporally distinct phases of uplift that manifest as a continuous uplift signal over millennial timescales.
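As a rough illustration of how terrace-based uplift rates of this magnitude are obtained, the uplift rate follows from the present elevation of the dated shoreline, the paleo sea level at the time of terrace formation, and the terrace age. The elevation, paleo-sea-level, and age values below are illustrative assumptions, and the thesis' own measurements and corrections may differ.

```python
# Back-of-the-envelope uplift-rate calculation for a last-interglacial (MIS 5e) terrace.
# All numbers are illustrative assumptions, not values taken from the thesis.
terrace_elevation_m = 33.0   # present-day elevation of the terrace shoreline angle
paleo_sea_level_m = 6.0      # assumed MIS 5e sea level above present
terrace_age_kyr = 125.0      # assumed age of the last interglacial highstand

# 1 m/kyr equals 1 mm/yr, so the units work out directly.
uplift_rate_mm_per_yr = (terrace_elevation_m - paleo_sea_level_m) / terrace_age_kyr
print(f"uplift rate ~ {uplift_rate_mm_per_yr:.2f} mm/yr")   # ~0.22 mm/yr
```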
Various levels of late Pleistocene marine terraces in the 2015 M8.3 Illapel-earthquake area reveal a range of uplift rates between 0.1 and 0.6 mm/yr and indicate decreasing uplift rates since ~400 ka. These glacial-cycle uplift rates do not correlate with current or decadal estimates of coastal deformation, suggesting that seismic asperities are not persistent features on the megathrust that control the accumulation of permanent forearc deformation over long timescales of 10⁵ years. Trench-parallel, crustal normal faults modulate the characteristics of permanent forearc deformation; upper-plate extension likely represents a second-order phenomenon resulting from subduction erosion and subsequent underplating that lead to regional tectonic uplift and local gravitational collapse of the forearc. In addition, variable activity with respect to the subduction of the Juan Fernández Ridge can be detected in the upper plate over the course of multiple interglacial periods, emphasizing the role of bathymetric anomalies in causing local increases in terrace-uplift rate. This thesis therefore provides new insights into the current understanding of subduction-zone processes and the dynamics of coastal forearc deformation, whose different interacting forcing factors impact the topographic and geomorphic evolution of the western South American coast.
Extreme flooding displaces an average of 12 million people every year. Marginalized populations in low-income countries are at particularly high risk, but industrialized countries are also susceptible to displacement and its inherent societal impacts. The risk of being displaced results from a complex interaction of flood hazard, population exposed in the floodplains, and socio-economic vulnerability. Ongoing global warming changes the intensity, frequency, and duration of flood hazards, undermining existing protection measures. Meanwhile, settlements in attractive yet hazardous flood-prone areas have led to a higher degree of population exposure. Finally, the vulnerability to displacement is altered by demographic and social change, shifting economic power, urbanization, and technological development. These risk components have been investigated intensively in the context of loss of life and economic damage; however, only little is known about the risk of displacement under global change.
This thesis aims to improve our understanding of flood-induced displacement risk under global climate change and socio-economic change. This objective is tackled by addressing the following three research questions. First, by focusing on the choice of input data, how well can a global flood modeling chain reproduce the flood hazards of historic events that led to displacement? Second, what are the socio-economic characteristics that shape the vulnerability to displacement? Finally, to what degree has climate change potentially contributed to recent flood-induced displacement events?
To answer the first question, a global flood modeling chain is evaluated by comparing simulated flood extent with satellite-derived inundation information for eight major flood events. A focus is set on the sensitivity to different combinations of the underlying climate reanalysis datasets and global hydrological models, which serve as input for the global hydraulic model. An evaluation scheme of performance scores shows that simulated flood extent is mostly overestimated when flood protection is not considered, and depends on the choice of global hydrological model for only a few events. Results are more sensitive to the underlying climate forcing, with two datasets differing substantially from a third one. In contrast, the incorporation of flood protection standards results in an underestimation of flood extent, pointing to potential deficiencies in the protection level estimates or the flood frequency distribution within the modeling chain.
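Such an evaluation typically reduces simulated and observed inundation to binary masks and computes categorical scores from their overlap. The sketch below shows common choices (hit rate, false alarm ratio, critical success index) on synthetic masks; the exact score set used in the thesis may differ.

```python
# Minimal sketch of typical flood-extent performance scores for a simulated vs.
# satellite-observed binary inundation mask. Masks here are synthetic examples.
import numpy as np

def flood_scores(sim: np.ndarray, obs: np.ndarray) -> dict[str, float]:
    """Compare two boolean inundation masks of identical shape."""
    hits = np.sum(sim & obs)            # flooded in both simulation and observation
    false_alarms = np.sum(sim & ~obs)   # simulated flooding not observed
    misses = np.sum(~sim & obs)         # observed flooding not simulated
    return {
        "hit_rate": hits / (hits + misses),
        "false_alarm_ratio": false_alarms / (hits + false_alarms),
        "critical_success_index": hits / (hits + misses + false_alarms),
    }

rng = np.random.default_rng(1)
obs = rng.random((100, 100)) > 0.7            # synthetic "observed" flood mask
sim = obs ^ (rng.random((100, 100)) > 0.9)    # synthetic "simulated" mask with errors
print(flood_scores(sim, obs))
```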
Following the analysis of a physical flood hazard model, the socio-economic drivers of vulnerability to displacement are investigated in the next step. For this purpose, a satellite-based, global collection of flood footprints is linked with two disaster inventories to match societal impacts with the corresponding flood hazard. For each event, the number of affected population, assets, and critical infrastructure, as well as socio-economic indicators, are computed. The resulting datasets are made publicly available and contain 335 displacement events and 695 mortality/damage events. Based on this new data product, event-specific displacement vulnerabilities are determined and multiple (national-level) dependencies on socio-economic predictors are derived. The results suggest that economic prosperity only partially shapes vulnerability to displacement; urbanization, infant mortality rate, the share of elderly, population density, and critical infrastructure exhibit a stronger functional relationship, suggesting that higher levels of development are generally associated with lower vulnerability.
Besides examining the contextual drivers of vulnerability, the role of climate change in the context of human displacement is also explored. An impact attribution approach is applied to the example of Cyclone Idai and the associated extreme coastal flooding in Mozambique. A combination of coastal flood modeling and satellite imagery is used to construct factual and counterfactual flood events. This storyline-type attribution method allows investigating the isolated or combined effects of sea level rise and the intensification of cyclone wind speeds on coastal flooding. The results suggest that displacement risk has increased by 3.1 to 3.5% due to the total effects of climate change on coastal flooding, with the effect of increasing wind speed being the dominant factor.
In conclusion, this thesis highlights the potentials and challenges of modeling flood-induced displacement risk. While this work explores the sensitivity of global flood modeling to the choice of input data, new questions arise on how to effectively improve the reproduction of flood return periods and the representation of protection levels. It is also demonstrated that disentangling displacement vulnerabilities is feasible, with the results providing useful information for risk assessments, effective humanitarian aid, and disaster relief. The impact attribution study is a first step in assessing the effects of global warming on displacement risk, leading to new research challenges, e.g., coupling fluvial and coastal flood models or the attribution of other hazard types and displacement events. This thesis is one of the first to address flood-induced displacement risk from a global perspective. The findings motivate further development of the global flood modeling chain to improve our understanding of displacement vulnerability and the effects of global warming.
The present thesis focuses on the synthesis of nanostructured iron-based compounds by using β-FeOOH nanospindles and poly(ionic liquid)s (PILs) vesicles as hard and soft templates, respectively, to suppress the shuttle effect of lithium polysulfides (LiPSs) in Li-S batteries. Three types of composites with different nanostructures (mesoporous nanospindle, yolk-shell nanospindle, and nanocapsule) have been synthesized and applied as sulfur host material for Li-S batteries. Their interactions with LiPSs and effects on the electrochemical performance of Li-S batteries have been systematically studied.
In the first part of the thesis, carbon-coated mesoporous Fe3O4 (C@M-Fe3O4) nanospindles have been synthesized to suppress the shuttle effect of LiPSs. First, β-FeOOH nanospindles have been synthesized via the hydrolysis of iron(III) chloride in aqueous solution; after silica coating and subsequent calcination, mesoporous Fe2O3 (M-Fe2O3) has been obtained inside the confining silica layer through pyrolysis of the β-FeOOH. After the removal of the silica layer, electron tomography (ET) has been applied to reconstruct the 3D structure of the M-Fe2O3 nanospindles. After coating with a thin layer of polydopamine (PDA) as carbon source, the PDA-coated M-Fe2O3 particles have been calcined to obtain C@M-Fe3O4 nanospindles. With the chemisorption of Fe3O4 and the confinement of the mesoporous structure to anchor LiPSs, the composite C@M-Fe3O4/S electrode delivers a remaining capacity of 507.7 mAh g-1 at 1 C after 600 cycles.
In the second part of the thesis, a series of iron-based compounds (Fe3O4, FeS2, and FeS) with the same yolk-shell nanospindle morphology have been synthesized, which allows for the direct comparison of the effects of compositions on the electrochemical performance of Li-S batteries. The Fe3O4-carbon yolk-shell nanospindles have been synthesized by using the β-FeOOH nanospindles as hard template. Afterwards, Fe3O4-carbon yolk-shell nanospindles have been used as precursors to obtain iron sulfides (FeS and FeS2)-carbon yolk-shell nanospindles through sulfidation at different temperatures. Using the three types of yolk-shell nanospindles as sulfur host, the effects of compositions on interactions with LiPSs and electrochemical performance in Li-S batteries have been systematically investigated and compared. Benefiting from the chemisorption and catalytic effect of FeS2 particles and the physical confinement of the carbon shell, the FeS2-C/S electrode exhibits the best electrochemical performance with an initial specific discharge capacity of 877.6 mAh g-1 at 0.5 C and a retention ratio of 86.7% after 350 cycles.
In the third part, PIL vesicles have been used as soft template to synthesize carbon nanocapsules embedded with iron nitride particles to immobilize LiPSs and catalyze their conversion in Li-S batteries. First, 3-n-decyl-1-vinylimidazolium bromide has been used as monomer to synthesize PIL nanovesicles by free radical polymerization. Assisted by a PDA coating route and ion exchange, the PIL nanovesicles have been successfully applied as soft template in a morphology-maintaining carbonization to prepare carbon nanocapsules embedded with iron nitride nanoparticles (FexN@C). The well-dispersed iron nitride nanoparticles effectively catalyze the conversion of LiPSs to Li2S, owing to their high electrical conductivity and strong chemical binding to LiPSs. The constructed FexN@C/S cathode demonstrates a high initial discharge capacity of 1085.0 mAh g-1 at 0.5 C with a remaining value of 930.0 mAh g-1 after 200 cycles.
The results in the present thesis demonstrate facile synthetic routes to nanostructured iron-based compounds with controllable morphologies and compositions using soft and hard colloidal templates, which can be applied as sulfur hosts to suppress the shuttle behavior of LiPSs. The synthesis approaches developed in this thesis are also applicable to fabricating other transition metal-based compounds with porous nanostructures for other applications.
Reflexion und Reflexivität
(2023)
Reflection is regarded as a key category of professional development in teacher education. Accordingly, the quality of reflection-related competencies is investigated in many ways. One challenge lies in the assumption that a person's reflectivity can be inferred directly from the analysis of written reflections, since reflection should always be viewed as a context-specific image of reflection-related argumentation processes and is subject to reflection-related dispositions. Moreover, the quality of a reflection can be rated on several dimensions without allowing quantifiable, absolute statements.
Therefore, N = 134 written reflections on another person's teaching were composed in the context of a physics video vignette, and context-specific reflection-related dispositions were assessed. Experts produced theory-based quality ratings of the breadth, depth, coherence, and specificity of each reflection text. Further text features were obtained using computer-based classification and analysis procedures. An exploratory factor analysis yielded the factors quality, quantity, and descriptiveness. Since all conventionally rated quality dimensions were represented by a single factor, a maximal quality correlate could be calculated, to which every written reflection within the present vignette has a computer-determinable distance. This distance to the maximal quality correlate could be validated and can represent the quality of the written reflections in quantified form, independently of human resources. Finally, it was found that selected dispositions are related to reflection quality to different degrees. For example, only minimal associations were identified for physics content knowledge, whereas value orientation and perceived teaching quality can be closely related to the quality of a written reflection.
It is concluded that reflection-related dispositions can have a moderating influence on reflections. It is recommended that selected dispositions be assessed alongside reflection whenever the aim is to measure competence. Furthermore, this work demonstrates the possibility of meaningful quantification even in the analysis of complex constructs. Computer-based quality estimates can enable objective and individual analyses as well as more differentiated automated feedback.
Background: The concept of self-compassion (SC), a special way of being compassionate with oneself while dealing with stressful life circumstances, has attracted increasing attention in research over the past two decades. Research has already shown that SC has beneficial effects on affective well-being and other mental health outcomes. However, little is known about the ways in which SC might facilitate our affective well-being in stressful situations. Hence, a central concern of this dissertation was the question of which underlying processes might influence the link between SC and affective well-being. Two established components of stress processing that might also play an important role in this context are the amount of experienced stress and the way of coping with a stressor. Thus, using a multi-method approach, this dissertation aimed to determine the extent to which SC might help alleviate experienced stress and promote the use of more salutary coping while dealing with stressful circumstances. These processes might ultimately help improve one’s affective well-being. Derived from that, it was hypothesized that more SC is linked to less perceived stress and intensified use of salutary coping responses. Additionally, it was suggested that perceived stress and coping mediate the relation between SC and affective well-being.
Method: The research questions were addressed in three single studies and one meta-study. To test the assumptions about the relations between SC and coping in particular, a systematic literature search was conducted, resulting in k = 136 samples with an overall sample size of N = 38,913. To integrate the z-transformed Pearson correlation coefficients, random-effects models were calculated. All hypotheses were tested with a three-wave cross-lagged design in two short-term longitudinal online studies assessing SC, perceived stress, and coping responses in all waves. The first study explored the assumptions in a student sample (N = 684) with a mean age of 27.91 years over a six-week period, whereas the second implemented the measurements in the GESIS Panel (N = 2,934; mean age 52.76 years), testing the hypotheses in a population-based sample across eight weeks. Finally, an ambulatory assessment study was designed to extend the findings of the longitudinal studies to the intraindividual level. Thus, a sample of 213 participants completed questionnaires on momentary SC, perceived stress, engagement and disengagement coping, and affective well-being on their smartphones three times per day over seven consecutive days. The data were analyzed using 1-1-1 multilevel mediation analyses.
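Since the meta-analysis pools z-transformed Pearson correlations with random-effects models, a minimal sketch of that pooling step may be helpful. The function, the choice of a DerSimonian-Laird estimator, and the toy effect sizes are illustrative assumptions, not the dissertation's actual code or data.

```python
import numpy as np

def random_effects_meta(r, n):
    """Pool Pearson correlations via Fisher z and a DerSimonian-Laird
    random-effects model. r: correlations, n: sample sizes (illustrative)."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                        # Fisher z-transform
    v = 1.0 / (n - 3.0)                      # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)      # fixed-effect estimate
    Q = np.sum(w * (z - z_fixed) ** 2)       # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(z) - 1)) / c)  # between-study variance
    w_re = 1.0 / (v + tau2)
    z_re = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    ci = np.tanh([z_re - 1.96 * se, z_re + 1.96 * se])
    return np.tanh(z_re), ci, tau2           # pooled r, 95% CI, tau^2

# toy example with made-up effect sizes and sample sizes
print(random_effects_meta([0.30, 0.42, 0.25], [120, 300, 85]))
```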
Results: Results of the meta-analysis indicated that higher SC is significantly associated with more use of engagement coping and less use of disengagement coping. Considering the relations between SC and stress processing variables in all three single studies, cross-lagged paths from the longitudinal data, as well as multilevel modeling paths from the ambulatory assessment data indicated a notable relation between all relevant stress variables. As expected, results showed a significant negative relation between SC and perceived stress and disengagement coping, as well as a positive connection with engagement coping responses at the dispositional and intra-individual level. However, considering the mediational hypothesis, the most promising pathway in the link between SC and affective well-being turned out to be perceived stress in all three studies, while effects of the mediational pathways through coping responses were less robust.
Conclusion: Thus, a more self-compassionate attitude and higher momentary SC, when needed in specific situations, can help to engage in effective stress processing. Considering the underlying mechanisms in the link between SC and affective well-being, stress perception in particular seemed to be the most promising candidate for enhancing affective well-being at the dispositional and at the intraindividual level. Future research should explore the pathways between SC and affective well-being in specific contexts and samples, and also take into account additional influential factors.
The purpose of this thesis was to investigate the developmental dynamics between interest, motivation, and learning strategy use during physics learning. The target population was lower secondary school students from a developing country, given that there is hardly any research studying these domain-specific concepts in the context of developing countries. This aim was addressed in four parts.
The first part of the study was guided by three objectives: (a) to adapt and validate the Science Motivation Questionnaire (SMQ-II) for the Ugandan context; (b) to examine whether there are significant differences in motivation for learning Physics with respect to students’ gender; and (c) to establish the extent to which students’ interest predicts their motivation to learn Physics. As a pilot study, the sample comprised 374 randomly selected students from five schools in central Uganda who responded to anonymous questionnaires that included scales from the SMQ-II and the Individual Interest Questionnaire. Data were analysed using confirmatory factor analyses, t-tests and structural equation modelling in SPSS-25 and Mplus-8. The five-factor model of the SMQ-II fitted the study data adequately after deletion of one item. The modified SMQ-II exhibited invariant factor loadings and intercepts (i.e., strong measurement invariance) when administered to boys and girls. Furthermore, no significant gender differences in motivation for learning Physics were found. Individual interest significantly predicted all motivational constructs, with the strongest predictive strength for students’ self-efficacy and self-determination in learning Physics.
In the second part, using a sample of 934 Grade 9 students from eight secondary schools in Uganda, latent profile analysis (LPA), a person-centred approach, was used to investigate the motivation patterns of lower secondary school students during physics learning. A three-step approach to LPA was used to answer three research questions: RQ1, which profiles of secondary school students exist with regard to their motivation for Physics learning; RQ2, are there differences in students’ cognitive learning strategies across the identified profiles; and RQ3, do students’ gender, attitudes, and individual interest predict membership in these profiles? Six motivational profiles were identified: (i) a low-quantity motivation profile (101 students; 10.8%); (ii) a moderate-quantity motivation profile (246 students; 26.3%); (iii) a high-quantity motivation profile (365 students; 39.1%); (iv) a primarily intrinsically motivated profile (60 students; 6.4%); (v) a mostly extrinsically motivated profile (88 students; 9.4%); and (vi) a grade-introjected profile (74 students; 7.9%). Students in the low-quantity and grade-introjected profiles mostly used surface learning strategies, whilst those in the high-quantity and primarily intrinsically motivated profiles used deep learning strategies. Unlike gender, individual interest and students’ attitudes towards Physics learning strongly predicted profile membership.
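Latent profile analysis of this kind is usually run in dedicated software such as Mplus. As a rough, hedged analogue, profiles based on continuous motivation indicators can be approximated with a diagonal-covariance Gaussian mixture and BIC-based selection of the number of profiles; the indicator set, the simulated data, and the use of scikit-learn below are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# X: students x motivation indicators (placeholder names: intrinsic, career,
# grade, self-determination, self-efficacy); random data stands in for the survey.
rng = np.random.default_rng(0)
X = rng.normal(size=(934, 5))

models = {k: GaussianMixture(n_components=k, covariance_type="diag",
                             n_init=10, random_state=0).fit(X)
          for k in range(1, 8)}
bic = {k: m.bic(X) for k, m in models.items()}
best_k = min(bic, key=bic.get)            # number of profiles with lowest BIC
profiles = models[best_k].predict(X)      # most likely profile per student
print(best_k, np.bincount(profiles))
```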
In the third part of the study, the occurrence of different secondary school learner profiles, depending on various combinations of cognitive and metacognitive learning strategy use, as well as differences in perceived autonomy support, intrinsic motivation, and gender, was examined. Data were collected from 576 9th-grade students. Four learner profiles were identified: competent strategy user, struggling user, surface-level learner, and deep-level learner. Gender differences were noted in students’ use of elaboration and organization strategies to learn Physics, in favour of girls. In terms of profile membership, significant differences in gender, intrinsic motivation and perceived autonomy support were also noted. Girls were 2.4-2.7 times more likely than boys to be members of the competent strategy user and surface-level learner profiles. Additionally, higher levels of intrinsic motivation predicted an increased likelihood of membership in the deep-level learner profile, whilst higher levels of perceived teacher autonomy support predicted an increased likelihood of membership in the competent strategy user profile compared to the other profiles.
Lastly, in the fourth part, changes in secondary school students’ physics motivation and cognitive learning strategy use over time were examined. Two waves of data were collected from initially 954 students from 9th through 10th grade, and a three-step approach to latent transition analysis was used. Generally, students’ motivation decreased from 9th to 10th grade. The qualitative motivation profiles showed strong within-person stability, whilst the quantitative profiles were relatively less stable. Mostly, students moved from the high-quantity motivation profile to the extrinsically motivated profiles. The cognitive learning strategy use profiles, on the other hand, were moderately stable, with higher within-person stability in the deep-level learner profile. None of the struggling users and surface-level learners transitioned into the deep-level learner profile. Additionally, students who perceived increased autonomy support from their teachers had a higher likelihood of membership in the competent strategy user profile, whilst those with an increase in individual interest had a higher likelihood of membership in the deep-level learner profile.
Development of electrochemical antibody-based and enzymatic assays for mycotoxin analysis in food
(2023)
Electrochemical methods are promising for meeting the demand for easy-to-use devices that monitor key parameters in the food industry. Many companies run their own lab procedures for mycotoxin analysis, but a major goal is to simplify the analysis. The enzyme-linked immunosorbent assay using horseradish peroxidase (HRP) as enzymatic label, together with 3,3',5,5'-tetramethylbenzidine (TMB)/H2O2 as substrates, allows sensitive mycotoxin detection with optical readout. For the miniaturization of the detection step, an electrochemical system for mycotoxin analysis was developed. To this end, the electrochemical detection of TMB was studied by cyclic voltammetry on different screen-printed electrodes (carbon and gold) and at different pH values (pH 1 and pH 4). A stable electrode reaction, which is the basis for the further construction of the electrochemical detection system, could be achieved at pH 1 on gold electrodes. An amperometric detection method for oxidized TMB, using a custom-made flow cell for screen-printed electrodes, was established and applied to a competitive magnetic bead-based immunoassay for the mycotoxin ochratoxin A. A limit of detection of 150 pM (60 ng/L) was obtained, and the results were verified with optical detection. The applicability of the magnetic bead-based immunoassay was tested in spiked beer using a handheld potentiostat connected via Bluetooth to a smartphone for amperometric detection, allowing ochratoxin A to be quantified down to 1.2 nM (0.5 µg/L).
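Competitive immunoassay signals are commonly related to analyte concentration through a four-parameter logistic (4PL) calibration curve. The sketch below, with invented calibration points and parameter values, illustrates how such a curve could be fitted and inverted to back-calculate a sample concentration; it is not the calibration procedure reported in the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, a, b, c50, d):
    """Four-parameter logistic: a = signal at zero analyte, d = signal at
    saturation, c50 = inflection point (IC50), b = slope factor."""
    return d + (a - d) / (1.0 + (c / c50) ** b)

# invented calibration data: ochratoxin A concentration (nM) vs. current (µA)
conc = np.array([0.01, 0.1, 0.3, 1, 3, 10, 30])
signal = np.array([2.10, 2.05, 1.85, 1.40, 0.95, 0.60, 0.45])

popt, _ = curve_fit(four_pl, conc, signal, p0=[2.1, 1.0, 1.0, 0.4])

def invert(y, a, b, c50, d):
    """Back-calculate the concentration for a measured signal y."""
    return c50 * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(invert(1.2, *popt))   # concentration for a hypothetical sample signal
```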
Based on the developed electrochemical detection system for TMB, the applicability of the approach was demonstrated with a magnetic bead-based immunoassay for the ergot alkaloid ergometrine. Under optimized assay conditions, a limit of detection of 3 nM (1 µg/L) was achieved, and ergometrine levels in spiked rye flour samples could be quantified in a range from 25 to 250 µg/kg. All results were verified with optical detection. The developed electrochemical detection method for TMB holds great promise for the detection of TMB in many other HRP-based assays.
A new sensing approach, based on an enzymatic electrochemical detection system for the mycotoxin fumonisin B1, was established using an Aspergillus niger fumonisin amine oxidase (AnFAO). AnFAO was produced recombinantly in E. coli as a maltose-binding protein fusion and catalyzes the oxidative deamination of fumonisins, producing hydrogen peroxide. AnFAO was found to have high storage and temperature stability. The enzyme was coupled covalently to magnetic particles, and the H2O2 produced enzymatically in the reaction with fumonisin B1 was detected amperometrically in a flow injection system using Prussian blue/carbon electrodes and the custom-made wall-jet flow cell. Fumonisin B1 could be quantified down to 1.5 µM (≈ 1 mg/L). The developed system represents a new approach to detecting mycotoxins using enzymes and electrochemical methods.
This research focuses on empowering leadership, a leadership style that shares autonomy and responsibilities with the followers. Empowering leadership enhances the meaningfulness of work by fostering participation in decision-making, expressing confidence in high performance, and providing autonomy in target setting (Cheong, 2016). I examine how empowering leadership affects followers’ reflection. Using data from 528 individuals across 172 teams, I found a positive relationship between empowering leadership and followers’ reflection. Followers’ reflection, in turn, is negatively associated with followers’ withdrawal, which mediates the beneficial effect of empowering leadership on leaders’ emotional exhaustion. As for the leaders, I propose that empowering leadership is also negatively related to leaders’ emotional exhaustion. This research broadens our understanding of the effects of empowering leadership on both followers and leaders. Moreover, it integrates the empowering leadership, leader emotional exhaustion, and burnout literatures. Overall, empowering leadership strengthens members’ reflective attitudes and behaviors, which results in reduced withdrawal (and increased presence and contribution) in teams. Because the members contribute more to team effort, the leaders experience less emotional exhaustion. Hence, my work not only identifies new ways through which empowering leadership positively affects followers but also shows how these positive effects on followers benefit the leaders’ well-being.
In this dissertation, mechanically stable hydrogels were successfully synthesized via free radical polymerization (FRP) in water. The sulfobetaine SPE served as the main monomer and was reacted with the crosslinker TMBEMPA/Br, which was prepared by first- or second-order nucleophilic substitution.
The resulting networks were analyzed in the equilibrium-swollen state mainly by low-field nuclear magnetic resonance spectroscopy, small-angle X-ray scattering (SAXS), cryo-scanning electron microscopy (cryo-SEM), dynamic mechanical analysis (DMA), rheology, thermogravimetric analysis (TGA), and differential scanning calorimetry (DSC).
The hierarchically structured network was subsequently used for the matrix-controlled mineralization of calcium phosphate and calcium carbonate. Using the alternate soaking method and varying mineralization parameters such as pH, concentration c, and temperature T, different modifications of calcium phosphate were generated. The resulting hybrid material was analyzed qualitatively by X-ray powder diffraction (XRD), attenuated total reflection Fourier-transform infrared spectroscopy (ATR-FTIR), Raman spectroscopy, scanning electron microscopy (SEM) with energy-dispersive X-ray spectroscopy (EDXS), and optical microscopy (OM), as well as quantitatively by gravimetry and TGA.
For potential use in medical technology, e.g., as an implant material, a basic assessment of the interaction between the hydrogel or hybrid material and various cell types is essential. For this purpose, different cell types such as unicellular organisms, bacteria, and adult stem cells were used. The interaction with peptide sequences from phages completes the biological subchapter.
Hydrogels can be used in manifold ways. This work therefore also outlines further project perspectives beyond the biomedical field. First approaches towards serial and tailor-made production via inkjet printing were achieved. To make this possible, further synthesis strategies, such as photopolymerization and redox-initiated polymerization, were successfully exploited. The suitability of the hydrogels as filter material or superabsorber was also analyzed.
Establishment of final leaf size in plants represents a complex mechanism that relies on the precise regulation of two interconnected cellular processes, cell division and cell expansion. In previous work, the barley protein BROAD LEAF1 (BLF1) was identified as a novel negative regulator of cell proliferation that mainly limits leaf growth in the width direction. Here I identified a novel RING/U-box protein that interacts with BLF1 through a yeast two-hybrid screen. Using BiFC, Co-IP and FRET, I confirmed the interaction of the two proteins in planta. Enrichment of the BLF1-mEGFP fusion protein and the increase of the FRET signal upon MG132 treatment of tobacco plants, together with an in vivo ubiquitylation assay in bacteria, confirmed that the RING/U-box E3 interacts with BLF1 to mediate its ubiquitylation and degradation by the 26S proteasome system. Consistent with regulation of endogenous BLF1 in barley by proteasomal degradation, inhibition of the proteasome by bortezomib treatment of BLF1-vYFP transgenic barley plants also resulted in an enrichment of the BLF1 protein. I thus demonstrated that the RING/U-box E3 colocalizes with BLF1 in nuclei and negatively regulates BLF1 protein levels. Analysis of ring-e3_1 knock-out mutants suggested the involvement of the RING/U-box E3 gene in leaf growth control, although the effect was mainly on leaf length. Together, my results suggest that proteasomal degradation, possibly mediated by the RING/U-box E3, contributes to fine-tuning BLF1 protein levels in barley.
An exploration of activity and therapist preferences and their predictors in German-speaking samples
(2023)
According to current definitions of evidence-based practice, patients’ preferences play an important role in the psychotherapeutic process and its outcomes. However, whereas a significant body of research has investigated preferences regarding specific treatments, research on preferred activities or therapist characteristics is rare, has investigated heterogeneous aspects with inconclusive results, has lacked validated assessment tools, and has neglected relevant preferences, their predictors, and the perspective of mental health professionals. Therefore, the three studies of this dissertation aimed to address the most fundamental drawbacks of current preference research by providing a validated questionnaire, focusing on activity and therapist preferences, and adding the preferences of psychotherapy trainees. To this end, Paper I reports the translation and validation of the 18-item Cooper-Norcross Inventory of Preferences (C-NIP) in a broad, heterogeneous sample of N = 969 laypeople, resulting in good to acceptable reliabilities and first evidence of validity; the original factor structure, however, was not replicated. Paper II assesses the activity preferences of psychotherapists in training using the C-NIP and compares them with the initial laypeople sample. There were significant differences between both samples, with trainees preferring a more patient-directed, emotionally intense and confrontational approach than laypeople. CBT trainees preferred a more therapist-directed, present-focused, challenging and less emotionally intense approach than psychodynamic or psychoanalytic trainees. Paper III explores therapist preferences and tests predictors of specific preference choices. For most characteristics, more than half of the participants did not have specific preferences. Results pointed towards congruency effects (i.e., a preference for similar characteristics), especially for members of marginalized groups. The dissertation provides researchers and practitioners with a validated questionnaire, shows potentially obstructive differences between patients and therapists, and underlines the importance of therapist characteristics for marginalized groups, thereby laying the foundation for future applications and implementations in research and practice.
Evaluation of nitrogen dynamics in high-order streams and rivers based on high-frequency monitoring
(2023)
Nutrient storage, transformation, and transport are important processes for environmental and ecological health, as well as for water management planning. Nitrogen is one of the most prominent elements because of its role in the far-reaching consequences of eutrophication in aquatic systems. Among the nitrogen species, research on nitrate is flourishing thanks to the widespread deployment of in-situ high-frequency sensors. Monitoring and studying nitrate can thus become a paradigm for other reactive substances that may damage environmental conditions and cause economic losses.
Identifying nitrate storage and its transport within a catchment informs the management of agricultural activities and municipal planning. Storm events are periods when hydrological dynamics activate the exchange between nitrate storage and flow pathways. In this dissertation, long-term high-frequency monitoring data at three gauging stations in the Selke river were used to quantify event-scale nitrate concentration-discharge (C-Q) hysteretic relationships. The Selke catchment is divided into three nested subcatchments with heterogeneous physiographic conditions and land use. With the quantified hysteresis indices, the impacts of seasonality and landscape gradients on C-Q relationships are explored. For example, arable areas hold a deep nitrate legacy that can be activated by high-intensity precipitation during wetting/wet periods (i.e., under strong hydrological connectivity). Hence, specific shapes of C-Q relationships in river networks can identify target locations and periods for agricultural management actions within the catchment, reducing nitrate export to downstream aquatic systems such as the ocean.
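One common way to quantify event-scale C-Q hysteresis is to compare normalized concentrations on the rising and falling limbs of a storm event at matching normalized discharge levels. The sketch below illustrates this idea with synthetic data; the exact index definition used in the dissertation may differ.

```python
import numpy as np

def hysteresis_index(q, c, levels=np.linspace(0.05, 0.95, 19)):
    """Mean difference between normalized C on the rising and falling limb,
    evaluated at matching normalized Q levels (synthetic-data illustration)."""
    q, c = np.asarray(q, float), np.asarray(c, float)
    qn = (q - q.min()) / (q.max() - q.min())       # normalize discharge
    cn = (c - c.min()) / (c.max() - c.min())       # normalize concentration
    peak = int(np.argmax(q))
    rise = np.interp(levels, qn[:peak + 1], cn[:peak + 1])
    fall = np.interp(levels, qn[peak:][::-1], cn[peak:][::-1])
    return float(np.mean(rise - fall))             # > 0: clockwise hysteresis

# synthetic storm event with flushing behaviour (clockwise C-Q loop)
t = np.linspace(0, 1, 50)
q = np.exp(-((t - 0.3) / 0.15) ** 2)               # discharge peaks at t = 0.3
c = np.exp(-((t - 0.25) / 0.15) ** 2)              # concentration peaks earlier
print(hysteresis_index(q, c))
```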
The capacity of streams to remove nitrate is of both scientific and societal interest, which motivates its quantification. Although measurements of nitrate dynamics are advanced compared to other substances, methods to directly quantify nitrate uptake pathways are still limited in space and time. The major problem is the complex convolution of hydrological and biogeochemical processes, which usually limits in-situ measurements (e.g., isotope addition) to small streams with steady flow conditions. This makes the extrapolation of nitrate dynamics to large streams highly uncertain; hence, a better understanding of in-stream nitrate dynamics in large rivers is still needed. High-frequency monitoring of the nitrate mass balance between upstream and downstream measurement sites can quantitatively disentangle multi-path nitrate uptake dynamics at the reach scale (3-8 km). In this dissertation, this approach was applied to large stream reaches with varying hydro-morphological and environmental conditions over several periods, confirming its ability to disentangle nitrate uptake pathways and their temporal dynamics. Net nitrate uptake, autotrophic assimilation and heterotrophic uptake were disentangled, along with their diel and seasonal patterns. Natural streams can generally remove more nitrate under similar environmental conditions, and heterotrophic uptake becomes dominant during post-wet seasons. Such two-station monitoring provided novel insights into reach-scale nitrate uptake processes in large streams.
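Conceptually, the two-station approach infers reach-scale uptake from the difference between the nitrate load entering and leaving the reach, normalized by the benthic area. A simplified form, ignoring the lateral inflows and groundwater exchange that a full application accounts for, reads:

\[
  U(t) \;=\; \frac{Q_{\mathrm{up}}(t-\tau)\,C_{\mathrm{up}}(t-\tau)\;-\;Q_{\mathrm{down}}(t)\,C_{\mathrm{down}}(t)}{A_{\mathrm{reach}}},
\]

where \(Q\) and \(C\) are discharge and nitrate concentration at the upstream and downstream stations, \(\tau\) is the travel time through the reach, and \(A_{\mathrm{reach}}\) is the streambed area; positive \(U\) indicates net uptake. This is a generic textbook-style formulation, not necessarily the exact mass balance used in the thesis.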
Long-term in-stream nitrate dynamics can also be evaluated with a water quality model. This is among the first studies to use a data-model fusion approach to upscale the two-station methodology to large streams with complex flow dynamics under long-term high-frequency monitoring, assessing in-stream nitrate retention and its response to drought disturbances from the seasonal to the sub-daily scale. Nitrate retention (both net uptake and net release) exhibited substantial seasonality, which also differed between the investigated normal and drought years. In the normal years, winter and early spring exhibited extensive net release; general net uptake occurred after the annual high-flow season, in late spring and early summer with autotrophic processes dominating, and during the later summer-autumn low-flow periods with heterotrophic characteristics predominating. Net nitrate release occurred from late autumn until the following early spring. In the drought years, the late-autumn net releases did not persist as consistently as in the normal years, and autotrophic processes predominated across seasons. These comprehensive results on stream-scale nitrate dynamics improve the understanding of in-stream processes and underline the importance of scientific monitoring schemes for hydrology and water quality parameters.
Planets outside our solar system, so-called "exoplanets", can be detected with different methods, and currently more than 5000 exoplanets have been confirmed, according to the NASA Exoplanet Archive. One major highlight of exoplanet studies in the past twenty years is the characterization of their atmospheres using transmission spectroscopy as the exoplanet transits. However, this characterization is challenging, and discrepancies regarding the atmosphere of the same exoplanet are sometimes reported in the literature. One potential reason for the observed atmospheric inconsistencies is the impact parameter degeneracy, which is strongly driven by the limb darkening of the host star. A brief introduction to these topics is presented in chapter 1, while the motivation and objectives of this work are described in chapter 2.
The first goal is to clarify the origin of the transmission spectrum, which is an indicator of an exoplanet's atmosphere: whether it is real or influenced by the impact parameter degeneracy. A second goal is to determine whether space-based photometry from the Transiting Exoplanet Survey Satellite (TESS) could improve the major parameters of known exoplanetary systems that are responsible for the aforementioned degeneracy. Three individual projects were conducted to address these goals; the three manuscripts are briefly presented in the manuscript overview in chapter 3.
More specifically, chapter 4 presents the first manuscript, an extended investigation of the impact parameter degeneracy and its application to synthetic transmission spectra. Evidently, the limb darkening of the host star is an important driver of this effect: the degeneracy persists across different groups of exoplanets, grouped by the uncertainty of their impact parameter and by the type of their host star. The second goal is addressed in the second and third manuscripts (chapters 5 and 6, respectively). Using observations from the TESS mission, two samples of exoplanets were studied: 10 transiting inflated hot Jupiters and 43 transiting grazing systems. The refinement or confirmation of their major system parameters can potentially help resolve current or future discrepancies regarding their atmospheric characterization. Chapter 7 discusses the conclusions of this work, while chapter 8 proposes how TESS measurements can discern between erroneous interpretations of transmission spectra, especially for systems where the impact parameter degeneracy is likely not applicable.
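For orientation, the quadratic limb darkening law and the basic transit relations involved in the degeneracy are the standard textbook forms below; they are background relations, not results of this thesis:

\[
  \frac{I(\mu)}{I(1)} = 1 - u_1\,(1-\mu) - u_2\,(1-\mu)^2, \qquad \mu = \cos\theta,
\]
\[
  \delta \approx \left(\frac{R_p}{R_\star}\right)^2, \qquad b = \frac{a}{R_\star}\cos i,
\]

where \(u_1\), \(u_2\) are the limb-darkening coefficients, \(\delta\) is the transit depth, and \(b\) is the impact parameter whose degeneracy with \(u_1\), \(u_2\) and \(R_p/R_\star\) can distort the inferred transmission spectrum.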
Transposable elements (TEs) are loci that can replicate and multiply within the genome of their host. Through transposition, TEs are responsible for variation in genomic architecture and gene regulation across all vertebrates. Genome assemblies have increased in number in recent years. However, to explore in depth the variation within and between genomes, such as SNPs (single nucleotide polymorphisms), INDELs (insertions-deletions), satellites and transposable elements, high-quality genomes are needed. Molecular marker studies of the past 10 years have been limited in their ability to correlate markers with biological differences because molecular markers rely on the accuracy of the underlying genomic resources. As a result, a substantial part of recent TE studies has focused on taxa with high-quality genomic resources such as Drosophila, zebra finch and maize. Since testudines have a slow mutation rate, second only to crocodilians, and comprise more than 300 species adapted to different environments across the globe, the testudine clade can help us study variation. Here we propose Testudines as a clade in which to study variation and TE abundance across species that diverged long ago. We investigated the genomic diversity of sea turtles, identifying key genomic regions associated with gene family duplications, expansions of particular TE families specific to Dermochelyidae, regions important for phenotypic differentiation, the impact of environmental changes on sea turtle populations, and the dynamics of TEs within different lineages. In chapter 1, we show that despite high levels of genome synteny between sea turtles, regions of reduced collinearity and microchromosomes harbor higher concentrations of multicopy gene families and larger genetic distances between species, indicating their potential importance as sources of variation underlying phenotypic differentiation. We found that differences in the ecological niches occupied by leatherback and green turtles have led to contrasting evolutionary paths for their olfactory receptor genes. We identified a long-term low population size in leatherback turtles. Nonetheless, we found no correlation between the regions of reduced collinearity and TE abundance or the accumulation of a particular TE group. In chapter 2, we show that sea turtle genomes contain a significant proportion of TEs, with differences in TE abundance between species, and the discovery of a recent expansion of Penelope-like elements (PLEs) in the highly conserved sea turtle genome provides new insights into the dynamics of TEs within Testudines. In chapter 3, we compared the proportion of TEs across the testudine clade and found that it is stable regardless of assembly quality. However, whether the proportion of a TE order correlates with genome quality depends on its abundance: for retrotransposons, which are highly abundant in this clade, we found no correlation, whereas for DNA elements, which are rarer in this clade, the estimated proportion correlates with assembly quality.
Here we confirm that high-quality genomes are fundamental for the study of transposable element evolution and their conservation within the clade. The detection and estimated abundance of specific TE orders are influenced by genome quality. We found that a reduction in population size in D. coriacea has left signals of long-term low population size in its genome. Likewise, we identified a TE expansion in D. coriacea that is not present in any other available testudine genome, strongly suggesting that it reflects a deregulation of TEs in the genome as a consequence of the low population size.
We have identified genomic regions and gene families important for phenotypic differentiation and highlighted the impact of environmental changes on sea turtle populations. Accurate classification and analysis of TE families require high-quality genome assemblies. Using TE analysis, we managed to identify differences between highly syntenic species. These findings have significant implications for conservation and provide a foundation for further research into genome evolution and gene function in turtles and other vertebrates. Overall, this study contributes to our understanding of evolutionary change and adaptation mechanisms.
Distributed decision-making studies the choices made among a group of interacting and self-interested agents. Specifically, this thesis is concerned with the optimal sequence of choices an agent makes as it tries to maximize its achievement on one or multiple objectives in a dynamic environment. The optimization of distributed decision-making is important in many real-life applications, e.g., resource allocation (of products, energy, bandwidth, computing power, etc.) and robotics (heterogeneous agents cooperating on games or tasks), in fields such as vehicular networks, the Internet of Things, and smart grids.
This thesis proposes three multi-agent reinforcement learning algorithms combined with game-theoretic tools to study strategic interaction between decision makers, using resource allocation in vehicular networks as an example. Specifically, the thesis designs an interaction mechanism based on a second-price auction, incentivizes the agents to maximize multiple short-term and long-term, individual and system objectives, and simulates a dynamic environment with realistic mobility data to evaluate algorithm performance and study agent behavior.
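As a hedged illustration of the second-price idea underlying the interaction mechanism (the actual bidding, multi-unit allocation, and reward design of the thesis are more involved), a minimal sealed-bid Vickrey auction for a single resource unit could look like this; the agent names and bids are invented.

```python
def second_price_auction(bids):
    """Sealed-bid second-price (Vickrey) auction for one resource unit.
    bids: dict mapping agent id -> bid. Returns (winner, price paid)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0   # pay the second-highest bid
    return winner, price

# three vehicles bidding for one bandwidth slot (toy numbers)
print(second_price_auction({"v1": 0.7, "v2": 1.2, "v3": 0.9}))  # -> ('v2', 0.9)
```

A well-known property of second-price auctions is that truthful bidding is a dominant strategy, which is one reason such mechanisms are attractive for distributed resource allocation.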
Theoretical results show that the mechanism has Nash equilibria, maximizes social welfare, and yields a Pareto-optimal allocation of resources in a stationary environment. Empirical results show that in the dynamic environment, the proposed learning algorithms outperform state-of-the-art algorithms in single- and multi-objective optimization and generalize very well to significantly different environments. Specifically, with the long-term multi-objective learning algorithm, we demonstrate that by considering the long-term impact of decisions, as well as by incentivizing the agents with a system fairness reward, the agents achieve better results in both individual and system objectives, even when their objectives are private, randomized, and changing over time. Moreover, the agents show competitive behavior to maximize individual payoff when the resource is scarce and cooperative behavior in achieving a system objective when the resource is abundant; they also learn the rules of the game without prior knowledge and overcome disadvantages in initial parameters (e.g., a lower budget).
To address practicality concerns, the thesis also provides several methods to improve computational performance and tests the algorithm on a single-board computer. The results show the feasibility of online training and inference within milliseconds.
There are many potential future topics following this work. 1) The interaction mechanism can be modified into a double auction, eliminating the auctioneer and resembling a completely distributed, ad hoc network. 2) The objectives are assumed to be independent in this thesis; a more realistic assumption may account for correlations between objectives, such as a hierarchy of objectives. 3) The current work limits information sharing between agents, a setup that befits applications with privacy requirements or sparse signaling; by allowing more information sharing between the agents, the algorithms can be modified for more cooperative scenarios such as robotics.
Extreme weather and climate events are one of the greatest dangers for present-day society. Therefore, it is important to provide reliable statements on what changes in extreme events can be expected along with future global climate change. However, the projected overall response to future climate change is generally the result of a complex interplay between individual physical mechanisms originating within the different climate subsystems. Hence, a profound understanding of these individual contributions is required in order to provide meaningful assessments of future changes in extreme events. One aspect of climate change is the recently observed phenomenon of Arctic Amplification and the related dramatic Arctic sea ice decline, which is expected to continue over the next decades. The question to what extent Arctic sea ice loss is able to affect atmospheric dynamics and extreme events over the mid-latitudes has received a lot of attention over recent years and remains a highly debated topic.
In this respect, the objective of this thesis is to contribute to a better understanding of the impact of future Arctic sea ice retreat on European temperature extremes and large-scale atmospheric dynamics.
The outcomes are based on model data from the atmospheric general circulation model ECHAM6. Two different sea ice sensitivity simulations from the Polar Amplification Intercomparison Project are employed and contrasted with a present-day reference experiment: one experiment with prescribed future sea ice loss over the entire Arctic, and another with sea ice reductions prescribed only locally over the Barents-Kara Sea.
The first part of the thesis focuses on how future Arctic sea ice reductions affect large-scale atmospheric dynamics over the Northern Hemisphere in terms of changes in the occurrence frequency of five preferred Euro-Atlantic circulation regimes. A comparison with circulation regimes computed from ERA5 shows that ECHAM6 is able to simulate the regime structures realistically. Both ECHAM6 sea ice sensitivity experiments exhibit similar regime frequency changes. Consistent with tendencies found in ERA5, a more frequent occurrence of a Scandinavian blocking pattern in midwinter is, for instance, detected under future sea ice conditions in the sensitivity experiments. Changes in the occurrence frequencies of circulation regimes in summer, however, are barely detected.
After identifying suitable regime storylines for the occurrence of European temperature extremes in winter, the previously detected regime frequency changes are used to quantify dynamically and thermodynamically driven contributions to sea ice-induced changes in European winter temperature extremes.
It is, for instance, shown how the preferred occurrence of a Scandinavian blocking regime under low sea ice conditions dynamically contributes to more frequent midwinter cold extremes over Central Europe. In addition, a reduced occurrence frequency of an Atlantic trough regime is linked to fewer winter warm extremes over Central Europe. Furthermore, it is demonstrated how the overall thermodynamic warming effect of sea ice loss can result in less (more) frequent winter cold (warm) extremes and consequently counteracts the dynamically induced changes.
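A generic way to write this regime-based separation (the storyline-based quantification in the thesis may differ in detail) is to express the frequency of an extreme as a sum over regimes, so that its sea ice-induced change splits into a dynamical and a thermodynamical term:

\[
  F = \sum_r p_r\, f_r, \qquad
  \Delta F \approx \underbrace{\sum_r \Delta p_r\, f_r}_{\text{dynamical}}
  \;+\; \underbrace{\sum_r p_r\, \Delta f_r}_{\text{thermodynamical}}
  \;+\; \sum_r \Delta p_r\, \Delta f_r,
\]

where \(p_r\) is the occurrence probability of regime \(r\) and \(f_r\) the conditional frequency of the temperature extreme within that regime.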
Compared to the winter season, circulation regimes in summer are less suitable as storylines for the occurrence of heat extremes.
Therefore, an approach based on circulation analogues is employed to quantify thermodynamically and dynamically driven contributions to sea ice-induced changes in summer heat extremes over three different European sectors. Reduced occurrences of blocking over Western Russia are detected in the ECHAM6 sea ice sensitivity experiments; however, attributing changes in summer heat extremes to dynamically versus thermodynamically induced contributions remains challenging.
During the Cenozoic, global cooling and the uplift of the Tian Shan, Pamir, and Tibetan Plateau modified atmospheric circulation and reduced moisture supply to Central Asia. These changes led to aridification of the region during the Neogene. Subsequently, Quaternary glaciations modified the landscape and runoff.
In the Issyk-Kul basin of the Kyrgyz Tian Shan, the sedimentary sequences reflect the development of the adjacent ranges and local climatic conditions. In this work, I reconstruct the late Miocene – early Pleistocene depositional environment, climate, and lake development in the Issyk-Kul basin using facies analyses and stable δ18O and δ13C isotopic records from sedimentary sections dated by magnetostratigraphy and 26Al/10Be isochron burial dating. Also, I present 10Be-derived millennial-scale modern and paleo-denudation rates from across the Kyrgyz Tian Shan and long-term exhumation rates calculated from published thermochronology data. This allows me to examine spatial and temporal changes in surface processes in the Kyrgyz Tian Shan.
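For orientation, catchment-wide denudation rates from in-situ 10Be are conventionally derived from a steady-state nuclide budget; a simplified, spallation-only form with commonly used symbols (not necessarily the exact formulation applied here) is:

\[
  N = \frac{P_0\,\Lambda/\rho}{E + \lambda\,\Lambda/\rho}
  \quad\Longrightarrow\quad
  E = \frac{\Lambda}{\rho}\left(\frac{P_0}{N} - \lambda\right),
\]

where \(N\) is the measured 10Be concentration, \(P_0\) the surface production rate, \(\Lambda\) the attenuation length (\(\approx 160\) g cm\(^{-2}\)), \(\rho\) the rock density, \(\lambda\) the 10Be decay constant, and \(E\) the denudation rate.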
In the Issyk-Kul basin, the style of fluvial deposition changed at ca. 7 Ma, and aridification in the basin commenced concurrently, as shown by magnetostratigraphy and the δ18O and δ13C data. Lake formation commenced on the southern side of the basin at ca. 5 Ma, followed by a ca. 2 Ma local depositional hiatus. 26Al/10Be isochron burial dating and paleocurrent analysis show that the Kungey range to the north of the basin grew eastward, leading to a change from fluvial-alluvial deposits to proximal alluvial fan conglomerates at 5-4 Ma in the easternmost part of the basin. This transition occurred at 2.6-2.8 Ma on the southern side of the basin, synchronously with the intensification of the Northern Hemisphere glaciation. The paleo-denudation rates from 2.7-2.0 Ma are as low as long-term exhumation rates, and only the millennial-scale denudation rates record an acceleration of denudation.
This work concludes that the growth of the ranges north of the basin created a topographic barrier at ca. 7 Ma, causing subsequent aridification of the Issyk-Kul basin. Increased subsidence and local, tectonically induced river system reorganization on the southern side of the basin enabled lake formation at ca. 5 Ma, while growth of the Kungey range blocked westward-draining rivers and led to sediment starvation and lake expansion. The denudational response of the Kyrgyz Tian Shan landscape was delayed by aridity, and only the substantial cooling during the late Quaternary glacial cycles led to a notable acceleration of denudation. Currently, increased glacier reduction and runoff drive more rapid denudation of the northern slope of the Terskey range compared to the other ranges of the Kyrgyz Tian Shan.
The trace elements selenium (Se) and copper (Cu) play an important role in maintaining normal brain function. Since they have essential functions as cofactors of enzymes or structural components of proteins, an optimal supply as well as well-defined homeostatic regulation are crucial. Disturbances in trace element homeostasis affect health status and contribute to the incidence and severity of various diseases. The brain in particular is vulnerable to oxidative stress due to, among other factors, its extensive oxygen consumption and high energy turnover. As components of a number of antioxidant enzymes, both elements are involved in redox homeostasis. However, high concentrations are also associated with the occurrence of oxidative stress, which can induce cellular damage. High Cu concentrations in some brain areas, in particular, are associated with the development and progression of neurodegenerative diseases such as Alzheimer's disease (AD). In contrast, reduced Se levels have been measured in the brains of AD patients. The opposing behavior of Cu and Se makes the study of these two trace elements, and of the interactions between them, particularly relevant; both are addressed in this work.
This cumulative doctoral thesis consists of five empirical studies examining various aspects of crisis and change from a management-accounting perspective. Within the first study, a bibliometric analysis is conducted. More precisely, based on publications between the financial crisis (since 2007) and the COVID-19 crisis (starting in 2020), the crisis literature in management accounting is investigated to uncover the most influential aspects of the field and to analyze the theoretical foundations of the literature. Moreover, this investigation also serves to identify future research streams and to provide starting points for future research. Based on a survey, the second study investigates the impact of several management-accounting tools on organizational resilience and its effect on a company’s competitive advantage during a crisis. The results show that their target-oriented use positively influences organizational resilience and contributes to the company’s competitive advantage during the crisis. The third study provides a more detailed view on the relationship between budgeting and risk management and their benefit for companies in times of crisis. For this purpose, the relationship between the relevance of budgeting functions and risk management in the company and the corresponding impact on company performance are investigated. The results show a positive relationship. However, a crisis can also affect the relationship between the company and its shareholders: Thus, the fourth study – based on publicly available data and a survey – examines the consequences of virtual annual general meetings on shareholder rights. The results show that, temporarily, particularly the right to information was severely restricted. For the following year, this problem was fixed, and ultimately, the virtual option was introduced permanently. The crisis has thus brought about a lasting change. But not only crises cause changes: The fifth study, also based on survey data, investigates the changes in the role of management accountants caused by digitalization. More precisely, it investigates how management accountants deal with tasks that are considered outdated and unattractive. The results of the study show that different types of personalities also act differently as far as the willingness to do those unattractive tasks is concerned, and career ambitions also influence that willingness. In addition to this, the results provide insights into the motivation of management accountants to conduct tasks and thus counteract existing assumptions based on stereotypes and clichés circulating within the research community.
Understanding hydrological processes is of fundamental importance for the Vietnamese national food security and the livelihood of the population in the Vietnamese Mekong Delta (VMD). As a consequence of sparse data in this region, however, hydrologic processes, such as the controlling processes of precipitation, the interaction between surface and groundwater, and groundwater dynamics, have not been thoroughly studied. The lack of this knowledge may negatively impact the long-term strategic planning for sustainable groundwater resources management and may result in insufficient groundwater recharge and freshwater scarcity. It is essential to develop useful methods for a better understanding of hydrological processes in such data-sparse regions. The goal of this dissertation is to advance methodologies that can improve the understanding of fundamental hydrological processes in the VMD, based on the analyses of stable water isotopes and monitoring data. The thesis mainly focuses on the controlling processes of precipitation, the mechanism of surface–groundwater interaction, and the groundwater dynamics. These processes have not been fully addressed in the VMD so far. The thesis is based on statistical analyses of the isotopic data of Global Network of Isotopes in Precipitation (GNIP), of meteorological and hydrological data from Vietnamese agencies, and of the stable water isotopes and monitoring data collected as part of this work.
First, the controlling processes of precipitation were quantified by the combination of trajectory analysis, multi-factor linear regression, and relative importance analysis (hereafter, a model‐based statistical approach). The validity of this approach is confirmed by similar, but mainly qualitative results obtained in other studies. The total variation in precipitation isotopes (δ18O and δ2H) can be better explained by multiple linear regression (up to 80%) than single-factor linear regression (30%). The relative importance analysis indicates that atmospheric moisture regimes control precipitation isotopes rather than local climatic conditions. The most crucial factor is the upstream rainfall along the trajectories of air mass movement. However, the influences of regional and local climatic factors vary in importance over the seasons. The developed model‐based statistical approach is a robust tool for the interpretation of precipitation isotopes and could also be applied to understand the controlling processes of precipitation in other regions.
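A minimal sketch of the regression step, assuming placeholder predictors such as upstream rainfall along back-trajectories, local temperature, and local precipitation (not the exact predictor set of the thesis), and judging relative importance crudely via standardized coefficients rather than the specific importance metric used in the dissertation:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
# placeholder predictors: upstream rainfall, local temperature, local rainfall
X = rng.normal(size=(n, 3))
# synthetic d18O series standing in for the GNIP observations
d18O = -8.0 - 1.5 * X[:, 0] - 0.3 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.8, n)

Xs = (X - X.mean(0)) / X.std(0)                   # standardize predictors
model = sm.OLS(d18O, sm.add_constant(Xs)).fit()
print(model.rsquared)                             # share of explained variance
print(dict(zip(["upstream_rain", "local_T", "local_P"], model.params[1:])))
```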
Second, the concept of the two-component lumped-parameter model (LPM) in conjunction with stable water isotopes was applied to examine the surface–groundwater interaction in the VMD. A calibration framework was also set up to evaluate the behaviour, parameter identifiability, and uncertainties of two-component LPMs. The modelling results provided insights on the subsurface flow conditions, the recharge contributions, and the spatial variation of groundwater transit time. The subsurface flow conditions at the study site can be best represented by the linear-piston flow distribution. The contributions of the recharge sources change with distance to the river. The mean transit time (mTT) of riverbank infiltration increases with the length of the horizontal flow path and the decreasing gradient between river and groundwater. River water infiltrates horizontally mainly via the highly permeable aquifer, resulting in short mTTs (<40 weeks) for locations close to the river (<200 m). The vertical infiltration from precipitation takes place primarily via a low‐permeable overlying aquitard, resulting in considerably longer mTTs (>80 weeks). Notably, the transit time of precipitation infiltration is independent of the distance to the river. All these results are hydrologically plausible and could be quantified by the presented method for the first time. This study indicates that the highly complex mechanism of surface–groundwater interaction at riverbank infiltration systems can be conceptualized by exploiting two‐component LPMs. It is illustrated that the model concept can be used as a tool to investigate the hydrological functioning of mixing processes and the flow path of multiple water components in riverbank infiltration systems.
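In generic form, the two-component LPM relates the groundwater isotope signal to the river and precipitation input signals through weighted convolutions with transit time distributions; the thesis applies a linear-piston flow distribution as one possible choice of \(g\):

\[
  \delta_{\mathrm{gw}}(t) = f \int_0^{\infty} \delta_{\mathrm{riv}}(t-\tau)\, g_{\mathrm{riv}}(\tau)\, d\tau
  \;+\; (1-f) \int_0^{\infty} \delta_{\mathrm{pre}}(t-\tau)\, g_{\mathrm{pre}}(\tau)\, d\tau,
\]

where \(f\) is the fractional contribution of riverbank infiltration, \(\delta\) denotes the stable isotope signal of each water component, and \(g(\tau)\) is the transit time distribution whose mean defines the mTT.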
Lastly, a suite of time series analysis approaches was applied to examine the groundwater dynamics in the VMD. The assessment was focused on the time-variant trends of groundwater levels (GWLs), the groundwater memory effect (representing the time that an aquifer holds water), and the hydraulic response between surface water and multi-layer alluvial aquifers. The analysis indicates that the aquifers act as low-pass filters to reduce the high‐frequency signals in the GWL variations, and limit the recharge to the deep groundwater. The groundwater abstraction has exceeded groundwater recharge between 1997 and 2017, leading to the decline of groundwater levels (0.01-0.55 m/year) in all considered aquifers in the VMD. The memory effect varies according to the geographical location, being shorter in shallow aquifers and flood-prone areas and longer in deep aquifers and coastal regions. Groundwater depth, season, and location primarily control the variation of the response time between the river and alluvial aquifers. These findings are important contributions to the hydrogeological literature of a little-known groundwater system in an alluvial setting. It is suggested that time series analysis can be used as an efficient tool to understand groundwater systems where resources are insufficient to develop a physical-based groundwater model.
This doctoral thesis demonstrates that important aspects of hydrological processes can be understood by statistical analysis of stable water isotope and monitoring data. The approaches developed in this thesis can be easily transferred to regions in similar tropical environments, particularly those in alluvial settings. The results of the thesis can be used as a baseline for future isotope-based studies and contribute to the hydrogeological literature of little-known groundwater systems in the VMD.
Advances in hydrogravimetry
(2023)
The interest of the hydrological community in the gravimetric method has steadily increased within the last decade. This is reflected by numerous studies from many different groups with a broad range of approaches and foci. Many of these are traditionally hydrology-oriented groups that recognized gravimetry as a potential added value for their hydrological investigations. While this has resulted in a variety of interesting and useful findings, extending the respective knowledge and confirming the methodological potential, many interesting and unresolved questions have also emerged.
This thesis presents the efforts, analyses and solutions carried out in this regard. By addressing and evaluating many of those unresolved questions, the research advances hydrogravimetry, the combination of gravimetric and hydrological methods, and shows that gravimeters are a highly useful tool for applied hydrological field research.
In the first part of the thesis, traditional setups of stationary terrestrial superconducting gravimeters are addressed. They are commonly installed within a dedicated building, the impermeable structure of which shields the underlying soil from the natural exchange of water masses (infiltration, evapotranspiration, groundwater recharge). As gravimeters are most sensitive to mass changes directly beneath the meter, this could impede their suitability for local hydrological process investigations, especially for near-surface water storage changes (WSC). By studying temporal local hydrological dynamics at a dedicated site equipped with traditional hydrological measurement devices, both below and next to the building, the impact of these absent natural dynamics on the gravity observations was quantified. A comprehensive analysis with both a data-based and a model-based approach led to the development of an alternative method for dealing with this limitation. Based on determinable parameters, this approach can be transferred to a broad range of measurement sites where gravimeters are deployed in similar structures. Furthermore, the extensive considerations on this topic enabled a more profound understanding of this so-called umbrella effect.
The second part of the thesis is a pilot study on the field deployment of a superconducting gravimeter. A newly developed field enclosure for this gravimeter was tested in an outdoor installation adjacent to the building used to investigate the umbrella effect. Analyzing and comparing the gravity observations from the indoor and outdoor gravimeters showed that their performance with respect to noise and stable environmental conditions was equivalent, while the sensitivity to near-surface WSC was greatly increased for the field-deployed instrument. Furthermore, it was demonstrated that the latter setup showed gravity changes independent of the depth at which mass changes occurred, given a sufficiently wide horizontal extent. As a consequence, the field setup is much better suited to monitoring WSC over both short and long time periods. Based on a coupled data-modeling approach, its gravity time series was successfully used to infer and quantify local water budget components (evapotranspiration, lateral subsurface discharge) on daily to annual time scales.
The third part of the thesis uses data from a gravimeter field deployment for applied hydrological process investigations. To this end, again at the same site, a sprinkling experiment was conducted in a 15 x 15 m area around the gravimeter. A simple hydro-gravimetric model was developed to calculate the gravity response resulting from water redistribution in the subsurface. It was found that, from a theoretical point of view, different subsurface water distribution processes (macropore flow, preferential flow, wetting front advancement, bypass flow and perched water table rise) lead to characteristic shapes of their resulting gravity response curves. Although it was possible with this approach to identify a dominating subsurface water distribution process for this site, some clear limitations stood out. Despite the advantage for field installations that gravimetry is a non-invasive and integral method, the problem of non-uniqueness could only be overcome by additional measurements (soil moisture, electrical resistivity tomography) within a joint evaluation. Furthermore, the simple hydrological model was efficient for theoretical considerations but lacked the capability to resolve some heterogeneous spatial structures of water distribution at the required scale. Nevertheless, this unique setup for plot- to small-scale hydrological process research underlines the high potential of gravimetry and the benefit of a field deployment.
The fourth and last part is dedicated to the evaluation of potential uncertainties arising from the processing of gravity observations. The gravimeter senses all mass variations in an integral way, with the gravitational attraction being directly proportional to the magnitude of the change and inversely proportional to the square of its distance from the sensor. Consequently, all gravity effects (for example, tides, atmosphere, non-tidal ocean loading, polar motion, global hydrology and local hydrology) are included in an aggregated manner. To isolate the signal components of interest for a particular investigation, all undesired effects have to be removed from the observations. This process is called reduction. The large-scale effects (tides, atmosphere, non-tidal ocean loading and global hydrology) cannot be measured directly, so global model data are used to describe and quantify each effect. Within the reduction process, model errors and uncertainties propagate into the residual, the result of the reduction. The focus of this part of the thesis is quantifying the resulting propagated uncertainty for each individual correction. Different superconducting gravimeter installations were evaluated with respect to their topography, distance to the ocean and climate regime. Furthermore, different aggregation periods of the gravity observation data were assessed, ranging from 1 hour up to 12 months. It was found that uncertainties were highest for 6-month periods and smallest for hourly periods. Distance to the ocean influences the uncertainty of the non-tidal ocean loading component, while geographical latitude affects uncertainties of the global hydrological component. It is important to highlight that the resulting correction-induced uncertainties in the residual have the potential to mask the signal of interest, depending on the signal magnitude and its frequency. These findings can be used to assess the value of gravity data across a range of applications and geographic settings.
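To make the reduction step tangible, the following sketch computes the vertical gravity effect of a small mass change from Newton's law and subtracts a set of modelled large-scale effects from an observed series to obtain the residual. The numbers and the dictionary of corrections are illustrative assumptions; the actual correction models and uncertainty propagation assessed in the thesis are far more involved.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def point_mass_dg(dm_kg, dx, dy, dz):
    """Vertical gravity effect (nm/s^2) of a small mass change dm located at
    offset (dx, dy, dz) from the sensor; dz is positive towards the mass."""
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    return G * dm_kg * dz / r**3 * 1e9          # 10 nm/s^2 = 1 microGal

def reduce_gravity(observed, corrections):
    """Residual = observed signal minus all modelled large-scale effects.
    `corrections` maps effect names (tides, atmosphere, ...) to time series."""
    residual = np.asarray(observed, dtype=float).copy()
    for series in corrections.values():
        residual -= np.asarray(series, dtype=float)
    return residual

# 1 m of water in a 1 m^2 column (1000 kg), 2 m directly below the meter:
print(point_mass_dg(1000.0, 0.0, 0.0, 2.0))     # about 16.7 nm/s^2
```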
In an overarching synthesis, all results and findings are discussed with a general focus on their added value for bringing hydrogravimetric field research to a new level. The conceptual and applied methodological benefits for hydrological studies are highlighted. An outlook for future setups and study designs once again illustrates the enormous potential offered by gravimeters as hydrological field tools.
The Andean Cordillera is a mountain range located at the western South American margin and is part of the eastern circum-Pacific orogenic belt. The ~7000 km long mountain range is one of the longest on Earth and hosts the second largest orogenic plateau in the world, the Altiplano-Puna plateau. The Andes are known as a non-collisional subduction-type orogen which developed as a result of the interaction between the subducted oceanic Nazca plate and the South American continental plate. The different Andean segments exhibit along-strike variations of morphotectonic provinces characterized by different elevations, volcanic activity, deformation styles, crustal thickness, shortening magnitude and oceanic plate geometry. Most of the present-day elevation can be explained by crustal shortening in the last ~50 Ma, with the shortening magnitude decreasing from ~300 km in the central (15°S-30°S) segment to less than half that in the southern part (30°S-40°S). Several factors have been proposed that might control the magnitude and acceleration of shortening of the Central Andes in the last 15 Ma. One important factor is likely the slab geometry. At 27-33°S, the slab dips horizontally at ~100 km depth due to the subduction of the buoyant Juan Fernandez Ridge, forming the Pampean flat-slab. This horizontal subduction is thought to influence the thermo-mechanical state of the Sierras Pampeanas foreland, for instance, by strengthening the lithosphere and promoting the thick-skinned propagation of deformation to the east, resulting in the uplift of the Sierras Pampeanas basement blocks. The flat-slab has migrated southwards from the Altiplano latitude at ~30 Ma to its present-day position, and the processes associated with its passage, as well as their consequences for the contemporaneous acceleration of the shortening rate in the Central Andes, remain unclear. Although the passage of the flat-slab could offer an explanation for the acceleration of the shortening, the timing does not explain the two pulses of shortening at about 15 Ma and 4 Ma that are suggested by geological observations. I hypothesize that deformation in the Central Andes is controlled by a complex interaction between the subduction dynamics of the Nazca plate and the dynamic strengthening and weakening of the South American plate due to several upper plate processes. To test this hypothesis, a detailed investigation into the role of the flat-slab, the structural inheritance of the continental plate, and the subduction dynamics in the Andes is needed. Therefore, I have built two classes of numerical thermo-mechanical models: (i) The first class of models is a series of generic E-W-oriented high-resolution 2D subduction models that include flat subduction in order to investigate the role of the subduction dynamics in the temporal variability of the shortening rate in the Central Andes at Altiplano latitudes (~21°S). The shortening rate from the models was then validated against the observed tectonic shortening rate in the Central Andes. (ii) The second class of models is a series of 3D data-driven models of the present-day Pampean flat-slab configuration and the Sierras Pampeanas (26-42°S). These models aim to investigate the relative contributions of the present-day flat subduction and of inherited structures in the continental lithosphere to strain localization. Both model classes were built using the advanced finite element geodynamic code ASPECT.
The first main finding of this work is that the temporal variability of shortening in the Central Andes is primarily controlled by the subduction dynamics of the Nazca plate as it penetrates into the mantle transition zone. These dynamics depend on the westward velocity of the South American plate, which provides the main crustal shortening force to the Andes and forces the trench to retreat. When the subducting plate reaches the lower mantle, it buckles on itself until the forced trench retreat causes the slab to steepen in the upper mantle, in contrast with the classical slab-anchoring model. The steepening of the slab hinders the trench, causing it to resist the advancing South American plate and resulting in the pulsatile shortening. This buckling and steepening subduction regime could have been initiated by the overall decrease in the westward velocity of the South American plate. In addition, the passage of the flat-slab is required to promote the shortening of the continental plate, because flat subduction scrapes the mantle lithosphere and thus weakens the continental plate. This process contributes to efficient shortening when the trench is hindered, followed by mantle lithosphere delamination at ~20 Ma. Finally, the underthrusting of the Brazilian cratonic shield beneath the orogen occurs at ~11 Ma due to the mechanical weakening of the thick sediments covering the shield margin and the decreasing resistance of the weakened lithosphere of the orogen.
The second main finding of this work is that the cold flat-slab strengthens the overriding continental lithosphere and prevents strain localization. The deformation is therefore transmitted to the eastern front of the flat-slab segment by the shear stress operating at the subduction interface; thus the flat-slab acts like an indenter that “bulldozes” the mantle keel of the continental lithosphere. The offset in the eastward propagation of deformation between the flat-slab segment and the steeper slab segment to the south causes the formation of a transpressive dextral shear zone. Here, inherited faults of past tectonic events are reactivated and further localize the deformation in an en-echelon strike-slip shear zone, through a mechanism that I refer to as the “flat-slab conveyor”. Specifically, the shallowing of the flat-slab causes the lateral deformation, which explains the timing of multiple geological events preceding the arrival of the flat-slab at 33°S. These include the onset of compression and the transition from thin- to thick-skinned deformation styles resulting from crustal contraction in the Sierras Pampeanas some 10 and 6 Myr before the Juan Fernandez Ridge collision at that latitude, respectively.
At the beginning of 2020, with COVID-19, courts of justice worldwide had to move online to continue providing judicial services. Digital technologies materialized court practices in ways unthinkable shortly before the pandemic, creating both resonances and frictions with judicial and legal regulation. A better understanding of the dynamics at play in the digitalization of courts is paramount for designing justice systems that serve their users better, ensure fair and timely dispute resolution, and foster access to justice. Building on three major bodies of literature —e-justice, digitalization and organization studies, and design research— Designing for Digital Justice takes a nuanced approach to account for human and more-than-human agencies.
Using a qualitative approach, I have studied in depth the digitalization of Chilean courts during the pandemic, specifically between April 2020 and September 2022. Leveraging a comprehensive source of primary and secondary data, I traced the genealogy of the novel materializations of the courts’ practices structured by the possibilities offered by digital technologies. In five case studies, I show in detail how the courts came to 1) work remotely, 2) host hearings via videoconference, 3) engage with users via social media (i.e., Facebook and Chat Messenger), 4) broadcast a show with judges answering questions from users via Facebook Live, and 5) record, stream, and upload judicial hearings to YouTube to fulfil the publicity requirement of criminal hearings. The digitalization of courts during the pandemic is characterized by a suspended normativity, which makes innovation possible yet presents risks. While digital technologies enabled the judiciary to provide services continuously, they also created the risk of displacing traditional judicial and legal regulation.
Contributing to liminal innovation and digitalization research, Designing for Digital Justice theorizes four phases: 1) the pre-digitalization phase resulting in the development of regulation, 2) the hotspot of digitalization resulting in the extension of regulation, 3) the digital innovation redeveloping regulation (moving to a new, preliminary phase), and 4) the permanence of temporal practices displacing regulation. Contributing to design research, Designing for Digital Justice provides new possibilities for innovation in the courts, focusing on different levels to better address the tensions generated by digitalization. Fellow researchers will find in these pages a sound theoretical advancement at the intersection of digitalization and justice, with novel methodological references. Practitioners will benefit from the actionable governance framework, the Designing for Digital Justice Model, which provides three fields of possibilities for action to design better justice systems. Only by taking into account digital, legal, and social factors can we design better systems that promote access to justice, the rule of law, and, ultimately, social peace.
The Earth’s shallow layers are at the interplay of many physical processes: some are driven by atmospheric forcing (precipitation, temperature, ...), whereas others take their origins at depth, for instance ground shaking due to seismic activity. These forcings cause the subsurface to continuously change its mechanical properties, thereby modulating the strength of the surface geomaterials and the hydrological fluxes. Because our societies settle on and rely on the layers hosting these time-dependent properties, constraining the hydro-mechanical dynamics of the shallow subsurface is crucial for our future geographical development. One way to investigate the ever-changing physical conditions occurring under our feet is through the inference of seismic velocity changes from ambient noise, a technique called seismic interferometry. In this dissertation, I use this method to monitor the evolution of groundwater storage and of damage induced by earthquakes. Two research lines are investigated that comprise the key controls of groundwater recharge in steep landscapes and the predictability and duration of the transient physical properties due to earthquake ground shaking. These two types of dynamics modulate each other and influence the velocity changes in ways that are challenging to disentangle. A part of my doctoral research also addresses this interaction. Seismic data from a range of field settings spanning several climatic conditions (wet to arid climate) in various seismic-prone areas are considered. I constrain the obtained seismic velocity time series using simple physical models, independent datasets, geophysical tools and nonlinear analysis. Additionally, a methodological development is proposed to improve the time resolution of passive seismic monitoring.
Soil is today considered a non-renewable resource on societal time scales, as the rate of soil loss is higher than that of soil formation.
Soil formation is complex, can take several thousands of years and is influenced by a variety of factors, one of which is time. Oftentimes, constant and progressive conditions for soil and/or profile development (i.e., steady state) are assumed. In reality, for most soils, their (co-)evolution leads to a complex and irregular soil development in time and space characterised by “progressive” and “regressive” phases.
Lateral transport of soil material (i.e., soil erosion) is one of the principal processes shaping the land surface and soil profile during “regressive” phases and one of the major environmental problems the world faces.
Anthropogenic activities like agriculture can exacerbate soil erosion. Thus, it is of vital importance to distinguish how short-term soil redistribution rates (i.e., within decades) influenced by human activities differ from long-term natural rates. To do so, soil erosion (and denudation) rates can be determined by using a set of isotope methods that cover different time scales at the landscape level.
With the aim to unravel the co-evolution of weathering, soil profile development and lateral redistribution at the landscape level, we used Plutonium-239+240 (239+240Pu), Beryllium-10 (10Be, in situ and meteoric) and Radiocarbon (14C) to calculate short- and long-term erosion rates in two settings, i.e., a natural and an anthropogenic environment in the hummocky ground moraine landscape of the Uckermark, North-eastern Germany. The main research questions were:
1. How do long-term and short-term rates of soil redistributing processes differ?
2. Are rates calculated from in situ 10Be comparable to those calculated using meteoric 10Be?
3. How do soil redistribution rates (short- and long-term) in an agricultural and in a natural landscape compare to each other?
4. Are the soil patterns observed in northern Germany purely a result of past events (natural and/or anthropogenic) or are they embedded in ongoing processes?
Erosion and deposition are reflected in a catena of soil profiles with no or almost no erosion at flat positions (hilltop), strong erosion on the mid-slope and accumulation of soil material at the toeslope position. These three characteristic process domains were chosen within the CarboZALF-D experimental site, which is characterised by intense anthropogenic activities. Likewise, a hydrosequence in an ancient forest was chosen for this study, regarded as a catena strongly influenced by natural soil transport.
The following main results were obtained using the above-mentioned range of isotope methods available to measure soil redistribution rates depending on the time scale needed (e.g., 239+240Pu, 10Be, 14C):
1. Short-term erosion rates are one order of magnitude higher than long-term rates in agricultural settings.
2. Both meteoric and in situ 10Be are suitable soil tracers to measure long-term soil redistribution rates, giving similar results in an anthropogenic environment for different landscape positions (e.g., hilltop, mid-slope, toeslope).
3. Short-term rates were extremely low/negligible in the natural landscape and very high in the agricultural landscape: -0.01 t ha⁻¹ yr⁻¹ (average value) and -25 t ha⁻¹ yr⁻¹, respectively. In contrast, long-term rates in the forested landscape are comparable to those calculated for the agricultural area investigated, with average values of -1.00 t ha⁻¹ yr⁻¹ and -0.79 t ha⁻¹ yr⁻¹.
4. Soil patterns observed in the forest might be due to human impact and activities that started after the first settlements in the region, earlier than previously postulated (between 4.5 and 6.8 kyr BP), rather than a result of recent soil erosion.
5. Furthermore, long-term soil redistribution rates are similar independent of the setting, meaning past natural soil mass redistribution processes still overshadow the present anthropogenic erosion processes.
Overall, this study makes important contributions to deciphering the co-evolution of weathering, soil profile development and lateral redistribution in North-eastern Germany. The multi-methodological approach used can be further tested by applying it in a wider range of landscapes and geographic regions.
The collaboration-based professional development approach Lesson Study (LS), which has its roots in the Japanese education system, has gained international recognition over the past three decades and spread quickly throughout the world. LS is a collaborative approach to professional development (PD) that incorporates multiple characteristics that have been identified in the research literature as key to effective PD. Specifically, LS is a long-term process that consists of subsequent inquiry cycles; it is site-based and integrated in teachers’ practice; it encourages collaboration and reflection; it places a strong emphasis on student learning; and it typically involves external experts who support the process or offer additional insights.
As LS integrates all these characteristics, it has rapidly gained international popularity since the turn of the 21st century and is currently being practiced in over 40 countries around the world. This international borrowing of the idea of LS to new national contexts has given rise to a research field that aims to investigate the effectiveness of LS on teacher learning as well as the circumstances and mechanisms that make LS effective in various settings around the world. Such research is important, as borrowing educational innovations and adapting them to new contexts can be a challenging process. Educational innovations that fail to deliver the expected outcomes tend to be abandoned prematurely and before they have been completely understood or a substantial research base has been established.
In order to prevent LS from early abandonment, Lewis and colleagues outlined three critical research needs in 2006, not long after LS was initially introduced to the United States. These research needs included (1) developing a descriptive knowledge base on LS, (2) examining the mechanisms by which teachers learn through LS, and (3) using design-based research cycles to analyze and improve LS.
This dissertation set out to take stock of the progress that has been made on these research needs over the past 20 years. The scoping review conducted for the framework of this dissertation indicates that, while a large and international knowledge base has been developed, the field has not yet produced reliable evidence of the effectiveness of LS. Based on the scoping review, this dissertation makes the case that Lewis et al.’s (2006) critical research needs should be updated. In order to do so, a number of limitations to the current knowledge base on LS need to be addressed. These limitations include (1) the frequent lack of comparable and replicable descriptions of the LS intervention in publications, (2) the incoherent use or lack of use of theoretical frameworks to explain teacher learning through LS, (3) the inconsistent use of terminology and concepts, and (4) the lack of scientific rigor in research studies and of established ways or tools to measure the effectiveness of LS.
This dissertation aims to advance the critical research needs in the field by examining the extent and nature of these limitations in three research studies. The focus of these studies lies on the LS stages of observation and reflection, as these stages have a high potential to facilitate teacher learning. The first study uses a mixed-method design to examine how teachers at German primary schools reflect critically together. The study derives a theory-based definition of critical and collaborative reflection in order to re-frame the reflection element in LS.
The second study, a systematic review of 129 articles on LS, assesses how transparent research articles are in reporting how teachers observed and reflected together. In addition, it investigates whether these articles provide any kind of theorization for the stages of observation and reflection.
The third study proposes a conceptual model for the field of LS that is based on existing models of continuous professional development and on research findings on team effectiveness and collaboration. The model describes the dimensions of input, mediating mechanisms, and outcomes in order to provide a conceptual grid for teachers’ continuous professional development through LS.
Carbonates carried in subducting slabs may play a major role in sourcing and storing carbon in the deep Earth’s interior. Current estimates indicate that between 40 and 66 million tons of carbon per year enter subduction zones, but it is uncertain how much of it reaches the lower mantle. It appears that most of this carbon might be extracted from subducting slabs at the mantle wedge and only a limited amount continues deeper and eventually reaches the deep mantle. However, estimates of deeply subducted carbon range broadly from 0.0001 to 52 million tons of carbon per year. This disparity is primarily due to the limited understanding of the survival of carbonate minerals during their transport to deep mantle conditions. Indeed, carbon has very low solubility in mantle silicates, and it is therefore expected to be stored primarily in accessory phases such as carbonates. Among those carbonates, magnesite (MgCO3), as a single phase, is the most stable under all mantle conditions. However, experimental investigation of the stability of magnesite in contact with SiO2 at lower mantle conditions suggests that magnesite is stable only along a cold subducted slab geotherm. Furthermore, our understanding of magnesite’s stability when interacting with more complex mantle silicate phases remains incomplete. In the first part of this dissertation, laser-heated diamond anvil cell and multi-anvil apparatus experiments were performed to investigate the stability of magnesite in contact with iron-bearing mantle silicates. Sub-solidus reactions, melting, decarbonation and diamond formation were examined from shallow to mid-lower mantle conditions (25 to 68 GPa; 1300 to 2000 K). Multi-anvil experiments at 25 GPa show the formation of carbonate-rich melt, bridgmanite, and stishovite, with melting occurring at a temperature corresponding to all geotherms except the coldest one. In situ X-ray diffraction in laser-heated diamond anvil cell experiments shows crystallization of bridgmanite and stishovite, but no melt phase was detected in situ at high temperatures. To detect decarbonation phases such as diamond, Raman spectroscopy was used. Crystallization of diamonds is observed as a sub-solidus process even at temperatures relevant to, and lower than, the coldest slab geotherm (1350 K at 33 GPa). Data obtained from this work suggest that magnesite is unstable in contact with the surrounding peridotite mantle in the uppermost lower mantle. Instead, the presence of magnesite induces melting under oxidized conditions and/or fosters diamond formation under more reduced conditions, at depths of ∼700 km. Consequently, carbonates will be removed from carbonate-rich slabs at shallow lower mantle conditions, where subducted slabs can stagnate. The transport of carbonate to greater depths will therefore be restricted, supporting the presence of a barrier for carbon subduction at the top of the lower mantle. Moreover, the reduction of magnesite to form diamonds provides additional evidence that super-deep diamond crystallization is related to the reduction of carbonates or carbonate-rich melt.
The second part of this dissertation presents the development of a portable laser-heating system optimized for X-ray emission spectroscopy (XES) or nuclear inelastic scattering (NIS) spectroscopy with signal collection at near 90°. The laser-heated diamond anvil cell is the only static pressure device that can replicate the pressures and temperatures of the Earth’s lower mantle and core. The high temperatures are reached by using high-powered lasers focused on the sample contained between the diamond anvils. Moreover, the diamonds’ transparency to X-rays enables in situ X-ray spectroscopy measurements that can probe the sample under high-temperature and high-pressure conditions. The development of portable laser-heating systems has therefore brought high-pressure and high-temperature research combined with high-resolution X-ray spectroscopy techniques to synchrotron beamlines that do not have a dedicated, permanent laser-heating system. A general description of the system is provided, as well as details on the use of a parabolic mirror as a reflective imaging objective for on-axis laser heating and spectroradiometric temperature measurements with zero attenuation of the incoming X-rays. The parabolic mirror improves the accuracy of temperature measurements, which are free from chromatic aberrations in a wide spectral range, and its perforation permits in situ X-ray measurements at synchrotron facilities. The parabolic mirror is a well-suited alternative to refractive objectives in laser-heating systems, which will facilitate future applications using CO2 lasers.
In model-driven engineering, the adaptation of large software systems with dynamic structure is enabled by architectural runtime models. Such a model represents an abstract state of the system as a graph of interacting components. Every relevant change in the system is mirrored in the model and triggers an evaluation of model queries, which search the model for structural patterns that should be adapted. This thesis focuses on a type of runtime models where the expressiveness of the model and model queries is extended to capture past changes and their timing. These history-aware models and temporal queries enable more informed decision-making during adaptation, as they support the formulation of requirements on the evolution of the pattern that should be adapted. However, evaluating temporal queries during adaptation poses significant challenges. First, it implies the capability to specify and evaluate requirements on the structure, as well as the ordering and timing in which structural changes occur. Then, query answers have to reflect that the history-aware model represents the architecture of a system whose execution may be ongoing, and thus answers may depend on future changes. Finally, query evaluation needs to be adequately fast and memory-efficient despite the increasing size of the history, especially for models that are altered by numerous, rapid changes.
The thesis presents a query language and a querying approach for the specification and evaluation of temporal queries. These contributions aim to cope with the challenges of evaluating temporal queries at runtime, a prerequisite for history-aware architectural monitoring and adaptation which has not been systematically treated by prior model-based solutions. The distinguishing features of our contributions are: the specification of queries based on a temporal logic which encodes structural patterns as graphs; the provision of formally precise query answers which account for timing constraints and ongoing executions; the incremental evaluation which avoids the re-computation of query answers after each change; and the option to discard history that is no longer relevant to queries. The query evaluation searches the model for occurrences of a pattern whose evolution satisfies a temporal logic formula. Therefore, besides model-driven engineering, another related research community is runtime verification. The approach differs from prior logic-based runtime verification solutions by supporting the representation and querying of structure via graphs and graph queries, respectively, which is more efficient for queries with complex patterns. We present a prototypical implementation of the approach and measure its speed and memory consumption in monitoring and adaptation scenarios from two application domains, with executions of an increasing size. We assess scalability by a comparison to the state-of-the-art from both related research communities. The implementation yields promising results, which pave the way for sophisticated history-aware self-adaptation solutions and indicate that the approach constitutes a highly effective technique for runtime monitoring on an architectural level.
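The query language and evaluation procedure themselves are graph-based and not reproduced here, but the toy sketch below illustrates one core idea: a temporal property evaluated over the change history of a running system can be true, false, or still undecided because it depends on future changes. The "bounded response" property, the Change record and all names are hypothetical simplifications, not the thesis's formalism.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Change:
    time: float        # seconds since monitoring started
    component: str
    state: str         # e.g. "overloaded" or "ok"

def bounded_response(history: List[Change], component: str,
                     bound: float, now: float) -> Optional[bool]:
    """Toy temporal query: 'whenever `component` becomes overloaded, it returns
    to ok within `bound` seconds'. Returns True/False, or None if the answer
    still depends on future changes of the ongoing execution."""
    pending_since = None
    for ch in sorted(history, key=lambda c: c.time):
        if ch.component != component:
            continue
        if ch.state == "overloaded" and pending_since is None:
            pending_since = ch.time
        elif ch.state == "ok" and pending_since is not None:
            if ch.time - pending_since > bound:
                return False           # recovery came too late: violated
            pending_since = None
    if pending_since is not None:
        return False if now - pending_since > bound else None
    return True

history = [Change(1.0, "db", "overloaded"), Change(20.0, "db", "ok"),
           Change(95.0, "db", "overloaded")]
print(bounded_response(history, "db", bound=30.0, now=100.0))   # None: undecided
```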
Layered structures are ubiquitous in nature and in industrial products; their individual layers can have different mechanical/thermal properties and functions, each contributing independently to the performance of the whole layered structure in its relevant application. Tuning each layer affects the performance of the whole layered system.
Pores are utilized in various disciplines where low density but large surface areas are demanded. Besides, open and interconnected pores act as transfer channels for guest chemical molecules. The shape of the pores influences the compression behavior of the material. Moreover, introducing pores decreases the density and consequently the mechanical strength. To maintain a defined mechanical strength under various stresses, a porous structure can be reinforced by adding a reinforcing agent, such as fibers, fillers or a layered structure, to bear the mechanical stress in the intended application.
In this context, this thesis aimed to generate new functions in bilayer systems by combining layers having different moduli and/or porosity, and to develop suitable processing techniques to access these structures.
Manufacturing processes for layered structures often employ organic solvents, which frequently cause environmental pollution. In this regard, the bilayer structures studied here were manufactured by processes free of organic solvents.
In this thesis, three bilayer systems were studied to answer the individual questions.
First, while various methods of introducing pores in the melt phase have been reported for one-layer constructs with simple geometry, can such methods be applied to a bilayer structure, giving two porous layers?
This was addressed with Bilayer System 1. Two porous layers were obtained from melt-blending of two different polyurethanes (PU) and polyvinyl alcohol (PVA) in a co-continuous phase, followed by sequential injection molding and leaching of the PVA phase in deionized water. A porosity of 50 ± 5% with a high interconnectivity was obtained, in which the pore sizes in both layers ranged from 1 µm to 100 µm with an average of 22 µm. The obtained pores were tailored by applying an annealing treatment at elevated temperatures of 110 °C and 130 °C, which allowed the porosity to be kept constant. The disadvantage of this system is that a maximum of 50% porosity could be reached and removal of the leaching material in the weld line section of both layers is not guaranteed. Such a construct serves as a model bilayer porous structure for determining structure-property relationships with respect to the pore size, porosity and mechanical properties of each layer. This fabrication method is also applicable to complex geometries by designing a suitable mold for injection molding.
Secondly, the scCO2 foaming process at elevated temperature and pressure is considered a green manufacturing process. Employing this method as a post-treatment can alter the orientation history of polymer chains imparted by previous fabrication methods. Can a bilayer structure be fabricated by a combination of sequential injection molding and scCO2 foaming, in which a porous layer is supported by a compact layer?
Such a construct (Bilayer System 2) was generated by sequential injection molding of a PCL (Tm ≈ 58 °C) layer and a PLLA (Tg ≈ 58 °C) layer. Soaking this structure in an autoclave with scCO2 at T = 45 °C and P = 100 bar led to selective foaming of the PCL with a porosity of 80%, while the PLLA layer was kept compact. The scCO2 treatment led to the formation of a porous core and a skin layer in the PCL; at the same time, the degree of crystallinity of the PLLA layer increased from 0 to 50% at the defined temperature and pressure. The microcellular structure of the PCL as well as the degree of crystallinity of the PLLA were controlled by increasing the soaking time.
Thirdly, micro/nanoscale wrinkles on surfaces alter surface-related properties. Wrinkles form on the surface of a bilayer structure consisting of a compliant substrate and a stiff thin film. However, the wrinkles reported so far were not reversible. Moreover, nature offers numerous examples of dynamic wrinkles at the nano- and microscale, such as gecko foot hairs providing reversible adhesion and the self-cleaning ability of lotus leaves, which alters the hydrophobicity of the surface. It was envisioned to imitate this biomimetic function in the bilayer structure, where self-assembling on/off patterns would be realized on the surface of this construct.
In summary, developing layered constructs having different properties/functions in the individual layer or exhibiting a new function as the consequence of layered structure can give novel insight for designing layered constructs in various disciplines such as packaging and transport industry, aerospace industry and health technology.
Most machine learning methods provide only point estimates when queried to predict on new data. This is problematic when the data is corrupted by noise, e.g. from imperfect measurements, or when the queried data point is very different from the data that the machine learning model has been trained with. Probabilistic modelling in machine learning naturally equips predictions with corresponding uncertainty estimates, which allows a practitioner to incorporate information about measurement noise into the modelling process and to know when not to trust the predictions. A well-understood, flexible probabilistic framework is provided by Gaussian processes, which are ideal as building blocks of probabilistic models. They lend themselves naturally to the problem of regression, i.e., being given a set of inputs and corresponding observations and then predicting likely observations for new unseen inputs, and can also be adapted to many more machine learning tasks. However, exactly inferring the optimal parameters of such a Gaussian process model (in a computationally tractable manner) is only possible for regression tasks in small data regimes. Otherwise, approximate inference methods are needed, the most prominent of which is variational inference.
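For readers unfamiliar with Gaussian processes, the following sketch performs exact GP regression with a squared-exponential kernel; its cubic cost in the number of training points is precisely why the approximate (sparse, variational) methods discussed in this thesis are needed for larger data sets. The kernel choice and hyperparameters here are arbitrary illustrations.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=0.1):
    """Exact GP regression posterior mean and variance (zero prior mean)."""
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    K_ss = rbf_kernel(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)   # grows far from training data
    return mean, var

x_train = np.array([-2.0, -1.0, 0.5, 1.5])
y_train = np.sin(x_train)
mean, var = gp_posterior(x_train, y_train, np.linspace(-3, 3, 7))
```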
In this dissertation we study models that are composed of Gaussian processes embedded in other models in order to make those more flexible and/or probabilistic. The first example are deep Gaussian processes which can be thought of as a small network of Gaussian processes and which can be employed for flexible regression. The second model class that we study are Gaussian process state-space models. These can be used for time-series modelling, i.e., the task of being given a stream of data ordered by time and then predicting future observations. For both model classes the state-of-the-art approaches offer a trade-off between expressive models and computational properties (e.g. speed or convergence properties) and mostly employ variational inference. Our goal is to improve inference in both models by first getting a deep understanding of the existing methods and then, based on this, to design better inference methods. We achieve this by either exploring the existing trade-offs or by providing general improvements applicable to multiple methods.
We first provide an extensive background, introducing Gaussian processes and their sparse (approximate and efficient) variants. We continue with a description of the models under consideration in this thesis, deep Gaussian processes and Gaussian process state-space models, including detailed derivations and a theoretical comparison of existing methods.
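As a point of reference for the following chapters, a Gaussian process state-space model is commonly written in the generative form below, with a GP prior over the transition function, latent states x_t and observations y_t; the exact notation and observation model used in the thesis may differ.

```latex
\begin{aligned}
f &\sim \mathcal{GP}\bigl(m(\cdot),\, k(\cdot,\cdot)\bigr),\\
x_{t+1} &= f(x_t) + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, Q),\\
y_t &= g(x_t) + \nu_t, \qquad\;\; \nu_t \sim \mathcal{N}(0, R).
\end{aligned}
```

Time-series prediction then amounts to jointly inferring the latent states and the transition function f from the observed sequence y_1, ..., y_T.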
Then we start analysing deep Gaussian processes more closely: Trading off the properties (good optimisation versus expressivity) of state-of-the-art methods in this field, we propose a new variational inference based approach. We then demonstrate experimentally that our new algorithm leads to better calibrated uncertainty estimates than existing methods.
Next, we turn our attention to Gaussian process state-space models, where we closely analyse the theoretical properties of existing methods. The understanding gained in this process leads us to propose a new inference scheme for general Gaussian process state-space models that incorporates effects on multiple time scales. This method is more efficient than previous approaches for long time series and outperforms its comparison partners on data sets in which effects on multiple time scales (fast and slowly varying dynamics) are present.
Finally, we propose a new inference approach for Gaussian process state-space models that trades off the properties of state-of-the-art methods in this field. By combining variational inference with another approximate inference method, the Laplace approximation, we design an efficient algorithm that outperforms its comparison partners since it achieves better calibrated uncertainties.
With the recent growth of sensors, cloud computing handles the data processing of many applications. Processing some of this data on the cloud raises, however, many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
In the second part, I work jointly with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) qualities. My contribution focuses on the network part, where I study the relation between acoustic and network qualities when selecting a subset of microphones for collecting audio data or a subset of optional jobs for processing these data; too many microphones or too many jobs can reduce quality through unnecessary delays. Hence, I develop RL solutions to select the subset of microphones under network constraints while the speaker is moving and still provide good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic quality of different applications. Accordingly, I develop RL solutions (single- and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
Species are adapted to the environment they live in. Today, most environments are subjected to rapid global changes induced by human activity, most prominently land cover and climate changes. Such transformations can cause adjustments or disruptions in various eco-evolutionary processes. The repercussions of this can appear at the population level as shifted ranges and altered abundance patterns. This is where global change effects on species are usually detected first.
To understand how eco-evolutionary processes act and interact to generate patterns of range and abundance and how these processes themselves are influenced by environmental conditions, spatially-explicit models provide effective tools. They estimate a species’ niche as the set of environmental conditions in which it can persist. However, the currently most commonly used models rely on static correlative associations that are established between a set of spatial predictors and observed species distributions. For this, they assume stationary conditions and are therefore unsuitable in contexts of global change. Better equipped are process-based models that explicitly implement algorithmic representations of eco-evolutionary mechanisms and evaluate their joint dynamics. These models have long been regarded as difficult to parameterise, but an increased data availability and improved methods for data integration lessen this challenge. Hence, the goal of this thesis is to further develop process-based models, integrate them into a complete modelling workflow, and provide the tools and guidance for their successful application.
With my thesis, I presented an integrated platform for spatially-explicit eco-evolutionary modelling and provided a workflow for its inverse calibration to observational data. In the first chapter, I introduced RangeShiftR, a software tool that implements an individual-based modelling platform for the statistical programming language R. Its open-source licensing, extensive help pages and available tutorials make it accessible to a wide audience. In the second chapter, I demonstrated a comprehensive workflow for the specification, calibration and validation of RangeShiftR using the example of the red kite in Switzerland. The integration of heterogeneous data sources, such as literature and monitoring data, made it possible to successfully calibrate the model. It was then used to make validated, spatio-temporal predictions of future red kite abundance. The presented workflow can be adopted for any study species if data is available. In the third chapter, I extended RangeShiftR to directly link demographic processes to climatic predictors. This allowed me to explore the climate-change responses of eight Swiss breeding birds in more detail. Specifically, the model could identify the most influential climatic predictors, delineate areas of projected demographic suitability, and attribute current population trends to contemporary climate change.
My work shows that the application of complex, process-based models in conservation-relevant contexts is feasible with available tools and data. Such models can be successfully calibrated and outperform other currently used modelling approaches in terms of predictive accuracy. Their projections can be used to predict future abundances or to assess alternative conservation scenarios. They further improve our mechanistic understanding of niche and range dynamics under climate change. However, only fully mechanistic models that include all relevant processes allow the effects of single processes on observed abundances to be precisely disentangled. In this respect, the RangeShiftR model still has potential for further extensions that implement missing influential processes, such as species interactions.
Dynamic, process-based models are needed to adequately model a dynamic reality. My work contributes towards the advancement, integration and dissemination of such models. This will facilitate numeric, model-based approaches for species assessments, generate ecological insights and strengthen the reliability of predictions on large spatial scales under changing conditions.
Research programmes bring together numerous actors with different backgrounds and areas of expertise in individual or collaborative projects, which are, however, largely carried out independently of one another. Given that challenges facing society as a whole, such as global warming, increasingly require cross-disciplinary solutions, networking and transfer processes within research programmes should receive more attention. Implementing accompanying research (Begleitforschung) is one way to meet this demand. Accompanying research differs in its approach and objectives from the “usual” projects and can take different theoretical ideal forms. In brief, it acts either (1) as a complement to the content of the individual research projects, (2) on a meta-level with a focus on the processes within the research programme, or (3) as an integrating, synthesizing instance for which the networking of the projects in the research programme and knowledge transfer are central. Although these forms can be separated analytically into theoretical ideal types, in practice a mix of all three usually emerges.
In this context, the present dissertation ties in with previous approaches to the methodological toolkit of accompanying research as a complementary study and focuses on the following questions: On what basis can the actors in a research programme be networked in order to bring them together effectively? Which further methodological elements should build on this in order to generate added value that exceeds the sum of the individual results of the research programme? What kind of added value can this be, and what role does accompanying research play in it?
The first methodological element is the collection and preparation of an initial database. By keywording project-related texts on the basis of semantic analysis, a comprehensive database can be generated from the contents of the research projects. The keywords are structured in a keyword catalogue using a controlled vocabulary. In parallel, they are in turn assigned to the respective projects, which thereby acquire thematic attributes. To make thematic overlaps between research projects visible and interpretable, the second element comprises approaches to visualization. For this purpose, the information is transferred into a network graph that can represent both all projects involved in the research programme and the identified keywords in relation to one another. For example, it becomes visible which research projects are “closer” to one another than others on the basis of their content. Exactly this information is used in the third methodological element as a planning basis for different event formats such as working conferences or transfer workshops. The fourth methodological element comprises synthesis building. This takes the form of a process spanning the entire period of collaboration between the accompanying research and the other research projects, since the synthesis incorporates, among other things, interim, partial and final results of the projects as well as content from the various events. Ultimately, this fourth element is also the means by which recommendations for action for future projects are derived from the integrated and synthesized information.
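A minimal sketch of how such a project-keyword network could be built and queried is given below; the project names, keywords and the use of a Jaccard similarity are invented for illustration and are not the tooling described in the dissertation.

```python
import networkx as nx
from itertools import combinations

# Hypothetical keyword assignments produced by the semantic keywording step
project_keywords = {
    "ProjectA": {"soil carbon", "crop rotation", "emission monitoring"},
    "ProjectB": {"soil carbon", "digital farming"},
    "ProjectC": {"emission monitoring", "digital farming", "irrigation"},
}

# Bipartite graph: projects on one side, controlled-vocabulary keywords on the other
G = nx.Graph()
for project, keywords in project_keywords.items():
    G.add_node(project, kind="project")
    for keyword in keywords:
        G.add_node(keyword, kind="keyword")
        G.add_edge(project, keyword)

# Thematic closeness between projects: Jaccard similarity of their keyword sets,
# usable as a planning basis for grouping projects into workshop sessions
def jaccard(a, b):
    return len(a & b) / len(a | b)

for p1, p2 in combinations(project_keywords, 2):
    print(p1, p2, round(jaccard(project_keywords[p1], project_keywords[p2]), 2))
```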
The methodological elements were developed during the ongoing accompanying research project KlimAgrar, which serves as the case study for the present dissertation and whose background in climate protection and climate adaptation in agriculture is explained in detail in the text.
This cumulative dissertation consists of three full empirical investigations based on three separate collections of data dealing with the phenomenon of negotiations in audit processes, which are combined in two research articles. In the first study, I examine internal auditors’ views on negotiation interactions with auditees. My research is based on 23 semi-structured interviews with internal auditors (14 in-house and 9 external service providers) to gain insight into when and about what (RQ1), why (RQ2), and how (RQ3) they negotiate with auditees. By adapting the Gibbins et al. (2001) negotiation framework to the context of internal auditing, I obtain specific process (negotiation issue, auditor-auditee process, and outcome) and context elements that form the basis of my analyses. Through the additional use of inductive procedures, I conclude that internal auditors negotiate when they face professional and non-professional resistance from auditees during the audit process (RQ1). This resistance occurs in a variety of audit types and audit issues. Internal auditors choose negotiations to overcome this resistance primarily out of functional interest, as they cannot simply instruct auditees to acknowledge the findings and implement the required actions (RQ2). I find that the implementation of the required actions is the main goal of the respondents, which is also an important quality factor for internal auditing. Although few respondents interpret these interactions with auditees as negotiations, all respondents use a variety of negotiation strategies to create value (e.g., cost cutting, logrolling, and bridging) and claim value (e.g. positional commitment and threats) (RQ3). Finally, I contribute to empirical research on internal audit negotiations and internal audit quality by shedding light on the black box of internal auditor-auditee interactions. The second study consists of two experiments that examine the effects of tax auditors’ emotion expressions during tax audit negotiations. In the first experiment, we demonstrate that auditors expressing anger obtain more concessions from taxpayers than auditors expressing happiness. This reveals that taxpayers interpret auditors’ emotions strategically and do not respond affectively. In the second experiment, we show that the experience with an auditor who expressed either happiness or anger reduces taxpayers’ post-audit compliance compared to the experience with an emotionally neutral auditor. Apparently, taxpayers use their experience with an emotional auditor to rationalize later noncompliance. Taken together, both experiments show the potentially detrimental effects of positive and negative emotion expressions by the auditor and point to the benefits of avoiding emotion expressions. We find that when auditors avoid emotion expressions this does not result in fewer concessions from taxpayers than when auditors express anger. However, when auditors avoid emotion expressions this leads to a significantly better evaluation of the taxpayer-auditor relationship and significantly reduces taxpayers’ post-audit noncompliance.
Natural gas hydrates are ice-like crystalline compounds containing water cavities that trap natural gas molecules like methane (CH4), which is a potent greenhouse gas with high energy density. The Mallik site at the Mackenzie Delta in the Canadian Arctic contains a large volume of technically recoverable CH4 hydrate beneath the base of the permafrost. Understanding how the sub-permafrost hydrate is distributed can aid in searching for the ideal locations for deploying CH4 production wells to develop the hydrate as a cleaner alternative to crude oil or coal. Globally, atmospheric warming driving permafrost thaw results in sub-permafrost hydrate dissociation, releasing CH4 into the atmosphere to intensify global warming. It is therefore crucial to evaluate the potential risk of hydrate dissociation due to permafrost degradation. To quantitatively predict hydrate distribution and volume in complex sub-permafrost environments, a numerical framework was developed to simulate sub-permafrost hydrate formation by coupling the equilibrium CH4-hydrate formation approach with a fluid flow and transport simulator (TRANSPORTSE). In addition, integrating the equations of state describing ice melting and forming with TRANSPORTSE enabled this framework to simulate the permafrost evolution during the sub-permafrost hydrate formation. A modified sub-permafrost hydrate formation mechanism for the Mallik site is presented in this study. According to this mechanism, the CH4-rich fluids have been vertically transported since the Late Pleistocene from deep overpressurized zones via geologic fault networks to form the observed hydrate deposits in the Kugmallit–Mackenzie Bay Sequences. The established numerical framework was verified by a benchmark of hydrate formation via dissolved methane. Model calibration was performed based on laboratory data measured during a multi-stage hydrate formation experiment undertaken in the LArge scale Reservoir Simulator (LARS). As the temporal and spatial evolution of simulated and observed hydrate saturation matched well, the LARS model was therefore validated. This laboratory-scale model was then upscaled to a field-scale 2D model generated from a seismic transect across the Mallik site. The simulation confirmed the feasibility of the introduced sub-permafrost hydrate formation mechanism by demonstrating consistency with field observations. The 2D model was extended to the first 3D model of the Mallik site by using well-logs and seismic profiles, to investigate the geologic controls on the spatial hydrate distribution. An assessment of this simulation revealed the hydraulic contribution of each geological element, including relevant fault networks and sedimentary sequences. Based on the simulation results, the observed heterogeneous distribution of sub-permafrost hydrate resulted from the combined factors of the source-gas generation rate, subsurface temperature, and the permeability of geologic elements. Analysis of the results revealed that the Mallik permafrost was heated by 0.8–1.3 °C, induced by the global temperature increase of 0.44 °C and accelerated by Arctic amplification from the early 1970s to the mid-2000s. This study presents a numerical framework that can be applied to study the formation of the permafrost-hydrate system from laboratory to field scales, across timescales ranging from hours to millions of years. 
Overall, these simulations deepen the knowledge of the dominant factors controlling the spatial hydrate distribution in sub-permafrost environments with heterogeneous geologic elements. The framework can help improve the design of hydrate formation experiments and provide valuable contributions to future industrial hydrate exploration and exploitation activities.
The East African Rift System (EARS) is a significant example of active tectonics, providing opportunities to examine the stages of continental faulting and landscape evolution. Its southwest extension is among the most active of these settings today; nevertheless, seismotectonic research in the area has been scarce, despite the fundamental importance of neotectonics. Our first study area is located between the Northern Province of Zambia and the southeastern Katanga Province of the Democratic Republic of Congo. Lakes Mweru and Mweru Wantipa are part of the southwest extension of the EARS. Fault analysis reveals that, since the Miocene, movements along the active Mweru-Mweru Wantipa Fault System (MMFS) have been largely responsible for the reorganization of the landscape and the drainage patterns across the southwestern branch of the EARS. To investigate the spatial and temporal patterns of fluvial-lacustrine landscape development, we determined in-situ cosmogenic 10Be and 26Al in a total of twenty-six quartzitic bedrock samples that were collected from knickpoints across the Mporokoso Plateau (south of Lake Mweru) and the eastern part of the Kundelungu Plateau (north of Lake Mweru). Samples from the Mporokoso Plateau and close to the MMFS provide evidence of temporary burial. By contrast, surfaces located far from the MMFS appear to have remained uncovered since their initial exposure, as they show consistent 10Be and 26Al exposure ages ranging up to ~830 ka. Reconciliation of the observed burial patterns with morphotectonic and stratigraphic analysis reveals the existence of an extensive paleo-lake during the Pleistocene. Through hypsometric analyses of the dated knickpoints, the potential maximum water level of the paleo-lake is constrained to ~1200 m asl (present lake level: 917 m asl). High denudation rates (up to ~40 mm ka⁻¹) along the eastern Kundelungu Plateau suggest that footwall uplift, resulting from normal faulting, caused river incision, possibly controlling paleo-lake drainage. The lake level was reduced gradually, reaching its current level at ~350 ka.
Parallel to the MMFS in the north, the Upemba Fault System (UFS) extends across the southeastern Katanga Province of the Democratic Republic of Congo. This part of our research focuses on the geomorphological behavior of the Kiubo Waterfalls. The waterfalls are the currently active knickpoint of the Lufira River, which flows into the Upemba Depression. Eleven bedrock samples were collected along the Lufira River and its tributary, the Luvilombo River. In-situ cosmogenic 10Be and 26Al were used to constrain the K constant of the stream power law equation. Constraining the K constant allowed us to calculate the knickpoint retreat rate of the Kiubo Waterfalls at ~0.096 m a⁻¹. Combining the calculated retreat rate of the knickpoint with DNA sequencing of fish populations, we developed extrapolation models and estimated the location of the onset of the Kiubo Waterfalls, revealing its connection to the seismicity of the UFS.
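For reference, the stream power law underlying this K calibration is usually written in the standard form below; the notation (E for erosion rate, A for drainage area, S for channel slope, and the exponents m and n) is the conventional one and is given here only as a sketch, not as the exact parameterization used in the thesis.

```latex
% Standard form of the stream power law (conventional notation, not the
% thesis's exact parameterization):
E = K\,A^{m}\,S^{n}
% Treating a knickpoint as a kinematic wave, its upstream retreat celerity
% is commonly approximated as
\frac{\mathrm{d}x}{\mathrm{d}t} = K\,A^{m}
```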
Air pollution has been a persistent global problem over the past several hundred years. While some industrialized nations have shown improvements in their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO’s 2021 update of its recommended air pollution limit values reflects the substantial impacts on human health of pollutants such as NO2 and O3, as recent epidemiological evidence suggests substantial long-term health impacts of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, the new technology of low-cost sensors (LCS) has been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of different applications, including in the development of higher-resolution measurement networks, in source identification, and in measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS with reference instrumentation and various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist, and most proprietary calibration algorithms are black boxes, inaccessible to the public. This work seeks to expand the knowledge base on LCS in several different ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability in measuring microscale changes in urban air pollution; 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on resultant changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work advanced the effort towards standardizing calibration methodologies. In addition, with the open-source publication of code and data for the seven-step methodology, advances were made towards reforming the largely black-box nature of LCS calibrations.
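To make the later steps of this workflow concrete, the following is a minimal sketch of how steps 4 to 7 might be implemented against co-located reference data; the file name, column names, and the choice of a random forest regressor are illustrative assumptions and not the calibration models actually published with the thesis.

```python
# Minimal sketch of steps 4-7 of the seven-step calibration workflow.
# File name, column names, and the random-forest model are illustrative
# assumptions; the thesis's actual model choices may differ.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Co-located low-cost sensor (LCS) and reference measurements (hypothetical file).
data = pd.read_csv("colocation_no2.csv", parse_dates=["time"])
features = ["lcs_no2_raw", "temperature", "rel_humidity"]  # assumed predictors
target = "ref_no2"                                         # reference NO2

# Step 4: model selection and tuning (here a plain random forest).
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data[target], test_size=0.25, shuffle=False)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

# Step 5: model validation on the held-out period.
pred_test = model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, pred_test))
print(f"validation RMSE: {rmse:.2f} ug/m3")

# Steps 6-7: export final predictions with a simple residual-based uncertainty.
data["no2_calibrated"] = model.predict(data[features])
data["no2_uncertainty"] = np.std(y_test.to_numpy() - pred_test)
data.to_csv("no2_calibrated.csv", index=False)
```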
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Using two types of LCS, metal oxide (MOS) and electrochemical (EC), their performance in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched general diurnal patterns in NO2 and O3 pollution measured using reference instruments. While MOS proved to be unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies on NO2 and O3 pollution distribution in street canyons. As such, it was concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second concerned the temporary implementation of a community space on Böckhstrasse, and the last focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess changes in air quality resulting from these policies. Results from the Kottbusser Damm experiment showed that the bike lane reduced the NO2 concentrations that cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies’ success and future, highlighting the ability of LCS to provide policy-relevant results.
As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first of its kind to connect LCS measurements directly with mobility policies to understand their influences on local air quality, resulting in policy-relevant findings valuable for decision-makers. It serves as an example of the potential for LCS to expand our understanding of air pollution at various scales, as well as their ability to serve as valuable tools in transdisciplinary research.
Rainfall-triggered landslides are a globally occurring hazard that cause several thousand fatalities per year on average and lead to economic damages by destroying buildings and infrastructure and blocking transportation networks. For people living and governing in susceptible areas, knowing not only where, but also when landslides are most probable is key to inform strategies to reduce risk, requiring reliable assessments of weather-related landslide hazard and adequate warning. Taking proper action during high hazard periods, such as moving to higher levels of houses, closing roads and rail networks, and evacuating neighborhoods, can save lives. Nevertheless, many regions of the world with high landslide risk currently lack dedicated, operational landslide early warning systems.
The growing availability of temporal landslide inventory data in some regions has enabled data-driven approaches to estimate landslide hazard on the basis of rainfall conditions. In other areas, however, such data remain scarce, calling for appropriate statistical methods to estimate hazard with limited data. The overarching motivation for this dissertation is to further our ability to predict rainfall-triggered landslides in time in order to expand and improve warning. To this end, I applied Bayesian inference to probabilistically quantify and predict landslide activity as a function of rainfall conditions at spatial scales ranging from a small coastal town, to metropolitan areas worldwide, to a multi-state region, and temporal scales from hourly to seasonal. This thesis is composed of three studies.
In the first study, I contributed to developing and validating statistical models for an online landslide warning dashboard for the small town of Sitka, Alaska, USA. We used logistic and Poisson regressions to estimate daily landslide probability and counts from an inventory of only five reported landslide events and 18 years of hourly precipitation measurements at the Sitka airport. Drawing on community input, we established two warning thresholds for implementation in the dashboard, which uses observed rainfall and US National Weather Service forecasts to provide real-time estimates of landslide hazard.
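As an illustration of the two model types used for the warning dashboard, the sketch below fits a logistic regression for daily landslide occurrence and a Poisson regression for daily counts against a single rainfall covariate; the file name, column names, and the use of one 3-hour rainfall predictor are hypothetical stand-ins rather than the dashboard's actual specification.

```python
# Minimal sketch of the two regression types described above: a logistic model
# for daily landslide occurrence and a Poisson model for daily landslide counts.
# Column names and the single rainfall predictor are illustrative assumptions.
import pandas as pd
import statsmodels.api as sm

daily = pd.read_csv("sitka_daily.csv")              # hypothetical daily data file
X = sm.add_constant(daily[["rain_3h_max_mm"]])      # e.g. daily max 3-hour rainfall

# Logistic regression: probability that at least one landslide occurs that day.
logit = sm.GLM(daily["landslide_occurred"], X,
               family=sm.families.Binomial()).fit()

# Poisson regression: expected number of landslides per day.
poisson = sm.GLM(daily["landslide_count"], X,
                 family=sm.families.Poisson()).fit()

# Probability and expected count for a forecast rainfall value.
forecast = pd.DataFrame({"const": [1.0], "rain_3h_max_mm": [30.0]})
print("P(landslide):", logit.predict(forecast)[0])
print("expected count:", poisson.predict(forecast)[0])
```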
In the second study, I estimated rainfall intensity-duration thresholds for shallow landsliding for 26 cities worldwide and a global threshold for urban landslides. I found that landslides in urban areas occurred at rainfall intensities that were lower than previously reported global thresholds, and that 31% of urban landslides were triggered during moderate rainfall events. However, landslides in cities with widely varying climates and topographies were triggered above similar critical rainfall intensities: thresholds for 77% of cities were indistinguishable from the global threshold, suggesting that urbanization may harmonize thresholds between cities, overprinting natural variability. I provide a baseline threshold that could be considered for warning in cities with limited landslide inventory data.
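Rainfall intensity-duration thresholds of this kind are conventionally expressed as a power law in duration; the form below is the standard parameterization, with α and β as placeholder coefficients rather than the values estimated in the study.

```latex
% Conventional power-law form of a rainfall intensity-duration threshold:
% events with mean intensity I (mm/h) lasting longer than duration D (h)
% are expected to trigger landslides when
I \;\ge\; \alpha\,D^{\beta}, \qquad \beta < 0
```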
In the third study, I investigated seasonal landslide response to annual precipitation patterns in the Pacific Northwest region, USA by using Bayesian multi-level models to combine data from five heterogeneous landslide inventories that cover different areas and time periods. I quantitatively confirmed a distinctly seasonal pattern of landsliding and found that peak landslide activity lags the annual precipitation peak. In February, at the height of the landslide season, landslide intensity for a given amount of monthly rainfall is up to ten times higher than at the season onset in November, underlining the importance of antecedent seasonal hillslope conditions.
Together, these studies contributed actionable, objective information for landslide early warning and examples for the application of Bayesian methods to probabilistically quantify landslide hazard from inventory and rainfall data.
Residential segregation is a widespread phenomenon that can be observed in almost every major city. In these urban areas, residents with different ethnic or socioeconomic backgrounds tend to form homogeneous clusters. In Schelling’s classical segregation model, two types of agents are placed on a grid. An agent is content with its location if the fraction of its neighbors that have the same type as the agent is at least 𝜏, for some 0 < 𝜏 ≤ 1. Discontent agents simply swap their location with a randomly chosen other discontent agent or jump to a random empty location. The model gives a coherent explanation of how clusters can form even if all agents are tolerant, i.e., if they agree to live in mixed neighborhoods. For segregation to occur, all it needs is a slight bias towards agents preferring similar neighbors.
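The random-process dynamics just described can be sketched in a few lines; the grid size, threshold, share of empty cells, toroidal neighborhood, and fixed number of moves below are illustrative choices, and only the jump variant of the classical process is shown.

```python
# Minimal sketch of classical Schelling jump dynamics: two agent types on a
# grid, an agent is content if at least a fraction TAU of its occupied
# neighbours share its type, and discontent agents jump to a random empty cell.
# All parameter values are illustrative.
import random

SIZE, TAU, EMPTY_SHARE = 30, 0.5, 0.1
cells = [1, 2] * int(SIZE * SIZE * (1 - EMPTY_SHARE) / 2)
cells += [0] * (SIZE * SIZE - len(cells))            # 0 marks an empty cell
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def content(r, c):
    """An agent is content if >= TAU of its occupied neighbours share its type."""
    same = occupied = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = (r + dr) % SIZE, (c + dc) % SIZE   # toroidal neighbourhood
            if grid[nr][nc] != 0:
                occupied += 1
                same += grid[nr][nc] == grid[r][c]
    return occupied == 0 or same / occupied >= TAU

for _ in range(20_000):                               # fixed number of jump moves
    r, c = random.randrange(SIZE), random.randrange(SIZE)
    if grid[r][c] == 0 or content(r, c):
        continue
    empties = [(i, j) for i in range(SIZE) for j in range(SIZE) if grid[i][j] == 0]
    er, ec = random.choice(empties)
    grid[er][ec], grid[r][c] = grid[r][c], 0          # discontent agent jumps
```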
Although the model is well studied, previous research has mostly taken a random-process point of view. However, it is more realistic to assume that the agents strategically choose where to live. We close this gap by introducing and analyzing game-theoretic models of Schelling segregation, where rational agents strategically choose their locations.
As a first step, we introduce and analyze a generalized game-theoretic model that allows more than two agent types and more general underlying graphs modeling the residential area. We introduce different versions of Swap and Jump Schelling Games. Swap Schelling Games assume that every vertex of the underlying graph serving as a residential area is occupied by an agent, and pairs of discontent agents can swap their locations, i.e., their occupied vertices, to increase their utility. In contrast, for the Jump Schelling Game, we assume that there exist empty vertices in the graph and agents can jump to these vacant vertices if this increases their utility. We show that the number of agent types as well as the structure of the underlying graph heavily influence the dynamic properties and the tractability of finding an optimal strategy profile.
As a second step, we significantly deepen these investigations for the swap version with 𝜏 = 1 by studying the influence of the underlying topology modeling the residential area on the existence of equilibria, the Price of Anarchy, and the dynamic properties. Moreover, we restrict the movement of agents locally. As a main takeaway, we find that both aspects influence the existence and the quality of stable states.
Furthermore, also for the swap model, we follow sociological surveys and, asking the same core game-theoretic questions, study non-monotone single-peaked utility functions instead of monotone ones, i.e., utility functions that are not monotone in the fraction of same-type neighbors. Our results clearly show that moving from monotone to non-monotone utilities yields novel structural properties and different results in terms of the existence and quality of stable states.
In the last part, we introduce an agent-based saturated open-city variant, the Flip Schelling Process, in which agents, based on the predominant type in their neighborhood, decide whether to change their types. We provide a general framework for analyzing the influence of the underlying topology on residential segregation and investigate the probability that an edge is monochrome, i.e., that both incident vertices have the same type, on random geometric and Erdős–Rényi graphs. For random geometric graphs, we prove the existence of a constant c > 0 such that the expected fraction of monochrome edges after the Flip Schelling Process is at least 1/2 + c. For Erdős–Rényi graphs, we show the expected fraction of monochrome edges after the Flip Schelling Process is at most 1/2 + o(1).
Today, point clouds are among the most important categories of spatial data, as they constitute digital 3D models of the as-is reality that can be created at unprecedented speed and precision. However, their unique properties, i.e., lack of structure, order, or connectivity information, necessitate specialized data structures and algorithms to leverage their full precision. In particular, this holds true for the interactive visualization of point clouds, which requires balancing hardware limitations regarding GPU memory and bandwidth against a naturally high susceptibility to visual artifacts.
This thesis focuses on concepts, techniques, and implementations of robust, scalable, and portable 3D visualization systems for massive point clouds. To that end, a number of rendering, visualization, and interaction techniques are introduced that extend several basic strategies to decouple rendering efforts and data management: First, a novel visualization technique that facilitates context-aware filtering, highlighting, and interaction within point cloud depictions. Second, hardware-specific optimization techniques that improve rendering performance and image quality in an increasingly diversified hardware landscape. Third, natural and artificial locomotion techniques for nausea-free exploration in the context of state-of-the-art virtual reality devices. Fourth, a framework for web-based rendering that enables collaborative exploration of point clouds across device ecosystems and facilitates the integration into established workflows and software systems.
In cooperation with partners from industry and academia, the practicability and robustness of the presented techniques are showcased via several case studies using representative application scenarios and point cloud data sets. In summary, the work shows that the interactive visualization of point clouds can be implemented by a multi-tier software architecture with a number of domain-independent, generic system components that rely on optimization strategies specific to large point clouds. It demonstrates the feasibility of interactive, scalable point cloud visualization as a key component for distributed IT solutions that operate with spatial digital twins, providing arguments in favor of using point clouds as a universal type of spatial base data usable directly for visualization purposes.
Control over spin and electronic structure of MoS₂ monolayer via interactions with substrates
(2023)
The molybdenum disulfide (MoS2) monolayer is a semiconductor with a direct bandgap and is, at the same time, a robust and affordable material.
It is a candidate for applications in optoelectronics and field-effect transistors.
MoS2 features a strong spin-orbit coupling, which makes its spin structure promising for realizing the Kane-Mele topological concept, with corresponding applications in spintronics and valleytronics.
From the optical point of view, the MoS2 monolayer features two valleys in the regions of K and K' points. These valleys are differentiated by opposite spins and a related valley-selective circular dichroism.
In this study, we aim to manipulate the MoS2 monolayer spin structure in the vicinity of the K and K' points to explore the possibility of gaining control over the optical and electronic properties.
We focus on two different substrates to demonstrate two distinct routes: a gold substrate to introduce a Rashba effect and a graphene/cobalt substrate to introduce a magnetic proximity effect in MoS2.
The Rashba effect is proportional to the out-of-plane projection of the electric field gradient. Such a strong change of the electric field occurs at the surfaces of high-atomic-number materials and acts on conduction electrons effectively like an in-plane magnetic field. Molybdenum and sulfur are relatively light atoms; thus, as in many other 2D materials, the intrinsic Rashba effect in the MoS2 monolayer is vanishingly small. However, the proximity of a high-atomic-number substrate may enhance the Rashba effect in a 2D material, as was previously demonstrated for graphene.
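For orientation, the Rashba contribution to the in-plane spin texture is usually written in the following textbook form; the Rashba parameter α_R and the notation are the standard ones and are given here only as a sketch, not as the thesis's model Hamiltonian.

```latex
% Textbook form of the Rashba spin-orbit term (standard notation):
H_{R} \;=\; \alpha_{R}\,\left(\boldsymbol{\sigma} \times \mathbf{k}\right)\cdot \hat{\mathbf{z}}
      \;=\; \alpha_{R}\,\left(\sigma_{x} k_{y} - \sigma_{y} k_{x}\right)
% alpha_R grows with the out-of-plane electric field at the interface, which is
% why a heavy-element substrate such as Au(111) can strongly enhance it.
```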
Another way to modify the spin structure is to apply an external magnetic field of high magnitude (several Tesla) and cause a Zeeman splitting of the conduction electrons.
However, a similar effect can be achieved via magnetic proximity, which allows the external magnetic field to be reduced significantly or even to zero. The graphene-on-cobalt interface is ferromagnetic and stable for MoS2 monolayer synthesis. Cobalt is not the strongest magnet; therefore, stronger magnets may lead to more significant results.
Nowadays most experimental studies on the dichalcogenides (MoS2 included) are performed on encapsulated heterostructures that are produced by mechanical exfoliation.
While mechanical exfoliation (or the scotch-tape method) allows a huge variety of structures to be produced, the shape and size of the samples as well as the distance between layers in heterostructures are impossible to control reproducibly.
In our study we used molecular beam epitaxy (MBE) methods to synthesise both MoS2/Au(111) and MoS2/graphene/Co systems.
We chose to use MBE as it is a scalable and reproducible approach, so that industry may later adopt it.
We used graphene/cobalt instead of a bare cobalt substrate because direct contact between the MoS2 monolayer and a metallic substrate may lead to photoluminescence (PL) quenching. Graphene and the hexagonal boron nitride monolayer are considered building blocks of a new generation of electronics and are also commonly used as encapsulating materials for PL studies. Moreover, graphene has proved to be a suitable substrate for the MBE growth of transition metal dichalcogenides (TMDCs).
In chapter 1, we start with an introduction to TMDCs. We then review the state of the art of MoS2 monolayer research regarding application scenarios; synthesis approaches; electronic, spin, and optical properties; and interactions with magnetic fields and magnetic materials.
We briefly touch on the basics of magnetism in solids and move on to discuss various magnetic exchange interactions and the magnetic proximity effect.
Then we describe MoS2 optical properties in more detail. We start from basic exciton physics and its manifestation in the MoS2 monolayer. We consider optical selection rules in the MoS2 monolayer and such properties as chirality, spin-valley locking, and coexistence of bright and dark excitons.
Chapter 2 contains an overview of the employed surface science methods: angle-integrated, angle-resolved, and spin-resolved photoemission; low energy electron diffraction and scanning tunneling microscopy.
In chapter 3, we describe the MoS2 monolayer synthesis details for two substrates: a gold single crystal with a (111) surface and graphene on a cobalt thin film with Co(111) surface orientation.
The synthesis descriptions are followed by a detailed characterisation of the obtained structures: fingerprints of MoS2 monolayer formation; MoS2 monolayer symmetry and its relation to the substrate below; and characterisation of MoS2 monolayer coverage, domain distribution, sizes and shapes, and moiré structures.
In chapter 4, we start our discussion with the MoS2/Au(111) electronic and spin structure. Combining density functional theory (DFT) computations and spin-resolved photoemission studies, we demonstrate that the MoS2 monolayer band structure features an in-plane Rashba spin splitting. This confirms the possibility of manipulating the MoS2 monolayer spin structure via a substrate.
Then we investigate the influence of a magnetic proximity in the MoS2/graphene/Co system on the MoS2 monolayer spin structure.
We focus our investigation on the MoS2 high-symmetry points Γ and K.
First, using spin-resolved measurements, we confirm that electronic states at the Γ point are spin-split via the magnetic proximity effect. Second, combining spin-resolved measurements and DFT computations for the MoS2 monolayer in the K point region, we demonstrate the appearance of a small in-plane spin polarisation at the valence band top and predict a full in-plane spin polarisation for the conduction band bottom.
We then discuss how these findings relate to the MoS2 monolayer optical properties, in particular the possibility of observing dark excitons. Additionally, we speculate on the control of the MoS2 valley energy via magnetic proximity from cobalt.
As graphene is spatially buffering the MoS2 monolayer from the Co thin film, we speculate on the role of graphene in the magnetic proximity transfer by replacing graphene with vacuum and other 2D materials in our computations.
We finish our discussion by investigating the K-doped MoS2/graphene/Co system and the influence of this doping on the electronic and spin structure as well as on the magnetic proximity effect.
In summary, using a scalable MBE approach we synthesised the MoS2/Au(111) and MoS2/graphene/Co systems. We found a Rashba effect in MoS2/Au(111), which proves that the MoS2 monolayer in-plane spin structure can be modified. In MoS2/graphene/Co the in-plane magnetic proximity effect indeed takes place, which raises the possibility of fine-tuning the MoS2 optical properties via manipulation of the substrate magnetisation.
This work investigates the impacts of climate change on water resources, droughts, and hydropower production, and their sensitivity to climate change, in Malawi, located in south-eastern Africa, a region highly vulnerable to climate change. Rainfall is observed to be decreasing and temperature to be increasing, which calls for an understanding of how these changes may affect water resources, drought occurrence, and hydropower generation in the region. The study is conducted in the Greater Lake Malawi Basin (Lake Malawi and Shire River Basins) and is divided into three projects. The first study assesses the variability and trends of both meteorological and hydrological droughts from 1970-2013 in the Lake Malawi and Shire River basins, using the standardized precipitation index (SPI) and the standardized precipitation evapotranspiration index (SPEI) for meteorological droughts and the lake level change index (LLCI) for hydrological droughts. The relationship between meteorological and hydrological droughts is then established.
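As a pointer to how an SPI of the kind used in the first study is conventionally computed, the sketch below fits a gamma distribution to accumulated precipitation and maps it to a standard normal variate; the synthetic data, the three-month timescale, and the omission of zero-precipitation handling and per-calendar-month fitting are simplifications for illustration only.

```python
# Minimal sketch of the standardized precipitation index (SPI): precipitation
# accumulated over a chosen timescale is fitted with a gamma distribution and
# transformed to a standard normal variate. For brevity, zero-precipitation
# months and per-calendar-month fitting are not handled here.
import numpy as np
from scipy import stats

def spi(monthly_precip, timescale=3):
    p = np.convolve(monthly_precip, np.ones(timescale), mode="valid")  # running sums
    shape, loc, scale = stats.gamma.fit(p, floc=0)                     # fit gamma
    cdf = stats.gamma.cdf(p, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)                                         # SPI values

rng = np.random.default_rng(1)
precip = rng.gamma(shape=2.0, scale=40.0, size=240)   # 20 years of synthetic data
print(spi(precip, timescale=3)[:12])
```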
The second study extends the drought analysis into the future by examining the potential future meteorological water balance and associated drought characteristics such as drought intensity (DI), drought months (DM), and drought events (DE) in the Greater Lake Malawi Basin. The sensitivity of drought to changes in rainfall and temperature is also assessed using the scenario-neutral approach. Climate change projections from 20 Coordinated Regional Climate Downscaling Experiment (CORDEX) models for Africa, based on two scenarios (RCP4.5 and RCP8.5) for the periods 2021–2050 and 2071–2100, are used. The study also investigates the effect of bias correction (i.e., empirical quantile mapping) on the ability of the climate model ensemble to reproduce observed drought characteristics, as compared to raw climate projections.
The sensitivity of key hydrologic variables and hydropower generation to climate change in the Lake Malawi and Shire River basins is assessed in the third study. The study adapts the mesoscale Hydrological Model (mHM), which is applied separately in the Upper Lake Malawi and Shire River basins. A dedicated Lake Malawi model, which focuses on reservoir routing and lake water balance, has been developed and interlinks the two basins. Similar to the second study, the scenario-neutral approach is applied to determine the sensitivity of water resources, in particular the Lake Malawi level and Shire River flow, to climate change, which in turn helps to estimate the susceptibility of hydropower production.
Results suggest that meteorological droughts are increasing due to a decrease in precipitation, which is exacerbated by an increase in temperature (potential evapotranspiration). The hydrological system of Lake Malawi seems to have a memory of more than 24 months with respect to meteorological conditions, since the 36-month SPEI can predict hydrological droughts ten months in advance. The study found the critical lake level that would trigger a hydrological drought to be 474.1 m a.s.l.
Despite the differences in internal structure and the uncertainties that exist among the climate models, they all agree on an increase in meteorological droughts in the future in terms of higher DI and longer events (DM). DI is projected to increase by between +25% and +50% during 2021-2050 and between +131% and +388% during 2071-2100. This translates into +3 to +5 and +7 to +8 more drought months per year during the two periods, respectively. With longer-lasting drought events, the number of drought events (DE) decreases. Projected droughts based on RCP8.5 are 1.7 times more severe than droughts based on RCP4.5.
It is also found that an annual temperature increase of 1°C decreases mean lake level and outflow by 0.3 m and 17%, respectively, signifying the importance of intensified evaporation for Lake Malawi’s water budget. Meanwhile, a +5% (-5%) deviation in annual rainfall changes mean lake level by +0.7 m (-0.6 m). The combined effects of temperature increase and rainfall decrease result in significantly lower flows on Shire River. The hydrological river regime may change from perennial to seasonal with the combination of annual temperature increase and precipitation decrease beyond 1.5°C (3.5°C) and -20% (-15%). The study further projects a reduction in annual hydropower production between 1% (RCP8.5) and 2.5% (RCP4.5) during 2021–2050 and between 5% (RCP4.5) and 24% (RCP8.5) during 2071–2100.
The findings are then linked to global policies, in particular the United Nations Framework Convention on Climate Change (UNFCCC) Paris Agreement and the United Nations (UN) Sustainable Development Goals (SDGs), showing how failure to keep the temperature increase below the global limit of 1.5°C will affect droughts and water resources in Malawi and consequently impact hydropower production. As a result, the achievement of most of the SDGs would be compromised.
The results show that further development of hydropower on the Shire River must take the effects of climate change into account. The information generated is important for decision-making, especially in supporting the climate action required to combat climate change. The increasing frequency of extreme climate events due to climate change has reached the level of a climate emergency, as saving lives and livelihoods requires urgent action.
Reiz der Revolution
(2023)
This dissertation examines the multifaceted entanglements and transfers within German solidarity with Nicaragua in the late 1970s and the 1980s. Even before coming to power, the Sandinistas had courted foreign state and civil support in both blocs. While building the Sandinista reform state, they simultaneously shaped an international network of solidarity relations that served to finance their social-reform programs but also to legitimize their rule.
In the Federal Republic alone, several hundred solidarity groups emerged. In the GDR, the political leadership initiated a state-directed solidarity campaign with Nicaragua, which tens of thousands of people and independent grassroots initiatives joined. Despite being rooted in rival systems and despite the heterogeneity of their worldviews, from Christian social teaching to the critical left, numerous solidarity initiatives in both countries worked toward the same goal: a Nicaragua beyond the blocs. Together with their Nicaraguan project partners, they opened up a new transnational space for communication and, in doing so, encountered differences and disputes over political ideas that inspired new practices on both sides of the Atlantic.
The research is based on an extensive evaluation of sources in a total of 13 archives, including the archive of the Robert-Havemann-Gesellschaft, the archive of the BStU, various West German social movement archives, and the archival holdings of the Nicaraguan Ministry of Culture.
This study aims to show that the narrative work of the writer Tomás Carrasquilla Naranjo (1858-1940) contains a Wahrheitsgehalt (Benjamin, 2012), the temporal concretion of an idea, which materializes through what I here call the image of popular religiosity. This means that the work of the Antioquian author is constructed like a great mosaic in which, despite the varied and uneven elements that compose it, their union produces an image (Bild). This image represents the historical experience of modernity among the popular sectors, arising from the fleeting union between the remnants of ancient traditions and the newest forms of life. Far from the conventions of his time, in which the question of the experience of modernity revolves around metropolitan settings and the role of the artist, Carrasquilla asks what happens in the vast rural or liminal spaces between the urban and the rural, and in their respective intersections. Lacking the conceptual tools to define this new "living experience", this new structure of feeling as Raymond Williams (2019) calls it, the subjects who inhabit these spaces appeal to the only thing they know, the ancient knowledge transmitted orally, in order to explain their present.
In this sense, it can be argued that Carrasquilla, drawing on this image of popular religiosity, attempted to establish a dialogue in the literary field from which he put forward a differential idea of modernity. On several occasions, the Antioquian author stated that literature should incorporate local experiences into the dialogue of the universal. One example is his simile comparing literature with the planetary system: according to him, hierarchical relations are established when the countries that produce literary fashions, the planets (Europe), relegate the others to being mere satellites, that is, to imitating (Carrasquilla, 1991). Today, that criticism directed at his compatriots, the Antioquian modernists, can be read as a vindication of alterity. It is therefore argued here that, although these lived experiences are not similar to those found in the emerging metropolitan settings, where commodities represent the new substitutes for faith, in those vast spaces, seemingly provincial and removed from contact with other cultures and forms of knowledge, the image of popular religiosity comes to play the same role. In other words, "indem an Dingen ihr Gebrauchswert abstirbt" (utility or adoration), the subjectivity of the character charges them with "Intentionen von Wunsch und Angst" (Benjamin, 2013a), turning them into objects of contemplation, whether carried or collected. In a similar way, Carrasquilla would have drawn on the residual stock of knowledge (Wissen) of his hypothetical readership, inherited from diverse cultural areas during the process of colonization, with their respective heterogeneous times and particular languages (Ette, 2019), in order to unite it with current profane experiences. Thus, the work (short story or novel) would artistically represent popular "forms of life" through which one "aesthetically experiences" how modernity is survived (überleben) (Ette, 2015) in the marginalized sectors. That is, only from the ancient and ruinous remains of popular religiosity, once sacred, is it possible to explain the experience of modernity, its here and now.
Visual perception is a complex and dynamic process that plays a crucial role in how we perceive and interact with the world. The eyes move in a sequence of saccades and fixations, actively modulating perception by moving different parts of the visual world into focus. Eye movement behavior can therefore offer rich insights into the underlying cognitive mechanisms and decision processes. Computational models in combination with a rigorous statistical framework are critical for advancing our understanding in this field, facilitating the testing of theory-driven predictions and accounting for observed data. In this thesis, I investigate eye movement behavior through the development of two mechanistic, generative, and theory-driven models. The first model is based on experimental research regarding the distribution of attention, particularly around the time of a saccade, and explains statistical characteristics of scan paths. The second model implements a self-avoiding random walk within a confining potential to represent the microscopic fixational drift, which is present even while the eye is at rest, and investigates the relationship to microsaccades. Both models are implemented in a likelihood-based framework, which supports the use of data assimilation methods to perform Bayesian parameter inference at the level of individual participants, analyses of the marginal posteriors of the interpretable parameters, model comparisons, and posterior predictive checks. The application of these methods enables a thorough investigation of individual variability in the space of parameters. Results show that dynamical modeling and the data assimilation framework are highly suitable for eye movement research and, more generally, for cognitive modeling.
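To illustrate the kind of mechanism behind the second model, the sketch below implements a self-avoiding random walk in a quadratic confining potential: the walker moves to the neighbouring lattice site with the lowest sum of accumulated activation and potential. The grid size, decay rate, and potential steepness are illustrative values and not the parameters actually fitted in the thesis.

```python
# Minimal sketch of a self-avoiding random walk in a quadratic confining
# potential, of the kind used to model microscopic fixational drift.
# All parameter values are illustrative.
import numpy as np

N, STEPS, DECAY, SLOPE = 51, 5000, 0.001, 1.0
centre = N // 2
y, x = np.mgrid[0:N, 0:N]
potential = SLOPE * ((x - centre) ** 2 + (y - centre) ** 2) / centre**2
activation = np.zeros((N, N))       # tracks where the walker has recently been
pos = np.array([centre, centre])
path = []

rng = np.random.default_rng(0)
for _ in range(STEPS):
    activation *= (1.0 - DECAY)                      # activations relax over time
    neighbours = pos + np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
    neighbours = neighbours[(neighbours >= 0).all(1) & (neighbours < N).all(1)]
    cost = np.array([activation[i, j] + potential[i, j] for i, j in neighbours])
    best = neighbours[cost == cost.min()]
    pos = best[rng.integers(len(best))]              # break ties at random
    activation[pos[0], pos[1]] += 1.0                # self-avoidance: mark the site
    path.append(pos.copy())                          # resulting drift trajectory
```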
In this work, energy-induced nanoparticle-substrate interactions were investigated. For this purpose, gold nanoparticle arrays (AuNPA) were fabricated on various silicon-based substrates, and the influence of an energy input, specifically a thermal treatment or metal-assisted chemical etching (MaCE), was tested. The nanoparticle arrays used for the thermal treatment were synthesized wet-chemically in toluene, functionalized with thiol-terminated polystyrene, and arranged in quasi-hexagonal patterns by spin coating on various substrates (three glasses and a silicon wafer). These AuNP arrays were thermally treated at temperatures between 475 °C and 792 °C for different periods of time. In general, the nanoparticles sank into the substrates, and it was found that the sinking depth of the nanoparticles decreased as the glass transition temperature of the substrates increased. The AuNPA on silicon wafers were heated to temperatures of 700 °C to 900 °C. The gold nanoparticles sank up to 2.5 nm into the Si substrate. Sintering of the nanoparticles occurred at temperatures above 660 °C. Which sintering mechanism is the dominant one could not be conclusively determined.
To investigate the influence of the second type of energy input, MaCE, AuNPA as well as gold-core/silver-shell arrays on silicon substrates were used. The AuNPA were prepared using poly(N-isopropylacrylamide) microgels and either sodium-citrate-stabilized gold nanoparticles (Na-AuNP) or tetrachloroauric acid (TCG). This yielded nanoparticle arrays with hemispherical particles (from Na-AuNP) on the one hand and nanoparticle arrays with spherical particles (from TCG) on the other. A subsequent silver growth reaction then produced the corresponding gold-core/silver-shell nanoparticle arrays. During MaCE, significant differences in the behavior of these four nanoparticle arrays were observed; for example, higher hydrogen peroxide concentrations (0.70 M to 0.91 M) had to be used for etching the hemispherical particle arrays than for the spherical particle arrays (0.08 M to 0.32 M) in order to achieve sinking of the nanoparticles into the substrate.
At school, all children and adolescents are expected to acquire the competencies they need to participate in society in a self-determined way. This requires responding in class to the students' individual learning prerequisites so that they can be optimally supported in their learning. In this context, one often speaks of "individualized instruction", which is characterized by adapting the learning opportunities as well as possible to the individual students. However, individualizing instruction can mean that students only work on their own tasks without interacting with one another. Some authors have therefore expressed the concern that the individualization of instruction leads to isolation in the classroom and that the learning group hardly plays a role as a community any more.
In addition to subject-related competencies, however, school is also meant to convey social values and norms that contribute to social integration and foster democracy. A central task of school is seen in preparing children and adolescents for living together in a plural society. They should learn to develop shared solutions, solidarity, and responsibility across differences. This can succeed by creating a community in the classroom in which all students feel that they belong and are valued, learn from one another, and at the same time are challenged to enter into negotiation processes with each other.
Individualized instruction and the experience of community are described by some authors as a field of tension in the classroom. So far, however, there is hardly any empirical research that has examined the topic of "community in individualized instruction" in detail. The present study addresses this gap and pursues the following research questions: 1. "What do teachers understand by a community in individualized instruction?", 2. "How do teachers shape a community in individualized instruction?", 3. "To what extent can community in individualized instruction be captured via students' sense of community?", 4. "Is there a relationship between students' sense of community and the individualization of instruction?", and 5. "In individualized instruction, is there a relationship between the sense of community and students' social relatedness?"
The research questions were addressed in three sub-studies, which were carried out in parallel in a mixed-methods design and related to one another at the end. All three studies drew on data sources from the Ada*Q project ("Adaptivität und Unterrichtsqualität im individualisierten Unterricht"; adaptivity and teaching quality in individualized instruction), in which nine primary schools that had won the German School Award were investigated using various data collections. In total, the present study used data from 32 teachers and 542 students from 49 learning groups. Sub-study 1 examined the understandings and the shaping of community (research questions 1 and 2) on the basis of interviews with teachers. In sub-study 2, a questionnaire scale for students to measure the sense of community was developed (research question 3), and relationships with individualized instruction and teaching quality were tested (research question 4). In sub-study 3, students were asked several times during lessons about their social relatedness using the experience sampling method, and relationships with individualized instruction and teaching quality as well as with the sense of community were also tested (research question 5).
With regard to research question 1, sub-study 1 showed that the teachers had different understandings of community, which focused on different aspects of community and complemented one another. The field of tension between individuality, heterogeneity, and community played a role for all teachers. In the context of research question 2, various actions and practices could also be identified by which teachers shaped a community in individualized instruction and productively combined individualized with communal learning. The teachers described a classroom community as central to students' social learning. They understood joint instruction in heterogeneous learning groups as preparation for living and working in a plural society.
Sub-study 2 showed that the questionnaire scale on students' sense of community had good reliability and validity and was thus suitable for further analyses. Subsequently, relationships emerged between the sense of community and teaching quality (cognitive activation, classroom management, and constructive support). Constructive support (by the teacher) in particular was strongly related to students' sense of community. This relationship was weaker in learning groups that were particularly heterogeneous in terms of achievement (captured here via mixed-age grouping); there, the sense of community was less dependent on the quality of the relationship with the teacher. Further analyses also found no relationship with characteristics of individualized instruction, which does not support the concern that individualizing instruction and community in the classroom are mutually exclusive.
Sub-study 3 showed that students' social relatedness varied strongly from situation to situation and from student to student. Students' average perceived social relatedness and the extent of its variation were closely related to the sense of community. This finding persisted when teaching quality was included, which showed no independent effect on social relatedness. In addition, positive relationships were found between social relatedness and characteristics of individualized instruction. Students felt more socially related when tasks were more differentiated and when they had more autonomy in working on the tasks.
In summary, the results of the present study indicate that community in individualized instruction plays an important role for teachers as well as for students. Teachers attribute important functions to a community in their teaching and actively take care to shape one. Students' sense of community showed positive relationships with relevant aspects of teaching quality and with social relatedness. The concern that individualizing instruction leads to the isolation of students could not be confirmed. Rather, individualization and community appear to support each other as two complementary approaches to dealing with heterogeneity.
Schools will continue to face the challenge of dealing productively with heterogeneity in the future. This requires attending not only to individual learning but also to how students interact socially with one another. The present study contributes to linking the two by working out the significance of community in individualized instruction from different perspectives. Finally, implications for research and practice are formulated.
Design Thinking is a human-centered approach to innovation that has become increasingly popular globally over the last decade. While the spread of Design Thinking is well understood and documented in the Western cultural contexts, particularly in Europe and the US due to the popularity of the Stanford-Potsdam Design Thinking education model, this is not the case when it comes to non-Western cultural contexts. This thesis fills a gap identified in the literature regarding how Design Thinking emerged, was perceived, adopted, and practiced in the Arab world. The culture in that part of the world differs from that of the Western context, which impacts the mindset of people and how they interact with Design Thinking tools and methods.
A mixed-methods research approach was followed in which both quantitative and qualitative methods were employed. First, two methods were used in the quantitative phase: a social media analysis using Twitter as a source of data, and an online questionnaire. The results and analysis of the quantitative data informed the design of the qualitative phase in which two methods were employed: ten semi-structured interviews, and participant observation of seven Design Thinking training events.
According to the analyzed data, the Arab world appears to have had an early, though relatively weak and slow, adoption of Design Thinking since 2006. Increasing adoption, however, has been witnessed over the last decade, especially in Saudi Arabia, the United Arab Emirates, and Egypt. The results also show that despite its limited spread, Design Thinking has been practiced the most in education, information technology and communication, administrative services, and the non-profit sectors. The way it is being practiced, though, is not fully aligned with how it is practiced and taught in the US and Europe, as most people in the region do not necessarily believe in all mindset attributes introduced by the Stanford-Potsdam tradition.
Practitioners in the Arab world also seem to shy away from the 'wild side' of Design Thinking in particular, and do not fully appreciate the connection between art and design, and science and engineering. This calls into question the role of the educational institutions in the region since, according to the findings, they appear to be leading the movement in promoting and developing Design Thinking in the Arab world. Nonetheless, it is notable that people seem to be aware of the positive impact of applying Design Thinking in the region and its potential to bring meaningful transformation. However, they also seem to be concerned about current cultural, social, political, and economic conditions that may hinder this transformation. Therefore, they call for more awareness and demand the creation of Arabic, culturally appropriate programs that respond to local needs. On another note, the lack of Arabic content and local case studies on Design Thinking was identified by several interviewees, and confirmed by the participant observation, as a major challenge slowing down the spread of Design Thinking or sometimes hampering capacity building in the region. Other challenges revealed by the study are: changing the mindset of people, the lack of dedicated Design Thinking spaces, and the need for clear instructions on how to apply Design Thinking methods and activities. The concept of time and how Arabs deal with it, gender management during trainings, and hierarchy and power dynamics among training participants are also among the identified challenges. Another key finding revealed by the study is the confirmation of التفكير التصميمي as the Arabic term most widely adopted in the region to refer to Design Thinking, since four other Arabic terms were also found to be associated with Design Thinking.
Based on the findings of the study, the thesis concludes by presenting a list of recommendations on how to overcome the mentioned challenges and what factors should be considered when designing and implementing culturally-customized Design Thinking training in the Arab region.
Casualties and damages from urban pluvial flooding are increasing. Triggered by short, localized, and intensive rainfall events, urban pluvial floods can occur anywhere, even in areas without a history of flooding. Urban pluvial floods have relatively small temporal and spatial scales. Although cumulative losses from urban pluvial floods are comparable to those from other flood types, most flood risk management and mitigation strategies focus on fluvial and coastal flooding. Physically based numerical hydrodynamic models are considered the best tool to represent the complex nature of urban pluvial floods; however, they are computationally expensive and time-consuming. This computational cost makes large-scale analysis and operational forecasting prohibitive. Therefore, it is crucial to evaluate and benchmark the performance of alternative methods.
The findings of this cumulative thesis are presented in three research articles. The first study evaluates two topography-based methods to map urban pluvial flooding, fill–spill–merge (FSM) and the topographic wetness index (TWI), by comparing them against a sophisticated hydrodynamic model. The FSM method identifies flood-prone areas within topographic depressions, while the TWI method employs maximum likelihood estimation to calibrate a TWI threshold (τ) based on inundation maps from the 2D hydrodynamic model. The results indicate that the FSM method outperforms the TWI method. The study then highlights the advantages and limitations of both methods.
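As a rough illustration of the TWI approach, the sketch below computes TWI = ln(a / tan β) from contributing-area and slope rasters and searches for a threshold τ against a benchmark inundation map; the synthetic rasters and the use of a simple critical-success-index criterion in place of the study's maximum likelihood estimation are simplifying assumptions.

```python
# Minimal sketch of flood-prone area delineation with the topographic wetness
# index, TWI = ln(a / tan(beta)), and a simple threshold search against a
# benchmark inundation map. The synthetic grids and the critical-success-index
# criterion (instead of maximum likelihood estimation) are illustrative only.
import numpy as np

def twi(contributing_area, slope_rad):
    return np.log(contributing_area / np.maximum(np.tan(slope_rad), 1e-6))

def best_threshold(twi_map, benchmark_flooded, candidates):
    """Pick tau maximising hits / (hits + misses + false alarms)."""
    scores = []
    for tau in candidates:
        predicted = twi_map >= tau
        hits = np.sum(predicted & benchmark_flooded)
        misses = np.sum(~predicted & benchmark_flooded)
        false_alarms = np.sum(predicted & ~benchmark_flooded)
        scores.append(hits / max(hits + misses + false_alarms, 1))
    return candidates[int(np.argmax(scores))]

# Synthetic grids standing in for DEM-derived rasters.
rng = np.random.default_rng(0)
area = rng.lognormal(mean=5.0, sigma=1.5, size=(100, 100))      # contributing area (m2)
slope = rng.uniform(0.001, 0.3, size=(100, 100))                # slope (radians)
benchmark = twi(area, slope) + rng.normal(0, 1, (100, 100)) > 9  # stand-in flood map
tau = best_threshold(twi(area, slope), benchmark, np.linspace(5, 12, 29))
print("calibrated TWI threshold:", tau)
```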
Data-driven models provide a promising alternative to computationally expensive hydrodynamic models. However, the literature lacks benchmarking studies to evaluate the different models' performance, advantages, and limitations. Model transferability in space is a crucial problem. Most studies focus on river flooding, likely due to the relative availability of flow and rain gauge records for training and validation. Furthermore, they consider these models as black boxes. The second study uses a flood inventory for the city of Berlin and 11 predictive features which potentially indicate an increased pluvial flooding hazard to map urban pluvial flood susceptibility using a convolutional neural network (CNN), an artificial neural network (ANN), and the benchmark machine learning models random forest (RF) and support vector machine (SVM). I investigate the influence of spatial resolution on the implemented models, the models' transferability in space, and the importance of the predictive features. The results show that all models perform well and that the RF models are superior to the other models within and outside the training domain. The models developed using fine spatial resolution (2 and 5 m) could better identify flood-prone areas. Finally, the results indicate that aspect is the most important predictive feature for the CNN models, and altitude for the other models.
While flood susceptibility maps identify flood-prone areas, they do not represent flood variables such as velocity and depth which are necessary for effective flood risk management. To address this, the third study investigates data-driven models' transferability to predict urban pluvial floodwater depth and the models' ability to enhance their predictions using transfer learning techniques. It compares the performance of RF (the best-performing model in the previous study) and CNN models using 12 predictive features and output from a hydrodynamic model. The findings in the third study suggest that while CNN models tend to generalise and smooth the target function on the training dataset, RF models suffer from overfitting. Hence, RF models are superior for predictions inside the training domains but fail outside them while CNN models could control the relative loss in performance outside the training domains. Finally, the CNN models benefit more from transfer learning techniques than RF models, boosting their performance outside training domains.
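The transfer-learning idea referred to above can be illustrated generically: a network trained on one domain is adapted to another by freezing its convolutional feature extractor and re-training only the final layers on a small sample of target-domain data. The architecture, layer sizes, and training settings below are assumptions for illustration and not the models actually used in the study.

```python
# Generic illustration of transfer learning for a regression CNN: freeze the
# convolutional feature extractor trained on the source domain and re-train
# only the dense head on a small target-domain sample. Architecture and
# settings are illustrative assumptions, not the study's models.
import tensorflow as tf

def build_cnn(n_features, patch=16):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(patch, patch, n_features)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="relu"),   # predicted water depth (m)
    ])

source_model = build_cnn(n_features=12)
source_model.compile(optimizer="adam", loss="mse")
# source_model.fit(X_source, y_source, epochs=50)      # train on the source city

# Transfer: freeze all but the last two (dense) layers, re-train the head only.
for layer in source_model.layers[:-2]:
    layer.trainable = False
source_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# source_model.fit(X_target_small, y_target_small, epochs=20)
```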
In conclusion, this thesis has evaluated both topography-based methods and data-driven models to map urban pluvial flooding. However, further studies are needed to develop methods that fully overcome the limitations of 2D hydrodynamic models.
The North Pamir, part of the India-Asia collision zone, essentially formed during the late Paleozoic to late Triassic–early Jurassic. Coeval with the subduction of the Turkestan ocean during the Carboniferous Hercynian orogeny in the Tien Shan, a portion of the Paleo-Tethys ocean subducted northward and led to the formation and obduction of a volcanic arc. This Carboniferous North Pamir arc is of Andean style in the western Darvaz segment and trends towards an intraoceanic arc in the eastern Oytag segment. A suite of arc-volcanic rocks and intercalated marine sediments, together with intruded voluminous plagiogranites (trondhjemite and tonalite) and granodiorites, was uplifted and eroded during the Permian, as demonstrated by widespread sedimentary unconformities. Today it constitutes a major portion of the North Pamir.
In this work, the first comprehensive Uranium-Lead (U-Pb) laser-ablation inductively-coupled-plasma mass-spectrometry (LA-ICP-MS) radiometric age data are presented along with geochemical data from the volcanic and plutonic rocks of the North Pamir volcanic arc. Zircon U-Pb data indicate a major intrusive phase between 340 and 320 Ma. The magmatic rocks show an arc signature, with more primitive signatures in the Oytag segment compared to the Darvaz segment. Volcanic rocks in the Chinese North Pamir were indirectly dated by determining the age of ocean floor alteration. We investigate calcite-filled vesicles and show that oxidative seawater and the basaltic host rock are the major trace element sources. The age of ocean floor alteration, within a range of 25 Ma, constrains the extrusion age of the volcanic rocks. In the Chinese Pamir, arc-volcanic basalts have been dated to the Visean-Serpukhovian boundary. This relates the North Pamir volcanic arc to coeval units in the Tien Shan. Our findings further question the idea of a continuous Tarim-Tajik continent in the Paleozoic.
From the Permian (Guadalupian) onward, a progressive sea retreat led to continental conditions in the northeastern Pamir. Large parts of Central Asia were affected by transcurrent tectonics, while subduction of the Paleo-Tethys continued south of the accreted North Pamir arc, likely forming an accretionary wedge that represents an early stage of the later Karakul-Mazar tectonic unit. Graben systems dissected the Permian carbonate platforms that formed on top of the uplifted Carboniferous arc in the central and western North Pamir. A continental graben formed in the eastern North Pamir. Zircon U-Pb dating suggests initiation of volcanic activity at ~260 Ma. Extensional tectonics prevailed throughout the Triassic, forming the Hindukush-North Pamir rift system. New geochemistry and zircon U-Pb data tie volcanic rocks found in the Chinese Pamir to coeval arc-related plutonic rocks found within the Karakul-Mazar arc-accretionary complex. The sedimentary environment in the continental North Pamir rift evolved from an alluvial-plain, lake-dominated environment in the Guadalupian to a coarser-clastic, alluvial, braided-river-dominated environment in the Triassic. Volcanic activity terminated in the early Jurassic. We conducted Potassium-Argon (K-Ar) fine-fraction dating on the Shala Tala thrust fault, a major structure juxtaposing Paleozoic marine units of lower greenschist to amphibolite facies conditions against continental Permian deposits. Fault slip under epizonal conditions is dated to 204.8 ± 3.7 Ma (2σ), implying Rhaetian nappe emplacement. This pinpoints the Central–North Pamir collision, since the Shala Tala thrust was a back-thrust at that time.
This thesis bridges two areas of mathematics: algebra on the one hand, with the Milnor-Moore theorem (also called the Cartier-Quillen-Milnor-Moore theorem) and the Poincaré-Birkhoff-Witt theorem, and analysis on the other hand, with Shintani zeta functions, which generalise multiple zeta functions.
The first part is devoted to an algebraic formulation of the locality principle in physics and to generalisations of classification theorems such as the Milnor-Moore and Poincaré-Birkhoff-Witt theorems to the locality framework. The locality principle roughly says that events taking place far apart in spacetime do not influence each other. The algebraic formulation of this principle discussed here is useful when analysing singularities which arise from events located far apart in space, in order to renormalise them while keeping a memory of the fact that they do not influence each other. We start by endowing a vector space with a symmetric relation, named the locality relation, which keeps track of elements that are "locally independent". The pair of a vector space together with such a relation is called a pre-locality vector space. This concept is extended to tensor products allowing only tensors made of locally independent elements. We extend this concept to the locality tensor algebra and the locality symmetric algebra of a pre-locality vector space and prove the universal properties of each of these structures. We also introduce pre-locality Lie algebras, together with their associated locality universal enveloping algebras, and prove their universal property. We later upgrade all such structures and results from the pre-locality to the locality context, requiring the locality relation to be compatible with the linear structure of the vector space. This allows us to define locality coalgebras, locality bialgebras, and locality Hopf algebras. Finally, all the previous results are used to prove the locality versions of the Milnor-Moore and Poincaré-Birkhoff-Witt theorems. It is worth noticing that the proofs presented not only generalise the results of the usual (non-locality) setup, but also often use fewer tools than their non-locality counterparts.
The second part is devoted to the study of the polar structure of Shintani zeta functions. Such functions, which generalise the Riemann zeta function, multiple zeta functions and Mordell-Tornheim zeta functions, among others, are parametrised by matrices with real non-negative entries. It is known that Shintani zeta functions extend to meromorphic functions with poles on affine hyperplanes. We refine this result by showing that the poles lie on hyperplanes parallel to the facets of certain convex polyhedra associated to the defining matrix of the Shintani zeta function. Explicitly, the latter are the Newton polytopes of the polynomials induced by the columns of the underlying matrix. We then prove that the coefficients of the equations which describe the hyperplanes in the canonical basis are either zero or one, similar to the poles arising when renormalising generic Feynman amplitudes. For that purpose, we introduce an algorithm to distribute weight over a graph such that the weight at each vertex satisfies a given lower bound.
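For orientation, one common convention for the Shintani zeta function attached to a matrix A = (a_{ij}) with non-negative real entries is the following (conventions on the summation range and on additional shift parameters vary across the literature):

\[
\zeta(\mathbf{s};A)\;=\;\sum_{m_1,\dots,m_k\,\ge\,1}\;\prod_{j=1}^{n}\Bigl(\sum_{i=1}^{k} a_{ij}\,m_i\Bigr)^{-s_j},
\qquad a_{ij}\ge 0,\quad \mathbf{s}=(s_1,\dots,s_n).
\]

Roughly speaking, setting n = k = 1 and a_{11} = 1 recovers the Riemann zeta function, while suitable lower-triangular 0/1 matrices recover multiple zeta functions; the Newton polytopes mentioned above are those of the linear forms \(\sum_i a_{ij} m_i\), read column by column as polynomials in m_1, ..., m_k.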
Movement is a mechanism that shapes biodiversity patterns across spatiotemporal scales. Thereby, the movement process affects species interactions, population dynamics and community composition. In this thesis, I disentangled the effects of movement on the biodiversity of zooplankton, ranging from the individual to the community level. At the level of individual movement, I used video-based analysis to explore the implications of movement behavior for prey-predator interactions. My results showed that swimming behavior was of great importance, as it determined survival in the face of predation. The findings additionally highlighted the relevance of the defense status and morphology of prey, which affected the prey-predator relationship not only through the defense itself but also through plastic movement behavior. At the level of community movement, I used a field mesocosm experiment to explore the role of dispersal in time (i.e., from the egg bank into the water body) and in space (i.e., between water bodies) in shaping zooplankton metacommunities. My results revealed that priority effects and taxon-specific dispersal limitation influenced community composition. Additionally, different modes of dispersal generated distinct community structures. The egg bank and biotic vectors (i.e., mobile links) played significant roles in the colonization of newly available habitat patches. One crucial aspect that influences zooplankton species after arrival in new habitats is the local environmental conditions. Using common garden experiments, I assessed the performance of zooplankton communities in their home versus away environments in a group of ponds embedded within an agricultural landscape. I identified environmental filtering as a driving factor, as zooplankton communities from individual ponds developed differently in their home and away environments. At the level of individual species, there was no consistent indication of local adaptation. For some species, I found a higher abundance or fitness in their home environment, but for others the opposite was the case, and some showed no difference.

Overall, the thesis highlights the links between movement and biodiversity patterns, ranging from individual active movement to the community level.
Learning the causal structures from observational data is an omnipresent challenge in data science. The amount of observational data available to Causal Structure Learning (CSL) algorithms is increasing as data is collected at high frequency from many data sources nowadays. While processing more data generally yields higher accuracy in CSL, the concomitant increase in the runtime of CSL algorithms hinders their widespread adoption in practice. CSL is a parallelizable problem. Existing parallel CSL algorithms address execution on multi-core Central Processing Units (CPUs) with dozens of compute cores. However, modern computing systems are often heterogeneous and equipped with Graphics Processing Units (GPUs) to accelerate computations. Typically, these GPUs provide several thousand compute cores for massively parallel data processing.
To shorten the runtime of CSL algorithms, we design efficient execution strategies that leverage the parallel processing power of GPUs. Particularly, we derive GPU-accelerated variants of a well-known constraint-based CSL method, the PC algorithm, as it allows choosing a statistical Conditional Independence test (CI test) appropriate to the observational data characteristics.
Our two main contributions are: (1) to reflect differences in the CI tests, we design three GPU-based variants of the PC algorithm tailored to CI tests that handle data with the following characteristics. We develop one variant for data assuming the Gaussian distribution model, one for discrete data, and another for mixed discrete-continuous data and data with non-linear relationships. Each variant is optimized for the appropriate CI test leveraging GPU hardware properties, such as shared or thread-local memory. Our GPU-accelerated variants outperform state-of-the-art parallel CPU-based algorithms by factors of up to 93.4× for data assuming the Gaussian distribution model, up to 54.3× for discrete data, up to 240× for continuous data with non-linear relationships and up to 655× for mixed discrete-continuous data. However, the proposed GPU-based variants are limited to datasets that fit into a single GPU’s memory. (2) To overcome this shortcoming, we develop approaches to scale our GPU-based variants beyond a single GPU’s memory capacity. For example, we design an out-of-core GPU variant that employs explicit memory management to process arbitrary-sized datasets. Runtime measurements on a large gene expression dataset reveal that our out-of-core GPU variant is 364 times faster than a parallel CPU-based CSL algorithm. Overall, our proposed GPU-accelerated variants speed up CSL in numerous settings to foster CSL’s adoption in practice and research.
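For intuition, a deliberately simplified, CPU-only sketch of the adjacency (skeleton) phase of the PC algorithm; it is not the thesis' GPU-accelerated variant, and the CI test is passed in as a callable to mirror the point that the algorithm is agnostic to the statistical test chosen:

```python
from itertools import combinations

def pc_skeleton(variables, ci_test, max_level=3):
    """Constraint-based skeleton discovery (PC algorithm, adjacency phase).

    variables : list of variable names
    ci_test   : callable (x, y, cond_set) -> True if x is independent of y
                given cond_set (e.g., a partial-correlation test for Gaussian
                data or a chi-squared test for discrete data)
    Returns the undirected skeleton as a dict of adjacency sets.
    """
    adj = {v: set(variables) - {v} for v in variables}
    for level in range(max_level + 1):
        for x in variables:
            for y in list(adj[x]):
                neighbours = adj[x] - {y}
                if len(neighbours) < level:
                    continue
                for cond_set in combinations(neighbours, level):
                    if ci_test(x, y, cond_set):   # independence found: drop edge
                        adj[x].discard(y)
                        adj[y].discard(x)
                        break
    return adj
```

The outer loop over conditioning-set sizes and the many independent CI tests per level are what make the problem parallelizable, which is the property the GPU variants exploit.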
Creative intensive processes
(2023)
Creativity – developing something new and useful – is a constant challenge in the working world. Work processes, services, or products must be sensibly adapted to changing times. To be able to analyze and, if necessary, adapt creativity in work processes, a precise understanding of these creative activities is necessary. Process modeling techniques are often used to capture business processes, represent them graphically and analyze them for adaptation possibilities. For creative work, this has so far been possible only to a very limited extent. An accurate understanding of creative work faces the challenge that, on the one hand, such work is usually very complex and iterative and, on the other hand, it is at least partially unpredictable, since new things emerge. How can the complexity of creative business processes be adequately addressed and at the same time kept manageable? This dissertation attempts to answer this question by first developing a precise process understanding of creative work. In an interdisciplinary approach, the literature on the process description of creativity-intensive work is analyzed from the perspectives of psychology, organizational studies, and business informatics. In addition, a digital ethnographic study in the context of software development is used to analyze creative work. A model is developed on the basis of which four elementary process components can be analyzed: Intention of the creative activity, Creation to develop the new, Evaluation to assess its meaningfulness, and Planning of the activities arising in the process – in short, the ICEP model. These four process elements are then translated into the Knowledge Modeling and Description Language (KMDL), which was developed to capture and represent knowledge-intensive business processes. The modeling extension based on the ICEP model enables creative business processes to be identified and specified without the need for extensive modeling of all process details. The modeling extension proposed here was developed using ethnographic data and then applied to other organizational process contexts. The modeling method was applied to other business contexts and evaluated by external parties as part of two expert studies. The developed ICEP model provides an analytical framework for complex creative work processes. It can be comprehensively integrated into process models by transforming it into a modeling method, thus expanding the understanding of existing creative work in as-is process analyses.
This thesis explores the variation in coreference patterns across language modes (i.e., spoken and written) and text genres. The significance of research on variation in language use has been emphasized in a number of linguistic studies. For instance, Biber and Conrad [2009] state that “register/genre variation is a fundamental aspect of human language” and “Given the ubiquity of register/genre variation, an understanding of how linguistic features are used in patterned ways across text varieties is of central importance for both the description of particular languages and the development of cross-linguistic theories of language use.”[p.23]
We examine the variation across genres with the primary goal of contributing to the body of knowledge on the description of language use in English. On the computational side, we believe that incorporating linguistic knowledge into learning-based systems can boost the performance of automatic natural language processing systems, particularly for non-standard texts. Therefore, in addition to their descriptive value, the linguistic findings we provide in this study may prove to be helpful for improving the performance of automatic coreference resolution, which is essential for a good text understanding and beneficial for several downstream NLP applications, including machine translation and text summarization.
In particular, we study a genre of texts that is formed of conversational interactions on the well-known social media platform Twitter. Two factors motivate us: First, Twitter conversations are realized in written form but resemble spoken communication [Scheffler, 2017], and therefore they form an atypical genre for the written mode. Second, while Twitter texts are a complicated genre for automatic coreference resolution, due to their widespread use in the digital sphere they are at the same time highly relevant for applications that seek to extract information or sentiments from users’ messages. Thus, we are interested in discovering more about the linguistic and computational aspects of coreference in Twitter conversations. We first created a corpus of such conversations for this purpose and annotated it for coreference. We are interested not only in the coreference patterns but also in the overall discourse behavior of Twitter conversations. To address this, in addition to the coreference relations, we also annotated coherence relations on the corpus we compiled. The corpus is available online in a newly developed form that allows for separating the tweets from their annotations.
This study consists of three empirical analyses in which we independently apply corpus-based, psycholinguistic and computational approaches to the investigation of variation in coreference patterns in a complementary manner. (1) We first make a descriptive analysis of variation across genres through a corpus-based study. We investigate the linguistic aspects of nominal coreference in Twitter conversations and determine how this genre relates to other text genres in spoken and written modes. In addition to the variation across genres, studying the differences between spoken and written modes has also been a focus of linguistic research since Woolbert [1922]. (2) In order to investigate whether the language mode alone has any effect on coreference patterns, we carry out a crowdsourced experiment and analyze the patterns in the same genre for both spoken and written modes. (3) Finally, we explore the potential of domain adaptation of automatic coreference resolution (ACR) for the conversational Twitter data. In order to answer the question of how the genre of Twitter conversations relates to other genres in spoken and written modes with respect to coreference patterns, we employ a state-of-the-art neural ACR model [Lee et al., 2018] to examine whether ACR on Twitter conversations benefits from mode-based separation in out-of-domain training data.
The impact of individual differences in cognitive skills and socioeconomic background on key educational, occupational, and health outcomes, as well as the mechanisms underlying inequalities in these outcomes across the lifespan, are two central questions in lifespan psychology. The contextual embeddedness of such questions in ontogenetic (i.e., individual, age-related) and historical time is a key element of lifespan psychological theoretical frameworks such as the HIstorical changes in DEvelopmental COntexts (HIDECO) framework (Drewelies et al., 2019). Because the dimension of time is also a crucial part of empirical research designs examining developmental change, a third central question in research on lifespan development is how the timing and spacing of observations in longitudinal studies might affect parameter estimates of substantive phenomena. To address these questions in the present doctoral thesis, I applied innovative state-of-the-art methodology including static and dynamic longitudinal modeling approaches, used data from multiple international panel studies, and systematically simulated data based on empirical panel characteristics, in three empirical studies.
The first study of this dissertation, Study I, examined the importance of adolescent intelligence (IQ), grade point average (GPA), and parental socioeconomic status (pSES) for adult educational, occupational, and health outcomes over ontogenetic and historical time. To examine the possible impact of historical changes in the 20th century on the relationships between adolescent characteristics and key adult life outcomes, the study capitalized on data from two representative US cohort studies, the National Longitudinal Surveys of Youth 1979 and 1997, whose participants were born in the late 1960s and 1980s, respectively. Adolescent IQ, GPA, and pSES were positively associated with adult educational attainment, wage levels, and mental and physical health. Across historical time, the influence of IQ and pSES for educational, occupational, and health outcomes remained approximately the same, whereas GPA gained in importance over time for individuals born in the 1980s.
The second study of this dissertation, Study II, aimed to examine strict cumulative advantage (CA) processes as possible mechanisms underlying individual differences and inequality in wage development across the lifespan. It proposed dynamic structural equation models (DSEM) as a versatile statistical framework for operationalizing and empirically testing strict CA processes in research on wages and wage dynamics (i.e., wage levels and growth rates). Drawing on longitudinal representative data from the US National Longitudinal Survey of Youth 1979, the study modeled wage levels and growth rates across 38 years. Only 0.5 % of the sample revealed strict CA processes and explosive wage growth (autoregressive coefficients AR > 1), with the majority of individuals following logarithmic wage trajectories across the lifespan. Adolescent intelligence (IQ) and adult highest educational level explained substantial heterogeneity in initial wage levels and long-term wage growth rates over time.
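As a rough illustration of the strict-CA criterion (notation mine, not the study's exact DSEM specification): in a first-order autoregressive wage model,

\[
w_{i,t} \;=\; \mu_i + \phi_i\,\bigl(w_{i,t-1}-\mu_i\bigr) + \varepsilon_{i,t},
\qquad \varepsilon_{i,t}\sim\mathcal{N}(0,\sigma_i^{2}),
\]

an autoregressive coefficient \(\phi_i > 1\) makes deviations from the person-specific mean grow multiplicatively over time (explosive, strictly cumulative trajectories), whereas \(|\phi_i| < 1\) implies trajectories that level off, consistent with the logarithmic growth observed for most individuals in the study.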
The third study of this dissertation, Study III, investigated the role of observation timing variability in the estimation of non-experimental intervention effects in panel data. Although longitudinal studies often aim at equally spaced intervals between their measurement occasions, this goal is hardly ever met. Drawing on continuous-time dynamic structural equation models, the study examines the (seemingly counterintuitive) potential benefits of measurement intervals that vary both within and between participants (often called individually varying time intervals, IVTs) in a panel study. It illustrates the method by modeling the effect of the transition from primary to secondary school on students’ academic motivation using empirical data from the German National Educational Panel Study (NEPS). Results of a simulation study based on this real-life example reveal that individual variation in time intervals can indeed benefit the estimation precision and recovery of the true intervention effect parameters.
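For intuition, and following the general continuous-time modeling literature rather than the study's exact specification: the latent process can be written as a stochastic differential equation

\[
\mathrm{d}\boldsymbol{\eta}(t) \;=\; \mathbf{A}\,\bigl(\boldsymbol{\eta}(t)-\boldsymbol{\mu}\bigr)\,\mathrm{d}t \;+\; \mathbf{G}\,\mathrm{d}\mathbf{W}(t),
\qquad \mathbf{A}^{\ast}(\Delta t) \;=\; e^{\mathbf{A}\,\Delta t},
\]

where the implied discrete-time autoregressive matrix \(\mathbf{A}^{\ast}(\Delta t)\) depends on the observed interval \(\Delta t\). Because every observed interval contributes its own constraint on \(\mathbf{A}\), individually varying time intervals can add information for recovering the underlying dynamics and any intervention effect embedded in them.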
Climate change of anthropogenic origin is affecting Earth’s biodiversity and therefore ecosystems and their services. High-latitude ecosystems are even more strongly impacted than the rest of the Northern Hemisphere because of amplified polar warming. Still, it is challenging to predict the dynamics of high-latitude ecosystems because of the complex interactions between abiotic and biotic components. Because the past is a key to the future, past ecological changes can be interpreted to better understand ongoing processes. Within the Quaternary, the Pleistocene experienced several glacial and interglacial stages that affected past ecosystems. During the last Glacial, the Pleistocene steppe-tundra covered most of the unglaciated Northern Hemisphere and disappeared, in parallel with the megafauna’s extinction, at the transition to the Holocene (~11,700 years ago). The origin of the steppe-tundra decline is not well understood, and knowledge of the mechanisms that caused shifts in past communities and ecosystems is of high priority, as they are likely comparable to those affecting modern ecosystems. Lake or permafrost sediment cores can be retrieved to investigate past biodiversity at transitions between glacial and interglacial stages. Siberia and Beringia were the origin of dispersal of the steppe-tundra, which makes investigating this area a high priority. Until recently, macrofossils and pollen were the most common approaches. They are designed to reconstruct past compositional changes but have limits and biases. Since the end of the 20th century, sedimentary ancient DNA (sedaDNA) can also be investigated. My main objective was to use sedaDNA approaches to provide scientific evidence of compositional and diversity changes in Northern Hemisphere ecosystems at the transition between Quaternary glacial and interglacial stages.
In this thesis, I provide snapshots of entire ancient ecosystems and describe compositional changes between Quaternary glacial and interglacial stages, and I confirm the vegetation composition and the spatial and temporal boundaries of the Pleistocene steppe-tundra. I identify a general loss of plant diversity, with extinction events happening in parallel with the megafauna’s extinction. I demonstrate how loss of biotic resilience led to the collapse of a previously well-established system and discuss my results with regard to ongoing climate change. With further work to constrain its biases and limits, sedaDNA can be used alongside, or even replace, the more established macrofossil and pollen approaches, as my results support the robustness and potential of sedaDNA to answer new palaeoecological questions, such as changes and losses in plant diversity, and to provide snapshots of entire ancient biota.
Bilingual instruction is regarded as the model of success for foreign language learning at school in Germany, and command of a foreign language in spoken and written form is a decisive, professionally qualifying competence in our globalised world. In particular, the interlocking of subject content and language in the context of bilingual instruction appears to be beneficial for foreign language acquisition. At the same time, the discrepancy between learners’ mostly still limited foreign language skills and the subject-related demands of geography lessons poses a major challenge for subject learning in bilingual content teaching. The question therefore arises of how bilingual instruction must be designed so that, on the one hand, geographical topics can be treated with their full subject complexity and, on the other hand, learners are not overburdened linguistically.

Within a design-based research study in bilingual geography teaching, it was investigated how subject learning in bilingual geography lessons can be fostered through the use of both languages involved (English/German).

Based on a theoretically grounded state of knowledge on bilingual instruction and on learning with subject concepts in geography lessons, a learning environment employing strategies of language switching was designed, tested in the classroom, and further developed.

The results of the study are context-related theories of a bilingual didactics for bilingual geography teaching and insights into learning with subject concepts in geography lessons, using the geographical concept of change as an example. The product of the study is a classroom-ready learning environment on processes of change at selected places for bilingual geography teaching, including a didactic concept, teaching materials and media.
The present study deals with the school model of the Neue Mittelschule, which emerged from a structural change in lower secondary education. It examines whether, under this school model and the new culture of teaching, learning and assessment it intends, relationships can be identified between students’ measured mathematical competencies and the end-of-year grades awarded by teachers.

The literature review makes clear that, although criticism of the monoculture of teacher-centred instruction has led to a new culture of teaching, learning and assessment, the contents of that culture are quite heterogeneous, complex and not clearly defined. In the Neue Mittelschule, performance assessment is supposed to serve as an aid to learning but also to make reliable statements about students’ achievement. There are no empirical findings on the effect of the new learning culture in the Neue Mittelschule, nor on the effect of its performance assessment.

Seventy-nine sixth-grade students from three Neue Mittelschulen (in a densely populated, a moderately populated and a sparsely populated municipality) in Lower Austria take part in the empirical study. Two classes are examined at each school. Mathematical competence, student-centredness, and social and performance pressure as perceived by the students are recorded together with the end-of-year grade.

A path model is developed for the study and evaluated with a path analysis. Relationships between the measured competencies in mathematics and the end-of-year grades do emerge; beyond the individual class or school, however, these grades have only limited validity as statements about the achievements shown.
This dissertation presents the first total syntheses of the arylnaphthalene lignans alashinol D, vitexdoin C, vitrofolal E, noralashinol C1 and ternifoliuslignan E. The key step of the developed method is based on a regioselective intramolecular photo-dehydro-Diels-Alder (PDDA) reaction carried out with UV irradiation in a flow reactor. For the synthesis of the PDDA precursors (diaryl suberates), a modular synthetic strategy was pursued, which allows asymmetric, complex systems to be assembled from only a few basic building blocks and enables the total synthesis of a large number of lignans. Systematic preliminary studies also demonstrated the clear superiority of the intramolecular over the intermolecular PDDA reaction. Linking the two aryl propiolates via a suberic acid tether in the para position proved to be particularly efficient. When asymmetrically substituted diaryl suberates are used, in which one of the terminal ester substituents is replaced by a trimethylsilyl group or a hydrogen atom, these systems undergo regioselective cyclisation and naphthalenophanes with a methyl ester in the 3-position are obtained as the main product. Extensive experiments on the functionalisation of the 4-position further showed that substitution of the nucleophilic cycloallene intermediates during the PDDA reaction is generally possible by adding N-halosuccinimides. In view of the low yields, however, these intermolecular trapping reactions are of no preparative use for the total synthesis of lignans. With the aim of optimising the general photochemical reaction conditions, the triplet-sensitised PDDA reaction was presented for the first time. The use of xanthone as a sensitiser allowed more efficient UVA light sources to be employed, minimising the risk of photodecomposition through over-irradiation. Compared to direct excitation with UVB radiation, the yields could be increased significantly with indirect excitation via a photocatalyst. The fundamental insights and synthetic strategies developed in this work can help to advance access to new pharmacologically interesting lignans in the future.

1 To date, only the semisynthetic preparation of noralashinol C starting from hydroxymatairesinol has been reported in the literature.
Background: The characteristics of osteoporosis are decreased bone mass and deterioration of the microarchitecture of bone tissue, which raises the risk of fracture. Psychosocial stress and osteoporosis are linked by the sympathetic nervous system, the hypothalamic-pituitary-adrenal axis, and other endocrine factors. Psychosocial stress causes a series of effects on the organism, and this long-term depletion at the cellular level is considered to be mitochondrial allostatic load, including mitochondrial dysfunction and oxidative stress. Extracellular vesicles (EVs) are involved in the process of mitochondrial allostatic load and may serve as biomarkers in this setting. As critical participants in cell-to-cell communication, EVs serve as transport vehicles for nucleic acids and proteins, alter the phenotypic and functional characteristics of their target cells, and promote cell-to-cell contact. Hence, they play a significant role in the diagnosis and therapy of many diseases, such as osteoporosis.

Aim: This narrative review attempts to outline the features of EVs, investigate their involvement in both psychosocial stress and osteoporosis, and analyze whether EVs can be potential mediators between the two.

Methods: The online databases PubMed, Google Scholar, and ScienceDirect were searched for keywords related to the main topic of this study, and the availability of all the selected studies was verified. Afterward, the findings from the articles were summarized and synthesized.

Results: Psychosocial stress affects bone remodeling through increased levels of mediators such as glucocorticoids and catecholamines, as well as increased glucose metabolism. Furthermore, psychosocial stress leads to mitochondrial allostatic load, including oxidative stress, which may affect bone remodeling. In vitro and in vivo data suggest that EVs might be involved in the link between psychosocial stress and bone remodeling through the transfer of bioactive substances and could thus be a potential mediator of psychosocial stress leading to osteoporosis.

Conclusions: According to the included studies, psychosocial stress affects bone remodeling, leading to osteoporosis. By summarizing the specific properties of EVs and their functions in psychosocial stress and osteoporosis, respectively, it has been shown that EVs are possible mediators of both and have the prospect of being useful in innovative research areas.
River flooding is a constant peril for societies, causing direct economic losses in the order of $100 billion worldwide each year. Under global change, the prolonged concentration of people and assets in floodplains is accompanied by an emerging intensification of flood extremes due to anthropogenic global warming, ultimately exacerbating flood risk in many regions of the world.
Flood adaptation plays a key role in the mitigation of impacts, but poor understanding of vulnerability and its dynamics limits the validity of predominant risk assessment methods and impedes effective adaptation strategies. Therefore, this thesis investigates new methods for flood risk assessment that embrace the complexity of flood vulnerability, using the understudied commercial sector as an application example.
Despite its importance for accurate risk evaluation, flood loss modeling has been based on univariable and deterministic stage-damage functions for a long time. However, such simplistic methods only insufficiently describe the large variation in damage processes, which initiated the development of multivariable and probabilistic loss estimation techniques. The first study of this thesis developed flood loss models for companies that are based on emerging statistical and machine learning approaches (i.e., random forest, Bayesian network, Bayesian regression). In a benchmarking experiment on basis of object-level loss survey data, the study showed that all proposed models reproduced the heterogeneity in damage processes and outperformed conventional stage-damage functions with respect to predictive accuracy. Another advantage of the novel methods is that they convey probabilistic information in predictions, which communicates the large remaining uncertainties transparently and, hence, supports well-informed risk assessment.
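A deliberately simplified sketch of the probabilistic flavour of such loss models (my illustration, not one of the study's actual models, which include random forests, Bayesian networks and Bayesian regression): a conjugate Bayesian linear regression that returns a predictive mean and standard deviation for the relative loss of a single object, given several damage-driving predictors:

```python
import numpy as np

def bayesian_linear_regression(X, y, prior_var=10.0, noise_var=0.05):
    """Conjugate Gaussian posterior over regression weights.

    X : (n, p) matrix of predictors (e.g. water depth, duration, precaution)
    y : (n,)   observed relative losses in [0, 1]
    Returns the posterior mean and covariance of the weights."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    p = X.shape[1]
    prior_precision = np.eye(p) / prior_var
    post_cov = np.linalg.inv(prior_precision + X.T @ X / noise_var)
    post_mean = post_cov @ (X.T @ y) / noise_var
    return post_mean, post_cov

def predictive_distribution(x_new, post_mean, post_cov, noise_var=0.05):
    """Mean and standard deviation of the predicted loss for one object,
    i.e. the probabilistic information conveyed instead of a point estimate."""
    x_new = np.asarray(x_new, dtype=float)
    mean = x_new @ post_mean
    var = x_new @ post_cov @ x_new + noise_var
    return mean, np.sqrt(var)
```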
Flood risk assessment combines vulnerability assessment (e.g., loss estimation) with hazard and exposure analyses. Although all of the three risk drivers interact and change over time, such dependencies and dynamics are usually not explicitly included in flood risk models. Recently, systemic risk assessment that dissolves the isolated consideration of risk drivers has gained traction, but the move to holistic risk assessment comes with limited thoroughness in terms of loss estimation and data limitations. In the second study, I augmented a socio-hydrological system dynamics model for companies in Dresden, Germany, with the multivariable Bayesian regression loss model from the first study. The additional process-detail and calibration data improved the loss estimation in the systemic risk assessment framework and contributed to more accurate and reliable simulations. The model uses Bayesian inference to quantify uncertainty and learn the model parameters from a combination of prior knowledge and diverse data.
The third study demonstrates the potential of the socio-hydrological flood risk model for continuous, long-term risk assessment and management. Using hydroclimatic and socioeconomic forcing data, I projected a wide range of possible risk trajectories until the end of the century, taking into account the adaptive behavior of companies. The study results underline the necessity of increased adaptation efforts to counteract the expected intensification of flood risk due to climate change. A sensitivity analysis of the effectiveness of different adaptation measures and strategies revealed that optimized adaptation has the potential to mitigate flood risk by up to 60%, particularly when combining structural and non-structural measures. Additionally, the application shows that systemic risk assessment is capable of capturing adverse long-term feedbacks in the human-flood system such as the levee effect.
Overall, this thesis advances the representation of vulnerability in flood risk modeling by offering modeling solutions that embrace the complexity of human-flood interactions and quantify uncertainties consistently using probabilistic modeling. The studies show how scarce information in data and previous experiments can be integrated in the inference process to provide model predictions and simulations that are reliable and rich in information. Finally, the focus on the flood vulnerability of companies provides new insights into the heterogeneous damage processes and distinct flood coping of this sector.
Childhood, compared to adolescence and adulthood, is characterized by high neuroplasticity, reflected in accelerated cognitive maturation and rapid cognitive developmental trajectories. Natural growth, biological maturation and permanent interaction with the physical and social environment foster motor and cognitive development in children. Of note, the promotion of physical activity, physical fitness, and motor skill learning at an early age is essential, first because these aspects underpin healthy development and efficient functioning in everyday life across the life span, and second because physical activity behaviors and lifestyle habits tend to track from childhood into adulthood.

The main objective of the present thesis was to extend and deepen the knowledge of motor and cognitive performance in young children and to develop an effective and age-appropriate exercise program feasible for implementation in kindergarten and preschool settings. A systematic review with meta-analysis was conducted to examine the effectiveness of fundamental movement skill and exercise interventions in healthy preschool-aged children. Further, the relation between measures of physical fitness (i.e., static balance, muscle strength, power, and coordination) and attention, as one domain of cognitive performance, was analyzed in preschool-aged children. Subsequently, the effects of a strength-dominated kindergarten-based exercise program on physical fitness components (i.e., static balance, muscle strength, power, and coordination) and cognitive performance (i.e., attention) were examined in comparison with a usual kindergarten curriculum.
The systematic review included trials focusing on healthy young children in kindergarten or preschool settings that applied fundamental movement skill-enhancing intervention programs of at least 4 weeks and further reported standardized motor skill outcome measures for the intervention and the control group. Children aged 4-6 years from three kindergartens participated in the cross-sectional and the longitudinal study. Product-orientated measures were conducted for the assessment of muscle strength (i.e., handgrip strength), muscle power (i.e., standing long jump), balance (i.e., timed single-leg stand), coordination (hopping on right/left leg), and attentional span (i.e., “Konzentrations-Handlungsverfahren für Vorschulkinder” [concentration-action procedure for preschoolers]).
With regard to the scientific literature, exercise and fundamental movement skill interventions are an effective method to promote overall proficiency in motor skills (i.e., object control and locomotor skills) in preschool children, particularly when conducted by external experts over a duration of 4 weeks to 5 months. Moreover, significant medium-sized associations were found between the composite score of physical fitness and attention, as well as between coordination alone and attention, in children aged 4-6 years. A 10-week strength-dominated exercise program implemented in kindergarten and preschool settings by educated and trained kindergarten teachers revealed significant improvements in the standing long jump test and the Konzentrations-Handlungsverfahren for intervention children compared to children of the control group.

The findings of the present thesis imply that fundamental movement skill and exercise interventions improve motor skills (i.e., locomotor and object control skills). Nonetheless, more high-quality research is needed. Additionally, physical fitness, particularly high performance in complex fitness components (i.e., coordination measured with the hopping-on-one-leg test), tends to predict attention at preschool age. Furthermore, an exercise program including strength-dominated exercises, fundamental movement skills and elements of gymnastics has a beneficial effect on jumping performance, with a concomitant trend toward improvements in attentional capacity in healthy preschool children. Finally, it is recommended to start early with the integration of muscular fitness (i.e., muscle strength, muscle power, muscular endurance) exercises, next to coordination, agility, balance, and fundamental movement skill exercises, into regular physical activity curriculums in kindergarten settings.
Biogeochemical analyses of lacustrine environments are well-established methods that allow exploring and understanding complex systems in the lake ecosystem. However, most such studies were conducted in temperate lakes, which are controlled by entirely different physical conditions than lakes in tropical climates. The most important difference between temperate and tropical lakes is the lack of seasonal temperature fluctuations in the latter, which leads to a stable temperature gradient in the water column. Thus, the water column at tropical latitudes is generally void of the perturbations that can be seen in temperate counterparts. Permanent stratification in the water column provides optimal conditions for undisturbed sedimentation. The geochemical processes in the water column and the weathering processes in the distinct lithologies of the catchment lead to different biogeochemical characteristics in the sediment. Conducting a biogeochemical study of this lake sediment, especially at the Sediment-Water Interface (SWI), helps reveal the records of sedimentation and diagenetic processes influenced by internal or external loading. Lake Sentani, the study area, is one of the thousands of lakes in Indonesia and is located in Papua province. This tropical lake has a unique feature, as it consists of four interconnected sub-basins with different water depths. More importantly, its catchment comprises various lithologies. Hence, its lithological characteristics are highly diverse and range from mafic and ultramafic rocks to clastic sediments and carbonates. Each sub-basin receives a distinct sediment input. Equally important, besides the natural loading, Lake Sentani is also influenced by anthropogenic input. Previous studies have shown an increasing population growth rate around the lake, which has direct consequences for eutrophication. Considering these factors, the government of the Republic of Indonesia put Lake Sentani on the list of national priority lakes for restoration. This thesis aims to develop a fundamental understanding of Lake Sentani's sedimentary geochemistry and geomicrobiology, with a special focus on the effects of different lithologies and anthropogenic pressures in the catchment area. To meet this objective, we conducted geochemical and geomicrobiological research on Lake Sentani. We investigated geochemical characteristics in the water column, porewater, and sediment cores of the four sub-basins. In addition to direct investigations of the lake itself, we also studied the sediments in the tributary rivers, some of which are ephemeral, as well as the river mouths, as connections between the riverine and the lacustrine habitat. The thesis is composed of three main publications about Lake Sentani, supported by several publications that focus on other tropical lakes in Indonesia. The first main publication investigates the geochemical characterization of the water column, porewater, and surface sediment (upper 40-50 cm) from the center of the four sub-basins. It reveals that, besides catchment lithology, the water column heavily influences the geochemical characteristics of the lake sediments and their porewater. The findings indicate that water column stratification has a strong influence on overall chemistry. The four sub-basins are very different with regard to their water column chemistry. Based on the physicochemical profiles, especially dissolved oxygen, one sub-basin is oxygenated, one is intermediate, i.e., it just reaches oxygen depletion at the sediment-water interface, and two sub-basins are fully meromictic. However, all four sub-basins share the same surface water chemistry. The structure of the water column creates differences in the patterns of anions and cations in the porewater. Likewise, the distinct differences in geochemical composition between the sub-basins show that the lithology of the catchment affects the geochemical characteristics of the sediment. Overall, water column stratification, and particularly bottom water oxygenation, strongly influences the overall elemental composition of the sediment and the porewater composition. The second publication reveals differences in surface sediment composition between habitats, influenced by lithological variations in the catchment area. The macro-element distribution shows that the geochemical characteristics differ between habitats. Furthermore, the geochemical composition also indicates a distinct distribution between the sub-basins. The geochemical composition of the eastern sub-basin suggests that lithogenic elements are more dominant than authigenic elements. This is also supported by sulfide speciation, particle distribution, and smear slide data. The third publication is a geomicrobiological study of the surface sediment. We compare the geochemical composition of the surface sediment with its microbiological composition and contrast the different signals. Next Generation Sequencing (NGS) of the 16S rRNA gene was applied to determine the microbial community composition of the surface sediment from a large number of locations. We use a large number of sampling sites in all four sub-basins as well as in the rivers and river mouths to illustrate the links between the river, the river mouth, and the lake. Rigorous assessment of microbial communities across the diverse habitats of Lake Sentani allowed us to study some of these links and report novel findings on microbial patterns in such ecosystems. The main result of the Principal Coordinates Analysis (PCoA) based on microbial community composition highlighted some commonalities but also differences between the microbial community analysis and the geochemical data. The microbial community in rivers, river mouths and sub-basins is strongly influenced by anthropogenic input from the catchment area. Generally, Bacteroidetes and Firmicutes could be indicators of river sediments. The microbial community in the rivers is directly influenced by anthropogenic pressure and is markedly different from that of the lake sediment. Meanwhile, the microbial community in the lake sediment reflects the anoxic environment, which is prevalent across the lake in all sediments below a few mm burial depth. The lake sediments harbour abundant sulfate reducers and methanogens. The microbial communities in sediments from river mouths are influenced by both the river and the lake ecosystems. This study provides valuable information for understanding the basic processes that control biogeochemical cycling in Lake Sentani. Our findings are critical for lake managers to accurately assess the uncertainties of the changing environmental conditions related to the anthropogenic pressure in the catchment area. Lake Sentani is a unique study site directly influenced by the different geology across the watershed and the morphometry of the four studied basins. As a result of these factors, there are distinct geochemical differences between the habitats (river, river mouth, lake) and the four sub-basins.
In addition to geochemistry, microbial community composition also shows differences between habitats, although there are no obvious differences between the four sub-basins. However, unlike sediment geochemistry, microbial community composition is impacted by human activities. Therefore, this thesis will provide crucial baseline data for future lake management.
In this thesis, we investigate language learning in the formalisation of Gold [Gol67]. Here, a learner is successively presented with all information about a target language and conjectures which language it believes it is being shown. Once these hypotheses converge syntactically to a correct explanation of the target language, the learning is considered successful. Fittingly, this is termed explanatory learning. To model learning strategies, we impose restrictions on the hypotheses made, for example requiring the conjectures to follow a monotonic behaviour. This way, we can study the impact a certain restriction has on learning.
Recently, the literature shifted towards map charting. Here, various seemingly unrelated restrictions are contrasted, unveiling interesting relations between them. The results are then depicted in maps. For explanatory learning, the literature already provides maps of common restrictions for various forms of data presentation.
In the case of behaviourally correct learning, where the learners are required to converge semantically instead of syntactically, the same restrictions as in explanatory learning have been investigated. However, a similarly complete picture regarding their interaction has not been presented yet.
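In slightly compressed standard notation (my paraphrase of the usual definitions, not a quotation from the thesis): for a text T of a target language L and a learner h with hypothesis h(T[n]) after the first n data items,

\[
\textbf{Ex:}\quad \exists e\;\exists n_0\;\forall n\ge n_0:\; h(T[n]) = e \ \text{ and } \ W_e = L,
\qquad\qquad
\textbf{Bc:}\quad \exists n_0\;\forall n\ge n_0:\; W_{h(T[n])} = L,
\]

where \(W_e\) denotes the language described by hypothesis e. A behaviourally correct learner may thus keep changing its hypothesis index forever, as long as all sufficiently late hypotheses describe L.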
In this thesis, we transfer the map charting approach to behaviourally correct learning. In particular, we complete the partial results from the literature for many well-studied restrictions and provide full maps for behaviourally correct learning with different types of data presentation. We also study properties of learners deemed important in the literature. We are interested in whether learners are consistent, that is, whether their conjectures include the data they are built on. While learners cannot be assumed consistent in explanatory learning, the opposite is the case in behaviourally correct learning. Even further, it is known that learners following different restrictions may be assumed consistent. We contribute to the literature by showing that this is the case for all studied restrictions.

We also investigate mathematically interesting properties of learners. In particular, we are interested in whether learning under a given restriction may be done with strongly Bc-locking learners. Such learners are of particular value as they allow one to apply simulation arguments when, for example, comparing two learning paradigms to each other. The literature gives a rich ground on when learners may be assumed strongly Bc-locking, which we complete for all studied restrictions.
Lithium-ion capacitors (LICs) are promising energy storage devices that asymmetrically combine an anode with a high energy density, close to that of lithium-ion batteries, and a cathode with a high power density and long-term stability, close to those of supercapacitors. For the further improvement of LICs, the development of electrode materials with hierarchical porosity, nitrogen-rich lithiophilic sites, and good electrical conductivity is essential. Nitrogen-rich all-carbon composite hybrids meet these conditions while offering high stability and tunability, opening a route to high-performance LICs. In this thesis, two different all-carbon composites are proposed to unveil how the pore structure of lithiophilic composites influences the properties of LICs. Firstly, a composite of 0-dimensional zinc-templated carbon (ZTC) and hexaazatriphenylene-hexacarbonitrile (HAT) is examined to determine how its pore structure is connected to its Li-ion storage properties as an LIC electrode. As the pore structure of the HAT/ZTC composite is easily tunable depending on the synthetic factors and the ratio of the two components, the results allow deeper insights into Li-ion dynamics at different porosities and enable low-cost synthesis by optimization of the HAT:ZTC ratio. Secondly, a composite of 1-dimensional nanoporous carbon fiber (ACF) and cost-effective melamine is proposed as a promising all-carbon hybrid for large-scale application. Since ACF contains ultra-micropores, the numerical structure-property relationships are derived not only from the total pore volume but, more specifically, from the ultra-micropore volume. From these results, it becomes possible to understand how hybrid all-carbon composites interact with lithium ions at the nanoscale as well as how structural properties affect the energy storage performance. This understanding, derived from simple materials modeling, provides a clue for designing practical hybrid materials for efficient electrodes in LICs.
Following the extinction of dinosaurs, the great adaptive radiation of mammals occurred, giving rise to an astonishing ecological and phenotypic diversity of mammalian species. Even closely related species often inhabit vastly different habitats, where they encounter diverse environmental challenges and are exposed to different evolutionary pressures. As a response, mammals evolved various adaptive phenotypes over time, such as morphological, physiological and behavioural ones. Mammalian genomes vary in their content and structure and this variation represents the molecular mechanism for the long-term evolution of phenotypic variation. However, understanding this molecular basis of adaptive phenotypic variation is usually not straightforward.
The recent development of sequencing technologies and bioinformatics tools has enabled a better insight into mammalian genomes. Through these advances, it became clear that mammalian genomes differ more, both within and between species, as a consequence of structural variation than of single-nucleotide differences. The structural variant types investigated in this thesis - deletion, duplication, inversion and insertion - represent changes in the structure of the genome, impacting the size, copy number, orientation and content of DNA sequences. Unlike short variants, structural variants can span multiple genes. They can alter gene dosage and cause notable gene expression differences and, subsequently, phenotypic differences. Thus, they can lead to a more dramatic effect on the fitness (reproductive success) of individuals, the local adaptation of populations, and speciation.
In this thesis, I investigated and evaluated the potential functional effect of structural variations on the genomes of mustelid species. To detect the genomic regions associated with phenotypic variation I assembled the first reference genome of the tayra (Eira barbara) relying on linked-read sequencing technology to achieve a high level of genome completeness important for reliable structural variant discovery. I then set up a bioinformatics pipeline to conduct a comparative genomic analysis and explore variation between mustelid species living in different environments. I found numerous genes associated with species-specific phenotypes related to diet, body condition and reproduction among others, to be impacted by structural variants.
Furthermore, I investigated the effects of artificial selection on structural variants in mice selected for high fertility, increased body mass and high endurance. Through selective breeding of each mouse line, the desired phenotypes have spread within these populations, while maintaining structural variants specific to each line. In comparison to the control line, the litter size has doubled in the fertility lines, individuals in the high body mass lines have become considerably larger, and mice selected for treadmill performance covered substantially more distance. Structural variants were found in higher numbers in these trait-selected lines than in the control line when compared to the mouse reference genome. Moreover, we have found twice as many structural variants spanning protein-coding genes (specific to each line) in trait-selected lines. Several of these variants affect genes associated with selected phenotypic traits. These results imply that structural variation does indeed contribute to the evolution of the selected phenotypes and is heritable.
Finally, I suggest a set of critical metrics of genomic data that should be considered for a stringent structural variation analysis as comparative genomic studies strongly rely on the contiguity and completeness of genome assemblies. Because most of the available data used to represent reference genomes of mammalian species is generated using short-read sequencing technologies, we may have incomplete knowledge of genomic features. Therefore, a cautious structural variation analysis is required to minimize the effect of technical constraints.
The impact of structural variants on the adaptive evolution of mammalian genomes is slowly gaining more focus but it is still incorporated in only a small number of population studies. In my thesis, I advocate the inclusion of structural variants in studies of genomic diversity for a more comprehensive insight into genomic variation within and between species, and its effect on adaptive evolution.
Justice structures societies and social relations of any kind; its psychological integration provides a fundamental cornerstone for social, moral, and personality development. The trait justice sensitivity (JS; Schmitt et al., 2005, 2010) captures individual differences in responses toward perceived injustice. JS has shown substantial relations to social and moral behavior in adult and adolescent samples; however, it had not yet been investigated in middle childhood, despite this being a sensitive phase for personality development. JS differentiates into underlying perspectives that are either more self- or other-oriented regarding injustice, with diverging outcome relations. The present research project investigated JS and its perspectives in children aged 6 to 12 years, with a special focus on variables of social and moral development as potential correlates and outcomes, in four cross-sectional studies. Study 1 started with a closer investigation of JS trait manifestation, measurement, and relations to important variables from the nomological network, such as temperamental dimensions, social-cognitive skills, and global pro- and antisocial behavior, in a pilot sample of children from southern Germany. Study 2 investigated relations between JS and distributive behavior following distributive principles in a large-scale data set of children from Berlin and Brandenburg. Study 3 explored the relations of JS with moral reasoning, moral emotions, and moral identity as important precursors of moral development in the same large-scale data set. Study 4 investigated punishment motivation to even out, prevent, or compensate norm transgressions in a subsample, whereby JS was considered as a potential predictor of different punishment motives. All studies indicated that a large-scale, economic measurement of JS is possible at least from middle childhood onward. JS showed relations to temperamental dimensions, social skills, and global social behavior; distributive decisions and preferences for distributive principles; moral reasoning, emotions, and identity; as well as punishment motivation, indicating that trait JS is highly relevant for social and moral development. The underlying self- and other-oriented perspectives showed diverging correlate and outcome relations, mostly in line with theory and previous findings from adolescent and adult samples, but also provided new theoretical ideas on the construct and its differentiation. Findings point to an early internal justice motive underlying trait JS, but to additional motivations underlying the JS perspectives. Caregivers, educators, and clinical psychologists should pay attention to children’s JS and toward promoting an adaptive justice-related personality development to foster children’s prosocial and moral development as well as their mental health.
This dissertation focuses on the handling of time in dialogue. Specifically, it investigates how humans bridge time, or “buy time”, when they are expected to convey information that is not yet available to them (e.g. a travel agent searching for a flight in a long list while the customer is on the line, waiting). It also explores the feasibility of modeling such time-bridging behavior in spoken dialogue systems, and it examines how endowing such systems with more human-like time-bridging capabilities may affect humans’ perception of them.
The relevance of time-bridging in human-human dialogue seems to stem largely from a need to avoid lengthy pauses, as these may cause both confusion and discomfort among the participants of a conversation (Levinson, 1983; Lundholm Fors, 2015). However, this avoidance of prolonged silence is at odds with the incremental nature of speech production in dialogue (Schlangen and Skantze, 2011): Speakers often start to verbalize their contribution before it is fully formulated, and sometimes even before they possess the information they need to provide, which may result in them running out of content mid-turn.
In this work, we elicit conversational data from humans, to learn how they avoid being silent while they search for information to convey to their interlocutor. We identify commonalities in the types of resources employed by different speakers, and we propose a classification scheme. We explore ways of modeling human time-buying behavior computationally, and we evaluate the effect on human listeners of embedding this behavior in a spoken dialogue system.
Our results suggest that a system using conversational speech to bridge time while searching for information to convey (as humans do) can provide a better experience in several respects than one which remains silent for a long period of time. However, not all speech serves this purpose equally: Our experiments also show that a system whose time-buying behavior is more varied (i.e. which exploits several categories from the classification scheme we developed and samples them based on information from human data) can prevent overestimation of waiting time when compared, for example, with a system that repeatedly asks the interlocutor to wait (even if these requests for waiting are phrased differently each time). Finally, this research shows that it is possible to model human time-buying behavior on a relatively small corpus, and that a system using such a model can be preferred by participants over one employing a simpler strategy, such as randomly choosing utterances to produce during the wait, even when the utterances used by both strategies are the same.
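The "varied" strategy described above can be pictured as a sampler that draws the next time-buying utterance from several categories, weighted by frequencies estimated from human data. The sketch below is purely illustrative: the category names, weights and utterances are invented and do not reproduce the classification scheme or corpus of this dissertation.

```python
# Illustrative sketch of a varied time-buying strategy: while the backend
# search is pending, draw filler utterances from several categories,
# weighted by (hypothetical) frequencies estimated from human dialogue data.
import random

# Hypothetical categories and relative frequencies
CATEGORY_WEIGHTS = {
    "acknowledge_request": 0.30,
    "announce_search":     0.25,
    "filler":              0.20,
    "ask_to_wait":         0.15,
    "small_talk":          0.10,
}

UTTERANCES = {
    "acknowledge_request": ["Right, a flight to Lisbon next Friday..."],
    "announce_search":     ["Let me have a look for you."],
    "filler":              ["Hmm...", "So..."],
    "ask_to_wait":         ["One moment, please.", "Bear with me a second."],
    "small_talk":          ["Busy travel season, isn't it?"],
}

def next_time_buying_utterance(rng=random):
    """Sample a category by weight, then an utterance from that category."""
    categories = list(CATEGORY_WEIGHTS)
    weights = [CATEGORY_WEIGHTS[c] for c in categories]
    category = rng.choices(categories, weights=weights, k=1)[0]
    return rng.choice(UTTERANCES[category])

# Produce fillers until the (simulated) search result is available
for _ in range(3):
    print(next_time_buying_utterance())
```

Sampling over several categories, rather than repeating a single "please wait" request, is what the experiments above found to reduce overestimation of waiting time.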
To ensure high-quality, evidence-based research in the field of exercise sciences, it is often necessary for various institutions to collaborate over long distances and internationally. Here, not least with regard to the recent COVID-19 pandemic, digital means provide new options for remote scientific exchange. This thesis analyses and tests digital opportunities to support the dissemination of knowledge and the instruction of investigators in defined examination protocols in an international multi-center context.
The project consisted of three studies. The first study, a questionnaire-based survey, aimed at learning about students’ opinions on and preferences for digital learning and social media at sport science faculties of two universities each in Germany, the UK and Italy. Based on these findings, in a second study, an examination video of an ultrasound determination of the intima-media thickness and diameter of an artery was distributed via a messenger app to doctors and nursing personnel acting as simulated investigators, and the efficacy of the test setting was analysed. Finally, a third study integrated the use of an augmented reality device for direct remote supervision of the same ultrasound examinations in a long-distance international setting, first with international experts from the fields of engineering and sports science and later with remote supervision of augmented reality-equipped physicians performing a given task.
The first study, with 229 participating students, revealed a high preference for YouTube to receive video-based knowledge as well as a preference for using WhatsApp and Facebook for peer-to-peer contacts for learning purposes and to exchange and discuss knowledge. In the second study, video-based instructions sent by WhatsApp messenger showed high approval of the setup in both study groups, one with doctors familiar with the use of ultrasound technology and one with nursing staff who were not familiar with the device, with similar results in overall time of performance and in the measurements of the femoral arteries. In the third and final study, experts from different continents were connected remotely to the examination site via an augmented reality device with good transmission quality. Remote supervision of the doctors’ examinations produced a good inter-rater correlation. Experiences with the augmented reality-based setting were rated as highly positive by the participants. Potential benefits of this technique were seen in the fields of education, movement analysis, and supervision.
In conclusion, the findings of this thesis suggest modern, addressee-centred digital solutions to enhance potential investigators’ understanding of given examination techniques in exercise science research projects. Head-mounted augmented reality devices are of special value and may be recommended for collaborative research projects with physical examination–based research questions. While the established setting should be further investigated in prospective clinical studies, the digital competencies of future researchers should already be enhanced during the early stages of their education.
In recent decades, astronomy has seen a boom in large-scale stellar surveys of the Galaxy. The detailed information obtained about millions of individual stars in the Milky Way is bringing us a step closer to answering one of the most outstanding questions in astrophysics: how do galaxies form and evolve? The Milky Way is the only galaxy where we can dissect many stars into their high-dimensional chemical composition and complete phase space, which, analogously to fossil records, can unveil the history of the genesis of the Galaxy. The processes that lead to the formation of large structures such as the Milky Way are critical for constraining cosmological models; this line of study is called Galactic archaeology or near-field cosmology.
At the core of this work, we present a collection of efforts to chemically and dynamically characterise the disks and bulge of our Galaxy. The results we present in this thesis have only been possible thanks to the advent of the Gaia astrometric satellite, which has revolutionised the field of Galactic archaeology by precisely measuring the positions, parallax distances and motions of more than a billion stars. Another, no less important, breakthrough is the APOGEE survey, which has observed near-infrared spectra, peering into the dusty regions of the Galaxy and allowing us to determine detailed chemical abundance patterns in hundreds of thousands of stars. To accurately depict the Milky Way’s structure, we use and develop the Bayesian isochrone-fitting code StarHorse; this software predicts stellar distances, extinctions and ages by combining astrometry, photometry and spectroscopy with stellar evolutionary models. The StarHorse code is pivotal for calculating distances where Gaia parallaxes alone do not allow accurate estimates.
We show that by combining Gaia, APOGEE and photometric surveys with StarHorse, we can produce a chemical cartography of the Milky Way disks from their outermost to innermost parts. Such a map is unprecedented in the inner Galaxy. It reveals a continuity of the bimodal chemical pattern previously detected in the solar neighbourhood, indicating two populations with distinct formation histories. Furthermore, the data reveal a chemical gradient within the thin disk, where the content of 𝛼-process elements and metals is higher towards the centre. Focusing on a sample in the inner MW, we confirm the extension of the chemical duality to the innermost regions of the Galaxy. We find stars with bar-shaped orbits to show both high- and low-𝛼 abundances, suggesting the bar formed by secular evolution, trapping stars that already existed. By analysing the chemical-orbital space of the inner Galactic regions, we disentangle the multiple populations that inhabit this complex region. We reveal the presence of the thin disk, thick disk, bar, and a counter-rotating population, which resembles the outcome of a perturbed proto-Galactic disk. Our study also finds that the inner Galaxy holds a large number of super-metal-rich stars, with metallicities up to three times solar, suggesting it is a possible repository of the old super-metal-rich stars found in the solar neighbourhood.
We also tackle the complicated task of deriving individual stellar ages. With StarHorse, we calculate the ages of main-sequence turn-off and sub-giant stars for several public spectroscopic surveys. We validate our results by investigating linear relations between chemical abundances and time, since the 𝛼 and neutron-capture elements are sensitive to age, reflecting the different enrichment timescales of these elements. For further study of the disks in the solar neighbourhood, we use an unsupervised machine learning algorithm to delineate a multidimensional separation of chrono-chemical stellar groups, revealing the chemical thick disk, the thin disk, and young 𝛼-rich stars. The thick disk is shown to have a small age dispersion, indicating its fast formation, in contrast to the thin disk, which spans a wide range of ages.
With groundbreaking data, this thesis provides a detailed chemo-dynamical view of the disk and bulge of our Galaxy. Our findings on the Milky Way can be linked to the evolution of high-redshift disk galaxies, helping to solve the conundrum of galaxy formation.
Distances affect economic decision-making in numerous situations. The time at which we make a decision about future consumption has an impact on our consumption behavior. The spatial distance to an employer, school or university affects the place where we live, and vice versa. The emotional closeness to other individuals influences our willingness to give money to them. This cumulative thesis aims to enrich the literature on the role of distance in economic decision-making. Each of my research projects sheds light on the impact of one kind of distance on efficient decision-making.
The Antarctic ice sheet is the largest freshwater reservoir worldwide. If it were to melt completely, global sea levels would rise by about 58 m. Calculating projections of the Antarctic contribution to sea level rise under global warming conditions is an ongoing effort that yields large ranges in predictions. Among the reasons for this are uncertainties related to the physics of ice sheet modeling. These uncertainties include two processes that could lead to runaway ice retreat: the Marine Ice Sheet Instability (MISI), which causes rapid grounding line retreat on retrograde bedrock, and the Marine Ice Cliff Instability (MICI), in which tall ice cliffs become unstable and calve off, exposing even taller ice cliffs.
In my thesis, I investigated both marine instabilities (MISI and MICI) using the Parallel Ice Sheet Model (PISM), with a focus on MICI.
The amount of data stored in databases and the complexity of database workloads are ever-increasing. Database management systems (DBMSs) offer many configuration options, such as index creation or unique constraints, which must be adapted to the specific instance to efficiently process large volumes of data. Currently, such database optimization is complicated, manual work performed by highly skilled database administrators (DBAs). In cloud scenarios, manual database optimization even becomes infeasible: it exceeds the abilities of the best DBAs due to the enormous number of deployed DBMS instances (some providers maintain millions of instances), missing domain knowledge resulting from data privacy requirements, and the complexity of the configuration tasks.
Therefore, we investigate how to automate the configuration of DBMSs efficiently with the help of unsupervised database optimization. While there are numerous configuration options, in this thesis, we focus on automatic index selection and the use of data dependencies, such as functional dependencies, for query optimization. Both aspects have an extensive performance impact and complement each other by approaching unsupervised database optimization from different perspectives.
Our contributions are as follows: (1) we survey automated state-of-the-art index selection algorithms regarding various criteria, e.g., their support for index interaction. We contribute an extensible platform for evaluating the performance of such algorithms with industry-standard datasets and workloads. The platform is well-received by the community and has led to follow-up research. With our platform, we derive the strengths and weaknesses of the investigated algorithms. We conclude that existing solutions often have scalability issues and cannot quickly determine (near-)optimal solutions for large problem instances. (2) To overcome these limitations, we present two new algorithms. Extend determines (near-)optimal solutions with an iterative heuristic. It identifies the best index configurations for the evaluated benchmarks. Its selection runtimes are up to 10 times lower compared with other near-optimal approaches. SWIRL is based on reinforcement learning and delivers solutions instantly. These solutions perform within 3 % of the optimal ones. Extend and SWIRL are available as open-source implementations.
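To make the index-selection problem concrete, the following is a minimal greedy sketch in the spirit of iterative heuristics: candidate indexes are added one at a time, preferring the best estimated benefit per storage unit, until a budget is exhausted. This is illustrative only, not the Extend or SWIRL algorithms; the candidate names, benefit estimates and sizes are placeholders for values a what-if optimizer would provide.

```python
# Minimal greedy index-selection sketch (illustrative, not the Extend or
# SWIRL algorithms): repeatedly pick the candidate index with the best
# estimated benefit per storage unit until the budget is exhausted.
# Benefits and sizes are placeholder estimates from a hypothetical what-if optimizer.

def greedy_index_selection(candidates, budget_mb):
    """candidates: dict index_name -> (estimated_benefit, size_mb)."""
    selected, remaining = [], dict(candidates)
    while remaining:
        # Best benefit per MB among candidates that still fit the budget
        fitting = {n: (b, s) for n, (b, s) in remaining.items() if s <= budget_mb}
        if not fitting:
            break
        name = max(fitting, key=lambda n: fitting[n][0] / fitting[n][1])
        benefit, size = remaining.pop(name)
        selected.append(name)
        budget_mb -= size
    return selected

# Hypothetical candidates: name -> (benefit estimate, size in MB)
candidates = {
    "orders(customer_id)":      (120.0, 300),
    "lineitem(order_id, date)": (200.0, 900),
    "part(name)":               ( 40.0, 150),
}
print(greedy_index_selection(candidates, budget_mb=1000))
```

Real selection algorithms additionally have to account for index interaction, i.e. the benefit of one index changing once another is present, which is one of the evaluation criteria surveyed above.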
(3) Our index selection efforts are complemented by a mechanism that analyzes workloads to determine data dependencies for query optimization in an unsupervised fashion. We describe and classify 58 query optimization techniques based on functional, order, and inclusion dependencies as well as on unique column combinations. The unsupervised mechanism and three optimization techniques are implemented in our open-source research DBMS Hyrise. Our approach reduces the Join Order Benchmark’s runtime by 26 % and accelerates some TPC-DS queries by up to 58 times.
Additionally, we have developed a cockpit for unsupervised database optimization that allows interactive experiments to build confidence in such automated techniques. In summary, our contributions improve the performance of DBMSs, support DBAs in their work, and enable them to contribute their time to other, less arduous tasks.
Monoclonal antibodies (mAbs) are among the most important biomolecules for environmental analysis and medical diagnostics. For the detection of microorganisms, they form the basis of fast and precise test procedures. To date, owing to the high expenditure of time and material and to unspecific immunization strategies, only few mAbs exist that specifically recognize microorganisms.
To this end, a practicable procedure for the generation of mAbs against microorganisms was to be developed and was validated using Escherichia coli O157:H7 and Legionella pneumophila. In this dissertation, new surface structures on the microorganisms were identified by means of comparative genome analyses and in silico epitope analyses. These were integrated into the viral coat protein VP1 and used for a targeted immunization strategy. To determine antigen-specific, antibody-producing hybridomas, an immunostaining protocol was developed and established in order to sort the hybridomas by flow cytometry.
In the present study, a total of 53 potential protein candidates were identified for E. coli O157:H7 and 38 proteins for L. pneumophila by means of the bioinformatic analysis. Five different potential epitopes were selected for E. coli O157:H7 and three for L. pneumophila and used for immunization with chimeric VP1. All immune sera showed an antigen-specific immune response. From the subsequently generated hybridoma cells, several antibody candidates were obtained that showed strong binding to E. coli O157:H7 or L. pneumophila in characterization studies. Cross-reactivities with other relevant microorganisms were not detected, or only to a minor extent.
Consequently, the interdisciplinary approach described here for generating specific mAbs against microorganisms demonstrably yielded specific mAbs and can be used as a highly efficient workflow for the production of antibodies against microorganisms.
An important goal in biotechnology and (bio-) medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required. This precise differentiation is a challenge for a growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, this represents a non-addressable parameter, requiring new methods for the identification and isolation of target cells. Consequently, a variety of new flow-based methods have been developed and presented in recent years that utilise 2D imaging data to identify target cells within a sample. As these methods aim for high throughput, the devices developed typically require highly complex fluid handling techniques, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
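As background for the handling principle described above, the time-averaged dielectrophoretic force on a spherical, polarisable particle of radius r is commonly written as follows; this is the standard textbook expression, not a result derived in this thesis.

```latex
% Time-averaged DEP force on a spherical particle (textbook expression)
\mathbf{F}_{\mathrm{DEP}}
  = 2\pi\,\varepsilon_m\, r^{3}\,
    \mathrm{Re}\!\left[K(\omega)\right]\,
    \nabla\,\lvert \mathbf{E}_{\mathrm{rms}} \rvert^{2},
\qquad
K(\omega)
  = \frac{\varepsilon_p^{*}-\varepsilon_m^{*}}
         {\varepsilon_p^{*}+2\,\varepsilon_m^{*}}
```

Here ε_m is the medium permittivity, ε_p* and ε_m* are the complex permittivities of particle and medium, and K(ω) is the Clausius–Mossotti factor; its sign determines whether cells are attracted to (positive DEP) or repelled from (negative DEP) regions of strong field gradient, which is what allows a switchable electric field to deflect selected cells in the channel.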
With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95% and about 85% of the sorted cells could be recovered from the system. Good agreement was achieved between the results obtained and theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system.
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.
Search for light primordial black holes with VERITAS using gamma-ray and optical observations (2023)
The Very Energetic Radiation Imaging Telescope Array System (VERITAS) is an array of four imaging atmospheric Cherenkov telescopes (IACTs). VERITAS is sensitive to very-high-energy gamma rays in the range of 100 GeV to >30 TeV. Hypothesized primordial black holes (PBHs) are attractive targets for IACTs. If they exist, their potential cosmological impact reaches beyond their candidacy as constituents of dark matter. The sublunar mass window is the largest unconstrained range of PBH masses. This thesis aims to develop novel concepts for searching for light PBHs with VERITAS. PBHs below the sublunar window lose mass due to Hawking radiation. They would evaporate at the end of their lifetime, leading to a short burst of gamma rays. If PBHs formed with masses of about 10^15 g, this evaporation would be occurring today. Detecting these signals might not only confirm the existence of PBHs but also prove the theory of Hawking radiation. This thesis probes archival VERITAS data recorded between 2012 and 2021 for possible PBH signals. This work presents a new automatic approach to assess the quality of the VERITAS data. The array-trigger rate and far-infrared temperature are well suited to identify periods with poor data quality. These are masked by time cuts to obtain a consistent and clean dataset of about 4222 hours. PBH evaporations could occur at any location in the field of view or at any time within this data, so only a blind search can be performed to identify these short signals. This thesis implements a data-driven, deep learning based method to search for short transient signals with VERITAS. It does not depend on the modelling of the effective area and radial acceptance. This work presents the first application of this method to actual observational IACT data. This thesis develops new concepts dealing with the specifics of the data and the transient detection method, reflected in the developed data preparation pipeline and search strategies. After correction for trial factors, no candidate PBH evaporation is found in the data. Thus, new constraints on the local rate of PBH evaporations are derived. At the 99% confidence level, the rate is below 1.07 * 10^5 pc^-3 yr^-1. This constraint, obtained with a new, independent analysis approach, is in the range of existing limits for the evaporation rate.
This thesis also investigates an alternative, novel approach to searching for PBHs with IACTs. Above the sublunar window, the PBH abundance is constrained by optical microlensing studies. The sampling speed, which is of the order of minutes to hours for traditional optical telescopes, is a limiting factor in expanding the limits to lower masses. IACTs are also powerful instruments for fast-transient optical astronomy, with sampling as fast as O(ns). This thesis investigates whether IACTs might constrain the sublunar window with optical microlensing observations. The study confirms that, in principle, the fast sampling speed might allow extending microlensing searches into the sublunar mass window. However, the limiting factor for IACTs is their modest sensitivity to changes in optical fluxes. This thesis presents the expected rate of detectable events for VERITAS as well as prospects for possible future next-generation IACTs. For VERITAS, the rate of detectable microlensing events in the sublunar range is ~10^-6 per year of observation time. For a 100 times more sensitive future instrument, the prospects are ~0.05 events per year.
Reliable and robust data processing is one of the hardest requirements for systems in fields such as medicine, security, automotive, aviation, and space, in order to prevent critical system failures caused by changes in operating or environmental conditions. In particular, Signal Integrity (SI) effects such as crosstalk may distort the signal information in sensitive mixed-signal designs. A further challenge for hardware systems used in space is radiation effects: Single Event Effects (SEEs) induced by high-energy particle hits may lead to faulty computation, corrupted configuration settings, undesired system behavior, or even total malfunction.
Since these applications require an extra effort in design and implementation, it is beneficial to master the standard cell design process and corresponding design flow methodologies optimized for such challenges. Especially for reliable, low-noise differential signaling logic such as Current Mode Logic (CML), a digital design flow is an orthogonal approach compared to traditional manual design. As a consequence, mandatory preliminary considerations need to be addressed in more detail. First of all, standard cell library concepts with suitable cell extensions for reliable systems and robust space applications have to be elaborated. Resulting design concepts at the cell level should enable the logical synthesis for differential logic design or improve the radiation-hardness. In parallel, the main objectives of the proposed cell architectures are to reduce the occupied area, power, and delay overhead. Second, a special setup for standard cell characterization is additionally required for a proper and accurate logic gate modeling. Last but not least, design methodologies for mandatory design flow stages such as logic synthesis and place and route need to be developed for the respective hardware systems to keep the reliability or the radiation-hardness at an acceptable level.
This thesis proposes and investigates standard cell-based design methodologies and techniques for reliable and robust hardware systems implemented in a conventional semiconductor technology. The focus of this work is on reliable differential logic design and robust radiation-hardening-by-design circuits. The synergistic connections of the digital design flow stages are systematically addressed for these two types of hardware systems. In more detail, a library for differential logic is extended with single-ended pseudo-gates for intermediate design steps to support logic synthesis and layout generation with commercial Computer-Aided Design (CAD) tools. Special cell layouts are proposed to relax signal routing. A library set for space applications is similarly extended by novel Radiation-Hardening-by-Design (RHBD) Triple Modular Redundancy (TMR) cells, enabling single-fault correction. Therein, additional optimized architectures for glitch filter cells, robust scannable and self-correcting flip-flops, and clock-gates are proposed. The circuit concepts and the physical layout representation views of the differential logic gates and the RHBD cells are discussed. However, the quality of results of designs depends implicitly on the accuracy of the standard cell characterization, which is therefore examined for both types. The entire design flow is elaborated from the hardware design description to the layout representations. A 2-Phase routing approach together with an intermediate design conversion step is proposed after the initial place-and-route stage for reliable, purely differential designs, whereas a special constraining approach for RHBD applications in a standard technology is presented.
The digital design flow for differential logic design is successfully demonstrated on a reliable differential bipolar CML application. A balanced routing result of its differential signal pairs is obtained by the proposed 2-Phase-routing approach. Moreover, the elaborated standard cell concepts and design methodology for RHBD circuits are applied to the digital part of a 7.5-15.5 MSPS 14-bit Analog-to-Digital Converter (ADC) and a complex microcontroller architecture. The ADC is implemented in an unhardened standard semiconductor technology and successfully verified by electrical measurements. The overhead of the proposed hardening approach is additionally evaluated by design exploration of the microcontroller application. Furthermore, the first obtained related measurement results of novel RHBD-∆TMR flip-flops show a radiation-tolerance up to a threshold Linear Energy Transfer (LET) of 46.1, 52.0, and 62.5 MeV cm2 mg-1 and savings in silicon area of 25-50 % for selected TMR standard cell candidates.
As a conclusion, the presented design concepts at the cell and library levels, as well as the design flow modifications are adaptable and transferable to other technology nodes. In particular, the design of hybrid solutions with integrated reliable differential logic modules together with robust radiation-tolerant circuit parts is enabled by the standard cell concepts and design methods proposed in this work.
The evolution of a galaxy is pivotally governed by its pattern of star formation over a given period of time. The star formation rate at any given time is strongly dependent on the amount of cold gas available in the galaxy. Accretion of pristine gas from the Intergalactic medium (IGM) is thought to be one of the primary sources of star-forming gas. This gas first passes through the virial regions of the galaxy before reaching the Interstellar medium (ISM), the hub of star formation. On the other hand, owing to the evolutionary course of young and massive stars, energetic winds are ejected from the ISM to the virial regions of the galaxy. A set of interlinked, complex astrophysical processes, arising from the concurrent presence of both infalling and outbound gas, plays out over a range of timescales in the halo region or Circumgalactic medium (CGM) of a galaxy. It would not be incorrect to say that the CGM has a stronghold over the gas reserves of a galaxy and thus plays a behind-the-scenes, yet pivotal, role in shaping many galactic properties, some of which are also readily observable. Observing the multi-phase CGM (via spectral-line ion measurements), however, remains a non-trivial effort even today. Low particle densities as well as the CGM’s vast spatial extent, coupled with likely deviations from a spherical distribution, mar the possibility of obtaining complete, unbiased, high-quality spectral information tracing the full extent of the gaseous halo. This often incomplete information leads to multiple inferences about the CGM properties, giving rise to multiple contradicting models. In this regard, computer simulations offer a neat solution towards testing and, subsequently, falsifying many of these existing CGM models. Thanks to their controlled environments, simulations are able to not only effortlessly transcend several orders of magnitude in time and space, but also get around many of the observational limitations and provide some unique views on many CGM properties. In this thesis, I focus on effectively using different computer simulations to understand the role of the CGM in various astrophysical contexts, namely, the effect of the Local Group (LG) environment, major merger events and satellite galaxies. In Chapter 2, I discuss the approach used for modeling various phases of the simulated z = 0 LG CGM in the Hestia constrained simulations. Each of the three realizations contains a Milky Way (MW)–Andromeda (M31) galaxy pair, along with their corresponding sets of satellite galaxies, all embedded within the larger cosmological context. For characterizing the different temperature–density phases within the CGM, I model five tracer ions with cloudy ionization modeling. The cold and cool–ionized CGM (H i and Si iii, respectively) in Hestia is very clumpy and distributed close to the galactic centers, while the warm-hot and hot CGM (O vi, O vii and O viii) is tenuous and volume-filling. On comparing the H i and Si iii column densities for the simulated M31 with observational measurements from the Project AMIGA survey and other low-z galaxies, I found that the Hestia galaxies produce less gas in the outer CGM, unlike observations. My carefully designed observational bias model subsequently revealed the possibility that some MW gas clouds might be incorrectly associated with the M31 CGM in observations and hence may be partly responsible for the detected mismatch between simulated data and observations.
In Chapter 3, I present results from four zoom-in, major merger, gas-rich simulations and the subsequent role of the gas originally situated in the CGM in influencing some of the galactic observables. The progenitor parameters are selected such that the post-merger remnants are MW-mass galaxies. We generally see a very clear gas bridge joining the merging galaxies in the case of multiple-passage mergers, while such a bridge is mostly absent when a direct collision occurs. On the basis of particle-to-galaxy distance computations and tracer particle analysis, I found that about 33–48 percent of the cold gas contributing to the merger-induced star formation in the bridge originated from the CGM regions. In Chapter 4, I used a sample of 234 MW-mass, L* galaxies from the TNG50 cosmological simulations, with the aim of characterizing the impact of their global satellite populations on the extended cold CGM properties of their host L* halos. On the basis of halo mass and number of satellite galaxies (N_sats), I categorized the sample into low and high mass bins, and subsequently into bottom, inter and top quartiles, respectively. After confirming that satellites indeed influence the extended cold halo gas density profiles of the host galaxies, I investigated the effects of different satellite population parameters on the host halo cold CGM. My analysis showed that there is hardly any cold gas associated with the satellite population of the lowest mass halos. The stellar mass of the most massive satellite (M_*mms) impacted the cold gas in the low mass bin halos the most, while N_sats (followed by M_*mms) was the most influential factor for the high mass halos. In either case, how easily cold gas was stripped off the most massive satellite did not play much of a role. The number of massive (stellar mass M* > 10^8 M_solar) satellites as well as the M_*mms associated with a galaxy are two of the most crucial parameters determining how much cold gas ultimately finds its way from the satellites to the host halo. Low mass galaxies are found to be rather lacking on both these fronts, unlike their high mass counterparts. This work highlights some aspects of the complex gas physics that constitute the basic essence of a low-z CGM. My analysis demonstrated the importance of the cosmological environment, local surroundings and merger history in defining some key observable properties of a galactic CGM. Furthermore, I found that different satellite properties were responsible for affecting the cold–dense CGM of the low- and high-mass parent galaxies. Finally, the LG emerged as an exciting prospect for testing and pinning down several intricate details about the CGM.
Enacted in 2009, the National Policy on Climate Change (PNMC) is a milestone in the institutionalisation of climate action in Brazil. It sets greenhouse gas (GHG) emission reduction targets and a set of principles and directives that are intended to lay the foundations for a cross-sectoral and multilevel climate policy in the country. However, more than a decade after its establishment, the PNMC has experienced several obstacles related to its governance, such as coordination, planning and implementation issues. All of these issues pose threats to the effectiveness of GHG mitigation actions in the country.
By looking at the intragovernmental and intergovernmental relationships that have taken place during the lifetime of the PNMC and its sectoral plans on agriculture (the Sectoral Plan for Mitigation and Adaptation to Climate Change for the Consolidation of a Low-Carbon Economy in Agriculture [ABC Plan]), transport and urban mobility (the Sectoral Plan for Transportation and Urban Mobility for Mitigation and Adaption of Climate Change [PSTM]), this exploratory qualitative research investigates the Brazilian climate change governance guided by the following relevant questions: how are climate policy arrangements organised and coordinated among governmental actors to mitigate GHG emissions in Brazil? What might be the reasons behind how such arrangements are established? What are the predominant governance gaps of the different GHG mitigation actions examined? Why do these governance gaps occur?
Theoretically grounded in the literature on multilevel governance and coordination of public policies, this study employs a novel analytical framework that aims to identify and discuss the occurrence of four types of governance gaps (i.e. politics, institutions and processes, resources and information) in the three GHG mitigation actions (cases) examined (i.e. the PNMC, ABC Plan and PSTM). The research results are twofold. First, they reveal that Brazil has struggled to organise and coordinate governmental actors from different policy constituencies and different levels of government in the implementation of the GHG mitigation actions examined. Moreover, climate policymaking has mostly been influenced by the Ministry of Environment (MMA) overlooking the multilevel and cross-sectoral approaches required for a country’s climate policy to mitigate and adapt to climate change, especially if it is considered an economy-wide Nationally Determined Contribution (NDC), as the Brazilian one is.
Second, the study identifies a greater manifestation of gaps in politics (e.g. lack of political will in supporting climate action), institutions and processes (e.g. failures in the design of institutions and policy instruments, coordination and monitoring flaws, and difficulties in building climate federalism) in all cases studied. It also identifies that there have been important advances in the production of data and information for decision-making and, to a lesser extent, in the allocation of technical and financial resources in the cases studied; however, it is necessary to highlight the limitation of these improvements due to turf wars, a low willingness to share information among federal government players, a reduced volume of financial resources and an unequal distribution of capacities among the federal ministries and among the three levels of government.
A relevant finding is that these gaps tend to be explained by a combination of general and sector-specific aspects. Regarding the general aspects, which are common to all cases examined, the following can be mentioned: i) unbalanced policy capabilities existing among the different levels of government, ii) a limited (bureaucratic) practice of producing a positive coordination mode within cross-sectoral policies, iii) the socioeconomic inequalities that affect the way different governments and economic sectors perceive the climate issue (selective perception) and iv) the reduced dialogue between national and subnational governments on the climate agenda (poor climate federalism). The following sectoral aspects can be mentioned: i) the presence of path dependencies that make the adoption of transformative actions harder and ii) the absence of perceived co-benefits that the climate agenda can bring to each economic sector (e.g. reputational gains, climate protection and access to climate financial markets).
By addressing the theoretical and practical implications of the results, this research provides key insights to tackle the governance gaps identified and to help Brazil pave the way to achieving its NDCs and net-zero targets. At the theoretical level, this research and the current country’s GHG emissions profile suggest that the Brazilian climate policy is embedded in a cross-sectoral and multilevel arena, which requires the effective involvement of different levels of political and bureaucratic powers and the consideration of the country’s socioeconomic differences. Thus, the research argues that future improvements of the Brazilian climate policy and its governance setting must frame climate policy as an economic development agenda, the ramifications of which go beyond the environmental sector. An initial consequence of this new perspective may be a shift in the political and technical leadership from the MMA to the institutions of the centre of government (Executive Office of the President of Brazil) and those in charge of the country’s economic policy (Ministry of Economy). This change could provide greater capacity for coordination, integration and enforcement as well as for addressing certain expected gaps (e.g. financial and technical resources). It could also lead to greater political prioritisation of the agenda at the highest levels of government. Moreover, this shift of the institutional locus could contribute to greater harmonisation between domestic development priorities and international climate politics. Finally, the research also suggests that this approach would reduce bureaucratic elitism currently in place due to climate policy being managed by Brazilian governmental institutions, which is still a theme of a few ministries and a reason for the occurrence of turf wars.
The first part of the thesis studies the properties of the fast mode in magnetohydrodynamic (MHD) turbulence. 1D and 3D numerical simulations are carried out to generate decaying fast mode MHD turbulence. The injection of waves is carried out in a collinear and an isotropic fashion to generate fast mode turbulence. The properties of the fast mode turbulence are analyzed by studying the energy spectral density, 2D structure functions and the energy decay/cascade time. The injection wave vector is varied to study the dependence of the above properties on the injection wave vectors. The 1D energy spectrum obtained for the velocity and magnetic fields follows E(k) ∝ k^-2. The 2D energy spectrum and the 2D structure functions in the parallel and perpendicular directions show that the fast mode turbulence generated is isotropic in nature. The cascade/decay rate of fast mode MHD turbulence is proportional to k^-0.5 for the different kinds of wave vector injection. Simulations are also carried out in 1D and 3D to compare balanced and imbalanced turbulence. The results obtained show that while 1D imbalanced turbulence decays faster than 1D balanced turbulence, there is no difference in the decay of 3D balanced and imbalanced turbulence at the current resolution of 512 grid points.
"The second part of the thesis studies cosmic ray (CR) transport in driven MHD turbulence and is strongly dependent on it’s properties. Test particle simulations are carried out to study CR interaction with both total MHD turbulence and decomposed MHD modes. The spatial diffusion coefficients and the pitch angle scattering diffusion coefficients are calculated from the test particle trajectories in turbulence. The results confirms that the fast modes dominate the CR propagation, whereas Alfvén, slow modes are much less efficient with similar pitch angle scattering rates. The cross field transport on large and small scales are investigated next. On large/global scales, normal diffusion is observed and the diffusion coefficient is suppressed by 𝑀𝜁𝐴 compared to the parallel diffusion coefficients, with 𝜁 closer to 4 in Alfvén modes than that in total turbulence as theoretically expected. For the CR transport on scales smaller than the turbulence injection scale 𝐿, both the local and global magnetic reference frames are adopted. Super diffusion is observed on such small scales in all the cases. Particularly, CR transport in Alfvén modes show clear Richardson diffusion in the local reference frame. The diffusion transition smoothly from the Richardson’s one with index 1.5 to normal diffusion as particle’s mean free path decreases from 𝜆∥ ≫ 𝐿 to 𝜆∥ ≪ 𝐿. These results have broad applications to CRs in various astrophysical environments".
With the implementation of intense, short pulsed light sources throughout the last years, the powerful technique of resonant inelastic X-ray scattering (RIXS) became feasible for a wide range of experiments within femtosecond dynamics in correlated materials and molecules.
In this thesis I investigate the potential to bring RIXS into the fluence regime of nonlinear X-ray-matter interactions, especially focusing on the impact of stimulated scattering on RIXS in transition metal systems in a transmission spectroscopy geometry around transition metal L-edges.
After presenting the RIXS toolbox and the capabilities of free-electron laser light sources for ultrafast intense X-ray experiments, the thesis explores an experiment designed to understand the impact of stimulated scattering on diffraction and direct-beam transmission spectroscopy on a CoPd multilayer system. The experiments require short X-ray pulses that can only be generated at free-electron lasers (FELs). Here the pulses are not only short, but also very intense, which opens the door to nonlinear X-ray-matter interactions. In the second part of this thesis, we investigate observations in the nonlinear interaction regime, look at potential difficulties for classic spectroscopy and investigate possibilities to enhance RIXS through stimulated scattering. Here, a study on stimulated RIXS is presented, in which we investigate the light-field-intensity-dependent CoPd demagnetization in transmission as well as scattering geometry. Thereby we show the first direct observation of stimulated RIXS as well as light-field-induced nonlinear effects, namely the breakdown of scattering intensity and the increase in sample transmittance. The topic is of ongoing interest and will only grow in relevance as more free-electron lasers are planned and the number of experiments at such light sources continues to increase in the near future.
Finally, we present a discussion of the accessibility of small DOS shifts in the absorption band of transition metal complexes through stimulated resonant X-ray scattering. As these shifts occur, for example, in surface states, this finding could extend the experimental selectivity of NEXAFS and RIXS to the detection of surface states. In this theoretical study, we show how stimulation can indeed enhance the visibility of DOS shifts through the detection of stimulated spectral shifts and enhancements. We also forecast the observation of stimulated enhancements in resonant excitation experiments at FEL sources in systems with a high density of states just below the Fermi edge and in systems with an occupied-to-unoccupied DOS ratio in the valence band above 1.
This research investigated the relationship between frequent engagement in industrial action (also known as ‘employee strikes’) and the internal attractiveness of government employment. It focused on a special group of public employees: public university lecturers and public-school teachers in Uganda who frequently engaged in industrial action. At the very basic level, the research explored whether public employees frequently engaged in industrial action because they considered public service employment to be unattractive or whether frequent engagement in industrial action was in fact part of the attractiveness of government employment. Beyond exploring these relationships, it also explained why (or why not) such relationships existed.
Methodologically, the research was conducted using an exploratory sequential design – a mixed-methods study design that starts with a qualitative phase followed by a quantitative phase. The results of the initial qualitative phase determined the direction of the subsequent quantitative phase. The qualitative phase started with an exploration of the relationship between industrial action and internal public service attractiveness, resulting in two specific research questions:
1) Why do public employees engage in industrial action and what role does frequent engagement in industrial action play in their perception of public service attractiveness?
2) Why and how is organizational justice related to public employees’ perception of public service attractiveness?
The above questions were answered both qualitatively and quantitatively. The theoretical postulations of the Social Movements Theories, Social Exchange Theory, and the Signaling Theory were used to structure the research assumptions and hypotheses.
The results showed that public employees engaged in industrial action mostly because of relative, rather than absolute deprivation. An established culture of workplace militancy was also found to be key in actualizing industrial action as was the (perceived) absence of alternatives to achieve workplace justice. Importantly, there was a clear dichotomy between absolute working conditions and frequent engagement in industrial action. Frequent engagement in industrial action was itself found to have both positive and negative effects on internal public service attractiveness. It was also found that public service attractiveness from the perspective of current public employees might be different from what it is from the perspective of prospective employees. This is because current public employees do not assume what it feels like to work for government, but mostly use their day-to-day lived experiences to judge the attractiveness of their employer. The existing literature is particularly deficient on analyzing public service attractiveness from an internal perspective, which is surprising given the public sector’s high reliance on internal recruitment.
The research results underlined key implications for theory, practice, and research. At theory level, the results suggested that public employee ratings of internal public service attractiveness were heavily affected by halo effects and should therefore not be taken at face value. The complex workplace social exchanges which are deeply rooted in organizational justice and the ‘personification metaphor’ were also emphasized. From an empirical perspective, the results underlined the need to prioritize internal public service attractiveness as recent research has confirmed the value of family socialization and internal recommendations in making public sector employment attractive, even to external applicants. This research argues that the centrality of organizational justice in public sector employee relations requires public sector organizations to be intentional in their bid to create fair, just, and attractive workplaces. Beyond assessing the fairness of personnel policies, procedures, and interactional relationships, it is also important to prepare and equip public managers with the right skills to promote and practice justice in their day-to-day interactions with public employees, and to encourage, improve, and facilitate alternative public employee feedback mechanisms.
Accurately solving classification problems nowadays is likely to be the most relevant machine learning task. Binary classification separating two classes only is algorithmically simpler but has fewer potential applications as many real-world problems are multi-class. On the reverse, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once the training is finished. On the other hand, existing state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies such that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification that generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions.
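To illustrate what restricting the target set means at prediction time, here is a naive baseline sketch: posterior estimates over the full class set Y are simply restricted and renormalized over the admissible subset M before deciding. This is only a toy illustration of the task, not the evidence-theoretic dynamic classification model developed in the thesis; the class names and probabilities are hypothetical.

```python
# Toy illustration of dynamic classification: at prediction time the admissible
# class set can shrink to a non-empty subset M of Y, so full-set posterior
# estimates are restricted and renormalized over M before deciding.
# (Naive baseline only; not the evidence-theoretic model of the thesis.)

def predict_dynamic(posteriors, admissible):
    """posteriors: dict class -> P(class | x) over the full set Y.
    admissible: non-empty subset M of Y allowed for this prediction."""
    restricted = {c: p for c, p in posteriors.items() if c in admissible}
    total = sum(restricted.values())
    if total == 0:                      # degenerate case: fall back to uniform over M
        restricted = {c: 1.0 for c in admissible}
        total = len(admissible)
    restricted = {c: p / total for c, p in restricted.items()}
    return max(restricted, key=restricted.get), restricted

# Hypothetical calibrated posteriors over Y = {A, B, C, D}
posteriors = {"A": 0.40, "B": 0.35, "C": 0.15, "D": 0.10}
print(predict_dynamic(posteriors, admissible={"B", "C"}))
# -> ('B', {'B': 0.7, 'C': 0.3})
```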
This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated. The analysis provided focuses on monotonic calibration and, in particular, corrects wrong statements that appeared in the literature. It also reveals that bin-based evaluation metrics, which became popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, which is the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing, are proven. For non-monotonic calibration, extended variants of kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated using a simulation study with complete information as well as on a selection of 46 real-world data sets.
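Platt scaling, as discussed above, fits a sigmoid to raw classifier scores so that they can be interpreted as posterior probabilities. A minimal sketch with scikit-learn follows; the dataset, model and hyperparameters are arbitrary placeholders, not those used in the thesis evaluation.

```python
# Minimal Platt-scaling sketch: fit a sigmoid on held-out decision scores of a
# binary SVM to obtain calibrated posterior probability estimates.
# Dataset and hyperparameters are arbitrary placeholders.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# method='sigmoid' corresponds to Platt scaling; cv=5 fits the sigmoid on held-out folds
calibrated = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
calibrated.fit(X_train, y_train)

proba = calibrated.predict_proba(X_test)   # calibrated P(y=1 | x) in column 1
print(proba[:3])
```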
Building on this, classifier calibration is applied as part of decomposition-based classification that aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the involved fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows the analysis of decomposition-based classification against a strictly formal background and to prove closed-form equations for the overall combinations. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, which is one of the most relevant reduction-based classification approaches, such that all individual predictions are combined with a weight. This not only generalizes existing works on pairwise coupling but also enables the integration of dynamic class information.
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied on 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the different approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure.
The global climate crisis is contributing significantly to changing ecosystems and the loss of biodiversity, and is putting numerous species on the verge of extinction. In principle, many species are able to adapt to changing conditions or shift their habitats to more suitable regions. However, change is progressing faster than some species can adjust, or potential adaptation is blocked and disrupted by direct and indirect human action. Unsustainable anthropogenic land use in particular is, besides global heating, one of the driving factors behind these ecologically critical developments. Precisely because land use is anthropogenic, it is also a factor that could be quickly and immediately corrected by human action.
In this thesis, I therefore assess the impact of three climate change scenarios of increasing intensity in combination with differently scheduled mowing regimes on the long-term development and dispersal success of insects in Northwest German grasslands. The large marsh grasshopper (LMG, Stethophyma grossum, Linné 1758) is used as a species of reference for the analyses. It inhabits wet meadows and marshes and has a limited, yet fairly good ability to disperse. Mowing and climate conditions affect the development and mortality of the LMG differently depending on its life stage.
The specifically developed simulation model HiLEG (High-resolution Large Environmental Gradient) serves as a tool for investigating and projecting viability and dispersal success under different climate conditions and land use scenarios. It is a spatially explicit, stage- and cohort-based model that can be individually configured to represent the life cycle and characteristics of terrestrial insect species, as well as high-resolution environmental data and the occurrence of external disturbances. HiLEG is freely available and adjustable software that can be used to support conservation planning in cultivated grasslands.
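Purely for illustration, the toy Python sketch below shows what a stage- and cohort-based weekly update with an external mowing disturbance can look like, in the spirit of the model class described above. The stages, survival rates, mowing mortalities, and development times are made-up assumptions and do not reproduce HiLEG's actual parameterisation or code.

```python
STAGES = ["egg", "larva", "adult"]
WEEKLY_SURVIVAL = {"egg": 0.98, "larva": 0.90, "adult": 0.95}    # hypothetical rates
MOWING_MORTALITY = {"egg": 0.05, "larva": 0.60, "adult": 0.30}   # hypothetical rates
DEVELOPMENT_WEEKS = {"egg": 30, "larva": 8}                      # weeks until the next stage

def step(cohorts, week, mowing_weeks):
    """Advance every cohort (stage, age_in_weeks, abundance) by one simulated week."""
    updated = []
    for stage, age, n in cohorts:
        n *= WEEKLY_SURVIVAL[stage]                    # stage-specific background mortality
        if week in mowing_weeks:                       # external disturbance: mowing event
            n *= 1.0 - MOWING_MORTALITY[stage]
        age += 1
        if stage != "adult" and age >= DEVELOPMENT_WEEKS[stage]:
            stage, age = STAGES[STAGES.index(stage) + 1], 0
        if n > 1e-3:                                   # drop effectively extinct cohorts
            updated.append((stage, age, n))
    return updated

cohorts = [("egg", 0, 1000.0)]                         # a single egg cohort at week 0
for week in range(52):
    cohorts = step(cohorts, week, mowing_weeks={24, 36})
print(cohorts)
```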
In the three case studies of this thesis, I explore various aspects related to the structure of simulation models per se, their importance in conservation planning in general, and insights regarding the LMG in particular. It became apparent that a detailed resolution of model processes and components is crucial for projecting the long-term effects of spatially and temporally confined events. Taking conservation measures at the regional level into account has further proven relevant, especially in light of the climate crisis. I found that the LMG benefits from global warming in principle, but continues to be constrained by harmful mowing regimes. Land use measures could, however, be adapted in such a way that they allow the expansion and establishment of the LMG without overly affecting agricultural yields.
Overall, simulation models like HiLEG can make an important contribution and add value to conservation planning and policy-making. Properly used, simulation results shed light on aspects that might be overlooked by subjective judgment and the experience of individual stakeholders. Even though it is in the nature of models that they are subject to limitations and only represent fragments of reality, this should not keep stakeholders from using them, as long as these limitations are clearly communicated. Similar to HiLEG, models could further be designed in such a way that not only the parameterization can be adjusted as required, but also the implementation itself can be improved and changed as desired. This openness and flexibility should become more widespread in the development of simulation models.
The relevance of physical fitness for children’s and adolescents’ health is indisputable, and it is crucial to regularly assess and evaluate children’s and adolescents’ individual physical fitness development in order to detect potential negative health consequences in time. Physical fitness tests are easy to administer, reliable, and valid, which is why they should be widely used to provide information on the performance development and health status of children and adolescents. When talking about the development of physical fitness, two perspectives can be distinguished. One perspective is how the physical fitness status of children and adolescents has changed over the past decades (i.e., secular trends). The other perspective covers the analysis of how physical fitness develops with increasing age due to growth and maturation processes. Although the development of children’s and adolescents’ physical fitness has been extensively described and analyzed in the literature, some questions remain open and will be addressed in the present doctoral thesis.
Previous systematic reviews and meta-analyses have examined secular trends in children’s and adolescents’ physical fitness. However, considering that those analyses are by now 15 years old and that updates are available for only a limited set of physical fitness components, it is time to re-analyze the literature and examine secular trends for selected components of physical fitness (i.e., cardiorespiratory endurance, muscle strength, proxies of muscle power, and speed). Furthermore, the available studies on children’s development of physical fitness, as well as the effects of moderating variables such as age and sex, have been investigated within a long-term ontogenetic perspective. However, the effects of age and sex in the transition from pre-puberty to puberty in the ninth year of life, examined from a short-term ontogenetic perspective, and the effect of timing of school enrollment on children’s development of physical fitness have not been clearly identified. Therefore, the present doctoral thesis seeks to complement the knowledge of children’s and adolescents’ physical fitness development by updating the secular trend analysis for selected components of physical fitness, by examining short-term ontogenetic cross-sectional developmental differences in children’s physical fitness, and by comparing the physical fitness of older- and younger-than-keyage children with that of keyage-children. These findings provide valuable information about children’s and adolescents’ physical fitness development to help prevent potential deficits in physical fitness as early as possible and consequently ensure a holistic development and a lifelong healthy life.
Initially, a systematic review to provide an ‘update’ on secular trends in selected components of physical fitness (i.e., cardiorespiratory endurance, relative muscle strength, proxies of muscle power, speed) in children and adolescents aged 6 to 18 years was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement guidelines. To examine short-term ontogenetic cross-sectional developmental differences and to compare the physical fitness of older- and younger-than-keyage children with that of keyage-children, physical fitness data of 108,295 keyage-children (i.e., aged 8.00 to 8.99 years), 2,586 younger-than-keyage children (i.e., aged 7.00 to 7.99 years), and 26,540 older-than-keyage children (i.e., aged 9.00 to 9.99 years) from the third grade were analyzed. Physical fitness was assessed through the EMOTIKON test battery measuring cardiorespiratory endurance (i.e., 6-min-run test), coordination (i.e., star-run test), speed (i.e., 20-m linear sprint test), and proxies of lower (i.e., standing long jump test) and upper (i.e., ball-push test) limb muscle power. Statistical inference was based on Linear Mixed Models.
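The sketch below illustrates the kind of linear mixed model that underlies such inference, using Python and statsmodels; the file name, the column names, and the choice of school as the random-effect grouping are hypothetical assumptions, and the actual EMOTIKON analyses are considerably more elaborate.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per child with a fitness outcome and covariates.
df = pd.read_csv("emotikon_fitness.csv")

# Fixed effects for chronological age and sex, random intercepts for schools,
# illustrated here with the 6-min-run distance as the outcome.
model = smf.mixedlm("run_6min_m ~ age_years + C(sex)", data=df, groups=df["school_id"])
result = model.fit()
print(result.summary())
```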
Findings from the systematic review revealed a large initial improvement and an equally large subsequent decline in cardiorespiratory endurance between 1986 and 2010 as well as a stabilization between 2010 and 2015, a general trend towards a small improvement in relative muscle strength from 1972 to 2015, an overall small negative quadratic trend for proxies of muscle power from 1972 to 2015, and a small-to-medium improvement in speed from 2002 to 2015. Findings from the cross-sectional studies showed that even within a single prepubertal year of life (i.e., the ninth year) physical fitness performance develops linearly with increasing chronological age, that boys showed better performances than girls in all physical fitness components, and that the components varied in the size of the sex and age effects. Furthermore, findings revealed that older-than-keyage children showed poorer physical fitness performance than keyage-children, that older-than-keyage girls showed better performances than older-than-keyage boys, and that younger-than-keyage children outperformed keyage-children.
Due to the varying secular trends in physical fitness, it is recommended to promote physical activity and physical fitness initiatives for children and adolescents to prevent adverse effects on health and well-being. More precisely, public health initiatives should place particular emphasis on training cardiorespiratory endurance and muscle strength because both components showed strong positive associations with markers of health. Furthermore, the findings imply that physical education teachers, coaches, or researchers can utilize a proportional adjustment to individually interpret the physical fitness of prepubertal school-aged children. Special attention should be given to the promotion of physical fitness in older-than-keyage children because they showed poorer physical fitness performance than keyage-children. Therefore, it is necessary to specifically consider this group and provide additional health and fitness programs that reduce the deficits in physical fitness they experienced during prior years and thus guarantee a holistic development.
Personal data privacy is considered a fundamental right. It forms part of our highest ethical standards and is anchored in legislation and, from the technical perspective, in various best practices. Yet protecting against personal data exposure is a challenging problem when it comes to generating privacy-preserving datasets that support machine learning and data mining operations. The issue is further compounded by the fact that devices such as consumer wearables and sensors track user behaviours at such a fine-grained level, thereby accelerating the formation of multi-attribute, large-scale, high-dimensional datasets.
In recent years, de-anonymisation incidents reported with increasing frequency in the news, including but not limited to the telecommunication, transportation, financial transaction, and healthcare sectors, have resulted in the exposure of sensitive private information. These incidents indicate that releasing privacy-preserving datasets requires serious consideration from the pre-processing perspective. A critical problem in this regard is the time complexity of applying syntactic anonymisation methods, such as k-anonymity, l-diversity, or t-closeness, to generate privacy-preserving data. Previous studies have shown that this problem is NP-hard.
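For readers unfamiliar with these syntactic privacy models, the small Python sketch below checks k-anonymity for a candidate quasi-identifier set with pandas; the toy records and attribute names are invented for illustration, and the thesis is concerned with far larger, high-dimensional data where exactly this kind of step becomes the bottleneck.

```python
import pandas as pd

def is_k_anonymous(df, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs at least k times."""
    return df.groupby(list(quasi_identifiers)).size().min() >= k

records = pd.DataFrame({
    "zip":       ["14469", "14469", "14482", "14482"],
    "age":       [34, 34, 51, 51],
    "gender":    ["f", "f", "m", "m"],
    "diagnosis": ["flu", "cold", "flu", "flu"],    # sensitive attribute, not a QID
})
print(is_k_anonymous(records, ["zip", "age", "gender"], k=2))   # True
```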
This thesis focuses on large high-dimensional datasets as a special case of data that is characteristically challenging to anonymise using syntactic methods. In essence, large high-dimensional data contains a large number of attributes relative to the population of attribute values. Applying standard syntactic data anonymisation approaches to such data results either in high information loss, rendering the data useless for analytics operations, or in low privacy, because inferences based on the data remain possible when information loss is minimised.
We postulate that this problem can be resolved effectively by searching for and eliminating all the quasi-identifiers (QIDs) present in a high-dimensional dataset. Essentially, we formulate the privacy-preserving data sharing problem as the Find-QID problem.
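As a naive point of reference, a minimal Python sketch of the Find-QID idea is shown below: it enumerates attribute subsets and reports those whose value combinations single out at least one record. This exponential-time baseline is an illustrative assumption on my part, not the thesis's method, which develops far more efficient greedy and GPU-accelerated strategies.

```python
from itertools import combinations
import pandas as pd

def find_qids(df, max_size=None):
    """Enumerate attribute subsets whose value combinations single out at least one record."""
    cols = list(df.columns)
    max_size = max_size or len(cols)
    qids = []
    for size in range(1, max_size + 1):
        for subset in combinations(cols, size):
            if df.groupby(list(subset)).size().min() == 1:    # some record is unique
                qids.append(subset)
    return qids

records = pd.DataFrame({
    "zip":    ["14469", "14469", "14482"],
    "age":    [34, 34, 51],
    "gender": ["f", "m", "m"],
})
print(find_qids(records))
```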
Further, we show that despite the complex nature of absolute privacy, the discovery of QIDs can be achieved reliably for large datasets. The risk of private data exposure through inferences can be circumvented, and both can be achieved practicably without the need for high-performance computers.
For this purpose, we present, implement, and empirically assess both mathematical and engineering optimisation methods for a deterministic discovery of privacy-violating inferences. This includes a greedy search scheme that efficiently queues QID candidates based on their tuple characteristics, the projection of QIDs onto Bayesian inferences, and a way of countering the state-space explosion of Bayesian networks with an aggregation strategy taken from the multigrid context and with vectorised GPU acceleration. Part of this work showcases processing accelerations of several orders of magnitude, particularly in high dimensions. We even achieve near-real-time runtimes for applications that are currently impractical. At the same time, we demonstrate how such contributions could be abused to de-anonymise Kristine A. and Cameron R. in a public Twitter dataset addressing the US Presidential Election 2020.
Finally, this work contributes, implements, and evaluates an extended and generalised version of the novel syntactic anonymisation methodology, attribute compartmentation. Attribute compartmentation promises sanitised datasets without remaining quasi-identifiers while minimising information loss. To prove its functionality in the real world, we partner with digital health experts to conduct a medical use case study. As part of the experiments, we illustrate that attribute compartmentation is suitable for everyday use and, as a positive side effect, even circumvents a common domain issue of base rate neglect.
The present dissertation conducts empirical research on the relationship between urban life and its economic costs, especially for the environment. On the one hand, existing gaps in research on the influence of population density on air quality are closed; on the other hand, innovative policy measures in the transport sector that are intended to make metropolitan areas more sustainable are examined. The focus is on air pollution, congestion, and traffic accidents, which raise important questions of general welfare and represent significant cost factors of urban life. They affect a significant proportion of the world's population: while 55% of the world's people already lived in cities in 2018, this share is expected to reach approximately 68% by 2050.
The four self-contained chapters of this thesis can be divided into two sections: Chapters 2 and 3 provide new causal insights into the complex interplay between urban structures and air pollution. Chapters 4 and 5 then examine policy measures to promote non-motorised transport and their influence on air quality as well as congestion and traffic accidents.