Supermassive black holes reside in the hearts of almost all massive galaxies. Their evolutionary path seems to be strongly linked to the evolution of their host galaxies, as implied by several empirical relations between the black hole mass (M_BH) and different host galaxy properties. The physical driver of this co-evolution is, however, still not understood. More mass measurements over homogeneous samples and a detailed understanding of systematic uncertainties are required to fathom the origin of the scaling relations.
In this thesis, I present mass estimations of supermassive black holes in the nuclei of one late-type and thirteen early-type galaxies. Our SMASHING sample extends from the intermediate to the massive galaxy mass regime and was selected to fill in gaps in the number of galaxies along the scaling relations. All galaxies were observed at high spatial resolution, making use of the adaptive-optics mode of integral field unit (IFU) instruments on state-of-the-art telescopes (SINFONI, NIFS, MUSE). I extracted the stellar kinematics from these observations and constructed dynamical Jeans and Schwarzschild models to estimate the mass of the central black holes robustly. My new mass estimates increase the number of early-type galaxies with measured black hole masses by 15%. The seven measured galaxies with nuclear light deficits ('cores') augment the sample of cored galaxies with measured black holes by 40%. In addition to determining black hole masses, evaluating their accuracy is crucial for understanding the intrinsic scatter of the black hole-host galaxy scaling relations. I tested various sources of systematic uncertainty on my derived mass estimates.
The M_BH estimate for the single late-type galaxy of the sample yielded an upper limit, which I could constrain very robustly. I tested the effects of dust, mass-to-light ratio (M/L) variation, and dark matter on the measured M_BH. Based on these tests, the typically assumed constant M/L ratio can be an adequate assumption to account for the small amounts of dark matter in the center of that galaxy. I also tested the effect of a spatially varying M/L on the M_BH measurement for a second galaxy. When stellar M/L variations were considered in the dynamical modeling, the measured M_BH decreased by 30%. In the future, this test should be performed on additional galaxies to learn how the assumption of a constant M/L biases the estimated black hole masses.
Based on our upper-limit mass measurement, I confirm previous suggestions that resolving the predicted black hole sphere of influence is not a strict condition for measuring black hole masses. Instead, it is only a rough guide for the detection of the black hole when high-quality, high signal-to-noise IFU data are used for the measurement. About half of our sample consists of massive early-type galaxies which show nuclear surface brightness cores and signs of triaxiality. While these types of galaxies are typically modeled with axisymmetric methods, the effects on M_BH are not yet well studied. The massive galaxies of our sample are well suited to test the effect of different stellar dynamical models on the measured black hole mass in evidently triaxial galaxies. I have compared spherical Jeans and axisymmetric Schwarzschild models and will add triaxial Schwarzschild models to this comparison in the future. The constructed Jeans and Schwarzschild models mostly disagree with each other and cannot reproduce many of the triaxial features of the galaxies (e.g., nuclear sub-components, prolate rotation). The consequence of the axisymmetric-versus-triaxial assumption for the accuracy of M_BH, and its impact on the black hole-host galaxy relation, needs to be carefully examined in the future.
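The sphere of influence discussed above is conventionally defined as r_SOI = G M_BH / sigma^2, the radius inside which the black hole dominates the stellar gravitational potential. A minimal sketch of the scale involved; the numerical example is illustrative and does not use values from this thesis:

```python
import math

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30   # solar mass [kg]
PC = 3.086e16      # parsec [m]

def r_soi_pc(m_bh_msun, sigma_kms):
    """Sphere-of-influence radius r_SOI = G * M_BH / sigma^2, returned in parsec."""
    sigma = sigma_kms * 1e3                      # km/s -> m/s
    return G * (m_bh_msun * M_SUN) / sigma**2 / PC

# Illustrative example: a 10^8 solar-mass black hole in a galaxy
# with a stellar velocity dispersion of 200 km/s gives r_SOI of about 11 pc.
r = r_soi_pc(1e8, 200.0)
```

Resolving an angle of this order at typical galaxy distances is what drives the need for the adaptive-optics IFU observations described above.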
In the sample of galaxies with published M_BH, we find measurements based on different dynamical tracers, requiring different observations, assumptions, and methods. Crucially, different tracers do not always give consistent results. I have used two independent tracers (cold molecular gas and stars) to estimate M_BH in a regular galaxy of our sample. While the two estimates are consistent within their errors, the stellar-based measurement is twice as high as the gas-based one. Similar trends have also been found in the literature. Therefore, a rigorous test of the systematics associated with the different modeling methods is required in the future. I caution that the effects of different tracers (and methods) must be taken into account when discussing the scaling relations.
I conclude this thesis by comparing my galaxy sample with the compilation of galaxies with measured black holes from the literature, also adding six SMASHING galaxies that were published outside of this thesis. None of the SMASHING galaxies deviates significantly from the literature measurements. Their inclusion among the published early-type galaxies shifts the M_BH-effective velocity dispersion relation towards a shallower slope, a change driven mainly by the massive galaxies of our sample. More unbiased and homogeneous measurements are needed in the future to determine the shape of the relation and understand its physical origin.
Massive Open Online Courses (MOOCs) open up new opportunities to learn a wide variety of skills online and are thus well suited for individual education, especially where proficient teachers are not available locally. At the same time, modern society is undergoing a digital transformation, requiring the training of large numbers of current and future employees. Abstract thinking, logical reasoning, and the need to formulate instructions for computers are becoming increasingly relevant. A holistic way to train these skills is to learn how to program. Programming, in addition to being a mental discipline, is also considered a craft, and practical training is required to achieve mastery. In order to effectively convey programming skills in MOOCs, practical exercises are incorporated into the course curriculum to offer students the necessary hands-on experience to reach an in-depth understanding of the programming concepts presented. Our preliminary analysis showed that, while being an integral and rewarding part of courses, practical exercises bear the risk of overburdening students who are struggling with conceptual misunderstandings and unknown syntax. In this thesis, we develop, implement, and evaluate different interventions with the aim of improving the learning experience, sustainability, and success of online programming courses. Data from four programming MOOCs, with a total of over 60,000 participants, are employed to determine criteria for practical programming exercises best suited for a given audience.
Based on over five million executions and scoring runs from students' task submissions, we deduce exercise difficulties, students' patterns in approaching the exercises, and potential flaws in exercise descriptions as well as preparatory videos. The primary issue in online learning is that students face a social gap caused by their isolated physical situation. Each individual student usually learns alone in front of a computer and suffers from the absence of a pre-determined time structure as provided in traditional school classes. Furthermore, online learning usually presses students into a one-size-fits-all curriculum, which presents the same content to all students, regardless of their individual needs and learning styles. Any means of personalizing content or giving individual feedback on the problems students encounter is mostly ruled out by the discrepancy between the number of learners and the number of instructors. This results in a high demand for self-motivation and determination from MOOC participants. Social distance exists between individual students as well as between students and course instructors. It decreases engagement and poses a threat to learning success. Within this research, we approach the identified issues within MOOCs and suggest scalable technical solutions that improve social interaction and balance content difficulty.
Our contributions include situational interventions, approaches for personalizing educational content as well as concepts for fostering collaborative problem-solving. With these approaches, we reduce counterproductive struggles and create a universal improvement for future programming MOOCs. We evaluate our approaches and methods in detail to improve programming courses for students as well as instructors and to advance the state of knowledge in online education.
Data gathered from our experiments show that receiving peer feedback on one's programming problems improves overall course scores by up to 17%. Merely the act of phrasing a question about one's problem improved overall scores by about 14%. The rate of students reaching out for help was significantly improved by situational just-in-time interventions. Request for Comment interventions increased the share of students asking for help by up to 158%. Data from our four MOOCs further provide detailed insight into the learning behavior of students. We outline additional significant findings with regard to student behavior and demographic factors. Our approaches, the technical infrastructure, the numerous educational resources developed, and the data collected provide a solid foundation for future research.
This thesis presents new SAR methods and their application to tectonically active systems and the related surface deformation. Two case studies are presented in three publications:
(1) The coseismic deformation related to the Nura earthquake (5 October 2008, magnitude Mw 6.6) at the eastern termination of the intramontane Alai valley, located between the southern Tien Shan and the northern Pamir. The coseismic surface displacements are analysed using SAR (Synthetic Aperture Radar) data. The results show clear gradients in the vertical and horizontal directions along a complex pattern of surface ruptures and active faults. To integrate and interpret these observations in the context of the regional active tectonics, the SAR data analysis is complemented with seismological data and geological field observations. The main moment release of the Nura earthquake appears to be on the Pamir Frontal thrust, while the main surface displacements and surface rupture occurred in its footwall and along the NE–SW-striking Irkeshtam fault. With InSAR data from ascending and descending satellite tracks, along with pixel offset measurements, the Nura earthquake source is modelled as a segmented rupture. One fault segment corresponds to high-angle brittle faulting at the Pamir Frontal thrust, and two further segments show moderate-angle, low-friction thrusting at the Irkeshtam fault. The integrated analysis of the coseismic deformation argues for rupture segmentation and strain partitioning associated with the earthquake. It possibly activated an orogenic wedge in the easternmost segment of the Pamir-Alai collision zone. Further, the style of the segmentation may be associated with the presence of Paleogene evaporites.
(2) The second focus is on slope instabilities and consequent landslides in the area of the prominent topographic transition between the Fergana basin and the high-relief Alai range. The Alai range constitutes an active orogenic wedge of the Pamir-Tien Shan collision zone, described as a progressively northward-propagating fold-and-thrust belt. The interferometric analysis of ALOS/PALSAR radar data covers a period of four years (2007-2010) and is based on the Small Baseline Subset (SBAS) time-series technique to assess surface deformation with millimeter accuracy. 118 interferograms are analyzed to observe spatially continuous movements with downslope velocities of up to 71 mm/yr. The obtained rates indicate slow movement of the deep-seated landslides during the observation period. We correlated these movements with precipitation and seismic records. The results suggest that the deformation peaks correlate with rainfall in the three preceding months and with one earthquake event. In a next step, to understand the spatial pattern of landslide processes, the tectonic, morphologic, and lithologic settings are combined with the patterns of surface deformation. We demonstrate that the lithological and tectonic structural patterns are the main controlling factors for landslide occurrence and surface deformation magnitudes. Furthermore, active contractional deformation at the front of the orogenic wedge is the main mechanism sustaining relief. Some of the slower but continuously moving slope instabilities are directly related to tectonically active faults and to unconsolidated young Quaternary syn-orogenic sedimentary sequences. The slow-moving landslides observed with InSAR represent active deep-seated gravitational slope deformation phenomena, observed here for the first time in the Tien Shan mountains.
Our approach offers a new combination of InSAR techniques and tectonic aspects to localize and understand enhanced slope instabilities in tectonically active mountain fronts in the Kyrgyz Tien Shan.
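At the core of the InSAR measurements summarized above is the conversion of interferometric phase to line-of-sight displacement, d = lambda * phi / (4 pi), so one full fringe corresponds to half a wavelength of ground motion. A minimal sketch; the L-band wavelength (~23.6 cm for ALOS/PALSAR) is stated from general knowledge, and the sign convention varies between processing conventions:

```python
import math

WAVELENGTH = 0.236  # approximate ALOS/PALSAR L-band radar wavelength [m]

def los_displacement(phase_rad):
    """Line-of-sight displacement [m] from unwrapped interferometric phase [rad].
    The sign convention (toward vs. away from the satellite) depends on the processor."""
    return WAVELENGTH * phase_rad / (4 * math.pi)

# One full fringe (2*pi of phase) corresponds to half a wavelength, i.e. 11.8 cm:
fringe = los_displacement(2 * math.pi)
```

This half-wavelength sensitivity is what makes millimeter-scale deformation rates detectable once many interferograms are stacked in an SBAS time series.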
The contractile vacuole (CV) is an osmoregulatory organelle found exclusively in algae and protists. In addition to expelling excessive water out of the cell, it also expels ions and other metabolites and thereby contributes to the cell's metabolic homeostasis. The interest in the CV reaches beyond its immediate cellular roles. The CV's function is tightly related to basic cellular processes such as membrane dynamics and vesicle budding and fusion; several physiological processes in animals, such as synaptic neurotransmission and blood filtration in the kidney, are related to the CV's function; and several pathogens, such as the causative agents of sleeping sickness, possess CVs, which may serve as pharmacological targets. The green alga Chlamydomonas reinhardtii has two CVs. They are the smallest known CVs in nature, and they remain relatively untouched in the CV-related literature. Many genes that have been shown to be related to the CV in other organisms have close homologues in C. reinhardtii. We attempted to silence some of these genes and observe the effect on the CV. One of our genes, VMP1, caused striking, severe phenotypes when silenced. Cells exhibited defective cytokinesis and aberrant morphologies. The CV, incidentally, remained unscathed. In addition, mutant cells showed some evidence of disrupted autophagy. Several important regulators of the cell cycle as well as autophagy were found to be underexpressed in the mutant. Lipidomic analysis revealed many meaningful changes between wild-type and mutant cells, reinforcing the compromised-autophagy observation. VMP1 is a singular protein, with homologues in numerous eukaryotic organisms (aside from fungi), but usually with no relatives in each particular genome. Since its first characterization in 2002 it has been associated with several cellular processes and functions, namely autophagy, programmed cell-death, secretion, cell adhesion, and organelle biogenesis. 
It has been implicated in several human diseases: pancreatitis, diabetes, and several types of cancer. Our results reiterate some of the observations in VMP1's six reported homologues, but, importantly, show for the first time an involvement of this protein in cell division. The mechanisms underlying this involvement in Chlamydomonas, as well as other key aspects, such as VMP1's subcellular localization and interaction partners, still await elucidation.
The creation of complex polymer structures has been one of the major research topics of the last couple of decades. This work deals with the synthesis of (block co-)polymers, the creation of complex and stimuli-responsive aggregates by self-assembly, and the cross-linking of these structures. The higher-order self-assembly of the aggregates is also investigated. As one example, the formation of poly(2-oxazoline)-based micelles in aqueous solution and their simultaneous functionalization and cross-linking using thiol-yne chemistry is presented. By introducing pH-responsive thiols into the core of the micelles, the influence of charged groups in the micelle core on the entire structure can be studied. The charging of these groups leads to a swelling of the core and a decrease in the local concentration of the corona-forming block (poly(2-ethyl-2-oxazoline)). This decrease in concentration shifts the cloud point temperature of this Type I thermoresponsive polymer to higher temperatures. When the swelling of the core is prevented, e.g. by the introduction of sufficient amounts of salt, this behavior disappears. Similar structures can be prepared using complex coacervate core micelles (C3Ms) built through the interaction of weakly acidic and basic polymer blocks. The advantage of these structures is that two different stabilizing blocks can be incorporated, which allows for more diverse and complex micelle structures and behavior. Using block copolymers with either a polyanionic or a polycationic block, C3Ms could be created with a corona containing two different soluble nonionic polymers, giving either a mixed corona or a Janus-type corona, depending on the polymers chosen. Using NHS and EDC, the micelles could easily be cross-linked by the formation of amide bonds in the core. The higher-order self-assembly behavior of these core cross-linked complex coacervate core micelles (C5Ms) was studied.
Due to the cross-linking, the micelles are stabilized against changes in pH and ionic strength, but the polymer chains are also no longer able to rearrange. For C5Ms with a mixed corona, network structures were likely formed upon the collapse of the thermoresponsive poly(N-isopropylacrylamide) (PNIPAAm), whereas for Janus-type C5Ms, well-defined spherical aggregates of micelles could be obtained, depending on the pH of the solution. Furthermore, it could be shown that Janus micelles can adsorb onto inorganic nanoparticles such as colloidal silica (through a selective interaction between PEO and the silica surface) or gold nanoparticles (by the binding of thiol end-groups). Asymmetric aggregates were also formed using the streptavidin-biotin binding motif. This is achieved by using three of the four binding sites of streptavidin for the binding of one three-arm star polymer end-functionalized with biotin groups. A homopolymer with one biotin end-group can be used to occupy the last position. This binding of two different polymers makes it possible to create asymmetric complexes. The phase separation is, in principle, independent of the kind of polymer, since the structure of the protein is the driving force rather than the intrinsic phase separation between polymers. Besides Janus structures, specific cross-linking can also be achieved by using other mixing ratios.
The role of biogenic carbonate producers in the evolution of the geometries of carbonate systems has been the subject of numerous research projects. Attempts to classify modern and ancient carbonate systems by their biotic components have led to the discrimination of biogenic carbonate producers broadly into Photozoans, which are characterised by an affinity for warm tropical waters and a high dependence on light penetration, and Heterozoans, which are generally associated with both cool-water environments and nutrient-rich settings with little to no light penetration. These broad categories of carbonate sediment producers have also been recognised to dominate in specific carbonate systems. Photozoans are commonly dominant in flat-topped platforms with steep margins, while Heterozoans generally dominate carbonate ramps. However, comparatively little is known about how these two main groups of carbonate producers interact in the same system and affect depositional geometries in response to changes in environmental conditions such as sea level fluctuation, antecedent slope, and sediment transport processes. This thesis presents numerical models to investigate the evolution of Miocene carbonate systems in the Mediterranean from two shallow marine domains: 1) a Miocene flat-topped platform dominated by Photozoans, with a significant component of Heterozoans on the slope, and 2) a Heterozoan distally steepened ramp with a seagrass-influenced (Photozoan) inner ramp. The overarching aim of the three articles comprising this cumulative thesis is to provide a numerical study of the role of Photozoans and Heterozoans in the evolution of carbonate system geometries and of how these biotas respond to changes in environmental conditions. This aim was achieved using stratigraphic forward modelling, which provides an approach to quantitatively integrate multi-scale datasets to reconstruct sedimentary processes and products during the evolution of a sedimentary system.
In a Photozoan-dominated carbonate system, such as the Miocene Llucmajor platform in the Western Mediterranean, stratigraphic forward modelling dovetailed with a robust set of sensitivity tests reveals how the geometry of the carbonate system is determined by the complex interaction of Heterozoan and Photozoan biotas in response to variable conditions of sea level fluctuation, substrate configuration, sediment transport processes, and the dominance of Photozoan over Heterozoan production. This study provides an enhanced understanding of the different carbonate systems that are possible under different ecological and hydrodynamic conditions. The research also gives insight into the roles of different biotic associations in the evolution of carbonate geometries through time and space. The results further show that the main driver of platform progradation in a Llucmajor-type system is the lowstand production of Heterozoan sediments, which forms the necessary substratum for Photozoan production.
In Heterozoan systems, sediment production is mainly characterised by high-transport deposits that are prone to redistribution by waves and gravity, thereby precluding the development of steep margins. However, in the Menorca ramp, the occurrence of sediment trapping by seagrass led to the evolution of distal slope steepening. We investigated, through numerical modelling, how such a seagrass-influenced ramp responds to the frequency and amplitude of sea level changes, to variable carbonate production between the euphotic and oligophotic zones, and to changes in the configuration of the paleoslope. The study reinforces some previous hypotheses and presents alternative scenarios to the established concepts of high-transport ramp evolution. The results of sensitivity experiments show that steep slopes are favoured in ramps that develop under high-frequency sea level fluctuations with amplitudes between 20 m and 40 m. We also show that ramp profiles are significantly impacted by the paleoslope inclination, such that an optimal antecedent slope of about 0.15 degrees is required for the Menorca distally steepened ramp to develop.
The third part presents an experimental case to argue for the existence of a Photozoan sediment threshold required for the development of steep margins in carbonate platforms. This was carried out by developing sensitivity tests on the forward models of the flat-topped (Llucmajor) platform and the distally steepened (Menorca) platform. The results show that models with Photozoan sediment proportion below a threshold of about 40% are incapable of forming steep slopes. The study also demonstrates that though it is possible to develop steep margins by seagrass sediment trapping, such slopes can only be stabilized by the appropriate sediment fabric and/or microbial binding. In the Photozoan-dominated system, the magnitude of slope steepness depends on the proportion of Photozoan sediments in the system. Therefore, this study presents a novel tool for characterizing carbonate systems based on their biogenic components.
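Stratigraphic forward models of the kind used in these studies commonly encode the light dependence of Photozoan production with the Bosscher and Schlager (1992) growth law, G(z) = G_max * tanh(I0 * exp(-k z) / Ik). A minimal sketch of that depth dependence; all parameter values are illustrative defaults, not the calibrated values of this thesis:

```python
import math

def growth_rate(depth_m, g_max, i0=2000.0, ik=300.0, k=0.1):
    """Light-dependent carbonate growth rate (Bosscher & Schlager form).
    depth_m: water depth [m]; g_max: maximum growth rate;
    i0: surface light intensity; ik: saturating light intensity;
    k: light extinction coefficient [1/m]. All values are illustrative."""
    return g_max * math.tanh(i0 * math.exp(-k * depth_m) / ik)

# Photozoan production collapses with depth; a Heterozoan factory would be
# modeled as nearly depth-independent over this range instead.
shallow = growth_rate(5.0, g_max=1.0)   # close to g_max in the euphotic zone
deep = growth_rate(50.0, g_max=1.0)     # strongly light-limited
```

The sharp contrast between the shallow and deep rates is what lets the proportion of Photozoan sediment control whether a steep margin can be built and maintained.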
Partial synchronous states exist in systems of coupled oscillators between full synchrony and asynchrony. They are an important research topic because of the variety of dynamical states they exhibit. Frequently, they are studied using phase dynamics. This comes with a caveat, as phase dynamics are generally obtained in the weak-coupling limit as a first-order approximation in the coupling strength. The generalization to higher orders in the coupling strength is an open problem. Of particular interest in the research of partial synchrony are systems containing both attractive and repulsive coupling between the units. Such a mix of couplings yields very specific dynamical states that may help understand the transition between full synchrony and asynchrony. This thesis investigates partial synchronous states in mixed-coupling systems. First, a method for higher-order phase reduction is introduced to capture interactions beyond the pairwise ones of the first-order phase description, in the hope that these may apply to mixed-coupling systems. This new method for coupled systems with known phase dynamics of the units gives correct results but, like most comparable methods, is computationally expensive. It is applied to three Stuart-Landau oscillators coupled in a line with uniform coupling strength. A numerical method is derived to verify the analytical results. These results are interesting but lend importance to simpler phase models that still exhibit exotic states. Such simple, rarely considered models are Kuramoto oscillators with attractive and repulsive interactions. Depending on how the units are coupled and on the frequency difference between them, many different states can be reached. Rich synchronization dynamics, such as a Bellerophon state, are observed when considering a Kuramoto model with attractive interactions within two subpopulations (groups) and repulsive interactions between the groups.
In two groups, one attractive and one repulsive, of identical oscillators with a frequency difference, an interesting solitary state appears directly between full and partial synchrony. This system can be described very well analytically.
In plant cells, subcellular transport of cargo proteins relies to a large extent on post-Golgi transport pathways, many of which are mediated by clathrin-coated vesicles (CCVs). Vesicle formation is facilitated by different factors such as accessory proteins and adaptor protein complexes (APs), the latter serving as a bridge between cargo proteins and the coat protein clathrin. One type of accessory protein is defined by a conserved EPSIN N-TERMINAL HOMOLOGY (ENTH) domain and interacts with APs and clathrin via motifs in its C-terminal part. In Arabidopsis thaliana, there are three closely related ENTH domain proteins (EPSIN1, 2 and 3) and one highly conserved but phylogenetically distant outlier, termed MODIFIED TRANSPORT TO THE VACUOLE1 (MTV1). For MTV1, which is located at the trans-Golgi network (TGN), clathrin association and a role in vacuolar transport have been shown previously (Sauer et al. 2013). In contrast, only limited functional and localization data were available for EPSIN1 and EPSIN2, and EPSIN3 remained completely uncharacterized prior to this study (Song et al. 2006; Lee et al. 2007). The molecular details of ENTH domain proteins in plants are still unknown. In order to systematically characterize all four ENTH proteins in planta, we first investigated expression and subcellular localization by analysis of stable reporter lines under their endogenous promoters. Although all four genes are ubiquitously expressed, their subcellular distribution differs markedly. EPSIN1 and MTV1 are located at the TGN, whereas EPSIN2 and EPSIN3 are associated with the plasma membrane (PM) and the cell plate. To examine potential functional redundancy, we isolated knockout T-DNA mutant lines and created all higher-order mutant combinations. The clearest evidence for functional redundancy was observed in the epsin1 mtv1 double mutant, which is a dwarf displaying overall growth reduction. These findings are in line with the TGN localization of both MTV1 and EPSIN1.
In contrast, loss of EPSIN2 and EPSIN3 does not result in a growth phenotype compared to wild type; however, a triple knockout of EPSIN1, EPSIN2, and EPSIN3 results in partially sterile plants. We focused mainly on the epsin1 mtv1 double mutant and addressed the functional role of these two genes in clathrin-mediated vesicle transport by comprehensive molecular, biochemical, and genetic analyses. Our results demonstrate that EPSIN1 and MTV1 promote vacuolar transport and secretion of a subset of cargo. However, they do not seem to be involved in endocytosis and recycling. Importantly, employing high-resolution imaging and genetic and biochemical experiments probing the relationship of the AP complexes, we found that EPSIN1/AP1 and MTV1/AP4 define two spatially and molecularly distinct subdomains of the TGN. The AP4 complex is essential for MTV1 recruitment to the TGN, whereas EPSIN1 is independent of AP4 but presumably acts in an AP1-dependent framework. Our findings suggest that this ENTH/AP pairing preference is conserved between animals and plants.
Predators can have numerical and behavioral effects on prey animals. While numerical effects are well explored, the impact of behavioral effects is unclear. Furthermore, behavioral effects are generally analyzed either with a focus on single individuals or with a focus on consequences for other trophic levels. As a result, the impact of fear at the level of prey communities is overlooked, despite potential consequences for conservation and nature management. In order to improve our understanding of predator-prey interactions, an assessment of the consequences of fear in shaping prey community structures is crucial.
In this thesis, I evaluated how fear alters prey space use, community structure, and composition, focusing on terrestrial mammals. By integrating landscapes of fear into an existing individual-based and spatially explicit model, I simulated community assembly of prey animals via individual home range formation. The model comprises multiple hierarchical levels, from individual home range behavior to patterns of prey community structure and composition. The mechanistic approach of the model allowed for the identification of the underlying mechanisms driving prey community responses under fear.
My results show that fear modified prey space use and community patterns. Under fear, prey animals shifted their home ranges towards safer areas of the landscape. Furthermore, fear decreased the total biomass and the diversity of the prey community and reinforced shifts in community composition towards smaller animals. These effects could be mediated by an increasing availability of refuges in the landscape. Under landscape changes, such as habitat loss and fragmentation, fear intensified negative effects on prey communities. Prey communities in risky environments were subject to a non-proportional diversity loss of up to 30% if fear was taken into account. Regarding habitat properties, I found that well-connected, large safe patches can reduce the negative consequences of habitat loss and fragmentation on prey communities. Including variation in risk perception between prey animals had consequences for prey space use. Animals with a high risk perception predominantly used safe areas of the landscape, while animals with a low risk perception preferred areas with a high food availability. On the community level, prey diversity was higher in heterogeneous landscapes of fear if individuals varied in their risk perception compared to scenarios in which all individuals had the same risk perception.
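The trade-off underlying these results, individuals weighing food gain against perceived risk, can be illustrated with a deliberately simplified patch-choice sketch. The scoring function and all values are hypothetical and much simpler than the thesis model, which simulates full home range formation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy landscape: each patch has a food value and a predation-risk value.
n_patches = 100
food = rng.uniform(0.0, 1.0, n_patches)
risk = rng.uniform(0.0, 1.0, n_patches)

def chosen_patch(beta):
    """Pick the patch maximizing food minus fear-weighted risk.
    beta encodes an individual's risk perception (an assumed trade-off form)."""
    return int(np.argmax(food - beta * risk))

timid = chosen_patch(beta=5.0)   # high risk perception: favors safety
bold = chosen_patch(beta=0.1)    # low risk perception: favors food
```

By a standard monotonicity argument, the timid individual's chosen patch can never be riskier than the bold individual's, and the bold individual's can never have less food, which mirrors the simulated space-use patterns described above.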
Overall, my findings give a first, comprehensive assessment of the role of fear in shaping prey communities. The linkage between individual home range behavior and patterns at the community level allows for a mechanistic understanding of the underlying processes. My results underline the importance of the structure of the landscape of fear as a key driver of prey community responses, especially if the habitat is threatened by landscape changes. Furthermore, I show that individual landscapes of fear can improve our understanding of the consequences of trait variation on community structures. Regarding conservation and nature management, my results support calls for modern conservation approaches that go beyond single species and address the protection of biotic interactions.
Ferroic materials have attracted a lot of attention over the years due to their wide range of applications in sensors, actuators, and memory devices. Their technological applications originate from their unique properties such as ferroelectricity and piezoelectricity. In order to optimize these materials, it is necessary to understand the coupling between their nanoscale structure and transient response, which are related to the atomic structure of the unit cell.
In this thesis, synchrotron X-ray diffraction is used to investigate the structure of ferroelectric thin film capacitors during application of a periodic electric field. Combining electrical measurements with time-resolved X-ray diffraction on a working device allows for visualization of the interplay between charge flow and structural motion. This constitutes the core of this work. The first part of this thesis discusses the electrical and structural dynamics of a ferroelectric Pt/Pb(Zr0.2,Ti0.8)O3/SrRuO3 heterostructure during charging, discharging, and polarization reversal. After polarization reversal a non-linear piezoelectric response develops on a much longer time scale than the RC time constant of the device. The reversal process is inhomogeneous and induces a transient disordered domain state. The structural dynamics under sub-coercive field conditions show that this disordered domain state can be remanent and can be erased with an appropriate voltage pulse sequence. The frequency-dependent dynamic characterization of a Pb(Zr0.52,Ti0.48)O3 layer, at the morphotropic phase boundary, shows that at high frequency, the limited domain wall velocity causes a phase lag between the applied field and both the structural and electrical responses. An external modification of the RC time constant of the measurement delays the switching current and widens the electromechanical hysteresis loop while achieving a higher compressive piezoelectric strain within the crystal.
In the second part of this thesis, time-resolved reciprocal space maps of multiferroic BiFeO3 thin films were measured to identify the domain structure and investigate the development of an inhomogeneous piezoelectric response during the polarization reversal. The presence of 109° domains is evidenced by the splitting of the Bragg peak.
The last part of this work investigates the effect of an optically excited ultrafast strain or heat pulse propagating through a ferroelectric BaTiO3 layer, where we observed an additional current response due to the laser pulse excitation of the metallic bottom electrode of the heterostructure.
What are the consequences of unemployment and precarious employment for individuals' health in Europe? What are the moderating factors that may offset (or increase) the health consequences of labor-market risks? How do the effects of these risks vary across different contexts, which differ in their institutional and cultural settings? Does gender, regarded as a social structure, play a role, and how? Answering these questions is the aim of my cumulative thesis, which seeks to advance our knowledge about the health consequences that unemployment and precariousness cause over the life course. In particular, I investigate how several moderating factors, such as gender, the family, and the broader cultural and institutional context, may offset or increase the impact of employment instability and insecurity on individual health.
In my first paper, 'The buffering role of the family in the relationship between job loss and self-perceived health: Longitudinal results from Europe, 2004-2011', I and my co-authors measure the causal effect of job loss on health and the role of the family and welfare states (regimes) as moderating factors. Using EU-SILC longitudinal data (2004-2011), we estimate the probability of experiencing 'bad health' following a transition to unemployment by applying linear probability models and undertake separate analyses for men and women. Firstly, we measure whether changes in the independent variable 'job loss' lead to changes in the dependent variable 'self-rated health' for men and women separately. Then, by adding different interaction terms to the model, we measure the moderating effect of the family, both in terms of emotional and economic support, and how much it varies across different welfare regimes. As an identification strategy, we first implement static fixed-effects panel models, which control for observed time-varying characteristics and for indirect health selection, i.e., constant unobserved heterogeneity. Secondly, to control for reverse causality and path dependency, we implement dynamic fixed-effects panel models, adding a lagged dependent variable to the model.
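The identifying idea of the static fixed-effects step can be sketched on simulated data: demeaning each individual's observations sweeps out any constant unobserved heterogeneity, even when it drives selection into job loss. The effect size, noise level, and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 500, 6                                # individuals, waves
alpha = rng.normal(size=n)                   # constant unobserved heterogeneity
p_loss = 0.2 + 0.1 * (alpha > 0)[:, None]    # job loss correlated with alpha
job_loss = (rng.random((n, t)) < p_loss).astype(float)
# hypothetical true effect of job loss on P(bad health): 0.15
bad_health = (0.15 * job_loss + 0.1 * alpha[:, None]
              + rng.normal(scale=0.05, size=(n, t)))

# Within (fixed-effects) transformation: demeaning each individual's series
# sweeps out alpha, i.e., any constant unobserved heterogeneity.
x = job_loss - job_loss.mean(axis=1, keepdims=True)
y = bad_health - bad_health.mean(axis=1, keepdims=True)
beta_fe = (x * y).sum() / (x ** 2).sum()
print(beta_fe)  # close to the true effect of 0.15
```

A pooled regression on the same data would be biased upward, because `alpha` raises both the probability of job loss and the probability of bad health; the within transformation removes exactly that channel.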
We explore the role of the family by focusing on close ties within households: we consider the presence of a stable partner and his/her working status as a source of social and economic support. According to previous literature, having a partner should reduce the stress from adverse events, thanks to the symbolic and emotional dimensions that such a relationship entails, regardless of any economic benefits. Our results, however, suggest that benefits linked to the presence of a (female) partner also come from the financial stability that (s)he can provide in terms of a second income. Furthermore, we find partners' employment to be at least as important as the mere presence of the partner in reducing the negative effect of job loss on the individual's health by maintaining the household's standard of living and decreasing economic strain on the family. Our results are in line with previous research, which has highlighted that some people cope better than others with adverse life circumstances, and the support provided by the family is a crucial resource in that regard.
We also reported an important interaction between the family and the welfare state in moderating the health consequences of unemployment, showing how the compensation effect of the family varies across welfare regimes. The family plays a decisive role in cushioning the adverse consequences of labor market risks in Southern and Eastern welfare states, which are characterized by less developed social protection systems and, especially in the Southern regimes, a high level of familialism.
The first paper also found important gender differences concerning job loss, family and welfare effects. Of particular interest is the evidence suggesting that health selection works differently for men and women, playing a more prominent role for women than for men in explaining the relationship between job loss and self-perceived health. The second paper, 'Gender roles and selection mechanisms across contexts: A comparative analysis of the relationship between unemployment, self-perceived health, and gender', investigates the unemployment-driven gender differential in health in more depth.
As this is a highly contested issue in the literature, we study whether men are more penalized than women or the other way around, and which mechanisms may explain the gender difference. To do that, we rely on two theoretical arguments: the availability of alternative roles and social selection. The first argument builds on the idea that men and women may compensate for the detrimental health consequences of unemployment through commitment to 'alternative roles,' which can provide the resources needed to fulfill people's socially constructed needs. Notably, the availability of alternative options depends on the different positions that men and women have in society.
Further, we merge the availability of the 'alternative roles' argument with the health selection argument. We assume that health selection could be contingent on people's social position as defined by gender and, thus, explain the gender differential in the relationship between unemployment and health. Ill people might be less reluctant to fall into or remain in unemployment (i.e., to self-select) if they have alternative roles. In Western societies, women generally have more alternative roles than men and thus more discretion in their labor market attachment. Therefore, health selection should be stronger for them, explaining why unemployment is less of a threat for women than for their male counterparts.
Finally, relying on the idea of different gender regimes, we extended these arguments to comparisons across contexts. For example, in contexts where being a caregiver is assumed to be women's traditional and primary role and the primary breadwinner role is reserved for men, unemployment is less stigmatized, and taking up alternative roles is more socially accepted, for women than for men (Hp.1). Accordingly, social (self-)selection should be stronger for women than for men in traditional contexts, where, in the case of ill health, the separation from work is eased by the availability of alternative roles (Hp.2).
By focusing on contexts that are representative of different gender regimes, we implement a multiple-step comparative approach. Firstly, by using EU-SILC longitudinal data (2004-2015), our analysis tests gender roles and selection mechanisms for Sweden and Italy, representing radically different gender regimes, thus providing institutional and cultural variation. Then, we limit institutional heterogeneity by focusing on Germany and comparing East- and West-Germany and older and younger cohorts—for West-Germany (SOEP data 1995-2017). Next, to assess the differential impact of unemployment for men and women, we compare (unemployed and employed) men with (unemployed and employed) women. To do so, we calculate predicted probabilities and average marginal effects from two distinct random-effects probit models. Our first step is estimating random-effects models that assess the association between unemployment and self-perceived health, controlling for observable characteristics. In the second step, our fully adjusted model controls for both direct and indirect selection. We do this using dynamic correlated random-effects (CRE) models. Further, based on the fully adjusted model, we test our hypothesis on alternative roles (Hp.1) by comparing several contexts – models are estimated separately for each context. For this hypothesis, we pool men and women and include an interaction term between unemployment and gender, which has the advantage of allowing a direct test of whether gender differences in the effect of unemployment exist and are statistically significant. Finally, we test the role of selection mechanisms (Hp.2), using the KHB method to compare coefficients across nested nonlinear models. Specifically, we test the role of selection for the relationship between unemployment and health by comparing the partially-adjusted and fully-adjusted models. To allow selection mechanisms to operate differently between genders, we estimate separate models for men and women.
We found support for our first hypothesis—the context in which people are embedded structures the relationship between unemployment, health, and gender. We found no gendered effect of unemployment on health in the egalitarian context of Sweden. Conversely, in the traditional context of Italy, we observed substantive and statistically significant gender differences in the effect of unemployment on bad health, with women suffering less than men. We found the same pattern when comparing East and West Germany, and younger and older cohorts within West Germany.
On the contrary, our results did not support our theoretical argument on social selection. We found that in Sweden, women are more selected out of employment than men. In contrast, in Italy, health selection does not seem to be the primary mechanism behind the gender differential—Italian men and women seem to be selected out of employment to the same extent. Namely, we do not find any evidence that health selection is stronger for women in more traditional countries (Hp2), despite the fact that the institutional and the cultural context would offer them a more comprehensive range of 'alternative roles' relative to men. Moreover, our second hypothesis is also rejected in the second and third comparisons, where the cross-country heterogeneity is reduced to maximize cultural differences within the same institutional context. Further research that addresses selection into inactivity is needed to evaluate the interplay between selection and social roles across gender regimes.
While the health consequences of unemployment have been on the research agenda for a long time, the interest in precarious employment—defined as the linking of the vulnerable worker to work that is characterized by uncertainty and insecurity concerning pay, the stability of the work arrangement, limited access to social benefits, and statutory protections—has emerged only later. Since the 80s, scholars from different disciplines have raised concerns about the social consequences of the de-standardization of employment relationships. However, while work has undoubtedly become more precarious, very little is known about its causal effect on individual health and the role of gender as a moderator. These questions are at the core of my third paper: 'Bad job, bad health? A longitudinal analysis of the interaction between precariousness, gender and self-perceived health in Germany'. Herein, I investigate the multidimensional nature of precarious employment and its causal effect on health, particularly focusing on gender differences.
With this paper, I aim to overcome three major shortcomings of earlier studies: The first one regards the cross-sectional nature of data that prevents the authors from ruling out unobserved heterogeneity as a mechanism for the association between precarious employment and health. Indeed, several unmeasured individual characteristics—such as cognitive abilities—may confound the relationship between precarious work and health, leading to biased results. Secondly, only a few studies have directly addressed the role of gender in shaping the relationship. Moreover, available results on the gender differential are mixed and inconsistent: some found precarious employment to be more detrimental for women's health, while others found no gender differences or a stronger negative association for men. Finally, previous attempts at an empirical translation of the employment precariousness (EP) concept have not always been coherent with their theoretical framework. EP is usually assumed to be a multidimensional and continuous phenomenon; it is characterized by different dimensions of insecurity that may overlap in the same job and lead to different "degrees of precariousness." However, researchers have predominantly focused on one-dimensional indicators—e.g., temporary employment, subjective job insecurity—to measure EP and study the association with health. Besides the fact that this approach only partially captures the phenomenon's complexity, the major problem is the inconsistency of evidence that it has produced. Indeed, this line of inquiry generally reveals an ambiguous picture, with some studies finding substantial adverse effects of temporary over permanent employment, while others report only minor differences.
To measure the (causal) effect of precarious work on self-rated health and its variation by gender, I focus on Germany and use four waves from SOEP data (2003, 2007, 2011, and 2015). Germany is a suitable context for my study. Indeed, since the 1980s, the labor market and welfare system have been restructured in many ways to increase the German economy's competitiveness in the global market. As a result, the (standard) employment relationship has been de-standardized: non-standard and atypical employment arrangements—i.e., part-time work, fixed-term contracts, mini-jobs, and work agencies—have increased over time while wages have lowered, even among workers with standard work. In addition, the power of unions has also fallen over the last three decades, leaving a large share of workers without collective protection. Because of this process of de-standardization, the link between wage employment and strong social rights has eroded, making workers more powerless and more vulnerable to labor market risks than in the past. EP refers to this uneven distribution of power in the employment relationship, which can be detrimental to workers' health. Indeed, by affecting individuals' access to power and other resources, EP puts precarious workers at risk of experiencing health shocks and influences their ability to gain and accumulate health advantages (Hp.1).
Further, the focus on Germany allows me to investigate my second research question on the gender differential. Germany is usually regarded as a traditionalist gender regime: a context characterized by a traditional configuration of gender roles, in which being a caregiver is assumed to be women's primary role, whereas the primary breadwinner role is reserved for men. Although much progress has been made over the last decades towards a greater equalization of opportunities and more egalitarianism, the breadwinner model has barely changed towards a modified version. Thus, women usually take on the double role of workers (the so-called secondary earner) and caregivers, and men still devote most of their time to paid work activities. Moreover, the overall upward trend towards more egalitarian gender ideologies has leveled off over the last decades, moving notably towards more traditional gender ideologies.
In this setting, two alternative hypotheses are possible. Firstly, I assume that the negative relationship between EP and health is stronger for women than for men. This is because women are systematically more disadvantaged than men in the public and private spheres of life, having less access to formal and informal sources of power. These gender-related power asymmetries may interact with EP-related power asymmetries resulting in a stronger effect of EP on women's health than on men's health (Hp.2).
An alternative way of looking at the gender differential is to consider the interaction that precariousness might have with men's and women's gender identities. According to this view, the negative relationship between EP and health is weaker for women than for men (Hp.2a). In a society with a gendered division of labor and a strong link between masculine identities and a stable and well-rewarded job—i.e., a job that confers the role of primary family provider—a male worker in precarious employment might violate the traditional male gender role. Men in precarious jobs may be perceived, by themselves and by others, as possessing a socially undesirable characteristic, which conflicts with the stereotypical idea of themselves as the male breadwinner. Engaging in behaviors that contradict stereotypical gender identity may decrease self-esteem and foster feelings of inferiority, helplessness, and jealousy, leading to poor health.
I develop a new indicator of EP that empirically translates a definition of EP as a multidimensional and continuous phenomenon. I assume that EP is a latent construct composed of seven dimensions of insecurity chosen according to the theory and previous empirical research: Income insecurity, social insecurity, legal insecurity, employment insecurity, working-time insecurity, representation insecurity, worker's vulnerability. The seven dimensions are proxied by eight indicators available in the four waves of the SOEP dataset. The EP composite indicator is obtained by performing a multiple correspondence analysis (MCA) on the eight indicators. This approach aims to construct a summary scale in which all dimensions contribute jointly to the measured experience of precariousness and its health impact.
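A minimal sketch of how such an MCA-based score can be computed follows: a plain correspondence analysis of a dummy-coded indicator matrix via SVD, keeping the first principal coordinate as the summary scale. Dedicated packages add refinements (e.g., Benzécri corrections), and the toy indicator matrix below is hypothetical, not the SOEP data:

```python
import numpy as np

def mca_first_dimension(indicator):
    """First row principal coordinate of a correspondence analysis of a
    0/1 indicator matrix (respondents x dummy-coded categories).
    Minimal sketch of the MCA machinery, not a full MCA implementation."""
    P = indicator / indicator.sum()
    r = P.sum(axis=1, keepdims=True)        # row masses
    c = P.sum(axis=0, keepdims=True)        # column masses
    S = (P - r @ c) / np.sqrt(r @ c)        # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    return (U[:, 0] * s[0]) / np.sqrt(r[:, 0])

# hypothetical dummy-coded insecurity indicators for 6 workers
# (three binary dimensions, each coded as a yes/no column pair)
X = np.array([[1, 0, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1],
              [0, 1, 0, 1, 0, 1]], dtype=float)
scores = mca_first_dimension(X)
print(scores)  # one continuous precariousness score per worker
```

Workers with identical response patterns receive identical scores, which is what makes the first dimension usable as a continuous "degree of precariousness" scale.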
Further, the relationship between EP and 'general self-perceived health' is estimated by applying ordered probit random-effects estimators and calculating average marginal effects (hereafter AME). Then, to control for unobserved heterogeneity, I implement correlated random-effects models that add the within-individual means of the time-varying independent variables to the model. To test the significance of the gender differential, I add an interaction term between EP and gender in the fully adjusted model in the pooled sample.
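The correlated random-effects (Mundlak) device can be sketched in the linear case: adding the within-individual mean of the time-varying regressor absorbs the part of the individual effect correlated with it, so the coefficient on EP itself recovers the within effect. The effect size, noise levels, and names below are illustrative, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 400, 4                                   # individuals, waves
alpha = rng.normal(size=n)                      # individual effect
ep = rng.random((n, t)) + 0.5 * alpha[:, None]  # EP correlated with alpha
health = -0.2 * ep + alpha[:, None] + rng.normal(scale=0.1, size=(n, t))

# Mundlak/CRE device: include the within-individual mean of the time-varying
# regressor; its coefficient absorbs the correlated individual effect, so the
# coefficient on EP itself recovers the within (fixed-effects) estimate.
ep_bar = np.repeat(ep.mean(axis=1), t)          # one mean per person, repeated
X = np.column_stack([np.ones(n * t), ep.ravel(), ep_bar])
beta = np.linalg.lstsq(X, health.ravel(), rcond=None)[0]
print(beta[1])  # within effect of EP, close to the true -0.2
```

In the nonlinear ordered probit case used in the paper, the same means are added as regressors; the linear sketch just makes the mechanics of the device visible.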
My correlated random-effects models showed a negative and substantial 'effect' of EP on self-perceived health for both men and women. Although not statistically significant, the evidence is in line with previous cross-sectional literature and supports the hypothesis that employment precariousness could be detrimental to workers' health. Further, my results showed the crucial role of unobserved heterogeneity in shaping the health consequences of precarious employment. This is particularly important as evidence accumulates, yet it is still mostly descriptive.
Moreover, my results revealed a substantial difference between men and women in the relationship between EP and health: when EP increases, the risk of experiencing poor health increases much more for men than for women. This evidence contradicts previous theory according to which the gender differential is contingent on the structurally disadvantaged position of women in Western societies. In contrast, it seems to confirm the idea that men in precarious work could experience role conflict to a larger extent than women, as their self-standard is supposed to be the stereotypical breadwinner worker with a good and well-rewarded job. Finally, results from the multiple correspondence analysis contribute to the methodological debate on precariousness, showing that a multidimensional and continuous indicator can express a latent variable of EP.
All in all, the results on unemployment and employment precariousness reveal complementarities that have two implications: Policy-makers need to be aware that the total costs of unemployment and precariousness go far beyond the economic and material realm, penetrating other fundamental life domains such as individual health. Moreover, they need to balance the trade-off between adequately protecting unemployed people and fostering high-quality employment in reaction to the highlighted market pressures. In this sense, the further development of a (universalistic) welfare state certainly helps mitigate the adverse health effects of unemployment and, therefore, the future costs for both individuals' health and welfare spending. In addition, the presence of a working partner is crucial for reducing the health consequences of employment instability. Therefore, policies aiming to increase female labor market participation should be promoted, especially in those contexts where the welfare state is less developed.
Moreover, my results underline the importance of adopting a gender perspective in health research. The findings of the three articles show that job loss, unemployment, and precarious employment, in general, have adverse effects on men's health but weaker or absent consequences for women's health. This suggests the importance of labor and health policies that consider and further distinguish the specific needs of the male and female labor force in Europe. Nevertheless, a further implication emerges: the health consequences of employment instability and de-standardization need to be investigated in light of the gender arrangements and the transforming gender relationships in specific cultural and institutional contexts. My results indeed seem to suggest that women's health advantage may be a transitory phenomenon, contingent on the predominant gendered institutional and cultural context. As the structural difference between men's and women's positions in society erodes and egalitarianism becomes the dominant norm, the gender difference in the health consequences of job loss and precariousness will probably erode as well. Therefore, while gender equality in opportunities and roles is a desirable aspect for contemporary societies and a political goal that cannot be postponed further, this thesis raises a further and maybe more crucial question: What kind of equality should be pursued to provide men and women with both a good quality of life and equal chances in the public and private spheres? In this sense, I believe that social and labor policies aiming to reduce gender inequality in society should focus not only on improving women's integration into the labor market but also on implementing policies targeting men and facilitating their involvement in the private sphere of life. An equal redistribution of social roles could activate a crucial transformation of gender roles and of the cultural models that sustain and still legitimate gender inequality in Western societies.
Carbohydrates are found in every living organism, where they are responsible for numerous essential biological functions and processes. Synthetic polymers with pendant saccharides, called glycopolymers, mimic natural glycoconjugates in their special properties and functions. Employing such biomimetics furthers the understanding and control of biological processes. Hence, glycopolymers are valuable and interesting for applications in the medical and biological field. However, the synthesis of carbohydrate-based materials can be very challenging. In this thesis, the synthesis of biofunctional glycopolymers is presented, with a focus on aqueous-based, protecting-group-free and short synthesis routes to further advance the field of glycopolymer synthesis.
A practical and versatile precursor for glycopolymers are glycosylamines. To maintain the biofunctionality of the saccharides after their amination, regioselective functionalization was performed. This frequently performed synthesis was optimized for different sugars. The optimization was facilitated using a design of experiments (DoE) approach to reduce the number of necessary experiments and make the procedure more efficient. Here, the utility of using DoE for optimizing the synthesis of glycosylamines is discussed.
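As a sketch of the DoE idea, a two-level full-factorial design enumerates every combination of factor levels in 2^k structured runs, from which main effects and interactions can be estimated. The thesis does not specify the factors used; the three factors and their levels below are hypothetical placeholders:

```python
from itertools import product

# Hypothetical two-level full-factorial design for three synthesis factors
# (factor names and levels are illustrative, not taken from the thesis).
factors = {
    "temperature_C": (25, 40),
    "ammonium_equiv": (5, 30),
    "reaction_time_h": (24, 96),
}
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for run in design:
    print(run)
print(len(design))  # 2**3 = 8 runs instead of an exhaustive grid search
```

Fractional designs reduce the run count further when higher-order interactions can be neglected, which is the main source of the efficiency gain mentioned above.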
The glycosylamines were converted to glycomonomers, which were then polymerized to yield biofunctional glycopolymers. Here, the glycopolymers were designed to be applicable as layer-by-layer (LbL) thin film coatings for drug delivery systems. To enable the LbL technique, complementary glycopolymer electrolytes were synthesized by polymerization of the glycomonomers and subsequent modification or by post-polymerization modification. For drug delivery, liposomes were embedded into the glycopolymer coating as potential cargo carriers. The stability as well as the integrity of the glycopolymer layers and liposomes were investigated in the physiological pH range.
Different glycopolymers were also synthesized to be applicable as anti-adhesion therapeutics by providing advanced architectures with multivalent presentations of saccharides, which can inhibit the binding of pathogenic lectins. Here, the synthesis of glycopolymer hydrogel particles based on biocompatible poly(N-isopropylacrylamide) (NiPAm) was established using the free-radical precipitation polymerization technique. The influence of synthesis parameters on the sugar content in the gels and on the hydrogel morphology is discussed. The accessibility of the saccharides to model lectins and their enhanced, multivalent interaction were investigated.
At the end of this work, the synthesis strategies for the glycopolymers are generally discussed as well as their potential application in medicine.
Synthesis, assembly and thermo-responsivity of polymer-functionalized magnetic cobalt nanoparticles
(2018)
This thesis mainly covers the synthesis, surface modification, magnetic-field-induced assembly and thermo-responsive functionalization of superparamagnetic Co NPs initially stabilized by the hydrophobic small molecules oleic acid (OA) and trioctylphosphine oxide (TOPO), as well as the synthesis of both superparamagnetic and ferromagnetic Co NPs using end-functionalized polystyrene as a stabilizer.
Co NPs, due to their excellent magnetic and catalytic properties, have great potential for application in various fields, such as ferrofluids, catalysis, and magnetic resonance imaging (MRI). Superparamagnetic Co NPs are especially interesting, since they exhibit zero coercivity. They become magnetized in an external magnetic field and reach their saturation magnetization rapidly, but no magnetic moment remains after removal of the applied magnetic field. Therefore, they do not agglomerate in the body when they are used in biomedical applications. Normally, decomposition of metallic precursors at high temperature is one of the most important methods for the preparation of monodisperse magnetic NPs, providing tunability in size and shape. Hydrophobic ligands like OA, TOPO and oleylamine are often used both to control the growth of the NPs and to protect them from agglomeration. The as-prepared magnetic NPs can be used in biological applications as long as they are transferred into water. Moreover, their supercrystal assemblies have potential for high-density data storage and electronic devices. In addition to small molecules, polymers can also be used as surfactants for the synthesis of ferromagnetic and superparamagnetic NPs by changing the reaction conditions. Therefore, chapter 2 gives an overview of the basic concepts of synthesis, surface modification and self-assembly of magnetic nanoparticles, illustrated with various examples from recent work.
The hydrophobic surface of Co NPs synthesized with small-molecule surfactants limits their use in biological applications, which require a hydrophilic or aqueous environment. Surface modification (e.g., ligand exchange) is a general strategy for either phase transfer or surface functionalization. Therefore, in chapter 3, a ligand exchange process was conducted to functionalize the surface of Co NPs. PNIPAM is one of the most popular smart polymers; its lower critical solution temperature (LCST) is around 32 °C, with a reversible conformational change between hydrophobic and hydrophilic states. Novel nanocomposites of superparamagnetic Co NPs and thermo-responsive PNIPAM are therefore of great interest. Thus, well-defined superparamagnetic Co NPs were first synthesized through the thermolysis of cobalt carbonyl using OA and TOPO as surfactants. A functional ATRP initiator, containing an amine (as anchoring group) and a 2-bromopropionate group (SI-ATRP initiator), was used to replace the original ligands. This process is rapid and facile for efficient surface functionalization, and afterwards the Co NPs can be dispersed in the polar solvent DMF without aggregation. FT-IR spectroscopy showed that the TOPO was completely replaced, but a small amount of OA remained on the surface. A TGA measurement allowed the calculation of the grafting density of the initiator as around 3.2 initiators/nm2. Then, surface-initiated ATRP was conducted to polymerize NIPAM on the surface of the Co NPs, rendering the nanocomposites water-dispersible. A temperature-dependent dynamic light scattering study showed the aggregation behavior of PNIPAM-coated Co NPs upon heating, and this process was proven to be reversible. The combination of superparamagnetic and thermo-responsive properties in these hybrid nanoparticles is promising for future applications, e.g. in biomedicine.
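The kind of arithmetic behind a TGA-based grafting-density estimate can be sketched as follows: the organic mass fraction lost in TGA gives the ligand mass per particle, which is converted to a number of molecules and divided by the particle surface area. All input numbers below are illustrative placeholders, not the values measured in the thesis:

```python
import numpy as np

N_A = 6.022e23  # Avogadro constant, 1/mol

# Hypothetical inputs for a TGA-based grafting-density estimate
# (all values are illustrative placeholders, not measured data):
mass_loss_fraction = 0.10   # organic fraction lost in TGA
M_initiator = 250.0         # molar mass of the initiator ligand, g/mol
d_np = 10e-9                # particle diameter, m
rho_co = 8.9e6              # density of cobalt, g/m^3

core_mass = rho_co * (np.pi / 6) * d_np**3      # g per particle core
surface_nm2 = np.pi * (d_np * 1e9)**2           # nm^2 per particle
ligand_mass = core_mass * mass_loss_fraction / (1 - mass_loss_fraction)
grafting_density = ligand_mass / M_initiator * N_A / surface_nm2
print(grafting_density)  # initiators per nm^2
```

With these placeholder numbers the estimate lands at a few initiators per nm², i.e., the same order of magnitude as the 3.2 initiators/nm² reported above.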
In chapter 4, the magnetic-field-induced assembly of superparamagnetic cobalt nanoparticles both on solid substrates and at the liquid-air interface was investigated. OA- and TOPO-coated Co NPs were synthesized via the thermolysis of cobalt carbonyl and dispersed in either hexane or toluene. The Co NP dispersion was dropped onto substrates (e.g., TEM grids, silicon wafers) and onto liquid-air (water-air or ethylene glycol-air) interfaces. Owing to the attractive dipolar interaction, 1-D chains formed in the presence of an external magnetic field. The concentration and the strength of the magnetic field are known to affect the assembly behavior of superparamagnetic Co NPs, so the influence of these two parameters on the morphology of the assemblies was studied. The 1-D chains formed at lower concentrations of the Co NP dispersion or lower strengths of the external magnetic field were shorter and more flexible because of thermal fluctuation. By increasing either the concentration of the NP dispersion or the strength of the applied magnetic field, the chains became longer, thicker, and straighter. A likely reason is that a high concentration leads to a high fraction of short dipolar chains, whose interactions result in longer and thicker chains under an applied magnetic field. On the other hand, as the magnetic field increased, the induced moments of the magnetic nanoparticles became larger and dominated over thermal fluctuation; the short chains thus connected to each other and grew in length, and thicker chains were also observed through chain-chain interaction. Furthermore, the induced moments of the NPs tended to align in one direction with increasing magnetic field, so the chains became straighter.
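The competition between dipolar attraction and thermal fluctuation described above can be made quantitative with the dimensionless dipolar coupling parameter. The sketch below uses one common definition, lambda = mu0 * m^2 / (4 * pi * d^3 * kB * T) for two particles at contact, with the moment m = Ms * V; the particle sizes and the bulk saturation magnetization of cobalt used here are illustrative assumptions, not values from the thesis.

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
KB = 1.380649e-23          # Boltzmann constant, J/K

def dipolar_coupling(d_nm, Ms_kA_m, T=298.0):
    """Dipolar coupling parameter for two magnetic NPs at contact.

    lambda = mu0 * m^2 / (4 * pi * d^3 * kB * T), with m = Ms * V.
    Chain formation is typically expected for lambda >~ 2.
    """
    d = d_nm * 1e-9
    volume = math.pi / 6 * d**3
    m = Ms_kA_m * 1e3 * volume     # magnetic moment, A*m^2
    return MU0 * m**2 / (4 * math.pi * d**3 * KB * T)

# Illustrative: ~1400 kA/m is roughly the bulk Ms of cobalt.
for d_nm in (8, 12, 16):
    print(d_nm, "nm:", round(dipolar_coupling(d_nm, 1400), 1))
```

Because lambda grows as d^3 (the moment squared grows as d^6, divided by d^3), slightly larger particles chain far more readily, which is consistent with the field- and concentration-dependent morphologies described in the text.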
Comparing the assemblies on substrates, at the water-air interface, and at the ethylene glycol-air interface, the assembly of Co NPs from hexane dispersion at the ethylene glycol-air interface showed the most regular and homogeneous chain structures, because the dispersion spreads better on the ethylene glycol subphase than on the water subphase or on substrates. The magnetic-field-induced assembly of superparamagnetic nanoparticles could provide a powerful approach for applications in data storage and electronic devices.
Chapter 5 presents the synthesis of superparamagnetic and ferromagnetic cobalt nanoparticles through a dual-stage thermolysis of cobalt carbonyl (Co2(CO)8) using polystyrene as surfactant. Amine end-functionalized polystyrene surfactants with different molecular weights were prepared via the atom transfer radical polymerization technique. The molecular weights of the polystyrenes were determined by gel permeation chromatography (GPC) and matrix-assisted laser desorption/ionization time-of-flight (MALDI-ToF) mass spectrometry. The results showed that, when the molecular weight distribution is narrow (Mw/Mn < 1.2), GPC and MALDI-ToF MS provide nearly identical results. For example, a molecular weight of 10600 Da was obtained by MALDI-ToF MS, while GPC gave 10500 g/mol (Mw/Mn = 1.17). However, if the polymer has a broad molecular weight distribution, MALDI-ToF MS cannot provide an accurate value. This was exemplified by a polymer with a molecular weight of 3130 Da measured by MALDI-ToF MS, while GPC showed 2300 g/mol (Mw/Mn = 1.38). The size, size distribution, and magnetic properties of the hybrid particles varied with both the molecular weight and the concentration of the polymer surfactants. TEM characterization showed that the size of cobalt nanoparticles stabilized with polystyrene of lower molecular weight (Mn = 2300 g/mol) varied from 12–22 nm, while the size of cobalt nanoparticles coated with polystyrene of intermediate (Mn = 4500 g/mol) and higher molecular weight (Mn = 10500 g/mol) changed little. Magnetic measurements showed that the small cobalt particles (12 nm) were superparamagnetic, while larger particles (21 nm) were ferromagnetic and assembled into 1-D chains. Thermogravimetric analysis showed that polystyrene of lower molecular weight (Mn = 2300 g/mol) gave a higher grafting density than polystyrene of higher molecular weight (Mn = 10500 g/mol).
Because of its larger steric hindrance, polystyrene of higher molecular weight cannot form a dense shell on the surface of the nanoparticles, which results in a lower grafting density. Wide-angle X-ray scattering measurements revealed the ε-cobalt crystalline phase of both the superparamagnetic Co NPs coated with polystyrene (Mn = 2300 g/mol) and the ferromagnetic Co NPs coated with polystyrene (Mn = 10500 g/mol). Furthermore, a stability study showed that PS-Co NPs prepared with higher polymer concentration and higher polymer molecular weight exhibited better stability.
Distributed decision-making studies the choices made by a group of interacting, self-interested agents. Specifically, this thesis is concerned with the optimal sequence of choices an agent makes as it tries to maximize its achievement on one or multiple objectives in a dynamic environment. The optimization of distributed decision-making is important in many real-life applications, e.g., resource allocation (of products, energy, bandwidth, computing power, etc.) and robotics (heterogeneous agents cooperating on games or tasks), in fields such as vehicular networks, the Internet of Things, and smart grids.
This thesis proposes three multi-agent reinforcement learning algorithms combined with game-theoretic tools to study strategic interaction between decision makers, using resource allocation in a vehicular network as an example. Specifically, the thesis designs an interaction mechanism based on a second-price auction, incentivizes the agents to maximize multiple short-term and long-term, individual and system objectives, and simulates a dynamic environment with realistic mobility data to evaluate algorithm performance and study agent behavior.
Theoretical results show that the mechanism has Nash equilibria and yields a social-welfare-maximizing, Pareto-optimal allocation of resources in a stationary environment. Empirical results show that in the dynamic environment, our proposed learning algorithms outperform state-of-the-art algorithms in single- and multi-objective optimization and generalize very well to significantly different environments. Specifically, with the long-term multi-objective learning algorithm, we demonstrate that by considering the long-term impact of decisions, as well as by incentivizing the agents with a system fairness reward, the agents achieve better results in both individual and system objectives, even when their objectives are private, randomized, and changing over time. Moreover, the agents show competitive behavior to maximize individual payoff when resources are scarce, and cooperative behavior in achieving a system objective when resources are abundant; they also learn the rules of the game without prior knowledge, overcoming disadvantages in initial parameters (e.g., a lower budget).
To address practicality concerns, the thesis also provides several methods to improve computational performance and tests the algorithm on a single-board computer. Results show the feasibility of online training and inference within milliseconds.
There are many potential future topics following this work. 1) The interaction mechanism can be modified into a double auction, eliminating the auctioneer and resembling a completely distributed, ad hoc network. 2) The objectives are assumed to be independent in this thesis; a more realistic assumption may capture correlations between objectives, such as a hierarchy of objectives. 3) The current work limits information-sharing between agents, a setup that befits applications with privacy requirements or sparse signaling; by allowing more information-sharing between the agents, the algorithms can be modified for more cooperative scenarios such as robotics.
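The second-price rule underlying the interaction mechanism can be illustrated in a few lines. This is a generic single-item sealed-bid Vickrey auction sketch, not the thesis's multi-agent mechanism; agent names and bid values are invented for illustration.

```python
def second_price_auction(bids):
    """Single-item sealed-bid second-price (Vickrey) auction.

    bids: dict mapping agent -> bid. Returns (winner, price): the
    highest bidder wins but pays only the second-highest bid, which
    is what makes truthful bidding a dominant strategy.
    """
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]   # second-highest bid
    return winner, price

winner, price = second_price_auction({"a": 5.0, "b": 8.0, "c": 3.0})
print(winner, price)  # → b 5.0
```

Because the winner's payment does not depend on its own bid, no agent can gain by misreporting its valuation; this incentive property is what makes the mechanism attractive as a building block for learning agents.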
Towards greener stationary phases: thermoresponsive and carbonaceous chromatographic supports
(2011)
Polymers that are sensitive to external physical, chemical, and electrical stimuli are termed 'intelligent materials' and are widely used in medical and engineering applications. Polymers that undergo a physical change in water when heated past a certain temperature (the cloud point) are currently well studied in separation chemistry, gene and drug delivery, and surface modification. One example of such a polymer is poly(N-isopropylacrylamide) (PNIPAAM), which dissolves well in water below 32 °C but precipitates when the temperature is increased further. In this work, an alternative polymer, poly(2-(2-methoxyethoxy)ethyl methacrylate-co-oligo(ethylene glycol) methacrylate) (P(MEO2MA-co-OEGMA)), is studied because of its biocompatibility and the ability to vary its cloud point in water. When a layer of the temperature-responsive polymer was attached to a single continuous porous piece of silica-based material known as a monolith, the thermoresponsive characteristic was transferred to the column surfaces. The hybrid material was demonstrated to act as a simple temperature 'switch' in the separation of a mixture of five steroids in water: different analytes were separated at different column temperatures. Furthermore, more complex biochemical compounds such as proteins were also tested for separation. The importance of this work lies in separation processes that use environmentally friendly conditions, since the harsh chemical environments conventionally used to resolve biocompounds can render their biological activities inactive.
Infants' lexical processing is modulated by featural manipulations made to words, suggesting that early lexical representations are sufficiently specified to establish a match with the corresponding label. However, the precise degree of detail in early words requires further investigation due to equivocal findings. We studied this question by assessing children’s sensitivity to the degree of featural manipulation (Chapters 2 and 3), and sensitivity to the featural makeup of homorganic and heterorganic consonant clusters (Chapter 4). Gradient sensitivity on the one hand and sensitivity to homorganicity on the other hand would suggest that lexical processing makes use of sub-phonemic information, which in turn would indicate that early words contain sub-phonemic detail. The studies presented in this thesis assess children’s sensitivity to sub-phonemic detail using minimally demanding online paradigms suitable for infants: single-picture pupillometry and intermodal preferential looking. Such paradigms have the potential to uncover lexical knowledge that may be masked otherwise due to cognitive limitations. The study reported in Chapter 2 obtained a differential response in pupil dilation to the degree of featural manipulation, a result consistent with gradient sensitivity. The study reported in Chapter 3 obtained a differential response in proportion of looking time and pupil dilation to the degree of featural manipulation, a result again consistent with gradient sensitivity. The study reported in Chapter 4 obtained a differential response to the manipulation of homorganic and heterorganic consonant clusters, a result consistent with sensitivity to homorganicity. These results suggest that infants' lexical representations are not only specific, but also detailed to the extent that they contain sub-phonemic information.
Deep learning has seen widespread application in many domains, mainly owing to its ability to learn data representations from raw input data. Nevertheless, its success has so far been coupled with the availability of large annotated (labelled) datasets. This requirement is difficult to fulfil in several domains, such as medical imaging. Annotation costs form a barrier to extending deep learning to clinically relevant use cases. The labels associated with medical images are scarce, since generating expert annotations of multimodal patient data at scale is non-trivial, expensive, and time-consuming. This substantiates the need for algorithms that learn from the increasing amounts of unlabeled data. Self-supervised representation learning algorithms offer a pertinent solution, as they allow solving real-world (downstream) deep learning tasks with fewer annotations. Self-supervised approaches leverage unlabeled samples to acquire generic features about different concepts, enabling annotation-efficient downstream task solving subsequently.
Nevertheless, medical images present multiple unique and inherent challenges for existing self-supervised learning approaches, which we seek to address in this thesis: (i) medical images are multimodal, and their multiple modalities are heterogeneous in nature and imbalanced in quantities, e.g. MRI and CT; (ii) medical scans are multi-dimensional, often in 3D instead of 2D; (iii) disease patterns in medical scans are numerous, and their incidence exhibits a long-tail distribution, so it is oftentimes essential to fuse knowledge from different data modalities, e.g. genomics or clinical data, to capture disease traits more comprehensively; (iv) medical scans usually exhibit more uniform color density distributions, e.g. in dental X-rays, than natural images. Our proposed self-supervised methods meet these challenges, besides significantly reducing the amount of required annotations.
We evaluate our self-supervised methods on a wide array of medical imaging applications and tasks. Our experimental results demonstrate gains in both annotation-efficiency and performance; our proposed methods outperform many approaches from the related literature. Additionally, in the case of fusion with genetic modalities, our methods also allow for cross-modal interpretability. In this thesis, we not only show that self-supervised learning is capable of mitigating manual annotation costs, but our proposed solutions also demonstrate how to better utilize it in the medical imaging domain. Progress in self-supervised learning has the potential to extend the application of deep learning algorithms to clinical scenarios.
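A common building block of self-supervised representation learning, which many contrastive approaches share, is a loss that pulls embeddings of two augmented views of the same sample together while pushing all other embeddings apart. The sketch below implements a SimCLR-style normalized temperature-scaled cross-entropy in plain Python on toy 2-D embeddings; it illustrates the generic principle only and is not the thesis's proposed method.

```python
import math

def cosine(u, v):
    """Cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent(z1, z2, tau=0.5):
    """Normalized temperature-scaled cross-entropy (SimCLR-style).

    z1[i] and z2[i] are embeddings of two augmented views of sample i;
    each embedding must identify its partner among all others.
    """
    z = z1 + z2
    n = len(z1)
    loss = 0.0
    for i in range(2 * n):
        j = (i + n) % (2 * n)          # index of the positive pair
        denom = sum(math.exp(cosine(z[i], z[k]) / tau)
                    for k in range(2 * n) if k != i)
        pos = math.exp(cosine(z[i], z[j]) / tau)
        loss += -math.log(pos / denom)
    return loss / (2 * n)

# Well-aligned view pairs should score a lower loss than mismatched ones:
aligned = nt_xent([[1, 0], [0, 1]], [[0.9, 0.1], [0.1, 0.9]])
mixed = nt_xent([[1, 0], [0, 1]], [[0.1, 0.9], [0.9, 0.1]])
print(aligned < mixed)  # → True
```

In practice such a loss is computed on batches of high-dimensional network embeddings; the pure-Python version here only makes the pairing logic explicit.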
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high thermal loads inside the data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate our proposed techniques, we use simulation and real workload traces of web and HPC applications and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following: A proactive resource provisioning technique based on robust optimization to increase the hosts' availability for hosting new VMs while minimizing idle energy consumption. Additionally, this technique mitigates undesirable changes in the power state of the hosts, enhancing the hosts' reliability by avoiding failures during power state changes. The proposed technique exploits a range-based prediction algorithm to implement robust optimization, taking the uncertainty of demand into consideration. An adaptive range-based prediction algorithm for predicting workloads with high short-term fluctuations. The range prediction is implemented in two ways: standard deviation and median absolute deviation. The range is adjusted based on an adaptive confidence window to cope with workload fluctuations.
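The range-based prediction idea above (a point forecast widened into an interval via either the standard deviation or the median absolute deviation) can be sketched as follows. The window length, confidence factor, and choice of the window mean as the point forecast are illustrative assumptions, not the dissertation's exact algorithm.

```python
import statistics

def range_forecast(history, window=8, k=1.0, method="std"):
    """Turn a recent-window point forecast into a [low, high] range.

    The point forecast is the window mean; the spread is either the
    standard deviation or the median absolute deviation (MAD) of the
    window, scaled by a confidence factor k.
    """
    recent = history[-window:]
    point = statistics.fmean(recent)
    if method == "std":
        spread = statistics.stdev(recent) if len(recent) > 1 else 0.0
    elif method == "mad":
        med = statistics.median(recent)
        spread = statistics.median(abs(x - med) for x in recent)
    else:
        raise ValueError(method)
    return point - k * spread, point + k * spread

cpu = [30, 32, 31, 90, 33, 34, 32, 31]   # one utilization spike
low_std, high_std = range_forecast(cpu, method="std")
low_mad, high_mad = range_forecast(cpu, method="mad")
# The MAD range stays tight despite the spike; the std range blows up:
print(high_std - low_std, high_mad - low_mad)
```

The contrast shown at the end motivates offering both spread estimators: the standard deviation reacts strongly to transient spikes, while the MAD is robust to them, so the appropriate choice depends on how conservative the provisioning should be.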
A robust VM consolidation for efficient energy and performance management to achieve equilibrium between energy and performance trade-offs. Our technique reduces the number of VM migrations compared to recently proposed techniques. This also contributes to a reduction in energy consumption by the network infrastructure. Additionally, our technique reduces SLA violations and the number of power state changes. A generic model for the network of a data center to simulate the communication delay and its impact on VM performance, as well as network energy consumption. In addition, a generic model for a memory-bus of a server, including latency and energy consumption models for different memory frequencies. This allows simulating the memory delay and its influence on VM performance, as well as memory energy consumption. Communication-aware and energy-efficient consolidation for parallel applications to enable the dynamic discovery of communication patterns and reschedule VMs using migration based on the determined communication patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of network utilization of VMs instead of using the information from the hosts' virtual switches or initiation from VMs. The result shows that our proposed approach reduces the network's average utilization, achieves energy savings due to reducing the number of active switches, and provides better VM performance compared to CPU-based placement. Memory-aware VM consolidation for independent VMs, which exploits the diversity of VMs' memory access to balance memory-bus utilization of hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their utilization of a memory-bus using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory and the proposed MLB technique are combined to achieve better energy savings.
Galaxy clusters are the largest known gravitationally bound objects; their study is important both for an intrinsic understanding of these systems and for investigating the large-scale structure of the universe. The multi-component nature of galaxy clusters offers multiple observable signals across the electromagnetic spectrum. At X-ray wavelengths, galaxy clusters are identified simply as X-ray luminous, spatially extended, extragalactic sources. X-ray observations offer the most powerful technique for constructing cluster catalogues: X-ray cluster surveys have excellent purity and completeness, and the X-ray observables are tightly correlated with mass, which is the most fundamental parameter of clusters. In my thesis I have conducted the 2XMMi/SDSS galaxy cluster survey, a serendipitous search for galaxy clusters based on the X-ray extended sources in the XMM-Newton Serendipitous Source Catalogue (2XMMi-DR3). The main aims of the survey are to identify new X-ray galaxy clusters, investigate their X-ray scaling relations, identify distant cluster candidates, and study the correlation of the X-ray and optical properties. The survey is constrained to those extended sources that lie in the footprint of the Sloan Digital Sky Survey (SDSS), in order to identify the optical counterparts and to measure the redshifts that are mandatory for deriving the physical properties. The overlap area between the XMM-Newton fields and the SDSS-DR7 imaging, the latest SDSS data release at the start of the survey, is 210 deg^2. The survey comprises 1180 X-ray cluster candidates with at least 80 background-subtracted photon counts that passed the quality control process.
To measure the optical redshifts of the X-ray cluster candidates, I used three procedures. (i) Cross-matching the candidates with the recent and largest optically selected cluster catalogues in the literature yielded photometric redshifts for about a quarter of the X-ray cluster candidates. (ii) I developed a finding algorithm that searches for overdensities of galaxies at the positions of the X-ray cluster candidates in photometric redshift space and measures their redshifts from the SDSS-DR8 data; this provided photometric redshifts for 530 groups/clusters. (iii) I developed an algorithm to identify the cluster candidates associated with spectroscopically targeted Luminous Red Galaxies (LRGs) in the SDSS-DR9 and to measure the cluster spectroscopic redshift; this provided 324 groups and clusters with spectroscopic confirmation based on the spectroscopic redshift of at least one LRG. In total, the optically confirmed cluster sample comprises 574 groups and clusters with redshifts 0.03 ≤ z ≤ 0.77, which is the largest X-ray selected cluster catalogue to date based on observations from the current X-ray observatories (XMM-Newton, Chandra, Suzaku, and Swift/XRT). Among the cluster sample, about 75 percent are newly X-ray discovered groups/clusters and 40 percent are systems new to the literature. To determine the X-ray properties of the optically confirmed cluster sample, I reduced and analysed their X-ray data in an automated way following the standard pipelines for processing XMM-Newton data. In this analysis, I extracted the cluster spectra from EPIC (PN, MOS1, MOS2) images within an optimal aperture chosen to maximise the signal-to-noise ratio. The spectral fitting procedure provided X-ray temperatures kT (0.5 - 7.5 keV) for 345 systems that have good-quality X-ray data.
For the full optically confirmed cluster sample, I measured the physical properties L500 (0.5 x 10^42 - 1.2 x 10^45 erg s^-1) and M500 (1.1 x 10^13 - 4.9 x 10^14 M⊙) from an iterative procedure using published scaling relations. The present X-ray detected groups and clusters are in the low and intermediate luminosity regimes, apart from a few luminous systems, thanks to the XMM-Newton sensitivity and the available XMM-Newton deep fields. The optically confirmed cluster sample with measurements of redshift and X-ray properties can be used for various astrophysical applications. As a first application, I investigated the LX - T relation, for the first time based on a large sample of 345 systems with X-ray spectroscopic parameters drawn from a single survey. The current sample includes groups and clusters with wide ranges of redshift, temperature, and luminosity. The slope of the relation is consistent with published values for nearby clusters with higher temperatures and luminosities. The derived relation is still much steeper than that predicted by self-similar evolution. I also investigated the evolution of the slope and the scatter of the LX - T relation with cluster redshift. After excluding the low-luminosity groups, I found no significant changes in the slope and the intrinsic scatter of the relation with redshift when dividing the sample into three redshift bins. When including the low-luminosity groups in the low-redshift subsample, its LX - T relation becomes steeper than the relation of the intermediate- and high-redshift subsamples. As a second application of the optically confirmed cluster sample from our ongoing survey, I investigated the correlation between the cluster X-ray and optical parameters, which were determined in a homogeneous way. Firstly, I investigated the correlations between the BCG properties (absolute magnitude and optical luminosity) and the cluster global properties (redshift and mass).
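The slope of an LX - T scaling relation is conventionally obtained from a linear fit in log-log space, since a power law L ∝ T^b becomes a straight line there. Below is a minimal ordinary-least-squares sketch on synthetic clusters; the real analysis uses regression methods that account for intrinsic scatter and measurement errors on both axes, which this toy deliberately omits, and the input sample here is invented.

```python
import math
import random

def fit_power_law(T, L):
    """OLS fit of log10(L) = a + b * log10(T); returns (a, b).

    b is the slope of the scaling relation; self-similarity predicts
    b = 2 for LX - T, while observed relations are steeper (~3).
    """
    x = [math.log10(t) for t in T]
    y = [math.log10(l) for l in L]
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Synthetic clusters: L = 1e43 * T^3 with 0.1 dex lognormal scatter.
rng = random.Random(42)
T = [0.5 + 7.0 * rng.random() for _ in range(200)]
L = [1e43 * t ** 3 * 10 ** rng.gauss(0, 0.1) for t in T]
a, b = fit_power_law(T, L)
print(round(b, 2))   # recovered slope, close to the input value 3
```

With enough points the fit recovers the input slope of 3 to within a few percent; the same machinery applied to real kT and L500 measurements yields the relation's observed slope and intercept.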
Secondly, I computed the richness and the optical luminosity within R500 for a nearby subsample (z ≤ 0.42, with complete membership detection from the SDSS data) with measured X-ray temperatures from our survey. The relation between the estimated optical luminosity and the richness is also presented. Finally, the correlations between the cluster optical properties (richness and luminosity) and the cluster global properties (X-ray luminosity, temperature, and mass) are investigated.
The demand for learning Design Thinking (DT) as a path towards acquiring 21st-century skills has increased globally in the last decade. Because DT education originated in the Silicon Valley context of the d.school at Stanford, it is important to evaluate how the teaching of the methodology adapts to different cultural contexts. The thesis explores the impact of the socio-cultural context on DT education.
DT institutes in Cape Town, South Africa, and Kuala Lumpur, Malaysia, were visited to observe their programs and conduct 22 semi-structured interviews with local educators regarding their adaptation strategies. Grounded theory methodology was used to develop a model of Socio-Cultural Adaptation of Design Thinking Education that maps these strategies onto five dimensions: Planning, Process, People, Place, and Presentation. Based on this model, a list of recommendations is provided to help DT educators and practitioners design and deliver culturally inclusive DT education.
This work presents a new design for programming environments that promote the exploration of domain-specific software artifacts and the construction of graphical tools for such program comprehension tasks. In complex software projects, tool building is essential because domain- or task-specific tools can support decision making by representing concerns concisely with low cognitive effort. In contrast, generic tools can only support anticipated scenarios, which usually align with programming language concepts or well-known project domains.
However, the creation and modification of interactive tools is expensive because the glue that connects data to graphics is hard to find, change, and test. Even if valuable data is available in a common format and even if promising visualizations could be populated, programmers have to invest many resources to make changes in the programming environment. Consequently, only ideas of predictably high value will be implemented. In the non-graphical, command-line world, the situation looks different and inspiring: programmers can easily build their own tools as shell scripts by configuring and combining filter programs to process data.
We propose a new perspective on graphical tools and provide a concept to build and modify such tools with a focus on high quality, low effort, and continuous adaptability. That is, (1) we propose an object-oriented, data-driven, declarative scripting language that reduces the amount of and governs the effects of glue code for view-model specifications, and (2) we propose a scalable UI-design language that promotes short feedback loops in an interactive, graphical environment such as Morphic known from Self or Squeak/Smalltalk systems.
We implemented our concept as a tool building environment, which we call VIVIDE, on top of Squeak/Smalltalk and Morphic. We replaced existing code browsing and debugging tools to iterate within our solution more quickly. In several case studies with undergraduate and graduate students, we observed that VIVIDE can be applied to many domains such as live language development, source-code versioning, modular code browsing, and multi-language debugging. Then, we designed a controlled experiment to measure the effect on the time to build tools. Several pilot runs showed that training is crucial and, presumably, takes days or weeks, which implies a need for further research.
As a result, programmers as users can directly work with tangible representations of their software artifacts in the VIVIDE environment. Tool builders can write domain-specific scripts to populate views to approach comprehension tasks from different angles. Our novel perspective on graphical tools can inspire the creation of new trade-offs in modularity for both data providers and view designers.
The spread of shrubs in Namibian savannas raises questions about the resilience of these ecosystems to global change. This makes it necessary to understand the past dynamics of the vegetation, since there is no consensus on whether shrub encroachment is a new phenomenon, nor on its main drivers. However, the lack of long-term vegetation datasets for the region and the scarcity of suitable palaeoecological archives make reconstructing the past vegetation and land cover of the savannas a challenge.
To help meet this challenge, this study addresses three main research questions: 1) Is pollen analysis a suitable tool to reflect the vegetation change associated with shrub encroachment in savanna environments? 2) Does the current encroached landscape correspond to an alternative stable state of savanna vegetation? 3) To what extent do pollen-based quantitative vegetation reconstructions reflect changes in past land cover?
The research focuses on north-central Namibia, where, although it is the region most affected by shrub invasion, particularly since the beginning of the 21st century, little is known about the dynamics of this phenomenon.
Field-based vegetation data were compared with modern pollen data to assess their correspondence in terms of composition and diversity along precipitation and grazing intensity gradients. In addition, two sediment cores from Lake Otjikoto were analysed to reveal changes in vegetation composition that have occurred in the region over the past 170 years and their possible drivers. For this, a multiproxy approach (fossil pollen, sedimentary ancient DNA (sedaDNA), biomarkers, compound specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) was applied at high taxonomic and temporal resolution. REVEALS modelling of the fossil pollen record from Lake Otjikoto was run to quantitatively reconstruct past vegetation cover. For this, we first made pollen productivity estimates (PPE) of the most relevant savanna taxa in the region using the extended R-value model and two pollen dispersal options (Gaussian plume model and Lagrangian stochastic model). The REVEALS-based vegetation reconstruction was then validated using remote sensing-based regional vegetation data.
The results show that modern pollen reflects the composition of the vegetation well, but diversity less well. Interestingly, precipitation and grazing explain a significant amount of the compositional change in the pollen and vegetation spectra. The multiproxy record shows that a state change from open Combretum woodland to encroached Terminalia shrubland can occur within a century, and that the transition between states spans around 80 years and is characterized by a unique vegetation composition. This transition is supported by gradual environmental changes induced by management (i.e. broad-scale logging for the mining industry, selective grazing, and reduced fire activity associated with intensified farming) and related land-use change. Derived environmental changes (i.e. reduced soil moisture, reduced grass cover, changes in species composition and competitiveness, reduced fire intensity) may have affected the resilience of open Combretum woodlands, making them more susceptible to shifting to an encroached state through stochastic events, such as consecutive wet or drought years, and through elevated pCO2. We assume that the resulting encroached state was further stabilized by feedback mechanisms that favour the establishment and competitiveness of woody vegetation.
The REVEALS-based quantitative estimates of plant taxa indicate the predominance of a semi-open landscape throughout the 20th century and a reduction in grass cover below 50% since the 21st century associated with the spread of encroacher woody taxa. Cover estimates show a close match with regional vegetation data, providing support for the vegetation dynamics inferred from multiproxy analyses. Reasonable PPEs were made for all woody taxa, but not for Poaceae.
In conclusion, pollen analysis is a suitable tool to reconstruct past vegetation dynamics in savannas. However, because pollen cannot identify grasses beyond family level, a multiproxy approach, particularly the use of sedaDNA, is required. I was able to separate stable encroached states from mere woodland phases, and could identify drivers and speculate about related feedbacks. In addition, the REVEALS-based quantitative vegetation reconstruction clearly reflects the magnitude of the changes in the vegetation cover that occurred during the last 130 years, despite the limitations of some PPEs.
This research provides new insights into pollen-vegetation relationships in savannas and highlights the importance of multiproxy approaches when reconstructing past vegetation dynamics in semi-arid environments. It also provides the first time series with sufficient taxonomic resolution to show changes in vegetation composition during shrub encroachment, as well as the first quantitative reconstruction of past land cover in the region. These results help to identify the different stages in savanna dynamics and can be used to calibrate predictive models of vegetation change, which are highly relevant to land management.
In this work, the development of temperature- and protein-responsive sensor materials based on biocompatible inverse hydrogel opals (IHOs) is presented. With these materials, large biomolecules can be specifically recognised and the binding event visualised. The IHOs were prepared by a template process, for which monodisperse silica particles were vertically deposited onto glass slides as the first step. The obtained colloidal crystals, with a thickness of 5 μm, displayed opalescent reflections because of the uniform alignment of the colloids. In a second step, the template was embedded in a matrix consisting of biocompatible, thermoresponsive hydrogels. The comonomers were selected from the family of oligo(ethylene glycol) methacrylates. The monomer solution was injected into a polymerisation mould containing the colloidal crystals as a template. The space in between the template particles was filled with the monomer solution, and the hydrogel was cured via UV polymerisation. The particles were then chemically etched, which resulted in a porous inner structure. The uniform alignment of the pores, and therefore the opalescent reflection, was maintained, so these systems were denoted inverse hydrogel opals. A pore diameter of several hundred nanometres as well as interconnections between the pores should facilitate the diffusion of larger (bio)molecules, which has long been a challenge in such systems. The copolymer composition was chosen to result in a hydrogel collapse above 35 °C. All hydrogels showed pronounced swelling in water below the critical temperature. The incorporation of a reactive monomer bearing hydroxyl groups provided a coupling site for the introduction of recognition units for analytes, e.g. proteins. As a test system, biotin as a recognition unit for avidin was coupled to the IHO via polymer-analogous Steglich esterification. The amount of accessible biotin was quantified with a colorimetric binding assay.
When avidin was added to the biotinylated IHO, the wavelength of the opalescent reflection shifted significantly, and the binding event was thereby visualised. This effect is based on the change in the swelling behaviour of the hydrogel after binding of the hydrophilic avidin, which is amplified by the thermoresponsive nature of the hydrogel. Swelling or shrinking of the pores changes the spacing of the crystal planes, which determines the colour of the reflection. These findings open up the possibility of creating sensor materials for further biomolecules in the size range of avidin.
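The relation between plane spacing and reflection colour described above follows the standard Bragg–Snell condition for the (111) planes of an opal, λ = 2·d₁₁₁·√(n_eff² − sin²θ). The sketch below illustrates how a swelling-induced change of a few tens of nanometres in plane spacing shifts the reflection peak by a visible amount; the spacings and effective refractive indices are illustrative assumptions, not values measured in this work:

```python
# Bragg-Snell relation for the (111) planes of an (inverse) opal:
#   lambda = 2 * d111 * sqrt(n_eff**2 - sin(theta)**2)
# All numbers below are hypothetical, for illustration only.
import math

def reflection_wavelength(d111_nm, n_eff, theta_deg=0.0):
    """Peak reflection wavelength (nm) for plane spacing d111 at angle theta."""
    sin_t = math.sin(math.radians(theta_deg))
    return 2.0 * d111_nm * math.sqrt(n_eff**2 - sin_t**2)

# Hypothetical swollen vs. collapsed hydrogel: the plane spacing shrinks on
# collapse while the effective refractive index rises as water is expelled.
swollen = reflection_wavelength(d111_nm=220.0, n_eff=1.35)    # 594 nm
collapsed = reflection_wavelength(d111_nm=190.0, n_eff=1.40)  # 532 nm
print(f"reflection shift: {swollen - collapsed:.0f} nm")      # 62 nm, green-to-red scale shift
```

A shift of this size is well within what the eye resolves, which is why a binding-induced swelling change can be read out as a colour change.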
Writing travel, writing life
(2022)
The book compares the texts of three Swiss authors: Ella Maillart, Annemarie Schwarzenbach and Nicolas Bouvier. The focus is on the trip from Genève to Kabul that Ella Maillart and Annemarie Schwarzenbach made together in 1939/1940 and that Nicolas Bouvier made in 1953/1954 with the artist Thierry Vernet. The comparison shows the strong connection between the journey and life, and between ars vivendi and travel literature.
This book also gives an overview of and organises the numerous terms, genres, and categories that already exist to describe various travel texts and proposes the new term travelling narration. The travelling narration looks at the text from a narratological perspective that distinguishes the author, narrator, and protagonist within the narration.
In the examination, ten motifs were found to characterise the travelling narration: Culture, Crossing Borders, Freedom, Time and Space, the Aesthetics of Landscapes, Writing and Reading, the Self and/as the Other, Home, Religion and Spirituality, as well as the Journey. The significance of each individual motif applies not only to the 1930s or 1950s but also conveys important insights for living together today and in the future.
This thesis is focused on the electronic, spin-dependent and dynamical properties of thin magnetic systems. Photoemission-related techniques are combined with synchrotron radiation to study the spin-dependent properties of these systems in the energy and time domains. In the first part of this thesis, the strength of electron correlation effects in the spin-dependent electronic structure of ferromagnetic bcc Fe(110) and hcp Co(0001) is investigated by means of spin- and angle-resolved photoemission spectroscopy. The experimental results are compared to theoretical calculations within the three-body scattering approximation and within the dynamical mean-field theory, together with one-step model calculations of the photoemission process. This comparison demonstrates that present state-of-the-art many-body calculations, although improving the description of correlation effects in Fe and Co, yield too small mass renormalizations and scattering rates, thus demanding more refined many-body theories including nonlocal fluctuations. In the second part, it is shown in detail, monitored by photoelectron spectroscopy, how graphene can be grown by chemical vapour deposition on the transition-metal surfaces Ni(111) and Co(0001) and intercalated with a monoatomic layer of Au. For both systems, a linear E(k) dispersion of massless Dirac fermions is observed in the graphene pi-band in the vicinity of the Fermi energy. Spin-resolved photoemission from the graphene pi-band shows that the ferromagnetic polarization of graphene/Ni(111) and graphene/Co(0001) is negligible, and that the graphene pi-band on Ni(111) is, after intercalation of Au, spin-orbit split by the Rashba effect. In the last part, a time-resolved x-ray magnetic circular dichroism photoelectron emission microscopy study of a permalloy platelet comprising three cross-tie domain walls is presented.
It is shown how a fast picosecond magnetic response in the precessional motion of the magnetization can be induced by means of a laser-excited photoswitch. From a comparison to micromagnetic calculations it is demonstrated that the relatively high precessional frequency observed in the experiments is directly linked to the nature of the vortex/antivortex dynamics and its response to the magnetic perturbation. This includes the time-dependent reversal of the vortex core polarization, a process which is beyond the limit of detection in the present experiments.
Magnetorotational instability (MRI) is one of the most important and most common instabilities in astrophysics. Today it is widely accepted that it serves as a major source of turbulent viscosity in accretion disks, the most energy-efficient objects in the universe. The importance of the MRI for astrophysics has been realized only in the last fifteen years. Originally, however, it was discovered much earlier, in 1959, in a very different context: the flow of a conducting liquid confined between differentially rotating cylinders in the presence of an external magnetic field was analyzed theoretically. The central conclusion was that an additional magnetic field parallel to the axis of rotation can destabilize an otherwise stable flow. The theory of non-magnetized fluid motion between rotating cylinders has a much longer history, though: it was studied as early as 1888, and today such a setup is usually referred to as Taylor-Couette flow. To prove experimentally the existence of the MRI in a magnetized Taylor-Couette flow is a demanding task, and different MHD groups around the world are trying to achieve it. The main problem lies in the fact that the laboratory liquid metals used in such experiments are characterized by a small magnetic Prandtl number. Consequently, the rotation rates of the cylinders must be extremely large, and a vast number of technical problems emerges. One of the most important difficulties is the influence of the plates enclosing the cylinders in any experiment. For fast rotation the plates tend to dominate the whole flow and the MRI cannot be observed. In this thesis we discuss a special helical configuration of the applied magnetic field which allows the critical rotation rates to be much smaller. If only the axial magnetic field is present, the cylinders must rotate with angular velocities corresponding to Reynolds numbers of order Re ≈ 10^6. With the helical field this number is dramatically reduced to Re ≈ 10^3.
The azimuthal component of the magnetic field can easily be generated by passing an electric current along the axis of rotation. In a Taylor-Couette flow the (primary) instability manifests itself as Taylor vortices. The specific geometry of the helical magnetic field leads to a travelling-wave solution, and the vortices drift in a direction determined by the rotation and the magnetic field. In an idealized study for infinitely long cylinders this is not a problem. However, if the cylinders have finite length and are bounded vertically by plates, the situation is different. In this dissertation it is shown, with the use of numerical methods, that the travelling-wave solution also exists for MHD Taylor-Couette flow at finite aspect ratio H/D, H being the height of the cylinders and D the width of the gap between them. The nonlinear simulations provide amplitudes of the fluid velocity which are helpful in designing an experiment. Although the plates disturb the flow, parameters like the drift velocity indicate that the helical MRI operates in this case. The idea of the helical MRI was implemented in the very recent PROMISE experiment. The results provided, for the first time, evidence that the (helical) MRI indeed exists. Nevertheless, the influence of the vertical endplates was evident, and the experiment can, in principle, be improved. Exemplary methods for reducing the end effects are proposed here. An Ekman-Hartmann layer develops near the vertical boundaries. A study of this layer for the MHD Taylor-Couette system, as well as of its impact on the global flow properties, is presented. It is shown that the plates, especially if they are conducting, can disturb the flow far more than previously thought, even at relatively slow rotation rates.
Savannas cover a broad geographical range across continents and are a biome best described by a mix of herbaceous and woody plants: the former create a more or less continuous layer, while the latter are sparse enough to leave an open canopy. What has long intrigued ecologists is how these two competing plant life forms coexist.
Initially attributed to resource competition, coexistence was considered the stable outcome of a root niche differentiation between trees and grasses. The importance of environmental factors became evident later, when data from moister environments demonstrated that tree cover was often lower than what the rainfall conditions would allow for. Our current understanding relies on the interaction of competition and disturbances in space and time. Hence, the influence of grazing and fire and the corresponding feedbacks they generate have been keenly investigated. Grazing removes grass cover, initiating a self-reinforcing process propagating tree cover expansion. This is known as the encroachment phenomenon. Fire, on the other hand, imposes a bottleneck on the tree population by halting the recruitment of young trees into adulthood. Since grasses fuel fires, a feedback linking grazing, grass cover, fire, and tree cover is created. In African savannas, which are the focus of this dissertation, these feedbacks play a major role in the dynamics.
The importance of these feedbacks came into sharp focus when the notion of alternative states began to be applied to savannas. Alternative states in ecology arise when different states of an ecosystem can occur under the same conditions. Accordingly, an open savanna and a tree-dominated savanna can be classified as alternative states, since they can both occur under the same climatic conditions. The aforementioned feedbacks are critical in the creation of alternative states. The grass-fire feedback can preserve an open canopy as long as fire intensity and frequency remain above a certain threshold. Conversely, crossing a grazing threshold can force an open savanna to shift to a tree-dominated state. Critically, transitions between such alternative states can produce hysteresis, where a return to pre-transition conditions will not suffice to restore the ecosystem to its original state.
In the chapters that follow, I will cover aspects relating to the coexistence mechanisms and the role of feedbacks in tree-grass interactions. Coming back to the coexistence question, due to the overwhelming focus on competition and disturbance another important ecological process was neglected: facilitation. Therefore, in the first study within this dissertation I examine how facilitation can expand the tree-grass coexistence range into drier conditions. For the second study I focus on another aspect of savanna dynamics which remains underrepresented in the literature: the impacts of inter-annual rainfall variability upon savanna trees and the resilience of the savanna state. In the third and final study within this dissertation I approach the well-researched encroachment phenomenon from a new perspective: I search for an early warning indicator of the process to be used as a prevention tool for savanna conservation. In order to perform all this work I developed a mathematical ecohydrological model of Ordinary Differential Equations (ODEs) with three variables: soil moisture content, grass cover and tree cover.
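A model of this general type can be sketched in a few lines. The functional forms and parameter values below are illustrative assumptions for a generic savanna model (grass improving infiltration, grass fuelling fire), not the actual equations of this dissertation:

```python
# Minimal three-variable savanna sketch: soil moisture M, grass cover G,
# tree cover T, integrated with a simple forward-Euler scheme.
# All functional forms and parameters are hypothetical illustrations.
def step(M, G, T, rain=0.8, dt=0.01):
    infiltration = 0.5 + 0.5 * G   # grasses improve water infiltration
    fire = 0.3 * G                 # grass fuels fire (grass-fire feedback)
    dM = rain * infiltration - 0.6 * M - 0.4 * (G + T) * M   # moisture balance
    dG = 0.9 * M * G * (1 - G - T) - 0.15 * G                # grass growth/mortality
    dT = 0.5 * M * T * (1 - T) - 0.05 * T - fire * T         # tree growth, fire loss
    return M + dM * dt, G + dG * dt, T + dT * dt

def run(M=0.5, G=0.1, T=0.1, steps=20000):
    """Integrate the toy model forward and return the final state."""
    for _ in range(steps):
        M, G, T = step(M, G, T)
    return M, G, T

M, G, T = run()
print(f"final state: M={M:.2f}, G={G:.2f}, T={T:.2f}")
```

Already in this toy form, the grass-fire and infiltration feedbacks couple the three equations in the way the text describes; grazing or rainfall-variability scenarios would enter as modifications of the grass-loss term and the `rain` input, respectively.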
Facilitation: Results showed that the removal of grass cover through grazing was detrimental to trees under arid conditions, contrary to expectation based on resource competition. The reason was that grasses preserved moisture in the soil through infiltration and shading, thus ameliorating the harsh conditions for trees in accordance with the Stress Gradient Hypothesis. The exclusion of grasses from the model further demonstrated this: tree cover was lower in the absence of grasses, indicating that the benefits of grass facilitation outweighed the costs of grass competition for trees. Thus, facilitation expanded the climatic range where savannas persisted into drier conditions.
Rainfall variability: By adjusting the model to current rainfall patterns in East Africa, I simulated conditions of increasing inter-annual rainfall variability for two distinct mean rainfall scenarios: semi-arid and mesic. Alternative states of tree-less grassland and tree-dominated savanna emerged in both cases. Increasing variability reduced semi-arid savanna tree cover to the point that at high variability the savanna state was eliminated, because variability intensified resource competition and strengthened the fire disturbance during high rainfall years. Mesic savannas, on the other hand, became more resilient along the variability gradient: increasing rainfall variability created more opportunities for the rapid growth of trees to overcome the fire disturbance, boosting the chances of savannas persisting and thus increasing mesic savanna resilience.
Preventing encroachment: The breakdown in the grass-fire feedback caused by heavy grazing promoted the expansion of woody cover. This could be irreversible due to the presence of alternative states of encroached and open savanna, which I found along a simulated grazing gradient. When I simulated different short term heavy grazing treatments followed by a reduction to the original grazing conditions, certain cases converged to the encroached state. Utilising woody cover changes only during the heavy grazing treatment, I developed an early warning indicator which identified these cases with a high risk of such hysteresis and successfully distinguished them from those with a low risk. Furthermore, after validating the indicator on encroachment data, I demonstrated that it appeared early enough for encroachment to be prevented through realistic grazing-reduction treatments.
Though this dissertation is rooted in the theory of savanna dynamics, its results can have significant applications in savanna conservation. Facilitation has only recently become a topic of interest within the savanna literature. Given the threat of increasing droughts and the general anticipation of drier conditions in parts of Africa, insights stemming from this research may provide clues for preserving arid savannas. The impacts of rainfall variability on savannas have not yet been thoroughly studied either, and conflicting results appear owing to the lack of a robust theoretical understanding of plant interactions under variable conditions. My work and other recent studies argue that such conditions may increase the importance of fast resource acquisition, creating a 'temporal niche'. Woody encroachment has been extensively studied as a phenomenon, though not from the perspective of its early identification and prevention. The development of an encroachment forecasting tool, such as the one presented in this work, could protect both the savanna biome and the societies dependent upon it for (economic) survival. All the studies which follow are bound by the attempt to broaden the horizons of savanna-related research in order to deal with extreme conditions and phenomena, be it through the enhancement of the coexistence debate, the study of an imminent external threat, or the development of a management-oriented tool for the conservation of savannas.
The potential increase in frequency and magnitude of extreme floods is currently discussed in terms of global warming and the intensification of the hydrological cycle. Profound knowledge of the past natural variability of floods is of utmost importance in order to assess flood risk for the future. Since instrumental flood series cover only the last ~150 years, other approaches are needed to reconstruct historical and pre-historical flood events. Annually laminated (varved) lake sediments are meaningful natural geoarchives because they provide continuous records of environmental changes over more than 10000 years down to a seasonal resolution. Since lake basins additionally act as natural sediment traps, the riverine sediment supply, which is preserved as detrital event layers in the lake sediments, can be used as a proxy for extreme discharge events. Within my thesis I examined a ~8.50 m long sedimentary record from the pre-Alpine Lake Mondsee (Northeast European Alps), which covers the last 7000 years. This sediment record consists of calcite varves and intercalated detrital layers, which range in thickness from 0.05 to 32 mm. Detrital layer deposition was analysed by a combined method of microfacies analysis via thin sections, scanning electron microscopy (SEM), μX-ray fluorescence (μXRF) scanning and magnetic susceptibility. This approach allows individual detrital event layers to be characterized and assigned to a corresponding input mechanism and catchment. Based on a chronology established by varve counting and controlled by 14C age dates, the main goals of this thesis are (i) to identify seasonal runoff processes which lead to significant sediment supply from the catchment into the lake basin, and (ii) to investigate flood frequency under changing climate boundary conditions. This thesis follows a line of different time slices, presenting an integrative approach linking instrumental and historical flood data from Lake Mondsee in order to evaluate the flood record inferred from the Lake Mondsee sediments.
The investigation of eleven short cores covering the last 100 years reveals the presence of 12 detrital layers. Two types of detrital layers are distinguished by grain size, geochemical composition and distribution pattern within the lake basin. Detrital layers enriched in siliciclastic and dolomitic material indicate sediment supply from the Flysch sediments and the Northern Calcareous Alps into the lake basin. These layers are thicker in the northern lake basin (0.1-3.9 mm) and thinner in the southern lake basin (0.05-1.6 mm). Detrital layers enriched in dolomitic components, forming graded detrital layers (turbidites), indicate provenance from the Northern Calcareous Alps. These layers are generally thicker (0.65-32 mm) and are recorded solely within the southern lake basin. Comparison with instrumental data shows that thicker graded layers result from local debris-flow events in summer, whereas thin layers are deposited during regional flood events in spring/summer. Extreme summer floods recorded as flood layers are principally caused by cyclonic activity from the Mediterranean Sea, e.g. in July 1954, July 1997 and August 2002. During the last two millennia, the Lake Mondsee sediments reveal two significant flood intervals with decadal-scale flood episodes, during the Dark Ages Cold Period (DACP) and at the transition from the Medieval Climate Anomaly (MCA) into the Little Ice Age (LIA), suggesting a link between transitions to cooler climate and summer flood recurrence in the Northeastern Alps. In contrast, intermediate or decreased flood activity occurred during the MCA and the LIA. This indicates a non-straightforward relationship between temperature and flood recurrence, suggesting higher cyclonic activity during climate transitions in the Northeastern Alps.
The 7000-year flood chronology reveals 47 debris flows and 269 floods, with shifts to increased flood activity around 3500 and 1500 varve yr BP (varve yr BP = varve years before present, present = AD 1950). This significant increase in flood activity coincides with millennial-scale climate cooling reported from major Alpine glacier advances and lower tree lines in the European Alps since about 3300 cal. yr BP (calibrated years before present). Despite relatively low flood occurrence prior to 1500 varve yr BP, floods at Lake Mondsee could also have influenced human life in the early Neolithic lake dwellings (5750-4750 cal. yr BP). While the first lake dwellings were constructed on wetlands, the later lake dwellings were built on piles in the water, suggesting an early flood-risk adaptation by humans and/or a general change of the Late Neolithic culture of lake-dwellers for socio-economic reasons. However, a direct relationship between the final abandonment of the lake dwellings and higher flood frequencies is not evidenced.
We calculate the additional carbon emissions resulting from the conversion of natural land during urbanisation, and the change in carbon flows by "urbanised" ecosystems, when atmospheric carbon is exported to the neighbouring territories, from 1980 until 2050 for eight regions of the world. As a scenario we use combined UN and demographic-model prognoses for regional total and urban population growth. The calculations of urban-area dynamics are based on two models: a regression model and a Gamma-model. The urbanised area is subdivided into built-up areas, "green" areas (parks, etc.) and informal settlements (favelas). The next step is to calculate the regional and world dynamics of carbon emission and export, and the annual total carbon balance. Both models give similar results with some quantitative differences. In the first model, world annual emissions attain a maximum of 205 MtC/year between 2020 and 2030 and then slowly decrease; the largest contributions come from China and the Asia and Pacific regions. In the second model, world annual emissions increase to 1.25 GtC in 2005 and begin to decrease afterwards. If we compare this emission maximum with the annual emission caused by deforestation, 1.36 GtC per year, we can say that the role of urbanised territories (UT) is of comparable magnitude. Regarding the world annual export of carbon by UT, we observe a monotonous threefold growth, from 24 MtC to 66 MtC in the first model, and from 249 MtC to 505 MtC in the second. The latter is therefore comparable to the amount of carbon transported by rivers into the ocean (196-537 MtC). By estimating the total balance we find that urbanisation shifts the total balance towards a "sink" state. Urbanisation slows in the interval 2020-2030, and by 2050 the growth of urbanised areas has almost stopped. Hence, the total emission of natural carbon at that stage will stabilise at the level of the 1980s (80 MtC per year).
As estimated by the second model, the total balance, being almost constant until 2000, then starts to decrease at an almost constant rate. We can say that by the end of the 21st century the total carbon balance will be equal to zero, when the exchange flows are fully balanced, and may even become negative, when the system begins to take up carbon from the atmosphere, i.e., becomes a "sink".
Objective: The behaviors of endothelial cells and mesenchymal stem cells are remarkably influenced by the mechanical properties of their surrounding microenvironments. Here, electrospun fiber meshes with various mechanical characteristics were developed from polyetheresterurethane (PEEU) copolymers. The goal of this study was to explore how fiber mesh stiffness affects the shape, growth, migration, and angiogenic potential of endothelial cells. Furthermore, the effect of the E-modulus of the fiber meshes on the osteogenic potential of human adipose-derived stem cells (hADSCs) was investigated.
Methods: Polyetheresterurethane (PEEU) polymers with various poly(p-dioxanone) (PPDO) to poly(ε-caprolactone) (PCL) weight percentages (40 wt.%, 50 wt.%, 60 wt.%, and 70 wt.%) were synthesized, termed PEEU40, PEEU50, PEEU60, and PEEU70, accordingly. The electrospinning method was used to prepare the PEEU fiber meshes. The effects of PEEU fiber meshes of varying elasticity on the shape, growth, migration and angiogenic potential of human umbilical vein endothelial cells (HUVECs) were characterized. To determine how the E-modulus of the fiber meshes affects the osteogenic potential of hADSCs, the cellular and nuclear morphologies and osteogenic differentiation abilities were evaluated.
Results: With increasing stiffness of the PEEU fiber meshes, the aspect ratios of HUVECs cultivated on the PEEU materials increased. HUVECs cultivated on the stiffer fiber meshes (4.5 ± 0.8 MPa) displayed a considerably greater proliferation rate and migratory velocity, in addition to demonstrating increased tube-formation capability, compared with cells cultivated on the softer fiber meshes (2.6 ± 0.8 MPa). Furthermore, hADSCs adhering to the stiffest fiber meshes (PEEU70) had an elongated shape in comparison to those cultivated on the softer fiber meshes. The hADSCs grown on the softer PEEU40 fiber meshes showed a lower nuclear aspect ratio (width to height) than those cultivated on the stiffer fiber meshes. Culturing hADSCs on stiffer fibers improved their osteogenic differentiation potential: compared with cells cultured on PEEU40, osteocalcin expression and alkaline phosphatase (ALP) activity increased by 73 ± 10% and 43 ± 16%, respectively, in cells cultured on PEEU70.
Conclusion: The mechanical characteristics of the substrate are crucial in the modulation of cell behaviors. These findings indicate that adjusting the elasticity of fiber meshes might be a useful method for controlling blood vessel development and regeneration. Furthermore, the mechanical characteristics of PEEU fiber meshes might be modified to control the osteogenic potential of hADSCs.
Organic solar cells (OSCs) represent a new generation of solar cells with a range of captivating attributes including low-cost, light-weight, aesthetically pleasing appearance, and flexibility. Different from traditional silicon solar cells, the photon-electron conversion in OSCs is usually accomplished in an active layer formed by blending two kinds of organic molecules (donor and acceptor) with different energy levels together.
The first part of this thesis focuses on a better understanding of the role of the energetic offset and of each recombination channel in the performance of low-offset OSCs. By combining advanced experimental techniques with optical and electrical simulations of the energetic offsets between CT states and excitons, several important insights were achieved: 1. The short-circuit current density and fill factor of low-offset systems are largely determined by field-dependent charge generation. Notably, there is strong evidence that this field-dependent charge generation originates from a field-dependent exciton dissociation yield. 2. The reduced energetic offset was found to be accompanied by a strongly enhanced bimolecular recombination coefficient, which cannot be explained solely by exciton repopulation from CT states. This implies the existence of another dark decay channel apart from the CT states.
The second focus of the thesis is on the technical perspective: the influence of optical artifacts in differential absorption spectroscopy upon changes of sample configuration and active-layer thickness was studied. It is exemplified and discussed, thoroughly and systematically, in terms of optical simulations and experiments, how optical artifacts originating from non-uniform carrier profiles and interference can distort not only the measured spectra but also the decay dynamics under various measurement conditions. At the end of this study, a generalized methodology based on an inverse optical transfer-matrix formalism is provided to correct spectra and decay dynamics affected by optical artifacts.
Overall, this thesis paves the way for a deeper understanding of the keys toward higher PCEs in low-offset OSC devices, from the perspectives of both device physics and characterization techniques.
The solar tachocline is a thin transition layer between the solar radiative zone, which rotates uniformly, and the solar convection zone, which has a mainly latitudinal differential rotation profile. This layer has a thickness of less than 0.05 solar radii and is subject to extreme radial as well as latitudinal shears. Helioseismological estimates place this layer at roughly 0.7 solar radii. The tachocline mostly resides in the sub-adiabatic, non-turbulent radiative interior, except for a small overlap with the convection zone at the top. Many proposed dynamo mechanisms involve strong toroidal magnetic fields in this transition region. The exact mechanism behind the formation of such a thin layer is still disputed. A very plausible mechanism involves a weak relic poloidal magnetic field trapped inside the radiative zone, which is responsible for expelling differential rotation outwards; this was first proposed by Rüdiger and Kitchatinov (1997). The present work develops this idea with numerical simulations including additional effects like meridional circulation. It is shown that a relic field of 1 Gauss or smaller would be sufficient to explain the observed thickness of the tachocline. The stability of the solar tachocline is addressed as the next part of the problem. It is shown that the tachocline is stable up to a differential rotation of 52% in the absence of magnetic fields. This is a new finding compared to earlier two-dimensional models, which estimated the solar differential rotation (about 28%) to be marginally stable or even unstable. The changed stability limit is attributed to the changed stability criterion of the three-dimensional model, which also involves radial gradients of the angular velocity. In the presence of toroidal magnetic field belts, the lowest non-axisymmetric mode is shown to be the most unstable one for the radiative part of the tachocline. It is estimated that the tachocline would become unstable for toroidal fields exceeding about 100 Gauss.
With both formation and stability questions satisfactorily addressed, this work presents the most comprehensive analysis of the physical processes in the solar tachocline to date.
With the fast rise of cloud computing adoption in the past few years, more companies are migrating their confidential files from their private data centers to the cloud to support their digital transformation. Enterprise file synchronization and sharing (EFSS) is one of the solutions offered for enterprises to store their files in the cloud with secure and easy file sharing and collaboration between employees. However, the rapidly increasing number of cyberattacks on the cloud puts a company's files at risk of being stolen or leaked to the public. It is then the responsibility of the EFSS system to ensure that the company's confidential files are accessible only to authorized employees.
CloudRAID is a secure personal cloud storage research collaboration project that provides data availability and confidentiality in the cloud. It combines erasure coding and cryptographic techniques to securely store files as multiple encrypted file chunks distributed across various cloud service providers (CSPs). However, several aspects of CloudRAID's concept are unsuitable for a secure and scalable enterprise cloud storage solution, particularly its key management system, location-based access control, multi-cloud storage management, and cloud file access monitoring.
This Ph.D. thesis focuses on CloudRAID for Business (CfB), which resolves these four main challenges of CloudRAID's concept for a secure and scalable EFSS system. First, the key management system is implemented using an attribute-based encryption scheme to provide secure and scalable intra-company and inter-company file-sharing functionality. Second, an Internet-based, location-aware file access control functionality is introduced to ensure that files can only be accessed at pre-determined trusted locations. Third, a unified multi-cloud storage resource management framework is utilized to securely manage the cloud storage resources available at various CSPs for authorized CfB stakeholders. Lastly, a multi-cloud storage monitoring system is introduced to monitor the activities of files in the cloud using the cloud storage log files generated by multiple CSPs.
In summary, this thesis helps the CfB system provide holistic security for a company's confidential files at the cloud, system, and file levels, ensuring that only the authorized company and its employees can access them.
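The storage scheme described above can be illustrated with a minimal, self-contained sketch (not the CloudRAID/CfB implementation): a file is split into k data chunks plus one XOR parity chunk in RAID-4 fashion, each chunk is encrypted under a separate per-provider key (a toy SHA-256 keystream stands in here for a real cipher such as AES-GCM), and any single lost chunk can be rebuilt from the others.

```python
import hashlib
from itertools import count

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Toy symmetric stream cipher: XOR with a SHA-256-based keystream.
    # A placeholder for a real cipher (e.g. AES-GCM); not secure for production.
    stream = bytearray()
    for block in count():
        if len(stream) >= len(data):
            break
        stream += hashlib.sha256(key + block.to_bytes(8, "big")).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def split_with_parity(data: bytes, k: int) -> list:
    # Split into k equally sized data chunks (zero-padded) plus one XOR parity chunk.
    size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
    parity = bytearray(size)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return chunks + [bytes(parity)]

def rebuild(chunks: list, missing: int) -> bytes:
    # Any single missing chunk equals the XOR of all the remaining ones.
    size = len(next(c for j, c in enumerate(chunks) if j != missing))
    out = bytearray(size)
    for j, chunk in enumerate(chunks):
        if j != missing:
            for i, b in enumerate(chunk):
                out[i] ^= b
    return bytes(out)

# Store: split the file, then encrypt each chunk with a per-provider key.
data = b"confidential quarterly report"
plain = split_with_parity(data, k=3)
keys = [b"key-csp-%d" % j for j in range(len(plain))]
stored = [xor_crypt(key, chunk) for key, chunk in zip(keys, plain)]

# Retrieve: one provider is unavailable; decrypt the rest and rebuild chunk 1.
decrypted = [xor_crypt(k, c) for k, c in zip(keys, stored)]
recovered = rebuild(decrypted, missing=1)
```

In CfB, the per-chunk keys would themselves be protected by the attribute-based encryption layer, so that only employees whose attributes satisfy a file's policy can unwrap them.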
Inflammatory bowel diseases (IBD), characterised by chronic inflammation of the gut wall, develop as a consequence of an overreacting immune response to commensal bacteria, caused by a combination of genetic and environmental conditions. Large inter-individual differences in the outcome of currently available therapies complicate the choice of the best option for an individual patient. Predicting the prospects of therapeutic success for an individual patient is currently only possible to a limited extent; for this, a better understanding of possible differences between responders and non-responders is needed.
In this thesis, we have developed a mathematical model describing the most important processes of the gut mucosal immune system on the cellular level. The model is based on literature data, which were on the one hand used (qualitatively) to choose which cell types and processes to incorporate and to derive the model structure, and on the other hand (quantitatively) to derive the parameter values. Using ordinary differential equations, it describes the concentration-time course of neutrophils, macrophages, dendritic cells, T cells and bacteria, each subdivided into different cell types and activation states, in the lamina propria and mesenteric lymph nodes. We evaluate the model by means of simulations of the healthy immune response to salmonella infection and mucosal injury.
A virtual population includes IBD patients, whom we define by a gut wall that is initially asymptomatic but, after a trigger, chronically inflamed. We demonstrate the model's usefulness in different analyses: (i) The comparison of virtual IBD patients with virtual healthy individuals shows that the disease is elicited by either many small or fewer large changes, and allows us to formulate hypotheses about dispositions relevant for development of the disease. (ii) We simulate the effects of different therapeutic targets and make predictions about the therapeutic outcome based on the pre-treatment state. (iii) From the analysis of differences between virtual responders and non-responders, we derive hypotheses about reasons for the inter-individual variability in treatment outcome. (iv) For the example of anti-TNF-alpha therapy, we analyse which alternative therapies are most promising in case of therapeutic failure, and which therapies are best suited for combination therapies: For drugs that also directly target the cytokine levels or inhibit the recruitment of innate immune cells, we predict a low probability of success when used as an alternative treatment, but a large gain when used in a combination treatment. For drugs with direct effects on T cells, via modulation of the sphingosine-1-phosphate receptor or inhibition of T cell proliferation, we predict a considerably larger probability of success when used as an alternative treatment, but only a small additional gain when used in a combination therapy.
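A deliberately tiny toy version of such an ODE model (two states instead of the thesis's full set of cell populations, with made-up parameters) shows the general pattern: bacteria recruit innate immune cells, which in turn clear the bacteria, and the coupled system settles into a low-bacteria steady state.

```python
def simulate(days=200.0, dt=0.01, r=1.0, K=1.0, kp=5.0, a=2.0, d=1.0,
             B0=0.5, M0=0.0):
    # Toy mucosal model: bacteria B grow logistically and are cleared by
    # activated macrophages M, which are recruited by B and die at rate d.
    #   dB/dt = r*B*(1 - B/K) - kp*M*B
    #   dM/dt = a*B - d*M
    B, M = B0, M0
    for _ in range(int(days / dt)):  # forward-Euler integration
        dB = r * B * (1.0 - B / K) - kp * M * B
        dM = a * B - d * M
        B += dt * dB
        M += dt * dM
    return B, M

# With these parameters the analytic steady state is B* = 1/11, M* = 2/11.
B_eq, M_eq = simulate()
```

In the full model, chronic inflammation corresponds to a second, high-inflammation steady state; therapy simulations then amount to changing parameters (e.g. recruitment or cytokine-related rates) and asking into which state the system relaxes.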
With the rise of nanotechnology in the last decade, nanofluidics has been established as a research field and has gained increased interest in science and industry. Natural aqueous nanofluidic systems are very complex: there is often a predominance of liquid interfaces, or the fluid contains charged or differently shaped colloids. The effects promoted by these additives are far from being completely understood, and interesting questions arise with regard to the confinement of such complex fluidic systems. A systematic study of nanofluidic processes requires designing suitable experimental model nano-channels with the required characteristics. The present work employed thin liquid films (TLFs) as experimental models. They have proven to be useful experimental tools because of their simple geometry, reproducible preparation, and controllable liquid interfaces. The thickness of the channels can be adjusted easily via the concentration of electrolyte in the film-forming solution. This way, channel dimensions from 5 to 100 nm are possible, a high flexibility for an experimental system. TLFs have liquid interfaces (IFs) of different charge and properties, and they offer the possibility to confine differently shaped ions and molecules to very small spaces, or to subject them to controlled forces. This makes foam films a unique “device” for obtaining information about fluidic systems of nanometer dimensions. The main goal of this thesis was to study nanofluidic processes using TLFs as models, or tools, to extract information about natural systems, and to deepen the understanding of the underlying physico-chemical conditions. The presented work showed that foam films can be used as experimental models to understand the behavior of liquids in nano-sized confinement.
In the first part of the thesis, we studied the thinning of thin liquid films stabilized with the non-ionic surfactant n-dodecyl-β-maltoside (β-C₁₂G₂), with primary interest in the interfacial diffusion processes during thinning as a function of surfactant concentration. The surfactant concentration in the film-forming solutions was varied at constant electrolyte (NaCl) concentration. The velocity of thinning was analyzed by combining previously developed theoretical approaches. Qualitative information about the mobility of the surfactant molecules at the film surfaces was obtained. We found that above a certain limiting surfactant concentration the film surfaces were completely immobile and behaved as non-deformable, which decelerated the thinning process. This follows the predictions for Reynolds flow of liquid between two non-deformable disks. In the second part of the thesis, we designed a TLF nanofluidic system containing rod-like multivalent ions and compared this system to films containing monovalent ions. We presented first results which recognized for the first time the existence of an additional attractive force in foam films, based on the electrostatic interaction between rod-like ions and oppositely charged surfaces. We may speculate that this is an ion-bridging component of the disjoining pressure. The results show that for films prepared in the presence of spermidine the transformation of the thicker CF to the thinnest NBF is more probable than for films prepared with NaCl at similar conditions of electrostatic interaction. This effect is not a result of specific adsorption of any of the ions at the fluid surfaces, and it does not lead to any changes in the equilibrium properties of the CF and NBF. Our hypothesis was confirmed using the trivalent ion Y³⁺, which does not show ion bridging.
The experimental results are compared to theoretical predictions, and quantitative agreement on the system's energy gain for the change from CF to NBF could be obtained. In the third part of the work, the behavior of nanoparticles in confinement was investigated with respect to their impact on the fluid flow velocity. The particles altered the flow velocity by an unexpectedly large amount, so that the resulting changes in the dynamic viscosity could not be explained by a realistic change of the fluid viscosity. Only aggregation, flocculation and plug formation can explain the experimental results. The particle systems in the presented thesis had a great impact on the film interfaces due to the stabilizer molecules present in the bulk solution. Finally, the location of the particles with respect to their lateral and vertical arrangement in the film was studied with advanced reflectivity and scattering methods. Neutron reflectometry studies were performed to investigate the location of nanoparticles in the TLF perpendicular to the interface. For the first time, we studied TLFs using grazing-incidence small-angle X-ray scattering (GISAXS), a technique sensitive to the lateral arrangement of particles in confined volumes. This work provides preliminary data on a lateral ordering of particles in the film.
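The Reynolds limit mentioned in the first part can be made concrete with a short sketch (illustrative numbers, not measured values from the thesis): for rigid, non-deformable film surfaces the thinning velocity is V = 2h³ΔP/(3μR²), which integrates to a closed-form drainage time.

```python
def reynolds_velocity(h, delta_p, mu, R):
    # Thinning velocity of a film of thickness h between two rigid disks
    # of radius R under driving pressure ΔP (all SI units):
    #   V = -dh/dt = 2 h^3 ΔP / (3 μ R^2)
    return 2.0 * h**3 * delta_p / (3.0 * mu * R**2)

def drainage_time(h0, hf, delta_p, mu, R):
    # Time to thin from h0 down to hf, from integrating dh/V:
    #   t = 3 μ R^2 / (4 ΔP) * (1/hf^2 - 1/h0^2)
    return 3.0 * mu * R**2 / (4.0 * delta_p) * (1.0 / hf**2 - 1.0 / h0**2)

# Illustrative values: film radius 50 µm, water viscosity, ΔP = 100 Pa,
# thinning from 100 nm to 10 nm.
t = drainage_time(h0=100e-9, hf=10e-9, delta_p=100.0, mu=1.0e-3, R=50e-6)
```

Films with mobile (surfactant-depleted) interfaces drain faster than this rigid-disk bound, which is why the observed slowing toward Reynolds-like behavior signals immobilized film surfaces.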
This thesis investigates whether multilingual speakers’ use of grammatical constraints in an additional language (La) is affected by the native (L1) and non-native grammars (L2) of their linguistic repertoire.
Previous studies have used untimed measures of grammatical performance to show that L1 and L2 grammars affect the initial stages of La acquisition. This thesis extends this work by examining whether speakers at intermediate levels of La proficiency, who demonstrate mature untimed/offline knowledge of the target La constraints, are differentially affected by their L1 and L2 knowledge when they comprehend sentences under processing pressure. To this end, several groups of La German speakers were tested on word order and agreement phenomena using online/timed measures of grammatical knowledge. Participants had mirror distributions of their prior languages: they were either L1English/L2Spanish speakers or L1Spanish/L2English speakers. Crucially, in half of the phenomena the target La constraint aligned with English but not with Spanish, while in the other half it aligned with Spanish but not with English. Results show that the L1 grammar plays a major role in the use of La constraints under processing pressure, as participants displayed increased sensitivity to La constraints when they aligned with their L1, and reduced sensitivity when they did not. Further, in specific phenomena in which the L2 and La constraints aligned, increased L2 proficiency resulted in enhanced sensitivity to the La constraint. These findings suggest that both native and non-native grammars affect how speakers use La grammatical constraints under processing pressure. However, L1 and L2 grammars differentially influence participants' performance: While L1 constraints seem to be reliably recruited to cope with the processing demands of real-time La use, proficiency in an L2 can enhance sensitivity to La constraints only in specific circumstances, namely when L2 and La constraints align.
Carbonatite magmatism is a highly efficient transport mechanism from Earth's mantle to the crust, thus providing insights into the chemistry and dynamics of the Earth's mantle. One evolving and promising tool for tracing magma interaction is stable iron isotopes, particularly because iron isotope fractionation is controlled by oxidation state and bonding environment. Meanwhile, a large data set on iron isotope fractionation in igneous rocks exists, comprising bulk rock compositions and fractionation between mineral groups. Iron isotope data from natural carbonatite rocks are extremely light and remarkably variable. This resembles iron isotope data from mantle xenoliths, which are characterized by a variability in δ56Fe spanning three times the range found in basalts, and by the extremely light values of some whole-rock samples, reaching δ56Fe as low as -0.69 ‰ in a spinel lherzolite. The cause of this large range of variations may be metasomatic processes, involving metasomatic agents such as volatile-bearing, highly alkaline silicate melts or carbonate melts. The expected effects of metasomatism on iron isotope fractionation vary with parameters such as the melt/rock ratio, the reaction time, and the nature of the metasomatic agents and mineral reactions involved. An alternative or additional way to enrich light isotopes in the mantle could be multiple phases of melt extraction. To interpret the existing data sets, more knowledge of iron isotope fractionation factors is needed.
To investigate the behavior of iron isotopes in carbonatite systems, kinetic and equilibration experiments in natrocarbonatite systems between immiscible silicate and carbonate melts were performed in an internally heated gas pressure vessel at intrinsic redox conditions, at temperatures between 900 and 1200 °C and pressures of 0.5 and 0.7 GPa. The iron isotope compositions of the coexisting silicate and carbonate melts were analyzed by solution MC-ICP-MS. The kinetic experiments, employing a Fe-58 spiked starting material, show that isotopic equilibrium is attained after 48 hours. The experimental studies of equilibrium iron isotope fractionation between immiscible silicate and carbonate melts have shown that light isotopes are enriched in the carbonatite melt. The highest mean Δ56Fesil.melt-carb.melt of 0.13 ‰ was determined in a system with a strongly peralkaline silicate melt composition (ASI ≥ 0.21, Na/Al ≤ 2.7). In three systems with extremely peralkaline silicate melt compositions (ASI between 0.11 and 0.14), iron isotope fractionation could not be analytically resolved. The lowest mean Δ56Fesil.melt-carb.melt of 0.02 ‰ was determined in a system with an extremely peralkaline silicate melt composition (ASI ≤ 0.11, Na/Al ≥ 6.1). The observed iron isotope fractionation is most likely governed by the redox conditions of the system. Yet, in the systems where no fractionation occurred, structural changes induced by compositional changes possibly overrule the influence of the redox conditions. This interpretation implies that the iron isotope system has the potential to be useful not only for exploring redox conditions in magmatic systems, but also for detecting structural changes in a melt.
In situ iron isotope analyses by femtosecond laser ablation coupled to MC-ICP-MS on magnetite and olivine grains were performed to reveal variations in iron isotope composition on the micro scale. The investigated sample is a melilitite bomb from the Salt Lake Crater group at Honolulu (Oahu, Hawaii), showing strong evidence for interaction with a carbonatite melt. While the magnetite grains are rather homogeneous in their iron isotope compositions, the olivine grains span a far larger range in iron isotope ratios. The variability of δ56Fe in magnetite is limited, from -0.17 ‰ (± 0.11 ‰, 2SE) to +0.08 ‰ (± 0.09 ‰, 2SE); δ56Fe values in olivine range from -0.66 ‰ (± 0.11 ‰, 2SE) to +0.10 ‰ (± 0.13 ‰, 2SE). Olivine and magnetite grains hold different information regarding kinetic and equilibrium fractionation due to their different Fe diffusion coefficients. The observations made in the experiments and in the in situ iron isotope analyses suggest that the extremely light iron isotope signatures found in carbonatites are generated by several steps of isotope fractionation during carbonatite genesis. These may involve equilibrium and kinetic fractionation. Since iron isotopic signatures in natural systems are generated by a combination of multiple factors (pressure, temperature, redox conditions, phase composition and structure, time scale), multi-tracer approaches are needed to explain the signatures found in natural rocks.
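For readers unfamiliar with the notation, δ56Fe and Δ56Fe are simple transformations of measured isotope ratios. The sketch below (illustrative ratios; the reference value for the IRMM-014 standard is approximate) shows how a per-mil difference of about 0.13 ‰ between two phases arises.

```python
IRMM014_56FE_54FE = 15.698  # approximate 56Fe/54Fe of the IRMM-014 standard

def delta56(ratio_sample, ratio_std=IRMM014_56FE_54FE):
    # δ56Fe: per-mil deviation of a sample's 56Fe/54Fe ratio from the standard.
    return (ratio_sample / ratio_std - 1.0) * 1000.0

def cap_delta56(delta_a, delta_b):
    # Δ56Fe_A-B: fractionation between phases A and B (≈ δ56Fe_A - δ56Fe_B).
    return delta_a - delta_b

# Hypothetical coexisting melts: the silicate melt is isotopically heavier.
d_sil = delta56(15.700)
d_carb = delta56(15.698)
frac = cap_delta56(d_sil, d_carb)   # roughly 0.13 per mil
```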
Development of techniques for earthquake microzonation studies in different urban environments
(2010)
The proliferation of megacities in many developing countries, their location in areas where they are exposed to a high risk from large earthquakes, and a widespread lack of preparation demonstrate the need for improved capabilities in hazard assessment, as well as for the rapid adjustment and development of land-use planning. In particular, within the context of seismic hazard assessment, the evaluation of local site effects and their influence on the spatial distribution of ground shaking generated by an earthquake plays an important role. It follows that carrying out earthquake microzonation studies, which aim at identifying areas within the urban environment that are expected to respond in a similar way to a seismic event, is essential for the reliable risk assessment of large urban areas. Considering the rate at which many large towns in developing countries that are prone to large earthquakes are growing, their seismic microzonation has become mandatory. Such activities are challenging, and techniques suitable for identifying site effects within such contexts are needed. In this dissertation, I develop techniques for investigating large-scale urban environments that aim at being non-invasive, cost-effective and quickly deployable. These characteristics allow one to investigate large areas over a relatively short time frame, with a spatial sampling resolution sufficient to provide reliable microzonation. Although there is a negative trade-off between the completeness of the available information and the extent of the investigated area, I attempt to mitigate this limitation by combining two layers of information, as I term them: in the first layer, the site effects at a few calibration points are well constrained by analyzing earthquake data or using other geophysical information (e.g., shear-wave velocity profiles); in the second layer, the site effects over a larger areal coverage are estimated by means of single-station noise measurements.
The microzonation is performed in terms of problem-dependent quantities, by considering a proxy suitable to link information from the first layer to the second one. In order to define the microzonation approach proposed in this work, different methods for estimating site effects have been combined and tested in Potenza (Italy), where a considerable amount of data was available. In particular, the horizontal-to-vertical spectral ratio computed for seismic noise recorded at different sites has been used as a proxy to combine the two levels of information together and to create a microzonation map in terms of spectral intensity ratio (SIR). In the next step, I applied this two-layer approach to Istanbul (Turkey) and Bishkek (Kyrgyzstan). A similar hybrid approach, i.e., combining earthquake and noise data, has been used for the microzonation of these two different urban environments. For both cities, after having calibrated the fundamental frequencies of resonance estimated from seismic noise with those obtained by analysing earthquakes (first layer), a fundamental frequency map has been computed using the noise measurements carried out within the town (second layer). By applying this new approach, maps of the fundamental frequency of resonance for Istanbul and Bishkek have been published for the first time. In parallel, a microzonation map in terms of SIR has been incorporated into a risk scenario for the Potenza test site by means of a dedicated regression between spectral intensity (SI) and macroseismic intensity (EMS). The scenario study confirms the importance of site effects within the risk chain. In fact, their introduction into the scenario led to an increase of about 50% in estimates of the number of buildings that would be partially or totally collapsed. Last, but not least, considering that the approach developed and applied in this work is based on measurements of seismic noise, their reliability has been assessed. 
A theoretical model describing the self-noise curves of different instruments usually adopted in microzonation studies (e.g., those used in Potenza, Istanbul and Bishkek) has been considered and compared with empirical data recorded in Cologne (Germany) and Gubbio (Italy). The results show that, depending on the geological and environmental conditions, instrumental noise can severely bias the results obtained by recording and analysing ambient noise. Therefore, in this work I also provide some guidelines for measuring seismic noise.
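The H/V method at the core of the second layer can be sketched in a few lines. The example below is a toy version on a synthetic three-component record (a naive single-frequency DFT instead of windowed, smoothed FFT spectra; the 2 Hz "site resonance" is built into the synthetic horizontals), showing that the peak of the H/V curve recovers the resonance frequency.

```python
import math

def dft_amplitude(signal, freq, dt):
    # Amplitude of one Fourier component of a real, evenly sampled signal.
    re = sum(s * math.cos(2 * math.pi * freq * i * dt) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i * dt) for i, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

def hv_curve(north, east, vertical, freqs, dt):
    # H/V spectral ratio: quadratic mean of the two horizontal spectra
    # divided by the vertical spectrum, evaluated per frequency.
    curve = []
    for f in freqs:
        h = math.sqrt((dft_amplitude(north, f, dt) ** 2 +
                       dft_amplitude(east, f, dt) ** 2) / 2.0)
        curve.append(h / dft_amplitude(vertical, f, dt))
    return curve

# Synthetic 20 s record at 100 Hz: flat vertical spectrum, horizontals
# amplified around an assumed 2 Hz site resonance.
dt, n, f0 = 0.01, 2000, 2.0
freqs = [0.5 * k for k in range(1, 21)]  # 0.5 .. 10 Hz
gain = {f: 1.0 + 3.0 * math.exp(-((f - f0) / 0.5) ** 2) for f in freqs}
vertical = [sum(math.sin(2 * math.pi * f * i * dt) for f in freqs) for i in range(n)]
north = [sum(gain[f] * math.sin(2 * math.pi * f * i * dt) for f in freqs) for i in range(n)]
east = list(north)

hv = hv_curve(north, east, vertical, freqs, dt)
f_peak = freqs[hv.index(max(hv))]   # recovers the 2 Hz resonance
```

Real processing adds tapered time windows, spectral smoothing, and averaging over many windows; the fundamental-frequency maps in this work come from such processed single-station noise records.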
In this thesis we utilize resolved stellar populations to improve our understanding of galaxy formation and evolution. In the first part we improve a method for metallicity determination of faint old stellar systems, in the second and third part we analyze the individual history of six nearby disk galaxies outside the Local Group.
A New Calibration of the Color Metallicity Relation of Red Giants for HST data:
It is well known that the color distribution of stars on the Red Giant Branch (RGB) can be used to determine the metallicities of old stellar populations for which only shallow photometry is available. Based on the largest sample of globular clusters ever used for such studies, we quantify the relation between metallicity and color in the widely used HST ACS filters F606W and F814W.
We use a sample of globular clusters from the ACS Globular Cluster Survey and measure their RGB color at given absolute magnitudes to derive the color-metallicity relation. We find a clear relation between metallicity and RGB color; we investigate the scatter and the uncertainties in this relation and show its limitations. A comparison with isochrones shows reasonably good agreement with BaSTI models, a small offset to Dartmouth models, and a larger offset to Padua models.
Even for the best globular cluster data available, the metallicity of a simple stellar population can be determined from the RGB alone only with an accuracy of 0.3 dex for [M/H]<-1, and 0.15 dex for [M/H]>-1. For mixed populations, as they are observed in external galaxies, the uncertainties will be even larger due to uncertainties in extinction, age, etc. Therefore caution is necessary when interpreting photometric metallicities.
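The calibration itself is, at its core, a least-squares fit of metallicity against RGB color at a fixed absolute magnitude. A minimal sketch with invented calibration points (not the thesis's data):

```python
def fit_line(colors, metallicities):
    # Ordinary least squares for [M/H] = a + b * color.
    n = len(colors)
    mx = sum(colors) / n
    my = sum(metallicities) / n
    sxx = sum((x - mx) ** 2 for x in colors)
    sxy = sum((x - mx) * (y - my) for x, y in zip(colors, metallicities))
    b = sxy / sxx
    return my - b * mx, b

def rms_scatter(colors, metallicities, a, b):
    # Root-mean-square residual: a rough floor on the photometric accuracy.
    return (sum((y - (a + b * x)) ** 2
                for x, y in zip(colors, metallicities)) / len(colors)) ** 0.5

# Hypothetical cluster calibrators: (F606W-F814W color at fixed magnitude, [M/H]).
colors = [0.85, 0.95, 1.05, 1.20, 1.40]
mh = [-2.2, -1.8, -1.3, -0.9, -0.5]
a, b = fit_line(colors, mh)
scatter = rms_scatter(colors, mh, a, b)
photometric_mh = a + b * 1.10   # metallicity estimate for an RGB of this color
```

In practice the relation is evaluated at several absolute magnitudes and becomes non-linear at the metal-rich end, and the quoted 0.15-0.3 dex accuracy already reflects the scatter of such fits before age and extinction uncertainties are added.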
The Structural History of Nearby Low Mass Disk Galaxies:
We study the individual evolution histories of three nearby, low-mass, edge-on galaxies (IC5052, NGC4244, NGC5023).
Using the color magnitude diagrams of resolved stellar populations, we construct star count density maps for populations of different ages and analyze the change of structural parameters with stellar age within each galaxy.
The three galaxies show vertical heating rates much lower than that of the Milky Way. This indicates that heating agents such as giant molecular clouds and spiral structure are weak in low-mass galaxies.
We do not detect a separate thick disk in any of the three galaxies, even though our observations cover a larger range in equivalent surface brightness than any integrated light study. While scaleheights increase with age, each population can be well described by a single disk. Only two of the galaxies contain a very weak additional component, which we identify as the faint halo. The mass of these faint halos is less than 1% of the mass of the disk.
All populations in the three galaxies exhibit little or no flaring. While this finding is consistent with previous integrated light studies, it poses strong constraints on galaxy formation models, because theoretical simulations often find strong flaring due to interactions or radial migration.
Furthermore, we find breaks in the radial profiles of all three galaxies. The radii of these breaks are independent of age, and the break strength decreases with age in two of the galaxies (NGC4244 and NGC5023). This is consistent with break formation models that combine a star formation cutoff with radial migration. The differing behavior of IC5052 can be explained by a recent interaction or minor merger.
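The scaleheight measurement behind these statements can be sketched as a log-linear fit of vertical star counts (synthetic counts here, with an assumed exponential profile; disk fits in practice often use sech²-like models as well):

```python
import math

def fit_scaleheight(z_kpc, counts):
    # For n(z) = n0 * exp(-z / hz), ln n is linear in z with slope -1/hz,
    # so an ordinary least-squares fit of ln(counts) vs z recovers hz.
    logs = [math.log(c) for c in counts]
    n = len(z_kpc)
    mz = sum(z_kpc) / n
    ml = sum(logs) / n
    slope = (sum((z - mz) * (l - ml) for z, l in zip(z_kpc, logs)) /
             sum((z - mz) ** 2 for z in z_kpc))
    return -1.0 / slope

# Synthetic star counts for two age bins: older stars sit in a thicker layer.
z = [0.1 * k for k in range(1, 11)]             # heights above the plane in kpc
young = [1000.0 * math.exp(-zi / 0.25) for zi in z]
old = [400.0 * math.exp(-zi / 0.55) for zi in z]
hz_young = fit_scaleheight(z, young)
hz_old = fit_scaleheight(z, old)
```

Vertical heating is then quantified by how the fitted hz grows with population age; a "single disk" description means one such profile per population suffices, with no discrete thin/thick break.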
The Structural History of Massive Disk Galaxies:
We extend the structural analysis of stellar populations with distinct ages to three massive galaxies, NGC891, NGC4565 and NGC7814. While confusion effects due to the high stellar number densities in their central regions, together with the prominent dust lanes, prevent a detailed analysis of the radial profiles, we can study their vertical structure.
These massive galaxies also show slower heating than the Milky Way, comparable to that of the low-mass galaxies. This can be traced back to their already thick young populations and the thick layers of their interstellar medium.
We do not find a clearly separate thick disk in any of these three galaxies; all populations can be described by a single disk plus a S\'ersic bulge/halo component. In contrast to the low-mass galaxies, we cannot rule out the presence of thick disks in the massive galaxies, because of the strong influence of the halo, which might hide a possible contribution of the thick disk to the vertical star count profiles. However, the faintness of any possible thick disks still points to problems with the earlier ubiquitous detections of thick disks in external galaxies.
Today, analytical chemistry no longer consists only of large measuring devices and methods that are time-consuming and expensive, can only be handled by qualified staff, and whose results can also only be evaluated by such staff. Usually this technique, described in the following as the 'classic analytical measuring technique', also requires specially equipped rooms and often a relatively large quantity of the test compounds, which must be specially prepared. Besides this classic analytical measuring technique, which is limited to certain substance groups and applications, a new measuring technique has gained acceptance, particularly within the last years, which can often be used by a layman as well. The new measuring technique often requires very little equipment. The needed sample volumes are also small, and no special sample preparation is required. In addition, the new measuring instruments are simple to handle. They are cheap both in production and in use, and they usually even permit continuous measurement recording. Numerous of these new measuring instruments are based on research in the field of biosensors over the last 40 years. Since Clark and Lyons in 1962 were able to measure glucose with a simple oxygen electrode supplemented by an enzyme, the development of this new measuring technique could no longer be held back. Biosensors, special sensors consisting of a combination of a biological component (which permits specific recognition of the analyte even without prior purification of the sample) and a physical transducer (which converts the primary physicochemical effect into an electronically measurable signal), conquered the market. In the context of this thesis, different tyrosinase sensors were developed which fulfil various requirements, depending on the origin and features of the tyrosinase used.
One of the tyrosinase sensors, for example, was used for the quantification of phenolic compounds in river and sea water, and the results correlated very well with the corresponding DIN test for the determination of phenolic compounds. Another tyrosinase sensor developed here showed a very high sensitivity for catecholamines, substances which are of special importance in medical diagnostics. In addition, the investigations of two different tyrosinases, also carried out in the context of this thesis, have shown that a particular tyrosinase (tyrosinase from Streptomyces antibioticus) will be a better choice than the tyrosinase from Agaricus bisporus, which has been used in biosensor research until now, if one wants to develop even more sensitive tyrosinase sensors in the future. Furthermore, first successes were achieved in the field of molecular biology: the production of tyrosinase mutants with special, pre-designed features. These successes can be used to develop a new generation of tyrosinase sensors, in which the tyrosinase can be bound directionally both to the corresponding physical transducer and to another enzyme. From this, one expects to minimize the distances which the substance to be determined (or its product) must otherwise cover. Finally, this should result in a clearly measurable increase in the sensitivity of the biosensor.
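The working principle of such an amperometric enzyme sensor can be sketched with a Michaelis-Menten response curve (hypothetical Imax and Km values, not measured sensor parameters): the calibration maps a measured current back to a concentration by inverting the saturation curve.

```python
def sensor_current(conc, i_max, km):
    # Michaelis-Menten response of an enzyme electrode: the measured
    # current saturates at i_max for high analyte concentration.
    return i_max * conc / (km + conc)

def concentration(current, i_max, km):
    # Inverted calibration curve: recover the analyte concentration
    # from a measured current (valid only for current < i_max).
    return km * current / (i_max - current)

# Hypothetical tyrosinase-electrode calibration: i_max = 100 nA, Km = 0.2 mM.
i = sensor_current(0.05, i_max=100.0, km=0.2)   # current for 0.05 mM catechol
c = concentration(i, i_max=100.0, km=0.2)       # round-trips to 0.05 mM
```

A lower apparent Km (or a higher i_max per enzyme amount) is what makes one tyrosinase a more sensitive choice than another at low analyte concentrations.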
Justice structures societies and social relations of any kind; its psychological integration provides a fundamental cornerstone for social, moral, and personality development. The trait justice sensitivity captures individual differences in responses toward perceived injustice (JS; Schmitt et al., 2005, 2010). JS has shown substantial relations to social and moral behavior in adult and adolescent samples; however, it was not yet investigated in middle childhood despite this being a sensitive phase for personality development. JS differentiates in underlying perspectives that are either more self- or other-oriented regarding injustice, with diverging outcome relations. The present research project investigated JS and its perspectives in children aged 6 to 12 years with a special focus on variables of social and moral development as potential correlates and outcomes in four cross-sectional studies. Study 1 started with a closer investigation of JS trait manifestation, measurement, and relations to important variables from the nomological network, such as temperamental dimensions, social-cognitive skills, and global pro- and antisocial behavior in a pilot sample of children from south Germany. Study 2 investigated relations between JS and distributive behavior following distributive principles in a large-scale data set of children from Berlin and Brandenburg. Study 3 explored the relations of JS with moral reasoning, moral emotions, and moral identity as important precursors of moral development in the same large-scale data set. Study 4 investigated punishment motivation to even out, prevent, or compensate norm transgressions in a subsample, whereby JS was considered as a potential predictor of different punishment motives. All studies indicated that a large-scale, economic measurement of JS is possible at least from middle childhood onward. 
JS showed relations to temperamental dimensions, social skills, global social behavior; distributive decisions and preferences for distributive principles; moral reasoning, emotions, and identity; as well as with punishment motivation; indicating that trait JS is highly relevant for social and moral development. The underlying self- or other-oriented perspectives showed diverging correlate and outcome relations mostly in line with theory and previous findings from adolescent and adult samples, but also provided new theoretical ideas on the construct and its differentiation. Findings point to an early internal justice motive underlying trait JS, but additional motivations underlying the JS perspectives. Caregivers, educators, and clinical psychologists should pay attention to children’s JS and toward promoting an adaptive justice-related personality development to foster children’s prosocial and moral development as well as their mental health.
Permafrost, defined as ground that is frozen for at least two consecutive years, is a distinct feature of the terrestrial unglaciated Arctic. It covers approximately one quarter of the land area of the Northern Hemisphere (23,000,000 km²). Arctic landscapes, especially those underlain by permafrost, are threatened by climate warming and may degrade in different ways, including active layer deepening, thermal erosion, and the development of rapid thaw features. In Siberian and Alaskan late Pleistocene ice-rich Yedoma permafrost, rapid and deep thaw processes (called thermokarst) can mobilize deep organic carbon (below 3 m depth) through surface subsidence due to loss of ground ice. Increased permafrost thaw could cause a feedback loop of global significance if the stored frozen organic carbon is reintroduced into the active carbon cycle as greenhouse gases, accelerating warming and inducing further permafrost thaw and carbon release. To assess this concern, the major objective of the thesis was to enhance the understanding of the origin of Yedoma as well as to assess the associated organic carbon pool size and carbon quality (concerning degradability). The key research questions were:
- How did Yedoma deposits accumulate?
- How much organic carbon is stored in the Yedoma region?
- What is the susceptibility of the Yedoma region's carbon to future decomposition?
To address these three research questions, an interdisciplinary approach was applied, including detailed field studies and sampling in Siberia and Alaska as well as methods of sedimentology, organic biogeochemistry, remote sensing, statistical analyses, and computational modeling. To provide a panarctic context, this thesis additionally includes results both from a newly compiled northern circumpolar carbon database and from a model assessment of carbon fluxes in a warming Arctic.
The Yedoma samples show a homogeneous grain-size composition. All samples were poorly sorted, with multi-modal grain-size distributions indicating various (re-)transport processes. This contradicts the popular hypothesis of pure loess deposition for the origin of Yedoma permafrost. Moreover, large-scale grinding by glaciers and ice sheets, which is necessary to create loess as a material source, was absent in the northeast Siberian lowlands; this points to a polygenetic origin of the Yedoma deposits.
Based on the largest available data set of the key parameters, including organic carbon content, bulk density, ground ice content, and deposit volume (thickness and coverage) from Siberian and Alaskan study sites, this thesis further shows that deep frozen organic carbon in the Yedoma region consists of two distinct major reservoirs: Yedoma deposits and thermokarst deposits (formed in thaw-lake basins). Yedoma deposits contain ~80 Gt and thermokarst deposits ~130 Gt of organic carbon, a total of ~210 Gt. Depending on the approach used for calculating uncertainty, the range for the total Yedoma region carbon store is ±75 % and ±20 % for conservative single and multiple bootstrapping calculations, respectively. Although these findings reduce the Yedoma region carbon pool by nearly a factor of two compared to previous estimates, this frozen organic carbon is still capable of inducing a permafrost carbon feedback to climate warming. The complete northern circumpolar permafrost region contains between 1100 and 1500 Gt of organic carbon, of which ~60 % is perennially frozen and decoupled from the short-term carbon cycle.
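The bootstrap-based uncertainty ranges quoted above rest on resampling the field measurements with replacement. A minimal percentile-bootstrap sketch in Python (the carbon-density values and function names are illustrative assumptions, not the thesis data set):

```python
import random

def bootstrap_range(samples, n_boot=5000, alpha=0.05, seed=1):
    """Percentile bootstrap for the mean: resample with replacement,
    recompute the mean, and read off the central (1 - alpha) interval."""
    rng = random.Random(seed)
    n = len(samples)
    means = sorted(
        sum(rng.choice(samples) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

# Illustrative organic-carbon densities (kg C m^-3) from hypothetical profiles
carbon_density = [18.0, 22.5, 25.1, 19.4, 30.2, 27.8, 21.3, 24.6]
low, high = bootstrap_range(carbon_density)
```

Scaling such per-profile densities by deposit volume, with the bootstrap interval propagated through the multiplication, yields pool estimates with uncertainty ranges of the kind reported here.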
Once thawed and reintroduced into the active carbon cycle, the quality of the organic matter becomes relevant. Investigations of Yedoma and thermokarst organic matter quality showed no depth-dependent quality trend, evidence that after freezing the ancient organic matter is preserved in a state of constant quality. The applied alkane- and fatty-acid-based biomarker proxies, including the carbon-preference and higher-land-plant-fatty-acid indices, show a broad range of organic matter quality and thus no significantly different quality of the organic matter stored in thermokarst deposits compared to Yedoma deposits. This lack of quality differences indicates that organic matter biodegradability depends on the decomposition trajectory and the previous decomposition/incorporation history. Finally, the fate of the organic matter was assessed by implementing deep carbon pools and thermokarst processes in a permafrost carbon model. Under various warming scenarios for the northern circumpolar permafrost region, model results show a carbon release from permafrost regions of up to ~140 Gt and ~310 Gt by the years 2100 and 2300, respectively. The additional warming caused by the carbon release from newly thawed permafrost contributes 0.03 to 0.14°C by the year 2100. The model simulations predict that a further increase by the 23rd century will add 0.4°C to global mean surface air temperatures.
In conclusion, Yedoma deposit formation during the late Pleistocene was dominated by water-related (alluvial/fluvial/lacustrine) as well as aeolian processes under periglacial conditions. The circumarctic permafrost region, including the Yedoma region, contains a substantial amount of currently frozen organic carbon. The carbon of the Yedoma region is well-preserved and therefore available for decomposition after thaw. A missing quality-depth trend shows that permafrost preserves the quality of ancient organic matter. When the organic matter is mobilized by deep degradation processes, the northern permafrost region may add up to 0.4°C to the global warming by the year 2300.
Soft nanocomposites with enhanced electromechanical response for dielectric elastomer actuators
(2011)
Electromechanical transducers based on elastomer capacitors are presently considered for many soft actuation applications, due to their large reversible deformation in response to electric field induced electrostatic pressure. The high operating voltage of such devices is currently a major drawback, hindering their use in applications such as biomedical devices and biomimetic robots; it could, however, be reduced through careful design of their material properties. The main targets for improving their properties are increasing the relative permittivity of the active material while maintaining high electric breakdown strength and low stiffness, which would lead to enhanced electrostatic storage ability and hence a reduced operating voltage. Improvement of the functional properties is possible through the use of nanocomposites. These exploit the high surface-to-volume ratio of the nanoscale filler, resulting in large effects on macroscale properties. This thesis explores several strategies for nanomaterials design. The resulting nanocomposites are fully characterized with respect to their electrical and mechanical properties, by use of dielectric spectroscopy, tensile mechanical analysis, and electric breakdown tests. First, nanocomposites consisting of high-permittivity rutile TiO2 nanoparticles dispersed in the thermoplastic block copolymer SEBS (poly(styrene-co-ethylene-co-butylene-co-styrene)) are shown to exhibit permittivity increases of up to 3.7 times, leading to a 5.6-times improvement in electrostatic energy density, but with a trade-off in mechanical properties (an 8-fold increase in stiffness). The variation in both electrical and mechanical properties still allows for electromechanical improvement: a 27 % reduction of the electric field is found compared to the pure elastomer.
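The electrostatic actuation pressure on an elastomer film follows the Maxwell-stress relation p = ε₀εᵣE², so for a fixed target pressure the required field scales as 1/√εᵣ. A simplified Python sketch of this scaling (the matrix permittivity of 2.3 and the target pressure are assumed illustrative values; the idealized ~48 % field reduction exceeds the measured 27 % because this toy model ignores the composite's 8-fold stiffness increase):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(eps_r, e_field):
    """Electrostatic (Maxwell) pressure on a dielectric elastomer film, in Pa."""
    return EPS0 * eps_r * e_field ** 2

def field_for_pressure(eps_r, pressure):
    """Electric field (V/m) required to reach a target actuation pressure."""
    return (pressure / (EPS0 * eps_r)) ** 0.5

# Same target pressure, relative permittivity raised by the reported 3.7x
p_target = 1.0e5  # Pa, illustrative
e_matrix = field_for_pressure(2.3, p_target)           # pure elastomer
e_composite = field_for_pressure(2.3 * 3.7, p_target)  # TiO2/SEBS composite
field_reduction = 1 - e_composite / e_matrix           # ~0.48 in this model
```

The gap between the idealized and the measured reduction illustrates why stiffness must be tracked alongside permittivity when designing such composites.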
Second, it is shown that the use of conductive nanofiller particles (carbon black, CB) can lead to a strong increase of relative permittivity through percolation, however with detrimental side effects. These are due to localized enhancement of the electric field within the composite, which leads to sharp reductions in breakdown strength. Hence, the increase in permittivity does not make up for the reduction in breakdown strength in terms of stored electrical energy, which may prohibit their practical use. Third, a completely new approach for increasing the relative permittivity and electrostatic energy density of a polymer, based on 'molecular composites', is presented, relying on chemically grafting soft π-conjugated macromolecules of polyaniline (PANI) to a flexible elastomer backbone. Polarization caused by charge displacement along the conjugated backbone is found to induce a large and controlled permittivity enhancement (470 % over the elastomer matrix), while the chemical bonding encapsulates the PANI chains, manifesting in hardly any reduction in electric breakdown strength and hence a large increase in stored electrostatic energy. This is shown to lead to an improvement in the sensitivity of the measured electromechanical response (an 83 % reduction of the driving electric field) as well as in the maximum actuation strain (250 %). These results represent a large step forward in understanding the strategies that can be employed to obtain high-permittivity polymer materials of practical use for electro-elastomer actuation.
A large body of research now supports the presence of both syntactic and lexical predictions in sentence processing. Lexical predictions, in particular, are considered to indicate a deep level of predictive processing that extends past the structural features of a necessary word (e.g. noun), right down to the phonological features of the lexical identity of a specific word (e.g. /kite/; DeLong et al., 2005). However, evidence for lexical predictions typically focuses on predictions in very local environments, such as the adjacent word or words (DeLong et al., 2005; Van Berkum et al., 2005; Wicha et al., 2004). Predictions in such local environments may be indistinguishable from lexical priming, which is transient and uncontrolled, and as such may prime lexical items that are not compatible with the context (e.g. Kukona et al., 2014). Predictive processing has been argued to be a controlled process, with top-down information guiding preactivation of plausible upcoming lexical items (Kuperberg & Jaeger, 2016). One way to distinguish lexical priming from prediction is to demonstrate that preactivated lexical content can be maintained over longer distances.
In this dissertation, separable German particle verbs are used to demonstrate that preactivation of lexical items can be maintained over multi-word distances. A self-paced reading time and an eye tracking experiment provide some support for the idea that particle preactivation triggered by a verb and its context can be observed by holding the sentence context constant and manipulating the predictability of the particle. Although evidence of an effect of particle predictability was only seen in eye tracking, this is consistent with previous evidence suggesting that predictive processing facilitates only some eye tracking measures, to which the self-paced reading modality may not be sensitive (Staub, 2015; Rayner, 1998). Interestingly, manipulating the distance between the verb and the particle did not affect reading times, suggesting that the surprisal-predicted faster reading times at long distance may only occur when the additional distance is created by material that adds information about the lexical identity of the distant element (Levy, 2008; Grodner & Gibson, 2005). Furthermore, the results provide support for models proposing that temporal decay is not a major influence on word processing (Lewandowsky et al., 2009; Vasishth et al., 2019).
In the third and fourth experiments, event-related potentials (ERPs) were used as a method for detecting specific lexical predictions. In the initial ERP experiment, we found some support for the presence of lexical predictions when the sentence context constrained the number of plausible particles to a single particle. This was suggested by a frontal post-N400 positivity (PNP) that was elicited when a lexical prediction had been violated, but not by violations when more than one particle had been plausible. The results of this study were highly consistent with previous research suggesting that the PNP might be a much sought-after ERP marker of prediction failure (DeLong et al., 2011; DeLong et al., 2014; Van Petten & Luka, 2012; Thornhill & Van Petten, 2012; Kuperberg et al., 2019). However, a second experiment with a larger sample failed to replicate the effect, but did suggest that the relationship of the PNP to predictive processing may not yet be fully understood. Evidence for long-distance lexical predictions was inconclusive.
The conclusion drawn from the four experiments is that preactivation of the lexical entries of plausible upcoming particles did occur and was maintained over long distances. The facilitatory effect of this preactivation at the particle site therefore did not appear to be the result of transient lexical priming. However, the question of whether this preactivation can also lead to lexical predictions of a specific particle remains unanswered. Of particular interest to future research on predictive processing is further characterisation of the PNP. Implications for models of sentence processing may be the inclusion of long-distance lexical predictions, or the possibility that preactivation of lexical material can facilitate reading times and ERP amplitude without commitment to a specific lexical item.
Magmatic-hydrothermal systems form a variety of ore deposits at different proximities to upper-crustal hydrous magma chambers, ranging from greisenization in the roof zone of the intrusion, porphyry mineralization at intermediate depths to epithermal vein deposits near the surface. The physical transport processes and chemical precipitation mechanisms vary between deposit types and are often still debated.
The majority of magmatic-hydrothermal ore deposits are located along the Pacific Ring of Fire, whose eastern part is characterized by the Mesozoic to Cenozoic orogenic belts of western North and South America, namely the American Cordillera. Major magmatic-hydrothermal ore deposits along the American Cordillera include (i) porphyry Cu(-Mo-Au) deposits (along the western cordilleras of Mexico, the western U.S., Canada, Chile, Peru, and Argentina); (ii) Climax- (and sub-)type Mo deposits (Colorado Mineral Belt and northern New Mexico); and (iii) porphyry and IS-type epithermal Sn(-W-Ag) deposits of the Central Andean Tin Belt (Bolivia, Peru, and northern Argentina).
The individual studies presented in this thesis primarily focus on the formation of different styles of mineralization located at different proximities to the intrusion in magmatic-hydrothermal systems along the American Cordillera. This includes (i) two individual geochemical studies on the Sweet Home Mine in the Colorado Mineral Belt (a potential endmember of peripheral Climax-type mineralization); (ii) one numerical modeling study set up in a generic porphyry Cu environment; and (iii) a numerical modeling study on the Central Andean Tin Belt-type Pirquitas Mine in NW Argentina.
Microthermometric data of fluid inclusions trapped in greisen quartz and fluorite from the Sweet Home Mine (Detroit City Portal) suggest that the early-stage mineralization precipitated from low- to medium-salinity (1.5-11.5 wt.% equiv. NaCl), CO2-bearing fluids at temperatures between 360 and 415°C and at depths of at least 3.5 km. Stable isotope and noble gas isotope data indicate that greisen formation and base metal mineralization at the Sweet Home Mine was related to fluids of different origins. Early magmatic fluids were the principal source for mantle-derived volatiles (CO2, H2S/SO2, noble gases), which subsequently mixed with significant amounts of heated meteoric water. Mixing of magmatic fluids with meteoric water is constrained by δ2Hw-δ18Ow relationships of fluid inclusions. The deep hydrothermal mineralization at the Sweet Home Mine shows features similar to deep hydrothermal vein mineralization at Climax-type Mo deposits or on their periphery. This suggests that fluid migration and the deposition of ore and gangue minerals in the Sweet Home Mine was triggered by a deep-seated magmatic intrusion.
The second study on the Sweet Home Mine presents Re-Os molybdenite ages of 65.86±0.30 Ma from a Mo-mineralized major normal fault, namely the Contact Structure, and multimineral Rb-Sr isochron ages of 26.26±0.38 Ma and 25.3±3.0 Ma from gangue minerals in greisen assemblages. The age data imply that mineralization at the Sweet Home Mine formed in two separate events: Late Cretaceous (Laramide-related) and Oligocene (Rio Grande Rift-related). Thus, the age of Mo mineralization at the Sweet Home Mine clearly predates that of the Oligocene Climax-type deposits elsewhere in the Colorado Mineral Belt. The Re-Os and Rb-Sr ages also constrain the latest deformation along the Contact Structure, which was employed and/or crosscut by Late Cretaceous and Oligocene fluids, to between 62.77±0.50 Ma and 26.26±0.38 Ma. Along the Contact Structure, Late Cretaceous molybdenite is spatially associated with Oligocene minerals in the same vein system, a feature that precludes molybdenite recrystallization or reprecipitation by Oligocene ore fluids.
Ore precipitation in porphyry copper systems is generally characterized by metal zoning (Cu-Mo to Zn-Pb-Ag), which has been variably attributed to solubility decreases during fluid cooling, fluid-rock interactions, partitioning during fluid phase separation, and mixing with external fluids. The numerical modeling study, set up in a generic porphyry Cu environment, presents new advances of a numerical process model by considering published constraints on the temperature- and salinity-dependent solubility of Cu, Pb, and Zn in the ore fluid. This study investigates the roles of vapor-brine separation, halite saturation, initial metal contents, fluid mixing, and remobilization as first-order controls of the physical hydrology on ore formation. The results show that the magmatic vapor and brine phases ascend with different residence times but as miscible fluid mixtures, with salinity increases generating metal-undersaturated bulk fluids. The release rates of magmatic fluids affect the location of the thermohaline fronts, leading to contrasting mechanisms of ore precipitation: higher rates result in halite saturation without significant metal zoning, while lower rates produce zoned ore shells due to mixing with meteoric water. Varying metal contents can affect the order of the final metal precipitation sequence. Redissolution of precipitated metals results in zoned ore-shell patterns in more peripheral locations and also decouples halite saturation from ore precipitation.
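The first-order logic of such a process model, solubility falling as the ascending fluid cools so that metal drops out wherever the carried load exceeds solubility, can be sketched in a toy one-dimensional cooling path (the exponential solubility law and all parameter values are illustrative assumptions, not the calibrated constraints used in the thesis):

```python
import math

def solubility(temp_c, a=1e-6, b=0.02):
    """Toy temperature-dependent metal solubility (mass fraction),
    rising steeply with temperature as for Cu in saline fluids."""
    return a * math.exp(b * temp_c)

def precipitate_along_path(temps_c, metal_load):
    """March a fluid parcel through a cooling path; record how much
    metal precipitates at each step once solubility is exceeded."""
    deposited = []
    for t in temps_c:
        drop = max(0.0, metal_load - solubility(t))
        deposited.append(drop)
        metal_load -= drop
    return deposited

# Fluid cooling from 500 to 200 degC on its way up the system
path = [500 - 50 * i for i in range(7)]
ore_shell = precipitate_along_path(path, metal_load=5e-4)
```

In this sketch nothing precipitates at the hot end; deposition concentrates over the temperature interval where solubility crosses the metal load, a toy analogue of a zoned ore shell.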
The epithermal Pirquitas Sn-Ag-Pb-Zn mine in NW Argentina is hosted in a domain of metamorphosed sediments without geological evidence for volcanic activity within a distance of about 10 km from the deposit. However, recent geochemical studies of ore-stage fluid inclusions indicate a significant contribution of magmatic volatiles. This study tested different formation models by applying an existing numerical process model for porphyry-epithermal systems with a magmatic intrusion located either at a distance of about 10 km underneath the nearest active volcano or hidden underneath the deposit. The results show that the migration of the ore fluid over a 10-km distance results in metal precipitation by cooling before the deposit site is reached. In contrast, simulations with a hidden magmatic intrusion beneath the Pirquitas deposit are in line with field observations, which include mineralized hydrothermal breccias in the deposit area.
There are numerous situations in which people ask for something or make a request, e.g., asking a favor, asking for help, or requesting compliance with specific norms. For this reason, how to ask for something in order to increase people's willingness to fulfill such requests is one of the most important questions for many people working in various fields of responsibility such as charitable giving, marketing, management, or policy making.
This dissertation consists of four chapters that deal with the effects of small changes in the decision-making environment on altruistic decision-making and compliance behavior. Most notably, written communication as an influencing factor is the focus of the first three chapters. The starting point was the question of how to devise a request in order to maximize its chance of success (Chapter 1). The results of the first chapter gave rise to the ideas for the second and third chapters. Chapter 2 analyzes how communication by a neutral third party, i.e., a text from the experimenters that either reminds potential benefactors of their responsibility or highlights their freedom of choice, affects altruistic decision-making. Chapter 3 elaborates on the effect of thanking people in advance when asking them for help. While not as closely related to the other chapters as the first three are to one another, the fourth chapter also deals with the question of how compliance (here: compliance with norms and rules) is affected by subtle manipulations of the environment in which decisions are made. This chapter analyzes the effect of default settings in a tax return on tax compliance.
In order to study the research questions outlined above, controlled experiments were conducted. Chapter 1, which analyzes the effect of text messages on the decision to give something to another person, employs a mini-dictator game. The recipient sends a free-form text message to the dictator before the latter makes a binary decision whether or not to give part of her or his endowment to the recipient. We find that putting effort into the message by writing a long note without spelling mistakes increases dictators’ willingness to give. Moreover, writing in a humorous way and mentioning reasons why the money is needed pays off. Furthermore, men and women seem to react differently to some message categories. Only men react positively to efficiency arguments, while only women react to messages that emphasize the dictator’s power and responsibility.
Building on this last result, Chapter 2 attempts to disentangle the effect of reminding potential benefactors of their responsibility for the potential beneficiary and the effect of highlighting their decision power and freedom of choice on altruistic decision-making by studying the effects of two different texts on giving in a dictator game. We find that only men react positively to a text that stresses their responsibility for the recipient by giving more to her or him, whereas only women seem to react positively to a text that emphasizes their decision power and freedom of choice.
Chapter 3 focuses on the compliance with a request. In the experiment, participants are asked to provide a detailed answer to an open question. Compliance is measured by the effort participants spend on answering the question. The treatment variable is whether or not they see the text “thanks in advance.” We find that participants react negatively by putting less effort into complying with the request in response to the phrase “thanks in advance.”
Chapter 4 studies the effect of prefilled tax returns with mostly inaccurate default values on tax compliance. In a laboratory experiment, participants earn income by performing a real-effort task and must subsequently file a tax return for three consecutive rounds. In the main treatment, the tax return is prefilled with a default value, resulting from participants’ own performance in previous rounds, which varies in its relative size. The results suggest that there is no lasting effect of a default value on tax honesty, neither for relatively low nor relatively high defaults. However, participants who face a default that is lower than their true income in the first round evade significantly and substantially more taxes in this round than participants in the control treatment without a default.
Fluvial terraces, floodplains, and alluvial fans are the main landforms that store sediments and decouple hillslopes from eroding mountain rivers. Such low-relief landforms are also preferred locations for humans to settle in otherwise steep and poorly accessible terrain. Abundant water and sediment as essential resources for buildings and infrastructure make these areas attractive places to live. Yet valley floors are also prone to rare and catastrophic sedimentation that can overload river systems by abruptly increasing the volume of sediment supply, thus causing massive floodplain aggradation, lateral channel instability, and increased flooding. Some valley-fill sediments should thus record these catastrophic sediment pulses, allowing insights into their timing, magnitude, and consequences.
This thesis pursues this theme and focuses on a prominent ~150 km² valley fill in the Pokhara Valley just south of the Annapurna Massif in central Nepal. The Pokhara Valley is conspicuously broad and gentle compared to the surrounding dissected mountain terrain, and is filled with locally more than 70 m of clastic debris. The area's main river, the Seti Khola, descends from the Annapurna Sabche Cirque at 3500-4500 m asl down to 900 m asl, where it incises into this valley fill. Humans began to settle on this extensive fan surface in the 1750s, when the Trans-Himalayan trade route connected the Higher Himalayas, passing Pokhara city, with the subtropical lowlands of the Terai. High and unstable river terraces and steep gorges undermined by fast-flowing rivers with highly seasonal (monsoon-driven) discharge, a high earthquake risk, and a growing population make the Pokhara Valley an ideal place to study the recent geological and geomorphic history of its sediments and the implications for natural hazard appraisals.
The objective of this thesis is to quantify the timing, the sedimentologic and geomorphic processes, as well as the fluvial response to a series of strong sediment pulses. I report diagnostic sedimentary archives, lithofacies of the fan terraces, their geochemical provenance, radiocarbon ages, and the stratigraphic relationships between them. All these independent lines of evidence consistently show that multiple sediment pulses filled the Pokhara Valley in medieval times, most likely in connection with, if not triggered by, strong seismic ground shaking. The geomorphic and sedimentary evidence is consistent with catastrophic fluvial aggradation tied to the timing of three medieval Himalayan earthquakes in ~1100, 1255, and 1344 AD. Sediment provenance and calibrated radiocarbon ages are the key to distinguishing three individual sediment pulses, as these are not evident from their sedimentology alone. I explore various measures of adjustment and fluvial response of the river system following these massive aggradation pulses. Using proxies such as net volumetric erosion, incision and erosion rates, clast provenance on active river banks, geomorphic markers such as re-exhumed tree trunks in growth position, and knickpoint locations in tributary valleys, I estimate the response of the river network in the Pokhara Valley to earthquake disturbance over several centuries. Estimates of the volumes removed since catastrophic valley filling began require average net sediment yields of up to 4200 t km⁻² yr⁻¹, rates that are consistent with those reported for Himalayan rivers. The lithological composition of active channel-bed load differs from that of local bedrock material, confirming that rivers have adjusted by 30-50 %, depending on the tributary catchment, locally incising at rates of 160-220 mm yr⁻¹. In many tributaries of the Seti Khola, most of the contemporary river load comes from a Higher Himalayan source, excluding local hillslopes as sources. This imbalance in sediment provenance emphasizes how the medieval sediment pulses must have rapidly traversed up to 70 km downstream and invaded the lower reaches of the tributaries up to 8 km upstream, thereby blocking the local drainage and reinforcing, or locally creating, floodplain lakes still visible in the landscape today.
Understanding the formation, origin, mechanisms, and geomorphic processes of this valley fill is crucial for understanding the landscape evolution and response to catastrophic sediment pulses. Several earthquake-triggered long-runout rock-ice avalanches or catastrophic dam bursts in the Higher Himalayas are the only plausible mechanisms to explain both the geomorphic and sedimentary legacy that I document here. In any case, the Pokhara Valley was most likely hit by a cascade of extremely rare processes over some two centuries starting in the early 11th century. Nowhere in the Himalayas do we find valley fills of comparable size and equally well-documented depositional history, making the Pokhara Valley one of the most extensively dated valley fills in the Himalayas to date. Judging from the growing record of historic Himalayan earthquakes in Nepal that were traced and dated in fault trenches, this thesis shows that sedimentary archives can directly aid reconstructions and predictions of both earthquake triggers and impacts from a sedimentary-response perspective. The knowledge about the timing, evolution, and response of the Pokhara Valley and its river system to earthquake-triggered sediment pulses is important for addressing the seismic and geomorphic risk for the city of Pokhara. This thesis demonstrates how geomorphic evidence of catastrophic valley infill can help to independently verify paleoseismological fault-trench records and may prompt a re-thinking of post-seismic hazard assessments in active mountain regions.
One of the key challenges in modern Facility Management (FM) is to digitally reflect the current state of the built environment, referred to as the as-is or as-built (as opposed to as-designed) representation. While the use of Building Information Modeling (BIM) can address the issue of digital representation, the generation and maintenance of BIM data require a considerable amount of manual work and domain expertise. Another key challenge is being able to monitor the current state of the built environment, which is used to provide feedback and enhance decision making. The need for an integrated solution for all data associated with the operational life cycle of a building is becoming more pronounced as practices from Industry 4.0 are currently being evaluated and adopted for FM use. This research presents an approach for the digital representation of indoor environments in their current state within the life cycle of a given building. Such an approach requires the fusion of various sources of digital data. The key to solving such a complex issue of digital data integration, processing, and representation is the use of a Digital Twin (DT). A DT is a digital duplicate of the physical environment, its states, and its processes. A DT fuses as-designed and as-built digital representations of the built environment with as-is data, typically in the form of floorplans, point clouds, and BIMs, with additional information layers pertaining to the current and predicted states of an indoor environment or a complete building (e.g., sensor data). The design, implementation, and initial testing of prototypical DT software services for indoor environments are presented and described. These DT software services are implemented within a service-oriented paradigm, and their feasibility is demonstrated through functioning and tested key software components within prototypical Service-Oriented System (SOS) implementations.
The main outcome of this research shows that key data related to the built environment can be semantically enriched and combined to enable digital representations of indoor environments, based on the concept of a DT. Furthermore, the outcomes of this research show that digital data, related to FM and Architecture, Construction, Engineering, Owner and Occupant (AECOO) activity, can be combined, analyzed and visualized in real-time using a service-oriented approach. This has great potential to benefit decision making related to Operation and Maintenance (O&M) procedures within the scope of the post-construction life cycle stages of typical office buildings.
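The data fusion described above, static as-built information enriched with live sensor layers behind a service interface, can be caricatured in a few lines of Python (the `Room` fields and service methods are hypothetical illustrations, not the thesis prototype's API):

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    """Static as-designed/as-built data, e.g. extracted from a BIM model."""
    room_id: str
    name: str
    area_m2: float
    sensors: dict = field(default_factory=dict)  # latest reading per sensor

class DigitalTwinService:
    """Minimal in-memory twin: fuses static building data with live
    sensor readings and serves a combined current-state view."""
    def __init__(self):
        self._rooms = {}

    def register_room(self, room):
        self._rooms[room.room_id] = room

    def ingest_reading(self, room_id, sensor, value):
        self._rooms[room_id].sensors[sensor] = value

    def current_state(self, room_id):
        r = self._rooms[room_id]
        return {"room": r.name, "area_m2": r.area_m2, **r.sensors}

# Example: one office room with a temperature feed
twin = DigitalTwinService()
twin.register_room(Room("R101", "Office 101", 24.5))
twin.ingest_reading("R101", "temp_c", 21.7)
state = twin.current_state("R101")
```

In a service-oriented deployment each of these methods would sit behind its own service endpoint, but the fusion logic, static model plus live state layers, remains the same.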
Animal movement is a crucial aspect of life, influencing ecological and evolutionary processes. It plays an important role in shaping biodiversity patterns, connecting habitats and ecosystems. Anthropogenic landscape changes, such as in agricultural environments, can impede the movement of animals by affecting their ability to locate resources during recurring movements within home ranges and, on a larger scale, disrupt migration or dispersal. Inevitably, these changes in movement behavior have far-reaching consequences on the mobile link functions provided by species inhabiting such extensively altered matrix areas. In this thesis, I investigate the movement characteristics and activity patterns of the European hare (Lepus europaeus), aiming to understand their significance as a pivotal species in fragmented agricultural landscapes. I reveal intriguing results that shed light on the importance of hares for seed dispersal, the influence of personality traits on behavior and space use, the sensitivity of hares to extreme weather conditions, and the impacts of GPS collaring on mammals' activity patterns and movement behavior.
In Chapter I, I conducted a controlled feeding experiment to investigate the potential impact of hares on seed dispersal. By additionally utilizing GPS data of hares in two contrasting landscapes, I demonstrated that hares play a vital role, acting as effective mobile linkers for many plant species in small and isolated habitat patches. The analysis of seed intake and germination success revealed that distinct seed traits, such as density, surface area, and shape, profoundly affect hares' ability to disperse seeds through endozoochory. These findings highlight the interplay between hares and plant communities and thus provide valuable insights into seed dispersal mechanisms in fragmented landscapes.
By employing standardized behavioral tests in Chapter II, I revealed consistent behavioral responses among captive hares while simultaneously examining the intricate connection between personality traits and spatial patterns within wild hare populations. This analysis provides insights into the ecological interactions and dynamics within hare populations in agricultural habitats. Examining the concept of animal personality, I established a link between personality traits and hare behavior. I showed that boldness, measured through standardized tests, influences individual exploration styles, with shy and bold hares exhibiting distinct space use patterns. In addition to providing valuable insights into the role of animal personality in heterogeneous environments, my research introduced a novel approach demonstrating the feasibility of remotely assessing personality types using animal-borne sensors without additional disturbance of the focal individual.
While climate conditions severely impact the activity and, consequently, the fitness of wildlife species across the globe, in Chapter III, I uncovered the sensitivity of hares to temperature, humidity, and wind speed during their peak reproduction period. I found a strong response in activity to high temperatures above 25°C, with a particularly pronounced effect during temperature extremes of over 35°C. The non-linear relationship between temperature and activity was characterized by contrasting responses observed for day and night. These findings emphasize the vulnerability of hares to climate change and the potential consequences for their fitness and population dynamics with the ongoing rise of temperature.
Since such insights can only be obtained by capturing and tagging free-ranging animals, I assessed potential impacts and the recovery process after collar attachment in Chapter IV. For this purpose, I examined the daily distances moved and the temporally associated activity of 1451 terrestrial mammals of 42 species during their initial tracking period. The disturbance intensity and the speed of recovery varied across species, with herbivores, females, and individuals captured and collared in relatively secluded study areas with limited anthropogenic influence experiencing more pronounced disturbances.
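The daily-distance metric used above can be illustrated with a short sketch. The coordinates are made up, and summing haversine step lengths between consecutive GPS fixes is a common, simplified stand-in for the trajectory analysis actually performed in the thesis.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius 6371 km

def daily_distance_km(fixes):
    """Sum of step lengths over one day of (lat, lon) fixes."""
    return sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))

# Three fictitious fixes of one individual on one day
day1 = [(52.40, 12.95), (52.41, 12.96), (52.43, 12.94)]
print(round(daily_distance_km(day1), 2))
```

Comparing such daily distances before and after collar attachment is one way to quantify the disturbance and recovery described in Chapter IV.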
Mobile linkers are essential for maintaining biodiversity as they influence the dynamics and resilience of ecosystems. Furthermore, their ability to move through fragmented landscapes makes them a key component for restoring disturbed sites. Individual movement decisions determine the scale of mobile links, and understanding variations in space use among individuals is crucial for interpreting their functions. Climate change poses further challenges, with wildlife species expected to adjust their behavior, especially in response to high-temperature extremes, and comprehending the anthropogenic influence on animal movements will remain paramount to effective land use planning and the development of successful conservation strategies.
This thesis provides a comprehensive ecological understanding of hares in agricultural landscapes. My research findings underscore the importance of hares as mobile linkers, the influence of personality traits on behavior and spatial patterns, the vulnerability of hares to extreme weather conditions, and the immediate consequences of collar attachment on mammalian movements. Thus, I contribute valuable insights to wildlife conservation and management efforts, aiding in developing strategies to mitigate the impact of environmental changes on hare populations. Moreover, these findings enable the development of methodologies aimed at minimizing the impacts of collaring while also identifying potential biases in the data, thereby benefiting both animal welfare and the scientific integrity of localization studies.
Arctic warming has implications for the functioning of terrestrial Arctic ecosystems, the global climate and the socioeconomic systems of northern communities. A research gap exists in high-spatial-resolution monitoring and understanding of the seasonality of permafrost degradation, spring snowmelt and vegetation phenology. This thesis explores the diversity and utility of dense TerraSAR-X (TSX) X-Band time series for monitoring ice-rich riverbank erosion, snowmelt, and phenology of Arctic vegetation at long-term study sites in the central Lena Delta, Russia, and on Qikiqtaruk (Herschel Island), Canada. In this thesis, the following three research questions are addressed:
• Are TSX time series capable of monitoring the dynamics of rapid permafrost degradation in ice-rich permafrost on an intra-seasonal scale, and can these datasets, in combination with climate data, identify the climatic drivers of permafrost degradation?
• Can multi-pass and multi-polarized TSX time series adequately monitor seasonal snow cover and snowmelt in small Arctic catchments, and how do they perform compared to optical satellite data and field-based measurements?
• Do TSX time series reflect the phenology of Arctic vegetation and how does the recorded signal compare to in-situ greenness data from RGB time-lapse camera data and vegetation height from field surveys?
To answer the research questions, three years of TSX backscatter data from 2013 to 2015 for the Lena Delta study site and from 2015 to 2017 for the Qikiqtaruk study site were used in quantitative and qualitative analyses, complemented by optical satellite data and in-situ time-lapse imagery.
The dynamics of intra-seasonal ice-rich riverbank erosion in the central Lena Delta, Russia, were quantified using TSX backscatter data at 2.4 m spatial resolution in HH polarization and validated with 0.5 m spatial resolution optical satellite data and field-based time-lapse camera data. Cliff-top lines were automatically extracted from TSX intensity images using threshold-based segmentation and vectorization and combined in a geoinformation system with manually digitized cliff-top lines from the optical satellite data and rates of erosion extracted from time-lapse cameras. The results suggest that the cliff top eroded at a constant rate throughout the entire erosional season. Linear mixed models confirmed that erosion was coupled with air temperature and precipitation at an annual scale; seasonal fluctuations did not influence 22-day erosion rates. The results highlight the potential of HH-polarized X-Band backscatter data for high-temporal-resolution monitoring of rapid permafrost degradation.
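The threshold-based segmentation step can be illustrated with a deliberately crude, pure-Python analogue (not the thesis' actual processing chain): per image column, take the first pixel whose backscatter exceeds a threshold as the cliff-top position; differencing such lines between acquisition dates then yields erosion rates.

```python
def cliff_top_line(intensity, threshold):
    """For each column of a backscatter image (list of rows), return the row
    index of the first pixel at or above `threshold`, i.e. a crude cliff-top
    line, or None where no pixel exceeds it (e.g. open water)."""
    n_rows, n_cols = len(intensity), len(intensity[0])
    line = []
    for c in range(n_cols):
        top = next((r for r in range(n_rows) if intensity[r][c] >= threshold), None)
        line.append(top)
    return line

# Toy 4x4 scene: bright (high-backscatter) cliff face below dark water
scene = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.9, 0.1, 0.1],
    [0.8, 0.9, 0.9, 0.1],
    [0.8, 0.9, 0.9, 0.9],
]
print(cliff_top_line(scene, 0.5))  # [2, 1, 2, 3]
```

The real workflow additionally vectorizes these lines and compares them across TSX acquisitions in a geoinformation system.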
The distinct signature of wet snow in backscatter intensity images of TSX data was exploited to generate wet snow cover extent (SCE) maps on Qikiqtaruk at high temporal resolution. TSX SCE showed high similarity to Landsat 8-derived SCE when using cross-polarized VH data. Fractional snow cover (FSC) time series were extracted from TSX and optical SCE and compared to FSC estimates from in-situ time-lapse imagery. The TSX products showed strong agreement with the in-situ data and significantly improved the temporal resolution compared to the Landsat 8 time series. The final combined FSC time series revealed two topography-dependent snowmelt patterns that corresponded to in-situ measurements. Additionally, TSX detected snow patches later in the season than Landsat 8, underlining the advantage of TSX for the detection of old snow. The TSX-derived snow information provided insights into snowmelt dynamics on Qikiqtaruk that were previously not available.
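Wet-snow mapping from SAR commonly follows the ratio method of Nagler and Rott: the backscatter of the current scene is divided by a dry reference scene, and pixels more than about 3 dB darker are classified as wet snow. The sketch below assumes that conventional threshold; it is an illustration of the principle, not the thesis' exact algorithm.

```python
from math import log10

def wet_snow_fraction(scene, reference, threshold_db=-3.0):
    """Fraction of pixels classified as wet snow: the backscatter ratio of the
    current scene to a dry/snow-free reference falls below `threshold_db`."""
    wet = 0
    for cur, ref in zip(scene, reference):
        ratio_db = 10 * log10(cur / ref)  # ratio in decibels
        if ratio_db < threshold_db:
            wet += 1
    return wet / len(scene)

reference = [0.20, 0.20, 0.20, 0.20]   # linear backscatter, dry reference scene
scene     = [0.05, 0.08, 0.19, 0.21]   # wet snow strongly damps the backscatter
print(wet_snow_fraction(scene, reference))  # 0.5
```

Applied per acquisition, such fractions form exactly the kind of FSC time series that the thesis compares against Landsat 8 and time-lapse imagery.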
The sensitivity of TSX to vegetation structure associated with phenological changes was explored on Qikiqtaruk. Backscatter and coherence time series were compared to greenness data extracted from in-situ digital time-lapse cameras and to detailed vegetation parameters on 30 areas of interest. Supporting previous results, vegetation height corresponded to backscatter intensity in the co-polarized HH/VV channels at an incidence angle of 31°. The dry, tall-shrub-dominated ecological class showed increasing backscatter with increasing greenness when using the cross-polarized VH/HH channel at 32° incidence angle, likely driven by volume scattering from emerging and expanding leaves. Ecological classes with more prostrate vegetation and higher bare-ground contributions showed decreasing backscatter trends over the growing season in the co-polarized VV/HH channels, likely a result of surface drying rather than a vegetation-structure signal. The results from shrub-dominated areas are promising and provide a complementary data source for high-temporal-resolution monitoring of vegetation phenology.
Overall, this thesis demonstrates that dense TSX time series, optical remote sensing, and in-situ time-lapse data are complementary and can be used to monitor rapid and seasonal processes in Arctic landscapes at high spatial and temporal resolution.
Gold at the nanoscale
(2020)
In this cumulative dissertation, I want to present my contributions to the field of plasmonic nanoparticle science. Plasmonic nanoparticles are characterised by resonances of the free electron gas around the spectral range of visible light. In recent years, they have evolved as promising components for light based nanocircuits, light harvesting, nanosensors, cancer therapies, and many more.
This work exhibits the articles I authored or co-authored in my time as PhD student at the University of Potsdam. The main focus lies on the coupling between localised plasmons and excitons in organic dyes. Plasmon–exciton coupling brings light–matter coupling to the nanoscale. This size reduction is accompanied by strong enhancements of the light field which can, among others, be utilised to enhance the spectroscopic footprint of molecules down to single molecule detection, improve the efficiency of solar cells, or establish lasing on the nanoscale. When the coupling exceeds all decay channels, the system enters the strong coupling regime. In this case, hybrid light–matter modes emerge utilisable as optical switches, in quantum networks, or as thresholdless lasers. The present work investigates plasmon–exciton coupling in gold–dye core–shell geometries and contains both fundamental insights and technical novelties. It presents a technique which reveals the anticrossing in coupled systems without manipulating the particles themselves. The method is used to investigate the relation between coupling strength and particle size. Additionally, the work demonstrates that pure extinction measurements can be insufficient when trying to assess the coupling regime. Moreover, the fundamental quantum electrodynamic effect of vacuum induced saturation is introduced. This effect causes the vacuum fluctuations to diminish the polarisability of molecules and has not yet been considered in the plasmonic context.
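The anticrossing mentioned above is commonly described by a coupled-oscillator model whose hybrid (polariton) energies have a closed form. A small sketch with illustrative numbers for the plasmon energy, exciton energy, and coupling strength g:

```python
from math import sqrt

def hybrid_modes(e_plasmon, e_exciton, g):
    """Upper/lower polariton energies (eV) of two coupled oscillators:
    E_pm = (Ep + Ex)/2 +- sqrt(g^2 + ((Ep - Ex)/2)^2)"""
    mean = (e_plasmon + e_exciton) / 2
    split = sqrt(g ** 2 + ((e_plasmon - e_exciton) / 2) ** 2)
    return mean + split, mean - split

# On resonance the modes never cross; they repel by the Rabi splitting 2g
upper, lower = hybrid_modes(2.1, 2.1, g=0.05)
print(round(upper - lower, 3))  # 0.1
```

Sweeping the plasmon energy (e.g. via particle size) while keeping the exciton fixed traces out the anticrossing that the thesis extracts without manipulating the particles themselves.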
The work additionally discusses the reaction of gold nanoparticles to optical heating. Such knowledge is of great importance for all potential optical applications utilising plasmonic nanoparticles since optical excitation always generates heat. This heat can induce a change in the optical properties, but mechanical changes, up to and including melting, can also occur. Here, the change of spectra in coupled plasmon–exciton particles is discussed and explained with a precise model. Moreover, the work discusses the behaviour of gold nanotriangles exposed to optical heating. In a pump–probe measurement, X-ray probe pulses directly monitored the particles’ breathing modes. In another experiment, the triangles were exposed to cw laser radiation with varying intensities and illumination areas. X-ray diffraction directly measured the particles’ temperature. Particle melting was investigated with surface-enhanced Raman spectroscopy and SEM imaging, demonstrating that larger illumination areas can cause melting at lower intensities. An elaborate methodological and theoretical introduction precedes the articles. This way, readers without specialist knowledge also get a concise and detailed overview of the theory and methods used in the articles. I introduce localised plasmons in metal nanoparticles of different shapes. For this work, the plasmons were mostly coupled to excitons in J-aggregates. Therefore, I discuss these aggregates of organic dyes with sharp and intense resonances and establish an understanding of the coupling between the two systems. For ab initio simulations of the coupled systems, models for the systems’ permittivities are presented, too. Moreover, the route to the sample fabrication – the dye coating of gold nanoparticles, their subsequent deposition on substrates, and the covering with polyelectrolytes – is presented together with the measurement methods that were used for the articles.
On a planetary scale, human populations need to adapt to both socio-economic and environmental problems amidst rapid global change. This holds true for coupled human-environment (socio-ecological) systems in rural and urban settings alike. Two examples are drylands and urban coasts. Such socio-ecological systems have a global distribution. Therefore, advancing the knowledge base for identifying socio-ecological adaptation needs with local vulnerability assessments alone is infeasible: the systems cover vast areas, while funding, time, and human resources for local assessments are limited. They are lacking in low- and middle-income countries (LICs and MICs) in particular.
But places in a specific socio-ecological system are not only unique and complex; they also exhibit similarities. A global patchwork of local vulnerability assessments of rural drylands populations facing socio-ecological and environmental problems has already been reduced to a limited number of problem structures that typically cause vulnerability. However, the question arises whether this is also possible in urban socio-ecological systems. The question also arises whether these typologies provide added value in research beyond global change. Finally, the methodology employed for drylands needs refining and standardizing to increase its uptake in the scientific community. In this dissertation, I set out to fill these three gaps in research.
The geographical focus in my dissertation is on LICs and MICs, which generally have lower capacities to adapt, and greater adaptation needs, regarding rapid global change. Using a spatially explicit indicator-based methodology, I combine geospatial and clustering methods to identify typical configurations of key factors in case studies causing vulnerability to human populations in two specific socio-ecological systems. Then I use statistical and analytical methods to interpret and appraise both the typical configurations and the global typologies they constitute.
First, I improve the indicator-based methodology and then reanalyze typical global problem structures of socio-ecological drylands vulnerability with seven indicator datasets. The reanalysis confirms the key tenets and produces a more realistic and nuanced typology of eight spatially explicit problem structures, or vulnerability profiles: two new profiles with typically high natural resource endowment emerge, in which overpopulation has led to medium or high soil erosion. Second, I determine whether the new drylands typology and its socio-ecological vulnerability concept advance a thematically linked scientific debate in human security studies: what drives violent conflict in drylands? The typology is a much better predictor of conflict distribution and incidence in drylands than the regression models typically used in peace research. Third, I analyze global problem structures typically causing vulnerability in an urban socio-ecological system, the rapidly urbanizing coastal fringe (RUCF), with eleven indicator datasets. The RUCF also shows a robust typology, and its seven profiles show huge asymmetries in vulnerability and adaptive capacity. The fastest population increase, lowest income, most ineffective governments, most prevalent poverty, and lowest adaptive capacity are all typically stacked in two profiles in LICs. This shows that, beyond local case studies, tropical cyclones and/or coastal flooding are neither stalling rapid population growth nor urban expansion in the RUCF. I propose entry points for scaling up successful vulnerability reduction strategies in coastal cities within the same vulnerability profile.
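The combination of indicator data and clustering into typical profiles can be sketched with a minimal k-means implementation on toy indicator vectors; the data, initialization, and choice of k are illustrative only, not the dissertation's calibrated setup.

```python
from math import dist  # Euclidean distance, Python 3.8+

def kmeans(points, k, iters=50):
    """Tiny k-means: groups indicator vectors into k typical profiles."""
    centroids = points[:k]  # naive initialization: first k points
    clusters = []
    for _ in range(iters):
        # Assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: dist(p, centroids[i]))].append(p)
        # Recompute centroids as cluster means (keep old centroid if empty)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Toy normalized indicator vectors (e.g. poverty rate, government effectiveness)
data = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15), (0.1, 0.9), (0.2, 0.8)]
centroids, clusters = kmeans(data, k=2)
print(sorted(len(c) for c in clusters))  # [2, 3]
```

Each resulting centroid plays the role of one "vulnerability profile": a typical configuration of indicator values shared by many places.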
This dissertation shows that patchworks of local vulnerability assessments can be generalized to structure global socio-ecological vulnerabilities in both rural and urban socio-ecological systems according to typical problems. In terms of climate-related extreme events in the RUCF, conflicting problem structures and means to deal with them are threatening to widen the development gap between LICs and high-income countries unless successful vulnerability reduction measures are comprehensively scaled up. The explanatory power for human security in drylands warrants further applications of the methodology beyond global environmental change research in the future. Thus, analyzing spatially explicit global typologies of socio-ecological vulnerability is a useful complement to local assessments: The typologies provide entry points for where to consider which generic measures to reduce typical problem structures – including the countless places without local assessments. This can save limited time and financial resources for adaptation under rapid global change.
The current generation of ground-based instruments has rapidly extended the limits of the range accessible to us with very-high-energy (VHE) gamma-rays, and more than a hundred sources have now been detected in the Milky Way. These sources represent only the tip of the iceberg, but their number has reached a level that allows population studies. In this work, a model of the global population of VHE gamma-ray sources based on the most comprehensive census of Galactic sources in this energy regime, the H.E.S.S. Galactic plane survey (HGPS), is presented. A population synthesis approach was followed in the construction of the model. Particular attention was paid to correcting for the strong observational bias inherent in the sample of detected sources. The methods developed for estimating the model parameters have been validated with extensive Monte Carlo simulations and are shown to provide unbiased estimates. With these methods, five models for different spatial distributions of sources have been constructed. To test their validity, the models' predictions for the composition of sources within the sensitivity range of the HGPS are compared with the observed sample. With one exception, similar results are obtained for all spatial distributions, showing that the modelled longitude profile and the source distribution over photon flux are in fair agreement with observation. Regarding the latitude profile and the source distribution over angular extent, it becomes apparent that the model needs further adjustment to bring its predictions into agreement with observation. Based on the model, predictions of the global properties of the Galactic population of VHE gamma-ray sources and the prospects of the Cherenkov Telescope Array (CTA) are presented.
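The population-synthesis approach and the flux-limited observational bias can be sketched with a toy Monte Carlo: draw source luminosities and distances, compute fluxes, and apply a survey sensitivity cut. All distributions and numbers below are illustrative assumptions, not the HGPS model.

```python
import random
from math import pi

random.seed(1)

def synthesize(n_sources, l_min=1.0, l_max=100.0, d_max=20.0):
    """Draw a toy population: luminosities from a power law (index -2) via
    inverse-CDF sampling, distances uniform in volume (p(d) ~ d^2);
    all quantities in arbitrary units."""
    pop = []
    for _ in range(n_sources):
        u = random.random()
        lum = l_min * l_max / (l_max - u * (l_max - l_min))
        dist = d_max * random.random() ** (1 / 3)
        pop.append((lum, dist))
    return pop

def detected_fraction(pop, flux_limit):
    """Apply a survey sensitivity cut: flux = L / (4 pi d^2)."""
    n_det = sum(1 for lum, d in pop if lum / (4 * pi * d ** 2) >= flux_limit)
    return n_det / len(pop)

pop = synthesize(10_000)
# Most of the simulated population falls below the survey's flux limit,
# which is exactly the bias a population model must correct for:
print(detected_fraction(pop, flux_limit=0.05) < 0.5)
```

Fitting the model then amounts to adjusting the population parameters until the *detected* subsample of the simulation matches the observed HGPS sample.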
CTA will significantly increase our knowledge of VHE gamma-ray sources by lowering the threshold for source detection, primarily through a larger detection area compared to current-generation instruments. In ground-based gamma-ray astronomy, the sensitivity of an instrument depends strongly, in addition to the detection area, on the ability to distinguish images of air showers produced by gamma-rays from those produced by cosmic rays, which form a strong background. This means that the number of detectable sources depends on the background-rejection algorithm used and may therefore be increased by improving the performance of such algorithms. In this context, in addition to the population model, this work presents a study on the application of deep-learning techniques to the task of gamma-hadron separation in the analysis of data from ground-based gamma-ray instruments. Based on a systematic survey of different neural-network architectures, it is shown that robust classifiers can be constructed with competitive performance compared to the best existing algorithms. Despite the broad coverage of neural-network architectures discussed, only part of the potential offered by the application of deep-learning techniques to the analysis of gamma-ray data is exploited in the context of this study. Nevertheless, it provides an important basis for further research on this topic.
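The thesis applies deep neural networks; as a self-contained stand-in, the sketch below trains a plain logistic-regression separator on two synthetic, Hillas-style image parameters (width and length), assuming only that gamma-ray showers produce narrower, shorter camera images than hadronic ones. It is a toy illustration of the classification task, not the thesis' architecture.

```python
import random
from math import exp

random.seed(0)

def sigmoid(z):
    # Numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + exp(-z))
    ez = exp(z)
    return ez / (1.0 + ez)

# Synthetic shower images: gammas are narrower/shorter than hadrons (label 1 vs 0)
gammas  = [(random.gauss(0.10, 0.02), random.gauss(0.30, 0.05), 1) for _ in range(200)]
hadrons = [(random.gauss(0.25, 0.05), random.gauss(0.55, 0.10), 0) for _ in range(200)]
data = gammas + hadrons
random.shuffle(data)

# Logistic regression trained by plain stochastic gradient descent on the log-loss
w = [0.0, 0.0]; b = 0.0; lr = 0.5
for _ in range(100):
    for width, length, y in data:
        p = sigmoid(w[0] * width + w[1] * length + b)
        g = p - y                        # gradient factor of the log-loss
        w[0] -= lr * g * width
        w[1] -= lr * g * length
        b    -= lr * g

correct = sum(
    (sigmoid(w[0] * x + w[1] * l + b) > 0.5) == (y == 1) for x, l, y in data
)
print(correct / len(data) > 0.9)  # well-separated toy classes: high accuracy
```

Real gamma-hadron separation works on full camera images rather than two summary parameters, which is precisely where convolutional networks gain over such linear baselines.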
The Semantic Web provides information contained in the World Wide Web as machine-readable facts. In comparison to a keyword-based inquiry, semantic search enables a more sophisticated exploration of web documents. By clarifying the meaning behind entities, search results are more precise and the semantics simultaneously enable an exploration of semantic relationships. However, unlike keyword searches, a semantic entity-focused search requires that web documents are annotated with semantic representations of common words and named entities. Manual semantic annotation of (web) documents is time-consuming; in response, automatic annotation services have emerged in recent years. These annotation services take continuous text as input, detect important key terms and named entities and annotate them with semantic entities contained in widely used semantic knowledge bases, such as Freebase or DBpedia. Metadata of video documents require special attention. Semantic analysis approaches for continuous text cannot be applied, because information of a context in video documents originates from multiple sources possessing different reliabilities and characteristics. This thesis presents a semantic analysis approach consisting of a context model and a disambiguation algorithm for video metadata. The context model takes into account the characteristics of video metadata and derives a confidence value for each metadata item. The confidence value represents the level of correctness and ambiguity of the textual information of the metadata item. The lower the ambiguity and the higher the prospective correctness, the higher the confidence value. The metadata items derived from the video metadata are analyzed in a specific order from high to low confidence level. Previously analyzed metadata are used as reference points in the context for subsequent disambiguation. 
The contextually most relevant entity is identified by means of descriptive texts and semantic relationships to the context. The context is created dynamically for each metadata item, taking into account the confidence value and other characteristics. The proposed semantic analysis follows two hypotheses: metadata items of a context should be processed in descendent order of their confidence value, and the metadata that pertains to a context should be limited by content-based segmentation boundaries. The evaluation results support the proposed hypotheses and show increased recall and precision for annotated entities, especially for metadata that originates from sources with low reliability. The algorithms have been evaluated against several state-of-the-art annotation approaches. The presented semantic analysis process is integrated into a video analysis framework and has been successfully applied in several projects for the purpose of semantic video exploration of videos.
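The confidence-ordered disambiguation can be sketched as follows. The tiny knowledge base and the overlap-counting relevance score are illustrative assumptions, not the thesis' actual scoring function.

```python
def disambiguate(metadata, knowledge_base):
    """Process metadata items from high to low confidence; earlier, more
    trustworthy items form the context used to rank candidate entities
    for the later, noisier items."""
    context = set()
    result = {}
    for item in sorted(metadata, key=lambda m: m["confidence"], reverse=True):
        candidates = knowledge_base.get(item["text"], [])
        if not candidates:
            continue
        # Pick the candidate sharing the most semantic links with the context
        best = max(candidates, key=lambda c: len(c["related"] & context))
        result[item["text"]] = best["entity"]
        context.add(best["entity"])
        context |= best["related"]
    return result

# Hypothetical mini knowledge base with ambiguous and unambiguous terms
kb = {
    "Jaguar": [
        {"entity": "Jaguar_(animal)", "related": {"Panthera", "Amazon"}},
        {"entity": "Jaguar_Cars",     "related": {"Automobile", "Coventry"}},
    ],
    "Coventry": [{"entity": "Coventry", "related": {"England", "Automobile"}}],
}
items = [
    {"text": "Coventry", "confidence": 0.9},  # reliable title metadata
    {"text": "Jaguar",   "confidence": 0.4},  # noisy user-generated tag
]
print(disambiguate(items, kb)["Jaguar"])  # Jaguar_Cars
```

Because the reliable item is resolved first, its entity and relations tip the ambiguous tag toward the contextually fitting reading, mirroring the high-to-low confidence ordering hypothesis.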
The past decades are characterized by various efforts to provide complete sequence information of genomes for various organisms. The availability of full genome data triggered the development of multiplex high-throughput assays allowing simultaneous measurement of transcripts, proteins and metabolites. With genome information and profiling technologies now in hand, a highly parallel experimental biology offers opportunities to explore and discover novel principles governing biological systems. Understanding biological complexity through modelling cellular systems represents the driving force which today allows shifting from a component-centric focus to integrative and systems-level investigations. The emerging field of systems biology integrates discovery and hypothesis-driven science to provide comprehensive knowledge via computational models of biological systems. Within the context of evolving systems biology, large-scale computational analyses of transcript co-response data were carried out for selected prokaryotic and plant model organisms. CSB.DB - a comprehensive systems-biology database - (http://csbdb.mpimp-golm.mpg.de/) was initiated to provide public and open access to the results of biostatistical analyses in conjunction with additional biological knowledge. The database tool CSB.DB enables potential users to infer hypotheses about the functional interrelation of genes of interest and may serve as a future basis for more sophisticated means of elucidating gene function. The co-response concept and the CSB.DB database tool were successfully applied to predict operons in Escherichia coli by using the chromosomal distance and transcriptional co-responses. Moreover, examples were shown which indicate that transcriptional co-response analysis allows the identification of differential promoter activities under different experimental conditions.
The co-response concept was successfully transferred to complex organisms with a focus on the eukaryotic plant model organism Arabidopsis thaliana. The investigations enabled the discovery of novel genes involved in particular physiological processes and, beyond that, allowed annotation of gene functions that cannot be accessed by sequence homology. GMD - the Golm Metabolome Database - was initiated and implemented in CSB.DB to integrate metabolite information and metabolite profiles. This novel module will allow addressing complex biological questions regarding transcriptional interrelation and extend the recent systems-level quest towards phenotyping.
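The co-response idea behind the operon prediction can be sketched with a Pearson correlation on toy expression profiles; the 0.8 correlation cutoff and the 50 bp gap limit below are illustrative parameters, not the thesis' calibrated values.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

def same_operon(expr_a, expr_b, gap_bp, r_min=0.8, max_gap=50):
    """Call two adjacent same-strand genes co-operonic if their transcript
    profiles co-respond strongly and the intergenic gap is short."""
    return pearson(expr_a, expr_b) >= r_min and gap_bp <= max_gap

# Toy expression profiles across six experimental conditions
gene_a = [1.0, 2.1, 3.0, 1.5, 0.5, 2.8]
gene_b = [1.1, 2.0, 3.2, 1.4, 0.6, 2.9]   # tracks gene_a closely
gene_c = [3.0, 0.5, 1.0, 2.5, 3.1, 0.4]   # unrelated profile
print(same_operon(gene_a, gene_b, gap_bp=20))  # True
print(same_operon(gene_a, gene_c, gap_bp=20))  # False
```

Combining such a co-response score with chromosomal distance is the essence of the E. coli operon prediction described above.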
Despite remarkable progress made in the past century, which has revolutionized our understanding of the universe, there are numerous open questions left in theoretical physics. Particularly important is the fact that the theories describing the fundamental interactions of nature are incompatible. Einstein's theory of general relativity describes gravity as a dynamical spacetime, which is curved by matter and whose curvature determines the motion of matter. On the other hand we have quantum field theory, in the form of the standard model of particle physics, where particles interact via the remaining interactions (electromagnetic, weak and strong) on a flat, static spacetime without gravity. A theory of quantum gravity is hoped to cure this incompatibility by heuristically replacing classical spacetime by 'quantum spacetime'. Several approaches attempt to define such a theory, with differing underlying premises and ideas, and it is not clear which is to be preferred. Yet a minimal requirement is compatibility with the classical theory they attempt to generalize. Interestingly, many of these models rely on discrete structures in their definition or postulate discreteness of spacetime to be fundamental. Besides the direct advantages discretisations provide, e.g. permitting numerical simulations, they come with serious caveats requiring thorough investigation: in general, discretisations break the fundamental diffeomorphism symmetry of gravity and are generically not unique. Both complicate establishing the connection to the classical continuum theory. The main focus of this thesis lies in the investigation of this relation for spin foam models. This is done on different levels of the discretisation / triangulation, ranging from few simplices up to the continuum limit. In the regime of very few simplices we confirm and deepen the connection of spin foam models to discrete gravity. Moreover, we discuss dynamical principles, e.g. diffeomorphism invariance in the discrete, to fix the ambiguities of the models. In order to satisfy these conditions, the discrete models have to be improved in a renormalisation procedure, which also allows us to study their continuum dynamics. Applied to simplified spin foam models, we uncover a rich, non-trivial fixed-point structure, which we summarize in a phase diagram. Inspired by these methods, we propose a way to consistently construct the continuum theory, which comes with a unique vacuum state.
This work introduces concepts and corresponding tool support to enable a complementary approach in dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid tedious work, literature recommends keeping away from unexpected recovery demands by following a structured and disciplined approach, which consists of the application of various best practices including working only on one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools has been accompanied by regular performance and usability tests.
In addition, this work investigates whether the proposed tools affect programmers’ performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated measurement setup, the study examined the effect of providing CoExist on programming performance. The analysis of 88 hours of programming suggests that built-in recovery support as provided with CoExist has a positive effect on programming performance in explorative programming tasks.
Active continental margins are affected by complex feedbacks between tectonic, climatic and surface processes, the intricate relations of which are still a matter of discussion. The Chilean convergent margin, forming the outstanding Andean subduction orogen, constitutes an ideal natural laboratory for the investigation of climate, tectonics and their interactions. In order to study both processes, I examined marine and lacustrine sediments from different depositional environments on- and offshore the south-central Chilean coast (38-40°S). I combined sedimentological, geochemical and isotopic analyses to identify climatic and tectonic signals within the sedimentary records. The investigation of marine trench sediments (ODP Site 1232, SONNE core 50SL) focused on frequency changes of turbiditic event layers since the late Pleistocene. In the active margin setting of south-central Chile, these layers were considered to reflect periodically occurring earthquakes and to constitute an archive of the regional paleoseismicity. The new results indicate glacial-interglacial changes in turbidite frequencies during the last 140 kyr, with short recurrence times (~200 years) during glacial and long recurrence times (~1000 years) during interglacial periods. Hence, the generation of turbidites appears to be strongly influenced by climate and sea-level changes, which control the amount of sediment delivered to the shelf edge and thereby the stability of the continental slope: more stable slope conditions during interglacial periods entail lower turbidite frequencies than in glacial periods. Since glacial turbidite recurrence times are congruent with earthquake recurrence times derived from the historical record and other paleoseismic archives of the region, I concluded that only during cold stages did sediment availability and slope instability enable the complete series of large earthquakes to be recorded.
The sediment transport to the shelf region is not only driven by climate conditions but also influenced by local forearc tectonics. Accelerating uplift rates along major tectonic structures caused drainage anomalies and river flow reversals, which substantially altered the sediment supply to the Pacific Ocean. Two examples of the tectonic blocking of fluvial systems are the coastal lakes Lago Lanalhue and Lago Lleu Lleu. Both lakes developed within former river valleys, which once discharged towards the Pacific and were dammed by tectonically uplifted sills at ~8000 yr BP. Analyses of sediment cores from the lakes showed similar successions of marine/brackish deposits at the bottom, covered by lacustrine sediments on top. Dating the transitions between these units and comparing them with global sea-level curves allowed me to calculate local Holocene uplift rates, which are distinctly higher for the upraised sills (Lanalhue: 8.83 ± 2.7 mm/yr, Lleu Lleu: 11.36 ± 1.77 mm/yr) than for the lake basins (Lanalhue: 0.42 ± 0.71 mm/yr, Lleu Lleu: 0.49 ± 0.44 mm/yr). I hence considered the sills to be the surface expression of a blind thrust associated with a prominent reverse fault that controls regional uplift and folding. After the final separation of Lago Lanalhue and Lago Lleu Lleu from the Pacific, constant deposition of lacustrine sediments preserved continuous records of local environmental changes. Sequences from both lakes indicate a long-term climate trend with a significant shift from more arid conditions during the Mid-Holocene (8000 – 4200 cal yr BP) to more humid conditions during the Late Holocene (4200 cal yr BP – present). This trend is consistent with other regional paleoclimatic data and is interpreted to reflect changes in the strength/position of the Southern Westerly Winds. Since ~5000 yr BP, the sediments of Lago Lleu Lleu have been marked by numerous intercalated detrital layers that recur with a mean interval of ~210 years.
Deposition of these layers may be triggered by local tectonics (e.g. earthquakes), but may also originate from changes in the local climate (e.g. onset of modern ENSO conditions). During the last 2000 years, pronounced variations in the terrigenous sediment supply to both lakes suggest important hydrological changes on the centennial time scale as well. A lower input of terrigenous matter points to less humid phases between 200 cal yr B.C. - 150 cal yr A.D., 900 - 1350 cal yr A.D. and 1850 cal yr A.D. to present (broadly corresponding to the Roman, Medieval, and Modern Warm Periods). More humid periods persisted from 150 - 900 cal yr A.D. and 1350 - 1850 cal yr A.D. (broadly corresponding to the Dark Ages and the Little Ice Age). In conclusion, the combined investigation of marine and lacustrine sediments is a viable approach for the reconstruction of climatic and tectonic processes on different time scales. My approach allows the exploration of both climate and tectonics in one and the same archive, and is largely transferable to other active margins worldwide.
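The Holocene uplift rates above follow from a simple relation: the total uplift of a dated marine-to-lacustrine transition is the difference between its present elevation and the eustatic sea level at the time it formed, divided by its age. A minimal sketch, with hypothetical input numbers chosen only to be of the same magnitude as the sill rates reported (not the thesis's actual data):

```python
def uplift_rate_mm_per_yr(present_elev_m, paleo_sea_level_m, age_yr_bp):
    """Mean uplift rate since deposition of a dated marine/brackish-to-
    lacustrine transition.

    present_elev_m    : elevation of the dated transition today (m a.s.l.)
    paleo_sea_level_m : eustatic sea level at that age (m, negative = below
                        present), read off a global sea-level curve
    age_yr_bp         : calibrated age of the transition (yr BP)
    """
    # Total uplift is the height difference between where the transition
    # formed (at paleo sea level) and where it sits today.
    uplift_m = present_elev_m - paleo_sea_level_m
    return uplift_m / age_yr_bp * 1000.0  # m/yr -> mm/yr

# Hypothetical example: a sill transition now at 70 m a.s.l., dated to
# ~8000 yr BP, when eustatic sea level was roughly 10 m below present.
rate = uplift_rate_mm_per_yr(70.0, -10.0, 8000.0)  # -> 10.0 mm/yr
```

In practice the uncertainties quoted in the abstract would propagate from the dating error, the elevation survey, and the spread between sea-level curves.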
In the interest of producing functional catalysts from sustainable building blocks, 1,3-dicarboxylate imidazolium salts derived from amino acids were successfully modified to serve as N-heterocyclic carbene (NHC) ligands within metal complexes. Complexes of Ag(I), Pd(II), and Ir(I) were produced via known procedures from ligands derived from glycine, alanine, β-alanine and phenylalanine. The complexes were characterized in the solid state using X-ray crystallography, which allowed for the steric and electronic comparison of these ligands to well-known NHC ligands within analogous metal complexes.
The palladium complexes were tested as catalysts for aqueous-phase Suzuki-Miyaura cross-coupling. Water solubility could be induced via ester hydrolysis of the N-bound groups in the presence of base. The mono-NHC–Pd complexes proved highly active in the coupling of aryl bromides with phenylboronic acid; the active species was determined to consist mostly of Pd(0) nanoparticles. Kinetic studies showed that the reaction proceeds quickly in the coupling of bromoacetophenone, whether the catalyst was dissolved after pre-hydrolysis or hydrolyzed in situ. The catalyst could also be recycled for an extra run by simply re-using the aqueous layer.
The imidazolium salts were also used to produce organosilica hybrid materials. This was attempted via two methods: by post-grafting onto a commercial organosilica and by co-condensation of the corresponding organosilane. The co-condensation technique holds potential for the production of solid-supported catalysts.
Salt deposits offer a variety of usage types. These include the mining of rock salt and potash salt as important raw materials, the storage of energy in man-made underground caverns, and the disposal of hazardous substances in former mines. The most serious risk with any of these usage types arises from contact with groundwater or surface water, which causes uncontrolled dissolution of the salt rock and, in the worst case, can result in the flooding or collapse of underground facilities. Especially along potash seams, cavernous structures can spread quickly, because potash salts show a much higher solubility than rock salt. However, as their chemical behavior is quite complex, previous models do not account for these highly soluble interlayers. The objective of the present thesis is therefore to describe the evolution of cavernous structures along potash seams in space and time in order to improve hazard mitigation during the utilization of salt deposits.
The formation of cavernous structures represents an interplay of chemical and hydraulic processes. Hence, the first step is to systematically investigate the dissolution and precipitation reactions that occur when water and potash salt come into contact. For this purpose, a geochemical reaction model is used. The results show that the minerals are only partially dissolved, resulting in a porous, sponge-like structure. As the saturation of the solution increases, various secondary minerals are formed, whose number and type depend on the original rock composition. Field data confirm a correlation between the degree of saturation and the distance from the center of the cavern, where solution enters. Subsequently, the reaction model is coupled with a flow and transport code and supplemented by a novel approach called ‘interchange’. The latter enables the exchange of solution and rock between areas of different porosity and mineralogy, and thus ultimately the growth of the cavernous structure. By means of several scenario analyses, cavern shape, growth rate and mineralogy are systematically investigated, also taking heterogeneous potash seams into account. The results show that essentially four different cases can be distinguished, with mixed forms being a frequent occurrence in nature. The classification scheme is based on the dimensionless Péclet and Damköhler numbers and allows for a first assessment of the hazard potential. In the future, the model can be applied to any field case, using measurement data for calibration.
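A Péclet/Damköhler classification of the kind mentioned above can be sketched as a two-by-two scheme: Pe compares advective to diffusive transport, and Da compares the reaction rate to the transport rate. The thresholds, labels, and example values below are hypothetical illustrations; the abstract does not give the thesis's actual class boundaries.

```python
def peclet(velocity, length, diffusivity):
    """Pe = advective transport rate / diffusive transport rate."""
    return velocity * length / diffusivity

def damkoehler(reaction_rate, velocity, length):
    """Da (first kind) = reaction rate / advective transport rate."""
    return reaction_rate * length / velocity

def cavern_regime(pe, da, threshold=1.0):
    """Hypothetical two-by-two classification of dissolution regimes.

    Mixed forms, as noted in the abstract, would sit near the
    threshold in either dimension.
    """
    transport = "advection-dominated" if pe > threshold else "diffusion-dominated"
    kinetics = "reaction-limited" if da < threshold else "transport-limited"
    return transport, kinetics

# e.g. slow seepage (1e-6 m/s) over a 10 m seam with fast dissolution:
regime = cavern_regime(peclet(1e-6, 10.0, 1e-9),
                       damkoehler(1e-4, 1e-6, 10.0))
```

In this illustrative case the flow is advection-dominated and the dissolution is transport-limited, i.e. cavern growth would be governed by how fast fresh solution is delivered rather than by reaction kinetics.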
The presented research work provides a reactive transport model that is able, for the first time, to spatially and temporally characterize the propagation of cavernous structures along potash seams. Furthermore, it allows the thickness and composition of transition zones between the cavern center and unaffected salt rock to be determined. The latter is particularly important in potash mining, so that natural cavernous structures can be located at an early stage and the risk of mine flooding can thus be reduced. The models may also contribute to improved hazard prevention in the construction of storage caverns and the disposal of hazardous waste in salt deposits. Predictions regarding the characteristics and evolution of cavernous structures enable a better assessment of potential hazards, such as loss of integrity or stability, as well as of suitable mitigation measures.
The terrestrial biosphere has a considerable impact on the global carbon cycle. In particular, ecosystems help offset anthropogenic fossil fuel emissions and hence decelerate the rise of the atmospheric CO₂ concentration. However, the future net sink strength of an ecosystem will heavily depend on the response of the individual processes to a changing climate. Understanding the makeup of these processes and their interaction with the environment is therefore of major importance for developing long-term climate mitigation strategies. Mathematical models are used to predict the fate of carbon in the soil-plant-atmosphere system under changing environmental conditions. However, the underlying processes giving rise to the net carbon balance of an ecosystem are complex and not entirely understood at the canopy level. Therefore, carbon exchange models are characterised by considerable uncertainty, rendering model-based predictions of the future prone to error. Observations of the carbon exchange at the canopy scale can help identify the dominant processes and hence contribute to reducing the uncertainty associated with model-based predictions. For this reason, a global network of measurement sites has been established that provides long-term observations of the CO₂ exchange between a canopy and the atmosphere along with micrometeorological conditions. These time series, however, suffer from observation uncertainty that, if not characterised, limits their use in ecosystem studies. The general objective of this work is to develop a modelling methodology that synthesises physical process understanding with the information content of canopy-scale data in an attempt to overcome the limitations of both carbon exchange models and observations. Similar hybrid modelling approaches have been successfully applied for signal extraction from noisy time series in environmental engineering.
Here, simple process descriptions are used to identify relationships between the carbon exchange and environmental drivers from noisy data. The functional form of these relationships is not prescribed a priori but rather determined directly from the data, ensuring that the model complexity is commensurate with the observations. This data-led analysis therefore results in the identification of the processes dominating carbon exchange at the ecosystem scale as reflected in the data. The description of these processes may then lead to robust carbon exchange models that contribute to a faithful prediction of the ecosystem carbon balance. This work presents a number of studies that make use of the developed data-led modelling approach for the analysis and interpretation of net canopy CO₂ flux observations. Given the limited knowledge about the underlying real system, the evaluation of the derived models with synthetic canopy exchange data is introduced as a standard procedure prior to any application to real data. The derived data-led models prove successful in several different applications. First, the data-based nature of the presented methods makes them particularly useful for replacing missing data in the observed time series. The resulting interpolated CO₂ flux observation series can then be analysed with dynamic modelling techniques, or integrated to series of coarser temporal resolution for further use, e.g. in model evaluation exercises. However, the noise component in these observations interferes with deterministic flux integration, in particular when long time periods are considered. Therefore, a method to characterise the uncertainties in the flux observations using a semi-parametric stochastic model is introduced in a second study. As a result, an (uncertain) estimate of the annual net carbon exchange of the observed ecosystem can be inferred directly from a statistically consistent integration of the noisy data.
For the forest measurement sites analysed, the relative uncertainty of the annual sum did not exceed 11 percent, highlighting the value of the data. Based on the same models, a disaggregation of the net CO₂ flux into carbon assimilation and respiration is presented in a third study, which allows for the estimation of annual ecosystem carbon uptake and release. These two components can then be further analysed for their separate responses to environmental conditions. Finally, a fourth study demonstrates how the results from data-led analyses can be turned into a simple parametric model that is able to predict the carbon exchange of forest ecosystems. Given the global network of measurements available, the derived model can now be tested for generality and transferability to other biomes. In summary, this work highlights the potential of the presented data-led methodologies to identify and describe the dominant carbon exchange processes at the canopy level, contributing to a better understanding of ecosystem functioning.
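The idea of a statistically consistent integration of noisy fluxes can be illustrated with a Monte-Carlo sketch: each integration is repeated with noise drawn from the observation-error model, and the spread of the resulting sums yields the uncertainty of the annual total. This is a simplified stand-in, assuming independent Gaussian noise with a known standard deviation, whereas the thesis's semi-parametric stochastic model characterises the noise from the data itself.

```python
import random

def annual_sum_with_uncertainty(fluxes, sigma, n_draws=2000, seed=0):
    """Monte-Carlo integration of a noisy flux series.

    fluxes : gap-filled flux values, one per measurement interval
    sigma  : assumed per-observation noise standard deviation
    Returns (mean annual sum, standard deviation of the sum).
    """
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        # Perturb every observation by its assumed noise and re-integrate.
        draws.append(sum(f + rng.gauss(0.0, sigma) for f in fluxes))
    mean = sum(draws) / n_draws
    var = sum((d - mean) ** 2 for d in draws) / (n_draws - 1)
    return mean, var ** 0.5
```

For independent noise the uncertainty of the sum grows only with the square root of the number of observations, which is why even long, noisy series can yield annual totals with modest relative uncertainty, as reported above.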
Lifelong learning plays an increasingly important role in many societies. Technology is changing faster than ever, and what is important to learn today may be obsolete tomorrow. The role of informal programs is becoming increasingly important. In particular, Massive Open Online Courses have become popular among learners and instructors. In 2008, a group of Canadian education enthusiasts started the first Massive Open Online Courses, or MOOCs, to prove their cognitive theory of Connectivism. Around 2012, a variety of American start-ups redefined the concept of MOOCs. Instead of following the connectivist doctrine, they returned to a more traditional approach. They focussed on video lecturing and combined this with a course forum that allowed the participants to discuss with each other and the teaching team. While this new version of the concept was enormously successful in terms of massiveness—hundreds of thousands of participants from all over the world joined the first of these courses—many educators criticized the relapse to the cognitivist model. In the early days, the evolving platforms often offered little more than a video player, simple multiple-choice quizzes, and the course forum. Scaling more modern approaches to learning and teaching to the massiveness of these courses soon became a major research interest. Hands-on exercises, alternative forms of assessment, collaboration, and teamwork are some of the topics on the agenda. The insights provided by cognitive and pedagogical theories, however, do not necessarily run in sync with the needs and preferences of the majority of participants. While the former promote action learning, hands-on learning, competence-based learning, project-based learning, and team-based learning as the holy grail, many of the latter often prefer a more laid-back style of learning, sometimes referred to as edutainment.
Obviously, given the large numbers of participants in these courses, there is not just one type of learner. Participants are not a homogeneous mass but a potpourri of individuals with a wildly heterogeneous mix of backgrounds, previous knowledge, familial and professional circumstances, countries of origin, gender, age, and so on. For the majority of participants, a full-time job and/or a family often simply does not leave enough room for more time-intensive tasks, such as practical exercises or teamwork. Others, however, particularly enjoy these hands-on or collaborative aspects of MOOCs. Furthermore, many subjects require these possibilities and simply cannot be taught or learned in courses that lack collaborative or hands-on features. In this context, the thesis discusses how team assignments have been implemented on the HPI MOOC platform. In recent years, several experiments have been conducted and a great deal of experience has been gained by employing team assignments in courses in areas such as Object-Oriented Programming, Design Thinking, and Business Innovation on various instances of this platform: openHPI, openSAP, and mooc.house.
Anthropogenic activities such as continuous landscape changes threaten biodiversity at both local and regional scales. Metacommunity models attempt to combine these two scales and continuously contribute to a better mechanistic understanding of how spatial processes and constraints, such as fragmentation, affect biodiversity. There is a strong consensus that such structural changes of the landscape tend to negatively affect the stability of metacommunities. However, the interplay of complex trophic communities and landscape structure in particular is not yet fully understood.
In the present dissertation, a metacommunity approach is used, based on a dynamic and spatially explicit model that integrates population dynamics at the local scale and dispersal dynamics at the regional scale. This approach allows the assessment of the effects of complex spatial landscape components, such as habitat clustering, on complex species communities, as well as the analysis of the population dynamics of a single species. In addition to the impact of a fixed landscape structure, periodic environmental disturbances are also considered, in which a periodic change of habitat availability temporally alters landscape structure, such as the seasonal drying of a water body.
On the local scale, the model results suggest that large-bodied animal species, such as predator species at high trophic positions, are more prone to extinction under strong patch isolation than smaller species at lower trophic levels.
Increased metabolic losses for species with a lower body mass lead to increased energy limitation for species on higher trophic levels and serve as an explanation for a predominant loss of these species. This effect is particularly pronounced for food webs in which species are more sensitive to increased metabolic losses through dispersal and a change in landscape structure.
In addition to the impact of the species composition of a food web on diversity, the strength of local foraging interactions likewise affects the synchronization of population dynamics. Reduced predation pressure leads to more asynchronous population dynamics, which is beneficial for the stability of population dynamics as it reduces the risk of correlated extinction events among habitats. On the regional scale, two landscape aspects, namely the mean patch isolation and the formation of local clusters of two patches, promote an increase in β-diversity. Yet, the individual composition and robustness of the local species community equally explain a large proportion of the observed diversity patterns.
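The degree of synchrony among patches can be quantified in a standard way as the mean pairwise Pearson correlation of the population time series; values near 1 indicate synchronous dynamics (and thus a higher risk of correlated extinctions), values near 0 or below indicate asynchrony. A minimal sketch of this common metric (the abstract does not state which synchrony measure the thesis uses):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def mean_pairwise_synchrony(patch_series):
    """Average correlation over all pairs of patch population series."""
    pairs = [(i, j) for i in range(len(patch_series))
             for j in range(i + 1, len(patch_series))]
    return sum(pearson(patch_series[i], patch_series[j])
               for i, j in pairs) / len(pairs)

# Two patches with mirror-image dynamics are perfectly asynchronous:
s = mean_pairwise_synchrony([[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])  # -> -1.0
```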
A combination of periodic environmental disturbance and patch isolation has a particular impact on the population dynamics of a species. While the periodic disturbance has a synchronizing effect, it can even override emerging asynchronous dynamics under strong patch isolation and unify trends in synchronization between different species communities.
In summary, the findings underline a large local impact of species composition and interactions on the local diversity patterns of a metacommunity. In comparison, landscape structures such as fragmentation have a negligible effect on local diversity patterns but a greater impact on regional diversity patterns. In contrast, at the level of population dynamics, regional characteristics such as periodic environmental disturbance and patch isolation have a particularly strong impact and contribute substantially to the understanding of the stability of population dynamics in a metacommunity. These studies demonstrate once again the complexity of our ecosystems and the need for further analysis for a better understanding of our surrounding environment and more targeted conservation of biodiversity.
In the present thesis, AC electrokinetic forces such as dielectrophoresis and AC electroosmosis were demonstrated to provide a simple and fast method to functionalize the surface of nanoelectrodes with submicrometer-sized biological objects. These nanoelectrodes have a cylindrical shape with a diameter of 500 nm and are arranged in an array of 6256 electrodes. Due to their medical relevance, influenza viruses as well as anti-influenza antibodies were chosen as model systems. Common methods to bring antibodies or proteins to biosensor surfaces are complex and time-consuming. In the present work, it was demonstrated that by applying AC electric fields, influenza viruses and antibodies can be immobilized onto the nanoelectrodes within seconds without any prior chemical modification of either the surface or the immobilized biological object. The distribution of these immobilized objects is not uniform over the entire array; it exhibits a decreasing gradient from the outer row to the inner ones. Different causes for this gradient are discussed, such as the vortex-shaped fluid motion above the nanoelectrodes generated by, among others, electrothermal fluid flow. It was demonstrated that part of the accumulated material is permanently immobilized on the electrodes. This is a unique characteristic of the presented system, since in the literature AC electrokinetic immobilization is almost entirely presented as a method for temporary immobilization only. The spatial distribution of the immobilized viral material or the anti-influenza antibodies at the electrodes was observed either by the combination of fluorescence microscopy and deconvolution or by super-resolution microscopy (STED). On-chip immunoassays were performed to examine the suitability of the functionalized electrodes as a potential affinity-based biosensor. Two approaches were pursued: A) the influenza virus as the bio-receptor, or B) the influenza virus as the analyte.
Different sources of error were eliminated by ELISA and passivation experiments. Hence, the activity of the immobilized object was inspected by incubation with the analyte. This resulted in the successful detection of anti-influenza antibodies by the immobilized viral material. On the other hand, detection of influenza virus particles by the immobilized anti-influenza antibodies was not possible. The latter might be due to lost activity or wrong orientation of the antibodies. Thus, further examinations of the activity of antibodies immobilized by AC electric fields should follow. When combined with microfluidics and an electrical read-out system, the functionalized chips possess the potential to serve as a rapid, portable, and cost-effective point-of-care (POC) device. Such a device could serve as a basis for diverse applications in the diagnosis and treatment of influenza, as well as of various other pathogens.
Actin is one of the most highly conserved proteins in eukaryotes, and distinct actin-related proteins with filament-forming properties are even found in prokaryotes. Due to these commonalities, actin-modulating proteins of many species share similar structural properties and proposed functions. The polymerization and depolymerization of actin are critical processes for a cell, as they contribute to the shape changes that allow the cell to adapt to its environment and to move and distribute nutrients and cellular components within the cell. However, to what extent the functions of actin-binding proteins are conserved between distantly related species has only been addressed in a few cases. In this work, the functions of Coronin-A (CorA) and Actin-interacting protein 1 (Aip1), two proteins involved in actin dynamics, were characterized. In addition, the interchangeability and function of Aip1 were investigated in two phylogenetically distant model organisms. The flowering plant Arabidopsis thaliana (encoding two homologs, AIP1-1 and AIP1-2) and the amoeba Dictyostelium discoideum (encoding one homolog, DdAip1) were chosen because the functions of their actin cytoskeletons may differ in many aspects. Cross-species functional analyses were conducted for the AIP1 homologs, as flowering plants do not harbor a CorA gene.
In the first part of the study, the effects of four different mutation methods on the function of the Coronin-A protein and the resulting phenotypes in D. discoideum were examined in two genetic knockouts, one RNAi knockdown, and a sudden loss-of-function mutant created by chemical-induced dislocation (CID). The advantages and disadvantages of the different mutation methods with respect to the motility, appearance and development of the amoebae were investigated, and the results showed that not all observed properties were affected with the same intensity. Remarkably, a new combination of Selection-Linked Integration and CID could be established.
In the second and third parts of the thesis, the exchange of Aip1 between plant and amoeba was carried out. The two A. thaliana homologs (AIP1-1 and AIP1-2) were analyzed for functionality in the plant as well as in D. discoideum. In the Aip1-deficient amoeba, rescue with AIP1-1 was more effective than with AIP1-2. The main results in the plant showed that in the aip1-2 mutant background, reintroduced AIP1-2 displayed the most efficient rescue, and A. thaliana AIP1-1 rescued better than DdAip1. The choice of the tagging site was important for the function of Aip1, as steric hindrance is a problem. DdAip1 was less effective when tagged at the C-terminus, while the plant AIP1s showed mixed results depending on the tag position. In conclusion, the foreign proteins partially rescued the phenotypes of mutant plants and mutant amoebae, despite the organisms being only very distantly related in evolutionary terms.
Aim: The aim of the present study was to examine young female volleyballers’ body build, physical abilities, technical skills and psychophysiological properties in relation to their performance at competitions. The sample consisted of 46 female volleyballers aged 13-16 years. 49 basic anthropometric measurements were taken, and 65 proportions and body composition characteristics were calculated. 9 physical ability tests, 9 volleyball technical skills tests and 21 psychophysiological tests were carried out. Game performance was recorded with the computer program Game, which made it possible to register the performance of technical elements for each player and which calculated an index of proficiency for each girl for each element. The first control group consisted of 74 female volleyballers aged 13–15 years, for whom reduced anthropometry was carried out and 28 games were recorded. The second control group consisted of 586 ordinary schoolgirls aged 13–16 years, for whom full anthropometry was carried out. Results: In order to systematize all anthropometric characteristics, we first studied the anthropometric structure of the body as a whole. It turned out to be a characteristic system in which all variables are in significant correlation with one another and in which the leading characteristics are height and weight. We therefore based the classification on the mean height and weight of the whole sample and formed a 5-class SD classification. There are three classes of concordance between height and weight: small height – small weight, medium height – medium weight, big height – big weight. The other two classes were classes of disconcordance between height and weight: pycnomorphs and leptomorphs. We were able to show that a gradual increase in height and weight brought about a statistically significant increase in length, breadth and depth measurements, circumferences, bone thicknesses and skinfolds.
There were also systematic changes in indices and body composition characteristics. Pycnomorphs and leptomorphs also showed differences specific to their body types in body measurements and body composition. The results of all tests were submitted to basic statistical analysis, and correlations were calculated between all tests (volleyball technical skills, psychophysiological abilities, physical abilities), all basic anthropometric variables (n = 49), and all proportions and body composition characteristics (n = 65). All anthropometric measurements and test results were correlated with the index of proficiency for all elements of the game. The best linear regression models were calculated for predicting proficiency in different elements of the game. Body build and all kinds of tests contributed to predicting proficiency in the game. For performing attack, block and feint, the anthropometric and psychophysiological models were the most essential. The studied complex of body build characteristics and test results determines the players’ proficiency at competitions, is an important tool for monitoring a player’s individual development, enables the selection of volleyballers from among schoolgirls, and represents the whole-body constitutional model of a young female volleyballer. Outlook: Our outlook for the future is to continue recording all Estonian championship games with the computer program Game, to continue the players’ anthropometric measuring and psychophysiological testing at competitions, and to compile a national register for the assessment of the development of individual players and teams.
Optimizing power analysis for randomized experiments: Design parameters for student achievement
(2024)
Randomized trials (RTs) are promising methodological tools to inform evidence-based reform to enhance schooling. Establishing a robust knowledge base on how to promote student achievement requires sensitive RT designs demonstrating sufficient statistical power and precision to draw conclusive and correct inferences on the effectiveness of educational programs and innovations. Proper power analysis is therefore an integral component of any informative RT on student achievement. This venture critically hinges on the availability of reasonable input variance design parameters (and their inherent uncertainties) that optimally reflect the realities of the prospective RT—specifically, its target population and outcome, possibly applied covariates, the concrete design, as well as the planned analysis. However, existing compilations in this vein show far-reaching shortcomings.
The overarching endeavor of the present doctoral thesis was to substantively expand the resources available for refining the planning of RTs evaluating educational interventions. At the core of this thesis is a systematic analysis of design parameters for student achievement, generating reliable and versatile compendia and developing thorough guidance to support apt power analysis for designing strong RTs. To this end, the thesis at hand bundles two complementary studies which capitalize on rich data from several national probability samples of major German longitudinal large-scale assessments.
Study I applied two- and three-level latent (covariate) modeling to analyze design parameters for a wide spectrum of mathematical-scientific, verbal, and domain-general achievement outcomes. Three vital covariate sets were covered comprising (a) pretests, (b) sociodemographic characteristics, and (c) their combination. The accumulated estimates were additionally summarized in terms of normative distributions.
Study II specified (manifest) single-, two-, and three-level models and referred to influential psychometric heuristics to analyze design parameters and develop concise selection guidelines for covariate (a) types of varying bandwidth-fidelity (domain-identical, cross-domain, fluid intelligence pretests; sociodemographic characteristics), (b) combinations quantifying incremental validities, and (c) time lags of 1- to 7-year-lagged pretests scrutinizing validity degradation. The estimates for various mathematical-scientific and verbal achievement outcomes were meta-analytically integrated and employed in precision simulations.
In doing so, Studies I and II addressed essential gaps identified in previous repertoires along six major dimensions. Taken together, this thesis accumulated novel design parameters and deliberate guidance for RT power analysis (1) tailored to four German student (sub)populations across the entire school career from Grade 1 to 12, (2) matched to 21 achievement (sub)domains, (3) adjusted for 11 covariate sets enriched by empirically supported guidelines, (4) adapted to six RT designs, (5) suitable for latent and manifest analysis models, and (6) cataloged along with quantifications of their associated uncertainties. These resources are complemented by a plethora of illustrative application examples to gently direct psychological and educational researchers through pivotal steps in the process of RT design.
The striking heterogeneity of the design parameter estimates across all these dimensions constitutes the overall, joint key result of Studies I and II. Hence, this work convincingly reinforces calls for a close match between design parameters and the specific peculiarities of the target RT’s research context.
All in all, the present doctoral thesis offers a—so far unique—nuanced and extensive toolkit to optimize power analysis for sound RTs on student achievement in the German (and similar) school context. It is of utmost importance that research does not tire to spawn robust evidence on what actually works to improve schooling. With this in mind, I hope that the emerging compendia and guidance contribute to the quality and rigor of our randomized experiments in psychology and education.
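To make concrete how such design parameters enter a power analysis, the following is a minimal sketch of the standard minimum-detectable-effect-size (MDES) formula for a two-level cluster-randomized trial. The function name and the numeric inputs are illustrative assumptions, and the multiplier uses a large-sample normal approximation rather than exact t-quantiles.

```python
import math

def mdes_two_level(J, n, rho, r2_between=0.0, r2_within=0.0,
                   p_treat=0.5, M=1.96 + 0.84):
    """Approximate standardized MDES for a two-level CRT.

    J          : number of clusters (e.g. schools)
    n          : students per cluster
    rho        : intraclass correlation, a key design parameter
    r2_between : outcome variance explained by covariates at cluster level
    r2_within  : outcome variance explained by covariates at student level
    M          : z_{alpha/2} + z_{1-beta} (two-sided 5%, 80% power)
    """
    pq = p_treat * (1.0 - p_treat)
    var_between = rho * (1.0 - r2_between) / (pq * J)
    var_within = (1.0 - rho) * (1.0 - r2_within) / (pq * J * n)
    return M * math.sqrt(var_between + var_within)

# 40 schools of 20 students, ICC = 0.20, no covariates:
no_cov = mdes_two_level(J=40, n=20, rho=0.20)
# a pretest explaining 50% of the variance at both levels shrinks the MDES:
with_cov = mdes_two_level(J=40, n=20, rho=0.20, r2_between=0.5, r2_within=0.5)
```

The comparison illustrates the thesis' central point: the achievable sensitivity of an RT depends directly on design parameters such as the intraclass correlation and covariate R² values, so compendia of realistic estimates are indispensable for planning.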
Investigation of tropospheric arctic aerosol and mixed-phase clouds using airborne lidar technique
(2005)
The Airborne Mobile Aerosol Lidar (AMALi) was designed and built at the Alfred Wegener Institute for Polar and Marine Research (AWI) in Potsdam, Germany, for lower-tropospheric aerosol and cloud research under harsh Arctic conditions. The system was successfully used during two AWI airborne field campaigns, ASTAR 2004 and SVALEX 2005, performed in the vicinity of Spitsbergen in the Arctic. Two novel evaluation schemes, the Two-Stream Inversion and the Iterative Airborne Inversion, were applied to the obtained lidar data, making it possible to calculate particle extinction and backscatter coefficient profiles together with lidar ratio profiles characteristic of Arctic air. Comparing these lidar results with results from other in-situ and remote instrumentation (the ground-based Koldewey Aerosol Raman Lidar (KARL), sunphotometer, radiosounding, satellite imagery) allowed clean versus polluted (Arctic Haze) characteristics of Arctic aerosols to be distinguished. Moreover, interpreting the data by means of the ECMWF Operational Analyses and the small-scale dispersion model EULAG allowed the effects of Spitsbergen's orography on the aerosol load in the planetary boundary layer to be studied. With respect to cloud studies, a new methodology alternating remote AMALi measurements with airborne in-situ measurements of cloud optical and microphysical parameters proved feasible for studying low-density mixed-phase clouds. An example of this approach, an observation of natural cloud seeding (the feeder-seeder phenomenon) with ice crystals precipitating into a lower supercooled stratocumulus deck, was discussed in terms of lidar signal intensity profiles and the corresponding depolarisation ratio profiles.
For parts of the cloud system characterised by almost negligible multiple scattering the calculation of the particle backscatter coefficient profiles was possible using the lidar ratio information obtained from the in-situ measurements in ice-crystal cloud and water cloud.
Kenya and Uganda are amongst the countries that, for different historical, political, and economic reasons, have embarked on law reform processes with regard to citizenship. In 2009, Uganda made provisions in its laws to allow its citizens to hold dual citizenship, while Kenya's 2010 constitution similarly introduced it, lifting the general prohibition on dual citizenship while retaining a ban on state officers, including the President and Deputy President, being dual nationals (Manby, 2018).
Against this background, I analysed the reasons why these countries, which previously held stringent laws and policies against dual citizenship, made this shift within close temporal proximity of each other. Given their geo-political roles, location, and regional, continental, and international obligations, I conducted a comparative study of the processes, actors, impact, and effects. The research covers the period from 2000 to 2010, that is, from the emergence of the law reform debates through the implementation of the reform processes, examining the actors involved and the implications.
According to Rubenstein (2000, p. 520), citizenship is observed in terms of “political institutions” that are free to act according to the will of, in the interests of, or with authority over, their citizenry. Institutions are emergent national or international, higher-order factors above the individual level, embodying the interests and political involvement of their actors without requiring recurring collective mobilisation or imposing intervention to realise these regularities. Transnational institutions are organisations with authority beyond single governments. Given their international obligations, I analysed the role of the UN, AU, and EAC in influencing the citizenship debates and reforms in Kenya and Uganda. Further, non-state actors, such as civil society, were considered.
Veblen (1899) describes institutions as a set of settled habits of thought common to the generality of men. Institutions function only because the rules involved are rooted in shared habits of thought and behaviour, although there is some ambiguity in the definition of the term “habit”. Whereas abstractions and definitions depend on different analytical procedures, institutions restrain some forms of action and facilitate others. Transnational institutions both restrict and aid behaviour. The famous “invisible hand” is nothing else but transnational institutions. Transnational theories, as applied to politics, posit two distinct forms of influence over policy and political action (Veblen, 1899). This influence and durability of institutions is “a function of the degree to which they are instilled in political actors at the individual or organisational level, and the extent to which they thereby ‘tie up’ material resources and networks”. Against this background, transnational networks with connections to Kenya and Uganda were considered, alongside the diaspora from these two countries and their role in the debate and reforms on dual citizenship.
Sterian (2013, p. 310) notes that nation states may be vulnerable to institutional influence, and this vulnerability can pose a threat to a nation's autonomy, political legitimacy, and democratic public law. Transnational institutions sometimes “collide with the sovereignty of the state when they create new structures for regulating cross-border relationships”. Griffin (2003), however, disputes that transnational institutional behaviour is premised on the principles of neutrality, impartiality, and independence. Transnational institutions have become the main target of lobby groups and civil society, consequently leading to excessive politicisation. Kenya and Uganda are member states not only of the broader African Union but also of the EAC, which has adopted elements of socio-economic uniformity. Therefore, in the comparative analysis, I examine the role of the East African Community and its partners in the dual citizenship debate in the two countries.
I argue in the analysis that it is not only important to be a citizen within Kenya or Uganda but also important to discover how the issue of dual citizenship is legally interpreted within the borders of each individual nation-state. In light of this discussion, I agree with Mamdani’s definition of the nation-state as a unique form of power introduced in Africa by colonial powers between 1880 and 1940 whose outcomes can be viewed as “debris of a modernist postcolonial project, an attempt to create a centralised modern state as the bearer of Westphalia sovereignty against the background of indirect rule” (Mamdani, 1996, p. xxii). I argue that this project has impacted the citizenship debate through the adopted legal framework of post colonialism, built partly on a class system, ethnic definitions, and political affiliation. I, however, insist that the nation-state should still be a vital custodian of the citizenship debate, not in any way denying the individual the rights to identity and belonging. The question then that arises is which type of nation-state? Mamdani (1996, p. 298) asserts that the core agenda that African states faced at independence was threefold: deracialising civil society; detribalising the native authority; and developing the economy in the context of unequal international relations. Post-independence governments grappled with overcoming the citizen and subject dichotomy through either preserving the customary in the name of “defending tradition against alien encroachment or abolishing it in the name of overcoming backwardness and embracing triumphant modernism”. Kenya and Uganda are among countries that have reformed their citizenship laws attesting to Mamdani’s latter assertion.
Mamdani’s (1996) assertions on how African states continue to deal with the issue of citizenship through either the defence of tradition against subjects or abolishing it in the name of overcoming backwardness and acceptance of triumphant modernism are based on the colonial legal theory and the citizen-subject dichotomy within Africa communities. To further create a wider perspective on legal theory, I argue that those assertions above, point to the historical divergence between the republican model of citizenship, which places emphasis on political agency as envisioned in Rousseau´s social contract, as opposed to the liberal model of citizenship, which stresses the legal status and protection (Pocock, 1995).
I therefore compare the contexts of both Kenya and Uganda, the actors, and the implications of transnationalism and post-nationalism for the citizens, the nation-state, and the region. I conclude by highlighting the shortcomings in the law reforms that allowed for dual citizenship, further demonstrating an urgent need to address issues such as child statelessness, gendered nationality laws, and the rights of dual citizens. Ethnicity, a weak nation state, and inconsistent citizenship legal reforms are closely linked to the historical factors of both countries. I further indicate the economic and political incentives that influenced the reform.
Keywords: Citizenship, dual citizenship, nation state, republicanism, liberalism, transnationalism, post-nationalism
Does it have to be trees? : Data-driven dependency parsing with incomplete and noisy training data
(2011)
We present a novel approach to training data-driven dependency parsers on incomplete annotations. Our parsers are simple modifications of two well-known dependency parsers, the transition-based Malt parser and the graph-based MST parser. While previous work on parsing with incomplete data has typically couched the task in frameworks of unsupervised or semi-supervised machine learning, we essentially treat it as a supervised problem. In particular, we propose what we call agnostic parsers which hide all fragmentation in the training data from their supervised components. We present experimental results with training data that was obtained by means of annotation projection. Annotation projection is a resource-lean technique which allows us to transfer annotations from one language to another within a parallel corpus. However, the output tends to be noisy and incomplete due to cross-lingual non-parallelism and error-prone word alignments. This makes the projected annotations a suitable test bed for our fragment parsers. Our results show that (i) dependency parsers trained on large amounts of projected annotations achieve higher accuracy than the direct projections, and that (ii) our agnostic fragment parsers perform roughly on a par with the original parsers which are trained only on strictly filtered, complete trees. Finally, (iii) when our fragment parsers are trained on artificially fragmented but otherwise gold standard dependencies, the performance loss is moderate even with up to 50% of all edges removed.
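One way to picture the "agnostic" idea (an illustrative assumption for exposition, not necessarily the thesis' exact mechanism) is to complete every fragmented training sentence by attaching each token whose head was lost during projection to an artificial root, so the supervised parser only ever sees fully connected trees:

```python
# Hypothetical sketch: hide annotation fragmentation from a supervised
# dependency parser. Heads are 1-based token indices; 0 is the artificial
# root; None marks a head lost during cross-lingual annotation projection.

ROOT = 0
UNKNOWN = None

def complete_fragments(heads):
    """Return a fully specified head assignment over the same tokens,
    attaching every headless token directly to the artificial root."""
    return [ROOT if h is UNKNOWN else h for h in heads]

# projected heads for a 5-token sentence; tokens 2 and 4 lost their heads:
partial = [2, UNKNOWN, 0, UNKNOWN, 3]
full = complete_fragments(partial)  # every token now has a head
```

Under this sketch the parser trains on complete structures, while the information that some attachments are artifacts of fragmentation is kept outside its supervised components.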
The two hallmark features of Brownian motion are the linear growth ⟨x²(t)⟩ = 2dDt of the mean squared displacement (MSD) with diffusion coefficient D in d spatial dimensions, and the Gaussian distribution of displacements. With the increasing complexity of the studied systems, deviations from these two central properties have been unveiled over the years. Recently, a large variety of systems have been reported in which the MSD exhibits the linear growth in time of Brownian (Fickian) transport while the distribution of displacements is pronouncedly non-Gaussian (Brownian yet non-Gaussian, BNG). A similar behaviour is also observed for viscoelastic-type motion, where an anomalous trend of the MSD, i.e., ⟨x²(t)⟩ ~ t^α, is combined with a priori unexpected non-Gaussian distributions (anomalous yet non-Gaussian, ANG). This kind of behaviour observed in BNG and ANG diffusion has been related to the presence of heterogeneities in the systems, and a common approach has been established to address it: the random diffusivity approach.
This dissertation explores extensively the field of random diffusivity models. Starting from a chronological description of the main approaches used in attempts to describe BNG and ANG diffusion, different mathematical methodologies are defined for the resolution and study of these models. The processes reported in this work can be classified into three subcategories: (i) randomly-scaled Gaussian processes, (ii) superstatistical models and (iii) diffusing diffusivity models, all belonging to the more general class of random diffusivity models. Eventually, the study focuses more on BNG diffusion, which is by now well-established and relatively well-understood. Nevertheless, many examples are discussed for the description of ANG diffusion, in order to highlight the possible scenarios known so far for the study of this class of processes.
The second part of the dissertation deals with the statistical analysis of random diffusivity processes. A general description based on the concept of moment-generating function is initially provided to obtain standard statistical properties of the models. Then, the discussion moves to the study of the power spectral analysis and the first passage statistics for some particular random diffusivity models. A comparison between the results coming from the random diffusivity approach and the ones for standard Brownian motion is discussed. In this way, a deeper physical understanding of the systems described by random diffusivity models is also outlined.
To conclude, a discussion based on the possible origins of the heterogeneity is sketched, with the main goal of inferring which kind of systems can actually be described by the random diffusivity approach.
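The superstatistical flavour of the random diffusivity approach can be sketched in a few lines: if each trajectory carries a fixed diffusivity D drawn from an exponential distribution with mean D0, the ensemble MSD stays Brownian, ⟨x²(t)⟩ = 2D0t, while the displacement distribution becomes Laplace rather than Gaussian. All parameter values below are illustrative.

```python
import math
import random

def bng_displacements(n_traj=20000, t=1.0, d0=1.0, seed=1):
    """Superstatistical BNG sketch: one displacement per trajectory, each
    trajectory moving with its own exponentially distributed diffusivity."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n_traj):
        d = rng.expovariate(1.0 / d0)               # random diffusivity, mean d0
        xs.append(rng.gauss(0.0, math.sqrt(2.0 * d * t)))
    return xs

xs = bng_displacements()
msd = sum(x * x for x in xs) / len(xs)              # ~ 2*d0*t = 2, Brownian scaling
m4 = sum(x ** 4 for x in xs) / len(xs)
kurtosis = m4 / msd ** 2                            # 3 for Gaussian, 6 for Laplace
```

The simulated kurtosis lands near the Laplace value of 6, not the Gaussian value of 3, even though the MSD matches the ordinary Brownian prediction: Brownian yet non-Gaussian.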
Forming as a result of the collision between the Adriatic and European plates, the Alpine orogen exhibits significant lithospheric heterogeneity due to the long history of interplay between these plates, other continental and oceanic blocks in the region, and inherited features from preceding orogenies. This implies that the thermal and rheological configuration of the lithosphere also varies significantly throughout the region. Lithology and temperature/pressure conditions exert a first-order control on rock strength, principally via thermally activated creep deformation and on the distribution at depth of the brittle-ductile transition zone, which can be regarded as the lower bound of the seismogenic zone. Therefore, they influence the spatial distribution of seismicity within a lithospheric plate. In light of this, accurately constrained geophysical models of the heterogeneous Alpine lithospheric configuration are crucial for describing regional deformation patterns. However, despite the amount of research focussing on the area, different hypotheses still exist regarding the present-day lithospheric state and how it might relate to the present-day seismicity distribution.
This dissertation seeks to constrain the Alpine lithospheric configuration through a fully 3D integrated modelling workflow that utilises multiple geophysical techniques and integrates all available data sources. The aim is to shed light on how lithospheric heterogeneity may influence the heterogeneous patterns of seismicity distribution observed within the region. This was accomplished through the generation of: (i) 3D seismically constrained structural and density models of the lithosphere, adjusted to match the observed gravity field; (ii) 3D models of the lithospheric steady-state thermal field, adjusted to match observed wellbore temperatures; and (iii) 3D rheological models of long-term lithospheric strength, with the results of each step used as input for the following steps.
Results show that the highest strengths within the crust (~1 GPa) and upper mantle (>2 GPa) occur at temperatures characteristic of specific phase transitions (more felsic crust: 200–400 °C; more mafic crust and upper lithospheric mantle: ~600 °C), with almost all seismicity occurring in these regions. However, inherited lithospheric heterogeneity was found to significantly influence this, with seismicity in the thinner and more mafic Adriatic crust (~22.5 km, 2800 kg m⁻³, 1.3 × 10⁻⁶ W m⁻³) occurring up to higher temperatures (~600 °C) than in the thicker and more felsic European crust (~27.5 km, 2750 kg m⁻³, 1.3–2.6 × 10⁻⁶ W m⁻³, ~450 °C). Correlations between seismicity in the orogen forelands and lithospheric strength also show different trends, reflecting their different tectonic settings. As such, events in the plate boundary setting of the southern foreland correlate with the integrated lithospheric strength, occurring mainly in the weaker lithosphere surrounding the strong Adriatic indenter. Events in the intraplate setting of the northern foreland instead correlate with crustal strength, occurring mainly in the weaker and warmer crust beneath the Upper Rhine Graben.
Therefore, not only do the findings presented in this work represent a state of the art understanding of the lithospheric configuration beneath the Alps and their forelands, but also a significant improvement on the features known to significantly influence the occurrence of seismicity within the region. This highlights the importance of considering lithospheric state in regards to explaining observed patterns of deformation.
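The brittle-ductile logic underlying such rheological models can be illustrated with a generic yield-strength-envelope sketch: at each depth, rock strength is the lesser of frictional (Byerlee-type) strength, which grows with pressure, and power-law creep strength, which collapses with temperature; their crossover marks the brittle-ductile transition. All parameter values here are illustrative placeholders, not the calibrated Alpine model.

```python
import math

def brittle_strength(z_m, f=0.6, rho=2750.0, g=9.81):
    """Frictional strength (Pa), proportional to lithostatic pressure."""
    return f * rho * g * z_m

def ductile_strength(T_K, strain_rate=1e-15, A=1e-26, n=3.0, Q=2.0e5, R=8.314):
    """Power-law creep strength (Pa) at temperature T for a fixed strain rate.
    A (Pa^-n s^-1), n, and Q (J/mol) are generic placeholder flow-law values."""
    return (strain_rate / A) ** (1.0 / n) * math.exp(Q / (n * R * T_K))

def strength(z_m, T_K):
    """Yield strength envelope: the weaker of the two mechanisms governs."""
    return min(brittle_strength(z_m), ductile_strength(T_K))

# with a simple 25 K/km geotherm, strength grows with depth in the brittle
# regime, then collapses once thermally activated creep takes over:
profile = [strength(z, 283.0 + 0.025 * z) for z in range(0, 40001, 5000)]
```

The peak of such a profile marks the strongest, seismogenic portion of the column, which is why temperature (and hence lithology-dependent heat production) exerts such a strong control on where earthquakes can occur.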
The ubiquitin-proteasome system (UPS) is a cellular cascade involving three enzymatic steps of protein ubiquitination that target proteins to the 26S proteasome for proteolytic degradation. Several components of the UPS have been shown to be central for the regulation of defense responses during infections with phytopathogenic bacteria. Upon recognition of the pathogen, local defense is induced, which also primes the plant to acquire systemic acquired resistance (SAR) for enhanced immune responses upon challenging infections. Here, ubiquitinated proteins were shown to accumulate locally and systemically during infections with Psm and after treatment with the SAR-inducing metabolites salicylic acid (SA) and pipecolic acid (Pip). The role of the 26S proteasome in local defense has been described in several studies, but its potential role during SAR remains elusive and was therefore investigated in this project by characterizing the Arabidopsis proteasome mutants rpt2a-2 and rpn12a-1 during priming and infections with Pseudomonas. Bacterial replication assays reveal decreased basal and systemic immunity in both mutants, which was verified at the molecular level by impaired activation of defense and SAR genes. rpt2a-2 and rpn12a-1 accumulate wild-type-like levels of camalexin but less SA. Exogenous SA treatment restores local PR gene expression but does not rescue the SAR phenotype. An RNAseq experiment with Col-0 and rpt2a-2 reveals weak or absent induction of defense genes in the proteasome mutant during priming. Thus, a functional 26S proteasome was found to be required for the induction of SAR, while compensatory mechanisms can still be initiated.
E3 ubiquitin ligases conduct the last step of substrate ubiquitination and thereby convey specificity to proteasomal protein turnover. Using RNAseq, 11 E3 ligases were found to be differentially expressed during priming in Col-0, of which plant U-box 54 (PUB54) and ariadne 12 (ARI12) were further investigated to gain a deeper understanding of their potential role during priming.
PUB54 was shown to be expressed during priming and/or triggering with virulent Pseudomonas. pub54-I and pub54-II mutants display local and systemic defense comparable to Col-0. The heavy-metal associated protein 35 (HMP35) was identified as a potential substrate of PUB54 in yeast, which was verified in vitro and in vivo. PUB54 was shown to be an active E3 ligase exhibiting auto-ubiquitination activity and performing ubiquitination of HMP35. Proteasomal turnover of HMP35 was observed, indicating that PUB54 targets HMP35 for ubiquitination and subsequent proteasomal degradation. Furthermore, hmp35-I benefits from increased resistance in bacterial replication assays. Thus, HMP35 is potentially a negative regulator of defense which is targeted and ubiquitinated by PUB54 to regulate downstream defense signaling.
ARI12 is transcriptionally activated during priming or triggering and hyperinduced during priming and triggering. Gene expression is not inducible by the defense-related hormone salicylic acid (SA) and is dampened in npr1 and fmo1 mutants, consequently depending on functional SA and Pip pathways, respectively. ARI12 accumulates systemically after priming with SA, Pip or Pseudomonas. ari12 mutants are not altered in resistance, but stable overexpression leads to increased resistance in local and systemic tissue. During priming and triggering, unbalanced ARI12 levels (i.e. knock-out or overexpression) lead to enhanced FMO1 activation, indicating a role of ARI12 in Pip-mediated SAR. ARI12 was shown to be an active E3 ligase with auto-ubiquitination activity, likely required for activation, with an identified ubiquitination site at K474. Potential substrates identified by mass spectrometry have not yet been verified by additional experiments but suggest an involvement of ARI12 in the regulation of ROS, in turn regulating Pip-dependent SAR pathways.
Thus, data from this project provide strong indications about the involvement of the 26S proteasome in SAR and identified a central role of the two so far barely described E3-ubiquitin ligases PUB54 and ARI12 as novel components of plant defense.
Li and B in ascending magmas: an experimental study on their mobility and isotopic fractionation
(2022)
This research study focuses on the behaviour of Li and B during magmatic ascent, and decompression-driven degassing related to volcanic systems. The main objective of this dissertation is to determine whether it is possible to use the diffusion properties of the two trace elements as a tool to trace magmatic ascent rate. With this objective, diffusion-couple and decompression experiments have been performed in order to study Li and B mobility in intra-melt conditions first, and then in an evolving system during decompression-driven degassing.
Synthetic glasses were prepared with rhyolitic composition and an initial water content of 4.2 wt%, and all the experiments were performed using an internally heated pressure vessel, in order to ensure a precise control on the experimental parameters such as temperature and pressure.
Diffusion-couple experiments were performed at a fixed pressure of 300 MPa. The temperature was varied in the range of 700–1250 °C, with durations between 0 seconds and 24 hours. The diffusion-couple results show that Li diffusivity is very fast and sets in already at very low temperature. Significant isotopic fractionation occurs due to the faster mobility of 6Li compared to 7Li. Boron diffusion is also accelerated by the presence of water, but the isotopic ratio results are unclear, and further investigation would be necessary to fully constrain the isotopic fractionation process of boron in hydrous silicate melts. The isotopic ratio results suggest that boron isotopic fractionation might be affected by the speciation of boron in the silicate melt structure, as 10B and 11B tend to have tetrahedral and trigonal coordination, respectively.
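For reference, concentration profiles in such a diffusion couple are conventionally modelled with the constant-diffusivity error-function solution for two joined half-spaces; fitting this curve to a measured profile yields D. The numeric values in this sketch are illustrative, not the measured Li or B diffusivities.

```python
import math

def couple_profile(x_m, t_s, D, c_left, c_right):
    """Concentration at distance x (m) from the interface after time t (s)
    in a diffusion couple with constant diffusivity D (m^2/s), starting
    from uniform concentrations c_left (x < 0) and c_right (x > 0)."""
    mid = 0.5 * (c_left + c_right)
    amp = 0.5 * (c_right - c_left)
    return mid + amp * math.erf(x_m / (2.0 * math.sqrt(D * t_s)))

# illustrative numbers: D = 1e-12 m^2/s, 1-hour anneal
c0 = couple_profile(0.0, 3600, 1e-12, 10.0, 50.0)    # interface stays at the mean
far = couple_profile(5e-4, 3600, 1e-12, 10.0, 50.0)  # far field keeps its initial value
```

The characteristic width of the mixed zone, 2√(Dt), is what makes such profiles usable as a speedometer: a faster-diffusing isotope such as 6Li develops a wider front than 7Li, producing the measurable transient isotopic fractionation described above.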
Several decompression experiments were performed at 900 °C and 1000 °C, with pressures going from 300 MPa to 71-77 MPa and durations of 30 minutes, two, five and ten hours, in order to trigger water exsolution and the formation of vesicles in the sample. Textural observations and the calculation of the bubble number density confirmed that the bubble size and distribution after decompression is directly proportional to the decompression rate.
The overall SIMS results for Li and B show that the concentrations of the two trace elements decrease progressively with decreasing decompression rate. This is because, for longer decompression times, the diffusion of Li and B into the bubbles has more time to progress, and the melt continuously loses volatiles as the bubbles expand in volume.
For fast decompression, Li and B results show a concentration increase with a δ7Li and δ11B decrease close to the bubble interface, related to the sudden formation of the gas bubble, and the occurrence of a diffusion process in the opposite direction, from the bubble meniscus to the unaltered melt. When the bubble growth becomes dominant and Li and B start to exsolve into the gas phase, the silicate melt close to the bubble gets depleted in Li and B, because of a stronger diffusion of the trace elements into the bubble.
Our data are being applied to different models that aim to combine the dynamics of bubble nucleation and growth with the evolution of trace element concentrations and isotopic ratios. First considerations on these models are presented here, together with concluding remarks on this research study. All in all, these results are a promising basis for further investigation: Li and B indeed show clear dependences on decompression-related magma ascent rates in volcanic systems.
In this thesis we investigate the evaporation behaviour of sessile droplets of aqueous saline solutions on planar inert and metallic surfaces and characterise the corrosion phenomenon for iron surfaces. First, we study the evaporation behaviour of sessile salty droplets on inert surfaces for a wide range of salt concentrations, relative humidities, droplet sizes and contact angles. Our study reveals the range of validity of the well-accepted diffusion-controlled evaporation model and highlights the impact of Marangoni flows, driven by salt concentration (surface tension) gradients, on the evaporation behaviour and the resulting salty deposit patterns. Furthermore, we study the spatio-temporal evolution of sessile droplets of saline solutions on metallic surfaces. In contrast to the simple, generally accepted Evans droplet model, we show that the corrosion spreads ahead of the macroscopic contact line as a peripheral film. The three-phase contact line is destabilized by surface tension gradients induced by ionic composition changes during the course of the corrosion process and by the migration of cations towards the droplet perimeter. Finally, we investigate the corrosion behaviour beneath drying salty sessile droplets on metallic surfaces. The corrosion process, in particular the location of anodic and cathodic activities over the droplet footprint area, is correlated with the spatial distribution of the salt inside the drying droplet.
This dissertation is concerned with the relation between qualitative phonological organization in the form of syllabic structure and continuous phonetics, that is, the spatial and temporal dimensions of vocal tract action that express syllabic structure. The main claim of the dissertation is twofold. First, we argue that syllabic organization exerts multiple effects on the spatio-temporal properties of the segments that partake in that organization. That is, there is no unique or privileged exponent of syllabic organization. Rather, syllabic organization is expressed in a pleiotropy of phonetic indices. Second, we claim that a better understanding of the relation between qualitative phonological organization and continuous phonetics is reached when one considers how the string of segments (over which the nature of the phonological organization is assessed) responds to perturbations (scaling of phonetic variables) of localized properties (such as durations) within that string. Specifically, variation in phonetic variables and more specifically prosodic variation is a crucial key to understanding the nature of the link between (phonological) syllabic organization and the phonetic spatio-temporal manifestation of that organization. The effects of prosodic variation on segmental properties and on the overlap between the segments, we argue, offer the right pathway to discover patterns related to syllabic organization. In our approach, to uncover evidence for global organization, the sequence of segments partaking in that organization as well as properties of these segments or their relations with one another must be somehow locally varied. The consequences of such variation on the rest of the sequence can then be used to unveil the span of organization. When local perturbations to segments or relations between adjacent segments have effects that ripple through the rest of the sequence, this is evidence that organization is global. 
If instead local perturbations stay local with no consequences for the rest of the whole, this indicates that organization is local.
The intracontinental endorheic Aral Sea, remote from oceanic influences, represents an excellent sedimentary archive in Central Asia that can be used for high-resolution palaeoclimate studies. We performed palynological, microfacies and geochemical analyses on sediment cores retrieved from Chernyshov Bay, in the NW part of the modern Large Aral Sea. The most complete sedimentary sequence, with a total length of 11 m, covers approximately the past 2000 years of the late Holocene. High-resolution palynological analyses, conducted on both dinoflagellate cyst assemblages and pollen grains, revealed prominent environmental changes in the Aral Sea and in the catchment area. The diversity and distribution of dinoflagellate cysts within the assemblages characterized the sequence of salinity and lake-level changes during the past 2000 years. Due to the strong dependence of the Aral Sea hydrology on inputs from its tributaries, the lake levels are ultimately linked to fluctuations in meltwater discharge during spring. As the amplitude of glacial meltwater inputs is largely controlled by temperature variations in the Tien Shan and Pamir Mountains during the melting season, salinity and lake-level changes of the Aral Sea reflect temperature fluctuations in the high catchment area during the past 2000 years. Dinoflagellate cyst assemblages document lake lowstands and hypersaline conditions during ca. 0–425 AD, 920–1230 AD, 1500 AD, 1600–1650 AD, 1800 AD and since the 1960s, whereas oligosaline conditions and higher lake levels prevailed during the intervening periods. In addition, reworked dinoflagellate cysts from Palaeogene and Neogene deposits proved to be a valuable proxy for extreme sheet-wash events, when precipitation was enhanced over the Aral Sea Basin, as during 1230–1450 AD.
We propose that the recorded environmental changes are related primarily to climate, but may have been amplified during extreme conditions by human-controlled irrigation activities or military conflicts. Additionally, salinity levels and variations in solar activity show striking similarities over the past millennium, as during 1000–1300 AD, 1450–1550 AD and 1600–1700 AD, when low lake levels match well with an increase in solar activity, suggesting that an increase in the net radiative forcing reinforced past regressions of the Aral Sea. In addition, we used pollen analyses to quantify changes in moisture conditions in the Aral Sea Basin. High-resolution reconstructions of precipitation (mean annual) and temperature (mean annual, coldest versus warmest month) parameters were performed using the “probability mutual climatic spheres” method, providing the sequence of climate change for the past 2000 years in western Central Asia. Cold and arid conditions prevailed during ca. 0–400 AD, 900–1150 AD and 1500–1650 AD, with the extension of xeric vegetation dominated by steppe elements. Conversely, warmer and less arid conditions occurred during ca. 400–900 AD and 1150–1450 AD, when the steppe vegetation was enriched in plants requiring moister conditions. The change in the precipitation pattern over the Aral Sea Basin is shown to be predominantly controlled by the Eastern Mediterranean (EM) cyclonic system, which provides humidity to the Middle East and western Central Asia during winter and early spring. As the EM is significantly regulated by pressure modulations of the North Atlantic Oscillation (NAO) when the system is in a negative phase, a relationship between humidity over western Central Asia and the NAO is proposed. Finally, laminated sediments record shifts in sedimentary processes during the late Holocene that reflect pronounced changes in taphonomic dynamics.
In Central Asia, the frequency of dust storms occurring during spring, when the continent is heating up, is mostly controlled by the intensity and position of the Siberian High (SH) pressure system. Using the titanium (Ti) content in laminated sediments as a proxy for aeolian detrital inputs, changes in wind dynamics over Central Asia are documented for the past 1500 years, offering the longest reconstruction of SH variability to date. Based on high Ti content, stronger wind dynamics are reported for 450–700 AD, 1210–1265 AD, 1350–1750 AD and 1800–1975 AD, indicating a stronger SH during spring. In contrast, lower Ti content during 1750–1800 AD and 1980–1985 AD reflects a diminished influence of the SH and a reduced atmospheric circulation. During 1180–1210 AD and 1265–1310 AD, a considerably weakened atmospheric circulation is evidenced. As a whole, although climate dynamics controlled environmental changes and ultimately modulated western Central Asia's climate system, it is likely that changes in solar activity also had an impact by influencing, to some extent, the Aral Sea's hydrological balance and regional temperature patterns in the past. The appendix of the thesis is provided via the HTML document as a ZIP download.
Galaxies are observational probes of the Large Scale Structure. Their gravitational motions trace the total matter density and therefore the Large Scale Structure. In addition, studies of structure formation and galaxy evolution rely on numerical cosmological simulations. Still, only one universe, observable from a given position in time and space, is available for comparison with simulations. The related cosmic variance affects our ability to interpret the results. Simulations constrained by observational data are a remedy to this problem; achieving them is the goal of the Cosmic Flows and CLUES projects. Cosmic Flows builds catalogs of accurate distance measurements to map deviations from the uniform expansion, i.e. peculiar velocities. These measurements are mainly obtained with the galaxy luminosity-rotation rate correlation. We present the calibration of that relation in the mid-infrared with observational data from the Spitzer Space Telescope. The resulting accurate distance estimates will be included in the third catalog of the project. In the meantime, two catalogs extending to 30 and 150 Mpc/h have been released. We report improvements and applications of the CLUES method to these two catalogs. The technique is based on the constrained realization algorithm. The cosmic displacement field is computed with the Zel'dovich approximation and then reversed to relocate the reconstructed three-dimensional constraints to the positions of their precursors in the initial field. The size of the second catalog (8000 galaxies within 150 Mpc/h) highlighted the importance of minimizing observational biases. By carrying out tests on mock catalogs built from cosmological simulations, a method to minimize these biases was derived. Finally, for the first time, cosmological simulations are constrained solely by peculiar velocities. The process is successful, as the resulting simulations resemble the Local Universe.
The major attractors and voids are simulated at positions approaching observational positions by a few megaparsecs, thus reaching the limit imposed by the linear theory.
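The reversal step described above can be illustrated with a minimal one-dimensional toy (an illustrative sketch, not the CLUES pipeline): the Zel'dovich displacement field is obtained from a density contrast via a Poisson solve in Fourier space, and reconstructed positions are then moved back along it to their Lagrangian precursors.

```python
import numpy as np

# Toy 1D periodic box: recover the Zel'dovich displacement psi from a
# linear density contrast delta, using div(psi) = -delta, i.e.
# psi_k = 1j * k * delta_k / k^2 in Fourier space.
N, L = 256, 2 * np.pi
x = np.arange(N) * L / N
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi

A, k0 = 0.01, 4.0                      # single plane-wave perturbation
delta = A * np.cos(k0 * x)             # density contrast "today"

delta_k = np.fft.fft(delta)
psi_k = np.zeros_like(delta_k)
nz = k != 0                            # exclude the zero mode
psi_k[nz] = 1j * k[nz] * delta_k[nz] / k[nz] ** 2
psi = np.fft.ifft(psi_k).real          # Zel'dovich displacement field

# Reversing the displacement relocates present-day (reconstructed)
# positions x to their precursor positions q in the initial field.
q = x - psi
```

For this single mode the analytic displacement is psi(x) = -(A/k0) sin(k0 x), which the FFT solution reproduces to machine precision; in the actual method the same logic is applied to the full three-dimensional reconstructed field.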
The non-linear behaviour of atmospheric dynamics is not well understood and complicates the evaluation and use of regional climate models (RCMs). These non-linearities induce chaos and internal variability (IV) within the RCMs, leading to a sensitivity of RCMs to their initial conditions (IC). The IV is the ability of RCMs to realise different solutions in simulations that differ in their IC but share the same lower and lateral boundary conditions (LBC); it can hence be defined as the across-member spread between the ensemble members.
To investigate the IV and the dynamical and diabatic contributions generating it, four ensembles of RCM simulations are performed with the atmospheric regional model HIRHAM5. The integration area is the Arctic, and each ensemble consists of 20 members. The ensembles cover the time period from July to September for the years 2006, 2007, 2009 and 2012. The ensemble members have the same LBC and differ only in their IC. The different IC are generated by shifting the initialisation time successively by six hours: within each ensemble, the first simulation starts on 1st July at 00 UTC and the last on 5th July at 18 UTC, and each simulation runs until 30th September. The analysed time period ranges from 6th July to 30th September, the period covered by all ensemble members. The model runs without any nudging to allow a free development of each simulation and thus to capture the full internal variability of the HIRHAM5.
As a measure of the model-generated IV, the across-member standard deviation and the across-member variance are used, and the dynamical and diabatic processes influencing the IV are estimated by applying a diagnostic budget study for the IV tendency of the potential temperature, developed by Nikiema and Laprise [2010] and Nikiema and Laprise [2011]. The diagnostic budget study is based on the first law of thermodynamics for potential temperature and on the mass-continuity equation. The resulting budget equation reveals seven contributions to the potential temperature IV tendency.
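The across-member spread itself is straightforward to compute. A minimal sketch (synthetic numbers, not HIRHAM5 output) for an ensemble array of potential temperature with dimensions (member, time, y, x), assuming the population (1/N) convention:

```python
import numpy as np

# Synthetic ensemble: 20 members, 5 time steps, 8x8 grid points.
rng = np.random.default_rng(0)
base = 280.0 + rng.standard_normal((5, 8, 8))   # signal shared by all members
offsets = np.arange(20.0) - 9.5                 # member-specific deviations (K)
theta = base[None, ...] + offsets[:, None, None, None]

# IV as across-member standard deviation at each grid point and time step,
# then averaged over the model domain to obtain one value per time step.
iv = theta.std(axis=0, ddof=0)
iv_domain = iv.mean(axis=(1, 2))
```

Because the members here differ only by constant offsets, the IV equals the spread of the offsets everywhere; in a real ensemble the field `iv` would show the vertical and spatial structure discussed above.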
As a first study, this work analyses the IV within the HIRHAM5. To this end, atmospheric circulation parameters and the potential temperature are investigated for all four ensemble years. Similar to previous studies, the IV fluctuates strongly in time. Further, because all ensemble members are forced with the same LBC, the IV depends on the vertical level within the troposphere, with high values in the lower troposphere and at 500 hPa and low values in the upper troposphere and at the surface. For the same reason, the spatial distribution shows low values of IV at the boundaries of the model domain.
The diagnostic budget study for the IV tendency of potential temperature reveals that the seven contributions fluctuate in time like the IV. However, the individual terms reach different absolute magnitudes. The budget study identifies the horizontal and vertical ‘baroclinic’ terms as the main contributors to the IV tendency, with the horizontal ‘baroclinic’ term producing and the vertical ‘baroclinic’ term reducing the IV. The other terms fluctuate around zero, because they are small in general or are balanced due to the domain average.
The comparison of the results obtained for the four different ensembles (summers 2006, 2007, 2009 and 2012) reveals that on average the findings for each ensemble are quite similar concerning the magnitude and the general pattern of the IV and its contributions. However, near the surface a weaker IV is produced with decreasing sea ice extent. This is caused by a smaller impact of the horizontal 'baroclinic' term over some regions and by changing diabatic processes, particularly a more intense IV-reducing tendency due to condensational heating. Nevertheless, it has to be emphasised that the behaviour of the IV and its dynamical and diabatic contributions is influenced mainly by complex atmospheric feedbacks and large-scale processes and not by the sea ice distribution.
Additionally, a comparison with a second RCM covering the Arctic and using the same LBC and IC is performed. For both models, very similar results concerning the IV and its dynamical and diabatic contributions are found. Hence, this investigation leads to the conclusion that the IV is a natural phenomenon and is independent of the applied RCM.
Establishment of final leaf size in plants represents a complex mechanism that relies on the precise regulation of two interconnected cellular processes, cell division and cell expansion. In previous work, the barley protein BROAD LEAF1 (BLF1) was identified as a novel negative regulator of cell proliferation that mainly limits leaf growth in the width direction. Here I identified a novel RING/U-box protein that interacts with BLF1 through a yeast two-hybrid screen. Using BiFC, Co-IP and FRET, I confirmed the interaction of the two proteins in planta. Enrichment of the BLF1-mEGFP fusion protein and the increase of the FRET signal upon MG132 treatment of tobacco plants, together with an in vivo ubiquitylation assay in bacteria, confirmed that the RING/U-box E3 interacts with BLF1 to mediate its ubiquitylation and degradation by the 26S proteasome system. Consistent with regulation of endogenous BLF1 in barley by proteasomal degradation, inhibition of the proteasome by bortezomib treatment of BLF1-vYFP transgenic barley plants also resulted in an enrichment of the BLF1 protein. I further demonstrated that the RING/U-box E3 colocalizes with BLF1 in nuclei and negatively regulates BLF1 protein levels. Analysis of ring-e3_1 knock-out mutants suggested the involvement of the RING/U-box E3 gene in leaf growth control, although the effect was mainly on leaf length. Together, my results suggest that proteasomal degradation, possibly mediated by the RING/U-box E3, contributes to fine-tuning BLF1 protein levels in barley.
In this work we investigated ultrafast demagnetization in a Heusler alloy. This material is a half-metal and exists in a ferromagnetic phase. A special feature of the investigated alloy is its electronic band structure, which leads to a specific density of states: majority electrons form a metallic structure, while minority electrons exhibit a gap near the Fermi level, as in a semiconductor. This particularity makes the material a good model system for proof-of-principle studies of demagnetization. Using pump-probe experiments, we carried out time-resolved measurements to determine the demagnetization times. For pumping we used ultrashort laser pulses with a duration of around 100 fs. We employed two excitation regimes with two different wavelengths, namely 400 nm and 1240 nm. By decreasing the photon energy to the size of the minority-electron gap, we explored the effect of the gap on the demagnetization dynamics. In this work we used, for the first time, an OPA (Optical Parametric Amplifier) for the generation of laser irradiation in the long-wavelength regime and tested it at the FEMTOSPEX beamline of the BESSY II electron storage ring. With this new technique we measured wavelength-dependent demagnetization dynamics and found that the demagnetization time correlates with the photon energy of the excitation pulse: higher photon energy leads to faster demagnetization in our material. We associate this result with the existence of the energy gap for minority electrons and explain it with Elliott-Yafet scattering events. Additionally, we applied a new probe method for the magnetization state and verified its effectiveness: the well-known XMCD (X-ray magnetic circular dichroism), which we adapted for measurements in reflection geometry. Static experiments confirmed that the purely electronic dynamics can be separated from the magnetic dynamics.
We used circularly polarized light with the photon energy fixed at the L3 edge of the corresponding elements. The appropriate incidence angle was estimated from static measurements. Using this probe method in dynamic measurements, we explored the electronic and magnetic dynamics in this alloy.
During the drug discovery & development process, several phases encompassing a number of preclinical and clinical studies have to be successfully passed to demonstrate the safety and efficacy of a new drug candidate. As part of these studies, the characterization of the drug's pharmacokinetics (PK) is an important aspect, since the PK is assumed to strongly impact safety and efficacy. To this end, drug concentrations are measured repeatedly over time in a study population. The objectives of such studies are to describe the typical PK time-course and the associated variability between subjects, and to identify underlying sources that contribute significantly to this variability, e.g. the use of comedication. The most commonly used statistical framework to analyse repeated measurement data is the nonlinear mixed effects (NLME) approach. At the same time, ample knowledge about the drug's properties already exists and accumulates during the discovery & development process: before any drug is tested in humans, detailed knowledge about its PK in different animal species has to be collected. This drug-specific knowledge, together with general knowledge about the species' physiology, is exploited in mechanistic physiologically based PK (PBPK) modeling approaches; it is, however, ignored in the classical NLME modeling approach.
Mechanistic physiologically based models aim to incorporate the relevant and known physiological processes that contribute to the overlying process of interest. Compared to data-driven models, they are usually more complex from a mathematical perspective. For example, in many situations the number of model parameters exceeds the number of measurements, so that reliable parameter estimation becomes more difficult and partly impossible. As a consequence, the integration of powerful statistical estimation approaches such as NLME modeling, which is widely used in data-driven modeling, with the mechanistic modeling approach is not well established; the observed data is rather used as a confirming instead of a model-informing and model-building input.
A further obstacle to an integrated approach is the inaccessibility of the details of the NLME methodology, which prevents these approaches from being adapted to the specifics and needs of mechanistic modeling. Although the NLME modeling approach has existed for several decades, the details of the mathematical methodology are scattered across a wide range of literature, and a comprehensive, rigorous derivation is lacking. Available literature usually covers only selected parts of the mathematical methodology. Sometimes important steps are not described or are only heuristically motivated, e.g. the iterative algorithm that finally determines the parameter estimates.
Thus, in the present thesis the mathematical methodology of NLME modeling is systematically described and complemented into a comprehensive account, comprising the common theme from ideas and motivation to the final parameter estimation. Therein, new insights into the interpretation of different approximation methods used in the context of the NLME modeling approach are given and illustrated; furthermore, similarities and differences between them are outlined. Based on these findings, an expectation-maximization (EM) algorithm to determine the estimates of an NLME model is described.
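To make the flavour of such an EM scheme concrete, here is a minimal sketch for the simplest mixed-effects case, a linear random-intercept model (y_ij = mu + b_i + eps_ij, with b_i ~ N(0, omega^2) and eps_ij ~ N(0, sigma^2)), where both EM steps have closed forms. This is an illustrative toy under those model assumptions, not the NLME algorithm derived in the thesis.

```python
import numpy as np

# Simulate a balanced data set: G subjects, n observations each.
rng = np.random.default_rng(42)
G, n = 200, 5
mu_true, om2_true, sg2_true = 1.0, 1.0, 0.25
b = rng.normal(0.0, np.sqrt(om2_true), G)
y = mu_true + b[:, None] + rng.normal(0.0, np.sqrt(sg2_true), (G, n))

mu, om2, sg2 = 0.0, 0.5, 0.5            # crude starting values
for _ in range(500):
    # E-step: posterior variance v and mean m_i of each random intercept b_i.
    v = 1.0 / (1.0 / om2 + n / sg2)
    m = v * (y - mu).sum(axis=1) / sg2
    # M-step: maximize the expected complete-data log-likelihood.
    mu = (y - m[:, None]).mean()        # fixed effect
    om2 = np.mean(m ** 2 + v)           # between-subject variance E[b_i^2]
    sg2 = np.mean((y - mu - m[:, None]) ** 2) + v   # residual variance
```

Each iteration increases the observed-data likelihood; for nonlinear models the E-step posterior is no longer Gaussian, which is exactly where the approximation methods discussed above come into play.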
Using the EM algorithm and the lumping methodology of Pilari (2010), a new approach for combining PBPK and NLME modeling is presented and exemplified for the antibiotic levofloxacin. Therein, the lumping identifies which processes are informed by the available data, and the respective model reduction improves the robustness of parameter estimation. Furthermore, it is shown how a priori known factors influencing the variability, as well as a priori known unexplained variability, are incorporated to further mechanistically drive the model development. Consequently, correlations between parameters and between covariates are automatically accounted for due to the mechanistic derivation of the lumping and the covariate relationships.
A useful feature of PBPK models compared to classical data-driven PK models is the possibility to predict drug concentrations within all organs and tissues in the body. Thus, the resulting PBPK model for levofloxacin is used to predict drug concentrations and their variability within soft tissues, the site of action of levofloxacin. These predictions are compared with data from muscle and adipose tissue obtained by microdialysis, an invasive technique that measures a proportion of the drug in the tissue and thereby allows approximating the concentrations in the interstitial fluid. Because frameworks for comparing human in vivo tissue PK with PBPK predictions have not been established so far, a new conceptual framework is derived. The comparison of PBPK model predictions and microdialysis measurements shows adequate agreement and reveals further strengths of the presented new approach.
We demonstrated how mechanistic PBPK models, which are usually developed in the early stages of drug development, can be used as the basis for model building in the analysis of later stages, i.e. in clinical studies. As a consequence, the extensively collected and accumulated knowledge about the species and the drug is utilized and updated with specific volunteer or patient data. The NLME approach combined with mechanistic modeling reveals new insights for the mechanistic model, for example the identification and quantification of variability in mechanistic processes. This represents a further contribution to the learn & confirm paradigm across the different stages of drug development.
Finally, the applicability of mechanism-driven model development is demonstrated on an example from the field of quantitative psycholinguistics, the analysis of repeated eye-movement data. Our approach gives new insights into the interpretation of these experiments and the processes behind them.
Fault planes of large earthquakes incorporate inhomogeneous structures. This can be observed in teleseismic studies through the spatial distribution of slip and seismic moment release caused by the mainshock. Both parameters are often concentrated in patches on the fault plane with much higher values of slip and moment release than in adjacent areas. These patches are called asperities and evidently have a strong influence on the propagation of the mainshock rupture. The condition and properties of the structures in the fault plane area that are responsible for the evolution of such asperities, and their significance for the damage distributions of future earthquakes, are still not well understood and are the subject of current geoscientific studies. In the presented thesis, asperity structures are identified on the fault plane of the Mw=8.0 Antofagasta earthquake in northern Chile, which occurred on 30 July 1995. It was a thrust-type event in the seismogenic zone between the subducting Pacific Nazca plate and the overriding South American plate. In a cooperation between the German Task Force for Earthquakes and the CINCA'95 project, a network of up to 44 seismic stations was set up to record the aftershock sequence. The seaward extension of the network with 9 OBH stations significantly increased the precision of the hypocenter determinations. The aftershocks were distributed mainly on the fault plane itself, around the city of Antofagasta and the Mejillones Peninsula. The asperity structures were recognized here through spatial variations of local seismological parameters, first of all the spatial distribution of the seismic b-value on the fault plane, derived from the Gutenberg-Richter magnitude-frequency relation.
The correlation of this b-value map with other parameters, such as the mainshock source time function, the isostatic residual gravity anomalies, the distribution of seismic energy radiated by the aftershocks, and the vp/vs ratios from a local earthquake tomography study, provided insights into the composition of the fault zone and the processes generating asperities. The investigation of 295 aftershock focal mechanism solutions supported the resulting fault plane structure and suggested a similar 3D stress state throughout the area of the Antofagasta fault plane.
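The b-value mapping rests on the Gutenberg-Richter relation log10 N(>=M) = a - b*M; a standard way to estimate b from a catalog above a completeness magnitude Mc is Aki's maximum-likelihood estimator. A minimal sketch on synthetic magnitudes (not the thesis catalog; the half-bin correction for binned magnitudes is omitted for brevity):

```python
import numpy as np

def b_value_mle(mags, mc):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= mc."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= mc]
    return np.log10(np.e) / (m.mean() - mc)

# Synthetic Gutenberg-Richter catalog with true b = 1.0: magnitudes above
# Mc are exponentially distributed with rate b * ln(10).
rng = np.random.default_rng(1)
mc, b_true = 2.0, 1.0
mags = mc + rng.exponential(1.0 / (b_true * np.log(10)), 20000)

b_hat = b_value_mle(mags, mc)
```

Mapping b on the fault plane then amounts to applying such an estimator to the events falling in each spatial cell, subject to having enough events above Mc per cell.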
More than a billion people rely on water from rivers sourced in High Mountain Asia (HMA), a significant portion of which is derived from snow and glacier melt. Rural communities are heavily dependent on the consistency of runoff, and are highly vulnerable to shifts in their local environment brought on by climate change. Despite this dependence, the impacts of climate change in HMA remain poorly constrained due to poor process understanding, complex terrain, and insufficiently dense in-situ measurements.
HMA's glaciers contain more frozen water than any region outside of the poles. Their extensive retreat is a highly visible and much studied marker of regional and global climate change. However, in many catchments, snow and snowmelt represent a much larger fraction of the yearly water budget than glacial meltwaters. Despite their importance, climate-related changes in HMA's snow resources have not been well studied.
Changes in the volume and distribution of snowpack have complex and extensive impacts on both local and global climates. Eurasian snow cover has been shown to impact the strength and direction of the Indian Summer Monsoon -- which is responsible for much of the precipitation over the Indian Subcontinent -- by modulating earth-surface heating. Shifts in the timing of snowmelt have been shown to limit the productivity of major rangelands, reduce streamflow, modify sediment transport, and impact the spread of vector-borne diseases. However, a large-scale regional study of climate impacts on snow resources had yet to be undertaken.
Passive Microwave (PM) remote sensing is a well-established empirical method of studying snow resources over large areas. Since 1987, there have been consistent daily global PM measurements which can be used to derive an estimate of snow depth, and hence snow-water equivalent (SWE) -- the amount of water stored in snowpack. The SWE estimation algorithms were originally developed for flat and even terrain -- such as the Russian and Canadian Arctic -- and have rarely been used in complex terrain such as HMA.
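As an illustration of the empirical character of such retrievals, the classic Chang et al. (1987) form estimates snow depth from the difference between the horizontally polarized ~19 GHz and 37 GHz brightness temperatures. A minimal sketch (coefficient and default density from that original flat-terrain algorithm, not from the sensor-specific variants evaluated here):

```python
def swe_chang(tb19h, tb37h, snow_density=0.24):
    """Chang-style SWE retrieval in mm water equivalent.

    Snow depth [cm] = 1.59 * (Tb19H - Tb37H); depth is converted to SWE
    via an assumed mean snow density [g/cm^3]. Negative brightness-
    temperature differences (no dry-snow scattering signal) clip to zero.
    """
    depth_cm = max(1.59 * (tb19h - tb37h), 0.0)
    return depth_cm * 10.0 * snow_density   # cm * 10 mm/cm * g/cm^3 -> mm SWE

# A 10 K scattering difference with the default density:
swe = swe_chang(250.0, 240.0)
```

The fixed coefficient and density are exactly the kind of flat-terrain assumptions whose validity in complex topography is examined in this dissertation.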
This dissertation first examines factors present in HMA that could impact the reliability of SWE estimates. Forest cover, absolute snow depth, long-term average wind speeds, and hillslope angle were found to be the strongest controls on SWE measurement reliability. While forest density and snow depth are factors accounted for in modern SWE retrieval algorithms, wind speed and hillslope angle are not. Despite uncertainty in absolute SWE measurements and differences in the magnitude of SWE retrievals between sensors, single-instrument SWE time series were found to be internally consistent and suitable for trend analysis.
Building on this finding, this dissertation tracks changes in SWE across HMA using a statistical decomposition technique. An aggregate decrease in SWE was found (10.6 mm/yr), despite large spatial and seasonal heterogeneities. Winter SWE increased in almost half of HMA, despite general negative trends throughout the rest of the year. The elevation distribution of these negative trends indicates that while changes in SWE have likely impacted glaciers in the region, climate change impacts on these two pieces of the cryosphere are somewhat distinct.
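A simple, outlier-robust way to extract a per-pixel trend from a yearly SWE series is the Theil-Sen estimator, shown here as a generic illustration (the thesis uses its own statistical decomposition technique, not necessarily this estimator):

```python
import numpy as np

def theil_sen_slope(t, y):
    """Median of all pairwise slopes -- robust to outliers in short climate series."""
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    i, j = np.triu_indices(len(t), k=1)          # all pairs i < j
    return np.median((y[j] - y[i]) / (t[j] - t[i]))

years = np.arange(1987, 2017)
swe = 150.0 - 10.6 * (years - years[0])          # idealized decline of 10.6 mm/yr
swe_noisy = swe.copy()
swe_noisy[5] += 80.0                              # a single anomalous winter
```

Because the estimate is a median over all pairwise slopes, one anomalous year leaves the recovered trend unchanged, which matters when inter-annual variability is as large as reported for HMA.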
Following the discussion of relative changes in SWE, this dissertation explores changes in the timing of the snowmelt season in HMA using a newly developed algorithm. The algorithm is shown to accurately track the onset and end of the snowmelt season (70% within 5 days of a control dataset, 89% within 10). Using a 29-year time series, changes in the onset, end, and duration of snowmelt are examined. While nearly the entirety of HMA has experienced an earlier end to the snowmelt season, large regions of HMA have seen a later start to the snowmelt season. Snowmelt periods have also decreased in almost all of HMA, indicating that the snowmelt season is generally shortening and ending earlier across HMA.
By examining shifts in both the spatio-temporal distribution of SWE and the timing of the snowmelt season across HMA, we provide a detailed accounting of changes in HMA's snow resources. The overall trend in HMA is towards less SWE storage and a shorter snowmelt season. However, long-term and regional trends conceal distinct seasonal, temporal, and spatial heterogeneity, indicating that changes in snow resources are strongly controlled by local climate and topography, and that inter-annual variability plays a significant role in HMA's snow regime.
Business process models are used within a range of organizational initiatives, where every stakeholder has a unique perspective on a process and demands a respective model. As a consequence, multiple process models capturing the very same business process coexist. Keeping such models in sync is a challenge within an ever-changing business environment: once a process is changed, all its models have to be updated. Due to the large number of models and their complex relations, model maintenance becomes error-prone and expensive. Against this background, business process model abstraction emerged as an operation that reduces the number of stored process models and facilitates model management. Business process model abstraction is an operation preserving essential process properties and leaving out insignificant details in order to retain information relevant for a particular purpose. Process model abstraction has been addressed by several researchers, whose studies focus on particular use cases and on model transformations supporting these use cases. This thesis systematically approaches the problem of business process model abstraction, shaping the outcome into a framework. We investigate the current industry demand for abstraction, summarizing it in a catalog of business process model abstraction use cases. The thesis focuses on one prominent use case where the user demands a model with coarse-grained activities and overall process ordering constraints. We develop model transformations that support this use case, starting with transformations based on the analysis of the process model structure. Further, abstraction methods considering the semantics of process model elements are investigated. First, we suggest how semantically related activities can be discovered in process models, a barely researched challenge. The thesis validates the designed abstraction methods against sets of industrial process models and discusses aspects of the method implementation.
Second, we develop a novel model transformation which, combined with the related-activity discovery, allows flexible non-hierarchical abstraction. In this way, the thesis advocates novel model transformations that facilitate business process model management and provides the foundations for innovative tool support.
The near-Earth space environment is a highly complex system comprised of several regions and particle populations hazardous to satellite operations. The trapped particles in the radiation belts and ring current can cause significant damage to satellites during space weather events, due to deep dielectric and surface charging. Closer to Earth is another important region, the ionosphere, which delays the propagation of radio signals and can adversely affect navigation and positioning. In response to fluctuations in solar and geomagnetic activity, both the inner-magnetospheric and ionospheric populations can undergo drastic and sudden changes within minutes to hours, which creates a challenge for predicting their behavior. Given the increasing reliance of our society on satellite technology, improving our understanding and modeling of these populations is a matter of paramount importance.
In recent years, numerous spacecraft have been launched to study the dynamics of particle populations in the near-Earth space, transforming it into a data-rich environment. To extract valuable insights from the abundance of available observations, it is crucial to employ advanced modeling techniques, and machine learning methods are among the most powerful approaches available. This dissertation employs long-term satellite observations to analyze the processes that drive particle dynamics, and builds interdisciplinary links between space physics and machine learning by developing new state-of-the-art models of the inner-magnetospheric and ionospheric particle dynamics.
The first aim of this thesis is to investigate the behavior of electrons in Earth's radiation belts and ring current. Using ~18 years of electron flux observations from the Global Positioning System (GPS), we developed the first machine learning model of hundreds-of-keV electron flux at Medium Earth Orbit (MEO) that is driven solely by solar wind and geomagnetic indices and does not require auxiliary flux measurements as inputs. We then proceeded to analyze the directional distributions of electrons, and for the first time, used Fourier sine series to fit electron pitch angle distributions (PADs) in Earth's inner magnetosphere. We performed a superposed epoch analysis of 129 geomagnetic storms during the Van Allen Probes era and demonstrated that electron PADs have a strong energy-dependent response to geomagnetic activity. Additionally, we showed that the solar wind dynamic pressure could be used as a good predictor of the PAD dynamics. Using the observed dependencies, we created the first PAD model with a continuous dependence on L, magnetic local time (MLT) and activity, and developed two techniques to reconstruct near-equatorial electron flux observations from low-PA data using this model.
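The sine-series representation of a PAD can be fitted with ordinary least squares. A minimal sketch on an idealized distribution (synthetic coefficients, not Van Allen Probes data):

```python
import numpy as np

def fit_pad_sine_series(alpha, flux, n_terms=4):
    """Fit PAD(alpha) ~ sum_{n=1..N} A_n * sin(n * alpha) by least squares."""
    basis = np.stack([np.sin(n * alpha) for n in range(1, n_terms + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, flux, rcond=None)
    return coeffs

alpha = np.deg2rad(np.arange(10.0, 180.0, 10.0))   # pitch angles in radians
pad = np.sin(alpha) + 0.3 * np.sin(3 * alpha)      # synthetic pancake-like PAD
A = fit_pad_sine_series(alpha, pad)
```

Odd-n terms are symmetric about 90 degrees of pitch angle, so the fitted coefficients give a compact, physically interpretable description of the distribution shape and how it varies with L, MLT and activity.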
The second objective of this thesis is to develop a novel model of the topside ionosphere. To achieve this goal, we collected observations from five of the most widely used ionospheric missions and intercalibrated these data sets. This allowed us to use these data jointly for model development, validation, and comparison with other existing empirical models. We demonstrated, for the first time, that ion density observations by Swarm Langmuir Probes exhibit overestimation (up to ~40-50%) at low and mid-latitudes on the night side, and suggested that the influence of light ions could be a potential cause of this overestimation. To develop the topside model, we used 19 years of radio occultation (RO) electron density profiles, which were fitted with a Chapman function with a linear dependence of scale height on altitude. This approximation yields 4 parameters, namely the peak density and height of the F2-layer and the slope and intercept of the linear scale height trend, which were modeled using feedforward neural networks (NNs). The model was extensively validated against both RO and in-situ observations and was found to outperform the International Reference Ionosphere (IRI) model by up to an order of magnitude. Our analysis showed that the most substantial deviations of the IRI model from the data occur at altitudes of 100-200 km above the F2-layer peak. The developed NN-based ionospheric model reproduces the effects of various physical mechanisms observed in the topside ionosphere and provides highly accurate electron density predictions.
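The profile parameterization described above can be sketched as an alpha-Chapman layer whose scale height varies linearly with altitude, exposing exactly the four parameters the neural networks would model: peak density, peak height, and the intercept and slope of the scale-height trend. The exact functional form and all numerical values here are assumptions for illustration, not the thesis' fitted parameters.

```python
import numpy as np

def chapman_linear_H(h, NmF2, hmF2, H0, dHdh):
    """Alpha-Chapman topside electron density profile with a scale height
    that varies linearly with altitude: H(h) = H0 + dHdh * (h - hmF2).

    NmF2 : F2-peak density, hmF2 : F2-peak height (km),
    H0   : scale-height intercept (km), dHdh : scale-height slope.
    """
    H = H0 + dHdh * (h - hmF2)
    z = (h - hmF2) / H
    return NmF2 * np.exp(0.5 * (1.0 - z - np.exp(-z)))

# Hypothetical parameter values; Ne equals NmF2 at the peak and
# decays monotonically above it.
h = np.linspace(300.0, 800.0, 6)  # altitudes in km
Ne = chapman_linear_H(h, NmF2=1e12, hmF2=300.0, H0=50.0, dHdh=0.1)
```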
This dissertation provides an extensive study of geospace dynamics, and the main results of this work contribute to the improvement of models of plasma populations in the near-Earth space environment.
Introduction: Intestinal bacteria influence gut morphology by affecting epithelial cell proliferation, development of the lamina propria, villus length and crypt depth [1]. Gut microbiota-derived factors have also been proposed to play a role in the development of the 30 % longer intestine that is characteristic of PRM/Alf mice compared to other mouse strains [2, 3]. Polyamines and SCFAs produced by gut bacteria are important growth factors that possibly influence mucosal morphology, in particular villus length and crypt depth, and may play a role in gut lengthening in the PRM/Alf mouse. However, experimental evidence is lacking. Aim: The objective of this work was to clarify the role of bacterially produced polyamines in crypt depth, mucosa thickness and epithelial cell proliferation. For this purpose, C3H mice associated with a simplified human microbiota (SIHUMI) were compared with mice colonized with SIHUMI complemented by the polyamine-producing Fusobacterium varium (SIHUMI + Fv). In addition, the microbial impact on gut lengthening in PRM/Alf mice was characterized and the contribution of SCFAs and polyamines to this phenotype was examined. Results: SIHUMI + Fv mice exhibited an up to 1.7-fold higher intestinal polyamine concentration compared to SIHUMI mice, which was mainly due to increased putrescine concentrations. However, no differences were observed in crypt depth, mucosa thickness or epithelial proliferation. The intestine of conventional PRM/Alf mice was 8.5 % longer than that of germfree PRM/Alf mice. In contrast, intestinal lengths of C3H mice were similar, independent of colonization status. The comparison of PRM/Alf and C3H mice, both associated with SIHUMI + Fv, demonstrated that PRM/Alf mice had a 35.9 % longer intestine than C3H mice. However, intestinal SCFA and polyamine concentrations of PRM/Alf mice were similar or even lower, except for N-acetylcadaverine, which was 3.1-fold higher in PRM/Alf mice.
When germfree PRM/Alf mice were associated with a complex PRM/Alf microbiota, the intestine was one quarter longer than in PRM/Alf mice colonized with a C3H microbiota. This gut elongation correlated with levels of the polyamine N-acetylspermine. Conclusion: The intestinal microbiota is able to influence intestinal length depending on microbial composition and on the mouse genotype. Although SCFAs do not contribute to gut elongation, an influence of the polyamines N-acetylcadaverine and N-acetylspermine is conceivable. In addition, the study clearly demonstrated that bacterial putrescine does not influence gut morphology in C3H mice.
New ABC triblock copolymers were synthesized by controlled free-radical polymerization via Reversible Addition-Fragmentation chain Transfer (RAFT). Compared to amphiphilic diblock copolymers, the prepared materials formed more complex self-assembled structures in water owing to their three different functional units. Two strategies were followed: The first approach relied on double-thermoresponsive triblock copolymers exhibiting Lower Critical Solution Temperature (LCST) behavior in water. While the first phase transition triggers the self-assembly of the triblock copolymers upon heating, the second one allows the self-assembled state to be modified. The stepwise self-assembly was followed by turbidimetry, dynamic light scattering (DLS) and 1H NMR spectroscopy, as these methods reflect the behavior on the macroscopic, mesoscopic and molecular scales. Although the first phase transition could be easily monitored due to the onset of self-assembly, it was difficult to identify the second phase transition unambiguously, as the changes are either marginal or coincide with the slow response of the self-assembled system to relatively fast changes of temperature. The second approach towards advanced polymeric micelles exploited the thermodynamic incompatibility of “triphilic” block copolymers – namely polymers bearing a hydrophilic, a lipophilic and a fluorophilic block – as the driving force for self-assembly in water. The self-assembly of these polymers in water produced polymeric micelles comprising a hydrophilic corona and a microphase-separated micellar core with lipophilic and fluorophilic domains – so-called multi-compartment micelles. The association of the triblock copolymers in water was studied by 1H NMR spectroscopy, DLS and cryogenic transmission electron microscopy (cryo-TEM). Direct imaging of the polymeric micelles in solution by cryo-TEM revealed different morphologies depending on the block sequence and the preparation conditions.
While polymers with the sequence hydrophilic-lipophilic-fluorophilic formed core-shell-corona micelles with the fluorinated compartment as the core, block copolymers with the hydrophilic block in the middle formed spherical micelles in which single or multiple fluorinated domains “float” as disks on the surface of the lipophilic core. Increasing the temperature during micelle preparation, or annealing the aqueous solutions at higher temperatures after preparation, occasionally induced a change of the micelle morphology or the particle size distribution. RAFT polymerization not only gave access to the desired polymeric architectures but also provided a valuable tool for molar mass characterization. The thiocarbonylthio moieties, which are present at the chain ends of polymers prepared by RAFT, absorb light in the UV and visible range and were employed for end-group analysis by UV-vis spectroscopy. A variety of dithiobenzoate and trithiocarbonate RAFT agents with differently substituted initiating R groups were synthesized. The investigation of their absorption characteristics showed that the intensity of the absorptions depends sensitively on the substitution pattern next to the thiocarbonylthio moiety and on the solvent polarity. Based on these results, the conditions for a reliable and convenient end-group analysis by UV-vis spectroscopy were optimized. As end-group analysis by UV-vis spectroscopy is insensitive to the potential association of polymers in solution, it was advantageously exploited for the molar mass characterization of the prepared amphiphilic block copolymers.
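The end-group method described above reduces to a Beer-Lambert calculation: the absorbance of the thiocarbonylthio chromophore gives the molar chain-end concentration, and dividing the polymer mass concentration by it yields Mn, assuming exactly one end group per chain. All numerical values below (absorbance, extinction coefficient, concentration) are hypothetical.

```python
def mn_from_endgroup_uv(absorbance, epsilon, path_length_cm, conc_g_per_L):
    """Estimate the number-average molar mass Mn from the RAFT end-group
    absorbance via the Beer-Lambert law A = epsilon * c * l, assuming one
    thiocarbonylthio end group per chain (a sketch with illustrative values).
    """
    # Molar end-group concentration in mol/L
    c_endgroup = absorbance / (epsilon * path_length_cm)
    # One end group per chain -> chain concentration equals c_endgroup
    return conc_g_per_L / c_endgroup  # g/mol

# Hypothetical numbers: a dithiobenzoate band with epsilon ~ 1.0e4 L/(mol*cm)
mn = mn_from_endgroup_uv(absorbance=0.50, epsilon=1.0e4,
                         path_length_cm=1.0, conc_g_per_L=1.0)
# -> 20000 g/mol for these illustrative inputs
```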
Background: A growing body of research has documented negative effects of sexualization in the media on individuals’ self-objectification. This research is predominantly built on studies examining traditional media, such as magazines and television, and young female samples. Furthermore, longitudinal studies are scarce, and studies examining mediators of the relationship are missing. The first aim of the present PhD thesis was to investigate the relations between the use of sexualized interactive media and social media and self-objectification. The second aim of this work was to examine the presumed processes within understudied samples, such as males and females beyond college age, thus investigating the moderating roles of age and gender. The third aim was to shed light on possible mediators of the relation between sexualized media and self-objectification.
Method: The research aims were addressed within the scope of four studies. In an experiment, women’s self-objectification and body satisfaction were measured after playing a video game with a sexualized vs. a nonsexualized character that was either personalized or generic. The second study investigated the cross-sectional link between sexualized television use and self-objectification and consideration of cosmetic surgery in a sample of women across a broad age spectrum, examining the role of age in these relations. The third study examined the cross-sectional associations between sexualized male and female images on Instagram and self-objectification in a sample of male and female adolescents. Using a two-wave longitudinal design, the fourth study examined sexualized video game and Instagram use as predictors of adolescents’ self-objectification. Path models were conceptualized for the second, third and fourth study, in which media use predicted body surveillance via appearance comparisons (Study 4), thin-ideal internalization (Study 2, 3, 4), muscular-ideal internalization (Study 3, 4), and valuing appearance (all studies).
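A path model of the kind described above can be illustrated with a minimal product-of-coefficients mediation sketch on synthetic data. The variable names and effect sizes below are invented for illustration and do not reflect the studies' actual estimates.

```python
import numpy as np

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, slopes...]

def simple_mediation(x, m, y):
    """Product-of-coefficients mediation x -> m -> y.
    Returns (a, b, indirect effect a*b)."""
    a = ols(x[:, None], m)[1]               # path a: x -> m
    b = ols(np.column_stack([x, m]), y)[2]  # path b: m -> y, controlling x
    return a, b, a * b

rng = np.random.default_rng(0)
x = rng.normal(size=2000)                          # e.g. sexualized media use
m = 0.5 * x + rng.normal(scale=0.1, size=2000)     # e.g. thin-ideal internalization
y = 0.8 * m + rng.normal(scale=0.1, size=2000)     # e.g. body surveillance
a, b, indirect = simple_mediation(x, m, y)
```

In practice such models are estimated with dedicated SEM software and bootstrapped confidence intervals; the two-regression version above only shows the core logic.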
Results: The results of the experimental study revealed no effect of sexualized video game characters on women’s self-objectification and body satisfaction. No moderating effect of personalization emerged. Sexualized television use was associated with consideration of cosmetic surgery via body surveillance and valuing appearance for women of all ages in Study 2, while no moderating effect of age was found. Study 3 revealed that seeing sexualized male images on Instagram was indirectly associated with higher body surveillance via muscular-ideal internalization for boys and girls. Sexualized female images were indirectly linked to higher body surveillance via thin-ideal internalization and valuing appearance over competence only for girls. The longitudinal analysis of Study 4 showed no moderating effect of gender: For boys and girls, sexualized video game use at T1 predicted body surveillance at T2 via appearance comparisons, thin-ideal internalization and valuing appearance over competence. Furthermore, the use of sexualized Instagram images at T1 predicted body surveillance at T2 via valuing appearance.
Conclusion: The findings show that sexualization in the media is linked to self-objectification across a variety of media formats and within diverse groups of people. While the longitudinal study indicates that sexualized media predict self-objectification over time, the experimental null findings warrant caution regarding this temporal order. The results demonstrate that several mediating variables might be involved in this link. Possible implications for research and practice, such as intervention programs and policy-making, are discussed.
Functional analysis of selected DOF transcription factors in the model plant Arabidopsis thaliana
(2007)
Transcription factors (TFs) are global regulators of gene expression, playing essential roles in almost all biological processes, and are therefore of great scientific and biotechnological interest. This project focused on the functional characterisation of three DNA-binding-with-one-zinc-finger (DOF) TFs from the genetic model plant Arabidopsis thaliana, namely OBP1, OBP2 and AtDOF4;2. These genes were selected due to the severe growth phenotypes conferred upon their constitutive over-expression. To identify the biological processes regulated by OBP1, OBP2 and AtDOF4;2 in detail, molecular and physiological characterization of transgenic plants with modified levels of OBP1, OBP2 and AtDOF4;2 expression (constitutive and inducible over-expression, RNAi) was performed using both targeted and profiling technologies. Additionally, the expression patterns of the studied TFs and their target genes were analyzed using promoter-GUS lines and publicly available microarray data. Finally, selected target genes were confirmed by chromatin immuno-precipitation and electrophoretic-mobility shift assays. This combinatorial approach revealed distinct biological functions of OBP1, OBP2 and AtDOF4;2. Specifically, OBP2 controls indole glucosinolate / auxin homeostasis by directly regulating the enzyme at the branch point of these pathways, CYP83B1 (Skirycz et al., 2006). Glucosinolates are secondary compounds important for defence against herbivores and pathogens in the plant order Capparales (e.g. Arabidopsis, canola and broccoli), whilst auxin is an essential plant hormone. Hence OBP2 is important both for the response to biotic stress and for plant growth. Like OBP2, AtDOF4;2 is also involved in the regulation of plant secondary metabolism and affects the production of various phenylpropanoid compounds in a tissue- and environment-specific manner.
It was found that under certain stress conditions AtDOF4;2 negatively regulates flavonoid biosynthetic genes, whilst in certain tissues it activates hydroxycinnamic acid production. It was hypothesized that this dual function is most likely related to specific interactions with other proteins, perhaps other TFs (Skirycz et al., 2007). Finally, OBP1 regulates both cell proliferation and cell expansion. It was shown that OBP1 controls cell cycle activity by directly targeting the expression of core cell cycle genes (CYCD3;3 and KRP7), other TFs and components of the replication machinery. Evidence for OBP1-mediated activation of the cell cycle during embryogenesis and germination is presented. Additionally, and independently of its effects on cell proliferation, OBP1 negatively affects cell expansion via reduced expression of cell wall loosening enzymes. In summary, this work provides an important contribution to our knowledge of DOF TF function. Future work will concentrate on establishing the exact regulatory networks of OBP1, OBP2 and AtDOF4;2 and their possible biotechnological applications.
Numbers are omnipresent in daily life. They vary in display format and in their meaning so that it does not seem self-evident that our brains process them more or less easily and flexibly. The present thesis addresses mental number representations in general, and specifically the impact of finger counting on mental number representations. Finger postures that result from finger counting experience are one of many ways to convey numerical information. They are, however, probably the one where the numerical content becomes most tangible. By investigating the role of fingers in adults’ mental number representations the four presented studies also tested the Embodied Cognition hypothesis which predicts that bodily experience (e.g., finger counting) during concept acquisition (e.g., number concepts) stays an immanent part of these concepts. The studies focussed on different aspects of finger counting experience. First, consistency and further details of spontaneously used finger configurations were investigated when participants repeatedly produced finger postures according to specific numbers (Study 1). Furthermore, finger counting postures (Study 2), different finger configurations (Study 2 and 4), finger movements (Study 3), and tactile finger perception (Study 4) were investigated regarding their capability to affect number processing. Results indicated that active production of finger counting postures and single finger movements as well as passive perception of tactile stimulation of specific fingers co-activated associated number knowledge and facilitated responses towards corresponding magnitudes and number symbols. Overall, finger counting experience was reflected in specific effects in mental number processing of adult participants. This indicates that finger counting experience is an immanent part of mental number representations.
Findings are discussed in the light of a novel model. The MASC (Model of Analogue and Symbolic Codes) combines and extends two established models of number and magnitude processing. Especially a symbolic motor code is introduced as an essential part of the model. It comprises canonical finger postures (i.e., postures that are habitually used to represent numbers) and finger-number associations. The present findings indicate that finger counting functions both as a sensorimotor magnitude and as a symbolic representational format and that it thereby directly mediates between physical and symbolic size. The implications are relevant both for basic research regarding mental number representations and for pedagogic practices regarding the effectiveness of finger counting as a means to acquire a fundamental grasp of numbers.
The goal of this work was to study the binding of ions to polymers and lipid bilayer membranes in aqueous solutions. In the first part of this work, the influence of various inorganic salts and polyelectrolytes on the structure of water was studied using Isothermal Titration Calorimetry (ITC). The heat of dilution of the salts was used as a scale of the water-structure making and breaking of the ions. The heats of dilution could be attributed to the Hofmeister series. Following this, the binding of Ca2+ to poly(sodium acrylate) (NaPAA) was studied. ITC and a Ca2+ ion-selective electrode were used to measure the reaction enthalpy and binding isotherm. Binding of Ca2+ ions to PAA was found to be highly endothermic and therefore solely driven by entropy. We then compared the binding of ions to the one-dimensional PAA polymer chain with the binding to lipid vesicles carrying the same functional groups. As for the polymer, Ca2+ binding was found to be endothermic. Binding of calcium to the lipid bilayer was found to be weaker than to the polymer. In the context of these experiments, it was shown that Ca2+ binds not only to charged but also to zwitterionic lipid vesicles. Finally, we studied the interaction of two salts, KCl and NaCl, with a neutral polymer gel, PNIPAAM, and with the ionic polymer PAA. Combining calorimetry and a potassium-selective electrode, we observed that the ions interact with both polymers, whether charged or not.
Gene expression describes the process of making functional gene products (e.g. proteins or special RNAs) from instructions encoded in the genetic information (e.g. DNA). This process is heavily regulated, allowing cells to produce the appropriate gene products necessary for cell survival and to adapt production as necessary for different cell environments. Gene expression is subject to regulation at several levels, including transcription, mRNA degradation, translation and protein degradation. When intact, this system maintains cell homeostasis, keeping the cell alive and adaptable to different environments. Malfunction in the system can result in disease states and cell death. In this dissertation, we explore several aspects of gene expression control by analyzing data from biological experiments. Most of the following work uses a common mathematical model framework based on Markov chain models to test hypotheses, predict system dynamics or elucidate network topology. Our work lies at the intersection of mathematics and biology and showcases the power of statistical data analysis and mathematical modeling for the validation and discovery of biological phenomena.
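As a generic illustration of the Markov-chain framework mentioned above, here is a Gillespie simulation of the classic two-state ("telegraph") gene expression model: the promoter switches between OFF and ON as a Markov chain, mRNA is transcribed only in the ON state, and transcripts degrade with first-order kinetics. The specific models and rate constants used in the dissertation may differ.

```python
import random

def telegraph_ssa(k_on, k_off, k_tx, k_deg, t_end, seed=1):
    """Gillespie simulation of the two-state gene expression model.
    Returns a list of (time, mRNA count) pairs, one per reaction event.
    """
    rng = random.Random(seed)
    t, gene_on, mrna = 0.0, 0, 0
    history = []
    while t < t_end:
        rates = [
            k_on * (1 - gene_on),  # promoter activation
            k_off * gene_on,       # promoter deactivation
            k_tx * gene_on,        # transcription (ON state only)
            k_deg * mrna,          # mRNA degradation
        ]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)          # waiting time to next event
        r = rng.uniform(0, total)            # choose which event fires
        if r < rates[0]:
            gene_on = 1
        elif r < rates[0] + rates[1]:
            gene_on = 0
        elif r < rates[0] + rates[1] + rates[2]:
            mrna += 1
        else:
            mrna -= 1
        history.append((t, mrna))
    return history

# Illustrative rates; steady-state mean mRNA is k_tx/k_deg * k_on/(k_on+k_off) = 10
hist = telegraph_ssa(k_on=1.0, k_off=1.0, k_tx=20.0, k_deg=1.0, t_end=200.0)
```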
The increasing demand for energy in the current technological era and recent political decisions to phase out nuclear energy have turned attention to alternative, environmentally friendly energy sources such as solar energy. Although silicon solar cells are the product of a mature technology, the search for highly efficient and easily processable materials is still ongoing. Halide perovskites combine these properties, which made their single-junction efficiencies comparable to silicon solar cells within a decade of research. However, the downsides of halide perovskites are poor stability and, for the most stable compositions, lead toxicity.
On the other hand, chalcogenide perovskites are among the most promising absorber materials for the photovoltaic market due to their elemental abundance and chemical stability against moisture and oxygen. In the search for the ultimate solar absorber material, combining the good optoelectronic properties of halide perovskites with the stability of chalcogenides could yield a promising candidate.
Thus, this work investigates new techniques for the synthesis and design of these novel chalcogenide perovskites, which contain transition metals as cations, e.g., BaZrS3, BaHfS3, EuZrS3, EuHfS3 and SrHfS3. The deposition technique of this study proceeds in two stages: In the first stage, the binary compounds are deposited via a solution-processing method. In the second stage, the deposited materials are annealed in a chalcogenide atmosphere to form the perovskite structure via solid-state reactions.
The research also focuses on the optimization of a generalized recipe for a molecular ink to deposit precursors of chalcogenide perovskites with different binaries. Sulfurization of the precursors resulted either in binaries without perovskite formation or in distorted perovskite structures, consistent with literature reports that some of these materials are more stable in the needle-like non-perovskite configuration.
Lastly, the produced materials were evaluated in two categories: The first category concerns the physical properties of the deposited layer, e.g., crystal structure, secondary phase formation and impurities. In the second category, optoelectronic properties are measured and compared to those of an ideal absorber layer, e.g., band gap, conductivity and surface photovoltage.
Media artists have been struggling for financial survival ever since media art came into being. The non-material value of the artwork, a provocative attitude towards the traditional art world and the originally anti-capitalist mindset of the movement make it particularly difficult to provide a constructive solution. However, a cultural entrepreneurial approach can be used to build a framework for finding a balance between culture and business while ensuring that the cultural mission remains the top priority.
Dryland vulnerability : typical patterns and dynamics in support of vulnerability reduction efforts
(2011)
The pronounced constraints on ecosystem functioning and human livelihoods in drylands are frequently exacerbated by natural and socio-economic stresses, including weather extremes and inequitable trade conditions. Therefore, a better understanding of the relation between these stresses and the socio-ecological systems is important for advancing dryland development. The concept of vulnerability as applied in this dissertation describes this relation as encompassing the exposure to climate, market and other stresses as well as the sensitivity of the systems to these stresses and their capacity to adapt. With regard to the interest in improving environmental and living conditions in drylands, this dissertation aims at a meaningful generalisation of heterogeneous vulnerability situations. A pattern recognition approach based on clustering revealed typical vulnerability-creating mechanisms at global and local scales. One study presents the first analysis of dryland vulnerability with global coverage at a sub-national resolution. The cluster analysis resulted in seven typical patterns of vulnerability according to quantitative indication of poverty, water stress, soil degradation, natural agro-constraints and isolation. Independent case studies served to validate the identified patterns and to prove the transferability of vulnerability-reducing approaches. Due to their worldwide coverage, the global results allow the evaluation of a specific system’s vulnerability in its wider context, even in poorly-documented areas. Moreover, climate vulnerability of smallholders was investigated with regard to their food security in the Peruvian Altiplano. Four typical groups of households were identified in this local dryland context using indicators for harvest failure risk, agricultural resources, education and non-agricultural income. 
An elaborate validation relying on independently acquired information demonstrated the clear correlation between weather-related damages and the identified clusters. It also showed that household-specific causes of vulnerability were consistent with the mechanisms implied by the corresponding patterns. The synthesis of the local study provides valuable insights into the tailoring of interventions that reflect the heterogeneity within the social group of smallholders. The conditions necessary to identify typical vulnerability patterns were summarised in five methodological steps. They aim to motivate and to facilitate the application of the selected pattern recognition approach in future vulnerability analyses. The five steps outline the elicitation of relevant cause-effect hypotheses and the quantitative indication of mechanisms as well as an evaluation of robustness, a validation and a ranking of the identified patterns. The precise definition of the hypotheses is essential to appropriately quantify the basic processes as well as to consistently interpret, validate and rank the clusters. In particular, the five steps reflect scale-dependent opportunities, such as the outcome-oriented aspect of validation in the local study. Furthermore, the clusters identified in Northeast Brazil were assessed in the light of important endogenous processes in the smallholder systems which dominate this region. In order to capture these processes, a qualitative dynamic model was developed using generalised rules of labour allocation, yield extraction, budget constitution and the dynamics of natural and technological resources. The model resulted in a cyclic trajectory encompassing four states with differing degree of criticality. The joint assessment revealed aggravating conditions in major parts of the study region due to the overuse of natural resources and the potential for impoverishment. 
The changes in vulnerability-creating mechanisms identified in Northeast Brazil are well-suited to informing local adjustments to large-scale intervention programmes, such as “Avança Brasil”. Overall, the categorisation of a limited number of typical patterns and dynamics presents an efficient approach to improving our understanding of dryland vulnerability. Appropriate decision-making for sustainable dryland development through vulnerability reduction can be significantly enhanced by pattern-specific entry points combined with insights into changing hotspots of vulnerability and the transferability of successful adaptation strategies.
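The pattern-recognition step underlying both the global and local analyses above is a cluster analysis of quantitative indicators. Below is a generic sketch of such a step, using plain k-means on standardized indicator values with synthetic data; the dissertation's actual clustering algorithm and indicator set are not reproduced here.

```python
import numpy as np

def kmeans(data, k, n_iter=100, seed=0):
    """Plain k-means on standardized indicator values.

    data : (n_regions, n_indicators) array; returns labels and centers.
    """
    rng = np.random.default_rng(seed)
    # Standardize each indicator so all contribute equally to distances
    z = (data - data.mean(axis=0)) / data.std(axis=0)
    centers = z[rng.choice(len(z), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each region to its nearest cluster center
        d = np.linalg.norm(z[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([z[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Toy data: two indicator columns (e.g. poverty, water stress), two groups
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
labels, centers = kmeans(data, k=2)
```

In a real analysis the choice of k, the robustness evaluation and the validation against independent case studies (as outlined in the five methodological steps) matter far more than the clustering routine itself.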
Redox signalling in plants
(2020)
Once proteins are synthesized, they can additionally be modified by post-translational modifications (PTMs). Proteins containing reactive cysteine thiols, stabilized in their deprotonated form as thiolates (RS-) due to their local environment, serve as redox sensors by undergoing a multitude of oxidative PTMs (Ox-PTMs). Ox-PTMs such as S-nitrosylation or the formation of inter- or intramolecular disulfide bridges induce functional changes in these proteins. Proteins containing cysteines whose thiol oxidation state regulates their functions belong to the so-called redoxome. Such Ox-PTMs are controlled by site-specific cellular events that play a crucial role in protein regulation, affecting enzyme catalytic sites, ligand binding affinity, protein-protein interactions or protein stability. Reversible protein thiol oxidation is an essential regulatory mechanism of photosynthesis, metabolism, and gene expression in all photosynthetic organisms. Therefore, studying PTMs will remain crucial for understanding plant adaptation to external stimuli like fluctuating light conditions. Optimizing methods suitable for studying plant Ox-PTMs is of high importance for elucidating the plant redoxome. This study focuses on thiol modifications occurring in plants and provides novel insight into the in vivo redoxome of Arabidopsis thaliana in response to light vs. dark. This was achieved by utilizing a resin-assisted thiol enrichment approach. Furthermore, candidates were confirmed at the single-protein level by a differential labelling approach: thiols and disulfides were differentially labelled, and the protein levels were detected using immunoblot analysis. Further analysis focused on light-reduced proteins. The enrichment approach identified many well-studied redox-regulated proteins.
Amongst those were fructose-1,6-bisphosphatase (FBPase) and sedoheptulose-1,7-bisphosphatase (SBPase), which have previously been described as targets of the thioredoxin system. The redox-regulated proteins identified in the current study were compared to several published, independent results reporting redox-regulated proteins in Arabidopsis leaves, roots and mitochondria, and specifically S-nitrosylated proteins. These proteins were excluded as potential new candidates but serve as a proof of concept that the enrichment experiments are effective. Additionally, the CSP41A and CSP41B proteins, which emerged from this study as potential targets of redox regulation, were analyzed by Ribo-Seq. The active translatome study of the csp41a mutant vs. wild type showed most of the significant changes at the end of the night, as did csp41b. Yet, in both mutants only a few chloroplast-encoded genes were altered. Further studies of the CSP41A and CSP41B proteins are needed to reveal their functions and elucidate the role of redox regulation of these proteins.
Among the multitude of geomorphological processes, aeolian shaping processes are of special character: even though their immediate impact can be considered low (exceptions exist), their constant and large-scale force makes them a powerful player in the earth system. Pedogenic dust is one of the most important sources of atmospheric aerosols and is therefore regarded as a key player in atmospheric processes. Soil dust emissions, being complex in composition and properties, influence atmospheric processes and air quality and have impacts on other ecosystems. In this dissertation, we develop a novel scientific understanding of this complex system based on a holistic dataset acquired during a series of field experiments on arable land in La Pampa, Argentina. The field experiments and the generated data provide information about topography, various soil parameters, the atmospheric dynamics in the very lower atmosphere (4 m height), and measurements of aeolian particle movement across a wide range of particle size classes, from 0.2 μm up to coarse sand.
The investigations focus on three topics: (a) the effects of low-scale landscape structures on aeolian transport processes of the coarse particle fraction, (b) the horizontal and vertical fluxes of the very fine particles and (c) the impact of wind gusts on particle emissions.
Among other considerations presented in this thesis, it could be shown in particular that, even though the small-scale topography has a clear impact on erosion and deposition patterns, physical soil parameters also need to be taken into account for robust statistical modelling of the latter. Furthermore, the vertical fluxes of particulate matter show different characteristics for the different particle size classes. Finally, a novel statistical measure was introduced to quantify the impact of wind gusts on particle uptake and was applied to the provided data set. This measure shows significantly increased particle concentrations at points in time defined as gust events.
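The gust measure itself is not spelled out in this abstract, so the following is only one plausible form such a statistic could take: flag gust events wherever the instantaneous wind exceeds a multiple of its running mean, then compare the mean particle concentration during gusts against calm periods. All thresholds and the synthetic data below are assumptions.

```python
import numpy as np

def gust_concentration_ratio(wind, conc, window=60, gust_factor=1.5):
    """Ratio of mean particle concentration during gust events to the mean
    during calm periods. A gust is flagged where the wind exceeds
    `gust_factor` times its running mean over `window` samples.
    (Illustrative definition only; not the thesis' actual measure.)
    """
    kernel = np.ones(window) / window
    running = np.convolve(wind, kernel, mode="same")  # running mean wind
    gust = wind > gust_factor * running
    return conc[gust].mean() / conc[~gust].mean()

# Synthetic series: particle uptake jumps when the wind is strong
rng = np.random.default_rng(2)
wind = rng.gamma(4.0, 1.0, 10_000)
conc = 1.0 + 5.0 * (wind > 8.0) + rng.normal(0, 0.1, 10_000)
ratio = gust_concentration_ratio(wind, conc)
```

A ratio well above 1 would indicate that particle concentrations are elevated during gusts, which is the qualitative finding reported in the text.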
With its holistic approach, this thesis further contributes to the fundamental understanding of how atmosphere and pedosphere are intertwined and affect each other.
Together with the gradual change of mean values, ongoing climate change is projected to increase the frequency and amplitude of temperature and precipitation extremes in many regions of Europe. The impacts of such mostly short-term extraordinary climate situations on terrestrial ecosystems are of central interest in recent climate change research, because it cannot be assumed per se that known dependencies between climate variables and ecosystems scale linearly. To date, however, a method to quantify such impacts in terms of simultaneities of event time series has been lacking.
In the course of this manuscript, the new statistical approach of Event Coincidence Analysis (ECA) as well as its R implementation is introduced, a methodology that allows assessing whether or not two types of event time series exhibit similar sequences of occurrences. Applications of the method are presented, analyzing climate impacts on different temporal and spatial scales: the impact of extraordinary expressions of various climatic variables on tree stem variations (sub-daily and local scale), the impact of extreme temperature and precipitation events on the flowering time of European shrub species (weekly and country scale), the impact of extreme temperature events on ecosystem health in terms of NDVI (weekly and continental scale), and the impact of El Niño and La Niña events on precipitation anomalies (seasonal and global scale).
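The core quantity of ECA can be sketched in a few lines: the precursor coincidence rate is the fraction of events in one series that are preceded, within a tolerance window ΔT, by at least one event in the other series. The sketch below is a minimal illustration in Python; the thesis's R implementation provides the full method, including the significance testing that a real analysis requires, and the function name here is illustrative.

```python
# Minimal sketch of the core of Event Coincidence Analysis (ECA):
# the fraction of events in series A that have at least one event in
# series B within a window of length delta_t before them.
def precursor_coincidence_rate(events_a, events_b, delta_t):
    """events_a, events_b: lists of event times; delta_t: window length."""
    if not events_a:
        return 0.0
    hits = sum(
        1 for ta in events_a
        if any(0 <= ta - tb <= delta_t for tb in events_b)
    )
    return hits / len(events_a)

# Toy example: 3 of the 4 A-events are preceded by a B-event within delta_t = 2.
a = [5, 10, 15, 20]
b = [4, 9, 19]
print(precursor_coincidence_rate(a, b, delta_t=2))  # 0.75
```

Comparing such a rate against its distribution under randomly placed events is what turns the count into a statistical statement about coinciding event sequences.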
The applications presented in this thesis refine relationships already known from classical methods and also deliver substantially new findings to the scientific community: the widely known positive correlation between flowering time and temperature, for example, is confirmed to be valid for the tails of the distributions, while the widely assumed positive dependency between stem diameter variation and temperature is shown not to hold for very warm and very cold days. The larger-scale investigations underline the sensitivity of anthropogenically shaped landscapes towards temperature extremes in Europe and provide a comprehensive global ENSO impact map for strong precipitation events.
Finally, by publishing the R implementation of the method, this thesis shall enable other researchers to further investigate similar research questions using Event Coincidence Analysis.
Natural extreme events are an integral part of nature on planet Earth. Usually, these events are only considered hazardous to humans if humans are exposed to them; in that case, however, natural hazards can have devastating impacts on human societies. Especially hydro-meteorological hazards have a high damage potential, for example in the form of riverine and pluvial floods, winter storms, hurricanes and tornadoes, which can occur all over the globe. Along with an increasingly warm climate, an increase in extreme weather that potentially triggers natural hazards can be expected. Yet, not only changing natural systems but also changing societal systems contribute to an increasing risk associated with these hazards; this can comprise increasing exposure and possibly also increasing vulnerability to the impacts of natural events. Thus, appropriate risk management is required to adapt all parts of society to existing and upcoming risks at various spatial scales. One essential part of risk management is risk assessment, including the estimation of economic impacts. However, reliable methods for the estimation of economic impacts due to hydro-meteorological hazards are still missing. Therefore, this thesis deals with the question of how the reliability of hazard damage estimates can be improved, represented and propagated across all spatial scales. This question is investigated using the specific example of economic impacts on companies as a result of riverine floods in Germany.
Flood damage models aim to describe the damage processes during a given flood event; in other words, they describe the vulnerability of a specific object to a flood. Such models can be based on empirical data sets collected after flood events. In this thesis, tree-based models trained with survey data are used for the estimation of direct economic flood impacts at the object level. It is found that these machine learning models, in conjunction with increasing sizes of the data sets used to derive them, outperform state-of-the-art damage models. However, despite the performance improvements gained by using multiple variables and more data points, large prediction errors remain at the object level. The occurrence of these high errors was explained by a further investigation using distributions derived from the tree-based models: direct economic impacts on individual objects cannot be modeled by a normal distribution. Yet, most state-of-the-art approaches assume a normal distribution and take mean values as point estimators; consequently, the predictions are unlikely values within the distributions, resulting in high errors. At larger spatial scales, more objects are considered in the damage estimation, which leads to a better fit of the damage estimates to a normal distribution. Consequently, the performance of the point estimators also gets better, although large errors can still occur due to the variance of the normal distribution. It is therefore recommended to use distributions instead of point estimates in order to represent the reliability of damage estimates.
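The problem with taking the mean of a skewed damage distribution as a point estimate can be made concrete with a toy example. The numbers below are invented for illustration: they mimic the per-tree predictions of a tree-based ensemble for a single building, where most trees predict no damage and a few predict large damage.

```python
# Invented per-"tree" damage predictions for one object (skewed, as reported
# in the thesis for object-level damage): most predictions are zero, a few large.
import statistics

per_tree_damage = [0, 0, 0, 0, 0, 0, 0, 1000, 5000, 20000]  # damage in EUR, toy values

mean_estimate = statistics.mean(per_tree_damage)      # the usual point estimator
median_estimate = statistics.median(per_tree_damage)  # the most typical prediction here

# The mean (2600) falls in a region where almost no tree predicts a value,
# so it is an "unlikely value within the distribution"; the median is 0.
print(mean_estimate, median_estimate)
```

Reporting the whole empirical distribution of per-tree predictions, rather than only its mean, is one way to represent the reliability of such object-level estimates.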
In addition, current approaches mostly ignore the uncertainty associated with the characteristics of the hazard and the exposed objects. For a given flood event, for example, the estimation of the water level at a certain building is prone to uncertainties. Current approaches mostly define exposed objects using land use data sets; these data sets often show inconsistencies, which introduce additional uncertainties. Furthermore, state-of-the-art approaches suffer from a lack of consistency when predicting damage at different spatial scales, because different types of exposure data sets are used for model derivation and application. In order to address these issues, a novel object-based method was developed in this thesis. The method enables a seamless estimation of hydro-meteorological hazard damage across spatial scales, including uncertainty quantification. The application and validation of the method resulted in plausible estimations at all spatial scales without overestimating the uncertainty.
It is mainly newly available data sets containing individual buildings that make the application of the method possible, as they allow for the identification of flood-affected objects by overlaying the data sets with water masks. However, the identification of affected objects with two different water masks revealed huge differences in the number of identified objects. Thus, more effort is needed for their identification, since the number of affected objects determines the order of magnitude of the economic flood impacts to a large extent.
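The overlay step itself is conceptually simple, which a minimal sketch can show. Everything here is an assumption for illustration: buildings are reduced to centroid grid cells and the water mask to a small raster, whereas real applications work with footprint polygons and high-resolution inundation maps.

```python
# Illustrative overlay of building locations with a rasterised water mask
# (all data invented; 1 = inundated cell, 0 = dry cell).
water_mask = [
    [0, 1, 1],
    [0, 0, 1],
    [0, 0, 0],
]
buildings = {"A": (0, 1), "B": (2, 0), "C": (1, 2)}  # name -> (row, col) of centroid

affected = sorted(
    name for name, (r, c) in buildings.items() if water_mask[r][c] == 1
)
print(affected)  # ['A', 'C']
```

Because the set of affected objects depends entirely on the mask, two different water masks can, as the thesis reports, yield very different object counts and hence very different damage magnitudes.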
In general, the method represents the uncertainties associated with the three components of risk, namely hazard, exposure and vulnerability, in the form of probability distributions. The object-based approach enables a consistent propagation of these uncertainties in space. Aside from the propagation of damage estimates and their uncertainties across spatial scales, a propagation between models estimating direct and indirect economic impacts was demonstrated. This enables the inclusion of the uncertainties associated with the direct economic impacts in the estimation of the indirect economic impacts. Consequently, the modeling procedure facilitates the representation of the reliability of estimated total economic impacts. Representing the estimates' reliability prevents reasoning based on a false certainty, which might be attributed to point estimates. Therefore, the developed approach facilitates meaningful flood risk management and adaptation planning.
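One standard way to propagate distributions for hazard, vulnerability and exposure into a damage distribution is Monte Carlo sampling; the sketch below illustrates that general idea, not the thesis's specific implementation. All distributions, parameters and the linear depth-damage relation are invented for illustration.

```python
# Hedged Monte Carlo sketch: propagate uncertainty in hazard (water level),
# vulnerability (damage ratio) and exposure (asset value) into a damage
# distribution instead of a single point estimate. All numbers are invented.
import random

random.seed(42)

def sample_damage():
    water_level = random.gauss(1.5, 0.3)  # hazard: uncertain water level [m]
    damage_ratio = min(1.0, max(0.0,      # vulnerability: noisy depth-damage relation
                                0.2 * water_level + random.gauss(0, 0.05)))
    asset_value = random.uniform(90_000, 110_000)  # exposure: uncertain value [EUR]
    return damage_ratio * asset_value

samples = sorted(sample_damage() for _ in range(10_000))
mean = sum(samples) / len(samples)
p5, p95 = samples[500], samples[9_500]  # empirical 90% interval
print(f"mean {mean:.0f} EUR, 90% interval [{p5:.0f}, {p95:.0f}] EUR")
```

Reporting the interval alongside the mean is exactly the kind of reliability representation the thesis argues for, and summing per-object samples rather than per-object means keeps the uncertainty consistent when aggregating to larger spatial scales.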
The successful post-event application and the representation of the uncertainties also qualify the method for use in future risk assessments. Thus, the developed method enables the representation of the assumptions made for future risk assessments, which is crucial information for future risk management. This is an important step forward, since the representation of the reliability associated with all components of risk is currently lacking in state-of-the-art methods assessing future risk.
In conclusion, the use of object-based methods giving results in the form of distributions instead of point estimations is recommended. Improving model performance by means of multi-variable models and additional data points is possible, but the gains are small. Uncertainties associated with all components of damage estimation should be included and represented in the results. Furthermore, the findings of the thesis suggest that, at larger scales, the influence of the uncertainty associated with the vulnerability is smaller than that associated with the hazard and exposure. This leads to the conclusion that, for an increased reliability of flood damage estimations and risk assessments, the improvement and active inclusion of hazard and exposure, including their uncertainties, is needed in addition to improvements of the models describing the vulnerability of the objects.