Thematic role assignment and word order preferences in the child language acquisition of Tagalog
(2018)
A critical task in daily communication is identifying who did what to whom in an utterance, that is, assigning the thematic roles agent and patient in a sentence. This dissertation is concerned with Tagalog-speaking children's use of word order and morphosyntactic markers for thematic role assignment. It aims to explain children's difficulties in interpreting sentences with a non-canonical order of arguments (i.e., patient-before-agent) by testing the predictions of the following accounts: the frequency account (Demuth, 1989), the Competition model (MacWhinney & Bates, 1989), and the incremental processing account (Trueswell & Gleitman, 2004). Moreover, the experiments in this dissertation test the influence of a word order strategy in a language like Tagalog, where the thematic roles in a sentence are always unambiguous, due to its verb-initial order and its voice-marking system, in which the inflection on the verb indicates the thematic role of the noun marked by 'ang.' First, the possible basis for a word order strategy in Tagalog was established using a sentence completion experiment given to adults and 5- and 7-year-old children (Chapter 2) and a child-directed speech corpus analysis (Chapter 3). In general, adults and children showed an agent-before-patient preference, although adults' preference was also affected by sentence voice. Children's comprehension was then examined through a self-paced listening and picture verification task (Chapter 3) and an eye-tracking and picture selection task (Chapter 4), in which word order (agent-initial or patient-initial) and voice (agent voice or patient voice) were manipulated. Offline (i.e., accuracy) and online (i.e., listening times, looks to the target) measures revealed that 5- and 7-year-old Tagalog-speaking children had a bias to interpret the first noun as the agent. Additionally, the use of word order and morphosyntactic markers was found to be modulated by voice: in the agent voice, children relied more on a word order strategy, while in the patient voice they relied on the morphosyntactic markers. These results are only partially explained by the accounts tested in this dissertation. Instead, the findings support computational accounts of incremental word prediction and learning such as Chang, Dell, & Bock's (2006) model.
In the work presented here, we discuss a series of results that are all, in one way or another, connected to the phenomenon of trapping in black hole spacetimes.
First we present a comprehensive review of the Kerr-Newman-Taub-NUT-de-Sitter family of black hole spacetimes and their most important properties. From there we go into a detailed analysis of the behaviour of null geodesics in the exterior region of a sub-extremal Kerr spacetime. We show that most of the well-known fundamental properties of null geodesics can be represented in a single plot. In particular, one can see immediately that the ergoregion and trapping are separated in phase space.
We then consider the sets of future/past trapped null geodesics in the exterior region of a sub-extremal Kerr-Newman-Taub-NUT spacetime. We show that from the point of view of any timelike observer outside of such a black hole, trapping can be understood as two smooth sets of spacelike directions on the celestial sphere of the observer. Therefore the topological structure of the trapped set on the celestial sphere of any observer is identical to that in Schwarzschild.
We discuss how this is relevant to the black hole stability problem.
In a further development of these observations we introduce the notion of what it means for the shadows of two observers to be degenerate. We show that, away from the axis of symmetry, no continuous degeneration exists between the shadows of observers at any point in the exterior region of any Kerr-Newman black hole spacetime of unit mass. Therefore, except possibly for discrete changes, an observer can, by measuring the black hole's shadow, determine the angular momentum and the charge of the black hole under observation, as well as the observer's radial position and angle of elevation above the equatorial plane. Furthermore, the observer's relative velocity compared to a standard observer can also be measured. On the other hand, the black hole shadow does not allow for a full parameter resolution in the case of a Kerr-Newman-Taub-NUT black hole, as a continuous degeneration relating specific angular momentum, electric charge, NUT charge and elevation angle exists in this case.
We then use the celestial sphere to show that trapping is a generic feature of any black hole spacetime.
In the last chapter we prove a generalization of the mode stability result of Whiting (1989) for the Teukolsky equation for the case of real frequencies. The main result of the last chapter states that a separated solution of the Teukolsky equation governing massless test fields on the Kerr spacetime, which is purely outgoing at infinity and purely ingoing at the horizon, must vanish. As a consequence, for real frequencies, there are linearly independent fundamental solutions of the radial Teukolsky equation which are purely ingoing at the horizon and purely outgoing at infinity, respectively. This fact yields a representation formula for solutions of the inhomogeneous Teukolsky equation and was recently used by Shlapentokh-Rothman (2015) for the scalar wave equation.
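For orientation, the separation referred to here follows the standard Teukolsky mode ansatz; the notation below is the common textbook convention, not necessarily that of the thesis:

```latex
% Standard mode ansatz for a spin-weight s test field on Kerr:
\psi(t,r,\theta,\phi)
  = e^{-i\omega t}\, e^{i m \phi}\, S_{s\ell m}(\theta)\, R_{s\ell m}(r)
% Mode stability for real \omega: if R_{s\ell m} is purely outgoing as
% r_* \to +\infty and purely ingoing at the horizon (r_* \to -\infty),
% then R_{s\ell m} \equiv 0.
```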
This thesis investigates the comprehension of the passive voice in three distinct populations. First, the comprehension of passives by adult German speakers was studied, followed by an examination of how German-speaking children comprehend the structure. Finally, bilingual Mandarin-English speakers were tested on their comprehension of the passive voice in English, which is their L2. An integral part of testing the comprehension in all three populations is the use of structural priming. In each of the three distinct parts of the research, structural priming was used for a specific reason. In the study involving adult German speakers, productive and receptive structural priming was directly compared. The goal was to see the effect the two priming modalities have on language comprehension. In the study on German-acquiring children, structural priming was an important tool in answering the question regarding the delayed acquisition of the passive voice. Finally, in the study on the bilingual population, cross-linguistic priming was used to investigate the importance of word order in the priming effect, since Mandarin and English have different word orders in passive voice sentences.
To speak of secularization in a country like Israel, where religion evidently forms an important part of public life, seems contradictory. Yet Israel, driven by globalization, pluralization and modernization, stands at a crossroads. Parts of Israeli society are already secularizing, and the Orthodox religious hegemony appears to be crumbling. But does this justify speaking of a change in mentality or of a secularization of the state? Can a process of secularization succeed in Israel? What form must a secular state take in order to offer the different religious denominations the same opportunities? What roles do the Jewish diaspora, immigration and societal minorities play in this? The aim of the present work is to examine these questions. Even though the close linkage of nation and religion in Judaism seemingly makes secularization impossible, drawing on the concepts of secularism and nationalism in the context of the historical development of Judaism permits a more differentiated view of this linkage. The use of various qualitative methods provides a multifaceted approach to the research question: a hermeneutic reading of the relevant theoretical concepts and an analysis of the relationship between nation and religion in Judaism; newspaper articles to trace the current debates in Israeli society; the evaluation of statistics; and expert interviews. Ultimately, the work aims to show that Israel is indeed increasingly secularizing, but faces various challenges, such as societal pluralism, the unstable security situation and a growing religious nationalism.
Synthesis, assembly and thermo-responsivity of polymer-functionalized magnetic cobalt nanoparticles
(2018)
This thesis mainly covers the synthesis, surface modification, magnetic-field-induced assembly and thermo-responsive functionalization of superparamagnetic Co NPs initially stabilized by the hydrophobic small molecules oleic acid (OA) and trioctylphosphine oxide (TOPO), as well as the synthesis of both superparamagnetic and ferromagnetic Co NPs using end-functionalized polystyrene as a stabilizer.
Co NPs, due to their excellent magnetic and catalytic properties, have great potential for application in various fields, such as ferrofluids, catalysis, and magnetic resonance imaging (MRI). Superparamagnetic Co NPs are especially interesting, since they exhibit zero coercivity: they become magnetized in an external magnetic field and reach their saturation magnetization rapidly, but no magnetic moment remains after removal of the applied magnetic field. Therefore, they do not agglomerate in the body when used in biomedical applications. Decomposition of metallic precursors at high temperature is one of the most important methods for preparing monodisperse magnetic NPs, providing tunability in size and shape. Hydrophobic ligands like OA, TOPO and oleylamine are often used both to control the growth of the NPs and to protect them from agglomeration. The as-prepared magnetic NPs can be used in biological applications once they are transferred into water. Moreover, their supercrystal assemblies have potential for high-density data storage and electronic devices. In addition to small molecules, polymers can also be used as surfactants for the synthesis of ferromagnetic and superparamagnetic NPs by changing the reaction conditions. Chapter 2 therefore gives an overview of the basic concepts of synthesis, surface modification and self-assembly of magnetic nanoparticles, illustrated with examples from recent work.
The hydrophobicity of Co NPs synthesized with small-molecule surfactants limits their biological applications, which require a hydrophilic or aqueous environment. Surface modification (e.g., ligand exchange) is a general strategy for either phase transfer or surface functionalization. Therefore, in chapter 3, a ligand exchange process was conducted to functionalize the surface of Co NPs. PNIPAM is one of the most popular smart polymers; its lower critical solution temperature (LCST) is around 32 °C, with a reversible conformational change between hydrophobic and hydrophilic states. Novel nanocomposites of superparamagnetic Co NPs and thermo-responsive PNIPAM are therefore of great interest. Thus, well-defined superparamagnetic Co NPs were first synthesized through the thermolysis of cobalt carbonyl using OA and TOPO as surfactants. A functional ATRP initiator, containing an amine (as anchoring group) and a 2-bromopropionate group (SI-ATRP initiator), was used to replace the original ligands. This process is rapid and facile for efficient surface functionalization, and afterwards the Co NPs can be dispersed in the polar solvent DMF without aggregation. FT-IR spectroscopy showed that the TOPO was completely replaced, but a small amount of OA remained on the surface. A TGA measurement allowed the calculation of the grafting density of the initiator as around 3.2 initiators/nm². Then, surface-initiated ATRP was conducted to polymerize NIPAM on the surface of the Co NPs, rendering the nanocomposites water-dispersible. A temperature-dependent dynamic light scattering study showed the aggregation behavior of PNIPAM-coated Co NPs upon heating, and this process was proven to be reversible. The combination of superparamagnetic and thermo-responsive properties in these hybrid nanoparticles is promising for future applications, e.g. in biomedicine.
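As an illustration of how such a grafting density follows from a TGA weight loss, the sketch below implements the standard spherical-particle estimate; all input values (diameter, organic fraction, ligand molar mass) are assumptions for demonstration, not measurements from the thesis:

```python
import math

# Hedged sketch: estimating a ligand grafting density (ligands per nm^2)
# on spherical nanoparticles from a TGA organic weight fraction. All input
# values below are illustrative assumptions, not the thesis's measurements.

AVOGADRO = 6.022e23  # mol^-1

def grafting_density(d_nm, rho_core, organic_frac, M_ligand):
    """d_nm: core diameter [nm]; rho_core: core density [g/cm^3];
    organic_frac: organic mass fraction from TGA (0..1);
    M_ligand: ligand molar mass [g/mol]."""
    r_cm = d_nm * 1e-7 / 2                                 # radius in cm
    core_mass = (4 / 3) * math.pi * r_cm ** 3 * rho_core   # g per particle
    organic_mass = core_mass * organic_frac / (1 - organic_frac)
    n_ligands = organic_mass / M_ligand * AVOGADRO         # per particle
    area_nm2 = 4 * math.pi * (d_nm / 2) ** 2               # core surface
    return n_ligands / area_nm2

# Illustrative call with assumed values (Co density ~8.9 g/cm^3):
print(grafting_density(d_nm=10, rho_core=8.9, organic_frac=0.12, M_ligand=250))
# -> roughly 5 ligands/nm^2 under these assumptions
```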
In chapter 4, the magnetic-field-induced assembly of superparamagnetic cobalt nanoparticles both on solid substrates and at liquid-air interfaces was investigated. OA- and TOPO-coated Co NPs were synthesized via the thermolysis of cobalt carbonyl and dispersed in either hexane or toluene. The Co NP dispersion was dropped onto substrates (e.g., TEM grid, silicon wafer) and onto liquid-air (water-air or ethylene glycol-air) interfaces. Due to the attractive dipolar interaction, 1-D chains formed in the presence of an external magnetic field. It is known that the concentration and the strength of the magnetic field can affect the assembly behavior of superparamagnetic Co NPs, so the influence of these two parameters on the morphology of the assemblies was studied. The 1-D chains formed were shorter and more flexible at either lower concentration of the Co NP dispersion or lower strength of the external magnetic field, owing to thermal fluctuations. By increasing either the concentration of the NP dispersion or the strength of the applied magnetic field, these chains became longer, thicker and straighter. The likely reason is that a high concentration led to a high fraction of short dipolar chains, whose interaction resulted in longer and thicker chains under the applied magnetic field. On the other hand, as the magnetic field increased, the induced moments of the magnetic nanoparticles became larger and dominated over thermal fluctuations. Thus, the short chains connected to each other and grew in length; thicker chains were also observed through chain-chain interactions. Furthermore, the induced moments of the NPs tended to align in one direction with increased magnetic field, so the chains were straighter. Comparing the assembly on substrates, at the water-air interface and at the ethylene glycol-air interface, the Co NPs in hexane assembled at the ethylene glycol-air interface showed the most regular and homogeneous chain structures, owing to the better spreading of the dispersion on the ethylene glycol subphase than on the water subphase or on substrates. Magnetic-field-induced assembly of superparamagnetic nanoparticles could thus provide a powerful approach for applications in data storage and electronic devices.
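The competition between dipolar attraction and thermal fluctuation invoked here is commonly quantified by the dipolar coupling parameter; this is the standard ferrofluid estimate, with notation assumed rather than taken from the thesis:

```latex
% Dipolar coupling parameter for two particle moments m in contact:
\lambda = \frac{\mu_0 m^2}{4\pi d^3 k_B T}
% m: (induced) magnetic moment per particle; d: center-to-center contact
% distance. Chain formation becomes favorable roughly once \lambda > 1,
% i.e. when the field-induced moments outweigh thermal fluctuations.
```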
Chapter 5 presents the synthesis of superparamagnetic and ferromagnetic cobalt nanoparticles through a dual-stage thermolysis of cobalt carbonyl (Co2(CO)8) using polystyrene as surfactant. Amine end-functionalized polystyrene surfactants with different molecular weights were prepared via the atom transfer radical polymerization technique. The molecular weight of the polystyrene was determined by gel permeation chromatography (GPC) and matrix-assisted laser desorption/ionization time-of-flight (MALDI-ToF) mass spectrometry. The results showed that, when the molecular weight distribution is narrow (Mw/Mn < 1.2), GPC and MALDI-ToF MS provide very similar results. For example, a molecular weight of 10600 Da was obtained by MALDI-ToF MS, while GPC gave 10500 g/mol (Mw/Mn = 1.17). If the distribution is broad, however, MALDI-ToF MS cannot provide an accurate value; this was exemplified by a polymer with a molecular weight of 3130 Da measured by MALDI-ToF MS, for which GPC showed 2300 g/mol (Mw/Mn = 1.38). The size, size distribution and magnetic properties of the hybrid particles varied with both the molecular weight and the concentration of the polymer surfactants. TEM characterization showed that the size of cobalt nanoparticles stabilized with polystyrene of lower molecular weight (Mn = 2300 g/mol) varied from 12 to 22 nm, while the size of nanoparticles coated with polystyrene of middle (Mn = 4500 g/mol) and higher molecular weight (Mn = 10500 g/mol) showed little change. Magnetic measurements showed that the small cobalt particles (12 nm) were superparamagnetic, while larger particles (21 nm) were ferromagnetic and assembled into 1-D chains. The grafting density calculated from thermogravimetric analysis was higher for polystyrene of lower molecular weight (Mn = 2300 g/mol) than for that of higher molecular weight (Mn = 10500 g/mol): owing to its larger steric hindrance, polystyrene of higher molecular weight cannot form a dense shell on the nanoparticle surface, which results in a lower grafting density. Wide-angle X-ray scattering measurements revealed the epsilon-cobalt crystalline phase for both the superparamagnetic Co NPs coated with polystyrene (Mn = 2300 g/mol) and the ferromagnetic Co NPs coated with polystyrene (Mn = 10500 g/mol). Furthermore, a stability study showed that PS-Co NPs prepared with higher polymer concentration and higher polymer molecular weight exhibited better stability.
Various ways of preparing enantiomerically pure 2-amino[6]helicene derivatives were explored. Ni(0)-mediated cyclotrimerization of enantiopure triynes provided (M)- and (P)-7,8-bis(p-tolyl)hexahelicene-2-amine in >99% ee as well as its benzoderivative in >99% ee. The stereocontrol was found to be inefficient for a 2-aminobenzo[6]helicene congener with an embedded five-membered ring. Helically chiral imidazolium salts bearing one or two helicene moieties have been synthesized and applied in enantioselective [2+2+2] cyclotrimerization catalyzed by an in situ formed Ni(0)-NHC complex. The synthesis of the first helically chiral Pd- and Ru-NHC complexes and their application in enantioselective catalysis was demonstrated. The latter show promising results in enantioselective olefin metathesis reactions. A mechanistic proposal for asymmetric ring-closing metathesis is provided.
Synthesis of artificial building blocks for sortase-mediated ligation and their enzymatic linkage
(2018)
The enzyme sortase A catalyzes the formation of a peptide bond between the recognition sequence LPXTG and an oligoglycine. While manifold ligations between proteins and various biomolecules, between proteins and small synthetic molecules, and between proteins and surfaces have been reported, the aim of this thesis was to investigate the sortase-catalyzed linkage of artificial building blocks. This could pave the way for the use of sortase A in chemical applications and perhaps even in materials science.
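Schematically, the transpeptidation proceeds by cleavage between the T and G of the recognition motif and attack by the oligoglycine nucleophile; this is the standard sortase A mechanism, and the labels A, B, C below are placeholder building blocks, not thesis nomenclature:

```latex
% Sortase A transpeptidation (schematic; A, B, C are placeholders):
\mathrm{A{-}LPXT{\downarrow}G{-}B} \;+\; \mathrm{G_5{-}C}
  \;\xrightarrow{\ \text{sortase A}\ }\;
\mathrm{A{-}LPXT{-}G_5{-}C} \;+\; \mathrm{G{-}B}
```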
For the proof of concept, the studied systems were first kept as simple as possible by choosing easily accessible silica NPs and commercially available polymers. These building blocks were functionalized with peptide motifs for sortase-mediated ligation. Silica nanoparticles were synthesized with diameters of 60 and 200 nm and surface-modified with C=C functionalities. Then, peptides bearing a terminal cysteine were covalently linked by means of a thiol-ene reaction: 60 nm SiO2 NPs were functionalized with pentaglycines, while peptides with the LPETG motif were linked to the 200 nm silica particles. Polyethylene glycol (PEG) and poly(N-isopropylacrylamide) (PNIPAM) were likewise functionalized with peptides by thiol-ene reaction between cysteine residues and C=C units in the polymer end groups, yielding G5-PEG and PNIPAM-LPETG conjugates. With this set of building blocks, NP–polymer hybrids, NP–NP, and polymer–polymer structures were generated by sortase-mediated ligation, and product formation was shown by transmission electron microscopy, MALDI-ToF mass spectrometry and dynamic light scattering, among others. Thus, the linkage of these artificial building blocks by the enzyme sortase A could be demonstrated.
However, when commercially available polymers were used, purification of the polymer–peptide conjugates was impossible and resulted in a mixture containing unmodified polymer. Therefore, strategies were developed for the in-house synthesis of pure peptide–polymer and polymer–peptide conjugates as building blocks for sortase-mediated ligation. The designed routes are based on preparing polymer blocks via RAFT polymerization from CTAs attached to the N- or C-terminus, respectively, of a peptide. GG-PNIPAM was synthesized through attachment of a suitable RAFT CTA to Fmoc-GG in an esterification reaction, followed by polymerization of NIPAM and cleavage of the Fmoc protecting group. Furthermore, several peptides were synthesized by solid-phase peptide synthesis. The linkage of a RAFT CTA (or polymerization initiator) to the N-terminus of a peptide can be conducted in an automated fashion as the last step in a peptide synthesizer. The synthesis of such a conjugate could not be realized in the time frame of this thesis, but many promising strategies exist to pursue this route using different coupling reagents. Such polymer building blocks can be used to synthesize protein-polymer conjugates catalyzed by sortase A, and the approach can be extended to the synthesis of block copolymers by using polymer blocks with peptide motifs on both ends.
Although the proof of concept demonstrated in this thesis only shows examples that can also be synthesized by exclusively chemical techniques, a toolbox of such building blocks will enable the future formation of new materials and pave the way for the application of enzymes in materials science. In addition to nanoparticle systems and block copolymers, this also includes combination with protein-based building blocks to form hybrid materials. Hence, sortase could become an enzymatic tool that complements established chemical linking technologies and provides specific peptide motifs that are orthogonal to all existing chemical functional groups.
In the present work, we use symbolic regression for automated modeling of dynamical systems. Symbolic regression is a powerful and general method suitable for data-driven identification of mathematical expressions. In particular, the structure and parameters of those expressions are identified simultaneously.
We consider two main variants of symbolic regression: sparse regression-based and genetic programming-based symbolic regression. Both are applied to identification, prediction and control of dynamical systems.
We introduce a new methodology for the data-driven identification of nonlinear dynamics for systems undergoing abrupt changes. Building on a sparse regression algorithm derived earlier, the model after the change is defined as a minimum update with respect to a reference model of the system identified prior to the change. The technique is successfully exemplified on the chaotic Lorenz system and the van der Pol oscillator. Issues such as computational complexity, robustness against noise and requirements with respect to data volume are investigated.
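To make the sparse-regression idea concrete, here is a minimal sketch of sequential thresholded least squares applied to simulated Lorenz data, in the spirit of SINDy-style identification; step sizes, thresholds, and the candidate library are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

# Hedged sketch of sparse-regression system identification (SINDy-style);
# a minimal illustration, not the thesis code.

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8 / 3):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Simulate a trajectory with coarse Euler steps (for illustration only).
dt, n = 1e-3, 20000
X = np.empty((n, 3))
X[0] = (1.0, 1.0, 1.0)
for i in range(1, n):
    X[i] = X[i - 1] + dt * lorenz_rhs(X[i - 1])
dXdt = np.gradient(X, dt, axis=0)  # numerical time derivatives

# Candidate library: polynomials up to degree 2 in (x, y, z).
x, y, z = X.T
Theta = np.column_stack([np.ones(n), x, y, z,
                         x * x, x * y, x * z, y * y, y * z, z * z])

# Sequential thresholded least squares: fit, zero small terms, refit.
Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
for _ in range(10):
    Xi[np.abs(Xi) < 0.1] = 0.0
    for k in range(3):
        active = np.abs(Xi[:, k]) > 0
        if active.any():
            Xi[active, k] = np.linalg.lstsq(
                Theta[:, active], dXdt[:, k], rcond=None)[0]

print(np.round(Xi, 2))  # nonzero entries should recover the Lorenz terms
```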
We show how symbolic regression can be used for time series prediction. Again, issues such as robustness against noise and convergence rate are investigated using the harmonic oscillator as a toy problem. In combination with embedding, we demonstrate the prediction of a propagating front in coupled FitzHugh-Nagumo oscillators. Additionally, we show how we can enhance numerical weather predictions to commercially forecast power production of green energy power plants.
We employ symbolic regression for synchronization control in coupled van der Pol oscillators. Different coupling topologies are investigated. We address issues such as plausibility and stability of the control laws found. The toolkit has been made open source and is used in turbulence control applications.
Genetic programming based symbolic regression is very versatile and can be adapted to many optimization problems. The heuristic-based algorithm allows for cost-efficient optimization of complex tasks.
We emphasize the ability of symbolic regression to yield white-box models. In contrast to black-box models, such models are accessible and interpretable, which allows the use of established tool chains.
The utilization of lignin as a renewable electrode material for electrochemical energy storage is a sustainable approach for future batteries and supercapacitors. A composite electrode was fabricated from Kraft lignin and conductive carbon, and the charge storage contributions were determined in terms of electrical double layer (EDL) and redox reactions. The important factors for achieving high faradaic charge storage capacity are a high surface area, the accessibility of the redox sites in lignin, and the interaction of these sites with the conductive additives. A thinner layer of lignin covering the high-surface-area carbon facilitates the electron transfer process, shortening the pathway from the active sites of the nonconductive lignin to the current collector and thereby improving the faradaic charge storage capacity.
Composite electrodes from lignin and carbon would be even more sustainable if the fluorinated binder could be omitted. A new route to fabricate a binder-free composite electrode from Kraft lignin and high-surface-area carbon is proposed, based on crosslinking lignin with glyoxal. A high molecular weight of lignin is obtained, which enhances both the electroactivity and the binding capability in composite electrodes. The order of the processing steps for crosslinking lignin on the composite electrode plays a crucial role in achieving a stable electrode and high charge storage capacity. The crosslinked lignin-based electrodes are promising since they allow for more stable, sustainable, halogen-free and environmentally benign devices for energy storage applications. Furthermore, increasing the amount of redox-active groups (quinone groups) in lignin is useful to enhance the capacity in lithium battery applications. Direct oxidative demethylation by cerium ammonium nitrate has been carried out under mild conditions, demonstrating that an increase in quinone groups can enhance the performance of a lithium battery. Thus, lignin is a promising material and could be a good candidate for application in sustainable energy storage devices.
Numbers are omnipresent in daily life. They vary in display format and in their meaning, so it does not seem self-evident that our brains process them more or less easily and flexibly. The present thesis addresses mental number representations in general, and specifically the impact of finger counting on mental number representations. Finger postures that result from finger counting experience are one of many ways to convey numerical information. They are, however, probably the one in which the numerical content becomes most tangible. By investigating the role of fingers in adults' mental number representations, the four presented studies also tested the Embodied Cognition hypothesis, which predicts that bodily experience (e.g., finger counting) during concept acquisition (e.g., of number concepts) remains an immanent part of these concepts. The studies focussed on different aspects of finger counting experience. First, the consistency and further details of spontaneously used finger configurations were investigated when participants repeatedly produced finger postures according to specific numbers (Study 1). Furthermore, finger counting postures (Study 2), different finger configurations (Studies 2 and 4), finger movements (Study 3), and tactile finger perception (Study 4) were investigated regarding their capability to affect number processing. Results indicated that active production of finger counting postures and single finger movements, as well as passive perception of tactile stimulation of specific fingers, co-activated associated number knowledge and facilitated responses towards corresponding magnitudes and number symbols. Overall, finger counting experience was reflected in specific effects in the mental number processing of adult participants. This indicates that finger counting experience is an immanent part of mental number representations.
Findings are discussed in the light of a novel model. The MASC (Model of Analogue and Symbolic Codes) combines and extends two established models of number and magnitude processing. In particular, a symbolic motor code is introduced as an essential part of the model. It comprises canonical finger postures (i.e., postures habitually used to represent numbers) and finger-number associations. The present findings indicate that finger counting functions both as a sensorimotor magnitude and as a symbolic representational format, and that it thereby directly mediates between physical and symbolic size. The implications are relevant both for basic research on mental number representations and for pedagogic practice regarding the effectiveness of finger counting as a means to acquire a fundamental grasp of numbers.
Active and passive source data from two seismic experiments within the interdisciplinary project TIPTEQ (from The Incoming Plate to mega Thrust EarthQuake processes) were used to image and identify the structural and petrophysical properties (such as P- and S-velocities, Poisson's ratios, pore pressure, density and amount of fluids) within the Chilean seismogenic coupling zone at 38.25°S, where in 1960 the largest earthquake ever recorded (Mw 9.5) occurred. Two S-wave velocity models calculated using traveltime and noise tomography techniques were merged with an existing velocity model to obtain a 2D S-wave velocity model that gathered the advantages of each individual model. In a following step, P- and S-reflectivity images of the subduction zone were obtained using different pre-stack and post-stack depth migration techniques. Among them, the recent pre-stack line-drawing depth migration scheme yielded revealing results. Next, synthetic seismograms modelled using the reflectivity method allowed, through their input 1D synthetic P- and S-velocities, the composition and rocks within the subduction zone to be inferred. Finally, an image of the subduction zone is given, jointly interpreting the results from this work with results from other studies. The Chilean seismogenic coupling zone at 38.25°S shows a continental crust with highly reflective horizontal as well as (steeply) dipping events. Among them, the Lanalhue Fault Zone (LFZ), which is interpreted to be east-dipping, is imaged to very shallow depths. Some steep reflectors are observed for the first time, for example one near the coast, related to high seismicity, and another one near the LFZ. Steep shallow reflectivity towards the volcanic arc could be related to a steep west-dipping reflector interpreted as fluids and/or melts migrating upwards due to material recycling in the continental mantle wedge. The high resolution of the S-velocity model in the first kilometres allowed the identification of several sedimentary basins, characterized by very low P- and S-velocities, high Poisson's ratios and possibly steep reflectivity. Such high Poisson's ratios are also observed within the oceanic crust, which reaches the seismogenic zone hydrated due to bending-related faulting and is interpreted to release water until reaching the coast and under the continental mantle wedge. In terms of seismic velocities, the inferred composition and rocks in the continental crust are in agreement with field geology observations at the surface along the profile. Furthermore, there is no requirement to call on the existence of measurable amounts of present-day fluids above the plate interface in the continental crust of the Coastal Cordillera and the Central Valley in this part of the Chilean convergent margin. A large-scale anisotropy in the continental crust and upper mantle, previously proposed from magnetotelluric studies, is also supported by the seismic velocities. However, quantitative studies on this topic in the continental crust of the Chilean seismogenic zone at 38.25°S do not exist to date.
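For reference, the Poisson's ratios discussed throughout follow from the P- and S-wave velocities via the standard elastic relation:

```latex
\nu = \frac{V_P^2 - 2 V_S^2}{2\left(V_P^2 - V_S^2\right)}
```

High Poisson's ratios thus correspond to low V_S relative to V_P, as expected for fluid-bearing or hydrated rocks.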
Amorphous calcium carbonate (ACC) is a widespread biological material found in many organisms, such as sea urchins and mollusks, where it serves either as a precursor phase for the crystalline biominerals or is stabilized and used in the amorphous state. As ACC readily crystallizes, stabilizers such as anions, cations or macromolecules are often present to avoid or delay unwanted crystallization. Furthermore, additives often control the properties of the material to suit the specific function needed by the organism, for example cystoliths in leaves that scatter light to optimize energy uptake from the sun, or calcite/aragonite crystals used in the protective shells of mussels and gastropods. The lifetime of the amorphous phase is controlled by its kinetic stability against crystallization. This has often been linked to water, which plays a role in the mobility of ions and hence the probability of forming crystalline nuclei that initiate crystallization. However, it is unclear how the water molecules are incorporated within the amorphous phase: as liquid confined in pores, as structural water binding to the ions, or as a mixture of both. It is also unclear how this is perturbed when additives are present, especially Mg2+, one of the most common additives found in biogenic samples. Mg2+ is expected to have a strong influence on the water incorporated into ACC, given the high energy barrier to dehydration of magnesium ions compared to calcium ions in solution.
During the last 10-15 years, there has been a large effort to understand the local environment of the ions/molecules and how this affects the properties of the amorphous phase, but only a few aspects of the structure have so far been well described in the literature. This is partly due to the low stability of ACC when exposed to air, where it tends to crystallize within minutes, and to the limited quantities of ACC produced in traditional synthesis routes. A further obstacle has been the difficulty of modeling the local structure based on experimental data. To solve the problems of stability and sample size, a few studies have used stabilizers such as Mg2+ or OH- and severely dehydrated the samples so as to stabilize the amorphous state, allowing combined neutron and x-ray analysis to be performed. However, so far, a clear description of the local environments of water present in the structure has not been reported.
In this study we show that ACC can be synthesized without any stabilizing additives in the quantities necessary for neutron measurements, and that accurate models can be derived with the help of empirical potential structure refinement. These analyses show that there is a wide range of local environments for all of the components in the system, suggesting that the amorphous phase is highly inhomogeneous, without any phase separation between ions and water. We also show that the water in ACC is mainly structural and that there is no confined or liquid-like water present in the system. Analysis of amorphous magnesium carbonate further shows that there is a large difference in the local structure of the two cations and that Mg2+, surprisingly, interacts with significantly fewer water molecules than Ca2+ despite its higher dehydration energy. All in all, this shows that water molecules act as a structural component of ACC whose strong binding to cations and anions probably retards or prevents the crystallization of the amorphous phase.
The interaction between surfaces displaying end-grafted hydrophilic polymer brushes plays important roles in biology and in many wet-technological applications. The outer surfaces of Gram-negative bacteria, for example, are composed of lipopolysaccharide (LPS) molecules exposing oligo- and polysaccharides to the aqueous environment. This unique, structurally complex biological interface is of great scientific interest as it mediates the interaction of bacteria with neighboring bacteria in colonies and biofilms. The interaction between polymer-decorated surfaces is generally coupled to the distance-dependent conformation of the polymer chains. Therefore, structural insight into the interacting surfaces is a prerequisite to understand the interaction characteristics as well as the underlying physical mechanisms. This problem has been addressed by theory, but accurate experimental data on polymer conformations under confinement are rare, because obtaining perturbation-free structural insight into buried soft interfaces is inherently difficult.
In this thesis, lipid membrane surfaces decorated with hydrophilic polymers of technological and biological relevance are investigated under controlled interaction conditions, i.e., at defined surface separations. For this purpose, dedicated sample architectures and experimental tools are developed. Pressure-distance curves and distance-dependent polymer conformations, in terms of brush compression and mutual interpenetration, are determined via ellipsometry and neutron reflectometry. Additional element-specific structural insight into the end-point distribution of interacting brushes is obtained by standing-wave x-ray fluorescence (SWXF).
The methodology is first established for poly(ethylene glycol) (PEG) brushes of defined length and grafting density. For this system, neutron reflectometry revealed pronounced brush interpenetration, which is not captured in common brush theories and therefore motivates rigorous simulation-based treatments. In a second step the same approach is applied to realistic mimics of the outer surfaces of Gram-negative bacteria: monolayers of wild-type LPSs extracted from E. coli O55:B5 displaying strain-specific O-side chains. The neutron reflectometry experiments yield unprecedented structural insight into bacterial interactions, which are of great relevance for the properties of biofilms.
The question of the cohesion of an entire society is one of the central questions of the social sciences and sociology. Since the transition to modernity, the problem of the cohesion of differentiating societies has been the subject of scientific and societal discourse. In the present study, social integration represents a form of successful socialization that is articulated in the reproduction of symbolic and non-symbolic resources. The result of this reproduction is pluralistic forms of socialization which, with regard to political preferences, give rise to conflicting interests. These preferences find expression in different forms, in their intensity, and in the perception of political participation. Since modern political rule, owing to its legal and institutional resources, can exert a significant influence on social reproduction (e.g., through social policy), the direct influencing of political decisions, as the articulation of the differing preferences that emerge from societal cleavages, constitutes the only legitimate means of redistributing resources at the political level. This makes the connection between integration and political participation visible. Members who are well integrated into society are, by virtue of their broad participation in reproduction processes, able to recognize their own interests and to express them through political activity. The empirical findings convey the impression that democratic conflict in modern society is no longer shaped directly by class membership and class interests, but rather by access to and the availability of symbolic and non-symbolic resources. Consequently, the research question of the present work is whether integrated societies are politically more active.
This question is examined using aggregate data from democratically constituted political systems that are considered established democracies and that differ in the breadth of their welfare-state measures. The hypotheses were tested empirically using bivariate and multivariate regression analyses. They can be summarized in a single hypothesis: the stronger the social integration of a society, the greater its conventional and unconventional political participation. In general terms, the data permit the statement that the social integration of a society has positive effects on the frequency of political participation within that society. More strongly integrated societies are politically more active, regardless of the form (conventional or unconventional) of political engagement. The direct effect of society-wide integration is stronger on conventional forms than on unconventional ones. This statement holds only if elements of the electoral system, such as proportional representation, and GDP are not taken into account. Based on the results with control variables, the data support the macro-level conclusion that, alongside a high level of social integration, an electoral system geared toward participation and a high level of economic development are also conducive to a high level of political participation.
Solar activity and its consequences affect space weather and Earth's climate. Solar activity exhibits a cyclic behaviour with a period of about 11 years. The properties of the solar cycle are governed by the dynamo operating in the interior of the Sun and are distinctive for each cycle. Extending the knowledge of solar cycle properties into the past is essential for understanding the solar dynamo and for forecasting space weather; it can be acquired through the analysis of historical sunspot drawings. Sunspots are dark areas on the solar surface associated with strong magnetic fields. They are the oldest and longest available observed features of solar activity.
One of the longest available records of sunspot drawings is the collection by Samuel Heinrich Schwabe during 1825–1867. The sunspot sizes measured from the digitized Schwabe drawings are not to scale and need to be converted into physical sunspot areas. We employed a statistical approach assuming that the area distribution of sunspots was the same in the 19th century as it was in the 20th century. Umbral areas for about 130 000 sunspots observed by Schwabe were obtained. The annually averaged sunspot areas correlate reasonably well with the sunspot number. Tilt angles and polarity separations of sunspot groups were calculated assuming the groups to be bipolar; there is, of course, no polarity information in the observations. We derived an average tilt angle by attempting to exclude unipolar groups, requiring a minimum separation of the two surmised polarities and applying an outlier-rejection method that follows the evolution of each group and detects the moment when it turns unipolar as it decays. As a result, the tilt angles, although displaying considerable natural scatter, are on average 5.85° ± 0.25°, with the leading polarity located closer to the equator, in good agreement with tilt angles obtained from 20th-century data sets. Sources of uncertainty in the tilt angle determination are discussed and need to be addressed whenever different data sets are combined.
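For reference, the tilt angle of a bipolar group is conventionally computed from the heliographic positions of the leading and following polarities; the sign convention below (positive when the leading polarity is closer to the equator) is a common choice and an assumption here, since the thesis's exact convention is not restated:

```latex
% Conventional bipolar-group tilt angle (Howard-type definition):
\tan\alpha = \frac{\lambda_{\mathrm{f}} - \lambda_{\mathrm{l}}}
                  {\left(\varphi_{\mathrm{l}} - \varphi_{\mathrm{f}}\right)
                   \cos\lambda_{\mathrm{m}}}
% \lambda, \varphi: heliographic latitude and longitude of the following (f)
% and leading (l) polarities; \lambda_m is the mean latitude of the group.
```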
Digital images of observations printed in the books Rosa Ursina and Prodromus pro sole mobili by Christoph Scheiner, as well as the drawings from Scheiner's letters to Marcus Welser, are analyzed to obtain information on the positions and sizes of sunspots that appeared before the Maunder minimum. In most cases, the given orientation of the ecliptic is used to set up the heliographic coordinate system for the drawings. Positions and sizes are measured manually by displaying the drawings on a computer screen. Very early drawings have no indication of the solar orientation. A rotational matching using spots common to adjacent days is used in some cases, while in other cases the most likely assumption appeared to be that the images were aligned with a zenith-horizon coordinate system. In total, 8167 sunspots were measured. A distribution of sunspot latitudes versus time (butterfly diagram) is obtained for Scheiner's observations. The observations of 1611 are very inaccurate, but the drawings of 1612 have at least an indication of the solar orientation, while the remaining spot positions from 1618–1631 have good to very good accuracy. We also computed 697 tilt angles of apparent bipolar sunspot groups observed in the period 1618–1631. We find that the average tilt angle of nearly 4° does not differ significantly from the 20th-century values.
The properties of the solar cycle seem to be related to the tilt angles of sunspot groups, and the tilt angle is an important parameter in surface flux transport models. The tilt angles of bipolar sunspot groups from various historical sets of solar drawings, including those by Schwabe and Scheiner, are analyzed. Data by Scheiner, Hevelius, Staudacher, Zucconi, Schwabe, and Spörer deliver a series of average tilt angles spanning a period of 270 years, in addition to previously found values for 20th-century data obtained by other authors. We find that the average tilt angles before the Maunder minimum were not significantly different from modern values. However, the average tilt angles in the 50 years after the Maunder minimum, namely for cycles 0 and 1, were much lower and near zero. The typical tilt angles before the Maunder minimum suggest that abnormally low tilt angles were not responsible for driving the solar cycle into a grand minimum.
With the Schwabe (1826–1867) and Spörer (1866–1880) sunspot data, the butterfly diagram of sunspot groups extends back to 1826. A recently developed method, based on the long gaps in sunspot group occurrence at different latitudinal bands, is used to separate the wings of the butterfly diagram. The cycle-to-cycle variation in the start (F), end (L), and highest (H) latitudes of the wings with respect to the strength of the wings is analyzed. On the whole, the wings of stronger cycles tend to start at higher latitudes and have a greater extent. The time spans of the wings and the time difference between the wings in the northern hemisphere display a quasi-periodicity of 5–6 cycles. The average wing overlap is zero in the southern hemisphere, whereas it is 2–3 months in the north. A marginally significant oscillation of about 10 solar cycles is found in the asymmetry of the L latitudes. This latest, extended database of butterfly wings provides new observational constraints on the spatio-temporal distribution of sunspot occurrences over the solar cycle for solar dynamo models.
Signals stored in sediment
(2018)
Tectonic and climatic boundary conditions determine the amount and the characteristics (size distribution and composition) of sediment that is generated and exported from mountain regions. On millennial timescales, rivers adjust their morphology such that the incoming sediment (Qs,in) can be transported downstream by the available water discharge (Qw). Changes in climatic and tectonic boundary conditions thus trigger an adjustment of the downstream river morphology. Understanding the sensitivity of river morphology to perturbations in boundary conditions is therefore of major importance, for example, for flood assessments, infrastructure and habitats. Although we have a general understanding of how rivers evolve over longer timescales, the prediction of channel response to changes in boundary conditions on a more local scale and over shorter timescales remains a major challenge. To better predict morphological channel evolution, we need to test (i) how channels respond to perturbations in boundary conditions and (ii) how signals reflecting the persisting conditions are preserved in sediment characteristics. This information can then be applied to reconstruct how local river systems have evolved over time.
In this thesis, I address those questions by combining targeted field data collection in the Quebrada del Toro (Southern Central Andes of NW Argentina) with cosmogenic nuclide analysis and remote sensing data. In particular, I (1) investigate how information on hillslope processes is preserved in the 10Be concentration (geochemical composition) of fluvial sediments and how those signals are altered during downstream transport. I complement the field-based approach with physical experiments in the laboratory, in which I (2) explore how changes in sediment supply (Qs,in) or water discharge (Qw) generate distinct signals in the amount of sediment discharge at the basin outlet (Qs,out). With the same set of experiments, I (3) study the adjustments of alluvial channel morphology to changes in Qw and Qs,in, with a particular focus on fill-terrace formation. I transfer the findings from the experiments to the field to (4) reconstruct the evolution of a several-hundred-meter-thick fluvial fill-terrace sequence in the Quebrada del Toro. I create a detailed terrace chronology and perform reconstructions of paleo-Qs and Qw from the terrace deposits. In the following paragraphs, I summarize my findings on each of these four topics.
First, I sampled detrital sediment at the outlet of tributaries and along the main stem in the Quebrada del Toro, analyzed its 10Be concentration ([10Be]) and compared the data to a detailed hillslope-process inventory. The often observed non-linear increase of catchment-mean denudation rate (inferred from [10Be] in fluvial sediment) with catchment-median slope, which has commonly been explained by an adjustment in landslide frequency, coincided with a shift in the main type of hillslope processes. In addition, the [10Be] in fluvial sediments varied with grain size. I defined the normalized sand-gravel-index (NSGI) as the 10Be-concentration difference between the sand and gravel fractions divided by their summed concentrations (see the formula below). The NSGI increased with median catchment slope and coincided with a shift in the prevailing hillslope processes active in the catchments, making the NSGI a potential proxy for reconstructing the evolution of hillslope processes over time from sedimentary deposits. However, the NSGI recorded hillslope processes less well in regions of reduced hillslope-channel connectivity and, in addition, can be altered during downstream transport by lateral sediment input, size-selective sediment transport and abrasion.
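Written out, the definition in the text reads:

```latex
\mathrm{NSGI} =
  \frac{[^{10}\mathrm{Be}]_{\mathrm{sand}} - [^{10}\mathrm{Be}]_{\mathrm{gravel}}}
       {[^{10}\mathrm{Be}]_{\mathrm{sand}} + [^{10}\mathrm{Be}]_{\mathrm{gravel}}}
```

which bounds the index between -1 and 1 and makes it independent of the absolute nuclide concentration.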
Second, my physical experiments revealed that sediment discharge at the basin outlet (Qs,out) varied in response to changes in Qs,in or Qw. While changes in Qw caused a distinct signal in Qs,out during the transient adjustment phase of the channel to new boundary conditions, signals related to changes in Qs,in were buffered during the transient phase and likely only become apparent once the channel is adjusted to the new conditions. The temporal buffering is related to the negative feedback between Qs,in and channel-slope adjustments. In addition, I inferred from this result that signals extracted from the geochemical composition of sediments (e.g., [10Be]) are more likely to represent modern-day conditions during times of aggradation, whereas the signal will be temporally buffered due to mixing with older, remobilized sediment during times of channel incision.
Third, the same set of experiments revealed that river incision, channel-width narrowing and terrace cutting were initiated by either an increase in Qw, a decrease in Qs,in or a drop in base level. The lag-time between the external perturbation and the terrace cutting determined (1) how well terrace surfaces preserved the channel profile prior to perturbation and (2) the degree of reworking of terrace-surface material. Short lag-times and well preserved profiles occurred in cases with a rapid onset of incision. Also, lag-times were synchronous along the entire channel after upstream perturbations (Qw, Qs,in), whereas base-level fall triggered an upstream migrating knickzone, such that lag-times increased with distance upstream. Terraces formed after upstream perturbations (Qw, Qs,in) were always steeper when compared to the active channel in new equilibrium conditions. In the base-level fall experiment, the slope of the terrace-surfaces and the modern channel were similar. Hence, slope comparisons between the terrace surface and the modern channel can give insights into the mechanism of terrace formation.
Fourth, my detailed terrace-formation chronology indicated that cut-and-fill episodes in the Quebrada del Toro followed a ~100-kyr cyclicity, with the oldest terraces ~500 kyr old. The terraces were formed by variability in upstream Qw and Qs. Reconstructions of paleo-Qs over the last 500 kyr, which were restricted to times of sediment deposition, indicated only minor (up to four-fold) variations in paleo-denudation rates. Reconstructions of paleo-Qw were limited to the times around the onset of river incision and revealed discharge enhanced by 10 to 85% compared to today. Such increases in Qw are in agreement with other quantitative paleo-hydrological reconstructions from the Eastern Andes, but have the advantage of dating further back in time.
Deoxyribonucleic acid (DNA) is the carrier of human genetic information and is exposed every day to environmental influences such as the ultraviolet (UV) fraction of sunlight. The photostability of DNA against UV light is astonishing: even though the DNA bases have a strong absorption maximum at around 260 nm/4.77 eV, their quantum yield of photoproducts remains very low [1]. If the photon energies exceed the ionization energy (IE) of the nucleobases (~8-9 eV) [2], the DNA can be severely damaged. Photoexcitation and -ionization reactions occur, which can induce strand breaks in the DNA. The efficiency of the excitation- and ionization-induced strand breaks in the target DNA sequences is quantified by cross sections. If Si is used as a substrate material in the VUV irradiation experiments, secondary electrons with energies below 3.6 eV are generated from the substrate. These low-energy electrons (LEEs) are known to induce dissociative electron attachment (DEA) in DNA and thereby DNA strand breakage very efficiently. LEEs play an important role in cancer radiation therapy, since they are generated secondarily along the track of ionizing radiation.
In the framework of this thesis, different single-stranded DNA sequences were irradiated with 8.44 eV vacuum UV (VUV) light and cross sections for single strand breaks (SSBs) were determined. Several sequences were also exposed to secondary LEEs, which additionally contributed to the SSBs. First, the cross sections for SSBs were determined depending on the type of nucleobase. Both types of DNA sequences, mono-nucleobase and mixed sequences, showed very similar results upon VUV radiation. The additional influence of secondarily generated LEEs, in contrast, resulted in a clear trend in the SSB cross sections: the polythymine sequence had the highest cross section for SSBs, which can be explained by strong anionic resonances in this energy range. Furthermore, SSB cross sections were determined as a function of sequence length. The strand breaks increased to the same extent as the geometrical cross section. The longest DNA sequence investigated in this series (20 nucleotides), however, showed smaller cross section values for SSBs, which can be explained by conformational changes in the DNA. Moreover, several DNA sequences that included the radiosensitizers 5-bromouracil (5BrU) and 8-bromoadenine (8BrA) were investigated and the corresponding SSB cross sections were determined. It was shown that 5BrU reacts very strongly to VUV radiation, leading to high strand break yields, which in turn showed a strong sequence dependency. 8BrA, on the other hand, showed no sensitization to the applied VUV radiation, since almost no increase in the strand breakage yield was observed in comparison to non-modified DNA sequences.
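Cross sections of this kind are commonly extracted from the exponential decrease of the intact fraction with photon fluence under single-hit kinetics, N(F)/N0 = exp(-sigma F); the sketch below shows such a fit on made-up numbers (all data values are illustrative assumptions, not results from the thesis):

```python
import numpy as np

# Hedged sketch: extracting a single-strand-break cross section from the
# fluence dependence of the intact fraction, assuming single-hit kinetics
# N(F)/N0 = exp(-sigma * F). Data values are illustrative assumptions.

fluence = np.array([0.0, 0.5e14, 1.0e14, 2.0e14, 4.0e14])  # photons/cm^2
intact = np.array([1.00, 0.83, 0.69, 0.47, 0.22])          # intact fraction

# Linear fit of ln(intact fraction) versus fluence; the slope is -sigma.
slope, intercept = np.polyfit(fluence, np.log(intact), 1)
sigma = -slope
print(f"SSB cross section: {sigma:.2e} cm^2")  # ~3.8e-15 cm^2 here
```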
In order to identify the mechanisms of radiation damage by photons, the IEs of certain DNA sequences were further explored using photoionization tandem mass spectrometry. By varying the DNA sequence, the dependence of the IEs on both the type of nucleobase and the DNA strand length could be identified and correlated with the SSB cross sections. An influence of the IE on the photoinduced reaction in the brominated DNA sequences could be excluded.
Scalable data profiling
(2018)
Data profiling is the act of extracting structural metadata from datasets. Structural metadata, such as data dependencies and statistics, can support data management operations, such as data integration and data cleaning. Data management is often the most time-consuming activity in any data-related project. Its support is extremely valuable in our data-driven world, so that more time can be spent on the actual utilization of the data, e.g., building analytical models. In most scenarios, however, structural metadata is not given and must be extracted first. Therefore, efficient data profiling methods are highly desirable.
Data profiling is a computationally expensive problem; in fact, most dependency discovery problems entail search spaces that grow exponentially in the number of attributes. To this end, this thesis introduces novel discovery algorithms for various types of data dependencies – namely inclusion dependencies, conditional inclusion dependencies, partial functional dependencies, and partial unique column combinations – that considerably improve over state-of-the-art algorithms in terms of efficiency and that scale to datasets that cannot be processed by existing algorithms. The key to those improvements are not only algorithmic innovations, such as novel pruning rules or traversal strategies, but also algorithm designs tailored for distributed execution. While distributed data profiling has been mostly neglected by previous works, it is a logical consequence in the face of recent hardware trends and the computational hardness of dependency discovery.
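To illustrate why dependency discovery is computationally hard, the following minimal Python sketch (not one of the thesis's algorithms; the relation and attribute names are made up) checks unary functional dependencies by brute force; general discovery must search the exponentially growing attribute lattice, which is what the novel pruning rules and distributed designs address:

    # Hedged illustration (not the thesis's algorithms): brute-force check of
    # unary functional dependencies A -> B over a list-of-dicts relation.
    from itertools import permutations

    def holds_fd(rows, lhs, rhs):
        """Return True if the functional dependency lhs -> rhs holds."""
        seen = {}
        for row in rows:
            key = row[lhs]
            if key in seen and seen[key] != row[rhs]:
                return False  # same lhs value maps to two different rhs values
            seen[key] = row[rhs]
        return True

    def unary_fds(rows):
        attrs = list(rows[0].keys())
        return [(a, b) for a, b in permutations(attrs, 2) if holds_fd(rows, a, b)]

    rows = [{"zip": "14469", "city": "Potsdam", "pop": 180000},
            {"zip": "14467", "city": "Potsdam", "pop": 180000}]
    print(unary_fds(rows))  # zip -> city holds; pop -> zip does not

Already the unary case scans all attribute pairs; with composite left-hand sides there are exponentially many candidates per attribute, which is the search-space explosion mentioned above.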
To demonstrate the utility of data profiling for data management, this thesis furthermore presents Metacrate, a database for structural metadata. Its salient features are its flexible data model, the capability to integrate various kinds of structural metadata, and its rich metadata analytics library. We show how to perform a data anamnesis of unknown, complex datasets based on this technology. In particular, we describe in detail how to reconstruct the schemata and assess their quality as part of the data anamnesis.
The data profiling algorithms and Metacrate have been carefully implemented, integrated with the Metanome data profiling tool, and are available as free software. In that way, we intend to allow for easy repeatability of our research results and also provide them for actual usage in real-world data-related projects.
The formation and breaching of naturally dammed lakes have shaped landscapes, especially in seismically active high-mountain regions. Dammed lakes are both potential water resources and a hazard in case of dam breaching. Central Asia has mostly arid and semi-arid climates. Rock glaciers already store more water than ice glaciers in some semi-arid regions of the world, but their distribution and advance mechanisms are still under debate in recent research. Their impact on water availability in Central Asia will likely increase as temperatures rise and glaciers diminish.
This thesis provides insight into the relative age distribution of selected Kyrgyz and Kazakh rock glaciers and their single lobes derived from lichenometric dating. The size of roughly 8000 different lichen specimens was used to approximate an exposure age of the underlying debris surface. We showed that rock-glacier movement differs significantly on small scales. This has several implications for climatic inferences from rock glaciers. First, reactivation of their lobes does not necessarily point to climatic changes, or at least to out-of-equilibrium conditions. Second, the elevations of rock-glacier toes can no longer be considered general indicators of the limit of sporadic mountain permafrost, as they have traditionally been used.
In the mountainous and seismically active region of Central Asia, natural dams, besides rock glaciers, also play a key role in controlling water and sediment influx into river valleys. Moreover, rock glaciers advancing into valleys seem to be capable of influencing the stream network, damming rivers, or impounding lakes. This influence has not previously been addressed. We quantitatively explored these controls using a new inventory of 1300 Central Asian rock glaciers. Elevation, potential incoming solar radiation, and the size of rock glaciers and their feeder basins played key roles in predicting dam appearance. Bayesian techniques were used to credibly distinguish between lichen sizes on rock glaciers and their lobes, and to find those parameters of a rock-glacier system that most credibly express the potential to build natural dams.
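A minimal sketch of such a predictive setup could look as follows, with synthetic data and plain logistic regression standing in for the Bayesian techniques actually used:

    # Hedged sketch: predict dam appearance from the predictors named above.
    # Data are synthetic; the thesis uses Bayesian techniques instead.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(4)
    n = 300
    X = np.column_stack([
        rng.uniform(2500, 4500, n),  # toe elevation [m a.s.l.]
        rng.uniform(100, 300, n),    # potential incoming solar radiation [W/m^2]
        rng.uniform(0.05, 2.0, n),   # rock-glacier area [km^2]
    ])
    # Synthetic rule: larger, lower-lying rock glaciers dam rivers more often
    p = 1 / (1 + np.exp(-(1.5 * X[:, 2] - 0.002 * (X[:, 0] - 3500))))
    dams = rng.random(n) < p

    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, dams)
    print(model.named_steps["logisticregression"].coef_[0])  # signs hint at each control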
To place these studies in the region's history of natural dams, a combination of dating of former lake levels and outburst-flood modelling addresses the history and possible outburst-flood hypotheses of the second largest mountain lake of the world, Issyk Kul in Kyrgyzstan. Megafloods from breached earthen or glacial dams were found to be a likely explanation for some of the lake's highly fluctuating water levels. However, our detailed analysis of candidate lake sediments and outburst-flood deposits also showed that more localised dam breaks to the west of Issyk Kul could have left similar geomorphic and sedimentary evidence in this Central Asian mountain landscape. We thus caution against readily invoking megafloods as the main cause of lake-level drops of Issyk Kul. In summary, this thesis opens some new pathways for studying rock glaciers and natural dams, with several practical implications for studies on mountain permafrost and natural hazards.
Natural extreme events are an integral part of nature on planet Earth. Usually these events are considered hazardous to humans only in case people are exposed to them. In this case, however, natural hazards can have devastating impacts on human societies. Especially hydro-meteorological hazards have a high damage potential in the form of, e.g., riverine and pluvial floods, winter storms, hurricanes and tornadoes, which can occur all over the globe. Along with an increasingly warm climate, an increase in extreme weather that potentially triggers natural hazards can be expected. Yet not only changing natural systems but also changing societal systems contribute to an increasing risk associated with these hazards, comprising increasing exposure and possibly also increasing vulnerability to the impacts of natural events. Thus, appropriate risk management is required to adapt all parts of society to existing and upcoming risks at various spatial scales. One essential part of risk management is risk assessment, including the estimation of the economic impacts. However, reliable methods for the estimation of economic impacts due to hydro-meteorological hazards are still missing. Therefore, this thesis deals with the question of how the reliability of hazard damage estimates can be improved, represented and propagated across all spatial scales. This question is investigated using the specific example of economic impacts to companies as a result of riverine floods in Germany.
Flood damage models aim to describe the damage processes during a given flood event; in other words, they describe the vulnerability of a specific object to a flood. The models can be based on empirical data sets collected after flood events. In this thesis, tree-based models trained with survey data are used for the estimation of direct economic flood impacts on the objects. It is found that these machine learning models, in conjunction with increasing sizes of the data sets used to derive them, outperform state-of-the-art damage models. However, despite the performance improvements induced by using multiple variables and more data points, large prediction errors remain at the object level. The occurrence of these high errors was explained by a further investigation using distributions derived from the tree-based models. The investigation showed that direct economic impacts to individual objects cannot be modeled by a normal distribution. Yet most state-of-the-art approaches assume a normal distribution and take mean values as point estimators. Consequently, the predictions are unlikely values within the distributions, resulting in high errors. At larger spatial scales, more objects are considered for the damage estimation. This leads to a better fit of the damage estimates to a normal distribution. Consequently, the performance of the point estimators also gets better, although large errors can still occur due to the variance of the normal distribution. It is recommended to use distributions instead of point estimates in order to represent the reliability of damage estimates.
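The idea of deriving distributions from tree-based models can be sketched as follows (with illustrative data and variables, not the thesis's models): instead of returning the forest's mean, one collects the per-tree predictions for a single object as an empirical predictive distribution:

    # Hedged sketch: per-tree predictions as an empirical damage distribution.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    # Illustrative training data: water depth [m], duration [h] -> loss ratio
    X = rng.uniform([0.0, 1.0], [3.0, 168.0], size=(500, 2))
    y = np.clip(0.2 * X[:, 0] + 0.001 * X[:, 1] + rng.normal(0, 0.1, 500), 0, 1)

    forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    x_new = np.array([[1.5, 48.0]])  # one flooded object
    per_tree = np.array([t.predict(x_new)[0] for t in forest.estimators_])
    print("mean (point estimate):", per_tree.mean())
    print("5-95% range:", np.quantile(per_tree, [0.05, 0.95]))

The spread of the per-tree values makes visible why a single mean can be an unlikely value within a skewed distribution.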
In addition, current approaches mostly ignore the uncertainty associated with the characteristics of the hazard and the exposed objects. For a given flood event, e.g., the estimation of the water level at a certain building is prone to uncertainties. Current approaches define exposed objects mostly by the use of land use data sets. These data sets often show inconsistencies, which introduce additional uncertainties. Furthermore, state-of-the-art approaches also suffer from missing consistency when predicting the damage at different spatial scales, due to the use of different types of exposure data sets for model derivation and application. To address these issues, a novel object-based method was developed in this thesis. The method enables a seamless estimation of hydro-meteorological hazard damage across spatial scales, including uncertainty quantification. The application and validation of the method resulted in plausible estimations at all spatial scales without overestimating the uncertainty.
The application of the method is made possible mainly by newly available data sets containing individual buildings, which allow for the identification of flood-affected objects by overlaying the data sets with water masks. However, the identification of affected objects with two different water masks revealed huge differences in the number of identified objects. Thus, more effort is needed for their identification, since the number of affected objects determines the order of magnitude of the economic flood impacts to a large extent.
In general, the method represents the uncertainties associated with the three components of risk, namely hazard, exposure and vulnerability, in the form of probability distributions. The object-based approach enables a consistent propagation of these uncertainties in space. Aside from the propagation of damage estimates and their uncertainties across spatial scales, a propagation between models estimating direct and indirect economic impacts was demonstrated. This enables the inclusion of uncertainties associated with the direct economic impacts within the estimation of the indirect economic impacts. Consequently, the modeling procedure facilitates the representation of the reliability of estimated total economic impacts. The representation of the estimates' reliability prevents reasoning based on a false certainty, which might be attributed to point estimates. Therefore, the developed approach facilitates meaningful flood risk management and adaptation planning.
The successful post-event application and the representation of the uncertainties also qualify the method for use in future risk assessments. Thus, the developed method enables the representation of the assumptions made for future risk assessments, which is crucial information for future risk management. This is an important step forward, since the representation of the reliability associated with all components of risk is currently lacking in all state-of-the-art methods assessing future risk.
In conclusion, the use of object-based methods giving results in the form of distributions instead of point estimates is recommended. An improvement of the model performance by means of multi-variable models and additional data points is possible, but small. Uncertainties associated with all components of damage estimation should be included and represented within the results. Furthermore, the findings of the thesis suggest that, at larger scales, the influence of the uncertainty associated with the vulnerability is smaller than that associated with the hazard and exposure. This leads to the conclusion that, for an increased reliability of flood damage estimations and risk assessments, the improvement and active inclusion of hazard and exposure, including their uncertainties, are needed in addition to improvements of the models describing the vulnerability of the objects.
This paper introduces a novel measure to assess similarity between event hydrographs. It is based on Cross Recurrence Plots and Recurrence Quantification Analysis, which have recently gained attention in a range of disciplines dealing with complex systems. The method attempts to quantify the event runoff dynamics and is based on the time delay embedded phase space representation of discharge hydrographs. A phase space trajectory is reconstructed from the event hydrograph, and pairs of hydrographs are compared to each other based on the distance of their phase space trajectories. Time delay embedding allows considering the multi-dimensional relationships between different points in time within the event. Hence, the temporal succession of discharge values is taken into account, for example the impact of the initial conditions on the runoff event. We provide an introduction to Cross Recurrence Plots and discuss their parameterization. An application example based on flood time series demonstrates how the method can be used to measure the similarity or dissimilarity of events, and how it can be used to detect events with rare runoff dynamics. It is argued that this method provides a more comprehensive approach to quantifying hydrograph similarity compared to conventional hydrological signatures.
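The core of the described method can be sketched in a few lines of Python; the parameters dim, tau and eps are illustrative and would be set during the parameterization discussed above:

    # Hedged sketch: time-delay embedding and a cross recurrence matrix for
    # two synthetic event hydrographs; parameter values are illustrative.
    import numpy as np

    def embed(x, dim=3, tau=2):
        """Time-delay embedding: rows are points on the phase-space trajectory."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

    def cross_recurrence(x, y, dim=3, tau=2, eps=0.1):
        """CR[i, j] = 1 where the two trajectories are closer than eps."""
        X, Y = embed(x, dim, tau), embed(y, dim, tau)
        d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
        return (d < eps).astype(int)

    t = np.linspace(0, 1, 100)
    event_a = np.exp(-((t - 0.30) / 0.10) ** 2)  # synthetic hydrograph A
    event_b = np.exp(-((t - 0.35) / 0.12) ** 2)  # synthetic hydrograph B
    crp = cross_recurrence(event_a, event_b)
    print("recurrence rate:", crp.mean())        # a simple RQA-style score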
Together with the gradual change of mean values, ongoing climate change is projected to increase the frequency and amplitude of temperature and precipitation extremes in many regions of Europe. The impacts of such, in most cases short-term, extraordinary climate situations on terrestrial ecosystems are a matter of central interest in recent climate change research, because it cannot per se be assumed that known dependencies between climate variables and ecosystems are linearly scalable. So far, however, there is a high demand for a method to quantify such impacts in terms of simultaneities of event time series.
In the course of this manuscript, the new statistical approach of Event Coincidence Analysis (ECA) as well as its R implementation is introduced, a methodology that allows assessing whether or not two types of event time series exhibit similar sequences of occurrences. Applications of the method are presented, analyzing climate impacts on different temporal and spatial scales: the impact of extraordinary expressions of various climatic variables on tree stem variations (subdaily and local scale), the impact of extreme temperature and precipitation events on the flowering time of European shrub species (weekly and country scale), the impact of extreme temperature events on ecosystem health in terms of NDVI (weekly and continental scale), and the impact of El Niño and La Niña events on precipitation anomalies (seasonal and global scale).
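The thesis's own implementation is in R (and is not reproduced here); the core counting step of ECA can nevertheless be sketched language-independently, with the coincidence window delta_t as an assumed parameter:

    # Hedged Python sketch of the core ECA counting step; event times and the
    # window delta_t are illustrative, the published implementation is in R.
    def coincidence_rate(events_a, events_b, delta_t=3.0):
        """Fraction of A-events followed by at least one B-event within delta_t."""
        hits = sum(any(0 <= b - a <= delta_t for b in events_b) for a in events_a)
        return hits / len(events_a)

    heat_days = [12, 45, 46, 80]  # e.g. extreme-temperature event times
    stress_days = [13, 47, 90]    # e.g. plant-response event times
    print(coincidence_rate(heat_days, stress_days))  # 0.75

Significance is then typically assessed by comparing such rates against those expected for independent event series.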
The applications presented in this thesis refine relationships already known from classical methods and also deliver substantial new findings to the scientific community: the widely known positive correlation between flowering time and temperature, for example, is confirmed to be valid for the tails of the distributions, while the widely assumed positive dependency between stem diameter variation and temperature is shown not to be valid for very warm and very cold days. The larger-scale investigations underline the sensitivity of anthropogenically shaped landscapes towards temperature extremes in Europe and provide a comprehensive global ENSO impact map for strong precipitation events.
Finally, by publishing the R implementation of the method, this thesis enables other researchers to further investigate similar research questions using Event Coincidence Analysis.
The purpose of Probabilistic Seismic Hazard Assessment (PSHA) at a construction site is to provide the engineers with a probabilistic estimate of the ground-motion level that could be equaled or exceeded at least once in the structure's design lifetime. Certainty about the predicted ground motion allows the engineers to confidently optimize structural design and mitigate the risk of extensive damage or, in the worst case, a collapse. It is therefore in the interest of engineering, insurance, disaster mitigation, and the security of society at large to reduce uncertainties in the prediction of design ground-motion levels.
In this study, I am concerned with quantifying and reducing the prediction uncertainty of regression-based Ground-Motion Prediction Equations (GMPEs). Essentially, GMPEs are regressed best-fit formulae relating event, path, and site parameters (predictor variables) to observed ground-motion values at the site (prediction variable). GMPEs are characterized by a parametric median (μ) and a non-parametric variance (σ) of prediction. μ captures the known ground-motion physics, i.e., scaling with earthquake rupture properties (event), attenuation with distance from the source (region/path), and amplification due to local soil conditions (site), while σ quantifies the natural variability of the data that eludes μ. In a broad sense, the GMPE prediction uncertainty is cumulative of 1) the uncertainty on the estimated regression coefficients (uncertainty on μ, σ_μ), and 2) the inherent natural randomness of the data (σ). The extent of μ parametrization and the quantity and quality of ground-motion data used in a regression govern the size of its prediction uncertainty: σ_μ and σ.
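A generic, illustrative parametrization (not the thesis's exact functional form) makes the roles of μ, σ_μ and σ concrete:

    \[
      \ln Y = \underbrace{c_0 + c_1 M + c_2 \ln\sqrt{R^2 + h^2}
              + c_3 \ln\frac{V_{S30}}{V_{\mathrm{ref}}}}_{\mu}
              + \delta B + \delta W,
      \qquad
      \sigma = \sqrt{\tau^2 + \phi^2},
    \]

where Y is the ground-motion intensity, M the magnitude, R the distance, V_S30 the site condition, and δB ~ N(0, τ²) and δW ~ N(0, φ²) are the between-event and within-event residuals; the coefficients c_i carry the estimation uncertainty σ_μ.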
In the first step, I present the impact of μ parametrization on the size of σ_μ and σ. Over-parametrization appears to increase σ_μ, because of the large number of regression coefficients (in μ) to be estimated with insufficient data. Under-parametrization mitigates σ_μ, but the reduced explanatory strength of μ is reflected in an inflated σ. For an optimally parametrized GMPE, a ~10% reduction in σ is attained by discarding the low-quality data from pan-European events with incorrect parametric values (of predictor variables).
In regions with scarce ground-motion recordings, without under-parametrization, the only way to mitigate σ_μ is to substitute long-term earthquake data at a location with short-term samples of data across several locations – the ergodic assumption. However, the price of the ergodic assumption is an increased σ, due to the region-to-region and site-to-site differences in ground-motion physics. The σ of an ergodic GMPE developed from a generic ergodic dataset is much larger than that of non-ergodic GMPEs developed from region- and site-specific non-ergodic subsets – which were too sparse to produce their own specific GMPEs. Fortunately, with the dramatic increase in recorded ground-motion data at several sites across Europe and the Middle East, I could quantify the region- and site-specific differences in ground-motion scaling and upgrade the GMPEs with 1) substantially more accurate region- and site-specific μ for sites in Italy and Turkey, and 2) significantly smaller prediction variance σ. The benefit of such enhancements to GMPEs is quite evident in my comparison of PSHA estimates from ergodic versus region- and site-specific GMPEs, where the differences in predicted design ground-motion levels, at several sites in Europe and Middle Eastern regions, are as large as ~50%.
Resolving the ergodic assumption with mixed-effects regressions is feasible when the quantified region- and site-specific effects are physically meaningful and the non-ergodic subsets (regions and sites) are defined a priori through expert knowledge. In the absence of expert definitions, I demonstrate the potential of machine learning techniques in identifying efficient clusters of site-specific non-ergodic subsets, based on latent similarities in their ground-motion data. Clustered site-specific GMPEs bridge the gap between site-specific and fully ergodic GMPEs, with their partially non-ergodic μ and a σ ~15% smaller than the ergodic variance.
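The clustering idea can be sketched as follows, with synthetic features standing in for the latent similarities in the ground-motion data that the thesis exploits:

    # Hedged sketch: group sites by similarity of their ground-motion residuals,
    # then fit partially non-ergodic terms per cluster. Data are synthetic.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    # One row per site, e.g. mean within-site residual and its distance slope
    site_features = rng.normal(0.0, 0.3, size=(60, 2))

    labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(site_features)
    for k in range(4):
        members = site_features[labels == k]
        print(f"cluster {k}: {len(members)} sites, "
              f"mean residual {members[:, 0].mean():+.2f}")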
The methodological refinements to GMPE development produced in this study are applicable to new ground-motion datasets to further enhance the certainty of ground-motion prediction and, thereby, seismic hazard assessment. Advanced statistical tools show great potential in improving the predictive capabilities of GMPEs, but the fundamental requirement remains: a large quantity of high-quality ground-motion data from several sites over an extended time period.
Previous studies on native language (L1) anaphor resolution have found that monolingual native speakers are sensitive to syntactic, pragmatic, and semantic constraints on pronoun and reflexive resolution. However, most studies have focused on English and other Germanic languages, and little is currently known about the online (i.e., real-time) processing of anaphors in languages with syntactically less restricted anaphors, such as Turkish. We also know relatively little about how 'non-standard' populations such as non-native (L2) speakers and heritage speakers (HSs) resolve anaphors.
This thesis investigates the interpretation and real-time processing of anaphors in German and in a typologically different and as yet understudied language, Turkish. It compares hypotheses about differences between native speakers' (L1ers) and L2 speakers' (L2ers) sentence processing, looking into differences in processing mechanisms as well as the possibility of cross-linguistic influence. To help fill the current research gap regarding HS sentence comprehension, it compares findings for this group with those for L2ers.
To investigate the representation and processing of anaphors in these three populations, I carried out a series of offline questionnaires and Visual-World eye-tracking experiments on the resolution of reflexives and pronouns in both German and Turkish. In the German experiments, native German speakers as well as L2ers of German were tested, while in the Turkish experiments, non-bilingual native Turkish speakers as well as HSs of Turkish with L2 German were tested. This allowed me to observe both cross-linguistic differences and population differences between monolinguals' and different types of bilinguals' resolution of anaphors.
Regarding the comprehension of Turkish anaphors by L1ers, contrary to what has been previously assumed, I found that Turkish has no reflexive that follows Condition A of Binding theory (Chomsky, 1981). Furthermore, I propose more general cross-linguistic differences between Turkish and German, in the form of a stronger reliance on pragmatic information in anaphor resolution overall in Turkish compared to German.
As for the processing differences between L1ers and L2ers of a language, I found evidence in support of hypotheses which propose that L2ers of German rely more strongly on non-syntactic information compared to L1ers (Clahsen & Felser, 2006, 2017; Cunnings, 2016, 2017), independent of a potential influence of their L1. HSs, on the other hand, showed a tendency to overemphasize interpretational contrasts between different Turkish anaphors compared to monolingual native speakers. However, lower-proficiency HSs were likely to merge different forms for simplified representation and processing. Overall, L2ers and HSs showed differences from monolingual native speakers both in their final interpretation of anaphors and during online processing. However, these differences were not parallel between the two types of bilinguals and thus do not support a unified model of L2 and HS processing (cf. Montrul, 2012).
The findings of this thesis contribute to the field of anaphor resolution by providing data from a previously unexplored language, Turkish, as well as contributing to research on native and non-native processing differences. My results also illustrate the importance of considering individual differences in the acquisition process when studying bilingual language comprehension. Factors such as age of acquisition, language proficiency and the type of input a language learner receives may influence the processing mechanisms they develop and employ, both between and within different bilingual populations.
Photocatalysis is considered significant in this new energy era, because the inexhaustibly abundant, clean, and safe energy of the sun can be harnessed for the sustainable, nonhazardous, and economical development of our society. In photocatalysis research, the current focus lies on the design and modification of photocatalysts.
As one of the most promising photocatalysts, g-C3N4 has gained considerable attention for its eye-catching properties. It has been extensively explored in photocatalysis applications such as water splitting, organic pollutant degradation, and CO2 reduction. Even so, it has its own drawbacks, which inhibit its further application. Motivated by this, this thesis mainly presents and discusses the preparation of several novel photocatalysts and their photocatalytic performance. These materials were all synthesized via alterations of the classic g-C3N4 preparation method, such as using different pre-compositions for the initial supramolecular complex and functional-group post-modification. In place of cyanuric acid, 2,5-dihydroxy-1,4-benzoquinone and chloranilic acid can form completely new supramolecular complexes with melamine. After heating, the resulting products of the two complexes show 2D sheet-like and 1D fiber-like morphologies, respectively, which are maintained even up to a high temperature of 800 °C. With increasing synthesis temperature, these materials range from crystals to polymers to N-doped carbons. Based on their different pre-compositions, they show different dye degradation performances. CLA-M-250 shows the highest photocatalytic activity and a strong oxidation capacity: not only great photo-performance in RhB degradation, but also oxygen production in water splitting. In the post-modification method, a novel photocatalysis solution was proposed to modify the carbon nitride scaffold with cyano groups, whose content can be well controlled by the input of sodium thiocyanate. The cyanation modification leads to a narrowed band gap as well as improved separation of photo-induced charges. Cyano-group-grafted carbon nitride thus shows dramatically enhanced performance in the photocatalytic coupling reaction between styrene and sodium benzenesulfinate under green light irradiation, in stark contrast with the inactivity of pristine g-C3N4.
Today, more than half of the world’s population lives in urban areas. With a high density of population and assets, urban areas are not only the economic, cultural and social hubs of every society, they are also highly susceptible to natural disasters. As a consequence of rising sea levels and an expected increase in extreme weather events caused by a changing climate in combination with growing cities, flooding is an increasing threat to many urban agglomerations around the globe.
To mitigate the destructive consequences of flooding, appropriate risk management and adaptation strategies are required. So far, flood risk management in urban areas is almost exclusively focused on managing river and coastal flooding. Often overlooked is the risk from small-scale rainfall-triggered flooding, where the rainfall intensity of rainstorms exceeds the capacity of urban drainage systems, leading to immediate flooding. Referred to as pluvial flooding, this flood type exclusive to urban areas has caused severe losses in cities around the world. Without further intervention, losses from pluvial flooding are expected to increase in many urban areas due to an increase of impervious surfaces compounded with an aging drainage infrastructure and a projected increase in heavy precipitation events. While this requires the integration of pluvial flood risk into risk management plans, so far little is known about the adverse consequences of pluvial flooding due to a lack of both detailed data sets and studies on pluvial flood impacts. As a consequence, methods for reliably estimating pluvial flood losses, needed for pluvial flood risk assessment, are still missing.
Therefore, this thesis investigates how pluvial flood losses to private households can be reliably estimated, based on an improved understanding of the drivers of pluvial flood loss. For this purpose, detailed data from pluvial flood-affected households was collected through structured telephone- and web-surveys following pluvial flood events in Germany and the Netherlands.
Pluvial flood losses to households are the result of complex interactions between impact characteristics, such as the water depth, and a household's resistance as determined by its risk awareness, preparedness, emergency response, building properties and other influencing factors. Both exploratory analysis and machine-learning approaches were used to analyze differences in resistance and impacts between households and their effects on the resulting losses. The comparison of case studies showed that awareness of pluvial flooding among private households is quite low. Low awareness not only challenges the effective dissemination of early warnings, but was also found to influence the implementation of private precautionary measures. The latter were predominantly implemented by households with previous experience of pluvial flooding. Even cases where previous flood events affected a different part of the same city did not lead to an increase in preparedness of the surveyed households, highlighting the need to account for small-scale variability in both impact and resistance parameters when assessing pluvial flood risk.
While it was concluded that the combination of low awareness, ineffective early warning and the fact that only a minority of buildings were adapted to pluvial flooding impaired the coping capacities of private households, the often low water levels still enabled households to mitigate or even prevent losses through a timely and effective emergency response.
These findings were confirmed by the detection of loss-influencing variables, showing that cases in which households were able to prevent any loss to the building structure are predominantly explained by resistance variables such as the household's risk awareness, while the degree of loss is mainly explained by impact variables.
Based on the important loss-influencing variables detected, different flood loss models were developed. Similar to flood loss models for river floods, the empirical data from the preceding data collection was used to train flood loss models describing the relationship between impact and resistance parameters and the resulting loss to building structures. Different approaches were adapted from river flood loss models, using both models with the water depth as the only predictor for building structure loss and models incorporating additional variables from the preceding variable detection routine.
The high predictive errors of all compared models showed that point predictions are not suitable for estimating losses on the building level, as they severely impair the reliability of the estimates. For that reason, a new probabilistic framework based on Bayesian inference was introduced that is able to provide predictive distributions instead of single loss estimates. These distributions not only give a range of probable losses, they also provide information on how likely a specific loss value is, representing the uncertainty in the loss estimate.
Using probabilistic loss models, it was found that the certainty and reliability of a loss estimate on the building level are determined not only by the use of additional predictors, as shown in previous studies, but also by the choice of response distribution defining the shape of the predictive distribution. Here, a mix of a beta and a Bernoulli distribution, accounting for households that are able to prevent losses to their building's structure, was found to provide significantly more certain and reliable estimates than previous approaches using Gaussian or non-parametric response distributions.
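Such a mixed response distribution can be sketched as follows (parameter values are illustrative, not fitted):

    # Hedged sketch: a point mass at zero loss (Bernoulli part) mixed with a
    # beta distribution for the strictly positive loss ratios.
    import numpy as np
    from scipy import stats

    p_zero = 0.4     # probability that the household prevents any loss
    a, b = 2.0, 5.0  # beta shape parameters for the positive loss ratios

    rng = np.random.default_rng(2)
    n = 10_000
    is_zero = rng.random(n) < p_zero
    loss_ratio = np.where(is_zero, 0.0, stats.beta(a, b).rvs(n, random_state=rng))

    print("P(loss = 0):", is_zero.mean())
    print("median loss ratio:", np.median(loss_ratio))
    print("90% predictive interval:", np.quantile(loss_ratio, [0.05, 0.95]))

In a Bayesian setting, p_zero, a and b would themselves be linked to the impact and resistance predictors and carry posterior uncertainty.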
The successful model transfer and post-event application to estimate building structure loss in Houston, TX, caused by pluvial flooding during Hurricane Harvey confirmed previous findings, and demonstrated the potential of the newly developed multi-variable beta model for future risk assessments. The highly detailed input data set constructed from openly available data sources containing over 304,000 affected buildings in Harris County further showed the potential of data-driven, building-level loss models for pluvial flood risk assessment.
In conclusion, pluvial flood losses to private households are the result of complex interactions between impact and resistance variables, which should be represented in loss models. The local occurrence of pluvial floods requires loss estimates on high spatial resolutions, i.e. on the building level, where losses are variable and uncertainties are high.
Therefore, probabilistic loss estimates describing the uncertainty of the estimate should be used instead of point predictions. While the performance of probabilistic models on the building level is mainly driven by the choice of response distribution, multi-variable models are recommended for two reasons:
First, additional resistance variables improve the detection of cases in which households were able to prevent structural losses.
Second, the added variability of additional predictors provides a better representation of the uncertainties when loss estimates from multiple buildings are aggregated.
This leads to the conclusion that data-driven probabilistic loss models on the building level allow for a reliable loss estimation at an unprecedented level of detail, with a consistent quantification of uncertainties on all aggregation levels. This makes the presented approach suitable for a wide range of applications, from decision support in spatial planning to impact-based early warning systems.
Modern gamma-ray telescopes provide the main stream of data for astrophysicists in their quest to detect the sources of gamma rays, such as active galactic nuclei (AGN). Many blazars have been detected as sources of gamma rays with energies E ≥ 100 GeV with gamma-ray telescopes such as HESS, VERITAS, MAGIC and the Fermi satellite. These very-high-energy photons interact with the extragalactic background light (EBL), producing ultra-relativistic electron-positron pairs. Observations with Fermi-LAT indicate that the GeV gamma-ray flux from some blazars is lower than that predicted from the full electromagnetic cascade. The pairs can induce electrostatic and electromagnetic instabilities, in which case wave-particle interactions can reduce the energy of the pairs. Therefore, collective plasma effects can also substantially suppress the GeV-band gamma-ray emission, affecting the IGMF constraints as well. Using particle-in-cell (PIC) simulations, we have revisited the issue of plasma instabilities induced by electron-positron beams in the fully ionized intergalactic medium. This problem is related to pair beams produced by the TeV radiation of blazars. The main objective of our study is to clarify the feedback of the beam-driven instabilities on the pairs. The present dissertation provides new results regarding the plasma instabilities from blazar-induced pair beams interacting with the intergalactic medium. This clarifies the relevance of plasma instabilities and improves our understanding of blazars.
Plant-derived Transcription Factors for Orthologous Regulation of Gene Expression in the Yeast Saccharomyces cerevisiae
Control of gene expression by transcription factors (TFs) is central in many synthetic biology projects where tailored expression of one or multiple genes is often needed. As TFs from evolutionarily distant organisms are unlikely to affect gene expression in a host of choice, they represent excellent candidates for establishing orthogonal control systems. To establish orthogonal regulators for use in yeast (Saccharomyces cerevisiae), we chose TFs from the plant Arabidopsis thaliana. We established a library of 106 different combinations of chromosomally integrated TFs, activation domains (yeast GAL4 AD, herpes simplex virus VP64, and plant EDLL) and synthetic promoters harbouring cognate cis-regulatory motifs driving a yEGFP reporter. Transcriptional output of the different driver/reporter combinations varied over a wide spectrum, with EDLL being a considerably stronger transcription activation domain in yeast than the GAL4 activation domain, in particular when fused to Arabidopsis NAC TFs. Notably, the strength of several NAC - EDLL fusions exceeded that of the strong yeast TDH3 promoter by 6- to 10-fold. We furthermore show that plant TFs can be used to build regulatory systems encoded by centromeric or episomal plasmids. Our library of TF – DNA-binding site combinations offers an excellent tool for diverse synthetic biology applications in yeast.
COMPASS: Rapid combinatorial optimization of biochemical pathways based on artificial transcription factors
We established a high-throughput cloning method, called COMPASS for COMbinatorial Pathway ASSembly, for the balanced expression of multiple genes in Saccharomyces cerevisiae. COMPASS employs orthogonal, plant-derived artificial transcription factors (ATFs) for controlling the expression of pathway genes, and homologous recombination-based cloning for the generation of thousands of individual DNA constructs in parallel. The method relies on a positive selection of correctly assembled pathway variants from both in vivo and in vitro cloning procedures. To decrease the turnaround time in genomic engineering, we equipped COMPASS with multi-locus CRISPR/Cas9-mediated modification capacity. In its current realization, COMPASS allows combinatorial optimization of up to ten pathway genes, each transcriptionally controlled by nine different ATFs spanning a 10-fold difference in expression strength. The application of COMPASS was demonstrated by generating cell libraries producing beta-carotene and co-producing beta-ionone and biosensor-responsive naringenin. COMPASS will have many applications in other synthetic biology projects that require gene expression balancing.
CaPRedit: Genome editing using CRISPR-Cas9 and plant-derived transcriptional regulators for the redirection of flux through the FPP branch-point in yeast. Technologies developed over the past decade have made Saccharomyces cerevisiae a promising platform for the production of different natural products. We developed a CRISPR/Cas9- and plant-derived regulator-mediated genome editing approach (CaPRedit) to greatly accelerate strain modification and to facilitate very low to very high expression of key enzymes using inducible regulators. CaPRedit can be implemented to enhance the production of endogenous or heterologous metabolites in the yeast S. cerevisiae. The CaPRedit system aims to facilitate the modification of multiple targets within a complex metabolic pathway by providing new tools for increased expression of genes encoding rate-limiting enzymes, decreased expression of essential genes, and removed expression of competing pathways. This approach is based on CRISPR/Cas9-mediated one-step double-strand breaks to integrate modules containing IPTG-inducible plant-derived artificial transcription factor and promoter pair(s) in a desired locus or loci. Here, we used CaPRedit to redirect the yeast endogenous metabolic flux toward the production of farnesyl diphosphate (FPP), a central precursor of nearly all yeast isoprenoid products, by overexpressing the enzymes that lead to the production of FPP from glutamate. We found significantly higher beta-carotene accumulation in the CaPRedit-modified strain than in the wild type (WT) strain. More specifically, the CaPRedit_FPP 1.0 strain was generated, in which three genes involved in FPP synthesis, tHMG1, ERG20, and GDH2, were inducibly overexpressed under the control of strong plant-derived ATF-promoter pairs. Beta-carotene accumulated in the CaPRedit_FPP 1.0 strain to a level 1.3-fold higher than in a previously reported optimized strain that carries the same overexpressed genes (as well as additional genetic modifications to redirect yeast endogenous metabolism toward FPP production). Furthermore, the genetic modifications implemented in the CaPRedit_FPP 1.0 strain resulted in only a very small growth defect (growth rate relative to the WT is ~ -0.03).
Monoclonal antibodies (mAbs) are an innovative group of drugs with increasing clinical importance in oncology, combining high specificity with generally low toxicity. There are, however, numerous challenges associated with the development of mAbs as therapeutics. Mechanistic understanding of factors that govern the pharmacokinetics (PK) of mAbs is critical for drug development and the optimisation of effective therapies; in particular, adequate dosing strategies can improve patient quality of life and lower drug costs. Physiologically-based PK (PBPK) models offer a physiological and mechanistic framework, which is of advantage in the context of animal-to-human extrapolation. Unlike for small molecule drugs, however, there is no consensus on how to model mAb disposition in a PBPK context. Current PBPK models for mAb PK hugely vary in their representation of physiology and parameterisation. Their complexity poses a challenge for their applications, e.g., translating knowledge from animal species to humans.
In this thesis, we developed and validated a consensus PBPK model for mAb disposition taking into account recent insights into mAb distribution (antibody biodistribution coefficients and interstitial immunoglobulin G (IgG) pharmacokinetics) to predict tissue PK across several pre-clinical species and humans based on plasma data only. The model allows a priori prediction of target-independent (unspecific) mAb disposition processes as well as mAb disposition in concentration ranges for which the unspecific clearance (CL) dominates target-mediated CL processes. This is often the case for mAb therapies at steady-state dosing.
The consensus PBPK model was then used and refined to address two important problems:
1) Immunodeficient mice are crucial models to evaluate mAb efficacy in cancer therapy. Protection from elimination by binding to the neonatal Fc receptor is known to be a major pathway influencing the unspecific CL of both endogenous and therapeutic IgG. The concentration of endogenous IgG, however, is reduced in immunodeficient mouse models, and the resulting effect on unspecific mAb CL is unknown, yet of great importance for the extrapolation to humans in the context of mAb cancer therapy.
2) The distribution of mAbs into solid tumours is of great interest. To comprehensively investigate mAb distribution within tumour tissue and its implications for therapeutic efficacy, we extended the consensus PBPK model by a detailed tumour distribution model incorporating a cell-level model for mAb-target interaction. We studied the impact of variations in tumour microenvironment on therapeutic efficacy and explored the plausibility of different mechanisms of action in mAb cancer therapy.
The mathematical findings and observed phenomena shed new light on therapeutic utility and dosing regimens in mAb cancer treatment.
In this work, different strategies for the construction of biohybrid photoelectrodes are investigated and evaluated according to their intrinsic catalytic activity for the oxidation of the cofactor NADH or for the connection with the enzymes PQQ glucose dehydrogenase (PQQ-GDH), FAD-dependent glucose dehydrogenase (FAD-GDH) and fructose dehydrogenase (FDH). The light-controlled oxidation of NADH has been analyzed with InGaN/GaN nanowire-modified electrodes. Upon illumination with visible light, the InGaN/GaN nanowires generate an anodic photocurrent, which increases in a concentration-dependent manner in the presence of NADH, thus allowing determination of the cofactor. Furthermore, different approaches for the connection of enzymes to quantum dot (QD)-modified electrodes via small redox molecules or redox polymers have been analyzed and discussed. First, interaction studies with diffusible redox mediators such as hexacyanoferrate(II) and ferrocenecarboxylic acid have been performed with CdSe/ZnS QD-modified gold electrodes to build up photoelectrochemical signal chains between QDs and the enzymes FDH and PQQ-GDH. In the presence of substrate and under illumination of the electrode, electrons are transferred from the enzyme via the redox mediators to the QDs. The resulting photocurrent is dependent on the substrate concentration and allows a quantification of the fructose and glucose content in solution. A first attempt with an immobilized redox mediator, i.e. ferrocenecarboxylic acid chemically coupled to PQQ-GDH and attached to QD-modified gold electrodes, reveals the potential to build up photoelectrochemical signal chains even without diffusible redox mediators in solution. However, this approach results in a significantly deteriorated photocurrent response compared to the situation with diffusing mediators. In order to improve the photoelectrochemical performance of such redox mediator-based, light-switchable signal chains, an osmium complex-containing redox polymer has been evaluated as an electron relay for the electronic linkage between QDs and enzymes. The redox polymer allows the stable immobilization of the enzyme and the efficient wiring with the QD-modified electrode. In addition, a 3D inverse opal TiO2 (IO-TiO2) electrode has been used for the integration of PbS QDs, redox polymer and FAD-GDH in order to increase the electrode surface. This results in a significantly improved photocurrent response, a quite low onset potential for the substrate oxidation and a broader glucose detection range as compared to the approach with ferrocenecarboxylic acid and PQQ-GDH immobilized on CdSe/ZnS QD-modified gold electrodes. Furthermore, IO-TiO2 electrodes are used to integrate sulfonated polyanilines (PMSA1) and PQQ-GDH, and to investigate the direct interaction between the polymer and the enzyme for the light-switchable detection of glucose. While PMSA1 provides visible light excitation and ensures the efficient connection between the IO-TiO2 electrode and the biocatalytic entity, PQQ-GDH enables the oxidation of glucose. Here, the IO-TiO2 electrodes with pores of approximately 650 nm provide a suitable interface and morphology, which is required for a stable and functional assembly of the polymer and enzyme. The successful integration of the polymer and the enzyme can be confirmed by the formation of a glucose-dependent anodic photocurrent.
In conclusion, this work provides insights into the design of photoelectrodes and presents different strategies for the efficient coupling of redox enzymes to photoactive entities, which allows for light-directed sensing and provides the basis for the generation of power from sun light and energy-rich compounds.
Uncertainty is an essential part of atmospheric processes and thus inherent to weather forecasts. Nevertheless, weather forecasts and warnings are still predominantly issued as deterministic (yes or no) forecasts, although research suggests that providing weather forecast users with additional information about the forecast uncertainty can enhance the preparation of mitigation measures. Communicating forecast uncertainty would allow for a provision of information on possible future events at an earlier time. The desired benefit is to enable users to start preparatory protective action at an earlier stage, based on their own risk assessment and decision threshold. But not all users have the same threshold for taking action. In the course of the project WEXICOM ('Wetterwarnungen: Von der Extremereignis-Information zu Kommunikation und Handlung' – 'Weather warnings: from extreme-event information to communication and action'), funded by the Deutscher Wetterdienst (DWD), three studies were conducted between the years 2012 and 2016 to reveal how weather forecasts and warnings are reflected in weather-related decision-making. The studies asked which factors influence the perception of forecasts and the decision to take protective action, and how forecast users make sense of probabilistic information and the additional lead time. In a first exploratory study conducted in 2012, members of emergency services in Germany were asked how weather warnings are communicated to professional end-users in the emergency community and how the warnings are converted into mitigation measures. A large number of open questions were selected to identify new topics of interest. The questions covered topics like users' confidence in forecasts, their understanding of probabilistic information, as well as their lead time and decision thresholds for starting preparatory mitigation measures. Results show that emergency service personnel generally have a good sense of the uncertainty inherent in weather forecasts. Although no single probability threshold could be identified at which organisations start preparatory mitigation measures, it became clear that emergency services tend to avoid forecasts based on low probabilities as a basis for their decisions. Based on these findings, a second study, conducted with residents of Berlin in 2014, further investigated the question of decision thresholds. The survey questions related to the perception of and prior experience with severe weather, the trustworthiness of forecasters and confidence in weather forecasts, and socio-demographic and socio-economic characteristics. Within the questionnaire, a scenario was created to determine individual decision thresholds and to see whether subgroups of the sample lead to different thresholds. The results show that people's willingness to act tends to be higher and decision thresholds tend to be lower if the expected weather event is more severe or the property at risk is of higher value. Several influencing factors of risk perception have significant effects, such as education, housing status and ability to act, whereas socio-demographic determinants alone are often not sufficient to fully grasp risk perception and protection behaviour. Parallel to the quantitative studies, an interview study was conducted with 27 members of the German civil protection services between 2012 and 2016. The results show that the latest developments in (numerical) weather forecasting do not necessarily fit the current practice of German emergency services.
These practices are mostly based on alarms and ground truth in a reactive manner rather than on anticipation based on prognoses or forecasts. As the potential consequences rather than the event characteristics determine protective action, the findings support the call and need for impact-based warnings. Forecasters will have to rely on impact data and need to learn the users' understanding of impact. Therefore, it is recommended to enhance weather communication not only by improving computer models and observation tools, but also by focusing on the aspects of communication and collaboration. Using information about uncertainty demands awareness about and acceptance of the limits of knowledge, hence of the capability of the forecaster to anticipate future developments of the atmosphere and the capability of the user to make sense of this information.
The future magnetic recording industry needs high-density data storage technology. However, switching the magnetization of small bits requires high magnetic fields that cause excessive heat dissipation. Therefore, controlling magnetism without applying an external magnetic field is an important research topic for potential applications in data storage devices with low power consumption. Among the different approaches being investigated, two stand out, namely i) all-optical helicity-dependent switching (AO-HDS) and ii) ferroelectric control of magnetism. This thesis aims to contribute towards a better understanding of the physical processes behind these effects as well as to report new and exciting possibilities for the optical and/or electric control of magnetic properties. Hence, the thesis contains two differentiated chapters of results: the first devoted to AO-HDS in TbFe alloys and the second to the electric field control of magnetism in an archetypal Fe/BaTiO3 system.
In the first part, the scalability of AO-HDS to small laser spot sizes of a few microns in the ferrimagnetic TbFe alloy is investigated by spatially resolving the magnetic contrast with photo-emission electron microscopy (PEEM) and X-ray magnetic circular dichroism (XMCD). The results show that AO-HDS is a local effect within the laser spot size that occurs in a ring-shaped region in the vicinity of thermal demagnetization. Within the ring region, the helicity-dependent switching occurs via thermally activated domain wall motion. Further, the thesis reports on a novel effect of thickness-dependent inversion of the switching orientation. It addresses some important questions, like the role of laser heating and the microscopic mechanism driving AO-HDS.
The second part of the thesis focuses on the electric field control of magnetism in an artificial multiferroic heterostructure. The sample consists of an Fe wedge with thickness varying between 0.5 nm and 3 nm, deposited on top of a ferroelectric and ferroelastic BaTiO3 [001]-oriented single-crystal substrate. Here, the magnetic contrast is imaged via PEEM and XMCD as a function of out-of-plane voltage. The results show evidence of the electric field control of superparamagnetism mediated by a ferroelastic modification of the magnetic anisotropy. The changes in the magnetoelastic anisotropy drive the transition from the superparamagnetic to the superferromagnetic state at localized sample positions.
BACKGROUND: Physical activity involving high spinal load has been shown to have a crucial impact on the genesis of acute and chronic low back pain and disorders. High spinal loads are presumed in drop landings, for which strenuous bending loads have previously been demonstrated for lower-extremity structures. Thus far, clinical studies have revealed that repetitive landing impacts can evoke benign structural adaptations of, or damage to, the lumbar vertebrae. However, the causes of these observations have not yet been conclusively established, since actual spinal load has to date not been experimentally documented. Moreover, it is still undetermined how physiological activation of the trunk musculature compensates for landing-impact-induced spinal loads, and to what extent trunk activity and spinal load are affected by landing demands and performer characteristics. AIMS of this study are 1. the localisation and quantification of spinal bending loads under various landing demands and 2. the identification of compensatory trunk muscular activity patterns, which potentially alleviate spinal load magnitudes. Three consecutive hypotheses (H1 - H3) were postulated for this purpose: H1 posits that spinal bending loads in segregated motion planes can feasibly and reliably be evaluated from peak spine segmental angular accelerations. H2 furthermore assumes that vertical drop landings elicit the highest spine bending load in sagittal flexion of the lumbar spine. Based on these verifications, a second study tests the successive hypothesis (H3) that diversified landing conditions, like the performer's landing familiarity and gender, as well as the implementation of an instantaneous follow-up task, affect the emerging lumbar spinal bending load. Herein it is moreover surmised that lumbar spinal bending loads under distinct landing conditions are predominantly modulated by disparately deployed conditioned pre-activations of trunk muscles. METHODS: To test the above-arrayed hypotheses, two successive studies were carried out. In STUDY 1, 17 subjects were repetitively assessed performing various drop landings (height: 15, 30, 45, 60 cm; unilateral, bilateral, blindfolded, catching a ball) in a test-retest design. Herein, individual peak angular accelerations [αMAX] were derived from three-dimensional motion data of four trunk segments (upper thoracic, lower thoracic, lumbar, pelvis). αMAX was assessed in flexion, lateral flexion, and rotation of each spinal joint, formed by two adjacent segments. Reliability of αMAX within and between test-days was evaluated by CV%, ICC 2.1, TRV%, and Bland & Altman analysis (BIAS±LoA). Subsequently, peak flexion acceleration of the lumbo-pelvic joint [αFLEX[LS-PV]] was statistically compared to the αMAX expressions of every other assessed spinal joint and motion plane (Mean ±SD, Independent Samples T-test). STUDY 2 deliberately assessed only peak lumbo-pelvic flexion accelerations [αFLEX[LS-PV]] and electromyographic trunk pre-activity prior to αFLEX[LS-PV] in 43 subjects performing varied landing tasks (height 45 cm; with definite or indefinite predictability of a subsequent instant follow-up jump). Subjects were contrasted with respect to their previous landing familiarity (>1000 vs. <100 landings performed in the past 10 years) and gender. Differences in αFLEX[LS-PV] and muscular pre-activity between the contrasted subject groups, as well as between landing tasks, were statistically tested by three-way mixed ANOVA with post-hoc tests.
Associations between αFLEX[LS-PV] and muscular pre-activity were assessed factor-specifically by Spearman's rank-order correlation coefficient (rS). Complementarily, muscular pre-activity was subdivided by landing phase [DROP, IMPACT] and assessed separately for phase-specific associations with αFLEX[LS-PV]. The activity of each muscle was moreover compared pairwise between DROP and IMPACT (mean ±SD, dependent-samples t-test). RESULTS: αMAX showed overall high variability within test days (CV = 36%). The lowest intra-individual variability and highest reproducibility of αMAX between test days were found in flexion of the spine. αFLEX[LS-PV] showed largely consistent, significantly higher magnitudes than the αMAX values of more cranial spinal joints and other motion planes. αFLEX[LS-PV] moreover increased gradually with landing height. Landing-unfamiliar subjects presented significantly higher αFLEX[LS-PV] than landing-familiar ones (p=.016). M. Obliquus Int. together with M. Transversus Abd. (66 ±32% MVC) and M. Erector Spinae (47 ±15% MVC) presented markedly the highest activity, in contrast to the lowest activity of M. Rectus Abd. (10 ±4% MVC). Landing-unfamiliar subjects showed significantly higher activity of M. Obliquus Ext. than landing-familiar ones (17 ±8% MVC vs. 12 ±7% MVC, p=.044). M. Obliquus Ext. and its co-contraction ratio with M. Erector Spinae moreover exhibited low but significant positive correlations with αFLEX[LS-PV] (rs=.39, rs=.31). Each trunk muscle distributed the larger share of its activity to DROP, whereas peak activations of most muscles emerged in the proportionally shorter IMPACT phase. A commonly increased muscular pre-activation, particularly at IMPACT, was found in landings with a contrived follow-up jump and in female subjects, whereas αFLEX[LS-PV] was only marginally affected by these factors. DISCUSSION: The highest segmental angular accelerations of the spine in drop landings emerge in sagittal flexion of the lumbar spine. The compensatory stabilisation of the spine appears to be provided predominantly by a dorso-ventral co-contraction of M. Obliquus Int., M. Transversus Abd., and M. Erector Spinae. Elevated pre-activity of M. Obliquus Ext. presumably characterises poor landing experience, which might engender increased bending loads on the lumbar spine. The pervasive large variability of spinal angular accelerations measured across all landing types suggests that diverse mechanisms are utilised to compensate for spinal impacts during landings. A standardised assessment and valid evaluation of landing-evoked lumbar bending loads is thereby largely confined. CONCLUSION: Drop landings elicit the most strenuous lumbo-pelvic flexion accelerations, which can be taken as representative of high-energy bending loads on the spine. These entail the highest risk of overloading spinal tissue when landing demands exceed the individual's landing skill. Previous landing experience and training appear to effectively improve muscular spine stabilisation patterns, diminishing spinal bending loads.
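As an illustration of the kind of computation behind αMAX, the following minimal Python sketch (my own illustration under simplified assumptions; in practice the motion data would be low-pass filtered before differentiation) derives a peak angular acceleration from a sampled joint-angle time series:

```python
import numpy as np

def peak_angular_acceleration(angle_deg, fs):
    """angle_deg: joint flexion angle in degrees; fs: sampling rate in Hz.
    Returns the peak angular acceleration in rad/s^2."""
    angle = np.deg2rad(np.asarray(angle_deg, dtype=float))
    velocity = np.gradient(angle, 1.0 / fs)         # first derivative, rad/s
    acceleration = np.gradient(velocity, 1.0 / fs)  # second derivative, rad/s^2
    return float(np.abs(acceleration).max())

# Illustrative 200 Hz trace: a synthetic flexion pulse around the landing impact.
fs = 200
t = np.arange(0, 0.5, 1 / fs)
angle = 20 * np.exp(-((t - 0.25) / 0.05) ** 2)      # degrees
print(f"alpha_max = {peak_angular_acceleration(angle, fs):.0f} rad/s^2")
```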
Magnetotellurics (MT) is a geophysical method that is able to image the electrical conductivity structure of the subsurface by recording time series of natural electromagnetic (EM) field variations. During data processing, these time series are divided into small segments, and for each segment spectral values are computed, which are typically averaged in a statistical manner to obtain MT transfer functions. Unfortunately, the presence of man-made EM noise sources often deteriorates a significant portion of the recorded time series, resulting in disturbed transfer functions. Many advanced processing techniques, e.g. robust statistics, pre-stack data selection, or remote reference, have been developed to tackle this problem. The first two techniques reduce the amount of outliers and noise in the data, whereas the latter approach removes noise by using data from another MT station. However, especially in populated regions, the data processing remains quite challenging even with these approaches. In this thesis, I present two novel pre-stack data confinement and selection criteria for the detection of outliers and noise-affected data, based on (i) a distance measure of each data segment with regard to the entire sample distribution and (ii) the evaluation of the magnetic polarisation direction of all segments. The first criterion is able to remove data points that scatter around the desired MT distribution, and under some circumstances it can even reject complete data clusters originating from noise sources. The second criterion eliminates data points caused by a strongly polarised magnetic signal. Both criteria have been successfully applied to many stations with different noise contaminations, showing that they can significantly improve the transfer function estimation. The novel criteria were used to evaluate an MT data set from the Eastern Karoo Basin in South Africa. The corresponding field experiment is part of an extensive research programme to collect information on, e.g., the current geological setting in this region prior to a potential shale gas exploitation. The aim was to investigate whether a three-dimensional (3D) inversion of the newly measured data fosters a more realistic mapping of the physical properties of the target horizon. For this purpose, a comprehensive 3D model was derived by using all available data. In a second step, I analysed parameters of the target horizon, e.g. its conductivity, that are proxies for physical properties such as thermal maturity and porosity.
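To make the two criteria concrete, here is a minimal Python sketch (my own illustration, not the thesis's implementation; function names and thresholds are hypothetical) of a distance-based and a polarisation-based keep-mask for the data segments:

```python
import numpy as np

def distance_keep_mask(segments, keep_quantile=0.95):
    """Criterion (i), sketched: Mahalanobis-type distance of each segment's
    spectral estimate to the full sample distribution. `segments` is an
    (n_segments, n_features) array, e.g. real/imaginary parts of the field
    spectra at one target frequency."""
    diff = segments - segments.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(segments, rowvar=False))  # pseudo-inverse for stability
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)        # squared distances
    return d2 <= np.quantile(d2, keep_quantile)               # True = keep segment

def polarisation_keep_mask(bx, by, max_fraction=0.3, n_bins=36):
    """Criterion (ii), sketched: reject segments whose horizontal magnetic
    field is polarised along one dominant direction. `bx`, `by` hold one
    real-valued amplitude per segment."""
    angles = np.mod(np.arctan2(by, bx), np.pi)                # direction, 0..pi
    hist, edges = np.histogram(angles, bins=n_bins, range=(0.0, np.pi))
    peak = int(np.argmax(hist))
    if hist[peak] / len(angles) <= max_fraction:              # no dominant direction
        return np.ones(len(angles), dtype=bool)
    in_peak = (angles >= edges[peak]) & (angles < edges[peak + 1])
    return ~in_peak
```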
New bio-based polymers
(2018)
Redox-responsive polymers, such as poly(disulfide)s, are a versatile class of polymers with potential applications including gene- and drug-carrier systems. Their degradability under reductive conditions allows for a controlled response to the different redox states that are present throughout the body. Poly(disulfide)s are typically synthesized by step-growth polymerizations, which, however, may suffer from low conversions and therefore low molar masses, limiting potential applications. The purpose of this thesis was therefore to find and investigate new synthetic routes towards amino acid-based poly(disulfide)s.
The different routes in this thesis include entropy-driven ring-opening polymerizations of novel macrocyclic monomers derived from cystine derivatives. These monomers were obtained with overall yields of up to 77% and were analyzed by mass spectrometry as well as by 1D and 2D NMR spectroscopy. The kinetics of the entropy-driven ring-opening metathesis polymerization (ED-ROMP) were thoroughly investigated as a function of temperature, monomer concentration, and catalyst concentration. The polymerization was optimized to yield poly(disulfide)s with weight-average molar masses of up to 80 kDa and conversions of ~80% at the thermodynamic equilibrium. Additionally, an alternative metal-free polymerization, namely the entropy-driven ring-opening disulfide metathesis polymerization (ED-RODiMP), was established for the polymerization of the macrocyclic monomers. The effects of different solvents, concentrations, and catalyst loadings on the polymerization process and its kinetics were studied. Polymers with very high weight-average molar masses of up to 177 kDa were obtained. Moreover, various post-polymerization reactions were successfully performed.
This work provides the first example of the homopolymerization of endo-cyclic disulfides by ED-ROMP and the first substantial study into the kinetics of the ED-RODiMP process.
Nervous allies
(2018)
This dissertation examines the development of diplomatic relations between France, the USA, and the Federal Republic of Germany in the period 1969-1980. Drawing on a broad multi-archival source base, it reconstructs the interdependent foreign policies of these three states in the context of the central thematic complexes of the 1970s: the rise and decline of détente, the dispute over the status quo in Europe, the German Question and the future of Berlin, the international economic and monetary crisis, the debate over the security and future of the Western alliance, and the NATO Double-Track Decision. It also considers a number of regional events and conflicts with far-reaching consequences, such as the Yom Kippur War, the Portuguese Revolution, and the Soviet invasion of Afghanistan.
The study follows the central, theoretically motivated question of the extent to which state foreign policy and diplomatic relations were shaped by individual actors at the head of governments, their agendas, perspectives, and personal relationships with international partners, or, conversely, the extent to which their decision-making was defined and limited by structural factors of a geopolitical, economic, or political nature. To answer this question, the dissertation focuses on the analysis of changes of government and their effects on continuity and change in foreign policy. The narrative covers seven such transitions: from Chancellor Kurt Georg Kiesinger to Willy Brandt (1969) and from Brandt to Helmut Schmidt (1974) in Bonn; from President Charles de Gaulle to Georges Pompidou (1969) and from Pompidou to Valéry Giscard d'Estaing (1974) in Paris; and from Lyndon B. Johnson to Richard M. Nixon (1969), from Nixon to Gerald R. Ford (1974), and from Ford to Jimmy Carter (1977) in Washington.
Beyond a range of empirically grounded findings on the history of international relations in the 1970s, this work demonstrates above all highly personalised and exclusive foreign policy decision-making structures and a marked dependence of the quality of intergovernmental relations on the personal relationships of foreign policy leaders. At the same time, structural limits to their room for manoeuvre in the international system become apparent, depending on factors such as military security and geopolitical position, access to resources and economic performance, and political pressure from home and abroad. The dissertation's central conclusion is that although changes of government at times brought drastic shifts in the content and style of foreign relations, and although Bonn, Paris, and Washington were confronted with many new challenges over the course of the decade, path-dependent structural pressures on the whole produced greater political continuity in the international system than is often associated with the 1970s, a decade known for profound historical change.
This research addressed the question of whether it is possible to simplify current microcontact printing systems for the production of anisotropic building blocks or patchy particles by using common chemicals, while still maintaining reproducibility, high precision, and tunability of the Janus-balance.
Chapter 2 introduced the microcontact printing (µCP) materials as well as their defined electrostatic interactions. In particular, polydimethylsiloxane (PDMS) stamps, silica particles, and high-molecular-weight polyethylenimine ink were mainly used in this research. All of these components are commercially available in large quantities and affordable, which gives this approach a huge potential for further up-scaling developments. The benefits of polymeric over molecular inks were described, including the flexible influence on the printing pressure. With this alteration of the µCP concept, a new solvent-assisted particle release mechanism enabled the switch from two-dimensional surface modification to three-dimensional structure printing on colloidal silica particles, without changing printing parameters or starting materials. This effect opened the way to use the internal volume of the achieved patches for the incorporation of nano-additives, introducing additional physical properties into the patches without alteration of the surface chemistry.
The success of this system and its achievable range was further investigated in chapter 3 by giving detailed information about patch geometry parameters including diameter, thickness, and yield. For this purpose, silica particles in a size range between 1 µm and 5 µm were printed with different ink concentrations to change the Janus-balance of these single-patched particles. A necessary intermediate step, consisting of air-plasma treatment, for the production of trivalent particles using "sandwich" printing was discovered, and comparative studies concerning the patch geometry of single- and double-patched particles were conducted. Additionally, the usage of structured PDMS stamps during printing was described. These results demonstrate the excellent precision of this approach and open the pathway for even greater accuracy, as further parameters, e.g. humidity and temperature during stamp loading, can be finely tuned and investigated.
The performance of these synthesized anisotropic colloids was further investigated in chapter 4, starting with behaviour studies in alcoholic and aqueous dispersions. Here, the stability of the applied patches was studied over a broad pH range, revealing a release mechanism based on disabling the electrostatic bonding between the particle surface and the polyelectrolyte ink. Furthermore, the absence of strong attractive forces between divalent particles in water was investigated using XPS measurements. These results led to the conclusion that the transfer of small PDMS oligomers onto the patch surface shields charges, preventing colloidal agglomeration. Based on this knowledge, further patch modifications for particle self-assembly were introduced, including physical approaches using magnetic nano-additives, chemical patch functionalization with avidin-biotin or the light-responsive cyclodextrin-arylazopyrazole coupling, as well as particle surface modification for the synthesis of highly amphiphilic colloids. The successful coupling, its efficiency, stability, and behaviour in different solvents were evaluated to find a suitable coupling system for future assembly experiments. These results open up the possibility of building more sophisticated structures by colloidal self-assembly.
Certain findings needed further analysis to understand their underlying mechanics, including the relatively broad patch diameter distribution and the decreasing patch thickness for smaller silica particles. Mathematical assumptions for both effects are introduced in chapter 5. First, they demonstrate the connection between the naturally occurring particle size distribution and the broadening of the patch diameter, indicating an even higher precision of this µCP approach. Second, they explain the increase of the contact area between particle and ink surface due to denser particle packing, which leads to a decrease in printing pressure for smaller particles.
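As a toy illustration of the first point, the following Monte Carlo sketch (my own; the spherical-cap contact geometry and all parameter values are illustrative assumptions, not the thesis's model) propagates a spread in particle radius into a spread in patch diameter:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only (not taken from the thesis):
R = rng.normal(loc=1.0, scale=0.05, size=100_000)  # particle radii in µm
h = 0.08                                           # indentation depth into the ink film, µm

# Contact-circle ("patch") diameter for a sphere pressed to depth h:
# d = 2 * sqrt(R^2 - (R - h)^2) = 2 * sqrt(2*R*h - h^2)
d = 2.0 * np.sqrt(2.0 * R * h - h**2)

cv = lambda x: x.std() / x.mean()                  # coefficient of variation
print(f"CV of particle diameter: {cv(2 * R):.1%}")
print(f"CV of patch diameter:    {cv(d):.1%}")
```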
The calculations of chapter 5 ultimately led to the development of a new mechanical microcontact printing approach that uses centrifugal forces for precise pressure control and excellent parallel alignment of the printing substrates. First results with this device, and their comparison with the previously conducted by-hand experiments, conclude this research. They furthermore display the advantages of such a device for future applications using a mechanical printing approach, especially for accessing even smaller nanoparticles with great precision and excellent yield.
In conclusion, this work demonstrates the successful adaptation of the µCP approach using commercially available and affordable silica particles and polyelectrolytes for high flexibility, reduced costs, and higher scale-up value. Furthermore, it was possible to increase the modification potential by introducing three-dimensional patches as additional functionalization volume. While maintaining high colloidal stability, the different coupling systems demonstrated the self-assembly capabilities of this toolbox for anisotropic particles.
Gamma-ray astronomy has provided unique insights into cosmic-ray accelerators over the past few decades. By combining information at the highest photon energies with the entire electromagnetic spectrum in multi-wavelength studies, detailed knowledge of non-thermal particle populations in astronomical objects and systems has been gained: many individual classes of gamma-ray sources could be identified both inside and outside our galaxy. Different sources were found to exhibit a wide range of temporal evolution, from variability on time-scales of seconds to stable behaviour over many years of observations. With the dawn of both neutrino and gravitational-wave astronomy, additional messengers have come into play in recent years. This development marks the advent of multi-messenger astronomy: a novel approach not only for the search for sources of cosmic rays, but for astronomy in general.
In this thesis, both traditional multi-wavelength studies and multi-messenger studies will be presented. They were carried out with the H.E.S.S. experiment, an imaging air Cherenkov telescope array located in the Khomas Highland of Namibia. H.E.S.S. entered its second phase in 2012 with the addition of a large, fifth telescope. While the initial array was limited to the study of gamma-rays with energies above 100 GeV, the new instrument allows access to gamma-rays with energies down to a few tens of GeV. The strengths of the multi-wavelength approach will be demonstrated using the example of the galaxy NGC253, which is undergoing an episode of enhanced star formation. The gamma-ray emission will be discussed in light of all the information on this system available from radio, infrared, and X-ray observations. These wavelengths reveal detailed information on the population of supernova remnants, which are suspected cosmic-ray accelerators. A broad-band gamma-ray spectrum is derived from H.E.S.S. and Fermi-LAT data. The improved analysis of the H.E.S.S. data provides a measurement that is no longer dominated by systematic uncertainties. Finally, the long-term behaviour of cosmic rays in the starburst galaxy NGC253 is characterised.
In contrast to the long time-scale evolution of a starburst galaxy, multi-messenger studies are especially intriguing when shorter time-scales are being probed. A prime example of a short time-scale transient is the Gamma-Ray Burst; the efforts to understand this phenomenon effectively founded the branch of gamma-ray astronomy. The multi-messenger approach allows for the study of elusive phenomena such as Gamma-Ray Bursts and other transients using electromagnetic radiation, neutrinos, cosmic rays, and gravitational waves contemporaneously. Although contemporaneous observations have only recently gained importance, executing such observation campaigns still presents a major challenge due to the differing limitations and strengths of the participating infrastructures.
An alert system for transient phenomena has been developed for H.E.S.S. over the course of this thesis. It aims to address many follow-up challenges in order to maximise the science return of the new large telescope, which is able to repoint much faster than the initial four telescopes. The system allows for fully automated observations based on scientific alerts from any wavelength or messenger and enables H.E.S.S. to participate in multi-messenger campaigns. Utilising this new system, many interesting multi-messenger observation campaigns have been performed. Several highlight observations with H.E.S.S. are analysed, presented, and discussed in this work. Among them are observations of Gamma-Ray Bursts with low latency and low energy threshold, the follow-up of a neutrino candidate in spatial coincidence with a flaring active galactic nucleus, and the follow-up of the merger of two neutron stars, which was revealed by the coincidence of gravitational waves and a Gamma-Ray Burst.
Over the last decades mechanisms of recognition of morphologically complex words have been extensively examined in order to determine whether all word forms are stored and retrieved from the mental lexicon as wholes or whether they are decomposed into their morphological constituents such as stems and affixes. Most of the research in this domain focusses on English. Several factors have been argued to affect morphological processing including, for instance, morphological structure of a word (e.g., existence of allomorphic stem alternations) and its linguistic nature (e.g., whether it is a derived word or an inflected word form). It is not clear, however, whether processing accounts based on experimental evidence from English would hold for other languages. Furthermore, there is evidence that processing mechanisms may differ across various populations including children, adult native speakers and language learners. Recent studies claim that processing mechanisms could also differ between older and younger adults (Clahsen & Reifegerste, 2017; Reifegerste, Meyer, & Zwitserlood, 2017).
The present thesis examined how properties of the morphological structure, the type of linguistic operation involved (i.e., the linguistic contrast between inflection and derivation), and characteristics of the particular population, such as older adults (e.g., potential effects of ageing as a result of cognitive decline or of the greater experience and exposure of older adults), affect the initial, supposedly automatic stages of morphological processing in Russian and German. To this end, a series of masked priming experiments was conducted.
In experiments on Russian, the processing of derived -ost’ nouns (e.g., glupost’ ‘stupidity’) and of inflected forms with and without allomorphic stem alternations in 1P.Sg.Pr. (e.g., igraju – igrat’ ‘to play’ vs. košu – kosit’ ‘to mow’) was examined. The first experiment on German examined and directly compared processing of derived -ung nouns (e.g., Gründung ‘foundation’) and inflected -t past participles (e.g., gegründet ‘founded’), whereas the second one investigated the processing of regular and irregular plural forms (-s forms such as Autos ‘cars’ and -er forms such as Kinder ‘children’, respectively).
The experiments in both languages showed robust and comparable facilitation effects for derived words and regularly inflected forms without stem changes (-t participles in German, forms of -aj verbs in Russian). The observed morphological priming effects could be clearly distinguished from purely semantic or orthographic relatedness between words. At the same time, we found a contrast between forms with and without allomorphic stem alternations in Russian and between regular and irregular forms in German, with significantly more priming for unmarked stems (relative to alternated ones) and significantly more priming for regular compared to irregular word forms. These findings indicate the relevance of the morphological properties of a word for the initial stages of processing, contrary to claims in the literature holding that priming effects are determined by surface form and meaning overlap only. Instead, our findings are more consistent with approaches positing a contrast between combinatorial, rule-based and lexically stored forms (Clahsen, Sonnenstuhl, & Blevins, 2003).
The doctoral dissertation also addressed the role of ageing and age-related cognitive changes in morphological processing. The results obtained on this research issue are twofold. On the one hand, the data demonstrate effects of ageing on general measures of language performance, i.e., overall longer reaction times and/or higher accuracy rates in older than in younger individuals. These findings replicate results from previous studies, which have been linked to the general slowing of processing speed at older age and to the larger vocabularies of older adults. On the other hand, we found that more specific aspects of language processing appear to be largely intact in older adults, as revealed by largely similar morphological priming effects for older and younger adults. These latter results indicate that the initial stages of morphological processing investigated here by means of the masked priming paradigm persist into older age. One caveat should, however, be noted. Achieving the same performance as a younger individual in a behavioral task may not necessarily mean that the same neural processes are involved. Older people may, for example, have to recruit a wider brain network than younger individuals. To address this and related possibilities, future studies should examine older people's neural representations and the mechanisms involved in morphological processing.
Answer Set Programming (ASP) is a declarative problem solving approach, combining a rich yet simple modeling language with high-performance solving capabilities. Although this has already resulted in various applications, certain aspects of such applications are more naturally modeled using variables over finite domains, for example to account for resources, fine timings, coordinates, or functions. Our goal is thus to extend ASP with constraints over integers while preserving its declarative nature. This allows for fast prototyping and elaboration-tolerant problem descriptions of resource-related applications. The resulting paradigm is called Constraint Answer Set Programming (CASP).
We present three different approaches for solving CASP problems. The first one, a lazy, modular approach, combines an ASP solver with an external system for handling constraints. This approach has the advantage that two state-of-the-art technologies work hand in hand to solve the problem, each concentrating on its part of it. The drawback is that inter-constraint dependencies cannot be communicated back to the ASP solver, impeding its learning algorithm. The second approach translates all constraints to ASP. Using appropriate encoding techniques, this results in a very fast, monolithic system. Unfortunately, due to the large, explicit representation of constraints and variables, translation techniques are restricted to small and mid-sized domains. The third approach merges the lazy and the translational approach, combining the strengths of both while removing their weaknesses. To this end, we enhance the dedicated learning techniques of an ASP solver with the inferences of the translating approach in a lazy way; that is, the important knowledge is only made explicit when needed.
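To see why the translational approach is sensitive to domain size, consider the standard order encoding of integer variables, sketched here in Python (my own illustration, not the thesis's encoding): the number of atoms and rules grows with the domain size, so large domains blow up the explicit representation.

```python
# Toy illustration: an integer variable x in 0..n is represented by Boolean
# atoms ord(x,i), read as "x <= i". Consistency rules ord(x,i) -> ord(x,i+1)
# make the representation explicit, and linear in the domain size.

def order_encoding(var, n):
    atoms = [f"ord({var},{i})" for i in range(n)]          # "x <= i", i < n
    rules = [(f"ord({var},{i})", f"ord({var},{i+1})")      # x<=i implies x<=i+1
             for i in range(n - 1)]
    return atoms, rules

def encode_leq(x, y, n):
    # The constraint x <= y becomes: y <= i implies x <= i, for every i.
    return [(f"ord({y},{i})", f"ord({x},{i})") for i in range(n)]

atoms, chain = order_encoding("x", 10)
print(len(atoms), "atoms and", len(chain) + len(encode_leq("x", "y", 10)),
      "implications for a domain of size 10")
```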
By using state-of-the-art techniques from neighboring fields, we provide ways to tackle real-world, industrial-size problems. By extending CASP to reactive solving, we open up new application areas such as online planning with continuous domains and durations.
German vocational training has lost considerable appeal in recent years. This applies in particular to dual commercial vocational training. While a few years ago it was still considered a possible educational path for high-achieving school students, most of them today prefer university study. The growing number of university drop-outs shows, however, that potential is being lost in this way, because young people who opt for university choose an educational path that is not suited to them. Previous efforts to establish alternative educational paths, such as vocational academies (Berufsakademien), have had some success, but are based on a concept oriented exclusively towards the needs of business. It is the author's conviction, however, that new, innovative educational paths must also take into account the needs and expectations of those for whom they are designed, since today's generation of young people is characterised by a different set of values than its predecessors. This dissertation therefore develops a model of business-oriented training that is derived from various elements of motivation theory and at the same time takes into account the values of today's young generation. It draws on both Barnard's inducement-contribution theory and Vroom's expectancy theory; a further focus of the work lies in adapting Herzberg's two-factor theory to the present day.
Empirically, the dissertation is based on a three-stage research design. The first stage comprises a quantitative survey of 459 Abitur holders and 100 university students. In the second stage, 10 students and 12 Abitur holders were interviewed qualitatively. The results were validated in the third stage by means of expert interviews. The aim of the empirical study was to test four hypotheses as the basis for deriving the model:
Hypothesis H1 - Flexibility increases the attractiveness of business-oriented training: Flexibility was identified as a relevant motivator in the choice of an educational path. Young people today do not want to have to commit themselves immediately or too early.
Hypothesis H2 - Stays abroad increase the attractiveness of business-oriented training: It was confirmed that stays abroad increase the attractiveness of business-oriented training; however, a number of barriers keep young people (although they recognise the basic advantage) from considering a stay abroad for themselves.
Hypothesis H3 - Demonstrating a career perspective increases the attractiveness of business-oriented training: For today's generation of young people, the choice of an educational path is dominated by the prospect of an occupation that provides a secure income, and thus a good life, and that is in their view meaningful. Only a minority aspire to leadership positions entailing greater responsibility.
Hypothesis H4 - Additional monetary incentives increase the attractiveness of business-oriented training: Remuneration components are not rejected in principle (that would be irrational), but they do not have the incentive function that the preliminary study for this work suggested they might have. They play only a subordinate role in the decision for an educational path; nevertheless, remuneration contributes to the attractiveness of an educational path.
Based on these results, a model of business-oriented training was derived that is flexible both horizontally and vertically. Horizontal flexibility is provided by exposure to different companies and industries within a single training year (years 1 and 2); specialisation only takes place in the later training years. Vertical flexibility is provided by the option of entering working life with a qualification after each training year and, if desired, resuming the training at a later point in time. The model also offers university drop-outs the possibility of entering the programme in training year 2 or 3. Stays abroad are integrated into years 2 and/or 3 and offered on an optional basis; preparatory courses can be taken from year 1 onwards. The high importance of career perspectives is accounted for in the derived model on several levels: a recognised qualification is obtained after each training year. While the qualifications of years 1 and 2 are equivalent to IHK (Chamber of Industry and Commerce) certificates, academic degrees begin in year 3 (year 3 Bachelor, year 4 Master). Remuneration is an integral part of business-oriented training, with its level increasing with the duration of the training.
Since the introduction of the model of business-oriented training entails overcoming institutional paradigms and barriers, a further expert survey on its feasibility was conducted as part of the outlook of this work. The model presupposes a flexibility on the institutional side (in particular also on the part of the chambers) that the majority of the experts currently view with scepticism. The conceptual design meets with fundamental approval, although some details, for example the duration of the training, still require clarification.
In principle, the experts share the author's view that a change of thinking in the German training landscape is desired and demanded, particularly in the commercial sector. With its model of business-oriented training, this work makes an important contribution to the discussion of new educational paths.
East Africa is a natural laboratory: Studying its unique geological and biological history can help us better inform our theories and models. Studying its present and future can help us protect its globally important biodiversity and ecosystem services. East African vegetation plays a central role in all these aspects, and this dissertation aims to quantify its dynamics through computer simulations.
Computer models help us recreate past settings, forecast into the future, or conduct simulation experiments that we cannot otherwise perform in the field. But before all that, one needs to test a model's performance. The outputs that the model produced from present-day inputs agreed well with present-day observations of East African vegetation. Next, I simulated past vegetation for which fossil pollen data are available for comparison. With computer models, we can fill the gaps of knowledge between the sites from which we have fossil pollen data and create a more complete picture of the past. The good level of agreement between model and pollen data where they overlapped in space further validated the model's performance.
Once the model was tested and validated for the region, it became possible to probe one of the long-standing questions regarding East African vegetation: how did East Africa lose its tropical forests? Present-day vegetation in the tropics is characterized by continuous forests worldwide, except in tropical East Africa, where forests only occur as patches. In a series of simulation experiments, I was able to show under which conditions these forest patches could have been connected and fragmented in the past. This study showed the sensitivity of East African vegetation to climate change and variability such as those expected under future climate change.
El Niño Southern Oscillation (ENSO) events, which result from fluctuations in temperature between the ocean and atmosphere, bring further variability to the East African climate and are predicted to increase in intensity in the future. But climate models are still not good at capturing the patterns of these events. In a study in which I quantified the influence of ENSO events on East African vegetation, I showed how different the future vegetation could be from what we currently predict with climate models that lack an accurate ENSO contribution. Consideration of these discrepancies is important for our future global carbon budget calculations and management decisions.
Business process automation improves the efficiency with which organizations perform work. To this end, a business process is first documented as a process model, which then serves as a blueprint for a number of process instances representing the execution of specific business cases. In existing business process management systems, process instances run independently from each other. In practice, however, instances are also collected in groups at certain process activities for a combined execution to improve process performance. Currently, this so-called batch processing is executed manually or supported by external software. Only few research proposals exist to explicitly represent and execute batch processing needs in business process models, and these works also lack a comprehensive understanding of the requirements.
This thesis addresses the described issues by providing a basic concept, called batch activity. It allows an explicit representation of batch processing configurations in process models and provides a corresponding execution semantics, thereby easing automation. The batch activity groups different process instances based on their data context and can synchronize their execution over one or even multiple process activities. The concept was conceived based on a requirements analysis considering existing literature on batch processing from different domains as well as industry examples. Further, this thesis provides two extensions: First, a flexible batch configuration concept, based on event processing techniques, is introduced to allow run-time adaptations of batch configurations. Second, a concept for collecting and batching activity instances of multiple different process models is given. Thereby, the batch configuration is centrally defined, independently of the process models, which is especially beneficial for organizations with large process model collections. This thesis provides a technical evaluation as well as a validation of the presented concepts. A prototypical implementation in an existing open-source BPMS shows that batch processing can be enabled with few extensions. Further, it demonstrates that the consolidated view of several work items in one user form can improve work efficiency. The validation, in which the batch activity concept is applied to different use cases in a simulated environment, indicates cost savings for business processes when a suitable batch configuration is used. For the validation, an extensible business process simulator was developed. It enables process designers to study the influence of a batch activity on a process with regard to its performance.
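A minimal sketch of the core idea, written in Python (my own illustration; class and rule names are hypothetical and not taken from the thesis): instances are grouped by a data-context key and released together once an activation rule, here a size threshold or a timeout, fires.

```python
from dataclasses import dataclass, field
from time import monotonic

@dataclass
class BatchActivity:
    """Sketch of a batch activity: process instances are grouped by a
    data-context key and released together as one combined execution."""
    group_by: callable            # maps a process instance to its context key
    max_size: int = 5             # activation rule: batch size threshold
    timeout_s: float = 60.0       # activation rule: maximum waiting time
    _batches: dict = field(default_factory=dict)

    def enqueue(self, instance):
        key = self.group_by(instance)
        batch = self._batches.setdefault(key, {"items": [], "since": monotonic()})
        batch["items"].append(instance)
        if len(batch["items"]) >= self.max_size:
            return self._release(key)
        return None

    def poll(self):
        """Called periodically; releases batches whose timeout has expired."""
        due = [k for k, b in self._batches.items()
               if monotonic() - b["since"] >= self.timeout_s]
        return [self._release(k) for k in due]

    def _release(self, key):
        return self._batches.pop(key)["items"]   # executed as one batch

# Usage: group shipment instances by destination city (illustrative).
batcher = BatchActivity(group_by=lambda inst: inst["city"], max_size=2)
batcher.enqueue({"id": 1, "city": "Potsdam"})
ready = batcher.enqueue({"id": 2, "city": "Potsdam"})
print(ready)   # both instances are released together as one batch
```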
The development of self-adaptive software requires the engineering of an adaptation engine that controls the underlying adaptable software through a feedback loop. State-of-the-art approaches prescribe the feedback loop in terms of the number of feedback loops, how the activities (e.g., monitor, analyze, plan, and execute (MAPE)) and the knowledge are structured into a feedback loop, and the type of knowledge. Moreover, the feedback loop is usually hidden in the implementation or framework and therefore not visible in the architectural design. Additionally, an adaptation engine often employs runtime models that either represent the adaptable software or capture strategic knowledge such as reconfiguration strategies. State-of-the-art approaches do not systematically address the interplay of such runtime models, which would otherwise allow developers to freely design the entire feedback loop.
This thesis presents ExecUtable RuntimE MegAmodels (EUREMA), an integrated model-driven engineering (MDE) solution that rigorously uses models for engineering feedback loops. EUREMA provides a domain-specific modeling language to specify and an interpreter to execute feedback loops. The language allows developers to freely design a feedback loop concerning the activities and runtime models (knowledge) as well as the number of feedback loops. It further supports structuring the feedback loops in the adaptation engine that follows a layered architectural style. Thus, EUREMA makes the feedback loops explicit in the design and enables developers to reason about design decisions.
To address the interplay of runtime models, we propose the concept of a runtime megamodel, which is a runtime model that contains other runtime models as well as activities (e.g., MAPE) working on the contained models. This concept is the underlying principle of EUREMA. The resulting EUREMA (mega)models are kept alive at runtime and they are directly executed by the EUREMA interpreter to run the feedback loops. Interpretation provides the flexibility to dynamically adapt a feedback loop. In this context, EUREMA supports engineering self-adaptive software in which feedback loops run independently or in a coordinated fashion within the same layer as well as on top of each other in different layers of the adaptation engine. Moreover, we consider preliminary means to evolve self-adaptive software by providing a maintenance interface to the adaptation engine.
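The following Python sketch (my own illustration of the principle, not EUREMA's actual language or API) shows a runtime megamodel as an executable object that contains runtime models together with the activities operating on them:

```python
# Sketch: a megamodel contains runtime models plus the activities (here three
# of the MAPE steps, for brevity) that read and write those models; executing
# the megamodel runs one iteration of the feedback loop.

class Megamodel:
    def __init__(self):
        self.models = {}       # contained runtime models, e.g. architecture, plan
        self.activities = []   # ordered (name, function) pairs

    def add_model(self, name, model):
        self.models[name] = model

    def add_activity(self, name, fn):
        self.activities.append((name, fn))

    def run_once(self):
        """One feedback-loop iteration, interpreted over the contained models."""
        for name, fn in self.activities:
            fn(self.models)    # each activity reads/writes the runtime models

loop = Megamodel()
loop.add_model("architecture", {"servers": 2, "failed": 1})
loop.add_model("plan", {})
loop.add_activity("monitor", lambda m: m["plan"].update(failures=m["architecture"]["failed"]))
loop.add_activity("analyze", lambda m: m["plan"].update(repair=m["plan"]["failures"] > 0))
loop.add_activity("execute", lambda m: m["architecture"].update(failed=0) if m["plan"]["repair"] else None)
loop.run_once()
print(loop.models["architecture"])   # {'servers': 2, 'failed': 0}
```

Because the loop is interpreted over a live data structure rather than compiled away, activities and models can in principle be swapped at runtime, which mirrors the flexibility argument made above.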
This thesis discusses EUREMA in detail by applying it to different scenarios such as single, multiple, and stacked feedback loops for self-repairing and self-optimizing the mRUBiS application. Moreover, it investigates the design and expressiveness of EUREMA, reports on experiments with a running system (mRUBiS) and with alternative solutions, and assesses EUREMA with respect to quality attributes such as performance and scalability.
The conducted evaluation provides evidence that EUREMA as an integrated and open MDE approach for engineering self-adaptive software seamlessly integrates the development and runtime environments using the same formalism to specify and execute feedback loops, supports the dynamic adaptation of feedback loops in layered architectures, and achieves an efficient execution of feedback loops by leveraging incrementality.
The central question of this thesis is: does the debt brake (Schuldenbremse) secure fiscal sustainability in Germany? To answer this question, the thesis first examines the anticipatory effects that the introduction of the debt brake had on the German federal states (Länder) in the period 2010-16. To this end, the observed consolidation performance and the consolidation incentive or pressure existing in 2009 were evaluated using a scorecard developed specifically for this purpose. Multiple regression analysis was then used to analyse how the scorecard factors influence the consolidation performance of the Länder. Almost 90% of the variation turned out to be explained by the independent variables budgetary position, debt burden, revenue growth, and pension burden, so that the debt brake probably played a rather subordinate role in the consolidation episode of 2009-2016. Subsequently, data collected in 65 expert interviews were used to analyse the limits to the effectiveness of the new fiscal rule, i.e. the risks that could complicate or prevent future compliance with the debt brake: municipal debt, FEUs, contingent liabilities in the form of guarantees for financial institutions, and pension obligations. The frequently voiced criticisms that the debt brake acts as a brake on the economy and on investment are also examined and rejected. Finally, potential future developments regarding the debt brake and public administration in Germany as well as the consolidation efforts of the Länder are discussed.
Plastic pollution is ubiquitous on the planet, since several million tons of plastic waste enter aquatic ecosystems each year, and the amount of plastic produced is expected to increase exponentially in the near future. The heterogeneity of materials, additives, and physical characteristics of plastics is typical of these emerging contaminants and affects their environmental fate in marine and fresh waters. Consequently, plastics can be found in the water column, sediments, or littoral habitats of all aquatic ecosystems. Most of this plastic debris will fragment under physical, chemical, and biological forces, producing particles of small size. These particles (< 5 mm) are known as "microplastics" (MP). Given their high surface-to-volume ratio, MP stimulate biofouling and the formation of biofilms in aquatic systems.
As a result of their unique structure and composition, the microbial communities in MP biofilms are referred to as the "plastisphere." While there are increasing data regarding the distinctive composition and structure of the microbial communities that form part of the plastisphere, scarce information exists regarding the activity of microorganisms in MP biofilms. A surface-attached lifestyle is often associated with an increase in horizontal gene transfer (HGT) among bacteria; this type of microbial activity therefore represents a relevant function worth analyzing in MP biofilms. The horizontal exchange of mobile genetic elements (MGEs) is an essential feature of bacteria: it accounts for the rapid evolution of these prokaryotes and their adaptation to a wide variety of environments. The process of HGT is also crucial for the spread of antibiotic resistance and for the evolution of pathogens, as many MGEs are known to contain antibiotic resistance genes (ARGs) and genetic determinants of pathogenicity.
In general, the research presented in this Ph.D. thesis focuses on the analysis of HGT and heterotrophic activity in MP biofilms in aquatic ecosystems. The primary objective was to analyze the potential for gene exchange in MP bacterial communities vs. that of the surrounding water, including bacteria from natural aggregates. Moreover, the thesis addressed the potential of MP biofilms for the proliferation of biohazardous bacteria and MGEs originating from wastewater treatment plants (WWTPs) and associated with antibiotic resistance. Finally, it sought to test whether the physiological profile of MP biofilms under different limnological conditions diverges from that of the water communities. Accordingly, the thesis is composed of three independent studies published in peer-reviewed journals. The two laboratory studies were performed using both model and environmental microbial communities; in the field experiment, natural communities from freshwater ecosystems were examined.
In Chapter I, the inflow of treated wastewater into a temperate lake was simulated with a concentration gradient of MP particles. The effects of MP on the microbial community structure and on the occurrence of integrase 1 (int1) were followed. int1 is a marker associated with mobile genetic elements and serves as a proxy for anthropogenic effects on the spread of antimicrobial resistance genes. During the experiment, the abundance of int1 increased in the plastisphere with increasing MP particle concentration, but not in the surrounding water. In addition, the microbial community on MP became more similar to the original wastewater community with increasing microplastic concentrations. Our results show that microplastic particles indeed promote the persistence of standard indicators of microbial anthropogenic pollution in natural waters.
In Chapter II, the experiments aimed to compare the permissiveness towards the model antibiotic resistance plasmid pKJK5 between aquatic bacterial communities that form biofilms on MP and those that are free-living. The frequency of plasmid transfer in bacteria associated with MP was higher than in bacteria that were free-living or in natural aggregates. Moreover, the increased gene exchange occurred in a broad range of phylogenetically diverse bacteria. The results indicate a different activity of HGT in MP biofilms, which could affect the ecology of aquatic microbial communities on a global scale as well as the spread of antibiotic resistance.
Finally, in Chapter III, physiological measurements were performed to assess whether microorganisms on MP have a functional diversity different from those in the water. General heterotrophic activity, such as oxygen consumption, was compared in microcosm assays with and without MP, while the diversity and richness of heterotrophic activities were calculated using Biolog® EcoPlates. Three lakes with different nutrient statuses presented differences in MP-associated biomass build-up. The functional diversity profiles of MP biofilms in all lakes differed from those of the communities in the surrounding water, but only in the oligo-mesotrophic lake did MP biofilms have a higher functional richness than the ambient water. The results support the view that MP surfaces act as new niches for aquatic microorganisms and can affect the global carbon dynamics of pelagic environments.
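For reference, functional richness and diversity are commonly derived from EcoPlate readings along the following lines; this Python sketch is my own illustration with an illustrative threshold, not the thesis's analysis script:

```python
import numpy as np

def ecoplate_metrics(absorbance, blank, threshold=0.25):
    """Functional richness and Shannon diversity of substrate use from the
    colour development of the EcoPlate carbon substrates (sketch)."""
    activity = np.clip(np.asarray(absorbance, dtype=float) - blank, 0.0, None)
    used = activity > threshold            # substrate counted as metabolised
    richness = int(used.sum())
    if richness == 0:
        return 0, 0.0
    p = activity[used] / activity[used].sum()
    shannon = float(-(p * np.log(p)).sum())
    return richness, shannon

# Illustrative readings for five substrates only (a real plate has 31):
richness, H = ecoplate_metrics([0.9, 0.1, 0.6, 1.2, 0.05], blank=0.05)
print(richness, round(H, 2))   # 3 substrates used, H ≈ 1.06
```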
Overall, the experimental work presented in Chapters I and II supports a scenario in which MP pollution affects HGT dynamics among aquatic bacteria. Among the consequences of this alteration is an increase in the mobilization and transfer efficiency of ARGs. Moreover, changes in HGT can be expected to affect the evolution of bacteria and the processing of organic matter, leading to different catabolic profiles such as those demonstrated in Chapter III. The results are discussed in the context of the fate and magnitude of plastic pollution and the importance of HGT for bacterial evolution and the microbial loop, i.e., the base of aquatic food webs. The thesis supports a relevant role of MP biofilm communities in the changes observed in the aquatic microbiome as a product of intense human intervention.
The continuously increasing pollution of aquatic environments with microplastics (plastic particles < 5 mm) is a global problem with potential implications for organisms of all trophic levels. For microorganisms, trillions of these floating microplastic particles represent a huge surface area for colonization. Due to their very low biodegradability, microplastics remain in the environment for years to centuries and can be transported over thousands of kilometers together with the attached organisms. Since pathogenic, invasive, or otherwise harmful species could also be spread this way, it is essential to study microplastics-associated communities.
For this doctoral thesis, eukaryotic communities on microplastics in brackish environments were analyzed for the first time and compared to communities in the surrounding water and on the natural substrate wood. With Illumina MiSeq high-throughput sequencing, more than 500 different eukaryotic taxa were detected on the microplastics samples. Among them were various green algae, dinoflagellates, ciliates, fungi, fungus-like protists, and small metazoans such as nematodes and rotifers. The most abundant organism was a dinoflagellate of the genus Pfiesteria, which may include fish-pathogenic and bloom-forming toxigenic species. Network analyses revealed numerous possible interactions among prokaryotes and eukaryotes in microplastics biofilms. Eukaryotic community compositions on microplastics differed significantly from those on wood and in the water, and compositions were additionally distinct among the sampling locations. Furthermore, the biodiversity on microplastics was clearly lower than the diversity on wood or in the surrounding water.
In another experiment, a situation was simulated in which treated wastewater containing microplastics was introduced into a freshwater lake. With increasing microplastics concentrations, the resulting bacterial communities became more similar to those from the treated wastewater. Moreover, the abundance of integrase I increased together with rising concentrations of microplastics. Integrase I is often used as a marker for anthropogenic environmental pollution and is further linked to genes conferring, e.g., antibiotic resistance.
This dissertation gives detailed insights into the complexity of prokaryotic and eukaryotic communities on microplastics in brackish and freshwater systems. Even though microplastics provide novel microhabitats for various microbes, they might also transport toxigenic, pathogenic, antibiotic-resistant, or parasitic organisms, meaning that their colonization can pose potential threats to humans and the environment. Finally, this thesis highlights the urgent need for more research as well as for strategies to minimize global microplastic pollution.
Metamaterial devices
(2018)
Digital fabrication machines such as 3D printers excel at producing arbitrary shapes, e.g., for decorative objects. In recent years, researchers started to engineer not only the outer shape of objects, but also their internal microstructure. Such objects, typically based on 3D cell grids, are known as metamaterials. Metamaterials have been used to create materials that, e.g., change their volume or have variable compliance.
While metamaterials were initially understood as materials, we propose to think of them as devices.
We argue that thinking of metamaterials as devices enables us to create internal structures that offer functionalities implementing an input-process-output model without electronics, purely within the material's internal structure. In this thesis, we investigate three aspects of such metamaterial devices that implement parts of the input-process-output model: (1) materials that process analog inputs by implementing mechanisms based on their microstructure, (2) materials that process digital signals by embedding mechanical computation into the object's microstructure, and (3) interactive metamaterial objects that output to the user by changing their outside to interact with their environment. The input to our metamaterial devices is provided directly by the users interacting with the device, e.g., by physically pushing the metamaterial, turning a handle, or pushing a button.
The design of such intricate microstructures, which enable the functionality of metamaterial devices, is not obvious. The complexity of the design arises from the fact that not only is a suitable cell geometry necessary, but the cells additionally need to play together in a well-defined way. To support users in creating such microstructures, we research and implement interactive design tools. These tools allow experts to freely edit their materials, while supporting novice users by auto-generating cell assemblies from high-level input. Our tools implement easy-to-use interactions like brushing, interactively simulate the cell structures' deformation directly in the editor, and export the geometry as a 3D-printable file. Our goal is to foster more research and innovation on metamaterial devices by allowing the broader public to contribute.
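As a rough sketch of the data structure such a design tool might operate on (my own illustration; the cell types, the brush, and the compatibility check are hypothetical and not taken from the thesis):

```python
import numpy as np

# A metamaterial object as a 3D grid of typed cells. A "brush" assigns cell
# types; a simple check flags neighbouring non-empty cells whose types are
# not allowed next to each other.

RIGID, SHEAR, EMPTY = 0, 1, 2          # hypothetical cell types
COMPATIBLE = {(RIGID, RIGID), (RIGID, SHEAR), (SHEAR, RIGID), (SHEAR, SHEAR)}

grid = np.full((10, 10, 10), EMPTY, dtype=np.int8)

def brush(grid, center, radius, cell_type):
    """Assign `cell_type` to all cells within `radius` of `center`."""
    idx = np.indices(grid.shape)
    dist = np.linalg.norm(idx - np.array(center)[:, None, None, None], axis=0)
    grid[dist <= radius] = cell_type

brush(grid, center=(5, 5, 5), radius=3, cell_type=SHEAR)
brush(grid, center=(5, 5, 1), radius=2, cell_type=RIGID)

def incompatible_neighbours(grid):
    """Count adjacent non-empty cell pairs with incompatible types."""
    count = 0
    for axis in range(3):
        view = np.swapaxes(grid, 0, axis)
        a, b = view[:-1], view[1:]                 # neighbours along `axis`
        mask = (a != EMPTY) & (b != EMPTY)
        count += sum((int(x), int(y)) not in COMPATIBLE
                     for x, y in zip(a[mask], b[mask]))
    return count

print(incompatible_neighbours(grid))   # 0 here: all adjacent pairs are compatible
```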
This text is a contribution to the research on the worldwide success of evangelical Christianity and offers a new perspective on the relationship between late modern capitalism and evangelicalism. To this end, the utilization of affect and emotion in evangelicalism for the mobilization of its members is examined in order to identify similarities to their employment in late modern capitalism. Different examples from within the evangelical spectrum are analyzed as affective economies in order to elaborate how affective mobilization is crucial to evangelicalism's worldwide success. The pivotal point of this text is the exploration of how evangelicalism is able to activate the voluntary commitment of its members, financiers, and missionaries. Gathered here are examples where the two spheres (evangelicalism and late modern capitalism) overlap and reciprocate, followed by a theoretical exploration of how the findings presented support a view of evangelicalism as an inner-worldly narcissism that contributes to an assumed re-enchantment of the world.
Movement and navigation are essential for many organisms during parts of their lives. This is also true for bacteria, which can move along surfaces and swim through liquid environments. They are able to sense their environment and move towards environmental cues in a directed fashion.
These abilities enable microbial lifecycles in biofilms, improved food uptake, host infection, and much more. In this thesis we study aspects of the swimming movement, or motility, of the soil bacterium Pseudomonas putida (P. putida). Like most bacteria, P. putida swims by rotating its helical flagella, but their arrangement differs from that of the main model organism in bacterial motility research, Escherichia coli (E. coli). P. putida is known for its intriguing motility strategy, in which fast and slow episodes can follow one another. Until now, it was not known how these two speeds are produced and what advantages they might confer on this bacterium.
Normally the flagella, the main component of thrust generation in bacteria, are not observable by ordinary light microscopy. To elucidate this behavior, we therefore used a fluorescent staining technique on a mutant strain of this species to specifically label the flagella while leaving the cell body only faintly stained. This allowed us to image the flagella of the swimming bacteria with high spatial and temporal resolution using a customized high-speed fluorescence microscopy setup. Our observations show that P. putida can swim in three different modes. First, it can swim with the flagella pushing the cell body, which is the main mode of swimming motility previously known from other bacteria. Second, it can swim with the flagella pulling the cell body, which was thought not to be possible for cells with multiple flagella. Lastly, it can wrap its flagellar bundle around the cell body, which results in a swimming speed slower by a factor of two; in this mode, the flagella adopt a different physical conformation with a larger helix radius, so that the cell body fits inside. These three swimming modes explain the previous observation of two speeds, as well as the non-strict alternation between them.
Because most bacterial swimming in nature does not occur in smooth-walled glass enclosures under a microscope, we used an artificial, microfluidic system of obstacles to study the motion of our model organism in a structured environment. Bacteria were observed by video microscopy and cell tracking in microchannels containing cylindrical obstacles of different sizes and spacings. We analyzed turning angles, run times, and run lengths, which we compared to a minimal model for movement in structured geometries. Our findings show that hydrodynamic interactions with the walls guide the bacteria along the obstacles: compared with the statistics of a model particle that is deflected at every obstacle contact, the cells run for longer distances.
Navigation in chemical gradients is one of the main applications of motility in bacteria. We studied the swimming response of P. putida cells to chemical stimuli (chemotaxis), using the common food preservative sodium benzoate as the attractant. Using a microfluidic gradient generation device, we created gradients of varying strength and observed the motion of cells with a video microscope and subsequent cell tracking. Analysis of different motility parameters, such as run lengths and run times, shows that P. putida employs the classical chemotaxis strategy of E. coli: runs up the gradient are biased to be longer than those down the gradient. Exploiting the two different run speeds arising from the different swimming modes, we classify runs into 'fast' and 'slow' modes with a Gaussian mixture model (GMM). We find no evidence that P. putida uses its swimming modes to perform chemotaxis.
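The run classification step can be sketched with a standard two-component GMM on the run speeds; the following Python fragment is my own illustration on synthetic speeds, not the thesis's actual parameters:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Classify run speeds into 'slow' and 'fast' swimming modes, mirroring the
# roughly factor-of-two speed difference between the two modes.
rng = np.random.default_rng(1)
speeds = np.concatenate([rng.normal(25, 4, 300),    # slow mode, µm/s (illustrative)
                         rng.normal(50, 7, 300)])   # fast mode, µm/s (illustrative)

gmm = GaussianMixture(n_components=2, random_state=0).fit(speeds.reshape(-1, 1))
labels = gmm.predict(speeds.reshape(-1, 1))
fast = labels == np.argmax(gmm.means_.ravel())      # component with higher mean

print(f"mean slow speed: {speeds[~fast].mean():.1f} µm/s, "
      f"mean fast speed: {speeds[fast].mean():.1f} µm/s")
```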
In most studies of bacterial motility, cell tracking is used to gather trajectories of individual swimming cells. These trajectories then have to be decomposed into run sections and tumble sections. Several algorithms have been developed to this end, but most require the manual tuning of a number of parameters or extensive measurements with chemotaxis mutant strains. Together with our collaborators, we developed a novel motility analysis scheme based on generalized Kramers-Moyal coefficients. From the underlying stochastic model, many parameters, such as the run length, can be inferred by an optimization procedure without the need for an explicit run-and-tumble classification. The method can, however, be extended to a fully fledged tumble classifier. Using this method, we analyze E. coli chemotaxis measurements in an aspartate analog and find evidence for a chemotactic bias in the tumble angles.
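The idea of estimating Kramers-Moyal coefficients from tracking data can be sketched as binned conditional moments of the increments; this Python fragment is my own illustration on a synthetic Ornstein-Uhlenbeck speed trace, not the collaborators' implementation:

```python
import numpy as np

def km_coefficients(v, dt, n_bins=20):
    """Estimate drift D1(v) and diffusion D2(v) from a time series `v`
    sampled at interval `dt`, via binned conditional moments of increments."""
    dv = np.diff(v)
    bins = np.linspace(v.min(), v.max(), n_bins + 1)
    which = np.digitize(v[:-1], bins) - 1
    centers, D1, D2 = [], [], []
    for b in range(n_bins):
        inc = dv[which == b]
        if len(inc) < 10:                          # skip poorly sampled bins
            continue
        centers.append(0.5 * (bins[b] + bins[b + 1]))
        D1.append(inc.mean() / dt)                 # drift coefficient
        D2.append((inc**2).mean() / (2 * dt))      # diffusion coefficient
    return np.array(centers), np.array(D1), np.array(D2)

# Synthetic Ornstein-Uhlenbeck speed trace as a stand-in for tracking data:
rng, dt, n = np.random.default_rng(2), 0.01, 100_000
v = np.empty(n); v[0] = 30.0
for i in range(1, n):   # dv = -k*(v - v0)*dt + sigma*dW, with k=1, v0=30, sigma=2
    v[i] = v[i-1] - 1.0 * (v[i-1] - 30.0) * dt + 2.0 * np.sqrt(dt) * rng.normal()

centers, D1, D2 = km_coefficients(v, dt)
# D1 should be close to -(centers - 30); D2 close to sigma^2 / 2 = 2.0
```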