Various ways of preparing enantiomerically pure 2-amino[6]helicene derivatives were explored. Ni(0)-mediated cyclotrimerization of enantiopure triynes provided (M)- and (P)-7,8-bis(p-tolyl)hexahelicene-2-amine in >99% ee as well as its benzo derivative in >99% ee. The stereocontrol was found to be inefficient for a 2-aminobenzo[6]helicene congener with an embedded five-membered ring. Helically chiral imidazolium salts bearing one or two helicene moieties were synthesized and applied in enantioselective [2+2+2] cyclotrimerization catalyzed by an in situ formed Ni(0)-NHC complex. The synthesis of the first helically chiral Pd- and Ru-NHC complexes and their application in enantioselective catalysis was demonstrated. The latter showed promising results in enantioselective olefin metathesis reactions. A mechanistic proposal for asymmetric ring-closing metathesis is provided.
Synthesis of artificial building blocks for sortase-mediated ligation and their enzymatic linkage
(2018)
The enzyme Sortase A catalyzes the formation of a peptide bond between the recognition sequence LPXTG and an oligoglycine. While manifold ligations between proteins and various biomolecules, between proteins and small synthetic molecules, and between proteins and surfaces have been reported, the aim of this thesis was to investigate the sortase-catalyzed linkage of artificial building blocks. This could pave the way for the use of sortase A in chemical applications and perhaps even in materials science.
For the proof of concept, the studied systems were initially kept as simple as possible by choosing easily accessible silica nanoparticles (NPs) and commercially available polymers. These building blocks were functionalized with peptide motifs for sortase-mediated ligation. Silica nanoparticles with diameters of 60 and 200 nm were synthesized and surface-modified with C=C functionalities. Peptides bearing a terminal cysteine were then covalently linked by means of a thiol-ene reaction. The 60 nm SiO2 NPs were functionalized with pentaglycines, while peptides with the LPETG motif were linked to the 200 nm silica particles. Poly(ethylene glycol) (PEG) and poly(N-isopropylacrylamide) (PNIPAM) were likewise functionalized with peptides by a thiol-ene reaction between cysteine residues and C=C units in the polymer end groups, yielding G5-PEG and PNIPAM-LPETG conjugates. With this set of building blocks, NP–polymer hybrids as well as NP–NP and polymer–polymer structures were generated by sortase-mediated ligation, and product formation was shown by transmission electron microscopy, MALDI-ToF mass spectrometry and dynamic light scattering, among other methods. Thus, the linkage of these artificial building blocks by the enzyme sortase A could be demonstrated.
However, when using commercially available polymers, the purification of the polymer–peptide conjugates was not possible and resulted in a mixture containing unmodified polymer. Therefore, strategies were developed for the in-house synthesis of pure peptide–polymer and polymer–peptide conjugates as building blocks for sortase-mediated ligation. The designed routes are based on preparing the polymer blocks via RAFT polymerization from chain transfer agents (CTAs) attached to the N- or C-terminus of a peptide. GG-PNIPAM was synthesized by attaching a suitable RAFT CTA to Fmoc-GG in an esterification reaction, followed by polymerization of NIPAM and cleavage of the Fmoc protecting group. Furthermore, several peptides were synthesized by solid-phase peptide synthesis. The linkage of a RAFT CTA (or polymerization initiator) to the N-terminus of a peptide can be conducted in an automated fashion as the last step in a peptide synthesizer. The synthesis of such a conjugate could not be realized within the time frame of this thesis, but several promising routes exist to pursue this approach with different coupling reagents. Such polymer building blocks can be used to synthesize protein–polymer conjugates catalyzed by sortase A, and the approach can be extended to the synthesis of block copolymers by using polymer blocks with peptide motifs on both ends.
Although the proof of concept demonstrated in this thesis only covers examples that can also be synthesized by purely chemical techniques, a toolbox of such building blocks will enable the future formation of new materials and pave the way for the application of enzymes in materials science. In addition to nanoparticle systems and block copolymers, this also includes combinations with protein-based building blocks to form hybrid materials. Hence, sortase could become an enzymatic tool that complements established chemical linking technologies and provides specific peptide motifs that are orthogonal to all existing chemical functional groups.
Synchrotron-based angle-resolved time-of-flight electron spectroscopy for dynamics in dichalcogenides
(2018)
In the present work, we use symbolic regression for automated modeling of dynamical systems. Symbolic regression is a powerful and general method suitable for data-driven identification of mathematical expressions. In particular, the structure and parameters of those expressions are identified simultaneously.
We consider two main variants of symbolic regression: sparse regression-based and genetic programming-based symbolic regression. Both are applied to identification, prediction and control of dynamical systems.
We introduce a new methodology for the data-driven identification of nonlinear dynamics for systems undergoing abrupt changes. Building on a sparse regression algorithm derived earlier, the model after the change is defined as a minimum update with respect to a reference model of the system identified prior to the change. The technique is successfully exemplified on the chaotic Lorenz system and the van der Pol oscillator. Issues such as computational complexity, robustness against noise and requirements with respect to data volume are investigated.
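The sparse-regression variant can be illustrated with a minimal sketch (illustrative code under simplifying assumptions, not the thesis implementation): a library of candidate terms is evaluated on state measurements, and a sequentially thresholded least-squares fit keeps only the few terms that describe the dynamics, here recovering the Lorenz equations from synthetic data with exact derivatives.

```python
import numpy as np

def sparse_identify(X, dX, library, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: fit dX = Theta(X) @ Xi,
    repeatedly zeroing small coefficients and refitting the rest."""
    Theta = np.column_stack([f(X) for f in library])
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dX.shape[1]):  # refit the surviving terms per equation
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
    return Xi

# Candidate library: constant, linear and quadratic monomials in (x, y, z).
names = ["1", "x", "y", "z", "xy", "xz", "yz"]
library = [
    lambda X: np.ones(len(X)),
    lambda X: X[:, 0], lambda X: X[:, 1], lambda X: X[:, 2],
    lambda X: X[:, 0] * X[:, 1], lambda X: X[:, 0] * X[:, 2],
    lambda X: X[:, 1] * X[:, 2],
]

# Synthetic Lorenz data: exact right-hand sides evaluated at random states.
rng = np.random.default_rng(0)
X = rng.uniform(-20.0, 20.0, size=(2000, 3))
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dX = np.column_stack([
    sigma * (X[:, 1] - X[:, 0]),
    X[:, 0] * (rho - X[:, 2]) - X[:, 1],
    X[:, 0] * X[:, 1] - beta * X[:, 2],
])

Xi = sparse_identify(X, dX, library)
for k, var in enumerate("xyz"):
    terms = [f"{Xi[i, k]:+.2f}*{names[i]}" for i in range(len(names)) if Xi[i, k]]
    print(f"d{var}/dt =", " ".join(terms))
```

With noisy derivatives, the threshold trades sparsity against fit quality, which is precisely the robustness and data-volume question investigated here.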
We show how symbolic regression can be used for time series prediction. Again, issues such as robustness against noise and convergence rate are investigated using the harmonic oscillator as a toy problem. In combination with embedding, we demonstrate the prediction of a propagating front in coupled FitzHugh-Nagumo oscillators. Additionally, we show how we can enhance numerical weather predictions to commercially forecast power production of green energy power plants.
We employ symbolic regression for synchronization control in coupled van der Pol oscillators. Different coupling topologies are investigated. We address issues such as plausibility and stability of the control laws found. The toolkit has been made open source and is used in turbulence control applications.
Genetic programming-based symbolic regression is very versatile and can be adapted to many optimization problems. The heuristic algorithm allows for cost-efficient optimization of complex tasks.
We emphasize the ability of symbolic regression to yield white-box models. In contrast to black-box models, such models are accessible and interpretable, which allows the use of established tool chains.
The utilization of lignin as a renewable electrode material for electrochemical energy storage is a sustainable approach for future batteries and supercapacitors. A composite electrode was fabricated from Kraft lignin and conductive carbon, and the charge storage contribution was determined in terms of electrical double layer (EDL) and redox reactions. The important factors for achieving a high faradaic charge storage capacity are a high surface area, the accessibility of redox sites in lignin, and their interaction with conductive additives. A thinner layer of lignin covering the high surface area of the carbon facilitates the electron transfer process with a shorter pathway from the active sites of the nonconductive lignin to the current collector, leading to an improved faradaic charge storage capacity.
Composite electrodes from lignin and carbon would be even more sustainable if the fluorinated binder could be omitted. A new route to fabricate a binder-free composite electrode from Kraft lignin and high-surface-area carbon is proposed, based on crosslinking lignin with glyoxal. A high molecular weight of the lignin is obtained, enhancing both its electroactivity and its binding capability in composite electrodes. The order of the crosslinking step in the electrode processing plays a crucial role in achieving a stable electrode and a high charge storage capacity. The crosslinked lignin-based electrodes are promising since they allow for more stable, sustainable, halogen-free and environmentally benign devices for energy storage applications. Furthermore, increasing the amount of redox-active groups (quinone groups) in lignin is useful to enhance the capacity in lithium battery applications. Direct oxidative demethylation by cerium ammonium nitrate was carried out under mild conditions, showing that an increase in quinone groups can enhance the performance of lithium batteries. Thus, lignin is a promising material and could be a good candidate for application in sustainable energy storage devices.
Numbers are omnipresent in daily life. They vary in display format and in their meaning, so that it does not seem self-evident that our brains process them more or less easily and flexibly. The present thesis addresses mental number representations in general, and specifically the impact of finger counting on mental number representations. Finger postures that result from finger counting experience are one of many ways to convey numerical information. They are, however, probably the one where the numerical content becomes most tangible. By investigating the role of fingers in adults' mental number representations, the four presented studies also tested the Embodied Cognition hypothesis, which predicts that bodily experience (e.g., finger counting) during concept acquisition (e.g., number concepts) remains an immanent part of these concepts. The studies focussed on different aspects of finger counting experience. First, the consistency and further details of spontaneously used finger configurations were investigated when participants repeatedly produced finger postures according to specific numbers (Study 1). Furthermore, finger counting postures (Study 2), different finger configurations (Studies 2 and 4), finger movements (Study 3), and tactile finger perception (Study 4) were investigated regarding their capability to affect number processing. Results indicated that active production of finger counting postures and single finger movements, as well as passive perception of tactile stimulation of specific fingers, co-activated associated number knowledge and facilitated responses towards corresponding magnitudes and number symbols. Overall, finger counting experience was reflected in specific effects in the mental number processing of adult participants. This indicates that finger counting experience is an immanent part of mental number representations.
Findings are discussed in the light of a novel model. The MASC (Model of Analogue and Symbolic Codes) combines and extends two established models of number and magnitude processing. In particular, a symbolic motor code is introduced as an essential part of the model. It comprises canonical finger postures (i.e., postures that are habitually used to represent numbers) and finger-number associations. The present findings indicate that finger counting functions both as a sensorimotor magnitude and as a symbolic representational format, and that it thereby directly mediates between physical and symbolic size. The implications are relevant both for basic research on mental number representations and for pedagogic practices regarding the effectiveness of finger counting as a means to acquire a fundamental grasp of numbers.
Active and passive source data from two seismic experiments within the interdisciplinary project TIPTEQ (from The Incoming Plate to mega-Thrust EarthQuake processes) were used to image and identify the structural and petrophysical properties (such as P- and S-velocities, Poisson's ratios, pore pressure, density and amount of fluids) within the Chilean seismogenic coupling zone at 38.25°S, where in 1960 the largest earthquake ever recorded (Mw 9.5) occurred. Two S-wave velocity models calculated using traveltime and noise tomography techniques were merged with an existing velocity model to obtain a 2D S-wave velocity model that combines the advantages of each individual model. In a following step, P- and S-reflectivity images of the subduction zone were obtained using different pre-stack and post-stack depth migration techniques. Among them, the recent pre-stack line-drawing depth migration scheme yielded revealing results. Next, synthetic seismograms modelled using the reflectivity method allowed the composition and rocks within the subduction zone to be inferred from their input 1D synthetic P- and S-velocities. Finally, an image of the subduction zone is given by jointly interpreting the results from this work with results from other studies. The Chilean seismogenic coupling zone at 38.25°S shows a continental crust with highly reflective horizontal as well as (steeply) dipping events. Among them, the Lanalhue Fault Zone (LFZ), which is interpreted to be east-dipping, is imaged to very shallow depths. Some steep reflectors are observed for the first time, for example one near the coast, related to high seismicity, and another one near the LFZ. Steep shallow reflectivity towards the volcanic arc could be related to a steep west-dipping reflector interpreted as fluids and/or melts migrating upwards due to material recycling in the continental mantle wedge. The high resolution of the S-velocity model in the first kilometres allowed the identification of several sedimentary basins, characterized by very low P- and S-velocities, high Poisson's ratios and possibly steep reflectivity. Such high Poisson's ratios are also observed within the oceanic crust, which reaches the seismogenic zone hydrated due to bending-related faulting. It is interpreted to release water until reaching the coast and under the continental mantle wedge. In terms of seismic velocities, the inferred composition and rocks in the continental crust are in agreement with field geology observations at the surface along the profile. Furthermore, there is no need to invoke the existence of measurable amounts of present-day fluids above the plate interface in the continental crust of the Coastal Cordillera and the Central Valley in this part of the Chilean convergent margin. A large-scale anisotropy in the continental crust and upper mantle, previously proposed from magnetotelluric studies, is also proposed from seismic velocities. However, quantitative studies on this topic in the continental crust of the Chilean seismogenic zone at 38.25°S do not exist to date.
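For reference, the Poisson's ratios mentioned here follow directly from the P- and S-wave velocities through the standard elastic relation (a textbook identity, not specific to this work):

\[ \nu = \frac{V_P^2 - 2V_S^2}{2\left(V_P^2 - V_S^2\right)} \]

so elevated \(V_P/V_S\) ratios, as found for the sedimentary basins and the hydrated oceanic crust, translate directly into the high Poisson's ratios reported above.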
Amorphous calcium carbonate (ACC) is a widespread biological material found in many organisms, such as sea urchins and mollusks, where it either serves as a precursor phase for the crystalline biominerals or is stabilized and used in the amorphous state. As ACC readily crystallizes, stabilizers such as anions, cations or macromolecules are often present to avoid or delay unwanted crystallization. Furthermore, additives often control the properties of the material to suit the specific function needed by the organism, for example cystoliths in leaves that scatter light to optimize energy uptake from the sun, or calcite/aragonite crystals used in the protective shells of mussels and gastropods. The lifetime of the amorphous phase is controlled by its kinetic stability against crystallization. This has often been linked to water, which plays a role in the mobility of ions and hence in the probability of forming crystalline nuclei that initiate crystallization. However, it is unclear how the water molecules are incorporated within the amorphous phase: as liquid confined in pores, as structural water binding to the ions, or as a mixture of both. It is also unclear how this is perturbed by additives, especially Mg2+, one of the most common additives found in biogenic samples. Mg2+ is expected to have a strong influence on the water incorporated into ACC, given the high energy barrier to dehydration of magnesium ions compared to calcium ions in solution.
During the last 10-15 years, there has been a large effort to understand the local environment of the ions/molecules and how this affects the properties of the amorphous phase, but only a few aspects of the structure have so far been well described in the literature. The reason for this lies partly in the low stability of ACC when exposed to air, where it tends to crystallize within minutes, and in the limited quantities of ACC produced by traditional synthesis routes. A further obstacle has been the difficulty of modeling the local structure based on experimental data. To solve the problems of stability and sample size, a few studies have used stabilizers such as Mg2+ or OH- and severely dehydrated the samples so as to stabilize the amorphous state, allowing combined neutron and X-ray analysis to be performed. However, so far, a clear description of the local environments of the water present in the structure has not been reported.
In this study we show that ACC can be synthesized without any stabilizing additives in the quantities necessary for neutron measurements, and that accurate models can be derived with the help of empirical potential structure refinement. These analyses show that there is a wide range of local environments for all of the components in the system, suggesting that the amorphous phase is highly inhomogeneous, without any phase separation between ions and water. We also show that the water in ACC is mainly structural and that there is no confined or liquid-like water present in the system. Analysis of amorphous magnesium carbonate further shows that there is a large difference in the local structure of the two cations and that Mg2+ surprisingly interacts with significantly fewer water molecules than Ca2+, despite its higher dehydration energy. All in all, this shows that water molecules act as a structural component of ACC whose strong binding to cations and anions probably retards or prevents the crystallization of the amorphous phase.
The interaction between surfaces displaying end-grafted hydrophilic polymer brushes plays important roles in biology and in many wet-technological applications. The outer surfaces of Gram-negative bacteria, for example, are composed of lipopolysaccharide (LPS) molecules exposing oligo- and polysaccharides to the aqueous environment. This unique, structurally complex biological interface is of great scientific interest as it mediates the interaction of bacteria with neighboring bacteria in colonies and biofilms. The interaction between polymer-decorated surfaces is generally coupled to the distance-dependent conformation of the polymer chains. Therefore, structural insight into the interacting surfaces is a prerequisite to understand the interaction characteristics as well as the underlying physical mechanisms. This problem has been addressed by theory, but accurate experimental data on polymer conformations under confinement are rare, because obtaining perturbation-free structural insight into buried soft interfaces is inherently difficult.
In this thesis, lipid membrane surfaces decorated with hydrophilic polymers of technological and biological relevance are investigated under controlled interaction conditions, i.e., at defined surface separations. For this purpose, dedicated sample architectures and experimental tools are developed. Via ellipsometry and neutron reflectometry, pressure–distance curves and distance-dependent polymer conformations, in terms of brush compression and mutual interpenetration, are determined. Additional element-specific structural insight into the end-point distribution of interacting brushes is obtained by standing-wave X-ray fluorescence (SWXF).
The methodology is first established for poly(ethylene glycol) (PEG) brushes of defined length and grafting density. For this system, neutron reflectometry revealed pronounced brush interpenetration, which is not captured in common brush theories and therefore motivates rigorous simulation-based treatments. In a second step the same approach is applied to realistic mimics of the outer surfaces of Gram-negative bacteria: monolayers of wild-type LPSs extracted from E. coli O55:B5 displaying strain-specific O-side chains. The neutron reflectometry experiments yield unprecedented structural insight into bacterial interactions, which are of great relevance for the properties of biofilms.
The author develops a model of strategic and operational management in corporate legal departments, focusing on practical insights built on business and economic theories as well as numerous expert interviews. Particular attention is paid to the specifics of legal services compared with other services required within a corporate group. The perspective of the head of a legal department / General Counsel stands in the foreground. The question examined is how this person can carry out the leadership and steering of the entire department most efficiently and effectively.
Spectroscopy at the limit
(2018)
The question of the cohesion of an entire society is one of the central questions of the social sciences and sociology. Since the transition to modernity, the problem of the cohesion of differentiating societies has been the subject of scientific and public discourse. In the present study, social integration represents a form of successful socialization that is articulated in the reproduction of symbolic and non-symbolic resources. The result of this reproduction is pluralistic forms of socialization which, with regard to political preferences, give rise to conflicting interests. These preferences find expression in different forms, intensities and perceptions of political participation. Since modern political rule, owing to its legal and institutional resources, can exert a significant influence on social reproduction (e.g., through social policy), the direct influencing of political decisions, as the articulation of the different preferences emerging from cleavages, is the only legitimate means of redistributing resources at the political level. This makes the connection between integration and political participation visible. Members who are well integrated into society are, thanks to broad participation in reproduction processes, able to recognize their own interests and to express them through political activities. The empirical findings convey the impression that democratic conflict in modern society is no longer shaped directly by class membership and class interests, but rather by access to and the availability of symbolic and non-symbolic resources. Consequently, the research question of the present work is whether integrated societies are politically more active.
The research question is examined using aggregate data from democratically constituted political systems that are considered established democracies and exhibit welfare-state measures of varying scope. The hypotheses were tested empirically using bivariate and multivariate regression analyses. The tested hypotheses can be summarized in a single hypothesis: the stronger the social integration of a society, the greater its conventional and unconventional political participation. In general terms, the social integration of a society has positive effects on the frequency of political participation within that society. More strongly integrated societies are politically more active, regardless of the form (conventional or unconventional) of political participation. The direct effect of society-wide integration is stronger on conventional forms than on unconventional ones. This statement holds only if elements of the electoral system, such as proportional representation, and GDP are not taken into account. Based on the results with control variables, the data permit the macro-level conclusion that, in addition to a high level of social integration, an electoral system characterized by participation and a high level of economic development are conducive to a high level of political participation.
Solar activity and its consequences affect space weather and Earth's climate. Solar activity exhibits a cyclic behaviour with a period of about 11 years. The properties of the solar cycle are governed by the dynamo taking place in the interior of the Sun, and they are distinctive from cycle to cycle. Extending the knowledge of solar cycle properties into the past is essential for understanding the solar dynamo and for forecasting space weather, and it can be acquired through the analysis of historical sunspot drawings. Sunspots are dark areas on the solar surface associated with strong magnetic fields. They are the oldest and longest-running observed features of solar activity.
One of the longest available records of sunspot drawings is the collection made by Samuel Heinrich Schwabe during 1825–1867. The sunspot sizes measured from the digitized Schwabe drawings are not to scale and need to be converted into physical sunspot areas. We employed a statistical approach assuming that the area distribution of sunspots was the same in the 19th century as in the 20th century. Umbral areas for about 130,000 sunspots observed by Schwabe were obtained. The annually averaged sunspot areas correlate reasonably well with the sunspot number. Tilt angles and polarity separations of sunspot groups were calculated assuming the groups to be bipolar; there is, of course, no polarity information in the observations. We derived an average tilt angle by attempting to exclude unipolar groups, requiring a minimum separation of the two surmised polarities, and applying an outlier-rejection method that follows the evolution of each group and detects the moment when it turns unipolar as it decays. As a result, the tilt angles, although displaying considerable natural scatter, are on average 5.85° ± 0.25°, with the leading polarity located closer to the equator, in good agreement with tilt angles obtained from 20th-century data sets. Sources of uncertainty in the tilt angle determination are discussed and need to be addressed whenever different data sets are combined.
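For orientation, the tilt angle \(\gamma\) of a bipolar group is commonly computed from the heliographic positions of the two polarity patches; a standard convention (assumed here for illustration, since the abstract does not spell out the formula) is

\[ \tan\gamma = \frac{\Delta\lambda}{\Delta\varphi\,\cos\bar{\lambda}} \]

where \(\Delta\lambda\) and \(\Delta\varphi\) are the latitude and longitude differences between the surmised polarities and the factor \(\cos\bar{\lambda}\), with \(\bar{\lambda}\) the mean latitude, converts the longitude difference into a physical east-west separation.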
Digital images of observations printed in the books Rosa Ursina and Prodromus pro sole mobili by Christoph Scheiner, as well as the drawings from Scheiner's letters to Marcus Welser, are analyzed to obtain information on the positions and sizes of sunspots that appeared before the Maunder minimum. In most cases, the given orientation of the ecliptic is used to set up the heliographic coordinate system for the drawings. Positions and sizes are measured manually by displaying the drawings on a computer screen. Very early drawings have no indication of the solar orientation. A rotational matching using common spots on adjacent days is used in some cases, while in other cases the most likely assumption is that the images were aligned with a zenith–horizon coordinate system. In total, 8167 sunspots were measured. A distribution of sunspot latitudes versus time (butterfly diagram) is obtained for Scheiner's observations. The observations of 1611 are very inaccurate, but the drawings of 1612 have at least an indication of the solar orientation, while the remaining spot positions from 1618–1631 have good to very good accuracy. We also computed 697 tilt angles of apparently bipolar sunspot groups observed in the period 1618–1631. We find that the average tilt angle of nearly 4° does not differ significantly from the 20th-century values.
The solar cycle properties seem to be related to the tilt angles of sunspot groups, which are also an important parameter in surface flux transport models. The tilt angles of bipolar sunspot groups from various historical sets of solar drawings, including those by Schwabe and Scheiner, are analyzed. Data by Scheiner, Hevelius, Staudacher, Zucconi, Schwabe, and Spörer deliver a series of average tilt angles spanning a period of 270 years, in addition to previously found values for 20th-century data obtained by other authors. We find that the average tilt angles before the Maunder minimum were not significantly different from modern values. However, the average tilt angles of the 50-year period after the Maunder minimum, namely for cycles 0 and 1, were much lower and near zero. The typical tilt angles before the Maunder minimum suggest that abnormally low tilt angles were not responsible for driving the solar cycle into a grand minimum.
With the Schwabe (1826–1867) and Spörer (1866–1880) sunspot data, the butterfly diagram of sunspot groups extends back to 1826. A recently developed method, which separates the wings of the butterfly diagram based on the long gaps in sunspot group occurrences at different latitudinal bands, is used to separate the wings. The cycle-to-cycle variation in the start (F), end (L), and highest (H) latitudes of the wings with respect to the strength of the wings is analyzed. On the whole, the wings of stronger cycles tend to start at higher latitudes and have a greater extent. The time spans of the wings and the time difference between the wings in the northern hemisphere display a quasi-periodicity of 5–6 cycles. The average wing overlap is zero in the southern hemisphere, whereas it is 2–3 months in the north. A marginally significant oscillation of about 10 solar cycles is found in the asymmetry of the L latitudes. This latest, extended database of butterfly wings provides new observational constraints on the spatio-temporal distribution of sunspot occurrences over the solar cycle for solar dynamo models.
Signals stored in sediment
(2018)
Tectonic and climatic boundary conditions determine the amount and the characteristics (size distribution and composition) of sediment that is generated and exported from mountain regions. On millennial timescales, rivers adjust their morphology such that the incoming sediment (Qs,in) can be transported downstream by the available water discharge (Qw). Changes in climatic and tectonic boundary conditions thus trigger an adjustment of the downstream river morphology. Understanding the sensitivity of river morphology to perturbations in boundary conditions is therefore of major importance, for example, for flood assessments, infrastructure and habitats. Although we have a general understanding of how rivers evolve over longer timescales, the prediction of channel response to changes in boundary conditions on a more local scale and over shorter timescales remains a major challenge. To better predict morphological channel evolution, we need to test (i) how channels respond to perturbations in boundary conditions and (ii) how signals reflecting the persisting conditions are preserved in sediment characteristics. This information can then be applied to reconstruct how local river systems have evolved over time.
In this thesis, I address those questions by combining targeted field data collection in the Quebrada del Toro (Southern Central Andes of NW Argentina) with cosmogenic nuclide analysis and remote sensing data. In particular, I (1) investigate how information on hillslope processes is preserved in the 10Be concentration (geochemical composition) of fluvial sediments and how those signals are altered during downstream transport. I complement the field-based approach with physical experiments in the laboratory, in which I (2) explore how changes in sediment supply (Qs,in) or water discharge (Qw) generate distinct signals in the amount of sediment discharge at the basin outlet (Qs,out). With the same set of experiments, I (3) study the adjustments of alluvial channel morphology to changes in Qw and Qs,in, with a particular focus in fill-terrace formation. I transfer the findings from the experiments to the field to (4) reconstruct the evolution of a several-hundred meter thick fluvial fill-terrace sequence in the Quebrada del Toro. I create a detailed terrace chronology and perform reconstructions of paleo-Qs and Qw from the terrace deposits. In the following paragraphs, I summarize my findings on each of these four topics.
First, I sampled detrital sediment at the outlet of tributaries and along the main stem in the Quebrada del Toro, analyzed their 10Be concentrations ([10Be]) and compared the data to a detailed hillslope-process inventory. The often observed non-linear increase in catchment-mean denudation rate (inferred from [10Be] in fluvial sediment) with catchment-median slope, which has commonly been explained by an adjustment in landslide frequency, coincided with a shift in the main type of hillslope processes. In addition, the [10Be] in fluvial sediments varied with grain size. I defined the normalized sand-gravel-index (NSGI) as the 10Be-concentration difference between sand and gravel fractions divided by their summed concentrations. The NSGI increased with median catchment slope and coincided with a shift in the prevailing hillslope processes active in the catchments, thus making the NSGI a potential proxy for the evolution of hillslope processes over time from sedimentary deposits. However, the NSGI recorded hillslope processes less well in regions of reduced hillslope-channel connectivity and, in addition, has the potential to be altered during downstream transport due to lateral sediment input, size-selective sediment transport and abrasion.
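Written out, the index defined above is

\[ \mathrm{NSGI} = \frac{[^{10}\mathrm{Be}]_{\mathrm{sand}} - [^{10}\mathrm{Be}]_{\mathrm{gravel}}}{[^{10}\mathrm{Be}]_{\mathrm{sand}} + [^{10}\mathrm{Be}]_{\mathrm{gravel}}} \]

which is bounded between -1 and 1 and is positive when the sand fraction carries the higher 10Be concentration.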
Second, my physical experiments revealed that sediment discharge at the basin outlet (Qs,out) varied in response to changes in Qs,in or Qw. While changes in Qw caused a distinct signal in Qs,out during the transient adjustment phase of the channel to new boundary conditions, signals related to changes in Qs,in were buffered during the transient phase and likely only become apparent once the channel is adjusted to the new conditions. The temporal buffering is related to the negative feedback between Qs,in and channel-slope adjustments. In addition, I inferred from this result that signals extracted from the geochemical composition of sediments (e.g., [10Be]) are more likely to represent modern-day conditions during times of aggradation, whereas the signal will be temporally buffered due to mixing with older, remobilized sediment during times of channel incision.
Third, the same set of experiments revealed that river incision, channel-width narrowing and terrace cutting were initiated by either an increase in Qw, a decrease in Qs,in or a drop in base level. The lag-time between the external perturbation and the terrace cutting determined (1) how well terrace surfaces preserved the channel profile prior to perturbation and (2) the degree of reworking of terrace-surface material. Short lag-times and well preserved profiles occurred in cases with a rapid onset of incision. Also, lag-times were synchronous along the entire channel after upstream perturbations (Qw, Qs,in), whereas base-level fall triggered an upstream migrating knickzone, such that lag-times increased with distance upstream. Terraces formed after upstream perturbations (Qw, Qs,in) were always steeper when compared to the active channel in new equilibrium conditions. In the base-level fall experiment, the slope of the terrace-surfaces and the modern channel were similar. Hence, slope comparisons between the terrace surface and the modern channel can give insights into the mechanism of terrace formation.
Fourth, my detailed terrace-formation chronology indicated that cut-and-fill episodes in the Quebrada del Toro followed a ~100-kyr cyclicity, with the oldest terraces ~500 kyr old. The terraces were formed due to variability in upstream Qw and Qs. Reconstructions of paleo-Qs over the last 500 kyr, which were restricted to times of sediment deposition, indicated only minor (up to four-fold) variations in paleo-denudation rates. Reconstructions of paleo-Qw were limited to the times around the onset of river incision and revealed discharge enhanced by 10 to 85% compared to today. Such increases in Qw are in agreement with other quantitative paleo-hydrological reconstructions from the Eastern Andes, but have the advantage of dating further back in time.
Shaping via binding
(2018)
The ongoing trend of miniaturizing multifunctional devices, especially for minimally invasive medical or sensor applications, demands new strategies for designing the required functional polymeric micro-components or micro-devices. Polymers capable of active movement when an external stimulus is applied (e.g., shape-memory polymers) are intensively discussed as promising material candidates for the realization of multifunctional micro-components. In this context, further research activities are needed to gain a better knowledge of the underlying working principles for functionalizing polymeric micro-scale objects with a shape-memory effect. First reports about electrospun solid microfiber scaffolds demonstrated a much more pronounced shape-memory effect than in their bulk counterparts, indicating the high potential of electrospun micro-objects.
Based on these initial findings, this thesis aimed to explore whether altering the geometry of micro-scale electrospun polymeric objects can serve as a suitable parameter to tailor their shape-memory properties. The central hypothesis was that different geometries should result in different degrees of macromolecular chain orientation in the polymeric micro-scale objects, which will influence their mechanical properties as well as their thermally-induced shape-memory function. As electrospun micro-scale objects, microfiber scaffolds composed of hollow microfibers with different wall thicknesses, electrosprayed microparticles, and their magneto-sensitive nanocomposites were investigated, all prepared from the same polymer exhibiting pronounced bulk shape-memory properties. For this work a thermoplastic multiblock copolymer, named PDC, with excellent bulk shape-memory properties associated with crystallizable oligo(ε-caprolactone) (OCL) switching domains was chosen for the preparation of the electrospun micro-scale objects, while crystallizable oligo(p-dioxanone) (OPDO) segments serve as hard domains in PDC.
In the first part of the thesis, microfiber scaffolds with different microfiber geometries (solid, or hollow with different wall thicknesses) were discussed. Hollow-microfiber-based PDC scaffolds were prepared by coaxial electrospinning from a 1,1,1,3,3,3-hexafluoro-2-propanol (HFP) solution with a polymer concentration of 13% w·v-1. As a first step, core-shell fiber scaffolds consisting of microfibers with a PDC shell and a sacrificial poly(ethylene glycol) (PEG) core were generated. The hollow PDC microfibers were obtained after dissolving the PEG core with water. The use of a fixed electrospinning setup and the same polymer concentration of the PDC spinning solution ensured the fabrication of microfibers with almost identical outer diameters of 1.4 ± 0.3 µm, as determined by scanning electron microscopy (SEM). Different hollow-microfiber wall thicknesses of 0.5 ± 0.2 and 0.3 ± 0.2 µm (analyzed by SEM) were realized by varying the mass flow rate, while solid microfibers were obtained by coaxial electrospinning without supplying any core solution. Differential scanning calorimetry experiments and tensile tests at ambient temperature revealed an increase in the degree of OCL crystallinity from χc,OCL = 34 ± 1% to 43 ± 1% and a decrease in elongation at break from 800 ± 40% to 200 ± 50%, associated with an increase in Young's modulus and failure stress, for PDC hollow-microfiber scaffolds compared with solid fibers. The observed effects were enhanced with decreasing wall thickness of the single hollow fibers. The shape-memory properties of the electrospun PDC scaffolds were quantified by cyclic thermomechanical tensile tests. Here, scaffolds comprising hollow microfibers exhibited lower shape fixity ratios around Rf = 82 ± 1% and higher shape recovery ratios of Rr = 67 ± 1%, associated with more pronounced relaxation at constant strain during the first test cycle, and a lower switching temperature of Tsw = 33 ± 1 °C than the fibrous meshes consisting of solid microfibers. These findings strongly support the central hypothesis that different fiber geometries (solid, or hollow with different wall thicknesses) in electrospun scaffolds result in different degrees of macromolecular chain orientation in the polymeric micro-scale objects, which can be applied as a design parameter for tailoring their mechanical and shape-memory properties.
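The fixity and recovery ratios quoted here follow the usual shape-memory conventions (standard definitions, assumed rather than restated in the abstract): with \(\varepsilon_m\) the programming strain, \(\varepsilon_u(N)\) the fixed strain after unloading in cycle \(N\), and \(\varepsilon_p(N)\) the residual strain after recovery,

\[ R_f(N) = \frac{\varepsilon_u(N)}{\varepsilon_m}, \qquad R_r(N) = \frac{\varepsilon_m - \varepsilon_p(N)}{\varepsilon_m - \varepsilon_p(N-1)} \]

so that \(R_f\) measures how well the temporary shape is fixed and \(R_r\) how completely the original shape is recovered.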
The second part of the thesis deals with electrosprayed particulate PDC micro-scale objects. Almost spherical PDC microparticles with diameters of 3.9 ± 0.9 μm (as determined by SEM) were obtained by electrospraying an HFP solution with a polymer concentration of 2% w·v-1. In contrast, smaller particles with sizes of 400 ± 100 nm or 1.2 ± 0.3 μm were obtained for the magneto-sensitive composite PDC microparticles containing 23 ± 0.5 wt% superparamagnetic magnetite nanoparticles (mNPs). All prepared PDC microparticles exhibited an overall crystallinity similar to that of the PDC bulk material, as analyzed by DSC. AFM nanoindentation revealed no influence of the nanofiller incorporation on the local mechanical properties, represented by the reduced modulus, determined for pure PDC microparticles and magneto-sensitive composite PDC microparticles with similar diameters around 1.3 µm. It was found that the reduced modulus of the nanocomposite microparticles increased substantially with decreasing particle size, from 2.4 ± 0.9 GPa (1.2 µm) to 11.9 ± 3.1 GPa (0.4 µm), which can be related to a higher orientation of the macromolecules at the surface of smaller microparticles. The magneto-sensitivity of such nanocomposite microparticles was demonstrated in two ways: first, by attracting/collecting the composite micro-objects with an external permanent magnet; second, by inductive heating to 44 ± 1 °C, well above the melting transition of the OCL switching domains, when compacted to a 10 x 10 mm2 film with a thickness of 10 µm and exposed to an alternating magnetic field with a magnetic field strength of 30 kA·m-1. Both functions are of great relevance for designing next-generation drug delivery systems combining targeting and on-demand release.
By a compression approach, shape-memory functionalization of individual microparticles could be realized. Here, different programming pressures and compression temperatures were applied. The shape-recovery capability of the programmed PDC microparticles was quantified by online and offline heating experiments analyzed via microscopy. The obtained shape-memory properties were found to depend strongly on the applied programming pressure and temperature. The best shape-memory performance, with a high shape recovery rate of about Rr = 80 ± 1%, was obtained when a low pressure of 0.2 MPa was applied at 55 °C. Finally, it was demonstrated that PDC microparticles can be utilized as micro-scale building blocks for the preparation of a macroscopic film with temporary stability, by compression of a densely packed array of PDC microparticles at 60 °C followed by cooling to ambient temperature. This film disintegrates into individual microparticles upon heating to 60 °C. Based on this technology, the design of stable macroscopic release systems can be envisioned, which can be easily fixed at the site of treatment (i.e., by suturing) and disintegrate on demand into microparticles, facilitating drug release.
In summary, the results of this thesis could confirm the central hypothesis that the variation of the geometry of polymeric micro-objects is a suitable parameter to adjust their shape-memory performance by changing the degree of macromolecular chain orientation in the specimens or by enabling new functions like on demand disintegration. These fundamental findings might be relevant for designing novel miniaturized multifunctional polymer-based devices.
Deoxyribonucleic acid (DNA) is the carrier of human genetic information and is exposed every day to environmental influences such as the ultraviolet (UV) fraction of sunlight. The photostability of DNA against UV light is astonishing: even though the DNA bases have a strong absorption maximum at around 260 nm/4.77 eV, their quantum yield of photoproducts remains very low [1]. If the photon energies exceed the ionization energy (IE) of the nucleobases (∼8-9 eV) [2], the DNA can be severely damaged. Photoexcitation and photoionization reactions occur, which can induce strand breaks in the DNA. The efficiency of the excitation- and ionization-induced strand breaks in the target DNA sequences is represented by cross sections. If Si is used as a substrate material in the VUV irradiation experiments, secondary electrons with energies below 3.6 eV are generated from the substrate. These low-energy electrons (LEEs) are known to induce dissociative electron attachment (DEA) in DNA, and with it DNA strand breakage, very efficiently. LEEs play an important role in cancer radiation therapy, since they are generated secondarily along the radiation track of ionizing radiation.
In the framework of this thesis, different single-stranded DNA sequences were irradiated with 8.44 eV vacuum UV (VUV) light and cross sections for single strand breaks (SSBs) were determined. Several sequences were also exposed to secondary LEEs, which additionally contributed to the SSBs. First, the cross sections for SSBs were determined depending on the type of nucleobase. Both types of DNA sequences, mono-nucleobase and mixed sequences, showed very similar results upon VUV radiation. In contrast, the additional influence of secondarily generated LEEs resulted in a clear trend in the SSB cross sections: the polythymine sequence had the highest cross section for SSBs, which can be explained by strong anionic resonances in this energy range. Furthermore, SSB cross sections were determined as a function of sequence length. The strand breaks increased to the same extent as the geometrical cross section. The longest DNA sequence investigated in this series (20 nucleotides), however, showed smaller SSB cross sections, which can be explained by conformational changes in the DNA. Moreover, several DNA sequences that included the radiosensitizers 5-bromouracil (5BrU) and 8-bromoadenine (8BrA) were investigated and the corresponding SSB cross sections were determined. It was shown that 5BrU reacts very strongly to VUV radiation, leading to high strand-break yields, which in turn showed a strong sequence dependency. 8BrA, on the other hand, showed no sensitization to the applied VUV radiation, since almost no increase in strand-break yield was observed in comparison to non-modified DNA sequences.
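Cross sections of this kind are typically obtained from the exponential decrease of the intact-strand fraction with accumulated photon fluence (a standard dose-response analysis; the abstract does not detail the fitting procedure):

\[ \frac{N(F)}{N_0} = e^{-\sigma_{\mathrm{SSB}} F} \]

where \(N/N_0\) is the fraction of undamaged strands, \(F\) the fluence in photons per unit area, and the fitted \(\sigma_{\mathrm{SSB}}\) an effective area per strand.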
In order to identify the mechanisms of radiation damage by photons, the IEs of certain DNA sequences were further explored using photoionization tandem mass spectrometry. By varying the DNA sequence, the IEs depending on the type of nucleobase as well as on the DNA strand length could be identified and correlated with the SSB cross sections. An influence of the IE on the photoinduced reactions in the brominated DNA sequences could be excluded.
Assemblies are the origin of democracy. "Peacefully and without weapons," public space may be used for political opinion-forming. But does freedom of assembly also apply in public space that is privately owned? This question becomes highly charged when public space is privatized. Railway stations, airports, shopping streets and market squares can be privately owned and yet remain public space. Must the owners of premises designed as public space also tolerate assemblies there? How does freedom of assembly relate to the protection of property? Can the state enforce freedom of assembly on private premises? And how does the protection differ between the ECHR and the German Basic Law (Grundgesetz)? Maria Scharlau examines how this conflict between assembly law and property protection is to be resolved.
Scalable data profiling
(2018)
Data profiling is the act of extracting structural metadata from datasets. Structural metadata, such as data dependencies and statistics, can support data management operations, such as data integration and data cleaning. Data management is often the most time-consuming activity in any data-related project. Its support is extremely valuable in our data-driven world, so that more time can be spent on the actual utilization of the data, e.g., building analytical models. In most scenarios, however, structural metadata is not given and must be extracted first. Therefore, efficient data profiling methods are highly desirable.
Data profiling is a computationally expensive problem; in fact, most dependency discovery problems entail search spaces that grow exponentially in the number of attributes. To address this, this thesis introduces novel discovery algorithms for various types of data dependencies – namely inclusion dependencies, conditional inclusion dependencies, partial functional dependencies, and partial unique column combinations – that considerably improve over state-of-the-art algorithms in terms of efficiency and that scale to datasets that cannot be processed by existing algorithms. The key to those improvements lies not only in algorithmic innovations, such as novel pruning rules and traversal strategies, but also in algorithm designs tailored for distributed execution. While distributed data profiling has been mostly neglected by previous works, it is a logical consequence in the face of recent hardware trends and the computational hardness of dependency discovery.
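To make the search-space problem concrete, here is a deliberately naive baseline for unary inclusion dependency discovery (an illustration of the problem, not an algorithm from the thesis): every ordered pair of columns must be tested, which is exactly the quadratic blow-up that the pruning rules and distributed designs mentioned above are built to avoid.

```python
from itertools import permutations

def unary_inds(tables):
    """Naive unary inclusion-dependency discovery: report every ordered column
    pair (A, B) where the distinct values of A are a subset of those of B."""
    # Materialize the distinct values of every column once.
    value_sets = {
        (table, column): set(values)
        for table, columns in tables.items()
        for column, values in columns.items()
    }
    # Quadratic scan over all ordered column pairs.
    return [(a, b) for a, b in permutations(value_sets, 2)
            if value_sets[a] <= value_sets[b]]

# Toy instance: orders.customer_id is included in customers.id.
tables = {
    "orders": {"customer_id": [1, 2, 2, 3], "amount": [10, 20, 15, 30]},
    "customers": {"id": [1, 2, 3, 4], "name": ["a", "b", "c", "d"]},
}
for (ta, ca), (tb, cb) in unary_inds(tables):
    print(f"{ta}.{ca} IS CONTAINED IN {tb}.{cb}")
```

Real discovery algorithms avoid most of these set comparisons; n-ary dependencies make the exhaustive search exponentially worse still.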
To demonstrate the utility of data profiling for data management, this thesis furthermore presents Metacrate, a database for structural metadata. Its salient features are its flexible data model, the capability to integrate various kinds of structural metadata, and its rich metadata analytics library. We show how to perform a data anamnesis of unknown, complex datasets based on this technology. In particular, we describe in detail how to reconstruct the schemata and assess their quality as part of the data anamnesis.
The data profiling algorithms and Metacrate have been carefully implemented, integrated with the Metanome data profiling tool, and are available as free software. In that way, we intend to allow for easy repeatability of our research results and also provide them for actual usage in real-world data-related projects.
Samarium hexaboride
(2018)
This publication on processes of language change in Russian and Ukrainian describes a decisive phase in the recent linguistic history of Russia and Ukraine (1985–2008). The focus is on Anglicization as one of the main tendencies in the current linguistic destandardization of European languages. Using Anglicization in the language of advertising as an example, the author demonstrates the destandardization of Russian and Ukrainian after 1985. This corpus-based investigation comprises both a quantitative (statistical) and a qualitative (system-linguistic) analysis of the advertising-language corpus. The quantitative chronological analysis documents the markedly stronger dynamics of Anglicization in Ukrainian after 1998. The qualitative analysis illustrates the divergent and shared intralinguistic processes in the two languages, in particular the integration of Anglicisms and paths of standardization.
The formation and breaching of naturally dammed lakes have shaped landscapes, especially in seismically active high-mountain regions. Dammed lakes represent both potential water resources and potential hazards in case of dam breaching. Central Asia has mostly arid and semi-arid climates. Rock glaciers already store more water than ice glaciers in some semi-arid regions of the world, but their distribution and advance mechanisms are still under debate in current research. Their impact on the water availability in Central Asia will likely increase as temperatures rise and glaciers diminish.
This thesis provides insight into the relative age distribution of selected Kyrgyz and Kazakh rock glaciers and their individual lobes derived from lichenometric dating. The sizes of roughly 8000 lichen specimens were used to approximate an exposure age of the underlying debris surface. We showed that rock-glacier movement differs significantly on small scales. This has several implications for climatic inferences from rock glaciers. First, reactivation of their lobes does not necessarily point to climatic changes, or at least to out-of-equilibrium conditions. Second, the elevations of rock-glacier toes can no longer be considered general indicators of the limit of sporadic mountain permafrost, as they have traditionally been used.
In the mountainous and seismically active region of Central Asia, natural dams, besides rock glaciers, also play a key role in controlling water and sediment influx into river valleys. Rock glaciers advancing into valleys seem to be capable of influencing the stream network, damming rivers, or impounding lakes, an influence that has not previously been addressed. We quantitatively explored these controls using a new inventory of 1300 Central Asian rock glaciers. Elevation, potential incoming solar radiation, and the size of rock glaciers and their feeder basins played key roles in predicting dam appearance. Bayesian techniques were used to credibly distinguish between lichen sizes on rock glaciers and their lobes, and to find those parameters of a rock-glacier system that most credibly express the potential to build natural dams.
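The Bayesian comparison of lichen sizes can be sketched under strong simplifying assumptions (normal likelihood with known spread, flat prior on a grid; the data below are synthetic, not the thesis inventory) as the posterior probability that the mean lichen diameter on the rock-glacier body exceeds that on a lobe:

```python
import numpy as np

def posterior_prob_greater(x, y, mu_grid, sigma):
    """Grid posterior over each sample mean (known sigma, flat prior), then
    P(mu_x > mu_y) by summing the joint posterior over the relevant grid pairs."""
    def posterior(data):
        loglik = (-0.5 * ((data[:, None] - mu_grid[None, :]) / sigma) ** 2).sum(axis=0)
        p = np.exp(loglik - loglik.max())
        return p / p.sum()
    px, py = posterior(x), posterior(y)
    greater = mu_grid[:, None] > mu_grid[None, :]
    return (px[:, None] * py[None, :])[greater].sum()

rng = np.random.default_rng(1)
body = rng.normal(42.0, 8.0, size=60)  # lichen diameters (mm) on the older body
lobe = rng.normal(35.0, 8.0, size=40)  # diameters on a younger, reactivated lobe
grid = np.linspace(20.0, 60.0, 801)
print("P(mean body > mean lobe) =",
      posterior_prob_greater(body, lobe, grid, sigma=8.0))
```

Larger lichens on the main body than on a lobe then translate, with quantified credibility, into an older surface and a more recently active lobe.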
To place these studies in the region's history of natural dams, a combination of dating of former lake levels and outburst-flood modelling addresses the history and possible outburst-flood hypotheses of the second largest mountain lake in the world, Issyk Kul in Kyrgyzstan. Megafloods from breached earthen or glacial dams were found to be a likely explanation for some of the lake's highly fluctuating water levels. However, our detailed analysis of candidate lake sediments and outburst-flood deposits also showed that more localised dam breaks to the west of Issyk Kul could have left similar geomorphic and sedimentary evidence in this Central Asian mountain landscape. We thus caution against readily invoking megafloods as the main cause of lake-level drops of Issyk Kul. In summary, this thesis opens new pathways for studying rock glaciers and natural dams, with several practical implications for studies on mountain permafrost and natural hazards.
Polymeric materials which can perform reversible shape changes after programming, in response to thermal or electrical stimulation, can serve as (soft) actuating components in devices like artificial muscles, photonics, robotics or sensors. Such polymeric actuators can be realized with hydrogels, liquid crystalline elastomers, electro-active polymers or shape-memory polymers, controlled by stimuli such as heat, light, or electrostatic or magnetic fields. If the application conditions do not allow direct heating or electric stimulation of these smart devices, noncontact triggering is required. Remotely controlled actuation has been reported for liquid crystalline elastomer composites and shape-memory polymer network composites when a persistent external stress is applied during inductive heating in an alternating magnetic field. However, such composites cannot meet the demands of applications requiring remotely controlled free-standing motions of the actuating components.
The current thesis investigates whether a reprogrammable, remotely controlled soft actuator can be realized by magneto-sensitive multiphase shape-memory copolymer network composites containing magnetite nanoparticles as magneto-sensitive multivalent netpoints. A central hypothesis was that a magnetically controlled two-way (reversible bidirectional) shape-memory effect in such nanocomposites can be achieved without application of external stress (free-standing), when the required orientation of the crystallizable actuation domains (ADs) is ensured by an internal skeleton-like structure formed by a second crystallizable phase determining the sample's geometry, while magneto-sensitive iron oxide nanoparticles covalently integrated in the ADs allow remote temperature control. The polymer matrix of these composites should exhibit a phase-segregated morphology composed mainly of crystallizable ADs, in which a second set of higher-melting crystallites takes on a skeleton-like, geometry-determining function (geometry-determining domains, GDs) after programming of the composite; in this way the orientation of the ADs is established and maintained during actuation. The working principle of the reversible bidirectional movements in the multiphase shape-memory polymer network composite is related to a melting-induced contraction (MIC) of the oriented ADs during inductive heating and their crystallization-induced elongation (CIE) during cooling. Finally, the amount of multivalent magneto-sensitive netpoints in such a material should be as low as possible to ensure an adequate overall elasticity of the nanocomposite and, at the same time, a complete melting of both ADs and GDs via inductive heating, which is mandatory for enabling reprogrammability.
At first, surface-decorated iron oxide nanoparticles were synthesized and investigated. The coprecipitation method was applied to synthesize magnetic nanoparticles (mNPs) based on magnetite with a size of 12±3 nm, and in a next step a ring-opening polymerization (ROP) was utilized for covalent surface modification of these mNPs with oligo(ε-caprolactone) (OCL) or oligo(ω-pentadecalactone) (OPDL) via the "grafting from" approach. A successful coating of mNPs with OCL and OPDL was confirmed by differential scanning calorimetry (DSC) experiments showing melting peaks at 52±1 °C for mNP-OCL and 89±1 °C for mNP-OPDL. It was further explored whether two-layered surface-decorated mNPs can be prepared via a second surface-initiated ROP of mNP-OCL or mNP-OPDL with ω-pentadecalactone or ε-caprolactone. The observation of two distinct melting transitions in DSC experiments, as well as the increase in molecular weight of the detached coatings determined by GPC and 1H-NMR, indicated a successful synthesis of the two-layered nanoparticles mNP-OCL-OPDL and mNP-OPDL-OCL. In contrast, TEM micrographs revealed a reduction of the thickness of the polymeric coating on the nanoparticles after the second ROP, indicating that the applied synthesis and purification required further optimization.
For evaluating the impact of the dispersion of mNPs within a polymer matrix on the resulting inductive heating capability of composites, plain mNPs as well as OCL-coated magnetite nanoparticles (mNP-OCLs) were physically incorporated into crosslinked poly(ε-caprolactone) (PCL) networks. Inductive heating experiments were performed with both networks, cPCL/mNP and cPCL/mNP-OCL, in an alternating magnetic field (AMF) with a magnetic field strength of H = 30 kA·m⁻¹. Here, a bulk temperature of Tbulk = 74±2 °C was achieved for cPCL/mNP-OCL, which was almost 20 °C above the melting transition of the PCL-based polymer matrix. In contrast, the composite with plain mNPs reached a Tbulk of only 48±2 °C, which is not sufficient for the complete melting of all PCL crystallites required for actuation.
The inductive heating capability of a multiphase copolymer nanocomposite network (designed as a soft actuator) containing surface-decorated mNPs as covalent netpoints was investigated. The composite was synthesized from star-shaped OCL and OPDL precursors, as well as mNP-OCLs, via reaction with HDI. The weight ratio of OPDL to OCL in the starting reaction mixture was 15/85 (wt%/wt%), and the amount of iron oxide in the nanocomposite was 4 wt%. DSC experiments revealed two well separated melting and crystallization peaks, confirming the required phase-segregated morphology in the nanocomposite NC-mNP-OCL. TEM images illustrated a phase-segregated morphology of the polymer matrix on the microlevel, with droplet-shaped regions attributed to the OPDL domains dispersed in an OCL matrix. The TEM images further demonstrated that the nanoparticulate netpoints in NC-mNP-OCL were almost homogeneously dispersed within the OCL domains. Tests of the inductive heating capability of the nanocomposites at a magnetic field strength of Hhigh = 11.2 kA·m⁻¹ revealed an achievable plateau surface temperature of Tsurf = 57±1 °C for NC-mNP-OCL, recorded by an infrared video camera. An effective heat generation constant (P̄), which is proportional to the rate of heat generation per unit volume of the sample, can be derived from a multi-scale model of the heat generation. NC-mNP-OCL with homogeneously dispersed mNP-OCLs exhibited a P̄ value of 1.04±0.01 K·s⁻¹ at Hhigh, while at Hreset = 30.0 kA·m⁻¹ a Tsurf of 88±1 °C (at which all OPDL-related crystallites are molten) and a P̄ value of 1.93±0.02 K·s⁻¹ were obtained, indicating a high magnetic heating capability of the composite.
The free-standing magnetically controlled reversible shape-memory effect (mrSME) was explored with originally straight nanocomposite samples programmed by bending to an angle of 180°. By switching the magnetic field on and off, the composite sample was allowed to repetitively heat to 60 °C and cool to ambient temperature. A pronounced mrSME, characterized by changes in bending angle of Δβrev = 20±3°, was obtained for a composite sample programmed by bending when a magnetic field strength of Hhigh = 11.2 kA·m⁻¹ was applied. In a multi-cyclic magnetic bending experiment with 600 heating-cooling cycles it was shown that the actuation performance did not change with increasing number of test cycles, demonstrating the accuracy and reproducibility of this soft actuator. The degree of actuation as well as the kinetics of the shape changes during heating could be tuned by varying the magnetic field strength between Hlow and Hhigh or the magnetic field exposure time. When Hreset = 30.0 kA·m⁻¹ was applied, the programmed geometry was erased and the composite sample returned to its originally straight shape. The reprogrammability of the nanocomposite actuators was demonstrated with one and the same test specimen, which first exhibited reversible angle changes when programmed by bending, was then reprogrammed to a concertina, which expands upon inductive heating and contracts during cooling, and was finally reprogrammed to a clip-like shape, which closes during cooling and opens when Hhigh is applied. In a next step, the applicability of the presented remotely controllable shape-memory polymer actuators was demonstrated by a multiring device prepared from NC-mNP-OCL, which repetitively opens and closes when an alternating magnetic field (Hhigh = 11.2 kA·m⁻¹) is switched on and off.
To investigate the micro- and nanostructural changes related to the actuation of the developed nanocomposite, AFM and WAXS experiments were conducted with programmed nanocomposite samples under cyclic heating and cooling between 25 °C and 60 °C. In the AFM experiments, the distance (D) between representative droplet-like structures related to the OPDL geometry-determining domains was used to calculate the reversible change in D. Here, Drev = 3.5±1% was found for NC-mNP-OCL, which was in good agreement with the results of the magneto-mechanical actuation experiments. Finally, the analysis of azimuthal (radial) WAXS scattering profiles supported the oriented crystallization of the OCL actuation domains at 25 °C.
In conclusion, the results of this work successfully demonstrated that shape-memory polymer nanocomposites containing mNPs as magneto-sensitive multifunctional netpoints in a covalently crosslinked multiphase polymer matrix exhibit magnetically (remotely) controlled actuation upon repetitive exposure to an alternating magnetic field. Furthermore, the (shape) memory of such a nanocomposite can be erased by exposing it to temperatures above the melting temperature of the geometry-determining domains, which allows reprogramming of the actuator. These findings are relevant for designing novel reprogrammable, remotely controllable soft polymeric actuators.
Natural extreme events are an integral part of nature on planet Earth. Usually these events are only considered hazardous to humans when humans are exposed to them; in that case, however, natural hazards can have devastating impacts on human societies. Especially hydro-meteorological hazards have a high damage potential, for example in the form of riverine and pluvial floods, winter storms, hurricanes and tornadoes, which can occur all over the globe. Along with an increasingly warm climate, an increase in extreme weather that can trigger natural hazards is to be expected. Yet not only changing natural systems but also changing societal systems contribute to the increasing risk associated with these hazards, comprising increasing exposure and possibly also increasing vulnerability to the impacts of natural events. Thus, appropriate risk management is required to adapt all parts of society to existing and upcoming risks at various spatial scales. One essential part of risk management is risk assessment, including the estimation of economic impacts. However, reliable methods for the estimation of economic impacts due to hydro-meteorological hazards are still missing. Therefore, this thesis deals with the question of how the reliability of hazard damage estimates can be improved, represented and propagated across all spatial scales. This question is investigated using the specific example of economic impacts to companies as a result of riverine floods in Germany.
Flood damage models aim to describe the damage processes during a given flood event; in other words, they describe the vulnerability of a specific object to a flood. Such models can be based on empirical data sets collected after flood events. In this thesis, tree-based models trained with survey data are used for the estimation of direct economic flood impacts at the object level. It is found that these machine learning models, in conjunction with increasing sizes of the data sets used to derive them, outperform state-of-the-art damage models. However, despite the performance improvements gained by using multiple variables and more data points, large prediction errors remain at the object level. The occurrence of these high errors was explained by a further investigation using distributions derived from the tree-based models. The investigation showed that direct economic impacts to individual objects cannot be modeled by a normal distribution. Yet most state-of-the-art approaches assume a normal distribution and take mean values as point estimators; the predictions are therefore unlikely values within the distributions, resulting in high errors. At larger spatial scales more objects are considered in the damage estimation, which leads to a better fit of the damage estimates to a normal distribution. Consequently, the performance of the point estimators also improves, although large errors can still occur due to the variance of the normal distribution. It is recommended to use distributions instead of point estimates in order to represent the reliability of damage estimates.
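The idea of deriving distributions, rather than point estimates, from tree-based models can be illustrated with a short sketch: instead of averaging a random forest's tree predictions into a single value, the spread of the individual trees' predictions is kept as an empirical predictive distribution. The code below is only an illustration of this general technique, not the thesis's actual model; the features and the synthetic loss data are invented for the example.

```python
# Illustrative sketch: an empirical predictive distribution from a
# random forest, instead of a single mean-value point estimate.
# Features (e.g. water depth, duration) and data are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 3, size=(500, 2))  # e.g. water depth [m], duration [d]
y = 0.2 * X[:, 0] ** 2 + 0.05 * X[:, 1] + rng.gamma(1.0, 0.05, 500)  # loss ratio

forest = RandomForestRegressor(n_estimators=400, random_state=0).fit(X, y)

x_new = np.array([[1.5, 0.5]])  # one flood-affected object
per_tree = np.array([t.predict(x_new)[0] for t in forest.estimators_])

# The spread of per-tree predictions approximates a predictive
# distribution; its quantiles convey the reliability that a mean hides.
print("mean estimate:", per_tree.mean())
print("5%-95% range:", np.quantile(per_tree, [0.05, 0.95]))
```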
In addition, current approaches mostly ignore the uncertainty associated with the characteristics of the hazard and the exposed objects. For a given flood event, for example, the estimation of the water level at a certain building is prone to uncertainties. Current approaches define exposed objects mostly by the use of land use data sets, which often show inconsistencies that introduce additional uncertainties. Furthermore, state-of-the-art approaches suffer from a lack of consistency when predicting damage at different spatial scales, owing to the use of different types of exposure data sets for model derivation and application. To address these issues, a novel object-based method was developed in this thesis. The method enables a seamless estimation of hydro-meteorological hazard damage across spatial scales, including uncertainty quantification. The application and validation of the method resulted in plausible estimations at all spatial scales without overestimating the uncertainty.
It is mainly newly available data sets containing individual buildings that make the application of the method possible, as they allow for the identification of flood-affected objects by overlaying the data sets with water masks. However, the identification of affected objects with two different water masks revealed huge differences in the number of identified objects. Thus, more effort is needed for their identification, since the number of affected objects determines the order of magnitude of the economic flood impacts to a large extent.
In general, the method represents the uncertainties associated with the three components of risk, namely hazard, exposure and vulnerability, in the form of probability distributions. The object-based approach enables a consistent propagation of these uncertainties in space. Aside from the propagation of damage estimates and their uncertainties across spatial scales, a propagation between models estimating direct and indirect economic impacts was demonstrated. This enables the inclusion of the uncertainties associated with the direct economic impacts in the estimation of the indirect economic impacts. Consequently, the modeling procedure facilitates the representation of the reliability of the estimated total economic impacts. Representing the estimates' reliability prevents reasoning based on the false certainty that might be attributed to point estimates. The developed approach therefore facilitates meaningful flood risk management and adaptation planning.
The successful post-event application and the representation of the uncertainties also qualify the method for use in future risk assessments. The developed method thus enables the representation of the assumptions made in future risk assessments, which is crucial information for future risk management. This is an important step forward, since the representation of the reliability associated with all components of risk is currently lacking in all state-of-the-art methods assessing future risk.
In conclusion, the use of object-based methods giving results in the form of distributions instead of point estimates is recommended. Improving model performance by means of multi-variable models and additional data points is possible, but the gains are small. Uncertainties associated with all components of damage estimation should be included and represented in the results. Furthermore, the findings of the thesis suggest that, at larger scales, the influence of the uncertainty associated with the vulnerability is smaller than that associated with the hazard and exposure. This leads to the conclusion that, for increased reliability of flood damage estimations and risk assessments, the improvement and active inclusion of hazard and exposure, including their uncertainties, is needed in addition to improvements of the models describing the vulnerability of the objects.
The highly conserved protein complex containing the Target of Rapamycin (TOR) kinase is known to integrate intra- and extracellular stimuli controlling nutrient allocation and cellular growth. This thesis describes three studies aimed at understanding how the TOR signaling pathway influences carbon and nitrogen metabolism in Chlamydomonas reinhardtii. The first study presents a time-resolved analysis of the molecular and physiological features across the diurnal cycle. The inhibition of TOR leads to a 50% reduction in growth, followed by nonlinear delays in cell cycle progression. The metabolomics analysis showed that the growth repression is mainly driven by differential carbon partitioning between anabolic and catabolic processes. Furthermore, the high accumulation of nitrogen-containing compounds indicated that the TOR kinase controls the carbon-to-nitrogen balance of the cell, which is responsible for biomass accumulation, growth and cell cycle progression. The second study explains the cause of the high accumulation of amino acids. For this purpose, the effect of TOR inhibition on Chlamydomonas was examined under different growth regimes using stable 13C- and 15N-isotope labeling. The data clearly showed that an increased nitrogen uptake is induced within minutes after the inhibition of TOR. Interestingly, this increased N-influx is accompanied by increased activities of nitrogen-assimilating enzymes. Accordingly, it was concluded that TOR inhibition induces de novo amino acid synthesis in Chlamydomonas. The recognition of this novel process opened an array of questions regarding potential links between central metabolism and TOR signaling. Therefore, a detailed phosphoproteomics study was conducted to identify potential substrates of the TOR pathway regulating central metabolism. Interestingly, some of the key enzymes involved in carbon metabolism as well as amino acid synthesis exhibited significant changes in phosphosite intensities immediately after TOR inhibition. Altogether, these studies a) provide detailed insights into the metabolic response of Chlamydomonas to TOR inhibition, b) identify a novel process causing rapid upshifts in amino acid levels upon TOR inhibition and c) highlight potential targets of TOR signaling regulating changes in central metabolism. Further biochemical and molecular investigations could confirm these observations and advance the understanding of growth signaling in microalgae.
This paper introduces a novel measure to assess similarity between event hydrographs. It is based on Cross Recurrence Plots and Recurrence Quantification Analysis, which have recently gained attention in a range of disciplines dealing with complex systems. The method attempts to quantify the event runoff dynamics and is based on the time-delay-embedded phase space representation of discharge hydrographs. A phase space trajectory is reconstructed from the event hydrograph, and pairs of hydrographs are compared to each other based on the distance of their phase space trajectories. Time delay embedding allows considering the multi-dimensional relationships between different points in time within the event; hence, the temporal succession of discharge values is taken into account, such as the impact of the initial conditions on the runoff event. We provide an introduction to Cross Recurrence Plots and discuss their parameterization. An application example based on flood time series demonstrates how the method can be used to measure the similarity or dissimilarity of events, and how it can be used to detect events with rare runoff dynamics. It is argued that this method provides a more comprehensive approach to quantifying hydrograph similarity compared to conventional hydrological signatures.
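The two core steps, time-delay embedding and the cross recurrence plot, can be sketched compactly. The following is an illustrative Python approximation of the general technique, not the paper's implementation; the delay, embedding dimension, threshold and the synthetic hydrographs are assumed values chosen for the example.

```python
# Sketch: time-delay embedding and a cross recurrence plot (CRP)
# for comparing two event hydrographs. Parameters are illustrative.
import numpy as np

def embed(x, dim=3, delay=2):
    """Time-delay embedding: map a 1D series to points in phase space."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

def cross_recurrence(x, y, dim=3, delay=2, eps=0.1):
    """CRP: 1 where the phase-space states of the two series are
    closer than the threshold eps, 0 elsewhere."""
    X, Y = embed(x, dim, delay), embed(y, dim, delay)
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return (dists < eps).astype(int)

# Two synthetic event hydrographs (normalized discharge)
t = np.linspace(0, 1, 100)
q1 = np.exp(-((t - 0.30) / 0.10) ** 2)
q2 = np.exp(-((t - 0.35) / 0.12) ** 2)

crp = cross_recurrence(q1, q2)
# The recurrence rate is one simple similarity measure; RQA derives
# further measures, e.g. from diagonal line structures in the CRP.
print("recurrence rate:", crp.mean())
```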
Together with the gradual change of mean values, ongoing climate change is projected to increase the frequency and amplitude of temperature and precipitation extremes in many regions of Europe. The impacts of such, in most cases short-term, extraordinary climate situations on terrestrial ecosystems are of central interest in recent climate change research, because it cannot be assumed per se that known dependencies between climate variables and ecosystems scale linearly. So far, however, there is a high demand for a method to quantify such impacts in terms of simultaneities of event time series.
In the course of this manuscript, the new statistical approach of Event Coincidence Analysis (ECA) as well as its R implementation is introduced, a methodology that allows assessing whether or not two types of event time series exhibit similar sequences of occurrences. Applications of the method are presented, analyzing climate impacts on different temporal and spatial scales: the impact of extraordinary expressions of various climatic variables on tree stem variations (subdaily and local scale), the impact of extreme temperature and precipitation events on the flowering time of European shrub species (weekly and country scale), the impact of extreme temperature events on ecosystem health in terms of NDVI (weekly and continental scale), and the impact of El Niño and La Niña events on precipitation anomalies (seasonal and global scale).
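The core quantity of ECA, a coincidence rate between two event series, can be sketched in a few lines. The Python snippet below illustrates only the basic idea under assumed parameters (tolerance window ΔT, lag τ); the thesis's actual R implementation additionally provides significance testing and further options.

```python
# Minimal sketch of an Event Coincidence Analysis coincidence rate,
# assuming event times given as indices on a common time axis.
import numpy as np

def precursor_coincidence_rate(a_times, b_times, delta_t=2, tau=0):
    """Fraction of events in series A preceded by at least one event
    in series B within the window [t - tau - delta_t, t - tau]."""
    if len(a_times) == 0:
        return np.nan
    count = 0
    for t in a_times:
        lo, hi = t - tau - delta_t, t - tau
        if np.any((b_times >= lo) & (b_times <= hi)):
            count += 1
    return count / len(a_times)

# Illustrative example: are events in A (e.g. flowering onsets)
# preceded by events in B (e.g. temperature extremes)?
a = np.array([12, 30, 55, 80])
b = np.array([10, 29, 70, 79])
print(precursor_coincidence_rate(a, b, delta_t=2))  # -> 0.75
```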
The applications presented in this thesis refine relationships already known from classical methods and also deliver substantial new findings to the scientific community: the widely known positive correlation between flowering time and temperature, for example, is confirmed to be valid for the tails of the distributions, while the widely assumed positive dependency between stem diameter variation and temperature is shown not to hold for very warm and very cold days. The larger-scale investigations underline the sensitivity of anthropogenically shaped landscapes towards temperature extremes in Europe and provide a comprehensive global ENSO impact map for strong precipitation events.
Finally, by publishing the R implementation of the method, this thesis enables other researchers to investigate similar research questions using Event Coincidence Analysis.
The purpose of Probabilistic Seismic Hazard Assessment (PSHA) at a construction site is to provide engineers with a probabilistic estimate of the ground-motion level that could be equaled or exceeded at least once in the structure's design lifetime. Certainty about the predicted ground motion allows engineers to confidently optimize structural design and mitigate the risk of extensive damage or, in the worst case, a collapse. It is therefore in the interest of engineering, insurance, disaster mitigation, and the security of society at large to reduce uncertainties in the prediction of design ground-motion levels.
In this study, I am concerned with quantifying and reducing the prediction uncertainty of regression-based Ground-Motion Prediction Equations (GMPEs). Essentially, GMPEs are regressed best-fit formulae relating event, path, and site parameters (predictor variables) to observed ground-motion values at the site (prediction variable). GMPEs are characterized by a parametric median (μ) and a non-parametric variance (σ) of prediction. μ captures the known ground-motion physics, i.e., scaling with earthquake rupture properties (event), attenuation with distance from the source (region/path), and amplification due to local soil conditions (site), while σ quantifies the natural variability of the data that eludes μ. In a broad sense, the GMPE prediction uncertainty is the cumulative effect of 1) the uncertainty on the estimated regression coefficients (uncertainty on μ, denoted σ_μ), and 2) the inherent natural randomness of the data (σ). The extent of the μ parametrization and the quantity and quality of ground-motion data used in a regression govern the size of its prediction uncertainty: σ_μ and σ.
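In standard notation, a GMPE of this kind can be written as follows; this is the generic textbook form, not necessarily the exact parametrization used in the thesis:

```latex
% Generic GMPE form (illustrative; the thesis's exact model may differ)
\ln Y = \mu(M, R, S;\, \boldsymbol{\theta}) + \varepsilon,
\qquad \varepsilon \sim \mathcal{N}(0, \sigma^2)
% Y: observed ground-motion measure; M: magnitude (event);
% R: source-to-site distance (path); S: site condition;
% \boldsymbol{\theta}: regression coefficients.
% Uncertainty on the fitted \boldsymbol{\theta} propagates into
% \sigma_\mu, the uncertainty on the median prediction \mu itself.
```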
In the first step, I present the impact of μ parametrization on the size of σ_μ and σ. Over-parametrization appears to increase σ_μ, because of the large number of regression coefficients (in μ) to be estimated with insufficient data. Under-parametrization mitigates σ_μ, but the reduced explanatory strength of μ is reflected in an inflated σ. For an optimally parametrized GMPE, a ~10% reduction in σ is attained by discarding low-quality data from pan-European events with incorrect parametric values (of predictor variables).
In the case of regions with scarce ground-motion recordings, without under-parametrization, the only way to mitigate σ_μ is to substitute long-term earthquake data at a location with short-term samples of data across several locations – the ergodic assumption. However, the price of the ergodic assumption is an increased σ, due to region-to-region and site-to-site differences in ground-motion physics. The σ of an ergodic GMPE developed from a generic ergodic dataset is much larger than that of non-ergodic GMPEs developed from region- and site-specific non-ergodic subsets – which were too sparse to produce their own specific GMPEs. Fortunately, with the dramatic increase in recorded ground-motion data at several sites across Europe and the Middle East, I could quantify the region- and site-specific differences in ground-motion scaling and upgrade the GMPEs with 1) substantially more accurate region- and site-specific μ for sites in Italy and Turkey, and 2) significantly smaller prediction variance σ. The benefit of such enhancements to GMPEs is quite evident in my comparison of PSHA estimates from ergodic versus region- and site-specific GMPEs, where the differences in predicted design ground-motion levels at several sites in Europe and the Middle East are as large as ~50%.
Resolving the ergodic assumption with mixed-effects regressions is feasible when the quantified region- and site-specific effects are physically meaningful and the non-ergodic subsets (regions and sites) are defined a priori through expert knowledge. In the absence of expert definitions, I demonstrate the potential of machine learning techniques in identifying efficient clusters of site-specific non-ergodic subsets, based on latent similarities in their ground-motion data. Clustered site-specific GMPEs bridge the gap between site-specific and fully ergodic GMPEs, with their partially non-ergodic μ and a σ ~15% smaller than the ergodic variance.
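The clustering idea can be illustrated with a toy sketch: sites are grouped by latent similarity in their residuals (site terms) from an ergodic GMPE, and each cluster then shares a partially non-ergodic adjustment. The abstract does not name the specific algorithm, so k-means is used here purely for illustration, with invented site terms.

```python
# Illustrative sketch: clustering sites by their average residuals
# (site terms) from an ergodic GMPE. Algorithm choice (k-means),
# features and cluster count are assumptions for the example.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Assume per-site mean residuals from an ergodic GMPE (3 latent groups)
site_terms = np.concatenate([rng.normal(-0.3, 0.05, 40),
                             rng.normal( 0.0, 0.05, 40),
                             rng.normal( 0.4, 0.05, 40)]).reshape(-1, 1)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(site_terms)
# Each cluster gets its own adjustment to the median mu; the residual
# variance within clusters is smaller than the fully ergodic sigma.
for k in range(3):
    members = site_terms[km.labels_ == k]
    print(f"cluster {k}: n={len(members)}, mean site term={members.mean():+.2f}")
```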
The methodological refinements to GMPE development produced in this study are applicable to new ground-motion datasets, to further enhance the certainty of ground-motion prediction and thereby of seismic hazard assessment. Advanced statistical tools show great potential in improving the predictive capabilities of GMPEs, but the fundamental requirement remains: a large quantity of high-quality ground-motion data from several sites over an extended time period.
In 1806, Napoleon began a seven-year attempt to seal off the entire European continent from British imports. This ambitious protectionist experiment set off extensive debates in the contemporary press about questions of economic theory. Does economic isolationism foster the wealth of nations? Can international law prevent the outbreak of a maritime trade war? Or would global free trade rather promote wealth, and thereby peace, among all nations? In this book, Alix Winter presents the economic-policy event of the continental system as a focal point for public discussions about fundamental economic questions and identifies the early nineteenth century as a phase of radicalisation of Enlightenment liberal economic theories.
Previous studies on native language (L1) anaphor resolution have found that monolingual native speakers are sensitive to syntactic, pragmatic, and semantic constraints on pronoun and reflexive resolution. However, most studies have focused on English and other Germanic languages, and little is currently known about the online (i.e., real-time) processing of anaphors in languages with syntactically less restricted anaphors, such as Turkish. We also know relatively little about how 'non-standard' populations such as non-native (L2) speakers and heritage speakers (HSs) resolve anaphors.
This thesis investigates the interpretation and real-time processing of anaphors in German and in a typologically different and as yet understudied language, Turkish. It compares hypotheses about differences between native speakers' (L1ers) and L2 speakers' (L2ers) sentence processing, looking into differences in processing mechanisms as well as the possibility of cross-linguistic influence. To help fill the current research gap regarding HS sentence comprehension, it compares findings for this group with those for L2ers.
To investigate the representation and processing of anaphors in these three populations, I carried out a series of offline questionnaires and Visual-World eye-tracking experiments on the resolution of reflexives and pronouns in both German and Turkish. In the German experiments, native German speakers as well as L2ers of German were tested, while in the Turkish experiments, non-bilingual native Turkish speakers as well as HSs of Turkish with L2 German were tested. This allowed me to observe both cross-linguistic differences as well as population differences between monolinguals' and different types of bilinguals' resolution of anaphors.
Regarding the comprehension of Turkish anaphors by L1ers, contrary to what has been previously assumed, I found that Turkish has no reflexive that follows Condition A of Binding theory (Chomsky, 1981). Furthermore, I propose more general cross-linguistic differences between Turkish and German, in the form of a stronger reliance on pragmatic information in anaphor resolution overall in Turkish compared to German.
As for the processing differences between L1ers and L2ers of a language, I found evidence in support of hypotheses which propose that L2ers of German rely more strongly on non-syntactic information compared to L1ers (Clahsen & Felser, 2006, 2017; Cunnings, 2016, 2017), independent of a potential influence of their L1. HSs, on the other hand, showed a tendency to overemphasize interpretational contrasts between different Turkish anaphors compared to monolingual native speakers. However, lower-proficiency HSs were likely to merge different forms for simplified representation and processing. Overall, L2ers and HSs showed differences from monolingual native speakers both in their final interpretation of anaphors and during online processing. However, these differences were not parallel between the two types of bilinguals and thus do not support a unified model of L2 and HS processing (cf. Montrul, 2012).
The findings of this thesis contribute to the field of anaphor resolution by providing data from a previously unexplored language, Turkish, as well as contributing to research on native and non-native processing differences. My results also illustrate the importance of considering individual differences in the acquisition process when studying bilingual language comprehension. Factors such as age of acquisition, language proficiency and the type of input a language learner receives may influence the processing mechanisms they develop and employ, both between and within different bilingual populations.
Photocatalysis is considered significant in the new energy era, because the inexhaustibly abundant, clean, and safe energy of the sun can be harnessed for the sustainable, nonhazardous, and economical development of our society. In photocatalysis research, the current focus lies on the design and modification of photocatalysts.
As one of the most promising photocatalysts, g-C3N4 has gained considerable attention for its eye-catching properties and has been extensively explored in photocatalytic applications such as water splitting, organic pollutant degradation, and CO2 reduction. Even so, it has drawbacks that inhibit its further application. Motivated by this, this thesis mainly presents and discusses the preparation of several novel photocatalysts and their photocatalytic performance. These materials were all synthesized via alterations of the classic g-C3N4 preparation method, such as using different pre-compositions for the initial supramolecular complex and post-modification with functional groups. In place of cyanuric acid, 2,5-dihydroxy-1,4-benzoquinone and chloranilic acid can form completely new supramolecular complexes with melamine. After heating, the resulting products of the two complexes showed 2D sheet-like and 1D fiber-like morphologies, respectively, which are maintained even up to a high temperature of 800 °C. With increasing synthesis temperature, these materials range from crystals over polymers to N-doped carbons. Based on their different pre-compositions, they show different dye degradation performances. CLA-M-250 shows the highest photocatalytic activity and a strong oxidation capacity: it performs well not only in RhB degradation, but also in oxygen production in water splitting. In the post-modification approach, a novel strategy was proposed to modify the carbon nitride scaffold with cyano groups, whose content can be well controlled by the input of sodium thiocyanate. The cyanation leads to a narrowed band gap as well as improved separation of photo-induced charges. Cyano-group-grafted carbon nitride thus shows dramatically enhanced performance in the photocatalytic coupling reaction between styrene and sodium benzenesulfinate under green light irradiation, in stark contrast with the inactivity of pristine g-C3N4.
Today, more than half of the world’s population lives in urban areas. With a high density of population and assets, urban areas are not only the economic, cultural and social hubs of every society, they are also highly susceptible to natural disasters. As a consequence of rising sea levels and an expected increase in extreme weather events caused by a changing climate in combination with growing cities, flooding is an increasing threat to many urban agglomerations around the globe.
To mitigate the destructive consequences of flooding, appropriate risk management and adaptation strategies are required. So far, flood risk management in urban areas is almost exclusively focused on managing river and coastal flooding. Often overlooked is the risk from small-scale rainfall-triggered flooding, where the rainfall intensity of rainstorms exceeds the capacity of urban drainage systems, leading to immediate flooding. Referred to as pluvial flooding, this flood type exclusive to urban areas has caused severe losses in cities around the world. Without further intervention, losses from pluvial flooding are expected to increase in many urban areas due to an increase of impervious surfaces compounded with an aging drainage infrastructure and a projected increase in heavy precipitation events. While this requires the integration of pluvial flood risk into risk management plans, so far little is known about the adverse consequences of pluvial flooding due to a lack of both detailed data sets and studies on pluvial flood impacts. As a consequence, methods for reliably estimating pluvial flood losses, needed for pluvial flood risk assessment, are still missing.
Therefore, this thesis investigates how pluvial flood losses to private households can be reliably estimated, based on an improved understanding of the drivers of pluvial flood loss. For this purpose, detailed data from pluvial flood-affected households was collected through structured telephone and web surveys following pluvial flood events in Germany and the Netherlands.
Pluvial flood losses to households are the result of complex interactions between impact characteristics, such as the water depth, and a household's resistance as determined by its risk awareness, preparedness, emergency response, building properties and other influencing factors. Both exploratory analysis and machine-learning approaches were used to analyze differences in resistance and impacts between households and their effects on the resulting losses. The comparison of case studies showed that awareness of pluvial flooding among private households is quite low. Low awareness not only challenges the effective dissemination of early warnings, but was also found to influence the implementation of private precautionary measures. The latter were predominantly implemented by households with previous experience of pluvial flooding. Even cases where previous flood events affected a different part of the same city did not lead to an increase in the preparedness of the surveyed households, highlighting the need to account for small-scale variability in both impact and resistance parameters when assessing pluvial flood risk.
While it was concluded that the combination of low awareness, ineffective early warning and the fact that only a minority of buildings were adapted to pluvial flooding impaired the coping capacities of private households, the often low water levels still enabled households to mitigate or even prevent losses through a timely and effective emergency response.
These findings were confirmed by the detection of loss-influencing variables, showing that cases in which households were able to prevent any loss to the building structure are predominantly explained by resistance variables such as the household's risk awareness, while the degree of loss is mainly explained by impact variables.
Based on the important loss-influencing variables detected, different flood loss models were developed. Similar to flood loss models for river floods, the empirical data from the preceding data collection were used to train flood loss models describing the relationship between impact and resistance parameters and the resulting loss to building structures. Different approaches were adapted from river flood loss models, using both models with the water depth as the only predictor of building structure loss and models incorporating additional variables from the preceding variable detection routine.
The high predictive errors of all compared models showed that point predictions are not suitable for estimating losses on the building level, as they severely impair the reliability of the estimates. For that reason, a new probabilistic framework based on Bayesian inference was introduced that is able to provide predictive distributions instead of single loss estimates. These distributions not only give a range of probable losses, they also provide information on how likely a specific loss value is, representing the uncertainty in the loss estimate.
Using probabilistic loss models, it was found that the certainty and reliability of a loss estimate on the building level is not only determined by the use of additional predictors, as shown in previous studies, but also by the choice of response distribution defining the shape of the predictive distribution. Here, a mixture of a beta and a Bernoulli distribution, accounting for households that are able to prevent losses to their building's structure, was found to provide significantly more certain and reliable estimates than previous approaches using Gaussian or non-parametric response distributions.
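The shape of such a zero-augmented response distribution can be sketched directly: a point mass at zero loss (households that prevent structural damage) mixed with a beta distribution for the relative loss otherwise. The parameter values below are invented for illustration, not fitted posteriors from the thesis.

```python
# Sketch of a Bernoulli-beta (zero-augmented) predictive distribution
# for relative building structure loss. All parameters are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

p_zero = 0.35          # P(no structural loss), e.g. driven by risk awareness
alpha, beta_ = 1.5, 6  # beta shape parameters, e.g. driven by water depth

# Draw from the mixture: zero with probability p_zero,
# otherwise a beta-distributed loss ratio in (0, 1).
n = 10_000
is_zero = rng.random(n) < p_zero
draws = stats.beta(alpha, beta_).rvs(n, random_state=rng)
loss_ratio = np.where(is_zero, 0.0, draws)

# The full distribution, not a single point, is the reported estimate.
print("P(loss = 0):", is_zero.mean())
print("median | loss > 0:", np.median(loss_ratio[~is_zero]))
print("90% interval:", np.quantile(loss_ratio, [0.05, 0.95]))
```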
The successful model transfer and post-event application to estimate building structure loss in Houston, TX, caused by pluvial flooding during Hurricane Harvey confirmed previous findings, and demonstrated the potential of the newly developed multi-variable beta model for future risk assessments. The highly detailed input data set constructed from openly available data sources containing over 304,000 affected buildings in Harris County further showed the potential of data-driven, building-level loss models for pluvial flood risk assessment.
In conclusion, pluvial flood losses to private households are the result of complex interactions between impact and resistance variables, which should be represented in loss models. The local occurrence of pluvial floods requires loss estimates on high spatial resolutions, i.e. on the building level, where losses are variable and uncertainties are high.
Therefore, probabilistic loss estimates describing the uncertainty of the estimate should be used instead of point predictions. While the performance of probabilistic models on the building level is mainly driven by the choice of response distribution, multi-variable models are recommended for two reasons:
First, additional resistance variables improve the detection of cases in which households were able to prevent structural losses.
Second, the added variability of additional predictors provides a better representation of the uncertainties when loss estimates from multiple buildings are aggregated.
This leads to the conclusion that data-driven probabilistic loss models on the building level allow for a reliable loss estimation at an unprecedented level of detail, with a consistent quantification of uncertainties on all aggregation levels. This makes the presented approach suitable for a wide range of applications, from decision support in spatial planning to impact-based early warning systems.
Modern gamma-ray telescopes provide the main stream of data for astrophysicists in the quest to detect the sources of gamma rays, such as active galactic nuclei (AGN). Many blazars have been detected with gamma-ray telescopes such as HESS, VERITAS, MAGIC and the Fermi satellite as sources of gamma rays with energies E ≥ 100 GeV. These very-high-energy photons interact with the extragalactic background light (EBL), producing ultra-relativistic electron-positron pairs. Observations with Fermi-LAT indicate that the GeV gamma-ray flux from some blazars is lower than that predicted from the full electromagnetic cascade. The pairs can induce electrostatic and electromagnetic instabilities, in which case wave-particle interactions can reduce the energy of the pairs. Therefore, collective plasma effects can also substantially suppress the GeV-band gamma-ray emission, affecting the IGMF constraints as well. Using particle-in-cell (PIC) simulations, we have revisited the issue of plasma instabilities induced by electron-positron beams in the fully ionized intergalactic medium. This problem is related to pair beams produced by the TeV radiation of blazars. The main objective of our study is to clarify the feedback of the beam-driven instabilities on the pairs. The present dissertation provides new results regarding the plasma instabilities of blazar-induced pair beams interacting with the intergalactic medium. This clarifies the relevance of plasma instabilities and improves our understanding of blazars.
Plant-derived Transcription Factors for Orthologous Regulation of Gene Expression in the Yeast Saccharomyces cerevisiae
Control of gene expression by transcription factors (TFs) is central to many synthetic biology projects, where tailored expression of one or multiple genes is often needed. As TFs from evolutionarily distant organisms are unlikely to affect gene expression in a host of choice, they represent excellent candidates for establishing orthogonal control systems. To establish orthogonal regulators for use in yeast (Saccharomyces cerevisiae), we chose TFs from the plant Arabidopsis thaliana. We established a library of 106 different combinations of chromosomally integrated TFs, activation domains (yeast GAL4 AD, herpes simplex virus VP64, and plant EDLL) and synthetic promoters harbouring cognate cis-regulatory motifs driving a yEGFP reporter. The transcriptional output of the different driver/reporter combinations varied over a wide spectrum, with EDLL being a considerably stronger transcription activation domain in yeast than the GAL4 activation domain, in particular when fused to Arabidopsis NAC TFs. Notably, the strength of several NAC-EDLL fusions exceeded that of the strong yeast TDH3 promoter by 6- to 10-fold. We furthermore show that plant TFs can be used to build regulatory systems encoded by centromeric or episomal plasmids. Our library of TF–DNA-binding site combinations offers an excellent tool for diverse synthetic biology applications in yeast.
COMPASS: Rapid combinatorial optimization of biochemical pathways based on artificial transcription factors
We established a high-throughput cloning method, called COMPASS (COMbinatorial Pathway ASSembly), for the balanced expression of multiple genes in Saccharomyces cerevisiae. COMPASS employs orthogonal, plant-derived artificial transcription factors (ATFs) to control the expression of pathway genes, and homologous recombination-based cloning for the generation of thousands of individual DNA constructs in parallel. The method relies on positive selection of correctly assembled pathway variants from both in vivo and in vitro cloning procedures. To decrease the turnaround time in genomic engineering, we equipped COMPASS with multi-locus CRISPR/Cas9-mediated modification capacity. In its current realization, COMPASS allows combinatorial optimization of up to ten pathway genes, each transcriptionally controlled by nine different ATFs spanning a 10-fold difference in expression strength. The application of COMPASS was demonstrated by generating cell libraries producing beta-carotene and co-producing beta-ionone and biosensor-responsive naringenin. COMPASS will have many applications in other synthetic biology projects that require gene expression balancing.
CaPRedit: Genome editing using CRISPR-Cas9 and plant-derived transcriptional regulators for the redirection of flux through the FPP branch-point in yeast. Technologies developed over the past decade have made Saccharomyces cerevisiae a promising platform for the production of different natural products. We developed a CRISPR/Cas9- and plant-derived regulator-mediated genome editing approach (CaPRedit) to greatly accelerate strain modification and to facilitate very low to very high expression of key enzymes using inducible regulators. CaPRedit can be implemented to enhance the production of endogenous or heterologous metabolites in the yeast S. cerevisiae. The CaPRedit system aims to facilitate the modification of multiple targets within a complex metabolic pathway by providing new tools for increased expression of genes encoding rate-limiting enzymes, decreased expression of essential genes, and removal of competing pathways. The approach is based on CRISPR/Cas9-mediated one-step double-strand breaks to integrate modules containing IPTG-inducible plant-derived artificial transcription factor and promoter pair(s) into a desired locus or loci. Here, we used CaPRedit to redirect the endogenous metabolic flux of yeast toward the production of farnesyl diphosphate (FPP), a central precursor of nearly all yeast isoprenoid products, by overexpressing the enzymes leading to the production of FPP from glutamate. We found significantly higher beta-carotene accumulation in the CaPRedit-modified strain than in the wild-type (WT) strain. More specifically, the strain CaPRedit_FPP 1.0 was generated, in which three genes involved in FPP synthesis, tHMG1, ERG20, and GDH2, were inducibly overexpressed under the control of strong plant-derived ATFs. Beta-carotene accumulated in the CaPRedit_FPP 1.0 strain to a level 1.3-fold higher than in a previously reported optimized strain carrying the same overexpressed genes (as well as additional genetic modifications to redirect the endogenous metabolism of yeast toward FPP production). Furthermore, the genetic modifications implemented in the CaPRedit_FPP 1.0 strain resulted in only a very small growth defect (growth rate relative to the WT ~ -0.03).
Monoclonal antibodies (mAbs) are an innovative group of drugs with increasing clinical importance in oncology, combining high specificity with generally low toxicity. There are, however, numerous challenges associated with the development of mAbs as therapeutics. A mechanistic understanding of the factors that govern the pharmacokinetics (PK) of mAbs is critical for drug development and the optimisation of effective therapies; in particular, adequate dosing strategies can improve patient quality of life and lower drug costs. Physiologically-based PK (PBPK) models offer a physiological and mechanistic framework, which is advantageous in the context of animal-to-human extrapolation. Unlike for small molecule drugs, however, there is no consensus on how to model mAb disposition in a PBPK context. Current PBPK models for mAb PK vary hugely in their representation of physiology and parameterisation. Their complexity poses a challenge for their application, e.g., in translating knowledge from animal species to humans.
In this thesis, we developed and validated a consensus PBPK model for mAb disposition, taking into account recent insights into mAb distribution (antibody biodistribution coefficients and interstitial immunoglobulin G (IgG) pharmacokinetics), to predict tissue PK across several pre-clinical species and humans based on plasma data only. The model allows a priori prediction of target-independent (unspecific) mAb disposition processes, as well as of mAb disposition in concentration ranges for which the unspecific clearance (CL) dominates target-mediated CL processes. This is often the case for mAb therapies at steady-state dosing.
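As a much-simplified illustration of the underlying principle (tissue PK following from plasma kinetics under linear, unspecific clearance), the toy two-compartment sketch below shows the basic structure of such models; it is not the consensus PBPK model itself, and all rate constants and volumes are invented.

```python
# Toy two-compartment (plasma/tissue) sketch of linear mAb disposition;
# far simpler than a whole-body PBPK model, for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

CL, V_p, V_t, Q = 0.2, 3.0, 2.0, 0.5   # L/day, L, L, L/day (illustrative)

def rhs(t, y):
    c_p, c_t = y
    dcp = (-CL * c_p - Q * c_p + Q * c_t) / V_p   # plasma: clearance + exchange
    dct = (Q * c_p - Q * c_t) / V_t               # tissue: exchange only
    return [dcp, dct]

# 100 mg IV bolus into plasma, simulated over 50 days
sol = solve_ivp(rhs, (0, 50), [100 / V_p, 0.0], dense_output=True)
for ti, (cp, ct) in zip(np.linspace(0, 50, 6),
                        sol.sol(np.linspace(0, 50, 6)).T):
    print(f"day {ti:4.1f}: plasma {cp:7.2f}, tissue {ct:7.2f} mg/L")
```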
The consensus PBPK model was then used and refined to address two important problems:
1) Immunodeficient mice are crucial models to evaluate mAb efficacy in cancer therapy. Protection from elimination by binding to the neonatal Fc receptor is known to be a major pathway influencing the unspecific CL of both endogenous and therapeutic IgG. The concentration of endogenous IgG, however, is reduced in immunodeficient mouse models, and the effect of this on unspecific mAb CL is unknown, yet of great importance for the extrapolation to humans in the context of mAb cancer therapy.
2) The distribution of mAbs into solid tumours is of great interest. To comprehensively investigate mAb distribution within tumour tissue and its implications for therapeutic efficacy, we extended the consensus PBPK model by a detailed tumour distribution model incorporating a cell-level model for mAb-target interaction. We studied the impact of variations in tumour microenvironment on therapeutic efficacy and explored the plausibility of different mechanisms of action in mAb cancer therapy.
The mathematical findings and observed phenomena shed new light on therapeutic utility and dosing regimens in mAb cancer treatment.
In this work, different strategies for the construction of biohybrid photoelectrodes are investigated and evaluated with respect to their intrinsic catalytic activity for the oxidation of the cofactor NADH or their connection with the enzymes PQQ glucose dehydrogenase (PQQ-GDH), FAD-dependent glucose dehydrogenase (FAD-GDH) and fructose dehydrogenase (FDH). The light-controlled oxidation of NADH was analyzed with InGaN/GaN nanowire-modified electrodes. Upon illumination with visible light, the InGaN/GaN nanowires generate an anodic photocurrent, which increases in a concentration-dependent manner in the presence of NADH, thus allowing determination of the cofactor. Furthermore, different approaches for connecting enzymes to quantum dot (QD)-modified electrodes via small redox molecules or redox polymers were analyzed and discussed. First, interaction studies with diffusible redox mediators such as hexacyanoferrate(II) and ferrocenecarboxylic acid were performed with CdSe/ZnS QD-modified gold electrodes to build up photoelectrochemical signal chains between the QDs and the enzymes FDH and PQQ-GDH. In the presence of substrate and under illumination of the electrode, electrons are transferred from the enzyme via the redox mediators to the QDs. The resulting photocurrent depends on the substrate concentration and allows a quantification of the fructose and glucose content in solution. A first attempt with an immobilized redox mediator, i.e. ferrocenecarboxylic acid chemically coupled to PQQ-GDH and attached to QD-modified gold electrodes, revealed the potential to build up photoelectrochemical signal chains even without diffusible redox mediators in solution. However, this approach resulted in a significantly deteriorated photocurrent response compared to the situation with diffusing mediators. In order to improve the photoelectrochemical performance of such redox mediator-based, light-switchable signal chains, an osmium complex-containing redox polymer was evaluated as an electron relay for the electronic linkage between QDs and enzymes. The redox polymer allows the stable immobilization of the enzyme and the efficient wiring with the QD-modified electrode. In addition, a 3D inverse opal TiO2 (IO-TiO2) electrode was used for the integration of PbS QDs, redox polymer and FAD-GDH in order to increase the electrode surface. This resulted in a significantly improved photocurrent response, a rather low onset potential for the substrate oxidation and a broader glucose detection range compared to the approach with ferrocenecarboxylic acid and PQQ-GDH immobilized on CdSe/ZnS QD-modified gold electrodes. Furthermore, IO-TiO2 electrodes were used to integrate sulfonated polyanilines (PMSA1) and PQQ-GDH, and to investigate the direct interaction between the polymer and the enzyme for the light-switchable detection of glucose. While PMSA1 provides visible light excitation and ensures the efficient connection between the IO-TiO2 electrode and the biocatalytic entity, PQQ-GDH enables the oxidation of glucose. Here, IO-TiO2 electrodes with pores of approximately 650 nm provide a suitable interface and morphology, as required for a stable and functional assembly of the polymer and the enzyme. The successful integration of the polymer and the enzyme is confirmed by the formation of a glucose-dependent anodic photocurrent.
In conclusion, this work provides insights into the design of photoelectrodes and presents different strategies for the efficient coupling of redox enzymes to photoactive entities, which allows for light-directed sensing and provides the basis for the generation of power from sunlight and energy-rich compounds.
Uncertainty is an essential part of atmospheric processes and thus inherent to weather forecasts. Nevertheless, weather forecasts and warnings are still predominantly issued as deterministic (yes or no) forecasts, although research suggests that providing weather forecast users with additional information about the forecast uncertainty can enhance the preparation of mitigation measures. Communicating forecast uncertainty would allow information on possible future events to be provided at an earlier time. The desired benefit is to enable users to start preparatory protective action at an earlier stage, based on their own risk assessment and decision threshold. But not all users have the same threshold for taking action. In the course of the project WEXICOM ('Wetterwarnungen: Von der Extremereignis-Information zu Kommunikation und Handlung'), funded by the Deutscher Wetterdienst (DWD), three studies were conducted between 2012 and 2016 to reveal how weather forecasts and warnings are reflected in weather-related decision-making. The studies asked which factors influence the perception of forecasts and the decision to take protective action, and how forecast users make sense of probabilistic information and the additional lead time. In a first exploratory study conducted in 2012, members of emergency services in Germany were asked how weather warnings are communicated to professional end users in the emergency community and how the warnings are converted into mitigation measures. A large number of open questions were selected to identify new topics of interest. The questions covered topics such as users' confidence in forecasts, their understanding of probabilistic information, and their lead time and decision thresholds for starting preparatory mitigation measures. The results show that emergency service personnel generally have a good sense of the uncertainty inherent in weather forecasts. Although no single probability threshold could be identified at which organisations start preparatory mitigation measures, it became clear that emergency services tend to avoid forecasts based on low probabilities as a basis for their decisions. Based on these findings, a second study conducted with residents of Berlin in 2014 further investigated the question of decision thresholds. The survey questions related to the perception of and prior experience with severe weather, the trustworthiness of forecasters and confidence in weather forecasts, and socio-demographic and socio-economic characteristics. Within the questionnaire, a scenario was created to determine individual decision thresholds and to see whether subgroups of the sample lead to different thresholds. The results show that people's willingness to act tends to be higher, and decision thresholds tend to be lower, if the expected weather event is more severe or the property at risk is of higher value. Several influencing factors of risk perception have significant effects, such as education, housing status and ability to act, whereas socio-demographic determinants alone are often not sufficient to fully grasp risk perception and protection behaviour. Parallel to the quantitative studies, an interview study was conducted with 27 members of the German civil protection services between 2012 and 2016. The results show that the latest developments in (numerical) weather forecasting do not necessarily fit the current practice of German emergency services.
Their practices are mostly based on alarms and ground truth, reacting to events rather than anticipating them on the basis of prognoses or forecasts. As the potential consequences rather than the event characteristics determine protective action, the findings support the call for, and the need for, impact-based warnings. Forecasters will then rely on impact data and need to learn how users understand impact. It is therefore recommended to enhance weather communication not only by improving computer models and observation tools, but also by focusing on the aspects of communication and collaboration. Using information about uncertainty demands awareness and acceptance of the limits of knowledge, that is, of the forecaster's ability to anticipate future developments of the atmosphere and of the user's ability to make sense of this information.
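To make the notion of an individual decision threshold concrete, the classic cost-loss model offers one common formalization: a user should act once the forecast probability exceeds the ratio of protection cost to potential loss. The sketch below is illustrative only and is not a model used in the WEXICOM studies; all names and numbers in it are invented for the example.

```python
# Minimal sketch of the classic cost-loss decision model, one common way to
# formalize individual decision thresholds for probabilistic warnings.
# Illustration only; not a model used in the studies described above.

def decision_threshold(cost: float, loss: float) -> float:
    """Probability above which protective action pays off in expectation.

    cost: cost of taking preparatory protective action (always incurred).
    loss: damage suffered if the event occurs and no action was taken.
    """
    if not 0 < cost < loss:
        raise ValueError("expected 0 < cost < loss")
    return cost / loss

def should_act(forecast_probability: float, cost: float, loss: float) -> bool:
    """Act if the forecast probability reaches the user's threshold."""
    return forecast_probability >= decision_threshold(cost, loss)

# Hypothetical example: protecting equipment costs 200, losing it costs 5000,
# so action is rational from a 4% event probability onwards.
print(decision_threshold(200, 5000))   # 0.04
print(should_act(0.10, 200, 5000))     # True
```

The model also makes plain why no single threshold fits all users: emergency services, households and businesses face different cost-loss ratios and therefore rationally act at different forecast probabilities.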
Ostdeutsche Ehen vor Gericht
(2018)
The GDR had one of the highest divorce rates in the world. Getting divorced involved only a few hurdles and was largely regarded as a private matter. From 1990 onwards, however, East and West German citizens and jurists with different experiences encountered one another. Anja Schröter examines divorce practice in East Germany from the last decade of the GDR, across the historic caesura of 1989/90, up to the turn of the millennium. She asked jurists and East German citizens about their experiences. A fascinating study of East German everyday life in a time of upheaval.
The future magnetic recording industry needs high-density data storage technology. However, switching the magnetization of small bits requires high magnetic fields that cause excessive heat dissipation. Controlling magnetism without applying an external magnetic field is therefore an important research topic for potential applications in data storage devices with low power consumption. Among the different approaches being investigated, two stand out, namely i) all-optical helicity-dependent switching (AO-HDS) and ii) ferroelectric control of magnetism. This thesis aims to contribute to a better understanding of the physical processes behind these effects and to report new and exciting possibilities for the optical and/or electric control of magnetic properties. Hence, the thesis contains two distinct results chapters: the first devoted to AO-HDS in TbFe alloys and the second to the electric field control of magnetism in an archetypal Fe/BaTiO3 system.
In the first part, the scalability of AO-HDS to small laser spot sizes of a few microns in the ferrimagnetic TbFe alloy is investigated by spatially resolving the magnetic contrast with photoemission electron microscopy (PEEM) and X-ray magnetic circular dichroism (XMCD). The results show that AO-HDS is a local effect within the laser spot that occurs in a ring-shaped region in the vicinity of thermal demagnetization. Within this ring region, the helicity-dependent switching proceeds via thermally activated domain wall motion. Furthermore, the thesis reports a novel effect of thickness-dependent inversion of the switching orientation. It addresses important open questions such as the role of laser heating and the microscopic mechanism driving AO-HDS.
The second part of the thesis focuses on the electric field control of magnetism in an artificial multiferroic heterostructure. The sample consists of an Fe wedge with a thickness varying between 0.5 nm and 3 nm, deposited on top of a ferroelectric and ferroelastic BaTiO3 [001]-oriented single-crystal substrate. Here, the magnetic contrast is imaged via PEEM and XMCD as a function of the out-of-plane voltage. The results provide evidence of the electric field control of superparamagnetism mediated by a ferroelastic modification of the magnetic anisotropy. The changes in the magnetoelastic anisotropy drive the transition from the superparamagnetic to the superferromagnetic state at localized sample positions.
BACKGROUND: Physical activity involving high spinal load is thought to play a crucial role in the genesis of acute and chronic low back pain and disorders. High spinal loads are presumed to occur in drop landings, for which severe bending loads have previously been demonstrated for the structures of the lower extremities. Clinical studies have revealed that repetitive landing impacts can evoke either benign structural adaptations of or damage to the lumbar vertebrae. However, the causes of these observations have not yet been conclusively established, since the actual spinal load has to date not been documented experimentally. Moreover, it remains unclear how the physiological activation of the trunk musculature compensates for the spinal loads induced by landing impacts, and to what extent trunk activity and spinal load are affected by landing demands and performer characteristics.
AIMS of this study are 1. the localisation and quantification of spinal bending loads under various landing demands and 2. the identification of compensatory trunk muscle activity patterns that potentially reduce spinal load magnitudes. Three consecutive hypotheses (H1 to H3) were postulated: H1 posits that spinal bending loads in separate motion planes can feasibly and reliably be evaluated from peak spine segmental angular accelerations. H2 furthermore assumes that vertical drop landings elicit the highest spinal bending load in sagittal flexion of the lumbar spine. Building on these verifications, a second study was to test the subsequent hypothesis (H3) that varied landing conditions, such as the performer's landing familiarity and gender as well as the implementation of an immediate follow-up task, affect the emerging lumbar spinal bending load. It is moreover surmised that lumbar spinal bending loads under distinct landing conditions are predominantly modulated by correspondingly deployed, condition-dependent pre-activations of the trunk muscles.
METHODS: To test the above hypotheses, two successive studies were carried out. In STUDY 1, 17 subjects were repeatedly assessed performing various drop landings (height: 15, 30, 45, 60 cm; unilateral, bilateral, blindfolded, catching a ball) in a test-retest design. Individual peak angular accelerations [αMAX] were derived from three-dimensional motion data of four trunk segments (upper thoracic, lower thoracic, lumbar, pelvis). αMAX was assessed in flexion, lateral flexion, and rotation of each spinal joint formed by two adjacent segments. The reliability of αMAX within and between test days was evaluated by CV%, ICC 2.1, TRV%, and Bland & Altman analysis (BIAS±LoA). Subsequently, the peak flexion acceleration of the lumbo-pelvic joint [αFLEX[LS-PV]] was statistically compared to the αMAX values of all other assessed spinal joints and motion planes (mean ±SD, independent-samples t-test). STUDY 2 assessed only peak lumbo-pelvic flexion accelerations [αFLEX[LS-PV]] and electromyographic trunk pre-activity prior to αFLEX[LS-PV] in 43 subjects performing varied landing tasks (height 45 cm; with definite or indefinite predictability of an immediate follow-up jump). Subjects were contrasted with respect to their previous landing familiarity (>1000 vs. <100 landings performed in the past 10 years) and gender. Differences in αFLEX[LS-PV] and muscular pre-activity between the contrasted subject groups as well as between landing tasks were statistically tested by three-way mixed ANOVA with post-hoc tests.
Associations between αFLEX[LS-PV] and muscular pre-activity were assessed factor-specifically by Spearman's rank-order correlation coefficient (rS). In addition, muscular pre-activity was subdivided by landing phase [DROP, IMPACT] and separately assessed for phase-specific associations with αFLEX[LS-PV]. The activity of each muscle was moreover compared pairwise between DROP and IMPACT (mean ±SD, dependent-samples t-test).
RESULTS: αMAX showed overall high variability within test days (CV = 36%). The lowest intra-individual variability and the highest reproducibility of αMAX between test days were found in flexion of the spine. αFLEX[LS-PV] showed largely consistent, significantly higher magnitudes than the αMAX values of the more cranial spinal joints and the other motion planes. αFLEX[LS-PV] moreover increased gradually with increasing landing height. Landing-unfamiliar subjects presented significantly higher αFLEX[LS-PV] than landing-familiar ones (p = .016). M. Obliquus Int. together with M. Transversus Abd. (66 ±32 %MVC) and M. Erector Spinae (47 ±15 %MVC) showed markedly the highest activity, in contrast to the lowest activity of M. Rectus Abd. (10 ±4 %MVC). Compared to landing-familiar subjects, landing-unfamiliar ones showed significantly higher activity of M. Obliquus Ext. (17 ±8 %MVC vs. 12 ±7 %MVC, p = .044). M. Obliquus Ext. and its co-contraction ratio with M. Erector Spinae moreover exhibited low but significantly positive correlations with αFLEX[LS-PV] (rS = .39, rS = .31). Each trunk muscle distributed the larger share of its activity to DROP, whereas the peak activations of most muscles emerged in the proportionally shorter IMPACT phase. Generally increased muscular pre-activation, particularly at IMPACT, was found in landings with an announced follow-up jump and in female subjects, although αFLEX[LS-PV] was only marginally affected by this.
DISCUSSION: The highest spine segmental angular accelerations in drop landings emerge in sagittal flexion of the lumbar spine. The compensatory stabilisation of the spine appears to be provided predominantly by a dorso-ventral co-contraction of M. Obliquus Int., M. Transversus Abd. and M. Erector Spinae. Elevated pre-activity of M. Obliquus Ext. presumably characterises poor landing experience and might give rise to increased bending loads on the lumbar spine. The pervasive large variability of the spinal angular accelerations measured across all landing types suggests that diverse mechanisms are used to compensate for spinal impacts during landings. A standardised assessment and valid evaluation of landing-evoked lumbar bending loads is therefore largely limited.
CONCLUSION: Drop landings elicit the most strenuous lumbo-pelvic flexion accelerations, which can be regarded as representative of high-energy bending loads on the spine. These entail the highest risk of overloading the spinal tissue when the landing demands exceed the individual's landing skill. Previous landing experience and training appear to effectively improve muscular spine stabilisation patterns, diminishing spinal bending loads.
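As an illustration of the kind of computation described in the methods, the following sketch derives a peak angular acceleration from a sampled joint-angle trace by numerical differentiation and relates it to muscular pre-activity via Spearman's rank correlation. It is a minimal sketch under assumed conditions (simulated data, a hypothetical 1 kHz sampling rate and 0.3 s analysis window), not the thesis's actual processing pipeline.

```python
# Hedged sketch: peak angular acceleration (alpha_max) from a joint-angle
# time series, and its Spearman correlation with pre-activity.
# Sampling rate, window length and all data below are invented for illustration.

import numpy as np
from scipy.stats import spearmanr

def peak_angular_acceleration(angle_deg: np.ndarray, fs: float = 1000.0) -> float:
    """Peak magnitude of the second time derivative of a joint angle [deg/s^2]."""
    dt = 1.0 / fs
    acc = np.gradient(np.gradient(angle_deg, dt), dt)  # numerical 2nd derivative
    return float(np.max(np.abs(acc)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.3, 300)  # 0.3 s landing window sampled at 1 kHz

# Simulated lumbo-pelvic flexion angles for 43 landings (sine burst + noise).
alpha_max = np.array([
    peak_angular_acceleration(
        20.0 * np.sin(2 * np.pi * 5 * t) + rng.normal(0.0, 0.5, t.size)
    )
    for _ in range(43)
])
pre_activity = rng.uniform(5, 70, size=43)  # stand-in for %MVC pre-activation

rho, p = spearmanr(alpha_max, pre_activity)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```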
In this thesis, we discuss the characterization of orthogroups by so-called disjunctions of identities. The orthogroups form a subclass of the class of completely regular semigroups, a generalization of the concept of a group. Thus, for every element of an orthogroup there is some kind of inverse element such that the two elements commute. By a fundamental result of A. H. Clifford, every completely regular semigroup is a semilattice of completely simple semigroups. This allows a description of the gross structure of such semigroups. In particular, every orthogroup is a semilattice of rectangular groups, which are isomorphic to direct products of rectangular bands and groups. Semilattices of rectangular groups coming from various classes are characterized using the concept of an alternative variety, a generalization of Birkhoff's classical notion of a variety.
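For orientation, the commuting-inverse condition mentioned above has a standard equational form; the following display is textbook material on completely regular semigroups and orthogroups, not a quotation from the thesis:

```latex
% Textbook formulation: a semigroup S is completely regular iff every
% element a has an inverse a^{-1} that commutes with a, i.e.
\[
  \forall a \in S \;\exists\, a^{-1} \in S:\qquad
  a\,a^{-1}a = a, \qquad
  a^{-1}a\,a^{-1} = a^{-1}, \qquad
  a\,a^{-1} = a^{-1}a .
\]
% An orthogroup is then a completely regular semigroup whose idempotent
% elements form a subsemigroup (a band).
```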
After starting with some fundamental definitions and results concerning semigroups, we introduce the concept of disjunctions of identities and summarize some of their necessary properties. In particular, we present a disjunction of identities that is sufficient for a semigroup to be completely regular. Furthermore, we derive from this identity some statements concerning Rees matrix semigroups, a possible representation of completely simple semigroups. A main result of this thesis is the general description of disjunctions of identities such that a completely regular semigroup satisfying the described identity is a semilattice of left groups (right groups / groups). In this case the completely regular semigroup is an orthogroup. Furthermore, we define various classes of rectangular groups whose exponents are taken from a set of pairwise coprime positive integers. An important result is the characterization of the class of all semilattices of particular rectangular groups (taken from the classes defined before) using a set-theoretically minimal set of disjunctions of identities. Additionally, we investigate semilattices of groups (so-called Clifford semigroups). For this purpose we consider abelian groups of particular exponents and prove some well-known results from the theory of Clifford semigroups in an alternative way by applying the concept of disjunctions of identities. As a practical application of the results concerning semilattices of left zero semigroups and right zero semigroups, we identify a particular transformation semigroup. To obtain more detailed information about the product of two arbitrary elements of a semilattice of semigroups, we introduce the concept of strong semilattices of semigroups. It is well known that a semilattice of groups is a strong semilattice of groups, so we can characterize a strong semilattice of groups of particular pairwise coprime exponents by disjunctions of identities. Additionally, we describe the class of all strong semilattices of left zero semigroups and right zero semigroups with the help of such identities, and we relate this statement to the theory of normal bands. A possible extension of the semilattices of rectangular groups described above can be achieved by an auxiliary total order (in terms of chains of semigroups). To this end we present a corresponding characterization by disjunctions of identities which is obviously minimal. A list of open questions that arose during the research for this thesis but remain unresolved is attached.
On a small scale
(2018)
This study argues that micro relations matter in peacekeeping. Asking what makes the implementation of peacekeeping interventions complex and how complexity is resolved, I find that formal, contractual mechanisms only rarely effectively reduce complexity – and that micro relations fill this gap. Micro relations are personal relationships resulting from frequent face-to-face interaction in professional and – equally importantly – social contexts.
This study offers an explanation as to why micro relations are important for coping with complexity, in the form of a causal mechanism. For this purpose, I bring together theoretical and empirical knowledge: I draw upon the current debate on ‘institutional complexity’ (Greenwood et al. 2011) in organizational institutionalism as well as original empirical evidence from a within-case study of the peacekeeping intervention in Haiti, gained in ten weeks of field research. In this study, scholarship on institutional complexity serves to identify theoretical causal channels which guide empirical analysis. An additional, secondary aim is pursued with this mechanism-centered approach: testing the utility of Beach and Pedersen’s (2013) theory-testing process tracing.
Regarding the first research question – what makes the implementation of peacekeeping interventions complex – the central finding is that complexity manifests itself in the dual role of organizations as cooperation partners and competitors for (scarce) resources, turf and influence. UN organizations, donor agencies and international NGOs implementing peacekeeping activities in post-conflict environments have chronic difficulty mastering both roles because they entail contradictory demands: effective cooperation requires information exchange, resource and responsibility sharing, and external scrutiny, whereas prevailing over competitors demands that organizations conceal information, guard resources, increase their relative turf and influence, and shield themselves from scrutiny. Competition fuels organizational distrust and friction – and impedes cooperation.
How is this complexity resolved? The answer to this second research question is that deep-seated organizational competition is routinely mediated – and cooperation motivated – in micro relations and micro interaction. Regular, frequent face-to-face interaction between individual organizational members generates social resources that help to transcend organizational distrust and conflict, most importantly familiarity with each other, personal trust and belief in reciprocity. Furthermore, informal conflict mediation and control mechanisms – namely, open discussion, mutual monitoring in direct interaction and social exclusion – enhance solidarity and mutual support.
The 20th of July 1944 is one of the key events of twentieth-century German history. Claus Schenk Graf von Stauffenberg's failed assassination attempt on Adolf Hitler and the subsequent coup attempt have become a symbol of resistance against National Socialism. Taken completely by surprise by the events, the Nazi regime immediately decreed that, with regard to the group of conspirators, only a 'very small clique' ('ganz kleine Clique') was to be spoken of in public, a phrase that at times still shapes the image of the resistance circle today.
The present analysis uses numerous network visualizations to show for the first time what the Nazi investigators actually knew about the large and complex civilian and military network of 20 July 1944, which encompassed social groups as diverse as officers, administrative officials, diplomats, jurists, industrialists, theologians, estate owners, trade unionists and Social Democrats. Contemporary letters and diaries, finally, illustrate the conspirators' skilful manoeuvring before and after the coup attempt and also reveal the flaws of the Nazi sources.