Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, generating high thermal loads inside the data centers, and raising carbon dioxide emissions. The increase in energy consumption often results from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate the proposed techniques, we use simulations and real workload traces of web applications and HPC applications, and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are:
- A proactive resource provisioning technique based on robust optimization that increases the hosts' availability for hosting new VMs while minimizing idle energy consumption. This technique also mitigates undesirable changes in the power state of the hosts, enhancing the hosts' reliability by avoiding failures during power state changes. It exploits a range-based prediction algorithm to implement robust optimization under demand uncertainty.
- An adaptive range-based prediction algorithm for workloads with high short-term fluctuations. The range prediction is implemented in two ways, based on the standard deviation and on the median absolute deviation, and the range is adjusted through an adaptive confidence window to cope with workload fluctuations (see the sketch following this list).
- A robust VM consolidation technique for efficient energy and performance management that balances the trade-off between energy and performance. It reduces the number of VM migrations compared to recently proposed techniques, which also reduces the energy consumed by the network infrastructure, and it lowers SLA violations and the number of power state changes.
- A generic model of a data center network that simulates communication delay and its impact on VM performance, as well as network energy consumption. In addition, a generic model of a server's memory bus, including latency and energy consumption models for different memory frequencies, which allows simulating memory delay, its influence on VM performance, and memory energy consumption.
- A communication-aware and energy-efficient consolidation technique for parallel applications that dynamically discovers communication patterns and reschedules VMs via migration based on the discovered patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of the VMs' network utilization rather than on information from the hosts' virtual switches or on initiation by the VMs. The results show that our proposed approach reduces the network's average utilization, achieves energy savings by reducing the number of active switches, and provides better VM performance compared to CPU-based placement.
- A memory-aware VM consolidation technique for independent VMs, which exploits the diversity of the VMs' memory access demands to balance the memory-bus utilization across hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their memory-bus utilization using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory is combined with the proposed MLB technique to achieve better energy savings.
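The adaptive range-based prediction is described above only in prose; the following is a minimal sketch of how such a predictor could look, assuming a sliding history window, a choice between the standard deviation and the median absolute deviation as the range estimator, and a confidence factor that widens after a missed prediction and narrows after a hit. The class name `RangePredictor` and all parameter values are hypothetical, not taken from the dissertation.

```python
import numpy as np

class RangePredictor:
    """Hypothetical sketch of adaptive range-based workload prediction.

    Predicts an interval [low, high] for the next demand value from a
    sliding window, using either the standard deviation or the median
    absolute deviation (MAD) as the range estimator. The confidence
    factor k is adapted: widened after a miss, narrowed after a hit.
    """

    def __init__(self, window=12, estimator="mad", k=1.0, k_min=0.5, k_max=3.0):
        self.window, self.estimator = window, estimator
        self.k, self.k_min, self.k_max = k, k_min, k_max
        self.history = []

    def predict(self):
        # requires at least one observed sample
        w = np.asarray(self.history[-self.window:])
        center = np.median(w)
        if self.estimator == "mad":
            spread = np.median(np.abs(w - np.median(w)))
        else:
            spread = w.std()
        return center - self.k * spread, center + self.k * spread

    def observe(self, value):
        if len(self.history) >= self.window:
            low, high = self.predict()
            # adapt the confidence window: widen on a miss, narrow on a hit
            self.k = (min(self.k * 1.5, self.k_max)
                      if not (low <= value <= high)
                      else max(self.k * 0.95, self.k_min))
        self.history.append(value)
```

A provisioning policy could then reserve capacity at the predicted upper bound, so that demand spikes within the range do not force hosts out of their power state.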
Permafrost, defined as ground that is frozen for at least two consecutive years, is a distinct feature of the terrestrial unglaciated Arctic. It covers approximately one quarter of the land area of the Northern Hemisphere (23,000,000 km²). Arctic landscapes, especially those underlain by permafrost, are threatened by climate warming and may degrade in different ways, including active layer deepening, thermal erosion, and the development of rapid thaw features. In Siberian and Alaskan late Pleistocene ice-rich Yedoma permafrost, rapid and deep thaw processes (called thermokarst) can mobilize deep organic carbon (below 3 m depth) through surface subsidence due to loss of ground ice. Increased permafrost thaw could cause a feedback loop of global significance if its stored frozen organic carbon is reintroduced into the active carbon cycle as greenhouse gases, which would accelerate warming and induce further permafrost thaw and carbon release. To assess this concern, the major objective of the thesis was to enhance the understanding of the origin of Yedoma as well as to assess the associated organic carbon pool size and carbon quality (concerning degradability). The key research questions were:
- How did Yedoma deposits accumulate?
- How much organic carbon is stored in the Yedoma region?
- How susceptible is the Yedoma region's carbon to future decomposition?
To address these three research questions, an interdisciplinary approach, including detailed field studies and sampling in Siberia and Alaska as well as methods of sedimentology, organic biogeochemistry, remote sensing, statistical analyses, and computational modeling, was applied. To provide a panarctic context, this thesis additionally includes results both from a newly compiled northern circumpolar carbon database and from a model assessment of carbon fluxes in a warming Arctic.
The Yedoma samples show a homogeneous grain-size composition. All samples were poorly sorted with a multi-modal grain-size distribution, indicating various (re-)transport processes. This contradicts the popular pure loess deposition hypothesis for the origin of Yedoma permafrost. The absence of large-scale grinding processes by glaciers and ice sheets in the northeast Siberian lowlands, processes which are necessary to create loess as a material source, suggests a polygenetic origin of the Yedoma deposits.
Based on the largest available data set of the key parameters, including organic carbon content, bulk density, ground ice content, and deposit volume (thickness and coverage) from Siberian and Alaskan study sites, this thesis further shows that deep frozen organic carbon in the Yedoma region consists of two distinct major reservoirs, Yedoma deposits and thermokarst deposits (formed in thaw-lake basins). Yedoma deposits contain ~80 Gt and thermokarst deposits ~130 Gt organic carbon, or a total of ~210 Gt. Depending on the approach used for calculating uncertainty, the range for the total Yedoma region carbon store is ±75 % and ±20 % for conservative single and multiple bootstrapping calculations, respectively. Despite the fact that these findings reduce the Yedoma region carbon pool by nearly a factor of two compared to previous estimates, this frozen organic carbon is still capable of inducing a permafrost carbon feedback to climate warming. The complete northern circumpolar permafrost region contains between 1100 and 1500 Gt organic carbon, of which ~60 % is perennially frozen and decoupled from the short-term carbon cycle.
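The single- and multiple-bootstrapping uncertainty ranges quoted above are computed in the thesis from the field data; purely as an illustration of the percentile-bootstrap idea behind such ranges, a sketch like the following could attach a confidence interval to an upscaled stock estimate. The function and the inputs are hypothetical, not the thesis's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_total(carbon_density, area, n_boot=10_000):
    """Illustrative percentile bootstrap of a regional carbon stock.

    carbon_density: per-site organic carbon densities (stock per unit area)
    area: total region area; the stock estimate is mean density times area.
    Returns the stock estimate and a 95 % percentile interval.
    """
    density = np.asarray(carbon_density)
    totals = np.empty(n_boot)
    for i in range(n_boot):
        # resample sites with replacement and upscale each replicate
        sample = rng.choice(density, size=density.size, replace=True)
        totals[i] = sample.mean() * area
    estimate = density.mean() * area
    lo, hi = np.percentile(totals, [2.5, 97.5])
    return estimate, (lo, hi)
```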
When thawed and reintroduced into the active carbon cycle, the quality of the organic matter becomes relevant. Investigations of Yedoma and thermokarst organic matter quality showed that neither exhibits a depth-dependent quality trend. This is evidence that, after freezing, the ancient organic matter is preserved in a state of constant quality. The applied alkane- and fatty-acid-based biomarker proxies, including the carbon-preference and the higher-land-plant-fatty-acid indices, show a broad range of organic matter quality and thus no significantly different qualities of the organic matter stored in thermokarst deposits compared to Yedoma deposits. This lack of quality differences shows that the biodegradability of the organic matter depends on the different decomposition trajectories and the previous decomposition/incorporation history. Finally, the fate of the organic matter was assessed by implementing deep carbon pools and thermokarst processes in a permafrost carbon model. Under various warming scenarios for the northern circumpolar permafrost region, model results show a carbon release from permafrost regions of up to ~140 Gt and ~310 Gt by the years 2100 and 2300, respectively. The additional warming caused by the carbon release from newly thawed permafrost contributes 0.03 to 0.14°C by the year 2100. The model simulations predict that a further increase by the 23rd century will add 0.4°C to global mean surface air temperatures.
In conclusion, Yedoma deposit formation during the late Pleistocene was dominated by water-related (alluvial/fluvial/lacustrine) as well as aeolian processes under periglacial conditions. The circumarctic permafrost region, including the Yedoma region, contains a substantial amount of currently frozen organic carbon. The carbon of the Yedoma region is well preserved and therefore available for decomposition after thaw. The missing quality-depth trend shows that permafrost preserves the quality of ancient organic matter. When the organic matter is mobilized by deep degradation processes, the northern permafrost region may add up to 0.4°C to global warming by the year 2300.
Organic semiconductors possess novel, remarkable material properties that make them interesting both for fundamental research and for current technological development (e.g. organic light-emitting diodes, organic solar cells). Owing to the strong conformational freedom of conjugated polymer chains, the multitude of possible arrangements and the weak intermolecular interactions usually lead to low structural order in the solid state. At the same time, the morphology directly influences the electronic structure of organic semiconductors, which typically manifests itself in a markedly reduced charge carrier mobility compared to their inorganic counterparts. The mobility of charges in the semiconductor thus represents one of the limiting factors for the performance and efficiency of functional organic devices. In 2009, a new donor/acceptor copolymer based on naphthalene diimide and bithiophene, P(NDI2OD‑T2), was introduced, which is distinguished by an exceptionally high charge carrier mobility. In this work, the charge carrier mobility in P(NDI2OD‑T2) is determined, and the transport is characterized by a low degree of energetic disorder. Although this material was initially described as amorphous, a detailed analysis of the optical properties of P(NDI2OD‑T2) shows that ordered precursors of supramolecular structures (aggregates) already exist in solution. Quantum-chemical calculations corroborate the observed spectral changes. By means of NMR spectroscopy, the formation of the aggregates can be confirmed independently of optical spectroscopy. Analytical ultracentrifugation of P(NDI2OD‑T2) solutions suggests that aggregation takes place within individual chains, accompanied by a reduction of the hydrodynamic radius. The formation of supramolecular structures also plays a significant role in film formation and at the same time prevents the preparation of amorphous P(NDI2OD‑T2) films. Through chemical modification of the P(NDI2OD‑T2) chain and different processing methods, the degree of crystallinity and the orientation of the crystalline domains were varied and quantified by X-ray diffraction. High-resolution electron microscopy measurements directly image the lattice planes and their embedding in the semi-crystalline structures. Combining the different methods yields a comprehensive picture of the short- and long-range order in P(NDI2OD‑T2). By measuring the electron mobility of these films, the anisotropy of charge transport along the crystallographic directions of P(NDI2OD‑T2) is characterized, and the importance of intramolecular interactions for efficient charge transport is elucidated. At the same time, it becomes clear how the use of larger, planar functional groups leads to higher charge carrier mobilities, which, compared to classical semi-crystalline polymers, are less sensitive to structural disorder in the film.
Despite remarkable progress made in the past century, which has revolutionized our understanding of the universe, numerous open questions remain in theoretical physics. Particularly important is the fact that the theories describing the fundamental interactions of nature are incompatible. Einstein's theory of general relativity describes gravity as a dynamical spacetime, which is curved by matter and whose curvature determines the motion of matter. On the other hand we have quantum field theory, in the form of the standard model of particle physics, where particles interact via the remaining interactions - the electromagnetic, weak, and strong interactions - on a flat, static spacetime without gravity. A theory of quantum gravity is hoped to cure this incompatibility by heuristically replacing classical spacetime with 'quantum spacetime'. Several approaches attempt to define such a theory, with differing underlying premises and ideas, and it is not clear which is to be preferred. Yet a minimal requirement is compatibility with the classical theory they attempt to generalize. Interestingly, many of these models rely on discrete structures in their definition or postulate that the discreteness of spacetime is fundamental. Besides the direct advantages discretisations provide, e.g. permitting numerical simulations, they come with serious caveats requiring thorough investigation: in general, discretisations break the fundamental diffeomorphism symmetry of gravity and are generically not unique. Both issues complicate establishing the connection to the classical continuum theory. The main focus of this thesis lies in the investigation of this relation for spin foam models. This is done on different levels of the discretisation / triangulation, ranging from a few simplices up to the continuum limit. In the regime of very few simplices we confirm and deepen the connection of spin foam models to discrete gravity. Moreover, we discuss dynamical criteria, e.g. diffeomorphism invariance in the discrete setting, to fix the ambiguities of the models. In order to satisfy these criteria, the discrete models have to be improved in a renormalisation procedure, which also allows us to study their continuum dynamics. Applied to simplified spin foam models, we uncover a rich, non-trivial fixed-point structure, which we summarize in a phase diagram. Inspired by these methods, we propose a method to consistently construct the continuum theory, which comes with a unique vacuum state.
This work introduces concepts and corresponding tool support to enable a complementary approach to dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid such work, the literature recommends preventing unexpected recovery demands by following a structured and disciplined approach, which consists of applying various best practices, including working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying the recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools was accompanied by regular performance and usability tests. In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated-measurement setup, the study examined the effect of providing CoExist on programming performance. The result of analyzing 88 hours of programming suggests that built-in recovery support as provided with CoExist has a positive effect on programming performance in explorative programming tasks.
Galaxies are observational probes for studying the Large Scale Structure. Their gravitational motions trace the total matter density and therefore the Large Scale Structure. In addition, studies of structure formation and galaxy evolution rely on numerical cosmological simulations. Still, only one universe, observable from a given position in time and space, is available for comparison with simulations. The related cosmic variance affects our ability to interpret the results. Simulations constrained by observational data are a remedy to this problem. Achieving such simulations is the goal of the Cosmic flows and CLUES projects. Cosmic flows builds catalogs of accurate distance measurements to map deviations from the expansion. These measurements are mainly obtained with the correlation between galaxy luminosity and rotation rate. We present the calibration of that relation in the mid-infrared with observational data from the Spitzer Space Telescope. The resulting accurate distance estimates will be included in the third catalog of the project. In the meantime, two catalogs reaching out to 30 and 150 Mpc/h have been released. We report improvements and applications of the CLUES method to these two catalogs. The technique is based on the constrained realization algorithm. The cosmic displacement field is computed with the Zel'dovich approximation; the latter is then reversed to relocate the reconstructed three-dimensional constraints to their precursors' positions in the initial field. The size of the second catalog (8000 galaxies within 150 Mpc/h) highlighted the importance of minimizing observational biases. By carrying out tests on mock catalogs built from cosmological simulations, a method to minimize observational biases is derived. Finally, for the first time, cosmological simulations are constrained solely by peculiar velocities. The process is successful, as the resulting simulations resemble the Local Universe. The major attractors and voids are simulated at positions within a few megaparsecs of the observed positions, thus reaching the limit imposed by linear theory.
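For reference, the Zel'dovich approximation mentioned above maps initial (Lagrangian) positions to evolved positions via the linear growth factor; reversing it, as done to relocate the reconstructed constraints, subtracts the displacement again. In standard notation (not necessarily the thesis's):

```latex
\mathbf{x}(\mathbf{q},t) = \mathbf{q} + D(t)\,\boldsymbol{\Psi}(\mathbf{q})
\quad\Longrightarrow\quad
\mathbf{q} \approx \mathbf{x} - D(t)\,\boldsymbol{\Psi}(\mathbf{x})
```

where q is the initial position, x the evolved position, D(t) the linear growth factor, and Ψ the displacement field; the reversal moves reconstructed constraints back to their precursors' positions in the initial field.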
This work deals with so-called relative-like clauses (relativähnliche Sätze) in Early New High German and thus contributes to research on subordination in older German. Relative-like clauses are formally characterized by a clause-initial anaphoric d-element and final placement of the finite verb. Semantically, they refer to the preceding clause as a whole, continuing or commenting on it in a particular way. Previous research has analyzed these clauses typologically as main clauses with verb-final order (cf. Maurer 1926, Behaghel 1932, and Lötscher 2000). After a detailed discussion of the formal markers of dependency in older German, and on the basis of an extensive corpus-based study, this work shows that relative-like clauses in Early New High German can also be analyzed as dependent clauses, analogous to continuative relative clauses in present-day German. Continuative relative clauses in present-day German likewise contain a clause-initial anaphoric element that refers to what is said in the preceding clause, and they likewise show verb-final order (on the grammar of continuative relative clauses cf. in particular Brandt 1990 and Holler 2005). Beyond the study of relative-like clauses, this work also examines in detail the formal markers of dependency in older German, such as verb-final order, complementizers, and the afinite construction.
The purpose of this thesis is to develop an automated inversion scheme to derive point and finite source parameters for weak earthquakes, here meaning earthquakes with magnitudes at or below the lower magnitude threshold of standard source inversion routines. The adopted inversion approaches rely entirely on existing inversion software, the methodological work mostly targeting the development and tuning of optimized inversion flows. The resulting inversion scheme is tested on very different datasets, which allows a discussion of the source inversion problem at different scales. In the first application, dealing with mining-induced seismicity, the determination of source parameters is addressed at a local scale, with source-sensor distances of less than 3 km. In this context, weak seismicity corresponds to events below magnitude MW 2.0, which are rarely targeted by automated source inversion routines. The second application considers a regional dataset, namely the aftershock sequence of the 2010 Maule earthquake (Chile), using broadband stations at regional distances below 300 km. In this case, the magnitudes of the target aftershocks range down to MW 4.0. This dataset is considered a weak-seismicity case here, since such moderate seismicity is generally investigated only by moment tensor inversion routines, with no attempt to resolve source duration or finite source parameters. In this work, automated multi-step inversion schemes are applied to both datasets with the aim of resolving point source parameters, using both double couple (DC) and full moment tensor (MT) models, as well as source duration and finite source parameters. A major result of analyzing weaker events is the increased size of the resulting moment tensor catalogues, whose interpretation may become non-trivial. For this reason, a novel focal mechanism clustering approach is used to automatically classify focal mechanisms, allowing the investigation of the most relevant and repetitive rupture features. The inversion of the mining-induced seismicity dataset reveals the repeated occurrence of similar rupture processes, in which the source geometry is controlled by the shape of the mined panel. Moreover, the moment tensor solutions indicate a significant contribution of tensile processes. The second application also highlights characteristic geometrical features of the fault planes, which are generally consistent with the orientation of the slab. The additional inversion for source duration allowed verification, for moment-normalized earthquakes in subduction zones, of the empirical correlation between decreasing rupture duration and increasing source depth, which had so far only been observed for larger events.
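The novel focal mechanism clustering approach itself is not reproduced in this summary; as a generic illustration of how focal mechanisms can be grouped automatically, the sketch below applies hierarchical clustering to a simplified angular distance between P- and T-axes. The distance definition, the 30° cut, and the toy catalogue are assumptions for illustration, not the method of the thesis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def axis_from_trend_plunge(trend_deg, plunge_deg):
    """Unit vector of an axis given trend/plunge in degrees."""
    t, p = np.radians(trend_deg), np.radians(plunge_deg)
    return np.array([np.cos(p) * np.cos(t), np.cos(p) * np.sin(t), np.sin(p)])

def mechanism_distance(a, b):
    """Simplified angular distance between two mechanisms, each given as
    (P-axis, T-axis) unit vectors. Axes are undirected, hence abs()."""
    dp = np.degrees(np.arccos(np.clip(abs(a[0] @ b[0]), 0.0, 1.0)))
    dt = np.degrees(np.arccos(np.clip(abs(a[1] @ b[1]), 0.0, 1.0)))
    return 0.5 * (dp + dt)

# hypothetical catalogue: (P trend, P plunge, T trend, T plunge) in degrees
catalogue = [(10, 5, 100, 80), (12, 8, 105, 75), (200, 60, 95, 10)]
mechs = [(axis_from_trend_plunge(pt, pp), axis_from_trend_plunge(tt, tp))
         for pt, pp, tt, tp in catalogue]

n = len(mechs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = mechanism_distance(mechs[i], mechs[j])

# group mechanisms whose average P/T-axis separation is below 30 degrees
labels = fcluster(linkage(squareform(dist), method="average"),
                  t=30.0, criterion="distance")
print(labels)  # cluster label per event
```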
The increasing prevalence of diabetes, cardiovascular disease, and certain cancers, whose development can be traced to overweight and lack of exercise, is a pressing problem of our society. The associated complications increase in particular with advancing age. This makes an understanding of the pathological mechanisms resulting from obesity, physical inactivity, and the aging process, and of the factors influencing them, all the more important.
The aim of this work was to investigate the development of metabolic diseases in humans. Analysis of longitudinal anthropometric and metabolic data from the 584 participants of the prospective 'Metabolic Syndrome Berlin Potsdam Follow-up Study' showed, for the whole cohort, an increase in overweight as well as a deterioration of blood pressure and glucose metabolism. We investigated whether the hormone FGF21 influences the onset of type 2 diabetes mellitus (T2DM) or of the metabolic syndrome (MetS). We were able to show that individuals who later developed MetS already had elevated FGF21 levels, a higher BMI, WHR, HbA1c, and diastolic blood pressure at baseline. In addition to FGF21, vaspin was investigated in this context. Individuals who later developed T2DM showed, alongside elevated clinical parameters, a tendency toward elevated levels of this hormone. With FGF21 and vaspin, two new factors for predicting the metabolic syndrome and type 2 diabetes mellitus were thus identified.
The long-term effect of weight reduction was investigated in a subcohort of 60 individuals. Most participants in the weight-loss intervention lost weight successfully during the first six-month phase. However, a clear trend toward regaining the lost weight emerged over the five-year observation period. Of particular interest was the estimation of cardiovascular risk using the Framingham score. Individuals with sustained weight loss had a markedly lower cardiovascular risk, whereas individuals with sustained weight regain or strong weight fluctuations showed a high cardiovascular risk. Our data suggest that successful long-term weight reduction is statistically associated with reduced cardiovascular risk, while participants with strong weight fluctuations or weight gain may have an increased risk.
To investigate the interaction of the molecular processes underlying weight reduction and lifespan, we used the model organism C. elegans. Continuous dietary restriction extended the roundworm's lifespan, whereas overfeeding shortened it. Of particular interest was the influence on lifespan of a time-restricted, intermittent feeding regimen analogous to weight cycling in humans. This regular alternation between ad libitum feeding and restriction had a lifespan-extending effect whose magnitude depended on the frequency of the restriction. Phenomena such as weight regain are not observed in C. elegans and presumably rest on a mechanism that is evolutionarily younger and not yet present in C. elegans.
To identify new metabolic pathways that influence lifespan, metabolite profiles of genetic as well as dietary longevity models were analyzed. These analyses identified tryptophan metabolism as a new pathway, not previously in focus, that is associated with longevity.
The mystery of the origin of cosmic rays has been tackled for more than a hundred years and is still not solved. Cosmic rays are detected with energies spanning more than 10 orders of magnitude, reaching up to ~10²¹ eV, far higher than any man-made accelerator can reach. Different theories about the astrophysical objects and processes that create such highly energetic particles have been proposed.
A very prominent explanation for a process producing highly energetic particles is shock acceleration. The observation of high-energy gamma rays from supernova remnants, some of them revealing a shell-like structure, is clear evidence that particles are accelerated to ultrarelativistic energies in the shocks of these objects. The environments of supernova remnants are complex and challenge detailed modelling of the processes leading to high-energy gamma-ray emission.
The study of shock acceleration at bow shocks, created by the supersonic movement of individual stars through the interstellar medium, offers a unique possibility to determine the physical properties of shocks in a less complex environment. The shocked medium is heated by the stellar radiation and the shock-excited radiation, leading to thermal infrared emission. Twenty-eight bow shocks have been discovered through their infrared emission. Nonthermal radiation at radio and X-ray wavelengths has been detected from two bow shocks, pointing to the existence of relativistic particles in these systems. Theoretical models of the emission processes predict high-energy and very-high-energy emission at flux levels within reach of current instruments. This work presents the search for gamma-ray emission from bow shocks of runaway stars in the energy regime from 100 MeV to ~100 TeV.
The search is performed with the Large Area Telescope (LAT) on board the Fermi satellite and with the H.E.S.S. telescopes located in the Khomas Highland in Namibia. The Fermi-LAT was launched in 2008 and has been continuously scanning the sky since then. It detects photons with energies from 20 MeV to over 300 GeV with unprecedented sensitivity. The all-sky coverage allows us to study all 28 bow shocks of runaway stars listed in the E-BOSS catalogue of infrared bow shocks. No significant emission was detected from any of the objects, although it is predicted by several theoretical models describing the non-thermal emission of bow shocks of runaway stars.
The H.E.S.S. experiment is the most sensitive system of imaging atmospheric Cherenkov telescopes. It detects photons from several tens of GeV to ~100 TeV. Seven of the bow shocks have been observed with H.E.S.S., and the data analysis is presented in this thesis. The analyses of the very-high-energy data did not reveal significant emission from any of the sources either.
This work presents the first systematic search for gamma-ray emission from bow shocks of runaway stars. For the first time, Fermi-LAT data were specifically analysed to search for emission from bow shocks of runaway stars. In the TeV regime no searches for emission from these objects have been published so far; the study presented here is the first in this energy regime. The calculated upper limits constrain the level of gamma-ray emission from bow shocks of runaway stars over six orders of magnitude in energy.
The upper limits calculated for the bow shocks of runaway stars in the course of this work constrain several models. For the best candidate, ζ Ophiuchi, the upper limits in the Fermi-LAT energy range are lower than the predictions by a factor of ~5. This challenges the assumptions made in this model and gives valuable input for further modelling approaches.
The analyses were performed with the software packages provided by the H.E.S.S. and Fermi collaborations. The development of a unified analysis framework for gamma-ray data, namely GammaLib/ctools, is rapidly progressing within the CTA consortium. Recent implementations and cross-checks with current software frameworks are presented in the Appendix.
Swallowing is a vital process whose diagnosis and therapy pose enormous challenges. Detecting and assessing swallows and swallowing disorders requires technically complex procedures such as videofluoroscopy (VFSS) and fiberoptic endoscopic evaluation of swallowing (FEES), which place a considerable burden on patients. Both procedures are used as the gold standard in the diagnosis of swallowing disorders and are usually performed by medical staff; moreover, evaluating the resulting diagnostic images requires considerable experience. In therapy, functional electrical stimulation is increasingly used alongside classical methods such as dietary modifications and swallowing maneuvers. The aim of this dissertation is the evaluation of a bioimpedance (BI) and electromyography (EMG) measurement system developed in the joint project BigDysPro. It was examined whether the BI and EMG measurement system is suitable for use in diagnosis as well as in therapy, both as a stand-alone measurement system and as part of a swallowing neuroprosthesis. In several studies, healthy subjects were examined to assess reproducibility (intra- and inter-rater reliability), the distinguishability of swallowing from head movements, and the influence of various factors (subjects' sex, conductivity, consistency, and amount of food) on the biosignals (BI, EMG). Additional studies with patients examined, on the one hand, the influence of the electrode type; on the other hand, endoscopic (FEES) and radiological (VFSS) examinations of swallowing were performed in parallel with the BI and EMG measurements to test the correlation of the biosignals with the movement of anatomical structures (VFSS) and with swallowing quality (FEES). In total, 31 healthy subjects with 1819 swallows and 60 patients with 715 swallows were examined. The measurement curves showed a typical, reproducible signal course that correlated with anatomical and functional changes during the pharyngeal phase of swallowing in the VFSS (r > 0.7). Features could be extracted from the bioimpedance signal that correlated with physiological characteristics of a swallow, such as delayed laryngeal closure and laryngeal elevation, and that allowed an assessment of swallowing quality in agreement with the FEES. The biosignals showed significant differences between swallowing and head movements and between food amounts and consistencies. In contrast to the amount and consistency of the food, the conductivity of the swallowed food, the subjects' sex, and the electrode type had no significant influence on the measured signals. The evaluation demonstrated that the BI and EMG measurement system constitutes a novel, non-invasive method that provides a reproducible representation of the pharyngeal phase of swallowing and its changes. This opens up versatile applications in diagnosis, e.g. long-term measurement of swallowing frequency and assessment of swallowing quality, and in therapy, e.g. use in a swallowing neuroprosthesis or as biofeedback for visualizing swallowing and swallowing disorders.
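The exact feature definitions used for assessing swallowing quality are not given in this summary; purely as a hypothetical sketch, simple features such as the depth and duration of the impedance drop during the pharyngeal phase might be extracted from a single bioimpedance curve like this. All names and thresholds are illustrative assumptions.

```python
import numpy as np

def swallow_features(t, z):
    """Hypothetical feature extraction from one bioimpedance (BI) swallow
    curve: depth and half-depth duration of the impedance drop that
    accompanies the pharyngeal phase."""
    t, z = np.asarray(t), np.asarray(z)
    baseline = np.median(z[: max(1, len(z) // 10)])  # pre-swallow baseline
    drop = baseline - z
    depth = drop.max()
    below = drop > 0.5 * depth                       # half-depth interval
    duration = t[below][-1] - t[below][0] if below.any() else 0.0
    return {"depth": depth, "duration_s": duration, "t_min": t[np.argmax(drop)]}
```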
Donor-acceptor (D-A) copolymers have revolutionized the field of organic electronics over the last decade. Composed of an electron-rich and an electron-deficient molecular unit, these copolymers facilitate the systematic modification of the material's optoelectronic properties. The ability to tune the optical band gap and to optimize the molecular frontier orbitals, as well as the manifold of structural sites that enable chemical modification, has created a tremendous variety of copolymer structures. Today, these materials reach or even exceed the performance of amorphous inorganic semiconductors. Most impressively, the charge carrier mobility of D-A copolymers has been pushed to the technologically important value of 10 cm^{2}V^{-1}s^{-1}. Furthermore, owing to their enormous variability, they are the material of choice for the donor component in organic solar cells, which have recently surpassed the efficiency threshold of 10%. Because of the great number of available D-A copolymers and their fast chemical evolution, there is a significant lack of understanding of the fundamental physical properties of these materials. Furthermore, the complex chemical and electronic structure of D-A copolymers, in combination with their semi-crystalline morphology, impedes a straightforward identification of the microscopic origin of their superior performance. In this thesis, two aspects of prototype D-A copolymers were analysed: electron transport in several copolymers, and the application of low band gap copolymers as the acceptor component in organic solar cells. In the first part, the investigation of a series of chemically modified fluorene-based copolymers is presented. The charge carrier mobility varies strongly between the different derivatives, although only moderate changes to the copolymer structure were made. Furthermore, rather unusual photocurrent transients were observed for one of the copolymers. Numerical simulations of the experimental results reveal that this behavior arises from severe trapping of electrons in an exponential distribution of trap states. Based on the comparison of simulation and experiment, the general impact of charge carrier trapping on the shape of photo-CELIV and time-of-flight transients is discussed. In addition, the high-performance naphthalenediimide (NDI)-based copolymer P(NDI2OD-T2) was characterized. It is shown that the copolymer possesses one of the highest electron mobilities reported so far, which makes it attractive as the electron-accepting component in organic photovoltaic cells. Solar cells were prepared from two NDI-containing copolymers, blended with the hole-transporting polymer P3HT. I demonstrate that the use of appropriate high-boiling-point solvents can significantly increase the power conversion efficiency of these devices. Spectroscopic studies reveal that the pre-aggregation of the copolymers is suppressed in these solvents, which has a strong impact on the blend morphology. Finally, a systematic study of P3HT:P(NDI2OD-T2) blends is presented, which quantifies the processes that limit the efficiency of the devices. The major loss channel for excited states was determined by transient and steady-state spectroscopic investigations: the majority of initially generated electron-hole pairs is annihilated by an ultrafast geminate recombination process. Furthermore, exciton self-trapping in P(NDI2OD-T2) domains accounts for an additional reduction of the efficiency.
The correlation of the photocurrent with microscopic morphology parameters was used to identify the factors that limit the charge-generation efficiency. Our results suggest that the orientation of the donor and acceptor crystallites relative to each other is the main factor determining the free charge carrier yield in this material system. This provides an explanation for the overall low efficiencies generally observed in all-polymer solar cells.
The tropical warm-pool waters surrounding Indonesia are one of the equatorial heat and moisture sources that are considered a driving force of the global climate system. The climate in Indonesia is dominated by the equatorial monsoon system and has been linked to El Niño-Southern Oscillation (ENSO) events, which often result in severe droughts or floods over Indonesia, with profound societal and economic impacts on the population of the world's fourth most populated country. The latest IPCC report states that ENSO will remain the dominant mode in the tropical Pacific with global effects in the 21st century and that ENSO-related precipitation extremes will intensify. However, no common agreement exists among climate simulation models for the projected change in ENSO and the Australian-Indonesian Monsoon. Exploring high-resolution palaeoclimate archives, such as tree rings or varved lake sediments, provides insights into the natural climate variability of the past and thus helps improve and validate simulations of future climate change. Centennial tree-ring stable isotope records | The main goal of this doctoral thesis was to explore the potential of tropical tree rings to record climate signals and to use them as palaeoclimate proxies. In detail, stable carbon (δ13C) and oxygen (δ18O) isotopes were extracted from teak trees in order to establish the first well-replicated centennial (AD 1900-2007) stable isotope records for Java, Indonesia. Furthermore, different climatic variables were tested for significant correlations with the tree-ring proxies (ring width, δ13C, δ18O). Moreover, highly resolved intra-annual oxygen isotope data were established to assess the transfer of the seasonal precipitation signal into the tree rings. Finally, the established oxygen isotope record was used to reveal possible correlations with ENSO events. Methodological achievements | A second goal of this thesis was to assess the applicability of novel techniques that facilitate and optimize high-resolution and high-throughput stable isotope analysis of tree rings. Two different UV-laser-based microscopic dissection systems were evaluated as novel sampling tools for high-resolution stable isotope analysis. Furthermore, an improved procedure for tree-ring dissection from thin cellulose laths for stable isotope analysis was designed. The most important findings of this thesis are: I) The novel sampling techniques presented herein improve stable isotope analyses for tree-ring studies in terms of precision, efficiency, and quality. UV-laser-based microdissection serves as a valuable tool for sampling plant tissue at ultrahigh resolution and with unprecedented precision. II) A guideline for a modified method of cellulose extraction from whole-wood cross-sections and subsequent tree-ring dissection was established. The novel technique optimizes the stable isotope analysis process in two ways: faster, high-throughput cellulose extraction and precise tree-ring separation at annual to intra-annual resolution. III) The centennial tree-ring stable isotope records reveal significant correlations with regional precipitation. The high-resolution stable oxygen values, furthermore, allow distinguishing between dry- and rainy-season rainfall. IV) The δ18O record reveals significant correlations with different ENSO flavors and demonstrates the importance of considering ENSO flavors when interpreting palaeoclimatic data in the tropics.
The findings of my dissertation show that seasonally resolved δ18O records from Indonesian teak trees are a valuable proxy for multi-centennial reconstructions of regional precipitation variability (monsoon signals) and large-scale ocean-atmosphere phenomena (ENSO) for the Indo-Pacific region. Furthermore, the novel methodological achievements offer many unexplored avenues for multidisciplinary research in high-resolution palaeoclimatology.
Protein-metal coordination complexes are well known as active centers in enzymatic catalysis and contribute to signal transduction, gas transport, and hormone function. Additionally, they are now known to contribute as load-bearing cross-links to the mechanical properties of several biological materials, including the jaws of Nereis worms and the byssal threads of marine mussels. The primary aim of this thesis work is to better understand the role of protein-metal cross-links in the mechanical properties of biological materials, using the mussel byssus as a model system. Specifically, the focus is on histidine-metal cross-links as sacrificial bonds in the fibrous core of the byssal thread (Chapter 4) and on L-3,4-dihydroxyphenylalanine (DOPA)-metal bonds in the protective thread cuticle (Chapter 5).
Byssal threads are protein fibers, which mussels use to attach to various substrates at the seashore. These relatively stiff fibers have the ability to extend up to about 100 % strain, dissipating large amounts of mechanical energy from crashing waves, for example. Remarkably, following damage from cyclic loading, initial mechanical properties are subsequently recovered by a material-intrinsic self-healing capability. Histidine residues coordinated to transition metal ions in the proteins comprising the fibrous thread core have been suggested as reversible sacrificial bonds that contribute to self-healing; however, this remains to be substantiated in situ. In the first part of this thesis, the role of metal coordination bonds in the thread core was investigated using several spectroscopic methods. In particular, X-ray absorption spectroscopy (XAS) was applied to probe the coordination environment of zinc in Mytilus californianus threads at various stages during stretching and subsequent healing. Analysis of the extended X-ray absorption fine structure (EXAFS) suggests that tensile deformation of threads is correlated with the rupture of Zn-coordination bonds and that self-healing is connected with the reorganization of Zn-coordination bond topologies rather than the mere reformation of Zn-coordination bonds. These findings have interesting implications for the design of self-healing metallopolymers.
The byssus cuticle is a protective coating surrounding the fibrous thread core that is both as hard as an epoxy and extensible up to 100 % strain before cracking. It was shown previously that cuticle stiffness and hardness largely depend on the presence of Fe-DOPA coordination bonds. However, the byssus is known to concentrate a large variety of metals from seawater, some of which are also capable of binding DOPA (e.g. V). The question therefore arises whether natural variation in metal composition can affect the mechanical performance of the byssal thread cuticle. To investigate this question, nanoindentation and confocal Raman spectroscopy were applied to the cuticle of native threads, threads with metals removed (EDTA-treated), and threads in which the metal ions in the native tissue were replaced by either Fe or V. Interestingly, replacement of the metal ions with either Fe or V leads to full recovery of the native mechanical properties, with no statistical difference from each other or from the native properties. This likely indicates that a fixed number of metal coordination sites is maintained within the byssal thread cuticle – possibly established during thread formation – which may provide an evolutionarily relevant mechanism for maintaining reliable mechanics in an unpredictable environment.
While the dynamic exchange of bonds plays a vital role in the mechanical behavior and self-healing of the thread core by allowing histidine-metal bonds to act as reversible sacrificial bonds, the compatibility of DOPA with other metals gives the thread cuticle an inherent adaptability to changing circumstances. The requirements of both of these materials can be met by the dynamic nature of protein-metal cross-links, whereas covalent cross-linking would provide neither the adaptability of the cuticle nor the self-healing of the core. In summary, these studies of the thread core and the thread cuticle underline the important and dynamic roles of protein-metal coordination in the mechanical function of load-bearing protein fibers such as the mussel byssus.
The atmosphere over the Arctic Ocean is strongly influenced by the distribution of sea ice and open water. Leads in the sea ice produce strong convective fluxes of sensible and latent heat and release aerosol particles into the atmosphere. They increase the occurrence of clouds and modify the structure and characteristics of the atmospheric boundary layer (ABL) and thereby influence the Arctic climate.
In the course of this study, aircraft measurements were performed over the western Arctic Ocean as part of the PAMARCMIP 2012 campaign of the Alfred Wegener Institute for Polar and Marine Research (AWI). Backscatter from aerosols and clouds within the lower troposphere and the ABL was measured with the nadir-pointing Airborne Mobile Aerosol Lidar (AMALi), and dropsondes were launched to obtain profiles of meteorological variables. Furthermore, in situ measurements of aerosol properties, meteorological variables, and turbulence were part of the campaign. The measurements covered a broad range of atmospheric and sea ice conditions.
In this thesis, properties of the ABL over Arctic sea ice, with a focus on the influence of open leads, are studied based on the data from the PAMARCMIP campaign. The height of the ABL is determined by different methods applied to dropsonde and AMALi backscatter profiles. ABL heights are compared for different flights representing different atmospheric conditions and different degrees of sea ice and open-water influence. The different ABL-height criteria agree to varying degrees, depending on the characteristics of the ABL and its history. It is shown that ABL height determination from lidar backscatter by methods commonly used under mid-latitude conditions is applicable to the Arctic ABL only under certain conditions. Aerosol or clouds within the ABL are needed as a tracer for ABL height detection from backscatter; hence an aerosol source close to the surface is necessary, which is typically present under the influence of open water and therefore under convective conditions. However, it is not always possible to distinguish residual layers from the actual ABL, and stable boundary layers are generally difficult to detect.
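The thesis compares several ABL-height criteria; purely as an illustration of one criterion commonly applied to lidar profiles (the height of the strongest negative vertical backscatter gradient, which presupposes aerosol as a tracer, as discussed above), consider the sketch below. The height limits and the synthetic profile are assumptions, not the thesis's settings.

```python
import numpy as np

def abl_height_gradient(z, backscatter, z_min=50.0, z_max=2500.0):
    """Illustrative gradient criterion for ABL height from a lidar
    backscatter profile: the height of the strongest negative vertical
    gradient, where aerosol-rich boundary-layer air gives way to the
    cleaner free troposphere.

    z: heights in m (increasing); backscatter: profile of equal length.
    """
    z, b = np.asarray(z), np.asarray(backscatter)
    grad = np.gradient(b, z)
    mask = (z >= z_min) & (z <= z_max)  # exclude near field and high altitudes
    return z[mask][np.argmin(grad[mask])]

# hypothetical profile: well-mixed aerosol below ~400 m, clean air above
z = np.arange(30, 3000, 15.0)
profile = 1.0 / (1.0 + np.exp((z - 400.0) / 40.0)) + 0.05
print(abl_height_gradient(z, profile))  # ~400 m
```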
To illustrate the complexity of the Arctic ABL and the processes therein, four case studies are analyzed, each of which represents a snapshot of the interplay between the atmosphere and the underlying sea ice or water surface. The influences of leads and open water on aerosol and clouds within the ABL are identified and discussed. Leads are observed to cause the formation of fog and cloud layers within the ABL through the release of humidity. Furthermore, they decrease the stability and increase the height of the ABL and consequently facilitate entrainment of air and aerosol layers from the free troposphere.
Animal and human feces from agriculture and households contain numerous obligate and opportunistic pathogenic microorganisms, whose concentration varies with, among other things, the health status of the population in question. Besides pathogens, however, feces also contain essential plant nutrients (276) and have served as fertilizer for crops for millennia (63). With the careless use of pathogen-laden fecal fertilizer, however, the risk of infection for humans and animals rises. This hazard increases with the global interconnection of agriculture, e.g. through the import of contaminated feed and food (29).
This work presents the lactic acid fermentation of cattle slurry and sewage sludge as an alternative hygienization method to pasteurization in biogas plants and to conventional composting.
During fermentation, the Gram-negative bacterial flora as well as enterococci, molds, and yeasts fall below the detection limit of 3 log10 CFU/g, while the concentration of Lactobacillaceae increases a thousandfold. It is further shown that pathogenic bacteria such as Staphylococcus aureus, Salmonella spp., Listeria monocytogenes, EHEC O:157, and vegetative Clostridium perfringens cells are inactivated within 3 days; ECBO viruses and roundworm eggs are inactivated within 7 and 56 days, respectively. To clarify the cause of the observed hygienization, the fermented material was analyzed for volatile fatty acids and changes in pH. The measured values turned out not to be the sole cause of pathogen die-off; rather, an additional bactericidal effect through a presumed formation of bacteriocins is considered. The parasiticidal effect is attributed to the physical conditions of the fermentation.
The methods are based on numerous classical culture techniques, such as viable cell counts. In addition, MALDI-TOF mass spectrometry and classical PCR combined with gradient gel electrophoresis are used to describe the culturable bacterial flora and to sample the non-culturable bacterial flora.
Beyond hygienization, the suitability of the method for agricultural use is also considered. This is reflected in particular in the composition of the material to be fermented, which was optimized for enhanced humus accumulation in arable soil. Furthermore, the mass-loss balance during lactic acid fermentation is compared with that of composting and of processing in a biogas plant and is rated positively: at a total of 2.45 %, it lies well below the existing alternatives (73, 138, 458). Lower losses of organic material during hygienization yield a larger usable amount of fertilizer, which, owing to its organic origin, can contribute to increasing the humus content of arable soil (56, 132).
Synchronization phenomena of myotendinous oscillations in interacting neuromuscular systems (2014)
Muscles demonstrably oscillate at a frequency of around 10 Hz. But what happens to myofascial oscillations when two neuromuscular systems interact? This dissertation addresses that question for isometric interaction. During the test measurements, indications emerged of the existence of possibly two different forms of isometry. When two people work isometrically against each other, two modes can subjectively be adopted: one can either hold isometrically (resist the partner's force) or push isometrically (work against the partner's isometric resistance). Therefore, in addition to the measurements of two interacting people, single individuals were tested to examine whether two forms of isometry might exist. The dissertation thus consists of two parts, separate in content and method: I 'single isometry' and II 'paired isometry'. For part I, the hypothetical measurement modes of holding and pushing during isometric action were investigated using a pneumatically driven system. In n = 10 subjects, alongside the pressure signal, force (strain gauges) and acceleration were recorded during the measurements, as were the mechanical muscle oscillations of the following myotendinous structures via mechanomyography (MMG) and mechanotendography (MTG): triceps brachii muscle (MMGtri), triceps tendon (MTGtri), and obliquus externus abdominis muscle (MMGobl). For each subject, at 80 % of MVC, six 15-second measurements (three each in the holding and pushing modes; rest: 1 minute) and four fatigue measurements (two each in the holding and pushing modes; rest: 2 minutes) were performed. To compare the holding and pushing modes, the amplitudes of the myofascial oscillations and force endurance were used. Significant differences between the holding and the pushing mode appeared in particular in the fatigue characteristics: subjects in the holding mode give way significantly earlier than in the pushing mode (t(9) = 3.716; p = .005). In the pushing mode, the longest isometric plateau accounts for 59.4 % of the total duration on average, in the holding mode 31.6 % (t(19) = 5.265, p = .000). The amplitudes of the single-isometry measurements do not differ significantly; however, the amplitudes of the MMGobl vary significantly more strongly between measurements in the pushing mode than in the holding mode. Because of these partly significant differences between the two measurement modes, this setting was also taken into account in the second part, 'paired isometry'. There, n = 20 subjects, divided into ten same-sex pairs, were examined during isometric interaction, with sensor placement as in part I. The oscillations of the recorded MTG and MMG signals were examined for coherence using, among other things, algorithms from nonlinear dynamics. The paired-isometry measurements showed that the muscles and tendons of both neuromuscular systems oscillate in the known frequency range around 10 Hz during interaction. Moreover, they were able to attune to each other during interaction in such a way that a significant coherence developed, differing significantly from random pairings (number of patches: t(29) = 3.477; p = .002; sum of the 4 longest patches: t(29) = 7.505; p = .000).
It is concluded that complementary neuromuscular partners are able to synchronize in the sense of coherent behavior. Regarding the parameters used to investigate the possibly existing two forms of isometric action, the paired isometric measurements showed a significant difference between holding and pushing in the fatigue characteristics as well as in the amplitude of the MMGobl. The results of both sub-studies strengthen the hypothesis that two forms of isometric action exist. It is questionable whether one can speak of isometry at all, since every isometric muscle action consists of fine oscillations, which preclude isometry as postulated by definition. It is proposed to replace the term isometry with the term homeometry. The results of the paired isometric measurements show, among other things, that neuromuscular systems are able to attune their myotendinous oscillations to each other in such a way that coherent behavior emerges. It is assumed that both neuromuscular systems must be functionally intact for this. The method could become relevant for the diagnostics of functional disorders.
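A hedged sketch of one conventional way to quantify coherence between two recorded MMG signals around the 10 Hz band; the thesis's patch-based nonlinear-dynamics algorithms are not reproduced here, and the signals, sampling rate, and band edges below are illustrative assumptions:

# Minimal sketch: magnitude-squared coherence of two MMG-like signals
# around the ~10 Hz muscle oscillation band. Synthetic data stand in
# for the recorded signals; sampling rate and band are assumptions.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                      # assumed sampling rate in Hz
t = np.arange(0, 15, 1 / fs)     # one 15-second trial, as in Part I
shared = np.sin(2 * np.pi * 10 * t)              # common ~10 Hz drive
mmg_a = shared + 0.8 * np.random.randn(t.size)   # partner A, noisy
mmg_b = shared + 0.8 * np.random.randn(t.size)   # partner B, noisy

f, cxy = coherence(mmg_a, mmg_b, fs=fs, nperseg=2048)
band = (f >= 8) & (f <= 12)      # band of interest around 10 Hz
print(f"mean coherence 8-12 Hz: {cxy[band].mean():.2f}")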
The subsurface upper Palaeozoic sedimentary successions of the Loppa High half-graben and the Finnmark platform in the Norwegian Barents Sea (southwest Barents Sea) were investigated using 2D/3D seismic datasets combined with well and core data. These sedimentary successions represent a case of mixed siliciclastic-carbonate depositional systems, which formed during the earliest phase of the Atlantic rifting between Greenland and Norway. During the Carboniferous and Permian, the southwestern part of the Barents Sea was located along the northern margin of Pangaea, which drifted northward at a speed of ~2-3 mm per year. This gradual shift in paleolatitudinal position is reflected by changes in regional climatic conditions: from warm-humid in the early Carboniferous, to warm-arid in the middle to late Carboniferous, and finally to colder conditions in the late Permian. These changes in paleolatitude and climate resulted in major changes in the style of sedimentation, including variations in the type of carbonate factories. The upper Palaeozoic sedimentary succession is composed of four major depositional units, comprising chronologically the Billefjorden Group, dominated by siliciclastic deposition in extensional tectonic-controlled wedges; the Gipsdalen Group, dominated by warm-water carbonates, stacked buildups and evaporites; the Bjarmeland Group, characterized by cool-water carbonates and the presence of buildup networks; and the Tempelfjorden Group, characterized by fine-grained sedimentation dominated by biological silica production. In the Loppa High, the integration of a core study with multi-attribute seismic facies classification made it possible to highlight the main sedimentary unconformities and to map the spatial extent of a buried paleokarst terrain. This geological feature is interpreted to have formed during a protracted episode of subaerial exposure between the late Palaeozoic and middle Triassic. Based on seismic sequence stratigraphic analysis, the palaeogeography of the Loppa High basin was furthermore reconstructed in time and space, and a new and more detailed tectono-sedimentary model for this area was proposed. In the Finnmark platform area, a detailed core analysis of two main exploration wells, combined with key 2D seismic sections located along the main depositional profile, allowed the evaluation of depositional scenarios for the two main lithostratigraphic units: the Ørn Formation (Gipsdalen Group) and the Isbjørn Formation (Bjarmeland Group). Two major changes were observed between the two formations across the mid-Sakmarian: (1) a variation in the type of carbonate factories, which is interpreted to be depth-controlled, and (2) a change in platform morphology, which evolved from a distally steepened ramp to a homoclinal ramp. The results of this study may help support future reservoir characterization of the upper Palaeozoic units in the Barents Sea, particularly in the Loppa High half-graben and the Finnmark platform area.
The monsoon is an important component of the Earth's climate system. It has played a vital role in the development and sustenance of the largely agro-based economy of India. A better understanding of past variations in the Indian Summer Monsoon (ISM) is necessary to assess its nature under global warming scenarios. However, our knowledge of the spatiotemporal patterns of past ISM strength, as inferred from proxy records, is limited by the lack of high-resolution paleo-hydrological records from the core monsoon domain.
In this thesis I aim to improve our understanding of Holocene ISM variability from the core 'monsoon zone' (CMZ) in India. To achieve this goal, I first characterized the modern hydrology and then reconstructed the Holocene monsoonal hydrology by studying surface sediments and a high-resolution sedimentary record from the saline-alkaline Lonar crater lake, central India. My approach relies on analyzing stable carbon and hydrogen isotope ratios of sedimentary lipid biomarkers to track past hydrological changes.
In order to evaluate the relationship between the modern ecosystem and the hydrology of the lake, I studied the distribution of lipid biomarkers in the modern ecosystem and compared it to lake surface sediments. The major plants of the dry deciduous mixed forest type produced a greater amount of leaf wax n-alkanes and a greater fraction of n-C31 and n-C33 alkanes relative to n-C27 and n-C29. Relatively high average chain length (ACL) values (29.6-32.8) for these plants appear common for vegetation from an arid and warm climate. Additionally, I found that human influence and the consequent nutrient supply increase lake primary productivity, leading to an unusually high concentration of tetrahymanol, a biomarker for salinity and water column stratification, in the nearshore sediments. Given this inhomogeneous deposition of tetrahymanol in modern sediments, I hypothesize that lake level fluctuations, in addition to source changes, may affect aquatic lipid biomarker distributions in lacustrine sediments.
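For illustration, the average chain length (ACL) cited above is an abundance-weighted mean of the n-alkane chain lengths; a minimal sketch with hypothetical homologue concentrations:

# Minimal sketch: average chain length (ACL) of leaf wax n-alkanes,
# ACL = sum(n * C_n) / sum(C_n) over odd chain lengths n = 27..33.
# The concentrations below are hypothetical placeholders.
conc = {27: 1.0, 29: 2.5, 31: 4.0, 33: 3.0}  # ug/g dry sediment (assumed)
acl = sum(n * c for n, c in conc.items()) / sum(conc.values())
print(f"ACL = {acl:.1f}")  # values of 29.6-32.8 suggest arid, warm vegetation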
I reconstructed centennial-scale hydrological variability associated with changes in the intensity of the ISM based on a record of leaf wax and aquatic biomarkers and their stable carbon (δ13C) and hydrogen (δD) isotopic composition from a 10 m long sediment core from the lake. I identified three main periods of distinct hydrology over the Holocene in central India. The period between 10.1 and 6 cal. ka BP was likely the wettest during the Holocene. Lower ACL index values (29.4 to 28.6) of leaf wax n-alkanes and their negative δ13C values (–34.8‰ to –27.8‰) indicated the dominance of woody C3 vegetation in the catchment, and negative δDwax (average for leaf wax n-alkanes) values (–171‰ to –147‰) argue for a wet period due to an intensified monsoon. After 6 cal. ka BP, a gradual shift to less negative δ13C values (particularly for the grass-derived n-C31) and the appearance of the triterpene lipid tetrahymanol, generally considered a marker for salinity and water column stratification, marked the onset of drier conditions. At 5.1 cal. ka BP, an increasing flux of leaf wax n-alkanes along with the highest flux of tetrahymanol indicated that the lakeshore had moved closer to the lake center due to a major lake-level decrease. Rapid fluctuations in the abundance of both terrestrial and aquatic biomarkers between 4.8 and 4 cal. ka BP indicated an unstable lake ecosystem, culminating in a transition to arid conditions. A pronounced shift to less negative δ13C values, in particular for n-C31 (–25.2‰ to –22.8‰), over this period indicated a change of dominant vegetation to C4 grasses. Along with a 40‰ increase in leaf wax n-alkane δD values, which likely resulted from less rainfall and/or higher plant evapotranspiration, I interpret this period to reflect the driest conditions in the region during the last 10.1 ka. This transition led to protracted late Holocene arid conditions and the establishment of a permanently saline lake, as supported by the high abundance of tetrahymanol. A late Holocene peak of cyanobacterial biomarker input at 1.3 cal. ka BP might represent an event of lake eutrophication, possibly due to human impact and the onset of cattle/livestock farming in the catchment.
The most intriguing feature of the mid-Holocene driest period was the high-amplitude and rapid fluctuations in δDwax values, probably due to a change in the moisture source and/or precipitation seasonality. I hypothesize that orbitally induced weakening of the summer solar insolation and the associated reorganization of the general atmospheric circulation were responsible for an unstable hydroclimate in the mid-Holocene in the CMZ.
My findings shed light on the sequence of changes during mean-state changes of the monsoonal system once an insolation-driven threshold has been passed, and show that small changes in solar insolation can be associated with major environmental changes and large fluctuations in moisture source, a scenario that may be relevant with respect to future changes in the ISM system.
The present study examined the significance of dysfunctional attitudes for the development of depressive symptoms in children and adolescents. According to Beck's cognitive theory of depression (1967, 1996), dysfunctional attitudes in interaction with stress lead to depressive symptoms. However, only few studies have examined the longitudinal relationship between dysfunctional attitudes and depressiveness in children and adolescents (Lakdawalla et al., 2007). Consequently, it cannot yet be clearly established whether dysfunctional attitudes are a cause, a concomitant, or a consequence of depression. The data base was a sample of children and adolescents aged 9 to 20 years who were surveyed on dysfunctional attitudes, critical life events, and depressive symptoms within the PIER study (Nt1t2 = 1,053; t1: 2011/2012, t2: 2013/2014). Cross-sectional analyses showed strong associations between dysfunctional attitudes, critical life events, and depressive symptoms. A latent moderation analysis indicated a significant interaction between dysfunctional attitudes and critical life events in predicting depressive symptoms only among the adolescents. Longitudinally, latent cross-lagged panel analyses showed, as expected, that dysfunctional attitudes and depressiveness become increasingly stable constructs with age that are very closely interrelated. A latent moderation analysis added to this model could not confirm Beck's cognitive model of depression in either children or adolescents. Later depressive symptoms could only be predicted by main effects of earlier depressiveness and of critical life events. These results suggest that dysfunctional attitudes are concomitants rather than risk factors or consequences of depressive symptoms.
The aim of this work was the development of methods for the synthesis of phenol-based natural products. In developing these methods, particular emphasis was placed on sustainability. This means, for example, that unnecessary reaction steps were to be avoided by combining several synthetic steps into one (tandem reaction). Furthermore, in the spirit of sustainability, reagents and solvents that are as non-toxic as possible were to be used, as well as catalysts that can be reused several times. Within this work, methods for the construction of biphenols via Pd/C-catalyzed Suzuki-Miyaura couplings were developed. These methods are highly efficient in that the otherwise customary synthetic route of three reaction steps was reduced to a single stage. Furthermore, the reaction conditions were designed such that plain water could be used as a completely non-toxic solvent. In addition, a catalyst was chosen that could be separated from the reaction mixture by simple filtration and reused several times for further reactions. Moreover, the broad applicability of the methods was demonstrated by the synthesis of more than 100 compounds. With the developed methods, 14 natural products could be synthesized, some for the first time. Such substances are produced, among others, by the economically important pome fruit crops (apples, pears) as a defense against pests. Consequently, these methods provided a synthetic route to potential crop protection agents. In the second part of this work, access to chromanones, chromones, and coumarins, which are likewise derived from phenol, was investigated. In these investigations, the development of two new tandem reactions revealed a sustainable and step-economical synthetic route to substituted benzo(dihydro)pyranones. By combining the Claisen rearrangement for the first time with an oxa-Michael addition or a conjugate addition, two completely atom-economical reactions were linked, enabling a highly efficient synthesis of allyl- and prenyl-substituted chromanones and chromones. Furthermore, allyl- and prenyl-substituted coumarins could be obtained by applying a Claisen rearrangement-Wittig lactonization reaction. The outstanding feature of these methods is that the respective natural product core is built and a lipophilic side chain generated in just one step. The development of these methods is of high pharmaceutical value, since compounds can be synthesized in this way that possess both the necessary pharmacological scaffold and a side chain that considerably increases uptake and thus efficacy in the organism. In total, 15 chromanone, chromone, and coumarin natural products could be synthesized, some for the first time, using the developed methods.
The quantitative description of the state of stress in the Earth's crust, and of its spatial-temporal changes, is of great importance for scientific questions as well as applied geotechnical issues. Human activities in the underground (boreholes, tunnels, caverns, reservoir management, etc.) have a large impact on the stress state. It is important to assess whether these activities may lead to (unpredictable) hazards, such as induced seismicity. Equally important is an understanding of the in situ stress state in the Earth's crust, as it allows safe well paths to be determined already during well planning. The same holds for the optimal configuration of injection and production wells where stimulation of artificial fluid pathways is necessary.
This cumulative dissertation consists of four separate manuscripts that are published, submitted, or about to be submitted for peer review. The main focus is on investigating the possible use of geothermal energy in the province of Alberta (Canada). A 3-D geomechanical-numerical model was designed to quantify the contemporary 3-D stress tensor in the upper crust. For the calibration of the regional model, 321 stress orientation data and 2714 stress magnitude data were collected; the size and diversity of this database are unique. A calibration scheme was developed in which the model is calibrated against the in situ stress data stepwise for each data type and gradually optimized using statistical tests. The optimum displacement on the model boundaries can be determined by bivariate linear regression based on only three model runs with varying deformation ratio. The best-fit model predicts most of the in situ stress data quite well. Thus, the model can provide the full stress tensor along any chosen virtual well path. This can be used to optimize the orientation of horizontal wells, e.g. for reservoir stimulation. The model confirms regional deviations from the average stress orientation trend, such as in the region of the Peace River Arch and the Bow Island Arch.
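A hedged sketch of the regression idea described above: if the stress misfit is assumed to respond linearly to the two boundary displacements, three model runs suffice to fit a bivariate linear model and solve for the displacement pair with zero misfit; all numbers below are illustrative, not values from the thesis:

# Minimal sketch: bivariate linear regression of stress misfit on the
# two boundary displacements (dx, dy), fitted from three model runs,
# then solved for zero misfit. All values are placeholders.
import numpy as np

dx = np.array([100.0, 150.0, 100.0])   # boundary displacement, m (assumed)
dy = np.array([50.0, 50.0, 80.0])      # m (assumed)
mis_H = np.array([4.0, -2.0, 5.5])     # MPa, SHmax magnitude misfit
mis_h = np.array([1.5, 0.5, -1.0])     # MPa, Shmin magnitude misfit

A = np.column_stack([np.ones(3), dx, dy])
coef_H = np.linalg.solve(A, mis_H)     # misfit_H = a + b*dx + c*dy
coef_h = np.linalg.solve(A, mis_h)

# solve the 2x2 system for the displacements giving zero misfit
M = np.array([coef_H[1:], coef_h[1:]])
rhs = -np.array([coef_H[0], coef_h[0]])
best_dx, best_dy = np.linalg.solve(M, rhs)
print(f"optimum boundary displacement: dx={best_dx:.1f} m, dy={best_dy:.1f} m")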
In the course of the data compilation for the Alberta stress model, the Canadian database of the World Stress Map (WSM) was expanded by 514 new data records. This update of the Canadian stress map after ~20 years, with a specific focus on Alberta, shows that the maximum horizontal stress (SHmax) is oriented southwest-northeast over large areas of North America. The SHmax orientation in Alberta is very homogeneous, with an average of about 47°. In order to calculate the average SHmax orientation on a regular grid and to estimate the wavelength of stress orientation, an existing algorithm was improved and applied to the Canadian data. The newly introduced quasi interquartile range on the circle (QIROC) improves the variance estimation of periodic data, as it is less susceptible to outliers.
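For orientation, a hedged sketch of the standard treatment of such axial data (SHmax azimuths repeat every 180°, so angles are doubled before vector averaging); the QIROC estimator itself follows the thesis and is not reproduced here, and the azimuth values are illustrative:

# Minimal sketch: mean orientation of axial data such as SHmax azimuths.
# Axes repeat every 180 deg, so angles are doubled, averaged as unit
# vectors, and halved again. Azimuth values are placeholders.
import numpy as np

azimuth = np.array([40.0, 45.0, 50.0, 52.0, 230.0])  # deg; 230 == 50 axially
theta = np.deg2rad(2.0 * azimuth)                    # double-angle transform
mean_angle = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
mean_azimuth = np.rad2deg(mean_angle) / 2.0 % 180.0
print(f"mean SHmax orientation: {mean_azimuth:.1f} deg")  # close to ~47 deg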
Another geomechanical-numerical model was built to estimate the 3D stress tensor in the target area "Nördlich Lägern" in Northern Switzerland. This location, with Opalinus Clay as the host rock, is a potential repository site for high-level radioactive waste. The modelling investigates the sensitivity of the stress tensor to tectonic shortening, topography, faults, and variable rock properties within the Mesozoic sedimentary stack, in view of the stability required for a suitable radioactive waste disposal site. The majority of the tectonic stresses caused by the far-field shortening from the south are accommodated by the competent rock units in the footwall and hanging wall of the argillaceous target horizon, the Upper Malm and Upper Muschelkalk. Thus, the differential stress within the host rock remains relatively low. East-west striking faults release stresses driven by tectonic shortening. The purely gravitational influence of the topography is small; higher SHmax magnitudes below topographic depressions and lower values below hills are mainly observed near the surface. A complete calibration of the model is not yet possible, as no stress magnitude data are available for calibration. The collection of these data will begin in 2015; subsequently, they will be used to adjust the geomechanical-numerical model.
The third geomechanical-numerical model investigates the stress variation in an ultra-deep gold mine in South Africa. This reservoir model is spatially one order of magnitude smaller than the preceding local model from Northern Switzerland. Here, the primary focus is on the hypothesis that the Mw 1.9 earthquake of 27 December 2007 was induced by stress changes due to the mining process. The Coulomb failure stress change (DeltaCFS) was used to analyse the stress change; it confirmed that the seismic event was induced by static stress transfer due to the mining progress. Stress changes of up to 1.5-15 MPa, depending on the type of DeltaCFS analysis, brought the rock closer to failure on the derived rupture plane. Forward modelling of a generic excavation scheme reveals that the DeltaCFS values increase significantly with decreasing distance to the dyke. Hence, even small changes in the mining progress can have a significant impact on the seismic hazard, i.e. on the probability of inducing a seismic event of economic concern.
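A hedged sketch of the standard Coulomb failure stress change underlying such an analysis; the sign convention is stated in the comments, and the stress values are illustrative, not results from the mine model:

# Minimal sketch: Coulomb failure stress change on a receiver plane,
# dCFS = d_tau + mu_eff * d_sigma_n, with the normal stress change
# taken positive for unclamping. Inputs are illustrative placeholders.
def delta_cfs(d_tau, d_sigma_n, mu_eff=0.6):
    """d_tau: shear stress change in slip direction (MPa);
    d_sigma_n: normal stress change, unclamping positive (MPa)."""
    return d_tau + mu_eff * d_sigma_n

# mining-induced stress changes resolved on the rupture plane (assumed)
print(f"dCFS = {delta_cfs(d_tau=1.2, d_sigma_n=0.5):.2f} MPa")  # > 0: closer to failure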
The European Parliament is without doubt the most powerful parliamentary assembly at the supranational level. This provokes the question of how decisions are made in this parliament and how they can be justified. Therein lies the main concern of this work, which draws on sociological approaches to explaining social action in order to answer this question and thereby creates a new approach to observing parliamentary action. In doing so, it shows how important it is, when analyzing political decision-making processes, to consider how political problems are interpreted by actors and presented to negotiating partners. Using the case studies of the decision-making processes on the Services Directive, the REACH chemicals regulation, and the TDIP (CIA) committee in the 2004-2009 legislative term, the social mechanism behind agreements in the European Parliament is presented. Culture as interpretation of the world thus becomes the key to understanding political decisions at the supranational level.
An important contribution of the geosciences to the renewable energy production portfolio is the exploration and utilization of geothermal resources. For the development of a geothermal project at great depths, a detailed geological and geophysical exploration program is required in the first phase. With the help of active seismic methods, high-resolution images of the geothermal reservoir can be delivered. This allows potential transport routes for fluids to be identified as well as regions with high potential for heat extraction to be mapped, which indicates favorable conditions for geothermal exploitation. The presented work investigates the extent to which an improved characterization of geothermal reservoirs can be achieved with new methods of seismic data processing. The summation of traces (stacking) is a crucial step in the processing of seismic reflection data. The common-reflection-surface (CRS) stacking method can be applied as an alternative to the conventional normal moveout (NMO) or dip moveout (DMO) stack. The advantages of the CRS stack, besides an automatic determination of the stacking operator parameters, include an adequate imaging of arbitrarily curved geological boundaries and a significant increase in signal-to-noise (S/N) ratio by stacking far more traces than used in a conventional stack. A major innovation shown in this work is that the quality of the signal attributes that characterize the seismic images can be significantly improved by this modified type of stacking in particular. Improved attribute analysis facilitates the interpretation of seismic images and plays a significant role in the characterization of reservoirs. Variations of lithological and petrophysical properties are reflected by fluctuations of specific signal attributes (e.g. frequency or amplitude characteristics). Their further interpretation can provide a quality assessment of the geothermal reservoir with respect to the capacity of fluids within a hydrological system that can be extracted and utilized. The proposed methodological approach is demonstrated on the basis of two case studies. In the first example, I analyzed a series of 2D seismic profile sections through the Alberta sedimentary basin on the eastern edge of the Canadian Rocky Mountains. In the second application, a 3D seismic volume is characterized in the surroundings of a geothermal borehole located in the central part of the Polish basin. Both sites were investigated with the modified and improved stacking attribute analyses. The results provide recommendations for the planning of future geothermal plants in both study areas.
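For orientation, a hedged sketch of the conventional NMO correction that the CRS stack generalizes; the velocity, geometry, and data below are placeholders, and a nearest-sample correction stands in for proper interpolation:

# Minimal sketch: normal moveout (NMO) correction of one CMP gather,
# t(x) = sqrt(t0**2 + x**2 / v**2), prior to stacking. CRS replaces the
# single-velocity operator with a multi-parameter surface fitted to far
# more traces. All values are placeholders.
import numpy as np

dt, nt = 0.004, 500                          # 4 ms sampling, 2 s traces
offsets = np.array([100.0, 400.0, 800.0])    # m (assumed geometry)
v = 2500.0                                   # m/s stacking velocity (assumed)
gather = np.random.randn(offsets.size, nt)   # stand-in for real traces

t0 = np.arange(nt) * dt
stack = np.zeros(nt)
for trace, x in zip(gather, offsets):
    t_nmo = np.sqrt(t0**2 + (x / v) ** 2)    # reflection traveltime
    idx = np.clip((t_nmo / dt).astype(int), 0, nt - 1)
    stack += trace[idx]                      # flatten, then sum
stack /= offsets.size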
There is a high prevalence of mental and psychosomatic illnesses among teachers. Training and further education programs imparting teacher-specific social competencies play an important role in promoting teachers' health. The present study evaluated the "Lehrer/innen-Coaching nach dem Freiburger Modell" (teacher coaching according to the Freiburg model), which is intended to strengthen teachers' competence to actively and constructively shape difficult interpersonal situations within the school and particularly in the classroom. In this way, stress-related health burdens are to be reduced and the development of serious mental disorders prevented. In the present work, two modified versions of this program are examined for the first time within a state-wide field study. The central evaluation questions concern the effectiveness of the intervention as a health promotion measure (acceptance, efficacy, and a comparison of the efficacy of the two forms of intervention in state-wide use). In addition, the study aims at a comparison with the results of a previous study and at generating further insights into the relationship between aspects of teachers' social competence and their mental health. All teachers in Baden-Württemberg with at least 10 years of professional experience were eligible to participate. For the examination of the efficacy of the measure and the comparison of the two different forms, a quasi-experimental design with two measurement points was used. The data of the 314 participants could be included in the analysis of the intervention's efficacy. The measurement instruments used in the present study were the General Health Questionnaire (GHQ-12), the Maslach Burnout Inventory (MBI-D), and the Jefferson Scale of Empathy (JSE), translated into German and adapted for teachers. The evaluation results show that participation in the "Lehrer/innen-Coaching nach dem Freiburger Modell" is associated with a significant improvement in the health-related dependent variables. Particularly noteworthy is the pronounced improvement in mental health as measured by the GHQ-12. The result of the pre-post comparison of the health scores of both intervention groups was also confirmed in comparison with a zero-intervention group: in line with the hypothesis, participants showed a significantly stronger improvement in mental health than non-participants (zero-intervention group). The two intervention modes, "compact form" and "short form", proved equally effective with respect to improving teacher health. Moreover, the results of the participant survey show that the measure was well received by the target group. Acceptance by the target group is naturally an essential prerequisite for the effectiveness of a behavior-preventive measure based on voluntary participation. As further findings of the study show, teachers' mental health is meaningfully related to an intact interpersonal relationship with students, to successful interaction within the teaching staff characterized by mutual support, and to correspondingly supportive leadership behavior by the school administration.
This makes clear what particular weight is to be attached to successful relationship-building in schools and in the classroom. Regarding the procedure of the present investigation, some methodological limitations concerning the design are discussed. In addition, the outlook of the evaluation study points out how the strengthening of teachers' mental health could be further expanded in the future by linking the present program with additional health-preventive measures addressing behavior, working conditions, and leadership.
Low-order discontinuous Galerkin discretization in an atmospheric multiscale model
(2014)
The dynamics of the Earth's atmosphere spans a range from microphysical turbulence through convective processes and cloud formation to planetary wave patterns. For weather forecasting and for considering the climate over decades and centuries, these dynamics are the subject of modeling with numerical methods. As computing technology advances, new developments of the dynamical cores of climate models become necessary, able to resolve the corresponding processes as resolutions become finer. The dynamical core of a model consists of the implementation (discretization) of the fundamental dynamical equations for the evolution of mass, energy, and momentum, so that they can be solved numerically on computers. The present work investigates the suitability of a low-order discontinuous Galerkin method for atmospheric applications. For equations including the effects of external forces such as gravity and the Coriolis force, this suitability is not self-evident from theory. Necessary adaptations are described that stabilize the method without employing so-called "slope limiters". For the unmodified method it is demonstrated that it is not suited to represent atmospheric equilibria stably. The stabilized model developed here reproduces a series of standard test cases of atmospheric dynamics with the Euler and shallow-water equations across a wide range of spatial and temporal scales. Solving the thermal wind equation along the characteristic curves, which coincide with the isobars, yields atmospheric equilibrium states whose susceptibility to (barotropic and baroclinic) instabilities, essential for the development of cyclones, can be tuned via a prescribed background flow. In contrast to earlier work, these states are defined directly in the z-system (height in meters) and do not need to be transferred from pressure coordinates. With these states, both as a reference state of which only the deviations are treated numerically, and in particular as an initial state subjected to a small perturbation, various simulation studies of barotropic and baroclinic instability are carried out. Of particular note is the simulation-based study, enabled by the formulation of background flows with adjustable baroclinicity, of the degree of baroclinic instability of different wavelengths as a function of static stability and vertical wind shear, corresponding to stability maps from theoretical considerations in the literature.
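A hedged sketch of the lowest-order case for context: for piecewise-constant elements, a discontinuous Galerkin discretization of 1D linear advection with upwind fluxes coincides with the first-order finite-volume scheme below; the grid, wind speed, and initial state are illustrative and the thesis's stabilized atmospheric scheme is not reproduced here:

# Minimal sketch: lowest-order (piecewise-constant) discontinuous
# Galerkin scheme for u_t + a u_x = 0 with upwind numerical flux and
# periodic boundaries; at this order it reduces to first-order finite
# volumes. All parameters are placeholders.
import numpy as np

nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.exp(-200 * (x - 0.5) ** 2)        # initial bump (illustrative)

for _ in range(int(1.0 / dt)):           # advect once around the domain
    flux = a * u                          # upwind flux (a > 0: from the left)
    u = u - dt / dx * (flux - np.roll(flux, 1))
print(f"mass conserved: {u.sum() * dx:.6f}")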
Large-scale floodplain sediment dynamics in the Mekong Delta: present state and future prospects
(2014)
The Mekong Delta (MD) sustains the livelihood and food security of millions of people in Vietnam and Cambodia. It is known as the "rice bowl" of South East Asia and has one of the world's most productive fisheries. Sediment dynamics play a major role in the high productivity of agriculture and fishery in the delta. However, the MD is threatened by climate change, sea level rise and unsustainable development activities in the Mekong Basin. Yet despite its importance and the expected threats, the understanding of present and future sediment dynamics in the MD is very limited. This is a consequence of its large extent, the intricate system of rivers, channels and floodplains, and the scarcity of observations. This thesis therefore aimed at (1) quantifying the suspended sediment dynamics and the associated sediment-nutrient deposition in floodplains of the MD, and (2) assessing the impacts of likely future boundary changes on the sediment dynamics in the MD. The applied methodology combines field experiments and numerical simulation to quantify and predict the sediment dynamics in the entire delta in a spatially explicit manner. The experimental part consists of a comprehensive procedure to monitor the quantity and spatial variability of sediment and associated nutrient deposition in large and complex river floodplains, including an uncertainty analysis. The measurement campaign deployed 450 sediment mat traps in 19 floodplains over the MD for a complete flood season. The data also support the quantification of nutrient deposition in floodplains, based on laboratory analysis of the nutrient fractions of the trapped sediment. The main findings are that the distributions of grain size and nutrient fractions of suspended sediment are homogeneous over the Vietnamese floodplains, but that sediment deposition within and between ring-dike floodplains shows very high spatial variability due to a high level of human interference. The experimental findings provide the essential data for the setup and calibration of a large-scale sediment transport model for the MD. For the simulation studies, a large-scale hydrodynamic model was developed in order to quantify large-scale floodplain sediment dynamics. The complex river-channel-floodplain system of the MD is described by a quasi-2D model linking a hydrodynamic and a cohesive sediment transport model. The floodplains are described as quasi-2D representations linked to rivers and channels modeled in 1D by using control structures. The model setup, based on the experimental findings, neglects erosion and re-suspension processes because of the very high degree of human interference during the flood season. A two-stage calibration with six objective functions was developed in order to calibrate both the hydrodynamic and the sediment transport modules. The objective functions include hydraulic and sediment transport parameters in the main rivers, channels and floodplains. The model results show, for the first time, the spatiotemporal distribution of sediment and associated nutrient deposition rates in the whole MD. The patterns of sediment transport and deposition are quantified for the different sub-systems. The main factors influencing the spatial sediment dynamics are the network of rivers, channels and dike rings, sluice gate operations, the magnitude of the floods, and tidal influences. The superposition of these factors leads to high spatial variability of sediment transport and deposition, in particular in the Vietnamese floodplains.
Depending on the flood magnitude, annual sediment loads reaching the coast vary from 48% to 60% of the sediment load at Kratie, the upper boundary of the MD. Deposited sediment varies from 19% to 23% of the annual load at Kratie in the Cambodian floodplains, and from 1% to 6% in the compartmented and diked floodplains in Vietnam. Annual deposited nutrients (N, P, K) associated with the sediment deposition provide on average more than 50% of the mineral fertilizers typically applied for rice crops in non-flooded ring-dike compartments in Vietnam. This large-scale quantification provides a basis for estimating the benefits of the annual Mekong floods for agriculture and fishery, for assessing the impacts of future changes on the delta system, and for further studies on coastal deposition/erosion. For the estimation of future prospects, a sensitivity-based approach is applied to assess the response of floodplain hydraulics and sediment dynamics to changes in the delta boundary conditions, including hydropower development, climate change in the Mekong River Basin, and effective sea level rise. The developed sediment model is used to simulate the mean sediment transport and sediment deposition in the whole delta system for the baseline (2000-2010) and future (2050-2060) periods. For each driver we derive a plausible range of future changes and discretize it into five levels, resulting in altogether 216 possible factor combinations. Our results thus cover all plausible future pathways of sediment dynamics in the delta based on current knowledge. The uncertainty range of the resulting impacts can be narrowed once more information on these drivers becomes available. Our results indicate that hydropower development dominates the changes in the sediment dynamics of the Mekong Delta, while sea level rise has the smallest effect. The floodplains of the Vietnamese Mekong Delta are much more sensitive to the changes than the other subsystems of the delta. In terms of the median changes of the three combined drivers, the inundation extent is predicted to increase slightly, but overall floodplain sedimentation would be reduced by approximately 40%, while the sediment load to the sea would diminish to half the current rate. These findings provide new and valuable information on the possible impacts of future development on the delta and indicate the most vulnerable areas. Thus, the presented results are a significant contribution to the ongoing international discussion on hydropower development in the Mekong Basin and its impact on the Mekong Delta.
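A hedged sketch of the full factorial design described above; note that the stated 216 combinations correspond to six levels per driver (plausibly the five future levels plus the baseline), and the level labels here are placeholders:

# Minimal sketch: full factorial scenario grid over the three drivers.
# Six levels per driver (baseline + five future levels, assumed labels)
# give 6**3 = 216 combinations, matching the number stated above.
from itertools import product

levels = ["baseline", "L1", "L2", "L3", "L4", "L5"]
drivers = ["hydropower", "climate_change", "sea_level_rise"]

scenarios = list(product(levels, repeat=len(drivers)))
print(len(scenarios))                    # 216
first = dict(zip(drivers, scenarios[0])) # one scenario as a driver mapping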
What is a radical? Somebody who goes against mainstream opinions? An agitator who suggests transforming society at the risk of endangering its harmony? In the political context of the British Isles at the end of the eighteenth century, the word radical had a negative connotation. It referred to the Levellers and the English Civil War; it recalled a period of history that was felt as a traumatic experience. Its stigmas were still vivid in the minds of the political leaders of the time. The reign of Cromwell was certainly the main reason for the general aversion to any form of virulent contestation of power, especially when it contained political claims.
In the English political context, radicalism can be understood as the different campaigns for parliamentary reforms establishing universal suffrage. However, it became evident that not all those who supported such a reform originated from the same social class or shared the same ideals. As a matter of fact, the reformist associations and their leaders often disagreed with each other. Edward Royle and James Walvin claimed that radicalism could not be analyzed historically as a concept, because it was not a homogeneous movement, nor did it have common leaders and a clear ideology. For them, radicalism was merely a loose concept, "a state of mind rather than a plan of action."
At the beginning of the nineteenth century, the newspaper The Northern Star used the word radical in a positive way to designate a person or a group of people whose ideas conformed to those of the newspaper. An opponent of parliamentary reform, however, would use the same word in a negative way, in which case it conveyed a notion of menace. From the very beginning, the term radical covered a large spectrum of ideas and conceptions. In fact, the plurality of what the word conveys is the main characteristic of what a radical is. As a consequence, because the radicals tended to differentiate themselves by their plurality and their differences rather than by common features, it seems impossible to define what radicalism is, even though its suffix -ism implies that it designates a doctrine or an ideology. Nevertheless, the term is accepted by all historians today. From the mid-twentieth century onwards, it was taken for granted that radicalism was a movement that fitted the democratic precepts (universal suffrage, freedom of speech) of our modern world.
Let us first look at radicalism as a convenient way to designate the different popular movements appealing to universal suffrage in the period 1792-1848. We can easily observe, through the succession of men and associations, a long-lasting radical state of mind: Cartwright, Horne Tooke, Thomas Hardy, Francis Burdett, William Cobbett, Henry Hunt, William Lovett, Bronterre O'Brien, Feargus O'Connor, the London Society for Constitutional Information (SCI), the London Corresponding Society (LCS), the Hampden Clubs, the Chartists, etc. These organizations and people acknowledged having many things in common and being inspired by one another in carrying out their activities. These influences can be seen in the language and the political ideology that British historians call "Constitutionalist", but also in the political organization of extra-parliamentary societies. Most of the radicals were eager to redress injustices and, in practice, were inspired by a plan of action drawn from the pamphlets of the True Whigs of the eighteenth century. We contest the argument that the radicals lacked coherence and imagination or that they did not know how to put their ambitions into practice. In fact, their innovative forms of protest left a mark on history and found many successors in the twentieth century. The radicals' prevarications were the result of prohibitive legislation that regulated the life of associations and of the refusal of the authorities to cooperate with them.
As mentioned above, the term radical was widely used, and the contemporaries of the period from the French Revolution to Chartism never quarrelled about the notions the word covered. However, this does not imply that all radicals were the same or that they belonged to the same entity. Like Horne Tooke, the ultra-Tory Reverend Stephens was considered a radical, as were the shoemaker Thomas Hardy and the extravagant aristocrat Francis Burdett. Whether one belonged to the aristocracy, the middle class, the lower class or the Church, nothing could prevent one from being a radical. Indeed, anybody could be a radical in his own way. Radicalism was wide enough to embrace everybody, from revolutionary reformers to paternalistic Tories.
We were interested in clarifying the meaning of the term radical because its inclusive nature has been overlooked by historians. That is why the term radical figures in the original title of our dissertation, Les voix/voies radicales (radical voices/ways to radicalism). In the French title, the words voix/voies are homophones; the first, voix (voice), corresponds to people, while the second, voies (ways), refers to ideas. By this we wanted to show that the word radical belongs to the sphere of ideas and common experience, but also to the nature of human beings.
Methodology
The thesis places less stress on the question of class and its formation than on the circumstances that brought people to change their destiny and that of their fellows, or to modernize the whole society. We challenge the work of E.P. Thompson, who, in his famous book The Making of the English Working Class, defined the radical movements in accordance with an idea of class.
How could a simple shoemaker, Thomas Hardy, become the center of attention during a trial in which he was accused of being the mastermind of a modern revolution? What brought William Cobbett, an ultra-Tory, self-taught intellectual, gradually to espouse the cause of universal suffrage at a time when it was unpopular to do so? Why did a whole population gather to hear Henry Hunt, a gentleman farmer whose background did not destine him to become the champion of the people? It seemed that the easiest way to answer these questions and to understand the nature of the popular movements consisted in studying the lives of their leaders. We aimed at reconstructing the universe that surrounded the principal actors of the reform movements, as if we were privileged witnesses of those times.
The idea of associating the biographies of historical characters over a period of more than fifty years arose when we realized that key events of the reform movements echoed each other, such as the trial of Thomas Hardy in 1794 and the Peterloo massacre of 1819. The more we learned about the major events of radicalism and the lives of their leaders, the more we were intrigued. Finally, one could ask whether being a radical was not, after all, a question of character rather than one of class. The different popular movements in favour of parliamentary reform were in fact far more inclusive and diversified than historians have traditionally led us to believe. For instance, once he had managed to gather a sufficient number of members of the popular classes, Thomas Hardy planned to give control of his association to an intellectual elite led by Horne Tooke.
Moreover, supporters of the radical reforms followed leaders whose backgrounds were completely different from theirs. For example, O'Connor claimed royal descent from the ancient kings of Ireland. William Cobbett, owner of a popular newspaper, was proud of his origins as a farmer. William Lovett, close to the liberals and a few members of parliament, came from a very poor family of fishermen. We have thus put together the lives of these five men, Thomas Hardy, William Cobbett, Henry Hunt, William Lovett and Feargus O'Connor, in order to compose a sort of saga of the radicals. This association gives us a better idea of the characteristics of the different movements in which they participated, but also throws light on the circumstances of their formation and their failures, on the particular atmosphere that prevailed at the time, on the men who influenced these epochs, and finally on the marks they left. These men were at the heart of a whole network and in contact with other actors of peripheral movements. They gathered around themselves close and loyal fellows with whom they shared many struggles, but with whom they also quarrelled and had strong words.
The originality of our approach is reflected in the choice not to study the fluctuations of the radical movements in a linear fashion in which the story follows a strict chronology. We decided to split up the main issue of the thesis into different topics. To do so, we simply described the lives of the people who inspired these movements. Each historical figure covers a chapter, and the general story follows a chronological progression. Sometimes we had to go back in time or discuss the same events in different chapters when the main protagonists lived in the same period.
Radical movements were influenced by people of different backgrounds. What united them above all was their wish to obtain a normalization of the political world, to redress injustices and to obtain parliamentary reform. We paid particular attention to the moments when the lives of these men corresponded to an intense activity of the radical movement or to a transition in its ideas and organization. We were not much interested in their feelings about secondary topics, nor in their affective relations. Furthermore, we had little interest in their opinions on matters unconnected to our topic unless they helped us better understand their personalities. We have purposely reduced the description of our protagonists to their radical sphere. Of course, we discussed their backgrounds and their intellectual development; people are prone to reversals of opinion, and the case of Cobbett is the most striking one.
The lives of these personalities coincided with particular moments of the radical movement, such as the first popular political associations, the first open-air mass meetings, and the first popular newspapers. We wanted to emphasize the personalities of those who delivered speeches and who were present in the radical associations. One could argue that focusing on a particular person carries a high risk of overlooking events and people who were not part of his world. However, it was essential to depart from the kind of analysis or chronicle that had prevailed in studies of the radical movements, as we aimed at offering a point of view that complemented the previous works written on the topic. In order to do so, we deliberately put the human character of the radical movement at the center of our work and used the techniques of biography as a narrative thread.
Conclusion
The life of each historical figure that we have portrayed corresponded to a particular epoch of the radical movement. Comparing the speeches of the radical leaders over a long period of time, we noticed that the radical ideology evolved. The principles of the Rights of Man faded away and gave place to more concrete reasoning, such as the right to benefit from one's own labour. This transition is characterized by the Chartist period of Feargus O'Connor. This does not mean that collective memory and radical tradition ceased to play an important part. The popular classes always responded to Constitutional rhetoric and popular myths; indeed, through them they identified themselves and justified their claims to universal suffrage.
We focused on the lives of a few influential leaders of radicalism in order to understand its evolution and its nature. The description of their lives constituted our narrative thread and enabled us to maintain consistency in our thesis. While the chapters are independent of one another, events and speeches correspond to each other. Sometimes one could believe one was witnessing a repetition of facts and events, as if history were repeating itself endlessly. However, like technical progress, the spirit of the time, the Zeitgeist, undergoes changes and mutations. These features are fundamental elements for comprehending historical phenomena; the latter cannot be reduced to philosophical, sociological, or historical concepts. History is a science with the particularity that the physical reality of its phenomena has a human dimension. As a consequence, it is essential not to lose touch with the human aspect of history when pursuing studies and intellectual activities on a historical phenomenon.
We decided to take a route opposite to the one taken by many historians. We first identified influential people from different epochs before entering into conceptual analysis. Thanks to this compilation of radical leaders, a new and fresh look at the understanding of radicalism was possible. Of course, we were not the first to have studied them, but we ordered them chronologically, much as Plutarch enjoyed juxtaposing Greek and Roman historical figures. Through this technique we wanted to highlight the features of the radical leaders' speeches, personalities and epochs, but also their differences. Finally, we tried to draw the outlines and the heart of the different radical movements in order to follow the ways that led to radicalism. We do not pretend to have offered an original and exclusive definition of radicalism; we mainly wanted to understand the nature of what defines somebody as a radical and to explain why thousands of people decided to believe in these men. Moreover, we wanted to distance ourselves from the ideological debate of the Cold War, which also permeated the interpretation of past events. Too often, the history of radicalism has been narrated either with a form of revolutionary nostalgia or in order to praise the merits of liberalism.
Although the great mass meetings ended in the mid-nineteenth century with the fall of Chartism, the practice spread throughout the world in the twentieth century. Incidentally, the Arab Spring at the beginning of the twenty-first century demonstrated that a popular platform was the best way for the people to claim their rights and destabilize a political system they found too authoritarian. Through protest the people express an essential quality of revolt, which is an expression of emancipation from fear. From then on, a despotic regime loses the psychological terror that helped it maintain itself in power. The balance of power between the government and its people also takes a new turn. The radicals won this psychological victory more than 150 years ago, and yet universal suffrage was obtained only a century later. From the acceptance of the principles of liberty to their cultural practice, a long route has to be travelled to change people's minds. It is a wearisome struggle for the most vulnerable. In the light of Western history, fundamental liberties must be constantly defended. Paradoxically, revolt is an essential and constitutive element of the maintenance of democracy.
The work elaborates on the question of whether coaches in non-professional soccer can influence referee decisions. Modeled from a principal-agent perspective, the managing referee boards can be seen as the principal. They aim at facilitating fair competition in accordance with the existing rules and regulations. In doing so, they assign the referees as impartial agents on the pitch. The coaches take over a non-legitimate, principal-like role, trying to influence the referees even though they have no formal right to do so.
Separate questionnaires were set up for referees and coaches. The coach questionnaire aimed at identifying the extent and the forms of coaches' influencing attempts. The referee questionnaire addressed the questions of whether referees notice possible influencing attempts and how they react to them.
The results were related to official match data in order to identify significant influences on personal sanctions (yellow cards, second yellow cards, red cards) and on the match result.
A slight effect on the referees' decisions is found. However, this effect tends to work against the influencing coach, and there is no evidence of an impact on the match result itself.
Knowing the rates and mechanisms of the geomorphic processes that shape the Earth's surface is crucial to understanding landscape evolution. Modern methods for estimating denudation rates enable us to quantitatively express and compare processes of landscape downwearing that can be traced through time and space: from the seemingly intact, though intensely shattered, phantom blocks of the catastrophically fragmented basal facies of giant rockslides up to denudational noise in orogen-wide data sets averaging over several millennia. This great variety of spatiotemporal scales is both the boon and the bane of geomorphic process rates. Indeed, processes of landscape downwearing can be traced far back in time, helping us to understand the Earth's evolution. Yet this benefit may turn into a drawback, due to scaling issues, if these rates are compared across different observation timescales.
This thesis investigates the mechanisms, patterns and rates of landscape downwearing across the Himalaya-Tibet orogen.
Accounting for the spatiotemporal variability of denudation processes, this thesis addresses landscape downwearing on three distinctly different spatial scales, starting off at the local scale of individual hillslopes, where considerable amounts of debris are generated from rock instantaneously: rocksliding in active mountains is a major impetus of landscape downwearing. Study I provides a systematic overview of the internal sedimentology of giant rockslide deposits and thus meets the challenge of distinguishing them from macroscopically and microscopically similar glacial deposits, tectonic fault-zone breccias, and impact breccias. This distinction is important to avoid erroneous or misleading deductions of paleoclimatic or tectonic implications. -> Grain size analysis shows that rockslide-derived micro-breccias closely resemble those from meteorite impacts or tectonic faults. -> Frictionite may occur more frequently than previously assumed. -> Mössbauer-spectroscopy-derived results indicate basal rock melting in the absence of water, involving short-term temperatures of >1500°C.
Zooming out, Study II tracks the fate of these sediments, using the example of the upper Indus River, NW India. There we use river sand samples from the Indus and its tributaries to estimate basin-averaged denudation rates along a ~320-km reach across the Tibetan Plateau margin, in order to answer the question of whether incision into the western Tibetan Plateau margin is currently active. -> We find an upstream decay of about one order of magnitude (from 110 to 10 mm kyr^-1) in cosmogenic Be-10-derived basin-wide denudation rates across the morphological knickpoint that marks the transition from the Transhimalayan ranges to the Tibetan Plateau. This trend is corroborated by independent bulk petrographic and heavy mineral analyses of the same samples. -> From the observation that tributary-derived basin-wide denudation rates do not increase markedly until ~150-200 km downstream of the topographic plateau margin, we conclude that incision into the Tibetan Plateau is inactive. -> Comparing our postglacial Be-10-derived denudation rates to long-term (>10^6 yr) estimates from low-temperature thermochronometry, ranging from 100 to 750 mm kyr^-1, points to an order-of-magnitude decay of rates of landscape downwearing towards the present. We infer that denudation rates must have been higher in the Quaternary, probably promoted by the interplay of glacial and interglacial stages.
Our investigation of regional denudation patterns in the upper Indus is finally an integral part of Study III, which synthesizes denudation across the Himalaya-Tibet orogen. In order to identify general and time-invariant predictors of Be-10-derived denudation rates, we analyze tectonic, climatic and topographic metrics from an inventory of 297 drainage basins from various parts of the orogen. Aiming to gain insight into the full response distributions of denudation rate to tectonic, climatic and topographic candidate predictors, we apply quantile regression instead of ordinary least squares regression, which has been the standard analysis tool in previous studies that looked for denudation rate predictors. -> We use principal component analysis to reduce our set of 26 candidate predictors, ending up with just three of them: the Aridity Index, the topographic steepness index, and the precipitation of the coldest quarter of the year. -> The topographic steepness index proves to perform best in additive quantile regression. Our consequent prediction of denudation rates on the basin scale involves prediction errors that remain between 5 and 10 mm kyr^-1. -> We conclude that topographic metrics such as river-channel steepness and slope gradient, being representative on the timescales over which our cosmogenic Be-10-derived denudation rates integrate, generally appear better suited as predictors than climatic and tectonic metrics based on decadal records.
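A hedged sketch of the analysis chain named above (PCA-based predictor screening followed by quantile regression); the basin table is synthetic, the predictor names are placeholders, and the thesis's additive quantile regression is simplified to a linear quantile fit:

# Minimal sketch: reduce candidate predictors with PCA, then fit a
# quantile regression of denudation rate on the retained predictor.
# Data are synthetic placeholders for the 297-basin inventory.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ksn": rng.gamma(4, 50, 297),          # topographic steepness (assumed)
    "aridity": rng.uniform(0.1, 2.0, 297),
    "precip_cold": rng.gamma(2, 30, 297),
})
df["denudation"] = 2.0 * df["ksn"] + rng.normal(0, 60, 297)  # mm/kyr, synthetic

X = StandardScaler().fit_transform(df[["ksn", "aridity", "precip_cold"]])
print(PCA().fit(X).explained_variance_ratio_)   # screen redundant predictors

# median (tau = 0.5) quantile regression on the best predictor
fit = smf.quantreg("denudation ~ ksn", df).fit(q=0.5)
print(fit.params)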
In this work, rigid oligospiroketal (OSK) rods were successfully used as basic building blocks for complex 2D and 3D systems. To this end, a difunctionalized rigid rod was synthesized and employed in azide-alkyne click reactions with rods of its own kind and with other branched functionalization units. For two OSK rods linked via a click reaction, theoretical calculations allowed statements about the novel bimodality of their conformation. The term "hinged rod" (Gelenkstab) was introduced for these molecules, since they can adopt both an extended and a kinked form by rotation about a hinge. Building on these findings, it was shown not only that large polymers of up to four OSK rods can be synthesized in a targeted manner, but also that cycles of rigid OSK rods can be prepared by deliberately changing the reaction conditions of the click reaction. The newly developed substance class of hinged rods was investigated with regard to controlling the equilibrium between the kinked and the extended form. For this purpose, the hinged rod was equipped with pyrenyl residues in the terminal position. Fluorescence measurements showed that the equilibrium can be influenced, for example, by the temperature or by the choice of solvent. For broader applications, a simplified synthesis strategy was found with which arbitrary functionalization could be achieved in only one synthetic step. Photoactive hinged rods were synthesized that could be driven selectively to intramolecular dimerization. In addition, amino acids were found to provide a linking element at the end of the hinged rods that permits the stereoselective synthesis of multiple functionalizations. The synthesis of complex hinged rods was demonstrated as a novel field and offers broad research potential for further applications, e.g. in biology (as molecular switches for ion transport) and in materials chemistry (as charge or energy transporters).
The influenza virus infects mammals and birds. The first step in the infection cycle is the attachment of the virus, via its surface protein hemagglutinin (HA), to sugar structures on epithelial cells of the respiratory tract of the host organism. From the three complementarity determining regions (CDRs) of the heavy chain of a monoclonal hemagglutinin-binding antibody, three linear peptides were derived. The binding properties of the three peptides were investigated experimentally by surface plasmon resonance spectroscopy. In agreement with accompanying molecular dynamics simulations, two of the three peptides (PeB and PeC) proved able, analogously to the binding capability of the antibody, to bind influenza viruses of the strain X31 (H3N2 A/Aichi/2/1968). The interaction of the peptide PeB, which potentially interacts with the conserved receptor binding site of HA, was subsequently characterized in more detail. Under suitable immobilization conditions, the detection of influenza viruses was possible in the diagnostically relevant range. The specificity of the PeB-virus binding was demonstrated by suitable controls on both the analyte and the ligand side. Furthermore, the peptide PeB was able to inhibit the binding of X31 viruses to mimetics of their natural receptor, which confirms the specific interaction with the receptor binding site of hemagglutinin. The primary sequence of PeB was then characterized with respect to structure-activity relationships by a complete substitution analysis in microarray format. This also led to improved peptide variants with higher affinity and broader specificity against current influenza strains of different serotypes (e.g. H1N1/2009, H5N1/2004, H7N1/2013). Finally, using a higher-affinity peptide variant with an adapted primary sequence, influenza infection could be inhibited in vitro. The variants derived from the original peptide PeB thus represent receptor molecules for biosensor-based test systems as well as potential antiviral agents.
Cyanobacteria produce about 40 percent of the world's primary biomass, but also a variety of often toxic peptides such as microcystin. Mass developments, so-called blooms, can pose a real threat to the drinking water supply in many parts of the world. This study aimed at characterizing the biological function of microcystin production in Microcystis aeruginosa, one of the most common bloom-forming cyanobacteria.
In a first approach, the effect of elevated light intensity on microcystin production and its binding to cellular proteins was studied. To this end, conventional microcystin quantification techniques were combined with protein-biochemical methods. RubisCO, the key enzyme of primary carbon fixation, was identified as a major microcystin interaction partner. High light exposure strongly stimulated microcystin-protein interactions. Up to 60 percent of the total cellular microcystin was detected bound to proteins, i.e. inaccessible to standard quantification procedures. The underestimation of total microcystin contents when the protein fraction is neglected was also demonstrated in field samples. Finally, an immunofluorescence-based method was developed to identify microcystin-producing cyanobacteria in mixed populations.
The high-light-induced microcystin interaction with proteins suggested an impact of the secondary metabolite on the primary metabolism of Microcystis, e.g. by modulating the activity of enzymes. To address this question, a comprehensive GC/MS-based approach was conducted to compare the accumulation of metabolites in the wild type of Microcystis aeruginosa PCC 7806 and the microcystin-deficient ΔmcyB mutant. Of the 501 detected non-redundant metabolites, 85 (17 percent) accumulated to significantly different levels in the two genotypes upon high light exposure. The accumulation of compatible solutes in the ΔmcyB mutant suggests a role of microcystin in fine-tuning the metabolic flow to prevent stress related to excess light, high oxygen concentration and carbon limitation.
Co-analysis of the widely used model cyanobacterium Synechocystis PCC 6803 revealed profound metabolic differences between cyanobacterial species. Whereas Microcystis channeled more resources towards carbohydrate synthesis, Synechocystis invested more in amino acids. These findings were supported by electron microscopy of high-light-treated cells and by the quantification of storage compounds. While Microcystis accumulated mainly glycogen, to about 8.5 percent of its fresh weight within three hours, Synechocystis produced higher amounts of cyanophycin. The results show that species-specific metabolic features deserve more attention with regard to the biotechnological use of cyanobacteria.
Pulsar wind nebulae (PWNe) are the most abundant TeV gamma-ray emitters in the Milky Way. The radiative emission of these objects is powered by fast-rotating pulsars, which convert part of their rotational energy into winds of relativistic particles. This thesis presents an in-depth study of the detected population of PWNe at high energies. To outline general trends regarding their evolutionary behaviour, a time-dependent model is introduced and compared to the available data. In particular, this work presents two exceptional PWNe that stand out from the rest of the population, namely the Crab Nebula and N 157B. Both objects are driven by pulsars with extremely high rotational energy loss rates and are accordingly often referred to as energetic twins. Modelling the non-thermal multi-wavelength emission of N 157B gives access to specific properties of this object, such as the magnetic field inside the nebula. Comparing the derived parameters to those of the Crab Nebula reveals large intrinsic differences between the two PWNe. Possible origins of these differences are discussed in the context of the two otherwise similar pulsars.
Compared to the TeV gamma-ray regime, the number of detected PWNe is much smaller in the MeV-GeV gamma-ray range. In the latter range, the Crab Nebula stands out due to the recent detection of gamma-ray flares. Such flux enhancements on short time scales of days to weeks were not expected from the theoretical understanding of PWNe. In this thesis, the variability of the Crab Nebula is analysed using data from the Fermi Large Area Telescope (Fermi-LAT). For the presented analysis, a new gamma-ray reconstruction method is used, providing a higher sensitivity and a lower energy threshold compared to previous analyses. The derived gamma-ray light curve of the Crab Nebula is investigated for flares and periodicity. The detected flares are analysed regarding their energy spectra, and their variety and commonalities are discussed. In addition, a dedicated analysis of the flare that occurred in March 2013 is performed. The derived short-term variability time scale is roughly 6 h, implying that a small region inside the Crab Nebula is responsible for the enigmatic flares. The most promising theories explaining the origin of the flux eruptions and gamma-ray variability are discussed in detail.
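The inferred compactness follows from the standard light-crossing-time (causality) argument, sketched here for orientation: a variability time scale of $\Delta t \approx 6\,\mathrm{h}$ limits the size of the emitting region to

\[
R \;\lesssim\; c\,\Delta t \;\approx\; 3\times10^{10}\,\mathrm{cm\,s^{-1}} \times 2.2\times10^{4}\,\mathrm{s} \;\approx\; 6.5\times10^{14}\,\mathrm{cm},
\]

a tiny fraction of the parsec-scale nebula; relativistic bulk motion would relax this limit by the Doppler factor.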
In the technical part of this work, a new analysis framework is presented. The introduced software, called gammalib/ctools, is currently being developed for the future CTA observatory. The analysis framework is extensively tested using data from the H.E.S.S. experiment. To conduct proper data analysis in the likelihood framework of gammalib/ctools, a model describing the distribution of background events in H.E.S.S. data is presented. The software provides the infrastructure to combine data from several instruments in one analysis. To study the gamma-ray emitting PWN population, data from Fermi-LAT and H.E.S.S. are combined in the likelihood framework of gammalib/ctools. In particular, the spectral peak, which usually lies in the energy regime where the two instruments overlap, is determined with the presented analysis framework. The derived measurements are compared to the predictions of the time-dependent model. The combined analysis supports the conclusion of a diverse population of gamma-ray emitting PWNe.
Bacteria respond to changing environmental conditions by switching the global pattern of expressed genes. In response to specific environmental stresses, the cell activates stress-specific molecules such as sigma factors. These reversibly bind the RNA polymerase to form the so-called holoenzyme and direct it towards the appropriate stress response genes. In exponentially growing E. coli cells, the majority of the transcriptional activity is carried out by the housekeeping sigma factor, while stress responses are often under the control of alternative sigma factors. Different sigma factors compete for binding to a limited pool of RNA polymerase (RNAP) core enzymes, providing a mechanism for cross-talk between genes or gene classes via the sharing of expression machinery. To quantitatively analyze the contribution of sigma factor competition to global changes in gene expression, we develop a thermodynamic model that describes equilibrium binding between sigma factors and core RNAP, transcription, non-specific binding to DNA, and the modulation of the availability of the molecular components.
Association of the housekeeping sigma factor with RNAP is generally favored by its abundance and its higher binding affinity to the core. In order to promote transcription by alternative sigma subunits, the bacterial cell modulates the transcriptional efficiency in a reversible manner through several strategies, such as anti-sigma factors, 6S RNA and, more generally, any kind of transcriptional regulator (e.g. activators or inhibitors). By shifting the outcome of the sigma factor competition for the core, these modulators bias the transcriptional program of the cell. The model is validated by comparison with in vitro competition experiments, with which excellent agreement is found. We observe that transcription is affected via the modulation of the concentrations of the different types of holoenzymes, so that saturated promoters are only weakly affected by sigma factor competition. However, in the case of overlapping promoters or promoters recognized by two types of sigma factors, we find that even saturated promoters are strongly affected.
Active transcription effectively lowers the affinity between the sigma factor driving it and the core RNAP, resulting in complex cross-talk effects and raising the question of how relevant in vitro affinity measurements are in the cell. We also estimate that sigma factor competition is not strongly affected by non-specific binding of core RNAPs, sigma factors, and holoenzymes to DNA. Finally, we analyze the role of the increased core RNAP availability upon the shut-down of ribosomal RNA transcription during the stringent response. We find that passive up-regulation of alternative sigma-dependent transcription is not only possible, but also displays hypersensitivity arising from the sigma factor competition. Our theoretical analysis thus provides support for a significant role of passive control during this global switch of the gene expression program and gives new insights into RNAP partitioning in the cell.
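The competition mechanism can be illustrated with a minimal numerical sketch (invented concentrations and affinities for illustration only, not the thesis' fitted model): holoenzyme levels at equilibrium follow from mass conservation of the shared core pool.

```python
# Minimal sketch of equilibrium sigma-factor competition for a shared pool
# of core RNAP; all numbers are invented illustration values.
from scipy.optimize import brentq

totals = {"sigma70": 700.0, "sigma32": 100.0, "sigma38": 200.0}  # total sigma
Kd     = {"sigma70":   1.0, "sigma32":  10.0, "sigma38":   5.0}  # core affinity
E_total = 500.0  # total core RNAP (arbitrary units)

def residual(E):
    # mass conservation: free core plus all holoenzymes equals total core
    return E + sum(E * totals[s] / (Kd[s] + E) for s in totals) - E_total

E_free = brentq(residual, 1e-9, E_total)
holo = {s: E_free * totals[s] / (Kd[s] + E_free) for s in totals}
print(holo)
```

Raising one sigma factor's abundance or tightening its core affinity draws holoenzyme away from its competitors, which is the cross-talk effect described above.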
The characterization of exoplanets is a young and rapidly expanding field in astronomy. It includes a method called transmission spectroscopy, which searches for planetary spectral fingerprints in the light received from the host star during the event of a transit. This technique allows for conclusions on the atmospheric composition at the terminator region, the boundary between the day and night side of the planet. Although observationally a big challenge, first attempts in the community have been successful in detecting several absorption features in the optical wavelength range, for example a Rayleigh-scattering slope and absorption by sodium and potassium. Other objects, however, show a featureless spectrum, indicative of a cloud or haze layer of condensates masking the atmospheric layers that would otherwise be probed.
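The relevant quantities can be summarized by two standard relations of the field, included here for orientation: the measured transit depth is $\delta(\lambda) = (R_p(\lambda)/R_\star)^2$, and for a Rayleigh-scattering atmosphere with cross-section $\sigma \propto \lambda^{-4}$ the apparent planet radius follows

\[
\frac{\mathrm{d}R_p}{\mathrm{d}\ln\lambda} \;=\; \alpha\,H, \qquad H = \frac{k_B T}{\mu g},
\]

with $\alpha = -4$, atmospheric scale height $H$, temperature $T$, mean molecular weight $\mu$ and surface gravity $g$; a featureless spectrum corresponds to a vanishing slope.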
In this work, we performed transmission spectroscopy by spectrophotometry of three hot Jupiter exoplanets. When we began the work on this thesis, optical transmission spectra were available for only two exoplanets. Our main goal was to enlarge the sample of probed objects in order to learn by comparative exoplanetology whether certain absorption features are common. We selected the targets HAT-P-12b, HAT-P-19b and HAT-P-32b, for which the detection of atmospheric signatures is feasible with current ground-based instrumentation. In addition, we monitored the host stars of all three objects photometrically to correct for influences of stellar activity where necessary.
The obtained measurements of all three objects favor featureless spectra. A variety of atmospheric compositions can explain the lack of wavelength-dependent absorption. However, the broad trend of featureless spectra in planets over a wide range of temperatures, found in this work and in similar studies recently published in the literature, favors an explanation based on the presence of condensates, even at very low concentrations, in the atmospheres of these close-in gas giants. This result points towards the general conclusion that the capability of transmission spectroscopy to determine the atmospheric composition is limited, at least for measurements at low spectral resolution.
In addition, we refined the transit parameters and ephemerides of HAT-P-12b and HAT-P-19b. Our monitoring campaigns allowed for the detection of the stellar rotation period of HAT-P-19 and a refined age estimate. For HAT-P-12 and HAT-P-32, we derived upper limits on their potential variability. The calculated upper limits on systematic effects of starspots on the derived transmission spectra were found to be negligible for all three targets. Finally, we discussed the observational challenges in the characterization of exoplanet atmospheres and the importance of correlated noise in the measurements, and formulated suggestions on how to improve the robustness of results in future work.
This dissertation is devoted to the rule-of-law problems in the Constitution of Georgia and in the case law of the Constitutional Court of Georgia, and offers a comprehensive analysis of the most important rule-of-law characteristics in this regard. Beyond this analysis of past and present conditions, it forecasts future developments by means of perspectivist models of thought and proposes possible solutions to the identified problems. The work draws in particular on the rich experience of German rule-of-law doctrine.
The aim of the present thesis is to answer the question of to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage: the finding that ambiguous sentences can be easier to process than unambiguous sentences. This finding may appear counterintuitive, because more meanings should be associated with higher computational effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
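The race logic can be made explicit with a toy simulation (the lognormal timing assumptions are ours, purely for illustration, and not part of the original model specification):

```python
# Toy Monte Carlo of the URM's core prediction: when two parses race in
# parallel, the adopted interpretation finishes at the minimum of two
# draws, so ambiguous sentences are read faster on average.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t1 = rng.lognormal(mean=6.0, sigma=0.4, size=n)  # build time, parse A (ms)
t2 = rng.lognormal(mean=6.0, sigma=0.4, size=n)  # build time, parse B (ms)

unambiguous = t1.mean()                # only one structure is permissible
ambiguous = np.minimum(t1, t2).mean()  # first finished parse is adopted
print(f"unambiguous: {unambiguous:.0f} ms, ambiguous: {ambiguous:.0f} ms")
```

The minimum of two independent finishing times is stochastically smaller than either one, which yields the ambiguity advantage without any reference to task demands.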
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless it is absolutely necessary; in other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage are due to the fact that in those experiments readers were not required to fully understand the sentences.
In this thesis, these two models of the parser's actions at choice points in the sentence are presented and evaluated. First, it is argued that Swets et al.'s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of the URM as well as of the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented, which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. This experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for this fact. Finally, a third experiment is presented, which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with the experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel and fallible, and that it is sensitive to task demands.
Entrepreneurship is known to be a main driver of economic growth. Hence, governments have an interest in supporting and promoting entrepreneurial activities. Start-up subsidies, which have been analyzed extensively, only aim at mitigating the lack of financial capital. However, some entrepreneurs also lack human, social, and managerial capital. One way to address these shortcomings is to subsidize coaching programs for entrepreneurs. However, theoretical and empirical evidence about business coaching and about programs subsidizing coaching is scarce. This dissertation gives an extensive overview of coaching and is the first empirical study for Germany analyzing the effects of coaching programs on their participants. In the theoretical part of the dissertation, the process of a business start-up is described, and it is discussed how and in which stage of the company's evolution coaching can influence entrepreneurial success. The concept of coaching is compared to other non-monetary types of support such as training, mentoring, consulting, and counseling. Furthermore, national and international support programs are described. Most programs have either no or small positive effects. However, there is little quantitative evidence in the international literature. In the empirical part of the dissertation, the effectiveness of coaching is shown by evaluating two German coaching programs, which support entrepreneurs via publicly subsidized coaching sessions. One of the programs targets entrepreneurs who were employed before becoming self-employed, whereas the other program targets formerly unemployed entrepreneurs. The analysis is based on the evaluation of a quantitative and a qualitative dataset. The qualitative data were gathered in intensive one-on-one interviews with coaches and entrepreneurs. These data give detailed insight into the coaching topics, duration, process and effectiveness, and into the thoughts of coaches and entrepreneurs. The quantitative data include information about 2,936 German-based entrepreneurs. Using propensity score matching (see the schematic sketch below), the success of participants in the two coaching programs is compared with that of adequate groups of non-participants. In contrast to many other studies, personality traits are also observed and controlled for in the matching process. The results show that only the program for formerly unemployed entrepreneurs has small positive effects: participants have a higher survival probability in self-employment and a higher probability of hiring employees than matched non-participants. In contrast, the program for formerly employed individuals has negative effects: compared to individuals who did not participate in the coaching program, participants have a lower probability of staying in self-employment, lower net earned income, fewer employees and lower life satisfaction. There are several reasons for these differing results of the two programs. First, formerly unemployed individuals have more basic coaching needs than formerly employed individuals. Coaches can satisfy these basic coaching needs, whereas formerly employed individuals have more complex business problems, which cannot easily be solved by a coaching intervention. Second, the analysis reveals that formerly employed individuals are generally very successful. It is easier to increase the success of formerly unemployed individuals, as they start from a lower base level of success than formerly employed individuals. An effect heterogeneity analysis shows that coaching effectiveness differs by region.
Coaching for previously unemployed entrepreneurs is especially useful in regions with unfavorable labor market conditions. In summary, and in line with previous literature, it is found that coaching has only small effects on the success of entrepreneurs. The previous employment status, the characteristics of the entrepreneur and the regional labor market conditions play a crucial role in the effectiveness of coaching. In conclusion, coaching needs to be well tailored to the individual and applied thoroughly. Governments should therefore design and provide coaching programs only after due consideration.
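For readers unfamiliar with the method, the matching step can be sketched schematically (hypothetical dataset and variable names; the dissertation's exact specification, covariates and matching details differ):

```python
# Schematic propensity score matching: estimate participation propensity,
# then match each participant to the nearest non-participant on that score.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("entrepreneurs.csv")          # hypothetical dataset
X = df[["age", "education", "big5_openness"]]  # incl. personality traits

ps_model = LogisticRegression(max_iter=1000).fit(X, df["coached"])
df["ps"] = ps_model.predict_proba(X)[:, 1]

controls = df[df["coached"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(controls[["ps"]])
_, idx = nn.kneighbors(df.loc[df["coached"] == 1, ["ps"]])
matched = controls.iloc[idx.ravel()]

# Average treatment effect on the treated, e.g. survival in self-employment
att = df.loc[df["coached"] == 1, "survived"].mean() - matched["survived"].mean()
print(att)
```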
Boolean constraint solving technology has made tremendous progress over the last decade, leading to industrial-strength solvers, for example in the areas of answer set programming (ASP), the constraint satisfaction problem (CSP), propositional satisfiability (SAT) and satisfiability of quantified Boolean formulas (QBF). However, in all these areas there exist multiple solving strategies that work well on different applications; no strategy dominates all other strategies. Therefore, no individual solver shows robust state-of-the-art performance across all kinds of applications. Additionally, the question arises of how to choose a well-performing solving strategy for a given application; this is a challenging question even for solver and domain experts. One way to address this issue is the use of portfolio solvers, that is, sets of different solvers or solver configurations. We present three new automatic portfolio methods: (i) automatic construction of parallel portfolio solvers (ACPP) via algorithm configuration, (ii) solving the NP-hard problem of finding effective algorithm schedules with answer set programming (aspeed), and (iii) a flexible algorithm selection framework (claspfolio2) allowing for fair comparison of different selection approaches. All three methods show improved performance and robustness in comparison to individual solvers on heterogeneous instance sets from many different applications. Since parallel solvers are important to effectively solve hard problems on parallel computation systems (e.g., multi-core processors), we extend all three approaches to be effectively applicable in parallel settings. We conducted extensive experimental studies on instance sets from ASP, CSP, MAXSAT, Operations Research (OR), SAT and QBF that indicate an improvement in the state of the art of solving heterogeneous instance sets. Last but not least, from our experimental studies we derive practical advice regarding the question of when to apply which of our methods.
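To illustrate the scheduling idea behind aspeed: given per-instance runtimes for each solver, a sequential schedule assigns each solver a time slice, and an instance counts as solved if any slice suffices. The sketch below merely evaluates a given schedule on invented data; aspeed itself encodes the NP-hard problem of finding good schedules in ASP.

```python
# Evaluate a sequential algorithm schedule on a runtime matrix
# (invented data; not aspeed's ASP encoding).
import numpy as np

runtimes = np.array([[3.0, 50.0],   # instances x solvers (seconds)
                     [40.0, 2.0],
                     [70.0, 80.0]])
schedule = [(0, 10.0), (1, 10.0)]   # run solver 0 for 10 s, then solver 1

def solved(rt_row, schedule):
    # an instance is solved if some slice is long enough for its solver
    return any(rt_row[s] <= budget for s, budget in schedule)

n_solved = sum(solved(row, schedule) for row in runtimes)
print(f"{n_solved}/{len(runtimes)} instances solved")  # 2/3 here
```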
Different methods for determining ground-penetrating radar (GPR) wave velocities were developed and successfully applied. The methods employ statistical techniques and swarm-intelligence algorithms. It was shown that the new methods determine GPR wave velocities faster, more precisely and more reproducibly than conventional methods.
With improved GPR wave velocities, the distorted three-dimensional images of the uppermost ten meters of the subsurface that can be produced from GPR data can be corrected. In these corrected images, realistic depths of layers or objects in the subsurface become measurable more reliably. Moreover, more precise wave velocities improve the determination of soil parameters such as water content or clay fraction. The presented methods permit a quantitative statement of the errors of the determined wave velocities and of the resulting depths and soil parameters in the subsurface. The advantages of these newly developed methods for characterizing the uppermost meters of the subsurface were demonstrated on field examples.
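As an illustration of how a swarm-intelligence algorithm can estimate a GPR wave velocity, the following generic particle-swarm sketch fits a diffraction hyperbola t(x) = (2/v) sqrt(d^2 + (x - x0)^2) to synthetic travel times; it is not the implementation developed in the thesis.

```python
# Generic particle swarm fitting a diffraction hyperbola for (v, d, x0);
# synthetic data and all settings are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)           # antenna positions (m)
v_true, d_true, x0_true = 0.1, 2.0, 5.0  # v in m/ns, depth and apex in m
t_obs = 2.0 / v_true * np.hypot(d_true, x - x0_true) + rng.normal(0, 1.0, x.size)

def misfit(p):                           # p = (v, d, x0)
    return np.sum((2.0 / p[0] * np.hypot(p[1], x - p[2]) - t_obs) ** 2)

lo, hi = np.array([0.05, 0.5, 0.0]), np.array([0.3, 5.0, 10.0])
pos = rng.uniform(lo, hi, (30, 3))       # 30 particles in 3D parameter space
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([misfit(p) for p in pos])
for _ in range(200):
    gbest = pbest[pcost.argmin()]
    vel = (0.7 * vel + 1.5 * rng.random((30, 3)) * (pbest - pos)
                     + 1.5 * rng.random((30, 3)) * (gbest - pos))
    pos = np.clip(pos + vel, lo, hi)
    cost = np.array([misfit(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
print(pbest[pcost.argmin()])             # estimate close to (0.1, 2.0, 5.0)
```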
The aim of this work was the development of dye-labeled polymers that exhibit a temperature-driven coil-collapse phase transition in aqueous solution ("thermoresponsive polymers") and can translate this transition into an optical signal. Within a small temperature interval, such polymers undergo a massive change in their behavior, e.g. in their conformation and degree of swelling. These changes are accompanied by a switch of their solubility characteristics from hydrophilic to hydrophobic. Poly(N-isopropylacrylamide) (polyNIPAm), poly(oligoethylene glycol acrylate) (polyOEGA) and poly(oligoethylene glycol methacrylate) (polyOEGMA) were used as matrix polymers, into which suitable dyes were incorporated by copolymerization. Compact solvatochromic coumarin and naphthalimide derivatives proved particularly suitable for translating the phase transition into an optical signal: they impaired neither the polymerization behavior nor the phase transition, but responded strongly, in both color and fluorescence, to the polarity of the solvent. Furthermore, systems were developed that generate an optical signal coupled to the phase transition by means of energy transfer (FRET), using a coumarin as the donor dye and a polythiophene as the acceptor dye. It turned out that, despite apparent similarity, certain polymers respond markedly to a temperature stimulus with a change in their spectral properties, while others do not. The molecular causes of this were investigated. Probable reasons for the absence of a spectral change in oligo(ethylene glycol)-based polymers are, on the one hand, a lack of dehydration effectiveness owing to the absence of a self-sufficient hydrogen-bonding motif and, on the other hand, the steric shielding of the dyes by the oligo(ethylene glycol) side chains. As proof of principle for the usefulness of such systems in bioanalytics, a system was developed in which the solubility of a thermoresponsive polymer was altered by an antibody-antigen reaction. The binding of even small amounts of an antibody could thus be read out directly by optical means and was visible to the naked eye.
The present study examines the societal role of contemporary mathematics education at German general-education schools from a critical sociological perspective. The focus of interest is the socialization experienced through mathematics education. The study comprises, among other things, a literature discussion, the elaboration of a sociological framework based on the work of Michel Foucault, and two sub-studies on the sociology of logic and of calculation. Finally, dispositifs of the mathematical are described, which set out in what way, and with what personal and societal consequences, contemporary mathematics education establishes a particular mindset.
Following the principles of green chemistry, a simple and efficient synthesis of functionalised imidazolium zwitterionic compounds (ImZw) from renewable resources was developed, based on a modified one-pot Debus-Radziszewski reaction. The combination of different carbohydrate-derived 1,2-dicarbonyl compounds and amino acids is a simple way to modulate the properties and introduce different functionalities. A representative compound was assessed as an acid catalyst and converted into acidic ionic liquids by reaction with several strong acids. The reactivity of the double carboxylic functionality was explored by esterification with long- and short-chain alcohols, as well as with functionalised amines, which led to the straightforward formation of surfactant-like molecules or bifunctional esters and amides. One of these di-esters is currently being investigated for the synthesis of poly(ionic liquids). The functionalisation of cellulose with one of the bifunctional esters was investigated, and preliminary tests employing it for the functionalisation of filter papers were carried out successfully. The imidazolium zwitterions were converted into ionic liquids via hydrothermal decarboxylation in flow, a benign and scalable technique. This method provides access to imidazolium ionic liquids via a simple and sustainable methodology, whilst completely avoiding contamination with halide salts. Different ionic liquids can be generated depending on the functionality contained in the ImZw precursor. Two alanine-derived ionic liquids were assessed for their physicochemical properties and for applications as solvents for the dissolution of cellulose and for the Heck coupling.
The adaptation of cell growth and proliferation to environmental changes is essential for the survival of biological systems. The evolutionarily conserved Ser/Thr protein kinase "Target of Rapamycin" (TOR) has emerged as a major signaling node that integrates the sensing of numerous growth signals with the coordinated regulation of cellular metabolism and growth. Although the TOR signaling pathway has been widely studied in heterotrophic organisms, research on TOR in photosynthetic eukaryotes has been hampered by the reported resistance of land plants to rapamycin. Thus, the finding that Chlamydomonas reinhardtii is sensitive to rapamycin establishes this unicellular green alga as a useful model system to investigate TOR signaling in photosynthetic eukaryotes.
The observation that rapamycin does not fully arrest Chlamydomonas growth, in contrast to observations made in other organisms, prompted us to investigate the regulatory function of TOR in Chlamydomonas in the context of the cell cycle. Therefore, a growth system that allowed synchronous growth under largely unperturbed cultivation in a fermenter was set up, and the synchronized cells were characterized in detail. In a highly resolved kinetic study, the synchronized cells were analyzed for changes in cytological parameters such as cell number, size distribution and starch content. Furthermore, we applied mass spectrometric analysis for profiling of the primary and lipid metabolism. This system was then used to analyze the response dynamics of the Chlamydomonas metabolome and lipidome to TOR inhibition by rapamycin.
The results show that TOR inhibition reduces cell growth, delays cell division and daughter cell release, and results in a 50% reduced cell number at the end of the cell cycle. Consistent with the growth phenotype, we observed strong changes in carbon and nitrogen partitioning towards rapid conversion into carbon and nitrogen stores, namely an accumulation of starch, triacylglycerol and arginine. Interestingly, the conversion of carbon into triacylglycerol appears to have occurred faster than that into starch after TOR inhibition, which may indicate a more dominant role of TOR in the regulation of TAG biosynthesis than in that of starch.
This study shows, for the first time, a complex picture of dynamic metabolic and lipidomic changes during the cell cycle of Chlamydomonas reinhardtii and, furthermore, reveals a complex regulation and adjustment of metabolite pools and lipid composition in response to TOR inhibition.
In mammals, bitter taste presumably serves to detect and avoid toxic substances. However, bitter compounds can also be healthy and are often readily consumed with food. Whether they can be distinguished by taste is, however, disputed. Bitter compounds are detected by oral bitter taste receptors, the TAS2Rs (human) and Tas2rs (murine), respectively. The literature increasingly suggests that Tas2rs are not only expressed in extragustatory organs but may also fulfill important tasks there, which in turn requires the elucidation of their not yet fully deciphered modes of function. It is, for instance, still unknown whether all Tas2rs identified as functional so far really fulfill gustatory functions.
As part of the characterization of newly generated mouse lines genetically modified in the locus of the bitter taste receptor Tas2r131, the present work investigated the gustatory and extragustatory expression of Tas2r131. The detection of Tas2r131 not only in fungiform papillae, vallate and foliate papillae (VP+FoP), the palate, the nasopalatine duct, the vomeronasal organ and the epiglottis, but also in the thymus, testes and caput epididymis, in brain areas, and in the geniculate ganglion provided the basis for further studies. The present work additionally shows that Tas2r108, Tas2r126, Tas2r135, Tas2r137 and Tas2r143 are expressed in blood, pointing to a heterogeneous function of the Tas2rs. Moreover, the expression of all 35 Tas2rs described as functional was demonstrated for the first time in the gustatory VP+FoP epithelium of C57BL/6 mice, underscoring their relevance as functional taste receptors.
Furthermore, investigations aimed at elucidating a possible ability to discriminate bitter compounds, carried out in the taste papillae of mice with fluorescently labeled or ablated Tas2r131 cells, showed that Tas2r131-expressing cells form a subpopulation of Tas2r cells. Moreover, ordered Tas2r expression patterns exist within the bitter cells, following the chromosomal location of their genes. Isolated bitter cells respond heterogeneously to known bitter compounds. Mice with an ablated Tas2r131 cell population still possess other Tas2r cells and consequently can hardly taste some bitter compounds but still taste others very well. These findings demonstrate the existence of distinct gustatory Tas2r cell populations, which provide the prerequisite for detecting bitter compounds heterogeneously. Whether this constitutes the basis for divergent behavior towards intolerable and harmless, or even beneficial, bitter compounds can now be tested in behavioral experiments with the help of the Tas2r expression patterns presented here.
Bitter taste perception in mammals thus emerges as a highly complex mechanism, whose multilayered nature is once again highlighted by the heterogeneous Tas2r expression and function patterns newly demonstrated here.
The Epoch of Reionization marks, after recombination, the second major change in the ionization state of the universe: the transition from a neutral to an ionized state. It starts with the appearance of the first stars and galaxies; a fraction of the high-energy photons emitted by galaxies permeates the intergalactic medium (IGM) and gradually ionizes the hydrogen, until the IGM is completely ionized at z~6 (Fan et al., 2006). While the progress of reionization is driven by galaxy evolution, it changes the ionization and thermal state of the IGM substantially and affects subsequent structure and galaxy formation through various feedback mechanisms.
Understanding this interaction between reionization and galaxy formation is further impeded by a lack of knowledge of high-redshift galactic properties such as the dust distribution and the escape fraction of ionizing photons. Lyman Alpha Emitters (LAEs) represent a sample of high-redshift galaxies that are sensitive to all of these galactic properties and to the effects of reionization.
In this thesis we aim to understand the progress of reionization by performing cosmological simulations, which allow us to investigate the limits of constraining reionization with high-redshift galaxies such as LAEs, and to examine how galactic properties and the ionization state of the IGM affect the visibility and observed quantities of LAEs and Lyman Break Galaxies (LBGs).
In the first part of this thesis we focus on performing radiative transfer calculations to simulate reionization. We have developed a mapping-sphere scheme which, starting from spherically averaged temperature and density fields, uses our 1D radiative transfer code to compute the effect of each source on the IGM temperature and ionization (HII, HeII, HeIII) profiles, which are subsequently mapped onto a grid. Furthermore, we have updated the 3D Monte Carlo radiative transfer code pCRASH, enabling detailed reionization simulations that take individual source characteristics into account.
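A common analytic reference point for such 1D ionization calculations (a textbook relation, not a result of this thesis) is the Strömgren radius of the ionized region around a single source,

\[
R_S = \left(\frac{3\,\dot N_\gamma}{4\pi\,\alpha_B\,n_{\mathrm H}^2}\right)^{1/3},
\]

where $\dot N_\gamma$ is the source's ionizing photon rate, $n_{\mathrm H}$ the hydrogen number density and $\alpha_B$ the case-B recombination coefficient; radiative transfer codes of this kind are typically validated against this analytic limit.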
In the second part of this thesis we perform a reionization simulation by post-processing a smoothed-particle hydrodynamics (SPH) simulation (GADGET-2) with 3D radiative transfer (pCRASH), where the ionizing sources are modelled according to the characteristics of the stellar populations in the hydrodynamical simulation. Following the ionization fractions of hydrogen (HI) and helium (HeII, HeIII) and the temperature in our simulation, we find that reionization starts at z~11 and ends at z~6, and that high density regions near sources are ionized earlier than low density regions far from sources.
In the third part of this thesis we couple the cosmological SPH simulation and the radiative transfer simulations with a physically motivated, self-consistent model for LAEs, in order to understand the influence of the ionization state of the IGM, the escape fraction of ionizing photons from galaxies, and dust in the interstellar medium (ISM) on the visibility of LAEs. Comparison of our model's results with the LAE Lyman Alpha (Lya) and UV luminosity functions at z~6.6 reveals a three-dimensional degeneracy between the ionization state of the IGM, the escape fraction of ionizing photons and the ISM dust distribution, which implies that LAEs act as tracers not only of reionization but also of the ionizing photon escape fraction and the ISM dust distribution. This degeneracy does not break down even when we compare the simulated with the observed clustering of LAEs at z~6.6. However, our results show that reionization has the largest impact on the amplitude of the LAE angular correlation functions, and its imprints are clearly distinguishable from those of properties on galactic scales. These results show that reionization cannot be constrained tightly by using LAE observations exclusively. Further observational constraints, e.g. tomographies of the redshifted hydrogen 21cm line, are required.
In addition, we use our LAE model to probe the question of when a galaxy is visible as a LAE or a LBG. Within our model, galaxies above a critical stellar mass can produce enough luminosity to be visible as a LBG and/or a LAE. Finding that the duty cycle of LBGs with Lya emission increases with the UV luminosity or stellar mass of the galaxy, our model reveals that the brightest (and most massive) LBGs most often show Lya emission.
Predicting the Lya equivalent width (Lya EW) distribution and the fraction of LBGs showing Lya emission at z~6.6, we reproduce the observed trend of Lya EW with UV magnitude. However, the Lya EWs of the UV-brightest LBGs exceed observations and can only be reconciled by accounting for an increased Lya attenuation of massive galaxies, which implies that the observed Lya-brightest LAEs do not necessarily coincide with the UV-brightest galaxies. We have analysed the dependencies of LAE observables on the properties of the galactic and intergalactic medium and on the LAE-LBG connection, which enhances our understanding of the nature of LAEs.
In the presented thesis, the most advanced photon reconstruction technique of ground-based γ-ray astronomy is adapted to the H.E.S.S. 28 m telescope. The method is based on a semi-analytical model of electromagnetic particle showers in the atmosphere. The properties of cosmic γ-rays are reconstructed by comparing the camera image of the telescope with the Cherenkov emission expected from the shower model. To suppress the dominant background from charged cosmic rays, events are selected based on several criteria. The performance of the analysis is evaluated with simulated events. The method is then applied to two sources that are known to emit γ-rays. The first of these is the Crab Nebula, the standard candle of ground-based γ-ray astronomy. The results for this source confirm the expected performance of the reconstruction method, with the much lower energy threshold compared to H.E.S.S. I being of particular importance. A second analysis is performed on the region around the Galactic Centre. The analysis results emphasise the capability of the new telescope to measure γ-rays in an energy range that is interesting for both theoretical and experimental astrophysics. The presented analysis features the lowest energy threshold ever reached in ground-based γ-ray astronomy, opening a new window onto the precise measurement of the physical properties of time-variable sources at energies of several tens of GeV.
The data quality of real-world datasets needs to be constantly monitored and maintained to allow organizations and individuals to reliably use their data. In particular, data integration projects suffer from poor initial data quality and consequently consume more effort and money. Commercial products and research prototypes for data cleansing and integration help users to improve the quality of individual and combined datasets. They can be divided into standalone systems and database management system (DBMS) extensions. On the one hand, standalone systems do not interact well with DBMSs and require time-consuming data imports and exports. On the other hand, DBMS extensions are often limited by the underlying system and do not cover the full set of data cleansing and integration tasks.
We overcome both limitations by implementing a concise set of five data cleansing and integration operators on the parallel data analytics platform Stratosphere. We define the semantics of the operators, present their parallel implementation, and devise optimization techniques for individual operators and combinations thereof. Users specify declarative queries in our query language METEOR with our new operators to improve the data quality of individual datasets or to integrate them into larger datasets. By integrating the data cleansing operators into the higher-level language layer of Stratosphere, users can easily combine cleansing operators with operators from other domains, such as information extraction, into complex data flows. Through a generic description of the operators, the Stratosphere optimizer can reorder operators, even across domains, to find better query plans.
As a case study, we reimplemented a part of the large Open Government Data integration project GovWILD with our new operators and show that our queries run significantly faster than the original GovWILD queries, which rely on relational operators. The evaluation reveals that our operators exhibit good scalability on up to 100 cores, so that even larger inputs can be processed efficiently by scaling out to more machines. Finally, our scripts are considerably shorter than the original GovWILD scripts, which makes them easier to maintain.