Organic bulk heterojunction (BHJ) solar cells based on polymer:fullerene blends are a promising route to low-cost solar energy conversion. Despite significant improvements in power conversion efficiency in recent years, the fundamental working principles of these devices are not yet fully understood. In general, the current output of organic solar cells is determined by the generation of free charge carriers upon light absorption and their transport to the electrodes, in competition with the loss of charge carriers through recombination.
The objective of this thesis is to provide a comprehensive understanding of the dynamic processes and physical parameters that determine device performance. A new approach for analyzing the characteristic current-voltage output was developed, comprising the experimental determination of the efficiencies of charge carrier generation, recombination and transport, combined with numerical device simulations.
Central issues at the beginning of this work were the influence of an electric field on the free carrier generation process and the contributions of generation, recombination and transport to the current-voltage characteristics. An elegant way to directly measure the field dependence of free carrier generation is the Time Delayed Collection Field (TDCF) method. In TDCF, charge carriers are generated by a short laser pulse and subsequently extracted by a defined rectangular voltage pulse. A new setup was established with an improved time resolution compared to earlier reports in the literature. It was found that charge generation is in general independent of the electric field, in contrast to the prevailing view in the literature and to the expectations of the Braun-Onsager model commonly used to describe the charge generation process. Even in cases where the charge generation was found to be field-dependent, numerical modelling showed that this field dependence generally cannot account for the voltage dependence of the photocurrent. This highlights the importance of efficient charge extraction in competition with non-geminate recombination, which is the second objective of the thesis.
Therefore, two different techniques were combined to characterize the dynamics and efficiency of non-geminate recombination under device-relevant conditions. One new approach is to perform TDCF measurements with increasing delay between generation and extraction of charges. Thus, TDCF was used for the first time to measure charge carrier generation, recombination and transport with the same experimental setup. This excludes experimental errors due to different measurement and preparation conditions and demonstrates the strength of this technique. An analytic model for the description of TDCF transients was developed and revealed the experimental conditions under which reliable results can be obtained. In particular, it turned out that the $RC$ time of the setup, which is mainly determined by the sample geometry, has a significant influence on the shape of the transients and has to be considered for correct data analysis.
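The delay-dependent TDCF experiment described above can be illustrated with a minimal fit of extracted charge versus collection delay, assuming a purely bimolecular decay of a homogeneous carrier density; all numbers below are synthetic examples, not values from the thesis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative TDCF delay analysis (hypothetical numbers): carriers generated
# by the laser pulse decay non-geminately with rate k2 before the collection
# pulse extracts them, so the extracted charge follows
# Q(t_d) = Q0 / (1 + k2*n0*t_d) for a homogeneous initial density n0.

def q_extracted(t_delay, q0, k2n0):
    """Extracted charge vs. collection delay for bimolecular decay."""
    return q0 / (1.0 + k2n0 * t_delay)

# synthetic "measurement": Q0 = 2 nC, k2*n0 = 1e6 1/s, 2% noise
t_d = np.logspace(-7, -4, 20)             # delays from 100 ns to 100 us
q_meas = q_extracted(t_d, 2e-9, 1e6)
q_meas *= 1.0 + 0.02 * np.random.default_rng(0).standard_normal(t_d.size)

popt, _ = curve_fit(q_extracted, t_d, q_meas, p0=(1e-9, 1e5))
q0_fit, k2n0_fit = popt
print(f"Q0 = {q0_fit:.2e} C, k2*n0 = {k2n0_fit:.2e} 1/s")
```

In an actual analysis the fitted decay would additionally be corrected for carriers already swept out during the delay, which is part of what the analytic transient model mentioned above accounts for.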
Secondly, a complementary method was applied to characterize charge carrier recombination under steady-state bias and illumination, i.e. under realistic operating conditions. This approach relies on the precise determination of the steady-state carrier densities established in the active layer. It turned out that existing techniques were not sufficient to measure carrier densities with the necessary accuracy. Therefore, a new technique, Bias Assisted Charge Extraction (BACE), was developed. Here, the charge carriers photogenerated under steady-state illumination are extracted by applying a high reverse bias. The accelerated extraction compared to conventional charge extraction minimizes losses through non-geminate recombination and trapping during extraction. Numerical steady-state device simulations established the conditions under which quantitative information on the recombination dynamics can be retrieved from BACE measurements.
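As a rough sketch of the BACE evaluation step, with assumed illustrative device parameters rather than the thesis's values, the mean steady-state carrier density follows from time-integrating the extraction transient and subtracting the capacitive charge of the voltage step:

```python
import numpy as np

# Sketch of a BACE evaluation (all sample parameters are assumed, not taken
# from the thesis): n = (Q_total - Q_capacitive) / (e * A * d).

E = 1.602e-19          # elementary charge [C]
A = 1e-6               # active area [m^2] (1 mm^2, assumed)
D = 100e-9             # active-layer thickness [m] (assumed)
C = 2e-9               # device capacitance [F] (assumed)
DV = 2.5               # height of the extraction voltage step [V] (assumed)

def carrier_density(t, j):
    """Mean carrier density from an extraction transient j(t) [A]."""
    q_total = np.sum(0.5 * (j[1:] + j[:-1]) * np.diff(t))  # trapezoid rule
    q_cap = C * DV                 # capacitive contribution of the bias step
    return (q_total - q_cap) / (E * A * D)

# synthetic exponential transient carrying 10 nC of photogenerated charge
t = np.linspace(0, 5e-6, 500)
tau = 5e-7
q_photo = 10e-9
j = (q_photo + C * DV) / tau * np.exp(-t / tau)
n = carrier_density(t, j)
print(f"n = {n:.2e} m^-3")
```

The fast reverse-bias sweep-out is what keeps the recombination and trapping corrections to this integral small, which is the point of BACE over conventional charge extraction.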
The applied experimental techniques allowed geminate and non-geminate recombination losses, along with charge transport in organic solar cells, to be analysed and quantified with high sensitivity. A full analysis was demonstrated for two prominent polymer:fullerene blends.
The model system P3HT:PCBM spincast from chloroform (as prepared) exhibits poor power conversion efficiencies (PCE) on the order of 0.5%, mainly caused by low fill factors (FF) and currents. It could be shown that the performance of these devices is limited by the hole transport and large bimolecular recombination (BMR) losses, while geminate recombination losses are insignificant. The low polymer crystallinity and poor interconnection between the polymer and fullerene domains lead to a hole mobility on the order of 10^-7 cm^2/Vs, several orders of magnitude lower than the electron mobility in these devices. The concomitant build-up of space charge hinders extraction of both electrons and holes and promotes bimolecular recombination losses.
Thermal annealing of P3HT:PCBM blends directly after spin coating improves crystallinity and interconnection of the polymer and fullerene phases and results in comparatively high electron and hole mobilities on the order of 10^-3 cm^2/Vs and 10^-4 cm^2/Vs, respectively. In addition, a coarsening of the domain sizes reduces the BMR by one order of magnitude. High charge carrier mobilities and low recombination losses result in a comparatively high FF (>65%) and short-circuit current (J_SC ≈ 10 mA/cm^2). The overall device performance (PCE ≈ 4%) is only limited by the rather low spectral overlap of absorption and solar emission and a small V_OC, given by the energetics of the P3HT.
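One common way to put such a BMR reduction in numbers, standard practice for these blends though not necessarily the exact analysis used in the thesis, is to compare the measured coefficient with the Langevin rate computed from the mobilities quoted above; eps_r and k_measured below are assumed illustrative values:

```python
# Langevin reference rate k_L = e*(mu_e + mu_h)/(eps0*eps_r); the ratio
# gamma = k_meas/k_L quantifies the BMR reduction. Mobilities are the
# annealed-P3HT:PCBM values quoted above; eps_r and k_measured are assumed.

E = 1.602e-19            # elementary charge [C]
EPS0 = 8.854e-12         # vacuum permittivity [F/m]
EPS_R = 3.5              # relative permittivity (typical organic blend, assumed)

mu_e = 1e-3 * 1e-4       # electron mobility, 1e-3 cm^2/Vs -> m^2/Vs
mu_h = 1e-4 * 1e-4       # hole mobility,     1e-4 cm^2/Vs -> m^2/Vs

k_langevin = E * (mu_e + mu_h) / (EPS0 * EPS_R)   # [m^3/s]
k_measured = 1e-17                                 # assumed measured BMR [m^3/s]
gamma = k_measured / k_langevin
print(f"k_L = {k_langevin:.2e} m^3/s, reduction factor gamma = {gamma:.3f}")
```

A gamma well below one, as in the illustrative figure here, is usually attributed to the phase-separated morphology that keeps electrons and holes spatially apart.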
From this point of view, the combination of the low-bandgap polymer PTB7 with PCBM is a promising approach. In BHJ solar cells, this polymer leads to a higher V_OC due to optimized energetics with PCBM. However, the J_SC in these (unoptimized) devices is similar to that of the optimized blend with P3HT, and the FF is rather low (≈ 50%). It turned out that the unoptimized PTB7:PCBM blends suffer from high BMR, a low electron mobility on the order of 10^-5 cm^2/Vs, and geminate recombination losses due to field-dependent charge carrier generation.
The use of the solvent additive DIO optimizes the blend morphology, mainly by suppressing the formation of very large fullerene domains and by forming a more uniform structure of well-interconnected donor and acceptor domains on the order of a few nanometers. Our analysis shows that this increases the electron mobility by about one order of magnitude (3 x 10^-4 cm^2/Vs), while BMR and geminate recombination losses are significantly reduced. Together, these effects improve the J_SC (≈ 17 mA/cm^2) and the FF (> 70%). In 2012 this polymer:fullerene combination resulted in a record PCE for a single-junction OSC of 9.2%.
Remarkably, the numerical device simulations revealed that the specific shape of the J-V characteristics depends very sensitively on the variation of not just one, but all dynamic parameters. On the one hand, this proves that the experimentally determined parameters, if they lead to a good match between simulated and measured J-V curves, are realistic and reliable. On the other hand, it also emphasizes the importance of considering all involved dynamic quantities, namely charge carrier generation, geminate and non-geminate recombination, as well as electron and hole mobilities. Measuring or investigating only a subset of these parameters, as frequently found in the literature, will lead to an incomplete picture and possibly to misleading conclusions.
Importantly, comparing the numerical device simulation employing the measured parameters with the experimental $J-V$ characteristics allows the loss channels and limitations of OSC to be identified. For example, it turned out that inefficient extraction of charge carriers is a critical limiting factor that is often overlooked. Efficient and fast charge transport becomes increasingly important with the development of new low-bandgap materials with very high internal quantum efficiencies. Likewise, due to moderate charge carrier mobilities, the active-layer thicknesses of current high-performance devices are usually limited to around 100 nm, although larger layer thicknesses would be favourable with respect to higher current output and robustness of production. Newly designed donor materials should therefore ideally combine a high tendency to form crystalline structures, as observed in P3HT, with the optimized energetics and quantum efficiency of, for example, PTB7.
This thesis contains three experimental studies addressing the interplay between deformation and the mineral reaction between natural calcite and magnesite. In every experiment, under both isostatic annealing and deformation conditions, the solid-solid reaction between the two carbonates causes the formation of a magnesio-calcite precursor layer and a dolomite reaction rim.
CHAPTER 1 briefly introduces general aspects concerning mineral reactions in nature and diffusion pathways for mass transport. Moreover, results of previous laboratory studies on the influence of deformation on mineral reactions are summarized. In addition, the main goals of this study are pointed out.
In CHAPTER 2, the reaction between calcite and magnesite single crystals is examined under isostatic annealing conditions. Time series performed at a fixed temperature revealed diffusion-controlled dolomite rim growth. Two microstructural domains could be identified, characterized by palisade-shaped dolomite grains growing into the magnesite and granular dolomite growing towards the calcite. A model for the dolomite rim growth was developed based on the counter-diffusion of CaO and MgO. All reaction products exhibited a characteristic crystallographic relationship with respect to the calcite reactant. Moreover, kinetic parameters of the mineral reaction were determined from a temperature series at a fixed time. The main goal of the isostatic test series was to gain information about the microstructure evolution, kinetic parameters, chemical composition and texture development of the reaction products. The results were used as a reference to quantify the influence of deformation on the mineral reaction.
CHAPTER 3 deals with the influence of non-isostatic deformation on dolomite and magnesio-calcite layer growth between calcite and magnesite single crystals. Deformation was achieved by triaxial compression and by torsion. Triaxial compression up to 38 MPa axial stress at a fixed time showed no significant influence of stress and strain on dolomite formation. Time series conducted at a fixed stress yielded no change in growth rates for dolomite and magnesio-calcite at low strains. Slightly larger magnesio-calcite growth rates were observed at strains above 0.1. High strains at similar stresses were caused by the activation of additional glide systems in the calcite single crystal and more mobile dislocations in the magnesio-calcite grains, providing fast diffusion pathways. In torsion experiments, a gradual decrease in dolomite and magnesio-calcite layer thickness was observed beyond a critical shear strain. During deformation, the crystallographic orientations of the reaction products rearranged with respect to the external framework. A direct effect of the mineral reaction on deformation could not be recognized owing to the relatively small reaction product widths.
In CHAPTER 4, the influence of the starting-material microfabrics and of the presence of water on the reaction kinetics was evaluated. In these experimental series, polycrystalline material was in contact with single crystals, or two polycrystalline materials were used as reactants. Isostatic annealing resulted in different dolomite and magnesio-calcite layer thicknesses, depending on the starting-material microfabrics. The reaction progress at the magnesite interface was faster for smaller magnesite grain sizes, because grain boundaries provided fast pathways for diffusion and multiple nucleation sites for dolomite formation. Deformation by triaxial compression and torsion yielded lower dolomite rim thicknesses than annealing over the same time, caused by grain coarsening of the polycrystalline magnesite during deformation. In contrast, magnesio-calcite layers tended to be thicker in deformed samples, because deformation enhanced diffusion along grain boundaries. The presence of excess water had no significant influence on the reaction kinetics, at least when the reactants were single crystals.
In CHAPTER 5 general conclusions about the interplay between deformation and the mineral reaction in the carbonate system are presented.
Finally, CHAPTER 6 highlights possible future work in the carbonate system based on the results of this study.
Magnetite nanoparticles and their assembly comprise a new area of development for new technologies. The magnetic particles can interact and assemble into chains or networks. Magnetotactic bacteria are among the most interesting microorganisms in which such nanoparticle assembly occurs. These microorganisms are a heterogeneous group of gram-negative prokaryotes that all produce special magnetic organelles called magnetosomes, consisting of a magnetic nanoparticle, either magnetite (Fe3O4) or greigite (Fe3S4), embedded in a membrane. The chain is assembled along an actin-like scaffold made of the MamK protein, which causes the magnetosomes to arrange in mechanically stable chains. The chains work as a compass needle, allowing the cells to orient themselves and swim along the Earth's magnetic field.
The formation of magnetosomes is known to be controlled at the molecular level. The physico-chemical conditions of the surrounding environment also influence biomineralization. The work presented in this manuscript aims to understand how such external conditions, in particular the extracellular oxidation-reduction potential (ORP), influence magnetite formation in the strain Magnetospirillum magneticum AMB-1. A controlled cultivation of the microorganism was developed in a bioreactor and the formation of magnetosomes was characterized.
Different techniques were applied to characterize the amount of iron taken up by the bacteria and, consequently, the size of the magnetosomes produced at different ORP conditions. Comparison of iron uptake, bacterial morphology, and the size and number of magnetosomes per cell at different ORP showed that magnetosome formation was inhibited at an ORP of 0 mV, whereas reducing conditions (ORP -500 mV) facilitated the biomineralization process.
The self-assembly of magnetosomes in magnetotactic bacteria became an inspiration to learn from nature and to construct nanoparticle assemblies using the bacteriophage M13 as a template. The M13 bacteriophage is an 800 nm long filament with encapsulated single-stranded DNA that has recently been used as a scaffold for nanoparticle assembly. I constructed two types of assemblies based on bacteriophages and magnetic nanoparticles. A chain-like assembly was first formed, in which magnetite nanoparticles are attached along the phage filament. A sperm-like construct was also built, with a magnetic head and a tail formed by the phage filament.
The controlled assembly of magnetite nanoparticles on the phage template was possible thanks to two different mechanisms of nanoparticle assembly. The first was based on the electrostatic interactions between positively charged polyethylenimine-coated magnetite nanoparticles and negatively charged phages. The second phage-nanoparticle assembly was achieved by bioengineered recognition sites: an mCherry protein displayed on the phage was used as a linker to a red-binding nanobody (RBP) fused to one of the proteins surrounding the magnetite crystal of a magnetosome.
Both assemblies were actuated in water by an external magnetic field, demonstrating their swimming behavior and potentially enabling further use of such structures for medical applications. The speeds of the phage-nanoparticle assemblies are relatively low compared to those of previously published microswimmers. However, only the largest phage-magnetite assemblies could be imaged, so it remains unclear how fast the smaller versions of these structures can be.
The continuously increasing demand for rare earth elements in the components of modern technologies brings the detection of new deposits into the focus of global exploration. One promising method to map important deposits globally is remote sensing, since it has been used for a wide range of mineral mapping in the past. This doctoral thesis investigates the capacity of hyperspectral remote sensing for the detection of rare earth element deposits. The definition and realization of a fundamental database on the spectral characteristics of rare earth oxides, rare earth metals and rare earth element bearing materials formed the basis of this thesis. To investigate these characteristics in the field, hyperspectral images of four outcrops in the Fen Complex, Norway, were collected in the near-field. A new methodology (named REEMAP) was developed to delineate rare earth element enriched zones. The main steps of REEMAP are: 1) multitemporal weighted averaging of multiple images covering the sample area; 2) sharpening the rare earth related signals using a Gaussian high-pass deconvolution technique, calibrated on the standard deviation of a Gaussian bell curve given by the full width at half maximum of the target absorption band; 3) mathematical modeling of the target absorption band and highlighting of rare earth elements. REEMAP was further adapted to different hyperspectral sensors (EO-1 Hyperion and EnMAP) and a new test site (Lofdal, Namibia). Additionally, the hyperspectral signatures of associated minerals were investigated to serve as proxies for the host rocks. Finally, the capacity and limitations of spectroscopic rare earth element detection approaches in general, and of the REEMAP approach specifically, were investigated and discussed. One result of this doctoral thesis is that eight rare earth oxides show robust absorption bands and can therefore be used for hyperspectral detection methods.
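The band-sharpening step (step 2 above) can be sketched as a simple Gaussian high-pass on a synthetic spectrum, with the kernel width tied to the target band via sigma = FWHM / (2*sqrt(2*ln 2)); the band position, FWHM and sampling below are illustrative assumptions, not values from the thesis:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Sketch of a Gaussian high-pass for narrow absorption features, in the
# spirit of REEMAP step 2. All spectral numbers are illustrative.
wavelengths = np.arange(500.0, 1000.0, 5.0)   # [nm], 5 nm sampling (assumed)
fwhm_nm = 40.0                                # target band FWHM (assumed)
sigma_bins = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / 5.0

# synthetic spectrum: smooth background plus a narrow absorption at 740 nm
background = 0.4 + 1e-4 * (wavelengths - 500.0)
band = 0.05 * np.exp(-0.5 * ((wavelengths - 740.0) / (sigma_bins * 5.0)) ** 2)
spectrum = background - band

# high-pass = spectrum minus its Gaussian-smoothed version; the broad
# background is suppressed while the narrow band stands out
highpass = spectrum - gaussian_filter1d(spectrum, sigma_bins)
band_position = wavelengths[np.argmin(highpass)]
print(f"sharpened band minimum at {band_position:.0f} nm")
```

A full deconvolution would additionally restore the band shape rather than just subtract the smoothed background, but the kernel calibration to the band's FWHM is the same idea.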
Additionally, the spectral signatures of iron oxides, iron-bearing sulfates, calcite and kaolinite can be used to detect metasomatic alteration zones and highlight the ore zone. One of the key results of this doctoral work is the REEMAP approach itself, which can be applied from near-field to space. The REEMAP approach enables rare earth element mapping even for noisy images. Limiting factors are a low signal-to-noise ratio, a reduced spectral resolution, overlying materials, atmospheric absorption residuals and non-optimal illumination conditions. Another key result of this doctoral thesis is the finding that the future hyperspectral EnMAP satellite (with its specifications as published in June 2015) will theoretically be capable of detecting absorption bands of erbium, dysprosium, holmium, neodymium, europium, thulium and samarium. This thesis thus presents a new methodology, REEMAP, that enables spatially extensive and rapid hyperspectral detection of rare earth elements, meeting the demand for fast, extensive and efficient rare earth exploration (from near-field to space).
Development of geophysical methods to characterize methane hydrate reservoirs on a laboratory scale
(2015)
Gas hydrates are crystalline solids composed of water and gas molecules. They are stable at elevated pressures and low temperatures. Therefore, natural gas hydrate deposits occur at continental margins, in permafrost areas, deep lakes, and deep inland seas. During hydrate formation, the water molecules rearrange to form cavities which host gas molecules. Due to the high pressure during hydrate formation, significant amounts of gas can be stored in hydrate structures; the water-to-gas ratio can reach up to 1:172 at 0°C and atmospheric pressure. Natural gas hydrates predominantly contain methane. Because methane constitutes both a fuel and a greenhouse gas, gas hydrates are a potential energy resource as well as a potential source of greenhouse gas.
This study investigates the physical properties of methane hydrate bearing sediments on a laboratory scale. To this end, an electrical resistivity tomography (ERT) array was developed and mounted in a large reservoir simulator (LARS). For the first time, the ERT array was applied to hydrate-saturated sediment samples under controlled temperature, pressure, and hydrate saturation conditions on a laboratory scale. Typically, the pore space of (marine) sediments is filled with highly conductive brine. Because hydrates constitute an electrical insulator, significant contrasts in the electrical properties of the pore space emerge during hydrate formation and dissociation. Frequent measurements during hydrate formation experiments permitted recording of the spatial resistivity distribution inside LARS. These data sets are used as input for a new data processing routine that converts the spatial resistivity distribution into the spatial distribution of hydrate saturation. Thus, the changes in local hydrate saturation can be monitored in space and time.
This study shows that the developed tomography yielded good data quality and resolved even small hydrate saturations inside the sediment sample. The conversion algorithm transforming the spatial resistivity distribution into local hydrate saturation values yielded the best results using the Archie-var-phi relation. This approach considers the growing hydrate phase as part of the sediment frame, effectively reducing the sample's porosity. In addition, the tomographic measurements showed that fast lab-based hydrate formation produces small crystallites that tend to recrystallize.
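A minimal sketch of the Archie-var-phi idea, under the assumption that hydrate joins the sediment frame and shrinks the effective porosity while the remaining pores stay brine-filled; the Archie constants and brine resistivity below are illustrative, not the calibrated values of this study:

```python
# Archie's law with a hydrate-reduced porosity: rho_t = a * rho_w * phi_eff**-m
# with phi_eff = phi0 * (1 - S_h), inverted here for the hydrate saturation.
# All constants are assumed for illustration.

A_ARCHIE = 1.0       # tortuosity factor (assumed)
M_CEM = 1.9          # cementation exponent (assumed)
PHI0 = 0.35          # initial porosity of the sediment sample (assumed)
RHO_W = 0.2          # brine resistivity [Ohm m] (assumed)

def hydrate_saturation(rho_t):
    """Invert Archie-var-phi for S_h from the measured bulk resistivity."""
    phi_eff = (A_ARCHIE * RHO_W / rho_t) ** (1.0 / M_CEM)
    return 1.0 - phi_eff / PHI0

rho_initial = A_ARCHIE * RHO_W * PHI0 ** -M_CEM   # hydrate-free baseline
rho_late = 4.0 * rho_initial                      # resistivity after formation
s_h = hydrate_saturation(rho_late)
print(f"S_h = {s_h:.2f}")
```

Applying such an inversion voxel by voxel to the ERT images is what turns the resistivity tomograms into maps of local hydrate saturation.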
Furthermore, hydrate dissociation experiments via depressurization were conducted in order to mimic the 2007/2008 Mallik field trial. Some patterns in gas and water flow could be reproduced, even though some setup-related limitations arose.
In two additional long-term experiments, the feasibility and performance of CO2-CH4 hydrate exchange reactions were studied in LARS. The tomographic system was used to monitor the spatial hydrate distribution during the hydrate formation stage. During the subsequent CO2 injection, the tomographic array made it possible to follow the CO2 migration front inside the sediment sample and helped to identify the CO2 breakthrough.
The lives of more than one sixth of the world's population are directly affected by the caprices of the South Asian summer monsoon rainfall. India receives around 78% of its annual precipitation during the June-September months, the summer monsoon season of South Asia. But the monsoon circulation is not consistent throughout the entire summer season. Episodes of heavy rainfall (active periods) and low rainfall (break periods) are inherent to the intraseasonal variability of the South Asian summer monsoon. Extended breaks or long-lasting dryness can result in droughts and hence trigger crop failures and, in turn, famines. Furthermore, India's electricity generation from renewable sources (wind and hydro-power), which is increasingly important in order to satisfy the rapidly rising demand for energy, is highly reliant on the prevailing meteorology. The major drought years 2002 and 2009 for the Indian summer monsoon, which resulted from the occurrence of multiple extended breaks, exemplify how important an understanding of the monsoon system and its intraseasonal variation is. Although numerous studies based on observations, reanalysis data and global model simulations have focused on monsoon active and break phases over India, the understanding of the monsoon intraseasonal variability is still in its infancy. Regional climate models could benefit the comprehension of monsoon breaks through their resolution advantage.
This study investigates moist dynamical processes that initiate and maintain breaks during the South Asian summer monsoon using the atmospheric regional climate model HIRHAM5 at a horizontal resolution of 25 km forced by the ECMWF ERA Interim reanalysis for the period 1979-2012. By calculating moisture and moist static energy budgets the various competing mechanisms leading to extended breaks are quantitatively estimated. Advection of dry air from the deserts of western Asia towards central India is the dominant moist dynamical process in initiating extended break conditions over South Asia. Once initiated, the extended breaks are maintained due to many competing mechanisms: (i) the anomalous easterlies at the southern flank of this anticyclonic anomaly weaken the low-level cross-equatorial jet and thus the moisture transport into the monsoon region, (ii) differential radiative heating over the continental and the oceanic tropical convergence zone induces a local Hadley circulation with anomalous rising over the equatorial Indian Ocean and descent over central India, and (iii) a cyclonic response to positive rainfall anomalies over the near-equatorial Indian Ocean amplifies the anomalous easterlies over India and hence contributes to the low-level divergence over central India.
A sensitivity experiment that mimics a scenario of higher atmospheric aerosol concentrations over South Asia addresses a current issue of large uncertainty: the role aerosols play in suppressing monsoon rainfall and hence in triggering breaks. To study the indirect aerosol effects, the cloud droplet number concentration was increased to imitate the aerosols' function as cloud condensation nuclei. The sensitivity experiment with altered microphysical cloud properties shows a reduction in the summer monsoon precipitation together with a weakening of the South Asian summer monsoon. Several physical mechanisms are proposed to be responsible for the suppressed monsoon rainfall: (i) in accordance with the first indirect radiative forcing, the increase in the number of cloud droplets raises the cloud reflectivity of solar radiation, leading to a climate cooling over India which in turn weakens the hydrological cycle, (ii) a stabilisation of the troposphere induced by differential cooling between the surface and the upper troposphere over central India inhibits the growth of deep convective rain clouds, (iii) an increase in the amount of low- and mid-level clouds together with a decrease in high-level cloud amount amplifies the surface cooling and hence the atmospheric stability, and (iv) dynamical changes of the monsoon, manifested as an anomalous anticyclonic circulation over India, reduce the moisture transport into the monsoon region. The study suggests that the changes in the total precipitation, which are dominated by changes in the convective precipitation, mainly result from the indirect radiative forcing. Suppression of rainfall due to the direct microphysical effect is found to be negligible over India. Break statistics of the polluted cloud scenario indicate an increase in the occurrence of short breaks (3 days), while the frequency of extended breaks (> 7 days) is clearly not affected.
This disproves the hypothesis that more and smaller cloud droplets, caused by a high load of atmospheric aerosols, trigger long drought conditions over central India.
The present study addresses the question of how German vowels are perceived and produced by Polish learners of German as a Foreign Language. It comprises three main experiments: a discrimination experiment, a production experiment, and an identification experiment. With the exception of the discrimination task, the experiments further investigated the influence of orthographic marking on the perception and production of German vowel length. It was assumed that explicit markings such as the Dehnungs-h ("lengthening h") could help Polish GFL learners in perceiving and producing German words more correctly.
The discrimination experiment with manipulated nonce words showed that Polish GFL learners detect pure length differences in German vowels less accurately than German native speakers, while this was not the case for pure quality differences. The results of the identification experiment contrast with the results of the discrimination task in that Polish GFL learners were better at judging incorrect vowel length than incorrect vowel quality in manipulated real words. However, orthographic marking did not turn out to be the driving factor and it is suggested that metalinguistic awareness can explain the asymmetry between the two perception experiments. The production experiment supported the results of the identification task in that lengthening h did not help Polish learners in producing German vowel length more correctly. Yet, as far as vowel quality productions are concerned, it is argued that orthography does influence L2 sound productions because Polish learners seem to be negatively influenced by their native grapheme-to-phoneme correspondences.
It is concluded that it is important to differentiate between the influence of the L1 and L2 orthographic system. On the one hand, the investigation of the influence of orthographic vowel length markers in German suggests that Polish GFL learners do not make use of length information provided by the L2 orthographic system. On the other hand, the vowel quality data suggest that the L1 orthographic system plays a crucial role in the acquisition of a foreign language. It is therefore proposed that orthography influences the acquisition of foreign sounds, but not in the way it was originally assumed.
The standing stock and production of organismal biomass depend strongly on the organisms' biotic environment, which arises from trophic and non-trophic interactions among them. The trophic interactions between the different groups of organisms form the food web of an ecosystem, with autotrophic and bacterial production at its base and potentially several levels of consumers on top of the producers. Feeding interactions can regulate communities either through severe grazing pressure or through shortage of resources or prey production, termed top-down and bottom-up control, respectively. The limitations on all communities combine in the food web regulation, which is subject to abiotic and biotic forcing regimes arising from external and internal constraints. This dissertation presents the effects of alterations in two abiotic, external forcing regimes: terrestrial matter input and long-lasting low temperatures in winter. Diverse methodological approaches, namely a complex ecosystem model study and the analysis of two whole-lake measurements, were employed to investigate the effects on food web regulation and the resulting consequences at the species, community and ecosystem scales. Thus, all types of organisms, autotrophs and heterotrophs, at all trophic levels were investigated to gain a comprehensive overview of the effects of the two altered forcing regimes. In addition, an extensive evaluation of the trophic interactions and resulting carbon fluxes along the pelagic and benthic food webs was performed to display the efficiencies of the trophic energy transfer within the food webs. All studies were conducted in shallow lakes, the most abundant lake type worldwide. The specific morphology of shallow lakes allows benthic production to contribute substantially to the whole-lake production.
Further, as shallow lakes are often small, they are especially sensitive to both changes in the input of terrestrial organic matter and changes in atmospheric temperature. Another characteristic of shallow lakes is their occurrence in alternative stable states: they are either in a clear-water state dominated by macrophytes or in a turbid state dominated by phytoplankton. Both states can stabilize themselves through various mechanisms.
These two alternative states and their stabilizing mechanisms are integrated in the complex ecosystem model PCLake, which was used to investigate the effects of enhanced input of terrestrial particulate organic matter (t-POM) to lakes. The food web regulation was altered via three distinct pathways: (1) Zoobenthos received more food and increased in biomass, which favored benthivorous fish, whose bioturbation reduced the available light. (2) Zooplankton replaced autochthonous organic matter in their diet with suspended t-POM, so the autochthonous organic matter remaining in the water reduced its transparency. (3) Suspended t-POM directly reduced the available light. As macrophytes are more light-sensitive than phytoplankton, they suffered the most from the lower transparency. Consequently, the resilience of the clear-water state was reduced by enhanced t-POM inputs, which makes the turbid state more likely at a given nutrient concentration. In two subsequent winters, long-lasting low temperatures and a concurrently long duration of ice cover were observed, resulting in low overall adult fish biomasses in the two study lakes, Schulzensee and Gollinsee, characterized by the presence and absence of submerged macrophytes, respectively. Before the partial winterkill of fish, Schulzensee supported a higher proportion of piscivorous fish than Gollinsee. However, the partial winterkill aligned both communities, as piscivorous fish are more sensitive to low oxygen concentrations. Young-of-the-year fish benefited strongly from the absence of adult fish due to lower predation pressure. They could therefore exert a strong top-down control on crustaceans, which restructured the entire zooplankton community, leading to low crustacean biomasses and a community composition characterized by copepodites and nauplii.
As a result, ciliates were released from top-down control, increased to high biomasses compared to lakes of various trophic states and depths, and dominated the zooplankton community. Being very abundant in the study lakes and having the highest weight-specific grazing rates among the zooplankton, ciliates potentially exerted a strong top-down control on small phytoplankton and particle-attached bacteria. This resulted in a higher proportion of large phytoplankton compared to other lakes. Additionally, the phytoplankton community was evenly distributed, presumably due to the numerous fast-growing and highly specific ciliate grazers. Although the pelagic food web was completely restructured after the partial winterkills of fish, both lakes were resistant to the effects of this forcing regime at the ecosystem scale. The consistently high predation pressure on phytoplankton prevented Schulzensee from switching from the clear-water to the turbid state. Further mechanisms that potentially stabilized the clear-water state were allelopathic effects of macrophytes and nutrient limitation in summer. The pelagic autotrophic and bacterial production was transferred to animal consumers an order of magnitude more efficiently than the respective benthic production, despite the alterations of the food web structure after the partial winterkill of fish. Thus, the compiled mass-balanced whole-lake food webs suggested that the benthic bacterial and autotrophic production, which exceeded that of the pelagic habitat, was not used by animal consumers. This holds true even when food quality, additional consumers such as ciliates, benthic protozoa and meiobenthos, the pelagic-benthic link and the potential oxygen limitation of macrobenthos are considered. Therefore, the low benthic efficiencies suggest that lakes are primarily pelagic systems, at least at the animal consumer level.
Overall, this dissertation gives insights into the regulation of organism groups in the pelagic and benthic habitats at each trophic level under two different forcing regimes and displays the efficiency of the carbon transfer in both habitats. The results underline that alterations of external forcing regimes affect all hierarchical levels, including the ecosystem.
Earthquake clustering has proven to be the most useful tool to forecast changes in seismicity rates in the short and medium term (hours to months), and efforts are currently being made to extend the scope of such models to operational earthquake forecasting. The overarching goal of the research presented in this thesis is to improve physics-based earthquake forecasts, with a focus on aftershock sequences. Physical models of triggered seismicity are based on the redistribution of stresses in the crust, coupled with the rate-and-state constitutive law proposed by Dieterich to calculate changes in seismicity rate. This type of model is known as a Coulomb rate-and-state (CRS) model. In spite of the success of the Coulomb hypothesis, CRS models have typically performed poorly in comparison to statistical ones, and they have been underrepresented in the operational forecasting context. In this thesis, I address some of these issues, and in particular the following questions: (1) How can we realistically model the uncertainties and heterogeneity of the mainshock stress field? (2) What is the effect of time-dependent stresses in the postseismic phase on seismicity? I focus on two case studies from different tectonic settings: the Mw 9.0 Tohoku megathrust earthquake and the Mw 6.0 Parkfield strike-slip earthquake. I study aleatoric uncertainties using a Monte Carlo method. I find that the existence of multiple receiver faults is the most important source of intrinsic stress heterogeneity, and CRS models perform better when this variability is taken into account. Epistemic uncertainties inherited from the slip models also have a significant impact on the forecast, and I find that an ensemble model based on several slip distributions outperforms most individual models. I address the role of postseismic stresses due to aseismic slip on the mainshock fault (afterslip) and to the redistribution of stresses by previous aftershocks (secondary triggering).
I find that modeling secondary triggering improves model performance. The effect of afterslip is less clear, and difficult to assess for near-fault aftershocks due to the large uncertainties of the afterslip models. Off-fault events, on the other hand, are less sensitive to the details of the slip distribution: I find that following the Tohoku earthquake, afterslip promotes seismicity in the Fukushima region. To evaluate the performance of the improved CRS models in a pseudo-operational context, I submitted them for independent testing to a collaborative experiment carried out by CSEP for the 2010-2012 Canterbury sequence. Preliminary results indicate that physical models generally perform well compared to statistical ones, suggesting that CRS models may have a role to play in the future of operational forecasting. To facilitate efforts in this direction, and to enable future studies of earthquake triggering by time-dependent processes, I have made the code open source. In the final part of this thesis I summarize the capabilities of the program and outline technical aspects regarding performance and parallelization strategies.
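The rate-and-state response to a coseismic stress step that underlies such CRS models can be illustrated with Dieterich's closed-form solution for the seismicity rate after a sudden Coulomb stress change. The sketch below is a minimal illustration; all parameter values are mine and purely illustrative, not taken from the thesis.

```python
import numpy as np

def dieterich_rate(t, dtau, a_sigma, r_bg, tau_dot):
    """Seismicity rate R(t) after a coseismic Coulomb stress step `dtau`
    (Dieterich, 1994): R = r / (1 + (exp(-dtau/(A*sigma)) - 1)*exp(-t/t_a)),
    with aftershock decay time t_a = A*sigma / tau_dot.  `r_bg` is the
    background rate and `tau_dot` the background stressing rate."""
    t_a = a_sigma / tau_dot                                  # decay time
    gamma = (np.exp(-dtau / a_sigma) - 1.0) * np.exp(-t / t_a)
    return r_bg / (1.0 + gamma)

# Illustrative numbers: a 0.5 MPa stress step, A*sigma = 0.05 MPa,
# background rate of 1 event per unit time, stressing rate 0.01 MPa/yr.
t = np.linspace(0.0, 10.0, 1000)                             # years
R = dieterich_rate(t, dtau=0.5, a_sigma=0.05, r_bg=1.0, tau_dot=0.01)
```

Immediately after the step the rate jumps by a factor exp(Δτ/Aσ) and then decays back towards the background rate over the timescale t_a, which is the Omori-like behavior CRS models exploit.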
The size and morphology control of precipitated solid particles is a major economic issue for numerous industries. It is of interest, for instance, to the nuclear industry for the recovery of radioactive species from used nuclear fuel.
The precipitate features, which are a key parameter for post-precipitation processing, depend on the local mixing conditions of the process. So far, the relationship between precipitate features and hydrodynamic conditions has not been investigated.
In this study, a new experimental configuration consisting of coalescing drops is set up to investigate the link between reactive crystallization and hydrodynamics. Two configurations of aqueous drops are examined. The first corresponds to drops with a high contact angle (>90°) in oil, as a model system for flowing drops; the second corresponds to sessile drops in air with a low contact angle (<25°). In both cases, one reactant is dissolved in each drop, namely oxalic acid and cerium nitrate. When both drops come into contact, they may coalesce; the dissolved species mix and react to produce insoluble cerium oxalate. The precipitate features and their effect on hydrodynamics are investigated depending on the solvent. In the case of sessile drops in air, the surface tension difference between the drops generates a gradient which induces a Marangoni flow from the low-surface-tension drop over the high-surface-tension drop. By setting the surface tension difference between the two drops, and thus the Marangoni flow, the hydrodynamic conditions during drop coalescence could be modified. Diol/water mixtures are used as solvents in order to fix the surface tension difference between the liquids of both drops independently of the reactant concentration. More precisely, the diols used, 1,2-propanediol and 1,3-propanediol, are isomers with identical density and similar viscosity. By keeping the water volume fraction constant and varying the 1,2-propanediol and 1,3-propanediol volume fractions of the solvents, the surface tensions of the mixtures differ by up to 10 mN/m at constant reactant concentration, density and viscosity. Three precipitation behaviors were identified for the coalescence of water/diol/reactant drops depending on the oxalic acid excess. The corresponding precipitate patterns are visualized by optical microscopy, and the precipitates are characterized by confocal microscopy, SEM, XRD and SAXS measurements.
In the intermediate oxalic excess regime, the formation of periodic patterns can be observed. These patterns consist of alternating cerium oxalate precipitates with distinct morphologies, namely needles and “microflowers”. Such periodic fringes can be explained by a feedback mechanism between convection, reaction and diffusion.
Adjustment of empirically derived ground motion prediction equations (GMPEs) from a data-rich region/site where they have been derived to a data-poor region/site is one of the major challenges associated with the current practice of seismic hazard analysis. Due to their frequent use in engineering design practice, GMPEs are often derived for response spectral ordinates (e.g., spectral acceleration) of a single-degree-of-freedom (SDOF) oscillator. The functional forms of such GMPEs are based upon concepts borrowed from the Fourier spectral representation of ground motion. This assumption regarding the validity of Fourier spectral concepts in the response spectral domain can lead to consequences which cannot be explained physically.
In this thesis, firstly, results from an investigation that explores the relationship between Fourier and response spectra, and the implications of this relationship for the adjustment issues of GMPEs, are presented. The relationship between the Fourier and response spectra is explored by using random vibration theory (RVT), a framework that has been extensively used in earthquake engineering, for instance within the stochastic simulation framework and in site response analysis. For a 5% damped SDOF oscillator, the RVT perspective of response spectra reveals that no one-to-one correspondence exists between Fourier and response spectral ordinates except in a limited range (i.e., below the peak of the response spectrum) of oscillator frequencies. The high-oscillator-frequency response spectral ordinates are dominated by contributions from the Fourier spectral ordinates that correspond to frequencies well below the selected oscillator frequency. The peak ground acceleration (PGA) is found to be related to the integral over the entire Fourier spectrum of ground motion, which is in contrast to the popularly held perception that PGA is a high-frequency phenomenon of ground motion.
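The core RVT step linking the Fourier amplitude spectrum (FAS) to the oscillator response can be sketched as follows. This is a simplified illustration of the general idea only: the peak-factor calculation and the exact duration definitions used in the thesis are omitted, and all names and values are mine.

```python
import numpy as np

def sdof_transfer(f, f_osc, damping=0.05):
    """Modulus of the acceleration transfer function of a damped SDOF oscillator."""
    r = f / f_osc
    return 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * damping * r) ** 2)

def rvt_rms_response(f, fas, f_osc, duration, damping=0.05):
    """RMS oscillator response computed from a Fourier amplitude spectrum
    `fas` via Parseval's theorem; the response spectral ordinate would then
    be (peak factor) x RMS.  Assumes a uniformly spaced frequency axis."""
    h = sdof_transfer(f, f_osc, damping)
    df = f[1] - f[0]
    m0 = 2.0 * np.sum((h * fas) ** 2) * df   # zeroth spectral moment
    return np.sqrt(m0 / duration)

# A flat test spectrum: at an oscillator frequency far above the FAS band,
# the integrand spans essentially the whole FAS, illustrating why
# high-frequency response ordinates (and PGA) depend on the entire
# Fourier spectrum rather than on high frequencies alone.
f = np.linspace(0.01, 50.0, 5000)
fas = np.ones_like(f)
rms_high = rvt_rms_response(f, fas, f_osc=100.0, duration=10.0)
```

Note how, for `f_osc` well above the FAS band, `h` is close to 1 everywhere, so the result integrates the full spectrum — the quantitative counterpart of the PGA observation above.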
This thesis presents a new perspective for developing a response spectral GMPE that takes the relationship between Fourier and response spectra into account. Essentially, this framework involves a two-step method for deriving a response spectral GMPE: in the first step, two empirical models, one for the FAS and one for a predetermined estimate of the duration of ground motion, are derived; in the next step, predictions from the two models are combined within the same RVT framework to obtain the response spectral ordinates. In addition, a stochastic-model-based scheme for extrapolating individual acceleration spectra beyond the usable frequency limits is presented. To that end, recorded acceleration traces were inverted to obtain the stochastic model parameters that allow consistent extrapolation of individual (acceleration) Fourier spectra. Moreover, an empirical model for a duration measure that is consistent within the RVT framework is derived. As a next step, an oscillator-frequency-dependent empirical duration model is derived that allows obtaining the most reliable estimates of response spectral ordinates. The framework for deriving the response spectral GMPE presented herein becomes a self-adjusting model with the inclusion of the stress parameter (∆σ) and kappa (κ0) as predictor variables in the two empirical models. The entire analysis of developing the response spectral GMPE is performed on the recently compiled RESORCE-2012 database that contains recordings from Europe, the Mediterranean and the Middle East. The presented GMPE for response spectral ordinates should be considered valid in the magnitude range 4 ≤ MW ≤ 7.6 at distances ≤ 200 km.
By perturbing the differential of a (cochain) complex by "small" operators, one obtains what is referred to as a quasicomplex, i.e. a sequence whose curvature is not equal to zero in general. In this situation the cohomology is no longer defined. Note that it depends on the structure of the underlying spaces whether or not an operator is "small". This leads to a magical mix of perturbation and regularisation theory. In the general setting of Hilbert spaces, compact operators are "small". In order to develop this theory, many elements of diverse mathematical disciplines, such as functional analysis, differential geometry, partial differential equations, homological algebra and topology, have to be combined. All essential basics are summarised in the first chapter of this thesis. This comprises classical elements of index theory, such as Fredholm operators, elliptic pseudodifferential operators and characteristic classes. Moreover, we study the de Rham complex and introduce Sobolev spaces of arbitrary order as well as the concept of operator ideals. In the second chapter, the abstract theory of (Fredholm) quasicomplexes of Hilbert spaces is developed. From the very beginning we consider quasicomplexes with curvature in an ideal class. We introduce the Euler characteristic, the cone of a quasiendomorphism and the Lefschetz number. In particular, we generalise Euler's identity, which allows us to develop the Lefschetz theory on nonseparable Hilbert spaces. Finally, in the third chapter the abstract theory is applied to elliptic quasicomplexes with pseudodifferential operators of arbitrary order. We show that the Atiyah-Singer index formula holds true for these objects and, as an example, we compute the Euler characteristic of the connection quasicomplex. In addition, we introduce geometric quasiendomorphisms and prove a generalisation of the Lefschetz fixed point theorem of Atiyah and Bott.
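In symbols (the notation here is mine, not necessarily that of the thesis), a quasicomplex can be sketched as:

```latex
% Hilbert spaces H^i and bounded maps d^i whose composition -- the
% "curvature" -- need not vanish, but lies in a fixed operator ideal
% \mathcal{I} (compact operators in the general Hilbert-space setting):
\[
  0 \longrightarrow H^{0} \xrightarrow{\;d^{0}\;} H^{1}
    \xrightarrow{\;d^{1}\;} \cdots
    \xrightarrow{\;d^{N-1}\;} H^{N} \longrightarrow 0,
  \qquad
  d^{i+1} \circ d^{i} \in \mathcal{I}
  \quad (\text{instead of } d^{i+1} \circ d^{i} = 0).
\]
```

Since the curvature is only "small" rather than zero, the usual cohomology spaces are unavailable, which is why invariants such as the Euler characteristic and the Lefschetz number have to be redefined via index theory.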
The promotion of self-employment as part of active labor market policy (ALMP) is considered to be one of the most important unemployment support schemes in Germany. Against this background, the main part of this thesis contributes to the evaluation of start-up support schemes within ALMP. Chapters 2 and 4 focus on the evaluation of the New Start-up Subsidy (NSUS, Gründungszuschuss) in its first version (from 2006 to the end of 2011). The chapters offer an advancement of the evaluation of start-up subsidies in Germany, and are based on a novel data set of administrative data from the Federal Employment Agency that was enriched with information from a telephone survey. Chapter 2 provides a thorough descriptive analysis of the NSUS that consists of two parts. First, the participant structure of the program is compared with that of two former programs. In a second step, the study conducts an in-depth characterization of the participants of the NSUS, focusing on founding motives, the level of start-up capital and equity used, as well as the sectoral distribution of the new businesses. Furthermore, business survival, the income situation of founders and job creation by the new businesses are analyzed during a period of 19 months after start-up. The contribution of Chapter 4 is to introduce a new explorative data set that allows comparing subsidized start-ups out of unemployment with non-subsidized business start-ups founded by individuals who were not unemployed at the time of start-up. Because previous evaluation studies commonly used eligible non-participants among the unemployed as the control group to assess the labor market effects of start-up subsidies, the corresponding results referred to the effectiveness of the ALMP measure, but could not address the question whether the subsidy leads to similarly successful and innovative businesses compared to non-subsidized businesses.
An assessment of this economic/growth aspect is also important, since the subsidy might induce negative effects that outweigh the positive effects from an ALMP perspective. The main results of Chapter 4 indicate that subsidized founders seem to have no shortages in terms of formal education, but exhibit less employment and industry-specific experience, and are less likely to benefit from the intergenerational transmission of start-ups. Moreover, the study finds evidence that necessity start-ups are over-represented among subsidized business founders, which suggests disadvantages in terms of business preparation due to possible time restrictions right before start-up. Finally, the study also detects more capital constraints among the unemployed, both in terms of the availability of personal equity and in terms of access to loans. With respect to potential differences between both groups in business development over time, the results indicate that subsidized start-ups out of unemployment have higher business survival rates 19 months after start-up. However, they lag behind regular business founders in terms of income, business growth, and innovation. The arduous process of collecting data on the start-up activities of non-subsidized founders for Chapter 4 made apparent that Germany lacks a central reporting system for business formations. Additionally, the different start-up reporting systems that do exist exhibit substantial discrepancies in data processing procedures, and therefore also in absolute numbers concerning the overall start-up activity. Chapter 3 therefore precedes Chapter 4 and aims to provide a comprehensive review of the most important German start-up reporting systems.
The second part of the thesis consists of Chapter 5, which contributes to the literature on the determinants of the job search behavior of unemployed individuals by analyzing the effectiveness of internet search with regard to the search behavior of unemployed individuals and subsequent job quality. The third and final part of the thesis outlines why the German labor market reacted in a very mild fashion to the Great Recession of 2008/09, especially compared to other countries. Chapter 6 describes current economic trends of the labor market in light of general trends in the European Union, and reveals some of the main associated challenges. Thereafter, recent reforms of the main institutional settings of the labor market which influence labor supply are analyzed. Finally, based on the status quo of these institutional settings, the chapter gives a brief overview of strategies to adequately combat the challenges in terms of labor supply and to ensure economic growth in the future.
Optical frequency combs (OFCs) constitute an array of phase-correlated, equidistant spectral lines with nearly equal intensities over a broad spectral range. Combs generated in mode-locked lasers have proved to be highly efficient for the calibration of high-resolution (resolving power > 50000) astronomical spectrographs. The observation of different galaxy structures and studies of the Milky Way, however, are done using instruments in the low- and medium-resolution range. Such instruments include, for instance, the Multi Unit Spectroscopic Explorer (MUSE) developed for the Very Large Telescope (VLT) of the European Southern Observatory (ESO) and the 4-metre Multi-Object Spectroscopic Telescope (4MOST) under development for the ESO VISTA 4.1 m Telescope. The existing adaptations of OFCs from mode-locked lasers are not resolvable by these instruments.
Within this work, a fibre-based approach for the generation of OFCs specifically for the low- and medium-resolution range is studied numerically. This approach consists of three optical fibres that are fed by two equally intense continuous-wave (CW) lasers. The first fibre is a conventional single-mode fibre, the second is a suitably pumped amplifying Erbium-doped fibre with anomalous dispersion, and the third is a low-dispersion highly nonlinear optical fibre. The evolution of a frequency comb in this system is governed by the following processes: as the two initial CW laser waves with different frequencies propagate through the first fibre, they generate an initial comb via a cascade of four-wave mixing processes. The frequency components of the comb are phase-correlated with the original laser lines and have a frequency spacing that is equal to the initial laser frequency separation (LFS), i.e. the difference between the laser frequencies. In the time domain, a train of pre-compressed pulses with widths of a few picoseconds arises out of the initial bichromatic, deeply modulated cosine wave. These pulses undergo strong compression in the subsequent amplifying Erbium-doped fibre: sub-100 fs pulses with broad OFC spectra are formed. In the following low-dispersion highly nonlinear fibre, the OFCs experience a further broadening and the intensities of the comb lines are fairly equalised. This approach was mathematically modelled by means of a Generalised Nonlinear Schrödinger Equation (GNLS) that contains terms describing the nonlinear optical Kerr effect, the delayed Raman response, the pulse self-steepening and the linear optical losses, as well as the wavelength-dependent Erbium gain profile for the second fibre. The initial condition, a deeply modulated cosine wave, mimics the radiation of the two initial CW lasers.
The numerical studies are performed with the help of Matlab scripts that were specifically developed for the integration of the GNLS and the initial condition according to the proposed approach for the OFC generation. The scripts are based on the Fourth-Order Runge-Kutta in the Interaction Picture Method (RK4IP) in combination with the local error method.
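The workhorse of such solvers can be illustrated with a minimal symmetrised split-step Fourier scheme for the plain NLS — a stripped-down stand-in for the RK4IP/GNLS machinery described above (no Raman term, self-steepening, gain or loss), written here in Python rather than Matlab, with all parameter values illustrative.

```python
import numpy as np

def split_step_nls(a0, dt, beta2, gamma, dz, n_steps):
    """Symmetrised split-step Fourier solver for the basic NLS
    A_z = -i*(beta2/2)*A_tt + i*gamma*|A|^2*A  on a periodic time grid."""
    n = a0.size
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)           # angular frequency grid
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)       # dispersion half-step
    a = a0.astype(complex)
    for _ in range(n_steps):
        a = np.fft.ifft(half_disp * np.fft.fft(a))      # D/2 in Fourier domain
        a = a * np.exp(1j * gamma * np.abs(a)**2 * dz)  # full nonlinear step
        a = np.fft.ifft(half_disp * np.fft.fft(a))      # D/2
    return a

# A fundamental soliton, A(0,t) = sech(t), with beta2 = -1 and gamma = 1,
# should propagate with an unchanged envelope (only a phase rotation).
dt = 0.05
t = (np.arange(1024) - 512) * dt
a = split_step_nls(1.0 / np.cosh(t), dt, beta2=-1.0, gamma=1.0, dz=0.01, n_steps=100)
```

Both sub-steps are unitary, so the pulse energy is conserved to machine precision; the RK4IP method used in the thesis improves on this basic scheme mainly in accuracy order and in handling the non-unitary gain and Raman terms of the full GNLS.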
This work includes studies and results on the length optimisation of the first and the second fibre for different values of the group-velocity dispersion of the first fibre. Such length optimisation studies are necessary because the OFCs have the broadest possible bandwidth and exhibit a low level of noise exactly at the optimum lengths. Further, the optical pulse build-up in the first and the second fibre was studied by means of the numerical technique called Soliton Radiation Beat Analysis (SRBA). It was shown that a common soliton crystal state is formed in the first fibre for low laser input powers. The soliton crystal continuously dissolves into separate optical solitons as the input power increases. The pulse formation in the second fibre depends critically on the features of the pulses formed in the first fibre. I showed that, for low input powers, an adiabatic soliton compression delivering low-noise OFCs occurs in the second fibre. At high input powers, the pulses in the first fibre have more complicated structures, which leads to pulse break-up in the second fibre with a subsequent degradation of the OFC noise performance. The pulse intensity noise studies performed within the framework of this thesis allow statements about the noise performance of an OFC. They showed that the intensity noise of the whole system decreases with increasing LFS.
This work investigates the influence of the Coriolis force on mass motion related to the Rheasilvia impact basin on asteroid (4) Vesta's southern hemisphere. The giant basin is 500 km in diameter, with a centre that nearly coincides with the rotation axis of Vesta. The Rheasilvia basin partially overlaps an earlier, similarly large impact basin, Veneneia.
Mass motion within and in the vicinity of the Rheasilvia basin includes slumping and landslides, which, primarily due to their small linear extents, have not been noticeably affected by the Coriolis force. However, a series of ridges related to the basin exhibit significant curvature, which may record the effect of the Coriolis force on the mass motion which generated them.
In this thesis 32 of these curved ridges, in three geologically distinct regions, were examined. The mass motion velocities from which the ridge curvatures may have resulted during the crater modification stage were investigated. Velocity profiles were derived by fitting inertial circles along the curved ridges and considering both the current and past rotation states of Vesta. An iterative, statistical approach was used, whereby the radii of inertial circles were obtained through repeated fitting to triplets of points across the ridges. The most frequently found radius for each central point was then used for velocity derivation at that point.
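The velocity implied by a fitted inertial-circle radius follows directly from the Coriolis parameter, v = 2Ωr|sin λ|. The sketch below uses Vesta's published rotation period of about 5.342 h; the radius and latitude values are purely illustrative and not taken from the thesis.

```python
import numpy as np

def inertial_circle_velocity(radius_m, latitude_deg, period_h=5.342):
    """Flow speed implied by a fitted inertial-circle radius r:
    v = 2 * Omega * r * |sin(latitude)|, where Omega is the body's
    rotation rate (Vesta's period of ~5.342 h is the default)."""
    omega = 2.0 * np.pi / (period_h * 3600.0)    # rotation rate in rad/s
    return 2.0 * omega * radius_m * abs(np.sin(np.radians(latitude_deg)))

# Illustrative example: a 50 km fitted radius at 75 deg S, i.e. deep
# inside the near-polar Rheasilvia basin.
v = inertial_circle_velocity(50e3, -75.0)
```

With these illustrative inputs the formula yields a speed of a few tens of m/s, the same order as the velocities derived in the analysis below.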
The results of the velocity analysis are strongly supportive of a Coriolis force origin for the curved ridges. Derived velocities (29.6 ± 24.6 m/s) generally agree well with previously published predictions from numerical simulations of mass motion during the impact process. Topographical features such as local slope gradient and mass deposition regions on the curved ridges also independently agree with regions in which the calculated mass motion accelerates or decelerates.
Sections of constant acceleration, deceleration and constant velocity are found, showing that mass motion is being governed by varying conditions of topography, regolith structure and friction. Estimates of material properties such as the effective viscosities (1.9-9.0·10⁶ Pa·s) and coefficients of friction (0.02-0.81) are derived from the velocity profile information in these sections. From measured accelerations of mass motions on the crater wall, it is also shown that the crater walls must have been locally steeper at the time of the mass motion.
Together with these novel insights into the state and behaviour of material moving during the modification stage of Rheasilvia's formation, this work represents the first time that the Coriolis effect on mass motion during crater formation has been shown to result in diagnostic features preserved until today.
Breaking down complexity
(2015)
The unbounded expressive capacity of human language cannot boil down to an infinite list of sentences stored in a finite brain. Our linguistic knowledge is rather grounded in a rule-based universal syntactic computation—called Merge—which takes categorized units as input (e.g. this and ship) and generates structures by binding words recursively into more complex hierarchies of any length (e.g. this ship; this ship sinks…). Here we present data from different fMRI datasets probing the cortical implementation of this fundamental process. We first pushed complexity down to a three-word level, to explore how Merge creates minimally hierarchical phrases and sentences. We then moved to the most fundamental two-word level, to directly assess the universal invariant nature of Merge when no additive mechanisms are involved. Our most general finding is that Merge as the basic syntactic operation is primarily performed by a confined area, namely BA 44 in the IFG. Activity reduces to its most ventral-anterior portion at the most fundamental level, following the fine-grained sub-anatomical parcellation proposed for the region. The deep frontal operculum/anterior-dorsal insula (FOP/adINS), a phylogenetically older and less specialized region, rather appears to support word-accumulation processing, in which the categorical information of the word is first accessed based on its lexical status and then maintained on hold before further processing takes place. The present data confirm the general notion of BA 44 being activated as a function of complex structural hierarchy, but they go beyond this view by proposing that structural sensitivity in BA 44 is already apparent at the lowest levels of complexity, during which minimal phrase structures are built up and syntactic Merge is assessed. Further, they call for a redefinition of BA 44 from a multimodal area to a macro-region with internally localizable functional profiles.
The Tien-Shan and the neighboring Pamir region are two of the largest mountain belts in the world. Their deformation is dominated by intermontane basins bounded by active thrust and reverse faults. The Tien-Shan mountain belt is characterized by a very high rate of seismicity along its margins as well as within its interior. The study area of the thesis presented here, the western part of the Tien-Shan region, is currently seismically active with small and moderate-sized earthquakes. However, at the end of the 19th and the beginning of the 20th century, this region was struck by a remarkable series of large-magnitude (M>7) earthquakes, two of which reached magnitude 8.
Those large earthquakes occurred prior to the installation of the global digital seismic network and were therefore recorded only by analog seismic instruments. The processing of analog data brings several difficulties; for example, the true parameters of the recording system are not always known. Another complicated task is the digitization of those records, a very time-consuming and delicate undertaking. Therefore a special set of techniques was developed, and modern methods were adapted, for the analysis of the digitized instrumental data.
The main goal of the presented thesis is to evaluate the impact of large-magnitude M≥7.0 earthquakes, which occurred at the turn of the 19th to the 20th century in the Tien-Shan region, on the overall regional tectonics. A further objective is to investigate the accuracy of previously estimated source parameters for those earthquakes, which were mainly based on macroseismic observations, and to re-estimate them based on the instrumental data. An additional aim of this study is to develop tools and methods for faster and more productive use of analog seismic data in modern seismology.
In this thesis, the ten strongest and most interesting historical earthquakes in the Tien-Shan region are analyzed. The methods and tools for digitizing and processing the analog seismic data are presented. The source parameters of the two major M≥8.0 earthquakes in the Northern Tien-Shan are re-estimated in individual case studies. Those studies have been published as peer-reviewed articles in reputed journals. Additionally, the Sarez-Pamir earthquake and its connection with one of the largest landslides in the world, the Usoy landslide, is investigated by seismic modeling. These results have also been published as a research paper.
With the developed techniques, the source parameters of seven further major earthquakes in the region were determined and their impact on the regional tectonics was investigated. The large magnitudes of those earthquakes are confirmed by instrumental data. The focal mechanisms of these earthquakes were determined, providing evidence for the responsible faults or fault systems.
A main limitation in the field of flood hydrology is the short time period covered by instrumental flood time series, rarely exceeding 50 to 100 years. However, climate variability acts on short to millennial time scales, and identifying causal linkages to extreme hydrological events requires longer datasets. To extend instrumental flood time series back in time, natural geoarchives are increasingly explored as flood recorders. Among these, annually laminated (varved) lake sediments seem to be the most suitable archives, since (i) lake basins act as natural sediment traps in the landscape, continuously recording land surface processes including floods, and (ii) individual flood events are preserved as detrital layers intercalated in the varved sediment sequence and can be dated with seasonal precision by varve counting.
The main goal of this thesis is to improve the understanding of the hydrological and sedimentological processes leading to the formation of detrital flood layers and thereby to contribute to an improved interpretation of lake sediments as natural flood archives. This goal was achieved in two ways: first, by comparing detrital layers in sediments of two dissimilar peri-Alpine lakes, Lago Maggiore in Northern Italy and Mondsee in Upper Austria, with local instrumental flood data; and, second, by tracking detrital layer formation during floods with a combined hydro-sedimentary monitoring network at Lake Mondsee, spanning from rainfall to the deposition of detrital sediment at the lake floor.
Successions of sub-millimetre to 17 mm thick detrital layers were detected in sub-recent lake sediments of the Pallanza Basin in the western part of Lago Maggiore (23 detrital layers) and Lake Mondsee (23 detrital layers) by combining microfacies analysis and high-resolution micro X-ray fluorescence scanning techniques (µ-XRF). The detrital layer records were dated by detailed intra-basin correlation to a previously dated core sequence in Lago Maggiore and by varve counting in Mondsee. The intra-basin correlation of detrital layers between five sediment cores in Lago Maggiore and 13 sediment cores in Mondsee allowed distinguishing river runoff events from local erosion. Moreover, characteristic spatial distribution patterns of detrital flood layers revealed different depositional processes in the two dissimilar lakes: underflows in Lago Maggiore as well as under- and interflows in Mondsee. Comparisons with runoff data of the main tributary streams, the Toce River at Lago Maggiore and the Griesler Ache at Mondsee, revealed empirical runoff thresholds above which the deposition of a detrital layer becomes likely. Whereas this threshold is the same for the whole Pallanza Basin in Lago Maggiore (600 m3s-1 daily runoff), it varies within Lake Mondsee. At proximal locations close to the river inflow, detrital layer deposition requires floods exceeding a daily runoff of 40 m3s-1, whereas at a location 2 km more distal an hourly runoff of 80 m3s-1 and at least 2 days with runoff above 40 m3s-1 are necessary. A relation between the thickness of individual deposits and the runoff amplitude of the triggering events is apparent for both lakes but is evidently further influenced by variable influx and lake-internal distribution of detrital sediment.
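The site-specific thresholds reported for Lake Mondsee can be sketched as simple decision rules. This is only an illustrative reading of the stated numbers; the function names and input conventions are hypothetical, not from the thesis:

```python
# Illustrative sketch of the site-specific runoff thresholds reported for
# Lake Mondsee; function and argument names are hypothetical.

def flood_layer_likely_proximal(daily_runoff_m3s):
    """Proximal site near the inflow: deposition becomes likely
    above a daily runoff of 40 m3/s."""
    return daily_runoff_m3s > 40.0

def flood_layer_likely_distal(hourly_peak_m3s, daily_runoff_series_m3s):
    """Distal site (~2 km further): requires an hourly runoff above 80 m3/s
    and at least two days with daily runoff above 40 m3/s."""
    days_above = sum(1 for q in daily_runoff_series_m3s if q > 40.0)
    return hourly_peak_m3s > 80.0 and days_above >= 2
```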
To investigate processes of flood layer formation in lake sediments, hydro-sedimentary dynamics in Lake Mondsee and its main tributary stream, the Griesler Ache, were monitored from January 2011 to December 2013. Precipitation, discharge and turbidity were recorded continuously at the river's outlet to the lake and compared to sediment fluxes trapped close to the lake bottom at intervals of three to twelve days and on a monthly basis in three different water depths at two locations in the lake basin, at distances of 0.9 km (proximal) and 2.8 km (distal) from the Griesler Ache inflow. Within the three-year observation period, 26 river floods of different amplitude (10-110 m3s-1) were recorded, resulting in variable sediment fluxes to the lake (4-760 g m-2d-1). Vertical and lateral variations in flood-related sedimentation during the largest floods indicate that interflows are the main processes of lake-internal sediment transport in Lake Mondsee. The comparison of hydrological and sedimentological data revealed (i) a rapid sedimentation within three days after the peak runoff in the proximal and within six to ten days in the distal lake basin, (ii) empirical runoff thresholds for triggering sediment flux at the lake floor increasing from the proximal (20 m3s-1) to the distal lake basin (30 m3s-1) and (iii) factors controlling the amount of detrital sediment deposition at a certain location in the lake basin. The total influx of detrital sediment is mainly driven by runoff amplitude, catchment sediment availability and episodic sediment input by local sediment sources. A further role is played by the lake-internal sediment distribution, which is not the same for each event but is favoured by flood duration and the existence of a thermocline and, therewith, by the season in which a flood occurs.
In summary, the studies reveal a high sensitivity of lake sediments to flood events of different intensity. Certain runoff amplitudes are required to supply enough detrital material to form a visible detrital layer at the lake floor. Positive feedback mechanisms between rainfall, runoff, erosion, fluvial sediment transport capacity and lake-internal sediment distribution are plausible. Therefore, runoff thresholds for detrital layer formation are site-specific due to different lake-catchment characteristics. However, the studies also reveal that flood amplitude is not the only control on the amount of deposited sediment at a certain location in the lake basin, even for the strongest flood events. Sediment deposition is rather influenced by a complex interaction of catchment and in-lake processes. This means that the coring location within a lake basin strongly determines the significance of a flood layer record. Moreover, the results show that while lake sediments provide ideal archives for reconstructing flood frequencies, the reconstruction of flood amplitudes is a more complex issue and requires detailed knowledge of the relevant catchment and in-lake sediment transport and depositional processes.
Peak oil is forcing our society to shift from fossil to renewable resources. However, such renewable resources are also scarce, and they too must be used in the most efficient and sustainable way possible. Biorefining is a concept that represents both resource efficiency and sustainability. This approach initiates a cascade use, which means food and feed production before material use, and an energy-related use at the end of the value-added chain. However, sustainability should already start in the fields, on the agricultural side, where the industrially-used biomass is produced. Therefore, the aim of my doctoral thesis is to analyse the sustainable feedstock supply for biorefineries. In contrast to most studies on biorefineries, I focus on the sustainable provision of feedstock and not on the bioengineering processing of whatever feedstock is available.
Grasslands provide a high biomass potential. They are often inefficiently used, so a new utilisation concept based on the biorefining approach can increase the added value from grasslands. Fodder legumes from temporary and permanent grasslands were chosen for this study. Previous research shows that they are a promising feedstock for industrial uses, and their positive environmental impact is an important byproduct to promote sustainable agricultural production systems.
Green Biorefineries are a class of biorefineries that use fresh green biomass, such as grasses or fodder legumes, as feedstock. After fractionation, an organic solution (press juice) forms; this is used for the production of organic acids, chemicals and extracts, as well as fertilisers. A fibre component (press cake) is also created to produce feed, biomaterials and biogas. This thesis examines a specific value chain, using alfalfa and clover/grass as feedstock and generating lactic acid and one type of cattle feed from it. The research question is whether biomass production needs to be adapted for the utilisation of fodder legumes in the Green Biorefinery approach. I have attempted to give a holistic analysis of the cultivation, processing and utilisation of two specific grassland crops. Field trials with alfalfa and clover/grass at different study sites were carried out to obtain information on biomass quality and quantity depending on the crop, study site and harvest time. The fresh biomass was fractionated with a screw press and the composition of the press juices and cakes was analysed. Fermentation experiments were carried out to determine the usability of the press juices for lactic acid production. The harvest time proved not to be of high importance for the quality of press juices as a fermentation medium. For permanent grasslands, late cuts, often needed for reasons of nature conservation, are possible without a major influence on feedstock quality. The press cakes were silaged for feed-value determination.
Following evidence that both intermediate products are suitable feedstocks in the Green Biorefinery approach, I developed a cost-benefit analysis comparing different production scenarios on a farm. Two standard crop rotations for Brandenburg, producing either only market crops or market crops and fodder legumes for ruminant feed production, were compared to a system that uses the cultivated fodder legumes for the Green Biorefinery value chain instead of feed production alone. Timely processing of the raw material is important to maintain quality for industrial uses, so on-site processing at the farm is assumed in the Green Biorefinery scenario. As a result, more added value stays in the rural area. Two farm sizes, common for many European regions, were chosen to examine the influence of scale. The cost side for farmers has also been analysed in detail to assess which farm characteristics make the production of press juices for the biochemical industry viable. Results show that, for large farm sizes in particular, the potential profits are high. Additionally, the wider spectrum of marketable products generates new sources of income for farmers.
The holistic analysis of the supply chain provides evidence that the cultivation processes for fodder legumes do not need to be adapted for use in Green Biorefineries. In fact, the new utilisation approach even widens the cultivation and processing spectrum and can increase economic viability of fodder legume production in conventional farming.
Physical fitness is an important marker of health that enables people to carry out activities of daily living with vigour and alertness, without undue fatigue, and with sufficient reserve to enjoy active leisure pursuits and to meet unforeseen emergencies. In particular, the scientific findings that the onset of civilization diseases (e.g., obesity, cardiovascular disease) begins in childhood and that physical fitness tracks (at least) into young adulthood have raised the regular monitoring and promotion of physical fitness in children to a public health issue. For the evaluation of a child's physical fitness over time (i.e., its development), the use of longitudinally based percentile values is of particular interest because they capture true within-subject physical fitness development (i.e., individual changes in the timing and tempo of growth and maturation). Besides its genetic determination (e.g., sex, body height), physical fitness is influenced by factors relating to children's environment and behaviour. For instance, disparities in physical fitness according to children's living area are frequently reported, with living in rural areas appearing more favourable for children's physical fitness than living in urban areas. In addition, cross-sectional studies found higher fitness values in children participating in sports clubs as compared to non-participants. However, to date, it remains unresolved whether the observed associations between both factors (i.e., living area and sports club participation) and children's physical fitness reflect a long-term effect. In addition, social inequality as determined by socioeconomic status (SES) extends through many areas of children's lives.
While evidence indicates that SES is inversely related to various indices of children's daily life and behaviour, such as educational success, nutritional habits, and sedentary and physical activity behaviour, a potential relationship between children's physical fitness and SES has hardly been investigated, and the existing studies report inconsistent results.
The present thesis addressed three objectives: (1) to generate physical fitness percentiles for 9- to 12-year-old boys and girls using a longitudinal approach and to analyse the age- and sex-specific development of physical fitness, (2) to investigate the long-term effect of living area and sports club participation on physical fitness in third- to sixth-grade primary school students, and (3) to examine associations between SES and physical fitness in a large and representative (i.e., for a German federal state) sample of third-grade primary school students.
Methods
(i/ii) Healthy third graders were followed over four consecutive years (up to grade 6), including annual assessments of physical fitness and a parental questionnaire (i.e., on the status of sports club participation and living area). Six tests were conducted to estimate various components of physical fitness: speed (50-m sprint test), upper body muscular power (1-kg ball push test), lower body muscular power (triple hop test), flexibility (stand-and-reach test), agility (star agility run test), and cardiorespiratory fitness (CRF) (9-min run test). (iii) Within a cross-sectional study (i.e., third objective), the physical fitness of third graders was assessed by six physical fitness tests: speed (20-m sprint test), upper body muscular power (1-kg ball push test), lower body muscular power (standing long jump [SLJ] test), flexibility (stand-and-reach test), agility (star agility run test), and CRF (6-min run test). By means of a questionnaire, students reported their status of organized sports participation (OSP).
Results
(i) With respect to percentiles of physical fitness development, test performances increased in boys and girls from age 9 to 12, except for boys' flexibility (i.e., stable performance over time). Girls showed significantly better performance in flexibility, whereas boys scored significantly higher in the remaining physical fitness tests. In girls as compared to boys, physical fitness development was slightly faster for upper body muscular power but substantially faster for flexibility. The generated physical fitness percentile curves indicated a timed and capacity-specific (curvilinear) physical fitness development for upper body muscular power, agility, and CRF. (ii) Concerning the effect of living area and sports club participation on physical fitness development, children living in urban areas showed a significantly faster performance development in the physical fitness components of upper and lower body muscular power as compared to peers from rural areas. The same direction was noted as a trend in CRF. Additionally, children that regularly participated in a sports club demonstrated a significantly faster performance development in lower body muscular power than those that did not continuously participate in a sports club. A trend of faster performance development in sports club participants appeared in CRF as well. (iii) Regarding the association of SES with physical fitness, the percentage of third graders that achieved a high physical fitness level in lower body muscular power and CRF was significantly higher in students attending schools in communities with high SES as compared to middle and low SES, irrespective of sex. Similarly, students from the high-SES group performed significantly better in lower body muscular power and CRF than students from the middle and/or the low-SES group.
Conclusion
(i) The generated percentile values provide an objective tool to estimate children's physical fitness within the frame of physical education (e.g., age- and sex-specific grading of motor performance) and, further, to detect children with specific fitness characteristics (low fit or high fit) that may be indicative of the necessity of preventive health promotion or long-term athlete development. (ii) It is essential to consider variables of different domains (e.g., environment and behaviour) in order to improve knowledge of potential factors that influence physical fitness during childhood. In this regard, the present thesis provides a first input to clarify the causal influence of living area and sports club participation on physical fitness development in school-aged children. Living in urban areas as well as regular participation in sports clubs positively affected children's physical fitness development (i.e., muscular power and CRF). Herein, sports club participation seems to be a key factor within the relationship between living area and physical fitness. (iii) The findings of the present thesis imply that attending schools in communities with high SES is associated with better performance in specific physical fitness test items (i.e., muscular power, CRF) in third graders. Extra-curricular physical education classes may represent an important equalizing factor for physical activity opportunities in children of different SES backgrounds. In view of the strong evidence for a positive relationship between physical fitness, in particular muscular fitness and CRF, and health, more emphasis should be laid on establishing sports clubs and extra-curricular physical education classes as an easy and attractive means to promote fitness- and hence health-enhancing daily physical activity for all children (i.e., a public health approach).
Analysis and modeling of transient earthquake patterns and their dependence on local stress regimes
(2015)
Investigations in the field of earthquake triggering and associated interactions, which include aftershock triggering as well as induced seismicity, are important for seismic hazard assessment due to the destructive power of earthquakes. One approach to studying earthquake triggering and interactions is the use of statistical earthquake models, which are based on knowledge of basic seismicity properties, in particular the magnitude distribution and the spatiotemporal properties of triggered events.
In my PhD thesis I focus on some specific aspects of aftershock properties, namely, the relative seismic moment release of aftershocks with respect to their mainshocks; the spatial correlation between aftershock occurrence and fault deformation; and the influence of aseismic transients on aftershock parameter estimation. For the analysis of aftershock sequences I choose a statistical approach, in particular the well-known Epidemic Type Aftershock Sequence (ETAS) model, which accounts for the contributions of background and triggered seismicity. For my specific purposes, I develop two ETAS model modifications in collaboration with Sebastian Hainzl. By means of this approach, I estimate the statistical aftershock parameters and perform simulations of aftershock sequences as well.
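The temporal core of the standard ETAS model can be sketched as a conditional intensity: a constant background rate plus an Omori-type power-law contribution from every past event, weighted exponentially by its magnitude. The sketch below uses the conventional parameter names (mu, K, alpha, c, p) and is illustrative only; it does not reproduce the modified ETAS variants developed in the thesis:

```python
import math

def etas_intensity(t, events, mu, K, alpha, c, p, m_c):
    """Standard temporal ETAS conditional intensity (illustrative):
    lambda(t) = mu + sum_{t_i < t} K * exp(alpha*(m_i - m_c)) * (t - t_i + c)**(-p)
    `events` is a list of (t_i, m_i) pairs; m_c is the cutoff magnitude."""
    rate = mu  # background seismicity rate
    for t_i, m_i in events:
        if t_i < t:  # only past events trigger
            rate += K * math.exp(alpha * (m_i - m_c)) * (t - t_i + c) ** (-p)
    return rate
```

With no past events the intensity reduces to the background rate mu; each additional past event raises it by an amount that decays as a power law in time.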
In the case of seismic moment release of aftershocks, I focus on the ratio of the cumulative seismic moment release with respect to the mainshocks. Specifically, I investigate the ratio with respect to the focal mechanism of the mainshock and estimate an effective magnitude representing the cumulative aftershock energy (similar to Båth's law, which defines the average magnitude difference between a mainshock and its largest aftershock). Furthermore, I compare the observed seismic moment ratios with the results of ETAS simulations. In particular, I test a restricted ETAS (RETAS) model, which is based on the results of a clock-advance model and static stress triggering.
To analyze spatial variations of triggering parameters I focus in my second approach on the aftershock occurrence triggered by large mainshocks and the study of the aftershock parameter distribution and their spatial correlation with the coseismic/postseismic slip and interseismic locking. To invert the aftershock parameters I improve the modified ETAS (m-ETAS) model, which is able to take the extension of the mainshock rupture into account. I compare the results obtained by the classical approach with the output of the m-ETAS model.
My third approach is concerned with the temporal clustering of seismicity, which may be related not only to earthquake-earthquake interactions but also to a time-dependent background rate, potentially biasing the parameter estimations. Thus, my coauthors and I also apply a modification of the ETAS model that takes time-dependent background activity into account. It is applicable in two different cases: when an aftershock catalog is temporally incomplete, or when the background seismicity rate changes with time due to the presence of aseismic forcing.
An essential part of any research is the testing of the developed models against observational data sets appropriate for the particular study case. Therefore, in the case of seismic moment release I use the global seismicity catalog. For the spatial distribution of triggering parameters I exploit the two aftershock sequences of the Mw 8.8 2010 Maule (Chile) and Mw 9.0 2011 Tohoku (Japan) mainshocks. In addition, I use published geodetic slip models of different authors. To test our ability to detect aseismic transients, my coauthors and I use data sets from Western Bohemia (Central Europe) and California.
Our results indicate that:
(1) the seismic moment of aftershocks with respect to mainshocks depends on the static stress changes and is maximal for the normal, intermediate for thrust and minimal for strike-slip stress regimes, where the RETAS model shows a good correspondence with the results;
(2) The spatial distribution of aftershock parameters, obtained by the m-ETAS model, shows anomalous values in areas of reactivated crustal fault systems. In addition, the aftershock density is found to be correlated with coseismic slip gradient, afterslip, interseismic coupling and b-values. Aftershock seismic moment is positively correlated with the areas of maximum coseismic slip and interseismically locked areas. These correlations might be related to the stress level or to material properties variations in space;
(3) Ignoring aseismic transient forcing or temporal catalog incompleteness can lead to significant under- or overestimation of the underlying triggering parameters. When a catalog is complete, this method helps to identify aseismic sources.
In many procedures of seismic risk mitigation, ground motion simulations are needed to test systems or improve their effectiveness. For example, they may be used to estimate the level of ground shaking caused by future earthquakes. Good physical models for ground motion simulation are also thought to be important for hazard assessment, as they could close gaps in the existing datasets. Since the observed ground motion in nature shows a certain variability, part of which cannot be explained by macroscopic parameters such as the magnitude or position of an earthquake, it would be desirable that a good physical model not only produces one single seismogram but also reproduces this natural variability.
In this thesis, I develop a method to model realistic ground motions in a way that is computationally simple to handle, permitting multiple scenario simulations. I focus on two aspects of ground motion modelling. First, I use deterministic wave propagation for the whole frequency range – from static deformation to approximately 10 Hz – but account for source variability by implementing self-similar slip distributions and rough fault interfaces. Second, I scale the source spectrum so that the modelled waveforms represent the correct radiated seismic energy. With this scaling I verify whether the energy magnitude is suitable as an explanatory variable, which characterises the amount of energy radiated at high frequencies – the advantage of the energy magnitude being that it can be deduced from observations, even in real-time.
Applications of the developed method for the 2008 Wenchuan (China) earthquake, the 2003 Tokachi-Oki (Japan) earthquake and the 1994 Northridge (California, USA) earthquake show that the fine source discretisations combined with the small scale source variability ensure that high frequencies are satisfactorily introduced, justifying the deterministic wave propagation approach even at high frequencies. I demonstrate that the energy magnitude can be used to calibrate the high-frequency content in ground motion simulations.
Because deterministic wave propagation is applied to the whole frequency range, the simulation method permits the quantification of the variability in ground motion due to parametric uncertainties in the source description. A large number of scenario simulations for an M=6 earthquake show that the roughness of the source as well as the distribution of fault dislocations have a minor effect on the simulated variability by diminishing directivity effects, while hypocenter location and rupture velocity more strongly influence the variability. The uncertainty in energy magnitude, however, leads to the largest differences of ground motion amplitude between different events, resulting in a variability which is larger than the one observed.
For the presented approach, this dissertation shows (i) the verification of the computational correctness of the code, (ii) the ability to reproduce observed ground motions and (iii) the validation of the simulated ground motion variability. Those three steps are essential to evaluate the suitability of the method for means of seismic risk mitigation.
Methicillin-resistant Staphylococcus aureus (MRSA) is one of the most important antibiotic-resistant pathogens in hospitals and the community. Recently, a new generation of MRSA, the so-called livestock-associated (LA) MRSA, has emerged, occupying food-producing animals as a new niche. LA-MRSA can be regularly isolated from economically important livestock species, including the corresponding meats. The present thesis takes a methodological approach to confirm the hypothesis that LA-MRSA are transmitted along the pork, poultry and beef production chains from animals on the farm to meat on the consumer's table. For this purpose, two new concepts were developed, adapted to differing data sets.
A mathematical model of the pig slaughter process was developed that simulates the change in MRSA carcass prevalence during slaughter, with special emphasis on identifying critical process steps for MRSA transmission. Based on prevalences as sole input variables, the model framework is able to estimate the average value range of both the MRSA elimination and contamination rates for each of the slaughter steps. These rates are then used to set up a Monte Carlo simulation of the slaughter process chain. The model concludes that, regardless of the initial extent of MRSA contamination, low outcome prevalences ranging between 0.15 % and 1.15 % can be achieved among carcasses at the end of slaughter. Thus, the model demonstrates that the standard procedure of pig slaughtering in principle includes process steps with the capacity to limit MRSA cross-contamination. Scalding and singeing were identified as the critical process steps for a significant reduction of superficial MRSA contamination.
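The structure of such a simulation can be sketched as follows: each carcass passes through a sequence of steps, at each of which a positive carcass is cleared with a step-specific elimination probability and a negative carcass is cross-contaminated with a step-specific contamination probability. All rates and the step sequence below are illustrative placeholders, not the thesis' estimated values:

```python
import random

def simulate_chain(n_carcasses, initial_prev, steps, seed=42):
    """Monte Carlo sketch of MRSA carcass prevalence along a slaughter chain.
    `steps` is a list of (elimination_rate, contamination_rate) pairs, one per
    process step; rates here are illustrative. Returns the final prevalence."""
    rng = random.Random(seed)
    carcasses = [rng.random() < initial_prev for _ in range(n_carcasses)]
    for elim, contam in steps:
        carcasses = [
            # positive carcass stays positive unless eliminated;
            # negative carcass may pick up cross-contamination
            (rng.random() >= elim) if positive else (rng.random() < contam)
            for positive in carcasses
        ]
    return sum(carcasses) / n_carcasses

# e.g. a strongly reducing step (scalding/singeing) followed by milder steps:
# simulate_chain(10000, 0.4, [(0.95, 0.001), (0.9, 0.005), (0.2, 0.01)])
```

A step with a high elimination rate and a low contamination rate drives the prevalence towards a low equilibrium largely independent of the initial value, which mirrors the qualitative conclusion of the model.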
In the course of the German national monitoring program for zoonotic agents, MRSA prevalence and typing data are regularly collected, covering the key steps of different food production chains. A new statistical approach is proposed for analyzing this cross-sectional set of MRSA data with regard to potential farm-to-fork transmission. For this purpose, chi-squared statistics were combined with the calculation of the Czekanowski similarity index to compare the distributions of strain-specific characteristics between the samples from the farm, carcasses after slaughter and meat at retail. The method was applied to the turkey and veal production chains, and the consistently high degrees of similarity revealed between all sample pairs indicate MRSA transmission along the chain.
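The Czekanowski similarity index compares two frequency distributions as twice the sum of the element-wise minima over the sum of all counts; identical distributions score 1, disjoint ones 0. A minimal sketch (the example counts are invented, not the monitoring data):

```python
def czekanowski_index(x, y):
    """Czekanowski (proportional) similarity index between two
    frequency distributions of equal length:
    2 * sum(min(x_i, y_i)) / (sum(x) + sum(y))."""
    if len(x) != len(y):
        raise ValueError("distributions must have equal length")
    num = 2.0 * sum(min(a, b) for a, b in zip(x, y))
    den = sum(x) + sum(y)
    return num / den if den else 0.0

# e.g. comparing hypothetical strain-type counts from farm vs. carcass samples:
# czekanowski_index([10, 5, 2], [8, 6, 3])  -> close to 1 (high similarity)
```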
As the proposed methods are not specific to process chains or pathogens they offer a broad field of application and extend the spectrum of methods for bacterial transmission assessment.
The rise of evolutionary novelties is one of the major drivers of evolutionary diversification. African weakly electric fishes (Teleostei, Mormyridae) have undergone an outstanding adaptive radiation, putatively owing to their ability to communicate through species-specific Electric Organ Discharges (EODs) produced by a novel, muscle-derived electric organ. Indeed, such EODs might have acted as effective pre-zygotic isolation mechanisms, hence favoring ecological speciation in this group of fishes. Despite the evolutionary importance of this organ, genetic investigations regarding its origin and function have remained limited.
The ultimate aim of this study is to better understand the genetic basis of EOD production by exploring the transcriptomic profiles of the electric organ and of its ancestral counterpart, the skeletal muscle, in the genus Campylomormyrus. After having established a set of reference transcriptomes using “Next-Generation Sequencing” (NGS) technologies, I performed in silico analyses of differential expression, in order to identify sets of genes that might be responsible for the functional differences observed between these two kinds of tissues. The results of such analyses indicate that: i) the loss of contractile activity and the decoupling of the excitation-contraction processes are reflected by the down-regulation of the corresponding genes in the electric organ; ii) the metabolic activity of the electric organ might be specialized towards the production and turnover of membrane structures; iii) several ion channels are highly expressed in the electric organ in order to increase excitability, and iv) several myogenic factors might be down-regulated by transcription repressors in the EO.
A secondary task of this study is to improve the genus-level phylogeny of Campylomormyrus by applying new methods of inference based on the multispecies coalescent model, in order to reduce the conflict among gene trees and to reconstruct a phylogenetic tree as close as possible to the actual species tree. Using one mitochondrial and four nuclear markers, I was able to resolve the phylogenetic relationships among most of the currently described Campylomormyrus species. Additionally, I applied several coalescent-based species delimitation methods in order to test the hypothesis that putatively cryptic species, distinguishable only by their EOD, belong to independently evolving lineages. The results of this analysis were additionally validated by investigating patterns of diversification at 16 microsatellite loci. The results suggest the presence of a new, as yet undescribed species of Campylomormyrus.
This thesis investigates temporal and aspectual reference in the typologically unrelated African languages Hausa (Chadic, Afro–Asiatic) and Medumba (Grassfields Bantu).
It argues that Hausa is a genuinely tenseless language and compares the interpretation of temporally unmarked sentences in Hausa to that of morphologically tenseless sentences in Medumba, where tense marking is optional and graded.
The empirical behavior of the optional temporal morphemes in Medumba motivates an analysis as existential quantifiers over times and thus provides new evidence suggesting that languages vary in whether their (past) tense is pronominal or quantificational (see also Sharvit 2014).
The thesis proposes for both Hausa and Medumba that the alleged future tense marker is a modal element that obligatorily combines with a prospective future shifter (which is covert in Medumba). Whether or not a language's future marker is compatible with non-future interpretation is proposed to be predictable from the aspectual architecture of that language.
Reconstructing climate from the Dead Sea sediment record using high-resolution micro-facies analyses
(2015)
The sedimentary record of the Dead Sea is a key archive for reconstructing climate in the eastern Mediterranean region, as it stores the environmental and tectonic history of the Levant for the entire Quaternary. Moreover, the lake is located at the boundary between Mediterranean sub-humid to semi-arid and Saharo-Arabian hyper-arid climates, so that even small shifts in atmospheric circulation are sensitively recorded in the sediments. This DFG-funded doctoral project was carried out within the ICDP Dead Sea Deep Drilling Project (DSDDP), which aimed to obtain the first long, continuous and high-resolution sediment core from the deep Dead Sea basin. The drilling campaign took place in winter 2010-11, and more than 700 m of sediments were recovered. The main aims of this thesis were (1) to establish the lithostratigraphic framework for the ~455 m long sediment core from the deep Dead Sea basin and (2) to apply high-resolution micro-facies analyses for reconstructing and better understanding climate variability from the Dead Sea sediments.
Addressing the first aim, the sedimentary facies of the ~455 m long deep-basin core 5017-1 were described in great detail and characterised through continuous overview-XRF element scanning and magnetic susceptibility measurements. Three facies groups were classified: (1) the marl facies group, (2) the halite facies group and (3) a group comprising different expressions of massive, graded and slumped deposits including coarse clastic detritus. Core 5017-1 encompasses a succession of four main lithological units. Based on initial radiocarbon and U-Th ages and on correlation of these units with on-shore stratigraphic sections, the record comprises the last ca 220 ka, i.e. the upper part of the Amora Formation (part or all of the penultimate interglacial and glacial), the last interglacial Samra Fm. (~135-75 ka), the last glacial Lisan Fm. (~75-14 ka) and the Holocene Ze’elim Formation. A major advance of this record is that, for the first time, transitional intervals were recovered that are missing in the exposed formations and can now be studied in great detail.
Micro-facies analysis combines high-resolution microscopic thin-section analysis with µXRF element scanning, supported by magnetic susceptibility measurements. This approach allows micro-facies types to be identified and characterised, event layers to be detected and past climate variability to be reconstructed with up to seasonal resolution, provided the analysed sediments are annually laminated. Within this thesis, micro-facies analyses, supported by further sedimentological and geochemical analyses (grain size, X-ray diffraction, total organic carbon and calcium carbonate contents) and palynology, were applied to two time intervals:
(1) The early last glacial period (~117-75 ka) was investigated with a focus on millennial-scale hydroclimatic variations and lake-level changes recorded in the sediments. Distinguishing six micro-facies types with distinct geochemical and sedimentological characteristics allowed relative lake-level and water-balance changes to be estimated. Comparison with other records in the Mediterranean region suggests a close link between the hydroclimate of the Levant and North Atlantic and Mediterranean climates during the build-up of Northern Hemisphere ice sheets in the early last glacial period.
(2) A mostly annually laminated late Holocene section (~3700-1700 cal yr BP) was analysed in unprecedented detail through a multi-proxy, inter-site correlation of a shallow-water core (DSEn) and its deep-basin counterpart (5017-1). Within this study, a time series of erosion and dust deposition events spanning ca 1500 years was established and anchored to the absolute time-scale through 14C dating and age modelling. A particular focus was the characterisation of two dry periods, from ~3500 to 3300 and from ~3000 to 2400 cal yr BP. A major outcome was that the latter dry period coincided with a period of moist and cold climate in Europe related to a Grand Solar Minimum around 2800 cal yr BP, and with an increase in flood events despite overall dry conditions in the Dead Sea region during that time. These contrasting climate signatures in Europe and at the Dead Sea were likely linked through complex teleconnections of atmospheric circulation, causing a change in synoptic weather patterns in the eastern Mediterranean.
In summary, this doctoral work establishes the lithostratigraphic framework of a unique long sediment core from the deep Dead Sea basin, which serves as the basis for any further high-resolution investigation of this core. Two case studies demonstrate that micro-facies analyses are an invaluable tool for understanding the depositional processes in the Dead Sea and for deciphering past climate variability in the Levant on millennial to seasonal time-scales. Hence, this work adds important knowledge that helps establish the deep Dead Sea record as a key climate archive of supra-regional significance.
The non-linear behaviour of atmospheric dynamics is not well understood and complicates the evaluation and use of regional climate models (RCMs). These non-linearities induce chaos and internal variability (IV) within RCMs, making them sensitive to their initial conditions (IC). The IV is the ability of an RCM to realise different solutions in simulations that differ in their IC but share the same lower and lateral boundary conditions (LBC); it can thus be defined as the across-member spread between the ensemble members.
To investigate the IV and the dynamical and diabatic contributions generating it, four ensembles of RCM simulations were performed with the atmospheric regional model HIRHAM5. The integration domain is the Arctic, and each ensemble consists of 20 members. The ensembles cover the period from July to September of the years 2006, 2007, 2009 and 2012. The ensemble members share the same LBC and differ only in their IC. The different IC are realised by shifting the initialisation time successively by six hours: within each ensemble, the first simulation starts on 1 July at 00 UTC and the last on 5 July at 18 UTC, and each simulation runs until 30 September. The analysed period ranges from 6 July to 30 September, the interval covered by all ensemble members. The model runs without any nudging, allowing each simulation to develop freely and thus capturing the full internal variability of HIRHAM5.
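The initialisation scheme described above (20 members, start times staggered by six hours from 1 July 00 UTC to 5 July 18 UTC) can be verified with a short sketch; the year 2007 is used here purely as an example:

```python
from datetime import datetime, timedelta

# Ensemble start times: 20 members, initialised every six hours,
# beginning 1 July 00 UTC (example year: 2007).
first = datetime(2007, 7, 1, 0)
starts = [first + timedelta(hours=6 * i) for i in range(20)]

print(len(starts))   # 20 members
print(starts[-1])    # 2007-07-05 18:00:00 -> last member starts 5 July, 18 UTC
```

All 20 start times fall before 6 July, which is why the jointly covered (and analysed) period begins on 6 July.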
As a measure of the model-generated IV, the across-member standard deviation and variance are used, and the dynamical and diabatic processes influencing the IV are estimated by applying a diagnostic budget study for the IV tendency of the potential temperature developed by Nikiema and Laprise [2010, 2011]. The budget study is based on the first law of thermodynamics for potential temperature and the mass-continuity equation. The resulting budget equation reveals seven contributions to the potential temperature IV tendency.
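The across-member measure of IV amounts to computing, at each grid point and time, the variance of a field over the ensemble dimension. A minimal sketch, with hypothetical potential-temperature values for a single grid point (five members shown for brevity; the HIRHAM5 ensembles have 20):

```python
import statistics

# Hypothetical potential temperature (K) at one grid point and one time step,
# one value per ensemble member.
members = [285.2, 286.1, 284.8, 285.9, 285.5]

ens_mean = statistics.fmean(members)
# Across-member variance: mean squared deviation from the ensemble mean
# (population form, i.e. dividing by the number of members).
iv_var = statistics.pvariance(members, mu=ens_mean)
iv_std = iv_var ** 0.5

print(f"ensemble mean = {ens_mean:.2f} K, IV std = {iv_std:.3f} K")
```

Repeating this over all grid points, levels and time steps yields the IV fields whose temporal and vertical structure is analysed below.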
In a first study, this work analyses the IV within HIRHAM5. To this end, atmospheric circulation parameters and the potential temperature are investigated for all four ensemble years. Consistent with previous studies, the IV fluctuates strongly in time. Further, because all ensemble members are forced with the same LBC, the IV depends on the vertical level within the troposphere, with high values in the lower troposphere and at 500 hPa and low values in the upper troposphere and at the surface. For the same reason, the spatial distribution shows low IV at the boundaries of the model domain.
The diagnostic budget study for the IV tendency of potential temperature reveals that the seven contributions fluctuate in time like the IV itself, although the individual terms reach different absolute magnitudes. The budget study identifies the horizontal and vertical ‘baroclinic’ terms as the main contributors to the IV tendency, with the horizontal ‘baroclinic’ term producing and the vertical ‘baroclinic’ term reducing the IV. The other terms fluctuate around zero because they are generally small or cancel in the domain average.
The comparison of the results for the four ensembles (summers 2006, 2007, 2009 and 2012) reveals that, on average, the findings for each ensemble are quite similar in the magnitude and general pattern of the IV and its contributions. However, near the surface a weaker IV is produced with decreasing sea-ice extent. This is caused by a smaller impact of the horizontal 'baroclinic' term over some regions and by changing diabatic processes, particularly a more intense reduction of the IV due to condensative heating. It has to be emphasised, however, that the behaviour of the IV and its dynamical and diabatic contributions is influenced mainly by complex atmospheric feedbacks and large-scale processes rather than by the sea-ice distribution.
Additionally, a comparison with a second RCM covering the Arctic and using the same LBC and IC was performed. Both models yield very similar results for the IV and its dynamical and diabatic contributions. Hence, this investigation leads to the conclusion that the IV is a natural phenomenon and is independent of the applied RCM.