The main focus of the present thesis was to investigate the stabilization ability of poly(ionic liquid)s (PILs) in several examples, as well as to develop novel chemical structures and synthetic routes for PILs. The research can be divided into three parts: the synthesis and application of a hybrid material composed of a PIL and cellulose nanofibers (CNFs), thiazolium-containing PILs, and main-chain imidazolium-type PILs.
In the first chapter, a vinylimidazolium-type IL was polymerized in water in the presence of CNFs, resulting in the in situ electrostatic grafting of polymeric chains onto the surface of the CNFs. The synthesized hybrid material merged the advantages of its two components: the superior mechanical strength of CNFs and the anion-dependent solution properties of PILs. In contrast to unmodified CNFs, the hybrid could be stabilized and processed in organic solvents, enabling its application as a reinforcing agent for porous polyelectrolyte membranes.
In the second part, PILs and ionic polymers containing two types of thiazolium repeating units were synthesized. Such polymers displayed counterion-dependent thermal stability and solubility in organic solvents of various dielectric constants. This new class of PILs was tested as stabilizers and phase-transfer agents for carbon nanotubes in aqueous and organic media, and as binder materials to disperse electroactive powders and carbon additives in solid electrodes of lithium-ion batteries. The incorporation of S and N atoms into the polymeric structures also makes such PILs potential precursors for S,N-co-doped carbons.
In the last chapter, reactants originating from biomass were successfully harnessed to synthesize main-chain imidazolium-type PILs. An imidazolium-type diester IL obtained via a modified Debus-Radziszewski reaction underwent transesterification with a diol in a polycondensation reaction. This yielded a polyester-type PIL whose CO2 sorption properties were investigated. In the next step, the modified Debus-Radziszewski reaction was further applied to synthesize main-chain PILs in a convenient one-step protocol, using water as a green solvent and simple organic molecules as reagents. Depending on the structure of the employed diamine, the synthesized PILs showed, after anion exchange, superior thermal stability with unusually high carbonization yields.
Overall, the outcome of these studies will actively contribute to the current research on PILs by introducing novel PIL chemical structures, improved synthetic routes, and new examples of stabilized materials. The synthesis of main-chain imidazolium-type PILs by a modified Debus-Radziszewski reaction is of special interest for future work on porous ionic liquid networks as well as colloidal PIL nanoparticles.
The lives of more than one sixth of the world's population are directly affected by the caprices of the South Asian summer monsoon rainfall. India receives around 78 % of its annual precipitation during June-September, the summer monsoon season of South Asia. However, the monsoon circulation is not consistent throughout the entire summer season: episodes of heavy rainfall (active periods) and low rainfall (break periods) are inherent to the intraseasonal variability of the South Asian summer monsoon. Extended breaks, i.e. long-lasting dry spells, can result in droughts and hence trigger crop failures and, in turn, famines. Furthermore, India's electricity generation from renewable sources (wind and hydropower), which is increasingly important to satisfy the rapidly rising demand for energy, is highly reliant on the prevailing meteorology. The major drought years 2002 and 2009 of the Indian summer monsoon during recent decades, both resulting from the occurrence of multiple extended breaks, exemplify that understanding the monsoon system and its intraseasonal variation is of the greatest importance. Although numerous studies based on observations, reanalysis data and global model simulations have focused on monsoon active and break phases over India, the understanding of the monsoon's intraseasonal variability is still in its infancy. Regional climate models, with their resolution advantage, could benefit the comprehension of monsoon breaks.
This study investigates the moist dynamical processes that initiate and maintain breaks during the South Asian summer monsoon, using the atmospheric regional climate model HIRHAM5 at a horizontal resolution of 25 km, forced by the ECMWF ERA-Interim reanalysis for the period 1979-2012. By calculating moisture and moist static energy budgets, the various competing mechanisms leading to extended breaks are quantitatively estimated. Advection of dry air from the deserts of western Asia towards central India is the dominant moist dynamical process initiating extended break conditions over South Asia. Once initiated, the extended breaks are maintained by several competing mechanisms: (i) an anticyclonic anomaly forms over India, and the anomalous easterlies at its southern flank weaken the low-level cross-equatorial jet and thus the moisture transport into the monsoon region, (ii) differential radiative heating over the continental and the oceanic tropical convergence zones induces a local Hadley circulation with anomalous rising over the equatorial Indian Ocean and descent over central India, and (iii) a cyclonic response to positive rainfall anomalies over the near-equatorial Indian Ocean amplifies the anomalous easterlies over India and hence contributes to the low-level divergence over central India.
A sensitivity experiment that mimics a scenario of higher atmospheric aerosol concentrations over South Asia addresses a current issue of large uncertainty: the role aerosols play in suppressing monsoon rainfall and hence in triggering breaks. To study the indirect aerosol effects, the cloud droplet number concentration was increased to imitate the aerosols' function as cloud condensation nuclei. The sensitivity experiment with altered microphysical cloud properties shows a reduction in summer monsoon precipitation together with a weakening of the South Asian summer monsoon. Several physical mechanisms are proposed to be responsible for the suppressed monsoon rainfall: (i) in line with the first indirect radiative forcing, the increase in the number of cloud droplets raises the cloud reflectivity of solar radiation, leading to a climate cooling over India which in turn weakens the hydrological cycle; (ii) a stabilisation of the troposphere, induced by differential cooling between the surface and the upper troposphere over central India, inhibits the growth of deep convective rain clouds; (iii) an increase in the amount of low- and mid-level clouds together with a decrease in high-level cloud amount amplifies the surface cooling and hence the atmospheric stability; and (iv) dynamical changes of the monsoon, manifested as an anomalous anticyclonic circulation over India, reduce the moisture transport into the monsoon region. The study suggests that the changes in total precipitation, which are dominated by changes in convective precipitation, mainly result from the indirect radiative forcing. Suppression of rainfall due to the direct microphysical effect is found to be negligible over India. Break statistics of the polluted cloud scenario indicate an increase in the occurrence of short breaks (3 days), while the frequency of extended breaks (>7 days) is clearly not affected.
This disproves the hypothesis that more and smaller cloud droplets, caused by a high load of atmospheric aerosols, trigger long drought conditions over central India.
This thesis contains three experimental studies addressing the interplay between deformation and the mineral reaction between natural calcite and magnesite. In every experiment, under both isostatic annealing and deformation conditions, the solid-solid reaction between the two carbonates causes the formation of a magnesio-calcite precursor layer and a dolomite reaction rim.
CHAPTER 1 briefly introduces general aspects concerning mineral reactions in nature and diffusion pathways for mass transport. Moreover, results of previous laboratory studies on the influence of deformation on mineral reactions are summarized. In addition, the main goals of this study are pointed out.
In CHAPTER 2, the reaction between calcite and magnesite single crystals is examined under isostatic annealing conditions. Time series performed at a fixed temperature revealed diffusion-controlled dolomite rim growth. Two microstructural domains could be identified: palisade-shaped dolomite grains growing into the magnesite, and granular dolomite growing towards the calcite. A model for the dolomite rim growth based on the counter-diffusion of CaO and MgO was provided. All reaction products exhibited a characteristic crystallographic relationship with respect to the calcite reactant. Moreover, kinetic parameters of the mineral reaction were determined from a temperature series at a fixed time. The main goal of the isostatic test series was to gain information about the microstructure evolution, kinetic parameters, chemical composition and texture development of the reaction products. The results were used as a reference to quantify the influence of deformation on the mineral reaction.
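Diffusion-controlled rim growth implies a parabolic rate law, x² ∝ t, with an Arrhenius temperature dependence of the rate constant. The following is a minimal numerical sketch of that generic kinetic framework; all parameter values are purely illustrative assumptions, not the kinetic parameters determined in the thesis:

```python
import math

def rim_thickness(k, t):
    """Parabolic (diffusion-controlled) growth law x^2 = 2*k*t:
    rim thickness grows with the square root of time."""
    return math.sqrt(2.0 * k * t)

def rate_constant(k0, ea, temp_k):
    """Arrhenius temperature dependence k = k0 * exp(-Ea/(R*T));
    Ea in J/mol, T in kelvin."""
    R = 8.314  # gas constant, J/(mol K)
    return k0 * math.exp(-ea / (R * temp_k))

# Illustrative numbers only (not fitted to any experimental data):
k = rate_constant(k0=1e-12, ea=2.0e5, temp_k=1023.0)  # m^2/s
x_1d = rim_thickness(k, 24 * 3600)       # thickness after 1 day
x_4d = rim_thickness(k, 4 * 24 * 3600)   # thickness after 4 days
# Quadrupling the annealing time doubles the rim thickness:
print(x_4d / x_1d)  # -> 2.0
```

A temperature series at fixed time, as described above, would constrain k0 and Ea by fitting log(x²/2t) against 1/T.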
CHAPTER 3 deals with the influence of non-isostatic deformation on dolomite and magnesio-calcite layer production between calcite and magnesite single crystals. Deformation was achieved by triaxial compression and by torsion. Triaxial compression up to 38 MPa axial stress at a fixed time showed no significant influence of stress and strain on dolomite formation. Time series conducted at a fixed stress yielded no change in the growth rates of dolomite and magnesio-calcite at low strains. Slightly larger magnesio-calcite growth rates were observed at strains above 0.1. High strains at similar stresses were caused by the activation of additional glide systems in the calcite single crystal and by more mobile dislocations in the magnesio-calcite grains, providing fast diffusion pathways. In torsion experiments, a gradual decrease in dolomite and magnesio-calcite layer thickness was observed beyond a critical shear strain. During deformation, the crystallographic orientations of the reaction products rearranged with respect to the external framework. A direct effect of the mineral reaction on deformation could not be recognized, owing to the relatively small reaction product widths.
In CHAPTER 4, the influence of starting-material microfabrics and the presence of water on the reaction kinetics was evaluated. In these experimental series, polycrystalline material was in contact with single crystals, or two polycrystalline materials were used as reactants. Isostatic annealing resulted in different dolomite and magnesio-calcite layer thicknesses, depending on the starting-material microfabrics. The reaction progress at the magnesite interface was faster for smaller magnesite grain sizes, because grain boundaries provided fast diffusion pathways and multiple nucleation sites for dolomite formation. Deformation by triaxial compression and torsion yielded lower dolomite rim thicknesses than annealing for the same time. This was caused by grain coarsening of the polycrystalline magnesite during deformation. In contrast, magnesio-calcite layers tended to be larger during deformation, which triggered enhanced diffusion along grain boundaries. The presence of excess water had no significant influence on the reaction kinetics, at least when the reactants were single crystals.
In CHAPTER 5 general conclusions about the interplay between deformation and the mineral reaction in the carbonate system are presented.
Finally, CHAPTER 6 highlights possible future work in the carbonate system based on the results of this study.
Most of the baryonic matter in the Universe resides in a diffuse gaseous phase between galaxies, consisting mostly of hydrogen and helium. This intergalactic medium (IGM) is distributed in large-scale filaments as part of the overall cosmic web. The luminous extragalactic objects that we can observe today, such as galaxies and quasars, are surrounded by the IGM in the densest regions of the cosmic web. The radiation of these objects contributes to the so-called ultraviolet background (UVB), which has kept the IGM highly ionized ever since the epoch of reionization.
Measuring the amount of absorption by intergalactic neutral hydrogen (HI) against extragalactic background sources is a very useful tool for constraining the energy input of ionizing sources into the IGM. Observations suggest that the HI Lyman-alpha effective optical depth, τ_eff, decreases with decreasing redshift, primarily due to the expansion of the Universe. However, some studies find a smaller value of the effective optical depth than expected at the specific redshift z~3.2, possibly related to the complete reionization of helium in the IGM and a hardening of the UVB. The detection and possible cause of a decrease in τ_eff at z~3.2 are controversially debated in the literature, and the observed features need further explanation.
To better understand the properties of the mean absorption at high redshift, and to assess whether the detection of a τ_eff feature is real, we study 13 high-resolution, high signal-to-noise ratio quasar spectra observed with the Ultraviolet and Visual Echelle Spectrograph (UVES) at the Very Large Telescope (VLT). The redshift evolution of the effective optical depth, τ_eff(z), is measured in the redshift range 2.7≤z≤3.6. The influence of metal absorption features is removed by performing a comprehensive absorption-line-fitting procedure.
In the first part of the thesis, a line-parameter analysis of the column density, N, and Doppler parameter, b, of ≈7500 individually fitted absorption lines is performed. The results are in good agreement with findings from previous surveys.
The second (main) part of this thesis deals with the analysis of the redshift evolution of the effective optical depth. The τ_eff measurements vary around the empirical power law τ_eff(z)~(1+z)^(γ+1) with γ=2.09±0.52. The same analysis as for the observed spectra is performed on synthetic absorption spectra. From a comparison between observed and synthetic spectral data it can be inferred that the uncertainties of the τ_eff values are likely underestimated and that the scatter is probably caused by high-column-density absorbers with column densities in the range 15≤logN≤17. In the real Universe, however, such absorbers are rarely observed. Hence, the difference in τ_eff between different observational data sets and absorption studies is most likely caused by cosmic variance. If, alternatively, the disagreement between such data results from an overly optimistic estimate of the (systematic) errors, it is also possible that all τ_eff measurements agree with a smooth evolution within the investigated redshift range. To explore in detail the different analysis techniques of previous studies, an extensive comparison of the literature with the results of this work is presented in this thesis.
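The empirical power law above can be sketched numerically. Only the exponent γ=2.09 is taken from the abstract; the normalization constant below is an illustrative assumption, not a thesis value:

```python
import numpy as np

GAMMA = 2.09  # best-fit exponent parameter quoted in the abstract

def tau_eff(z, a=0.0018, gamma=GAMMA):
    """Empirical power law tau_eff(z) = A * (1+z)^(gamma+1).
    The normalization A is an assumed placeholder value."""
    return a * (1.0 + z) ** (gamma + 1.0)

# Recover the exponent from noiseless synthetic points via a log-log fit:
z = np.linspace(2.7, 3.6, 10)            # investigated redshift range
slope, intercept = np.polyfit(np.log(1.0 + z), np.log(tau_eff(z)), 1)
gamma_fit = slope - 1.0
print(gamma_fit)  # -> 2.09 (recovers the input exponent)
```

In practice the fit is performed on binned τ_eff measurements with error bars, so the scatter discussed above translates directly into the quoted uncertainty of ±0.52 on γ.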
Although a final explanation for the occurrence of the τ_eff deviation at z~3.2 in different studies cannot be given here, our study, which represents the most detailed line-fitting analysis of its kind performed at the investigated redshifts so far, provides another important benchmark for the characterization of the HI Ly-alpha effective optical depth at high redshift and its indicated unusual behavior at z~3.2.
The promotion of self-employment as part of active labor market policies (ALMP) is considered one of the most important unemployment support schemes in Germany. Against this background, the main part of this thesis contributes to the evaluation of start-up support schemes within ALMP. Chapters 2 and 4 focus on the evaluation of the New Start-up Subsidy (NSUS, Gründungszuschuss) in its first version (from 2006 to the end of 2011). The chapters advance the evaluation of start-up subsidies in Germany and are based on a novel data set of administrative data from the Federal Employment Agency, enriched with information from a telephone survey. Chapter 2 provides a thorough descriptive analysis of the NSUS that consists of two parts. First, the participant structure of the program is compared with that of two former programs. In a second step, the study conducts an in-depth characterization of the participants of the NSUS, focusing on founding motives, the level of start-up capital and equity used, and the sectoral distribution of the new businesses. Furthermore, business survival, the income situation of founders and job creation by the new businesses are analyzed over a period of 19 months after start-up. The contribution of Chapter 4 is to introduce a new explorative data set that allows comparing subsidized start-ups out of unemployment with non-subsidized start-ups founded by individuals who were not unemployed at the time of start-up. Because previous evaluation studies commonly used eligible non-participants among the unemployed as a control group to assess the labor market effects of start-up subsidies, the corresponding results referred to the effectiveness of the ALMP measure but could not address whether the subsidy leads to businesses that are as successful and innovative as non-subsidized ones.
An assessment of this economic/growth aspect is also important, since the subsidy might induce negative effects that outweigh the positive effects seen from an ALMP perspective. The main results of Chapter 4 indicate that subsidized founders show no shortages in terms of formal education, but exhibit less employment and industry-specific experience and are less likely to benefit from the intergenerational transmission of start-ups. Moreover, the study finds evidence that necessity start-ups are over-represented among subsidized business founders, which suggests disadvantages in terms of business preparation due to possible time restrictions right before start-up. Finally, the study also detects more capital constraints among the unemployed, both in terms of the availability of personal equity and of access to loans. With respect to potential differences between the two groups in business development over time, the results indicate that subsidized start-ups out of unemployment show higher business survival rates 19 months after start-up. However, they lag behind regular business founders in terms of income, business growth, and innovation. The arduous data collection process for the start-up activities of non-subsidized founders in Chapter 4 made apparent that Germany lacks a central reporting system for business formations. Additionally, the different start-up reporting systems that do exist exhibit substantial discrepancies in their data processing procedures, and therefore also in the absolute numbers concerning overall start-up activity. Chapter 3 is therefore placed before Chapter 4 and aims to provide a comprehensive review of the most important German start-up reporting systems.
The second part of the thesis consists of Chapter 5, which contributes to the literature on the determinants of the job search behavior of unemployed individuals by analyzing the effectiveness of internet search with regard to search behavior and subsequent job quality. The third and final part of the thesis outlines why the German labor market reacted very mildly to the Great Recession of 2008/09, especially compared to other countries. Chapter 6 describes current economic trends of the labor market in light of general trends in the European Union and reveals some of the main associated challenges. Thereafter, recent reforms of the main institutional settings of the labor market which influence labor supply are analyzed. Finally, based on the status quo of these institutional settings, the chapter gives a brief overview of strategies to adequately combat the challenges in terms of labor supply and to ensure economic growth in the future.
The Brazilian Cerrado is recognised as one of the most threatened biomes in the world, as the region has experienced a striking change from natural vegetation to intense cash crop production. The impacts of rapid agricultural expansion on soil and water resources are still poorly understood in the region. Therefore, the overall aim of the thesis is to improve our understanding of the ecohydrological processes causing water and soil degradation in the Brazilian Cerrado.
I first present a meta-analysis providing quantitative evidence for, and identifying, the main soil and water alterations resulting from land use change. Second, field studies were conducted to (i) examine the effects of land use change on soils when natural cerrado is transformed into common croplands and pasture and (ii) show how agricultural production affects water quality across a meso-scale catchment. Third, the ecohydrological process-based model SWAT was tested with simple scenario analyses to gain insight into the impacts of land use and climate change on water cycling in the upper São Lourenço catchment, which has experienced decreasing discharge over the last 40 years.
Soil and water quality parameters for different land uses were extracted from 89 soil and 18 water studies in different regions across the Cerrado. Significant effects on pH, bulk density and available P and K were evident for croplands, with less pronounced effects for pastures. Soil total N did not differ between land uses because most of the cropland sites were N-fixing soybean cultivations, which are not artificially fertilized with N. By contrast, water quality studies showed N enrichment in agricultural catchments, indicating fertilizer impacts and a potential susceptibility to eutrophication. Regardless of land use, P was widely absent because of the high fixing capacities of the deeply weathered soils and the filtering capacity of riparian vegetation. Pesticides, however, were consistently detected throughout the entire aquatic system. In several case studies, extremely high peak concentrations exceeded Brazilian and EU water quality limits, posing serious health risks.
My field study revealed that land conversion caused a significant reduction in infiltration rates near the soil surface of pasture (–96 %) and croplands (–90 % to –93 %). Soil aggregate stability was significantly lower in croplands than in cerrado and pasture. Soybean crops had extremely high extractable P (80 mg kg–1), whereas pasture N levels declined. A snapshot water sampling showed strong seasonality in water quality parameters. Higher temperature, oxidation-reduction potential (ORP) and NO2–, together with very low oxygen concentrations (<5 mg·l–1) and saturation (<60 %), were recorded during the rainy season. By contrast, remarkably high PO43– concentrations (up to 0.8 mg·l–1) were measured during the dry season. Water quality parameters were affected by agricultural activities in all sampled sub-catchments, regardless of stream characteristics. Direct NO3– leaching appeared to play a minor role; however, water quality is affected by topsoil fertiliser inputs, with impacts on small low-order streams as well as larger rivers. Land conversion leaves cropland soils more susceptible to surface erosion through increased overland flow events.
In a third study, the field data were used to parameterise SWAT. The model was tested with different input data and calibrated in SWAT-CUP using the SUFI-2 algorithm, and was judged reliable for simulating the water balance in the Cerrado. Complete cerrado, pasture and cropland covers were used to analyse the impact of land use on water cycling, as well as climate change projections (2039–2058) according to the RCP 8.5 scenario. The actual evapotranspiration (ET) for the cropland scenario was higher than for the cerrado cover (+100 mm a–1). The land use change scenarios confirmed that deforestation causes higher annual ET rates, partly explaining the trend of decreasing streamflow. Taking all climate change scenarios into account, the most likely effect is a prolongation of the dry season (by about one month), with higher peak flows in the rainy season. Consequently, potential threats to crop production from lower soil moisture and increased erosion and sediment transport during the rainy season are likely and should be considered in adaptation plans.
From the three studies of the thesis I conclude that land use intensification is likely to seriously limit the Cerrado’s future regarding both agricultural productivity and ecosystem stability. Because only limited data are available for the vast biome, we recommend further field studies to understand the interaction between terrestrial and aquatic systems. This thesis may serve as a valuable database for integrated modelling to investigate the impact of land use and climate change on soil and water resources and to test and develop mitigation measures for the Cerrado in the future.
This PhD thesis is essentially a collection of six sequential articles on the dynamics of accountability in the reformed employment and welfare administrations of different countries. The first article examines, from a wide-ranging perspective, how recent changes in the governance of employment services in three European countries (Denmark, Germany and Norway) have influenced accountability relationships. It starts from the overall assumption in the literature that accountability relationships are becoming more numerous and complex, and that these changes may lead to multiple accountability disorder. The article explores these assumptions by analyzing the different actors involved and the information requested in the new governance arrangements in all three countries. It concludes that the considerable changes in organizational arrangements, and the more managerial information demanded and provided, have led to more shared forms of accountability. Nevertheless, a clear development towards less political or administrative accountability could not be observed.
The second article analyzes how the structure and development of reform processes affect accountability relationships, and via what mechanisms. It distinguishes between an instrumental and an institutional perspective, each of which takes a different view of the link between reforms and concrete action and results. Taking the welfare reforms in Norway and Germany as examples, it shows that the reform outcomes in both countries are the result of a complex process of powering, puzzling and institutional constraints, in which different situational interpretations of problems, interests and administrative legacies had to be balanced. Accountability thus results not from a single process of environmental necessity or strategic choice, but from a dynamic interplay between different actors and institutional spheres.
The third article then covers a specific instrument of public sector reforms, i.e. the increasing use of performance management. The article discusses the challenges and ambiguities between performance management and different forms of accountability based on the cases of the reformed welfare administration in Norway and Germany. The findings are that the introduction of performance management creates new accountability structures which influence service delivery, but not necessarily in the direction expected by reform agents. Observed unintended consequences include target fixation, the displacement of political accountability and the predominance of control aspects of accountability.
The fourth article analyzes the accountability implications of the increasingly marketized models of welfare governance. It has often been argued that relocating powers and discretion to private contractors involves a trade-off between democratic accountability and efficiency. However, there is limited empirical evidence of how contracting out shapes accountability, or is shaped by alternative democratic or administrative forms of accountability. Along these lines, the article examines employment service accountability in the era of contracting out in Germany, Denmark and Great Britain. It finds that market accountability instruments are complementary instruments, not substitutes. The findings highlight the importance of administrative and political instruments in legitimizing marketized service provision and shed light on the processes that lead to the development of a hybrid accountability model.
The fifth and sixth articles focus on the diagonal accountability relationships between public agencies, supreme audit institutions (SAIs) and the parent ministry or parliament.
The fifth article examines the evolving role of SAIs in Denmark, Germany and Norway focusing particularly on their contribution to public accountability and their ambivalent relationship with some aspects of public sector reform in the welfare sector. The article analyzes how SAIs assess New Public Management inspired reforms in the welfare sector in the three countries. The analysis shows that all three SAIs have taken on an evaluative role when judging New Public Management instruments. At the same time their emphasis on legality and compliance can be at odds with some of the operating principles introduced by New Public Management reforms.
The sixth article focuses on the auditing activities of the German SAI in the field of labor market administration as a single in-depth case study. The purpose is to analyze how SAIs gain impact in diagonal accountability settings. The results show that the direct relationship between auditor and auditee based on cooperation and trust is of outstanding importance for SAIs to give effect to their recommendations. However, if an SAI has to rely on actors of diagonal accountability, it is in a vulnerable position as it might lose control over the interpretation of its results.
The size and morphology control of precipitated solid particles is a major economic issue for numerous industries. For instance, it is of interest to the nuclear industry in connection with the recovery of radioactive species from used nuclear fuel.
The precipitate features, which are a key parameter for post-precipitation processing, depend on the local mixing conditions of the process. So far, the relationship between precipitate features and hydrodynamic conditions has not been investigated.
In this study, a new experimental configuration consisting of coalescing drops is set up to investigate the link between reactive crystallization and hydrodynamics. Two configurations of aqueous drops are examined. The first corresponds to high-contact-angle drops (>90°) in oil, as a model system for flowing drops; the second corresponds to sessile drops in air with a low contact angle (<25°). In both cases, one reactant is dissolved in each drop, namely oxalic acid and cerium nitrate. When the two drops come into contact, they may coalesce; the dissolved species then mix and react to produce insoluble cerium oxalate. The precipitate features and their effect on the hydrodynamics are investigated as a function of the solvent. In the case of sessile drops in air, the surface tension difference between the drops generates a gradient which induces a Marangoni flow from the low-surface-tension drop over the high-surface-tension drop. By setting the surface tension difference between the two drops, and thus the Marangoni flow, the hydrodynamic conditions during drop coalescence could be modified. Diol/water mixtures are used as solvents in order to fix the surface tension difference between the liquids of both drops regardless of the reactant concentration. More precisely, the diols used, 1,2-propanediol and 1,3-propanediol, are isomers with identical density and similar viscosity. By keeping the water volume fraction constant and varying the 1,2-propanediol and 1,3-propanediol volume fractions of the solvents, the surface tensions of the mixtures differ by up to 10 mN/m at constant reactant concentration, density and viscosity. Three precipitation behaviors were identified for the coalescence of water/diol/reactant drops, depending on the oxalic acid excess. The corresponding precipitate patterns are visualized by optical microscopy, and the precipitates are characterized by confocal microscopy, SEM, XRD and SAXS measurements.
In the intermediate oxalic acid excess regime, the formation of periodic patterns can be observed. These patterns consist of alternating cerium oxalate precipitates with distinct morphologies, namely needles and “microflowers”. Such periodic fringes can be explained by a feedback mechanism between convection, reaction and diffusion.
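The interplay of reaction and diffusion described above can be made concrete with a deliberately minimal 1D sketch. This is not the thesis's model; grid size, rates and time step are arbitrary. Two reactants counter-diffuse across the contact plane of the coalesced drops and precipitate where they meet; convection is omitted:

```python
import numpy as np

def coalescence_band(n=200, steps=2000, D=0.1, k=1.0, dt=0.01):
    """Toy 1D model: reactants A and B counter-diffuse across the
    contact plane of the coalesced drops and react (A + B -> C)
    where they meet. Marangoni/convective flow is omitted."""
    a = np.zeros(n)
    b = np.zeros(n)
    c = np.zeros(n)
    a[: n // 2] = 1.0   # drop 1: oxalic acid
    b[n // 2:] = 1.0    # drop 2: cerium nitrate

    def lap(u):         # discrete Laplacian, fixed boundary values
        out = np.zeros_like(u)
        out[1:-1] = u[2:] + u[:-2] - 2.0 * u[1:-1]
        return out

    for _ in range(steps):
        r = k * a * b * dt          # local reaction (precipitation) rate
        a += D * lap(a) * dt - r
        b += D * lap(b) * dt - r
        c += r                      # immobile precipitate accumulates
    return c

c = coalescence_band()
# The precipitate accumulates in a narrow band around the contact plane.
```

Per the mechanism described above, coupling such a model to convection is the ingredient that can turn the single precipitate band into alternating fringes.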
This dissertation investigates the working memory mechanism subserving human sentence processing and its relative contribution to processing difficulty as compared to syntactic prediction. Within the last decades, evidence for a content-addressable memory system underlying human cognition in general has accumulated (e.g., Anderson et al., 2004). In sentence processing research, it has been proposed that this general content-addressable architecture is also used for language processing (e.g., McElree, 2000).
Although there is a growing body of evidence from various kinds of linguistic dependencies that is consistent with a general content-addressable memory subserving sentence processing (e.g., McElree et al., 2003; Van Dyke & McElree, 2006), the case of reflexive-antecedent dependencies has challenged this view. It has been proposed that in the processing of reflexive-antecedent dependencies, a syntactic-structure based memory access is used rather than cue-based retrieval within a content-addressable framework (e.g., Sturt, 2003).
Two eye-tracking experiments on Chinese reflexives were designed to tease apart accounts assuming a syntactic-structure based memory access mechanism from cue-based retrieval (implemented in ACT-R, as proposed by Lewis and Vasishth (2005)).
In both experiments, interference effects were observed from noun phrases which syntactically do not qualify as the reflexive's antecedent but match the animacy requirement the reflexive imposes on its antecedent. These results are interpreted as evidence against a purely syntactic-structure based memory access. However, the exact pattern of effects observed in the data is only partially compatible with the Lewis and Vasishth cue-based parsing model.
Therefore, an extension of the Lewis and Vasishth model is proposed. Two principles are added to the original model, namely 'cue confusion' and 'distractor prominence'.
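To make the contrast concrete, the core of cue-based retrieval can be sketched in a few lines. This is an illustrative toy in the spirit of Lewis and Vasishth (2005), not the thesis's actual ACT-R simulations; the feature names, weights and latency constant are assumptions:

```python
import math

def activation(chunk, cues, base=0.0, match_bonus=1.0, mismatch_penalty=1.5):
    """Sum a bonus for every retrieval cue the chunk matches and a
    penalty for every cue it violates (a toy stand-in for ACT-R's
    spreading activation plus mismatch penalty)."""
    a = base
    for feature, value in cues.items():
        a += match_bonus if chunk.get(feature) == value else -mismatch_penalty
    return a

def retrieval_latency(a, F=0.2):
    """ACT-R maps activation to retrieval latency: T = F * exp(-A)."""
    return F * math.exp(-a)

# Cues set by a reflexive: look for a local subject that is animate.
cues = {"role": "local-subject", "animate": True}
antecedent = {"role": "local-subject", "animate": True}
distractor = {"role": "embedded-object", "animate": True}  # animacy match only

a_ante = activation(antecedent, cues)   # matches both cues
a_dist = activation(distractor, cues)   # partial match
# The correct antecedent wins, but the partially matching distractor is
# far more active than a fully mismatching chunk would be; that graded
# competition is the signature of the interference effects discussed above.
```

A purely syntactic-structure based access mechanism, by contrast, would never let the animacy-matching distractor compete at all.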
Although interference effects are generally interpreted in favor of a content-addressable memory architecture, an alternative explanation for interference effects in reflexive processing has been proposed which, crucially, might reconcile interference effects with a structure-based account.
It has been argued that interference effects do not necessarily reflect cue-based retrieval interference in a content-addressable memory but might equally well be accounted for by interference effects which have already occurred at the moment of encoding the antecedent in memory (Dillon, 2011).
Three experiments (eye-tracking and self-paced reading) on German reflexives and Swedish possessives were designed to tease apart cue-based retrieval interference from encoding interference. The results of all three experiments suggest that there is no evidence that encoding interference affects the retrieval of a reflexive's antecedent.
Taken together, these findings suggest that the processing of reflexives can be explained with the same cue-based retrieval mechanism that has been invoked to explain syntactic dependency resolution in a range of other structures. This supports the view that the language processing system is located within a general cognitive architecture, with a general-purpose content-addressable working memory system operating on linguistic expressions.
Finally, two experiments (self-paced reading and eye-tracking) using Chinese relative clauses were conducted to determine the relative contribution to sentence processing difficulty of working-memory processes as compared to syntactic prediction during incremental parsing.
Chinese has the cross-linguistically rare property of being a language with subject-verb-object word order and pre-nominal relative clauses. This property leads to opposing predictions of expectation-based accounts and memory-based accounts with respect to the relative processing difficulty of subject vs. object relatives.
Previous studies showed contradictory results, which has been attributed to different kinds of local ambiguities confounding the materials (Lin and Bever, 2011). The two experiments presented here are the first to compare Chinese relative clauses in syntactically unambiguous contexts.
The results of both experiments were consistent with the predictions of the expectation-based account of sentence processing but not with the memory-based account. From these findings, I conclude that any theory of human sentence processing needs to take into account the power of predictive processes unfolding in the human mind.
The main goal of this cumulative thesis is the derivation of surface emissivity data in the infrared from radiance measurements of Venus. Since these data are diagnostic of the chemical composition and grain size of the surface material, they can help to improve knowledge of the planet’s geology. Spectrally resolved images of nightside emissions in the range 1.0-5.1 μm were recently acquired by the InfraRed Mapping channel of the Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS-M-IR) aboard ESA’s Venus Express (VEX). Surface and deep atmospheric thermal emissions in this spectral range are strongly obscured by the extremely opaque atmosphere, but three narrow spectral windows at 1.02, 1.10, and 1.18 μm allow the sounding of the surface. Additional windows between 1.3 and 2.6 μm provide information on atmospheric parameters that is required to interpret the surface signals. Quantitative data on surface and atmosphere can be retrieved from the measured spectra by comparing them to simulated spectra. A numerical radiative transfer model is used in this work to simulate the observable radiation as a function of atmospheric, surface, and instrumental parameters. It is a line-by-line model taking into account thermal emission by surface and atmosphere as well as absorption and multiple scattering by gases and clouds. The VIRTIS-M-IR measurements are first preprocessed to obtain an optimal data basis for the subsequent steps. In this process, a detailed detector responsivity analysis enables the optimization of the data consistency. The measurement data have a relatively low spectral information content, and different parameter vectors can describe the same measured spectrum equally well. A common method to regularize the retrieval of the wanted parameters from a measured spectrum is to take into account a priori mean values and standard deviations of the parameters to be retrieved. This decreases the probability of obtaining unreasonable parameter values.
The multi-spectrum retrieval algorithm MSR is developed to additionally consider physically realistic spatial and temporal a priori correlations between retrieval parameters describing different measurements. Neglecting geologic activity, MSR also allows the retrieval of an emissivity map as a parameter vector that is common to several spectrally resolved images that cover the same surface target. Even when applying MSR, it is difficult to obtain reliable emissivity maps in absolute values. A detailed retrieval error analysis based on synthetic spectra reveals that this is mainly due to interferences from parameters that cannot be derived from the spectra themselves, but that have to be set to assumed values to enable the radiative transfer simulations. The MSR retrieval of emissivity maps relative to a fixed emissivity is shown to effectively avoid most emissivity retrieval errors. Relative emissivity maps at 1.02, 1.10, and 1.18 μm are finally derived from many VIRTIS-M-IR measurements that cover a surface target at Themis Regio. They are interpreted as spatial variations relative to an assumed emissivity mean of the target. It is verified that the maps are largely independent of the choice of many interfering parameters as well as the utilized measurement data set. These are the first Venus IR emissivity data maps based on a consistent application of a full radiative transfer simulation and a retrieval algorithm that respects a priori information. The maps are sufficiently reliable for future geologic interpretations.
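The regularization idea underlying such retrievals can be illustrated with the standard linear a priori-weighted (optimal estimation) solution. This is a generic sketch, not the MSR code; the toy forward model and covariances are assumptions:

```python
import numpy as np

def regularized_retrieval(K, y, x_a, S_a, S_e):
    """Linear retrieval with a priori regularization:
    x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a)."""
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    G = np.linalg.solve(K.T @ Se_inv @ K + Sa_inv, K.T @ Se_inv)
    return x_a + G @ (y - K @ x_a)

# Toy forward model: two measurements, two parameters, nearly degenerate
# rows (mimicking the low spectral information content of the windows).
K = np.array([[1.0, 0.99],
              [1.0, 1.01]])
x_true = np.array([0.6, 0.4])
y = K @ x_true
x_a = np.array([0.5, 0.5])       # a priori mean
S_a = 0.01 * np.eye(2)           # a priori covariance
S_e = 1e-4 * np.eye(2)           # measurement noise covariance

x_hat = regularized_retrieval(K, y, x_a, S_a, S_e)
# The a priori keeps the poorly constrained parameter combination close
# to the prior mean while the measurement is still fitted within noise.
```

Without the `Sa_inv` term, the nearly singular normal matrix would let the solution wander far along the degenerate direction, which is exactly the ambiguity the a priori information suppresses.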
In many procedures of seismic risk mitigation, ground motion simulations are needed to test systems or improve their effectiveness. For example, they may be used to estimate the level of ground shaking caused by future earthquakes. Good physical models for ground motion simulation are also thought to be important for hazard assessment, as they could close gaps in the existing datasets. Since the observed ground motion in nature shows a certain variability, part of which cannot be explained by macroscopic parameters such as the magnitude or position of an earthquake, it would be desirable that a good physical model is not only able to produce one single seismogram, but also to reproduce this natural variability.
In this thesis, I develop a method to model realistic ground motions in a way that is computationally simple to handle, permitting multiple scenario simulations. I focus on two aspects of ground motion modelling. First, I use deterministic wave propagation for the whole frequency range – from static deformation to approximately 10 Hz – but account for source variability by implementing self-similar slip distributions and rough fault interfaces. Second, I scale the source spectrum so that the modelled waveforms represent the correct radiated seismic energy. With this scaling I verify whether the energy magnitude is suitable as an explanatory variable, which characterises the amount of energy radiated at high frequencies – the advantage of the energy magnitude being that it can be deduced from observations, even in real-time.
Applications of the developed method for the 2008 Wenchuan (China) earthquake, the 2003 Tokachi-Oki (Japan) earthquake and the 1994 Northridge (California, USA) earthquake show that the fine source discretisations combined with the small scale source variability ensure that high frequencies are satisfactorily introduced, justifying the deterministic wave propagation approach even at high frequencies. I demonstrate that the energy magnitude can be used to calibrate the high-frequency content in ground motion simulations.
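The energy magnitude used for this calibration can be computed directly from the radiated seismic energy. A minimal sketch using the Choy & Boatwright / IASPEI relation; the example energy value is illustrative, not taken from the studied events:

```python
import math

def energy_magnitude(E_s):
    """Energy magnitude from the radiated seismic energy E_s (in joules),
    using the Choy & Boatwright / IASPEI relation
    Me = (2/3) * (log10(E_s) - 4.4)."""
    return (2.0 / 3.0) * (math.log10(E_s) - 4.4)

# An event radiating 1e15 J of seismic energy (an illustrative value):
me = energy_magnitude(1e15)   # ~7.07
```

Because E_s can be measured from broadband records in near real-time, Me is available as an explanatory variable for the high-frequency content without waiting for a finite-fault inversion.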
Because deterministic wave propagation is applied to the whole frequency range, the simulation method permits the quantification of the variability in ground motion due to parametric uncertainties in the source description. A large number of scenario simulations for an M=6 earthquake show that the roughness of the source as well as the distribution of fault dislocations has only a minor effect on the simulated variability by diminishing directivity effects, while hypocenter location and rupture velocity influence the variability more strongly. The uncertainty in energy magnitude, however, leads to the largest differences in ground motion amplitude between different events, resulting in a variability which is larger than the observed one.
For the presented approach, this dissertation shows (i) the verification of the computational correctness of the code, (ii) the ability to reproduce observed ground motions and (iii) the validation of the simulated ground motion variability. Those three steps are essential to evaluate the suitability of the method for means of seismic risk mitigation.
Organic bulk heterojunction (BHJ) solar cells based on polymer:fullerene blends are a promising alternative for low-cost solar energy conversion. Despite significant improvements of the power conversion efficiency in recent years, the fundamental working principles of these devices are not yet fully understood. In general, the current output of organic solar cells is determined by the generation of free charge carriers upon light absorption and their transport to the electrodes, in competition with the loss of charge carriers due to recombination.
The objective of this thesis is to provide a comprehensive understanding of the dynamic processes and physical parameters determining the device performance. A new approach for analyzing the characteristic current-voltage output was developed, comprising the experimental determination of the efficiencies of charge carrier generation, recombination and transport, combined with numerical device simulations.
Central issues at the beginning of this work were the influence of an electric field on the free carrier generation process and the contribution of generation, recombination and transport to the current-voltage characteristics. An elegant way to directly measure the field dependence of free carrier generation is the Time Delayed Collection Field (TDCF) method. In TDCF, charge carriers are generated by a short laser pulse and subsequently extracted by a defined rectangular voltage pulse. A new setup was established with an improved time resolution compared to former reports in the literature. It was found that charge generation is in general independent of the electric field, in contrast to the current view in the literature and opposed to the expectations of the Braun-Onsager model that was commonly used to describe the charge generation process. Even in cases where charge generation was found to be field-dependent, numerical modelling showed that this field dependence is in general not capable of accounting for the voltage dependence of the photocurrent. This highlights the importance of efficient charge extraction in competition with non-geminate recombination, which is the second objective of this thesis.
Therefore, two different techniques were combined to characterize the dynamics and efficiency of non-geminate recombination under device-relevant conditions. One new approach is to perform TDCF measurements with increasing delay between generation and extraction of charges. Thus, TDCF was used for the first time to measure charge carrier generation, recombination and transport with the same experimental setup. This excludes experimental errors due to different measurement and preparation conditions and demonstrates the strength of this technique. An analytic model for the description of TDCF transients was developed and revealed the experimental conditions under which reliable results can be obtained. In particular, it turned out that the RC time of the setup, which is mainly given by the sample geometry, has a significant influence on the shape of the transients and has to be considered for correct data analysis.
Secondly, a complementary method was applied to characterize charge carrier recombination under steady state bias and illumination, i.e. under realistic operating conditions. This approach relies on the precise determination of the steady state carrier densities established in the active layer. It turned out that existing techniques were not sufficient to measure carrier densities with the necessary accuracy. Therefore, a new technique, Bias Assisted Charge Extraction (BACE), was developed. Here, the charge carriers photogenerated under steady state illumination are extracted by applying a high reverse bias. The accelerated extraction compared to conventional charge extraction minimizes losses through non-geminate recombination and trapping during extraction. By performing numerical device simulations under steady state, conditions were established under which quantitative information on the dynamics can be retrieved from BACE measurements.
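The non-geminate loss probed by such delayed-extraction experiments can be illustrated with the closed-form solution of a purely bimolecular decay. This is a textbook sketch, not the analytic TDCF model developed in the thesis; the carrier density and rate constant are illustrative:

```python
def carrier_density(n0, k2, t):
    """Closed-form solution of dn/dt = -k2 * n**2 (non-geminate,
    bimolecular recombination): n(t) = n0 / (1 + k2 * n0 * t)."""
    return n0 / (1.0 + k2 * n0 * t)

# Illustrative (not fitted) values: n0 = 1e16 cm^-3, k2 = 1e-11 cm^3/s.
# After a 1 microsecond collection delay:
n = carrier_density(1e16, 1e-11, 1e-6)
# k2*n0*t = 0.1, so n = n0/1.1: about 91% of the carriers survive the delay.
```

Sweeping the delay and fitting the surviving charge against this kind of expression is what allows a recombination coefficient to be extracted from TDCF data.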
The applied experimental techniques allowed a sensitive analysis and quantification of geminate and non-geminate recombination losses along with charge transport in organic solar cells. A full analysis was demonstrated exemplarily for two prominent polymer:fullerene blends.
The model system P3HT:PCBM spincast from chloroform (as prepared) exhibits poor power conversion efficiencies (PCE) on the order of 0.5%, mainly caused by low fill factors (FF) and currents. It could be shown that the performance of these devices is limited by the hole transport and large bimolecular recombination (BMR) losses, while geminate recombination losses are insignificant. The low polymer crystallinity and poor interconnection between the polymer and fullerene domains lead to a hole mobility of the order of 10^-7 cm^2/Vs, which is several orders of magnitude lower than the electron mobility in these devices. The concomitant build-up of space charge hinders the extraction of both electrons and holes and promotes bimolecular recombination losses.
Thermal annealing of P3HT:PCBM blends directly after spin coating improves crystallinity and interconnection of the polymer and the fullerene phase and results in comparatively high electron and hole mobilities in the order of 10^-3 cm^2/Vs and 10^-4 cm^2/Vs, respectively. In addition, a coarsening of the domain sizes leads to a reduction of the BMR by one order of magnitude. High charge carrier mobilities and low recombination losses result in comparatively high FF (>65%) and short circuit current (J_SC ≈ 10 mA/cm^2). The overall device performance (PCE ≈ 4%) is only limited by a rather low spectral overlap of absorption and solar emission and a small V_OC, given by the energetics of the P3HT.
From this point of view, the combination of the low bandgap polymer PTB7 with PCBM is a promising approach. In BHJ solar cells, this polymer leads to a higher V_OC due to optimized energetics with PCBM. However, the J_SC in these (unoptimized) devices is similar to the J_SC in the optimized blend with P3HT, and the FF is rather low (≈ 50%). It turned out that the unoptimized PTB7:PCBM blends suffer from high BMR, a low electron mobility of the order of 10^-5 cm^2/Vs and geminate recombination losses due to field-dependent charge carrier generation.
The use of the solvent additive DIO optimizes the blend morphology, mainly by suppressing the formation of very large fullerene domains and by forming a more uniform structure of well interconnected donor and acceptor domains of the order of a few nanometers. Our analysis shows that this results in an increase of the electron mobility by about one order of magnitude (3 x 10^-4 cm^2/Vs), while BMR and geminate recombination losses are significantly reduced. In total these effects improve the J_SC (≈ 17 mA/cm^2) and the FF (> 70%). In 2012 this polymer/fullerene combination resulted in a record PCE for a single junction OSC of 9.2%.
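Measured BMR coefficients such as these are commonly benchmarked against the Langevin expression. A short sketch of that reference calculation, using the mobilities quoted above for annealed P3HT:PCBM and an assumed relative permittivity of 3.5 (a typical value for organic blends, not a number from the thesis):

```python
Q = 1.602e-19          # elementary charge, C
EPS0 = 8.854e-12       # vacuum permittivity, F/m

def langevin_rate(mu_e_cm2, mu_h_cm2, eps_r=3.5):
    """Langevin bimolecular recombination coefficient
    k_L = q * (mu_e + mu_h) / (eps0 * eps_r), returned in cm^3/s.
    Mobilities are given in cm^2/Vs; eps_r = 3.5 is an assumed
    typical permittivity for an organic blend."""
    mu_sum = (mu_e_cm2 + mu_h_cm2) * 1e-4   # cm^2/Vs -> m^2/Vs
    k_m3 = Q * mu_sum / (EPS0 * eps_r)      # m^3/s
    return k_m3 * 1e6                       # m^3/s -> cm^3/s

# Annealed P3HT:PCBM mobilities from the text: 1e-3 and 1e-4 cm^2/Vs.
k_L = langevin_rate(1e-3, 1e-4)   # on the order of 1e-10 cm^3/s
```

A measured BMR coefficient lying an order of magnitude below this value corresponds to a Langevin reduction factor of about 0.1, which is one way the effect of the coarsened domain morphology can be quantified.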
Remarkably, the numerical device simulations revealed that the specific shape of the J-V characteristics depends very sensitively on the variation of not only one, but all dynamic parameters. On the one hand, this proves that the experimentally determined parameters, if leading to a good match between simulated and measured J-V curves, are realistic and reliable. On the other hand, it also emphasizes the importance of considering all involved dynamic quantities, namely charge carrier generation, geminate and non-geminate recombination as well as electron and hole mobilities. The measurement or investigation of only a subset of these parameters, as frequently found in the literature, will lead to an incomplete picture and possibly to misleading conclusions.
Importantly, the comparison of the numerical device simulation employing the measured parameters with the experimental J-V characteristics allows the identification of loss channels and limitations of OSCs. For example, it turned out that inefficient extraction of charge carriers is a critical limiting factor that is often overlooked. However, efficient and fast transport of charges becomes more and more important with the development of new low bandgap materials with very high internal quantum efficiencies. Likewise, due to moderate charge carrier mobilities, the active layer thicknesses of current high-performance devices are usually limited to around 100 nm. However, larger layer thicknesses would be more favourable with respect to higher current output and robustness of production. Newly designed donor materials should therefore ideally show a high tendency to form crystalline structures, as observed in P3HT, combined with the optimized energetics and quantum efficiency of, for example, PTB7.
The Tien-Shan and the neighboring Pamir region are two of the largest mountain belts in the world. Their deformation is dominated by intermontane basins bounded by active thrust and reverse faults. The Tien-Shan mountain belt is characterized by a very high rate of seismicity along its margins as well as within the Tien-Shan interior. The study area of the thesis presented here, the western part of the Tien-Shan region, is currently seismically active with small and moderate-sized earthquakes. However, at the end of the 19th and the beginning of the 20th century, this region was struck by a remarkable series of large-magnitude (M>7) earthquakes, two of which reached magnitude 8.
Those large earthquakes occurred prior to the installation of the global digital seismic network and were therefore recorded only by analog seismic instruments. The processing of the analog data poses several difficulties; for example, the true parameters of the recording system are not always known. Another complicated task is the digitization of those records, a very time-consuming and delicate step. Therefore, a special set of techniques was developed and modern methods were adapted for the analysis of the digitized instrumental data.
The main goal of the presented thesis is to evaluate the impact of large-magnitude M≥7.0 earthquakes, which occurred at the turn of the 19th to the 20th century in the Tien-Shan region, on the overall regional tectonics. A further objective is to assess the accuracy of previously estimated source parameters for those earthquakes, which were mainly based on macroseismic observations, and to re-estimate them based on the instrumental data. An additional aim of this study is to develop tools and methods for a faster and more productive use of analog seismic data in modern seismology.
In this thesis, the ten strongest and most interesting historical earthquakes in the Tien-Shan region are analyzed. The methods and tools for digitizing and processing the analog seismic data are presented. The source parameters of the two major M≥8.0 earthquakes in the Northern Tien-Shan are re-estimated in individual case studies, which are published as peer-reviewed scientific articles in reputed journals. Additionally, the Sarez-Pamir earthquake and its connection with one of the largest landslides in the world, the Usoy landslide, is investigated by seismic modeling. These results are also published as a research paper.
With the developed techniques, the source parameters of seven more major earthquakes in the region were determined and their impact on the regional tectonics was investigated. The large magnitudes of those earthquakes are confirmed by instrumental data. The focal mechanisms of these earthquakes were determined, providing evidence for the responsible faults or fault systems.
A main limitation in the field of flood hydrology is the short time period covered by instrumental flood time series, rarely exceeding 50 to 100 years. However, climate variability acts on short to millennial time scales, and identifying causal linkages to extreme hydrological events requires longer datasets. To extend instrumental flood time series back in time, natural geoarchives are increasingly explored as flood recorders. Among these, annually laminated (varved) lake sediments seem to be the most suitable archives, since (i) lake basins act as natural sediment traps in the landscape, continuously recording land surface processes including floods, and (ii) individual flood events are preserved as detrital layers intercalated in the varved sediment sequence and can be dated with seasonal precision by varve counting.
The main goal of this thesis is to improve the understanding about hydrological and sedimentological processes leading to the formation of detrital flood layers and therewith to contribute to an improved interpretation of lake sediments as natural flood archives. This goal was achieved in two ways: first, by comparing detrital layers in sediments of two dissimilar peri-Alpine lakes, Lago Maggiore in Northern Italy and Mondsee in Upper Austria, with local instrumental flood data and, second, by tracking detrital layer formation during floods by a combined hydro-sedimentary monitoring network at Lake Mondsee spanning from the rain fall to the deposition of detrital sediment at the lake floor.
Successions of sub-millimetre to 17 mm thick detrital layers were detected in sub-recent lake sediments of the Pallanza Basin in the western part of Lago Maggiore (23 detrital layers) and Lake Mondsee (23 detrital layers) by combining microfacies analysis and high-resolution micro X-ray fluorescence scanning techniques (µ-XRF). The detrital layer records were dated by detailed intra-basin correlation to a previously dated core sequence in Lago Maggiore and by varve counting in Mondsee. The intra-basin correlation of detrital layers between five sediment cores in Lago Maggiore and 13 sediment cores in Mondsee allowed distinguishing river runoff events from local erosion. Moreover, characteristic spatial distribution patterns of detrital flood layers revealed different depositional processes in the two dissimilar lakes: underflows in Lago Maggiore as well as under- and interflows in Mondsee. Comparisons with runoff data of the main tributary streams, the Toce River at Lago Maggiore and the Griesler Ache at Mondsee, revealed empirical runoff thresholds above which the deposition of a detrital layer becomes likely. Whereas this threshold is the same for the whole Pallanza Basin in Lago Maggiore (a daily runoff of 600 m^3/s), it varies within Lake Mondsee. At proximal locations close to the river inflow, detrital layer deposition requires floods exceeding a daily runoff of 40 m^3/s, whereas at a location 2 km further distal, an hourly runoff of 80 m^3/s and at least two days with runoff above 40 m^3/s are necessary. A relation between the thickness of individual deposits and the runoff amplitude of the triggering events is apparent for both lakes but is evidently further influenced by variable influx and lake-internal distribution of detrital sediment.
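The empirical runoff thresholds for Mondsee can be encoded as simple decision rules. This is an illustrative restatement of the thresholds quoted above, not a model from the thesis; the example event is hypothetical:

```python
def layer_expected_proximal(daily_runoff):
    """Detrital layer deposition is likely at the proximal site if any
    day exceeds the empirical 40 m^3/s daily-runoff threshold."""
    return max(daily_runoff) > 40.0

def layer_expected_distal(daily_runoff, peak_hourly_runoff):
    """The distal site additionally requires an hourly peak above
    80 m^3/s and at least two days above 40 m^3/s."""
    days_above = sum(1 for q in daily_runoff if q > 40.0)
    return peak_hourly_runoff > 80.0 and days_above >= 2

# A hypothetical flood event (daily mean runoff in m^3/s):
event = [12.0, 55.0, 48.0, 20.0]
assert layer_expected_proximal(event)
assert layer_expected_distal(event, peak_hourly_runoff=95.0)
```

The asymmetry between the two rules captures why a short, single-day flood can leave a layer near the inflow yet remain invisible 2 km further into the basin.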
To investigate the processes of flood layer formation in lake sediments, hydro-sedimentary dynamics in Lake Mondsee and its main tributary stream, the Griesler Ache, were monitored from January 2011 to December 2013. Precipitation, discharge and turbidity were recorded continuously at the river's outlet to the lake and compared to sediment fluxes trapped close to the lake bottom, on a basis of three to twelve days and on a monthly basis, in three different water depths at two locations in the lake basin, at a distance of 0.9 km (proximal) and 2.8 km (distal) from the Griesler Ache inflow. Within the three-year observation period, 26 river floods of different amplitude (10-110 m^3/s) were recorded, resulting in variable sediment fluxes to the lake (4-760 g m^-2 d^-1). Vertical and lateral variations in flood-related sedimentation during the largest floods indicate that interflows are the main process of lake-internal sediment transport in Lake Mondsee. The comparison of hydrological and sedimentological data revealed (i) a rapid sedimentation within three days after the peak runoff in the proximal and within six to ten days in the distal lake basin, (ii) empirical runoff thresholds for triggering sediment flux at the lake floor, increasing from the proximal (20 m^3/s) to the distal lake basin (30 m^3/s), and (iii) factors controlling the amount of detrital sediment deposition at a certain location in the lake basin. The total influx of detrital sediment is mainly driven by runoff amplitude, catchment sediment availability and episodic sediment input from local sediment sources. The lake-internal sediment distribution plays a further role; it is not the same for each event but is controlled by flood duration and the existence of a thermocline and, therewith, by the season in which a flood occurs.
In summary, the studies reveal a high sensitivity of lake sediments to flood events of different intensity. Certain runoff amplitudes are required to supply enough detrital material to form a visible detrital layer at the lake floor. Plausible explanations are positive feedback mechanisms between rainfall, runoff, erosion, fluvial sediment transport capacity and lake-internal sediment distribution. Runoff thresholds for detrital layer formation are therefore site-specific due to different lake-catchment characteristics. However, the studies also reveal that flood amplitude is not the only control on the amount of deposited sediment at a certain location in the lake basin, even for the strongest flood events. The sediment deposition is rather influenced by a complex interaction of catchment and in-lake processes. This means that the coring location within a lake basin strongly determines the significance of a flood layer record. Moreover, the results show that while lake sediments provide ideal archives for reconstructing flood frequencies, the reconstruction of flood amplitudes is a more complex issue and requires detailed knowledge of the relevant catchment and in-lake sediment transport and depositional processes.
Spots on stellar surfaces are thought to be stellar analogues of sunspots. Thus, starspots are direct manifestations of strong magnetic fields. Their decay rate is directly related to the magnetic diffusivity, which itself is a key quantity for the deduction of an activity cycle length. So far, not a single starspot decay has been observed, and thus no stellar activity cycle has been inferred from the corresponding turbulent diffusivity.
We investigate the evolution of starspots on the rapidly-rotating K0 giant XX Triangulum. Continuous high-resolution and phase-resolved spectroscopy was obtained with the robotic 1.2-m STELLA telescope on Tenerife over a timespan of six years. With our line-profile inversion code iMap we reconstruct a total of 36 consecutive Doppler maps. To quantify starspot area decay and growth, we match the observed images with simplified spot models based on a Monte-Carlo approach.
It is shown that the surface of XX Tri is covered with large high-latitude and even polar spots and with occasional small equatorial spots. Over the course of only six years, we see a systematically changing spot distribution with various time scales and morphologies, including spot fragmentation and spot merging as well as spot decay and formation.
For the first time, a starspot decay rate is determined on a star other than the Sun. From our spot-decay analysis we determine an average linear decay rate of D = -0.067±0.006 Gm^2/day. From this decay rate, we infer a turbulent diffusivity of η_τ = (6.3±0.5) x 10^14 cm^2/s and consequently predict an activity cycle of 26±6 years. The obtained cycle length matches very well with photometric observations.
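The quoted decay rate and diffusivity are mutually consistent under a simple linear spot-decay relation, η = |dA/dt| / (4π); that relation is an assumption of this sketch, chosen because it reproduces the quoted numbers, and is not necessarily the exact formula used in the thesis:

```python
import math

GM2_PER_DAY_TO_CM2_PER_S = 1e22 / 86400.0   # (1 Gm)^2 = 1e22 cm^2

def turbulent_diffusivity(decay_rate_gm2_per_day):
    """Turbulent diffusivity from a linear spot-area decay rate,
    assuming eta = |dA/dt| / (4*pi). The 1/(4*pi) factor is an
    assumption for this sketch, consistent with the quoted numbers."""
    dA_dt = abs(decay_rate_gm2_per_day) * GM2_PER_DAY_TO_CM2_PER_S
    return dA_dt / (4.0 * math.pi)

eta = turbulent_diffusivity(-0.067)   # ~6.2e14 cm^2/s, near the quoted value
```

Unit bookkeeping does most of the work here: converting Gm^2/day to cm^2/s spans 17 orders of magnitude, so an explicit conversion constant keeps the result auditable.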
Our time series of Doppler maps further enables us to investigate the differential rotation of XX Tri by means of a cross-correlation analysis. We detect a weak solar-like differential rotation with a surface shear of α = 0.016±0.003. This value agrees with similar studies of other RS CVn stars.
Furthermore, we found evidence for active longitudes and flip-flops. Whereas the more active longitude is located in phase towards the (unseen) companion star, the weaker active longitude is located at the opposite stellar hemisphere. From their periodic appearance, we infer a flip-flop cycle of ~2 years. Both activity phenomena are common on late-type binary stars.
Last but not least, we redetermine several astrophysical properties of XX Tri and its binary system, as large datasets of photometric and spectroscopic observations have become available since their last determination in 1999. Additionally, we compare the rotational spot modulation from photometric and spectroscopic studies.
The Barberton Greenstone Belt (BGB) in the northwestern part of South Africa belongs to the few well-preserved remnants of Archean crust. For more than a century, the BGB has been intensively studied at the surface, with detailed mapping of its surficial geological units and tectonic features. Nevertheless, the deeper structure of the BGB remains poorly understood. Various tectonic evolution models have been developed based on geochronological and structural data. These theories are highly controversial and centre on the question of whether plate tectonics - as geoscientists understand it today - was already operating on the early Earth, or whether vertical mass movements, driven by the higher temperature of the Earth in Archean times, governed continent development.
To get a step closer to answering the questions regarding the internal structure and formation of the BGB, magnetotelluric (MT) field experiments were conducted as part of the German-South African research initiative Inkaba yeAfrica. Five-component MT data (three magnetic and two electric channels) were collected at ~200 sites along six profiles crossing the southern part of the BGB. Tectonic features like (fossil) faults and shear zones are often mineralized and can therefore have high electrical conductivities. Hence, an image of the subsurface conductivity distribution obtained from MT measurements can provide useful information on tectonic processes.
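As background to how such conductivity images are derived, the sketch below evaluates the standard MT apparent resistivity from an impedance estimate Z = E/B per period; the numerical values are illustrative, not from the BGB survey.

```python
# Standard MT apparent resistivity from an impedance estimate; the input
# value below is made up for illustration, not a BGB measurement.
import math

def apparent_resistivity(Z_mvkm_per_nT, period_s):
    """Apparent resistivity in Ohm*m from Z in the field convention (mV/km)/nT."""
    mu0 = 4e-7 * math.pi
    Z_si = mu0 * Z_mvkm_per_nT * 1e3    # E/B -> E/H in (V/m)/(A/m)
    omega = 2.0 * math.pi / period_s
    return abs(Z_si) ** 2 / (mu0 * omega)

# Reproduces the common shorthand rho_a = 0.2 * T * |Z|^2:
print(apparent_resistivity(1.0, 1.0))   # 0.2 Ohm*m for Z = 1 (mV/km)/nT at T = 1 s
```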
Unfortunately, the BGB MT data set is heavily affected by man-made electromagnetic noise caused, e.g., by powerlines and electric fences. Aperiodic spikes in the magnetic field components and corresponding offsets in the electric field components impair the data quality, particularly at periods >1 s, which are required to image deep electrical structures. Common noise-reduction methods such as delay filtering and remote-reference processing only worked well for periods <1 s. Within the framework of this thesis, two new filtering approaches were developed to handle the severe noise in long-period data and obtain reliable processing results. The first algorithm is based on the Wiener filter in combination with a spike-detection algorithm. Comparing the data variances of a local site with those of a reference site allows the identification of disturbed time-series windows for each recorded channel at the local site. Using the data of the reference site, a Wiener filter is applied to predict physically meaningful data that replace the disturbed windows. While spikes in the magnetic channels are easily recognized and replaced, steps in the electric channels are more difficult to detect, depending on their offset. I have therefore implemented a novel approach based on time-series differentiation, noise removal, and subsequent integration to overcome this obstacle. A second filtering approach, in which spikes and steps in the time series are identified by comparing short-term and long-term averages of the data, was also implemented as part of my thesis; here, the noise in the form of spikes and offsets is treated by interpolating the affected data samples. These new developments substantially improved the data and allowed us to gain one to two decades of data (up to 10 or 100 s).
The re-processed MT data were used to image the electrical conductivity distribution of the BGB by 2D and 3D inversion. The inversion models are in good agreement with the surface geology, delineating the highly resistive rocks of the BGB from the surrounding, more conductive geological units. Fault zones appear as conductive structures and can be traced to depths of 5 to 10 km. The 2D models suggest a continuation of the faults further south, across the boundary of the BGB. Given the shallow tectonic structures (the fault system) within the BGB, in contrast to the deeply rooted resistive batholiths in the area, tectonic models that combine vertical mass transport with, in part, present-day-style plate tectonics appear most likely for the evolution of the BGB.
The rise of evolutionary novelties is one of the major drivers of evolutionary diversification. African weakly electric fishes (Teleostei, Mormyridae) have undergone an outstanding adaptive radiation, putatively owing to their ability to communicate through species-specific Electric Organ Discharges (EODs) produced by a novel, muscle-derived electric organ. Indeed, such EODs might have acted as effective pre-zygotic isolation mechanisms, hence favoring ecological speciation in this group of fishes. Despite the evolutionary importance of this organ, genetic investigations of its origin and function have remained limited.
The ultimate aim of this study is to better understand the genetic basis of EOD production by exploring the transcriptomic profiles of the electric organ and of its ancestral counterpart, the skeletal muscle, in the genus Campylomormyrus. After establishing a set of reference transcriptomes using “Next-Generation Sequencing” (NGS) technologies, I performed in silico analyses of differential expression in order to identify sets of genes that might be responsible for the functional differences observed between these two tissue types. These analyses indicate that: i) the loss of contractile activity and the decoupling of the excitation-contraction processes are reflected by the down-regulation of the corresponding genes in the electric organ; ii) the metabolic activity of the electric organ might be specialized towards the production and turnover of membrane structures; iii) several ion channels are highly expressed in the electric organ in order to increase excitability; and iv) several myogenic factors might be down-regulated by transcription repressors in the electric organ.
A secondary aim of this study is to improve the genus-level phylogeny of Campylomormyrus by applying new methods of inference based on the multispecies coalescent model, in order to reduce the conflict among gene trees and to reconstruct a phylogenetic tree as close as possible to the actual species tree. Using one mitochondrial and four nuclear markers, I was able to resolve the phylogenetic relationships among most of the currently described Campylomormyrus species. Additionally, I applied several coalescent-based species delimitation methods to test the hypothesis that putatively cryptic species, distinguishable only by their EOD, represent independently evolving lineages. These results were further validated by investigating patterns of diversification at 16 microsatellite loci. Overall, the results suggest the presence of a new, yet undescribed species of Campylomormyrus.
The standing stock and production of organismal biomass depend strongly on the organisms' biotic environment, which arises from trophic and non-trophic interactions among them. The trophic interactions between the different groups of organisms form the food web of an ecosystem, with autotrophic and bacterial production at the base and potentially several levels of consumers on top of the producers. Feeding interactions can regulate communities either through severe grazing pressure or through shortage of resource or prey production, termed top-down and bottom-up control, respectively. The limitations of all communities combine in the food-web regulation, which is subject to abiotic and biotic forcing regimes arising from external and internal constraints. This dissertation presents the effects of alterations in two abiotic, external forcing regimes: terrestrial matter input and long-lasting low temperatures in winter. Two methodological approaches, a complex ecosystem model study and the analysis of two whole-lake measurement campaigns, were used to investigate the effects on food-web regulation and the resulting consequences at the species, community, and ecosystem scales. All types of organisms, autotrophic and heterotrophic, at all trophic levels were investigated to gain a comprehensive overview of the effects of the two altered forcing regimes. In addition, an extensive evaluation of the trophic interactions and resulting carbon fluxes along the pelagic and benthic food webs was performed to quantify the efficiency of the trophic energy transfer within the food webs. All studies were conducted in shallow lakes, the most abundant lake type worldwide. The specific morphology of shallow lakes allows the benthic production to contribute substantially to the whole-lake production.
Further, as shallow lakes are often small, they are especially sensitive both to changes in the input of terrestrial organic matter and to changes in atmospheric temperature. Another characteristic of shallow lakes is their occurrence in alternative stable states: they are either in a clear-water state dominated by macrophytes or in a turbid state dominated by phytoplankton. Both states can stabilize themselves through various mechanisms.
These two alternative states and their stabilizing mechanisms are integrated in the complex ecosystem model PCLake, which was used to investigate the effects of enhanced terrestrial particulate organic matter (t-POM) input to lakes. The food-web regulation was altered via three distinct pathways: (1) zoobenthos received more food and increased in biomass, which favored benthivorous fish, whose bioturbation in turn reduced the available light; (2) zooplankton partly substituted suspended t-POM for autochthonous organic matter in their diet, so that the autochthonous organic matter remaining in the water reduced its transparency; (3) t-POM suspended in the water column reduced the available light directly. As macrophytes are more light-sensitive than phytoplankton, they suffered most from the lower transparency. Consequently, the resilience of the clear-water state was reduced by enhanced t-POM inputs, which makes the turbid state more likely at a given nutrient concentration. In two subsequent winters, long-lasting low temperatures and a concurrently long duration of ice coverage were observed, resulting in low overall adult fish biomasses in the two study lakes, Schulzensee and Gollinsee, characterized by the presence and absence of submerged macrophytes, respectively. Before the partial winterkill of fish, Schulzensee supported a higher proportion of piscivorous fish than Gollinsee. The partial winterkill, however, aligned both communities, as piscivorous fish are more sensitive to low oxygen concentrations. Young-of-the-year fish benefitted greatly from the absence of adult fish due to the lower predation pressure. They could therefore exert a strong top-down control on crustaceans, which restructured the entire zooplankton community, leading to low crustacean biomasses and a community composition dominated by copepodites and nauplii.
As a result, ciliates were released from top-down control, reached high biomasses compared to lakes of various trophic states and depths, and dominated the zooplankton community. Being very abundant in the study lakes and having the highest weight-specific grazing rates among the zooplankton, ciliates potentially exerted a strong top-down control on small phytoplankton and particle-attached bacteria. This resulted in a higher proportion of large phytoplankton than in other lakes. Additionally, the phytoplankton community was evenly distributed, presumably due to the numerous fast-growing and highly specific ciliate grazers. Although the pelagic food web was completely restructured after the subsequent partial winterkills of fish, both lakes were resistant to the effects of this forcing regime at the ecosystem scale. The consistently high predation pressure on phytoplankton prevented Schulzensee from switching from the clear-water to the turbid state. Further mechanisms that potentially stabilized the clear-water state were allelopathic effects of macrophytes and nutrient limitation in summer. Despite the alterations of the food-web structure after the partial winterkill of fish, the pelagic autotrophic and bacterial production was transferred to animal consumers an order of magnitude more efficiently than the respective benthic production. The compiled mass-balanced whole-lake food webs thus suggested that the benthic bacterial and autotrophic production, which exceeded that of the pelagic habitat, was not used by animal consumers. This holds true even when food quality, additional consumers such as ciliates, benthic protozoa and meiobenthos, the pelagic-benthic link, and the potential oxygen limitation of macrobenthos were considered. The low benthic efficiencies therefore suggest that lakes are primarily pelagic systems, at least at the animal-consumer level.
Overall, this dissertation gives insights into the regulation of organism groups in the pelagic and benthic habitats at each trophic level under two different forcing regimes, and quantifies the efficiency of the carbon transfer in both habitats. The results underline that alterations of external forcing regimes affect all hierarchical levels, including the ecosystem.
In this thesis, I study ultrafast dynamics in perovskite oxides using time-resolved broadband spectroscopy. I focus on the observation of coherent phonon propagation by time-resolved Brillouin scattering: following the excitation of metal transducer films with a femtosecond infrared pump pulse, coherent phonon dynamics in the GHz frequency range are triggered. Their propagation is monitored using a delayed white-light probe pulse. The technique is illustrated on various thin films and multilayered samples. I apply it to investigate the linear and nonlinear acoustic response of bulk SrTiO_3, which displays a ferroelastic phase transition from a cubic to a tetragonal structural phase at T_a=105 K. In the linear regime, I observe a coupling of the observed acoustic phonon mode to the softening optic modes that describe the phase transition. In the nonlinear regime, I find a giant slowing down of the sound velocity in the low-temperature phase that is only observable for strain amplitudes exceeding the tetragonality of the material; it is attributed to a coupling of the high-frequency phonons to ferroelastic domain walls. I propose a new mechanism for the coupling of strain waves to the domain walls that is only effective for high-amplitude strain. A detailed study of the phonon attenuation across a wide temperature range shows that the attenuation at low temperatures is influenced by the domain configuration, which is determined by interface strain. Preliminary measurements on magnetic-ferroelectric multilayers reveal that the excitation fluence needs to be carefully controlled when dynamics at phase transitions are studied.
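For orientation, the oscillation frequency detected in time-resolved Brillouin scattering at normal incidence follows f = 2nv/λ, with refractive index n at the probe wavelength and sound velocity v. The snippet below evaluates this with typical literature values for SrTiO_3 (assumed here, not taken from the thesis).

```python
# Sketch of the standard backscattering Brillouin frequency, f = 2*n*v/lambda.
# The material parameters are typical literature values, assumed for illustration.
def brillouin_frequency(n, v_sound, wavelength):
    """Brillouin oscillation frequency in Hz for normal-incidence backscattering."""
    return 2.0 * n * v_sound / wavelength

f = brillouin_frequency(n=2.3, v_sound=7.9e3, wavelength=800e-9)
print(f"{f / 1e9:.0f} GHz")   # tens of GHz, i.e. the GHz range cited above
```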
Intuitively, it is clear that neural processes and eye movements in reading are closely connected, but only a few studies have investigated both signals simultaneously. Instead, the usual approach is to record them in separate experiments and to consolidate the results afterwards. Studies that did coregister eye movements and EEG in natural reading, however, have shown that this is feasible and have contributed greatly to the understanding of oculomotor processes in reading. The present thesis builds upon that work, assessing to what extent coregistration can be helpful for sentence-processing research.
In the first study, we explore how well coregistration is suited to study subtle effects common to psycholinguistic experiments by investigating the effect of distance on dependency resolution. The results demonstrate that researchers must improve the signal-to-noise ratio to uncover more subdued effects in coregistration. In the second study, we compare oscillatory responses in different presentation modes. Using robust effects from world knowledge violations, we show that the generation and retrieval of memory traces may differ between natural reading and word-by-word presentation. In the third study, we bridge the gap between our knowledge of behavioral and neural responses to integration difficulties in reading by analyzing the EEG in the context of regressive saccades. We find the P600, a neural indicator of recovery processes, when readers make a regressive saccade in response to integration difficulties.
The results in the present thesis demonstrate that coregistration can be a useful tool for the study of sentence processing. However, they also show that it may not be suitable for some questions, especially if they involve subtle effects.