World market governance
(2014)
Democratic capitalism, or liberal democracy, as the successful marriage of convenience between market liberalism and democracy is sometimes called, is in trouble. The market economy has become global, and there is a growing mismatch with the territoriality of nation-states. The functional global networks and the inter-governmental order can no longer keep pace with the rapid development of the global market economy, and regulatory capture is all too common. Concepts like de-globalization, self-regulation, and global government are floated in the debate. These alternatives are analysed and found to be improper, inadequate or plainly impossible. The proposed route is instead to accept that the global market economy has developed into an independent fundamental societal system that needs its own governance. The suggestion is World Market Governance based on the Rule of Law, in order to shape the fitness environment for the global market economy and to strengthen the nation-states so that they can regain the sovereignty to decide upon the social and cultural conditions in each country. Elements of the proposed Rule of Law are international legislation decided by an Assembly supported by a Council, and an independent Judiciary. Existing international organisations would function as executors. The need for broad, sustained demand for regulations in the common interest is identified.
Wood is used for many applications because of its excellent mechanical properties, its relative abundance and because it is a renewable resource. However, its wider utilization as an engineering material is limited because it swells and shrinks upon moisture changes and is susceptible to degradation by microorganisms and/or insects. Chemical modifications of wood have been shown to improve dimensional stability, water repellence and/or durability, thus increasing the potential service-life of wood materials. Current treatments are limited, however, because it is difficult to introduce and fix such modifications deep inside the tissue and the cell wall. Within the scope of this thesis, novel chemical modification methods for wood cell walls were developed to improve both the dimensional stability and the water repellence of wood. These methods were partly inspired by heartwood formation in living trees, a process that in some species inserts hydrophobic chemical substances into the cell walls of already dead wood cells. In the first part of this thesis, a cell-wall modification chemistry inspired by this natural process of heartwood formation was used. Commercially available hydrophobic flavonoid molecules were effectively inserted into the cell walls of spruce, a softwood species with low natural durability, after a tosylation treatment, to obtain an “artificial heartwood”. Flavonoid-inserted cell walls show reduced moisture absorption, resulting in better dimensional stability, water repellency and increased hardness. This approach differs markedly from established modifications, which mainly address the hydroxyl groups of cell wall polymers with hydrophilic substances. In the second part of the work, in-situ styrene polymerization inside the tosylated cell walls was studied, since the adhesion between hydrophobic polymers and hydrophilic cell wall components is known to be weak.
The hydrophobic styrene monomers were inserted into the tosylated wood cell walls and then polymerized to form polystyrene within the cell walls, which increased the dimensional stability of the bulk wood material and considerably reduced the water uptake of the cell walls compared to controls. In the third part of the work, grafting of another hydrophobic, and also biodegradable, polymer, poly(ɛ-caprolactone), in the wood cell walls by ring-opening polymerization of ɛ-caprolactone was studied at mild temperatures. The results indicated that polycaprolactone attached within the cell walls and caused permanent swelling of the cell walls of up to 5%. The dimensional stability of the bulk wood material increased by 40% and water absorption was reduced by more than 35%. This method yields a fully biodegradable, hydrophobized wood material, which eases the disposal problem of modified wood materials and has improved properties that extend the material’s service-life. Starting from a bio-inspired approach that showed great promise as an alternative to standard cell wall modifications, we demonstrated the possibility of inserting hydrophobic molecules into the cell walls and supported this with in-situ styrene and ɛ-caprolactone polymerization within the cell walls. This thesis shows that, despite the extensive knowledge and long history of using wood as a material, there is still room for novel chemical modifications that could have a high impact on improving wood properties.
The economic impact analysis contained in this book shows how irrigation farming is particularly susceptible to certain water management policies in the Australian Murray-Darling Basin, one of the world's largest river basins and Australia’s most fertile region. By comparing different pricing and non-pricing water management policies with the help of the Water Integrated Market Model, it is found that the impact of policies that reduce water demand is most severe on crops that need to be intensively irrigated and are at the same time less water-productive. A combination of increasingly frequent and severe droughts and the application of policies that decrease agricultural water demand in the same region will create a situation in which the highly water-dependent crops rice and cotton cannot be cultivated at all.
Software maintenance encompasses any changes made to a software system after its initial deployment and is thereby one of the key phases in the typical software-engineering lifecycle. In software maintenance, we primarily need to understand structural and behavioral aspects, which are difficult to obtain, e.g., by code reading. Software analysis is therefore a vital tool for maintaining these systems: it provides the (preferably automated) means to extract and evaluate information from their artifacts, such as software structure, runtime behavior, and related processes. However, such analysis typically results in massive raw data, so that even experienced engineers face difficulties directly examining, assessing, and understanding these data. Among other things, they require tools with which to explore the data if no clear question can be formulated beforehand. For this, software analysis and visualization provide their users with powerful interactive means. These enable the automation of tasks and, particularly, the acquisition of valuable and actionable insights into the raw data. For instance, one means for exploring runtime behavior is trace visualization. This thesis aims at extending and improving the tool set for visual software analysis by concentrating on several open challenges in the fields of dynamic and static analysis of software systems. This work develops a series of concepts and tools for the exploratory visualization of the respective data to support users in finding and retrieving information on the system artifacts concerned. This is a difficult task, due to the lack of appropriate visualization metaphors; in particular, the visualization of complex runtime behavior poses various questions and challenges of both a technical and conceptual nature.
This work focuses on a set of visualization techniques for visually representing control-flow related aspects of software traces from shared-memory software systems: A trace-visualization concept based on icicle plots aids in understanding both single-threaded as well as multi-threaded runtime behavior on the function level. The concept’s extensibility further allows the visualization and analysis of specific aspects of multi-threading such as synchronization, the correlation of such traces with data from static software analysis, and a comparison between traces. Moreover, complementary techniques for simultaneously analyzing system structures and the evolution of related attributes are proposed. These aim at facilitating long-term planning of software architecture and supporting management decisions in software projects by extensions to the circular-bundle-view technique: An extension to 3-dimensional space allows for the use of additional variables simultaneously; interaction techniques allow for the modification of structures in a visual manner. The concepts and techniques presented here are generic and, as such, can be applied beyond software analysis for the visualization of similarly structured data. The techniques' practicability is demonstrated by several qualitative studies using subject data from industry-scale software systems. The studies provide initial evidence that the techniques' application yields useful insights into the subject data and its interrelationships in several scenarios.
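The icicle-plot mapping described above can be illustrated with a small sketch: each call in a trace becomes a rectangle whose row is the call depth and whose width is its duration, with children nested inside the parent's horizontal extent. The data model below (name/duration/children dictionaries) is a hypothetical simplification for illustration, not the thesis' actual trace format.

```python
def icicle_layout(call, depth=0, x0=0.0, out=None):
    """Compute icicle-plot rectangles for a call trace.
    Each call is a dict: {"name": str, "duration": float, "children": [...]}.
    A call's rectangle spans its duration at row `depth`; its children are
    laid out left-to-right inside the parent's horizontal extent.
    (Illustrative data model, assumed for this sketch.)"""
    if out is None:
        out = []
    out.append({"name": call["name"], "depth": depth,
                "x": x0, "width": call["duration"]})
    child_x = x0
    for child in call.get("children", []):
        icicle_layout(child, depth + 1, child_x, out)
        child_x += child["duration"]
    return out

# toy single-threaded trace: main() calls parse() and render(); render() calls draw()
trace = {"name": "main", "duration": 10.0, "children": [
    {"name": "parse", "duration": 4.0, "children": []},
    {"name": "render", "duration": 5.0, "children": [
        {"name": "draw", "duration": 3.0, "children": []}]}]}
rects = icicle_layout(trace)
print(len(rects))  # 4 rectangles: main, parse, render, draw
```

The same layout generalizes to one icicle per thread, which is the basic shape of the multi-threaded view described above.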
The International Association for the Evaluation of Educational Achievement (IEA) was formed in the 1950s (Postlethwaite, 1967). Since that time, the IEA has conducted many studies in the area of mathematics, such as the First International Mathematics Study (FIMS) in 1964, the Second International Mathematics Study (SIMS) in 1980-1982, and a series of studies beginning with the Third International Mathematics and Science Study (TIMSS), which has been conducted every 4 years since 1995. According to Stigler et al. (1999), in the FIMS and the SIMS, U.S. students achieved low scores in comparison with students in other countries (p. 1). The TIMSS 1995 “Videotape Classroom Study” was therefore conducted as a complement to the earlier studies, to learn “more about the instructional and cultural processes that are associated with achievement” (Stigler et al., 1999, p. 1). The TIMSS Videotape Classroom Study is known today as the TIMSS Video Study. From the findings of the TIMSS 1995 Video Study, Stigler and Hiebert (1999) likened teaching to “mountain ranges poking above the surface of the water,” implying that while we might see the mountaintops, we do not see the hidden parts underneath (pp. 73-78). By watching the videotaped lessons from Germany, Japan, and the United States again and again, they discovered that “the systems of teaching within each country look similar from lesson to lesson. At least, there are certain recurring features [or patterns] that typify many of the lessons within a country and distinguish the lessons among countries” (pp. 77-78). They also discovered that “teaching is a cultural activity,” so the systems of teaching “must be understood in relation to the cultural beliefs and assumptions that surround them” (pp. 85, 88). From this viewpoint, one of the purposes of this dissertation was to study some cultural aspects of mathematics teaching and relate the results to mathematics teaching and learning in Vietnam.
Another research purpose was to carry out a video study in Vietnam to identify the characteristics of Vietnamese mathematics teaching and to compare these characteristics with those of other countries. In particular, this dissertation carried out the following research tasks:
- Studying the characteristics of teaching and learning in different cultures and relating the results to mathematics teaching and learning in Vietnam
- Introducing the TIMSS, the TIMSS Video Study and the advantages of using a video study to investigate mathematics teaching and learning
- Carrying out the video study in Vietnam to identify the image, scripts and patterns, and the lesson signature of eighth-grade mathematics teaching in Vietnam
- Comparing some aspects of mathematics teaching in Vietnam and other countries and identifying the similarities and differences across countries
- Studying the demands and challenges of innovating mathematics teaching methods in Vietnam, drawing lessons from the video studies
This dissertation will hopefully be a useful reference for pre-service teachers at education universities seeking to understand the nature of teaching and to develop their teaching careers.
The Epoch of Reionization marks the second major change in the ionization state of the universe after recombination, from a neutral to an ionized state. It starts with the appearance of the first stars and galaxies; a fraction of the high-energy photons emitted by galaxies permeates the intergalactic medium (IGM) and gradually ionizes the hydrogen, until the IGM is completely ionized at z~6 (Fan et al., 2006). While the progress of reionization is driven by galaxy evolution, it changes the ionization and thermal state of the IGM substantially and affects subsequent structure and galaxy formation through various feedback mechanisms.
Understanding this interaction between reionization and galaxy formation is further impeded by a lack of understanding of the high-redshift galactic properties such as the dust distribution and the escape fraction of ionizing photons. Lyman Alpha Emitters (LAEs) represent a sample of high-redshift galaxies that are sensitive to all these galactic properties and the effects of reionization.
In this thesis we aim to understand the progress of reionization by performing cosmological simulations, which allow us to investigate the limits of constraining reionization with high-redshift galaxies such as LAEs, and to examine how galactic properties and the ionization state of the IGM affect the visibility and observed quantities of LAEs and Lyman Break galaxies (LBGs).
In the first part of this thesis we focus on radiative transfer calculations to simulate reionization. We have developed a mapping-sphere scheme which, starting from spherically averaged temperature and density fields, uses our 1D radiative transfer code to compute the effect of each source on the IGM temperature and ionization (HII, HeII, HeIII) profiles, which are subsequently mapped onto a grid. Furthermore, we have updated the 3D Monte Carlo radiative transfer code pCRASH, enabling detailed reionization simulations that take individual source characteristics into account.
In the second part of this thesis we perform a reionization simulation by post-processing a smoothed-particle hydrodynamical (SPH) simulation (GADGET-2) with 3D radiative transfer (pCRASH), where the ionizing sources are modelled according to the characteristics of the stellar populations in the hydrodynamical simulation. Following the ionization fractions of hydrogen (HI) and helium (HeII, HeIII), and temperature in our simulation, we find that reionization starts at z~11 and ends at z~6, and high density regions near sources are ionized earlier than low density regions far from sources.
In the third part of this thesis we couple the cosmological SPH simulation and the radiative transfer simulations with a physically motivated, self-consistent model for LAEs, in order to understand the influence of the ionization state of the IGM, the escape fraction of ionizing photons from galaxies, and dust in the interstellar medium (ISM) on the visibility of LAEs. Comparison of our model's results with the LAE Lyman Alpha (Lya) and UV luminosity functions at z~6.6 reveals a three-dimensional degeneracy between the ionization state of the IGM, the escape fraction of ionizing photons and the ISM dust distribution, which implies that LAEs act as tracers not only of reionization but also of the ionizing photon escape fraction and of the ISM dust distribution. This degeneracy is not broken even when we compare the simulated with the observed clustering of LAEs at z~6.6. However, our results show that reionization has the largest impact on the amplitude of the LAE angular correlation functions, and its imprints are clearly distinguishable from those of properties on galactic scales. These results show that reionization cannot be constrained tightly by LAE observations alone; further observational constraints, e.g. tomographies of the redshifted hydrogen 21cm line, are required.
In addition we use our LAE model to probe the question of when a galaxy is visible as a LAE or a LBG. Within our model, galaxies above a critical stellar mass can produce enough luminosity to be visible as a LBG and/or a LAE. Finding an increasing duty cycle of LBGs with Lya emission as the UV magnitude or stellar mass of the galaxy rises, our model reveals that the brightest (and most massive) LBGs most often show Lya emission.
Predicting the Lya equivalent width (Lya EW) distribution and the fraction of LBGs showing Lya emission at z~6.6, we reproduce the observational trend of the Lya EWs with UV magnitude. However, the Lya EWs of the UV brightest LBGs exceed observations and can only be reconciled by accounting for an increased Lya attenuation of massive galaxies, which implies that the observed Lya brightest LAEs do not necessarily coincide with the UV brightest galaxies. We have analysed the dependencies of LAE observables on the properties of the galactic and intergalactic medium and the LAE-LBG connection, and this enhances our understanding of the nature of LAEs.
The aim of the present thesis is to answer the question to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage, which is the finding that ambiguous sentences can be easier to process than unambiguous sentences. This finding may appear counterintuitive, because more meanings should be associated with a higher computational effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
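The race intuition is an order-statistics fact: the minimum of two completion times is on average smaller than a single completion time. The sketch below is a hypothetical illustration of this point (the exponential build-time distribution is an assumption of convenience, not a claim of the URM): when two structures race in parallel, the first one to finish determines the processing time, so ambiguous sentences come out faster on average.

```python
import random

def race(n_structures, trials=100_000, seed=1):
    """Mean completion time when n_structures parses race in parallel:
    the first structure to finish is adopted (illustrative assumption:
    i.i.d. exponential build times with mean 1.0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.expovariate(1.0) for _ in range(n_structures))
    return total / trials

ambiguous = race(2)    # two permissible structures race
unambiguous = race(1)  # only one structure can be built
print(ambiguous < unambiguous)  # True: the ambiguity advantage
```

With exponential build times the effect is large (the mean of the minimum of two draws is half the mean of one draw), but the direction of the advantage holds for any non-degenerate distribution of build times.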
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless it is absolutely necessary; in other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage are due to the fact that in those experiments readers were not required to fully understand the sentence.
In this thesis, these two models of the parser's actions at choice points in the sentence are presented and evaluated. First, it is argued that Swets et al.'s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of both the URM and the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented, which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. This experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for this fact. Finally, a third experiment is presented, which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with the experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel, fallible, and sensitive to task demands.
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a consequent analysis that consciously propagates the measurement errors at each step. The dissertation presents a thorough analytical method to quantify errors of measurement inherent in paleoclimate data. An additional focus are the uncertainties in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on 'proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence-based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known 'dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
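The idea of treating a proxy record as a sequence of probability distributions, rather than as a point time series, can be sketched with a toy Monte Carlo model. Everything here (the Gaussian dating error, the linear age model, the function and parameter names) is an illustrative assumption, not the thesis' actual Bayesian machinery.

```python
import random

def proxy_ensemble(depths, values, age_model, age_sigma, n_draws=2000, seed=0):
    """Monte Carlo sketch of an uncertain proxy record: each dated depth
    gets an ensemble of age draws instead of a single point in time.
    Illustrative assumptions: Gaussian dating errors, one value per depth."""
    rng = random.Random(seed)
    record = []
    for d, v in zip(depths, values):
        ages = [rng.gauss(age_model(d), age_sigma) for _ in range(n_draws)]
        record.append((ages, v))  # a distribution over ages, not a point
    return record

# toy record: age increases linearly with depth, +/- 100-year dating error
rec = proxy_ensemble([0.1, 0.2, 0.3], [1.2, 0.8, 1.5],
                     age_model=lambda d: 10_000 * d, age_sigma=100)
mean_age = sum(rec[0][0]) / len(rec[0][0])
print(round(mean_age, -2))  # 1000.0, the modelled age of depth 0.1
```

Downstream analyses (such as the recurrence-based event detection) then operate on these ensembles, so dating uncertainty propagates into every derived quantity instead of being discarded.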
We also analyze the network of dependencies surrounding GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño Southern Oscillation at quasi-biennial timescales. The analysis highlights the need for a more nuanced formulation of the influences between different climatic factors, as well as the limitations in trying to estimate such dependencies.
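A minimal stand-in for such delay-resolved dependency measures is the lagged cross-correlation sketched below. The thesis' actual analysis uses more sophisticated (causal, directed) estimators, so this is only a conceptual illustration of how a link "from x to y at delay lag" can be quantified.

```python
def lagged_corr(x, y, lag):
    """Pearson correlation between x(t) and y(t + lag): a minimal
    stand-in for the delay-resolved dependency measures used in
    climate network analysis (illustrative, not the thesis' method)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# toy series: y is x delayed by two steps, so the dependency peaks at lag=2
x = [0.0, 1.0, 0.5, -0.3, 0.8, -1.0, 0.2, 0.9, -0.4, 0.6]
y = x[-2:] + x[:-2]  # shift x right by two samples
print(round(lagged_corr(x, y, 2), 6))  # 1.0
```

Scanning such a measure over many lags and variable pairs, and keeping only statistically significant links, yields exactly the kind of directed, delay-annotated network described above.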
Ferromagnetic (FM) materials are currently the mainstay of data processing and storage. As physical limits are approached, new concepts have to be found for faster and smaller switches, higher data densities and greater energy efficiency. Some of the new concepts under discussion involve the material classes of correlated oxides and materials with antiferromagnetic coupling. Their applicability depends critically on their switching behavior, i.e., how fast and how energy-efficiently material properties can be manipulated. This thesis presents investigations of ultrafast non-equilibrium phase transitions in such new materials. In transition metal oxides (TMOs), the coupling of different degrees of freedom and the resulting low-energy excitation spectrum often produce spectacular changes of macroscopic properties (colossal magnetoresistance, superconductivity, metal-to-insulator transitions), often accompanied by nanoscale order of spins, charges and orbital occupation and by lattice distortions, which make these materials attractive. Magnetite served as a prototype for functional TMOs, showing a metal-to-insulator transition (MIT) at T = 123 K. By probing the charge and orbital order as well as the structure after an optical excitation, we found that the electronic order and the structural distortion, the characteristics of the insulating phase in thermal equilibrium, are destroyed within the experimental resolution of 300 fs. The MIT itself occurs on a 1.5 ps timescale. This shows that MITs in functional materials are several thousand times faster than switching processes in semiconductors. Recently, ferrimagnetic and antiferromagnetic (AFM) materials have attracted interest. It was shown in ferrimagnetic GdFeCo that the transfer of angular momentum between two opposed FM subsystems with different time constants leads to a switching of the magnetization after laser pulse excitation.
In addition, it was theoretically predicted that demagnetization dynamics in AFM materials should be faster than in FM materials, as no net angular momentum has to be transferred out of the spin system. We investigated two different AFM materials in order to learn more about their ultrafast dynamics. In Ho, a metallic AFM below T ≈ 130 K, we found that AFM order can be destroyed not only faster but also ten times more energy-efficiently than order in comparable FM metals. In EuTe, an AFM semiconductor below T ≈ 10 K, we compared the loss of magnetization and the laser-induced structural distortion in one and the same experiment. Our experiment shows that they are effectively disentangled. An exception is an ultrafast release of lattice dynamics, which we assign to the release of magnetostriction. The results presented here were obtained with time-resolved resonant soft x-ray diffraction at the Femtoslicing source of the Helmholtz-Zentrum Berlin and at the free-electron laser in Stanford (LCLS). In addition, the development and setup of a new UHV diffractometer for these experiments is reported.
The tropical warm-pool waters surrounding Indonesia are one of the equatorial heat and moisture sources considered a driving force of the global climate system. The climate in Indonesia is dominated by the equatorial monsoon system and has been linked to El Niño-Southern Oscillation (ENSO) events, which often result in severe droughts or floods over Indonesia, with profound societal and economic impacts on the population of the world's fourth most populous country. The latest IPCC report states that ENSO will remain the dominant mode in the tropical Pacific, with global effects in the 21st century, and that ENSO-related precipitation extremes will intensify. However, no common agreement exists among climate simulation models on the projected change in ENSO and the Australian-Indonesian Monsoon. Exploring high-resolution palaeoclimate archives, like tree rings or varved lake sediments, provides insights into the natural climate variability of the past and thus helps to improve and validate simulations of future climate change. Centennial tree-ring stable isotope records | The main goal of this doctoral thesis was to explore the potential of tropical tree rings to record climate signals and to use them as palaeoclimate proxies. In detail, stable carbon (δ13C) and oxygen (δ18O) isotopes were extracted from teak trees in order to establish the first well-replicated centennial (AD 1900-2007) stable isotope records for Java, Indonesia. Furthermore, different climatic variables were tested for significant correlations with the tree-ring proxies (ring width, δ13C, δ18O). Moreover, highly resolved intra-annual oxygen isotope data were established to assess the transfer of the seasonal precipitation signal into the tree rings. Finally, the established oxygen isotope record was used to reveal possible correlations with ENSO events.
Methodological achievements | A second goal of this thesis was to assess the applicability of novel techniques that facilitate and optimize high-resolution, high-throughput stable isotope analysis of tree rings. Two different UV-laser-based microscopic dissection systems were evaluated as novel sampling tools for high-resolution stable isotope analysis. Furthermore, an improved procedure for dissecting tree rings from thin cellulose laths for stable isotope analysis was designed. The most important findings of this thesis are: I) The novel sampling techniques presented herein improve stable isotope analysis for tree-ring studies in terms of precision, efficiency and quality. UV-laser-based microdissection serves as a valuable tool for sampling plant tissue at ultrahigh resolution and with unprecedented precision. II) A guideline for a modified method of cellulose extraction from whole-wood cross-sections and subsequent tree-ring dissection was established. The novel technique optimizes the stable isotope analysis process in two ways: faster, high-throughput cellulose extraction and precise tree-ring separation at annual to intra-annual resolution. III) The centennial tree-ring stable isotope records reveal significant correlations with regional precipitation. High-resolution stable oxygen values, furthermore, allow dry-season and rainy-season rainfall to be distinguished. IV) The δ18O record reveals significant correlations with different ENSO flavors and demonstrates the importance of considering ENSO flavors when interpreting palaeoclimatic data in the tropics. The findings of my dissertation show that seasonally resolved δ18O records from Indonesian teak trees are a valuable proxy for multi-centennial reconstructions of regional precipitation variability (monsoon signals) and large-scale ocean-atmosphere phenomena (ENSO) in the Indo-Pacific region.
Furthermore, the novel methodological achievements offer many unexplored avenues for multidisciplinary research in high-resolution palaeoclimatology.
Adopting a minimalist framework, the dissertation provides an analysis of the syntactic structure of comparatives, with special attention paid to the derivation of the subclause. The proposed account explains how the comparative subclause is connected to the matrix clause, how the subclause is formed in the syntax, and what additional processes contribute to its final structure. In addition, it casts light upon these problems in cross-linguistic terms and provides a model that allows for synchronic and diachronic differences. This also enables one to give a more adequate explanation for the phenomena found in English comparatives, since the properties of English structures can then be linked to general settings of the language and hence need no longer be considered idiosyncratic features of the grammar of English. First, the dissertation provides a unified analysis of degree expressions, relating the structure of comparatives to that of other degree constructions. It is shown that gradable adjectives are located within a degree phrase (DegP), which in turn projects a quantifier phrase (QP), and that these two functional layers are always present, irrespective of whether there is a phonologically visible element in them. Second, the dissertation presents a novel analysis of Comparative Deletion by reducing it to an overtness constraint holding on operators: in this way, it is reduced to morphological differences, and cross-linguistic variation is not conditioned by the postulation of an arbitrary parameter. Cross-linguistic differences are ultimately dependent on whether a language has overt operators equipped with the relevant features, [+compr] and [+rel]. Third, the dissertation provides an adequate explanation for the phenomenon of Attributive Comparative Deletion, as attested in English, by relating it to the regular mechanism of Comparative Deletion.
I assume that Attributive Comparative Deletion is not a universal phenomenon, and its presence in English can be conditioned by independent, more general rules, while the absence of such restrictions leads to its absence in other languages. Fourth, the dissertation accounts for certain phenomena related to diachronic changes, examining how the changes in the status of comparative operators led to changes in whether Comparative Deletion is attested in a given language: I argue that only operators without a lexical XP can be grammaticalised. The underlying mechanisms are essentially general economy principles and hence the processes are not language-specific or exceptional. Fifth, the dissertation accounts for optional ellipsis processes that play a crucial role in the derivation of typical comparative subclauses. These processes are not directly related to the structure of degree expressions and hence the elimination of the quantified expression from the subclause; nevertheless, they are shown to be in interaction with the mechanisms underlying Comparative Deletion or the absence thereof.
Protein-metal coordination complexes are well known as active centers in enzymatic catalysis and contribute to signal transduction, gas transport, and hormone function. Additionally, they are now known to contribute as load-bearing cross-links to the mechanical properties of several biological materials, including the jaws of Nereis worms and the byssal threads of marine mussels. The primary aim of this thesis work is to better understand the role of protein-metal cross-links in the mechanical properties of biological materials, using the mussel byssus as a model system. Specifically, the focus is on histidine-metal cross-links as sacrificial bonds in the fibrous core of the byssal thread (Chapter 4) and L-3,4-dihydroxyphenylalanine (DOPA)-metal bonds in the protective thread cuticle (Chapter 5).
Byssal threads are protein fibers, which mussels use to attach to various substrates at the seashore. These relatively stiff fibers have the ability to extend up to about 100 % strain, dissipating large amounts of mechanical energy from crashing waves, for example. Remarkably, following damage from cyclic loading, initial mechanical properties are subsequently recovered by a material-intrinsic self-healing capability. Histidine residues coordinated to transition metal ions in the proteins comprising the fibrous thread core have been suggested as reversible sacrificial bonds that contribute to self-healing; however, this remains to be substantiated in situ. In the first part of this thesis, the role of metal coordination bonds in the thread core was investigated using several spectroscopic methods. In particular, X-ray absorption spectroscopy (XAS) was applied to probe the coordination environment of zinc in Mytilus californianus threads at various stages during stretching and subsequent healing. Analysis of the extended X-ray absorption fine structure (EXAFS) suggests that tensile deformation of threads is correlated with the rupture of Zn-coordination bonds and that self-healing is connected with the reorganization of Zn-coordination bond topologies rather than the mere reformation of Zn-coordination bonds. These findings have interesting implications for the design of self-healing metallopolymers.
The byssus cuticle is a protective coating surrounding the fibrous thread core that is both as hard as an epoxy and extensible up to 100 % strain before cracking. It was shown previously that cuticle stiffness and hardness largely depend on the presence of Fe-DOPA coordination bonds. However, the byssus is known to concentrate a large variety of metals from seawater, some of which are also capable of binding DOPA (e.g. V). Therefore, the question arises whether natural variation of metal composition can affect the mechanical performance of the byssal thread cuticle. To investigate this hypothesis, nanoindentation and confocal Raman spectroscopy were applied to the cuticle of native threads, threads with metals removed (EDTA treated), and threads in which the metal ions in the native tissue were replaced by either Fe or V. Interestingly, replacement of metal ions with either Fe or V leads to the full recovery of native mechanical properties with no statistical difference between each other or the native properties. This likely indicates that a fixed number of metal coordination sites are maintained within the byssal thread cuticle – possibly achieved during thread formation – which may provide an evolutionarily relevant mechanism for maintaining reliable mechanics in an unpredictable environment.
While the dynamic exchange of bonds plays a vital role in the mechanical behavior and self-healing of the thread core by allowing the histidine-metal bonds to act as reversible sacrificial bonds, the compatibility of DOPA with other metals allows an inherent adaptability of the thread cuticle to changing circumstances. The requirements on both of these materials can be met by the dynamic nature of the protein-metal cross-links, whereas covalent cross-linking would fail to provide the adaptability of the cuticle and the self-healing of the core. In summary, these studies of the thread core and the thread cuticle serve to underline the important and dynamic roles of protein-metal coordination in the mechanical function of load-bearing protein fibers, such as the mussel byssus.
The role of flavonols and anthocyanins in the cold and UV-B acclimation of Arabidopsis thaliana (L.)
(2014)
Sex-specific differences in the regulation of body weight dynamics and adipose tissue metabolism
(2014)
An important contribution of geosciences to the renewable energy production portfolio is the exploration and utilization of geothermal resources. For the development of a geothermal project at great depths a detailed geological and geophysical exploration program is required in the first phase. With the help of active seismic methods, high-resolution images of the geothermal reservoir can be obtained. This allows potential transport routes for fluids to be identified, as well as regions with high potential of heat extraction to be mapped, which indicates favorable conditions for geothermal exploitation. The presented work investigates the extent to which an improved characterization of geothermal reservoirs can be achieved with new methods of seismic data processing. The summation of traces (stacking) is a crucial step in the processing of seismic reflection data. The common-reflection-surface (CRS) stacking method can be applied as an alternative for the conventional normal moveout (NMO) or the dip moveout (DMO) stack. The advantages of the CRS stack, besides an automatic determination of stacking operator parameters, include adequate imaging of arbitrarily curved geological boundaries and a significant increase in signal-to-noise (S/N) ratio by stacking far more traces than are used in a conventional stack. A major innovation shown in this work is that this modified type of stacking in particular can significantly improve the quality of the signal attributes that characterize the seismic images. Improved attribute analysis facilitates the interpretation of seismic images and plays a significant role in the characterization of reservoirs. Variations of lithological and petro-physical properties are reflected by fluctuations of specific signal attributes (e.g. frequency or amplitude characteristics).
Their further interpretation can provide a quality assessment of the geothermal reservoir with respect to the capacity of fluids within a hydrological system that can be extracted and utilized. The proposed methodological approach is demonstrated on the basis of two case studies. In the first example, I analyzed a series of 2D seismic profile sections through the Alberta sedimentary basin on the eastern edge of the Canadian Rocky Mountains. In the second application, a 3D seismic volume is characterized in the surroundings of a geothermal borehole located in the central part of the Polish basin. Both sites were investigated with the modified and improved stacking and attribute analyses. The results provide recommendations for the planning of future geothermal plants in both study areas.
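The conventional NMO stack that the CRS method improves upon can be sketched in a few lines. The following is a minimal illustration with a constant NMO velocity and synthetic data; the function name, parameters, and example values are assumptions made for illustration and are not taken from the thesis (which uses the more general CRS operator with automatically determined parameters).

```python
import numpy as np

def nmo_stack(gather, offsets, dt, v_nmo):
    """Apply a normal-moveout correction to a CMP gather and stack the traces.

    gather : 2D array (n_samples, n_traces), one trace per offset
    offsets: source-receiver offsets in metres
    dt     : sample interval in seconds
    v_nmo  : NMO velocity in m/s (assumed constant here for simplicity)
    """
    n_samples, n_traces = gather.shape
    t0 = np.arange(n_samples) * dt  # zero-offset two-way times
    corrected = np.zeros_like(gather)
    for j, x in enumerate(offsets):
        # hyperbolic travel time: t(x) = sqrt(t0^2 + x^2 / v^2)
        t_x = np.sqrt(t0**2 + (x / v_nmo) ** 2)
        # map each corrected sample back to the recorded trace (linear interp)
        corrected[:, j] = np.interp(t_x, t0, gather[:, j], left=0.0, right=0.0)
    # stacking: summing traces reinforces coherent reflections and
    # suppresses incoherent noise, raising the S/N ratio
    return corrected.sum(axis=1) / n_traces
```

With a synthetic reflection at 0.4 s zero-offset time and the correct velocity, the stacked trace peaks at the zero-offset sample while noise averages out; the CRS stack generalizes this single-parameter hyperbola to a multi-parameter surface and sums many more traces.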
The mystery of the origin of cosmic rays has been tackled for more than a hundred years and is still not solved. Cosmic rays are detected with energies spanning more than 10 orders of magnitude and reaching energies up to ~10²¹ eV, far higher than any man-made accelerator can reach. Different theories on the astrophysical objects and processes creating such highly energetic particles have been proposed.
A very prominent explanation for a process producing highly energetic particles is shock acceleration. The observation of high-energy gamma rays from supernova remnants, some of them revealing a shell-like structure, is clear evidence that particles are accelerated to ultrarelativistic energies in the shocks of these objects. The environments of supernova remnants are complex and challenge detailed modelling of the processes leading to high-energy gamma-ray emission.
The study of shock acceleration at bow shocks, created by the supersonic movement of individual stars through the interstellar medium, offers a unique possibility to determine the physical properties of shocks in a less complex environment. The shocked medium is heated by the stellar and the shock-excited radiation, leading to thermal infrared emission. Twenty-eight bow shocks have been discovered through their infrared emission. Nonthermal radiation at radio and X-ray wavelengths has been detected from two bow shocks, pointing to the existence of relativistic particles in these systems. Theoretical models of the emission processes predict high-energy and very high-energy emission at a flux level in reach of current instruments. This work presents the search for gamma-ray emission from bow shocks of runaway stars in the energy regime from 100 MeV to ~100 TeV.
The search is performed with the Large Area Telescope (LAT) on board the Fermi satellite and the H.E.S.S. telescopes located in the Khomas Highland in Namibia. The Fermi-LAT was launched in 2008 and has been continuously scanning the sky ever since. It detects photons with energies from 20 MeV to over 300 GeV and has an unprecedented sensitivity. The all-sky coverage allows us to study all 28 bow shocks of runaway stars listed in the E-BOSS catalogue of infrared bow shocks. No significant emission was detected from any of the objects, although it is predicted by several theoretical models describing the non-thermal emission of bow shocks of runaway stars.
The H.E.S.S. experiment is the most sensitive system of imaging atmospheric Cherenkov telescopes. It detects photons from several tens of GeV to ~100 TeV. Seven of the bow shocks have been observed with H.E.S.S. and the data analysis is presented in this thesis. The analyses of the very-high-energy data did not reveal significant emission from any of the sources either.
This work presents the first systematic search for gamma-ray emission from bow shocks of runaway stars. For the first time, Fermi-LAT data were specifically analysed to reveal emission from bow shocks of runaway stars. In the TeV regime, no searches for emission from these objects have been published so far; the study presented here is the first in this energy regime. The level of the gamma-ray emission from bow shocks of runaway stars is constrained by the calculated upper limits over six orders of magnitude in energy.
The upper limits calculated for the bow shocks of runaway stars in the course of this work constrain several models. For the best candidate, ζ Ophiuchi, the upper limits in the Fermi-LAT energy range are lower than the predictions by a factor of ~5. This challenges the assumptions made in this model and gives valuable input for further modelling approaches.
The analyses were performed with the software packages provided by the H.E.S.S. and Fermi collaborations. The development of a unified analysis framework for gamma-ray data, namely GammaLib/ctools, is rapidly progressing within the CTA consortium. Recent implementations and cross-checks with current software frameworks are presented in the Appendix.
Following the principles of green chemistry, a simple and efficient synthesis of functionalised imidazolium zwitterionic (ImZw) compounds from renewable resources was developed based on a modified one-pot Debus-Radziszewski reaction. The combination of different carbohydrate-derived 1,2-dicarbonyl compounds and amino acids is a simple way to modulate the properties and introduce different functionalities. A representative compound was assessed as an acid catalyst, and converted into acidic ionic liquids by reaction with several strong acids. The reactivity of the double carboxylic functionality was explored by esterification with long and short chain alcohols, as well as functionalised amines, which led to the straightforward formation of surfactant-like molecules or bifunctional esters and amides. One of these di-esters is currently being investigated for the synthesis of poly(ionic liquids). The functionalisation of cellulose with one of the bifunctional esters was investigated and preliminary tests employing it for the functionalisation of filter papers were carried out successfully. The imidazolium zwitterions were converted into ionic liquids via hydrothermal decarboxylation in flow, a benign and scalable technique. This method provides access to imidazolium ionic liquids via a simple and sustainable methodology, whilst completely avoiding contamination with halide salts. Different ionic liquids can be generated depending on the functionality contained in the ImZw precursor. Two alanine-derived ionic liquids were assessed for their physicochemical properties and applications as solvents for the dissolution of cellulose and the Heck coupling.
Deciphering the functioning of biological networks is one of the central tasks in systems biology. In particular, signal transduction networks are crucial for the understanding of the cellular response to external and internal perturbations. Importantly, in order to cope with the complexity of these networks, mathematical and computational modeling is required. We propose a computational modeling framework in order to achieve more robust discoveries in the context of logical signaling networks. More precisely, we focus on modeling the response of logical signaling networks by means of automated reasoning using Answer Set Programming (ASP). ASP provides a declarative language for modeling various knowledge representation and reasoning problems. Moreover, available ASP solvers provide several reasoning modes for assessing the multitude of answer sets. Therefore, leveraging its rich modeling language and its highly efficient solving capacities, we use ASP to address three challenging problems in the context of logical signaling networks: learning of (Boolean) logical networks, experimental design, and identification of intervention strategies. Overall, the contribution of this thesis is three-fold. Firstly, we introduce a mathematical framework for characterizing and reasoning on the response of logical signaling networks. Secondly, we contribute to a growing list of successful applications of ASP in systems biology. Thirdly, we present software providing a complete pipeline for automated reasoning on the response of logical signaling networks.
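The kind of Boolean logical model whose response the thesis reasons about can be illustrated with a toy network. The node names and logic rules below are invented purely for illustration, and the computation is written in plain Python rather than ASP; the thesis encodes such models declaratively and lets an ASP solver enumerate answer sets instead of iterating explicitly.

```python
# A toy Boolean signalling network: each node's next state is a logic
# function of its regulators. Names and rules are hypothetical.
RULES = {
    "raf":  lambda s: s["egf"],                   # receptor signal activates raf
    "erk":  lambda s: s["raf"] and not s["inh"],  # erk needs raf, blocked by inhibitor
    "gene": lambda s: s["erk"],                   # downstream readout
}

def fixpoint(inputs, max_steps=50):
    """Synchronously update all nodes until the state stops changing.

    inputs: dict fixing the experimentally controlled nodes (e.g. egf, inh).
    Returns the stable state (fixed point) reached from the all-False start.
    """
    state = {n: False for n in RULES} | dict(inputs)
    for _ in range(max_steps):
        new = {n: f(state) for n, f in RULES.items()} | dict(inputs)
        if new == state:
            return state
        state = new
    raise RuntimeError("no fixed point reached (network may oscillate)")
```

For example, `fixpoint({"egf": True, "inh": False})["gene"]` is `True`, while adding the inhibitor (`"inh": True`) switches the readout off; learning such rules from perturbation data is the first of the three ASP problems addressed.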
Pulsar wind nebulae (PWNe) are the most abundant TeV gamma-ray emitters in the Milky Way. The radiative emission of these objects is powered by fast-rotating pulsars, which donate part of their rotational energy to winds of relativistic particles. This thesis presents an in-depth study of the detected population of PWNe at high energies. To outline general trends regarding their evolutionary behaviour, a time-dependent model is introduced and compared to the available data. In particular, this work presents two exceptional PWNe which protrude from the rest of the population, namely the Crab Nebula and N 157B. Both objects are driven by pulsars with extremely high rotational energy loss rates. Accordingly, they are often referred to as energetic twins. Modelling the non-thermal multi-wavelength emission of N 157B gives access to specific properties of this object, like the magnetic field inside the nebula. Comparing the derived parameters to those of the Crab Nebula reveals large intrinsic differences between the two PWNe. Possible origins of these differences are discussed in context of the resembling pulsars.
Compared to the TeV gamma-ray regime, the number of detected PWNe is much smaller in the MeV-GeV gamma-ray range. In the latter range, the Crab Nebula stands out by the recent detection of gamma-ray flares. In general, the measured flux enhancements on short time scales of days to weeks were not expected in the theoretical understanding of PWNe. In this thesis, the variability of the Crab Nebula is analysed using data from the Fermi Large Area Telescope (Fermi-LAT). For the presented analysis, a new gamma-ray reconstruction method is used, providing a higher sensitivity and a lower energy threshold compared to previous analyses. The derived gamma-ray light curve of the Crab Nebula is investigated for flares and periodicity. The detected flares are analysed regarding their energy spectra, and their variety and commonalities are discussed. In addition, a dedicated analysis of the flare which occurred in March 2013 is performed. The derived short-term variability time scale is roughly 6 h, implying that a small region inside the Crab Nebula is responsible for the enigmatic flares. The most promising theories explaining the origins of the flux eruptions and gamma-ray variability are discussed in detail.
In the technical part of this work, a new analysis framework is presented. The introduced software, called gammalib/ctools, is currently being developed for the future CTA observatory. The analysis framework is extensively tested using data from the H.E.S.S. experiment. To conduct proper data analysis in the likelihood framework of gammalib/ctools, a model describing the distribution of background events in H.E.S.S. data is presented. The software provides the infrastructure to combine data from several instruments in one analysis. To study the gamma-ray emitting PWN population, data from Fermi-LAT and H.E.S.S. are combined in the likelihood framework of gammalib/ctools. In particular, the spectral peak, which usually lies in the overlap energy regime between these two instruments, is determined with the presented analysis framework. The derived measurements are compared to the predictions from the time-dependent model. The combined analysis supports the conclusion of a diverse population of gamma-ray emitting PWNe.
Polypeptoid block copolymers
(2014)
In the presented thesis, the most advanced photon reconstruction technique of ground-based γ-ray astronomy is adapted to the H.E.S.S. 28 m telescope. The method is based on a semi-analytical model of electromagnetic particle showers in the atmosphere. The properties of cosmic γ-rays are reconstructed by comparing the camera image of the telescope with the Cherenkov emission that is expected from the shower model. To suppress the dominant background from charged cosmic rays, events are selected based on several criteria. The performance of the analysis is evaluated with simulated events. The method is then applied to two sources that are known to emit γ-rays. The first of these is the Crab Nebula, the standard candle of ground-based γ-ray astronomy. The results of this source confirm the expected performance of the reconstruction method, where the much lower energy threshold compared to H.E.S.S. I is of particular importance. A second analysis is performed on the region around the Galactic Centre. The analysis results emphasise the capabilities of the new telescope to measure γ-rays in an energy range that is interesting for both theoretical and experimental astrophysics. The presented analysis features the lowest energy threshold that has ever been reached in ground-based γ-ray astronomy, opening a new window to the precise measurement of the physical properties of time-variable sources at energies of several tens of GeV.
Permafrost, defined as ground that is frozen for at least two consecutive years, is a distinct feature of the terrestrial unglaciated Arctic. It covers approximately one quarter of the land area of the Northern Hemisphere (23,000,000 km²). Arctic landscapes, especially those underlain by permafrost, are threatened by climate warming and may degrade in different ways, including active layer deepening, thermal erosion, and development of rapid thaw features. In Siberian and Alaskan late Pleistocene ice-rich Yedoma permafrost, rapid and deep thaw processes (called thermokarst) can mobilize deep organic carbon (below 3 m depth) by surface subsidence due to loss of ground ice. Increased permafrost thaw could cause a feedback loop of global significance if its stored frozen organic carbon is reintroduced into the active carbon cycle as greenhouse gases, which would accelerate warming and induce more permafrost thaw and carbon release. To assess this concern, the major objective of the thesis was to enhance the understanding of the origin of Yedoma as well as to assess the associated organic carbon pool size and carbon quality (concerning degradability). The key research questions were:
- How did Yedoma deposits accumulate?
- How much organic carbon is stored in the Yedoma region?
- What is the susceptibility of the Yedoma region's carbon for future decomposition?
To address these three research questions, an interdisciplinary approach was applied, including detailed field studies and sampling in Siberia and Alaska as well as methods of sedimentology, organic biogeochemistry, remote sensing, statistical analyses, and computational modeling. To provide a panarctic context, this thesis additionally includes results both from a newly compiled northern circumpolar carbon database and from a model assessment of carbon fluxes in a warming Arctic.
The Yedoma samples show a homogeneous grain-size composition. All samples were poorly sorted with a multi-modal grain-size distribution, indicating various (re-) transport processes. This contradicts the popular pure loess deposition hypothesis for the origin of Yedoma permafrost. The absence of large-scale grinding processes via glaciers and ice sheets in northeast Siberian lowlands, processes which are necessary to create loess as material source, suggests the polygenetic origin of Yedoma deposits.
Based on the largest available data set of the key parameters, including organic carbon content, bulk density, ground ice content, and deposit volume (thickness and coverage) from Siberian and Alaskan study sites, this thesis further shows that deep frozen organic carbon in the Yedoma region consists of two distinct major reservoirs, Yedoma deposits and thermokarst deposits (formed in thaw-lake basins). Yedoma deposits contain ~80 Gt and thermokarst deposits ~130 Gt organic carbon, or a total of ~210 Gt. Depending on the approach used for calculating uncertainty, the range for the total Yedoma region carbon store is ±75 % and ±20 % for conservative single and multiple bootstrapping calculations, respectively. Despite the fact that these findings reduce the Yedoma region carbon pool by nearly a factor of two compared to previous estimates, this frozen organic carbon is still capable of inducing a permafrost carbon feedback to climate warming. The complete northern circumpolar permafrost region contains between 1100 and 1500 Gt organic carbon, of which ~60 % is perennially frozen and decoupled from the short-term carbon cycle.
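The multiple-bootstrapping uncertainty estimate mentioned above can be sketched as follows. The per-site carbon densities and deposit volume below are invented placeholder numbers, not the thesis data; the sketch only shows the mechanics of resampling sites with replacement and reading a percentile interval off the resampled stock estimates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-site organic carbon densities (kg C per m^3) and an
# assumed deposit volume; the real study uses measured Yedoma and
# thermokarst field data and separate thickness/coverage estimates.
carbon_density = np.array([18.0, 22.5, 15.3, 30.1, 25.7, 19.9, 27.4, 21.2])
volume_km3 = 10_000.0

def bootstrap_stock(values, volume_km3, n_boot=10_000):
    """Bootstrap the regional carbon stock: resample the sites with
    replacement, recompute the mean density each time, and scale by the
    deposit volume. Returns (point estimate, 2.5 %, 97.5 %) in Gt C."""
    to_gt = volume_km3 * 1e9 / 1e12  # kg m^-3 * km^3 -> Gt (1 km^3 = 1e9 m^3)
    boot_means = np.array([
        rng.choice(values, size=values.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    point = values.mean() * to_gt
    lo, hi = np.percentile(boot_means * to_gt, [2.5, 97.5])
    return point, lo, hi
```

Repeating this separately for Yedoma and thermokarst deposits, and propagating the volume uncertainty as well, yields interval widths like the ±20 % quoted above for the multiple-bootstrapping approach.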
When the organic matter is thawed and reintroduced into the active carbon cycle, its quality becomes relevant. Results from investigations into Yedoma and thermokarst organic matter quality showed that Yedoma and thermokarst organic matter exhibit no depth-dependent quality trend. This is evidence that, after freezing, the ancient organic matter is preserved in a state of constant quality. The applied alkane- and fatty-acid-based biomarker proxies, including the carbon-preference and the higher-land-plant-fatty-acid indices, show a broad range of organic matter quality and thus no significantly different qualities of the organic matter stored in thermokarst deposits compared to Yedoma deposits. This lack of quality differences shows that the organic matter biodegradability depends on different decomposition trajectories and the previous decomposition/incorporation history. Finally, the fate of the organic matter has been assessed by implementing deep carbon pools and thermokarst processes in a permafrost carbon model. Under various warming scenarios for the northern circumpolar permafrost region, model results show a carbon release from permafrost regions of up to ~140 Gt and ~310 Gt by the years 2100 and 2300, respectively. The additional warming caused by the carbon release from newly thawed permafrost contributes 0.03 to 0.14°C by the year 2100. The model simulations predict that a further increase by the 23rd century will add 0.4°C to global mean surface air temperatures.
In conclusion, Yedoma deposit formation during the late Pleistocene was dominated by water-related (alluvial/fluvial/lacustrine) as well as aeolian processes under periglacial conditions. The circumarctic permafrost region, including the Yedoma region, contains a substantial amount of currently frozen organic carbon. The carbon of the Yedoma region is well-preserved and therefore available for decomposition after thaw. A missing quality-depth trend shows that permafrost preserves the quality of ancient organic matter. When the organic matter is mobilized by deep degradation processes, the northern permafrost region may add up to 0.4°C to the global warming by the year 2300.
It is generally agreed upon that stars typically form in open clusters and stellar associations, but little is known about the structure of the open cluster system. Do open clusters and stellar associations form in isolation, or do they prefer to form in groups and complexes? Open cluster groups and complexes could verify star forming regions to be larger than expected, which would explain the chemical homogeneity over large areas in the Galactic disk. They would also define an additional level in the hierarchy of star formation and could be used as tracers for the scales of fragmentation in giant molecular clouds. Furthermore, open cluster groups and complexes could affect Galactic dynamics and should be considered in investigations and simulations of dynamical processes, such as radial migration, disc heating, differential rotation, kinematic resonances, and spiral structure.
In the past decade there were a few studies on open cluster pairs (de La Fuente Marcos & de La Fuente Marcos 2009a,b,c) and on open cluster groups and complexes (Piskunov et al. 2006). The former only considered spatial proximity for the identification of the pairs, while the latter also required tangential velocities to be similar for the members. In this work I used the full set of 6D phase-space information to draw a more detailed picture of these structures. For this purpose I utilised the most homogeneous cluster catalogue available, namely the Catalogue of Open Cluster Data (COCD; Kharchenko et al. 2005a,b), which contains parameters for 650 open clusters and compact associations, as well as for their uniformly selected members. Additional radial velocity (RV) and metallicity ([M/H]) information on the members was obtained from the RAdial Velocity Experiment (RAVE; Steinmetz et al. 2006; Kordopatis et al. 2013) for 110 and 81 clusters, respectively. The RAVE sample was cleaned considering quality parameters and flags provided by RAVE (Matijevič et al. 2012; Kordopatis et al. 2013). To ensure that only real members were included for the mean values, the cluster membership, as provided by Kharchenko et al. (2005a,b), was also considered for the stars cross-matched in RAVE.
6D phase-space information could be derived for 432 out of the 650 COCD objects, and I used an adaptation of the Friends-of-Friends algorithm, as used in cosmology, to identify potential groupings. The vast majority of the 19 identified groupings were pairs, but I also found four groups of 4-5 members and one complex with 15 members. For the verification of the identified structures, I compared the results to a randomly selected subsample of the catalogue of the Milky Way global survey of Star Clusters (MWSC; Kharchenko et al. 2013), which recently became available and was used as a reference sample. Furthermore, I implemented Monte-Carlo simulations with randomised samples created from two distinct input distributions for the spatial and velocity parameters: on the one hand, assuming a uniform distribution in the Galactic disc and, on the other hand, assuming the COCD data distributions to be representative of the whole open cluster population.
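The Friends-of-Friends grouping can be illustrated in a few lines. The sketch below links points in plain 3D positions with a single linking length; the thesis applies the same idea to full 6D phase-space data, where only the linking criterion (a combined spatial and velocity separation) would change.

```python
import numpy as np

def friends_of_friends(points, linking_length):
    """Group points into clusters: two points are 'friends' if they are
    closer than the linking length, and the groups are the connected
    components of the friendship graph (found here via union-find)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < linking_length:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Two tight clumps and an isolated point, for instance, come back as three groups; the choice of linking length plays the same role here as the separation thresholds calibrated against the MWSC reference sample and the Monte-Carlo randomisations.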
The results suggested that the majority of identified pairs are rather chance alignments, but the groups and the complex seemed to be genuine. A comparison of my results to the pairs, groups and complexes proposed in the literature yielded a partial overlap, most likely because of selection effects and the different parameters considered. This provides further verification of the existence of such structures.
The characteristics of the found groupings favour a scenario in which members of an open cluster grouping originate from a common giant molecular cloud and formed in a single, but possibly sequential, star formation event. Moreover, the fact that the young open cluster population showed smaller spatial separations between nearest neighbours than the old cluster population indicates that the lifetime of open cluster groupings is most likely comparable to that of the Galactic open cluster population itself. Still, even among the old open clusters I could identify groupings, which suggests that the detected structures could in some cases be longer lived than one might think.
In this thesis I could only present a pilot study on structures in the Galactic open cluster population, since the data sample used was highly incomplete. For further investigations a far more complete sample would be required. One step in this direction would be to use data from large current surveys, like SDSS, RAVE, Gaia-ESO and VVV, as well as including results from studies on individual clusters. Later the sample can be completed by data from upcoming missions, like Gaia and 4MOST. Future studies using this more complete open cluster sample will reveal the effect of open cluster groupings on star formation theory and their significance for the kinematics, dynamics and evolution of the Milky Way, and thereby of spiral galaxies.
Monoclonal antibodies (mAbs) are engineered immunoglobulins G (IgG) used for more than 20 years as targeted therapy in oncology, infectious diseases and (auto-)immune disorders. Their protein nature greatly influences their pharmacokinetics (PK), presenting typical linear and non-linear behaviors.
While it is common to use empirical modeling to analyze clinical PK data of mAbs, there is neither clear consensus nor guidance on how, on the one hand, to select the structure of classical compartment models and, on the other hand, to mechanistically interpret PK parameters. The mechanistic knowledge present in physiologically-based PK (PBPK) models is likely to support rational classical model selection and thus, a methodology to link empirical and PBPK models is desirable. However, published PBPK models for mAbs are quite diverse with respect to the physiology of distribution spaces and the parameterization of the non-specific elimination involving the neonatal Fc receptor (FcRn) and endogenous IgG (IgGendo). The remarkable discrepancy between the simplicity of biodistribution data and the complexity of published PBPK models translates into parameter identifiability issues.
In this thesis, we address this problem with a simplified PBPK model, derived from a hierarchy of more detailed PBPK models and based on simplifications of the tissue distribution model. With the novel tissue model, we break new ground in mechanistic modeling of mAb disposition: We demonstrate that binding to FcRn is indeed linear and that it is not possible to infer which tissues are involved in the unspecific elimination of wild-type mAbs. We also provide a new approach to predict tissue partition coefficients based on mechanistic insights: We directly link tissue partition coefficients (Ktis) to data-driven, species-independent published antibody biodistribution coefficients (ABCtis) and thus ensure extrapolation from pre-clinical species to human with the simplified PBPK model. We further extend the simplified PBPK model to account for a target, which is relevant for characterizing the non-linear clearance due to mAb-target interaction.
With model reduction techniques, we reduce the dimensionality of the simplified PBPK model to design 2-compartment models, thus guiding classical model development with a physiological and mechanistic interpretation of the PK parameters. We finally derive a new scaling approach for anatomical and physiological parameters in PBPK models that translates inter-individual variability into the design of mechanistic covariate models with a direct link to classical compartment models, which is especially useful for population PK analysis during clinical development.
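The 2-compartment models referred to above have a standard structural form. As a rough illustration (the function name and all parameter values below are invented for this sketch and are not taken from the thesis), an IV-bolus 2-compartment model with linear clearance can be simulated by simple forward integration:

```python
import numpy as np

def two_compartment(dose, CL, V1, Q, V2, t_end=100.0, dt=0.01):
    """Simulate IV-bolus kinetics of a generic 2-compartment model.

    dA1/dt = -(CL/V1)*A1 - (Q/V1)*A1 + (Q/V2)*A2   (central)
    dA2/dt =  (Q/V1)*A1 - (Q/V2)*A2                (peripheral)

    Returns the time grid and the central concentration A1/V1.
    """
    n = int(t_end / dt)
    A1, A2 = dose, 0.0
    t = np.linspace(0.0, t_end, n + 1)
    conc = np.empty(n + 1)
    conc[0] = A1 / V1
    for i in range(1, n + 1):
        dA1 = -(CL / V1) * A1 - (Q / V1) * A1 + (Q / V2) * A2
        dA2 = (Q / V1) * A1 - (Q / V2) * A2
        A1 += dt * dA1
        A2 += dt * dA2
        conc[i] = A1 / V1
    return t, conc

# Illustrative mAb-like parameters (L and L/day; NOT values from the thesis)
t, c = two_compartment(dose=150.0, CL=0.2, V1=3.0, Q=0.5, V2=2.5)
```

For a central bolus, the resulting concentration profile is the familiar bi-exponential decline whose macro-constants a population PK analysis would estimate.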
The subsurface upper Palaeozoic sedimentary successions of the Loppa High half-graben and the Finnmark platform in the Norwegian Barents Sea (southwest Barents Sea) were investigated using 2D/3D seismic datasets combined with well and core data. These sedimentary successions represent a case of mixed siliciclastic-carbonate depositional systems, which formed during the earliest phase of the Atlantic rifting between Greenland and Norway. During the Carboniferous and Permian, the southwest part of the Barents Sea was located along the northern margin of Pangaea, which drifted northward at a speed of ~2–3 mm per year. This gradual shift in paleolatitudinal position is reflected by changes in regional climatic conditions: from warm-humid in the early Carboniferous, to warm-arid in the middle to late Carboniferous, and finally to colder conditions in the late Permian. Such changes in paleolatitude and climate resulted in major changes in the style of sedimentation, including variations in the type of carbonate factories. The upper Palaeozoic sedimentary succession is composed of four major depositional units comprising, in chronological order, the Billefjorden Group, dominated by siliciclastic deposition in extensional tectonically controlled wedges; the Gipsdalen Group, dominated by warm-water carbonates, stacked buildups and evaporites; the Bjarmeland Group, characterized by cool-water carbonates and the presence of buildup networks; and the Tempelfjorden Group, characterized by fine-grained sedimentation dominated by biological silica production. In the Loppa High, the integration of a core study with multi-attribute seismic facies classification made it possible to highlight the main sedimentary unconformities and to map the spatial extent of a buried paleokarst terrain. This geological feature is interpreted to have formed during a protracted episode of subaerial exposure between the late Palaeozoic and middle Triassic.
Based on seismic sequence stratigraphic analysis, the palaeogeography of the Loppa High basin was furthermore reconstructed in time and space, and a new, more detailed tectono-sedimentary model for this area was proposed. In the Finnmark platform area, a detailed core analysis of two main exploration wells, combined with key 2D seismic sections located along the main depositional profile, allowed the evaluation of depositional scenarios for the two main lithostratigraphic units: the Ørn Formation (Gipsdalen Group) and the Isbjørn Formation (Bjarmeland Group). During the mid-Sakmarian, two major changes were observed between the two formations: (1) a variation in the type of carbonate factories, which is interpreted to be depth-controlled, and (2) a change in platform morphology, which evolved from a distally steepened ramp to a homoclinal ramp. The results of this study may help support future reservoir characterization of the upper Palaeozoic units in the Barents Sea, particularly in the Loppa High half-graben and the Finnmark platform area.
Large-scale floodplain sediment dynamics in the Mekong Delta : present state and future prospects
(2014)
The Mekong Delta (MD) sustains the livelihood and food security of millions of people in Vietnam and Cambodia. It is known as the “rice bowl” of South East Asia and has one of the world’s most productive fisheries. Sediment dynamics play a major role in the high productivity of agriculture and fishery in the delta. However, the MD is threatened by climate change, sea level rise and unsustainable development activities in the Mekong Basin. Despite its importance and the expected threats, the understanding of present and future sediment dynamics in the MD is very limited. This is a consequence of its large extent, the intricate system of rivers, channels and floodplains, and the scarcity of observations. This thesis therefore aimed at (1) the quantification of suspended sediment dynamics and associated sediment-nutrient deposition in floodplains of the MD, and (2) the assessment of the impacts of likely future boundary changes on the sediment dynamics in the MD. The applied methodology combines field experiments and numerical simulation to quantify and predict the sediment dynamics in the entire delta in a spatially explicit manner. The experimental part consists of a comprehensive procedure to monitor the quantity and spatial variability of sediment and associated nutrient deposition in large and complex river floodplains, including an uncertainty analysis. The measurement campaign deployed 450 sediment mat traps in 19 floodplains over the MD for a complete flood season. The data also supports the quantification of nutrient deposition in floodplains, based on laboratory analysis of the nutrient fractions of the trapped sediment. The main findings are that the distribution of grain size and nutrient fractions of suspended sediment is homogeneous over the Vietnamese floodplains, but that sediment deposition within and between ring-dike floodplains shows very high spatial variability due to a high level of human interference.
The experimental findings provide the essential data for the setup and calibration of a large-scale sediment transport model for the MD. For the simulation studies, a large-scale hydrodynamic model was developed in order to quantify large-scale floodplain sediment dynamics. The complex river-channel-floodplain system of the MD is described by a quasi-2D model linking a hydrodynamic and a cohesive sediment transport model. The floodplains are described as quasi-2D representations linked to rivers and channels modeled in 1D via control structures. The model setup, based on the experimental findings, ignored erosion and re-suspension processes due to the very high degree of human interference during the flood season. A two-stage calibration with six objective functions was developed in order to calibrate both the hydrodynamic and sediment transport modules. The objective functions include hydraulic and sediment transport parameters in the main rivers, channels and floodplains. The model results show, for the first time, the spatio-temporal distribution of sediment and associated nutrient deposition rates in the whole MD. The patterns of sediment transport and deposition are quantified for the different sub-systems. The main factors influencing spatial sediment dynamics are the network of rivers, channels and dike rings, sluice gate operations, the magnitude of the floods, and tidal influences. The superposition of these factors leads to high spatial variability of sediment transport and deposition, in particular in the Vietnamese floodplains. Depending on the flood magnitude, annual sediment loads reaching the coast vary from 48% to 60% of the sediment load at Kratie, the upper boundary of the MD. Deposited sediment varies from 19% to 23% of the annual load at Kratie in Cambodian floodplains, and from 1% to 6% in the compartmented and diked floodplains in Vietnam.
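The six objective functions used in the two-stage calibration are not detailed in this summary. As a generic, assumption-laden illustration only, calibration of hydrodynamic and sediment modules against observed series is often scored with the Nash-Sutcliffe efficiency, which could look like this:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than the observed mean, negative values are worse."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy example: simulated water levels vs. observations (invented numbers)
score = nse([1.2, 2.5, 3.1, 2.0], [1.0, 2.6, 3.0, 2.2])
```

A multi-objective calibration would evaluate one such score per variable (e.g. water level, discharge, suspended sediment concentration, deposition) and per sub-system.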
Annually deposited nutrients (N, P, K) associated with the sediment deposition provide on average more than 50% of the mineral fertilizers typically applied for rice crops in non-flooded ring-dike compartments in Vietnam. This large-scale quantification provides a basis for estimating the benefits of the annual Mekong floods for agriculture and fishery, for assessing the impacts of future changes on the delta system, and for further studies on coastal deposition/erosion. For the estimation of future prospects, a sensitivity-based approach is applied to assess the response of floodplain hydraulics and sediment dynamics to changes in the delta boundaries, including hydropower development, climate change in the Mekong River Basin, and effective sea level rise. The developed sediment model is used to simulate the mean sediment transport and sediment deposition in the whole delta system for the baseline (2000-2010) and future (2050-2060) periods. For each driver we derive a plausible range of future changes and discretize it into five levels, resulting in altogether 216 possible factor combinations. Our results thus cover all plausible future pathways of sediment dynamics in the delta based on current knowledge. The uncertainty of the range of the resulting impacts can be decreased once more information on these drivers becomes available. Our results indicate that hydropower development dominates the changes in sediment dynamics of the Mekong Delta, while sea level rise has the smallest effect. The floodplains of the Vietnamese Mekong Delta are much more sensitive to the changes than the other subsystems of the delta. In terms of the median changes of the three combined drivers, the inundation extent is predicted to increase slightly, but the overall floodplain sedimentation would be reduced by approximately 40%, while the sediment load reaching the sea would diminish to half of the current rates.
These findings provide new and valuable information on the possible impacts of future development on the delta, and indicate its most vulnerable areas. The presented results are thus a significant contribution to the ongoing international discussion on hydropower development in the Mekong Basin and its impact on the Mekong Delta.
The purpose of this thesis is to develop an automated inversion scheme to derive point and finite source parameters for weak earthquakes, here meaning earthquakes with magnitudes at or below the lower magnitude threshold of standard source inversion routines. The adopted inversion approaches rely entirely on existing inversion software, the methodological work mostly targeting the development and tuning of optimized inversion flows. The resulting inversion scheme is tested on very different datasets, which allows a discussion of the source inversion problem at different scales. The first application, dealing with mining-induced seismicity, addresses source parameter determination at a local scale, with source-sensor distances of less than 3 km. In this context, weak seismicity corresponds to events below magnitude MW 2.0, which are rarely targeted by automated source inversion routines. The second application considers a regional dataset, namely the aftershock sequence of the 2010 Maule earthquake (Chile), using broadband stations at regional distances below 300 km. In this case, the magnitudes of the target aftershocks range down to MW 4.0. This dataset is considered here as a weak seismicity case, since such moderate seismicity is generally investigated only by moment tensor inversion routines, with no attempt to resolve source duration or finite source parameters. In this work, automated multi-step inversion schemes are applied to both datasets with the aim of resolving point source parameters, using both double couple (DC) and full moment tensor (MT) models, as well as source duration and finite source parameters. A major result of the analysis of weaker events is the increased size of the resulting moment tensor catalogues, whose interpretation may become non-trivial.
For this reason, a novel clustering approach is used to automatically classify focal mechanisms, allowing the investigation of the most relevant and repetitive rupture features. The inversion of the mining-induced seismicity dataset reveals the repeated occurrence of similar rupture processes, where the source geometry is controlled by the shape of the mined panel. Moreover, moment tensor solutions indicate a significant contribution of tensile processes. The second application also highlights characteristic geometrical features of the fault planes, which are generally consistent with the orientation of the slab. The additional inversion for source duration made it possible to verify, for moment-normalized earthquakes in subduction zones, the empirical correlation between decreasing rupture duration and increasing source depth, which had so far only been observed for larger events.
Inferring gene regulatory networks and cellular phases from time-resolved transcriptomics data
(2014)
Linked Open Data (LOD) comprises very many, often large, public data sets and knowledge bases. These datasets are mostly represented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, is desirable and ideally provided by an ontology. However, in the context of LOD, ontologies are often incomplete or simply not available. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, which was originally applied for sales analysis on transactional databases, is a promising and novel technique to explore such data. We designed an adaptation of this technique for mining RDF data and introduce the concept of “mining configurations”, which allows us to mine RDF data sets in various ways. Different configurations enable us to identify schema and value dependencies that, in combination, result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage by providing an editing user with a sorted list of commonly used predicates. A combination of different configurations extends this approach to create completely new facts for a knowledge base. We present two approaches for fact generation: a user-based approach, where a user selects the entity to be amended with new facts, and a data-driven approach, where an algorithm discovers entities that need to be amended with missing facts. As knowledge bases constantly grow and evolve, another way to improve the usage of RDF data is to improve existing ontologies.
Here, we present an association-rule-based approach to reconcile ontology and data. Interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. These predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world datasets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining configuration methodology.
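The mining-configuration idea can be illustrated on toy data: one configuration treats each subject as a transaction and its predicates as items, so that frequent predicate co-occurrences become rules usable for auto-completion. The following sketch is a simplified toy version (invented triples, naive pairwise counting), not the thesis's actual algorithm:

```python
from collections import defaultdict
from itertools import combinations

# Toy RDF triples: (subject, predicate, object)
triples = [
    ("Berlin", "type", "City"), ("Berlin", "country", "Germany"),
    ("Berlin", "population", "3645000"),
    ("Paris", "type", "City"), ("Paris", "country", "France"),
    ("Paris", "population", "2161000"),
    ("Rhine", "type", "River"), ("Rhine", "length", "1233"),
]

# "Subject" mining configuration: one transaction per subject,
# the items are the predicates used with that subject.
transactions = defaultdict(set)
for s, p, _ in triples:
    transactions[s].add(p)

def rules(transactions, min_conf=0.8):
    """Return predicate rules a -> b with confidence >= min_conf."""
    n_single = defaultdict(int)   # how many subjects use predicate p
    n_pair = defaultdict(int)     # how many subjects use both of a pair
    for items in transactions.values():
        for p in items:
            n_single[p] += 1
        for a, b in combinations(sorted(items), 2):
            n_pair[(a, b)] += 1
    out = {}
    for (a, b), n in n_pair.items():
        if n / n_single[a] >= min_conf:
            out[(a, b)] = n / n_single[a]   # rule a -> b
        if n / n_single[b] >= min_conf:
            out[(b, a)] = n / n_single[b]   # rule b -> a
    return out

r = rules(transactions)
```

On this toy data, subjects using "country" always also use "population", so an auto-completion system could suggest "population" whenever an editor adds "country".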
Cyanobacteria produce about 40 percent of the world’s primary biomass, but also a variety of often toxic peptides such as microcystin. Mass developments, so-called blooms, can pose a real threat to the drinking water supply in many parts of the world. This study aimed at characterizing the biological function of microcystin production in Microcystis aeruginosa, one of the most common bloom-forming cyanobacteria.
In a first step, the effect of elevated light intensity on microcystin production and its binding to cellular proteins was studied. To this end, conventional microcystin quantification techniques were combined with protein-biochemical methods. RubisCO, the key enzyme of primary carbon fixation, was a major microcystin interaction partner. High light exposure strongly stimulated microcystin-protein interactions. Up to 60 percent of the total cellular microcystin was detected bound to proteins, i.e. inaccessible to standard quantification procedures. Underestimation of total microcystin contents when neglecting the protein fraction was also demonstrated in field samples. Finally, an immunofluorescence-based method was developed to identify microcystin-producing cyanobacteria in mixed populations.
The high-light-induced microcystin interaction with proteins suggested an impact of the secondary metabolite on the primary metabolism of Microcystis, e.g. by modulating the activity of enzymes. To address this question, a comprehensive GC/MS-based approach was conducted to compare the accumulation of metabolites in the wild type of Microcystis aeruginosa PCC 7806 and the microcystin-deficient ΔmcyB mutant. Of the 501 detected non-redundant metabolites, 85 (17 percent) accumulated to significantly different levels in the two genotypes upon high light exposure. The accumulation of compatible solutes in the ΔmcyB mutant suggests a role of microcystin in fine-tuning the metabolic flow to prevent stress related to excess light, high oxygen concentration and carbon limitation.
Co-analysis of the widely used model cyanobacterium Synechocystis PCC 6803 revealed profound metabolic differences between species of cyanobacteria. Whereas Microcystis channeled more resources towards carbohydrate synthesis, Synechocystis invested more in amino acids. These findings were supported by electron microscopy of high light treated cells and the quantification of storage compounds. While Microcystis accumulated mainly glycogen to about 8.5 percent of its fresh weight within three hours, Synechocystis produced higher amounts of cyanophycin. The results showed that the characterization of species-specific metabolic features should gain more attention with regard to the biotechnological use of cyanobacteria.
This work elaborates on the question of whether coaches in non-professional soccer can influence referee decisions. From a principal-agent perspective, the managing referee boards can be seen as the principal. They aim at facilitating a fair competition in accordance with the existing rules and regulations. In doing so, they assign the referees as impartial agents on the pitch. The coaches take on a non-legitimate, principal-like role by trying to influence the referees even though they have no formal right to do so.
Separate questionnaires were set up for referees and coaches. The coach questionnaire aimed at identifying the extent and the forms of influencing attempts by coaches. The referee questionnaire addressed the questions of whether referees notice such influencing attempts and how they react to them.
The results were then related to official match data in order to identify significant influences on personal sanctions (yellow cards, second yellow cards, red cards) and on the match result.
It is found that there is a slight effect on the referees’ decisions. However, this effect tends to be disadvantageous for the influencing coach, and there is no evidence of an impact on the match result itself.
The monsoon is an important component of the Earth’s climate system. It has played a vital role in the development and sustenance of the largely agro-based economy of India. A better understanding of past variations in the Indian Summer Monsoon (ISM) is necessary to assess its nature under global warming scenarios. However, our knowledge of spatiotemporal patterns of past ISM strength, as inferred from proxy records, is limited by the lack of high-resolution paleo-hydrological records from the core monsoon domain.
In this thesis I aim to improve our understanding of Holocene ISM variability in the core ‘monsoon zone’ (CMZ) of India. To achieve this goal, I first characterized the modern hydrology and then reconstructed the Holocene monsoonal hydrology, by studying surface sediments and a high-resolution sedimentary record from the saline-alkaline Lonar crater lake in central India. My approach relies on analyzing stable carbon and hydrogen isotope ratios of sedimentary lipid biomarkers to track past hydrological changes.
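The stable isotope ratios used throughout are reported in standard delta notation, i.e. as the per-mil deviation of the sample's isotope ratio R from that of an international reference standard:

```latex
\delta = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1 \right) \times 10^{3}\ \text{\textperthousand}
```

For δ13C, R is the 13C/12C ratio and the conventional standard is VPDB; for δD, R is the D/H ratio and the standard is VSMOW.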
In order to evaluate the relationship between the modern ecosystem and the hydrology of the lake, I studied the distribution of lipid biomarkers in the modern ecosystem and compared it to lake surface sediments. The major plants of the dry deciduous mixed forest type produced a greater amount of leaf wax n-alkanes and a greater fraction of n-C31 and n-C33 alkanes relative to n-C27 and n-C29. The relatively high average chain length (ACL) values (29.6–32.8) for these plants seem common for vegetation from an arid and warm climate. Additionally, I found that human influence and the resulting nutrient supply increase lake primary productivity, leading to an unusually high concentration of tetrahymanol, a biomarker for salinity and water column stratification, in the nearshore sediments. Due to this inhomogeneous deposition of tetrahymanol in modern sediments, I hypothesize that lake level fluctuations, in addition to source changes, may affect aquatic lipid biomarker distributions in lacustrine sediments.
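The ACL index cited above is the standard concentration-weighted mean chain length of the leaf wax n-alkanes; a minimal sketch (the concentration values below are invented for illustration, not data from the thesis):

```python
def acl(conc_by_chain):
    """Average chain length: sum(n * C_n) / sum(C_n), where C_n is the
    concentration of the n-alkane with carbon chain length n."""
    total = sum(conc_by_chain.values())
    return sum(n * c for n, c in conc_by_chain.items()) / total

# Hypothetical leaf-wax n-alkane concentrations (ug/g dry weight),
# odd-numbered chains C27-C33 as typically reported for higher plants
sample = {27: 5.0, 29: 12.0, 31: 40.0, 33: 25.0}
print(round(acl(sample), 2))
```

A shift of the dominant homologue towards n-C31/n-C33 pushes the ACL up, which is why arid, warm-climate vegetation tends to show the higher values reported here.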
I reconstructed centennial-scale hydrological variability associated with changes in the intensity of the ISM based on a record of leaf wax and aquatic biomarkers and their stable carbon (δ13C) and hydrogen (δD) isotopic composition from a 10 m long sediment core from the lake. I identified three main periods of distinct hydrology over the Holocene in central India. The period between 10.1 and 6 cal. ka BP was likely the wettest during the Holocene. Lower ACL index values (29.4 to 28.6) of leaf wax n-alkanes and their negative δ13C values (–34.8‰ to –27.8‰) indicated the dominance of woody C3 vegetation in the catchment, and negative δDwax (average for leaf wax n-alkanes) values (–171‰ to –147‰) argued for a wet period due to an intensified monsoon. After 6 cal. ka BP, a gradual shift to less negative δ13C values (particularly for the grass-derived n-C31) and the appearance of the triterpene lipid tetrahymanol, generally considered a marker for salinity and water column stratification, marked the onset of drier conditions. At 5.1 cal. ka BP, an increasing flux of leaf wax n-alkanes along with the highest flux of tetrahymanol indicated proximity of the lakeshore to the center due to a major lake level decrease. Rapid fluctuations in the abundance of both terrestrial and aquatic biomarkers between 4.8 and 4 cal. ka BP indicated an unstable lake ecosystem, culminating in a transition to arid conditions. A pronounced shift to less negative δ13C values, in particular for n-C31 (–25.2‰ to –22.8‰), over this period indicated a change of dominant vegetation to C4 grasses. Along with a 40‰ increase in leaf wax n-alkane δD values, which likely resulted from less rainfall and/or higher plant evapotranspiration, I interpret this period to reflect the driest conditions in the region during the last 10.1 ka. This transition led to protracted late Holocene arid conditions and the establishment of a permanently saline lake, which is supported by the high abundance of tetrahymanol.
A late Holocene peak of cyanobacterial biomarker input at 1.3 cal. ka BP might represent an event of lake eutrophication, possibly due to human impact and the onset of cattle/livestock farming in the catchment.
The most intriguing feature of the mid-Holocene driest period is the high-amplitude, rapid fluctuations in δDwax values, probably due to a change in the moisture source and/or precipitation seasonality. I hypothesize that the orbitally induced weakening of summer solar insolation and the associated reorganization of the general atmospheric circulation were responsible for an unstable hydroclimate in the mid-Holocene CMZ.
My findings shed light on the sequence of changes during mean-state changes of the monsoonal system once an insolation-driven threshold has been passed, and show that small changes in solar insolation can be associated with major environmental changes and large fluctuations in moisture source, a scenario that may be relevant with respect to future changes in the ISM system.
The characterization of exoplanets is a young and rapidly expanding field in astronomy. It includes a method called transmission spectroscopy that searches for planetary spectral fingerprints in the light received from the host star during the event of a transit. This technique allows conclusions on the atmospheric composition at the terminator region, the boundary between the day and night side of the planet. Though observationally a big challenge, first attempts in the community have succeeded in detecting several absorption features in the optical wavelength range, for example a Rayleigh-scattering slope and absorption by sodium and potassium. Other objects, however, show a featureless spectrum, indicative of a cloud or haze layer of condensates masking the probed atmospheric layers.
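The Rayleigh-scattering slope mentioned above follows a standard analytic relation (not derived in this summary): if the scattering cross section varies as a power law in wavelength, σ ∝ λ^α, with α = −4 for Rayleigh scattering, the apparent planetary radius changes with wavelength as

```latex
\frac{\mathrm{d}R_{p}}{\mathrm{d}\ln\lambda} = \alpha H ,
\qquad
H = \frac{k_{\mathrm{B}} T}{\mu g}
```

where H is the atmospheric scale height, T the temperature, μ the mean molecular mass and g the surface gravity of the planet. This is why a measured optical slope can be translated into a constraint on the terminator temperature.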
In this work, we performed transmission spectroscopy by spectrophotometry of three Hot Jupiter exoplanets. When we began the work on this thesis, optical transmission spectra were available for only two exoplanets. Our main goal was to enlarge the current sample of probed objects in order to learn, by comparative exoplanetology, whether certain absorption features are common. We selected the targets HAT-P-12b, HAT-P-19b and HAT-P-32b, for which the detection of atmospheric signatures is feasible with current ground-based instrumentation. In addition, we monitored the host stars of all three objects photometrically to correct for influences of stellar activity where necessary.
The obtained measurements of the three objects all favor featureless spectra. A variety of atmospheric compositions can explain the lack of wavelength-dependent absorption. However, the broad trend of featureless spectra in planets over a wide range of temperatures, found in this work and in similar studies recently published in the literature, favors an explanation based on the presence of condensates, even at very low concentrations, in the atmospheres of these close-in gas giants. This result points towards the general conclusion that the capability of transmission spectroscopy to determine the atmospheric composition is limited, at least for measurements at low spectral resolution.
In addition, we refined the transit parameters and ephemerides of HAT-P-12b and HAT-P-19b. Our monitoring campaigns allowed the detection of the stellar rotation period of HAT-P-19 and a refined age estimate. For HAT-P-12 and HAT-P-32, we derived upper limits on their potential variability. The calculated upper limits on the systematic effects of starspots on the derived transmission spectra were found to be negligible for all three targets. Finally, we discussed the observational challenges in the characterization of exoplanet atmospheres and the importance of correlated noise in the measurements, and formulated suggestions on how to improve the robustness of results in future work.
Geometric electroelasticity
(2014)
In this work a differential geometric formulation of the theory of electroelasticity is developed that also includes thermal and magnetic influences. We study the motion of bodies consisting of an elastic material that are deformed by the influence of mechanical forces, heat and an external electromagnetic field. To this end, physical balance laws (conservation of mass, balance of momentum, angular momentum and energy) are established. These provide an equation that describes the motion of the body during the deformation. Here the body and the surrounding space are modeled as Riemannian manifolds, and we allow the body to have a lower dimension than the surrounding space. In this way one is not (as usual) restricted to the description of the deformation of three-dimensional bodies in a three-dimensional space, but can also describe the deformation of membranes and deformation in a curved space. Moreover, we formulate so-called constitutive relations that encode the properties of the material used. Balance of energy, as a scalar law, can easily be formulated on a Riemannian manifold. The remaining balance laws are then obtained by demanding that balance of energy be invariant under the action of arbitrary diffeomorphisms on the surrounding space. This generalizes a result by Marsden and Hughes that pertains to bodies that have the same dimension as the surrounding space and does not allow the presence of electromagnetic fields. Usually, in works on electroelasticity, the entropy inequality is used to decide which otherwise allowed deformations are physically admissible and which are not. It is also employed to derive restrictions on the possible forms of the constitutive relations describing the material. Unfortunately, the opinions on the physically correct statement of the entropy inequality diverge when electromagnetic fields are present.
Moreover, it is unclear how to formulate the entropy inequality in the case of a membrane that is subjected to an electromagnetic field. We therefore show that one can replace the use of the entropy inequality by the demand that, for a given process, balance of energy be invariant under the action of arbitrary diffeomorphisms on the surrounding space and under linear rescalings of the temperature. On the one hand, this demand also yields the desired restrictions on the form of the constitutive relations. On the other hand, it requires much weaker assumptions than the arguments in the physics literature that employ the entropy inequality. Again, our result generalizes a theorem of Marsden and Hughes; this time, like theirs, it is only valid for bodies that have the same dimension as the surrounding space.
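For reference, the classical balance of energy that is generalized here reads, in its standard integral form for a part U of the body (notation generic, not the thesis's manifold formulation):

```latex
\frac{\mathrm{d}}{\mathrm{d}t} \int_{U} \rho \left( e + \tfrac{1}{2} \langle v, v \rangle \right) \mathrm{d}V
= \int_{U} \rho \left( \langle b, v \rangle + r \right) \mathrm{d}V
+ \int_{\partial U} \left( \langle t, v \rangle - \langle q, n \rangle \right) \mathrm{d}A
```

where ρ is the mass density, e the internal energy, v the velocity, b the body force, r the heat supply, t the surface traction, and q the heat flux through the boundary with outward normal n. Demanding invariance of this scalar law under diffeomorphisms of the ambient space is what yields the remaining balance laws.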
A dramatic efficiency improvement of bulk heterojunction solar cells based on electron-donating conjugated polymers in combination with soluble fullerene derivatives has been achieved over the past years. Certified and reported power conversion efficiencies now reach over 9% for single junctions and exceed the 10% benchmark for tandem solar cells. This trend brightens the prospect of organic photovoltaics becoming competitive with inorganic solar cells and of realizing low-cost, large-area organic photovoltaics. For the best performing organic material systems, the yield of charge generation can be very high. However, a detailed understanding of the free charge carrier generation mechanisms at the donor-acceptor interface and the energy loss associated with it still needs to be established. Moreover, organic solar cells are limited by the competition between charge extraction and free charge recombination, accounting for further efficiency losses. A conclusive picture and the development of precise methodologies for investigating the fundamental processes in organic solar cells are crucial for future material design, efficiency optimization, and the implementation of organic solar cells in commercial products.
In order to advance the development of organic photovoltaics, my thesis focuses on a comprehensive understanding of charge generation, recombination and extraction in organic bulk heterojunction solar cells, presented in six chapters on the cumulative basis of seven individual publications.
The general motivation guiding this work was the realization of an efficient hybrid inorganic/organic tandem solar cell with sub-cells made from amorphous hydrogenated silicon and organic bulk heterojunctions. To realize this project aim, the focus was directed to the low band-gap copolymer PCPDTBT and its derivatives, resulting in the examination of the charge carrier dynamics in PCPDTBT:PC70BM blends in relation to the blend morphology. The phase separation in this blend can be controlled by the processing additive diiodooctane, enhancing domain purity and size. The quantitative investigation of free charge formation was realized by utilizing and improving the time delayed collection field technique. Interestingly, a pronounced field dependence of free carrier generation is found for all blends, with the field dependence being stronger without the additive. Also, the bimolecular recombination coefficient for both blends is rather high and increases with decreasing internal field, which we suggest is caused by a negative field dependence of the mobility. The additive speeds up charge extraction, which is rationalized by the threefold increase in mobility.
By fluorine attachment within the electron-deficient subunit of PCPDTBT, a new polymer, F-PCPDTBT, is designed. This new material is characterized by a stronger tendency to aggregate compared to non-fluorinated PCPDTBT. Our measurements show that for F-PCPDTBT:PCBM blends the charge carrier generation becomes more efficient and the field dependence of free charge carrier generation is weakened. The stronger tendency to aggregate induced by the fluorination also leads to enlarged polymer-rich domains, accompanied by a threefold reduction in the non-geminate recombination coefficient at conditions of open circuit. The size of the polymer domains correlates well with the field dependence of charge generation and the Langevin reduction factor, which highlights the importance of domain size and domain purity for efficient charge carrier generation. In total, fluorination of PCPDTBT causes the PCE to increase from 3.6 to 6.1% due to enhanced fill factor, short circuit current and open circuit voltage. Further optimization of the blend ratio, active layer thickness, and polymer molecular weight resulted in 6.6% efficiency for F-PCPDTBT:PC70BM solar cells.
Interestingly, the doubly fluorinated version 2F-PCPDTBT exhibited a poorer fill factor despite a further reduction of geminate and non-geminate recombination losses. To further analyze this finding, a new technique is developed that measures the effective extraction mobility under charge carrier densities and electrical fields comparable to solar cell operation conditions. This method builds on the bias enhanced charge extraction technique. With the knowledge of the carrier density under different electrical field and illumination conditions, a conclusive picture of the changes in charge carrier dynamics leading to the differences in fill factor upon fluorination of PCPDTBT is attained. The more efficient charge generation and reduced recombination with fluorination are counterbalanced by a decreased extraction mobility. Thus, the highest fill factor of 60% and efficiency of 6.6% is reached for F-PCPDTBT blends, while 2F-PCPDTBT blends have only moderate fill factors of 54% caused by the lower effective extraction mobility, limiting the efficiency to 6.5%.
To understand the details of the charge generation mechanism and the related losses, we evaluated the yield and field dependence of free charge generation using the time delayed collection field technique in combination with sensitive measurements of the external quantum efficiency and absorption coefficients for a variety of blends. Importantly, both the yield and the field dependence of free charge generation are found to be unaffected by excitation energy, including direct charge transfer excitation below the optical band gap. To access the non-detectable absorption at energies of the relaxed charge transfer emission, the absorption was reconstructed from the CT emission, induced via the recombination of thermalized charges in electroluminescence. For a variety of blends, the quantum yield at energies of charge transfer emission was identical to that for excitations with energies well above the optical band gap. Thus, generation proceeds via the split-up of thermalized charge transfer states in working solar cells. Further measurements were conducted on blends with fine-tuned energy levels and similar blend morphologies by using different fullerene derivatives. A direct correlation between the efficiency of free carrier generation and the energy difference of the relaxed charge transfer state relative to the energy of the charge separated state is found. These findings open up new guidelines for future material design, as new high-efficiency materials require a minimum energetic offset between the charge transfer and the charge separated state while keeping the HOMO level (and LUMO level) difference between donor and acceptor as small as possible.
One of the most significant current discussions in astrophysics relates to the origin of high-energy cosmic rays. According to our current knowledge, the abundance distribution of the elements in cosmic rays at their point of origin indicates, within plausible error limits, that they were initially formed by nuclear processes in the interiors of stars. It is also believed that their energy distribution up to 10^18 eV has Galactic origins. But even though the knowledge about potential sources of cosmic rays is quite poor above ~10^15 eV, that is, the "knee" of the cosmic-ray spectrum, up to the knee there seems to be a wide consensus that supernova remnants are the most likely candidates. Evidence of this comes from observations of non-thermal X-ray radiation, requiring synchrotron electrons with energies up to 10^14 eV, precisely in supernova remnants. To date, however, there is no conclusive evidence that they produce nuclei, the dominant component of cosmic rays, in addition to electrons. In light of this dearth of evidence, γ-ray observations of supernova remnants offer the most promising direct way to confirm whether or not these astrophysical objects are indeed the main source of cosmic-ray nuclei below the knee. Recent observations with space- and ground-based observatories have established shell-type supernova remnants as GeV-to-TeV γ-ray sources. The interpretation of these observations is however complicated by the different radiation processes, leptonic and hadronic, that can produce similar fluxes in this energy band, rendering the nature of the emission itself ambiguous. The aim of this work is to develop a deeper understanding of these radiation processes for a particular shell-type supernova remnant, namely RX J1713.7–3946, using observations of the LAT instrument onboard the Fermi Gamma-Ray Space Telescope.
Furthermore, to obtain accurate spectra and morphology maps of the emission associated with this supernova remnant, an improved model of the diffuse Galactic γ-ray emission background is developed. The analyses of RX J1713.7–3946 carried out with this improved background show that the hard Fermi-LAT spectrum cannot be ascribed to hadronic emission, leading thus to the conclusion that the leptonic scenario is instead the most natural picture for the high-energy γ-ray emission of RX J1713.7–3946. The leptonic scenario however does not rule out the possibility that cosmic-ray nuclei are accelerated in this supernova remnant, but it suggests that the ambient density may not be high enough to produce a significant hadronic γ-ray emission. Further investigations involving other supernova remnants using the improved background developed in this work could allow compelling population studies, and hence prove or disprove the origin of Galactic cosmic-ray nuclei in these astrophysical objects. A breakthrough regarding the identification of the radiation mechanisms could lastly be achieved with a new generation of instruments such as CTA.
Polyadenylation is a decisive 3’ end processing step during the maturation of pre-mRNAs. The length of the poly(A) tail has an impact on mRNA stability, localization and translatability. Accordingly, many eukaryotic organisms encode several copies of canonical poly(A) polymerases (cPAPs). The disruption of cPAPs in mammals results in lethality. In plants, reduced cPAP activity is non-lethal. Arabidopsis encodes three nuclear cPAPs, PAPS1, PAPS2 and PAPS4, which are constitutively expressed throughout the plant. Recently, the detailed analysis of Arabidopsis paps1 mutants revealed a subset of genes that is preferentially polyadenylated by the cPAP isoform PAPS1 (Vi et al. 2013). Thus, the specialization of cPAPs might allow the regulation of different sets of genes in order to optimally face developmental or environmental challenges.
To gain insights into cPAP-based gene regulation in plants, the phenotypes of Arabidopsis cPAP mutants under different conditions are characterized in detail in the following work. An involvement of all three cPAPs in flowering time regulation and stress response regulation is shown. While paps1 knockdown mutants flower early, paps4 and paps2 paps4 knockout mutants exhibit a moderate late-flowering phenotype. PAPS1 promotes the expression of the major flowering inhibitor FLC, presumably through specific polyadenylation of an FLC activator. PAPS2 and PAPS4 exhibit partially overlapping functions and ensure timely flowering by repressing FLC and at least one other, as yet unidentified flowering inhibitor. The latter two cPAPs act in a novel regulatory pathway downstream of the autonomous pathway component FCA and act independently of the polyadenylation factors and flowering time regulators CstF64 and FY. Moreover, PAPS1 and PAPS2/PAPS4 are implicated in different stress response pathways in Arabidopsis. Reduced activity of the poly(A) polymerase PAPS1 results in enhanced resistance to osmotic and oxidative stress. Simultaneously, paps1 mutants are cold-sensitive. In contrast, PAPS2/PAPS4 are not involved in the regulation of osmotic or cold stress, but paps2 paps4 loss-of-function mutants exhibit enhanced sensitivity to oxidative stress provoked in the chloroplast. Thus, both PAPS1 and PAPS2/PAPS4 are required to maintain a balanced redox state in plants. PAPS1 seems to fulfil this function in concert with CPSF30, a polyadenylation factor that regulates alternative polyadenylation and tolerance to oxidative stress.
The individual paps mutant phenotypes and the cPAP-specific genetic interactions support the model of cPAP-dependent polyadenylation of selected mRNAs. The high similarity of the polyadenylation machineries in yeast, mammals and plants suggests that similar regulatory mechanisms might be present in other organism groups. The cPAP-dependent developmental and physiological pathways identified in this work allow the design of targeted experiments to better understand the ecological and molecular context underlying cPAP-specialization.
Galaxies are observational probes of the Large Scale Structure. Their gravitational motions are tracers of the total matter density and therefore of the Large Scale Structure. Besides, studies of structure formation and galaxy evolution rely on numerical cosmological simulations. Still, only one universe, observable from a given position in time and space, is available for comparisons with simulations. The related cosmic variance affects our ability to interpret the results. Simulations constrained by observational data are a perfect remedy to this problem. Achieving such simulations is the goal of the projects Cosmic flows and CLUES. Cosmic flows builds catalogs of accurate distance measurements to map deviations from the expansion. These measurements are mainly obtained with the galaxy luminosity-rotation rate correlation. We present the calibration of that relation in the mid-infrared with observational data from the Spitzer Space Telescope. The resulting accurate distance estimates will be included in the third catalog of the project. In the meantime, two catalogs up to 30 and 150 Mpc/h have been released. We report improvements and applications of the CLUES method on these two catalogs. The technique is based on the constrained realization algorithm. The cosmic displacement field is computed with the Zel'dovich approximation. The latter is then reversed to relocate reconstructed three-dimensional constraints to their precursors' positions in the initial field. The size of the second catalog (8000 galaxies within 150 Mpc/h) highlighted the importance of minimizing observational biases. By carrying out tests on mock catalogs, built from cosmological simulations, a method to minimize observational bias is derived. Finally, for the first time, cosmological simulations are constrained solely by peculiar velocities. The process is successful as the resulting simulations resemble the Local Universe.
The major attractors and voids are simulated at positions approaching observational positions by a few megaparsecs, thus reaching the limit imposed by the linear theory.
Knowing the rates and mechanisms of the geomorphic processes that shape the Earth’s surface is crucial to understanding landscape evolution. Modern methods for estimating denudation rates enable us to quantitatively express and compare processes of landscape downwearing that can be traced through time and space—from the seemingly intact, though intensely shattered, phantom blocks of the catastrophically fragmented basal facies of giant rockslides up to denudational noise in orogen-wide data sets averaging over several millennia. This great variety of spatiotemporal scales is both a boon and a bane of geomorphic process rates. Indeed, processes of landscape downwearing can be traced far back in time, helping us to understand the Earth’s evolution. Yet, this benefit may turn into a drawback due to scaling issues if these rates are to be compared across different observation timescales.
This thesis investigates the mechanisms, patterns and rates of landscape downwearing across the Himalaya-Tibet orogen.
Accounting for the spatiotemporal variability of denudation processes, this thesis addresses landscape downwearing on three distinctly different spatial scales, starting off at the local scale of individual hillslopes, where considerable amounts of debris are generated from rock instantaneously: rocksliding in active mountains is a major impetus of landscape downwearing. Study I provides a systematic overview of the internal sedimentology of giant rockslide deposits and thus meets the challenge of distinguishing them from macroscopically and microscopically similar glacial deposits, tectonic fault-zone breccias, and impact breccias. This distinction is important to avoid erroneous or misleading deductions of paleoclimatic or tectonic implications. -> Grain size analysis shows that rockslide-derived micro-breccias closely resemble those from meteorite impacts or tectonic faults. -> Frictionite may occur more frequently than previously assumed. -> Mössbauer-spectroscopy derived results indicate basal rock melting in the absence of water, involving short-term temperatures of >1500°C.
Zooming out, Study II tracks the fate of these sediments, using the example of the upper Indus River, NW India. There we use river sand samples from the Indus and its tributaries to estimate basin-averaged denudation rates along a ~320-km reach across the Tibetan Plateau margin, to answer the question of whether incision into the western Tibetan Plateau margin is currently active or not. -> We find an upstream decay of about one order of magnitude—from 110 to 10 mm kyr^-1—in cosmogenic Be-10-derived basin-wide denudation rates across the morphological knickpoint that marks the transition from the Transhimalayan ranges to the Tibetan Plateau. This trend is corroborated by independent bulk petrographic and heavy mineral analyses of the same samples. -> From the observation that tributary-derived basin-wide denudation rates do not increase markedly until ~150–200 km downstream of the topographic plateau margin, we conclude that incision into the Tibetan Plateau is inactive. -> Comparing our postglacial Be-10-derived denudation rates to long-term (>10^6 yr) estimates from low-temperature thermochronometry, ranging from 100 to 750 mm kyr^-1, points to an order-of-magnitude decay of rates of landscape downwearing towards the present. We infer that denudation rates must have been higher in the Quaternary, probably promoted by the interplay of glacial and interglacial stages.
Our investigation of regional denudation patterns in the upper Indus is finally an integral part of Study III, which synthesizes denudation of the Himalaya-Tibet orogen. In order to identify general and time-invariant predictors for Be-10-derived denudation rates, we analyze tectonic, climatic and topographic metrics from an inventory of 297 drainage basins from various parts of the orogen. Aiming to gain insight into the full response distributions of denudation rate to tectonic, climatic and topographic candidate predictors, we apply quantile regression instead of ordinary least squares regression, which has been the standard analysis tool in previous studies that looked for denudation rate predictors. -> We use principal component analysis to reduce our set of 26 candidate predictors, ending up with just three of these: aridity index, topographic steepness index, and precipitation of the coldest quarter of the year. -> The topographic steepness index proves to perform best in additive quantile regression. Our consequent prediction of denudation rates on the basin scale involves prediction errors that remain between 5 and 10 mm kyr^-1. -> We conclude that topographic metrics such as river-channel steepness and slope gradient—being representative on the timescales that our cosmogenic Be-10-derived denudation rates integrate over—generally appear to be better suited as predictors than climatic and tectonic metrics based on decadal records.
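To illustrate why quantile regression, rather than ordinary least squares, reveals the full response distribution, the sketch below fits several quantiles of a synthetic, heteroscedastic denudation-versus-steepness data set by minimizing the pinball (check) loss via linear programming. Everything here (the data, variable names, and LP formulation) is an illustrative assumption; it is not the analysis code of Study III:

```python
import numpy as np
from scipy.optimize import linprog

# Quantile regression via linear programming: minimize the pinball loss
#   sum_i rho_tau(y_i - x_i^T b),  rho_tau(r) = r * (tau - 1{r < 0}).
def quantile_regression(X, y, tau):
    n, p = X.shape
    # variables: [b (p, free), u_plus (n, >= 0), u_minus (n, >= 0)]
    c = np.concatenate([np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])   # y - Xb = u_plus - u_minus
    bounds = [(None, None)] * p + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:p]

rng = np.random.default_rng(0)
n = 400
steepness = rng.uniform(0, 200, n)       # hypothetical channel-steepness metric
# heteroscedastic response: the spread grows with the predictor, so different
# quantiles have different slopes -- exactly what OLS cannot capture
denudation = 2.0 * steepness + steepness * rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), steepness])

b_lo = quantile_regression(X, denudation, 0.10)
b_med = quantile_regression(X, denudation, 0.50)
b_hi = quantile_regression(X, denudation, 0.90)
print(b_lo[1], b_med[1], b_hi[1])        # slopes increase with the quantile
```

With homoscedastic noise all three slopes would coincide; here they fan out, which is the kind of distributional information the quantile approach exposes.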
During this work I built a four wave mixing setup for the time-resolved femtosecond spectroscopy of Raman-active lattice modes. This setup enables the study of the selective excitation of phonon polaritons. These quasi-particles arise from the coupling of electromagnetic waves and transverse optical lattice modes, the so-called phonons. The phonon polaritons were investigated in the optically non-linear, ferroelectric crystals LiNbO₃ and LiTaO₃.
The direct observation of the frequency shift of the scattered narrow-bandwidth probe pulses proves the role of the Raman interaction in the probe and excitation process of phonon polaritons. I compare this experimental method with measurements in which ultra-short laser pulses are used; there, the frequency shift remains obscured by the relatively broad bandwidth of these laser pulses. In an experiment with narrow-bandwidth probe pulses, the Stokes and anti-Stokes intensities are spectrally separated. They are assigned to the corresponding counter-propagating wavepackets of phonon polaritons. Thus, the dynamics of these wavepackets were studied separately. Based on these findings, I develop the mathematical description of the so-called homodyne detection of light for the case of light scattering from counter-propagating phonon polaritons.
Further, I modified the broad bandwidth of the ultra-short pump pulses using bandpass filters to generate two pump pulses with non-overlapping spectra. This enables the frequency-selective excitation of polariton modes in the sample, which allows me to observe even very weak polariton modes in LiNbO₃ or LiTaO₃ that belong to the higher branches of the dispersion relation of phonon polaritons. The experimentally determined dispersion relation of the phonon polaritons could therefore be extended and compared to theoretical models. In addition, I determined the frequency-dependent damping of phonon polaritons.
Magnetite is an iron oxide which is ubiquitous in rocks and is usually deposited as small nanoparticulate matter among other rock material. It differs from most other iron oxides because it contains both divalent and trivalent iron. Consequently, it has a special crystal structure and unique magnetic properties. These properties are used for paleoclimatic reconstructions, where naturally occurring magnetite helps in understanding former geological ages. Furthermore, magnetic properties are exploited in bio- and nanotechnological applications: synthetic magnetite serves as a contrast agent in MRI, is used in biosensing and hyperthermia, or serves in storage media.
Magnetic properties are strongly size-dependent, and achieving size control under preferably mild synthesis conditions is of interest in order to obtain particles with the required properties. By using a custom-made setup, it was possible to synthesize stable single domain magnetite nanoparticles with the co-precipitation method. Furthermore, it was shown that magnetite formation is temperature-dependent, resulting in larger particles at higher temperatures. However, the mechanistic understanding of the details is still incomplete.
Formation of magnetite from solution was shown to occur from nanoparticulate matter rather than solvated ions. The theoretical framework of such processes has only started to be described, partly due to the lack of kinetic or thermodynamic data. Synthesis of magnetite nanoparticles at different temperatures was performed, and the Arrhenius plot was used to determine an activation energy for crystal growth of 28.4 kJ mol^-1, which led to the conclusion that nanoparticle diffusion is the rate-determining step.
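The activation-energy determination mentioned above follows the standard Arrhenius analysis, ln k = ln A - Ea/(RT): a linear fit of ln k against 1/T yields Ea from the slope. A minimal sketch with synthetic rate data (the temperatures and pre-exponential factor are assumptions; only the 28.4 kJ/mol value comes from the text):

```python
import numpy as np

# Arrhenius analysis: ln k = ln A - Ea / (R T).
# The rate constants below are synthetic, generated to correspond to the
# reported Ea = 28.4 kJ/mol; A and the temperature range are illustrative.
R = 8.314                  # gas constant, J mol^-1 K^-1
Ea_true = 28.4e3           # J/mol (value reported in the abstract)
A = 1.0e6                  # assumed pre-exponential factor
T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])   # temperatures in K
k = A * np.exp(-Ea_true / (R * T))                  # synthetic growth rates

# linear fit of ln k vs 1/T; the slope equals -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_fit = -slope * R
print(f"Ea = {Ea_fit / 1e3:.1f} kJ/mol")   # recovers 28.4 kJ/mol
```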
Furthermore, a study of the alteration of magnetite particles of different sizes as a function of their storage conditions is presented. The magnetic properties depend not only on particle size but also on the structure of the oxide, because magnetite oxidizes to maghemite under environmental conditions. The dynamics of this process have not been well described. Smaller nanoparticles are shown to oxidize more rapidly than larger ones, and the lower the storage temperature, the lower the measured oxidation. In addition, the magnetic properties of the altered particles are not decreased dramatically, suggesting that this alteration will not impact the use of such nanoparticles as medical carriers.
Finally, the effect of biological additives on magnetite formation was investigated. Magnetotactic bacteria are able to synthesize and align magnetite nanoparticles of well-defined size and morphology due to the involvement of special proteins with specific binding properties. Based on this model of morphology control, phage display experiments were performed to determine peptide sequences that preferentially bind to (111) magnetite faces. The aim was to control the shape of magnetite nanoparticles during formation. Magnetotactic bacteria are also able to control the intracellular redox potential with proteins called magnetochromes. MamP is such a protein, and its oxidizing nature was studied in vitro via biomimetic magnetite formation experiments based on ferrous ions. Magnetite and further trivalent oxides were found.
This work helps in understanding the basic mechanisms of magnetite formation and gives insight into non-classical crystal growth. In addition, it is shown that the alteration of magnetite nanoparticles is mainly based on oxidation to maghemite and does not significantly influence the magnetic properties. Finally, the biomimetic experiments help to clarify the role of MamP within the bacteria, and a first step was taken towards achieving morphology control in magnetite formation via co-precipitation.
Enterprise-specific in-memory data management : HYRISEc - an in-memory column store engine for OLXP
(2014)
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high thermal loads inside data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate our proposed techniques, we use simulation and real workload traces of web applications and HPC applications and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following: A proactive resource provisioning technique based on robust optimization to increase the hosts' availability for hosting new VMs while minimizing the idle energy consumption. Additionally, this technique mitigates undesirable changes in the power state of the hosts, by which the hosts' reliability can be enhanced by avoiding failures during power state changes. The proposed technique exploits the range-based prediction algorithm for implementing robust optimization, taking into consideration the uncertainty of demand. An adaptive range-based prediction for predicting workloads with high fluctuations in the short term. The range prediction is implemented in two ways: standard deviation and median absolute deviation. The range is changed based on an adaptive confidence window to cope with the workload fluctuations.
A robust VM consolidation for efficient energy and performance management to achieve equilibrium between energy and performance trade-offs. Our technique reduces the number of VM migrations compared to recently proposed techniques. This also contributes to a reduction in energy consumption by the network infrastructure. Additionally, our technique reduces SLA violations and the number of power state changes. A generic model for the network of a data center to simulate the communication delay and its impact on VM performance, as well as network energy consumption. In addition, a generic model for a memory-bus of a server, including latency and energy consumption models for different memory frequencies. This allows simulating the memory delay and its influence on VM performance, as well as memory energy consumption. Communication-aware and energy-efficient consolidation for parallel applications to enable the dynamic discovery of communication patterns and reschedule VMs using migration based on the determined communication patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of network utilization of VMs instead of using the information from the hosts' virtual switches or initiation from VMs. The result shows that our proposed approach reduces the network's average utilization, achieves energy savings due to reducing the number of active switches, and provides better VM performance compared to CPU-based placement. Memory-aware VM consolidation for independent VMs, which exploits the diversity of VMs' memory access to balance memory-bus utilization of hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their utilization of a memory-bus using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory and the proposed MLB technique are combined to achieve better energy savings.
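A minimal sketch of the range-based prediction idea described above: instead of a point forecast, an interval is predicted around a moving estimate, with the interval width taken either from the standard deviation or from the median absolute deviation (MAD) of recent observations inside a sliding window. The function name, window length, scale factor, and trace are all illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

# Range-based workload prediction (assumed, simplified implementation):
# predict an interval [center - s*spread, center + s*spread] over the
# last `window` observations, with spread from std or MAD.
def range_prediction(history, window=12, mode="std", scale=1.5):
    recent = np.asarray(history[-window:], dtype=float)
    if mode == "mad":
        center = np.median(recent)
        spread = np.median(np.abs(recent - center))   # median absolute deviation
    else:
        center = np.mean(recent)
        spread = np.std(recent)                       # standard deviation
    return center - scale * spread, center + scale * spread

# synthetic CPU-demand trace with moderate fluctuations and one spike
rng = np.random.default_rng(1)
trace = list(50 + 5 * rng.normal(size=50))
trace[40] = 95.0   # outlier inside the most recent window

lo_std, hi_std = range_prediction(trace, mode="std")
lo_mad, hi_mad = range_prediction(trace, mode="mad")
# MAD is robust: the single spike widens the std-based range far more
print(hi_std - lo_std, hi_mad - lo_mad)
```

This contrast between the two spread measures is one plausible reason for offering both variants: the std-based range reacts strongly to transient spikes, while the MAD-based range tracks the typical fluctuation level.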
In this thesis we consider diverse aspects of the existence and correctness of asymptotic solutions to elliptic differential and pseudodifferential equations. We begin our studies with the case of a general elliptic boundary value problem in partial derivatives. A small parameter enters the coefficients of the main equation as well as the boundary conditions. Such equations have already been investigated satisfactorily, but certain theoretical deficiencies still exist. Our aim is to present the general theory of elliptic problems with a small parameter. For this purpose we examine in detail the case of a bounded domain with a smooth boundary. First of all, we construct formal solutions as power series in the small parameter. Then we examine their asymptotic properties. It suffices to carry out sharp two-sided a priori estimates for the operators of boundary value problems which are uniform in the small parameter. Such estimates fail to hold in the functional spaces used in classical elliptic theory. To circumvent this limitation we exploit norms depending on the small parameter for the functions defined on a bounded domain. Similar norms are widely used in the literature, but their properties have not been investigated extensively. Our theoretical investigation shows that the usual elliptic technique can be correctly carried out in these norms. The obtained results also allow one to extend the norms to compact manifolds with boundaries. We complete our investigation by formulating algebraic conditions on the operators and showing their equivalence to the existence of a priori estimates. In the second step, we extend the concept of ellipticity with a small parameter to more general classes of operators. Firstly, we want to compare the difference in asymptotic patterns between the obtained series and expansions for similar differential problems. Therefore we investigate the heat equation in a bounded domain with a small parameter near the time derivative.
In this case the characteristics touch the boundary at a finite number of points. It is known that the solutions are not regular in a neighbourhood of such points. We suppose moreover that the boundary at such points may be non-smooth and have cuspidal singularities. We find a formal asymptotic expansion and show that when a set of parameters passes through a threshold value, the expansions fail to be asymptotic. The last part of the work is devoted to the general concept of ellipticity with a small parameter. Several theoretical extensions to pseudodifferential operators have already been suggested in previous studies. As a new contribution, we involve the analysis on manifolds with edge singularities, which allows us to consider wider classes of perturbed elliptic operators. We show that the introduced classes possess a priori estimates of elliptic type. As a further application, we demonstrate how the developed tools can be used to reduce singularly perturbed problems to regular ones.
Donor-acceptor (D-A) copolymers have revolutionized the field of organic electronics over the last decade. Composed of an electron-rich and an electron-deficient molecular unit, these copolymers facilitate the systematic modification of the material's optoelectronic properties. The ability to tune the optical band gap and to optimize the molecular frontier orbitals, as well as the manifold of structural sites that enable chemical modifications, has created a tremendous variety of copolymer structures. Today, these materials reach or even exceed the performance of amorphous inorganic semiconductors. Most impressively, the charge carrier mobility of D-A copolymers has been pushed to the technologically important value of 10 cm^{2}V^{-1}s^{-1}. Furthermore, owing to their enormous variability, they are the material of choice for the donor component in organic solar cells, which have recently surpassed the efficiency threshold of 10%. Because of the great number of available D-A copolymers and due to their fast chemical evolution, there is a significant lack of understanding of the fundamental physical properties of these materials. Furthermore, the complex chemical and electronic structure of D-A copolymers in combination with their semi-crystalline morphology impedes a straightforward identification of the microscopic origin of their superior performance. In this thesis, two aspects of prototype D-A copolymers were analysed: the investigation of electron transport in several copolymers, and the application of low band gap copolymers as the acceptor component in organic solar cells. In the first part, the investigation of a series of chemically modified fluorene-based copolymers is presented. The charge carrier mobility varies strongly between the different derivatives, although only moderate changes to the copolymer structure were made. Furthermore, rather unusual photocurrent transients were observed for one of the copolymers.
Numerical simulations of the experimental results reveal that this behavior arises from severe trapping of electrons in an exponential distribution of trap states. Based on the comparison of simulation and experiment, the general impact of charge carrier trapping on the shape of photo-CELIV and time-of-flight transients is discussed. In addition, the high performance naphthalenediimide (NDI)-based copolymer P(NDI2OD-T2) was characterized. It is shown that the copolymer possesses one of the highest electron mobilities reported so far, which makes it attractive to be used as the electron accepting component in organic photovoltaic cells. Solar cells were prepared from two NDI-containing copolymers, blended with the hole transporting polymer P3HT. I demonstrate that the use of appropriate, high boiling point solvents can significantly increase the power conversion efficiency of these devices. Spectroscopic studies reveal that the pre-aggregation of the copolymers is suppressed in these solvents, which has a strong impact on the blend morphology. Finally, a systematic study of P3HT:P(NDI2OD-T2) blends is presented, which quantifies the processes that limit the efficiency of devices. The major loss channel for excited states was determined by transient and steady state spectroscopic investigations: the majority of initially generated electron-hole pairs is annihilated by an ultrafast geminate recombination process. Furthermore, exciton self-trapping in P(NDI2OD-T2) domains accounts for an additional reduction of the efficiency. The correlation of the photocurrent with microscopic morphology parameters was used to disclose the factors that limit the charge generation efficiency. Our results suggest that the orientation of the donor and acceptor crystallites relative to each other is the main factor determining the free charge carrier yield in this material system. 
This provides an explanation for the overall low efficiencies that are generally observed in all-polymer solar cells.
Effect of benzylglucosinolate on signaling pathways associated with type 2 diabetes prevention
(2014)
Type 2 diabetes (T2D) is a health problem throughout the world. In 2010, there were nearly 230 million individuals with diabetes worldwide, and it is estimated that in the economically advanced countries the number of cases will increase by about 50% in the next twenty years. Insulin resistance is one of the major features of T2D and is also a risk factor for metabolic and cardiovascular complications. Epidemiological and animal studies have shown that the consumption of vegetables and fruits can delay or prevent the development of the disease, although the underlying mechanisms of these effects are still unclear. Brassica species such as broccoli (Brassica oleracea var. italica) and nasturtium (Tropaeolum majus) possess a high content of bioactive phytochemicals, e.g. nitrogen-sulfur compounds (glucosinolates and isothiocyanates) and polyphenols, largely associated with the prevention of cancer. Isothiocyanates (ITCs) display their anti-carcinogenic potential by inducing detoxifying phase II enzymes and increasing glutathione (GSH) levels in tissues. In T2D, an increase in gluconeogenesis and triglyceride synthesis and a reduction in fatty acid oxidation, accompanied by the presence of reactive oxygen species (ROS), are observed; altogether the result of an inappropriate response to insulin. Forkhead box O (FOXO) transcription factors play a crucial role in the regulation of insulin effects on gene expression and metabolism, and alterations in FOXO function could contribute to metabolic disorders in diabetes. 
In this study, using stably transfected human osteosarcoma cells (U-2 OS) with constitutive expression of FOXO1 protein labeled with GFP (green fluorescent protein) and human hepatoma (HepG2) cell cultures, the ability of benzylisothiocyanate (BITC), derived from benzylglucosinolate extracted from nasturtium, to modulate i) the insulin-signaling pathway, ii) the intracellular localization of FOXO1 and iii) the expression of proteins involved in glucose metabolism, ROS detoxification, cell cycle arrest and DNA repair was evaluated. BITC promoted oxidative stress and in response induced FOXO1 translocation from the cytoplasm into the nucleus, antagonizing the insulin effect. The BITC stimulus was able to down-regulate gluconeogenic enzymes, which can be considered an anti-diabetic effect; to promote antioxidant resistance, expressed by the up-regulation of manganese superoxide dismutase (MnSOD) and detoxification enzymes; to modulate autophagy by induction of BECLIN1 and down-regulation of the mammalian target of rapamycin complex 1 (mTORC1) pathway; and to promote cell cycle arrest and DNA damage repair by up-regulation of the cyclin-dependent kinase inhibitor (p21CIP) and Growth Arrest / DNA Damage Repair (GADD45). Except for the nuclear factor (erythroid-derived)-like 2 (NRF2) and its influence on the gene expression of detoxification enzymes, all the observed effects were independent of FOXO1, protein kinase B (AKT/PKB) and the NAD-dependent deacetylase sirtuin-1 (SIRT1). The current study provides evidence that besides their anticarcinogenic potential, isothiocyanates might have a role in T2D prevention. The BITC stimulus mimics the fasting state, in which insulin signaling is not triggered and FOXO proteins remain in the nucleus modulating the expression of their target genes, with the advantage of a down-regulation of gluconeogenesis instead of its increase. 
These effects suggest that BITC might be considered a promising substance in the prevention or treatment of T2D; the factors behind its modulatory effects therefore need further investigation.
The data quality of real-world datasets needs to be constantly monitored and maintained to allow organizations and individuals to reliably use their data. Data integration projects in particular suffer from poor initial data quality and as a consequence consume more effort and money. Commercial products and research prototypes for data cleansing and integration help users to improve the quality of individual and combined datasets. They can be divided into standalone systems and database management system (DBMS) extensions. On the one hand, standalone systems do not interact well with DBMSs and require time-consuming data imports and exports. On the other hand, DBMS extensions are often limited by the underlying system and do not cover the full set of data cleansing and integration tasks.
We overcome both limitations by implementing a concise set of five data cleansing and integration operators on the parallel data analytics platform Stratosphere. We define the semantics of the operators, present their parallel implementation, and devise optimization techniques for individual operators and combinations thereof. Users specify declarative queries in our query language METEOR with our new operators to improve the data quality of individual datasets or integrate them into larger datasets. By integrating the data cleansing operators into the higher-level language layer of Stratosphere, users can easily combine cleansing operators with operators from other domains, such as information extraction, into complex data flows. Through a generic description of the operators, the Stratosphere optimizer reorders operators, even from different domains, to find better query plans.
As a case study, we reimplemented a part of the large Open Government Data integration project GovWILD with our new operators and show that our queries run significantly faster than the original GovWILD queries, which rely on relational operators. Evaluation reveals that our operators exhibit good scalability on up to 100 cores, so that even larger inputs can be efficiently processed by scaling out to more machines. Finally, our scripts are considerably shorter than the original GovWILD scripts, which results in better maintainability of the scripts.
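To illustrate the kind of operation such a cleansing operator performs, here is a minimal, self-contained Python sketch of similarity-based duplicate detection. It is purely illustrative: the thresholds and records are hypothetical, and it reflects neither the actual METEOR syntax nor the operators' parallel implementation on Stratosphere.

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two record strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def find_duplicates(records, threshold=0.5):
    """Return index pairs of records whose similarity exceeds the threshold.
    A real operator would use blocking to avoid the quadratic comparison."""
    pairs = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if jaccard(records[i], records[j]) >= threshold:
                pairs.append((i, j))
    return pairs

records = ["Barack Obama Washington DC",
           "Obama Barack Washington DC",
           "Angela Merkel Berlin"]
print(find_duplicates(records))  # [(0, 1)]
```

In a declarative setting, only the similarity measure and threshold would be specified by the user; the system chooses the parallel execution strategy.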
In March 2010, the project CoCoCo (incipient COntinent-COntinent COllision) recorded a 650 km long amphibious N-S wide-angle seismic profile, extending from the Eratosthenes Seamount (ESM) across Cyprus and southern Turkey to the Anatolian plateau. The aim of the project is to reveal the impact of the transition from subduction to continent-continent collision of the African plate with the Cyprus-Anatolian plate. A visual quality check, frequency analysis and filtering were applied to the seismic data and reveal a good data quality. Subsequent first-break picking, finite-difference ray tracing and inversion of the offshore wide-angle data lead to a first-arrival tomographic model. This model reveals (1) P-wave velocities lower than 6.5 km/s in the crust, (2) a variable crustal thickness of about 28 - 37 km and (3) an upper crustal reflection at 5 km depth beneath the ESM. Two land shots in Turkey, also recorded on Cyprus, airgun shots south of Cyprus, and geological and previous seismic investigations provide the information to derive a layered velocity model beneath the Anatolian plateau and for the ophiolite complex on Cyprus. The analysis of the reflections provides evidence for a north-dipping plate subducting beneath Cyprus. The main features of this layered velocity model are (1) an upper and lower crust with large lateral changes in velocity structure and thickness, (2) a Moho depth of about 38 - 45 km beneath the Anatolian plateau, (3) a shallow north-dipping subducting plate below Cyprus with an increasing dip and (4) a typical ophiolite sequence on Cyprus with a total thickness of about 12 km. The offshore-onshore seismic data complete and improve the information about the velocity structure beneath Cyprus and the deeper part of the offshore tomographic model. Thus, the wide-angle seismic data provide detailed insights into the 2-D geometry and velocity structures of the uplifted and overriding Cyprus-Anatolian plate. 
Subsequent gravity modelling confirms and extends the crustal P-wave velocity model. The deeper part of the subducting plate is constrained by the gravity data and has a dip angle of ~ 28°. Finally, an integrated analysis of the geophysical and geological information allows a comprehensive interpretation of the crustal structure related to the collision process.
Quantitative descriptions of the state of stress in the Earth’s crust and of spatial-temporal stress changes are of great importance for scientific questions as well as applied geotechnical issues. Human activities in the underground (boreholes, tunnels, caverns, reservoir management, etc.) have a large impact on the stress state. It is important to assess whether these activities may lead to (unpredictable) hazards, such as induced seismicity. Equally important is the understanding of the in situ stress state in the Earth’s crust, as it allows the determination of safe well paths already during well planning. The same goes for the optimal configuration of injection and production wells where stimulation of artificial fluid pathways is necessary.
The cumulative dissertation presented here consists of four separate manuscripts, which are already published, submitted, or will be submitted for peer review within the next weeks. The main focus is on the investigation of the possible usage of geothermal energy in the province of Alberta (Canada). A 3-D geomechanical–numerical model was designed to quantify the contemporary 3-D stress tensor in the upper crust. For the calibration of the regional model, 321 stress orientation data and 2714 stress magnitude data were collected, whereby the size and diversity of the database is unique. A calibration scheme was developed in which the model is calibrated against the in situ stress data stepwise for each data type and gradually optimized using statistical test methods. The optimum displacement on the model boundaries can be determined by bivariate linear regression, based on only three model runs with varying deformation ratio. The best-fit model is able to predict most of the in situ stress data quite well. Thus, the model can provide the full stress tensor along any chosen virtual well path. This can be used to optimize the orientation of horizontal wells, which e.g. can be used for reservoir stimulation. The model confirms regional deviations from the average stress orientation trend, such as in the region of the Peace River Arch and the Bow Island Arch.
In the context of the data compilation for the Alberta stress model, the Canadian database of the World Stress Map (WSM) could be expanded by 514 new data records. This publication of an update of the Canadian stress map after ~20 years, with a specific focus on Alberta, shows that the maximum horizontal stress (SHmax) is oriented southwest to northeast over large areas of North America. The SHmax orientation in Alberta is very homogeneous, with an average of about 47°. In order to calculate the average SHmax orientation on a regular grid as well as to estimate the wavelength of the stress orientation, an existing algorithm has been improved and applied to the Canadian data. The newly introduced quasi interquartile range on the circle (QIROC) improves the variance estimation of periodic data, as it is less susceptible to outliers.
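As background for averaging stress orientations: SHmax azimuths are axial data with 180° periodicity, so a plain arithmetic mean fails near the wrap-around (the mean of 170° and 10° should be 0°/180°, not 90°). A common remedy, sketched below in Python, is to double the angles, average on the unit circle, and halve the result. This illustrates only the standard circular-statistics step that such algorithms build on; it is not the QIROC algorithm itself.

```python
import math

def mean_axial_orientation(deg_orientations):
    """Circular mean of axial data with 180-degree periodicity
    (e.g. SHmax azimuths). Angles are doubled so that 0 and 180
    degrees map to the same direction, averaged on the unit
    circle, and halved back."""
    s = sum(math.sin(math.radians(2 * a)) for a in deg_orientations)
    c = sum(math.cos(math.radians(2 * a)) for a in deg_orientations)
    return (math.degrees(math.atan2(s, c)) / 2) % 180

print(mean_axial_orientation([40, 45, 50]))   # ≈ 45
print(mean_axial_orientation([170, 10]))      # ≈ 0 (or 180), not 90
```

The same doubling trick underlies outlier-robust spread measures for periodic data, since quartiles of the doubled angles can be taken on the circle.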
Another geomechanical–numerical model was built to estimate the 3-D stress tensor in the target area ”Nördlich Lägern” in Northern Switzerland. This location, with Opalinus clay as the host rock, is a potential repository site for high-level radioactive waste. The modelling aims to investigate the sensitivity of the stress tensor to tectonic shortening, topography, faults and variable rock properties within the Mesozoic sedimentary stack, with regard to the stability required for a suitable radioactive waste disposal site. The majority of the tectonic stresses caused by the far-field shortening from the south are accommodated by the competent rock units in the footwall and hanging wall of the argillaceous target horizon, the Upper Malm and Upper Muschelkalk. Thus, the differential stress within the host rock remains relatively low. East-west striking faults release stresses driven by tectonic shortening. The purely gravitational influence of the topography is low; higher SHmax magnitudes below topographical depressions and lower values below hills are mainly observed near the surface. A complete calibration of the model is not yet possible, as no stress magnitude data are available for calibration. The collection of these data will begin in 2015; subsequently they will be used to adjust the geomechanical–numerical model again.
The third geomechanical–numerical model investigates the stress variation in an ultra-deep gold mine in South Africa. This reservoir model is spatially one order of magnitude smaller than the previous local model from Northern Switzerland. Here, the primary focus is to investigate the hypothesis that the Mw 1.9 earthquake of 27 December 2007 was induced by stress changes due to the mining process. The Coulomb failure stress change (DeltaCFS) was used to analyse the stress change. It confirmed that the seismic event was induced by static stress transfer due to the mining progress. The rock was brought closer to failure on the derived rupture plane by stress changes of up to 1.5–15 MPa, depending on the type of DeltaCFS analysis. Forward modelling of a generic excavation scheme reveals that the DeltaCFS values increase significantly with decreasing distance to the dyke. Hence, even small changes in the mining progress can have a significant impact on the seismic hazard risk, i.e. the probability of inducing a seismic event of economic concern.
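The quantity underlying such an analysis is the standard Coulomb failure stress change, ΔCFS = Δτ + μ′Δσn, where Δτ is the shear stress change resolved in the slip direction, Δσn the normal stress change (tension positive, i.e. unclamping), and μ′ an effective friction coefficient absorbing pore pressure effects. A minimal Python sketch, with hypothetical numbers that are not values from the thesis:

```python
def delta_cfs(d_shear, d_normal, mu_eff=0.6):
    """Coulomb failure stress change on a given plane.
    d_shear:  change in shear stress in slip direction (MPa); positive promotes slip
    d_normal: change in normal stress (MPa); tension positive (unclamping)
    mu_eff:   effective friction coefficient (includes pore pressure effects)
    Positive DeltaCFS brings the plane closer to failure."""
    return d_shear + mu_eff * d_normal

# Hypothetical illustration: 1.2 MPa added shear, 0.5 MPa unclamping
print(delta_cfs(1.2, 0.5))   # ≈ 1.5 MPa -> closer to failure
print(delta_cfs(0.0, -1.0))  # negative -> plane is clamped, stabilized
```

The sign conventions (tension-positive normal stress) vary between publications, so they must be checked before comparing ΔCFS values across studies.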
The H.E.S.S. array is a third generation Imaging Atmospheric Cherenkov Telescope (IACT) array. It is located in the Khomas Highland in Namibia and measures very high energy (VHE) gamma-rays. In Phase I, the array started data taking in 2004 with its four identical 13 m telescopes. Since then, H.E.S.S. has emerged as the most successful IACT experiment to date. Among the almost 150 sources of VHE gamma-ray radiation found so far, even the oldest detection, the Crab Nebula, keeps surprising the scientific community with unexplained phenomena such as the recently discovered very energetic flares of high energy gamma-ray radiation. During its most recent flare, which was detected by the Fermi satellite in March 2013, the Crab Nebula was simultaneously observed with the H.E.S.S. array for six nights. The results of these observations are discussed in detail in the course of this work. During the nights of the flare, the new 24 m × 32 m H.E.S.S. II telescope was still being commissioned, but participated in the data taking for one night. To be able to reconstruct and analyze the data of the H.E.S.S. Phase II array, the algorithms and software used by the H.E.S.S. Phase I array had to be adapted. The most prominent advanced shower reconstruction technique, developed by de Naurois and Rolland, is the template-based model analysis, which compares real shower images taken by the Cherenkov telescope cameras with shower templates obtained from a semi-analytical model. To find the best fitting image and, therefore, the relevant parameters that best describe the air shower, a pixel-wise log-likelihood fit is performed. The adaptation of this advanced shower reconstruction technique to the heterogeneous H.E.S.S. Phase II array for stereo events (i.e. air showers seen by at least two telescopes of any kind), its performance using Monte Carlo simulations, as well as its application to real data are described.
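The pixel-wise log-likelihood comparison at the heart of template-based reconstruction can be illustrated with a toy example. Assuming simple Poisson statistics per pixel (the real model analysis uses a more elaborate likelihood that accounts for night-sky background and single-photoelectron resolution), selecting the best template reduces to maximizing the summed per-pixel log-likelihood; all numbers below are hypothetical.

```python
import math

def poisson_loglike(image, template):
    """Summed per-pixel Poisson log-likelihood of an observed camera
    image given a predicted shower template (simplified toy model)."""
    ll = 0.0
    for n, mu in zip(image, template):
        mu = max(mu, 1e-9)  # avoid log(0) for empty template pixels
        ll += n * math.log(mu) - mu - math.lgamma(n + 1)
    return ll

def best_template(image, templates):
    """Index of the template maximizing the log-likelihood."""
    return max(range(len(templates)),
               key=lambda i: poisson_loglike(image, templates[i]))

image = [3, 7, 5, 1]                                   # observed pixel counts
templates = [[1, 1, 1, 1], [3, 7, 5, 1], [10, 2, 2, 8]]  # predicted intensities
print(best_template(image, templates))                 # 1
```

In the actual analysis the templates are parametrized continuously by shower energy, direction and impact point, so the discrete argmax becomes a numerical fit over those parameters.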
The background of civil service reform in Indonesia reveals the emergence of the reformation movement in 1998, following the fall of the authoritarian New Order regime. The reformation movement has seen the introduction of reforms in Indonesia's various governmental institutions, including the civil service. The civil service reforms were marked by the revision of Act 8/74 with Act 43 of 1999 on Civil Service Administration. The implementation of the civil service reform program, which was carried out by both central and local governments, required cooperation between the actors (in particular, ministries, agencies and local governments), known as coordination.
Currently, the coordination that occurs between actors tends to be rigid and hierarchical. As a result, targets are not efficiently and effectively met. Hierarchical coordination, without a strong public sector infrastructure, tends to have a negative impact on achieving the desired outcomes of the civil service reform program. As an intrinsic part of the New Order regime, hierarchical coordination resulted in inefficiency and lack of efficacy. Despite these inefficiencies, the administration and the political environment have changed significantly as a result of the reform process.
Obvious examples of the reforms are changes in recruitment patterns, placement and remuneration policies. However, in the case of Indonesia, it appears that every state institution has its own policy. Thus, there has not been policy coherence in the civil service reform program, resulting in the lack of a sustainable program. The important question to examine is how the coordination mechanisms of the civil service reform program in the central government have developed during the reform era in Indonesia.
The purpose of this study is to analyse the linkages between coordination mechanisms and the actual implementation of civil service reform programs. This is undertaken as a basis for intervention in the structures and patterns of coordination mechanisms in the implementation of civil service reform programs. The next step is to formulate the development of coordination mechanisms, particularly to create structures and patterns of civil service reform that are more sustainable and suited to the specific characteristics of public sector organisations in Indonesia.
The benefits of this research are a stronger understanding of the linkages between the mechanisms of coordination and the implementation of civil service reform programs. Through this analysis, the findings can then be applied as a basic consideration in planning a sustainable civil service reform program. On the basis of theoretical issues concerning the linkages between coordination mechanisms and the sustainability of civil service reform programs, this book explores the type of coordination that is needed to test the proportional and sustainable concept of the intended civil service reform program in Indonesia.
Research conducted through studies, surveys and donors has shown that poor coordination is the major hindrance to civil service reform in Indonesia. This research employs a qualitative approach. In this study, the coordination mechanisms and implementation of civil service reform programs are demonstrated by means of case studies of the State Ministry for Administrative Reform, the National Civil Service Agency and the National Public Administration Institute. The coordination mechanisms in these ministries and agencies were analysed using indicators of effective and efficient coordination. The analysis of the coordination mechanisms shows a tendency towards rigid hierarchical coordination. This raises concerns about fragmentation among departments and agencies at the central government level and calls for integrated civil service reform at both central and local governmental levels. In the context of implementation programs, a hierarchical mechanism of coordination influences various aspects, such as the program formulation, the implementation flow of the program, the impact of policies, and achievement targets. In particular, there was a shift in the mainstream of civil service reform in the ministries and agencies, marked by the emergence of sectoral interests and inefficiencies in the civil service reform program. The primary result of successful civil service reform is increased professionalism in the civil service.
The findings on hierarchical mechanisms and the prescriptions which will follow show that the professionalism of Indonesia’s civil service is at stake. The implementation of the program through coordination mechanisms in Ministries and agencies is measured in various dimensions: the centre of coordination, integration of coordination, sustainability of coordination and multidimensionality of coordination.
The results of this analysis show that coordination mechanisms and the implementation of civil service reform are more successful when they follow an integration approach rather than hierarchical mechanisms. For a successful implementation of the reform program, it is crucial to intervene and change the type of coordination at the central government level through the integration approach (hierarchy, market, and network). Furthermore, in order to move towards the integration type of coordination mechanism, the separation of administration and politics in the practice of good governance needs to be carried out immediately and simultaneously. Based on this analysis, it can be concluded that the integration type of coordination mechanism is suitable for a sustainable civil service reform program in Indonesia. Finally, to achieve coherent civil service reforms, national policies developed according to the central government's priorities are indispensable, establishing a coordination mechanism that can be adhered to throughout all reform sectors.
Despite the remarkable progress made in the past century, which has revolutionized our understanding of the universe, there are numerous open questions left in theoretical physics. Particularly important is the fact that the theories describing the fundamental interactions of nature are incompatible. Einstein's theory of general relativity describes gravity as a dynamical spacetime, which is curved by matter and whose curvature determines the motion of matter. On the other hand we have quantum field theory, in the form of the standard model of particle physics, where particles interact via the remaining interactions - electromagnetic, weak and strong interaction - on a flat, static spacetime without gravity. A theory of quantum gravity is hoped to cure this incompatibility by heuristically replacing classical spacetime by 'quantum spacetime'. Several approaches exist that attempt to define such a theory, with differing underlying premises and ideas, and it is not clear which is to be preferred. Yet a minimal requirement is compatibility with the classical theory they attempt to generalize. Interestingly, many of these models rely on discrete structures in their definition or postulate discreteness of spacetime to be fundamental. Besides the direct advantages discretisations provide, e.g. permitting numerical simulations, they come with serious caveats requiring thorough investigation: in general, discretisations break the fundamental diffeomorphism symmetry of gravity and are generically not unique. Both complicate establishing the connection to the classical continuum theory. The main focus of this thesis lies in the investigation of this relation for spin foam models. This is done on different levels of the discretisation / triangulation, ranging from a few simplices up to the continuum limit. In the regime of very few simplices we confirm and deepen the connection of spin foam models to discrete gravity. Moreover, we discuss dynamical principles, e.g. 
diffeomorphism invariance in the discrete, to fix the ambiguities of the models. In order to satisfy these conditions, the discrete models have to be improved in a renormalisation procedure, which also allows us to study their continuum dynamics. Applied to simplified spin foam models, we uncover a rich, non-trivial fixed point structure, which we summarize in a phase diagram. Inspired by these methods, we propose a method to consistently construct the continuum theory, which comes with a unique vacuum state.
Previous studies on the acquisition of verb inflection in normally developing children have revealed an astonishing pattern: children use correctly inflected verbs in their own speech but fail to make use of verb inflections when comprehending sentences uttered by others. Thus, a three-year-old might well be able to say something like ‘The cat sleeps on the bed’, but fails to understand that the same sentence, when uttered by another person, refers to only one sleeping cat and not more than one. The previous studies that have examined children's comprehension of verb inflections have employed a variant of a picture selection task in which the child was asked to explicitly indicate (via pointing) what semantic meaning she had inferred from the test sentence. Recent research on other linguistic structures, such as pronouns or focus particles, has indicated that earlier comprehension abilities can be found when methods are used that do not require an explicit reaction, such as preferential looking tasks. This dissertation aimed to examine whether children are truly unable to understand the connection between the verb form and the meaning of the sentence subject until the age of five years, or whether earlier comprehension can be found when a different measure, preferential looking, is used. Additionally, children's processing of subject-verb agreement violations was examined. The three experiments of this thesis that examined children's comprehension of verb inflections revealed the following: German-speaking three- to four-year-old children looked more at a picture showing one actor when hearing a sentence with a singular inflected verb, but only when their eye gaze was tracked and they did not have to perform a picture selection task. When they were asked to point to the matching picture, they performed at chance level. This pattern indicates asymmetries in children's language performance even within the receptive modality. 
The fourth experiment examined sensitivity to subject-verb agreement violations and did not reveal evidence for sensitivity to agreement violations in three- and four-year-old children; only at the age of five were children's looking patterns influenced by the grammatical violations. The results from these experiments are discussed in relation to the existence of a production-comprehension asymmetry in the use of verb inflections and children's underlying grammatical knowledge.
Synchronization is a fundamental phenomenon in nature. It can be considered as a general property of self-sustained oscillators to adjust their rhythm in the presence of an interaction.
In this work we investigate complex regimes of synchronization phenomena by means of theoretical analysis, numerical modeling, as well as practical analysis of experimental data.
As the subject of our investigation we consider the chimera state, in which spontaneous symmetry breaking splits an initially homogeneous oscillator lattice into two parts with different dynamics. The chimera state as a new synchronization phenomenon was first found in a system of non-locally coupled oscillators and has attracted a lot of attention in the last decade. However, recent studies indicate that this state is also possible in globally coupled systems. In the first part of this work, we show under which conditions the chimera-like state appears in a system of globally coupled identical oscillators with intrinsic delayed feedback. The results of the research explain how initially monostable oscillators become effectively bistable in the presence of the coupling and create a mean field that sustains the coexistence of synchronized and desynchronized states. We also discuss other examples where a chimera-like state appears due to the frequency dependence of the phase shift in the bistable system.
In the second part, we investigate this topic further by modeling the influence of an external periodic force on an oscillator with intrinsic delayed feedback. We performed a stability analysis of the synchronized state and constructed Arnold tongues. The results explain the formation of the chimera-like state and the hysteretic behavior of the synchronization area. We also consider two sets of parameters of the oscillator, with symmetric and asymmetric Arnold tongues, that correspond to the mono- and bistable regimes of the oscillator.
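The behavior of an oscillator with intrinsic delayed feedback can be sketched numerically with a minimal phase model, dφ/dt = ω + k·sin(φ(t−τ) − φ(t)), integrated by the Euler method with a history buffer. The model and all parameter values below are hypothetical illustrations, not the exact equations of the thesis; the point is that the delayed feedback shifts the effective oscillation frequency away from the natural frequency ω.

```python
import math

def simulate(omega=1.0, k=0.5, tau=2.0, dt=0.01, t_max=50.0):
    """Euler integration of a phase oscillator with delayed self-feedback:
    dphi/dt = omega + k * sin(phi(t - tau) - phi(t)).
    The history buffer stores phi over the last tau time units;
    phi(t <= 0) is initialized to 0."""
    n_delay = int(tau / dt)
    phi = [0.0] * (n_delay + 1)
    for _ in range(int(t_max / dt)):
        delayed = phi[-(n_delay + 1)]  # phi(t - tau)
        phi.append(phi[-1] + dt * (omega + k * math.sin(delayed - phi[-1])))
    return phi

traj = simulate()
# effective frequency over the last 10 time units (after transients);
# for these parameters it settles below the natural frequency omega = 1:
print((traj[-1] - traj[-1001]) / (1000 * 0.01))
```

Scanning the amplitude and frequency of an additional external forcing term in such a model is one way to map out Arnold tongues numerically.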
In the third part, we present the results of work done in collaboration with our colleagues from the Psychology Department of the University of Potsdam. The project aimed to study the effect of the cardiac rhythm on human time perception using synchronization analysis. For our part, we performed a statistical analysis of the data obtained from an experiment on a free time-interval reproduction task. We examined how one's heartbeat influences time perception and searched for possible phase synchronization between heartbeat cycles and time reproduction responses. The findings support the prediction that cardiac cycles can serve as input signals and are used for the reproduction of time intervals in the range of several seconds.
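A phase synchronization analysis of this kind can be sketched as follows: assign a phase to each event series, then compute the mean resultant length of the pairwise phase differences, which equals 1 for perfect phase locking and tends to 0 for unrelated signals. This is a standard synchronization index; the exact measure and surrogate tests used in the thesis may differ. The phases below are synthetic.

```python
import cmath
import math
import random

def sync_index(phases_a, phases_b):
    """Phase synchronization index: mean resultant length of the
    phase differences. 1 = perfect locking, ~0 = no synchronization."""
    z = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(z) / len(phases_a)

# synthetic example: perfectly locked phases (constant shift) vs. noise
locked = [(0.1 * n) % (2 * math.pi) for n in range(1000)]
shifted = [p + 0.7 for p in locked]
random.seed(0)
noise = [random.uniform(0, 2 * math.pi) for _ in range(1000)]

print(sync_index(locked, shifted))  # ≈ 1.0 (locked)
print(sync_index(locked, noise))    # near 0 (unsynchronized)
```

In practice, significance of the index is assessed against surrogate data, since a finite sample of unrelated phases yields a small but nonzero value (of order 1/√N).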
The adaptation of cell growth and proliferation to environmental changes is essential for the survival of biological systems. The evolutionarily conserved Ser/Thr protein kinase “Target of Rapamycin” (TOR) has emerged as a major signaling node that links the sensing of numerous growth signals to the coordinated regulation of cellular metabolism and growth. Although the TOR signaling pathway has been widely studied in heterotrophic organisms, research on TOR in photosynthetic eukaryotes has been hampered by the reported resistance of land plants to rapamycin. Thus, the finding that Chlamydomonas reinhardtii is sensitive to rapamycin establishes this unicellular green alga as a useful model system to investigate TOR signaling in photosynthetic eukaryotes.
The observation that rapamycin does not fully arrest Chlamydomonas growth, which differs from observations made in other organisms, prompted us to investigate the regulatory function of TOR in Chlamydomonas in the context of the cell cycle. Therefore, a growth system that allowed synchronous growth under largely unperturbed cultivation in a fermenter system was set up, and the synchronized cells were characterized in detail. In a highly resolved kinetic study, the synchronized cells were analyzed for changes in cytological parameters such as cell number, size distribution and starch content. Furthermore, we applied mass spectrometric analysis for profiling of the primary and lipid metabolism. This system was then used to analyze the response dynamics of the Chlamydomonas metabolome and lipidome to TOR inhibition by rapamycin.
The results show that TOR inhibition reduces cell growth, delays cell division and daughter cell release, and results in a 50% reduced cell number at the end of the cell cycle. Consistent with the growth phenotype, we observed strong changes in carbon and nitrogen partitioning toward rapid conversion into carbon and nitrogen stores through the accumulation of starch, triacylglycerol (TAG), and arginine. Interestingly, the conversion of carbon into triacylglycerol appears to have occurred faster than the conversion into starch after TOR inhibition, which may indicate a more dominant role of TOR in the regulation of TAG biosynthesis than in the regulation of starch biosynthesis.
This study clearly shows, for the first time, a detailed picture of the dynamic metabolic and lipidomic changes during the cell cycle of Chlamydomonas reinhardtii, and furthermore reveals a complex regulation and adjustment of metabolite pools and lipid composition in response to TOR inhibition.
Characterization of drought tolerance in potato cultivars for identification of molecular markers
(2014)
Entrepreneurship is known to be a main driver of economic growth. Hence, governments have an interest in supporting and promoting entrepreneurial activities. Start-up subsidies, which have been analyzed extensively, only aim at mitigating the lack of financial capital. However, some entrepreneurs also lack human, social, and managerial capital. One way to address these shortcomings is by subsidizing coaching programs for entrepreneurs. However, theoretical and empirical evidence about business coaching and programs subsidizing coaching is scarce. This dissertation gives an extensive overview of coaching and is the first empirical study for Germany analyzing the effects of coaching programs on their participants. In the theoretical part of the dissertation, the process of a business start-up is described, and it is discussed how and in which stage of the company’s evolvement coaching can influence entrepreneurial success. The concept of coaching is compared to other non-monetary types of support such as training, mentoring, consulting, and counseling. Furthermore, national and international support programs are described. Most programs have either no or small positive effects. However, there is little quantitative evidence in the international literature. In the empirical part of the dissertation, the effectiveness of coaching is examined by evaluating two German coaching programs, which support entrepreneurs via publicly subsidized coaching sessions. One of the programs aims at entrepreneurs who have been employed before becoming self-employed, whereas the other program is targeted at formerly unemployed entrepreneurs. The analysis is based on the evaluation of a quantitative and a qualitative dataset. The qualitative data were gathered in intensive one-on-one interviews with coaches and entrepreneurs. These data give detailed insight into the coaching topics, duration, process, and effectiveness, as well as into the thoughts of coaches and entrepreneurs.
The quantitative data include information about 2,936 German-based entrepreneurs. Using propensity score matching, the success of participants in the two coaching programs is compared with adequate groups of non-participants. In contrast to many other studies, personality traits are also observed and controlled for in the matching process. The results show that only the program for formerly unemployed entrepreneurs has small positive effects. Participants have a larger survival probability in self-employment and a larger probability of hiring employees than matched non-participants. In contrast, the program for formerly employed individuals has negative effects. Compared to individuals who did not participate in the coaching program, participants have a lower probability of staying in self-employment, lower net income, fewer employees, and lower life satisfaction. There are several reasons for these differing results of the two programs. First, formerly unemployed individuals have more basic coaching needs than formerly employed individuals. Coaches can satisfy these basic coaching needs, whereas formerly employed individuals have more complex business problems, which cannot easily be solved by a coaching intervention. Second, the analysis reveals that formerly employed individuals are very successful in general. It is easier to increase the success of formerly unemployed individuals, as they have a lower base level of success than formerly employed individuals. An effect heterogeneity analysis shows that coaching effectiveness differs by region. Coaching for previously unemployed entrepreneurs is especially useful in regions with bad labor market conditions. In summary, in line with previous literature, it is found that coaching has little effect on the success of entrepreneurs. The previous employment status, the characteristics of the entrepreneur, and the regional labor market conditions play a crucial role in the effectiveness of coaching.
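The propensity score matching approach used above can be sketched in a few lines: estimate each individual's probability of participation from observed covariates, then pair each participant with the non-participant whose estimated probability is closest. The logistic fit and matching routine below are a simplified illustration, not the dissertation's actual estimation procedure.

```python
import numpy as np

def propensity_scores(X, treated, lr=0.1, steps=2000):
    """Estimate P(participation | X) with a plain logistic regression
    fitted by gradient descent (numpy only, for illustration)."""
    Xb = np.hstack([np.ones((len(X), 1)), X])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (treated - p) / len(X)
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def matched_effect(scores, treated, outcome):
    """Nearest-neighbor matching on the propensity score: pair each
    participant with the closest non-participant (with replacement)
    and average the outcome differences over all pairs."""
    ctrl = np.where(treated == 0)[0]
    diffs = []
    for i in np.where(treated == 1)[0]:
        j = ctrl[np.argmin(np.abs(scores[ctrl] - scores[i]))]
        diffs.append(outcome[i] - outcome[j])
    return float(np.mean(diffs))
```

On data where participation correlates with covariates that also drive the outcome, this matched comparison recovers the participation effect far better than a naive difference in group means would.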
In conclusion, coaching needs to be well tailored to the individual and applied thoroughly. Therefore, governments should design and provide coaching programs only after due consideration.
This work introduces concepts and corresponding tool support to enable a complementary approach to dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid such work, the literature recommends preventing unexpected recovery demands by following a structured and disciplined approach, which consists of applying various best practices, including working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying the recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications. It is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools has been accompanied by regular performance and usability tests.
In addition, this work investigates whether the proposed tools affect programmers’ performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated measurement setup, the study examined the effect of providing CoExist on programming performance. The result of analyzing 88 hours of programming suggests that built-in recovery support as provided with CoExist has a positive effect on programming performance in explorative programming tasks.
In the field of disk-based parallel database management systems, there exists a great variety of solutions based on a shared-storage or a shared-nothing architecture. In contrast, main memory-based parallel database management systems are dominated solely by the shared-nothing approach, as it preserves the in-memory performance advantage by processing data locally on each server. We argue that this unilateral development is going to cease due to the combination of the following three trends: a) Today's network technology features remote direct memory access (RDMA) and narrows the performance gap between accessing main memory inside a server and accessing the main memory of a remote server to, or even below, a single order of magnitude. b) Modern storage systems scale gracefully, are elastic, and provide high availability. c) A modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main-memory parallel database management system is desirable. The advent of RDMA-enabled network technology makes the creation of a parallel main memory DBMS based on a shared-storage approach feasible.
This thesis describes building a columnar database on shared main memory-based storage. The thesis discusses the resulting architecture (Part I), the implications on query processing (Part II), and presents an evaluation of the resulting solution in terms of performance, high-availability, and elasticity (Part III).
In our architecture, we use Stanford's RAMCloud as shared storage, with AnalyticsDB, an in-memory relational query processor that we designed and developed, on top. AnalyticsDB encapsulates data access and operator execution via an interface that allows seamless switching between local and remote main memory, while RAMCloud provides not only storage capacity but also processing power. Combining both aspects allows pushing down the execution of database operators into the storage system. We describe how the columnar data processed by AnalyticsDB is mapped to RAMCloud's key-value data model and how the performance advantages of columnar data storage can be preserved.
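One common way to realize such a column-to-key-value mapping is to split each column into fixed-size blocks and store each block as one value under a composite key. The key scheme, block size, and function names below are illustrative assumptions, not the mapping AnalyticsDB actually uses.

```python
# Hypothetical mapping of a column-oriented table onto a key-value
# store: each column is split into fixed-size blocks, and each block
# becomes one value under a composite key "table/column/block_id".

BLOCK_SIZE = 4  # entries per block; real systems use much larger blocks

def column_to_kv(table, column, values):
    """Split a column into blocks and return the key-value pairs."""
    kv = {}
    for start in range(0, len(values), BLOCK_SIZE):
        key = f"{table}/{column}/{start // BLOCK_SIZE}"
        kv[key] = values[start:start + BLOCK_SIZE]
    return kv

def read_column(kv, table, column):
    """Reassemble a column by reading its blocks in order."""
    out, block = [], 0
    while (key := f"{table}/{column}/{block}") in kv:
        out.extend(kv[key])
        block += 1
    return out
```

Keeping each block as one contiguous value preserves the sequential-access pattern that makes columnar scans fast, even though the underlying store only knows keys and opaque values.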
The combination of fast network technology and the possibility of executing database operators in the storage system opens the discussion of site selection. We construct a system model that allows the estimation of operator execution costs in terms of network transfer, data processed in memory, and wall time. This can be used for database operators that work on one relation at a time - such as a scan or materialize operation - to discuss the site selection problem (data pull vs. operator push). Since a database query translates to the execution of several database operators, it is possible that the optimal site selection varies per operator. For the execution of a database operator that works on two (or more) relations at a time, such as a join, the system model is enriched by additional factors such as the chosen algorithm (e.g. Grace- vs. Distributed Block Nested Loop Join vs. Cyclo-Join), the data partitioning of the respective relations and their overlap, as well as the allowed resource allocation.
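The site-selection trade-off for a single-relation operator can be sketched with a back-of-the-envelope cost estimate. All constants and the function below are illustrative assumptions, not the thesis' actual system model.

```python
# Back-of-the-envelope cost estimate for the site-selection decision
# for a scan over one relation, assuming identical memory bandwidth
# at the query processor and the storage node.

def scan_cost(n_bytes, selectivity, bw_net, bw_mem, push):
    """Estimated wall time in seconds.
    data pull (push=False): transfer the whole column over the
        network, then scan it locally.
    operator push (push=True): scan at the storage node, transfer
        only the matching fraction of the data."""
    scan_time = n_bytes / bw_mem
    if push:
        transfer_time = selectivity * n_bytes / bw_net
    else:
        transfer_time = n_bytes / bw_net
    return scan_time + transfer_time
```

With a selective predicate, pushing the scan into the storage system transfers far less data and wins; as the selectivity approaches 1, the two strategies converge, which is exactly why the optimal site can vary per operator within one query.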
We present an evaluation on a cluster with 60 nodes, all connected via RDMA-enabled network equipment. We show that query processing performance is about 2.4x slower if everything is done via the data pull operator execution strategy (i.e. RAMCloud is used only for data access) and about 27% slower if operator execution is also supported inside RAMCloud (in comparison to operating only on main memory inside a server without any network communication at all). The fast-crash-recovery feature of RAMCloud can be leveraged to provide high availability; e.g. a server crash during query execution only delays the query response by about one second. Our solution is elastic in that it can adapt to changing workloads a) within seconds, b) without interruption of the ongoing query processing, and c) without manual intervention.