The purpose of this thesis is to develop an automated inversion scheme to derive point and finite source parameters for weak earthquakes, here meaning earthquakes with magnitudes at or below the lower magnitude threshold of standard source inversion routines. The adopted inversion approaches rely entirely on existing inversion software, with the methodological work mostly targeting the development and tuning of optimized inversion flows. The resulting inversion scheme is tested on very different datasets, which allows a discussion of the source inversion problem at different scales. In the first application, dealing with mining-induced seismicity, the determination of source parameters is addressed at a local scale, with source-sensor distances of less than 3 km. In this context, weak seismicity corresponds to events below magnitude MW 2.0, which are rarely the target of automated source inversion routines. The second application considers a regional dataset, namely the aftershock sequence of the 2010 Maule earthquake (Chile), using broadband stations at regional distances below 300 km. In this case, the magnitudes of the target aftershocks range down to MW 4.0. This dataset is considered here as a weak-seismicity case, since such moderate seismicity is generally investigated only by moment tensor inversion routines, with no attempt to resolve source duration or finite source parameters. In this work, automated multi-step inversion schemes are applied to both datasets with the aim of resolving point source parameters, using both double couple (DC) and full moment tensor (MT) models, as well as source duration and finite source parameters. A major result of the analysis of weaker events is the increased size of the resulting moment tensor catalogues, whose interpretation may become non-trivial. For this reason, a novel focal mechanism clustering approach is used to automatically classify focal mechanisms, allowing the investigation of the most relevant and repetitive rupture features. The inversion of the mining-induced seismicity dataset reveals the repeated occurrence of similar rupture processes, where the source geometry is controlled by the shape of the mined panel. Moreover, moment tensor solutions indicate a significant contribution of tensile processes. The second application also highlights characteristic geometrical features of the fault planes, which show a general consistency with the orientation of the slab. The additional inversion for source duration allowed verification of the empirical correlation, so far observed only for larger events, between decreasing rupture duration and increasing source depth for moment-normalized earthquakes in subduction zones.
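For orientation: the comparison of DC and full MT solutions rests on the standard decomposition of the moment tensor (textbook convention, not specific to this thesis) into an isotropic part and a deviatoric part, the latter further split into double-couple and compensated linear vector dipole (CLVD) components,
    M = (tr(M)/3) I + M_DC + M_CLVD,
and a significant isotropic and/or CLVD contribution is what flags non-shear mechanisms such as the tensile processes reported for the mining-induced events.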
Inferring gene regulatory networks and cellular phases from time-resolved transcriptomics data
(2014)
Linked Open Data (LOD) comprises numerous, often very large public data sets and knowledge bases. These datasets are mostly represented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, is desirable and ideally provided by an ontology. However, in the context of LOD, ontologies are often incomplete or simply not available. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, which was originally applied to sales analysis on transactional databases, is a promising and novel technique for exploring such data. We designed an adaptation of this technique for mining RDF data and introduce the concept of "mining configurations", which allows us to mine RDF data sets in various ways. Different configurations enable us to identify schema and value dependencies that in combination result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage, providing an editing user with a sorted list of commonly used predicates. A combination of different configurations extends this approach to create completely new facts for a knowledge base. We present two approaches for fact generation: a user-based approach, where a user selects the entity to be amended with new facts, and a data-driven approach, where an algorithm discovers entities that have to be amended with missing facts. As knowledge bases constantly grow and evolve, another approach to improve the usage of RDF data is to improve existing ontologies. Here, we present an association-rule-based approach to reconcile ontology and data. Interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. Those predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world datasets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining configuration methodology.
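To make the notion of a mining configuration concrete, here is a minimal, hypothetical sketch (toy triples and an arbitrary threshold; not the thesis implementation): subjects act as transactions and predicates as items, and the resulting predicate-to-predicate rules are of the kind that drive the auto-completion use case.
    from collections import defaultdict
    from itertools import combinations

    # Toy RDF triples (subject, predicate, object) -- illustrative only.
    triples = [
        ("Berlin", "type", "City"), ("Berlin", "country", "Germany"),
        ("Berlin", "population", "3600000"),
        ("Paris", "type", "City"), ("Paris", "country", "France"),
        ("Paris", "population", "2200000"),
        ("Rhine", "type", "River"), ("Rhine", "country", "Germany"),
    ]

    # Configuration: subject = transaction, predicate = item.
    transactions = defaultdict(set)
    for s, p, o in triples:
        transactions[s].add(p)

    # Count supports of single predicates and predicate pairs.
    item_support = defaultdict(int)
    pair_support = defaultdict(int)
    for items in transactions.values():
        for p in items:
            item_support[p] += 1
        for pair in combinations(sorted(items), 2):
            pair_support[pair] += 1

    # Emit rules p -> q with support and confidence; an editor that has
    # already used 'type' and 'country' would be offered 'population'.
    MIN_SUPPORT = 2
    for (a, b), sup in pair_support.items():
        if sup >= MIN_SUPPORT:
            print(f"{a} -> {b} (support={sup}, confidence={sup / item_support[a]:.2f})")
            print(f"{b} -> {a} (support={sup}, confidence={sup / item_support[b]:.2f})")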
Cyanobacteria produce about 40 percent of the world's primary biomass, but also a variety of often toxic peptides, such as microcystin. Mass developments, so-called blooms, can pose a real threat to the drinking water supply in many parts of the world. This study aimed at characterizing the biological function of microcystin production in one of the most common bloom-forming cyanobacteria, Microcystis aeruginosa.
In a first approach, the effect of elevated light intensity on microcystin production and its binding to cellular proteins was studied. To this end, conventional microcystin quantification techniques were combined with protein-biochemical methods. RubisCO, the key enzyme of primary carbon fixation, was a major microcystin interaction partner. High light exposure strongly stimulated microcystin-protein interactions. Up to 60 percent of the total cellular microcystin was detected bound to proteins, i.e. inaccessible to standard quantification procedures. The underestimation of total microcystin contents when neglecting the protein fraction was also demonstrated in field samples. Finally, an immunofluorescence-based method was developed to identify microcystin-producing cyanobacteria in mixed populations.
The high-light-induced microcystin interaction with proteins suggested an impact of the secondary metabolite on the primary metabolism of Microcystis, e.g. by modulating the activity of enzymes. To address this question, a comprehensive GC/MS-based approach was conducted to compare the accumulation of metabolites in the wild type of Microcystis aeruginosa PCC 7806 and the microcystin-deficient ΔmcyB mutant. Of the 501 detected non-redundant metabolites, 85 (17 percent) accumulated significantly differently in one of the two genotypes upon high light exposure. The accumulation of compatible solutes in the ΔmcyB mutant suggests a role of microcystin in fine-tuning the metabolic flow to prevent stress related to excess light, high oxygen concentration and carbon limitation.
Co-analysis of the widely used model cyanobacterium Synechocystis PCC 6803 revealed profound metabolic differences between cyanobacterial species. Whereas Microcystis channeled more resources towards carbohydrate synthesis, Synechocystis invested more in amino acids. These findings were supported by electron microscopy of high-light-treated cells and the quantification of storage compounds. While Microcystis accumulated mainly glycogen, up to about 8.5 percent of its fresh weight within three hours, Synechocystis produced higher amounts of cyanophycin. The results show that the characterization of species-specific metabolic features should gain more attention with regard to the biotechnological use of cyanobacteria.
This work elaborates on the question of whether coaches in non-professional soccer can influence referee decisions. From a principal-agent perspective, the managing referee boards can be seen as the principal. They aim to facilitate a fair competition in accordance with the existing rules and regulations. To this end, the referees are assigned as impartial agents on the pitch. The coaches assume a non-legitimate, principal-like role, trying to influence the referees even though they lack the formal right to do so.
Separate questionnaires were set up for referees and coaches. The coach questionnaire aimed at identifying the extent and forms of influencing attempts by coaches. The referee questionnaire addressed the questions of whether referees notice possible influencing attempts and how they react to them.
The results were related to official match data in order to identify significant influences on personal sanctions (yellow cards, second yellow cards, red cards) and the match result.
A slight effect on the referees' decisions is found. However, this effect is rather disadvantageous for the influencing coach, and there is no evidence of an impact on the match result itself.
The monsoon is an important component of the Earth's climate system. It has played a vital role in the development and sustenance of the largely agro-based economy of India. A better understanding of past variations in the Indian Summer Monsoon (ISM) is necessary to assess its nature under global warming scenarios. However, our knowledge of the spatiotemporal patterns of past ISM strength, as inferred from proxy records, is limited by the lack of high-resolution paleo-hydrological records from the core monsoon domain.
In this thesis I aim to improve our understanding of Holocene ISM variability in the core 'monsoon zone' (CMZ) of India. To achieve this goal, I first characterized the modern hydrology and then reconstructed the Holocene monsoonal hydrology by studying surface sediments and a high-resolution sedimentary record from the saline-alkaline Lonar crater lake, central India. My approach relies on analyzing stable carbon and hydrogen isotope ratios of sedimentary lipid biomarkers to track past hydrological changes.
In order to evaluate the relationship between the modern ecosystem and the hydrology of the lake, I studied the distribution of lipid biomarkers in the modern ecosystem and compared it to lake surface sediments. The major plants of the dry deciduous mixed forest produced a greater amount of leaf wax n-alkanes and a greater fraction of n-C31 and n-C33 alkanes relative to n-C27 and n-C29. The relatively high average chain length (ACL) values (29.6–32.8) of these plants appear common for vegetation from an arid and warm climate. Additionally, I found that human influence and the resulting nutrient supply increase lake primary productivity, leading to an unusually high concentration of tetrahymanol, a biomarker for salinity and water column stratification, in the nearshore sediments. Given this inhomogeneous deposition of tetrahymanol in modern sediments, I hypothesize that lake level fluctuations, in addition to source changes, may affect aquatic lipid biomarker distributions in lacustrine sediments.
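For reference, the ACL values quoted here follow the conventional definition as the concentration-weighted mean chain length of the odd-numbered leaf wax n-alkanes (assuming the C27–C33 homologue range discussed above):
    ACL = Σ (n × [C_n]) / Σ [C_n],  with n = 27, 29, 31, 33,
where [C_n] is the concentration of the n-alkane with n carbon atoms.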
I reconstructed centennial-scale hydrological variability associated with changes in ISM intensity based on a record of leaf wax and aquatic biomarkers and their stable carbon (δ13C) and hydrogen (δD) isotopic compositions from a 10 m long sediment core from the lake. I identified three main periods of distinct hydrology over the Holocene in central India. The period between 10.1 and 6 cal. ka BP was likely the wettest during the Holocene. Lower ACL index values (29.4 to 28.6) of leaf wax n-alkanes and their negative δ13C values (–34.8‰ to –27.8‰) indicated the dominance of woody C3 vegetation in the catchment, and negative δDwax (average for leaf wax n-alkanes) values (–171‰ to –147‰) argued for a wet period due to an intensified monsoon. After 6 cal. ka BP, a gradual shift to less negative δ13C values (particularly for the grass-derived n-C31) and the appearance of the triterpene lipid tetrahymanol, generally considered a marker for salinity and water column stratification, marked the onset of drier conditions. At 5.1 cal. ka BP, an increasing flux of leaf wax n-alkanes along with the highest flux of tetrahymanol indicated proximity of the lakeshore to the lake center due to a major lake level decrease. Rapid fluctuations in the abundance of both terrestrial and aquatic biomarkers between 4.8 and 4 cal. ka BP indicated an unstable lake ecosystem, culminating in a transition to arid conditions. A pronounced shift to less negative δ13C values, in particular for n-C31 (–25.2‰ to –22.8‰), over this period indicated a change of the dominant vegetation to C4 grasses. Together with a 40‰ increase in leaf wax n-alkane δD values, which likely resulted from less rainfall and/or higher plant evapotranspiration, I interpret this period to reflect the driest conditions in the region during the last 10.1 ka. This transition led to protracted late Holocene arid conditions and the establishment of a permanently saline lake, as supported by the high abundance of tetrahymanol. A late Holocene peak of cyanobacterial biomarker input at 1.3 cal. ka BP might represent an event of lake eutrophication, possibly due to human impact and the onset of cattle/livestock farming in the catchment.
The most intriguing feature of the mid-Holocene driest period was the high-amplitude, rapid fluctuation in δDwax values, probably due to a change in the moisture source and/or precipitation seasonality. I hypothesize that the orbitally induced weakening of summer solar insolation and the associated reorganization of the general atmospheric circulation were responsible for the unstable mid-Holocene hydroclimate in the CMZ.
My findings shed light on the sequence of changes during mean-state changes of the monsoonal system, once an insolation-driven threshold has been passed, and show that small changes in solar insolation can be associated with major environmental changes and large fluctuations in moisture source, a scenario that may be relevant with respect to future changes in the ISM system.
The characterization of exoplanets is a young and rapidly expanding field in astronomy. It includes a method called transmission spectroscopy, which searches for planetary spectral fingerprints in the light received from the host star during a transit. This technique allows conclusions on the atmospheric composition at the terminator region, the boundary between the day and night side of the planet. Although observationally challenging, first attempts in the community have succeeded in detecting several absorption features in the optical wavelength range, for example a Rayleigh-scattering slope and absorption by sodium and potassium. However, other objects show a featureless spectrum, indicative of a cloud or haze layer of condensates masking the atmospheric layers that would otherwise be probed.
In this work, we performed transmission spectroscopy by spectrophotometry of three Hot Jupiter exoplanets. When we began the work on this thesis, optical transmission spectra were available for only two exoplanets. Our main goal was to enlarge the sample of probed objects in order to learn, by comparative exoplanetology, whether certain absorption features are common. We selected the targets HAT-P-12b, HAT-P-19b and HAT-P-32b, for which the detection of atmospheric signatures is feasible with current ground-based instrumentation. In addition, we monitored the host stars of all three objects photometrically to correct for the influence of stellar activity where necessary.
The obtained measurements of the three objects all favor featureless spectra. A variety of atmospheric compositions can explain the lack of wavelength-dependent absorption. However, the broad trend of featureless spectra in planets over a wide range of temperatures, found in this work and in similar studies recently published in the literature, favors an explanation based on the presence of condensates, even at very low concentrations, in the atmospheres of these close-in gas giants. This result points towards the general conclusion that the capability of transmission spectroscopy to determine atmospheric composition is limited, at least for measurements at low spectral resolution.
In addition, we refined the transit parameters and ephemerides of HAT-P-12b and HAT-P-19b. Our monitoring campaigns allowed the detection of the stellar rotation period of HAT-P-19 and a refined age estimate. For HAT-P-12 and HAT-P-32, we derived upper limits on their potential variability. The calculated upper limits on systematic effects of starspots on the derived transmission spectra were found to be negligible for all three targets.
Finally, we discussed the observational challenges in the characterization of exoplanet atmospheres and the importance of correlated noise in the measurements, and formulated suggestions on how to improve the robustness of results in future work.
Geometric electroelasticity
(2014)
In this work a differential geometric formulation of the theory of electroelasticity is developed which also includes thermal and magnetic influences. We study the motion of bodies consisting of an elastic material that are deformed by the influence of mechanical forces, heat, and an external electromagnetic field. To this end, physical balance laws (conservation of mass, balance of momentum, angular momentum and energy) are established. These provide an equation that describes the motion of the body during the deformation. Here the body and the surrounding space are modeled as Riemannian manifolds, and we allow the body to have a lower dimension than the surrounding space. In this way one is not restricted, as usual, to the description of the deformation of three-dimensional bodies in three-dimensional space, but can also describe the deformation of membranes and deformations in a curved space. Moreover, we formulate so-called constitutive relations that encode the properties of the material used. Balance of energy, as a scalar law, can easily be formulated on a Riemannian manifold. The remaining balance laws are then obtained by demanding that balance of energy be invariant under the action of arbitrary diffeomorphisms of the surrounding space. This generalizes a result by Marsden and Hughes that pertains to bodies that have the same dimension as the surrounding space and does not allow the presence of electromagnetic fields. Usually, in works on electroelasticity the entropy inequality is used to decide which otherwise allowed deformations are physically admissible and which are not. It is also employed to derive restrictions on the possible forms of constitutive relations describing the material. Unfortunately, opinions on the physically correct statement of the entropy inequality diverge when electromagnetic fields are present. Moreover, it is unclear how to formulate the entropy inequality in the case of a membrane that is subjected to an electromagnetic field. Thus, we show that one can replace the use of the entropy inequality by the demand that, for a given process, balance of energy be invariant under the action of arbitrary diffeomorphisms of the surrounding space and under linear rescalings of the temperature. On the one hand, this demand yields the desired restrictions on the form of the constitutive relations. On the other hand, it needs much weaker assumptions than the arguments in the physics literature that employ the entropy inequality. Again, our result generalizes a theorem of Marsden and Hughes; this time, our result is, like theirs, only valid for bodies that have the same dimension as the surrounding space.
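For orientation, the classical master balance in the Marsden–Hughes setting (equal dimensions, no electromagnetic fields; the notation is generic, not the thesis's own) from which the invariance argument recovers the remaining balance laws reads
    d/dt ∫_U ρ (e + ½⟨v, v⟩) dV = ∫_U ρ (⟨b, v⟩ + r) dV + ∫_∂U (⟨t, v⟩ + h) dA
for every nice subbody U, with internal energy e, velocity v, body force b, heat supply r, surface traction t and surface heat flux h. Demanding invariance of this single scalar law under arbitrary spatial diffeomorphisms yields conservation of mass and the balances of momentum and angular momentum.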
A dramatic efficiency improvement of bulk heterojunction solar cells based on electron-donating conjugated polymers in combination with soluble fullerene derivatives has been achieved over the past years. Certified and reported power conversion efficiencies now reach over 9% for single junctions and exceed the 10% benchmark for tandem solar cells. This trend brightens the vision of organic photovoltaics becoming competitive with inorganic solar cells, including the realization of low-cost and large-area organic photovoltaics. For the best performing organic material systems, the yield of charge generation can be very efficient. However, a detailed understanding of the free charge carrier generation mechanisms at the donor-acceptor interface and of the energy loss associated with it still needs to be established. Moreover, organic solar cells are limited by the competition between charge extraction and free charge recombination, accounting for further efficiency losses. A conclusive picture and the development of precise methodologies for investigating the fundamental processes in organic solar cells are crucial for future material design, efficiency optimization, and the implementation of organic solar cells into commercial products.
In order to advance the development of organic photovoltaics, my thesis focuses on the comprehensive understanding of charge generation, recombination and extraction in organic bulk heterojunction solar cells summarized in 6 chapters on the cumulative basis of 7 individual publications.
The general motivation guiding this work was the realization of an efficient hybrid inorganic/organic tandem solar cell with sub-cells made from amorphous hydrogenated silicon and organic bulk heterojunctions. To realize this project aim, the focus was directed to the low band-gap copolymer PCPDTBT and its derivatives, resulting in the examination of the charge carrier dynamics in PCPDTBT:PC70BM blends in relation to the blend morphology. The phase separation in this blend can be controlled by the processing additive diiodooctane, enhancing domain purity and size. The quantitative investigation of free charge formation was realized by utilizing and improving the time-delayed collection field technique. Interestingly, a pronounced field dependence of free carrier generation is found for all blends, with the field dependence being stronger without the additive. Also, the bimolecular recombination coefficient for both blends is rather high and increases with decreasing internal field, which we suggest is caused by a negative field dependence of the mobility. The additive speeds up charge extraction, which is rationalized by the threefold increase in mobility.
By fluorine attachment within the electron-deficient subunit of PCPDTBT, a new polymer, F-PCPDTBT, is designed. This new material is characterized by a stronger tendency to aggregate compared to non-fluorinated PCPDTBT. Our measurements show that for F-PCPDTBT:PCBM blends the charge carrier generation becomes more efficient and the field dependence of free charge carrier generation is weakened. The stronger tendency to aggregate induced by the fluorination also leads to enlarged polymer-rich domains, accompanied by a threefold reduction in the non-geminate recombination coefficient at open-circuit conditions. The size of the polymer domains correlates well with the field dependence of charge generation and the Langevin reduction factor, which highlights the importance of domain size and domain purity for efficient charge carrier generation. In total, fluorination of PCPDTBT increases the PCE from 3.6 to 6.1% due to an enhanced fill factor, short circuit current and open circuit voltage. Further optimization of the blend ratio, active layer thickness, and polymer molecular weight resulted in 6.6% efficiency for F-PCPDTBT:PC70BM solar cells.
Interestingly, the doubly fluorinated version 2F-PCPDTBT exhibited a poorer FF despite a further reduction of geminate and non-geminate recombination losses. To further analyze this finding, a new technique is developed that measures the effective extraction mobility at charge carrier densities and electric fields comparable to solar cell operating conditions. This method involves the bias-enhanced charge extraction technique. With knowledge of the carrier density under different electric field and illumination conditions, a conclusive picture of the changes in charge carrier dynamics leading to the differences in fill factor upon fluorination of PCPDTBT is attained. The more efficient charge generation and reduced recombination upon fluorination are counterbalanced by a decreased extraction mobility. Thus, the highest fill factor of 60% and an efficiency of 6.6% are reached for F-PCPDTBT blends, while 2F-PCPDTBT blends show only moderate fill factors of 54%, caused by the lower effective extraction mobility, limiting the efficiency to 6.5%.
To understand the details of the charge generation mechanism and the related losses, we evaluated the yield and field dependence of free charge generation using time-delayed collection field measurements in combination with sensitive measurements of the external quantum efficiency and absorption coefficients for a variety of blends. Importantly, both the yield and the field dependence of free charge generation are found to be unaffected by excitation energy, including direct charge transfer excitation below the optical band gap. To access the non-detectable absorption at energies of the relaxed charge transfer emission, the absorption was reconstructed from the CT emission, induced via the recombination of thermalized charges in electroluminescence. For a variety of blends, the quantum yield at energies of charge transfer emission was identical to that for excitations with energies well above the optical band gap. Thus, generation proceeds via the split-up of thermalized charge transfer states in working solar cells. Further measurements were conducted on blends with fine-tuned energy levels and similar blend morphologies, obtained by using different fullerene derivatives. A direct correlation between the efficiency of free carrier generation and the energy difference of the relaxed charge transfer state relative to the energy of the charge-separated state is found. These findings open up new guidelines for future material design, as new high-efficiency materials require a minimum energetic offset between the charge transfer and the charge-separated state while keeping the HOMO level (and LUMO level) difference between donor and acceptor as small as possible.
One of the most significant current discussions in astrophysics relates to the origin of high-energy cosmic rays. According to our current knowledge, the abundance distribution of the elements in cosmic rays at their point of origin indicates, within plausible error limits, that they were initially formed by nuclear processes in the interiors of stars. It is also believed that their energy distribution up to 10^18 eV has Galactic origins. But even though knowledge about potential sources of cosmic rays is quite poor above ~10^15 eV, i.e. the "knee" of the cosmic-ray spectrum, up to the knee there seems to be a wide consensus that supernova remnants are the most likely candidates. Evidence for this comes from observations of non-thermal X-ray radiation, requiring synchrotron electrons with energies up to 10^14 eV, precisely in supernova remnants. To date, however, there is no conclusive evidence that they produce nuclei, the dominant component of cosmic rays, in addition to electrons. In light of this dearth of evidence, γ-ray observations of supernova remnants offer the most promising direct way to confirm whether or not these astrophysical objects are indeed the main source of cosmic-ray nuclei below the knee. Recent observations with space- and ground-based observatories have established shell-type supernova remnants as GeV-to-TeV γ-ray sources. The interpretation of these observations is, however, complicated by the different radiation processes, leptonic and hadronic, that can produce similar fluxes in this energy band, rendering the nature of the emission itself ambiguous. The aim of this work is to develop a deeper understanding of these radiation processes in a particular shell-type supernova remnant, namely RX J1713.7–3946, using observations with the LAT instrument onboard the Fermi Gamma-Ray Space Telescope. Furthermore, to obtain accurate spectra and morphology maps of the emission associated with this supernova remnant, an improved model of the diffuse Galactic γ-ray emission background is developed. The analyses of RX J1713.7–3946 carried out with this improved background show that the hard Fermi-LAT spectrum cannot be ascribed to hadronic emission, leading to the conclusion that the leptonic scenario is instead the most natural picture for the high-energy γ-ray emission of RX J1713.7–3946. The leptonic scenario does not rule out the possibility that cosmic-ray nuclei are accelerated in this supernova remnant, but it suggests that the ambient density may not be high enough to produce significant hadronic γ-ray emission. Further investigations of other supernova remnants using the improved background developed in this work could allow compelling population studies, and hence prove or disprove the origin of Galactic cosmic-ray nuclei in these astrophysical objects. A breakthrough in the identification of the radiation mechanisms could ultimately be achieved with a new generation of instruments such as CTA.
Polyadenylation is a decisive 3’ end processing step during the maturation of pre-mRNAs. The length of the poly(A) tail has an impact on mRNA stability, localization and translatability. Accordingly, many eukaryotic organisms encode several copies of canonical poly(A) polymerases (cPAPs). The disruption of cPAPs in mammals results in lethality. In plants, reduced cPAP activity is non-lethal. Arabidopsis encodes three nuclear cPAPs, PAPS1, PAPS2 and PAPS4, which are constitutively expressed throughout the plant. Recently, the detailed analysis of Arabidopsis paps1 mutants revealed a subset of genes that is preferentially polyadenylated by the cPAP isoform PAPS1 (Vi et al. 2013). Thus, the specialization of cPAPs might allow the regulation of different sets of genes in order to optimally face developmental or environmental challenges.
To gain insights into cPAP-based gene regulation in plants, the phenotypes of Arabidopsis cPAP mutants under different conditions are characterized in detail in the following work. An involvement of all three cPAPs in flowering time regulation and stress response regulation is shown. While paps1 knockdown mutants flower early, paps4 and paps2 paps4 knockout mutants exhibit a moderate late-flowering phenotype. PAPS1 promotes the expression of the major flowering inhibitor FLC, presumably by specific polyadenylation of an FLC activator. PAPS2 and PAPS4 exhibit partially overlapping functions and ensure timely flowering by repressing FLC and at least one other, as yet unidentified, flowering inhibitor. The latter two cPAPs act in a novel regulatory pathway downstream of the autonomous pathway component FCA and independently of the polyadenylation factors and flowering time regulators CstF64 and FY. Moreover, PAPS1 and PAPS2/PAPS4 are implicated in different stress response pathways in Arabidopsis. Reduced activity of the poly(A) polymerase PAPS1 results in enhanced resistance to osmotic and oxidative stress. At the same time, paps1 mutants are cold-sensitive. In contrast, PAPS2/PAPS4 are not involved in the regulation of osmotic or cold stress responses, but paps2 paps4 loss-of-function mutants exhibit enhanced sensitivity to oxidative stress provoked in the chloroplast. Thus, both PAPS1 and PAPS2/PAPS4 are required to maintain a balanced redox state in plants. PAPS1 seems to fulfil this function in concert with CPSF30, a polyadenylation factor that regulates alternative polyadenylation and tolerance to oxidative stress.
The individual paps mutant phenotypes and the cPAP-specific genetic interactions support the model of cPAP-dependent polyadenylation of selected mRNAs. The high similarity of the polyadenylation machineries in yeast, mammals and plants suggests that similar regulatory mechanisms might be present in other organism groups. The cPAP-dependent developmental and physiological pathways identified in this work allow the design of targeted experiments to better understand the ecological and molecular context underlying cPAP-specialization.
Galaxies are observational probes of the Large Scale Structure. Their gravitational motions trace the total matter density and therefore the Large Scale Structure. In addition, studies of structure formation and galaxy evolution rely on numerical cosmological simulations. Still, only one universe, observable from a given position in time and space, is available for comparison with simulations. The related cosmic variance affects our ability to interpret the results. Simulations constrained by observational data are a perfect remedy to this problem. Achieving such simulations is the goal of the Cosmic flows and CLUES projects. Cosmic flows builds catalogs of accurate distance measurements to map deviations from the expansion. These measurements are mainly obtained with the galaxy luminosity-rotation rate correlation. We present the calibration of that relation in the mid-infrared with observational data from the Spitzer Space Telescope. The resulting accurate distance estimates will be included in the third catalog of the project. In the meantime, two catalogs, reaching out to 30 and 150 Mpc/h, have been released. We report improvements and applications of the CLUES method to these two catalogs. The technique is based on the constrained realization algorithm. The cosmic displacement field is computed with the Zel'dovich approximation. The latter is then reversed to relocate reconstructed three-dimensional constraints to the positions of their precursors in the initial field. The size of the second catalog (8000 galaxies within 150 Mpc/h) highlighted the importance of minimizing observational biases. By carrying out tests on mock catalogs built from cosmological simulations, we derive a method to minimize observational biases. Finally, for the first time, cosmological simulations are constrained solely by peculiar velocities. The process is successful, as the resulting simulations resemble the Local Universe. The major attractors and voids are simulated at positions within a few megaparsecs of the observed positions, thus reaching the limit imposed by linear theory.
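As a minimal sketch of this reversal step (standard Zel'dovich form; generic notation, not necessarily that of the thesis): particles are displaced along straight lines scaled by the linear growth factor D(t),
    x(q, t) = q + D(t) Ψ(q),
with q the initial (Lagrangian) position, x the present position and Ψ the displacement field. Approximately inverting the map, q ≈ x − D(t) Ψ(x), relocates the reconstructed three-dimensional constraints to the positions of their precursors in the initial field.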
Knowing the rates and mechanisms of the geomorphic processes that shape the Earth's surface is crucial to understanding landscape evolution. Modern methods for estimating denudation rates enable us to quantitatively express and compare processes of landscape downwearing that can be traced through time and space—from the seemingly intact, though intensely shattered, phantom blocks of the catastrophically fragmented basal facies of giant rockslides up to denudational noise in orogen-wide data sets averaging over several millennia. This great variety of spatiotemporal scales is both the boon and the bane of geomorphic process rates. Indeed, processes of landscape downwearing can be traced far back in time, helping us to understand the Earth's evolution. Yet this benefit may turn into a drawback due to scaling issues if these rates are to be compared across different observation timescales.
This thesis investigates the mechanisms, patterns and rates of landscape downwearing across the Himalaya-Tibet orogen.
Accounting for the spatiotemporal variability of denudation processes, this thesis addresses landscape downwearing on three distinctly different spatial scales, starting at the local scale of individual hillslopes, where considerable amounts of debris are generated from rock instantaneously: rocksliding in active mountains is a major impetus of landscape downwearing. Study I provides a systematic overview of the internal sedimentology of giant rockslide deposits and thus meets the challenge of distinguishing them from macroscopically and microscopically similar glacial deposits, tectonic fault-zone breccias, and impact breccias. This distinction is important to avoid erroneous or misleading deductions of paleoclimatic or tectonic implications. -> Grain size analysis shows that rockslide-derived micro-breccias closely resemble those from meteorite impacts or tectonic faults. -> Frictionite may occur more frequently than previously assumed. -> Mössbauer-spectroscopy-derived results indicate basal rock melting in the absence of water, involving short-term temperatures of >1500°C.
Zooming out, Study II tracks the fate of these sediments, using the example of the upper Indus River, NW India. There we use river sand samples from the Indus and its tributaries to estimate basin-averaged denudation rates along a ~320-km reach across the Tibetan Plateau margin, to answer the question of whether incision into the western Tibetan Plateau margin is currently active. -> We find an upstream decay of about one order of magnitude—from 110 to 10 mm kyr^-1—in cosmogenic Be-10-derived basin-wide denudation rates across the morphological knickpoint that marks the transition from the Transhimalayan ranges to the Tibetan Plateau. This trend is corroborated by independent bulk petrographic and heavy mineral analyses of the same samples. -> From the observation that tributary-derived basin-wide denudation rates do not increase markedly until ~150–200 km downstream of the topographic plateau margin, we conclude that incision into the Tibetan Plateau is inactive. -> Comparing our postglacial Be-10-derived denudation rates to long-term (>10^6 yr) estimates from low-temperature thermochronometry, ranging from 100 to 750 mm kyr^-1, points to an order-of-magnitude decay in rates of landscape downwearing towards the present. We infer that denudation rates must have been higher in the Quaternary, probably promoted by the interplay of glacial and interglacial stages.
Our investigation of regional denudation patterns in the upper Indus is finally an integral part of Study III, which synthesizes denudation across the Himalaya-Tibet orogen. In order to identify general and time-invariant predictors of Be-10-derived denudation rates, we analyze tectonic, climatic and topographic metrics from an inventory of 297 drainage basins from various parts of the orogen. Aiming to gain insight into the full response distributions of denudation rate to tectonic, climatic and topographic candidate predictors, we apply quantile regression instead of ordinary least squares regression, which has been the standard analysis tool in previous studies that looked for denudation rate predictors. -> We use principal component analysis to reduce our set of 26 candidate predictors, ending up with just three of them: the Aridity Index, the topographic steepness index, and the precipitation of the coldest quarter of the year. -> The topographic steepness index proves to perform best in additive quantile regression. Our subsequent prediction of denudation rates on the basin scale involves prediction errors that remain between 5 and 10 mm kyr^-1. -> We conclude that topographic metrics such as river-channel steepness and slope gradient—being representative on the timescales that our cosmogenic Be-10-derived denudation rates integrate over—generally appear better suited as predictors than climatic and tectonic metrics based on decadal records.
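As a minimal sketch of this two-stage analysis (synthetic data and hypothetical column names; not the study's own code), one can screen standardized candidate predictors with a PCA and then fit quantile regressions of denudation rate on the retained metric across several quantiles:
    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 297  # drainage basins in the inventory

    # Synthetic stand-ins for three retained predictors.
    df = pd.DataFrame({
        "steepness": rng.gamma(2.0, 50.0, n),     # topographic steepness index
        "aridity": rng.uniform(0.05, 1.5, n),     # Aridity Index
        "precip_cold": rng.gamma(2.0, 30.0, n),   # coldest-quarter precipitation
    })
    df["denudation"] = 0.8 * df["steepness"] + rng.normal(0.0, 40.0, n)

    # Stage 1: PCA on standardized predictors to gauge redundancy.
    preds = df[["steepness", "aridity", "precip_cold"]]
    X = (preds - preds.mean()) / preds.std()
    print("explained variance:", PCA().fit(X).explained_variance_ratio_)

    # Stage 2: quantile regression instead of a single OLS fit, to see
    # the full response distribution of denudation rate to steepness.
    for q in (0.25, 0.50, 0.75):
        fit = smf.quantreg("denudation ~ steepness", df).fit(q=q)
        print(f"quantile {q}: slope = {fit.params['steepness']:.2f}")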
During this work I built a four-wave mixing setup for time-resolved femtosecond spectroscopy of Raman-active lattice modes. This setup enables the study of the selective excitation of phonon polaritons. These quasi-particles arise from the coupling of electromagnetic waves and transverse optical lattice modes, the so-called phonons. The phonon polaritons were investigated in the optically non-linear, ferroelectric crystals LiNbO₃ and LiTaO₃.
The direct observation of the frequency shift of the scattered narrow-bandwidth probe pulses proves the role of the Raman interaction in the probing and excitation of phonon polaritons. I compare this experimental method with measurements using ultra-short laser pulses, for which the frequency shift remains obscured by the relatively broad bandwidth of the pulses. In an experiment with narrow-bandwidth probe pulses, the Stokes and anti-Stokes intensities are spectrally separated. They are assigned to the corresponding counter-propagating wavepackets of phonon polaritons. Thus, the dynamics of these wavepackets could be studied separately. Based on these findings, I develop the mathematical description of the so-called homodyne detection of light for the case of light scattering from counter-propagating phonon polaritons.
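The generic structure behind such a homodyne signal (textbook form; the thesis derivation for counter-propagating polaritons is more specific) is the interference of the weak scattered field E_s with a reference field E_ref at the detector:
    I_det ∝ |E_ref + E_s|² = |E_ref|² + |E_s|² + 2|E_ref||E_s| cos Δφ,
where Δφ is the relative phase; the cross term makes the amplitude and phase of the scattered light, and hence the polariton dynamics, accessible.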
Further, I modified the broad bandwidth of the ultra-short pump pulses using bandpass filters to generate two pump pulses with non-overlapping spectra. This enables the frequency-selective excitation of polariton modes in the sample, which allows me to observe even very weak polariton modes in LiNbO₃ or LiTaO₃ that belong to the higher branches of the dispersion relation of phonon polaritons. The experimentally determined dispersion relation of the phonon polaritons could therefore be extended and compared to theoretical models. In addition, I determined the frequency-dependent damping of phonon polaritons.
Magnetite is an iron oxide that is ubiquitous in rocks and is usually deposited as small nanoparticulate matter among other rock material. It differs from most other iron oxides in that it contains both divalent and trivalent iron. Consequently, it has a special crystal structure and unique magnetic properties. These properties are used for paleoclimatic reconstructions, where naturally occurring magnetite helps in understanding former geological ages. Furthermore, magnetic properties are exploited in bio- and nanotechnological applications: synthetic magnetite serves as a contrast agent in MRI, is exploited in biosensing and hyperthermia, and is used in storage media.
Magnetic properties are strongly size-dependent, and achieving size control under preferably mild synthesis conditions is of interest in order to obtain particles with the required properties. Using a custom-made setup, it was possible to synthesize stable single-domain magnetite nanoparticles with the co-precipitation method. Furthermore, it was shown that magnetite formation is temperature-dependent, resulting in larger particles at higher temperatures. However, the mechanistic understanding of the details is incomplete.
The formation of magnetite from solution was shown to occur from nanoparticulate matter rather than from solvated ions. The theoretical framework of such processes has only started to be described, partly due to the lack of kinetic and thermodynamic data. Magnetite nanoparticles were synthesized at different temperatures, and an Arrhenius plot was used to determine an activation energy for crystal growth of 28.4 kJ mol^-1, which led to the conclusion that nanoparticle diffusion is the rate-determining step.
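As a worked illustration of the Arrhenius analysis (synthetic rate constants chosen to reproduce an activation energy near the reported value; not the thesis data), the slope of ln(k) versus 1/T yields Ea:
    import numpy as np

    R = 8.314  # gas constant, J mol^-1 K^-1
    T = np.array([288.0, 298.0, 308.0, 318.0])  # temperatures in K
    # Synthetic growth-rate constants following ln k = ln A - Ea/(R*T)
    # with Ea = 28.4 kJ mol^-1 (illustrative only).
    k = np.exp(25.0 - 28400.0 / (R * T))

    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    Ea = -slope * R
    print(f"activation energy: {Ea / 1000:.1f} kJ mol^-1")  # -> 28.4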
Furthermore, a study of the alteration of magnetite particles of different sizes as a function of their storage conditions is presented. The magnetic properties depend not only on particle size but also on the structure of the oxide, because magnetite oxidizes to maghemite under environmental conditions. The dynamics of this process had not been well described. Smaller nanoparticles are shown to oxidize more rapidly than larger ones, and the lower the storage temperature, the less oxidation is measured. In addition, the magnetic properties of the altered particles are not decreased dramatically, suggesting that this alteration will not impact the use of such nanoparticles as medical carriers.
Finally, the effect of biological additives on magnetite formation was investigated. Magnetotactic bacteria are able to synthesize and align magnetite nanoparticles of well-defined size and morphology due to the involvement of special proteins with specific binding properties. Based on this model of morphology control, phage display experiments were performed to determine peptide sequences that preferentially bind to (111) magnetite faces. The aim was to control the shape of magnetite nanoparticles during formation. Magnetotactic bacteria are also able to control the intracellular redox potential with proteins called magnetochromes. MamP is such a protein, and its oxidizing nature was studied in vitro via biomimetic magnetite formation experiments based on ferrous ions. Magnetite and further trivalent oxides were found.
This work helps in understanding basic mechanisms of magnetite formation and gives insight into non-classical crystal growth. In addition, it is shown that the alteration of magnetite nanoparticles is mainly based on oxidation to maghemite and does not significantly influence the magnetic properties. Finally, the biomimetic experiments help in understanding the role of MamP within the bacteria, and a first step was taken towards achieving morphology control in magnetite formation via co-precipitation.
Enterprise-specific in-memory data management : HYRISEc - an in-memory column store engine for OLXP
(2014)
Virtualized cloud data centers provide on-demand resources, enable agile resource provisioning, and host heterogeneous applications with different resource requirements. These data centers consume enormous amounts of energy, increasing operational expenses, inducing high thermal loads inside data centers, and raising carbon dioxide emissions. The increase in energy consumption can result from ineffective resource management that causes inefficient resource utilization. This dissertation presents detailed models and novel techniques and algorithms for virtual resource management in cloud data centers. The proposed techniques take into account Service Level Agreements (SLAs) and workload heterogeneity in terms of the memory access demand and communication patterns of web applications and High Performance Computing (HPC) applications. To evaluate the proposed techniques, we use simulations and real workload traces of web and HPC applications and compare our techniques against other recently proposed techniques using several performance metrics. The major contributions of this dissertation are the following:
- A proactive resource provisioning technique based on robust optimization that increases the hosts' availability for hosting new VMs while minimizing idle energy consumption. Additionally, this technique mitigates undesirable changes in the power state of the hosts, enhancing host reliability by avoiding failures during power state changes. It exploits the range-based prediction algorithm to implement robust optimization under demand uncertainty.
- An adaptive range-based prediction algorithm for short-term workloads with high fluctuations (see the sketch after this list). The range prediction is implemented in two ways, via the standard deviation and via the median absolute deviation, and the range is adjusted based on an adaptive confidence window to cope with workload fluctuations.
- A robust VM consolidation technique for efficient energy and performance management that achieves an equilibrium between energy and performance trade-offs. Our technique reduces the number of VM migrations compared to recently proposed techniques, which also reduces the energy consumed by the network infrastructure, and it additionally reduces SLA violations and the number of power state changes.
- A generic model of the data center network to simulate communication delay and its impact on VM performance, as well as network energy consumption. In addition, a generic model of a server's memory bus, including latency and energy consumption models for different memory frequencies, which allows simulating memory delay and its influence on VM performance, as well as memory energy consumption.
- A communication-aware and energy-efficient consolidation technique for parallel applications that enables the dynamic discovery of communication patterns and reschedules VMs via migration based on the determined patterns. A novel dynamic pattern discovery technique is implemented, based on signal processing of the VMs' network utilization, instead of using information from the hosts' virtual switches or requiring initiation from the VMs. The results show that our approach reduces the network's average utilization, achieves energy savings by reducing the number of active switches, and provides better VM performance compared to CPU-based placement.
- A memory-aware VM consolidation technique for independent VMs that exploits the diversity of the VMs' memory access patterns to balance the memory-bus utilization of hosts. The proposed technique, Memory-bus Load Balancing (MLB), reactively redistributes VMs according to their memory-bus utilization using VM migration to improve the performance of the overall system. Furthermore, Dynamic Voltage and Frequency Scaling (DVFS) of the memory is combined with the MLB technique to achieve better energy savings.
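A minimal sketch of the range-based prediction idea referenced above (generic point forecast over a fixed window; the adaptive confidence window of the dissertation is only hinted at by the width parameter):
    import numpy as np

    def range_forecast(history, window=12, method="mad", width=2.0):
        """Turn a point forecast into a [low, high] provisioning range."""
        recent = np.asarray(history[-window:], dtype=float)
        point = recent.mean()  # simple point forecast over the window
        if method == "std":
            spread = recent.std()
        else:  # median absolute deviation, robust to workload bursts
            med = np.median(recent)
            spread = np.median(np.abs(recent - med))
        return point - width * spread, point + width * spread

    cpu_demand = [30, 32, 31, 55, 33, 30, 29, 60, 31, 32, 30, 34]  # % CPU
    low, high = range_forecast(cpu_demand, method="mad")
    print(f"provision for demand in [{low:.1f}, {high:.1f}] % CPU")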
In this thesis we consider diverse aspects of the existence and correctness of asymptotic solutions to elliptic differential and pseudodifferential equations. We begin our studies with the case of a general elliptic boundary value problem in partial derivatives. A small parameter enters the coefficients of the main equation as well as the boundary conditions. Such equations have already been investigated satisfactorily, but certain theoretical deficiencies remain. Our aim is to present a general theory of elliptic problems with a small parameter. For this purpose we examine in detail the case of a bounded domain with a smooth boundary. First of all, we construct formal solutions as power series in the small parameter. Then we examine their asymptotic properties. It suffices to establish sharp two-sided a priori estimates for the operators of the boundary value problems that are uniform in the small parameter. Such estimates fail to hold in the function spaces used in classical elliptic theory. To circumvent this limitation, we exploit norms depending on the small parameter for the functions defined on a bounded domain. Similar norms are widely used in the literature, but their properties have not been investigated extensively. Our theoretical investigation shows that the usual elliptic technique can be correctly carried out in these norms. The obtained results also allow one to extend the norms to compact manifolds with boundary. We complete our investigation by formulating algebraic conditions on the operators and showing their equivalence to the existence of a priori estimates.
In the second step, we extend the concept of ellipticity with a small parameter to more general classes of operators. First, we want to compare the difference in asymptotic patterns between the obtained series and expansions for similar differential problems. We therefore investigate the heat equation in a bounded domain with a small parameter near the time derivative. In this case the characteristics touch the boundary at a finite number of points. It is known that the solutions are, in general, not regular in a neighbourhood of such points. We suppose moreover that the boundary at such points may be non-smooth and have cuspidal singularities. We find a formal asymptotic expansion and show that when a set of parameters passes through a threshold value, the expansions fail to be asymptotic.
The last part of the work is devoted to a general concept of ellipticity with a small parameter. Several theoretical extensions to pseudodifferential operators have already been suggested in previous studies. As a new contribution, we employ the analysis on manifolds with edge singularities, which allows us to consider wider classes of perturbed elliptic operators. We show that the introduced classes possess a priori estimates of elliptic type. As a further application we demonstrate how the developed tools can be used to reduce singularly perturbed problems to regular ones.
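As an illustration of such parameter-dependent norms (one common family from the literature on singular perturbations; the exact definitions used in the thesis may differ), one can take
    ||u||²_{s,ε} = Σ_{|α| ≤ s} ε^{2|α|} ||D^α u||²_{L²},
and a sharp two-sided a priori estimate uniform in ε then takes the form
    c ||u||_{2m,ε} ≤ ||A(ε)u||_{L²} + ||u||_{L²} ≤ C ||u||_{2m,ε},
with constants c, C independent of the small parameter ε.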
Donor-acceptor (D-A) copolymers have revolutionized the field of organic electronics over the last decade. Comprising an electron-rich and an electron-deficient molecular unit, these copolymers facilitate the systematic modification of the material's optoelectronic properties. The ability to tune the optical band gap and to optimize the molecular frontier orbitals, together with the manifold of structural sites that enable chemical modification, has created a tremendous variety of copolymer structures. Today, these materials reach or even exceed the performance of amorphous inorganic semiconductors. Most impressively, the charge carrier mobility of D-A copolymers has been pushed to the technologically important value of 10 cm^2 V^-1 s^-1. Furthermore, owing to their enormous variability, they are the material of choice for the donor component in organic solar cells, which have recently surpassed the efficiency threshold of 10%. Because of the great number of available D-A copolymers and their fast chemical evolution, there is a significant lack of understanding of the fundamental physical properties of these materials. Furthermore, the complex chemical and electronic structure of D-A copolymers, in combination with their semi-crystalline morphology, impedes a straightforward identification of the microscopic origin of their superior performance. In this thesis, two aspects of prototype D-A copolymers were analysed: the electron transport in several copolymers and the application of low band gap copolymers as the acceptor component in organic solar cells. In the first part, the investigation of a series of chemically modified fluorene-based copolymers is presented. The charge carrier mobility varies strongly between the different derivatives, although only moderate changes were made to the copolymer structure. Furthermore, rather unusual photocurrent transients were observed for one of the copolymers. Numerical simulations of the experimental results reveal that this behavior arises from severe trapping of electrons in an exponential distribution of trap states. Based on the comparison of simulation and experiment, the general impact of charge carrier trapping on the shape of photo-CELIV and time-of-flight transients is discussed. In addition, the high-performance naphthalenediimide (NDI)-based copolymer P(NDI2OD-T2) was characterized. It is shown that the copolymer possesses one of the highest electron mobilities reported so far, which makes it attractive as the electron-accepting component in organic photovoltaic cells.
Solar cells were prepared from two NDI-containing copolymers, blended with the hole-transporting polymer P3HT. I demonstrate that the use of appropriate, high-boiling-point solvents can significantly increase the power conversion efficiency of these devices. Spectroscopic studies reveal that the pre-aggregation of the copolymers is suppressed in these solvents, which has a strong impact on the blend morphology. Finally, a systematic study of P3HT:P(NDI2OD-T2) blends is presented, which quantifies the processes that limit the efficiency of the devices. The major loss channel for excited states was determined by transient and steady-state spectroscopic investigations: the majority of initially generated electron-hole pairs is annihilated by an ultrafast geminate recombination process. Furthermore, exciton self-trapping in P(NDI2OD-T2) domains accounts for an additional reduction of the efficiency.
The correlation of the photocurrent with microscopic morphology parameters was used to reveal the factors that limit the charge generation efficiency. Our results suggest that the orientation of the donor and acceptor crystallites relative to each other is the main factor determining the free charge carrier yield in this material system. This provides an explanation for the overall low efficiencies generally observed in all-polymer solar cells.
Effect of benzylglucosinolate on signaling pathways associated with type 2 diabetes prevention
(2014)
Type 2 diabetes (T2D) is a health problem throughout the world. In 2010, there were nearly 230 million individuals with diabetes worldwide, and it is estimated that in the economically advanced countries the cases will increase by about 50% in the next twenty years. Insulin resistance is one of the major features of T2D and is also a risk factor for metabolic and cardiovascular complications. Epidemiological and animal studies have shown that the consumption of vegetables and fruits can delay or prevent the development of the disease, although the underlying mechanisms of these effects are still unclear. Brassica species such as broccoli (Brassica oleracea var. italica) and nasturtium (Tropaeolum majus) possess a high content of bioactive phytochemicals, e.g. nitrogen-sulfur compounds (glucosinolates and isothiocyanates) and polyphenols, largely associated with the prevention of cancer. Isothiocyanates (ITCs) display their anti-carcinogenic potential by inducing detoxifying phase II enzymes and increasing glutathione (GSH) levels in tissues. In T2D, an increase in gluconeogenesis and triglyceride synthesis and a reduction in fatty acid oxidation, accompanied by the presence of reactive oxygen species (ROS), are observed; together these are the result of an inappropriate response to insulin. Forkhead box O (FOXO) transcription factors play a crucial role in the regulation of insulin effects on gene expression and metabolism, and alterations in FOXO function could contribute to metabolic disorders in diabetes. In this study, stably transfected human osteosarcoma cells (U-2 OS) with constitutive expression of GFP (green fluorescent protein)-labeled FOXO1 and human hepatoma (HepG2) cell cultures were used to evaluate the ability of benzylisothiocyanate (BITC), derived from benzylglucosinolate extracted from nasturtium, to modulate i) the insulin-signaling pathway, ii) the intracellular localization of FOXO1 and iii) the expression of proteins involved in glucose metabolism, ROS detoxification, cell cycle arrest and DNA repair. BITC promoted oxidative stress and in response induced FOXO1 translocation from the cytoplasm into the nucleus, antagonizing the insulin effect. The BITC stimulus was able to down-regulate gluconeogenic enzymes, which can be considered an anti-diabetic effect; to promote antioxidant resistance, expressed by the up-regulation of manganese superoxide dismutase (MnSOD) and detoxification enzymes; to modulate autophagy by induction of BECLIN1 and down-regulation of the mammalian target of rapamycin complex 1 (mTORC1) pathway; and to promote cell cycle arrest and DNA damage repair by up-regulation of the cyclin-dependent kinase inhibitor p21CIP and the growth arrest and DNA damage protein GADD45. Except for the nuclear factor (erythroid-derived 2)-like 2 (NRF2) and its influence on the gene expression of detoxification enzymes, all the observed effects were independent of FOXO1, protein kinase B (AKT/PKB) and the NAD-dependent deacetylase sirtuin-1 (SIRT1). The current study provides evidence that, besides their anticarcinogenic potential, isothiocyanates might have a role in T2D prevention. The BITC stimulus mimics the fasting state, in which insulin signaling is not triggered and FOXO proteins remain in the nucleus modulating the expression of their target genes, with the advantage of a down-regulation of gluconeogenesis instead of its increase.
These effects suggest that BITC might be considered a promising substance in the prevention or treatment of T2D; therefore, the factors behind its modulatory effects need further investigation.
The data quality of real-world datasets needs to be constantly monitored and maintained to allow organizations and individuals to reliably use their data. Data integration projects in particular suffer from poor initial data quality and as a consequence consume more effort and money. Commercial products and research prototypes for data cleansing and integration help users to improve the quality of individual and combined datasets. They can be divided into standalone systems and database management system (DBMS) extensions. On the one hand, standalone systems do not interact well with DBMSs and require time-consuming data imports and exports. On the other hand, DBMS extensions are often limited by the underlying system and do not cover the full set of data cleansing and integration tasks.
We overcome both limitations by implementing a concise set of five data cleansing and integration operators on the parallel data analytics platform Stratosphere. We define the semantics of the operators, present their parallel implementation, and devise optimization techniques for individual operators and combinations thereof. Users specify declarative queries in our query language METEOR with our new operators to improve the data quality of individual datasets or to integrate them into larger datasets. By integrating the data cleansing operators into the higher-level language layer of Stratosphere, users can easily combine cleansing operators with operators from other domains, such as information extraction, into complex data flows. Through a generic description of the operators, the Stratosphere optimizer can reorder operators, even across domains, to find better query plans.
As a case study, we reimplemented a part of the large Open Government Data integration project GovWILD with our new operators and show that our queries run significantly faster than the original GovWILD queries, which rely on relational operators. The evaluation reveals that our operators exhibit good scalability on up to 100 cores, so that even larger inputs can be processed efficiently by scaling out to more machines. Finally, our scripts are considerably shorter than the original GovWILD scripts, which improves their maintainability.
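As a loose illustration of the idea of declaratively composable cleansing operators, the following Python sketch chains two toy operators over a list of records. The operator names and semantics here are invented stand-ins, not the five METEOR operators of this thesis.

```python
# Illustrative only: a toy pipeline mimicking the idea of declaratively
# composable cleansing operators. Operator names and semantics are
# simplified stand-ins, not the actual METEOR/Stratosphere operators.
records = [
    {"name": "Jon Doe ", "agency": "EPA"},
    {"name": "Jon Doe", "agency": "EPA"},
    {"name": "Jane Roe", "agency": None},
]

def scrub(recs):
    """Normalize values and drop records violating simple rules."""
    out = []
    for r in recs:
        r = {k: v.strip() if isinstance(v, str) else v for k, v in r.items()}
        if r["agency"] is not None:      # rule: agency must be present
            out.append(r)
    return out

def deduplicate(recs, key=lambda r: r["name"].lower()):
    """Keep one representative per duplicate cluster."""
    seen, out = set(), []
    for r in recs:
        if key(r) not in seen:
            seen.add(key(r))
            out.append(r)
    return out

# Declarative flavor: a pipeline is just an ordered list of operators,
# which an optimizer could in principle reorder when semantics allow.
pipeline = [scrub, deduplicate]
for op in pipeline:
    records = op(records)
print(records)   # [{'name': 'Jon Doe', 'agency': 'EPA'}]
```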
In March 2010, the project CoCoCo (incipient COntinent-COntinent COllision) recorded a 650 km long amphibious N-S wide-angle seismic profile, extending from the Eratosthenes Seamount (ESM) across Cyprus and southern Turkey to the Anatolian plateau. The aim of the project is to reveal the impact of the transition from subduction to continent-continent collision of the African plate with the Cyprus-Anatolian plate. A visual quality check, frequency analysis and filtering were applied to the seismic data and revealed good data quality. Subsequent first-break picking, finite-difference ray tracing and inversion of the offshore wide-angle data lead to a first-arrival tomographic model. This model reveals (1) P-wave velocities lower than 6.5 km/s in the crust, (2) a variable crustal thickness of about 28 - 37 km and (3) an upper crustal reflection at 5 km depth beneath the ESM. Two land shots in Turkey, also recorded on Cyprus, airgun shots south of Cyprus, and geological and previous seismic investigations provide the information to derive a layered velocity model beneath the Anatolian plateau and for the ophiolite complex on Cyprus. The analysis of the reflections provides evidence for a north-dipping plate subducting beneath Cyprus. The main features of this layered velocity model are (1) an upper and lower crust with large lateral changes in velocity structure and thickness, (2) a Moho depth of about 38 - 45 km beneath the Anatolian plateau, (3) a shallow north-dipping subducting plate below Cyprus with an increasing dip and (4) a typical ophiolite sequence on Cyprus with a total thickness of about 12 km. The offshore-onshore seismic data complete and improve the information about the velocity structure beneath Cyprus and the deeper part of the offshore tomographic model. Thus, the wide-angle seismic data provide detailed insights into the 2-D geometry and velocity structure of the uplifted and overriding Cyprus-Anatolian plate. Subsequent gravity modelling confirms and extends the crustal P-wave velocity model. The deeper part of the subducting plate is constrained by the gravity data and has a dip angle of ~28°. Finally, an integrated analysis of the geophysical and geological information allows a comprehensive interpretation of the crustal structure related to the collision process.
The quantitative description of the state of stress in the Earth's crust and of spatio-temporal stress changes is of great importance for scientific questions as well as for applied geotechnical issues. Human activities in the underground (boreholes, tunnels, caverns, reservoir management, etc.) have a large impact on the stress state. It is important to assess whether these activities may lead to (unpredictable) hazards, such as induced seismicity. Equally important is the understanding of the in situ stress state in the Earth's crust, as it allows safe well paths to be determined already during well planning. The same holds for the optimal configuration of injection and production wells where stimulation to create artificial fluid pathways is necessary.
The cumulative dissertation presented here consists of four separate manuscripts, which are already published, submitted, or will be submitted for peer review within the next weeks. The main focus is on the investigation of the possible usage of geothermal energy in the province of Alberta (Canada). A 3-D geomechanical–numerical model was designed to quantify the contemporary 3-D stress tensor in the upper crust. For the calibration of the regional model, 321 stress orientation data and 2714 stress magnitude data were collected; the size and diversity of this database are unique. A calibration scheme was developed in which the model is calibrated against the in situ stress data stepwise for each data type and gradually optimized using statistical test methods. The optimum displacement on the model boundaries can be determined by bivariate linear regression, based on only three model runs with varying deformation ratios. The best-fit model predicts most of the in situ stress data quite well. Thus, the model can provide the full stress tensor along any chosen virtual well path. This can be used to optimize the orientation of horizontal wells, e.g. for reservoir stimulation. The model confirms regional deviations from the average stress orientation trend, such as in the region of the Peace River Arch and the Bow Island Arch.
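The calibration logic lends itself to a compact illustration. Assuming, as stated, that the modeled stresses respond approximately linearly to the displacements applied at two model boundaries, three trial runs suffice to fit a bivariate linear model and to solve for the best-fit boundary displacement. The Python sketch below uses invented numbers and is a stand-in for the actual workflow, not a reproduction of it.

```python
import numpy as np

# Toy version of the calibration idea: modeled stress magnitudes respond
# approximately linearly to the displacements applied at two model
# boundaries, so three trial runs suffice to fit a bivariate linear model
# per calibration point. All numbers are invented for illustration.
trial_disp = np.array([[1.0, 0.5],   # (ux, uy) boundary displacement, run 1 (m)
                       [2.0, 0.5],   # run 2
                       [1.0, 1.5]])  # run 3
# Modeled SHmax magnitude (MPa) at two calibration points for each run:
shmax_model = np.array([[30.0, 41.0],
                        [36.0, 45.0],
                        [28.0, 44.0]])
shmax_obs = np.array([33.0, 43.5])   # measured magnitudes at those points

# Fit shmax = a*ux + b*uy + c for each calibration point
A = np.column_stack([trial_disp, np.ones(3)])
coef = np.linalg.solve(A, shmax_model)       # one column per point

# Solve a*ux + b*uy = obs - c for the best-fit boundary displacement
ab = coef[:2].T                              # rows: points, cols: (a, b)
rhs = shmax_obs - coef[2]
ux, uy = np.linalg.solve(ab, rhs)
print(f"best-fit boundary displacement: ux = {ux:.2f} m, uy = {uy:.2f} m")
```

With more than two calibration points, the exact 2x2 solve would be replaced by a least-squares fit over all observations.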
In the context of the data compilation for the Alberta stress model, the Canadian database of the World Stress Map (WSM) could be expanded by 514 new data records. This update of the Canadian stress map after about 20 years, with a specific focus on Alberta, shows that the maximum horizontal stress (SHmax) is oriented southwest to northeast over large areas of North America. The SHmax orientation in Alberta is very homogeneous, with an average of about 47°. In order to calculate the average SHmax orientation on a regular grid as well as to estimate the wavelength of the stress orientation, an existing algorithm has been improved and applied to the Canadian data. The newly introduced quasi interquartile range on the circle (QIROC) improves the variance estimation of periodic data, as it is less susceptible to outliers.
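The averaging step that such an algorithm builds on can be sketched as follows. SHmax azimuths are axial data with a 180° period, so angles are doubled before the circular mean and resultant length are computed; the QIROC statistic itself is not reproduced here, this shows only the standard circular-mean step underneath it.

```python
import numpy as np

def mean_shmax(azimuths_deg):
    """Circular mean and resultant length for axial (180°-periodic) data."""
    theta = np.deg2rad(2.0 * np.asarray(azimuths_deg))   # unfold 180° period
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mean = np.rad2deg(np.arctan2(S, C)) / 2.0 % 180.0    # fold back
    R = np.hypot(C, S)                                   # 1 = perfectly aligned
    return mean, R

print(mean_shmax([44, 47, 50, 46, 48]))   # ~47 degrees, R close to 1
print(mean_shmax([178, 2]))               # wrap-around handled correctly (~0°)
```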
Another geomechanical–numerical model was built to estimate the 3-D stress tensor in the target area "Nördlich Lägern" in Northern Switzerland. This location, with Opalinus clay as the host rock, is a potential repository site for high-level radioactive waste. The modelling aims to investigate the sensitivity of the stress tensor to tectonic shortening, topography, faults and variable rock properties within the Mesozoic sedimentary stack, with regard to the stability required for a suitable radioactive waste disposal site. The majority of the tectonic stresses caused by the far-field shortening from the south are accommodated by the competent rock units in the footwall and hanging wall of the argillaceous target horizon, the Upper Malm and Upper Muschelkalk. Thus, the differential stress within the host rock remains relatively low. East-west striking faults release stresses driven by tectonic shortening. The purely gravitational influence of the topography is low; higher SHmax magnitudes below topographic depressions and lower values below hills are mainly observed near the surface. A complete calibration of the model is not yet possible, as no stress magnitude data are available for calibration. The collection of these data will begin in 2015; subsequently they will be used to readjust the geomechanical–numerical model.
The third geomechanical–numerical model investigates the stress variation in an ultra-deep gold mine in South Africa. This reservoir model is spatially one order of magnitude smaller than the local model from Northern Switzerland. Here, the primary focus is to investigate the hypothesis that the Mw 1.9 earthquake on 27 December 2007 was induced by stress changes due to the mining process. The Coulomb failure stress change (ΔCFS) was used to analyse the stress change. It confirmed that the seismic event was induced by static stress transfer due to the mining progress. The rock was brought closer to failure on the derived rupture plane by stress changes of up to 1.5–15 MPa, depending on the type of ΔCFS analysis. Forward modelling of a generic excavation scheme reveals that the ΔCFS values increase significantly with decreasing distance to the dyke. Hence, even small changes in the mining progress can have a significant impact on the seismic hazard, i.e. on the probability of inducing a seismic event of economic concern.
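For reference, ΔCFS on a receiver plane is commonly written as the shear stress change plus an effective friction coefficient times the normal stress change. A minimal sketch follows; the sign convention and the default friction value are common assumptions, not necessarily the values used in the thesis.

```python
def delta_cfs(d_tau, d_sigma_n, mu_eff=0.6):
    """Coulomb failure stress change on a receiver plane.

    d_tau     : shear stress change in slip direction (MPa)
    d_sigma_n : normal stress change, tension positive (MPa), so a
                positive value unclamps the fault
    mu_eff    : effective friction coefficient (0.6 is a common default,
                not necessarily the value used in the thesis)
    """
    return d_tau + mu_eff * d_sigma_n

# A positive result means the plane was pushed toward failure:
print(delta_cfs(d_tau=1.0, d_sigma_n=0.8))   # 1.48 MPa
```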
The H.E.S.S. array is a third-generation Imaging Atmospheric Cherenkov Telescope (IACT) array. It is located in the Khomas Highland in Namibia and measures very high energy (VHE) gamma-rays. In Phase I, the array started data taking in 2004 with its four identical 13 m telescopes. Since then, H.E.S.S. has emerged as the most successful IACT experiment to date. Among the almost 150 sources of VHE gamma-ray radiation found so far, even the oldest detection, the Crab Nebula, keeps surprising the scientific community with unexplained phenomena such as the recently discovered very energetic flares of high energy gamma-ray radiation. During its most recent flare, which was detected by the Fermi satellite in March 2013, the Crab Nebula was simultaneously observed with the H.E.S.S. array for six nights. The results of these observations are discussed in detail in the course of this work. During the nights of the flare, the new 24 m × 32 m H.E.S.S. II telescope was still being commissioned, but participated in the data taking for one night. To be able to reconstruct and analyze the data of the H.E.S.S. Phase II array, the algorithms and software used for the H.E.S.S. Phase I array had to be adapted. The most prominent advanced shower reconstruction technique, developed by de Naurois and Rolland, is the template-based model analysis, which compares real shower images taken by the Cherenkov telescope cameras with shower templates obtained from a semi-analytical model. To find the best-fitting image, and therefore the parameters that best describe the air shower, a pixel-wise log-likelihood fit is performed. The adaptation of this advanced shower reconstruction technique to the heterogeneous H.E.S.S. Phase II array for stereo events (i.e. air showers seen by at least two telescopes of any kind), its performance on Monte Carlo simulations, as well as its application to real data are described.
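The following is a deliberately simplified sketch of such a pixel-wise likelihood fit: a 2-D Gaussian stands in for the semi-analytical shower template, and pure Gaussian pedestal noise replaces the full photoelectron statistics of the actual model analysis. It illustrates only the fitting principle, not the H.E.S.S. implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Simplified template fit: generate a noisy "camera image" from a known
# template, then recover the shower parameters by minimizing a pixel-wise
# negative log-likelihood (here Gaussian; the real analysis uses a
# likelihood including Poisson photoelectron statistics).
rng = np.random.default_rng(0)
xy = np.stack(np.meshgrid(np.arange(32), np.arange(32)), -1).reshape(-1, 2)

def template(params):
    x0, y0, amp, width = params
    r2 = ((xy - [x0, y0]) ** 2).sum(axis=1)
    return amp * np.exp(-r2 / (2 * width ** 2))

true = (14.0, 19.0, 80.0, 3.0)
sigma = 2.0                                   # pedestal noise per pixel
image = template(true) + rng.normal(0, sigma, len(xy))

def neg_log_likelihood(params):
    resid = image - template(params)
    return 0.5 * np.sum(resid ** 2) / sigma ** 2   # Gaussian pixel likelihood

fit = minimize(neg_log_likelihood, x0=(16, 16, 50, 2), method="Nelder-Mead")
print("fitted (x0, y0, amp, width):", np.round(fit.x, 2))
```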
The background of civil service reform in Indonesia lies in the emergence of the reformation movement in 1998, following the fall of the authoritarian New Order regime. The reformation movement saw the introduction of reforms in Indonesia's various governmental institutions, including the civil service. The civil service reforms were marked by the revision of Act 8/74 through Act 43 of 1999 on Civil Service Administration. The implementation of the civil service reform program, which was carried out by both central and local governments, required cooperation between the actors (in particular Ministries, agencies and local governments), known as coordination.
Currently, the coordination that occurs between actors tends to be rigid and hierarchical. As a result, targets are not efficiently and effectively met. Hierarchical coordination, without a strong public sector infrastructure, tends to have a negative impact on achieving the desired outcomes of the civil service reform program. As an intrinsic part of the New Order regime, hierarchical coordination resulted in inefficiency and lack of efficacy. Despite these inefficiencies, the administration and the political environment have changed significantly as a result of the reform process.
Obvious examples of the reforms are changes in recruitment patterns, placement and remuneration policies. However, in the case of Indonesia, it appears that every state institution has its own policy. Thus, there has not been policy coherence in the civil service reform program, resulting in the lack of a sustainable program. The important question to examine is how the coordination mechanisms of the civil service reform program in the central government have developed during the reform era in Indonesia.
The purpose of this study is to analyse the linkages between coordination mechanisms and the actual implementation of civil service reform programs. This is undertaken as a basis for intervention in the structures and patterns of coordination mechanisms in the implementation of civil service reform programs. The next step is to formulate the development of coordination mechanisms, particularly to create structures and patterns of civil service reform which are more sustainable given the specific characteristics of public sector organisations in Indonesia.
The benefits of this research are a stronger understanding of the linkages between the mechanisms of coordination and the implementation of civil service reform programs. Through this analysis, the findings can then be applied as a basic consideration in planning a sustainable civil service reform program. On the basis of theoretical issues concerning the linkages between coordination mechanisms and the sustainability of civil service reform programs, this book explores the type of coordination needed to test the proportional and sustainable concept of the intended civil service reform program in Indonesia.
Research conducted through studies and surveys, and by donors, has shown that poor coordination is the major hindrance to civil service reform in Indonesia. This research employs a qualitative approach. In this study, the coordination mechanisms and implementation of civil service reform programs are examined by means of case studies of the State Ministry for Administrative Reform, the National Civil Service Agency and the National Public Administration Institute. The coordination mechanisms in these Ministries and agencies were analysed using indicators of effective and efficient coordination. The analysis of the coordination mechanisms shows a tendency towards rigid hierarchical coordination. This raises concerns about fragmentation among departments and agencies at the central government level and calls for integrated civil service reform at both central and local government levels. In the context of implementation, a hierarchical mechanism of coordination affects various aspects, such as program formulation, the implementation flow of the program, the impact of policies, and the achievement of targets. In particular, there was a shift in mainstream civil service reform in the Ministries and agencies, marked by the emergence of sectoral interests and inefficiencies in the civil service reform program. The primary result of successful civil service reform is increased professionalism in the civil service.
The findings on hierarchical mechanisms and the prescriptions which will follow show that the professionalism of Indonesia’s civil service is at stake. The implementation of the program through coordination mechanisms in Ministries and agencies is measured in various dimensions: the centre of coordination, integration of coordination, sustainability of coordination and multidimensionality of coordination.
The results of this analysis show that coordination mechanisms and the implementation of civil service reform are more successful when they are based on integration rather than on hierarchical mechanisms. For a successful implementation of the reform program, it is crucial to intervene and change the type of coordination at the central government level through the integration approach (hierarchy, market, and network). Furthermore, in order to move towards the integration type of coordination mechanism, the separation of administration and politics in the practice of good governance needs to be carried out immediately and simultaneously. Based on this analysis, it can be concluded that the integration type of coordination mechanism is suitable for a sustainable civil service reform program in Indonesia. Finally, to achieve coherent civil service reforms, national policies developed according to the central government's priorities are indispensable, establishing a coordination mechanism that can be adhered to throughout all reform sectors.
Despite the remarkable progress made in the past century, which has revolutionized our understanding of the universe, there are numerous open questions left in theoretical physics. Particularly important is the fact that the theories describing the fundamental interactions of nature are incompatible. Einstein's theory of general relativity describes gravity as a dynamical spacetime, which is curved by matter and whose curvature determines the motion of matter. On the other hand we have quantum field theory, in the form of the standard model of particle physics, where particles interact via the remaining interactions - the electromagnetic, weak and strong interaction - on a flat, static spacetime without gravity. A theory of quantum gravity is hoped to cure this incompatibility by heuristically replacing classical spacetime by a 'quantum spacetime'. Several approaches attempt to define such a theory, with differing underlying premises and ideas, and it is not clear which is to be preferred. Yet a minimal requirement is compatibility with the classical theory they attempt to generalize. Interestingly, many of these models rely on discrete structures in their definition or postulate discreteness of spacetime to be fundamental. Besides the direct advantages discretisations provide, e.g. permitting numerical simulations, they come with serious caveats requiring thorough investigation: in general, discretisations break the fundamental diffeomorphism symmetry of gravity and are generically not unique. Both complicate establishing the connection to the classical continuum theory. The main focus of this thesis lies in the investigation of this relation for spin foam models. This is done on different levels of the discretisation/triangulation, ranging from a few simplices up to the continuum limit. In the regime of very few simplices we confirm and deepen the connection of spin foam models to discrete gravity. Moreover, we discuss dynamical principles, e.g. diffeomorphism invariance in the discrete, to fix the ambiguities of the models. In order to satisfy these conditions, the discrete models have to be improved in a renormalisation procedure, which also allows us to study their continuum dynamics. Applied to simplified spin foam models, we uncover a rich, non-trivial fixed point structure, which we summarize in a phase diagram. Inspired by these methods, we propose a method to consistently construct the continuum theory, which comes with a unique vacuum state.
Previous studies on the acquisition of verb inflection in normally developing children have revealed an astonishing pattern: children use correctly inflected verbs in their own speech but fail to make use of verb inflections when comprehending sentences uttered by others. Thus, a three-year-old might well be able to say something like 'The cat sleeps on the bed', but fails to understand that the same sentence, when uttered by another person, refers to only one sleeping cat and not more than one. The previous studies that have examined children's comprehension of verb inflections have employed a variant of a picture selection task in which the child was asked to explicitly indicate (via pointing) which semantic meaning she had inferred from the test sentence. Recent research on other linguistic structures, such as pronouns or focus particles, has indicated that earlier comprehension abilities can be found when methods are used that do not require an explicit reaction, such as preferential looking tasks. This dissertation aimed to examine whether children are truly unable to understand the connection between the verb form and the meaning of the sentence subject until the age of five years, or whether earlier comprehension can be found when a different measure, preferential looking, is used. Additionally, children's processing of subject-verb agreement violations was examined. The three experiments of this thesis that examined children's comprehension of verb inflections revealed the following: German-speaking three- to four-year-old children looked more at a picture showing one actor when hearing a sentence with a singular inflected verb, but only when their eye gaze was tracked and they did not have to perform a picture selection task. When they were asked to point to the matching picture, they performed at chance level. This pattern indicates asymmetries in children's language performance even within the receptive modality. The fourth experiment examined sensitivity to subject-verb agreement violations and did not reveal evidence for such sensitivity in three- and four-year-old children; only at the age of five were children's looking patterns influenced by the grammatical violations. The results of these experiments are discussed in relation to the existence of a production-comprehension asymmetry in the use of verb inflections and children's underlying grammatical knowledge.
Synchronization is a fundamental phenomenon in nature. It can be considered as a general property of self-sustained oscillators to adjust their rhythm in the presence of an interaction.
In this work we investigate complex regimes of synchronization phenomena by means of theoretical analysis, numerical modeling, as well as practical analysis of experimental data.
As a subject of our investigation we consider the chimera state, in which spontaneous symmetry breaking of an initially homogeneous oscillator lattice splits the system into two parts with different dynamics. The chimera state, as a new synchronization phenomenon, was first found in a system of non-locally coupled oscillators and has attracted a lot of attention in the last decade. However, recent studies indicate that this state is also possible in globally coupled systems. In the first part of this work, we show under which conditions the chimera-like state appears in a system of globally coupled identical oscillators with intrinsic delayed feedback. The results explain how initially monostable oscillators become effectively bistable in the presence of the coupling and create a mean field that sustains the coexistence of synchronized and desynchronized states. We also discuss other examples where a chimera-like state appears due to the frequency dependence of the phase shift in the bistable system.
In the second part, we investigate this topic further by modeling the influence of an external periodic force on an oscillator with intrinsic delayed feedback. We performed a stability analysis of the synchronized state and constructed Arnold tongues. The results explain the formation of the chimera-like state and the hysteretic behavior of the synchronization region. We also consider two sets of oscillator parameters with symmetric and asymmetric Arnold tongues, which correspond to the monostable and bistable regimes of the oscillator.
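A minimal numerical sketch of this setting (not the thesis's model): a phase oscillator with delayed feedback under an external periodic drive, where scanning the drive frequency for the locking range traces one horizontal cut through an Arnold tongue. All parameter values are illustrative.

```python
import numpy as np

# Phase oscillator with delayed feedback and external periodic drive:
# dphi/dt = omega0 + a*sin(phi(t - tau) - phi(t)) + eps*sin(W*t - phi(t)).
# The oscillator counts as locked if its observed frequency matches the
# drive frequency within a small tolerance.
def locked(omega_drive, omega0=1.0, a=0.2, tau=2 * np.pi, eps=0.15,
           dt=0.01, t_max=1000.0):
    n_hist = int(tau / dt)
    phi = np.zeros(int(t_max / dt) + n_hist)      # flat initial history
    for i in range(n_hist, len(phi) - 1):
        t = (i - n_hist) * dt
        dphi = (omega0
                + a * np.sin(phi[i - n_hist] - phi[i])        # delayed feedback
                + eps * np.sin(omega_drive * t - phi[i]))     # external drive
        phi[i + 1] = phi[i] + dt * dphi
    mid = len(phi) // 2                            # skip the transient
    obs = (phi[-1] - phi[mid]) / (dt * (len(phi) - 1 - mid))
    return abs(obs - omega_drive) < 1e-3

for w in (0.90, 0.95, 1.00, 1.05, 1.10):
    print(f"drive frequency {w:.2f}: {'locked' if locked(w) else 'drifting'}")
```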
In the third part, we present the results of work done in collaboration with our colleagues from the Psychology Department of the University of Potsdam. The project aimed to study the effect of the cardiac rhythm on human time perception using synchronization analysis. For our part, we performed a statistical analysis of the data obtained from an experiment on a free time-interval reproduction task. We examined how one's heartbeat influences time perception and searched for possible phase synchronization between heartbeat cycles and time reproduction responses. The findings support the prediction that cardiac cycles can serve as input signals and are used for the reproduction of time intervals in the range of several seconds.
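The synchronization analysis described here can be sketched as follows: each response is assigned a phase within its enclosing heartbeat (R-peak to R-peak) cycle, and the phases are then tested for non-uniformity via the mean resultant length and a first-order Rayleigh approximation. The event times below are simulated, not the experimental data.

```python
import numpy as np

# Assign each response a phase within its enclosing heartbeat cycle,
# then quantify phase synchronization with the mean resultant length.
rng = np.random.default_rng(3)
r_peaks = np.cumsum(rng.normal(0.9, 0.05, 400))        # R-peak times (s)
responses = rng.uniform(r_peaks[10], r_peaks[-10], 60)   # key-press times (s)

idx = np.searchsorted(r_peaks, responses) - 1            # preceding R-peak
phase = 2 * np.pi * ((responses - r_peaks[idx])
                     / (r_peaks[idx + 1] - r_peaks[idx]))

R = np.abs(np.mean(np.exp(1j * phase)))   # mean resultant length
n = len(phase)
p_rayleigh = np.exp(-n * R ** 2)          # first-order Rayleigh p-value
print(f"sync index R = {R:.3f}, Rayleigh p ~ {p_rayleigh:.3f}")
```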
The adaptation of cell growth and proliferation to environmental changes is essential for the survival of biological systems. The evolutionarily conserved Ser/Thr protein kinase "Target of Rapamycin" (TOR) has emerged as a major signaling node that couples the sensing of numerous growth signals to the coordinated regulation of cellular metabolism and growth. Although the TOR signaling pathway has been widely studied in heterotrophic organisms, research on TOR in photosynthetic eukaryotes has been hampered by the reported resistance of land plants to rapamycin. Thus, the finding that Chlamydomonas reinhardtii is sensitive to rapamycin establishes this unicellular green alga as a useful model system to investigate TOR signaling in photosynthetic eukaryotes.
The observation that rapamycin does not fully arrest Chlamydomonas growth, which differs from observations made in other organisms, prompted us to investigate the regulatory function of TOR in Chlamydomonas in the context of the cell cycle. Therefore, a growth system that allowed synchronous growth under largely unperturbed cultivation in a fermenter was set up, and the synchronized cells were characterized in detail. In a highly resolved kinetic study, the synchronized cells were analyzed for changes in cytological parameters such as cell number and size distribution as well as their starch content. Furthermore, we applied mass spectrometric analysis to profile primary and lipid metabolism. This system was then used to analyze the response dynamics of the Chlamydomonas metabolome and lipidome to TOR inhibition by rapamycin.
The results show that TOR inhibition reduces cell growth, delays cell division and daughter cell release, and results in a 50% reduction in cell number at the end of the cell cycle. Consistent with the growth phenotype, we observed strong changes in carbon and nitrogen partitioning in the direction of rapid conversion into carbon and nitrogen stores, through an accumulation of starch, triacylglycerol and arginine. Interestingly, the conversion of carbon into triacylglycerol appears to have occurred faster than the conversion into starch after TOR inhibition, which may indicate a more dominant role of TOR in the regulation of TAG biosynthesis than in that of starch.
This study shows clearly, for the first time, a complex picture of dynamic metabolic and lipidomic changes during the cell cycle of Chlamydomonas reinhardtii, and furthermore reveals a complex regulation and adjustment of metabolite pools and lipid composition in response to TOR inhibition.
Characterization of drought tolerance in potato cultivars for identification of molecular markers
(2014)
Entrepreneurship is known to be a main driver of economic growth. Hence, governments have an interest in supporting and promoting entrepreneurial activities. Start-up subsidies, which have been analyzed extensively, only aim at mitigating the lack of financial capital. However, some entrepreneurs also lack human, social, and managerial capital. One way to address these shortcomings is to subsidize coaching programs for entrepreneurs. However, theoretical and empirical evidence about business coaching and programs subsidizing coaching is scarce. This dissertation gives an extensive overview of coaching and is the first empirical study for Germany analyzing the effects of coaching programs on their participants. In the theoretical part of the dissertation, the process of a business start-up is described, and it is discussed how and in which stage of the company's evolution coaching can influence entrepreneurial success. The concept of coaching is compared to other non-monetary types of support such as training, mentoring, consulting, and counseling. Furthermore, national and international support programs are described. Most programs have either no or small positive effects. However, there is little quantitative evidence in the international literature. In the empirical part of the dissertation, the effectiveness of coaching is shown by evaluating two German coaching programs, which support entrepreneurs via publicly subsidized coaching sessions. One of the programs aims at entrepreneurs who were employed before becoming self-employed, whereas the other program targets formerly unemployed entrepreneurs. The analysis is based on the evaluation of a quantitative and a qualitative dataset. The qualitative data were gathered in intensive one-on-one interviews with coaches and entrepreneurs. These data give a detailed insight into the coaching topics, duration, process, and effectiveness, and into the thoughts of coaches and entrepreneurs. The quantitative data include information about 2,936 German-based entrepreneurs. Using propensity score matching, the success of participants of the two coaching programs is compared with adequate groups of non-participants. In contrast to many other studies, personality traits are also observed and controlled for in the matching process. The results show that only the program for formerly unemployed entrepreneurs has small positive effects: participants have a higher survival probability in self-employment and a higher probability of hiring employees than matched non-participants. In contrast, the program for formerly employed individuals has negative effects. Compared to individuals who did not participate in the coaching program, participants have a lower probability of staying in self-employment, a lower net income, fewer employees, and lower life satisfaction. There are several reasons for these differing results. First, formerly unemployed individuals have more basic coaching needs than formerly employed individuals. Coaches can satisfy these basic needs, whereas formerly employed individuals have more complex business problems, which cannot easily be solved by a coaching intervention. Second, the analysis reveals that formerly employed individuals are quite successful in general. It is easier to increase the success of formerly unemployed individuals, as they start from a lower base level of success than formerly employed individuals. An effect heterogeneity analysis shows that coaching effectiveness differs by region.
Coaching for formerly unemployed entrepreneurs is especially useful in regions with poor labor market conditions. In summary, and in line with the previous literature, it is found that coaching has only small effects on the success of entrepreneurs. The previous employment status, the characteristics of the entrepreneur, and the regional labor market conditions play a crucial role in the effectiveness of coaching. In conclusion, coaching needs to be well tailored to the individual and applied thoroughly. Therefore, governments should design and provide coaching programs only after due consideration.
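The evaluation logic rests on propensity score matching; the following Python sketch shows the basic mechanics on synthetic data. The covariates, outcomes, and effect size are invented for illustration; real analyses add overlap checks and balance diagnostics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Estimate propensity scores from observed covariates, match each
# participant to the nearest non-participant on the score, and compare
# outcomes (average treatment effect on the treated, ATT).
rng = np.random.default_rng(7)
n = 2936
X = rng.normal(size=(n, 3))                  # e.g. age, education, a personality trait
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0]))) == 1
outcome = X.sum(axis=1) + 0.3 * treated + rng.normal(size=n)  # synthetic effect

ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# 1-nearest-neighbor matching on the propensity score (with replacement)
ps_ctrl = ps[~treated]
y_ctrl = outcome[~treated]
matches = np.abs(ps[treated][:, None] - ps_ctrl[None, :]).argmin(axis=1)
att = (outcome[treated] - y_ctrl[matches]).mean()
print(f"estimated effect on the treated (true value 0.3): {att:.3f}")
```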
This work introduces concepts and corresponding tool support to enable a complementary approach to dealing with recovery. Programmers need to recover a development state, or a part thereof, when previously made changes reveal undesired implications. However, when the need arises suddenly and unexpectedly, recovery often involves expensive and tedious work. To avoid such tedious work, the literature recommends avoiding unexpected recovery demands by following a structured and disciplined approach, which consists of the application of various best practices, including working on only one thing at a time, performing small steps, and making proper use of versioning and testing tools. However, the attempt to avoid unexpected recovery is both time-consuming and error-prone. On the one hand, it requires disproportionate effort to minimize the risk of unexpected situations. On the other hand, applying the recommended practices selectively, which saves time, can hardly avoid recovery. In addition, the constant need for foresight and self-control has unfavorable implications: it is exhausting and impedes creative problem solving. This work proposes to make recovery fast and easy and introduces corresponding support called CoExist. Such dedicated support turns situations of unanticipated recovery from tedious experiences into pleasant ones. It makes recovery fast and easy to accomplish, even if explicit commits are unavailable or tests have been ignored for some time. When mistakes and unexpected insights are no longer associated with tedious corrective actions, programmers are encouraged to change source code as a means to reason about it, as opposed to making changes only after structuring and evaluating them mentally. This work further reports on an implementation of the proposed tool support in the Squeak/Smalltalk development environment. The development of the tools was accompanied by regular performance and usability tests. In addition, this work investigates whether the proposed tools affect programmers' performance. In a controlled lab study, 22 participants improved the design of two different applications. Using a repeated measurement setup, the study examined the effect of providing CoExist on programming performance. The result of analyzing 88 hours of programming suggests that built-in recovery support as provided by CoExist has a positive effect on programming performance in explorative programming tasks.
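The core idea, that recovery becomes cheap if every change implicitly preserves the previous development state, can be caricatured in a few lines. This is a conceptual sketch only, not CoExist's actual Squeak/Smalltalk implementation.

```python
# Conceptual sketch only (not CoExist's implementation): if every change
# implicitly preserves the previous development state, any earlier state
# can be re-established without explicit commits.
class ChangeHistory:
    def __init__(self, initial_source):
        self._states = [initial_source]      # every state is kept

    def change(self, new_source):
        """Apply a change; the old state is preserved automatically."""
        self._states.append(new_source)

    def recover(self, steps_back):
        """Re-establish an earlier state without losing the newer ones."""
        state = self._states[-1 - steps_back]
        self._states.append(state)           # recovery is itself a change
        return state

h = ChangeHistory("def f(): return 1")
h.change("def f(): return 2")
h.change("def f(): retrun 3")                # a typo sneaks in
print(h.recover(steps_back=1))               # back to the working version
```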
In the field of disk-based parallel database management systems, there exists a great variety of solutions based on a shared-storage or a shared-nothing architecture. In contrast, main memory-based parallel database management systems are dominated solely by the shared-nothing approach, as it preserves the in-memory performance advantage by processing data locally on each server. We argue that this unilateral development is going to cease due to the combination of the following three trends: a) Today's network technology features remote direct memory access (RDMA) and narrows the performance gap between accessing the main memory of the local server and that of a remote server to a single order of magnitude, and even below. b) Modern storage systems scale gracefully, are elastic, and provide high availability. c) A modern storage system such as Stanford's RAMCloud even keeps all data resident in main memory. Exploiting these characteristics in the context of a main memory-based parallel database management system is desirable. The advent of RDMA-enabled network technology makes the creation of a parallel main memory DBMS based on a shared-storage approach feasible.
This thesis describes building a columnar database on shared main memory-based storage. The thesis discusses the resulting architecture (Part I), the implications on query processing (Part II), and presents an evaluation of the resulting solution in terms of performance, high-availability, and elasticity (Part III).
In our architecture, we use Stanford's RAMCloud as shared storage and the self-developed in-memory AnalyticsDB as the relational query processor on top. AnalyticsDB encapsulates data access and operator execution via an interface which allows seamless switching between local and remote main memory, while RAMCloud provides not only storage capacity but also processing power. Combining both aspects allows pushing down the execution of database operators into the storage system. We describe how the columnar data processed by AnalyticsDB is mapped to RAMCloud's key-value data model and how the performance advantages of columnar data storage can be preserved.
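One plausible mapping (illustrative, not necessarily the exact scheme used in the thesis) splits each column into fixed-size chunks and stores every chunk under one key, so that a columnar scan becomes a sequence of key lookups:

```python
# Illustrative column-to-key-value mapping: each column is split into
# fixed-size chunks, and every chunk becomes one key-value pair, so that
# columnar scans translate into sequential key lookups.
CHUNK_ROWS = 4  # tiny for demonstration; real chunks would be much larger

def column_to_kv(table_name, column_name, values):
    kv = {}
    for chunk_no in range(0, len(values), CHUNK_ROWS):
        key = f"{table_name}/{column_name}/{chunk_no // CHUNK_ROWS}"
        kv[key] = values[chunk_no:chunk_no + CHUNK_ROWS]
    return kv

store = column_to_kv("orders", "price", [9, 12, 7, 30, 4, 15, 22, 11, 5])
for key, chunk in store.items():
    print(key, chunk)
# A scan over 'price' reads keys orders/price/0, orders/price/1, ... in
# order, preserving the sequential access pattern of columnar storage.
```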
The combination of fast network technology and the possibility to execute database operators in the storage system opens the discussion of site selection. We construct a system model that allows the estimation of operator execution costs in terms of network transfer, data processed in memory, and wall time. For database operators that work on one relation at a time - such as a scan or materialize operation - this model can be used to discuss the site selection problem (data pull vs. operator push). Since a database query translates to the execution of several database operators, it is possible that the optimal site selection varies per operator. For the execution of a database operator that works on two (or more) relations at a time, such as a join, the system model is enriched by additional factors such as the chosen algorithm (e.g. Grace vs. Distributed Block Nested Loop vs. Cyclo-Join), the data partitioning of the respective relations and their overlap, as well as the permitted resource allocation.
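A toy version of such a cost model for a scan illustrates the pull-vs-push trade-off; the bandwidth constants and the selectivity-based result size below are invented for illustration.

```python
# Toy cost model in the spirit of the site-selection discussion: compare
# pulling a column to the query processor against pushing the scan into
# the storage nodes. Constants are invented for illustration.
def pull_cost(col_bytes, net_bw=5e9, cpu_bw=20e9):
    """Transfer the whole column, then scan it locally."""
    return col_bytes / net_bw + col_bytes / cpu_bw

def push_cost(col_bytes, selectivity, storage_cpu_bw=10e9, net_bw=5e9):
    """Scan remotely (slower CPU share), transfer only the result."""
    return col_bytes / storage_cpu_bw + selectivity * col_bytes / net_bw

col = 8 * 1_000_000_000          # 1e9 values, 8 bytes each
for sel in (0.01, 0.5, 1.0):
    better = "push" if push_cost(col, sel) < pull_cost(col) else "pull"
    print(f"selectivity {sel:4.2f}: {better} is cheaper")
```

The crossover with selectivity mirrors the intuition that pushing an operator into the storage system pays off as long as the operator reduces the amount of data that must cross the network.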
We present an evaluation on a cluster with 60 nodes where all nodes are connected via RDMA-enabled network equipment. We show that query processing performance is about 2.4x slower if everything is done via the data pull operator execution strategy (i.e. RAMCloud is used only for data access) and about 27% slower if operator execution is also supported inside RAMCloud (in comparison to operating only on main memory inside a server, without any network communication at all). The fast crash-recovery feature of RAMCloud can be leveraged to provide high availability, e.g. a server crash during query execution delays the query response by only about one second. Our solution is elastic in that it can adapt to changing workloads a) within seconds, b) without interruption of the ongoing query processing, and c) without manual intervention.