Search results (refined): Doctoral Theses, 2014, English (125 documents)
Wood is used in many applications because of its excellent mechanical properties, its relative abundance and the fact that it is a renewable resource. However, its wider use as an engineering material is limited because it swells and shrinks upon moisture changes and is susceptible to degradation by microorganisms and/or insects. Chemical modifications of wood have been shown to improve dimensional stability, water repellence and/or durability, thus increasing the potential service life of wood materials. However, current treatments are limited because it is difficult to introduce and fix such modifications deep inside the tissue and the cell wall. Within the scope of this thesis, novel chemical methods for modifying wood cell walls were developed to improve both the dimensional stability and the water repellence of wood. These methods were partly inspired by heartwood formation in living trees, a process that, for some species, results in the insertion of hydrophobic chemical substances into the cell walls of already dead wood cells. In the first part of this thesis, a cell wall modification chemistry inspired by this natural process of heartwood formation was used. Commercially available hydrophobic flavonoid molecules were effectively inserted into the cell walls of spruce, a softwood species with low natural durability, after a tosylation treatment to obtain “artificial heartwood”. Flavonoid-inserted cell walls showed reduced moisture absorption, resulting in better dimensional stability, water repellence and increased hardness. This approach differs markedly from established modifications, which mainly address the hydroxyl groups of cell wall polymers with hydrophilic substances. In the second part of the work, in-situ styrene polymerization inside the tosylated cell walls was studied, since hydrophobic polymers are known to adhere only weakly to the hydrophilic cell wall components.
The hydrophobic styrene monomers were inserted into the tosylated wood cell walls and then polymerized to form polystyrene in the cell walls, which increased the dimensional stability of the bulk wood material and considerably reduced the water uptake of the cell walls compared to controls. In the third part of the work, grafting of another hydrophobic and also biodegradable polymer, poly(ɛ-caprolactone), onto the wood cell walls by ring-opening polymerization of ɛ-caprolactone was studied at mild temperatures. The results indicated that polycaprolactone attached to the cell walls caused permanent swelling of the cell walls of up to 5%. The dimensional stability of the bulk wood material increased by 40% and water absorption was reduced by more than 35%. This method yielded a fully biodegradable and hydrophobized wood material, which mitigates the disposal problem of modified wood materials and has improved properties that extend the material's service life. Starting from a bio-inspired approach that showed great promise as an alternative to standard cell wall modifications, we demonstrated the possibility of inserting hydrophobic molecules into the cell walls and corroborated it with in-situ styrene and ɛ-caprolactone polymerization within the cell walls. This thesis shows that, despite the extensive knowledge and long history of using wood as a material, there is still room for novel chemical modifications that could have a high impact on improving wood properties.
One of the most significant current discussions in astrophysics concerns the origin of high-energy cosmic rays. According to our current knowledge, the abundance distribution of the elements in cosmic rays at their point of origin indicates, within plausible error limits, that they were initially formed by nuclear processes in the interiors of stars. It is also believed that their energy distribution up to 10¹⁸ eV has Galactic origins. Although knowledge about potential sources of cosmic rays is quite poor above ~10¹⁵ eV, the “knee” of the cosmic-ray spectrum, there seems to be wide consensus that up to the knee supernova remnants are the most likely candidates. Evidence for this comes from observations of non-thermal X-ray radiation, requiring synchrotron electrons with energies up to 10¹⁴ eV, precisely in supernova remnants. To date, however, there is no conclusive evidence that they produce nuclei, the dominant component of cosmic rays, in addition to electrons. In light of this dearth of evidence, γ-ray observations of supernova remnants offer the most promising direct way to confirm whether or not these astrophysical objects are indeed the main source of cosmic-ray nuclei below the knee. Recent observations with space- and ground-based observatories have established shell-type supernova remnants as GeV-to-TeV γ-ray sources. The interpretation of these observations is complicated, however, by the different radiation processes, leptonic and hadronic, that can produce similar fluxes in this energy band, rendering the nature of the emission itself ambiguous. The aim of this work is to develop a deeper understanding of these radiation processes in a particular shell-type supernova remnant, namely RX J1713.7–3946, using observations of the LAT instrument onboard the Fermi Gamma-ray Space Telescope.
Furthermore, to obtain accurate spectra and morphology maps of the emission associated with this supernova remnant, an improved model of the diffuse Galactic γ-ray emission background is developed. The analyses of RX J1713.7–3946 carried out with this improved background show that the hard Fermi-LAT spectrum cannot be ascribed to hadronic emission, leading to the conclusion that the leptonic scenario is instead the most natural picture for the high-energy γ-ray emission of RX J1713.7–3946. The leptonic scenario does not, however, rule out the possibility that cosmic-ray nuclei are accelerated in this supernova remnant; it rather suggests that the ambient density may not be high enough to produce significant hadronic γ-ray emission. Further investigations of other supernova remnants using the improved background developed in this work could enable compelling population studies, and hence prove or disprove the origin of Galactic cosmic-ray nuclei in these astrophysical objects. A breakthrough in the identification of the radiation mechanisms could finally be achieved with a new generation of instruments such as CTA.
In March 2010, the project CoCoCo (incipient COntinent-COntinent COllision) recorded a 650 km long amphibious N-S wide-angle seismic profile, extending from the Eratosthenes Seamount (ESM) across Cyprus and southern Turkey to the Anatolian plateau. The aim of the project is to reveal the impact of the transition from subduction to continent-continent collision of the African plate with the Cyprus-Anatolian plate. A visual quality check, frequency analysis and filtering were applied to the seismic data and revealed good data quality. Subsequent first-break picking, finite-difference ray tracing and inversion of the offshore wide-angle data led to a first-arrival tomographic model. This model reveals (1) P-wave velocities lower than 6.5 km/s in the crust, (2) a variable crustal thickness of about 28-37 km and (3) an upper crustal reflection at 5 km depth beneath the ESM. Two land shots in Turkey, also recorded on Cyprus, airgun shots south of Cyprus, and geological and previous seismic investigations provide the information to derive a layered velocity model beneath the Anatolian plateau and for the ophiolite complex on Cyprus. The analysis of the reflections provides evidence for a north-dipping plate subducting beneath Cyprus. The main features of this layered velocity model are (1) an upper and lower crust with large lateral changes in velocity structure and thickness, (2) a Moho depth of about 38-45 km beneath the Anatolian plateau, (3) a shallow north-dipping subducting plate below Cyprus with an increasing dip and (4) a typical ophiolite sequence on Cyprus with a total thickness of about 12 km. The offshore-onshore seismic data complete and improve the information about the velocity structure beneath Cyprus and the deeper part of the offshore tomographic model. Thus, the wide-angle seismic data provide detailed insights into the 2-D geometry and velocity structure of the uplifted and overriding Cyprus-Anatolian plate.
Subsequent gravity modelling confirms and extends the crustal P-wave velocity model. The deeper part of the subducting plate is constrained by the gravity data and has a dip angle of ~28°. Finally, an integrated analysis of the geophysical and geological information allows a comprehensive interpretation of the crustal structure related to the collision process.
Monoclonal antibodies (mAbs) are engineered immunoglobulin G (IgG) molecules that have been used for more than 20 years as targeted therapies in oncology, infectious diseases and (auto-)immune disorders. Their protein nature greatly influences their pharmacokinetics (PK), which exhibit typical linear and non-linear behaviors.
While it is common to use empirical modeling to analyze clinical PK data of mAbs, there is neither a clear consensus nor guidance on how, on the one hand, to select the structure of classical compartment models and, on the other hand, to interpret PK parameters mechanistically. The mechanistic knowledge embodied in physiologically-based PK (PBPK) models is likely to support rational selection of classical models, and a methodology to link empirical and PBPK models is therefore desirable. However, published PBPK models for mAbs are quite diverse with respect to the physiology of the distribution spaces and the parameterization of the non-specific elimination involving the neonatal Fc receptor (FcRn) and endogenous IgG (IgGendo). The remarkable discrepancy between the simplicity of biodistribution data and the complexity of published PBPK models translates into parameter identifiability issues.
In this thesis, we address this problem with a simplified PBPK model derived from a hierarchy of more detailed PBPK models and based on simplifications of the tissue distribution model. With the novel tissue model, we break new ground in the mechanistic modeling of mAb disposition: we demonstrate that binding to FcRn is indeed linear and that it is not possible to infer which tissues are involved in the unspecific elimination of wild-type mAbs. We also provide a new approach to predict tissue partition coefficients based on mechanistic insights: we directly link tissue partition coefficients (Ktis) to data-driven, species-independent published antibody biodistribution coefficients (ABCtis), and thus ensure extrapolation from pre-clinical species to human with the simplified PBPK model. We further extend the simplified PBPK model to account for a target, which is relevant for characterizing the non-linear clearance due to mAb-target interaction.
With model reduction techniques, we reduce the dimensionality of the simplified PBPK model to design 2-compartment models, thus guiding classical model development with a physiological and mechanistic interpretation of the PK parameters. We finally derive a new scaling approach for the anatomical and physiological parameters in PBPK models that translates inter-individual variability into the design of mechanistic covariate models with a direct link to classical compartment models, which is especially useful for population PK analysis during clinical development.
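The target structure of such a reduction, a classical 2-compartment model with linear clearance, can be sketched as a pair of coupled ODEs. The following is a minimal illustration only; the parameter names and values are generic placeholders, not results from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 2-compartment mAb model (linear clearance only);
# CL, Q, V1, V2 are placeholder values, not thesis-derived parameters.
CL, Q = 0.2, 0.5      # clearance and inter-compartmental flow [L/day]
V1, V2 = 3.0, 2.5     # central and peripheral volumes [L]

def two_cmt(t, y):
    a1, a2 = y                      # amounts in central/peripheral [mg]
    c1, c2 = a1 / V1, a2 / V2       # concentrations [mg/L]
    da1 = -CL * c1 - Q * c1 + Q * c2   # elimination + distribution
    da2 = Q * c1 - Q * c2              # return flow from periphery
    return [da1, da2]

# 100 mg IV bolus into the central compartment, simulated for 50 days
sol = solve_ivp(two_cmt, (0, 50), [100.0, 0.0], dense_output=True)
central_conc = sol.y[0][-1] / V1
```

A mechanistic covariate model, as derived in the thesis, would then express CL, Q, V1 and V2 as functions of anatomical and physiological quantities instead of fitting them as free constants.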
Metabolic systems tend to exhibit steady states that can be measured in terms of their concentrations and fluxes. These measurements can be regarded as a phenotypic representation of all the complex interactions and regulatory mechanisms taking place in the underlying metabolic network. Such interactions determine the system's response to external perturbations and are responsible, for example, for its asymptotic stability or for oscillatory trajectories around the steady state. However, determining these perturbation responses in the absence of fully specified kinetic models remains an important challenge of computational systems biology. Structural kinetic modeling (SKM) is a framework to analyse whether a metabolic steady state remains stable under perturbation, without requiring detailed knowledge about individual rate equations. It provides a parameterised representation of the system's Jacobian matrix in which the model parameters encode information about the enzyme-metabolite interactions. Stability criteria can be derived by generating a large number of structural kinetic models (SK-models) with randomly sampled parameter sets and evaluating the resulting Jacobian matrices. The parameter space can be analysed statistically in order to detect network positions that contribute significantly to the perturbation response. Because the sampled parameters are equivalent to the elasticities used in metabolic control analysis (MCA), the results are easy to interpret biologically. In this project, the SKM framework was extended by several novel methodological improvements. These improvements were evaluated in a simulation study using a set of small example pathways with simple Michaelis-Menten rate laws. Afterwards, a detailed analysis of the dynamic properties of the neuronal TCA cycle was performed in order to demonstrate how the new insights obtained in this work could be used for the study of complex metabolic systems.
The first improvement was achieved by examining the biological feasibility of the elasticity combinations created during Monte Carlo sampling. Using a set of small example systems, it was found that the majority of sampled SK-models would yield negative kinetic parameters if they were translated back into kinetic models. To overcome this problem, a simple criterion was formulated that filters out such infeasible models; applying this criterion changed the conclusions of the SKM experiment. The second improvement of this work was the application of supervised machine-learning approaches to the analysis of SKM experiments. So far, SKM experiments have focused on the detection of individual enzymes in order to identify single reactions important for maintaining stability or oscillatory trajectories. In this work, this approach was extended by demonstrating how SKM enables the detection of ensembles of enzymes or metabolites that act together in an orchestrated manner to coordinate the pathway's response to perturbations. To this end, stable and unstable states served as class labels, and classifiers were trained to detect elasticity regions associated with stability and instability. Classification was performed using decision trees and relevance vector machines (RVMs). The decision trees produced good classification accuracy in terms of model bias and generalizability. RVMs outperformed decision trees when applied to small models, but encountered severe problems when applied to larger systems because of their high runtime requirements. The decision tree rulesets were analysed statistically and individually in order to explore the role of individual enzymes or metabolites in controlling the system's trajectories around steady states. The third improvement of this work was the establishment of a relationship between the SKM framework and the related field of MCA.
In particular, it was shown how the sampled elasticities can be converted to flux control coefficients, which were then investigated for their predictive information content in classifier training. After evaluation on the small example pathways, the methodology was used to study two steady states of the neuronal TCA cycle with respect to the intrinsic mechanisms responsible for their stability or instability. The findings showed that several elasticities are jointly coordinated to control stability and that the main source of potential instability was mutations in the enzyme alpha-ketoglutarate dehydrogenase.
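The core SKM loop, sampling normalized elasticities, assembling the parameterised Jacobian and testing its eigenvalues, can be sketched for a toy two-metabolite chain. This example is trivially stable (every sample passes), and the stoichiometry and sampling ranges are illustrative rather than taken from the thesis's TCA-cycle model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pathway: influx -> S1 -> S2 -> out (3 reactions, 2 metabolites).
N = np.array([[1, -1,  0],
              [0,  1, -1]])          # stoichiometric matrix
v = np.array([1.0, 1.0, 1.0])        # steady-state fluxes
S = np.array([1.0, 1.0])             # steady-state concentrations

# SKM: Jacobian J = Lambda @ theta, with Lambda fixed by N, v, S
Lam = np.diag(1.0 / S) @ N @ np.diag(v)

stable, n_samples = 0, 1000
for _ in range(n_samples):
    # theta: normalized substrate elasticities, sampled in [0, 1];
    # entries are zero where a reaction does not depend on a metabolite.
    theta = np.zeros((3, 2))
    theta[1, 0] = rng.uniform(0, 1)  # reaction 2 depends on S1
    theta[2, 1] = rng.uniform(0, 1)  # reaction 3 depends on S2
    J = Lam @ theta                  # parameterised Jacobian sample
    if np.all(np.linalg.eigvals(J).real < 0):
        stable += 1

fraction_stable = stable / n_samples
```

In a realistic network with feedback or conserved moieties, `fraction_stable` drops below one, and the sampled theta values associated with unstable Jacobians become the class-labelled data on which the decision trees and RVMs described above are trained.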
During this work I built a four-wave mixing setup for time-resolved femtosecond spectroscopy of Raman-active lattice modes. This setup enables the study of the selective excitation of phonon polaritons. These quasi-particles arise from the coupling of electromagnetic waves and transverse optical lattice modes, the so-called phonons. The phonon polaritons were investigated in the optically non-linear, ferroelectric crystals LiNbO₃ and LiTaO₃.
The direct observation of the frequency shift of the scattered narrow-bandwidth probe pulses proves the role of the Raman interaction in the probing and excitation of phonon polaritons. I compare this experimental method with measurements in which ultra-short laser pulses are used; there, the frequency shift remains obscured by the relatively broad bandwidth of these laser pulses. In an experiment with narrow-bandwidth probe pulses, the Stokes and anti-Stokes intensities are spectrally separated. They are assigned to the corresponding counter-propagating wavepackets of phonon polaritons, so the dynamics of these wavepackets could be studied separately. Based on these findings, I develop the mathematical description of the so-called homodyne detection of light for the case of light scattering from counter-propagating phonon polaritons.
Furthermore, I modified the broad bandwidth of the ultra-short pump pulses using bandpass filters to generate two pump pulses with non-overlapping spectra. This enables the frequency-selective excitation of polariton modes in the sample, which allowed me to observe even very weak polariton modes in LiNbO₃ and LiTaO₃ that belong to the higher branches of the dispersion relation of phonon polaritons. The experimentally determined dispersion relation of the phonon polaritons could therefore be extended and compared to theoretical models. In addition, I determined the frequency-dependent damping of the phonon polaritons.
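In the simplest single-oscillator picture, the theoretical dispersion relation mentioned here follows from k²c²/ω² = ε(ω) with ε(ω) = ε∞ + (ε₀ − ε∞)ω_TO²/(ω_TO² − ω²). The sketch below computes the resulting lower and upper polariton branches; the dielectric constants are generic placeholders, not fitted LiNbO₃ or LiTaO₃ values.

```python
import numpy as np

# Placeholder single-oscillator constants (c = 1, arbitrary units)
eps_inf, eps_0 = 5.0, 7.0
w_TO = 1.0  # TO phonon frequency

def branches(k):
    """Solve k^2 = eps(w) * w^2, a quadratic in w^2:
       eps_inf*w^4 - (eps_0*w_TO^2 + k^2)*w^2 + k^2*w_TO^2 = 0."""
    a = eps_inf
    b = -(eps_0 * w_TO**2 + k**2)
    c = k**2 * w_TO**2
    disc = np.sqrt(b**2 - 4 * a * c)
    w2_lower = (-b - disc) / (2 * a)   # lower polariton branch (below w_TO)
    w2_upper = (-b + disc) / (2 * a)   # upper branch (above w_LO)
    return np.sqrt(w2_lower), np.sqrt(w2_upper)

k = np.linspace(0.01, 5, 200)
lower, upper = branches(k)
```

The lower branch approaches w_TO from below at large k, while the upper branch starts at the LO frequency w_LO = w_TO·√(ε₀/ε∞), reproducing the familiar anti-crossing between the light line and the phonon resonance.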
Scientific inquiry requires that we formulate not only what we know, but also what we do not know and by how much. In climate data analysis, this involves an accurate specification of measured quantities and a subsequent analysis that consciously propagates the measurement errors at each step. This dissertation presents a thorough analytical method to quantify the errors of measurement inherent in paleoclimate data. An additional focus is the uncertainty in assessing the coupling between different factors that influence the global mean temperature (GMT).
Paleoclimate studies critically rely on 'proxy variables' that record climatic signals in natural archives. However, such proxy records inherently involve uncertainties in determining the age of the signal. We present a generic Bayesian approach to analytically determine the proxy record along with its associated uncertainty, resulting in a time-ordered sequence of correlated probability distributions rather than a precise time series. We further develop a recurrence-based method to detect dynamical events from the proxy probability distributions. The methods are validated with synthetic examples and demonstrated with real-world proxy records. The proxy estimation step reveals the interrelations between proxy variability and uncertainty. The recurrence analysis of the East Asian Summer Monsoon during the last 9000 years confirms the well-known 'dry' events at 8200 and 4400 BP, plus an additional significantly dry event at 6900 BP.
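The recurrence-based detection step can be illustrated on a plain time series (the thesis works with sequences of probability distributions, which is more involved). In this sketch a binary recurrence matrix is thresholded at a tolerance eps, and the recurrence rate in sliding windows serves as a simple event indicator; the data are synthetic and purely illustrative.

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix: R[i, j] = 1 if |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

# Synthetic 'proxy' series: a slow oscillation with an abrupt shift,
# standing in for a real record.
t = np.linspace(0, 10, 400)
x = np.sin(2 * np.pi * t) + np.where(t > 5, 1.5, 0.0)

R = recurrence_matrix(x, eps=0.3)

# Windowed recurrence rate along the diagonal; a significant change
# in this statistic can flag a dynamical event.
w = 40
rr = np.array([R[i:i + w, i:i + w].mean() for i in range(len(x) - w)])
```

With distributional proxies, the absolute difference above would be replaced by a distance between the probability distributions of two time points, together with a significance test against a null model.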
We also analyze the network of dependencies surrounding the GMT. We find an intricate, directed network with multiple links between the different factors at multiple time delays. We further uncover a significant feedback from the GMT to the El Niño Southern Oscillation at quasi-biennial timescales. The analysis highlights the need for a more nuanced formulation of the influences between different climatic factors, as well as the limitations of trying to estimate such dependencies.
Automated location of seismic events is a very important task in microseismic monitoring operations as well as for local and regional seismic monitoring. Since microseismic records are generally characterised by a low signal-to-noise ratio, such methods are required to be robust to noise and sufficiently accurate. Most standard automated location routines are based on the automated picking, identification and association of the first arrivals of P and S waves and on the minimization of the residuals between the theoretical and observed arrival times of the considered seismic phases. Although current methods can accurately pick P onsets, the automatic picking of the S onset is still problematic, especially when the P coda overlaps the S-wave onset. In this thesis I developed a picking-free automated method that uses the Short-Term-Average/Long-Term-Average (STA/LTA) traces at different stations as observed data. I used the STA/LTA of several characteristic functions in order to increase the sensitivity to the P and S waves. For the P phases we use the STA/LTA traces of the vertical energy function, while for the S phases we use the STA/LTA traces of the horizontal energy trace and a further optimized characteristic function obtained using principal component analysis. The orientation of the horizontal components can be retrieved by a robust, linear approach of waveform comparison between stations within a network, using seismic sources outside the network (chapter 2). To locate a seismic event, we scan the space of possible hypocentral locations and origin times, and stack the STA/LTA traces along the theoretical arrival-time surface for both P and S phases. Iterating this procedure on a three-dimensional grid, we retrieve a multidimensional matrix whose absolute maximum corresponds to the spatial and temporal coordinates of the seismic event.
Location uncertainties are then estimated by perturbing the STA/LTA parameters (i.e. the lengths of both the long and short time windows) and relocating each event several times. In order to test the location method, I first applied it to a set of 200 synthetic events, and then to two different real datasets. The first is related to mining-induced microseismicity in a coal mine in northern Germany (chapter 3). In this case we successfully located 391 microseismic events with magnitudes between 0.5 and 2.0 Ml. To further validate the location method, I compared the retrieved locations with those obtained by a manual picking procedure. The second dataset consists of a pilot application performed in the Campania-Lucania region (southern Italy) using a 33-station seismic network (Irpinia Seismic Network) with an aperture of about 150 km (chapter 4). We located 196 crustal earthquakes (depth < 20 km) in the magnitude range 1.1 < Ml < 2.7. A subset of these locations was compared with accurate locations retrieved by a manual location procedure based on a double-difference technique. In both cases the results indicate good agreement with the manual locations. Moreover, the waveform stacking location method proves robust to noise and performs better than classical location methods based on the automatic picking of the P- and S-wave first arrivals.
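The STA/LTA characteristic function at the heart of this stacking scheme can be sketched as follows; the window lengths and the synthetic trace are illustrative choices, not the parameters used in the thesis.

```python
import numpy as np

def sta_lta(trace, nsta, nlta):
    """Classic STA/LTA ratio computed on the energy of a trace."""
    energy = trace ** 2
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # short-term average
    lta = (csum[nlta:] - csum[:-nlta]) / nlta   # long-term average
    # Align the two series on windows ending at the same sample.
    n = min(len(sta), len(lta))
    sta, lta = sta[-n:], lta[-n:]
    return np.where(lta > 0, sta / lta, 0.0)

# Synthetic trace: Gaussian noise with a transient 'arrival' at sample 500.
rng = np.random.default_rng(1)
trace = rng.normal(0, 1, 1000)
trace[500:520] += 8.0

ratio = sta_lta(trace, nsta=10, nlta=100)  # peaks near the arrival
```

In the location method itself, such ratio traces from all stations are stacked along theoretical P and S arrival-time surfaces, so no explicit picking of the peaks is ever needed.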
Effect of benzylglucosinolate on signaling pathways associated with type 2 diabetes prevention
(2014)
Type 2 diabetes (T2D) is a health problem throughout the world. In 2010, there were nearly 230 million individuals with diabetes worldwide, and it is estimated that in the economically advanced countries the number of cases will increase by about 50% in the next twenty years. Insulin resistance is one of the major features of T2D and is also a risk factor for metabolic and cardiovascular complications. Epidemiological and animal studies have shown that the consumption of vegetables and fruits can delay or prevent the development of the disease, although the underlying mechanisms of these effects are still unclear. Brassica species such as broccoli (Brassica oleracea var. italica) and nasturtium (Tropaeolum majus) possess a high content of bioactive phytochemicals, e.g. nitrogen-sulfur compounds (glucosinolates and isothiocyanates) and polyphenols, largely associated with the prevention of cancer. Isothiocyanates (ITCs) display their anti-carcinogenic potential by inducing detoxifying phase II enzymes and increasing glutathione (GSH) levels in tissues. In T2D, an increase in gluconeogenesis and triglyceride synthesis and a reduction in fatty acid oxidation, accompanied by the presence of reactive oxygen species (ROS), are observed; altogether the result of an inappropriate response to insulin. Forkhead box O (FOXO) transcription factors play a crucial role in the regulation of insulin effects on gene expression and metabolism, and alterations in FOXO function could contribute to metabolic disorders in diabetes.
In this study, using stably transfected human osteosarcoma cells (U-2 OS) with constitutive expression of FOXO1 protein labeled with GFP (green fluorescent protein) and human hepatoma (HepG2) cell cultures, the ability of benzylisothiocyanate (BITC), derived from benzylglucosinolate extracted from nasturtium, to modulate (i) the insulin-signaling pathway, (ii) the intracellular localization of FOXO1 and (iii) the expression of proteins involved in glucose metabolism, ROS detoxification, cell cycle arrest and DNA repair was evaluated. BITC promoted oxidative stress and in response induced FOXO1 translocation from the cytoplasm into the nucleus, antagonizing the insulin effect. The BITC stimulus was able to down-regulate gluconeogenic enzymes, which can be considered an anti-diabetic effect; to promote antioxidant resistance, expressed by the up-regulation of manganese superoxide dismutase (MnSOD) and detoxification enzymes; to modulate autophagy by induction of BECLIN1 and down-regulation of the mammalian target of rapamycin complex 1 (mTORC1) pathway; and to promote cell cycle arrest and DNA damage repair by up-regulation of the cyclin-dependent kinase inhibitor (p21CIP) and Growth Arrest/DNA Damage Repair (GADD45). Except for the nuclear factor (erythroid-derived)-like 2 (NRF2) and its influence on the gene expression of detoxification enzymes, all the observed effects were independent of FOXO1, protein kinase B (AKT/PKB) and the NAD-dependent deacetylase sirtuin-1 (SIRT1). The current study provides evidence that, besides their anticarcinogenic potential, isothiocyanates might have a role in T2D prevention. The BITC stimulus mimics the fasting state, in which insulin signaling is not triggered and FOXO proteins remain in the nucleus modulating the expression of their target genes, with the advantage of a down-regulation of gluconeogenesis instead of its increase.
These effects suggest that BITC might be considered a promising substance for the prevention or treatment of T2D; the factors behind its modulatory effects therefore need further investigation.
Nowadays, software systems are becoming more and more complex. To tackle this challenge, diverse techniques, such as design patterns, service-oriented architectures (SOA), software development processes and model-driven engineering (MDE), are used to improve productivity while keeping time to market and product quality stable. Several of these techniques are used in parallel in order to profit from their combined benefits. While the use of sophisticated software development processes is standard today, MDE is only just being adopted in practice. However, research has shown that the application of MDE is not always successful. It is not fully understood when the advantages of MDE can be exploited and to what degree MDE can also be disadvantageous for productivity. Further, when combining different techniques that aim to affect the same factor (e.g. productivity), the question arises whether these techniques really complement each other or, in contrast, cancel out each other's effects. This raises the concrete question of how MDE and other techniques, such as software development processes, are interrelated. Both aspects (advantages and disadvantages for productivity as well as the interrelation with other techniques) need to be understood in order to identify risks relating to the productivity impact of MDE. Before studying MDE's impact on productivity, it is necessary to investigate the range of validity that can be reached for the results. This includes two questions. First, is MDE's impact on productivity similar for all approaches of adopting MDE in practice? Second, does MDE's impact on productivity for a given approach of using MDE in practice remain stable over time? The answers to both questions are crucial for handling the risks of MDE, but also for the design of future studies on MDE success. This thesis addresses these questions with the goal of supporting the adoption of MDE in the future.
To enable a differentiated discussion about MDE, the term "MDE setting" is introduced. An MDE setting refers to the applied technical setting, i.e. the employed manual and automated activities, artifacts, languages and tools. An MDE setting's possible impact on productivity is studied with a focus on changeability and the interrelation with software development processes. This is done by introducing a taxonomy of changeability concerns that might be affected by an MDE setting. Further, three MDE traits are identified, and it is studied for which manifestations of these traits software development processes are impacted. To enable the assessment and evaluation of an MDE setting's impacts, the Software Manufacture Model language is introduced. This is a process modeling language that allows one to reason about how the relations between (modeling) artifacts (e.g. models or code files) change during the application of manual or automated development activities. On that basis, risk analysis techniques are provided. These techniques allow identifying changeability risks and assessing the manifestations of the MDE traits (and with them an MDE setting's impact on software development processes). To address the range of validity, MDE settings from practice and their evolution histories were captured in the context of this thesis. First, this data is used to show that MDE settings cover the whole spectrum concerning their impact on changeability and their interrelation with software development processes: it is neither rare that MDE settings are neutral with respect to processes nor rare that they have an impact on them, and their impact on changeability likewise differs considerably. Second, a taxonomy of the evolution of MDE settings is introduced. In that context, it is discussed to what extent different types of changes to an MDE setting can influence that setting's impact on changeability and its interrelation with processes.
The category of structural evolution, which can change these characteristics of an MDE setting, is identified. The captured MDE settings from practice are used to show that structural evolution exists and is common. In addition, examples of structural evolution steps are collected that actually led to a change in the characteristics of the respective MDE settings. Two implications follow: first, the observed diversity of MDE settings confirms the need for the analysis techniques presented in this thesis; second, evolution is one explanation for the diversity of MDE settings in practice. To summarize, this thesis studies the nature and evolution of MDE settings in practice. As a result, support for the adoption of MDE settings is provided in the form of techniques for the identification of risks relating to productivity impacts.
The data quality of real-world datasets needs to be constantly monitored and maintained so that organizations and individuals can reliably use their data. Data integration projects in particular suffer from poor initial data quality and as a consequence consume more effort and money. Commercial products and research prototypes for data cleansing and integration help users improve the quality of individual and combined datasets. They can be divided into standalone systems and database management system (DBMS) extensions. On the one hand, standalone systems do not interact well with a DBMS and require time-consuming data imports and exports. On the other hand, DBMS extensions are often limited by the underlying system and do not cover the full set of data cleansing and integration tasks.
We overcome both limitations by implementing a concise set of five data cleansing and integration operators on the parallel data analytics platform Stratosphere. We define the semantics of the operators, present their parallel implementation, and devise optimization techniques for individual operators and combinations thereof. Users specify declarative queries in our query language METEOR with our new operators to improve the data quality of individual datasets or integrate them into larger datasets. By integrating the data cleansing operators into the higher-level language layer of Stratosphere, users can easily combine cleansing operators with operators from other domains, such as information extraction, into complex data flows. Through a generic description of the operators, the Stratosphere optimizer reorders operators even from different domains to find better query plans.
As a case study, we reimplemented a part of the large Open Government Data integration project GovWILD with our new operators and show that our queries run significantly faster than the original GovWILD queries, which rely on relational operators. Evaluation reveals that our operators exhibit good scalability on up to 100 cores, so that even larger inputs can be efficiently processed by scaling out to more machines. Finally, our scripts are considerably shorter than the original GovWILD scripts, which results in better maintainability of the scripts.
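The core of a data cleansing operator of this kind, duplicate detection, can be sketched in a few lines (an illustrative Python sketch, not the thesis's METEOR operators or their parallel Stratosphere implementation; function names and the similarity threshold are assumptions):

```python
# Hypothetical sketch of a duplicate-detection operator: pairs of records
# whose token-set similarity on one attribute exceeds a threshold are
# flagged as duplicates. A real parallel implementation would avoid the
# quadratic pairwise comparison, e.g. by blocking.

def jaccard(a, b):
    """Jaccard similarity of the token sets of two strings."""
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def find_duplicates(records, key, threshold=0.8):
    """Naive pairwise duplicate detection on one attribute."""
    dups = []
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if jaccard(records[i][key], records[j][key]) >= threshold:
                dups.append((i, j))
    return dups

records = [
    {"name": "Barack Obama"},
    {"name": "Obama Barack"},
    {"name": "Angela Merkel"},
]
print(find_duplicates(records, "name", 0.8))  # -> [(0, 1)]
```

In a declarative setting such as METEOR, the user would only state the attribute and threshold; the system chooses the physical execution strategy.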
Today, it is well known that galaxies like the Milky Way consist not only of stars but also of gas and dust. The galactic halo, a sphere of gas that surrounds the stellar disk of a galaxy, is especially interesting. It provides a wealth of information about gaseous material flowing towards and away from galaxies and about their hierarchical evolution. For the Milky Way, the so-called high-velocity clouds (HVCs), fast-moving neutral gas complexes in the halo that can be traced by absorption-line measurements, are believed to play a crucial role in the overall matter cycle in our Galaxy. Over the last decades, the properties of these halo structures and their connection to the local circumgalactic and intergalactic medium (CGM and IGM, respectively) have been investigated in great detail by many different groups. So far it remains unclear, however, to what extent the results of these studies can be transferred to other galaxies in the local Universe. In this thesis, we study the absorption properties of Galactic HVCs and compare the HVC absorption characteristics with those of intervening QSO absorption-line systems at low redshift. The goal of this project is to improve our understanding of the spatial extent and physical conditions of gaseous galaxy halos in the local Universe. In the first part of the thesis we use HST/STIS ultraviolet spectra of more than 40 extragalactic background sources to statistically analyze the absorption properties of the HVCs in the Galactic halo. We determine fundamental absorption-line parameters including covering fractions of different weakly/intermediately/highly ionized metals with a particular focus on SiII and MgII. Due to the similarity in the ionization properties of SiII and MgII, we are able to estimate the contribution of HVC-like halo structures to the cross section of intervening strong MgII absorbers at z = 0. 
Our study implies that only the most massive HVCs would be regarded as strong MgII absorbers, if the Milky Way halo were seen as a QSO absorption-line system from an exterior vantage point. Combining the observed absorption cross section of Galactic HVCs with the well-known number density of intervening strong MgII absorbers at z = 0, we conclude that the contribution of infalling gas clouds (i.e., HVC analogs) in the halos of Milky Way-type galaxies to the cross section of strong MgII absorbers is 34%. This result indicates that only about one third of the strong MgII absorption can be associated with HVC analogs around other galaxies, while the majority of the strong MgII systems is possibly related to galaxy outflows and winds. The second part of this thesis focuses on the properties of intervening metal absorbers at low redshift. The analysis of the frequency and physical conditions of intervening metal systems in QSO spectra and their relation to nearby galaxies offers new insights into the typical conditions of gaseous galaxy halos. One major aspect of our study was to regard intervening metal systems as possible HVC analogs. We perform a detailed analysis of absorption-line properties and line statistics for 57 metal absorbers along 78 QSO sightlines using newly obtained ultraviolet spectra from HST/COS. We find clear evidence for a bimodal distribution of the HI column density in the absorbers, a trend that we interpret as a sign of two different classes of absorption systems (with HVC analogs at the high-column-density end). With the help of the strong transitions of SiII λ1260, SiIII λ1206, and CIII λ977 we have set up Cloudy photoionization models to estimate the local ionization conditions, gas densities, and metallicities. 
We find that the intervening absorption systems studied by us have, on average, physical conditions similar to those of Galactic HVC absorbers, providing evidence that many of them represent HVC analogs in the vicinity of other galaxies. We therefore determine typical halo sizes for SiII, SiIII, and CIII for L = 0.01L∗ and L = 0.05L∗ galaxies. Based on the covering fractions of the different ions in the Galactic halo, we find that, for example, the typical halo size for SiIII is ∼ 160 kpc for L = 0.05L∗ galaxies. We test the plausibility of this result by searching for known galaxies close to the QSO sightlines and at similar redshifts as the absorbers. We find that more than 34% of the measured SiIII absorbers have galaxies associated with them, with the majority of the absorbers indeed being at impact parameters ρ ≤ 160 kpc.
The background of civil service reform in Indonesia lies in the emergence of the reformation movement in 1998, following the fall of the authoritarian New Order regime. The reformation movement saw the introduction of reforms in Indonesia's various governmental institutions, including the civil service. The civil service reforms were marked by the revision of Act 8/74 through Act 43 of 1999 on Civil Service Administration. The implementation of the civil service reform program, which was carried out by both central and local governments, required cooperation between the actors (in particular, Ministries, agencies and local governments), known as coordination.
Currently, the coordination that occurs between actors tends to be rigid and hierarchical. As a result, targets are not efficiently and effectively met. Hierarchical coordination, without a strong public sector infrastructure, tends to have a negative impact on achieving the desired outcomes of the civil service reform program. As an intrinsic part of the New Order regime, hierarchical coordination resulted in inefficiency and lack of efficacy. Despite these inefficiencies, the administration and the political environment have changed significantly as a result of the reform process.
Obvious examples of the reforms are changes in recruitment patterns, placement and remuneration policies. However, in the case of Indonesia, it appears that every state institution has its own policy. Thus, it appears that there has not been policy coherence in the civil service reform program, resulting in the lack of a sustainable program. The important question to examine is how the coordination mechanisms of the civil service reform program in the central government have developed during the reform era in Indonesia.
The purpose of this study is to analyse the linkages between coordination mechanisms and the actual implementation of civil service reform programs. This is undertaken as a basis for intervention based on the structures and patterns of coordination mechanisms in the implementation of civil service reform programs. The next step is to formulate the development of coordination mechanisms, particularly to create structures and patterns of civil service reform that are more sustainable given the specific characteristics of public sector organisations in Indonesia.
The benefits of this research are a stronger understanding of the linkages between the mechanisms of coordination and the implementation of civil service reform programs. Through this analysis, the findings can then be applied as a basic consideration in planning a sustainable civil service reform program. On the basis of theoretical issues concerning the linkages between coordination mechanisms and the sustainable implementation of civil service reform programs, this book explores the type of coordination needed to test the proportional and sustainable concept of the intended civil service reform program in Indonesia.
Research conducted through studies, surveys and donors has shown that poor coordination is the major hindrance to the civil service reform program in Indonesia. This research employs a qualitative approach. In this study, the coordination mechanisms and implementations of civil service reform programs are demonstrated by means of case studies of the State Ministry for Administrative Reform, the National Civil Service Agency and the National Public Administration Institute. The coordination mechanisms in these Ministries and agencies were analysed using indicators of effective and efficient coordination. The analysis of the coordination mechanisms shows a tendency towards rigid hierarchical coordination. This raises concerns about fragmentation among departments and agencies at the central government level and calls for integrated civil service reform at both central and local governmental levels. In the context of implementation programs, a hierarchical mechanism of coordination bears on various aspects, such as program formulation, the implementation flow of the program, the impact of policies, and the achievement of targets. In particular, there was a shift in the mainstream of civil service reform within the Ministries and agencies, marked by the emergence of sectoral interests and inefficiencies in the civil service reform program. The primary result of successful civil service reform is increased professionalism in the civil service.
The findings on hierarchical mechanisms and the prescriptions which will follow show that the professionalism of Indonesia’s civil service is at stake. The implementation of the program through coordination mechanisms in Ministries and agencies is measured in various dimensions: the centre of coordination, integration of coordination, sustainability of coordination and multidimensionality of coordination.
The results of this analysis show that coordination mechanisms and the implementation of civil service reform are more successful when they follow an integration approach rather than a hierarchical mechanism. For a successful implementation of the reform program, it is crucial to intervene and change the type of coordination at the central government through the integration approach (hierarchy, market, and network). Furthermore, in order to move towards the integration-type mechanism of coordination, the separation of administration and politics in the practice of good governance needs to be carried out immediately and simultaneously. Based on this analysis, it can be concluded that the integration-type mechanism of coordination is suitable for Indonesia for a sustainable civil service reform program. Finally, to achieve coherent civil service reforms, national policies developed according to the central government's priorities are indispensable, establishing a coordination mechanism that can be adhered to throughout all reform sectors.
In the presented thesis, the most advanced photon reconstruction technique of ground-based γ-ray astronomy is adapted to the H.E.S.S. 28 m telescope. The method is based on a semi-analytical model of electromagnetic particle showers in the atmosphere. The properties of cosmic γ-rays are reconstructed by comparing the camera image of the telescope with the Cherenkov emission that is expected from the shower model. To suppress the dominant background from charged cosmic rays, events are selected based on several criteria. The performance of the analysis is evaluated with simulated events. The method is then applied to two sources that are known to emit γ-rays. The first of these is the Crab Nebula, the standard candle of ground-based γ-ray astronomy. The results for this source confirm the expected performance of the reconstruction method, where the much lower energy threshold compared to H.E.S.S. I is of particular importance. A second analysis is performed on the region around the Galactic Centre. The analysis results emphasise the capabilities of the new telescope to measure γ-rays in an energy range that is interesting for both theoretical and experimental astrophysics. The presented analysis features the lowest energy threshold that has ever been reached in ground-based γ-ray astronomy, opening a new window to the precise measurement of the physical properties of time-variable sources at energies of several tens of GeV.
The Epoch of Reionization marks the second major change in the ionization state of the universe after recombination, going from a neutral to an ionized state. It starts with the appearance of the first stars and galaxies; a fraction of high-energy photons emitted from galaxies permeate into the intergalactic medium (IGM) and gradually ionize the hydrogen, until the IGM is completely ionized at z~6 (Fan et al., 2006). While the progress of reionization is driven by galaxy evolution, it changes the ionization and thermal state of the IGM substantially and affects subsequent structure and galaxy formation by various feedback mechanisms.
Understanding this interaction between reionization and galaxy formation is further impeded by a lack of understanding of the high-redshift galactic properties such as the dust distribution and the escape fraction of ionizing photons. Lyman Alpha Emitters (LAEs) represent a sample of high-redshift galaxies that are sensitive to all these galactic properties and the effects of reionization.
In this thesis we aim to understand the progress of reionization by performing cosmological simulations, which allows us to investigate the limits of constraining reionization by high-redshift galaxies such as LAEs, and to examine how galactic properties and the ionization state of the IGM affect the visibility and observed quantities of LAEs and Lyman Break galaxies (LBGs).
In the first part of this thesis we focus on performing radiative transfer calculations to simulate reionization. We have developed a mapping-sphere scheme, which, starting from spherically averaged temperature and density fields, uses our 1D radiative transfer code and computes the effect of each source on the IGM temperature and ionization (HII, HeII, HeIII) profiles, which are subsequently mapped onto a grid. Furthermore, we have updated the 3D Monte Carlo radiative transfer code pCRASH, enabling detailed reionization simulations which take individual source characteristics into account.
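A back-of-envelope companion to such per-source calculations is the classical Strömgren-sphere estimate of the equilibrium size of the ionized region around a single source (an illustrative textbook formula, not the thesis code, which follows the full time-dependent profiles; the source and density values below are assumptions):

```python
import math

ALPHA_B = 2.6e-13   # case-B recombination coefficient at ~1e4 K [cm^3/s]
KPC_CM = 3.086e21   # centimetres per kiloparsec

def stromgren_radius_kpc(q_ion, n_h):
    """Equilibrium ionized-region radius from Q = (4/3) pi R^3 n_H^2 alpha_B."""
    r_cm = (3.0 * q_ion / (4.0 * math.pi * n_h**2 * ALPHA_B)) ** (1.0 / 3.0)
    return r_cm / KPC_CM

# Hypothetical star-forming galaxy emitting Q ~ 1e53 ionizing photons/s
# into gas near the mean hydrogen density at z ~ 7 (n_H ~ 1e-4 cm^-3).
print(round(stromgren_radius_kpc(1e53, 1e-4), 1))
```

The equilibrium value overestimates actual bubble sizes at these densities, since the recombination time exceeds the age of the universe; this is one reason the time-dependent radiative transfer of the thesis is needed.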
In the second part of this thesis we perform a reionization simulation by post-processing a smoothed-particle hydrodynamical (SPH) simulation (GADGET-2) with 3D radiative transfer (pCRASH), where the ionizing sources are modelled according to the characteristics of the stellar populations in the hydrodynamical simulation. Following the ionization fractions of hydrogen (HI) and helium (HeII, HeIII), and temperature in our simulation, we find that reionization starts at z~11 and ends at z~6, and high density regions near sources are ionized earlier than low density regions far from sources.
In the third part of this thesis we couple the cosmological SPH simulation and the radiative transfer simulations with a physically motivated, self-consistent model for LAEs, in order to understand the importance of the ionization state of the IGM, the escape fraction of ionizing photons from galaxies and dust in the interstellar medium (ISM) for the visibility of LAEs. Comparison of our model's results with the LAE Lyman Alpha (Lya) and UV luminosity functions at z~6.6 reveals a three-dimensional degeneracy between the ionization state of the IGM, the ionizing photon escape fraction and the ISM dust distribution, which implies that LAEs act not only as tracers of reionization but also of the ionizing photon escape fraction and of the ISM dust distribution. This degeneracy does not break down even when we compare simulated with observed clustering of LAEs at z~6.6. However, our results show that reionization has the largest impact on the amplitude of the LAE angular correlation functions, and its imprints are clearly distinguishable from those of properties on galactic scales. These results show that reionization cannot be constrained tightly by exclusively using LAE observations. Further observational constraints, e.g. tomographies of the redshifted hydrogen 21cm line, are required.
In addition we also use our LAE model to probe the question when a galaxy is visible as a LAE or a LBG. Within our model galaxies above a critical stellar mass can produce enough luminosity to be visible as a LBG and/or a LAE. By finding an increasing duty cycle of LBGs with Lya emission as the UV magnitude or stellar mass of the galaxy rises, our model reveals that the brightest (and most massive) LBGs most often show Lya emission.
Predicting the Lya equivalent width (Lya EW) distribution and the fraction of LBGs showing Lya emission at z~6.6, we reproduce the observational trend of the Lya EWs with UV magnitude. However, the Lya EWs of the UV brightest LBGs exceed observations and can only be reconciled by accounting for an increased Lya attenuation of massive galaxies, which implies that the observed Lya brightest LAEs do not necessarily coincide with the UV brightest galaxies. We have analysed the dependencies of LAE observables on the properties of the galactic and intergalactic medium and the LAE-LBG connection, and this enhances our understanding of the nature of LAEs.
The adaptation of cell growth and proliferation to environmental changes is essential for the survival of biological systems. The evolutionarily conserved Ser/Thr protein kinase "Target of Rapamycin" (TOR) has emerged as a major signaling node that integrates the sensing of numerous growth signals into the coordinated regulation of cellular metabolism and growth. Although the TOR signaling pathway has been widely studied in heterotrophic organisms, research on TOR in photosynthetic eukaryotes has been hampered by the reported resistance of land plants to rapamycin. Thus, the finding that Chlamydomonas reinhardtii is sensitive to rapamycin establishes this unicellular green alga as a useful model system to investigate TOR signaling in photosynthetic eukaryotes.
The observation that rapamycin does not fully arrest Chlamydomonas growth, which differs from observations made in other organisms, prompted us to investigate the regulatory function of TOR in Chlamydomonas in the context of the cell cycle. Therefore, a growth system that allowed synchronous growth under largely unperturbed cultivation in a fermenter was set up and the synchronized cells were characterized in detail. In a highly resolved kinetic study, the synchronized cells were analyzed for changes in cytological parameters such as cell number, size distribution and starch content. Furthermore, we applied mass spectrometric analysis for profiling of primary and lipid metabolism. This system was then used to analyze the response dynamics of the Chlamydomonas metabolome and lipidome to TOR inhibition by rapamycin.
The results show that TOR inhibition reduces cell growth, delays cell division and daughter cell release and results in a 50% reduced cell number at the end of the cell cycle. Consistent with the growth phenotype we observed strong changes in carbon and nitrogen partitioning in the direction of rapid conversion into carbon and nitrogen storage through an accumulation of starch, triacylglycerol and arginine. Interestingly, it seems that the conversion of carbon into triacylglycerol occurred faster than into starch after TOR inhibition, which may indicate a more dominant role of TOR in the regulation of TAG biosynthesis than in the regulation of starch.
This study shows, for the first time, a complex picture of dynamic metabolic and lipidomic changes during the cell cycle of Chlamydomonas reinhardtii, and furthermore reveals a complex regulation and adjustment of metabolite pools and lipid composition in response to TOR inhibition.
Following the principles of green chemistry, a simple and efficient synthesis of functionalised imidazolium zwitterionic compounds from renewable resources was developed based on a modified one-pot Debus-Radziszewski reaction. The combination of different carbohydrate-derived 1,2-dicarbonyl compounds and amino acids is a simple way to modulate the properties and introduce different functionalities. A representative compound was assessed as an acid catalyst, and converted into acidic ionic liquids by reaction with several strong acids. The reactivity of the double carboxylic functionality was explored by esterification with long and short chain alcohols, as well as functionalised amines, which led to the straightforward formation of surfactant-like molecules or bifunctional esters and amides. One of these di-esters is currently being investigated for the synthesis of poly(ionic liquids). The functionalisation of cellulose with one of the bifunctional esters was investigated and preliminary tests employing it for the functionalisation of filter papers were carried out successfully. The imidazolium zwitterions were converted into ionic liquids via hydrothermal decarboxylation in flow, a benign and scalable technique. This method provides access to imidazolium ionic liquids via a simple and sustainable methodology, whilst completely avoiding contamination with halide salts. Different ionic liquids can be generated depending on the functionality contained in the ImZw precursor. Two alanine-derived ionic liquids were assessed for their physicochemical properties and applications as solvents for the dissolution of cellulose and the Heck coupling.
Reading is a complex cognitive task based on the analysis of visual stimuli. Due to the physiology of the eye, only a small number of letters around the fixation position can be extracted with high visual acuity, while the visibility of words and letters outside this so-called foveal region quickly drops with increasing eccentricity. As a consequence, saccadic eye movements are needed to repeatedly shift the fovea to new words for visual word identification during reading. Moreover, even within a foveated word, fixation positions near the word center are superior to other fixation positions for efficient word recognition (O'Regan, 1981; Brysbaert, Vitu, and Schroyens, 1996). Thus, most reading theories assume that readers aim specifically at word centers during reading (for a review see Reichle, Rayner, & Pollatsek, 2003). However, saccades' landing positions within words during reading are in fact systematically modulated by the distance of the launch site from the word center (McConkie, Kerr, Reddix, & Zola, 1988). In general, it is largely unknown how readers identify the center of upcoming target words, and there is no computational model of the sensorimotor translation of the decision for a target word into spatial word-center coordinates. Here we present a series of three studies which aim at advancing the current knowledge about the computation of saccade target coordinates during saccade planning in reading. Based on a large corpus analysis, we first identified word skipping as a further factor beyond the launch-site distance with a likewise systematic and surprisingly large effect on within-word landing positions. Most importantly, we found that the end points of saccades after skipped words are shifted two or more letters to the left compared to one-step saccades (i.e., from word N to word N+1) with equal launch-site distances. 
Then we present evidence from a single saccade experiment suggesting that the word-skipping effect results from highly automatic low-level perceptual processes, which are essentially based on the localization of blank spaces between words. Finally, in the third part, we present a Bayesian model of the computation of the word center from primary sensory measurements of inter-word spaces. We demonstrate that the model simultaneously accounts for launch-site and saccade-type contingent modulations of within-word landing positions in reading. Our results show that the spatial saccade target during reading is the result of complex estimations of the word center based on incomplete sensory information, which also leads to specific systematic deviations of saccades’ landing positions from the word center. Our results have important implications for current reading models and experimental reading research.
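The core of such a Bayesian estimate can be illustrated in a few lines (an illustrative sketch, not the thesis model; the Gaussian assumptions and the parameter values are ours): with a Gaussian prior over the word-center distance and one noisy Gaussian measurement, the posterior mean is a precision-weighted average, so the saccade target is pulled toward the prior and deviates systematically from the true center.

```python
# Combine a Gaussian prior N(mu0, s0^2) over the word-center distance
# with one noisy sensory measurement N(x, s^2) of that distance.

def posterior_center(mu0, s0, x, s):
    """Posterior mean and sd for a Gaussian prior and one Gaussian measurement."""
    w = s0**2 / (s0**2 + s**2)          # weight on the measurement
    mu = (1 - w) * mu0 + w * x          # precision-weighted average
    sd = (s0**2 * s**2 / (s0**2 + s**2)) ** 0.5
    return mu, sd

# Prior expects the center ~7 letters away; the noisy measurement says 10.
mu, sd = posterior_center(7.0, 2.0, 10.0, 2.0)
print(round(mu, 2))  # -> 8.5: the estimate undershoots the measured 10,
                     # shifted toward the prior
```

With equal prior and measurement uncertainty the estimate lands halfway between the two; noisier measurements (larger s, e.g. for more distant, skipped words) shift it further toward the prior.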
Enterprise-specific in-memory data management: HYRISEc - an in-memory column store engine for OLXP
(2014)
Boolean constraint solving technology has made tremendous progress over the last decade, leading to industrial-strength solvers, for example, in the areas of answer set programming (ASP), the constraint satisfaction problem (CSP), propositional satisfiability (SAT) and satisfiability of quantified Boolean formulas (QBF). However, in all these areas, there exist multiple solving strategies that work well on different applications; no strategy dominates all other strategies. Therefore, no individual solver shows robust state-of-the-art performance in all kinds of applications. Additionally, the question arises how to choose a well-performing solving strategy for a given application; this is a challenging question even for solver and domain experts. One way to address this issue is the use of portfolio solvers, that is, a set of different solvers or solver configurations. We present three new automatic portfolio methods: (i) automatic construction of parallel portfolio solvers (ACPP) via algorithm configuration, (ii) solving the NP-hard problem of finding effective algorithm schedules with Answer Set Programming (aspeed), and (iii) a flexible algorithm selection framework (claspfolio2) allowing for fair comparison of different selection approaches. All three methods show improved performance and robustness in comparison to individual solvers on heterogeneous instance sets from many different applications. Since parallel solvers are important to effectively solve hard problems on parallel computation systems (e.g., multi-core processors), we extend all three approaches to be effectively applicable in parallel settings. We conducted extensive experimental studies on different instance sets from ASP, CSP, MAXSAT, Operations Research (OR), SAT and QBF that indicate an improvement of the state of the art in solving heterogeneous instance sets. Last but not least, from our experimental studies, we deduce practical advice regarding the question when to apply which of our methods.
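The scheduling problem behind aspeed can be illustrated in miniature (the thesis solves it declaratively with ASP; this brute-force Python sketch over tiny, made-up runtime data is only for illustration): given per-instance runtimes of several solvers and a total time budget, allocate a time slice to each solver so that the number of instances solved within its slice by at least one solver is maximized.

```python
from itertools import product

def solved(runtimes, schedule):
    """An instance counts as solved if some solver finishes within its slice."""
    return sum(
        any(rt[s] <= schedule[s] for s in schedule)
        for rt in runtimes
    )

def best_schedule(runtimes, solvers, budget, step=1):
    """Brute force over discrete slice allocations summing to the budget."""
    best, best_count = None, -1
    slices = range(0, budget + 1, step)
    for alloc in product(slices, repeat=len(solvers)):
        if sum(alloc) != budget:
            continue
        sched = dict(zip(solvers, alloc))
        count = solved(runtimes, sched)
        if count > best_count:
            best, best_count = sched, count
    return best, best_count

# Per-instance runtimes (seconds) of two hypothetical solvers A and B.
runtimes = [{"A": 2, "B": 9}, {"A": 8, "B": 3}, {"A": 10, "B": 4}]
sched, n = best_schedule(runtimes, ["A", "B"], budget=8)
print(sched, n)  # -> {'A': 2, 'B': 6} 3: the schedule solves all 3 instances,
                 # although neither solver alone solves more than 2 in 8s
```

The brute force is exponential in the number of solvers, which is exactly why the thesis encodes the problem in ASP and hands it to an optimizing solver.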
Entrepreneurship is known to be a main driver of economic growth. Hence, governments have an interest in supporting and promoting entrepreneurial activities. Start-up subsidies, which have been analyzed extensively, only aim at mitigating the lack of financial capital. However, some entrepreneurs also lack human, social, and managerial capital. One way to address these shortcomings is by subsidizing coaching programs for entrepreneurs. However, theoretical and empirical evidence about business coaching and programs subsidizing coaching is scarce. This dissertation gives an extensive overview of coaching and is the first empirical study for Germany analyzing the effects of coaching programs on their participants. In the theoretical part of the dissertation the process of a business start-up is described and it is discussed how and in which stage of the company's development coaching can influence entrepreneurial success. The concept of coaching is compared to other non-monetary types of support such as training, mentoring, consulting, and counseling. Furthermore, national and international support programs are described. Most programs have either no or only small positive effects. However, there is little quantitative evidence in the international literature. In the empirical part of the dissertation the effectiveness of coaching is shown by evaluating two German coaching programs, which support entrepreneurs via publicly subsidized coaching sessions. One of the programs aims at entrepreneurs who were employed before becoming self-employed, whereas the other program is targeted at formerly unemployed entrepreneurs. The analysis is based on the evaluation of a quantitative and a qualitative dataset. The qualitative data are gathered by intensive one-on-one interviews with coaches and entrepreneurs. These data give a detailed insight into the coaching topics, duration, process, effectiveness, and the thoughts of coaches and entrepreneurs. 
The quantitative data include information about 2,936 German-based entrepreneurs. Using propensity score matching, the success of participants of the two coaching programs is compared with adequate groups of non-participants. In contrast to many other studies, personality traits are also observed and controlled for in the matching process. The results show that only the program for formerly unemployed entrepreneurs has small positive effects. Participants have a larger survival probability in self-employment and a larger probability of hiring employees than matched non-participants. In contrast, the program for formerly employed individuals has negative effects. Compared to individuals who did not participate in the coaching program, participants have a lower probability of staying in self-employment, lower earned net income, a lower number of employees and lower life satisfaction. There are several reasons for these differing results of the two programs. First, formerly unemployed individuals have more basic coaching needs than formerly employed individuals. Coaches can satisfy these basic coaching needs, whereas formerly employed individuals have more complex business problems, which cannot easily be solved by a coaching intervention. Second, the analysis reveals that formerly employed individuals are very successful in general. It is easier to increase the success of formerly unemployed individuals as they have a lower base level of success than formerly employed individuals. An effect heterogeneity analysis shows that coaching effectiveness differs by region. Coaching for previously unemployed entrepreneurs is especially useful in regions with bad labor market conditions. In summary, in line with previous literature, it is found that coaching has little effect on the success of entrepreneurs. The previous employment status, the characteristics of the entrepreneur and the regional labor market conditions play a crucial role in the effectiveness of coaching. 
In conclusion, coaching needs to be well tailored to the individual and applied thoroughly. Therefore, governments should design and provide coaching programs only after due consideration.
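The matching step described above can be illustrated with a toy sketch. The code below simulates invented entrepreneur data (covariates including a "personality trait", program participation, and an outcome), estimates propensity scores with a hand-rolled logistic regression, and matches each participant to the nearest non-participant on the score. All variable names, coefficients, and sample sizes are hypothetical and do not reflect the dissertation's dataset or its exact matching procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: X = covariates (incl. a "personality trait"), D = program
# participation, Y = outcome (e.g. an index of entrepreneurial success).
n = 500
X = rng.normal(size=(n, 3))
logits = 0.8 * X[:, 0] - 0.5 * X[:, 2]          # selection into the program
D = rng.random(n) < 1 / (1 + np.exp(-logits))
Y = 0.3 * D + X @ np.array([0.5, 0.2, -0.4]) + rng.normal(scale=0.5, size=n)

def fit_propensity(X, D, steps=2000, lr=0.1):
    """Logistic regression by gradient ascent (no external libraries)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (D - p) / len(X)
    return 1 / (1 + np.exp(-Xb @ w))

def att_nearest_neighbour(ps, D, Y):
    """Match each participant to the nearest non-participant on the
    propensity score (with replacement) and average the differences."""
    treated, controls = np.where(D)[0], np.where(~D)[0]
    diffs = [Y[i] - Y[controls[np.argmin(np.abs(ps[controls] - ps[i]))]]
             for i in treated]
    return float(np.mean(diffs))

ps = fit_propensity(X, D)
att = att_nearest_neighbour(ps, D, Y)
print(f"estimated effect on participants: {att:.2f}")
```

With correctly specified scores, the matched comparison strips out the selection on covariates that a naive treated-vs-untreated mean difference would absorb.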
The aim of the present thesis is to answer the question to what degree the processes involved in sentence comprehension are sensitive to task demands. A central phenomenon in this regard is the so-called ambiguity advantage, which is the finding that ambiguous sentences can be easier to process than unambiguous sentences. This finding may appear counterintuitive, because more meanings should be associated with a higher computational effort. Currently, two theories exist that can explain this finding.
The Unrestricted Race Model (URM) by van Gompel et al. (2001) assumes that several sentence interpretations are computed in parallel, whenever possible, and that the first interpretation to be computed is assigned to the sentence. Because the duration of each structure-building process varies from trial to trial, the parallelism in structure-building predicts that ambiguous sentences should be processed faster. This is because when two structures are permissible, the chances that some interpretation will be computed quickly are higher than when only one specific structure is permissible. Importantly, the URM is not sensitive to task demands such as the type of comprehension questions being asked.
A radically different proposal is the strategic underspecification model by Swets et al. (2008). It assumes that readers do not attempt to resolve ambiguities unless it is absolutely necessary; in other words, they underspecify. According to the strategic underspecification hypothesis, all attested replications of the ambiguity advantage are due to the fact that in those experiments, readers were not required to fully understand the sentence.
In this thesis, these two models of the parser's actions at choice-points in the sentence are presented and evaluated. First, it is argued that Swets et al.'s (2008) evidence against the URM and in favor of underspecification is inconclusive. Next, the precise predictions of the URM as well as the underspecification model are refined. Subsequently, a self-paced reading experiment involving the attachment of pre-nominal relative clauses in Turkish is presented, which provides evidence against strategic underspecification. A further experiment is presented which investigated relative clause attachment in German using the speed-accuracy tradeoff (SAT) paradigm. This experiment provides evidence against strategic underspecification and in favor of the URM. Furthermore, the results of the experiment are used to argue that human sentence comprehension is fallible, and that theories of parsing should be able to account for that fact. Finally, a third experiment is presented, which provides evidence for sensitivity to task demands in the treatment of ambiguities. Because this finding is incompatible with the URM, and because the strategic underspecification model has been ruled out, a new model of ambiguity resolution is proposed: the stochastic multiple-channel model of ambiguity resolution (SMCM). It is further shown that the quantitative predictions of the SMCM are in agreement with experimental data.
In conclusion, it is argued that the human sentence comprehension system is parallel and fallible, and that it is sensitive to task-demands.
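The URM's ambiguity advantage rests on a simple probabilistic fact: the minimum of two racing structure-building times is stochastically faster than a single one. A minimal simulation makes this concrete; the timing distribution and its parameters below are entirely hypothetical and serve only to illustrate the race logic, not the thesis's actual model fits.

```python
import random

random.seed(1)

def build_time(mu=400.0, sigma=80.0):
    """Hypothetical duration (ms) of one structure-building process."""
    return max(50.0, random.gauss(mu, sigma))

def trial(n_parses):
    # URM: all permissible structures race; the first one finished wins.
    return min(build_time() for _ in range(n_parses))

N = 20000
ambiguous = sum(trial(2) for _ in range(N)) / N    # two licit structures race
unambiguous = sum(trial(1) for _ in range(N)) / N  # only one licit structure

print(f"ambiguous: {ambiguous:.0f} ms, unambiguous: {unambiguous:.0f} ms")
```

Because the minimum of two draws is never slower than either draw alone, the simulated ambiguous condition comes out faster on average, which is exactly the ambiguity advantage the URM predicts without any appeal to task demands.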
The term Linked Data refers to connected information sources comprising structured data about a wide range of topics and for a multitude of applications. In recent years, the conceptional and technical foundations of Linked Data have been formalized and refined. To this end, well-known technologies have been established, such as the Resource Description Framework (RDF) as a Linked Data model and the SPARQL Protocol and RDF Query Language (SPARQL) for retrieving this information. Whereas most research has been conducted in the area of generating and publishing Linked Data, this thesis presents novel approaches for its improved management. In particular, we illustrate new methods for analyzing and processing SPARQL queries. Here, we present two algorithms suitable for identifying structural relationships between these queries. Both algorithms are applied to a large number of real-world requests to evaluate the performance of the approaches and the quality of their results. Based on this, we introduce different strategies enabling optimized access to Linked Data sources. We demonstrate how the presented approach facilitates effective utilization of SPARQL endpoints by prefetching results relevant for multiple subsequent requests. Furthermore, we contribute a set of metrics for determining technical characteristics of such knowledge bases. To this end, we devise practical heuristics and validate them through thorough analysis of real-world data sources. We discuss the findings and evaluate their impact on utilizing the endpoints. Moreover, we detail the adoption of a scalable infrastructure for improving Linked Data discovery and consumption. As we outline in an exemplary use case, this platform is suitable both for processing and for provisioning the corresponding information.
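As a rough illustration of how structural relationships between SPARQL queries can be quantified (the thesis's own algorithms are more elaborate), one simple measure is the Jaccard similarity over the triple patterns two queries share. The extraction below is deliberately crude, not a full SPARQL parser, and the example queries are invented:

```python
def triple_patterns(query: str) -> set[str]:
    """Crude extraction: take the dot-separated patterns inside the
    outermost braces (illustration only, not a full SPARQL parser)."""
    body = query.split("{", 1)[1].rsplit("}", 1)[0]
    return {p.strip() for p in body.split(" .") if p.strip()}

def jaccard(q1: str, q2: str) -> float:
    """Structural similarity as overlap of shared triple patterns."""
    a, b = triple_patterns(q1), triple_patterns(q2)
    return len(a & b) / len(a | b) if a | b else 1.0

q1 = "SELECT ?s WHERE { ?s a foaf:Person . ?s foaf:name ?n }"
q2 = "SELECT ?s WHERE { ?s a foaf:Person . ?s foaf:mbox ?m }"
print(jaccard(q1, q2))  # 1/3: one of three distinct patterns is shared
```

A similarity measure of this kind is one conceivable building block for prefetching: results of patterns shared with recent queries are likely to be requested again.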
Transcription factors (TFs) are ubiquitous gene expression regulators and play essential roles in almost all biological processes. This Ph.D. project is primarily focused on the functional characterisation of MYB112, a member of the R2R3-MYB TF family from the model plant Arabidopsis thaliana. This gene was selected due to its increased expression during senescence, based on previous qRT-PCR expression profiling experiments of 1880 TFs in Arabidopsis leaves at three developmental stages (15 mm leaf, 30 mm leaf and 20% yellowing leaf). MYB112 promoter GUS fusion lines were generated to further investigate the expression pattern of MYB112. Employing transgenic approaches in combination with metabolomics and transcriptomics, we demonstrate that MYB112 plays a major role in the regulation of plant flavonoid metabolism. We report enhanced and impaired anthocyanin accumulation in MYB112 overexpressors and MYB112-deficient mutants, respectively. Expression profiling reveals that MYB112 acts as a positive regulator of the transcription factor PAP1, leading to increased anthocyanin biosynthesis, and as a negative regulator of MYB12 and MYB111, which both control flavonol biosynthesis. We also identify MYB112 early responsive genes using a combination of several approaches. These include gene expression profiling (Affymetrix ATH1 microarrays and qRT-PCR) and transactivation assays in leaf mesophyll cell protoplasts. We show that MYB112 binds to an 8-bp DNA fragment containing the core sequence (A/T/G)(A/C)CC(A/T)(A/G/T)(A/C)(T/C). By electrophoretic mobility shift assay (EMSA) and chromatin immunoprecipitation coupled to qPCR (ChIP-qPCR) we demonstrate that MYB112 binds in vitro and in vivo to the MYB7 and MYB32 promoters, revealing them as direct downstream target genes. MYB TFs were previously reported to play an important role in controlling flavonoid biosynthesis in plants.
Many factors acting upstream of the anthocyanin biosynthesis pathway show enhanced expression levels during nitrogen limitation or at elevated sucrose content. In addition to these conditions, other environmental parameters including salinity or high light stress may trigger anthocyanin accumulation. In contrast to several other MYB TFs affecting anthocyanin biosynthesis pathway genes, MYB112 expression is not controlled by nitrogen limitation or carbon excess, but rather is stimulated by salinity and high light stress. Thus, MYB112 constitutes a previously uncharacterised regulatory factor that modifies anthocyanin accumulation under conditions of abiotic stress.
The characterization of exoplanets is a young and rapidly expanding field in astronomy. It includes a method called transmission spectroscopy, which searches for planetary spectral fingerprints in the light received from the host star during a transit. This technique allows for conclusions about the atmospheric composition at the terminator region, the boundary between the day and night side of the planet. Although observationally challenging, first attempts in the community have succeeded in detecting several absorption features in the optical wavelength range, for example a Rayleigh-scattering slope and absorption by sodium and potassium. However, other objects show a featureless spectrum, indicative of a cloud or haze layer of condensates masking the probed atmospheric layers.
In this work, we performed transmission spectroscopy by spectrophotometry of three Hot Jupiter exoplanets. When we began the work on this thesis, optical transmission spectra had been available for only two exoplanets. Our main goal was to enlarge the sample of probed objects in order to learn by comparative exoplanetology whether certain absorption features are common. We selected the targets HAT-P-12b, HAT-P-19b and HAT-P-32b, for which the detection of atmospheric signatures is feasible with current ground-based instrumentation. In addition, we monitored the host stars of all three objects photometrically to correct for the influence of stellar activity where necessary.
The measurements of all three objects favor featureless spectra. A variety of atmospheric compositions can explain the lack of wavelength-dependent absorption. However, the broad trend of featureless spectra in planets over a wide range of temperatures, found in this work and in similar studies recently published in the literature, favors an explanation based on the presence of condensates, even at very low concentrations, in the atmospheres of these close-in gas giants. This result points towards the general conclusion that the capability of transmission spectroscopy to determine atmospheric composition is limited, at least for measurements at low spectral resolution.
In addition, we refined the transit parameters and ephemerides of HAT-P-12b and HAT-P-19b. Our monitoring campaigns allowed for the detection of the stellar rotation period of HAT-P-19 and a refined age estimate. For HAT-P-12 and HAT-P-32, we derived upper limits on their potential variability. The calculated upper limits on systematic effects of starspots on the derived transmission spectra were found to be negligible for all three targets.
Finally, we discussed the observational challenges in the characterization of exoplanet atmospheres and the importance of correlated noise in the measurements, and formulated suggestions for improving the robustness of results in future work.
Bacteria respond to changing environmental conditions by switching the global pattern of expressed genes. In response to specific environmental stresses the cell activates stress-specific molecules such as sigma factors. These reversibly bind the RNA polymerase to form the so-called holoenzyme and direct it towards the appropriate stress response genes. In exponentially growing E. coli cells, the majority of the transcriptional activity is carried out by the housekeeping sigma factor, while stress responses are often under the control of alternative sigma factors. Different sigma factors compete for binding to a limited pool of RNA polymerase (RNAP) core enzymes, providing a mechanism for cross talk between genes or gene classes via the sharing of expression machinery. To quantitatively analyze the contribution of sigma factor competition to global changes in gene expression, we develop a thermodynamic model that describes the equilibrium binding between sigma factors and core RNAP, transcription, non-specific binding to DNA, and the modulation of the availability of the molecular components.
Association of the housekeeping sigma factor with core RNAP is generally favored by its abundance and its higher binding affinity to the core. In order to promote transcription by alternative sigma subunits, the bacterial cell modulates transcriptional efficiency in a reversible manner through several strategies, such as anti-sigma factors, 6S RNA and, generally, any kind of transcriptional regulator (e.g. activators or inhibitors). By shifting the outcome of sigma factor competition for the core, these modulators bias the transcriptional program of the cell. The model is validated by comparison with in vitro competition experiments, with which excellent agreement is found. We observe that transcription is affected via the modulation of the concentrations of the different types of holoenzymes, so saturated promoters are only weakly affected by sigma factor competition. However, in the case of overlapping promoters or promoters recognized by two types of sigma factors, we find that even saturated promoters are strongly affected.
Active transcription effectively lowers the affinity between the sigma factor driving it and the core RNAP, resulting in complex cross-talk effects and raising the question of how relevant in vitro affinity measurements are in the living cell. We also estimate that sigma factor competition is not strongly affected by non-specific binding of core RNAPs, sigma factors, and holoenzymes to DNA. Finally, we analyze the role of increased core RNAP availability upon the shut-down of ribosomal RNA transcription during the stringent response. We find that passive up-regulation of transcription dependent on alternative sigma factors is not only possible, but also displays hypersensitivity arising from sigma factor competition. Our theoretical analysis thus provides support for a significant role of passive control during that global switch of the gene expression program and gives new insights into RNAP partitioning in the cell.
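The core of such a competition model can be sketched as a mass-action equilibrium: each sigma factor binds core RNAP with its own dissociation constant, and the free core concentration follows from a conservation equation. The sketch below solves that equation by bisection; the concentrations and Kd values are invented for illustration, not measured E. coli parameters, and the sketch omits the transcription and non-specific-binding terms of the full model.

```python
def holoenzymes(E_tot, sigmas):
    """Equilibrium partitioning of core RNAP among competing sigma factors.

    sigmas: dict name -> (total concentration, dissociation constant Kd),
    all in the same arbitrary concentration unit. For E + s <-> Es with
    [Es] = [E][s]/Kd, conservation gives E_tot = E + sum_i [Es_i].
    """
    def bound(E):  # total core sequestered in holoenzymes at free core E
        return sum(tot * E / (Kd + E) for tot, Kd in sigmas.values())

    lo, hi = 0.0, E_tot
    for _ in range(200):          # bisect E + bound(E) = E_tot (monotonic)
        mid = (lo + hi) / 2
        if mid + bound(mid) > E_tot:
            hi = mid
        else:
            lo = mid
    E = (lo + hi) / 2
    return {name: tot * E / (Kd + E) for name, (tot, Kd) in sigmas.items()}

# Housekeeping sigma: abundant, tight binding; alternative sigma: scarce, weaker.
pools = {"sigma70": (1.0, 0.05), "sigma38": (0.3, 0.2)}
print(holoenzymes(E_tot=0.8, sigmas=pools))

# An anti-sigma factor can be mimicked by shrinking the free sigma70 pool,
# which shifts the competition towards the alternative holoenzyme:
print(holoenzymes(E_tot=0.8, sigmas={"sigma70": (0.4, 0.05),
                                     "sigma38": (0.3, 0.2)}))
```

Even this stripped-down version reproduces the qualitative behaviour described above: sequestering the housekeeping sigma factor passively raises the concentration of the alternative holoenzyme without any direct regulation of the alternative sigma.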
Pulsar wind nebulae (PWNe) are the most abundant TeV gamma-ray emitters in the Milky Way. The radiative emission of these objects is powered by fast-rotating pulsars, which feed part of their rotational energy into winds of relativistic particles. This thesis presents an in-depth study of the detected population of PWNe at high energies. To outline general trends regarding their evolutionary behaviour, a time-dependent model is introduced and compared to the available data. In particular, this work presents two exceptional PWNe which protrude from the rest of the population, namely the Crab Nebula and N 157B. Both objects are driven by pulsars with extremely high rotational energy loss rates; accordingly, they are often referred to as energetic twins. Modelling the non-thermal multi-wavelength emission of N 157B gives access to specific properties of this object, like the magnetic field inside the nebula. Comparing the derived parameters to those of the Crab Nebula reveals large intrinsic differences between the two PWNe. Possible origins of these differences are discussed in the context of the resembling pulsars.
Compared to the TeV gamma-ray regime, the number of detected PWNe is much smaller in the MeV-GeV gamma-ray range. In the latter range, the Crab Nebula stands out through the recent detection of gamma-ray flares. In general, such flux enhancements on short time scales of days to weeks were not expected in the theoretical understanding of PWNe. In this thesis, the variability of the Crab Nebula is analysed using data from the Fermi Large Area Telescope (Fermi-LAT). For the presented analysis, a new gamma-ray reconstruction method is used, providing a higher sensitivity and a lower energy threshold compared to previous analyses. The derived gamma-ray light curve of the Crab Nebula is investigated for flares and periodicity. The detected flares are analysed regarding their energy spectra, and their variety and commonalities are discussed. In addition, a dedicated analysis of the flare which occurred in March 2013 is performed. The derived short-term variability time scale is roughly 6 h, implying that a small region inside the Crab Nebula is responsible for the enigmatic flares. The most promising theories explaining the origins of the flux eruptions and gamma-ray variability are discussed in detail.
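The step from a ~6 h variability timescale to a small emission region follows from the standard light-crossing (causality) argument: a source cannot vary coherently faster than light can cross it, so R ≲ c·t_var (ignoring relativistic boosting). This bound can be checked directly:

```python
C = 299_792_458.0          # speed of light, m/s
T_VAR = 6 * 3600.0         # observed variability timescale, s

r_max_m = C * T_VAR        # causality limit on the size of the emitting region
AU = 1.495978707e11        # astronomical unit, m
print(f"R <= {r_max_m:.2e} m  (~{r_max_m / AU:.0f} AU)")
```

A few tens of astronomical units is a tiny fraction of the several-light-year extent of the Crab Nebula, which is why the flares must originate in a very compact sub-region.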
In the technical part of this work, a new analysis framework is presented. The introduced software, called gammalib/ctools, is currently being developed for the future CTA observatory. The analysis framework is extensively tested using data from the H.E.S.S. experiment. To conduct proper data analysis in the likelihood framework of gammalib/ctools, a model describing the distribution of background events in H.E.S.S. data is presented. The software provides the infrastructure to combine data from several instruments in one analysis. To study the gamma-ray emitting PWN population, data from Fermi-LAT and H.E.S.S. are combined in the likelihood framework of gammalib/ctools. In particular, the spectral peak, which usually lies in the overlapping energy regime of these two instruments, is determined with the presented analysis framework. The derived measurements are compared to the predictions of the time-dependent model. The combined analysis supports the conclusion of a diverse population of gamma-ray emitting PWNe.
Cyanobacteria produce about 40 percent of the world's primary biomass, but also a variety of often toxic peptides such as microcystin. Mass developments, so-called blooms, can pose a real threat to the drinking water supply in many parts of the world. This study aimed at characterizing the biological function of microcystin production in one of the most common bloom-forming cyanobacteria, Microcystis aeruginosa.
In a first attempt, the effect of elevated light intensity on microcystin production and its binding to cellular proteins was studied. To this end, conventional microcystin quantification techniques were combined with protein-biochemical methods. RubisCO, the key enzyme of primary carbon fixation, emerged as a major microcystin interaction partner. High light exposure strongly stimulated microcystin-protein interactions: up to 60 percent of the total cellular microcystin was detected bound to proteins, i.e. inaccessible to standard quantification procedures. Underestimation of total microcystin content when neglecting the protein fraction was also demonstrated in field samples. Finally, an immuno-fluorescence based method was developed to identify microcystin-producing cyanobacteria in mixed populations.
The high-light-induced microcystin interaction with proteins suggested an impact of the secondary metabolite on the primary metabolism of Microcystis, e.g. by modulating the activity of enzymes. To address this question, a comprehensive GC/MS-based approach was conducted to compare the accumulation of metabolites in the wild type of Microcystis aeruginosa PCC 7806 and the microcystin-deficient ΔmcyB mutant. Of all 501 detected non-redundant metabolites, 85 (17 percent) accumulated significantly differently in one of the two genotypes upon high light exposure. The accumulation of compatible solutes in the ΔmcyB mutant suggests a role of microcystin in fine-tuning the metabolic flow to prevent stress related to excess light, high oxygen concentration and carbon limitation.
Co-analysis of the widely used model cyanobacterium Synechocystis PCC 6803 revealed profound metabolic differences between species of cyanobacteria. Whereas Microcystis channeled more resources towards carbohydrate synthesis, Synechocystis invested more in amino acids. These findings were supported by electron microscopy of high light treated cells and the quantification of storage compounds. While Microcystis accumulated mainly glycogen to about 8.5 percent of its fresh weight within three hours, Synechocystis produced higher amounts of cyanophycin. The results showed that the characterization of species-specific metabolic features should gain more attention with regard to the biotechnological use of cyanobacteria.
Knowing the rates and mechanisms of the geomorphic processes that shape the Earth's surface is crucial to understanding landscape evolution. Modern methods for estimating denudation rates enable us to quantitatively express and compare processes of landscape downwearing that can be traced through time and space—from the seemingly intact, though intensely shattered, phantom blocks of the catastrophically fragmented basal facies of giant rockslides up to denudational noise in orogen-wide data sets averaging over several millennia. This great variety of spatiotemporal scales is both a boon and a bane of geomorphic process rates. Indeed, processes of landscape downwearing can be traced far back in time, helping us to understand the Earth's evolution. Yet this benefit may turn into a drawback due to scaling issues if these rates are to be compared across different observation timescales.
This thesis investigates the mechanisms, patterns and rates of landscape downwearing across the Himalaya-Tibet orogen.
Accounting for the spatiotemporal variability of denudation processes, this thesis addresses landscape downwearing on three distinctly different spatial scales, starting off at the local scale of individual hillslopes, where considerable amounts of debris are generated from rock instantaneously: rocksliding in active mountains is a major impetus of landscape downwearing. Study I provides a systematic overview of the internal sedimentology of giant rockslide deposits and thus meets the challenge of distinguishing them from macroscopically and microscopically similar glacial deposits, tectonic fault-zone breccias, and impact breccias. This distinction is important to avoid erroneous or misleading deductions of paleoclimatic or tectonic implications. -> Grain size analysis shows that rockslide-derived micro-breccias closely resemble those from meteorite impacts or tectonic faults. -> Frictionite may occur more frequently than previously assumed. -> Mössbauer-spectroscopy derived results indicate basal rock melting in the absence of water, involving short-term temperatures of >1500°C.
Zooming out, Study II tracks the fate of these sediments, using the example of the upper Indus River, NW India. There we use river sand samples from the Indus and its tributaries to estimate basin-averaged denudation rates along a ~320-km reach across the Tibetan Plateau margin, in order to answer the question whether incision into the western Tibetan Plateau margin is currently active or not. -> We find a roughly one-order-of-magnitude upstream decay—from 110 to 10 mm kyr^-1—of cosmogenic Be-10-derived basin-wide denudation rates across the morphological knickpoint that marks the transition from the Transhimalayan ranges to the Tibetan Plateau. This trend is corroborated by independent bulk petrographic and heavy mineral analyses of the same samples. -> From the observation that tributary-derived basin-wide denudation rates do not increase markedly until ~150–200 km downstream of the topographic plateau margin, we conclude that incision into the Tibetan Plateau is inactive. -> Comparing our postglacial Be-10-derived denudation rates to long-term (>10^6 yr) estimates from low-temperature thermochronometry, ranging from 100 to 750 mm kyr^-1, points to an order-of-magnitude decay of rates of landscape downwearing towards the present. We infer that denudation rates must have been higher in the Quaternary, probably promoted by the interplay of glacial and interglacial stages.
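Basin-wide denudation rates of this kind rest on the steady-state relation between a cosmogenic nuclide concentration and the erosion rate: when radioactive decay is negligible, N = P·Λ/(ρ·ε), i.e. ε ≈ Λ·P/(ρ·N). The sketch below applies this textbook relation with illustrative production rates and concentrations, not the thesis's actual samples or its full production-rate scaling:

```python
LAMBDA = 160.0   # spallation attenuation length, g/cm^2 (typical value)
RHO = 2.7        # rock density, g/cm^3

def denudation_rate_mm_per_kyr(P, N):
    """Steady-state denudation rate from a cosmogenic Be-10 concentration,
    neglecting radioactive decay (valid when erosion is fast compared to
    the Be-10 half-life).

    P: basin-averaged surface production rate (atoms g^-1 yr^-1)
    N: measured Be-10 concentration in river sand (atoms g^-1)
    """
    eps_cm_per_yr = P * LAMBDA / (RHO * N)
    return eps_cm_per_yr * 1e4    # cm/yr -> mm/kyr

# Illustrative numbers only: a high-elevation basin with P = 30 at/g/yr
# and a measured concentration of 4e5 at/g.
print(f"{denudation_rate_mm_per_kyr(30.0, 4e5):.0f} mm/kyr")
```

The inverse dependence on N is the key point: the higher nuclide concentrations measured upstream on the plateau translate directly into the order-of-magnitude lower denudation rates reported above.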
Our investigation of regional denudation patterns in the upper Indus is finally an integral part of Study III, which synthesizes denudation of the Himalaya-Tibet orogen. In order to identify general and time-invariant predictors for Be-10-derived denudation rates, we analyze tectonic, climatic and topographic metrics from an inventory of 297 drainage basins from various parts of the orogen. Aiming to gain insight into the full response distributions of denudation rate to tectonic, climatic and topographic candidate predictors, we apply quantile regression instead of ordinary least squares regression, which has been the standard analysis tool in previous studies that looked for denudation rate predictors. -> We use principal component analysis to reduce our set of 26 candidate predictors, ending up with just three of these: aridity index, topographic steepness index, and precipitation of the coldest quarter of the year. -> The topographic steepness index proves to perform best in additive quantile regression. Our consequent prediction of denudation rates on the basin scale involves prediction errors that remain between 5 and 10 mm kyr^-1. -> We conclude that topographic metrics such as river-channel steepness and slope gradient, which are representative on the timescales over which our cosmogenic Be-10-derived denudation rates integrate, generally appear to be better suited as predictors than climatic and tectonic metrics based on decadal records.
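The motivation for quantile regression, namely that the upper and lower parts of the denudation-rate distribution may respond differently to a predictor, can be illustrated with a toy heteroscedastic dataset. The steepness values and noise model below are invented, and the fit is a plain linear quantile regression by subgradient descent on the pinball loss, much simpler than the additive quantile regression used in Study III:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical basins: denudation rate vs. channel-steepness index, with
# noise whose spread grows with steepness (heteroscedastic by design).
ksn = rng.uniform(20.0, 400.0, size=300)
rate = 0.5 * ksn + rng.gamma(shape=2.0, scale=0.2 * ksn)

def quantile_slope(x, y, tau, steps=20000, lr=0.01):
    """Linear quantile regression for one predictor, fitted by subgradient
    descent on the pinball (check) loss; returns the standardised slope."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    a, b = 0.0, 0.0
    for _ in range(steps):
        r = ys - (a * xs + b)
        g = np.where(r >= 0, -tau, 1.0 - tau)  # subgradient of pinball loss
        a -= lr * np.mean(g * xs)
        b -= lr * np.mean(g)
    return a

s50 = quantile_slope(ksn, rate, 0.50)   # median response
s90 = quantile_slope(ksn, rate, 0.90)   # upper tail of the distribution
print(f"slope at tau=0.5: {s50:.2f}, at tau=0.9: {s90:.2f}")
```

Because the scatter grows with steepness, the fitted slope at the 0.9 quantile is steeper than at the median: information that a single ordinary-least-squares conditional-mean fit would hide, which is exactly why Study III works with quantiles.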
The work elaborates on the question of whether coaches in non-professional soccer can influence referee decisions. Modeled from a principal-agent perspective, the managing referee boards can be seen as the principal. They aim at facilitating a fair competition in accordance with the existing rules and regulations, and to this end assign the referees as impartial agents on the pitch. The coaches take over a non-legitimate, principal-like role, trying to influence the referees even though they do not have the formal right to do so.
Separate questionnaires were set up for referees and coaches. The coach questionnaire aimed at identifying the extent and the forms of influencing attempts by coaches. The referee questionnaire examined whether referees notice possible influencing attempts and how they react to them.
The results were put into relation with official match data in order to identify significant influences on personal sanctions (yellow cards, second yellow cards, red cards) and the match result.
It is found that there is a slight effect on referees' decisions. However, this effect tends to be disadvantageous for the influencing coach, and there is no evidence of an impact on the match result itself.
Large-scale floodplain sediment dynamics in the Mekong Delta: present state and future prospects (2014)
The Mekong Delta (MD) sustains the livelihood and food security of millions of people in Vietnam and Cambodia. It is known as the “rice bowl” of South East Asia and has one of the world's most productive fisheries. Sediment dynamics play a major role in the high productivity of agriculture and fishery in the delta. However, the MD is threatened by climate change, sea level rise and unsustainable development activities in the Mekong Basin. Despite its importance and the expected threats, the understanding of the present and future sediment dynamics in the MD is very limited, a consequence of its large extent, the intricate system of rivers, channels and floodplains, and the scarcity of observations. Thus, this thesis aimed at (1) quantifying the suspended sediment dynamics and associated sediment-nutrient deposition in the floodplains of the MD, and (2) assessing the impacts of likely future boundary changes on the sediment dynamics in the MD. The applied methodology combines field experiments and numerical simulation to quantify and predict the sediment dynamics in the entire delta in a spatially explicit manner. The experimental part consists of a comprehensive procedure to monitor the quantity and spatial variability of sediment and associated nutrient deposition for large and complex river floodplains, including an uncertainty analysis. The measurement campaign deployed 450 sediment mat traps in 19 floodplains over the MD for a complete flood season. The data also support the quantification of nutrient deposition in floodplains, based on laboratory analysis of the nutrient fractions of the trapped sediment. The main findings are that the distributions of grain size and nutrient fractions of suspended sediment are homogeneous over the Vietnamese floodplains, but the sediment deposition within and between ring-dike floodplains shows very high spatial variability due to a high level of human interference.
The experimental findings provide the essential data for the set-up and calibration of a large-scale sediment transport model for the MD. For the simulation studies a large-scale hydrodynamic model was developed in order to quantify floodplain sediment dynamics. The complex river-channel-floodplain system of the MD is described by a quasi-2D model linking a hydrodynamic and a cohesive sediment transport model. The floodplains are described as quasi-2D representations linked to rivers and channels modeled in 1D by using control structures. The model setup, based on the experimental findings, ignored erosion and re-suspension processes due to the very high degree of human interference during the flood season. A two-stage calibration with six objective functions was developed in order to calibrate both the hydrodynamic and the sediment transport modules. The objective functions include hydraulic and sediment transport parameters in the main rivers, channels and floodplains. The model results show, for the first time, the spatio-temporal distribution of sediment and associated nutrient deposition rates in the whole MD. The patterns of sediment transport and deposition are quantified for the different sub-systems. The main factors influencing the spatial sediment dynamics are the network of rivers, channels and dike rings, sluice gate operations, the magnitude of the floods and tidal influences. The superposition of these factors leads to high spatial variability of sediment transport and deposition, in particular in the Vietnamese floodplains. Depending on the flood magnitude, annual sediment loads reaching the coast vary from 48% to 60% of the sediment load at Kratie, the upper boundary of the MD. Deposited sediment varies from 19% to 23% of the annual load at Kratie in the Cambodian floodplains, and from 1% to 6% in the compartmented and diked floodplains in Vietnam.
Annual deposited nutrients (N, P, K) associated with the sediment deposition provide on average more than 50% of the mineral fertilizers typically applied for rice crops in non-flooded ring-dike compartments in Vietnam. This large-scale quantification provides a basis for estimating the benefits of the annual Mekong floods for agriculture and fishery, for assessing the impacts of future changes on the delta system, and for further studies on coastal deposition and erosion. For the estimation of future prospects, a sensitivity-based approach is applied to assess the response of floodplain hydraulics and sediment dynamics to changes in the delta boundaries, including hydropower development, climate change in the Mekong River Basin and effective sea level rise. The developed sediment model is used to simulate the mean sediment transport and sediment deposition in the whole delta system for the baseline (2000-2010) and future (2050-2060) periods. For each driver we derive a plausible range of future changes and discretize it into several levels, resulting in altogether 216 possible factor combinations. Our results thus cover all plausible future pathways of sediment dynamics in the delta based on current knowledge. The uncertainty of the range of the resulting impacts can be decreased once more information on these drivers becomes available. Our results indicate that hydropower development dominates the changes in sediment dynamics of the Mekong Delta, while sea level rise has the smallest effect. The floodplains of the Vietnamese Mekong Delta are much more sensitive to the changes than the other subsystems of the delta. In terms of the median changes of the three combined drivers, the inundation extent is predicted to increase slightly, but the overall floodplain sedimentation would be reduced by approximately 40%, while the sediment load to the sea would diminish to half of the current rates.
These findings provide new and valuable information on the possible impacts of future development on the delta and indicate the most vulnerable areas. Thus, the presented results are a significant contribution to the ongoing international discussion on hydropower development in the Mekong Basin and its impact on the Mekong Delta.
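A factorial scenario ensemble of the kind described can be enumerated directly. The driver names and level values below are hypothetical placeholders, not the thesis's discretisation; note only that 216 combinations arise when each of three drivers takes six levels (6**3 = 216):

```python
from itertools import product

# Hypothetical discretisation of the three drivers as fractional changes
# relative to the baseline period (the thesis's actual levels differ).
drivers = {
    "hydropower_trapping": [0.0, 0.2, 0.4, 0.6, 0.8, 0.9],   # sediment retained
    "climate_discharge":   [-0.10, -0.05, 0.0, 0.05, 0.10, 0.15],
    "sea_level_rise_m":    [0.0, 0.1, 0.2, 0.3, 0.4, 0.5],
}

# One scenario = one combination of driver levels; the model is run once
# per scenario, and the spread of outcomes maps the plausible future range.
scenarios = [dict(zip(drivers, combo)) for combo in product(*drivers.values())]
print(len(scenarios))  # 6**3 = 216 factor combinations
```

Enumerating the full factorial rather than sampling it is what allows statements such as "hydropower development dominates" to be read off by comparing outcome spreads along each driver axis.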