Translating innovation
(2017)
This doctoral thesis studies the process of innovation adoption in public administrations, addressing the research question of how an innovation is translated to a local context. The study empirically explores Design Thinking as a new problem-solving approach introduced by a federal government organisation in Singapore. With its focus on user-centeredness, collaboration, and iteration, Design Thinking seems to offer a new way to engage recipients and other stakeholders of public services, as well as to re-think the policy design process from a user’s point of view. Pioneered in the private sector, the methodology counts among its early adopters the civil services of Australia, Denmark, the United Kingdom, the United States, and Singapore. To date, there is little evidence on how and for which purposes Design Thinking is used in the public sector.
For the purpose of this study, innovation adoption is framed in an institutionalist perspective addressing how concepts are translated to local contexts. The study rejects simplistic views of the innovation adoption process, in which an idea diffuses to another setting without adaptation. The translation perspective is fruitful because it captures the multidimensionality and ‘messiness’ of innovation adoption. More specifically, the overall research question addressed in this study is: How has Design Thinking been translated to the local context of the public sector organisation under investigation? And from a theoretical point of view: What can we learn from translation theory about innovation adoption processes?
Moreover, there are only a few empirical studies of organisations adopting Design Thinking, and most of them focus on private organisations. We know very little about how Design Thinking is embedded in public sector organisations. This study therefore provides further empirical evidence of how Design Thinking is used in a public sector organisation, especially with regard to its application to policy work, which has so far been under-researched.
An exploratory single case study approach was chosen to provide an in-depth analysis of the innovation adoption process. Based on a purposive, theory-driven sampling approach, a Singaporean Ministry was selected because it represented an organisational setting in which Design Thinking had been embedded for several years, making it a relevant case with regard to the research question. Following a qualitative research design, 28 semi-structured interviews (45-100 minutes) with employees and managers were conducted. The interview data were triangulated with observations and documents collected during a field research stay in Singapore.
The empirical study of innovation adoption in a single organisation focused on the intra-organisational perspective, with the aim of capturing the variations of translation that occur during the adoption process. In doing so, this study first opened the black box that implementation studies often leave closed. Second, it advances translation studies not only by showing variance, but also by deriving explanatory factors. The main differences in the translation of Design Thinking occurred between service delivery and policy divisions, as well as between the first adopter and the rest of the organisation. Five factors played a role in the intra-organisational translation of Design Thinking in the Singaporean Ministry: task type, mode of adoption, type of expertise, sequence of adoption, and the adoption of similar practices.
Via their powerful radiation, stellar winds, and supernova explosions, massive stars (M_ini ≳ 8 M☉) bear a tremendous impact on galactic evolution. It became clear in recent decades that the majority of massive stars reside in binary systems. This thesis sets as a goal to quantify the impact of binarity (i.e., the presence of a companion star) on massive stars. For this purpose, massive binary systems in the Local Group, including OB-type binaries, high mass X-ray binaries (HMXBs), and Wolf-Rayet (WR) binaries, were investigated by means of spectral, orbital, and evolutionary analyses.
The spectral analyses were performed with the non-local thermodynamic equilibrium (non-LTE) Potsdam Wolf-Rayet (PoWR) model atmosphere code. Thanks to critical updates in the calculation of the hydrostatic layers, the code became a state-of-the-art tool applicable to all types of hot massive stars (Chapter 2). The eclipsing OB-type triple system δ Ori served as an intriguing test case for the new version of the PoWR code and provided key insights regarding the formation of X-rays in massive stars (Chapter 3). We further analyzed two prototypical HMXBs, Vela X-1 and IGR J17544-2619, and obtained fundamental conclusions regarding the dichotomy of two basic classes of HMXBs (Chapter 4). We performed an exhaustive analysis of the binary R 145 in the Large Magellanic Cloud (LMC), which was claimed to host the most massive stars known. We were able to disentangle the spectrum of the system, and performed an orbital, polarimetric, and spectral analysis, as well as an analysis of the wind-wind collision region. The true masses of the binary components turned out to be significantly lower than suggested, impacting our understanding of the initial mass function and stellar evolution at low metallicity (Chapter 5). Finally, all known WR binaries in the Small Magellanic Cloud (SMC) were analyzed. Although it was theoretically predicted that virtually all WR stars in the SMC should form via mass transfer in binaries, we find that binarity was not important for the formation of the known WR stars in the SMC, implying a strong discrepancy between theory and observations (Chapter 6).
Trunk loading and back pain
(2017)
An essential function of the trunk is the compensation of external forces and loads in order to guarantee stability. Stabilising the trunk during sudden, repetitive loading in everyday tasks, as well as during performance, is important to protect against injury. Hence, reduced trunk stability is accepted as a risk factor for the development of back pain (BP). Back pain patients (BPP) exhibit an altered activity pattern, including extended response and activation times and increased co-contraction of the trunk muscles, as well as a reduced range of motion and increased movement variability of the trunk. These differences from healthy controls (H) have been evaluated primarily in quasi-static test situations involving isolated loading applied directly to the trunk. Nevertheless, transferability to everyday, dynamic situations is under debate. Therefore, the aim of this project is to analyse three-dimensional motion and neuromuscular reflex activity of the trunk in response to dynamic trunk loading in healthy controls (H) and back pain patients (BPP).
A measurement tool consisting of dynamic test situations was developed to assess trunk stability. During these tests, loading of the trunk is generated by the upper and lower limbs, with and without additional perturbation; lifting of objects and stumbling while walking serve as adequate representative tasks. Neuromuscular activity of the muscles encompassing the trunk was assessed with a 12-lead EMG. In addition, three-dimensional trunk motion was analysed using a newly developed multi-segmental trunk model. The set-up was checked for reproducibility as well as validity. Afterwards, the defined measurement set-up was applied to compare trunk stability in healthy controls and back pain patients.
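To make the EMG-based outcome measures concrete, the following minimal sketch illustrates how a muscle response (onset) time can be derived from a raw EMG channel after a perturbation; the threshold rule, filter settings, and synthetic signal are illustrative assumptions, not the thesis's actual processing pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def emg_onset(emg, fs, baseline_s=0.5, k=3.0):
        """Onset = first time the rectified, low-pass filtered EMG
        exceeds baseline mean + k standard deviations (assumed rule)."""
        rect = np.abs(emg - np.mean(emg))             # demean and rectify
        b, a = butter(4, 10 / (fs / 2), btype="low")  # ~10 Hz envelope
        env = filtfilt(b, a, rect)
        n0 = int(baseline_s * fs)                     # pre-perturbation baseline
        thr = env[:n0].mean() + k * env[:n0].std()
        above = np.nonzero(env[n0:] > thr)[0]
        return (n0 + above[0]) / fs if above.size else None

    # Synthetic example: a burst starting 0.8 s into a 2 s recording
    fs = 1000
    emg = 0.05 * np.random.randn(2 * fs)
    emg[800:1000] += 0.5 * np.random.randn(200)
    print(emg_onset(emg, fs))  # ~0.8 s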
Clinically acceptable to excellent reliability could be shown for the methods (EMG/kinematics) used in the test situations. No changes in trunk motion patterns could be observed in healthy adults during continuous loading (lifting of objects) with different weights. In contrast, sudden loading of the trunk through perturbations to the lower limbs during walking led to increased neuromuscular activity and range of motion (ROM) of the trunk. Moreover, BPP showed a delayed muscle response time and an extended duration until maximum neuromuscular activity in response to sudden walking perturbations compared to healthy controls. In addition, a reduced lateral flexion of the trunk during perturbation could be shown in BPP.
It is concluded that perturbed gait is suitable for provoking higher demands on trunk stability in adults. The altered neuromuscular and kinematic compensation patterns in BPP can be interpreted as increased spine loading and reduced trunk stability in patients. This novel assessment of trunk stability is therefore suitable for identifying deficits in BPP. The results imply that affected BPP should be assigned to therapy interventions that focus on stabilisation of the trunk and aim to improve neuromuscular control in dynamic situations. Hence, sensorimotor training (SMT) to enhance trunk stability and the compensation of unexpected sudden loading should be preferred.
This thesis investigates the processing of non-canonical word orders and whether non-canonical orders involving object topicalizations, midfield scrambling and particle verbs are treated the same by native (L1) and non-native (L2) speakers. The two languages investigated are Norwegian and German.
32 L1 Norwegian and 32 L1 German advanced learners of Norwegian were tested in two experiments on object topicalization in Norwegian. The results from the online self-paced reading task and the offline agent identification task show that both groups are able to identify the non-canonical word order and show a facilitatory effect of animate subjects in their reanalysis. Similarly high error rates in the agent identification task suggest that globally unambiguous object topicalizations are a challenging structure for L1 and L2 speakers alike.
The same participants were also tested in two experiments on particle placement in Norwegian, again using a self-paced reading task, this time combined with an acceptability rating task. In the acceptability rating L1 and L2 speakers show the same preference for the verb-adjacent placement of the particle over the non-adjacent placement after the direct object. However, this preference for adjacency is only found in the L1 group during online processing, whereas the L2 group shows no preference for either order.
Another set of experiments tested 33 L1 German and 39 L1 Slavic advanced learners of German on object scrambling in ditransitive clauses in German. Non-native speakers accept both object orders and show neither a preference for either order nor a processing advantage for the canonical order. The L1 group, in contrast, shows a small, but significant preference for the canonical dative-first order in the judgment and the reading task.
The same participants were also tested in two experiments on the application of the split rule in German particle verbs. Advanced L2 speakers of German are able to identify particle verbs and can apply the split rule in V2 contexts in an acceptability judgment task in the same way as L1 speakers. However, unlike the L1 group, the L2 group is not sensitive to the grammaticality manipulation during online processing. They seem to be sensitive to the additional lexical information provided by the particle, but are unable to relate the split particle to the preceding verb and recognize the ungrammaticality in non-V2 contexts.
Taken together, my findings suggest that non-canonical word orders are not per se more difficult to identify for L2 speakers than for L1 speakers and can trigger the same reanalysis processes as in L1 speakers. I argue that L2 speakers’ ability to identify a non-canonical word order depends on how the non-canonicity is signaled (case marking vs. surface word order), on the constituents involved (identical vs. different word types), and on the impact of the word order change on sentence meaning. Non-canonical word orders that are signaled by morphological case marking and cause no change to the sentence’s content are hard to detect for L2 speakers.
Lithospheric plates move over the low-viscosity asthenosphere, balancing several forces. The driving forces include the basal shear stress exerted by mantle convection and plate boundary forces such as slab pull and ridge push, whereas the resisting forces include inter-plate friction, trench resistance, and cratonic root resistance. These generate plate motions, the lithospheric stress field, and dynamic topography, which are observed with different geophysical methods. The orientation and tectonic regime of the observed crustal/lithospheric stress field further contribute to our knowledge of the different deformation processes occurring within the Earth's crust and lithosphere. Using numerical models, previous studies were able to identify the major forces that generate stresses in the crust and lithosphere, contribute to the formation of topography, and drive the lithospheric plates. They showed that the first-order stress pattern, explaining about 80% of the stress field, originates from a balance of forces acting at the base of the moving lithospheric plates due to convective flow in the underlying mantle. The remaining second-order stress pattern is due to lateral density variations in the crust and lithosphere in regions of pronounced topography and high gravitational potential, such as the Himalayas and mid-ocean ridges. By linking global lithosphere dynamics to deep mantle flow, this study seeks to evaluate the influence of shallow and deep density heterogeneities on plate motions, the lithospheric stress field, and dynamic topography, using the geoid as a major constraint on mantle rheology. We use the global 3D lithosphere-asthenosphere model SLIM3D with visco-elasto-plastic rheology, coupled at 300 km depth to a spectral model of mantle flow. The complexity of the lithosphere-asthenosphere component allows for the simulation of power-law rheology with creep parameters accounting for both diffusion and dislocation creep within the uppermost 300 km.
First we investigate the influence of intra-plate friction and asthenospheric viscosity on present-day plate motions. Previous modelling studies have suggested that small friction coefficients (µ < 0.1, yield stress ~ 100 MPa) can lead to plate tectonics in models of mantle convection. Here we show that, in order to match present-day plate motions and net rotation, the frictional parameter must be less than 0.05. We are able to obtain a good fit with the magnitude and orientation of observed plate velocities (NUVEL-1A) in a no-net-rotation (NNR) reference frame with µ < 0.04 and a minimum asthenosphere viscosity of ~5×10^19 Pa s to 10^20 Pa s. Our estimates of the net rotation (NR) of the lithosphere suggest that amplitudes of ~0.1-0.2 °/Ma, similar to most observation-based estimates, can be obtained with asthenosphere viscosity cutoff values of ~10^19 Pa s to 5×10^19 Pa s and a friction coefficient µ < 0.05.
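The calibration logic behind these parameter estimates can be pictured as a simple misfit scan over the two parameters; the forward model below is a purely hypothetical stand-in for the coupled SLIM3D/mantle-flow computation, included only to show the structure of the search.

    import numpy as np

    def plate_velocity_misfit(mu, eta_min):
        """Stand-in for the forward model: returns a synthetic RMS misfit
        to observed plate velocities, with a minimum placed arbitrarily
        near mu = 0.03 and eta_min = 7e19 Pa s for illustration."""
        return np.hypot((mu - 0.03) / 0.02,
                        (np.log10(eta_min) - np.log10(7e19)) / 0.3)

    mus = np.linspace(0.0, 0.1, 21)    # friction coefficients
    etas = np.logspace(19, 21, 21)     # minimum viscosities, Pa s
    misfit = np.array([[plate_velocity_misfit(m, e) for e in etas] for m in mus])
    i, j = np.unravel_index(misfit.argmin(), misfit.shape)
    print(f"best fit: mu = {mus[i]:.3f}, eta_min = {etas[j]:.1e} Pa s")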
The second part of the study investigates further constraints on the shallow and deep mantle heterogeneities driving plate motion by predicting the lithospheric stress field and topography and validating them against observations. Lithosphere stresses and dynamic topography are computed using the same modelling setup and rheological parameters, with plate motions prescribed. We validate our results against the World Stress Map 2016 (WSM2016) and the observed residual topography. We tested a number of upper mantle thermal-density structures; the one used to calculate plate motions is considered the reference thermal-density structure. This model is derived from a heat flow model combined with a sea floor age model. In addition, we used three different thermal-density structures derived from global S-wave velocity models to show the influence of lateral density heterogeneities in the upper 300 km on the model predictions. A large portion of the total dynamic force generating stresses in the crust/lithosphere has its origin in the deep mantle, while topography is largely influenced by shallow heterogeneities. For example, there is hardly any difference between the stress orientation patterns predicted with and without consideration of the heterogeneities in the upper mantle density structure across North America, Australia, and North Africa. In areas of high elevation, however, the crustal contribution dominates the stress orientation relative to the deep mantle contribution.
This study explores the sensitivity of all the considered surface observables to the model parameters, providing insights into the influence of asthenosphere and plate boundary rheology on plate motion, as we test various thermal-density structures to predict stresses and topography.
According to the classical plume hypothesis, mantle plumes are localized upwellings of hot, buoyant material in the Earth’s mantle. They have a typical mushroom shape, consisting of a large plume head, which is associated with the formation of voluminous flood basalts (a Large Igneous Province), and a narrow plume tail, which generates a linear, age-progressive chain of volcanic edifices (a hotspot track) as the tectonic plate migrates over the relatively stationary plume. Both plume heads and tails reshape large areas of the Earth’s surface over many tens of millions of years.
However, not every plume has left an exemplary record that supports the classical hypothesis. The main objective of this thesis is therefore to study how specific hotspots have created the crustal thickness pattern attributed to their volcanic activities. Using regional geodynamic models, the main chapters of this thesis address the challenge of deciphering the three individual (and increasingly complex) Réunion, Iceland, and Kerguelen hotspot histories, focussing especially on the interactions between the respective plume and nearby spreading ridges.
For this purpose, the mantle convection code ASPECT is used to set up three-dimensional numerical models, which consider the specific local surroundings of each plume by prescribing time-dependent boundary conditions for temperature and mantle flow. Combining reconstructed plate boundaries and plate motions, large-scale global flow velocities and an inhomogeneous lithosphere thickness distribution together with a dehydration rheology represents a novel setup for regional convection models.
The model results show the crustal thickness pattern produced by the plume, which is compared to present-day topographic structures, crustal thickness estimates, and age determinations of volcanic provinces associated with hotspot activity. Altogether, the model results agree well with surface observations. Moreover, the dynamic development of the plumes in the models provides explanations for the generation of smaller, yet characteristic volcanic features that were previously unexplained. Considering the present-day state of a model as a prediction for the current temperature distribution in the mantle, it can be compared not only to observations on the surface, but also to structures in the Earth’s interior as imaged by seismic tomography.
More precisely, in the case of the Réunion hotspot, the model demonstrates how the distinctive gap between the Maldives and Chagos is generated by the combination of the ridge geometry and plume-ridge interaction. Further, the Rodrigues Ridge is formed as the surface expression of a long-distance sublithospheric flow channel between the upwelling plume and the closest ridge segment, confirming the long-standing hypothesis of Morgan (1978) for the first time in a dynamic context. The Réunion plume has been studied in connection with the seismological RHUM-RUM project, which has recently provided new seismic tomography images that yield an excellent match with the geodynamic model.
Regarding the Iceland plume, the numerical model shows how plume material may have accumulated in an east-west trending corridor of thin lithosphere across Greenland and resulted in simultaneous melt generation west and east of Greenland. This provides an explanation for the extremely widespread volcanic material attributed to magma production of the Iceland hotspot and demonstrates that the model setup is also able to explain more complicated hotspot histories. The Iceland model results also agree well with newly derived seismic tomographic images.
The Kerguelen hotspot has an extremely complex history, and previous studies concluded that the plume might be dismembered or influenced by solitary waves in its conduit to produce the reconstructed variable melt production rate. The geodynamic model, however, shows that a constant plume influx can result in a variable magma production rate if the plume interacts with nearby mid-ocean ridges. Moreover, the Ninetyeast Ridge in the model is created by on-ridge activities while the Kerguelen plume was located beneath the Australian plate. This contrasts with earlier studies, which described the Ninetyeast Ridge as the result of the Indian plate passing over the plume. Furthermore, the Amsterdam-Saint Paul Plateau in the model is the result of plume material flowing from the upwelling toward the Southeast Indian Ridge, whereas previous geochemical studies attributed that volcanic province to a separate deep plume.
In summary, the three case studies presented in this thesis consistently highlight the importance of plume-ridge interaction in order to reconstruct the overall volcanic hotspot record as well as specific smaller features attributed to a certain hotspot. They also demonstrate that it is not necessary to attribute highly complicated properties to a specific plume in order to account for complex observations. Thus, this thesis contributes to the general understanding of plume dynamics and extends the very specific knowledge about the Réunion, Iceland, and Kerguelen mantle plumes.
With recent advances in the area of information extraction, automatically extracting structured information from vast amounts of unstructured textual data has become an important task, since it is infeasible for humans to capture all this information manually. Named entities (e.g., persons, organizations, and locations), which are crucial components of texts, are usually the subjects of the structured information extracted from textual documents. Therefore, the task of named entity mining receives much attention. It consists of three major subtasks: named entity recognition, named entity linking, and relation extraction.
These three tasks build up the entire pipeline of a named entity mining system, where each of them has its own challenges and can be employed for further applications. As a fundamental task in the natural language processing domain, studies on named entity recognition have a long history, and many existing approaches produce reliable results. The task aims to extract mentions of named entities in text and identify their types. Named entity linking recently received much attention with the development of knowledge bases that contain rich information about entities. The goal is to disambiguate mentions of named entities and to link them to the corresponding entries in a knowledge base. Relation extraction, as the final step of named entity mining, is a highly challenging task: it extracts semantic relations between named entities, e.g., the ownership relation between two companies.
In this thesis, we review the state of the art of the named entity mining domain in detail, including valuable features, techniques, and evaluation methodologies. Furthermore, we present two of our approaches, which focus on the named entity linking and relation extraction tasks, respectively.
To solve the named entity linking task, we propose the entity linking technique, BEL, which operates on a textual range of relevant terms and aggregates decisions from an ensemble of simple classifiers. Each of the classifiers operates on a randomly sampled subset of the above range. In extensive experiments on hand-labeled and benchmark datasets, our approach outperformed state-of-the-art entity linking techniques, both in terms of quality and efficiency.
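The aggregation idea can be sketched in a few lines; the candidate scorer, subset size, and toy data below are illustrative assumptions and not the actual BEL implementation.

    import random
    from collections import Counter

    def link_mention(mention, context_terms, candidates, score,
                     n_classifiers=25, subset_size=10):
        """Each simple classifier votes for the candidate entity that best
        matches a randomly sampled subset of the surrounding terms."""
        votes = Counter()
        for _ in range(n_classifiers):
            subset = random.sample(context_terms,
                                   min(subset_size, len(context_terms)))
            votes[max(candidates, key=lambda e: score(e, mention, subset))] += 1
        return votes.most_common(1)[0][0]

    # Toy scorer: overlap between an entity description and the sampled context
    descriptions = {"Apple_Inc": {"iphone", "company", "cupertino"},
                    "Apple_(fruit)": {"tree", "fruit", "orchard"}}
    score = lambda e, m, ctx: len(descriptions[e] & set(ctx))
    ctx = "the company released a new iphone in cupertino".split()
    print(link_mention("Apple", ctx, list(descriptions), score))  # Apple_Inc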
For the task of relation extraction, we focus on extracting a specific group of difficult relation types: business relations between companies. These relations can be used to gain valuable insight into the interactions between companies and to perform complex analytics, such as predicting risk or valuing companies. Our semi-supervised strategy can extract business relations between companies based on only a few user-provided seed company pairs. In doing so, we also provide a solution to the problem of determining the direction of asymmetric relations, such as the ownership_of relation. We improve the reliability of the extraction process by using a holistic pattern identification method, which classifies the generated extraction patterns. Our experiments show that we can accurately and reliably extract new entity pairs occurring in the target relation by using as few as five labeled seed pairs.
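The bootstrapping loop at the heart of such a semi-supervised strategy can be sketched as follows; the simple frequency filter stands in for the holistic pattern classification, and the corpus triples are made up for illustration.

    def bootstrap(corpus, seeds, iterations=3, min_support=2):
        """Grow a relation from seed pairs: collect the patterns that
        connect known pairs, keep frequent ones, extract new ordered
        pairs (the order encodes the relation's direction)."""
        pairs = set(seeds)
        for _ in range(iterations):
            support = {}
            for a, pattern, b in corpus:
                if (a, b) in pairs:
                    support[pattern] = support.get(pattern, 0) + 1
            good = {p for p, n in support.items() if n >= min_support}
            pairs |= {(a, b) for a, pattern, b in corpus if pattern in good}
        return pairs

    corpus = [("AcmeCo", "acquired", "BetaLtd"), ("AcmeCo", "acquired", "GammaAG"),
              ("DeltaInc", "acquired", "EpsiCo"), ("DeltaInc", "sued", "AcmeCo")]
    print(bootstrap(corpus, {("AcmeCo", "BetaLtd"), ("AcmeCo", "GammaAG")}))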
Nowadays, graph data models are employed when relationships between entities have to be stored and are in the scope of queries. For each entity, this graph data model locally stores relationships to adjacent entities. Users employ graph queries to query and modify these entities and relationships. These graph queries employ graph patterns to look up all subgraphs in the graph data that satisfy certain graph structures. These subgraphs are called graph pattern matches. However, graph pattern matching is NP-complete for subgraph isomorphism. Thus, graph queries can suffer from long response times when the number of entities and relationships in the graph data or in the graph patterns increases.
One possibility to improve graph query performance is to employ graph views that keep graph pattern matches for complex graph queries ready for later retrieval. However, when the graph data changes, these graph views must be maintained by means of incremental graph pattern matching to keep them consistent with the graph data from which they are derived. This maintenance adds subgraphs that newly satisfy a graph pattern to the graph views and removes subgraphs that no longer satisfy a graph pattern from the graph views.
Current approaches for incremental graph pattern matching employ Rete networks. Rete networks are discrimination networks that enumerate and maintain all graph pattern matches of certain graph queries by employing a network of condition tests, which implement partial graph patterns that together constitute the overall graph query. Each condition test stores all subgraphs that satisfy its partial graph pattern. Rete networks therefore suffer from high memory consumption, because they store a large number of partial graph pattern matches. At the same time, it is precisely these partial graph pattern matches that enable Rete networks to update the stored graph pattern matches efficiently, because the network maintenance exploits the already stored partial matches to find new graph pattern matches. However, other kinds of discrimination networks exist that can perform better in time and space than Rete networks, and these other kinds of networks are currently not used for incremental graph pattern matching.
This thesis employs generalized discrimination networks for incremental graph pattern matching. These discrimination networks permit a generalized network structure of condition tests to enable users to steer the trade-off between memory consumption and execution time for the incremental graph pattern matching. For that purpose, this thesis contributes a modeling language for the effective definition of generalized discrimination networks. Furthermore, this thesis contributes an efficient and scalable incremental maintenance algorithm, which updates the (partial) graph pattern matches that are stored by each condition test. Moreover, this thesis provides a modeling evaluation, which shows that the proposed modeling language enables the effective modeling of generalized discrimination networks. Furthermore, this thesis provides a performance evaluation, which shows that a) the incremental maintenance algorithm scales, when the graph data becomes large, and b) the generalized discrimination network structures can outperform Rete network structures in time and space at the same time for incremental graph pattern matching.
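A toy two-node discrimination network illustrates the caching-and-propagation principle that both Rete and generalized networks share; this sketch is not the thesis's modeling language or maintenance algorithm.

    class EdgeTest:
        """Leaf condition test: caches all edges with a given label."""
        def __init__(self, label):
            self.label, self.matches, self.parents = label, set(), []

        def insert(self, edge):
            if edge[1] == self.label and edge not in self.matches:
                self.matches.add(edge)
                for p in self.parents:            # propagate incrementally
                    p.update(self, edge)

    class JoinTest:
        """Inner condition test: joins two child caches into paths."""
        def __init__(self, left, right):
            self.left, self.right, self.matches = left, right, set()
            left.parents.append(self); right.parents.append(self)

        def update(self, child, edge):
            # Join the new partial match only against the other cache,
            # instead of re-matching the whole graph from scratch.
            others = self.right.matches if child is self.left else self.left.matches
            for o in others:
                l, r = (edge, o) if child is self.left else (o, edge)
                if l[2] == r[0]:                  # (a)-x->(b)-y->(c)
                    self.matches.add((l[0], l[2], r[2]))

    # Pattern: (a)-[:knows]->(b)-[:likes]->(c)
    knows, likes = EdgeTest("knows"), EdgeTest("likes")
    root = JoinTest(knows, likes)
    for e in [("ann", "knows", "bob"), ("bob", "likes", "carl")]:
        knows.insert(e); likes.insert(e)
    print(root.matches)  # {('ann', 'bob', 'carl')}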
Data profiling is the computer science discipline of analyzing a given dataset for its metadata. The types of metadata range from basic statistics, such as tuple counts, column aggregations, and value distributions, to much more complex structures, in particular inclusion dependencies (INDs), unique column combinations (UCCs), and functional dependencies (FDs). If present, these statistics and structures serve to efficiently store, query, change, and understand the data. Most datasets, however, do not provide their metadata explicitly so that data scientists need to profile them.
While basic statistics are relatively easy to calculate, more complex structures present difficult, mostly NP-complete discovery tasks; even with good domain knowledge, it is hardly possible to detect them manually. Therefore, various profiling algorithms have been developed to automate the discovery. None of them, however, can process datasets of typical real-world size, because their resource consumptions and/or execution times exceed effective limits.
In this thesis, we propose novel profiling algorithms that automatically discover the three most popular types of complex metadata, namely INDs, UCCs, and FDs, which all describe different kinds of key dependencies. The task is to extract all valid occurrences from a given relational instance. The three algorithms build upon known techniques from related work and complement them with algorithmic paradigms, such as divide & conquer, hybrid search, progressivity, memory sensitivity, parallelization, and additional pruning, to greatly improve upon current limitations. Our experiments show that the proposed algorithms are orders of magnitude faster than related work. In particular, they are now able to process datasets of real-world size, i.e., multiple gigabytes, with reasonable memory and time consumption.
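As a baseline for what these discovery algorithms must accelerate, validating a single candidate FD is straightforward; the sketch below is a naive check, not one of the thesis's algorithms, and discovery means running such tests over an exponentially large candidate space.

    def holds_fd(rows, lhs, rhs):
        """Check whether lhs -> rhs holds: rows agreeing on the lhs
        attributes must also agree on the rhs attributes."""
        seen = {}
        for row in rows:
            key = tuple(row[a] for a in lhs)
            val = tuple(row[a] for a in rhs)
            if seen.setdefault(key, val) != val:
                return False
        return True

    rows = [{"zip": "14469", "city": "Potsdam"},
            {"zip": "14469", "city": "Potsdam"},
            {"zip": "10115", "city": "Berlin"}]
    print(holds_fd(rows, ["zip"], ["city"]))  # True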
Due to the importance of data profiling in practice, industry has built various profiling tools to support data scientists in their quest for metadata. These tools provide good support for basic statistics and are also able to validate individual dependencies, but they lack real discovery features, even though some fundamental discovery techniques have been known for more than 15 years. To close this gap, we developed Metanome, an extensible profiling platform that incorporates not only our own algorithms but also many further algorithms from other researchers. With Metanome, we make our research accessible to all data scientists and IT professionals who are tasked with data profiling. Besides the actual metadata discovery, the platform also offers support for the ranking and visualization of metadata result sets.
Being able to discover the entire set of syntactically valid metadata naturally introduces the subsequent task of extracting only the semantically meaningful parts. This is a challenge, because the complete metadata results are surprisingly large (sometimes larger than the dataset itself) and judging their use-case-dependent semantic relevance is difficult. To show that the completeness of these metadata sets is extremely valuable for their usage, we finally exemplify the efficient processing and effective assessment of functional dependencies for the use case of schema normalization.
Natural products and their derivatives have always been a source of drug leads. In particular, bacterial compounds have played an important role in drug development, for example in the field of antibiotics. A decrease in the discovery of novel leads from natural sources, and the hope of finding new leads through the generation of large libraries of drug-like compounds by combinatorial chemistry aimed at specific molecular targets, drove the pharmaceutical companies away from research on natural products. However, recent technological advances in genetics, bioinformatics, and analytical chemistry have revived the interest in natural products. The ribosomally synthesized and post-translationally modified peptides (RiPPs) are a group of natural products generated by the action of post-translationally modifying enzymes on precursor peptides translated from mRNA by ribosomes. The great substrate promiscuity exhibited by many of the enzymes from RiPP biosynthetic pathways has led to the generation of hundreds of novel synthetic and semisynthetic variants, including variants carrying non-canonical amino acids (ncAAs). The microviridins are a family of RiPPs characterized by their atypical tricyclic structure composed of lactone and lactam rings, and by their activity as serine protease inhibitors. The generalities of their biosynthetic pathway have already been described; however, the lack of information on details such as the protease responsible for cleaving the leader peptide off the cyclic core peptide has impeded the fast and cheap production of novel microviridin variants. In the present work, knowledge on the leader peptide activation of enzymes from other RiPP families has been extrapolated to the microviridin family, making it possible to bypass the need for a leader peptide. This feature allowed the microviridin biosynthetic machinery to be exploited for the production of novel variants through the establishment of an efficient one-pot in vitro platform. The relevance of this chemoenzymatic approach has been exemplified by the synthesis of novel potent serine protease inhibitors from both rationally designed peptide libraries and bioinformatically predicted microviridins. Additionally, new structure-activity relationships (SARs) could be inferred by screening microviridin intermediates. The significance of this technique was further demonstrated by the simple incorporation of ncAAs into the microviridin scaffold.
Recognizing, understanding, and responding to quantities are fundamental skills for human beings. We can easily communicate quantities, and we are extremely efficient in adapting our behavior to number-related tasks. One common task is to compare quantities. We also use symbols, such as digits, in number-related tasks. To solve tasks involving digits, we must rely on our previously learned internal number representations.
This thesis elaborates on the process of number comparison based on noisy mental representations of numbers, on the interaction of number and size representations, and on how we use mental number representations strategically. To this end, three studies were carried out.
In the first study, participants had to decide which of two presented digits was numerically larger. They had to respond with a saccade in the direction of the anticipated answer. Using only a small set of meaningfully interpretable parameters, a variant of random walk models is described that accounts for the response times, error rates, and response time variances of the full matrix of 72 digit pairs. In addition, the random walk model predicts a numerical distance effect even for error response times, and this effect clearly occurs in the observed data. Error responses were systematically faster than the corresponding correct responses. However, contrary to standard assumptions often made in random walk models, this account required the distributions of step sizes of the induced random walks to be asymmetric in order to capture this asymmetry between correct and incorrect responses.
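The qualitative behaviour of such a model is easy to reproduce with a small simulation; the parameters below are made up for illustration and are not the fitted values from the study (note that this sketch uses symmetric steps and therefore does not show the fast-error asymmetry discussed above).

    import numpy as np

    def trial(d, drift_per_unit=0.08, noise=1.0, bound=10.0):
        """Random walk between two absorbing bounds; the drift grows
        with the numerical distance d between the two digits."""
        x, t = 0.0, 0
        while abs(x) < bound:
            x += drift_per_unit * d + noise * np.random.randn()
            t += 1
        return t, x > 0          # (response time in steps, correct?)

    for d in (1, 4, 8):
        rts, acc = zip(*(trial(d) for _ in range(2000)))
        print(f"distance {d}: mean RT {np.mean(rts):6.1f}, accuracy {np.mean(acc):.2f}")
    # Larger numerical distance -> faster and more accurate responses.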
Furthermore, the presented model provides a well-defined framework for investigating the nature and scale (e.g., linear vs. logarithmic) of the mapping of numerical magnitude onto its internal representation. A comparison of the fits of the proposed models with linear and logarithmic mapping suggests that the logarithmic mapping is to be preferred.
Finally, we discuss how our findings can help interpret complex findings (e.g., conflicting speed vs. accuracy trends) in applied studies that use number comparison as a well-established diagnostic tool. Furthermore, a novel oculomotoric effect is reported, namely the saccadic overshoot effect: participants responded with saccadic eye movements, and the amplitude of these saccadic responses decreased with numerical distance.
For the second study, an experimental design was developed that allows us to apply signal detection theory to a task in which participants had to decide whether a presented digit was physically smaller or larger. A remaining question is whether the benefit in congruent conditions (numerical magnitude and physical size) reflects better perception than in incongruent conditions, or whether the number-size congruency effect is instead mediated by response biases due to number magnitude. Signal detection theory is a perfect tool to distinguish between these two alternatives. It provides two parameters, namely sensitivity and response bias. Changes in sensitivity reflect actual differences in perceptual processing, whereas changes in response bias merely reflect strategic effects, such as a stronger preparation (activation) of an anticipated answer. Our results clearly demonstrate that the number-size congruency effect cannot be reduced to mere response bias effects, and that genuine sensitivity gains for congruent number-size pairings contribute to the number-size congruency effect.
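The two parameters are obtained from hit and false alarm rates in the standard way; the rates below are hypothetical numbers chosen only to illustrate a sensitivity difference with a roughly constant bias.

    from scipy.stats import norm

    def sdt(hit_rate, fa_rate):
        """Sensitivity d' and criterion c from hit and false-alarm
        rates via the inverse normal CDF (equal-variance model)."""
        zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
        return zh - zf, -0.5 * (zh + zf)

    print(sdt(0.90, 0.20))  # e.g. congruent:   d' ≈ 2.12, c ≈ -0.22
    print(sdt(0.80, 0.30))  # e.g. incongruent: d' ≈ 1.37, c ≈ -0.16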
Third, participants had to perform a SNARC task, deciding whether a presented digit was odd or even. The local transition probability of the irrelevant attribute (magnitude) was varied, while the local transition probability of the relevant attribute (parity) and the global probability of occurrence of each stimulus were kept constant. Participants were quite sensitive in recognizing the underlying local transition probability of the irrelevant attribute. A gain in performance was observed for actual repetitions of the irrelevant attribute, relative to changes of the irrelevant attribute, in high-repetition compared to low-repetition conditions. One interpretation of these findings is that information about the irrelevant attribute (magnitude) in the previous trial is used as an informative precue, so that participants can prepare early processing stages in the current trial, with the corresponding benefits and costs typical of standard cueing studies.
Finally, the results reported in this thesis are discussed in relation to recent studies in numerical cognition.
Proteins are molecules that are essential for life and carry out an enormous number of functions in organisms. To this end, they change their conformation and bind to other molecules. However, the interplay between conformational change and binding is not fully understood. In this work, this interplay is investigated with molecular dynamics (MD) simulations of the protein-peptide system Mdm2-PMI and by analysis of data from relaxation experiments.
The central task is to uncover the binding mechanism, which is described by the sequence of (partial) binding events and conformational change events, including their probabilities. In the simplest case, the binding mechanism is described by a two-step model: binding followed by conformational change, or conformational change followed by binding. In the general case, longer sequences with multiple conformational changes and partial binding events are possible, as well as parallel pathways that differ in their sequences of events. The theory of Markov state models (MSMs) provides the theoretical framework in which all these cases can be modeled. For this purpose, MSMs are estimated in this work from MD data, and rate equation models, which are related to MSMs, are inferred from experimental relaxation data.
The MD simulation and Markov modeling of the PMI-Mdm2 system show that PMI and Mdm2 can bind via multiple pathways. A main result of this work is a dissociation rate on the order of one event per second, which was calculated using Markov modeling and is in agreement with experiment. So far, dissociation rates and transition rates of this magnitude have only been calculated with methods that speed up transitions by acting on the binding partners with time-dependent, external forces. The simulation technique developed in this work, in contrast, allows the estimation of dissociation rates from the combination of free energy calculations and direct MD simulation of the fast binding process. Two new statistical estimators, TRAM and TRAMMBAR, are developed to estimate an MSM from the joint data of both simulation types.
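The estimation step that all such estimators build on can be sketched as a plain maximum-likelihood MSM; this is generic background, not TRAM or TRAMMBAR themselves, and the three-state trajectory is invented for illustration.

    import numpy as np

    def estimate_msm(dtrajs, n_states, lag=1):
        """Count transitions at the given lag time in discretized
        trajectories and row-normalize into a transition matrix."""
        C = np.zeros((n_states, n_states))
        for traj in dtrajs:
            for i, j in zip(traj[:-lag], traj[lag:]):
                C[i, j] += 1
        return C / C.sum(axis=1, keepdims=True)

    # Toy states: unbound (0), encounter complex (1), bound (2)
    dtrajs = [[0, 0, 1, 0, 1, 2, 2, 2, 1, 2, 2, 0, 0, 1, 2, 2]]
    print(np.round(estimate_msm(dtrajs, 3), 2))
    # Slow rates follow from the eigenvalues: rate ≈ -ln(lambda_2) / lag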
In addition, a new analysis technique for time-series data from chemical relaxation experiments is developed in this work. It makes it possible to identify which of the above-mentioned two-step mechanisms underlies the data. The new method is valid for a broader range of concentrations than previous methods and therefore allows the concentrations to be chosen such that the mechanism can be uniquely identified. It is successfully tested with data for the binding of recoverin to a rhodopsin kinase peptide.
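As background to why the two candidate mechanisms are distinguishable at all, the textbook rapid-equilibrium limits (assuming pseudo-first-order conditions with the ligand in excess) predict opposite concentration dependences of the observed relaxation rate; these limiting expressions are quoted from standard kinetics, not from the thesis, whose method extends beyond such limiting regimes.

    \text{Induced fit: } \mathrm{R} + \mathrm{L} \rightleftharpoons \mathrm{RL}\ (\text{fast},\ K_d),\quad
    \mathrm{RL} \rightleftharpoons \mathrm{R^{*}L}\ (\text{slow},\ k_f, k_b):\qquad
    k_{\mathrm{obs}} = k_b + \frac{k_f\,[\mathrm{L}]}{K_d + [\mathrm{L}]}\quad (\text{increases with } [\mathrm{L}])

    \text{Conformational selection: } \mathrm{R^{*}} \rightleftharpoons \mathrm{R}\ (\text{slow},\ k_f, k_b),\quad
    \mathrm{R} + \mathrm{L} \rightleftharpoons \mathrm{RL}\ (\text{fast},\ K_d):\qquad
    k_{\mathrm{obs}} = k_f + \frac{k_b}{1 + [\mathrm{L}]/K_d}\quad (\text{decreases with } [\mathrm{L}])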
Start-up incentives targeted at unemployed individuals have become an important tool of the Active Labor Market Policy (ALMP) to fight unemployment in many countries in recent years. In contrast to traditional ALMP instruments like training measures, wage subsidies, or job creation schemes, which are aimed at reintegrating unemployed individuals into dependent employment, start-up incentives are a fundamentally different approach to ALMP, in that they intend to encourage and help unemployed individuals to exit unemployment by entering self-employment and, thus, by creating their own jobs. In this sense, start-up incentives for unemployed individuals serve not only as employment and social policy to activate job seekers and combat unemployment but also as business policy to promote entrepreneurship. The corresponding empirical literature on this topic so far has been mainly focused on the individual labor market perspective, however. The main part of the thesis at hand examines the new start-up subsidy (“Gründungszuschuss”) in Germany and consists of four empirical analyses that extend the existing evidence on start-up incentives for unemployed individuals from multiple perspectives and in the following directions:
First, it provides the first impact evaluation of the new start-up subsidy in Germany. The results indicate that participation in the new start-up subsidy has significant positive and persistent effects on both reintegration into the labor market and the income profiles of participants, in line with previous evidence on comparable German and international programs, which emphasizes the general potential of start-up incentives as part of the broader ALMP toolset. Furthermore, a new, innovative sensitivity analysis of the applied propensity score matching approach integrates findings from entrepreneurship and labor market research about the key role of an individual’s personality in the start-up decision, business performance, and general labor market outcomes into the impact evaluation of start-up incentives. The sensitivity analysis with regard to the inclusion and exclusion of usually unobserved personality variables reveals that differences in the estimated treatment effects are small in magnitude and mostly insignificant. Consequently, concerns about the potential overestimation of treatment effects in previous evaluation studies of similar start-up incentives due to usually unobservable personality variables are less justified, as long as the set of observed control variables is sufficiently informative (Chapter 2).
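The matching step underlying such an evaluation can be sketched generically; this is a bare-bones nearest-neighbour propensity score matcher on simulated data, not the study's actual specification or dataset.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def att_psm(X, treated, y):
        """Average treatment effect on the treated (ATT) via 1-nearest-
        neighbour matching on estimated propensity scores."""
        ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
        t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
        effects = [y[i] - y[c_idx[np.argmin(np.abs(ps[c_idx] - ps[i]))]]
                   for i in t_idx]
        return np.mean(effects)

    # Simulated data with one confounder and a true effect of 2
    rng = np.random.default_rng(0)
    x = rng.normal(size=(500, 1))
    treated = (rng.random(500) < 1 / (1 + np.exp(-x[:, 0]))).astype(int)
    y = 2 * treated + x[:, 0] + rng.normal(size=500)
    print(att_psm(x, treated, y))  # close to 2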
Second, the thesis expands our knowledge about the longer-term business performance and potential of subsidized businesses arising from the start-up subsidy program. In absolute terms, the analysis shows that a relatively high share of subsidized founders successfully survives in the market with their original businesses in the medium to long run. The subsidy also yields a “double dividend” to a certain extent in terms of additional job creation. Compared to “regular”, i.e., non-subsidized new businesses founded by non-unemployed individuals in the same quarter, however, the economic and growth-related impulses set by participants of the subsidy program are only limited with regard to employment growth, innovation activity, or investment. Further investigations of possible reasons for these differences show that differential business growth paths of subsidized founders in the longer run seem to be mainly limited by higher restrictions to access capital and by unobserved factors, such as less growth-oriented business strategies and intentions, as well as lower (subjective) entrepreneurial persistence. Taken together, the program has only limited potential as a business and entrepreneurship policy intended to induce innovation and economic growth (Chapters 3 and 4).
And third, an empirical analysis at the level of German regional labor markets shows that there is high regional variation in subsidized start-up activity relative to overall new business formation. The positive correlation between regular start-up intensity and the share of all unemployed individuals who participate in the start-up subsidy program suggests that (nascent) unemployed founders also profit from the beneficial effects of regional entrepreneurship capital. Moreover, the analysis of potential deadweight and displacement effects from an aggregated regional perspective emphasizes that the start-up subsidy for unemployed individuals represents an intervention into existing markets, which affects incumbents and potentially produces inefficiencies and market distortions. This macro perspective deserves more attention and research in the future (Chapter 5).
The Cauchy problem for the linearised Einstein equation and the Goursat problem for wave equations
(2017)
In this thesis, we study two initial value problems arising in general relativity. The first is the Cauchy problem for the linearised Einstein equation on general globally hyperbolic spacetimes, with smooth and distributional initial data. We extend well-known results by showing that given a solution to the linearised constraint equations of arbitrary real Sobolev regularity, there is a globally defined solution, which is unique up to addition of gauge solutions. Two solutions are considered equivalent if they differ by a gauge solution. Our main result is that the equivalence class of solutions depends continuously on the corresponding equivalence class of initial data. We also solve the linearised constraint equations in certain cases and show that there exist arbitrarily irregular (non-gauge) solutions to the linearised Einstein equation on Minkowski spacetime and Kasner spacetime.
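For orientation, the simplest instance of the equation studied here is the linearised vacuum Einstein equation on Minkowski spacetime, stated in its standard gauge-fixed form (background material to fix notation, not a result of the thesis):

    g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad
    \bar h_{\mu\nu} := h_{\mu\nu} - \tfrac{1}{2}\,\eta_{\mu\nu}\,h,

and, in the Lorenz (de Donder) gauge \partial^\mu \bar h_{\mu\nu} = 0,

    \Box\, \bar h_{\mu\nu} = 0.

The gauge solutions referred to above are those of the pure-gauge form h_{\mu\nu} = \partial_\mu \xi_\nu + \partial_\nu \xi_\mu for a covector field \xi.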
In the second part, we study the Goursat problem (the characteristic Cauchy problem) for wave equations. We specify initial data on a smooth compact Cauchy horizon, which is a lightlike hypersurface. This problem has not been studied much, since it is an initial value problem on a non-globally hyperbolic spacetime. Our main result is that given a smooth function on a non-empty, smooth, compact, totally geodesic and non-degenerate Cauchy horizon and a so-called admissible linear wave equation, there exists a unique solution that is defined on the globally hyperbolic region and restricts to the given function on the Cauchy horizon. Moreover, the solution depends continuously on the initial data. A linear wave equation is called admissible if the first order part satisfies a certain condition on the Cauchy horizon, for example if it vanishes. Interestingly, both existence and uniqueness of solutions fail for general wave equations, as examples show. If we drop the non-degeneracy assumption, examples show that existence of solutions fails even for the simplest wave equation. The proof requires precise energy estimates for the wave equation close to the Cauchy horizon. In case the Ricci curvature vanishes on the Cauchy horizon, we show that the energy estimates are strong enough to prove local existence and uniqueness for a class of non-linear wave equations. Our results apply in particular to the Taub-NUT spacetime and the Misner spacetime. It has recently been shown that compact Cauchy horizons in spacetimes satisfying the null energy condition are necessarily smooth and totally geodesic. Our results therefore apply whenever the spacetime satisfies the null energy condition and the Cauchy horizon is compact and non-degenerate.
Anthropogenically amplified erosion leads to increased fine-grained sediment input into the fluvial system of the 15,000 km² Kharaa River catchment in northern Mongolia and constitutes a major stress factor for the aquatic ecosystem. This study uniquely combines intensive monitoring, source fingerprinting, and catchment modelling techniques, allowing the credibility and accuracy of each individual method to be compared. High-resolution discharge data were used in combination with daily suspended solid measurements to calculate the suspended sediment budget and compare it with estimates from the sediment budget model SedNet. The comparison of both techniques showed that the development of an overall sediment budget with SedNet was possible, yielding results in the same order of magnitude (20.3 kt a⁻¹ and 16.2 kt a⁻¹).
Radionuclide sediment tracing using Be-7, Cs-137 and Pb-210 was applied to differentiate sediment sources for particles < 10 μm from hillslope and riverbank erosion, and showed that riverbank erosion generates 74.5% of the suspended sediment load, whereas surface erosion contributes 21.7% and gully erosion only 3.8%. The contributions of the individual sub-catchments of the Kharaa to the suspended sediment load were assessed based on their variation in geochemical composition (e.g., in Ti, Sn, Mo, Mn, As, Sr, B, U, Ca and Sb). These variations were used for sediment source discrimination with geochemical composite fingerprints based on Discriminant Function Analysis driven by a Genetic Algorithm, the Kruskal–Wallis H-test, and Principal Component Analysis. The contributions of the individual sub-catchments varied from 6.4% to 36.2%, generally showing higher contributions from the sub-catchments in the middle, rather than the upstream, portions of the study area.
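At its core, the apportionment step of such fingerprinting amounts to un-mixing the sediment signature into non-negative source contributions; the two-tracer, two-source numbers below are hypothetical and serve only to show the computation.

    import numpy as np
    from scipy.optimize import nnls

    # Columns: tracer signatures of two sources (e.g. hillslope, riverbank);
    # b: tracer concentrations measured in the suspended sediment sample.
    A = np.array([[4.0, 1.0],    # tracer 1
                  [0.5, 2.0]])   # tracer 2
    b = np.array([1.8, 1.6])

    w, _ = nnls(A, b)            # non-negative least-squares source weights
    print(w / w.sum())           # relative source contributions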
The results indicate that riverbank erosion generated by existing livestock grazing practices is the main cause of the elevated fine sediment input. Actions towards the protection of the headwaters and the stabilization of the river banks within the middle reaches were identified as the highest priority. Deforestation by logging and forest fires should be prevented to avoid increased hillslope erosion in the mountainous areas. Mining activities are of minor importance for the overall catchment sediment load but can constitute locally important point sources for particular heavy metals in the fluvial system.
Self-adaptive data quality
(2017)
Carrying out business processes successfully is closely linked to the quality of the data inventory in an organization. Deficits in data quality lead to problems: Incorrect address data prevents (timely) shipments to customers. Erroneous orders lead to returns and thus to unnecessary effort. Wrong pricing causes companies to miss out on revenue or to impair customer satisfaction. If orders or customer records cannot be retrieved, complaint management takes longer. Due to erroneous inventories, too few or too many supplies might be reordered.
A special problem with data quality, and the reason for many of the issues mentioned above, are duplicates in databases. Duplicates are different representations of the same real-world objects in a dataset. These representations differ from each other and are for that reason hard to match by a computer. Moreover, the number of comparisons required to find those duplicates grows with the square of the dataset size. To cleanse the data, these duplicates must be detected and removed. Duplicate detection is a very laborious process. To achieve satisfactory results, appropriate software must be created and configured (similarity measures, partitioning keys, thresholds, etc.). Both require much manual effort and experience.
This thesis addresses automation of parameter selection for duplicate detection and presents several novel approaches that eliminate the need for human experience in parts of the duplicate detection process.
A pre-processing step is introduced that analyzes the datasets in question and classifies their attributes semantically. Not only do these annotations help in understanding the respective datasets, they also facilitate subsequent steps, for example, by selecting appropriate similarity measures or normalizing the data upfront. This approach works without schema information.
Following that, we show a partitioning technique that strongly reduces the number of pair comparisons for the duplicate detection process. The approach automatically finds particularly suitable partitioning keys that simultaneously allow for effective and efficient duplicate retrieval. By means of a user study, we demonstrate that this technique finds partitioning keys that outperform expert suggestions and additionally does not need manual configuration. Furthermore, this approach can be applied independently of the attribute types.
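The effect of a partitioning (blocking) key is easy to see in a generic sketch; this illustrates blocking itself, not the automatic key-selection method contributed here.

    from collections import defaultdict
    from itertools import combinations

    def pairs_with_blocking(records, key):
        """Compare records only within blocks that share a key value,
        instead of all n*(n-1)/2 pairs."""
        blocks = defaultdict(list)
        for r in records:
            blocks[key(r)].append(r)
        for block in blocks.values():
            yield from combinations(block, 2)

    records = [{"name": "Jon Doe", "zip": "14469"},
               {"name": "John Doe", "zip": "14469"},
               {"name": "Ann Roe", "zip": "10115"}]
    print(len(list(combinations(records, 2))))                          # 3 pairs
    print(len(list(pairs_with_blocking(records, lambda r: r["zip"]))))  # 1 pair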
To measure the success of a duplicate detection process and to execute the described partitioning approach, a gold standard is required that provides information about the actual duplicates in a training dataset. This thesis presents a technique that uses existing duplicate detection results and crowdsourcing to create a near gold standard that can be used for the purposes above. Another part of the thesis describes and evaluates strategies for reducing these crowdsourcing costs and for achieving a consensus with less effort.
In this work, the human AOX1 (hAOX1) was characterized, and detailed aspects regarding its expression, enzyme kinetics, and production of reactive oxygen species (ROS) were investigated. hAOX1 is a cytosolic enzyme belonging to the molybdenum hydroxylase family. Its catalytically active form is a homodimer with a molecular weight of 300 kDa. Each monomer (150 kDa) consists of three domains: an N-terminal domain (20 kDa) containing two [2Fe-2S] clusters, a 40 kDa intermediate domain containing a flavin adenine dinucleotide (FAD), and a C-terminal domain (85 kDa) containing the substrate binding pocket and the molybdenum cofactor (Moco). hAOX1 has an emerging role in the metabolism and pharmacokinetics of many drugs, especially aldehydes and N-heterocyclic compounds.
In this study, hAOX1 was heterologously expressed in E. coli TP1000 cells using a new codon-optimized gene sequence, which improved the yield of expressed protein by around 10-fold compared to previous expression systems for this enzyme. To increase the catalytic activity of hAOX1, an in vitro chemical sulfuration was performed to favor the insertion of the equatorial sulfido ligand at the Moco, with a consequent increase in enzymatic activity of around 10-fold. Steady-state kinetics and inhibition studies were performed using several substrates, electron acceptors, and inhibitors. The recombinant hAOX1 showed higher catalytic activity when molecular oxygen was used as the electron acceptor. The highest turnover values were obtained with phenanthridine as the substrate. Inhibition studies using thioridazine (phenothiazine family), in combination with structural studies performed in the group of Prof. M.J. Romão, Nova Universidade de Lisboa, revealed a new inhibition site located in proximity to the dimerization site of hAOX1. Thioridazine acted as a noncompetitive inhibitor. Further inhibition studies with loxapine, a thioridazine-related molecule, showed the same type of inhibition. Additional inhibition studies using DCPIP and raloxifene were carried out.
Extensive studies on the FAD active site of hAOX1 were performed. Twenty new hAOX1 variants were produced and characterized. The variants generated in this work fall into three groups: I) hAOX1 single nucleotide polymorphism (SNP) variants; II) XOR-FAD loop hAOX1 variants; III) additional single-point hAOX1 variants. The SNP variants G46E, G50D, G346R, R433P, A439E and K1231N showed clear alterations in their catalytic activity, indicating a crucial role of these residues in the FAD active site and for the overall reactivity of hAOX1.
Furthermore, residues of the bovine XOR FAD flexible loop (Q423ASRREDDIAK433) were introduced into hAOX1. The FAD loop hAOX1 variants were produced and characterized with respect to their stability and catalytic activity. In particular, the variants hAOX1 N436D/A437D/L438I, N436D/A437D/L438I/I440K and Q434R/N436D/A437D/L438I/I440K showed decreased catalytic activity and stability. hAOX1 wild type and variants were also tested for reactivity toward NADH, but no reaction was observed.
Additionally, the hAOX1 wild type and variants were tested for the generation of reactive oxygen species (ROS). Interestingly, one of the SNP variants, hAOX1 L438V, showed a high rate of superoxide production. This result indicates a critical role of residue Leu438 in the mechanism of oxygen radical formation by hAOX1. Subsequently, further hAOX1 variants mutated at residue Leu438 were produced. The variants hAOX1 L438A, L438F and L438K showed superoxide overproduction of around 85%, 65% and 35%, respectively, of the total reducing equivalents obtained from substrate oxidation.
The results of this work present the first characterization of the FAD active site of hAOX1, revealing the importance of specific residues involved in the generation of ROS and affecting the overall enzymatic activity of the enzyme. The hAOX1 SNP variants presented here indicate that these allelic variations in humans might cause alterations in ROS balancing and in the clearance of drugs.
In this thesis, stochastic dynamics modelling the collective motion of populations, one of the most mysterious types of biological phenomena, are considered. For a system of N particle-like individuals, two kinds of asymptotic behaviour are studied: ergodicity and flocking properties in long time, and propagation of chaos when the number N of agents goes to infinity. The deterministic mean-field kinetic model of Cucker and Smale for a population without a hierarchical structure is the starting point of our journey: the first two chapters are dedicated to understanding the various stochastic dynamics it inspires, with random noise added in different ways. The third chapter, an attempt to improve those results, is built upon the cluster expansion method, a technique from statistical mechanics. Exponential ergodicity is obtained for a class of non-Markovian processes with non-regular drift. In the final part, the focus shifts to a stochastic system of interacting particles derived from the Keller-Segel 2-D parabolic-elliptic model for chemotaxis. Existence and weak uniqueness are proven.
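For orientation, one commonly studied stochastic perturbation of the Cucker-Smale model (the thesis considers several ways of adding noise) reads

    dx_t^i = v_t^i\,dt, \qquad dv_t^i = \frac{1}{N}\sum_{j=1}^{N}\psi\bigl(|x_t^j - x_t^i|\bigr)\bigl(v_t^j - v_t^i\bigr)\,dt + \sigma\,dW_t^i,

with a communication rate of the type \psi(r) = \lambda\,(1 + r^2)^{-\gamma} and independent Brownian motions W^i; flocking concerns the alignment of the velocities v^i as t \to \infty.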
The timing and location of the two largest earthquakes of the 21st century (Sumatra 2004 and Tohoku 2011) greatly surprised the scientific community, indicating that the deformation processes that precede and follow great megathrust earthquakes remain enigmatic. During these phases before and after an earthquake, a combination of complex, multi-scale processes acts simultaneously: stresses built up by long-term tectonic motions are modified by sudden jerky deformations during earthquakes, before being restored by multiple ensuing relaxation processes.
This thesis details a cross-scale thermomechanical model developed with the aim of simulating the entire subduction process from the earthquake time scale (about 1 minute) to millions of years, excluding only rupture propagation. The model employs elasticity, non-linear transient viscous rheology, and rate-and-state friction. It generates spontaneous earthquake sequences and, by using an adaptive time-step algorithm, recreates the deformation process as observed naturally over single and multiple seismic cycles. The model is thoroughly tested by comparing results to known high-resolution solutions of generic modeling setups widely used in modeling of rupture propagation. It is demonstrated that, while not modeling rupture propagation explicitly, the procedure correctly recognizes the appearance of instability (earthquake) and correctly simulates the cumulative slip at a fault during a great earthquake by means of a quasi-dynamic approximation.
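For reference, the standard Dieterich-Ruina rate-and-state formulation (stated here with the aging law for the state variable \theta; the precise variant used in the thesis may differ) expresses the friction coefficient as

    \mu = \mu_0 + a\,\ln\frac{V}{V_0} + b\,\ln\frac{V_0\,\theta}{d_c}, \qquad \dot{\theta} = 1 - \frac{V\theta}{d_c},

where V is the slip velocity, V_0 a reference velocity, d_c the characteristic slip distance, and a - b < 0 marks velocity-weakening, potentially seismogenic fault segments.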
A set of 2D models is used to study the effects of non-linear transient rheology on the postseismic processes following great earthquakes. Our models predict that the viscosity in the mantle wedge drops by 3 to 4 orders of magnitude during a great earthquake with magnitude above 9. This drop in viscosity results in spatial scales and timings of the relaxation processes following the earthquakes that differ significantly from previous estimates. These models replicate the centuries-long seismic cycles exhibited by the greatest earthquakes (such as the Great Chile 1960 Earthquake) and are consistent with the major features of postseismic surface displacements recorded after the Great Tohoku Earthquake.
The 2D models are also applied to study the key factors controlling the maximum magnitudes of earthquakes in subduction zones. Even though methods of instrumentally observing earthquakes at subduction zones have improved rapidly in recent decades, the characteristic recurrence interval of giant earthquakes (Mw>8.5) is much longer than the currently available observational record, and the necessary conditions for giant earthquakes are therefore not clear. Statistical studies have recognized the importance of the slab shape and its surface roughness, the state of strain of the upper plate and the thickness of sediments filling the trenches. In this thesis we attempt to explain these observations and to identify the key controlling parameters. We test a set of 2D models representing great earthquake seismic cycles at known subduction zones with various known geometries, megathrust friction coefficients, and convergence rates. We found that low-angle subduction (large effect) and thick sediments in the subduction channel (smaller effect) are the fundamental necessary conditions for generating giant earthquakes, while a change of subduction velocity from 10 to 3.5 cm/yr has a lesser effect. The modeling results also suggest that thick sediments in the subduction channel cause low static friction, resulting in neutral or slightly compressive deformation in the overriding plate for low-angle subduction zones. These results agree well with observations for the largest earthquakes. The model predicts the largest possible earthquakes for subduction zones of given dipping angles, and the predicted maximum magnitudes closely match the threshold magnitudes of all known giant earthquakes of the 20th and 21st centuries.
The clear limitation of most of the models developed in this thesis is their 2D nature. Developing 3D models with comparable resolution and complexity will require significant advances in numerical techniques. Nevertheless, we conducted a series of low-resolution 3D models to study the interaction between two large asperities at a subduction interface separated by an aseismic gap of varying width. The novelty of the model is that it considers the behavior of the asperities during multiple seismic cycles. As expected, the models show that a narrow aseismic gap cannot prevent rupture propagation from one asperity to the other, and rupture always crosses the entire model. When the gap becomes too wide, the asperities no longer interact and rupture independently. However, an interesting mode of interaction was observed in the model with an intermediate gap width: the asperities began to stably rupture in anti-phase after multiple seismic cycles. These 3D modeling results, while insightful, must be considered preliminary because of the limited resolution.
The technique developed in this thesis for cross-scale modeling of seismic cycles can be used to study the effects of multiple seismic cycles on the long-term deformation of the upper plate. It can also be extended to continental transform faults and to advanced 3D modeling of specific subduction zones. This will require further development of numerical techniques and adaptation of existing, highly scalable parallel codes such as LAMEM and ASPECT.
Magnetotactic bacteria possess an intracellular structure called the magnetosome chain. Magnetosome chains contain nanoparticles of iron crystals enclosed by a membrane and aligned on a cytoskeletal filament. Due to the presence of magnetosome chains, magnetotactic bacteria are able to orient themselves and swim along magnetic field lines. A detailed study of the structural properties of magnetosome chains in magnetotactic bacteria is of primary scientific interest, as it can provide more insight into the formation of the cytoskeleton in bacteria. In this thesis, we develop a new framework to study the structural properties of magnetosome chains in magnetotactic bacteria.
First, we address the bending stiffness of magnetosome chains resulting from two main contributions: the magnetic interactions of the magnetosome particles and the bending stiffness of the cytoskeletal filament to which the magnetosomes are anchored. Our analysis indicates that a linear configuration of magnetosome particles, without stabilization by the cytoskeleton, may close into ring-like structures with no net magnetic moment, which therefore cannot serve as a compass in cellular navigation. We thus propose that one of the roles of the filament is to stabilize the linear configuration against ring closure.
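The competition analyzed here can be made explicit through the two generic energy scales involved (stated in standard form; the specific parameter values are those of the thesis): the dipole-dipole energy of two magnetic moments m at separation d, which for head-to-tail alignment reduces to

    U_{dd} = -\frac{\mu_0 m^2}{2\pi d^3},

and the bending energy of a semiflexible filament with stiffness \kappa,

    E_{bend} = \frac{\kappa}{2}\int \left|\frac{d\mathbf{t}}{ds}\right|^2 ds,

where \mathbf{t}(s) is the unit tangent along the contour. Ring closure trades the cost of bending against the magnetic gain of bringing the two chain ends into contact.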
We then investigate the equilibrium configurations of magnetosome particles, including linear-chain and closed-ring structures. We notably observe that a binding energy is needed for the formation of a stable linear structure on the cytoskeletal filament. In the presence of external stimuli, the stability of the magnetosome chain is due to the internal dipole-dipole interactions, the stiffness, and the binding energy of the protein structure connecting the magnetosome particles to the filament. Our observations during and after treatment of the magnetosome chain with an external magnetic field substantiate both the stabilization of magnetosome chains on the cytoskeletal filament by proteinaceous linkers and the dynamic nature of these structures.
Finally, we employ our model to study the ferromagnetic resonance (FMR) spectra of magnetosome chains in a single cell of magnetotactic bacteria. We explore the effect of magnetocrystalline anisotropy with three-fold symmetry observed in FMR spectra and the peculiarities of the different spectra arising from different mutants of these bacteria.
Detection and Kirchhoff-type migration of seismic events by use of a new characteristic function
(2017)
The classical method of seismic event localization is based on the picking of body wave arrivals, ray tracing and inversion of travel time data. Travel time picks with small uncertainties are required to produce reliable and accurate results with this kind of source localization. Hence, recordings with a low signal-to-noise ratio (SNR) cannot be used in a travel-time-based inversion. Low SNR can be associated with weak signals from distant and/or low-magnitude sources as well as with a high level of ambient noise. Diffraction stacking is considered an alternative seismic event localization method that also enables the processing of low-SNR recordings by stacking the amplitudes of seismograms along a travel time function. The location of a seismic event and its origin time are determined from the highest stacked amplitudes (coherency) of the image function. The method lends itself to automatic processing since it does not need travel time picks as input data.
However, applying diffraction stacking may require long computation times if only limited computing resources are available. Furthermore, a simple diffraction stacking of recorded amplitudes may fail to locate seismic sources if the focal mechanism leads to complex radiation patterns, which typically holds for both natural and induced seismicity.
In my PhD project, I have developed a new workflow for the localization of seismic events based on a diffraction stacking approach. A parallelized code was implemented for the calculation of travel time tables and for the determination of an image function in order to reduce computation time. To address the effects of complex source radiation patterns, I also propose computing the diffraction stack from a characteristic function (CF) instead of stacking the original waveform data. A new CF, referred to in the following as mAIC (modified from the Akaike Information Criterion), is proposed. I demonstrate that the performance of the mAIC does not depend on the chosen length of the analyzed time window and that both P- and S-wave onsets can be detected accurately. To avoid cross-talk between P- and S-waves due to inaccurate velocity models, I separate the P- and S-waves from the mAIC function by making use of polarization attributes. The final image function is then represented by the largest eigenvalue resulting from the covariance analysis of the P- and S-image functions. Before applying diffraction stacking, I also denoise the seismograms using Otsu thresholding in the time-frequency domain.
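The mAIC itself is the contribution of this thesis; as a point of reference, the classical AIC characteristic function from which it is modified can be sketched in Python as follows (the onset is marked by the minimum of the function):

    # Classical AIC characteristic function for phase-onset picking; the
    # minimum of AIC(k) marks the most likely onset sample in the window.
    import numpy as np

    def aic_cf(x):
        """Return the AIC characteristic function of a 1-D seismogram window."""
        n = len(x)
        aic = np.full(n, np.nan)
        for k in range(2, n - 1):
            var1, var2 = np.var(x[:k]), np.var(x[k:])
            if var1 > 0.0 and var2 > 0.0:
                aic[k] = k * np.log(var1) + (n - k - 1) * np.log(var2)
        return aic

    # Usage: onset_sample = np.nanargmin(aic_cf(window))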
Results from synthetic experiments show that the proposed diffraction stacking provides reliable results even from seismograms with an SNR as low as 1. Tests with different representations of the synthetic seismograms (displacement, velocity, and acceleration) showed that acceleration seismograms deliver better results in the case of high SNR, whereas displacement seismograms provide more accurate results for low-SNR recordings. In another test, different measures (maximum amplitude, other statistical parameters) were used to determine the source location in the final image function. I found that the statistical approach is preferable, particularly for low SNR.
The workflow of my diffraction stacking method was finally applied to local earthquake data from Sumatra, Indonesia. Recordings from a temporary network of 42 stations deployed for 9 months around the Tarutung pull-apart basin were analyzed. The seismic event locations resulting from the diffraction stacking method align along a segment of the Sumatran Fault. A more complex distribution of seismicity is imaged within and around the Tarutung Basin. Two N-S striking lineaments were found in the middle of the Tarutung Basin which support independent results from structural geology; these features are interpreted as opening fractures due to local extension. A cluster of seismic events occurred repeatedly within a short time, which might be related to fluid drainage, as two hot springs are observed at the surface near this cluster.
In the present work, side-chain polystyrenes were synthesized and characterized in order to be applied in multilayer OLEDs fabricated by solution processing techniques. Manufacturing optoelectronic devices by solution processing is expected to decrease fabrication costs significantly and to allow large-scale production of such devices.
This dissertation focuses on three series of materials belonging to two material classes, which differ in the type of charge transport they exhibit: ambipolar transport or electron transport. All materials were applied in all-organic, solution-processed, green Ir-based devices.
In the first part, a series of ambipolar host materials was developed to transport both charge types, holes and electrons, and to serve in particular as a matrix for green Ir-based emitters. It was possible to increase device efficacy by modulating the predominant charge transport type. This was achieved by modifying the electron-transporting part of the molecules with more electron-deficient heterocycles or by extending the delocalization of the LUMO. Efficiencies of up to 28.9 cd/A were observed for all-organic, solution-processed three-layer devices.
In the second part, the suitability of triarylboranes and tetraphenylsilanes as electron transport materials was studied. High triplet energies of up to 2.95 eV were obtained by rational combination of both molecular structures. Although combining both elements had little effect on the materials' electron transport properties, high efficiencies of around 24 cd/A were obtained for the series in all-organic, solution-processed two-layer devices.
In the last part, benzene and pyridine were chosen as the electron-transport motifs of the series. By controlling the relative pyridine content (RPC), solubility in methanol was induced for polystyrenes with bulky side chains. Materials with RPC ≥ 0.5 could be deposited orthogonally from solution without harming the underlying layers. To the best of our knowledge, this is the first time such materials have been applied in this architecture, showing moderate efficiencies of around 10 cd/A in all-organic, solution-processed OLEDs.
Overall, the outcome of these studies will actively contribute to the current research on materials for all-solution processed OLEDs.
In this work, a sensor system based on thermoresponsive materials is developed using a modular approach. By synthesizing three different key monomers containing either a carboxyl, alkene or alkyne end group connected via a spacer to the methacrylic polymerizable unit, a flexible copolymerization strategy with oligo(ethylene glycol) methacrylates has been set up. This allows tuning of the lower critical solution temperature (LCST) of the polymers in aqueous media. The molar masses are variable thanks to an excursion into polymerization in ionic liquids, extending the accessible molar masses from 25 to over 1000 kDa. The systems shown to be effective in aqueous solution could be immobilized on surfaces by copolymerizing photo-crosslinkable units. The immobilized systems were formulated to give different layer thicknesses, swelling ratios and mesh sizes, depending on the demands of the coupling reaction.
The coupling of detector units or model molecules is approached via reactions from the click chemistry pool, and these reactions are likewise evaluated for their efficiency under those aspects. The coupling reactions are followed by surface plasmon resonance (SPR) spectroscopy to judge their efficiency. With these tools at hand, Salmonella saccharides could be selectively detected by SPR. Influenza viruses were detected in solution by turbidimetry as well as by a copolymerized solvatochromic dye, which tracks the binding event via the resulting changes in the polymers' fluorescence. The same effect could also be achieved by exploiting the thermoresponsive behavior. Another demonstrator consists of the detection system bound to a quartz surface, thus allowing virus detection on a solid carrier.
The experiments show the great potential of combining the concepts of thermoresponsive materials and click chemistry to develop technically simple sensors for large biomolecules and viruses.
The classical Navier-Stokes equations of hydrodynamics are usually written in terms of vector analysis. More promising is the formulation of these equations in the language of differential forms of degree one. In this way the study of the Navier-Stokes equations includes the analysis of the de Rham complex. In particular, the Hodge theory for the de Rham complex enables one to eliminate the pressure from the equations. The Navier-Stokes equations constitute a parabolic system with a nonlinear term which makes sense only for one-forms. A simpler model of the dynamics of an incompressible viscous fluid is given by Burgers' equation. This work is aimed at the study of the invariant structure of the Navier-Stokes equations, which is closely related to the algebraic structure of the de Rham complex at step 1. To this end we introduce Navier-Stokes equations related to any elliptic quasicomplex of first order differential operators. These equations are quite similar to the classical Navier-Stokes equations, including generalised velocity and pressure vectors. Elimination of the pressure from the generalised Navier-Stokes equations gives a good motivation for the study of the Neumann problem after Spencer for elliptic quasicomplexes; such a study is also included in the work.

We start this work with a discussion of the Lamé equations within the context of elliptic quasicomplexes on compact manifolds with boundary. The non-stationary Lamé equations form a hyperbolic system. However, the study of the first mixed problem for them provides good experience for attacking the linearised Navier-Stokes equations. On this basis we describe a class of non-linear perturbations of the Navier-Stokes equations for which the solvability results still hold.
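For reference, the classical incompressible Navier-Stokes system reads

    \partial_t u - \nu\,\Delta u + (u\cdot\nabla)u + \nabla p = f, \qquad \nabla\cdot u = 0;

interpreting the velocity u as a one-form, the Laplacian is replaced by the Hodge Laplacian \Delta = d\delta + \delta d of the de Rham complex and the incompressibility constraint becomes \delta u = 0, which is the kind of formulation generalised in this work to elliptic quasicomplexes.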
Import and decomposition of dissolved organic carbon in pre-dams of drinking water reservoirs
(2017)
Dissolved organic carbon (DOC) represents a key component of the aquatic carbon cycle as well as of drinking water production from surface waters. DOC concentrations have increased in water bodies of the northern hemisphere over the last decades, with ecological consequences and water quality problems. Within the pelagic zone of lakes and reservoirs, the DOC pool is greatly affected by biological activity, as DOC is simultaneously produced and decomposed. This thesis aimed at a conceptual understanding of organic carbon cycling and DOC quality changes under differing hydrological and trophic conditions. Further, the occurrence of aquatic priming was investigated, which has been proposed as a potential process facilitating the microbial decomposition of stable allochthonous DOC within the pelagic zone.
To study organic carbon cycling under different hydrological conditions, quantitative and qualitative investigations were carried out in three pre-dams of drinking water reservoirs exhibiting a gradient in DOC concentrations and trophic states. All pre-dams were mainly autotrophic in their epilimnia. Discharge and temperature were identified as the key factors regulating net production and respiration in the upper water layers of the pre-dams. Considerably high autochthonous production was observed during the summer season under higher trophic status and base-flow conditions; up to 30% of the total gained organic carbon was produced within the epilimnia. Consequently, this affected the DOC quality within the pre-dams over the year, and enhanced characteristics of algae-derived DOC were observed during base flow in summer. Allochthonous DOC dominated at high discharges and under oligotrophic conditions, when production and respiration were low. These results underline that even small impoundments with typically low water residence times are hotspots of carbon cycling, significantly altering water quality depending on discharge conditions, temperature and trophic status. They also highlight that these factors need to be considered in future water management, as increasing temperatures and altered precipitation patterns are predicted in the context of climate change.
Under base-flow conditions, heterotrophic bacteria preferentially utilized older DOC components with a conventional radiocarbon age of 195-395 years before present (i.e. before 1950). In contrast, younger carbon components (modern, i.e. produced after 1950) were mineralized following a storm-flow event. This highlights that age and recalcitrance of DOC are independent of each other. To assess the age of the microbially consumed DOC, a simplified method was developed to recover the respired CO2 from heterotrophic bacterioplankton for carbon isotope analyses (13C, 14C). The advantages of the method comprise the operation of replicate incubations at in-situ temperatures using standard laboratory equipment, enabling its application under a broad range of conditions.
Aquatic priming was investigated in laboratory experiments during the microbial decomposition of two terrestrial DOC substrates (peat water and soil leachate). Natural phytoplankton served as a source of labile organic matter, and the total DOC pool increased throughout the experiments due to exudation and cell lysis of the growing phytoplankton. A priming effect for both terrestrial DOC substrates was revealed via carbon isotope analysis and mixing models. Priming was more pronounced for the peat water than for the soil leachate, indicating that the DOC source and the amount of added labile organic matter might influence the magnitude of a priming effect. Additional analysis via high-resolution mass spectrometry revealed that oxidized, unsaturated compounds were more strongly decomposed under priming (i.e. in the presence of phytoplankton). Given the observed increase in DOC concentrations during the experiments, it can be concluded that aquatic priming is not easily detectable via net concentration changes alone and should be considered a qualitative effect.
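The mixing models referred to are of the generic two-end-member type: given distinct isotopic signatures of the terrestrial substrate and the phytoplankton-derived organic matter, the fraction of respired carbon originating from the terrestrial DOC follows from

    f_{terr} = \frac{\delta^{13}C_{CO_2} - \delta^{13}C_{phyto}}{\delta^{13}C_{terr} - \delta^{13}C_{phyto}},

so that an excess of f_{terr} in the presence of phytoplankton, relative to a phytoplankton-free control, quantifies the priming effect.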
The knowledge gained from this thesis contributes to the understanding of aquatic carbon cycling and demonstrates how DOC dynamics in freshwaters vary with hydrological, seasonal and trophic conditions. It further demonstrates that aquatic priming contributes to the microbial transformation of organic carbon and to the observed decay of allochthonous DOC during transport in inland waters.
Mathematical models of bacterial growth have been successfully applied to study the relationship between antibiotic drug exposure and the antibacterial effect. Since these models typically lack a representation of cellular processes and cell physiology, a mechanistic integration of drug action at the cellular level is not possible. The cellular mechanisms of drug action, however, are particularly relevant for the prediction, analysis and understanding of interactions between antibiotics. Interactions are also studied experimentally; however, a lack of consensus on the experimental protocol hinders direct comparison of results. As a consequence, contradictory classifications as additive, synergistic or antagonistic are reported in the literature.
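A minimal population-level sketch of the kind of model meant here (not the cell-level model developed in this thesis; all parameter values are illustrative) couples logistic growth with a sigmoidal-Emax kill term:

    # Generic time-kill sketch: logistic bacterial growth with a
    # concentration-dependent (sigmoidal-Emax) kill rate.
    import numpy as np
    from scipy.integrate import solve_ivp

    def net_growth(t, N, kg, Nmax, kmax, ec50, h, conc):
        """dN/dt for biomass N under a constant antibiotic concentration."""
        kill = kmax * conc**h / (ec50**h + conc**h)
        return (kg * (1.0 - N / Nmax) - kill) * N

    # Simulate 24 h at a constant concentration of four times the EC50.
    sol = solve_ivp(net_growth, (0.0, 24.0), [1e6],
                    args=(1.0, 1e9, 1.5, 1.0, 2.0, 4.0))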
In the present thesis we developed a novel mathematical model for bacterial growth that integrates cell-level processes into the population growth level. The scope of the model is to predict bacterial growth under antimicrobial perturbation by multiple antibiotics in vitro.
To this end, we combined cell-level data from literature with population growth data for Bacillus subtilis, Escherichia coli and Staphylococcus aureus. The cell-level data described growth-determining characteristics of a reference cell, including the ribosomal concentration and efficiency. The population growth data comprised extensive time-kill curves for clinically relevant antibiotics (tetracycline, chloramphenicol, vancomycin, meropenem, linezolid, including dual combinations).
The new cell-level approach made it possible, for the first time, to simultaneously describe single and combined effects of the aforementioned antibiotics for different experimental protocols, in particular for different growth phases (lag and exponential phase). Consideration of ribosomal dynamics and persisting sub-populations explained the decreased potency of linezolid on cultures in the lag phase compared to exponential-phase cultures. The model captured growth-rate-dependent killing and auto-inhibition of meropenem and, also under vancomycin exposure, regrowth of the bacterial cultures due to adaptive resistance development. Stochastic interaction surface analysis demonstrated that the pronounced antagonism between meropenem and linezolid is robust against variation in the growth phase and in the pharmacodynamic endpoint definition, but sensitive to a change in the experimental duration.
Furthermore, the developed approach included a detailed representation of the bacterial cell cycle. We used this representation to describe septation dynamics during the transition of a bacterial culture from the exponential to the stationary growth phase. Based on the resulting mechanistic understanding of transition processes, we explained the lag time between the increase in cell number and in bacterial biomass during the transition from the lag to the exponential growth phase. Our model also reproduces the increased intracellular RNA mass fraction during long-term exposure of bacteria to chloramphenicol.
In summary, we contribute a new approach to disentangle the impact of drug effects, assay readout and experimental protocol on antibiotic interactions. In the absence of a consensus on the corresponding experimental protocols, this disentanglement is key to translating information between heterogeneous experiments and, ultimately, to the clinical setting.
All life-sustaining processes are ultimately driven by the thousands of biochemical reactions occurring in cells: the metabolism. These reactions form an intricate network which produces all required chemical compounds, i.e., metabolites, from a set of input molecules. Cells regulate the activity of metabolic reactions in a context-specific way; only reactions that are required in a given cellular context, e.g., cell type, developmental stage or environmental condition, are usually active, while the rest remain inactive. The context-specificity of metabolism can be captured by several kinds of experimental data, such as gene and protein expression or metabolite profiles. In addition, these context-specific data can be assimilated into computational models of metabolism, which then provide context-specific metabolic predictions.
This thesis is composed of three individual studies focusing on the integration of context-specific experimental data into computational models of metabolism. The first study presents an optimization-based method to obtain context-specific metabolic predictions which offers the advantage of being fully automated, i.e., free of user-defined parameters. The second study explores the effects of alternative optimal solutions arising during the generation of context-specific metabolic predictions. These alternative optimal solutions are metabolic model predictions that represent the integrated data equally well but can differ markedly. The study proposes algorithms to analyze the space of alternative solutions, as well as ways to cope with their impact on the predictions.
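The optimization problems underlying such predictions are linear programs of the flux-balance type; a toy Python sketch (illustrative stoichiometry, not the method of the thesis) shows both the formulation and where alternative optima come from:

    # Toy flux-balance problem: maximize the product flux v3 subject to
    # steady-state mass balance S v = 0 and flux bounds.
    import numpy as np
    from scipy.optimize import linprog

    S = np.array([[1, -1,  0],   # metabolite A: produced by v1, consumed by v2
                  [0,  1, -1]])  # metabolite B: produced by v2, consumed by v3
    c = np.array([0.0, 0.0, -1.0])  # linprog minimizes, so negate the objective
    bounds = [(0, 10), (0, 10), (0, 10)]

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    # res.x is one optimal flux distribution; in larger networks many distinct
    # flux vectors can attain the same optimum - the degeneracy analyzed above.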
Finally, the third study investigates the metabolic specialization of the guard cells of the plant Arabidopsis thaliana and compares it with that of a different cell type, the mesophyll cells. To this end, the computational methods developed in this thesis are applied to obtain metabolic predictions specific to guard cells and mesophyll cells. These cell-specific predictions are then compared to explore the differences in metabolic activity between the two cell types, with the effects of alternative optima taken into consideration. The computational results indicate a major reorganization of primary metabolism in guard cells. These results are supported by an independent 13C labelling experiment.
The existence of diverse and active microbial ecosystems in the deep subsurface, a biosphere originally considered devoid of life, has been demonstrated in multiple microbiological studies. However, most of these studies are restricted to marine ecosystems, while our knowledge of the microbial communities in the deep subsurface of lake systems, and of their potential to adapt to changing environmental conditions, is still fragmentary. This doctoral thesis aims to build a unique data basis providing the first detailed high-throughput characterization of the deep biosphere of lacustrine sediments, and to emphasize how important it is to differentiate between the living and the dead microbial community in deep biosphere studies.
In this thesis, up to 3.6 Ma old sediments (up to 317 m deep) of the El’gygytgyn Crater Lake, which represents the oldest terrestrial climate record of the Arctic, were examined. Combining next-generation sequencing with detailed geochemical characteristics and other environmental parameters, the microbial community composition was analyzed with regard to the changing climatic conditions between 3.6 Ma and 1.0 Ma ago (Pliocene and Pleistocene). DNA was successfully extracted from all investigated sediments, revealing a surprisingly diverse (6,910 OTUs) and abundant microbial community in the El’gygytgyn deep sediments. The bacterial abundance (10³-10⁶ 16S rRNA copies g⁻¹ sediment) was up to two orders of magnitude higher than the archaeal abundance (10¹-10⁵) and fluctuated with the Pleistocene glacial/interglacial cyclicity. Interestingly, a strong increase in microbial diversity with depth was observed (approximately 2.5 times higher diversity in Pliocene sediments compared to Pleistocene sediments). This increase in diversity with depth in Lake El’gygytgyn is most probably caused by higher sedimentary temperatures towards the deep sediment layers as well as by enhanced temperature-induced intra-lake bioproductivity and a higher input of allochthonous organic-rich material under Pliocene climatic conditions. Moreover, the microbial richness parameters follow the general trends of paleoclimatic parameters such as paleo-temperature and paleo-precipitation. The most abundant bacterial representatives in the El’gygytgyn deep biosphere are affiliated with the phyla Proteobacteria, Actinobacteria, Bacteroidetes, and Acidobacteria, which are also commonly distributed in the surrounding permafrost habitats. The predominant taxon was the halotolerant genus Halomonas (on average 60% of the total reads per sample).
Additionally, this doctoral thesis focuses on the live/dead differentiation of microbes in cultures and environmental samples. As established methods (e.g., fluorescence in situ hybridization, RNA analyses) are not applicable to the challenging El’gygytgyn sediments, two newer methods were adapted to distinguish between DNA from living cells and free (extracellular, dead) DNA: propidium monoazide (PMA) treatment and a cell separation adapted for low amounts of DNA. The applicability of the DNA-intercalating dye PMA was successfully evaluated for masking the free DNA of different cultures of methanogenic archaea, which play a major role in the global carbon cycle. Moreover, an optimal procedure to simultaneously treat bacteria and archaea was developed, using 130 µM PMA and 5 min of photo-activation with blue LED light, which is also applicable to sandy environmental samples with a particle load of ≤ 200 mg mL⁻¹. It was demonstrated that soil texture has a strong influence on the PMA treatment in particle-rich samples and that silt- and clay-rich samples in particular (e.g., El’gygytgyn sediments) lead to an insufficient shielding of free DNA by PMA. Therefore, a cell separation protocol was used to distinguish between DNA from living cells (intracellular DNA) and extracellular DNA in the El’gygytgyn sediments. When comparing these two DNA pools with a total DNA pool extracted with a commercial kit, significant differences in the microbial composition of all three pools (mean distance of relative abundance: 24.1%; mean distance of OTUs: 84.0%) were discovered. In particular, the total DNA pool covers significantly fewer taxa than the cell-separated DNA pools and only inadequately represents the living community. Moreover, individual redundancy analyses revealed that the microbial communities of the intra- and extracellular DNA pools are driven by different environmental factors. The living community is mainly influenced by life-dependent parameters (e.g., sedimentary matrix, water availability), while the extracellular DNA depends on the biogenic silica content. The different community-shaping parameters, and the fact that a redundancy analysis of the total DNA pool explains significantly less variance of the microbial community, indicate that the total DNA represents a mixture of signals from the living and the dead microbial community.
This work provides the first fundamental data basis on the diversity and distribution of microbial deep biosphere communities of a lake system over several million years. Moreover, it demonstrates the substantial importance of extracellular DNA in old sediments. These findings may strongly influence future environmental community analyses, where the application of live/dead differentiation avoids misinterpretations caused by a failure to extract the living microbial community or by an overestimation of past community diversity in total DNA extraction approaches.
Carbohydrate-protein interactions are ubiquitous in nature. They provide the initial molecular contacts in many cell-cell processes such as immune responses, signal transduction, egg fertilization, and infection by pathogenic viruses and bacteria. Furthermore, bacteria themselves are infected by bacteriophages: viruses that can cause bacterial lysis but do not affect other hosts. The infection process of a bacteriophage involves the specific detection and binding of the bacterium, which can be based on a carbohydrate-protein interaction. The mechanism of specific detection of pathogenic bacteria can thereby be useful for the development of bacteria sensors in the food industry or of diagnostic tools.
Bacteriophages of the Podoviridae family use tailspike proteins (TSPs) for the specific detection of enteritis-causing bacteria such as Escherichia coli, Salmonella spp. or Shigella flexneri. The tailspike protein provides the first contact by binding to the carbohydrate-containing O-antigen part of the lipopolysaccharide in the Gram-negative cell wall. After binding to O-antigen repeating units, the enzymatic activity of the tailspike proteins cleaves the carbohydrate chains, which enables the bacteriophage to approach the bacterial surface for DNA injection. Owing to this necessary binding, cleavage and release cycle, tailspike proteins exhibit a relatively low affinity to the oligosaccharide structures of the O-antigen compared, for example, to antibodies. This work aimed to study the determinants that influence carbohydrate affinity in the extended TSP binding grooves, a prerequisite for designing a high-affinity, tailspike-protein-based bacteria sensor.
For this purpose, the tailspike protein of the bacteriophage Sf6 (Sf6 TSP) was used, which specifically binds the Shigella flexneri Y O-antigen with two tetrasaccharide repeating units at the subunit interfaces of the trimeric β-helix protein. The Sf6 TSP endorhamnosidase cleaves the O-antigen, yielding an octasaccharide as the main product. The binding affinity of inactive Sf6 TSP towards the polysaccharide was characterized by fluorescence titration experiments and surface plasmon resonance (SPR).
Moreover, cysteine mutations were introduced into the Sf6 TSP binding site for the covalent thiol-coupling of an environment-sensitive fluorescent label, yielding a sensor for Shigella flexneri Y based on TSP-O-antigen recognition. This sensor showed a more than 100% increase in visible-light fluorescence amplitude upon binding of a polysaccharide test solution. Improvements of the TSP sensor can be achieved by increasing the tailspike affinity towards the O-antigen. Therefore, molecular dynamics simulations evaluating ligand flexibility, hydrogen bond occupancies and water network distributions were used for affinity prediction on the available cysteine mutants of Sf6 TSP, and the binding affinities were analyzed experimentally by SPR. This combined computational and experimental set-up for the design of a high-affinity carbohydrate-binding protein could successfully distinguish strongly increased from decreased affinities of single amino acid mutants.
A thermodynamically and structurally well-characterized set of mutants of another tailspike protein, HK620 TSP, including high-affinity mutants, was used to evaluate the influence of water molecules on binding affinity. The free enthalpy of HK620 TSP-oligosaccharide complex formation derived either from the replacement of a conserved water molecule or from the immobilization of two water molecules upon ligand binding. Furthermore, the enthalpic and entropic contributions of water molecules in a hydrophobic binding pocket could be assigned by free energy calculations. The findings of this work may be helpful for the future improvement of carbohydrate docking and carbohydrate-binding protein engineering algorithms.
We analyze a noisy inverse regression model under random design, with the aim of estimating the unknown target function from a given set of data drawn according to some unknown probability distribution. Our estimators are all constructed by kernel methods, relying on a reproducing kernel Hilbert space structure and spectral regularization methods.
A first main result establishes upper and lower bounds for the rate of convergence under a given source condition assumption, which restricts the class of admissible distributions. Since kernel methods scale poorly when massive datasets are involved, we study in more detail one example of saving computation time and memory requirements: we show that parallelizing spectral algorithms also leads to minimax-optimal rates of convergence, provided the number of machines is chosen appropriately.
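A minimal sketch of the parallelization scheme meant here, divide-and-conquer kernel ridge regression (one instance of a spectral regularization method; all parameter values are illustrative):

    # Divide-and-conquer kernel ridge regression: fit independent estimators
    # on disjoint data splits and average their predictions.
    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    def distributed_krr(X, y, n_machines, alpha=1e-2, gamma=1.0):
        """Return a predictor that averages per-split kernel ridge estimators."""
        splits = zip(np.array_split(X, n_machines), np.array_split(y, n_machines))
        models = [KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma).fit(Xs, ys)
                  for Xs, ys in splits]
        return lambda X_new: np.mean([m.predict(X_new) for m in models], axis=0)

    # Example on synthetic data drawn from a noisy sine.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, (600, 1))
    y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.standard_normal(600)
    predict = distributed_krr(X, y, n_machines=6)

Each machine solves a much smaller kernel problem, which is where the computational savings come from; choosing the number of machines too large over-regularizes and breaks the minimax rate.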
We emphasize that, so far, all estimators depend on the assumed a-priori smoothness of the target function and on the eigenvalue decay of the kernel covariance operator, both of which are in general unknown. Obtaining good, purely data-driven estimators constitutes the problem of adaptivity, which we handle for the single-machine problem via a version of the Lepskii principle.
This study was inspired by the desire to contribute to literature on performance management from the context of a developing country. The guiding research questions were: How do managers use performance information in decision making? Why do managers use performance information the way they do? The study was based on theoretical strands of neo-patrimonialism and new institutionalism. The nature of the inquiry informed the choice of a qualitative case study research design. Data was assembled through face-to-face interviews, some observations, and collection of documents from managers at the levels of the directorate, division, and section/units. The managers who were the focus of this study are current or former staff members of the state departments in Kenya’s national Ministry of Agriculture, Livestock, and Fisheries as well as from departments responsible for coordination of performance related reforms.
The findings of this study show that performance information is regularly produced but its use by managers varies. Examples of use include preparing reports to external bodies, making decisions for resource re-allocation, making recommendations for rewards and sanctions, and policy advisory. On categorizing the forms of use as passive, purposeful, political or perverse, evidence shows that they overlap and that some of the forms are so closely related that it is difficult to separate them empirically.
Regarding what can explain the established forms of use, four factors were investigated, namely political will and leadership, organizational capacity, administrative culture, and managers' interests and attitudes. While acknowledging the interrelatedness and even overlap of these factors, the study demonstrates that each has explanatory power, though with varying depth and scope. The study thus concludes that inconsistent political will and leadership for performance management reforms explain forms of use that are passive, political and perverse; low organizational capacity best explains passive use and some limited aspects of purposeful use; an informal, personal and competitive administrative culture is associated with purposeful use and, mostly, with political and perverse use; and limited interest and an apprehensive attitude are best associated with passive use.
The study contributes to the literature particularly on how institutions in a context of neo-patrimonialism shape performance information use. It recommends further research to establish how neo-patrimonialism may positively affect performance-oriented reforms. This is particularly interesting given the emerging thinking on pockets of effectiveness and developmental patrimonialism, and important because performance-related reforms are expected to continue being advocated in developing countries for the foreseeable future.
Information on the contemporary in-situ stress state of the earth's crust is essential for geotechnical applications and physics-based seismic hazard assessment. Yet, stress data records for a data point are incomplete, and their availability is usually not dense enough to allow conclusive statements. This demands a thorough examination of the in-situ stress field, which is achieved by 3D geomechanical-numerical models. However, the spatial resolution of such models is limited, and the resulting local stress state is subject to large uncertainties that confine the significance of the findings. In addition, temporal variations of the in-situ stress field are naturally or anthropogenically induced. In my thesis I address these challenges in three manuscripts that investigate (1) the current crustal stress field orientation, (2) the 3D geomechanical-numerical modelling of the in-situ stress state, and (3) the phenomenon of injection-induced temporal stress tensor rotations.

In the first manuscript I present the first comprehensive stress data compilation for Iceland, with 495 data records. To this end, I analysed image logs from 57 boreholes in Iceland for indicators of the orientation of the maximum horizontal stress component. The study is the first stress survey based on different kinds of stress indicators in a geologically very young and tectonically active area of an onshore spreading ridge. It reveals a distinct stress field with a depth-independent stress orientation, even very close to the spreading centre.

In the second manuscript I present a calibrated 3D geomechanical-numerical modelling approach to the in-situ stress state of the Bavarian Molasse Basin that investigates the regional (70x70x10 km³) and local (10x10x10 km³) stress state. To link these two models I develop a multi-stage modelling approach that provides a reliable and efficient method to derive initial and boundary conditions for the smaller-scale model from the larger-scale model. Furthermore, I quantify the uncertainties in the model results, which are inherent to geomechanical-numerical modelling in general and to the multi-stage approach in particular. I show that the significance of the model results is mainly reduced by the uncertainties in the material properties and by the low number of stress magnitude data records available for calibration.

In the third manuscript I investigate the phenomenon of injection-induced temporal stress tensor rotation and its controlling factors. I conduct a sensitivity study with a 3D generic thermo-hydro-mechanical model and show that the key controls on the stress tensor rotation are the permeability (the decisive factor), the injection rate, and the initial differential stress. Large rotations of the stress tensor are indicated in particular for enhanced geothermal systems with low permeability. According to these findings, the initial differential stress in a reservoir can be estimated provided that the permeability is known and the angle of stress rotation is observed. I propose that stress tensor rotations can be a key factor for the potential of induced seismicity on pre-existing faults, since the reorientation of the stress field changes the optimal orientation of faults.
Researchers have taken many approaches to study the complexities of the mammalian taste system; however, the molecular mechanisms of taste processing in the early structures of the central taste pathway remain unclear. More recently, the Arc catFISH (cellular compartment analysis of temporal activity by fluorescent in situ hybridisation) method has been used in our lab to study neural activation following taste stimulation in the first central structure of the taste pathway, the nucleus of the solitary tract (NTS). This method uses the immediate early gene Arc as a neural activity marker to identify taste-responsive neurons. Arc plays a critical role in memory formation and is necessary for the formation of conditioned taste aversion memories. In the nucleus of the solitary tract, only bitter taste stimulation resulted in increased Arc expression; stimulation with tastants of other taste qualities did not. The primary target of gustatory NTS neurons is the parabrachial nucleus (PbN), and, like Arc, the PbN plays an important role in conditioned taste aversion learning.
The aim of this thesis is to investigate Arc expression in the PbN following taste stimulation, in order to elucidate the molecular identity and function of Arc-expressing, taste-responsive neurons. Naïve and taste-conditioned mice were stimulated with tastants from each of the five basic taste qualities (sweet, salty, sour, umami, and bitter), with additional bitter compounds included for comparison. The expression patterns of Arc and marker genes were analysed using in situ hybridisation (ISH). The Arc catFISH method was used to observe taste-responsive neurons following each taste stimulation. A double fluorescent in situ hybridisation protocol was then established to investigate possible neuropeptide genes involved in neural responses to taste stimulation.
The results showed that bitter taste stimulation induces increased Arc expression in the PbN of naïve mice; this was not true for the other taste qualities. In mice conditioned to find an umami tastant aversive, subsequent umami taste stimulation resulted in an increase in Arc expression similar to that seen in bitter-stimulated mice. Taste-responsive Arc expression was denser in the lateral than in the medial PbN. In mice that received two temporally separated taste stimulations, each stimulation time point showed a distinct population of Arc-expressing neurons, with only a small population (10-18%) of neurons responding to both stimulations. This suggests that either each stimulation event activates a different population of neurons, or that Arc marks something other than simple cellular activation, such as long-term cellular changes that do not occur twice within a 25-minute time frame. Investigation using the newly established double-FISH protocol revealed that, of the bitter-responsive Arc-expressing neuron population, 16% co-expressed calcitonin RNA, 17% co-expressed glucagon-like peptide 1 receptor RNA, 17% co-expressed hypocretin receptor 1 RNA, 9% co-expressed gastrin-releasing peptide RNA, and 20% co-expressed neurotensin RNA. This co-expression with multiple different neuropeptides suggests that bitter-activated Arc expression mediates multiple neural responses to the taste event, such as taste aversion learning, suppression of food intake and increased heart rate, and involves multiple brain structures such as the lateral hypothalamus, amygdala, bed nucleus of the stria terminalis, and thalamus.
The increase in Arc expression suggests that bitter taste stimulation, and umami taste stimulation in umami-averse animals, may result in an enhanced state of Arc-dependent synaptic plasticity in the PbN, allowing animals to form taste-relevant memories of these aversive compounds more readily. The results on neuropeptide RNA co-expression point to the amygdala, bed nucleus of the stria terminalis, and thalamus as possible targets of bitter-responsive Arc-expressing PbN neurons.
Tremendous progress in the development of thin-film solar cell techniques has been made over the last decade. The field of organic solar cells is constantly developing, new material classes such as perovskite solar cells are emerging, and different types of hybrid organic/inorganic material combinations are being investigated for their physical properties and their applicability in thin-film electronics. Besides typical single-junction architectures, multi-junction concepts are also being investigated, as they enable overcoming the theoretical limitations of a single junction. In multi-junction devices each sub-cell operates in a different wavelength regime and should exhibit an optimized band-gap energy. It is exactly this tunability of the band-gap energy that renders organic solar cell materials interesting candidates for multi-junction applications. Nevertheless, only few attempts have been made to combine inorganic and organic solar cells in series-connected multi-junction architectures. Even though a great diversity of organic solar cells exists nowadays, their open-circuit voltage is usually low compared to the band-gap of the active layer. In particular, organic low-band-gap solar cells show low open-circuit voltages, and the key factors that determine the voltage losses are not yet fully understood. Besides open-circuit voltage losses, the recombination of charges in organic solar cells is also a prevailing research topic, especially with respect to the influence of trap states.
The exploratory focus of this work is therefore set, on the one hand, on the development of hybrid organic/inorganic multi-junctions and, on the other hand, on gaining a deeper understanding of the open-circuit voltage and the recombination processes of organic solar cells.
In the first part of this thesis, the development of a hybrid organic/inorganic triple-junction is discussed which, at the time (Jan. 2015), showed a record power conversion efficiency of 11.7%. The inorganic sub-cells of these devices consist of hydrogenated amorphous silicon and were supplied by the Competence Center Thin-Film and Nanotechnology for Photovoltaics in Berlin. Different recombination contacts and organic sub-cells were tested in conjunction with these inorganic sub-cells, guided by optical modeling predictions of the optimal layer thicknesses, to finally reach record efficiencies for this type of solar cell.
In the second part, organic model systems are investigated to gain a better understanding of the fundamental loss mechanisms that limit the open-circuit voltage of organic solar cells. First, bilayer systems with different orientations of the donor and acceptor molecules were investigated to study the influence of donor/acceptor orientation on non-radiative voltage losses. Second, three different bulk heterojunction solar cells, all comprising the same amount of fluorination and the same polymer backbone in the donor component, were examined to study the influence of long-range electrostatics on the open-circuit voltage. Third, the device performance of two bulk heterojunction solar cells was compared which consisted of the same donor polymer but used different fullerene acceptor molecules. By this means, the influence of changing the energetics of the acceptor component on the open-circuit voltage was investigated, and a full analysis of the charge carrier dynamics was presented to unravel the reasons for the worse performance of the solar cell with the higher open-circuit voltage.

In the third part, a new recombination model for organic solar cells is introduced and its applicability shown for a typical low-band-gap cell. This model sheds new light on the recombination process in organic solar cells in a broader context, as it re-evaluates the recombination pathway of charge carriers in devices that show the presence of trap states. It thereby addresses a current research topic and helps to resolve alleged discrepancies that can arise from the interpretation of data derived by different measurement techniques.
Approaching physical limits in the speed and size of today's magnetic storage and processing technologies demands new concepts for controlling magnetization and motivates research on optically induced magnetization dynamics. Studies of photoinduced magnetization dynamics and their underlying mechanisms have primarily been performed on ferromagnetic metals. Ferromagnetic dynamics is based on the transfer of the conserved angular momentum associated with atomic magnetic moments out of the parallel-aligned magnetic system into other degrees of freedom.
In this thesis the so far rarely studied response of antiferromagnetic order in a metal to ultra-short optical laser pulses is investigated. The experiments were performed at the FemtoSpex slicing facility at the storage ring BESSY II, a unique source for ultra-short elliptically polarized x-ray pulses. Laser-induced changes of the 4f magnetic order parameter in ferro- and antiferromagnetic dysprosium (Dy) were studied by x-ray methods that yield directly comparable quantities. The discovered fundamental differences in the temporal and spatial behavior of ferro- and antiferromagnetic dynamics are assigned to an additional channel for angular momentum transfer, which reduces the antiferromagnetic order by redistributing angular momentum within the non-parallel aligned magnetic system and hence conserves the zero net magnetization. It is shown that antiferromagnetic dynamics proceeds considerably faster and more energy-efficiently than demagnetization in ferromagnets. By probing antiferromagnetic order in time and space, it is found to be affected along the whole depth of an in situ grown, 73 nm thick Dy film. Interatomic transfer of angular momentum via fast diffusion of laser-excited 5d electrons is held responsible for this remarkably long-ranged effect. Ultrafast ferromagnetic dynamics can be expected to share the same origin, which however leads to demagnetization only in regions close to interfaces, caused by super-diffusive spin transport. Dynamics due to local scattering processes of excited but less mobile electrons occur in both magnetic alignments only in directly excited regions of the sample and on slower picosecond timescales. The thesis provides fundamental insights into photoinduced magnetic dynamics by directly comparing ferro- and antiferromagnetic dynamics in the same material and by considering the laser-induced magnetic depth profile.
Cellular membranes constantly experience remodeling, as exemplified by morphological changes during endo- and exocytosis. Regulation of membrane morphology is essential for these processes. In this work, we attempt to establish a regulation path based on the use of photoswitches exhibiting conformational changes in model membranes, namely giant unilamellar vesicles (GUVs). The mechanism behind the changes in GUV morphology caused by isomerization of the photosensitive molecules has been explored previously but still remains elusive. We examine the morphological reshaping of GUVs in the presence of the photoswitch o-tetrafluoroazobenzene (F-azo) and show that the mechanism behind the resulting morphological changes involves both an increase in the membrane area and the generation of a positive spontaneous curvature. First, we characterize the partitioning of F-azo in a single-component membrane using both experimental and computational approaches. The partition coefficient calculated from molecular dynamics simulations agrees with experimental data obtained with size-exclusion chromatography. Then, we implement the approach of vesicle electrodeformation in order to assess the increase in membrane area observed as a result of the conformational change of F-azo. Finally, the local and the effective membrane spontaneous curvatures were estimated from the observed shapes of vesicles exhibiting outward budding. We then extend the application of F-azo to multicomponent lipid membranes, which exhibit a coexistence of domains in different liquid phases due to a miscibility gap between the lipids. We perform initial experiments to investigate whether F-azo can be employed to modulate the lateral lipid packing and organization. We observe either complete mixing of the domains or the appearance of disordered domains within the domains of the more ordered phase; which of the two behaviors occurs in response to the photoisomerization of F-azo depended on the lipid composition used. We believe that the findings introduced here will have an impact on understanding and controlling both lipid phase modulation and the regulation of membrane morphology in membrane systems.
Background: Consumption of whole grains, coffee, and red meat has consistently been related to the risk of developing type 2 diabetes in prospective cohort studies, but the potentially underlying biological mechanisms are not well understood. Metabolomics profiles were shown to be sensitive to these dietary exposures and, at the same time, to be informative with respect to the risk of type 2 diabetes. Moreover, graphical network models were demonstrated to reflect the biological processes underlying high-dimensional metabolomics profiles.
Aim: The aim of this study was to infer hypotheses on the biological mechanisms that link consumption of whole-grain bread, coffee, and red meat, respectively, to the risk of developing type 2 diabetes. More specifically, the study aimed to consider network models of amino acid and lipid profiles as potential mediators of these risk relations.
Study population: Analyses were conducted in the prospective EPIC-Potsdam cohort (n = 27,548), applying a nested case-cohort design (n = 2731, including 692 incident diabetes cases). Habitual diet was assessed with validated semiquantitative food-frequency questionnaires. Concentrations of 126 metabolites (acylcarnitines, phosphatidylcholines, sphingomyelins, amino acids) were determined in baseline serum samples. Incident type 2 diabetes cases were assessed and validated in an active follow-up procedure. The median follow-up time was 6.6 years.
Analytical design: The methodological approach was conceptually based on counterfactual causal inference theory. Observations on the network-encoded conditional independence structure restricted the space of possible causal explanations of the observed metabolomics-data patterns. Given basic directionality assumptions (diet affects metabolism; metabolism affects future diabetes incidence), adjustment for a subset of direct neighbours was sufficient to consistently estimate network-independent direct effects. Further model specification, however, was limited due to missing directionality information on the links between metabolites. Therefore, a multi-model approach was applied to infer the bounds of possible direct effects. All metabolite-exposure and metabolite-outcome links, respectively, were classified into one of three categories: direct effect, ambiguous (some models indicated an effect, others did not), and no effect.
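The three-category classification can be sketched in a few lines of Python (a hypothetical helper, not the thesis code): every admissible adjustment set yields one interval estimate of the direct effect, and the pattern of significant results determines the category.

```python
def classify_link(interval_estimates, null_value=0.0):
    """Classify a metabolite-exposure (or metabolite-outcome) link.

    interval_estimates: list of (lower, upper) confidence bounds for the
    direct effect, one per admissible adjustment set (e.g. one per subset
    of network neighbours used for adjustment).
    """
    significant = [lo > null_value or hi < null_value
                   for lo, hi in interval_estimates]
    if all(significant):
        return "direct effect"  # every model indicates an effect
    if any(significant):
        return "ambiguous"      # some models indicate an effect, others do not
    return "no effect"

# classify_link([(0.05, 0.30), (0.10, 0.40)]) -> "direct effect"
```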
Cross-sectional and longitudinal relations were evaluated in multivariable-adjusted linear regression and Cox proportional hazards regression models, respectively. Models were comprehensively adjusted for age, sex, body mass index, prevalence of hypertension, dietary and lifestyle factors, and medication.
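A minimal sketch of such a multivariable-adjusted Cox model in Python, using the lifelines package (column names and values are hypothetical; the case-cohort design would additionally require appropriate weighting, e.g. Prentice weights):

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: follow-up time, incident-diabetes
# indicator, one dietary exposure, and two adjustment covariates.
df = pd.DataFrame({
    "followup_years": [6.1, 5.8, 6.6, 4.2, 6.3, 3.9, 6.0, 5.1],
    "t2d_incident":   [0, 1, 0, 1, 1, 0, 0, 1],
    "coffee_cups":    [3.0, 0.5, 2.0, 1.0, 2.5, 0.5, 4.0, 1.5],
    "age":            [52, 61, 48, 57, 49, 63, 45, 59],
    "bmi":            [24.1, 29.3, 22.8, 31.0, 27.5, 26.2, 23.5, 28.4],
})

# A small ridge penalizer stabilizes the fit on this tiny toy sample.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="followup_years", event_col="t2d_incident")
cph.print_summary()  # hazard ratios with confidence intervals
```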
Results: Consumption of whole-grain bread was related to lower levels of several lipid metabolites with saturated and monounsaturated fatty acids. Coffee was related to lower aromatic and branched-chain amino acids, and had potential effects on the fatty acid profile within lipid classes. Red meat was linked to lower glycine levels and was related to higher circulating concentrations of branched-chain amino acids. In addition, potential marked effects of red meat consumption on the fatty acid composition within the investigated lipid classes were identified.
Moreover, potential beneficial and adverse direct effects of metabolites on type 2 diabetes risk were detected. Aromatic amino acids and lipid metabolites with even-chain saturated (C14-C18) and with specific polyunsaturated fatty acids had adverse effects on type 2 diabetes risk. Glycine, glutamine, and lipid metabolites with monounsaturated fatty acids and with other species of polyunsaturated fatty acids were classified as having direct beneficial effects on type 2 diabetes risk.
Potential mediators of the diet-diabetes links were identified by graphically overlaying this information in network models. Mediation analyses revealed that effects on lipid metabolites could potentially explain about one fourth of the whole-grain bread effect on type 2 diabetes risk; and that effects of coffee and red meat consumption on amino acid and lipid profiles could potentially explain about two thirds of the altered type 2 diabetes risk linked to these dietary exposures.
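The quoted fractions correspond to the usual proportion-mediated measure from effect decomposition (total effect TE = direct effect DE + indirect effect IE through the metabolites); for hazard ratios this is commonly evaluated on the log scale:

```latex
P_{\mathrm{mediated}} = \frac{\mathrm{IE}}{\mathrm{TE}}
  = \frac{\log \mathrm{HR}_{\mathrm{total}} - \log \mathrm{HR}_{\mathrm{direct}}}{\log \mathrm{HR}_{\mathrm{total}}}
```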
Conclusion: An algorithm was developed that is capable of integrating single external variables (continuous exposures, survival time) and high-dimensional metabolomics data in a joint graphical model. Application to the EPIC-Potsdam cohort study revealed that the observed conditional independence patterns were consistent with the a priori mediation hypothesis: Early effects on lipid and amino acid metabolism had the potential to explain large parts of the link between three of the most widely discussed diabetes-related dietary exposures and the risk of developing type 2 diabetes.
Underground coal gasification (UCG) has the potential to increase worldwide coal reserves by developing coal resources that are currently not economically extractable by conventional mining methods. For that purpose, coal is combusted in situ to produce a high-calorific synthesis gas with different end-use options, including electricity generation as well as the production of fuels and chemical feedstock. Apart from its high economic potential, UCG may induce site-specific environmental impacts, including ground surface subsidence and the migration of pollutants from UCG by-products into shallow freshwater aquifers. Sustainable and efficient UCG operation requires a thorough understanding of the coupled thermal, hydraulic and mechanical processes occurring in the vicinity of the UCG reactor. Since the development and infrastructure costs of UCG trials are very high, numerical simulations of the coupled processes in UCG are essential for the assessment of potential environmental impacts. The aim of the present study is therefore to assess UCG-induced permeability changes, potential hydraulic short-circuit formation and non-isothermal multiphase fluid flow dynamics by means of coupled numerical simulations. Simulation results on permeability changes in the UCG reactor vicinity demonstrate that temperature-dependent thermo-mechanical parameters have to be considered in near-field assessments only. Far-field simulations thus do not become inaccurate, but benefit from increased computational efficiency, when thermo-mechanical parameters are kept constant. Simulations of potential hydraulic short-circuit formation between single UCG reactors at the regional scale emphasize that geologic faults may induce hydraulic connections and thus compromise efficient UCG operation. In this context, the steam jacket surrounding high-temperature UCG reactors plays a vital role in preventing UCG by-products from escaping into freshwater aquifers and in minimizing energy consumption by formation fluid evaporation. A steam jacket emerges in the close reactor vicinity due to the phase transition of formation water and is a non-isothermal flow phenomenon. Considering this complex multiphase flow behavior, an innovative conceptual modeling approach, validated against field data, enables the quantification and prediction of UCG reactor water balances. The findings of this doctoral thesis provide an important basis for the integration of thermo-hydro-mechanical simulations in UCG, required for the assessment and mitigation of its potential environmental impacts as well as for the optimization of its efficiency.
During the course of millions of years, evolutionary forces have shaped the current distribution of species and their genetic variability by influencing their phylogeny, adaptability and probability of survival. Southeast Asia is an extraordinarily biodiverse region, where past climate events have resulted in dramatic changes in land availability and in the distribution of vegetation, resulting likewise in periodic connections between isolated islands and the mainland. These events have influenced the way species are distributed throughout this region but, more importantly, they influenced the genesis of genetic diversity. Despite the observation that a shared paleo-history resulted in very diverse phylogeographic patterns among species, the mechanisms behind these patterns are still poorly understood.
In this thesis, I investigated and contrasted the phylogeography of three groups of ungulate species distributed within South and Southeast Asia, aiming to understand what mechanisms have shaped speciation and geographical distribution of genetic variability. For that purpose, I analysed the mitogenomes of historical samples, in order to account for populations from the entire range of species distributions – including populations that no longer exist. This thesis is organized in three manuscripts, which correspond to the three investigated groups: red muntjacs, Rusa deer and Asian rhinoceros.
Red muntjacs are widely distributed and occur in very different habitats. We found evidence for gene flow among populations from different islands, indicative of their ability to utilize the available land corridors. However, we also described the existence of at least two dispersal barriers that created population differentiation within this group: one isolated Sundaic from mainland populations, and the second separated individuals from Sri Lanka.
Second, the two Rusa species investigated here revealed another consequence of the historical land connections. While the two species were monophyletic, we found evidence of hybridisation on Java, facilitated by the expansion of the widespread sambar, Rusa unicolor. Consequently, all individuals of the Javan deer, R. timorensis, that were transported east of Sundaland by humans were found to be of hybrid descent.
In the last manuscript, we were able to include samples from the extinct mainland populations of both the Sumatran and the Javan rhinoceros. The results revealed a much higher genetic diversity in the historical populations than ever reported for the contemporary survivors. Their evolutionary histories revealed a close relationship to climatic events of the Pleistocene but, more importantly, point to the vast extent of genetic erosion within these two endangered species.
The specific phylogeographic histories of the species showed some common patterns of genetic differentiation that could be directly linked to the climatic and geological changes on the Sunda Shelf during the Pleistocene. However, by contrasting these results I discuss that the same geological events did not always result in similar histories. One obvious example was the different permeability of the land corridors of Sundaland, as the ability of each species to utilize this newly available land was directly related to its specific ecological requirements. Taken together, these results make an important contribution to the general understanding of evolution in this biodiversity hotspot and of the main drivers shaping the distribution of genetic diversity, but they may also have important consequences for the taxonomy and conservation of the three investigated groups.
Introduction: Carbohydrate (CHO) and fat are the main substrates fuelling prolonged endurance exercise, and their oxidation patterns are regulated by several factors such as intensity, duration and mode of the activity, dietary intake pattern, muscle glycogen concentrations, gender and training status. Exercising at intensities where fat oxidation rates are high has been shown to induce metabolic benefits in recreational and health-oriented sportsmen. The exercise intensity (Fatpeak) eliciting peak fat oxidation rates is therefore of particular interest when aiming to prescribe exercise for the purpose of fat oxidation and related metabolic effects. Although running and walking are feasible and popular among the target population, no reliable protocols are available to assess Fatpeak and the corresponding velocity (VPFO) during treadmill ergometry. Moreover, to date, it remains unclear how pre-exercise CHO availability modulates the oxidative regulation of substrates when exercise is conducted at the intensity of the individual anaerobic threshold (IAT), i.e. at the corresponding velocity (VIAT). The IAT is a metabolic marker representing the upper boundary at which constant-load endurance exercise can be sustained, and it is commonly used to guide athletic training and in performance diagnostics. The research objectives of the current thesis were therefore: 1) to assess the reliability and day-to-day variability of VPFO and Fatpeak during treadmill ergometry running; 2) to assess the impact of high-CHO (HC) vs. low-CHO (LC) diets (where on the LC day a low-CHO diet was combined with a glycogen-depleting exercise) on the oxidative regulation of CHO and fat while exercising at VIAT. Methods: Research objective 1: Sixteen recreational athletes (f = 7, m = 9; 25 ± 3 y; 1.76 ± 0.09 m; 68.3 ± 13.7 kg; 23.1 ± 2.9 kg/m²) performed 2 different running protocols on 3 different days, with standardized nutrition the day before testing. On day 1, peak oxygen uptake (VO2peak) and the velocities at the aerobic threshold (VLT) and at a respiratory exchange ratio (RER) of 1.00 (VRER) were assessed. On days 2 and 3, subjects ran an identical submaximal incremental test (Fatpeak test) composed of a 10 min warm-up (70% VLT) followed by 5 stages of 6 min with equal increments (stage 1 = VLT, stage 5 = VRER). Breath-by-breath gas exchange data were measured continuously and used to determine fat oxidation rates. A third-order polynomial function was used to identify VPFO and subsequently Fatpeak. The reproducibility and variability of the variables were verified with the intraclass correlation coefficient (ICC), Pearson's correlation coefficient, the coefficient of variation (CV) and the mean differences (bias) ± 95% limits of agreement (LoA). Research objective 2: Sixteen recreational runners (m = 8, f = 8; 28 ± 3 y; 1.76 ± 0.09 m; 72 ± 13 kg; 23 ± 2 kg/m²) performed 3 different running protocols, each allocated to a different day. On day 1, a maximal stepwise incremental test was implemented to assess the IAT and VIAT. On days 2 and 3, participants ran a constant-pace bout (30 min) at VIAT, combined with randomly assigned HC (7 g/kg/d) or LC (3 g/kg/d) diets for the 24 h before testing. Breath-by-breath gas exchange data were measured continuously and used to determine substrate oxidation. Dietary data and differences in substrate oxidation were analyzed with a paired t-test. A two-way ANOVA tested the diet × gender interaction (α = 0.05).
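A minimal sketch of the VPFO/Fatpeak determination and the agreement statistics (all numbers hypothetical; the exact processing pipeline of the thesis may differ):

```python
import numpy as np

# Hypothetical stage velocities (km/h) and fat oxidation rates (g/min)
# from indirect calorimetry during the submaximal incremental test.
v = np.array([6.0, 7.0, 8.0, 9.0, 10.0])
fat_ox = np.array([0.30, 0.42, 0.48, 0.45, 0.33])

# Third-order polynomial fit, as described in the methods.
poly = np.poly1d(np.polyfit(v, fat_ox, 3))

# VPFO maximises the fitted curve: take the in-range critical point
# of the derivative with the highest fitted fat oxidation.
crit = poly.deriv().roots
crit = crit[np.isreal(crit)].real
crit = crit[(crit >= v.min()) & (crit <= v.max())]
vpfo = crit[np.argmax(poly(crit))]
print(f"VPFO ~ {vpfo:.2f} km/h, peak fat oxidation ~ {poly(vpfo):.2f} g/min")

# Day-to-day agreement (Bland-Altman): bias ± 1.96 SD of the differences.
day1 = np.array([9.1, 8.4, 9.8, 8.9])
day2 = np.array([9.3, 8.2, 9.9, 9.2])
diff = day1 - day2
print(f"bias {diff.mean():.2f} ± {1.96 * diff.std(ddof=1):.2f} km/h (95% LoA)")
```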
Results: Research objective 1: The ICC, Pearson's correlation and CV for VPFO and Fatpeak were 0.98, 0.97, 5.0% and 0.90, 0.81, 7.0%, respectively. Bias ± 95% LoA was -0.3 ± 0.9 km/h for VPFO and -2 ± 8% of VO2peak for Fatpeak. Research objective 2: Overall, the IAT and VIAT were 2.74 ± 0.39 mmol/l and 11.1 ± 1.4 km/h, respectively. CHO oxidation was 3.45 ± 0.08 and 2.90 ± 0.07 g/min during the HC and LC bouts, respectively (P < 0.05). Likewise, fat oxidation was 0.13 ± 0.03 and 0.36 ± 0.03 g/min (P < 0.05). Females had 14% (P < 0.05) and 12% (P > 0.05) greater fat oxidation than males during the HC and LC bouts, respectively. Conclusions: Research objective 1: In summary, relative and absolute reliability indicators for VPFO and Fatpeak were found to be excellent. The observed LoA may now serve as a basis for future training prescriptions, although fat oxidation rates during prolonged exercise bouts at this intensity still need to be investigated. Research objective 2: Twenty-four hours of high CHO consumption results in concurrently higher CHO oxidation rates and overall utilization, whereas maintaining a low systemic CHO availability significantly increases the contribution of fat to the overall energy metabolism. The observed gender differences underline the necessity of individualized dietary planning before exercising at such intensities. Ultimately, future research should establish how these findings can be extrapolated to training and competitive situations and thereby provide trainers and nutritionists with improved data from which to derive training prescriptions.
The work done during the PhD studies focused on measuring the distribution functions of rotating galaxies using integral field spectroscopy observations.
Throughout the main body of research presented here we use stellar velocity fields from the CALIFA (Calar Alto Legacy Integral Field Area) survey to obtain robust measurements of circular velocities for rotating galaxies of all morphological types. A crucial part of the work rests on the well-defined CALIFA sample selection criteria, which make it possible to reconstruct sample-independent distributions of galaxy properties.
In Chapter 2, we measure the distribution in absolute magnitude - circular velocity space for a well-defined sample of 199 rotating CALIFA galaxies using their stellar kinematics. Our aim in this analysis is to avoid subjective selection criteria and to take volume and large-scale structure factors into account. Using stellar velocity fields instead of gas emission line kinematics allows the inclusion of rapidly rotating early-type galaxies. Our initial sample contains 277 galaxies with available stellar velocity fields and growth-curve r-band photometry. After rejecting 51 velocity fields that could not be modelled due to a low number of bins, foreground contamination or significant interaction, we perform Markov Chain Monte Carlo (MCMC) modelling of the velocity fields, obtaining the rotation curve and kinematic parameters together with their realistic uncertainties. We apply an extinction correction and calculate the circular velocity v_circ, accounting for the pressure support of a given galaxy. The resulting galaxy distribution on the M_r - v_circ plane is then modelled as a mixture of two distinct populations, allowing a robust and reproducible rejection of outliers, a significant fraction of which are slow rotators. The selection effects are understood well enough that the incompleteness of the sample can be corrected, and the 199 galaxies can be weighted by volume and large-scale structure factors, enabling us to fit a volume-corrected Tully-Fisher relation (TFR). More importantly, we also provide the volume-corrected distribution of galaxies in the M_r - v_circ plane, which can be compared with cosmological simulations. The joint distribution of the luminosity and circular velocity space densities, representative over the range -20 > M_r > -22 mag, can place more stringent constraints on galaxy formation and evolution scenarios than linear TFR fit parameters or the luminosity function alone.
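As an illustration of the final fitting step only, a volume-weighted linear TFR fit could look like the following sketch (all values hypothetical; the actual analysis additionally models the two-population mixture and the selection function):

```python
import numpy as np

# Hypothetical r-band absolute magnitudes, stellar-kinematics circular
# velocities (km/s), and per-galaxy volume/large-scale-structure weights.
M_r    = np.array([-20.3, -21.1, -21.8, -20.7, -22.0])
v_circ = np.array([160.0, 210.0, 260.0, 185.0, 285.0])
w      = np.array([1.4, 1.1, 1.0, 1.3, 1.0])

# Weighted linear Tully-Fisher fit: M_r = a + b * log10(v_circ).
# np.polyfit weights multiply the residuals, hence the square root
# of the statistical weights is passed.
b, a = np.polyfit(np.log10(v_circ), M_r, 1, w=np.sqrt(w))
print(f"TFR: M_r = {a:.2f} + {b:.2f} * log10(v_circ)")
```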
In Chapter 3, we measure one of the marginal distributions of the M_r - v_circ distribution: the circular velocity function of rotating galaxies. The velocity function is a fundamental observable statistic of the galaxy population, of similar importance to the luminosity function but much more difficult to measure. We present the first directly measured circular velocity function that is representative between 60 < v_circ < 320 km s^-1 for galaxies of all morphological types at a given rotation velocity. For the low-mass galaxy population (60 < v_circ < 170 km s^-1) we use the HIPASS velocity function, and for the massive galaxy population (170 < v_circ < 320 km s^-1) we use stellar circular velocities from CALIFA. The CALIFA velocity function includes homogeneous velocity measurements of both late- and early-type rotation-supported galaxies and has the crucial advantage of not missing the gas-poor massive ellipticals that HI surveys are blind to. We show that both velocity functions can be combined in a seamless manner, as their ranges of validity overlap. The resulting observed velocity function is compared to velocity functions derived from cosmological simulations of the z = 0 galaxy population. We find that dark-matter-only simulations show a strong mismatch with the observed velocity function, while hydrodynamic Illustris simulations fare better but still do not fully reproduce the observations.
In Chapter 4, we present further work done during the PhD studies, namely a method that improves the precision of specific angular momentum measurements by combining simultaneous Markov Chain Monte Carlo modelling of ionised gas 2D velocity fields and HI linewidths. To test the method we use a sample of 25 galaxies from the Sydney-AAO Multi-object Integral field (SAMI) survey that have matching ALFALFA HI linewidths. The method constrains the rotation curve both in the inner regions of a galaxy and in its outskirts, leading to increased precision of the angular momentum measurements. It could be used to further constrain the observed relation between galaxy mass, specific angular momentum and morphology (Obreschkow & Glazebrook 2014).
Mathematical and computational methods are presented in the appendices.
Direct anthropogenic influences on the Earth's subsurface during drilling, extraction or injection activities can affect land stability by causing subsidence, uplift or lateral displacements. These effects can be highly localized and occur in uninhabited as well as inhabited regions. The associated risks for humans, infrastructure and the environment must therefore be minimized. To achieve this, appropriate surveillance methods must be found that can be used for simultaneous monitoring during such activities. Multi-temporal synthetic aperture radar interferometry (MT-InSAR) methods like Persistent Scatterer Interferometry (PSI) and Small BAseline Subsets (SBAS) have been developed as standard approaches for satellite-based surface displacement monitoring. With the increasing spatial resolution and availability of SAR sensors in recent years, MT-InSAR can be valuable for the detection and mapping of even the smallest man-made displacements.
This doctoral thesis aims at investigating the capacities of the mentioned standard methods for this purpose, and comprises three main objectives against the backdrop of a user-friendly surveillance service:
(1) the spatial and temporal significance assessment against leveling, (2) the suitability evaluation of PSI and SBAS under different conditions, and (3) the analysis of the link between surface motion and subsurface processes.
Two prominent case studies on anthropogenically induced subsurface processes in Germany serve as the basis for this goal. The first is the distinct urban uplift with severe damages at Staufen im Breisgau, which has been associated since 2007 with a failed implementation of a shallow geothermal energy supply for an individual building. The second case study considers the pilot project on geological carbon dioxide (CO2) storage at Ketzin, comprising borehole drilling and the injection of more than 67 kt of CO2 between 2008 and 2013. Leveling surveys at Staufen and comprehensive background knowledge of the underground processes, gained from different kinds of in-situ measurements at both locations, deliver a suitable basis for this comparative study and the above-stated objectives. The differences in setting, i.e. urban versus rural site character, allowed investigating the limits of applicability of PSI and SBAS.
For the MT-InSAR analysis, X-band images from the German TerraSAR-X and TanDEM-X satellites were acquired in the standard Stripmap mode with about 3 m spatial resolution in azimuth and range. Data acquisition spanned five years for Staufen (2008-2013) and four years for Ketzin (2009-2013). For a first approximation of the subsurface source at Staufen, an inversion of the InSAR results was applied. At Ketzin, modeled uplift based on complex hydromechanical simulations and a correlation analysis with bottomhole pressure data were used for comparison with the MT-InSAR measurements.
In response to the defined objectives of this thesis, a higher level of detail can be achieved in mapping surface displacements by using MT-InSAR instead of leveling, and without in-situ effort (1). A clear delineation of the elliptically shaped uplift border and its magnitudes at different parts was possible at Staufen, with the exception of a vegetated area in the northwest. Vegetation coverage and the associated temporal signal decorrelation are the main limitations of MT-InSAR, as clearly demonstrated at the Ketzin test site; they result in insufficient measurement point density and unwrapping issues. Therefore, spatial resolutions of one meter or better are recommended to achieve an adequate point density for local displacement analysis and to apply signal noise reduction. Leveling measurements can provide a complementary data source here, but require considerable personnel effort even at the local scale. Horizontal motions at Staufen could be identified only by comparing the temporal evolution of the 1D line-of-sight (LOS) InSAR measurements with the available leveling data. An exception was the independent LOS decomposition using ascending and descending data sets for the period 2012-2013. A full 3D representation of the displacement field failed due to the insufficient, orbit-related north-south sensitivity of the satellite-based measurements. By using the dense temporal mapping capabilities of the TerraSAR-X/TanDEM-X satellites, with acquisitions every 11 days, the temporal displacement evolution could be captured as well as with leveling.
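The LOS decomposition rests on the fact that, neglecting north-south motion (to which near-polar SAR orbits are largely insensitive), ascending and descending acquisitions form a two-equation system in the vertical (d_U) and east-west (d_E) components; in simplified form, with incidence angles θ_asc and θ_dsc (sign conventions depend on the look direction):

```latex
d_{\mathrm{LOS}}^{\mathrm{asc}} = d_{U}\cos\theta_{\mathrm{asc}} - d_{E}\sin\theta_{\mathrm{asc}},
\qquad
d_{\mathrm{LOS}}^{\mathrm{dsc}} = d_{U}\cos\theta_{\mathrm{dsc}} + d_{E}\sin\theta_{\mathrm{dsc}}
```

Solving this 2x2 system per measurement point yields d_U and d_E, while the north component remains underdetermined, which explains the failure of the full 3D representation noted above.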
With respect to the tested methods, and in the interest of generality, SBAS should be preferred over PSI (2). SBAS delivered a higher point density and was therefore less affected by phase unwrapping issues in both case studies. Linking surface motions with subsurface processes is possible when considering simplified geophysical models (3), but it still requires intensive research to gain a deep understanding.
The aim of this thesis is to develop approaches to automatically recognise the structure of argumentation in short monological texts. This amounts to identifying the central claim of the text, supporting premises, possible objections, and counter-objections to these objections, and connecting them into a structure that adequately describes the argumentation presented in the text.
The first step towards such an automatic analysis of the structure of argumentation is to know how to represent it. We systematically review the literature on theories of discourse, as well as on theories of the structure of argumentation, against a set of requirements and desiderata, and identify the theory of J. B. Freeman (1991, 2011) as a suitable candidate for representing argumentation structure. Based on this, a scheme is derived that is able to represent complex argumentative structures and can cope with various segmentation issues typically occurring in authentic text.
In order to empirically test our scheme for reliability of annotation, we conduct several annotation experiments, the most important of which assesses the agreement in reconstructing argumentation structure. The results show that expert annotators produce very reliable annotations, while the results of non-expert annotators highly depend on their training in and commitment to the task.
We then introduce the 'microtext' corpus, a collection of short argumentative texts. We report on the creation, translation, and annotation of it and provide a variety of statistics. It is the first parallel corpus (with a German and English version) annotated with argumentation structure, and -- thanks to the work of our colleagues -- also the first annotated according to multiple theories of (global) discourse structure.
The corpus is then used to develop and evaluate approaches to automatically predict argumentation structures in a series of six studies: The first two of them focus on learning local models for different aspects of argumentation structure. In the third study, we develop the main approach proposed in this thesis for predicting globally optimal argumentation structures: the 'evidence graph' model. This model is then systematically compared to other approaches in the fourth study, and achieves state-of-the-art results on the microtext corpus. The remaining two studies aim to demonstrate the versatility and elegance of the proposed approach by predicting argumentation structures of different granularity from text, and finally by using it to translate rhetorical structure representations into argumentation structures.
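One common way to obtain a globally optimal tree from local scores, shown here as a hedged sketch (the evidence graph model of the thesis is more elaborate): treat segments as nodes, local model scores as edge weights, and decode a maximum spanning arborescence (Chu-Liu/Edmonds).

```python
import networkx as nx

# Hypothetical local scores p(i -> j): how strongly segment i is judged
# to support (or attack) segment j; segment 0 is a candidate central claim.
scores = {(1, 0): 0.8, (2, 0): 0.3, (2, 1): 0.6, (3, 1): 0.7, (3, 0): 0.2}

# Each non-central segment points at exactly one target, so the result is
# a tree. Decode by reversing the edges (parent = target) and finding a
# maximum spanning arborescence.
G = nx.DiGraph()
for (src, dst), p in scores.items():
    G.add_edge(dst, src, weight=p)

arbo = nx.maximum_spanning_arborescence(G)
structure = [(child, parent) for parent, child in arbo.edges()]
print(sorted(structure))  # [(1, 0), (2, 1), (3, 1)]
```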
Borehole instabilities are frequently encountered when drilling through finely laminated, organic rich shales (Økland and Cook, 1998; Ottesen, 2010; etc.); such instabilities should be avoided to assure a successful exploitation and safe production of the contained unconventional hydrocarbons. Borehole instabilities, such as borehole breakouts or drilling induced tensile fractures, may lead to poor cementing of the borehole annulus, difficulties with recording and interpretation of geophysical logs, low directional control and in the worst case the loss of the well. If these problems are not recognized and expertly remedied, pollution of the groundwater or the emission of gases into the atmosphere can occur since the migration paths of the hydrocarbons in the subsurface are not yet fully understood (e.g., Davies et al., 2014; Zoback et al., 2010). In addition, it is often mentioned that the drilling problems encountered and the resulting downtimes of the wellbore system in finely laminated shales significantly increase drilling costs (Fjaer et al., 2008; Aadnoy and Ong, 2003).
In order to understand and reduce the borehole instabilities during drilling in unconventional shales, we investigate stress-induced irregular extensions of the borehole diameter, which are also referred to as borehole breakouts. For this purpose, experiments with different borehole diameters, bedding plane angles and stress boundary conditions were performed on finely laminated Posidonia shales. The Lower Jurassic Posidonia shale is one of the most productive source rocks for conventional reservoirs in Europe and has the greatest potential for unconventional oil and gas in Europe (Littke et al., 2011).
In this work, Posidonia shale specimens from the North (PN) and South (PS) German basins were selected and characterized petrophysically and mechanically. The composition of the two shales is dominated by calcite (47-56%), followed by clays (23-28%) and quartz (16-17%); the remaining components are mainly pyrite and organic matter. The porosity of the shales varies considerably, reaching up to 10% for PS and 1% for PN, owing to the greater burial depth of PN. Both shales show marked elasticity and strength anisotropy, which can be attributed to the macroscopic distribution and orientation of soft and hard minerals. Under load the hard minerals form a load-bearing, supporting structure, while the soft minerals accommodate the deformation. Therefore, when loaded parallel to the bedding, Posidonia shale is more brittle than when loaded normal to the bedding. The resulting elastic anisotropy, which can be defined by the ratio of the moduli of elasticity parallel and normal to the bedding, is about 50%, while the strength anisotropy (i.e., the ratio of uniaxial compressive strength normal and parallel to the bedding) is up to 66%. Based on the petrophysical characterization of the two rocks, a transversely isotropic (TVI) material model was derived. In general, PS is softer and weaker than PN, owing to the stronger compaction of PN at its higher burial depth.
Conventional triaxial borehole breakout experiments on specimens with different borehole diameters showed that, as the borehole diameter increases, the stress required to initiate a breakout decreases towards a constant value. This value can be expressed as the ratio of the tangential stress to the uniaxial compressive strength of the rock. The ratio increases exponentially with decreasing borehole diameter, from about 2.5 for a 10 mm hole to ~7 for a 1 mm hole (an increase of the initiation stress by 280%), and can be described by a fracture-mechanics-based criterion. Reducing the borehole diameter is therefore a considerable means of reducing the risk of breakouts. New drilling techniques with significantly reduced borehole diameters, such as "fish-bone" holes, are already being developed and tested (e.g., Xing et al., 2012).
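For orientation, the classical Kirsch solution gives the maximum tangential stress at the wall of a circular borehole, and breakout initiation can then be written as a size-dependent threshold on its ratio to the uniaxial compressive strength; the explicit form of k(a) below is an illustrative assumption, whereas the thesis derives the size dependence from fracture mechanics:

```latex
\sigma_{\theta,\max} = 3\,\sigma_{1} - \sigma_{3},
\qquad
\text{breakout initiation if } \frac{\sigma_{\theta,\max}}{\mathrm{UCS}} \ge k(a),
\qquad
k(a) \approx k_{\infty} + \beta\, e^{-a/a_{0}}
```

with k(a) decreasing from about 7 for a 1 mm borehole to a constant value of about 2.5 for a 10 mm borehole, as quoted above.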
The observed strength anisotropy and the TVI material behavior are also reflected in the observed breakout processes at the borehole wall. Boreholes normal to the bedding develop breakouts in a plane of isotropy and are not affected by the strength or elasticity anisotropy. The observed breakouts are point-symmetric and form compressive shear failure planes, which can be predicted by a Mohr-Coulomb failure approach. Where the shear failure planes intersect, the conjugate breakouts can be described as "dog-eared" breakouts.
While the initiation of breakouts in wells oriented normal to the stratification was triggered by random local defects, reduced strengths parallel to the bedding planes are the starting point for breakouts in wells parallel to the bedding. For a deflected borehole trajectory, the observed failure type therefore changes from shear-induced failure surfaces to buckling failure of individual layer packages. In addition, the breakout depths and widths increased, resulting in a stress-induced enlargement of the borehole cross-section and an increased output of rock material into the borehole. With the transition from shear to buckling failure and a changing bedding plane angle with respect to the borehole axis, the stress required to induce wellbore breakouts drops by 65%.
These observations under conventional triaxial stress boundary conditions were also confirmed under true triaxial stress conditions, where breakouts likewise grew into the rock as a result of buckling failure. In this process, the broken layer packages rotate into the pressure-free borehole and detach from the surrounding rock by tensile cracking. The final breakout shape in Posidonia shale can be described as trapezoidal when the bedding planes are parallel to the greatest horizontal stress and to the borehole axis. In the event that the largest horizontal stress is normal to the stratification, breakouts were formed entirely by shear fractures between the bedding planes and required higher stresses to initiate, similar to breakouts in conventional triaxial experiments with boreholes oriented normal to the bedding.
In the context of this work, a fracture-mechanics-based failure criterion for conventional triaxial loading conditions in isotropic rocks (Dresen et al., 2010) has been successfully extended to true triaxial loading conditions in transversely isotropic rock to predict the initiation of borehole breakouts. The criterion was successfully verified against the experiments carried out.
The extended failure criterion and the conclusions from the laboratory and numerical work may help to reduce the risk of borehole breakouts in unconventional shales.
The motivation of this work was to investigate the self-assembly of a block copolymer species that has attracted little attention before: double hydrophilic block copolymers (DHBCs). DHBCs consist of two linear hydrophilic polymer blocks. The self-assembly of DHBCs into suprastructures such as particles and vesicles is driven by a strong difference in hydrophilicity between the corresponding blocks, leading to a microphase separation due to immiscibility. The benefits of DHBCs and the corresponding particles and vesicles, such as biocompatibility, high permeability towards water and hydrophilic compounds, and the large number of possible functionalizations of the block copolymers, make DHBC-based structures a viable choice in biomedicine. In order to establish a route towards self-assembled structures from DHBCs with the potential to act as cargo carriers for future applications, several block copolymers containing two hydrophilic polymer blocks were synthesized. Poly(ethylene oxide)-b-poly(N-vinylpyrrolidone) (PEO-b-PVP) and poly(ethylene oxide)-b-poly(N-vinylpyrrolidone-co-N-vinylimidazole) (PEO-b-P(VP-co-VIm)) block copolymers were synthesized via reversible deactivation radical polymerization (RDRP) techniques starting from a PEO macro chain transfer agent. The block copolymers displayed a concentration-dependent self-assembly behavior in water, as determined via dynamic light scattering (DLS). Spherical particles were observed via laser scanning confocal microscopy (LSCM) and cryogenic scanning electron microscopy (cryo SEM) in highly concentrated solutions of PEO-b-PVP. Furthermore, a crosslinking strategy for PEO-b-P(VP-co-VIm) was developed, applying the diiodo-derived crosslinker diethylene glycol bis(2-iodoethyl) ether to form quaternary amines at the VIm units. The crosslinked structures proved stable upon dilution and transfer into organic solvents. Moreover, self-assembly and crosslinking in DMF proved to be more advantageous, and the crosslinked structures could be successfully transferred into aqueous solution. The resulting spherical submicron particles could be visualized via LSCM, cryo SEM and cryo TEM.
Double hydrophilic pullulan-b-poly(acrylamide) block copolymers were synthesized via copper-catalyzed alkyne-azide cycloaddition (CuAAC) starting from a suitable pullulan alkyne and azide-functionalized poly(N,N-dimethylacrylamide) (PDMA) and poly(N-ethylacrylamide) (PEA) homopolymers. The conjugation reaction was confirmed via SEC and 1H-NMR measurements. The self-assembly of the block copolymers was monitored with DLS and static light scattering (SLS) measurements, indicating the presence of hollow spherical structures. Cryo SEM measurements confirmed the presence of vesicular structures for the Pull-b-PEA block copolymers, while solutions of Pull-b-PDMA displayed particles in cryo SEM. Moreover, end-group functionalization of Pull-b-PDMA with rhodamine B allowed assessing the structure via LSCM; hollow spherical structures were observed, indicating the presence of vesicles here, too.
An exemplary pathway towards a DHBC-based drug delivery vehicle was demonstrated with the block copolymer Pull-b-PVP. The block copolymer was synthesized via RAFT/MADIX techniques starting from a pullulan chain transfer agent. Pull-b-PVP displayed a concentration-dependent self-assembly in water, observed via DLS, with an efficiency superior to that of the PEO-b-PVP system. Cryo SEM and LSCM microscopy revealed the presence of spherical structures. In order to apply a reversible crosslinking strategy to the synthesized block copolymer, the pullulan block was selectively oxidized to dialdehydes with NaIO4. The oxidation of the block copolymer was confirmed via SEC and 1H-NMR measurements. The self-assembled and oxidized structures were subsequently crosslinked with cystamine dihydrochloride, a pH- and redox-responsive crosslinker, resulting in crosslinked vesicles that were observed via cryo SEM. The vesicular structures of crosslinked Pull-b-PVP could be disassembled by acid treatment or by application of the reducing agent tris(2-carboxyethyl)phosphine hydrochloride. The successful disassembly was monitored with DLS measurements.
To conclude, self-assembled structures from DHBCs such as particles and vesicles display a strong potential to make an impact in biomedicine and nanotechnology. The variety of DHBC compositions and functionalities is a very promising feature for future applications.
Functional nanoporous carbon-based materials derived from oxocarbon-metal coordination complexes
(2017)
Nanoporous carbon-based materials are of particular interest for both science and industry due to their exceptional properties, such as a large surface area, high pore volume, high electrical conductivity, and high chemical and thermal stability. Benefiting from these advantageous properties, nanoporous carbons have proved useful in various energy- and environment-related applications, including energy storage and conversion, catalysis, gas sorption and separation technologies. The synthesis of nanoporous carbons classically involves the thermal carbonization of carbon precursors (e.g. phenolic resins, polyacrylonitrile, poly(vinyl alcohol), etc.) followed by an activation step, and/or it makes use of classical hard or soft templates to obtain well-defined porous structures. However, these synthesis strategies are complicated and costly and make use of hazardous chemicals, hindering their application in large-scale production. Furthermore, control over the properties of the carbon materials is challenging owing to the relatively unpredictable processes at the high carbonization temperatures.
In the present thesis, nanoporous carbon-based materials are prepared by the direct heat treatment of crystalline precursor materials with pre-defined properties. This synthesis strategy requires neither additional carbon sources nor classical hard or soft templates. The highly stable and porous crystalline precursors are based on coordination compounds of the squarate and croconate ions with various divalent metal ions, including Zn2+, Cu2+, Ni2+, and Co2+. The structural properties of the crystals can be controlled by the choice of appropriate synthesis conditions, such as the crystal aging temperature, the ligand/metal molar ratio, the metal ion, and the organic ligand system. In this context, the coordination of squarate ions to Zn2+ yields porous 3D cubic crystalline particles. The morphology of the cubes can be tuned from densely packed cubes with a smooth surface to cubes with intriguing micrometer-sized openings and voids, which evolve at the centers of the low-index faces as the crystal aging temperature is raised. By varying the molar ratio, the particle shape can be changed from truncated cubes to perfect cubes with right-angled edges.
These crystalline precursors can be easily transformed into the respective carbon based materials by heat treatment at elevated temperatures in a nitrogen atmosphere followed by a facile washing step. The resulting carbons are obtained in good yields and possess a hierarchical pore structure with well-organized and interconnected micro-, meso- and macropores. Moreover, high surface areas and large pore volumes of up to 1957 m2 g-1 and 2.31 cm3 g-1 are achieved, respectively, whereby the macroscopic structure of the precursors is preserved throughout the whole synthesis procedure.
Owing to these advantageous properties, the resulting carbon based materials represent promising supercapacitor electrode materials for energy storage applications. This is exemplarily demonstrated by employing the 3D hierarchical porous carbon cubes derived from squarate-zinc coordination compounds as electrode material showing a specific capacitance of 133 F g-1 in H2SO4 at a scan rate of 5 mV s-1 and retaining 67% of this specific capacitance when the scan rate is increased to 200 mV s-1.
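The specific capacitance quoted above is typically evaluated from the cyclic voltammogram via the standard relation (I: current, ν: scan rate, m: electrode mass, ΔV: potential window; the factor 2 accounts for the full charge/discharge cycle):

```latex
C_{\mathrm{sp}} = \frac{\int I\,\mathrm{d}V}{2\, m\, \nu\, \Delta V}
```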
In a further application, the porous carbon cubes derived from squarate-zinc coordination compounds are used as a high-surface-area support material and decorated with nickel nanoparticles via incipient wetness impregnation. The resulting composite material combines a high surface area with a hierarchical pore structure of high functionality and well-accessible pores. Moreover, owing to their regular micro-cube shape, the particles allow for good packing of a fixed-bed flow reactor, along with high column efficiency and a minimized pressure drop throughout the packed reactor. Therefore, the composite is employed as a heterogeneous catalyst in the selective hydrogenation of 5-hydroxymethylfurfural to 2,5-dimethylfuran, showing good catalytic performance and overcoming the common problem of column blocking.
With regard to the rational design of 3D carbon geometries, the functions and properties of the resulting carbon-based materials can be further expanded by the targeted introduction of heteroatoms (e.g. N, B, S, P) into the carbon structures in order to alter properties such as wettability, surface polarity and the electrochemical landscape. In this context, the use of crystalline materials based on oxocarbon-metal ion complexes can open up a platform of highly functional materials for all applications that involve surface processes.
Development of a reliable and environmentally friendly synthesis for fluorescent carbon nanodots
(2017)
Carbon nanodots (CNDs) have generated considerable attention due to their promising properties, e.g. high water solubility, chemical inertness, resistance to photobleaching, high biocompatibility and ease of functionalization. These properties render them ideal for a wide range of functions, e.g. electrochemical applications, waste water treatment, (photo)catalysis, bio-imaging and bio-technology, as well as chemical sensing and optoelectronic devices like LEDs. In particular, the ability to prepare CNDs from a wide range of accessible organic materials makes them a potential alternative to conventional organic dyes and semiconductor quantum dots (QDs) in various applications. However, current synthesis methods are typically expensive and depend on complex and time-consuming processes or on severe synthesis conditions and toxic chemicals. One way to reduce overall preparation costs is the use of biological waste as starting material. Hence, natural carbon sources such as pomelo peel, egg white and egg yolk, orange juice, and even eggshells, to name a few, have been used for the preparation of CNDs. While the use of waste is desirable, especially to avoid competition with essential food production, most starting materials lack the essential purity and structural homogeneity to obtain homogeneous carbon dots. Furthermore, most synthesis approaches reported to date require extensive purification steps and have resulted in carbon dots with heterogeneous photoluminescent properties and indefinite composition. For this reason, among others, the relationship between CND structure (e.g. size, edge shape, functional groups and overall composition) and photophysical properties is not yet fully understood. This is particularly true for carbon dots displaying selective luminescence (one of their most intriguing properties), i.e. a PL emission wavelength that can be tuned by varying the excitation wavelength.
In this work, a new reliable, economic, and environmentally friendly one-step synthesis is established to obtain CNDs with well-defined and reproducible photoluminescence (PL) properties via the microwave-assisted hydrothermal treatment of starch and carboxylic acids as carbon sources and Tris-EDTA (TE) buffer as nitrogen source. The presented microwave-assisted hydrothermal precursor carbonization (MW-hPC) is characterized by its cost-efficiency, simplicity, short reaction times, low environmental footprint, and high yields of approx. 80% (w/w). Furthermore, only a single synthesis step is necessary to obtain homogeneous, water-soluble CNDs with no need for further purification.
Depending on the starting materials and reaction conditions, different types of CNDs have been prepared. The as-prepared CNDs exhibit reproducible, highly homogeneous and favourable PL properties with narrow emission bands (approx. 70 nm FWHM), are non-blinking, and are ready to use without need for further purification, modification or surface passivation agents. Furthermore, the CNDs are comparatively small (approx. 2.0 nm to 2.4 nm) with narrow size distributions; are stable over a long period of time (at least one year), either in solution or as a dried solid; and maintain their PL properties when re-dispersed in solution. Depending on the CND type, the PL quantum yield (PLQY) can be adjusted from as low as 1% to as high as 90%, one of the highest PLQY values reported for CNDs so far.
An essential part of this work was the utilization of a microwave synthesis reactor, allowing various batch sizes and precise control over reaction temperature and time, pressure, and heating and cooling rates, while also being safe to operate under elevated reaction conditions (e.g. 230 °C and 30 bar). The high sample throughput achieved hereby allowed, for the first time, the thorough investigation of a wide range of synthesis parameters, providing valuable insight into CND formation. The influence of the carbon and nitrogen source, precursor concentration and combination, reaction time and temperature, batch size, and post-synthesis purification steps was carefully investigated with regard to the optical properties of the as-synthesized CNDs. In addition, the change in photophysical properties resulting from the conversion of CND solution into a solid and back into solution was investigated. Remarkably, upon freeze-drying, the initially brown CND solution turns into a non-fluorescent white/slightly yellow to brown solid, which recovers its PL in aqueous solution. Selected CND samples were also subjected to EDX, FTIR, NMR, PL lifetime (TCSPC), particle size (TEM), TGA and XRD analysis. Besides structural characterization, the pH- and excitation-dependent PL characteristics (i.e. selective luminescence) were examined, giving insight into the origin of the photophysical properties and the excitation-dependent behaviour of CNDs. The obtained results support the notion that the nature of the surface states determines the PL properties of CNDs and that the excitation-dependent behaviour is caused by the "Giant Red-Edge Excitation Shift" (GREES).
Nanolenses are linear chains of differently-sized metal nanoparticles, which can theoretically provide extremely high field enhancements. The complex structure renders their synthesis challenging and has hampered closer analyses so far. Here, the technique of DNA origami was used to self-assemble DNA-coated 10 nm, 20 nm, and 60 nm gold or silver nanoparticles into gold or silver nanolenses. Three different geometrical arrangements of gold nanolenses were assembled, and for each of the three, sets of single gold nanolenses were investigated in detail by atomic force microscopy, scanning electron microscopy, dark-field scattering and Raman spectroscopy. The surface-enhanced Raman scattering (SERS) capabilities of the single nanolenses were assessed by labelling the 10 nm gold nanoparticle selectively with dye molecules. The experimental data was complemented by finite-difference time-domain simulations. For those gold nanolenses which showed the strongest field enhancement, SERS signals from the two different internal gaps were compared by selectively placing probe dyes on the 20 nm or 60 nm gold particles. The highest enhancement was found for the gap between the 20 nm and 10 nm nanoparticle, which is indicative of a cascaded field enhancement. The protein streptavidin was labelled with alkyne groups and served as a biological model analyte, bound between the 20 nm and 10 nm particle of silver nanolenses. Thereby, a SERS signal from a single streptavidin could be detected. Background peaks observed in SERS measurements on single silver nanolenses could be attributed to amorphous carbon. It was shown that the amorphous carbon is generated in situ.
The valorization of carbohydrates is one of the most promising fields in green chemistry, as it enables the production of bulk chemicals and fuels from renewable and abundant resources instead of further exploiting fossil feedstocks. The focus of this thesis is the conversion of fructose using dehydration and hydrodeoxygenation reactions. The main goal is to find a simple continuous process, covering the solubility of the sugar in a green solvent and its conversion over a solid acid as well as over a metal@tungsten carbide catalyst.
At the beginning of this thesis, solid acid catalysts are synthesized from carbohydrate materials like glucose and starch at high temperatures (up to 600 °C). Additionally, a third carbon is synthesized using an activation method based on Ca(OH)2. After carbonization and subsequent sulfonation using fuming sulfuric acid, the three resulting catalysts are characterized together with sulfonated carbon black and Amberlyst 15 as references. To test all solid acid catalysts in reaction, a 250 mm x 4.6 mm stainless steel column is used as a fixed-bed continuous reactor. The temperature (110 °C to 250 °C) and residence time (2 to 30 minutes) are varied, and a direct relationship between contact time and selectivity is determined. The reaction mechanism, as well as the product distribution, shows a dehydration step of fructose towards 5-hydroxymethylfurfural (HMF). These furan-ring molecules are considered "sleeping giants", as they can be used as fuel but also upgraded to chemicals like terephthalic acid or p-xylene. Consecutive reactions produce levulinic acid, as well as condensation products with ethanol and formic acid. The activated carbon additionally shows a 2% yield of 2,5-dimethylfuran (DMF), pointing towards the extraordinary properties of this catalyst. Although a metal catalyst is normally necessary for hydrogenation reactions, a transfer hydrogenation (with formic acid) is observed without one. The active catalyst was therefore the carbon itself, which activated the hydrogen on its surface; this phenomenon has only very rarely been observed so far. Expensive noble metals are nowadays the materials of choice for hydrogenation reactions, and cheaper alternatives are necessary.
Since Levy and Boudart postulated that tungsten carbide (WC) has an electronic structure similar to that of platinum, research has focused on the replacement of Pt. Nano-sized tungsten carbide particles (7.5 ± 2.5 nm, 70 m² g⁻¹) are produced by the so-called “urea glass route”, and their catalytic performance is compared to that of commercial material. It is shown that the activity strongly depends on the particle size as well as on the surface area. Nano-sized tungsten carbide shows activity for hydrogenation reactions under mild conditions (maximum 150 °C, 30 bar). This material therefore opens up new possibilities for replacing the rare and expensive platinum with tungsten carbide-based catalysts.
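For comparison, the specific surface area expected for dense spheres of the quoted size can be estimated geometrically (the bulk density of WC, ρ ≈ 15.6 g cm⁻³, is textbook data, not from the abstract):

S \approx \frac{6}{\rho\, d} = \frac{6}{15.6\ \mathrm{g\,cm^{-3}} \times 7.5\ \mathrm{nm}} \approx 51\ \mathrm{m^2\,g^{-1}},

which is of the same order as the measured 70 m² g⁻¹; the excess is plausibly due to the smaller particles within the 7.5 ± 2.5 nm distribution and to surface roughness.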
Additionally, different metal nanoparticles of palladium, copper and nickel are deposited on top of WC to further promote its reactivity. The nickel nanoparticles are strongly bound to the WC and show the best activity as well as selectivity for upgrading HMF by hydrodeoxygenation. The Ni@WC catalyst does not leach and shows very good hydrodeoxygenation performance, with DMF yields of up to 90 %. Cu@WC shows poor activity, and Pd@WC enables undesired consecutive reactions that hydrogenate the furan ring system.
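The selectivity difference can be expressed in terms of the competing hydrogenation steps (standard chemistry; the ring-hydrogenation product is a plausible example, not explicitly named in the abstract):

\mathrm{HMF + 3\,H_2 \longrightarrow DMF + 2\,H_2O} \quad \text{(desired hydrodeoxygenation, favoured on Ni@WC)}
\mathrm{DMF + 2\,H_2 \longrightarrow C_6H_{12}O\ (2{,}5\text{-dimethyltetrahydrofuran})} \quad \text{(undesired ring hydrogenation, promoted on Pd@WC)}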
In order to enable the upgrading of fructose to DMF directly in a continuous system, the existing H-Cube Pro™ hydrogenation system is customized with a second reaction column: a 250 mm × 4.6 mm stainless steel reactor column is connected upstream of the hydrogen inlet, enabling the dehydration of fructose to HMF derivatives before these products are pumped into the second column for hydrogenation. The overall residence time in the two-column reactor system is 14 minutes. The overall result is an almost full conversion with a yield of 38.5 % DMF and 47 % ethyl levulinate (EL). The main disadvantage is the formation of higher-mass products, so-called humins, which deposit on top of the catalysts and block their active sites.
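Assuming the yields are reported on a molar fructose basis (an assumption; the basis is not stated in the abstract), the carbon balance indicates the magnitude of the side-product losses:

Y_\mathrm{DMF} + Y_\mathrm{EL} = 38.5\,\% + 47\,\% = 85.5\,\%,

leaving up to roughly 15 % of the converted fructose for side products such as the humins mentioned above.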
In general, a two-column system entails higher investment and maintenance costs than a one-column catalytic approach. The last part of the thesis therefore aims at developing a catalyst that is able to both dehydrate and hydrodeoxygenate the reactants. The activated carbon already shows hydrodeoxygenation activity without any metal present and therefore offers an alternative that overcomes the temperature instability of Amberlyst 15 (max. 120 °C) for a combined DMF production directly from fructose. In a single mixed continuous column, the yield of the upgrade to DMF is increased from 2 % to 12 %.
In order to scale up the entire one-column approach, a column of 800 mm length and 28.5 mm inner diameter was designed and manufactured. The assembled flow reactor system can be run at a maximum flow rate of 50 mL min⁻¹, withstands pressures of up to 100 bar and can be heated to around 500 °C. The tubing, connections and devices were planned to be safe and easy to use. The scaled-up approach offers a reaction column about 120 times larger (510 mL) than the first extension of the commercial system. A further extension offers the possibility of operating at flow rates between 1 and 1000 mL min⁻¹, making the approach usable in pilot-plant applications.
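The quoted scale-up factor follows directly from the column geometries:

V = \pi r^2 L: \quad V_\mathrm{large} = \pi\,(14.25\ \mathrm{mm})^2 \times 800\ \mathrm{mm} \approx 510\ \mathrm{mL}, \qquad V_\mathrm{small} = \pi\,(2.3\ \mathrm{mm})^2 \times 250\ \mathrm{mm} \approx 4.2\ \mathrm{mL},

giving a volume ratio of about 123, in line with the stated factor of 120.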
Nowadays, the need to protect the environment is more urgent than ever. In the field of chemistry, this translates into practices such as waste prevention, the use of renewable feedstocks, and catalysis, concepts based on the principles of green chemistry. Polymers are an important product of the chemical industry and are also a focus of these changes. In this thesis, more sustainable approaches to the synthesis of two classes of polymers, polypeptoids and polyesters, are described.
Polypeptoids, or poly(N-alkyl glycines), are isomers of polypeptides and are biocompatible as well as degradable under biologically relevant conditions. In addition, they can exhibit interesting properties such as lower critical solution temperature (LCST) behavior. They are usually synthesized by the ring-opening polymerization (ROP) of N-carboxyanhydrides (NCAs), which are produced with the use of toxic compounds (e.g. phosgene) and which are highly sensitive to humidity. In order to avoid the direct synthesis and isolation of the NCAs, N-phenoxycarbonyl-protected N-substituted glycines are prepared, which can yield the NCAs in situ. The conditions for the NCA synthesis and its direct polymerization are investigated and optimized for the simplest N-substituted glycine, sarcosine. The use of a tertiary amine in sub-stoichiometric amounts relative to the N-phenoxycarbonyl-sarcosine seems to drastically accelerate the NCA formation and does not affect the efficiency of the polymerization. In fact, well-defined polysarcosines whose chain lengths comply with the monomer-to-initiator ratio can be produced by this method. This approach was also applied to other N-substituted glycines.
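Compliance with the monomer-to-initiator ratio refers to the standard living-polymerization relation (a generic sketch; the repeat-unit mass of sarcosine, 71.08 g mol⁻¹, is textbook data, not quoted in the abstract):

DP_n \approx \frac{[\mathrm{M}]_0}{[\mathrm{I}]_0} \times \text{conversion}, \qquad M_n \approx DP_n \times 71.08\ \mathrm{g\,mol^{-1}} + M_\mathrm{end\ groups},

so that, at full conversion, the degree of polymerization and the molar mass of the polysarcosine are set directly by the initial feed ratio.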
Dihydroxyacetone is a sustainable diol produced from glycerol and has already been used for the synthesis of polycarbonates. Here, it was used as a comonomer for the synthesis of polyesters. However, the polymerization of dihydroxyacetone presented difficulties, probably due to the insolubility of the macromolecular chains. To circumvent this problem, the dimethyl acetal-protected dihydroxyacetone was polymerized with terephthaloyl chloride to yield a soluble polymer. When the carbonyl group was recovered after deprotection, the product was insoluble in all solvents, showing that the carbonyl in the main chain hinders the dissolution of the polymers. The solubility issue can be avoided when a 1:1 mixture of dihydroxyacetone and ethylene glycol is used, yielding a soluble copolyester.