RESTful choreographies
(2019)
Business process management has become a key instrument to organize work as many companies represent their operations in business process models. Recently, business process choreography diagrams have been introduced as part of the Business Process Model and Notation standard to represent interactions between business processes, run by different partners. When it comes to the interactions between services on the Web, Representational State Transfer (REST) is one of the primary architectural styles employed by web services today. Ideally, the RESTful interactions between participants should implement the interactions defined at the business choreography level.
The problem, however, is the conceptual gap between business process choreography diagrams and RESTful interactions. Choreography diagrams, on the one hand, are modeled by business domain experts with the purpose of capturing, communicating and, ideally, driving the business interactions. RESTful interactions, on the other hand, depend on RESTful interfaces that are designed by web engineers with the purpose of facilitating the interaction between participants on the internet. In most cases, however, business domain experts are unaware of the technology behind web service interfaces, and web engineers tend to overlook the overall business goals of web services. While there is considerable work on using process models during process implementation, there is little work on using choreography models to implement interactions between business processes. This thesis addresses this research gap by raising the following research question: How can the conceptual gap between business process choreographies and RESTful interactions be closed? This thesis offers several research contributions that jointly answer the research question.
The main research contribution is the design of a language that captures RESTful interactions between participants---RESTful choreography modeling language. Formal completeness properties (with respect to REST) are introduced to validate its instances, called RESTful choreographies. A systematic semi-automatic method for deriving RESTful choreographies from business process choreographies is proposed. The method employs natural language processing techniques to translate business interactions into RESTful interactions. The effectiveness of the approach is shown by developing a prototypical tool that evaluates the derivation method over a large number of choreography models.
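As a rough illustration of the derivation idea, the step from a choreography task label to a RESTful interaction can be sketched as a rule-based mapping. The sketch below is a hypothetical simplification in Python (the names `VERB_MAP` and `derive_rest_interaction` are illustrative only); the thesis's actual method relies on fuller natural language processing techniques.

```python
import re

# Hypothetical verb-to-HTTP-method table; a crude stand-in for NLP-based
# translation of business interactions into RESTful interactions.
VERB_MAP = {
    "request": "POST", "order": "POST", "create": "POST",
    "update": "PUT", "change": "PUT",
    "get": "GET", "retrieve": "GET", "check": "GET",
    "cancel": "DELETE", "reject": "DELETE",
}

def derive_rest_interaction(task_label: str) -> tuple[str, str]:
    """Map a choreography task label like 'Order goods' to (method, resource)."""
    words = re.findall(r"[a-z]+", task_label.lower())
    verb, *rest = words
    method = VERB_MAP.get(verb, "POST")  # default to POST for unknown verbs
    resource = "/" + "-".join(rest) if rest else "/" + verb
    return method, resource

print(derive_rest_interaction("Order goods"))   # ('POST', '/goods')
print(derive_rest_interaction("Check status"))
```

A real derivation must also preserve the ordering and participant constraints of the choreography, which is why the thesis validates the resulting RESTful choreographies against formal completeness properties.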
In addition, the thesis proposes solutions towards implementing RESTful choreographies. In particular, two RESTful service specifications are introduced: one to aid the execution of choreographies' exclusive gateways and one to guide RESTful interactions.
Without pragmatics, communication would not be possible, since we could not interpret linguistic utterances. For any learner of a language they do not master, linguistic competence alone is not enough, since their goal is to communicate with other people in specific contexts. Only teaching that fosters the ability to produce and understand utterances in order to perform speech acts, selecting those most appropriate for a given context, can claim to be effective.
The work presented here aims to introduce to the scientific community, and especially to those directly or indirectly involved in the teaching process, the concept of verbal pragmatics, to contrast it with concepts such as grammar, culture and interculturality, and to raise awareness of the importance and pressing necessity of establishing pragmatics as a relevant discipline in the communicative process, and in particular of its systematic and explicit inclusion in textbooks of Spanish as a foreign language designed for the school context. To this end, the study investigates the presence of pragmatic elements and the promotion of pragmatic competence in textbooks for beginners, since these are the materials used in schools par excellence and because of their relevance in specifying content, type of progression and methodology.
One method of embedding groups into skew fields was introduced by A. I. Mal'tsev and B. H. Neumann (cf. [18, 19]). If G is an ordered group and F is a skew field, the set F((G)) of formal power series over F in G with well-ordered support forms a skew field into which the group ring F[G] can be embedded. Unfortunately, it is not sufficient that G is left-ordered, since in that case F((G)) is only an F-vector space: there is no natural way to define a multiplication on F((G)). One way to extend the original idea to left-ordered groups is to examine the endomorphism ring of F((G)), as explored by N. I. Dubrovin (cf. [5, 6]). It is possible to embed any crossed product ring F[G; η, σ] into the endomorphism ring of F((G)) such that each non-zero element of F[G; η, σ] defines an automorphism of F((G)) (cf. [5, 10]). Thus, the rational closure of F[G; η, σ] in the endomorphism ring of F((G)), which we will call the Dubrovin ring of F[G; η, σ], is a potential candidate for a skew field of fractions of F[G; η, σ]. The methods of N. I. Dubrovin made it possible to show that specific classes of groups can be embedded into a skew field. For example, N. I. Dubrovin devised special criteria that are applicable to the universal covering group of SL(2, R). These methods have also been explored by J. Gräter and R. P. Sperner (cf. [10]) as well as N. H. Halimi and T. Ito (cf. [11]). Furthermore, it is of interest to know whether skew fields of fractions are unique. For example, left and right Ore domains have unique skew fields of fractions (cf. [2]). This is not true in general: for example, the free group on 2 generators can be embedded into non-isomorphic skew fields of fractions (cf. [12]). It seems likely that Ore domains are the most general case for which unique skew fields of fractions exist. One approach to gain uniqueness is to restrict the search to skew fields of fractions with additional properties. I. 
Hughes has defined skew fields of fractions of crossed product rings F[G; η, σ] with locally indicable G which fulfill a special condition. These are called Hughes-free skew fields of fractions, and I. Hughes has proven that they are unique if they exist [13, 14]. This thesis connects the ideas of N. I. Dubrovin and I. Hughes. The first chapter contains the basic terminology and concepts used in this thesis. We present methods provided by N. I. Dubrovin, such as the complexity of elements in rational closures and special properties of endomorphisms of the vector space of formal power series F((G)). To combine the ideas of N. I. Dubrovin and I. Hughes, we introduce Conradian left-ordered groups of maximal rank and examine their connection to locally indicable groups. Furthermore, we provide notation for crossed product rings, skew fields of fractions and Dubrovin rings, and prove some technical statements which are used in later parts. The second chapter focuses on Hughes-free skew fields of fractions and their connection to Dubrovin rings. For that purpose we introduce series representations to interpret elements of Hughes-free skew fields of fractions as skew formal Laurent series. This allows us to prove that for Conradian left-ordered groups G of maximal rank the statement "F[G; η, σ] has a Hughes-free skew field of fractions" implies "the Dubrovin ring of F[G; η, σ] is a skew field". We also prove the reverse implication and apply the results to give a new proof of Theorem 1 in [13]. Furthermore, we show how to extend injective ring homomorphisms of some crossed product rings onto their Hughes-free skew fields of fractions. Finally, we are able to answer the open question of whether Hughes-free skew fields are strongly Hughes-free (cf. [17, page 53]).
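For orientation, the Mal'tsev-Neumann construction referred to above can be summarized in standard notation (this is textbook material, not specific to this thesis). An element of F((G)) is a formal series with well-ordered support, and the product is defined by convolution; each coefficient sum below is finite precisely because the supports are well-ordered, which is what fails when G is merely left-ordered:

```latex
% Elements of F((G)) and their product (Mal'tsev--Neumann, group-ring case)
a = \sum_{g \in G} a_g\, g , \qquad
\operatorname{supp}(a) = \{\, g \in G : a_g \neq 0 \,\}\ \text{well-ordered},
\qquad
(ab)_k = \sum_{\substack{g,h \in G \\ gh = k}} a_g\, b_h .
```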
Introduction: Cystic fibrosis (CF) is a genetic disease which disrupts the function of an epithelial surface anion channel, CFTR (cystic fibrosis transmembrane conductance regulator). Impairment of this channel leads to inflammation and infection in the lung, causing the majority of morbidity and mortality. However, CF is a multiorgan disease affecting many tissues, including vascular smooth muscle. Studies have revealed that young people with cystic fibrosis without inflammation and infection still demonstrate vascular endothelial dysfunction, measured via flow-mediated dilation (FMD). In other disease cohorts, e.g. diabetic and obese patients, endurance exercise interventions have been shown to improve or attenuate this impairment. However, long-term exercise interventions are risky, as well as costly in terms of time and resources. Nevertheless, emerging research has correlated the acute effects of exercise with its long-term benefits and advocates studying acute exercise effects on FMD prior to longitudinal studies. The acute effects of exercise on FMD have not previously been examined in young people with CF, but could yield insights into the potential benefits of long-term exercise interventions.
The aims of these studies were to 1) develop and test the reliability of the FMD method and its applicability to studying acute exercise effects; 2) compare baseline FMD and the acute exercise effect on FMD between young people with and without CF; and 3) explore associations between the acute effects of exercise on FMD and demographic characteristics, physical activity levels, lung function, maximal exercise capacity and inflammatory hsCRP levels.
Methods: Thirty young volunteers (10 people with CF, 10 non-CF and 10 non-CF active matched controls) between the ages of 10 and 30 years completed blood draws, pulmonary function tests, maximal exercise capacity tests and baseline FMD measurements, before returning approximately 1 week later and performing a 30-min constant-load training session at 75% HRmax. FMD measurements were taken before, immediately after, 30 minutes after and 1 hour after the training session. ANOVAs and repeated-measures ANOVAs were employed to explore differences between groups and timepoints, respectively. Linear regression was used to assess associations between FMD and demographic characteristics, physical activity levels, lung function, maximal exercise capacity and inflammatory hsCRP levels. For all comparisons, statistical significance was set at α = 0.05.
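The between-group comparison described above can be sketched with a one-way ANOVA. The snippet below is purely illustrative: the data are synthetic values loosely shaped like the reported group means, not the study's actual measurements, and the F statistic is computed by hand for transparency.

```python
def one_way_anova(*groups):
    """One-way ANOVA F statistic for k independent groups (pure Python)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Synthetic baseline FMD (%) loosely shaped like the reported group means
cf     = [5.0, 5.4, 4.8, 5.6, 5.1, 5.3, 4.9, 5.5, 5.2, 5.0]
non_cf = [8.1, 8.5, 7.9, 8.7, 8.2, 8.4, 8.0, 8.6, 8.3, 8.1]
active = [9.0, 9.3, 8.8, 9.5, 9.1, 9.2, 8.9, 9.4, 9.0, 9.2]

f_stat = one_way_anova(cf, non_cf, active)
print(f"F(2, 27) = {f_stat:.1f}")  # compare against the critical F at alpha = 0.05
```

In practice a statistics package would also report the p-value; the point here is only the structure of the group comparison.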
Results: Young people with CF presented with decreased lung function and maximal exercise capacity compared to matched controls. Baseline FMD was also significantly decreased in the CF group (CF: 5.23% vs. non-CF: 8.27% vs. non-CF active: 9.12%). Immediately post-training, FMD was significantly attenuated (by approximately 40%) in all groups, with the CF group still demonstrating the lowest FMD. Follow-up measurements of FMD revealed a slow recovery towards baseline values 30 min post-training and improvements in the CF and non-CF active groups 60 min post-training. Linear regression revealed significant correlations between maximal exercise capacity (VO2 peak), BMI and FMD immediately post-training.
Conclusion: These new findings confirm that CF vascular endothelial dysfunction can be acutely modified by exercise and underline the importance of exercise in CF populations. The potential benefits of long-term exercise interventions on vascular endothelial dysfunction in young people with CF warrant further investigation.
In recent decades, the coatings industry has also seen a shift toward more environmentally friendly paints and coatings. However, even new solutions are mostly not based on biopolymers, and even less frequently on water-based coating systems made from renewable raw materials. This is the starting point of this work, which investigated whether the biopolymer starch has the potential to serve as a water-based film former for paints and coatings. Based on established synthetic market products, the following criteria must be met: the aqueous dispersion must have a solids content of at least 30%, be processable at room temperature, and exhibit viscosities between 10^2 and 10^3 mPa·s. The final coating must form a closed film and adhere very well to a specific surface, in this work glass. As the basis for the modification of the starch, a combination of molecular degradation and chemical functionalization was chosen. Since it was not known what influence the type of starch, the chosen degradation reaction and different substituents might have on the preparation of the dispersions and their properties, as well as on the coating properties, the structural parameters were investigated separately.
The first topic covered the oxidative degradation of potato and smooth pea starch by hypochlorite degradation (OCl-) and ManOx degradation (H2O2, KMnO4). Both degradation reactions yielded comparable weight-average molar masses (Mw) of 2·10^5-10^6 g/mol (GPC-MALS). However, the reaction conditions chosen for the ManOx degradation led to the formation of gel particles. These were in the µm range (DLS and cryo-SEM measurements) and caused the ManOx samples to have markedly higher viscosities (c: 7.5%; 9-260 mPa·s) than the OCl- samples (4-10 mPa·s), with shear-thinning behavior and the properties of viscoelastic gels (G' > G''). They also showed reduced hot-water solubilities (95 °C, predominantly 70-99%). The OCl- degradation produced more hydrophilic degraded starches (carboxyl group content up to 6.1%; ManOx: up to 3.1%) that were fully water-soluble after treatment at 95 °C and exhibited Newtonian flow behavior with the properties of a viscoelastic liquid (G'' > G'). Compared with the ManOx products (10-20%), the OCl- samples could be processed into more concentrated dispersions (20-40%), which at the same time permitted restricting application-relevant Mw to < 7·10^5 g/mol (the concentration should be > 30%). Moreover, only the OCl- samples of potato starch yielded transparent (all others were opaque), closed coating films. The combination of OCl- degradation and potato starch therefore stands out with regard to the final application.
The second topic comprised investigations into the influence of ester and hydroxyalkyl ether substituents, based on an industrially degraded potato starch (Mw: 1.2·10^5 g/mol), primarily on the preparation of the dispersions, their rheological properties, and the coating properties in combination with glass substrates. For this purpose, esters and ethers with DS/MS values of 0.07-0.91 were synthesized. The derivatives could be processed into water-based dispersions with concentrations of 30-45%, although for the more hydrophobic derivatives a co-solvent, diethylene glycol monobutyl ether (DEGBE), had to be used. For both derivative classes, the solids contents decreased above all with increasing alkyl chain length. The application-relevant viscosities (323-1240 mPa·s) tended to increase with DS/MS and alkyl chain length due to interactions. With regard to the coating properties, the esters proved to be the preferred substituent class compared to the ethers, since only the esters formed closed, defect-free and mostly transparent coating films with excellent to very good adhesion (ISO classes 0 and 1) to glass. The ethers mostly formed brittle films. Based on the combined results of the solvent exchange, the rheological investigations and additional surface tension measurements (30-61 mN/m), it could be concluded that missing or poor adhesion is probably due primarily to water accumulated in the coating films (visually: cloudy or white), while the brittleness can presumably be attributed to interactions (hydrogen bonding, hydrophobic interactions) between the polymers.
Overall, the combination of potato starch degraded by the OCl- route with Mw < 7·10^5 g/mol and an ester substituent appears to be a good choice for water-based dispersions with high solids concentrations (> 30%), good film formation and excellent adhesion to glass.
A contemporary challenge in ecology and evolutionary biology is to anticipate the fate of populations of organisms in the context of a changing world. Climate change and landscape changes due to anthropogenic activities have been a major concern in recent history. Organisms facing these threats are expected to respond by local adaptation (i.e., genetic changes or phenotypic plasticity) or by shifting their distributional range (migration). However, there are limits to their responses. For example, isolated populations will have more difficulty developing adaptive innovations by means of genetic changes than interconnected metapopulations. Similarly, the topography of the environment can limit dispersal opportunities for crawling organisms compared to those that rely on wind. Thus, populations of species with different life history strategies may differ in their ability to cope with changing environmental conditions. However, depending on the taxon, empirical studies investigating organisms' responses to environmental change may become too complex, long and expensive, with additional complications arising when dealing with endangered species. Consequently, eco-evolutionary modeling offers an opportunity to overcome these limitations and complement empirical studies, to understand the action and limitations of the underlying mechanisms, and to project into possible future scenarios. In this work I take a modeling approach and investigate the effect and relative importance of evolutionary mechanisms (including phenotypic plasticity) on the capacity for local adaptation of populations with different life strategies experiencing climate change scenarios. For this, I performed a review of the state of the art in eco-evolutionary Individual-Based Models (IBMs) and identified gaps for future research.
Then, I used the results of the review to develop an eco-evolutionary individual-based modeling tool to study the role of genetic and plastic mechanisms in promoting local adaptation of populations of organisms with different life strategies experiencing scenarios of climate change and environmental stochasticity. The environment was simulated through a climate variable (e.g., temperature) defining a phenotypic optimum moving at a given rate of change. This rate was varied to simulate different scenarios of climate change (no change, slow, medium, rapid climate change). Several scenarios of stochastic noise color resembling different climatic conditions were explored. Results show that populations of sexual species will rely mainly on standing genetic variation and phenotypic plasticity for local adaptation. Populations of species with relatively slow growth rates (e.g., large mammals), especially those of small size, are the most vulnerable, particularly if their plasticity is limited (i.e., specialist species). In addition, whenever organisms from these populations are capable of adaptive plasticity, they can buffer fitness losses in reddish climatic conditions. Likewise, whenever they can adjust their plastic response (e.g., a bet-hedging strategy), they will cope with bluish environmental conditions as well. In contrast, species with high-fecundity life strategies can rely on non-adaptive plasticity for their local adaptation to novel environmental conditions, unless the rate of change is too rapid. A recommended management measure is to guarantee the interconnection of isolated populations into metapopulations, such that the supply of useful genetic variation is increased and, at the same time, populations are provided with movement opportunities to follow their preferred niche when local adaptation becomes problematic.
This is particularly important under bluish and reddish climatic conditions when the rate of change is slow, or under any climatic condition when the level of stress (the rate of change) is relatively high.
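The core simulation loop described above (a population tracking a phenotypic optimum that moves at a given rate) can be sketched in a few lines. This is a minimal illustration, not the thesis's model: all parameter values are arbitrary assumptions, plasticity is omitted, and selection is a simple Gaussian viability filter.

```python
import math
import random

# Minimal individual-based sketch: genetic values evolve to track an
# optimum moving at a fixed rate per generation ("climate change").
random.seed(42)

RATE = 0.02      # movement of the optimum per generation (assumed)
SIGMA_FIT = 1.0  # width of the Gaussian fitness function (assumed)
MUT_SD = 0.05    # mutational standard deviation (assumed)
POP_SIZE = 200

population = [random.gauss(0.0, 0.2) for _ in range(POP_SIZE)]  # genetic values
optimum = 0.0

for generation in range(200):
    optimum = RATE * generation
    # Viability selection: survival probability follows Gaussian stabilizing fitness
    survivors = [z for z in population
                 if random.random() < math.exp(-(z - optimum) ** 2 / (2 * SIGMA_FIT ** 2))]
    if not survivors:
        break  # extinction: the optimum moved too fast for this population
    # Reproduction with mutation, refilling the population to constant size
    population = [random.gauss(random.choice(survivors), MUT_SD) for _ in range(POP_SIZE)]

final_mean = sum(population) / len(population)
print(f"optimum: {optimum:.2f}, mean genetic value: {final_mean:.2f}")
```

Raising RATE relative to the mutational input reproduces the qualitative result above: beyond some rate of change, standing variation can no longer keep pace and the population goes extinct.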
A central insight from psychological studies on human eye movements is that eye movement patterns are highly individually characteristic. They can, therefore, be used as a biometric feature, that is, subjects can be identified based on their eye movements. This thesis introduces new machine learning methods to identify subjects based on their eye movements while viewing arbitrary content. The thesis focuses on probabilistic modeling of the problem, which has yielded the best results in the most recent literature. The thesis studies the problem in three phases by proposing a purely probabilistic, probabilistic deep learning, and probabilistic deep metric learning approach. In the first phase, the thesis studies models that rely on psychological concepts about eye movements. Recent literature illustrates that individual-specific distributions of gaze patterns can be used to accurately identify individuals. In these studies, models were based on a simple parametric family of distributions. Such simple parametric models can be robustly estimated from sparse data, but have limited flexibility to capture the differences between individuals. Therefore, this thesis proposes a semiparametric model of gaze patterns that is flexible yet robust for individual identification. These patterns can be understood as domain knowledge derived from psychological literature. Fixations and saccades are examples of simple gaze patterns. The proposed semiparametric densities are drawn under a Gaussian process prior centered at a simple parametric distribution. Thus, the model will stay close to the parametric class of densities if little data is available, but it can also deviate from this class if enough data is available, increasing the flexibility of the model. The proposed method is evaluated on a large-scale dataset, showing significant improvements over the state-of-the-art. 
Later, the thesis replaces the model based on gaze patterns derived from psychological concepts with a deep neural network that can learn more informative and complex patterns from raw eye movement data. As previous work has shown that the distribution of these patterns across a sequence is informative, a novel statistical aggregation layer called the quantile layer is introduced. It explicitly fits the distribution of deep patterns learned directly from the raw eye movement data. The proposed deep learning approach is end-to-end learnable, such that the deep model learns to extract informative, short local patterns while the quantile layer learns to approximate the distributions of these patterns. Quantile layers are a generic approach that, depending on the problem, can converge to standard pooling layers or provide a more detailed description of the features being pooled. The proposed model is evaluated in a large-scale study using the eye movements of subjects viewing arbitrary visual input. The model improves upon the standard pooling layers and other statistical aggregation layers proposed in the literature. It also improves upon the state-of-the-art in eye movement biometrics by a wide margin. Finally, for the model to identify any subject, not just the set of subjects it is trained on, a metric learning approach is developed. Metric learning learns a distance function over instances. The metric learning model maps the instances into a metric space, where sequences of the same individual are close, and sequences of different individuals are further apart. This thesis introduces a deep metric learning approach with distributional embeddings. The approach represents sequences as a set of continuous distributions in a metric space; to achieve this, a new loss function based on Wasserstein distances is introduced. The proposed method is evaluated on multiple domains besides eye movement biometrics.
This approach outperforms the state of the art in deep metric learning in several domains while also outperforming the state of the art in eye movement biometrics.
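The two ideas above can be illustrated in miniature: aggregating a variable-length sequence of features by its quantiles, and comparing two subjects' feature distributions with a one-dimensional Wasserstein distance. Both functions below are simplified stand-ins (the thesis's quantile layer is learnable and end-to-end trained, and its loss operates on distributional embeddings); the data are made up.

```python
import statistics

def quantile_embedding(feature_sequence, n_quantiles=4):
    """Summarize one feature channel over a sequence by a fixed number of quantiles."""
    # statistics.quantiles returns the n-1 cut points dividing the data into n parts
    return statistics.quantiles(feature_sequence, n=n_quantiles + 1)

def wasserstein_1d(xs, ys):
    """1-D Wasserstein distance between two equal-size empirical distributions:
    the mean absolute difference of the sorted samples."""
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Made-up per-sequence feature values for two hypothetical subjects
subject_a = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.6, 0.9]
subject_b = [0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 0.9, 1.2]

print(quantile_embedding(subject_a))          # fixed-size summary of a sequence
print(wasserstein_1d(subject_a, subject_b))
```

The quantile summary gives a fixed-size embedding regardless of sequence length, which is the property that lets such a layer replace standard pooling; the Wasserstein distance then compares whole distributions rather than single pooled values.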
Phytoplankton growth depends not only on the mean intensity but also on the dynamics of the light supply. The nonlinear light-dependency of growth is characterized by a small number of basic parameters: the compensation light intensity PARcompμ, where production and losses are balanced; the growth efficiency at sub-saturating light αµ; and the maximum growth rate at saturating light µmax. In surface mixed layers, phytoplankton may move rapidly between high light intensities and almost darkness. Because of the different frequency distribution of light and/or acclimation processes, the light-dependency of growth may differ between constant and fluctuating light. Very few studies have measured growth under fluctuating light at a sufficient number of mean light intensities to estimate the parameters of the growth-irradiance relationship. Hence, the influence of light dynamics on µmax, αµ and PARcompμ is still largely unknown. By extension, accurate model predictions of phytoplankton development under fluctuating light exposure remain difficult to make. This PhD thesis does not intend to directly extrapolate a few experimental results to aquatic systems, but rather to improve the mechanistic understanding of how the light-dependency of growth varies under light fluctuations and how this affects phytoplankton development.
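The three parameters above can be tied together in one commonly used saturating growth-irradiance form. The exact equation and all parameter values below are assumptions for illustration (the thesis does not prescribe them): the curve is zero at the compensation intensity, has initial slope αµ, and saturates toward µmax.

```python
import math

def growth_rate(par, mu_max=1.2, alpha_mu=0.02, par_comp=10.0):
    """Growth rate (1/day) as a function of irradiance PAR (umol photons/m2/s).
    Illustrative parameter values; negative below the compensation intensity."""
    # Zero at par_comp, slope alpha_mu at par_comp, saturating toward mu_max
    return mu_max * (1.0 - math.exp(-alpha_mu * (par - par_comp) / mu_max))

print(round(growth_rate(10.0), 3))   # zero growth at the compensation intensity
print(round(growth_rate(500.0), 3))  # approaches mu_max at saturating light
```

Under fluctuating light, the effective growth over a day is the time average of this nonlinear curve over the experienced irradiances, which is why the frequency distribution of light, and not only its mean, matters.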
In Lake TaiHu and at the Three Gorges Reservoir (China), we incubated phytoplankton communities in bottles placed either at fixed depths or moved vertically through the water column to mimic vertical mixing. Phytoplankton at fixed depths received only the diurnal changes in light (defined as the constant light regime), while the vertically moved phytoplankton received rapidly fluctuating light produced by superimposing the vertical light gradient on the natural sinusoidal diurnal sunlight. The vertically moved samples followed a circular movement with 20 min per revolution, replicating to some extent the full overturn of typical Langmuir cells. Growth, photosynthesis, oxygen production and respiration of communities (at Lake TaiHu) were measured. To complement these investigations, a physiological experiment was performed in the laboratory on a toxic strain of Microcystis aeruginosa (FACBH 1322) incubated under fluctuating light with a 20 min period. Here, we measured electron transport rates and net oxygen production at a much higher time resolution (single-minute timescale).
The present PhD thesis provides evidence for substantial effects of fluctuating light on the eco-physiology of phytoplankton. Both experiments performed under semi-natural conditions in Lake TaiHu and at the Three Gorges Reservoir gave similar results. The significant decline in community growth efficiencies αµ under fluctuating light was caused in large part by the different frequency distribution of light intensities, which shortened the effective daylength for production. The remaining gap in community αµ was attributed to species-specific photoacclimation mechanisms and to light-dependent respiratory losses. In contrast, community maximal growth rates µmax were similar between incubations at constant and fluctuating light. At daily growth-saturating light supply, differences in losses for biosynthesis between the two light regimes were observed. Phytoplankton experiencing constant light suffered photoinhibition, leading to foregone photosynthesis and additional respiratory costs for photosystem repair. By contrast, intermittent exposure to low and high light intensities prevented photoinhibition of the mixed algae but forced them to develop an alternative light-use strategy: they harvested and exploited surface irradiance better by enhancing their photosynthesis. In the laboratory, we showed that Microcystis aeruginosa increased its oxygen consumption by dark respiration in the light within only a few minutes of exposure to increasing light intensities. Moreover, we showed that within a simulated Langmuir cell, the net production at saturating light and the compensation light intensity for production at limiting light are positively related. These results are best explained by an accumulation of photosynthetic products at increasing irradiance and the mobilization of these fresh resources, through rapid enhancement of dark respiration, for maintenance and biosynthesis at decreasing irradiance.
At the daily timescale, we showed that the enhancement of photosynthesis at high irradiance for biosynthesis increased species' maintenance respiratory costs at limiting light. Species-specific growth at saturating light (µmax) and the compensation light intensity for growth (PARcompμ) of species incubated in Lake TaiHu were positively related. Because of this species-specific physiological tradeoff, species displayed different affinities to limiting and saturating light, thereby exhibiting a gleaner-opportunist tradeoff. In Lake TaiHu, we showed that inter-specific differences in light acquisition traits (µmax and PARcompμ) allowed the coexistence of species on a gradient of constant light while avoiding competitive exclusion. More interestingly, we demonstrated for the first time that vertical mixing (inducing a fluctuating light supply for phytoplankton) may alter or even reverse the light utilization strategies of species within a couple of days. The intra-specific variation in traits under fluctuating light increased the niche space for acclimated species, precluding competitive exclusion.
Overall, this PhD thesis contributes to a better understanding of phytoplankton eco-physiology under fluctuating light supply. This work could enhance the quality of predictions of phytoplankton development under certain weather conditions or climate change scenarios.
Hepcidin-25 (Hep-25) plays a crucial role in the control of iron homeostasis. Since dysfunction of the hepcidin pathway leads to multiple diseases as a result of iron imbalance, hepcidin represents a potential target for the diagnosis and treatment of disorders of iron metabolism. Despite intense research in the last decade aimed at developing a selective immunoassay for the diagnosis and treatment of iron disorders and at better understanding the ferroportin-hepcidin interaction, questions remain. The key to resolving these questions is exact knowledge of the 3D structure of native Hep-25. Since it was determined that the N-terminus, which is responsible for the bioactivity of Hep-25, contains a small Cu(II)-binding site known as the ATCUN motif, it was assumed that the Hep-25-Cu(II) complex is the native, bioactive form of hepcidin. This structure has thus far not been elucidated in detail. Owing to the lack of structural information on metal-bound Hep-25, little is known about its possible biological role in iron metabolism. Therefore, this work focuses on structurally characterizing metal-bound Hep-25 by NMR spectroscopy and molecular dynamics simulations. For the present work, a protocol was developed to prepare and purify properly folded Hep-25 in high quantities. To overcome the low solubility of Hep-25 at neutral pH, we introduced a C-terminal DEDEDE solubility tag. Metal binding was investigated through a series of NMR spectroscopic experiments to identify the most affected amino acids that mediate metal coordination. Based on the obtained NMR data, a structural calculation was performed in order to generate a model structure of the Hep-25-Ni(II) complex. The DEDEDE tag was excluded from the structural calculation due to a lack of NMR restraints. The dynamic nature and fast solvent exchange of some of the amide protons reduced the overall number of NMR restraints needed for a high-quality structure.
The NMR data revealed that the 20 C-terminal amino acids of Hep-25 experienced no significant conformational changes, compared to published results, as a result of a pH change from pH 3 to pH 7 and of metal binding. A 3D model of the Hep-25-Ni(II) complex was constructed from NMR data recorded for the hexapeptide-Ni(II) complex and the Hep-25-DEDEDE-Ni(II) complex, in combination with the fixed conformation of the 19 C-terminal amino acids. The NMR data of the Hep-25-DEDEDE-Ni(II) complex indicate that the ATCUN motif moves independently of the rest of the structure. The 3D model structure of metal-bound Hep-25 provides a basis for future work to elucidate hepcidin's interaction with its receptor ferroportin and should serve as a starting point for the development of antibodies with improved selectivity.
For millennia, humans have affected landscapes all over the world. Through its horizontal expansion, agriculture plays a major role in the process of fragmentation: natural habitats are replaced by agricultural land, producing agricultural landscapes. These landscapes are characterized by an alternation of agriculture and other land uses such as forests, along with landscape elements of natural origin such as small water bodies. Areas of different land use lie next to each other as patches, or fragments. They are physically distinguishable, which makes them look like a patchwork from an aerial perspective. Each fragment is an ecosystem of its own, with conditions and properties that differ from those of its adjacent fragments. As open systems, the fragments exchange information, matter and energy across their boundaries. These boundary areas are called transition zones. Here, habitat properties and environmental conditions are altered compared to the interior of the fragments. This changes the abundance and composition of species in the transition zones, which in turn feeds back on the environmental conditions.
The literature mainly offers information and insights on species abundance and composition in forested transition zones. Abiotic effects, i.e. the gradual changes in energy and matter, have received less attention. In addition, little is known about non-forested transition zones. For example, the effects of an altered microclimate, matter dynamics or different light regimes on agricultural yield in transition zones are hardly researched or understood. The processes in transition zones are closely connected with altered provisioning and regulating ecosystem services. Models can be used to disentangle the mechanisms and to upscale the effects.
My thesis provides insights into these topics: I reviewed the literature and introduced a conceptual framework for the quantitative description of gradients of matter and energy in transition zones. I present measurements of environmental gradients, such as microclimate, aboveground biomass and soil carbon and nitrogen content, that span from within the forest into arable land. Neither the measurements nor the literature review could validate a transition-zone width of 100 m for abiotic effects. Although this value is often reported and used in the literature, the actual width is likely to be smaller.
Further, the measurements suggest that, on the one hand, trees in transition zones are smaller than those in the interior of the fragments, while, on the other hand, less biomass was measured in the arable land's transition zone. These results support the hypothesis that less carbon is stored in the aboveground biomass of transition zones. The soil at the edge (zero line) between adjacent forest and arable land contains more nitrogen and carbon than the soil in the interior of the fragments. One year of measurements in the transition zone also provided evidence that the microclimate differs from that of the fragments' interior.
To predict the yield decreases that transition zones might cause, a modelling approach was developed. Using a small virtual landscape, I modelled the shading that a forest fragment casts on the adjacent arable land and its effect on yield with the MONICA crop growth model. In the transition zone, yield was lower than in the interior due to shading. The results of the simulations were upscaled to the landscape level and, as an example, calculated for the arable land of a whole region in Brandenburg, Germany.
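The shading effect on yield can be illustrated with a deliberately simple sketch. This is not the MONICA model; the tree height, maximum loss, and linear decay are illustrative assumptions used only to show the qualitative edge-to-interior gradient:

```python
def shaded_yield(distance_m, tree_height_m=25.0, max_loss=0.4):
    """Toy stand-in for a shading-yield relation (NOT the MONICA model):
    yield loss decays linearly from max_loss at the forest edge (distance 0)
    to zero at one assumed tree height; 1.0 is the unshaded interior yield."""
    shade = max(0.0, 1.0 - distance_m / tree_height_m)
    return 1.0 - max_loss * shade

# Relative yield at the edge, within the transition zone, and in the interior
profile = [shaded_yield(d) for d in (0.0, 12.5, 50.0)]
```

A process-based model such as MONICA additionally resolves radiation, water and nutrient dynamics; this sketch captures only the shape of the gradient.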
The major findings of my thesis are: (1) Transition zones are likely to be much smaller than assumed in the scientific literature; (2) transition zones are not solely a phenomenon of forested ecosystems, but extend significantly into arable land as well; (3) empirical and modelling results show that transition zones encompass biotic and abiotic changes that are likely to be important to a variety of agricultural landscape ecosystem services.
Increasing concerns regarding the environmental impact of chemical production have shifted attention towards sustainable biotechnology. One-carbon (C1) compounds, including methane, methanol, formate and CO, are promising feedstocks for a future bioindustry. CO2 is another interesting feedstock, as it can be transformed with renewable energy into other C1 feedstocks. While formaldehyde is not suitable as a feedstock due to its high toxicity, it is a central intermediate in the process of C1 assimilation. This thesis explores formaldehyde metabolism and aims to engineer formaldehyde assimilation in the model organism Escherichia coli for a future C1-based bioindustry.
The first chapter of the thesis aims to establish growth of E. coli on formaldehyde via the most efficient naturally occurring route, the ribulose monophosphate pathway. Linear variants of the pathway were constructed in multiple-gene knockout strains, coupling E. coli growth to the activities of the key enzymes of the pathway. Formaldehyde-dependent growth was achieved in rationally designed strains. In the final strain, the synthetic pathway provides the cell with almost all of its biomass and energy requirements.
In the second chapter, taking advantage of formaldehyde's unique reactivity, its assimilation via condensation with glycine and pyruvate by two promiscuous aldolases was explored. Facilitated by these two reactions, the newly designed homoserine cycle is expected to support higher yields of a wide array of products than its counterparts. By dividing the pathway into segments and coupling them to the growth of dedicated strains, all pathway reactions were demonstrated to be sufficiently active. This work paves the way for the future implementation of a highly efficient route from C1 feedstocks to commodity chemicals.
In the third chapter, the in vivo rate of the spontaneous condensation of formaldehyde with tetrahydrofolate to methylene-tetrahydrofolate was assessed in order to evaluate its applicability in biotechnological processes. Tested in an E. coli strain deleted in the genes essential for native methylene-tetrahydrofolate biosynthesis, the reaction was shown to support the production of this essential intermediate. However, only low growth rates were observed, and only at high formaldehyde concentrations. A computational analysis based on in vivo evidence from this strain established the slow rate of this spontaneous reaction, ruling out a substantial contribution to growth on C1 feedstocks.
The reactivity of formaldehyde makes it highly toxic. In the last chapter, the formation of thioproline, the condensation product of cysteine and formaldehyde, was confirmed to contribute to this toxicity. Xaa-Pro aminopeptidase (PepP), which is genetically linked with folate metabolism, was shown to hydrolyze thioproline-containing peptides. Deleting pepP increased the strain's sensitivity to formaldehyde, pointing towards the toxicity of thioproline-containing peptides and the importance of their removal. The characterization in this study could be useful for handling this toxic intermediate.
Overall, this thesis identified challenges related to formaldehyde metabolism and provided novel solutions towards a future bioindustry based on sustainable C1 feedstocks in which formaldehyde serves as a key intermediate.
Accurate weather observations are the keystone to many quantitative applications, such as precipitation monitoring and nowcasting, hydrological modelling and forecasting, climate studies, as well as understanding precipitation-driven natural hazards (i.e. floods, landslides, debris flow). Weather radars have been an increasingly popular tool since the 1940s to provide high spatial and temporal resolution precipitation data at the mesoscale, bridging the gap between synoptic and point scale observations. Yet, many institutions still struggle to tap the potential of the large archives of reflectivity, as there is still much to understand about factors that contribute to measurement errors, one of which is calibration. Calibration represents a substantial source of uncertainty in quantitative precipitation estimation (QPE). A miscalibration of a few dBZ can easily deteriorate the accuracy of precipitation estimates by an order of magnitude. Instances where rain cells carrying torrential rains are misidentified by the radar as moderate rain could mean the difference between a timely warning and a devastating flood.
Since 2012, the Philippine Atmospheric, Geophysical, and Astronomical Services Administration (PAGASA) has been expanding the country's ground radar network. We had a first look into the dataset from one of the longest-running radars (the Subic radar) after devastating week-long torrential rains and thunderstorms in August 2012, caused by the annual southwest monsoon and enhanced by the north-passing Typhoon Haikui. The analysis of the rainfall's spatial distribution revealed the added value of radar-based QPE in comparison to interpolated rain gauge observations. However, when compared with local gauge measurements, severe miscalibration of the Subic radar was found. As a consequence, the radar-based QPE would have underestimated the rainfall amount by up to 60% had it not been adjusted by rain gauge observations, a technique that is not only affected by other uncertainties but is also not feasible in regions of the country with very sparse rain gauge coverage.
Relative calibration techniques, i.e. the assessment of bias from the reflectivities of two radars, have been steadily gaining popularity. Previous studies have demonstrated that reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and its successor, the Global Precipitation Measurement (GPM) mission, are accurate enough to serve as a calibration reference for ground radars over low-to-mid-latitudes (± 35 deg for TRMM; ± 65 deg for GPM). Comparing spaceborne radars (SR) and ground radars (GR) requires careful consideration of differences in measurement geometry and instrument specifications, as well as of temporal coincidence. For this purpose, we apply a 3-D volume matching method, developed by Schwaller and Morris (2011) and extended by Warren et al. (2018), to five years' worth of observations from the Subic radar. In this method, only the volumetric intersections of the SR and GR beams are considered.
Calibration bias affects reflectivity observations homogeneously across the entire radar domain. Yet other sources of systematic measurement error are highly heterogeneous in space and can either enhance or offset the bias introduced by miscalibration. In order to account for such heterogeneous errors, and thus isolate the calibration bias, we assign a quality index to each matching SR–GR volume and compute the GR calibration bias as a quality-weighted average of the reflectivity differences in any sample of matching SR–GR volumes. We exemplify the idea of quality-weighted averaging by using the beam blockage fraction (BBF) as a quality variable. Quality-weighted averaging increases the consistency of SR and GR observations by decreasing the standard deviation of the SR–GR differences, and thus increases the precision of the bias estimates.
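The quality-weighted averaging can be sketched in a few lines. The array values and the simple quality index q = 1 - BBF are illustrative assumptions, not the thesis' actual data or implementation:

```python
import numpy as np

def quality_weighted_bias(z_gr, z_sr, bbf):
    """GR calibration bias (dB) as a quality-weighted average of matched
    GR-SR reflectivity differences; volumes with a high beam blockage
    fraction (BBF) receive a low weight."""
    q = 1.0 - bbf                      # quality index in [0, 1]
    dz = z_gr - z_sr                   # per-volume reflectivity difference (dB)
    return float(np.sum(q * dz) / np.sum(q))

# Hypothetical matched SR-GR sample: the GR reads uniformly 3 dB low
z_sr = np.array([30.0, 35.0, 28.0, 40.0])   # spaceborne reference (dBZ)
z_gr = np.array([27.0, 32.0, 25.0, 37.0])   # ground radar (dBZ)
bbf  = np.array([0.0, 0.1, 0.8, 0.0])       # third volume heavily blocked
bias = quality_weighted_bias(z_gr, z_sr, bbf)
```

When the errors are heterogeneous (e.g. blockage biasing individual volumes), the weighting suppresses the affected volumes instead of letting them contaminate the domain-wide bias estimate.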
To extend this framework further, the SR–GR quality-weighted bias estimation is applied to the neighboring Tagaytay radar, this time focusing on path-integrated attenuation (PIA) as the source of uncertainty. Tagaytay is a C-band radar operating at a shorter wavelength and is therefore more strongly affected by attenuation. Applying the same method used for the Subic radar, a time series of calibration bias was also established for the Tagaytay radar.
The Tagaytay radar sits at a higher altitude than the Subic radar and is surrounded by gentler terrain, so beam blockage is negligible, especially in the overlapping region. Conversely, the Subic radar is strongly affected by beam blockage in the overlapping region, but, being an S-band radar, its attenuation is considered negligible. These coincidentally independent uncertainty contributions of the two radars in the region of overlap provide an ideal setting to experiment with different scenarios of quality filtering when comparing the reflectivities of the two ground radars. The standard deviation of the GR–GR differences already decreases if we consider either BBF or PIA to compute the quality index and thus the weights. However, combining them multiplicatively results in the largest decrease in standard deviation, suggesting that taking both factors into account increases the consistency between the matched samples.
The overlap between the two radars and the instances of the SR passing over both radars at the same time allow for a verification of the SR–GR quality-weighted bias estimation method. In this regard, the consistency between the two ground radars is analyzed before and after the bias correction is applied. For cases in which all three radars coincide during a significant rainfall event, correcting the GR reflectivities with calibration bias estimates from SR overpasses dramatically improves the consistency between the two ground radars, whose observations were incoherent before the correction. We also show that, for cases in which adequate SR coverage is unavailable, the calibration biases can be interpolated with a moving average to correct the GR observations for any point in time, to some extent. Using the interpolated biases to correct the GR observations, we demonstrate that the bias correction reduces the absolute value of the mean difference in most cases and therefore improves the consistency between the two ground radars.
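Interpolating the overpass-based bias estimates with a moving average might look as follows; the window length and the bias values are made up for illustration:

```python
import numpy as np

def interpolated_bias(times, biases, t_query, half_window=30.0):
    """Moving-average estimate of the calibration bias (dB) at an arbitrary
    time t_query (days), averaging SR-overpass-based estimates that fall
    within +/- half_window days; NaN when no overpass is close enough."""
    times, biases = np.asarray(times, float), np.asarray(biases, float)
    mask = np.abs(times - t_query) <= half_window
    return float(biases[mask].mean()) if mask.any() else float("nan")

# Hypothetical overpass-based bias estimates: (day, bias in dB)
days   = [0.0, 10.0, 25.0, 60.0]
biases = [-2.0, -2.4, -2.2, -1.0]
b15 = interpolated_bias(days, biases, 15.0)   # estimate for day 15
```

In practice one would also weight the contributing estimates by their precision; the plain average above is the simplest variant of the idea.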
This thesis demonstrates that in general, taking into account systematic sources of uncertainty that are heterogeneous in space (e.g. BBF) and time (e.g. PIA) allows for a more consistent estimation of calibration bias, a homogeneous quantity. The bias still exhibits an unexpected variability in time, which hints that there are still other sources of errors that remain unexplored. Nevertheless, the increase in consistency between SR and GR as well as between the two ground radars, suggests that considering BBF and PIA in a weighted-averaging approach is a step in the right direction.
Despite the ample room for improvement, the approach that combines volume matching between radars (either SR–GR or GR–GR) and quality-weighted comparison is readily available for application or further scrutiny. As a step towards reproducibility and transparency in atmospheric science, the 3D matching procedure and the analysis workflows as well as sample data are made available in public repositories. Open-source software such as Python and wradlib are used for all radar data processing in this thesis. This approach towards open science provides both research institutions and weather services with a valuable tool that can be applied to radar calibration, from monitoring to a posteriori correction of archived data.
Business process management (BPM) deals with modeling, executing, monitoring, analyzing, and improving business processes. During execution, a process communicates with its environment to obtain relevant contextual information represented as events. The recent development of big data and the Internet of Things (IoT) enables sources such as smart devices and sensors to generate vast numbers of events, which can be filtered, grouped, and composed to trigger and drive business processes.
The industry standard Business Process Model and Notation (BPMN) provides several event constructs to capture the interaction possibilities between a process and its environment, e.g., to instantiate a process, to abort an ongoing activity in an exceptional situation, to take decisions based on the information carried by events, and to choose among alternative paths for further process execution. The specification of such interactions is termed event handling. In a distributed setup, however, the event sources are most often unaware of the status of process execution, and an event is therefore produced irrespective of whether the process is ready to consume it. BPMN semantics does not support such scenarios and thus increases the chance of processes being delayed or deadlocked because they miss event occurrences that might still be relevant.
The work in this thesis reviews the challenges and shortcomings of integrating real-world events into business processes, especially subscription management. The basic integration is achieved with an architecture consisting of a process modeler, a process engine, and an event processing platform. Further, points of subscription and unsubscription along the process execution timeline are defined for the different BPMN event constructs, taking into account the semantic and temporal dependencies among event subscription, event occurrence, event consumption, and event unsubscription. To this end, an event buffer is introduced that supports early subscription, with policies for updating the buffer, retrieving the most suitable event for the current process instance, and reusing events.
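The idea of early subscription with a buffered retrieval policy can be sketched as follows. This is a minimal illustration; the class name, the FIFO retrieval policy, and the bounded buffer are assumptions, not the thesis' concrete design:

```python
from collections import deque

class EventBuffer:
    """Buffered event handling for a process instance: events published
    between subscription and consumption are retained instead of lost."""

    def __init__(self, maxlen=100):
        self.events = deque(maxlen=maxlen)  # update policy: drop oldest when full
        self.subscribed = False

    def subscribe(self):
        # early subscription: buffering starts before the instance
        # reaches its catching event construct
        self.subscribed = True

    def publish(self, event):
        # events arriving without a subscription are lost,
        # mirroring plain BPMN event semantics
        if self.subscribed:
            self.events.append(event)

    def consume(self):
        # retrieval policy: hand out the oldest buffered event (FIFO)
        return self.events.popleft() if self.events else None

    def unsubscribe(self):
        self.subscribed = False
        self.events.clear()

buf = EventBuffer()
buf.publish("e0")        # lost: no subscription yet
buf.subscribe()
buf.publish("e1")
buf.publish("e2")
first = buf.consume()    # the instance receives the earliest buffered event
```

Other retrieval policies (e.g. most recent event, or matching on event payload) would replace the `consume` body while keeping the same subscription lifecycle.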
The Petri net mapping of the event handling model gives our approach a formal semantics from a business process perspective. Two applications based on this formal foundation are presented to demonstrate the significance of different event handling configurations for correct process execution and the reachability of a process path. Prototype implementations show that realizing flexible event handling is feasible with minor extensions of off-the-shelf process engines and event platforms.
Bildungsort Familie
(2019)
In educational and family research, the intergenerational transmission of education within the family is discussed mainly from the perspective of the younger generation's success in school. How education-related transfer processes within the family actually unfold, however, remains largely unexplored in the German research landscape. This is where this qualitative study comes in. Its aim is to examine education-related transfer processes between the grandparent, parent, and grandchild generations in Russian three-generation families that emigrated from the former Soviet Union to Berlin after 1989. In Bourdieu's sense, these transfer processes reflect conscious and unconscious educational strategies of the interviewed family members. Four families were interviewed: two late-repatriate (Spätaussiedler) families, the Hoffmann and Popow families, and two Russian-Jewish families, the Rosenthal and Buchbinder families. Group discussions were conducted with the members of the four three-generation families, along with semi-structured individual interviews with one representative of each generation. Data collection took place in Berlin between 2010 and 2012. The empirical material was analyzed using Bohnsack's documentary method, which made it possible to capture and reconstruct the implicit, habitual way in which, following Bourdieu, education takes place within families. The study offers a habitus-theoretical interpretation of the Russian three-generation families and a corresponding field analysis in Bourdieu's terms.
In this context, the social space of the families under study in the receiving society was reconstructed against the comparative horizon of their society of origin. Furthermore, the transfer of education was examined against the experiential background of each family, and a typology was developed on this basis.
This investigation yielded new insights into the previously unexplored field of educational transfer in Russian three-generation families in Berlin. A key finding is that applying Bourdieu's class theory can be productive even for groups that were socialized in a socialist society and emigrated to a capitalist-oriented one. Another central finding is that in two of the four families studied, migration influenced the intergenerational transfer of education. The Rosenthal family, for instance, displays a "split" habitus as a result of migration: when planning the granddaughter's career in Berlin, the family oriented itself towards the practical and the necessary. While the conscious educational strategy of the grandparent and parent generations for the grandchild generation in the receiving country corresponds to the habitus of necessity that Bourdieu ascribes to the working class, the Rosenthal family's leisure behaviour corresponds to the habitus of distinction typical of the dominant class. A further finding is that, in contrast to the Rosenthal granddaughter, a so-called discrepancy of spheres was reconstructed for the Popow granddaughter. She is left to her own devices in the outer sphere of school, since the grandparent and parent generations know little about the German school system. On the one hand, she distances herself from her family (inner sphere) and from German school dropouts (outer sphere); on the other hand, in her attempt at upward social mobility, she orients herself towards Russian-speaking peers attending the upper secondary level (third sphere). For the Popow granddaughter, it is therefore the peer group, not the family, that functions as the central site of education.
It should be noted that the intergenerational transfer of education was influenced by migration in both a Russian-Jewish family and a late-repatriate family. While the Rosenthal family belonged to the intelligentsia in the society of origin, the Popow family belonged to the working class. It follows that the intergenerational transfer of education in the families studied proceeds independently both of late-repatriate or quota-refugee status and of the social status in the country of origin. It can therefore be concluded that, within this study, migration is a central factor in the intergenerational transfer of education.
In the dissertation entitled "Eine Hypothese über die Grundlagen von Moral und einige Implikationen" ("A Hypothesis on the Foundations of Morality and Some Implications"), the author attempts to work out the anthropological premises of moral action. A hypothesis is put forward and elaborated according to which moral action only becomes intelligible if the agent, first, possesses the capacity of imagination; second, can draw on experiences (by means of memory); and third, has interacted and continues to interact with other persons through conversation. Only on the basis of these three foundations of morality can those capacities develop that must be regarded as preconditions of moral action: self-consciousness, freedom, the development of a sense of "we", the genesis of a moral ideal, and the ability to orient one's decisions and actions towards this ideal. The dissertation also discusses some implications of this hypothesis on the individual and interpersonal levels.
For half a century, cytometry has been a major scientific discipline in the field of cytomics, the study of systems biology at the single-cell level. It enables the investigation of physiological processes, functional characteristics, and rare events by analysing multiple protein parameters on an individual cell basis. In the last decade, mass cytometry was established, which increased the number of proteins measured in parallel to up to 50. This has shifted the analysis strategy from conventional consecutive manual gates towards multi-dimensional data processing. Novel algorithms have been developed to tackle these high-dimensional protein combinations in the data. They are mainly based on clustering or non-linear dimension reduction techniques, or both, often combined with an upstream downsampling procedure. However, these tools face obstacles in comprehensible interpretability, reproducibility, computational complexity, or comparability between samples and groups.
To address this bottleneck, a reproducible, semi-automated cytometric data mining workflow, PRI (pattern recognition of immune cells), is proposed that combines three main steps: i) data preparation and storage; ii) bin-based combinatorial variable engineering of three protein markers, the so-called triploTs, and subsequent sectioning of these triploTs into four parts; and iii) deployment of a data-driven supervised learning algorithm, cross-validated elastic-net regularized logistic regression, with the triploT sections as input variables. The variables selected by the models are ranked by their prevalence and potentially have discriminative value. The purpose is to significantly facilitate the identification of meaningful subpopulations that best distinguish between two groups. The proposed workflow is exemplified on a recently published public mass cytometry data set, whose authors found a T cell subpopulation that discriminates between effective and ineffective treatment of breast carcinomas in mice. With PRI, that subpopulation was not only validated but further narrowed down to a particular Th1 cell population. Moreover, additional insights into combinatorial protein expression are revealed in a traceable manner. An essential element of the workflow is the reproducible variable engineering. These variables serve as the basis for a clearly interpretable visualization, for a structured variable exploration, and as input layers in neural network constructs.
PRI facilitates the determination of marker levels in a semi-continuous manner. Jointly with the combinatorial display, it allows a straightforward observation of correlating patterns, and thus, the dominant expressed markers and cell hierarchies. Furthermore, it enables the identification and complex characterization of discriminating subpopulations due to its reproducible and pseudo-multi-parametric pattern presentation. This endorses its applicability as a tool for unbiased investigations on cell subsets within multi-dimensional cytometric data sets.
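The bin-based variable engineering at the heart of PRI can be approximated in a small sketch. This is a simplification with assumed names and cutoffs (markers scaled to [0, 1], a square bin grid, and a per-bin fraction of cells positive for the third marker), not the published PRI implementation:

```python
import numpy as np

def triplot_features(x, y, z, n_bins=4, z_cutoff=0.5):
    """Bin cells by markers x and y into an n_bins x n_bins grid and record,
    per bin, the fraction of cells expressing marker z above z_cutoff.
    The flattened grid is one feature vector per sample, usable as input
    to an elastic-net regularized logistic regression."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ix = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)
    iy = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)
    positives = np.zeros((n_bins, n_bins))
    counts = np.zeros((n_bins, n_bins))
    for i, j, zz in zip(ix, iy, z):
        counts[i, j] += 1
        positives[i, j] += zz > z_cutoff
    frac = np.where(counts > 0, positives / np.maximum(counts, 1), 0.0)
    return frac.ravel()

# Three hypothetical cells: two double-low, one double-high
f = triplot_features(np.array([0.1, 0.1, 0.9]),
                     np.array([0.1, 0.1, 0.9]),
                     np.array([0.8, 0.2, 0.9]), n_bins=2)
```

Computing such a vector per sample, and then fitting a cross-validated elastic-net logistic regression over the samples of two groups, yields the ranked discriminative bins described above.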
The immense popularity of online communication services in the last decade has not only upended our lives (with news spreading like wildfire on the Web, presidents announcing their decisions on Twitter, and the outcome of political elections being determined on Facebook) but also dramatically increased the amount of data exchanged on these platforms. Therefore, if we wish to understand the needs of modern society better and want to protect it from new threats, we urgently need more robust, higher-quality natural language processing (NLP) applications that can recognize such necessities and menaces automatically, by analyzing uncensored texts. Unfortunately, most NLP programs today have been created for standard language, as we know it from newspapers, or, in the best case, adapted to the specifics of English social media.
This thesis reduces the existing deficit by entering the new frontier of German online communication and addressing one of its most prolific forms—users’ conversations on Twitter. In particular, it explores the ways and means by how people express their opinions on this service, examines current approaches to automatic mining of these feelings, and proposes novel methods, which outperform state-of-the-art techniques. For this purpose, I introduce a new corpus of German tweets that have been manually annotated with sentiments, their targets and holders, as well as lexical polarity items and their contextual modifiers. Using these data, I explore four major areas of sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained opinion mining, (iii) message-level polarity classification, and (iv) discourse-aware sentiment analysis. In the first task, I compare three popular groups of lexicon generation methods: dictionary-, corpus-, and word-embedding–based ones, finding that dictionary-based systems generally yield better polarity lists than the last two groups. Apart from this, I propose a linear projection algorithm, whose results surpass many existing automatically generated lexicons. Afterwards, in the second task, I examine two common approaches to automatic prediction of sentiment spans, their sources, and targets: conditional random fields (CRFs) and recurrent neural networks, obtaining higher scores with the former model and improving these results even further by redefining the structure of CRF graphs. When dealing with message-level polarity classification, I juxtapose three major sentiment paradigms: lexicon-, machine-learning–, and deep-learning–based systems, and try to unite the first and last of these method groups by introducing a bidirectional neural network with lexicon-based attention.
Finally, in order to make the new classifier aware of microblogs' discourse structure, I let it separately analyze the elementary discourse units of each tweet and infer the overall polarity of a message from the scores of its EDUs with the help of two new approaches: latent-marginalized CRFs and Recursive Dirichlet Process.
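As an illustration of the lexicon-based paradigm with contextual modifiers, the following sketch scores a tokenized tweet against a tiny hand-made polarity list. The entries and the one-token negation window are illustrative assumptions; the actual systems use the generated lexicons and learned models discussed above:

```python
# Toy polarity lexicon and negator list (illustrative, not a real resource)
LEXICON = {"gut": 1.0, "super": 1.0, "schlecht": -1.0, "langweilig": -0.5}
NEGATORS = {"nicht", "kein", "keine"}

def message_polarity(tokens):
    """Sum the polarity scores of lexicon terms, flipping the score of a
    polar term when it is directly preceded by a negating modifier."""
    score = 0.0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            weight = LEXICON[tok]
            if i > 0 and tokens[i - 1] in NEGATORS:
                weight = -weight  # contextual modifier: negation flips polarity
            score += weight
    return score

negative = message_polarity(["der", "film", "war", "nicht", "gut"])
positive = message_polarity(["gar", "nicht", "schlecht"])
```

The lexicon-based attention mechanism mentioned above can be seen as a learned, soft version of this hard lookup-and-flip rule.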
Academic freedom (Wissenschaftsfreiheit) is a fundamental right whose meaning and interpretation repeatedly give rise to debate in the course of reforms of the higher education system, not only in the judiciary but also within academia itself, as with the introduction of so-called quality management of studying and teaching at German universities. This dissertation presents the results of an empirical study that contributes to this debate with a sociological examination of quality management at different universities.
Based on the premise that the course and consequences of an organizational innovation can only be understood if the organization members' everyday handling of the new structures and processes is included in the analysis, the study starts from the question of how actors at German universities use their organizations' quality management systems. A qualitative content analysis of 26 semi-structured interviews with vice-rectors, quality management staff, and deans of studies at nine universities shows that the strategies of the actor groups, in interplay with structural aspects, give rise to different dynamics with implications for the freedom of teaching: while quality management supports teachers' autonomy at some universities, at others both autonomy and responsibility for studying and teaching are the subject of ongoing conflicts that also encompass quality management.
Business process management is an established technique for business organizations to manage and support their processes. Those processes are typically represented by graphical models designed with modeling languages, such as the Business Process Model and Notation (BPMN).
Since process models do not only serve the purpose of documentation but are also a basis for implementation and automation of the processes, they have to satisfy certain correctness requirements. In this regard, the notion of soundness of workflow nets was developed, which can be applied to BPMN process models in order to verify their correctness. Because the original soundness criteria are very restrictive regarding the behavior of the model, different variants of the soundness notion have been developed for situations in which certain violations are not actually harmful.
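Classical soundness of a workflow net can be sketched, under strong simplifying assumptions (a tiny, 1-safe net whose markings fit into sets), by exhaustively exploring the reachable markings; the function names and the example net below are hypothetical, not from the thesis.

```python
from collections import deque

def reachable_markings(transitions, initial):
    """BFS over reachable markings; a marking is a frozenset of marked places
    (this models only 1-safe nets -- a simplifying assumption for the sketch)."""
    seen, fired, queue = {initial}, set(), deque([initial])
    while queue:
        m = queue.popleft()
        for name, (pre, post) in transitions.items():
            if pre <= m:                    # all input places marked: enabled
                fired.add(name)
                nxt = (m - pre) | post      # consume and produce tokens
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen, fired

def is_classically_sound(transitions, source="i", sink="o"):
    initial, final = frozenset({source}), frozenset({sink})
    markings, fired = reachable_markings(transitions, initial)
    # option to complete: the final marking is reachable from every marking
    option_to_complete = all(
        final in reachable_markings(transitions, m)[0] for m in markings
    )
    no_dead_transitions = fired == set(transitions)
    return option_to_complete and no_dead_transitions

# tiny sequential workflow net: i --t1--> p --t2--> o
net = {
    "t1": (frozenset({"i"}), frozenset({"p"})),
    "t2": (frozenset({"p"}), frozenset({"o"})),
}
```

Because the check requires reaching exactly the marking that contains only the sink place, it also approximates proper completion (no leftover tokens); full workflow-net soundness over unbounded nets needs multiset markings and coverability analysis instead.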
However, all of these notions consider only the control-flow structure of a process model. This poses a problem given that, with the recent release and the ongoing development of the Decision Model and Notation (DMN) standard, an increasing number of process models are complemented by respective decision models. DMN is a dedicated modeling language for decision logic and separates the concerns of process and decision logic into two different models: process and decision models, respectively.
Hence, this thesis is concerned with the development of decision-aware soundness notions, i.e., notions of soundness that build upon the original soundness ideas for process models but additionally take complementary decision models into account. Similar to the various notions of workflow net soundness, this thesis investigates different notions of decision soundness that can be applied depending on the desired degree of restrictiveness. Since decision tables are a standardized means in DMN to represent decision logic, this thesis also puts special focus on decision tables, discussing how they can be translated into an unambiguous format and how their possible output values can be efficiently determined.
Moreover, a prototypical implementation is described that supports checking a basic version of decision soundness. The decision soundness notions were also empirically evaluated on models from participants of an online course on process and decision modeling as well as from a process management project of a large insurance company. The evaluation demonstrates that violations of decision soundness indeed occur and can be detected with our approach.
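Determining a decision table's possible output values can be pictured with a toy illustration (not the thesis' actual approach; the table, names, and hit policy below are invented): rules are represented as predicate lists and the reachable outputs are enumerated over sample inputs.

```python
def evaluate(table, inputs):
    """Return the output of the first matching rule (FIRST hit policy)."""
    for conditions, output in table:
        if all(cond(inputs) for cond in conditions):
            return output
    return None  # no rule matches: the table is incomplete for these inputs

# toy table: decide a discount from order amount and customer status
table = [
    ([lambda x: x["amount"] >= 1000, lambda x: x["vip"]], "20%"),
    ([lambda x: x["amount"] >= 1000],                     "10%"),
    ([lambda x: x["amount"] < 1000],                      "0%"),
]

# enumerate the outputs reachable over a small grid of sample inputs
possible_outputs = {
    evaluate(table, {"amount": a, "vip": v})
    for a in (500, 1500) for v in (False, True)
}
```

For tables whose conditions are interval and equality tests, the set of possible outputs can of course be read off symbolically instead of by sampling; the sketch only shows why an unambiguous hit policy matters when several rules overlap.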
There is evidence that infants start extracting words from fluent speech around 7.5 months of age (e.g., Jusczyk & Aslin, 1995) and that they use at least two mechanisms to segment word forms from fluent speech: prosodic information (e.g., Jusczyk, Cutler & Redanz, 1993) and statistical information (e.g., Saffran, Aslin & Newport, 1996). However, how these two mechanisms interact and whether they change during development is still not fully understood.
The main aim of the present work is to understand in what way different cues to word segmentation are exploited by infants when learning the language in their environment, as well as to explore whether this ability is related to later language skills. In Chapter 3 we sought to determine the reliability of the method used in most of the experiments in the present thesis (the Headturn Preference Procedure), as well as to examine correlations and individual differences between infants’ performance and later language outcomes. In Chapter 4 we investigated how German-speaking adults weigh statistical and prosodic information for word segmentation. We familiarized adults with an auditory string in which statistical and prosodic information indicated different word boundaries and obtained both behavioral and pupillometry responses. Then, we conducted further experiments to understand in what way different cues to word segmentation are exploited by 9-month-old German-learning infants (Chapter 5) and by 6-month-old German-learning infants (Chapter 6). In addition, we conducted follow-up questionnaires with the infants and obtained language outcomes at later stages of development.
Our findings revealed that (1) German-speaking adults assign a strong weight to prosodic cues, at least for the materials used in this study, and that (2) German-learning infants weigh these two kinds of cues differently depending on age and/or language experience. We observed that, unlike English-learning infants, 6-month-old infants relied more strongly on prosodic cues, whereas 9-month-olds did not show any preference for either of the cues in the word segmentation task. From the present results it remains unclear whether the ability to use prosodic cues for word segmentation relates to later vocabulary skills. We speculate that prosody provides infants with their first window into the specific acoustic regularities of the signal, which enables them to master the specific stress pattern of German rapidly. Our findings are a step forward in understanding the early impact of native prosody, compared to statistical learning, on early word segmentation.
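The statistical cue referenced above (Saffran et al., 1996) rests on transitional probabilities between syllables, which are high within words and dip at word boundaries; a minimal sketch with an invented two-word artificial language:

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# stream built from two invented words, "badu" and "kipo"
stream = "ba du ki po ba du ba du ki po ki po ba du".split()
tps = transitional_probabilities(stream)
# within-word transitions ("ba"->"du", "ki"->"po") have TP 1.0;
# transitions across word boundaries are lower, signaling a boundary
```

A statistical learner posits word boundaries where the transitional probability drops, which is exactly the cue pitted against prosodic stress in the familiarization experiments described above.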
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails incorporating measurement information into the model to gain more insight into a given state governed by a noisy state space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete, and hence uncertainties are approximated by means of probabilities. Time-continuous filtering therefore holds promise for wider usefulness, for it offers a means of combining noisy measurements with an imperfect model to provide more insight into a given state.
The solution to the time-continuous nonlinear Gaussian filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution. Moreover, numerical approximations based on Taylor expansions above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo sampling have been resorted to. Chief among these are sequential Monte Carlo methods (or particle filters), for they allow for online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and the computational costs arising from resampling.
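A minimal bootstrap particle filter for a toy one-dimensional linear-Gaussian model illustrates the degeneracy issue and the standard effective-sample-size (ESS) remedy mentioned above; the model and all parameters are invented for illustration, not taken from the thesis.

```python
import numpy as np

def bootstrap_filter(ys, n_particles=500, a=0.9, sigma_x=0.5, sigma_y=0.3, seed=1):
    """Toy bootstrap filter for x_k = a*x_{k-1} + noise, y_k = x_k + noise."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in ys:
        # propagate particles through the prior dynamics (bootstrap proposal)
        particles = a * particles + rng.normal(0.0, sigma_x, n_particles)
        # reweight by the observation likelihood (log-space for stability)
        logw = -0.5 * ((y - particles) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(w @ particles))
        # resample only when the effective sample size collapses --
        # the standard mitigation of particle degeneracy
        ess = 1.0 / np.sum(w ** 2)
        if ess < n_particles / 2:
            particles = rng.choice(particles, size=n_particles, p=w)
    return np.array(means)

# noisy observations of a state near 1.0 (synthetic, for illustration)
ys = 1.0 + 0.3 * np.random.default_rng(2).normal(size=20)
est = bootstrap_filter(ys)
```

Without the resampling step the weights concentrate on a few particles after a handful of observations (degeneracy); resampling cures this at the cost of sample impoverishment, which is precisely the trade-off the feedback particle filters studied in the thesis aim to avoid.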
The goals of this thesis are: (i) to review the derivation of the Kushner-Stratonovich equation from first principles together with its existing numerical approximation methods; (ii) to study feedback particle filters as a way of avoiding resampling in particle filters; (iii) to study joint state and parameter estimation in time-continuous settings; and (iv) to apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô and Stratonovich integrals and stochastic partial differential equations is introduced in anticipation of feedback particle filters. With these ideas, and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on a coupling of prediction and analysis measures are proposed. They register better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation to the advection and wave equations with spatially varying velocity. Two methods are employed: Metropolis-Hastings with filter likelihood, and a dual filter comprising a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.
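The extended-state-space route to joint state and parameter estimation can be sketched with a toy particle filter in which each particle carries the pair (x, a), and a small parameter jitter (artificial dynamics) keeps the parameter ensemble from collapsing; the model and all numbers below are invented for illustration.

```python
import numpy as np

def joint_particle_filter(ys, n=1000, sigma_x=0.3, sigma_y=0.3, jitter=0.01, seed=3):
    """Estimate the dynamics coefficient `a` jointly with the state."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n)           # state particles
    a = rng.uniform(0.0, 1.0, n)          # parameter particles (unknown coefficient)
    for y in ys:
        a = a + rng.normal(0.0, jitter, n)    # artificial parameter dynamics
        x = a * x + rng.normal(0.0, sigma_x, n)
        logw = -0.5 * ((y - x) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)      # multinomial resampling
        x, a = x[idx], a[idx]
    return float(a.mean())

# synthetic data generated with true a = 0.8
rng = np.random.default_rng(4)
true_a, xt, ys = 0.8, 1.0, []
for _ in range(100):
    xt = true_a * xt + 0.3 * rng.normal()
    ys.append(xt + 0.3 * rng.normal())
a_hat = joint_particle_filter(np.array(ys))
```

The jitter is the crude analogue of the more principled dual-filter and feedback-particle-filter constructions studied in the thesis: without it, resampling would quickly reduce the parameter ensemble to a single value.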
This work demonstrates the fabrication of 1D nanostrands composed of stimuli-responsive microgels. Microgels are well-known materials able to respond to various stimuli from the outer environment. Since these microgels respond to an external stimulus via a volume change, a targeted mechanical response can be achieved. By carefully choosing the right composition of the polymer matrix, microgels can be designed to react precisely to the targeted stimuli (e.g. drug delivery via pH and temperature changes, or selective contractions through changes in electrical current).
This work aimed to create flexible nano-filaments capable of fast anisotropic contractions similar to muscle filaments. For the fabrication of such filaments or strands, nanostructured templates (PDMS wrinkles) were chosen for their facile, low-cost fabrication and the versatile tunability of their dimensions. Additionally, wrinkling is a well-known lithography-free method that enables the fabrication of nanostructures in a reproducible manner and with high long-range periodicity.
In Chapter 2.1, it was shown for the first time that microgels, as soft matter particles, can be aligned into densely packed microgel arrays of various lateral dimensions. The alignment of microgels with different compositions (e.g. VCL/AAEM, NIPAAm, NIPAAm/VCL and charged microgels) was demonstrated using different assembly techniques (e.g. spin-coating, template-confined molding). One experimental parameter was kept constant: the SiOx surface composition of the templates and substrates (e.g. oxidized PDMS wrinkles, Si-wafers and glass slides). The fabrication of nanoarrays proved feasible with all tested microgel types. Although the microgels exhibited different deformability when aligned on a flat surface, they retained their thermo-responsivity and swelling behavior.
Toward the fabrication of 1D microgel strands, interparticle connectivity was sought. This was achieved via different cross-linking methods (i.e. cross-linking via UV irradiation and host-guest complexation), discussed in Chapter 2.2. The microgel arrays created by the different assembly methods and microgel types were tested for their cross-linking suitability. It was observed that NIPAAm-based microgels cannot be cross-linked with UV light. Furthermore, these microgels exhibit a strong surface-particle interaction and therefore could not be detached from the given substrates. In contrast, VCL/AAEM-based microgels could both be UV cross-linked, based on the keto-enol tautomerism of the AAEM copolymer, and be detached from the substrate due to their lower adhesion energy towards SiOx surfaces. With VCL/AAEM microgels, long one-dimensional microgel strands could be re-dispersed in water for further analysis. It was also shown that at least one lateral dimension of the freely dispersed 1D microgel strands is easily controllable by adjusting the wavelength of the wrinkled template. For further work, only VCL/AAEM-based microgels were used, in order to focus on the main aim of this work, i.e. the fabrication of 1D microgel nanostrands.
As an alternative to the unspecific and harsh UV cross-linking, host-guest complexation via diazobenzene cross-linkers and cyclodextrin hosts was explored. The idea behind this approach was to enable a future construction-kit-like approach by incorporating cyclodextrin comonomers in a broad variety of particle systems (e.g. microgels, nanoparticles). For this purpose, VCL/AAEM microgels were copolymerized with different amounts of mono-acrylate-functionalized β-cyclodextrin (CD). After successfully testing the cross-linking capability in solution, the cross-linking of aligned VCL/AAEM/CD microgels was attempted. Although the cross-linking worked well, once the single arrays came into contact with each other, they agglomerated. Residual amounts of mono-complexed diazobenzene linkers were suspected as the reason for this behavior. Thus, end-capping strategies (e.g. excess amounts of β-cyclodextrin and coverage with azobenzene-functionalized AuNPs) were tried but were unsuccessful. On further consideration, entropy effects were taken into account, which favor the release of complexed diazobenzene linker and thereby lead to agglomeration. To circumvent this entropy-driven effect, a multifunctional polymer with 50% azobenzene groups (Harada polymer) was used. First experiments with this polymer showed promising results regarding a less pronounced agglomeration (Figure 77), so this approach could be pursued in the future. In this chapter it was found that, in contrast to pearl-necklace and ribbon-like formations, particle alignment in zigzag formation provided the best compromise in terms of stability in dispersion (see Figure 44a and Figure 51) while maintaining sufficient flexibility.
For this reason, microgel strands in zigzag formation were used for the motion analysis described in Chapter 2.3. The aim was to observe the properties of unrestrained microgel strands in solution (e.g. diffusion behavior, rotational properties and, ideally, anisotropic contraction after a temperature increase). Initially, 1D microgel strands were manipulated via AFM in a liquid cell setup. It was observed that the strands required a higher load force than single microgels to be detached from the surface. However, the AFM could not detach the strands in a controllable manner; attempts resulted either in the complete removal of single microgel particles or in tearing the strands off the surface. For this reason, confocal microscopy was used to observe the motion behavior of unrestrained microgel strands in solution. Furthermore, coating the substrate surface with a repulsive polymer film was found to be beneficial in hindering adsorption of the strands. Confocal and wide-field microscopy videos showed that the microgel strands exhibit translational and rotational diffusive motion in solution without perceptible bending. Unfortunately, with these methods the detection of the anisotropic stimuli-responsive contraction of the freely moving microgel strands was not possible. To summarize, the flexibility of the microgel strands is more comparable to the mechanical behavior of a semi-flexible cable than to that of a yarn. The strands studied here consist of dozens or even hundreds of discrete submicron units strung together by cross-linking, having few parallels in nanotechnology.
With the insights gained in this work on microgel-surface interactions, a targeted functionalization of the template and substrate surfaces can be conducted in the future to actively prevent unwanted microgel adsorption for a given microgel system (e.g. PVCL and polystyrene coating). This measure would make the discussed alignment methods more broadly applicable. As shown herein, the assembly methods enable a versatile microgel alignment (e.g. microgel meshes, double and triple strands). Going further, one could use more complex templates (e.g. ceramic rhombs and star-shaped wrinkles (Figure 14)) to expand the possibilities of microgel alignment and to precisely control aspect ratios (e.g. microgel rods with homogeneous size distributions).
Supermassive black holes reside in the hearts of almost all massive galaxies. Their evolutionary path seems to be strongly linked to the evolution of their host galaxies, as implied by several empirical relations between the black hole mass (M BH ) and different host galaxy properties. The physical driver of this co-evolution is, however, still not understood. More mass measurements over homogeneous samples and a detailed understanding of systematic uncertainties are required to fathom the origin of the scaling relations.
In this thesis, I present mass estimates of the supermassive black holes in the nuclei of one late-type and thirteen early-type galaxies. Our SMASHING sample extends from the intermediate to the massive galaxy mass regime and was selected to fill in gaps in the number of galaxies along the scaling relations. All galaxies were observed at high spatial resolution, making use of the adaptive-optics mode of integral field unit (IFU) instruments on state-of-the-art telescopes (SINFONI, NIFS, MUSE). I extracted the stellar kinematics from these observations and constructed dynamical Jeans and Schwarzschild models to estimate the mass of the central black holes robustly. My new mass estimates increase the number of early-type galaxies with measured black hole masses by 15%. The seven measured galaxies with nuclear light deficits (’cores’) augment the sample of cored galaxies with measured black holes by 40%. Next to determining massive black hole masses, evaluating the accuracy of black hole masses is crucial for understanding the intrinsic scatter of the black hole-host galaxy scaling relations. I tested various sources of systematic uncertainty on my derived mass estimates.
The M BH estimate of the single late-type galaxy in the sample yielded an upper limit, which I could constrain very robustly. I tested the effects of dust, mass-to-light ratio (M/L) variation, and dark matter on the measured M BH . Based on these tests, the typically assumed constant M/L can be an adequate assumption to account for the small amounts of dark matter in the center of that galaxy. I also tested the effect of a spatially varying M/L on the M BH measurement of a second galaxy. When stellar M/L variations were considered in the dynamical modeling, the measured M BH decreased by 30%. In the future, this test should be performed on additional galaxies to learn how the assumption of a constant M/L biases the estimated black hole masses.
Based on our upper-limit mass measurement, I confirm previous suggestions that resolving the predicted BH sphere of influence is not a strict condition for measuring black hole masses. Instead, it is only a rough guide for the detection of the black hole if high-quality, high signal-to-noise IFU data are used for the measurement. About half of our sample consists of massive early-type galaxies which show nuclear surface brightness cores and signs of triaxiality. While these types of galaxies are typically modeled with axisymmetric modeling methods, the effects on M BH are not yet well studied. The massive galaxies of our sample are well suited to test the effect of different stellar dynamical models on the measured black hole mass in evidently triaxial galaxies. I have compared spherical Jeans and axisymmetric Schwarzschild models and will add triaxial Schwarzschild models to this comparison in the future. The constructed Jeans and Schwarzschild models mostly disagree with each other and cannot reproduce many of the triaxial features of the galaxies (e.g., nuclear sub-components, prolate rotation). The consequence of assuming axisymmetry for evidently triaxial galaxies on the accuracy of M BH, and its impact on the black hole-host galaxy relation, needs to be carefully examined in the future.
In the sample of galaxies with published M BH , we find measurements based on different dynamical tracers, requiring different observations, assumptions, and methods. Crucially, different tracers do not always give consistent results. I have used two independent tracers (cold molecular gas and stars) to estimate M BH in a regular galaxy of our sample. While the two estimates are consistent within their errors, the stellar-based measurement is twice as high as the gas-based. Similar trends have also been found in the literature. Therefore, a rigorous test of the systematics associated with the different modeling methods is required in the future. I caution to take the effects of different tracers (and methods) into account when discussing the scaling relations.
I conclude this thesis by comparing my galaxy sample with the compilation of galaxies with measured black holes from the literature, also adding six SMASHING galaxies which were published outside of this thesis. None of the SMASHING galaxies deviates significantly from the literature measurements. Their inclusion among the published early-type galaxies causes a shift towards a shallower slope for the M BH - effective velocity dispersion relation, which is mainly driven by the massive galaxies of our sample. More unbiased and homogeneous measurements are needed in the future to determine the shape of the relation and understand its physical origin.
For many years, psycholinguistic evidence has been predominantly based on findings from native speakers of Indo-European languages, primarily English, thus providing a rather limited perspective on the human language system. In recent years a growing body of experimental research has been devoted to broadening this picture, testing a wide range of speakers and languages and aiming to understand the factors that lead to variability in linguistic performance. The present dissertation investigates sources of variability within the morphological domain, examining how and to what extent morphological processes and representations are shaped by specific properties of languages and speakers. Firstly, the present work focuses on a less explored language, Hebrew, to investigate how the unique non-concatenative morphological structure of Hebrew, namely the non-linear combination of consonantal roots and vowel patterns to form lexical entries (L-M-D + CiCeC = limed ‘teach’), affects morphological processes and representations in the Hebrew lexicon. Secondly, a less investigated population was tested: late learners of a second language. We directly compare native (L1) and non-native (L2) speakers, specifically highly proficient and immersed late learners of Hebrew. Throughout all publications, we have focused on the morphological phenomenon of inflectional classes (called binyanim; singular: binyan), comparing productive (class Piel, e.g., limed ‘teach’) and unproductive (class Paal, e.g., lamad ‘learn’) verbal inflectional classes.
By using this test case, two psycholinguistic aspects of morphology were examined: (i) how morphological structure affects online recognition of complex words, using masked priming (Publications I and II) and cross-modal priming (Publication III) techniques, and (ii) what type of cues are used when extending morpho-phonological patterns to novel complex forms, a process referred to as morphological generalization, using an elicited production task (Publication IV).
The findings obtained in the four manuscripts, either published or under review, provide significant insights into the role of productivity in Hebrew morphological processing and generalization in L1 and L2 speakers. Firstly, the present L1 data revealed a close relationship between the productivity of Hebrew verbal classes and the recognition process, as revealed by both priming techniques. The consonantal root was accessed only in the productive class (Piel) but not in the unproductive class (Paal). Another dissociation between the two classes was revealed in cross-modal priming, yielding a semantic relatedness effect only for Paal but not for Piel primes. These findings are taken to reflect that Hebrew mental representations display a balance between stored, undecomposable, unstructured stems (Paal) and decomposed, structured stems (Piel), in a manner similar to a typical dual-route architecture, showing that the Hebrew mental lexicon is less unique than previously claimed in psycholinguistic research. The results of the generalization study, however, indicate that there are still substantial differences between the inflectional classes of Hebrew and those of other Indo-European languages, particularly in the type of information they rely on in generalization to novel forms: Hebrew binyan generalization relies more on argument structure cues and less on phonological cues.
Secondly, clear L1/L2 differences were observed in the sensitivity to abstract morphological and morpho-syntactic information during complex word recognition and generalization. While L1 Hebrew speakers were sensitive to the binyan information during recognition, expressed by the contrast in root priming, L2 speakers showed similar root priming effects for both classes, but only when the primes were presented in an infinitive form. A root priming effect was not obtained for primes in a finite form. These patterns are interpreted as evidence for a reduced sensitivity of L2 speakers to morphological information, such as information about inflectional classes, and as evidence for processing costs in the recognition of forms carrying complex morpho-syntactic information. Reduced reliance on structural information cues was also found in the production of novel verbal forms, where the L2 group displayed a weaker effect of argument structure for Piel responses in comparison to the L1 group. Given the L2 results, we suggest that morphological and morpho-syntactic information remains challenging for late bilinguals, even at high proficiency levels.
The Central Andes host large reserves of base and precious metals. In 2017, the region accounted for an important share of worldwide mining activity. Three principal types of deposits have been identified and studied: 1) porphyry-type deposits, extending from central Chile and Argentina to Bolivia and northern Peru; 2) iron oxide-copper-gold (IOCG) deposits, extending from central Peru to central Chile; and 3) epithermal tin polymetallic deposits, extending from southern Peru to northern Argentina, which make up a large part of the deposits of the Bolivian Tin Belt (BTB). Deposits in the BTB can be divided into two major types: (1) tin-tungsten-zinc pluton-related polymetallic deposits, and (2) tin-silver-lead-zinc epithermal polymetallic vein deposits.
Mina Pirquitas is a tin-silver-lead-zinc epithermal polymetallic vein deposit located in north-west Argentina that used to be one of the most important tin-silver producing mines in the country. It has been interpreted as part of the BTB, and it shares similar mineral associations with southern pluton-related BTB epithermal deposits. Two major mineralization events, related to three pulses of magmatic fluids mixed with meteoric water, have been identified. The first event can be divided into two stages: 1) stage I-1, with quartz, pyrite, and cassiterite precipitating from fluids between 233 and 370 °C with salinities between 0 and 7.5 wt%, corresponding to a first pulse of fluids; and 2) stage I-2, with sphalerite and tin-silver-lead-antimony sulfosalts precipitating from fluids between 213 and 274 °C with salinities up to 10.6 wt%, corresponding to a new pulse of magmatic fluids in the hydrothermal system. Mineralization event II deposited the richest silver ores at Pirquitas. Fluid temperatures and salinities of event II range from 190 to 252 °C and from 0.9 to 4.3 wt%, respectively, corresponding to the waning supply of magmatic fluids. Noble gas isotopic compositions and concentrations in ore-hosted fluid inclusions demonstrate a significant contribution of magmatic fluids to the Pirquitas mineralization, although no intrusive rocks are exposed in the mine area.
Lead and sulfur isotopic measurements on ore minerals show that Pirquitas shares a similar signature with southern pluton-related polymetallic deposits in the BTB. Furthermore, most of the sulfur isotopic values of sulfide and sulfosalt minerals from Pirquitas fall in the field of sulfur derived from igneous rocks. This suggests that the main contribution of sulfur to the hydrothermal system at Pirquitas is likely magma-derived. The precise age of the deposit is still unknown, but the wolframite dating result of 2.9 ± 9.1 Ma and local structural observations suggest that the late mineralization event is younger than 12 Ma.
In recent years there has been increasing awareness that historical land cover changes and associated land use legacies may be important drivers of present-day species richness and biodiversity, owing to time-delayed extinctions or colonizations in response to historical environmental changes. Historically altered habitat patches may therefore exhibit an extinction debt or colonization credit and can be expected to lose or gain species in the future. However, extinction debts and colonization credits are difficult to detect, and their actual magnitudes or payments have rarely been quantified, because species richness patterns and dynamics are also shaped by recent environmental conditions and recent environmental changes.
In this thesis we aimed to determine patterns of herb-layer species richness and recent species richness dynamics of forest herb layer plants and link those patterns and dynamics to historical land cover changes and associated land use legacies. The study was conducted in the Prignitz, NE-Germany, where the forest distribution remained stable for the last ca. 100 years but where a) the deciduous forest area had declined by more than 90 per cent (leaving only remnants of "ancient forests"), b) small new forests had been established on former agricultural land ("post-agricultural forests"). Here, we analyzed the relative importance of land use history and associated historical land cover changes for herb layer species richness compared to recent environmental factors and determined magnitudes of extinction debt and colonization credit and their payment in ancient and post-agricultural forests, respectively.
We showed that present-day species richness patterns were still shaped by historical land cover changes reaching back more than a century. Although recent environmental conditions were largely comparable, we found significantly more forest specialists, species with short-distance dispersal capabilities and clonals in ancient forests than in post-agricultural forests. Those species richness differences were largely attributable to a colonization credit in post-agricultural forests of up to 9 species (average 4.7), while the extinction debt in ancient forests had almost completely been paid. Environmental legacies from historical agricultural land use played a minor role for species richness differences. Instead, patch connectivity was most important. Species richness in ancient forests was still dependent on historical connectivity, indicating a last glimpse of an extinction debt, and the colonization credit was highest in isolated post-agricultural forests. In post-agricultural forests that were better connected or directly adjacent to ancient forest patches, the colonization credit was much smaller, and we were able to verify a gradual payment of the colonization credit from 2.7 to 1.5 species over the last six decades.
This is a publication-based dissertation comprising three original research studies (one published, one submitted and one ready for submission; status March 2019). The dissertation introduces a generic computer model as a tool to investigate the behaviour and population dynamics of animals in cyclic environments. The model is further employed for analysing how migratory birds respond to various scenarios of altered food supply under global change. Here, ecological and evolutionary time-scales are considered, as well as the biological constraints and trade-offs the individual faces, which ultimately shape response dynamics at the population level. Further, the effect of fine-scale temporal patterns in resource supply are studied, which is challenging to achieve experimentally. My findings predict population declines, altered behavioural timing and negative carry-over effects arising in migratory birds under global change. They thus stress the need for intensified research on how ecological mechanisms are affected by global change and for effective conservation measures for migratory birds. The open-source modelling software created for this dissertation can now be used for other taxa and related research questions. Overall, this thesis improves our mechanistic understanding of the impacts of global change on migratory birds as one prerequisite to comprehend ongoing global biodiversity loss. The research results are discussed in a broader ecological and scientific context in a concluding synthesis chapter.
Quantum field theory on curved spacetimes is understood as a semiclassical approximation of some quantum theory of gravitation, which models a quantum field under the influence of a classical gravitational field, that is, a curved spacetime. The most remarkable effect predicted by this approach is the creation of particles by the spacetime itself, represented, for instance, by Hawking's evaporation of black holes or the Unruh effect. On the other hand, these aspects already suggest that certain cornerstones of Minkowski quantum field theory, more precisely a preferred vacuum state and, consequently, the concept of particles, do not have sensible counterparts within a theory on general curved spacetimes. Likewise, the implementation of covariance in the model has to be reconsidered, as curved spacetimes usually lack any non-trivial global symmetry. Whereas this latter issue has been resolved by introducing the paradigm of locally covariant quantum field theory (LCQFT), the absence of a reasonable concept for distinct vacuum and particle states on general curved spacetimes has become manifest even in the form of no-go theorems.
Within the framework of algebraic quantum field theory, one first introduces observables, while states enter the game only afterwards by assigning expectation values to them. Even though the construction of observables is based on physically motivated concepts, there is still a vast number of possible states, and many of them are not reasonable from a physical point of view. We infer that this notion is still too general, that is, further physical constraints are required. For instance, when dealing with a free quantum field theory driven by a linear field equation, it is natural to focus on so-called quasifree states. Furthermore, a suitable renormalization procedure for products of field operators is vitally important. This particularly concerns the expectation values of the energy-momentum tensor, which correspond to distributional bisolutions of the field equation on the curved spacetime. J. Hadamard's theory of hyperbolic equations provides a certain class of bisolutions with fixed singular part, which therefore allow for an appropriate renormalization scheme.
By now, this specification of the singularity structure is known as the Hadamard condition and widely accepted as the natural generalization of the spectral condition of flat quantum field theory. Moreover, due to Radzikowski's celebrated results, it is equivalent to a local condition, namely one on the wave front set of the bisolution. This formulation made the powerful tools of microlocal analysis, developed by Duistermaat and Hörmander, available for the verification of the Hadamard property as well as the construction of corresponding Hadamard states, which initiated much progress in this field. However, although indispensable for the investigation of the characteristics of operators and their parametrices, microlocal analysis is not practicable for the study of their non-singular features, and central results are typically stated only up to smooth objects. Consequently, Radzikowski's work almost directly led to existence results and, moreover, to a concrete pattern for the construction of Hadamard bidistributions via a Hadamard series. Nevertheless, the remaining properties (bisolution, causality, positivity) are ensured only modulo smooth functions.
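For reference, Radzikowski's wave front set characterisation is commonly written as follows (a standard formulation whose notation may differ from that of the thesis): a quasifree state with two-point bidistribution \(\Lambda\) is Hadamard if and only if

```latex
% Microlocal spectrum condition (standard form; notation may differ
% from the thesis). \Lambda is the two-point bidistribution.
\operatorname{WF}(\Lambda) =
  \bigl\{ (x_1,\xi_1;\,x_2,-\xi_2) \in T^*(M \times M) \setminus \{0\} \;:\;
  (x_1,\xi_1) \sim (x_2,\xi_2),\ \xi_1 \in \overline{V}^{+}_{x_1} \bigr\}
```

where \((x_1,\xi_1) \sim (x_2,\xi_2)\) means that \(x_1\) and \(x_2\) are connected by a null geodesic to which \(\xi_1\) is cotangent and along which \(\xi_1\) is parallel-transported into \(\xi_2\), and \(\overline{V}^{+}\) denotes the closed forward light cone.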
It is the subject of this thesis to complete this construction for linear and formally self-adjoint wave operators acting on sections in a vector bundle over a globally hyperbolic Lorentzian manifold. Based on Wightman's solution of d'Alembert's equation on Minkowski space and the construction of the advanced and retarded fundamental solutions, we set up a Hadamard series for local parametrices and derive global bisolutions from them. These are of Hadamard form, and we show the existence of smooth bisections such that the sum also satisfies the remaining properties exactly.
The Government will create a motivated, merit-based, performance-driven, and professional civil service that is resistant to temptations of corruption and which provides efficient, effective and transparent public services that do not force customers to pay bribes.
— (GoIRA, 2006, p. 106)
We were in a black hole! We had an empty glass and had nothing from our side to fill it with! Thus, we accepted anything anybody offered; that is how our glass was filled; that is how we reformed our civil service.
— (Former Advisor to IARCSC, personal communication, August 2015)
How and under what conditions were the post-Taleban Civil Service Reforms of Afghanistan initiated? What were the main components of the reforms? What were their objectives, and to what extent were they achieved? Who were the leading domestic and foreign actors involved in the process? Finally, what specific factors influenced the success and failure of Afghanistan’s Civil Service Reforms since 2002? Guided by these fundamental questions, this research studies the wicked process of reforming the Afghan civil service in an environment where a variety of contextual, programmatic, and external factors affected the design and implementation of reforms that were entirely funded and technically assisted by the international community.
Focusing on the core components of reforms—recruitment, remuneration, and appraisal of civil servants—the qualitative study provides a detailed picture of the pre-reform civil service and its major human resources developments in the past. Following discussions on the content and purposes of the main reform programs, it will then analyze the extent of changes in policies and practices by examining the outputs and effects of these reforms.
Moreover, the study identifies the specific factors that led the reforms toward a situation where most of the intended objectives remain unachieved. In doing so, it explores and explains how the overwhelming influence of international actors with conflicting interests, large-scale corruption, political interference, networks of patronage, institutionalized nepotism, culturally accepted cronyism and widespread ethnic favoritism created a highly complex environment and prevented the reforms from transforming Afghanistan’s patrimonial civil service into a professional civil service driven by performance and merit.
The North China Plain (NCP) is one of the most productive and intensive agricultural regions in China. High doses of mineral nitrogen (N) fertiliser, often combined with flood irrigation, are applied, resulting in N surplus, groundwater depletion and environmental pollution. The objectives of this thesis were to use the HERMES model to simulate the N cycle in winter wheat (Triticum aestivum L.)–summer maize (Zea mays L.) double crop rotations and to demonstrate the performance of the HERMES model, of the new ammonia volatilisation sub-module and of the new nitrification inhibition tool in the NCP. Further objectives were to assess the model's potential to save N and water at plot and county scales, as well as over the short and long term. Additionally, improved management strategies were to be identified with the help of a model-based nitrogen fertiliser recommendation (NFR) and adapted irrigation.
Results showed that the HERMES model performed well under the growing conditions of the NCP and was able to describe the relevant processes related to soil–plant interactions concerning N and water during a 2.5-year field experiment. No differences in grain yield could be found between the real-time model-based NFR and the other treatments of the plot-scale experiments in Quzhou County. Simulations with increasing amounts of irrigation resulted in significantly higher N leaching, higher N requirements of the NFR and reduced yields. Thus, conventional flood irrigation as currently practised by farmers bears great uncertainties, and exact irrigation amounts should be known for future simulation studies. In the best-practice scenario simulation on plot scale, N input and N leaching, but also irrigation water, could be reduced strongly within 2 years. Thus, the model-based NFR in combination with adapted irrigation had the highest potential to reduce nitrate leaching, compared with farmers' practice and mineral N (Nmin)-reduced treatments. The calibrated and validated ammonia volatilisation sub-module of the HERMES model also worked well under the climatic and soil conditions of northern China, and simple ammonia volatilisation approaches gave satisfactory results compared with process-oriented approaches. In the simulation with ammonium sulphate nitrate with nitrification inhibitor (ASNDMPP), ammonia volatilisation was higher than in the simulation without nitrification inhibitor, while the result for nitrate leaching was the opposite. Although nitrification worked well in the model, nitrification-borne nitrous oxide emissions should be considered in future. Simulated annual long-term (31-year) N losses in the whole of Quzhou County in Hebei Province were 296.8 kg N ha−1 under the common farmers' practice treatment and 101.7 kg N ha−1 under the optimised treatment including NFR and automated irrigation (OPTai).
Spatial differences in simulated N losses throughout Quzhou County could only be attributed to different N inputs. Compared with farmers' practice, simulations of an optimised treatment could save on average more than 260 kg N ha−1 a−1 of fertiliser input and 190 kg N ha−1 a−1 of N losses, as well as around 115.7 mm a−1 of water. These long-term simulation results showed a lower N and water saving potential than the short-term simulations and underline the necessity of long-term simulations to overcome the effect of high initial N stocks in the soil.
Additionally, the OPTai treatment worked best on clay loam soil, except for a high simulated denitrification loss, while the simulations using farmers' practice irrigation could not match the actual water needs, resulting in yield decline, especially for winter wheat. Thus, a precise adaptation of management to actual weather conditions and plant growth needs is necessary for future simulations. However, the optimised treatments did not seem able to maintain the soil organic matter pools, even with full crop residue input; extra organic inputs seem to be required to maintain soil quality in the optimised treatments.
With regard to data input requirements, HERMES is a relatively simple model for simulating the N cycle. It can offer interpretation of management options at plot, county and regional scales for extension and research staff. In combination with other N- and water-saving methods, the model also promises to be a useful tool.
Analysis of supramolecular assemblies of NE81, the first lamin protein in a non-metazoan organism
(2019)
Nuclear lamins are nucleus-specific intermediate filaments forming a network located at the inner nuclear membrane of the nuclear envelope. Together with proteins of the inner nuclear membrane, they form the nuclear lamina, regulating nuclear shape and gene expression, among other functions. The amoebozoan Dictyostelium NE81 protein is a suitable candidate for an evolutionarily conserved lamin protein in this non-metazoan organism. It shares the domain organization of metazoan lamins and fulfils major lamin functions in Dictyostelium. Moreover, field-emission scanning electron microscopy (feSEM) images of NE81 expressed on Xenopus oocyte nuclei revealed filamentous structures with an overall appearance highly reminiscent of metazoan Xenopus lamin B2. For the classification as a lamin-like or a bona fide lamin protein, a better understanding of the supramolecular NE81 structure was necessary. Yet NE81 carrying a large N-terminal GFP-tag turned out to be an unsuitable source for protein isolation and characterization; GFP-NE81 expressed in Dictyostelium NE81 knock-out cells exhibited an abnormal distribution, an indicator of inaccurate assembly of GFP-tagged NE81. Hence, a shorter 8×HisMyc construct was the tag of choice to investigate the formation and structure of NE81 assemblies. One strategy was the structural analysis of NE81 in situ at the outer nuclear membrane in Dictyostelium cells; NE81 without a functional nuclear localization signal (NLS) forms assemblies at the outer face of the nucleus. Ultrastructural feSEM pictures of NE81ΔNLS nuclei showed a few filaments of the expected size but no repetitive filamentous structures. This strategy should also be established for metazoan lamins in order to facilitate their structural analysis. However, heterologously expressed Xenopus and C. elegans lamins showed no uniform localization at the outer nuclear envelope of Dictyostelium, and hence no further ultrastructural analysis was undertaken.
For in vitro assembly experiments a Dictyostelium mutant was generated, expressing NE81 without the NLS and the membrane-anchoring isoprenylation site (HisMyc-NE81ΔNLSΔCLIM). The cytosolic NE81 clusters were soluble at high ionic strength and were purified from Dictyostelium extracts using Ni-NTA Agarose. Widefield immunofluorescence microscopy, super-resolution light microscopy and electron microscopy images of purified NE81 showed its capability to form filamentous structures at low ionic strength, as described previously for metazoan lamins. Introduction of a phosphomimetic point mutation (S122E) into the CDK1-consensus sequence of NE81 led to disassembled NE81 protein in vivo, which could be reversibly stimulated to form supramolecular assemblies by blue light exposure.
The results of this work reveal that NE81 has to be considered a bona fide lamin, since it is able to form filamentous assemblies. Furthermore, they highlight Dictyostelium as a non-mammalian model organism with a well-characterized nuclear envelope containing all relevant protein components known in animal cells.
The externalization of any communication is subject to the speaker's mode of access to the information conveyed. The observations drawn from our data show that all eight verbs studied express mechanisms of knowledge acquisition, which, borrowing from Vogeleer (1995: 92), we have called "cognitive access to knowledge". It is this intrinsic property that earns these terms the designation of evidential verbs. In other words, they are elements that make explicit the speaker's processes of access to knowledge: a source of knowledge that may be direct (sight, touch, hearing, smell...), indirect (hearsay), or, above all, inferred. By inference we mean a process of analysing and relating elements (premises) that allow a conclusion to be drawn by deduction, induction or abduction. Depending on how reliable these premises tend to be, the inferential processes will entail epistemic values to varying degrees.
On the rhetorical-syntactic level, our analyses have shown that all the cognitive verbs (CVs) in this study require the occurrence of other sentence constituents (actants) that they govern. It is thanks to this verbal valency that they retain governing power in asyndetic constructions. They are thus the matrices of the elements to which they relate. As for the kinetics of these verbs, it has both a rhetorical and a syntactic function: this particular, often disruptive arrangement expresses a syntactic figure with rhetorical effect, the hyperbaton. It is an atypical construction which, through its nonconformist arrangements, gives the utterance a regressive sense and lends salience to the terms thereby highlighted.
Predation drives coexistence, evolution and population dynamics of species in food webs, and has strong impacts on related ecosystem functions (e.g. primary production). The effect of predation on these processes largely depends on the trade-offs between functional traits in the predator and prey community. Trade-offs between defence against predation and competitive ability, for example, allow for prey speciation and predator-mediated coexistence of prey species with different strategies (defended or competitive), which may stabilize the overall food web dynamics. While the importance of such trade-offs for coexistence is widely known, we lack an understanding and the empirical evidence of how the variety of differently shaped trade-offs at multiple trophic levels affect biodiversity, trait adaptation and biomass dynamics in food webs. Such mechanistic understanding is crucial for predictions and management decisions that aim to maintain biodiversity and the capability of communities to adapt to environmental change ensuring their persistence.
In this dissertation, after a general introduction to predator-prey interactions and trade-offs, I first focus on trade-offs in the prey between qualitatively different types of defence (e.g. camouflage or escape behaviour) and their costs. Using a simple predator-prey model, I show that these different types lead to different patterns of predator-mediated coexistence and population dynamics. In a second step, I elaborate quantitative aspects of trade-offs and demonstrate that the shape of the trade-off curve, in combination with trait-fitness relationships, strongly affects competition among different prey types: either specialized species with extreme trait combinations (undefended or completely defended) coexist, or a species with an intermediate defence level dominates. The developed theory on trade-off shapes and coexistence is kept general, allowing for applications beyond defence-competitiveness trade-offs. Thirdly, I test the theory on trade-off shapes against a long-term field data set of phytoplankton from Lake Constance. The measured concave trade-off between defence and growth governs seasonal trait changes of phytoplankton in response to an altering grazing pressure by zooplankton and affects the maintenance of trait variation in the community. In a fourth step, I analyse the interplay of different trade-offs at multiple trophic levels with plankton data from Lake Constance and a corresponding tritrophic food web model. The results show that the trait and biomass dynamics of the three trophic levels are interrelated in a trophic biomass-trait cascade, leading to unintuitive patterns of trait changes that are reversed compared with predictions from bitrophic systems.
Finally, in the general discussion, I extract main ideas on trade-offs in multitrophic systems, develop a graphical theory on trade-off-based coexistence, discuss the interplay of intra- and interspecific trade-offs, and end with a management-oriented view on the results of the dissertation, describing how food webs may respond to future global changes, given their trade-offs.
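The competition outcome sketched above (coexisting specialists at extreme trait values versus dominance of an intermediate defence level) can be illustrated numerically. The following toy model is not the thesis model: all parameter values are hypothetical, and it assumes, for simplicity, that fitness is linear in growth and in avoided predation mortality, so that the curvature of the trade-off alone decides where fitness peaks.

```python
import numpy as np

def fitness(d, curve, predation=0.5):
    """Per-capita fitness along a hypothetical defence-growth trade-off.

    d:     defence level in [0, 1]
    curve: exponent shaping the trade-off; growth = 1 - d**curve,
           so curve > 1 gives a concave growth-vs-defence curve
           and curve < 1 a convex one.
    Fitness = growth benefit minus predation mortality, with mortality
    reduced in proportion to the defence level (linear assumption).
    """
    growth = 1.0 - d**curve
    mortality = predation * (1.0 - d)
    return growth - mortality

d = np.linspace(0.0, 1.0, 201)

# Concave trade-off (curve = 2): fitness peaks at an intermediate
# defence level, so an intermediately defended type dominates.
best_concave = d[np.argmax(fitness(d, curve=2.0))]

# Convex trade-off (curve = 0.5): fitness is maximised at an extreme
# trait value, favouring specialists.
best_convex = d[np.argmax(fitness(d, curve=0.5))]
```

With these illustrative parameters, the concave case yields an interior optimum (d = 0.25), while the convex case pushes the optimum to the boundary (d = 0), matching the qualitative dichotomy described in the text.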
Predator-prey interactions provide central links in food webs. These interactions are directly or indirectly impacted by a number of factors, ranging from physiological characteristics of individual organisms, over specifics of their interaction, to impacts of the environment. They may generate the potential for the application of different strategies by predators and prey. Within this thesis, I modelled predator-prey interactions and investigated a broad range of factors driving the application of certain strategies that affect the individuals or their populations. In doing so, I focused on phytoplankton-zooplankton systems as established model systems of predator-prey interactions.
At the level of predator physiology, I proposed, and partly confirmed, adaptations to fluctuating availability of co-limiting nutrients as beneficial strategies. These may allow organisms to store ingested nutrients or to regulate the effort put into nutrient assimilation. We found that these two strategies are beneficial at different fluctuation frequencies of the nutrients, but may positively interact at intermediate frequencies. The corresponding experiments supported our model results: the temporal structure of nutrient fluctuations indeed has strong effects on the juvenile somatic growth rate of Daphnia.
Predator co-limitation by energy and essential biochemical nutrients gave rise to another physiological strategy. High-quality prey species may render themselves indispensable in a scenario of predator-mediated coexistence by being the only source of essential biochemical nutrients, such as cholesterol. Thereby, the high-quality prey may even compensate for a lack of defense and ensure its persistence in competition with other, more defended prey species.
We found a similar effect in a model where algae and bacteria compete for nutrients. Now, being the only source of a compound that is required by the competitor (bacteria) prevented the competitive exclusion of the algae. In this case, the essential compounds were the organic carbon provided by the algae. Here again, being indispensable served as a prey strategy that ensured its coexistence.
The latter scenario also gave rise to the application of the two metabolic strategies of autotrophy and heterotrophy by algae and bacteria, respectively. We found that their coexistence allowed the recycling of resources in a microbial loop that would otherwise be lost. Instead, these resources were made available to higher trophic levels, increasing the trophic transfer efficiency in food webs.
Besides these factors originating from the functioning or composition of individuals, the predation process itself comprises the next higher level of factors shaping the predator-prey interaction. Here, I focused on defensive mechanisms and investigated multiple scenarios of static or adaptive combinations of prey defense and predator offense. I confirmed and extended earlier reports on the coexistence-promoting effects of a partially lower palatability of the prey community. When bacteria and algae coexist, a higher palatability of the bacteria may increase the average predator biomass, with the side effect of making the population dynamics more regular; this may facilitate experimental investigations and interpretations. If defense and offense are adaptive, organisms can maximize their growth rates. Besides this fitness-enhancing effect, I found that co-adaptation may provide the predator-prey system with the flexibility to buffer external perturbations.
On top of these rather internal factors, environmental drivers also affect predator-prey interactions. I showed that environmental nutrient fluctuations may create a spatio-temporal resource heterogeneity that selects for different predator strategies. I hypothesized that this might favour either storage or acclimation specialists, depending on the frequency of the environmental fluctuations.
We found that many of these factors promote the coexistence of different strategies and may therefore support and sustain biodiversity. Thus, they might be relevant for the maintenance of crucial ecosystem functions that also affect us humans. Besides this, the richness of factors that impact predator-prey interactions might explain why so many species, especially in the planktonic regime, are able to coexist.
Most of the matter in the universe consists of hydrogen. The hydrogen in the intergalactic medium (IGM), the matter between the galaxies, underwent a change of its ionisation state at the epoch of reionisation, at redshifts of roughly 6 < z < 10, or ~10^8 years after the Big Bang. At this time, the mostly neutral hydrogen in the IGM was ionised, but the source of the responsible hydrogen-ionising emission remains unclear. In this thesis I discuss the most likely candidates for the emission of this ionising radiation, a type of galaxy called Lyman alpha emitters (LAEs). As implied by their name, they emit Lyman alpha radiation, produced after a hydrogen atom has been ionised and recombines with a free electron. The ionising radiation itself (also called Lyman continuum emission) that drives this process inside the LAEs could also be responsible for ionising the IGM around those galaxies at the epoch of reionisation, given that enough Lyman continuum escapes. Through this mechanism, Lyman alpha and Lyman continuum radiation are closely linked, and both are studied to better understand the properties of high-redshift galaxies and the reionisation state of the universe.
Before I can analyse their Lyman alpha emission lines and the escape of Lyman continuum emission from them, the first step is the detection and correct classification of LAEs in integral field spectroscopic data, specifically taken with the Multi-Unit Spectroscopic Explorer (MUSE). After detecting emission line objects in the MUSE data, the task of classifying them and determining their redshift is performed with the graphical user interface QtClassify, which I developed during the work on this thesis. It uses the strength of the combination of spectroscopic and photometric information that integral field spectroscopy offers to enable the user to quickly identify the nature of the detected emission lines. The reliable classification of LAEs and determination of their redshifts is a crucial first step towards an analysis of their properties.
Through radiative transfer processes, the properties of the neutral hydrogen clouds in and around LAEs are imprinted on the shape of the Lyman alpha line. Thus, after identifying the LAEs in the MUSE data, I analyse the properties of the Lyman alpha emission line, such as the equivalent width (EW) distribution, the asymmetry and width of the line, and the double-peak fraction. I challenge the common method of displaying EW distributions as histograms without taking the survey limits into account and construct an EW distribution function that is more independent of those limits and better reflects the properties of the underlying population of galaxies. I illustrate this by comparing the fraction of high-EW objects between the two surveys MUSE-Wide and MUSE-Deep, both consisting of MUSE pointings (each one square arcminute in size) of different depths. In the 60 MUSE-Wide fields of one hour exposure time, I find a fraction of objects with extreme EWs above EW_0 > 240 Å of ~20%, while in the MUSE-Deep fields (9 fields with an exposure time of 10 hours and one with an exposure time of 31 hours) I find a fraction of only ~1%, which is due to the differences in the limiting line flux of the surveys. The highest EW I measure is EW_0 = 600.63 ± 110 Å, which hints at an unusual underlying stellar population, possibly with a very low metallicity.
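The rest-frame EW compared above follows from the standard relation EW_0 = EW_obs / (1 + z), with the observed EW given by the line flux divided by the continuum flux density. A minimal sketch (function names and the threshold handling are illustrative assumptions, not code from the thesis):

```python
def rest_frame_ew(line_flux, continuum_flux_density, z):
    """Rest-frame equivalent width of an emission line.

    The observed EW is the line flux divided by the continuum flux
    density; dividing by (1 + z) converts it to the rest frame.
    Units: line_flux in erg/s/cm^2, continuum flux density in
    erg/s/cm^2/Angstrom, so the EW comes out in Angstrom.
    """
    ew_obs = line_flux / continuum_flux_density
    return ew_obs / (1.0 + z)

def high_ew_fraction(rest_frame_ews, threshold=240.0):
    """Fraction of objects with rest-frame EW above a threshold (Angstrom).

    Note: a meaningful survey comparison must additionally account for
    the survey's limiting line flux, which is omitted in this sketch.
    """
    return sum(ew > threshold for ew in rest_frame_ews) / len(rest_frame_ews)
```

Because a shallower survey only detects lines above a higher flux limit, the raw fraction computed this way is biased by survey depth, which is exactly the caveat the limit-aware distribution function addresses.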
With the knowledge of the redshifts and positions of the LAEs detected in the MUSE-Wide survey, I also look for Lyman continuum emission coming from these galaxies and analyse the connection between Lyman continuum emission and Lyman alpha emission. I use ancillary Hubble Space Telescope (HST) broadband photometry in the bands that contain the Lyman continuum and find six Lyman continuum leaker candidates. To test whether the Lyman continuum emission of LAEs is coming only from those individual objects or the whole population, I select LAEs that are most promising for the detection of Lyman continuum emission, based on their rest-frame UV continuum and Lyman alpha line shape properties. After this selection, I stack the broadband data of the resulting sample and detect a signal in Lyman continuum with a significance of S/N = 5.5, pointing towards a Lyman continuum escape fraction of ~80%. If the signal is reliable, it strongly favours LAEs as the providers of the hydrogen ionising emission at the epoch of reionisation and beyond.
The functional characterization of therapeutically relevant proteins can be limited even by the provision of the target protein in adequate amounts. This applies especially to membrane proteins, which, owing to cytotoxic effects on the production cell line and their tendency to form aggregates, can result in low yields of active protein. The living organism can be bypassed by using translationally active cell lysates, the basis of cell-free protein synthesis. At the beginning of this work, the ATP-dependent translation of a lysate based on cultured insect cells (Sf21) was analysed. For this purpose, an ATP-binding aptamer was employed, by which the translation of nanoluciferase could be regulated. Following the application of aptamers demonstrated here, they could in future be used in cell-free systems to visualise transcription and translation, allowing, for example, complex processes to be validated.
Besides mere protein production, factors such as post-translational modifications and integration into a lipid membrane can be essential for the functionality of a membrane protein. In the second part, both integration into the endogenous endoplasmic-reticulum-derived membrane structures and glycosylation were identified for the G-protein-coupled endothelin B receptor in the cell-free Sf21 system.
Building on the successful synthesis of the ET-B receptor, various methods for fluorescent labelling of the adenosine receptor A2a (Adora2a) were applied and optimised. In the third part, Adora2a was labelled in the cell-free Chinese hamster ovary (CHO) system using a pre-charged tRNA coupled to a fluorescent amino acid. In addition, using a modified tRNA/aminoacyl-tRNA synthetase pair, a non-canonical amino acid was incorporated into the polypeptide chain at the position of an introduced amber stop codon, and its functional group was subsequently coupled to a fluorescent dye. Owing to their open nature, cell-free protein synthesis systems are particularly well suited to integrating exogenous components into the translation process. Using the fluorescent label, a ligand-induced conformational change in Adora2a was detected via bioluminescence resonance energy transfer. Through the established amber suppression, the hormone erythropoietin was furthermore PEGylated, altering properties such as the protein's stability and half-life.
Finally, a new tRNA/aminoacyl-tRNA synthetase pair based on the Methanosarcina mazei pyrrolysine synthetase was established to expand the repertoire of non-canonical amino acids and the coupling reactions associated with them. In summary, the potential of cell-free systems for producing complex membrane proteins and characterising them via site-specific fluorescent labelling was demonstrated, opening up new possibilities for the analysis and functionalisation of complex proteins.
Thermoresponsive cell culture substrates for spatiotemporally controlled outgrowth of neuronal cells
(2019)
A key goal of neuroscience is to understand the complex yet fascinating, highly ordered connectivity of neurons in the brain, which underlies neuronal processes such as perception and learning, as well as neuropathologies. Improved neuronal cell culture models for the detailed investigation of these processes therefore urgently require the reconstruction of ordered neuronal connections. With surface patterns of cell-attractive and cell-repellent coatings, neuronal cells and their neurites can be patterned in vitro. To control the direction of neuronal connections, the outgrowth of axons towards neighbouring cells must be controlled dynamically, for example via a switchable accessibility of the surface.
This work investigated whether cell culture substrates coated with thermoresponsive polymers (TRPs) are suitable for dynamic control of neuronal cell outgrowth. TRPs can be switched by temperature from a cell-repellent to a cell-attractive state, allowing the accessibility of the surface for cells to be controlled dynamically. The TRP coating was micropatterned in order to first arrange single or few neuronal cells on the surface and then to control the outgrowth of cells and neurites across defined TRP areas in time and space as a function of temperature. The protocol was established with the neuronal cell line SH-SY5Y and transferred to human induced neurons. The arrangement of the cells could be maintained for up to 7 days when cultivated in the cell-repellent state of the TRP. By switching the TRP to the cell-attractive state, the outgrowth of neurites and cells could be induced in a temporally and spatially controlled manner. Immunocytochemical staining and patch-clamp recordings of the neurons demonstrated the easy applicability and cell compatibility of the TRP substrates.
Eine präzisere räumliche Kontrolle des Auswachsens der Zellen sollte durch lokales Schalten der TRP-Beschichtung erreicht werden. Dafür wurden Mikroheizchips mit Mikroelektroden zur lokalen Jouleschen Erwärmung der Substratoberfläche entwickelt. Zur Evaluierung der generierten Temperaturprofile wurde eine Temperaturmessmethode entwickelt und die erhobenen Messwerte mit numerisch simulierten Werten abgeglichen. Die Temperaturmessmethode basiert auf einfach zu applizierenden Sol-Gel-Schichten, die den temperatursensitiven Fluoreszenzfarbstoff Rhodamin B enthalten. Sie ermöglicht oberflächennahe Temperaturmessungen in trockener und wässriger Umgebung mit hoher Orts- und Temperaturauflösung. Numerische Simulationen der Temperaturprofile korrelierten gut mit den experimentellen Daten. Auf dieser Basis konnten Geometrie und Material der Mikroelektroden hinsichtlich einer lokal stark begrenzten Temperierung optimiert werden. Ferner wurden für die Kultvierung der Zellen auf den Mikroheizchips eine Zellkulturkammer und Kontaktboard für die elektrische Kontaktierung der Mikroelektroden geschaffen.
Die vorgestellten Ergebnisse demonstrieren erstmalig das enorme Potential thermoresponsiver Zellkultursubstrate für die zeitlich und räumlich gesteuerte Formation geordneter neuronaler Verbindungen in vitro. Zukünftig könnte dies detaillierte Studien zur neuronalen Informationsverarbeitung oder zu Neuropathologien an relevanten, humanen Zellmodellen ermöglichen.
Light-induced pH cycle
(2019)
Background Many biochemical reactions depend on the pH of their environment and some are strongly accelerated in an acidic surrounding. A classical approach to controlling biochemical reactions non-invasively is to change the temperature. However, if the pH could instead be controlled by optical means using photoactive chemicals, suitable biochemical reactions could be accelerated on demand. Optical switching of the pH can be achieved with photoacids. A photoacid is a molecule with a functional group that releases a proton upon irradiation with light of a suitable wavelength, acidifying the surrounding aqueous environment. A major goal of this work was to establish a non-invasive method of optically controlling the pH of aqueous solutions, offering the opportunity to expand the portfolio of controllable chemical reactions. To demonstrate photo-switchable pH cycling, we chose an enzymatic assay using acid phosphatase, an enzyme with strongly pH-dependent activity.
Results In this work we demonstrated light-induced, reversible, non-invasive control of the enzymatic activity of acid phosphatase. To conduct these experiments, a high-power LED array fitting a standard 96-well microtiter plate, which was not commercially available, was designed and built. Heat management and a lateral ventilation system to avoid heat accumulation were established, and a stable light intensity was achieved. Different photoacids were characterised and their pH-dependent absorption spectra recorded. Using the reversible photoacid G-acid as a proton donor, the pH can be changed reversibly with high-power 365 nm UV LEDs. To demonstrate the pH cycling, acid phosphatase, which is hydrolytically active under acidic conditions, was chosen. An assay combining the photoacid with the enzyme was established, which also showed that G-acid does not inhibit acid phosphatase. The feasibility of reversibly regulating the enzyme’s pH-dependent activity by optical means was demonstrated by controlling the enzymatic activity with light; the enzyme activity depended on the light exposure time only. When samples were left in the dark, no enzymatic activity was recorded. The process can be rapidly controlled by simply switching the light on and off and should be applicable to a wide range of enzymes and biochemical reactions.
Conclusions Reversible photoacids offer light-dependent regulation of pH, making them extremely attractive for miniaturizable, non-invasive and time-resolved control of biochemical reactions. Many enzymes have a sharply pH-dependent activity, so the setup established in this thesis could be used for a versatile enzyme portfolio. The demonstrated photo-switchable strategy could also be used for non-enzymatic assays, greatly facilitating assay establishment. Photoacids have potential for high-throughput methods and automation. We demonstrated that photoacids can be controlled using commonly available LEDs, making their use in highly integrated devices and instruments more attractive. The successfully designed 96-well high-power UV LED array presents an opportunity for general combinatorial analysis in e.g. photochemistry, where high light intensity is needed for the investigation of various reactions.
Emotions are a central element of human experience. They occur with high frequency in everyday life and play an important role in decision making. However, currently there is no consensus among researchers on what constitutes an emotion and on how emotions should be investigated. This dissertation identifies three problems of current emotion research: the problem of ground truth, the problem of incomplete constructs and the problem of optimal representation. I argue for a focus on the detailed measurement of emotion manifestations with computer-aided methods to solve these problems. This approach is demonstrated in three research projects, which describe the development of methods specific to these problems as well as their application to concrete research questions.
The problem of ground truth describes the practice of presupposing a certain structure of emotions as the a priori ground truth. This determines the range of emotion descriptions and sets a standard for the correct assignment of these descriptions. The first project illustrates how this problem can be circumvented with a multidimensional emotion perception paradigm, which stands in contrast to the emotion recognition paradigm typically employed in emotion research. This paradigm makes it possible to calculate an objective difficulty measure and to collect subjective difficulty ratings for the perception of emotional stimuli. Moreover, it enables the use of an arbitrary number of emotion stimulus categories, as compared to the commonly used six basic emotion categories. Accordingly, we collected data from 441 participants using dynamic facial expression stimuli from 40 emotion categories. Our findings suggest an increase in emotion perception difficulty with increasing actor age and provide evidence that young adults, the elderly and men underestimate their emotion perception difficulty. While these effects were predicted from the literature, we also found unexpected and novel results. In particular, the increased difficulty on the objective difficulty measure for female actors and observers stood in contrast to reported findings. Exploratory analyses revealed low relevance of person-specific variables for the prediction of emotion perception difficulty, but highlighted the importance of a general pleasure dimension for the ease of emotion perception.
The second project targets the problem of incomplete constructs, which relates to vaguely defined psychological constructs of emotion with insufficient ties to tangible manifestations. The project exemplifies how a modern data collection method such as face tracking can be used to sharpen these constructs, using the example of arousal, a long-standing but fuzzy construct in emotion research. It describes how measures of distance, speed and magnitude of acceleration can be computed from face tracking data and investigates their intercorrelations. We find moderate to strong correlations among all measures of static information on the one hand and all measures of dynamic information on the other. The project then investigates how self-rated arousal is tied to these measures in 401 neurotypical individuals and 19 individuals with autism. Distance to the neutral face was predictive of arousal ratings in both groups. Lower mean arousal ratings were found for the autistic group, but no difference in the correlation of the measures and arousal ratings could be found between groups. Results were replicated in a high-autistic-traits group consisting of 41 participants. The findings suggest a qualitatively similar perception of arousal for individuals with and without autism. No correlations between valence ratings and any of the measures could be found, which emphasizes the specificity of our tested measures for the construct of arousal.
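The abstract does not specify how the face-tracking measures are implemented; the following is a minimal sketch, under simplifying assumptions, of how distance to a neutral face, frame-to-frame speed, and acceleration magnitude might be derived from a landmark time series. All function and variable names are hypothetical, not taken from the thesis.

```python
import math

def face_measures(frames, neutral):
    """Sketch: static and dynamic measures from face-tracking landmarks.

    frames:  list of frames, each a list of (x, y) landmark positions
    neutral: landmark positions of the neutral face, same layout
    Returns (distance-to-neutral per frame, per-step speed, per-step
    magnitude of acceleration).
    """
    def dist(a, b):
        # Euclidean distance between two full landmark configurations
        return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                             for (ax, ay), (bx, by) in zip(a, b)))

    # static information: how far each frame is from the neutral face
    distance = [dist(f, neutral) for f in frames]
    # dynamic information: frame-to-frame displacement and its change
    speed = [dist(frames[i], frames[i - 1]) for i in range(1, len(frames))]
    accel = [abs(speed[i] - speed[i - 1]) for i in range(1, len(speed))]
    return distance, speed, accel
```

A correlation analysis like the one reported would then operate on these per-frame series (e.g. their means per stimulus).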
The problem of optimal representation refers to the search for the best representation of emotions and the assumption that there is a one-size-fits-all solution. In the third project we introduce partial least squares analysis as a general method to find an optimal representation relating two high-dimensional data sets to each other. The project demonstrates its applicability to emotion research on the question of emotion perception differences between men and women. The method was used with emotion rating data from 441 participants and face tracking data computed on 306 videos. We found quantitative as well as qualitative differences in the perception of emotional facial expressions between these groups. We showed that women’s emotional perception systematically captured more of the variance in facial expressions. Additionally, we could show that significant differences exist in the way that women and men perceive some facial expressions, which could be visualized as concrete facial expression sequences. These expressions suggest differing perceptions of masked and ambiguous facial expressions between the sexes. To facilitate use of the developed method by the research community, a package for the statistical environment R was written. Furthermore, to call attention to the method and its usefulness for emotion research, a website was designed that allows users to explore a model of emotion ratings and facial expression data in an interactive fashion.
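As a generic illustration of the partial least squares idea (not the thesis's R implementation), the sketch below extracts a first PLS component from two data blocks via the classical NIPALS iteration: it finds weight vectors whose scores have maximal covariance across the two blocks.

```python
import numpy as np

def pls_first_component(X, Y, n_iter=500, tol=1e-12):
    """Sketch of one-component PLS via NIPALS.

    X, Y: (n_samples, p) and (n_samples, q) data blocks.
    Returns unit weight vectors w (for X) and c (for Y) plus the
    corresponding score vectors t = Xc @ w and u = Yc @ c.
    """
    Xc = X - X.mean(axis=0)          # column-center both blocks
    Yc = Y - Y.mean(axis=0)
    u = Yc[:, [0]].copy()            # start from the first Y column
    for _ in range(n_iter):
        w = Xc.T @ u                 # X weights from current Y score
        w /= np.linalg.norm(w)
        t = Xc @ w                   # X score
        c = Yc.T @ t                 # Y weights from X score
        c /= np.linalg.norm(c)
        u_new = Yc @ c               # updated Y score
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return w.ravel(), c.ravel(), (Xc @ w).ravel(), u.ravel()
```

Further components would be extracted after deflating the blocks by the fitted component; libraries such as scikit-learn's cross-decomposition module provide full implementations.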
This urban planning thesis set out to reflect on the future of French and German metropolitan railway stations up to 2050. It questions the foundations of the station as a conceptual urban object (approached as a system) and hypothesizes that it is, in a sense, endowed with autonomous properties. Among these properties, it is the constantly renewed and conflictual process of expansion and dialogue between the station and its surrounding urban fabric that guides this research, in particular in its relationship with the hypermobility of metropolises. To this end, the thesis draws on four case studies: the main stations of Cologne and Stuttgart in Germany and the stations of Paris-Montparnasse and Lyon-Part-Dieu in France. It begins with a detailed history of their morphological evolution in order to derive a series of architectonic and urban variables. In a second step, it carries out a series of prospective analyses to assess the possible influence of public transport and mobility policies on the conceptual future of stations. The thesis then proposes the concept of the station-system to describe the expansion and integration of metropolitan stations into their urban environment, a process of dialectical negotiation that is not resolved by the concept of the station as a place of life / city. It invites us to think of the station as a heterotopia and proposes a depolarized, de-hierarchized reading of these spaces by introducing the concepts of the orchestra of stations and of the metastation. Finally, this research offers a critical reading of the "digital city" and of the concept of "mobility as a service." To avoid a potentially harmful just-in-time management of flows, the application of these concepts in stations cannot do without a simultaneous expansion of physical spaces.
In this dissertation, new potassium- and sodium-ion fluorescent dyes of the fluoroionophore class were synthesized and characterized. They consist of an N-phenylazacrown ether as the ionophore and different fluorophores, connected via a π-conjugated 1,2,3-triazole-1,4-diyl spacer. During their development, the focus was on tuning their sensitivity, selectivity, and photophysical properties so that they are suitable for intra- and extracellular concentration measurements.
By varying the alkoxy groups in the ortho position of the N-phenylazacrown ethers and the fluorophore group of the fluoroionophores, it was found that the sensitivity and selectivity for potassium and sodium ions is in each case increased by a particular isomerism of the 1,2,3-triazole-1,4-diyl unit. Furthermore, it was shown that increased restriction of the N,N-diethylamino group of the fluorophore led to an increase in fluorescence quantum yield and a shift of the emission maximum to above 500 nm. Introducing an isopropoxy group on an N-phenylaza-[18]crown-6 ether resulted in a highly selective potassium-ion fluoroionophore and enabled in vitro monitoring of 10–80 mM potassium ions. Substituting a methoxy group on an N-phenylaza-[15]crown-5 ether, combined with different N,N-diethylamino coumarins, yielded two sodium-ion fluoroionophores suitable for monitoring intra- and extracellular sodium-ion concentrations.
In a further step, N-phenylaza-[18]crown-6 ethers were functionalized with a fluorophore based on a [1,3]-dioxolo[4,5-f][1,3]benzodioxole (DBD) scaffold. Subsequent spectroscopic investigations showed that the isopropoxy group in the ortho position of the N-phenylaza-[18]crown-6 ether resulted in a fluoroionophore selective for extracellular potassium-ion concentrations, allowing concentration measurements via fluorescence intensity and lifetime.
In a final step, a further fluoroionophore suitable for extracellular potassium-ion concentrations was developed using a pyrene as the fluorophore group. Here, the potassium-ion concentration was determined from the ratio of fluorescence intensities at two emission wavelengths.
In total, 17 different new fluoroionophores for the determination of potassium and sodium ions were synthesized and characterized. Six of these new molecules enable in vitro measurements of intra- or extracellular potassium- and sodium-ion concentrations and could in the future be used for in vivo concentration measurements.
Cosmology describes the evolution of the universe as a whole. Cosmological discoveries in theory and observation have therefore decisively shaped our modern scientific worldview. Conveying a modern worldview through teaching is a frequent demand in the science education debate. Nevertheless, research and development needs remain. Cosmological topics appear frequently in the media while at the same time being far removed from everyday life, so that scientifically incorrect conceptions can develop particularly easily here and lead to problems in the classroom.
The aim of this work is to contribute to this field of research by investigating the prerequisites, in terms of prior knowledge and preconceptions in cosmology, with which students enter the classroom, and then comparing them with those of other countries. This is done by means of a qualitative content analysis of an open-ended questionnaire. On this basis, a multiple-choice questionnaire is then developed, applied, and evaluated.
The results reveal large knowledge gaps in cosmology and provide first indications of differences between countries. There are also several, in part widespread, scientifically incorrect conceptions, such as associating the Big Bang with an explosion, the Big Bang being caused by a collision of particles or larger objects, or interpreting the expansion of the universe as new discoveries and/or knowledge. Furthermore, only about one in five respondents gave the correct age of the universe or named the expansion of the universe as one of the three pieces of evidence for the Big Bang theory, while almost 40% could not name a single piece of evidence. For the closed questionnaire, good support for several aspects of validity was established, and there are first indications that the questionnaire can measure knowledge gains and can therefore probably be used to examine the effectiveness of learning units. A corresponding model for the development of understanding of the expansion of the universe also proved promising.
Overall, this work contributes to research on students' prior knowledge and conceptions in cosmology and their large-scale assessment. This opens up possibilities for future research on group comparisons, in particular objective cross-country comparisons, as well as on the effectiveness of individual learning units and comparisons between different learning units.
The main goal of this thesis is to explore the feasibility of using cross-lingual annotation projection as a method of alleviating the task of manual coreference annotation.
To reach our goal, we build a first trilingual parallel coreference corpus that encompasses multiple genres. For the annotation of the corpus, we develop common coreference annotation guidelines that are applicable to three languages (English, German, Russian) and include a novel domain-independent typology of bridging relations as well as state-of-the-art near-identity categories.
Thereafter, we design and perform several annotation projection experiments. In the first experiment, we implement a direct projection method with only one source language. Our results indicate that, already in a knowledge-lean scenario, our projection approach is superior to the most closely related work of Postolache et al. (2006). Since the quality of the resulting annotations is to a high degree dependent on the word alignment, we demonstrate how using limited syntactic information helps to further improve mention extraction on the target side. As a next step, in our second experiment, we show how exploiting two source languages helps to improve the quality of target annotations for both language pairs by concatenating annotations projected from two source languages. Finally, we assess the projection quality in a fully automatic scenario (using automatically produced source annotations), and propose a pilot experiment on manual projection of bridging pairs.
For each of the experiments, we carry out an in-depth error analysis, and we conclude that noisy word alignments, translation divergences and morphological and syntactic differences between languages are responsible for projection errors. We systematically compare and evaluate our projection methods, and we investigate the errors both qualitatively and quantitatively in order to identify problematic cases. Finally, we discuss the applicability of our method to coreference annotations and propose several avenues of future research.
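The direct projection method of the first experiment can be sketched, under simplifying assumptions: mentions are inclusive token spans, and the word alignment is a set of one-best links. The function name and span convention are hypothetical, not taken from the thesis; note how unaligned mentions are silently lost, one of the error sources discussed above.

```python
def project_mentions(mentions, alignment):
    """Sketch: directly project source-side mention spans to the target
    side through a word alignment.

    mentions:  list of (start, end) token spans on the source side (inclusive)
    alignment: set of (source_index, target_index) alignment links
    Returns projected target-side spans; mentions without any aligned
    token are dropped, a common cause of projection loss.
    """
    projected = []
    for start, end in mentions:
        # collect all target tokens aligned to tokens inside the span
        targets = [t for s, t in alignment if start <= s <= end]
        if targets:
            # take the minimal covering span on the target side
            projected.append((min(targets), max(targets)))
    return projected
```

Coreference chains would then be carried over by projecting each mention of a chain and keeping the chain structure intact on the target side.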
This dissertation investigates the impact of the economic and fiscal crisis starting in 2008 on EU climate policy-making. While the overall number of adopted greenhouse gas emission reduction policies declined in the crisis aftermath, EU lawmakers decided to introduce new or tighten existing regulations in some important policy domains. Existing knowledge about the crisis impact on EU legislative decision-making cannot explain these inconsistencies. In response, this study develops an actor-centred conceptual framework based on rational choice institutionalism that provides a micro-level link to explain how economic crises translate into altered policy-making patterns. The core theoretical argument draws on redistributive conflicts, arguing that tensions between ‘beneficiaries’ and ‘losers’ of a regulatory initiative intensify during economic crises and spill over to the policy domain. To test this hypothesis, and using social network analysis, this study analyses policy processes in three case studies: the introduction of carbon dioxide emission limits for passenger cars, the expansion of the EU Emissions Trading System to aviation, and the introduction of a regulatory framework for biofuels. The key finding is that an economic shock causes EU policy domains to polarise politically, resulting in intensified conflict and more difficult decision-making. The results also show that this process of political polarisation is rooted in the industry that is the subject of the regulation, and that intergovernmental bargaining among member states becomes more important, but also more difficult, in times of crisis.
Hyperspectral remote sensing of the spatial and temporal heterogeneity of low Arctic vegetation
(2019)
Arctic tundra ecosystems are experiencing warming twice the global average and Arctic vegetation is responding in complex and heterogeneous ways. Shifting productivity, growth, species composition, and phenology at local and regional scales have implications for ecosystem functioning as well as the global carbon and energy balance. Optical remote sensing is an effective tool for monitoring ecosystem functioning in this remote biome. However, the limited field-based spectral characterization of this spatial and temporal heterogeneity constrains the accuracy of quantitative optical remote sensing at landscape scales. To address this research gap and support current and future satellite missions, three central research questions were posed:
• Does canopy-level spectral variability differ between dominant low Arctic vegetation communities and does this variability change between major phenological phases?
• How does canopy-level vegetation colour, recorded with high and low spectral resolution devices, relate to phenological changes in leaf-level photosynthetic pigment concentrations?
• How does spatial aggregation of high spectral resolution data from the ground to satellite scale influence low Arctic tundra vegetation signatures and thereby what is the potential of upcoming hyperspectral spaceborne systems for low Arctic vegetation characterization?
To answer these questions a unique and detailed database was assembled. Field-based canopy-level spectral reflectance measurements, nadir digital photographs, and photosynthetic pigment concentrations of dominant low Arctic vegetation communities were acquired at three major phenological phases representing early, peak and late season. Data were collected in 2015 and 2016 in the Toolik Lake Research Natural Area located in north central Alaska on the North Slope of the Brooks Range. In addition to field data an aerial AISA hyperspectral image was acquired in the late season of 2016. Simulations of broadband Sentinel-2 and hyperspectral Environmental and Mapping Analysis Program (EnMAP) satellite reflectance spectra from ground-based reflectance spectra as well as simulations of EnMAP imagery from aerial hyperspectral imagery were also obtained.
Results showed that canopy-level spectral variability within and between vegetation communities differed by phenological phase. The late season was identified as the most discriminative for identifying many dominant vegetation communities using both ground-based and simulated hyperspectral reflectance spectra. This was due to an overall reduction in spectral variability and comparable or greater differences in spectral reflectance between vegetation communities in the visible to near-infrared spectrum.
Red, green, and blue (RGB) indices extracted from nadir digital photographs and pigment-driven vegetation indices extracted from ground-based spectral measurements showed strong significant relationships. RGB indices also showed moderate relationships with chlorophyll and carotenoid pigment concentrations. The observed relationships with the broadband RGB channels of the digital camera indicate that vegetation colour strongly influences the response of pigment-driven spectral indices and digital cameras can track the seasonal development and degradation of photosynthetic pigments.
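The abstract does not name the specific RGB indices used; a standard example of such a broadband greenness index is the green chromatic coordinate, sketched here purely as an illustration of how colour indices are derived from digital-photo channels.

```python
def green_chromatic_coordinate(r, g, b):
    """Green chromatic coordinate (GCC): the green channel's share of
    total brightness, a common broadband greenness index for tracking
    seasonal pigment development in nadir digital photographs."""
    return g / (r + g + b)
```

Analogous red and blue chromatic coordinates are obtained by putting `r` or `b` in the numerator; such indices are then correlated against pigment concentrations or narrowband spectral indices.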
Spatial aggregation of hyperspectral data from the ground to the airborne and the simulated satellite scale was influenced by non-photosynthetic components, as demonstrated by the distinct shift of the red edge to shorter wavelengths. Correspondence between spectral reflectance at the three scales was highest in the red spectrum and lowest in the near-infrared. By artificially mixing litter spectra at different proportions into ground-based spectra, correspondence with aerial and satellite spectra increased. Greater proportions of litter were required to achieve correspondence at the satellite scale.
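Under a linear spectral mixing assumption, the artificial mixing of litter spectra described above amounts to a per-wavelength weighted average of the two endmember spectra; a minimal sketch (function and argument names are illustrative, not from the thesis):

```python
def mix_spectra(vegetation, litter, litter_fraction):
    """Sketch of linear spectral mixing: combine a ground-based vegetation
    spectrum with a litter spectrum at a given litter proportion (0..1).

    vegetation, litter: per-wavelength reflectance values, same sampling.
    """
    f = litter_fraction
    return [f * lr + (1 - f) * vr for vr, lr in zip(vegetation, litter)]
```

Sweeping `litter_fraction` and comparing each mixed spectrum against the aerial or satellite spectrum (e.g. by spectral angle or RMSE) identifies the proportion at which correspondence is reached.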
Overall, this thesis found that integrating multiple temporal, spectral, and spatial data sets is necessary to monitor the complexity and heterogeneity of Arctic tundra ecosystems. The identification of spectrally similar vegetation communities can be optimized using non-peak-season hyperspectral data, leading to more detailed identification of vegetation communities. The results also highlight the power of vegetation colour to link ground-based and satellite data. Finally, a detailed characterization of non-photosynthetic ecosystem components is crucial for accurate interpretation of vegetation signals at landscape scales.
Additive manufacturing (AM) by laser powder-bed fusion (L-PBF) offers new prospects for the design of parts and thereby enables the production of lattice structures. Such lattice structures are intended for various industrial applications (e.g. gas turbines), for example to save material or to integrate cooling channels. However, internal defects, residual stress, and structural deviations from the nominal geometry are unavoidable.
In this work, the structural integrity of lattice structures manufactured by means of L-PBF was non-destructively investigated on a multiscale approach.
A workflow for quantitative 3D powder analysis in terms of particle size, particle shape, particle porosity, inter-particle distance and packing density was established. Synchrotron computed tomography (CT) was used to correlate the packing density with the particle size and particle shape. It was also observed that at least about 50% of the powder porosity was released during production of the struts.
Struts are the building blocks of lattice structures and were investigated by means of laboratory CT. The focus was on the influence of the build angle on part porosity and surface quality. The surface topography analysis was advanced by the quantitative characterisation of re-entrant surface features. This characterisation was compared with conventional surface parameters, showing their complementary information but also the need for AM-specific surface parameters.
The mechanical behaviour of the lattice structure was investigated by in-situ CT under compression with subsequent digital volume correlation (DVC). The deformation was found to be knot-dominated; the lattice therefore folds layer by layer of unit cells.
The residual stress in such lattice structures was determined experimentally for the first time. Neutron diffraction was used for the non-destructive 3D stress investigation. The principal stress directions and values were determined as a function of the number of measured directions. While a predominantly uniaxial stress state was found in the strut, a more hydrostatic stress state was found in the knot. In both cases, strut and knot, at least seven measured directions were needed to obtain reliable principal stress directions.
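Once a full symmetric stress tensor has been reconstructed from the measured directions, extracting principal stress values and directions is an eigenvalue problem; a minimal sketch (not the thesis's actual analysis code, which additionally fits the tensor from diffraction strains):

```python
import numpy as np

def principal_stresses(sigma):
    """Principal stresses of a symmetric 3x3 stress tensor.

    Returns eigenvalues sorted from largest to smallest and the matrix
    whose columns are the corresponding principal directions.
    """
    vals, vecs = np.linalg.eigh(np.asarray(sigma, dtype=float))
    order = np.argsort(vals)[::-1]          # eigh sorts ascending; reverse
    return vals[order], vecs[:, order]
```

For a hydrostatic state the three eigenvalues are (nearly) equal, whereas a uniaxial state shows one dominant eigenvalue, matching the strut/knot distinction described above.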
Supercapacitors are electrochemical energy storage devices with rapid charge/discharge rates and long cycle life. Their biggest challenge is their inferior energy density compared to other electrochemical energy storage devices such as batteries. As the most widespread type of supercapacitor, electrochemical double-layer capacitors (EDLCs) store energy by electrosorption of electrolyte ions on the surface of charged electrodes. As a more recent development, Na-ion capacitors (NICs) are expected to be a more promising tactic to tackle the inferior energy density owing to their higher-capacity electrodes and larger operating voltage. Charges are stored simultaneously by ion adsorption on the surface of the capacitive-type cathode and via a faradaic process in the battery-type anode. Porous carbon electrodes are of great importance in these devices, but the paramount challenges are facile synthetic routes to high-performance carbons and the lack of fundamental understanding of the energy storage mechanisms. Therefore, the aim of the present dissertation is to develop novel synthetic methods for (nitrogen-doped) porous carbon materials with superior performance, and to gain a deeper understanding of the energy storage mechanisms of EDLCs and NICs.
The first part introduces a novel synthetic method towards hierarchical ordered meso-microporous carbon electrode materials for EDLCs. The large amount of micropores and the highly ordered mesopores provide abundant sites for charge storage and efficient electrolyte transport, respectively, giving rise to superior EDLC performance in different electrolytes. More importantly, the controversial energy storage mechanism of EDLCs employing ionic liquid (IL) electrolytes is investigated by employing a series of porous model carbons as electrodes. The results not only allow conclusions about the relations between porosity and ion transport dynamics, but also deliver deeper insights into the energy storage mechanism of IL-based EDLCs, which is different from the one usually dominating in solvent-based electrolytes and leads to compression double-layers.
The other part focuses on anodes of NICs, where novel syntheses of nitrogen-rich porous carbon electrodes and their sodium storage mechanism are investigated. Free-standing fibrous nitrogen-doped carbon materials are synthesized by electrospinning using the nitrogen-rich monomer (hexaazatriphenylene-hexacarbonitrile, C18N12) as the precursor, followed by condensation at high temperature. These fibers provide superior capacity and desirable charge/discharge rates for sodium storage. This work also provides insights into the sodium storage mechanism in nitrogen-doped carbons. Based on this mechanism, further optimization is achieved by designing a composite material composed of nitrogen-rich carbon nanoparticles embedded in a conductive carbon matrix for a better charge/discharge rate. The energy density of the assembled NICs significantly exceeds that of common EDLCs while maintaining high power density and long cycle life.
Due to its bioavailability and (bio)degradability, poly(lactide) (PLA) is an interesting polymer that is already used as a packaging material, surgical suture, and drug delivery system. Depending on various parameters such as polymer composition, amphiphilicity, sample preparation, and the enantiomeric purity of the lactide, the PLA block in an amphiphilic block copolymer can affect the self-assembly behavior dramatically. Since the sizes and shapes of aggregates have a critical effect on the interactions between biological systems and drug delivery systems, a general understanding of these polymers and of their influence on self-assembly is of significant interest in science.
The first part of this thesis describes the synthesis and study of a series of linear poly(L-lactide) (PLLA) and poly(D-lactide) (PDLA)-based amphiphilic block copolymers with varying PLA (hydrophobic), and poly(ethylene glycol) (PEG) (hydrophilic) chain lengths and different block copolymer sequences (PEG-PLA and PLA-PEG). The PEG-PLA block copolymers were synthesized by ring-opening polymerization of lactide initiated by a PEG-OH macroinitiator. In contrast, the PLA-PEG block copolymers were produced by a Steglich-esterification of modified PLA with PEG-OH.
The aqueous self-assembly at room temperature of the enantiomerically pure PLLA-based block copolymers and their stereocomplexed mixtures was investigated by dynamic light scattering (DLS), transmission electron microscopy (TEM), wide-angle X-ray diffraction (WAXD), and differential scanning calorimetry (DSC). Spherical micelles and worm-like structures were produced, whereby the obtained self-assembled morphologies were affected by the lactide weight fraction in the block copolymer and the self-assembly time. The formation of worm-like structures increases with decreasing PLA chain length and arises from spherical micelles, which become colloidally unstable and undergo epitaxial fusion with other micelles. As shown by DSC experiments, the crystallinity of the corresponding PLA blocks increases over the self-assembly time. The stereocomplexed self-assembled structures, however, behave differently from the parent polymers and form irregularly shaped clusters of spherical micelles. Additionally, time-dependent self-assembly experiments showed a transformation of already self-assembled morphologies of different shapes into more compact micelles upon stereocomplexation.
In the second part of this thesis, with the objective of influencing the self-assembly of PLA-based block copolymers and their stereocomplexes, poly(methyl phosphonate) (PMeP) and poly(isopropyl phosphonate) (PiPrP) were produced by ring-opening polymerization as an alternative to the hydrophilic PEG block. Although the 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU)- or 1,5,7-triazabicyclo[4.4.0]dec-5-ene (TBD)-mediated synthesis of the corresponding poly(alkyl phosphonate)s was successful, the polymerization of copolymers with PLA-based precursors (PLA homopolymers and PEG-PLA block copolymers) was not. Transesterification between the poly(phosphonate) and PLA blocks, observed by 1H NMR spectroscopy, caused a high-field-shifted peak splitting of the methine proton in the PLA polymer chain, with split intensities depending on the catalyst used (DBU for PMeP and TBD for PiPrP polymerization). An additionally prepared block copolymer, PiPrP-PLLA, whose polymer sequence was not affected, was finally used for self-assembly experiments in mixtures with PLA-PEG and PEG-PLA.
This work provides a comprehensive study of the self-assembly behavior of PLA-based block copolymers influenced by various parameters such as polymer block lengths, self-assembly time, and stereocomplexation of block copolymer mixtures.
The central motivation of this thesis was to provide possible solutions and concepts to improve the performance (e.g. activity and selectivity) of the electrochemical N2 reduction reaction (NRR). Given that porous carbon-based materials usually exhibit a broad range of structural properties, they could be promising NRR catalysts. Therefore, the advanced design of novel porous carbon-based materials and the investigation of their application in electrocatalytic NRR, including the particular reaction mechanisms, are the most crucial points to be addressed. In this regard, three main topics were investigated, all related to the functionalization of porous carbon for electrochemical NRR or other electrocatalytic reactions.
In chapter 3, a novel C-TixOy/C nanocomposite is described that was obtained via simple pyrolysis of MIL-125(Ti). A novel mode of N2 activation is achieved by doping carbon atoms from the nearby porous carbon into the anion lattice of TixOy. By comparing the NRR performance of the M-Ts and by carrying out DFT calculations, it is found that the existence of (O-)Ti-C bonds in C-doped TixOy can largely improve the ability to activate and reduce N2 compared to unoccupied OVs in TiO2. The strategy of rationally doping heteroatoms into the anion lattice of transition metal oxides to create active centers may open many new opportunities beyond the use of noble metal-based catalysts, also for other reactions that require the activation of small molecules.
In chapter 4, a novel catalyst architecture composed of Au single atoms decorated on the surface of NDPCs is reported. The introduction of Au single atoms leads to active reaction sites, which are stabilized by the N species present in the NDPCs. Thus, the interaction within the as-prepared AuSAs-NDPC catalysts enables promising performance for electrochemical NRR. Regarding the reaction mechanism, Au single sites and N or C species can act as frustrated Lewis pairs (FLPs) to enhance the electron donation and back-donation processes that activate N2 molecules. This work provides new opportunities for catalyst design aimed at efficient N2 fixation under ambient conditions by utilizing recycled electric energy.
The last topic, described in chapter 5, focuses mainly on the synthesis of dual heteroatom-doped porous carbon from simple precursors. The introduction of N and B heteroatoms leads to the construction of N-B motifs and frustrated Lewis pairs in a microporous architecture that is also rich in point defects. This can improve the adsorption strength of different reactants (N2 and HMF) and thus their activation. As a result, BNC-2 exhibits desirable electrochemical NRR and HMF oxidation performance. Gas adsorption experiments have been used as a simple tool to elucidate the relationship between structure and catalytic activity. This work provides novel and deep insights into the rational design and the origin of activity of metal-free electrocatalysts and enables a physically viable discussion of the active motifs, as well as the search for their further applications.
Throughout this thesis, the ubiquitous problems of low selectivity and activity in electrochemical NRR are tackled by designing highly efficient porous carbon-based catalysts and exploring their catalytic mechanisms. The structure-performance relationships and the mechanisms of activation of the relatively inert N2 molecule are revealed by experimental results and DFT calculations. These fundamental insights pave the way for the future optimal design and targeted improvement of porous carbon-based NRR catalysts, as well as the study of new N2 activation modes.
The object of this investigation is the rise and fall of a theological legitimation of poetry that took place in the Viceroyalty of Peru between the end of the sixteenth century and the second half of the seventeenth century. Its high point is marked by the emergence of an "Academia Antártica" in the first decades of the seventeenth century, while its end becomes apparent toward the close of that century, when scholars of the religious orders, especially Juan de Espinosa y Medrano in his texts in defense of poetry and the sciences, denied poetry any theological status while nevertheless making use of it to write their sermons and texts. Based on the rise and fall of this theological legitimation in the Viceroyalty of Peru, this study shows the existence of two movements that form a chiasmus between a theologization of poetry and a poetization of theology, at whose veiled center the theoretical and practical knowledge of poetry is in dispute. What is at stake in this sense is not poetry, understood as a pinnacle of belles lettres, but the legitimate possession of an analogical and typological mode of reading the order of the universe, founded on the Holy Scriptures and on the history of salvation, and of a poetic mode of indoctrinating all members of viceregal society in accordance with that mode of reading.
Ficción herética
(2019)
The metaphor of the "island" in contemporary Cuban narrative encompasses a whole series of symbolic complexities that depend on the experience of space and time. Its visual potential reveals or conceals the insular experiences of the Cuban writers themselves. Over the last thirty years in Cuba, political, economic, and social phenomena have categorically modified the perception and configuration of the social and individual spheres in the face of global demands (Fornet 2006; Rojas 1999, 2002, 2006). A sensation of akinesia and weightlessness has taken hold (Casamayor 2013), and Cuban narrators have adopted a "heretical" attitude, confronting the ideas of postmodernity, the post-Soviet, and the post-utopian, thereby reaffirming a presentist sensibility (Guerrero 2016). These authors echo and reclaim the insular imaginaries of authors and aesthetic traditions within and beyond the island, such as José Lezama Lima, Virgilio Piñera, Guillermo Cabrera Infante, Reinaldo Arenas, and Severo Sarduy. The analysis of insular ekphrases makes it possible to examine the dynamics of representation and meaning: dissimulation, anamorphosis, and trompe l'oeil (Sarduy 1981). Abilio Estévez's novel Tuyo es el reino (1998) serves as a model from which to trace the relations of meaning between the literary canon and the sociocultural referents of the somatopological variations of the island in current Cuban narrative: Ena Lucía Portela, Atilio Caballero, Antonio José Ponte, Daniel Díaz Mantilla, Emerio Medina, Orlando Luis Pardo, Anisley Negrin, and Ahmel Echeverría, among others.
Modern health care systems are characterized by pronounced prevention efforts and cost-optimized treatments. This dissertation offers novel empirical evidence on how useful such measures can be. The first chapter analyzes how radiation, a main pollutant in health care, can negatively affect cognitive health. The second chapter focuses on the effect of Low Emission Zones on public health, as air quality is the major external source of health problems. Both chapters point out potentials for preventive measures. Finally, chapter three studies how changes in treatment prices affect the reallocation of hospital resources. In the following, I briefly summarize each chapter and discuss implications for health care systems as well as other policy areas. Based on the National Educational Panel Study linked to data on radiation, chapter one shows that radiation can have negative long-term effects on cognitive skills, even at subclinical doses. Exploiting arguably exogenous variation in soil contamination in Germany due to the Chernobyl disaster in 1986, the findings show that people exposed to higher radiation perform significantly worse in cognitive tests 25 years later. Identification is ensured by abnormal rainfall within a critical period of ten days. The results show that the effect is stronger among older cohorts than younger cohorts, which is consistent with radiation accelerating cognitive decline as people get older. On average, a one-standard-deviation increase in the initial level of Cs-137 (around 30 chest x-rays) is associated with a decrease in cognitive skills of 4.1 percent of a standard deviation (around 0.05 school years). Chapter one shows that subclinical levels of radiation can have negative consequences even after early childhood. This is of particular importance because most of the literature focuses on exposure very early in life, often during pregnancy. However, the population exposed after birth is over 100 times larger.
These results point to substantial external human capital costs of radiation, which can be reduced by the choice of medical procedures. There is large potential for reductions because about one-third of all CT scans are assumed to be not medically justified (Brenner and Hall, 2007). If people receive unnecessary CT scans because of economic incentives, this chapter points to additional external costs of health care policies. Furthermore, the results can inform the cost-benefit trade-off for medically indicated procedures. Chapter two provides evidence on the effectiveness of Low Emission Zones. Low Emission Zones are typically justified by improvements in population health. However, there is little evidence about the potential health benefits of policy interventions aiming at improving air quality in inner cities. The chapter asks how the coverage of Low Emission Zones affects air pollution and hospitalization, exploiting variation in the rollout of Low Emission Zones in Germany. It combines information on the geographic coverage of Low Emission Zones with rich panel data on the universe of German hospitals over the period from 2006 to 2016, including precise information on hospital locations and the annual frequency of detailed diagnoses. In order to establish that our estimates of Low Emission Zones' health impacts can indeed be attributed to improvements in local air quality, we use data from Germany's official air pollution monitoring system, assign monitor locations to Low Emission Zones, and test whether measures of air pollution are affected by the coverage of a Low Emission Zone. Results in chapter two confirm earlier findings showing that the introduction of Low Emission Zones improved air quality significantly by reducing NO2 and PM10 concentrations.
Furthermore, the chapter shows that hospitals whose catchment areas are covered by a Low Emission Zone diagnose significantly fewer air-pollution-related diseases, in particular by reducing the incidence of chronic diseases of the circulatory and the respiratory system. The effect is stronger before 2012, which is consistent with a general improvement in the vehicle fleet's emission standards. Depending on the disease, a one-standard-deviation increase in the share of a hospital's catchment area covered by a Low Emission Zone reduces the yearly number of diagnoses by up to 5 percent. These findings have strong implications for policy makers. In 2015, overall costs for health care in Germany were around 340 billion euros, of which 46 billion euros were for diseases of the circulatory system, making it the most expensive type of disease, with 2.9 million cases (Statistisches Bundesamt, 2017b). Hence, reductions in the incidence of diseases of the circulatory system may directly reduce society's health care costs. Whereas chapters one and two study the demand side in health care markets and thus preventive potential, chapter three analyzes the supply side. Exploiting the same hospital panel data set as in chapter two, chapter three studies the effect of treatment price shocks on the reallocation of hospital resources in Germany. Starting in 2005, the implementation of the German DRG system led to general idiosyncratic treatment price shocks for individual hospitals. Thus far there is little evidence on the impact of general price shocks on the reallocation of hospital resources. Additionally, I add to the existing literature by showing that price shocks can have persistent effects on hospital resources even when these shocks vanish. However, simple OLS regressions would underestimate the true effect, due to endogenous treatment price shocks.
I implement a novel instrumental variable strategy that exploits the exogenous variation in the number of days of snow in hospital catchment areas. A peculiarity of the reform allowed variation in days of snow to have a persistent impact on treatment prices. I find that treatment price increases lead to increases in input factors such as nursing staff, physicians, and the range of treatments offered, but to decreases in the treatment volume. This indicates supplier-induced demand. Furthermore, the probability of hospital mergers and privatization decreases. Structural differences in pre-treatment characteristics between hospitals enhance these effects; for instance, private and larger hospitals are more affected. IV estimates reveal that OLS results are biased towards zero in almost all dimensions because structural hospital differences are correlated with the reallocation of hospital resources. These results are important for several reasons. The G-DRG reform led to a persistent polarization of hospital resources, as some hospitals were exposed to treatment price increases while others experienced reductions. If hospitals increase the treatment volume in response to price reductions by offering unnecessary therapies, this has a negative impact on population wellbeing and public spending. However, the results show a decrease in the range of treatments if prices decrease. Hospitals might specialize more, thus attracting more patients. From a policy perspective it is important to evaluate whether such changes in the range of treatments jeopardize an adequate nationwide provision of treatments. Furthermore, the results show a decrease in the number of nurses and physicians if prices decrease. This could partly explain the nursing crisis in German hospitals. However, since hospitals specialize more, they might be able to realize efficiency gains that justify reductions in input factors without losses in quality.
Further research is necessary to provide evidence on the impact of the G-DRG reform on health care quality. Another important aspect is the change in organizational structure. Many public hospitals have been privatized or merged. The findings show that this is at least partly driven by the G-DRG reform. This can again lead to a lack of services offered in some regions if merged hospitals specialize more or if hospitals are taken over by ecclesiastical organizations that do not provide all treatments due to moral conviction. Overall, this dissertation reveals large potential for preventive health care measures and helps to explain reallocation processes in the hospital sector when treatment prices change. Furthermore, its findings have potentially relevant implications for other areas of public policy. Chapter one identifies an effect of low-dose radiation on cognitive health. As mankind searches for new energy sources, nuclear power is becoming popular again. However, the results of chapter one point to substantial costs of nuclear energy that have not been accounted for yet. Chapter two finds strong evidence that air quality improvements by Low Emission Zones translate into health improvements, even at relatively low levels of air pollution. These findings may, for instance, be of relevance for the design of further policies targeting air pollution, such as diesel bans. As pointed out in chapter three, the implementation of DRG systems may have unintended side effects on the reallocation of hospital resources. This may also apply to other providers in the health care sector, such as resident doctors.
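The two-stage least squares logic behind chapter three's instrumental variable strategy (days of snow instrumenting endogenous treatment price shocks) can be sketched with simulated data. The variable names and magnitudes below are hypothetical and serve only to illustrate how 2SLS removes the bias that plain OLS suffers from when the regressor is endogenous:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data (all magnitudes hypothetical): "snow" instruments the
# endogenous treatment price; "u" is an unobserved hospital shock that
# biases a naive OLS regression of staff on price.
snow = rng.normal(size=n)                            # instrument
u = rng.normal(size=n)                               # confounder
price = 0.8 * snow + u + rng.normal(size=n)          # endogenous regressor
staff = 0.5 * price - u + rng.normal(size=n)         # outcome; true effect 0.5

def two_sls(y, x, z):
    """Two-stage least squares with one endogenous regressor and one instrument."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]   # first stage: fit x on z
    X = np.column_stack([np.ones_like(x_hat), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]     # slope on the fitted x

beta_ols = np.polyfit(price, staff, 1)[0]  # attenuated by the confounder u
beta_iv = two_sls(staff, price, snow)      # close to the true effect 0.5
```

With these hypothetical parameters the OLS coefficient is pulled toward zero by the confounder while the IV estimate recovers the true effect, mirroring the attenuation of OLS relative to IV that the chapter reports.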
Force plays a fundamental role in the regulation of biological processes. Cells can sense the mechanical properties of the extracellular matrix (ECM) by applying forces and transmitting mechanical signals. They further use mechanical information for regulating a wide range of cellular functions, including adhesion, migration, proliferation, as well as differentiation and apoptosis. Even though it is well understood that mechanical signals play a crucial role in directing cell fate, surprisingly little is known about the range of forces that define cell-ECM interactions at the molecular level.
Recently, synthetic molecular force sensor (MFS) designs have been established for measuring the molecular forces acting at the cell-ECM interface. MFSs detect the traction forces generated by cells and convert this mechanical input into an optical readout. They are composed of calibrated mechanoresponsive building blocks and are usually equipped with a fluorescence reporter system. To date, many different MFS designs have been introduced and successfully used for measuring forces involved in the adhesion of mammalian cells. These MFSs utilize different molecular building blocks, such as double-stranded deoxyribonucleic acid (dsDNA) molecules, DNA hairpins, and synthetic polymers like polyethylene glycol (PEG). However, these currently available MFS designs lack ECM-mimicking properties.
In this work, I introduce a new MFS building block for cell biology applications, derived from the natural ECM. It combines mechanical tunability with the ability to mimic the native cellular microenvironment. Inspired by structural ECM proteins with load bearing function, this new MFS design utilizes coiled coil (CC)-forming peptides. CCs are involved in structural and mechanical tasks in the cellular microenvironment and many of the key protein components of the cytoskeleton and the ECM contain CC structures. The well-known folding motif of CC structures, an easy synthesis via solid phase methods and the many roles CCs play in biological processes have inspired studies to use CCs as tunable model systems for protein design and assembly. All these properties make CCs ideal candidates as building blocks for MFSs. In this work, a series of heterodimeric CCs were designed, characterized and further used as molecular building blocks for establishing a novel, next-generation MFS prototype.
A mechanistic molecular understanding of their structural response to mechanical load is essential for revealing the sequence-structure-mechanics relationships of CCs. Here, synthetic heterodimeric CCs of different length were loaded in shear geometry and their mechanical response was investigated using a combination of atomic force microscope (AFM)-based single-molecule force spectroscopy (SMFS) and steered molecular dynamics (SMD) simulations. SMFS showed that the rupture forces of short heterodimeric CCs (3-5 heptads) lie in the range of 20-50 pN, depending on CC length, pulling geometry and the applied loading rate (dF/dt). Upon shearing, an initial rise in the force, followed by a force plateau and ultimately strand separation was observed in SMD simulations. A detailed structural analysis revealed that CC response to shear load depends on the loading rate and involves helix uncoiling, uncoiling-assisted sliding in the direction of the applied force and uncoiling-assisted dissociation perpendicular to the force axis.
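The loading-rate dependence of rupture forces measured by SMFS is commonly analyzed with the Bell-Evans model of dynamic force spectroscopy, which predicts a most probable rupture force F* = (kBT/xβ) ln(r xβ / (koff kBT)). The sketch below evaluates this expression; the parameter values are hypothetical placeholders, not fitted values from this work:

```python
import math

KBT = 4.11  # thermal energy at ~298 K, in pN*nm

def bell_evans_force(loading_rate, x_beta, k_off):
    """Most probable rupture force of the Bell-Evans model.

    loading_rate: force loading rate r in pN/s
    x_beta:       distance to the transition state in nm
    k_off:        zero-force dissociation rate in 1/s
    """
    return (KBT / x_beta) * math.log(loading_rate * x_beta / (k_off * KBT))

# Hypothetical parameters chosen only to land in the tens-of-pN regime:
forces = [bell_evans_force(r, x_beta=1.0, k_off=0.01) for r in (100, 1000, 10000)]
```

The model predicts a logarithmic increase of the most probable rupture force with loading rate, which is why SMFS rupture forces are quoted together with the loading rate dF/dt at which they were measured.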
The application potential of these mechanically characterized CCs as building blocks for MFSs has been tested in 2D cell culture applications with the goal of determining the threshold force for cell adhesion. Fully calibrated, 4- to 5-heptad long CC motifs (CC-A4B4 and CC-A5B5) were used for functionalizing glass surfaces with MFSs. 3T3 fibroblasts and endothelial cells carrying mutations in a signaling pathway linked to cell adhesion and mechanotransduction processes were used as model systems for time-dependent adhesion experiments. A5B5-MFS efficiently supported cell attachment to the functionalized surfaces for both cell types, while A4B4-MFS failed to maintain attachment of 3T3 fibroblasts after the first 2 hours of initial cell adhesion. This difference in cell adhesion behavior demonstrates that the magnitude of cell-ECM forces varies depending on the cell type and further supports the application potential of CCs as mechanoresponsive and tunable molecular building blocks for the development of next-generation protein-based MFSs. This novel CC-based MFS design is expected to provide a powerful new tool for observing cellular mechanosensing processes at the molecular level and to deliver new insights into the mechanisms and forces involved. This MFS design, utilizing mechanically tunable CC building blocks, will not only allow for measuring the molecular forces acting at the cell-ECM interface, but also yield a new platform for the development of mechanically controlled materials for a large number of biological and medical applications.
In this work, we investigated ultrafast demagnetization in a Heusler alloy. This material belongs to the half-metals and exists in a ferromagnetic phase. A special feature of the investigated alloy is its electronic band structure, which leads to a specific density of states: the majority electrons form a metal-like structure, while the minority electrons exhibit a gap near the Fermi level, as in a semiconductor. This particularity makes the material a good model system for proof-of-principle studies of demagnetization. Using pump-probe experiments, we carried out time-resolved measurements to determine the demagnetization times. For pumping, we used ultrashort laser pulses with a duration of around 100 fs. We used two excitation regimes with two different wavelengths, namely 400 nm and 1240 nm. By decreasing the photon energy to the size of the minority-electron gap, we explored the effect of the gap on the demagnetization dynamics. In this work, we used for the first time an optical parametric amplifier (OPA) for the generation of laser irradiation in the long-wavelength regime and tested it at the FemtoSpeX beamline of the BESSY II electron storage ring. With this new technique, we measured wavelength-dependent demagnetization dynamics. We found that the demagnetization time correlates with the photon energy of the excitation pulse: higher photon energy leads to faster demagnetization in our material. We attribute this result to the existence of the energy gap for the minority electrons and explain it via Elliott-Yafet scattering events. Additionally, we applied a new probing method for the magnetization state and verified its effectiveness: the well-known XMCD (X-ray magnetic circular dichroism), which we adapted for measurements in reflection geometry. Static experiments confirmed that the purely electronic dynamics can be separated from the magnetic dynamics.
We used photon energies fixed at the L3 edges of the corresponding elements with circular polarization. The appropriate incidence angle was determined from static measurements. Using this probing method in dynamic measurements, we explored the electronic and magnetic dynamics in this alloy.
Membrane adhesion is a fundamental biological process in which membranes are attached to neighboring membranes or surfaces. Membrane adhesion emerges from a complex interplay between the binding of membrane-anchored receptors/ligands and the membrane properties. In this work, we study membrane adhesion mediated by lipid-anchored saccharides using microsecond-long full-atomistic molecular dynamics simulations. Motivated by neutron scattering experiments on membrane adhesion via lipid-anchored saccharides, we investigate the role of LeX, Lac1, and Lac2 saccharides and membrane fluctuations in membrane adhesion.
We study the binding of saccharides in three different systems: for saccharides in water, for saccharides anchored to essentially planar membranes at fixed separations, and for saccharides anchored to apposing fluctuating membranes. Our simulations of two saccharides in water indicate that the saccharides engage in weak interactions to form dimers. We find that the binding occurs in a continuum of bound states instead of a certain number of well-defined bound structures, which we term "diffuse binding".
The binding of saccharides anchored to essentially planar membranes strongly depends on the separation of the membranes, which is fixed in our simulation system. We show that the binding constants for trans-interactions of two lipid-anchored saccharides monotonically decrease with increasing separation. Saccharides anchored to the same membrane leaflet engage in cis-interactions with binding constants comparable to the trans-binding constants at the smallest membrane separations. The interplay of cis- and trans-binding can be investigated in simulation systems with many lipid-anchored saccharides. For Lac2, our simulation results indicate a positive cooperativity of trans- and cis-binding: the trans-binding constant is enhanced by the cis-interactions. For LeX, in contrast, we observe no cooperativity between trans- and cis-binding. In addition, we determine the forces generated by trans-binding of lipid-anchored saccharides in planar membranes from the binding-induced deviations of the lipid anchors. We find that the forces acting on trans-bound saccharides increase with increasing membrane separation, up to values of the order of 10 pN.
The binding of saccharides anchored to the fluctuating membranes results from an interplay between the binding properties of the lipid-anchored saccharides and membrane fluctuations. Our simulations, which have the same average separation of the membranes as obtained from the neutron scattering experiments, yield a binding constant larger than in planar membranes with the same separation. This result demonstrates that membrane fluctuations play an important role at average membrane separations which are seemingly too large for effective binding. We further show that the probability distribution of the local separation can be well approximated by a Gaussian distribution. We calculate the relative membrane roughness and show that our results are in good agreement with the roughness values reported from the neutron scattering experiments.
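The roughness analysis described above can be illustrated with a minimal numerical sketch: sample hypothetical local membrane separations from a Gaussian, recover the roughness as the standard deviation of the local separation, and check the Gaussian shape via the sample skewness. The mean separation and fluctuation amplitude below are illustrative values, not the simulation data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical local separations z of the two membranes, in nm:
# mean separation z_bar with Gaussian fluctuations of amplitude xi.
z_bar, xi = 5.0, 0.8
z = rng.normal(z_bar, xi, size=100_000)

roughness = z.std()                     # roughness = std of the local separation
relative_roughness = roughness / z.mean()

# A Gaussian distribution has zero skewness; a sample skewness near zero
# supports the Gaussian approximation of the local separation distribution.
skewness = np.mean((z - z.mean()) ** 3) / z.std() ** 3
```

In this sketch the recovered roughness matches the fluctuation amplitude that generated the samples, which is the same consistency check applied when comparing simulated roughness values with those from neutron scattering.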
This dissertation aims generally to justify the application of the dialectical methodology to the field of the philosophy of language and to carry out a systematic treatment of a limited part of the philosophy of language with the help of dialectics. In order to elucidate and establish this approach, which is scarcely or not at all represented in the research literature, I first draw on the philosophical reflections of two authors: Hegel and Wittgenstein.
At first glance, Hegel and Wittgenstein are authors who have little in common, except that both engaged with philosophy as a discipline and inevitably treated a shared topic, language, without any obvious connection in content or methodology. The first premise of this dissertation, with respect to the history of ideas, is to suggest that Hegel's concept of spirit (Geist) and Wittgenstein's form of life (Lebensform) are two approaches to, and results of, a philosophical effort that jointly undertake the necessary resolution or overcoming of skeptical argumentation. Indeed, in his Philosophical Investigations, Wittgenstein developed an argument that has been labeled the "paradox of rule-following" and was regarded in the secondary literature (mainly by Kripke) as a kind of skeptical argumentation. Accordingly, Wittgenstein's theory of language has been interpreted either as a resolution of this skepticism or simply as a skeptical text itself (Brandom). The first goal of my dissertation is to show that this paradox, as a skeptical argumentation, has remained incomplete, and that it can be regarded as the first decisive step toward the highest form of the skeptical challenge, the antinomy. A complete skeptical argumentation means that both the sole resolution of the paradox, dispositionalism, and the negation of this theory are provable. Starting from the resolution of the rule-following paradox presented in the Philosophical Investigations, I will therefore attempt to establish the completion of an antinomy of the concept of normativity with respect to linguistic rules, analogous to the cosmological antinomy developed by Kant (thesis cum antithesis). The second goal of my dissertation is consequently to show, 1.
that the Kantian resolution of the antinomy is ineffective with respect to the antinomy of normativity; 2. that this antinomy entails a necessary confrontation with a radical skepticism, and that we are logically compelled not merely to redefine some theory within the philosophy of language, but to question our methodology itself, that is, the application of the usual norms of rationality, at a fundamentally deeper level; and 3. that Hegelian dialectics emerges as the methodological resolution of such a radical skeptical challenge, that is, as the resolution of an antinomy as such. It is on the occasion of this methodological revision that Hegelian dialectics is drawn upon.
Nevertheless, the purpose of this dissertation is not limited to presenting an interpretation of Hegel's dialectics or an overcoming of Wittgenstein's form of life; rather, it is a matter of returning to the problems and principles of the concepts of the form of life and of theoretical spirit and, by means of Hegel's dialectics, moving beyond them in order to better understand the place and function of language. This work is carried out within the framework of a scientific project; in other words, it uses the methodological results of two philosophical authors to present a scientific program. The aim of this work is accordingly to gain, by drawing on Hegel's dialectics, a new understanding of language in which the two contradictory moments of cognition, the normativity effected through consciousness and that effected through dispositions, are constructively combined. The concrete gain of this methodology is thus the ability to establish a philosophy of language as a system, a system that makes it possible to grasp linguistic phenomena in all their aspects in a coherent manner. In terms of content, this program aims to derive dialectically the general stage of the concept of language as a moment of the concept of spirit, i.e., to determine the proper meaning of language. A complete treatment of the philosophy of language by means of dialectics could not, however, be carried out here, and the range of linguistic categories derived with the help of dialectics is limited to the doctrine of the imagination, which includes the doctrine of general semiology and of grammar.
Light-switchable proteins are being used increasingly to understand and manipulate complex molecular systems. The success of this approach has fueled the development of tailored photo-switchable proteins, to enable targeted molecular events to be studied using light. The development of novel photo-switchable tools has to date largely relied on rational design. Complementing this approach with directed evolution would be expected to facilitate these efforts. Directed evolution, however, has been relatively infrequently used to develop photo-switchable proteins due to the challenge presented by high-throughput evaluation of switchable protein activity. This thesis describes the development of two genetic circuits that can be used to evaluate libraries of switchable proteins, enabling optimization of both the on- and off-states. A screening system is described, which permits detection of DNA-binding activity based on conditional expression of a fluorescent protein. In addition, a tunable selection system is presented, which allows for the targeted selection of protein-protein interactions of a desired affinity range. This thesis additionally describes the development and characterization of a synthetic protein that was designed to investigate chromophore reconstitution in photoactive yellow protein (PYP), a promising scaffold for engineering photo-controlled protein tools.
Skarn deposits are found on every continent and formed at different times from the Precambrian to the Tertiary. Typically, the formation of a skarn is induced by a granitic intrusion into carbonate-rich sedimentary rocks. During contact metamorphism, fluids derived from the granite interact with the sedimentary host rocks, which results in the formation of calc-silicate minerals at the expense of carbonates. These newly formed minerals generally develop in a zoned metamorphic aureole, with garnet in the proximal and pyroxene in the distal zone. Ore elements contained in magmatic fluids precipitate due to the change in fluid composition. The temperature decrease of the entire system, caused by the cooling of the magmatic fluids and the influx of meteoric water, allows retrogression of some prograde minerals.
The Hämmerlein skarn deposit has a multi-stage history, with skarn formation during regional metamorphism and retrogression of primary skarn minerals during the granitic intrusion. Tin was mobilized during both events. The 340 Ma old tin-bearing skarn minerals show that tin was present in the sediments before the granite intrusion, and that a first Sn enrichment occurred during skarn formation by regional metamorphic fluids. In a second step, at ca. 320 Ma, tin-bearing fluids were produced by the intrusion of the Eibenstock granite. Tin, added by the granite and remobilized from the skarn calc-silicates, precipitated as cassiterite.
Compared to clay or marl, the skarn is enriched in Sn, W, In, Zn, and Cu. These metals were supplied during both regional metamorphism and granite emplacement. In addition, isotopic and chemical data from skarn samples show that the granite selectively added elements such as Sn, and that there was no visible granitic contribution to the sedimentary signature of the skarn.
The example of Hämmerlein shows that it is possible to form a tin-rich skarn without an associated granite when tin has already been transported from tin-bearing sediments during regional metamorphism by aqueous metamorphic fluids. Such skarns are not economically interesting if tin is contained only in the skarn minerals. Later alteration of the skarn (the heat and fluid source is not necessarily a granite), however, can lead to the formation of secondary cassiterite (SnO2), which can make the skarn economically highly interesting.
Geomagnetic paleosecular variations (PSVs) are an expression of geodynamo processes inside the Earth’s liquid outer core. These paleomagnetic time series provide insights into the properties of the Earth’s magnetic field, ranging from normal behavior with a dominating dipolar geometry, through field crises such as pronounced intensity lows and geomagnetic excursions with a distorted field geometry, to the complete reversal of the dominating dipole contribution. In particular, long-term high-resolution and high-quality PSV time series are needed for properly reconstructing the higher-frequency components in the spectrum of geomagnetic field variations and for a better understanding of the smoothing effects that occur when such paleomagnetic records are registered by sedimentary archives.
In this doctoral study, full-vector paleomagnetic records were derived from 16 sediment cores recovered from the southeastern Black Sea. Age models are based on radiocarbon dating and on correlating warming/cooling cycles, monitored by high-resolution X-ray fluorescence (XRF) elemental ratios as well as ice-rafted debris (IRD) in Black Sea sediments, to the sequence of ‘Dansgaard-Oeschger’ (DO) events defined from Greenland ice core oxygen isotope stratigraphy.
In order to identify the carriers of magnetization in Black Sea sediments, core MSM33-55-1, recovered from the southeastern Black Sea, was subjected to detailed rock magnetic and electron microscopy investigations. The younger part of core MSM33-55-1 was deposited continuously since 41 ka. Before 17.5 ka, the magnetic minerals were dominated by a mixture of greigite (Fe3S4) and titanomagnetite (Fe3-xTixO4) in samples with SIRM/κLF >10 kAm-1, or exclusively by titanomagnetite in samples with SIRM/κLF ≤10 kAm-1. Greigite was found to be generally present as crustal aggregates in locally reducing micro-environments. From 17.5 ka to 8.3 ka, the dominant magnetic mineral in this transition phase changed from greigite (17.5 – ~10.0 ka) to probably silicate-hosted titanomagnetite (~10.0 – 8.3 ka). After 8.3 ka, the anoxic Black Sea was a favorable environment for the formation of non-magnetic pyrite (FeS2) framboids.
To avoid compromising the paleomagnetic data with erroneous directions carried by greigite, samples with SIRM/κLF >10 kAm-1, shown by various methods to contain greigite, were removed from the obtained records. Consequently, full-vector paleomagnetic records, comprising directional data and relative paleointensity (rPI), were derived only from samples with SIRM/κLF ≤10 kAm-1 from the 16 Black Sea sediment cores. The obtained data sets were used to create a stack covering the time window between 68.9 and 14.5 ka, with a temporal resolution between 40 and 100 years depending on sedimentation rates.
According to the results obtained from Black Sea sediments, the second deepest minimum in relative paleointensity during the past 69 ka occurred at 64.5 ka. The field minimum during MIS 4 is associated with large declination swings beginning about 3 ka before the minimum. While a swing to 50°E is associated with steep inclinations (50-60°) at the coring site at 42°N, the subsequent declination swing to 30°W is associated with shallow inclinations of down to 40°. Nevertheless, these large deviations from the direction of a geocentric axial dipole field (I=61°, D=0°) cannot yet be termed 'excursional', since latitudes of the corresponding virtual geomagnetic poles (VGPs) only reach down to 51.5°N (120°E) and 61.5°N (75°W), respectively. However, these VGP positions on opposite sides of the globe are linked by VGP drift rates of up to 0.2° per year in between. These extreme secular variations might be the mid-latitude expression of the Norwegian–Greenland Sea excursion found at several sites much further north in Arctic marine sediments between 69°N and 81°N.
At about 34.5 ka, the Mono Lake excursion is evidenced in the stacked Black Sea PSV record by both an rPI minimum and directional shifts. Associated VGPs from the stacked Black Sea data migrated from Alaska, via central Asia and the Tibetan Plateau, to Greenland, performing a clockwise loop. This agrees with data recorded in the Wilson Creek Formation, USA, and in Arctic sediment core PS2644-5 from the Iceland Sea, suggesting a dominant dipole field. On the other hand, the Auckland lava flows, New Zealand, Summer Lake, USA, and an Arctic sediment core from ODP Site 919 yield distinct VGPs located in the central Pacific Ocean, due to a presumably non-dipole (multipolar) field configuration.
A directional anomaly at 18.5 ka, associated with pronounced swings in inclination and declination as well as a low in rPI, is probably contemporaneous with the Hilina Pali excursion, originally reported from Hawaiian lava flows. However, VGPs calculated from Black Sea sediments are not located at latitudes lower than 60°N, which denotes normal, though pronounced, secular variations. During the postulated Hilina Pali excursion, the VGPs calculated from Black Sea data migrated clockwise only along the coasts of the Arctic Ocean from NE Canada (20.0 ka), via Alaska (18.6 ka) and NE Siberia (18.0 ka), to Svalbard (17.0 ka), then looped clockwise through the eastern Arctic Ocean.
In addition to the Mono Lake and the Norwegian–Greenland Sea excursions, the Laschamp excursion was evidenced in the Black Sea PSV record with the lowest paleointensities at about 41.6 ka and a short-term (~500 years) full reversal centered at 41 ka. These excursions are further evidenced by an abnormal PSV index, though only the Laschamp and the Mono Lake excursions exhibit excursional VGP positions. The stacked Black Sea paleomagnetic record was also converted into one component parallel to the direction expected from a geocentric axial dipole (GAD) and two components perpendicular to it, representing only non-GAD components of the geomagnetic field. The Laschamp and the Norwegian–Greenland Sea excursions are characterized by extremely low GAD components, while the Mono Lake excursion is marked by large non-GAD contributions. Notably, negative values of the GAD component, indicating a fully reversed geomagnetic field, are observed only during the Laschamp excursion.
In summary, this doctoral thesis reconstructed high-resolution and high-fidelity PSV records from SE Black Sea sediments. The obtained record comprises three geomagnetic excursions, the Norwegian–Greenland Sea excursion, the Laschamp excursion, and the Mono Lake excursion, which are characterized by abnormal secular variations of different amplitudes centered at about 64.5 ka, 41.0 ka, and 34.5 ka, respectively. In contrast, the obtained PSV record from the Black Sea does not provide evidence for the postulated 'Hilina Pali excursion' at about 18.5 ka. Overall, the obtained Black Sea paleomagnetic record, covering field fluctuations from normal secular variations, through excursions, to a short but full reversal, points to a geomagnetic field characterized by a large dynamic range in intensity and a highly variable superposition of dipole and non-dipole contributions from the geodynamo during the past 68.9 to 14.5 ka.
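The VGP latitudes and longitudes quoted above follow from the standard geocentric-dipole conversion of a site's declination and inclination. As a hedged illustration (the site coordinates of roughly 42°N, 38°E for the southeastern Black Sea are an assumption for this sketch, not values taken from the thesis), the conversion can be written as:

```python
import math

def vgp(site_lat, site_lon, decl, incl):
    """Convert declination/inclination (degrees) observed at a site into a
    virtual geomagnetic pole (VGP), assuming a geocentric dipole field."""
    lam = math.radians(site_lat)
    D, I = math.radians(decl), math.radians(incl)
    # magnetic colatitude p of the site, from the dipole relation tan(I) = 2*cot(p)
    p = math.atan2(2.0, math.tan(I))
    # pole latitude from spherical trigonometry on the site-pole triangle
    pole_lat = math.asin(math.sin(lam) * math.cos(p)
                         + math.cos(lam) * math.sin(p) * math.cos(D))
    # longitudinal offset beta between site and pole meridians
    beta = math.asin(math.sin(p) * math.sin(D) / math.cos(pole_lat))
    if math.cos(p) >= math.sin(lam) * math.sin(pole_lat):
        pole_lon = site_lon + math.degrees(beta)
    else:
        pole_lon = site_lon + 180.0 - math.degrees(beta)
    return math.degrees(pole_lat), pole_lon % 360.0
```

For a geocentric axial dipole direction at the site (D=0°, I=61°) the VGP falls essentially on the geographic pole, while the steep easterly swing discussed above (e.g. D=50°E, I≈55°) yields a VGP near 51°N, consistent with the values reported in the record.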
In this thesis we introduce the concept of the degree of formality. It is directed against a dualistic point of view that distinguishes only between formal and informal proofs. This dualistic attitude does not respect the differences between the argumentations classified as informal, and it is unproductive because the individual potential of the respective argumentation styles cannot be appreciated and remains untapped.
This thesis has two parts. In the first of them we analyse the concept of the degree of formality (including a discussion of the respective benefits of each degree), while in the second we demonstrate its usefulness in three case studies. In the first case study we repair Haskell B. Curry's view of mathematics, which incidentally is of great importance in the first part of this thesis, in light of the different degrees of formality. In the second case study we delineate how awareness of the different degrees of formality can be used to help students learn how to prove. Third, we show how the advantages of proofs of different degrees of formality can be combined through the development of so-called tactics having a medium degree of formality. Together the three case studies show that the degrees of formality provide a convincing solution to the problem of untapped potential.
Carbonate-rich silicate and carbonate melts play a crucial role in deep Earth magmatic processes and their melt structure is a key parameter, as it controls physical and chemical properties. Carbonate-rich melts can be strongly enriched in geochemically important trace elements. The structural incorporation mechanisms of these elements are difficult to study because such melts generally cannot be quenched to glasses, which are usually employed for structural investigations. This thesis investigates the influence of CO2 on the local environments of trace elements contained in silicate glasses with variable CO2 concentrations as well as in silicate and carbonate melts. The compositions studied include sodium-rich peralkaline silicate melts and glasses and carbonate melts similar to those occurring naturally at Oldoinyo Lengai volcano, Tanzania.
The local environments of the three elements yttrium (Y), lanthanum (La) and strontium (Sr) were investigated in synthesized glasses and melts using X-ray absorption fine structure (XAFS) spectroscopy. In particular, extended X-ray absorption fine structure (EXAFS) spectroscopy provides element-specific information on local structure, such as bond lengths, coordination numbers and the degree of disorder. To cope with the enhanced structural disorder present in glasses and melts, the EXAFS analysis was based on fitting approaches using an asymmetric distribution function as well as a correlation model according to bond valence theory. Firstly, silicate glasses quenched from high pressure/temperature melts with up to 7.6 wt % CO2 were investigated. In strongly and extremely peralkaline glasses the local structure of Y is unaffected by the CO2 content (with oxygen bond lengths of ~ 2.29 Å). In contrast, the bond lengths for Sr-O and La-O increase with increasing CO2 content in the strongly peralkaline glasses from ~ 2.53 to ~ 2.57 Å and from ~ 2.52 to ~ 2.54 Å, respectively, while they remain constant in extremely peralkaline glasses (at ~ 2.55 Å and 2.54 Å, respectively). Furthermore, silicate and unquenchable carbonate melts were investigated in situ at high pressure/temperature conditions (2.2 to 2.6 GPa, 1200 to 1500 °C) using a Paris-Edinburgh press. A novel design of the pressure medium assembly for this press was developed, which features increased mechanical stability as well as enhanced transmittance at the relevant energies, allowing transmission EXAFS of elements present at low contents. Compared to the glasses, the Y-O, La-O and Sr-O bond lengths are elongated by up to +3 % in the melt and exhibit more asymmetric pair distributions. For all investigated silicate melt compositions Y-O bond lengths were found to be constant at ~ 2.37 Å, while in the carbonate melt the Y-O length increases slightly to 2.41 Å.
The La-O bond lengths, in turn, increase systematically over the whole silicate–carbonate melt join from 2.55 to 2.60 Å. Sr-O bond lengths in melts increase from ~ 2.60 to 2.64 Å from the pure silicate to the silicate-bearing carbonate composition, with a constant elevated bond length within the carbonate region.
For comparison and deeper insight, the glass and melt structures of Y- and Sr-bearing sodium-rich silicate to carbonate compositions were simulated in an explorative ab initio molecular dynamics (MD) study. The simulations confirm the observed patterns of CO2-dependent local changes around Y and Sr and additionally provide further insights into the detailed incorporation mechanisms of the trace elements and of CO2. Principal findings include that in sodium-rich silicate compositions carbon is either mainly incorporated as a free carbonate group or shares one oxygen with a network former (Si or [4]Al) to form a non-bridging carbonate. Of minor importance are bridging carbonates between two network formers; here, a clear preference for two [4]Al as adjacent network formers occurs, compared to what a statistical distribution would suggest. In C-bearing silicate melts minor amounts of molecular CO2 are present, which is almost completely dissolved as carbonate in the quenched glasses.
The combination of experiment and simulation provides extraordinary insights into glass and melt structures. The new data are interpreted on the basis of bond valence theory and are used to deduce potential mechanisms for the structural incorporation of the investigated elements, which allow for predictions of their partitioning behavior in natural melts. Furthermore, the study provides unique insights into the dissolution mechanisms of CO2 in silicate melts and into the carbonate melt structure. For the latter, a structural model is suggested, which is based on planar CO3 groups linking 7- to 9-fold cation polyhedra, in accordance with the structural units found in the Na-Ca carbonate nyerereite. Ultimately, the outcome of this study helps to rationalize the unique physical properties and geological phenomena related to carbonated silicate-carbonate melts.
Synchronization – the adjustment of rhythms among coupled self-oscillatory systems – is a fascinating dynamical phenomenon found in many biological, social, and technical systems.
The present thesis deals with synchronization in finite ensembles of weakly coupled self-sustained oscillators with distributed frequencies.
The standard model for the description of this collective phenomenon is the Kuramoto model – partly due to its analytical tractability in the thermodynamic limit of infinitely many oscillators. Similar to a phase transition in the thermodynamic limit, an order parameter indicates the transition from incoherence to a partially synchronized state. In the latter, a part of the oscillators rotates at a common frequency. In the finite case, fluctuations occur, originating from the quenched noise of the finite natural frequency sample.
We study intermediate ensembles of a few hundred oscillators in which fluctuations are comparably strong but which also allow for a comparison to frequency distributions in the infinite limit.
First, we define an alternative order parameter for the indication of a collective mode in the finite case. Then we test how the degree of synchronization and the mean rotation frequency of the collective mode depend on different characteristics of the natural frequency sample for different coupling strengths.
We find, first numerically, that the degree of synchronization depends strongly on the form (quantified by kurtosis) of the natural frequency sample and the rotation frequency of the collective mode depends on the asymmetry (quantified by skewness) of the sample. Both findings are verified in the infinite limit.
With these findings, we better understand and generalize observations of other authors. Somewhat aside from the general line of thought, we find an analytical expression for the volume contraction in phase space.
The second part of this thesis concentrates on an ordering effect of the finite-size fluctuations. In the infinite limit, the oscillators are separated into coherent and incoherent, that is, ordered and disordered, oscillators. In finite ensembles, finite-size fluctuations can generate additional order among the asynchronous oscillators. The basic principle – noise-induced synchronization – is known from several recent papers. Among coupled oscillators, phases are pushed together by the order parameter fluctuations, as we show directly on the one hand and, on the other hand, quantify with a synchronization measure from directional statistics between pairs of passive oscillators.
We determine the dependence of this synchronization measure on the ratio of the pairwise natural frequency difference to the variance of the order parameter fluctuations. We find good agreement with a simple analytical model in which we replace the deterministic fluctuations of the order parameter by white noise.
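The finite-ensemble dynamics described above can be sketched in a few lines. The following minimal simulation (parameter values are illustrative, not those used in the thesis) integrates the Kuramoto model in its mean-field form and measures the order parameter r = |⟨e^{iθ}⟩|:

```python
import numpy as np

def kuramoto_order(N=200, K=6.0, T=2000, dt=0.05, seed=1):
    """Euler integration of the Kuramoto model
        dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i),
    returning the order parameter r = |<exp(i*theta)>| averaged over
    the second half of the run."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 1.0, N)        # quenched natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases
    rs = []
    for t in range(T):
        z = np.exp(1j * theta).mean()      # complex order parameter r*exp(i*psi)
        # mean-field form of the all-to-all coupling: K*r*sin(psi - theta_i)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
        if t >= T // 2:
            rs.append(np.abs(z))
    return float(np.mean(rs))
```

With K well above the synchronization threshold (about 1.6 for a unit-variance Gaussian frequency distribution in the infinite limit), r settles at a value of order one; for K = 0 it merely fluctuates around the finite-size level of roughly 1/√N, which is exactly the quenched-noise effect the thesis exploits.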
Simulating the impact of herbicide drift exposure on non-target terrestrial plant communities
(2019)
In Europe, almost half of the terrestrial landscape is used for agriculture. Thus, semi-natural habitats such as field margins are essential for maintaining diversity in intensively managed farmland. However, plants located at field margins are threatened by agricultural practices such as the application of pesticides within the fields. Pesticides are chemicals developed to control undesired species within agricultural fields in order to enhance yields. The use of pesticides, however, also affects non-target organisms within and outside of the agricultural fields, i.e., organisms not intended to be sprayed or controlled. For example, plants occurring in field margins are not intended to be sprayed but can nevertheless be impaired by herbicide drift exposure. The authorization of plant protection products such as herbicides requires risk assessments to ensure that the application of the product has no unacceptable effects on the environment. For non-target terrestrial plants (NTTPs), the risk assessment is based on standardized greenhouse studies at the level of plant individuals. To account for the protection of plant populations and communities under realistic field conditions, i.e., to extrapolate from greenhouse studies to field conditions and from the individual level to the community level, assessment factors are applied. However, recent studies question whether the current risk assessment scheme meets the specific protection goals for non-target terrestrial plants as suggested by the European Food Safety Authority (EFSA). There is a need to clarify the gaps of the current risk assessment and to include suitable higher-tier options in the upcoming guidance document for non-target terrestrial plants.
In my thesis, I studied the impact of herbicide drift exposure on NTTP communities using a mechanistic modelling approach. I addressed the main gaps and uncertainties of the current risk assessment and finally suggested this modelling approach as a novel higher-tier option for future risk assessments. Specifically, I extended the plant community model IBC-grass (Individual-Based Community model for grasslands) to reflect herbicide impacts on plant individuals. In the first study, I compared model predictions of short-term herbicide impacts on artificial plant communities with empirical data and demonstrated the capability of the model to realistically reflect herbicide impacts. In the second study, I addressed the research question of whether reproductive endpoints need to be included in future risk assessments to protect plant populations and communities. I compared the consequences of theoretical herbicide impacts on different plant attributes for long-term plant population dynamics in the community context and concluded that reproductive endpoints only need to be considered if the assumed herbicide effect is very high. The endpoints measured in the current vegetative vigour and seedling emergence studies had strong impacts on the dynamics of plant populations and communities already at lower effect intensities. Finally, the third study analysed long-term impacts of herbicide application for three different plant communities. This study highlighted the suitability of the modelling approach for simulating different communities and thus for detecting sensitive environmental conditions.
Overall, my thesis demonstrates the suitability of mechanistic modelling approaches to be used as higher tier options for risk assessments. Specifically, IBC-grass can incorporate available individual-level effect data of standardized greenhouse experiments to extrapolate to community-level under various environmental conditions. Thus, future risk assessments can be improved by detecting sensitive scenarios and including worst-case impacts on non-target plant communities.
The individual’s mental lexicon comprises all known words as well as related information on semantics, orthography and phonology. Moreover, entries are connected by similarities in these language domains, building a large network structure. Access to lexical information is crucial for the processing of words and sentences. Thus, a lack of information inhibits retrieval and can cause language processing difficulties. Hence, the composition of the mental lexicon is essential for language skills, and its assessment is a central topic of linguistic and educational research.
In early childhood, measurement of the mental lexicon is uncomplicated, for example through parental questionnaires or the analysis of speech samples. However, with growing content the measurement becomes more challenging: with more and more words in the mental lexicon, the inclusion of all possible known words in a test or questionnaire becomes impossible. That is why there is a lack of methods to assess the mental lexicon of school children and adults. For the same reason, there are only few findings on the course of lexical development during the school years as well as its specific effect on other language skills. This dissertation is intended to close this gap by pursuing two major goals: First, I wanted to develop a method to assess lexical features, namely lexicon size and lexical structure, for children of different age groups. Second, I aimed to describe the results of this method in terms of the lexical development of size and structure. The findings were intended to help understand the mechanisms of lexical acquisition and to inform theories of vocabulary growth.
The approach is based on the dictionary method, in which a sample of words from a dictionary is tested and the results are projected onto the whole dictionary to determine an individual’s lexicon size. In the present study, the childLex corpus, a written language corpus for children in German, served as the basis for lexicon size estimation. The corpus is assumed to comprise all words children attending primary school could know. Testing a sample of words from the corpus enables projection of the results onto the whole corpus. For this purpose, a vocabulary test based on the corpus was developed. Afterwards, the test performance of virtual participants was simulated by drawing lexicons of different sizes from the corpus and checking whether the test items were included in the lexicon or not. This allowed the relation between test performance and total lexicon size to be determined and then transferred to a sample of real participants. Besides lexicon size, lexical content could be approximated with this approach and analyzed in terms of lexical structure.
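The projection step of this sampling approach can be sketched in a toy form. In the following hypothetical version (the corpus size, sample size, and the simplifying assumption that a participant with lexicon size L knows exactly the L most frequent words are all illustrative choices, not the procedure calibrated on childLex):

```python
import random

def estimate_lexicon_size(corpus_size, true_size, n_items=400, seed=7):
    """Toy dictionary method: test a random sample of word ranks and
    project the observed hit rate onto the full corpus."""
    rng = random.Random(seed)
    # words are identified by frequency rank 0..corpus_size-1;
    # a participant with lexicon size L is assumed to know ranks < L
    items = rng.sample(range(corpus_size), n_items)
    hits = sum(1 for rank in items if rank < true_size)
    # projection: scale the observed hit rate up to the whole corpus
    return round(hits / n_items * corpus_size)

# a simulated participant knowing 6,000 of 10,000 corpus words
estimate = estimate_lexicon_size(corpus_size=10_000, true_size=6_000)
```

In the actual studies, simulated virtual participants drawn from the childLex corpus calibrate the mapping from test score to lexicon size, rather than this direct proportional projection; the sketch only illustrates why a small tested sample suffices to recover the total size up to sampling error.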
To pursue the presented aims and establish the sampling method, I conducted three consecutive studies. Study 1 covers the development of a vocabulary test based on the childLex corpus. The test uses the yes/no format and includes three versions for different age groups. The validation, grounded in the Rasch model, shows that it is a valid instrument to measure vocabulary for primary school children in German. In Study 2, I established the method to estimate lexicon sizes and present results on lexical development during primary school. Plausible results demonstrate that lexical growth follows a quadratic function, starting with about 6,000 words at the beginning of school and reaching about 73,000 words on average for young adults. Moreover, the study revealed large interindividual differences. Study 3 focused on the analysis of network structures in the mental lexicon, based on orthographic similarities, and their development. It demonstrates that these networks possess small-world characteristics and decrease in interconnectivity with age.
Taken together, this dissertation provides an innovative approach for assessing and describing the development of the mental lexicon from primary school onwards. The studies provide recent results on lexical acquisition in different age groups that were missing before. They impressively show the importance of this period and display the existence of extensive interindividual differences in lexical development. One central aim of future research needs to be addressing the causes and prevention of these differences. In addition, the application of the method for further research (e.g. the adaptation for other target groups) and for teaching purposes (e.g. the adaptation of texts for different target groups) appears promising.
Optimization is a core part of technological advancement and is usually heavily aided by computers. However, since many optimization problems are hard, it is unrealistic to expect an optimal solution within reasonable time. Hence, heuristics are employed, that is, computer programs that try to produce solutions of high quality quickly. One special class are estimation-of-distribution algorithms (EDAs), which are characterized by maintaining a probabilistic model over the problem domain, which they evolve over time. In an iterative fashion, an EDA uses its model in order to generate a set of solutions, which it then uses to refine the model such that the probability of producing good solutions is increased.
In this thesis, we theoretically analyze the class of univariate EDAs over the Boolean domain, that is, over the space of all length-n bit strings. In this setting, the probabilistic model of a univariate EDA consists of an n-dimensional probability vector where each component denotes the probability to sample a 1 for that position in order to generate a bit string.
Our contribution follows two main directions: first, we analyze general inherent properties of univariate EDAs. Second, we determine the expected run times of specific EDAs on benchmark functions from theory. In the first part, we characterize when EDAs are unbiased with respect to the problem encoding. We then consider a setting where all solutions look equally good to an EDA, and we show that the probabilistic model of an EDA quickly evolves into an incorrect model if it is always updated such that it does not change in expectation.
In the second part, we first show that the algorithms cGA and MMAS-fp are able to efficiently optimize a noisy version of the classical benchmark function OneMax. We perturb the function by adding Gaussian noise with a variance of σ², and we prove that the algorithms are able to generate the true optimum in a time polynomial in σ² and the problem size n. For MMAS-fp, we generalize this result to linear functions. Further, we prove a run-time lower bound of Ω(n log n) for the algorithm UMDA on (unnoisy) OneMax. Last, we introduce a new algorithm that is able to optimize the benchmark functions OneMax and LeadingOnes both in O(n log n), which is a novelty for heuristics in the domain we consider.
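The cGA mentioned above is a good illustration of the univariate EDA scheme: its entire model is the probability vector described earlier. The following sketch (the update strength 1/K, the iteration budget, and the tie-breaking rule are illustrative choices, not the parameterizations used in the run-time analyses) optimizes OneMax:

```python
import random

def cga_onemax(n=20, K=100, iters=5000, seed=3):
    """Compact genetic algorithm (cGA) on OneMax: maintain a probability
    vector p, sample two offspring per step, and shift each p_i by 1/K
    toward the fitter offspring on every position where the two differ."""
    rng = random.Random(seed)
    p = [0.5] * n          # univariate model: one frequency per bit
    best = 0
    for _ in range(iters):
        x = [int(rng.random() < pi) for pi in p]
        y = [int(rng.random() < pi) for pi in p]
        if sum(y) > sum(x):        # OneMax fitness = number of ones
            x, y = y, x            # make x the winner (x kept on ties)
        best = max(best, sum(x))
        for i in range(n):
            if x[i] != y[i]:
                step = 1.0 / K if x[i] == 1 else -1.0 / K
                # clamp p_i to the margins [1/n, 1 - 1/n]
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + step))
    return best
```

The margins 1/n and 1 − 1/n keep every bit sampleable in both values, preventing premature fixation of the model, which is exactly the kind of inherent model behavior the first part of the thesis analyzes.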
Cellulose derived polymers
(2019)
Plastics such as polyethylene, polypropylene, and polyethylene terephthalate are part of our everyday lives in the form of packaging, household goods, electrical insulation, etc. These polymers are non-degradable and create many environmental problems and public health concerns. Additionally, they are produced from finite fossil resources. Given the continuous depletion of these limited resources, it is important to look toward renewable sources and, ideally, toward biodegradability of the produced polymers. Although many bio-based polymers are known, such as polylactic acid, polybutylene succinate adipate or polybutylene succinate, none have yet shown the promise of replacing conventional polymers like polyethylene, polypropylene and polyethylene terephthalate. Cellulose is one of the most abundant renewable resources produced in nature. It can be transformed into various small molecules, such as sugars, furans, and levoglucosenone. The aim of this research is to use cellulose-derived molecules for the synthesis of polymers.
Acid-treated cellulose was subjected to thermal pyrolysis to obtain levoglucosenone, which was reduced to levoglucosenol. Levoglucosenol was polymerized, for the first time, by ring-opening metathesis polymerization (ROMP), yielding polymers with high molar masses of up to ~150 kg/mol. Poly(levoglucosenol) is thermally stable up to ~220 ℃, amorphous, and exhibits a relatively high glass transition temperature of ~100 ℃. It can be cast into a transparent film resembling common plastic and was found to degrade in a moist acidic environment. This means that poly(levoglucosenol) may find use as an alternative to conventional plastics such as polystyrene.
Levoglucosenol was also converted into levoglucosenyl methyl ether, which was polymerized by cationic ring-opening metathesis polymerization (CROP). Polymers were obtained with molar masses up to ~36 kg/mol. These polymers are thermally stable up to ~220 ℃ and are semi-crystalline thermoplastics with a glass transition temperature of ~35 ℃ and a melting transition of 70-100 ℃. Additionally, the polymers were further functionalized by cross-linking, hydrogenation, and thiol-ene click chemistry.
Predators can have numerical and behavioral effects on prey animals. While numerical effects are well explored, the impact of behavioral effects is less clear. Furthermore, behavioral effects are generally analyzed either with a focus on single individuals or with a focus on consequences for other trophic levels. As a result, the impact of fear at the level of prey communities is overlooked, despite potential consequences for conservation and nature management. To improve our understanding of predator-prey interactions, an assessment of the consequences of fear in shaping prey community structures is crucial.
In this thesis, I evaluated how fear alters prey space use, community structure, and composition, focusing on terrestrial mammals. By integrating landscapes of fear into an existing individual-based and spatially explicit model, I simulated community assembly of prey animals via individual home range formation. The model spans multiple hierarchical levels, from individual home range behavior to patterns of prey community structure and composition. Its mechanistic approach allowed for the identification of the underlying mechanisms driving prey community responses under fear.
My results show that fear modified prey space use and community patterns. Under fear, prey animals shifted their home ranges towards safer areas of the landscape. Furthermore, fear decreased the total biomass and the diversity of the prey community and reinforced shifts in community composition towards smaller animals. These effects could be mediated by an increasing availability of refuges in the landscape. Under landscape changes, such as habitat loss and fragmentation, fear intensified negative effects on prey communities. Prey communities in risky environments were subject to a non-proportional diversity loss of up to 30% if fear was taken into account. Regarding habitat properties, I found that well-connected, large safe patches can reduce the negative consequences of habitat loss and fragmentation on prey communities. Including variation in risk perception between prey animals affected prey space use. Animals with a high risk perception predominantly used safe areas of the landscape, while animals with a low risk perception preferred areas with a high food availability. On the community level, prey diversity was higher in heterogeneous landscapes of fear if individuals varied in their risk perception compared to scenarios in which all individuals had the same risk perception.
Overall, my findings give a first, comprehensive assessment of the role of fear in shaping prey communities. The linkage between individual home range behavior and patterns at the community level allows for a mechanistic understanding of the underlying processes. My results underline the importance of the structure of the landscape of fear as a key driver of prey community responses, especially if the habitat is threatened by landscape changes. Furthermore, I show that individual landscapes of fear can improve our understanding of the consequences of trait variation on community structures. Regarding conservation and nature management, my results support calls for modern conservation approaches that go beyond single species and address the protection of biotic interactions.
The pore space of a carbonate rock is usually composed of a specific assemblage of diverse pore types, which differ in origin and can additionally vary strongly in shape and size (e.g., Melim et al., 2001; Lee et al., 2009; He et al., 2014; Dernaika & Sinclair, 2017; Zhang et al., 2017). These multimodal pore systems, typical of carbonates, arise both from primary depositional processes and from repeated modification of the pore space after deposition of the sediment. This leads to an uneven distribution of pore space properties over very short distances and to the coexistence of effective and ineffective pores. These inherent differences in the effectiveness of individual pore types are the main reason for the frequently very low correlation between porosity and permeability in carbonates (e.g., Mazzullo, 2004; Ehrenberg & Nadeau, 2005; Hollis et al., 2010; He et al., 2014; Rashid et al., 2015; Dernaika & Sinclair, 2017). By extracting interconnected, and thus effective, pore types, however, the understanding and prediction of permeability for a given porosity value can be greatly improved (e.g., Melim et al., 2001; Zhang et al., 2017). To this end, this thesis presents a method based on digital image analysis (DIA) that allows the effectiveness of pores in the analyzed Middle Miocene lacustrine carbonates of the Nördlinger Ries crater lake (southern Germany) to be calculated step by step. Using the pore shape factor (sensu Anselmetti et al., 1998) as a parameter to quantify the interconnectivity between pores, the potential contribution of each pore type to the total permeability is determined. In this way, the most effective pore types within the analyzed carbonates can be identified.
Furthermore, digital image analysis is used to extract cemented pore spaces in order to quantify the influence of cementation on the pore space properties. An independent method (fluid-flow simulation), whose results are in turn evaluated with digital image analysis, confirms the previous findings: interpeloidal pores and dissolution pores are the two most effective pore types in the pore space of the Ries lake carbonates. The extraction of the interconnected (i.e., effective) pore network finally leads to a considerably improved correlation between porosity and permeability in the analyzed carbonates. The method described in this thesis provides a quantitative petrographic tool for extracting the effective porosity of a pore space, leading to a better understanding of how the pore systems of carbonates generate permeability. This dissertation also shows that the shape complexity of pores is one of the most important parameters controlling the interconnectivity between individual pores and thus the development of effective porosity. Moreover, digital image analysis proves to be an excellent tool for linking porosity and permeability directly to their common origin: the rock texture and the associated pore structure.
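A pore shape factor of the kind referred to above (sensu Anselmetti et al., 1998) is commonly computed in DIA as the ratio of a pore's measured perimeter to the perimeter of a circle of equal area; the sketch below uses this common definition as an assumption, since the thesis does not spell out the exact formula:

```python
import math

def pore_shape_factor(perimeter, area):
    """Perimeter divided by the circumference of an equal-area circle.

    Equals 1 for a perfectly circular pore and grows with outline
    complexity; by the isoperimetric inequality it is always >= 1.
    (One common DIA convention; the exact normalization used in the
    thesis may differ.)
    """
    return perimeter / (2.0 * math.sqrt(math.pi * area))

# A circle of radius r = 3: P = 2*pi*r, A = pi*r^2 -> factor ≈ 1.0
r = 3.0
print(pore_shape_factor(2 * math.pi * r, math.pi * r * r))  # ≈ 1.0
```

In this picture, complex, elongated, or dendritic pores (high shape factor) offer more contact surface to neighboring pores, which is why shape complexity correlates with interconnectivity and thus with effective porosity.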
On a planetary scale, human populations need to adapt to both socio-economic and environmental problems amidst rapid global change. This holds true for coupled human-environment (socio-ecological) systems in rural and urban settings alike. Two examples are drylands and urban coasts. Such socio-ecological systems have a global distribution. Therefore, advancing the knowledge base for identifying socio-ecological adaptation needs with local vulnerability assessments alone is infeasible: the systems cover vast areas, while funding, time, and human resources for local assessments are limited. These resources are lacking in low- and middle-income countries (LICs and MICs) in particular.
But places in a specific socio-ecological system are not only unique and complex – they also exhibit similarities. A global patchwork of local rural drylands vulnerability assessments of human populations to socio-ecological and environmental problems has already been reduced to a limited number of problem structures, which typically cause vulnerability. However, the question arises whether this is also possible in urban socio-ecological systems. The question also arises whether these typologies provide added value in research beyond global change. Finally, the methodology employed for drylands needs refining and standardizing to increase its uptake in the scientific community. In this dissertation, I set out to fill these three gaps in research.
The geographical focus in my dissertation is on LICs and MICs, which generally have lower capacities to adapt, and greater adaptation needs, regarding rapid global change. Using a spatially explicit indicator-based methodology, I combine geospatial and clustering methods to identify typical configurations of key factors in case studies causing vulnerability to human populations in two specific socio-ecological systems. Then I use statistical and analytical methods to interpret and appraise both the typical configurations and the global typologies they constitute.
First, I improve the indicator-based methodology and then reanalyze typical global problem structures of socio-ecological drylands vulnerability with seven indicator datasets. The reanalysis confirms the key tenets and produces a more realistic and nuanced typology of eight spatially explicit problem structures, or vulnerability profiles: Two new profiles with typically high natural resource endowment emerge, in which overpopulation has led to medium or high soil erosion. Second, I determine whether the new drylands typology and its socio-ecological vulnerability concept advance a thematically linked scientific debate in human security studies: what drives violent conflict in drylands? The typology is a much better predictor for conflict distribution and incidence in drylands than regression models typically used in peace research. Third, I analyze global problem structures typically causing vulnerability in an urban socio-ecological system - the rapidly urbanizing coastal fringe (RUCF) – with eleven indicator datasets. The RUCF also shows a robust typology, and its seven profiles show huge asymmetries in vulnerability and adaptive capacity. The fastest population increase, lowest income, most ineffective governments, most prevalent poverty, and lowest adaptive capacity are all typically stacked in two profiles in LICs. This shows that beyond local case studies tropical cyclones and/or coastal flooding are neither stalling rapid population growth, nor urban expansion, in the RUCF. I propose entry points for scaling up successful vulnerability reduction strategies in coastal cities within the same vulnerability profile.
This dissertation shows that patchworks of local vulnerability assessments can be generalized to structure global socio-ecological vulnerabilities in both rural and urban socio-ecological systems according to typical problems. In terms of climate-related extreme events in the RUCF, conflicting problem structures and means to deal with them are threatening to widen the development gap between LICs and high-income countries unless successful vulnerability reduction measures are comprehensively scaled up. The explanatory power for human security in drylands warrants further applications of the methodology beyond global environmental change research in the future. Thus, analyzing spatially explicit global typologies of socio-ecological vulnerability is a useful complement to local assessments: The typologies provide entry points for where to consider which generic measures to reduce typical problem structures – including the countless places without local assessments. This can save limited time and financial resources for adaptation under rapid global change.
Interactions and feedbacks between tectonics, climate, and upper plate architecture control basin geometry, relief, and depositional systems. The Andes are part of a long-lived continental margin characterized by multiple tectonic cycles which have strongly modified the Andean upper plate architecture. In the Andean retroarc, spatiotemporal variations in the structure of the upper plate and in tectonic regimes have resulted in marked along-strike variations in basin geometry, stratigraphy, deformational style, and mountain belt morphology. These along-strike variations include high-elevation plateaus (Altiplano and Puna) associated with a thin-skinned fold-and-thrust belt, and thick-skinned deformation in broken foreland basins such as the Santa Barbara system and the Sierras Pampeanas. At the confluence of the Puna Plateau, the Santa Barbara system, and the Sierras Pampeanas, major along-strike changes in upper plate architecture, mountain belt morphology, basement exhumation, and deformation style can be recognized. I have used a source-to-sink approach to unravel the spatiotemporal tectonic evolution of the Andean retroarc between 26 and 28°S. I obtained a large low-temperature thermochronology data set from basement units, which includes apatite fission track, apatite U-Th-Sm/He, and zircon U-Th/He (ZHe) cooling ages. Stratigraphic descriptions of Miocene units were temporally constrained by U-Pb LA-ICP-MS zircon ages from interbedded pyroclastic material.
Modeled ZHe ages suggest that the basement of the study area was exhumed during the Famatinian orogeny (550-450 Ma), followed by a period of relative tectonic quiescence during the Paleozoic and the Triassic. The basement experienced horst exhumation during the Cretaceous development of the Salta rift. After initial exhumation, deposition of thick Cretaceous syn-rift strata caused reheating of several basement blocks within the Santa Barbara system. During the Eocene-Oligocene, the Andean compressional setting was responsible for the exhumation of several disconnected basement blocks. These exhumed blocks were separated by areas of low relief, in which a humid climate and low erosion rates facilitated the development of etchplains on the crystalline basement. The exhumed basement blocks formed an Eocene to Oligocene broken foreland basin in the back-bulge depozone of the Andean foreland. During the Early Miocene, foreland basin strata filled up the preexisting Paleogene topography. The basement blocks in lower relief positions were reheated; associated geothermal gradients were higher than 25°C/km. Miocene volcanism was responsible for lateral variations in the amount of reheating along the Campo-Arenal basin. Around 12 Ma, a new deformational phase modified the drainage network and fragmented the lacustrine system. As deformation and rock uplift continued, the easily eroded sedimentary cover was efficiently removed and reworked by an ephemeral fluvial system, preventing the development of significant relief. After ~6 Ma, the low erodibility of the basement blocks which began to be exposed caused an increase in relief, leading to the development of stable fluvial systems. Progressive relief development modified atmospheric circulation, creating a rainfall gradient. After 3 Ma, orographic rainfall and high relief led to the development of proximal fluvial-gravitational depositional systems in the surrounding basins.
Aluminum oxide is an Earth-abundant geological material, and its interaction with water is of crucial importance for geochemical and environmental processes. Some aluminum oxide surfaces are also known to be useful in heterogeneous catalysis, while the surface chemistry of aqueous oxide interfaces determines the corrosion, growth, and dissolution of such materials. In this doctoral work, we looked mainly at the (0001) surface of α-Al2O3 and its reactivity towards water. In particular, a great focus of this work is dedicated to simulating and addressing the vibrational spectra of water adsorbed on the α-alumina(0001) surface under various conditions and at different coverages. In fact, the main source of comparison and inspiration for this work comes from the collaboration with the “Interfacial Molecular Spectroscopy” group led by Dr. R. Kramer Campen at the Fritz-Haber Institute of the MPG in Berlin. The expertise of our project partners in surface-sensitive Vibrational Sum Frequency (VSF) generation spectroscopy was crucial to develop and adapt the specific simulation schemes used in this work. Methodologically, the main approach employed in this thesis is Ab Initio Molecular Dynamics (AIMD) based on periodic Density Functional Theory (DFT), using the PBE functional with D2 dispersion correction. The analysis of vibrational frequencies from both a static and a dynamic, finite-temperature perspective offers the ability to investigate the water/aluminum oxide interface in close connection to experiment.
The first project presented in this work considers the characterization of dissociatively adsorbed deuterated water on the Al-terminated (0001) surface. This particular structure is known from both experiment and theory to be the thermodynamically most stable surface termination of α-alumina under Ultra-High Vacuum (UHV) conditions. Based on experiments performed by our colleagues at FHI, different adsorption sites and products have been proposed and identified for D2O. While previous theoretical investigations only looked at vibrational frequencies of dissociated OD groups by static Normal Mode Analysis (NMA), we employed a more sophisticated approach to directly assess vibrational spectra (such as IR and VSF) at finite temperature from AIMD. In this work, we have employed a recent implementation which makes use of velocity-velocity autocorrelation functions to simulate such spectral responses of O-H(D) bonds. This approach allows for an efficient and qualitatively accurate estimation of Vibrational Densities of States (VDOS) as well as IR and VSF spectra, which are then tested against experimental spectra from our collaborators.
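The velocity-autocorrelation route to spectra mentioned above can be sketched schematically: the VDOS is obtained as the Fourier transform of the velocity-velocity autocorrelation function (VACF), here computed efficiently via the Wiener-Khinchin theorem. This is a simplified illustration under our own assumptions (the actual implementation used in the thesis, including the dipole and polarizability weightings needed for IR and VSF intensities, is more involved):

```python
import numpy as np

def vdos(velocities, dt):
    """Schematic VDOS from an AIMD trajectory.

    velocities: array of shape (n_steps, n_atoms, 3), atomic velocities
                sampled every dt along the trajectory.
    dt:         timestep (any consistent time unit).
    Returns (frequencies, spectrum).
    """
    n_steps = velocities.shape[0]
    flat = velocities.reshape(n_steps, -1)
    # VACF via zero-padded FFT (Wiener-Khinchin), averaged over all
    # atoms and Cartesian components.
    f = np.fft.rfft(flat, n=2 * n_steps, axis=0)
    acf = np.fft.irfft(f * np.conj(f), axis=0)[:n_steps].real.mean(axis=1)
    acf /= acf[0]  # normalize so that C(0) = 1
    # The VDOS is the (magnitude of the) Fourier transform of the VACF.
    spectrum = np.abs(np.fft.rfft(acf))
    freqs = np.fft.rfftfreq(n_steps, d=dt)
    return freqs, spectrum
```

A single harmonic oscillation in the velocities then shows up as a peak in the spectrum at its vibrational frequency, which is the basic principle behind assigning O-H(D) stretch bands from AIMD.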
In order to extend previous work on unimolecularly dissociated water on α-Al2O3, we then considered a different system, namely a fully hydroxylated (0001) surface, which results from the reconstruction of the UHV-stable Al-terminated surface at high water contents. This model is then further extended by considering a hydroxylated surface with additional water molecules, forming a two-dimensional layer which serves as a potential template to simulate an aqueous interface under environmental conditions. Again employing finite-temperature AIMD trajectories at the PBE+D2 level, we investigated the behaviour of both the hydroxylated surface (HS) and the water-covered structure derived from it (known as HS+2ML). A full range of spectra, from VDOS to IR and VSF, is then calculated using the same methodology as described above. This is the main focus of the second project, reported in Chapter 5. In this case, the agreement between theoretical spectra and experimental data is good. In particular, we show that the high-frequency resonances observed above 3700 cm-1 in VSF experiments are associated with surface OH groups, known as “aluminols”, which are a key fingerprint of the fully hydroxylated surface.
In the third and last project, which is presented in Chapter 6, the extension of VSF spectroscopy experiments to the time-resolved regime offered us the opportunity to investigate vibrational energy relaxation at the α-alumina/water interface. Specifically, again using DFT-based AIMD simulations, we simulated vibrational lifetimes for surface aluminols as experimentally detected via pump-probe VSF. We considered the water-covered HS model as a potential candidate to address this problem. The vibrational (IR) excitation and subsequent relaxation is performed by means of a non-equilibrium molecular dynamics scheme, in which we specifically excite the O-H stretching mode of surface aluminols. The analysis of the non-equilibrium trajectories then yields relaxation times on the order of 2-4 ps, in overall agreement with the measured ones.
The aim of this work has been to provide, within a consistent theoretical framework, a better understanding of vibrational spectroscopy and dynamics for water on the α-alumina(0001) surface, ranging from very low water coverage (similar to the UHV case) up to medium-high coverages, resembling the hydroxylated oxide under moist environmental conditions.
The natural abundance of Coiled Coil (CC) motifs in cytoskeleton and extracellular matrix proteins suggests that CCs play an important role as passive (structural) and active (regulatory) mechanical building blocks. CCs are self-assembled superhelical structures consisting of 2-7 α-helices. Self-assembly is driven by hydrophobic and ionic interactions, while the helix propensity of the individual helices contributes additional stability to the structure. As a direct result of this simple sequence-structure relationship, CCs serve as templates for protein design and sequences with a pre-defined thermodynamic stability have been synthesized de novo. Despite this quickly increasing knowledge and the vast number of possible CC applications, the mechanical function of CCs has been largely overlooked and little is known about how different CC design parameters determine the mechanical stability of CCs. Once available, this knowledge will open up new applications for CCs as nanomechanical building blocks, e.g. in biomaterials and nanobiotechnology.
With the goal of shedding light on the sequence-structure-mechanics relationship of CCs, a well-characterized heterodimeric CC was utilized as a model system. The sequence of this model system was systematically modified to investigate how different design parameters affect the CC response when the force is applied to opposing termini in a shear geometry or separated in a zipper-like fashion from the same termini (unzip geometry). The force was applied using an atomic force microscope set-up and dynamic single-molecule force spectroscopy was performed to determine the rupture forces and energy landscape properties of the CC heterodimers under study. Using force as a denaturant, CC chain separation is initiated by helix uncoiling from the force application points. In the shear geometry, this allows uncoiling-assisted sliding parallel to the force vector or dissociation perpendicular to the force vector. Both competing processes involve the opening of stabilizing hydrophobic (and ionic) interactions. Also in the unzip geometry, helix uncoiling precedes the rupture of hydrophobic contacts.
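The "energy landscape properties" extracted from such dynamic force spectroscopy data are typically the distance to the transition state and the intrinsic off-rate (and, from it, the barrier height). A standard way to obtain them, and our assumption about the evaluation scheme since the abstract does not name the model, is the Bell-Evans analysis, in which the most probable rupture force grows logarithmically with the loading rate:

```latex
F^{*}(r) = \frac{k_{\mathrm{B}}T}{x_{\beta}}
\ln\!\left(\frac{r\,x_{\beta}}{k_{\mathrm{off}}\,k_{\mathrm{B}}T}\right),
\qquad
k_{\mathrm{off}} = k_{0}\exp\!\left(-\frac{\Delta G^{\ddagger}}{k_{\mathrm{B}}T}\right)
```

Here $r$ is the loading rate, $x_{\beta}$ the distance to the transition state, and $k_{\mathrm{off}}$ the intrinsic dissociation rate. Fitting $F^{*}$ against $\ln r$ thus yields $x_{\beta}$ from the slope and $k_{\mathrm{off}}$, and hence an estimate of the barrier height $\Delta G^{\ddagger}$, from the intercept.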
In a first series of experiments, the focus was placed on canonical modifications in the hydrophobic core and the helix propensity. Using the shear geometry, it was shown that both a reduced core packing and helix propensity lower the thermodynamic and mechanical stability of the CC; however, with different effects on the energy landscape of the system. A less tightly packed hydrophobic core increases the distance to the transition state, with only a small effect on the barrier height. This originates from a more dynamic and less tightly packed core, which provides more degrees of freedom to respond to the applied force in the direction of the force vector. In contrast, a reduced helix propensity decreases both the distance to the transition state and the barrier height. The helices are ‘easier’ to unfold and the remaining structure is less thermodynamically stable so that dissociation perpendicular to the force axis can occur at smaller deformations.
Having elucidated how canonical sequence modifications influence CC mechanics, the pulling geometry was investigated in the next step. Using one and the same sequence, the force application points were exchanged and two different shear and one unzipping geometry were compared. It was shown that the pulling geometry determines the mechanical stability of the CC. Different rupture forces were observed in the different shear as well as in the unzipping geometries, suggesting that chain separation follows different pathways on the energy landscape. Whereas the difference between CC shearing and unzipping was anticipated and has also been observed for other biological structures, the observed difference for the two shear geometries was less expected. It can be explained with the structural asymmetry of the CC heterodimer. It is proposed that the direction of the α-helices, the different local helix propensities and the position of a polar asparagine in the hydrophobic core are responsible for the observed difference in the chain separation pathways. In combination, these factors are considered to influence the interplay between processes parallel and perpendicular to the force axis.
To obtain more detailed insights into the role of helix stability, helical turns were reinforced locally using artificial constraints in the form of covalent and dynamic ‘staples’. A covalent staple bridges two adjacent helical turns, thus protecting them against uncoiling. The staple was inserted directly at the point of force application in one helix, or in the same terminus of the other helix, which did not experience the force directly. It was shown that preventing helix uncoiling at the point of force application reduces the distance to the transition state while slightly increasing the barrier height. This confirms that helix uncoiling is critically important for CC chain separation. When inserted into the second helix, this stabilizing effect is transferred across the hydrophobic core and protects the force-loaded turns against uncoiling. If both helices were stapled, no additional increase in mechanical stability was observed. When replacing the covalent staple with a dynamic metal-coordination bond, a smaller decrease in the distance to the transition state was observed, suggesting that the staple opens up while the CC is under load.
Using fluorinated amino acids as another type of non-natural modification, it was investigated how the enhanced hydrophobicity and the altered packing at the interface influence CC mechanics. The fluorinated amino acid was inserted into one central heptad of one or both α-helices. It was shown that this substitution destabilized the CC thermodynamically and mechanically. Specifically, the barrier height was decreased and the distance to the transition state increased. This suggests that any stabilizing effect of the increased hydrophobicity is overruled by disturbed packing, which originates from a bad fit of the fluorinated amino acid into the local environment. This in turn increases the flexibility at the interface, as also observed for the hydrophobic core substitution described above. In combination, this confirms that the arrangement of the hydrophobic side chains is an additional crucial factor determining the mechanical stability of CCs.
In conclusion, this work shows that knowledge of the thermodynamic stability alone is not sufficient to predict the mechanical stability of CCs. It is the interplay between helix propensity and hydrophobic core packing that defines the sequence-structure-mechanics relationship. In combination, both parameters determine the relative contribution of processes parallel and perpendicular to the force axis, i.e. helix uncoiling and uncoiling-assisted sliding as well as dissociation. This new mechanistic knowledge provides insight into the mechanical function of CCs in tissues and opens up the road for designing CCs with pre-defined mechanical properties. The library of mechanically characterized CCs developed in this work is a powerful starting point for a wide spectrum of applications, ranging from molecular force sensors to mechanosensitive crosslinks in protein nanostructures and synthetic extracellular matrix mimics.
The growing energy demand of modern economies leads to increased consumption of fossil fuels in the form of coal, oil, and natural gas as the main sources. The combustion of these carbon-based fossil fuels inevitably produces greenhouse gases, especially CO2. Approaches to tackle the CO2 problem are to capture it from combustion sources or directly from air, as well as to avoid CO2 production in energy-consuming sectors (e.g., the refrigeration sector). In the former, relatively low CO2 concentrations and competitive adsorption of other gases often lead to low CO2 capacities and selectivities. In both approaches, the interaction of gas molecules with porous materials plays a key role. Porous carbon materials possess unique properties including electric conductivity, tunable porosity, as well as thermal and chemical stability. Nevertheless, pristine carbon materials offer weak polarity and thus low CO2 affinity. This can be overcome by nitrogen doping, which enhances the affinity of carbon materials towards acidic or polar guest molecules (e.g., CO2, H2O, or NH3). In contrast to heteroatom-free materials, such carbon materials are in most cases “noble”, that is, they oxidize other matter rather than being oxidized, due to the very positive working potential of their electrons. The challenging task here is to achieve a homogeneous distribution of significant nitrogen content with similar bonding motifs throughout the carbon framework, and a uniform pore size distribution, to maximize host-guest interactions. The aim of this thesis is the development of novel synthesis pathways towards nitrogen-doped nanoporous noble carbon materials with precise design on a molecular level, and an understanding of their structure-related performance in energy and environmental applications, namely gas adsorption and electrochemical energy storage.
A template-free synthesis approach towards nitrogen-doped noble microporous carbon materials with high pyrazinic nitrogen content and C2N-type stoichiometry was established via thermal condensation of a hexaazatriphenylene derivative. The materials exhibited high uptake of guest molecules, such as H2O and CO2 at low concentrations, as well as moderate CO2/N2 selectivities. In the following step, the CO2/N2 selectivity was enhanced towards molecular sieving of CO2 via kinetic size exclusion of N2. Precise control over the degree of condensation, and thus over the atomic construction and porosity of the resulting materials, led to remarkable CO2/N2 selectivities, CO2 capacities, and heats of CO2 adsorption. The ultrahydrophilic nature of the pore walls and the narrow microporosity of these carbon materials served as an ideal basis for the investigation of interface effects with guest molecules even more polar than CO2, namely H2O and NH3.
H2O vapor physisorption measurements, as well as NH3 temperature-programmed desorption and thermal response measurements, showed an exceptionally high affinity towards H2O vapor and NH3 gas. A further series of nitrogen-doped carbon materials was synthesized by direct condensation of a pyrazine-fused conjugated microporous polymer, and their structure-related performance in electrochemical energy storage, namely as anode materials for sodium-ion batteries, was investigated.
All in all, the findings in this thesis exemplify the value of molecularly designed nitrogen-doped carbon materials with remarkable heteroatom content implemented as well-defined structure motives. The simultaneous adjustment of the porosity renders these materials suitable candidates for fundamental studies about the interactions between nitrogen-doped carbon materials and different guest species.
Im Rahmen dieser Arbeit wird anhand von neuartigen Materialien das Potential der Europium-Lumineszenz für die strukturelle Analyse dargestellt. Bei diesen Materialien handelt es sich zum einen um Nanopartikel mit Matrizes aus mehreren Metall-Mischoxiden und Dotierungen durch die Sonde Europium und zum anderen um Metallorganische Netzwerke (MOFs), die mit Neodym , Samarium- und Europium-Ionen beladen sind.
Die Synthese der aus der Kombination von Metalloxiden enthaltenen Nanopartikel ist unter milden Bedingungen mithilfe von speziell dafür hergestellten Reagenzien erfolgt und hat zu sehr kleinen, amorphen Nanopartikeln geführt. Durch eine nachfolgende Temperaturbehandlung hat sich die Kristallinität erhöht. Damit verbunden haben sich auch die Kristallstruktur sowie die Position des Dotanden Europium verändert.
While the established method of X-ray diffraction offers a view of the crystal lattice as a whole, the luminescence of europium, through the visibility of individual Stark splittings, provides information about its local symmetries. The symmetry is altered by oxygen vacancies, which influence the oxygen conductivity of the nanoparticles. This conductivity is important for applications as catalysts in industrial processes and likewise as sensors and therapeutics in biological systems.
For an initial catalytic characterization, the samples are investigated by temperature-programmed reduction. Furthermore, the mixed-oxide nanoparticles are also examined with regard to their usability as a matrix in upconversion processes.
Owing to their microporous structure, the metal-organic frameworks are suitable for storage applications, for useful gases as well as for pollutants. A biological application is also conceivable, in particular in the field of drug-delivery reagents.
If lanthanide ions are incorporated into the microporous structures of the metal-organic frameworks, an appropriate combination of them can act as a white-light emitter. Here, besides the ratios between the lanthanide ions, the exact position within the framework and the distance to other ions are of interest. To investigate these questions, the environmental sensitivity of the europium luminescence is exploited. The formate formation detected in this way depends on numerous parameters.
Overall, the methodology employed in this work, the use of europium as a structural probe, proves to be highly versatile and shows its greatest strength in combination with other methods of structural analysis. The novel materials characterized in such detail can now be further developed in a targeted, application-oriented manner.
Electrets are dielectrics with quasi-permanent electric charge and/or dipoles, and can sometimes be regarded as an electric analogy to a magnet. Since the discovery of the excellent charge-retention capacity of poly(tetrafluoroethylene) and the invention of the electret microphone, electrets have grown from a scientific curiosity into important applications in both science and technology. The history of electret research goes hand in hand with the quest for new materials with a better capacity for charge and/or dipole retention. To be useful, electrets normally have to be charged/poled to render them electro-active. This process involves electric-charge deposition and/or electric-dipole orientation on the dielectrics' surfaces and in their bulk. Knowledge of the spatial distribution of electric charge and/or dipole polarization after deposition and during subsequent decay is crucial for the task of improving their stability in the dielectrics.
Likewise, for dielectrics used in electrical-insulation applications, there is also a need for spatial profiling of accumulated space charge and polarization. Traditionally, space-charge accumulation and large dipole polarization within insulating dielectrics are considered undesirable and harmful, as they may cause dielectric loss and lead to internal electric-field distortion and local field enhancement. A high local electric field can trigger several aging processes and reduce the insulating dielectrics' lifetime. However, with the advent of high-voltage DC transmission and high-voltage capacitors for energy storage, this is no longer always the case. There is some overlap between the two fields of electrets and electrical insulation. While quasi-permanently trapped electric charge and/or large remanent dipole polarization are the requisites for electret operation, stably trapped electric charge in electrical insulation helps reduce electric-charge transport and the overall electric conductivity. Controlled charge trapping can help prevent further charge injection and accumulation and can serve a field-grading purpose in insulating dielectrics, whereas large dipole polarization can be utilized in energy-storage applications.
In this thesis, Piezoelectrically-generated Pressure Steps (PPSs) were employed as a nondestructive method to probe the electric-charge and dipole-polarization distribution in a range of thin-film (several hundred micrometers) polymer-based materials, namely polypropylene (PP), low-density polyethylene/magnesium oxide (LDPE/MgO) nanocomposites, and poly(vinylidene fluoride-co-trifluoroethylene) (P(VDF-TrFE)) copolymer. PP film surface-treated with phosphoric acid to introduce isolated surface nanostructures serves as an example of two-dimensional nanocomposites, whereas LDPE/MgO serves as the case of three-dimensional nanocomposites, with MgO nanoparticles dispersed in the LDPE polymer matrix. It is shown that the nanoparticles on the surface of acid-treated PP and in the bulk of LDPE/MgO nanocomposites improve the charge-trapping capacity of the respective material and prevent further charge injection and transport, and that the enhanced charge-trapping capacity makes PP and LDPE/MgO nanocomposites potential materials for both electret and electrical-insulation applications. As for PVDF and VDF-based copolymers, the remanent spatial polarization distribution depends critically on the poling method as well as on the specific parameters used in the respective poling method. In this work, homogeneous polarization poling of P(VDF-TrFE) copolymers with different VDF contents has been attempted with cyclical hysteresis poling. The behaviour of the remanent polarization growth and the spatial polarization distribution are reported and discussed. The PPS method has proven to be a powerful tool for characterizing charge storage and transport in a wide range of polymer materials, from nonpolar polymers to polar polymers to polymer nanocomposites.
The identification of vulnerabilities in IT infrastructures is crucial for enhancing security, because many incidents result from already known vulnerabilities that could have been resolved. Thus, the initial identification of vulnerabilities has to be used to directly resolve the related weaknesses and mitigate attack possibilities. The nature of vulnerability information requires collecting and normalizing the information prior to any utilization, because it is widely distributed across different sources, each with its own format. Therefore, a comprehensive vulnerability model was defined and the different sources were integrated into one database. Furthermore, different analytic approaches were designed and implemented in the HPI-VDB, which directly benefit from the comprehensive vulnerability model and especially from the logical preconditions and postconditions.
Firstly, different approaches to detect vulnerabilities both in the IT systems of average users and in the corporate networks of large companies are presented. The approaches mainly focus on the identification of all installed applications, since this is a fundamental step in the detection. The detection is realized differently depending on the target use case, taking into account the experience of the user as well as the layout and capabilities of the target infrastructure. Furthermore, a passive, lightweight detection approach was developed that utilizes existing information in corporate networks to identify applications.
In addition, two different approaches to represent the results using attack graphs are illustrated in a comparison between traditional attack graphs and a simplified graph version, which was integrated into the database as well. The implementation of these use cases for vulnerability information pays particular attention to usability. Besides the analytic approaches, a high data quality of the vulnerability information had to be achieved and guaranteed. The various problems of receiving incomplete or unreliable vulnerability information are addressed with different correction mechanisms. The corrections can be carried out with correlation or lookup mechanisms against reliable sources or identifier dictionaries. Furthermore, a machine-learning-based verification procedure is presented that allows an automatic derivation of important characteristics from the textual description of the vulnerabilities.
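Such a derivation of characteristics from textual descriptions can be illustrated with a toy text classifier. The following is a minimal sketch, not the thesis's actual procedure: a small multinomial naive Bayes model, trained on a few hypothetical CVE-style description snippets, guesses whether a vulnerability is remotely or locally exploitable from its wording.

```python
from collections import Counter
import math

def tokenize(text):
    return [t.lower().strip(".,()") for t in text.split()]

class NaiveBayes:
    """Multinomial naive Bayes over bag-of-words features (Laplace smoothing)."""
    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for text, label in zip(texts, labels):
            toks = tokenize(text)
            self.word_counts[label].update(toks)
            self.vocab.update(toks)
        return self

    def predict(self, text):
        scores = {}
        n = sum(self.class_counts.values())
        for c in self.classes:
            total = sum(self.word_counts[c].values())
            score = math.log(self.class_counts[c] / n)     # class prior
            for tok in tokenize(text):                     # word likelihoods
                score += math.log((self.word_counts[c][tok] + 1) /
                                  (total + len(self.vocab)))
            scores[c] = score
        return max(scores, key=scores.get)

# Hypothetical training snippets modeled on CVE-style descriptions.
train = [
    ("buffer overflow allows remote attackers to execute arbitrary code", "remote"),
    ("crafted packet sent to the server causes remote code execution", "remote"),
    ("local user can escalate privileges via setuid binary", "local"),
    ("race condition allows local attackers to gain root access", "local"),
]
clf = NaiveBayes().fit([t for t, _ in train], [l for _, l in train])
print(clf.predict("remote attackers can execute code via a crafted request"))
```

A production pipeline would of course train on thousands of labeled descriptions and use richer features, but the principle of deriving a characteristic from text is the same.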
Risks to cyber resources can arise from unintentional or intentional threats. These include insider threats from disgruntled or negligent employees and partners, escalating and emerging threats from around the globe, the steady advancement of attack technologies, and the emergence of new and destructive attacks. Information technology now plays a decisive role in all areas of life, including the military. Ineffective protection of cyber resources can facilitate security incidents and cyber attacks that disrupt critical operations, lead to inappropriate access, disclosure, modification, or destruction of sensitive information, and thus endanger national security, economic well-being, and public health and safety. Often, however, it is not clear which threats are actually present and which of the critical system resources are particularly at risk.
In this dissertation, various analysis methods for threats in military information technology are proposed and tested in real environments. This concerns infrastructures, IT systems, networks, and applications that process classified information/state secrets, as is the case in military or governmental organizations. A special feature of these organizations is the concept of information spaces, in which different data elements, such as paper documents and computer files, are classified according to their security sensitivity, e.g., „STRENG GEHEIM“, „GEHEIM“, „VS-VERTRAULICH“, „VS-NUR-FÜR-DEN-DIENSTGEBRAUCH“, or „OFFEN“ (the German classification levels from top secret down to unclassified).
A special aspect of this work is the access to classified information from different information spaces and the process of clearing it for release. Every publication produced in the course of this work was discussed with, proofread by, and approved by members of the organization, so that no classified information was disclosed to the public.
The dissertation first describes threat classification schemes and attacker strategies in order to derive a holistic, strategy-based threat model for organizations. Subsequently, the creation and analysis of a security data-flow diagram is defined, which is used to identify operational network nodes in classified information spaces that are particularly endangered by the threats. This special, novel representation makes it possible to understand permitted and forbidden information flows within and between these information spaces.
Building on the threat analysis, the message flows of the operational network nodes are then analyzed for violations of security policies, and the results are presented in anonymized form with the help of the security data-flow diagram. Anonymizing the security data-flow diagrams makes it possible to discuss security issues with external experts.
The third part of the work shows how extensive log data of the message flows can be examined to determine whether the amount of data can be reduced. For this purpose, rough set theory from the theory of uncertainty is used. This approach is tested in a case study, also taking possible anomalies into account, and determines which attributes in log data are most likely to be redundant.
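The rough-set notion of a dispensable attribute can be sketched in a few lines: an attribute is a redundancy candidate if removing it leaves records that are indiscernible on the remaining attributes still agreeing on the decision attribute. The example below is a toy illustration with hypothetical log fields, not the thesis's actual log schema or case-study data.

```python
def consistent(rows, attrs, decision):
    """True if rows with identical values on `attrs` always share the decision."""
    seen = {}
    for row in rows:
        key = tuple(row[a] for a in attrs)
        if key in seen and seen[key] != row[decision]:
            return False
        seen.setdefault(key, row[decision])
    return True

def dispensable_attributes(rows, attrs, decision):
    """Attributes whose individual removal keeps the table consistent
    (candidate redundancies in the rough-set sense)."""
    return [a for a in attrs
            if consistent(rows, [b for b in attrs if b != a], decision)]

# Hypothetical log records with an anomaly decision attribute.
rows = [
    {"src": "hostA", "port": 80,  "proto": "tcp", "anomaly": "no"},
    {"src": "hostA", "port": 443, "proto": "tcp", "anomaly": "no"},
    {"src": "hostB", "port": 80,  "proto": "udp", "anomaly": "yes"},
    {"src": "hostB", "port": 443, "proto": "udp", "anomaly": "yes"},
]
print(dispensable_attributes(rows, ["src", "port", "proto"], "anomaly"))
```

Note that dispensability is checked per attribute; computing a minimal reduct additionally requires checking that the jointly remaining attributes still determine the decision.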
Medical imaging plays an important role in disease diagnosis, treatment planning, and clinical monitoring. One of the major challenges in medical image analysis is imbalanced training data, in which the class of interest is much rarer than the other classes. Canonical machine learning algorithms assume that the numbers of samples from the different classes in the training dataset are roughly similar, i.e., balanced. Training a machine learning model on an imbalanced dataset can introduce unique challenges to the learning problem.
A model learned from imbalanced training data is biased towards the high-frequency samples. The predictions of such networks have low sensitivity and high precision. In medical applications, the cost of misclassifying the minority class can be much higher than the cost of misclassifying the majority class. For example, the risk of not detecting a tumor can be much higher than that of referring a healthy subject to a doctor. This Ph.D. thesis introduces several deep-learning-based approaches for handling class-imbalance problems in multiple tasks, such as disease classification and semantic segmentation.
At the data level, the objective is to balance the data distribution through re-sampling of the data space: we propose novel approaches to correct the internal bias towards low-frequency samples. These approaches include patient-wise batch sampling, complementary labels, and supervised and unsupervised minority oversampling using generative adversarial networks.
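For illustration, the simplest data-level remedy is plain random oversampling, which merely duplicates minority samples until the classes are balanced; this baseline sketch is not the thesis's method, whose GAN-based approaches synthesize new minority samples instead. The sample names below are hypothetical.

```python
import random

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class samples at random until every class
    matches the majority-class count (plain random oversampling)."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        extra = [rng.choice(group) for _ in range(target - len(group))]
        for s in group + extra:
            out_samples.append(s)
            out_labels.append(y)
    return out_samples, out_labels

# A 10-vs-2 imbalanced toy dataset (hypothetical scan identifiers).
X = ["scan%d" % i for i in range(10)] + ["tumor0", "tumor1"]
y = ["healthy"] * 10 + ["tumor"] * 2
Xb, yb = random_oversample(X, y)
print(yb.count("healthy"), yb.count("tumor"))  # both 10 after balancing
```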
At the algorithm level, on the other hand, we modify the learning algorithm to alleviate the bias towards the majority classes. In this regard, we propose different generative adversarial networks for cost-sensitive learning, ensemble learning, and mutual learning to deal with highly imbalanced imaging data.
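A generic building block of such algorithm-level remedies is a class-weighted loss. The sketch below shows inverse-frequency weighting inside a plain cross-entropy loss; it is a minimal illustration of cost-sensitive learning in general, not the adversarial or ensemble formulations developed in the thesis.

```python
import math

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean cross-entropy where each sample's loss is scaled by its class
    weight, so errors on the rare class cost more."""
    total = 0.0
    for p, y in zip(probs, labels):
        total += -class_weights[y] * math.log(p[y])
    return total / len(labels)

# Inverse-frequency weights for a 90/10 imbalanced binary problem.
freq = {0: 0.9, 1: 0.1}
weights = {c: 1.0 / f for c, f in freq.items()}   # rare class weighted ~10x

# Two predictions, each assigning probability 0.7 to the true class:
# without weighting their losses would be equal; with weighting, the
# rare-class sample dominates the mean.
probs = [{0: 0.7, 1: 0.3}, {0: 0.3, 1: 0.7}]
labels = [0, 1]
print(round(weighted_cross_entropy(probs, labels, weights), 4))  # 1.9815
```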
We provide evidence that the proposed approaches are applicable to different types of medical images of varying sizes in several routine clinical tasks, such as disease classification and semantic segmentation. Our implemented algorithms have shown outstanding results on different medical imaging challenges.
In the era of social networks, the Internet of Things, and location-based services, many online services produce huge amounts of data that carry valuable objective information, such as geographic coordinates and timestamps. These characteristics (parameters), in combination with a textual parameter, pose the challenge of discovering geospatiotemporal knowledge. This challenge requires efficient methods for clustering and pattern mining in the spatial, temporal, and textual spaces.
In this thesis, we address the challenge of providing methods and frameworks for geospatiotemporal data analytics. As an initial step, we address the challenges of geospatial data processing: data gathering, normalization, geolocation, and storage. That initial step is the foundation for tackling the next challenge: geospatial clustering. Its first part is the design of a method for online clustering of georeferenced data. This algorithm can be used as a server-side clustering algorithm for online maps that visualize massive georeferenced data. In a second part, we develop an extension of this method that additionally considers the temporal aspect of the data. For that, we propose a density- and intensity-based geospatiotemporal clustering algorithm with a fixed distance and time radius.
Each version of the clustering algorithm has its own use case, which we present in the thesis.
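The core idea of clustering with a fixed distance and time radius can be sketched with a compact DBSCAN-style pass. This is a generic illustration under assumed parameters, not the thesis's density- and intensity-based online algorithm: two events only join the same cluster if they are close in both space and time.

```python
import math

def neighbors(points, i, eps_space, eps_time):
    """Indices of points within both the spatial and the temporal radius."""
    xi, yi, ti = points[i]
    return [j for j, (x, y, t) in enumerate(points)
            if j != i
            and math.hypot(x - xi, y - yi) <= eps_space
            and abs(t - ti) <= eps_time]

def st_cluster(points, eps_space, eps_time, min_pts):
    labels = [None] * len(points)          # None = unvisited, -1 = noise
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(points, i, eps_space, eps_time)
        if len(nbrs) + 1 < min_pts:
            labels[i] = -1                 # too sparse: mark as noise
            continue
        labels[i] = cluster
        queue = list(nbrs)
        while queue:                       # expand the cluster
            j = queue.pop()
            if labels[j] in (None, -1):
                labels[j] = cluster
                more = neighbors(points, j, eps_space, eps_time)
                if len(more) + 1 >= min_pts:
                    queue.extend(k for k in more if labels[k] is None)
        cluster += 1
    return labels

# (x, y, hour): two events close in space but 12 h apart stay separate.
pts = [(0, 0, 0), (0.1, 0, 0), (0.2, 0.1, 1),
       (0.1, 0.1, 12), (0.2, 0, 12), (5, 5, 0)]
print(st_cluster(pts, eps_space=0.5, eps_time=2, min_pts=2))
```

Running this yields two spatiotemporal clusters plus one noise point, even though five of the six points are spatially close.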
In the next chapter of the thesis, we look at spatiotemporal analytics from the perspective of sequential rule mining. We design and implement a framework that transforms data into textual geospatiotemporal data, i.e., data that contain geographic coordinates, time, and textual parameters. In this way, we address the challenge of applying pattern- and rule-mining algorithms in the geospatiotemporal space. As an applicable use-case study, we propose spatiotemporal crime analytics: the discovery of spatiotemporal patterns of crimes in publicly available crime data.
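The rule-mining step can be illustrated by support and confidence counting for a sequential rule over per-area event sequences. This is a toy sketch of the general technique; the event types and sequences are invented, not drawn from the crime data set analyzed in the thesis.

```python
def supports(sequence, pattern):
    """True if the pattern's items occur in the sequence in order
    (not necessarily adjacent); `in` advances the shared iterator."""
    it = iter(sequence)
    return all(item in it for item in pattern)

def rule_stats(sequences, antecedent, consequent):
    """Support and confidence of the sequential rule antecedent -> consequent."""
    ante = sum(supports(s, antecedent) for s in sequences)
    both = sum(supports(s, antecedent + consequent) for s in sequences)
    support = both / len(sequences)
    confidence = both / ante if ante else 0.0
    return support, confidence

# Hypothetical per-area event sequences, ordered by time.
seqs = [
    ["theft", "vandalism", "burglary"],
    ["theft", "burglary"],
    ["assault", "theft"],
    ["vandalism", "theft", "burglary"],
]
print(rule_stats(seqs, ["theft"], ["burglary"]))  # (0.75, 0.75)
```

In three of the four sequences a theft is later followed by a burglary, so the rule theft -> burglary has support 0.75 and confidence 0.75 on this toy data.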
The second part of the thesis is dedicated to applications and use-case studies. We design and implement an application that uses the proposed clustering algorithms to discover knowledge in data. Together with the application, we propose use-case studies for the analysis of georeferenced data in terms of situational and public-safety awareness.
Interlocutors typically link their utterances to the discourse environment and enrich communication by linguistic (e.g., information packaging) and extra-linguistic (e.g., eye gaze, gestures) means to optimize information transfer. Psycholinguistic studies underline that, for meaning computation, listeners profit from linguistic and visual cues that draw their focus of attention to salient information. This dissertation is the first work to examine how linguistic as compared to visual salience cues influence sentence comprehension, using the very same experimental paradigms and materials, namely German subject-before-object (SO) and object-before-subject (OS) sentences, across the two cue modalities. Linguistic salience was induced by indicating a referent as the aboutness topic. Visual salience was induced by implicit (i.e., unconscious) or explicit (i.e., shared) manipulations of listeners' attention to a depicted referent.
In Study 1, a selective, facilitative impact of linguistic salience on the context-sensitive OS word order was found using offline comprehensibility judgments. During online sentence processing, this impact was characterized by a reduced sentence-initial Late Positivity, which reflects reduced processing costs for updating the current mental representation of the discourse. This facilitative impact of linguistic salience was not replicated by means of an implicit visual cue (Study 2) that had been shown to modulate word-order preferences during sentence production. However, a gaze shift to a depicted referent as an indicator of shared attention eased sentence-initial processing similarly to linguistic salience, as revealed by reduced reading times (Study 3). Yet, unlike linguistic salience, this cue did not modulate the strong subject-antecedent preference during later pronoun resolution. Taken together, these findings suggest a significant impact of linguistic and visual salience cues on sentence comprehension, which substantiates that the information delivered via language as well as via the visual environment is integrated into the mental representation of the discourse; however, the way in which salience is induced is crucial to its impact.
With the emergence of the Internet of Things (IoT), plenty of battery-powered and energy-harvesting devices are being deployed to fulfill sensing and actuation tasks in a variety of application areas, such as smart homes, precision agriculture, smart cities, and industrial automation. In this context, a critical issue is that of denial-of-sleep attacks. Such attacks temporarily or permanently deprive battery-powered, energy-harvesting, or otherwise energy-constrained devices of entering energy-saving sleep modes, thereby draining their charge. At the very least, a successful denial-of-sleep attack causes a long outage of the victim device. Moreover, to put battery-powered devices back into operation, their batteries have to be replaced. This is tedious and may even be infeasible, e.g., if a battery-powered device is deployed at an inaccessible location. While the research community has come up with numerous defenses against denial-of-sleep attacks, most present-day IoT protocols include no denial-of-sleep defenses at all, presumably due to a lack of awareness and unsolved integration problems. Moreover, although many denial-of-sleep defenses exist, effective defenses against certain kinds of denial-of-sleep attacks have yet to be found.
The overall contribution of this dissertation is to propose a denial-of-sleep-resilient medium access control (MAC) layer for IoT devices that communicate over IEEE 802.15.4 links. Internally, our MAC layer comprises two main components. The first main component is a denial-of-sleep-resilient protocol for establishing session keys among neighboring IEEE 802.15.4 nodes. The established session keys serve the dual purpose of implementing (i) basic wireless security and (ii) complementary denial-of-sleep defenses that belong to the second main component. The second main component is a denial-of-sleep-resilient MAC protocol. Notably, this MAC protocol incorporates not only novel denial-of-sleep defenses, but also state-of-the-art mechanisms for achieving low energy consumption, high throughput, and high delivery ratios. Altogether, our MAC layer resists, or at least greatly mitigates, all denial-of-sleep attacks against it that we are aware of. Furthermore, our MAC layer is self-contained and can thus act as a drop-in replacement for IEEE 802.15.4-compliant MAC layers. In fact, we implemented our MAC layer in the Contiki-NG operating system, where it seamlessly integrates into an existing protocol stack.
Carbon fibers are well established in aerospace and, owing to their high tensile strengths, in particular their high elastic moduli, and their low density, are becoming increasingly important in everyday applications such as the automotive, wind-power, and sports sectors. Because of their high costs, half of which arise from precursor production, including its synthesis and its spinning process, the solution-spinning process, alternative, melt-spinnable precursors are receiving increasing interest. Polyacrylonitrile (PAN) is used almost exclusively for carbon fiber production; before melting, it undergoes irreversible exothermic cyclization reactions, followed by its decomposition. One way to reduce the melting temperature of polymers is to introduce comonomers as internal plasticizers, which increase the free volume and reduce the intermolecular interactions. As shown at the Fraunhofer IAP, 2-methoxyethyl acrylate (MEA) can be used to lower the melting temperature, yielding novel PAN-based precursors. To use the PAN-co-MEA precursor in the subsequent process steps of carbon fiber production, the thermoplastic fibers must be converted into thermally stable fibers without thermoplastic behavior. A new process step (pre-stabilization) was introduced, which leads to cleavage of the comonomer side chain under alkaline conditions. In addition to the ester hydrolysis, further reactions take place that have not yet been sufficiently investigated for this material. Furthermore, questions arise concerning the kinetics of the pre-stabilization and the determination of a suitable process regime.
For this purpose, the pre-stabilization was transferred to the laboratory scale, and the possible compositions of the reaction medium, consisting of DMSO and a KOH solution, were evaluated. Furthermore, the treatment was carried out at different pre-stabilization times of at most 30 min and at temperatures of 40, 50, and 60 °C in order to elucidate the chemical structural changes, primarily by NMR spectroscopy. The ester hydrolysis of the comonomer, which leads to the cleavage of 2-methoxyethanol, was detected by 1H NMR spectroscopic investigations.
A model was established that depicts the chemical and physical structural changes during the pre-stabilization. The first reaction to occur is the ester hydrolysis at the comonomer, which proceeds from the fiber edge inwards and is initiated by the presence of DMSO in combination with the KOH solution (superbase). The time course of the ester hydrolysis can be divided into three regions. The first region, starting at the beginning of the pre-stabilization, is characterized by the diffusion of the basic anions into the fiber; the second region by the reaction at the ester group of the comonomer; and the third region by the last reactions in the fiber interior and diffusive processes of the products and reactants. The second region can be described by a pseudo-first-order reaction, since sufficient diffusion of the reactants into the fiber has already taken place in this region. At 50 °C, diffusion in the first region plays a subordinate role compared to the reaction. When the temperature is increased to 60 °C, a comparatively lower diffusion rate relative to the reaction rate is observed. The side reactions were characterized by 13C CP/MAS NMR spectroscopy, elemental analysis, and birefringence measurements. During the alkaline ester hydrolysis, the conversion of the nitrile groups begins, with the formation of primary carboxamides and carboxylic acids. To describe this conversion, a method was developed that involves the addition of 13C CP/MAS NMR spectra of the model substances PAN, PAM, and PAA. Further reactions taking place are the formation of conjugated double bonds, which in particular indicates cyclization of the nitriles.
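The pseudo-first-order description of the second region can be written out explicitly. Assuming the hydroxide concentration inside the fiber stays effectively constant once sufficient diffusion has taken place (the standard pseudo-first-order argument; the abstract does not spell out the rate law itself), the bimolecular hydrolysis collapses to a first-order decay of the ester concentration [E]:

```latex
% Pseudo-first-order treatment of the ester hydrolysis (second region):
% with [OH^-] held effectively constant by the excess of base, the
% bimolecular rate law reduces to a first-order one.
\begin{align}
  -\frac{\mathrm{d}[E]}{\mathrm{d}t}
    &= k\,[\mathrm{OH}^-]\,[E]
     \approx k_{\mathrm{obs}}\,[E],
    \qquad k_{\mathrm{obs}} = k\,[\mathrm{OH}^-]_0 \\
  \ln\frac{[E]_0}{[E]} &= k_{\mathrm{obs}}\,t
\end{align}
```

A linear plot of ln([E]0/[E]) against time over the second region would then yield the observed rate constant directly.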
The wet-chemically initiated cyclization of the nitrile groups can lead to shorter stabilization times and a more controllable stabilization process through lower heat release, and ultimately to cost savings in the overall process. The conversion of the nitrile groups could be well described by a pseudo-first-order reaction. DMSO initiates the ester hydrolysis, whereby the KOH concentration has a greater influence on the reaction rate of the ester and nitrile hydrolysis than the DMSO concentration. Both reactions show a comparable dependence on temperature. Increasing the pre-stabilization time and the KOH or DMSO concentration leads to the migration of low-molecular-weight components of the fiber material to the surface and to the formation of localized deposits, up to interconnected single fibers. A further increase in the pre-stabilization time or the concentration leads to an increasing carboxylic acid fraction and to swelling of the fiber material, whereby the deposits diffuse into the reaction medium. The deposits contain chlorine, which entered the material system through the washing step with HCl and was reduced by parameter adjustments. Through the pre-stabilization, the meltable fibers were successfully converted into non-thermoplastic fibers via a core-shell structure.
To determine a suitable process window for subsequent thermal treatments of the pre-stabilized fibers, three criteria were identified on the basis of which the evaluation was carried out. The first criterion comprises the necessity of completely eliminating the thermoplastic behavior of the fibers. The fiber morphology served as the second criterion: based on SEM images, fiber bundles with separated single fibers and without deposits were selected for the subsequent stabilization. The third criterion relates to the lowest possible conversion of the nitrile groups, in order to avoid pre-stabilization conditions with side reactions.
From these investigations, a pre-stabilization temperature of 60 °C was identified as suitable. Furthermore, highly alkaline compositions of the reaction medium with KOH concentrations of 1, 1.5, and 2 M, preferably 1.5 M, and 50 vol% DMSO with reaction times of less than 10 min lead to suitable fibers. An MEA content below 2 mol% results in conversion to infusibility. Thermally stable fibers suitable for the subsequent stabilization further contain 68–80 mol% nitrile groups, 20–25 mol% carboxylic acids, up to 15 mol% primary carboxamides, and cyclized structures.
Due to advances in science and technology towards smaller and more powerful processing units, the fabrication of micrometer-sized machines for different tasks is becoming more and more feasible. Such micro-robots could revolutionize the medical treatment of diseases and could assist in work on other small machines. Nevertheless, scaling down robots and other devices is a challenging task and will probably remain limited in the near future. Over the past decade, the concept of bio-hybrid systems has proved to be a promising approach to advance the further development of micro-robots. Bio-hybrid systems combine biological cells with artificial components, thereby benefiting from the functionality of living biological cells. Cell-driven micro-transport is one of the most prominent applications in this emerging field. So far, micrometer-sized cargo has been successfully transported by means of swimming bacterial cells. The potential of motile adherent cells as transport systems has largely remained unexplored.
This thesis concentrates on the social amoeba Dictyostelium discoideum as a potential candidate for an amoeboid bio-hybrid transport system. The use of this model organism comes with several advantages. Due to the unspecific properties of Dictyostelium adhesion, a wide range of different cargo materials can be used for transport. As amoeboid cells exceed bacterial cells in size by one order of magnitude, the size of an object carried by a single cell can also be much larger for an amoeba. Finally, it is possible to guide the cell-driven transport based on the chemotactic behavior of the amoeba. Since the cells undergo a developmentally induced chemotactic aggregation, cargo can be assembled in a self-organized manner into a cluster. It is also possible to impose an external chemical gradient to guide the amoeboid transport system to a desired location.
To establish Dictyostelium discoideum as a possible candidate for bio-hybrid transport systems, this thesis first investigates the movement of single cells. Secondly, the interaction of cargo and cells is studied. Finally, a proof of concept is conducted, showing that the chemotactic behavior can be exploited either to transport cargo in a self-organized fashion or by means of an external chemical source.
With the growth of information technology, patient attitudes are shifting, away from passively receiving care and towards actively taking responsibility for their well-being. Handling doctor-patient relationships collaboratively and providing patients access to their health information are crucial steps in empowering patients. In mental healthcare, the implicit consensus amongst practitioners has been that sharing medical records with patients may have an unpredictable, harmful impact on clinical practice. In order to involve patients more actively in mental healthcare processes, Tele-Board MED (TBM) allows for digital collaborative documentation in therapist-patient sessions. The TBM software system offers a whiteboard-inspired graphical user interface that allows therapist and patient to jointly take notes during the treatment session. Furthermore, it provides features to automatically reuse the digital treatment session notes for the creation of treatment session summaries and clinical case reports. This thesis presents the development of the TBM system and evaluates its effects on 1) the fulfillment of the therapist's duties of clinical case documentation, 2) patient engagement in care processes, and 3) the therapist-patient relationship. Following the design research methodology, TBM was developed and tested in multiple evaluation studies in the domains of cognitive behavioral psychotherapy and addiction care. The results show that therapists are likely to use TBM with patients if they have a technology-friendly attitude and when its use suits the treatment context. Support in carrying out documentation duties as well as in fulfilling legal requirements contributes to therapist acceptance. Furthermore, therapists value TBM as a tool that provides a discussion framework and quick access to worksheets during treatment sessions. Therapists express skepticism, however, regarding technology use in patient sessions and towards complete record transparency in general.
Patients expect TBM to improve communication with their therapist and to offer better recall of discussed topics when they take a copy of their notes home after the session. Patients are doubtful, however, about possible distraction of the therapist and about its use in situations where relationship-building is crucial. When applied in a clinical environment, collaborative note-taking with TBM encourages patient engagement and a team feeling between therapist and patient. Furthermore, it increases patients' acceptance of their diagnosis, which in turn is an important predictor of therapy success. In summary, TBM has high potential not only to deliver documentation support and record transparency for patients, but also to contribute to a collaborative doctor-patient relationship. This thesis provides design implications for the development of digital collaborative documentation systems in (mental) healthcare as well as recommendations for successful implementation in clinical practice.
The foreland of the Andes in South America is characterised by distinct along-strike changes in surface deformation styles. These styles are classified into two end-members: the thin-skinned and the thick-skinned style. The surface expression of thin-skinned deformation is a succession of narrowly spaced hills and valleys that form laterally continuous ranges on the foreland-facing side of the orogen. Each of the hills is defined by a reverse fault that roots in a basal décollement surface within the sedimentary cover and acted as a thrust ramp to stack the sedimentary pile. Thick-skinned deformation is morphologically characterised by spatially disparate, basement-cored mountain ranges. These mountain ranges are uplifted along reactivated high-angle, crustal-scale discontinuities, such as suture zones between different tectonic terranes.
Proposed causes for the observed variation include changes in the dip angle of the Nazca plate, variations in sediment thickness, lithospheric thickening, volcanism, and compositional differences. The proposed mechanisms are predominantly based on geological observations or numerical thermomechanical modelling, but there has been no attempt to understand them from the perspective of data-integrative 3D modelling. The aim of this dissertation is therefore to understand how lithospheric structure controls the deformational behaviour. Integrating independent data into a consistent model of the lithosphere yields additional evidence that helps to understand the causes of the different deformational styles. Northern Argentina encompasses the transition from the thin-skinned fold-and-thrust belt in Bolivia to the thick-skinned Sierras Pampeanas province, which makes this area a well-suited location for such a study. The general workflow of this study first involves data-constrained structural and density modelling to obtain a model of the study area. This model was then used to predict the steady-state thermal field, which in turn was used to assess the present-day rheological state of northern Argentina.
The structural configuration of the lithosphere in northern Argentina was determined by means of data-integrative 3D density modelling, validated against the Bouguer gravity field. The model delineates the first-order density contrasts in the uppermost 200 km of the lithosphere and discriminates bodies for the sediments, the crystalline crust, the lithospheric mantle, and the subducting Nazca plate. To obtain the intra-crustal density structure, an automated inversion approach was developed and applied to a starting structural model that assumed a crust of homogeneous density. The resulting final structural model indicates that the crustal structure can be represented by an upper crust with a density of 2800 kg/m³ and a lower crust of 3100 kg/m³. The Transbrazilian Lineament, which separates the Pampia terrane from the Río de la Plata craton, is expressed as a zone of low average crustal densities.
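For intuition on the magnitudes involved, the gravity effect of such an intra-crustal density contrast can be approximated with the textbook infinite Bouguer slab formula, Δg = 2πGΔρh. This is a simplified approximation, not the 3D inversion used in the thesis, and the slab thickness below is an arbitrary illustrative value:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab_anomaly_mgal(delta_rho, thickness_m):
    """Gravity anomaly (in mGal) of an infinite horizontal slab with
    density contrast delta_rho (kg/m^3) and the given thickness (m).
    1 mGal = 1e-5 m/s^2.
    """
    return 2 * math.pi * G * delta_rho * thickness_m / 1e-5

# Contrast between lower (3100 kg/m^3) and upper (2800 kg/m^3) crust,
# for a hypothetical slab 1 km thick:
print(bouguer_slab_anomaly_mgal(300.0, 1000.0))  # roughly 12.6 mGal
```

Even a kilometre-scale misplacement of the upper/lower crust boundary thus produces an anomaly of order 10 mGal, which is why the Bouguer field constrains the intra-crustal density structure so effectively.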
In a brief excursion, we demonstrate in another study that the gravity inversion method developed to obtain intra-crustal density structures is also applicable to density variations in the uppermost lithospheric mantle. Densities at such sub-crustal depths are difficult to constrain from seismic tomographic models due to smearing of crustal velocities. With the application to the uppermost lithospheric mantle in the North Atlantic, we demonstrate in Tan et al. (2018) that lateral density trends of at least 125 km width are robustly recovered by the inversion method, thereby providing an important tool for the delineation of sub-crustal density trends.
Due to the genetic link between subduction, orogenesis, and retroarc foreland basins, the question arises whether the steady-state assumption is valid in such a dynamic setting. To answer this question, I analysed (i) the impact of subduction on the conductive thermal field of the overlying continental plate and (ii) the differences between the transient and steady-state thermal fields of a coupled geodynamic model. Both studies indicate that the assumption of a thermal steady state is applicable in most parts of the study area. Within the orogenic wedge, where the assumption cannot be applied, I estimated the transient thermal field based on the results of the conducted analyses.
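In one dimension with uniform conductivity k and radiogenic heat production A, a steady-state conductive thermal field of the kind computed here reduces to the classical geotherm T(z) = T0 + (q0/k)z − (A/2k)z². The parameter values in this sketch are generic textbook numbers, not those of the model:

```python
def geotherm_celsius(z_m, t0=10.0, q0=0.06, k=2.5, heat_prod=1.0e-6):
    """1D steady-state conductive geotherm.

    z_m:       depth in metres
    t0:        surface temperature (deg C)
    q0:        surface heat flow (W/m^2)
    k:         thermal conductivity (W/m/K)
    heat_prod: volumetric radiogenic heat production (W/m^3)
    """
    return t0 + (q0 / k) * z_m - heat_prod / (2 * k) * z_m**2

print(geotherm_celsius(30_000))  # 550.0 deg C at 30 km depth
```

The full 3D model replaces these uniform parameters with the lithology-dependent conductivities and heat-production rates of the structural model, but the balance of terms is the same.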
Accordingly, the structural model obtained in the first step could be used to compute a 3D conductive steady-state thermal field. The rheological assessment based on this thermal field indicates that the lithosphere of the thin-skinned Subandean ranges is characterised by a relatively strong crust and a weak mantle. In contrast, the adjacent foreland basin consists of a fully coupled, very strong lithosphere. Thus, shortening in northern Argentina can only be accommodated within the weak lithosphere of the orogen and the Subandean ranges. The analysis suggests that the décollements of the fold-and-thrust belt are the shallow continuation of shear zones that reside in the ductile sections of the orogenic crust. Furthermore, the localisation of the faults that transfer strain between the deeper ductile crust and the shallower décollement is strongly influenced by crustal weak zones such as foliation. Unlike the northern foreland, the lithosphere of the thick-skinned Sierras Pampeanas is fully coupled and characterised by a strong crust and mantle. The high overall strength prevents the generation of crustal-scale faults by tectonic stresses. Even inherited crustal-scale discontinuities, such as sutures, cannot reduce the strength of the lithosphere sufficiently for them to be reactivated. Therefore, magmatism, which has been identified as a precursor of basement uplift in the Sierras Pampeanas, is the key factor that leads to the broken foreland of this province. Due to thermal weakening, and potentially lubrication of the inherited discontinuities, the lithosphere is locally weakened such that tectonic stresses can uplift the basement blocks. This hypothesis explains both the spatially disparate character of the broken foreland and the observed temporal delay between volcanism and basement block uplift.
This dissertation provides, for the first time, a data-driven 3D model that is consistent with geophysical data and geological observations and that causally links the thermo-rheological structure of the lithosphere to the observed variation of surface deformation styles in the retroarc foreland of northern Argentina.
Investigations of novel oxygen-substituted donors and acceptors for singlet oxygen
(2019)
In the course of this work, aromatic compounds such as naphthalenes and anthracenes were reacted with singlet oxygen, a reactive form of ordinary oxygen, to form so-called endoperoxides. The systems employed here were modified with functional groups linked to the aromatic core via an oxygen bridge. The resulting endoperoxides are mostly particularly labile, yet could be isolated and comprehensively characterized in this work.
First, the reaction behavior was investigated. It was shown that, depending on their functional groups, the aromatic compounds react with singlet oxygen at different rates. The reactivities determined in this way were additionally supported by theoretical calculations.
The resulting endoperoxides were examined for their stability under various conditions, such as elevated temperature or acidic and basic media. It was shown that the naphthalene-based endoperoxides often release the bound singlet oxygen in good yields even at very low temperatures (−40 to 0 °C). These compounds can therefore be used as mild sources of this reactive oxygen species. Furthermore, decomposition mechanisms of the anthracene endoperoxides were elucidated, and other reactive oxygen species such as hydrogen peroxide or peracids were detected.
The modifications of the aromatic compounds also include glucose residues. The endoperoxides prepared here may therefore prove to be promising compounds in cancer therapy, since cancer cells require carbohydrate-rich compounds for their metabolism to a much greater extent than healthy cells. Cleavage of endoperoxides bearing glucose substituents likewise releases reactive oxygen species, which could thus lead to cell death.
Partial melting is a first-order process for the chemical differentiation of the crust (Vielzeuf et al., 1990). The redistribution of chemical elements during melt generation crucially influences the composition of the lower and upper crust and provides a mechanism to concentrate and transport chemical elements that may also be of economic interest. Understanding the diverse processes and their controlling factors is therefore not only of scientific interest but also of high economic importance for meeting the demand for rare metals.
The redistribution of major and trace elements during partial melting represents a central step towards understanding how granite-bound mineralization develops (Hedenquist and Lowenstern, 1994). Partial melt generation and the mobilization of ore elements (e.g. Sn, W, Nb, Ta) into the melt depend on the composition of the sedimentary source and the melting conditions. Distinct source rocks have different compositions reflecting their deposition and alteration histories. This specific chemical "memory" results in different mineral assemblages and melting reactions for different protolith compositions during prograde metamorphism (Brown and Fyfe, 1970; Thompson, 1982; Vielzeuf and Holloway, 1988). These factors not only exert an important influence on the distribution of chemical elements during melt generation, they also influence the volume of melt that is produced, the extraction of the melt from its source, and its ascent through the crust (Le Breton and Thompson, 1988). On a larger scale, protolith distribution and chemical alteration (weathering), prograde metamorphism with partial melting, melt extraction, and granite emplacement ultimately depend on a (plate-)tectonic control (Romer and Kroner, 2016). Comprehension of the individual stages and their interaction is crucial for understanding how granite-related mineralization forms, thereby allowing estimation of the mineralization potential of certain areas. Partial melting also influences the isotope systematics of melt and restite. Radiogenic and stable isotopes of magmatic rocks are commonly used to trace back the source of intrusions or to quantify mixing of magmas from different sources with distinct isotopic signatures (DePaolo and Wasserburg, 1979; Lesher, 1990; Chappell, 1996). These applications are based on the fundamental requirement that the isotopic signature in the melt reflects that of the bulk source from which it is derived.
Different minerals in a protolith may have radiogenic isotope compositions that deviate from the whole-rock signature (Ayres and Harris, 1997; Knesel and Davidson, 2002). In particular, old minerals with a distinct parent-to-daughter (P/D) ratio are expected to have a specific radiogenic isotope signature. As the partial melting reaction involves only selected phases in a protolith, the isotopic signature of the melt reflects that of the minerals involved in the melting reaction and, therefore, should differ from the bulk source signature. Similar considerations hold true for stable isotopes.
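The argument that a melt inherits the isotopic signature of the reacting minerals rather than of the bulk rock can be made quantitative with the standard concentration-weighted mixing relation, R_melt = Σ fᵢCᵢRᵢ / Σ fᵢCᵢ. The following sketch uses made-up Sr concentrations and ⁸⁷Sr/⁸⁶Sr ratios purely for illustration:

```python
def melt_isotope_ratio(phases):
    """Isotope ratio of a melt fed by selective mineral breakdown.

    phases: iterable of (melt_fraction, elem_conc, isotope_ratio) tuples,
    one per mineral entering the melting reaction. The melt ratio is the
    concentration-weighted mean of the mineral ratios, so minerals rich in
    the element (here Sr) dominate even at modest mass fractions.
    """
    num = sum(f * c * r for f, c, r in phases)
    den = sum(f * c for f, c, _ in phases)
    return num / den

# Hypothetical example: radiogenic muscovite vs. unradiogenic plagioclase
phases = [
    (0.6, 50.0, 0.740),   # muscovite: low Sr, high 87Sr/86Sr
    (0.4, 300.0, 0.710),  # plagioclase: high Sr, low 87Sr/86Sr
]
print(melt_isotope_ratio(phases))  # ~0.716, pulled towards the Sr-rich phase
```

Only if the reacting phases together carry the element in the same proportions as the bulk rock does the melt reproduce the whole-rock signature; otherwise the melt ratio is biased towards the phases that dominate the element budget.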
A reliable inference of networks from data is of key interest in many scientific fields. Several methods have been suggested in the literature to reliably determine links in a network. These techniques rely on statistical methods, typically controlling the number of false positive links but not considering false negative links. In this thesis, new methodologies to improve network inference are suggested. Initial analyses demonstrate the impact of false positive and false negative conclusions about the presence or absence of links on the resulting inferred network. These analyses reveal the importance of making well-considered choices and motivate new approaches to enhance existing network reconstruction methods.
A simulation study, presented in Chapter 3, shows that different values for balancing false positive and false negative conclusions about links should be used in order to reliably estimate network characteristics. The existence of type I and type II errors in the reconstructed network, here called the biased network, is accepted. Consequently, an analytic method that describes the influence of these two errors on the network structure is explored. As a result of this analysis, an analytic formula for the density of the biased vertex degree distribution is found (Chapter 4).
In the inverse problem, the vertex degree distribution of the true underlying network is reconstructed analytically, assuming the probabilities of type I and type II errors, α and β. Chapters 4 and 5 show that the method is robust to incorrect estimates of α and β within reasonable limits. In Chapter 6, an iterative procedure to enhance this method is presented for the case of large errors in the estimates of α and β.
The investigations presented so far focus on the influence of false positive and false negative links on the network characteristics. In Chapter 7, the analysis is reversed: the study focuses on the influence of network characteristics on the probabilities of type I and type II errors, in the case of networks of coupled oscillators. The probabilities α and β are influenced by the shortest path length and the detour degree, respectively. These results have been used to improve network reconstruction when the true underlying network is not known a priori, introducing a novel and advanced concept of threshold.
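The forward direction of this problem, namely how given error rates α (false positive) and β (false negative) distort a node's degree, can be sketched as follows. A node with true degree k in a network of n nodes keeps each true link with probability 1 − β and gains each of its n − 1 − k absent links with probability α, so the observed degree is a sum of two binomials. This is a generic illustration of that distortion, not the thesis's Chapter 4 formula verbatim:

```python
import math

def binom_pmf(n, p, k):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def observed_degree_pmf(k_true, n_nodes, alpha, beta):
    """PMF of the observed degree of a node with true degree k_true.

    alpha: false-positive rate (an absent link is reported present)
    beta:  false-negative rate (a present link is missed)
    """
    n_absent = n_nodes - 1 - k_true
    pmf = [0.0] * n_nodes  # observed degree ranges over 0 .. n_nodes - 1
    for kept in range(k_true + 1):
        p_kept = binom_pmf(k_true, 1 - beta, kept)
        for spurious in range(n_absent + 1):
            pmf[kept + spurious] += p_kept * binom_pmf(n_absent, alpha, spurious)
    return pmf

pmf = observed_degree_pmf(5, 20, alpha=0.05, beta=0.1)
mean = sum(k * p for k, p in enumerate(pmf))
print(round(mean, 2))  # 5.2 = 5*(1 - 0.1) + 14*0.05: noise inflates the degree
```

In sparse networks the α term acts on many more node pairs than the β term, which is why even a small false-positive rate can dominate the bias, and why the inverse problem of recovering the true degree distribution is worth solving analytically.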
Introduction
The implantation of a total knee or hip replacement (total endoprosthesis, TEP) is one of the most frequent surgical procedures. Following the operation and postoperative rehabilitation, exercise therapy is an essential component of treatment for improving joint function and quality of life. In structurally weak regions, such services are available only at insufficient density; moreover, a nationwide shortage of qualified staff in physiotherapy is emerging. Telemedical aftercare therefore offers an innovative approach to post-rehabilitation patient care. The aim of the present study was to examine the effectiveness of an interactive telemedical aftercare intervention for patients with a knee or hip TEP compared with usual care. To this end, functional mobility and return to work were investigated.
Methods
Between August 2016 and August 2017, 111 patients (54.9 ± 6.8 years, 54.3% female) were enrolled in this randomized, controlled, multicenter study at the start of their inpatient follow-up rehabilitation after implantation of a knee or hip TEP. After discharge from orthopedic rehabilitation (baseline), the intervention group (IG) completed a three-month interactive training program via a telerehabilitation system. For this purpose, a supervising physiotherapist compiled an individual training plan from 38 exercises for improving strength and postural control. To adjust the training plan, the system transmitted data on the quantity and quality of the training to the physiotherapist. The control group (CG) could make use of conventional aftercare services. To assess the effectiveness of the intervention, the difference in 6MWT improvement between the IG and the CG after three months was defined as the primary endpoint. Secondary endpoints were the return-to-work rate and functional mobility as assessed by the Stair Ascend Test, the Five-Times-Sit-to-Stand Test, and the Timed Up and Go Test. Furthermore, health-related quality of life was evaluated with the Short Form 36 (SF-36) and joint-related impairment with the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC). The primary and secondary endpoints were analyzed by baseline-adjusted analyses of covariance in an intention-to-treat approach. In addition, participation in aftercare services and the intervention group's adherence to the telemedical aftercare were recorded and evaluated.
Results
At the end of the intervention, both groups showed a statistically significant increase in their 6MWT distance (p < 0.001). At this point, participants in the IG covered on average 530.8 ± 79.7 m, those in the CG 514.2 ± 71.2 m. The improvement in walking distance amounted to 88.3 ± 57.7 m in the IG and 79.6 ± 48.7 m in the CG. The primary endpoint thus showed no significant group difference (p = 0.951). With regard to return to work, however, a significantly higher rate was found in the IG (64.6% versus 46.2%; p = 0.014). For the secondary endpoints of functional mobility, quality of life, and joint-related complaints, the results demonstrate equivalence of the two groups at the end of the intervention.
Conclusion
Telemedically assisted exercise therapy for patients with a knee or hip TEP is equivalent to conventional aftercare with respect to the achieved improvements in functional mobility, health-related quality of life, and joint-related complaints. In this patient population, clinically relevant improvements were achieved regardless of the form of exercise therapy. With regard to return to work, a significantly higher rate was observed in the intervention group. Telemedically assisted exercise therapy thus appears to be a suitable form of aftercare that can be carried out independently of place and time, thereby meeting the needs of working patients and allowing integration into patients' everyday lives. Telemedical aftercare should therefore be offered as an optional and complementary form of post-rehabilitation care. Also in view of the growing shortage of qualified physiotherapists and existing gaps in care in structurally weak regions, the use of telemedical aftercare can provide innovative and needs-based solutions.
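The baseline-adjusted analysis of covariance used for the endpoints estimates the between-group difference as the coefficient of the group indicator in a regression of the follow-up value on baseline and group. A minimal self-contained sketch with toy data (not the study's data; a real analysis would use a statistics package and report confidence intervals):

```python
def ancova_group_effect(baseline, followup, group):
    """Baseline-adjusted treatment effect: the coefficient of the group
    indicator in the OLS regression  followup ~ 1 + baseline + group.
    """
    n = len(followup)
    X = [[1.0, baseline[i], float(group[i])] for i in range(n)]
    # Normal equations X'X b = X'y as an augmented 3x4 matrix.
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(3)]
         + [sum(X[i][r] * followup[i] for i in range(n))] for r in range(3)]
    # Gaussian elimination with partial pivoting, then back substitution.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    b = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        b[r] = (A[r][3] - sum(A[r][c] * b[c] for c in range(r + 1, 3))) / A[r][r]
    return b[2]  # adjusted group effect

# Toy data in which the true group effect is exactly 10:
base = [1, 2, 3, 4, 5, 6]
grp = [0, 0, 0, 1, 1, 1]
follow = [2 + 0.5 * b + 10 * g for b, g in zip(base, grp)]
print(ancova_group_effect(base, follow, grp))  # ~10.0
```

Adjusting for the baseline value in this way removes variance due to pre-existing differences between patients, which is why ANCOVA is the standard choice for randomized pre/post designs like this one.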
The goal of this thesis is to broaden the empirical basis for a better, comprehensive understanding of massive star evolution, star formation, and feedback at low metallicity. Low-metallicity massive stars are a key to understanding the early universe, yet quantitative information on metal-poor massive stars has so far been sparse. Quantitative spectroscopic studies of massive star populations associated with large-scale ISM structures had not previously been performed at low metallicity, but they are important for investigating star-formation histories and feedback in detail. Much of this work relies on spectroscopic observations with VLT-FLAMES of ~500 OB stars in the Magellanic Clouds. Where available, the optical spectroscopy was complemented by UV spectra from the HST, IUE, and FUSE archives. The two representative young stellar populations studied are associated with the superbubble N 206 in the Large Magellanic Cloud (LMC) and with the supergiant shell SMC-SGS 1 in the Wing of the Small Magellanic Cloud (SMC), respectively. We performed spectroscopic analyses of the massive stars using the non-LTE Potsdam Wolf-Rayet (PoWR) model atmosphere code. We estimated the stellar, wind, and feedback parameters of the individual massive stars and established their statistical distributions.
The mass-loss rates of the N 206 OB stars are consistent with theoretical expectations for LMC metallicity. The most massive and youngest stars show nitrogen enrichment at their surfaces and are found to be slower rotators than the rest of the sample. The N 206 complex has undergone star-formation episodes for more than 30 Myr, with a current star-formation rate higher than the LMC average. The spatial age distribution of stars across the complex possibly indicates triggered star formation due to the expansion of the superbubble. Three very massive, young Of stars in the region dominate the ionizing and mechanical feedback among the hundreds of other OB stars in the sample. The current stellar wind feedback rate from the two WR stars in the complex is comparable to that released by the whole OB sample. We see only a minor fraction of this stellar wind feedback converted into X-ray emission. In this LMC complex, stellar winds and supernovae contribute equally to the total energy feedback, which eventually powered the central superbubble. However, the total energy input accumulated over the lifetime of the superbubble significantly exceeds the observed energy content of the complex. This energy deficit, along with the morphology of the complex, suggests a leakage of hot gas from the superbubble.
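The mechanical wind feedback quoted here is, in essence, the wind's kinetic energy rate, L_mech = ½ Ṁ v∞². A quick sketch with generic O-star values (the mass-loss rate and terminal velocity below are illustrative round numbers, not results from the thesis):

```python
M_SUN = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # seconds per year
L_SUN = 3.828e26   # solar luminosity, W

def wind_mechanical_luminosity_w(mdot_msun_per_yr, v_inf_km_s):
    """Kinetic energy rate of a stellar wind: 0.5 * Mdot * v_inf^2 (in W)."""
    mdot = mdot_msun_per_yr * M_SUN / YEAR   # kg/s
    v_inf = v_inf_km_s * 1e3                 # m/s
    return 0.5 * mdot * v_inf**2

# An O star losing 1e-6 Msun/yr through a 2000 km/s wind:
lum = wind_mechanical_luminosity_w(1e-6, 2000.0)
print(lum / L_SUN)  # a few hundred solar luminosities
```

Because L_mech scales with Ṁ and with v∞ squared, the few stars with the strongest, fastest winds (the Of and WR stars) dominate the mechanical energy budget of the whole population, as found for N 206.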
With a detailed spectroscopic study of massive stars in SMC-SGS 1, we provide the stellar and wind parameters of a large sample of OB stars at low metallicity, including those in the lower mass range. The stellar rotation velocities show a broad, tentatively bimodal distribution, with Be stars among the fastest rotators. A few very luminous O stars are found close to the main sequence, while all other, slightly evolved stars obey a strict luminosity limit. Considering additional massive stars in evolved stages, with published parameters and located all over the SMC, essentially confirms this picture. The comparison with single-star evolutionary tracks suggests a dichotomy in the fate of massive stars in the SMC. Only stars with an initial mass below 30 solar masses seem to evolve from the main sequence to the cool side of the HRD, become red supergiants, and explode as type II-P supernovae. In contrast, more massive stars appear to stay hot throughout and might evolve quasi-chemically homogeneously, finally collapsing into relatively massive black holes. However, we find no indication that chemical mixing is correlated with rapid rotation. We measured the key parameters of stellar feedback and established the links between the rates of star formation and supernovae. Our study demonstrates that in metal-poor environments stellar feedback is dominated by core-collapse supernovae in combination with the winds and ionizing radiation supplied by a few of the most massive stars. We found indications of a stochastic mode of star formation, in which the resulting stellar population is fully capable of producing large-scale structures such as the supergiant shell SMC-SGS 1 in the Wing. The low level of feedback in metal-poor stellar populations allows star-formation episodes to persist over long timescales.
Our study showcases the importance of quantitative spectroscopy of massive stars with adequate stellar-atmosphere models for understanding star formation, evolution, and feedback. The stellar population analyses in the LMC and SMC show that massive stars and their impact can differ greatly depending on their environment. Owing to their different metallicities, the massive stars in the LMC and the SMC follow different evolutionary paths; their winds differ significantly, and the key feedback agents are different. As a consequence, star formation can proceed in different modes.
C-Aryl glycosides and chalcones
(2019)
In the ongoing era of scientific medicine, a broad spectrum of active agents for the treatment of diverse diseases has been assembled. Nevertheless, organic synthetic chemistry has made it its task to expand this spectrum, by new or established routes and for various reasons. On the one hand, the natural occurrence of certain compounds is often limited, so that synthetic methods increasingly replace less sustainable isolation from natural sources. On the other hand, derivatization and drug optimization can enhance the physiological effect or the bioavailability of an active agent. In this work, several representatives of the well-known compound classes of C-aryl glycosides and chalcones were synthesized via the key step of the palladium-catalyzed MATSUDA-HECK reaction.
To this end, in the case of the C-aryl glycosides, unsaturated carbohydrates (glycals) were first prepared via a ruthenium-catalyzed cyclization reaction. These were subsequently reacted with variously substituted diazonium salts in the above-mentioned palladium-catalyzed coupling reaction. Evaluation of the analytical data showed that the trans diastereomers were formed in every case. It was then shown that the double bonds of these compounds can be functionalized by hydrogenation, dihydroxylation, or epoxidation. In this way, among other things, a compound similar to the antidiabetic drug dapagliflozin was prepared.
In the second part of the work, aryl allyl chromanones were prepared by the MATSUDA-HECK reaction of various 8-allylchromanones with diazonium salts. It was observed that a MOM protecting group in the 7-position of the molecules suppresses the formation of product mixtures, so that only one of the possible compounds is formed in each case. The position of the double bond was located by means of 2D NMR experiments. In cooperation with theoretical chemistry, calculations were intended to clarify how the observed compounds are formed; owing to an intramolecular interaction, however, no definite conclusion could be drawn.
Subsequently, the compounds obtained were to be converted into chalcones by allylic oxidation. The ruthenium-catalyzed methods, among others, proved unsuitable. However, a metal-free, microwave-assisted method was successfully established, so that several representatives of this physiologically active compound class could be prepared.
Prussian inheritance law in the case law of the Berlin Obertribunal, 1836 to 1865
(2019)
The dissertation deals with the Prussian General Land Law (Allgemeines Preußisches Landrecht) of 1794 and the related case law of the Berlin Obertribunal. The investigation focuses on the inheritance-law provisions of the Landrecht and their application and interpretation in the case law of the highest Prussian court. The research subject arises from the specific understanding of statutes codified in the Landrecht: according to it, statutory interpretation by the courts was to be reduced to a minimum, namely interpretation solely on the basis of the wording of the provision, in order to give sufficient effect to the absolutist claim to power of the Prussian monarchs, notably Frederick the Great. In this context, the study examines to what extent the Prussian Obertribunal observed the "prohibition of interpretation" laid down in the Landrecht, and in which cases the court emancipated itself from this requirement, applied further methods of interpretation, and was thus able to develop an independent jurisprudence.
The work is divided into three main sections. Following the introduction, which outlines the legal-historical significance of the Landrecht and of inheritance law as well as the subject of the investigation, the genesis of the Landrecht and of the Berlin Obertribunal is presented.
A third section then analyzes the inheritance-law provisions of the Landrecht. It addresses the genesis of the various institutions of inheritance law, such as intestate and testamentary succession, the law of the compulsory portion, etc., taking into account the contemporary scholarly discourse.
The fourth section deals with the decisions of the Berlin Obertribunal from the years 1836 to 1865 in which the inheritance-law provisions presented previously were decisive for the outcome. Here, the research question of the extent to which the Obertribunal observed the prohibition of interpretation laid down in the Landrecht, and in which cases it deviated from it or applied further methods of interpretation, is pursued in concrete terms. In total, 26 decisions of the Obertribunal are analyzed and evaluated with respect to interpretive practice, continuity, and the acceleration of jurisprudence.
From monomer to glycopolymer
(2019)
Glycopolymers are synthetic and naturally occurring polymers that carry a glycan unit in the side chain of the polymer. Through glycan-protein interactions, glycans are responsible for many biological processes. The involvement of glycans in these biological processes makes it possible to mimic and analyze these interactions with suitable model compounds, e.g. glycopolymers. This system of glycan-protein interaction is to be examined and studied by means of the glycopolymers in order to demonstrate the specific and selective binding of proteins to the glycopolymers. Proteins capable of selectively binding carbohydrate structures are called lectins.
In this dissertation, various glycopolymers were synthesized, with attention paid to an efficient and cost-effective synthetic route.
Various glycopolymers were prepared from functionalized monomers bearing different sugars, such as mannose, lactose, galactose, or N-acetylglucosamine, as the functional group. From these functionalized glycomonomers, glycopolymers were synthesized via ATRP and RAFT polymerization.
The resulting glycopolymers were employed as the hydrophilic block in diblock copolymers, and their self-assembly in aqueous solution was investigated. In aqueous solution, the polymers formed micelles in which the sugar block sits at the micelle surface. The micelles were loaded with a hydrophobic fluorescent dye, which allowed the critical micelle concentration (CMC) of micelle formation to be determined.
Furthermore, the glycopolymers were attached to various surfaces as coatings, either by "grafting from" via SI-ATRP or by "grafting to". Using the glycopolymer-coated surfaces, the glycan-protein interaction could be investigated by spectroscopic methods such as SPR and microring resonators. The specific and selective binding of lectins to the glycopolymers was demonstrated, and the binding strength was examined.
By exchanging the glycan unit, the synthesized glycopolymers could be made addressable to other lectins, thereby opening up a wide field of other proteins. The biocompatible glycopolymers would be alternatives for use in biological processes as carriers of drugs or dyes into the body. In addition, the functionalized surfaces could be used in diagnostics for the detection of lectins. Glycans that do not bind proteins selectively and specifically could be employed as anti-adsorptive surface coatings, e.g. in cell biology.
Since 2003, the political landscape of Iraq has changed considerably, setting in motion the process of reshaping the Iraqi legal order. For the first time in Iraq's history, the Iraqi constitution of 2005 establishes Islam and democracy as two fundamental principles to be observed side by side in legislation. Despite this significant change in the Iraqi legal system, and despite considerable developments in private international law and international civil procedure law (PIL/ICPL) internationally, the statutory rules on PIL/ICPL in Iraq, contained mainly in the Iraqi Civil Code of 1951, remain in force. This thesis was therefore written to advocate a reform of Iraqi PIL/ICPL.
The thesis is the first comprehensive academic study to address both the present content and the future reform of Iraqi private international law and international civil procedure law (PIL/ICPL).
The author provides an overall survey of the Iraqi private international and civil procedure law currently in force, with occasional selective references to German, Islamic, Turkish and Tunisian law, identifies its weaknesses and submits corresponding reform proposals.
Because of the particular importance of international contract law for the Iraqi economy, and in part also for Germany, the author gives a more detailed overview of Iraqi international contract law and at the same time underlines its need for reform.
The account in the second chapter of the major developments in German-European law, in traditional Islamic law, and in Turkish and Tunisian private international and civil procedure law serves as a basis that can be drawn upon in reforming Iraqi PIL/ICPL. Since knowledge of Islamic law is not necessarily part of legal studies, Islamic law is additionally presented with regard to its origins and its sources of law.
The thesis concludes with a draft of a federal statute on private international law in Iraq that, within the framework of the Iraqi constitution, is compatible with both Islam and democracy.
The contestability and ascertainability of maternity de lege lata and de lege ferenda
(2019)
The time-honoured principle that a child descends from the woman who gave birth to it has been shaken by modern reproductive medicine. Nevertheless, § 1591 BGB incontestably assigns the child to the birth mother. Legal and genetic maternity therefore diverge permanently when the child is born by way of surrogacy or after an egg or embryo donation. The domestic prohibitions on these methods of artificial reproduction do not deter couples wishing to have children from resorting to corresponding offers abroad. The resulting problems of conflict of laws and constitutional law are the subject of this thesis.
In the field of surrogacy, the thesis examines whether the aims pursued by the legislator in excluding contestation can justify the accompanying impairments of constitutionally protected rights of the genetic mother and the child. Particular attention is paid to the interest, protected by Art. 6(2) sentence 1 GG, of biological parents and children in having a procedural avenue to be legally assigned to one another. Within a comprehensive proportionality assessment, this interest is weighed against the aims of the legislator, who intends the exclusion of contestation to protect the rights of surrogate mothers and children.
In the constellations of egg and embryo donation, the focus shifts to the child's right to know its own parentage, and with it the question of whether this gives rise to an obligation on the legislator to extend § 1598a BGB so that, in the field of artificial reproduction, the presumed genetic parents are included among those obliged to cooperate in clarifying parentage.
Beyond these focal points, many further problems are addressed. The thesis culminates in a proposal for the legislature.
The historical feature film is among the most popular forms of articulating historical culture. As such, it is the subject of controversial debates about an appropriate didactic approach. Against this background, the aim of this study is to develop an integrative, theoretically and empirically grounded model of analysis that asks about the deep structures of historical narration in the medium of the feature film and takes account of the different manifestations of historical feature films. The considerations therefore move within an interdisciplinary field between theories of historical narration and concepts from literary and film studies. The discussion and synthesis of these different concepts proceeds from the material itself, on the basis of a large corpus, and is inductive in design. As guidance for practical work, toolkits are developed at the end of the individual chapters that are intended to encourage a deeper engagement with historical feature films.
The thesis comprises three experimental studies, which were carried out to unravel the short- as well as the long-term mechanical properties of shale rocks. Short-term mechanical properties such as compressive strength and Young's modulus were taken from recorded stress-strain curves of constant strain rate tests. Long-term mechanical properties are represented by the time-dependent creep behavior of shales. This was obtained from constant stress experiments, with test durations ranging from a couple of minutes up to two weeks. A profound knowledge of the mechanical behavior of shales is crucial to reliably estimate the potential of a shale reservoir for an economical and sustainable extraction of hydrocarbons (HC). In addition, the healing of clay-rich cap rocks by creep and compaction is important for the underground storage of carbon dioxide and nuclear waste.
Chapter 1 introduces general aspects of the research topic at hand and highlights the motivation for conducting this study. At present, the shift from energy recovered from conventional resources, e.g. coal, towards energy provided by renewable resources such as wind or water is a major challenge. Gas recovered from unconventional reservoirs (shale plays) is considered a potential bridge technology.
In Chapter 2, short-term mechanical properties of two European mature shale rocks are presented, which were determined from constant strain rate experiments performed at ambient and in situ deformation conditions (confining pressure, pc ≤ 100 MPa; temperature, T ≤ 125 °C, representing pc, T conditions at < 4 km depth) using a Paterson-type gas deformation apparatus. The investigated shales were mainly drill core material of Posidonia (Germany) shale and weathered material of Bowland (United Kingdom) shale. The results are compared with the mechanical properties of North American shales. Triaxial compression tests performed perpendicular to bedding revealed semibrittle deformation behavior of Posidonia shale with pronounced inelastic deformation. This contrasts with the Bowland shale samples, which deformed in a brittle manner and displayed predominantly elastic deformation. The static Young's modulus, E, and triaxial compressive strength, σTCS, determined from the recorded stress-strain curves depended strongly on the applied confining pressure and sample composition, whereas the influence of temperature and strain rate on E and σTCS was minor. Shales with larger amounts of weak minerals (clay, mica, total organic carbon) showed lower E and σTCS. This may be related to a shift from deformation supported by a load-bearing framework of hard phases (e.g., quartz) towards deformation of interconnected weak minerals, particularly at higher fractions of about 25 – 30 vol% weak phases. Comparing the mechanical properties determined at reservoir conditions with predictions from effective medium theories revealed that E and σTCS of Posidonia and Bowland shale are close to the lower (Reuss) bound. Brittleness, B, is often quoted as a measure of the response of a shale formation to stimulation and economic production. The brittleness of Posidonia and Bowland shale, estimated from E, is in good agreement with the experimental results.
This correlation may be useful to predict B from sonic logs, from which the (dynamic) Young’s modulus can be retrieved.
Chapter 3 presents a study of the long-term creep properties of an immature Posidonia shale. Constant stress experiments (σ = const.) were performed at elevated confining pressures (pc = 50 – 200 MPa) and temperatures (T = 50 – 200 °C) to simulate reservoir pc, T conditions. The Posidonia shale samples were acquired from a quarry in southern Germany. At stresses below ≈ 84 % of the compressive strength of Posidonia shale, at high temperature and low confining pressure, samples showed pronounced transient (primary) creep with high deformation rates in the semibrittle regime. Sample deformation was mainly accommodated by creep of weak sample constituents and pore space reduction. An empirical power law relation between strain and time, which also accounts for the influence of pc, T and σ on creep strain, was formulated to describe the primary creep phase. Extrapolation of the results to a creep period of several years, which is the typical time interval for a large production decline, suggests that fracture closure is unlikely at low stresses. At high stresses, as expected for example at the contact between the fracture surfaces and proppants added during stimulation measures, subcritical crack growth may lead to secondary and tertiary creep. An empirical power law is suggested to describe secondary creep of shale rocks as a function of stress, pressure and temperature. The predicted closure rates agree with typical production decline curves recorded during the extraction of hydrocarbons. At the investigated conditions, the creep behavior of Posidonia shale was found to correlate with brittleness, calculated from sample composition.
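Empirical strain-time power laws of the kind mentioned above are commonly written in the following generic form (an illustrative sketch with generic symbols; the exact parameterisation used in the thesis is not given in the abstract):

```latex
% Primary (transient) creep: strain grows sublinearly with time,
% with prefactor and exponent depending on the imposed conditions
\varepsilon(t) = A\,\sigma^{n}\,t^{m}, \qquad A = A(p_c, T), \quad 0 < m < 1
```

Here ε is axial creep strain, t is time under constant differential stress σ, and the prefactor A captures the influence of confining pressure pc and temperature T; the exponents n and m are fitted empirically. Differentiating such a law gives a continuously decelerating strain rate, consistent with the transient creep and production-decline behavior described above.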
In Chapter 4 the creep properties of mature Posidonia and Bowland shales are presented. The observed long-term creep behavior is compared to the short-term behavior determined in Chapter 2. Creep experiments were performed at simulated reservoir conditions of pc = 50 – 115 MPa and T = 75 – 150 °C. Similar to the mechanical response of the immature Posidonia shale samples investigated in Chapter 3, creep strain rates of mature Bowland and Posidonia shales were enhanced with increasing stress and temperature and with decreasing confining pressure. Depending on the applied deformation conditions, samples displayed either only a primary (decelerating) creep phase or, in addition, a secondary (quasi-steady state) and subsequently a tertiary (accelerating) creep phase before failure. At the same deformation conditions, the creep strain of Posidonia shale, which is rich in weak constituents, is far higher than that of quartz-rich Bowland shale. Typically, primary creep strain is again mostly accommodated by deformation of weak minerals and local pore space reduction. At the onset of tertiary creep, most of the deformation was accommodated by microcrack growth. A power law was used to characterize the primary creep phase of Posidonia and Bowland shale. Primary creep strain of shale rocks is inversely correlated with triaxial compressive strength and brittleness, as described in Chapter 2.
Chapter 5 provides a synthesis of the experimental findings and summarizes the major results of the studies presented in Chapters 2 – 4 and potential applications in the Exploration & Production industry.
Chapter 6 gives a brief outlook on potential future experimental research that would help to further improve our understanding of the processes leading to fracture closure involving proppant embedment in unconventional shale gas reservoirs. Such insights may help improve stimulation techniques aimed at maintaining the economical extraction of hydrocarbons over several years.
The current thesis examined how second language (L2) speakers of German predict upcoming input during language processing. Early research has shown that the predictive abilities of L2 speakers relative to L1 speakers are limited, resulting in the proposal of the Reduced Ability to Generate Expectations (RAGE) hypothesis. Considering that prediction is assumed to facilitate language processing in L1 speakers and probably plays a role in language learning, the assumption that L1/L2 differences can be explained in terms of different processing mechanisms is a particularly interesting approach. However, results from more recent studies on the predictive processing abilities of L2 speakers have indicated that the claim of the RAGE hypothesis is too broad and that prediction in L2 speakers could be selectively limited. In the current thesis, the RAGE hypothesis was systematically put to the test.
In this thesis, German L1 speakers and highly proficient late L2 learners of German with Russian as their L1 were tested on their predictive use of one or more information sources that exist as cues to sentence interpretation in both languages, in order to test for selective limits. The results showed that, in line with previous findings, L2 speakers can use the lexical semantics of verbs to predict the upcoming noun. Here the level of prediction was controlled for more systematically than in previous studies by using verbs that restrict the selection of upcoming nouns to the semantic category animate or inanimate. Hence, prediction in L2 processing is possible. At the same time, this experiment showed that the L2 group was slower and less certain than the L1 group. In contrast to previous studies, the experiment on case marking demonstrated that L2 speakers can use this morphosyntactic cue for prediction. Here, the use of case marking was tested by manipulating the word order (Dat > Acc vs. Acc > Dat) in double object constructions after a ditransitive verb. Both the L1 and the L2 group showed a difference between the two word order conditions that emerged within the critical time window for an anticipatory effect, indicating their sensitivity to case. However, the results for the post-critical time window pointed to a higher uncertainty in the L2 group, who needed more time to integrate incoming information and were more affected by the word order variation than the L1 group, indicating that they relied more on surface-level information. A different cue weighting was also found in the experiment testing whether participants predict upcoming reference based on implicit causality information. Here, an additional child L1 group was tested, who had a lower memory capacity than the adult L2 group, as confirmed by a digit span task conducted with both learner groups.
Whereas the children were only slightly delayed compared to the adult L1 group and showed the same effect of condition, the L2 speakers showed an over-reliance on surface-level information (first-mention/subjecthood). Hence, the pattern observed resulted more likely from L1/L2 differences than from resource deficits.
The reviewed studies and the experiments conducted show that L2 prediction is affected by a range of factors. While some of the factors can be attributed to more individual differences (e.g., language similarity, slower processing) and can be interpreted by L2 processing accounts assuming that L1 and L2 processing are basically the same, certain limits are better explained by accounts that assume more substantial L1/L2 differences. Crucially, the experimental results demonstrate that the RAGE hypothesis should be refined: Although prediction as a fast-operating mechanism is likely to be affected in L2 speakers, there is no indication that prediction is the dominant source of L1/L2 differences. The results rather demonstrate that L2 speakers show a different weighting of cues and rely more on semantic and surface-level information to predict as well as to integrate incoming information.
The present dissertation about teachers' cultural diversity beliefs and culturally responsive practices includes a general introduction (Chapter 1), a systematic literature review (Chapter 2), three empirical studies (Chapters 3, 4, and 5), and ends with a general discussion and conclusion (Chapter 6). The major focus of investigation lay in creating a deeper understanding of teachers' beliefs about cultural diversity and how those beliefs are related to teaching practices that could or could not be considered culturally responsive. In this dissertation, I relied on insights from theoretical perspectives deriving from the field of psychology, such as social cognitive theory and intergroup ideologies, as well as from the field of multicultural education, such as culturally responsive teaching.
In Chapter 1, I provide the background of this dissertation, with contextual information regarding the German educational system, the theoretical framework used and the main research objectives of each study.
In Chapter 2, I conducted a systematic review of the existing international studies on trainings addressing cultural diversity beliefs with pre-service teachers. More specifically, the aims of the systematic literature review were (1) to provide a description of the main components and contextual characteristics of teacher trainings targeting cultural diversity beliefs, (2) to report the training effects, and (3) to detail the methodological strengths and weaknesses of these studies. By examining the main components and contextual characteristics of teacher trainings, the effects on beliefs about cultural diversity, and the methodological strengths and weaknesses of these studies in a single review, I took an integrated approach to these three processes. To review the final pool of studies (N = 36), I used a descriptive and narrative approach, relying primarily on the use of words and text to summarise and explain the findings of the synthesis.
The three empirical studies that follow all highlight aspects of how far and how teacher beliefs about cultural diversity translate into real-world practices in schools. In Chapter 3, to extend the validity of culturally responsive teaching to the German context, I aimed at verifying the dimensional structure of the German version of the Culturally Responsive Classroom Management Self-Efficacy Scale (CRCMSES; Siwatu, Putman, Starker-Glass, & Lewis, 2015). I conducted exploratory and confirmatory factor analyses and ran correlations between the subscales of the CRCMSES and a measure of cultural diversity-related stress. The data (n = 504) used for the first empirical study (Chapter 3) were collected in the InTePP project (Inclusive Teaching Professionalization Panel), in which pre-service teachers' competencies and beliefs were assessed longitudinally at two universities: the University of Potsdam and the University of Cologne.
In the second empirical study, which forms Chapter 4, the focus is on teachers' practices reflecting school approaches to cultural diversity. In this study, I investigated two research questions: (1a) What types of descriptive norms regarding cultural diversity are perceived by teachers and students with and without an immigrant background, and (1b) what is their degree of congruence? Additionally, I was interested in how teachers' and students' perceptions of descriptive norms about cultural diversity are related to practices and artefacts in the physical and virtual school environment. The data for the second empirical study (Chapter 4) were previously collected in the dissertation project of Dr. Maja Schachner, funded by the federal program "ProExzellenz" of the Free State of Thuringia. Adopting a mixed-methods research design, I conducted a secondary analysis of data from teachers (n = 207) and students (n = 1,644) gathered in 22 secondary schools in south-west Germany. Additional sources of data in this study were pictures of school interiors (halls and corridors) and of sixth-grade classroom walls (n = 2,995), as well as screenshots from each school website (n = 6,499).
Chapter 5 addresses the question of how culturally responsive teaching, teacher cultural diversity beliefs, and self-reflection on own teaching are related. More specifically, in this study I addressed two research questions: (1) How does CRT relate to teachers’ beliefs about incorporating cultural diversity content into daily teaching and learning activities? And (2) how does the level of teachers’ self-reflection on their own teaching relate to CRT?
For this last empirical chapter, I conducted a multiple case study with four ethnic German teachers who work in one culturally and ethnically diverse high school in Berlin, using classroom video observations and post-observation interviews.
In the final chapter (Chapter 6), I summarise the main findings of the systematic literature review and the three empirical studies and discuss their scientific and practical implications.
This dissertation makes a significant contribution to the field of educational science by advancing the understanding of culturally responsive teaching in terms of its measurement, its focus on both beliefs and practices and the link between the two, and its theoretical and practical implications as well as directions for future research.
The present work is a compilation of three original research articles submitted (or already published) in international peer-reviewed venues of the field of speech science. These three articles address the topics of fundamental motor laws in speech and dynamics of corresponding speech movements:
1. Kuberski, Stephan R. and Adamantios I. Gafos (2019). "The speed-curvature power law in tongue movements of repetitive speech". PLOS ONE 14(3). Public Library of Science. doi: 10.1371/journal.pone.0213851.
2. Kuberski, Stephan R. and Adamantios I. Gafos (In press). "Fitts' law in tongue movements of repetitive speech". Phonetica: International Journal of Phonetic Science. Karger Publishers. doi: 10.1159/000501644
3. Kuberski, Stephan R. and Adamantios I. Gafos (submitted). "Distinct phase space topologies of identical phonemic sequences". Language. Linguistic Society of America.
The present work introduces a metronome-driven speech elicitation paradigm in which participants were asked to utter repetitive sequences of elementary consonant-vowel syllables. This paradigm, explicitly designed to cover a substantially wider range of speech rates than has been explored in previous work, is demonstrated to satisfy the important prerequisites for assessing aspects of speech that have so far been difficult to access. Specifically, the paradigm's extensive speech rate manipulation enabled the elicitation of a great range of movement speeds as well as movement durations and excursions of the relevant effectors. The presence of such variation is a prerequisite for assessing whether invariant relations between these and other parameters exist, and it thus provides the foundation for a rigorous evaluation of the two laws examined in the first two contributions of this work.
In the data resulting from this paradigm, it is shown that speech movements obey the same fundamental laws as movements from other domains of motor control. In particular, it is demonstrated that speech strongly adheres to the power law relation between the speed and curvature of movement, with a clear speech rate dependency of the power law's exponent. The often-sought or reported exponent of one third in the statement of the law is unique to a subclass of movements corresponding to the range of faster rates at which a particular utterance is produced. For slower rates, values significantly larger than one third are observed. Furthermore, for the first time in speech, this work uncovers evidence for the presence of Fitts' law. It is shown that, beyond a speaker-specific speech rate, speech movements of the tongue clearly obey Fitts' law through the emergence of its characteristic linear relation between movement time and index of difficulty. For slower speech rates (when temporal pressure is small), no such relation is observed. The methods and datasets obtained in the two assessments above provide a rigorous foundation both for addressing implications for theories and models of speech and for better understanding the status of speech movements in the context of human movements in general.
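For reference, the two laws assessed above are commonly stated as follows in the motor control literature (standard textbook formulations with generic symbols, not the thesis's own notation):

```latex
% Speed-curvature power law: tangential speed v decreases with curvature kappa;
% the "one-third" exponent corresponds to beta = 1/3 at faster rates
v(t) = K\,\kappa(t)^{-\beta}, \qquad \beta \approx \tfrac{1}{3}

% Fitts' law: movement time MT grows linearly with the index of difficulty ID,
% where D is the movement distance (excursion) and W the target width
MT = a + b \cdot ID, \qquad ID = \log_2\!\left(\frac{2D}{W}\right)
```

Here K is a velocity gain factor and a, b are empirically fitted constants; the speech rate dependencies reported above correspond to systematic changes in β and to the presence or absence of the linear MT-ID relation, respectively.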
All modern theories of language rely on a fundamental segmental hypothesis according to which the phonological message of an utterance is represented by a sequence of segments or phonemes. It is commonly assumed that each of these phonemes can be mapped to some unit of speech motor action, a so-called speech gesture.
For the first time here, it is demonstrated that the relation between the phonological description of simple utterances and the corresponding speech motor action is non-unique. Specifically, through the extensive speech rate manipulation of the experimental paradigm used here, it is demonstrated that speech exhibits clearly distinct dynamical organizations underlying the production of simple utterances. At slower speech rates, the dynamical organization underlying the repetitive production of elementary /CV/ syllables can be described by successive concatenations of closing and opening gestures, each with its own equilibrium point. As speech rate increases, the equilibria of opening and closing gestures are not equally stable, yielding qualitatively different modes of organization with either a single equilibrium point of a combined opening-closing gesture or a periodic attractor unleashed by the disappearance of both equilibria. This observation, the non-uniqueness of the dynamical organization underlying what on the surface appear to be identical phonemic sequences, is an entirely new result in the domain of speech. Beyond that, the demonstration of periodic attractors in speech reveals that dynamical equilibrium point models do not account for all possible modes of speech motor behavior.