Lamprophyres are porphyritic rocks formed from mantle melts that mostly occur as dykes. They are distinguished by striking and characteristic textural, chemical, and mineralogical properties. As former mantle melts, they provide information both on the conditions of melt formation in the mantle and on the geodynamic processes that led to metasomatic alteration of the mantle. In the Saxothuringian Zone of Central Europe, at the northern margin of the Bohemian Massif, there are numerous lamprophyre occurrences, which are used here to characterize mantle evolution during the Variscan orogeny. The present thesis deals with the mineralogical, geochemical, and isotopic (Sr-Nd-Pb) signatures of late Variscan calc-alkaline lamprophyres, of post-Variscan ultramafic lamprophyres, of alkali basalts of Lusatia and, for comparison, of pre-Variscan gabbros. In addition, the thesis uses lithium isotope signatures combined with Sr-Nd-Pb isotope data of late Variscan calc-alkaline lamprophyres from three Variscan domains (Erzgebirge, Lusatia, Sudetes) to explore the local mantle overprinting during the Variscan orogeny.
A central insight from psychological studies on human eye movements is that eye movement patterns are highly individually characteristic. They can, therefore, be used as a biometric feature, that is, subjects can be identified based on their eye movements. This thesis introduces new machine learning methods to identify subjects based on their eye movements while viewing arbitrary content. The thesis focuses on probabilistic modeling of the problem, which has yielded the best results in the most recent literature. The thesis studies the problem in three phases by proposing a purely probabilistic, probabilistic deep learning, and probabilistic deep metric learning approach. In the first phase, the thesis studies models that rely on psychological concepts about eye movements. Recent literature illustrates that individual-specific distributions of gaze patterns can be used to accurately identify individuals. In these studies, models were based on a simple parametric family of distributions. Such simple parametric models can be robustly estimated from sparse data, but have limited flexibility to capture the differences between individuals. Therefore, this thesis proposes a semiparametric model of gaze patterns that is flexible yet robust for individual identification. These patterns can be understood as domain knowledge derived from psychological literature. Fixations and saccades are examples of simple gaze patterns. The proposed semiparametric densities are drawn under a Gaussian process prior centered at a simple parametric distribution. Thus, the model will stay close to the parametric class of densities if little data is available, but it can also deviate from this class if enough data is available, increasing the flexibility of the model. The proposed method is evaluated on a large-scale dataset, showing significant improvements over the state-of-the-art. 
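The purely probabilistic first phase described above can be illustrated with a toy sketch: fit a simple parametric distribution (here a Gaussian) to each individual's fixation durations and identify a new scanpath by maximum summed log-likelihood. All data below are hypothetical, and the Gaussian stands in for the simple parametric family; the thesis's actual semiparametric model, with densities drawn under a Gaussian process prior, is not shown here.

```python
import math
from statistics import mean, stdev

def fit_gaussian(durations):
    # Enrollment: summarize one subject's fixation durations by (mu, sigma).
    return mean(durations), stdev(durations)

def log_likelihood(durations, params):
    # Summed Gaussian log-density of the query durations under one model.
    mu, sigma = params
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (d - mu) ** 2 / (2 * sigma**2) for d in durations)

def identify(query, enrolled):
    """Return the enrolled subject whose model best explains the query."""
    return max(enrolled, key=lambda s: log_likelihood(query, enrolled[s]))

# Hypothetical enrollment data: fixation durations in milliseconds.
enrolled = {
    "subject_a": fit_gaussian([180, 200, 190, 210, 205, 195]),
    "subject_b": fit_gaussian([320, 300, 310, 330, 305, 315]),
}
print(identify([198, 202, 207], enrolled))  # closer to subject_a's model
```

The semiparametric extension would let each subject's density deviate from this parametric shape as more data accumulates, while collapsing back to it when data are sparse.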
Later, the thesis replaces the model based on gaze patterns derived from psychological concepts with a deep neural network that can learn more informative and complex patterns from raw eye movement data. As previous work has shown that the distribution of these patterns across a sequence is informative, a novel statistical aggregation layer called the quantile layer is introduced. It explicitly fits the distribution of deep patterns learned directly from the raw eye movement data. The proposed deep learning approach is end-to-end learnable, such that the deep model learns to extract informative, short local patterns while the quantile layer learns to approximate the distributions of these patterns. Quantile layers are a generic approach that can converge to standard pooling layers or have a more detailed description of the features being pooled, depending on the problem. The proposed model is evaluated in a large-scale study using the eye movements of subjects viewing arbitrary visual input. The model improves upon the standard pooling layers and other statistical aggregation layers proposed in the literature. It also improves upon the state-of-the-art eye movement biometrics by a wide margin. Finally, for the model to identify any subject — not just the set of subjects it is trained on — a metric learning approach is developed. Metric learning learns a distance function over instances. The metric learning model maps the instances into a metric space, where sequences of the same individual are close, and sequences of different individuals are further apart. This thesis introduces a deep metric learning approach with distributional embeddings. The approach represents sequences as a set of continuous distributions in a metric space; to achieve this, a new loss function based on Wasserstein distances is introduced. The proposed method is evaluated on multiple domains besides eye movement biometrics. 
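The quantile-layer idea can be sketched minimally: instead of mean or max pooling over time, summarize each feature dimension of a variable-length sequence by a fixed set of quantiles, approximating that feature's distribution. In the thesis the quantile positions are learned end-to-end together with the deep feature extractor; in this illustrative sketch they are fixed, and the input features are random stand-ins for learned ones.

```python
import numpy as np

def quantile_pool(features, qs=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """features: (T, D) array of per-timestep features.
    Returns a fixed-size embedding of length len(qs) * D."""
    return np.quantile(features, qs, axis=0).ravel()

rng = np.random.default_rng(0)
seq = rng.normal(size=(500, 8))   # hypothetical deep features, T=500, D=8
emb = quantile_pool(seq)
print(emb.shape)                  # (40,)
```

Note that max pooling is recovered as the single quantile q = 1.0, which is one way to see why such a layer can converge to a standard pooling layer or retain a more detailed description of the pooled features.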
This approach outperforms the state of the art in deep metric learning in several domains while also outperforming the state of the art in eye movement biometrics.
We establish elements of a new approach to ellipticity and parametrices within operator algebras on manifolds with higher singularities, based only on some general axiomatic requirements on parameter-dependent operators in suitable scales of spaces. The idea is to model an iterative process with new generations of parameter-dependent operator theories, together with new scales of spaces that satisfy requirements analogous to those of the original ones, now on a corresponding higher level. The "full" calculus involves two separate theories, one near the tip of the corner and another one at the conical exit to infinity. However, concerning the conical exit to infinity, we establish here a new concrete calculus of edge-degenerate operators which can be iterated to higher singularities.
Linked Open Data (LOD) comprises many, often large, public data sets and knowledge bases. Those datasets are mostly presented in the RDF triple structure of subject, predicate, and object, where each triple represents a statement or fact. Unfortunately, the heterogeneity of available open data requires significant integration steps before it can be used in applications. Meta information, such as ontological definitions and exact range definitions of predicates, is desirable and ideally provided by an ontology. However, in the context of LOD, ontologies are often incomplete or simply not available. Thus, it is useful to automatically generate meta information, such as ontological dependencies, range definitions, and topical classifications. Association rule mining, which was originally applied for sales analysis on transactional databases, is a promising and novel technique for exploring such data. We designed an adaptation of this technique for mining RDF data and introduce the concept of "mining configurations", which allows us to mine RDF data sets in various ways. Different configurations enable us to identify schema and value dependencies that in combination result in interesting use cases. To this end, we present rule-based approaches for auto-completion, data enrichment, ontology improvement, and query relaxation. Auto-completion remedies the problem of inconsistent ontology usage, providing an editing user with a sorted list of commonly used predicates. A combination of different configurations extends this approach to create completely new facts for a knowledge base. We present two approaches for fact generation: a user-based approach, where a user selects the entity to be amended with new facts, and a data-driven approach, where an algorithm discovers entities that need to be amended with missing facts. As knowledge bases constantly grow and evolve, another approach to improve the usage of RDF data is to improve existing ontologies.
Here, we present an association-rule-based approach to reconcile ontology and data. Interlacing different mining configurations, we derive an algorithm to discover synonymously used predicates. Those predicates can be used to expand query results and to support users during query formulation. We provide a wide range of experiments on real-world datasets for each use case. The experiments and evaluations show the added value of association rule mining for the integration and usability of RDF data and confirm the appropriateness of our mining configuration methodology.
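One mining configuration of the kind described above can be sketched in a few lines: treat each subject as a transaction and the predicates used with it as items, then rank rules p1 -> p2 by confidence to suggest predicates for auto-completion. The triples below are hypothetical toy data, and the exact configurations and thresholds used in the thesis may differ.

```python
from collections import defaultdict

# Hypothetical toy triples (subject, predicate, object).
triples = [
    ("Berlin", "type", "City"), ("Berlin", "country", "Germany"),
    ("Berlin", "population", "3.7M"),
    ("Paris", "type", "City"), ("Paris", "country", "France"),
    ("Paris", "population", "2.1M"),
    ("Mars", "type", "Planet"),
]

# Configuration: transactions are subjects, items are their predicates.
transactions = defaultdict(set)
for s, p, _ in triples:
    transactions[s].add(p)

def confidence(antecedent, consequent):
    """conf(a -> c) = support({a, c}) / support({a})."""
    both = sum(1 for preds in transactions.values()
               if antecedent in preds and consequent in preds)
    ante = sum(1 for preds in transactions.values() if antecedent in preds)
    return both / ante if ante else 0.0

def suggest(known_predicates, min_conf=0.5):
    """Rank candidate predicates for an entity already using the given ones."""
    scores = defaultdict(float)
    candidates = {p for preds in transactions.values() for p in preds}
    for a in known_predicates:
        for c in candidates - set(known_predicates):
            scores[c] = max(scores[c], confidence(a, c))
    return sorted((c for c, v in scores.items() if v >= min_conf),
                  key=lambda c: -scores[c])

print(suggest({"country"}))  # predicates that co-occur with 'country'
```

Swapping what plays the role of transaction and item (e.g., objects as transactions, or predicate-object pairs as items) yields the other configurations that the text combines for fact generation and synonym discovery.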
The present thesis is entitled: The Role of the Intellectual in Günter Grass's Works: "Die Plebejer proben den Aufstand" (1966), "Örtlich betäubt" (1969), "Aus dem Tagebuch einer Schnecke" (1972), and "Ein weites Feld" (1995).
The first chapter deals with three main headings in total.
II. The Intellectual
II.1 The general context
This part of the dissertation seeks to provide answers to the following questions, among others: What is an intellectual? How did the term come about? Are there differences between intellectuals, and how are they classified?
II.2 The German context
Examining the Nazi system and its historical background conveys important lessons. But why are these lessons needed? Are there traces of National Socialism today? Where were the intellectuals during the formation of National Socialism? Did National Socialism only emerge with Hitler? If earlier, in which phase did it take root in the consciousness of the Germans? Did theoretical or intellectual tendencies contribute to it?
II.3 The image of Grass as an intellectual
II.3.1 Positioning
A central thesis on Grass's intellectual positioning is established through the connection between Grass's basic conception of socio-political intellectuality and the Gruppe 47. The discussion of Grass's image after the publication of his autobiographical work "Beim Häuten der Zwiebel" (2006) then aims to illuminate his intellectuality not only from its positive but also from its negative profile.
From the presentation of numerous views held by Günter Grass, five central thematic points are treated as concepts. Under each concept, specific proposals for social positioning are outlined.
II.3.2 Grass's political characteristics
This section concerns his intellectual character traits and raises several questions: Does Günter Grass engage in social activities? Does he meet the preconditions for them? What is the scope of his activities? Did the Gruppe 47 influence Grass's intellectual characteristics? Does Grass have a method of socio-political work at his disposal?
The political language of Günter Grass and its effect on the recipient is then examined. Next, Grass's conception of revision is discussed, and whether it is compatible with his conception of enlightenment. The function of revision in his literary work and in his socio-political activity is then shown.
Finally, the arguments for his intellectuality are examined:
How did Grass's socio-political activity take the concrete political framework into account? To answer this question, the relationship between politics and morality must be clarified.
III. Historical context and content of the works
Under this heading, the historical context of the works under investigation is first outlined. Then, mostly through evidence drawn from each work itself, not only the core of the work and the course of its plot but also the method applied in it are presented.
IV. Relation of the works under investigation to concrete socio-political questions
IV.1 Ways in which the intellectual interacts with society, especially during changes in socio-political processes
The central concepts of the first work are mediation, engagement, solidarity, and topicality as a yardstick. These are tied to revision through two concepts of the second work, the appeal to generations in times of change and the principle of cohesion, and are elaborated through the treatment of the process of opinion formation in the fourth work.
IV.2 Thematic aspects for preventing a Nazi regime
The thematic perspectives of the last three works yield a varied collection of intellectual concepts that can be used to combat Nazi advances.
V. Pedagogical strategies of the works under investigation
The pedagogical aspects of the works under investigation are intended to convey intellectual values that make a significant contribution to resolving socio-political problems and conflicts.
VI. Development of the literary and socio-political vision
Here, the line of development of the socio-political vision in the works under investigation is traced.
VII. On the reception of the four works
By engaging with the negative criticism, the aim is to demonstrate its subjectivity, thereby revealing the socio-political value of the four works.
Health effects attributed to environmental pollution resulting from the use of solvents such as benzene are relatively unexplored among petroleum workers, private users, and laboratory researchers. Solvents can cause various health problems, such as neurotoxicity, immunotoxicity, and carcinogenicity. They can be absorbed into the human body through the skin or the respiratory tract, where they interact with molecules responsible for biochemical and physiological processes of the brain.
Owing to the ever-growing demand for a solution, ionic liquids can be used as alternative solvents. Ionic liquids are salts that are liquid at low temperatures (below 100 °C), or even at room temperature. Ionic liquids provide a unique architectural platform that has attracted interest because of their unusual properties, which can be tuned in simple ways such as mixing two ionic liquids.
Ionic liquids are not only used as reaction solvents; owing to their thermal stability and electric conductivity, combined with a very low vapor pressure in contrast to conventional solvents, they have become key to developing novel applications.
In this study, ionic liquids were used as solvent and reactant at the same time for the synthesis of novel nanomaterials for different applications, including solar cells, gas sensors, and water splitting.
The field of ionic liquids continues to grow and has become one of the most important branches of science. It appears to be at a point where research and industry can work together in a new way of thinking about green chemistry and sustainable production.
Digital inclusion (2021)
In this thesis, we tackle two social disruptions: recent refugee waves in Germany and the COVID-19 pandemic. We focus on the use of information and communication technology (ICT) as a key means of alleviating these disruptions and promoting social inclusion. As social disruptions typically lead to frustration and fragmentation, it is essential to ensure the social inclusion of individuals and societies during such times.
In the context of the social inclusion of refugees, we focus on the Syrian refugees who arrived in Germany as of 2015, as they form a large and coherent refugee community. In particular, we address the role of ICTs in refugees’ social inclusion and investigate how different ICTs (especially smartphones and social networks) can foster refugees’ integration and social inclusion. In the context of the COVID-19 pandemic, we focus on the widespread unconventional working model of work from home (WFH). Our research here centers on the main constructs of WFH and the key differences in WFH experiences based on personal characteristics such as gender and parental status.
We reveal novel insights through four well-established research methods: literature review, mixed methods, qualitative methods, and quantitative methods. The results of our research have been published in the form of eight articles in major information systems venues and journals. Key results from the refugee research stream include the following: Smartphones represent a central component of refugee ICT use; refugees view ICT as a source of information and power; the social connectedness of refugees is strongly correlated with their Internet use; refugees are not relying solely on traditional methods to learn the German language or pursue further education; the ability to use smartphones anytime and anywhere gives refugees an empowering feeling of global connectedness; and ICTs empower refugees on three levels (community participation, sense of control, and self-efficacy).
Key insights from the COVID-19 WFH stream include: Gender and the presence of children under the age of 18 affect workers’ control over their time, technology usefulness, and WFH conflicts, while not affecting their WFH attitudes; and both personal and technology-related factors affect an individual’s attitude toward WFH and their productivity. Further insights are being gathered at the time of submitting this thesis.
This thesis contributes to the discussion within the information systems community regarding how to use different ICT solutions to promote the social inclusion of refugees in their new communities and foster an inclusive society. It also adds to the growing body of research on COVID-19, in particular on the sudden workplace transformation to WFH. The insights gathered in this thesis reveal theoretical implications and future opportunities for research in the field of information systems, practical implications for relevant stakeholders, and social implications related to the refugee crisis and the COVID-19 pandemic that must be addressed.
The aim of this work was the generation of carbon materials with high surface area, exhibiting a hierarchical pore system in the macro- and mesorange. Such a pore system facilitates transport through the material and enhances the interaction with the carbon matrix (macropores are pores with diameters > 50 nm, mesopores between 2 and 50 nm). To this end, new strategies were developed for the synthesis of novel carbon materials with designed porosity that are in particular useful for the storage of energy. Besides the porosity, it is the graphene structure itself that determines the properties of a carbon material. Non-graphitic carbon materials usually exhibit a rather large degree of disorder with many defects in the graphene structure, and thus exhibit inherent microporosity (d < 2 nm). These pores act as traps and oppose reversible interaction with the carbon matrix. Furthermore, they reduce the stability and conductivity of the carbon material, which is undesirable for the proposed applications. As one part of this work, the graphene structures of different non-graphitic carbon materials were studied in detail using a novel wide-angle X-ray scattering model that provided precise information about the nature of the carbon building units (graphene stacks). Different carbon precursors were evaluated regarding their potential use for the syntheses shown in this work, whereby mesophase pitch proved to be advantageous when a less disordered carbon microstructure is desired. Using mesophase pitch as carbon precursor, two templating strategies were developed based on the nanocasting approach. The synthesized (monolithic) materials combined for the first time the advantages of a hierarchically interconnected pore system in the macro- and mesorange with the advantages of mesophase pitch as carbon precursor. In the first case, hierarchical macro-/mesoporous carbon monoliths were synthesized by replication of hard (silica) templates.
Thus, a suitable synthesis procedure was developed that allowed the infiltration of the template with the hardly soluble carbon precursor. In the second case, hierarchical macro- / mesoporous carbon materials were synthesized by a novel soft-templating technique, taking advantage of the phase separation (spinodal decomposition) between mesophase pitch and polystyrene. The synthesis also allowed the generation of monolithic samples and incorporation of functional nanoparticles into the material. The synthesized materials showed excellent properties as an anode material in lithium batteries and support material for supercapacitors.
It is a well-attested finding in head-initial languages that individuals with aphasia (IWA) have greater difficulties in comprehending object-extracted relative clauses (ORCs) as compared to subject-extracted relative clauses (SRCs). Adopting the linguistically based approach of Relativized Minimality (RM; Rizzi, 1990, 2004), the subject-object asymmetry is attributed to the occurrence of a Minimality effect in ORCs due to reduced processing capacities in IWA (Garraffa & Grillo, 2008; Grillo, 2008, 2009). For ORCs, it is claimed that the embedded subject intervenes in the syntactic dependency between the moved object and its trace, resulting in greater processing demands. In contrast, no such intervener is present in SRCs. Based on the theoretical framework of RM and findings from language acquisition (Belletti et al., 2012; Friedmann et al., 2009), it is assumed that Minimality effects are alleviated when the moved object and the intervening subject differ in terms of relevant syntactic features. For German, the language under investigation, the RM approach predicts that number (i.e., singular vs. plural) and the lexical restriction [+NP] feature (i.e., lexically restricted determiner phrases vs. lexically unrestricted pronouns) are relevant in the computation of Minimality. Greater degrees of featural distinctiveness are predicted to result in facilitated processing of ORCs, because IWA can more easily distinguish between the moved object and the intervener.
This cumulative dissertation aims to provide empirical evidence on the validity of the RM approach in accounting for comprehension patterns during relative clause (RC) processing in German-speaking IWA. For that purpose, I conducted two studies including visual-world eye-tracking experiments embedded within an auditory referent-identification task to study the offline and online processing of German RCs. More specifically, target sentences were created to evaluate (a) whether IWA demonstrate a subject-object asymmetry, (b) whether dissimilarity in the number and/or the [+NP] features facilitates ORC processing, and (c) whether sentence processing in IWA benefits from greater degrees of featural distinctiveness. Furthermore, by comparing RCs disambiguated through case marking (at the relative pronoun or the following noun phrase) and number marking (inflection of the sentence-final verb), it was possible to consider the role of the relative position of the disambiguation point. The RM approach predicts that dissimilarity in case should not affect the occurrence of Minimality effects. However, the case cue to sentence interpretation appears earlier within RCs than the number cue, which may result in lower processing costs in case-disambiguated RCs compared to number-disambiguated RCs.
In study I, target sentences varied with respect to word order (SRC vs. ORC) and dissimilarity in the [+NP] feature (lexically restricted determiner phrase vs. pronouns as embedded element). Moreover, by comparing the impact of these manipulations in case- and number-disambiguated RCs, the effect of dissimilarity in the number feature was explored. IWA demonstrated a subject-object asymmetry, indicating the occurrence of a Minimality effect in ORCs. However, dissimilarity neither in the number feature nor in the [+NP] feature alone facilitated ORC processing. Instead, only ORCs involving distinct specifications of both the number and the [+NP] features were well comprehended by IWA. In study II, only temporarily ambiguous ORCs disambiguated through case or number marking were investigated, while controlling for varying points of disambiguation. There was a slight processing advantage of case marking as cue to sentence interpretation as compared to number marking.
Taken together, these findings suggest that the RM approach can only partially capture empirical data from German IWA. In processing complex syntactic structures, IWA are susceptible to the occurrence of the intervening subject in ORCs. The new findings reported in the thesis show that structural dissimilarity can modulate sentence comprehension in aphasia. Interestingly, IWA can override Minimality effects in ORCs and derive correct sentence meaning if the featural specifications of the constituents are maximally different, because they can more easily distinguish the moved object and the intervening subject given their reduced processing capacities. This dissertation presents new scientific knowledge that highlights how the syntactic theory of RM helps to uncover selective effects of morpho-syntactic features on sentence comprehension in aphasia, emphasizing the close link between assumptions from theoretical syntax and empirical research.
Metabolically active microbial communities are present in a wide range of subsurface environments. Techniques like enumeration of microbial cells, activity measurements with radiotracer assays, and the analysis of porewater constituents are currently being used to explore the subsurface biosphere, alongside molecular biological analyses. However, many of these techniques reach their detection limits due to low microbial activity and abundance. Direct measurements of microbial turnover not only face issues of insufficient sensitivity; they also provide information about only a single specific process, whereas in sediments many different processes can occur simultaneously. Therefore, the development of a new technique to measure total microbial activity would be a major improvement. A new tritium-based hydrogenase-enzyme assay appeared to be a promising tool to quantify total living biomass, even in low-activity subsurface environments. In this PhD project, total microbial biomass and microbial activity were quantified in different subsurface sediments using established techniques (cell enumeration and pore water geochemistry) as well as a new tritium-based hydrogenase enzyme assay. Using a large database of our own cell enumeration data from equatorial Pacific and North Pacific sediments together with published data, it was shown that the global geographic distribution of subseafloor sedimentary microbes varies between sites by 5 to 6 orders of magnitude and correlates with the sedimentation rate and distance from land. Based on these correlations, global subseafloor biomass was estimated to be 4.1 petagrams of carbon and ~0.6 % of Earth's total living biomass, which is significantly lower than previous estimates. Despite the massive reduction in biomass, the subseafloor biosphere is still an important player in global biogeochemical cycles.
To understand the relationship between microbial activity, abundance, and organic matter flux into the sediment, an expedition to the equatorial Pacific upwelling area and the North Pacific Gyre was carried out. Oxygen respiration rates in subseafloor sediments from the North Pacific Gyre, which are deposited at sedimentation rates of 1 mm per 1000 years, showed that microbial communities can survive for millions of years without a fresh supply of organic carbon. In contrast to the North Pacific Gyre, oxygen was completely depleted within the upper few millimeters to centimeters in sediments of the equatorial upwelling region due to a higher supply of organic matter and higher metabolic activity. Thus, the occurrence and variability of electron acceptors across depths and sites make the subsurface a complex environment for the quantification of total microbial activity. Recent studies showed that electron acceptor processes that were previously thought to thermodynamically exclude each other can occur simultaneously. Hence, in many cases a simple measure of the total microbial activity would be a better and more robust solution than assays for several specific processes, for example sulfate reduction rates or methanogenesis. Enzyme or molecular assays provide a more general approach, as they target key metabolic compounds. Since hydrogenase enzymes are ubiquitous in microbes, the recently developed tritium-based hydrogenase radiotracer assay is applied to quantify hydrogenase enzyme activity as a parameter of total living cell activity. Hydrogenase enzyme activity was measured in sediments from different locations (Lake Van, Barents Sea, Equatorial Pacific, and Gulf of Mexico). In sediment samples that contained nitrate, we found the lowest cell-specific enzyme activity, around 10^-5 nmol H2 cell^-1 d^-1.
With decreasing energy yield of the electron acceptor used, cell-specific hydrogenase activity increased, and maximum values of up to 1 nmol H2 cell^-1 d^-1 were found in samples with methane concentrations of >10 ppm. Although hydrogenase activity cannot be converted directly into a turnover rate of a specific process, cell-specific activity factors can be used to identify specific metabolisms and to quantify the metabolically active microbial population. In another study, on sediments from the Nankai Trough, microbial abundance and hydrogenase activity data show that both the habitat and the activity of subseafloor sedimentary microbial communities have been impacted by seismic activities. An increase in hydrogenase activity near the fault zone revealed that the microbial community was supplied with hydrogen as an energy source and that the microbes were specialized in hydrogen metabolism.
With the recent proliferation of sensors, cloud computing handles the data processing of many applications. Processing some of these data in the cloud, however, raises many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller that decides which applications should be admitted. Next, I look into acoustic applications and improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
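As a toy illustration of the admission-control idea, the following sketch trains a tabular Q-learning agent to admit or reject arriving applications on a small pool of compute slots. The state space, rewards, and dynamics are invented here for illustration only and are not taken from the thesis.

```python
import random

# Minimal tabular Q-learning sketch of an admission controller.
# State: number of busy compute slots; action: 0 = reject, 1 = admit.
# All quantities below (capacity, rewards, dynamics) are hypothetical.

CAPACITY = 4                 # hypothetical number of compute slots
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

q = {(s, a): 0.0 for s in range(CAPACITY + 1) for a in (0, 1)}

def step(state, action):
    """Admit (1) or reject (0) an arriving application; overload is penalized."""
    if action == 1:
        if state < CAPACITY:
            return state + 1, 1.0    # admitted: reward for a served application
        return state, -2.0           # overload attempt: penalty
    return max(state - 1, 0), 0.0    # reject: a running job may finish meanwhile

random.seed(0)
state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.choice((0, 1))
    else:
        action = max((0, 1), key=lambda a: q[(state, a)])
    nxt, reward = step(state, action)
    best_next = max(q[(nxt, 0)], q[(nxt, 1)])
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
    state = nxt

# With free capacity, the learned greedy policy should favor admitting.
print(max((0, 1), key=lambda a: q[(0, a)]))
```

The same tabular scheme generalizes to richer states (per-application resource demands, link quality) only in principle; practical controllers like the one in the thesis would need function approximation.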
In the second part, I work jointly with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) quality. My contribution focuses on the network part, where I study the relation between acoustic and network quality when selecting a subset of microphones for collecting audio data or a subset of optional jobs for processing these data; too many microphones or too many jobs can degrade quality through unnecessary delays. Hence, I develop RL solutions that select the subset of microphones under network constraints while the speaker is moving and still provide good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic quality of different applications. Accordingly, I develop RL solutions (single- and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
Education in the knowledge society faces many problems; in particular, the interaction between teacher and learner in social networking software is a key factor affecting learners’ learning and satisfaction (Prammanee, 2005), where “to teach is to communicate, to communicate is to interact, to interact is to learn” (Hefzallah, 2004, p. 48). Analyzing the relation between teacher-learner interaction on the one side and learning outcome and learner satisfaction on the other, some basic problems regarding a new learning culture using social networking software are discussed. Most educational institutions pay considerable attention to equipment and to emerging Information and Communication Technologies (ICTs) in learning situations. They try to incorporate ICT into their institutions as teaching and learning environments, expecting that doing so will improve the outcome of the learning process. Despite this, the learning outcome reported in most studies is very limited, because the expectations of self-directed learning are much higher than the reality. Findings from an empirical study investigating the role of teacher-learner interaction through the new digital medium wiki in higher education, and its relation to learning outcome and learner satisfaction, are presented together with recommendations on the necessity of pedagogical interactions in support of teaching and learning activities in wiki courses in order to improve the learning outcome. The conclusions show the necessity of significant changes in vocational teacher training programs for online teachers in order to meet the requirements of new digital media in coherence with a new learning culture. These changes have to address collaborative instead of individual learning, and the wiki as a tool for knowledge construction instead of a tool for gathering information.
The climate is a complex dynamical system involving interactions and feedbacks among different processes at multiple temporal and spatial scales. Although numerous studies have attempted to understand the climate system, studies investigating its multiscale characteristics are scarce, and the present set of techniques is limited in its ability to unravel the multi-scale variability of the climate system. It is entirely plausible that extreme events and abrupt transitions, which are of great interest to the climate community, result from interactions among processes operating at multiple scales. For instance, storms, weather patterns, seasonal irregularities such as El Niño, floods and droughts, and decades-long climate variations can be better understood and even predicted by quantifying their multi-scale dynamics. This makes a strong argument for unravelling the interactions and patterns of climatic processes at different scales. With this background, the thesis aims at developing measures to understand and quantify multi-scale interactions within the climate system.
In the first part of the thesis, I proposed two new methods, namely multi-scale event synchronization (MSES) and wavelet multi-scale correlation (WMC), to capture the scale-specific features present in climatic processes. The proposed methods were tested on various synthetic and real-world time series in order to check their applicability and replicability. The results indicate that both methods (WMC and MSES) capture the scale-specific associations between processes at different time scales in more detail than their traditional single-scale counterparts.
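For orientation, the following toy sketch shows the general idea behind a wavelet-based scale-wise correlation: two series are decomposed with a Haar wavelet, and their detail coefficients are correlated at each dyadic scale. This is an illustrative reconstruction of the concept, not the thesis implementation of WMC.

```python
import math, random

# Toy scale-wise correlation in the spirit of wavelet multi-scale
# correlation (WMC). Everything here is illustrative: Haar wavelet,
# synthetic data, and the chosen decomposition depth.

def haar_details(x):
    """Return Haar detail coefficients per dyadic scale (finest first)."""
    details, approx = [], list(x)
    while len(approx) >= 8:            # stop while enough points remain
        d = [(approx[i] - approx[i + 1]) / math.sqrt(2)
             for i in range(0, len(approx) - 1, 2)]
        approx = [(approx[i] + approx[i + 1]) / math.sqrt(2)
                  for i in range(0, len(approx) - 1, 2)]
        details.append(d)
    return details

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = math.sqrt(sum((u - ma) ** 2 for u in a))
    vb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (va * vb)

# Two series sharing a slow oscillation (period 64) but independent fast noise:
random.seed(1)
n = 512
slow = [math.sin(2 * math.pi * t / 64) for t in range(n)]
x = [s + 0.5 * random.gauss(0, 1) for s in slow]
y = [s + 0.5 * random.gauss(0, 1) for s in slow]

wmc = [pearson(dx, dy) for dx, dy in zip(haar_details(x), haar_details(y))]
for level, r in enumerate(wmc, start=1):
    print(f"level {level}: r = {r:+.2f}")
```

A single-scale correlation of x and y would blur these levels together; the scale-wise view separates the strong slow-oscillation association from the uncorrelated fast noise.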
In the second part of the thesis, the proposed multi-scale similarity measures were used to construct climate networks and investigate the evolution of spatial connections within climatic processes at multiple timescales. The proposed methods, WMC and MSES, together with complex network analysis, were applied to two different datasets.
In the first application, climate networks based on WMC were constructed for univariate global sea surface temperature (SST) data to identify and visualize SST patterns that develop very similarly over time and to distinguish them from those that have long-range teleconnections to other ocean regions. Further investigation of the climate networks on different timescales revealed (i) various regions of high variability and co-variability, and (ii) short- and long-range teleconnection regions with varying spatial distance. The outcomes of the study not only re-confirmed existing knowledge on the link between SST patterns such as the El Niño Southern Oscillation and the Pacific Decadal Oscillation, but also suggested new insights into the characteristics and origins of long-range teleconnections.
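The general recipe of correlation-based climate networks can be sketched as follows: each grid point is a node, two nodes are linked when the absolute correlation of their time series exceeds a threshold, and node degree then highlights strongly connected regions. The data, threshold, and network size below are synthetic and purely illustrative.

```python
import math, random

# Toy climate-network construction (not the thesis code): six "grid
# points", links from thresholded absolute Pearson correlation.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = math.sqrt(sum((u - ma) ** 2 for u in a))
    vb = math.sqrt(sum((v - mb) ** 2 for v in b))
    return cov / (va * vb)

random.seed(2)
n = 200
base = [random.gauss(0, 1) for _ in range(n)]
# Nodes 0-2 share a common signal ("teleconnected"), nodes 3-5 are independent:
series = [[b + 0.3 * random.gauss(0, 1) for b in base] for _ in range(3)]
series += [[random.gauss(0, 1) for _ in range(n)] for _ in range(3)]

THRESH = 0.5  # illustrative threshold; real studies use significance tests
adj = [[1 if i != j and abs(pearson(series[i], series[j])) > THRESH else 0
        for j in range(6)] for i in range(6)]
degree = [sum(row) for row in adj]
print(degree)   # nodes sharing the signal get high degree
```

In the study itself, the link criterion is the WMC value at a given wavelet scale rather than a plain correlation, which yields one network per timescale.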
In the second application, I used the newly developed non-linear MSES similarity measure to quantify the multivariate teleconnections between extreme Indian precipitation and the climatic patterns with the highest relevance for the Indian subcontinent. The results confirmed significant non-linear influences that were not well captured by traditional methods. Further, there was substantial variation in the strength and nature of the teleconnections across India and across time scales.
Overall, the results of the investigations conducted in this thesis strongly highlight the need to consider the multi-scale aspects of climatic processes, and the proposed methods provide a robust framework for quantifying these multi-scale characteristics.
The subject of this work is the investigation of universal scaling laws observed in coupled chaotic systems. Progress is made by replacing the chaotic fluctuations in the perturbation dynamics by stochastic processes. First, a continuous-time stochastic model for weakly coupled chaotic systems is introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck equation, scaling relations are derived and confirmed by results of numerical simulations. Next, the new effect of avoided crossing of Lyapunov exponents of weakly coupled disordered chaotic systems is described, which is qualitatively similar to energy level repulsion in quantum systems. Using the scaling relations obtained for the coupling sensitivity of chaos, an asymptotic expression for the distribution function of small spacings between Lyapunov exponents is derived and compared with results of numerical simulations. Finally, the synchronization transition in strongly coupled spatially extended chaotic systems is shown to resemble a continuous phase transition, with the coupling strength and the synchronization error as control and order parameter, respectively. Using results of numerical simulations and theoretical considerations in terms of a multiplicative-noise partial differential equation, the universality classes of the two observed types of transition are determined (Kardar-Parisi-Zhang equation with saturating term, directed percolation).
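The Lyapunov exponents discussed above can be computed numerically by evolving tangent vectors along the trajectory and reorthonormalizing them (Gram-Schmidt). The sketch below does this for two symmetrically coupled logistic maps; the specific map and coupling scheme are illustrative choices and not necessarily the models treated in the thesis.

```python
import math, random

# Numerical sketch: both Lyapunov exponents of two symmetrically
# coupled logistic maps x' = (1-e)f(x) + e f(y), y' = (1-e)f(y) + e f(x),
# via tangent-vector evolution with Gram-Schmidt reorthonormalization.

def f(x):  return 4.0 * x * (1.0 - x)   # fully chaotic logistic map
def df(x): return 4.0 - 8.0 * x

def lyapunov_pair(eps, steps=20000):
    random.seed(3)
    x, y = random.random(), random.random()
    v1, v2 = [1.0, 0.0], [0.0, 1.0]
    s1 = s2 = 0.0
    for _ in range(steps):
        # Jacobian of the coupled map at the current state:
        j11, j12 = (1 - eps) * df(x), eps * df(y)
        j21, j22 = eps * df(x), (1 - eps) * df(y)
        x, y = (1 - eps) * f(x) + eps * f(y), (1 - eps) * f(y) + eps * f(x)
        v1 = [j11 * v1[0] + j12 * v1[1], j21 * v1[0] + j22 * v1[1]]
        v2 = [j11 * v2[0] + j12 * v2[1], j21 * v2[0] + j22 * v2[1]]
        n1 = math.hypot(*v1)
        v1 = [c / n1 for c in v1]
        proj = v1[0] * v2[0] + v1[1] * v2[1]   # Gram-Schmidt step
        v2 = [v2[0] - proj * v1[0], v2[1] - proj * v1[1]]
        n2 = math.hypot(*v2)
        v2 = [c / n2 for c in v2]
        s1 += math.log(n1)
        s2 += math.log(n2)
    return s1 / steps, s2 / steps

# Uncoupled maps: both exponents approach ln 2; coupling splits them,
# which is the raw material of the "avoided crossing" effect.
print(lyapunov_pair(0.0))
print(lyapunov_pair(0.05))
```

Repeating this over a range of small eps values would trace out the singular (logarithmic) dependence of the exponents on the coupling strength that the thesis analyzes stochastically.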
Development and application of novel genetic transformation technologies in maize (Zea mays L.)
(2007)
Plant genetic engineering approaches are of pivotal importance to both basic and applied research. However, the rapid commercialization of genetically engineered crops, especially maize, raises several ecological and environmental concerns, largely related to transgene flow via pollination. In most crops, the plastid genome is inherited uniparentally in a maternal manner. Consequently, a trait introduced into the plastid genome would not be transferred to the sexually compatible relatives of the crop via pollination. Thus, besides its several other advantages, plastid transformation provides transgene containment and is therefore an environmentally friendly approach for the genetic engineering of crop plants. Reliable in vitro regeneration systems allowing repeated rounds of regeneration are of utmost importance for the development of plastid transformation technologies in higher plants. While being among the world’s major food crops, cereals are among the most difficult plants to handle in tissue culture, which severely limits genetic engineering approaches. In maize, immature zygotic embryos provide the predominantly used material for establishing regeneration-competent cell or callus cultures for genetic transformation experiments. The procedures involved are demanding, laborious and time-consuming, and depend on greenhouse facilities. In one part of this work, a novel tissue culture and plant regeneration system was developed that uses maize leaf tissue and is thus independent of zygotic embryos and greenhouse facilities. Protocols were also established for (i) the efficient induction of regeneration-competent callus from maize leaves in the dark, (ii) inducing highly regenerable callus in the light, and (iii) the use of leaf-derived callus for the generation of stably transformed maize plants. Furthermore, several selection methods were tested for developing a plastid transformation system in maize. However, stably plastid-transformed maize plants could not yet be recovered. 
Possible explanations as well as suggestions for future attempts at developing plastid transformation in maize are discussed. Nevertheless, these results represent a first essential step towards developing chloroplast transformation technology for maize, a method that requires multiple rounds of plant regeneration and selection to obtain genetically stable transgenic plants. In order to apply the newly developed transformation system to the metabolic engineering of carotenoid biosynthesis, the daffodil phytoene synthase (PSY) gene was integrated into the maize genome. The results illustrate that expression of a recombinant PSY significantly increases carotenoid levels in leaves. The beta-carotene (pro-vitamin A) amounts in leaves of transgenic plants were increased by ~21% in comparison to the wild type. These results provide evidence that maize has significant potential to accumulate higher amounts of carotenoids, especially beta-carotene, through transgenic expression of phytoene synthases. Finally, progress was made towards developing transformation technologies in Peperomia (Piperaceae) by establishing an efficient leaf-based regeneration system. In addition, factors determining plastid size and number in Peperomia, whose species display great interspecific variation in chloroplast size and number per cell, were investigated. The results suggest that organelle size and number are regulated in a tissue-specific manner rather than in dependence on the plastid type. Investigating plastid morphology in Peperomia species with giant chloroplasts, plasmatic connections between chloroplasts (stromules) were observed under the light microscope in the absence of tissue fixation or GFP overexpression, demonstrating the relevance of these structures in vivo. 
Furthermore, bacteria-like microorganisms were discovered within Peperomia cells, suggesting that this genus provides an interesting model not only for studying plastid biology but also for investigating plant-microbe interactions.
The objective and motivation behind this research is to provide communities of deaf and functionally illiterate users with applications offering easy-to-use interfaces that enable them to work without human assistance. Although recent years have witnessed technological advancements, the availability of technology does not ensure accessibility to information and communication technologies (ICT). The extensive use of text, from menus to document contents, means that deaf or functionally illiterate users cannot access services implemented in most computer software. Consequently, most existing computer applications pose an accessibility barrier to those who are unable to read fluently. Online technologies intended for such groups should be developed in continuous partnership with primary users and include a thorough investigation into their limitations, requirements and usability barriers. In this research, I investigated existing tools in voice, web and other multimedia technologies to identify learning gaps and explored ways to enhance information literacy for deaf and functionally illiterate users. I worked on the development of user-centered interfaces to increase the capabilities of deaf and low-literacy users by enhancing lexical resources and by evaluating several multimedia interfaces for them. The interface of the platform-independent Italian Sign Language (LIS) Dictionary was developed to enhance the lexical resources for deaf users. The Sign Language Dictionary accepts Italian lemmas as input and provides their representation in the Italian Sign Language as output. The dictionary contains 3082 signs as a set of Avatar animations, in which each sign is linked to a corresponding Italian lemma. I integrated the LIS lexical resources with the MultiWordNet (MWN) database to form the first LIS MultiWordNet (LMWN). 
LMWN contains information about lexical relations between words, semantic relations between lexical concepts (synsets), correspondences between Italian and sign language lexical concepts, and semantic fields (domains). The approach enhances deaf users’ understanding of written Italian and shows that a relatively small lexicon can cover a significant portion of MWN. The integration of LIS signs with MWN made it a useful tool for computational linguistics and natural language processing. The rule-based translation process from written Italian text to LIS was transformed into a service-oriented system. The translation process is composed of various modules, including a parser, a semantic interpreter, a generator, and a spatial allocation planner. This translation procedure was implemented in the Java Application Building Center (jABC), a framework for extreme model-driven design (XMDD). The XMDD approach focuses on bringing software development closer to conceptual design, so that the functionality of a software solution can be understood by someone unfamiliar with programming concepts. The transformation addresses the heterogeneity challenge and enhances the re-usability of the system. To enhance the e-participation of functionally illiterate users, two detailed studies were conducted in the Republic of Rwanda. In the first study, a traditional (textual) interface was compared with a virtual character-based interactive interface. The study helped to identify usability barriers, and users evaluated these interfaces according to three fundamental areas of usability, i.e. effectiveness, efficiency and satisfaction. In another study, we developed four different interfaces to analyze the usability and the effects of online assistance (consistent help) for functionally illiterate users, comparing different help modes, including textual, vocal and virtual-character modes, with respect to the performance of semi-literate users. 
In our newly designed interfaces, the instructions were automatically translated into Swahili. All interfaces were evaluated on the basis of task accomplishment, time consumption, System Usability Scale (SUS) rating, and the number of times help was requested. The results show that the performance of semi-literate users improved significantly when using the online assistance. The dissertation thus introduces a new development approach in which virtual characters are used as additional support for barely literate or naturally challenged users. Such components enhanced the application’s utility by offering a variety of services, such as translating contents into the local language, providing additional vocal information, and performing automatic translation from text to sign language. Obviously, there is no single design solution that fits all users in the underlying domain. Context sensitivity, literacy and mental abilities are key factors on which I concentrated, and the results emphasize that computer interfaces must be based on a thoughtful definition of target groups, purposes and objectives.
In the present work, we study wave phenomena in strongly nonlinear lattices. Such lattices are characterized by the absence of classical linear waves. We demonstrate that compactons – strongly localized solitary waves with tails decaying faster than exponential – exist and that they play a major role in the dynamics of the system under consideration. We investigate compactons in different physical setups. One part deals with lattices of dispersively coupled limit cycle oscillators, which find various applications in natural sciences such as Josephson junction arrays or coupled Ginzburg-Landau equations. Another part deals with Hamiltonian lattices. Here, a prominent example in which compactons can be found is the granular chain. In the third part, we study systems which are related to the discrete nonlinear Schrödinger equation describing, for example, coupled optical wave-guides or the dynamics of Bose-Einstein condensates in optical lattices. Our investigations are based on a numerical method to solve the traveling wave equation. This results in a quasi-exact solution (up to numerical errors), which is the compacton. Another ansatz employed throughout this work is the quasi-continuous approximation, where the lattice is described by a continuous medium. Here, compactons are found analytically, but they are defined on a truly compact support. Remarkably, both ways give similar qualitative and quantitative results. Additionally, we study the dynamical properties of compactons by means of numerical simulation of the lattice equations. In particular, we concentrate on their emergence from physically realizable initial conditions as well as on their stability under collisions. We show that the collisions are not exactly elastic but that a small part of the energy remains at the location of the collision. In finite lattices, this remaining part will then trigger a multiple scattering process resulting in a chaotic state.
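For orientation, a classical continuum example of a compacton with truly compact support is the solitary-wave solution of the Rosenau-Hyman K(2,2) equation (given here only as an analogue of the quasi-continuous picture; the lattice compactons studied in this work have tails decaying faster than exponentially rather than vanishing exactly):

```latex
% Rosenau-Hyman K(2,2) equation and its compacton solution:
u_t + \left(u^2\right)_x + \left(u^2\right)_{xxx} = 0, \qquad
u_c(x,t) =
\begin{cases}
\dfrac{4c}{3}\,\cos^2\!\left(\dfrac{x-ct}{4}\right), & |x-ct| \le 2\pi,\\[6pt]
0, & \text{otherwise},
\end{cases}
```

where c is the propagation speed; the width of the support is independent of the amplitude, a hallmark of compactons.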
In the last decade, the number and dimensions of catastrophic flooding events in the Niger River Basin (NRB) have markedly increased. Despite the devastating impact of the floods on the population and the mainly agriculturally based economy of the riverine nations, awareness of the hazards in policy and science is still low. The urgency of this topic and the existing research deficits are the motivation for the present dissertation.
The thesis is an initial detailed assessment of the increasing flood risk in the NRB. The research strategy is based on four questions regarding (1) features of the change in flood risk, (2) reasons for the change in the flood regime, (3) expected changes of the flood regime given climate and land use changes, and (4) recommendations from previous analysis for reducing the flood risk in the NRB.
The question examining the features of change in the flood regime is answered by means of statistical analysis. Trend, correlation, changepoint, and variance analyses show that, in addition to the factors exposure and vulnerability, the hazard itself has also increased significantly in the NRB, in accordance with the decadal climate pattern of West Africa. The northern arid and semi-arid parts of the NRB are those most affected by the changes.
As potential reasons for the increase in flood magnitudes, climate and land use changes are attributed by means of a hypothesis-testing framework. Two different approaches, based on either data analysis or simulation, lead to similar results, showing that the influence of climatic changes is generally larger compared to that of land use changes. Only in the dry areas of the NRB is the influence of land use changes comparable to that of climatic alterations.
Future changes of the flood regime are evaluated using modelling results. First, ensembles of statistically and dynamically downscaled climate models based on different emission scenarios are analyzed. The models agree on a distinct increase in temperature; the precipitation signal, however, is not coherent. The climate scenarios are used to drive an eco-hydrological model. The influence of climatic changes on the flood regime is uncertain due to the unclear precipitation signal, but in general higher flood peaks are expected. In a next step, the effects of land use changes are integrated into the model. Different scenarios show that regreening might help to reduce flood peaks, whereas an expansion of agriculture might enhance the flood peaks in the NRB. As with the analysis of observed changes in the flood regime, the impacts of climate and land use changes in the future scenarios are also most severe in the dry areas of the NRB.
In order to answer the final research question, the results of the above analyses are integrated into a range of recommendations for science and policy on how to reduce flood risk in the NRB. The main recommendations include a stronger consideration of the enormous natural climate variability in the NRB, a focus on so-called “no-regret” adaptation strategies which account for high uncertainty, and a stronger consideration of regional differences. Regarding the prevention and mitigation of catastrophic flooding, the most vulnerable and sensitive areas in the basin, the arid and semi-arid Sahelian and Sudano-Sahelian regions, should be prioritized. Finally, an active, science-based and science-guided flood policy is recommended. The enormous population growth in the NRB, in connection with the expected deterioration of environmental and climatic conditions, is likely to increase the region's vulnerability to flooding. A smart and sustainable flood policy can help mitigate these negative impacts of flooding on the development of riverine societies in West Africa.
The Tibetan Plateau is the largest elevated landmass in the world and profoundly influences atmospheric circulation patterns such as the Asian monsoon system. This area has therefore come increasingly into the focus of palaeoenvironmental studies. This thesis evaluates the applicability of organic biomarkers for palaeolimnological purposes on the Tibetan Plateau, with a focus on biomarkers derived from aquatic macrophytes. Submerged aquatic macrophytes must be considered to contribute significantly to the sediment organic matter due to their high abundance in many Tibetan lakes. Because of their carbon metabolism, they can show highly 13C-enriched biomass; for the interpretation of δ13C values in sediment cores it is therefore crucial to understand to what extent aquatic macrophytes contribute to the isotopic signal of the sediments in Tibetan lakes and how variations can be explained in a palaeolimnological context. Additionally, the high abundance of macrophytes makes them interesting as potential recorders of lake water δD. Hydrogen isotope analysis of biomarkers is a rapidly evolving field for reconstructing past hydrological conditions and is therefore of special relevance on the Tibetan Plateau due to the direct linkage between variations in monsoon intensity and changes in regional precipitation/evaporation balances. A set of surface sediment and aquatic macrophyte samples from the central and eastern Tibetan Plateau was analysed for the composition as well as the carbon and hydrogen isotopes of n-alkanes. It was shown how variable the δ13C values of bulk organic matter and leaf lipids can be in submerged macrophytes, even within a single species, and how strongly these parameters affect the corresponding sediments. The contribution of the macrophytes, estimated by means of a binary isotopic model, was calculated to be up to 60% (mean: 40%) of total organic carbon and up to 100% (mean: 66%) of mid-chain n-alkanes. 
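The binary (two-endmember) isotopic model used for this apportionment amounts to a linear mixing calculation; a minimal sketch, with invented endmember values, is:

```python
# Sketch of the two-endmember ("binary") isotope mixing calculation used
# to apportion sediment organic carbon between macrophyte and other
# sources. The delta values below are invented for illustration and are
# not measurements from the thesis.

def endmember_fraction(delta_sample, delta_source, delta_other):
    """Fraction of the sample attributable to the `delta_source` endmember."""
    return (delta_sample - delta_other) / (delta_source - delta_other)

# Hypothetical d13C values (permil):
d13c_macrophyte = -14.0   # 13C-enriched submerged macrophyte biomass
d13c_other      = -28.0   # other (e.g. terrestrial/planktonic) organic matter
d13c_sediment   = -22.4   # measured bulk sediment value

f = endmember_fraction(d13c_sediment, d13c_macrophyte, d13c_other)
print(f"macrophyte contribution: {f:.0%}")
```

The same formula applied compound-specifically (e.g. to mid-chain n-alkanes instead of bulk organic carbon) yields the source apportionment of individual biomarkers.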
Hydrogen isotopes of n-alkanes turned out to record the δD of meteoric water of the summer precipitation. The apparent enrichment factor between water and n-alkanes was in the range of previously reported values (≈-130‰) at the most humid sites, but smaller (average: -86‰) at sites with a negative moisture budget. This indicates an influence of evaporation and evapotranspiration on the δD of the source water for aquatic and terrestrial plants. The offset between the δD of mid- and long-chain n-alkanes was close to zero in most of the samples, suggesting that lake water as well as soil and leaf water are affected to a similar extent by these effects. To apply biomarkers in a palaeolimnological context, the aliphatic biomarker fraction of a sediment core from Lake Koucha (34.0° N; 97.2° E; eastern Tibetan Plateau) was analysed for the concentrations, δ13C and δD values of individual compounds. Before ca. 8 cal ka BP, the lake was dominated by aquatic macrophyte-derived mid-chain n-alkanes, while after 6 cal ka BP high concentrations of a C20 highly branched isoprenoid compound indicate a predominance of phytoplankton. These two principally different states of the lake were linked by a transition period with high abundances of microbial biomarkers. δ13C values were relatively constant for long-chain n-alkanes, while mid-chain n-alkanes showed variations between -23.5 and -12.6‰. The highest values were observed for the assumed period of maximum macrophyte growth during the late glacial and for the phytoplankton maximum during the middle and late Holocene. The enriched values were therefore interpreted as being caused by carbon limitation, which in turn was induced by high macrophyte and primary productivity, respectively. Hydrogen isotope signatures of mid-chain n-alkanes were shown to be able to track a previously deduced episode of reduced moisture availability between ca. 10 and 7 cal ka BP, indicated by a 20‰ shift towards higher δD values. 
Indications of cooler episodes at 6.0, 3.1 and 1.8 cal ka BP were gained from drops in biomarker concentrations, especially microbially derived hopanoids, and from coincident shifts towards lower δ13C values. These episodes correspond well with cool events reported from other locations on the Tibetan Plateau as well as in the Northern Hemisphere. To conclude, the study of recent sediments and plants improved the understanding of the factors affecting the composition and isotopic signatures of aliphatic biomarkers in sediments. The concentrations and isotopic signatures of the biomarkers in Lake Koucha could be interpreted in a palaeolimnological context and contribute to the knowledge of the history of the lake. Aquatic macrophyte-derived mid-chain n-alkanes were especially useful, due to their high abundance in many Tibetan lakes and their ability to record major changes in lake productivity and palaeo-hydrological conditions. They therefore have the potential to contribute to a fuller understanding of past climate variability in this key region for atmospheric circulation systems.
Berlin has a unique club landscape in amateur and semi-professional football, in which clubs once founded by Turkish migrants hold a firm place. Football offers a social space for young people of different cultural, ethnic and religious backgrounds, in which groups are formed in order to compete against one another. Football likewise gives the individual the opportunity to subject the validity and relevance of prejudices and of common stereotypes about other groups to constant testing in everyday play. Football players can move between multicultural and mono-ethnic group constellations, in some cases also in transnational constellations, thereby contributing substantially to the meaning they give to their own social belonging, which emerges from the tension between patterns of self-perception and perception by others. As a consequence, mechanisms of recognition are constituted in this space.
This dissertation deals with the everyday life of adolescent amateur and semi-professional football players of Turkish origin (delikanli), as well as of other social actors in the Turkish football world, such as “older” players (agbi) and coaches (hoca). The main concern of the study was the reconstruction of collective patterns of perception, interpretation and action among members of Turkish football clubs in general, and of their self-presentation as well as their perception of the “others” in particular. The study sought to trace whether and to what extent traditional social behaviour patterns of the chosen group are reflected in this technically regulated and strongly competition-oriented space of action and regulate the reciprocal relations between the “self” and the “others”. In doing so, the relevance of origin-related stereotyping and prejudice in the collective constitution of self-perception and the understanding of others was reconstructed within the particular social field (Bourdieu, 2001) of football.
The study further examined, on the one hand, what role Turkish football clubs play in the emergence of social belonging to urban neighbourhoods in Berlin and what kinds of mechanisms of social integration they establish within these clubs. On the other hand, it asked to what extent they contribute to social cohesion between diverse cultures. It was therefore examined whether and to what extent the negatively connoted ethnocentric perception of “difference” (Bielefeld, 1998), produced as a social construct between autochthonous and allochthonous groups, undergoes a constructive change through the engagement of club actors.
The overarching aim of these research questions was to gain a well-founded understanding of the role of Turkish football clubs as social mechanisms and to examine how they function in the constitution of adaptation strategies in this social field. This role was considered in detail through the conceptualization of patterns of social positioning, understood as a structure of interpretations of the everyday that regulates individual and collective patterns of action and, implicitly, patterns of understanding others as well as of othering in the migration context. A reconstruction of these patterns of social positioning offers an in-depth sociological examination of this group of participants, which also sheds light on the meaning and understanding of ethnic belonging for the latter.
In addition to extensive field observation, this qualitative study conducted a total of ten group discussions (Bohnsack, 2004) with players of various clubs within their teams about shared everyday experiences; these were recorded and interpreted using a social-scientific hermeneutic procedure (Soeffner, 2004). With other club members, i.e. coaches (hoca), chairpersons, managers and sponsors, ten narrative and seven biographical individual interviews as well as seven expert interviews were also conducted. Their analysis makes it possible to reconstruct the role of these members as well as the mechanisms of authority and collectively constituted behaviour patterns operating within the entire club group. The aim was to illuminate the social network as a whole, i.e. the schemata of relations within Berlin's Turkish football clubs.
The study draws on two theoretical perspectives. On the one hand, lifeworld analysis (Schütz and Luckmann, 1979, 1990) is applied to reconstruct the social legacy of the socially constituted label “people with a migration background” and to examine the influence of this social reproduction on the actors' patterns of perception, interpretation and action. On the other hand, the social effect of actual everyday schemata of experience in the social field of football on the actors' self-positioning is worked out by means of Goffman's frame analysis (Goffman, 1980).
This thesis explores the variation in coreference patterns across language modes (i.e., spoken and written) and text genres. The significance of research on variation in language use has been emphasized in a number of linguistic studies. For instance, Biber and Conrad [2009] state that “register/genre variation is a fundamental aspect of human language” and “Given the ubiquity of register/genre variation, an understanding of how linguistic features are used in patterned ways across text varieties is of central importance for both the description of particular languages and the development of cross-linguistic theories of language use” [p. 23].
We examine the variation across genres with the primary goal of contributing to the body of knowledge on the description of language use in English. On the computational side, we believe that incorporating linguistic knowledge into learning-based systems can boost the performance of automatic natural language processing systems, particularly for non-standard texts. Therefore, in addition to their descriptive value, the linguistic findings we provide in this study may prove helpful for improving the performance of automatic coreference resolution, which is essential for good text understanding and beneficial for several downstream NLP applications, including machine translation and text summarization.
In particular, we study a genre of texts formed of conversational interactions on the well-known social media platform Twitter. Two factors motivate us: First, Twitter conversations are realized in written form but resemble spoken communication [Scheffler, 2017], and therefore they form an atypical genre for the written mode. Second, while Twitter texts are a complicated genre for automatic coreference resolution, their widespread use in the digital sphere makes them highly relevant for applications that seek to extract information or sentiments from users’ messages. Thus, we are interested in discovering more about the linguistic and computational aspects of coreference in Twitter conversations. For this purpose, we first created a corpus of such conversations and annotated it for coreference. We are interested not only in the coreference patterns but also in the overall discourse behavior of Twitter conversations. To address this, we annotated coherence relations on the compiled corpus in addition to the coreference relations. The corpus is available online in a newly developed form that allows the tweets to be separated from their annotations.
This study consists of three empirical analyses in which we independently apply corpus-based, psycholinguistic and computational approaches to the investigation of variation in coreference patterns in a complementary manner. (1) We first make a descriptive analysis of variation across genres through a corpus-based study. We investigate the linguistic aspects of nominal coreference in Twitter conversations and determine how this genre relates to other text genres in the spoken and written modes. In addition to variation across genres, the differences between the spoken and written modes have been a focus of linguistic research since Woolbert [1922]. (2) In order to investigate whether the language mode alone has any effect on coreference patterns, we carry out a crowdsourced experiment and analyze the patterns in the same genre for both spoken and written modes. (3) Finally, we explore the potential of domain adaptation of automatic coreference resolution (ACR) for the conversational Twitter data. In order to answer the question of how the genre of Twitter conversations relates to other genres in the spoken and written modes with respect to coreference patterns, we employ a state-of-the-art neural ACR model [Lee et al., 2018] to examine whether ACR on Twitter conversations benefits from mode-based separation in out-of-domain training data.
Although it has become common practice to build applications based on the reuse of existing components or services, technical complexity and semantic challenges constitute barriers to ensuring a successful and wide reuse of components and services. In the geospatial application domain, the barriers are self-evident due to heterogeneous geographic data, a lack of interoperability and complex analysis processes.
Constructing workflows manually and discovering proper services and data that match user intents and preferences is difficult and time-consuming especially for users who are not trained in software development. Furthermore, considering the multi-objective nature of environmental modeling for the assessment of climate change impacts and the various types of geospatial data (e.g., formats, scales, and georeferencing systems) increases the complexity challenges.
Automatic service composition approaches that provide semantics-based assistance in the process of workflow design have proven to be a solution to overcome these challenges and have become a frequent demand especially by end users who are not IT experts. In this light, the major contributions of this thesis are:
(i) Simplification of service reuse and workflow design of applications for climate impact analysis by following the eXtreme Model-Driven Development (XMDD) paradigm.
(ii) Design of a semantic domain model for climate impact analysis applications that comprises specifically designed services, ontologies that provide domain-specific vocabulary for referring to types and services, and the input/output annotation of the services using the terms defined in the ontologies.
(iii) Application of a constraint-driven method for the automatic composition of workflows for analyzing the impacts of sea-level rise. The application scenario demonstrates the impact of domain modeling decisions on the results and the performance of the synthesis algorithm.
Sinkholes and depressions are typical landforms of karst regions. They pose a considerable natural hazard to infrastructure, agriculture, the economy and human life in affected areas worldwide. The physico-chemical processes of sinkhole and depression formation are manifold, ranging from dissolution and material erosion in the subsurface to mechanical subsidence or failure of the overburden. This thesis addresses the mechanisms leading to the development of sinkholes and depressions using complementary methods: remote sensing, distinct element modelling and near-surface geophysics.
In the first part, detailed information about the (hydro-)geological background, ground structures, morphologies and spatio-temporal development of sinkholes and depressions in a very active karst area at the Dead Sea is derived from satellite image analysis, photogrammetry and geologic field surveys. There, clusters of an increasing number of sinkholes have been developing since the 1980s within large-scale depressions and are distributed over different kinds of surface materials: clayey mud, sandy-gravel alluvium and lacustrine evaporites (salt). The morphology of the sinkholes differs depending on the material in which they form: sinkholes in sandy-gravel alluvium and salt are generally deeper and narrower than sinkholes in the interbedded evaporite and mud deposits. From repeated aerial surveys, collapse-precursory features such as small-scale subsidence, individual holes and cracks are identified in all materials. The analysis sheds light on the ongoing hazardous subsidence process, which is driven by the base-level fall of the Dead Sea and by the dynamic formation of subsurface water channels.
In the second part of this thesis, a novel 2D distinct element geomechanical modelling approach with the software PFC2D-V5 is presented for simulating the growth of individual and multiple cavities and the development of sinkholes and large-scale depressions. The approach involves a stepwise material removal technique in void spaces of arbitrarily shaped geometries and is benchmarked against analytical and boundary element method solutions for circular cavities. Simulated compression and tension tests are used to calibrate model parameters with bulk rock properties for the materials of the field site. The simulations show that cavity and sinkhole evolution is controlled by the material strength of both the overburden and the cavity host material, the depth and relative speed of the cavity growth, and the stress pattern that develops in the subsurface. The major findings are: (1) A progressively deepening differential subrosion with variable growth speed yields a more fragmented stress pattern with stress interaction between the cavities; it favours multiple sinkhole collapses and nesting within large-scale depressions. (2) Low-strength materials do not support large cavities in the material removal zone, and subsidence is mainly characterised by gradual sagging into the material removal zone with synclinal bending. (3) High-strength materials support the formation of large cavities, leading to sinkhole formation by sudden collapse of the overburden. (4) Large-scale depressions form either by coalescence of collapsing holes, block-wise brittle failure, or gradual sagging and lateral widening.
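The classical analytical reference for such circular-cavity benchmarks is the Kirsch solution for a circular hole in an infinite elastic plate under uniaxial far-field stress. Whether this is the exact solution used in the thesis is an assumption, but it illustrates the type of benchmark (notation ours: $\sigma_\infty$ far-field stress, $a$ hole radius, $(r,\theta)$ polar coordinates with $\theta$ measured from the loading direction):

```latex
% Kirsch tangential stress around a circular hole (illustrative benchmark, not the thesis's own result)
\sigma_{\theta\theta}(r,\theta)
  = \frac{\sigma_\infty}{2}\left[\left(1+\frac{a^{2}}{r^{2}}\right)
    - \left(1+\frac{3a^{4}}{r^{4}}\right)\cos 2\theta\right]
% At the cavity wall (r = a): \sigma_{\theta\theta} = \sigma_\infty (1 - 2\cos 2\theta),
% i.e. the well-known stress concentration factor of 3 at \theta = \pm\pi/2.
```

A distinct element model that reproduces this hoop-stress distribution around a circular void can be considered consistent with linear elasticity in the pre-failure regime.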
The distinct element based approach is compared to results from remote sensing and geophysics at the field site. The numerical simulation outcomes are generally in good agreement with derived morphometrics, documented surface and subsurface structures as well as seismic velocities. Complementary findings on the subrosion process are provided from electric and seismic measurements in the area.
Based on the novel combination of methods in this thesis, a generic model of karst landform evolution with focus on sinkhole and depression formation is developed. A deepening subrosion system related to preferential flow paths evolves and creates void spaces and subsurface conduits. This subsequently leads to hazardous subsidence, and the formation of sinkholes within large-scale depressions. Finally, a monitoring system for shallow natural hazard phenomena consisting of geodetic and geophysical observations is proposed for similarly affected areas.
Using biochemical and biotechnological approaches, the aim of this work was to understand the mechanism of protein-glucan interactions in the regulation and control of starch degradation. Although starch degradation starts with a phosphorylation process, the mechanisms by which this process controls and adjusts starch degradation are not yet fully understood. Phosphorylation is performed mainly by the two dikinases α-glucan, water dikinase (GWD) and phosphoglucan, water dikinase (PWD). GWD and PWD phosphorylate the starch granule surface and thereby stimulate starch degradation by hydrolytic enzymes. Despite these important roles of GWD and PWD, the biochemical processes by which these enzymes regulate and adjust the rate of phosphate incorporation into starch during degradation have so far not been understood. Recently, some proteins were found to be associated with the starch granule. Two of these proteins are named Early Starvation Protein 1 (ESV1) and its homologue Like-Early Starvation Protein 1 (LESV). Both were supposed to be involved in the control of starch degradation, but their function has remained unclear until now. To understand how ESV1- and LESV-glucan interactions are regulated and affect starch breakdown, the influence of the ESV1 and LESV proteins on the phosphorylating enzymes GWD and PWD and on the hydrolysing enzymes ISA, BAM, and AMY was analyzed. The analysis also located LESV and ESV1 in the chloroplast stroma of Arabidopsis. Mass spectrometry data identified the ESV1 and LESV proteins as products of the At1g42430 and At3g55760 genes with predicted masses of ~50 kDa and ~66 kDa, respectively. The ChloroP program predicted that ESV1 lacks a chloroplast transit peptide, but predicted the first 56 amino acids of the N-terminal region of LESV as a chloroplast transit peptide. Usually, the transit peptide is processed during transport of the protein into the plastid.
Given that this processing is critical, two forms each of ESV1 and LESV were generated and purified: a full-length form and a truncated form lacking the transit peptide, named (ESV1 and tESV1) and (LESV and tLESV), respectively. Both protein forms were included in the analysis assays, but only slight differences in glucan binding and protein action were observed between ESV1 and tESV1, while no differences in glucan binding or in the effect on GWD and PWD action were observed between LESV and tLESV. The results revealed that the presence of the N-terminus does not massively alter the action of ESV1 or LESV. Therefore, only the data for the ESV1 and tLESV forms were used to explain the function of the two proteins.
The analysis of the results revealed that the LESV and ESV1 proteins bind strongly to the starch granule surface. Furthermore, after incubation with starches, not all of either protein was released when the granules were washed with 2% [w/v] SDS, indicating binding to deeper layers of the granule surface. This finding is supported by the observation that both proteins still bound to starches after the free glucan chains had been removed from the surface by the action of ISA and BAM. Although both proteins are capable of binding to the starch structure, only LESV showed binding to amylose, whereas no such binding was observed for ESV1. The alteration of glucan structures at the starch granule surface is essential for the incorporation of phosphate into the starch granule, as the phosphorylation of starch by GWD and PWD increased after removal of the free glucan chains by ISA. Furthermore, PWD proved capable of phosphorylating starch without prephosphorylation by GWD.
Biochemical studies on the protein-glucan interactions of LESV and ESV1 with different types of starch revealed a potentially important mechanism for regulating and adjusting the phosphorylation process: the binding of LESV and ESV1 alters the glucan structures of starches and thereby modulates the action of the dikinases (GWD and PWD), rendering them better able to control the rate of starch degradation. While ESV1 alone had an antagonistic effect on PWD action, in that PWD action decreased without prephosphorylation by GWD and increased after prephosphorylation by GWD (Chapter 4), PWD action was significantly reduced, with or without prephosphorylation by GWD, in the presence of ESV1 either separately or together with LESV (Chapter 5). The presence of LESV and ESV1 together had the same effect on the phosphorylation process as each protein alone, so it is difficult to distinguish their specific functions. Moreover, no interactions were detected between LESV and ESV1, between either of them and GWD or PWD, or between GWD and PWD, indicating that these proteins act independently. It was also observed that the alteration of the starch structure by LESV and ESV1 plays a role in adjusting starch degradation rates not only by affecting the dikinases but also by affecting some of the hydrolysing enzymes, since the presence of LESV and ESV1 was found to reduce, but not abolish, the action of BAM.
The main objective of this dissertation is to analyse the prerequisites, expectations, apprehensions, and attitudes of students studying computer science who intend to gain a bachelor's degree. The research also investigates the students' learning styles according to the Felder-Silverman model. These investigations are part of an attempt to reduce the dropout rate among students and to suggest a better learning environment.
The first investigation starts with a survey conducted at the computer science department of the University of Baghdad to investigate the attitudes of computer science students in an environment dominated by women, showing the differences in attitudes between male and female students in different study years. Students are admitted to university studies via a centrally controlled admission procedure that depends mainly on their final score at school. This leads to a high percentage of students studying subjects they do not want. Our analysis shows that 75% of the female students do not regret studying computer science although it was not their first choice. According to statistics from previous years, women manage to succeed in their studies and often graduate at the top of their class. We finish with a comparison of attitudes between freshman students of two different cultures and two different university enrolment procedures (the University of Baghdad in Iraq and the University of Potsdam in Germany), each with an opposite gender majority.
The second investigation took place at the department of computer science at the University of Potsdam in Germany and analyzes the learning styles of students in the three major fields of study offered by the department (computer science, business informatics, and computer science teaching). Investigating the differences in learning styles between the students of these fields, who usually take some joint courses, is important in order to identify which changes to teaching methods are necessary to address these different students. It was a two-stage study using two questionnaires: the main one is based on the Index of Learning Styles Questionnaire of B. A. Solomon and R. M. Felder, and the second investigated the students' attitudes towards the findings of their personal first questionnaire. Our analysis shows differences in learning-style preferences between male and female students of the different fields of study, as well as differences between students of the different specialties (computer science, business informatics, and computer science teaching).
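For readers unfamiliar with the instrument, the usual scoring of the Index of Learning Styles can be sketched as follows. This is an illustrative reconstruction, not code from the dissertation: the assumption is the published scheme in which the 44 items cycle through the four dimensions and each dimension's score is the count of "a" answers minus the count of "b" answers, yielding odd scores between -11 and +11.

```python
# Illustrative sketch of ILS scoring (assumed standard scheme, not the thesis's code).
# Assumption: item 1 -> active/reflective, item 2 -> sensing/intuitive,
# item 3 -> visual/verbal, item 4 -> sequential/global, then the cycle repeats,
# so each dimension receives 11 of the 44 items.

DIMENSIONS = ["active/reflective", "sensing/intuitive",
              "visual/verbal", "sequential/global"]

def ils_scores(answers):
    """answers: dict {item_number (1..44): 'a' or 'b'} -> per-dimension scores."""
    scores = {d: 0 for d in DIMENSIONS}
    for item, choice in answers.items():
        dim = DIMENSIONS[(item - 1) % 4]      # which dimension this item probes
        scores[dim] += 1 if choice == "a" else -1
    return scores

# Example: a respondent answering 'a' on every item scores +11 on each dimension.
all_a = {i: "a" for i in range(1, 45)}
print(ils_scores(all_a))
```

The sign of each score indicates the preferred pole of the dimension and its magnitude the strength of the preference, which is how group differences such as those reported above are typically compared.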
The third investigation looks closely into the difficulties, issues, apprehensions and expectations of freshman students of computer science. The study took place at the computer science department at the University of Potsdam with a volunteer sample of students. The goal is to determine and discuss the difficulties and issues they face in their studies that may lead them to consider dropping out, changing their field of study, or changing university. The research followed the same sample of students (with business informatics students in the majority) through more than three semesters. Difficulties and issues during the studies were documented, as were the students' attitudes, apprehensions, and expectations. Some professors' and lecturers' opinions and solutions to some of the students' problems were also documented. Many participants had apprehensions and difficulties, especially regarding informatics subjects. Some business informatics participants began to think about changing university, in particular when they reached their third semester; others thought about changing their field of study. By the end of this research, most of the participants had continued their studies (the one they had started or the new one they had changed to) without leaving the higher education system.
Relativistic pair beams produced in cosmic voids by TeV gamma rays from blazars are expected to produce detectable GeV-scale cascade emission, which is missing from observations. The suppression of this secondary cascade implies either deflection of the pair beam by intergalactic magnetic fields (IGMFs) or an energy loss of the beam due to the electrostatic beam-plasma instability. An IGMF of femto-Gauss strength is sufficient to significantly deflect the pair beams, reducing the flux of the secondary cascade below the observational limits. In the absence of an IGMF, a similar flux reduction may result from the beam's energy loss through the instability before inverse Compton cooling. This dissertation consists of two studies on the role of the instability in the evolution of blazar-induced beams.
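For orientation, standard inverse-Compton kinematics (a textbook relation, not a result specific to this thesis) explain why TeV-induced pairs should radiate at GeV energies: an electron or positron of Lorentz factor $\gamma$ upscatters CMB photons of mean energy $\epsilon_{\mathrm{CMB}} \approx 6\times10^{-4}\,$eV to a typical energy

```latex
% Typical upscattered photon energy in the Thomson regime (standard relation)
E_\gamma \simeq \tfrac{4}{3}\,\gamma^{2}\,\epsilon_{\mathrm{CMB}}
        \approx 3\,\mathrm{GeV}\,\left(\frac{E_e}{1\,\mathrm{TeV}}\right)^{2}
```

so pairs created by absorbed TeV photons reprocess their energy into a GeV-scale cascade unless deflection or plasma energy loss intervenes.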
Firstly, we investigated the effect of sub-fG IGMFs on the beam energy loss by the instability. Considering IGMFs with correlation lengths smaller than a few kpc, we found that such fields increase the transverse momentum of the pair-beam particles, dramatically reducing the linear growth rate of the electrostatic instability and hence the energy-loss rate of the pair beam. Our results show that the IGMF eliminates the beam-plasma instability as an effective energy-loss agent at a field strength three orders of magnitude below that needed to suppress the secondary cascade emission by magnetic deflection. For intermediate-strength IGMFs, we do not know a viable process that could explain the observed absence of GeV-scale cascade emission; hence this range of field strengths can be excluded.
Secondly, we probed how the beam-plasma instability feeds back on the beam, using a realistic two-dimensional beam distribution. We found that the instability broadens the beam opening angle significantly without any significant energy loss, confirming a recent feedback study based on a simplified one-dimensional beam distribution. However, narrowing diffusion feedback on beam particles with Lorentz factors below 10^6 might become relevant even though it is initially negligible. Finally, when considering the continuous creation of TeV pairs, we found that the beam distribution and the wave spectrum reach a new quasi-steady state, in which the scattering of beam particles persists and the beam opening angle may increase by a factor of hundreds. This new intrinsic scattering of the cascade can result in time delays of around ten years, potentially mimicking IGMF deflection. Understanding the implications for the GeV cascade emission requires accounting for inverse Compton cooling and simulating the beam-plasma system at different points in the IGM.
Since their discovery in 1610 by Galileo Galilei, Saturn's rings have continued to fascinate experts and amateurs alike. Countless icy grains in almost Keplerian orbits reveal a wealth of structures such as ringlets, voids and gaps, wakes and waves, and many more. Grains are found to increase in size with increasing radial distance from Saturn. Recently discovered "propeller" structures in the Cassini spacecraft data provide evidence for the existence of embedded moonlets. In the wake of these findings, the discussion about the origin and evolution of planetary rings and about growth processes in tidal environments has resumed. In this thesis, a contact model for binary adhesive, viscoelastic collisions is developed that accounts for agglomeration as well as restitution. Collisional outcomes are crucially determined by the impact speed and the masses of the collision partners, and the model yields a maximal impact velocity at which agglomeration still occurs. Based on the latter, a self-consistent kinetic concept is proposed. The model considers all possible collisional outcomes, namely coagulation, restitution, and fragmentation. Emphasizing the evolution of the mass spectrum and concentrating on coagulation alone, a coagulation equation including a restricted sticking probability is derived. The otherwise phenomenological Smoluchowski equation is reproduced from basic principles and represents a limiting case of the derived coagulation equation. The relevance of adhesion to force-free granular gases and to those under the influence of Keplerian shear is analysed qualitatively and quantitatively. Capture probability, agglomerate stability, and the evolution of the mass spectrum are investigated in the context of adhesive interactions. A size-dependent radial limit distance from the central planet is obtained, refining the Roche criterion. Furthermore, the capture probability in the presence of adhesion differs in general from the case of pure gravitational capture.
In contrast to a Smoluchowski-type evolution of the mass spectrum, numerical simulations of the obtained coagulation equation revealed that a transition from smaller grains to larger bodies cannot occur via a collisional cascade alone. For the parameters used in this study, effective growth ceases at an average size of centimeters.
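For reference, the classical continuous Smoluchowski coagulation equation, which the derived equation reduces to as a limiting case, reads in standard notation ($n(m,t)$ the mass distribution, $K$ the coagulation kernel; the thesis's own equation additionally restricts sticking via an impact-velocity-dependent probability, which is not shown here):

```latex
% Classical Smoluchowski coagulation equation (standard form)
\frac{\partial n(m,t)}{\partial t}
  = \frac{1}{2}\int_{0}^{m} K(m',\,m-m')\,n(m',t)\,n(m-m',t)\,\mathrm{d}m'
  \;-\; n(m,t)\int_{0}^{\infty} K(m,m')\,n(m',t)\,\mathrm{d}m'
```

The first term counts gains from mergers of smaller bodies into mass $m$ (the factor $1/2$ avoids double counting), the second the loss of bodies of mass $m$ through mergers with any partner.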
Insight by de—sign
(2023)
The calculus of design is a diagrammatic approach towards the relationship between design and insight. The thesis I am evolving is that insights are not discovered, gained, explored, revealed, or mined, but are operatively de—signed. The de in design neglects the contingency of the space towards the sign. The — is the drawing of a distinction within the operation. Space collapses through the negativity of the sign; the command draws a distinction that neglects the space for the form's sake. The operation to de—sign is counterintuitively not the creation of signs, but their removal, the exclusion of possible sign propositions of space. De—sign is thus an act of exclusion; the possibilities of space are crossed into form.
A dramatic efficiency improvement of bulk heterojunction solar cells based on electron-donating conjugated polymers in combination with soluble fullerene derivatives has been achieved over the past years. Certified and reported power conversion efficiencies now reach over 9% for single junctions and exceed the 10% benchmark for tandem solar cells. This trend brightens the vision of organic photovoltaics becoming competitive with inorganic solar cells including the realization of low-cost and large-area organic photovoltaics. For the best performing organic materials systems, the yield of charge generation can be very efficient. However, a detailed understanding of the free charge carrier generation mechanisms at the donor-acceptor interface and the energy loss associated with it needs to be established. Moreover, organic solar cells are limited by the competition between charge extraction and free charge recombination, accounting for further efficiency losses. A conclusive picture and the development of precise methodologies for investigating the fundamental processes in organic solar cells are crucial for future material design, efficiency optimization, and the implementation of organic solar cells into commercial products.
In order to advance the development of organic photovoltaics, my thesis focuses on the comprehensive understanding of charge generation, recombination and extraction in organic bulk heterojunction solar cells summarized in 6 chapters on the cumulative basis of 7 individual publications.
The general motivation guiding this work was the realization of an efficient hybrid inorganic/organic tandem solar cell with sub-cells made from amorphous hydrogenated silicon and organic bulk heterojunctions. To realize this project aim, the focus was directed to the low band-gap copolymer PCPDTBT and its derivatives, resulting in the examination of the charge carrier dynamics in PCPDTBT:PC70BM blends in relation to the blend morphology. The phase separation in this blend can be controlled by the processing additive diiodooctane, enhancing domain purity and size. The quantitative investigation of free charge formation was realized by utilizing and improving the time delayed collection field technique. Interestingly, a pronounced field dependence of free carrier generation is found for all blends, with the field dependence being stronger without the additive. Also, the bimolecular recombination coefficient for both blends is rather high and increases with decreasing internal field, which we suggest is caused by a negative field dependence of the mobility. The additive speeds up charge extraction, which is rationalized by the threefold increase in mobility.
By fluorine attachment within the electron-deficient subunit of PCPDTBT, a new polymer, F-PCPDTBT, was designed. This new material is characterized by a stronger tendency to aggregate compared to non-fluorinated PCPDTBT. Our measurements show that for F-PCPDTBT:PCBM blends the charge carrier generation becomes more efficient and the field dependence of free charge carrier generation is weakened. The stronger tendency to aggregate induced by the fluorination also leads to enlarged polymer-rich domains, accompanied by a threefold reduction in the non-geminate recombination coefficient at open-circuit conditions. The size of the polymer domains correlates nicely with the field dependence of charge generation and the Langevin reduction factor, which highlights the importance of domain size and domain purity for efficient charge carrier generation. In total, fluorination of PCPDTBT causes the PCE to increase from 3.6 to 6.1% due to an enhanced fill factor, short-circuit current and open-circuit voltage. Further optimization of the blend ratio, active layer thickness, and polymer molecular weight resulted in 6.6% efficiency for F-PCPDTBT:PC70BM solar cells.
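The Langevin reduction factor invoked here compares the measured bimolecular recombination coefficient $k_2$ with the classical Langevin rate (standard definitions, our notation: $q$ elementary charge, $\mu_{e},\mu_{h}$ electron and hole mobilities, $\varepsilon_0\varepsilon_r$ the permittivity of the blend):

```latex
% Classical Langevin recombination coefficient and reduction factor (standard form)
k_{\mathrm{L}} = \frac{q\,(\mu_e + \mu_h)}{\varepsilon_0\,\varepsilon_r},
\qquad
\gamma_{\mathrm{red}} = \frac{k_2}{k_{\mathrm{L}}}
```

A reduction factor $\gamma_{\mathrm{red}} < 1$ indicates recombination suppressed below the encounter-limited Langevin rate, as expected when pure, phase-separated domains keep electrons and holes spatially apart.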
Interestingly, the doubly fluorinated version 2F-PCPDTBT exhibited a poorer fill factor despite a further reduction of geminate and non-geminate recombination losses. To further analyze this finding, a new technique was developed that measures the effective extraction mobility under charge carrier densities and electrical fields comparable to solar cell operating conditions. This method builds on the bias-enhanced charge extraction technique. With the knowledge of the carrier density under different electrical field and illumination conditions, a conclusive picture is attained of the changes in charge carrier dynamics that lead to the differences in fill factor upon fluorination of PCPDTBT. The more efficient charge generation and reduced recombination upon fluorination are counterbalanced by a decreased extraction mobility. Thus, the highest fill factor of 60% and an efficiency of 6.6% are reached for F-PCPDTBT blends, while 2F-PCPDTBT blends reach only moderate fill factors of 54%, caused by the lower effective extraction mobility, limiting the efficiency to 6.5%.
To understand the details of the charge generation mechanism and the related losses, we evaluated the yield and field-dependence of free charge generation using time-delayed collection field measurements in combination with sensitive measurements of the external quantum efficiency and absorption coefficients for a variety of blends. Importantly, both the yield and the field-dependence of free charge generation are found to be unaffected by the excitation energy, including direct charge transfer excitation below the optical band gap. To access the undetectably weak absorption at the energies of relaxed charge transfer emission, the absorption was reconstructed from the CT emission induced via the recombination of thermalized charges in electroluminescence. For a variety of blends, the quantum yield at the energies of charge transfer emission was identical to that for excitations well above the optical band gap. Thus, generation proceeds via the split-up of thermalized charge transfer states in working solar cells. Further measurements were conducted on blends with fine-tuned energy levels and similar blend morphologies obtained by using different fullerene derivatives. A direct correlation is found between the efficiency of free carrier generation and the energy of the relaxed charge transfer state relative to that of the charge-separated state. These findings suggest new guidelines for future material design: new high-efficiency materials require a minimum energetic offset between the charge transfer and the charge-separated state, while keeping the HOMO (and LUMO) level difference between donor and acceptor as small as possible.
The theory of atomic Boson-Fermion mixtures in the dilute limit beyond mean-field is considered in this thesis. Extending the formalism of quantum field theory, we derived expressions for the quasi-particle excitation spectra, the ground state energy, and related quantities for a homogeneous system to first order in the dilute gas parameter. In the framework of density functional theory we could carry over these results to inhomogeneous systems. We then determined the density distributions for various parameter values and identified three different phase regions: (i) a stable mixed regime, (ii) a phase-separated regime, and (iii) a collapsed regime. We found a significant contribution of exchange-correlation effects in the latter case. Next, we determined the shift of the Bose-Einstein condensation temperature in a harmonic trap caused by Boson-Fermion interactions through the redistribution of the density profiles. We then considered Boson-Fermion mixtures in optical lattices. We derived the criterion for stability against phase separation and identified the Mott-insulating and superfluid regimes, both analytically within a mean-field calculation and numerically by means of a Gutzwiller ansatz. We also found new frustrated ground states in the limit of very strong lattices. ---- Note: For this dissertation the author was awarded the Carl Ramsauer Prize 2004 of the Physical Society of Berlin, given for the best dissertation at each of the four universities Freie Universität Berlin, Humboldt-Universität zu Berlin, Technische Universität Berlin, and Universität Potsdam.
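The mean-field route to the Mott-insulator/superfluid distinction mentioned above can be illustrated with a minimal single-site Gutzwiller-type self-consistency loop for the Bose-Hubbard model. This is a generic textbook sketch, not the code or model of the thesis; `zt` is the hopping times the lattice coordination number and `mu` the chemical potential, both in units of the on-site interaction `U`:

```python
import numpy as np

def gutzwiller_psi(zt, mu, U=1.0, nmax=6, iters=200):
    """Solve psi = <b> self-consistently for the decoupled single-site
    Bose-Hubbard Hamiltonian; psi = 0 signals a Mott insulator,
    psi > 0 a superfluid."""
    n = np.arange(nmax + 1)
    b = np.diag(np.sqrt(n[1:]), k=1)          # bosonic annihilation operator
    diag = 0.5 * U * n * (n - 1) - mu * n     # on-site interaction, chemical potential
    psi = 0.1                                 # initial guess for the order parameter
    for _ in range(iters):
        H = np.diag(diag) - zt * psi * (b + b.T)   # mean-field decoupled hopping
        _, vecs = np.linalg.eigh(H)
        g = vecs[:, 0]                        # ground state of the local problem
        psi = g @ b @ g                       # new order parameter <b>
    return abs(psi)

print(gutzwiller_psi(0.05, 0.5) < 1e-6)   # deep in the n = 1 Mott lobe
print(gutzwiller_psi(0.30, 0.5) > 0.1)    # superfluid regime
```

Deep inside the n = 1 Mott lobe the order parameter iterates to zero, while beyond the mean-field lobe boundary (zt/U ≈ 1/6 at mu/U = 0.5) it converges to a finite value, reproducing the qualitative phase distinction.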
Planets outside our solar system, so-called "exoplanets", can be detected with different methods, and currently more than 5000 exoplanets have been confirmed, according to the NASA Exoplanet Archive. One major highlight of exoplanet studies in the past twenty years is the characterization of their atmospheres using transmission spectroscopy as the exoplanet transits. However, this characterization is a challenging process, and discrepancies regarding the atmosphere of the same exoplanet are sometimes reported in the literature. One potential reason for the observed atmospheric inconsistencies is the impact parameter degeneracy, which is strongly driven by the limb darkening of the host star. A brief introduction to these topics is presented in chapter 1, while the motivation and objectives of this work are described in chapter 2. The first goal is to clarify the origin of the transmission spectrum, which is an indicator of an exoplanet's atmosphere: whether it is real or influenced by the impact parameter degeneracy. A second goal is to determine whether photometry from space using the Transiting Exoplanet Survey Satellite (TESS) could improve the major parameters of known exoplanetary systems that are responsible for the aforementioned degeneracy. Three individual projects were conducted to address these goals; the three manuscripts are summarized in the manuscript overview in chapter 3. More specifically, chapter 4 presents the first manuscript, an extended investigation of the impact parameter degeneracy and its application to synthetic transmission spectra. Evidently, the limb darkening of the host star is an important driver of this effect: the degeneracy persists across different groups of exoplanets, grouped by the uncertainty of their impact parameter and by the type of their host star. The second goal was addressed in the second and third manuscripts (chapters 5 and 6, respectively).
Using observations from the TESS mission, two samples of exoplanets were studied: 10 transiting inflated hot Jupiters and 43 transiting grazing systems. The refinement or confirmation of their major system parameters can potentially assist in resolving current or future discrepancies regarding their atmospheric characterization. Chapter 7 discusses the conclusions of this work, while chapter 8 proposes how TESS measurements can discern between erroneous interpretations of transmission spectra, especially in systems where the impact parameter degeneracy is likely not applicable.
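For orientation, the two quantities at the heart of the degeneracy discussed above have standard definitions (generic textbook relations, not taken from this thesis):

```latex
b = \frac{a}{R_\star}\cos i, \qquad
\frac{I(\mu)}{I(1)} = 1 - u_1\,(1-\mu) - u_2\,(1-\mu)^2, \quad \mu = \cos\theta ,
```

where \(b\) is the transit impact parameter for a planet with orbital semi-major axis \(a\) and inclination \(i\), and \(u_1, u_2\) are the quadratic limb-darkening coefficients of the host star. Correlated errors in \(b\) and \(u_{1,2}\) translate into a biased transit depth and hence a biased transmission spectrum.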
Since the collapse of the Soviet Union, new migration processes have emerged in this region, such as labor migration between the southern CIS republics and Russia, but also cross-border population movements of ethnic groups to their "historical areas of origin". The dynamic migration processes of Kazakhs between Mongolia and Kazakhstan examined in this study display characteristics of this migration type, but also several particularities. The study is based on extended research stays in Kazakhstan and Mongolia from 2006 to 2009. Kazakh migrants from Mongolia living in the surroundings of Almaty, and Kazakhs in Mongolia's westernmost aymag, Bayan-Ölgiy, were surveyed using quantitative and qualitative methods of empirical social research. In addition, experts from social, academic and political institutions were interviewed in both states to ensure as balanced a view as possible of the post-Soviet migration and incorporation processes between the two countries. Over recent decades, close social networks have developed between the migrants in Kazakhstan and their relatives still, or again, living in Mongolia. The maintenance of these ties is fostered by improved transport and communication links between the two states. Circular migration patterns, regular visits and telephone calls, as well as cross-border socio-economic support mechanisms have intensified, particularly in recent years. These interactions must be situated within the legal, political and economic conditions of the Mongolia-Kazakhstan migration system, and in particular in their interplay with state migration and incorporation policies.
The findings of the present study can be summarized briefly as follows: (I) The interactions of the Kazakhs from Mongolia, organized in social networks, show characteristics of, but also differences from, concepts of the transnationalism approach. (II) The social ties between relatives generate social capital and contribute to everyday support. (III) The local and cross-border activities of the migrants are to be interpreted as strategies of socio-economic incorporation. (IV) A substantial proportion of the Kazakhs from Mongolia articulate hybrid patterns of identification deviating from the majority population, to which the political elites in Kazakhstan have so far paid too little attention.
Characterization of altered inflorescence architecture in Arabidopsis thaliana BG-5 x Kro-0 hybrid
(2018)
A reciprocal cross between two A. thaliana accessions, Kro-0 (Krotzenburg, Germany) and BG-5 (Seattle, USA), displays purple rosette leaves and a dwarf, bushy phenotype in F1 hybrids when grown at 17 °C and a parental-like phenotype when grown at 21 °C. This temperature-dependent dwarf-bushy F1 phenotype is characterized by reduced growth of the primary stem together with an increased number of branches. The reduction in stem growth was strongest at the first internode. In addition, we found that a temperature switch from 21 °C to 17 °C induced the phenotype only before the formation of the first internode of the stem. Similarly, the F1 dwarf-bushy phenotype could not be reversed when plants were shifted from 17 °C to 21 °C after the first internode had formed. Metabolic analysis showed that the F1 phenotype was associated with a significant upregulation of anthocyanins, kaempferols, salicylic acid, jasmonic acid and abscisic acid. As it had previously been shown that the dwarf-bushy phenotype is linked to two loci, one on chromosome 2 from Kro-0 and one on chromosome 3 from BG-5, an artificial micro-RNA approach was used to identify the necessary genes in these intervals. From the results obtained, two genes on chromosome 2 were found to be necessary for the appearance of the F1 phenotype: AT2G14120, which encodes DYNAMIN RELATED PROTEIN3B, and AT2G14100, which encodes a member of the cytochrome P450 family, CYP705A13. It was also discovered that AT3G61035, which encodes another cytochrome P450 family protein, CYP705A13, and AT3G60840, which encodes MICROTUBULE-ASSOCIATED PROTEIN65-4, on chromosome 3 were both necessary for the induction of the F1 phenotype. To prove the causality of these genes, genomic constructs of the Kro-0 candidate genes on chromosome 2 were transferred to BG-5, and genomic constructs of the chromosome 3 candidate genes from BG-5 were transferred to Kro-0.
The T1 lines showed that these genes alone are not sufficient to induce the phenotype. In addition to the F1 phenotype, more severe phenotypes were observed in the F2 generations, which were grouped into five phenotypic classes. Whilst seed yield was comparable between F1 hybrids and parental lines, three phenotypic classes in the F2 generation exhibited hybrid breakdown in the form of reproductive failure. This F2 hybrid breakdown was less sensitive to temperature and showed a dose-dependent effect of the loci involved in the F1 phenotype. The most severe class of hybrid breakdown phenotypes was observed only in the backcross population with the parent Kro-0, which indicates a stronger contribution of the BG-5 allele than of the Kro-0 allele to the hybrid breakdown phenotypes. Overall, the findings of my thesis provide a deeper understanding of the genetic and metabolic factors underlying altered shoot architecture in hybrid dysfunction.
Classification, prediction and evaluation of graph neural networks on online social media platforms
(2024)
The vast amount of data generated on social media platforms has made them a valuable source of information for businesses, governments and researchers. Social media data can provide insights into user behavior, preferences, and opinions. In this work, we address two important challenges in social media analytics. Predicting user engagement with online content has become a critical task for content creators seeking to increase engagement and reach larger audiences. Traditional user engagement prediction approaches rely solely on features derived from the user and the content. However, a new class of deep learning methods based on graphs captures not only the content features but also the graph structure of social media networks.
This thesis proposes a novel Graph Neural Network (GNN) approach to predict user interaction with tweets. The proposed approach combines the features of users, tweets and their engagement graphs. The tweet text features are extracted using pre-trained embeddings from language models, and a GNN layer is used to embed the user in a vector space. The GNN model then combines the features and graph structure to predict user engagement. The proposed approach achieves an accuracy value of 94.22% in classifying user interactions, including likes, retweets, replies, and quotes.
Another major challenge in social media analysis is detecting and classifying social bot accounts. Social bots are automated accounts used to manipulate public opinion by spreading misinformation or generating fake interactions. Detecting social bots is critical to prevent their negative impact on public opinion and trust in social media. In this thesis, we classify social bots on Twitter by applying Graph Neural Networks. The proposed approach uses a combination of a node's own features and an aggregation of the features of the node's neighborhood to classify social bot accounts. Our results indicate a 6% improvement in the area-under-the-curve score of the final predictions through the use of GNNs.
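The aggregation step described above — combining a node's own features with a summary of its neighbors' features — can be sketched with a minimal GraphSAGE-style layer in numpy. This is a generic illustration, not the thesis's actual model; the toy graph, feature dimensions, and weight matrices are made up:

```python
import numpy as np

def sage_layer(features, adj, w_self, w_neigh):
    """One GraphSAGE-style layer: combine each node's own features with
    the mean of its neighbors' features, then apply a ReLU."""
    deg = adj.sum(axis=1, keepdims=True)
    neigh_sum = adj @ features
    # Mean over neighbors; isolated nodes get a zero neighborhood vector.
    neigh_mean = np.divide(neigh_sum, deg, out=np.zeros_like(neigh_sum), where=deg > 0)
    return np.maximum(0.0, features @ w_self + neigh_mean @ w_neigh)

# Toy follower graph of 4 accounts (undirected edges 0-1, 1-2, 2-3).
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                         # made-up per-account features
w1, w2 = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
h = sage_layer(x, adj, w1, w2)                      # per-account embeddings
print(h.shape)                                      # (4, 16)
```

In a full model, such embeddings would be passed to a classifier head (e.g. logistic regression) that labels each account as bot or human.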
Overall, our work highlights the importance of social media data and the potential of new methods such as GNNs to predict user engagement and detect social bots. These methods have important implications for improving the quality and reliability of information on social media platforms and mitigating the negative impact of social bots on public opinion and discourse.
Bacteria are among the most widespread microorganisms and play essential roles in many biological and ecological processes. Bacteria live either as independent individuals or in organized communities. At the level of single cells, interactions between bacteria, their neighbors, and the surrounding physical and chemical environment are the foundations of microbial processes. Modern microscopy imaging techniques provide attractive and promising means to study the impact of these interactions on the dynamics of bacteria. The aim of this dissertation is to deepen our understanding of four fundamental bacterial processes – single-cell motility, chemotaxis, bacterial interactions with environmental constraints, and their communication with neighbors – through live-cell imaging. By exploring these processes, we expand our knowledge of so far unexplained mechanisms of bacterial interactions.
Firstly, we studied the motility of the soil bacterium Pseudomonas putida (P. putida), which swims by flagellar propulsion and has a complex, multi-mode swimming tactic. It was recently reported that P. putida exhibits several distinct swimming modes – the flagella can push or pull the cell body or wrap around it. Using a new combined phase-contrast and fluorescence imaging set-up, the swimming mode (push, pull, or wrapped) of each run phase was automatically recorded, which provided the full swimming statistics of the multi-mode swimmer. Furthermore, the investigation of cell interactions with a solid boundary revealed an asymmetry between the swimming modes: in contrast to the push and pull modes, the curvature of runs in the wrapped mode was not affected by the solid boundary. This finding suggests that a multi-mode swimming strategy may provide additional versatility in reacting to environmental constraints.
Next, we determined how P. putida navigates toward chemoattractants, i.e. its chemotaxis strategy. We found that the individual run modes show distinct chemotactic responses in nutrient gradients. In particular, P. putida cells exhibited an asymmetry in their chemotactic responsiveness: the wrapped mode (slow swimming mode) was affected by the chemoattractant, whereas the push mode (fast swimming mode) was not. These results can be seen as a starting point for understanding the more complex chemotaxis strategies of multi-mode swimmers, going beyond the well-known paradigm of Escherichia coli, which exhibits only one swimming mode.
Finally, we considered the cell dynamics in a dense population. Besides physical interactions with their neighbors, cells communicate their activities and orchestrate their population behavior via quorum sensing. Molecules secreted into the surroundings by the bacterial cells act as signals and regulate the behavior of the cell population. We studied P. putida's motility in a dense population by exposing the cells to environments with different concentrations of chemical signals. We found that higher amounts of chemical signals in the surroundings influenced the single-cell behavior, suggesting that cell-cell communication may also affect the flagellar dynamics.
In summary, this dissertation studies the dynamics of a bacterium with a multi-mode swimming tactic and how it is affected by the surrounding environment using microscopy imaging. The detailed description of the bacterial motility in fundamental bacterial processes can provide new insights into the ecology of microorganisms.
The topic of synchronization forms a link between nonlinear dynamics and neuroscience. On the one hand, neurobiological research has shown that the synchronization of neuronal activity is an essential aspect of the working principle of the brain. On the other hand, recent advances in physical theory have led to the discovery of the phenomenon of phase synchronization. A method of data analysis motivated by this finding – phase synchronization analysis – has already been successfully applied to empirical data. The present doctoral thesis ties in with these converging lines of research. Its subject is methodological contributions to the further development of phase synchronization analysis, as well as its application to event-related potentials, a form of EEG data that is especially important in the cognitive sciences. The methodological contributions of this work consist, firstly, of a number of specialized statistical tests for a difference in synchronization strength between two different states of a system of two oscillators. Secondly, in view of the many-channel character of EEG data, an approach to multivariate phase synchronization analysis is presented. For the empirical investigation of neuronal synchronization, a classic experiment on language processing was replicated, comparing the effect of a semantic violation in a sentence context with that of a manipulation of physical stimulus properties (font color). Here, phase synchronization analysis detects a decrease of global synchronization for the semantic violation as well as an increase for the physical manipulation. In the latter case, by means of the multivariate analysis, the global synchronization effect can be traced back to an interaction of symmetrically located brain areas. The findings presented show that the physically motivated method of phase synchronization analysis can provide a relevant contribution to the investigation of event-related potentials in the cognitive sciences.
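A minimal version of the quantity at the core of such an analysis, the phase-locking value between two oscillatory signals, can be sketched as follows. This is a generic illustration using instantaneous phases from the analytic signal, not the thesis's own statistical tests; the test signals and parameters are made up:

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (same construction as scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(X * h)

def plv(x, y):
    """Phase-locking value |<exp(i*(phi_x - phi_y))>|, between 0 and 1."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 1, 1000, endpoint=False)
s1 = np.sin(2 * np.pi * 10 * t)
s2 = np.sin(2 * np.pi * 10 * t + 0.5)   # same frequency, fixed phase lag
drift = np.cumsum(np.random.default_rng(1).normal(0, 0.3, 1000))
s3 = np.sin(2 * np.pi * 10 * t + drift) # randomly drifting phase
print(round(plv(s1, s2), 2))            # ≈ 1.0 (phase-locked)
print(plv(s1, s3) < 0.9)                # drifting phase gives a lower PLV
```

The statistical tests mentioned above would then compare such PLV estimates between experimental conditions, e.g. against surrogate-data distributions.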
The controlled dosage of substances from a device into its environment, such as a tissue or an organ in medical applications, or a reactor, room, machine or ecosystem in technical ones, should ideally match the requirements of the application, e.g. in terms of the time point at which the cargo is released. On-demand dosage systems may enable such a desired release pattern if the device contains suitable features that can translate external signals into a release function. This study is motivated by the opportunities arising from microsystems capable of on-demand release and by the contributions that geometrical design may make in realizing such features. The goals of this work included the design, fabrication, characterization and experimental proof-of-concept of geometry-assisted triggerable dosing effects (a) with sequential dosing release and (b) in a self-sufficient dosage system. Structure-function relationships were addressed on the molecular, morphological and, with particular attention, the device design level, which is on the micrometer scale. Models and/or computational tools were used to screen the parameter space and provide guidance for experiments.
Abstract
The commandment of procreation is one of the most important commandments in Judaism and in many ways defines the essence of the religious Jew's existence. Religious couples suffering from infertility are often torn between their religious obligation and the social pressure of their community on the one hand, and their inability to have children naturally on the other. From the mid-20th century, medical fertility technologies became accessible to the general public in Israel and worldwide and led to a social revolution that, for the first time, enabled childless couples to realize their dream and become parents. This medical breakthrough did not escape the attention of the religious public in its various streams and shook the worldview of its leaders, who were required to provide an appropriate halakhic response balancing the welfare of their communities with the preservation of the values and commandments of the Jewish religion. Against the background of these changes, an important and interesting halakhic and academic discourse developed concerning the place of modern technology in a society conducted according to traditional codes. However, the living dialogue between Jewish leaders and their communities, documented as a collection of questions and answers (responsa) on current halakhic problems in the field of fertility, has not been sufficiently researched.
Our work studies the responsa literature and the approach of halakhic decisors to the use of artificial insemination technology and to additional innovative fertility treatments that developed in its wake, such as in-vitro fertilization, surrogacy and egg donation. The work surveys the consolidation and shaping of the halakha on this question as reflected in the responsa literature, from the period of the Rishonim (11th to 16th centuries) to the decisors of the Acharonim period (16th century onward), focusing mainly on the modern period of the last century. At the center of the work stand the halakhic status of the mother and the child following the use of artificial insemination and fertilization, as well as the halakhic implications of the medical use of sperm.
The contribution of this research lies in presenting and analyzing how the decisors confronted the conflict of values and halakha aroused by changes in family structure and in traditional parenting patterns resulting from the growing use of advanced fertility technologies in modern society. Moreover, this research broadens the halakhic discussion beyond the positions of the decisors of Orthodox Judaism, who adhere to the principles of historical halakha, and presents for the first time the spectrum of opinions of rabbis belonging to the Progressive and Conservative streams of Judaism, who advocate developing the halakha and adapting it to changing life circumstances. This research thus provides a more complete and comprehensive picture of how rabbis deal with the question of artificial insemination and presents the diversity of the halakha as reflected in the various streams of Judaism in Israel and the diaspora.
The work is divided into six parts, briefly described below.
Part One: Introduction to the question of artificial insemination in the responsa literature
This part constitutes a theoretical and methodological introduction to the question of artificial insemination, and its purpose is to equip the reader with the historical background and research tools required for the rest of the work. The first chapter surveys religious commandments and cultural attitudes regarding procreation in Jewish society, while the second chapter lays the methodological and historical foundations needed for studying the question of artificial insemination in the responsa literature.
Chapter One: Religious commandments and cultural attitudes regarding procreation in Jewish society
This chapter presents the ideological doctrine underlying the halakhic justification for permitting fertility treatments. It discusses the importance of the commandment of procreation as reflected in early halakhic sources, as well as the historical and cultural background to the centrality of this commandment and its associated commandments in the Jewish religion. The chapter demonstrates how the importance of the commandment shaped the pattern of the Jewish family throughout history and up to the present day, and concludes with a brief discussion of medical-ethical questions arising from making advanced fertility technologies accessible in the modern State of Israel.
Chapter Two: Methodological and historical foundations in the study of the question of artificial insemination in the responsa literature
This chapter first presents the goals and research methods chosen for analyzing the rulings on the question of artificial insemination. Since the work deals with an innovative medical question and its effect on the religious world, the research employs a multidimensional analysis combining several research methods, chief among them the dogmatic-historical method, as well as a critical analysis of the policy considerations underlying the decisors' reasoning according to the school of halakhic realism. The chapter then surveys and discusses the principal historical sources dealing with supernatural conception in a bath or by lying on sheets, which underlie the decisors' rulings on the implications of artificial insemination for the halakhic status of the woman and the newborn.
Part Two: The Orthodox responsa literature on the question of artificial insemination from the husband
The second part of the research analyzes the Orthodox responsa literature on the question of artificial insemination from the husband and examines the conceptual development of the prohibition of wasting seed as well as the positions of the decisors on this question as discussed in the responsa literature.
Chapter One: The conceptual development of the prohibition of wasting seed
This chapter surveys the historical sources of the prohibition against wasting seed. This prohibition touches all areas of the religious couple's life, above all sexuality and fertility, and has had a substantial effect on the modesty restrictions consequently applied to male sexuality. Scholars debate whether the prohibition derives from ancient sources such as the Talmud or, alternatively, was intensified through rabbinic stringencies. Our research suggests that the second possibility is the more plausible. The Talmudic sages display a lenient approach to the prohibition but express revulsion at the wasteful emission of seed, out of pietistic thinking or the belief that engaging in sexual self-gratification might divert a person from fulfilling the commandment of procreation. In contrast, the Kabbalistic literature, and especially the Zohar, shows a marked intensification of the prohibition, which later penetrated the world of halakha.
Chapter Two: Discussion of insemination from the husband in the responsa literature
The chapter discusses the rulings of prominent Orthodox rabbis on the question of insemination from the husband. In the Orthodox responsa literature, the influence of the prohibition of wasteful emission of seed on the decisors' considerations of whether to permit treatment by artificial insemination from the husband is evident. Lenient decisors such as Rabbis Feinstein, Nebenzahl and Yosef attribute greater weight to the importance of the commandment of procreation and accordingly narrow the scope of the prohibition. In contrast, stringent decisors, chief among them Rabbis Tennenbaum, Kook, Uziel, Hadaya and Waldenberg, oppose fertility treatments for fear of wasting seed and because of the spiritual consequences of transgressing the prohibition. Another essential question in the halakhic discourse, on which the rabbis disagreed, is whether the commandment of procreation can be fulfilled through a pregnancy achieved with medical assistance. Decisors among the Acharonim, such as Rabbis Tennenbaum, Segal and Azulai (the Chida), and in their wake prominent decisors of the 20th and 21st centuries such as Rabbis Uziel, Hadaya, Waldenberg and Kanievsky, held that the commandment can be fulfilled only naturally. In contrast, Rabbis Nebenzahl, Yosef, Weinberg and Auerbach attributed greater importance to the intention of the actor and to the desired outcome of the pregnancy, which in their view fulfills the rationale underlying the commandment of procreation and the commandment of settling the world.
Another question arising with respect to insemination from the husband and discussed in this chapter concerns the problem of halakhic infertility. The rabbis' insistence on preserving the laws of niddah in their traditional form exposes religious women to uncertainty regarding their purity and subordinates the sexual and reproductive lives of entire families to the rule of rabbinic authority.
Part Three: The Orthodox responsa literature on the question of artificial insemination from a donor
The third part of the research deals with the Orthodox halakhic discussion of artificial insemination from a Jewish or non-Jewish donor for a married woman. Most decisors objected, on halakhic and ideological grounds, to sperm donation to a married woman from a Jewish as well as from a non-Jewish donor.
Chapter One: The question of artificial insemination from a Jewish donor
Sperm donation from a Jewish stranger raises many halakhic concerns regarding the woman's halakhic status, such as adultery and the fulfillment of the commandments of levirate marriage (yibbum) and halitzah. Concerns were also raised regarding the lineage and halakhic fitness of the child, such as the fear of mamzerut (illegitimacy), incest among the donor's children, harm to priestly status, and problems of personal status owing to uncertainty about the donor's identity. Moreover, the resolution of these questions affects the child's entitlement to social rights from his father, such as the right to maintenance and inheritance. Halakhic rulings of the last century adopted the position of Rabbi Feinstein, according to which only a pregnancy resulting from direct sexual contact between a married woman and another man, as opposed to artificial insemination, can impair the status of the woman and the child. Nevertheless, because of these concerns, many decisors prohibited artificial insemination from a Jewish man as a precaution.
Chapter Two: Artificial insemination from a non-Jewish donor for a married woman
Although in the case of sperm donation from a non-Jew the child has no lineage to his non-Jewish father and his fitness is determined solely by the status of the mother, many decisors object to the procedure. The decisors list numerous arguments for prohibiting sperm donation from a non-Jewish man, among them desecration of God's name, interference with the custom of levirate marriage and halitzah, doubts regarding inheritance rights, encouragement of sexual licentiousness, and harm to the stability of the traditional family unit. They also fear confusion regarding the identity of the child's father and harm to priestly status. During the 20th century a tendency is evident among decisors engaged with Kabbalah to attribute negative traits to non-Jewish sperm, traits that pass to the child and thereby harm the sanctity of the Jewish people. Beyond the halakhic questions, this chapter raises the ethical difficulties posed by anonymous sperm donation, which prevents the child from tracing his biological roots.
Part Four: The attitude of Orthodox rulings toward changes in the structure of the traditional family resulting from the use of fertility technologies
This part analyzes the attitude of Orthodox rulings toward changes in the structure of the traditional family resulting from the use of fertility technologies and consists of three chapters: artificial insemination for an unmarried woman, in-vitro fertilization, and establishing an alternative family in an Orthodox halakhic perspective.
Chapter One: Artificial insemination for an unmarried woman
The halakhic consensus is that the child of an unmarried woman is not considered a mamzer, even if his biological father is Jewish. Nevertheless, certain decisors prohibited artificial insemination in this case out of concern for the child's lineage, for incest, and for the usurpation of the other siblings' inheritance. Some decisors call for prohibiting artificial insemination for an unmarried woman in the name of the child's welfare rather than for halakhic reasons. A further possibility whose halakhic implications are discussed in this chapter is the artificial insemination of a single or widowed woman using the sperm of a deceased man. It is important to note that the use of this technology is controversial among the rabbis because of the importance Judaism attributes to the value of respect for the dead.
Chapter Two: In-vitro fertilization
In-vitro fertilization can be performed between married partners with or without recourse to egg or sperm donation. Decisors who permitted the use of this technology expressed concerns regarding its halakhic implications, owing to the difficulty of supervising a process that takes place outside the human body. Opponents of in-vitro fertilization held that it interferes with the divine work of creation and may morally degrade humanity toward the design and cloning of human beings.
In-vitro fertilization raises novel halakhic questions, chief among them the definition of motherhood, as in cases of surrogacy and egg donation, from which most decisors of the 20th century distanced themselves. The determination adopted in contemporary halakhic discourse holds that the baby's mother is the woman who gave birth to him, even if he was born from an egg donation. This ruling is implicit in the rationale of the laws regulating surrogacy in the State of Israel, which impose halakhic restrictions on the choice of the surrogate in order to prevent concerns of mamzerut or questions regarding the child's Jewishness.
Chapter Three: Establishing an alternative family in an Orthodox halakhic perspective
There is a fundamental halakhic problem with institutionalizing same-sex relationships in Orthodox Judaism. The Orthodox responsa literature does not explicitly address the question of using artificial insemination to establish a same-sex family. Nevertheless, much can be learned about the Orthodox approach from contemporary responsa dealing with homosexuality and with fulfilling the commandment of procreation in this case. The prevailing opinion among decisors today is that a same-sex relationship is forbidden, as are its institutionalization and the fulfillment of the commandment of procreation within this framework. It can therefore be concluded that the rabbis would not permit the use of artificial insemination for fulfilling the commandment of procreation in a same-sex family.
Part Five: The position of Reform and Conservative rabbis on the question of artificial insemination
This part focuses on the Reform and Conservative streams of Judaism and presents the positions of the rabbis of these streams on artificial insemination, contrasting them with the positions of Orthodox decisors.
Chapter One: The position of the Reform rabbinate on the question of artificial insemination
The importance of the commandment of procreation is emphasized in the Reform responsa literature, but fertility treatments are not interpreted as the only way to fulfill it. In order not to harm the fabric of family life, adoption is often proposed as an alternative to exhausting fertility treatments, sperm donation, and recourse to surrogacy and egg donation. The question of wasting seed, which greatly occupies the rabbis of the Orthodox stream, is hardly discussed by Reform rabbis. As for questions of fitness and lineage, the prevailing opinion among prominent Reform rabbis such as Rabbis Homolka and Romain is that the status of mamzerut is no longer relevant today, so the child in any case faces no danger that his religious rights will be harmed.
The council of Reform rabbis permitted its members to act according to their own discretion regarding the performance of a religious marriage ceremony between same-sex partners, while noting the internal debate within the movement and the displeasure of the other streams of Judaism.
Chapter Two: The position of the Conservative rabbinate on the question of artificial insemination
The position of Conservative rabbis holds that fertility treatments are permitted but not obligatory for couples who wish to fulfill the commandment of procreation and have difficulty doing so. Similar to the abolition of the decree of mamzerut in the Reform stream, this status was recently abolished in Conservative Judaism as well. Masturbation in the context of fertility treatment is permitted and is not defined as wasteful emission of seed; it is permitted even for the purpose of a Jew donating sperm to a non-Jewish woman. As in the Reform stream, here too the option of adoption is on the agenda.
The Rabbinical Assembly of Conservative rabbis in the United States adopted, by a slim majority, a resolution granting civil recognition to homosexuals, while nevertheless leaving in place the prohibition on solemnizing the marriage of such couples. This resolution does not constitute halakhic recognition of same-sex partnership but rather the social acceptance of such couples and encouragement for them to integrate into Conservative communities.
חלק שישי - סיכום ודיון בהשלכות המחקר
בחלקו השישי והאחרון בעבודת מחקר זו נסכם ונדון בהשלכותיה האתיות והגלובליולת של הפסיקה ההלכית בשאלה הנידונה.
כפי שניתן לראות מעבודת מחקר זו, הרבנים האורתודוקסיים מסתמכים בהכרעותיהם בעיקר על שיקולים הלכתיים ופחות על שיקולים אתיים ואוניברסליים. מחד, מסתייגים פוסקים אורתודוקסיים מסויימים משימוש לרעה בטיפולי פוריות, אך מאידך מוכנים הם בשם עקרונות דתיים להתיר שימוש בטכנולוגיות אלו במקרים שאינם מצריכים זאת משיקולים רפואיים. במיוחד בולט העדר התייחסותם של הפוסקים האורתודוקסיים לשיקולים מוסריים חברתיים, כמו גם לרצונה ולרווחתה של האישה העוברת טיפולי פוריות. לעומתם מביעים רבני הזרם הרפורמי והקונסרבטיבי את דאגתם להשלכותיהם של טיפולים אלו על נשים הנזקקות להתערבות רפואית מעין זו לשם התעברות ואלו העובדות בתעשיית הפריון. יש לציין כי הדיון ההלכתי עוסק בהיבט צר בלבד של שאלת ההזרעה המלאכותית ואינו נוגע ברבות מן השאלות המורכבות שמעוררות טכנולוגיות פריון מתקדמות. עם זאת הדיון ההלכתי הינו חשוב ביותר ומציף אף שאלות נוספות בדבר הסכנה הצפויה מהפיכתן של טכנולוגיות אלה לכלי שרת בידי אידאולוגיות דתיות או פוליטיות.
In this thesis, I examine different A-bar movement dependencies in Igbo, a Benue-Congo language spoken in southern Nigeria. Movement dependencies are found in constructions where an element is moved to the left edge of the clause to express information-structural categories, as in questions, relativization, and focus. I show that these constructions in Igbo are very uniform from a syntactic point of view. The constructions are built on two basic fronting operations, relativization and focus movement, and are biclausal. I further investigate several morphophonological effects that are found in these A-bar constructions. I propose that these effects are reflexes of movement that are triggered when an element is moved overtly in relativization or focus. This proposal helps to explain the tone patterns that have previously been assumed to be a property of relative clauses. The thesis adds to the growing body of tonal reflexes of A-bar movement reported for a few African languages. The thesis also provides an insight into the complementizer domain (C-domain) of Igbo.
The Indian summer monsoon (ISM) is one of the largest climate systems on earth and impacts the livelihood of nearly 40% of the world’s population. Despite dedicated efforts, a comprehensive picture of monsoon variability has proved elusive, largely due to the absence of long-term high-resolution records, the spatial inhomogeneity of the monsoon precipitation, and the complex forcing mechanisms (solar insolation, internal teleconnections, e.g., El Niño-Southern Oscillation, tropical-midlatitude interactions). My work aims to improve the understanding of monsoon variability through the generation of long-term high-resolution palaeoclimate data from climatically sensitive regions in the ISM and westerlies domain. To achieve this aim I have (i) identified proxies (sedimentological, geochemical, isotopic, and mineralogical) that are sensitive to environmental changes; (ii) used the identified proxies to generate long-term palaeoclimate data from two climatically sensitive regions, one in the NW Himalayas (the transitional westerlies and ISM domain in the Spiti valley) and one in the core monsoon zone (Lonar lake) in central India; (iii) undertaken a regional overview to generate “snapshots” of selected time slices; and (iv) interpreted the spatial precipitation anomalies in terms of those caused by modern teleconnections. This approach must be considered only a first step towards identifying past teleconnections, as the boundary conditions in the past were significantly different from today and would have impacted the precipitation anomalies. As the Spiti valley is located in the active tectonic orogen of the Himalayas, it was essential to understand the role of regional tectonics to make valid interpretations of catchment erosion and detrital influx into the lake.
My approach of using integrated structural/morphometric and geomorphic signatures provided clear evidence for active tectonics in this area and demonstrated the suitability of these lacustrine sediments as palaeoseismic archives. The investigations on the lacustrine outcrops in the Spiti valley also provided information on changes in the seasonality of precipitation and the occurrence of frequent and intense periods (ca. 6.8-6.1 cal ka BP) of detrital influx, indicating extreme hydrological events in the past. A regional comparison for this time slice indicates a possible extended “break-monsoon like” mode of the monsoon that favors enhanced precipitation over the Tibetan plateau, the Himalayas, and their foothills. My studies on surface sediments from Lonar lake helped to identify environmentally sensitive proxies which could also be used to interpret palaeodata obtained from a ca. 10 m long core raised from the lake in 2008. The core encompasses the entire Holocene and is the first well-dated (by 14C) archive from the core monsoon zone of central India. My identification of authigenic evaporite gaylussite crystals within the core sediments provided evidence of exceptionally drier conditions during 4.7-3.9 and 2.0-0.5 cal ka BP. Additionally, isotopic investigations on these crystals provided information on eutrophication, stratification, and carbon cycling processes in the lake.
In the era of social networks, the internet of things, and location-based services, many online services produce a huge amount of data that carry valuable objective information, such as geographic coordinates and date-time stamps. These characteristics (parameters), in combination with a textual parameter, pose the challenge of discovering geospatiotemporal knowledge. This challenge requires efficient methods for clustering and pattern mining in spatial, temporal, and textual spaces.
In this thesis, we address the challenge of providing methods and frameworks for geospatiotemporal data analytics. As an initial step, we address the challenges of geospatial data processing: data gathering, normalization, geolocation, and storage. That initial step is the foundation for tackling the next challenge: geospatial clustering. The first step of this challenge is to design a method for online clustering of georeferenced data. This algorithm can be used as a server-side clustering algorithm for online maps that visualize massive georeferenced data. As the second step, we develop an extension of this method that additionally considers the temporal aspect of the data. For that, we propose a density- and intensity-based geospatiotemporal clustering algorithm with a fixed distance and time radius.
Each version of the clustering algorithm has its own use case that we show in the thesis.
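The fixed-distance/fixed-time idea behind such clustering can be sketched as a greedy, seed-based grouping. This is a simplified illustration, not the thesis algorithm; `st_cluster`, `eps_km`, and `eps_t` are assumed names:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    # great-circle distance between two (lat, lon) points in kilometres
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def st_cluster(points, eps_km, eps_t):
    """Greedy fixed-radius spatiotemporal clustering (illustrative sketch).

    points: list of (lat, lon, t) tuples. A point joins an existing cluster
    if it lies within eps_km of the cluster seed in space AND within eps_t
    of it in time; otherwise it seeds a new cluster.
    """
    clusters = []  # each cluster: {"seed": (lat, lon, t), "members": [...]}
    for lat, lon, t in points:
        for c in clusters:
            s_lat, s_lon, s_t = c["seed"]
            if (haversine_km((lat, lon), (s_lat, s_lon)) <= eps_km
                    and abs(t - s_t) <= eps_t):
                c["members"].append((lat, lon, t))
                break
        else:
            clusters.append({"seed": (lat, lon, t), "members": [(lat, lon, t)]})
    return clusters
```

A server-side variant would additionally weight clusters by point density or intensity before sending them to the map client, as described above.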
In the next chapter of the thesis, we look at spatiotemporal analytics from the perspective of the sequential rule mining challenge. We design and implement a framework that transforms data into textual geospatiotemporal data, i.e., data that contain geographic coordinates, time, and textual parameters. In this way, we address the challenge of applying pattern/rule mining algorithms in geospatiotemporal space. As an applicable case study, we propose spatiotemporal crime analytics: the discovery of spatiotemporal patterns of crimes in publicly available crime data.
The second part of the thesis is dedicated to applications and case studies. We design and implement an application that uses the proposed clustering algorithms to discover knowledge in data. Together with the application, we propose case studies for the analysis of georeferenced data in terms of situational and public-safety awareness.
The study of outcrop modeling is located at the interface between two fields of expertise, Sedimentology and Computing Geoscience, which respectively investigate and simulate the geological heterogeneity observed in the sedimentary record. Over recent years, modeling tools and techniques have been constantly improved. In parallel, the study of Phanerozoic carbonate deposits has emphasized the common occurrence of a random facies distribution along a single depositional domain. Although both fields of expertise are intrinsically linked during outcrop simulation, their respective advances have not been combined in the literature to enhance carbonate modeling studies. The present study re-examines the modeling strategy adapted to the simulation of shallow-water carbonate systems, based on a close relationship between field sedimentology and modeling capabilities. In the present study, an evaluation of three commonly used algorithms, Truncated Gaussian Simulation (TGSim), Sequential Indicator Simulation (SISim), and Indicator Kriging (IK), was performed for the first time using visual and quantitative comparisons on an ideally suited carbonate outcrop. The results show that the heterogeneity of carbonate rocks cannot be fully simulated using one single algorithm. The operating mode of each algorithm entails strengths as well as drawbacks, so that no single method can match all field observations carried out across the modeling area. Two end members in the spectrum of carbonate depositional settings, a low-angle Jurassic ramp (High Atlas, Morocco) and a Triassic isolated platform (Dolomites, Italy), were investigated to obtain a complete overview of the geological heterogeneity in shallow-water carbonate systems. Field sedimentology and statistical analysis performed on the type, morphology, distribution, and association of carbonate bodies, combined with palaeodepositional reconstructions, emphasize similar results.
At the basin scale (x 1 km), facies associations, composed of facies recording similar depositional conditions, display linear and ordered transitions between depositional domains. By contrast, at the bedding scale (x 0.1 km), individual lithofacies types show a mosaic-like distribution consisting of an arrangement of spatially independent lithofacies bodies along the depositional profile. The increase of spatial disorder from the basin to the bedding scale results from the influence of autocyclic factors on the transport and deposition of carbonate sediments. Scale-dependent types of carbonate heterogeneity are linked with the evaluation of algorithms in order to establish a modeling strategy that considers both the sedimentary characteristics of the outcrop and the modeling capabilities. A surface-based modeling approach was used to model depositional sequences. Facies associations were populated using TGSim to preserve ordered trends between depositional domains. At the lithofacies scale, a fully stochastic approach with SISim was applied to simulate a mosaic-like lithofacies distribution. This new workflow is designed to improve the simulation of carbonate rocks by modeling each scale of heterogeneity individually. In contrast to simulation methods applied in the literature, the present study considers that the use of one single simulation technique is unlikely to correctly model the natural patterns and variability of carbonate rocks. The implementation of different techniques customized for each level of the stratigraphic hierarchy provides the essential computing flexibility to model carbonate systems. Closer feedback between advances carried out in the fields of Sedimentology and Computing Geoscience should be promoted during future outcrop simulations for the enhancement of 3-D geological models.
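The truncation step at the heart of TGSim can be illustrated in a minimal form. This is purely an illustration: a real TGSim draws a spatially correlated Gaussian field from a variogram model and derives thresholds from target facies proportions, whereas here an uncorrelated field, fixed thresholds, and the facies labels are all assumptions:

```python
import numpy as np

# Stand-in Gaussian field (uncorrelated, for brevity only; TGSim uses a
# variogram-based, spatially correlated field).
rng = np.random.default_rng(0)
z = rng.normal(size=(50, 50))

# Two truncation thresholds produce three ordered facies classes; in a real
# workflow the thresholds follow from the desired facies proportions.
thresholds = [-0.5, 0.5]
facies = np.digitize(z, thresholds)  # values 0, 1, 2 (labels assumed)
```

The ordering imposed by the thresholds is what lets TGSim preserve linear transitions between depositional domains, in contrast to the fully stochastic SISim used at the lithofacies scale.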
This text is a contribution to the research on the worldwide success of evangelical Christianity and offers a new perspective on the relationship between late modern capitalism and evangelicalism. For this purpose, the utilization of affect and emotion in evangelicalism towards the mobilization of its members is examined in order to find out what similarities to their employment in late modern capitalism can be found. Different examples from within the evangelical spectrum are analyzed as affective economies in order to elaborate how affective mobilization is crucial for evangelicalism’s worldwide success. The pivotal point of this text is the exploration of how evangelicalism is able to activate the voluntary commitment of its members, financiers, and missionaries. Gathered here are examples where both spheres (evangelicalism and late modern capitalism) overlap and reciprocate, followed by a theoretical exploration of how the findings presented support a view of evangelicalism as an inner-worldly narcissism that contributes to an assumed re-enchantment of the world.
This thesis examines holiday photographs on Facebook and describes which socio-technical media practices take place within the social media platform through these photographs. Photographic practices are determined by active actions and social modes of use. Holiday photographs contribute, for example, to the structuring of travel routes and expectations, in that genre-specific motifs and framings are reproduced and repeated with the help of media. Practices of showing, sharing, and communicating are also integrated into Facebook's user interfaces through social plug-ins (like/share buttons) and tagging functions. In this way, user activities and technical processes become interconnected. Using the example of the automatic generation of holiday photographs on geotag pages, it is shown that social tagging contributes to the emergence and negotiation of geographical spaces and notions of place. Through the technical structuring of photographs on tagging pages, genre-specific motifs, photographic trends, and aesthetics become particularly visible. However, their visualization is also shaped by the algorithmic prioritization of individual content. Holiday photographs are thereby used for photographic profiling, since they enable the algorithmic capture and evaluation of user information. The thesis shows that the use of image-recognition methods and photographic data analyses contributes to optimized information extraction and to a standardization of photographs.
Against the background of the view that ethnic minorities represent a form of social organization, and taking into account the ambiguity of the concept of space, this study aims to develop, using examples from Romania, a concept with which the current relationship between ethnicity and space in the transformation process can be adequately analyzed and described.
Galaxies are among the most complex systems that can currently be modelled with a computer. A realistic simulation must take into account cosmology and gravitation as well as effects of plasma, nuclear, and particle physics that occur on very different time, length, and energy scales. The Milky Way is the ideal test bench for such simulations, because we can observe millions of its individual stars whose kinematics and chemical composition are records of the evolution of our Galaxy. Thanks to the advent of multi-object spectroscopic surveys, we can systematically study stellar populations in a much larger volume of the Milky Way. While the wealth of new data will certainly revolutionise our picture of the formation and evolution of our Galaxy and galaxies in general, the big-data era of Galactic astronomy also confronts us with new observational, theoretical, and computational challenges.
This thesis aims at finding new observational constraints to test Milky-Way models, primarily based on infra-red spectroscopy from the Apache Point Observatory Galactic Evolution Experiment (APOGEE) and asteroseismic data from the CoRoT mission. We compare our findings with chemical-evolution models and more sophisticated chemodynamical simulations. In particular we use the new powerful technique of combining asteroseismic and spectroscopic observations that allows us to test the time dimension of such models for the first time. With CoRoT and APOGEE (CoRoGEE) we can infer much more precise ages for distant field red-giant stars, opening up a new window for Galactic archaeology.
Another important aspect of this work is the forward-simulation approach that we pursued when interpreting these complex datasets and comparing them to chemodynamical models.
The first part of the thesis contains the first chemodynamical study conducted with the APOGEE survey. Our sample comprises more than 20,000 red-giant stars located within 6 kpc from the Sun, and thus greatly enlarges the Galactic volume covered with high-resolution spectroscopic observations. Because APOGEE is much less affected by interstellar dust extinction, the sample covers the disc regions very close to the Galactic plane that are typically avoided by optical surveys. This allows us to investigate the chemo-kinematic properties of the Milky Way's thin disc outside the solar vicinity. We measure, for the first time with high-resolution data, the radial metallicity gradient of the disc as a function of distance from the Galactic plane, demonstrating that the gradient flattens and even changes its sign for mid-plane distances greater than 1 kpc.
Furthermore, we detect a gap between the high- and low-[$\alpha$/Fe] sequences in the chemical-abundance diagram (associated with the thin and thick disc) that unlike in previous surveys can hardly be explained by selection effects. Using 6D kinematic information, we also present chemical-abundance diagrams cleaned from stars on kinematically hot orbits. The data allow us to confirm without doubt that the scale length of the (chemically-defined) thick disc is significantly shorter than that of the thin disc.
In the second part, we present our results of the first combination of asteroseismic and spectroscopic data in the context of Galactic Archaeology. We analyse APOGEE follow-up observations of 606 solar-like oscillating red giants in two CoRoT fields close to the Galactic plane. These stars cover a large radial range of the Galactic disc (4.5 kpc $\lesssim R_{\rm Gal}\lesssim15$ kpc) and a large age baseline (0.5 Gyr $\lesssim \tau\lesssim$ 13 Gyr), allowing us to study the age- and radius-dependence of the [$\alpha$/Fe] vs. [Fe/H] distributions. We find that the age distribution of the high-[$\alpha$/Fe] sequence appears to be broader than expected from a monolithically-formed old thick disc that stopped forming stars 10 Gyr ago. In particular, we discover a significant population of apparently young, [$\alpha$/Fe]-rich stars in the CoRoGEE data whose existence cannot be explained by standard chemical-evolution models. These peculiar stars are much more abundant in the inner CoRoT field LRc01 than in the outer-disc field LRa01, suggesting that at least part of this population has a chemical-evolution rather than a stellar-evolution origin, possibly due to a peculiar chemical-enrichment history of the inner disc. We also find that strong radial migration is needed to explain the abundance of super-metal-rich stars in the outer disc.
Finally, we use the CoRoGEE sample to study the time evolution of the radial metallicity gradient in the thin disc, an observable that has been the subject of observational and theoretical debate for more than 20 years. By dividing the CoRoGEE dataset into six age bins, performing a careful statistical analysis of the radial [Fe/H], [O/H], and [Mg/Fe] distributions, and accounting for the biases introduced by the observation strategy, we obtain reliable gradient measurements. The slope of the radial [Fe/H] gradient of the young red-giant population ($-0.058\pm0.008$ [stat.] $\pm0.003$ [syst.] dex/kpc) is consistent with recent Cepheid data. For the age range of $1-4$ Gyr, the gradient steepens slightly ($-0.066\pm0.007\pm0.002$ dex/kpc), before flattening again to reach a value of $\sim-0.03$ dex/kpc for stars with ages between 6 and 10 Gyr. This age dependence of the [Fe/H] gradient can be explained by a nearly constant negative [Fe/H] gradient of $\sim-0.07$ dex/kpc in the interstellar medium over the past 10 Gyr, together with stellar heating and migration. Radial migration also offers a new explanation for the puzzling observation that intermediate-age open clusters in the solar vicinity (unlike field stars) tend to have higher metallicities than their younger counterparts. We suggest that non-migrating clusters are more likely to be kinematically disrupted, which creates a bias towards high-metallicity migrators from the inner disc and may even steepen the intermediate-age cluster abundance gradient.
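The basic gradient measurement behind these numbers can be sketched as a least-squares fit of [Fe/H] against Galactocentric radius within an age bin. This is a toy illustration on synthetic data, not the thesis' bias-corrected analysis; `radial_gradient` and the mock-sample parameters are assumptions:

```python
import numpy as np

def radial_gradient(r_gal, feh):
    """Least-squares slope of [Fe/H] vs. Galactocentric radius, in dex/kpc."""
    slope, _intercept = np.polyfit(r_gal, feh, 1)
    return slope

# Synthetic red giants scattered around an assumed -0.06 dex/kpc gradient,
# roughly mimicking the radial range quoted in the text.
rng = np.random.default_rng(1)
r = rng.uniform(5.0, 14.0, 500)
feh = -0.06 * (r - 8.0) + rng.normal(0.0, 0.05, 500)
```

Repeating such a fit per age bin, after correcting for the survey selection function, yields the age dependence of the gradient described above.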
Adsorption layers of soluble surfactants enable and govern a variety of phenomena in surface and colloidal sciences, such as foams. The ability of a surfactant solution to form wet foam lamellae is governed by the surface dilatational rheology. Only systems having a non-vanishing imaginary part in their surface dilatational modulus, E, are able to form wet foams. The aim of this thesis is to illuminate the dissipative processes that give rise to the imaginary part of the modulus. Two controversial models are discussed in the literature. The reorientation model assumes that the surfactants adsorb in two distinct states, differing in their orientation. This model is able to describe the frequency dependence of the modulus E. However, it assumes reorientation dynamics in the millisecond time regime. In order to assess this model, we designed an SHG pump-probe experiment that addresses the orientation dynamics. The results obtained reveal that the orientation dynamics occur in the picosecond time regime, in strong contradiction with the two-state model. The second model regards the interface as an interphase. The adsorption layer consists of a topmost monolayer and an adjacent sublayer. The dissipative process is due to the molecular exchange between both layers. The assessment of this model required the design of an experiment that discriminates between the surface compositional term and the sublayer contribution. Such an experiment has been successfully designed, and results on elastic and viscoelastic surfactants provided evidence for the correctness of the model. Because of its inherent surface specificity, surface SHG is a powerful analytical tool that can be used to gain information on the molecular dynamics and reorganization of soluble surfactants. SHG measurements are central elements of both experiments; however, they impose several structural requirements on the model system. During the course of this thesis, a proper model system has been identified and characterized.
The combination of several linear and nonlinear optical techniques, allowed for a detailed picture of the interfacial architecture of these surfactants.
With the downscaling of CMOS technologies, the radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become an imperative to address the SET effects in the early phases of the radiation-hard IC design. In general, the soft error mitigation solutions should accommodate both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should address synergistically three main aspects: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, accuracy of predictive SET models, and efficiency of SET mitigation measures are still critical issues. Therefore, this work addresses the following topics: (i) Characterization and modeling of SET effects in standard combinational cells, (ii) Static mitigation of SET effects in standard combinational cells, and (iii) Online particle detection, as a support for dynamic soft error mitigation.
Since standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for the Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for SPICE-based standard cell characterization with a reduced number of simulations, improved SET models, and an optimized SET sensitivity database. It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on the characterization results, fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width, and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating, and irradiation parameters. In contrast to the state-of-the-art characterization methodologies, which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way the amount of characterization data in the SET sensitivity database is reduced significantly.
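The coefficient-based LUT idea can be illustrated schematically. The model form, cell name, coefficient values, and units below are invented for illustration only; the actual fitting models in the thesis are more elaborate and based on superposition:

```python
# Hypothetical coefficient look-up: each standard cell stores only the fitted
# coefficients of its SET sensitivity model instead of a full table of
# simulated responses. Here a toy linear model is assumed:
#   Qcrit(C_load, Vdd) = a + b*C_load + c*Vdd   (illustrative form and units)
SENSITIVITY_DB = {
    "NAND2_X1": {"a": 1.2, "b": 0.8, "c": 2.5},  # fF, V -> fC (made-up numbers)
}

def critical_charge(cell, c_load_ff, vdd_v):
    """Evaluate the stored fitting model for a cell's critical charge."""
    k = SENSITIVITY_DB[cell]
    return k["a"] + k["b"] * c_load_ff + k["c"] * vdd_v
```

A full-LUT approach would instead store one simulated Qcrit entry per (cell, load, voltage, input-state) combination; storing coefficients collapses that grid to a handful of numbers per cell.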
The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, significant improvement of the overall SER can be achieved with minimum area, delay, and power overheads. For the SET mitigation in standard cells, it is essential to employ techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By inserting two decoupling cells at the output of a target cell, the critical charge of the cell’s output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work introduces a comprehensive characterization of gate-level mitigation techniques aimed at quantifying their impact on the SET robustness improvement, as well as the introduced area, delay, and power overhead per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the required effort in the subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selecting the most appropriate mitigation solution for a given design.
As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of the energetic particles causing the soft errors. This allows activating the power-greedy fault-tolerant configurations based on N-modular redundancy only at high radiation levels. To enable such functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing pulse stretching inverters connected in parallel enables measuring the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to the standard detectors which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency and power consumption, and immunity to error accumulation.
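The digital post-processing of such a detector might be sketched as follows. This is a hypothetical illustration: `summarize_sets` and the width bins are assumptions, not taken from the thesis; the mapping "count → flux, width histogram → LET variation" follows the description above:

```python
from collections import Counter

def summarize_sets(widths_ps, bins=(0, 100, 200, 400)):
    """Count detected SET pulses (a proxy for particle flux) and histogram
    their widths, whose distribution reflects particle LET variation.

    widths_ps: measured SET pulse widths in picoseconds (illustrative unit).
    Returns (total_count, {(lo, hi): count}).
    """
    hist = Counter()
    for w in widths_ps:
        for lo, hi in zip(bins, bins[1:]):
            if lo <= w < hi:
                hist[(lo, hi)] += 1
                break
    return len(widths_ps), dict(hist)
```

A shift of the histogram towards wider pulses would then indicate higher-LET particles and could trigger the switch to an N-modular-redundant configuration.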
The results achieved in this thesis can serve as a basis for establishing an overall soft-error-aware database for a given digital library, and a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The next step will be to evaluate the achieved results with irradiation experiments.
A systems biological approach towards the molecular basis of heterosis in Arabidopsis thaliana
(2011)
Heterosis is defined as the superiority in performance of heterozygous genotypes compared to their corresponding genetically different homozygous parents. This phenomenon has been known since the beginning of the last century and has been widely used in plant breeding, but the underlying genetic and molecular mechanisms are not well understood. In this work, a systems biological approach based on molecular network structures is proposed to contribute to the understanding of heterosis. Hybrids are likely to contain additional regulatory possibilities compared to their homozygous parents and, therefore, they may be able to correctly respond to a higher number of environmental challenges, which leads to a higher adaptability and, thus, the heterosis phenomenon. In the network hypothesis for heterosis presented in this work, more regulatory interactions are expected in the molecular networks of the hybrids compared to the homozygous parents. Partial correlations were used to assess this difference in the global interaction structure of regulatory networks between the hybrids and the homozygous genotypes. This network hypothesis for heterosis was tested on metabolite profiles as well as gene expression data of the two parental Arabidopsis thaliana accessions C24 and Col-0 and their reciprocal crosses. These plants are known to show a heterosis effect in their biomass phenotype. The hypothesis was confirmed for mid-parent and best-parent heterosis for both hybrids in our experimental metabolite as well as gene expression data. It was shown that this result is influenced by the cutoffs used during the analyses. Too strict filtering resulted in sets of metabolites and genes for which the network hypothesis for heterosis does not hold true for either hybrid regarding mid-parent as well as best-parent heterosis.
In an over-representation analysis, the genes that show the largest heterosis effects according to our network hypothesis were compared to genes of heterotic quantitative trait loci (QTL) regions. Separately for either hybrid regarding mid-parent as well as best-parent heterosis, a significantly larger overlap between the resulting gene lists of the two different approaches towards biomass heterosis was detected than expected by chance. This suggests that each heterotic QTL region contains many genes influencing biomass heterosis in the early development of Arabidopsis thaliana. Furthermore, this integrative analysis led to a confinement and an increased confidence in the group of candidate genes for biomass heterosis in Arabidopsis thaliana identified by both approaches.
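The partial-correlation comparison underlying the network hypothesis can be sketched with a generic estimator via the precision (inverse covariance) matrix. This is a minimal illustration of the statistical tool, not the thesis pipeline; `partial_correlations` is an assumed name:

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix via the inverse covariance matrix.

    data: (n_samples, n_variables) array. Entry (i, j) estimates the
    correlation between variables i and j after removing the linear
    effects of all remaining variables.
    """
    precision = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```

Comparing the number of strong partial correlations in hybrid versus parental profiles would then operationalize the "more regulatory interactions in hybrids" prediction; indirect associations (mediated through a third variable) are suppressed, as the test below shows for a simple chain x1 → x2 → x3.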
Non-mycorrhizal fungal endophytes are able to colonize roots internally without causing visible disease symptoms, establishing neutral or mutualistic associations with plants. These fungi, known as non-clavicipitaceous endophytes, have a broad host range of monocot and eudicot plants and are highly diverse. Some of them promote plant growth and confer increased abiotic-stress tolerance and disease resistance. In view of such possible effects on host plants, this work aimed to isolate and characterize native fungal root endophytes from tomato (Lycopersicon esculentum Mill.) and to analyze their effects on plant development, plant resistance, and fruit yield and quality, together with the model endophyte Piriformospora indica. Fifty-one new fungal strains were isolated from disinfected tomato roots from four different crop sites in Colombia. These isolates were roughly characterized, and fourteen potential endophytes were further analyzed concerning their taxonomy, their root colonization capacity, and their impact on plant growth. Sequencing of the ITS region from the ribosomal RNA gene cluster and in-depth morphological characterisation revealed that they correspond to different phylogenetic groups within the phylum Ascomycota. Nine different morphotypes were described, including six dark septate endophytes (DSE) that did not correspond to the Phialocephala group. Detailed confocal microscopy analysis showed various colonization patterns of the endophytes inside the roots, ranging from epidermal penetration to hyphal growth through the cortex. Tomato pot experiments under glasshouse conditions showed that they differentially affect plant growth depending on colonization time and inoculum concentration.
Three new isolates (two unknown fungal endophytes, DSE48 and DSE49, and one identified as Leptodontidium orchidicola) with neutral or positive effects were selected and tested in several experiments for their influence on vegetative growth, fruit yield and quality, and their ability to diminish the impact of the pathogen Verticillium dahliae on tomato plants. Although plant growth promotion by all three fungi was observed in young plants, vegetative growth parameters were not affected after 22 weeks of cultivation, except for a reproducible increase in root diameter caused by the endophyte DSE49. Additionally, L. orchidicola increased the biomass and glucose content of tomato fruits, but only at an early harvest date and at a certain level of root colonization. Concerning bioprotective effects, the endophytes DSE49 and L. orchidicola significantly decreased disease symptoms caused by the pathogen V. dahliae, but only at a low dose of the pathogen. To analyze whether the model root endophytic fungus Piriformospora indica could be suitable for application in production systems, its impact on tomato was evaluated. Similarly to the new fungal isolates, significant differences in vegetative growth parameters were only observable in young plants, but protection against V. dahliae was seen in one experiment even at a high dose of the pathogen. Like the DSE L. orchidicola, P. indica increased the number and biomass of marketable tomatoes only at the beginning of fruit setting, and this did not lead to a significantly higher total yield. Whether the effects on growth are due to better mineral nutrition of the plant was analyzed in barley in comparison to the arbuscular mycorrhizal fungus Glomus mosseae. While the mycorrhizal fungus increased nitrogen and phosphate uptake of the plant, no such effect was observed for P. indica.
In summary, this work shows that many different fungal endophytes can also be isolated from the roots of crops and that these isolates can have positive effects on early plant development. This does not, however, lead to an increase in total yield or an improvement in fruit quality of tomatoes under greenhouse conditions.
Mergers are a central building block of industrial economics. This book examines the influence that the spatial dimension exerts on a merger. A basic model is developed and, beyond it, a large number of extensions are presented. The reader is thus given the opportunity to gain a deep understanding of mergers under spatial competition.
Carbohydrate recognition is a ubiquitous principle underlying many fundamental biological processes like fertilization, embryogenesis and viral infections. How carbohydrate specificity and affinity induce a molecular event, however, is not well understood. One such example is bacteriophage P22, which binds and infects three distinct Salmonella enterica (S.) hosts. It recognizes and depolymerizes repetitive carbohydrate structures of the O antigen in its host's outer-membrane lipopolysaccharide. This is mediated by tailspikes, mainly β-helical appendages on the short, non-contractile tail apparatus of phage P22 (a podovirus). The O antigen of all three Salmonella enterica hosts is built from tetrasaccharide repeating units consisting of an identical main chain with a distinctive 3,6-dideoxyhexose substituent that is crucial for P22 tailspike recognition: tyvelose in S. Enteritidis, abequose in S. Typhimurium and paratose in S. Paratyphi. In the first study, the complexes of the P22 tailspike with its hosts' O-antigen octasaccharides were characterized. The S. Paratyphi octasaccharide binds less tightly (ΔΔG ≈ 7 kJ/mol) to the tailspike than those of the other two hosts. Crystal structure analysis of P22 tailspike co-crystallized with S. Paratyphi octasaccharides revealed interactions different from those observed before in tailspike complexes with S. Enteritidis and S. Typhimurium octasaccharides. These different interactions arise from a structural rearrangement in the S. Paratyphi octasaccharide. It results in an unfavorable glycosidic-bond Φ/Ψ angle combination that was also observed when the S. Paratyphi octasaccharide conformation was analyzed in an aprotic environment. Contributions of individual protein surface contacts to binding affinity were analyzed, showing that conserved structural waters mediate specific recognition of all three different Salmonella host O antigens.
Although the different O-antigen structures show distinct binding behavior on the tailspike surface, all are recognized by phage P22 and all three hosts are infected. Hence, in a second study, binding measurements revealed that multivalent O antigen bound with high avidity to the P22 tailspike. Dissociation rates of the polymer were three times slower than for an octasaccharide fragment, pointing towards high affinity for O-antigen polysaccharide. Furthermore, when phage P22 was incubated with lipopolysaccharide aggregates before plating on S. Typhimurium cells, its infectivity was significantly reduced. Therefore, in a third study, the role of carbohydrate recognition in the infection process was characterized. It was shown that large S. Typhimurium lipopolysaccharide aggregates triggered DNA release from the phage capsid in vitro. This provides evidence that phage P22 does not use a second receptor on the Salmonella surface for infection. P22 tailspike binding and cleavage activity modulate DNA egress from the phage capsid. DNA release occurred more slowly when the phage possessed mutant tailspikes with lower hydrolytic activity, and it was not induced if the lipopolysaccharides contained O-antigen polymer shortened by tailspikes. Furthermore, the onset of DNA release was delayed by tailspikes with reduced binding affinity. The results suggest a model for P22 infection induced by carbohydrate recognition: tailspikes position the phage on Salmonella enterica, and their hydrolytic activity forces a central structural protein of the phage assembly, the plug protein, onto the host's membrane surface. Upon membrane contact, a conformational change has to occur in the assembly to eject DNA and pilot proteins from the phage and establish infection. Earlier studies had investigated DNA ejection in vitro solely for viruses with long, non-contractile tails (siphoviruses) recognizing protein receptors.
The podovirus P22 studied in this work was therefore the first example of a short-tailed phage with an LPS-recognition organelle that can trigger DNA ejection in vitro. O-antigen-binding and -cleaving tailspikes, however, are widely distributed in the phage biosphere, for example in siphovirus 9NA. Crystal structure analysis of the 9NA tailspike revealed a fold very similar to that of the P22 tailspike, although the two share only 36 % sequence identity. Moreover, the 9NA tailspike possesses similar enzyme activity towards S. Typhimurium O antigen, involving conserved amino acids. These underlie a DNA ejection process from siphovirus 9NA that is likewise triggered by lipopolysaccharide aggregates. 9NA expelled its DNA 30 times faster than podovirus P22, although the associated conformational change is controlled by a similarly high activation barrier. The difference in DNA ejection velocity mirrors the different tail morphologies and their efficiency in translating a carbohydrate recognition signal into action.
11beta-HSD1 regulates the intracellular cortisol concentration by regenerating cortisone, e.g. from the bloodstream, to cortisol. It therefore represents an important element in glucocorticoid-mediated gene regulation. 11beta-HSD1 is ubiquitously expressed, at high levels particularly in liver, adipose tissue and smooth muscle cells. The importance of 11beta-HSD1 in liver and adipose tissue in particular has been demonstrated repeatedly. In the liver, increased activity due to overexpression in mice led to an increased rate of gluconeogenesis. Furthermore, increased expression and enzyme activity of 11beta-HSD1 in subcutaneous and visceral adipose tissue has been shown to be associated with obesity, insulin resistance and dyslipidemia. Little is known, however, about its regulation. To investigate promoter activity, the promoter region of 11beta-HSD1 from -3034 to +188, upstream and downstream of the translation and transcription start, was cloned. Eight promoter fragments were analyzed by dual-luciferase assay in human HepG2 cells and in undifferentiated and differentiated murine 3T3-L1 cells. Subsequently, binding of the TATA-binding protein (TBP) and of CCAAT/enhancer-binding proteins (C/EBP) to selected promoter regions was analyzed by non-radioactive EMSA. After characterization of the promoter, specific endogenous and exogenous regulators were investigated. Fatty acids modulate the development of obesity and insulin resistance. Their effect is mediated in part via PPARgamma and can be modified by the incretin GIP (glucose-dependent insulinotropic peptide). Accordingly, the effects of different fatty acids, of the PPARgamma agonist rosiglitazone and of the incretin GIP on the expression and enzyme activity of 11beta-HSD1 were investigated. This was realized in vitro, in animal experiments and in human in vivo studies.
Finally, two single nucleotide polymorphisms (SNPs) in the promoter region of 11beta-HSD1 were analyzed in cell culture with respect to potential functionality, and their association with type 2 diabetes mellitus and body weight was investigated in the MeSyBePo cohort of about 1,800 individuals. The luciferase assays showed a basal, cell-specific regulation of 11beta-HSD1, with binding of a repressor demonstrated in all three cell types investigated. In addition, possible binding of TBP and of C/EBP proteins to various positions could be shown. Transactivation assays with the C/EBP proteins alpha, beta and delta likewise showed cell-specific regulation of the 11beta-HSD1 promoter. The activity and expression of 11beta-HSD1 were specifically modified by the endogenous and exogenous factors investigated here, which could be demonstrated both in vitro and in vivo in different model systems. Characterization of the MeSyBePo cohort revealed no direct associations between polymorphism and clinical phenotype, but trends towards increased body weight and type 2 diabetes mellitus depending on genotype. Based on the data from the luciferase assays and the EMSA analyses, the promoter of 11beta-HSD1 could be characterized in more detail. It shows variable and cell-specific regulation. The C/EBP proteins alpha, beta and delta are important regulators, particularly in HepG2 cells. The in vivo studies revealed regulation of 11beta-HSD1 by endogenous, exogenous and pharmacological substances, which could be confirmed and characterized in more detail in the cell culture experiments.
In molecular diagnostics there is a need for fast and specific test systems that can be used either in laboratory diagnostics or in point-of-care settings. To achieve this goal, miniaturization and parallelization are at the center of research interest. The leading method in DNA analytics is currently real-time PCR. This technology faces technological hurdles regarding multiplexing, since currently at most four parameters can be analyzed in parallel in one assay. Microarrays, in contrast, provide the prerequisites needed to serve as tools for multiparameter analysis in a wide range of applications. One focus of this work was to develop multiplex PCRs and diagnostic microarrays that enable fast and reliable multiparameter analytics for analytical questions, avoiding the limitations of current detection methods. As applications, a detection system for eight relevant poultry pathogens for surveillance in poultry farming and a detection system for the identification of potentially allergenic food ingredients were developed. Besides the development of suitable PCR and multiplex PCR methods and of specific microarrays for the detection of the target sequences, a further focus of this work was the deeper integration of DNA amplification and microarray technology. On-chip amplification is one way to integrate DNA amplification and detection in a single reaction step. Accordingly, the PCR and multiplex PCR methods developed in this work for the detection of potentially allergenic food ingredients were adapted for on-chip amplification, and reaction conditions enabling multiparameter analysis on the chip were tested.
The developed on-chip PCR methods showed high specificity in both single and multiplex on-chip PCR. A sensitivity of 10 copies, or <10 ppm, was demonstrated in single on-chip PCRs for the detection of allergenic food ingredients. In multiplex on-chip PCRs, allergenic contaminations of 10-100 ppm could be specifically detected in different foods. A further step towards possible use in point-of-care settings is the application of an isothermal amplification method. The advantage of such a method is that the otherwise required thermocycling can be dispensed with. This simplifies the integration of on-chip amplification into mobile analysis devices or lab-on-chip systems and qualifies the method for use in point-of-care settings. In this work, a still young isothermal amplification method, helicase-dependent amplification (HDA), was tested for its suitability for integration on a microarray. The first on-chip HDA to date could thus be developed for single and duplex detection of pathogens.
Potato is the fourth most important food crop in the world. Especially in tropical and sub-tropical potato production, drought is a yield-limiting factor, and potato is sensitive to water stress. Potato yield loss under water stress could be reduced by using tolerant varieties and adjusted agronomic practices. Direct selection for yield under water-stressed conditions requires long selection cycles; identification of markers for marker-assisted selection may therefore speed up breeding. The objective of this thesis is to identify morphological markers for drought tolerance by continuously monitoring plant growth and canopy temperature with an automatic phenotyping system.
The phenotyping was performed in drought-stress experiments conducted in the screenhouse with 64 genotypes of population A (2015 and 2016) and 21 genotypes of population B (2017 and 2018). Drought tolerance was quantified as the deviation of the relative tuber starch yield from the experimental median (DRYM) or from the parent median (DRYMp). Relative tuber starch yield is the starch yield under drought stress relative to the average starch yield of the respective cultivar under control conditions in the same experiment. The specific DRYM value was calculated from the yield data of the same experiment; the global DRYM was calculated from yield data combined over years of the respective population or across multiple experiments, including the VALDIS and TROST experiments (2011-2016).
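The DRYM index as described above can be sketched in a few lines. The data layout (per-genotype lists of replicate yields) and the genotype names are illustrative assumptions, not the thesis' actual pipeline:

```python
# Sketch of the DRYM drought-tolerance index: relative starch yield
# (stress vs. control, per genotype) centered on the experimental median.
from statistics import mean, median

def relative_starch_yield(stress_yields, control_yields):
    """Mean starch yield under drought relative to the mean control
    yield of the same genotype (per experiment in the thesis)."""
    return {g: mean(stress_yields[g]) / mean(control_yields[g])
            for g in stress_yields}

def drym(stress_yields, control_yields):
    """DRYM: deviation of relative starch yield from the experimental median."""
    rel = relative_starch_yield(stress_yields, control_yields)
    med = median(rel.values())
    return {g: rel[g] - med for g in rel}

# toy example with three hypothetical genotypes, two replicates each
stress  = {"G1": [40, 44], "G2": [30, 32], "G3": [36, 38]}
control = {"G1": [50, 54], "G2": [52, 48], "G3": [50, 50]}
print(drym(stress, control))  # positive = more tolerant than the median genotype
```

A genotype at the experimental median gets a DRYM of zero; tolerant genotypes score positive, sensitive ones negative.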
Analysis of variance found a significant effect of genotype on DRYM, indicating that the tolerance variation required for marker identification was present in both populations.
Canopy growth was monitored six times a day over five to ten weeks by a laser scanner system, yielding information on leaf area, plant height and leaf angle for population A, and additionally on leaf inclination and light penetration depth for population B. Canopy temperature was measured 48 times a day over six to seven weeks by infrared thermometry (IRT) in population B. From the continuous IRT surface temperature data set, the canopy temperature for each plant was selected by matching the time stamps of the IRT data with the laser scanner data.
Mean, maximum, range and growth-rate values were calculated from the continuous laser scanner measurements of the respective canopy parameters. Among these, the maximum and mean values under long-term stress conditions correlated better with DRYM values calculated in the same experiment than growth-rate and diurnal-range values did. Therefore, the drought tolerance index was predicted from the maximum and mean values of the canopy parameters.
The tolerance index for a specific experiment was predicted linearly by simple regression models from individual canopy parameters under long-term stress conditions in population A (2016) and population B (2017 and 2018). Among the canopy parameters, maximum light penetration depth (2017), mean leaf angle (2016, 2017 and 2018), mean leaf inclination or mean canopy temperature depression (2017 and 2018), and maximum plant height (2017) were selected as tolerance predictors. However, no single parameter was sufficient to predict DRYM. Therefore, several independent parameters were integrated in a multiple regression model.
In the multiple regression models, experiment-specific DRYM values in population A were predicted from mean leaf angle (2016). In population B, specific tolerance could be predicted from maximum light penetration depth and mean leaf inclination (2017), from mean leaf inclination (2018), or from mean canopy temperature depression and mean leaf angle (2018).
In the data combined over seasons of population A, the multiple linear regression model selected maximum plant height and mean leaf angle as tolerance predictors. In population B, mean leaf inclination was selected as tolerance predictor. In population A, however, the variation explained by the final model was too low.
Furthermore, the average tolerances relative to the parent median (2011-2018) across FGH plants or all plants (FGH and field) were predicted from maximum plant height (population A) and from maximum plant height and mean leaf inclination (population B). Altogether, canopy parameters could be used as markers for drought tolerance. Water-stress breeding in potato could therefore be sped up by using leaf inclination, light penetration depth, plant height and canopy temperature depression as markers for drought tolerance, especially under long-term stress conditions.
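The multiple-regression step described above amounts to an ordinary least-squares fit of DRYM on a few canopy parameters. The sketch below uses invented stand-in numbers, not data from the thesis, and the choice of predictors (maximum plant height, mean leaf angle) merely mirrors the ones named for population A:

```python
# Minimal sketch: predict a tolerance index (DRYM) from canopy parameters
# by multiple linear regression with an intercept term.
import numpy as np

# hypothetical per-genotype predictors under long-term stress
max_plant_height = np.array([32.0, 28.5, 35.1, 30.2, 33.4])
mean_leaf_angle  = np.array([48.0, 55.2, 44.3, 51.0, 46.8])
drym_values      = np.array([0.05, -0.10, 0.12, -0.02, 0.07])

# design matrix: intercept column plus the two canopy parameters
X = np.column_stack([np.ones_like(drym_values), max_plant_height, mean_leaf_angle])
coef, *_ = np.linalg.lstsq(X, drym_values, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((drym_values - pred) ** 2) / np.sum((drym_values - drym_values.mean()) ** 2)
print(coef, r2)  # fitted coefficients and explained variation
```

The explained variation (r2) is what the thesis reports as too low for the combined-season model of population A.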
The concept of hydrologic connectivity summarizes all flow processes that link separate regions of a landscape. As such, it is a central theme in the field of catchment hydrology, with influence on neighboring disciplines such as ecology and geomorphology. It is widely acknowledged to be an important key in understanding the response behavior of a catchment and has at the same time inspired research on internal processes over a broad range of scales. From this process-hydrological point of view, hydrological connectivity is the conceptual framework to link local observations across space and scales.
This is the context in which the four studies that make up this thesis were conducted. The focus was on structures and their spatial organization as an important control on preferential subsurface flow. Each experiment covered a part of the conceptualized flow path from hillslopes to the stream: soil profile, hillslope, riparian zone, and stream.
For each study site, the structures most characteristic of the investigated domain and scale, such as slope deposits and peat layers, were identified based on preliminary or previous investigations or literature reviews. Additionally, further structural data were collected and topographical analyses were carried out. Flow processes were observed either through response observations (soil moisture changes or discharge patterns) or through direct measurement (advective heat transport). Based on these data, the flow relevance of the characteristic structures was evaluated, especially with regard to hillslope-to-stream connectivity.
Results of the four studies revealed a clear relationship between characteristic spatial structures and the hydrological behavior of the catchment. In particular, the spatial distribution of structures throughout the study domain and their interconnectedness were crucial for the establishment of preferential flow paths and for their relevance to large-scale processes. Plot- and hillslope-scale irrigation experiments showed that the macropores of a heterogeneous, skeletal soil enabled preferential flow paths at the scale of centimeters through the otherwise unsaturated soil. These flow paths connected throughout the soil column and across the hillslope and facilitated substantial amounts of vertical and lateral flow through periglacial slope deposits.
In the riparian zone of the same headwater catchment, the connectivity between hillslopes and stream was controlled by topography and by the interplay between characteristic subsurface structures and the geomorphological heterogeneity of the stream channel. At the small scale (1 m to 10 m), the highest gains always occurred at steps along the longitudinal streambed profile, which also controlled discharge patterns at the large scale (100 m) during base flow conditions (number of steps per section). During medium and high flow conditions, however, the impact of topography and of parafluvial flow through riparian zone structures prevailed and dominated the large-scale response patterns.
In the streambed of a lowland river, low permeability peat layers affected the connectivity between surface water and groundwater, but also between surface water and the hyporheic zone. The crucial factor was not the permeability of the streambed itself, but rather the spatial arrangement of flow-impeding peat layers, causing increased vertical flow through narrow “windows” in contrast to predominantly lateral flow in extended areas of high hydraulic conductivity sediments.
These results show that the spatial organization of structures was an important control for hydrological processes at all scales and study areas. In a final step, the observations from different scales and catchment elements were put in relation and compared. The main focus was on the theoretical analysis of the scale hierarchies of structures and processes and the direction of causal dependencies in this context. Based on the resulting hierarchical structure, a conceptual framework was developed which is capable of representing the system’s complexity while allowing for adequate simplifications.
The resulting concept of the parabolic scale series is based on the insight that flow processes in the terrestrial part of the catchment (soil and hillslopes) converge. This means that small-scale processes assemble and form large-scale processes and responses. Processes in the riparian zone and the streambed, however, are not well represented by the idea of convergence. Here, the large-scale catchment signal arrives and is modified by structures in the riparian zone, stream morphology, and the small-scale interactions between surface water and groundwater. Flow paths diverge and processes can better be represented by proceeding from large scales to smaller ones. The catchment-scale representation of processes and structures is thus the conceptual link between terrestrial hillslope processes and processes in the riparian corridor.
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails incorporating the information in the measurements into the model to gain more insight into a given state governed by a noisy state-space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete, and hence uncertainties are approximated by means of probabilities. Time-continuous filtering therefore holds promise for wider usefulness, for it offers a means of combining noisy measurements with an imperfect model to provide more insight into a given state.
The solution to the time-continuous nonlinear Gaussian filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution, and numerical approximations based on Taylor expansions above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo sampling have been resorted to. Chief among these are sequential Monte Carlo methods (or particle filters), for they allow online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and the computational costs arising from resampling.
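The predict / weight / resample cycle of the bootstrap particle filter (whose resampling step the feedback particle filters discussed below aim to avoid) can be sketched for a toy one-dimensional state-space model. The model and all parameters are invented for illustration:

```python
# Bootstrap particle filter for x_{k+1} = 0.9 x_k + process noise,
# observed as y_k = x_k + measurement noise.
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_particles = 50, 500

# simulate a true trajectory and noisy observations
x_true = np.zeros(n_steps)
for k in range(1, n_steps):
    x_true[k] = 0.9 * x_true[k - 1] + rng.normal(0.0, 0.5)
y = x_true + rng.normal(0.0, 0.3, n_steps)

particles = rng.normal(0.0, 1.0, n_particles)
estimates = np.zeros(n_steps)
for k in range(n_steps):
    # predict: propagate every particle through the model
    particles = 0.9 * particles + rng.normal(0.0, 0.5, n_particles)
    # weight: Gaussian observation likelihood
    w = np.exp(-0.5 * ((y[k] - particles) / 0.3) ** 2)
    w /= w.sum()
    estimates[k] = np.sum(w * particles)
    # resample (multinomial) to combat weight degeneracy
    particles = rng.choice(particles, size=n_particles, p=w)

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
print(rmse)
```

Without the resampling line the weights quickly concentrate on a handful of particles (degeneracy); with it, repeated copying of few particles causes the sample impoverishment mentioned above.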
The goals of this thesis are to: i) review the derivation of the Kushner-Stratonovich equation from first principles and its extant numerical approximation methods; ii) study feedback particle filters as a way of avoiding resampling in particle filters; iii) study joint state and parameter estimation in time-continuous settings; iv) apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô integrals and stochastic partial differential equations and their Stratonovich counterparts is introduced in anticipation of feedback particle filters. With these ideas, and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on a coupling of prediction and analysis measures are proposed. They register a better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation to the advection and wave equations with spatially varying velocity. Two methods are employed: Metropolis-Hastings with a filter likelihood, and a dual filter comprising a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.
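For the ensemble Kalman-Bucy filter that the dual filter builds on, a minimal scalar sketch may help. This uses the stochastic (perturbed-observation) variant with an Euler-Maruyama discretization; the model, parameters, and choice of variant are illustrative assumptions, not the thesis' setup:

```python
# Ensemble Kalman-Bucy filter for dx = a x dt + q dW, observed continuously
# as dy = h x dt + r dV; each member is nudged by a perturbed innovation
# with gain K = P h / r^2, P being the ensemble variance.
import numpy as np

rng = np.random.default_rng(42)
a, q, h, r = -0.5, 0.5, 1.0, 0.1
dt, n_steps, n_ens = 0.01, 1000, 100

x = 0.0                               # true signal
ens = rng.normal(0.0, 0.5, n_ens)     # initial ensemble
errs = np.zeros(n_steps)
for k in range(n_steps):
    # true signal and observation increment
    x += a * x * dt + q * np.sqrt(dt) * rng.normal()
    dy = h * x * dt + r * np.sqrt(dt) * rng.normal()
    # Kalman gain from the empirical ensemble variance
    P = np.var(ens, ddof=1)
    K = P * h / r ** 2
    # propagate each member and correct it with a perturbed innovation
    dW = rng.normal(0.0, 1.0, n_ens)
    dV = rng.normal(0.0, 1.0, n_ens)
    ens += a * ens * dt + q * np.sqrt(dt) * dW \
         + K * (dy - h * ens * dt - r * np.sqrt(dt) * dV)
    errs[k] = ens.mean() - x

rmse = np.sqrt(np.mean(errs[200:] ** 2))  # skip the initial transient
print(rmse)
```

The innovation term is where the structure mentioned above enters; deterministic variants replace the perturbed member observation with the ensemble-mean form.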
To grasp the current transformation of the public sphere in the digital age, public sphere theory needs a broadened perspective, one that takes into account not only mass-media discourse but also the change of social practices and institutional structures. The aim of this book is to develop the foundations of such a perspective on the theory of digital public spheres. In the proposed approach, following John Dewey, the public sphere is understood as a process. His processual and functional conception of the public sphere is particularly original and distinguishes his approach from other conceptions of the public sphere. The book offers both a systematic reconstruction and interpretation of John Dewey's philosophy and a proposal for interpreting digital change in terms of social theory.
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th, 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information: how black holes and neutron stars are formed, what neutron stars are composed of, and how the Universe expands; they also allow testing general relativity in the highly dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key aspect of GW searches are waveform models, which encapsulate our best predictions for the gravitational radiation under a certain set of parameters and need to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, as well as the development of the Laser Interferometer Space Antenna (LISA), demand accurate waveform models. While available models suffice to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, those for sources from both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetric masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signal parameters due to the presence of a foreground of many sources that overlap in the frequency band.
This is recognized as one of the biggest challenges for the analysis of future detectors' data, since such biases might hinder the extraction of important astrophysical and cosmological information. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetric masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
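The cross-correlation of a waveform template with data mentioned above (matched filtering) can be illustrated with a toy example. The template, noise level and white-noise assumption are invented for illustration; real searches additionally whiten the data by the detector's noise power spectral density:

```python
# Toy matched filter: slide a known template over noisy data and locate
# the signal at the peak of the cross-correlation.
import numpy as np

rng = np.random.default_rng(1)
n = 2048

# hypothetical oscillatory template with a Gaussian envelope
i = np.arange(256)
template = np.exp(-((i - 128) / 32.0) ** 2) * np.sin(0.3 * i)

# white noise plus the template injected at a known offset
shift = 700
data = rng.normal(0.0, 0.5, n)
data[shift:shift + 256] += 3.0 * template

# matched-filter output: cross-correlation of data with the template
snr = np.correlate(data, template, mode="valid")
est = int(np.argmax(snr))
print(est)  # peak location near the injected offset
```

Because the template "knows" the signal's shape, the peak stands far above the noise floor even when the signal is invisible by eye, which is why template accuracy translates directly into detection and parameter-estimation accuracy.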
For the first task, we focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN; weak-field, small velocities) approximation that is most natural for the bound orbits routinely detected in GW searches. However, two other approximations have risen in prominence: the post-Minkowskian (PM; weak-field only) approximation, natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation, typical of binaries in which the mass of one body is much bigger than that of the other. These are most appropriate for binaries with high mass asymmetry, which challenge current waveform models. Moreover, they allow one to “cover” regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. These analytical approximations to the relativistic two-body problem can be synergistically included within the effective-one-body (EOB) formalism, in which the two-body information from each approximation is recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resulting models can cover both the low-spin, comparable-mass binaries that are routinely detected and the ones that challenge current models. The first part of this thesis is dedicated to a study of how best to incorporate information from the PN, PM, SMR and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are, as compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We show that PM information has the potential to improve currently employed models for LIGO and Virgo, especially if recast within the EOB formalism.
This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of the PM and SMR approximations can be employed to access previously unknown PN orders, deriving the third subleading PN dynamics for the spin-orbit and (aligned) spin1-spin2 couplings. These new results can then be included in the EOB models currently used in GW searches and parameter-estimation studies, thereby improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one, as usually done). We show in detail how this is done without incurring the divergences that had affected previous attempts, and compare the resulting model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models compare much better than the PN models, suggesting that SMR-informed EOB models might be the key to modeling binaries in the future. In the second task of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out. We illustrate the formalism with simple (yet realistic) LISA sources, and demonstrate its validity against Monte-Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
Changing the perspective sometimes offers completely new insights into an already well-known phenomenon. Exercise behavior, defined as planned, structured and repeated bodily movements with the intention to maintain or increase physical fitness (Caspersen, Powell, & Christenson, 1985), can be regarded as such a well-known phenomenon, one that has been in the scientific focus for many decades (Dishman & O’Connor, 2005). Within these decades, a perspective that assumes rational and controlled evaluations as the basis for decision making was predominantly used to understand why some people engage in physical activity and others do not (Ekkekakis & Zenko, 2015).
Dual-process theories (Ekkekakis & Zenko, 2015; Payne & Gawronski, 2010) provide another perspective, one that is not exclusively shaped by rational reasoning. These theories differentiate two processes that guide behavior “depending on whether they operate automatically or in a controlled fashion” (Gawronski & Creighton, 2012, p. 282). Following this line of thought, exercise behavior is not solely influenced by thoughtful deliberations (e.g. concluding that exercising is healthy) but also by spontaneous affective reactions (e.g. disliking being sweaty while exercising). The theoretical frameworks of dual-process models are not new in psychology (Chaiken & Trope, 1999) and have already been used to explain numerous behaviors (e.g. Hofmann, Friese, & Wiers, 2008; Huijding, de Jong, Wiers, & Verkooijen, 2005). However, they have only rarely been used to explain exercise behavior (e.g. Bluemke, Brand, Schweizer, & Kahlert, 2010; Conroy, Hyde, Doerksen, & Ribeiro, 2010; Hyde, Doerksen, Ribeiro, & Conroy, 2010). The assumption of two dissimilar behavior-influencing processes differs fundamentally from previous theories and thus from the research that has been conducted in exercise psychology over the last decades. Research has mainly concentrated on predictors of the controlled processes and addressed the identified predictors in exercise interventions (Ekkekakis & Zenko, 2015; Hagger, Chatzisarantis, & Biddle, 2002).
Predictors arising from the described automatic processes, for example automatic evaluations of exercising (AEE), have been neglected in exercise psychology for many years. Until now, only a few researchers have investigated the influence of these AEE on exercise behavior (Bluemke et al., 2010; Brand & Schweizer, 2015; Markland, Hall, Duncan, & Simatovic, 2015). Marginally more researchers have focused on the impact of AEE on physical activity behavior (Calitri, Lowe, Eves, & Bennett, 2009; Conroy et al., 2010; Hyde et al., 2010; Hyde, Elavsky, Doerksen, & Conroy, 2012). The extant studies mainly focused on the quality of AEE and the associated quantity of exercise (exercising much or little; Bluemke et al., 2010; Calitri et al., 2009; Conroy et al., 2010; Hyde et al., 2012). In sum, there is still a dramatic lack of empirical knowledge when applying dual-process theories to exercise behavior, even though these theories have proven successful in explaining behavior in many other health-relevant domains such as eating, drinking or smoking (e.g. Hofmann et al., 2008).
The main goal of the present dissertation was to collect empirical evidence for the influence of AEE on exercise behavior and to expand the so far exclusively correlational studies with experimentally controlled studies. In doing so, it aims to encourage the ongoing debate on a paradigm shift from controlled and deliberative accounts of exercise behavior towards approaches that also consider automatic and affective influences (Ekkekakis & Zenko, 2015). All three publications are embedded in dual-process theorizing (Gawronski & Bodenhausen, 2006, 2014; Strack & Deutsch, 2004). These theories offer a framework that can integrate the established controlled variables of exercise-behavior explanation and additionally consider automatic factors such as AEE.
Taken together, the empirical findings suggest that AEE play an important and diverse role in exercise behavior. They represent exercise-setting preferences, are a cause of short-term exercise decisions, and are decisive for long-term exercise adherence. Adding to the few studies already present in this field, the influence of (positive) AEE on exercise behavior was confirmed in all three publications. Even though the available set of studies needs to be extended in prospective studies, first steps towards a more complete picture have been taken. To close where the synopsis began: I think the time is right for a change of perspectives! This means carefully extending the present theories, in which controlled evaluations explain exercise behavior. Dual-process theories encompassing controlled and automatic evaluations could provide such a basis for future research endeavors in exercise psychology.
The Central Andean region is characterized by diverse climate zones with sharp transitions between them. In this work, the area of interest is the South-Central Andes in northwestern Argentina, bordering Bolivia and Chile. The focus is the observation of soil moisture and water vapour with Global Navigation Satellite System (GNSS) remote-sensing methodologies. Because of the rapid temporal and spatial variations of water vapour and moisture circulation, monitoring this part of the hydrological cycle is crucial for understanding the mechanisms that control the local climate. Moreover, GNSS-based techniques have previously shown high potential and are well suited for further investigation. This study comprises both a logistical effort and data analysis. As for the former, three GNSS ground stations were installed in remote locations in northwestern Argentina, where no third-party data were available, to acquire observations.
The methodological developments for observing the two climate variables, soil moisture and water vapour, are independent and rely on different approaches. Soil-moisture estimation with GNSS reflectometry is an approach that has demonstrated promising results but has yet to be employed operationally. Thus, a more advanced algorithm that exploits more observations from multiple satellite constellations was developed using data from two pilot stations in Germany. Additionally, this algorithm was slightly modified and used in a sea-level measurement campaign. Although the objective of this application is not related to monitoring hydrological parameters, its methodology is based on the same principles and helps to evaluate the core algorithm. Water-vapour monitoring with GNSS observations, on the other hand, is a well-established technique that is utilized operationally. Hence, the scope of this study is to conduct a meteorological analysis by examining zenith air-moisture levels and introducing indices related to the azimuthal gradient.
The results of the experiments indicate higher-quality soil moisture observations with the new algorithm. Furthermore, the analysis using the stations in northwestern Argentina illustrates the limits of this technology because of varying soil conditions and shows future research directions. The water-vapour analysis points out the strong influence of the topography on atmospheric moisture circulation and rainfall generation. Moreover, the GNSS time series allows for the identification of seasonal signatures, and the azimuthal-gradient indices permit the detection of main circulation pathways.
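As a hedged aside on the reflectometry principle underlying such soil-moisture algorithms (a textbook GNSS-IR sketch, not the algorithm developed in this work; the antenna height, elevation range and noise-free SNR are invented values): detrended SNR from a ground-reflected GNSS signal oscillates in the sine of the satellite elevation angle with a frequency proportional to the antenna height above the reflector, and the amplitude and phase of this oscillation are the observables that respond to soil moisture.

```python
import numpy as np

lam = 0.1903                 # GPS L1 carrier wavelength [m]
h_true = 1.8                 # assumed antenna height above the reflector [m]

# uniform sampling in x = sin(elevation) so a plain FFT applies
x = np.linspace(np.sin(np.radians(5)), np.sin(np.radians(30)), 2048)
snr = np.cos(4.0 * np.pi * h_true / lam * x)      # idealized detrended SNR

# spectral peak -> reflector height:  f_peak = 2*h/lambda  [cycles per unit sin(e)]
spec = np.abs(np.fft.rfft(snr * np.hanning(x.size)))
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])
h_est = 0.5 * lam * freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin
```

With this short elevation arc the FFT frequency resolution is coarse, so `h_est` recovers `h_true` only to within a few centimetres; operational retrievals typically use a least-squares or Lomb-Scargle spectral estimate instead.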
Polymer optical fibers (POFs) are a rather new tool for high-speed data transfer by modulated light. They allow the transport of large amounts of data over distances up to about 100 m without being influenced by external electromagnetic fields. Due to the organic chemical nature of POFs, they are sensitive to the climate of their environment, and therefore so are their optical properties. Hence, optical stability is a key issue for long-term applications of POFs. The causes of the loss of optical transmission due to climatic exposure (aging/degradation) are investigated by means of chemical analytical tools such as chemiluminescence (CL) and Fourier transform infrared (FTIR) spectroscopy for five step-index multimode PMMA-based POFs from different manufacturers and for seven climatic conditions. Three of the five POF samples are studied in more detail to identify the effects of individual parameters and to forecast long-term optical stability from short-term exposure tests. At first, the unexposed POF components (core, cladding, and bare POF as the combination of core and cladding) are characterized with respect to important physical and chemical properties. The glass transition temperature Tg and the melting temperature Tm lie in the region of 120 °C to 140 °C; the molecular weight (Mw) of the cores is on the order of 10⁵ g mol⁻¹. The POFs are found to have claddings of different chemical composition, as detected by FTIR, but cores of identical composition. Two of the POFs are exposed as cables (core, cladding and jacket) for about 3300 hours to a climate of 92 °C / 95 % relative humidity (RH), resulting in different degrees of transmission decrease. Investigating the corresponding unexposed and exposed bare POFs for degradation using CL, FTIR, thermogravimetry (TG), UV/visible transmittance and gel permeation chromatography (GPC) suggests that the claddings of POFs are more affected than the cores.
The observed loss of transmission is probably due mainly to increased light absorption and imperfections at the core-cladding boundary caused by extensive degradation of the claddings. Hence, it is highly likely that the optical transmission stability of POFs is governed mainly by the thermo-oxidative stability of the cladding and to a lesser extent by that of the core. Three bare POFs (core and cladding only) are exposed for different durations (30 hours to 4500 hours) to 92 °C / 95 %RH, 92 °C / 50 %RH, 50 °C / 95 %RH, 90 °C / low humidity, 100 °C / low humidity, 110 °C / low humidity and 120 °C / low humidity. In these climates, their transmission variations are found to differ from each other as well. The outcomes strongly indicate that under hot and humid climates physical changes, such as volume expansion, are the main sources of the loss of optical transmission. The optical transmission stability of POFs is also found to depend on the chemical composition of the claddings. Under hot and dry conditions, the loss of transmission at the early stages of exposure is mainly caused by physical changes, presumably core-cladding interface imperfections. For the later stages of exposure, an additional increase of light absorption by core and cladding, owing to degradation, is proposed. Optical simulation results obtained in parallel by Mr. L. Jankowski (a PhD student at BAM) confirm these findings. For bare POFs, too, the optical stability seems to depend on their thermo-oxidative stability. Some short-term exposure tests are conducted to assess the influence of individual climatic parameters on the transmission of POFs. It is found that under stationary high-temperature and variable-humidity conditions, POFs display a partly reversible transmission loss due to physically absorbed water. In the case of varying temperature and constant high humidity, however, such reversibility is hardly noticeable.
At room temperature and varying humidity, by contrast, POFs display a fully reversible transmission loss. The research described above has to be regarded as a starting point for further investigations. The restricted release of fundamental POF data by the manufacturers and the time-consuming aging under climatic exposure restrict the results more or less to the samples investigated here. General statements would require, for example, additional information on the variation of POF properties due to production. Nevertheless, the tests described here are capable of approximating and forecasting the long-term optical transmission stability of POFs. -------------- Also published in print: Appajaiah, Anilkumar: Climatic stability of polymer optical fibers (POF) / Anilkumar Appajaiah. - Bremerhaven : Wirtschaftsverl. NW, Verl. für neue Wiss., 2005. - Getr. Zählung [ca. 175 S.]. : Ill., graph. Darst. - (BAM-Dissertationsreihe ; 9) ISBN 3-86509-302-7
Back pain is a problem in adolescent athletes, affecting postural control, which is an important requirement for physical and daily activities under both static and dynamic conditions. The one-leg stance and star excursion balance tests are effective in measuring static and dynamic postural control, respectively. These tests have been used in individuals with back pain, athletes and non-athletes without their reliabilities first being established. In addition, there is no published literature investigating dynamic posture in adolescent athletes with back pain using the star excursion balance test (SEBT). Therefore, the aim of the thesis was to assess deficits in postural control in adolescent athletes with and without back pain using a static (one-leg stance) and a dynamic (SEBT) postural control test.
Adolescent athletes with and without back pain participated in the study. Static and dynamic postural control tests were performed using the one-leg stance and the SEBT, respectively. The reproducibility of both tests was established. Afterwards, it was determined whether there was an association between static and dynamic posture, using the displacement of the centre of pressure and the reach distance, respectively. Finally, it was investigated whether there was a difference in postural control between adolescent athletes with and without back pain using the one-leg stance test and the SEBT.
Fair to excellent reliabilities were recorded for the static (one-leg stance) and dynamic (star excursion balance) postural control tests in the subjects of interest. No association was found between variables of the static and dynamic tests for the adolescent athletes with and without back pain. Also, no statistically significant difference was obtained between adolescent athletes with and without back pain using the static and dynamic postural control tests.
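Reliability of such repeated balance measurements is conventionally quantified with an intraclass correlation coefficient. Which ICC form this study used is not stated here, so purely as an illustration, ICC(2,1) (two-way random effects, absolute agreement, single measures) can be computed from a subjects-by-sessions table as follows (the toy scores are invented):

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    x is an (n subjects) x (k sessions/raters) array."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between sessions
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))         # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# invented toy data: 5 athletes, a sway measure recorded in 2 sessions
scores = np.array([[12.0, 12.5],
                   [18.0, 17.2],
                   [25.0, 26.1],
                   [31.0, 30.4],
                   [40.0, 41.0]])
icc = icc_2_1(scores)   # values near 1 indicate excellent reliability
```

Common benchmarks label ICC values below 0.4 as poor, 0.4-0.75 as fair to good, and above 0.75 as excellent.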
The one-leg stance test and the SEBT can be used as measures of postural control in adolescent athletes with and without back pain. Although static and dynamic postural control might be related, adolescent athletes with and without back pain might use different mechanisms to control their static and dynamic posture. In our sample, static and dynamic postural control in adolescent athletes with back pain did not differ from that of athletes without back pain. These outcome measures might not be challenging enough to detect deficits in postural control in our study group of interest.
The central melanin-concentrating hormone (MCH) system has been intensively studied for its involvement in the regulation of feeding behaviour and body weight. The importance of the neuropeptide MCH in the control of energy balance has been underlined by MCH-knockout and melanin-concentrating hormone receptor subtype 1 (MCHR-1) knockout animals. The anorectic and anti-obesity effects of selective MCHR-1 antagonists have confirmed the notion that pharmacological blockade of MCHR-1 is a potential therapeutic approach to obesity. The first aim of this work is to study the neurochemical “equipment” of MCHR-1-immunoreactive neurons within the rat hypothalamus by double-labelling immunohistochemistry. Of special interest is the neuroanatomical identification of other hypothalamic neuropeptides that are co-distributed with MCHR-1. A second part of this study examines neuronal activation patterns after pharmacological or physiological, feeding-related stimuli and was introduced to further elucidate the central regulatory mechanisms of the MCH system. In the first part of this work, I wanted to characterize MCHR-1-immunoreactive neurons in the rat hypothalamus neurochemically with respect to colocalisation with neuropeptides of interest. To this end, I performed an immunohistochemical colocalisation study using a specific antibody against MCHR-1 in combination with antibodies against hypothalamic neuropeptides. I showed that MCHR-1 immunoreactivity (IR) was co-localised with orexin A in the lateral hypothalamus, and with adrenocorticotropic hormone and neuropeptide Y in the arcuate nucleus. Additionally, MCHR-1 IR was co-localised with the neuropeptides vasopressin and oxytocin in magnocellular neurons of the supraoptic and paraventricular hypothalamic nuclei, and with corticotrophin-releasing hormone in the parvocellular division of the paraventricular hypothalamic nucleus.
Moreover, for the first time, MCHR-1 immunoreactivity was found in both the adenohypophyseal and the neurohypophyseal part of the rat pituitary. These results provide the neurochemical basis for previously described potential physiological actions of MCH at its target receptor. In particular, MCHR-1 may be involved not only in food-intake regulation, but also in other physiological functions such as fluid regulation, reproduction and the stress response, possibly through the neuropeptides examined here. Central activation patterns induced by pharmacological or physiological stimulation can be mapped using c-Fos immunohistochemistry. In the first experimental design, central administration (icv) of MCH into the rat brain resulted in an acute and significant increase of food and water intake, but this treatment did not induce a specific c-Fos induction pattern in hypothalamic nuclei. In contrast, sub-chronic application of an MCHR-1 antagonist produced a significant decrease in food and water intake during an eight-day treatment period. A qualitative analysis of c-Fos immunohistochemistry on sections derived from MCHR-1-antagonist-treated animals showed specific neuronal activation in the paraventricular nucleus, the supraoptic nucleus and the dorsomedial hypothalamus. These results could be substantiated by quantitative evaluation with an automated, software-supported analysis of the c-Fos signal. Additionally, I examined the activation pattern of rats in a restricted feeding schedule (RFS) to identify pathways involved in hunger and satiety. Animals were trained for 9 days to feed during a three-hour period. On the last day, food-restricted (FR) animals were again allowed to feed for the three hours, while food-deprived (FD) animals did not receive food. Mapping of neuronal activation showed a clear difference between starved (FD) and satiated (FR) rats.
FD animals showed significant induction of c-Fos in forebrain regions, several hypothalamic nuclei and the amygdaloid thalamus, whereas FR animals showed induction in the supraoptic nucleus and the paraventricular nucleus of the hypothalamus, and in the nucleus of the solitary tract. In the lateral hypothalamus of FD rats, c-Fos IR showed strong colocalisation with orexin A, but no co-staining for MCH immunoreactivity. However, a large number of c-Fos-IR neurons within activated regions of FD and FR animals were co-localised with MCHR-1 in selected regions. To conclude, the experimental set-up of scheduled feeding can be used to induce a specific hunger or satiety activation pattern within the rat brain. My results show a differential activation of MCH neurons by hunger signals and furthermore demonstrate that MCHR-1-expressing neurons may be essential parts of the downstream processing of physiological feeding/hunger stimuli. In the final part of my work, the relevance of the studies presented here is discussed with respect to the possible introduction of MCHR-1 antagonists as drug candidates for the treatment of obesity.
Biochemical and physiological studies of Arabidopsis thaliana Diacylglycerol Kinase 7 (AtDGK7)
(2006)
A family of diacylglycerol kinases (DGKs) phosphorylates the substrate diacylglycerol (DAG) to generate phosphatidic acid (PA). Both molecules, DAG and PA, are involved in signal transduction pathways. In the model plant Arabidopsis thaliana, seven candidate genes (named AtDGK1 to AtDGK7) code for putative DGK isoforms. Here I report the molecular cloning and characterization of AtDGK7. Biochemical, molecular and physiological properties of AtDGK7 and its corresponding enzyme are analyzed. According to Genevestigator, the AtDGK7 gene is expressed in seedlings and adult Arabidopsis plants, especially in flowers. The AtDGK7 gene encodes the smallest functional DGK predicted in higher plants, but it also has an alternative coding sequence containing an extended AtDGK7 open reading frame, confirmed by PCR and submitted to the GenBank database (under accession number DQ350135). The new cDNA has an extension of 439 nucleotides coding for 118 additional amino acids. The former AtDGK7 enzyme has a predicted molecular mass of ~41 kDa, and its activity is affected by pH and detergents. The DGK inhibitor R59022 also affects AtDGK7 activity, although only at high concentrations (IC50 ~380 µM). The AtDGK7 enzyme shows a Michaelis-Menten-type saturation curve for 1,2-DOG. The calculated Km and Vmax were 36 µM 1,2-DOG and 0.18 pmol PA min⁻¹ mg protein⁻¹, respectively, under the assay conditions. The former AtDGK7 protein is able to phosphorylate different DAG analogs that are typically found in plants. The newly deduced AtDGK7 protein harbors the catalytic domain DGKc and the accessory domain DGKa, instead of the truncated domain of the former AtDGK7 protein (Gomez-Merino et al., 2005).
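As a worked illustration of the reported kinetics (generic Michaelis-Menten arithmetic, not software from the thesis): with Km = 36 µM and Vmax = 0.18 pmol PA min⁻¹ mg protein⁻¹, the standard rate law predicts half-maximal activity when the 1,2-DOG concentration equals Km.

```python
# Michaelis-Menten rate law v = Vmax * S / (Km + S), using the kinetic
# constants reported for AtDGK7 (Km = 36 uM 1,2-DOG; Vmax = 0.18 pmol
# PA per min per mg protein).
KM = 36.0     # uM 1,2-DOG
VMAX = 0.18   # pmol PA min^-1 mg protein^-1

def rate(s_um):
    """DGK reaction velocity at substrate concentration s_um (in uM)."""
    return VMAX * s_um / (KM + s_um)

half_max = rate(KM)        # at S = Km the enzyme runs at Vmax / 2 = 0.09
near_sat = rate(10 * KM)   # at tenfold Km the rate is 10/11 of Vmax (~91%)
```

The hyperbolic form means the enzyme never quite reaches Vmax: even at tenfold Km the velocity is still about 9% below saturation.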
The electrical resistivity tomography (ERT) method is widely used to investigate geological, geotechnical, and hydrogeological problems in inland and aquatic environments (i.e., lakes, rivers, and seas). The objective of the ERT method is to obtain reliable resistivity models of the subsurface that can be interpreted in terms of subsurface structure and petrophysical properties. The reliability of the resulting resistivity models depends not only on the quality of the acquired data, but also on the employed inversion strategy. Inversion of ERT data admits multiple solutions that explain the measured data equally well. Typical inversion approaches rely on deterministic (local) strategies that use various smoothing and damping schemes to stabilize the inversion. However, such strategies suffer from the trade-off of smearing possible sharp subsurface interfaces separating layers with resistivity contrasts of up to several orders of magnitude. When prior information (e.g., from outcrops, boreholes, or other geophysical surveys) suggests sharp resistivity variations, it might be advantageous to adapt the parameterization and inversion strategies to obtain more stable and geologically plausible model solutions. Adaptations of traditional local inversions, for example by using structural and/or geostatistical constraints, may help to retrieve sharper model solutions. In addition, a layer-based model parameterization in combination with local or global inversion approaches can be used to obtain models with sharp boundaries.
In this thesis, I study three typical layered near-surface environments in which prior information is used to adapt 2D inversion strategies to favor layered model solutions. In cooperation with the coauthors of Chapters 2-4, I consider two general strategies. Our first approach uses a layer-based model parameterization and a well-established global inversion strategy to generate ensembles of model solutions and assess uncertainties related to the non-uniqueness of the inverse problem. We apply this method to invert ERT data sets collected in an inland coastal area of northern France (Chapter 2) and offshore of two Arctic regions (Chapter 3). Our second approach consists of using geostatistical regularizations with different correlation lengths. We apply this strategy to a more complex subsurface scenario on a local intermountain alluvial fan in southwestern Germany (Chapter 4). Overall, our inversion approaches allow us to obtain resistivity models that agree with the general geological understanding of the studied field sites. These strategies are rather general and can be applied to various geological environments where a layered subsurface structure is expected. The flexibility of our strategies allows adaptations to invert other kinds of geophysical data sets such as seismic refraction or electromagnetic induction methods, and could be considered for joint inversion approaches.
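A minimal sketch of how a geostatistical constraint with a tunable correlation length can enter a regularized inversion (a toy 1-D denoising problem with an identity forward operator, not the 2-D ERT code used in this work; the grid, correlation lengths and damping weight are invented values):

```python
import numpy as np

# Model roughness is penalized through the inverse of an exponential
# covariance matrix C_ij = exp(-|z_i - z_j| / L): the correlation length L
# controls how sharp recovered interfaces may be.
z = np.linspace(0.0, 10.0, 60)                      # depths [m]

def geostat_operator(L):
    C = np.exp(-np.abs(z[:, None] - z[None, :]) / L)
    return np.linalg.inv(C + 1e-8 * np.eye(z.size)) # stabilized inverse

# toy "identity forward problem": denoise a two-layer resistivity log
m_true = np.where(z < 5.0, 2.0, 3.0)                # log10-resistivity step
rng = np.random.default_rng(0)
d = m_true + 0.1 * rng.standard_normal(z.size)

def invert(L, lam=0.5):
    Wm = geostat_operator(L)
    # minimize ||d - m||^2 + lam * m^T Wm m  ->  (I + lam*Wm) m = d
    return np.linalg.solve(np.eye(z.size) + lam * Wm, d)

m_short = invert(L=0.2)   # short correlation length: sharper but noisier
m_long  = invert(L=3.0)   # long correlation length: smoother model
```

Shortening L relaxes the coupling between neighbouring cells, which is why a short correlation length preserves the sharp layer boundary at the cost of retaining more noise.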
During a dark night, it is possible to observe thousands of stars by eye. All these stars are located within the Milky Way, our home. Not all stars are the same: they can have different sizes, masses, temperatures and ages. Heavy stars do not live long (in astronomical terms), only a few million years, but stars less massive than the Sun can live for more than ten billion years. Such small stars that formed in the beginning of the Universe still shine today. These ancient stars are very helpful for learning more about the early Universe, the First Stars and the history of the Milky Way. But how do you recognise an ancient star? By its chemical fingerprint! In the beginning of the Universe, there were only two chemical elements: hydrogen and helium (and a tiny bit of lithium). All the heavier elements like carbon, calcium and iron were only made later within stars and their explosions. The amount of heavy elements in the Universe increases with the number of stars that are born, evolve and explode. Stars that form later are born with more heavy elements, that is, with a greater metallicity. In the field of astronomy called “Galactic Archaeology”, stars of various metallicities are used to study the history of the Milky Way. In this doctoral thesis, the focus is on metal-poor stars, because these are expected to be the oldest and can therefore tell us a lot about the early history of our Galaxy.
To this day, no metal-free star has been discovered. The most metal-poor stars, however, give us important insights into the lives and deaths of the First Stars. Many of the oldest, most metal-poor stars have an unexpectedly large amount of carbon compared to, for example, iron. These carbon-enhanced metal-poor (CEMP) stars tell us something about the very first stars in the Universe: they somehow produced a lot of carbon. If we look at the precise chemical fingerprints of CEMP stars, we can learn much more. But our interpretation depends on the assumption that the chemical fingerprint of a star does not change during its life. In this thesis, new data are presented showing that this assumption may be too simple: many extremely metal-poor CEMP stars are members of binary systems. Interactions between two stars in a binary system can pollute the surface of the stars. Likely not all of the CEMP stars in binary systems were actually polluted, but we should be very careful in our interpretation of the fingerprints of these stars.
The CEMP stars and other metal-poor stars are also important for our understanding of the early history of the Milky Way. Most researchers who study metal-poor stars look for them in the halo of the Milky Way: a huge diffuse Galactic component containing about 1% of the stars in our Galaxy. However, models predict that the oldest metal-poor stars are located in the center of the Milky Way, in the bulge. The metal-poor inner Galaxy is unfortunately difficult to study, due to the large amounts of dust between us and the center and an overwhelming majority of metal-rich stars. This thesis presents results from the successful Pristine Inner Galaxy Survey (PIGS), a new survey looking for (and finding) the oldest stars in the bulge of the Milky Way. PIGS uses images taken in a specific color that is sensitive to the metallicity of stars, and can therefore efficiently select metal-poor stars among millions of other, more metal-rich stars. The interesting candidates are followed up with spectroscopy, which is then analysed using two independent methods. With this strategy, PIGS has discovered the largest sample of metal-poor stars in the inner Galaxy to date. A new result from the PIGS data is that the metal-poor stars rotate more slowly around the Galactic center than the more metal-rich stars, and they also show more randomness in their motions. Another important contribution from PIGS is the discovery of tens of CEMP stars in the inner Galaxy, where previously only two such stars were known.
The new results from this thesis help us to understand the First Stars and the early history of the Milky Way. Ongoing and future large surveys will provide us with a lot of additional data in the coming years. It is an exciting time for the field of Galactic Archaeology.
Cosmology describes the evolution of the Universe as a whole. Cosmological discoveries in theory and observation have therefore decisively shaped our modern scientific world view. Conveying a modern world view through teaching is a frequent demand in the debate on science education. Nevertheless, research and development needs remain. Cosmological topics appear frequently in the media while being far removed from everyday life, so scientifically incorrect conceptions can develop particularly easily here and can lead to problems in the classroom.
The aim of this scientific work is to contribute to this field of research by investigating the prior knowledge and preconceptions in cosmology with which pupils enter the classroom, and then comparing them with those of pupils in other countries. This is done by means of a qualitative content analysis of an open-ended questionnaire. On this basis, a multiple-choice questionnaire is then developed, administered and evaluated.
The results reveal large knowledge gaps in the area of cosmology and give first indications of differences between the countries. Several, partly widespread, scientifically incorrect conceptions also exist, such as associating the Big Bang with an explosion, the Big Bang being caused by a collision of particles or larger objects, or conceiving of the expansion of the Universe as new discoveries and/or knowledge. Furthermore, only about one in five respondents gave the correct age of the Universe or named the expansion of the Universe as one of the three pieces of evidence for the Big Bang theory, while almost 40% could not name a single piece of evidence. For the closed questionnaire, good evidence for various aspects of validity could be established, and there are first indications that the questionnaire can measure knowledge gains and can therefore probably be used to investigate the effectiveness of teaching units. A corresponding model of how understanding of the expansion of the Universe develops also proved promising.
Overall, this work contributes to research on pupils' prior knowledge and conceptions in cosmology and their large-scale assessment. This opens up possibilities for future research on group comparisons, in particular objective comparisons between countries, as well as on the effectiveness of individual teaching units and comparisons between different teaching units.
Since 1980, Iraq has passed through various wars and conflicts, including the Iraq-Iran war, Saddam Hussein's Anfal and Halabja campaigns against the Kurds and the killing campaigns against Shiites in 1986, Saddam Hussein's invasion of Kuwait in August 1990, the Gulf war in 1990, the Iraq war in 2003 and the fall of Saddam, the conflicts and chaos in the transfer of power after the death of Saddam, and the war against ISIS. All these wars left severe impacts on most households in Iraq, on women and children in particular.
The consequences of such long wars can be observed in all sectors, including the economic, social, cultural and religious sectors. The social structure, norms and attitudes have been intensely affected. Many women, divorced women in particular, found themselves facing challenging social and economic situations. Divorced women in Iraqi Kurdistan are therefore the focus of this research.
Given that there is very little empirical research on this topic, a constructivist grounded theory (CGT) methodology was considered suitable for developing a comprehensive picture of the everyday life of divorced women in Iraqi Kurdistan. Data were collected in the city of Sulaimani in Iraqi Kurdistan. The work of Kathy Charmaz was chosen as the main methodological framework, and the main data collection method was individual intensive narrative interviews with divorced women.
Women in general, and divorced women in particular, live in a patriarchal society in Iraqi Kurdistan that is passing through many changes due to the above-mentioned wars, among many other factors. This research studies the everyday life of divorced women in this situation and the forms of social insecurity they experience. The focus is on social institutions, from the family, as a very significant institution for women, to the governmental and non-governmental institutions working to support women, as well as on coping strategies. The main argument of the research is that the family plays an ambivalent role in divorced women's lives: on the one hand, families were revealed to be an essential source of security for most respondents; on the other hand, families also posed many threats and restrictions on these women. This argument is supported by what Suad Joseph calls "the paradox of support and suppression". Another important finding is that state institutions (laws, constitutions, and the Offices of Combating Violence against Women and the Family) support women to some extent and offer them protection from insecurities, but it is clear that the existence of these laws does not stop violence against women in Iraqi Kurdistan. As Pateman explains, this is because the law/contract is a sexual-social contract that upholds the sex rights of males and grants them more privileges than females. Political instability and tribal social norms also play a major role in influencing the rule of law.
It is noteworthy that the analysis of the interviews in this research showed that, although divorced women live with insecurity and face many difficulties, most of the respondents try to find coping strategies to tackle difficult situations and to deal with the violence they face; these strategies include bargaining and, at times, compromising or resisting. Different theories were used to explain these coping strategies, such as Kandiyoti's "bargaining with patriarchy", which states that women living under certain restraints struggle to find ways and strategies to improve their situations. The research findings also revealed that the Western liberal feminist view of agency is limited, which agrees with Saba Mahmood's account of Muslim women's agency. For the respondents, who are divorced women, agency reveals itself in different ways: in resisting, compromising with, or even obeying the power of male relatives and the normative system of the society. Agency also explains the behavior of women who, in cases of violence, contact formal state institutions such as the police or the Offices of Combating Violence against Women and the Family.
Plastic pollution is ubiquitous on the planet, since several million tons of plastic waste enter aquatic ecosystems each year. Furthermore, the amount of plastic produced is expected to increase exponentially in the near future. The heterogeneity of materials, additives and physical characteristics of plastics is typical of these emerging contaminants and affects their environmental fate in marine and fresh waters. Consequently, plastics can be found in the water column, sediments or littoral habitats of all aquatic ecosystems. Most of this plastic debris will fragment as a product of physical, chemical and biological forces, producing particles of small size. These particles (< 5 mm) are known as “microplastics” (MP). Given their high surface-to-volume ratio, MP stimulate biofouling and the formation of biofilms in aquatic systems.
As a result of their unique structure and composition, the microbial communities in MP biofilms are referred to as the “Plastisphere”. While there is increasing data regarding the distinctive composition and structure of the microbial communities that form part of the plastisphere, scarce information exists regarding the activity of microorganisms in MP biofilms. This surface-attached lifestyle is often associated with an increase in horizontal gene transfer (HGT) among bacteria. Therefore, this type of microbial activity represents a relevant function worth analyzing in MP biofilms. The horizontal exchange of mobile genetic elements (MGEs) is an essential feature of bacteria: it accounts for the rapid evolution of these prokaryotes and their adaptation to a wide variety of environments. The process of HGT is also crucial for the spread of antibiotic resistance and for the evolution of pathogens, as many MGEs are known to contain antibiotic resistance genes (ARGs) and genetic determinants of pathogenicity.
In general, the research presented in this Ph.D. thesis focuses on the analysis of HGT and heterotrophic activity in MP biofilms in aquatic ecosystems. The primary objective was to analyze the potential for gene exchange within MP bacterial communities vs. that of the surrounding water, including bacteria from natural aggregates. Moreover, the thesis addressed the potential of MP biofilms for the proliferation of biohazardous bacteria and MGEs originating from wastewater treatment plants (WWTPs) and associated with antibiotic resistance. Finally, it sought to determine whether the physiological profile of MP biofilms under different limnological conditions diverges from that of the water communities. Accordingly, the thesis is composed of three independent studies published in peer-reviewed journals. The two laboratory studies were performed using both model and environmental microbial communities. In the field experiment, natural communities from freshwater ecosystems were examined.
In Chapter I, the inflow of treated wastewater into a temperate lake was simulated with a concentration gradient of MP particles. The effects of MP on the microbial community structure and on the occurrence of the integrase 1 gene (int1) were followed. The int1 gene is a marker associated with mobile genetic elements and serves as a proxy for anthropogenic effects on the spread of antimicrobial resistance genes. During the experiment, the abundance of int1 increased in the plastisphere with increasing MP particle concentration, but not in the surrounding water. In addition, the microbial community on MP became more similar to the original wastewater community with increasing microplastic concentration. Our results show that microplastic particles indeed promote the persistence of standard indicators of microbial anthropogenic pollution in natural waters.
In Chapter II, the experiments aimed to compare the permissiveness of aquatic bacteria towards the model antibiotic resistance plasmid pKJK5 between communities that form biofilms on MP and those that are free-living. The frequency of plasmid transfer was higher in bacteria associated with MP than in bacteria that are free-living or in natural aggregates. Moreover, the increased gene exchange occurred across a broad range of phylogenetically diverse bacteria. The results indicate a different activity of HGT in MP biofilms, which could affect the ecology of aquatic microbial communities on a global scale and the spread of antibiotic resistance.
Finally, in Chapter III, physiological measurements were performed to assess whether microorganisms on MP have a functional diversity different from those in water. General heterotrophic activity, such as oxygen consumption, was compared in microcosm assays with and without MP, while the diversity and richness of heterotrophic activities were calculated using Biolog® EcoPlates. Three lakes with different nutrient statuses presented differences in MP-associated biomass build-up. The functional diversity profiles of MP biofilms in all lakes differed from those of the communities in the surrounding water, but only in the oligo-mesotrophic lake did MP biofilms have a higher functional richness than the ambient water. The results support the view that MP surfaces act as new niches for aquatic microorganisms and can affect the global carbon dynamics of pelagic environments.
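For context, functional richness and diversity from Biolog® EcoPlates are commonly derived from blank-corrected well absorbances: a substrate counts as utilized when its well exceeds some threshold, and a diversity index is computed over the utilized wells. The following is a minimal sketch of such a calculation with hypothetical absorbance values and an assumed cutoff, not the thesis data:

```python
import math

# Hypothetical blank-corrected absorbances for a few EcoPlate carbon
# substrates (made-up values, not the thesis data).
absorbance = {
    "D-cellobiose": 0.92,
    "L-asparagine": 0.45,
    "Tween 40": 0.08,
    "pyruvic acid methyl ester": 1.31,
    "glycogen": 0.02,
}

THRESHOLD = 0.25  # assumed cutoff: wells above this count as "utilized"

used = {s: a for s, a in absorbance.items() if a > THRESHOLD}

# Functional richness: number of substrates the community metabolizes.
richness = len(used)

# Shannon diversity over the relative activity on utilized substrates.
total = sum(used.values())
shannon = -sum((a / total) * math.log(a / total) for a in used.values())

print(richness)  # 3
print(round(shannon, 3))
```

Richness is simply the count of metabolized substrates, while the Shannon index additionally weights how evenly activity is spread across them; a community using many substrates at similar rates scores higher on both.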
Overall, the experimental work presented in Chapters I and II supports a scenario in which MP pollution affects HGT dynamics among aquatic bacteria. Among the consequences of this alteration is an increase in the mobilization and transfer efficiency of ARGs. Moreover, changes in HGT can affect the evolution of bacteria and the processing of organic matter, leading to different catabolic profiles, as demonstrated in Chapter III. The results are discussed in the context of the fate and magnitude of plastic pollution and the importance of HGT for bacterial evolution and the microbial loop, i.e., at the base of aquatic food webs. The thesis supports a relevant role of MP biofilm communities in the changes observed in the aquatic microbiome as a product of intense human intervention.
Mathematical modeling of biological systems is a powerful tool to systematically investigate the functions of biological processes and their relationship with the environment. To obtain accurate and biologically interpretable predictions, a modeling framework has to be devised whose assumptions best approximate the examined scenario and which copes with the trade-off in the complexity of the underlying mathematical description: attention to detail versus broad coverage. Correspondingly, the system can be examined in detail on a smaller scale or in a simplified manner on a larger scale. In this thesis, the role of photosynthesis and its related biochemical processes in the context of plant metabolism was dissected by employing modeling approaches ranging from kinetic to stoichiometric models. The Calvin-Benson cycle, as the primary pathway of carbon fixation in C3 plants, is the initial step in producing starch and sucrose, which are necessary for plant growth. Based on an integrative analysis for model ranking applied to the largest compendium of (kinetic) models of the Calvin-Benson cycle, those suitable for the development of metabolic engineering strategies were identified. Driven by the question of why starch rather than sucrose is the predominant transitory carbon storage in higher plants, the metabolic costs of their synthesis were examined. Incorporating the maintenance costs of the involved enzymes provided model-based support for the preference for starch as transitory carbon storage, exploiting only the stoichiometry of the synthesis pathways. Many photosynthetic organisms have to cope with processes that compete with carbon fixation, such as photorespiration, whose impact on plant metabolism is still controversial. A systematic model-oriented review provided a detailed assessment of the role of this pathway in inhibiting the rate of carbon fixation, bridging carbon and nitrogen metabolism, shaping C1 metabolism, and influencing redox signal transduction.
The demand for understanding photosynthesis in its metabolic context calls for the examination of the related processes of primary carbon metabolism. To this end, the Arabidopsis core model was assembled via a bottom-up approach. This large-scale model can be used to simulate photoautotrophic biomass production, as an indicator of plant growth, under so-called optimal, carbon-limiting and nitrogen-limiting growth conditions. Finally, the introduced model was employed to investigate the effects of the environment, in particular of nitrogen, carbon and energy sources, on metabolic behavior. This resulted in a purely stoichiometry-based explanation of the experimental evidence for the preferred simultaneous acquisition of nitrogen in both forms, as nitrate and ammonium, for optimal growth in various plant species. The findings presented in this thesis provide new insights into the behavior of plant systems, further support existing opinions for which experimental evidence is mounting, and posit novel hypotheses for further directed large-scale experiments.
In the present work, a multidisciplinary investigation was carried out combining tectonic geomorphology methods with geophysical and structural studies, focused mainly on the neotectonic characterization of both flanks of the Sierra de La Candelaria and of the southern end of the Metán basin. The study area is located in the border region between the provinces of Salta and Tucumán and belongs to the Santa Bárbara System geological province.
The main objective was to contextualize the evidence of Quaternary tectonic activity in the region by proposing a novel structural model, with the aim of increasing the available information on neotectonic structures and their seismogenic potential. To this end, various techniques were applied and integrated, such as the interpretation of seismic reflection lines, the construction of balanced structural cross-sections, and near-surface geophysical methods, in order to verify the behavior at depth of both the geological structures identified at the surface and the possible crustal blind faults involved.
First, a regional survey of the study area was carried out using LANDSAT and SENTINEL-2 multispectral satellite images, which made it possible to recognize different levels of Quaternary alluvial fans and fluvial terraces. By determining various morphometric indicators on digital elevation models (DEMs), together with field observations, it was possible to identify evidence of deformation on these Quaternary levels, which has been genetically related to four neotectonic faults. Three of them (the Arias, El Quemado and Copo Quile faults) were selected for more detailed study through the application of near-surface geophysical methods (electrical resistivity tomography (ERT) and seismic refraction tomography (SRT)), which made it possible to corroborate their existence at depth, to make geometric and kinematic inferences, and to estimate the magnitude of recent deformation. The Arias and El Quemado faults were interpreted as reverse faults related to interstratal flexural slip, while the Copo Quile fault was interpreted as a low-angle blind reverse fault. A joint interpretation of seismic reflection lines and exploratory wells from hydrocarbon areas of the Choromoro and Metán basins was also carried out in order to contextualize the main recognized structures within the regional stratigraphic and tectonic framework. All the information was integrated into a balanced structural cross-section using kinematic modeling techniques. This model allows the inference that the recognized Quaternary deformation is related to displacement of the basement along a blind thrust, responsible for the uplift of the Sierra de La Candelaria and Cerro Cantero. Likewise, the kinematic model allows interpretation of the approximate location of the main detachment levels that control the style of deformation.
The shallowest detachment level, which controls the deformation of the sedimentary cover, is located at a depth of 4 km; at 21 km, the presence of another subhorizontal shear zone within the basement is estimated.
Finally, based on the integration of all the results obtained, the seismogenic potential of the faults in the study area was evaluated. The first-order faults that control deformation in the area are those responsible for large earthquakes, whereas the Quaternary flexural-slip and reverse faults affect only the sedimentary cover and would be second-order structures that accommodate deformation, activated during the Quaternary by aseismic and/or very low-magnitude seismic movements.
These results allow the inference that the La Candelaria thrust constitutes an important potential seismogenic source for the region, where numerous towns and major civil works are located. Moreover, the balanced structural cross-section implies the presence of other blind faults of different orders of magnitude that could be additional deep seismogenic sources, underscoring the need to continue developing this type of study in this tectonically active region.
Background: Physical fitness is a key aspect of children’s ability to perform activities of daily living and engage in leisure activities, and it is associated with important health characteristics. As such, it shows multi-directional associations with weight status as well as executive functions, and varies according to a variety of moderating factors, such as a child’s gender, age, geographical location, and socioeconomic conditions and context. The assessment and monitoring of children’s physical fitness has gained attention in recent decades, as has the question of how to promote physical fitness through the implementation of a variety of programs and interventions. However, these programs and interventions rarely focus on children with deficits in their physical fitness. Due to their deficits, these children are at the highest risk of suffering health impairments compared to their peers of average fitness. In efforts to promote physical fitness, schools offer promising and viable approaches to interventions, as they provide access to large youth populations along with useful infrastructure. Evidence suggests that school-based physical fitness interventions, particularly those that include supplementary physical education, are useful for promoting and improving physical fitness in children with normal fitness. However, there is little evidence on whether these interventions have similar or even greater effects on children with deficits in their physical fitness. Furthermore, the question arises whether such measures help to sustainably improve the developmental trajectories of physical fitness in these children.
The present thesis addresses the following four objectives: (1) to evaluate the effects of a 14-week intervention with 2 x 45 minutes per week of additional remedial physical education on physical fitness and executive function in children with deficits in their physical fitness; (2) to assess moderating effects of body height and body mass on physical fitness components in children with physical fitness deficits; (3) to assess moderating effects of age and skeletal growth on physical fitness in children with physical fitness deficits; and (4) to analyse moderating effects of different physical fitness components on executive function in children with physical fitness deficits.
Methods: Using physical fitness data from the EMOTIKON study, 76 third graders with physical fitness deficits were identified in 11 schools in the state of Brandenburg that met the requirements for implementing a remedial physical education intervention (i.e., employing specially trained physical education teachers). The fitness intervention was implemented in a cross-over design, and schools were randomly assigned to either an intervention-control or a control-intervention group. The remedial physical education intervention consisted of a 14-week, 2 x 45 minutes per week remedial physical education curriculum supplemented by a physical exercise homework program. Assessments were conducted at the beginning and end of each intervention and control period, and further assessments were conducted at the beginning and end of each school year until the end of sixth grade. Physical fitness, as the primary outcome, was assessed using fitness tests implemented in the EMOTIKON study (i.e., lower body muscular strength (standing long jump), speed (20 m sprint), cardiorespiratory fitness (6 min run), agility (star run), upper body muscular strength (ball push test), and balance (one leg balance)). Executive functions, as a secondary outcome, were assessed using tests of attention and psychomotor processing speed (digit symbol substitution test), mental flexibility and fine motor skills (trail making test), and inhibitory control (Simon task). Anthropometric measures such as body height, body mass, maturity offset, and body composition parameters, as well as socioeconomic information, were recorded as potential moderators.
Results: (1) The evaluation of possible effects of the remedial physical education intervention on physical fitness and executive functions of children with deficits in their physical fitness did not reveal any detectable intervention-related improvements in physical fitness or executive functions. The implemented analysis strategies did, however, show moderating effects of body mass index (BMI) on performance in the 6 min run, star run, and standing long jump, with children with a lower BMI performing better; moderating effects of proximity to Berlin on performance in the 6 min run and standing long jump, with better performances found in children living closer to Berlin; and overall gender differences in executive function test performance, with boys performing better than girls. (2) When analysing moderating effects of body height and body mass on physical fitness performance, better overall physical fitness performance was found for taller children. For body mass, a negative effect was found on performance in the 6 min run (linear), standing long jump (linear), and 20 m sprint (quadratic), with better performance associated with lighter children, and a positive effect of body mass was found on performance in the ball push test, with heavier children performing better. In addition, the analysis revealed significant interactions between body height and body mass on performance in the 6 min run and 20 m sprint, with higher body mass being associated with performance improvements in taller children but with performance declines in shorter children. The analysis also revealed overall age-related improvements in physical fitness and showed that children with better overall physical fitness exhibit greater age-related improvements.
(3) For the analysis of moderating effects of age and maturity offset on physical fitness performances, two unrotated principal components of z-transformed age and maturity offset values were calculated (i.e., relative growth = (age + maturity offset)/2; growth delay = (age - maturity offset)) to avoid collinearity. Analysing these constructs revealed positive effects of relative growth on performances in the star run, 20 m sprint, and standing long jump, with children of higher relative growth performing better. For growth delay, positive effects were found on performances in the 6 min run and 20 m sprint, with children with larger growth delays showing better performances. Further, the model revealed gender differences in 6 min run and 20 m sprint performances, with girls performing better than boys. (4) Analysing the effects of physical fitness tests on executive function revealed a positive effect of star run and one leg balance performance and a negative effect of 6 min run performance on reaction speed in the Simon task. However, these effects were not detectable when individual differences were accounted for; instead, overall positive effects emerged, with better performances being associated with faster reaction speeds. In addition, the analysis revealed a positive correlation between overall reaction speed and the effects of the 6 min run, suggesting that children with greater 6 min run effects had faster overall reaction speeds. Negative correlations were found between star run effects and age effects on Simon task reaction speed, meaning that children with larger star run effects had smaller age effects, and between 6 min run effects and star run effects on Simon task reaction speed, meaning that children with larger 6 min run effects tended to have smaller star run effects and vice versa.
Conclusions: (1) The lack of detectable intervention-related effects could have been caused by an insufficient intervention period, by the implementation of comprehensive and thus non-specific exercises, or by both. Accordingly, longer intervention periods and/or more specific exercises may have been more beneficial and could have led to detectable improvements in physical fitness and/or executive function. However, it remains unclear whether such interventions can benefit children with deficits in physical fitness, as it is possible that their deficits are not caused by a mere lack of exercise, but rather depend on the socioeconomic conditions of the children, their families, and their areas. Therefore, further research is needed to assess the moderation of physical fitness in children with physical fitness deficits and, in particular, the links between children’s environment and their physical fitness trajectories. (2) Findings from this work suggest that using BMI as a composite of body height and body mass may not capture the variation associated with these parameters and their interactions. In particular, because of their multidirectional associations, further research would help elucidate how BMI and its subcomponents influence physical fitness and how they vary between children with and without physical fitness deficits. (3) The assessment of growth-related changes indicated negative effects associated with the growth spurt approaching the age of peak height velocity, and furthermore showed significant differences in these effects between children. These effects and possible interindividual differences should therefore be considered when assessing the development of physical fitness in children. (4) Furthermore, this work has shown that the associations between physical fitness and executive functions vary between children and may be moderated by children’s socioeconomic conditions and the structure of their daily activities.
Further research is needed to explore these associations using approaches that account for individual variance.
The future magnetic recording industry needs high-density data storage technology. However, switching the magnetization of small bits requires high magnetic fields that cause excessive heat dissipation. Therefore, controlling magnetism without applying an external magnetic field is an important research topic for potential applications in data storage devices with low power consumption. Among the different approaches being investigated, two stand out, namely i) all-optical helicity dependent switching (AO-HDS) and ii) ferroelectric control of magnetism. This thesis aims to contribute towards a better understanding of the physical processes behind these effects as well as to report new and exciting possibilities for the optical and/or electric control of magnetic properties. Hence, the thesis contains two distinct results chapters: the first devoted to AO-HDS in TbFe alloys and the second to the electric field control of magnetism in an archetypal Fe/BaTiO3 system.
In the first part, the scalability of AO-HDS to small laser spot sizes of a few microns in the ferrimagnetic TbFe alloy is investigated by spatially resolving the magnetic contrast with photo-emission electron microscopy (PEEM) and X-ray magnetic circular dichroism (XMCD). The results show that AO-HDS is a local effect within the laser spot that occurs in a ring-shaped region in the vicinity of thermal demagnetization. Within this ring region, the helicity dependent switching occurs via thermally activated domain wall motion. Further, the thesis reports on a novel effect of thickness-dependent inversion of the switching orientation. It addresses some important questions, such as the role of laser heating and the microscopic mechanism driving AO-HDS.
The second part of the thesis focuses on the electric field control of magnetism in an artificial multiferroic heterostructure. The sample consists of an Fe wedge with thickness varying between 0.5 nm and 3 nm, deposited on top of a ferroelectric and ferroelastic BaTiO3 [001]-oriented single crystal substrate. Here, the magnetic contrast is imaged via PEEM and XMCD as a function of out-of-plane voltage. The results provide evidence of the electric field control of superparamagnetism mediated by a ferroelastic modification of the magnetic anisotropy. The changes in the magnetoelastic anisotropy drive the transition from the superparamagnetic to the superferromagnetic state at localized sample positions.
In a very simplified view, plant leaf growth can be reduced to two processes, cell division and cell expansion, accompanied by expansion of the surrounding cell walls. The vacuole, being the largest compartment of the plant cell, plays a major role in controlling the water balance of the plant. This is achieved by regulating the osmotic pressure, through import and export of solutes over the vacuolar membrane (the tonoplast), and by controlling the water channels, the aquaporins. Together with the control of cell wall relaxation, vacuolar osmotic pressure regulation is thought to play an important role in cell expansion, directly by providing cell volume and indirectly by providing ion and pH homeostasis for the cytoplasm. In this thesis, the role of tonoplast protein coding genes in cell expansion in the model plant Arabidopsis thaliana is studied, and genes with a putative role in growth are identified. Since there is, to date, no clearly identified protein localization signal for the tonoplast, genome-wide prediction of proteins localized to this compartment is not possible. Thus, a series of recent proteomic studies of the tonoplast were used to compile a list of cross-membrane tonoplast protein coding genes (117 genes), and other growth-related genes, notably from the growth regulating factor (GRF) and expansin families, were included (26 genes). For these genes, a platform for high-throughput reverse transcription quantitative real time polymerase chain reaction (RT-qPCR) was developed by selecting specific primer pairs. To this end, a software tool (called QuantPrime, see http://www.quantprime.de) was developed that automatically designs such primers and tests their specificity in silico against whole transcriptomes and genomes, to avoid cross-hybridizations causing unspecific amplification. The RT-qPCR platform was used in an expression study in order to identify candidate growth-related genes.
Here, a growth-associative spatio-temporal leaf sampling strategy was used, targeting growing regions at developmental stages of high expansion and comparing them to samples taken from non-expanding regions or stages of low expansion. Candidate growth-related genes were identified after applying a template-based scoring analysis to the expression data, ranking the genes according to their association with leaf expansion. To analyze the functional involvement of these genes in leaf growth on a macroscopic scale, knockout mutants of the candidate growth-related genes were screened for growth phenotypes. To this end, a system for non-invasive automated leaf growth phenotyping was established, based on a commercially available image capture and analysis system. A software package was developed for detailed developmental stage annotation of the images captured with the system, and an analysis pipeline was constructed for automated data pre-processing and statistical testing, including modeling and graph generation, for various growth-related phenotypes. Using this system, 24 knockout mutant lines were analyzed, and significant growth phenotypes were found for five different genes.
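The in-silico specificity screening that QuantPrime performs against whole transcriptomes can be illustrated with a naive exact-match scan over a toy transcriptome; the gene IDs and sequences below are made up, and the real tool additionally accounts for mismatches, primer pairs, orientation and amplicon size:

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def count_binding_sites(primer, transcript):
    """Count exact-match binding sites on either strand of a transcript."""
    hits = 0
    for probe in (primer, revcomp(primer)):
        start = transcript.find(probe)
        while start != -1:
            hits += 1
            start = transcript.find(probe, start + 1)
    return hits

# Toy transcriptome: hypothetical gene IDs mapped to made-up mRNA sequences.
transcriptome = {
    "AT1G00010": "ATGGCTTTCGGAAACTGACCTAGGCATCAAGTGA",
    "AT2G00020": "ATGAAACTGACCTAGGCTTTTTGGAAGCCAATGA",
}

forward_primer = "AAACTGACCTAGG"

# A primer is transcript-specific only if it hits exactly one gene; here it
# matches both toy genes, so it would be rejected as cross-hybridizing.
hits = {gid: count_binding_sites(forward_primer, seq)
        for gid, seq in transcriptome.items()}
specific = sum(1 for h in hits.values() if h > 0) == 1
print(specific)  # False
```

A rejected primer like this one would be discarded and a new candidate pair designed, which is the loop QuantPrime automates against full transcriptome and genome sequences.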
Polymeric semiconductors are strong contenders for replacing traditional inorganic semiconductors in electronic applications requiring low power, low cost and flexibility, such as biosensors, flexible solar cells and electronic displays. Molecular doping has the potential to enable this revolution by improving the conductivity and charge transport properties of this class of materials. Despite decades of research in this field, gaps in our understanding of the nature of dopant–polymer interactions have resulted in limited commercialization of this technology. This work aims at providing deeper insight into the underlying mechanisms of molecular p-doping of semiconducting polymers in solution and in the solid state, thereby bringing the scientific community closer to realizing the dream of making organic semiconductors commonplace in the electronics industry. The roles of 1) dopant size/shape, 2) polymer chain aggregation and 3) charge delocalization in the doping mechanism and efficiency are addressed using optical (UV-Vis-NIR) and electron paramagnetic resonance (EPR) spectroscopies. By conducting a comprehensive study of the nature and concentration of the doping-induced species in solutions of the polymer poly(3-hexylthiophene) (P3HT) with three different dopants, we identify the unique optical signatures of the delocalized polaron, the localized polaron and the charge-transfer complex, and report their extinction coefficients. Furthermore, with X-ray diffraction, atomic force microscopy and electrical conductivity measurements, we study the impact of processing technique and doping mechanism on the morphology and, thereby, charge transport through the doped films.
This work demonstrates that the doping mechanism and the type of doping-induced species formed are strongly influenced by the polymer backbone arrangement rather than by dopant shape/size. The ability of the polymer chain to aggregate is found to be crucial for efficient charge transfer (ionization) and polaron delocalization. At the same time, our results suggest that the high ionization efficiency of a dopant–polymer system in solution may subsequently hinder efficient charge transport in the solid state due to a reduction in the fraction of tie chains, which enable charges to move efficiently between aggregated domains in the films. This study demonstrates the complex, multifaceted nature of polymer doping while providing important hints for the future design of dopant–host systems and film fabrication techniques.
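Extracting extinction coefficients and species concentrations from absorption spectra, as described above, typically rests on Beer-Lambert linear unmixing: a measured spectrum is a concentration-weighted sum of the component spectra. The sketch below illustrates only this standard idea; the extinction spectra and all numbers are synthetic placeholders, not the thesis' measured data.

```python
# Beer-Lambert spectral decomposition sketch: given the molar extinction
# spectra of the doping-induced species (columns of E), recover their
# concentrations from a measured absorbance spectrum A = (E @ c) * l.
# All spectra and values below are synthetic, illustrative placeholders.
import numpy as np

wavelengths = np.linspace(400, 1600, 200)  # nm, UV-Vis-NIR range

def gaussian_band(center, width, height):
    return height * np.exp(-((wavelengths - center) / width) ** 2)

# Hypothetical extinction spectra (L mol^-1 cm^-1) for three species
E = np.column_stack([
    gaussian_band(800, 120, 3.0e4),   # "localized polaron" (placeholder)
    gaussian_band(1300, 200, 4.5e4),  # "delocalized polaron" (placeholder)
    gaussian_band(600, 100, 1.5e4),   # "charge-transfer complex" (placeholder)
])

path_length = 1.0                      # cm
c_true = np.array([2e-5, 1e-5, 5e-6])  # mol/L, illustrative
A_measured = E @ c_true * path_length

# Least-squares unmixing; non-negative least squares would be more robust
c_fit, *_ = np.linalg.lstsq(E * path_length, A_measured, rcond=None)
```

With noisy real spectra one would constrain the fit (e.g. non-negativity) and calibrate the extinction spectra against solutions of known concentration.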
In this dissertation we introduce a concept of light-driven active and passive manipulation of colloids trapped at a solid/liquid interface. The motion is induced by the generation of a light-driven diffusioosmotic (LDDO) flow upon irradiation with light of an appropriate wavelength. The flow originates from an osmotic pressure gradient, which results from a concentration gradient, at the solid/liquid interface, of the photosensitive surfactant present in the colloidal dispersion. The photosensitive surfactant consists of a cationic head group and a hydrophobic tail into which an azobenzene group is integrated. Azobenzene is known to undergo reversible photo-isomerization from a stable trans to a metastable cis state under irradiation with UV light; exposure to light of longer wavelength results in back-isomerization from the cis to the trans state. The two isomers have different molecular properties: the trans isomer has a rod-like structure and low polarity (zero dipole moment), whereas the cis isomer is bent and has a dipole moment of ~3 Debye. Being integrated into the hydrophobic tail of the surfactant molecule, the azobenzene state determines the hydrophobicity of the whole molecule: in the trans state the surfactant is more hydrophobic than in the cis state. In this way, many properties of the surfactant, such as the CMC, the solubility and the interaction potential with a solid surface, can be altered by light. When a solution containing such a surfactant is irradiated with focused light, a concentration gradient of the different isomers forms near the boundary of the irradiated area at the solid surface, resulting in an osmotic pressure gradient. The generated diffusioosmotic (DO) flow carries the particles passively along.
A local LDDO (l-LDDO) flow can also be generated around, and by, each particle when mesoporous silica colloids are dispersed in the surfactant solution. This is because the porous particles act as sinks/sources, absorbing azobenzene molecules in the trans state and expelling them in the cis state. The DO flows generated at the individual particles interact, resulting in aggregation or separation depending on the initial state of the surfactant molecules. The kinetics of aggregation and separation can be controlled and manipulated by altering parameters such as the wavelength and intensity of the applied light, as well as the surfactant and particle concentrations. Using two wavelengths simultaneously allows for dynamic gathering and separation, creating fascinating patterns such as a 2D disk of well-separated particles, or establishing complex collective behaviour of the particle ensemble, as described in this thesis.
The mechanism of l-LDDO is also used to generate self-propelled motion. This is possible when half of a porous particle is covered by a metal layer, essentially blocking the pores on one side. The LDDO flow generated on the uncapped side pushes the particle forward, resulting in superdiffusive motion. The system of porous particles and azobenzene-containing surfactant molecules can be utilized for various applications such as drug delivery, cargo transport, self-assembly, micromotors/machines or micropatterning.
The Earth's inner magnetosphere is a very dynamic system, mostly driven by the external solar wind forcing exerted upon the magnetic field of our planet. Disturbances in the solar wind, such as coronal mass ejections and co-rotating interaction regions, cause geomagnetic storms, which lead to prominent changes in charged particle populations of the inner magnetosphere - the plasmasphere, ring current, and radiation belts. Satellites operating in the regions of elevated energetic and relativistic electron fluxes can be damaged by deep dielectric or surface charging during severe space weather events. Predicting the dynamics of the charged particles and mitigating their effects on the infrastructure is of particular importance, due to our increasing reliance on space technologies.
The dynamics of particles in the plasmasphere, ring current, and radiation belts are strongly coupled by means of collisions and collisionless interactions with electromagnetic fields induced by the motion of charged particles. Multidimensional numerical models simplify the treatment of transport, acceleration, and loss processes of these particles, and allow us to predict how the near-Earth space environment responds to solar storms. The models inevitably rely on a number of simplifications and assumptions that affect model accuracy and complicate the interpretation of the results. In this dissertation, we quantify the processes that control electron dynamics in the inner magnetosphere, paying particular attention to the uncertainties of the employed numerical codes and tools.
We use a set of convenient analytical solutions for advection and diffusion equations to test the accuracy and stability of the four-dimensional Versatile Electron Radiation Belt (VERB-4D) code. We show that numerical schemes implemented in the code converge to the analytical solutions and that the VERB-4D code demonstrates stable behavior independent of the assumed time step. The order of the numerical scheme for the convection equation is demonstrated to affect results of ring current and radiation belt simulations, and it is crucially important to use high-order numerical schemes to decrease numerical errors in the model.
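The kind of verification described above, comparing a numerical scheme against a known analytical solution and checking how the error behaves with resolution, can be sketched for the simplest advection problem. This is a generic illustration of the technique, not the VERB-4D code itself: a first-order upwind scheme for u_t + a u_x = 0 on a periodic domain, compared with the exact solution u(x, t) = u0(x − a t).

```python
# Verification sketch: solve 1-D linear advection with first-order upwind
# and measure the L1 error against the exact translated initial condition.
# Halving the grid spacing should roughly halve the error for this
# first-order scheme (higher-order schemes would reduce it faster).
import numpy as np

def upwind_advect(nx, a=1.0, t_end=0.5, cfl=0.5):
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = 1.0 / nx
    dt = cfl * dx / a
    u = np.sin(2 * np.pi * x)          # smooth initial condition
    t = 0.0
    while t < t_end - 1e-12:
        dt_step = min(dt, t_end - t)   # land exactly on t_end
        # upwind difference for a > 0: u_i <- u_i - C (u_i - u_{i-1})
        u = u - a * dt_step / dx * (u - np.roll(u, 1))
        t += dt_step
    exact = np.sin(2 * np.pi * (x - a * t_end))
    return np.abs(u - exact).mean()    # mean L1 error

err_coarse = upwind_advect(100)
err_fine = upwind_advect(200)
```

Observing the expected first-order error reduction (a factor of about two per grid refinement) is the convergence evidence; a scheme of order p would shrink the error by about 2^p instead, which is why high-order convection schemes reduce numerical diffusion so much.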
Using the thoroughly tested VERB-4D code, we model the dynamics of ring current electrons during the 17 March 2013 storm. The discrepancies between the model and observations above 4.5 Earth radii can be explained by uncertainties in the outer boundary conditions. Simulation results indicate that the electrons were transported from geostationary orbit towards the Earth by the global-scale electric and magnetic fields.
We investigate how simulation results depend on the input models and parameters. The model is shown to be particularly sensitive to the global electric field and to electron lifetimes below 4.5 Earth radii. The effects of radial diffusion and subauroral polarization streams are also quantified.
We developed a data-assimilative code that blends together a convection model of energetic electron transport and loss and Van Allen Probes satellite data by means of the Kalman filter. We show that the Kalman filter can correct model uncertainties in the convection electric field, electron lifetimes, and boundary conditions. It is also demonstrated how the innovation vector - the difference between observations and model prediction - can be used to identify physical processes missing in the model of energetic electron dynamics.
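The assimilation idea, blending a forecast model with observations via the Kalman filter and reading the innovation vector as a diagnostic, can be sketched in its simplest scalar form. This is a generic illustration with invented numbers, not the thesis' code or Van Allen Probes data: a decay model whose assumed electron lifetime deliberately differs from the "true" one, so the innovations carry a systematic signal.

```python
# Scalar Kalman filter sketch: blend a decay model ("loss with lifetime tau")
# with noisy observations. The model lifetime is deliberately wrong, so the
# innovation (observation minus forecast) is biased - the kind of signal used
# to identify physics missing from the model. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

tau = 5.0                   # model electron lifetime (model is wrong on purpose)
dt = 1.0
F = np.exp(-dt / tau)       # forecast operator: exponential decay over dt
Q = 0.05                    # model-error variance
R = 0.1                     # observation-error variance

truth = 10.0
x, P = 8.0, 1.0             # initial analysis state and variance
innovations = []
for _ in range(50):
    truth *= np.exp(-dt / 10.0)              # "true" lifetime is 10, not 5
    y = truth + rng.normal(0.0, np.sqrt(R))  # noisy observation
    # forecast step
    x_f = F * x
    P_f = F * P * F + Q
    # analysis (update) step
    innov = y - x_f                          # innovation (scalar here)
    K = P_f / (P_f + R)                      # Kalman gain
    x = x_f + K * innov
    P = (1.0 - K) * P_f
    innovations.append(innov)
```

Because the model decays too fast, the forecasts systematically undershoot the observations and the innovations have a positive mean; in the full multidimensional setting the same diagnostic points at errors in the convection electric field, lifetimes, or boundary conditions.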
We compute radial profiles of the phase space density of ultrarelativistic electrons using Van Allen Probes measurements. We analyze the shape of the profiles during geomagnetically quiet and disturbed times and show that the formation of new local minima in the radial profiles coincides with ground observations of electromagnetic ion-cyclotron (EMIC) waves. This correlation indicates that EMIC waves are responsible for the loss of ultrarelativistic electrons from the heart of the outer radiation belt into the Earth's atmosphere.
In the current paradigm of cosmology, the formation of large-scale structures is mainly driven by non-radiating dark matter, which makes up the dominant part of the matter budget of the Universe. Cosmological observations, however, rely on the detection of luminous galaxies, which are biased tracers of the underlying dark matter. In this thesis I present cosmological reconstructions of both the dark matter density field that forms the cosmic web and the cosmic velocities, covering both the theoretical formalism and the results of its application to cosmological simulations and to a galaxy redshift survey. Our method relies on a statistical approach in which a given galaxy catalogue is interpreted as a biased realization of the underlying dark matter density field. The inference is performed computationally on a mesh grid by sampling from a probability density function that describes the joint posterior distribution of the matter density and the three-dimensional velocity field. The statistical background of our method is described in the chapter "Implementation of argo", where an introduction to sampling methods is given, paying special attention to Markov chain Monte Carlo techniques. In the chapter "Phase-Space Reconstructions with N-body Simulations", I introduce and implement a novel biasing scheme to relate the galaxy number density to the underlying dark matter, which I decompose into a deterministic part, described by a non-linear and scale-dependent analytic expression, and a stochastic part, modelled by a negative binomial (NB) likelihood function that captures deviations from Poissonity. Both bias components had already been studied theoretically, but had so far never been tested in a reconstruction algorithm.
I test these new contributions against N-body simulations to quantify the improvements and show that, compared to state-of-the-art methods, the stochastic bias is indispensable at wave numbers of k ≥ 0.15 h Mpc^−1 in the power spectrum in order to obtain unbiased reconstructions. In the second part of the chapter "Phase-Space Reconstructions with N-body Simulations", I describe and validate our approach to infer the three-dimensional cosmic velocity field jointly with the dark matter density. I use linear perturbation theory for the large-scale bulk flows and a dispersion term to model virialized galaxy motions, showing that our method accurately recovers the real-space positions of the redshift-space distorted galaxies. I analyze the results with the isotropic and the two-dimensional power spectrum. Finally, in the chapter "Phase-Space Reconstructions with Galaxy Redshift Surveys", I show how I combine all findings and apply the method to the CMASS (Constant (stellar) Mass) galaxy catalogue of the Baryon Oscillation Spectroscopic Survey (BOSS). I describe how our method accounts for observational selection effects inside the reconstruction algorithm. I also demonstrate that a renormalization of the prior distribution function is mandatory to account for higher-order contributions in the structure formation model, and a redshift-dependent bias factor is theoretically motivated and implemented into our method. These refinements yield unbiased reconstructions of the dark matter down to scales of k ≤ 0.2 h Mpc^−1 in the power spectrum and isotropize the galaxy catalogue down to distances of r ∼ 20 h^−1 Mpc in the correlation function. We further test the results of our cosmic velocity field reconstruction by comparing them to a synthetic mock galaxy catalogue, finding a strong correlation between the mock and the reconstructed velocities.
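The point of the negative binomial likelihood above is that galaxy counts in cells are over-dispersed: their variance exceeds the Poisson prediction. The sketch below illustrates this with one common NB parameterization (mean λ, dispersion β, reducing to Poisson as β → ∞); the toy counts are invented and the parameterization is not necessarily the exact one used in the thesis.

```python
# Compare Poisson and negative-binomial (NB) log-likelihoods for
# over-dispersed counts. NB here has mean lam and variance lam*(1 + lam/beta);
# small beta means strong over-dispersion, beta -> infinity recovers Poisson.
# Counts below are an illustrative toy sample, not survey data.
import math

def poisson_loglike(counts, lam):
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1) for n in counts)

def negbin_loglike(counts, lam, beta):
    ll = 0.0
    for n in counts:
        ll += (math.lgamma(n + beta) - math.lgamma(beta) - math.lgamma(n + 1)
               + beta * math.log(beta / (beta + lam))
               + n * math.log(lam / (beta + lam)))
    return ll

# Toy counts with mean 4 but sample variance ~18, far above the Poisson value 4
counts = [0, 1, 2, 9, 0, 12, 3, 1, 10, 2]
lam = sum(counts) / len(counts)
```

For such over-dispersed data the NB log-likelihood at a moderate β is markedly higher than the Poisson one, which is the statistical sense in which a stochastic bias term becomes indispensable on small scales.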
The applications of both the density field free of redshift-space distortions and the velocity reconstructions are very broad; they can be used for improved analyses of the baryon acoustic oscillations, environmental studies of the cosmic web, and the kinematic Sunyaev-Zel'dovich or integrated Sachs-Wolfe effects.
The main goal of this dissertation is to experimentally investigate how focus is realised, perceived, and processed by native Turkish speakers, independent of preconceived notions of positional restrictions. Crucially, there are various issues and scientific debates surrounding focus in the Turkish language in the existing literature (chapter 1). It is argued in this dissertation that two factors led to the stagnant literature on focus in Turkish: the lack of clearly defined, modern understandings of information structure and its fundamental notion of focus, and the ongoing and ill-defined debate surrounding the question of whether there is an immediately preverbal focus position in Turkish. These issues gave rise to specific research questions addressed across this dissertation. Specifically, we were interested in how the focus dimensions such as focus size (comparing narrow constituent and broad sentence focus), focus target (comparing narrow subject and narrow object focus), and focus type (comparing new-information and contrastive focus) affect Turkish focus realisation and, in turn, focus comprehension when speakers are provided syntactic freedom to position focus as they see fit.
To provide data on these core goals, we presented three behavioural experiments based on a systematic framework of information structure and its notions (chapter 2): (i) a production task with trigger wh-questions and contextual animations manipulated to elicit the focus dimensions of interest (chapter 3), (ii) a timed acceptability judgment task in which participants listened to the answers recorded in our production task (chapter 4), and (iii) a self-paced reading task to gather online processing data (chapter 5).
Based on the results of the conducted experiments, multiple conclusions are drawn in this dissertation (chapter 6). Firstly, this dissertation demonstrated empirically that there is no focus position in Turkish, neither in the sense of a strict focus position language nor as a focally loaded position facilitating focus perception and/or processing. While focus is, in fact, syntactically variable in the Turkish preverbal area, this is a consequence of movement triggered by other IS aspects, such as topicalisation and backgrounding, and of the observational markedness of narrow subject focus compared to narrow object focus. As for focus type in Turkish, this dimension is not associated with word order in production, perception, or processing. Significant acoustic correlates of focus size (broad sentence focus vs narrow constituent focus) and focus target (narrow subject focus vs narrow object focus) were observed in fundamental frequency and intensity, representing focal boost, (postfocal) deaccentuation, and the presence or absence of a phrase-final rise in the prenucleus, while the perceivability of these effects remains to be investigated. In contrast, no acoustic correlates of focus type in simple, three-word transitive structures were observed, with focus types being interchangeable in mismatched question-answer pairs. Overall, the findings of this dissertation highlight the need for experimental investigations of focus in Turkish, as theoretical predictions do not necessarily align with experimental data. As such, the fallacy of inferring causation from correlation should be strictly kept in mind, especially when constructions coincide with canonical structures, such as the immediately preverbal position in narrow object foci. Finally, numerous open questions remain to be explored, especially as focus and word order in Turkish are multifaceted.
As shown, givenness is a confounding factor when investigating focus types, while thematic role assignment potentially confounds word order preferences. Further research based on established, modern information structure frameworks is needed, with chapter 5 concluding with specific recommendations for such future research.
Ghana is a prime example of a developing country that has managed to find its way to good governance. Many comparative studies today attest that, by African standards, the country is a pioneer in this respect. This is the starting point of the present study, which pursues the question: "Which reasons, patterns, and conditions lead to the emergence of good governance?" At the centre of the study, as the guiding research question indicates, is an empirical investigation of the emergence of good governance, and thus of a transformation process. This process is deliberately examined over a very long period (more than half a century) in order to capture long-term developments as well. The study employs a mixed-methods approach, drawing on both quantitative and qualitative methods, which in retrospect proved very fruitful. First, the quality of governance over the entire period is measured using six indicators. Then the reasons for progress and setbacks are analysed qualitatively. Recurring patterns can be identified, such as circular developments that for many years blocked the path towards good governance until each of these cycles could be broken, in democratic and rule-of-law development as well as in the provision of public goods to the population and in economic development. Although the various dimensions of good governance are first examined individually, the interactions between the components become clearly apparent. For example, it emerges clearly that the rule of law affects both the stability of political systems and economic development, which in turn influence corruption. Similar linkages can be traced across all other dimensions.
The development of a country can therefore only be understood and explained by taking a complex governance system into account, and the interactions within it can be either constructive or destructive. The interconnections of the individual dimensions are captured first in a negative and then in a positive scenario. This construction of ideal types sharpens the findings of the present work and serves the analytical understanding of the processes under investigation. The study shows how good governance can emerge from the interplay of various factors, and that it is scientifically very fruitful to extend transformation research to a complex governance system. The many empirical findings on the individual transformations are brought together into complex, interlocking overall scenarios. Since there has so far been no explicit research on transformations towards good governance, this work takes a first step in that direction. It also becomes clear that a transformation to good governance cannot be achieved by short-term changes in framework conditions; it is a matter of cultural change, learning processes, and long-term developments, which the study analyses using the example of Ghana. Many previous transformation studies have neglected this temporal component. Ghana has already taken many steps towards finding a path into the future and to good governance. The examination of these steps is the core of the present work. Ghana's path, however, is not yet complete.
Z,E-dienes are a frequently occurring structural motif in natural products. The straightforward construction of this structural unit is therefore of great interest in organic chemistry.
The first goal of the present work was therefore the further development of the ring-closing metathesis/base-induced ring-opening/esterification sequence (RBRV sequence) for the synthesis of ethyl (2Z,4E)-dienecarboxylates starting from butenoates. To this end, the RBRV sequence was first optimized. This three-step sequence could be carried out as a one-pot procedure. The ring-closing metathesis succeeded with a catalyst loading of 1 mol% of the second-generation GRUBBS catalyst in dichloromethane. NaHMDS was used for the base-induced ring opening of the β,γ-unsaturated δ-valerolactone, and alkylation of the carboxylate species was achieved with the MEERWEIN reagent. The applicability of the sequence was demonstrated for various substrates.
Extending the method to α-substituted butenoates was subject to severe limitations. Access could be realized for α-hydroxy derivatives. When the RBRV sequence was applied to α-substituted butenoates, it was found that they could be converted only in moderate yields and, moreover, did not react selectively to the (2E,4E)-configured α-substituted diene esters.
The use of enynes under the standard conditions of the RBRV sequence was unsuccessful. Only after modification of the sequence (higher catalyst loading, change of solvent) could the [3]dendralene products be obtained in low yields.
In the second part of the work, the use of ethyl (2Z,4E)-dienecarboxylates in the total synthesis of natural products was investigated. To this end, the transformation possibilities of the esters were first examined. It could be shown that ethyl (2Z,4E)-dienecarboxylates are particularly suitable for the synthesis of (2Z,4E)-aldehydes and for constructing the (3Z,5E)-dien-1-yne motif.
Based on these results, the RBRV sequence was then employed in total synthesis. First, the (2Z,4E)-diene ester microsphaerodiolin was prepared via three different routes in its first total synthesis. Subsequently, six different polyacetylenes bearing a (3Z,5E)-dien-1-yne unit were prepared. The key steps in their synthesis were always the RBRV sequence for constructing the Z,E-diene unit, the transformation of the ester into a terminal alkyne, and the CADIOT-CHODKIEWICZ coupling for assembling unsymmetrical polyynes. All six polyacetylenes were synthesized for the first time in a total synthesis. Three of them were prepared in enantiomerically pure form starting from (S)-butanetriol. Based on their optical rotations, the assignment of the absolute configuration of the natural products made by YAO and co-workers could be revised.
Corvino, Corvino and Schoen, and Chruściel and Delay have shown the existence of a large class of asymptotically flat vacuum initial data for Einstein's field equations which are static or stationary in a neighborhood of space-like infinity, yet quite general in the interior. The proof relies on abstract, non-constructive arguments, which makes it difficult to compute such data numerically by similar means. A quasilinear elliptic system of equations is presented which, we expect, can be used to construct vacuum initial data that are asymptotically flat, time-reflection symmetric, and asymptotic to static data up to a prescribed order at space-like infinity. A perturbation argument is used to show the existence of solutions; it is valid when the order at which the solutions approach staticity is restricted to a certain range. Difficulties appear when trying to strengthen this result to show the existence of solutions that are asymptotically static at higher order; the problems arise from the lack of surjectivity of a certain operator. Some tensor decompositions on asymptotically flat manifolds exhibit some of the difficulties encountered above. The Helmholtz decomposition, which plays a role in the preparation of initial data for the Maxwell equations, is discussed as a model problem, and a method to circumvent the difficulties that arise when fast decay rates are required is presented, in a way that opens the possibility of performing numerical computations. The insights from the analysis of the Helmholtz decomposition are applied to the York decomposition, which is related to the part of the quasilinear system that gives rise to the difficulties. For this decomposition, analogous results are obtained; it turns out, however, that in this case the presence of symmetries of the underlying metric leads to certain complications.
The question, whether the results obtained so far can be used again to show by a perturbation argument the existence of vacuum initial data which approach static solutions at infinity at any given order, thus remains open. The answer requires further analysis and perhaps new methods.
Companies develop process models to explicitly describe their business operations. At the same time, business operations, i.e., business processes, must adhere to various types of compliance requirements. Regulations, e.g., the Sarbanes-Oxley Act of 2002, internal policies, and best practices are just a few sources of compliance requirements. In some cases, non-adherence to compliance requirements makes the organization subject to legal punishment; in other cases, it leads to a loss of competitive advantage and thus of market share. Unlike the classical, domain-independent behavioral correctness of business processes, compliance requirements are domain-specific. Moreover, compliance requirements change over time: new requirements might appear due to changes in laws and the adoption of new policies. Compliance requirements are issued or enforced by different entities that pursue different objectives with these requirements. Finally, compliance requirements might affect different aspects of business processes, e.g., control flow and data flow. As a result, it is infeasible to hard-code compliance checks in tools. Rather, a repeatable process is needed in which compliance rules are modeled and then checked against business processes automatically. This thesis provides a formal approach to support design-time compliance checking of processes. Using visual patterns, it is possible to model compliance requirements concerning control flow, data flow and conditional flow rules. Each pattern is mapped into a temporal logic formula. The thesis addresses the problem of consistency checking among various compliance requirements, as they might stem from divergent sources. The thesis also contributes a way to automatically check compliance requirements against process models using model checking. We show that extra domain knowledge, beyond what is expressed in the compliance rules, is needed to reach correct decisions. In case of violations, we are able to provide useful feedback to the user.
The feedback is in the form of the parts of the process model whose execution causes the violation. In some cases, our approach is capable of providing an automated remedy for the violation.
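The mapping of compliance patterns to temporal logic can be illustrated with a classic example: a "response" pattern, "whenever activity A is executed, activity B must eventually follow", corresponds to the LTL formula G(A → F B). The sketch below checks this pattern directly on finite execution traces; the pattern choice and the activity names are illustrative, not the thesis' actual pattern catalogue or model-checking machinery.

```python
# Compliance-pattern sketch: check the "response" pattern G(A -> F B) on
# finite execution traces of a process model. Activity names are invented
# examples; a real approach would model-check the formula against the
# process model's state space rather than enumerate traces.

def holds_response(trace, a, b):
    """True iff every occurrence of activity a is followed (later in the
    trace) by at least one occurrence of activity b."""
    for i, act in enumerate(trace):
        if act == a and b not in trace[i + 1:]:
            return False
    return True

# Rule: whenever an account is opened, identity must be checked afterwards
compliant = ["open_account", "check_identity", "approve", "archive"]
violating = ["open_account", "approve", "archive"]
```

In the violating trace the position of the unanswered "open_account" step is exactly the kind of execution-path feedback described above.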
MHC genes encode proteins that are responsible for the recognition of foreign antigens and for triggering a subsequent, adequate immune response of the organism. They thus hold a key position in the immune system of vertebrates. It is believed that the extraordinary genetic diversity of MHC genes is shaped by adaptive selectional processes in response to the recurring adaptations of parasites and pathogens. A large number of MHC studies have been performed in a wide range of wildlife species, aiming to understand the role of immune gene diversity in parasite resistance under natural selection conditions. Methodically, most of this work has, with very few exceptions, focused only on the structural, i.e. sequence, diversity of the regions responsible for antigen binding and presentation. Most of these studies found evidence that MHC gene variation did indeed underlie adaptive processes and that an individual's allelic diversity explains parasite and pathogen resistance to a large extent. Nevertheless, our understanding of the effective mechanisms is incomplete. A neglected, but potentially highly relevant, component concerns the transcriptional differences between MHC alleles. Indeed, differences in the expression levels of MHC alleles and their potential functional importance have remained unstudied. The idea that transcriptional differences might also play an important role relies on the fact that lower MHC gene expression is tantamount to a reduced induction of CD4+ T helper cells and thus to a reduced immune response. Hence, I studied the expression of MHC genes and of immunoregulatory cytokines as additional factors to reveal the functional importance of MHC diversity in two free-ranging rodent species (Delomys sublineatus, Apodemus flavicollis) in association with their gastrointestinal helminths under natural selection conditions. I established the method of relative quantification of mRNA on liver and spleen samples of both species in our laboratory.
As no information on the nucleotide sequences of potential reference genes was available for either species, PCR primer systems established in laboratory mice had to be tested and adapted for both non-model organisms. In due course, sets of stable reference genes were found for both species, establishing the preconditions for reliable measurements of mRNA levels. For D. sublineatus, it could be demonstrated that helminth infection elicits aspects of a typical Th2 immune response: while mRNA levels of the cytokine interleukin Il4 increased with the intensity of infection by strongyle nematodes, neither MHC nor cytokine expression played a significant role in D. sublineatus. For A. flavicollis, I found a negative association between the parasitic nematode Heligmosomoides polygyrus and hepatic MHC mRNA levels. As lower MHC expression entails a lower immune response, this could be evidence of an immune-evasive strategy of the nematode, as has been suggested for many micro-parasites. This implies that H. polygyrus is capable of actively interfering with MHC transcription. Indeed, this parasite species has long been suspected to be immunosuppressive, e.g. through the induction of regulatory T helper cells that respond with a higher production of interleukin Il10 and transforming growth factor Tgfb. Both cytokines in turn cause an abated MHC expression. By disabling recognition by the MHC molecule, H. polygyrus might be able to prevent an activation of the immune system. Indeed, I found a strong tendency for animals carrying the allele Apfl-DRB*23 to have an increased intensity of infection with H. polygyrus. Furthermore, I found positive and negative associations between specific MHC alleles and other helminth species, as well as typical signs of positive selection acting on the nucleotide sequences of the MHC.
The latter was evident from an elevated rate of non-synonymous to synonymous substitutions in the MHC sequences of exon 2, which encodes the functionally important antigen binding sites, whereas the first and third exons of the MHC DRB gene were highly conserved. In conclusion, the studies in this thesis demonstrate that valid procedures for quantifying the expression of immune-relevant genes are feasible in non-model wildlife organisms as well. In addition to structural MHC diversity, MHC gene expression should also be considered in order to obtain a more complete picture of host-pathogen coevolutionary selection processes. This is especially true if parasites are able to interfere with systemic MHC expression, in which case advantageous or disadvantageous effects of allelic binding motifs are attenuated. The studies could not define the role of MHC gene expression in antagonistic coevolution as such, but the results suggest that it depends strongly on the specific parasite species involved.
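The signature of positive selection, an elevated ratio of non-synonymous to synonymous substitutions, can be illustrated with a minimal sketch. The sequences below are hypothetical, and a real analysis (e.g. Nei-Gojobori) would also normalize the raw counts by the numbers of synonymous and non-synonymous sites:

```python
# Standard genetic code, bases in TCAG order (NCBI translation table 1).
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {a + b + c: AMINO[16 * i + 4 * j + k]
               for i, a in enumerate(BASES)
               for j, b in enumerate(BASES)
               for k, c in enumerate(BASES)}

def count_substitutions(seq1, seq2):
    """Count non-synonymous vs. synonymous codon differences between two
    aligned coding sequences (a crude proxy; proper dN/dS also normalizes
    by the number of non-synonymous and synonymous sites)."""
    syn = nonsyn = 0
    for i in range(0, len(seq1) - 2, 3):
        c1, c2 = seq1[i:i + 3], seq2[i:i + 3]
        if c1 == c2:
            continue
        if CODON_TABLE[c1] == CODON_TABLE[c2]:
            syn += 1       # same amino acid encoded
        else:
            nonsyn += 1    # amino acid changed
    return nonsyn, syn

# TTT->TTC is synonymous (Phe->Phe); AAA->AGA is non-synonymous (Lys->Arg)
print(count_substitutions("TTTGCTAAA", "TTCGCTAGA"))  # (1, 1)
```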
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based; thus, there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate from medical education and have, in a number of studies, been demonstrated to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, and few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity while portraying a mental disorder, depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III have been accepted for publication, and Paper II is under review after a minor revision.
First, Paper I develops and validates an observer-based rating scale to assess SP authenticity in psychotherapeutic contexts. Based on the preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic after they receive an in-depth role-script compared to those SPs who only receive basic information on the patient case. To test this assumption, a randomised controlled study design was implemented and the hypothesis could be confirmed. As a consequence, when engaging SPs, an in-depth role-script with details, e.g. on nonverbal behaviour and feelings of the patient, should be provided.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients and therefore suggests that, with proper training, SPs are a promising training method for psychotherapy.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
The Central Pontides is an accretionary-type orogenic area within the Alpine-Himalayan orogenic belt characterized by pre-collisional tectonic continental growth. The region comprises Mesozoic subduction-accretionary complexes and an accreted intra-oceanic arc that are sandwiched between the Laurasian active continental margin and the Gondwana-derived Kırşehir Block. The subduction-accretion complexes mainly consist of an Albian-Turonian accretionary wedge representing the Laurasian active continental margin. To the north, the wedge consists of a slate/phyllite and metasandstone intercalation with recrystallized limestone, Na-amphibole-bearing metabasite (PT = 7–12 kbar, 400 ± 70 °C) and tectonic slices of serpentinite, representing the accreted distal part of a large Lower Cretaceous submarine turbidite fan that was deposited on the Laurasian active continental margin and subsequently accreted and metamorphosed. Raman spectra of carbonaceous material (RSCM) of the metapelitic rocks reveal that the metaflysch sequence consists of metamorphic packets with distinct peak metamorphic temperatures. The majority of the metapelites are low-temperature (ca. 330 °C) slates characterized by a lack of differentiation of the graphite (G) and D2 defect bands. They possibly represent offscraped distal turbidites along the toe of the Albian accretionary wedge. The rest are phyllites characterized by a slightly pronounced G band with the D2 defect band occurring on its shoulder. Peak metamorphic temperatures of these phyllites are constrained to 370–385 °C. The phyllites are associated with a strip of incipient blueschist-facies metabasites found as slivers within the offscraped distal turbidites. They possibly represent continental metasediments underplated together with oceanic crustal basalt along the basal décollement.
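RSCM peak temperatures such as those above are commonly derived from the calibration of Beyssac et al. (2002), T(°C) = −445·R2 + 641, where R2 is the D1/(G + D1 + D2) peak-area ratio. The abstract does not state which calibration was applied, so the following sketch is illustrative only:

```python
def rscm_temperature(r2):
    """Peak metamorphic temperature from the Raman R2 area ratio of
    carbonaceous material, using the Beyssac et al. (2002) calibration
    (valid roughly 330-650 degC). R2 = D1 / (G + D1 + D2) peak areas."""
    if not 0.0 <= r2 <= 1.0:
        raise ValueError("R2 is an area ratio in [0, 1]")
    return -445.0 * r2 + 641.0

# a strongly disordered sample, near the calibration's lower limit
print(rscm_temperature(0.7))  # 329.5 degC (within float rounding)
```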
Tectonic emplacement of the underplated rocks into the offscraped distal turbidites was possibly achieved by out-of-sequence thrusting causing tectonic thickening and uplift of the wedge. 40Ar/39Ar phengite ages from the phyllites are ca. 100 Ma, indicating Albian subduction and regional HP metamorphism.
The accreted continental metasediments are underlain by HP/LT metamorphic rocks of oceanic origin along an extensional shear zone. The oceanic metamorphic sequence mainly comprises tectonically thickened, deep-seated eclogite- to blueschist-facies metabasites and micaschists. In the studied area, the metabasites are epidote-blueschists, locally with garnet (PT = 17 ± 1 kbar, 500 ± 40 °C). Lawsonite-blueschists are exposed as blocks along the extensional shear zone (PT = 14 ± 2 kbar, 370–440 °C). They are possibly associated with the low-shear-stress regime of the initial stage of convergence. Close to the shear zone, the footwall micaschists consist of quartz, phengite, paragonite, chlorite and rutile, with syn-kinematic albite porphyroblasts formed by pervasive shearing during exhumation. These micaschists are tourmaline-bearing, and their retrograde nature suggests a high fluid flux along the shear zones. Peak metamorphic mineral assemblages are partly preserved in the chloritoid-micaschist farther away from the shear zone, which represents zero-strain domains during exhumation. Three peak metamorphic assemblages are identified, and their PT conditions are constrained by pseudosections produced with Theriak-Domino and by Raman spectra of carbonaceous material: 1) garnet-chloritoid-glaucophane with lawsonite pseudomorphs (P = 17.5 ± 1 kbar, T = 390–450 °C), 2) chloritoid with glaucophane pseudomorphs (P = 16–18 kbar, T = 475 ± 40 °C), and 3) relatively high-Mg chloritoid (17%) with jadeite pseudomorphs (P = 22–25 kbar, T = 440 ± 30 °C), in addition to phengite, paragonite, quartz, chlorite, rutile and apatite. The last mineral assemblage is interpreted as the transformation of the chloritoid + glaucophane assemblage to a chloritoid + jadeite paragenesis with increasing pressure. The absence of tourmaline suggests that the chloritoid-micaschist did not interact with B-rich fluids during zero-strain exhumation.
40Ar/39Ar phengite ages are constrained to 100.6 ± 1.3 Ma for a pervasively sheared footwall micaschist and to 91.8 ± 1.8 Ma for a chloritoid-micaschist, suggesting exhumation during ongoing subduction with a southward younging of basal accretion and regional metamorphism. To the south, the accretionary wedge consists of a blueschist- and greenschist-facies intercalation of metabasite, marble and volcanogenic metasediment. 40Ar/39Ar phengite dating reveals that this part of the wedge is of Middle Jurassic age, partly overprinted during the Albian. Emplacement of the Middle Jurassic subduction-accretion complexes is possibly associated with the obliquity of the Albian convergence.
Peak metamorphic assemblages and PT estimates of the deep-seated oceanic metamorphic sequence suggest tectonic stacking within the wedge at different depths of burial. Coupling and exhumation of the distinct metamorphic slices were controlled by decompression of the wedge, possibly along a retreating slab. Structurally, decompression of the wedge is evident from an extensional shear zone and the footwall micaschists with syn-kinematic albite porphyroblasts. Post-kinematic garnets with increasing grossular content and the pseudomorph-forming minerals within the chloritoid-micaschists also support a decompression model without additional heating.
Thickening of the subduction-accretionary complexes is attributed to i) a significant amount of clastic sediment supply from the overriding continental domain and ii) deep-level basal underplating by propagation of the décollement along a retreating slab. Underplating by basal décollement propagation and the subsequent exhumation of the deep-seated subduction-accretion complexes are connected and controlled by slab rollback, which creates the space necessary for progressive basal accretion along the plate interface and for extension of the wedge above, allowing exhumation of the tectonically thickened metamorphic sequences. This might be the most common mechanism of tectonic thickening and subsequent exhumation of deep-seated HP/LT subduction-accretion complexes.
To the south, the Albian-Turonian accretionary wedge structurally overlies a low-grade volcanic arc sequence, consisting of low-grade metavolcanic rocks and an overlying metasedimentary succession, which is exposed north of the İzmir-Ankara-Erzincan suture (İAES) separating Laurasia from the Gondwana-derived terranes. The metavolcanic rocks mainly consist of basaltic andesite/andesite and mafic cognate xenolith-bearing rhyolite with their pyroclastic equivalents, which are interbedded with recrystallized pelagic limestone and chert. The metavolcanic rocks are stratigraphically overlain by recrystallized micritic limestone with rare volcanogenic metaclastic rocks. Two groups can be identified based on their trace and rare earth element characteristics. The first group consists of basaltic andesite/andesite (BA1) and rhyolite with abundant cognate gabbroic xenoliths. It is characterized by a relative enrichment of LREE with respect to HREE. The rocks are enriched in fluid-mobile LILE and strongly depleted in Ti and P, reflecting fractionation of the Fe-Ti oxides and apatite found in the mafic cognate xenoliths. The abundant cognate gabbroic xenoliths and identical trace and rare earth element compositions suggest that the rhyolites and basaltic andesites/andesites (BA1) are cogenetic, and that the felsic rocks were derived from a common mafic parental magma by fractional crystallization and accumulation processes. The second group consists only of basaltic andesites (BA2) with a flat REE pattern resembling island arc tholeiites. Although enriched in LILE, this group is depleted in neither Ti nor P.
The geochemistry of the metavolcanic rocks indicates supra-subduction volcanism, evidenced by depletion of HFSE and enrichment of LILE. The arc sequence is sandwiched between an Albian-Turonian subduction-accretionary complex representing the Laurasian active margin and an ophiolitic mélange. The absence of continent-derived detritus in the arc sequence and its tectonic setting within a wide Cretaceous accretionary complex suggest that the Kösdağ Arc was intra-oceanic. This is in accordance with the island-arc-tholeiite REE pattern of the basaltic andesites (BA2).
Zircons from two metarhyolite samples yield Late Cretaceous (93.8 ± 1.9 and 94.4 ± 1.9 Ma) U/Pb ages. Low-grade regional metamorphism of the intra-oceanic arc sequence is constrained to 69.9 ± 0.4 Ma by 40Ar/39Ar dating of metamorphic muscovite from a metarhyolite, indicating that the arc sequence had become part of a wide Tethyan Cretaceous accretionary complex by the latest Cretaceous. The youngest 40Ar/39Ar phengite age from the overlying subduction-accretion complexes is 92 Ma, confirming the southward younging of an accretionary-type orogenic belt. Hence, the arc sequence represents an intra-oceanic paleo-arc that formed above the sinking Tethyan slab and was finally accreted to the Laurasian active continental margin. The abrupt, non-collisional termination of arc volcanism was possibly associated with southward migration of the arc volcanism, similar to the Izu-Bonin-Mariana arc system.
The intra-oceanic Kösdağ Arc is coeval with the obducted supra-subduction ophiolites in NW Turkey, suggesting that it represents part of the presumed but missing incipient intra-oceanic arc associated with the generation of the regional supra-subduction ophiolites. Remnants of a Late Cretaceous intra-oceanic paleo-arc and supra-subduction ophiolites can be traced eastward within the Alpine-Himalayan orogenic belt. This reveals that Late Cretaceous intra-oceanic subduction occurred as a connected event above the sinking Tethyan slab, resulting in arc accretion to the Laurasian active margin and in supra-subduction ophiolite obduction onto the Gondwana-derived terranes.
Precipitation forecasting has an important place in everyday life – during the day we may chat many times about the likelihood that it will rain this evening or at the weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
A system for precipitation nowcasting comprises two major components: radar-based precipitation estimates, and models to extrapolate that precipitation into the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for nowcast error diagnosis and isolation that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
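The extrapolation step of such benchmark models can be reduced to a toy example: given a motion field (in rainymotion obtained from optical flow; here a hypothetical constant field), the last radar frame is warped forward by backward (semi-Lagrangian) sampling. A minimal NumPy sketch:

```python
import numpy as np

def advect(field, u, v, steps=1):
    """Backward (semi-Lagrangian) advection: each pixel at t+1 takes the
    value found one motion vector upstream at t. u, v are in pixels/step;
    nearest-neighbour sampling, edges clipped. A toy stand-in for the
    warping step of optical-flow-based nowcasting."""
    ny, nx = field.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    src_y = np.clip(np.rint(yy - v), 0, ny - 1).astype(int)
    src_x = np.clip(np.rint(xx - u), 0, nx - 1).astype(int)
    out = field
    for _ in range(steps):
        out = out[src_y, src_x]
    return out

rain = np.zeros((5, 5))
rain[1, 1] = 8.0                      # a single precipitation cell (mm/h)
nowcast = advect(rain, u=1.0, v=1.0)  # constant motion: 1 px right, 1 px down
print(nowcast[2, 2])                  # the cell has moved to (2, 2)
```

Real nowcasting systems replace the constant (u, v) with a per-pixel motion field estimated between consecutive radar frames.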
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The high benefit of using "big data" for training is among the main reasons for that. Hence, the emerging interest in deep learning in the atmospheric sciences also goes hand in hand with the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this end, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks; the latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
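The two routine verification metrics can be stated compactly: MAE averages absolute intensity errors, while CSI scores the exceedance of an intensity threshold as hits/(hits + misses + false alarms). A minimal sketch with hypothetical observed and forecast intensities:

```python
import numpy as np

def mae(obs, fcst):
    """Mean absolute error of forecast intensities (mm/h)."""
    return float(np.mean(np.abs(np.asarray(fcst) - np.asarray(obs))))

def csi(obs, fcst, threshold):
    """Critical success index for exceedance of an intensity threshold:
    hits / (hits + misses + false alarms)."""
    o = np.asarray(obs) >= threshold
    f = np.asarray(fcst) >= threshold
    hits = np.sum(o & f)
    misses = np.sum(o & ~f)
    false_alarms = np.sum(~o & f)
    denom = hits + misses + false_alarms
    return float(hits / denom) if denom else float("nan")

obs  = [0.0, 2.0, 6.0, 1.5]   # hypothetical observed intensities (mm/h)
fcst = [0.5, 1.8, 5.0, 0.1]   # hypothetical forecast intensities (mm/h)
print(mae(obs, fcst))         # 0.775
print(csi(obs, fcst, 1.0))    # 2 hits, 1 miss, 0 false alarms -> 0.666...
```

CSI at increasing thresholds (e.g. 0.125, 1, 5 mm/h as above) is what exposes a model's weakness at high intensities: a smoothed forecast rarely exceeds the high thresholds, so hits vanish while misses accumulate.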
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
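The spectral diagnosis referred to above can be sketched as a radially averaged power spectrum: smoothing a field suppresses power at high wavenumbers, i.e. at small length scales. The fields and the crude smoothing filter below are synthetic stand-ins, not the thesis data:

```python
import numpy as np

def radial_psd(field):
    """Radially averaged power spectral density of a 2-D field: mean
    spectral power per integer wavenumber bin, from the 2-D FFT. A simple
    diagnostic for whether a model smooths away small-scale variability."""
    ny, nx = field.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    ky = np.fft.fftshift(np.fft.fftfreq(ny)) * ny
    kx = np.fft.fftshift(np.fft.fftfreq(nx)) * nx
    kr = np.rint(np.hypot(*np.meshgrid(ky, kx, indexing="ij"))).astype(int)
    return np.bincount(kr.ravel(), weights=power.ravel()) / np.bincount(kr.ravel())

rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))
smooth = (noisy + np.roll(noisy, 1, 0) + np.roll(noisy, 1, 1)) / 3.0  # crude smoothing
psd_noisy, psd_smooth = radial_psd(noisy), radial_psd(smooth)
# smoothing removes power at the highest wavenumbers (smallest length scales)
print(psd_smooth[30] < psd_noisy[30])
```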
The model development, together with the verification experiments for both conventional and deep learning model predictions, also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help to guide further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
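With the observed corner tracks as the reference, the location error reduces to a per-lead-time Euclidean distance between observed and predicted positions. A minimal sketch with hypothetical track coordinates (in km):

```python
import numpy as np

def location_error(observed_track, predicted_track):
    """Euclidean distance (e.g. in km) between observed and predicted
    feature positions at each lead time; the observed track of a tracked
    precipitation feature serves as the reference."""
    obs = np.asarray(observed_track, dtype=float)
    pred = np.asarray(predicted_track, dtype=float)
    return np.hypot(*(obs - pred).T)

# hypothetical cell positions (x, y) at lead times 0, 1, 2
observed  = [(0, 0), (4, 1), (9, 3)]
predicted = [(0, 0), (4, 1), (8, 2)]   # e.g. linear extrapolation of the first step
print(location_error(observed, predicted))  # [0. 0. 1.414...]
```

Aggregating such distances over many tracked features and lead times yields the error statistics reported below.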
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models, two of which are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method: motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5 percent of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of locational errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also consider the high potential of using deep learning architectures specific to the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
Nonaqueous synthesis of metal oxide nanoparticles and their assembly into mesoporous materials
(2006)
This thesis mainly consists of two parts: the synthesis of several kinds of technologically interesting crystalline metal oxide nanoparticles via a nonaqueous sol-gel process, and the formation of mesoporous metal oxides using some of these nanoparticles as building blocks via the evaporation-induced self-assembly (EISA) technique. In the first part, the experimental procedures and characterization results of the successful syntheses of crystalline tin oxide and tin-doped indium oxide (ITO) nanoparticles are reported. SnO2 nanoparticles exhibit a monodisperse particle size (3.5 nm on average), high crystallinity and a particularly high dispersibility in THF, which makes them the ideal particulate precursor for the formation of mesoporous SnO2. ITO nanoparticles possess uniform particle morphology, a narrow particle size distribution (5–10 nm), high crystallinity as well as high electrical conductivity. The synthesis approaches and characterization of various mesoporous metal oxides, including TiO2, SnO2, a mixture of CeO2 and TiO2, and a mixture of BaTiO3 and SnO2, are reported in the second part of this thesis. Mesoporous TiO2 and SnO2 are presented as highlights of this part. Mesoporous TiO2 was produced both as films and as bulk material. In the case of mesoporous SnO2, the study focused on the high order of the porous structure. All these mesoporous metal oxides show high crystallinity, high surface area and rather monodisperse pore sizes, demonstrating the validity of the EISA process and the use of preformed crystalline nanoparticles as nanobuilding blocks (NBBs) to produce mesoporous metal oxides.
Today the European Monetary Union (EMU) comprises 16 states with a total of 321 million inhabitants; with a gross domestic product of 22.9 trillion euros it is one of the largest economic areas in the world. In the coming years the EMU will grow further through the accession of the new EU countries that joined in 2004 and 2007. Since accession depends on fulfilment of the Maastricht criteria, this enlargement will, in contrast to the fifth EU enlargement round, take place not as a block but sequentially. After the accessions of Slovenia on 1 January 2007 and Slovakia on 1 January 2009, the accession of a first large country is expected in the coming years. The question of the effects of such an accession has therefore attracted broad interest in the economic literature for some time. The research goal of this dissertation is to map the theoretical transmission mechanisms of an accession of the new member countries to the European Monetary Union. To this end, possible consequences for stabilisation policy as well as the effects of accession on the geographic economic structure and on the growth of these countries are derived in theoretical models. In addition, the direct effects of accession are quantified in an applied theoretical model. Overall, accession is analysed from three different perspectives: first, the consequences of monetary union for the stabilisation policy of the new member countries are considered within a New Keynesian model; second, the gains associated with the reduction in transaction costs are quantified in an applied equilibrium model; third, the growth effects of financial market integration are examined in a dynamic equilibrium model.
Since the three aspects of macroeconomic stability, transaction cost reduction and the dynamic effects of financial market integration arise largely independently of one another, the use of different models comes at little cost. In its overall assessment of EMU accession by the new EU countries, this work reaches a different conclusion from previous studies. The consequences for stabilisation policy identified in part one are either neutral or imply greater stability upon joining the monetary union. The static and dynamic gains from accession identified in parts two and three are, moreover, substantial, so that rapid accession to the monetary union is advantageous for the new EU member countries. Taking into account the goals of the European Economic and Monetary Union (EEMU), however, two conditions must be fulfilled. On the one hand, sufficiently developed financial markets are necessary to achieve the goal of convergence between the new and old EU member countries. On the other hand, while the currency area as a whole will profit from stronger financial market integration and a reduction in transaction costs, it will become less stable through the transmission of shocks from the new member countries. Accession of the new member countries to the EMU may therefore be negative for the currency area as a whole. These costs can only be justified if greater stability of the currency area is achieved through the faster development of the new member states. The New Keynesian growth model provides indications that such a development could occur.
Several mechanisms have been proposed to be part of the earthquake triggering process, including static stress interactions and dynamic stress transfer. Significant differences between these mechanisms are expected particularly in the spatial distribution of aftershocks. However, testing the different hypotheses is challenging because it requires the consideration of the large uncertainties involved in stress calculations as well as the appropriate consideration of secondary aftershock triggering, which is related to stress changes induced by smaller pre- and aftershocks. In order to evaluate the forecast capability of the different mechanisms, I take the effect of smaller-magnitude earthquakes into account by using the epidemic-type aftershock sequence (ETAS) model, in which the spatial probability distribution of direct aftershocks, if available, is correlated with alternative source information and mechanisms. Surface shaking, rupture geometry, and slip distributions are tested. As an approximation of the shaking level, ShakeMaps are used, which are available in near real-time after a mainshock and could thus be used for first-order forecasts of the spatial aftershock distribution. Alternatively, the use of empirical decay laws related to the minimum fault distance is tested, as well as Coulomb stress change calculations based on published and random slip models. For comparison, the likelihood values of the different model combinations are analyzed for several well-known aftershock sequences (1992 Landers, 1999 Hector Mine, 2004 Parkfield). The tests show that the fault geometry is the most valuable information for improving aftershock forecasts. Furthermore, they reveal that static stress maps can additionally improve forecasts of off-fault aftershock locations, while the integration of ground shaking data did not improve the results significantly. In the second part of this work, I focus on a procedure to test the information content of inverted slip models.
This allows one to quantify the information gain when this kind of data is included in aftershock forecasts. For this purpose, the ETAS model based on static stress changes, introduced in part one, is applied. The forecast ability of the models is systematically tested for several earthquake sequences and compared to models using random slip distributions. The influence of subfault resolution and of segment strike and dip is tested. Some of the tested slip models perform very well; in those cases almost no random slip model performs better. In contrast, for some of the published slip models, almost all random slip models perform better. Choosing a different subfault resolution hardly influences the result, as long as the general slip pattern is still reproducible. Different strike and dip values, by contrast, strongly influence the results, depending on the standard deviation applied in the process of randomly selecting the strike and dip values.
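The ETAS model underlying both parts combines a background rate with Omori-type aftershock triggering scaled exponentially by magnitude. A minimal temporal version is sketched below; the thesis uses a spatiotemporal variant with stress-based spatial kernels, and the parameter values here are illustrative, not fitted:

```python
import math

def etas_intensity(t, history, mu=0.02, K=0.05, alpha=1.0, c=0.01, p=1.2, m0=3.0):
    """Temporal ETAS conditional intensity:
    lambda(t) = mu + sum_i K * exp(alpha * (m_i - m0)) / (t - t_i + c)**p,
    summed over all earthquakes (t_i, m_i) that occurred before time t.
    mu: background rate; K, alpha: productivity; c, p: Omori-Utsu decay."""
    rate = mu
    for t_i, m_i in history:
        if t_i < t:
            rate += K * math.exp(alpha * (m_i - m0)) / (t - t_i + c) ** p
    return rate

catalog = [(0.0, 6.5), (1.0, 4.2)]     # hypothetical (time in days, magnitude)
print(etas_intensity(2.0, catalog))    # elevated rate from the M6.5 mainshock
print(etas_intensity(100.0, catalog))  # decays back toward the background mu
```

Replacing the isotropic spatial term of standard ETAS with kernels derived from fault geometry or Coulomb stress maps is exactly the kind of model combination whose likelihood is compared here.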
This thesis describes a new experimental method for the determination of the Mode II (shear) fracture toughness KIIC of rock and compares the outcome to results from Mode I (tensile) fracture toughness (KIC) testing using the International Society for Rock Mechanics Chevron-Bend method. Critical Mode I fracture growth at ambient conditions was studied by carrying out a series of experiments on a sandstone at different loading rates. The mechanical and microstructural data show that time- and loading-rate-dependent crack growth occurs in the test material at constant energy requirement. The newly developed set-up for the determination of the Mode II fracture toughness is called the Punch-Through Shear (PTS) test. Notches are drilled into the end surfaces of core samples; an axial load punches down the central cylinder, introducing a shear load in the remaining rock bridge, and a confining pressure may be applied to the mantle of the cores. The application of confining pressure favours the growth of Mode II fractures, as large pressures suppress the growth of tensile cracks. Variation of the geometrical parameters led to an optimisation of the PTS geometry. Increasing the normal load on the shear zone increases KIIC bilinearly: a steep slope is observed at low confining pressures, whereas at pressures above 30 MPa the slope is low. The maximum confining pressure applied was 70 MPa. The evolution of fracturing and its change with confining pressure are described. The existence of Mode II fractures in rock is a matter of debate in the literature. Comparison of the results from Mode I and Mode II testing, mainly regarding the resulting fracture patterns, and correlation analysis of KIC and KIIC with physico-mechanical parameters emphasise the differences between the response of rock to Mode I and Mode II loading. On the microscale, neither the fractures resulting from Mode I nor those from Mode II loading are pure-mode fractures; on the macroscopic scale, Mode I and Mode II fractures do exist.
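Fracture toughness values such as KIC and KIIC are critical values of a stress intensity factor. For generic Mode I loading the factor scales as K = Y·σ·√(πa); the PTS geometry uses its own calibrated geometry factor, so the sketch below is illustrative only:

```python
import math

def stress_intensity(sigma_mpa, a_m, Y=1.0):
    """Generic Mode I stress intensity factor K = Y * sigma * sqrt(pi * a),
    in MPa*sqrt(m). Y is a dimensionless geometry factor; the actual
    Punch-Through Shear set-up requires its own calibrated factor."""
    return Y * sigma_mpa * math.sqrt(math.pi * a_m)

# a 10 MPa far-field stress on a 10 mm crack
print(stress_intensity(10.0, 0.01))  # ~1.77 MPa*sqrt(m)
```

A fracture is predicted to propagate critically once K reaches the material's toughness (KIC for tensile, KIIC for shear loading), which is why the confining-pressure dependence of KIIC reported above matters for deep rock engineering.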
Adopting a minimalist framework, the dissertation provides an analysis of the syntactic structure of comparatives, with special attention paid to the derivation of the subclause. The proposed account explains how the comparative subclause is connected to the matrix clause, how the subclause is formed in the syntax and what additional processes contribute to its final structure. In addition, it casts light upon these problems in cross-linguistic terms and provides a model that allows for synchronic and diachronic differences. This also enables one to give a more adequate explanation for the phenomena found in English comparatives, since the properties of English structures can then be linked to general settings of the language and hence need no longer be considered idiosyncratic features of the grammar of English. First, the dissertation provides a unified analysis of degree expressions, relating the structure of comparatives to that of other degree expressions. It is shown that gradable adjectives are located within a degree phrase (DegP), which in turn projects a quantifier phrase (QP), and that these two functional layers are always present, irrespective of whether there is a phonologically visible element in these layers. Second, the dissertation presents a novel analysis of Comparative Deletion by reducing it to an overtness constraint holding on operators: in this way, it is reduced to morphological differences, and cross-linguistic variation is not conditioned by postulating an arbitrary parameter. Cross-linguistic differences are ultimately dependent on whether a language has overt operators equipped with the relevant – [+compr] and [+rel] – features. Third, the dissertation provides an adequate explanation for the phenomenon of Attributive Comparative Deletion, as attested in English, by relating it to the regular mechanism of Comparative Deletion.
I assume that Attributive Comparative Deletion is not a universal phenomenon, and that its presence in English is conditioned by independent, more general rules, while the absence of such restrictions leads to its absence in other languages. Fourth, the dissertation accounts for certain phenomena related to diachronic changes, examining how changes in the status of comparative operators led to changes in whether Comparative Deletion is attested in a given language: I argue that only operators without a lexical XP can be grammaticalised. The underlying mechanisms are essentially general economy principles, and hence the processes are not language-specific or exceptional. Fifth, the dissertation accounts for optional ellipsis processes that play a crucial role in the derivation of typical comparative subclauses. These processes are not directly related to the structure of degree expressions and hence to the elimination of the quantified expression from the subclause; nevertheless, they are shown to interact with the mechanisms underlying Comparative Deletion or the absence thereof.
We study buckling instabilities of filaments in biological systems. Filaments in a cell are the building blocks of the cytoskeleton. They are responsible for the mechanical stability of cells and play an important role in intracellular transport by molecular motors, which transport cargo such as organelles along cytoskeletal filaments. Filaments of the cytoskeleton are semiflexible polymers, i.e., their bending energy is comparable to the thermal energy such that they can be viewed as elastic rods on the nanometer scale, which exhibit pronounced thermal fluctuations. Like macroscopic elastic rods, filaments can undergo a mechanical buckling instability under a compressive load. In the first part of the thesis, we study how this buckling instability is affected by the pronounced thermal fluctuations of the filaments. In cells, compressive loads on filaments can be generated by molecular motors. This happens, for example, during cell division in the mitotic spindle. In the second part of the thesis, we investigate how the stochastic nature of such motor-generated forces influences the buckling behavior of filaments. In chapter 2 we review briefly the buckling instability problem of rods on the macroscopic scale and introduce an analytical model for buckling of filaments or elastic rods in two spatial dimensions in the presence of thermal fluctuations. We present an analytical treatment of the buckling instability in the presence of thermal fluctuations based on a renormalization-like procedure in terms of the non-linear sigma model where we integrate out short-wavelength fluctuations in order to obtain an effective theory for the mode of the longest wavelength governing the buckling instability. We calculate the resulting shift of the critical force by fluctuation effects and find that, in two spatial dimensions, thermal fluctuations increase this force. 
Furthermore, in the buckled state, thermal fluctuations lead to an increase in the mean projected length of the filament in the force direction. As a function of the contour length, the mean projected length exhibits a cusp at the buckling instability, which becomes rounded by thermal fluctuations. Our main result is the observation that a buckled filament is stretched by thermal fluctuations, i.e., its mean projected length in the direction of the applied force increases by thermal fluctuations. Our analytical results are confirmed by Monte Carlo simulations for buckling of semiflexible filaments in two spatial dimensions. We also perform Monte Carlo simulations in higher spatial dimensions and show that the increase in projected length by thermal fluctuations is less pronounced than in two dimensions and strongly depends on the choice of the boundary conditions. In the second part of this work, we present a model for buckling of semiflexible filaments under the action of molecular motors. We investigate a system in which a group of motors moves along a clamped filament carrying a second filament as a cargo. The cargo-filament is pushed against the wall and eventually buckles. The force-generating motors can stochastically unbind and rebind to the filament during the buckling process. We formulate a stochastic model of this system and calculate the mean first passage time for the unbinding of all linking motors which corresponds to the transition back to the unbuckled state of the cargo filament in a mean-field model. Our results show that for sufficiently short microtubules the movement of kinesin-I-motors is affected by the load force generated by the cargo filament. Our predictions could be tested in future experiments.
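The classical buckling threshold underlying this analysis can be made concrete with a back-of-the-envelope calculation. The sketch below uses the standard Euler critical force for a rod with hinged ends, F_c = π²κ/L², with the bending rigidity expressed through the persistence length as κ = k_BT·ℓ_p; the numerical values (room-temperature k_BT, a microtubule-like persistence length of 5 mm, a 5 µm contour length) are illustrative assumptions, not values from the thesis.

```python
import math

# Illustrative parameters (assumed, not from the thesis)
kB_T = 4.1e-21            # thermal energy at room temperature, J
persistence_length = 5e-3  # microtubule-like persistence length (~5 mm), m
L = 5e-6                   # filament contour length (5 micrometres), m

# Bending rigidity from the persistence length: kappa = kB*T * l_p
kappa = kB_T * persistence_length  # J*m

# Classical Euler critical force for hinged ends: F_c = pi^2 * kappa / L^2
F_c = math.pi**2 * kappa / L**2  # N
F_c_pN = F_c * 1e12              # convert to piconewtons
```

For these values the threshold comes out at roughly 8 pN, i.e. on the order of the piconewton forces that molecular motors such as kinesin are known to generate, which is why motor-generated loads can plausibly buckle short cargo filaments.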
This thesis examines the socio-cultural and institutional environment of public-sector organisations in Mongolia, which significantly influences current public administration reform efforts. The study draws on culture and value theory. Rule-conforming behaviour, a community-favouring strict hierarchy, the fatalistic acceptance of authority as inevitable and uncontrollable, and an individualism oriented towards decision-making and opinion-forming that are as independent as possible are the widespread cultural behaviour patterns in Mongolia's public-sector organisations. Accordingly, public-service employees selflessly pursue the welfare of the population, compliance with public rules, harmonious relations between people, and the security and sustainability of life. Certain values relating to self-determination, such as a personal mindset, independent action and creativity, are very important to them. This socio-cultural context has a major impact on the work behaviour of public-service employees and on their activities in implementing public administration reforms. Institutional leadership that promotes and protects value systems is therefore indispensable for implementing reforms in these institutions.
This dissertation presents the first total syntheses of the arylnaphthalene lignans alashinol D, vitexdoin C, vitrofolal E, noralashinol C1 and ternifoliuslignan E. The key step of the developed method is based on a regioselective intramolecular photo-dehydro-Diels-Alder (PDDA) reaction, carried out under UV irradiation in a flow reactor. For the synthesis of the PDDA precursors (diaryl suberates), a modular synthetic strategy was pursued. This allows the assembly of asymmetric complex systems from only a few basic building blocks and the total synthesis of a wide range of lignans. Systematic preliminary studies also demonstrated the clear superiority of the intramolecular over the intermolecular PDDA reaction. Linking the two aryl propiolates via a suberic acid tether in the para position proved particularly efficient. When asymmetrically substituted diaryl suberates are used in which one of the terminal ester substituents is replaced by a trimethylsilyl group or a hydrogen atom, these systems undergo a regioselective cyclisation, and naphthalenophanes bearing a methyl ester in the 3-position are obtained as the main product. Extensive experiments on the functionalisation of the 4-position further showed that substitution of the nucleophilic cycloallene intermediates during the PDDA reaction is generally possible by adding N-halosuccinimides. Given the low yields, however, these intermolecular trapping reactions are of no preparative use for the total synthesis of lignans. With the aim of optimising the general photochemical reaction conditions, the triplet-sensitised PDDA reaction was presented for the first time.
Using xanthone as a sensitiser enabled the use of more efficient UVA light sources, minimising the risk of photodecomposition through over-irradiation. Compared with direct excitation by UVB radiation, the yields were significantly increased by indirect excitation via a photocatalyst. The fundamental insights and the synthetic strategies developed in this work can help to advance the future development of new pharmacologically interesting lignans.
1 To date, only the semisynthetic preparation of noralashinol C starting from hydroxymatairesinol has been reported in the literature.
The central element of this work is the synthesis and characterisation of practically usable ionogels. The polymer ionogels are based on the model polymer poly(methyl methacrylate). Ionic liquids derived from the widely used imidazolium cation serve as additives. The properties of the embedded ionic liquids give the ionogels their function. The functionality of the respective gels, and thus the transfer of the properties of ionic liquids to the ionogels, was verified and confirmed in this work by numerous characterisation techniques. Macroscopic ionogel objects in the form of films and nonwoven mats were produced by ionogel formation, using film casting and electrospinning as the respective production methods, each resulting in a model system. The work is thus divided into the thematic areas of "electrically semiconducting ionogel films" and "antimicrobially active ionogel mats". The use of triiodide-containing ionic liquids and a polymer matrix in a discontinuous casting process yields electrically semiconducting ionogel films. The flexible and transparent films may become the focus of numerous new applications in flexible electronics. Electrospinning of poly(methyl methacrylate) with an ionic liquid produced a homogeneous ionogel mat, which serves as a model for transferring the antimicrobial properties of ionic liquids to porous structures for filtration. At the same time, it is the first example of a copper chloride-containing ionogel. Ionogels are attractive materials with numerous possible applications. This work extends the spectrum of ionogels by an electrically semiconducting and an antimicrobially active ionogel.
At the same time, this work added three examples of electrically semiconducting ionic liquids as well as numerous copper(II) chloride-based ionic liquids to the class of ionic liquids.
Chitooligosaccharides are composed of glucosamine and N-acetylglucosamine residues. Gel permeation chromatography is employed for the separation of oligomers; cation exchange chromatography is used for the separation of homologues and isomers. Trideuterioacetylation of the chitooligosaccharides followed by MALDI-TOF mass spectrometry allows the quantitation of mixtures of homologues. vMALDI LTQ multiple-stage MS is employed for quantitative sequencing of complex mixtures of heterochitooligosaccharides. Pure homologues and isomers are applied in biological assays. Chitooligosaccharides form high-affinity non-covalent complexes with HC gp-39 (human cartilage glycoprotein of 39 kDa). The affinity of the chitooligosaccharides depends on DP, FA and the sequence of glucosamine and N-acetylglucosamine moieties. (+)nanoESI Q-TOF MS/MS is used for the identification of a high-affinity binding chitooligosaccharide from a non-covalent chitinase B–chitooligosaccharide complex. DADAA is identified as the heterochito-isomer binding with the highest affinity and biostability to HC gp-39. Fluorescence-based enzyme assays confirm these results.
Nowadays, business processes are increasingly supported by IT services that produce massive amounts of event data during process execution. Aiming at a better process understanding and improvement, this event data can be used to analyze processes using process mining techniques. Process models can be automatically discovered and the execution can be checked for conformance to specified behavior. Moreover, existing process models can be enhanced and annotated with valuable information, for example for performance analysis. While the maturity of process mining algorithms is increasing and more tools are entering the market, process mining projects still face the problem of different levels of abstraction when comparing events with modeled business activities. Mapping the recorded events to activities of a given process model is essential for conformance checking, annotation and understanding of process discovery results. Current approaches try to abstract from events in an automated way that does not capture the required domain knowledge to fit business activities. Such techniques can be a good way to quickly reduce complexity in process discovery. Yet, they fail to enable techniques like conformance checking or model annotation, and potentially create misleading process discovery results by not using the known business terminology.
In this thesis, we develop approaches that abstract an event log to the level needed by the business. Typically, this abstraction level is defined by a given process model. Thus, the goal of this thesis is to match events from an event log to activities in a given process model. To accomplish this goal, behavioral and linguistic aspects of process models and event logs, as well as domain knowledge captured in existing process documentation, are taken into account to build semi-automatic matching approaches. The approaches establish a pre-processing step for every available process mining technique that produces or annotates a process model, thereby reducing the manual effort for process analysts. While each of the presented approaches can be used in isolation, we also introduce a general framework for the integration of different matching approaches.
The approaches have been evaluated in case studies with industry and using a large industry process model collection and simulated event logs. The evaluation demonstrates the effectiveness and efficiency of the approaches and their robustness towards nonconforming execution logs.
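To give a rough sense of what matching events to activities involves, the toy sketch below (not one of the thesis's actual approaches) maps event labels to model activities by normalized string similarity alone; the labels, threshold and greedy strategy are all hypothetical, and a realistic matcher would also exploit the behavioral and documentation-based signals described above.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized label similarity in [0, 1] between an event name and an activity name."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_events_to_activities(events, activities, threshold=0.5):
    """Greedily map each event to its most similar model activity,
    leaving events below the similarity threshold unmatched."""
    mapping = {}
    for event in events:
        best = max(activities, key=lambda act: similarity(event, act))
        if similarity(event, best) >= threshold:
            mapping[event] = best
    return mapping

# Hypothetical event-log labels vs. modeled business activities
events = ["createPurchOrder", "approve_po", "sendInvoice"]
activities = ["Create Purchase Order", "Approve Purchase Order", "Send Invoice"]
mapping = match_events_to_activities(events, activities)
```

Pure label similarity already resolves simple abbreviation and casing differences, but it cannot distinguish activities with near-identical names, which is where behavioral context becomes essential.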
Antarctic glacier forefields are extreme environments and pioneer sites for ecological succession. Because of its special environment, geographic isolation and little anthropogenic influence, the Antarctic continent serves as a natural laboratory for studying microbial community development. Increasing temperatures due to global warming lead to enhanced deglaciation in cold-affected habitats, and newly exposed terrain becomes subject to soil formation and accessible for microbial colonisation. This study aims to understand the structure and development of glacier forefield bacterial communities, especially how soil parameters affect the microorganisms and how these are adapted to the extreme conditions of the habitat. To this end, a combination of cultivation experiments and molecular, geophysical and geochemical analyses was applied to examine two glacier forefields in the Larsemann Hills, East Antarctica. Culture-independent molecular tools such as terminal restriction fragment length polymorphism (T-RFLP), clone libraries and quantitative real-time PCR (qPCR) were used to determine bacterial diversity and distribution. Cultivation of as yet unknown species was carried out to gain insights into the physiology and adaptation of the microorganisms. Adaptation strategies were studied by determining changes in the cell-membrane phospholipid fatty acid (PLFA) inventory of an isolated bacterium in response to temperature and pH fluctuations, and by measuring enzyme activity at low temperature in environmental soil samples. The two studied glacier forefields are extreme habitats characterised by low temperatures, low water availability and small oligotrophic nutrient pools, and represent sites of different bacterial succession in relation to soil parameters. The investigated sites showed microbial succession at an early stage of soil formation near the ice tongue in comparison to closely located but older and more developed soil from the forefield.
At the early stage, succession is influenced by a deglaciation-dependent areal shift of soil parameters, followed by a variable and predominantly depth-related distribution of the soil parameters driven by the extreme Antarctic conditions. The dominant taxa in the glacier forefields are Actinobacteria, Acidobacteria, Proteobacteria, Bacteroidetes, Cyanobacteria and Chloroflexi. Relating soil characteristics to bacterial community structure showed that soil parameters and soil formation along the glacier forefield influence the distribution of certain phyla. In the early stage of succession, the relatively undifferentiated bacterial diversity reflects the undifferentiated soil development and has a high potential to shift according to past and present environmental conditions. With progressing development, environmental constraints such as water or carbon limitation have a greater influence. By adapting the culturing conditions to the cold and oligotrophic environment, the number of culturable heterotrophic bacteria reached up to 10⁸ colony-forming units per gram of soil, and 148 isolates were obtained. Two new psychrotolerant bacteria, Herbaspirillum psychrotolerans PB1T and Chryseobacterium frigidisoli PB4T, were characterised in detail and described as novel species in the families Oxalobacteraceae and Flavobacteriaceae, respectively. The isolates are able to grow at low temperatures, tolerate temperature fluctuations and are not specialised to a certain substrate; they are therefore well adapted to the cold and oligotrophic environment. The adaptation strategies of the microorganisms were analysed in environmental samples and cultures, focussing on extracellular enzyme activity at low temperature and on PLFA analyses.
Extracellular phosphatase (pH 11 and pH 6.5), β-glucosidase, invertase and urease activity were detected in the glacier forefield soils at low temperature (14 °C). These enzymes catalyse the conversion of various compounds, providing necessary substrates, and may further play a role in the soil formation and total carbon turnover of the habitat. The PLFA analysis of the newly isolated species C. frigidisoli showed that the cold-adapted strain develops different strategies to maintain cell-membrane function under changing environmental conditions by altering the PLFA inventory at different temperatures and pH values. A newly discovered fatty acid, which has not been found in any other microorganism so far, significantly increased at decreasing temperature and low pH and thus plays an important role in the adaptation of C. frigidisoli. This work gives insights into the diversity, distribution and adaptation mechanisms of microbial communities in oligotrophic cold-affected soils and shows that Antarctic glacier forefields are suitable model systems for studying bacterial colonisation in connection with soil formation.