Health effects attributed to environmental pollution resulting from the use of solvents such as benzene are relatively unexplored among petroleum workers, private users, and laboratory researchers. Solvents can cause various health problems, including neurotoxicity, immunotoxicity, and carcinogenicity. They can be absorbed into the human body through the skin or the respiratory tract, where they interact with molecules responsible for biochemical and physiological processes in the brain.
Owing to the ever-growing demand for a solution, ionic liquids can be used as alternative solvents. Ionic liquids are salts that are liquid at low temperatures (below 100 °C), or even at room temperature. They offer a unique architectural platform and have attracted interest because of their unusual properties, which can be tuned in simple ways such as mixing two ionic liquids.
Ionic liquids are not only used as reaction solvents; they have also become key to developing novel applications, owing to their thermal stability, electrical conductivity, and very low vapor pressure in contrast to conventional solvents.
In this study, ionic liquids were used simultaneously as solvent and reactant in the synthesis of novel nanomaterials for different applications, including solar cells, gas sensors, and water splitting.
The field of ionic liquids continues to grow and has become one of the most important branches of chemistry. It appears to be at a point where research and industry can work together in a new way of thinking about green chemistry and sustainable production.
The climate is a complex dynamical system involving interactions and feedbacks among different processes at multiple temporal and spatial scales. Although numerous studies have attempted to understand the climate system, studies investigating its multi-scale characteristics are scarce. Furthermore, the present set of techniques is limited in its ability to unravel the multi-scale variability of the climate system. It is entirely plausible that extreme events and abrupt transitions, which are of great interest to the climate community, result from interactions among processes operating at multiple scales. For instance, storms, weather patterns, seasonal irregularities such as El Niño, floods and droughts, and decades-long climate variations can be better understood, and even predicted, by quantifying their multi-scale dynamics. This makes a strong argument for unraveling the interactions and patterns of climatic processes at different scales. With this background, the thesis aims at developing measures to understand and quantify multi-scale interactions within the climate system.
In the first part of the thesis, I proposed two new methods, namely multi-scale event synchronization (MSES) and wavelet multi-scale correlation (WMC), to capture the scale-specific features present in climatic processes. The proposed methods were tested on various synthetic and real-world time series in order to check their applicability and replicability. The results indicate that both methods (WMC and MSES) capture the scale-specific associations between processes at different time scales in more detail than their traditional single-scale counterparts.
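The core idea of scale-wise correlation can be illustrated with a minimal sketch (this is not the thesis implementation; the choice of the Haar wavelet and all function names are illustrative assumptions): decompose both series with a discrete wavelet transform and correlate the detail coefficients level by level.

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def haar_step(x):
    """One level of the Haar wavelet transform: approximation and detail coefficients."""
    approx = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    detail = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x) - 1, 2)]
    return approx, detail

def multiscale_correlation(x, y, levels=3):
    """Correlate the Haar detail coefficients of two series, level by level."""
    out = {}
    for level in range(1, levels + 1):
        x, dx = haar_step(x)
        y, dy = haar_step(y)
        out[level] = pearson(dx, dy)
    return out
```

Each entry of the returned dictionary is a correlation at one dyadic scale, so associations confined to a single scale remain visible instead of being averaged away as in a single-scale correlation.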
In the second part of the thesis, the proposed multi-scale similarity measures were used to construct climate networks and to investigate the evolution of spatial connections within climatic processes at multiple timescales. The proposed methods, WMC and MSES, together with complex network analysis, were applied to two different datasets.
In the first application, climate networks based on WMC were constructed for univariate global sea surface temperature (SST) data to identify and visualize SST patterns that develop very similarly over time and to distinguish them from those that have long-range teleconnections to other ocean regions. Further investigation of climate networks at different timescales revealed (i) various regions of high variability and co-variability, and (ii) short- and long-range teleconnection regions with varying spatial distance. The outcomes of the study not only re-confirmed existing knowledge of the link between SST patterns such as the El Niño–Southern Oscillation and the Pacific Decadal Oscillation, but also suggested new insights into the characteristics and origins of long-range teleconnections.
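A common way to build such a climate network is to treat each grid point's time series as a node and link two nodes when their similarity exceeds a threshold. A minimal sketch with Pearson correlation as the (single-scale) similarity measure, and with illustrative function names and threshold (not the thesis code):

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def correlation_network(series, threshold=0.5):
    """Adjacency sets: link two grid points if |correlation| exceeds the threshold."""
    n = len(series)
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if abs(pearson(series[i], series[j])) > threshold:
                edges[i].add(j)
                edges[j].add(i)
    return edges
```

Replacing `pearson` with a scale-specific measure such as WMC yields one network per timescale, which is what allows short- and long-range teleconnection regions to be separated by scale.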
In the second application, I used the developed non-linear MSES similarity measure to quantify multivariate teleconnections between extreme Indian precipitation and the climatic patterns of highest relevance for the Indian subcontinent. The results confirmed significant non-linear influences that were not well captured by traditional methods. Furthermore, there was substantial variation in the strength and nature of the teleconnections across India and across time scales.
Overall, the results of the investigations conducted in this thesis strongly highlight the need to consider the multi-scale aspects of climatic processes, and the proposed methods provide a robust framework for quantifying their multi-scale characteristics.
Berlin has a unique club landscape in amateur and semi-professional football, in which clubs once founded by Turkish migrants occupy a firm place. Football offers a social space for young people of different cultural, ethnic, and religious backgrounds, in which groups form in order to compete against one another. It also gives individuals the opportunity to subject the validity and relevance of prejudices and common stereotypes about other groups to constant testing in everyday play. Football players can move between multicultural and mono-ethnic group constellations, and in some cases transnational ones, thereby contributing substantially to the meaning they give to their own social belonging, which emerges from the tension between patterns of self-perception and perception by others. As a result, mechanisms of recognition are constituted in this space.
This dissertation deals with the everyday life of young amateur and semi-professional football players of Turkish origin (delikanli), as well as with other social actors in the Turkish football world, such as "older" players (agbi) and coaches (hoca). The main aim of the work was to reconstruct the collective patterns of perception, interpretation, and action of members of Turkish football clubs in general, and their self-presentation as well as their perception of the "others" in particular. The study sought to determine whether and to what extent traditional social behavior patterns of the chosen group are reflected in this technically regulated and strongly competition-oriented field of action and regulate the reciprocal relations between the "self" and the "others". In doing so, the relevance of origin-related stereotypes and prejudices in the collective constitution of self-perception and the understanding of others was reconstructed within the particular social field (Bourdieu, 2001) of football.
The work also examined, on the one hand, what role Turkish football clubs play in the emergence of social belonging to urban neighborhoods in Berlin and what mechanisms of social integration they establish within the clubs. On the other hand, it asked to what extent they contribute to social cohesion between diverse cultures. It was therefore examined whether and to what extent the negatively connoted ethnocentric perception of "difference" (Bielefeld, 1998), produced as a social construct between autochthonous and allochthonous groups, undergoes a constructive transformation through the engagement of club actors.
The overarching goal of these research questions was to gain a well-founded understanding of the role of Turkish football clubs as social mechanisms and to investigate how they function in the constitution of adaptation strategies in this social field. This role was examined in detail through the conceptualization of social positioning patterns, understood as a structure of everyday interpretations that regulates individual and collective patterns of action and, implicitly, patterns of understanding others and of othering in the migration context. A reconstruction of social positioning patterns offers an in-depth sociological examination of this group of participants, which also sheds light on the meaning and understanding of ethnic belonging for them.
In addition to extensive field observation, this qualitative study conducted a total of ten group discussions (Bohnsack, 2004) with players from different clubs within their teams about shared everyday experiences; these were recorded and interpreted using a hermeneutic method from the social sciences (Soeffner, 2004). With other club members, i.e., coaches (hoca), chairmen, managers, and sponsors, ten narrative and seven biographical individual interviews as well as seven expert interviews were conducted. Their analysis makes it possible to reconstruct the role of these members as well as the authority mechanisms and collectively constituted behavior patterns at work within the club as a whole. The aim was to illuminate the entirety of the social network and the relationship schemata within Berlin's Turkish football clubs.
The work draws on two theoretical standpoints. On the one hand, lifeworld analysis (Schütz and Luckmann, 1979, 1990) is applied in order to reconstruct the social heritage of the socially constituted label "people with a migration background" and to examine the influence of this social reproduction on the actors' patterns of perception, interpretation, and action. On the other hand, the social effect of actual, everyday schemata of experience in the social field of football on the actors' self-positioning is worked out by means of Goffman's frame analysis (Goffman, 1980).
Characterization of altered inflorescence architecture in Arabidopsis thaliana BG-5 x Kro-0 hybrid
(2018)
A reciprocal cross between two A. thaliana accessions, Kro-0 (Krotzenburg, Germany) and BG-5 (Seattle, USA), displays purple rosette leaves and a dwarf bushy phenotype in F1 hybrids when grown at 17 °C, and a parental-like phenotype when grown at 21 °C. This temperature-dependent F1 dwarf-bushy phenotype is characterized by reduced growth of the primary stem together with an increased number of branches. The reduction in stem growth was strongest at the first internode. In addition, we found that a temperature switch from 21 °C to 17 °C induced the phenotype only before the formation of the first internode of the stem. Similarly, the F1 dwarf-bushy phenotype could not be reversed when plants were shifted from 17 °C to 21 °C after the first internode had formed. Metabolic analysis showed that the F1 phenotype was associated with a significant upregulation of anthocyanins, kaempferols, salicylic acid, jasmonic acid, and abscisic acid. As it had previously been shown that the dwarf-bushy phenotype is linked to two loci, one on chromosome 2 from Kro-0 and one on chromosome 3 from BG-5, an artificial micro-RNA approach was used to investigate the candidate genes in these intervals. The results showed that two genes on chromosome 2, AT2G14120, which encodes DYNAMIN RELATED PROTEIN3B, and AT2G14100, which encodes a member of the cytochrome P450 family, CYP705A13, were necessary for the appearance of the F1 phenotype. On chromosome 3, AT3G61035, which encodes another cytochrome P450 family protein, and AT3G60840, which encodes MICROTUBULE-ASSOCIATED PROTEIN65-4, were likewise both necessary for the induction of the F1 phenotype. To prove the causality of these genes, genomic constructs of the Kro-0 candidate genes on chromosome 2 were transferred to BG-5, and genomic constructs of the chromosome 3 candidate genes from BG-5 were transferred to Kro-0.
The T1 lines showed that these genes alone are not sufficient to induce the phenotype. In addition to the F1 phenotype, more severe phenotypes were observed in the F2 generations, which were grouped into five phenotypic classes. While seed yield was comparable between F1 hybrids and the parental lines, three phenotypic classes in the F2 generation exhibited hybrid breakdown in the form of reproductive failure. This F2 hybrid breakdown was less sensitive to temperature and showed a dose-dependent effect of the loci involved in the F1 phenotype. The most severe class of hybrid breakdown phenotypes was observed only in the population backcrossed to the parent Kro-0, indicating a stronger contribution of the BG-5 allele than of the Kro-0 allele to the hybrid breakdown phenotypes. Overall, the findings of my thesis provide a further understanding of the genetic and metabolic factors underlying altered shoot architecture in hybrid dysfunction.
This text is a contribution to the research on the worldwide success of evangelical Christianity and offers a new perspective on the relationship between late modern capitalism and evangelicalism. For this purpose, the utilization of affect and emotion in evangelicalism for the mobilization of its members will be examined in order to identify similarities with their employment in late modern capitalism. Different examples from within the evangelical spectrum will be analyzed as affective economies in order to elaborate how affective mobilization is crucial for evangelicalism's worldwide success. The pivotal point of this text is the exploration of how evangelicalism is able to activate the voluntary commitment of its members, financiers, and missionaries. Gathered here are examples where both spheres—evangelicalism and late modern capitalism—overlap and reciprocate, followed by a theoretical exploration of how the findings presented support a view of evangelicalism as an inner-worldly narcissism that contributes to an assumed re-enchantment of the world.
The concept of hydrological connectivity summarizes all flow processes that link separate regions of a landscape. As such, it is a central theme in the field of catchment hydrology, with influence on neighboring disciplines such as ecology and geomorphology. It is widely acknowledged as an important key to understanding the response behavior of a catchment and has at the same time inspired research on internal processes over a broad range of scales. From this process-hydrological point of view, hydrological connectivity is the conceptual framework for linking local observations across space and scales.
This is the context in which the four studies that comprise this thesis were conducted. The focus was on structures and their spatial organization as important controls on preferential subsurface flow. Each experiment covered a part of the conceptualized flow path from hillslopes to the stream: soil profile, hillslope, riparian zone, and stream.
For each study site, the structures most characteristic of the investigated domain and scale, such as slope deposits and peat layers, were identified based on preliminary or previous investigations or literature reviews. In addition, further structural data were collected and topographical analyses were carried out. Flow processes were observed either through response observations (soil moisture changes or discharge patterns) or through direct measurement (advective heat transport). Based on these data, the flow relevance of the characteristic structures was evaluated, especially with regard to hillslope-to-stream connectivity.
The results of the four studies revealed a clear relationship between characteristic spatial structures and the hydrological behavior of the catchment. In particular, the spatial distribution of structures throughout the study domain and their interconnectedness were crucial for the establishment of preferential flow paths and their relevance for large-scale processes. Plot- and hillslope-scale irrigation experiments showed that the macropores of a heterogeneous, skeletal soil enabled preferential flow paths at the scale of centimeters through the otherwise unsaturated soil. These flow paths connected throughout the soil column and across the hillslope and facilitated substantial amounts of vertical and lateral flow through periglacial slope deposits.
In the riparian zone of the same headwater catchment, the connectivity between hillslopes and stream was controlled by topography and by the dualism between characteristic subsurface structures and the geomorphological heterogeneity of the stream channel. At the small scale (1 m to 10 m), the highest gains always occurred at steps along the longitudinal streambed profile, which also controlled discharge patterns at the large scale (100 m) during base flow conditions (through the number of steps per section). During medium and high flow conditions, however, the influence of topography and of parafluvial flow through riparian zone structures prevailed and dominated the large-scale response patterns.
In the streambed of a lowland river, low permeability peat layers affected the connectivity between surface water and groundwater, but also between surface water and the hyporheic zone. The crucial factor was not the permeability of the streambed itself, but rather the spatial arrangement of flow-impeding peat layers, causing increased vertical flow through narrow “windows” in contrast to predominantly lateral flow in extended areas of high hydraulic conductivity sediments.
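The contrast between flow-impeding peat and conductive "windows" follows directly from Darcy's law, q = -K dh/dl. A minimal illustration with assumed, order-of-magnitude conductivity values (not measured values from the study):

```python
def darcy_flux(K, dh, dl):
    """Darcy's law: specific discharge q = -K * dh/dl, with K in m/s."""
    return -K * dh / dl

# Same head drop of 0.1 m over 0.5 m of sediment, contrasting conductivities
# (the K values below are illustrative orders of magnitude, not site data):
q_peat = darcy_flux(K=1e-8, dh=-0.1, dl=0.5)  # flow-impeding peat layer
q_sand = darcy_flux(K=1e-4, dh=-0.1, dl=0.5)  # sandy "window" in the streambed
```

Under the same head gradient, the flux through the window exceeds that through the peat by the ratio of the conductivities, which is why a few narrow windows can dominate the vertical exchange of an otherwise low-permeability streambed.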
These results show that the spatial organization of structures was an important control for hydrological processes at all scales and study areas. In a final step, the observations from different scales and catchment elements were put in relation and compared. The main focus was on the theoretical analysis of the scale hierarchies of structures and processes and the direction of causal dependencies in this context. Based on the resulting hierarchical structure, a conceptual framework was developed which is capable of representing the system’s complexity while allowing for adequate simplifications.
The resulting concept of the parabolic scale series is based on the insight that flow processes in the terrestrial part of the catchment (soil and hillslopes) converge. This means that small-scale processes assemble and form large-scale processes and responses. Processes in the riparian zone and the streambed, however, are not well represented by the idea of convergence. Here, the large-scale catchment signal arrives and is modified by structures in the riparian zone, stream morphology, and the small-scale interactions between surface water and groundwater. Flow paths diverge and processes can better be represented by proceeding from large scales to smaller ones. The catchment-scale representation of processes and structures is thus the conceptual link between terrestrial hillslope processes and processes in the riparian corridor.
To grasp the current transformation of the public sphere in the digital age, public sphere theory requires a broadened perspective that considers not only mass-media discourse but also changes in social practices and institutional structures. The aim of this book is to develop the foundations of such a perspective for a theory of digital public spheres. In the proposed approach, following John Dewey, the public sphere is understood as a process. His processual and functional conception of the public sphere is particularly original and distinguishes his approach from other conceptions of the public sphere. The book provides both a systematic reconstruction and interpretation of John Dewey's philosophy and a proposal for a social-theoretical interpretation of digital change.
Plastic pollution is ubiquitous on the planet, since several million tons of plastic waste enter aquatic ecosystems each year. Furthermore, the amount of plastic produced is expected to increase sharply in the near future. Heterogeneity in materials, additives, and physical characteristics is typical of these emerging contaminants and affects their environmental fate in marine and fresh waters. Consequently, plastics can be found in the water column, sediments, or littoral habitats of all aquatic ecosystems. Most of this plastic debris fragments under physical, chemical, and biological forces, producing particles of small size. These particles (< 5 mm) are known as "microplastics" (MP). Given their high surface-to-volume ratio, MP stimulate biofouling and the formation of biofilms in aquatic systems.
As a result of their unique structure and composition, the microbial communities in MP biofilms are referred to as the "plastisphere." While a growing body of data describes the distinctive composition and structure of the microbial communities that form the plastisphere, scarce information exists on the activity of microorganisms in MP biofilms. The surface-attached lifestyle is often associated with an increase in horizontal gene transfer (HGT) among bacteria, which makes this type of microbial activity a relevant function to analyze in MP biofilms. The horizontal exchange of mobile genetic elements (MGEs) is an essential feature of bacteria: it accounts for the rapid evolution of these prokaryotes and their adaptation to a wide variety of environments. The process of HGT is also crucial for the spread of antibiotic resistance and for the evolution of pathogens, as many MGEs are known to contain antibiotic resistance genes (ARGs) and genetic determinants of pathogenicity.
In general, the research presented in this Ph.D. thesis focuses on the analysis of HGT and heterotrophic activity in MP biofilms in aquatic ecosystems. The primary objective was to analyze the potential for gene exchange within MP bacterial communities versus that of the surrounding water, including bacteria from natural aggregates. Moreover, the thesis addressed the potential of MP biofilms for the proliferation of biohazardous bacteria and MGEs originating from wastewater treatment plants (WWTPs) and associated with antibiotic resistance. Finally, it sought to determine whether the physiological profile of MP biofilms under different limnological conditions diverges from that of the water communities. Accordingly, the thesis is composed of three independent studies published in peer-reviewed journals. The two laboratory studies were performed using both model and environmental microbial communities; in the field experiment, natural communities from freshwater ecosystems were examined.
In Chapter I, the inflow of treated wastewater into a temperate lake was simulated with a concentration gradient of MP particles, and the effects of MP on the microbial community structure and on the occurrence of integrase 1 (int1) were followed. int1 is a marker associated with mobile genetic elements and serves as a proxy for anthropogenic effects on the spread of antimicrobial resistance genes. During the experiment, the abundance of int1 increased in the plastisphere with increasing MP particle concentration, but not in the surrounding water. In addition, with increasing microplastic concentrations, the microbial community on MP became more similar to the original wastewater community. Our results show that microplastic particles indeed promote the persistence of standard indicators of microbial anthropogenic pollution in natural waters.
In Chapter II, the experiments aimed to compare the permissiveness of aquatic bacteria towards the model antibiotic resistance plasmid pKJK5 between communities that form biofilms on MP and those that are free-living. The frequency of plasmid transfer in bacteria associated with MP was higher than in bacteria that were free-living or in natural aggregates. Moreover, the increased gene exchange occurred in a broad range of phylogenetically diverse bacteria. The results indicate a distinct activity of HGT in MP biofilms, which could affect the ecology of aquatic microbial communities on a global scale as well as the spread of antibiotic resistance.
Finally, in Chapter III, physiological measurements were performed to assess whether microorganisms on MP have a functional diversity different from those in the water. General heterotrophic activity, such as oxygen consumption, was compared in microcosm assays with and without MP, while the diversity and richness of heterotrophic activities were calculated using Biolog® EcoPlates. Three lakes with different nutrient statuses showed differences in MP-associated biomass build-up. The functional diversity profiles of MP biofilms in all lakes differed from those of the communities in the surrounding water, but only in the oligo-mesotrophic lake did MP biofilms have a higher functional richness than the ambient water. The results support the view that MP surfaces act as new niches for aquatic microorganisms and can affect the global carbon dynamics of pelagic environments.
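Functional diversity and richness from EcoPlate-style well readings are commonly computed as a Shannon index over the positive wells and as a count of wells above a detection threshold. A minimal sketch (function names and the zero threshold are illustrative assumptions, not the thesis code):

```python
import math

def shannon_diversity(wells):
    """Shannon index H' over positive activity readings (e.g., EcoPlate wells)."""
    total = sum(wells)
    ps = [w / total for w in wells if w > 0]
    return -sum(p * math.log(p) for p in ps)

def functional_richness(wells, threshold=0.0):
    """Number of wells with activity above a detection threshold."""
    return sum(1 for w in wells if w > threshold)
```

A community that metabolizes many substrates at similar rates scores high on both measures; one dominated by a few substrates has the same richness but lower diversity, which is why the two indices are reported together.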
Overall, the experimental work presented in Chapters I and II supports a scenario in which MP pollution affects HGT dynamics among aquatic bacteria. Among the consequences of this alteration is an increase in the mobilization and transfer efficiency of ARGs. Moreover, it suggests that changes in HGT can affect the evolution of bacteria and the processing of organic matter, leading to different catabolic profiles, as demonstrated in Chapter III. The results are discussed in the context of the fate and magnitude of plastic pollution and the importance of HGT for bacterial evolution and for the microbial loop, i.e., the base of aquatic food webs. The thesis supports a relevant role of MP biofilm communities in the changes observed in the aquatic microbiome as a product of intense human intervention.
The magnetic recording industry needs ever higher-density data storage technologies. However, switching the magnetization of small bits requires high magnetic fields that cause excessive heat dissipation. Controlling magnetism without applying an external magnetic field is therefore an important research topic for potential applications in data storage devices with low power consumption. Among the different approaches being investigated, two stand out: i) all-optical helicity-dependent switching (AO-HDS) and ii) ferroelectric control of magnetism. This thesis aims to contribute to a better understanding of the physical processes behind these effects, as well as to report new and exciting possibilities for the optical and/or electric control of magnetic properties. Accordingly, the thesis contains two distinct results chapters: the first devoted to AO-HDS in TbFe alloys and the second to the electric field control of magnetism in an archetypal Fe/BaTiO3 system.
In the first part, the scalability of AO-HDS to small laser spot sizes of a few microns in the ferrimagnetic TbFe alloy is investigated by spatially resolving the magnetic contrast with photo-emission electron microscopy (PEEM) and X-ray magnetic circular dichroism (XMCD). The results show that AO-HDS is a local effect within the laser spot that occurs in a ring-shaped region in the vicinity of thermal demagnetization. Within this ring region, helicity-dependent switching proceeds via thermally activated domain wall motion. Furthermore, the thesis reports a novel effect of thickness-dependent inversion of the switching orientation. It addresses important questions such as the role of laser heating and the microscopic mechanism driving AO-HDS.
The second part of the thesis focuses on the electric field control of magnetism in an artificial multiferroic heterostructure. The sample consists of an Fe wedge with thickness varying between 0.5 nm and 3 nm, deposited on top of a ferroelectric and ferroelastic BaTiO3 [001]-oriented single-crystal substrate. Here, the magnetic contrast is imaged via PEEM and XMCD as a function of out-of-plane voltage. The results show evidence of the electric field control of superparamagnetism, mediated by a ferroelastic modification of the magnetic anisotropy. The changes in the magnetoelastic anisotropy drive the transition from the superparamagnetic to the superferromagnetic state at localized sample positions.
Business process management is an acknowledged asset for running an organization in a productive and sustainable way. One of the most important aspects of business process management, occurring on a daily basis at all levels, is decision making. In recent years, a number of decision management frameworks have appeared in addition to existing business process management systems. More recently, Decision Model and Notation (DMN) was developed by the OMG consortium with the aim of complementing the widely used Business Process Model and Notation (BPMN). One of the reasons for the emergence of DMN is the increasing interest in the evolving paradigm known as the separation of concerns. This paradigm states that modeling decisions complementary to processes reduces process complexity by externalizing decision logic from process models into a dedicated decision model. Such an approach increases the agility of model design and execution, providing organizations with the flexibility to adapt to the ever more rapid and dynamic changes in the business ecosystem. The research gap we identified is that the separation of concerns recommended by DMN prescribes the externalization of the decision logic of process models into one or more separate decision models, but it does not specify how this can be achieved.
The goal of this thesis is to overcome the presented gap by developing a framework for discovering decision models in a semi-automated way from information about existing process decision making. Thus, in this thesis we develop methodologies to extract decision models from: (1) the control flow and data of process models that exist in enterprises; and (2) event logs recorded by enterprise information systems, encapsulating day-to-day operations. Furthermore, we extend the methodologies to discover decision models from event logs enriched with fuzziness, a tool for dealing with partial knowledge of process execution information. All the proposed techniques are implemented and evaluated in case studies using real-life and synthetic process models and event logs. The evaluation of these case studies shows that the proposed methodologies produce valid and accurate decision models that can serve as blueprints for executing decisions complementary to process models. Thus, these methodologies are applicable in the real world and can be used, for example, for compliance checking, which could improve an organization's decision making and hence its overall performance.
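The basic idea of discovering decision logic from an event log can be illustrated with a minimal sketch (this is not the thesis algorithm; the attribute names, the log structure, and the majority-vote rule are illustrative assumptions): at an XOR split, each trace records the data available when the decision was taken and the branch that was chosen, and a decision table is derived by mapping each observed condition to its most frequent outcome.

```python
from collections import defaultdict

def discover_decision_rules(observations):
    """Derive a decision table from event-log observations at an XOR split.

    `observations` is a list of (attributes, chosen_activity) pairs, where
    `attributes` holds the case data available at the decision point.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for attrs, activity in observations:
        key = tuple(sorted(attrs.items()))
        counts[key][activity] += 1
    # Keep the most frequent outcome per condition as the decision rule.
    return {key: max(outcomes, key=outcomes.get) for key, outcomes in counts.items()}

# Hypothetical log of a loan process deciding between two branches:
log = [
    ({"amount": "high"}, "manual_review"),
    ({"amount": "high"}, "manual_review"),
    ({"amount": "low"}, "auto_approve"),
]
rules = discover_decision_rules(log)
```

The resulting condition-to-outcome mapping is the kind of rule set that a DMN decision table externalizes from the process model, leaving the BPMN diagram with a single decision task instead of embedded branching logic.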
In an intertwined reading of Curtius, Auerbach, and Bachtin, the dissertation shows how these authors, in their works on European literary history, searched for an ethical orientation in the crisis of modernity. Their concept of a philologically grounded philosophy of history with practical intent is examined both in terms of cultural and theoretical history and through detailed textual analyses.
Focusing on the historico-philosophical aspect of their research proves fruitful not only because it turns out to be a key to linking philological micrology and broad synthesis with developments in the history of ideas and in society. Their approach also reveals itself to be far more nuanced than the common reservations about the philosophy of history would suggest.
For this reason, the study broadens the methodological horizon of the discourse by readjusting the possibilities of a critical philosophy of history for current questions of literary history. It does so by engaging with receptions as diverse as those of Anselm Haverkamp, Edward Said, Terry Eagleton, and Homi Bhabha, which open up a space of discussion in which the dissertation reflects on its own historical standpoint in the context of postmodernism and postcolonialism.
New bio-based polymers
(2018)
Redox-responsive polymers, such as poly(disulfide)s, are a versatile class of polymers with potential applications including gene- and drug-carrier systems. Their degradability under reductive conditions allows for a controlled response to the different redox states that are present throughout the body. Poly(disulfide)s are typically synthesized by step growth polymerizations. Step growth polymerizations, however, may suffer from low conversions and therefore low molar masses, limiting potential applications. The purpose of this thesis was therefore to find and investigate new synthetic routes towards the synthesis of amino acid-based poly(disulfide)s.
The different routes in this thesis include entropy-driven ring-opening polymerizations of novel macrocyclic monomers derived from cystine derivatives. These monomers were obtained with overall yields of up to 77% and were analyzed by mass spectrometry as well as by 1D and 2D NMR spectroscopy. The kinetics of the entropy-driven ring-opening metathesis polymerization (ED-ROMP) were thoroughly investigated as a function of temperature, monomer concentration, and catalyst concentration. The polymerization was optimized to yield poly(disulfide)s with weight-average molar masses of up to 80 kDa and conversions of ~80% at thermodynamic equilibrium. Additionally, an alternative metal-free polymerization, namely the entropy-driven ring-opening disulfide metathesis polymerization (ED-RODiMP), was established for the polymerization of the macrocyclic monomers. The effects of different solvents, concentrations, and catalyst loadings on the polymerization process and its kinetics were studied. Polymers with very high weight-average molar masses of up to 177 kDa were obtained. Moreover, various post-polymerization reactions were successfully performed.
This work provides the first example of the homopolymerization of endo-cyclic disulfides by ED-ROMP and the first substantial study into the kinetics of the ED-RODiMP process.
Causes for slow weathering and erosion in the steep, warm, monsoon-subjected Highlands of Sri Lanka
(2018)
In the Highlands of Sri Lanka, erosion and chemical weathering rates are among the lowest for global mountain denudation. In this tropical humid setting, highly weathered deep saprolite profiles have developed from high-grade metamorphic charnockite during spheroidal weathering of the bedrock. The spheroidal weathering produces rounded corestones and spalled rindlets at the rock-saprolite interface. I used detailed textural, mineralogical, chemical, and electron-microscopic (SEM, FIB, TEM) analyses to identify the factors limiting the rate of weathering front advance in the profile, the sequence of weathering reactions, and the underlying mechanisms. The first mineral attacked by weathering was found to be pyroxene, initiated by in situ Fe oxidation, followed by in situ biotite oxidation. Bulk dissolution of the primary minerals is best described with a dissolution/re-precipitation process, as no chemical gradients towards the mineral surface and sharp structural boundaries are observed at the nm scale. Only the local oxidation in pyroxene and biotite is better described with an ion-by-ion process. The first secondary phases are oxides and amorphous precipitates from which secondary minerals (mainly smectite and kaolinite) form. Only for biotite is direct solid-state transformation to kaolinite likely. The initial oxidation of pyroxene and biotite takes place in locally restricted areas and is relatively fast: log J = -11 mol_mineral/(m² s). However, calculated corestone-scale mineral oxidation rates are comparable to corestone-scale mineral dissolution rates: log R = -13 mol_px/(m² s) and log R = -15 mol_bt/(m² s). The oxidation reaction results in a volume increase. Volumetric calculations suggest that this observed oxidation leads to the generation of porosity due to the formation of micro-fractures in the minerals and the bedrock, allowing for fluid transport and subsequent dissolution of plagioclase.
At the scale of the corestone, this fracture reaction is responsible for the larger fractures that lead to spheroidal weathering and to the formation of rindlets. Since these fractures originate from the initial oxidation-induced volume increase, oxidation is the rate-limiting parameter for weathering to take place. The ensuing plagioclase weathering leads to formation of high secondary porosity in the corestone over a distance of only a few cm and eventually to the final disaggregation of bedrock to saprolite. As oxidation is the first weathering reaction, the supply of O2 is a rate-limiting factor for chemical weathering. Hence, the supply of O2 and its consumption at depth connect processes at the weathering front with erosion at the surface in a feedback mechanism. The strength of the feedback depends on the relative weight of advective versus diffusive transport of O2 through the weathering profile: the feedback is stronger when diffusive transport dominates. The low weathering rate ultimately depends on the transport of O2 through the whole regolith, and on lithological factors such as low bedrock porosity and the amount of Fe-bearing primary minerals. In this regard the low-porosity charnockite with its low content of Fe(II)-bearing minerals impedes fast weathering reactions. Fresh weatherable surfaces are a prerequisite for chemical weathering. However, in the case of the charnockite found in the Sri Lankan Highlands, the only process that generates these surfaces is the fracturing induced by oxidation. Tectonic quiescence in this region and a low pre-anthropogenic erosion rate (attributed to a dense vegetation cover) minimize the rejuvenation of the thick and cohesive regolith column, and lower weathering through the feedback with erosion.
For more than two centuries, plant ecologists have aimed to understand how environmental gradients and biotic interactions shape the distribution and co-occurrence of plant species. In recent years, functional trait-based approaches have been increasingly used to predict patterns of species co-occurrence and species distributions along environmental gradients (trait-environment relationships). Functional traits are measurable properties at the individual level that correlate well with important processes. Thus, they allow us to identify general patterns by synthesizing studies across specific taxonomic compositions, thereby fostering our understanding of the underlying processes of species assembly. However, the importance of specific processes has been shown to be highly dependent on the spatial scale under consideration. In particular, it remains uncertain which mechanisms drive species assembly and allow for plant species coexistence at smaller, more local spatial scales. Furthermore, there is still no consensus on how particular environmental gradients affect the trait composition of plant communities. For example, increasing drought due to climate change is predicted to be a main threat to plant diversity, although it remains unclear which traits of species respond to increasing aridity. Similarly, there is conflicting evidence on how soil fertilization affects traits related to establishment ability (e.g., seed mass). In this cumulative dissertation, I present three empirical trait-based studies that investigate specific research questions in order to improve our understanding of species distributions along environmental gradients.
In the first case study, I analyze how annual species assemble at the local scale and how environmental heterogeneity affects different facets of biodiversity (i.e., taxonomic, functional, and phylogenetic diversity) at different spatial scales. The study was conducted in a semi-arid environment at the transition zone between desert and Mediterranean ecosystems that features a sharp precipitation gradient (Israel). Different null model analyses revealed strong support for environmentally driven species assembly at the local scale, since species with similar traits tended to co-occur and shared high abundances within microsites (trait convergence). A phylogenetic approach, which assumes that closely related species are functionally more similar to each other than distantly related ones, partly supported these results. However, I observed that species abundances within microsites were, surprisingly, more evenly distributed across the phylogenetic tree than expected (phylogenetic overdispersion). Furthermore, I showed that environmental heterogeneity has a positive effect on diversity, an effect that was stronger for functional than for taxonomic diversity and that increased with spatial scale. The results of this case study indicate that environmental heterogeneity may act as a stabilizing factor to maintain species diversity at local scales, since it influenced species distribution according to their traits and positively influenced diversity. All results were consistent along the precipitation gradient.
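The logic of such a null model analysis can be sketched as follows: species (here, their trait values) are randomly reshuffled across microsites, and the observed mean within-site trait variance is compared against the shuffled distribution. A low p-value indicates trait convergence. This is a generic illustration under assumed toy data, not the specific metrics or data used in the study.

```python
# Illustrative null-model test for trait convergence (generic sketch, not the
# study's actual analysis): shuffle trait values across sites and ask how often
# the null mean within-site variance is as low as the observed one.
import random
from statistics import mean, pvariance

def within_site_variance(sites):
    return mean(pvariance(traits) for traits in sites)

def convergence_p_value(sites, n_null=999, seed=1):
    rng = random.Random(seed)
    observed = within_site_variance(sites)
    pool = [t for site in sites for t in site]   # all trait values
    sizes = [len(site) for site in sites]
    hits = 0
    for _ in range(n_null):
        rng.shuffle(pool)
        null_sites, i = [], 0
        for s in sizes:                          # rebuild sites at same sizes
            null_sites.append(pool[i:i + s])
            i += s
        if within_site_variance(null_sites) <= observed:
            hits += 1
    return (hits + 1) / (n_null + 1)             # permutation p-value

# Two microsites whose members have very similar trait values (convergence).
sites = [[1.0, 1.1, 0.9, 1.05, 0.95], [5.0, 5.2, 4.9, 5.1, 5.05]]
p = convergence_p_value(sites)
print(p)  # small p: traits converge more within sites than expected by chance
```

Analogous randomizations with abundance weighting or phylogenetic distances would yield the other tests mentioned above.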
In the second case study (same study system as case study one), I explore the trait responses of two Mediterranean annuals (Geropogon hybridus and Crupina crupinastrum) along a precipitation gradient that is comparable to the maximum changes in precipitation predicted to occur by the end of this century (i.e., −30%). The heterocarpic G. hybridus showed strong trends in seed traits, suggesting that dispersal ability increased with aridity. By contrast, the homocarpic C. crupinastrum showed only a decrease in plant height as aridity increased, while leaf traits of both species showed no consistent pattern along the precipitation gradient. Furthermore, variance decomposition of traits revealed that most of the trait variation observed in the study system was actually found within populations. I conclude that trait responses towards aridity are highly species-specific and that the amount of precipitation is not the most striking environmental factor at this particular scale.
In the third case study, I assess how soil fertilization, directly through increased nutrient addition and indirectly through increased competition, mediates the effect of seed mass on establishment ability. For this experiment, I used 22 species differing in seed mass from dry grasslands in northeastern Germany and analyzed the interacting effects of seed mass with nutrient availability and competition on four key components of seedling establishment: seedling emergence, time of seedling emergence, seedling survival, and seedling growth. Neither seedling emergence nor its timing was affected by seed mass. However, I observed that the positive effect of seed mass on seedling survival was lowered under high nutrient availability, whereas the positive effect of seed mass on seedling growth was reduced only by competition. Based on these findings, I developed a conceptual model of how seed mass should change along a soil fertility gradient in order to reconcile conflicting findings from the literature. In this model, seed mass shows a U-shaped pattern along the soil fertility gradient as a result of changing nutrient availability and competition.
Overall, the three case studies highlight the role of environmental factors in species distribution and co-occurrence. Moreover, the findings of this thesis indicate that spatial heterogeneity at local scales may act as a stabilizing factor that allows species with different traits to coexist. In the concluding discussion, I critically discuss intraspecific trait variability in plant community ecology as well as the use of phylogenetic relationships and of easily measured key functional traits as proxies for species' niches. Finally, I offer my outlook for the future of functional plant community research.
Colorectal cancer (CRC) is the third most common tumor disease worldwide. Besides age, diet also plays an important role in the development of the disease. A presumably cancer-preventive effect is attributed to the trace element selenium, which is taken up almost exclusively through food. A low selenium status, for example, is associated with the lifetime risk of developing CRC. Selenium exerts its functions predominantly through selenoproteins, into which it is incorporated in the form of selenocysteine. Among the best-studied selenoproteins with a possible function in CRC are the glutathione peroxidases (GPXs). Owing to their hydroperoxide-reducing properties, the members of this family contribute decisively to protecting cells from oxidative stress. Depending on the type and stage of the tumor, this can act either to inhibit or to promote cancer, since transformed cells also benefit from this protective function.
In this work, GPX2 was knocked down in HT29 colon cancer cells using stably transfected shRNA in order to investigate the function of the enzyme, particularly with regard to regulated signaling pathways. A knockdown (KD) of the structurally similar GPX1 was also employed in order to distinguish isoform-specific functions. Using a PCR array, signaling pathways were identified that pointed to an influence of both proteins on cell growth. Subsequent investigations suggested a reduced differentiation status in the GPX1 and GPX2 KDs, based on a lower activity of alkaline phosphatase. In addition, cell viability in the neutral red uptake (NRU) assay was reduced in the absence of GPX1 or GPX2 compared with the control. The results of the PCR array, and, specifically for GPX2, earlier studies of the group, further pointed to a role of both proteins in inflammation-driven carcinogenesis. Therefore, possible interactions with the NFκB signaling pathway were also analyzed. Stimulation of the cells with the proinflammatory cytokine IL1β was accompanied by increased activation of the MAP kinases ERK1/2 in the cells with GPX1 or GPX2 KD. Simultaneous treatment with the antioxidant NAC did not reverse these effects in the KDs, suggesting that not only the antioxidative properties of the enzymes are relevant for the interaction with these signaling proteins.
Furthermore, the substrate spectrum of GPX2 was analyzed in HCT116 cells overexpressing the protein. NRU assays and DNA laddering showed that GPX2 protects in particular against the proapoptotic effects of treatment with the lipid hydroperoxides HPODE and HPETE.
In contrast to GPX2, selenoprotein H (SELENOH) is more strongly influenced by dietary selenium intake. However, its potential use as a biomarker, or even as a starting point for the prevention or treatment of CRC, is hampered by incomplete knowledge of the protein's function. For a more detailed characterization of SELENOH, stably transfected KD clones were therefore generated in HT29 and Caco2 cells and first examined for their tumorigenicity.
Cells with SELENOH KD formed more and larger colonies in soft agar and showed an increased proliferation and migration potential compared with the control.
A xenograft in nude mice furthermore resulted in stronger tumor formation after injection of KD cells. Investigations into the involvement of SELENOH in cell cycle regulation point to an inhibitory role of the protein in the G1/S phase.
The additionally observed upregulation of SELENOH in human adenocarcinomas and precancerous mouse tissue may be explained by its postulated protective function against oxidative cell and DNA damage. In healthy intestinal epithelial cells, the protein was localized primarily at the crypt base, which is consistent with a potential role in gastrointestinal differentiation.
Speaking of secularization in a country like Israel, where religion obviously constitutes an important part of public life, seems contradictory. Yet Israel, driven by globalization, pluralization, and modernization, stands at a crossroads. Parts of Israeli society are already secularizing, and the religious Orthodox dominance appears to be crumbling. But does this justify speaking of a change in mentality or a secularization of the state? Can a secularization process in Israel succeed? How must a secular state be constituted in order to offer the different religious denominations the same opportunities? What role do the Jewish diaspora, immigration, and societal minorities play in this? The aim of the present work is to discuss these questions. Even if the close linkage of nation and religion in Judaism seems to make secularization impossible, drawing on the concepts of secularism and nationalism in the context of the historical developments of Judaism allows a more differentiated view of this linkage. The use of different qualitative methods provides a multifaceted approach to the research subject: the hermeneutic method for examining the various theoretical concepts and analyzing the relationship between nation and religion in Judaism; newspaper articles for reconstructing current debates in Israeli society; the evaluation of statistics; and expert interviews. Ultimately, the work aims to show that Israel is indeed increasingly secularizing, but faces various challenges such as societal pluralism, an unstable security situation, and a growing religious nationalism.
Spatio-temporal data denotes a category of data that contains spatial as well as temporal components. For example, time-series of geo-data, thematic maps that change over time, or tracking data of moving entities can be interpreted as spatio-temporal data.
In today's automated world, an increasing number of data sources exist that constantly generate spatio-temporal data. These include, for example, traffic surveillance systems, which gather movement data of humans or vehicles; remote-sensing systems, which frequently scan our surroundings and produce digital representations of cities and landscapes; and sensor networks in different domains, such as logistics, animal behavior studies, or climate research.
For the analysis of spatio-temporal data, in addition to automatic statistical and data mining methods, exploratory analysis methods are employed, which are based on interactive visualization. These analysis methods let users explore a data set by interactively manipulating a visualization, thereby employing the human cognitive system and knowledge of the users to find patterns and gain insight into the data.
This thesis describes a software framework for the visualization of spatio-temporal data, consisting of GPU-based techniques that enable the interactive visualization and exploration of large spatio-temporal data sets. The developed techniques cover data management, processing, and rendering, facilitating real-time processing and visualization of large geo-temporal data sets. The framework comprises three main contributions:
- Concept and Implementation of a GPU-Based Visualization Pipeline.
The developed visualization methods are based on the concept of a GPU-based visualization pipeline, in which all steps -- processing, mapping, and rendering -- are implemented on the GPU. With this concept, spatio-temporal data is represented directly in GPU memory, using shader programs to process and filter the data, apply mappings to visual properties, and finally generate the geometric representations for a visualization during the rendering process. Data processing, filtering, and mapping are thereby executed in real-time, enabling dynamic control over the mapping and a visualization process which can be controlled interactively by a user.
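The filtering and mapping stages of such a pipeline can be illustrated with a small CPU-side sketch: each data record is filtered by a predicate and its attribute value mapped to visual properties (color, size) via a transfer function. In the system described above these stages run as shader programs over data resident in GPU memory; the code below is only an assumed, simplified emulation with illustrative names.

```python
# CPU sketch of the filtering and mapping stages described above; in the
# actual framework these run as GPU shader programs on GPU-resident data.
# Function and attribute names ("speed", visualize) are illustrative.

def lerp_color(c0, c1, t):
    """Linearly interpolate between two RGB colors."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

def visualize(nodes, attr, vmin, vmax, keep=lambda n: True):
    """Filter nodes, then map `attr` to an RGB color and a point size."""
    out = []
    for n in nodes:
        if not keep(n):                # filtering stage (shader would discard)
            continue
        t = (n[attr] - vmin) / (vmax - vmin)
        t = min(1.0, max(0.0, t))      # clamp normalized value to [0, 1]
        out.append({
            "pos": n["pos"],
            "color": lerp_color((0, 0, 255), (255, 0, 0), t),  # blue -> red
            "size": 1.0 + 4.0 * t,                             # mapping stage
        })
    return out

nodes = [{"pos": (0, 0), "speed": 20.0}, {"pos": (1, 1), "speed": 80.0}]
prims = visualize(nodes, "speed", 0.0, 100.0)
print(prims[0]["color"], prims[1]["color"])  # (51, 0, 204) (204, 0, 51)
```

Because both stages are pure per-element functions of the data, they parallelize naturally on the GPU, which is what enables the interactive control over the mapping described above.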
- Attributed 3D Trajectory Visualization.
A visualization method has been developed for the interactive exploration of large numbers of 3D movement trajectories. The trajectories are visualized in a virtual geographic environment, supporting basic geometries such as lines, ribbons, spheres, or tubes. Interactive mapping can be applied to visualize the values of per-node or per-trajectory attributes, supporting shape, height, size, color, texturing, and animation as visual properties. Using the dynamic mapping system, several kinds of visualization methods have been implemented, such as focus+context visualization of trajectories using interactive density maps, and space-time cube visualization to focus on the temporal aspects of individual movements.
- Geographic Network Visualization.
A method for the interactive exploration of geo-referenced networks has been developed, which enables the visualization of large numbers of nodes and edges in a geographic context. Several geographic environments are supported, such as a 3D globe, as well as 2D maps using different map projections, to enable the analysis of networks in different contexts and scales. Interactive filtering, mapping, and selection can be applied to analyze these geographic networks, and visualization methods for specific types of networks, such as coupled 3D networks or temporal networks, have been implemented.
As a demonstration of the developed visualization concepts, interactive visualization tools for two distinct use cases have been developed. The first is the visualization of attributed 3D movement trajectories of airplanes around an airport. It allows users to explore and analyze the trajectories of approaching and departing aircraft, recorded over the period of a month. By applying the interactive methods for trajectory visualization and interactive density maps, analysts can derive insight from the data, such as common flight paths, regular and irregular patterns, or uncommon incidents such as missed approaches at the airport.
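The density-map idea behind the focus+context view can be sketched simply: trajectory points are accumulated into a 2D grid whose per-cell counts are then mapped to color. The grid resolution, bounds, and toy flight paths below are illustrative assumptions, not the tool's actual parameters.

```python
# Minimal sketch of a trajectory density map (illustrative, not the tool's
# implementation): bin trajectory points into a 2D grid of counts, which a
# renderer would then map to color to reveal common flight paths.

def density_map(trajectories, bounds, nx, ny):
    """Count trajectory points per cell of an nx-by-ny grid over `bounds`."""
    (x0, y0), (x1, y1) = bounds
    grid = [[0] * nx for _ in range(ny)]
    for traj in trajectories:
        for x, y in traj:
            i = min(nx - 1, int((x - x0) / (x1 - x0) * nx))
            j = min(ny - 1, int((y - y0) / (y1 - y0) * ny))
            grid[j][i] += 1
    return grid

# Two toy flight paths crossing the same cell produce a hotspot there.
trajs = [[(0.1, 0.1), (0.5, 0.5), (0.9, 0.9)],
         [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]]
grid = density_map(trajs, ((0.0, 0.0), (1.0, 1.0)), 4, 4)
print(grid[2][2])  # 2: both trajectories pass through the center cell
```

On the GPU the same accumulation can be done with additive blending, which keeps the density map interactive while filters or time ranges change.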
The second use case involves the visualization of climate networks, which are geographic networks in the climate research domain. They represent the dynamics of the climate system using a network structure that expresses statistical interrelationships between different regions. The interactive tool allows climate analysts to explore these large networks, analyzing the network's structure and relating it to the geographic background. Interactive filtering and selection enable them to find patterns in the climate data and to identify, for example, clusters or flow patterns in the networks.
German vocational training has lost much of its appeal in recent years. This applies in particular to dual commercial vocational training. While a few years ago it was still regarded as a viable educational path for high-achieving pupils, most of them today prefer university studies. The growing number of university dropouts, however, shows that potential is lost in this way, because with university studies young people choose an educational path that is not suitable for them. Previous efforts to establish alternative educational paths, such as Berufsakademien, have had some success but are based on a concept oriented exclusively toward the needs of business. It is the author's conviction, however, that new, innovative educational paths must also take into account the needs and expectations of those for whom they are designed, since today's generation of young people holds a different set of values than its predecessor generations. The dissertation therefore develops a model of business-oriented training that is derived from different elements of motivation theory and at the same time takes into account the values of today's young generation. It draws on both Barnard's inducement-contribution theory and Vroom's expectancy theory. In addition, a main focus of this work lies on adapting Herzberg's two-factor theory to the present day.
Empirically, the dissertation is based on a three-stage research design. The first stage comprises a quantitative survey of 459 Abitur graduates and 100 university students. In the second stage, 10 students and 12 Abitur graduates were interviewed qualitatively. In the third stage, the results were validated by means of expert interviews. The aim of the empirical study was to test four hypotheses as a basis for deriving the model:
Hypothesis H1 - Flexibility increases the attractiveness of business-oriented training: The factor flexibility was identified as a relevant motivator for choosing an educational path. Young people today do not want to have to commit immediately, or too early.
Hypothesis H2 - Stays abroad increase the attractiveness of business-oriented training: It was confirmed that stays abroad increase the attractiveness of business-oriented training, but a range of barriers keeps young people (although they see the basic benefit) from considering a stay abroad for themselves.
Hypothesis H3 - Showing a career perspective increases the attractiveness of business-oriented training: For today's generation of young people, what matters most in choosing an educational path is the prospect of an occupation that provides a secure income, and thus a good life, and that they also consider meaningful. Only a minority aspires to leadership positions involving greater responsibility.
Hypothesis H4 - Additional monetary incentives increase the attractiveness of business-oriented training: Remuneration components are not rejected in principle (that would be irrational), but they also do not have the incentive function that the preliminary study for this work might have suggested. They play only a subordinate role in deciding on an educational path. Nevertheless, remuneration contributes to the attractiveness of an educational path.
Based on the above results, a model of business-oriented training was derived that is flexible both horizontally and vertically. Horizontal flexibility is provided by the fact that, within a training year, trainees get to know different companies and industries (years 1 and 2); specialization only takes place in the later training years. Vertical flexibility is provided by the option of entering working life with a qualification after each training year and, if desired, resuming the training at a later point. In addition, the model offers university dropouts the opportunity to enter the training in year 2 or 3. Stays abroad are integrated into year 2 and/or year 3 and are offered on an optional basis; preparatory courses can be taken from year 1 onward. The high importance of career perspectives is accounted for in the derived model on several levels: recognized qualifications are attained after each training year. While those of years 1 and 2 are equivalent to IHK (Chamber of Industry and Commerce) qualifications, academic degrees begin in year 3 (year 3: Bachelor; year 4: Master). Remuneration becomes part of business-oriented training, with its level increasing over the duration of the training.
Since introducing the model of business-oriented training involves overcoming institutional paradigms and barriers, a further expert survey on its feasibility was carried out as part of the outlook of this work. The model presupposes a flexibility on the institutional side (in particular also of the chambers) that the majority of the experts currently view with skepticism. The conceptual design meets with general approval, although some details, for example the duration of the training, still require clarification.
In principle, the experts share the author's view that a change of mindset in the German training landscape is desired and demanded. This applies in particular to the commercial sector as well. With the model of business-oriented training, this work makes an important contribution to the discussion about new educational paths.
The utilization of lignin as a renewable electrode material for electrochemical energy storage is a sustainable approach for future batteries and supercapacitors. A composite electrode was fabricated from Kraft lignin and conductive carbon, and the charge storage contributions of the electrical double layer (EDL) and of redox reactions were determined. The important factors for achieving a high faradaic charge storage capacity are a high surface area, the accessibility of redox sites in lignin, and their interaction with conductive additives. A thin layer of lignin covering the high-surface-area carbon facilitates the electron transfer process by shortening the pathway from the active sites of the nonconductive lignin to the current collector, thereby improving the faradaic charge storage capacity.
Composite electrodes from lignin and carbon would be even more sustainable if the fluorinated binder could be omitted. A new route to fabricate a binder-free composite electrode from Kraft lignin and high-surface-area carbon has been proposed, based on crosslinking lignin with glyoxal. The high molecular weight of the crosslinked lignin enhances both its electroactivity and its binder capability in the composite electrode. The order of the processing steps when crosslinking lignin on the composite electrode plays a crucial role in achieving a stable electrode and a high charge storage capacity. Crosslinked-lignin-based electrodes are promising because they allow for more stable, sustainable, halogen-free, and environmentally benign devices for energy storage applications. Furthermore, increasing the amount of redox-active groups (quinone groups) in lignin is useful to enhance the capacity in lithium battery applications. Direct oxidative demethylation with cerium ammonium nitrate was carried out under mild conditions, showing that an increase in quinone groups can enhance the performance of lithium batteries. Thus, lignin is a promising material and a good candidate for application in sustainable energy storage devices.
Human actuation
(2018)
Ever since the conception of the virtual reality headset in 1968, many researchers have argued that the next step in virtual reality is to allow users not only to see and hear, but also to feel virtual worlds. One approach is to use mechanical equipment to provide haptic feedback, e.g., robotic arms, exoskeletons and motion platforms. However, the size and weight of such mechanical equipment tend to be proportional to its target's size and weight, i.e., providing human-scale haptic feedback requires human-scale equipment, often restricting it to arcades and lab environments.
The key idea behind this dissertation is to bypass mechanical equipment by instead leveraging human muscle power. We thus create software systems that orchestrate humans in doing such mechanical labor—this is what we call human actuation. A potential benefit of such systems is that humans are more generic, flexible, and versatile than machines. This brings a wide range of haptic feedback to modern virtual reality systems.
We start with a proof-of-concept system, Haptic Turk, which focuses on delivering motion experiences just like a motion platform. All Haptic Turk setups consist of a user who is supported by one or more human actuators. The user enjoys an interactive motion simulation, such as a hang glider experience, but the motion is generated by the human actuators, who manually lift, tilt, and push the user's limbs or torso. To get the timing and force right, the system generates timed motion instructions in a format familiar from rhythm games.
Next, we extend the concept of human actuation from 3-DoF to 6-DoF virtual reality, where users have the freedom to walk around. TurkDeck tackles this problem by orchestrating a group of human actuators who reconfigure a set of passive props on the fly while the user is progressing through the virtual environment. TurkDeck schedules human actuators by their distances from the user, and instructs them via laser projection and voice output to reconfigure the props to the right place at the right time.
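The scheduling idea can be sketched as a greedy nearest-actuator assignment. The data structures and coordinates below are hypothetical; the real system additionally times instructions to the user's progress and delivers them via laser projection and voice output:

```python
import math

# Minimal sketch of distance-based actuator scheduling in the spirit
# of TurkDeck (hypothetical positions and prop names).

def schedule(actuators, prop_targets):
    """Greedily assign each upcoming prop placement to the closest
    free actuator.

    actuators:    dict name -> (x, y) resting position
    prop_targets: list of (prop_name, (x, y)) placements, in the
                  order the user will reach them
    """
    free = dict(actuators)
    plan = []
    for prop, target in prop_targets:
        name = min(free, key=lambda n: math.dist(free[n], target))
        plan.append((name, prop, target))
        free.pop(name)
        if not free:              # everyone busy: recycle the pool
            free = dict(actuators)
    return plan

plan = schedule(
    {"A": (0, 0), "B": (5, 5), "C": (9, 1)},
    [("wall", (1, 1)), ("bench", (8, 2)), ("door", (4, 6))],
)
for actuator, prop, target in plan:
    print(f"{actuator}: move {prop} to {target}")
```

A greedy policy like this minimizes each actuator's walking distance per placement, which is one plausible reading of "schedules human actuators by their distances".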
Our studies of Haptic Turk and TurkDeck showed that human actuators enjoyed the experience, though not as much as users did. To eliminate the need for dedicated human actuators, Mutual Turk makes everyone a user by exchanging mechanical actuation between two or more users. Mutual Turk orchestrates the users so that they actuate props at just the right moment and with just the right force to produce the correct feedback in each other's experience.
Finally, we eliminate the need for another user altogether, making human actuation applicable to single-user experiences. iTurk makes the user constantly reconfigure and animate otherwise passive props. This allows iTurk to provide virtual worlds with constantly varying or even animated haptic effects, even though the only animate entity present in the system is the user. Our demo experience features one example of each of iTurk's two main types of props: reconfigurable props (the foldable board from TurkDeck) and animated props (the pendulum).
We conclude this dissertation by summarizing the findings of our explorations and pointing out future directions. We discuss the development of human actuation compared to traditional machine actuation, the possibility of combining human and machine actuators, and interaction models that involve more human actuators.
This project was focused on exploring the phase behavior of poly(styrene)187000-block-poly(2-vinylpyridine)203000 (SV390) with high molecular weight (390 kg/mol) in thin films, in which the self-assembly of block copolymers (BCPs) was realized via thermo-solvent annealing. The advanced processing technique of solvent vapor treatment provides controlled and stable conditions.
In Chapter 3, the factors that influence the annealing process and the swelling behavior of homopolymers are presented and discussed. The swelling behavior of BCP films is controlled by the temperature of the vapor and of the substrate, on the one hand, and by variation of the saturation of the solvent vapor atmosphere (different solvents), on the other hand. Additional factors such as the geometry and material of the chamber and the type of flow inside the chamber also influence the reproducibility and stability of the processing. The slightly selective solvent vapor of chloroform swells P2VP about 10% more than PS in films with a thickness of ~40 nm.
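The degree of swelling behind that 10% figure is the ratio of swollen to dry film thickness; a minimal sketch with illustrative thickness values (not measured data from the thesis):

```python
# Degree of swelling of a thin film under solvent vapour, defined here
# as swollen thickness / dry thickness (numbers are illustrative).

def swelling_degree(d_swollen_nm, d_dry_nm):
    return d_swollen_nm / d_dry_nm

d_dry = 40.0  # nm, comparable to the films discussed above
print(f"PS   : {swelling_degree(52.0, d_dry):.2f}")  # 1.30
print(f"P2VP : {swelling_degree(56.0, d_dry):.2f}")  # 1.40, ~10% more
```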
The tunable morphology in ultrathin films of the high molecular weight BCP (SV390) was investigated in Chapter 4. First, the swelling behavior can be precisely tuned by temperature and/or vapor flow separately, which provided information for exploring the multi-parameter-dependent segmental chain mobility of polymer films. The temperature-dependent equilibrium state of SV390 thin films was realized at various temperatures with the same degree of swelling. Various methods, including characterization with SFM, metallization and RIE, were used to identify the morphology of the films as a porous half-layer with PS dots in a P2VP matrix. The kinetic investigations demonstrate that on substrates with either weak or strong interaction the original morphology of the high molecular weight BCP changes very quickly, within 5 min, and that further annealing serves to annihilate defects.
The morphological development of symmetric BCP films with thickness increasing from half a layer to one layer, influenced by the confinement factors of gradient film thickness and various surface properties of the substrates, was studied in Chapter 5. SV390 and SV99 films show the bulk lamella-forming morphology after treatment with the slightly selective solvent vapor (chloroform). SV99 films show a cylinder-forming morphology under treatment with the strongly selective solvent vapor (toluene), owing to the asymmetric swelling of the SV99 block copolymer during annealing (caused by toluene uptake in the PS blocks only). Both kinds of morphology (lamellae and cylinders) are influenced by the film thickness. The annealed morphology of SV390 and SV99 under the combined influence of film confinement and substrate properties is similar to the morphology on flat silicon wafers. In this chapter the gradients in film thickness and the surface properties of the substrates are presented with regard to their influence on the morphological development in thin BCP films. Directed self-assembly (graphoepitaxy) of SV390 was also investigated for comparison with the systematically reported SV99.
In Chapter 6, an approach to induce oriented microphase separation in thick block copolymer films via treatment with an oriented vapor flow from a mini-extruder is envisaged as an alternative to existing methodologies, e.g. non-solvent-induced phase separation. The preliminary tests performed in this study confirm the potential of this method, which alters the structure throughout the bulk of the film (as revealed by SAXS measurements), but more detailed studies have to be conducted in order to optimize the preparation.
Nanophotonics is the field of science and engineering that studies light-matter interactions on the nanoscale. One of the key aspects of studying optics at the nanoscale is the ability to assemble the material components in a spatially controlled manner. In this work, DNA origami nanostructures were used to self-assemble dye molecules and DNA-coated plasmonic nanoparticles. The optical properties of dye nanoarrays, in which the dyes were arranged at distances at which they can interact by Förster resonance energy transfer (FRET), were systematically studied as a function of the size and arrangement of the dyes, using fluorescein (FAM) as the donor and cyanine 3 (Cy3) as the acceptor. The optimized design, based on steady-state and time-resolved fluorometry, was utilized to develop a ratiometric pH sensor with pH-inert coumarin 343 (C343) as the donor and pH-sensitive FAM as the acceptor. This design was further applied to develop a ratiometric toxin sensor, in which the donor C343 is unresponsive and FAM is responsive to thioacetamide (TAA), a well-known hepatotoxin. The results indicate that the sensitivity of the ratiometric sensor can be improved by simply arranging the dyes into a well-defined array. The ability to assemble multiple fluorophores without dye-dye aggregation also provides a strategy to amplify the signal measured from a fluorescent reporter, and was utilized here to develop a reporter for sensing oligonucleotides. By incorporating target-capturing sequences and multiple fluorophores (ATTO 647N dye molecules), a reporter for a microbead-based assay for non-amplified target oligonucleotide sensing was developed. Analysis of the assay using VideoScan, a fluorescence microscope-based technology capable of conducting multiplex analysis, showed the DNA origami nanostructure based reporter to have a lower limit of detection than a single-stranded DNA reporter.
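The distance dependence that makes these arrays tunable is the textbook FRET relation E = 1/(1 + (r/R0)^6). The Förster radius below is an illustrative round number, not a measured value for the FAM/Cy3 pair on this origami:

```python
# FRET transfer efficiency as a function of donor-acceptor distance r,
# E = 1 / (1 + (r/R0)**6), with R0 the Foerster radius of the pair.

def fret_efficiency(r_nm, r0_nm):
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

R0 = 5.0  # assumed Foerster radius in nm (illustrative)
for r in (2.5, 5.0, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r, R0):.3f}")
```

The sixth-power dependence is why placing dyes on an addressable origami grid, at well-defined sub-R0 spacings, gives such direct control over the transfer efficiency.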
Lastly, plasmonic nanostructures were assembled on DNA origami nanostructures as substrates to study interesting optical behaviors of molecules in the near-field. Specifically, DNA coated gold nanoparticles, silver nanoparticles, and gold nanorods, were placed on the DNA origami nanostructure aiming to study surface-enhanced fluorescence (SEF) and surface-enhanced Raman scattering (SERS) of molecules placed in the hotspot of coupled plasmonic structures.
Microswimmers, i.e. swimmers of micron size experiencing low Reynolds numbers, have received a great deal of attention in recent years, since many applications are envisioned in medicine and bioremediation. A promising field is that of magnetic swimmers, since magnetism is biocompatible and can be used to direct or actuate the swimmers. This thesis studies two examples of magnetic microswimmers from a physics point of view.
The first system studied is magnetic cells, which can be magnetic biohybrids (a swimming cell coupled with a magnetic synthetic component) or magnetotactic bacteria (naturally occurring bacteria that produce an intracellular chain of magnetic crystals). A magnetic cell can passively interact with external magnetic fields, which can be used for steering. The aim of the thesis is to understand how magnetic cells couple this magnetic interaction to their swimming strategies, mainly how they combine it with chemotaxis (the ability to sense external gradients of chemical species and to bias their walk along these gradients). In particular, one open question concerns the advantage that these magnetic interactions give magnetotactic bacteria in a natural environment, such as porous sediments. In the thesis, a modified Active Brownian Particle model is used to perform simulations and to reproduce experimental data for different systems such as bacteria swimming in the bulk, in a capillary, or in confined geometries. I show that magnetic fields speed up chemotaxis under special conditions, depending on parameters such as the swimming strategy (run-and-tumble or run-and-reverse), the aerotactic strategy (axial or polar), and the magnetic field (intensity and orientation), but that they can also hinder bacterial chemotaxis depending on the system.
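The Active Brownian Particle picture with a magnetic aligning torque can be sketched in a few lines: an overdamped swimmer with fixed speed whose orientation undergoes rotational diffusion and is torqued toward the field direction. All parameter values below are illustrative assumptions, not fitted values from the thesis:

```python
import math
import random

# Minimal 2D Active Brownian Particle with a magnetic aligning torque
# (Euler-Maruyama integration; illustrative parameter values).

def simulate(steps=20000, dt=1e-3, v=20.0, D_rot=0.5, omega_B=5.0,
             field_angle=0.0, seed=1):
    """Return final (x, y). omega_B sets how strongly the swimming
    direction theta aligns with the magnetic field direction."""
    rng = random.Random(seed)
    x = y = 0.0
    theta = rng.uniform(-math.pi, math.pi)
    for _ in range(steps):
        # deterministic magnetic torque + rotational noise
        theta += (-omega_B * math.sin(theta - field_angle) * dt
                  + math.sqrt(2.0 * D_rot * dt) * rng.gauss(0.0, 1.0))
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
    return x, y

x, y = simulate()
print(f"net displacement: x = {x:.1f}, y = {y:.1f}")
```

With the alignment rate well above the rotational diffusion constant, the trajectory drifts strongly along the field axis; chemotaxis enters the full model through swim-speed or reorientation-rate modulation along chemical gradients.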
The second example of a magnetic microswimmer is rigid magnetic propellers, such as helices or randomly shaped propellers. These propellers are actuated and directed by an external rotating magnetic field. One open question is how shape and magnetic properties influence the propeller's behavior; the goal of this research field is to design the best propeller for a given situation. The aim of the thesis is to propose a simulation method that reproduces the behavior of experimentally realized propellers and determines their magnetic properties. The hydrodynamic simulations are based on the mobility matrix. As the main result, I propose a method to match the experimental data, while showing that not only the shape but also the magnetic properties influence the propellers' swimming characteristics.
Synthesis of artificial building blocks for sortase-mediated ligation and their enzymatic linkage
(2018)
The enzyme sortase A catalyzes the formation of a peptide bond between the recognition sequence LPXTG and an oligoglycine. While manifold ligations between proteins and various biomolecules, between proteins and small synthetic molecules, and between proteins and surfaces have been reported, the aim of this thesis was to investigate the sortase-catalyzed linkage of artificial building blocks. This could pave the way for the use of sortase A in chemical tasks and perhaps even in materials science.
For the proof of concept, the studied systems were at first kept as simple as possible by choosing easily accessible silica NPs and commercially available polymers. These building blocks were functionalized with peptide motifs for sortase-mediated ligation. Silica nanoparticles with diameters of 60 and 200 nm were synthesized and surface-modified with C=C functionalities. Then, peptides bearing a terminal cysteine were covalently linked by means of a thiol-ene reaction: the 60 nm SiO2 NPs were functionalized with pentaglycines, while peptides with the LPETG motif were linked to the 200 nm silica particles. Polyethylene glycol (PEG) and poly(N-isopropylacrylamide) (PNIPAM) were likewise functionalized with peptides by a thiol-ene reaction between cysteine residues and C=C units in the polymer end groups. Hence, G5-PEG and PNIPAM-LPETG conjugates were obtained. With this set of building blocks, NP–polymer hybrids, NP–NP, and polymer–polymer structures were generated by sortase-mediated ligation, and the product formation was shown by transmission electron microscopy, MALDI-ToF mass spectrometry and dynamic light scattering, among others. Thus, the linkage of these artificial building blocks by the enzyme sortase A could be demonstrated.
However, when commercially available polymers were used, purification of the polymer-peptide conjugates was impossible and resulted in a mixture containing unmodified polymer. Therefore, strategies were developed for the in-house synthesis of pure peptide-polymer and polymer-peptide conjugates as building blocks for sortase-mediated ligation. The designed routes are based on preparing polymer blocks via RAFT polymerization from CTAs that are attached to the N- or C-terminus, respectively, of a peptide. GG-PNIPAM was synthesized by attaching a suitable RAFT CTA to Fmoc-GG in an esterification reaction, followed by polymerization of NIPAM and cleavage of the Fmoc protecting group. Furthermore, several peptides were synthesized by solid-phase peptide synthesis. The linkage of a RAFT CTA (or polymerization initiator) to the N-terminus of a peptide can be conducted in an automated fashion as the last step in a peptide synthesizer. The synthesis of such a conjugate could not be realized in the time frame of this thesis, but many promising strategies exist to pursue this approach using different coupling reagents. Such polymer building blocks can be used to synthesize protein-polymer conjugates catalyzed by sortase A, and the approach can be carried over to the synthesis of block copolymers by using polymer blocks with peptide motifs on both ends.
Although the proof of concept demonstrated in this thesis only shows examples that can also be synthesized by exclusively chemical techniques, a toolbox of such building blocks will enable the future formation of new materials and pave the way for the application of enzymes in materials science. In addition to nanoparticle systems and block copolymers, this also includes the combination with protein-based building blocks to form hybrid materials. Hence, sortase could become an enzymatic tool that complements established chemical linking technologies and provides specific peptide motifs that are orthogonal to all existing chemical functional groups.
The consequences of a foodborne illness can be severe, particularly for children and immunosuppressed individuals. Salmonella and Campylobacter are among the most frequent pathogens responsible for gastrointestinal diseases in Germany. Despite comprehensive EU measures for the prevention and control of Salmonella in poultry flocks and in the food industry, infection numbers are reported to be stagnating. Zoonotic pathogens such as Salmonella can enter the human food chain via livestock, allowing sources of infection to spread rapidly. Prevention strategies exist for poultry, but they cannot be transferred to humans. Consequently, diagnostics and prevention in the food industry are essential, and there is a strong demand for specific, sensitive and reliable detection methods that enable point-of-care diagnostics. A growing understanding of the host-specific factors of S. enterica serovars can substantially advance the development of novel diagnostic methods as well as of new therapies and vaccines.
Accordingly, an infection-like in vitro model for S. Enteritidis was established in this work and used as the basis for a comprehensive search for new target structures of the pathogen. During a Salmonella infection, the first cellular barrier in the host is the epithelial layer; a human cell line (CaCo-2, intestinal epithelium) was therefore chosen for the pathogen-host study. The Salmonella transcriptome and morphological properties of the epithelial cells were examined at different phases of infection and related to well-described virulence factors and observations. With this infection model, a specific phenotype of the intracellular Salmonella in epithelial cells was demonstrated. It was also shown that cultivation in liquid medium alone already induces an invasion-competent state of the Salmonella. Co-cultivation with epithelial cells, however, induced additional expression of genes required for efficient adhesion and transmembrane transport. The latter is characteristic of intracellular nutrient limitation and shapes the infection-relevant state. Taking these factors into account, a phenotype emerged that clearly reveals mechanisms of host adaptation and possibly of pathogenesis. The intracellular bacteria must be separated from the host, an essential step for pathogen-specific analyses; here, detergent-based lysis of the eukaryotic cell membrane combined with differential centrifugation kept the eukaryotic background minimal. Using these virulence-adapted Salmonella, investigations aimed at identifying new target structures of S. Enteritidis were carried out, and an immunological screening revealed new potential antigens.
For this purpose, bacterial cDNA-based expression libraries were constructed that enable high-throughput screening of proteins as potential binders through a simplified microarray application. New, previously undescribed proteins were thus identified that are characterized by Salmonella specificity or membrane association. The proteins identified in the screening were also compared with the regulation of their coding genes in the infection-like model. It became clear that transcript abundance influences availability in the cDNA library and consequently in the expression library. Given the imbalance between the total number of protein-coding genes in S. Enteritidis and the number of clones that can be examined during microarray screening, an enrichment of proteins in the expression library is needed. The infection-like model showed that not only virulence-associated but also stress- and metabolism-related genes are upregulated. Constructing such specific cDNA libraries enables the detection of characteristic molecular markers.
Furthermore, the transcriptome analysis identified specifically upregulated genes that are relevant for the intracellular survival of S. Enteritidis in human epithelial cells. Three of these genes were examined more closely by analyzing their influence in the infection-like model using corresponding gene knockout strains. One of these mutants showed reduced growth in the late intracellular phase. Further in vitro analyses are necessary to characterize the knockout strain and to verify its potential use as a therapeutic.
In summary, an in vitro infection model for S. Enteritidis was established, through which new target structures of the pathogen were identified that are of interest for diagnostic or therapeutic applications. The model can likewise be transferred to other intracellular pathogens and ensures reliable identification of potential antigens.
Over the last decades mechanisms of recognition of morphologically complex words have been extensively examined in order to determine whether all word forms are stored and retrieved from the mental lexicon as wholes or whether they are decomposed into their morphological constituents such as stems and affixes. Most of the research in this domain focusses on English. Several factors have been argued to affect morphological processing including, for instance, morphological structure of a word (e.g., existence of allomorphic stem alternations) and its linguistic nature (e.g., whether it is a derived word or an inflected word form). It is not clear, however, whether processing accounts based on experimental evidence from English would hold for other languages. Furthermore, there is evidence that processing mechanisms may differ across various populations including children, adult native speakers and language learners. Recent studies claim that processing mechanisms could also differ between older and younger adults (Clahsen & Reifegerste, 2017; Reifegerste, Meyer, & Zwitserlood, 2017).
The present thesis examined how properties of the morphological structure, types of linguistic operations involved (i.e., the linguistic contrast between inflection and derivation) and characteristics of the particular population such as older adults (e.g., potential effects of ageing as a result of the cognitive decline or greater experience and exposure of older adults) affect initial, supposedly automatic stages of morphological processing in Russian and German. To this end, a series of masked priming experiments was conducted.
In experiments on Russian, the processing of derived -ost’ nouns (e.g., glupost’ ‘stupidity’) and of inflected forms with and without allomorphic stem alternations in 1P.Sg.Pr. (e.g., igraju – igrat’ ‘to play’ vs. košu – kosit’ ‘to mow’) was examined. The first experiment on German examined and directly compared processing of derived -ung nouns (e.g., Gründung ‘foundation’) and inflected -t past participles (e.g., gegründet ‘founded’), whereas the second one investigated the processing of regular and irregular plural forms (-s forms such as Autos ‘cars’ and -er forms such as Kinder ‘children’, respectively).
The experiments on both languages have shown robust and comparable facilitation effects for derived words and regularly inflected forms without stem changes (-t participles in German, forms of -aj verbs in Russian). The observed morphological priming effects could be clearly distinguished from purely semantic or orthographic relatedness between words. At the same time, we found a contrast between forms with and without allomorphic stem alternations in Russian and between regular and irregular forms in German, with significantly more priming for unmarked stems (relative to alternated ones) and significantly more priming for regular (compared to irregular) word forms. These findings indicate the relevance of the morphological properties of a word for the initial stages of processing, contrary to claims in the literature holding that priming effects are determined by surface form and meaning overlap only. Instead, our findings are more consistent with approaches positing a contrast between combinatorial, rule-based and lexically stored forms (Clahsen, Sonnenstuhl, & Blevins, 2003).
The doctoral dissertation also addressed the role of ageing and age-related cognitive changes in morphological processing. The results obtained on this research issue are twofold. On the one hand, the data demonstrate effects of ageing on general measures of language performance, i.e., overall longer reaction times and/or higher accuracy rates in older than in younger individuals. These findings replicate results from previous studies, which have been linked to the general slowing of processing speed at older age and to the larger vocabularies of older adults. On the other hand, we found that more specific aspects of language processing appear to be largely intact in older adults, as revealed by largely similar morphological priming effects for older and younger adults. These latter results indicate that the initial stages of morphological processing investigated here by means of the masked priming paradigm persist into older age. One caveat should, however, be noted. Achieving the same performance as a younger individual in a behavioral task may not necessarily mean that the same neural processes are involved. Older people may have to recruit a wider brain network than younger individuals, for example. To address this and related possibilities, future studies should examine older people's neural representations and the mechanisms involved in morphological processing.
The scientific drilling campaign PALEOVAN was conducted in the summer of 2010 as part of the International Continental Scientific Drilling Program (ICDP). The main goal of the campaign was the recovery of a sensitive climate archive in eastern Anatolia: the lacustrine deposits underneath the floor of Lake Van. The drilled core material was recovered from two locations, the Ahlat Ridge and the Northern Basin. A composite record was constructed from the cored material of seven parallel boreholes at the Ahlat Ridge and covers an almost complete lacustrine history of Lake Van. The composite record offers sensitive climate proxies such as variations in total organic carbon, K/Ca ratios, and the relative abundance of arboreal pollen. These proxies revealed patterns that are similar to climate proxy variations from Greenland ice cores. Climate variations in Greenland ice cores have been dated by modelling the timing of the orbital forcing of climate. Volatiles from melted ice aliquots are often taken as high-resolution proxies and provide a basis for fitting the corresponding temporal models.
The ICDP PALEOVAN scientific team fitted proxy data from the lacustrine drilling record to ice core data and constructed an age model. Embedded volcaniclastic layers had to be dated radiometrically in order to provide independent age constraints for the climate-stratigraphic age model. Solving this task by applying the 40Ar/39Ar method was the main objective of this thesis. Earlier efforts to apply 40Ar/39Ar dating resulted in inaccuracies that could not be explained satisfactorily.
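The relation underlying these analyses is the standard 40Ar/39Ar age equation, t = (1/λ)·ln(1 + J·R), where R is the ratio of radiogenic 40Ar* to neutron-induced 39Ar_K and J is the fluence parameter determined from a co-irradiated mineral standard. The R and J values in the sketch are illustrative, not measurements from the thesis:

```python
import math

# 40Ar/39Ar age equation: t = (1/lambda) * ln(1 + J * R), with
# R = 40Ar*/39Ar_K and J the neutron-fluence (irradiation) parameter.
LAMBDA_K40 = 5.543e-10  # total decay constant of 40K, 1/yr

def ar_age_ka(R, J):
    """Age in thousands of years (ka)."""
    return math.log(1.0 + J * R) / LAMBDA_K40 / 1e3

# A young Quaternary sample: small R, illustrative J
print(f"{ar_age_ka(R=0.05, J=0.002):.1f} ka")
```

The same relation also explains why excess 40Ar (from melt or fluid inclusions) and xenocrystic contamination both inflate R and thus bias ages toward values that are too old, the deviation pattern discussed below.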
The absence of K-rich feldspars in suitable tephra layers implied that the feldspar crystals needed to be at least 500 μm in size in order to apply single-crystal 40Ar/39Ar dating. Some of the samples contained no crystals of this size, or only very few. To overcome this problem, this study applied a combined single-crystal and multi-crystal approach with different crystal fractions from the same sample. The preferred method, stepwise heating analysis of an aliquot of feldspar crystals, was applied to three samples. The Na-rich crystals and their young geological age required 20 mg of inclusion-free, non-corroded feldspars. Small sample volumes (usually 25% aliquots of 5 cm3 of sample material, a spoonful of tephra) and the widespread presence of melt inclusions led to the application of combined single- and multigrain total fusion analyses. 40Ar/39Ar analyses on single crystals have the advantage that the presence of excess 40Ar and of detrital or xenocrystic contamination in the samples can be monitored; multigrain analyses may hide these effects. The results from the multigrain analyses are therefore discussed with respect to the findings from the respective cogenetic single-crystal ages. Some of the samples in this study were dated by 40Ar/39Ar on multigrain feldspar separates and (if available) in combination with a few single crystals. The 40Ar/39Ar ages of two of the samples deviated statistically from the age model; all other samples yielded identical ages. The deviating ages were older than those obtained from the age model. t-Tests compared the radiometric ages with available age control points from various proxies and from the relative paleointensity of the Earth's magnetic field within a stratigraphic range of ± 10 m. Concordant age control points from different relative chronometers indicated that the deviations are a result of erroneous 40Ar/39Ar ages.
The thesis discusses two potential causes of these erroneous ages: (1) the irregular contribution of 40Ar from rare melt and fluid inclusions, and (2) the contamination of the samples with older crystals due to a rapid combination of assimilation and ejection.
Another aliquot of the feldspar crystals that underwent separation for 40Ar/39Ar dating was investigated for geochemical inhomogeneities. Magmatic zoning is ubiquitous in the volcaniclastic feldspar crystals, and four different types were detected: compositional zoning (C-type), pseudo-oscillatory zoning of trace element concentrations (PO-type), chaotic and patchy zoning of major and trace element concentrations (R-type), and concentric zoning of trace elements (CC-type). Samples with deviating 40Ar/39Ar ages showed C-type zoning, R-type zoning, or a mix of different zoning types (C-type and PO-type). Feldspars showing PO-type zoning typically represent the smallest grain size fractions in the samples. The constant major element compositions of these crystals are interpreted to represent the latest stages in the compositional evolution of feldspars in a peralkaline melt. PO-type crystals contain fewer melt inclusions than the other zoning types and are rarely corroded. This thesis concludes that feldspars showing PO-type zoning are the most promising chronometers for the 40Ar/39Ar method when samples provide mixed zoning types of Quaternary anorthoclase feldspars.
Five samples were dated by applying the 40Ar/39Ar method to volcanic glass. High fractions of atmospheric Ar (typically > 98%) significantly hampered the precision of the 40Ar/39Ar ages and resulted in rough age estimates that widely overlap the age model. The Ar isotopes indicated that the glasses bear a chlorine-rich Ar end member. The chlorine-derived 38Ar points to chlorine-rich fluid inclusions or to hydration of the volcanic glass shards. This strengthens the evidence that irregularly distributed melt inclusions, and thus irregularly distributed excess 40Ar, affected the problematic feldspar 40Ar/39Ar ages. Whether a connection exists between the corrected initial 40Ar/36Ar ratio of the glasses and the 40Ar/36Ar ratios of the pore waters remains unclear.
This thesis offers another age model, which is similarly based on the interpolation of temporal tie points from geophysical and climate-stratigraphic data. The model uses a PCHIP interpolation (piecewise cubic Hermite interpolating polynomial), whereas the older age model used a spline interpolation. Samples whose feldspar 40Ar/39Ar ages match the earlier published age model were additionally assigned an age from the PCHIP interpolation. These modelled ages allowed a recalculation of the Alder Creek sanidine mineral standard. The climate-stratigraphic calibration of a 40Ar/39Ar mineral standard proved that the age-versus-depth interpolations from the PALEOVAN drilling cores are accurate, and that the applied chronometers recorded the temporal evolution of Lake Van synchronously.
Petrochemical discrimination of the sampled volcaniclastic material is also given in this thesis. 41 of the 57 sampled volcaniclastic layers indicate Nemrut as their provenance. The criteria that served for the provenance assignment are provided and reviewed critically. Detailed correlations of selected PALEOVAN volcaniclastics to onshore samples that were described in detail by earlier studies are also discussed. The sampled volcaniclastics are dominantly < 40 cm thick and were ejected by small to medium-sized eruptions. Onshore deposits from these types of eruptions are potentially eroded due to the predominant strong winds on the Nemrut and Süphan slopes. An exact correlation with the data presented here is therefore equivocal or not possible at all.
Deviating feldspar 40Ar/39Ar ages can possibly be explained by inherited 40Ar from feldspar xenocrysts contaminating the samples. In order to test this hypothesis, diffusion couples of Ba were investigated in compositionally zoned feldspar crystals. The diffusive behaviour of Ba in feldspar is known, and the gradients in the changing concentrations allowed the duration of a crystal's magmatic development since the formation of the zoning interface to be calculated. These durations were compared with degassing scenarios that model the Ar loss during assimilation and subsequent ejection of the xenocrysts. Diffusive equilibration of the contrasting Ba concentrations is assumed to yield maximum durations, as the gradient could have developed over several growth and heating stages. The modelling shows no indication of an involvement of inherited 40Ar in any of the deviating samples. However, the analytical set-up represents the lower limit of the required spatial resolution. It therefore cannot be excluded that the degassing modelling relies on a significant overestimation of the maximum duration of the magmatic history. Nevertheless, the modelling of xenocryst degassing indicates that the irregular incorporation of excess 40Ar by melt and fluid inclusions represents the most critical problem to be overcome in dating volcaniclastic feldspars from the PALEOVAN drill cores. This thesis provides the complete background for generating and presenting 40Ar/39Ar ages that are compared to age data from a climate-stratigraphic model. Deviations are identified statistically and then discussed in order to find explanations from the age model and/or from 40Ar/39Ar geochronology. Most of the PALEOVAN stratigraphy provides several chronometers that have been proven for their synchronicity. The lacustrine deposits of Lake Van represent a key archive for reconstructing the climate evolution of the eastern Mediterranean and the Near East.
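The logic of converting a Ba concentration gradient into a maximum duration can be sketched with the characteristic diffusion-length relation x ≈ √(4Dt) for an initially sharp concentration step, which relaxes into an error-function profile over time. All numbers below are illustrative assumptions, not thesis data:

```python
import math

SECONDS_PER_YEAR = 3.156e7

def max_duration_years(gradient_halfwidth_m, D_m2_s):
    """Upper-bound timescale from a diffusion gradient of half-width x:
    t ~ x^2 / (4 D), the characteristic time of the erf profile that
    develops from an initially sharp compositional step."""
    t_s = gradient_halfwidth_m ** 2 / (4.0 * D_m2_s)
    return t_s / SECONDS_PER_YEAR

# Illustrative assumptions: a 10-micrometre gradient and D = 1e-20 m^2/s.
t = max_duration_years(10e-6, 1e-20)
print(f"~{t:.0f} yr")
```

Because the measured gradient may have accumulated over several growth and heating stages, this single-step estimate is an upper bound on the post-interface duration, matching the "maximum durations" reasoning above.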
The PALEOVAN record offers a climate-stratigraphic age model with a remarkable accuracy and resolution.
The Himalayan arc stretches >2500 km from east to west at the southern edge of the Tibetan Plateau, representing one of the most important Cenozoic continent-continent collisional orogens. Internal deformation processes and climatic factors, which drive weathering, denudation, and transport, influence the growth and erosion of the orogen. During glacial times, wet-based glaciers sculpted the mountain range and left overdeepened, U-shaped valleys, which were backfilled with paraglacial sediments over several interglacial cycles. These sediments partly still remain within the valleys because of insufficient evacuation capacity into the foreland. Climatic processes overlay long-term tectonic processes responsible for uplift and exhumation caused by convergence. The possible processes accommodating convergence within the orogenic wedge along the main Himalayan faults, which divide the range into four major lithologic units, are debated. In this context, identifying the processes shaping the Earth’s surface on short and long timescales is crucial to understanding the growth of the orogen and its implications for landscape development in various sectors along the arc. This thesis focuses on both the surface and the tectonic processes that have shaped the landscape in the western Indian Himalaya since the late Miocene.
In my first study, I dated well-preserved glacially polished bedrock on high-elevation ridges and valley walls in the upper Chandra Valley by means of 10Be terrestrial cosmogenic nuclide (TCN) dating. I used these ages and mapped glacial features to reconstruct the extent and timing of Pleistocene glaciation at the southern front of the Himalaya. I was able to reconstruct an extensive valley glacier of ~200 km length and >1000 m thickness. Deglaciation of the Chandra Valley glacier started subsequent to the insolation increase in the Northern Hemisphere and thus responded to temperature increase. I showed that the onset of this deglaciation was coeval with the retreat of other midlatitude glaciers in the Northern and Southern Hemispheres. These comparisons also showed that the post-LGM deglaciation was very rapid, occurred within a few thousand years, and was nearly complete prior to the Bølling/Allerød interstadial.
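The exposure-age calculation behind 10Be TCN dating can be sketched with the standard zero-erosion production equation N(t) = (P/λ)(1 − e^(−λt)); the concentration and production rate below are illustrative placeholders, not the thesis data:

```python
import math

LAMBDA_BE10 = 4.99e-7  # 10Be decay constant, 1/yr (half-life ~1.39 Myr)

def exposure_age(N_atoms_g, P_atoms_g_yr, lam=LAMBDA_BE10):
    """Invert N = (P/lam) * (1 - exp(-lam * t)) for the exposure age t,
    assuming continuous exposure, no erosion, and no inheritance."""
    return -math.log(1.0 - N_atoms_g * lam / P_atoms_g_yr) / lam

# Illustrative values: N = 1.5e5 atoms/g quartz, local production rate
# P = 10 atoms/g/yr, giving an age in the post-LGM deglaciation window.
t = exposure_age(1.5e5, 10.0)
print(f"~{t / 1e3:.1f} ka")
```

In practice the production rate P must be scaled for latitude, altitude, and topographic shielding, which is where most of the systematic uncertainty in TCN ages arises.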
A second study (co-authorship) investigates how glacial advances and retreats in high mountain environments impact the landscape. Using 10Be TCN dating and geomorphic mapping, we constrained the maximum length and height of the Siachen Glacier within the Nubra Valley. Today the Shyok-Nubra confluence is backfilled with sedimentary deposits, which are attributed to valley blocking by the Siachen Glacier 900 m above the present-day river level. A glacial dam of the Siachen Glacier blocked the Shyok River and led to the evolution of a more than 20 km long lake. Fluvial and lacustrine deposits in the valley document alternating draining and filling cycles of the lake dammed by the Siachen Glacier. In this study, we show that glacial incision outpaced fluvial incision.
In the third study, which spans the million-year timescale, I focus on exhumation and erosion within the Chandra and Beas valleys. In this study, I constrained the position of, and discussed possible reasons for, rapidly exhuming rocks located several hundred kilometres away from one of the main Himalayan faults (the MFT), using apatite fission track (AFT) thermochronometry. The newly gained AFT ages indicate rapid exhumation and confirm earlier studies in the Chandra Valley. I assume that the rapid exhumation is most likely related to uplift over subsurface structures. I tested this hypothesis by combining further low-temperature thermochronometers from areas east and west of my study area. By comparing two transects, each parallel to the Beas/Chandra Valley transect, I demonstrate similarities in the exhumation pattern to transects across the Sutlej region, and strong dissimilarities in the transect crossing the Dhauladhar Range. I conclude that the belt of rapid exhumation terminates at the western end of the Kullu-Rampur window. I thereby corroborate earlier studies suggesting changes in exhumation behavior in the western Himalaya. Furthermore, I discuss several causes responsible for the pronounced change in exhumation patterns along strike: 1) the role of inherited pre-collisional features such as the Proterozoic sedimentary cover of the Indian basement, former ridges, and geological structures, and 2) the variability of convergence rates along the Himalayan arc due to an increased oblique component towards the syntaxis.
The combination of field observations (geological and geomorphological mapping) and methods to constrain short- and long-term processes (10Be, AFT) help to understand the role of the individual contributors to exhumation and erosion in the western Indian Himalaya. With the results of this thesis, I emphasize the importance of glacial and tectonic processes in shaping the landscape by driving exhumation and erosion in the studied areas.
Microbial processing of organic matter (OM) in the freshwater biosphere is a key component of global biogeochemical cycles. Freshwaters receive and process considerable amounts of leaf OM from their terrestrial landscape. These terrestrial subsidies provide an essential source of energy and nutrients to the aquatic environment through heterotrophic processing by fungi and bacteria. Particularly in freshwaters with low in-situ primary production by algae (microalgae, cyanobacteria), microbial turnover of leaf OM contributes significantly to the productivity and functioning of freshwater ecosystems, and not least to their role in global carbon cycling.
Based on differences in their chemical composition, leaf OM is believed to be less bioavailable to microbial heterotrophs than OM photosynthetically produced by algae. Especially particulate leaf OM, consisting predominantly of structurally complex and aromatic polymers, is assumed to be highly resistant to enzymatic breakdown by microbial heterotrophs. However, recent research has demonstrated that OM produced by algae promotes the heterotrophic breakdown of leaf OM in aquatic ecosystems, with profound consequences for the metabolism of leaf carbon (C) within microbial food webs. In my thesis, I aimed at investigating the underlying mechanisms of this so-called priming effect of algal OM on the use of leaf C in natural microbial communities, focusing on fungi and bacteria.
The works of my thesis underline that algal OM provides highly bioavailable compounds to the microbial community that are quickly assimilated by bacteria (Paper II). The substrate composition of OM pools determines the proportion of fungi and bacteria within the microbial community (Paper I). Thereby, the fraction of algal OM in the aquatic OM pool stimulates the activity, and hence the contribution, of bacterial communities to leaf C turnover by providing an essential energy and nutrient source for the assimilation of the structurally complex leaf OM substrate. In contrast, the assimilation of algal OM remains limited for fungal communities as a result of nutrient competition between fungi and bacteria (Paper I, II). In addition, the results provide evidence that environmental conditions determine the strength of interactions between microalgae and heterotrophic bacteria during leaf OM decomposition (Paper I, III). However, the stimulatory effect of algal photoautotrophic activity on leaf C turnover remained significant even under highly dynamic environmental conditions, highlighting its functional role in ecosystem processes (Paper III).
The results of my thesis provide insights into the mechanisms by which algae affect the microbial turnover of leaf C in freshwaters. This in turn contributes to a better understanding of the function of algae in freshwater biogeochemical cycles, especially with regard to their interaction with the heterotrophic community.
Light-driven diffusioosmosis
(2018)
The emergence of microfluidics created the need for precise and remote control of micron-sized objects. I demonstrate how light-sensitive motion can be induced at the micrometer scale by a simple addition of a photosensitive surfactant, which makes it possible to trigger hydrophobicity with light. With point-like laser irradiation, radial inward and outward hydrodynamic surface flows are remotely switched on and off. In this way, ensembles of microparticles can be moved toward or away from the irradiation center. Particle motion is analyzed according to varying parameters, such as surfactant and salt concentration, illumination condition, surface hydrophobicity, and surface structure.
The physical origin of this process is the so-called light-driven diffusioosmosis (LDDO), a phenomenon that was discovered in the framework of this thesis and is described experimentally and theoretically in this work. To give a brief explanation, a focused light irradiation induces a local photoisomerization that creates a concentration gradient at the solid-liquid interface. To compensate for the change in osmotic pressure near the surface, a hydrodynamic flow along the surface is generated. Surface-surfactant interaction largely governs LDDO. It is shown that surfactant adsorption depends on the isomerization state of the surfactant. Photoisomerization, therefore, triggers a surfactant attachment or detachment from the surface. This change is considered to be one of the reasons for the formation of LDDO flow.
These flows are induced not only by a focused laser source but also by global irradiation. Porous particles show reversible repulsive and attractive interactions when dispersed in a solution of the photosensitive surfactant. Repulsion and attraction are controlled by the irradiation wavelength. Illumination with red light leads to the formation of aggregates, while illumination with blue light leads to the formation of a well-separated grid with equal interparticle distances between 2 µm and 80 µm, depending on the particle surface density. These long-range interactions are considered to be a result of an increase or decrease of the surfactant concentration around each particle, depending on the irradiation wavelength. Surfactant molecules adsorb inside the pores of the particles. Light-induced photoisomerization changes the adsorption in the pores and drives surfactant molecules to the outside. The concentration gradients generate symmetric flows around each single particle, resulting in local LDDO. By breaking this symmetry (e.g., by closing one side of the particle with a metal cap), one can achieve active self-propelled particle motion.
East Africa is a natural laboratory: Studying its unique geological and biological history can help us better inform our theories and models. Studying its present and future can help us protect its globally important biodiversity and ecosystem services. East African vegetation plays a central role in all these aspects, and this dissertation aims to quantify its dynamics through computer simulations.
Computer models help us recreate past settings, forecast into the future, or conduct simulation experiments that we cannot otherwise perform in the field. But before all that, one needs to test their performance. The outputs that the model produced using present-day inputs agreed well with present-day observations of East African vegetation. Next, I simulated past vegetation for periods for which fossil pollen data are available for comparison. With computer models, we can fill the gaps in knowledge between the sites from which we have fossil pollen data, and create a more complete picture of the past. A good level of agreement between model and pollen data where they overlapped in space further validated the model's performance.
Once the model was tested and validated for the region, it became possible to probe one of the long-standing questions regarding East African vegetation: how did East Africa lose its tropical forests? Present-day vegetation in the tropics is mainly characterized by continuous forests worldwide, except in tropical East Africa, where forests occur only as patches. In a series of simulation experiments, I was able to show under which conditions these forest patches could have been connected and fragmented in the past. This study demonstrated the sensitivity of East African vegetation to climate change and variability such as those expected under future climate change.
El Niño-Southern Oscillation (ENSO) events, which result from fluctuations in temperature between the ocean and the atmosphere, bring further variability to East African climate and are predicted to increase in intensity in the future. But climate models are still not good at capturing the patterns of these events. In a study in which I quantified the influence of ENSO events on East African vegetation, I showed how different the future vegetation could be from what we currently predict with climate models that lack an accurate ENSO contribution. Consideration of these discrepancies is important for future global carbon budget calculations and management decisions.
Although socioeconomic status (SES) is a variable frequently used in social epidemiology, its use is associated with methodological problems: its latent structure opens up various possibilities for operationalization. These range from classical inequality indicators such as education, income, or occupational position, through multidimensional indices or indices constructed from neighbourhood characteristics, to subjective status assessments. This is problematic insofar as different indicators are based on different theoretical constructs and permit different conclusions.
In a first step, this thesis therefore uses a systematic review of the relationship between SES and back pain to examine which indicators are employed in scientific publications and how their selection is justified. The result shows a clear preference for classical indicators (education, income, and occupational position). However, the respective choice was explained in only a small percentage of the articles examined, even though the differing study results suggest that the chosen indicator could influence the association found.
In a further step, it was therefore examined how different SES indicators are related to the improvement of back pain after rehabilitation (Study 1) and to the new onset of back pain (Study 2). In addition, it was investigated whether a simple model can represent the relationship between SES and health in such a way that the influence of different indicators on a given health outcome can be estimated a priori.
The results show that the calculated association between the various indicators and chronic back pain differs considerably: for people who had already undergone rehabilitation for back pain, education and occupational position proved to be similarly influential factors, while no substantial association could be established for income. For the new onset of chronic back pain, occupational position emerged as the most important indicator, followed by education, while no significant association was found for income.
Consequently, the choice of indicator strongly determines the magnitude of the association found. Different indicators must therefore not be regarded as interchangeable, and for each research question it must be carefully considered which indicator is best suited to the question at hand. The proposed theoretical model can serve as a support in this process.
Monoclonal antibodies (mAbs) are an innovative group of drugs with increasing clinical importance in oncology, combining high specificity with generally low toxicity. There are, however, numerous challenges associated with the development of mAbs as therapeutics. A mechanistic understanding of the factors that govern the pharmacokinetics (PK) of mAbs is critical for drug development and the optimisation of effective therapies; in particular, adequate dosing strategies can improve patient quality of life and lower drug cost. Physiologically-based PK (PBPK) models offer a physiological and mechanistic framework, which is advantageous in the context of animal-to-human extrapolation. Unlike for small-molecule drugs, however, there is no consensus on how to model mAb disposition in a PBPK context. Current PBPK models for mAb PK vary widely in their representation of physiology and in their parameterisation. Their complexity poses a challenge for their application, e.g., in translating knowledge from animal species to humans.
In this thesis, we developed and validated a consensus PBPK model for mAb disposition, taking into account recent insights into mAb distribution (antibody biodistribution coefficients and interstitial immunoglobulin G (IgG) pharmacokinetics) to predict tissue PK across several pre-clinical species and humans based on plasma data only. The model allows a priori prediction of target-independent (unspecific) mAb disposition processes, as well as of mAb disposition in concentration ranges for which the unspecific clearance (CL) dominates target-mediated CL processes. This is often the case for mAb therapies at steady-state dosing.
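The kind of compartmental ODE structure a PBPK model builds on can be sketched in miniature. The two-compartment model below is NOT the consensus model of the thesis, only an illustration of linear (unspecific) clearance and plasma-interstitial exchange; all volumes, flows, and the dose are hypothetical placeholders:

```python
from scipy.integrate import solve_ivp

# Minimal two-compartment sketch of mAb disposition (illustrative only):
# plasma and interstitial compartments with linear unspecific clearance.
V_p, V_i = 3.0, 4.0  # L, plasma / interstitial volumes (placeholders)
Q = 0.1              # L/day, plasma<->interstitial exchange flow (placeholder)
CL = 0.2             # L/day, unspecific clearance from plasma (placeholder)

def rhs(t, y):
    """Mass-balance ODEs for the concentrations C_p (plasma), C_i (interstitial)."""
    C_p, C_i = y
    dC_p = (Q * (C_i - C_p) - CL * C_p) / V_p
    dC_i = Q * (C_p - C_i) / V_i
    return [dC_p, dC_i]

dose_mg = 300.0  # hypothetical IV bolus
sol = solve_ivp(rhs, (0.0, 28.0), [dose_mg / V_p, 0.0], dense_output=True)
C_p_day28 = sol.sol(28.0)[0]
print(f"plasma conc. at day 28: {C_p_day28:.1f} mg/L")
```

A full PBPK model replaces the single interstitial compartment with organ-level compartments parameterised by physiological volumes and flows, which is exactly what makes cross-species extrapolation possible.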
The consensus PBPK model was then used and refined to address two important problems:
1) Immunodeficient mice are crucial models to evaluate mAb efficacy in cancer therapy. Protection from elimination by binding to the neonatal Fc receptor is known to be a major pathway influencing the unspecific CL of both endogenous and therapeutic IgG. The concentration of endogenous IgG, however, is reduced in immunodeficient mouse models, and the effect of this reduction on unspecific mAb CL is unknown, yet of great importance for extrapolation to humans in the context of mAb cancer therapy.
2) The distribution of mAbs into solid tumours is of great interest. To comprehensively investigate mAb distribution within tumour tissue and its implications for therapeutic efficacy, we extended the consensus PBPK model by a detailed tumour distribution model incorporating a cell-level model for mAb-target interaction. We studied the impact of variations in tumour microenvironment on therapeutic efficacy and explored the plausibility of different mechanisms of action in mAb cancer therapy.
The mathematical findings and observed phenomena shed new light on therapeutic utility and dosing regimens in mAb cancer treatment.
Thematic role assignment and word order preferences in the child language acquisition of Tagalog
(2018)
A critical task in daily communication is identifying who did what to whom in an utterance, i.e., assigning the thematic roles of agent and patient in a sentence. This dissertation is concerned with Tagalog-speaking children’s use of word order and morphosyntactic markers for thematic role assignment. It aims to explain children’s difficulties in interpreting sentences with a non-canonical order of arguments (i.e., patient-before-agent) by testing the predictions of the following accounts: the frequency account (Demuth, 1989), the Competition model (MacWhinney & Bates, 1989), and the incremental processing account (Trueswell & Gleitman, 2004). Moreover, the experiments in this dissertation test the influence of a word order strategy in a language like Tagalog, where the thematic roles in a sentence are always unambiguous due to its verb-initial order and its voice-marking system. In Tagalog’s voice-marking system, the inflection on the verb indicates the thematic role of the noun marked by 'ang.' First, the possible basis for a word order strategy in Tagalog was established using a sentence completion experiment given to adults and 5- and 7-year-old children (Chapter 2) and a child-directed speech corpus analysis (Chapter 3). In general, adults and children showed an agent-before-patient preference, although adults’ preference was also affected by sentence voice. Children’s comprehension was then examined through a self-paced listening and picture verification task (Chapter 3) and an eye-tracking and picture selection task (Chapter 4), where word order (agent-initial or patient-initial) and voice (agent voice or patient voice) were manipulated. Offline (i.e., accuracy) and online (i.e., listening times, looks to the target) measures revealed that 5- and 7-year-old Tagalog-speaking children had a bias to interpret the first noun as the agent. Additionally, the use of word order and morphosyntactic markers was found to be modulated by voice.
In the agent voice, children relied more on a word order strategy; while in the patient voice, they relied on the morphosyntactic markers. These results are only partially explained by the accounts being tested in this dissertation. Instead, the findings support computational accounts of incremental word prediction and learning such as Chang, Dell, & Bock’s (2006) model.
Global climate change is one of the greatest challenges of the 21st century, with influence on the environment, societies, politics and economies. The (semi-)arid areas of Southern Africa already suffer from water scarcity. There is a great variety of ongoing research related to global climate history but important questions on regional differences still exist.
In southern African regions, terrestrial climate archives are rare, which makes paleoclimate studies challenging. Based on the assumption that continental pans (sabkhas) represent a suitable geo-archive of climate history, two different pans were studied in the southern and western Kalahari Desert. A combined approach of molecular biological and biogeochemical analyses is used to investigate the diversity and abundance of microorganisms and to trace temporal and spatial changes in paleoprecipitation in arid environments. The present PhD thesis demonstrates the applicability of pan sediments as a late Quaternary geo-archive based on microbial signature lipid biomarkers such as archaeol, branched and isoprenoid glycerol dialkyl glycerol tetraethers (GDGTs), and phospholipid fatty acids (PLFA). The microbial signatures contained in the sediment provide information on the current and past microbial communities from the Last Glacial Maximum to the recent epoch, the Holocene. The results are discussed in the context of the regional climate evolution of southwestern Africa. The seasonal shift of the Intertropical Convergence Zone (ITCZ) along the equator influences the distribution of precipitation and climate zones. The differing expansion of the winter and summer rainfall zones in southern Africa was confirmed by the frequency of certain microbial biomarkers. A period of increased precipitation in the southwestern Kalahari could be attributed to the extension of the winter rainfall zone during the Last Glacial Maximum (21 ± 2 ka). In contrast, a period of increased paleoprecipitation in the western Kalahari was indicated during the Late Glacial-to-Holocene transition. This was possibly caused by a southwestward shift in the position of the summer rainfall zone associated with the southward movement of the ITCZ.
Furthermore, this study characterizes for the first time the bacterial and archaeal life in continental pan sediments based on 16S rRNA gene high-throughput sequencing and provides an insight into the recent microbial community structure. Near-surface processes play an important role for the modern microbial ecosystem in the pans. Water availability as well as salinity might determine the abundance and composition of the microbial communities. The microbial community of pan sediments is dominated by halophilic and dry-adapted archaea and bacteria. Frequently occurring microorganisms such as Halobacteriaceae, Bacillus, and Gemmatimonadetes are described in more detail in this study.
The sound characteristics of musical instruments are determined by the interaction of the acoustic vibrational modes that can be excited on them, which in turn result from the geometric structure of the resonator in combination with the materials used. In this work, the vibrational behaviour of bowed string instruments was investigated using minimally invasive piezoelectric polymer-film sensors. The coupling phenomena studied include the so-called wolf tone and the vibration absorbers used to attenuate it, as well as the mutual interaction of bow and instrument during playing. For dielectric elastomer actuator membranes, by contrast, the influence of the elastic properties of the membrane material on the acoustic and electromechanical vibrational behaviour was demonstrated. The dissertation is divided into three parts, whose main results are summarized below.
Part I investigated the functioning of a tunable vibration absorber for damping wolf tones on bowed string instruments. By tuning the resonance frequency of the absorber to the wolf-tone frequency, part of the string vibration can be absorbed, so that the excessive excitation of the body resonance that causes the wolf tone is avoided. The vibration absorber consists of a "wolf eliminator", a small mass installed on the afterlength of the affected string (between bridge and tailpiece). It was shown here how the resonances of this vibration absorber depend on the mass of the wolf eliminator and on its position on the afterlength. The geometry of the wolf eliminator also proved decisive, especially for a non-rotationally-symmetric wolf eliminator: in this case an additional mode arises, based on the expected inharmonic modes of a mass-loaded string, which depends on the polarization direction of the string vibration.
Part II of the dissertation deals with elastomer membranes, which serve as the basis of dielectric elastomer actuators and which, owing to the membrane tension, also exhibit acoustic resonances. The response of elastomer actuators depends, among other things, on the speed of the electrical excitation. The associated viscoelastic properties of the elastomers used here, silicone and acrylate, were characterized on the one hand by frequency-dependent dynamic mechanical analysis of the elastomer, and on the other hand measured optically on complete actuators themselves. The higher viscosity of the acrylate, which shows larger actuation strains than the silicone at lower frequencies, leads to a reduction of the strains at higher frequencies, so that above about 40 hertz larger actuation strains are achieved with silicone. With the actuators investigated, the lattice constant of soft optical diffraction gratings, installed as an additional film on the membrane, could be controlled. By measuring the acoustic resonance frequency of acrylate elastomer membranes as a function of their pre-stretch, in combination with modelling of the hyperelastic behaviour of the elastomer (Ogden model), the shear modulus could be determined.
Finally, Part III describes the investigation of violins and their bowing excitation using minimally invasive piezoelectric polymer films. Two film sensors each could be installed on the bow and on the bridge of violins, under the two feet of the bridge. With the two sensors on the bridge, frequency responses of violins were measured, which allowed a determination of the frequency-dependent bridge motion. This method thus also enables a comprehensive characterization of the signature modes with respect to the bridge dynamics. Thanks to the sensors, the results of the complementary methods of impulse excitation and natural playing of the violins could be compared. For the use of the sensors on the bow, in particular for measuring the bow force, the bow-sensor system was calibrated with a materials testing machine. During measurements under natural playing, the sensors on the bow revealed the transfer of the string vibration to the bow. In addition, longitudinal bow-hair resonances could be identified that depend on the position of the string on the bow. From the analysis of this phenomenon, the longitudinal wave speed of the bow hairs could be determined, an important quantity for the coupling between string and bow. Based on the present work, studies on bowed string instruments are proposed using the system of sensors on bow and bridge, in which the playability of the instruments can be related to the bridge and bow vibrations excited in each case. Not least, this could allow a better assessment of the role of the bow for sound and playability, which has not yet been fully clarified.
Die intrazelluläre Markierung mit geeigneten Reagenzien ermöglicht ihre bildgebende Darstellung in lebenden Organismen. Dieses Verfahren (auch „Zell-Tracking“ genannt) wird in der Grundlagenforschung zur Entwicklung zellulärer Therapien, für die Erforschung pathologischer Prozesse, wie der Metastasierung, sowie für Therapiekontrollen eingesetzt. Besondere Bedeutung haben in den letzten Jahren zelluläre Therapien mit Stammzellen erlangt, da sie großes Potential bei der Regeneration von Geweben bei Krankheiten wie Morbus Parkinson oder Typ-1-Diabetes versprechen. Für die Entwicklung einer zellulären Therapie sind Informationen über den Verbleib der applizierten Zellen in vivo (Homing-Potential), über ihre Zellphysiologie sowie über die Entstehung möglicher Entzündungen notwendig. Das Ziel der vorliegenden Arbeit war daher die Synthese von Markierungsreagenzien, die nicht nur eine effiziente Zellmarkierung ermöglichen, sondern einen synergistischen Effekt hinsichtlich des modalitätsübergreifenden Einsatzes in den bildgebenden Verfahren MRT und Laser-Ablation(LA)-ICP-MS erlauben. Die MRT-Bildgebung ermöglicht die nicht invasive Nachverfolgung markierter Zellen in vivo und die LA-ICP-MS die anschließende ex vivo Analytik zur Darstellung der Elementverteilung (Bioimaging) in einer Biopsieprobe oder in einem Gewebeschnitt. Für diese Zwecke wurden zwei verschiedene Markierungsreagenzien mit dem kontrastgebenden Element Gadolinium synthetisiert. Gadolinium eignet sich aufgrund seines hohen magnetischen Moments hervorragend für die MRT-Bildgebung und da es in Biomolekülen nicht natürlich vorkommt, konnten die Reagenzien gleichermaßen für die Zellmarkierung und das Bioimaging mit der LA-ICP-MS untersucht werden. Für die Synthese eines makromolekularen Reagenzes wurde das kommerziell verfügbare Dendrimer G5-PAMAM über bifunktionelle Linker mit dem Chelator DOTA funktionalisiert, um anschließend Gadolinium zu komplexieren. 
A second, nanoparticulate reagent was obtained by a solvothermal synthesis yielding Ln:GdVO4 nanocrystals with a functional poly(acrylic acid) (PAA) shell. Doping the Ln:GdVO4-PAA nanocrystals with different lanthanides (Ln = Eu, Tb) demonstrated their fundamental multiplexing capability in LA-ICP-MS. Both labelling reagents were characterized by good biocompatibility and r1 relaxivities, which also demonstrated their potential for applications as preclinical "blood-pool" MRI contrast agents. Cell labelling was investigated with a tumour cell line and a stem cell line, and both cell types were successfully labelled intracellularly with both reagents. After labelling, in vitro MRI of cell phantoms showed a clearer contrast enhancement for cells labelled with the nanocrystals than for the commercial contrast agent Magnevist®. The high efficiency of cell labelling with the nanocrystals, and the correspondingly high signal intensities in a single cell, allowed LA-ICP-MS bioimaging measurements down to a resolution of 4 µm laser spot size. After cell labelling with the DOTA(Gd3+)-functionalized G5-PAMAM dendrimers, by contrast, LA-ICP-MS images were only possible down to a resolution of 12 µm laser spot size. Overall, the Ln:GdVO4-PAA nanocrystals could be produced in higher yield and at lower cost than the DOTA(Gd3+)-functionalized G5-PAMAM dendrimers and also showed more efficient cell labelling. The Ln:GdVO4-PAA nanocrystals therefore appear particularly promising for cell tracking. Building on this, the nanocrystals were selected for establishing antibody conjugation, which makes them applicable for molecular in vivo imaging as well as for immuno-imaging of tissue sections or biopsy samples with LA-ICP-MS.
Nervous allies
(2018)
This dissertation examines the development of diplomatic relations between France, the USA and the Federal Republic of Germany from 1969 to 1980. On a broad multi-archival source base, it reconstructs the interdependent foreign policies of these three states in the context of central themes of the 1970s: the rise and decline of détente; the dispute over the status quo in Europe, the German Question and the future of Berlin; the international economic and monetary crisis; the debate over the security and future of the Western alliance; and NATO's Double-Track Decision. It also considers a series of regional events and conflicts with far-reaching consequences, such as the Yom Kippur War, the Portuguese Revolution and the Soviet invasion of Afghanistan.
The study follows the central, theoretically motivated question of the extent to which state foreign policy and diplomatic relations were shaped by individual actors at the top of governments, their agendas, views and personal relationships with international partners, or the extent to which their decision-making was instead defined and limited by structural factors of a geopolitical, economic or political nature. To answer this question, the dissertation focuses on the analysis of changes of government and their effects on continuity and change in foreign policy. The narrative covers seven such transitions: from Chancellor Kurt Georg Kiesinger to Willy Brandt (1969) and from Brandt to Helmut Schmidt (1974) in Bonn; from President Charles de Gaulle to Georges Pompidou (1969) and from Pompidou to Valéry Giscard d'Estaing (1974) in Paris; and from Lyndon B. Johnson to Richard M. Nixon (1969), from Nixon to Gerald R. Ford (1974) and from Ford to Jimmy Carter (1977) in Washington.
Beyond a range of empirically grounded insights into the history of international relations in the 1970s, this study above all demonstrates highly personalized and exclusive foreign-policy decision-making structures and a marked dependence of the quality of intergovernmental relations on the personal relationships of foreign-policy leaders. At the same time, however, structural limits to their room for manoeuvre in the international system become apparent, depending on factors such as military security and geopolitical position, access to resources and economic strength, and political pressure at home and abroad. The dissertation's central finding is that although changes of government at times brought drastic shifts in the content and style of foreign relations, and although Bonn, Paris and Washington were confronted with many new challenges over the course of the decade, path-dependent structural pressures on the whole produced greater political continuity in the international system than is often associated with the 1970s, a decade known for profound historical change.
BACKGROUND: Physical activity involving high spinal load is thought to play a crucial role in the genesis of acute and chronic low back pain and disorders. High spinal loads are presumed in drop landings, for which strenuous bending loads have previously been demonstrated for the structures of the lower extremity. So far, clinical studies have shown that repetitive landing impacts can evoke either benign structural adaptations or damage to the lumbar vertebrae. The causes of these observations, however, have not yet been conclusively established, since the actual spinal load has to date not been documented experimentally. Moreover, it is still unclear how physiological activation of the trunk musculature compensates for impact-induced spinal loads, and to what extent trunk activity and spinal load are affected by landing demands and performer characteristics. The AIMS of this study are (1) the localisation and quantification of spinal bending loads under various landing demands and (2) the identification of compensatory trunk muscle activity patterns that potentially alleviate spinal load magnitudes. Three consecutive hypotheses (H1-H3) were postulated. H1 posits that spinal bending loads in separate motion planes can feasibly and reliably be evaluated from peak segmental angular accelerations of the spine. H2 furthermore assumes that vertical drop landings elicit the highest spinal bending load in sagittal flexion of the lumbar spine. Building on these verifications, a second study was to test the successive hypothesis (H3) that varied landing conditions, such as the performer's landing familiarity and gender, as well as the implementation of an immediate follow-up task, affect the emerging lumbar spinal bending load. It is moreover assumed that lumbar spinal bending loads under distinct landing conditions are predominantly modulated by correspondingly different conditioned pre-activations of the trunk muscles.
METHODS: To test the above hypotheses, two successive studies were carried out. In STUDY 1, 17 subjects were repeatedly assessed performing various drop landings (height: 15, 30, 45, 60 cm; unilateral, bilateral, blindfolded, catching a ball) in a test-retest design. Individual peak angular accelerations [αMAX] were derived from three-dimensional motion data of four trunk segments (upper thoracic, lower thoracic, lumbar, pelvis). αMAX was assessed in flexion, lateral flexion and rotation of each spinal joint formed by two adjacent segments. Reliability of αMAX within and between test days was evaluated by CV%, ICC 2.1, TRV% and Bland-Altman analysis (bias ± LoA). Subsequently, peak flexion acceleration of the lumbo-pelvic joint [αFLEX[LS-PV]] was statistically compared with the αMAX values of every other assessed spinal joint and motion plane (mean ± SD, independent-samples t-test). STUDY 2 deliberately assessed only peak lumbo-pelvic flexion accelerations [αFLEX[LS-PV]] and electromyographic trunk pre-activity prior to αFLEX[LS-PV] in 43 subjects performing varied landing tasks (height 45 cm; with definite or indefinite predictability of an immediate follow-up jump). Subjects were contrasted with respect to their previous landing familiarity (>1000 vs. <100 landings performed in the past 10 years) and gender. Differences in αFLEX[LS-PV] and muscular pre-activity between the contrasted subject groups, as well as between landing tasks, were statistically tested by three-way mixed ANOVA with post-hoc tests. Associations between αFLEX[LS-PV] and muscular pre-activity were assessed factor-specifically by Spearman's rank-order correlation coefficient (rS). In addition, muscular pre-activity was subdivided by landing phase [DROP, IMPACT] and assessed separately for phase-specific associations with αFLEX[LS-PV]. Each muscle's activity was moreover compared pairwise between DROP and IMPACT (mean ± SD, dependent-samples t-test).
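The reliability statistics named above (CV%, ICC 2.1, Bland-Altman bias ± LoA) follow standard definitions; a minimal sketch of how they can be computed for a subjects × test-days matrix is given below (an illustration with hypothetical data, not the study's actual analysis pipeline):

```python
import numpy as np

def cv_percent(x):
    # Coefficient of variation: SD as a percentage of the mean
    x = np.asarray(x, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

def icc_2_1(data):
    # ICC(2,1): two-way random effects, absolute agreement, single measure.
    # data: (n subjects) x (k test days / trials)
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)            # between-subjects mean square
    msc = ss_cols / (k - 1)            # between-days mean square
    mse = ss_err / ((n - 1) * (k - 1)) # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def bland_altman(day1, day2):
    # Bias and 95% limits of agreement between two test days
    d = np.asarray(day1, float) - np.asarray(day2, float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias, bias - half_width, bias + half_width
```

Perfect between-day agreement yields an ICC of 1 and a Bland-Altman bias of 0; larger residual variance relative to between-subject variance drives the ICC toward 0.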
RESULTS: αMAX showed overall high variability within test days (CV = 36%). The lowest intra-individual variability and highest reproducibility of αMAX between test days were found in flexion of the spine. αFLEX[LS-PV] showed largely consistently significantly higher magnitudes than the αMAX values of the more cranial spinal joints and the other motion planes. αFLEX[LS-PV] moreover increased gradually with landing height. Landing-unfamiliar subjects presented significantly higher αFLEX[LS-PV] than landing-familiar ones (p = .016). M. obliquus internus with M. transversus abdominis (66 ± 32% MVC) and M. erector spinae (47 ± 15% MVC) presented markedly the highest activity, in contrast to the lowest activity of M. rectus abdominis (10 ± 4% MVC). Landing-unfamiliar subjects showed significantly higher activity of M. obliquus externus than landing-familiar ones (17 ± 8% MVC vs. 12 ± 7% MVC, p = .044). M. obliquus externus and its co-contraction ratio with M. erector spinae moreover exhibited low but significant positive correlations with αFLEX[LS-PV] (rS = .39, rS = .31). Each trunk muscle distributed the larger share of its activity to DROP, whereas the peak activations of most muscles occurred in the proportionally shorter IMPACT phase. A common increase in muscular pre-activation, particularly at IMPACT, was found in landings with a planned follow-up jump and in female subjects, although αFLEX[LS-PV] was only marginally affected by this. DISCUSSION: The highest segmental angular accelerations of the spine in drop landings occur in sagittal flexion of the lumbar spine. Compensatory stabilisation of the spine appears to be provided predominantly by dorso-ventral co-contraction of M. obliquus internus, M. transversus abdominis and M. erector spinae. Elevated pre-activity of M. obliquus externus presumably characterises poor landing experience, which might give rise to increased bending loads on the lumbar spine.
The pervasively large variability of spinal angular accelerations measured across all landing types suggests the varied use of diverse mechanisms to compensate for spinal impacts in landing performance. A standardised assessment and valid evaluation of landing-evoked lumbar bending loads is therefore largely confined. CONCLUSION: Drop landings elicit the most strenuous lumbo-pelvic flexion accelerations, which can be regarded as representative of high-energy bending loads on the spine. These entail the highest risk of overloading spinal tissue when landing demands exceed the individual's landing skill. Previous landing experience and training appear to effectively improve muscular spine stabilisation patterns and thus diminish spinal bending loads.
Water at α-alumina surfaces
(2018)
The (0001) surface of α-Al₂O₃ is the most stable surface cut under UHV conditions and has been studied by many groups, both theoretically and experimentally. Reaction barriers computed with GGA functionals are known to be underestimated. Using an example reaction at the (0001) surface, this work seeks to improve the computed barriers and rates by applying a hybrid functional method and perturbation theory (LMP2) with an atomic-orbital basis rather than a plane-wave basis. In addition to activation barriers, we calculate the stability and vibrational frequencies of water on the surface. Adsorption energies were compared with PW calculations and confirmed the PBE+D2/PW stability results. In particular, the vibrational frequencies calculated with the B3LYP hybrid functional for the (0001) surface are in good agreement with experimental findings. Concerning the barriers and the reaction rate constant, the expectations are fully met: recalculating the transition state leads to an increased barrier, and a decreased rate constant, when hybrid functionals or LMP2 are applied.
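The qualitative link between a higher barrier and a smaller rate constant can be illustrated with a simple transition-state-theory (Eyring) estimate; this is a sketch using only the standard kB·T/h prefactor, not the full partition-function treatment used in ab initio kinetics, and the numerical barrier values are hypothetical:

```python
import math

KB_EV_PER_K = 8.617333262e-5   # Boltzmann constant in eV/K
H_EV_S = 4.135667696e-15       # Planck constant in eV*s

def eyring_rate(barrier_ev, temperature_k):
    """Simplified TST rate: k = (kB*T/h) * exp(-Ea / (kB*T))."""
    kt = KB_EV_PER_K * temperature_k
    return (kt / H_EV_S) * math.exp(-barrier_ev / kt)

# A barrier raised by 0.1 eV (hypothetical values) lowers the
# room-temperature rate by roughly a factor of 50, which is why a
# hybrid-functional correction to a GGA barrier changes the rate
# constant so strongly.
print(eyring_rate(0.5, 300.0) / eyring_rate(0.6, 300.0))
```

The exponential dependence on the barrier is the key point: even modest corrections to a GGA barrier translate into order-of-magnitude changes in the predicted rate.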
Furthermore, the molecular-beam scattering of water on the (0001) surface was studied. In previous work by Hass, dissociation was studied by AIMD of molecularly adsorbed water, corresponding to an equilibrium situation; the experimental method for producing such a situation is pinhole dosing. In contrast to this earlier work, the present work models, by periodic ab initio molecular dynamics simulations, the dissociation of heavy water brought onto the surface from a molecular-beam source, an experimental method that produces a non-equilibrium situation. Calculations with different surface and beam models allow us to understand the results of this non-equilibrium situation better. Compared with the more equilibrium-like situation of pinhole dosing, the beam gives an increased dissociation probability, which could be explained, and understood mechanistically, by these calculations.
In this work, good progress was made in understanding the (112̄0) surface of α-Al₂O₃ in contact with water in the low-coverage regime. This surface cut is the third most stable one under UHV conditions and has not yet been studied to a great extent. After optimization of the clean, defect-free surface, the stability of different adsorbed species could be classified: one molecular minimum and several dissociated species were detected. Starting from these, rates for various surface reactions were evaluated. A dissociation reaction was shown to be very fast because the molecular minimum is relatively unstable, whereas diffusion reactions cover a wider range from fast to slow. In general, the (112̄0) surface appears to be much more reactive towards water than the (0001) surface. In addition to reactivity, harmonic vibrational frequencies were determined for comparison with the findings of the experimental "Interfacial Molecular Spectroscopy" group at the Fritz Haber Institute in Berlin. In particular, the vibrational frequencies of OD species could be assigned to vibrations from experimental SFG spectra with very good agreement. Lattice vibrations were also studied in close collaboration with the experimental partners, who recorded SFG spectra at very low frequencies to reach deep into the lattice-vibration region. Correspondingly, a bigger slab model with greater extension perpendicular to the surface, taking more bulk layers into account, was applied. For the lattice vibrations, too, reasonably good agreement was obtained in terms of the energy differences between the peaks.
This thesis deals with the current provisions of German residence law concerning the possibilities of family reunification. It identifies weaknesses in the current rules and examines their causes, possible justifications and potential solutions.
The focus is on the conflicts that can be summarised under the notion of reverse discrimination (Inländerdiskriminierung). To this end, the phenomenon of reverse discrimination is examined, together with the case law of the ECJ on family reunification in this context. Particular attention is paid to the concept of the cross-border element, which the ECJ has in effect now abandoned. This part of the thesis concludes that distinguishing between reunification with Germans and reunification with Union citizens violates principles of equal treatment and should be abolished.
The thesis further considers various alternative models of family life beyond the classic different-sex marriage. With regard to same-sex partnerships, weaknesses remain even after the introduction of "marriage for all", above all because reunification rights depend on the existence of an institution that does not exist in large parts of the world. With regard to non-marital partnerships, by contrast, the current legal situation is considered sufficient. Finally, the thesis considers marriage models that are neither provided for nor recognised in German law: forced marriage, child marriage and polygamous marriage. It examines how German law, and residence law in particular, deals with these marriages and what purpose the existing rules pursue. While the legislature had the protection of the victims of such marriages in mind, the study concludes that the rules instead create further danger, which could be avoided only if these marriage models were first recognised and the victims were then offered help within Germany.
Overall, the thesis identifies serious human-rights deficiencies in the existing law of family reunification and proposes a general reorganisation.
The provisions considered reflect the state of the law as of July 2018.
La cabrona aquí soy yo
(2018)
The last decade has seen growing global interest in the phenomenon of drug trafficking in Mexico. The diverse expressions of extreme violence that accompany the illegal drug business are narrated in media artefacts that provoke fascination and intrigue. Thus literature and film, music and television present images and stories about drug trafficking that feed the collective imaginary.
In this context, global media representations of the Mexican woman drug trafficker reproduce feminine stereotypes in which the woman is objectified, exaggerating the sexual attributes of women's bodies. This cultural representation turns the woman into an object of desire, whose beauty serves as a mark of prestige and ostentation for the male trafficker. The culture of drug trafficking imposes on women a distinctive aesthetic ideal, which women reproduce meticulously in order to emulate this representation. In addition to physical beauty, the woman is portrayed as violent and unscrupulous, using her beauty and power of seduction to accumulate money and power at the expense of the men she conquers. For those outside the world of drug trafficking, this hypersexualized type of woman inspires negative judgements, discrimination, distrust and fear.
The research question and objectives of this work were intended to go beyond these representations in order to observe the complexities of these women's life experiences. The purpose of this doctoral thesis was to explore how the lives of Mexican women change when they become involved in narcoculture on the Mexico-United States border. Specifically, the research analysed the transformations in these women's corporality and subjectivities, and how these transformations influenced the place they occupy in the social and cultural space configured by drug trafficking. It also analysed what margins of negotiation women have within narcoculture to act and define themselves.
The questions guiding the work asked how women changed their bodies to embody the aesthetic ideal and what meanings were attributed to these changes. It was important to analyse what power dynamics were brought into play through these female bodies, in relations with men and with other women. A further objective was to determine what processes of subjectivation operate in women who participate in narcoculture, and what margins of negotiation they have to act and define themselves.
This research is situated within cultural studies and adopts an intersectional feminist perspective. It was carried out on Mexico's north-western border with the United States, specifically in the cities of Mexicali, Tijuana and San Diego, California. In this thesis, the border is observed as a space with multiple contexts of interpretation, polysemic and heterogeneous. These qualities make the cultural phenomena that occur in it diverse and contradictory.
To understand the cultural phenomena emerging from Mexico's northern border, the concept of the transborder (transfrontera) proposed by José Valenzuela Arce (2014) was useful. His proposal is that transborders are "spaces that refuse to be only one of the conditions or sides that compose them" (p. 9). The concept thus speaks of the processes of connectivity and simultaneity that globalization generates and that redefine territorial states. At the same time, it also speaks of the limits these same states use to sustain national narratives that are "organizing referents of identity and cultural ascriptions" (p. 18), creating differences and inequalities. If this is so, a border cannot be fully explained by territorial demarcation or by the hierarchical differentiation that includes some and excludes others, but neither can it be understood by concentrating solely on the processes of cultural hybridization that occur in these spaces. For Valenzuela, therefore, borders exist between spaces and between times.
This concept helps to understand how the global and the local intersect in the semiotic systems that compose the cultural universe of Mexican drug trafficking, while also explaining how mechanisms of exclusion and hierarchy are structured through gender, social position and other marks of social differentiation. Ultimately, it helps to locate these cultural processes as they are materialized in women's bodies.
The concept of narcoculture was also a useful heuristic tool. Culture is understood here as a process of production and reproduction of symbolic models, materialized in artefacts or representations and, moreover, internalized in logics of life and systems of values and beliefs, which circulate through the individual and collective practices of women and men in specific historical and spatial contexts. Narcoculture would then be the semiotic system produced around the transnational business of illegal drug trafficking, as it is lived on Mexico's northern border. Narcoculture, as defined in this work, is a semiotic system with diffuse limits: the distinctions between the illegal world of drug trafficking and the legal world external to this business are at best blurred, at worst fictitious. Narcoculture transcends territorial limits; it is a transnational cultural phenomenon.
It was necessary to outline the characteristics of Latin American cultural studies and of the German Kulturwissenschaften in order to distinguish the genealogies of these two perspectives, understand their differences and, above all, find their points in common. The central point of convergence is the transdisciplinary character of these two academic traditions. Cultural studies are thus understood as a space of articulation between disciplines (Castro Gómez, 2002), whose aim is not unification but the pluralization of meanings, attitudes and modes of perception (Bachmann-Medick, 2016). Transdisciplinarity makes it possible to trace the complexities of cultural phenomena, building bridges between different forms of knowledge and research practices.
Intersectional feminism is a central perspective in this research. One contribution of feminism to cultural studies that informs this research is the questioning of "Man" and "Woman" as given and immutable natural essences, based on the premise that "the signs 'man' and 'woman' are discursive constructions that the language of culture projects and inscribes on the stage of bodies, disguising its assemblages of signs behind the false appearance that the masculine and the feminine are natural, ahistorical truths" (Richard, 2009, p. 77). Feminist cultural studies assume that these signs are constructed in a system of representations that articulate subjectivities in concrete cultural worlds. Their aim is therefore to reveal, within signifying practices, the ideological elements that configure these signs and the conflicts that arise through their use and interpretation.
These signs acquire multiple meanings and readings according to specificities distinguished in difference. Intersectionality, within feminism, is a theoretical and methodological discourse that argues for recognizing that the sign "woman" is not an absolute category and therefore cannot by itself explain women's varied life experiences. Differences become legible when brought into play with other social categories such as social position, race, age and disability. Social differences are grounded in discourses that naturalize the attributes of these social categories, whereas from this perspective they are socially constructed and changing. The aim of an intersectional perspective is to identify how different social categories interact in institutions, practices and subjectivities, in order to understand how inequalities materialize over time.
The theoretical concepts guiding this thesis are the body and subjectivity. For this thesis, the body is understood as a site of articulation where cultural codes and the social order are materialized. The body can be understood as a dynamic and mutable border where the physical, the symbolic and the social converge. Subject and body are mutually constitutive: the body is the medium through which the subject lives experiences in the social world, and it is these experiences that lead the subject to embody social differences, materialized in gender, sex, social class and race.
Despite this inseparable relationship, to facilitate the analysis one part concentrates on the body and another on subjectivity. To understand the corporal dimension, representation was set in tension with lived experience, through audiovisual analysis and ethnographic observation read together. In the case of subjectivity, life in fictional narrative was set in tension with the life stories told in interviews, again in order to find the bridges between representations and lived experience.
This research was a qualitative and transdisciplinary study, and various methodological resources were used to construct the analysis. Ethnographic observation was carried out in various bars and clubs on both sides of the border that are frequented by people who identify with the world of narcoculture or who work within drug-trafficking networks. During these visits, the women's physical appearance was observed: their way of dressing, their personal grooming, their body shapes. Their conduct was observed: gestures and interactions with other people in the space. The space itself was also observed, to see how rules, limits and hierarchies were established in the physical layout of the places visited. Three narcocorrido videos were analysed through video hermeneutics to determine how women are represented in these cultural artefacts, using the same physical and behavioural criteria mentioned above.
The analysis of the videos, together with the ethnographic work, helped to deepen the understanding of the meanings attributed to female corporality and of the impacts these meanings have on these women's experiences and relationships.
Five semi-structured interviews were conducted with women who identified with narcoculture. Some merely sympathize with the lifestyle; others were involved in some way in the illegal drug business. The interviews explored narratives about their lives that revealed discourses about what the feminine is, what it means to be a woman, and how being a woman is lived in the world of drug trafficking. Additionally, I used the narratives of two literary texts from northern Mexican fiction about drug trafficking, in which the main characters are women. I analysed how the female subject is constructed in the narrative and what discourses about femininity and being a woman in the narco world become visible in the text.
Here too, representation was set in tension with lived experience, searching in the analysis of the literary narrative and in the experiences narrated by the women for common discourses that would explain the processes of female subjectivation within Mexican narcoculture.
The first part of the analysis articulated ethnographic observation with the audiovisual material in order to understand the aesthetic demands that narcoculture imposes on women and the ways in which they transform their bodies to satisfy this demand.
Narcoculture imposes on women an aesthetic ideal that becomes a means of access to a kind of power. This ideal demands a particular physiognomy and personal appearance, which women attempt to reproduce through interventions on the body, with make-up and hairstyling and/or cosmetic surgery. It also demands a certain fashion style, in clothing and accessories, from global luxury consumer brands. The more faithfully this ideal is reproduced, the more women are able to access economic and social benefits that give them margins of action within this social environment. Women's bodies become the primary resource for social mobility and agency within this world. The body is the principal sign determining women's place within the systems of hierarchy, inclusion and exclusion in the physical and social spaces that drug trafficking fabricates. These mechanisms of difference reproduce the inequalities of gender, age, social position and race observed in other spheres of Mexican society.
Ethnographic observation and audiovisual analysis reveal that the possibilities for performing femininity are confined within very narrow limits. Alicia Gaspar de Alba calls this the Three Maria Syndrome, which she defines as "the patriarchal social discourse of Chicano/Mexicano culture that constructs women's gender and sexuality according to three Biblical archetypes -virgins, mothers and whores-" (Gaspar de Alba, 2014, pos. 3412). These female representations are allegories of the constrictions that Mexican machista culture imposes on women, subjecting them to a restricted repertoire of life options and to the social control of their sexuality. Women within narcoculture have a place in it as a function of their physical beauty; the body is the principal referent for defining themselves as subjects.
Women are objects of desire, whose beauty is one more jewel in a drug trafficker's crown, one more possession with which to flaunt his power. At the same time, female representations increasingly appear as active subjects, participating in the business and in its violence on a par with men. Transgressions of the ideal of femininity demanded of the traditional woman in Mexican culture can be observed: the expected docility, softness and submission, the modesty and composure, are absent. Women adopt qualities considered masculine, taking up the exercise of violence and sexual aggressiveness to demonstrate that they too can navigate an aggressive, hypermasculine world. Even so, this brave warrior woman remains within the limited confines that patriarchal culture imposes through the heterosexual regime. She follows the prescription of the Three Maria Syndrome to the letter.
This becomes evident in a system of hierarchization through which women are evaluated within narcoculture. Women are judged by criteria that intersect racial, gender, and class components. Although the ways in which these marks of difference are embodied in a female body vary widely, the representations and the ethnographic observation show that the most privileged women are those who embody the signs of a high economic position: they are light-skinned, attractive, and groom their appearance to present signs of femininity discreetly, and their conduct projects composure and respectability through restraint, particularly in the expression of sexuality. Women who embody these signs of femininity are respected and considered valuable. Their value is formalized through the respectability of the marriage contract: this type of gender performance is generally reproduced by the wives of drug traffickers. At the other end of the spectrum are the least valued women: dark-skinned women who adopt an aesthetic associated with the working class, generally ostentatious and heavily decorated. The conduct of these women is judged vulgar and unrestrained. Women who embody this type of femininity are discriminated against and objectified; they are the most vulnerable to violence because of the little value they hold within the world of drug trafficking.
The buchona represents a devalued version of femininity that clashes with the decorum and discretion demanded by traditional gender norms. These are women considered vulgar because their bodies carry signs of an aggressive sexuality, because they adopt behaviors that break the social restrictions imposed on women, and because their cultural practices and consumption are associated with the working and rural classes. Among the women I interviewed there is a conflict between the attractive freedom promised by the transgression of being a buchona and the desire for the respectability granted to a woman who complies with what society demands. One of the dilemmas at the center of performing the buchona body is the battle between a femininity that is socially accepted but restrictive and a femininity that grants power but punishes.
For this reason, the women I interviewed rejected being called buchonas and preferred to call themselves cabronas. In this particular context, the word cabrona is a resignification of a colloquial Spanish term used as an insult. Here, the cabrona becomes an articulating axis for the constitution of female subjectivities within narcoculture. The cabrona is a female trope that interweaves globally circulating narratives about being a woman with local narratives about femininity. Claiming to be a "cabrona" becomes a resource for facing a violent world and for finding strategies of action in a space clearly dominated by men.
The cabrona represents independence and strength, autonomy and action. The cabrona confronts, with different nuances, traditional discourses of a self-sacrificing, docile femininity, apparently challenging male domination. For the same reason, she carries a strong stigma. Mass culture also produces representations of the cabrona. They are transmitted in gender discourses that circulate through images on social media and in books and workshops of the self-help market around the world, and they promote the idea of a woman who is unruly towards the people around her, subscribed to the consumption and individualism of capitalist culture. In these contemporary cultural representations, the woman is strong and unsubmissive, yet preserves feminine bodily codes and practices.
In the concrete context of narcoculture, global discourses about a strong, independent woman with economic power and in charge of her own sexuality meet the particular conditions of northern Mexico. Extreme violence, machismo, pronounced social inequalities, and the crisis of legitimacy of the state intervene so that these global discourses about women mutate into the representations of the buchona and the cabrona, local interpretations of a global gender discourse. For women, claiming to be a cabrona is a resource for facing a violent world and finding strategies of action in a space clearly dominated by men. It helps them confront the violence perpetrated against them and opens the possibility of being the perpetrator. The cabrona is the reaction provoked by the vulnerable and violated female body, but she is also the possibility of appropriating violence in order to exercise it over other bodies. She implies independence, sexual freedom, and economic success, evidenced by consumption and lifestyle. When they deny being buchonas, these women are rejecting all the stigmas the word carries. They do not recognize themselves in the class discrimination, the racial connotations, and the sexist prejudices it contains. They prefer cabrona because it is a way of splitting off from the negative discourses poured over them; it is a path of access to a global femininity that the mass media present as an ideal.
The analysis explored which elements compose this female trope through interviews with women and through female characters in novels about drug trafficking, in order to find bridges between fiction and lived experience. Beauty and the capacity to seduce have an ambivalent utility. On the one hand, all the time, money, and care invested in appropriating an aesthetic ideal serve to become a woman a narco can show off. For the women, it is a source of pride to know themselves desired and put on display. Women are subject to the pressures generated by the belief that, in order to survive, one must be beautiful. In the literary texts and in the interviews, a naturalization of the woman's place as an object of ostentation for the man shows through, as does the validation women feel when they are recognized as beautiful. Fiction and life both present the precarious condition of the female subject in narcoculture. It is a subjectivity anchored to discourses that demand an impossible ideal of beauty from women and that box being a woman into the whims and needs of the man.
Female beauty, however, has another facet. Female subjectivity in narcoculture is not only the result of women's submission to the discourses that regulate their appearance and conduct. Beauty is also an instrument at women's service for gaining access to money and power. Beauty and the feminine power of seduction become strategies of subsistence, and this transforms the woman from a subjugated object into a subject who subjugates. Beauty and seduction may give women certain margins of action, but this has very clear limits. Even if these feminine strategies tilt the balance of power towards the female subject, the context must be kept in mind. These women are embedded in a violent and machista world, so exercising that power is a very delicate and risky balancing act. The women who inhabit narcoculture are immersed in a world of violence, and failing to know and respect its rules and limits entails a risk of death. Violent death is a very real consequence of making mistakes in this world.
This leads to the third component of being a cabrona: risk. For the men and women involved in the cultural world of drug trafficking, pursuing risk is an integral part of living and an important part of the constitution of subjectivities in narcoculture. In the interview narratives and in the literary narratives, there are many moments in which women live through risky situations that endanger even their lives. Through these narratives emerges the way in which they interpret their role in the situation and how they see themselves in light of those experiences. Risk gives meaning to the tough, daring character demanded by taking on the role of a cabrona, but it also exposes the vulnerability of women's condition in a violent world. Taking risks is another way of affirming themselves as strong women and of distancing themselves from the gender dispositions that require them to be docile and passive. They have to prove their worth before a world dominated by men, and controlling their emotions plays a fundamental role in achieving this. Yet acknowledging fear and vulnerability is, paradoxically, what helps them survive.
Behind the discourses of female strength and power lies the fragility of lives submerged in a world where violence and machismo leave women on the edge of life and death. In the case at hand, the institutional vacuum in guaranteeing women's safety in Mexico leaves these women utterly exposed, and adopting the discourse of the cabrona as a strategy of persistence makes sense. By investing themselves as cabronas, they find a way to confront the violent world they choose to belong to, although in the end they remain trapped in it.
In a changing world facing multiple direct and indirect anthropogenic pressures, freshwater resources are endangered in both quantity and quality. An excessive supply of nutrients, for example, can cause disproportionate phytoplankton development and oxygen deficits in large rivers, leading to failure to meet the objectives of the Water Framework Directive (WFD). Such problems can be observed in many European river catchments, including the Elbe basin, and effective measures for improving the water quality status are urgently needed.
In water resources management and protection, modelling tools can help to understand the dominant nutrient processes and to identify the main sources of nutrient pollution in a watershed. They can be effective instruments for impact assessments investigating the effects of changing climate or socio-economic conditions on the status of surface water bodies, and for testing the usefulness of possible protection measures. Due to the high number of interrelated processes, ecohydrological model approaches containing water quality components are more complex than purely hydrological ones, and their setup and calibration require more effort. Such models, including the Soil and Water Integrated Model (SWIM), still need further development and improvement.
Therefore, this cumulative dissertation focuses on two main objectives: 1) approach-related objectives, aiming at the improvement and further development of the SWIM model regarding the description of nutrient (nitrogen and phosphorus) processes, and 2) application-related objectives, applying the model in meso- to large-scale basins of the Elbe river to support adaptive river basin management in view of possible future changes. The dissertation is based on five scientific papers published in international journals that address these research questions.
Several adaptations were implemented in the model code to improve the representation of nutrient processes, including a simple wetland approach, a soil nitrogen cycle extended by ammonium, and a detailed in-stream module simulating algal growth, nutrient transformation processes, and oxygen conditions in the river reaches, driven mainly by water temperature and light. Although these new approaches created a highly complex ecohydrological model with a large number of additional calibration parameters and increased uncertainty, the calibration and validation of the enhanced SWIM model in selected subcatchments and in the entire Elbe river basin delivered satisfactory to good model results in terms of criteria of fit. Thus, the calibrated and validated model provided a sound basis for the assessment of possible future changes and impacts of climate, land use, and management in the Elbe river (sub)basin(s).
The new, enhanced modelling approach improved the applicability of the SWIM model to WFD-related research questions, where the ability to consider biological water quality components (such as phytoplankton) is important. It additionally enhanced the model's ability to simulate the behaviour of nutrients originating mainly from point sources (e.g. phosphate phosphorus). Scenario results can be used by decision makers and stakeholders to identify and understand future challenges and possible adaptation measures in the Elbe river basin.
The question of the cohesion of a whole society is one of the central questions of the social sciences and sociology. Since the transition to modernity, the problem of the cohesion of differentiating societies has been the subject of scholarly and public discourse. In the present study, social integration represents a form of successful socialization that is articulated in the reproduction of symbolic and non-symbolic resources. The result of this reproduction are pluralistic forms of socialization which, with regard to political preferences, give rise to conflicting interests. These preferences are expressed in different forms, in their intensity, and in the perception of political participation. Since modern political rule, by virtue of its legal and institutional endowment, can exert a significant influence on social reproduction (e.g. through social policy), the direct influencing of political decisions, as the articulation of the differing preferences that form along societal cleavages, constitutes the only legitimate means of redistributing resources at the political level. This makes the connection between integration and political participation visible. Members who are well integrated into society are, owing to their broad participation in reproduction processes, able to recognize their own interests and to express them through political activity. The empirical findings seem to suggest that democratic conflict in modern society is no longer shaped directly by class membership and class interests, but rather by access to and the availability of symbolic and non-symbolic resources. Consequently, the research question of the present work is whether integrated societies are politically more active.
This research question is examined using aggregate data from democratically constituted political systems that count as established democracies and exhibit welfare-state measures of varying scope. The hypotheses were tested empirically using bivariate and multivariate regression analyses. The tested hypotheses can be summarized in a single hypothesis: the stronger the social integration of a society, the greater its conventional and unconventional political participation. In general terms, it can be stated that the social integration of a society has positive effects on the frequency of political participation within that society. More strongly integrated societies are politically more active, regardless of the form (conventional or unconventional) of political participation. The direct effect of society-wide integration is stronger on conventional forms than on unconventional ones. This statement only holds when elements of the electoral system, such as proportional representation, and GDP are not taken into account. On the basis of the results with control variables, the data permit the macro-level conclusion that, in addition to a high level of social integration, an electoral system characterized by (co-)participation and a high level of economic development are conducive to a high level of political participation.
Movement and navigation are essential for many organisms during some parts of their lives. This is also true for bacteria, which can move along surfaces and swim through liquid environments. They are able to sense their environment and move towards environmental cues in a directed fashion.
These abilities enable microbial lifecycles in biofilms, improved food uptake, host infection, and much more. In this thesis we study aspects of the swimming movement, or motility, of the soil bacterium P. putida. Like most bacteria, P. putida swims by rotating its helical flagella, but their arrangement differs from that of the main model organism in bacterial motility research, E. coli. P. putida is known for its intriguing motility strategy, in which fast and slow episodes can follow one another. Until now, it was not known how these two speeds are produced, and what advantages they might confer on this bacterium.
Normally the flagella, the main components of thrust generation in bacteria, are not observable by ordinary light microscopy. To elucidate this behavior, we therefore used a fluorescent staining technique on a mutant strain of this species to specifically label the flagella while leaving the cell body only faintly stained. This allowed us to image the flagella of the swimming bacteria with high spatial and temporal resolution using a customized high-speed fluorescence microscopy setup. Our observations show that P. putida can swim in three different modes. First, it can swim with the flagella pushing the cell body, which is the main mode of swimming motility previously known from other bacteria. Second, it can swim with the flagella pulling the cell body, which was thought not to be possible with multiple flagella. Lastly, it can wrap its flagellar bundle around the cell body, which results in a speed that is slower by a factor of two. In this mode, the flagella are in a different physical conformation with a larger radius, so that the cell body can fit inside. These three swimming modes explain the previous observation of two speeds, as well as the non-strict alternation of the different speeds.
Because most bacterial swimming in nature does not occur in smooth-walled glass enclosures under a microscope, we used an artificial, microfluidic, structured system of obstacles to study the motion of our model organism in a structured environment. Bacteria were observed by video microscopy and cell tracking in microchannels with cylindrical obstacles of different sizes and spacings. We analyzed turning angles, run times, and run lengths, which we compared to a minimal model for movement in structured geometries. Our findings show that hydrodynamic interactions with the walls lead to a guiding of the bacteria along obstacles. When comparing the observed behavior with the statistics of a particle that is deflected at every obstacle contact, we find that cells run for longer distances than that model predicts.
Navigation in chemical gradients is one of the main applications of motility in bacteria. We studied the swimming response of P. putida cells to chemical stimuli (chemotaxis) of the common food preservative sodium benzoate. Using a microfluidic gradient generation device, we created gradients of varying strength and observed the motion of cells with a video microscope and subsequent cell tracking. Analysis of different motility parameters such as run lengths and times shows that P. putida employs the classical chemotaxis strategy of E. coli: runs up the gradient are biased to be longer than those down the gradient. Using the two different run speeds we observed due to the different swimming modes, we classify runs into `fast' and `slow' modes with a Gaussian mixture model (GMM). We find no evidence that P. putida uses its swimming modes to perform chemotaxis.
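The two-speed classification described above can be sketched with a two-component Gaussian mixture fitted by expectation-maximization. The sketch below uses synthetic speed values; the means, spreads, and sample counts are assumptions chosen to mimic a factor-of-two speed difference, not measured data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic run speeds (um/s): a slow 'wrapped' mode and a ~2x faster 'pushing' mode
speeds = np.concatenate([rng.normal(25, 4, 300), rng.normal(50, 6, 300)])

def gaussian(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Two-component 1-D Gaussian mixture fitted by EM
m = np.array([speeds.min(), speeds.max()])     # initial means at the extremes
s = np.array([speeds.std(), speeds.std()])     # initial spreads
w = np.array([0.5, 0.5])                       # initial weights
for _ in range(100):
    resp = w[:, None] * gaussian(speeds, m[:, None], s[:, None])
    resp /= resp.sum(axis=0)                   # E-step: responsibilities
    nk = resp.sum(axis=1)
    w = nk / len(speeds)                       # M-step: weights, means, spreads
    m = (resp * speeds).sum(axis=1) / nk
    s = np.sqrt((resp * (speeds - m[:, None]) ** 2).sum(axis=1) / nk)

labels = np.argmax(resp, axis=0)               # 0 = slow mode, 1 = fast mode
print(f"mode means: {np.round(m, 1)}, runs classified as fast: {(labels == 1).sum()}")
```

Each run is then assigned to whichever mode claims the larger responsibility, which is how a hard fast/slow labeling falls out of the soft mixture fit.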
In most studies of bacterial motility, cell tracking is used to gather trajectories of individual swimming cells. These trajectories then have to be decomposed into run sections and tumble sections. Several algorithms have been developed to this end, but most require manual tuning of a number of parameters, or extensive measurements with chemotaxis mutant strains. Together with our collaborators, we developed a novel motility analysis scheme based on generalized Kramers-Moyal coefficients. From the underlying stochastic model, many parameters, such as the run length, can be inferred by an optimization procedure without the need for explicit run and tumble classification. The method can, however, be extended to a fully fledged tumble classifier. Using this method, we analyze E. coli chemotaxis measurements in an aspartate analog and find evidence for a chemotactic bias in the tumble angles.
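As a rough illustration of the idea behind Kramers-Moyal-based analysis, the sketch below estimates the drift coefficient of a simulated speed signal from conditional increment moments. The Ornstein-Uhlenbeck dynamics and all parameter values are assumptions for the demonstration only, not the stochastic model used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 200_000
theta, mu, sigma = 2.0, 20.0, 3.0             # assumed OU parameters for a run speed
v = np.empty(n)
v[0] = mu
noise = rng.normal(size=n - 1)
for i in range(n - 1):                        # Euler-Maruyama integration
    v[i + 1] = v[i] + theta * (mu - v[i]) * dt + sigma * np.sqrt(dt) * noise[i]

# Estimate the first Kramers-Moyal coefficient (drift) from conditional moments
edges = np.linspace(v.min(), v.max(), 31)
idx = np.digitize(v[:-1], edges)
dv = np.diff(v)
keep = [k for k in range(1, 31) if (idx == k).sum() > 300]   # well-populated bins only
drift = np.array([dv[idx == k].mean() / dt for k in keep])
centers = np.array([0.5 * (edges[k - 1] + edges[k]) for k in keep])

# For an OU process the drift is theta * (mu - v), so the slope should be ~ -theta
slope, intercept = np.polyfit(centers, drift, 1)
print(f"estimated drift slope: {slope:.2f} (true value: {-theta})")
```

The same conditional-moment construction at second order yields the diffusion coefficient, and fitting a parametric model to these coefficients recovers run statistics without ever labeling individual tumbles.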
Gamma-ray astronomy has provided unique insights into cosmic-ray accelerators over the past few decades. By combining information at the highest photon energies with the entire electromagnetic spectrum in multi-wavelength studies, detailed knowledge of non-thermal particle populations in astronomical objects and systems has been gained: many individual classes of gamma-ray sources could be identified both inside and outside our galaxy. Different sources were found to exhibit a wide range of temporal evolution, from variability on time scales of seconds to stable behaviour over many years of observations. With the dawn of both neutrino and gravitational-wave astronomy, additional messengers have come into play in recent years. This development marks the advent of multi-messenger astronomy: a novel approach not only to the search for sources of cosmic rays, but to astronomy in general.
In this thesis, both traditional multi-wavelength studies and multi-messenger studies are presented. They were carried out with the H.E.S.S. experiment, an imaging air Cherenkov telescope array located in the Khomas Highland of Namibia. H.E.S.S. entered its second phase in 2012 with the addition of a large, fifth telescope. While the initial array was limited to the study of gamma rays with energies above 100 GeV, the new instrument gives access to gamma rays with energies down to a few tens of GeV. The strengths of the multi-wavelength approach are demonstrated using the example of the galaxy NGC253, which is undergoing an episode of enhanced star formation. Its gamma-ray emission is discussed in light of all the information available on this system from radio, infrared, and X-ray observations. These wavelengths reveal detailed information on the population of supernova remnants, which are suspected cosmic-ray accelerators. A broad-band gamma-ray spectrum is derived from H.E.S.S. and Fermi-LAT data. The improved analysis of the H.E.S.S. data provides a measurement that is no longer dominated by systematic uncertainties. Finally, the long-term behaviour of cosmic rays in the starburst galaxy NGC253 is characterised.
In contrast to the long-time-scale evolution of a starburst galaxy, multi-messenger studies are especially intriguing when shorter time scales are probed. A prime example of a short-time-scale transient are Gamma-Ray Bursts; the efforts to understand this phenomenon effectively founded the branch of gamma-ray astronomy. The multi-messenger approach allows the study of elusive phenomena such as Gamma-Ray Bursts and other transients using electromagnetic radiation, neutrinos, cosmic rays, and gravitational waves contemporaneously. Although contemporaneous observations have only recently gained importance, the execution of such observation campaigns still presents a major challenge due to the different limitations and strengths of the participating infrastructures.
An alert system for transient phenomena has been developed for H.E.S.S. over the course of this thesis. It addresses many follow-up challenges in order to maximise the science return of the new large telescope, which is able to repoint much faster than the initial four telescopes. The system enables fully automated observations based on scientific alerts from any wavelength or messenger and allows H.E.S.S. to participate in multi-messenger campaigns. Utilising this new system, many interesting multi-messenger observation campaigns have been performed. Several highlight observations with H.E.S.S. are analysed, presented, and discussed in this work. Among them are observations of Gamma-Ray Bursts with low latency and low energy threshold, the follow-up of a neutrino candidate in spatial coincidence with a flaring active galactic nucleus, and the follow-up of the merger of two neutron stars, which was revealed by the coincidence of gravitational waves and a Gamma-Ray Burst.
In this thesis, deficits in theory of mind (ToM) and executive function (EF) were examined in tandem and separately as risk factors for conduct problems, including different forms and functions of aggressive behavior. All three reported studies and the additional analyses were based on a large community sample of N = 1,657 children, including three waves of a longitudinal study covering middle childhood and the transition to early adolescence (range 6 to 13 years) over a total of about three years. All data were analyzed with structural equation modeling.
Altogether, the results of all the studies conducted in this thesis extend previous research and confirm the propositions of the SIP model (Crick & Dodge, 1994) and of the amygdala theory of violent behavior (e.g., Blair et al., 2014), among other accounts. Considering the three main research questions, the results of the thesis suggest, first, that deficits in ToM are a risk factor for relational and physical aggression from a mean age of 8 to 11 years under the control of stable between-person differences in aggression. In addition, earlier relationally aggressive behavior predicts later deficits in ToM in this age range, which confirms transactional relations between deficits in ToM and aggressive behavior in children (Crick & Dodge, 1994). Further, deficits in ToM seem to be a cross-sectional risk factor for parent-rated conduct problems in an age range from 9 to 13 years. Second, deficits in cool EF are a risk factor for later physical, relational, and reactive aggression, but not for proactive aggression, over a course of three years from middle childhood to early adolescence. Habitual anger seems to mediate the relation between cool EF and physical, and as a trend also relational, aggression. Deficits in emotional and inhibitory control and planning have a direct effect on the individual level of conduct problems under the control of interindividual differences in conduct problems at a mean age of 8 years, but not on the trajectory of conduct problems from age 8 to 11. Third, when deficits in cool EF and ToM are studied in tandem cross-sectionally at the transition from middle childhood to early adolescence, deficits in cool EF seem to play only an indirect role, through deficits in ToM, as a risk factor for conduct problems. Finally, all results hold equally for females and males in the conducted studies.
The results of this thesis emphasize the need to intervene in the transactional processes between deficits in ToM and EF and conduct problems, including different forms and functions of aggression, particularly in the socially sensitive period from middle and late childhood to early adolescence.
The prediction of the ground shaking that can occur at a site of interest due to an earthquake is crucial in any seismic hazard analysis. Usually, empirically derived ground-motion prediction equations (GMPEs) are employed within a logic-tree framework to account for this step. This is, however, challenging if the area under consideration has only low seismicity and lacks enough recordings to develop a region-specific GMPE. It is then usual practice to adapt GMPEs from data-rich regions (host area) to the area with insufficient ground-motion recordings (target area). Host GMPEs must be adjusted in such a way that they capture the specific ground-motion characteristics of the target area. To do so, seismological parameters of the target region have to be provided, for example the site-specific attenuation factor kappa0. This is again an intricate task if the data are too sparse to derive these parameters.
In this thesis, I explore methods that can facilitate the selection of non-endemic GMPEs in a logic-tree analysis or their adjustment to a data-poor region. I follow two different strategies towards this goal.
The first approach addresses the setup of a ground-motion logic tree if no indigenous GMPE is available. In particular, I propose a method to derive an optimized backbone model that captures the median ground-motion characteristics in the region of interest. This is done by aggregating several foreign GMPEs as weighted components of a mixture model in which the weights are inferred from observed data. The approach is applied to Northern Chile, a region for which no indigenous GMPE existed at the time of the study. Mixture models are derived for interface- and intraslab-type events using eight subduction-zone GMPEs originating from different parts of the world. The derived mixtures provide satisfactory results in terms of average residuals and average sample log-likelihoods. They outperform all individual non-endemic GMPEs and are comparable to a regression model that was specifically derived for that area.
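The weight-inference step can be illustrated with a toy calculation: if each candidate GMPE is reduced to a Gaussian predictive distribution for the observed log ground motions, the mixture weights can be found by likelihood maximization, here via a simple EM iteration. All data and component parameters below are synthetic assumptions for illustration, not values or models from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal(-1.0, 0.45, 500)     # observed ln(ground motion), synthetic

# Each candidate 'GMPE' is reduced here to a predicted median and sigma
preds = [(-1.1, 0.5), (-0.3, 0.5), (-2.0, 0.5)]

def mixture_weights(obs, preds, iters=200):
    """EM for the weights of a mixture whose Gaussian components are fixed."""
    comp = np.stack([np.exp(-0.5 * ((obs - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                     for m, s in preds])      # component densities, shape (k, n)
    w = np.full(len(preds), 1.0 / len(preds))
    for _ in range(iters):
        resp = w[:, None] * comp
        resp /= resp.sum(axis=0)              # E-step: responsibilities
        w = resp.mean(axis=1)                 # M-step: updated weights
    return w

w = mixture_weights(obs, preds)
print("inferred mixture weights:", np.round(w, 3))
```

Since only the weights are free while the component predictions stay fixed, the model that best matches the observations accumulates most of the weight, yielding an optimized backbone in the spirit described above.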
The second approach is concerned with the derivation of the site-specific attenuation factor kappa0. kappa0 is one of the key parameters in host-to-target adjustments of GMPEs but is hard to derive if data are sparse. I explore methods to estimate kappa0 from ambient seismic noise. Seismic noise is, in contrast to earthquake recordings, continuously available. The rapidly emerging field of seismic interferometry makes it possible to infer velocity and attenuation information from the cross-correlation or deconvolution of long noise recordings. The extraction of attenuation parameters from diffuse wavefields is, however, not straightforward, especially not for frequencies above 1 Hz and at shallow depth. In this thesis, I show the results of two studies. In the first one, data from a small-scale array experiment in Greece are used to derive Love-wave quality factors in the frequency range 1-4 Hz. In the second study, frequency-dependent quality factors of S-waves (5-15 Hz) are estimated by deconvolving noise recorded in a borehole and at a co-located surface station in West Bohemia/Vogtland. These two studies can be seen as preliminary steps towards the estimation of kappa0 from seismic noise.
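The deconvolution step can be illustrated with a toy example: a synthetic "surface" record that is a delayed, attenuated copy of the "borehole" noise, deconvolved by water-level-stabilized spectral division. Everything here (signals, delay, attenuation factor, water level) is an assumption for demonstration, not the thesis' actual processing chain:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2 ** 14
borehole = rng.normal(size=n)               # stand-in for noise recorded at depth
delay, atten = 20, 0.6                      # assumed travel time (samples), amplitude loss
surface = atten * np.roll(borehole, delay)  # surface record: delayed, attenuated copy

B = np.fft.rfft(borehole)
S = np.fft.rfft(surface)
water = 0.01 * np.max(np.abs(B)) ** 2       # water level stabilises the spectral division
transfer = S * np.conj(B) / np.maximum(np.abs(B) ** 2, water)
impulse = np.fft.irfft(transfer, n)         # interstation impulse response

peak = int(np.argmax(impulse))
print(f"recovered delay: {peak} samples, amplitude: {impulse[peak]:.2f}")
```

In a real noise-based Q estimate, one would measure how the spectral amplitude of this impulse response decays with frequency, rather than read off a single attenuation factor as in this toy.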
Metamaterial devices (2018)
Digital fabrication machines such as 3D printers excel at producing arbitrary shapes, such as for decorative objects. In recent years, researchers started to engineer not only the outer shape of objects, but also their internal microstructure. Such objects, typically based on 3D cell grids, are known as metamaterials. Metamaterials have been used to create materials that, e.g., change their volume, or have variable compliance.
While metamaterials were initially understood as materials, we propose to think of them as devices.
We argue that thinking of metamaterials as devices enables us to create internal structures that implement an input-process-output model without electronics, purely within the material's internal structure. In this thesis, we investigate three aspects of such metamaterial devices that implement parts of the input-process-output model: (1) materials that process analog inputs by implementing mechanisms based on their microstructure, (2) materials that process digital signals by embedding mechanical computation into the object's microstructure, and (3) interactive metamaterial objects that provide output to the user by changing their outside to interact with their environment. The input to our metamaterial devices is provided directly by users physically interacting with the device, e.g., turning a handle or pushing a button.
The design of the intricate microstructures that enable the functionality of metamaterial devices is not obvious. The complexity arises from the fact that not only is a suitable cell geometry necessary, but the cells additionally need to play together in a well-defined way. To support users in creating such microstructures, we research and implement interactive design tools. These tools allow experts to freely edit their materials, while supporting novice users by auto-generating cell assemblies from high-level input. Our tools implement easy-to-use interactions like brushing, interactively simulate the cell structures' deformation directly in the editor, and export the geometry as a 3D-printable file. Our goal is to foster more research and innovation on metamaterial devices by allowing the broader public to contribute.
The last years have shown an increasing sophistication of attacks against enterprises. Traditional security solutions like firewalls, anti-virus systems, and generally Intrusion Detection Systems (IDSs) are no longer sufficient to protect an enterprise against these advanced attacks. One popular approach to tackle this issue is to collect and analyze events generated across the IT landscape of an enterprise. This task is achieved by the utilization of Security Information and Event Management (SIEM) systems. However, the majority of currently existing SIEM solutions are not capable of handling the massive volume of data and the diversity of event representations. Even if these solutions can collect the data at a central place, they are neither able to extract all relevant information from the events nor to correlate events across various sources. Hence, only rather simple attacks are detected, whereas complex attacks, consisting of multiple stages, remain undetected. Undoubtedly, security operators of large enterprises are faced with a typical Big Data problem.
In this thesis, we propose and implement a prototypical SIEM system named Real-Time Event Analysis and Monitoring System (REAMS) that addresses the Big Data challenges of event data with common paradigms, such as data normalization, multi-threading, in-memory storage, and distributed processing. In particular, a mostly stream-based event processing workflow is proposed that collects, normalizes, persists, and analyzes events in near real-time. In this regard, we have made various contributions in the SIEM context. First, we propose a high-performance normalization algorithm that is highly parallelized across threads and distributed across nodes. Second, we persist events in an in-memory database for fast querying and correlation in the context of attack detection. Third, we propose various analysis layers, such as anomaly- and signature-based detection, that run on top of the normalized and correlated events. As a result, we demonstrate the capability to detect previously known as well as unknown attack patterns. Lastly, we have investigated the integration of cyber threat intelligence (CTI) into the analytical process, for instance, for correlating monitored user accounts with previously collected public identity leaks to identify possibly compromised user accounts.
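The normalization idea can be illustrated with a minimal sketch (not the REAMS implementation; the schema fields and log patterns below are invented for this example): heterogeneous log lines are mapped onto one common representation in parallel, so that later correlation can treat all sources uniformly.

```python
import re
from concurrent.futures import ThreadPoolExecutor

# Illustrative source-specific patterns; a real SIEM would ship many more.
PATTERNS = {
    "ssh": re.compile(r"Failed password for (?P<user>\S+) from (?P<src_ip>\S+)"),
    "web": re.compile(r"(?P<src_ip>\S+) - (?P<user>\S+) \"GET"),
}

def normalize(raw):
    """Map one raw event onto a common schema (source, user, src_ip)."""
    for source, pattern in PATTERNS.items():
        match = pattern.search(raw)
        if match:
            return {"source": source, **match.groupdict()}
    return {"source": "unknown", "raw": raw}

events = [
    "Failed password for root from 10.0.0.5",
    '192.168.1.9 - alice "GET /index.html',
]

# Normalization is embarrassingly parallel across events.
with ThreadPoolExecutor(max_workers=4) as pool:
    normalized = list(pool.map(normalize, events))
```

Once all events share one schema, cross-source correlation (e.g., joining on `src_ip`) becomes a plain query over the normalized store.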
In summary, we show that a SIEM system can indeed monitor a large enterprise environment with a massive load of incoming events. As a result, complex attacks spanning across the whole network can be uncovered and mitigated, which is an advancement in comparison to existing SIEM systems on the market.
Amorphous calcium carbonate (ACC) is a widespread biological material found in many organisms, such as sea urchins and mollusks, where it either serves as a precursor phase for crystalline biominerals or is stabilized and used in the amorphous state. As ACC readily crystallizes, stabilizers such as anions, cations, or macromolecules are often present to avoid or delay unwanted crystallization. Furthermore, additives often tune the properties of the material to suit the specific function needed by the organism: for example, cystoliths in leaves scatter light to optimize energy uptake from the sun, while calcite/aragonite crystals form the protective shells of mussels and gastropods. The lifetime of the amorphous phase is controlled by its kinetic stability against crystallization. This has often been linked to water, which plays a role in the mobility of ions and hence in the probability of forming crystalline nuclei that initiate crystallization. However, it is unclear how the water molecules are incorporated within the amorphous phase: as liquid confined in pores, as structural water binding to the ions, or as a mixture of both. It is also unclear how this is perturbed when additives are added, especially Mg2+, one of the most common additives found in biogenic samples. Mg2+ is expected to have a strong influence on the water incorporated into ACC, given the high energy barrier to dehydration of magnesium ions compared to calcium ions in solution.
During the last 10-15 years, there has been a large effort to understand the local environment of the ions/molecules and how it affects the properties of the amorphous phase, but only a few aspects of the structure have so far been well described in the literature. The reason for this lies partly in the low stability of ACC when exposed to air, where it tends to crystallize within minutes, and in the limited quantities of ACC produced by traditional synthesis routes. A further obstacle has been the difficulty of modeling the local structure based on experimental data. To solve the problems of stability and sample size, a few studies have used stabilizers such as Mg2+ or OH- and severely dehydrated samples to stabilize the amorphous state, allowing combined neutron and X-ray analysis to be performed. So far, however, a clear description of the local environments of the water present in the structure has not been reported.
In this study we show that ACC can be synthesized without any stabilizing additives in the quantities necessary for neutron measurements, and that accurate models can be derived with the help of empirical potential structure refinement. These analyses show that there is a wide range of local environments for all of the components in the system, suggesting that the amorphous phase is highly inhomogeneous, without any phase separation between ions and water. We also show that the water in ACC is mainly structural and that there is no confined or liquid-like water present in the system. Analysis of amorphous magnesium carbonate further shows a large difference in the local structure of the two cations: Mg2+ surprisingly interacts with significantly fewer water molecules than Ca2+, despite its higher dehydration energy. All in all, this shows that water molecules act as a structural component of ACC whose strong binding to cations and anions probably retards or prevents the crystallization of the amorphous phase.
This thesis investigates the comprehension of the passive voice in three distinct populations. First, the comprehension of passives by adult German speakers was studied, followed by an examination of how German-speaking children comprehend the structure. Finally, bilingual Mandarin-English speakers were tested on their comprehension of the passive voice in English, their L2. An integral part of testing comprehension in all three populations is the use of structural priming. In each of the three parts of the research, structural priming was used for a specific reason. In the study involving adult German speakers, productive and receptive structural priming were directly compared; the goal was to see the effect the two priming modalities have on language comprehension. In the study on German-acquiring children, structural priming was an important tool in answering the question of the delayed acquisition of the passive voice. Finally, in the study on the bilingual population, cross-linguistic priming was used to investigate the importance of word order in the priming effect, since Mandarin and English have different word orders in passive-voice sentences.
Various ways of preparing enantiomerically pure 2-amino[6]helicene derivatives were explored. Ni(0)-mediated cyclotrimerization of enantiopure triynes provided (M)- and (P)-7,8-bis(p-tolyl)hexahelicene-2-amine in >99% ee as well as its benzo derivative in >99% ee. The stereocontrol was found to be inefficient for a 2-aminobenzo[6]helicene congener with an embedded five-membered ring. Helically chiral imidazolium salts bearing one or two helicene moieties have been synthesized and applied in enantioselective [2+2+2] cyclotrimerization catalyzed by an in situ formed Ni(0)-NHC complex. The synthesis of the first helically chiral Pd- and Ru-NHC complexes and their application in enantioselective catalysis was demonstrated. The latter show promising results in enantioselective olefin metathesis reactions. A mechanistic proposal for asymmetric ring-closing metathesis is provided.
In this thesis we provide a construction of the operator framework starting from the functional formulation of group field theory (GFT). We define operator algebras on Hilbert spaces whose expectation values in specific states provide the correlation functions of the functional formulation. Our construction gives a direct relation between the ingredients of the functional GFT and its operator formulation in a perturbative regime. Using this construction, we provide an example of GFT states that cannot be formulated as states in a Fock space and that lead to mathematically inequivalent representations of the operator algebra. We show that such inequivalent representations can be grouped together by their symmetry properties and sometimes break the left-translation symmetry of the GFT action. We interpret these groups of inequivalent representations as phases of GFT, similar to the classification of phases used in QFTs on space-time.
The continuously increasing pollution of aquatic environments with microplastics (plastic particles < 5 mm) is a global problem with potential implications for organisms at all trophic levels. For microorganisms, trillions of floating microplastic particles represent a huge surface area for colonization. Due to their very low biodegradability, microplastics remain in the environment for years to centuries and can be transported over thousands of kilometers together with the attached organisms. Since pathogenic, invasive, or otherwise harmful species could also be spread this way, it is essential to study microplastics-associated communities.
For this doctoral thesis, eukaryotic communities were analyzed for the first time on microplastics in brackish environments and compared to communities in the surrounding water and on the natural substrate wood. With Illumina MiSeq high-throughput sequencing, more than 500 different eukaryotic taxa were detected on the microplastics samples. Among them were various green algae, dinoflagellates, ciliates, fungi, fungus-like protists, and small metazoans such as nematodes and rotifers. The most abundant organism was a dinoflagellate of the genus Pfiesteria, a genus that can include fish-pathogenic and bloom-forming toxigenic species. Network analyses revealed numerous interaction possibilities among prokaryotes and eukaryotes in microplastics biofilms. Eukaryotic community compositions on microplastics differed significantly from those on wood and in water, and compositions were additionally distinct among the sampling locations. Furthermore, biodiversity was clearly lower on microplastics in comparison to the diversity on wood or in the surrounding water.
In another experiment, a situation was simulated in which treated wastewater containing microplastics was introduced into a freshwater lake. With increasing microplastics concentrations, the resulting bacterial communities became more similar to those from the treated wastewater. Moreover, the abundance of integrase I increased together with rising concentrations of microplastics. Integrase I is often used as a marker for anthropogenic environmental pollution and is further linked to genes conferring, e.g., antibiotic resistance.
This dissertation gives detailed insights into the complexity of prokaryotic and eukaryotic communities on microplastics in brackish and freshwater systems. Even though microplastics provide novel microhabitats for various microbes, they might also transport toxigenic, pathogenic, antibiotic-resistant, or parasitic organisms, meaning that their colonization can pose potential threats to humans and the environment. Finally, this thesis underscores the urgent need for more research as well as for strategies to minimize global microplastic pollution.
In this thesis, new quantizations are constructed for pseudo-differential boundary value problems (BVPs) on manifolds with edge. The form of the operators comes from Boutet de Monvel's calculus, which exists on smooth manifolds with boundary; the singular case considered here, with edge and boundary, is much more complicated. The present approach simplifies the operator-valued symbolic structures by using suitable Mellin quantizations on infinite stretched model cones of wedges with boundary. The Mellin symbols themselves are, modulo smoothing symbols with asymptotics, holomorphic in the complex Mellin covariable. One of the main results is the construction of parametrices of elliptic elements in the corresponding operator algebra, including elliptic edge conditions.
The newest spirit of capitalism describes today's regime of mobilization and justification, which time and again leads us to valorize our labor power and to step into the capitalist hamster wheel every day. The old spirit of capitalism, according to which diligence, discipline, and thrift lead to social advancement, has long since ceased to carry. Pure self-realization, the claim to flexibility, and flat hierarchies are likewise no longer enough to motivate people, especially well-qualified ones, to work. The newest spirit of capitalism, by contrast, is the product of the deep subjectivation and internalization of neoliberalism.
It revolves around constant professional and private optimization as well as a comprehensive utilitarian mindset. Being happy is no longer merely an option; there is a normative demand that one ought to be happy. The performance principle is actively affirmed and meritocratic justice is demanded. Coping with complexity becomes the overarching theme. The claim to distinction, particularly vis-à-vis "underperformers", increases. The world is increasingly viewed through the lens of numbers and statistics, and key performance indicators become constant companions. Life, amplified by social networks, increasingly becomes a performative stage that simultaneously serves networking purposes. As a consequence of constant optimization, however, it becomes ever harder to come to rest.
This newest spirit of capitalism, this comprehensive demand for optimization, has grave consequences, however. The manifest pathologies of the newest spirit include increased rates of depression, burnout, and anxiety disorders. At the societal level, the social divide widens ever further along the ability to cope with complexity, producing many losers and precarious winners. This newest spirit of capitalism is therefore questioned from the standpoints of social critique, artistic critique, and the critique of ideology. The role of the trade unions as the central institution of social critique, one that can offer a real counterweight to the newest spirit of capitalism, is discussed controversially. And it is shown: chilling out is the new subversion.
The dissertation examines sexual metaphor in the Nuremberg Shrovetide play (Fastnachtspiel) of the 15th century, choosing as its textual basis Adelbert von Keller's edition of the Nuremberg Shrovetide plays as the only complete collection. The aim of the dissertation is to work out the uniqueness of the Shrovetide play's vocabulary by reflecting, with a focus on the imagery and metaphor of obscene speech, on its expressive power and effect in the historical context of the Shrovetide play. How metaphor deliberately legitimizes tabooed, intimate content for the purpose of comedy is first explored theoretically and then critically reflected upon in an interpretation of the linguistic staging of sexuality and obscenity against the sociocultural background of the Nuremberg Shrovetide play.
In an interdisciplinary approach, findings and theories from research on Shrovetide plays and the Middle Ages as well as theoretical approaches from metaphorology and humor studies are first compiled and discussed. Within the wide-ranging scholarly discourse on Shrovetide plays, the Nuremberg Shrovetide play is examined from thematic, functional, gender-related, cultural-historical, and linguistic perspectives in order to gain a better understanding of its language. In a next step, medieval marital and family life and the social and legal position of man and woman are analyzed by juxtaposing the theological-normative and the literary discourse on marriage and examining them in genre-specific terms. In this way, the logics and procedures of medieval everyday, religious, and legal practice can be uncovered. This makes it possible to assess the stagings of the body in the Shrovetide play, which negotiate medieval sexual morality and thus project manifold images and conceptions of man and woman. Subsequently, a discussion of relevant theories of metaphor explores how metaphorical language simultaneously opens the door to obscene content and keeps it at a distance. In considering post-structuralist, epistemological, cognitive, semantic, and philosophical theories describing how metaphor works, analogy-oriented and functional approaches in particular prove fruitful, because they presuppose the contextualization of metaphor as binding and discuss it as a complex, metacognitive phenomenon based on processes of interaction and transfer.
The manifold occasions, social forms, and functions of laughter in medieval society are then examined more closely in order to grasp the comic, coarsely sensual tone of the Shrovetide plays. By making recurring comic elements and patterns explicit, the entertainment value of the Shrovetide play's language can be illustrated by example.
The subsequent comprehensive interpretation of the Shrovetide plays is methodologically guided by Hans Georg Coenen's theory of figurative speech. With his definitions of analogy, he remains committed to classical rhetoric and distinguishes, among others, "creative", "conventionalized", and "lexicalized" metaphors. Given the extraordinary variety of sexual-metaphorical expressions, the focus lies on the representations of the male and female sexual organs and of coitus.
The author arrives at the following conclusions: through its image-shaping mediation, metaphor ensures in each case that the sexual content becomes representable without confronting the beholder with unmistakable directness. Whether, with its everyday, mostly rustic image motifs for the intimate sphere, it exaggerates or understates, degrades or elevates, veils or exposes, metaphor can aestheticize the sexual content in any form and shape. Because it views that content from a new or different perspective, it formally removes it to a distance. Speaking about sexuality in metaphorical speech thus appears justified, as if held at arm's length, and the extent, indeed excess, of the staging of sexuality in the Shrovetide play becomes possible in the first place. Frequent images cast the penis as a donkey, the vagina as a meadow, and coitus as a spear fight. Metaphor thereby turns familiar conceptions of man and woman, together with the normatively fixed expectations and structures of married life, upside down. This may have been felt to be obscene and indecent, but also amusing, and may still be felt so today. Throughout, metaphor remains ambiguous. Its poetic gift and its capacity for comic effect lie in its multi-channel mode of operation, its innovative power, and also its contradictoriness. The artful language of the Shrovetide play thus stands in counterpoint to the physicality of the plays' content.
The dissertation demonstrates the innovative, norm-critical potential of the language of the Shrovetide play, making it readable as a poetics of ambiguity and appreciable as a treasury of manifold and differentiated expressions for the intimate sphere. The work thus makes an important contribution to the linguistic and literary analysis of the Nuremberg Shrovetide play and to a more differentiated understanding of its cultural-historical significance.
Virtual 3D city models represent and integrate a variety of spatial data and georeferenced data related to urban areas. With the help of improved remote-sensing technology, official 3D cadastral data, open data or geodata crowdsourcing, the quantity and availability of such data are constantly expanding and its quality is ever improving for many major cities and metropolitan regions. There are numerous fields of applications for such data, including city planning and development, environmental analysis and simulation, disaster and risk management, navigation systems, and interactive city maps.
The dissemination and the interactive use of virtual 3D city models represent key technical functionality required by nearly all corresponding systems, services, and applications. The size and complexity of virtual 3D city models, their management, their handling, and especially their visualization represent challenging tasks. For example, mobile applications can hardly handle these models due to their massive data volume and data heterogeneity. Therefore, the efficient usage of all computational resources (e.g., storage, processing power, main memory, and graphics hardware) is a key requirement for software engineering in this field. Common approaches are based on complex clients that require the 3D model data (e.g., 3D meshes and 2D textures) to be transferred to them and that then render the received 3D models. However, these applications have to implement most stages of the visualization pipeline on the client side. Thus, as high-quality 3D rendering processes strongly depend on locally available computer graphics resources, software engineering faces the challenge of building robust cross-platform client implementations.
Web-based provisioning aims at providing a service-oriented software architecture that consists of tailored functional components for building web-based and mobile applications that manage and visualize virtual 3D city models. This thesis presents corresponding concepts and techniques for web-based provisioning of virtual 3D city models. In particular, it introduces services that allow us to efficiently build applications for virtual 3D city models based on a fine-grained service concept. The thesis covers five main areas:
1. A Service-Based Concept for Image-Based Provisioning of Virtual 3D City Models: It creates a frame for a broad range of services related to the rendering and image-based dissemination of virtual 3D city models.
2. 3D Rendering Service for Virtual 3D City Models: This service provides efficient, high-quality 3D rendering functionality for virtual 3D city models. In particular, it copes with requirements such as standardized data formats, massive model texturing, detailed 3D geometry, access to associated feature data, and non-assumed frame-to-frame coherence for parallel service requests. In addition, it supports thematic and artistic styling based on an expandable graphics effects library.
3. Layered Map Service for Virtual 3D City Models: It generates a map-like representation of virtual 3D city models using an oblique view. It provides high visual quality, fast initial loading times, simple map-based interaction, and feature data access. Based on a configurable client framework, mobile and web-based applications for virtual 3D city models can be created easily.
4. Video Service for Virtual 3D City Models: It creates and synthesizes videos from virtual 3D city models. Without requiring client-side 3D rendering capabilities, users can create camera paths via a map-based user interface and configure scene contents, styling, image overlays, text overlays, and their transitions. The service significantly reduces the manual effort typically required to produce such videos. The videos can be updated automatically when the underlying data changes.
5. Service-Based Camera Interaction: It supports task-based 3D camera interactions, which can be integrated seamlessly into service-based visualization applications. It is demonstrated how to build such web-based interactive applications for virtual 3D city models using this camera service.
These contributions provide a framework for the design, implementation, and deployment of future web-based applications, systems, and services for virtual 3D city models. The approach shows how to decompose the complex, monolithic functionality of current 3D geovisualization systems into independently designed, implemented, and operated service-oriented units. In that sense, this thesis also contributes to microservice architectures for 3D geovisualization systems, a key challenge of today's IT systems engineering for building scalable IT solutions.
Previous studies on native language (L1) anaphor resolution have found that monolingual native speakers are sensitive to syntactic, pragmatic, and semantic constraints on pronouns and reflexive resolution. However, most studies have focused on English and other Germanic languages, and little is currently known about the online (i.e., real-time) processing of anaphors in languages with syntactically less restricted anaphors, such as Turkish. We also know relatively little about how 'non-standard' populations such as non-native (L2) speakers and heritage speakers (HSs) resolve anaphors.
This thesis investigates the interpretation and real-time processing of anaphors in German and in a typologically different and as yet understudied language, Turkish. It compares hypotheses about differences between native speakers' (L1ers) and L2 speakers' (L2ers) sentence processing, looking into differences in processing mechanisms as well as the possibility of cross-linguistic influence. To help fill the current research gap regarding HS sentence comprehension, it compares findings for this group with those for L2ers.
To investigate the representation and processing of anaphors in these three populations, I carried out a series of offline questionnaires and Visual-World eye-tracking experiments on the resolution of reflexives and pronouns in both German and Turkish. In the German experiments, native German speakers as well as L2ers of German were tested, while in the Turkish experiments, non-bilingual native Turkish speakers as well as HSs of Turkish with L2 German were tested. This allowed me to observe both cross-linguistic differences as well as population differences between monolinguals' and different types of bilinguals' resolution of anaphors.
Regarding the comprehension of Turkish anaphors by L1ers, contrary to what has been previously assumed, I found that Turkish has no reflexive that follows Condition A of Binding theory (Chomsky, 1981). Furthermore, I propose more general cross-linguistic differences between Turkish and German, in the form of a stronger reliance on pragmatic information in anaphor resolution overall in Turkish compared to German.
As for the processing differences between L1ers and L2ers of a language, I found evidence in support of hypotheses which propose that L2ers of German rely more strongly on non-syntactic information compared to L1ers (Clahsen & Felser, 2006, 2017; Cunnings, 2016, 2017) independent of a potential influence of their L1. HSs, on the other hand, showed a tendency to overemphasize interpretational contrasts between different Turkish anaphors compared to monolingual native speakers. However, lower-proficiency HSs were likely to merge different forms for simplified representation and processing. Overall, L2ers and HSs showed differences from monolingual native speakers both in their final interpretation of anaphors and during online processing. However, these differences were not parallel between the two types of bilingual and thus do not support a unified model of L2 and HS processing (cf. Montrul, 2012).
The findings of this thesis contribute to the field of anaphor resolution by providing data from a previously unexplored language, Turkish, as well as contributing to research on native and non-native processing differences. My results also illustrate the importance of considering individual differences in the acquisition process when studying bilingual language comprehension. Factors such as age of acquisition, language proficiency and the type of input a language learner receives may influence the processing mechanisms they develop and employ, both between and within different bilingual populations.
This dissertation consists of five self-contained essays, addressing different aspects of career choices, especially the choice of entrepreneurship, under risk and ambiguity. In Chapter 2, the first essay develops an occupational choice model with boundedly rational agents, who lack information, receive noisy feedback, and are restricted in their decisions by their personality, to analyze and explain puzzling empirical evidence on entrepreneurial decision processes. In the second essay, in Chapter 3, I contribute to the literature on entrepreneurial choice by constructing a general career choice model on the basis of the assumption that outcomes are partially ambiguous. The third essay, in Chapter 4, theoretically and empirically analyzes the impact of media on career choices, where information on entrepreneurship provided by the media is treated as an informational shock affecting prior beliefs. The fourth essay, presented in Chapter 5, contains an empirical analysis of the effects of cyclical macro variables (GDP and unemployment) on innovative start-ups in Germany. In the fifth, and last, essay in Chapter 6, we examine whether information on personality is useful for advice, using the example of career advice.
The purpose of Probabilistic Seismic Hazard Assessment (PSHA) at a construction site is to provide engineers with a probabilistic estimate of the ground-motion level that could be equaled or exceeded at least once during the structure's design lifetime. Certainty about the predicted ground motion allows engineers to confidently optimize structural design and mitigate the risk of extensive damage or, in the worst case, collapse. It is therefore in the interest of engineering, insurance, disaster mitigation, and the security of society at large to reduce uncertainties in the prediction of design ground-motion levels.
In this study, I am concerned with quantifying and reducing the prediction uncertainty of regression-based Ground-Motion Prediction Equations (GMPEs). Essentially, GMPEs are regressed best-fit formulae relating event, path, and site parameters (predictor variables) to observed ground-motion values at the site (prediction variable). GMPEs are characterized by a parametric median (μ) and a non-parametric variance (σ) of prediction. μ captures the known ground-motion physics, i.e., scaling with earthquake rupture properties (event), attenuation with distance from the source (region/path), and amplification due to local soil conditions (site), while σ quantifies the natural variability of the data that eludes μ. In a broad sense, the GMPE prediction uncertainty is the cumulative result of (1) the uncertainty on the estimated regression coefficients (the uncertainty on μ, denoted σ_μ) and (2) the inherent natural randomness of the data (σ). The extent of the μ parametrization and the quantity and quality of the ground-motion data used in a regression govern the size of its prediction uncertainty: σ_μ and σ.
In the first step, I present the impact of μ parametrization on the size of σ_μ and σ. Over-parametrization increases σ_μ, because a large number of regression coefficients (in μ) must be estimated from insufficient data. Under-parametrization mitigates σ_μ, but the reduced explanatory strength of μ is reflected in an inflated σ. For an optimally parametrized GMPE, a ~10% reduction in σ is attained by discarding low-quality data from pan-European events with incorrect values of the predictor variables.
In the case of regions with scarce ground-motion recordings, the only way to mitigate σ_μ without under-parametrization is to substitute long-term earthquake data at one location with short-term samples of data across several locations – the ergodic assumption. However, the price of the ergodic assumption is an increased σ, due to region-to-region and site-to-site differences in ground-motion physics. The σ of an ergodic GMPE developed from a generic ergodic dataset is much larger than that of non-ergodic GMPEs developed from region- and site-specific non-ergodic subsets – subsets that were too sparse to produce their own specific GMPEs. Fortunately, with the dramatic increase in recorded ground-motion data at several sites across Europe and the Middle East, I could quantify the region- and site-specific differences in ground-motion scaling and upgrade the GMPEs with 1) substantially more accurate region- and site-specific μ for sites in Italy and Turkey, and 2) a significantly smaller prediction variance σ. The benefit of such enhancements is evident in my comparison of PSHA estimates from ergodic versus region- and site-specific GMPEs, where the differences in predicted design ground-motion levels at several sites in Europe and the Middle East are as large as ~50%.
Resolving the ergodic assumption with mixed-effects regressions is feasible when the quantified region- and site-specific effects are physically meaningful and the non-ergodic subsets (regions and sites) are defined a priori through expert knowledge. In the absence of expert definitions, I demonstrate the potential of machine learning techniques to identify efficient clusters of site-specific non-ergodic subsets, based on latent similarities in their ground-motion data. Clustered site-specific GMPEs bridge the gap between site-specific and fully ergodic GMPEs, with a partially non-ergodic μ and a σ ~15% smaller than the ergodic variance.
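To illustrate the clustering step in spirit only – a minimal sketch under assumed inputs, not the actual techniques or features used in this study – sites could be grouped by the similarity of simple residual statistics with a hand-rolled k-means; the per-site feature values below are hypothetical:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: returns final centers and clustered points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest center (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # move each center to the mean of its assigned points
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(v) / len(cl) for v in zip(*cl))
    return centers, clusters

# hypothetical per-site features: (mean ground-motion residual, residual spread)
sites = [(0.30, 0.10), (0.28, 0.12), (-0.25, 0.20), (-0.31, 0.18)]
centers, clusters = kmeans(sites, k=2)
```

Sites landing in the same cluster would then share one partially non-ergodic GMPE, as described above.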
The methodological refinements to GMPE development produced in this study are applicable to new ground-motion datasets, to further enhance the certainty of ground-motion prediction and thereby of seismic hazard assessment. Advanced statistical tools show great potential in improving the predictive capabilities of GMPEs, but the fundamental requirement remains: a large quantity of high-quality ground-motion data from several sites over an extended time period.
Uncertainty is an essential part of atmospheric processes and thus inherent to weather forecasts. Nevertheless, weather forecasts and warnings are still predominantly issued as deterministic (yes or no) forecasts, although research suggests that providing weather forecast users with additional information about the forecast uncertainty can enhance the preparation of mitigation measures. Communicating forecast uncertainty would allow information on possible future events to be provided at an earlier time. The desired benefit is to enable users to start preparatory protective action at an earlier stage, based on their own risk assessment and decision threshold. But not all users have the same threshold for taking action. In the course of the project WEXICOM (‘Wetterwarnungen: Von der Extremereignis-Information zu Kommunikation und Handlung’), funded by the Deutscher Wetterdienst (DWD), three studies were conducted between 2012 and 2016 to reveal how weather forecasts and warnings are reflected in weather-related decision-making. The studies asked which factors influence the perception of forecasts and the decision to take protective action, and how forecast users make sense of probabilistic information and the additional lead time. In a first exploratory study, conducted in 2012, members of emergency services in Germany were asked how weather warnings are communicated to professional end-users in the emergency community and how the warnings are converted into mitigation measures. A large number of open questions were selected to identify new topics of interest. The questions covered topics such as users’ confidence in forecasts, their understanding of probabilistic information, and their lead time and decision thresholds for starting preparatory mitigation measures. The results show that emergency service personnel generally have a good sense of the uncertainty inherent in weather forecasts.
Although no single probability threshold could be identified at which organisations start preparatory mitigation measures, it became clear that emergency services tend to avoid basing their decisions on forecasts with low probabilities. Based on these findings, a second study, conducted with residents of Berlin in 2014, further investigated the question of decision thresholds. The survey questions related to the perception of and prior experience with severe weather, the trustworthiness of forecasters and confidence in weather forecasts, and socio-demographic and socio-economic characteristics. Within the questionnaire, a scenario was created to determine individual decision thresholds and to see whether subgroups of the sample have different thresholds. The results show that people’s willingness to act tends to be higher, and decision thresholds tend to be lower, if the expected weather event is more severe or the property at risk is of higher value. Several factors influencing risk perception have significant effects, such as education, housing status, and ability to act, whereas socio-demographic determinants alone are often not sufficient to fully grasp risk perception and protection behaviour. Parallel to the quantitative studies, an interview study was conducted with 27 members of German civil protection between 2012 and 2016. The results show that the latest developments in (numerical) weather forecasting do not necessarily fit the current practice of German emergency services. These practices are mostly based on alarms and ground truth in a reactive manner, rather than on anticipation based on prognoses or forecasts. As the potential consequences rather than the event characteristics determine protective action, the findings support the call and need for impact-based warnings. Forecasters will have to rely on impact data and need to learn the users’ understanding of impact.
Therefore, it is recommended to enhance weather communication not only by improving computer models and observation tools, but also by focusing on the aspects of communication and collaboration. Using information about uncertainty demands awareness and acceptance of the limits of knowledge – that is, of the forecaster’s capability to anticipate future developments of the atmosphere and of the user’s capability to make sense of this information.
In this dissertation, the lattice and magnetic recovery dynamics of the two heavy rare-earth metals Dy and Gd after femtosecond photoexcitation are described. For the investigations, thin films of Dy and Gd were measured at low temperatures in the antiferromagnetic phase of Dy and close to room temperature in the ferromagnetic phase of Gd. Two different optical pump–x-ray probe techniques were employed: ultrafast x-ray diffraction with hard x-rays (UXRD), which yields the structural response of heavy rare-earth metals, and resonant soft (elastic) x-ray diffraction (RSXD), which allows directly measuring changes in the helical antiferromagnetic order of Dy. The combination of both techniques makes it possible to study the complex interaction between the magnetic and the phononic subsystems.
Be Creative, Now!
(2018)
Purpose – This thesis set out to explore, describe, and evaluate the reality behind the rhetoric of freedom and control in the context of creativity. The overarching subject is the relationship between creativity, freedom, and control, considering that freedom is also seen as an element of control used to manage creativity.
Design/methodology/approach – In-depth qualitative data were gathered at two innovative start-ups, where two ethnographic studies were conducted. The data are based on participatory observations, interviews, and secondary sources, comprising a three-month field study at each organization and a total of 41 interviews across both organizations.
Findings – The thesis provides explanations for the practice of freedom and the control of creativity within organizations and expands the existing theory of neo-normative control. The findings indicate that organizations use complex control systems that allow a high degree of freedom, which paradoxically leads to more control. Freedom serves as a cover for control, which in turn leads to creativity. Covert control even results in the responsibility to be creative outside working hours.
Practical implications – Organizations that rely on creativity can use the results of this thesis. Positive workplace control of creativity provides both freedom and structure for creative work: freedom leads to organizational members being more motivated and committing themselves more strongly to their own and the organization’s goals, while a specific structure helps provide the requirements for creativity.
Originality/value – The thesis provides insight into an approach to workplace control that has mostly been neglected in creativity research and proposes a modified concept of neo-normative control. It serves to further the understanding of freedom for creativity and to challenge the liberal claims of new control forms.
Scalable data profiling
(2018)
Data profiling is the act of extracting structural metadata from datasets. Structural metadata, such as data dependencies and statistics, can support data management operations, such as data integration and data cleaning. Data management is often the most time-consuming activity in any data-related project. Its support is extremely valuable in our data-driven world, so that more time can be spent on the actual utilization of the data, e.g., building analytical models. In most scenarios, however, structural metadata is not given and must be extracted first. Therefore, efficient data profiling methods are highly desirable.
Data profiling is a computationally expensive problem; in fact, most dependency discovery problems entail search spaces that grow exponentially in the number of attributes. To this end, this thesis introduces novel discovery algorithms for various types of data dependencies – namely inclusion dependencies, conditional inclusion dependencies, partial functional dependencies, and partial unique column combinations – that considerably improve over state-of-the-art algorithms in terms of efficiency and that scale to datasets that cannot be processed by existing algorithms. The keys to these improvements are not only algorithmic innovations, such as novel pruning rules and traversal strategies, but also algorithm designs tailored for distributed execution. While distributed data profiling has been mostly neglected by previous works, it is a logical consequence in the face of recent hardware trends and the computational hardness of dependency discovery.
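The exponential search space can be seen in a toy example. The sketch below is illustrative only (not one of the thesis’s algorithms): a naive bottom-up search for minimal unique column combinations (UCCs), with the classic pruning rule that any superset of a discovered UCC cannot itself be minimal.

```python
from itertools import combinations

def is_unique(rows, cols):
    """True if the projection of rows onto cols contains no duplicate tuples."""
    projected = [tuple(r[c] for c in cols) for r in rows]
    return len(set(projected)) == len(projected)

def minimal_uccs(rows, n_cols):
    """Enumerate the column-combination lattice bottom-up, pruning
    supersets of already-found UCCs (they cannot be minimal)."""
    found = []
    for size in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), size):
            if any(set(u) <= set(cols) for u in found):
                continue  # superset of a known UCC: skip
            if is_unique(rows, cols):
                found.append(cols)
    return found

# toy relation with columns (first name, last name, zip code)
rows = [("ann", "smith", 10115),
        ("bob", "smith", 10115),
        ("ann", "jones", 14482)]
uccs = minimal_uccs(rows, 3)  # no single column is unique in this relation
```

With n columns there are 2^n − 1 candidate combinations, which is why the pruning rules, traversal strategies, and distributed designs discussed above matter at scale.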
To demonstrate the utility of data profiling for data management, this thesis furthermore presents Metacrate, a database for structural metadata. Its salient features are its flexible data model, the capability to integrate various kinds of structural metadata, and its rich metadata analytics library. We show how to perform a data anamnesis of unknown, complex datasets based on this technology. In particular, we describe in detail how to reconstruct the schemata and assess their quality as part of the data anamnesis.
The data profiling algorithms and Metacrate have been carefully implemented, integrated with the Metanome data profiling tool, and are available as free software. In that way, we intend to allow for easy repeatability of our research results and also provide them for actual usage in real-world data-related projects.
Utilization of sunlight for energy harvesting has been foreseen as a sustainable replacement for fossil fuels, which would also eliminate side effects of fossil fuel consumption such as the drastic increase of CO2 in the Earth’s atmosphere. Semiconductor materials can be implemented for energy harvesting, and the design of ideal energy harvesting devices relies on an effective semiconductor with a low recombination rate, ease of processing, long-term stability, non-toxicity, and synthesis from abundant sources. These criteria have attracted broad interest in graphitic carbon nitride (g-CN) materials, a metal-free semiconductor that can be synthesized from low-cost and abundant precursors. Furthermore, physical properties such as band gap, surface area, and absorption can be tuned. g-CN has been investigated as a heterogeneous catalyst, with diversified applications ranging from water splitting to CO2 reduction and organic coupling reactions. However, the low dispersibility of g-CN in water and organic solvents has been an obstacle to further improvements.
Tissue engineering aims to mimic natural tissues mechanically and biologically, so that synthetic materials can replace natural ones in the future. Hydrogels are crosslinked networks with high water content and are therefore prime candidates for tissue engineering. However, the first requirement is the synthesis of hydrogels with mechanical properties that match those of natural tissues. Among the different approaches to reinforcement, nanocomposite reinforcement is highly promising.
This thesis aims to investigate aqueous and organic dispersions of g-CN materials. Aqueous g-CN dispersions were utilized for visible-light-induced hydrogel synthesis, where g-CN acts as both reinforcer and photoinitiator. A variety of methodologies for enhancing g-CN dispersibility are presented, from a co-solvent method to prepolymer formation, and it is shown that hydrogels with diversified mechanical properties (from skin-like to cartilage-like) are accessible via g-CN utilization. A one-pot photografting method is introduced for the functionalization of the g-CN surface, which provides functional groups for enhanced dispersibility in aqueous and organic media. Grafting vinyl thiazole groups yields stable, additive-free organodispersions of g-CN that are electrostatically stabilized and exhibit improved photophysical properties. The colloidal stability of the organic systems enables transparent g-CN coatings and the printing of g-CN from commercial inkjet printers.
Overall, the application of g-CN in dispersed media is highly promising, and a variety of materials are accessible via the utilization of g-CN and visible light with simple chemicals and synthetic conditions. g-CN in dispersed media will bridge emerging research areas from tissue engineering to energy harvesting in the near future.
Due to challenging population growth and environmental changes, a need arises for new routes to provide the chemicals required for human necessities. An effective solution discussed in this thesis is industrial heterogeneous catalysis. The development of an advanced industrial heterogeneous catalyst is investigated herein by considering porous carbon nanomaterials as supports and modifying their surface chemistry with heteroatoms. Such modifications showed a significant influence on the performance of the catalyst and provided deeper insight into the interaction between the surface structure of the catalyst and the surrounding phase. This thesis contributes to the few existing studies on the effect of heteroatoms on catalyst performance and emphasizes the importance of understanding surface structure functionalization of a catalyst in different phases (liquid and gaseous) and for different reactions (hydrogenolysis, oxidation, and hydrogenation/polymerization). Herein, the heteroatoms utilized for the modifications are hydrogen (H), oxygen (O), and nitrogen (N). The effect of the heteroatoms on the metal particle size, on the polarity of the support and the catalyst, on the catalytic performance (activity, selectivity, and stability), and on the interaction with the surrounding phase has been explored. First, hierarchical porous carbon nanomaterials functionalized with heteroatoms (N) are synthesized and applied as supports for nickel nanoparticles in the hydrogenolysis of kraft lignin in the liquid phase. This reaction has been performed in batch and flow reactors for three different catalysts: two of comparable hierarchical porosity, one modified with N and the other not, and a third prepared from a commercial carbon support. The reaction products and analyses show that the catalysts with hierarchical porosity perform catalytically much better than the one based on a commercial carbon support with lower surface area.
Moreover, the modification with N-heteroatoms enhanced the catalytic performance: the nickel catalyst on the heteroatom-modified porous carbon material (Ni-NDC) performed best among the catalysts. In the flow reactor, Ni-NDC selectively degraded the ether bonds (β-O-4) in kraft lignin with an activity of 2.2 x 10^-4 mg lignin mg Ni^-1 s^-1 for 50 h at 350 °C and 3.5 mL min^-1 flow, providing ~99% conversion to shorter-chained chemicals (mainly guaiacol derivatives). Then, the functionalization of the carbon surface was further studied in the selective oxidation of glucose to gluconic acid using < 1 wt.% of gold (Au) deposited on the previously mentioned synthesized carbon (C) supports with different functionalities (Au-CGlucose, Au-CGlucose-H, Au-CGlucose-O, Au-CGlucoseamine). Except for Au-CGlucose-O, the catalysts achieved full glucose conversion within 40-120 min and 100% selectivity towards gluconic acid, with a maximum activity of 1.5 molGlucose molAu^-1 s^-1 in an aqueous phase at 45 °C and pH 9. Each heteroatom influenced the polarity of the carbon differently, thereby affecting the deposition of Au on the support and thus the activity and selectivity of the catalyst. The heteroatom effect was further investigated in the gas phase. The Fischer-Tropsch reaction was applied to convert synthesis gas (CO and H2) to short olefins and paraffins using surface-functionalized carbon nanotubes (CNTs) with heteroatoms as supports for iron (Fe) deposition, in the presence and absence of promoters (Na and S). The results showed the promoted, nitrogen-doped Fe-CNT catalyst to be stable for up to 180 h and selective to the formation of olefins (~47%) and paraffins (~6%), with a CO conversion of ~92% at a maximum activity of 94 x 10^-5 mol CO g Fe^-1 s^-1. The information gained on this topic can open up a wide range of applications, not only in catalysis but in other approaches as well.
In conclusion, the incorporation of heteroatoms can be the next approach not only towards an advanced industrial heterogeneous catalyst, but also for other applications (e.g., electrocatalysis, gas adsorption, or supercapacitors).
Reversible-deactivation radical polymerization (RDRP) is without any doubt one of the most prevalent and powerful strategies for polymer synthesis, by which well-defined living polymers with targeted molecular weight (MW), low molar dispersity (Ɖ), and diverse morphologies can be prepared in a controlled fashion. Atom transfer radical polymerization (ATRP), one of the most extensively studied types of RDRP, has been particularly emphasized due to the high accessibility of hybrid materials, multifunctional copolymers, and diverse end-group functionalities via commercially available precursors. However, due to catalyst-induced side reactions and chain-chain coupling termination in a bulk environment, the synthesis of high-MW polymers with uniform chain length (low Ɖ) and highly preserved chain-end fidelity is usually challenging. Besides, owing to its inherent radical nature, control of the microstructure, namely tacticity control, is another laborious task. Considering the applied catalysts, the utilization of large amounts of non-reusable transition metal ions, which leads to a cumbersome purification process, product contamination, and complicated reaction procedures, delimits the scope of ATRP techniques.
Metal-organic frameworks (MOFs) are an emerging type of porous material combining the properties of both organic polymers and inorganic crystals, characterized by a well-defined crystalline framework, high specific surface area, tunable porous structure, and versatile nanochannel functionalities. These promising properties of MOFs have thoroughly revolutionized academic research and applications in numerous areas, including gas processing, sensing, photoluminescence, catalysis, and compartmentalized polymerization. Through functionalization, the microenvironment of a MOF nanochannel can be precisely devised and tailored with specific functional groups for individual host-guest interactions. Furthermore, the high transition metal density, accessible catalytic sites, and crystalline particles all mark MOFs as prominent heterogeneous catalysts that open a new avenue towards unprecedented catalytic performance. Despite these beneficial properties for catalysis, high agglomeration and poor dispersibility restrain the potential catalytic capacity to a certain degree.
Due to the thriving development of MOF science, fundamental polymer science is undergoing a significant transformation, and advanced polymerization strategies can, in turn, remedy the intrinsic drawbacks of MOF solids. Therefore, in the present thesis, the combination of low-dimensional polymers with crystalline MOFs is demonstrated as a robust and comprehensive approach to gaining the bilateral advantages of polymers (flexibility, dispersibility) and MOFs (stability, crystallinity). The utilization of MOFs for in-situ polymerizations and catalytic purposes can be realized to synthesize intriguing polymers in a facile and universal process, expanding the applicability of conventional ATRP methodology. On the other hand, through the formation of MOF/polymer composites by surface functionalization, MOF particles with environment-adjustable dispersibility and high catalytic activity can be prepared.
In the present thesis, an approach combining the confined porous textures of MOFs with controlled radical polymerization is proposed to advance synthetic polymer chemistry. Zn2(bdc)2(dabco) (Znbdc) and the initiator-functionalized Zn MOF, ZnBrbdc, are utilized as reaction environments for the in-situ polymerization of various size-dependent methacrylate monomers (i.e., methyl, ethyl, benzyl, and isobornyl methacrylate) through (surface-initiated) activators regenerated by electron transfer (ARGET/SI-ARGET) ATRP, resulting in polymers with control over dispersity, end functionalities, and tacticity with respect to distinct molecular size. When the functionalized MOFs are applied, due to the strengthened compartmentalization effect, accommodated polymers with molecular weights up to 392,000 can be achieved. Moreover, a significant improvement in end-group fidelity and stereocontrol can be observed. The results highlight that the combination of MOFs and ATRP is a promising and universal methodology to synthesize versatile, well-defined polymers with high molecular weight, an increased isotactic triad fraction, and preserved chain-end functionality.
More than being a host only, MOFs can act as heterogeneous catalysts for metal-catalyzed polymerizations. A Cu(II)-based MOF, Cu2(bdc)2(dabco), is demonstrated as a heterogeneous, universal catalyst for both thermally and visible-light-triggered ARGET ATRP with an expanded monomer range. The accessible catalytic metal sites enable the Cu(II) MOF to polymerize various monomers, including benzyl methacrylate (BzMA), styrene, methyl methacrylate (MMA), and 2-(dimethylamino)ethyl methacrylate (DMAEMA), in the fashion of ARGET ATRP. Furthermore, due to its robust framework, and surpassing conventional homogeneous catalysts, the Cu(II) MOF can tolerate strongly coordinating monomers and polymerize challenging monomers (i.e., 4-vinyl pyridine, 2-vinyl pyridine, and isoprene) in a well-controlled fashion. The synthetic procedure can therefore be significantly simplified, and catalyst-induced chelation can be avoided as well. Like other heterogeneous catalysts, the Cu(II) MOF catalytic complexes can be easily collected by centrifugation and recycled an arbitrary number of times.
The Cu(II) MOF, composed of photostimulable metal sites, is further used to catalyze controlled photopolymerization under visible light, requiring no external photoinitiator, dye sensitizer, or ligand. A simple light trigger allows the photoreduction of Cu(II) to the active Cu(I) state, enabling controlled polymerization in the form of ARGET ATRP. Beyond the polymerization application, a synergetic effect between the MOF framework and incorporated nucleophilic monomers/molecules is also observed, where the formation of associating complexes is able to adjust the photochemical and electrochemical properties of the Cu(II) MOF, altering its band gap and light-harvesting behavior. Owing to the tunable photoabsorption resulting from the coordinating guests, photoinduced reversible-deactivation radical polymerization (PRDRP) can be achieved to further simplify and accelerate the polymerization.
Beyond the adjustable photoabsorption ability, the synergistic strategy of combining controlled/living polymerization techniques with crystalline MOFs is evidenced once more by MOF-based heterogeneous catalysts with enhanced dispersibility in solution. By introducing hollow pollen pivots with a surface-immobilized, environment-responsive polymer, PDMAEMA, highly dispersed MOF nanocrystals can be prepared after association on the polymer brushes via the intrinsic amine functionality of each DMAEMA monomer. Intriguingly, the pollen-PDMAEMA composite can serve as a “smart” anchor to trap nanoMOF particles with improved dispersibility and thus significantly enhance liquid-phase photocatalytic performance. Furthermore, the catalytic activity can be switched on and off via the stimulable coil-to-globule transition of the PDMAEMA chains, which exposes or buries the MOF catalytic sites, respectively.
Eta Carinae
(2018)
The exceptional binary star Eta Carinae has fascinated scientists and people in the Southern hemisphere alike for hundreds of years. It survived an enormous outbreak, comparable to a supernova in terms of energy, and for a short period became the brightest star of the night sky. From observations ranging from the radio regime to X-rays, the system’s characteristics and its emission at photon energies up to ~50 keV are well studied today. The binary is composed of two massive stars of ~30 and ~100 solar masses. Each star drives a strong stellar wind that continuously carries away a fraction of its mass. The collision of these winds leads to a shock on each side of the encounter. In the wind-wind collision region, plasma is heated when it is overrun by the shocks. Part of the emission seen in X-rays can be attributed to this plasma. Above ~50 keV the emission is no longer of thermal origin: the required plasma temperature would exceed the available mechanical energy input of the stellar winds. In contrast to its observational history at thermal energies, observational evidence of Eta Carinae’s non-thermal emission has only recently accumulated. In high-energy gamma-rays, Eta Carinae is the only binary of its kind that has been detected unambiguously. Its energy spectrum reaches up to ~ one hundred GeV, a regime where satellite-based gamma-ray experiments run out of statistics. Ground-based gamma-ray experiments have the advantage of large photon collection areas. H.E.S.S. is the only gamma-ray experiment located in the Southern hemisphere and thus able to observe Eta Carinae in this energy range. H.E.S.S. measures gamma-rays via the electromagnetic showers of particles that very-high-energy gamma-rays initiate in the atmosphere. The main challenge in observations of Eta Carinae with H.E.S.S. is the UV emission of the Carina nebula, which leads to a background up to 10 times stronger than usual for H.E.S.S.
This thesis presents the first detection of a colliding-wind binary in very-high-energy gamma-rays and documents the studies that led to it. The differential gamma-ray energy spectrum of Eta Carinae is measured up to 700 GeV. Hadronic and leptonic origins of the gamma-ray emission are discussed, and based on a comparison of cooling times, a hadronic scenario is favoured.
Photocatalysis is considered significant in this new energy era, because the inexhaustibly abundant, clean, and safe energy of the sun can be harnessed for the sustainable, non-hazardous, and economical development of our society. In photocatalysis research, the current focus lies on the design and modification of photocatalysts.
As one of the most promising photocatalysts, g-C3N4 has gained considerable attention for its eye-catching properties. It has been extensively explored in photocatalytic applications, such as water splitting, organic pollutant degradation, and CO2 reduction. Even so, it also has drawbacks that inhibit its further application. Inspired by that, this thesis mainly presents and discusses the preparation of some novel photocatalysts and their photocatalytic performance. These materials were all synthesized via alterations of the classic g-C3N4 preparation method, such as using different pre-compositions for the initial supramolecular complex or post-modification with functional groups. Taking the place of cyanuric acid, 2,5-dihydroxy-1,4-benzoquinone and chloranilic acid can form completely new supramolecular complexes with melamine. After heating, the resulting products of the two complexes show 2D sheet-like and 1D fiber-like morphologies, respectively, which are maintained even at temperatures as high as 800 °C. These materials span crystals, polymers, and N-doped carbons as the synthesis temperature increases. Based on their different pre-compositions, they show different dye degradation performances. CLA-M-250 shows the highest photocatalytic activity and a strong oxidation capacity: not only great photo-performance in RhB degradation, but also oxygen production in water splitting. In the post-modification approach, a novel photocatalytic solution is proposed to modify the carbon nitride scaffold with cyano groups, whose content can be well controlled by the input of sodium thiocyanate. The cyanation modification leads to a narrowed band gap as well as improved separation of photo-induced charges.
Cyano-group-grafted carbon nitride thus shows dramatically enhanced performance in the photocatalytic coupling reaction between styrene and sodium benzenesulfinate under green-light irradiation, in stark contrast to the inactivity of pristine g-C3N4.
How can interactive devices connect with users in the most immediate and intimate way? This question has driven interactive computing for decades. Throughout the last decades, we witnessed how mobile devices moved computing into users’ pockets, and recently, wearables put computing in constant physical contact with the user’s skin. In both cases, moving the devices closer to users allowed devices to sense more of the user and thus act more personally. The main question that drives our research is: what is the next logical step?
Some researchers argue that the next generation of interactive devices will move past the user’s skin and be directly implanted inside the user’s body. This has already begun to happen: we have pacemakers, insulin pumps, etc. However, we argue that what we see is not devices moving towards the inside of the user’s body, but rather towards the body’s biological “interface” that they need to address in order to perform their function.
To implement our vision, we created a set of devices that intentionally borrow parts of the user’s body for input and output, rather than adding more technology to the body.
In this dissertation we present one specific flavor of such devices, i.e., devices that borrow the user’s muscles. We engineered I/O devices that interact with the user by reading and controlling muscle activity. To achieve the latter, our devices are based on medical-grade signal generators and electrodes attached to the user’s skin that send electrical impulses to the user’s muscles; these impulses then cause the user’s muscles to contract.
While electrical muscle stimulation (EMS) devices have been used to regenerate lost motor functions in rehabilitation medicine since the 1960s, in this dissertation, we propose a new perspective: EMS as a means for creating interactive systems.
We start by presenting seven prototypes of interactive devices that we have created to illustrate several benefits of EMS. These devices form two main categories: (1) Devices that allow users eyes-free access to information by means of their proprioceptive sense, such as the value of a variable in a computer system, a tool, or a plot; (2) Devices that increase immersion in virtual reality by simulating large forces, such as wind, physical impact, or walls and heavy objects.
Then, we analyze the potential of EMS to build interactive systems that miniaturize well and discuss how they leverage our proprioceptive sense as an I/O modality. We proceed by laying out the benefits and disadvantages of both EMS and mechanical haptic devices, such as exoskeletons.
We conclude by sketching an outline for future research on EMS by listing open technical, ethical and philosophical questions that we left unanswered.
Studies of educational success in Germany point to several dimensions of inequality. A close relationship between social origin and school success has been documented repeatedly. Gender differences in educational success are likewise a frequently reported finding, discussed both scientifically and publicly. However, the large number of studies devoted to one of these dimensions of inequality contrasts with a need for systematic knowledge about the interaction of gender and social origin in educational success. Against this background, the present thesis aims to investigate the interplay of gender and social origin, guided by two overarching questions examined in four sub-studies. First, the interplay of gender and socioeconomic status (SES) was analyzed for different facets of educational success as well as for occupational aspirations (sub-studies 1-3). Second, it was examined to what extent parental gender-role attitudes are associated with their child's school performance. In this context, the relationship between parental gender-role attitudes and characteristics of the family background was also analyzed (sub-study 4). Taken together, the results of the sub-studies point to an interaction of gender and social origin in educational success and in occupational aspirations, even though the corresponding effects are rather small. Contrary to the social connotation of mathematics as a "boys' subject", the findings thus indicate, for example, that the frequently cited gender differences in mathematical competencies should not be understood as "natural" but as malleable.
The results thereby underpin the importance of the socialization context for the development of boys' and girls' abilities and goals, as emphasized in various theories, as well as the variability of gender differences in school performance shown in international comparisons.
In this thesis, we treat the extreme Newman-Penrose components of both the Maxwell field (s=±1) and the linearized gravitational perturbations ("linearized gravity" for short) (s=±2) in the exterior of a slowly rotating Kerr black hole. Upon different rescalings, we obtain spin s components which satisfy the separable Teukolsky master equation (TME). For each of these spin s components defined in the Kinnersley tetrad, the equations obtained by applying a certain first-order differential operator once and twice (twice only for s=±2), together with the TME itself, take the form of an "inhomogeneous spin-weighted wave equation" (ISWWE) with different potentials and constitute a linear spin-weighted wave system. We then prove energy and integrated local energy decay (Morawetz) estimates for this type of ISWWE, and utilize them to achieve both a uniform bound on a positive definite energy and a Morawetz estimate for the regular extreme Newman-Penrose components defined in the regular Hawking-Hartle tetrad.
We also present some brief discussions of mode stability for the TME in the case of real frequencies. This says that in a fixed subextremal Kerr spacetime, there are no nontrivial separated mode solutions to the TME which are purely ingoing at the horizon and purely outgoing at infinity. This yields a representation formula for solutions to the inhomogeneous Teukolsky equations, and will play a crucial role in generalizing the above energy and Morawetz estimates to the full subextremal Kerr case.
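For reference, the separable Teukolsky master equation satisfied by a spin-weighted field $\psi$ of spin weight $s$, written in Boyer-Lindquist coordinates for the vacuum Kerr case (mass $M$, specific angular momentum $a$, $\Delta = r^2 - 2Mr + a^2$), takes the standard form given by Teukolsky:

```latex
\begin{aligned}
&\left[\frac{(r^2+a^2)^2}{\Delta}-a^2\sin^2\theta\right]\partial_t^2\psi
+\frac{4Mar}{\Delta}\,\partial_t\partial_\phi\psi
+\left[\frac{a^2}{\Delta}-\frac{1}{\sin^2\theta}\right]\partial_\phi^2\psi\\
&\quad-\Delta^{-s}\,\partial_r\!\left(\Delta^{s+1}\partial_r\psi\right)
-\frac{1}{\sin\theta}\,\partial_\theta\!\left(\sin\theta\,\partial_\theta\psi\right)
-2s\left[\frac{a(r-M)}{\Delta}+\frac{i\cos\theta}{\sin^2\theta}\right]\partial_\phi\psi\\
&\quad-2s\left[\frac{M(r^2-a^2)}{\Delta}-r-ia\cos\theta\right]\partial_t\psi
+\left(s^2\cot^2\theta-s\right)\psi=0 .
\end{aligned}
```

Its separability in $t$, $r$, $\theta$, $\phi$ is what makes the separated mode analysis mentioned above possible.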
The majority of stroke patients suffer from impaired gait. Treating the consequences of stroke is one of the most frequent indications for neurological rehabilitation, the focus being on restoring sensorimotor functions, in particular the ability to walk, and on social participation.
In Germany, gait rehabilitation after stroke often relies on neurophysiological gait training according to Bobath (NGB), whose effectiveness is, however, viewed critically. Treatment guidelines primarily recommend treadmill training (LT), for which there is evidence of improvements in walking speed and walking endurance. Comparable evidence for stroke patients also exists for rhythmic auditory stimulation (RAS), i.e. overground gait training with acoustic stimulation.
The aim of the present study was to clarify whether RAS improves the effectiveness of LT. The effects of four weeks of music-supported treadmill training on the gait rehabilitation of stroke patients were examined.
For the combination therapy of RAS with treadmill training (RAS-LT), special training music was developed. It was adapted to the patient's individual treadmill cadence and systematically increased in coordination with the belt speed. The study examined whether RAS-LT leads to greater improvements in walking ability in stroke patients than the standard therapies NGB and LT. To this end, a clinical evaluation was conducted in a prospective, randomized, controlled parallel-group design with 45 stroke patients. Forty-five patients with hemiparesis of the lower extremity or an unstable and asymmetric gait were enrolled in the acute phase after stroke. Ten patients discontinued the study during the intervention phase, one of them because of an adverse effect of LT.
The test battery comprised measures of walking function such as the fast gait speed test, the 3-minute walking time test, and instrumented gait analysis with the Bessou locometer, as well as static posturography and kinematic 2D gait analysis on the treadmill. The latter method, extending the existing literature, was designed and employed in this form for the first time for this research question and this patient population. It allowed a differentiated, side-specific assessment of movement quality.
The primary endpoints of the study were the longitudinal gait parameters cadence, walking speed, and stride length. Secondary endpoints were step symmetry, walking endurance, static balance, and the movement quality of gait.
Pre-post effects were calculated for the entire sample and for each group using t-tests, or the Wilcoxon signed-rank test when normality was not given. To determine differences in effect between the three interventions, an analysis of covariance was conducted with two covariates: (1) the respective pre-intervention parameter and (2) the time between the acute event and study entry. For some measures the preconditions of the analysis of covariance were not met, so a Kruskal-Wallis H test was conducted instead. The significance level was set to p < 0.05, and to p < 0.016 for group-specific pre-post effects. Effect sizes were calculated as Cohen's d.
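To illustrate the effect-size measure used here, the following is a minimal sketch of Cohen's d for two independent groups with a pooled standard deviation; the group labels and sample values are purely hypothetical, not data from the study:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # unbiased sample variances
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# hypothetical post-intervention cadence values (steps/min)
ras_lt = [108, 112, 110, 115, 109]
control = [101, 104, 99, 103, 102]
d = cohens_d(ras_lt, control)
```

By convention, |d| around 0.2, 0.5, and 0.8 is read as a small, medium, and large effect, respectively.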
Data sets of 35 patients (RAS-LT: N = 11, LT: N = 13, NGB: N = 11), with a mean age of 63.6 ± 8.6 years and 42.1 ± 23.7 days between the acute event and the start of the study, were analyzed. In the statistical analysis, the follow-up examination showed greater improvements with RAS-LT in cadence (F(2,34) = 7.656, p = 0.002; partial η2 = 0.338), with the group contrasts also showing significant differences in favor of RAS-LT, and a trend toward greater improvement in walking speed (F(2,34) = 3.864, p = 0.032; partial η2 = 0.205). The results for step symmetry and movement quality likewise pointed to a superiority of the new RAS-LT approach, although no statistical significance was reached in the group comparisons. Stride length, walking endurance, and static balance showed no specific effects of RAS-LT.
The study provides the first indications of a clinical superiority of RAS-LT over the standard therapies. Further development and investigation of this innovative therapeutic approach may contribute to improved gait rehabilitation of stroke patients in the future.
To reach its climate targets, the European Union has to implement a major sustainability transition in the coming decades. While the socio-technical change required for this transition is well discussed in the academic literature, the economics that go along with it are often reduced to a cost-benefit perspective on climate policy measures. By investigating climate change mitigation as a coordination problem, this thesis offers a novel perspective: it integrates the economic and the socio-technical dimensions and thus allows a better understanding of the opportunities of a sustainability transition in Europe.
First, a game theoretic framework is developed to illustrate coordination on green or brown investment from an agent perspective. A model based on the coordination game "stag hunt" is used to discuss the influence of narratives and signals for green investment as a means to coordinate expectations towards green growth. Public and private green investment impulses – triggered by credible climate policy measures and targets – serve as an example for a green growth perspective for Europe in line with a sustainability transition. This perspective also embodies a critical view on classical analyses of climate policy measures.
Secondly, this analysis is enriched with empirical results derived from stakeholder involvement. In interviews and with a survey among European insurance companies, coordination mechanisms such as market and policy signals are identified and evaluated by their impact on investment strategies for green infrastructure. The latter, here defined as renewable energy, electricity distribution and transmission as well as energy efficiency improvements, is considered a central element of the transition to a low-carbon society.
Thirdly, this thesis identifies and analyzes major criticisms raised towards stakeholder involvement in sustainability science. On a conceptual level, different ways of conducting such qualitative research are classified. This conceptualization is then evaluated by scientists, thereby generating empirical evidence on ideals and practices of stakeholder involvement in sustainability science.
Through the combination of theoretical and empirical research on coordination problems, this thesis offers several contributions: On the one hand, it outlines an approach for assessing the economic opportunities of sustainability transitions. This is helpful for policy makers in Europe who are striving to implement climate policy measures addressing the targets of the Paris Agreement and to encourage a shift of investments towards green infrastructure. On the other hand, this thesis helps stabilize the theoretical foundations of sustainability science and can therefore aid researchers who involve stakeholders when studying sustainability transitions.
Earth's climate varies continuously across space and time, but humankind has witnessed only a small snapshot of its entire history, and instrumentally documented it for a mere 200 years. Our knowledge of past climate changes is therefore almost exclusively based on indirect proxy data, i.e. on indicators which are sensitive to changes in climatic variables and stored in environmental archives. Extracting the data from these archives allows retrieval of the information from earlier times. Obtaining accurate proxy information is a key means to test model predictions of the past climate, and only after such validation can the models be used to reliably forecast future changes in our warming world. The polar ice sheets of Greenland and Antarctica are one major climate archive, which record information about local air temperatures by means of the isotopic composition of the water molecules embedded in the ice. However, this temperature proxy is, as any indirect climate data, not a perfect recorder of past climatic variations. Apart from local air temperatures, a multitude of other processes affect the mean and variability of the isotopic data, which hinders their direct interpretation in terms of climate variations. This applies especially to regions with little annual accumulation of snow, such as the Antarctic Plateau. While these areas in principle allow for the extraction of isotope records reaching far back in time, a strong corruption of the temperature signal originally encoded in the isotopic data of the snow is expected. This dissertation uses observational isotope data from Antarctica, focussing especially on the East Antarctic low-accumulation area around the Kohnen Station ice-core drilling site, together with statistical and physical methods, to improve our understanding of the spatial and temporal isotope variability across different scales, and thus to enhance the applicability of the proxy for estimating past temperature variability. 
The presented results lead to a quantitative explanation of the local-scale (1–500 m) spatial variability in the form of a statistical noise model, and reveal the main source of the temporal variability to be the mixture of a climatic seasonal cycle in temperature and the effect of diffusional smoothing acting on temporally uncorrelated noise. These findings put significant limits on the representativity of single isotope records in terms of local air temperature, and impact the interpretation of apparent cyclicalities in the records. Furthermore, to extend the analyses to larger scales, the timescale-dependency of observed Holocene isotope variability is studied. This offers a deeper understanding of the nature of the variations, and is crucial for unravelling the embedded true temperature variability over a wide range of timescales.
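The role of diffusional smoothing acting on uncorrelated noise can be illustrated with a minimal numerical sketch (the kernel width and series length below are illustrative assumptions, not the thesis' actual model): convolving white noise with a Gaussian kernel strongly reduces its variance and introduces serial correlation, mimicking how diffusion in firn damps high-frequency isotope variations.

```python
import math, random

def gaussian_kernel(sigma, radius):
    """Discrete Gaussian smoothing kernel, normalized to sum 1."""
    w = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(w)
    return [x / s for x in w]

def smooth(series, kernel):
    """Convolve a series with a kernel (edges truncated and renormalized)."""
    r = (len(kernel) - 1) // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - r), min(len(series), i + r + 1)
        ws = kernel[r - (i - lo): r + (hi - i)]
        norm = sum(ws)
        out.append(sum(w * x for w, x in zip(ws, series[lo:hi])) / norm)
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(5000)]  # temporally uncorrelated "isotope noise"
smoothed = smooth(noise, gaussian_kernel(sigma=3.0, radius=9))
# diffusion-like smoothing leaves only a small fraction of the original variance
```

The same mechanism implies that variance estimated from a single smoothed record underestimates the variability of the original signal, which is one reason why the representativity of single isotope records is limited.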
Plant-derived Transcription Factors for Orthologous Regulation of Gene Expression in the Yeast Saccharomyces cerevisiae
Control of gene expression by transcription factors (TFs) is central to many synthetic biology projects, where tailored expression of one or multiple genes is often needed. As TFs from evolutionarily distant organisms are unlikely to affect gene expression in a host of choice, they represent excellent candidates for establishing orthogonal control systems. To establish orthogonal regulators for use in yeast (Saccharomyces cerevisiae), we chose TFs from the plant Arabidopsis thaliana. We established a library of 106 different combinations of chromosomally integrated TFs, activation domains (yeast GAL4 AD, herpes simplex virus VP64, and plant EDLL) and synthetic promoters harbouring cognate cis-regulatory motifs driving a yEGFP reporter. Transcriptional output of the different driver / reporter combinations varied over a wide spectrum, with EDLL being a considerably stronger transcription activation domain in yeast than the GAL4 activation domain, in particular when fused to Arabidopsis NAC TFs. Notably, the strength of several NAC-EDLL fusions exceeded that of the strong yeast TDH3 promoter by 6- to 10-fold. We furthermore show that plant TFs can be used to build regulatory systems encoded by centromeric or episomal plasmids. Our library of TF / DNA-binding site combinations offers an excellent tool for diverse synthetic biology applications in yeast.
COMPASS: Rapid combinatorial optimization of biochemical pathways based on artificial transcription factors
We established a high-throughput cloning method, called COMPASS (COMbinatorial Pathway ASSembly), for the balanced expression of multiple genes in Saccharomyces cerevisiae. COMPASS employs orthogonal, plant-derived artificial transcription factors (ATFs) for controlling the expression of pathway genes, and homologous recombination-based cloning for the generation of thousands of individual DNA constructs in parallel. The method relies on a positive selection of correctly assembled pathway variants from both in vivo and in vitro cloning procedures. To decrease the turnaround time in genomic engineering, we equipped COMPASS with multi-locus CRISPR/Cas9-mediated modification capacity. In its current realization, COMPASS allows combinatorial optimization of up to ten pathway genes, each transcriptionally controlled by nine different ATFs spanning a 10-fold difference in expression strength. The application of COMPASS was demonstrated by generating cell libraries producing beta-carotene and co-producing beta-ionone and biosensor-responsive naringenin. COMPASS will have many applications in other synthetic biology projects that require gene expression balancing.
CaPRedit: Genome editing using CRISPR-Cas9 and plant-derived transcriptional regulators for the redirection of flux through the FPP branch point in yeast. Technologies developed over the past decade have made Saccharomyces cerevisiae a promising platform for the production of various natural products. We developed a CRISPR/Cas9- and plant-derived-regulator-mediated genome editing approach (CaPRedit) to greatly accelerate strain modification and to facilitate very low to very high expression of key enzymes using inducible regulators. CaPRedit can be implemented to enhance the production of endogenous or heterologous metabolites in the yeast S. cerevisiae. The CaPRedit system aims to facilitate the modification of multiple targets within a complex metabolic pathway by providing new tools for increasing the expression of genes encoding rate-limiting enzymes, decreasing the expression of essential genes, and removing the expression of competing pathways. The approach is based on CRISPR/Cas9-mediated one-step double-strand breaks to integrate modules containing IPTG-inducible plant-derived artificial transcription factor and promoter pair(s) in a desired locus or loci. Here, we used CaPRedit to redirect the endogenous metabolic flux of yeast toward production of farnesyl diphosphate (FPP), a central precursor of nearly all yeast isoprenoid products, by overexpressing the enzymes that lead to FPP production. We found significantly higher beta-carotene accumulation in the CaPRedit-modified strain than in the wild-type (WT) strain. More specifically, a CaPRedit_FPP 1.0 strain was generated in which three genes involved in FPP synthesis, tHMG1, ERG20, and GDH2, were inducibly overexpressed under the control of strong plant-derived ATFs.
Beta-carotene accumulated in the CaPRedit_FPP 1.0 strain to a level 1.3-fold higher than in a previously reported optimized strain that carries the same overexpressed genes (as well as additional genetic modifications to redirect the endogenous yeast metabolism toward FPP production). Furthermore, the genetic modifications implemented in the CaPRedit_FPP 1.0 strain resulted in only a very small growth defect (growth rate relative to the WT reduced by ~0.03).
Thermoresponsive block copolymers of presumably highly biocompatible character exhibiting upper critical solution temperature (UCST) type phase behavior were developed. In particular, these polymers were designed to exhibit UCST-type cloud points (Tcp) in physiological saline solution (9 g/L) within the physiologically interesting window of 30-50°C. Further, their use as carriers for controlled release purposes was explored. Polyzwitterion-based block copolymers were synthesized by atom transfer radical polymerization (ATRP) via a macroinitiator approach with varied molar masses and co-monomer contents. These block copolymers can self-assemble in the amphiphilic state to form micelles when the thermoresponsive block experiences a coil-to-globule transition upon cooling. Poly(ethylene glycol) methyl ether (mPEG) was used as the permanently hydrophilic block to stabilize the colloids formed, and polyzwitterions as the thermoresponsive block to promote the temperature-triggered assembly-disassembly of the micellar aggregates at low temperature.
Three zwitterionic monomers were used for this study, namely 3-((2-(methacryloyloxy)ethyl)dimethylammonio)propane-1-sulfonate (SPE), 4-((2-(methacryloyloxy)ethyl)dimethylammonio)butane-1-sulfonate (SBE), and 3-((2-(methacryloyloxy)ethyl)dimethylammonio)propane-1-sulfate (ZPE). Their (co)polymers were characterized with respect to their molecular structure by proton nuclear magnetic resonance (1H-NMR) and gel permeation chromatography (GPC). Their phase behaviors in pure water as well as in physiological saline were studied by turbidimetry and dynamic light scattering (DLS). These (co)polymers are thermoresponsive with UCST-type phase behavior in aqueous solution. Their phase transition temperatures depend strongly on the molar masses and on the incorporation of co-monomers: phase transition temperatures increased with increasing molar mass and content of poorly water-soluble co-monomer. In addition, the presence of salt influenced the phase transition dramatically: the phase transition temperature decreased with increasing salt content in the solution. While the PSPE homopolymers show a phase transition only in pure water, the PZPE homopolymers exhibit a phase transition only at high salinity, as in physiological saline. Although both polyzwitterions have similar chemical structures that differ only in the anionic group (sulfonate group in SPE and sulfate group in ZPE), their water solubility is very different. Therefore, the phase transition temperatures of the targeted block copolymers were modulated by using a statistical copolymer of SPE and ZPE as the thermoresponsive block and varying the ratio of SPE to ZPE. Indeed, the statistical copolymers of P(SPE-co-ZPE) show phase transitions both in pure water and in physiological saline. Surprisingly, it was found that the mPEG-b-PSBE block copolymer can display "schizophrenic" behavior in pure water, with the UCST-type cloud point occurring at a lower temperature than the LCST-type one.
The block copolymer that best satisfied the boundary conditions is mPEG114-b-P(SPE43-co-ZPE39), with a cloud point of 45°C in physiological saline. It was therefore chosen for solubilization studies of several solvatochromic dyes as models of active agents, using the thermoresponsive block copolymer as a "smart" carrier. The uptake and release of the dyes were explored by UV-Vis and fluorescence spectroscopy, following the shift of the wavelength of the absorbance or emission maxima at low and high temperature, which are representative of the loaded and released state, respectively. However, no UCST-transition-triggered uptake and release of these dyes could be observed. Possibly, the poor affinity of the polybetaines to the dyes in aqueous environments is related to the widely reported antifouling properties of zwitterionic polymers.
Answer Set Programming (ASP) is a declarative problem solving approach, combining a rich yet simple modeling language with high-performance solving capabilities. Although this has already resulted in various applications, certain aspects of such applications are more naturally modeled using variables over finite domains, for accounting for resources, fine timings, coordinates, or functions. Our goal is thus to extend ASP with constraints over integers while preserving its declarative nature. This allows for fast prototyping and elaboration tolerant problem descriptions of resource related applications. The resulting paradigm is called Constraint Answer Set Programming (CASP).
We present three different approaches for solving CASP problems. The first one, a lazy, modular approach, combines an ASP solver with an external system for handling constraints. This approach has the advantage that two state-of-the-art technologies work hand in hand to solve the problem, each concentrating on its part. The drawback is that inter-constraint dependencies cannot be communicated back to the ASP solver, impeding its learning algorithm. The second approach translates all constraints to ASP. Using the appropriate encoding techniques, this results in a very fast, monolithic system. Unfortunately, due to the large, explicit representation of constraints and variables, translation techniques are restricted to small and mid-sized domains. The third approach merges the lazy and the translational approach, combining the strengths of both while removing their weaknesses. To this end, we enhance the dedicated learning techniques of an ASP solver with the inferences of the translating approach in a lazy way. That is, the important knowledge is only made explicit when needed.
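The translational approach can be illustrated with the common order encoding, in which an integer variable x ∈ {0..n} is represented by Boolean atoms p_k ≡ (x ≤ k) together with consistency clauses p_k → p_{k+1}. The following Python sketch is only a brute-force illustration of the idea, not the actual implementation used by any ASP solver:

```python
from itertools import product

def order_encoding_models(n, upper_bound):
    """Enumerate the values x in {0..n} whose order encodings
    p[k] = (x <= k), k = 0..n-1, are consistent and additionally
    satisfy the constraint x <= upper_bound."""
    models = []
    for bits in product([False, True], repeat=n):
        # consistency: once x <= k holds, x <= k+1 must hold as well
        if any(bits[k] and not bits[k + 1] for k in range(n - 1)):
            continue
        # decode x: the smallest k with p[k] true, else x = n
        x = next((k for k, b in enumerate(bits) if b), n)
        if x <= upper_bound:
            models.append(x)
    return sorted(models)

# x in {0..5} with the constraint x <= 3 leaves exactly the values 0..3
vals = order_encoding_models(5, 3)
```

A bound like x ≤ 3 becomes the single literal p_3 under this encoding, which is why order-type constraints translate compactly; the number of Boolean atoms still grows linearly with the domain size, which is the reason translation is restricted to small and mid-sized domains.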
By using state-of-the-art techniques from neighboring fields, we provide ways to tackle real-world, industrial-size problems. By extending CASP to reactive solving, we open up new application areas such as online planning with continuous domains and durations.
Holocene climate variability is generally characterized by lower-frequency changes compared to the last glaciation, including the Lateglacial. However, there is vast evidence for decadal- to centennial-scale oscillations and millennial-scale climate trends, which lie within and beyond a human lifetime's perception, respectively. Within the Baltic realm, a transitional zone between oceanic and continental climate influence, the impact of Holocene and Lateglacial climate and environmental change is currently only partly understood. This is mainly attributed to the scarcity of well-dated, high-resolution sediment records and to the lacking continuity of already investigated archives.
The aim of this doctoral thesis is to reconstruct Holocene and Late Glacial climate variability on local to (over)regional scales based on varved (annually laminated) sediments from Lake Czechowskie down to annual resolution. This project was carried out within the Virtual Institute for Integrated Climate and Landscape Evolution Analyses (ICLEA) and funded by the Helmholtz Association and the Helmholtz Climate Initiative REKLIM (Regional Climate Change). ICLEA intended to gain a better understanding of climate variability and landscape evolution processes in the Northern Central European lowlands since the last deglaciation. REKLIM Topic 8 “Abrupt climate change derived from proxy data” aims at identifying spatiotemporal patterns of climate variability between e.g. higher and lower latitudes. The main aim of this thesis was (i) to establish a robust chronology based on a multiple dating approach for Lake Czechowskie covering the Late Glacial and Holocene and for the Trzechowskie palaeolake for the Lateglacial, respectively, (ii) to reconstruct past climatic and environmental conditions on centennial to multi-millennial time scales and (iii) to distinguish between local to regional different sediments responses to climate change.
Addressing the first aim, the Lake Czechowskie chronology has been established by a multiple dating approach comprising information from varve counting, tephrochronology, AMS 14C dating of terrestrial plant remains, biostratigraphy, and 137Cs activity concentration measurements. These independent age constraints cover the Lateglacial and the entire Holocene and have been integrated into a Bayesian age model using OxCal v.4.2. Thus, robust chronological information is available for absolute age determination even within non-varved sediment intervals. The identification of five cryptotephras, of which three are used as unambiguous isochrones, is furthermore a significant improvement of the Czechowskie chronology and currently unique for the Holocene within Poland. The first finding of coexisting early Holocene Hässeldalen and Askja-S cryptotephras within a varved sequence even allowed differential dating between both volcanic ashes and stimulated the discussion about revising the absolute age of the Askja-S tephra.
The Trzechowskie palaeolake chronology, covering the Lateglacial period (Allerød and Younger Dryas), has likewise been established by a multiple dating approach comprising varve counting, tephrochronology, AMS 14C dating of terrestrial plant remains, and biostratigraphy, and has been implemented in OxCal v.4.2. These age constraints allowed regional correlation with other high-resolution climate archives and the identification of leads and lags of proxy responses at the onset of the Younger Dryas.
The second aim has been accomplished by detailed micro-facies and geochemical analyses of the Czechowskie sediments for the entire Holocene. Micro-facies changes in particular were linked to enhanced productivity at Lake Czechowskie. The most prominent changes have been recorded at 7.3, 6.5, 4.3 and 2.8 varve kyrs BP and are linked to a stepwise increasing influence of Atlantic air masses. Especially the mid-Holocene change, widely reported from palaeohydrological records in low latitudes, has been identified and linked to a large-scale reorganization of atmospheric circulation patterns. Long-term changes of climatic and environmental boundary conditions are thus widely recorded by the Czechowskie sediments. The pronounced response to (multi)millennial-scale changes is further corroborated by the lack of clear sediment responses to early Holocene centennial-scale climate oscillations (e.g. the Preboreal Oscillation).
Decadal-scale changes at Lake Czechowskie during the most recent period (the last 140 years), however, have been investigated in a lake comparison study. To fulfill the third aim of the doctoral thesis, three lakes in close vicinity to each other were investigated in order to better distinguish how local, site-specific parameters may superimpose regional climate-driven changes. All lakes have been unambiguously linked by the Askja AD 1875 cryptotephra and independent varve chronologies. As a result, climate warming has only been recorded by sedimentation changes at the smallest and best-sheltered lake (Głęboczek), whereas the largest lake (Czechowskie) and the shallowest lake (Jelonek) showed attenuated and less clear sediment responses, respectively. The different responses have been linked to morphological lake characteristics (lake size and depth, catchment area). This study highlights the potential of high-resolution lake comparisons for robust proxy-based climate reconstructions.
In summary, the doctoral thesis presents a high-resolution sediment record with an underlying age model, which is the prerequisite for unprecedented age control down to annual resolution. Sediment-proxy-based climate reconstructions demonstrate the importance of the Czechowskie sediments for better understanding climate variability in the southern Baltic realm. Case studies showed a clear response on millennial time scales, while decadal-scale fluctuations are either less well expressed or superimposed by local, site-specific parameters. The identified volcanic ash layers not only serve as unambiguous isochrones but are also key tie lines for local to supra-regional archive synchronization, establishing Lake Czechowskie as a key climate archive.
In the work presented here we discuss a series of results that are all in one way or another connected to the phenomenon of trapping in black hole spacetimes.
First we present a comprehensive review of the Kerr-Newman-Taub-NUT-de-Sitter family of black hole spacetimes and their most important properties. From there we go into a detailed analysis of the behaviour of null geodesics in the exterior region of a sub-extremal Kerr spacetime. We show that most of the well-known fundamental properties of null geodesics can be represented in one plot. In particular, one can see immediately that the ergoregion and trapping are separated in phase space.
We then consider the sets of future/past trapped null geodesics in the exterior region of a sub-extremal Kerr-Newman-Taub-NUT spacetime. We show that from the point of view of any timelike observer outside of such a black hole, trapping can be understood as two smooth sets of spacelike directions on the celestial sphere of the observer. Therefore the topological structure of the trapped set on the celestial sphere of any observer is identical to that in Schwarzschild.
We discuss how this is relevant to the black hole stability problem.
In a further development of these observations we introduce the notion of what it means for the shadows of two observers to be degenerate. We show that, away from the axis of symmetry, no continuous degeneration exists between the shadows of observers at any point in the exterior region of any Kerr-Newman black hole spacetime of unit mass. Therefore, except possibly for discrete changes, an observer can, by measuring the black hole's shadow, determine the angular momentum and the charge of the black hole under observation, as well as the observer's radial position and angle of elevation above the equatorial plane. Furthermore, the observer's relative velocity compared to a standard observer can also be measured. On the other hand, the black hole shadow does not allow for a full parameter resolution in the case of a Kerr-Newman-Taub-NUT black hole, as a continuous degeneration relating specific angular momentum, electric charge, NUT charge and elevation angle exists in this case.
We then use the celestial sphere to show that trapping is a generic feature of any black hole spacetime.
In the last chapter we then prove a generalization of the mode stability result of Whiting (1989) for the Teukolsky equation for the case of real frequencies. The main result of the last chapter states that a separated solution of the Teukolsky equation governing massless test fields on the Kerr spacetime, which is purely outgoing at infinity and purely ingoing at the horizon, must vanish. This has the consequence that, for real frequencies, there are linearly independent fundamental solutions of the radial Teukolsky equation which are purely ingoing at the horizon and purely outgoing at infinity, respectively. This fact yields a representation formula for solutions of the inhomogeneous Teukolsky equation and was recently used by Shlapentokh-Rothman (2015) for the scalar wave equation.
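The separated mode ansatz behind this result can be sketched as follows, in the standard conventions of Teukolsky (1973) for vacuum Kerr (not necessarily the exact notation of the thesis):

```latex
% Mode ansatz for a spin-weight s test field on Kerr
\psi = e^{-i\omega t}\, e^{im\varphi}\, S(\theta)\, R(r),
\qquad \Delta = r^{2} - 2Mr + a^{2}, \quad K = (r^{2}+a^{2})\omega - am .

% Radial Teukolsky equation (standard form)
\Delta^{-s}\frac{d}{dr}\!\left(\Delta^{s+1}\frac{dR}{dr}\right)
+ \left(\frac{K^{2} - 2is\,(r-M)K}{\Delta} + 4is\omega r - \lambda\right) R = 0 .

% Mode stability statement: a solution with real \omega \neq 0 that is
% purely ingoing at the horizon and purely outgoing at infinity must
% vanish identically, so ingoing and outgoing fundamental solutions
% are linearly independent for real frequencies.
```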
This thesis consists of three essays. The first essay ("Labor Market Policy in South-East Europe: From Transformation to EU Integration") discusses the economic and political framework conditions in South-East Europe and the associated developments in the respective labor markets since 1991. The focus is on the influence of unemployment (as a system-independent problem) on the EU integration process in the Yugoslav successor states and Albania.
What influence do qualificational and regional mismatch have on unemployment in Croatia? To answer this question, the second chapter of this thesis ("Unemployment in the Transformation Process: Qualificational and Regional Mismatch in Croatia") examines mismatch both statically, using mismatch indicators, and dynamically, within the framework of the matching function. Using panel data for nine occupational groups and 21 regions between January 2004 and June 2015, this influence is estimated with fixed-effects models.
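The fixed-effects estimation described above can be sketched with a within-transformation on synthetic panel data (a minimal illustration with made-up numbers, not the thesis's data; the matching-function elasticities are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods = 21, 120     # e.g. 21 regions, monthly observations
alpha, beta = 0.6, 0.4           # hypothetical true matching elasticities

# Synthetic panel: log matches = alpha*log U + beta*log V + unit effect + noise
mu = rng.normal(0, 1, n_units)   # unobserved unit fixed effects
logU = rng.normal(8, 0.5, (n_units, n_periods))
logV = rng.normal(6, 0.5, (n_units, n_periods))
logM = alpha * logU + beta * logV + mu[:, None] + rng.normal(0, 0.05, (n_units, n_periods))

# Within transformation: demeaning every variable by its unit mean
# sweeps out the fixed effects mu_i.
def within(x):
    return x - x.mean(axis=1, keepdims=True)

y = within(logM).ravel()
X = np.column_stack([within(logU).ravel(), within(logV).ravel()])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 3))   # approximately [0.6, 0.4]
```

The within estimator recovers the elasticities despite the unobserved regional effects, which is the reason fixed-effects models are used for such panels.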
Does aligning unemployment insurance legislation with EU standards improve labor market outcomes in the countries of South-East Europe? Using panel data for the period 1996–2014, the third essay ("Incomplete Integration: A Difference-in-Differences Analysis of the South-East European Labor Markets") estimates this effect for five South-East European countries (Albania, Croatia, Macedonia, Montenegro and Serbia) within a difference-in-differences model.
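The difference-in-differences logic can be illustrated on simulated two-period data (outcome variable, group sizes and effect size are made-up illustration values, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical setup: 'treated' marks units that aligned their
# unemployment insurance law with EU standards, 'post' the period
# after the reform.
treated = rng.integers(0, 2, n)
post = rng.integers(0, 2, n)
effect = 2.0    # assumed true treatment effect

y = 60 + 3*treated + 1.5*post + effect*treated*post + rng.normal(0, 1, n)

# Difference-in-differences estimator from the four group/period means:
# (treated post - treated pre) - (control post - control pre)
did = ((y[(treated == 1) & (post == 1)].mean() - y[(treated == 1) & (post == 0)].mean())
       - (y[(treated == 0) & (post == 1)].mean() - y[(treated == 0) & (post == 0)].mean()))
print(round(did, 2))   # close to the assumed effect of 2.0
```

The estimator nets out both the permanent group difference and the common time trend, which is exactly what the essay's panel specification does with country and year effects.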
Magnetotellurics (MT) is a geophysical method that is able to image the electrical conductivity structure of the subsurface by recording time series of natural electromagnetic (EM) field variations. During data processing these time series are divided into small segments, and for each segment spectral values are computed which are typically averaged in a statistical manner to obtain MT transfer functions. Unfortunately, the presence of man-made EM noise sources often deteriorates a significant amount of the recorded time series, resulting in disturbed transfer functions. Many advanced processing techniques, e.g. robust statistics, pre-stack data selection or remote reference, have been developed to tackle this problem. The first two techniques reduce the amount of outliers and noise in the data, whereas the latter approach removes noise by using data from another MT station. However, especially in populated regions the data processing remains quite challenging even with these approaches. In this thesis, I present two novel pre-stack data confinement and selection criteria for the detection of outliers and noise-affected data, based on (i) a distance measure of each data segment with regard to the entire sample distribution and (ii) the evaluation of the magnetic polarisation direction of all segments. The first criterion is able to remove data points that scatter around the desired MT distribution and can, under some circumstances, even reject complete data clusters originating from noise sources. The second criterion eliminates data points caused by a strongly polarised magnetic signal. Both criteria have been successfully applied to many stations with different noise contamination, showing that they can significantly improve the transfer function estimation. The novel criteria were used to evaluate an MT data set from the Eastern Karoo Basin in South Africa.
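The two selection criteria can be sketched on synthetic per-segment features (all numbers, including the assumed noise polarisation direction, are made up for illustration; the thesis's implementation operates on actual spectral data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-segment features: horizontal magnetic spectral
# amplitudes (Bx, By) for 200 clean segments plus 20 noisy segments
# forming a separate, strongly polarised cluster.
clean = rng.multivariate_normal([1.0, 0.5], [[0.01, 0.004], [0.004, 0.02]], 200)
noise = rng.multivariate_normal([3.0, 0.3], [[0.05, 0.0], [0.0, 0.05]], 20)
segs = np.vstack([clean, noise])

# Criterion (i): Mahalanobis distance of each segment with regard to
# the entire sample distribution; large distances flag outliers and
# can reject whole noise clusters.
mean = segs.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(segs.T))
d = np.sqrt(np.einsum('ij,jk,ik->i', segs - mean, cov_inv, segs - mean))
keep_dist = d < 2.5

# Criterion (ii): magnetic polarisation direction; segments falling in
# a narrow, strongly polarised band are rejected (the band centre of
# ~5.7 degrees is an assumed, known noise direction here).
angles = np.degrees(np.arctan2(segs[:, 1], segs[:, 0]))
keep_pol = np.abs(angles - 5.7) > 5.0

keep = keep_dist & keep_pol
print(keep.sum(), "of", len(segs), "segments retained")
```

Only segments passing both criteria enter the subsequent stacking, which mirrors the pre-stack selection idea described above.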
The corresponding field experiment is part of an extensive research programme to collect information on, for example, the current geological setting in this region prior to a potential shale gas exploitation. The aim was to investigate whether a three-dimensional (3D) inversion of the newly measured data fosters a more realistic mapping of the physical properties of the target horizon. For this purpose, a comprehensive 3D model was derived using all available data. In a second step, I analysed parameters of the target horizon, e.g. its conductivity, that are proxies for physical properties such as thermal maturity and porosity.
Physical computing covers the design and realization of interactive objects and installations and allows learners to develop concrete, tangible products of the real world that arise from their imagination. This can be used in computer science education to provide learners with interesting and motivating access to the different topic areas of the subject in constructionist and creative learning environments. However, physical computing has so far mostly been taught, if at all, in afternoon clubs or other extracurricular settings. Thus, the majority of students so far have had no opportunity to design and create their own interactive objects in regular school lessons.
Despite its increasing popularity also for schools, the topic has not yet been clearly and sufficiently characterized in the context of computer science education. The aim of this doctoral thesis therefore is to clarify physical computing from the perspective of computer science education and to adequately prepare the topic both content-wise and methodologically for secondary school teaching. For this purpose, teaching examples, activities, materials and guidelines for classroom use are developed, implemented and evaluated in schools.
In the theoretical part of the thesis, the topic is first examined from a technical point of view. A structured literature analysis shows that basic concepts used in physical computing can be derived from embedded systems, which are the core of a large field of different application areas and disciplines. Typical methods of physical computing in professional settings are analyzed and, from an educational perspective, elements suitable for computer science teaching in secondary schools are extracted, e.g. tinkering and prototyping. The investigation and classification of suitable tools for school teaching show that microcontrollers and mini computers, often with extensions that greatly facilitate the handling of additional components, are particularly attractive tools for secondary education. Considering the perspectives of science, teachers, students and society, in addition to general design principles, exemplary teaching approaches for school education and suitable learning materials are developed, and the design, production and evaluation of a physical computing construction kit suitable for teaching is described.
In the practical part of this thesis, with “My Interactive Garden”, an exemplary approach to integrating physical computing into computer science teaching is tested and evaluated in different courses and refined in a design-based research approach based on the findings. In a series of workshops on physical computing, based on a concept for constructionist professional development developed specifically for this purpose, teachers are empowered and encouraged to develop and conduct physical computing lessons suitable for their particular classroom settings. Based on their in-class experiences, a process model of physical computing teaching is derived. Interviews with these teachers illustrate that the benefits of physical computing, including the tangibility of crafted objects and creativity in the classroom, outweigh possible drawbacks such as longer preparation times, technical difficulties or difficult assessment. Hurdles in the classroom are identified and possible solutions discussed.
Empirical investigations in the different settings reveal that “My Interactive Garden” and physical computing in general have a positive impact, among others, on learner motivation, fun and interest in class and perceived competencies.
Finally, the results from all evaluations are combined to evaluate the design principles for physical computing teaching and to provide a perspective on the development of decision-making aids for physical computing activities in school education.
Business process automation improves the efficiency with which organizations perform work. To this end, a business process is first documented as a process model, which then serves as a blueprint for a number of process instances representing the execution of specific business cases. In existing business process management systems, process instances run independently of each other. In practice, however, instances are also collected in groups at certain process activities for a combined execution to improve process performance. Currently, this so-called batch processing is executed manually or supported by external software. Only few research proposals exist to explicitly represent and execute batch processing needs in business process models, and these works also lack a comprehensive understanding of the requirements.
This thesis addresses the described issues by providing a basic concept, called batch activity. It allows an explicit representation of batch processing configurations in process models and provides a corresponding execution semantics, thereby easing automation. The batch activity groups different process instances based on their data context and can synchronize their execution over one or even multiple process activities. The concept is conceived based on a requirements analysis considering existing literature on batch processing from different domains as well as industry examples. Further, this thesis provides two extensions: First, a flexible batch configuration concept, based on event processing techniques, is introduced to allow run-time adaptations of batch configurations. Second, a concept for collecting and batching activity instances of multiple different process models is given. Thereby, the batch configuration is centrally defined, independently of the process models, which is especially beneficial for organizations with large process model collections. This thesis provides a technical evaluation as well as a validation of the presented concepts. A prototypical implementation in an existing open-source BPMS shows that, with a few extensions, batch processing is enabled. Further, it demonstrates that the consolidated view of several work items in one user form can improve work efficiency. The validation, in which the batch activity concept is applied to different use cases in a simulated environment, implies cost savings for business processes when a suitable batch configuration is used. For the validation, an extensible business process simulator was developed. It enables process designers to study the influence of a batch activity on a process with regard to its performance.
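The grouping behaviour of a batch activity can be sketched as follows (the class name, the size-based activation rule and the shipment example are illustrative assumptions; the thesis's concept also covers rule- and timeout-based activation and synchronization over multiple activities):

```python
from collections import defaultdict

class BatchActivity:
    """Minimal sketch of a batch activity: enabled instances are grouped
    by a data-context key and executed together once a group reaches
    max_batch_size."""

    def __init__(self, group_by, max_batch_size, execute):
        self.group_by = group_by              # function: instance -> context key
        self.max_batch_size = max_batch_size  # activation threshold
        self.execute = execute                # combined execution callback
        self.clusters = defaultdict(list)

    def enable(self, instance):
        key = self.group_by(instance)
        self.clusters[key].append(instance)
        if len(self.clusters[key]) >= self.max_batch_size:
            batch = self.clusters.pop(key)
            self.execute(key, batch)

# Usage: batch order process instances by destination city.
executed = []
ba = BatchActivity(group_by=lambda i: i["city"],
                   max_batch_size=2,
                   execute=lambda key, batch: executed.append((key, len(batch))))
for order in [{"id": 1, "city": "Potsdam"}, {"id": 2, "city": "Berlin"},
              {"id": 3, "city": "Potsdam"}]:
    ba.enable(order)
print(executed)   # [('Potsdam', 2)]
```

The data-context key is what lets instances of different business cases (here: shipments to the same city) be processed in one combined step.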
In the present work, we use symbolic regression for automated modeling of dynamical systems. Symbolic regression is a powerful and general method suitable for data-driven identification of mathematical expressions. In particular, the structure and parameters of those expressions are identified simultaneously.
We consider two main variants of symbolic regression: sparse regression-based and genetic programming-based symbolic regression. Both are applied to identification, prediction and control of dynamical systems.
We introduce a new methodology for the data-driven identification of nonlinear dynamics for systems undergoing abrupt changes. Building on a sparse regression algorithm derived earlier, the model after the change is defined as a minimum update with respect to a reference model of the system identified prior to the change. The technique is successfully exemplified on the chaotic Lorenz system and the van der Pol oscillator. Issues such as computational complexity, robustness against noise and requirements with respect to data volume are investigated.
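The sparse-regression step can be illustrated with sequentially thresholded least squares, the core of the SINDy-type algorithms this methodology builds on (synthetic one-dimensional system and a polynomial library; the "minimum update" extension for abrupt changes is not shown):

```python
import numpy as np

rng = np.random.default_rng(3)

# Samples of the state and its derivative for dx/dt = 0.5*x - 0.1*x**3
# (a hypothetical toy system, not one from the thesis)
x = rng.uniform(-3, 3, 400)
dx = 0.5 * x - 0.1 * x**3 + rng.normal(0, 0.01, x.size)

# Candidate function library Theta = [1, x, x^2, x^3]
theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

def stlsq(theta, dx, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: fit, zero out small
    coefficients, refit on the remaining support, iterate."""
    xi, *_ = np.linalg.lstsq(theta, dx, rcond=None)
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big], *_ = np.linalg.lstsq(theta[:, big], dx, rcond=None)
    return xi

xi = stlsq(theta, dx)
print(np.round(xi, 2))   # approximately [0, 0.5, 0, -0.1]
```

The thresholding enforces sparsity, so both the structure (which library terms are active) and the parameters of the dynamics are identified simultaneously.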
We show how symbolic regression can be used for time series prediction. Again, issues such as robustness against noise and convergence rate are investigated using the harmonic oscillator as a toy problem. In combination with embedding, we demonstrate the prediction of a propagating front in coupled FitzHugh-Nagumo oscillators. Additionally, we show how we can enhance numerical weather predictions to commercially forecast power production of green energy power plants.
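Time-delay embedding for prediction can be sketched on the harmonic-oscillator toy problem mentioned above (a linear one-step-ahead model stands in here for full symbolic regression, for simplicity):

```python
import numpy as np

# Noise-free samples of a harmonic oscillation
dt, omega = 0.1, 2.0
t = np.arange(0, 20, dt)
x = np.sin(omega * t)

# Time-delay embedding: predict x[n+1] from (x[n], x[n-1]).
# For a sampled sinusoid the exact relation is
# x[n+1] = 2*cos(omega*dt)*x[n] - x[n-1].
X = np.column_stack([x[1:-1], x[:-2]])
y = x[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 4))   # approximately [2*cos(omega*dt), -1]

# One-step-ahead forecast continuing the series
pred = coef @ np.array([x[-1], x[-2]])
print(abs(pred - np.sin(omega * (t[-1] + dt))) < 1e-8)
```

The regression on embedded states recovers the underlying recurrence, which is the same principle the thesis exploits with richer, nonlinear model structures.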
We employ symbolic regression for synchronization control in coupled van der Pol oscillators. Different coupling topologies are investigated. We address issues such as plausibility and stability of the control laws found. The toolkit has been made open source and is used in turbulence control applications.
Genetic programming based symbolic regression is very versatile and can be adapted to many optimization problems. The heuristic-based algorithm allows for cost-efficient optimization of complex tasks.
We emphasize the ability of symbolic regression to yield white-box models. In contrast to black-box models, such models are accessible and interpretable which allows the usage of established tool chains.
Modern gamma-ray telescopes provide the main stream of data for astrophysicists in quest of detecting the sources of gamma rays such as active galactic nuclei (AGN). Many blazars have been detected with gamma-ray telescopes such as HESS, VERITAS, MAGIC and the Fermi satellite as sources of gamma rays with energies E ≥ 100 GeV. These very-high-energy photons interact with the extragalactic background light (EBL), producing ultra-relativistic electron-positron pairs. Observations with Fermi-LAT indicate that the GeV gamma-ray flux from some blazars is lower than that predicted from the full electromagnetic cascade. The pairs can induce electrostatic and electromagnetic instabilities, in which case wave-particle interactions can reduce the energy of the pairs. Therefore, collective plasma effects can also substantially suppress the GeV-band gamma-ray emission, affecting the IGMF constraints as well. Using particle-in-cell (PIC) simulations, we have revisited the issue of plasma instabilities induced by electron-positron beams in the fully ionized intergalactic medium. This problem is related to pair beams produced by the TeV radiation of blazars. The main objective of our study is to clarify the feedback of the beam-driven instabilities on the pairs. The present dissertation provides new results regarding the plasma instabilities from blazar-induced pair beams interacting with the intergalactic medium. This clarifies the relevance of plasma instabilities and improves our understanding of blazars.
Changes in the thermal regime of permafrost cause disturbances of the land surface. These changes are amplified by the temperatures that have been rising in the Arctic for decades. Thermokarst is a process in which the land surface subsides due to the melting of ground ice or the thawing of permafrost, creating characteristic landforms. Thermokarst is particularly widespread along slopes, and the number of associated landforms in the Arctic is steadily increasing. This process mobilizes large amounts of material, which is transported towards the sea or accumulated along slopes. While hillslope thermokarst strongly alters terrestrial and aquatic ecosystems, its influence at the regional scale is still a subject of ongoing research.
In this thesis, we quantify the effects of hillslope thermokarst processes on the surrounding ecosystems of the coastal valleys and nearshore zones along the Yukon coast of Canada. Using supervised machine learning, we identified geomorphic factors that favor the development of retrogressive thaw slumps (RTS), one manifestation of hillslope thermokarst. Coastal geomorphology as well as ground-ice type and content are the main factors controlling the occurrence of RTS. We used aerial photographs and satellite imagery to track the evolution of RTS between 1952 and 2011. During this period, the number and extent of RTS increased linearly. We show that 56% of the RTS identified along the coast in 2011 eroded 16.6 × 10⁶ m³ of material, 45% of which was transported along the coast by coastal processes. RTS contribute substantially to the carbon budget of the nearshore zone: 17% of the RTS identified in 2011 delivered 0.6% of the organic carbon that is released annually by coastal erosion along the Yukon coast. To assess the influence of hillslope thermokarst on the terrestrial ecosystem, we analyzed the spatial distribution of soil organic carbon and total nitrogen (SOC, TN) along slope profiles in three Arctic valleys. We point to a high spatial variability in the distribution of SOC and TN, which is attributable to complex soil processes occurring along slopes. Hillslope thermokarst has a major influence on the degradation of organic material and the storage of SOC and TN.
Active and passive source data from two seismic experiments within the interdisciplinary project TIPTEQ (from The Incoming Plate to mega Thrust EarthQuake processes) were used to image and identify the structural and petrophysical properties (such as P- and S-velocities, Poisson's ratios, pore pressure, density and amount of fluids) within the Chilean seismogenic coupling zone at 38.25°S, where in 1960 the largest earthquake ever recorded (Mw 9.5) occurred. Two S-wave velocity models calculated using traveltime and noise tomography techniques were merged with an existing velocity model to obtain a 2D S-wave velocity model, which combined the advantages of each individual model. In a following step, P- and S-reflectivity images of the subduction zone were obtained using different pre-stack and post-stack depth migration techniques. Among them, the recent pre-stack line-drawing depth migration scheme yielded revealing results. Next, synthetic seismograms modelled using the reflectivity method allowed the composition and rocks within the subduction zone to be inferred through their input 1D synthetic P- and S-velocities. Finally, an image of the subduction zone is given, jointly interpreting the results from this work with results from other studies. The Chilean seismogenic coupling zone at 38.25°S shows a continental crust with highly reflective horizontal as well as (steeply) dipping events. Among them, the Lanalhue Fault Zone (LFZ), which is interpreted to be east-dipping, is imaged to very shallow depths. Some steep reflectors are observed for the first time, for example one near the coast, related to high seismicity, and another one near the LFZ. Steep shallow reflectivity towards the volcanic arc could be related to a steep west-dipping reflector interpreted as fluids and/or melts migrating upwards due to material recycling in the continental mantle wedge.
The high resolution of the S-velocity model in the first kilometres allowed the identification of several sedimentary basins, characterized by very low P- and S-velocities, high Poisson's ratios and possibly steep reflectivity. Such high Poisson's ratios are also observed within the oceanic crust, which reaches the seismogenic zone hydrated due to bending-related faulting. It is interpreted to release water until reaching the coast and beneath the continental mantle wedge. In terms of seismic velocities, the inferred composition and rocks in the continental crust are in agreement with field geology observations at the surface along the profile. Furthermore, there is no need to invoke the existence of measurable amounts of present-day fluids above the plate interface in the continental crust of the Coastal Cordillera and the Central Valley in this part of the Chilean convergent margin. A large-scale anisotropy in the continental crust and upper mantle, previously proposed from magnetotelluric studies, is also suggested by the seismic velocities. However, quantitative studies on this topic in the continental crust of the Chilean seismogenic zone at 38.25°S do not exist to date.
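For reference, Poisson's ratio follows directly from the P- and S-wave velocities of an isotropic elastic medium; a small sketch (the example velocities are illustrative values, not measurements from the profile):

```python
import math

def poisson_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities of an isotropic
    elastic medium: nu = (vp^2 - 2*vs^2) / (2*(vp^2 - vs^2))."""
    return (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))

# For vp/vs = sqrt(3) the classic crustal value nu = 0.25 is recovered.
print(round(poisson_ratio(math.sqrt(3.0), 1.0), 6))   # 0.25

# Lowering vs relative to vp (as in hydrated or fluid-rich rocks)
# raises vp/vs and hence nu:
print(poisson_ratio(6.0, 3.2) > poisson_ratio(6.0, 3.5))   # True
```

This is why elevated Poisson's ratios in the basins and the oceanic crust are read as indicators of water-rich, poorly consolidated or hydrated material.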
Remote sensing technologies, such as airborne, mobile, or terrestrial laser scanning and photogrammetric techniques, are fundamental approaches for the efficient, automatic creation of digital representations of spatial environments. For example, they allow us to generate 3D point clouds of landscapes, cities, infrastructure networks, and sites. As an essential and universal category of geodata, 3D point clouds are used and processed by a growing number of applications, services, and systems in domains such as urban planning, landscape architecture, environmental monitoring, disaster management, and virtual geographic environments, as well as for spatial analysis and simulation.
While the acquisition processes for 3D point clouds become more and more reliable and widely used, applications and systems are faced with ever larger amounts of 3D point cloud data. In addition, 3D point clouds, by their very nature, are raw data, i.e., they do not contain any structural or semantic information. Many processing strategies common to GIS, such as deriving polygon-based 3D models, generally do not scale to billions of points. GIS typically reduce the data density and precision of 3D point clouds to cope with the sheer amount of data, but this results in a significant loss of valuable information at the same time.
This thesis proposes concepts and techniques designed to efficiently store and process massive 3D point clouds. To this end, object-class segmentation approaches are presented to attribute semantics to 3D point clouds, used, for example, to identify building, vegetation, and ground structures and, thus, to enable processing, analyzing, and visualizing 3D point clouds in a more effective and efficient way. Similarly, change detection and updating strategies for 3D point clouds are introduced that allow for reducing storage requirements and incrementally updating 3D point cloud databases. In addition, this thesis presents out-of-core, real-time rendering techniques used to interactively explore 3D point clouds and related analysis results. All techniques have been implemented based on specialized spatial data structures, out-of-core algorithms, and GPU-based processing schemas to cope with massive 3D point clouds having billions of points.
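One common building block behind such spatial data structures is a linear octree based on Morton (Z-order) keys, which maps 3D positions to a sortable one-dimensional index so that spatially close points end up in the same node or disk page; a minimal sketch (grid origin and cell size are arbitrary illustration values, not parameters from the thesis):

```python
def morton_key(ix, iy, iz, bits=21):
    """Interleave the bits of three integer grid coordinates into a
    single Morton (Z-order) key, the basis of linear octrees used to
    index and page massive point clouds out-of-core."""
    key = 0
    for b in range(bits):
        key |= ((ix >> b) & 1) << (3 * b)
        key |= ((iy >> b) & 1) << (3 * b + 1)
        key |= ((iz >> b) & 1) << (3 * b + 2)
    return key

def grid_coord(value, origin, cell_size):
    """Quantize a coordinate onto the index grid."""
    return int((value - origin) / cell_size)

# Points that are close in space receive nearby keys, so sorting by
# key groups points into octree nodes / disk pages.
p1 = (1.02, 2.10, 0.55)
p2 = (1.11, 2.18, 0.58)   # near p1
p3 = (90.0, 80.0, 30.0)   # far away
keys = [morton_key(*(grid_coord(c, 0.0, 0.1) for c in p)) for p in (p1, p2, p3)]
print(abs(keys[0] - keys[1]) < abs(keys[0] - keys[2]))   # True
```

Sorting billions of points by such keys turns random spatial access into sequential I/O, which is one reason out-of-core pipelines scale to the data volumes described above.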
All proposed techniques have been evaluated and have demonstrated their applicability to the field of geospatial applications and systems, in particular for tasks such as classification, processing, and visualization. Case studies on 3D point clouds of entire cities with up to 80 billion points show that the presented approaches open up new ways to manage and apply large-scale, dense, and time-variant 3D point clouds, as required by a rapidly growing number of applications and systems.
In this work, different strategies for the construction of biohybrid photoelectrodes are investigated and evaluated according to their intrinsic catalytic activity for the oxidation of the cofactor NADH or for the connection with the enzymes PQQ glucose dehydrogenase (PQQ-GDH), FAD-dependent glucose dehydrogenase (FAD-GDH) and fructose dehydrogenase (FDH). The light-controlled oxidation of NADH has been analyzed with InGaN/GaN nanowire-modified electrodes. Upon illumination with visible light, the InGaN/GaN nanowires generate an anodic photocurrent, which increases in a concentration-dependent manner in the presence of NADH, thus allowing determination of the cofactor. Furthermore, different approaches for the connection of enzymes to quantum dot (QD)-modified electrodes via small redox molecules or redox polymers have been analyzed and discussed. First, interaction studies with diffusible redox mediators such as hexacyanoferrate(II) and ferrocenecarboxylic acid have been performed with CdSe/ZnS QD-modified gold electrodes to build up photoelectrochemical signal chains between QDs and the enzymes FDH and PQQ-GDH. In the presence of substrate and under illumination of the electrode, electrons are transferred from the enzyme via the redox mediators to the QDs. The resulting photocurrent is dependent on the substrate concentration and allows a quantification of the fructose and glucose content in solution. A first attempt with an immobilized redox mediator, i.e. ferrocenecarboxylic acid chemically coupled to PQQ-GDH and attached to QD-modified gold electrodes, reveals the potential to build up photoelectrochemical signal chains even without diffusible redox mediators in solution. However, this approach results in a significantly deteriorated photocurrent response compared to the situation with diffusing mediators.
In order to improve the photoelectrochemical performance of such redox mediator-based, light-switchable signal chains, an osmium complex-containing redox polymer has been evaluated as electron relay for the electronic linkage between QDs and enzymes. The redox polymer allows the stable immobilization of the enzyme and the efficient wiring with the QD-modified electrode. In addition, a 3D inverse opal TiO2 (IO-TiO2) electrode has been used for the integration of PbS QDs, redox polymer and FAD-GDH in order to increase the electrode surface. This results in a significantly improved photocurrent response, a quite low onset potential for the substrate oxidation and a broader glucose detection range as compared to the approach with ferrocenecarboxylic acid and PQQ-GDH immobilized on CdSe/ZnS QD-modified gold electrodes. Furthermore, IO-TiO2 electrodes are used to integrate sulfonated polyanilines (PMSA1) and PQQ-GDH, and to investigate the direct interaction between the polymer and the enzyme for the light-switchable detection of glucose. While PMSA1 provides visible light excitation and ensures the efficient connection between the IO-TiO2 electrode and the biocatalytic entity, PQQ-GDH enables the oxidation of glucose. Here, the IO-TiO2 electrodes with pores of approximately 650 nm provide a suitable interface and morphology, which is required for a stable and functional assembly of the polymer and enzyme. The successful integration of the polymer and the enzyme can be confirmed by the formation of a glucose-dependent anodic photocurrent. In conclusion, this work provides insights into the design of photoelectrodes and presents different strategies for the efficient coupling of redox enzymes to photoactive entities, which allows for light-directed sensing and provides the basis for the generation of power from sun light and energy-rich compounds.
In this doctoral thesis, a novel micromanipulation technique for localized liquid delivery at the complex gland tissue of the cockroach P. americana was characterized and applied for the targeted manipulation of individual cells within a cell complex (tissue). This micromanipulation technique is fluidic force microscopy (FluidFM), known since 2009. It employs very small microchanneled atomic force microscopy cantilevers or micro-/nanopipettes with an aperture between 300 nm and 2 µm, with which very small volumes in the picoliter to femtoliter range (10⁻¹² L – 10⁻¹⁵ L) can be delivered in a targeted and spatially precise manner. The aim of this work was the analysis of cellular processes, such as cell-cell communication or signal propagation between neighboring cells, with the aid of fluorescence microscopy. With this method, cells and their components can be visualized with high contrast under a microscope after prior dye loading. Ultimately, fluorescence microscopy was to be used to visualize the cellular reactions within the tissue after local manipulation.
First, the application of the system in air and in aqueous environments was described. In this context, a cleaning and loading method was developed that made it possible to clean the expensive micro-/nanopipettes and subsequently reuse them several times. In addition, an alternative method was tested with which the diffusion behavior of dye molecules in different media can be investigated. Furthermore, the system parameters required to obtain a good seal between the pipette aperture and the sample surface were optimized. This seal is essential so that the delivered liquid interacts with the sample only in the delivery region and the subsequent reactions take place only within the tissue, since otherwise the cell-to-cell signal propagation between the cells cannot be traced unambiguously. This intercellular communication was investigated using two second messengers (Ca²⁺ and NO). It was possible to detect individual local reactions that spread to further cells. Finally, the fabrication of a special injection pipette was described, which was tested on two biological systems.
Plants are unable to move away from unwanted environments and therefore have to adapt locally to changing conditions. Arabidopsis thaliana (Arabidopsis), a model organism in plant biology, has been able to rapidly colonize a wide spectrum of environments with different biotic and abiotic challenges. In recent years, natural variation in Arabidopsis has been shown to be an excellent resource to study genes underlying adaptive traits and the impact of hybridization on natural diversity. Studies on Arabidopsis hybrids have provided information on the genetic basis of hybrid incompatibilities and heterosis, as well as inheritance patterns in hybrids. However, previous studies have focused mainly on global accessions, and much remains to be learned about variation within a local growth habitat. In my PhD, I investigated the impact of heterozygosity at a local collection site of Arabidopsis and its role in local adaptation. I focused on two projects, both involving hybrids among Arabidopsis individuals collected around Tübingen in Southern Germany. The first project sought to understand the impact of hybridization on metabolism and growth within a local Arabidopsis collection site. For this, the inheritance patterns of primary and secondary metabolites, together with rosette size, were analyzed in full diallel crosses among seven parents originating from Southern Germany. In comparison to primary metabolites, compounds from secondary metabolism were more variable and showed pronounced non-additive inheritance patterns. In addition, defense metabolites, mainly glucosinolates, displayed the highest degree of variation from the midparent values and were positively correlated with a proxy for plant size.
In the second project, the role of ACCELERATED CELL DEATH 6 (ACD6) in the defense response pathway of Arabidopsis necrotic hybrids was further characterized. Allelic interactions of ACD6 have previously been linked to hybrid necrosis, both among global and local Arabidopsis accessions. Hence, I characterized the early metabolic and ionic changes induced by ACD6, together with marker gene expression assays of physiological responses linked to its activation. An upregulation of simple sugars and of metabolites linked to non-enzymatic antioxidants and the TCA cycle was detected, together with putrescine and acids linked to abiotic stress responses. Senescence was found to be induced earlier in necrotic hybrids, and cytoplasmic calcium signaling was unaffected in response to temperature. In parallel, GFP-tagged constructs of ACD6 were developed.
This work therefore provides novel insights into the role of heterozygosity in natural variation and adaptation and expands our current knowledge of the physiological and molecular responses associated with ACD6 activation.
The interaction between surfaces displaying end-grafted hydrophilic polymer brushes plays important roles in biology and in many wet-technological applications. The outer surfaces of Gram-negative bacteria, for example, are composed of lipopolysaccharide (LPS) molecules exposing oligo- and polysaccharides to the aqueous environment. This unique, structurally complex biological interface is of great scientific interest as it mediates the interaction of bacteria with neighboring bacteria in colonies and biofilms. The interaction between polymer-decorated surfaces is generally coupled to the distance-dependent conformation of the polymer chains. Therefore, structural insight into the interacting surfaces is a prerequisite to understand the interaction characteristics as well as the underlying physical mechanisms. This problem has been addressed by theory, but accurate experimental data on polymer conformations under confinement are rare, because obtaining perturbation-free structural insight into buried soft interfaces is inherently difficult.
In this thesis, lipid membrane surfaces decorated with hydrophilic polymers of technological and biological relevance are investigated under controlled interaction conditions, i.e., at defined surface separations. For this purpose, dedicated sample architectures and experimental tools are developed. Via ellipsometry and neutron reflectometry, pressure-distance curves and distance-dependent polymer conformations, in terms of brush compression and mutual interpenetration, are determined. Additional element-specific structural insight into the end-point distribution of interacting brushes is obtained by standing-wave x-ray fluorescence (SWXF).
The methodology is first established for poly(ethylene glycol) (PEG) brushes of defined length and grafting density. For this system, neutron reflectometry revealed pronounced brush interpenetration, which is not captured in common brush theories and therefore motivates rigorous simulation-based treatments. In the second step, the same approach is applied to realistic mimics of the outer surfaces of Gram-negative bacteria: monolayers of wild type LPSs extracted from E. coli O55:B5 displaying strain-specific O-side chains. The neutron reflectometry experiments yield unprecedented structural insight into bacterial interactions, which are of great relevance for the properties of biofilms.
The formation and breaching of naturally dammed lakes have shaped landscapes, especially in seismically active high-mountain regions. Dammed lakes are both potential water resources and hazards in case of dam breaching. Central Asia has mostly arid and semi-arid climates. Rock glaciers already store more water than ice glaciers in some semi-arid regions of the world, but their distribution and advance mechanisms are still under debate in recent research. Their impact on water availability in Central Asia will likely increase as temperatures rise and glaciers diminish.
This thesis provides insight into the relative age distribution of selected Kyrgyz and Kazakh rock glaciers and their single lobes derived from lichenometric dating. The sizes of roughly 8000 lichen specimens were used to approximate an exposure age of the underlying debris surface. We showed that rock-glacier movement differs significantly on small scales. This has several implications for climatic inferences from rock glaciers. First, reactivation of their lobes does not necessarily point to climatic changes, or at least to out-of-equilibrium conditions. Second, the elevations of rock-glacier toes can no longer be considered general indicators of the limit of sporadic mountain permafrost, as they have traditionally been used.
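To illustrate the lichenometric principle of inferring exposure ages from lichen sizes, a minimal sketch with a simple linear growth model follows. The colonization lag and growth rate used here are hypothetical placeholders for illustration only, not the calibrated regional growth-curve parameters used in the thesis.

```python
def lichen_exposure_age(diameter_mm, colonization_lag_yr=10.0,
                        growth_rate_mm_per_yr=0.35):
    """Estimate surface exposure age (years) from the largest lichen diameter.

    Assumes a simple linear growth model after an initial colonization lag.
    Both parameters are illustrative placeholders; real lichenometric studies
    calibrate them against surfaces of known age in the same region.
    """
    if diameter_mm < 0:
        raise ValueError("diameter must be non-negative")
    return colonization_lag_yr + diameter_mm / growth_rate_mm_per_yr

# Under these placeholder parameters, a 35 mm thallus implies
# 10 + 35 / 0.35 = 110 years of exposure.
print(round(lichen_exposure_age(35.0)))  # → 110
```

Comparing such ages between lobes of the same rock glacier is what allows movement differences to be resolved on small spatial scales.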
In the mountainous and seismically active region of Central Asia, natural dams besides rock glaciers also play a key role in controlling water and sediment influx into river valleys. However, rock glaciers advancing into valleys seem to be capable of influencing the stream network, damming rivers, or impounding lakes. This influence has not previously been addressed. We quantitatively explored these controls using a new inventory of 1300 Central Asian rock glaciers. Elevation, potential incoming solar radiation, and the size of rock glaciers and their feeder basins played key roles in predicting dam occurrence. Bayesian techniques were used to credibly distinguish between lichen sizes on rock glaciers and their lobes, and to find those parameters of a rock-glacier system that most credibly express the potential to build natural dams.
To place these studies in the region's history of natural dams, a combination of dating of former lake levels and outburst flood modelling addresses the history and possible outburst flood hypotheses of the second largest mountain lake of the world, Issyk Kul in Kyrgyzstan. Megafloods from breached earthen or glacial dams were found to be a likely explanation for some of the lake's highly fluctuating water levels. However, our detailed analysis of candidate lake sediments and outburst-flood deposits also showed that more localised dam breaks to the west of Issyk Kul could have left similar geomorphic and sedimentary evidence in this Central Asian mountain landscape. We thus caution against readily invoking megafloods as the main cause of lake-level drops of Issyk Kul. In summary, this thesis addresses some new pathways for studying rock glaciers and natural dams with several practical implications for studies on mountain permafrost and natural hazards.
Understanding how humans move their eyes is an important part of understanding the functioning of the visual system. Analyzing eye movements from observations of natural scenes on a computer screen is a step towards understanding human visual behavior in the real world. When analyzing eye-movement data from scene-viewing experiments, the important questions are where (fixation locations), how long (fixation durations) and when (ordering of fixations) participants fixate on an image. Answering these questions allows the development of computational models that predict human scanpaths. Such models serve as a tool to understand the underlying cognitive processes during image observation, especially the allocation of visual attention.
The goal of this thesis is to provide new contributions to characterize and model human scanpaths on natural scenes. The results from this thesis will help to understand and describe certain systematic eye-movement tendencies, which are mostly independent of the image. One eye-movement tendency I focus on throughout this thesis is the tendency to fixate more in the center of an image than on the outer parts, called the central fixation bias. Another tendency, which I will investigate thoroughly, is the characteristic distribution of angles between successive eye movements.
The results serve to evaluate and improve a previously published model of scanpath generation from our laboratory, the SceneWalk model. Overall, six experiments were conducted for this thesis which led to the following five core results:
i) A spatial inhibition of return can be found in scene-viewing data. This means that locations which have already been fixated are afterwards avoided for a certain time interval (Chapter 2).
ii) The initial fixation position when observing an image has a long-lasting influence of up to five seconds on further scanpath progression (Chapters 2 & 3).
iii) The often-described central fixation bias on images depends strongly on the duration of the initial fixation. Long-lasting initial fixations lead to a weaker central fixation bias than short fixations (Chapters 2 & 3).
iv) Human observers adjust their basic eye-movement parameters, like fixation durations and saccade amplitudes, to the visual properties of a target they look for in visual search (Chapter 4).
v) The angle between two adjacent saccades is an indicator for the selectivity of the upcoming saccade target (Chapter 4).
All results emphasize the importance of systematic behavioral eye-movement tendencies and dynamic aspects of human scanpaths in scene viewing.
The modern British intelligence architecture emerged in the first half of the twentieth century. At the same time, British society underwent an unprecedented democratization. This thesis seeks to show how even supposedly arcane areas of state action are embedded in processes of public negotiation, and therefore, for the first time, systematically reconstructs public and expert discourses on Britain's intelligence services in the age of the world wars.
Today, more than half of the world’s population lives in urban areas. With a high density of population and assets, urban areas are not only the economic, cultural and social hubs of every society, they are also highly susceptible to natural disasters. As a consequence of rising sea levels and an expected increase in extreme weather events caused by a changing climate in combination with growing cities, flooding is an increasing threat to many urban agglomerations around the globe.
To mitigate the destructive consequences of flooding, appropriate risk management and adaptation strategies are required. So far, flood risk management in urban areas has focused almost exclusively on managing river and coastal flooding. Often overlooked is the risk from small-scale rainfall-triggered flooding, where the rainfall intensity of rainstorms exceeds the capacity of urban drainage systems, leading to immediate flooding. Referred to as pluvial flooding, this flood type, which is exclusive to urban areas, has caused severe losses in cities around the world. Without further intervention, losses from pluvial flooding are expected to increase in many urban areas due to an increase in impervious surfaces, compounded by an aging drainage infrastructure and a projected increase in heavy precipitation events. While this requires the integration of pluvial flood risk into risk management plans, little is known so far about the adverse consequences of pluvial flooding due to a lack of both detailed data sets and studies on pluvial flood impacts. As a consequence, methods for reliably estimating pluvial flood losses, needed for pluvial flood risk assessment, are still missing.
Therefore, this thesis investigates how pluvial flood losses to private households can be reliably estimated, based on an improved understanding of the drivers of pluvial flood loss. For this purpose, detailed data from pluvial flood-affected households was collected through structured telephone- and web-surveys following pluvial flood events in Germany and the Netherlands.
Pluvial flood losses to households are the result of complex interactions between impact characteristics, such as the water depth, and a household's resistance, as determined by its risk awareness, preparedness, emergency response, building properties and other influencing factors. Both exploratory analysis and machine-learning approaches were used to analyze differences in resistance and impacts between households and their effects on the resulting losses. The comparison of case studies showed that awareness of pluvial flooding among private households is quite low. Low awareness not only challenges the effective dissemination of early warnings, but was also found to influence the implementation of private precautionary measures. The latter were predominantly implemented by households with previous experience of pluvial flooding. Even cases where previous flood events affected a different part of the same city did not lead to an increase in preparedness of the surveyed households, highlighting the need to account for small-scale variability in both impact and resistance parameters when assessing pluvial flood risk.
While it was concluded that the combination of low awareness, ineffective early warning and the fact that only a minority of buildings were adapted to pluvial flooding impaired the coping capacities of private households, the often low water levels still enabled households to mitigate or even prevent losses through a timely and effective emergency response.
These findings were confirmed by the detection of loss-influencing variables, showing that cases in which households were able to prevent any loss to the building structure are predominantly explained by resistance variables such as the household's risk awareness, while the degree of loss is mainly explained by impact variables.
Based on the important loss-influencing variables detected, different flood loss models were developed. Similar to flood loss models for river floods, the empirical data from the preceding data collection was used to train flood loss models describing the relationship between impact and resistance parameters and the resulting loss to building structures. Different approaches were adapted from river flood loss models, using both models with the water depth as the only predictor for building structure loss and models incorporating additional variables from the preceding variable detection routine.
The high predictive errors of all compared models showed that point predictions are not suitable for estimating losses on the building level, as they severely impair the reliability of the estimates. For that reason, a new probabilistic framework based on Bayesian inference was introduced that is able to provide predictive distributions instead of single loss estimates. These distributions not only give a range of probable losses, they also provide information on how likely a specific loss value is, representing the uncertainty in the loss estimate.
Using probabilistic loss models, it was found that the certainty and reliability of a loss estimate on the building level is not only determined by the use of additional predictors as shown in previous studies, but also by the choice of response distribution defining the shape of the predictive distribution. Here, a mix between a beta and a Bernoulli distribution to account for households that are able to prevent losses to their building’s structure was found to provide significantly more certain and reliable estimates than previous approaches using Gaussian or non-parametric response distributions.
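As an illustration of such a mixture response distribution, the sketch below draws from a Bernoulli-beta mixture for relative building loss: with some probability the household prevents any structural loss (loss = 0), and otherwise the relative loss follows a beta distribution on (0, 1). The parameter values are illustrative placeholders, not fitted values from the thesis, and a full Bayesian treatment would additionally place posteriors over these parameters.

```python
import random
import statistics

def sample_relative_loss(p_zero, alpha, beta, n=10000, seed=42):
    """Draw n samples from a Bernoulli-beta mixture for relative loss.

    With probability p_zero the loss is exactly zero (loss prevented);
    otherwise the relative loss is beta-distributed on (0, 1).
    Parameter values are illustrative, not fitted to survey data.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        if rng.random() < p_zero:
            samples.append(0.0)                      # no structural loss
        else:
            samples.append(rng.betavariate(alpha, beta))
    return samples

# Hypothetical predictive distribution: 40 % chance of zero loss,
# otherwise losses concentrated at low relative damage.
losses = sample_relative_loss(p_zero=0.4, alpha=1.5, beta=6.0)
print(round(statistics.mean(losses), 3))  # ≈ 0.12 = (1-0.4)*1.5/(1.5+6.0)
```

Because the point mass at zero is modeled explicitly, the predictive distribution can represent both the probability that a household escapes structural loss and the spread of losses when it does not, which a single Gaussian response cannot.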
The successful model transfer and post-event application to estimate building structure loss in Houston, TX, caused by pluvial flooding during Hurricane Harvey confirmed previous findings, and demonstrated the potential of the newly developed multi-variable beta model for future risk assessments. The highly detailed input data set constructed from openly available data sources containing over 304,000 affected buildings in Harris County further showed the potential of data-driven, building-level loss models for pluvial flood risk assessment.
In conclusion, pluvial flood losses to private households are the result of complex interactions between impact and resistance variables, which should be represented in loss models. The local occurrence of pluvial floods requires loss estimates on high spatial resolutions, i.e. on the building level, where losses are variable and uncertainties are high.
Therefore, probabilistic loss estimates describing the uncertainty of the estimate should be used instead of point predictions. While the performance of probabilistic models on the building level is mainly driven by the choice of response distribution, multi-variable models are recommended for two reasons:
First, additional resistance variables improve the detection of cases in which households were able to prevent structural losses.
Second, the added variability of additional predictors provides a better representation of the uncertainties when loss estimates from multiple buildings are aggregated.
This leads to the conclusion that data-driven probabilistic loss models on the building level allow for a reliable loss estimation at an unprecedented level of detail, with a consistent quantification of uncertainties on all aggregation levels. This makes the presented approach suitable for a wide range of applications, from decision support in spatial planning to impact-based early warning systems.
The Sun is the nearest star to the Earth. It consists of an interior and an atmosphere. The convection zone is the outermost layer of the solar interior. A flux rope may emerge as a coherent structure from the convection zone into the solar atmosphere or be formed by magnetic reconnection in the atmosphere. A flux rope is a bundle of magnetic field lines twisting around an axial field line, creating a helical shape by which dense filament material can be supported against gravity. The flux rope is also considered the key structure of the most energetic phenomena in the solar system, such as coronal mass ejections (CMEs) and flares. These magnetic flux ropes can produce severe geomagnetic storms. In particular, to improve the ability to forecast space weather, it is important to enrich our knowledge about the dynamic formation of flux ropes and the underlying physical mechanisms that initiate their eruption, for example as a CME.
A confined eruption consists of a filament eruption and usually an associated flare, but does not evolve into a CME; rather, the moving plasma is halted in the solar corona and is usually seen to fall back. The first detailed observations of a confined filament eruption were obtained on 2002 May 27 by the TRACE satellite in the 195 Å band. In Chapter 3, we therefore focus on a flux rope instability model. A twisted flux rope can become unstable by entering the kink instability regime. We show that the kink instability, which occurs if the twist of a flux rope exceeds a critical value, is capable of initiating an eruption. This model is tested against the well-observed confined eruption of 2002 May 27 in a parametric magnetohydrodynamic (MHD) simulation study that comprises all phases of the event. Very good agreement with the essential observed properties is obtained, except for a relatively poor match of the initial filament height.
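The kink-instability criterion can be sketched in general form as follows. For a flux rope of length L with azimuthal and axial field components B_φ and B_z, the field-line twist must exceed a critical value; the threshold depends on the rope's geometry and twist profile, and the classic line-tied, uniform-twist estimate quoted here is only an orientation value, not necessarily the one realized in the simulations described above:

```latex
\Phi(r) \;=\; \frac{L\,B_{\phi}(r)}{r\,B_{z}(r)} \;>\; \Phi_{c},
\qquad \Phi_{c} \approx 2.5\pi \ \ \text{(line-tied, uniform twist)}
```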
Therefore, in Chapter 4, we submerge the center point of the flux rope deeper below the photosphere to obtain a flatter coronal rope section and a better match with the initial height profile of the erupting filament. This implies a more realistic inclusion of the photospheric line tying. All basic assumptions and the other parameter settings are kept the same as in Chapter 3. This complement of the parametric study shows that the flux rope instability model can yield an even better match with the observational data. We also focus in Chapters 3 and 4 on the magnetic reconnection during the confined eruption, demonstrating that it occurs in two distinct locations and phases that correspond to the observed brightenings and changes of topology, and consider the fate of the erupting flux, which can reform a (less twisted) flux rope.
The Sun also produces series of homologous eruptions, i.e. eruptions which occur repetitively in the same active region and are of similar morphology. Therefore, in Chapter 5, we employ the reformed flux rope as a new initial condition to investigate the possibility of subsequent homologous eruptions. Free magnetic energy is built up by imposing motions at the bottom boundary, such as converging motions, leading to flux cancellation. We apply converging motions in the sunspot area, such that a small part of the flux from the sunspots of opposite polarities is transported toward the polarity inversion line (PIL), where it cancels. The reconnection associated with the cancellation process forms more helical magnetic flux around the reformed flux rope, which leads to a second and a third eruption. In this study, we obtain the first MHD simulation results of a homologous sequence of eruptions that show a transition from a confined to two ejective eruptions, based on the reformation of a flux rope after each eruption.