Refine: 157 results (Doctoral Theses, published 2020, in English)
‘The Territorialities of U.S. Imperialisms’ sets into relation U.S. imperial and Indigenous conceptions of territoriality as articulated in U.S. legal texts and Indigenous life writing in the 19th century. It analyzes the ways in which U.S. legal texts as “legal fictions” narratively press to affirm the United States’ territorial sovereignty and coherence in spite of its reliance on a variety of imperial practices that flexibly disconnect and (re)connect U.S. sovereignty, jurisdiction and territory.
At the same time, the book acknowledges Indigenous life writing as legal texts in their own right and with full juridical force, which aim to highlight the heterogeneity of U.S. national territory both from their individual perspectives and in conversation with these legal fictions. Through this, the book’s analysis contributes to a more nuanced understanding of the coloniality of U.S. legal fictions, while highlighting territoriality as a key concept in the fashioning of the narrative of U.S. imperialism.
To meet the demands of a growing world population while reducing carbon dioxide (CO2) emissions, it is necessary to capture CO2 and convert it into value-added compounds. In recent years, metabolic engineering of microbes has gained strong momentum as a strategy for the production of valuable chemicals. As common microbial feedstocks like glucose directly compete with human consumption, the one carbon (C1) compound formate was suggested as an alternative feedstock. Formate can be easily produced by various means including electrochemical reduction of CO2 and could serve as a feedstock for microbial production, hence presenting a novel entry point for CO2 to the biosphere and a storage option for excess electricity. Compared to the gaseous molecule CO2, formate is a highly soluble compound that can be easily handled and stored. It can serve as a carbon and energy source for natural formatotrophs, but these microbes are difficult to cultivate and engineer. In this work, I present the results of several projects that aim to establish efficient formatotrophic growth of E. coli – which cannot naturally grow on formate – via synthetic formate assimilation pathways. In the first study, I establish a workflow for growth-coupled metabolic engineering of E. coli. I demonstrate this approach by presenting an engineering scheme for the PFL-threonine cycle, a synthetic pathway for anaerobic formate assimilation in E. coli. The described methods are intended to create a standardized toolbox for engineers that aim to establish novel metabolic routes in E. coli and related organisms. The second chapter presents a study on the catalytic efficiency of C1-oxidizing enzymes in vivo. As formatotrophic growth requires generation of both energy and biomass from formate, the engineered E. coli strains need to be equipped with a highly efficient formate dehydrogenase, which provides reduction equivalents and ATP for formate assimilation. 
I engineered a strain that cannot generate reducing power and energy for cellular growth when fed on acetate. Under this condition, the strain depends on the introduction of an enzymatic system for NADH regeneration, which could further produce ATP via oxidative phosphorylation. I show that the strain presents a valuable testing platform for C1-oxidizing enzymes by testing different NAD-dependent formate and methanol dehydrogenases in the energy-auxotrophic strain. Using this platform, several candidate enzymes with high in vivo activity were identified and characterized as potential energy-generating systems for synthetic formatotrophic or methylotrophic growth in E. coli. In the third chapter, I present the establishment of the serine threonine cycle (STC) – a synthetic formate assimilation pathway – in E. coli. In this pathway, formate is assimilated via formate tetrahydrofolate ligase (FtfL) from Methylobacterium extorquens (M. extorquens). The carbon from formate is attached to glycine to produce serine, which is converted into pyruvate entering central metabolism. Via the natural threonine synthesis and cleavage route, glycine is regenerated and acetyl-CoA is produced as the pathway product. I engineered several selection strains that depend on different STC modules for growth and determined key enzymes that enable high flux through threonine synthesis and cleavage. I could show that expression of an auxiliary formate dehydrogenase was required to achieve growth via threonine synthesis and cleavage on pyruvate. By overexpressing most of the pathway enzymes from the genome and applying adaptive laboratory evolution, growth on glycine and formate was achieved, indicating the activity of the complete cycle. The fourth chapter shows the establishment of the reductive glycine pathway (rGP) – a short, linear formate assimilation route – in E. coli. As in the STC, formate is assimilated via M. extorquens FtfL.
The C1 from formate is condensed with CO2 via the reverse reaction of the glycine cleavage system to produce glycine. Another carbon from formate is attached to glycine to form serine, which is assimilated into central metabolism via pyruvate. The engineered E. coli strain, expressing most of the pathway genes from the genome, can grow via the rGP with formate or methanol as a sole carbon and energy source.
With populations growing worldwide and climate change threatening food production, there is an urgent need to find ways to ensure food security. Increasing the carbon fixation rate in plants is a promising approach to boost crop yields. The carbon-fixing enzyme Rubisco catalyzes, besides the carboxylation reaction, also an oxygenation reaction that generates glycolate-2P, which needs to be recycled via a metabolic route termed photorespiration. Photorespiration dissipates energy and, most importantly, releases previously fixed CO2, thus significantly lowering carbon fixation rate and yield. Engineering plants to omit photorespiratory CO2 release is the goal of the FutureAgriculture consortium, and this thesis is part of this collaboration. The consortium aims to establish alternative glycolate-2P recycling routes that do not release CO2. Ultimately, they are expected to increase carbon fixation rates and crop yields. Natural and novel reactions, which require enzyme engineering, were considered in the pathway design process. Here I describe the engineering of two pathways, the arabinose-5P and the erythrulose shunt. They were designed to recycle glycolate-2P via glycolaldehyde into a sugar phosphate and thereby reassimilate glycolate-2P into the Calvin cycle. I used Escherichia coli gene deletion strains to validate and characterize the activity of both synthetic shunts. The strains’ auxotrophies can be alleviated by the activity of the synthetic route, thus providing a direct way to select for pathway activity. I introduced all pathway components into these dedicated selection strains and discovered inhibitions, limitations and metabolic cross-talk interfering with pathway activity. After resolving these issues, I was able to show the in vivo activity of all pathway components and combine them into functional modules. Specifically, I demonstrate the activity of a new-to-nature module of glycolate reduction to glycolaldehyde.
Also, I successfully show a new glycolaldehyde assimilation route via arabinose-5P to ribulose-5P. In addition, all necessary enzymes for glycolaldehyde assimilation via L-erythrulose were shown to be active and an L-threitol assimilation route via L-erythrulose was established in E. coli. On their own, these findings demonstrate the power of using an easily engineerable microbe to test novel pathways; combined, they will form the basis for implementing photorespiration bypasses in plants.
TrainTrap
(2020)
Due to the continuously intensifying human usage of the marine environment, wide-ranging cetaceans worldwide face an increasing number of threats. Besides whaling, overfishing and by-catch, new technical developments increase water and noise pollution, which can negatively affect marine species. Cetaceans are especially prone to these influences, being at the top of the food chain and therefore accumulating toxins and contaminants. Furthermore, they are extremely noise-sensitive due to their highly developed hearing sense and echolocation ability. As a result, several cetacean species were brought to extinction during the last century or are now classified as critically endangered. This work focuses on two odontocetes. It applies and compares different molecular methods for inference of population status and adaptation, with implications for conservation. The worldwide distributed sperm whale (Physeter macrocephalus) shows a matrilineal population structure with predominant male dispersal. A recently stranded group of male sperm whales provided a unique opportunity to investigate male grouping for the first time. Based on the mitochondrial control region, I was able to infer that male bachelor groups comprise multiple matrilines, hence derive from different social groups, and that they represent the genetic variability of the entire North Atlantic. The harbor porpoise (Phocoena phocoena) occurs only in the northern hemisphere. Being small and occurring mostly in coastal habitats, it is especially prone to human disturbance. Since some subspecies and subpopulations are critically endangered, it is important to generate and provide genetic markers with high resolution to facilitate population assignment and subsequent protection measures. Here, I provide the first harbor porpoise whole genome, in high quality and including a draft annotation.
Using it for mapping ddRAD-seq data, I identified genome-wide SNPs and, together with a fragment of the mitochondrial control region, inferred the population structure of its North Atlantic distribution range. The Belt Sea harbors a distinct subpopulation as opposed to the North Atlantic, with a transition zone in the Kattegat. Within the North Atlantic I could detect subtle genetic differentiation between western (Canada–Iceland) and eastern (North Sea) regions, with support for a German North Sea breeding ground around the Isle of Sylt. Further, I was able to detect six outlier loci which show isolation by distance across the investigated sampling areas. In employing different markers, I could show that single-marker systems as well as genome-wide data can unravel new information about population affinities of odontocetes. Genome-wide data can facilitate investigation of adaptations and the evolutionary history of the species and its populations. Moreover, they facilitate population genetic investigations, providing a high resolution and hence allowing for the detection of subtle population structuring, which is especially important for highly mobile cetaceans.
With the rising complexity of today's software and hardware systems and the hypothesized increase in autonomous, intelligent, and self-* systems, developing correct systems remains an important challenge. Testing, although an important part of the development and maintenance process, cannot usually establish the definite correctness of a software or hardware system – especially when systems have arbitrarily large or infinite state spaces or an infinite number of initial states. This is where formal verification comes in: given a representation of the system in question in a formal framework, verification approaches and tools can be used to establish the system's adherence to its similarly formalized specification, and to complement testing.
One such formal framework is the field of graphs and graph transformation systems. Both are powerful formalisms with well-established foundations and ongoing research that can be used to describe complex hardware or software systems at varying degrees of abstraction. Since their inception in the 1970s, graph transformation systems have continuously evolved; related research spans extensions of expressive power, graph algorithms and their implementation, application scenarios, and verification approaches, to name just a few topics.
This thesis focuses on a verification approach for graph transformation systems called k-inductive invariant checking, which is an extension of previous work on 1-inductive invariant checking. Instead of exhaustively computing a system's state space, which is a common approach in model checking, 1-inductive invariant checking symbolically analyzes graph transformation rules - i.e. system behavior - in order to draw conclusions with respect to the validity of graph constraints in the system's state space. The approach is based on an inductive argument: if a system's initial state satisfies a graph constraint and if all rules preserve that constraint's validity, we can conclude the constraint's validity in the system's entire state space - without having to compute it.
However, inductive invariant checking also comes with a specific drawback: the locality of graph transformation rules leads to a lack of context information during the symbolic analysis of potential rule applications. This thesis argues that this lack of context can be partly addressed by using k-induction instead of 1-induction. A k-inductive invariant is a graph constraint whose validity in a path of k-1 rule applications implies its validity after any subsequent rule application - as opposed to a 1-inductive invariant where only one rule application is taken into account. Considering a path of transformations then accumulates more context of the graph rules' applications.
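To make the argument concrete, here is a deliberately simplified sketch of k-induction on a plain finite transition system, with explicit states standing in for graphs and a successor relation standing in for rule applications. All names and the toy semantics are illustrative assumptions of this sketch, not the thesis's formalism or tool:

```python
def k_inductive(states, successors, initial, inv, k):
    """Check that `inv` is a k-inductive invariant of the system.

    Base case: inv holds along every path of up to k states starting
    from an initial state. Step case: if inv holds along any path of k
    consecutive states, it also holds in every successor of the last one.
    """
    # Base case: explore all paths of at most k states from the initial states.
    def base(path):
        if not inv(path[-1]):
            return False
        if len(path) == k:
            return True
        return all(base(path + (s,)) for s in successors[path[-1]])

    if not all(base((s0,)) for s0 in initial):
        return False

    # Step case: enumerate all paths of k states on which inv holds throughout.
    def paths(path):
        if len(path) == k:
            yield path
        else:
            for s in successors[path[-1]]:
                if inv(s):
                    yield from paths(path + (s,))

    for start in states:
        if not inv(start):
            continue
        for p in paths((start,)):
            # Every successor of the path's last state must satisfy inv.
            if not all(inv(t) for t in successors[p[-1]]):
                return False
    return True
```

With k = 1 this reduces to the 1-inductive argument described above; the k−1 extra predecessor states are precisely the additional context that rules out spurious counterexamples, e.g. a state that violates the invariant's preservation but is unreachable from any invariant-satisfying predecessor.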
As such, this thesis extends existing research and implementation on 1-inductive invariant checking for graph transformation systems to k-induction. In addition, it proposes a technique to perform the base case of the inductive argument in a symbolic fashion, which allows verification of systems with an infinite set of initial states. Both k-inductive invariant checking and its base case are described in formal terms. Based on that, this thesis formulates theorems and constructions to apply this general verification approach for typed graph transformation systems and nested graph constraints - and to formally prove the approach's correctness.
Since unrestricted graph constraints may lead to non-termination or impracticably high execution times given a hypothetical implementation, this thesis also presents a restricted verification approach, which limits the form of graph transformation systems and graph constraints. It is formalized, proven correct, and its procedures terminate by construction. This restricted approach has been implemented in an automated tool and has been evaluated with respect to its applicability to test cases, its performance, and its degree of completeness.
"How Wenzel and Cassie were wrong" – this was the eye-catching title of an article published by Lichao Gao and Thomas McCarthy in 2007, in which fundamental interpretations of wetting behavior were called into question. The authors initiated a discussion on a subject which had been generally accepted a long time ago, and they showed that wetting phenomena were not as fully understood as imagined. Similarly, this thesis tries to put a focus on certain aspects of liquid wetting which have so far been widely neglected in terms of interpretation and experimental proof. While the effect of surface roughness on the macroscopically observed wetting behavior is commonly and reliably interpreted according to the well-known models of Wenzel and Cassie/Baxter, the size scale of the structures responsible for the surface's rough texture has not been of further interest. Analogously, the limits of these models have not been described or explored. Thus, the question arises: what will happen when the size of surface structures is reduced to the size of the contacting liquid molecules themselves? Are common methods still valid, or can deviations from macroscopic behavior be observed?
This thesis aims to create a starting point regarding these questions. In order to investigate the effect of smallest-scale surface structures on liquid wetting, a suitable model system is developed by means of self-assembled monolayer (SAM) formation from (fluoro)organic thiols with differing alkyl chain lengths. Surface topographies are created which rely on size differences of several Ångströms and exhibit surprising wetting behavior depending on the choice of the individual precursor system. Contact angles are measured experimentally that deviate considerably from theoretical calculations based on the Wenzel and Cassie/Baxter models, confirming that sub-nm surface topographies affect wetting. Moreover, experimentally determined wetting properties are found to correlate well with an assumed scale-dependent surface tension of the contacting liquid. This behavior has already been described for scattering experiments taking into account temperature-induced capillary waves on the liquid surface, and had been predicted earlier by theoretical calculations.
However, the investigation of model surfaces requires the provision of suitable precursor molecules which are not commercially available, and thus opens up a door to the exotic chemistry of fluoro-organic materials. During the course of this work, the synthesis of long-chain precursors is examined, with a particular focus on oligomerically pure semi-fluorinated n-alkyl thiols and n-alkyl trichlorosilanes. For this, general protocols for the syntheses of the desired compounds are developed, and product mixtures are separated into fractions of individual chain lengths by fluorous-phase high-performance liquid chromatography (F-HPLC).
The transition from model systems to technically more relevant surfaces and applications is initiated through the deposition of SAMs from long-chain fluorinated n-alkyl trichlorosilanes. Depositions are accomplished by a vapor-phase deposition process conducted on a pilot-scale set-up, which enables exact control of the relevant process parameters. Thus, the influence of varying deposition conditions on the properties of the final coating is examined and analyzed for the most important parameters. The strongest effect is observed for the partial pressure of reactive water vapor, which directly controls the extent of precursor hydrolysis during the deposition process. Experimental results suggest that the formation of ordered monolayers relies on the amount of hydrolyzed silanol species present in the deposition system, irrespective of the exact degree of hydrolysis. However, at increased amounts of species which are able to form cross-linked molecules via condensation reactions, films deteriorate in quality. This effect is assumed to be caused by the introduction of defects within the film and the adsorption of cross-linked agglomerates. Deposition conditions are also investigated for chain-extended precursor species and reveal distinct differences caused by chain elongation.
This thesis is concerned with Data Assimilation, the process of combining model predictions with observations. So-called filters are of special interest. One is interested in computing the probability distribution of the state of a physical process in the future, given (possibly) imperfect measurements. This is done using Bayes’ rule. The first part focuses on hybrid filters, which bridge between the two main groups of filters: ensemble Kalman filters (EnKF) and particle filters. The former are a group of very stable and computationally cheap algorithms, but they require certain strong assumptions. Particle filters, on the other hand, are more generally applicable, but computationally expensive and as such not always suitable for high-dimensional systems. There is therefore a need to combine both groups to benefit from the advantages of each. This can be achieved by splitting the likelihood function when assimilating a new observation, treating one part of it with an EnKF and the other part with a particle filter.
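As a toy illustration of such a likelihood split: for a Gaussian likelihood, raising it to a power alpha is equivalent to inflating the observation error variance to R/alpha, so the update can be done in two tempered stages. The scalar state, identity observation operator, and 50/50 split below are simplifying assumptions of this sketch, not the thesis's setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def hybrid_update(ens, y, R, alpha=0.5):
    """One hybrid assimilation step for a scalar state observed directly
    with Gaussian error variance R. The likelihood factor with exponent
    alpha is handled by a stochastic EnKF (observation error inflated to
    R/alpha); the remaining exponent 1 - alpha is handled by
    particle-filter reweighting and resampling."""
    n = len(ens)
    # EnKF part: tempered likelihood exponent alpha.
    C = np.var(ens, ddof=1)                 # ensemble variance
    Ra = R / alpha
    K = C / (C + Ra)                        # Kalman gain
    perturbed = y + rng.normal(0.0, np.sqrt(Ra), n)
    ens = ens + K * (perturbed - ens)
    # Particle-filter part: remaining exponent 1 - alpha.
    Rb = R / (1.0 - alpha)
    logw = -0.5 * (y - ens) ** 2 / Rb
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return ens[rng.choice(n, size=n, p=w)]  # multinomial resampling
```

For alpha close to 1 this degenerates to a pure EnKF step, and for alpha close to 0 to a bootstrap particle filter update; intermediate values trade the EnKF's stability against the particle filter's generality.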
The second part of this thesis deals with the application of Data Assimilation to multi-scale models and the problems that arise from that. One of the main areas of application for Data Assimilation techniques is predicting the development of oceans and the atmosphere. These processes involve several scales and often balance relations between the state variables. The use of Data Assimilation procedures most often violates relations of that kind, which eventually leads to unrealistic and non-physical predictions of the future development of the process. This work discusses the inclusion of a post-processing step after each assimilation step, in which a minimisation problem is solved that penalises the imbalance. This method is tested on four different models: two Hamiltonian systems and two spatially extended models, which add even more difficulties.
Orogenic peridotites represent portions of the upper subcontinental mantle now incorporated in mountain belts. They often contain layers, lenses and irregular bodies of pyroxenite and eclogite. The origin of this heterogeneity and the nature of these layers are still debated, but they are likely to involve processes such as transient melts coming from the crust or the mantle and segregating in magma conduits, crust–mantle interaction, upwelling of the asthenosphere, and metasomatism. All these processes occur in the lithospheric mantle and are often related to the subduction of crustal rocks to mantle depths. In fact, during subduction, fluids and melts are released from the slab and can interact with the overlying mantle, making the study of deep melts in this environment crucial to understanding mantle heterogeneity and crust–mantle interaction. The aim of this thesis is precisely to better constrain how such processes take place by directly studying the melt trapped as primary inclusions in pyroxenites and eclogites. The Bohemian Massif, the crystalline core of the Variscan belt, is targeted for these purposes because it contains orogenic peridotites with layers of pyroxenite and eclogite, as well as other mafic rocks enclosed in felsic high-pressure and ultra-high-pressure crustal rocks. Within this massif, mafic rocks from two areas have been selected: the garnet clinopyroxenite in the orogenic peridotite of the Granulitgebirge and the ultra-high-pressure eclogite in the diamond-bearing gneisses of the Erzgebirge. In both areas, primary melt inclusions were recognized in the garnet, ranging in size between 2 and 25 µm and with different degrees of crystallization, from glassy to polycrystalline.
They have been investigated with micro-Raman spectroscopy and EDS mapping; the mineral assemblage comprises kumdykolite, phlogopite, quartz, kokchetavite, a phase with a main Raman peak at 430 cm-1, a phase with a main Raman peak at 412 cm-1, white mica and calcite, with some variability in relative abundance depending on the case study. In the Granulitgebirge, osumilite and pyroxene are also present, whereas calcite is one of the main phases in the Erzgebirge. The presence of glass and the mineral assemblage in the nanogranitoids suggest that they were former droplets of melt trapped in the garnet while it was growing. Glassy inclusions and re-homogenized nanogranitoids show a silicate melt that is granitic, hydrous, high in alkalis and weakly peraluminous. In both case studies, the melt is also enriched in Cs, Pb, Rb, U, Th, Li and B, suggesting the involvement of a crustal component, i.e. white mica (the main carrier of Cs, Pb, Rb, Li and B), and a fluid (Cs, Th and U) in the melt-producing reaction. The whole rock in both cases mainly consists of garnet and clinopyroxene with, in Erzgebirge samples, the additional presence of quartz both in the matrix and as a polycrystalline inclusion in the garnet. The latter is interpreted as a quartz pseudomorph after coesite and occurs in the same microstructural position as the melt inclusions. Both rock types show a crustal and subduction-zone signature with garnet and clinopyroxene in equilibrium. Melt was likely present during the metamorphic peak of the rock, as it occurs in garnet.
Our data suggest that the process most likely responsible for the formation of the investigated rocks in both areas is a metasomatic reaction between a melt produced in the crust and mafic layers formerly located in the mantle wedge, in the case of the Granulitgebirge, or in the subducted continental crust itself, in the case of the Erzgebirge. Thus metasomatism in the first case took place in the mantle overlying the slab, whereas in the second case it took place in continental crust that already contained mafic layers before subduction. Moreover, the presence of former coesite in the same microstructural position as the melt inclusions in the Erzgebirge garnets suggests that metasomatism took place at ultra-high-pressure conditions.
Summarizing, in this thesis we provide new insights into the geodynamic evolution of the Bohemian Massif based on the study of melt inclusions in garnet in two different mafic rock types, combining the direct microstructural and geochemical investigation of the inclusions with the whole-rock and mineral geochemistry. We report for the first time data, directly extracted from natural rocks, on the metasomatic melt responsible for the metasomatism of several areas of the Bohemian Massif. Besides the two locations here investigated, belonging to the Saxothuringian Zone, a signature similar to the investigated melt is clearly visible in pyroxenite and peridotite of the T-7 borehole (again Saxothuringian Zone) and the durbachite suite located in the Moldanubian Zone.
Single-column data profiling
(2020)
The research area of data profiling consists of a large set of methods and processes to examine a given dataset and determine metadata about it. Typically, different data profiling tasks address different kinds of metadata, comprising either various statistics about individual columns (Single-column Analysis) or relationships among them (Dependency Discovery). Among the basic statistics about a column are data type, header, the number of unique values (the column's cardinality), maximum and minimum values, the number of null values, and the value distribution. Dependencies involve, for instance, functional dependencies (FDs), inclusion dependencies (INDs), and their approximate versions.
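As a minimal illustration of the two dependency types named above, checked on tables represented as lists of dicts (the table, column names and data are invented for the example, and real FD/IND discovery algorithms are far more sophisticated):

```python
def holds_fd(rows, lhs, rhs):
    """Functional dependency lhs -> rhs: rows that agree on the lhs
    columns must also agree on the rhs columns."""
    seen = {}
    for row in rows:
        key = tuple(row[c] for c in lhs)
        val = tuple(row[c] for c in rhs)
        # First value seen for this key is remembered; any later
        # conflicting value violates the dependency.
        if seen.setdefault(key, val) != val:
            return False
    return True

def holds_ind(rows_a, col_a, rows_b, col_b):
    """Inclusion dependency: every value of col_a in the first table
    also appears in col_b of the second table."""
    return {r[col_a] for r in rows_a} <= {r[col_b] for r in rows_b}

# Invented example: zip code functionally determines city, and every
# zip referenced by an order exists in the address table.
addresses = [
    {"zip": "14482", "city": "Potsdam"},
    {"zip": "14482", "city": "Potsdam"},
    {"zip": "10115", "city": "Berlin"},
]
orders = [{"order_id": 1, "zip": "14482"}]
```

Approximate versions of both dependencies relax these all-or-nothing checks, e.g. by tolerating a bounded fraction of violating rows.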
Data profiling has a wide range of conventional use cases, namely data exploration, cleansing, and integration. The produced metadata is also useful for database management and schema reverse engineering. Data profiling also has more novel use cases, such as big data analytics. The generated metadata describes the structure of the data at hand, how to import it, what it is about, and how much of it there is. Thus, data profiling can be considered an important preparatory task for many data analysis and mining scenarios, helping to assess which data might be useful and to reveal and understand a new dataset's characteristics.
In this thesis, the main focus is on the single-column analysis class of data profiling tasks. We study the impact and the extraction of three of the most important metadata about a column, namely the cardinality, the header, and the number of null values.
First, we present a detailed experimental study of twelve cardinality estimation algorithms. We classify the algorithms and analyze their efficiency, scaling far beyond the original experiments and testing theoretical guarantees. Our results highlight their trade-offs and point out the possibility to create a parallel or a distributed version of these algorithms to cope with the growing size of modern datasets.
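One of the simplest estimators in this family is sketched below for illustration: the K-Minimum-Values idea, which keeps only the k smallest hash values of a stream. The hash function and the choice of k are arbitrary assumptions of this sketch, and it is not claimed to be one of the twelve implementations from the study:

```python
import hashlib
import heapq

def kmv_estimate(values, k=256):
    """K-Minimum-Values cardinality estimate: hash every value into
    [0, 1), keep the k smallest distinct hashes, and estimate the number
    of distinct values as (k - 1) / h_k, where h_k is the k-th smallest
    hash. Needs O(k) memory regardless of stream length."""
    def h(v):
        digest = hashlib.blake2b(str(v).encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") / 2.0**64
    heap = []                         # negated hashes -> behaves as a max-heap
    for v in values:
        x = -h(v)
        if len(heap) < k:
            if x not in heap:         # keep hashes distinct
                heapq.heappush(heap, x)
        elif x > heap[0] and x not in heap:
            heapq.heappushpop(heap, x)  # replace current k-th smallest
    if len(heap) < k:                 # fewer than k distinct values: exact
        return len(heap)
    return round((k - 1) / -heap[0])
```

The relative error of such sketches shrinks roughly as 1/√k, which is the kind of accuracy/space trade-off the experimental comparison quantifies.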
Then, we present a fully automated, multi-phase system to discover human-understandable, representative, and consistent headers for a target table in cases where headers are missing, meaningless, or unrepresentative for the column values. Our evaluation on Wikipedia tables shows that 60% of the automatically discovered schemata are exact and complete. Considering more schema candidates, top-5 for example, increases this percentage to 72%.
Finally, we formally and experimentally show the ghost and fake FDs phenomenon caused by FD discovery over datasets with missing values. We propose two efficient scores, probabilistic and likelihood-based, for estimating the genuineness of a discovered FD. Our extensive set of experiments on real-world and semi-synthetic datasets shows the effectiveness and efficiency of these scores.
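The phenomenon can be reproduced on a two-row toy table: the verdict of an FD check flips depending on how NULL comparisons are treated. The flag, names and data below are illustrative assumptions; the thesis's actual scores are not shown:

```python
from itertools import combinations

def fd_holds(rows, lhs, rhs, null_eq):
    """Pairwise check of the FD lhs -> rhs under two NULL semantics:
    null_eq=True treats NULL = NULL as a match ("null equals null"),
    null_eq=False treats any comparison involving NULL as a non-match."""
    def agree(r, s, cols):
        for c in cols:
            a, b = r[c], s[c]
            if a is None or b is None:
                if not (null_eq and a is None and b is None):
                    return False
            elif a != b:
                return False
        return True
    # The FD holds if every row pair agreeing on lhs also agrees on rhs.
    return all(agree(r, s, rhs)
               for r, s in combinations(rows, 2) if agree(r, s, lhs))

# Two rows that agree on A only if NULLs are considered equal:
table = [{"A": None, "B": 1}, {"A": None, "B": 2}]
```

Under null ≠ null the FD A → B is reported as valid purely because the NULL values never match, while under null = null the same data invalidates it; which verdict deserves trust is what the proposed genuineness scores are meant to estimate.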
Lately, the integration of upconverting nanoparticles (UCNP) in industrial, biomedical and scientific applications has been increasingly accelerating, owing to the exceptional photophysical properties that UCNP offer. Some of the most promising applications lie in the field of medicine and bioimaging due to such advantages as, among others, deeper tissue penetration, reduced optical background, possibility for multicolor imaging, and lower toxicity, compared to many known luminophores. However, some questions regarding not only the fundamental photophysical processes, but also the interaction of the UCNP with other luminescent reporters frequently used for bioimaging and the interaction with biological media remain unanswered. These issues were the primary motivation for the presented work.
This PhD thesis investigated several aspects of various properties and possibilities for bioapplications of Yb3+,Tm3+-doped NaYF4 upconverting nanoparticles. First, the effect of Gd3+ doping on the structure and upconverting behaviour of the nanocrystals was assessed. The ageing process of the UCNP in cyclohexane was studied over 24 months on the samples with different Gd3+ doping concentrations. Structural information was gathered by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), and discussed in relation to spectroscopic results, obtained through multiparameter upconversion luminescence studies at various temperatures (from 4 K to 295 K). Time-resolved and steady-state emission spectra recorded over this ample temperature range allowed for a deeper understanding of photophysical processes and their dependence on structural changes of UCNP.
A new protocol using a commercially available high boiling solvent allowed for faster and more controlled production of very small and homogeneous UCNP with better photophysical properties, and the advantages of a passivating NaYF4 shell were shown.
Förster resonance energy transfer (FRET) between four different species of NaYF4: Yb3+, Tm3+ UCNP (synthesized using the improved protocol) and a small organic dye was studied. The influence of UCNP composition and the proximity of Tm3+ ions (donors in the process of FRET) to acceptor dye molecules have been assessed. The brightest upconversion luminescence was observed in the UCNP with a protective inert shell. UCNP with Tm3+ ions only in the shell were the least bright, but showed the most efficient energy transfer.
In the final part, two surface modification strategies were applied to make UCNP soluble in water, which simultaneously allowed for linking them via a non-toxic copper-free click reaction to the liposomes, which served as models for further cell experiments. The results were assessed on a confocal microscope system, which was made possible by lesser known downshifting properties of Yb3+, Tm3+-doped UCNP. Preliminary antibody-staining tests using two primary and one dye-labelled secondary antibodies were performed on MDCK-II cells.
Over the last decades, the Arctic regions of the Earth have warmed at a rate two to three times faster than the global average, a phenomenon called Arctic Amplification. A complex, non-linear interplay of physical processes and unique peculiarities of the Arctic climate system is responsible for this, but the relative role of individual processes remains debated. This thesis focuses on climate change and related processes on Svalbard, an archipelago in the North Atlantic sector of the Arctic, which is shown to be a "hotspot" of the amplified recent warming during winter. In this highly dynamic region, both oceanic and atmospheric large-scale transports of heat and moisture interact with spatially inhomogeneous surface conditions, and the corresponding energy exchange strongly shapes the atmospheric boundary layer. In the first part, pan-Svalbard gradients in the surface air temperature (SAT) and the sea ice extent (SIE) in the fjords are quantified and characterized. This analysis is based on observational data from meteorological stations, operational sea ice charts, and hydrographic observations from the adjacent ocean, which cover the period 1980–2016. It is revealed that typical estimates of SIE during late winter range from 40–50% (80–90%) in the western (eastern) parts of Svalbard. However, strong SAT warming during winter of the order of 2–3 K per decade drives excessive ice loss, leaving fjords in the western parts essentially ice-free in recent winters. It is further demonstrated that warm water currents on the west coast of Svalbard, as well as meridional winds, contribute to regional differences in the SIE evolution. In particular, the proximity to warm water masses of the West Spitsbergen Current can explain 20–37% of the SIE variability in fjords on western Svalbard, while meridional winds and the associated ice drift may regionally explain 20–50% of the SIE variability in the north and northeast. Strong SAT warming has overruled these impacts in recent years, though.
In the next part of the analysis, the contribution of large-scale atmospheric circulation changes to the Svalbard temperature development over the last 20 years is investigated. A study employing kinematic backward air trajectories for Ny-Ålesund reveals a shift in the source regions of lower-tropospheric air over time for both the winter and the summer season. In winter, air in the recent decade is more often of lower-latitude Atlantic origin and less often of Arctic origin. This affects heat and moisture advection towards Svalbard, potentially modifying clouds and longwave downward radiation in that region. A closer investigation indicates that this shift during winter is associated with a strengthened Ural blocking high and Icelandic low, and contributes about 25% to the observed winter warming on Svalbard over the last 20 years. Conversely, circulation changes during summer include a strengthened Greenland blocking high, which leads to more frequent cold air advection from the central Arctic towards Svalbard and less frequent air-mass origins in the lower latitudes of the North Atlantic. Hence, circulation changes during winter are shown to have an amplifying effect on the recent warming on Svalbard, while summer circulation changes tend to mask the warming.
An observational case study using upper air soundings from the AWIPEV research station in Ny-Ålesund during May–June 2017 underlines that such circulation changes during summer are associated with tropospheric anomalies in temperature, humidity and boundary layer height.
In the last part of the analysis, the regional representativeness of the changes around Svalbard described above for the broader Arctic is investigated. To this end, the terms of the diagnostic temperature equation in the Arctic-wide lower troposphere are examined in the ERA-Interim atmospheric reanalysis product. Significant positive trends in diabatic heating rates, consistent with latent heat transfer to the atmosphere over regions of increasing ice melt, are found for all seasons over the Barents and Kara Seas, and in individual months in the vicinity of Svalbard. The warm (cold) advection trends during winter (summer) on Svalbard introduced above are successfully reproduced. In winter, they are regionally confined to the Barents Sea and Fram Strait, between 70°N and 80°N, representing a unique feature within the whole Arctic. Summer cold advection trends are confined to the area between eastern Greenland and Franz Josef Land, enclosing Svalbard.
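For orientation, the diagnostic temperature equation whose terms are examined here is commonly written, in pressure coordinates, in the following standard textbook form (this is the conventional formulation, not quoted from the thesis):

```latex
\frac{\partial T}{\partial t}
  = \underbrace{-\,\mathbf{v}_h \cdot \nabla_p T}_{\text{horizontal advection}}
  \;\underbrace{-\,\omega \left( \frac{\partial T}{\partial p} - \frac{R T}{c_p\, p} \right)}_{\text{adiabatic vertical term}}
  \;+\; \underbrace{\frac{\dot{Q}}{c_p}}_{\text{diabatic heating}}
```

Here $\mathbf{v}_h$ is the horizontal wind, $\omega$ the vertical velocity in pressure coordinates, $R$ the gas constant of dry air, $c_p$ the specific heat at constant pressure, and $\dot{Q}$ the diabatic heating rate; trends in the right-hand-side terms can then be attributed separately to advective and diabatic processes.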
Cleft exhaustivity
(2020)
In this dissertation, a series of experimental studies is presented which demonstrate that the exhaustive inference of focus-background it-clefts in English and their cross-linguistic counterparts in Akan, French, and German is neither robust nor systematic. The inter-speaker and cross-linguistic variability is accounted for with a discourse-pragmatic approach to cleft exhaustivity, in which, following Pollard & Yasavul (2016), the exhaustive inference is derived from an interaction with another layer of meaning, namely the existence presupposition encoded in clefts.
To investigate the reliability and stability of spherical harmonic models based on archeo-/paleomagnetic data, 2,000 geomagnetic models were calculated. All models are based on the same data set, but with randomized uncertainties. Comparison of these models to the geomagnetic field model gufm1 showed that large-scale magnetic field structures up to spherical harmonic degree 4 are stable across all models. By ranking all models according to the agreement of their dipole coefficients with gufm1, more realistic uncertainty estimates were derived than those provided by the authors of the data.
The derived uncertainty estimates were used in further modelling, which combines archeo-/paleomagnetic and historical data. The large differences in data count, accuracy, and coverage between these two very different data sources made it necessary to introduce a time-dependent spatial damping, which was constructed to constrain the spatial complexity of the model. Finally, 501 models were calculated by treating each data point as a Gaussian random variable whose mean is the original value and whose standard deviation is its uncertainty. The final model arhimag1k is calculated by taking the mean of the 501 sets of Gauss coefficients. arhimag1k fits various dependent and independent data sets well. It shows an early reverse flux patch at the core-mantle boundary between 1000 AD and 1200 AD at the present-day location of the South Atlantic Anomaly. Another interesting feature is a high-latitude flux patch over Greenland between 1200 and 1400 AD. The dipole moment remains essentially constant between 1600 and 1840 AD.
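The ensemble procedure described above can be sketched as follows. This is a toy stand-in: a polynomial least-squares fit replaces the actual spherical harmonic inversion with spatial damping, and all numbers are illustrative, not values from arhimag1k.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the inversion: fit polynomial "Gauss coefficients"
# to noisy observations; the real model fits spherical harmonics in space/time.
t = np.linspace(0.0, 1.0, 50)            # observation epochs (normalized)
true_coeffs = np.array([2.0, -1.0, 0.5])
data = np.polyval(true_coeffs, t)        # "observed" field values
sigma = 0.3 * np.ones_like(data)         # reported uncertainties per datum

# Each datum is a Gaussian random variable (mean = value, sd = uncertainty);
# one model is computed per noise realization.
n_models = 501
ensemble = np.empty((n_models, 3))
for i in range(n_models):
    noisy = data + rng.normal(0.0, sigma)
    ensemble[i] = np.polyfit(t, noisy, 2)    # one set of "coefficients"

final_coeffs = ensemble.mean(axis=0)     # analogue of the arhimag1k mean model
spread = ensemble.std(axis=0)            # empirical uncertainty per coefficient
```

The spread of the ensemble provides exactly the kind of data-driven uncertainty estimate the ranking against gufm1 is used for in the thesis.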
In the second part of the thesis, four new paleointensity estimates from four different lava flows on the island of Fogo (Cape Verde) are presented. The data are fitted well by arhimag1k, with the exception of the value of 28.3 microtesla at 1663, which is approximately 10 microtesla lower than the model suggests.
The goal of this thesis was to thoroughly investigate the behavior of multimode fibres to aid the development of modern and forthcoming fibre-fed spectrograph systems. Based on the Eigenmode Expansion Method, a field propagation model was created that can emulate effects in fibres relevant for astronomical spectroscopy, such as modal noise, scrambling, and focal ratio degradation. These effects are of major concern for any fibre-coupled spectrograph used in astronomical research. Changes in the focal ratio, modal distribution of light or non-perfect scrambling limit the accuracy of measurements, e.g. the flux determination of the astronomical object, the sky-background subtraction and detection limit for faint galaxies, or the spectral line position accuracy used for the detection of extra-solar planets.
Usually, fibres used for astronomical instrumentation are characterized empirically through tests. The results of this work make it possible to predict the fibre behaviour under various conditions, using sophisticated software tools to simulate the waveguide behaviour and mode transport of fibres.
The simulation environment works with two software interfaces. The first is the mode solver module FemSIM from RSoft. It is used to calculate all the propagation modes and effective refractive indices of a given system. The second interface consists of Python scripts which enable the simulation of the near- and far-field outputs of a given fibre. The characteristics of the input field can be manipulated to emulate real conditions. Focus variations, spatial translation, angular fluctuations, and disturbances through the mode coupling factor can also be simulated.
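The core idea of propagating a field as a sum of eigenmodes can be illustrated with a minimal 1-D sketch. The mode shapes, effective indices, and excitation coefficients below are made up for illustration; in the real pipeline they come from the FemSIM solver.

```python
import numpy as np

# Hypothetical two-mode fibre cross-section (1-D stand-in): the output field
# is a coherent sum of modes, each accumulating its own phase over the length.
x = np.linspace(-1.0, 1.0, 400)              # transverse coordinate (a.u.)
modes = [np.cos(0.5 * np.pi * x),            # fundamental-like mode
         np.sin(1.5 * np.pi * x)]            # higher-order (antisymmetric) mode
n_eff = np.array([1.4500, 1.4497])           # assumed effective indices
c = np.array([0.8, 0.6])                     # assumed excitation coefficients

wavelength = 633e-9                          # m
length = 1.0                                 # fibre length, m
beta = 2 * np.pi * n_eff / wavelength        # propagation constants

# Coherent propagation: modal interference produces a speckle-like near field
field = sum(ci * mi * np.exp(1j * bi * length)
            for ci, mi, bi in zip(c, modes, beta))
near_field_coherent = np.abs(field) ** 2

# Incoherent propagation: mode intensities add, interference washes out
near_field_incoherent = sum((ci * mi) ** 2 for ci, mi in zip(c, modes))
```

Because the two modes are orthogonal, the total power is the same in both cases; only its spatial distribution differs, which is precisely the modal-noise effect studied later.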
At present, either completely coherent or completely incoherent propagation can be simulated; partial coherence was not addressed in this work. Another limitation of the simulations is that they work exclusively in the monochromatic case and that the loss coefficient of the fibres is not considered. Nevertheless, the simulations were able to match the results of realistic measurements.
To test the validity of the simulations, real fibre measurements were used for comparison. Two fibres with different cross-sections were characterized. The first fibre had a circular cross-section, and the second one had an octagonal cross-section. The utilized test-bench was originally developed for the prototype fibres of the 4MOST fibre feed characterization. It allowed for parallel laser beam measurements, light cone measurements, and scrambling measurements. Through the appropriate configuration, the acquisition of the near- and/or far-field was feasible.
By means of modal noise analysis, it was possible to compare the near-field speckle patterns of simulations and measurements as a function of the input angle. The spatial frequencies that originate from the modal interference could be analyzed by using the power spectral density analysis. Measurements and simulations yielded similar results. Measurements with induced modal scrambling were compared to simulations using incoherent propagation and once again similar results were achieved. Through both measurements and simulations, the enlargement of the near-field distribution could be observed and analyzed. The simulations made it possible to explain incoherent intensity fluctuations that appear in real measurements due to the field distribution of the active propagation modes.
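The power-spectral-density step mentioned above amounts to a spatial-frequency analysis of a cut through the near-field speckle pattern. The sketch below uses a synthetic fringe pattern with made-up parameters, not measured data, to show how the dominant fringe frequency is recovered.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
x = np.arange(n)

# Toy "near field": a smooth envelope plus modal interference fringes at an
# assumed spatial frequency of 0.08 cycles/pixel with a random phase.
envelope = np.exp(-((x - n / 2) / 120.0) ** 2)
speckle = envelope * (1.0 + 0.5 * np.cos(2 * np.pi * 0.08 * x
                                         + rng.uniform(0, 2 * np.pi)))

window = np.hanning(n)                          # reduce spectral leakage
spectrum = np.fft.rfft((speckle - speckle.mean()) * window)
psd = np.abs(spectrum) ** 2 / n                 # periodogram estimate
freqs = np.fft.rfftfreq(n, d=1.0)               # cycles per pixel

band = freqs > 0.02                             # ignore the slow envelope
peak_freq = freqs[band][np.argmax(psd[band])]   # dominant fringe frequency
```

Comparing such PSD peaks between measured and simulated speckle patterns, as a function of input angle, is the comparison performed in the thesis.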
By applying a Voigt analysis to the far-field distribution, it was possible to separate the modal diffusion component in order to compare it with the simulations. Through an appropriate assessment, the modal diffusion component as a function of the input angle could be translated into an angular divergence. The simulations gave the minimal angular divergence of the system. From the mean difference between simulations and measurements, a figure of merit is derived which can be used to characterize the angular divergence of real fibres using the simulations. Furthermore, it was possible to simulate light cone measurements. Due to the overall consistent results, it can be stated that the simulations represent a good tool to assist the fibre characterization process for fibre-fed spectrograph systems.
This work was possible through the BMBF Grant 05A14BA1 which was part of the phase A study of the fibre system for MOSAIC, a multi-object spectrograph for the Extremely Large Telescope (ELT-MOS).
This work presents a new design for programming environments that promote the exploration of domain-specific software artifacts and the construction of graphical tools for such program comprehension tasks. In complex software projects, tool building is essential because domain- or task-specific tools can support decision making by representing concerns concisely with low cognitive effort. In contrast, generic tools can only support anticipated scenarios, which usually align with programming language concepts or well-known project domains.
However, the creation and modification of interactive tools is expensive because the glue that connects data to graphics is hard to find, change, and test. Even if valuable data is available in a common format and even if promising visualizations could be populated, programmers have to invest many resources to make changes in the programming environment. Consequently, only ideas of predictably high value will be implemented. In the non-graphical, command-line world, the situation looks different and inspiring: programmers can easily build their own tools as shell scripts by configuring and combining filter programs to process data.
We propose a new perspective on graphical tools and provide a concept to build and modify such tools with a focus on high quality, low effort, and continuous adaptability. That is, (1) we propose an object-oriented, data-driven, declarative scripting language that reduces the amount of glue code for view-model specifications and governs its effects, and (2) we propose a scalable UI-design language that promotes short feedback loops in an interactive, graphical environment such as Morphic, known from the Self and Squeak/Smalltalk systems.
We implemented our concept as a tool building environment, which we call VIVIDE, on top of Squeak/Smalltalk and Morphic. We replaced existing code browsing and debugging tools to iterate within our solution more quickly. In several case studies with undergraduate and graduate students, we observed that VIVIDE can be applied to many domains such as live language development, source-code versioning, modular code browsing, and multi-language debugging. Then, we designed a controlled experiment to measure the effect on the time to build tools. Several pilot runs showed that training is crucial and, presumably, takes days or weeks, which implies a need for further research.
As a result, programmers as users can directly work with tangible representations of their software artifacts in the VIVIDE environment. Tool builders can write domain-specific scripts to populate views to approach comprehension tasks from different angles. Our novel perspective on graphical tools can inspire the creation of new trade-offs in modularity for both data providers and view designers.
This dissertation aims to deliver a transcendental interpretation of Immanuel Kant's Kritik der Urteilskraft, considering both its coherence with the other critical works and the internal coherence of the work itself. This interpretation is called transcendental insofar as special emphasis is placed on the newly introduced cognitive power, namely the reflective power of judgement, guided by the a priori principle of purposiveness. In this way, the seeming manifold of themes, varying from judgements of taste through culture to teleological judgements about natural purposes, is discussed exclusively with regard to its dependence on this faculty and its transcendental principle. Whereas contemporary scholarship often treats the book as a fragmented work consisting of different independent parts, my focus lies on the continuity constituted primarily by the activity of the power of judgement.
Going back to certain central yet silently presupposed concepts adopted from previous critical works, the main contribution of this study is to integrate the KU within the overarching critical project. More specifically, I argue how the need for the presupposition made by the reflective power of judgement follows from the peculiar character of our sense-dependent discursive mind. Because we are sense-dependent discursive minds, we do not and cannot have immediate insight into all of nature's features. The particular constitution of our mind rather demands conceptually informed representations which refer to objects mediately.
Having said that, the principle of purposiveness, namely the presupposition that nature is organized in concert with the particular constitution of our mind, is a necessary condition for the possibility of reflection on nature's empirical features. Reflection refers on my account to a process of selecting features in order to allow a classification, including reflection on the method, means and selection criteria. Rather than directly contributing to cognition, like the categories, reflective judgements thus express our ignorance when it comes to the motivation behind nature's design, and this is most forcefully expressed by judgements of taste and teleological judgements about organized matter. In this way, reflection, regardless whether it is manifested in concept acquisition, scientific systematization, judgements of taste or judgements about organized matter, relies on a principle of the power of judgement which is revealed and justified in this transcendental inquiry.
The development of bioinspired self-assembling materials, such as hydrogels, with promising applications in cell culture, tissue engineering and drug delivery is a current focus in material science. Biogenic or bioinspired proteins and peptides are frequently used as versatile building blocks for extracellular matrix (ECM) mimicking hydrogels. However, precisely controlling and reversibly tuning the properties of these building blocks and the resulting hydrogels remains challenging. Precise control over the viscoelastic properties and self-healing abilities of hydrogels are key factors for developing intelligent materials to investigate cell-matrix interactions. Thus, there is a need to develop building blocks that are self-healing, tunable and self-reporting. This thesis aims at the development of α-helical peptide building blocks, called coiled coils (CCs), which integrate these desired properties. Self-healing is a direct result of the fast self-assembly of these building blocks when used as material cross-links. Tunability is realized by means of reversible histidine (His)-metal coordination bonds. Lastly, by implementing a fluorescent readout that indicates the CC assembly state, self-reporting hydrogels are obtained.
Coiled coils are abundant protein folding motifs in Nature, which often have mechanical function, such as in myosin or fibrin. Coiled coils are superhelices made up of two or more α-helices wound around each other. The assembly of CCs is based on their repetitive sequence of seven amino acids, so-called heptads (abcdefg). Hydrophobic amino acids in the a and d position of each heptad form the core of the CC, while charged amino acids in the e and g position form ionic interactions. The solvent-exposed positions b, c and f are excellent targets for modifications since they are more variable. His-metal coordination bonds are strong, yet reversible interactions formed between the amino acid histidine and transition metal ions (e.g. Ni2+, Cu2+ or Zn2+). His-metal coordination bonds essentially contribute to the mechanical stability of various high-performance proteinaceous materials, such as spider fangs, Nereis worm jaws and mussel byssal threads. Therefore, I bioengineered reversible His-metal coordination sites into a well-characterized heterodimeric CC that served as tunable material cross-link. Specifically, I took two distinct approaches facilitating either intramolecular (Chapter 4.2) and/or intermolecular (Chapter 4.3) His-metal coordination.
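The heptad register described above (core a/d positions, ionic e/g positions, solvent-exposed b/c/f positions) can be made concrete with a short sketch. The sequence used here is an idealized, made-up heptad repeat, not the heterodimeric CC engineered in the thesis.

```python
# Assign heptad positions (a-g) along a coiled-coil sequence and pick out the
# solvent-exposed b, c, f sites that tolerate modification (e.g. His insertion).
HEPTAD = "abcdefg"

def heptad_positions(sequence, register_offset=0):
    """Return (residue, heptad position) pairs for a sequence in register."""
    return [(aa, HEPTAD[(i + register_offset) % 7])
            for i, aa in enumerate(sequence)]

seq = "LKAIAQELKAIAQE"          # two idealized heptads (hypothetical sequence)
assignment = heptad_positions(seq)

core = [aa for aa, pos in assignment if pos in "ad"]      # hydrophobic core
exposed = [aa for aa, pos in assignment if pos in "bcf"]  # modification sites
```

Keeping track of the register in this way is what guarantees that an inserted His lands on the solvent-exposed face rather than in the hydrophobic core.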
Previous research suggested that force-induced CC unfolding in shear geometry starts from the points of force application. In order to tune the stability of a heterodimeric CC in shear geometry, I inserted His in the b and f positions at the termini where force is applied (Chapter 4.2). The spacing of the His residues is such that intra-CC His-metal coordination bonds can form to bridge one helical turn within the same helix, although inter-CC coordination bonds are not generally excluded. Starting with Ni2+ ions, Raman spectroscopy showed that the CC maintained its helical structure and that the His residues were able to coordinate Ni2+. Circular dichroism (CD) spectroscopy revealed that the melting temperature of the CC increased by 4 °C in the presence of Ni2+. Using atomic force microscope (AFM)-based single-molecule force spectroscopy, the energy landscape parameters of the CC were characterized in the absence and presence of Ni2+. His-Ni2+ coordination increased the rupture force by ~10 pN, accompanied by a decrease in the dissociation rate constant. To test whether this stabilizing effect can be transferred from the single-molecule level to the bulk viscoelastic material properties, the CC building block was used as a non-covalent cross-link for star-shaped poly(ethylene glycol) (star-PEG) hydrogels. Shear rheology revealed a 3-fold higher relaxation time in His-Ni2+-coordinating hydrogels compared to hydrogels without metal ions. This stabilizing effect was fully reversible when using an excess of the metal chelator ethylenediaminetetraacetate (EDTA). The hydrogel properties were further investigated using different metal ions, i.e. Cu2+, Co2+ and Zn2+. Overall, these results suggest that Ni2+, Cu2+ and Co2+ primarily form intra-CC coordination bonds, while Zn2+ also participates in inter-CC coordination bonds. This may be a direct result of its different coordination geometry.
Intermolecular His-metal coordination bonds in the terminal regions of the protein building blocks of mussel byssal threads are primarily formed by Zn2+ and were found to be intimately linked to the higher-order assembly and self-healing of the thread. In the above example, the contributions of intra-CC and inter-CC His-Zn2+ coordination cannot be disentangled. In Chapter 4.3, I redesigned the CC to prohibit the formation of intra-CC His-Zn2+ coordination bonds, focusing only on inter-CC interactions. Specifically, I inserted His in the solvent-exposed f positions of the CC to focus on the effect of metal-induced higher-order assembly of CC cross-links. Raman and CD spectroscopy revealed that this CC building block forms α-helical Zn2+-cross-linked aggregates. Using this CC as a cross-link for star-PEG hydrogels, I showed that the material properties can be switched from viscoelastic in the absence of Zn2+ to elastic-like in the presence of Zn2+. Moreover, the relaxation time of the hydrogel was tunable over three orders of magnitude when using different Zn2+:His ratios. This tunability is attributed to a progressive transformation of single CC cross-links into His-Zn2+-cross-linked aggregates, with inter-CC His-Zn2+ coordination bonds serving as an additional cross-linking mode.
Rheological characterization of the hydrogels with inter-CC His-Zn2+ coordination raised the question of whether, under applied shear strain, only the His-Zn2+ coordination bonds between CCs rupture or also the CCs themselves. In general, the number of CC cross-links initially formed in the hydrogel, as well as the number of CC cross-links breaking under force, remains to be elucidated. In order to probe these questions more deeply and monitor the state of the CC cross-links when force is applied, a fluorescent reporter system based on Förster resonance energy transfer (FRET) was introduced into the CC (Chapter 4.4). For this purpose, the donor-acceptor pair carboxyfluorescein and tetramethylrhodamine was used. The resulting self-reporting CC showed a FRET efficiency of 77% in solution. Using this fluorescently labeled CC as a self-reporting, reversible cross-link in an otherwise covalently cross-linked star-PEG hydrogel enabled the detection of the change in FRET efficiency under compression force. This proof-of-principle result sets the stage for implementing the fluorescently labeled CCs as molecular force sensors in non-covalently cross-linked hydrogels.
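The physics behind this self-reporting readout is the steep distance dependence of FRET, which can be sketched in a few lines. The Förster radius used below is an assumed, plausible value for a fluorescein/rhodamine-type pair, not a number from the thesis.

```python
# FRET efficiency falls off with the sixth power of the donor-acceptor
# separation r; R0 (the Förster radius, assumed 5.5 nm here) is the distance
# at which the efficiency is exactly 50 %.
def fret_efficiency(r_nm, r0_nm=5.5):
    return r0_nm ** 6 / (r0_nm ** 6 + r_nm ** 6)

# An intact CC cross-link holds the dyes close and reports high efficiency;
# a ruptured cross-link separates them and the efficiency collapses.
e_close = fret_efficiency(4.0)   # hypothetical dye separation, folded CC
e_far = fret_efficiency(9.0)     # hypothetical separation after rupture
```

It is this switch-like contrast between the bound and unbound states that makes the labeled CC usable as a molecular force sensor.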
In summary, this thesis highlights that rationally designed CCs are excellent reversibly tunable, self-healing, and self-reporting hydrogel cross-links with high application potential in bioengineering and biomedicine. For the first time, I demonstrated that His-metal coordination-based stabilization can be transferred from the single-CC level to the bulk material, with clear viscoelastic consequences. Insertion of His at specific sequence positions was used to implement a second non-covalent cross-linking mode via intermolecular His-metal coordination. This His-metal-binding-induced aggregation of the CCs enabled reversible tuning of the hydrogel properties from viscoelastic to elastic-like. As a proof of principle for establishing self-reporting CCs as material cross-links, I labeled a CC with a FRET pair. The fluorescently labelled CC acts as a molecular force sensor, and preliminary results suggest that it enables the detection of hydrogel cross-link failure under compression force. In the future, fluorescently labeled CC force sensors will likely be used not only as intelligent cross-links to study the failure of hydrogels but also to investigate cell-matrix interactions in 3D, down to the single-molecule level.
Escaping the plant cell
(2020)
Subsea permafrost is perennially cryotic earth material that lies offshore. Most submarine permafrost is relict terrestrial permafrost beneath the Arctic shelf seas, was inundated after the last glaciation, and has been warming and thawing ever since. It is a reservoir and confining layer for gas hydrates and has the potential to release greenhouse gases and affect global climate change. Furthermore, subsea permafrost thaw destabilizes coastal infrastructure. While numerous studies focus on its distribution and rate of thaw over glacial timescales, these studies have not been brought together and examined in their entirety to assess rates of thaw beneath the Arctic Ocean. In addition, there is still a large gap in our understanding of sub-aquatic permafrost processes on finer spatial and temporal scales. The degradation rate of subsea permafrost is influenced by the initial conditions upon submergence. Terrestrial permafrost that has already undergone warming, partial thawing or loss of ground ice may react differently to inundation by seawater compared to previously undisturbed ice-rich permafrost. Heat conduction models are sufficient to model the thaw of thick subsea permafrost from the bottom, but few studies have included salt diffusion for top-down chemical degradation in shallow waters characterized by mean annual cryotic conditions on the seabed. Simulating salt transport is critical for assessing degradation rates for recently inundated permafrost, which may accelerate in response to warming shelf waters, a lengthening open water season, and faster coastal erosion rates. In the nearshore zone, degradation rates are also controlled by seasonal processes like bedfast ice, brine injection, seasonal freezing under floating ice conditions and warm freshwater discharge from large rivers. The interplay of all these variables is complex and needs further research. 
To fill this knowledge gap, this thesis investigates sub-aquatic permafrost along the southern coast of the Bykovsky Peninsula in eastern Siberia. Sediment cores and ground temperature profiles were collected at a freshwater thermokarst lake and two thermokarst lagoons in 2017. At this site, the coastline is retreating, and seawater is inundating various types of permafrost: sections of ice-rich Pleistocene permafrost (Yedoma) cliffs at the coastline alternate with lagoons and lower elevation previously thawed and refrozen permafrost basins (Alases). Electrical resistivity surveys with floating electrodes were carried out to map ice-bearing permafrost and taliks (unfrozen zones in the permafrost, usually formed beneath lakes) along the diverse coastline and in the lagoons. Combined with the borehole data, the electrical resistivity results permit estimation of contemporary ice-bearing permafrost characteristics, distribution, and occasionally, thickness. To conceptualize possible geomorphological and marine evolutionary pathways to the formation of the observed layering, numerical models were applied. The developed model incorporates salt diffusion and seasonal dynamics at the seabed, including bedfast ice. Even along coastlines with mean annual non-cryotic boundary conditions like the Bykovsky Peninsula, the modelling results show that salt diffusion minimizes seasonal freezing of the seabed, leading to faster degradation rates compared to models without salt diffusion. Seasonal processes are also important for thermokarst lake to lagoon transitions because lagoons can generate cold hypersaline conditions underneath the ice cover. My research suggests that ice-bearing permafrost can form in a coastal lagoon environment, even under floating ice. Alas basins, however, may degrade more than twice as fast as Yedoma permafrost in the first several decades of inundation. 
In addition to a lower ice content compared to Yedoma permafrost, Alas basins may be pre-conditioned with salt from adjacent lagoons. Considering the widespread distribution of thermokarst in the Arctic, its integration into geophysical models and offshore surveys is important to quantify and understand subsea permafrost degradation and aggradation. Through numerical modelling, fieldwork, and a circum-Arctic review of subsea permafrost literature, this thesis provides new insights into sub-aquatic permafrost evolution in saline coastal environments.
NADPH is an essential cofactor that drives biosynthetic reactions in all living organisms. It is a reducing agent and thus the electron donor of anabolic reactions that produce major cellular components as well as many products in biotechnology. Indeed, the engineering of metabolic pathways for the production of many products is often limited by the availability of NADPH. One common strategy to address this issue is to swap the cofactor specificity of enzymes from NADH to NADPH. However, this process is time-consuming and challenging because multiple parameters need to be engineered in parallel. Therefore, the first aim of this project was to establish an efficient metabolic biosensor to select for enzymes that can reduce NADP+. An NADPH auxotroph strain was constructed by deleting the major reactions involved in NADPH biosynthesis in E. coli's central carbon metabolism, with the exception of 6-phosphogluconate dehydrogenase. To validate this strain, two enzymes were tested in the presence of several carbon sources: a dihydrolipoamide dehydrogenase variant of E. coli harboring seven mutations and a formate dehydrogenase (FDH) from Mycobacterium vaccae N10 harboring four mutations were found to support NADPH biosynthesis and growth. The strain was subjected to adaptive laboratory evolution with the goal of testing its robustness under different carbon sources. Our evolution experiment resulted in the random mutagenesis of the malic enzyme (maeA), enabling it to produce NADPH. The additional deletion of maeA rendered a more robust second-generation biosensor strain for NADP+ reduction. We devised a structure-guided directed evolution approach to change the cofactor specificity of Pseudomonas sp. 101 FDH. To this end, a library of >10^6 variants was tested using in vivo selection.
Compared to the best engineered enzymes reported, our best variant, carrying five mutations, shows 5-fold higher catalytic efficiency and 13-fold higher specificity towards NADP+, as well as 2-fold higher affinity towards formate. In conclusion, we demonstrate the potential of in vivo selection and evolution-guided approaches to develop better NADPH biosensors and to engineer cofactor specificity through the simultaneous improvement of multiple parameters (kinetic efficiency with NADP+, specificity towards NADP+, and affinity towards formate), which is a major challenge in protein engineering due to the existence of trade-offs and epistasis.
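The figures of merit behind these fold-change comparisons can be written down explicitly. The kinetic numbers below are illustrative placeholders, not measured values for any FDH variant.

```python
# Catalytic efficiency is kcat/Km for a given cofactor; cofactor specificity
# is the ratio of the catalytic efficiencies with NADP+ versus NAD+.
def catalytic_efficiency(kcat_per_s, km_uM):
    return kcat_per_s / km_uM            # units: s^-1 uM^-1

def cofactor_specificity(eff_nadp, eff_nad):
    """How strongly the enzyme prefers NADP+ over NAD+."""
    return eff_nadp / eff_nad

# Hypothetical variant: modest kcat gain with NADP+, weakened NAD+ binding
eff_nadp = catalytic_efficiency(kcat_per_s=2.0, km_uM=150.0)
eff_nad = catalytic_efficiency(kcat_per_s=1.0, km_uM=1500.0)
specificity = cofactor_specificity(eff_nadp, eff_nad)
```

Improving all three quantities at once (efficiency with NADP+, the NADP+/NAD+ ratio, and formate affinity) is what makes the engineering problem multi-objective, as the abstract notes.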
The idea of critical childhood studies is a relatively young disciplinary undertaking in eastern Africa, and so many inquiries have not yet been carried out. This field is a potentially important socio-political marker, among others, of some narratives that have emerged out of eastern Africa. Towards this end, my research seeks out an archaeology of childhood in eastern Africa. A monochromatic hue has often painted the eastern African childhood: this broad stroke portrays the childhood as characterized by want, composing its image in terms of the war-child, the poverty-stricken, the disease-ridden, and the aid-begging. The pitfall of this consciousness is that it erases the differentiated and pluralist nature of the eastern African childhood. I therefore hypothesise that childhood is a discourse within which institutional vectors become conduits of certain statement-making, both process-wise and content-wise. As such, a critical childhood study is a theatre for staging and unearthing its joys, tribulations, cultural constructions, and even political interventions. To this end, childhood and its literatures not only reflect but also contribute to meaning-making and the worldliness thereof. In an attempt to move away from an un-nuanced, often monodirectional depiction, I seek to present a chronologically synchronic and diachronic analysis of childhood in eastern Africa. Accordingly, I excavate a chronological construction of childhood within this geopolitical region. The main conceptual anchorage is Francis Nyamnjoh, who tells of the African occupying a life on convivial frontiers. He theorises an Africa that is involved in technologies of self-definition that privilege conversations, fluidity of being, and relational connections on a globalised scale. I also appropriate the notion of Bula Matadi from the Congo as a decolonialist epistemological exercise to break apart polarising representations and practices of childhood in eastern Africa.
This opens a space for an unbounded reconfiguration of childhood in eastern Africa. This book works on and with archival matter in a cross-disciplinary manner, ranging from pre-colonial to post-colonial eastern Africa. It explores the trajectory of the discourse of childhood in the region in order to eclectically investigate childhood in fictional and non-fictional representations.
Chloroplasts are the photosynthetic organelles in plant and algal cells that enable photoautotrophic growth. Due to their prokaryotic origin, modern-day chloroplast genomes harbor 100 to 200 genes. These genes encode core components of the photosynthetic complexes and the chloroplast gene expression machinery, making most of them essential for the viability of the organism. The regulation of these genes is dominated by translational adjustments. The powerful technique of ribosome profiling was successfully used to generate highly resolved pictures of the translational landscape of the Arabidopsis thaliana cytosol, identifying translation of upstream open reading frames and long non-coding transcripts. In addition, differences in plastidial translation and ribosomal pausing sites were addressed with this method. However, a highly resolved picture of the chloroplast translatome is missing. Here, using chloroplast isolation and targeted ribosome affinity purification, I generated highly enriched ribosome profiling datasets of the chloroplast translatome of Nicotiana tabacum in the dark and in the light. Chloroplast isolation was found unsuitable for the unbiased analysis of translation in the chloroplast but adequate to identify potential co-translational import. Affinity purification was performed for the small and large ribosomal subunits independently. The enriched datasets mirrored the results obtained from whole-cell ribosome profiling. Enhanced translational activity was detected for psbA in the light. An alternative translation initiation mechanism was not identified by selective enrichment of small ribosomal subunit footprints. In sum, this is the first study that used enrichment strategies to obtain high-depth ribosome profiling datasets of chloroplasts to study ribosome subunit distribution and chloroplast-associated translation.
Ever-changing light intensities challenge the photosynthetic capacity of photosynthetic organisms. Increased light intensities may lead to over-excitation of photosynthetic reaction centers, resulting in damage to the photosystem core subunits. In addition to an expensive repair mechanism for the photosystem II core protein D1, photosynthetic organisms have developed various features to reduce or prevent photodamage. In the long term, photosynthetic complex contents are adjusted for the efficient use of the experienced irradiation. However, the contribution of chloroplast gene expression to the acclimation process has remained largely unknown. Here, comparative transcriptome and ribosome profiling was performed for the early time points of high-light acclimation in Nicotiana tabacum chloroplasts on a genome-wide scale. The time-course data revealed stable transcript levels and only minor changes in the translational activity of specific chloroplast genes during high-light acclimation. Yet, psbA translation was increased two-fold under high light from shortly after the shift until the end of the experiment. A stress-inducing shift from low to high light increased translation only of psbA. This study indicates that acclimation does not begin within the observed time frame and that only short-term responses reducing photoinhibition were observed.
Remembering the dismembered
(2020)
This thesis – written in co-authorship with Tanzanian activist Mnyaka Sururu Mboro – examines different cases of repatriation of ancestral remains to African countries and communities through the prism of postcolonial memory studies. It follows the theft and displacement of prominent ancestors from East and Southern Africa (Sarah Baartman, Dawid Stuurman, Mtwa Mkwawa, Songea Mbano, King Hintsa and the victims of the Ovaherero and Nama genocides) and argues that efforts made for the repatriation of their remains have contributed to a transnational remembrance of colonial violence.
Drawing from cultural studies theories such as "multidirectional memory", "rehumanisation" and "necropolitics", the thesis argues for a new conceptualisation of repatriation as "re-membrance", through processes of reunion, empowerment, story-telling and belonging. Moreover, the afterlives of the dead ancestors, who stand at the centre of political debates on justice and reparations, remind us of their past struggles against colonial oppression. They are therefore "memento vita", fostering counter-discourses that recognize them as people and stories.
This manuscript is accompanied by a “(web)site of memory” where some of the research findings are made available to a wider audience. This blog also hosts important sound material which appears in the thesis as interventions by external contributors. Through QR codes, both the written and the digital version are linked with each other to problematize the idea of a written monograph and bring a polyphonic perspective to those diverse, yet connected, histories.
Successfully completing any data science project demands careful consideration across its whole process. Although the focus is often put on the later phases of the process, in practice experts spend more time in the earlier phases, preparing data to make them consistent with the systems' requirements or to improve their models' accuracy. Duplicate detection is typically applied during the data cleaning phase, which is dedicated to removing data inconsistencies and improving the overall quality and usability of data. While data cleaning involves a plethora of approaches for specific operations, such as schema alignment and data normalization, the task of detecting and removing duplicate records is particularly challenging. Duplicates arise when multiple records representing the same entity exist in a database, for numerous reasons spanning from simple typographical errors to the differing schemas and formats of integrated databases. Keeping a database free of duplicates is crucial for most use cases, as their existence causes false negatives and false positives when matching queries against it. These two data quality issues have negative implications for tasks such as hotel booking, where users may erroneously select the wrong hotel, or parcel delivery, where a parcel can be delivered to the wrong address. Identifying the variety of possible data issues in order to eliminate duplicates demands sophisticated approaches.
While research in duplicate detection is well established and covers both efficiency and effectiveness, our work in this thesis focuses on the latter. We propose novel approaches to improve data quality before duplicate detection takes place, and we apply duplicate detection to datasets even when prior labeling is not available. Our experiments show that improving data quality upfront can increase duplicate classification results by up to 19%. To this end, we propose two novel pipelines that select and apply generic as well as address-specific data preparation steps with the purpose of maximizing the success of duplicate detection. Generic data preparation, such as the removal of special characters, can be applied to any relation with alphanumeric attributes. When applied, data preparation steps are selected only for attributes where they have positive effects on pair similarities, which indirectly affect classification, or on classification directly. Our work on addresses is twofold: first, we consider more domain-specific approaches to improve the quality of values, and, second, we experiment with known and modified versions of similarity measures to select the most appropriate one per address attribute, e.g., city or country.
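The attribute-wise selection idea can be sketched as follows; the similarity measure, the preparation step, and the example pairs are illustrative stand-ins, not the thesis's actual pipeline.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """String similarity in [0, 1] (a stand-in for any similarity measure)."""
    return SequenceMatcher(None, a, b).ratio()

def remove_special_chars(value: str) -> str:
    """A generic preparation step: keep only alphanumerics and spaces."""
    return "".join(ch for ch in value if ch.isalnum() or ch.isspace())

def keep_step_for_attribute(pairs, step) -> bool:
    """Select a preparation step for an attribute only if it raises the
    mean similarity of known duplicate pairs on that attribute."""
    before = sum(similarity(a, b) for a, b in pairs) / len(pairs)
    after = sum(similarity(step(a), step(b)) for a, b in pairs) / len(pairs)
    return after > before

pairs = [("Main St. 5", "Main St 5"), ("Grand-Hotel", "Grand Hotel")]
print(keep_step_for_attribute(pairs, remove_special_chars))  # True
```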
To facilitate duplicate detection in applications where gold standard annotations are not available and obtaining them is not possible or too expensive, we propose MDedup. MDedup is a novel, rule-based, and fully automatic duplicate detection approach that is based on matching dependencies. These dependencies can be used to detect duplicates and can be discovered using state-of-the-art algorithms efficiently and without any prior labeling. MDedup uses two pipelines to first train on datasets with known labels, learning to identify useful matching dependencies, and then be applied on unseen datasets, regardless of any existing gold standard. Finally, our work is accompanied by open source code to enable repeatability of our research results and application of our approaches to other datasets.
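As a rough sketch of the intuition behind matching dependencies (the dependency, thresholds, and records below are hypothetical; MDedup's discovery and training pipelines are far more involved):

```python
from difflib import SequenceMatcher

def sim(a: str, b: str) -> float:
    """String similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

# A hypothetical matching dependency: records that are at least 90% similar
# on "name" and 80% similar on "city" are classified as duplicates.
MD = [("name", 0.9), ("city", 0.8)]

def satisfies_md(r1: dict, r2: dict, md) -> bool:
    """Check all premises of a matching dependency for one record pair."""
    return all(sim(r1[attr], r2[attr]) >= threshold for attr, threshold in md)

r1 = {"name": "Hotel Astoria", "city": "Berlin"}
r2 = {"name": "Hotel Astorai", "city": "Berlin"}  # transposition typo
print(satisfies_md(r1, r2, MD))
```

In MDedup, such rules are not hand-written but discovered from the data and ranked by a model trained on labeled datasets.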
With his September 2015 speech "Breaking the tragedy of the horizon", the Governor of the Bank of England, Mark Carney, put climate change on the agenda of financial market regulators. Until then, climate change had been framed mainly as a problem of negative externalities leading to long-term economic costs, which resulted in countries trying to keep the short-term costs of climate action to a minimum. Carney argued that climate change, as well as climate policy, can also lead to short-term financial risks, potentially causing strong adjustments in asset prices. Analysing the effect of a sustainability transition on the financial sector challenges traditional economic and financial analysis and requires a much deeper understanding of the interrelations between climate policy and financial markets.
This dissertation thus investigates the implications of climate policy for financial markets as well as the role of financial markets in a transition to a sustainable economy. The approach combines insights from macroeconomic and financial risk analysis. Following an introduction and classification in Chapter 1, Chapter 2 presents a macroeconomic analysis that combines ambitious climate targets (negative externality) with technological innovation (positive externality), adaptive expectations and an investment program, resulting in overall positive macroeconomic outcomes. The analysis also reveals the limitations of climate economic models in their representation of financial markets. The subsequent part of this dissertation is therefore concerned with the link between climate policies and financial markets. In Chapter 3, an empirical analysis of stock-market responses to the announcement of climate policy targets is performed to investigate the impacts of climate policy on financial markets. The results show that 1) international climate negotiations have an effect on asset prices and 2) investors increasingly recognize transition risks in carbon-intensive investments. In Chapter 4, an analysis of equity markets and the interbank market shows that transition risks can potentially affect a large part of the equity market and that financial interconnections can amplify negative shocks. In Chapter 5, an analysis of mortgage loans shows how information on climate policy and the energy performance of buildings can be integrated into risk management and reflected in interest rates.
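The announcement analysis in Chapter 3 follows the logic of an event study. A minimal sketch with synthetic returns (not the chapter's data) shows how an abnormal return around an announcement date is computed under a simple market model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily returns for a market index and a carbon-intensive stock
market = rng.normal(0.0, 0.01, 120)
stock = 0.9 * market + rng.normal(0.0, 0.005, 120)
event_day = 100
stock[event_day] -= 0.03  # hypothetical negative reaction to an announcement

# Market model r_stock = alpha + beta * r_market, fit on a pre-event window
est = slice(0, 90)
beta, alpha = np.polyfit(market[est], stock[est], 1)

# Abnormal return = actual return minus the model-predicted return
abnormal = stock[event_day] - (alpha + beta * market[event_day])
print(f"abnormal return on the event day: {abnormal:.3f}")
```

A statistically significant negative abnormal return on announcement days is the kind of evidence behind the conclusion that investors price in transition risks.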
While costs of climate action have been explored at great depth, this dissertation offers two main contributions. First, it highlights the importance of a green investment program to strengthen the macroeconomic benefits of climate action. Second, it shows different approaches on how to integrate transition risks and opportunities into financial market analysis. Anticipating potential losses and gains in the value of financial assets as early as possible can make the financial system more resilient to transition risks and can stimulate investments into the decarbonization of the economy.
The development of methods such as super-resolution microscopy (Nobel Prize in Chemistry, 2014) and multi-scale computer modelling (Nobel Prize in Chemistry, 2013) has provided scientists with powerful tools to study microscopic systems. Sub-micron particles or even fluorescently labelled single molecules can now be tracked for long times in a variety of systems, such as living cells, biological membranes and colloidal solutions, at spatial and temporal resolutions previously inaccessible. Parallel to such single-particle tracking experiments, super-computing techniques enable simulations of large atomistic or coarse-grained systems, such as biologically relevant membranes or proteins, from picoseconds to seconds, generating large volumes of data. These advances have led to an unprecedented rise in the number of reported cases of anomalous diffusion, wherein the characteristic features of Brownian motion, namely the linear growth of the mean squared displacement with time and the Gaussian form of the probability density function (PDF) to find a particle at a given position at some fixed time, are routinely violated. This presents a big challenge in identifying the underlying stochastic process and in estimating the corresponding parameters to completely describe the observed behaviour. Finding the correct physical mechanism that leads to the observed dynamics is of paramount importance, for example, to understand the first-arrival time of transcription factors which govern gene regulation, or the survival probability of a pathogen in a biological cell after drug administration. Statistical physics provides useful methods that can be applied to extract such vital information. This cumulative dissertation, based on five publications, focuses on the development, implementation and application of such tools, with special emphasis on Bayesian inference and large deviation theory.
Together with the implementation of Bayesian model comparison and parameter estimation methods for models of diffusion, complementary tools based on different observables and large deviation theory are developed to classify stochastic processes and gather pivotal information. Bayesian analysis of the data of micron-sized particles traced in mucin hydrogels at different pH conditions unveiled several interesting features; we gained insights into, for example, how, in going from basic to acidic pH, the hydrogel becomes more heterogeneous and phase separation can set in, leading to the observed non-ergodicity (non-equivalence of time and ensemble averages) and non-Gaussian PDF. With large-deviation-theory-based analysis we could detect, for instance, non-Gaussianity in the seemingly Brownian diffusion of beads in aqueous solution, anisotropic motion of the beads in mucin at neutral pH conditions, and short-time correlations in climate data. Thus, through the application of the developed methods to biological and meteorological datasets, crucial information is garnered about the underlying stochastic processes and significant insights are obtained into the physical nature of these systems.
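The ergodicity diagnostic mentioned above, comparing time-averaged and ensemble-averaged mean squared displacements, can be sketched for ordinary Brownian motion, where the two should coincide:

```python
import numpy as np

rng = np.random.default_rng(1)

def tamsd(x, lag):
    """Time-averaged MSD of one trajectory at a given lag."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

# Ensemble of Brownian trajectories: positions are cumulative Gaussian steps
n_traj, n_steps, dt, D = 200, 1000, 1.0, 0.5
steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_traj, n_steps))
x = np.cumsum(steps, axis=1)

lag = 10
time_avg = np.mean([tamsd(traj, lag) for traj in x])
ensemble_avg = np.mean(x[:, lag - 1] ** 2)  # squared displacement after `lag` steps

# For an ergodic process both estimates approach 2 * D * lag * dt = 10
print(time_avg, ensemble_avg)
```

For non-ergodic processes such as continuous-time random walks with heavy-tailed waiting times, the two averages systematically disagree, which is the signature exploited in the analysis.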
Large-scale patterns of global land-use change are frequently accompanied by natural habitat loss. To assess the consequences of habitat loss for the remaining natural and semi-natural biotopes, the inclusion of cumulative effects at the landscape level is required. The interdisciplinary concept of vulnerability constitutes an appropriate assessment framework at the landscape level, though with few examples of its application to ecological assessments. A comprehensive biotope vulnerability analysis allows the identification of areas most affected by landscape change and, at the same time, with the lowest chances of regeneration.
To this end, a series of ecological indicators were reviewed and developed. They measured spatial attributes of individual biotopes as well as some ecological and conservation characteristics of the respective resident species community. The final vulnerability index combined seven largely independent indicators, which covered exposure, sensitivity and adaptive capacity of biotopes to landscape changes. Results for biotope vulnerability were provided at the regional level. This seems to be an appropriate extent with relevance for spatial planning and designing the distribution of nature reserves.
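A composite index of this kind typically combines normalized indicators; the sketch below uses min–max scaling and equal weights as purely illustrative choices (the thesis's seven indicators and their aggregation differ):

```python
def minmax(values):
    """Min-max normalize a list of raw indicator values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def vulnerability_index(indicators):
    """Equal-weight mean of normalized indicators, one score per biotope.
    indicators: dict mapping indicator name -> list of raw values,
    one value per biotope, with higher values meaning more vulnerable."""
    normalized = [minmax(vals) for vals in indicators.values()]
    return [sum(col) / len(normalized) for col in zip(*normalized)]

# Three hypothetical biotopes scored on three illustrative indicators
table = {
    "exposure": [0.2, 0.8, 0.5],
    "sensitivity": [10, 40, 25],
    "adaptive_capacity_deficit": [1, 3, 2],
}
print(vulnerability_index(table))  # approximately [0.0, 1.0, 0.5]
```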
Using the vulnerability scores calculated for the German federal state of Brandenburg, hot spots and clusters within and across the distinguished types of biotopes were analysed. Biotope types with high dependence on water availability, as well as biotopes of the open landscape containing woody plants (e.g., orchard meadows) are particularly vulnerable to landscape changes. In contrast, the majority of forest biotopes appear to be less vulnerable. Despite the appeal of such generalised statements for some biotope types, the distribution of values suggests that conservation measures for the majority of biotopes should be designed specifically for individual sites. Taken together, size, shape and spatial context of individual biotopes often had a dominant influence on the vulnerability score.
The implementation of biotope vulnerability analysis at the regional level indicated that large biotope datasets can be evaluated with a high level of detail using geoinformatics. Drawing on previous work in landscape spatial analysis, the reproducible approach relies on transparent calculations of quantitative and qualitative indicators. At the same time, it provides a synoptic overview and information on the individual biotopes. It is expected to be most useful for nature conservation in combination with an understanding of the population, species, and community attributes known for specific sites. The biotope vulnerability analysis facilitates a foresighted assessment of different land uses, aiding in identifying options to slow habitat loss to sustainable levels. It can also be incorporated into the planning of restoration measures, guiding efforts to remedy ecological damage. Restoration of any specific site could yield synergies with the conservation objectives of other sites, through enhancing the habitat network or buffering against future landscape change.
Biotope vulnerability analysis could be developed in line with other important ecological concepts, such as resilience and adaptability, further extending the broad thematic scope of the vulnerability concept. Vulnerability can increasingly serve as a common framework for the interdisciplinary research necessary to solve major societal challenges.
The hepatokine FGF21 and the adipokine chemerin have been implicated as metabolic regulators and mediators of inter-tissue crosstalk. While FGF21 is associated with beneficial metabolic effects and is currently being tested as an emerging therapeutic for obesity and diabetes, chemerin is linked to inflammation-mediated insulin resistance. However, dietary regulation of both organokines and their role in tissue interaction needs further investigation.
The LEMBAS nutritional intervention study investigated the effects of two diets differing in their protein content in obese human subjects with non-alcoholic fatty liver disease (NAFLD). The study participants consumed hypocaloric diets containing either low (LP: 10 EN%, n = 10) or high (HP: 30 EN%, n = 9) dietary protein for 3 weeks prior to bariatric surgery. Before and after the intervention, the participants were anthropometrically assessed, blood samples were drawn, and hepatic fat content was determined by magnetic resonance spectroscopy (MRS). During bariatric surgery, paired subcutaneous and visceral adipose tissue biopsies as well as liver biopsies were collected. The aim of this thesis was to investigate circulating levels and tissue-specific regulation of (1) FGF21 and (2) chemerin in the LEMBAS cohort. The results were compared to data obtained in 92 metabolically healthy subjects with normal glucose tolerance and normal liver fat content.
(1) Serum FGF21 concentrations were elevated in the obese subjects and strongly associated with intrahepatic lipids (IHL). In accordance, FGF21 serum concentrations increased with the severity of NAFLD as determined histologically in the liver biopsies. Though both diets were successful in reducing IHL, the effect was more pronounced in the HP group. FGF21 serum concentrations and mRNA expression were bi-directionally regulated by dietary protein, independently of metabolic improvements. In accordance, in the healthy study subjects, serum FGF21 concentrations dropped by more than 60% in response to the HP diet. A short-term HP intervention confirmed the acute downregulation of FGF21 within 24 hours. Lastly, experiments in HepG2 cell cultures and primary murine hepatocytes showed that the nitrogen metabolites NH4Cl and glutamine dose-dependently suppress FGF21 expression.
(2) Circulating chemerin concentrations were considerably elevated in the obese versus the lean study participants and were differently associated with markers of obesity and NAFLD in the two cohorts. The adipokine decreased in response to the hypocaloric interventions, while an unhealthy high-fat diet induced a rise in chemerin serum levels. In the lean subjects, mRNA expression of RARRES2, encoding chemerin, was strongly and positively correlated with the expression of several cytokines, including MCP1, TNFα, and IL6, as well as with markers of macrophage infiltration in the subcutaneous fat depot. However, RARRES2 was not associated with any cytokine assessed in the obese subjects, and the data indicated an involvement of chemerin not only in the onset but also in the resolution of inflammation. Analyses of the tissue biopsies and experiments in human primary adipocytes point towards a role of chemerin in adipogenesis, although discrepancies between the in vivo and in vitro data were detected.
Taken together, the results of this thesis demonstrate that circulating FGF21 and chemerin levels are considerably elevated in obesity and responsive to dietary interventions. FGF21 was acutely and bi-directionally regulated by dietary protein in a hepatocyte-autonomous manner. Given that both a lack of essential amino acids and excessive nitrogen intake exert metabolic stress, FGF21 may serve as an endocrine signal of dietary protein balance. Lastly, the data revealed that chemerin is dysregulated in obesity and associated with obesity-related inflammation. However, future studies on chemerin should consider functional and regulatory differences between secreted and tissue-specific isoforms.
Even though the majority of individuals know that exercising is healthy, a high percentage struggle to achieve the recommended amount of exercise. The (social-cognitive) theories commonly applied to explain exercise motivation rest on the assumption that people base their decisions mainly on rational reasoning. However, behavior is not bound to reflection alone. In recent years, the role of automaticity and affect in exercise motivation has been increasingly discussed. In this dissertation, central assumptions of the affective–reflective theory of physical inactivity and exercise (ART; Brand & Ekkekakis, 2018), an exercise-specific dual-process theory that emphasizes the role of a momentary automatic affective reaction in exercise decisions, were examined. The central aim of this dissertation was to investigate exercisers' and non-exercisers' automatic affective reactions to exercise-related stimuli (i.e., the type-1 process). In particular, the two components of the ART's type-1 process, namely automatic associations with exercise and the automatic affective valuation of exercise, were under study.
In the first publication (Schinkoeth & Antoniewicz, 2017), research on automatic (evaluative) associations with exercise was summarized and evaluated in a systematic review. The results indicated that automatic associations with exercise are relevant predictors of exercise behavior and other exercise-related variables, providing evidence for a central assumption of the ART's type-1 process. Furthermore, indirect methods appear suitable for assessing automatic associations. The aim of the second publication (Schinkoeth, Weymar, & Brand, 2019) was to approach the somato-affective core of the automatic valuation of exercise using analysis of vagal HRV reactivity while viewing exercise-related pictures. Results revealed that differences in exercise volume could be regressed on HRV reactivity. In light of the ART, these findings were interpreted as evidence of an inter-individual affective reaction elicited at the thought of exercise and triggered by exercise stimuli. In the third publication (Schinkoeth & Brand, 2019, subm.), it was sought to disentangle and relate to each other the two components of the ART's type-1 process: automatic associations and the affective valuation of exercise. Automatic associations with exercise were assessed with a recoding-free variant of an implicit association test (IAT). Analysis of HRV reactivity was applied to approach a somatic component of the affective valuation, and facial reactions in a facial expression (FE) task served as indicators of the valence of the automatic affective reaction. Exercise behavior was assessed via self-report. The measurement of the affective valuation's valence with the FE task did not work well in this study. HRV reactivity was predicted by the IAT score and did also statistically predict exercise behavior. These results thus confirm and expand upon the results of the second publication and provide empirical evidence for the type-1 process as defined in the ART.
This dissertation advances the field of exercise psychology concerning the influence of automaticity and affect on exercise motivation. Moreover, both methodical implications and theoretical extensions for the ART can be derived from the results.
The current thesis is focused on the properties of graphene supported by metallic substrates and specifically on the behaviour of electrons in such systems. Methods of scanning tunneling microscopy, electron diffraction and photoemission spectroscopy were applied to study the structural and electronic properties of graphene. The purpose of the first part of this work is to introduce the most relevant aspects of graphene physics and the methodical background of experimental techniques used in the current thesis.
The scientific part of this work starts with an extensive scanning tunneling microscopy study of the nanostructures that appear in Au-intercalated graphene on Ni(111). This study aimed to explore possible structural explanations for the Rashba-type spin splitting of ~100 meV experimentally observed in this system, which is much larger than predicted by theory. It was demonstrated that gold can be intercalated under graphene not only as a dense monolayer, but also in the form of well-ordered periodic arrays of nanoclusters, a previously unreported structure. Such nanocluster arrays are able to decouple graphene from the strongly interacting Ni substrate and render it quasi-free-standing, as demonstrated by our DFT study. At the same time, the calculations confirm a strong enhancement of the proximity-induced SOI in graphene supported by such nanoclusters in comparison to a gold monolayer. This effect, attributed to the reduced graphene-Au distance in the case of clusters, provides a large Rashba-type spin splitting of ~60 meV.
The obtained results not only provide a possible mechanism of SOI enhancement in this particular system, but they can be also generalized for graphene on other strongly interacting substrates intercalated by nanostructures of heavy noble d metals.
Even more intriguing is the proximity of graphene to heavy sp-metals, which were predicted to induce an intrinsic SOI and realize a spin Hall effect in graphene. Bismuth is the heaviest stable sp-metal, and its compounds demonstrate a plethora of exciting physical phenomena. This was the motivation behind the next part of the thesis, where the structural and electronic properties of a previously unreported phase of Bi-intercalated graphene on Ir(111) were studied by means of scanning tunneling microscopy, spin- and angle-resolved photoemission spectroscopy and electron diffraction. Photoemission experiments revealed a remarkable, nearly ideal graphene band structure with strongly suppressed signatures of interaction between graphene and the Ir(111) substrate; moreover, the characteristic moiré pattern observed in graphene on Ir(111) by electron diffraction and scanning tunneling microscopy was strongly suppressed after intercalation. The whole set of experimental data evidences that Bi forms a dense intercalated layer that efficiently decouples graphene from the substrate. The interaction manifests itself only in n-type charge doping (~0.4 eV) and a relatively small band gap at the Dirac point (~190 meV). The origin of this minor band gap is quite intriguing, and in this work it was possible to exclude a wide range of mechanisms that could be responsible for it, such as an induced intrinsic spin-orbit interaction, hybridization with substrate states and corrugation of the graphene lattice. The main origin of the band gap was attributed to A-B sublattice symmetry breaking, a conclusion supported by careful analysis of the interference effects in photoemission, which provided a band gap estimate of ~140 meV.
While the previous chapters were focused on adjusting the properties of graphene through proximity to heavy metals, graphene on its own is a great object for studying various physical effects at crystal surfaces. The final part of this work is devoted to a study of surface scattering resonances by means of photoemission spectroscopy, where this effect manifests itself as a distinct modulation of photoemission intensity. Though scattering resonances were widely studied in the past by means of electron diffraction, studies of their observation in photoemission experiments have started to appear only recently and remain scarce.
For a comprehensive study of scattering resonances, graphene was selected as a versatile model system with adjustable properties. A theoretical and historical introduction to scattering resonances is followed by a detailed description of the unusual features observed in the photoemission spectra obtained in this work; finally, the equivalence between these features and scattering resonances is demonstrated. The obtained photoemission results are in good qualitative agreement with the existing theory, as verified by our calculations in the framework of the interference model. This simple model gives a suitable explanation for the general experimental observations.
The possibilities of engineering the scattering resonances were also explored. A systematic study of graphene on a wide range of substrates revealed that the energy position of the resonances is directly related to the magnitude of charge transfer between graphene and the substrate. Moreover, it was demonstrated that the scattering resonances in graphene on Ir(111) can be suppressed by nanopatterning, either with a superlattice of Ir nanoclusters or with atomic hydrogen. These effects were attributed to strong local variations of the work function and/or the destruction of the long-range order of the graphene lattice. The tunability of scattering resonances could be applied in graphene-based optoelectronic devices. Moreover, the results of this study expand the general understanding of the phenomenon of scattering resonances and are applicable to many other materials besides graphene.
In recent years, a substantial number of psycholinguistic studies and of studies on acquired language impairments have investigated the case of morphologically complex words. These have provided evidence for what is known as ‘morphological decomposition’, i.e. a mechanism that decomposes complex words into their constituent morphemes during online processing. This is believed to be a fundamental, possibly universal mechanism of morphological processing, operating irrespective of a word’s specific properties.
However, current accounts of morphological decomposition are mostly based on evidence from suffixed words and compound words, while prefixed words have been comparably neglected. At the same time, it has been consistently observed that, across languages, prefixed words are less widespread than suffixed words. This cross-linguistic preference for suffixing morphology has been claimed to be grounded in language processing and language learning mechanisms. This would predict differences in how prefixed words are processed and therefore also affected in language impairments, challenging the predictions of the major accounts of morphological decomposition.
Against this background, the present thesis aims at reducing the gap between accounts of morphological decomposition and accounts of the suffixing preference by providing a thorough empirical investigation of prefixed words. Prefixed words are examined in three different domains: (i) visual word processing in native speakers; (ii) visual word processing in non-native speakers; (iii) acquired morphological impairments. The processing studies employ the masked priming paradigm, tapping into early stages of visual word recognition, whereas the studies on morphological impairments investigate the errors produced in reading-aloud tasks.
As for native processing, the present work first focuses on derivation (Publication I), specifically investigating whether German prefixed derived words, both lexically restricted (e.g. inaktiv ‘inactive’) and unrestricted (e.g. unsauber ‘unclean’) can be efficiently decomposed. I then present a second study (Publication II) on a Bantu language, Setswana, which offers the unique opportunity of testing inflectional prefixes, and directly comparing priming with prefixed inflected primes (e.g. dikgeleke ‘experts’) to priming with prefixed derived primes (e.g. bokgeleke ‘talent’). With regard to non-native processing (Publication I), the priming effects obtained from the lexically restricted and unrestricted prefixed derivations in native speakers are additionally compared to the priming effects obtained in a group of non-native speakers of German. Finally, in the two studies on acquired morphological impairments, the thesis investigates whether prefixed derived words yield different error patterns than suffixed derived words (Publication III and IV).
For native speakers, the results show evidence for morphological decomposition of both types of prefixed words, i.e. lexically unrestricted and restricted derivations, as well as of prefixed inflected words. Furthermore, non-native speakers are also found to efficiently decompose prefixed derived words, with results parallel to those of the native speakers. I therefore conclude that, for the early stages of visual word recognition, the relative position of stem and affix in prefixed versus suffixed words does not affect how efficiently complex words are decomposed, either in native or in non-native processing. In the studies on acquired language impairments, by contrast, prefixes are consistently found to be more impaired than suffixes. This is explained in terms of a learnability disadvantage for prefixed words, which may cause weaker representations of the information encoded in affixes when these precede the stem (prefixes) as compared to when they follow it (suffixes). Based on the impairment profiles of the individual participants and on the nature of the task, this dissociation is assumed to emerge from later processing stages than those tapped into by masked priming. I therefore conclude that the different characteristics of prefixed and suffixed words do come into play at later processing stages, during which the lexical-semantic information contained in the different constituent morphemes is processed.
The findings presented in the four manuscripts significantly contribute to our current understanding of the mechanisms involved in processing prefixed words. Crucially, the thesis constrains the processing disadvantage for prefixed words to later processing stages, thereby suggesting that theories trying to establish links between language universals and processing mechanisms should more carefully consider the different stages involved in language processing and what factors are relevant for each specific stage.
After endosymbiosis, chloroplasts lost most of their genome. Many former endosymbiotic genes are now nucleus-encoded and their products are re-imported post-translationally. Consequently, photosynthetic complexes are built of nucleus- and plastid-encoded subunits in a well-defined stoichiometry. In Chlamydomonas, the translation of chloroplast-encoded photosynthetic core subunits is feedback-regulated by the assembly state of the complexes they reside in. This process is called Control by Epistasy of Synthesis (CES) and enables the efficient production of photosynthetic core subunits in stoichiometric amounts. In chloroplasts of embryophytes, only Rubisco subunits have been shown to be feedback-regulated, which raises the question of whether additional CES regulation exists in embryophytes. To answer this question broadly, I analyzed chloroplast gene expression in tobacco and Arabidopsis mutants with assembly defects for each photosynthetic complex. My results (i) confirmed CES within Rubisco and hint at potential translational feedback regulation in the synthesis of (ii) cytochrome b6f (Cyt b6f) and (iii) photosystem II (PSII) subunits. This work suggests a CES network in PSII that links psbD, psbA, psbB, psbE, and potentially psbH expression by a feedback mechanism that at least partially differs from that described in Chlamydomonas. Intriguingly, in the Cyt b6f complex, a positive feedback regulation coordinating the synthesis of PetA and PetB was observed, which had not previously been reported in Chlamydomonas. No evidence for CES interactions was found in the expression of NDH and ATP synthase subunits of embryophytes. Altogether, this work provides solid evidence for novel assembly-dependent feedback regulation mechanisms controlling the expression of photosynthetic genes in chloroplasts of embryophytes.
In order to obtain a comprehensive inventory of the rbcL and psbA RNA-binding proteomes (including factors that regulate their expression, especially factors involved in CES), an aptamer-based affinity purification method was adapted and refined for the specific purification of these transcripts from tobacco chloroplasts. To this end, three different aptamers (MS2, Sephadex, and streptavidin-binding) were stably introduced into the 3’ UTRs of psbA and rbcL by chloroplast transformation. RNA aptamer-based purification and subsequent chip analysis (RAP Chip) demonstrated a strong enrichment of psbA and rbcL transcripts, and ongoing mass spectrometry analyses shall reveal potential regulatory factors. Furthermore, the suborganellar localization of MS2-tagged psbA and rbcL transcripts was analyzed by a combined affinity, immunology, and electron microscopy approach, demonstrating the potential of aptamer tags for examining the spatial distribution of chloroplast transcripts.
Ferruginous conditions were a prominent feature of the oceans throughout the Precambrian eons and thus throughout much of Earth’s history. Organic matter mineralization and diagenesis within the ferruginous sediments deposited from Earth’s early oceans likely played a key role in global biogeochemical cycling. Knowledge of organic matter mineralization in ferruginous sediments, however, remains almost entirely conceptual, as modern analogue environments are extremely rare and, to date, largely unstudied. Lake Towuti on the island of Sulawesi, Indonesia, is such an analogue environment, and the purpose of this PhD project was to investigate the rates and pathways of organic matter mineralization in its ferruginous sediments.
Lake Towuti is the largest tectonic lake in Southeast Asia and is hosted in the mafic and ultramafic rocks of the East Sulawesi Ophiolite. It has a maximum water depth of 203 m and is weakly thermally stratified. A well-oxygenated surface layer extends to 70 m depth, while waters below 130 m are persistently anoxic. Intensive weathering of the ultramafic catchment feeds the lake with large amounts of iron (oxy)hydroxides, while the runoff contains little sulfate, leading to sulfate-poor (< 20 µM) lake water and anoxic, ferruginous conditions below 130 m. Such conditions are analogous to the ferruginous water columns that persisted throughout much of the Archean and Proterozoic eons. Short (< 35 cm) sediment cores were collected from different water depths corresponding to different bottom-water redox conditions. In addition, a drilling campaign of the International Continental Scientific Drilling Program (ICDP) retrieved a 114 m long sediment core dedicated to geomicrobiological investigations from a water depth of 153 m, well below the depth of oxygen penetration at the time of sampling. Samples collected from these sediment cores form the foundation of this thesis and were used to perform a suite of biogeochemical and microbiological analyses.
Geomicrobiological investigations depend on uncontaminated samples. However, exploration of subsurface environments relies on drilling, which requires the use of a drilling fluid, and infiltration of drilling fluid during drilling cannot be avoided. Thus, in order to trace contamination of the sediment core and to identify uncontaminated samples for further analyses, a simple and inexpensive technique for assessing contamination during drilling operations was developed and applied during the ICDP drilling campaign. This approach uses an aqueous fluorescent pigment dispersion, commonly used in the paint industry, as a particulate tracer. It has the same physical properties as conventionally used particulate tracers, but its price is nearly four orders of magnitude lower, solving the main cost problem of particulate tracer approaches. The approach requires only a minimum of equipment and allows for rapid contamination assessment, potentially even directly on site, while its sensitivity is in the range of already established approaches. Contaminated samples in the drill core were identified and excluded from further geomicrobiological investigations.
Biogeochemical analyses of the short sediment cores showed that Lake Towuti’s sediments are strongly depleted in the electron acceptors commonly used in microbial organic matter mineralization (i.e., oxygen, nitrate, sulfate). Still, the sediments harbor high microbial cell densities, which are a function of the redox conditions of Lake Towuti’s bottom water. At shallow water depths, bottom-water oxygenation leads to a higher input of labile organic matter and of electron acceptors like sulfate and iron, which promotes a higher microbial abundance. Microbial analyses showed that Lake Towuti’s surface sediments harbor a versatile microbial community with the potential to perform metabolisms related to iron and sulfate reduction, fermentation, and methanogenesis.
Biogeochemical investigations of the upper 12 m of the 114 m sediment core showed that Lake Towuti’s sediment is extremely rich in iron, with total concentrations up to 2500 µmol cm-3 (20 wt. %), making it the natural sedimentary environment with the highest total iron concentrations studied to date. In the complete or near absence of oxygen, nitrate and sulfate, organic matter mineralization in ferruginous sediments would be expected to proceed anaerobically via the energetically most favorable terminal electron acceptors available, in this case ferric iron. Astonishingly, however, methanogenesis is the dominant (> 85 %) organic matter mineralization process in Lake Towuti’s sediment. Reactive ferric iron, known to be available for microbial iron reduction, is highly abundant throughout the upper 12 m and has thus remained stable for at least 60,000 years. The produced methane is not oxidized anaerobically and diffuses out of the sediment into the water column. The proclivity towards methanogenesis in these very iron-rich modern sediments implies that methanogenesis may have played a more important role in organic matter mineralization throughout the Precambrian than previously thought and thus could have been a key contributor to Earth’s early climate dynamics.
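As a back-of-envelope consistency check of the two concentration figures quoted above: converting 2500 µmol Fe per cm³ into a mass fraction requires a dry bulk density, which the abstract does not state; the value of about 0.7 g cm⁻³ used below is an assumption chosen only to illustrate that the two quoted numbers are mutually consistent.

```python
# Consistency check: 2500 µmol Fe cm^-3 vs. ~20 wt. % total iron.
M_FE = 55.845                 # molar mass of iron, g/mol
c_fe = 2500e-6                # total Fe, mol per cm^3 of sediment
mass_fe = c_fe * M_FE         # g of Fe per cm^3 of sediment (~0.14)

rho_dry = 0.7                 # ASSUMED dry bulk density, g/cm^3 (illustrative)
wt_percent = 100.0 * mass_fe / rho_dry
print(f"{mass_fe:.3f} g Fe/cm^3 -> {wt_percent:.1f} wt. %")  # ~20 wt. %
```

With the assumed density, the volumetric and mass-fraction figures agree to within a few percent.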
Over the whole sequence of the 114 m long sediment core, siderites were identified and characterized using high-resolution microscopic and spectroscopic imaging together with microchemical and geochemical analyses. The data show early diagenetic growth of siderite crystals in response to sedimentary organic matter mineralization. Microchemical zoning was identified in all siderite crystals. Siderite thus likely forms during diagenesis through growth on pre-existing primary phases, and the mineralogical and chemical features of these siderites are a function of changes in the redox conditions of the pore water and sediment over time. Identification of microchemical zoning in ancient siderites deposited in the Precambrian may thus also be used to infer siderite growth histories in ancient sedimentary rocks, including sedimentary iron formations.
In nature, bacteria are found to reside in multicellular communities encased in self-produced extracellular matrices. Indeed, biofilms are the default lifestyle of the bacteria that cause persistent infections in humans. The biofilm assembly protects bacterial cells from desiccation and limits the effectiveness of antimicrobial treatments. A myriad of biomolecules in the extracellular matrix, including proteins, exopolysaccharides, lipids, extracellular DNA and others, form a dense and viscoelastic three-dimensional network. Many studies have emphasized that destabilizing the mechanical integrity of biofilm architectures can eliminate this protective shield and render bacteria more susceptible to the immune system and antibiotics. Pantoea stewartii is a plant pathogen which infects monocotyledons such as maize and sweet corn. These bacteria produce dense biofilms in the xylem of infected plants, causing wilting of plants and crops. Stewartan is an exopolysaccharide produced by Pantoea stewartii and secreted as the major component of the extracellular matrix. It consists of heptasaccharide repeating units with a high degree of polymerization (2-4 MDa). In this work, the physicochemical properties of stewartan were investigated to understand the contributions of this exopolysaccharide to the mechanical integrity and cohesiveness of Pantoea stewartii biofilms. To this end, a coarse-grained model of stewartan was developed with computational techniques to capture its three-dimensional structural features. Coarse-grained molecular dynamics simulations revealed that the exopolysaccharide forms a hydrogel in which the chains arrange into a three-dimensional mesh-like network. Simulations at different concentrations were used to investigate the influence of the water content on network formation.
Stewartan was further purified from 72 h grown Pantoea stewartii biofilms, and the diffusion of bacteriophages and differently sized nanoparticles (ranging from 1.1 to 193 nm in diameter) was analyzed in reconstituted stewartan solutions. Fluorescence correlation spectroscopy and single-particle tracking revealed that the stewartan network impeded the mobility of a set of differently sized fluorescent particles in a size-dependent manner. Diffusion of these particles became more anomalous with increasing stewartan concentration, as characterized by fitting the diffusion data to an anomalous diffusion model. Further bulk and microrheological experiments were used to analyze transitions in stewartan fluid behavior, and stewartan chain entanglements were described. Moreover, it was noticed that a small fraction of bacteriophage particles was trapped in small pores, deviating from classical random walks, which highlighted the structural heterogeneity of the stewartan network. Additionally, the mobility of fluorescent particles
also depended on the charge of the stewartan exopolysaccharide and a model of a molecular sieve for the stewartan network was proposed. The here reported structural features of the stewartan polymers were used to provide a detailed description of the mechanical properties of typically glycan-based biofilms such as the one from Pantoea stewartii.
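The kind of anomalous-diffusion fit described above can be sketched as follows, assuming 2D single-particle tracking where the mean squared displacement follows MSD(t) = 4·K_α·t^α, with α < 1 indicating subdiffusion. This is an illustrative reconstruction, not the thesis' actual analysis code, and the data and all parameter values are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def msd_model(t, k_alpha, alpha):
    # 2D anomalous diffusion: MSD(t) = 4 * K_alpha * t**alpha
    # (alpha = 1 recovers normal Brownian diffusion, MSD = 4*D*t)
    return 4.0 * k_alpha * t**alpha

# Synthetic MSD data standing in for single-particle tracking results;
# the "true" values (K_alpha = 0.5, alpha = 0.7) are purely illustrative.
t = np.linspace(0.01, 1.0, 50)  # lag times (s)
rng = np.random.default_rng(0)
msd = msd_model(t, 0.5, 0.7) * (1.0 + 0.02 * rng.standard_normal(t.size))

(k_fit, alpha_fit), _ = curve_fit(msd_model, t, msd, p0=(1.0, 1.0))
print(f"K_alpha = {k_fit:.3f}, alpha = {alpha_fit:.3f}")  # alpha < 1: subdiffusive
```

An α well below 1 that decreases further with polymer concentration would mirror the increasingly anomalous diffusion reported for the stewartan network.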
In addition, the mechanical properties of the biofilm architecture are permanently sensed by the embedded bacteria, and enzymatic modifications of the extracellular matrix take place in response to environmental cues. Hence, in this work the influence of enzymatic degradation of the stewartan exopolysaccharide on the overall network structure was analyzed to describe relevant physiological processes in Pantoea stewartii biofilms. The stewartan hydrolysis kinetics of the tailspike protein from the ΦEa1h bacteriophage, which naturally infects Pantoea stewartii cells, were compared to those of WceF, a protein expressed from the Pantoea stewartii stewartan biosynthesis gene cluster wce I-III. Degradation of stewartan by the ΦEa1h tailspike protein was shown to be much faster than hydrolysis by WceF, although both enzymes cleave the β-D-GalIII-(1→3)-α-D-GalI glycosidic linkage of the stewartan backbone. Oligosaccharide fragments produced during stewartan cleavage were analyzed by size-exclusion chromatography and capillary electrophoresis. Bioinformatic studies and the analysis of a WceF crystal structure revealed a remarkably high structural similarity between the two proteins, unveiling WceF as a bacterial tailspike-like protein. As a consequence, WceF might play a role in stewartan chain length control in Pantoea stewartii biofilms.
Urbanization and agricultural land use are two of the main drivers of global change, with effects on ecosystem functions and human wellbeing. Green Infrastructure is a new approach in spatial planning that contributes to sustainable urban development and addresses urban challenges such as biodiversity conservation, climate change adaptation, green economy development, and social cohesion. Because research has focused mainly on open green space structures such as parks, urban forests, green buildings, and street greenery, while neglecting the spatial and functional potentials of utilizable agricultural land, this thesis aims to fill this gap.
This cumulative thesis addresses how agricultural land in urban and peri-urban landscapes can contribute to the development of urban green infrastructure as a strategy to promote sustainable urban development. To this end, a number of different research approaches were applied. First, a quantitative, GIS-based modeling approach examined spatial potentials, addressing the heterogeneity of the peri-urban landscape that defines agricultural potentials and constraints. Second, a participatory approach was applied, involving stakeholder opinions to evaluate multiple urban functions and benefits. Finally, an evidence synthesis was conducted to assess the current state of research evidence to support future policy making at different levels.
The results contribute to the conceptual understanding of urban green infrastructure as a strategic spatial planning approach that incorporates inner-urban utilizable agricultural land and the agriculturally dominated landscape at the outer urban fringe. They support the proposition that linking peri-urban farmland with the green infrastructure concept can contribute to a network of multifunctional green spaces that provides multiple benefits to the urban system and successfully addresses urban challenges. Four strategies are introduced for spatial planning with the contribution of peri-urban farmland to a strategically planned multifunctional network, namely the connecting, the productive, the integrated, and the adapted way. Finally, this thesis sheds light on the opportunities that arise from integrating peri-urban farmland into the green infrastructure concept to support the transformation towards more sustainable urban development. In particular, the inherent core planning principle of multifunctionality endorses the idea of co-benefits, which are considered crucial to trigger transformative processes.
This work concludes that the linkage of peri-urban farmland with the green infrastructure concept is a promising action field for the development of new pathways for urban transformation towards sustainable urban development. Along with these outcomes, attention is drawn to limitations that remain to be addressed by future research, especially the identification of further mechanisms required to support policy integration at all levels.
Redox signalling in plants
(2020)
Once proteins are synthesized, they can additionally be modified by post-translational modifications (PTMs). Proteins containing reactive cysteine thiols, stabilized in their deprotonated form as thiolates (RS-) by their local environment, serve as redox sensors by undergoing a multitude of oxidative PTMs (Ox-PTMs). Ox-PTMs such as S-nitrosylation or the formation of inter- or intramolecular disulfide bridges induce functional changes in these proteins. Proteins containing cysteines whose thiol oxidation state regulates their function belong to the so-called redoxome. Such Ox-PTMs are controlled by site-specific cellular events that play a crucial role in protein regulation, affecting enzyme catalytic sites, ligand binding affinity, protein-protein interactions, or protein stability. Reversible protein thiol oxidation is an essential regulatory mechanism of photosynthesis, metabolism, and gene expression in all photosynthetic organisms. Therefore, studying PTMs will remain crucial for understanding plant adaptation to external stimuli like fluctuating light conditions, and optimizing methods suitable for studying plant Ox-PTMs is of high importance for elucidating the plant redoxome. This study focuses on thiol modifications occurring in plants and provides novel insight into the in vivo redoxome of Arabidopsis thaliana in response to light vs. dark. This was achieved by utilizing a resin-assisted thiol enrichment approach. Furthermore, confirmation of candidates at the single-protein level was carried out by a differential labelling approach: thiols and disulfides were differentially labelled, and protein levels were detected using immunoblot analysis. Further analysis focused on light-reduced proteins. The enrichment approach identified many well-studied redox-regulated proteins.
Amongst those were fructose-1,6-bisphosphatase (FBPase) and sedoheptulose-1,7-bisphosphatase (SBPase), which have previously been described as targets of the thioredoxin system. The redox-regulated proteins identified in the current study were compared to several published, independent datasets of redox-regulated proteins in Arabidopsis leaves, roots, and mitochondria, and specifically of S-nitrosylated proteins. These proteins were excluded as potential new candidates but serve as a proof of concept that the enrichment approach is effective. Additionally, the CSP41A and CSP41B proteins, which emerged from this study as potential targets of redox regulation, were analyzed by Ribo-Seq. The active translatome study of the csp41a mutant vs. wild-type showed most of the significant changes at the end of the night, similar to csp41b; yet, in both mutants only a few chloroplast-encoded genes were altered. Further studies of the CSP41A and CSP41B proteins are needed to reveal their functions and elucidate the role of redox regulation of these proteins.
Addressing both scholars of international law and political science as well as decision makers involved in cybersecurity policy, the book tackles the most important and intricate legal issues that a state faces when considering a reaction to a malicious cyber operation conducted by an adversarial state. While often invoked in political debates and widely analysed in international legal scholarship, self-defence and countermeasures will often remain unavailable to states in situations of cyber emergency due to the pervasive problem of reliable and timely attribution of cyber operations to state actors. Analysing the legal questions surrounding attribution in detail, the book presents the necessity defence as an evidently available alternative. However, the shortcomings of the doctrine as based in customary international law that render it problematic as a remedy for states are examined in-depth. In light of this, the book concludes by outlining a special emergency regime for cyberspace.
The steadily rising number of investor-State arbitration proceedings within the EU has triggered an extensive backlash and an increased questioning of the international investment law regime by different Member States as well as the EU Commission. This has resulted in the EU's assertion of control over the intra-EU investment regime by promoting the termination of bilateral intra-EU investment treaties (intra-EU BITs) and by opposing the jurisdiction of arbitral tribunals in intra-EU investor-State arbitration proceedings. Against the backdrop of the landmark Achmea decision of the European Court of Justice, the book offers an in-depth analysis of the interplay of international investment law and the law of the European Union with regard to intra-EU investments, i.e. investments undertaken by an investor from one EU Member State within the territory of another EU Member State. It specifically analyses the conflict between the two investment protection regimes applicable within the EU, with a particular emphasis on the compatibility of the international legal instruments with the law of the European Union. The book thereby addresses the more general question of the relationship between EU law and international law and offers a conceptual framework of intra-European investment protection based on the analysis of all intra-EU BITs, the Energy Charter Treaty and EU law, as well as the arbitral practice in over 180 intra-EU investor-State arbitration proceedings. Finally, the book develops possible solutions to reconcile the international legal standards of protection with the regionalized transnational law of the European Union.
The current thesis contains the results from two experimental studies and one modelling study focused on ductile strain localization in the presence of material heterogeneities. Localization of strain in the high-temperature regime is a well-known feature of rock deformation, occurring in nature at different scales and in a variety of lithologies. Large-scale shear zones at the roots of major crustal fault zones are considered responsible for the activity of plate tectonics on our planet. A large number of mechanisms have been suggested to be associated with strain softening and the nucleation of localization. Among these, the presence of material heterogeneities within otherwise homogeneous host rocks is frequently observed in field examples to trigger shear zone development. Despite a number of studies conducted on the topic, the mechanisms controlling the initiation and evolution of localization are not yet fully understood. We investigated, experimentally and by means of numerical modelling, phenomenological and microphysical aspects of high-temperature strain localization in a homogeneous body containing single and paired inclusions of weaker material. A monomineralic carbonate system composed of Carrara marble (homogeneous, strong matrix) and Solnhofen limestone (weak planar inclusions) was selected for our studies based on its versatility as an experimental material and on the frequent occurrence of carbonate rocks at the core of natural shear zones.
To explore the influence of different loading conditions on heterogeneity-induced high temperature shear zones, we conducted torsion experiments under constant twist (deformation) rate and constant torque (stress) conditions in a Paterson-type deformation apparatus on hollow cylinders of marble containing single planar inclusions of limestone. At the imposed experimental conditions (900 ◦C temperature and 400 MPa confining pressure) both materials deform plastically and the marble is ≈ 9 times stronger than the limestone. The viscosity contrast between the two materials induces a perturbation of the stress field within the marble matrix at the tip of the planar inclusion. Early along the deformation path (at bulk shear strains ≈ 0.3), a heterogeneous distribution of strain can be observed under both loading conditions and a small area of incipient strain localization forms at the tip of the weak limestone inclusion. Strongly deformed grains, incipient dynamic recrystallization and a weak crystallographic preferred orientation characterize the marble within an area a few mm in front of the inclusion. As the bulk strain is increased (up to γ ≈ 1), the area of microstructural modification expands along the inclusion plane, the texture strengthens and grain size refinement by dynamic recrystallization becomes pervasive. Locally, evidence for coexisting brittle deformation is also observed regardless of the imposed loading conditions. A shear zone is effectively formed within the deforming Carrara marble, its geometry controlled by the plane containing the thin plate of limestone. Thorough microstructural and textural analyses, however, do not reveal substantial differences in the mechanisms or magnitude of strain localization at the different loading conditions.
We conclude that, in the presence of material heterogeneities capable of inducing strain softening, the imposed loading conditions do not affect ductile localization in its nucleating and transient stages.
As the ultimate goal of experimental rock deformation is the extrapolation of results to geologically relevant time and space scales, we developed 2D numerical models reproducing (and benchmarked against) our experimental results. Our cm-scale models were implemented with a first-order strain-dependent softening law to reproduce the effect of rheological weakening in the deforming material. We successfully reproduced the local stress concentration at the inclusion tips and the strain localization initiated in the marble matrix. The heterogeneous distribution of strain and its evolution with imposed bulk deformation (i.e. the shape and extent of the nucleating shear zone) are observed to depend on the degree of softening imposed on the deforming matrix. When a second (artificial) softening step is introduced at elevated bulk strains in the model, a secondary high-strain layer forms at the core of the initial shear zone, analogous to the development of ultramylonite bands in high-strain natural shear zones. Our results not only reproduce the nucleation and transient evolution of a heterogeneity-induced high temperature shear zone with high accuracy, but also confirm the importance of introducing reliable softening laws capable of mimicking strain weakening into numerical models of crustal-scale ductile processes.
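A first-order strain-dependent softening law of the kind mentioned above could, for illustration, take a form like the sketch below; the linear functional form, the parameter names and all values are assumptions made for the sake of the example, not the law actually implemented in the thesis models.

```python
import numpy as np

def softened_viscosity(strain, eta0=1.0, softening=0.8, strain_crit=0.5):
    """Illustrative first-order softening law: effective viscosity drops
    linearly with accumulated strain until a floor of eta0*(1 - softening)
    is reached at strain_crit; beyond that the viscosity stays constant.
    All default parameter values are hypothetical."""
    reduction = softening * np.minimum(np.asarray(strain, dtype=float) / strain_crit, 1.0)
    return eta0 * (1.0 - reduction)

# A region that accumulates strain fastest softens first, concentrating
# further strain there -- the basic localization feedback.
strains = np.array([0.0, 0.25, 0.5, 1.0])
print(softened_viscosity(strains))  # viscosity decreases from 1.0 to the 0.2 floor
```

Feeding such a law back into the stress balance is what allows an initially small strained patch at an inclusion tip to grow into a through-going weak zone.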
Material heterogeneities that induce strain localization in the field often consist of brittle precursors (joints and fractures). More generally, the interaction of brittle and ductile deformation mechanisms and its effect on the localization of strain have long been a key topic in the structural geology community. The positive feedback between (micro)fracturing and ductile strain localization is a well-recognized effect in a number of field examples. We experimentally investigated the influence of brittle deformation on the initiation and evolution of high temperature shear zones in a strong matrix containing pairs of weak material heterogeneities. Our Carrara marble-Solnhofen limestone inclusion system was tested in triaxial compression under constant strain rate and high temperature (900 ◦C) conditions in a Paterson deformation apparatus. The inclusion pairs were arranged in non-overlapping step-over geometries of either compressional or extensional nature. Experimental runs were conducted at different confining pressures (30, 50, 100 and 300 MPa) to induce various amounts of brittle deformation within the marble matrix. At low confinement (30 and 50 MPa) abundant brittle deformation is observed in all configurations, but the spatial distribution of cracks depends on the kinematics of the step-over region: cracks are concentrated along the shearing plane between the inclusions in the extensional samples, or broadly distributed around the inclusions but outside the step-over region in the compressional configuration. Accordingly, brittle-assisted ductile processes tend to localize deformation along the inclusion plane in the extensional geometry or to distribute it widely across large areas of the matrix in the compressional step-over. At pressures of 100 and 300 MPa fracturing is mostly suppressed in both configurations and strain is accommodated almost entirely by viscous creep.
In extensional samples this leads to progressive de-localization with increasing confinement. Our results show that, while ductile strain localization is indeed more efficient where assisted by brittle processes, the latter are only effective if they are themselves heterogeneously distributed, which is ultimately a function of the local stress perturbations.
Geomechanical and petrological characterisation of exposed slip zones, Alpine Fault, New Zealand
(2020)
The Alpine Fault is a large, plate-bounding, strike-slip fault extending along the north-western edge of the Southern Alps, South Island, New Zealand. It regularly accommodates large (MW > 8) earthquakes and has a high statistical probability of failure in the near future, i.e., it is late in its seismic cycle. This pending earthquake and the associated co-seismic landslides are expected to cause severe infrastructural damage affecting thousands of people, so the fault presents a substantial geohazard. The interdisciplinary study presented here aims to characterise the fault zone's 4D (space and time) architecture, because this provides information about its rheological properties that will enable better assessment of the hazard the fault poses.
The studies undertaken include field investigations of principal slip zone fault gouges exposed along strike of the fault, and subsequent laboratory analyses of these outcrop samples and of additional borehole samples. These observations have provided new information on (I) characteristic microstructures down to the nanoscale that indicate which deformation mechanisms operated within the rocks, (II) mineralogical information that constrains the fault's geomechanical behaviour and (III) geochemical compositional information that allows the influence of fluid-related alteration processes on material properties to be unravelled.
Results show that along-strike variations of fault rock properties such as microstructures and mineralogical composition are minor and/or do not substantially influence fault zone architecture. They furthermore provide evidence that the architecture of the fault zone, particularly its fault core, is more complex than previously considered, and also more complex than expected for this sort of mature fault cutting quartzofeldspathic rocks. In particular, our results strongly suggest that the fault has more than one principal slip zone, and that these form an anastomosing network extending into the basement below the cover of Quaternary sediments.
The observations detailed in this thesis highlight that two major processes, (I) cataclasis and (II) authigenic mineral formation, are the major controls on the rheology of the Alpine Fault. The velocity-weakening behaviour of its fault gouge is favoured by abundant nanoparticles promoting powder lubrication and grain rolling rather than frictional sliding. Wall-rock fragmentation is accompanied by co-seismic, fluid-assisted dilatancy that is recorded by calcite cementation. This mineralisation, along with the authigenic formation of phyllosilicates, quickly alters the petrophysical properties of the fault zone after each rupture, restoring fault competency. Dense networks of anastomosing and mutually cross-cutting calcite veins and an intensively reworked gouge matrix demonstrate that strain repeatedly localised within the narrow fault gouge. Abundant undeformed euhedral chlorite crystallites, together with calcite veins cross-cutting both the fault gouge and the gravels that overlie basement on the fault's footwall, provide evidence that authigenic phyllosilicate growth, fluid-assisted dilatancy and associated fault healing are active particularly close to the Earth's surface in this fault zone.
Exposed Alpine Fault rocks are subject to intense weathering as a direct consequence of the abundant orogenic rainfall associated with the fault's location at the base of the Southern Alps. Furthermore, fault rock rheology is substantially affected by shallow-depth conditions such as the juxtaposition of competent hanging wall fault rocks against poorly consolidated footwall sediments. This means the microstructural, mineralogical and geochemical properties of the exposed fault rocks may differ substantially from those at deeper levels, and thus are not characteristic of the majority of the fault rocks' history. Examples are (I) frictionally weak smectites found within the fault gouges, which are artefacts formed under near-surface temperature conditions and impart petrophysical properties that are not typical of most Alpine Fault rocks, (II) grain-scale dissolution resulting from subaerial weathering rather than from deformation by pressure-solution processes and (III) fault gouge geometries that are more complex than expected for their deeper counterparts.
The methodological approaches deployed in analyses of this and other fault zones, and the major results of this study, are finally discussed in order to contextualise slip zone investigations of fault zones and landslides. Like faults, landslides are major geohazards, which highlights the importance of characterising their geomechanical properties. Similarities between faults, especially those exposed to subaerial processes, and landslides include mineralogical composition and geomechanical behaviour. Together, these similarities ensure that failure occurs predominantly by cataclastic processes, although aseismic creep promoted by weak phyllosilicates is not uncommon. Consequently, the multidisciplinary approach commonly used to investigate fault zones may contribute to increasing the understanding of landslide faulting processes and the assessment of their hazard potential.
Lava domes are severely hazardous, mound-shaped extrusions of highly viscous lava that commonly erupt at active stratovolcanoes around the world. Due to gradual growth and flank oversteepening, such lava domes regularly experience partial or full collapses, resulting in destructive and far-reaching pyroclastic density currents. They are also associated with cyclic explosive activity, as the complex interplay of cooling, degassing, and solidification of dome lavas regularly causes gas pressurization within the dome or the underlying volcanic conduit. Lava dome extrusions can last from days to decades, further highlighting the need for accurate and reliable monitoring data.
This thesis aims to improve our understanding of lava dome processes and to contribute to the monitoring and prediction of the hazards posed by these domes. The recent rise and sophistication of photogrammetric techniques allows for the extraction of observational data in unprecedented detail and creates ideal tools for accomplishing this purpose. Here, I study natural lava dome extrusions as well as laboratory-based analogue models of lava dome extrusions, and employ photogrammetric monitoring by Structure-from-Motion (SfM) and Particle-Image-Velocimetry (PIV) techniques. I primarily use aerial photography data obtained by helicopter, airplane, Unoccupied Aircraft Systems (UAS) or ground-based time-lapse cameras. Firstly, by combining a long time series of overflight data at Volcán de Colima, México, with seismic and satellite radar data, I construct a detailed timeline of lava dome and crater evolution. Using a numerical model, the impact of the extrusion on dome morphology and loading stress is further evaluated, and an influence on the growth direction is identified, bearing important implications for the location of collapse hazards. Secondly, sequential overflight surveys at the Santiaguito lava dome, Guatemala, reveal surface motion data in high detail. I quantify the growth of the lava dome and the movement of a lava flow, showing complex motions that occur on different timescales, and I provide insight into rock properties relevant for hazard assessment, inferred purely from photogrammetric processing of remote sensing data. Lastly, I recreate artificial lava dome and spine growth using analogue modelling under controlled conditions, providing new insights into lava extrusion processes and structures as well as the conditions under which they form.
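At its core, the PIV step rests on cross-correlating image windows from successive frames to recover surface displacements. A minimal FFT-based sketch of that single operation (illustrative only, not the thesis's actual processing chain; all names are assumptions):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement of win_b relative to win_a
    via FFT-based cross-correlation -- the core operation of
    Particle-Image-Velocimetry (illustrative sketch)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b),
                         s=a.shape)
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the window into negative displacements
    ny, nx = a.shape
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return int(dy), int(dx)

# a random texture shifted by (3, 5) pixels should be recovered
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))
print(piv_displacement(img, shifted))  # -> (3, 5)
```

Real PIV processing tiles the images into many such interrogation windows and typically refines the peak to sub-pixel precision.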
These findings demonstrate the capability of photogrammetric data analyses to successfully monitor lava dome growth and evolution, while highlighting the advantages of complementary modelling methods for explaining the observed phenomena. The results presented herein further provide important new insights into, and implications for, the hazards posed by lava domes.
Carbonates play a key role in the chemistry and dynamics of our planet. They are directly connected to the CO2 budget of our atmosphere and have a great impact on the deep carbon cycle. Moreover, recent studies have shown that carbonates are stable along the geothermal gradient down to conditions of Earth's lower mantle, changing their crystal structure and related properties. Subducted carbonates may also react with silicates to form new phases. These reactions will redistribute elements, such as calcium (Ca), magnesium (Mg), iron (Fe) and carbon in the form of carbon dioxide (CO2), but also trace elements carried by the carbonates. The trace elements of most interest are strontium (Sr) and the rare earth elements (REE), which have been found to be important constituents of the primitive lower mantle and of mineral inclusions in super-deep diamonds. However, the stability of carbonates in the presence of mantle silicates at relevant temperatures is far from being well understood. Related to this, very little is known about the distribution of trace elements between carbonates and mantle silicates. To shed light on these processes, we studied reactions between Sr- and REE-containing CaCO3 and Mg/Fe-bearing silicates of the system (Mg,Fe)2SiO4 - (Mg,Fe)SiO3 at high pressure and high temperature using synchrotron-radiation-based μ-X-ray diffraction (μ-XRD) and μ-X-ray fluorescence (μ-XRF) with μm resolution in a laser-heated diamond anvil cell. X-ray diffraction is used to derive the structural changes during the phase reactions, whereas X-ray fluorescence gives information on the chemical changes in the sample. In-situ experiments at high pressure and high temperature were performed at beamline P02.2 at PETRA III (Hamburg, Germany) and at beamline ID27 at the ESRF (Grenoble, France).
In addition to μ-XRD and μ-XRF, ex-situ measurements were made on the recovered sample material using transmission electron microscopy (TEM) and provided further insights into the reaction kinetics of carbonate-silicate reactions.
Our investigations show that CaCO3 is unstable in the presence of mantle silicates above 1700 K, where a reaction takes place in which magnesite plus CaSiO3-perovskite are formed. In addition, we observed that a high iron content in the carbonate-silicate system favours dolomite formation during the reaction. The subduction of natural carbonates containing significant amounts of Sr motivated a comprehensive investigation of the stability not only of CaCO3 phases in contact with mantle silicates but also of SrCO3 (and of Sr-bearing CaCO3). We found that SrCO3 reacts with (Mg,Fe)SiO3-perovskite to form magnesite, and we found evidence for the formation of SrSiO3-perovskite.
To complement our study on the stability of SrCO3 at conditions of the Earth's lower mantle, we performed powder X-ray diffraction and single crystal X-ray diffraction experiments at ambient temperature and up to 49 GPa. We observed a transformation from SrCO3-I into a new high-pressure phase, SrCO3-II, at around 26 GPa, with a Pmmn crystal structure and a bulk modulus of 103(10) GPa. This information is essential to fully understand the phase behaviour and stability of carbonates in the Earth's lower mantle and to elucidate the possibility of introducing Sr into mantle silicates by carbonate-silicate reactions.
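A bulk modulus of this kind enters the pressure-volume relation through an equation of state; a common choice for such compression data is the third-order Birch-Murnaghan form, sketched below. The thesis excerpt does not state which EOS order or pressure derivative was used in the fit, so K0' = 4 here is an assumption, not a reported value.

```python
def birch_murnaghan_pressure(v_over_v0, k0=103.0, k0_prime=4.0):
    """Third-order Birch-Murnaghan equation of state, P(V) in GPa.
    k0 = 103 GPa is the SrCO3-II bulk modulus quoted above;
    k0_prime = 4 is a common assumption, not a fitted value."""
    x = v_over_v0 ** (-1.0 / 3.0)  # (V0/V)^(1/3)
    term = x ** 7 - x ** 5
    correction = 1.0 + 0.75 * (k0_prime - 4.0) * (x ** 2 - 1.0)
    return 1.5 * k0 * term * correction

print(birch_murnaghan_pressure(1.0))  # zero pressure at V = V0
print(birch_murnaghan_pressure(0.9))  # compression to 90% volume, in GPa
```

With k0_prime = 4 the correction factor is unity and the expression reduces to the second-order form.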
Simultaneous recording of μ-XRD and μ-XRF in the μm range over the heated areas provides spatial information not only about the phase reactions but also about the elemental redistribution during the reactions. A comparison of the spatial intensity distribution of the XRF signal before and after heating indicates a change in the elemental distribution of Sr, and an increase in Sr concentration was found around the newly formed SrSiO3-perovskite. With the help of additional TEM analyses of the quenched sample material, the elemental redistribution was studied at a sub-micrometer scale. Contrary to expectations from the combined μ-XRD and μ-XRF measurements, we found that La and Eu were not incorporated into the silicate phases; instead, they tend to form either isolated oxide phases (e.g. Eu2O3, La2O3) or hydroxyl-bastnäsite (La(CO3)(OH)). In addition, we observed the transformation from (Mg,Fe)SiO3-perovskite to low-pressure clinoenstatite during pressure release. The monoclinic structure (P21/c) of this phase allows the incorporation of Ca, as shown by additional EDX analyses, and, to a minor extent, of Sr too.
Based on our experiments, we can conclude that a detection of the trace elements in-situ at high pressure and high temperature remains challenging. However, our first findings imply that silicates may incorporate the trace elements provided by the carbonates and indicate that carbonates may have a major effect on the trace element contents of mantle phases.
In the present study, we employ angle-resolved photoemission spectroscopy (ARPES) to study the electronic structure of topological states of matter, in particular the so-called topological crystalline insulators (TCIs) Pb1-xSnxSe and Pb1-xSnxTe and the Mn-doped Z2 topological insulators (TIs) Bi2Te3 and Bi2Se3. The Z2 class of strong topological insulators is protected by time-reversal symmetry and is characterized by an odd number of metallic Dirac-type surface states in the surface Brillouin zone. The topological crystalline insulators, on the other hand, are protected by individual crystal symmetries and exhibit an even number of Dirac cones.
The topological properties of the lead tin chalcogenide topological crystalline insulators can be tuned by temperature and composition. Here, we demonstrate that Bi-doping of Pb1-xSnxSe(111) epilayers induces a quantum phase transition from a topological crystalline insulator to a Z2 topological insulator. This occurs because Bi-doping lifts the fourfold valley degeneracy in the bulk. As a consequence, a gap appears at the Γ̄ point of the surface Brillouin zone, while the three Dirac cones at the M̄ points remain intact. We interpret this new phase transition as being caused by a lattice distortion. Our findings extend the topological phase diagram enormously and make strong topological insulators switchable by distortion or electric field. In contrast, bulk Bi doping of epitaxial Pb1-xSnxTe(111) films induces a giant Rashba splitting at the surface that can be tuned by the doping level. Tight-binding calculations identify its origin as Fermi level pinning by trap states at the surface.
Magnetically doped topological insulators enable the quantum anomalous Hall effect (QAHE), which provides quantized edge states for lossless charge transport applications. The edge states are hosted by a magnetic energy gap at the Dirac point which had not been experimentally observed to date. Our low-temperature ARPES studies unambiguously reveal the magnetic gap of Mn-doped Bi2Te3. Our analysis shows a gap size below Tc that is five times larger than theoretically predicted. We assign this enhancement to a remarkable structural modification induced by Mn doping: instead of a disordered impurity system, a self-organized alternating sequence of MnBi2Te4 septuple and Bi2Te3 quintuple layers is formed. This enhances the wave-function overlap and gives rise to a large magnetic gap. Mn-doped Bi2Se3 forms a similar heterostructure, but only a nonmagnetic gap is observed in this system. This correlates with the difference in magnetic anisotropy due to the much larger spin-orbit interaction in Bi2Te3 compared to Bi2Se3. These findings provide crucial insights for pushing lossless transport in topological insulators towards room-temperature applications.
The Willmore functional maps an immersed Riemannian manifold to its total mean curvature. Finding closed surfaces that minimize the Willmore energy, or more generally finding critical surfaces, is a classic problem of differential geometry.
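In the convention commonly used for a closed immersed surface Σ (a standard definition; the thesis's precise normalization may differ), the Willmore energy reads:

```latex
\mathcal{W}(\Sigma) = \int_{\Sigma} H^{2} \,\mathrm{d}\mu ,
```

where H is the mean curvature of the immersion and dμ the induced area measure. By the classical Willmore theorem, round spheres minimize W among closed surfaces in Euclidean three-space, attaining the value 4π in the convention H = (κ₁ + κ₂)/2.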
In this thesis we will develop the concept of generalized Willmore functionals for surfaces in Riemannian manifolds. We are guided by models in mathematical physics, such as the Hawking energy of general relativity and the bending energies for thin membranes.
We prove the existence of minimizers under area constraint for these generalized Willmore functionals in a suitable class of generalized surfaces. In particular, we construct minimizers of the bending energy mentioned above for prescribed area and enclosed volume.
Furthermore, we prove that critical surfaces of generalized Willmore functionals with prescribed area are smooth, away from finitely many points. These results and the following are based on the existing theory for the Willmore functional.
This general discussion is succeeded by a detailed analysis of the Hawking energy. In the context of general relativity the surrounding manifold describes the space at a given time, hence we strive to understand the interplay between the Hawking energy and the ambient space. We characterize points in the surrounding manifold for which there are small critical spheres with prescribed area in any neighborhood. These points are interpreted as concentration points of the Hawking energy.
Additionally, we calculate an expansion of the Hawking energy on small, round spheres. This allows us to identify a kind of energy density of the Hawking energy.
It needs to be mentioned that our results stand in contrast to previous expansions of the Hawking energy. However, those expansions are obtained on spheres along the light cone of a given point. At present it is not clear how to explain the discrepancy.
Finally, we consider asymptotically Schwarzschild manifolds. They are a special case of asymptotically flat manifolds, which serve as models for isolated systems. The Schwarzschild spacetime itself is a classical solution to the Einstein equations and yields a simple description of a black hole.
In these asymptotically Schwarzschild manifolds we construct a foliation of the exterior region by critical spheres of the Hawking energy with prescribed large area. This foliation can be seen as a generalized notion of the center of mass of the isolated system. Additionally, the Hawking energy grows along the foliation as the area of the surfaces grows.
Since the golden era of antibiotics, natural products have been of ever-growing interest to both basic research and the applied sciences, as they are the main source of new bioactive compounds delivering lead structures for new pharmaceuticals with potent antibiotic, anti-inflammatory or anti-cancer activities. Alongside the technological advances in high-throughput genome sequencing and a better understanding of the general organization of the modular biosynthetic assembly lines of secondary metabolites, there has also been a shift from wet-lab screening of active cell extracts towards algorithm-based in silico screening for new natural product biosynthetic gene clusters (BGCs). Although the increasing availability of full genome sequences has revealed that such non-ribosomal peptide synthetases (NRPS), polyketide synthases (PKS) and ribosomally synthesized and post-translationally modified peptides (RiPPs) can be found in all three kingdoms of life, certain phyla like actinobacteria and cyanobacteria show a very high density of these secondary metabolite BGCs.
The facultative symbiotic, N2-fixing model organism N. punctiforme PCC 73102 is a terrestrial type IV cyanobacterium that not only dedicates a very large fraction of its genome to secondary metabolite production but is also amenable to genetic modification. AntiSMASH analysis of the genome showed that sixteen potential secondary metabolite BGCs are encoded in N. punctiforme, but until now only two compounds had been assigned to their respective BGCs, leaving the remaining fourteen orphan. This makes the organism a perfect subject for establishing a novel combinatorial genomic mining approach for the detection of new natural products.
In the course of this study, a combinatorial approach of genomic mining, independent monitoring techniques and alteration of cultivation conditions led to new insights into cyanobacterial natural product biosynthesis and ultimately to the description of a novel compound produced by N. punctiforme. With the generation and investigation of a reporter strain library consisting of CFP-producing transcriptional reporter mutants for every predicted secondary metabolite BGC of N. punctiforme, it could be shown that natural product expression is in fact not silent for all those BGCs for which no compound can be detected. Instead, several distinct expression patterns could be described, highlighting that secondary metabolite production is under tight regulation and that only a minor fraction of these BGCs is in fact silent under standard laboratory conditions. Furthermore, increasing light intensity and carbon dioxide availability and cultivating N. punctiforme to very high cell densities had a tremendous impact on the overall metabolic activity of the organism. Investigation of extracts from high-density-cultivated cells ultimately led to the detection of a so far undescribed set of microviridins with unusual extended peptide sequences, named Microviridin N3 – N9. Both cultivation of the transcriptional reporter mutants and RT-qPCR-based detection of secondary metabolite BGC transcription levels revealed that in fact 50% of N. punctiforme's natural product BGCs are upregulated under high cell density conditions. In contrast to this very broad response, co-cultivation of N. punctiforme in chemical or physical contact with an N-deprived host plant (Blasia pusilla) led to a very specific upregulation of two natural product BGCs, namely RIPP3 and RIPP4. Although this response could be confirmed by various independent monitoring techniques and considerable analytical effort was expended, no compound could be assigned to either of these BGCs.
This study is the first in-depth, systematic investigation of a cyanobacterial secondary metabolome by a combinatorial approach of genome mining and independent monitoring techniques, and it can serve as a new strategic approach to gain further insight into the natural product synthesis of various organisms. Although there are single well-described examples of secondary metabolites, like the cell differentiation factor PatS in Anabaena sp. strain PCC 7120, the level and extent of regulation observed in this study is unprecedented, and understanding these mechanisms might be the key to streamlining natural product discovery. However, the results of this study also highlight that induction of secondary metabolite BGCs is not the real challenge. Instead, the new insights point towards analytical issues being a severe hurdle, and finding reliable strategies to overcome these problems may equally drive natural product discovery.
Giant unilamellar vesicles are an important tool in today's experimental efforts to understand the structure and behaviour of biological cells. Their simple structure allows the isolation of the physical elastic properties of the lipid membrane. A central physical property is the bending energy of the membrane, since the many different shapes of giant vesicles can be obtained by finding the minimum of the bending energy. In the spontaneous curvature model the bending energy is a function of the bending rigidity as well as the mean curvature and an additional parameter called the spontaneous curvature, which describes an intrinsic preference of the lipid bilayer to bend towards one side or the other. The spontaneous and mean curvature are local properties of the membrane.
Additional constraints arise from the conservation of the membrane surface area and the enclosed volume, which are global properties.
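With the quantities just introduced (bending rigidity κ, local mean curvature M, spontaneous curvature m), the spontaneous curvature model assigns the membrane a bending energy of the standard Helfrich form (the thesis's sign and normalization conventions may differ slightly):

```latex
E_{\mathrm{be}} = 2\kappa \int_{A} \left( M - m \right)^{2} \mathrm{d}A ,
```

and vesicle shapes are stationary points of this energy under the two global constraints of prescribed membrane area A and enclosed volume V.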
In this thesis the spontaneous curvature model is used to explain the experimental observation of a periodic shape oscillation of a giant unilamellar vesicle that was filled with a protein complex that periodically binds to and unbinds from the membrane.
By assuming that the binding of the proteins to the membrane induces a change in the spontaneous curvature, the experimentally observed shapes could be successfully explained. This involves the numerical solution of the differential equations obtained from the minimization of the bending energy while respecting the area and volume constraints, the so-called shape equations. Vice versa, this approach can be used to estimate the spontaneous curvature from experimentally measurable quantities.
The second topic of this thesis is the analysis of concentration gradients in rigid conic membrane compartments. Gradients of an ideal gas due to gravity and gradients generated by the directed stochastic movement of molecular motors along a microtubule were considered. For the ideal gas, the free energy and the bending energy could be calculated analytically. In the case of the non-equilibrium system with molecular motors, the characteristic length of the density profile, the jam length, and its dependence on the opening angle of the conic compartment were calculated in the mean-field limit.
The mean-field results agree qualitatively with stochastic particle simulations.
The ability of a company to innovate and to launch innovations is a critical competitive edge in the 21st century. Large organizations therefore increasingly recognize employees as a significant factor and critical source of innovation. Several studies assert that every employee has certain skills and knowledge to offer and can contribute to innovation. Hence, every employee has a certain ‘entrepreneurial potential’. This potential can be expressed in the form of entrepreneurial behaviour and can occur in many ways, from monopersonal innovation championing to many small-scale contributions in which several individuals team up for innovation. To support the entrepreneurial behaviour of their employees, large organizations increasingly rely on Corporate Entrepreneurship: they set up organizational structures and venturing units and offer vehicles and tools that help their employees to be more entrepreneurial. The emergence of new tools and technologies thereby allows for new ways of employee involvement, also allowing more radical innovations to be developed collaboratively. Yet many such offerings fail to achieve the desired outcome. While some employees immediately opt in for innovation, others do not, and their entrepreneurial potential remains untapped. This research explores how large organizations can better support their employees in expressing their entrepreneurial potential, thus moving from non-entrepreneurial behaviour, or not wanting to be involved, to actually expressing entrepreneurial behaviour. The underlying research therefore is two-fold: while focusing on the individual level and the entrepreneurial behaviour of employees, it also takes the organizational perspective into account in order to identify how non-entrepreneurial behaviour can be stimulated towards entrepreneurial behaviour.
Using an empirical qualitative research design based on pragmatism and abduction, data is collected by means of qualitative interviews as well as a longitudinal use case setting. Grounded theory is then applied for analysis and sense-making. The main outcome is a theoretical model of why employees express or do not express their entrepreneurial potential and how non-expression can potentially be triggered towards entrepreneurial behaviour. The results indicate that there is no one-size-fits-all model of Corporate Entrepreneurship. This research therefore argues that organizations can achieve higher levels of entrepreneurial behaviour when addressing employees differently. By developing a theoretical model as well as suggestions for how this model can be applied in practice, this research contributes to theory and practice alike. This document closes by suggesting future research areas around supporting employees to express their entrepreneurial potential.
Research on novel and advanced biomaterials is an indispensable step towards their applications in desirable fields such as tissue engineering, regenerative medicine, cell culture, or biotechnology. The work presented here focuses on such a promising material: polyelectrolyte multilayer (PEM) composed of hyaluronic acid (HA) and poly(L-lysine) (PLL). This gel-like polymer surface coating is able to accumulate (bio-)molecules such as proteins or drugs and release them in a controlled manner. It serves as a mimic of the extracellular matrix (ECM) in composition and intrinsic properties. These qualities make the HA/PLL multilayers a promising candidate for multiple bio-applications such as those mentioned above. The work presented aims at the development of a straightforward approach for assessment of multi-fractional diffusion in multilayers (first part) and at control of local molecular transport into or from the multilayers by laser light trigger (second part).
The mechanism of the loading and release is governed by the interaction of bioactives with the multilayer constituents and by the diffusion phenomenon overall. The diffusion of a molecule in HA/PLL multilayers shows multiple fractions with different diffusion rates. Approaches that are able to assess the mobility of molecules in such a complex system are limited. This shortcoming motivated the design of the novel evaluation tool presented here.
The tool employs a simulation-based approach for the evaluation of data acquired by the fluorescence recovery after photobleaching (FRAP) method. In this approach, possible fluorescence recovery scenarios are first simulated and then compared with the acquired data, optimizing the parameters of a model until a sufficient match is achieved. Fluorescent latex particles of different sizes and fluorescein in an aqueous medium are utilized as test samples to validate the analysis results. The diffusion of the protein cytochrome c in HA/PLL multilayers is evaluated as well.
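The multi-fraction idea behind the tool can be illustrated with a minimal closed-form sketch: a recovery curve composed of two mobile fractions with different rates, fitted by least squares. This exponential approximation stands in for the full spatial simulations described above; all names, rates and amplitudes are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_fraction_recovery(t, a_fast, k_fast, a_slow, k_slow):
    """Idealized FRAP recovery with two mobile fractions recovering at
    different exponential rates (illustrative stand-in for the
    simulation-based evaluation)."""
    return (a_fast * (1.0 - np.exp(-k_fast * t))
            + a_slow * (1.0 - np.exp(-k_slow * t)))

# synthetic noisy recovery curve, for illustration only
t = np.linspace(0.0, 60.0, 121)
rng = np.random.default_rng(0)
signal = two_fraction_recovery(t, 0.5, 0.8, 0.3, 0.05)
signal += rng.normal(0.0, 0.01, t.size)

popt, _ = curve_fit(two_fraction_recovery, t, signal,
                    p0=[0.4, 1.0, 0.4, 0.1])
print(popt)  # roughly recovers the amplitudes and rates used above
```

The immobile fraction would appear as the recovery plateau falling short of the pre-bleach intensity, i.e. a_fast + a_slow < 1.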
This tool significantly broadens the possibilities of analysis of spatiotemporal FRAP data, which originate from multi-fractional diffusion, while striving to be widely applicable. This tool has the potential to elucidate the mechanisms of molecular transport and empower rational engineering of the drug release systems.
The second part of the work focuses on the fabrication of such a spatiotemporally controlled drug release system employing the HA/PLL multilayer. This release system comprises different layers of various functionalities that together form a sandwich structure. The bottom layer, which serves as a reservoir, is formed by a HA/PLL PEM deposited on a planar glass substrate. On top of the PEM, a layer of so-called hybrids is deposited. The hybrids consist of thermoresponsive poly(N-isopropylacrylamide) (PNIPAM)-based hydrogel microparticles with surface-attached gold nanorods. The layer of hybrids is intended to serve as a gate that controls the local molecular transport through the PEM-solution interface. The possibility of stimulating this molecular transport by near-infrared (NIR) laser irradiation is explored.
Of several tested approaches for the deposition of hybrids onto the PEM surface, the drying-based approach was identified as optimal. Experiments examining the functionality of the fabricated sandwich at elevated temperature document the reversible volume phase transition of the PEM-attached hybrids while the sandwich remains stable. Further, the gold nanorods were shown to effectively absorb light in the tissue- and cell-friendly NIR spectral region while transducing the energy of the light into heat. The rapid and reversible shrinkage of the PEM-attached hybrids was thereby achieved. Finally, dextran was employed as a model transport molecule. It loads into the PEM reservoir within a few seconds with a partition constant of 2.4, while it is released spontaneously in a slower, sustained manner. Local laser irradiation of the sandwich containing fluorescein-isothiocyanate-tagged dextran leads to a gradual reduction of fluorescence intensity in the irradiated region.
The fabricated release system employs the well-established photoresponsivity of the hybrids in an innovative setting. The results of the research are a step towards a spatially controlled on-demand release system and pave the way to spatiotemporally controlled drug release.
The approaches developed in this work have the potential to elucidate molecular dynamics in the ECM and to foster the engineering of multilayers with properties tuned to mimic the ECM. The work aims at spatiotemporal control over the diffusion of bioactives and their presentation to cells.
Advanced hybrid materials are recognized as one of the most significant enablers of new technologies, which holds true especially in the quest for sustainable energy sources and energy production schemes (e.g., semiconductor-based photocatalytic materials). Usually, a single component is far from meeting all the demands of these advanced applications. Hybrid materials are composed of at least two components, commonly an inorganic and an organic material combined on the molecular level, and feature novel properties exceeding the sum of the individual parts; they might become the milestones of next-generation applications. This dissertation aims to provide novel combinations of the metal-free semiconductor graphitic carbon nitride (g-C3N4) with polymers to obtain materials with advanced properties and applications. Visible light constitutes the core of the present work, as it is the only energy source utilized either in synthesis or in the application process. In the area of applications combining g-C3N4 and polymers, two different hybrids were thoroughly elucidated, i.e., their design and construction as well as their potential application in photocatalysis. Novel soft 3D liquid objects were formed via charge-interaction-driven interfacial jamming between polyelectrolytes in an aqueous environment and colloidal dispersions of g-C3N4 in edible sunflower oil. The resulting stable liquid objects could be molded into specific shapes and utilized for the photodegradation of organic dyes in water. Furthermore, the grafting of polymers onto g-C3N4 was investigated. Allyl-end-functionalized polymers were grafted onto g-C3N4 by a photoinitiated process to yield g-C3N4 with versatile and improved properties, e.g., advanced dispersibility enabling processing via spin coating.
As g-C3N4 produces radicals under visible light irradiation, which is of significant interest for polymer science, g-C3N4-containing polymer latex and macrogel beads (MGBs) were synthesized by emulsion photopolymerization and inverse suspension photopolymerization, respectively. A well-controlled emulsion photopolymerization process initiated by g-C3N4 was designed, which enables the synthesis of well-defined, cross-linked polymer particles. The polymerization process was investigated thoroughly, indicating an ad-layer polymerization in the early stages. The utilization of functionalized g-C3N4 allowed the polymerization of various monomer types. Moreover, g-C3N4 was utilized as a photoinitiator in hydrogel MGB formation. The properties of the formed MGBs could be tailored via process design, e.g., stirring rate, cross-linker content and g-C3N4 content. Finally, MGBs were introduced as photocatalysts for waste water remediation, i.e., the degradation of Rhodamine B in aqueous solution was studied. The present thesis therefore builds a bridge between g-C3N4 and polymers and provides strategies for hybrid material formation. Furthermore, several potential applications are revealed, with significant implications for photocatalysis, polymerization processes and polymer materials.
Interactions involving biological interfaces such as lipid-based membranes are of paramount importance for all life processes. The same also applies to artificial interfaces to which biological matter is exposed, for example the surfaces of drug delivery systems or implants. This thesis deals with the two main types of interface interactions, namely (i) interactions between a single interface and the molecular components of the surrounding aqueous medium and (ii) interactions between two interfaces. Each type is investigated with regard to an important scientific problem in the fields of biotechnology and biology:
1.) The adsorption of proteins to surfaces functionalized with hydrophilic polymer brushes; a process of great biomedical relevance in the context of harmful foreign-body responses to implants and drug delivery systems.
2.) The influence of glycolipids on the interaction between lipid membranes; a hitherto largely unexplored phenomenon with potentially great biological relevance.
Both problems are addressed with the help of (quasi-)planar, lipid-based model surfaces in combination with X-ray and neutron scattering techniques, which yield detailed structural insights into the interaction processes. Regarding the adsorption of proteins to brush-functionalized surfaces, the first scenario considered is the exposure of the surfaces to human blood serum containing a multitude of protein species. Significant blood protein adsorption was observed despite the functionalization, which is commonly believed to act as a protein repellent. The adsorption consists of two distinct modes: strong adsorption to the brush grafting surface and weak adsorption to the brush itself. The second aspect investigated was the fate of brush-functionalized surfaces exposed to aqueous media containing immune proteins (antibodies) against the brush polymer, an emerging problem in current biomedical applications. Here, it was found that antibody binding cannot be prevented by varying the brush grafting density or the polymer length. This result motivates the search for alternative, strictly non-antigenic brush chemistries. With respect to the influence of glycolipids on the interaction between lipid membranes, this thesis focused on the glycolipids' ability to crosslink and thereby tightly attract adjacent membranes. This adherence is due to preferential saccharide–saccharide interactions occurring among the glycolipid headgroups. The phenomenon had previously been described for lipids with special oligosaccharide motifs. Here, it was investigated how common it is among glycolipids with a variety of more abundant saccharide headgroups. It was found that glycolipid-induced membrane crosslinking is equally observed for some of these abundant glycolipid types, strongly suggesting that this under-explored phenomenon is of potentially great biological relevance.
Completely water-based systems are of interest for the development of novel materials for various reasons: on the one hand, they provide a benign environment for biological systems, and on the other hand they facilitate effective molecular transport in a membrane-free environment. In order to investigate the general potential of aqueous two-phase systems (ATPSs) for biomaterials and compartmentalized systems, various solid particles were applied to stabilize all-aqueous emulsion droplets. The target ATPS is prepared by mixing two aqueous solutions of water-soluble polymers, which turn biphasic when a critical polymer concentration is exceeded. Hydrophilic polymers with a wide range of molar masses, such as dextran/poly(ethylene glycol) (PEG), can therefore be applied. Solid particles adsorbed at the interface can be exceptionally efficient stabilizers, forming so-called Pickering emulsions, and nanoparticles can bridge the correlation length of polymer solutions, making them the best option for water-in-water emulsions.
The first approach towards the investigation of ATPSs was conducted with all-aqueous dextran–PEG emulsions in the presence of poly(dopamine) particles (PDP) in Chapter 4. The water-in-water emulsions were formed in a PEG/dextran system utilizing PDP as stabilizers. The formed emulsions were studied via confocal laser scanning microscopy (CLSM), optical microscopy (OM), cryo-scanning electron microscopy (cryo-SEM) and tensiometry. The emulsions, stable for at least 16 weeks, were easily demulsified via dilution or surfactant addition. Furthermore, the solid PDP at the water–water interface were crosslinked in order to inhibit demulsification of the Pickering emulsion. Transmission electron microscopy (TEM) and scanning electron microscopy (SEM) were used to visualize the morphology of the PDP before and after crosslinking. PDP-stabilized water-in-water emulsions were then utilized in Chapter 5 to form supramolecular compartmentalized hydrogels. Here, hydrogels were prepared in pre-formed water-in-water emulsions and gelled via α-cyclodextrin–PEG (α-CD-PEG) inclusion complex formation. The formed complexes were studied via X-ray powder diffraction (XRD), and the mechanical properties of the hydrogels were measured with oscillatory shear rheology. In order to verify the compartmentalized state and its triggered decomposition, hydrogels and emulsions were assessed via OM, SEM and CLSM. The last chapter broadens the investigations of the previous two systems by utilizing various carbon nitrides (CN) as alternative stabilizers in ATPSs. CN introduces another way to trigger demulsification, namely irradiation with visible light; therefore, emulsification and demulsification with various triggers were probed. The investigated all-aqueous multi-phase systems will act as models for the future fabrication of biocompatible materials, cell micropatterning, and the separation of compartmentalized systems.
Magnetotactic bacteria comprise a heterogeneous group of Gram-negative bacteria which share the ability to synthesise intracellular magnetic nanoparticles, known as magnetosomes, that are surrounded by a lipid bilayer and arranged in linear chains. The bacteria exert a unique level of control over the biomineralization of these nanoparticles, reflected in their well-defined size and shape. These characteristics have attracted great interest in understanding the process by which the bacteria synthesise magnetosomes. Moreover, the magnetosome chain imparts the bacteria with a net magnetic dipole, which makes them interact with magnetic fields and thus orient with the Earth's magnetic field. This feature has likewise attracted much interest in understanding how the swimming motility of these microorganisms is affected by the presence of magnetic fields. Most studies of these bacteria have so far been conducted in the traditional manner using large populations of cells. Such studies have the disadvantage of averaging over many individuals with heterogeneous behaviours and fail to consider individual variation. In addition, in large populations each bacterium is subjected to a different microenvironment that influences its behaviour but cannot be defined using these traditional methods. In this thesis, different microfluidic platforms are proposed to overcome these limitations and to offer the possibility of studying magnetotactic bacteria in defined environments and down to single-cell resolution. First, a sediment-like microfluidic platform is presented with the purpose of mimicking the porous environment the bacteria naturally dwell in. The platform allows the observation, via transmitted light microscopy, that bacterial navigation in crowded environments is enhanced at the Earth's magnetic field strength (B = 50 μT) rather than at null (B = 0 μT) or higher magnetic fields (B = 500 μT).
Second, a microfluidic system to confine single bacterial cells in physically defined environments is presented. The system allows studying, via transmitted light microscopy, how the interplay between wall curvature, magnetic field and bacterial speed affects the motion of a confined bacterium, and shows how bacterial trajectories depend on these three parameters. Third, a microfluidic platform to conduct semi-in-vivo magnetosome nucleation studies with single-cell resolution via X-ray fluorescence is fabricated. It is shown that the signal arising from full magnetosome chains can be observed individually in each bacterium. Finally, the iron uptake kinetics of single bacteria are studied via a fluorescent reporter using confocal microscopy. Two different approaches are used for this: one of the previously mentioned platforms, as well as giant lipid vesicles. It is observed that iron uptake rates vary between cells and that these rates are consistent with magnetosome formation taking place within some hours. The present thesis thus shows how microfluidic technologies can be implemented for the study of magnetotactic bacteria at different levels, and the resolution that can be attained by going down to the single-cell scale.
Most reading theories assume that readers aim at word centers for optimal information processing. During reading, however, saccade targeting is imprecise: saccades' initial landing positions often miss the word centers and show high variance, with an additional systematic error that is modulated by the distance from the launch site to the center of the target word. The performance of the oculomotor system, as reflected in the statistics of within-word landing positions, is very robust and is mostly affected by spatial information during reading. Hence, it is assumed that saccade generation is highly automated.
The main goal of this thesis is to explore the performance of the oculomotor system under various reading conditions in which orthographic information and the reading direction were manipulated. Additionally, the challenges of using eye movement data to represent the oculomotor process during reading are addressed.
Two experimental studies and one simulation study were conducted for this thesis, which resulted in the following main findings:
(i) Reading texts with orthographic manipulations leads to specific changes in the eye movement patterns, both in temporal and spatial measures. The findings indicate that the oculomotor control of eye movements during reading is dependent on reading conditions (Chapter 2 & 3).
(ii) Saccades’ accuracy and precision can be simultaneously modulated under a reversed reading condition, supporting the assumption that the random and systematic oculomotor errors are not independent. By assuming that readers increase the precision of sensory observation while maintaining the learned prior knowledge when the reading direction is reversed, a process-oriented Bayesian model of saccade targeting can account for the simultaneous reduction of both oculomotor errors (Chapter 2).
(iii) Plausible parameter values serving as proxies for the intended within-word landing positions can be estimated by using the maximum a posteriori estimator from Bayesian inference. Using the mean value of all observations as proxies is insufficient for studies focusing on the launch-site effect because the method exhibits the strongest bias when estimating the size of the effect. Mislocated fixations remain a challenge for the currently known estimation methods, especially when the systematic oculomotor error is large (Chapter 4).
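The MAP approach mentioned in (iii) can be illustrated with a minimal sketch assuming a conjugate Gaussian prior over the intended landing position and Gaussian observation noise; the function name and all parameter values here are hypothetical illustrations, not those of the thesis.

```python
import numpy as np

def map_landing_position(observations, prior_mean, prior_var, obs_var):
    """MAP estimate of an intended landing position, assuming a Gaussian
    prior (learned, launch-site-dependent expectation) and Gaussian
    observation noise. With conjugate Gaussians, the posterior mode is a
    precision-weighted average of the prior mean and the sample mean."""
    n = len(observations)
    sample_mean = np.mean(observations)
    post_precision = 1.0 / prior_var + n / obs_var
    post_mean = (prior_mean / prior_var + n * sample_mean / obs_var) / post_precision
    return post_mean

# Hypothetical landing positions in letter units relative to the word center
obs = np.array([-1.2, -0.8, -1.5, -0.9])
estimate = map_landing_position(obs, prior_mean=0.0, prior_var=1.0, obs_var=0.5)
```

With few or noisy observations the estimate is pulled towards the prior mean, which is why a MAP proxy behaves differently from the plain sample mean when estimating, e.g., the launch-site effect.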
The results reported in this thesis highlight the role of the oculomotor system, together with underlying cognitive processes, in eye movements during reading. The modulation of oculomotor control can be captured through a precise analysis of landing positions.
In this thesis, I examine different A-bar movement dependencies in Igbo, a Benue-Congo language spoken in southern Nigeria. Movement dependencies are found in constructions where an element is moved to the left edge of the clause to express information-structural categories such as in questions, relativization and focus. I show that these constructions in Igbo are very uniform from a syntactic point of view. The constructions are built on two basic fronting operations: relativization and focus movement, and are biclausal. I further investigate several morphophonological effects that are found in these A-bar constructions. I propose that these effects are reflexes of movement that are triggered when an element is moved overtly in relativization or focus. This proposal helps to explain the tone patterns that have previously been assumed to be a property of relative clauses. The thesis adds to the growing body of tonal reflexes of A-bar movement reported for a few African languages. The thesis also provides an insight into the complementizer domain (C-domain) of Igbo.
In this dissertation, I describe the mechanisms involved in the establishment and evolution of magmatic plumbing systems. Magmatic plumbing systems play a key role in determining the style of volcanic activity, and recognizing their complexities can help in forecasting eruptions, especially in hazardous volcanic systems such as calderas. I explore the mechanisms of dike emplacement and the intrusion geometries that shape magmatic plumbing systems beneath caldera-like topographies, and how their characteristics relate to the precursory activity of a volcanic eruption. For this purpose, I use scaled laboratory models to study the effect of stress field reorientation, induced by caldera topography, on a propagating dike. I construct these models using solid gelatin to mimic the elastic properties of the Earth's crust, with a caldera on the surface. I inject water as the magma analog and track the evolution of the experiments through qualitative (geometry and stress evolution) and quantitative (displacement and strain computation) descriptions. The results show that a vertical dike deviates towards and beyond the caldera-like margin due to stress field reorientation beneath the caldera-like topography. The propagating intrusion forms a circumferential eruptive dike when the caldera-like depression is small, whereas a cone sheet develops beneath a large caldera-like topography.
To corroborate the results obtained from the experimental models, this thesis also describes a case study utilizing seismic monitoring data associated with the unrest period of the 2015 phreatic eruption of Lascar volcano. Lascar has a crater with small-scale caldera-like topography and exhibited a long-lasting anomalous evolution of the number of long-period (LP) events preceding the 2015 eruption. I apply seismic techniques to constrain the hypocentral locations of LP events and characterize their spatial distribution, obtaining an image of Lascar's plumbing system. The shallow hypocentral locations obtained through four different seismic techniques agree, with the cross-correlation technique providing the best results. These results depict a plumbing system with a narrow sub-vertical deep conduit and a shallow hydrothermal system, where most LP events are located. The two regions are connected through an intermediate region of path divergence, whose geometry and orientation are likely influenced by stress reorientation due to topographic effects of the caldera-like crater.
Finally, in order to further strengthen the interpretations of the previous case study, the seismic data were analyzed in tandem with a complementary multiparametric monitoring dataset. This complementary study confirms that the anomalous LP activity occurred as a sign of unrest in the preparatory phase of the phreatic eruption. In addition, I show how changes observed in other monitored parameters made it possible to detect further signs of unrest in the shallow hydrothermal system. Overall, this study demonstrates that detecting complex geometric regions within plumbing systems beneath volcanoes is fundamental for producing effective forecasts of eruptions that at first sight seem to occur without any precursory activity.
Furthermore, through this research I show that combining methods encompassing both observations and models allows one to obtain a more precise interpretation of volcanic processes.
Pre-service physics teachers often have difficulties seeing the relevance of the content of the content knowledge courses they attend in their study; they regularly do not see the connection with the physics they need in their later profession as secondary school teachers. Lower perceived relevance, however, is connected to motivational problems, which leads to both a qualitative and a quantitative problem: not only is student drop-out related to motivation, but the level of conceptual understanding also suffers under lower motivation.
In order to increase the perceived relevance of the problems that pre-service physics teachers have to solve for the courses Experimentalphysik 1 and 2, an intervention study was designed and implemented. In these content knowledge courses, first- and second-semester students attend lectures, do experiments and solve problems on weekly problem sets, which are discussed in tutorial sessions. The problems on a typical problem set are, however, mainly quantitative problems that have no connection to school. In the intervention study, regular quantitative problems are used alongside two newly designed conceptual (qualitative) problem types. One of these problem types comprises conceptual problems that have no implicit or explicit school relevance; the other is based on school-related content knowledge. This content knowledge category describes knowledge that leads to a deeper understanding of school knowledge relevant for teachers: a teacher-specific content knowledge. A new model for this category, SRCK, has been conceptualised and operationalised as a cross-disciplinary model consisting of the conceptual knowledge and skills necessary for this deeper understanding of content relevant to teaching at a secondary school.
During two semesters in the courses Experimentalphysik 1 and 2 (N = 75 and N = 43, respectively), students had to solve the problems on the problem sets. At the start of every tutorial session, they were asked to rate all the problems with respect to perceived relevance and difficulty. Analyses show that the problems based on SRCK were perceived as more relevant than the regular, quantitative problems. However, this difference is only statistically significant for the course Experimentalphysik 2.
The SRCK problems show the connection between the content of the problems and school physics and are therefore seen as more relevant. In Experimentalphysik 1, the content is not that distant from school physics, which might be why the students see all problem types as equally relevant to them. When we look only at the final third of the first semester, however, where more advanced subjects are discussed that are not necessarily covered in secondary school physics, the SRCK problems are again seen as more relevant than the regular problems. We can therefore conclude that when the content is distant from school physics, the SRCK problems are seen as more relevant than the regular problems. We do not see a statistically significant difference between the (conceptual) problems based on SRCK and the conceptual problems that are not based on SRCK (and therefore have no school relevance). This means we do not know whether the conceptual problems based on SRCK are more relevant because they are based on SRCK or because they are conceptual.
In order to find out what problem properties have an influence on the perceived relevance of these problems by pre-service teachers, an interview study with N = 7 pre-service teachers was conducted.
The interviews were conducted using the repertory grid technique, based on the personal construct theory of Kelly (1955). This technique makes it possible to elicit students' personal constructs: how do students determine for themselves how relevant a problem is to them? It allows capturing their intuition or gut feeling. These personal constructs can then provide information about the problem properties that have a positive influence on perceived relevance.
Six categories of personal constructs were found that have a high similarity to relevance. According to the personal constructs that were generated in the interviews, physics problems are more relevant when they are more conceptual (compared to calculational), are close to everyday life, have a lower level of mathematical requirement, have a content that is more school-relevant, give the students the idea that they have learned something, and contain a situation that has to be analysed.
Of the six problem properties described above, one can be connected to the facets of SRCK: many problems based on SRCK contain a situation (e.g. a textbook with a simplified explanation, a student solution with an error) that has to be analysed.
The expectation is that problems based on the six properties described above would be perceived as more relevant by pre-service physics teachers.
Living cells rely on transport and interaction of biomolecules to perform their diverse functions. A powerful toolbox to study these highly dynamic processes in the native environment is provided by fluorescence fluctuation spectroscopy (FFS) techniques. In more detail, FFS takes advantage of the inherent dynamics present in biological systems, such as diffusion, to infer molecular parameters from fluctuations of the signal emitted by an ensemble of fluorescently tagged molecules. In particular, two parameters are accessible: the concentration of molecules and their transit times through the observation volume. In addition, molecular interactions can be measured by analyzing the average signal emitted per molecule - the molecular brightness - and the cross-correlation of signals detected from differently tagged species.
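The fluctuation analysis underlying FFS can be sketched with the standard FCS autocorrelation estimator, G(τ) = ⟨δF(t)δF(t+τ)⟩/⟨F⟩², applied here to a synthetic intensity trace; the Ornstein–Uhlenbeck-style number fluctuations, correlation time, and count level are hypothetical illustration values, not measured data.

```python
import numpy as np

def fcs_autocorrelation(intensity, max_lag):
    """Normalized fluorescence autocorrelation G(tau), the basic FCS
    estimator: correlate the fluctuations dF = F - <F> with themselves
    at increasing lags and normalize by the squared mean intensity."""
    f = np.asarray(intensity, dtype=float)
    mean = f.mean()
    df = f - mean
    n = len(f)
    return np.array(
        [np.mean(df[: n - lag] * df[lag:]) for lag in range(1, max_lag + 1)]
    ) / mean**2

# Synthetic trace: slowly decorrelating number fluctuations (AR(1) /
# Ornstein-Uhlenbeck-like, correlation time tau_c in sampling units)
# on top of a constant mean count level.
rng = np.random.default_rng(0)
n_steps, tau_c = 20000, 50
ou = np.zeros(n_steps)
for i in range(1, n_steps):
    ou[i] = ou[i - 1] * (1 - 1 / tau_c) + rng.normal(scale=np.sqrt(2 / tau_c))
trace = 100 + 10 * ou
g = fcs_autocorrelation(trace, max_lag=500)
```

The amplitude of G at short lags reflects the (inverse) number of fluctuating units in the observation volume, while the decay of G with lag encodes the transit time, which is how FFS extracts concentrations and diffusion dynamics.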
In the present work, several FFS techniques were implemented and applied in different biological contexts. In particular, scanning fluorescence correlation spectroscopy (sFCS) was performed to measure protein dynamics and interactions at the plasma membrane (PM) of cells, and number and brightness (N&B) analysis was used to spatially map molecular aggregation. To account for technical limitations and sample-related artifacts, e.g., detector noise, photobleaching, or background signal, several correction schemes were explored. In addition, sFCS was combined with spectral detection and higher-moment analysis of the photon count distribution to resolve multiple species at the PM.
Using scanning fluorescence cross-correlation spectroscopy and cross-correlation N&B, the interactions of amyloid precursor-like protein 1 (APLP1), a synaptic membrane protein, were investigated. It is shown for the first time directly in living cells, that APLP1 undergoes specific interactions at cell-cell contacts. It is further demonstrated that zinc ions induce formation of large APLP1 clusters that enrich at contact sites and bind to clusters on the opposing cell. Altogether, these results provide direct evidence that APLP1 is a zinc ion dependent neuronal adhesion protein.
In the context of APLP1, discrepancies of oligomeric state estimates were observed, which were attributed to non-fluorescent states of the chosen red fluorescent protein (FP) tag mCardinal (mCard). Therefore, multiple FPs and their performance in FFS based measurements of protein interactions were systematically evaluated. The study revealed superior properties of monomeric enhanced green fluorescent protein (mEGFP) and mCherry2. Furthermore, a simple correction scheme allowed unbiased in situ measurements of protein oligomerization by quantifying non-fluorescent state fractions of FP tags. The procedure was experimentally confirmed for biologically relevant protein complexes consisting of up to 12 monomers.
In the last part of this work, fluorescence correlation spectroscopy (FCS) and single particle tracking (SPT) were used to characterize diffusive transport dynamics in a bacterial biofilm model. Biofilms are surface adherent bacterial communities, whose structural organization is provided by extracellular polymeric substances (EPS) that form a viscous polymer hydrogel. The presented study revealed a probe size and polymer concentration dependent (anomalous) diffusion hindrance in a reconstituted EPS matrix system caused by polymer chain entanglement at physiological concentrations. This result indicates a meshwork-like organization of the biofilm matrix that allows free diffusion of small particles, but strongly hinders diffusion of larger particles such as bacteriophages. Finally, it is shown that depolymerization of the matrix by phage derived enzymes rapidly facilitated free diffusion. In the context of phage infections, such enzymes may provide a key to evade trapping in the biofilm matrix and promote efficient infection of bacteria. In combination with phage application, matrix depolymerizing enzymes may open up novel antimicrobial strategies against multiresistant bacterial strains, as a promising, more specific alternative to conventional antibiotics.
The percolation process, intrinsically a phase transition near a critical point, is ubiquitous in nature. Its applications span a wide spectrum of natural phenomena, ranging from forest fires, the spread of contagious diseases and social behaviour dynamics to mathematical finance, the formation of bedrocks and biological systems. The topology generated by the percolation process near the critical point is a random (stochastic) fractal. It is fundamental to percolation theory that near the critical point a unique infinite fractal structure, namely the infinite cluster, emerges. As de Gennes suggested, the properties of the infinite cluster can be deduced by studying the dynamical behaviour of a random walk process taking place on it; he coined the term "the ant in the labyrinth". The random walk process on such an infinite fractal cluster exhibits subdiffusive dynamics in the sense that the mean squared displacement grows as ~t^(2/d_w), where d_w, called the fractal dimension of the random walk path, is greater than 2. Thus, the random walk process on the infinite cluster is classified as a process exhibiting anomalous diffusion. Yet near the critical point the infinite cluster is not the sole emergent topology; it coexists with other clusters of finite size. Though finite, on specific length scales these clusters exhibit fractal properties as well. In this work, it is assumed that the random walk process can take place on these finite-size objects as well. This assumption requires one to address the non-equilibrium initial condition. Due to the lack of knowledge of the propagator of the random walk process in stochastic random environments, a phenomenological correspondence between the renowned Ornstein–Uhlenbeck process and the random walk process on finite-size clusters is established.
It is elucidated that when an ensemble of these finite-size clusters and the infinite cluster is considered, the anisotropy and size of the finite clusters cause the mean squared displacement and its time-averaged counterpart to grow in time as ~t^((d + d_f(τ−2))/d_w), where d is the embedding Euclidean dimension, d_f is the fractal dimension of the infinite cluster, and τ, called the Fisher exponent, is a critical exponent governing the power-law distribution of the finite-size clusters. Moreover, it is demonstrated that, even though the random walk process on a specific finite-size cluster is ergodic, it exhibits a persistent non-ergodic behaviour when an ensemble of finite-size clusters and the infinite cluster is considered.
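The subdiffusion described above can be illustrated with a minimal "ant in the labyrinth" sketch: blind-ant random walks on a 2D site percolation lattice at the known threshold p_c ≈ 0.5927, averaged over walkers starting on clusters of all sizes. The lattice size, walker counts, periodic wrapping of the occupancy lookup, and the fit window are illustrative choices, not the thesis' method.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 200
p_c = 0.5927  # 2D site percolation threshold
occupied = rng.random((L, L)) < p_c

steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
n_steps, n_walkers = 600, 100
msd = np.zeros(n_steps)

for _ in range(n_walkers):
    # start each blind ant on a random occupied site (any cluster size)
    while True:
        start = rng.integers(0, L, size=2)
        if occupied[start[0], start[1]]:
            break
    pos = start.astype(np.int64).copy()  # unwrapped coordinates
    for t in range(n_steps):
        trial = pos + steps[rng.integers(4)]
        # blind ant: attempt a random neighbour, move only if occupied
        if occupied[trial[0] % L, trial[1] % L]:
            pos = trial
        msd[t] += np.sum((pos - start) ** 2)

msd /= n_walkers
t_axis = np.arange(1, n_steps + 1)
# effective MSD exponent from a log-log fit; subdiffusive means alpha < 1
alpha = np.polyfit(np.log(t_axis[10:]), np.log(msd[10:]), 1)[0]
```

Walkers trapped on small clusters saturate while those on large clusters keep spreading, so the ensemble-averaged MSD grows with an effective exponent well below 1, in the spirit of the cluster-ensemble average discussed above.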
This cumulative thesis is concerned with the evolution of geomagnetic activity since the beginning of the 20th century, that is, the time-dependent response of the geomagnetic field to solar forcing. The focus lies on the description of the magnetospheric response field at ground level, which is particularly sensitive to the ring current system, and an interpretation of its variability in terms of the solar wind driving. Thereby, this work contributes to a comprehensive understanding of long-term solar-terrestrial interactions.
The common basis of the presented publications is formed by a reanalysis of vector magnetic field measurements from geomagnetic observatories located at low and middle geomagnetic latitudes. In the first two studies, new ring-current-targeting geomagnetic activity indices are derived, the Annual and Hourly Magnetospheric Currents indices (A/HMC). Compared to existing indices (e.g., the Dst index), they not only extend the covered period by at least three solar cycles but also constitute a qualitative improvement concerning the absolute index level and the ~11-year solar cycle variability. The analysis of A/HMC shows that (a) the annual geomagnetic activity experiences an interval-dependent trend with an overall linear decline during 1900–2010 of ~5 %; (b) the average trend-free activity level amounts to ~28 nT; (c) the solar cycle related variability shows amplitudes of ~15–45 nT; (d) the activity level for geomagnetically quiet conditions (Kp<2) lies slightly below 20 nT. The plausibility of the last three points is ensured by comparison to independent estimations based either on magnetic field measurements from LEO satellite missions (since the 1990s) or on the modeling of geomagnetic activity from solar wind input (since the 1960s). An independent validation of the long-term trend is problematic, mainly because the sensitivity of the locally measured geomagnetic activity depends on geomagnetic latitude. Consequently, A/HMC is neither directly comparable to global geomagnetic activity indices (e.g., the aa index) nor to the partly reconstructed open solar magnetic flux, which requires a homogeneous response of the ground-based measurements to the interplanetary magnetic field and the solar wind speed.
The last study combines a consistent, HMC-based identification of geomagnetic storms from 1930–2015 with an analysis of the corresponding spatial (magnetic local time-dependent) disturbance patterns. Among other results, the disturbances at dawn and dusk, particularly their evolution during the storm recovery phases, are shown to be indicative of the solar wind driving structure (Interplanetary Coronal Mass Ejections vs. Stream or Co-rotating Interaction Regions), which enables a backward prediction of the storm driver classes. The results indicate that ICME-driven geomagnetic storms have decreased since 1930, which is consistent with the concurrent decrease of HMC. Of the compiled follow-up studies, the inclusion of measurements from high-latitude geomagnetic observatories into the third study's framework seems most promising at this point.
From self-help books and nootropics, to self-tracking and home health tests, to the tinkering with technology and biological particles – biohacking brings biology, medicine, and the material foundation of life into the sphere of »do-it-yourself«. This trend has the potential to fundamentally change people's relationship with their bodies and biology, but it also creates new cultural narratives of responsibility, authority, and differentiation. Covering a broad range of examples, this book explores practices and representations of biohacking in popular culture, discussing their ambiguous position between empowerment and requirement, promise and prescription.
A large body of research now supports the presence of both syntactic and lexical predictions in sentence processing. Lexical predictions, in particular, are considered to indicate a deep level of predictive processing that extends past the structural features of a necessary word (e.g. noun), right down to the phonological features of the lexical identity of a specific word (e.g. /kite/; DeLong et al., 2005). However, evidence for lexical predictions typically focuses on predictions in very local environments, such as the adjacent word or words (DeLong et al., 2005; Van Berkum et al., 2005; Wicha et al., 2004). Predictions in such local environments may be indistinguishable from lexical priming, which is transient and uncontrolled, and as such may prime lexical items that are not compatible with the context (e.g. Kukona et al., 2014). Predictive processing has been argued to be a controlled process, with top-down information guiding preactivation of plausible upcoming lexical items (Kuperberg & Jaeger, 2016). One way to distinguish lexical priming from prediction is to demonstrate that preactivated lexical content can be maintained over longer distances.
In this dissertation, separable German particle verbs are used to demonstrate that preactivation of lexical items can be maintained over multi-word distances. A self-paced reading experiment and an eye-tracking experiment provide some support for the idea that particle preactivation triggered by a verb and its context can be observed by holding the sentence context constant and manipulating the predictability of the particle. Although evidence of an effect of particle predictability was only seen in eye tracking, this is consistent with previous evidence suggesting that predictive processing facilitates only some eye-tracking measures, to which the self-paced reading modality may not be sensitive (Staub, 2015; Rayner, 1998). Interestingly, manipulating the distance between the verb and the particle did not affect reading times, suggesting that the surprisal-predicted faster reading times at long distance may only occur when the additional distance is created by material that adds information about the lexical identity of a distant element (Levy, 2008; Grodner & Gibson, 2005). Furthermore, the results provide support for models proposing that temporal decay is not a major influence on word processing (Lewandowsky et al., 2009; Vasishth et al., 2019).
In the third and fourth experiments, event-related potentials (ERPs) were used as a method for detecting specific lexical predictions. In the initial ERP experiment, we found some support for the presence of lexical predictions when the sentence context constrained the number of plausible particles to a single particle. This was suggested by a frontal post-N400 positivity (PNP) that was elicited when a lexical prediction had been violated, but not by violations when more than one particle had been plausible. The results of this study were highly consistent with previous research suggesting that the PNP might be a much sought-after ERP marker of prediction failure (DeLong et al., 2011; DeLong et al., 2014; Van Petten & Luka, 2012; Thornhill & Van Petten, 2012; Kuperberg et al., 2019). However, a second experiment with a larger sample failed to replicate the effect, though it did suggest that the relationship of the PNP to predictive processing may not yet be fully understood. Evidence for long-distance lexical predictions was inconclusive.
The conclusion drawn from the four experiments is that preactivation of the lexical entries of plausible upcoming particles did occur and was maintained over long distances. The facilitatory effect of this preactivation at the particle site therefore did not appear to be the result of transient lexical priming. However, the question of whether this preactivation can also lead to lexical predictions of a specific particle remains unanswered. Of particular interest to future research on predictive processing is further characterisation of the PNP. Implications for models of sentence processing may be the inclusion of long-distance lexical predictions, or the possibility that preactivation of lexical material can facilitate reading times and ERP amplitude without commitment to a specific lexical item.
The impact that catalysis has on the global economy and the environment is substantial, since 85% of all industrial chemical processes are catalytic. Among these, 80% are heterogeneously catalyzed, 17% make use of homogeneous catalysts, and 3% are biocatalytic. Especially in the pharmaceutical and agrochemical industries, a significant part of these processes involves chiral compounds. Obtaining enantiomerically pure compounds is necessary, and it is usually accomplished by asymmetric synthesis and catalysis, as well as by chiral separation. The efficiency of these processes may be vastly improved if the chiral selectors are positioned on a porous solid support, thereby increasing the available surface area for chiral recognition. Similarly, the majority of commercial catalysts are also supported, usually comprising metal nanoparticles (NPs) dispersed on a highly porous oxide or nanoporous carbon material.
Porous carbons are materials with exceptional thermal and chemical stability as well as electrical conductivity. Their stability at extreme pH values and temperatures, the possibility to tailor their pore architecture and chemical functionalization, and their electric conductivity have already established these materials in the fields of separation and catalysis. However, their heterogeneous chemical structure with abundant defects makes it challenging to develop reliable models for the investigation of structure-performance relationships. Therefore, there is a need to expand the fundamental understanding of these robust materials under experimental conditions to allow for their further optimization for particular applications. This thesis contributes to our knowledge about carbons through different aspects and in different applications.
On the one hand, a rather exotic novel application was investigated through attempts to synthesize porous carbon materials with an enantioselective surface. Chapter 4.1 describes an approach for obtaining mesoporous carbons with an enantioselective surface by direct carbonization of a chiral precursor. Two enantiomers of chiral ionic liquids (CIL) based on the amino acid tyrosine were used as carbon precursors, and ordered mesoporous silica SBA-15 served as a hard template for obtaining porosity. The chiral recognition of the prepared carbons was tested in solution by isothermal titration calorimetry with enantiomers of phenylalanine as probes, as well as by chiral vapor adsorption with 2-butanol enantiomers. Measurements in both solution and the gas phase revealed differences in the affinity of the carbons towards the two enantiomers.
The atomic efficiency of the CIL precursors was increased in Chapter 4.2, and the porosity was developed independently of the development of the chiral carbons, through the formation of stable composites of pristine carbon and a CIL-derived coating. Subjected to the same set of chirality experiments, the composites reported therein showed even higher enantiomeric ratios than those of the previous chapter.
On the other hand, the structure-activity relationship of carbons as supports for gold nanoparticles was studied in a rather traditional catalytic model reaction at the interface between gas, liquid, and solid. In Chapter 5.1 it was shown, for a series of catalysts with different porosities, that the kinetics of the ᴅ-glucose oxidation reaction can be enhanced by increasing the local concentration of the reactants around the active phase of the catalyst. A large amount of uniform narrow mesopores connected to the surface of the Au catalyst supported on ordered mesoporous carbon led to water confinement, which increased the solubility of oxygen in the proximity of the catalyst and thereby increased its apparent catalytic activity.
After increasing the oxygen concentration in the internal area of the catalyst, in Chapter 5.2 the oxygen concentration was increased in the external environment of the catalyst by introducing less cohesive liquids that serve as efficient solvents for oxygen, perfluorinated compounds, near the active phase of the catalyst. This was achieved by the formation of catalyst-particle-stabilized emulsions of perfluorocarbon in aqueous ᴅ-glucose solution, which further promoted the catalytic activity of the gold-on-carbon catalyst.
The findings reported within this thesis are an important step in the understanding of the structure-related properties of carbon materials.
Comment sections of online news platforms are an essential space to express opinions and discuss political topics. However, the misuse by spammers, haters, and trolls raises doubts about whether the benefits justify the costs of the time-consuming content moderation. As a consequence, many platforms limited or even shut down comment sections completely. In this thesis, we present deep learning approaches for comment classification, recommendation, and prediction to foster respectful and engaging online discussions. The main focus is on two kinds of comments: toxic comments, which make readers leave a discussion, and engaging comments, which make readers join a discussion. First, we discourage and remove toxic comments, e.g., insults or threats. To this end, we present a semi-automatic comment moderation process, which is based on fine-grained text classification models and supports moderators. Our experiments demonstrate that data augmentation, transfer learning, and ensemble learning allow training robust classifiers even on small datasets. To establish trust in the machine-learned models, we reveal which input features are decisive for their output with attribution-based explanation methods. Second, we encourage and highlight engaging comments, e.g., serious questions or factual statements. We automatically identify the most engaging comments, so that readers need not scroll through thousands of comments to find them. The model training process builds on upvotes and replies as a measure of reader engagement. We also identify comments that address the article authors or are otherwise relevant to them to support interactions between journalists and their readership. Taking into account the readers' interests, we further provide personalized recommendations of discussions that align with their favored topics or involve frequent co-commenters. Our models outperform multiple baselines and recent related work in experiments on comment datasets from different platforms.
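As a small, generic illustration of the ensemble idea mentioned above (a sketch under my own assumptions, not the thesis' actual architecture), soft voting simply averages the class-probability vectors of several classifiers before thresholding:

```python
def soft_vote(prob_vectors):
    """Average the per-class probabilities predicted by several models
    for the same comment (soft-voting ensemble)."""
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    return [sum(p[c] for p in prob_vectors) / n_models for c in range(n_classes)]

# three hypothetical models scoring one comment, classes = [toxic, non-toxic]
ensemble = soft_vote([[0.9, 0.1], [0.7, 0.3], [0.8, 0.2]])
label = "toxic" if ensemble[0] > 0.5 else "non-toxic"
```

Averaging probabilities rather than hard labels lets confident models outweigh uncertain ones, which is one reason ensembles tend to be robust on small datasets.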
The goal of regenerative medicine is to guide biological systems towards natural healing outcomes using a combination of niche-specific cells, bioactive molecules and biomaterials. In this regard, mimicking the extracellular matrix (ECM) surrounding cells and tissues in vivo is an effective strategy to modulate cell behaviors. Cellular function and phenotype are directed by the biochemical and biophysical signals present in the complex 3D network of ECMs, composed mainly of glycoproteins and hydrophilic proteoglycans. While cellular modulation in response to biophysical cues emulating ECM features has been investigated widely, the influence of the biochemical display of ECM glycoproteins mimicking their presentation in vivo is not well characterized. It remains a significant challenge to build artificial biointerfaces using ECM glycoproteins that precisely match their presentation in nature in terms of morphology, orientation and conformation. This challenge becomes clear when one understands how ECM glycoproteins self-assemble in the body. Glycoproteins produced inside the cell are secreted into the extracellular space, where they are bound to the cell membrane or to other glycoproteins by specific interactions. This leads to elevated local concentration and 2D spatial confinement, resulting in self-assembly through the reciprocal interactions arising from the molecular complementarity encoded in the glycoprotein domains. In this thesis, the air-water (A-W) interface is presented as a suitable platform where self-assembly parameters of ECM glycoproteins such as pH, temperature and ionic strength can be controlled to simulate in vivo conditions (Langmuir technique), resulting in the formation of glycoprotein layers with defined characteristics.
The layer can be further compressed with surface barriers to enhance glycoprotein-glycoprotein contacts, and defined layers of glycoproteins can be immobilized on substrates by a horizontal lift-and-touch method, called the Langmuir-Schäfer (LS) method. Here, the benefit of the Langmuir and LS methods in achieving ECM glycoprotein biointerfaces with controlled network morphology and ligand density on substrates is highlighted and contrasted with the commonly used (glyco)protein solution deposition (SO) method. In general, (glyco)protein layer formation by SO is rather uncontrolled, strongly influenced by (glyco)protein-substrate interactions, and results in multilayers and aggregations on substrates, while the LS method results in (glyco)protein layers with a more homogeneous presentation. To achieve the goal of realizing defined ECM layers on substrates, ECM glycoproteins with the ability to self-assemble were selected: collagen-IV (Col-IV) and fibronectin (FN). A highly packed FN layer with uniform presentation of ligands was deposited on polydimethylsiloxane (PDMS) by the LS method, while a heterogeneous layer with prominent aggregations was formed on PDMS by SO. Mesenchymal stem cells (MSC) on PDMS equipped with FN by LS exhibited more homogeneous and elevated vinculin expression and weaker stress fiber formation than on PDMS equipped with FN by SO, and these divergent responses could be attributed to the differences in glycoprotein presentation at the interface. Col-IV is a scaffolding component of the specialized ECM called the basement membrane (BM) and has the propensity to form 2D networks by self-polymerization associated with cells. Col-IV behaves as a thin, disordered network at the A-W interface. As the Col-IV layer was compressed at the A-W interface using trough barriers, there was negligible change in thickness (layer thickness ~ 50 nm) or in the orientation of the molecules.
The pre-formed organization of Col-IV was transferred by the LS method in a controlled fashion onto substrates meeting the wettability criterion (CA ≤ 80°). MSC adhesion (24 h) on PET substrates coated with Col-IV LS films at surface pressures of 10, 15 and 20 mN·m⁻¹ was (12269.0 ± 5856.4) cells for LS10, (16744.2 ± 1280.1) cells for LS15 and (19688.3 ± 1934.0) cells for LS20, respectively. Remarkably, by selecting the surface areal density of Col-IV on the Langmuir trough, a linear relationship between the number of adherent MSCs and the Col-IV ligand density is obtained on PET. Further, FN has the ability to self-stabilize and form 2D networks (even without compression) while preserving its native β-sheet structure at the A-W interface on a defined subphase (pH = 2). This provides the possibility to form such layers in any vessel (even in standard six-well culture plates), and the cohesive FN layers can be deposited by LS transfer without the need for expensive LB instrumentation. Multilayers of FN can be immobilized on substrates by this approach as easily as by the Layer-by-Layer method, even without the need for a secondary adlayer or an activated bare substrate. Thus, this facile glycoprotein coating strategy is accessible to many researchers to realize defined FN films on substrates for cell culture. In conclusion, Langmuir and LS methods can create biomimetic glycoprotein biointerfaces on substrates, controlling aspects of presentation such as network morphology and ligand density. These methods will be utilized to produce artificial BM mimics and interstitial ECM mimics composed of more than one ECM glycoprotein layer on substrates, serving as artificial niches instructing stem cells for cell-replacement therapies in the future.
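As a side note, the linear relationship reported above can be recovered directly from the three quoted mean cell counts; this short Python sketch (my own illustration, using only the numbers given in the summary) fits cell count against LS surface pressure by ordinary least squares:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x, returning (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx
    a = my - b * mx
    r2 = sxy * sxy / (sxx * syy)
    return a, b, r2

# LS surface pressure (mN/m) vs. mean adherent MSC counts for LS10/LS15/LS20
a, b, r2 = linear_fit([10, 15, 20], [12269.0, 16744.2, 19688.3])
```

With these three points the fit gives a slope of roughly 742 additional adherent cells per mN·m⁻¹ and r² ≈ 0.99, consistent with the linear increase described.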
Early numeracy is one of the strongest predictors of later success in school mathematics (e.g., Duncan et al., 2007). The main goal of first-grade mathematics teachers should therefore be to provide learning opportunities that enable all students to develop sound early numeracy skills. Developmental models, or learning progressions, can describe how early numerical understanding typically develops. Assessments that are aligned to empirically validated learning progressions can support teachers in understanding their students' learning better and targeting instruction accordingly. To date, no progression-based instruments have been made available for German teachers to monitor their students' progress in the domain of early numeracy. This dissertation contributes to the design of such an instrument. The first study analysed the suitability of early numeracy assessments currently used in German primary schools at school entry for identifying students' individual starting points for subsequent progress monitoring. The second study described the development of progression-based items and investigated the items with regard to the main test quality criteria, such as reliability, validity, and test fairness, to find a suitable item pool for building targeted tests. The third study described the construction of the progress monitoring measure, referred to as the learning progress assessment (LPA). The study investigated the extent to which the LPA was able to monitor students' individual learning progress in early numeracy over time. The results of the first study indicated that current school entry assessments were not able to provide meaningful information about the students' initial learning status. Thus, the MARKO-D test (Ricken, Fritz, & Balzer, 2013) was used to determine the students' initial numerical understanding in the other two studies, because it has been shown to be an effective measure of conceptual numerical understanding (Fritz, Ehlert, & Leutner, 2018).
Both studies provided promising evidence for the quality of the LPA and its ability to detect changes in numerical understanding over the course of first grade. The studies of this dissertation can be considered an important step in the process of designing an empirically validated instrument that supports teachers to monitor their students’ early numeracy development and to adjust their teaching accordingly to enhance school achievement.
Cells and tissues are sensitive to mechanical forces applied to them. In particular, bone forming cells and connective tissues, composed of cells embedded in fibrous extracellular matrix (ECM), are continuously remodeled in response to the loads they bear. The mechanoresponses of cells embedded in tissue include proliferation, differentiation, apoptosis, internal signaling between cells, and formation and resorption of tissue.
Experimental in-vitro systems of various designs have demonstrated that forces affect tissue growth, maturation and mineralization. However, the results depended on different parameters such as the type and magnitude of the force applied in each study. Some experiments demonstrated that applied forces increase cell proliferation and inhibit cell maturation rate, while other studies found the opposite effect. When the effect of different magnitudes of forces was compared, some studies showed that higher forces resulted in a cell proliferation increase or differentiation decrease, while other studies observed the opposite trend or no trend at all.
In this study, MC3T3-E1 cells, a cell line of pre-osteoblasts (bone-forming cells), were used. In this cell line, differentiation is known to accelerate after the cells stop proliferating, typically at confluency. This makes the cell line an interesting subject for studying the influence of forces on the switch between the proliferation stage of the precursor cells and their differentiation into mature osteoblasts.
A new experimental system was designed to perform systematic investigations of the influence of the type and magnitude of forces on tissue growth. A single well plate contained an array of 80 rectangular pores. Each pore was seeded with MC3T3-E1 cells. The culture medium contained magnetic beads (MBs) of 4.5 μm in diameter that were incorporated into the pre-osteoblast cells. Using an N52 neodymium magnet, forces ranging over three orders of magnitude were applied to MBs incorporated in cells at 10 different distances from the magnet. The amount of formed tissue was assessed after 24 days of culture. The experimental design made it possible to obtain data concerning (i) the influence of the type of force (static, oscillating, no force) on tissue growth; (ii) the influence of the magnitude of force (pN–nN range); (iii) the effect of functionalizing the magnetic beads with the tripeptide Arg-Gly-Asp (RGD). To assess the cell differentiation state at the end of the tissue growth experiments, the expression of alkaline phosphatase (ALP), a well-known marker of osteoblast differentiation, was analysed.
The experiments showed that the application of static magnetic forces increased tissue growth compared to control, while oscillating forces resulted in tissue growth reduction. A statistically significant positive correlation was found between the amount of tissue grown and the magnitude of the oscillating magnetic force. A positive but non-significant correlation of the amount of tissue with the magnitude of forces was obtained when static forces were applied. Functionalizing the MBs with RGD peptides and applying oscillating forces resulted in an increase of tissue growth relative to tissues incubated with “plain” epoxy MBs. ALP expression decreased as a function of the magnitude of force both when static and oscillating forces were applied. ALP stain intensity was reduced relative to control when oscillating forces were applied and was not significantly different than control for static forces.
The suggested interpretation of the experimental findings is that larger mechanical forces delay cell maturation and keep the pre-osteoblasts in a more proliferative stage characterized by more tissue formed and lower expression of ALP. While the influence of the force magnitude can be well explained by an effect of the force on the switch between proliferation and differentiation, the influence of force type (static or oscillating) is less clear. In particular, it is challenging to reconcile the reduction of tissue formed under oscillating forces as compared to controls with the simultaneous reduction of ALP expression. To better understand this, it may be necessary to refine the staining protocol of the scaffolds and to include the amount and structure of ECM as well as other factors that were not monitored in the experiment and which may influence tissue growth and maturation.
The developed experimental system proved well suited for a systematic and efficient study of the mechanoresponsiveness of tissue growth: it allowed a study of the dependence of tissue growth on force magnitudes ranging over three orders of magnitude, and a comparison between the effects of static and oscillating forces. Future experiments can explore the multiple parameters that affect tissue growth as a function of the magnitude of the force: by applying different time-dependent forces, by extending the force range studied, or by using different cell lines and manipulating mechanotransduction in the cells biochemically.
Some of the most frequent questions surrounding business negotiations address not only the nature of such negotiations, but also how they should be conducted. The answers given by business people from different cultural backgrounds to these questions are likely to differ from the standard answers found in business manuals.
In her book, Milene Mendes de Oliveira investigates how Brazilian and German business people conceptualize and act out business negotiations using English as a Lingua Franca. The frameworks of Cultural Linguistics, English as a Lingua Franca, World Englishes, and Business Discourse offer the theoretical and methodological grounding for the analysis of interviews with high-ranking Brazilian and German business people. Moreover, a side study on e-mail exchanges between Brazilian and German employees of a healthcare company serves as a test case for the results arising from the interviews, and helps understand other facets of authentic intercultural business communication.
Offering new insights on English as a Lingua Franca in international business contexts, Business Negotiations in ELF from a Cultural Linguistic Perspective simultaneously provides a detailed cultural-conceptual account of business negotiations from the viewpoint of Brazilian and German business people and a secondary analysis of their pragmatic aspects.
Socializing Development
(2020)
One of the tremendous discoveries of the Cassini spacecraft has been the detection of propeller structures in Saturn's A ring. Although the generating moonlet is too small to be resolved by the cameras aboard Cassini, the density structure its gravity produces within the rings can be well observed. The largest observed propeller is called Blériot and has an azimuthal extent of several thousand kilometers. Thanks to its large size, Blériot could be identified in different images over a time span of more than 10 years, allowing the reconstruction of its orbital evolution. It turns out that Blériot deviates considerably, by several thousand kilometers in the azimuthal direction, from its expected Keplerian orbit. This excess motion can be well reconstructed by a superposition of three harmonics and therefore resembles the typical fingerprint of a resonantly perturbed body. This PhD thesis addresses the excess motion of Blériot. Resonant perturbations are known for some of the outer satellites of Saturn. Thus, in the first part of this thesis, we search for suitable resonance candidates near the propeller which might explain the observed periods and amplitudes. In numerical simulations, we show that resonances with Prometheus, Pandora and Mimas can indeed explain the libration periods in good agreement, but not the amplitudes. The amplitude problem is solved by the introduction of a propeller-moonlet interaction model, in which we assume a broken symmetry of the propeller caused by a small displacement of the moonlet. This results in a librating motion of the moonlet around the propeller's symmetry center due to the non-vanishing accelerations. The retardation of the reaction of the propeller structure to the motion of the moonlet causes the propeller to become asymmetric. Hydrodynamic simulations to test our analytical model confirm our predictions.
In the second part of this thesis, we consider a stochastic migration of the moonlet, an alternative hypothesis to explain the observed excess motion of Blériot. The mean longitude is a time-integrated quantity and thus introduces a correlation between the independent kicks of a random walk, smoothing the noise and making the residual look similar to the one observed for Blériot. We apply a diagonalization test to decorrelate the observed residuals for the propellers Blériot and Earhart and the ring-moon Daphnis. It turns out that the decorrelated distributions do not strictly follow the expected Gaussian distribution. Since the decorrelation method fails to distinguish a correlated random walk from a noisy libration, we provide an alternative study. Assuming the three-harmonic fit to be a valid representation of the excess motion of Blériot, independently of its origin, we test the likelihood that this excess motion can be created by a random walk. It turns out that neither an uncorrelated nor a correlated random walk is likely to explain the observed excess motion.
The two hallmark features of Brownian motion are the linear growth <x^2(t)> = 2dDt of the mean squared displacement (MSD) with diffusion coefficient D in d spatial dimensions, and the Gaussian distribution of displacements. With the increasing complexity of the studied systems, deviations from these two central properties have been unveiled over the years. Recently, a large variety of systems have been reported in which the MSD exhibits the linear growth in time of Brownian (Fickian) transport while the distribution of displacements is pronouncedly non-Gaussian (Brownian yet non-Gaussian, BNG). A similar behaviour is also observed for viscoelastic-type motion, where an anomalous trend of the MSD, i.e., <x^2(t)> ~ t^α, is combined with a priori unexpected non-Gaussian distributions (anomalous yet non-Gaussian, ANG). This kind of behaviour observed in BNG and ANG diffusion has been related to the presence of heterogeneities in the systems, and a common approach has been established to address it: the random diffusivity approach.
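The random diffusivity idea can be made concrete in a few lines. The following sketch (a generic illustration of the simplest superstatistical model, with arbitrary parameters; not a model from the dissertation itself) draws one exponentially distributed diffusivity per trajectory, which yields a Brownian, linear-in-time ensemble MSD together with a Laplace, i.e. distinctly non-Gaussian, displacement distribution:

```python
import random

def superstatistical_displacements(n_traj, t, mean_D=1.0, seed=42):
    """One displacement per trajectory: D ~ Exp(mean_D) is fixed for each
    trajectory, and given D the displacement x(t) is Gaussian with variance
    2*D*t.  Marginally x(t) is Laplace distributed, while the ensemble MSD
    stays Brownian: <x^2(t)> = 2*mean_D*t."""
    rng = random.Random(seed)
    xs = []
    for _ in range(n_traj):
        D = rng.expovariate(1.0 / mean_D)          # trajectory-wise diffusivity
        xs.append(rng.gauss(0.0, (2.0 * D * t) ** 0.5))
    return xs

xs = superstatistical_displacements(20000, 1.0)
msd = sum(x * x for x in xs) / len(xs)                 # theoretical value: 2*mean_D*t = 2
kurt = (sum(x ** 4 for x in xs) / len(xs)) / msd ** 2  # theoretical value: 6 (Laplace), vs. 3 (Gaussian)
```

The kurtosis near 6 rather than 3 is the non-Gaussian fingerprint, even though the MSD alone is indistinguishable from ordinary Brownian motion.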
This dissertation explores extensively the field of random diffusivity models. Starting from a chronological description of the main approaches used in an attempt to describe BNG and ANG diffusion, different mathematical methodologies are defined for the resolution and study of these models. The processes reported in this work can be classified into three subcategories: i) randomly-scaled Gaussian processes, ii) superstatistical models and iii) diffusing diffusivity models, all belonging to the more general class of random diffusivity models. The study then focuses mostly on BNG diffusion, which is by now well established and relatively well understood. Nevertheless, many examples are discussed for the description of ANG diffusion, in order to highlight the scenarios known so far for the study of this class of processes.
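The superstatistical subcategory can be sketched in a few lines: drawing each trajectory's diffusivity from an exponential distribution keeps the MSD at its Brownian value but makes the displacement distribution Laplace rather than Gaussian, i.e. the BNG scenario. The unit-mean exponential is an assumed toy choice, not a model taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_traj, t = 200_000, 1.0
# superstatistics: each trajectory gets its own diffusivity, here drawn
# from an exponential distribution with unit mean (an assumed toy choice)
D = rng.exponential(1.0, n_traj)
x = rng.normal(0.0, np.sqrt(2 * D * t), n_traj)  # displacement at time t

msd = (x ** 2).mean()                # stays Brownian: 2*<D>*t = 2
kurt = (x ** 4).mean() / msd ** 2    # 3 for a Gaussian, 6 for a Laplace
```

The measured kurtosis near 6 signals the Laplace (non-Gaussian) displacement distribution despite the strictly linear MSD.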
The second part of the dissertation deals with the statistical analysis of random diffusivity processes. A general description based on the concept of the moment-generating function is first provided to obtain standard statistical properties of the models. The discussion then moves to the power spectral analysis and the first-passage statistics of some particular random diffusivity models. A comparison between the results obtained with the random diffusivity approach and those for standard Brownian motion is discussed. In this way, a deeper physical understanding of the systems described by random diffusivity models is also outlined.
To conclude, a discussion of the possible origins of the heterogeneity is sketched, with the main goal of inferring which kinds of systems can actually be described by the random diffusivity approach.
Organizing immigration
(2020)
Immigration constitutes a policy field with often quite unpredictable dynamics. This is because immigration constitutes a ‘wicked problem’, characterized by uncertainty, ambiguity and complexity. Due to the dynamics in the policy field, expectations towards public administrations often change. Following neo-institutionalist theory, public administrations depend on meeting the expectations in the organizational field in order to maintain legitimacy as the basis for, e.g., resources and compliance of stakeholders. With the dynamics in the policy field, expectations might change, and public administrations consequently need to adapt in order to maintain or repair the then threatened legitimacy. If their organizational legitimacy is threatened by a perception of structures and processes as inadequate for changed expectations, an ‘institutional crisis’ unfolds. However, we know little about ministerial bureaucracies’ structural reactions to such crucial moments and how these affect the quest for coordination within policy-making. Overall, the dissertation thus links to both policy analysis and public administration research and consists of five publications. It asks: How do structures in ministerial bureaucracies change in the context of institutional crises? And what effect do these changes have on ministerial coordination? The dissertation focuses on the dynamic policy field of immigration in Germany in the period from 2005 to 2017 and pursues three objectives: 1) to identify the context and impulse for changes in the structures of ministerial bureaucracies, 2) to describe the respective changes with regard to their organizational structures, and 3) to identify their effect on coordination. It compares and contrasts institutional crises induced by incremental change and by shock, as well as changes and effects at the federal and Länder levels, which allows a comprehensive answer to both research questions.
Theoretically, the dissertation follows neo-institutionalist theory with a particular focus on changes in organizational structures, coordination and crisis management. Methodologically, it follows a comparative design. Each article (except for the literature review) focuses on ministerial bureaucracies at one governmental level (federal or Länder) and on an institutional crisis induced by either an incremental process or a shock. Thus, responses and effects can be compared and contrasted across impulses for institutional crises and governmental levels. Overall, the dissertation follows a mixed-methods approach with a majority of qualitative single and small-n case studies based on document analysis and semi-structured interviews. Additionally, two articles use quantitative methods, as these best suited the respective research questions. The rather explorative nature of these two articles, however, fits the overall interpretivist approach of the dissertation. The dissertation’s core argument is: Within the investigation period, varying dynamics, and thus impulses for institutional crises, took place in the German policy field of immigration. Accordingly, stakeholders’ expectations of how the politico-administrative system should address the policy problem changed. Ministerial administrations at both the federal and Länder levels adapted to these expectations in order to maintain, or regain, organizational legitimacy. The administrations resorted to well-known recipes of structural change; institutional crises do not constitute fields of experimentation. The new structures had an immediate effect on ministerial coordination, in both the horizontal and the vertical dimension. Yet, they did not amount to a comprehensive change of the system in place.
The dissertation thus challenges the idea of the toppling effect of crises and rather shows that adaptability and persistence of public administrations constitute two sides of the same coin.
Using individual-based modeling to understand grassland diversity and resilience in the Anthropocene
(2020)
The world’s grassland systems are increasingly threatened by anthropogenic change. Because these systems are susceptible to a variety of stressors, from land-use intensification to climate change, understanding the mechanisms that maintain their biodiversity and stability, and how these mechanisms may shift under human-mediated disturbance, is critical for successfully navigating the next century. Within this dissertation, I use an individual-based and spatially explicit model of grassland community assembly (IBC-grass) to examine several processes thought to be key to understanding grassland biodiversity and stability and how they change under stress. In the first chapter of my thesis, I examine the conditions under which intraspecific trait variation influences the diversity of simulated grassland communities. In the second and third chapters, I shift focus towards understanding how belowground herbivores influence the stability of these grassland systems to either a disturbance that results in increased, stochastic plant mortality, or eutrophication.
Intraspecific trait variation (ITV), or variation in trait values between individuals of the same species, is fundamental to the structure of ecological communities. However, because it has historically been difficult to incorporate into theoretical and statistical models, it has remained largely overlooked in community-level analyses. This reality is quickly shifting, however, as a consensus of research suggests that it may constitute a sizeable proportion of the total variation within an ecological community and that it may play a critical role in determining whether species coexist. Despite this increasing awareness that ITV matters, there is little consensus on the magnitude and direction of its influence. Therefore, to better understand how ITV changes the assembly of grassland communities, in the first chapter of my thesis I incorporate it into an established, individual-based grassland community model, simulating both pairwise invasion experiments and the assembly of communities with varying initial diversities. By varying the amount of ITV in these species’ functional traits, I examine the magnitude and direction of ITV’s influence on pairwise invasibility and community coexistence. During pairwise invasion, ITV enables the weakest species to more frequently invade the competitively superior species; however, this influence does not generally scale to the community level. Indeed, unless the community has low alpha- and beta-diversity, ITV has little effect in bolstering diversity. In these situations, since the trait axis is sparsely filled, competitively inferior species may suffer less competition, and ITV may therefore buffer their persistence and abundance for some time.
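The pairwise-invasion effect of ITV can be illustrated with a toy encounter model, assuming both species draw individual trait values from normal distributions with a shared ITV width. This is an assumption made for the sketch, not the IBC-grass implementation: for two normals the probability that an invader individual beats a resident individual has a closed form and grows with the ITV width whenever the invader's mean is lower.

```python
from math import erf, sqrt

def invasion_win_prob(mu_invader, mu_resident, sigma):
    """P(invader individual outcompetes a resident individual in one encounter)
    when both individual trait values are drawn from normals with ITV width
    sigma around the species means (toy model, not IBC-grass)."""
    if sigma == 0:
        return float(mu_invader > mu_resident)
    # P(X > Y) for X~N(mu_i, sigma^2), Y~N(mu_r, sigma^2)
    z = (mu_invader - mu_resident) / (sigma * sqrt(2))
    return 0.5 * (1 + erf(z))
```

With no ITV a competitively inferior invader never wins; increasing ITV raises its per-encounter win probability towards, but never beyond, one half, mirroring the invasion result described above.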
In the second and third chapters of my thesis, I model how one of the most ubiquitous trophic interactions within grasslands, herbivory belowground, influences their diversity and stability. Until recently, the fundamental difficulty of studying a process within the soil has left belowground herbivory “out of sight, out of mind.” This dilemma presents an opportunity for simulation models to explore how this understudied process may alter community dynamics. In the second chapter of my thesis, I implement belowground herbivory – represented by the weekly removal of plant biomass – into IBC-grass. Then, by introducing a pulse disturbance, modelled as the stochastic mortality of some percentage of the plant community, I observe how the presence of belowground herbivores influences the resistance and recovery of Shannon diversity in these communities. I find that high-resource, low-diversity communities are significantly more destabilized by the presence of belowground herbivores after disturbance. Depending on the timing of the disturbance and whether the grassland’s seed bank persists for more than one season, the impact of the disturbance – and subsequently the influence of the herbivores – can be greatly reduced. However, because human-mediated eutrophication increases the amount of resources in the soil, thus pressuring grassland systems, our results suggest that the influence of these herbivores may become more important over time.
In the third chapter of my thesis, I delve further into the mechanistic underpinnings of belowground herbivory’s influence on grassland diversity by replicating an empirical mesocosm experiment that crosses the presence of herbivores above- and belowground with eutrophication. I show that while aboveground herbivory, as predicted by theory and frequently observed in experiments, mitigates the impact of eutrophication on species diversity, belowground herbivores counterintuitively reduce biodiversity. Indeed, this influence interacts positively with the eutrophication process, amplifying its negative impact on diversity. I discovered the mechanism underlying this surprising pattern to be that, as the herbivores consume roots, they increase the ratio of root resources to root biomass. Because root competition is often symmetric, herbivory fails to mitigate any asymmetries in the plants’ competitive dynamics. However, since the remaining roots have more abundant access to resources, the plants’ competition shifts aboveground, towards asymmetric competition for light. This leads the community towards a low-diversity state, composed of mostly high-performance, large plant species. We further argue that this pattern will emerge unless the plants’ root competition is asymmetric, in which case, like its counterpart aboveground, belowground herbivory may buffer diversity by reducing this asymmetry between the competitively superior and inferior plants.
I conclude my dissertation by discussing the implications of my research for the state of the art in intraspecific trait variation and belowground herbivory, with emphasis on the necessity of more diverse theory development in the study of these fundamental interactions. My results suggest that the influence of these processes on the biodiversity and stability of grassland systems is underappreciated and multidimensional, and must be thoroughly explored if researchers wish to predict how the world’s grasslands will respond to anthropogenic change. Further, should researchers myopically focus on understanding central ecological interactions through only mathematically tractable analyses, they may miss entire suites of potential coexistence mechanisms that can increase the coviability of species, potentially leading to coexistence over ecologically significant timespans. Individual-based modelling, with its focus on individual interactions, will therefore prove a critical tool in the coming decades for understanding how local interactions scale to larger contexts and shape ecological communities, and for predicting how these systems will change under human-mediated stress.
Geomorphology seeks to characterize the forms, rates, and magnitudes of sediment and water transport that sculpt landscapes. This is generally referred to as earth surface processes, which incorporates the influence of biologic (e.g., vegetation), climatic (e.g., rainfall), and tectonic (e.g., mountain uplift) factors in dictating the transport of water and eroded material. In mountains, high relief and steep slopes combine with strong gradients in rainfall and vegetation to create dynamic expressions of earth surface processes. This same rugged topography presents challenges in data collection and process measurement, where traditional techniques involving detailed observations or physical sampling are difficult to apply at the scale of entire catchments. Herein lies the utility of remote sensing. Remote sensing is defined as any measurement that does not disturb the natural environment, typically via acquisition of images in the visible- to radio-wavelength range of the electromagnetic spectrum. Remote sensing is an especially attractive option for measuring earth surface processes, because large areal measurements can be acquired at much lower cost and effort than traditional methods. These measurements cover not only topographic form, but also climatic and environmental metrics, which are all intertwined in the study of earth surface processes. This dissertation uses remote sensing data ranging from handheld camera-based photo surveying to spaceborne satellite observations to measure the expressions, rates, and magnitudes of earth surface processes in high-mountain catchments of the Eastern Central Andes in Northwest Argentina. This work probes the limits and caveats of remote sensing data and techniques applied to geomorphic research questions, and presents important progress at this disciplinary intersection.
Salt pans, also termed playas, are common landscape features of hydrologically closed basins in arid and semiarid zones, where evaporation significantly exceeds local precipitation. The analysis and monitoring of salt pan environments is important for evaluating the current and future impact of these landscape features. Locally, salt pans are important for ecosystems, wildlife and human health, and through dust emissions they influence the climate on regional and global scales. Increasing economic exploitation of these environments in recent years, e.g. by brine extraction for raw materials, as well as climate change, severely affects the water, material and energy balance of these systems. Optical remote sensing has the potential to characterise salt pan environments and to increase the understanding of processes in playa basins, as well as to assess the wider impacts and feedbacks that exist between climate forcing and human intervention in their regions. Remote sensing techniques can provide information for extensive regions on a high temporal basis compared to traditional field samples and ground observations. This is especially true for salt pans, which are often challenging to study because of their large size, remote location, and limited accessibility due to missing infrastructure and ephemeral flooding. Furthermore, the availability of current and upcoming hyperspectral remote sensing data has opened the opportunity to analyse the complex reflectance signatures that relate to the mineralogical mixtures found in salt pan sediments. However, these new advances in sensor technology, as well as the increased data availability, have not yet been fully explored for the study of salt pan environments.
The potential of new sensors needs to be assessed, and state-of-the-art methods need to be adapted and improved, to provide reliable information for in-depth analysis of processes and characterisation of the present condition, as well as to support long-term monitoring and to evaluate the environmental impacts of changing climate and anthropogenic activity.
This thesis provides an assessment of the capabilities of optical remote sensing for the study of salt pan environments, combining the information of hyperspectral data with the increased temporal coverage of multispectral observations for a more complete understanding of the spatial and temporal complexity of salt pan environments, using the Omongwa salt pan in the south-west Kalahari as a test site. In particular, hyperspectral data are used for unmixing of the mineralogical surface composition, spectral feature-based modelling for quantification of the main crust components, as well as time-series-based classification of multispectral data for the assessment of the long-term dynamic and the analysis of the seasonal process regime. The results show that the surface of the Omongwa pan can be categorized into three major crust types based on diagnostic absorption features and mineralogical ground truth data. The mineralogical crust types can be related to different zones of surface dynamic as well as pan morphology, which influences brine flow during the pan inundation and desiccation cycles. Using current hyperspectral imagery, as well as simulated data of upcoming sensors, robust quantification of the gypsum component could be derived. For the test site, the results further indicate that the crust dynamic is mainly driven by flooding events in the wet season, but is also influenced by temperature and aeolian activity in the dry season. Overall, the scientific outcomes show that optical remote sensing can provide a wide range of information helpful for the study of salt pan environments. The thesis also highlights that remote sensing approaches are most relevant when they are adapted to the specific site conditions and research scenario, and that upcoming sensors will increase the potential for mineralogical, sedimentological and geomorphological analysis and will improve monitoring capabilities through increased data availability.
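The linear spectral unmixing underlying such mineralogical mapping can be sketched as follows; the four-band endmember spectra and mixture fractions are purely illustrative values, not data from the thesis, and real workflows add non-negativity and sum-to-one constraints to the solver.

```python
import numpy as np

# toy endmember reflectance spectra over four bands (illustrative values)
gypsum = np.array([0.90, 0.70, 0.40, 0.30])
halite = np.array([0.80, 0.80, 0.70, 0.60])
clay   = np.array([0.30, 0.40, 0.50, 0.55])
E = np.column_stack([gypsum, halite, clay])   # bands x endmembers

# synthetic mixed pixel: 60 % gypsum, 30 % halite, 10 % clay
pixel = 0.6 * gypsum + 0.3 * halite + 0.1 * clay

# linear unmixing by least squares recovers the abundance fractions
abundances, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

Because the synthetic pixel is an exact linear mixture, the unconstrained least-squares solution recovers the fractions exactly; with noisy data the constrained variants are preferred.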
The significant environmental and socioeconomic consequences of hydrometeorological extreme events, such as extreme rainfall, constitute a major motivation for analyzing these events in the south-central Andes of NW Argentina. The steep topographic and climatic gradients and their interactions frequently lead to the formation of deep convective storms and consequently trigger extreme rainfall.
In this dissertation, I focus on identifying the dominant climatic variables and atmospheric conditions and their spatiotemporal variability leading to deep convection and extreme rainfall in the south-central Andes.
This dissertation first examines the significant contribution of temperature to atmospheric humidity (dew-point temperature, Td) and to convection (convective available potential energy, CAPE) in generating deep convective storms, and hence extreme rainfall, along the topographic and climatic gradients. It was found that both climatic variables play an important role in extreme rainfall generation. However, their contributions differ depending on the topographic and climatic sub-regions, as well as on the rainfall percentiles.
Second, this dissertation explores whether near-real-time measurements of integrated water vapor (IWV) from the Global Navigation Satellite System (GNSS) provide reliable data for explaining atmospheric humidity. I argue that GNSS-IWV, in conjunction with other atmospheric stability parameters such as CAPE, can explain extreme rainfall in the eastern central Andes. In my work, I rely on a multivariable regression analysis described by a theoretical relationship and a fitting-function analysis.
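A multivariable regression of this kind can be illustrated on synthetic data; the coefficients, units and noise level below are assumptions made for the sketch, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
iwv  = rng.uniform(10, 60, n)      # integrated water vapor [mm]  (synthetic)
cape = rng.uniform(0, 3000, n)     # CAPE [J/kg]                  (synthetic)
# assumed toy relationship: rainfall grows with both predictors, plus noise
rain = 0.5 * iwv + 0.004 * cape + rng.normal(0, 1.0, n)

# ordinary least squares with an intercept column
X = np.column_stack([iwv, cape, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, rain, rcond=None)
```

The fitted coefficients recover the assumed sensitivities of rainfall to IWV and CAPE, which is the kind of quantitative link the regression analysis establishes.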
Third, this dissertation identifies the local impact of convection on extreme rainfall in the eastern Andes. Relying on a principal component analysis (PCA), it was found that in the presence of moist and warm air, extreme rainfall is observed more often during local night hours. The analysis includes the mechanisms underlying this observation.
The identification of the atmospheric conditions and climatic variables leading to extreme rainfall is one of the main contributions of this dissertation. These conditions and variables are a prerequisite for understanding the dynamics of extreme rainfall and for predicting such events in the eastern Andes.
In recent years, the development of renewable energy sources has attracted much attention due to the increasing environmental pollution caused by burning fossil fuels. The growing public interest in reducing greenhouse gases and in pollution-free energies (biomass, geothermal, solar, water or wind energy) paved the way for scientific research in renewable energies. [1] Solar energy provides unlimited access and offers high flexibility in application, which is needed for energy consumption in a modern society. Scientific interest in photovoltaics (PV) nowadays focuses on discovering new materials and improving material properties, aiming for the production of highly efficient solar cells. Lately, a new type of absorber material based on the perovskite-type structure reached power conversion efficiencies of more than 24%. [2] By varying the chemical composition, electronic properties such as the band gap energy can be tuned to increase the absorption range of this absorber material. This makes these materials particularly attractive for use in tandem solar cells, where silicon and perovskite absorber layers are combined to absorb a large range of the visible light (28.0% efficiency). [2] However, perovskite-based solar cells not only suffer from fast degradation when exposed to humidity, but also from the use of toxic elements (e.g. lead), which can result in long-term environmental damage. Therefore, the aim of this study was to determine the fundamental structural and optoelectronic properties of highly interesting hybrid perovskite materials, the MAPbX3 solid solution (MA=CH3NH3; X=I,Br,Cl) and the triple-cation (FA1-xMAx)1-yCsyPbI3 solid solution (FA=HC(NH2)2). The study was performed on powder samples using X-ray diffraction, revealing the crystal structure and solubility behavior of all solid solutions.
Moreover, the temperature-dependent behavior was studied using in-situ high-resolution synchrotron X-ray diffraction and combined thermal analysis methods. The influence of compositional changes on the band gap energy was investigated using spectroscopic methods such as photoluminescence and diffuse reflectance spectroscopy. The obtained results show that the MAPb(I1-xBrx)3 solid solution exhibits a large miscibility gap in the range of 0.29 (±0.02) ≤ x ≤ 0.92 (±0.02). This miscibility gap limits the compositional range of mixed-halide compounds suitable for use in thin-film solar cells. From the temperature-dependent in-situ synchrotron X-ray diffraction studies, the complete T-X phase diagram was established. Studies on the MAPb(Cl1-xBrx)3 solid solution revealed that it forms a complete solid solution series. For the triple-cation (FA1-xMAx)1-yCsyPbI3 solid solution, the aim was to study the formation of the δ-modification of FAPbI3, which is undesired for solar cell applications. This can be overcome by stabilizing the favored high-temperature cubic α-modification at ambient conditions. By partially substituting the formamidinium molecule with methylammonium and cesium, the stabilization of the cubic modification was successful. The solubility limit of the FA1-xCsxPbI3 solid solution was determined to be x=0.1, while full miscibility was observed for the FA1-xMAxPbI3 solid solution. For the triple-cation (FA1-xMAx)1-yCsyPbI3 solid solution, a cesium solubility limit of y=0.1 was observed. The optoelectronic properties were investigated, revealing a linear change of the band gap energy with chemical composition. It is demonstrated that the stabilized triple-cation compound with cubic perovskite-type crystal structure shows enhanced stability of approximately six months. Furthermore, a short insight into lead-free perovskite-type materials is given, using germanium as a non-toxic alternative to lead.
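The reported linear change of band gap with composition can be sketched as a Vegard-type interpolation between the end members of MAPb(I1-xBrx)3, while flagging compositions inside the reported miscibility gap. The end-member band gap values below are approximate literature-range numbers chosen for illustration, not values taken from the thesis.

```python
# approximate end-member band gaps (eV); illustrative literature-range values,
# NOT taken from the thesis
E_I, E_Br = 1.6, 2.3
GAP = (0.29, 0.92)   # miscibility gap reported for MAPb(I1-xBrx)3

def band_gap(x):
    """Linear (Vegard-type) interpolation of the band gap of MAPb(I1-xBrx)3.
    Returns (energy_eV, inside_miscibility_gap)."""
    e = (1 - x) * E_I + x * E_Br
    return e, GAP[0] <= x <= GAP[1]
```

The flag makes explicit why the gap matters for device design: intermediate compositions with attractive band gaps fall into the two-phase region and are therefore not accessible as single-phase films.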
For germanium-based perovskites, fast decomposition in air was observed, due to the preferred formation of GeI4 in oxygen atmosphere. In-situ low-temperature synchrotron X-ray diffraction measurements revealed a previously unknown low-temperature modification of MAGeI3. [1] Wesselak, V.; Schabbach, T.; Link, T.; Fischer, J.: Handbuch Regenerative Energietechnik. Springer, 2017. [2] NREL: Best Research-Cell Efficiencies. https://www.nrel.gov/pv/assets/pdfs/best-research-cell-efficiencies-190416.pdf (accessed 25.04.2019).
Today’s focus on the 1930s as a time of radical politics paving the way for the apocalypse of the Second World War ignores the complexity of the decade’s cultural responses, especially those by British women writers who highlighted gender issues within their contemporary political climate. The decade’s literature is often understood to capture the political unrest, either narrating people’s chaotic movement or their paralysed shock. This book argues that 1930s novels collapse the distinction between movement and standstill and calls this phenomenon Dynamic Stasis. Dynamic Stasis thematically and structurally informs the novels of Nancy Mitford, Stevie Smith, Rosamond Lehmann and Jean Rhys. By disrupting the oft-repeated cliché of the 1930s as the age of political extremes, gender politics and negotiations of femininity can emerge from the discursive periphery. This book therefore corrects a persistent gender blind spot, which opens up a (re)consideration of authors that have been overlooked in literary criticism of the 1930s to this day.
Understanding how organisms adapt to their local environment is a major focus of evolutionary biology. Local adaptation occurs when the forces of divergent natural selection are strong enough compared to the action of other evolutionary forces. An improved understanding of the genetic basis of local adaptation can inform about the evolutionary processes in populations and is of major importance because of its relevance to altered selection pressures due to climate change. So far, most insights have been gained by studying model organisms, but our understanding of the genetic basis of local adaptation in wild populations of species with few genomic resources is still limited.
With the work presented in this thesis, I therefore set out to provide insights into the genetic basis of local adaptation in populations of two vole species: the common vole (Microtus arvalis) and the bank vole (Myodes glareolus). Both species are small mammals with a high evolutionary potential relative to their dispersal capabilities and are thus likely to show genetic responses to local conditions. Moreover, they have wide distributions in which they experience a broad range of environmental conditions, making them ideal species for studying local adaptation.
The first study focused on producing a novel mitochondrial genome to facilitate further research in M. arvalis. To this end, I generated the first mitochondrial genome of M. arvalis using shotgun sequencing and an iterative mapping approach. This was subsequently used in a phylogenetic analysis that produced novel insights into the phylogenetic relationships of the Arvicolinae.
The following two studies then focused on the genetic basis of local adaptation using ddRAD-sequencing data and genome scan methods. The first of these involved sequencing the genomic DNA of individuals from three low-altitude and three high-altitude M. arvalis study sites in the Swiss Alps. High-altitude environments, with their low temperatures and low levels of oxygen (hypoxia), pose considerable challenges for small mammals. With their small body size and proportionally large body surface, they have to sustain high rates of aerobic metabolism to support thermogenesis and locomotion, which can be restricted when only limited oxygen is available. To generate insights into high-altitude adaptation, I identified a large number of single nucleotide polymorphisms (SNPs). These data were first used to identify high levels of differentiation between study sites and a clear pattern of population structure, in line with a signal of isolation by distance. Using genome scan methods, I then identified signals of selection associated with differences in altitude in genes with functions related to oxygen transport into tissue and genes related to aerobic metabolic pathways. This indicates that hypoxia is an important selection pressure driving local adaptation at high altitude in M. arvalis. A number of these genes have previously been linked with high-altitude adaptation in other species, which suggests that high-altitude populations of several species have evolved in a similar manner in response to the unique conditions at high altitude.
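Genome scans of the kind described often rest on per-locus differentiation statistics; a minimal sketch of an FST outlier scan is given below, using a simple two-population estimator and toy allele frequencies (not data or the exact estimator from the study).

```python
import numpy as np

def fst_per_locus(p1, p2):
    """Simple two-population FST per SNP from allele frequencies
    (a sketch of the idea behind a genome scan, not the thesis pipeline)."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)                        # expected total heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2    # mean within-population
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(h_t > 0, (h_t - h_s) / h_t, 0.0)

# toy data: three loci with similar frequencies, one strongly differentiated
p_low  = np.array([0.50, 0.30, 0.70, 0.10])   # low-altitude population
p_high = np.array([0.52, 0.28, 0.72, 0.90])   # high-altitude population
fst = fst_per_locus(p_low, p_high)
outlier = int(np.argmax(fst))   # index of the candidate locus under selection
```

Loci whose FST stands far above the genome-wide background, like the fourth locus here, become candidates for divergent selection, which is the logic behind the altitude-associated candidate genes reported above.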
The next study also addressed the genetic basis of local adaptation; here I provided insights into climate-related adaptation in M. glareolus across its European distribution. Climate is an important environmental factor affecting the physiology of all organisms. In this study, I identified a large number of SNPs in individuals from twelve M. glareolus populations distributed across Europe. I used these to first establish that the populations are highly differentiated and show a strong pattern of population structure with a signal of isolation by distance. I then employed genome scan methods to identify candidate loci showing signals of selection associated with climate, with a particular emphasis on polygenic loci. A multivariate analysis determined that, among all variables tested, temperature was the climate variable most responsible for adaptive genetic variation. Using novel methods and the genome annotation of related species, I identified the functions of genes at candidate loci. This showed that genes under selection have functions related to energy homeostasis and immune processes, suggesting that M. glareolus populations have evolved in response to local temperature and specific local pathogenic selection pressures.
The studies presented in this thesis provide evidence for the genetic basis of local adaptation in two vole species across different environmental gradients, suggesting that the identified genes are involved in local adaptation. This demonstrates that, with the help of novel methods, the study of wild populations, which often have few genomic resources available, can provide unique insights into evolutionary processes.
To anticipate the future of present-day reef ecosystem turnover under environmental stresses such as global warming and ocean acidification, analogue studies from the geologic past are needed. As a critical time of reef ecosystem innovation, the Permian-Triassic transition witnessed the most severe demise of Phanerozoic reef builders and the establishment of modern-style symbiotic relationships among reef-building organisms. As the initial stage of this transition, the Middle Permian (Capitanian) mass extinction caused a reef eclipse in the early Late Permian, which led to a gap in our understanding of the post-extinction Wuchiapingian reef ecosystem, shortly before the radiation of Changhsingian reefs. This thesis presents detailed biostratigraphic, sedimentological, and palaeoecological studies of Wuchiapingian reef recovery following the Middle Permian (Capitanian) mass extinction, based on the only recorded Wuchiapingian reef setting, outcropping at the Tieqiao section in South China.
Conodont biostratigraphic zonations were revised from the Early Permian Artinskian to the Late Permian Wuchiapingian in the Tieqiao section. Twenty main and seven subordinate conodont zones were determined at the Tieqiao section, including two conodont zones below and above the Tieqiao reef complex. The age of the Tieqiao reef was constrained to the early to middle Wuchiapingian.
After constraining the reef age, detailed two-dimensional outcrop mapping combined with lithofacies studies was carried out on the Wuchiapingian Tieqiao section to investigate the stratigraphic reef growth pattern as well as lateral changes of reef geometry at the outcrop scale. Semi-quantitative studies of the reef-building organisms were used to trace their evolutionary pattern during the reef recovery. Six reef growth cycles were determined within six transgressive-regressive cycles in the Tieqiao section. The reefs developed within the upper part of each regressive phase and were dominated by different biotas. The timing of initial reef recovery after the Middle Permian (Capitanian) mass extinction was updated to the Clarkina leveni conodont zone, earlier than previously understood. Metazoans such as sponges were not major components of the Wuchiapingian reefs until the 5th and 6th cycles, so the recovery of the metazoan reef ecosystem after the Middle Permian (Capitanian) mass extinction was clearly delayed. In addition, although the importance of metazoan reef builders such as sponges did increase during the recovery, encrusting organisms such as Archaeolithoporella and Tubiphytes, combined with microbial carbonate precipitation, still played significant roles in reef building and reef recovery after the mass extinction.
Based on the results of the outcrop mapping and sedimentological studies, quantitative composition analysis of the Tieqiao reef complex was applied to selected thin sections to further investigate the function of reef-building components and reef evolution after the Middle Permian (Capitanian) mass extinction. Data sets of skeletal grains and whole-rock components were analyzed. The results show eleven biocommunity clusters and eight rock-composition clusters dominated by different skeletal grains and rock components. Sponges, Archaeolithoporella, and Tubiphytes were the most ecologically important components of the Wuchiapingian Tieqiao reef, while clotted micrites and syndepositional cements were additional important rock components of the reef cores. Sponges were important throughout the reef recovery. Tubiphytes were broadly distributed across different environments and played a key role in the initial reef communities. Archaeolithoporella concentrated in the shallower parts of the reef cycles (i.e., the upper part of the reef core) and was functionally significant for the enlargement of reef volume.
In general, the reef recovery after the Middle Permian (Capitanian) mass extinction shows some similarities with the reef recovery following the end-Permian mass extinction: a delayed recovery of metazoan reefs and a stepwise recovery pattern controlled by both ecological and environmental factors. The importance of encrusting organisms and microbial carbonates is also similar to that in most other post-extinction reef ecosystems. These findings help extend our understanding of reef ecosystem evolution under environmental perturbation or stress.
Hybrid organic-inorganic perovskites have attracted attention in recent years owing to an unparalleled increase in energy-conversion efficiency, which suggests their application as an absorber material for solar cells. Disadvantages of these materials include, among others, their instability towards moisture and UV radiation. One possible solution to these problems is reducing their size towards the nanoscale: nanosized perovskites show superior stability compared to, e.g., perovskite layers. In addition, the nanoscale even enables stable perovskite structures that could not otherwise be achieved at room temperature.
This thesis is divided into two major parts, according to the composition and band gap of the material and, at the same time, the shape and size of the nanoparticles: the first part treats methylammonium lead tribromide nanoplatelets and the second caesium lead triiodide nanocubes.
The first part focuses on hybrid organic-inorganic perovskite (methylammonium lead tribromide) nanoplatelets with a band gap of 2.35 eV and their thermal behaviour. Owing to the challenging character of this material, several analysis methods are used to investigate the sub-nano- and nanostructures under the influence of temperature. As a result, a shift of phase-transition temperatures towards higher temperatures is observed. This unusual behaviour can be explained by the ligand, which is incorporated into the outer perovskite structure and adds phase stability to the system.
The second part of this thesis focuses on the inorganic caesium lead triiodide nanocubes with a band gap of 1.83 eV. These nanocrystals are first investigated and compared by TEM, XRD, and optical methods. These methods reveal a cuboid shape and an orthorhombic structure instead of the cubic shape and structure described in the literature. Furthermore, the self-assembly of these cuboids on a substrate is investigated, and a high degree of self-assembly is demonstrated. As a next step, the ligands of the nanocuboids are exchanged for other ligands to increase the charge-carrier mobility, which is further investigated with the above-mentioned methods. The last section deals with the enhancement of the CsPbI3 structure by incorporating potassium into the crystal lattice; the results suggest an increase in stability.
In recent years, people have realised the unsustainable nature of our modern society, which relies on spending huge amounts of energy mostly produced from fossil fuels such as oil and coal, and the shift towards more sustainable energy sources has begun. However, sustainable sources of energy, such as wind, solar, and hydro power, produce primarily electrical energy, which cannot simply be poured into a canister like many fossil fuels, creating the need for rechargeable batteries. Yet modern Li-ion batteries are made with toxic heavy metals, and sustainable alternatives are needed. Here we show that naturally abundant catecholic and guaiacyl groups can be utilised to replace heavy metals in Li-ion batteries.
First, vanillin, a naturally occurring food additive that can be sustainably synthesised from the industrial biowaste lignin, was utilised to synthesise materials that showed extraordinary performance as cathodes in Li-ion batteries. Furthermore, the behaviour of catecholic and guaiacyl groups in the Li-ion system was compared, confirming the usability of guaiacyl-containing biopolymers as cathodes in Li-ion batteries. Lastly, the naturally occurring polyphenol tannic acid was incorporated into a fully bioderived hybrid material that shows performance comparable to commercial Li-ion batteries and good stability.
This thesis presents an important advancement in the understanding of biowaste-derived cathode materials for Li-ion batteries. Further research should be conducted to better understand the behaviour of guaiacyl groups during Li-ion battery cycling. Lastly, the challenges of incorporating lignin, an industrial biowaste, have to be addressed so that lignin can be used as a cathode material in Li-ion batteries.
Ammonia is a chemical of fundamental importance for nature's vital nitrogen cycle. It is crucial for the growth of living organisms and serves as a food and energy source. Industrial ammonia production is traditionally dominated by the Haber-Bosch process (HBP), which is based on the direct conversion of N2 and H2 gas at high temperature and high pressure (~500 °C, 150-300 bar). However, it is not an ideal route because of its thermodynamic and kinetic limitations and the need for energy-intensive production of hydrogen gas by reforming processes. These drawbacks of the HBP motivate the search for alternative techniques for efficient ammonia synthesis via electrochemical catalytic processes, in particular via water electrolysis, using water as the hydrogen source and thereby avoiding gas reforming.
This study investigates the interface effects between imidazolium-based ionic liquids and the surface of porous carbon materials, with a special interest in their nitrogen absorption capability. As a further step, the possibility of establishing this interface as the catalytically active area for the electrochemical reduction of N2 to NH3 was evaluated. This particular combination was chosen because porous carbon materials and ionic liquids (ILs) are of significant importance in many scientific fields, including catalysis and electrocatalysis, due to their special structural and physicochemical properties. Primarily, the effects of confining the ionic liquid (EmimOAc, 1-ethyl-3-methylimidazolium acetate) in carbon pores were investigated. Salt-templated porous carbons with different porosities (microporous and mesoporous) and nitrogen species were used as model structures for comparing IL confinement at different loadings. The nitrogen uptake of EmimOAc can be increased about 10-fold by confinement in the pores of carbon materials compared to the bulk form. The most improved nitrogen absorption was observed for IL confined in micropores and in nitrogen-doped carbon materials, as a consequence of the maximized structural changes of the IL. Furthermore, the possible use of such interfaces between EmimOAc and porous carbon for the catalytic activation of dinitrogen during the nitrogen reduction reaction (NRR), which is kinetically challenging due to the limited gas absorption in the electrolyte, was examined. An electrocatalytic NRR system based on the conversion of water and nitrogen gas to ammonia at ambient conditions (1 bar, 25 °C) was operated under an applied electric potential in a single-chamber electrochemical cell, which combines the EmimOAc electrolyte with the porous-carbon working electrode and requires no traditional electrocatalyst. Under a potential of -3 V vs.
SCE for 45 minutes, an NH3 production rate of 498.37 μg h-1 cm-2 and a faradaic efficiency (FE) of 12.14% were achieved. The experimental observations show that an electric double layer, which serves as the catalytically active area, forms between a microporous carbon material and the ions of the EmimOAc electrolyte when a sufficiently high electric potential is applied. Compared with the typical NRR systems reported in the literature, the presented electrochemical ammonia synthesis approach provides a significantly higher ammonia production rate and a chance to avoid the possible kinetic limitations of the NRR. The operating conditions, ammonia production rate, and faradaic efficiency achieved without any synthetic electrocatalyst can be attributed to the electrocatalytic activation of nitrogen in the double layer formed between the carbon and the IL ions.
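The faradaic efficiency quoted above follows the standard definition FE = z·F·n(NH3)/Q, with z = 3 electrons transferred per NH3 molecule. A minimal sketch of that calculation (the input numbers in the example are illustrative, not values from the thesis):

```python
F = 96485.33    # Faraday constant (C/mol)
M_NH3 = 17.031  # molar mass of NH3 (g/mol)
Z = 3           # electrons per NH3 (N2 + 6 H+ + 6 e- -> 2 NH3)

def faradaic_efficiency(m_nh3_ug: float, current_ma: float, time_h: float) -> float:
    """Fraction of the total passed charge consumed by NH3 formation."""
    moles_nh3 = m_nh3_ug * 1e-6 / M_NH3                # ug -> g -> mol
    charge_to_nh3 = Z * F * moles_nh3                  # C used for NH3
    charge_total = current_ma * 1e-3 * time_h * 3600   # C passed in total
    return charge_to_nh3 / charge_total

# Illustrative: 100 ug NH3 collected while passing 1 mA for 1 h
print(round(faradaic_efficiency(100, 1.0, 1.0), 3))  # → 0.472
```

Dividing a measured mass-based production rate by the current density in the same way yields the FE per unit electrode area.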
Feminist Solidarities after Modulation produces an intersectional analysis of transnational feminist movements and their contemporary digital frameworks of identity and solidarity. Engaging media theory, critical race theory, and Black feminist theory, as well as contemporary feminist movements, this book argues that digital feminist interventions map themselves onto, and make use of, the multiplicity and ambiguity of digital spaces to question presentist and fixed notions of the internet as a white space and of technologies in general as objective or universal. Understanding these frameworks as colonial constructions of the human, identity is traced to a socio-material condition that emerges with the modernity/colonialism binary. In the colonial moment, race and gender become the reasons for, as well as the effects of, technologies of identification, and thus need to be understood as and through technologies. What Deleuze has called modulation is not merely a present modality of control, but is placed into a longer genealogy of imperial division, which stands in opposition to feminist, queer, and anti-racist activism that insists on non-modular solidarities across seeming difference. At its heart, Feminist Solidarities after Modulation provides an analysis of contemporary digital feminist solidarities, which not only work at revealing the material histories and affective "leakages" of modular governance, but also challenge them to concentrate on forms of political togetherness that exceed a reductive or essentialist understanding of identity, solidarity, and difference.
The PhD thesis entitled “Actions through the lens of communicative cues. The influence of verbal cues and emotional cues on action processing and action selection in the second year of life” is based on four studies, which examined the cognitive integration of another person’s communicative cues (i.e., verbal cues, emotional cues) with behavioral cues in 18- and 24-month-olds. In the context of social learning of instrumental actions, it was investigated how the intention-related coherence of either a verbally announced action intention or an emotionally signaled action evaluation with an action demonstration influenced infants’ neuro-cognitive processing (Study I) and selection (Studies II, III, IV) of a novel object-directed action. Developmental research has shown that infants benefit from another’s behavioral cues (e.g., action effect, persistency, selectivity) to infer the underlying goal or intention, respectively, of an observed action (e.g., Cannon & Woodward, 2012; Woodward, 1998). Particularly action effects support infants in distinguishing perceptual action features (e.g., target object identity, movement trajectory, final target object state) from conceptual action features such as goals and intentions. However, less is known about infants’ ability to cognitively integrate another’s behavioral cues with additional action-related communicative cues. There is some evidence showing that in the second year of life, infants selectively imitate a novel action that is verbally (“There!”) or emotionally (positive expression) marked as aligning with the model’s action intention over an action that is verbally (“Whoops!”) or emotionally (negative expression) marked as unintentional (Carpenter, Akhtar, & Tomasello, 1998; Olineck & Poulin-Dubois, 2005, 2009; Repacholi, 2009; Repacholi, Meltzoff, Toub, & Ruba, 2016). 
Yet, it is currently unclear which role the specific intention-related coherence of a communicative cue with a behavioral cue plays in infants’ action processing and action selection, that is, whether the communicative cue confirms, contrasts, clarifies, or is unrelated to the behavioral cue. Notably, by using both verbal cues and emotional cues, we examined not only two domains of communicative cues but also two qualitatively distinct relations between behavioral cues on the one hand and communicative cues on the other hand. More specifically, a verbal cue has the potential to communicate an action intention in the absence of an action demonstration and thus a prior-intention (Searle, 1983), whereas an emotional cue evaluates an ongoing or past action demonstration and thus signals an intention-in-action (Searle, 1983). In a first research focus, this thesis examined infants’ capacity to cognitively integrate another’s intention-related communicative cues and behavioral cues, and also focused on the role of the social cues’ coherence in infants’ action processing and action selection. In a second research focus, and to gain more elaborate insights into how the sub-processes of social learning (attention, encoding, response; cf. Bandura, 1977) are involved in this coherence-sensitive integrative processing, we employed a multi-measures approach. More specifically, we used electroencephalography (EEG) and looking times to examine how the cues’ coherence influenced the compound of attention and encoding, and imitation (including latencies to first-touch and first-action) to address the compound of encoding and response. Based on the action-reconstruction account (Csibra, 2007), we predicted that infants use extra-motor information (i.e., communicative cues) together with behavioral cues to reconstruct another’s action intention.
Accordingly, we expected infants to possess a flexibly organized internal action hierarchy, which they adapt according to the cues’ coherence, that is, according to what they inferred to be the overarching action goal. More specifically, in a social-learning situation that comprised an adult model, who demonstrated an action on a novel object that offered two actions, we expected the demonstrated action to lead infants’ action hierarchy when the communicative (i.e., verbal, emotional) cue conveyed similar (confirming coherence) or no additional (unrelated coherence) intention-related information relative to the behavioral cue. In terms of action selection, this action hierarchy should become evident in a selective imitation of the demonstrated action. However, when the communicative cue questioned (contrasting coherence) the behaviorally implied action goal or was the only cue conveying meaningful intention-related information (clarifying coherence), the verbally/emotionally intended action should ascend infants’ action hierarchy. Consequently, infants’ action selection should align with the verbally/emotionally intended action (goal emulation). Notably, these predictions oppose the direct-matching perspective (Rizzolatti & Craighero, 2004), according to which the observation of another’s action directly resonates with the observer’s motor repertoire, with this motor resonance enabling the identification of the underlying action goal. Importantly, the direct-matching perspective predicts a rather inflexible action hierarchy inasmuch as the process of goal identification should solely rely on the behavioral cue, irrespective of the behavioral cue’s coherence with extra-motor intention-related information, as it may be conveyed via communicative cues.
As to the role of verbal cues, Study I used EEG to examine the influence of a confirming (Congruent) versus contrasting (Incongruent) coherence of a verbal action intention with the same action demonstration on 18-month-olds’ conceptual action processing (as measured via mid-latency mean negative ERP amplitude) and motor activation (as measured via central mu-frequency band power). The action was demonstrated on a novel object that offered two action alternatives from a neutral position. We expected mid-latency ERP negativity to be enhanced in Incongruent compared to Congruent, because past EEG research has demonstrated enhanced conceptual processing for stimuli that mismatched rather than matched the semantic context (Friedrich & Friederici, 2010; Kaduk et al., 2016). Regarding motor activation, Csibra (2007) posited that the identification of a clear action goal constitutes a crucial basis for motor activation to occur. We therefore predicted reduced mu power (indicating enhanced motor activation) for Congruent relative to Incongruent, because in Congruent, the cues’ match provides unequivocal information about the model’s action goal, whereas in Incongruent, the conflict may render the model’s action goal more unclear. Unexpectedly, in the entire sample, 18-month-olds’ mid-latency ERP negativity during the observation of the same action demonstration did not differ significantly depending on whether this action was congruent or incongruent with the model’s verbal action intention. Yet, post hoc analyses revealed the presence of two subgroups of infants, each of which exhibited significantly different mid-latency ERP negativity for Congruent versus Incongruent, but in opposing directions.
The subgroups differed in their productive action-related language skills, with the linguistically more advanced infants exhibiting the expected response pattern of enhanced ERP mean negativity in Incongruent relative to Congruent, indicating enhanced conceptual processing of an action demonstration that was contrasted rather than confirmed by the verbal action context. As expected, central mu power in the entire sample was reduced in Congruent relative to Incongruent, indicating enhanced motor activation when the action demonstration was preceded by a confirming relative to a contrasting verbal action intention. This finding may indicate the covert preparation for a preferential imitation of the congruent relative to the incongruent action (Filippi et al., 2016; Frey & Gerry, 2006). Overall, these findings are in line with the action-reconstruction account (Csibra, 2007), because they suggest a coherence-sensitive attention to and encoding of the same perceptual features of another’s behavior and thus a cognitive integration of intention-related verbal cues and behavioral cues. Yet, because the subgroup constellation in infants’ ERPs was only discovered post hoc, future research is clearly required to substantiate this finding. Also, future research should validate our interpretation that enhanced motor activation may reflect an electrophysiological marker of subsequent imitation by employing EEG and imitation in a within-subjects design. Study II built on Study I by investigating the impact of coherence of a verbal cue and a behavioral cue on 18- and 24-month-olds’ action selection in an imitation study. When infants of both age groups observed a confirming (Congruent) or unrelated (Pseudo-word: action demonstration was associated with novel verb-like cue) coherence, they selectively imitated the demonstrated action over the not demonstrated, alternative action, with no difference between these two conditions.
These findings suggest that, as expected, infants’ action hierarchy was led by the demonstrated action when the verbal cue provided similar (Congruent) or no additional (Pseudo-word) intention-related information relative to a meaningful behavioral cue. These findings support the above-mentioned interpretation that enhanced motor activation during action observation may reflect a covert preparation for imitation (Study I). Interestingly, infants did not seem to benefit from the intention-highlighting effect of the verbal cue in Congruent, suggesting that the verbal cue had an unspecific (e.g., attention-guiding) effect on infants’ action selection. In contrast, when infants observed a contrasting (Incongruent) or clarifying (Failed-attempt: model failed to manipulate the object but verbally announced a certain action intention) coherence, their action selection varied with age and also varied across the course of the experiment (block 1 vs. block 2). More specifically, the 24-month-olds made stronger use of the verbal cue for their action selection in block 1 than did the 18-month-olds. However, while the 18-month-olds’ use of the verbal cue increased across blocks, particularly in Incongruent, the 24-month-olds’ use of the verbal cue decreased across blocks. Overall, these results suggest that, as expected, infants’ action hierarchy in Incongruent (both age groups) and Failed-attempt (only 24-month-olds) drew on the verbal action intention, because in both age groups, infants emulated the verbal intention about as often as they imitated the demonstrated action or even emulated the verbal action intention preferentially. Yet, these findings were confined to certain blocks. It may be argued that the younger age group had a harder time inferring and emulating the intended, yet never observed action, because this requirement is more demanding in cognitive and motor terms.
These demands may explain why the 18-month-olds needed some time to take account of the verbal action intention. In contrast, it seems that the 24-month-olds, although demonstrating their capacity in principle to take account of the verbal cue in block 1, lost trust in the model’s verbal cue, maybe because the verbal cue did not have predictive value for the model’s actual behavior. Supporting this interpretation, research on selective trust has demonstrated that already infants evaluate another’s reliability or competence, respectively, based on how that model handles familiar objects (behavioral reliability) or labels familiar objects (verbal reliability; for reviews, see Mills, 2013; Poulin-Dubois & Brosseau-Liard, 2016). Relatedly, imitation research has demonstrated that the interpersonal aspects of a social-learning situation gain increasing relevance for infants during the second year of life (Gellén & Buttelmann, 2019; Matheson, Moore, & Akhtar, 2013; Uzgiris, 1981). It may thus be argued that when the 24-month-olds were repeatedly faced with a verbally unreliable model, they devalued the verbal cue as signaling the model’s action intention and instead relied more heavily on alternative cues such as the behavioral cue (Incongruent) or the action context (e.g., object affordances, salience; Failed-attempt). Infants’ first-action latencies were higher in Incongruent and Failed-attempt than in both Congruent and Pseudo-word, and were also higher in Failed-attempt than in Incongruent. These latency findings thus indicate that situations involving a meaningful verbal cue that deviated from the behavioral cue are cognitively more demanding, resulting in a delayed initiation of a behavioral response. In sum, the findings of Study II suggest that both age groups were highly flexible in their integration of a verbal cue and behavioral cue. Moreover, our results do not indicate a general superiority of either cue.
Instead, it seems to depend on the informational gain conveyed by the verbal cue whether it exerts a specific, intention-highlighting effect (Incongruent, Failed-attempt) or an unspecific (e.g., attention-guiding) effect (Congruent, Pseudo-word). Studies III and IV investigated the impact of another’s action-related emotional cues on 18-month-olds’ action selection. In Study III, infants observed a model, who demonstrated two actions on a novel object in direct succession, and who combined one of the two actions with a positive (happy) emotional expression and the other action with a negative (sad) emotional expression. As expected, infants imitated the positively emoted (PE) action more often than the negatively emoted (NE) action. This preference arose from an increase in infants’ readiness to perform the PE action from the baseline period (prior to the action demonstrations) to the test period (following the action demonstrations), rather than from a decrease in readiness to perform the NE action. The positive cue thus had a stronger behavior-regulating effect than the negative cue. Notably, infants’ more general object-directed behavior in terms of first-touch latencies remained unaffected by the emotional cues’ valence, indicating that infants had linked the emotional cues specifically to the corresponding action and not the object as a whole (Repacholi, 2009). Also, infants’ looking times during the action demonstration did not differ significantly as a function of emotional valence and were characterized by a predominant attentional focus to the action/object rather than to the model’s face. Together with the findings on infants’ first-touch latencies, these results indicate a sensitivity for the notion that emotions can have very specific referents (referential specificity; Martin, Maza, McGrath, & Phelps, 2014).
Together, Study III provided evidence for selective imitation based on another’s intention-related (particularly positive) emotional cues in an action-selection task, and thus indicates that infants’ action hierarchy flexibly responds to another’s emotional evaluation of observed actions. Following Repacholi (2009), we suggest that infants used the model’s emotional evaluation to re-appraise the corresponding action (effect), for instance in terms of desirability. Study IV followed up on Study III by investigating the role of the negative emotional cue for infants’ action selection in more detail. Specifically, we investigated whether a contrasting (negative) emotional cue alone would be sufficient to differentially rank the two actions along infants’ action hierarchy or whether instead infants require direct information about the model’s action intention (in the form of a confirming action-emotion pair) to align their action selection with the emotional cues. Also, we examined whether the absence of a direct behavior-regulating effect of the negative cue in Study III was due to the negative cue itself or to the concurrently available positive cue masking the negative cue’s potential effect. To this end, we split the demonstration of the two action-emotion pairs across two trials. In each trial, one action was thus demonstrated and emoted (PE, NE action), and one action was not demonstrated and un-emoted (UE action). For trial 1, we predicted that infants who observed a PE action demonstration would selectively imitate the PE action, whereas infants who observed a NE action demonstration would selectively emulate the UE action.
As to trial 2, we expected the complementary action-emotion pair to provide additional clarifying information as the model’s emotional evaluation of both actions, which should either lead to adaptive perseveration (if infants’ action selection in trial 1 had already drawn on the emotional cue) or adaptive change (if infants’ action selection in trial 1 signaled a disregard of the emotional cue). As to trial 1, our findings revealed that, as expected, infants imitated the PE action more often than they emulated the UE action. Like in Study III, this selectivity arose from an increase in infants’ propensity to perform the PE action from baseline to trial 1. Also like in Study III, infants performed the NE action about equally often in baseline and trial 1, which speaks against a direct behavior-regulating effect of the negative cue even when presented in isolation. However, after a NE action demonstration, infants emulated the UE action more often in trial 1 than in baseline, suggesting an indirect behavior-regulating effect of the negative cue. Yet, this indirect effect did not yield a selective emulation of the UE action, because infants performed both action alternatives about equally often in trial 1. Unexpectedly, infants’ action selection in trial 2 was unaffected by the emotional cue. Instead, infants perseverated their action selection of trial 1 in trial 2, irrespective of whether it was adaptive or non-adaptive with respect to the model’s emotional evaluation of the action. It seems that infants changed their strategy across trials, from an initial adherence to the emotional (particularly positive) cue, towards bringing about a salient action effect (Marcovitch & Zelazo, 2009). In sum, Studies III and IV indicate a dynamic interplay of different action-selection strategies, depending on valence and presentation order.
Apparently, at least in infancy, action reconstruction as one basis for selective action performance reaches its limits when infants can only draw on indirect intention-related information (i.e., which action should be avoided). Overall, our findings favor the action-reconstruction account (Csibra, 2007), according to which actions are flexibly organized along a hierarchy, depending on inferential processes based on extra-motor intention-related information. At the same time, the findings question the direct-matching hypothesis (Rizzolatti & Craighero, 2004), according to which the identification (and pursuit) of action goals hinges on a direct simulation of another’s behavioral cues. Based on the studies’ findings, a preliminary working model is introduced, which seeks to integrate the two theoretical accounts by conceptualizing the routes that activation induced by social cues may take to eventually influence an infant’s action selection. Our findings indicate that it is useful to strive for a differentiated conceptualization of communicative cues, because they seem to operate at different places within the process of cue integration, depending on their potential to convey direct intention-related information. Moreover, we suggest that there is bidirectional exchange within each compound of adjacent sub-processes (i.e., between attention and encoding, and encoding and response), and between the compounds. Hence, our findings highlight the benefits of a multi-measures approach when studying the development of infants’ social-cognitive abilities, because it provides a more comprehensive picture of how the concerted use of social cues from different domains influences infants’ processing and selection of instrumental actions. Finally, this thesis points to potential future directions to substantiate our current interpretation of the findings. Moreover, an extension to additional kinds of coherence is suggested to get closer to infants’ everyday world of experience.
Negotiations have become a central aspect of managerial life and influence a company’s profit significantly. This is why organizations generally endeavor to increase their negotiation performance. Over the last decades, besides other factors, research has found goal setting to be one of the best predictors of negotiation outcomes. Given the extent and complexity of multi-issue business negotiations, profit optimization by means of improving a company’s goal setting has a great deal of potential. However, developing goal-setting strategies before the actual negotiation is still rather uncommon in business practice. In order to provide professionals with empirical guidance, this work investigates three steps for the development and effective management of goal-setting strategies for business negotiations. Accordingly, this dissertation contains three papers, each one dealing with one specific step. The first paper explores the characterization of social and economic outcomes in different business relationship types at the beginning of the relationship and the development of these outcomes toward the actual status quo. The second paper takes the number of goals into account for goal-setting strategies. This paper uses the two dimensions goal scope and goal difficulty to investigate the relevance and potential of combining different levels of these dimensions in multi-issue negotiations. To this end, a large experiment was conducted measuring the impact on individual and joint negotiation outcomes and on the impasse rate. The third paper analyzes the type and orientation of negotiation goals. When the set of negotiation issues has integrative potential, the opportunity arises to increase the joint gains. To what extent negotiators pursue the integrative potential depends largely on their goal orientation.
A quantitative analysis with practitioners was used to examine the influence that the situational and organizational factors of business negotiations have on the negotiators’ goal orientation. The dissertation closes with implications for practice, limitations of the work, and ideas for future research.
Small moonlets or moons embedded in dense planetary rings create S-shaped density modulations called propellers if their masses are smaller than a certain threshold; if the embedded body’s mass exceeds this threshold, it instead clears a circumferential gap in the disk (Spahn and Sremčević, 2000). The gravitational perturber scatters the ring particles, depletes the disk’s density, and thus clears a gap, whereas the counteracting viscous diffusion of the ring material tends to close the created gap, thereby forming a propeller. Propeller objects were predicted by Spahn and Sremčević (2000) and Sremčević et al. (2002) and were later discovered by the Cassini space probe (Tiscareno et al., 2006, Sremčević et al., 2007, Tiscareno et al., 2008, and Tiscareno et al., 2010). The ring moons Pan and Daphnis are massive enough to maintain the circumferential Encke and Keeler gaps in Saturn’s A ring and were detected by Showalter (1991) and Porco (2005) in Voyager and Cassini images, respectively. In this thesis, a nonlinear axisymmetric diffusion model is developed to describe radial density profiles of circumferential gaps in planetary rings created by embedded moons (Grätz et al., 2018). The model accounts for the gravitational scattering of the ring particles by the embedded moon and for the counteracting viscous diffusion of the ring matter back into the gap. Test particle simulations show that the scattering of ring particles passing the moon is larger for small impact parameters than estimated by Goldreich and Tremaine (1980), which is especially significant for modeling the Keeler gap. The model is applied to the Encke and Keeler gaps with the aim of estimating the shear viscosity of the ring in their vicinities.
In addition, the model is used to analyze whether tiny icy moons, whose dimensions lie below Cassini’s resolution capabilities, could cause the poorly understood gap structure of the C ring and the Cassini Division. One of the most intriguing facets of Saturn’s rings is the extremely sharp edges of the Encke and Keeler gaps: UVIS scans of the gap edges show that the optical depth drops from order unity to zero over a range of far less than 100 m, a spatial scale comparable to the ring’s vertical extent. This occurs despite the fact that the range over which a moon transfers angular momentum onto the ring material is much larger. Borderies et al. (1982, 1989) have shown that this striking feature is likely related to the local reversal of the usually outward-directed viscous transport of angular momentum in strongly perturbed regions. We have revised the Borderies et al. (1989) model using a granular flow model to define the shear and bulk viscosities, ν and ζ, in order to incorporate the angular momentum flux reversal effect into the axisymmetric diffusion model for circumferential gaps presented in this thesis (Grätz et al., 2019). The sharp Encke and Keeler gap edges are modeled and conclusions regarding the shear and bulk viscosities of the ring are discussed. Finally, we explore the question of whether the radial density profile of the central and outer A ring, recently measured by Tiscareno and Harris (2018) in the highest resolution to date, and in particular the sharp outer A ring edge, can be modeled consistently from the balance of gravitational scattering by several outer moons and the mass and momentum transport. To this end, the developed model is extended to account for the inward drifts caused by multiple discrete and overlapping resonances with multiple outer satellites and is then used to hydrodynamically simulate the normalized surface mass density profile of the A ring. This section of the thesis is based on studies by Tajeddine et al.
(2017a) who recently discussed the common misconception that the 7:6 resonance with Janus alone maintains the outer A ring edge, showing that the combined effort of several resonances with several outer moons is required to confine the A ring as observed by the Cassini spacecraft.
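The balance at the heart of such a gap model can be sketched schematically (our own notation, not the thesis’s exact equations): the ring’s surface mass density Σ(r, t) evolves under viscous diffusion and a moon-induced scattering drift,

```latex
\partial_t \Sigma
  \;=\; \partial_r\!\left[\, D(\Sigma)\,\partial_r \Sigma \,\right]
  \;-\; \partial_r\!\left[\, u(r)\,\Sigma \,\right],
```

where D(Σ) ∝ ν is a density-dependent diffusion coefficient and u(r) is the radial drift imposed by the moon’s gravitational scattering. A stationary gap profile then corresponds to the two radial fluxes cancelling, D ∂_r Σ = u Σ: close to the moon the scattering drift dominates and the gap stays open, while far from it diffusion refills the ring.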
Lifelong learning plays an increasingly important role in many societies. Technology is changing faster than ever, and what is important to learn today may be obsolete tomorrow. The role of informal programs is becoming increasingly important. In particular, Massive Open Online Courses have become popular among learners and instructors. In 2008, a group of Canadian education enthusiasts started the first Massive Open Online Courses, or MOOCs, to prove their cognitive theory of Connectivism. Around 2012, a variety of American start-ups redefined the concept of MOOCs. Instead of following the connectivist doctrine, they returned to a more traditional approach. They focussed on video lecturing and combined this with a course forum that allowed the participants to discuss with each other and with the teaching team. While this new version of the concept was enormously successful in terms of massiveness—hundreds of thousands of participants from all over the world joined the first of these courses—many educators criticized the relapse to the cognitivist model. In the early days, the evolving platforms often had no more features than a video player, simple multiple-choice quizzes, and the course forum. It soon became a major interest of research to scale more modern approaches of learning and teaching to the massiveness of these courses. Hands-on exercises, alternative forms of assessment, collaboration, and teamwork are some of the topics on the agenda. The insights provided by cognitive and pedagogical theories, however, do not necessarily run in sync with the needs and preferences of the majority of participants. While the former promote action learning, hands-on learning, competence-based learning, project-based learning, and team-based learning as the holy grail, many of the latter often prefer a more laid-back style of learning, sometimes referred to as edutainment.
Obviously, given the large numbers of participants in these courses, there is not just one type of learner. Participants are not a homogeneous mass but a potpourri of individuals with a wildly heterogeneous mix of backgrounds, previous knowledge, familial and professional circumstances, countries of origin, gender, age, and so on. For the majority of participants, a full-time job and/or a family often does not leave enough room for more time-intensive tasks, such as practical exercises or teamwork. Others, however, particularly enjoy these hands-on or collaborative aspects of MOOCs. Furthermore, many subjects particularly require these possibilities and simply cannot be taught or learned in courses that lack collaborative or hands-on features. In this context, the thesis discusses how team assignments have been implemented on the HPI MOOC platform. In recent years, several experiments have been conducted and a great amount of experience has been gained by employing team assignments in courses in areas such as Object-Oriented Programming, Design Thinking, and Business Innovation on various instances of this platform: openHPI, openSAP, and mooc.house.
A Distance Education or e-Learning platform should be able to provide a virtual laboratory that lets the participants gain hands-on experience and practice their skills remotely. This is especially true in cybersecurity e-Learning, where the participants need to be able to attack or defend an IT system. For hands-on exercises, the virtual laboratory environment must be similar to the real operational environment, where an attacker or a victim is represented by a node in the virtual laboratory scenario. A node is usually represented by a Virtual Machine (VM). Scalability has become a primary issue in virtual laboratories for cybersecurity e-Learning because each VM needs a significant, fixed allocation of resources, and the available resources limit the number of simultaneous users. Scalability, that is, the number of simultaneous users, can be increased by using the available resources more efficiently and by providing more resources.
In this thesis, we propose two approaches to increase the efficiency of using the available resources. The first is to replace virtual machines (VMs) with containers whenever possible. The second is to share the load with user-on-premise machines, where a user-on-premise machine represents one of the nodes in a virtual laboratory scenario. We also propose two approaches to providing more resources. One is to use public cloud services; the other is to gather resources from the crowd, which we refer to as the Crowdresourcing Virtual Laboratory (CRVL).
In CRVL, the crowd can contribute their unused resources in the form of a VM, a bare-metal system, an account in a public cloud, a private cloud, or an isolated group of VMs; in this thesis, we focus on the VM case. The contributor must give the credentials of the VM admin or root user to the CRVL system. We propose an architecture and methods to automatically integrate VMs into, and remove them from, the CRVL system. A team-placement algorithm must also be investigated to optimize the usage of resources while giving the best service to the users. Because the CRVL system does not manage the contributor’s host machine, it must ensure that the VM integration will not harm the contributor’s system and that the training material is stored securely on the contributor’s side, so that no one can take the training material away without permission. We investigate ways to handle these kinds of threats.
We propose three approaches to protect the VM from a malicious host admin. To verify the integrity of a VM before integration into the CRVL system, we propose a remote verification method that does not require additional hardware such as a Trusted Platform Module chip. As the owner of the host machine, the host admin could access the VM's data in Random Access Memory (RAM) via live memory dumping or via Spectre and Meltdown attacks. To make it harder for a malicious host admin to extract sensitive data from RAM, we propose a method that continually moves the sensitive data within RAM. We also propose a method to monitor the host machine by installing an agent on it; the agent monitors the hypervisor configuration and the host admin's activities.
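The idea of continually moving sensitive data can be illustrated with a toy sketch (our own simplification in Python, not the thesis’s implementation, which targets VM memory): the secret is periodically copied into a fresh buffer and the old buffer is scrubbed, so a memory dump taken at any fixed address has a smaller window in which to find it.

```python
class MovingSecret:
    """Toy illustration: keep a secret in a buffer that is periodically
    relocated. Python gives no hard guarantees about where objects live
    in memory, so this only sketches the principle, not a real defense."""

    def __init__(self, secret: bytes):
        self._buf = bytearray(secret)

    def relocate(self):
        new_buf = bytearray(self._buf)   # copy the secret to a new buffer
        for i in range(len(self._buf)):  # scrub the old location
            self._buf[i] = 0
        self._buf = new_buf

    def get(self) -> bytes:
        return bytes(self._buf)


secret = MovingSecret(b"training-key")
for _ in range(3):       # in practice this would be driven by a timer
    secret.relocate()
print(secret.get())      # the secret survives every relocation
```

In a real deployment the relocation would have to run inside the protected VM and also randomize the placement of the new buffer; the class name and interface above are invented for illustration.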
To evaluate our approaches, we conduct extensive experiments with different settings. The use case in our approach is Tele-Lab, a virtual laboratory platform for cybersecurity e-Learning. We use this platform as a basis for designing and developing our approaches. The results show that our approaches are practical and provide enhanced security.
In today's world, many applications produce large amounts of data at an enormous rate. Analyzing such datasets for metadata is indispensable for effectively understanding, storing, querying, manipulating, and mining them. Metadata summarizes technical properties of a dataset, which range from basic statistics to complex structures describing data dependencies. One type of dependency is the inclusion dependency (IND), which expresses subset relationships between attributes of datasets. Inclusion dependencies are therefore important for many data management applications, such as data integration, query optimization, schema redesign, and integrity checking. Hence, the discovery of inclusion dependencies in unknown or legacy datasets is at the core of any data profiling effort.
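As a minimal illustration of the concept (a brute-force toy check, not the algorithms developed in the thesis; the table and column names are invented): the unary IND A ⊆ B holds if every value of attribute A also occurs in attribute B.

```python
def unary_inds(tables):
    """tables: dict mapping (table, column) -> list of values.
    Returns all attribute pairs (a, b), a != b, with values(a) ⊆ values(b)."""
    value_sets = {attr: set(vals) for attr, vals in tables.items()}
    result = []
    for a, va in value_sets.items():
        for b, vb in value_sets.items():
            if a != b and va <= vb:   # subset test on the value sets
                result.append((a, b))
    return result


data = {
    ("orders", "customer_id"): [1, 2, 2, 3],
    ("customers", "id"): [1, 2, 3, 4],
}
print(unary_inds(data))  # orders.customer_id ⊆ customers.id, a foreign-key candidate
```

This quadratic pairwise testing is exactly what clustering-based algorithms such as S-indd++ avoid on large schemas.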
For exhaustively detecting all INDs in large datasets, we developed S-indd++, a new algorithm that eliminates the shortcomings of existing IND-detection algorithms and significantly outperforms them. S-indd++ is based on a novel concept of attribute clustering for efficiently deriving INDs. Inferring INDs from our attribute clustering eliminates all redundant operations incurred by other algorithms. S-indd++ is also based on a novel partitioning strategy that enables discarding a large number of candidates in early phases of the discovery process. Moreover, S-indd++ does not require a partition to fit into main memory, a highly desirable property in the face of ever-growing datasets. S-indd++ reduces the runtime of the state-of-the-art approach by up to 50%.
None of the existing approaches for discovering INDs is suitable for dynamic datasets: they cannot update the INDs after an update of the dataset without reprocessing it entirely. To this end, we developed the first approach for incrementally updating INDs in frequently changing datasets. We achieved this by reducing the problem of incrementally updating INDs to that of incrementally updating the attribute clustering from which all INDs are efficiently derivable. We realized the update of the clusters by designing new operations to be applied to the clusters after every data update. The incremental update of INDs reduces the time of a complete rediscovery by up to 99.999%.
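To see why incremental maintenance can be far cheaper than rediscovery, consider a toy unary-IND index (our own simplification; the thesis maintains an attribute clustering rather than the per-pair bookkeeping shown here): when a genuinely new value enters an attribute, only the INDs touching that attribute need revalidation.

```python
from collections import Counter


class IncrementalINDs:
    """Toy incremental maintenance of unary INDs. We keep value multisets
    per attribute; on insertion, only INDs involving the touched attribute
    are revalidated instead of rediscovering everything from scratch."""

    def __init__(self, data):
        self.counts = {a: Counter(vals) for a, vals in data.items()}
        self.inds = {(a, b) for a in self.counts for b in self.counts
                     if a != b and set(self.counts[a]) <= set(self.counts[b])}

    def insert(self, attr, value):
        fresh = self.counts[attr][value] == 0
        self.counts[attr][value] += 1
        if fresh:
            # a genuinely new value in `attr` can only break INDs attr ⊆ b ...
            self.inds -= {(a, b) for (a, b) in self.inds
                          if a == attr and self.counts[b][value] == 0}
            # ... and may newly validate INDs a ⊆ attr (simple full re-check here)
            for a in self.counts:
                if a != attr and (a, attr) not in self.inds \
                        and set(self.counts[a]) <= set(self.counts[attr]):
                    self.inds.add((a, attr))
```

Deletions would additionally need the multiset counts to decide when a value disappears entirely; the clustering-based operations of the thesis handle both cases without pairwise subset tests.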
All existing algorithms for discovering n-ary INDs are based on the principle of candidate generation: they generate candidates and test their validity against the given data instance. The major disadvantage of this technique is the exponentially growing number of database accesses, in terms of SQL queries, required for validation. We devised Mind2, the first approach for discovering n-ary INDs without candidate generation. Mind2 is based on a new mathematical framework, developed in this thesis, for computing the maximum INDs from which all other n-ary INDs are derivable. The experiments showed that Mind2 is significantly more scalable and effective than hypergraph-based algorithms.
This thesis is focused on a better understanding of the formation mechanism of bulk birefringence gratings (BBG) and surface relief gratings (SRG) in photo-sensitive polymer films. A new set-up is developed enabling the in situ investigation of how the polymer film is structured during irradiation with modulated light. The novel aspect of the equipment is that it combines several techniques, such as a diffraction efficiency (DE) set-up, an atomic force microscope (AFM), and an optical set-up for controlled illumination of the sample. This enables the simultaneous acquisition and differentiation of both gratings (BBG and SRG) while changing the irradiation conditions in a desired way.
The dissertation is based on five publications. The first publication (I) is focused on the description of the set-up and interpretation of the measured data. A fine structure within the 1st-order diffraction spot is observed, which is a result of the inhomogeneity of the inscribed gratings.
In the second publication (II) the interplay of BBG and SRG in the DE is discussed. It was found that, depending on the polarization of a weak probe beam, the diffraction components of the SRG and BBG interfere either constructively or destructively in the DE, altering the appearance of the intensity distribution within the diffracted spot.
The third (III) and fourth (IV) publications describe the light-induced reconfiguration of surface structures. Special attention is paid to the conditions influencing the erasure of topography and bulk gratings, which can be achieved via thermal treatment or illumination of the polymer film. By translating the interference pattern (IP) in a controlled way, the optical erasure speed is significantly increased. Additionally, a dynamically reconfigurable surface is generated, which can move surface-attached objects through the continuous translation of the interference pattern during irradiation of the polymer films.
The fifth publication (V) deals with the understanding of polymer deformation under irradiation with the SP-IP, which is the only IP generating a half-period topography grating (compared to the period of the IP) on the photo-sensitive polymer film. This mechanism can be used, e.g., to generate an SRG below the diffraction limit of light. It also represents an easy way of changing the period of the surface grating by a small change in the polarization angle of the interfering beams, without adjusting the optical path of the two beams. Additionally, complex surface gratings formed in mixed polarization and intensity interference patterns are shown.
I J. Jelken, C. Henkel and S. Santer, Applied Physics B, 125 (2019), 218
II J. Jelken, C. Henkel and S. Santer, Appl. Phys. Lett., 116 (2020), 051601
III J. Jelken and S. Santer, RSC Advances, 9 (2019), 20295
IV J. Jelken, M. Brinkjans, C. Henkel and S. Santer, SPIE Proceedings, 11367 (2020), 1136710
V J. Jelken, C. Henkel and S. Santer, Formation of Half-Period Surface Relief Gratings in Azobenzene Containing Polymer Films (submitted to Applied Physics B)