This dissertation is concerned with the relation between qualitative phonological organization in the form of syllabic structure and continuous phonetics, that is, the spatial and temporal dimensions of vocal tract action that express syllabic structure. The main claim of the dissertation is twofold. First, we argue that syllabic organization exerts multiple effects on the spatio-temporal properties of the segments that partake in that organization. That is, there is no unique or privileged exponent of syllabic organization. Rather, syllabic organization is expressed in a pleiotropy of phonetic indices. Second, we claim that a better understanding of the relation between qualitative phonological organization and continuous phonetics is reached when one considers how the string of segments (over which the nature of the phonological organization is assessed) responds to perturbations (scaling of phonetic variables) of localized properties (such as durations) within that string. Specifically, variation in phonetic variables and more specifically prosodic variation is a crucial key to understanding the nature of the link between (phonological) syllabic organization and the phonetic spatio-temporal manifestation of that organization. The effects of prosodic variation on segmental properties and on the overlap between the segments, we argue, offer the right pathway to discover patterns related to syllabic organization. In our approach, to uncover evidence for global organization, the sequence of segments partaking in that organization as well as properties of these segments or their relations with one another must be somehow locally varied. The consequences of such variation on the rest of the sequence can then be used to unveil the span of organization. When local perturbations to segments or relations between adjacent segments have effects that ripple through the rest of the sequence, this is evidence that organization is global. 
If instead local perturbations stay local, with no consequences for the rest of the sequence, this indicates that organization is local.
The public encounter
(2019)
This thesis puts the citizen-state interaction at its center. Building on a comprehensive model that incorporates various perspectives on this interaction, I derive selected research gaps, which the three articles comprising this thesis address. A focal role is played by citizens' administrative literacy: the competences and knowledge necessary to interact successfully with public organizations. The first article elaborates the different dimensions of administrative literacy and develops a survey instrument to assess them. The second study shows that public employees change their behavior according to the competences that citizens display during public encounters: they treat preferentially those citizens who are well prepared and able to persuade them of their application's potential. Such citizens signal a higher potential on bureaucratic success criteria, which leads to cream-skimming behavior by employees. The third article examines the dynamics of employees' communication strategies when recovering from a service failure. The study finds that different explanation strategies have different effects on the client's frustration: accepting responsibility and explaining the reasons for a failure alleviates frustration and anger, whereas refusing responsibility has no effect or even reinforces the client's frustration. The results emphasize the different dynamics that characterize the nature of citizen-state interactions and how they establish their short- and long-term outcomes.
Oscillatory systems under weak coupling can be described by the Kuramoto model of phase oscillators. Kuramoto phase oscillators have diverse applications, ranging from natural phenomena such as communication between neurons and the collective formation of political opinions to engineered systems such as Josephson junctions and synchronized electric power grids. This thesis includes the author's contribution to the theoretical framework of coupled Kuramoto oscillators and to the understanding of non-trivial N-body dynamical systems via their reduced mean-field dynamics.
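The mean-field Kuramoto model described above can be sketched in a few lines. The following is a minimal Euler-integration illustration (all parameter values are arbitrary demonstration choices, not taken from the thesis): each oscillator obeys dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i), and synchrony is measured by the order parameter r = |(1/N) Σ_j exp(iθ_j)|.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i)."""
    N = len(theta)
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """Kuramoto order parameter r; r ≈ 0 is incoherence, r ≈ 1 is synchrony."""
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
N = 200
theta = rng.uniform(0, 2 * np.pi, N)   # random initial phases
omega = rng.normal(0.0, 0.05, N)       # narrow natural-frequency distribution
K = 2.0                                # coupling well above the critical value

r0 = order_parameter(theta)            # near 1/sqrt(N) for random phases
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K, dt=0.01)
r1 = order_parameter(theta)            # grows toward 1 under strong coupling
```

For coupling strengths below the critical value, r would instead stay near its initial incoherent level.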
The main content of this thesis is composed of four parts. First, a partially integrable theory of globally coupled identical Kuramoto oscillators is extended to include pure higher-mode coupling. The extended theory is then applied to a non-trivial higher-mode coupled model, which has been found to exhibit asymmetric clustering. Using the developed theory, we could predict a number of features of the asymmetric clustering using only information about the initial state.
The second part consists of an iterated discrete-map approach to simulating phase dynamics. The proposed map, a Moebius map, not only provides fast computation of phase synchronization but also precisely reflects the underlying group structure of the dynamics. We then compare the iterated-map dynamics with various analogous continuous-time dynamics. We are able to replicate known phenomena such as the synchronization transition of the Kuramoto-Sakaguchi model of oscillators with distributed natural frequencies, and chimera states for identical oscillators under non-local coupling.
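The group structure that such a map exploits can be illustrated with a generic disc-preserving Moebius transformation. The sketch below (parameters chosen arbitrarily) only demonstrates the key property that makes the iterated-map approach consistent, namely that the map sends points on the unit circle (phases) back onto the unit circle; it is not the thesis's specific dynamical scheme.

```python
import numpy as np

def moebius(z, alpha, psi):
    """Disc-preserving Moebius map M(z) = (e^{iψ} z + α) / (1 + conj(α) e^{iψ} z),
    with |α| < 1. For |z| = 1 the image again satisfies |M(z)| = 1,
    so phases remain phases under iteration."""
    w = np.exp(1j * psi) * z
    return (w + alpha) / (1.0 + np.conj(alpha) * w)

# Place 100 phases on the unit circle and apply the map once.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
z = np.exp(1j * theta)
z_new = moebius(z, alpha=0.3 + 0.2j, psi=0.7)
# All images lie on the unit circle up to floating-point rounding.
```

Because these maps form a group under composition, iterating them never drifts off the circle, which is the structural property the text refers to.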
The third part entails a particular model of repulsively coupled identical Kuramoto-Sakaguchi oscillators under common random forcing, which can be shown to be partially integrable. Via both numerical simulations and theoretical analysis, we determine that such a model cannot exhibit stationary multi-cluster states, contrary to the numerical findings in previous literature. Through further investigation, we find that the multi-clustering states reported previously occur due to the accumulation of discretization errors inherent in the integration algorithms, which introduce higher-mode couplings into the model. As a result, the partial integrability condition is violated.
Lastly, we derive the microscopic cross-correlation of globally coupled non-identical Kuramoto oscillators under common fluctuating forcing. The effect of correlation arises naturally in finite populations, due to the non-trivial fluctuations of the mean-field. In an idealized model, we approximate the finite-size fluctuations by Gaussian white noise. The analytical approximation qualitatively matches the measurements in numerical experiments; however, significant inconsistencies remain, owing to other periodic components inherent in the fluctuations of the mean-field.
Most of the matter in the universe consists of hydrogen. The hydrogen in the intergalactic medium (IGM), the matter between the galaxies, underwent a change of its ionisation state at the epoch of reionisation, at redshifts of roughly 6 < z < 10, or ~10^8 years after the Big Bang. At this time, the mostly neutral hydrogen in the IGM was ionised, but the source of the responsible hydrogen-ionising emission remains unclear. In this thesis I discuss the most likely candidates for the emission of this ionising radiation, a type of galaxy called Lyman alpha emitters (LAEs). As implied by their name, they emit Lyman alpha radiation, produced after a hydrogen atom has been ionised and recombines with a free electron. The ionising radiation itself (also called Lyman continuum emission), which is needed for this process inside the LAEs, could also be responsible for ionising the IGM around those galaxies at the epoch of reionisation, given that enough Lyman continuum escapes. Through this mechanism, Lyman alpha and Lyman continuum radiation are closely linked, and both are studied to better understand the properties of high-redshift galaxies and the reionisation state of the universe.
Before I can analyse their Lyman alpha emission lines and the escape of Lyman continuum emission from them, the first step is the detection and correct classification of LAEs in integral field spectroscopic data, specifically data taken with the Multi-Unit Spectroscopic Explorer (MUSE). After detecting emission line objects in the MUSE data, the task of classifying them and determining their redshift is performed with the graphical user interface QtClassify, which I developed during the work on this thesis. It exploits the combination of spectroscopic and photometric information that integral field spectroscopy offers, enabling the user to quickly identify the nature of the detected emission lines. The reliable classification of LAEs and the determination of their redshifts are a crucial first step towards an analysis of their properties.
Through radiative transfer processes, the properties of the neutral hydrogen clouds in and around LAEs are imprinted on the shape of the Lyman alpha line. Thus, after identifying the LAEs in the MUSE data, I analyse the properties of the Lyman alpha emission line, such as the equivalent width (EW) distribution, the asymmetry and width of the line, and the double peak fraction. I challenge the common method of displaying EW distributions as histograms without taking the limits of the survey into account, and construct a less survey-dependent EW distribution function that better reflects the properties of the underlying population of galaxies. I illustrate this by comparing the fraction of high-EW objects between the two surveys MUSE-Wide and MUSE-Deep, both consisting of MUSE pointings (each with the size of one square arcminute) of different depths. In the 60 MUSE-Wide fields of one hour exposure time I find a fraction of objects with extreme EWs above EW_0 > 240 Å of ~20%, while in the MUSE-Deep fields (9 fields with an exposure time of 10 hours and one with an exposure time of 31 hours) I find a fraction of only ~1%, which is due to the differences in the limiting line flux of the surveys. The highest EW I measure is EW_0 = 600.63 ± 110 Å, which hints at an unusual underlying stellar population, possibly with a very low metallicity.
With the knowledge of the redshifts and positions of the LAEs detected in the MUSE-Wide survey, I also look for Lyman continuum emission coming from these galaxies and analyse the connection between Lyman continuum emission and Lyman alpha emission. I use ancillary Hubble Space Telescope (HST) broadband photometry in the bands that contain the Lyman continuum and find six Lyman continuum leaker candidates. To test whether the Lyman continuum emission of LAEs is coming only from those individual objects or the whole population, I select LAEs that are most promising for the detection of Lyman continuum emission, based on their rest-frame UV continuum and Lyman alpha line shape properties. After this selection, I stack the broadband data of the resulting sample and detect a signal in Lyman continuum with a significance of S/N = 5.5, pointing towards a Lyman continuum escape fraction of ~80%. If the signal is reliable, it strongly favours LAEs as the providers of the hydrogen ionising emission at the epoch of reionisation and beyond.
The present work is a compilation of three original research articles submitted to (or already published in) international peer-reviewed venues in the field of speech science. These three articles address fundamental motor laws in speech and the dynamics of the corresponding speech movements:
1. Kuberski, Stephan R. and Adamantios I. Gafos (2019). "The speed-curvature power law in tongue movements of repetitive speech". PLOS ONE 14(3). Public Library of Science. doi: 10.1371/journal.pone.0213851.
2. Kuberski, Stephan R. and Adamantios I. Gafos (in press). "Fitts' law in tongue movements of repetitive speech". Phonetica: International Journal of Phonetic Science. Karger Publishers. doi: 10.1159/000501644.
3. Kuberski, Stephan R. and Adamantios I. Gafos (submitted). "Distinct phase space topologies of identical phonemic sequences". Language. Linguistic Society of America.
The present work introduces a metronome-driven speech elicitation paradigm in which participants were asked to utter repetitive sequences of elementary consonant-vowel syllables. This paradigm, explicitly designed to cover a substantially wider range of speech rates than explored in previous work, is demonstrated to satisfy the important prerequisites for assessing aspects of speech that have so far been difficult to access. Specifically, the paradigm's extensive speech rate manipulation enabled elicitation of a great range of movement speeds as well as movement durations and excursions of the relevant effectors. The presence of such variation is a prerequisite for assessing whether invariant relations between these and other parameters exist, and thus provides the foundation for a rigorous evaluation of the two laws examined in the first two contributions of this work.
In the data resulting from this paradigm, it is shown that speech movements obey the same fundamental laws as movements from other domains of motor control. In particular, it is demonstrated that speech strongly adheres to the power law relation between the speed and curvature of movement, with a clear speech rate dependency of the power law's exponent. The often sought or reported exponent of one third in the statement of the law is unique to a subclass of movements, corresponding to the range of faster rates at which a particular utterance is produced. For slower rates, significantly larger values than one third are observed. Furthermore, for the first time in speech, this work uncovers evidence for the presence of Fitts' law. It is shown that, beyond a speaker-specific speech rate, speech movements of the tongue clearly obey Fitts' law through the emergence of its characteristic linear relation between movement time and index of difficulty. For slower speech rates (when temporal pressure is small), no such relation is observed. The methods and datasets obtained in the two assessments above provide a rigorous foundation both for addressing implications for theories and models of speech and for better understanding the status of speech movements in the context of human movements in general.
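The speed-curvature power law referred to above states that tangential speed and curvature are related by v = c · κ^(−β), with the classical exponent β = 1/3. The relation can be checked on a synthetic elliptical trajectory, for which the law holds exactly; this is purely an illustrative sketch, not the thesis's tongue-movement analysis.

```python
import numpy as np

# Elliptic trajectory traversed at constant angular rate: a standard
# test case for the speed-curvature power law v = c * kappa**(-1/3).
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
A, B = 3.0, 1.0
x, y = A * np.cos(t), B * np.sin(t)

dt = t[1] - t[0]
dx, dy = np.gradient(x, dt), np.gradient(y, dt)
ddx, ddy = np.gradient(dx, dt), np.gradient(dy, dt)

speed = np.hypot(dx, dy)                             # tangential speed
curvature = np.abs(dx * ddy - dy * ddx) / speed**3   # planar curvature

# Fit the exponent beta in log(v) = log(c) + beta * log(kappa).
beta, log_c = np.polyfit(np.log(curvature), np.log(speed), 1)
# For this trajectory, beta comes out close to the classical value -1/3.
```

In empirical movement data the fitted slope scatters around this value, and the thesis's point is precisely that in speech the exponent departs from one third at slower rates.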
All modern theories of language rely on a fundamental segmental hypothesis according to which the phonological message of an utterance is represented by a sequence of segments or phonemes. It is commonly assumed that each of these phonemes can be mapped to some unit of speech motor action, a so-called speech gesture.
For the first time, it is demonstrated here that the relation between the phonological description of simple utterances and the corresponding speech motor action is non-unique. Specifically, through the extensive speech rate manipulation of the experimental paradigm used herein, it is demonstrated that speech exhibits clearly distinct dynamical organizations underlying the production of simple utterances. At slower speech rates, the dynamical organization underlying the repetitive production of elementary /CV/ syllables can be described by successive concatenations of closing and opening gestures, each with its own equilibrium point. As speech rate increases, the equilibria of the opening and closing gestures are no longer equally stable, yielding qualitatively different modes of organization: either a single equilibrium point of a combined opening-closing gesture, or a periodic attractor unleashed by the disappearance of both equilibria. This observation, the non-uniqueness of the dynamical organization underlying what on the surface appear to be identical phonemic sequences, is an entirely new result in the domain of speech. Beyond that, the demonstration of periodic attractors in speech reveals that dynamical equilibrium point models do not account for all possible modes of speech motor behavior.
Increasing concerns regarding the environmental impact of chemical production have shifted attention towards possibilities for sustainable biotechnology. One-carbon (C1) compounds, including methane, methanol, formate and CO, are promising feedstocks for a future bioindustry. CO2 is another interesting feedstock, as it can be transformed, using renewable energy, into other C1 feedstocks. While formaldehyde is not suitable as a feedstock due to its high toxicity, it is a central intermediate in the process of C1 assimilation. This thesis explores formaldehyde metabolism and aims to engineer formaldehyde assimilation in the model organism Escherichia coli for a future C1-based bioindustry.
The first chapter of the thesis aims to establish growth of E. coli on formaldehyde via the most efficient naturally occurring route, the ribulose monophosphate pathway. Linear variants of the pathway were constructed in multiple-gene knockout strains, coupling E. coli growth to the activities of the key enzymes of the pathway. Formaldehyde-dependent growth was achieved in rationally designed strains. In the final strain, the synthetic pathway provides the cell with almost all of its biomass and energy requirements.
In the second chapter, taking advantage of formaldehyde's unique reactivity, formaldehyde assimilation via condensation with glycine and pyruvate by two promiscuous aldolases was explored. Facilitated by these two reactions, the newly designed homoserine cycle is expected to support higher yields of a wide array of products than its counterparts. By dividing the pathway into segments and coupling them to the growth of dedicated strains, all pathway reactions were demonstrated to be sufficiently active. This work paves the way for the future implementation of a highly efficient route from C1 feedstocks to commodity chemicals.
In the third chapter, the in vivo rate of the spontaneous condensation of formaldehyde with tetrahydrofolate to methylene-tetrahydrofolate was assessed in order to evaluate its applicability as a biotechnological process. Tested in an E. coli strain with the essential genes for native methylene-tetrahydrofolate biosynthesis deleted, the reaction was shown to support the production of this essential intermediate. However, only low growth rates were observed, and only at high formaldehyde concentrations. Computational analysis based on in vivo evidence from this strain deduced the slow rate of this spontaneous reaction, thus ruling out a substantial contribution to growth on C1 feedstocks.
The reactivity of formaldehyde makes it highly toxic. In the last chapter, the formation of thioproline, the condensation product of cysteine and formaldehyde, was confirmed to contribute to this toxicity. Xaa-Pro aminopeptidase (PepP), which is genetically linked with folate metabolism, was shown to hydrolyze thioproline-containing peptides. Deleting pepP increased strain sensitivity to formaldehyde, pointing towards the toxicity of thioproline-containing peptides and the importance of their removal. The characterization in this study could be useful in handling this toxic intermediate.
Overall, this thesis identified challenges related to formaldehyde metabolism and provided novel solutions towards a future bioindustry based on sustainable C1 feedstocks in which formaldehyde serves as a key intermediate.
Aluminum oxide is an Earth-abundant geological material, and its interaction with water is of crucial importance for geochemical and environmental processes. Some aluminum oxide surfaces are also known to be useful in heterogeneous catalysis, while the surface chemistry of aqueous oxide interfaces determines the corrosion, growth and dissolution of such materials. In this doctoral work, we looked mainly at the (0001) surface of α-Al2O3 and its reactivity towards water. In particular, a major focus of this work is dedicated to simulating and interpreting the vibrational spectra of water adsorbed on the α-alumina(0001) surface under various conditions and at different coverages. In fact, the main source of comparison and inspiration for this work comes from the collaboration with the "Interfacial Molecular Spectroscopy" group led by Dr. R. Kramer Campen at the Fritz Haber Institute of the MPG in Berlin. The expertise of our project partners in surface-sensitive Vibrational Sum Frequency (VSF) generation spectroscopy was crucial for developing and adapting the specific simulation schemes used in this work. Methodologically, the main approach employed in this thesis is Ab Initio Molecular Dynamics (AIMD) based on periodic Density Functional Theory (DFT), using the PBE functional with D2 dispersion correction. The analysis of vibrational frequencies from both a static and a dynamic, finite-temperature perspective allows the water/aluminum oxide interface to be investigated in close connection to experiment.
The first project presented in this work considers the characterization of dissociatively adsorbed deuterated water on the Al-terminated (0001) surface. This particular structure is known from both experiment and theory to be the thermodynamically most stable surface termination of α-alumina under Ultra-High Vacuum (UHV) conditions. Based on experiments performed by our colleagues at the FHI, different adsorption sites and products have been proposed and identified for D2O. While previous theoretical investigations only looked at vibrational frequencies of dissociated OD groups via static Normal Mode Analysis (NMA), we employed a more sophisticated approach to directly assess vibrational spectra (such as IR and VSF) at finite temperature from AIMD. In this work, we have employed a recent implementation which makes use of velocity-velocity autocorrelation functions to simulate such spectral responses of O-H(D) bonds. This approach allows for an efficient and qualitatively accurate estimation of Vibrational Densities of States (VDOS) as well as IR and VSF spectra, which are then tested against experimental spectra from our collaborators.
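The velocity-velocity autocorrelation route to vibrational spectra mentioned above can be sketched on a synthetic single-mode signal. The minimal illustration below (not the AIMD implementation used in the thesis) computes a VDOS as the Fourier transform of the normalized velocity autocorrelation function and recovers the frequency of the underlying mode; the 5 Hz test signal is an arbitrary stand-in for an atomic velocity trace.

```python
import numpy as np

def vdos(velocities, dt):
    """VDOS as the Fourier transform of the normalized
    velocity-velocity autocorrelation function (VVAF)."""
    n = len(velocities)
    # Autocorrelation for non-negative lags (direct summation, O(n^2) but clear).
    vvaf = np.correlate(velocities, velocities, mode="full")[n - 1:]
    vvaf /= vvaf[0]
    spectrum = np.abs(np.fft.rfft(vvaf))
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs, spectrum

# Synthetic test signal: a single vibrational mode at 5 Hz.
dt = 0.001
t = np.arange(0.0, 4.0, dt)
v = np.cos(2 * np.pi * 5.0 * t)

freqs, spectrum = vdos(v, dt)
peak_freq = freqs[np.argmax(spectrum)]   # recovers the 5 Hz mode
```

In an AIMD context the same scheme is applied per atom (or per selected O-H bond projection), and the per-atom spectra are summed; additional dipole or polarizability weighting then yields IR- or VSF-type responses.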
In order to extend previous work on unimolecularly dissociated water on α-Al2O3, we then considered a different system, namely a fully hydroxylated (0001) surface, which results from the reconstruction of the UHV-stable Al-terminated surface at high water contents. This model is then further extended by considering a hydroxylated surface with additional water molecules, forming a two-dimensional layer which serves as a potential template to simulate an aqueous interface under environmental conditions. Again employing finite-temperature AIMD trajectories at the PBE+D2 level, we investigated the behaviour of both the hydroxylated surface (HS) and the water-covered structure derived from it (known as HS+2ML). A full range of spectra, from VDOS to IR and VSF, is then calculated using the same methodology as described above. This is the main focus of the second project, reported in Chapter 5. In this case, the agreement between theoretical spectra and experimental data is good. In particular, we show that the high-frequency resonances observed above 3700 cm^-1 in VSF experiments are associated with surface OH groups, known as "aluminols", which are a key fingerprint of the fully hydroxylated surface.
In the third and last project, presented in Chapter 6, the extension of VSF spectroscopy experiments to the time-resolved regime offered us the opportunity to investigate vibrational energy relaxation at the α-alumina/water interface. Specifically, again using DFT-based AIMD simulations, we simulated vibrational lifetimes for surface aluminols as experimentally detected via pump-probe VSF. We considered the water-covered HS model as a potential candidate to address this problem. The vibrational (IR) excitation and subsequent relaxation are modelled by means of a non-equilibrium molecular dynamics scheme, in which we specifically excite the O-H stretching mode of surface aluminols. The analysis of the non-equilibrium trajectories then allows for an estimation of relaxation times on the order of 2-4 ps, in overall agreement with the measured ones.
The aim of this work has been to provide, within a consistent theoretical framework, a better understanding of vibrational spectroscopy and dynamics for water on the α-alumina(0001) surface, ranging from very low water coverage (similar to the UHV case) up to medium-high coverages, resembling the hydroxylated oxide under environmentally moist conditions.
Modern health care systems are characterized by pronounced prevention and cost-optimized treatments. This dissertation offers novel empirical evidence on how useful such measures can be. The first chapter analyzes how radiation, a main pollutant in health care, can negatively affect cognitive health. The second chapter focuses on the effect of Low Emission Zones on public health, as air quality is the major external source of health problems. Both chapters point out potentials for preventive measures. Finally, chapter three studies how changes in treatment prices affect the reallocation of hospital resources. In the following, I briefly summarize each chapter and discuss implications for health care systems as well as other policy areas. Based on the National Educational Panel Study linked to data on radiation, chapter one shows that radiation can have negative long-term effects on cognitive skills, even at subclinical doses. Exploiting arguably exogenous variation in soil contamination in Germany due to the Chernobyl disaster in 1986, the findings show that people exposed to higher radiation perform significantly worse in cognitive tests 25 years later. Identification is ensured by abnormal rainfall within a critical period of ten days. The results show that the effect is stronger among older cohorts than younger cohorts, which is consistent with radiation accelerating cognitive decline as people get older. On average, a one-standard-deviation increase in the initial level of Cs-137 (around 30 chest x-rays) is associated with a decrease in cognitive skills of 4.1 percent of a standard deviation (around 0.05 school years). Chapter one shows that subclinical levels of radiation can have negative consequences even after early childhood. This is of particular importance because most of the literature focuses on exposure very early in life, often during pregnancy. However, the population exposed after birth is over 100 times larger.
These results point to substantial external human capital costs of radiation, which can be reduced through the choice of medical procedures. There is large potential for reductions because about one-third of all CT scans are assumed to be not medically justified (Brenner and Hall, 2007). If people receive unnecessary CT scans because of economic incentives, this chapter points to additional external costs of health care policies. Furthermore, the results can inform the cost-benefit trade-off for medically indicated procedures. Chapter two provides evidence on the effectiveness of Low Emission Zones. Low Emission Zones are typically justified by improvements in population health. However, there is little evidence about the potential health benefits of policy interventions aiming at improving air quality in inner cities. The chapter asks how the coverage of Low Emission Zones affects air pollution and hospitalization, by exploiting variation in the roll-out of Low Emission Zones in Germany. It combines information on the geographic coverage of Low Emission Zones with rich panel data on the universe of German hospitals over the period from 2006 to 2016, with precise information on hospital locations and the annual frequency of detailed diagnoses. In order to establish that our estimates of Low Emission Zones' health impacts can indeed be attributed to improvements in local air quality, we use data from Germany's official air pollution monitoring system, assign monitor locations to Low Emission Zones, and test whether measures of air pollution are affected by the coverage of a Low Emission Zone. Results in chapter two confirm former results showing that the introduction of Low Emission Zones improved air quality significantly by reducing NO2 and PM10 concentrations.
Furthermore, the chapter shows that hospitals whose catchment areas are covered by a Low Emission Zone diagnose significantly fewer air-pollution-related diseases, in particular through a reduced incidence of chronic diseases of the circulatory and respiratory systems. The effect is stronger before 2012, which is consistent with a general improvement in the vehicle fleet's emission standards. Depending on the disease, a one-standard-deviation increase in the share of a hospital's catchment area covered by a Low Emission Zone reduces the yearly number of diagnoses by up to 5 percent. These findings have strong implications for policy makers. In 2015, overall costs for health care in Germany were around 340 billion euros, of which 46 billion euros were for diseases of the circulatory system, making them the most expensive type of disease, with 2.9 million cases (Statistisches Bundesamt, 2017b). Hence, reductions in the incidence of diseases of the circulatory system may directly reduce society's health care costs. Whereas chapters one and two study the demand side of health care markets, and thus preventive potential, chapter three analyzes the supply side. Exploiting the same hospital panel data set as in chapter two, chapter three studies the effect of treatment price shocks on the reallocation of hospital resources in Germany. Starting in 2005, the implementation of the German DRG system led to general idiosyncratic treatment price shocks for individual hospitals. Thus far there is little evidence on the impact of general price shocks on the reallocation of hospital resources. Additionally, I add to the existing literature by showing that price shocks can have persistent effects on hospital resources even when these shocks vanish. However, simple OLS regressions would underestimate the true effect, due to endogenous treatment price shocks.
I implement a novel instrumental variable strategy that exploits exogenous variation in the number of days of snow in hospital catchment areas. A peculiarity of the reform allowed variation in days of snow to have a persistent impact on treatment prices. I find that treatment price increases lead to increases in input factors such as nursing staff, physicians, and the range of treatments offered, but to decreases in the treatment volume. This indicates supplier-induced demand. Furthermore, the probability of hospital mergers and privatization decreases. Structural differences in pre-treatment characteristics between hospitals enhance these effects; for instance, private and larger hospitals are more affected. IV estimates reveal that OLS results are biased towards zero in almost all dimensions, because structural hospital differences are correlated with the reallocation of hospital resources. These results are important for several reasons. The G-DRG reform led to a persistent polarization of hospital resources, as some hospitals were exposed to treatment price increases while others experienced reductions. If hospitals increase the treatment volume in response to price reductions by offering unnecessary therapies, this has a negative impact on population wellbeing and public spending. However, the results show a decrease in the range of treatments if prices decrease. Hospitals might specialize more, thus attracting more patients. From a policy perspective it is important to evaluate whether such changes in the range of treatments jeopardize an adequate nationwide provision of treatments. Furthermore, the results show a decrease in the number of nurses and physicians if prices decrease. This could partly explain the nursing crisis in German hospitals. However, since hospitals specialize more, they might be able to realize efficiency gains which justify reductions in input factors without losses in quality.
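The logic of an instrumental variable strategy of this kind can be sketched with two-stage least squares on simulated data, where an exogenous instrument (standing in for days of snow) shifts an endogenous treatment price. All variable names, coefficients and data below are illustrative assumptions, not the chapter's data or estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
beta_true = 0.5                 # true effect of price on the outcome (assumed)

z = rng.normal(size=n)          # instrument, e.g. days of snow: exogenous
u = rng.normal(size=n)          # unobserved confounder
p = z + u + rng.normal(size=n)              # endogenous treatment price
y = beta_true * p - u + rng.normal(size=n)  # outcome, confounded through u

def fit(x, y):
    """OLS of y on [1, x]; returns (intercept, slope)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

_, beta_ols = fit(p, y)         # biased: cov(p, u) != 0

# 2SLS: first stage predicts the price from the instrument only,
# second stage regresses the outcome on the fitted price.
a1, b1 = fit(z, p)
p_hat = a1 + b1 * z
_, beta_iv = fit(p_hat, y)      # close to beta_true
```

The OLS slope is pulled away from the true effect by the confounder, while the fitted first-stage price inherits only the exogenous snow-driven variation, so the second stage recovers the causal coefficient.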
Further research is necessary to provide evidence on the impact of the G-DRG reform on health care quality. Another important aspect is changes in organizational structure. Many public hospitals have been privatized or merged, and the findings show that this is at least partly driven by the G-DRG reform. This can again lead to a lack of services offered in some regions if merged hospitals specialize more or if hospitals are taken over by ecclesiastical organizations that do not provide all treatments due to moral conviction. Overall, this dissertation reveals large potential for preventive health care measures and helps to explain reallocation processes in the hospital sector when treatment prices change. Furthermore, its findings have potentially relevant implications for other areas of public policy. Chapter one identifies an effect of low-dose radiation on cognitive health. As mankind searches for new energy sources, nuclear power is becoming popular again. However, the results of chapter one point to substantial costs of nuclear energy that have not yet been accounted for. Chapter two finds strong evidence that air quality improvements by Low Emission Zones translate into health improvements, even at relatively low levels of air pollution. These findings may, for instance, be relevant for the design of further policies targeted at air pollution, such as diesel bans. As pointed out in chapter three, the implementation of DRG systems may have unintended side effects on the reallocation of hospital resources. This may also apply to other providers in the health care sector, such as resident doctors.
Additive manufacturing (AM) by laser powder-bed fusion (L-PBF) offers new prospects for the design of parts and thereby enables the production of lattice structures. Such lattice structures can be implemented in various industrial applications (e.g. gas turbines), for instance for material savings or integrated cooling channels. However, internal defects, residual stress, and structural deviations from the nominal geometry are unavoidable.
In this work, the structural integrity of lattice structures manufactured by L-PBF was investigated non-destructively using a multiscale approach.
A workflow for quantitative 3D powder analysis in terms of particle size, particle shape, particle porosity, inter-particle distance and packing density was established. Synchrotron computed tomography (CT) was used to correlate the packing density with the particle size and particle shape. It was also observed that at least about 50% of the powder porosity was released during production of the struts.
Struts are the basic building blocks of lattice structures and were investigated by means of laboratory CT. The focus was on the influence of the build angle on part porosity and surface quality. The surface topography analysis was advanced by the quantitative characterisation of re-entrant surface features. This characterisation was compared with conventional surface parameters, showing that they provide complementary information, but also that AM-specific surface parameters are needed.
The mechanical behaviour of the lattice structures was investigated by in-situ CT under compression with subsequent digital volume correlation (DVC). The deformation was found to be knot-dominated, so that the lattice folds layer by layer of unit cells.
The residual stress in such lattice structures was determined experimentally for the first time. Neutron diffraction was used for the non-destructive 3D stress investigation. The principal stress directions and values were determined as a function of the number of measured directions. While a significantly uni-axial stress state was found in the strut, a more hydrostatic stress state was found in the knot. In both cases, strut and knot, at least seven measured directions were needed to obtain reliable principal stress directions.
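Once the six independent components of the stress tensor have been reconstructed from lattice strains measured in several directions, determining the principal stresses reduces to an eigenvalue problem. A minimal sketch with invented tensor values (not measured data):

```python
import numpy as np

# Illustrative symmetric stress tensor (MPa); in practice its six independent
# components are reconstructed from strains measured by neutron diffraction
# in several (here at least seven) sample directions.
sigma = np.array([[320.0,  40.0,  10.0],
                  [ 40.0,  80.0,   5.0],
                  [ 10.0,   5.0,  60.0]])

# Principal stresses are the eigenvalues, principal directions the eigenvectors.
values, directions = np.linalg.eigh(sigma)
order = np.argsort(values)[::-1]            # sort sigma_1 >= sigma_2 >= sigma_3
principal_stresses = values[order]
principal_directions = directions[:, order]

# A strongly uni-axial state shows one dominant principal stress;
# a hydrostatic state has three nearly equal ones.
hydrostatic = principal_stresses.mean()
deviatoric = principal_stresses - hydrostatic
```

The trace of the tensor is invariant, so the sum of principal stresses always equals the sum of the measured normal components, which is a convenient sanity check.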
For many years, psycholinguistic evidence has been predominantly based on findings from native speakers of Indo-European languages, primarily English, thus providing a rather limited perspective on the human language system. In recent years, a growing body of experimental research has been devoted to broadening this picture, testing a wide range of speakers and languages and aiming to understand the factors that lead to variability in linguistic performance. The present dissertation investigates sources of variability within the morphological domain, examining how and to what extent morphological processes and representations are shaped by specific properties of languages and speakers. Firstly, the present work focuses on a less explored language, Hebrew, to investigate how its unique non-concatenative morphological structure, namely the non-linear combination of consonantal roots and vowel patterns to form lexical entries (L-M-D + CiCeC = limed ‘teach’), affects morphological processes and representations in the Hebrew lexicon. Secondly, a less investigated population was tested: late learners of a second language. We directly compare native (L1) and non-native (L2) speakers, specifically highly proficient and immersed late learners of Hebrew. Throughout all publications, we have focused on the morphological phenomenon of inflectional classes (called binyanim; singular: binyan), comparing a productive (Piel, e.g., limed ‘teach’) and an unproductive (Paal, e.g., lamad ‘learn’) verbal inflectional class.
By using this test case, two psycholinguistic aspects of morphology were examined: (i) how morphological structure affects online recognition of complex words, using masked priming (Publications I and II) and cross-modal priming (Publication III) techniques, and (ii) what type of cues are used when extending morpho-phonological patterns to novel complex forms, a process referred to as morphological generalization, using an elicited production task (Publication IV).
The findings obtained in the four manuscripts, either published or under review, provide significant insights into the role of productivity in Hebrew morphological processing and generalization in L1 and L2 speakers. Firstly, the present L1 data revealed a close relationship between the productivity of Hebrew verbal classes and the recognition process, as revealed by both priming techniques. The consonantal root was accessed only in the productive class (Piel), not in the unproductive class (Paal). Another dissociation between the two classes emerged in cross-modal priming, which yielded a semantic relatedness effect only for Paal but not Piel primes. These findings are taken to reflect that Hebrew mental representations balance stored, undecomposable, unstructured stems (Paal) against decomposed, structured stems (Piel), much like a typical dual-route architecture, showing that the Hebrew mental lexicon is less unique than previously claimed in psycholinguistic research. The results of the generalization study, however, indicate that there are still substantial differences between the inflectional classes of Hebrew and those of Indo-European languages, particularly in the type of information they rely on in generalization to novel forms: Hebrew binyan generalization relies more on cues of argument structure and less on phonological cues.
Secondly, clear L1/L2 differences were observed in sensitivity to abstract morphological and morpho-syntactic information during complex word recognition and generalization. While L1 Hebrew speakers were sensitive to binyan information during recognition, as expressed by the contrast in root priming, L2 speakers showed similar root priming effects for both classes, but only when the primes were presented in the infinitive; no root priming effect was obtained for primes in a finite form. These patterns are interpreted as evidence for a reduced sensitivity of L2 speakers to morphological information, such as information about inflectional classes, and for processing costs in the recognition of forms carrying complex morpho-syntactic information. Reduced reliance on structural information cues was also found in the production of novel verbal forms, with the L2 group displaying a weaker effect of argument structure for Piel responses than the L1 group. Given the L2 results, we suggest that morphological and morpho-syntactic information remains challenging for late bilinguals, even at high proficiency levels.
This thesis investigates whether multilingual speakers’ use of grammatical constraints in an additional language (La) is affected by the native (L1) and non-native grammars (L2) of their linguistic repertoire.
Previous studies have used untimed measures of grammatical performance to show that L1 and L2 grammars affect the initial stages of La acquisition. This thesis extends this work by examining whether speakers at intermediate levels of La proficiency, who demonstrate mature untimed/offline knowledge of the target La constraints, are differentially affected by their L1 and L2 knowledge when they comprehend sentences under processing pressure. To this end, several groups of La German speakers were tested on word order and agreement phenomena using online/timed measures of grammatical knowledge. Participants had mirror distributions of their prior languages: they were either L1 English/L2 Spanish or L1 Spanish/L2 English speakers. Crucially, in half of the phenomena the target La constraint aligned with English but not with Spanish, while in the other half it aligned with Spanish but not with English. Results show that the L1 grammar plays a major role in the use of La constraints under processing pressure, as participants displayed increased sensitivity to La constraints when these aligned with their L1 and reduced sensitivity when they did not. Further, in specific phenomena in which the L2 and La constraints aligned, increased L2 proficiency resulted in enhanced sensitivity to the La constraint. These findings suggest that both native and non-native grammars affect how speakers use La grammatical constraints under processing pressure. However, L1 and L2 grammars influence participants’ performance differently: while L1 constraints seem to be reliably recruited to cope with the processing demands of real-time La use, proficiency in an L2 can enhance sensitivity to La constraints only in specific circumstances, namely when L2 and La constraints align.
Phytoplankton growth depends not only on the mean intensity but also on the dynamics of the light supply. The nonlinear light dependency of growth is characterized by a small number of basic parameters: the compensation light intensity PARcompμ, where production and losses are balanced, the growth efficiency at sub-saturating light αµ, and the maximum growth rate at saturating light µmax. In surface mixed layers, phytoplankton may rapidly move between high light intensities and almost darkness. Because of the different frequency distribution of light and/or acclimation processes, the light dependency of growth may differ between constant and fluctuating light. Very few studies have measured growth under fluctuating light at a sufficient number of mean light intensities to estimate the parameters of the growth-irradiance relationship. Hence, the influence of light dynamics on µmax, αµ and PARcompμ is still largely unknown. By extension, accurate model predictions of phytoplankton development under fluctuating light exposure remain difficult to make. This PhD thesis does not intend to directly extrapolate a few experimental results to aquatic systems, but rather to improve the mechanistic understanding of how the light dependency of growth varies under light fluctuations and affects phytoplankton development.
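One concrete (illustrative) way to relate the three parameters is to write net growth as a saturating function of irradiance that crosses zero at PARcompμ, has initial slope αµ there, and saturates at µmax. This is only one of several common parameterizations, and the parameter values below are invented for the sketch:

```python
import numpy as np

def net_growth(par, mu_max=1.2, alpha=0.02, par_comp=10.0):
    """Net growth rate (1/day) as a function of irradiance PAR.

    Illustrative parameterization tying together the three parameters from
    the text: zero net growth at the compensation intensity par_comp, initial
    slope alpha there, saturation at mu_max.  Parameter values are invented.
    """
    return mu_max * (1.0 - np.exp(alpha * (par_comp - par) / mu_max))

# Below the compensation intensity, losses exceed production (negative growth);
# far above it, growth approaches mu_max.
par = np.linspace(0.0, 500.0, 6)
mu = net_growth(par)
```

Under this form, estimating µmax, αµ and PARcompμ from growth measured at several mean light intensities is an ordinary curve-fitting problem, which is why a sufficient number of mean intensities is needed.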
In Lake TaiHu and at the Three Gorges Reservoir (China), we incubated phytoplankton communities in bottles placed either at fixed depths or moved vertically through the water column to mimic vertical mixing. Phytoplankton at fixed depths received only the diurnal changes in light (defined as the constant light regime), while the vertically moved phytoplankton received rapidly fluctuating light, with the vertical light gradient superimposed on the natural sinusoidal diurnal sunlight. The vertically moved samples followed a circular movement with 20 min per revolution, replicating to some extent the full overturn of typical Langmuir cells. Growth, photosynthesis, oxygen production and respiration of the communities (at Lake TaiHu) were measured. To complement these investigations, a physiological experiment was performed in the laboratory on a toxic strain of Microcystis aeruginosa (FACBH 1322) incubated under fluctuating light with a 20 min period. Here, we measured electron transport rates and net oxygen production at a much higher time resolution (single-minute timescale).
The present PhD thesis provides evidence for substantial effects of fluctuating light on the eco-physiology of phytoplankton. The experiments performed under semi-natural conditions in Lake TaiHu and at the Three Gorges Reservoir gave similar results. The significant decline in community growth efficiency αµ under fluctuating light was largely caused by the different frequency distribution of light intensities, which shortened the effective daylength for production. The remaining gap in community αµ was attributed to species-specific photoacclimation mechanisms and to light-dependent respiratory losses. In contrast, community maximal growth rates µmax were similar between incubations at constant and fluctuating light. At daily growth-saturating light supply, differences in losses for biosynthesis between the two light regimes were observed. Phytoplankton experiencing constant light suffered photo-inhibition, leading to forgone photosynthesis and additional respiratory costs for photosystem repair. By contrast, intermittent exposure to low and high light intensities prevented photo-inhibition of mixed algae but forced them to develop an alternative light utilization strategy: they harvested and exploited surface irradiance better by enhancing their photosynthesis. In the laboratory, we showed that Microcystis aeruginosa increased its oxygen consumption by dark respiration in the light only a few minutes after exposure to increasing light intensities. Moreover, we showed that within a simulated Langmuir cell, the net production at saturating light and the compensation light intensity for production at limiting light are positively related. These results are best explained by an accumulation of photosynthetic products at increasing irradiance and the mobilization of these fresh resources, through rapid enhancement of dark respiration, for maintenance and biosynthesis at decreasing irradiance.
At the daily timescale, we showed that the enhancement of photosynthesis at high irradiance for biosynthesis increased species’ maintenance respiratory costs at limiting light. Species-specific growth at saturating light (µmax) and the compensation light intensity for growth (PARcompμ) of species incubated in Lake TaiHu were positively related. Because of this species-specific physiological tradeoff, species displayed different affinities to limiting and saturating light, thereby exhibiting a gleaner-opportunist tradeoff. In Lake TaiHu, we showed that inter-specific differences in light acquisition traits (µmax and PARcompμ) allowed the coexistence of species on a gradient of constant light while avoiding competitive exclusion. More interestingly, we demonstrated for the first time that vertical mixing (inducing a fluctuating light supply for phytoplankton) may alter or even reverse the light utilization strategies of species within a couple of days. The intra-specific variation in traits under fluctuating light increased the niche space for acclimated species, precluding competitive exclusion.
Overall, this PhD thesis contributes to a better understanding of phytoplankton eco-physiology under fluctuating light supply. This work could enhance the quality of predictions of phytoplankton development under certain weather conditions or climate change scenarios.
Accurate weather observations are the keystone to many quantitative applications, such as precipitation monitoring and nowcasting, hydrological modelling and forecasting, climate studies, as well as understanding precipitation-driven natural hazards (e.g. floods, landslides, debris flows). Weather radars have been an increasingly popular tool since the 1940s to provide high spatial and temporal resolution precipitation data at the mesoscale, bridging the gap between synoptic and point scale observations. Yet, many institutions still struggle to tap the potential of their large archives of reflectivity, as there is still much to understand about the factors that contribute to measurement errors, one of which is calibration. Calibration represents a substantial source of uncertainty in quantitative precipitation estimation (QPE). A miscalibration of a few dBZ can easily deteriorate the accuracy of precipitation estimates by an order of magnitude. Instances where rain cells carrying torrential rains are misidentified by the radar as moderate rain could mean the difference between a timely warning and a devastating flood.
Since 2012, the Philippine Atmospheric, Geophysical, and Astronomical Services Administration (PAGASA) has been expanding the country’s ground radar network. We had a first look into the dataset from one of the longest-running radars (the Subic radar) after devastating week-long torrential rains and thunderstorms in August 2012, caused by the annual southwest monsoon and enhanced by the north-passing Typhoon Haikui. The analysis of the rainfall spatial distribution revealed the added value of radar-based QPE in comparison to interpolated rain gauge observations. However, when compared with local gauge measurements, severe miscalibration of the Subic radar was found. As a consequence, the radar-based QPE would have underestimated the rainfall amount by up to 60% if it had not been adjusted by rain gauge observations—a technique that is not only affected by other uncertainties, but which is also not feasible in regions of the country with very sparse rain gauge coverage.
Relative calibration techniques, i.e. the assessment of bias from the reflectivity of two radars, have been steadily gaining popularity. Previous studies have demonstrated that reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and its successor, the Global Precipitation Measurement (GPM) mission, are accurate enough to serve as a calibration reference for ground radars over low to mid latitudes (±35 deg for TRMM; ±65 deg for GPM). Comparing spaceborne radars (SR) and ground radars (GR) requires careful consideration of differences in measurement geometry and instrument specifications, as well as temporal coincidence. For this purpose, we apply a 3-D volume matching method, developed by Schwaller and Morris (2011) and extended by Warren et al. (2018), to 5 years’ worth of observations from the Subic radar. In this method, only the volumetric intersections of the SR and GR beams are considered.
Calibration bias affects reflectivity observations homogeneously across the entire radar domain. Yet other sources of systematic measurement error are highly heterogeneous in space and can either enhance or balance the bias introduced by miscalibration. In order to account for such heterogeneous errors, and thus isolate the calibration bias, we assign a quality index to each matching SR–GR volume and compute the GR calibration bias as a quality-weighted average of the reflectivity differences in any sample of matching SR–GR volumes. We exemplify the idea of quality-weighted averaging by using the beam blockage fraction (BBF) as a quality variable. Quality-weighted averaging is able to increase the consistency of SR and GR observations by decreasing the standard deviation of the SR–GR differences, and thus increasing the precision of the bias estimates.
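The quality-weighted bias estimate can be sketched in a few lines; the reflectivities and beam blockage fractions below are invented for illustration, not taken from the matched dataset:

```python
import numpy as np

# Reflectivity (dBZ) in matched SR-GR volumes; values are invented.
z_sr = np.array([34.0, 28.5, 41.2, 30.1, 25.6])
z_gr = np.array([31.5, 26.2, 35.0, 27.9, 23.8])
bbf  = np.array([0.05, 0.10, 0.80, 0.15, 0.00])  # beam blockage fraction per volume

# Quality index from beam blockage: heavily blocked volumes get little weight.
quality = 1.0 - bbf

# Calibration bias as the quality-weighted mean of the SR-GR differences.
diff = z_sr - z_gr
bias = np.sum(quality * diff) / np.sum(quality)
unweighted = diff.mean()
```

In this toy sample, the heavily blocked volume shows an inflated SR–GR difference; down-weighting it moves the bias estimate away from the contaminated unweighted mean.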
To extend this framework further, the SR–GR quality-weighted bias estimation is applied to the neighboring Tagaytay radar, this time focusing on path-integrated attenuation (PIA) as the source of uncertainty. Tagaytay is a C-band radar operating at a shorter wavelength than the Subic radar and is therefore more affected by attenuation. Applying the same method used for the Subic radar, a time series of calibration bias is also established for the Tagaytay radar.
The Tagaytay radar sits at a higher altitude than the Subic radar and is surrounded by gentler terrain, so beam blockage is negligible, especially in the overlapping region. Conversely, the Subic radar is strongly affected by beam blockage in the overlapping region but, being an S-band radar, suffers negligible attenuation. These coincidentally independent uncertainty contributions of each radar in the region of overlap provide an ideal setting to experiment with different scenarios of quality filtering when comparing reflectivities from the two ground radars. The standard deviation of the GR–GR differences already decreases if we consider either BBF or PIA to compute the quality index and thus the weights. However, combining them multiplicatively results in the largest decrease in standard deviation, suggesting that taking both factors into account increases the consistency between the matched samples.
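The multiplicative combination of quality factors can be sketched as follows, with invented per-volume values; a volume is down-weighted if either uncertainty source is large, and in this toy sample the weighted standard deviation of the GR–GR differences drops most when both factors enter the weights:

```python
import numpy as np

# Per-volume quality factors for the two ground radars (invented values):
# beam blockage matters for the S-band radar, path-integrated attenuation
# for the C-band radar.
q_bbf = np.array([0.95, 0.90, 0.20, 0.85, 1.00])
q_pia = np.array([1.00, 0.60, 0.90, 0.30, 0.95])

# Multiplicative combination: low quality in *either* factor suppresses the weight.
q_combined = q_bbf * q_pia

def weighted_std(x, w):
    """Weighted standard deviation of x with non-negative weights w."""
    m = np.sum(w * x) / np.sum(w)
    return np.sqrt(np.sum(w * (x - m) ** 2) / np.sum(w))

# GR-GR reflectivity differences (dB); larger where quality is poor (invented).
gr_diff = np.array([1.0, 3.5, 6.0, 5.0, 1.2])
stds = {name: weighted_std(gr_diff, w)
        for name, w in [("bbf", q_bbf), ("pia", q_pia), ("combined", q_combined)]}
```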
The overlap between the two radars, and the instances of the SR passing over both at the same time, allow for verification of the SR–GR quality-weighted bias estimation method. In this regard, the consistency between the two ground radars is analyzed before and after the bias correction is applied. For cases when all three radars are coincident during a significant rainfall event, correcting the GR reflectivities with calibration bias estimates from SR overpasses dramatically improves the consistency between the two ground radars, which had shown incoherent observations before correction. We also show that for cases where adequate SR coverage is unavailable, the calibration biases interpolated with a moving average can be used to correct the GR observations for any point in time, to some extent. By using the interpolated biases to correct GR observations, we demonstrate that bias correction reduces the absolute value of the mean difference in most cases, and therefore improves the consistency between the two ground radars.
This thesis demonstrates that in general, taking into account systematic sources of uncertainty that are heterogeneous in space (e.g. BBF) and time (e.g. PIA) allows for a more consistent estimation of calibration bias, a homogeneous quantity. The bias still exhibits an unexpected variability in time, which hints that there are still other sources of errors that remain unexplored. Nevertheless, the increase in consistency between SR and GR as well as between the two ground radars, suggests that considering BBF and PIA in a weighted-averaging approach is a step in the right direction.
Despite the ample room for improvement, the approach that combines volume matching between radars (either SR–GR or GR–GR) and quality-weighted comparison is readily available for application or further scrutiny. As a step towards reproducibility and transparency in atmospheric science, the 3D matching procedure and the analysis workflows as well as sample data are made available in public repositories. Open-source software such as Python and wradlib are used for all radar data processing in this thesis. This approach towards open science provides both research institutions and weather services with a valuable tool that can be applied to radar calibration, from monitoring to a posteriori correction of archived data.
Hepcidin-25 (Hep-25) plays a crucial role in the control of iron homeostasis. Since dysfunction of the hepcidin pathway leads to multiple diseases as a result of iron imbalance, hepcidin represents a potential target for the diagnosis and treatment of disorders of iron metabolism. Despite intense research in the last decade aimed at developing a selective immunoassay for iron disorder diagnosis and treatment and at better understanding the ferroportin-hepcidin interaction, questions remain. The key to resolving these questions is exact knowledge of the 3D structure of native Hep-25. Since it was determined that the N-terminus, which is responsible for the bioactivity of Hep-25, contains a small Cu(II)-binding site known as the ATCUN motif, it was assumed that the Hep-25-Cu(II) complex is the native, bioactive form of hepcidin. This structure has thus far not been elucidated in detail. Owing to the lack of structural information on metal-bound Hep-25, little is known about its possible biological role in iron metabolism. Therefore, this work focuses on the structural characterization of metal-bound Hep-25 by NMR spectroscopy and molecular dynamics simulations. For the present work, a protocol was developed to prepare and purify properly folded Hep-25 in high quantities. To overcome the low solubility of Hep-25 at neutral pH, we introduced a C-terminal DEDEDE solubility tag. Metal binding was investigated through a series of NMR spectroscopic experiments to identify the amino acids most affected by, and mediating, metal coordination. Based on the obtained NMR data, a structure calculation was performed to generate a model of the Hep-25-Ni(II) complex. The DEDEDE tag was excluded from the structure calculation due to a lack of NMR restraints. The dynamic nature and fast solvent exchange of some of the amide protons reduced the overall number of NMR restraints available for a high-quality structure.
The NMR data revealed that the 20 C-terminal amino acids of Hep-25 underwent no significant conformational changes, compared to published results, as a result of the pH change from pH 3 to pH 7 and of metal binding. A 3D model of the Hep-25-Ni(II) complex was constructed from NMR data recorded for the hexapeptide-Ni(II) and Hep-25-DEDEDE-Ni(II) complexes, in combination with the fixed conformation of the 19 C-terminal amino acids. The NMR data of the Hep-25-DEDEDE-Ni(II) complex indicate that the ATCUN motif moves independently of the rest of the structure. The 3D model structure of metal-bound Hep-25 allows future work to elucidate hepcidin’s interaction with its receptor ferroportin and should serve as a starting point for the development of antibodies with improved selectivity.
The Government will create a motivated, merit-based, performance-driven, and professional civil service that is resistant to temptations of corruption and which provides efficient, effective and transparent public services that do not force customers to pay bribes.
— (GoIRA, 2006, p. 106)
We were in a black hole! We had an empty glass and had nothing from our side to fill it with! Thus, we accepted anything anybody offered; that is how our glass was filled; that is how we reformed our civil service.
— (Former Advisor to IARCSC, personal communication, August 2015)
How and under what conditions were the post-Taleban Civil Service Reforms of Afghanistan initiated? What were the main components of the reforms? What were their objectives, and to what extent were they achieved? Who were the leading domestic and foreign actors involved in the process? Finally, what specific factors have influenced the success and failure of Afghanistan’s Civil Service Reforms since 2002? Guided by such fundamental questions, this research studies the wicked process of reforming the Afghan civil service in an environment where a variety of contextual, programmatic, and external factors affected the design and implementation of reforms that were entirely funded and technically assisted by the international community.
Focusing on the core components of the reforms—recruitment, remuneration, and appraisal of civil servants—the qualitative study provides a detailed picture of the pre-reform civil service and its major human resources developments in the past. Following a discussion of the content and purposes of the main reform programs, it then analyzes the extent of changes in policies and practices by examining the outputs and effects of these reforms.
Moreover, the study identifies the specific factors that led the reforms toward a situation in which most of the intended objectives remain unachieved. In doing so, it explores and explains how the overwhelming influence of international actors with conflicting interests, large-scale corruption, political interference, networks of patronage, institutionalized nepotism, culturally accepted cronyism and widespread ethnic favoritism created a very complex environment and prevented the reforms from transforming Afghanistan’s patrimonial civil service into a professional civil service driven by performance and merit.
A central insight from psychological studies on human eye movements is that eye movement patterns are highly individually characteristic. They can, therefore, be used as a biometric feature, that is, subjects can be identified based on their eye movements. This thesis introduces new machine learning methods to identify subjects based on their eye movements while viewing arbitrary content. The thesis focuses on probabilistic modeling of the problem, which has yielded the best results in the most recent literature. The thesis studies the problem in three phases by proposing a purely probabilistic, probabilistic deep learning, and probabilistic deep metric learning approach. In the first phase, the thesis studies models that rely on psychological concepts about eye movements. Recent literature illustrates that individual-specific distributions of gaze patterns can be used to accurately identify individuals. In these studies, models were based on a simple parametric family of distributions. Such simple parametric models can be robustly estimated from sparse data, but have limited flexibility to capture the differences between individuals. Therefore, this thesis proposes a semiparametric model of gaze patterns that is flexible yet robust for individual identification. These patterns can be understood as domain knowledge derived from psychological literature. Fixations and saccades are examples of simple gaze patterns. The proposed semiparametric densities are drawn under a Gaussian process prior centered at a simple parametric distribution. Thus, the model will stay close to the parametric class of densities if little data is available, but it can also deviate from this class if enough data is available, increasing the flexibility of the model. The proposed method is evaluated on a large-scale dataset, showing significant improvements over the state-of-the-art. 
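A heavily simplified one-dimensional sketch of such a semiparametric density is a parametric baseline multiplied by the exponential of a Gaussian-process correction, then renormalized on a grid. The kernel, its hyperparameters, and the standard-normal baseline below are illustrative choices, not the thesis model:

```python
import numpy as np

rng = np.random.default_rng(1)
grid = np.linspace(-4.0, 4.0, 200)
dx = grid[1] - grid[0]

# Parametric baseline: standard normal log-density.
log_base = -0.5 * grid**2 - 0.5 * np.log(2.0 * np.pi)

# GP correction in log-density space (RBF kernel, invented hyperparameters);
# its variance controls how far the density may deviate from the parametric class.
lengthscale, variance = 0.8, 0.3
K = variance * np.exp(-0.5 * (grid[:, None] - grid[None, :])**2 / lengthscale**2)
f = rng.multivariate_normal(np.zeros(len(grid)), K + 1e-8 * np.eye(len(grid)))

# Semiparametric density: baseline times exp(GP correction), renormalized.
unnorm = np.exp(log_base + f)
density = unnorm / np.sum(unnorm * dx)
```

With the GP variance shrunk towards zero the density collapses back to the parametric baseline, which is the mechanism behind staying close to the parametric class when data are sparse.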
Later, the thesis replaces the model based on gaze patterns derived from psychological concepts with a deep neural network that can learn more informative and complex patterns from raw eye movement data. As previous work has shown that the distribution of these patterns across a sequence is informative, a novel statistical aggregation layer called the quantile layer is introduced. It explicitly fits the distribution of deep patterns learned directly from the raw eye movement data. The proposed deep learning approach is end-to-end learnable, such that the deep model learns to extract informative, short local patterns while the quantile layer learns to approximate the distributions of these patterns. Quantile layers are a generic approach that can converge to standard pooling layers or have a more detailed description of the features being pooled, depending on the problem. The proposed model is evaluated in a large-scale study using the eye movements of subjects viewing arbitrary visual input. The model improves upon the standard pooling layers and other statistical aggregation layers proposed in the literature. It also improves upon the state-of-the-art eye movement biometrics by a wide margin. Finally, for the model to identify any subject — not just the set of subjects it is trained on — a metric learning approach is developed. Metric learning learns a distance function over instances. The metric learning model maps the instances into a metric space, where sequences of the same individual are close, and sequences of different individuals are further apart. This thesis introduces a deep metric learning approach with distributional embeddings. The approach represents sequences as a set of continuous distributions in a metric space; to achieve this, a new loss function based on Wasserstein distances is introduced. The proposed method is evaluated on multiple domains besides eye movement biometrics. 
This approach outperforms the state of the art in deep metric learning in several domains while also outperforming the state of the art in eye movement biometrics.
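The quantile layer's aggregation step can be sketched in a non-learnable numpy form: it summarizes the distribution of each filter's activations over time by a fixed set of quantiles. The real layer is differentiable and trained end-to-end; the quantile grid used here is an assumption:

```python
import numpy as np

def quantile_layer(features, quantiles=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Aggregate a sequence of local feature activations into a fixed-size
    description of their distribution.

    features: (channels, time) array of activations produced by
    convolutional filters over an eye movement sequence.
    Returns a (channels, len(quantiles)) array.  With quantiles=(1.0,)
    this reduces to global max pooling; with more quantiles it describes
    the distribution of the pooled features in more detail."""
    return np.quantile(features, quantiles, axis=-1).T
```

This illustrates why the layer generalizes standard pooling: a single extreme quantile recovers max pooling, while a denser grid approximates the whole activation distribution.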
Emotions are a central element of human experience. They occur with high frequency in everyday life and play an important role in decision making. However, currently there is no consensus among researchers on what constitutes an emotion and on how emotions should be investigated. This dissertation identifies three problems of current emotion research: the problem of ground truth, the problem of incomplete constructs and the problem of optimal representation. I argue for a focus on the detailed measurement of emotion manifestations with computer-aided methods to solve these problems. This approach is demonstrated in three research projects, which describe the development of methods specific to these problems as well as their application to concrete research questions.
The problem of ground truth describes the practice of presupposing a certain structure of emotions as the a priori ground truth. This determines the range of emotion descriptions and sets a standard for the correct assignment of these descriptions. The first project illustrates how this problem can be circumvented with a multidimensional emotion perception paradigm, which stands in contrast to the emotion recognition paradigm typically employed in emotion research. This paradigm allows an objective difficulty measure to be calculated and subjective difficulty ratings to be collected for the perception of emotional stimuli. Moreover, it enables the use of an arbitrary number of emotion stimulus categories instead of the commonly used six basic emotion categories. Accordingly, we collected data from 441 participants using dynamic facial expression stimuli from 40 emotion categories. Our findings suggest an increase in emotion perception difficulty with increasing actor age and provide evidence that young adults, the elderly and men underestimate their emotion perception difficulty. While these effects were predicted from the literature, we also found unexpected and novel results. In particular, the increased difficulty on the objective difficulty measure for female actors and observers stood in contrast to reported findings. Exploratory analyses revealed low relevance of person-specific variables for the prediction of emotion perception difficulty, but highlighted the importance of a general pleasure dimension for the ease of emotion perception.
The second project targets the problem of incomplete constructs, which relates to vaguely defined psychological constructs on emotion with insufficient ties to tangible manifestations. The project exemplifies how a modern data collection method such as face tracking can be used to sharpen these constructs, using the example of arousal, a long-standing but fuzzy construct in emotion research. It describes how measures of distance, speed and magnitude of acceleration can be computed from face tracking data and investigates their intercorrelations. We find moderate to strong correlations among all measures of static information on the one hand and all measures of dynamic information on the other. The project then investigates how self-rated arousal is tied to these measures in 401 neurotypical individuals and 19 individuals with autism. Distance to the neutral face was predictive of arousal ratings in both groups. Lower mean arousal ratings were found for the autistic group, but no difference in the correlation between the measures and arousal ratings could be found between groups. Results were replicated in a high-autistic-traits group consisting of 41 participants. The findings suggest a qualitatively similar perception of arousal for individuals with and without autism. No correlations between valence ratings and any of the measures could be found, which emphasizes the specificity of our tested measures for the construct of arousal.
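A minimal sketch of how such measures might be derived from facial landmark trajectories follows. The exact definitions used in the project are not reproduced here; averaging over all landmarks and the assumed frame rate are illustrative choices:

```python
import numpy as np

def face_tracking_measures(landmarks, neutral, dt=1 / 30):
    """Compute static and dynamic measures from face tracking data.

    landmarks: (frames, points, 2) landmark trajectory of one video.
    neutral:   (points, 2) neutral-face reference configuration.
    Returns per-frame distance to the neutral face (static information)
    plus speed and magnitude of acceleration (dynamic information)."""
    # static: mean Euclidean distance of all landmarks to the neutral face
    dist = np.linalg.norm(landmarks - neutral, axis=-1).mean(axis=-1)
    # dynamic: first and second temporal derivatives of landmark positions
    vel = np.diff(landmarks, axis=0) / dt
    speed = np.linalg.norm(vel, axis=-1).mean(axis=-1)
    acc = np.diff(vel, axis=0) / dt
    acc_mag = np.linalg.norm(acc, axis=-1).mean(axis=-1)
    return dist, speed, acc_mag
```

The distance series carries the static information correlated with arousal ratings, while speed and acceleration magnitude capture the dynamic information.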
The problem of optimal representation refers to the search for the best representation of emotions and the assumption that there is a one-size-fits-all solution. In the third project we introduce partial least squares analysis as a general method to find an optimal representation relating two high-dimensional data sets to each other. The project demonstrates its applicability to emotion research on the question of emotion perception differences between men and women. The method was used with emotion rating data from 441 participants and face tracking data computed on 306 videos. We found quantitative as well as qualitative differences in the perception of emotional facial expressions between these groups. We showed that women's emotional perception systematically captured more of the variance in facial expressions. Additionally, we could show that significant differences exist in the way women and men perceive some facial expressions, which could be visualized as concrete facial expression sequences. These expressions suggest differing perceptions of masked and ambiguous facial expressions between the sexes. To facilitate use of the developed method by the research community, a package for the statistical environment R was written. Furthermore, to call attention to the method and its usefulness for emotion research, a website was designed that allows users to explore a model of emotion ratings and facial expression data in an interactive fashion.
Electrets are dielectrics with quasi-permanent electric charge and/or dipoles and can be regarded as the electric analogue of a magnet. Since the discovery of the excellent charge retention capacity of poly(tetrafluoroethylene) and the invention of the electret microphone, electrets have grown from a scientific curiosity into an important topic in both science and technology. The history of electret research goes hand in hand with the quest for new materials with better charge and/or dipole retention. To be useful, electrets normally have to be charged or poled to render them electro-active. This process involves electric-charge deposition and/or electric-dipole orientation at the dielectric's surfaces and in its bulk. Knowledge of the spatial distribution of electric charge and/or dipole polarization after deposition and during subsequent decay is crucial for improving their stability in the dielectrics.
Likewise, for dielectrics used in electrical insulation applications, there is also a need for spatial profiling of accumulated space charge and polarization. Traditionally, space-charge accumulation and large dipole polarization within insulating dielectrics are considered undesirable and harmful, as they may cause dielectric loss and can lead to internal electric field distortion and local field enhancement. A high local electric field can trigger several aging processes and reduce the insulating dielectric's lifetime. However, with the advent of high-voltage DC transmission and high-voltage capacitors for energy storage, this is no longer always the case. There is some overlap between the two fields of electrets and electrical insulation. While quasi-permanently trapped electric charge and/or large remanent dipole polarization are the requisites for electret operation, stably trapped electric charge in electrical insulation helps reduce charge transport and thus the overall electric conductivity. Controlled charge trapping can help prevent further charge injection and accumulation and can serve field-grading purposes in insulating dielectrics, whereas large dipole polarization can be utilized in energy storage applications.
In this thesis, Piezoelectrically generated Pressure Steps (PPSs) were employed as a nondestructive method to probe the electric-charge and dipole-polarization distribution in a range of thin-film (several hundred micrometre) polymer-based materials, namely polypropylene (PP), low-density polyethylene/magnesium oxide (LDPE/MgO) nanocomposites and poly(vinylidene fluoride-co-trifluoroethylene) (P(VDF-TrFE)) copolymer. PP film surface-treated with phosphoric acid to introduce isolated surface nanostructures serves as an example of a 2-dimensional nanocomposite, whereas LDPE/MgO serves as the case of a 3-dimensional nanocomposite with MgO nanoparticles dispersed in the LDPE polymer matrix. The results show that the nanoparticles on the surface of acid-treated PP and in the bulk of LDPE/MgO nanocomposites improve the charge trapping capacity of the respective material and prevent further charge injection and transport, and that the enhanced charge trapping capacity makes PP and LDPE/MgO nanocomposites potential materials for both electret and electrical insulation applications. As for PVDF and VDF-based copolymers, the remanent spatial polarization distribution depends critically on the poling method as well as on the specific parameters used in the respective poling method. In this work, homogeneous polarization poling of P(VDF-TrFE) copolymers with different VDF contents was attempted with cyclical hysteresis poling. The behaviour of remanent polarization growth and the spatial polarization distribution are reported and discussed. The PPS method has proven to be a powerful tool for characterizing charge storage and transport in a wide range of polymer materials, from nonpolar and polar polymers to polymer nanocomposites.
In light of the debate on the consequences of competitive contracting out of traditionally public services, this research compares two mechanisms used to allocate funds in development cooperation—direct awarding and competitive contracting out—aiming to identify their potential advantages and disadvantages.
Agency theory is applied within the framework of rational-choice institutionalism to study the institutional arrangements that surround the two money allocation mechanisms, identify the incentives they create for the behavior of individual actors in the field, and examine how these then translate into measurable differences in the managerial quality of development aid projects. In this work, project management quality is seen as an important determinant of overall project success.
For data-gathering purposes, the German development agency, the Gesellschaft für Internationale Zusammenarbeit (GIZ), is used because of its unique way of working. Whereas the majority of projects receive funds via the direct-award mechanism, there is a commercial department, GIZ International Services (GIZ IS), that has to compete for project funds.
The data concerning project management practices on GIZ and GIZ IS projects was gathered via a web-based, self-administered survey of project team leaders. Principal component analysis was applied to reduce the dimensionality of the independent variable to a total of five components of project management. Furthermore, multiple regression analysis identified the differences between the separate components on these two project types. Enriched by qualitative data gathered via interviews, this thesis offers insights into everyday managerial practices in development cooperation and identifies the advantages and disadvantages of the two allocation mechanisms.
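The quantitative pipeline, reducing the survey items to a small number of project management components and then regressing each component on project type, can be sketched as follows. The data are synthetic and all variable names are hypothetical:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_projects, n_items = 120, 25
answers = rng.normal(size=(n_projects, n_items))      # standardized survey items
is_competitive = rng.integers(0, 2, size=n_projects)  # e.g. GIZ IS (1) vs GIZ (0)

# reduce the survey items to five project management components
components = PCA(n_components=5).fit_transform(answers)

# regress each component on the allocation mechanism (controls omitted here)
effects = []
for k in range(components.shape[1]):
    model = LinearRegression().fit(is_competitive.reshape(-1, 1), components[:, k])
    effects.append(model.coef_[0])  # mean difference between the two project types
```

Each coefficient estimates how a given project management component differs between competitively awarded and directly awarded projects.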
The thesis first reiterates the responsibility of donors and implementers for overall aid effectiveness. It shows that the mechanism of competitive contracting out leads to better oversight and control of implementers, fosters deeper cooperation between implementers and beneficiaries, and has the potential to strengthen ownership by recipient countries. On the other hand, it shows that evaluation quality does not benefit greatly from the competitive allocation mechanism and that the quality of the knowledge management and learning component is better when direct-award mechanisms are used. This raises questions about the limited possibilities for actors in the field to learn from past mistakes and incorporate the findings into future interventions, which is one of the fundamental issues of aid effectiveness. Finally, the findings show immense deficiencies in the oversight and control of individual projects in German development cooperation.
Determining the relationship between genotype and phenotype is the key to understanding the plasticity and robustness of phenotypes in nature. While directly observable plant phenotypes (e.g. agronomic, yield and stress resistance traits) have been well investigated, gaps remain in our knowledge of the genetic basis of intermediate phenotypes, such as metabolic phenotypes. Dissecting the links between genotype and phenotype depends on suitable statistical models. The state-of-the-art models were developed for directly observable phenotypes, regardless of the characteristics of intermediate phenotypes. This thesis aims to fill the gaps in our understanding of the genetic architecture of intermediate phenotypes and of how they tie into composite traits, namely plant growth. Metabolite levels and reaction fluxes, two aspects of metabolic phenotypes, are shaped by the interrelated chemical reactions that form the genome-scale metabolic network. Here, I attempt to answer the question: can knowledge of the underlying genome-scale metabolic network improve model performance for the prediction of metabolic phenotypes and associated plant growth? To this end, two projects are investigated in this thesis. First, we propose an approach that couples genomic selection with the genome-scale metabolic network and metabolic profiles of Arabidopsis thaliana to predict growth. This project is the first integration of genomic data with fluxes predicted in a constraint-based modeling framework and data on biomass composition. We demonstrate that our approach leads to a considerable increase in prediction accuracy in comparison to state-of-the-art methods in both within- and across-environment predictions. Our work therefore paves the way for combining knowledge of metabolic mechanisms with the statistical approach underlying genomic selection to increase the efficiency of future plant breeding.
Second, we investigate how reliable genomic selection is for metabolite levels, and which single nucleotide polymorphisms (SNPs), obtained from different neighborhoods of a given metabolic network, contribute most to the accuracy of prediction. The results show that the local structure of first and second neighborhoods is not sufficient for predicting the genetic basis of metabolite levels in Zea mays. Furthermore, we find that enzymatic SNPs capture most of the genetic variance and that the contribution of non-enzymatic SNPs is in fact small. To comprehensively understand the genetic architecture of metabolic phenotypes, I extend the study to a local Arabidopsis thaliana population and its hybrids. We analyze the genetic architecture of primary and secondary metabolism as well as of growth. In comparison to primary metabolites, compounds from secondary metabolism were more variable and showed more non-additive inheritance patterns, which could be attributed to epistasis. Our study thus demonstrates that heterozygosity in a local Arabidopsis thaliana population generates metabolic variation and may affect several tasks directly linked to metabolism. The studies in this thesis improve our knowledge of the genetic architecture of metabolic phenotypes in both inbred and hybrid populations. The approaches I propose for integrating the genome-scale metabolic network with genomic data provide the opportunity to obtain mechanistic insights into the determinants of agronomically important polygenic traits.
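The genomic selection step, predicting a phenotype from genome-wide SNP data with shrinkage, and the comparison of prediction accuracies for SNP subsets drawn from different parts of the network can be sketched with a ridge model (an RR-BLUP-style stand-in for the methods described above; all data below are simulated and the "enzymatic" subset is a hypothetical illustration):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_lines, n_snps = 150, 400
genotypes = rng.integers(0, 3, size=(n_lines, n_snps)).astype(float)  # 0/1/2 coding
true_effects = np.zeros(n_snps)
enzymatic = rng.choice(n_snps, size=40, replace=False)  # e.g. SNPs in enzyme-coding genes
true_effects[enzymatic] = rng.normal(size=40)
phenotype = genotypes @ true_effects + rng.normal(scale=2.0, size=n_lines)

def prediction_accuracy(snp_subset):
    """Cross-validated correlation of observed and predicted phenotype
    using only the given SNP subset (ridge gives RR-BLUP-style shrinkage)."""
    pred = cross_val_predict(Ridge(alpha=10.0), genotypes[:, snp_subset], phenotype, cv=5)
    return np.corrcoef(phenotype, pred)[0, 1]

acc_all = prediction_accuracy(np.arange(n_snps))
acc_enz = prediction_accuracy(enzymatic)
```

Comparing such accuracies across SNP subsets (e.g. enzymatic vs. non-enzymatic, or network neighborhoods of increasing order) mirrors the question of which parts of the metabolic network carry the predictive genetic signal.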
The North China Plain (NCP) is one of the most productive and intensive agricultural regions in China. High doses of mineral nitrogen (N) fertiliser, often combined with flood irrigation, are applied, resulting in N surpluses, groundwater depletion and environmental pollution. The objectives of this thesis were to use the HERMES model to simulate the N cycle in winter wheat (Triticum aestivum L.)–summer maize (Zea mays L.) double crop rotations and to show the performance of the HERMES model, of the new ammonia volatilisation sub-module and of the new nitrification inhibition tool in the NCP. Further objectives were to assess the model's potential to save N and water at plot and county scale, in both the short and the long term. Additionally, improved management strategies were to be found with the help of a model-based nitrogen fertiliser recommendation (NFR) and adapted irrigation.
Results showed that the HERMES model performed well under the growing conditions of the NCP and was able to describe the relevant processes related to soil–plant interactions concerning N and water during a 2.5-year field experiment. No differences in grain yield could be found between the real-time model-based NFR and the other treatments of the plot-scale experiments in Quzhou County. Simulations with increasing amounts of irrigation resulted in significantly higher N leaching, higher N requirements of the NFR and reduced yields. Thus, conventional flood irrigation as currently practised by the farmers bears great uncertainties, and exact irrigation amounts should be known for future simulation studies. In the best-practice scenario simulation at plot scale, N input and N leaching, but also irrigation water, could be reduced strongly within 2 years. Thus, the model-based NFR in combination with adapted irrigation had the highest potential to reduce nitrate leaching, compared to farmers' practice and mineral N (Nmin)-reduced treatments. The calibrated and validated ammonia volatilisation sub-module of the HERMES model also worked well under the climatic and soil conditions of northern China. Simple ammonia volatilisation approaches also gave satisfactory results compared to process-oriented approaches. In the simulation with ammonium sulphate nitrate with nitrification inhibitor (ASNDMPP), ammonia volatilisation was higher than in the simulation without nitrification inhibitor, while the result for nitrate leaching was the opposite. Although nitrification worked well in the model, nitrification-borne nitrous oxide emissions should be considered in the future. Simulated annual long-term (31 years) N losses in the whole of Quzhou County in Hebei Province were 296.8 kg N ha−1 under the common farmers' practice treatment and 101.7 kg N ha−1 under an optimised treatment including NFR and automated irrigation (OPTai).
Spatial differences in simulated N losses throughout Quzhou County could be attributed only to different N inputs. Compared to farmers' practice, simulations of an optimised treatment could save on average more than 260 kg N ha−1 a−1 of fertiliser input and 190 kg N ha−1 a−1 of N losses, as well as around 115.7 mm a−1 of water. These long-term simulation results showed a lower N and water saving potential than the short-term simulations and underline the necessity of long-term simulations to overcome the effect of high initial N stocks in the soil.
Additionally, the OPTai treatment worked best on clay loam soil, except for a high simulated denitrification loss, while the simulations using farmers' practice irrigation could not match the actual water needs, resulting in yield decline, especially for winter wheat. Thus, a precise adaptation of management to actual weather conditions and plant growth needs is necessary for future simulations. However, the optimised treatments did not seem able to maintain the soil organic matter pools, even with full crop residue input. Extra organic inputs seem to be required to maintain soil quality in the optimised treatments.
HERMES is a relatively simple model, with regard to data input requirements, for simulating the N cycle. It can support the interpretation of management options at plot, county and regional scale for extension and research staff. In combination with other N- and water-saving methods, too, the model promises to be a useful tool.
Earthquake swarms are characterized by large numbers of events occurring in a short period of time within a confined source volume and, in contrast to tectonic sequences, without a significant mainshock-aftershock pattern. Intraplate swarms in the absence of active volcanism usually occur in continental rifts, for example in the Eger Rift zone in North West Bohemia, Czech Republic. A common hypothesis links event triggering to pressurized fluids. However, the exact causal chain is often poorly understood, since the underlying geotectonic processes are slow compared to tectonic sequences. The high event rate during active periods challenges standard seismological routines, as these are often designed for single events and are therefore costly in terms of human resources when working with phase picks, or computationally costly when exploiting full waveforms.
This methodological thesis develops new approaches to analyze earthquake swarm seismicity as well as the underlying seismogenic volume. It focuses on the region of North West (NW) Bohemia, a well-studied, well-monitored earthquake swarm region.
In this work I develop and test an innovative approach to detect and locate earthquakes using deep convolutional neural networks. This technology offers great potential, as it allows large amounts of data to be processed efficiently, which becomes increasingly important given that seismological data storage grows at an increasing pace. The proposed deep neural network, trained on NW Bohemian earthquake swarm records, is able to locate 1000 events in less than 1 second using full waveforms, while approaching the precision of double-difference relocated catalogs. A further technological novelty is that the trained filters of the deep neural network's first layer can be repurposed to function as a pattern matching event detector without additional training on noise datasets. For further methodological development and benchmarking, I present a new toolbox to generate realistic earthquake cluster catalogs as well as synthetic full waveforms of those clusters in an automated fashion. The input is parameterized using constraints on source volume geometry, nucleation and frequency-magnitude relations. It harnesses recorded noise to produce highly realistic synthetic data for benchmarking and development. This tool is used to study and assess detection performance in terms of the magnitude of completeness Mc of a full waveform detector applied to synthetic data of a hydrofracturing experiment at the Wysin site, Poland.
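The pattern-matching detection principle that the repurposed first-layer filters implement, sliding a template over continuous data and flagging samples with high normalized cross-correlation, can be sketched as a generic matched-filter detector (not the network itself; the threshold value is an assumption):

```python
import numpy as np

def matched_filter_detect(trace, template, threshold=0.7):
    """Slide a waveform template over a continuous trace and return the
    sample indices where the normalized cross-correlation (in [-1, 1])
    exceeds the threshold."""
    m = len(template)
    t = (template - template.mean()) / (template.std() * m)
    cc = np.empty(len(trace) - m + 1)
    for i in range(len(cc)):
        win = trace[i:i + m]
        s = win.std()
        cc[i] = 0.0 if s == 0 else np.dot(t, (win - win.mean()) / s)
    return np.flatnonzero(cc > threshold)
```

In the thesis the templates are not hand-picked waveforms but the filters the network has already learned, which is what makes the detector available without additional training on noise data.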
Finally, I present and demonstrate a novel approach to overcome the masking effects of wave propagation between earthquakes and stations and to determine attenuation directly in the source volume where clustered earthquakes occur. The new event-couple spectral ratio approach exploits the high-frequency spectral slopes of two events sharing the greater part of their ray paths. Synthetic tests based on the toolbox mentioned above show that this method is able to infer seismic wave attenuation within the source volume at high spatial resolution. Furthermore, it is independent of the distance to a station as well as of the complexity of the attenuation and velocity structure outside the source volume of the swarms. The application to recordings of the NW Bohemian earthquake swarms shows increased P phase attenuation within the source volume (Qp < 100), based on results at a station located close to the village of Luby (LBC). The recordings of a station located in epicentral proximity, close to Nový Kostel (NKC), show a relatively high complexity, indicating that waves arriving at that station experience more scattering than signals recorded at other stations. The high level of complexity destabilizes the inversion. Therefore, the Q estimate at NKC is not reliable, and independent confirmation of the high-attenuation finding, given the geometrical and frequency constraints, is still required. However, high attenuation in the source volume of the NW Bohemian swarms has been postulated before, in relation to an expected, highly damaged zone bearing CO2 at high pressure.
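The core of the event-couple idea can be sketched as follows: for two events whose rays largely coincide, path and site terms cancel in the spectral ratio, and above both corner frequencies the log ratio decays linearly with frequency at a rate set by the differential attenuation Δt*. The sketch below simplifies by ignoring the corner-frequency terms of the source spectra; the frequency band and the parameter values are assumptions:

```python
import numpy as np

def differential_tstar(freqs, spec1, spec2, fmin=10.0):
    """Differential attenuation dt* = t2* - t1* of an event couple from the
    high-frequency slope of the log spectral ratio:
    ln(A1/A2)(f) ~ const + pi * f * dt*  (above both corner frequencies)."""
    hf = freqs >= fmin
    slope, _ = np.polyfit(freqs[hf], np.log(spec1[hf] / spec2[hf]), 1)
    return slope / np.pi

# synthetic check: event 2 accumulates extra attenuation on its unshared path
freqs = np.linspace(1.0, 50.0, 200)
dt_extra = 0.02                                    # s, e.g. t = 1 s through Q = 50
spec1 = np.exp(-np.pi * freqs * 0.005)             # shared-path attenuation only
spec2 = spec1 * np.exp(-np.pi * freqs * dt_extra)  # plus source-volume attenuation
```

Given the travel time t through the non-shared path segment inside the source volume, Q ≈ t / Δt*; in this synthetic example 1 s / 0.02 s gives Q = 50, i.e. a strongly attenuating source volume.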
The methods developed in the course of this thesis have the potential to improve our understanding of the role of fluids and gases in intraplate event clustering.
By examining rewritings of history for the citizen in the German-speaking world and in France during the Enlightenment and the Revolution, this book offers a fresh and detached perspective on the public uses of history today, in particular in France, where the debate around the national narrative (roman national) remains lively. The first part of the book, devoted to the exemplarity of a history illustrated with engravings that left a lasting mark on representations of the past, revisits the question of great men and reproduces, translates and analyzes the circulation of edifying examples between the two spaces.
The second part deals with a mode of pedagogical representation of history that aroused, and still arouses, fascination while posing a methodological challenge: the pedagogical use of a table that allows the entire history of a people, or even of all humanity, to be grasped at a single glance, and political lessons to be drawn from it. The idea, still influential today, of a German or French political or pedagogical model of history writing, coupled or not with geography, is examined here through the prism of the precise contexts in which it was conceived.
This study assesses and explains international bureaucracies' performance and their role as policy advisors and expert authorities from the perspective of domestic stakeholders. International bureaucracies are the secretariats of international organizations that carry out their work, including generating knowledge, providing policy advice and implementing policy programs and projects. Scholars increasingly regard them as governance actors that are able to influence global and domestic policy making. To explain this influence, research has mainly focused on international bureaucracies' formal features and/or staff characteristics. The way in which they are actually perceived by their domestic stakeholders, in particular by national bureaucrats, has not been systematically studied. Yet this is equally important, given that these stakeholders are the international bureaucracies' addressees and are actors that (potentially) make use of international bureaucracies' policy advice, which can be seen as an indicator of international bureaucracies' influence. Accordingly, I argue that domestic stakeholders' assessments can likewise contribute to explaining international bureaucracies' influence.
The overarching research questions the study addresses are: What are national stakeholders' perspectives on international bureaucracies, and under which conditions do they consider international bureaucracies' policy advice? In answering these questions, I focus on three specific organizational features that the literature has considered important for international bureaucracies' independent influence, namely international bureaucracies' performance and their roles as policy advisors and as expert authorities. These three features are studied separately in three independent articles, which are presented in Part II of this article-based dissertation.
To answer the research questions, I draw on novel data from a global survey among ministry officials of 121 countries. The survey captures ministry officials’ assessments of international bureaucracies’ features and their behavior with respect to international bureaucracies’ policy advice. The overall sample comprises the bureaucracies of nine global and nine regional international organizations in eight thematic areas in the policy fields of agriculture and finance.
The overall finding of this study is that international bureaucracies' performance and their role as policy advisors and expert authorities, as perceived by ministry officials, are highly context-specific and relational. These features vary not only across international bureaucracies but much more intra-organizationally across the different thematic areas that an international bureaucracy addresses, i.e. across different thematic contexts. As to the relational nature of international bureaucracies' features, the study generally finds strong variation across the assessments by ministry officials from different countries and across thematic areas. Hence, the findings highlight that it is likewise important to study international bureaucracies through the perspective of their stakeholders and to take account of the different thematic areas and contexts in which international bureaucracies operate.
The study contributes to current research on international bureaucracies in various ways. First, it directly surveys one important type of domestic stakeholders, namely national ministry officials, as to how they evaluate certain aspects of international bureaucracies instead of deriving them from their structural features, policy documents or assessments by their staff. Furthermore, the study empirically tests a range of theoretical hypotheses derived from the literature on international bureaucracies’ influence, as well as related literature. Second, the study advances methods of assessing international bureaucracies through a large-N, cross-national expert survey among ministry officials. A survey of this type of stakeholder and of this scope is – to my knowledge – unprecedented. Yet, as argued above, their perspectives are equally important for assessing and explaining international bureaucracies’ influence. Third, the study adapts common theories of international bureaucracies’ policy influence and expert authority to the assessments by ministry officials. In so doing, it tests hypotheses that are rooted in both rationalist and constructivist accounts and combines perspectives on international bureaucracies from both International Relations and Public Administration. Empirically supporting and challenging these hypotheses further complements the theoretical understanding of the determinants of international bureaucracies’ influence among national bureaucracies from both rationalist and constructivist perspectives.
Overall, this study advances our understanding of international bureaucracies by systematically taking ministry officials' perspectives into account in order to determine under which conditions international bureaucracies are perceived to perform well and are able to have an effect as policy advisors and expert authorities among national bureaucracies. Thereby, the study helps to specify the extent to which international bureaucracies, as global governance actors, are able to permeate domestic governance via ministry officials and thus contributes to the question of why some international bureaucracies play a greater role and are ultimately able to exert more influence than others.
For millennia, humans have affected landscapes all over the world. Through its horizontal expansion, agriculture plays a major role in the process of fragmentation. This process is driven by the substitution of natural habitats by agricultural land, leading to agricultural landscapes. These landscapes are characterized by an alternation of agriculture and other land uses such as forests. In addition, there are landscape elements of natural origin, such as small water bodies. Areas of different land use lie beside each other as patches, or fragments. They are physically distinguishable, which makes them look like a patchwork from an aerial perspective. Each of these fragments is an ecosystem of its own, with conditions and properties that differ from those of its adjacent fragments. As open systems, they exchange information, matter and energy across their boundaries. These boundary areas are called transition zones. Here, habitat properties and environmental conditions are altered compared to the interior of the fragments. This changes the abundance and composition of species in the transition zones, which in turn has a feedback effect on the environmental conditions.
The literature mainly offers information and insights on species abundance and composition in forested transition zones. Abiotic effects, i.e., the gradual changes in energy and matter, have received less attention. In addition, little is known about non-forested transition zones. For example, the effects of an altered microclimate, matter dynamics, or different light regimes on agricultural yield in transition zones are hardly researched or understood. The processes in transition zones are closely connected with altered provisioning and regulating ecosystem services. Models can be used to disentangle the mechanisms and to upscale the effects.
My thesis provides insights into these topics: the literature was reviewed, and a conceptual framework for the quantitative description of gradients of matter and energy in transition zones was introduced. Results of measurements of environmental gradients such as microclimate, aboveground biomass, and soil carbon and nitrogen content are presented, spanning from within the forest into arable land. Neither the measurements nor the literature review could validate a transition zone of 100 m for abiotic effects. Although this value is often reported and used in the literature, the zone is likely to be smaller.
Further, the measurements suggest that trees in transition zones are smaller than those in the interior of the fragments, and that less biomass is found in the arable land's transition zone. These results support the hypothesis that less carbon is stored in the aboveground biomass of transition zones. The soil at the edge (zero line) between adjacent forest and arable land contains more nitrogen and carbon than the interior of the fragments. One-year measurements in the transition zone also provided evidence that the microclimate differs from that of the fragments' interior.
To predict the possible yield decreases that transition zones might cause, a modelling approach was developed. Using a small virtual landscape, I modelled the shading of adjacent arable land by a forest fragment and its effect on yield using the MONICA crop growth model. In the transition zone, yield was lower than in the interior due to shading. The simulation results were upscaled to the landscape level and, as an example, calculated for the arable land of a whole region in Brandenburg, Germany.
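The qualitative mechanism can be sketched with a toy model (not MONICA itself): if biomass gain is proportional to intercepted radiation, as in Monteith-type light-use-efficiency approaches, a shading loss that decays with distance from the forest edge translates directly into a yield gradient. All parameter values below are illustrative assumptions.

```python
import math

# Hypothetical sketch (not the MONICA model): shading near a forest edge
# reduces intercepted radiation and hence yield. The decay length and the
# maximum loss fraction are made-up illustrative parameters.

def shading_factor(distance_m, decay_length_m=25.0, max_loss=0.4):
    """Fraction of incoming radiation lost to shading, decaying
    exponentially with distance from the forest edge."""
    if distance_m < 0:
        raise ValueError("distance must be non-negative")
    return max_loss * math.exp(-distance_m / decay_length_m)

def relative_yield(distance_m):
    """Yield relative to the unshaded field interior (1.0 = interior)."""
    return 1.0 - shading_factor(distance_m)

# Yield deficit summed over a 100 m transect from the edge (per-metre strips)
deficit = sum(1.0 - relative_yield(d) for d in range(0, 100))
```

Summing such per-metre deficits along all forest edges is one way to upscale an edge effect to a whole landscape.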
The major findings of my thesis are: (1) Transition zones are likely to be much smaller than assumed in the scientific literature; (2) transition zones are not solely a phenomenon of forested ecosystems, but extend significantly into arable land as well; (3) empirical and modelling results show that transition zones encompass biotic and abiotic changes that are likely to be important to a variety of agricultural landscape ecosystem services.
Interlocutors typically link their utterances to the discourse environment and enrich communication by linguistic (e.g., information packaging) and extra-linguistic (e.g., eye gaze, gestures) means to optimize information transfer. Psycholinguistic studies underline that, for meaning computation, listeners profit from linguistic and visual cues that draw their focus of attention to salient information. This dissertation is the first work to examine how linguistic as compared to visual salience cues influence sentence comprehension using the very same experimental paradigms and materials, namely German subject-before-object (SO) and object-before-subject (OS) sentences, across the two cue modalities. Linguistic salience was induced by indicating a referent as the aboutness topic. Visual salience was induced by implicit (i.e., unconscious) or explicit (i.e., shared) manipulations of listeners' attention to a depicted referent.
In Study 1, a selective, facilitative impact of linguistic salience on the context-sensitive OS word order was found using offline comprehensibility judgments. More precisely, during online sentence processing, this impact was characterized by a reduced sentence-initial Late positivity, which reflects reduced processing costs for updating the current mental representation of the discourse. This facilitative impact of linguistic salience was not replicated by means of an implicit visual cue (Study 2) that had been shown to modulate word order preferences during sentence production. However, a gaze shift to a depicted referent, as an indicator of shared attention, eased sentence-initial processing similarly to linguistic salience, as revealed by reduced reading times (Study 3). Yet, unlike linguistic salience, this cue did not modulate the strong subject-antecedent preference during later pronoun resolution. Taken together, these findings suggest a significant impact of linguistic and visual salience cues on sentence comprehension, substantiating that information delivered via language as well as via the visual environment is integrated into the mental representation of the discourse; however, the way salience is induced is crucial to its impact.
For half a century, cytometry has been a major scientific discipline in the field of cytomics, the study of systems biology at the single-cell level. It enables the investigation of physiological processes, functional characteristics, and rare events by analysing multiple protein parameters on an individual cell basis. In the last decade, mass cytometry has been established, which increased the number of proteins measurable in parallel to up to 50. This has shifted the analysis strategy from conventional consecutive manual gating towards multi-dimensional data processing. Novel algorithms have been developed to tackle these high-dimensional protein combinations in the data. They are mainly based on clustering or non-linear dimension reduction techniques, or both, often combined with an upstream downsampling procedure. However, these tools have limitations in interpretability, reproducibility, computational complexity, or comparability between samples and groups.
To address this bottleneck, a reproducible, semi-automated cytometric data mining workflow, PRI (pattern recognition of immune cells), is proposed which combines three main steps: i) data preparation and storage; ii) bin-based combinatorial variable engineering of three protein markers, so-called triploTs, and subsequent sectioning of these triploTs into four parts; and iii) deployment of a data-driven supervised learning algorithm, cross-validated elastic-net regularized logistic regression, with the triploT sections as input variables. The variables selected by the models are ranked by their prevalence and thus by their potential discriminative value. The purpose is to significantly facilitate the identification of meaningful subpopulations that best distinguish between two groups. The proposed workflow is exemplified on a recently published public mass cytometry data set in which the authors found a T cell subpopulation that discriminates between effective and ineffective treatment of breast carcinomas in mice. With PRI, that subpopulation was not only validated but further narrowed down to a particular Th1 cell population. Moreover, additional insights into combinatorial protein expression are revealed in a traceable manner. An essential element of the workflow is the reproducible variable engineering. These variables serve as the basis for a clearly interpretable visualization, for structured variable exploration, and as input layers in neural network constructs.
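Step iii) can be illustrated with scikit-learn's cross-validated elastic-net logistic regression. The data here are random stand-ins for the triploT section variables, so this is a sketch of the method class, not the published analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# Illustrative stand-in data: 60 samples x 20 "section" variables,
# with a group label that depends on the first two variables only.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Cross-validated elastic-net regularized logistic regression:
# the L1 part induces sparsity (variable selection), the L2 part
# stabilizes correlated variables.
clf = LogisticRegressionCV(
    penalty="elasticnet",
    solver="saga",        # the sklearn solver that supports elastic net
    l1_ratios=[0.5],      # mixing between L1 and L2
    Cs=5,
    cv=5,
    max_iter=5000,
).fit(X, y)

# Non-zero coefficients mark candidate discriminative variables
selected = np.flatnonzero(clf.coef_[0])
```

In the PRI workflow, the surviving variables would then be ranked by how often they are selected across models.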
PRI facilitates the determination of marker levels in a semi-continuous manner. Jointly with the combinatorial display, it allows straightforward observation of correlating patterns and, thus, of the dominantly expressed markers and cell hierarchies. Furthermore, it enables the identification and complex characterization of discriminating subpopulations thanks to its reproducible and pseudo-multi-parametric pattern presentation. This endorses its applicability as a tool for unbiased investigations of cell subsets within multi-dimensional cytometric data sets.
The central motivation of this thesis was to provide possible solutions and concepts to improve the performance (e.g., activity and selectivity) of the electrochemical N2 reduction reaction (NRR). Given that porous carbon-based materials usually exhibit a broad range of structural properties, they could be promising NRR catalysts. Therefore, the advanced design of novel porous carbon-based materials and the investigation of their application in the electrocatalytic NRR, including the particular reaction mechanisms, are the most crucial points to be addressed. In this regard, three main topics were investigated, all related to the functionalization of porous carbon for the electrochemical NRR or other electrocatalytic reactions.
In chapter 3, a novel C-TixOy/C nanocomposite is described that was obtained via simple pyrolysis of MIL-125(Ti). A novel mode of N2 activation is achieved by doping carbon atoms from the nearby porous carbon into the anion lattice of TixOy. By comparing the NRR performance of M-Ts and by carrying out DFT calculations, it is found that the existence of (O-)Ti-C bonds in C-doped TixOy can largely improve the ability to activate and reduce N2 compared to unoccupied OVs in TiO2. The strategy of rationally doping heteroatoms into the anion lattice of transition metal oxides to create active centers may open many new opportunities beyond the use of noble metal-based catalysts, also for other reactions that require the activation of small molecules.
In chapter 4, a novel catalyst architecture composed of Au single atoms decorated on the surface of NDPCs is reported. The introduction of Au single atoms creates active reaction sites, which are stabilized by the N species present in the NDPCs. The interaction within the as-prepared AuSAs-NDPCs catalysts thus enabled promising performance for the electrochemical NRR. Mechanistically, Au single sites and N or C species can act as frustrated Lewis pairs (FLPs) that enhance the electron donation and back-donation processes which activate N2 molecules. This work provides new opportunities for catalyst design aimed at efficient N2 fixation under ambient conditions by utilizing recycled electric energy.
The last topic, described in chapter 5, focused on the synthesis of dual heteroatom-doped porous carbon from simple precursors. The introduction of N and B heteroatoms leads to the formation of N-B motifs and frustrated Lewis pairs in a microporous architecture that is also rich in point defects. This can improve the adsorption strength of different reactants (N2 and HMF) and thus their activation. As a result, BNC-2 exhibits desirable electrochemical NRR and HMF oxidation performance. Gas adsorption experiments were used as a simple tool to elucidate the relationship between structure and catalytic activity. This work provides novel and deep insights into the rational design and the origin of activity of metal-free electrocatalysts, enables a physically viable discussion of the active motifs, and points towards further applications.
Throughout this thesis, the ubiquitous problems of low selectivity and activity in the electrochemical NRR are tackled by designing highly efficient porous carbon-based catalysts and exploring their catalytic mechanisms. The structure-performance relationships and the mechanisms of activating the relatively inert N2 molecule are revealed by experimental results and DFT calculations. These fundamental insights pave the way for the future optimal design and targeted improvement of porous carbon-based NRR catalysts, as well as for the study of new N2 activation modes.
Medical imaging plays an important role in disease diagnosis, treatment planning, and clinical monitoring. One of the major challenges in medical image analysis is imbalanced training data, in which the class of interest is much rarer than the other classes. Canonical machine learning algorithms assume that the numbers of samples from the different classes in the training dataset are roughly similar, i.e., balanced. Training a machine learning model on an imbalanced dataset therefore introduces unique challenges to the learning problem.
A model learned from imbalanced training data is biased towards the high-frequency samples. The predictions of such networks have low sensitivity and high precision. In medical applications, the cost of misclassifying the minority class can be much higher than the cost of misclassifying the majority class. For example, the risk of not detecting a tumor can be much higher than that of referring a healthy subject to a doctor. This Ph.D. thesis introduces several deep learning-based approaches for handling class-imbalance problems in multiple tasks such as disease classification and semantic segmentation.
At the data level, the objective is to balance the data distribution by re-sampling the data space: we propose novel approaches to correct the internal bias towards low-frequency samples. These approaches include patient-wise batch sampling, complementary labels, and supervised as well as unsupervised minority oversampling using generative adversarial networks.
At the algorithm level, on the other hand, we modify the learning algorithm to alleviate the bias towards majority classes. In this regard, we propose different generative adversarial networks for cost-sensitive learning, ensemble learning, and mutual learning to deal with highly imbalanced imaging data.
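The core idea behind one common algorithm-level remedy, cost-sensitive learning, can be sketched as a class-weighted cross-entropy: errors on the rare (e.g. tumor) class cost more than errors on the majority class. The inverse-frequency weighting scheme and the numbers below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def class_weights(labels):
    """Inverse-frequency weights, normalized so they average to 1
    over the dataset."""
    classes, counts = np.unique(labels, return_counts=True)
    w = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), w.tolist()))

def weighted_cross_entropy(probs, labels, weights):
    """Mean cross-entropy where each sample's loss is scaled by its
    class weight."""
    eps = 1e-12
    p = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    w = np.array([weights[int(l)] for l in labels])
    return float(np.mean(-w * np.log(p)))

# 95% healthy (0) vs 5% tumor (1): the minority class gets weight 10.0
labels = np.array([0] * 95 + [1] * 5)
w = class_weights(labels)

# An uninformative classifier (p = 0.5 everywhere) as a baseline
probs = np.full((100, 2), 0.5)
loss = weighted_cross_entropy(probs, labels, w)
```

Because the weights average to 1, the weighted loss of the uninformative baseline equals the unweighted one; the weighting only redistributes the gradient pressure towards the minority class.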
We show evidence that the proposed approaches are applicable to different types and sizes of medical images across routine clinical tasks such as disease classification and semantic segmentation. The implemented algorithms achieved outstanding results in several medical imaging challenges.
The business model has emerged as a construct to understand how firms drive innovation through emerging technologies. It is defined as the ‘architecture of the firm’s value creation, delivery and appropriation mechanisms’ (Foss & Saebi, 2018, p. 5). The architecture is characterized by complex functional interrelations between activities that are conducted by various actors, some within and some outside of the firm. In other words, a firm’s value architecture is embedded within a wider system of actors that all contribute to the output of the value architecture.
The question of what drives innovation within this system and how the firm can shape and navigate this innovation is an essential question within innovation management research. This dissertation is a compendium of four individual research articles that examine how the design of a firm's value architecture can facilitate system-wide innovation in the context of Artificial Intelligence and Blockchain Technology. The first article studies how firms use Blockchain Technology to design a governance infrastructure that enables innovation within a platform ecosystem. The findings propose a framework for blockchain-enabled platform ecosystems that addresses the essential problem of opening the platform to allow for innovation while also ensuring that all actors get to capture their share of the value. The second article analyzes how German Artificial Intelligence startups design their business models. It identifies three distinct types of startups with different underlying business models. The third article aims to understand the role of a firm's value architecture during the socio-technical transition process of Artificial Intelligence. It identifies three distinct ways in which Artificial Intelligence startups create a shared understanding of the technology. The last article examines how corporate venture capital units configure value-adding services for their venture portfolios. It derives a taxonomy of different corporate venture capital types, driven by different strategic motivations.
Ultimately, this dissertation provides novel empirical insights into how a firm's value architecture determines its role within a wider system of actors and how that role enables the firm to facilitate innovation. In this way, it contributes to both the business model and the innovation management literature.
Business process management (BPM) deals with modeling, executing, monitoring, analyzing, and improving business processes. During execution, a process communicates with its environment to obtain relevant contextual information represented as events. Recent developments in big data and the Internet of Things (IoT) enable sources like smart devices and sensors to generate vast numbers of events, which can be filtered, grouped, and composed to trigger and drive business processes.
The industry standard Business Process Model and Notation (BPMN) provides several event constructs to capture the interaction possibilities between a process and its environment, e.g., to instantiate a process, to abort an ongoing activity in an exceptional situation, to take decisions based on the information carried by events, and to choose among alternative paths for further process execution. The specification of such interactions is termed event handling. However, in a distributed setup, the event sources are most often unaware of the status of process execution; an event is therefore produced irrespective of whether the process is ready to consume it. BPMN semantics does not support such scenarios and thus increases the chance of processes being delayed or deadlocked by missing event occurrences that might still be relevant.
The work in this thesis reviews the challenges and shortcomings of integrating real-world events into business processes, especially subscription management. The basic integration is achieved with an architecture consisting of a process modeler, a process engine, and an event processing platform. Further, points of subscription and unsubscription along the process execution timeline are defined for different BPMN event constructs. Semantic and temporal dependencies among event subscription, event occurrence, event consumption, and event unsubscription are considered. To this end, an event buffer with policies for updating the buffer, retrieving the most suitable event for the current process instance, and reusing events is discussed, which supports the issuing of early subscriptions.
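A minimal sketch of such an event buffer might look as follows; the policies shown (bounded capacity, newest-first retrieval, single consumption vs. reuse) are simplified stand-ins for the buffer configurations discussed in the text, and the event payload is invented for illustration.

```python
import time
from collections import deque

class EventBuffer:
    """Illustrative event buffer: with an early subscription in place,
    events arriving before the process is ready are retained here
    instead of being lost."""

    def __init__(self, capacity=100, reuse=False):
        self.buffer = deque(maxlen=capacity)  # oldest entries dropped first
        self.reuse = reuse  # may one event serve several process instances?

    def on_event(self, event):
        """Called by the event platform once the subscription is issued."""
        self.buffer.append((time.time(), event))

    def retrieve(self, newest_first=True):
        """Hand the most suitable buffered event to a process instance.
        Here 'most suitable' simply means newest (or oldest) arrival."""
        if not self.buffer:
            return None
        entry = self.buffer[-1] if newest_first else self.buffer[0]
        if not self.reuse:
            self.buffer.remove(entry)  # consume: event is used exactly once
        return entry[1]

buf = EventBuffer(reuse=False)
buf.on_event({"type": "TruckDelayed", "minutes": 30})  # arrives early
event = buf.retrieve()  # consumed when the process instance is ready
```

The retrieval and reuse flags correspond to the kind of policy choices whose effect on correct process execution the thesis formalizes via Petri nets.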
The Petri net mapping of the event handling model equips our approach with a translation of the semantics from a business process perspective. Two applications based on this formal foundation are presented to demonstrate the significance of different event handling configurations for correct process execution and the reachability of a process path. Prototype implementations show that realizing flexible event handling is feasible with minor extensions of off-the-shelf process engines and event platforms.
Back pain is a problem in adolescent athletes that affects postural control, an important requirement for physical and daily activities under both static and dynamic conditions. The one leg stance test and the star excursion balance test (SEBT) are effective in measuring static and dynamic postural control, respectively. These tests have been used in individuals with back pain, athletes, and non-athletes without first establishing their reliability. In addition, there is no published literature investigating dynamic posture in adolescent athletes with back pain using the SEBT. Therefore, the aim of the thesis was to assess deficits in postural control in adolescent athletes with and without back pain using a static (one leg stance) and a dynamic (SEBT) postural control test.
Adolescent athletes with and without back pain participated in the study. Static and dynamic postural control were tested using the one leg stance test and the SEBT, respectively. The reproducibility of both tests was established. Afterwards, it was determined whether there was an association between static and dynamic posture, using the displacement of the centre of pressure and the reach distance, respectively. Finally, it was investigated whether postural control differs between adolescent athletes with and without back pain in the one leg stance test and the SEBT.
Fair to excellent reliabilities were recorded for the static (one leg stance) and dynamic (star excursion balance) postural control tests in the subjects of interest. No association was found between variables of the static and dynamic tests for adolescent athletes with and without back pain. Also, no statistically significant difference was obtained between adolescent athletes with and without back pain in either the static or the dynamic postural control test.
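Reliability labels such as "fair" or "excellent" are conventionally derived from intraclass correlation coefficients. A minimal sketch of ICC(2,1) on made-up test-retest data might look as follows; the data and interpretation thresholds are illustrative, not taken from the study.

```python
import numpy as np

def icc_2_1(ratings):
    """ICC(2,1) from a two-way ANOVA decomposition.
    ratings: (n subjects x k sessions) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-session means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two hypothetical measurement sessions of a sway parameter for 5 athletes
sway = np.array([[1.2, 1.3], [2.1, 2.0], [0.9, 1.0], [1.8, 1.7], [1.4, 1.5]])
icc = icc_2_1(sway)  # close to 1: subjects rank consistently across sessions
```

Common verbal labels place ICC values above roughly 0.75 in the "excellent" range and 0.4-0.75 in the "fair to good" range.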
The one leg stance test and the SEBT can thus be used as measures of postural control in adolescent athletes with and without back pain. Although static and dynamic postural control might be related, adolescent athletes with and without back pain might use different mechanisms to control their static and dynamic posture. Consequently, static and dynamic postural control in adolescent athletes with back pain was not different from that in athletes without back pain. These outcome measures might not be challenging enough to detect deficits in postural control in our study group of interest.
The African weakly electric fish genus Campylomormyrus is a well-investigated group of the species-rich family Mormyridae. These fish are able to generate species-specific electric organ discharges (EODs) which vary in their waveform characteristics, including polarity, phase number, and duration. In mormyrid species, EODs are used for communication, species discrimination, and mate recognition, and it is thought that they serve as a pre-zygotic isolation mechanism driving sympatric speciation by promoting assortative mating. The EOD diversification, its evolutionary effects, and the link to species divergence have been examined histologically, behaviorally, and genetically. Molecular analyses are a major tool to identify species and their phenotypic traits by studying the underlying genes. The genetic variability between species further provides information from which evolutionary processes, such as speciation, can be deduced. Hence, the ultimate aim of this study is the investigation of genetic variability within the genus Campylomormyrus to better understand its sympatric speciation and comprehend its evolutionary drivers. In order to extend the current knowledge and gain more insights into the genus's species history, karyological and genomic approaches are pursued with respect to species differences. Previous studies have shown that species with different EOD durations have specific gene expression patterns and single nucleotide polymorphisms (SNPs). As EODs play a crucial role during the evolution of Campylomormyrus species, the identification of the underlying genes may suggest how the EOD diversity evolved and whether this trait is based on a complex network of genetic processes or is regulated by only a few genes. The results obtained in this study suggest that genes with non-synonymous SNPs exclusive to C. tshokwe, a species with an elongated EOD, frequently have functions associated with tissue morphogenesis and transcriptional regulation.
Therefore, it is proposed that these processes likely co-determine the EOD characteristics of Campylomormyrus species. Furthermore, genome-wide analyses confirm the genetic differences among most Campylomormyrus species. In contrast, the same analyses reveal genetic similarity among individuals of the alces-complex despite their different EOD waveforms. It is therefore hypothesized that this combination of low genetic variability and high EOD diversity represents incipient sympatric speciation. The karyological description of a Campylomormyrus species provides crucial information about chromosome number and shape. Its diploid chromosome number of 2n=48 supports the conservation of this trait within Mormyridae. Differences were detected in the number of bi-armed chromosomes, which is unusually high compared to other mormyrid species. This high number can be due to chromosome rearrangements, which could cause genetic incompatibility and reproductive isolation. Hence, an alternative hypothesis regarding the processes causing sympatric speciation is that chromosome differences are involved in the speciation of Campylomormyrus by acting as a postzygotic isolation mechanism. In summary, the karyological and genomic investigations conducted in this study contribute to the knowledge about Campylomormyrus species, resolve some existing ambiguities such as phylogenetic relationships, and raise new hypotheses explaining the sympatric speciation of these African weakly electric fish. This study provides a basis for future genomic research aimed at a complete picture of the causes and results of evolutionary processes in Campylomormyrus.
Ultrafast magnetisation dynamics have been investigated intensely for two decades. The recovery process after demagnetisation, however, has rarely been studied experimentally and discussed in detail. The focus of this work lies on the investigation of the magnetisation on long timescales after laser excitation. It combines two ultrafast time-resolved methods to study the relaxation of the magnetic and lattice systems after excitation with a high-fluence ultrashort laser pulse. The magnetic system is investigated by time-resolved measurements of the magneto-optical Kerr effect; the experimental setup was implemented in the scope of this work. The lattice dynamics were obtained with ultrafast X-ray diffraction. The combination of both techniques leads to a better understanding of the mechanisms involved in magnetisation recovery from a non-equilibrium condition. Three different groups of samples are investigated in this work: thin Nickel layers capped with nonmagnetic materials, a continuous sample of the ordered L10 phase of Iron Platinum, and a sample consisting of Iron Platinum nanoparticles embedded in a carbon matrix. The study of the remagnetisation reveals a general trend for all of the samples: the remagnetisation process can be described by two time dependences, a first exponential recovery that slows down with an increasing amount of energy absorbed in the system, until an approximately linear time dependence is observed, followed by a second exponential recovery. In the case of low-fluence excitation, the first recovery is faster than the second. With increasing fluence, the first recovery slows down and can be described by a linear function. If the pump-induced temperature increase in the sample is sufficiently high, a phase transition to a paramagnetic state is observed. In the remagnetisation process, the transition into the ferromagnetic state is characterised by a distinct transition between the linear and exponential recovery.
From the combination of the transient lattice temperature Tp(t) obtained from ultrafast X-ray measurements and the magnetisation M(t) gained from magneto-optical measurements, we construct transient magnetisation-versus-temperature relations M(Tp). If the lattice temperature remains below the Curie temperature, the remagnetisation curve M(Tp) is linear and stays below the equilibrium M(T) curve in the continuous transition metal layers. When the sample is heated above the phase transition, the remagnetisation converges towards the static temperature dependence. For the granular Iron Platinum sample, the M(Tp) curves for different fluences coincide, i.e., the remagnetisation follows a similar path irrespective of the initial laser-induced temperature jump.
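The construction of M(Tp) can be sketched as follows: both transients are interpolated onto a common time-delay grid and combined point by point into a parametric curve. The curves below are synthetic stand-ins, not measured data.

```python
import numpy as np

# Synthetic stand-ins for the two measured transients (illustrative only):
# a cooling lattice temperature from X-ray diffraction and a recovering
# magnetisation from magneto-optical Kerr measurements, sampled on
# different delay grids.
t_xrd = np.linspace(0, 1000, 50)         # X-ray delay times (ps)
Tp = 300 + 200 * np.exp(-t_xrd / 400)    # lattice temperature Tp(t) (K)

t_moke = np.linspace(0, 1000, 200)       # MOKE delay times (ps)
M = 1 - 0.8 * np.exp(-t_moke / 300)      # normalized magnetisation M(t)

# Interpolate both transients onto a common time grid
t = np.linspace(0, 1000, 100)
Tp_t = np.interp(t, t_xrd, Tp)
M_t = np.interp(t, t_moke, M)

# Each time step yields one (temperature, magnetisation) point of the
# parametric remagnetisation curve M(Tp)
M_of_Tp = np.column_stack([Tp_t, M_t])
```

Plotting the second column against the first traces the transient M(Tp) relation, which can then be compared with the static M(T) curve.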
In the era of social networks, the Internet of Things, and location-based services, many online services produce huge amounts of data that carry valuable objective information, such as geographic coordinates and timestamps. In combination with a textual parameter, these characteristics pose the challenge of discovering geospatiotemporal knowledge, which requires efficient methods for clustering and pattern mining in the spatial, temporal, and textual domains.
In this thesis, we address the challenge of providing methods and frameworks for geospatiotemporal data analytics. As an initial step, we address the challenges of geospatial data processing: data gathering, normalization, geolocation, and storage. That initial step is the foundation for tackling the next challenge, geospatial clustering. Its first part is to design a method for online clustering of georeferenced data; this algorithm can be used as a server-side clustering algorithm for online maps that visualize massive georeferenced data. As the second part, we develop an extension of this method that additionally considers the temporal aspect of the data: a density- and intensity-based geospatiotemporal clustering algorithm with a fixed distance and time radius.
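A heavily simplified, single-pass sketch of clustering with a fixed distance and time radius might look as follows; it illustrates the idea of combining a spatial and a temporal neighborhood, not the algorithm proposed in the thesis, and all radii and coordinates are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    lat: float
    lon: float
    t: float        # timestamp, e.g. unix seconds
    size: int = 1

def cluster_stream(points, eps_deg=0.01, eps_sec=3600):
    """Online pass: a point joins an existing cluster if it lies within
    both the distance radius and the time radius of the cluster centroid;
    otherwise it seeds a new cluster."""
    clusters = []
    for lat, lon, t in points:
        for c in clusters:
            if (abs(lat - c.lat) <= eps_deg and abs(lon - c.lon) <= eps_deg
                    and abs(t - c.t) <= eps_sec):
                # running centroid update keeps the pass online
                c.lat += (lat - c.lat) / (c.size + 1)
                c.lon += (lon - c.lon) / (c.size + 1)
                c.t += (t - c.t) / (c.size + 1)
                c.size += 1
                break
        else:
            clusters.append(Cluster(lat, lon, t))
    return clusters

# Two events close in space and time, plus one far away in time
pts = [(52.40, 13.06, 0), (52.401, 13.061, 600), (52.40, 13.06, 90000)]
result = cluster_stream(pts)
```

A single pass over the stream, as here, is what makes such a scheme usable server-side for map visualization; a production variant would use proper geodesic distances rather than degree differences.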
Each version of the clustering algorithm has its own use case that we show in the thesis.
In the next chapter of the thesis, we look at spatiotemporal analytics from the perspective of sequential rule mining. We design and implement a framework that transforms data into textual geospatiotemporal data, i.e., data that contain geographic coordinates, time, and textual parameters. In this way, we address the challenge of applying pattern and rule mining algorithms in the geospatiotemporal space. As an applicable case study, we propose spatiotemporal crime analytics: the discovery of spatiotemporal patterns of crime in publicly available crime data.
The second part of the thesis is dedicated to applications and use case studies. We design and implement an application that uses the proposed clustering algorithms to discover knowledge in data. Jointly with the application, we propose use case studies for the analysis of georeferenced data in terms of situational and public safety awareness.
Due to advances in science and technology towards smaller and more powerful processing units, the fabrication of micrometer-sized machines for different tasks is becoming more and more feasible. Such micro-robots could revolutionize the medical treatment of diseases and could assist in working on other small machines. Nevertheless, scaling down robots and other devices is a challenging task and will probably remain limited in the near future. Over the past decade, the concept of bio-hybrid systems has proved to be a promising approach to advancing the further development of micro-robots. Bio-hybrid systems combine biological cells with artificial components, thereby benefiting from the functionality of living cells. Cell-driven micro-transport is one of the most prominent applications in this emerging field. So far, micrometer-sized cargo has been successfully transported by means of swimming bacterial cells, whereas the potential of motile adherent cells as transport systems has largely remained unexplored.
This thesis concentrates on the social amoeba Dictyostelium discoideum as a potential candidate for an amoeboid bio-hybrid transport system. The use of this model organism comes with several advantages. Due to the unspecific properties of Dictyostelium adhesion, a wide range of cargo materials can be used for transport. As amoeboid cells exceed bacterial cells in size by an order of magnitude, the size of an object carried by a single cell can also be much larger for an amoeba. Finally, it is possible to guide the cell-driven transport based on the chemotactic behavior of the amoeba. Since the cells undergo a developmentally induced chemotactic aggregation, cargo could be assembled into a cluster in a self-organized manner. It is also possible to impose an external chemical gradient to guide the amoeboid transport system to a desired location.
To establish Dictyostelium discoideum as a possible candidate for bio-hybrid transport systems, this thesis will first investigate the movement of single cells. Secondly, the interaction of cargo and cells will be studied. Finally, a proof of concept will be conducted, showing that the chemotactic behavior can be exploited to transport cargo either in a self-organized manner or by means of an external chemical source.
This thesis puts the citizen-state interaction at its center. Building on a comprehensive model incorporating various perspectives on this interaction, I derive selected research gaps, which the three articles comprising this thesis address. A focal role is played by citizens' administrative literacy, the competences and knowledge necessary to interact successfully with public organizations. The first article elaborates on the different dimensions of administrative literacy and develops a survey instrument to assess them. The second study shows that public employees change their behavior according to the competences that citizens display during public encounters: they treat preferentially those citizens who are well prepared and able to persuade them of their application's potential. Such citizens signal a higher chance of meeting bureaucratic success criteria, which leads to cream-skimming behavior by the employees. The third article examines the dynamics of employees' communication strategies when recovering from a service failure. The study finds that different explanation strategies yield different effects on clients' frustration: while accepting responsibility and explaining the reasons for a failure alleviates frustration and anger, refusing responsibility has no effect or even reinforces the client's frustration. The results emphasize the different dynamics that characterize citizen-state interactions and how these shape their short- and long-term outcomes.
Frailty and sarcopenia share underlying characteristics such as loss of muscle mass, low muscle strength, and low physical performance. Frailty and sarcopenia criteria are mainly assessed by imaging parameters and functional examinations; however, these measures can have limitations in clinical settings. Therefore, finding suitable biomarkers that reflect a catabolic muscle state, e.g. the elevated muscle protein turnover suggested in frailty, is becoming more relevant for frailty diagnosis and risk assessment.
3-Methylhistidine (3-MH) and its ratios 3-MH-to-creatinine (3-MH/Crea) and 3-MH-to-estimated glomerular filtration rate (3-MH/eGFR) are under discussion as possible biomarkers of muscle protein turnover and might support the diagnosis of frailty. However, there is some skepticism about the reliability of 3-MH measures, since confounders such as meat and fish intake might influence 3-MH plasma concentrations. Therefore, the influence of dietary habits and of an intervention with white meat on plasma 3-MH was determined in young, healthy individuals. In another study, the cross-sectional associations of plasma 3-MH, 3-MH/Crea and 3-MH/eGFR with frailty status (robust, pre-frail and frail) were investigated.
Oxidative stress (OS) is a possible contributor to frailty development, and high OS levels as well as low micronutrient levels are associated with the frailty syndrome. However, data on simultaneous measures of OS biomarkers together with micronutrients are lacking in studies including frail, pre-frail and robust individuals. Therefore, cross-sectional associations of protein carbonyls (PrCarb), 3-nitrotyrosine (3-NT) and several micronutrients with the frailty status were determined.
A validated UPLC-MS/MS (ultra-performance liquid chromatography tandem mass spectrometry) method for the simultaneous quantification of 3-MH and 1-MH (1-methylhistidine, a marker of meat and fish consumption) was presented and used for further analyses. Omnivores showed higher plasma 3-MH and 1-MH concentrations than vegetarians, and a white-meat intervention resulted in an increase in plasma 3-MH, 3-MH/Crea, 1-MH and 1-MH/Crea in omnivores. The elevated 3-MH and 3-MH/Crea levels declined significantly within 24 hours after this white-meat intervention. Thus, 3-MH and 3-MH/Crea might be used as biomarkers of muscle protein turnover provided that subjects have not consumed meat during the 24 hours prior to blood sampling.
Plasma 3-MH, 3-MH/Crea and 3-MH/eGFR were higher in frail than in robust individuals. Additionally, these biomarkers were positively associated with frailty in linear regression models, and higher odds of being frail were found for every increase in 3-MH and 3-MH/eGFR quintile in multivariable logistic regression models adjusted for several confounders. This was the first study using 3-MH/eGFR, and it is concluded that plasma 3-MH, 3-MH/Crea and 3-MH/eGFR might be used to identify frail individuals, or individuals at higher risk of becoming frail, and that there might be threshold concentrations or ratios to support these diagnoses.
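The per-quintile odds described above presuppose converting continuous biomarker concentrations into quintiles before they enter the logistic model. A minimal rank-based sketch of that step, as my own illustration; the study's exact handling of ties and covariates is not specified here:

```python
def quintile(values):
    """Assign each observation its quintile (1 = lowest fifth, 5 = highest)."""
    n = len(values)
    order = sorted(range(n), key=values.__getitem__)
    ranks = [0] * n
    for rank, idx in enumerate(order):
        ranks[idx] = rank
    # rank r (0-based) of n observations falls into fifth floor(5*r/n) + 1
    return [5 * r // n + 1 for r in ranks]
```

In the regression, the resulting quintile (1-5) is then used as an ordinal predictor, so the reported odds ratio applies per one-quintile increase.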
In multivariate linear models, higher vitamin D3, lutein/zeaxanthin, γ-tocopherol, α-carotene, β-carotene, lycopene and β-cryptoxanthin concentrations, and additionally lower PrCarb concentrations, were found in robust compared with frail individuals. In multivariate logistic regression models, frail subjects had higher odds than robust individuals of being in the lowest rather than the highest tertile for vitamin D3, α-tocopherol, α-carotene, β-carotene, lycopene, lutein/zeaxanthin, and β-cryptoxanthin, and of being in the highest rather than the lowest tertile for PrCarb. Thus, a low micronutrient status together with a high PrCarb status is associated with pre-frailty and frailty.
The increasing age of the worldwide population is a major contributor to the rising prevalence of major pathologies such as type 2 diabetes, which is mediated by massive insulin resistance and a decline in functional beta-cell mass and is highly associated with the elevated incidence of obesity. In the present dissertation, the impact of aging, under physiological conditions and in combination with diet-induced metabolic stress, on the characteristics of pancreatic islets and beta-cells was therefore investigated, with a focus on functionality and structural integrity.
Obesity, followed by systemic inflammation and peripheral insulin resistance, develops over time and initiates metabolic stress conditions; it is primarily induced by malnutrition due to chronic and excessive intake of high-caloric diets containing large amounts of carbohydrates and fats. Elevated insulin demands trigger an adaptive response of beta-cell mass expansion through increased proliferation, but prolonged stress conditions drive beta-cell failure and loss. Aging has also been shown to affect beta-cell functionality and morphology, in particular through proliferative limitations. However, most studies in rodents were performed under beta-cell-challenging conditions, such as high-fat-diet interventions. Thus, in the first part of the thesis (publication I), age-related alterations of pancreatic islets and beta-cells were characterized using plasma samples and pancreatic tissue sections of standard-diet-fed C57BL/6J wild-type mice in several age groups (2.5, 5, 10, 15 and 21 months).
Aging was accompanied by a decreased but sustained islet proliferative potential as well as an induction of cellular senescence. This was associated with progressive islet expansion to maintain normoglycemia throughout the lifespan. Moreover, beta-cell function and mass were not impaired, although the formation and accumulation of AGEs occurred, located predominantly in the islet vasculature and accompanied by an induction of oxidative and nitrosative (redox) stress.
The nutritional behavior throughout the human lifespan, however, is not restricted to a balanced diet. This emphasizes the importance of investigating malnutrition through the intake of high-energy diets, which induces metabolic stress conditions that, synergistically with aging, might amplify the detrimental effects on the endocrine pancreas. Diabetes-prone NZO mice aged 7 weeks were fed a carbohydrate-restricted regimen for different periods (young mice: 11 weeks; middle-aged mice: 32 weeks), followed by a carbohydrate intervention for 3 weeks. This offered the opportunity to distinguish the effects of diet-induced metabolic stress at different ages on the functionality and integrity of pancreatic islets and their beta-cells (publication II, manuscript).
Interestingly, while young NZO mice exhibited massive hyperglycemia in response to diet-induced metabolic stress, accompanied by beta-cell dysfunction and apoptosis, middle-aged animals revealed only moderate hyperglycemia owing to the maintenance of functional beta-cells. The loss of functional beta-cell mass in islets of young mice was associated with reduced expression of the PDX1 transcription factor, increased endocrine AGE formation and related redox stress, as well as TXNIP-dependent induction of the mitochondrial death pathway. Although the amounts of secreted insulin and the proliferative potential were comparable in both age groups, islets of middle-aged mice exhibited sustained PDX1 expression, almost regular insulin secretory function, increased capacity for cell-cycle progression, and a maintained redox potential.
The results of the present thesis indicate a loss of functional beta-cell mass in young diabetes-prone NZO mice, driven by redox imbalance and the induction of apoptotic signaling pathways. In contrast, aging under physiological conditions in C57BL/6J mice, and in combination with diet-induced metabolic stress in NZO mice, does not appear to have adverse effects on the functionality and structural integrity of pancreatic islets and beta-cells, which is associated with adaptive responses to changing metabolic demands. However, considering the detrimental effects of aging, it has to be assumed that the compensatory potential of the mice might be exhausted at a later point in time, finally leading to a loss of functional beta-cell mass and the onset and progression of type 2 diabetes.
The polygenic, diabetes-prone NZO mouse is a suitable model for the investigation of human obesity-associated type 2 diabetes. However, mice at advanced age show an attenuated diabetic phenotype or do not respond to the dietary stimuli. This might be explained by the middle age of the mice, corresponding to a human age of about 38-40 years, at which the compensatory mechanisms of pancreatic islets and beta-cells towards metabolic stress conditions are presumably more active.
Analysis of supramolecular assemblies of NE81, the first lamin protein in a non-metazoan organism
(2019)
Nuclear lamins are nucleus-specific intermediate filaments forming a network located at the inner nuclear membrane of the nuclear envelope. Together with proteins of the inner nuclear membrane, they form the nuclear lamina, which regulates nuclear shape and gene expression, among other functions. The amoebozoan Dictyostelium NE81 protein is a suitable candidate for an evolutionarily conserved lamin protein in this non-metazoan organism. It shares the domain organization of metazoan lamins and fulfils major lamin functions in Dictyostelium. Moreover, field-emission scanning electron microscopy (feSEM) images of NE81 expressed in Xenopus oocyte nuclei revealed filamentous structures with an overall appearance highly reminiscent of that of metazoan Xenopus lamin B2. For the classification as a lamin-like or a bona fide lamin protein, a better understanding of the supramolecular NE81 structure was necessary. Yet, NE81 carrying a large N-terminal GFP-tag turned out to be an unsuitable source for protein isolation and characterization; GFP-NE81 expressed in Dictyostelium NE81 knock-out cells exhibited an abnormal distribution, an indicator of inaccurate assembly of GFP-tagged NE81. Hence, a shorter 8×HisMyc construct was the tag of choice to investigate the formation and structure of NE81 assemblies. One strategy was the structural analysis of NE81 in situ at the outer nuclear membrane in Dictyostelium cells; NE81 without a functional nuclear localization signal (NLS) forms assemblies at the outer face of the nucleus. Ultrastructural feSEM pictures of NE81ΔNLS nuclei showed a few filaments of the expected size but no repetitive filamentous structures. This strategy should also be established for metazoan lamins in order to facilitate their structural analysis. However, heterologously expressed Xenopus and C. elegans lamins showed no uniform localization at the outer nuclear envelope of Dictyostelium, and hence no further ultrastructural analysis was undertaken.
For in vitro assembly experiments a Dictyostelium mutant was generated, expressing NE81 without the NLS and the membrane-anchoring isoprenylation site (HisMyc-NE81ΔNLSΔCLIM). The cytosolic NE81 clusters were soluble at high ionic strength and were purified from Dictyostelium extracts using Ni-NTA Agarose. Widefield immunofluorescence microscopy, super-resolution light microscopy and electron microscopy images of purified NE81 showed its capability to form filamentous structures at low ionic strength, as described previously for metazoan lamins. Introduction of a phosphomimetic point mutation (S122E) into the CDK1-consensus sequence of NE81 led to disassembled NE81 protein in vivo, which could be reversibly stimulated to form supramolecular assemblies by blue light exposure.
The results of this work reveal that NE81 has to be considered a bona fide lamin, since it is able to form filamentous assemblies. Furthermore, they highlight Dictyostelium as a non-mammalian model organism with a well-characterized nuclear envelope containing all relevant protein components known in animal cells.
Introduction: Cystic fibrosis (CF) is a genetic disease that disrupts the function of an epithelial surface anion channel, CFTR (cystic fibrosis transmembrane conductance regulator). Impairment of this channel leads to inflammation and infection in the lung, causing the majority of morbidity and mortality. However, CF is a multiorgan disease affecting many tissues, including vascular smooth muscle. Studies have revealed that young people with cystic fibrosis who lack inflammation and infection still demonstrate vascular endothelial dysfunction, as measured by flow-mediated dilation (FMD). In other disease cohorts, e.g. diabetic and obese patients, endurance exercise interventions have been shown to improve or attenuate this impairment. However, long-term exercise interventions are risky as well as costly in terms of time and resources. Nevertheless, emerging research has correlated the acute effects of exercise with its long-term benefits and advocates studying the acute effects of exercise on FMD prior to longitudinal studies. The acute effects of exercise on FMD have not previously been examined in young people with CF, but could yield insights into the potential benefits of long-term exercise interventions.
The aims of these studies were to 1) develop and test the reliability of the FMD method and its applicability to study acute exercise effects; 2) compare baseline FMD and the acute exercise effect on FMD between young people with and without CF; and 3) explore associations between the acute effects of exercise on FMD and demographic characteristics, physical activity levels, lung function, maximal exercise capacity or inflammatory hsCRP levels.
Methods: Thirty young volunteers (10 people with CF, 10 non-CF and 10 non-CF active matched controls) between the ages of 10 and 30 years completed blood draws, pulmonary function tests, maximal exercise capacity tests and baseline FMD measurements, before returning approximately 1 week later to perform 30 min of constant-load training at 75% HRmax. FMD measurements were taken before, immediately after, 30 minutes after and 1 hour after the constant-load training. ANOVAs and repeated-measures ANOVAs were employed to explore differences between groups and timepoints, respectively. Linear regression was used to assess correlations between FMD and demographic characteristics, physical activity levels, lung function, maximal exercise capacity and inflammatory hsCRP levels. For all comparisons, statistical significance was set at α = 0.05.
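The between-group comparisons described here rest on the one-way ANOVA F statistic. A minimal stdlib sketch of that computation, purely illustrative and not the study's actual statistics software:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across k independent groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # between-group sum of squares (df = k - 1)
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # within-group sum of squares (df = n - k)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))
```

With three groups (CF, non-CF, non-CF active), a large F indicates that between-group variation in FMD dominates within-group variation; the p-value then follows from the F distribution with (k − 1, n − k) degrees of freedom.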
Results: Young people with CF presented with decreased lung function and maximal exercise capacity compared with matched controls. Baseline FMD was also significantly decreased in the CF group (CF: 5.23% vs. non-CF: 8.27% vs. non-CF active: 9.12%). Immediately post-training, FMD was significantly attenuated (by approximately 40%) in all groups, with the CF group still demonstrating the lowest FMD. Follow-up measurements of FMD revealed a slow recovery towards baseline values 30 min post-training and improvements in the CF and non-CF active groups 60 min post-training. Linear regression revealed significant correlations of maximal exercise capacity (VO2 peak) and BMI with FMD immediately post-training.
Conclusion: These new findings confirm that CF vascular endothelial dysfunction can be acutely modified by exercise and will help underline the importance of exercise in CF populations. The potential benefits of long-term exercise interventions on vascular endothelial dysfunction in young people with CF warrant further investigation.
Selenite pseudomorphs
(2019)
STERILE APETALA (SAP) has been known as an essential regulator of flower development for over 20 years. Loss of SAP function in the model plant Arabidopsis thaliana is associated with a reduction in floral organ number, size and fertility. In accordance with the function of SAP during early flower development, its spatial expression in flowers is confined to meristematic stages and to developing ovules. To date, however, despite extensive research, the molecular function of SAP and the regulation of its spatio-temporal expression remain elusive.
In this work, amino acid sequence analysis and homology modeling revealed that SAP belongs to the rare class of plant F-box proteins with C-terminal WD40 repeats. In opisthokonts, this type of F-box protein constitutes the substrate-binding subunit of SCF complexes, which catalyze the ubiquitination of proteins to initiate their proteasomal degradation. With LC-MS/MS-based protein complex isolation, the interaction of SAP with major SCF complex subunits was confirmed. Additionally, candidate substrate proteins, such as the growth repressors PEAPOD 1 and 2 (PPD1/2), were revealed during early stages of flower development. INDOLE-3-BUTYRIC ACID RESPONSE 5 (IBR5) was also identified among the putative interactors. Genetic analyses indicated that, unlike the substrate proteins, IBR5 is required for SAP function. Protein complex isolation together with transcriptome profiling emphasized that the SCFSAP complex integrates multiple biological processes, such as proliferative growth, vascular development, hormonal signaling and reproduction. Phenotypic analysis of sap mutant and SAP-overexpressing plants positively correlated SAP function with plant growth during reproductive and vegetative development.
Furthermore, to elaborate on the transcriptional regulation of SAP, publicly available ChIP-seq data of key floral homeotic proteins were reanalyzed. Here, it was shown that the MADS-domain transcription factors APETALA 1 (AP1), APETALA 3 (AP3), PISTILLATA (PI), AGAMOUS (AG) and SEPALLATA 3 (SEP3) bind to the SAP locus, which indicates that SAP is expressed in a floral organ-specific manner. Reporter gene analyses in combination with CRISPR/Cas9-mediated deletion of putative regulatory regions further demonstrated that the intron contains major regulatory elements of SAP in Arabidopsis thaliana.
In conclusion, these data indicate that SAP is a pleiotropic developmental regulator that acts through tissue-specific destabilization of proteins. The presumed transcriptional regulation of SAP by the floral MADS-domain transcription factors could provide a missing link between the specification of floral organ identity and floral organ growth pathways.
Cellulose derived polymers
(2019)
Plastics such as polyethylene, polypropylene, and polyethylene terephthalate are part of our everyday lives in the form of packaging, household goods, electrical insulation, etc. These polymers are non-degradable and create many environmental problems and public health concerns. Additionally, they are produced from finite fossil resources. With the continuous utilization of these limited resources, it is important to look towards renewable feedstocks and, ideally, biodegradability of the produced polymers. Although many bio-based polymers are known, such as polylactic acid, polybutylene succinate adipate or polybutylene succinate, none have yet shown the promise of replacing conventional polymers like polyethylene, polypropylene and polyethylene terephthalate. Cellulose is one of the most abundant renewable resources produced in nature. It can be transformed into various small molecules, such as sugars, furans, and levoglucosenone. The aim of this research is to use these cellulose-derived molecules for the synthesis of polymers.
Acid-treated cellulose was subjected to thermal pyrolysis to obtain levoglucosenone, which was reduced to levoglucosenol. Levoglucosenol was polymerized, for the first time, by ring-opening metathesis polymerization (ROMP), yielding polymers with high molar masses of up to ~150 kg/mol. Poly(levoglucosenol) is thermally stable up to ~220 ℃, amorphous, and exhibits a relatively high glass transition temperature of ~100 ℃. It can be converted into a transparent film resembling common plastic and was found to degrade in a moist acidic environment. This means that poly(levoglucosenol) may find use as an alternative to conventional plastics such as polystyrene.
Levoglucosenol was also converted into levoglucosenyl methyl ether, which was polymerized by cationic ring-opening metathesis polymerization (CROP). Polymers were obtained with molar masses up to ~36 kg/mol. These polymers are thermally stable up to ~220 ℃ and are semi-crystalline thermoplastics, having a glass transition temperature of ~35 ℃ and melting transition of 70-100 ℃. Additionally, the polymers underwent cross-linking, hydrogenation and thiol-ene click chemistry.
Continuous insight into biological processes has led to the development of large-scale, mechanistic systems biology models of pharmacologically relevant networks. While these models are typically designed to study the impact of diverse stimuli or perturbations on multiple system variables, the focus in pharmacological research is often on a specific input, e.g., the dose of a drug, and a specific output related to the drug effect or response in terms of some surrogate marker.
To study a chosen input-output pair, the complexity of the interactions as well as the size of the models hinder easy access to and understanding of the details of the input-output relationship.
The objective of this thesis is the development of a mathematical approach, specifically a model reduction technique, that allows (i) quantifying the importance of the different state variables for a given input-output relationship, and (ii) reducing the dynamics to its essential features, allowing for a physiological interpretation of state variables as well as parameter estimation in the statistical analysis of clinical data. We develop a model reduction technique in a control-theoretic setting by first defining a novel type of time-limited controllability and observability gramians for nonlinear systems. We then show the superiority of the time-limited generalised gramians for nonlinear systems in the context of balanced truncation for a benchmark system from control theory.
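To fix ideas with the case I am certain of (the thesis itself targets nonlinear systems, and its generalisation is not reproduced here), the classical time-limited gramians for a linear time-invariant system with dynamics x' = Ax + Bu and output y = Cx simply restrict the usual integrals to a finite horizon [0, T]:

```latex
W_c(T) = \int_0^{T} e^{A t}\, B B^{\top} e^{A^{\top} t}\,\mathrm{d}t,
\qquad
W_o(T) = \int_0^{T} e^{A^{\top} t}\, C^{\top} C\, e^{A t}\,\mathrm{d}t .
```

W_c(T) measures how strongly the input can excite each state direction within the horizon, and W_o(T) how strongly each state direction contributes to the output; balanced truncation then discards directions that score low on both.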
The concept of time-limited controllability and observability gramians is subsequently used to introduce a state and time-dependent quantity called the input-response (ir) index that quantifies the importance of state variables for a given input-response relationship at a particular time.
We subsequently link our approach to sensitivity analysis, thus enabling, for the first time, the use of sensitivity coefficients for state-space reduction. The sensitivity-based ir-indices are given as a product of two sensitivity coefficients. This allows not only for a computationally more efficient calculation but also for a clear distinction between the extent to which the input impacts a state variable and the extent to which a state variable impacts the output.
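Schematically, and only as one plausible reading of the description above (the thesis's exact definition is not reproduced here), the ir-index of state x_k factors into an input-to-state and a state-to-output sensitivity:

```latex
\mathrm{ir}_k(t) \;\sim\;
\underbrace{\frac{\partial x_k(t)}{\partial u}}_{\text{input} \to \text{state}}
\cdot
\underbrace{\frac{\partial y}{\partial x_k(t)}}_{\text{state} \to \text{output}} .
```

The factorisation makes the two roles of a state variable separable: how much the input moves it, and how much moving it moves the output.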
The ir-indices give insight into the coordinated action of specific state variables for a chosen input-response relationship.
Our model reduction technique results in reduced models that still allow for a mechanistic interpretation in terms of the quantities/state variables of the original system, which is a key requirement in the fields of systems pharmacology and systems biology and distinguishes the reduced models from so-called empirical drug effect models. The ir-indices are explicitly defined with respect to a reference trajectory and are thereby dependent on the initial state, which is an important feature of the measure. This is demonstrated for an example from the field of systems pharmacology, showing that the reduced models are very informative in their ability to detect (genetic) deficiencies in certain physiological entities. A comparison of our novel model reduction technique with existing techniques shows its superiority.
The novel input-response index as a measure of the importance of state variables provides a powerful tool for understanding the complex dynamics of large-scale systems in the context of a specific drug-response relationship. Furthermore, the indices provide a means for a very efficient model order reduction and, thus, an important step towards translating insight from biological processes incorporated in detailed systems pharmacology models into the population analysis of clinical data.
Infection on the move
(2019)
Movement plays a major role in shaping population densities and contact rates among individuals, two factors that are particularly relevant for disease outbreaks. Although differences in movement behaviour due to individual characteristics of the host and heterogeneity in landscape structure are likely to have considerable consequences for disease dynamics, these mechanisms are neglected in most epidemiological studies. Therefore, developing a general understanding of how the interaction of movement behaviour and spatial heterogeneity shapes host densities, contact rates and ultimately pathogen spread is a key question in ecological and epidemiological research.
In my thesis, I address this gap using both theoretical and empirical modelling approaches. In the theoretical part of my thesis, I investigated bottom-up effects of individual movement behaviour and landscape structure on host density, contact rates, and ultimately disease dynamics. I extended an established agent-based model that simulates key ecological and epidemiological processes to incorporate explicit movement of host individuals and landscape complexity. Neutral landscape models are a powerful basis for spatially explicit modelling studies, as they imitate the complex characteristics of natural landscapes. In chapter 2, the first study of my thesis, I introduce two complementary R packages, NLMR and landscapetools, which I co-developed to simplify the workflow of simulating and customizing such landscapes. To demonstrate the use of the packages, I present a case study using the spatially explicit eco-epidemiological model and show that landscape complexity per se increases the probability of disease persistence. By using simple rules to simulate explicit host movement, I highlight in chapter 3 how disease dynamics are affected by population-level properties emerging from different movement rules, which lead to differences in realized movement distance, spatiotemporal host density, and heterogeneity in transmission rates. As a consequence, mechanistic movement decisions based on the underlying landscape or on conspecific competition led to considerably higher persistence probabilities than phenomenological random-walk approaches, because directed movement creates spatiotemporal differences in host densities. The results of these two chapters highlight the need to explicitly consider spatial heterogeneity and host movement behaviour when theoretical approaches are used to assess control measures to prevent outbreaks or eradicate diseases.
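NLMR and landscapetools are R packages, and their actual APIs are not shown here. Purely to illustrate the idea behind a neutral landscape model, a random surface with spatial autocorrelation, here is a crude stand-in sketch in Python (repeated neighbour averaging as the autocorrelation mechanism; the function name and parameters are my own assumptions):

```python
import random

def neutral_landscape(n, smoothing=2, seed=1):
    """n x n random landscape in [0, 1]; repeated neighbour averaging adds
    spatial autocorrelation (a crude stand-in for neutral landscape models)."""
    rng = random.Random(seed)
    grid = [[rng.random() for _ in range(n)] for _ in range(n)]
    for _ in range(smoothing):
        new = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                nbrs = [grid[i + di][j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if 0 <= i + di < n and 0 <= j + dj < n]
                new[i][j] = sum(nbrs) / len(nbrs)  # local mean
        grid = new
    return grid
```

More smoothing passes yield a smoother, more clumped landscape; the packages described in chapter 2 provide a whole family of such generators (gradients, fractal surfaces, random clusters) plus utilities for merging and visualizing them.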
In the empirical part of my thesis (chapter 4), I focus on the spatiotemporal dynamics of Classical Swine Fever in a wild boar population by analysing epidemiological data that was collected during an outbreak in Northern Germany persisting for eight years. I show that infection risk exhibits different seasonal patterns on the individual and the regional level. These patterns on the one hand show a higher infection risk in autumn and winter that may arise due to onset of mating behaviour and hunting intensity, which result in increased movement ranges. On the other hand, the increased infection risk of piglets, especially during the birth season, indicates the importance of new susceptible host individuals for local pathogen spread. The findings of this chapter underline the importance of different spatial and temporal scales to understand different components of pathogen spread that can have important implications for disease management.
Taken together, the complementary use of theoretical and empirical modelling in my thesis highlights that our inferences about disease dynamics depend heavily on the spatial and temporal resolution used and on how explicit mechanisms underlying host movement are modelled. My findings are an important step towards the incorporation of spatial heterogeneity and a mechanism-based perspective into eco-epidemiological approaches. This will ultimately lead to an enhanced understanding of the feedbacks of contact rates on pathogen spread and disease persistence, which are of paramount importance for improving predictive models at the interface of ecology and epidemiology.
Background and objectives: Drop jumps (DJs) are well-established exercise drills in plyometric training. Several sports are performed under unstable surface conditions (e.g., soccer, beach volleyball, gymnastics). To closely mimic sport-specific demands, plyometric training includes DJs on both stable and unstable surfaces. Depending on the mechanical properties of the unstable surface (e.g., thickness, stiffness), previous cross-sectional studies have reported altered temporal, mechanical, and physiological demands compared with stable conditions. However, given that the human body simultaneously interacts with various factors (e.g., drop height, footwear, gender) during DJs on unstable surfaces, investigating the isolated effects of unstable surface conditions might not be sufficient for designing an effective and safe DJ stimulus. Instead, different factors and their interaction with surface instability have to be investigated in combination. Therefore, the present doctoral thesis seeks to complement our knowledge by examining the main and interaction effects of surface instability, drop height, footwear, and gender on DJ performance, knee joint kinematics, and neuromuscular activation.
Methods: Healthy male and female physically active sports science students aged 19-26 years participated in the cross-sectional studies. Jump performance, sagittal and frontal plane knee joint kinematics, and leg muscle activity were measured during DJs on stable (i.e., firm force plate) and (highly) unstable surfaces (i.e., one or two AIREX® balance pads) from different drop heights (i.e., 20 cm, 40 cm, 60 cm) or under multiple footwear conditions (i.e., barefoot, minimal shoes, cushioned shoes).
Results: Findings revealed that surface instability caused a decline in DJ performance, reduced sagittal plane knee joint kinematics, and lower leg muscle activity during DJs. Sagittal plane knee joint kinematics as well as leg muscle activity decreased even further with increasing surface instability (i.e., two vs. one AIREX® balance pads). Higher (60 cm) compared to lower drop heights (≤ 40 cm) resulted in a decline in DJ performance. In addition, increased sagittal plane knee joint kinematics as well as higher shank muscle activity were found during DJs from higher (60 cm) compared to lower drop heights (≤ 40 cm). Footwear properties almost exclusively affected frontal plane knee joint kinematics, indicating larger maximum knee valgus angles when performing DJs barefoot compared to shod. No significant differences at all were found between the different shoe properties (i.e., minimal vs. cushioned shoes). Only a few significant surface-drop height and surface-footwear interactions were found during DJs. They mainly indicated that drop height- and footwear-related effects are more pronounced during DJs on unstable compared to stable surfaces. In this regard, the maximum knee valgus angle was significantly greater when performing DJs from high drop heights (60 cm), but only on highly unstable surfaces. Further, braking and push-off times were significantly longer when performing DJs barefoot compared to shod, but only on unstable surfaces. Finally, analyses indicated no significant interactions with the gender factor.
Conclusions: The findings of the present cumulative thesis indicate that stable rather than unstable surfaces as well as moderate (≤ 40 cm) rather than high (60 cm) drop heights provide sufficient stimuli for performing DJs. Furthermore, the findings suggest that DJs on highly unstable surfaces (i.e., two AIREX® balance pads), from high drop heights (60 cm), and barefoot compared to shod increase the maximal knee valgus angle/stress, thereby providing a potentially more harmful DJ stimulus. Neuromuscular activation strategies appear to be modified by surface instability and drop height. However, leg muscle activity is only marginally affected by footwear and by the interactions of the various external factors (i.e., surface instability, drop height, footwear). Finally, gender did not significantly modulate the main or interaction effects of the observed external factors during DJs.
Although the search for promising business models (BMs) is crucial for every profit-oriented venture, this search challenges entrepreneurs in particular. Limited resources, missing expertise and absolute uncertainty force entrepreneurs to rely strongly on their cognition when searching for a promising BM. However, as prior studies have examined cognitive search activities in isolation and neglected cognitive differences, explanations of how cognitive factors affect the BM process and its outcomes are thus far insufficient.
Addressing the overall question of how BMs emerge, the dissertation contributes to the cognitive perspective in entrepreneurship and BM research. Building on dual-process theory from cognitive psychology, the micro-foundations of managerial decision-making and insights from the framing literature, this dissertation explicitly investigates the impacts of different cognitive dispositions, search activities and visual framing effects. The core assumption is that cognitive dispositions and entrepreneurs’ searches for information determine their BM decision-making. Furthermore, BM visualisations have become popular instruments with which to explain and manage today’s complex business interactions. Because they abstract from reality, they can also influence cognitive processes.
This dissertation offers new explanations for these aspects and consists of three studies and one reflective article. The first study explores the impacts of differences in search activities and cognitive dispositions in a qualitative study with 70 entrepreneurship students. The second qualitative study explores the cognitive impacts of 103 BM visualisations. Third, a quantitative PLS-SEM experiment with 197 entrepreneurs illuminates the link between BM visualisations and cognition. The reflective article discusses the results’ implications for the teaching of BMs.
In sum, the studies have resulted in a new theory of stabilising factors that explains how cognitive dispositions, search activities and visual framing determine entrepreneurs’ decisions to imitate or deviate from existing BMs. It indicates that this decision depends on the context-dependent strategic orientation and on cognitive-disposition-dependent cognitive safety, that is, the correspondence between the characteristics of cognitive dispositions and search activities. Moreover, the studies identified five visual framing effects that are independent of cognitive dispositions and prior experiences. This provides fertile contributions to the literature on BM methods and on how BM visualisations affect decisions. Most importantly, BM visualisations provide an emotionally stabilising function for rational entrepreneurs, a cognitively stabilising function for experiential participants, and do not affect indifferent participants in general.
This thesis covers the synthesis of conjugates of 2-Deoxy-D-ribose-5-phosphate aldolase (DERA) with suitable polymers and the subsequent immobilization of these conjugates in thin films via two different approaches.
2-Deoxy-D-ribose-5-phosphate aldolase (DERA) is a biocatalyst capable of converting acetaldehyde and a second aldehyde as acceptor into enantiomerically pure mono- and dihydroxyaldehydes, which are important structural motifs in a number of pharmaceutically active compounds. Conjugation and immobilization render the enzyme applicable for use in a continuously run biocatalytic process, which avoids the common problem of product inhibition. Within this thesis, conjugates of DERA and poly(N-isopropylacrylamide) (PNIPAm) were synthesized and isolated for immobilization via a self-assembly approach, as well as conjugates with poly(N,N-dimethylacrylamide) (PDMAA) for a simplified and scalable spray-coating approach. For the DERA/PNIPAm conjugates, different synthesis routes were tested, including grafting-from and grafting-to, both common methods for conjugation. Furthermore, both lysines and cysteines were addressed in order to find optimum conjugation conditions. It turned out that conjugation via lysine causes severe activity loss, as one lysine plays a key role in the catalytic mechanism. Conjugation via the cysteines by a grafting-to approach using pyridyl disulfide (PDS) end-group functionalized polymers led to high conjugation efficiencies in the presence of polymer-solubilizing NaSCN. The resulting conjugates maintained enzymatic activity and also gained high acetaldehyde tolerance, which is necessary for their later use in an industrially relevant process after immobilization.
The resulting DERA/PNIPAm conjugates exhibited enhanced interfacial activity at the air/water interface compared to the single components, which is an important prerequisite for immobilization via the self-assembly approach. Conjugates with longer polymer chains formed homogeneous films on silicon wafers and glass slides, while those with short chains could only form isolated aggregates. On top of that, long-chain conjugates showed better activity maintenance upon immobilization.
The crosslinking of the conjugates, as well as their fixation on the support materials, is important for the mechanical stability of the films obtained from the self-assembly process. Therefore, in a second step, we introduced the UV-crosslinkable monomer DMMIBA into the PNIPAm polymers used for conjugation. The introduction of DMMIBA reduced the lower critical solution temperature (LCST) of the polymer and thus its water solubility under ambient conditions, resulting in lower conjugation efficiencies and, in turn, slightly poorer acetaldehyde tolerance of the resulting conjugates. Unlike DERA/PNIPAm, the conjugates from the copolymer P(NIPAM-co-DMMIBA) formed continuous, homogeneous films only after the crosslinking step via UV treatment. For firm binding of the crosslinked films, a functionalization protocol was developed for the model support material cyclic olefin copolymer (COC) and the final target support, PAN-based membranes, which introduces analogous UV-reactive groups to the support surface. The conjugates immobilized on the modified COC films maintained enzymatic activity and showed good mechanical stability after several cycles of activity assessment. Conjugates with longer polymer chains, however, showed a higher degree of crosslinking after the UV treatment, leading to a pronounced loss of activity. A porous PAN membrane onto which the conjugates were likewise immobilized was finally transferred to a dead-end filtration membrane module to catalyze the aldol reaction of the industrially relevant mixture of acetaldehyde and hexanal in continuous mode. Mono-aldol product was detectable, but yields were comparably low and the operational stability needs to be improved further.
Another approach towards the immobilization of DERA conjugates was to generate the conjugates in situ by simply mixing enzyme and polymer and spray-coating the mixture onto the membrane support. Compared to the previous approach, the focus here was on simplicity and the potential scalability of the immobilization. Conjugates were thus only generated in situ and not further isolated and characterized. For the conjugation, PDMAA equipped with N-2-thiolactone acrylamide (TlaAm) side chains was used, an amine-reactive comonomer that can react with the lysine residues of DERA as well as with amino groups introduced to a desired support surface. Furthermore, disulfide formation after hydrolysis of the Tla groups causes a crosslinking effect. The synthesized copolymer poly(N,N-dimethylacrylamide-co-N-2-thiolactone acrylamide) (P(DMAA-co-TlaAm)) thus serves multiple purposes, including protein binding, crosslinking and binding to support materials. The mixture of DERA and polymer could be immobilized on the PAN support by spray-coating with partial maintenance of enzymatic activity. To improve the acetaldehyde tolerance, the polymer used was further equipped with the cysteine-reactive PDS end-groups that had been employed for conjugation as described in the first part of the thesis. The generated conjugates indeed showed good acetaldehyde tolerance and were thus coated onto PAN membrane supports. Post-treatment with a basic aqueous solution of H2O2 was intended to further crosslink the spray-coated film via hydrolysis and oxidation of the thiolactone groups. However, the material was observed to wash off. Optimization is thus still necessary.
This dissertation investigates the impact of the economic and fiscal crisis starting in 2008 on EU climate policy-making. While the overall number of adopted greenhouse gas emission reduction policies declined in the aftermath of the crisis, EU lawmakers decided to introduce new or tighten existing regulations in some important policy domains. Existing knowledge about the crisis impact on EU legislative decision-making cannot explain these inconsistencies. In response, this study develops an actor-centred conceptual framework based on rational choice institutionalism that provides a micro-level link to explain how economic crises translate into altered policy-making patterns. The core theoretical argument draws on redistributive conflicts, arguing that tensions between ‘beneficiaries’ and ‘losers’ of a regulatory initiative intensify during economic crises and spill over into the policy domain. To test this hypothesis, and using social network analysis, this study analyses policy processes in three case studies: the introduction of carbon dioxide emission limits for passenger cars, the expansion of the EU Emissions Trading System to aviation, and the introduction of a regulatory framework for biofuels. The key finding is that an economic shock causes EU policy domains to polarise politically, resulting in intensified conflict and more difficult decision-making. The results also show that this process of political polarisation is rooted in the industry that is the subject of the regulation, and that intergovernmental bargaining among member states becomes more important, but also more difficult, in times of crisis.
Fold and thrust belts are characteristic features of collisional orogens that grow laterally through time by deforming the upper crust in response to stresses caused by convergence. The propagation of deformation in the upper crust is accommodated by shortening along major folds and thrusts. The formation of these structures is influenced by the mechanical strength of décollements, the basement architecture, the presence of preexisting structures and the taper of the wedge. These factors control not only the sequence of deformation but also cause differences in structural style.
The Himalayan fold and thrust belt exhibits significant differences in structural style from east to west. The external zone of the Himalayan fold and thrust belt, also called the Subhimalaya, has been extensively studied to understand the temporal development and the differences in structural style in Bhutan, Nepal and India; however, the Subhimalaya in Pakistan remains poorly studied. The Kohat and Potwar fold and thrust belts (herein called Kohat and Potwar) represent the Subhimalaya in Pakistan. The Main Boundary Thrust (MBT) marks the northern boundary of both Kohat and Potwar, showing that these belts are genetically linked to foreland-vergent deformation within the Himalayan orogen, despite the pronounced contrast in structural style. This contrast becomes more pronounced toward the south, where the active strike-slip Kalabagh Fault Zone links with the Kohat and Potwar range fronts, known as the Surghar Range and the Salt Range, respectively. The Surghar and Salt Ranges developed above the Surghar Thrust (SGT) and the Main Frontal Thrust (MFT). In order to understand the structural style and spatiotemporal development of the major structures in Kohat and Potwar, I have used structural modeling and low-temperature thermochronology methods in this study. The structural modeling is based on the construction of balanced cross-sections integrating surface geology, seismic reflection profiles and well data. In order to constrain the timing and magnitude of exhumation, I used apatite (U-Th-Sm)/He (AHe) and apatite fission track (AFT) dating. The results obtained from both methods are combined to document the Paleozoic to Recent history of Kohat and Potwar.
The results of this research suggest two major events in the deformation history. The first major deformation event is related to Late Paleozoic rifting associated with the development of the Neo-Tethys Ocean. The second is related to the Late Miocene to Pliocene development of the Himalayan fold and thrust belt in the Kohat and Potwar. The Late Paleozoic rifting is deciphered by inverse thermal modelling of detrital AFT and AHe ages from the Salt Range. Rifting in this area created normal faulting that resulted in the exhumation and erosion of Early to Middle Paleozoic strata, forming a major unconformity between Cambrian and Permian strata that is exposed today in the Salt Range. The normal faults formed in Late Paleozoic time played an important role in localizing the Miocene-Pliocene deformation in this area. The combination of structural reconstructions and thermochronologic data suggests that deformation initiated at 15±2 Ma on the SGT ramp in the southern part of Kohat. The early movement on the SGT accreted the foreland into the Kohat deforming wedge, forming the range front. The development of the MBT at 12±2 Ma formed the northern boundary of Kohat and Potwar. Deformation propagated south of the MBT on double décollements in the Kohat and on a single basal décollement in the Potwar. The double décollement in the Kohat adopted an active roof-thrust deformation style that resulted in disharmonic structural styles in the upper and lower parts of the stratigraphic section. Incremental shortening resulted in the development of duplexes in the subsurface between the two décollements and imbrication above the roof thrust. Tectonic thickening caused by the duplexes resulted in cooling and exhumation above the roof thrust by removal of a thick sequence of molasse strata. The structural modelling shows that the ramps on which duplexes formed in the Kohat continue as tip lines of fault propagation folds in the Potwar.
The absence of a double décollement in the Potwar resulted in the preservation of a thick sequence of molasse strata there. The temporal data suggest that deformation propagated in-sequence from ~8 to 3 Ma in the northern parts of Kohat and Potwar; however, internal deformation in the Kohat was more intense, probably as required to maintain a critical taper after a significant load was removed above the upper décollement. In the southern part of the Potwar, a steeper basement slope (β ≥ 3°) and the presence of salt at the base of the stratigraphic section allowed for the complete preservation of the stratigraphic wedge, evidenced by very little internal deformation. Activation of the MFT at ~4 Ma allowed the Salt Range to become the range front of the Potwar. The removal of a large amount of molasse strata above the MFT ramp enhanced the role of salt in shaping the structural style of the Salt Range and the Kalabagh Fault Zone. Salt accumulation and migration resulted in the formation of normal faults in both areas. Salt migration in the Kalabagh Fault Zone has triggered out-of-sequence movement on ramps in the Kohat.
The amount of shortening calculated between the MBT and the SGT in Kohat is 75±5 km, and between the MBT and the MFT in Potwar it is 65±5 km. A comparable amount of shortening is thus accommodated in the Kohat and Potwar despite their different widths: 70 km for Kohat and 150 km for Potwar. In summary, this research suggests that deformation switched between different structures during the last ~15 Ma through different modes of fault propagation, resulting in different structural styles and the out-of-sequence development of Kohat and Potwar.
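A back-of-envelope calculation (my illustration, not part of the thesis) makes the contrast in shortening intensity explicit. Assuming the initial line length is the present width plus the calculated shortening, the figures above imply roughly 52% shortening for Kohat but only about 30% for Potwar:

```python
# Illustrative shortening percentages from the figures quoted above.
# Assumption: initial line length = present width + calculated shortening.

def shortening_percent(shortening_km: float, present_width_km: float) -> float:
    initial_length_km = present_width_km + shortening_km
    return 100.0 * shortening_km / initial_length_km

kohat = shortening_percent(75.0, 70.0)    # 75 km shortening, 70 km present width
potwar = shortening_percent(65.0, 150.0)  # 65 km shortening, 150 km present width
print(f"Kohat: {kohat:.0f}%  Potwar: {potwar:.0f}%")
```

This is consistent with the point that a comparable absolute shortening is packed into a belt less than half as wide in Kohat.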
The foreland of the Andes in South America is characterised by distinct along-strike changes in surface deformational style. These styles are classified into two end-members, the thin-skinned and the thick-skinned style. The superficial expression of thin-skinned deformation is a succession of narrowly spaced hills and valleys that form laterally continuous ranges on the foreland-facing side of the orogen. Each of the hills is defined by a reverse fault that roots in a basal décollement surface within the sedimentary cover and acts as a thrust ramp to stack the sedimentary pile. Thick-skinned deformation is morphologically characterised by spatially disparate, basement-cored mountain ranges. These mountain ranges are uplifted along reactivated high-angle crustal-scale discontinuities, such as suture zones between different tectonic terranes.
Proposed causes for the observed variation include variations in the dip angle of the Nazca plate, variations in sediment thickness, lithospheric thickening, volcanism and compositional differences. The proposed mechanisms are predominantly based on geological observations or numerical thermomechanical modelling, but there has been no attempt to understand them from the standpoint of data-integrative 3D modelling. The aim of this dissertation is therefore to understand how lithospheric structure controls the deformational behaviour. Integrating independent data into a consistent model of the lithosphere yields additional evidence that helps to understand the causes of the different deformational styles. Northern Argentina encompasses the transition from the thin-skinned fold-and-thrust belt in Bolivia to the thick-skinned Sierras Pampeanas province, which makes this area a well-suited location for such a study. The general workflow of this study first involves data-constrained structural and density modelling to obtain a model of the study area. This model was then used to predict the steady-state thermal field, which in turn was used to assess the present-day rheological state of northern Argentina.
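The steady-state thermal step in such a workflow can be illustrated with a minimal 1D sketch (parameter values below are generic textbook assumptions, not those of the thesis model): for a crust with uniform radiogenic heat production A, surface heat flow q0, thermal conductivity k and surface temperature T0, the conductive steady-state geotherm is T(z) = T0 + (q0/k)·z − A·z²/(2k).

```python
# 1D steady-state conductive geotherm with uniform heat production:
#   T(z) = T0 + (q0/k) * z - A * z^2 / (2k)
# Parameter values are illustrative assumptions only.

def geotherm(z_m: float,
             T0: float = 10.0,    # surface temperature, degC
             q0: float = 0.065,   # surface heat flow, W/m^2
             k: float = 2.5,      # thermal conductivity, W/(m K)
             A: float = 1.0e-6    # radiogenic heat production, W/m^3
             ) -> float:
    """Temperature (degC) at depth z (m) for a uniform conductive crust."""
    return T0 + (q0 / k) * z_m - A * z_m**2 / (2.0 * k)

print(f"T at 30 km depth: {geotherm(30_000):.0f} degC")
```

A 3D data-constrained model replaces these uniform parameters with the laterally varying conductivities and heat productions of the structural model, but the governing balance is the same.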
The structural configuration of the lithosphere in northern Argentina was determined by means of data-integrative 3D density modelling verified against Bouguer gravity data. The model delineates the first-order density contrasts in the uppermost 200 km of the lithosphere and discriminates bodies for the sediments, the crystalline crust, the lithospheric mantle and the subducting Nazca plate. To obtain the intra-crustal density structure, an automated inversion approach was developed and applied to a starting structural model that assumed a homogeneously dense crust. The resulting final structural model indicates that the crustal structure can be represented by an upper crust with a density of 2800 kg/m³ and a lower crust of 3100 kg/m³. The Transbrazilian Lineament, which separates the Pampia terrane from the Río de la Plata craton, is expressed as a zone of low average crustal densities.
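The forward problem behind gravity-constrained density modelling can be sketched with the classic infinite-slab (Bouguer) formula Δg = 2πGΔρh; this is my illustration of the order of magnitude involved, not code from the thesis, and the 5 km thickness below is an assumed example value:

```python
import math

# Gravity anomaly of an infinite horizontal slab (Bouguer slab formula):
#   delta_g = 2 * pi * G * delta_rho * h
# A minimal sketch of the forward calculation behind gravity inversion.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab_mgal(delta_rho: float, thickness_m: float) -> float:
    """Anomaly (mGal) of a slab with density contrast delta_rho (kg/m^3)
    and thickness (m). 1 mGal = 1e-5 m/s^2."""
    return 2.0 * math.pi * G * delta_rho * thickness_m * 1e5

# e.g. 5 km of lower crust (3100 kg/m^3) replacing upper crust (2800 kg/m^3):
anomaly = bouguer_slab_mgal(3100.0 - 2800.0, 5000.0)
print(f"{anomaly:.1f} mGal")
```

A kilometre-scale redistribution of the 300 kg/m³ upper/lower-crust contrast thus produces anomalies of tens of mGal, which is why Bouguer gravity can discriminate between candidate crustal structures; the actual inversion, of course, solves the 3D problem with finite bodies.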
In an excursion, we demonstrate in another study that the gravity inversion method developed to obtain intra-crustal density structures is also applicable to density variations in the uppermost lithospheric mantle. Densities at such sub-crustal depths are difficult to constrain from seismic tomographic models due to smearing of crustal velocities. With the application to the uppermost lithospheric mantle in the North Atlantic, we demonstrate in Tan et al. (2018) that lateral density trends of at least 125 km width are robustly recovered by the inversion method, thereby providing an important tool for the delineation of subcrustal density trends.
Due to the genetic link between subduction, orogenesis and retroarc foreland basins, the question arises whether the steady-state assumption is valid in such a dynamic setting. To answer this question, I analysed (i) the impact of subduction on the conductive thermal field of the overlying continental plate, and (ii) the differences between the transient and steady-state thermal fields of a coupled geodynamic model. Both studies indicate that the assumption of a thermal steady state is applicable in most parts of the study area. Within the orogenic wedge, where the assumption cannot be applied, I estimated the transient thermal field based on the results of the conducted analyses.
Accordingly, the structural model obtained in the first step could be used to derive a 3D conductive steady-state thermal field. The rheological assessment based on this thermal field indicates that the lithosphere of the thin-skinned Subandean ranges is characterised by a relatively strong crust and a weak mantle. In contrast, the adjacent foreland basin consists of a fully coupled, very strong lithosphere. Thus, shortening in northern Argentina can only be accommodated within the weak lithosphere of the orogen and the Subandean ranges. The analysis suggests that the décollements of the fold-and-thrust belt are the shallow continuation of shear zones that reside in the ductile sections of the orogenic crust. Furthermore, the localisation of the faults that transfer strain between the deeper ductile crust and the shallower décollement is strongly influenced by crustal weak zones such as foliation. In contrast to the northern foreland, the lithosphere of the thick-skinned Sierras Pampeanas is fully coupled and characterised by a strong crust and mantle. The high overall strength prevents the generation of crustal-scale faults by tectonic stresses. Even inherited crustal-scale discontinuities, such as sutures, cannot reduce the strength of the lithosphere sufficiently for them to be reactivated. Therefore, magmatism, which had been identified as a precursor of basement uplift in the Sierras Pampeanas, is the key factor that leads to the broken foreland of this province. Due to thermal weakening, and potentially lubrication of the inherited discontinuities, the lithosphere is locally weakened such that tectonic stresses can uplift the basement blocks. This hypothesis explains both the spatially disparate character of the broken foreland and the observed temporal delay between volcanism and basement block uplift.
This dissertation provides for the first time a data-driven 3D model that is consistent with geophysical data and geological observations, and that is able to causally link the thermo-rheological structure of the lithosphere to the observed variation of surface deformation styles in the retroarc foreland of northern Argentina.
The fabrication of 1D nanostrands composed of stimuli-responsive microgels has been demonstrated in this work. Microgels are well-known materials able to respond to various stimuli from their environment. Since these microgels respond to an external stimulus via a volume change, a targeted mechanical response can be achieved. By carefully choosing the right composition of the polymer matrix, microgels can be designed to react precisely to the targeted stimuli (e.g. drug delivery via pH and temperature changes, or selective contractions through changes in electrical current [125]).
The aim of this work was to create flexible nano-filaments capable of fast anisotropic contractions similar to muscle filaments. For the fabrication of such filaments or strands, nanostructured templates (PDMS wrinkles) were chosen owing to their facile, low-cost fabrication and the versatile tunability of their dimensions. Additionally, wrinkling is a well-known lithography-free method that enables the fabrication of nanostructures in a reproducible manner and with high long-range periodicity.
In Chapter 2.1, it was shown for the first time that microgels, as soft matter particles, can be aligned into densely packed microgel arrays of various lateral dimensions. The alignment of microgels with different compositions (e.g. VCL/AAEM, NIPAAm, NIPAAm/VCL and charged microgels) was demonstrated using different assembly techniques (e.g. spin-coating, template-confined molding). One experimental parameter was kept constant: the SiOx surface composition of the templates and substrates (e.g. oxidized PDMS wrinkles, Si wafers and glass slides). It was shown that the fabrication of nanoarrays was feasible with all tested microgel types. Although the microgels exhibited different deformability when aligned on a flat surface, they retained their thermo-responsivity and swelling behavior.
Towards the fabrication of 1D microgel strands, interparticle connectivity was sought. This was achieved via different cross-linking methods (i.e. cross-linking via UV irradiation and host-guest complexation), discussed in Chapter 2.2. The microgel arrays created by the different assembly methods and microgel types were tested for their cross-linking suitability. It was observed that NIPAAm-based microgels cannot be cross-linked with UV light. Furthermore, it was found that these microgels exhibit a strong surface-particle interaction and therefore could not be detached from the given substrates. In contrast, VCL/AAEM-based microgels could both be UV cross-linked, based on the keto-enol tautomerism of the AAEM comonomer, and be detached from the substrate owing to their lower adhesion energy towards SiOx surfaces. With VCL/AAEM microgels, long one-dimensional microgel strands could be re-dispersed in water for further analysis. It was also shown that at least one lateral dimension of the freely dispersed 1D microgel strands is easily controllable by adjusting the wavelength of the wrinkled template. For further work, only VCL/AAEM-based microgels were used, in order to focus on the main aim of this work, i.e. the fabrication of 1D microgel nanostrands.
As an alternative to the unspecific and harsh UV cross-linking, host-guest complexation via diazobenzene cross-linkers and cyclodextrin hosts was explored. The idea behind this approach was to enable a future construction-kit-like approach by incorporating cyclodextrin comonomers into a broad variety of particle systems (e.g. microgels, nanoparticles). For this purpose, VCL/AAEM microgels were copolymerized with different amounts of mono-acrylate-functionalized β-cyclodextrin (CD). After successfully testing the cross-linking capability in solution, the cross-linking of aligned VCL/AAEM/CD microgels was attempted. Although the cross-linking worked well, the single arrays agglomerated once they came into contact with each other. Residual amounts of mono-complexed diazobenzene linkers were suspected as the reason for this behavior. Thus, end-capping strategies were tried (e.g. excess amounts of β-cyclodextrin and coverage with azobenzene-functionalized AuNPs) but were unsuccessful. On further consideration, entropy effects were taken into account, which favor the release of the complexed diazobenzene linker and thereby lead to agglomeration. To circumvent this entropy-driven effect, a multifunctional polymer with 50% azobenzene groups (Harada polymer) was used. First experiments with this polymer showed promising results, with less pronounced agglomeration (Figure 77); this approach could thus be pursued in the future. In this chapter it was found that, in contrast to pearl-necklace and ribbon-like formations, particle alignment in zigzag formation provided the best compromise between stability in dispersion (see Figure 44a and Figure 51) and sufficient flexibility.
For this reason, microgel strands in zigzag formation were used for the motion analysis described in Chapter 2.3. The aim was to observe the properties of unrestrained microgel strands in solution (e.g. diffusion behavior, rotational properties and, ideally, anisotropic contraction upon temperature increase). Initially, 1D microgel strands were manipulated via AFM in a liquid-cell setup. It was observed that the strands required a higher load force than single microgels to be detached from the surface. However, with the AFM it was not possible to detach the strands in a controlled manner; instead, single microgel particles were completely removed or the strands were torn off the surface. For this reason, confocal microscopy was used to observe the motion behavior of unrestrained microgel strands in solution. Furthermore, coating the surface of the substrates with a repulsive polymer film was found to be beneficial in hindering adsorption of the strands. Confocal and wide-field microscopy videos showed that the microgel strands exhibit translational and rotational diffusive motion in solution without perceptible bending. Unfortunately, with these methods the anisotropic stimuli-responsive contraction of the freely moving microgel strands could not be detected. To summarize, the flexibility of the microgel strands is more comparable to the mechanical behavior of a semi-flexible cable than to that of a yarn. The strands studied here consist of dozens or even hundreds of discrete submicron units strung together by cross-linking, with few parallels in nanotechnology.
With the insights gained in this work on microgel-surface interactions, a targeted functionalization of the template and substrate surfaces can be conducted in the future to actively prevent unwanted microgel adsorption for a given microgel system (e.g. PVCL and polystyrene coating [235]). This measure would make the discussed alignment methods more versatile. As shown herein, the assembly methods enable versatile microgel alignment (e.g. microgel meshes, double and triple strands). Going further, one could use more complex templates (e.g. ceramic rhombs and star-shaped wrinkles; Figure 14) to expand the possibilities of microgel alignment and to precisely control the aspect ratios (e.g. microgel rods with homogeneous size distributions).
In recent years, there has been increasing awareness that historical land cover changes and associated land use legacies may be important drivers of present-day species richness and biodiversity, due to time-delayed extinctions or colonizations in response to historical environmental changes. Historically altered habitat patches may therefore exhibit an extinction debt or colonization credit and can be expected to lose or gain species in the future. However, extinction debts and colonization credits are difficult to detect, and their actual magnitudes or payments have rarely been quantified, because species richness patterns and dynamics are also shaped by recent environmental conditions and recent environmental changes.
In this thesis, we aimed to determine patterns of herb-layer species richness and recent species richness dynamics of forest herb-layer plants and to link those patterns and dynamics to historical land cover changes and associated land use legacies. The study was conducted in the Prignitz, NE Germany, where the forest distribution remained stable over the last ca. 100 years but where a) the deciduous forest area had declined by more than 90 percent (leaving only remnants of "ancient forests") and b) small new forests had been established on former agricultural land ("post-agricultural forests"). Here, we analyzed the relative importance of land use history and associated historical land cover changes for herb-layer species richness compared to recent environmental factors, and determined the magnitudes of extinction debt and colonization credit and their payment in ancient and post-agricultural forests, respectively.
We showed that present-day species richness patterns were still shaped by historical land cover changes reaching back more than a century. Although recent environmental conditions were largely comparable, we found significantly more forest specialists, species with short-distance dispersal capabilities, and clonal species in ancient forests than in post-agricultural forests. These species richness differences were largely attributable to a colonization credit in post-agricultural forests that ranged up to 9 species (average 4.7), while the extinction debt in ancient forests had almost completely been paid. Environmental legacies from historical agricultural land use played a minor role for the species richness differences; instead, patch connectivity was most important. Species richness in ancient forests still depended on historical connectivity, indicating a last glimpse of an extinction debt, and the colonization credit was highest in isolated post-agricultural forests. In post-agricultural forests that were better connected or directly adjacent to ancient forest patches, the colonization credit was considerably smaller, and we were able to verify a gradual payment of the colonization credit from 2.7 to 1.5 species over the last six decades.
With the growth of information technology, patient attitudes are shifting – away from passively receiving care towards actively taking responsibility for their well-being. Handling doctor-patient relationships collaboratively and providing patients access to their health information are crucial steps in empowering patients. In mental healthcare, the implicit consensus amongst practitioners has been that sharing medical records with patients may have an unpredictable, harmful impact on clinical practice. In order to involve patients more actively in mental healthcare processes, Tele-Board MED (TBM) allows for digital collaborative documentation in therapist-patient sessions. The TBM software system offers a whiteboard-inspired graphical user interface that allows therapist and patient to jointly take notes during the treatment session. Furthermore, it provides features to automatically reuse the digital treatment session notes for the creation of treatment session summaries and clinical case reports. This thesis presents the development of the TBM system and evaluates its effects on 1) the fulfillment of the therapist’s duties of clinical case documentation, 2) patient engagement in care processes, and 3) the therapist-patient relationship. Following the design research methodology, TBM was developed and tested in multiple evaluation studies in the domains of cognitive behavioral psychotherapy and addiction care. The results show that therapists are likely to use TBM with patients if they have a technology-friendly attitude and when its use suits the treatment context. Support in carrying out documentation duties as well as in fulfilling legal requirements contributes to therapist acceptance. Furthermore, therapists value TBM as a tool to provide a discussion framework and quick access to worksheets during treatment sessions. Therapists express skepticism, however, regarding technology use in patient sessions and towards complete record transparency in general.
Patients expect TBM to improve the communication with their therapist and to offer better recall of discussed topics when taking a copy of their notes home after the session. Patients are doubtful regarding a possible distraction of the therapist and regarding usage in situations when relationship-building is crucial. When applied in a clinical environment, collaborative note-taking with TBM encourages patient engagement and a team feeling between therapist and patient. Furthermore, it increases the patient’s acceptance of their diagnosis, which in turn is an important predictor of therapy success. In summary, TBM has high potential not only to deliver documentation support and record transparency for patients, but also to contribute to a collaborative doctor-patient relationship. This thesis provides design implications for the development of digital collaborative documentation systems in (mental) healthcare as well as recommendations for a successful implementation in clinical practice.
Restful choreographies
(2019)
Business process management has become a key instrument to organize work, as many companies represent their operations in business process models. Recently, business process choreography diagrams have been introduced as part of the Business Process Model and Notation standard to represent interactions between business processes run by different partners. When it comes to the interactions between services on the Web, Representational State Transfer (REST) is one of the primary architectural styles employed by web services today. Ideally, the RESTful interactions between participants should implement the interactions defined at the business choreography level.
The problem, however, is the conceptual gap between business process choreography diagrams and RESTful interactions. Choreography diagrams, on the one hand, are modeled by business domain experts with the purpose of capturing, communicating and, ideally, driving the business interactions. RESTful interactions, on the other hand, depend on RESTful interfaces that are designed by web engineers with the purpose of facilitating the interaction between participants on the internet. In most cases, however, business domain experts are unaware of the technology behind web service interfaces, and web engineers tend to overlook the overall business goals of web services. While there is considerable work on using process models during process implementation, there is little work on using choreography models to implement interactions between business processes. This thesis addresses this research gap by raising the following research question: How can the conceptual gap between business process choreographies and RESTful interactions be closed? This thesis offers several research contributions that jointly answer the research question.
The main research contribution is the design of a language that captures RESTful interactions between participants---the RESTful choreography modeling language. Formal completeness properties (with respect to REST) are introduced to validate its instances, called RESTful choreographies. A systematic semi-automatic method for deriving RESTful choreographies from business process choreographies is proposed. The method employs natural language processing techniques to translate business interactions into RESTful interactions. The effectiveness of the approach is shown by developing a prototypical tool and evaluating the derivation method over a large number of choreography models.
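The thesis' actual derivation method is not reproduced in the abstract; as a purely illustrative sketch of the general idea of translating choreography task labels into RESTful interactions, a naive verb-to-HTTP-method mapping might look as follows (the verb lexicon, the default method, and all names are assumptions for illustration only):

```python
# Illustrative sketch (not the thesis' algorithm): derive a RESTful
# interaction (HTTP method + resource path) from a choreography task label
# such as "Place order" by splitting off the leading verb.

VERB_TO_METHOD = {
    "create": "POST", "submit": "POST", "place": "POST",
    "get": "GET", "retrieve": "GET", "check": "GET",
    "update": "PUT", "change": "PUT",
    "cancel": "DELETE", "delete": "DELETE",
}

def derive_rest_interaction(task_label: str) -> tuple[str, str]:
    """Map a task label like 'Place order' to (HTTP method, resource path)."""
    words = task_label.lower().split()
    verb, *rest = words
    method = VERB_TO_METHOD.get(verb, "POST")  # fallback for unknown verbs
    resource = "/" + "-".join(rest) if rest else "/"
    return method, resource

print(derive_rest_interaction("Place order"))           # ('POST', '/order')
print(derive_rest_interaction("Check invoice status"))  # ('GET', '/invoice-status')
```

A real derivation would additionally need part-of-speech tagging and domain knowledge to handle labels that do not start with a known verb.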
In addition, the thesis proposes solutions towards implementing RESTful choreographies. In particular, two RESTful service specifications are introduced: one aiding the execution of choreographies' exclusive gateways and one guiding RESTful interactions.
During lower sea levels in glacial periods, deep permafrost formed on large continental shelf areas of the Arctic Ocean. Subsequent sea level rise and coastal erosion created subsea permafrost, which generally degrades after inundation under the influence of a complex suite of marine, near-shore processes. Global warming is especially pronounced in the Arctic, and will increase the transition to and the degradation of subsea permafrost, with implications for atmospheric climate forcing, offshore infrastructure, and aquatic ecosystems.
This thesis combines new geophysical, borehole-observational and modelling approaches to enhance our understanding of subsea permafrost dynamics. Three specific areas for advancement were identified: (I) the sparsity of observational data, (II) the lack of salt infiltration mechanisms in models, and (III) the poor understanding of regional differences in key driving parameters. This study tested the combination of spectral ratios of the ambient seismic wavefield with shear-wave velocities estimated from seismic interferometry for estimating the thickness of the unfrozen sediment overlying the ice-bonded permafrost offshore. Mesoscale numerical calculations (10^1 to 10^2 m, thousands of years) were employed to develop and solve the coupled heat diffusion and salt transport equations, including phase change effects. Model soil parameters were constrained by borehole data, and the impact of a variety of influences during the transgression was tested in modelling studies. In addition, two inversion schemes (particle swarm optimization and a least-squares method) were used to reconstruct temperature histories for the past 200-300 years in the Laptev Sea region in Siberia from two permafrost borehole temperature records. These data were evaluated against larger-scale reconstructions from the region.
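The coupled heat-and-salt model itself is beyond the scope of an abstract, but its numerical core can be sketched. The following is a minimal, purely illustrative explicit finite-difference solver for the heat-diffusion part alone; the thesis model additionally couples salt transport and phase change, and all parameter values here are assumptions:

```python
import numpy as np

# Minimal sketch of the numerical core: explicit 1D heat diffusion in
# sediment after inundation. Salt transport and latent-heat effects are
# deliberately omitted; parameters are illustrative only.

def step_heat_1d(T, dz, dt, kappa, T_surface):
    """One explicit finite-difference step of dT/dt = kappa * d2T/dz2."""
    r = kappa * dt / dz**2
    assert r <= 0.5, "explicit scheme unstable: reduce dt or increase dz"
    T_new = T.copy()
    T_new[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
    T_new[0] = T_surface   # bottom-water temperature imposed at the seabed
    T_new[-1] = T[-1]      # constant temperature at depth (simplified)
    return T_new

kappa = 1e-6             # thermal diffusivity, m^2/s (typical sediment)
dz, dt = 1.0, 86400.0    # 1 m grid, daily time steps
T = np.full(100, -10.0)  # 100 m of permafrost initially at -10 C
for _ in range(365 * 100):  # a century under -1 C bottom water
    T = step_heat_1d(T, dz, dt, kappa, T_surface=-1.0)
print(round(T[10], 2))   # temperature at 10 m depth after 100 years
```

In the thesis' setting, an additional advection-diffusion equation for salt and an ice-saturation model would feed back into the effective thermal parameters at every step.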
It was found (I) that peaks in spectral ratios modelled for three-layer, one-dimensional systems corresponded with thaw depths. Around Muostakh Island in the central Laptev Sea, seismic receivers were deployed on the seabed. Derived depths of the ice-bonded permafrost table ranged between 3.7 and 20.7 m (± 15 %), increasing with distance from the coast. (II) Temperatures modelled during the transition to subsea permafrost approached isothermal conditions after about 2000 years of inundation at Cape Mamontov Klyk, consistent with observations from offshore boreholes. Stratigraphic scenarios showed that salt distribution and infiltration had a large impact on the ice saturation in the sediments. Three key factors were identified that, when changed, shifted the modelled permafrost thaw depth most strongly: bottom water temperatures, shoreline retreat rate and initial temperature before inundation. Salt transport based on diffusion, together with contributions from arbitrary density-driven mechanisms, only accounted for about 50 % of observed thaw depths at offshore sites hundreds to thousands of years after inundation. This bias was found consistently at all three sites in the Laptev Sea region. (III) In the temperature reconstructions, distinct differences in the local temperature histories between the western Laptev Sea and the Lena Delta sites were recognized, such as a transition to warmer temperatures a century later in the western Laptev Sea as well as a peak in warming three decades later. The local permafrost surface temperature history at Sardakh Island in the Lena Delta was reminiscent of the circum-Arctic regional average trends. However, Mamontov Klyk in the western Laptev Sea was consistent with Arctic trends only in the most recent decade and was more similar to northern hemispheric mean trends. Both sites were consistent with a rapid recent synoptic warming.
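The link between a spectral-ratio peak and the unfrozen-layer thickness can be sketched with the standard quarter-wavelength relation for a soft layer over a stiff half-space, f0 ≈ Vs/(4h). The numbers below are purely illustrative; the thesis' actual velocity estimates come from seismic interferometry:

```python
# Standard quarter-wavelength relation used in ambient-vibration studies:
# the resonance peak f0 of a soft (unfrozen) layer over a stiff (ice-bonded)
# half-space satisfies f0 ~ Vs / (4 h), so h ~ Vs / (4 f0).
# Velocity and frequency values are illustrative assumptions.

def thaw_depth(vs_mps: float, f0_hz: float) -> float:
    """Estimate unfrozen-sediment thickness (m) from shear-wave velocity
    and the spectral-ratio peak frequency."""
    return vs_mps / (4.0 * f0_hz)

# Example: Vs = 120 m/s in unfrozen marine sediment, observed peak at 2 Hz
print(thaw_depth(120.0, 2.0))  # -> 15.0 (m)
```

In practice the velocity contrast, layering, and measurement uncertainty make the full three-layer modelling described above necessary.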
In conclusion, the consistency between modelled response, expected permafrost distribution, and observational data suggests that the passive seismic method is promising for determining the thickness of unfrozen sediment on the continental Arctic shelf. The quantified gap between currently modelled and observed thaw depths means that the impact of degradation on climate forcing, ecosystems, and infrastructure is larger than current models predict. This discrepancy suggests the importance of further mechanisms of salt penetration and thaw that have not yet been considered, either pre-inundation, post-inundation, or both. In addition, any meaningful modelling of subsea permafrost will have to constrain the identified key factors and their regional differences well. The shallow permafrost boreholes provide previously missing, well-resolved short-scale temperature information for the coastal permafrost tundra of the Arctic. As local deviations from circum-Arctic reconstructions, such as later warming and higher warming magnitude, were shown to exist in this region, these results provide a basis for local surface temperature record parameterization of climate and, in particular, permafrost models. The results of this work bring us one step closer to understanding the full picture of the transition from terrestrial to subsea permafrost.
Pillars of Salt
(2019)
Quantum field theory on curved spacetimes is understood as a semiclassical approximation of some quantum theory of gravitation, which models a quantum field under the influence of a classical gravitational field, that is, a curved spacetime. The most remarkable effect predicted by this approach is the creation of particles by the spacetime itself, represented, for instance, by Hawking's evaporation of black holes or the Unruh effect. On the other hand, these aspects already suggest that certain cornerstones of Minkowski quantum field theory, more precisely a preferred vacuum state and, consequently, the concept of particles, do not have sensible counterparts within a theory on general curved spacetimes. Likewise, the implementation of covariance in the model has to be reconsidered, as curved spacetimes usually lack any non-trivial global symmetry. Whereas this latter issue has been resolved by introducing the paradigm of locally covariant quantum field theory (LCQFT), the absence of a reasonable concept for distinct vacuum and particle states on general curved spacetimes has become manifest even in the form of no-go theorems.
Within the framework of algebraic quantum field theory, one first introduces observables, while states enter the game only afterwards by assigning expectation values to them. Even though the construction of observables is based on physically motivated concepts, there is still a vast number of possible states, and many of them are not reasonable from a physical point of view. We infer that this notion is still too general, that is, further physical constraints are required. For instance, when dealing with a free quantum field theory driven by a linear field equation, it is natural to focus on so-called quasifree states. Furthermore, a suitable renormalization procedure for products of field operators is vitally important. This particularly concerns the expectation values of the energy momentum tensor, which correspond to distributional bisolutions of the field equation on the curved spacetime. J. Hadamard's theory of hyperbolic equations provides a certain class of bisolutions with fixed singular part, which therefore allow for an appropriate renormalization scheme.
By now, this specification of the singularity structure is known as the Hadamard condition and is widely accepted as the natural generalization of the spectral condition of flat quantum field theory. Moreover, due to Radzikowski's celebrated results, it is equivalent to a local condition, namely one on the wave front set of the bisolution. This formulation made the powerful tools of microlocal analysis, developed by Duistermaat and Hörmander, available for the verification of the Hadamard property as well as for the construction of corresponding Hadamard states, which initiated much progress in this field. However, although indispensable for the investigation of the characteristics of operators and their parametrices, microlocal analysis is not practicable for the study of their non-singular features, and central results are typically stated only up to smooth objects. Consequently, Radzikowski's work almost directly led to existence results and, moreover, to a concrete pattern for the construction of Hadamard bidistributions via a Hadamard series. Nevertheless, the remaining properties (bisolution, causality, positivity) are ensured only modulo smooth functions.
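For orientation, the singularity structure fixed by the Hadamard condition can be written down explicitly. In four spacetime dimensions, and up to convention-dependent constants and the choice of a length scale λ (both assumptions here), a Hadamard bidistribution locally takes the form

```latex
% Local Hadamard form in four spacetime dimensions; the prefactor and the
% length scale \lambda are convention-dependent.
H(x,y) \;=\; \frac{1}{8\pi^{2}}\left(\frac{U(x,y)}{\sigma(x,y)}
  \;+\; V(x,y)\,\log\frac{\sigma(x,y)}{\lambda^{2}}\right) \;+\; W(x,y),
\qquad
V(x,y) \;=\; \sum_{n\geq 0} V_{n}(x,y)\,\sigma(x,y)^{n}
```

where σ denotes Synge's world function (half the squared geodesic distance), the Hadamard coefficients U and V_n are determined recursively by transport equations along geodesics and thus fix the singular part, and the smooth part W carries the state dependence.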
It is the subject of this thesis to complete this construction for linear and formally self-adjoint wave operators acting on sections in a vector bundle over a globally hyperbolic Lorentzian manifold. Based on Wightman's solution of d'Alembert's equation on Minkowski space and on the construction of the advanced and retarded fundamental solutions, we set up a Hadamard series for local parametrices and derive global bisolutions from them. These are of Hadamard form, and we show the existence of smooth bisections such that the sum also satisfies the remaining properties exactly.
This is a publication-based dissertation comprising three original research studies (one published, one submitted and one ready for submission; status March 2019). The dissertation introduces a generic computer model as a tool to investigate the behaviour and population dynamics of animals in cyclic environments. The model is further employed to analyse how migratory birds respond to various scenarios of altered food supply under global change. Here, ecological and evolutionary time scales are considered, as well as the biological constraints and trade-offs the individual faces, which ultimately shape response dynamics at the population level. Further, the effects of fine-scale temporal patterns in resource supply are studied, which is challenging to achieve experimentally. My findings predict population declines, altered behavioural timing and negative carry-over effects arising in migratory birds under global change. They thus stress the need for intensified research on how ecological mechanisms are affected by global change and for effective conservation measures for migratory birds. The open-source modelling software created for this dissertation can now be used for other taxa and related research questions. Overall, this thesis improves our mechanistic understanding of the impacts of global change on migratory birds as one prerequisite to comprehend ongoing global biodiversity loss. The research results are discussed in a broader ecological and scientific context in a concluding synthesis chapter.
With the emergence of the Internet of Things (IoT), plenty of battery-powered and energy-harvesting devices are being deployed to fulfill sensing and actuation tasks in a variety of application areas, such as smart homes, precision agriculture, smart cities, and industrial automation. In this context, a critical issue is that of denial-of-sleep attacks. Such attacks temporarily or permanently deprive battery-powered, energy-harvesting, or otherwise energy-constrained devices of entering energy-saving sleep modes, thereby draining their charge. At the very least, a successful denial-of-sleep attack causes a long outage of the victim device. Moreover, to put battery-powered devices back into operation, their batteries have to be replaced. This is tedious and may even be infeasible, e.g., if a battery-powered device is deployed at an inaccessible location. While the research community has come up with numerous defenses against denial-of-sleep attacks, most present-day IoT protocols include no denial-of-sleep defenses at all, presumably due to a lack of awareness and unsolved integration problems. Moreover, although there are many denial-of-sleep defenses, effective defenses against certain kinds of denial-of-sleep attacks are yet to be found.
The overall contribution of this dissertation is to propose a denial-of-sleep-resilient medium access control (MAC) layer for IoT devices that communicate over IEEE 802.15.4 links. Internally, our MAC layer comprises two main components. The first main component is a denial-of-sleep-resilient protocol for establishing session keys among neighboring IEEE 802.15.4 nodes. The established session keys serve the dual purpose of implementing (i) basic wireless security and (ii) complementary denial-of-sleep defenses that belong to the second main component. The second main component is a denial-of-sleep-resilient MAC protocol. Notably, this MAC protocol incorporates not only novel denial-of-sleep defenses, but also state-of-the-art mechanisms for achieving low energy consumption, high throughput, and high delivery ratios. Altogether, our MAC layer resists, or at least greatly mitigates, all denial-of-sleep attacks against it that we are aware of. Furthermore, our MAC layer is self-contained and thus can act as a drop-in replacement for IEEE 802.15.4-compliant MAC layers. In fact, we implemented our MAC layer in the Contiki-NG operating system, where it seamlessly integrates into an existing protocol stack.
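The dissertation's own protocol designs are not reproduced here, but the general principle that shared session keys let a node cheaply reject forged or replayed frames (instead of staying awake to process them) can be sketched. The following is a purely illustrative Python model with a truncated HMAC as message integrity code; the frame layout, field sizes, and function names are assumptions:

```python
import hmac, hashlib, os

# Illustrative principle only (not the thesis' MAC layer): with a shared
# session key, a receiver authenticates each frame via a short MIC and a
# monotonically increasing counter, dropping forgeries and replays early.

def make_frame(session_key: bytes, payload: bytes, counter: int) -> bytes:
    header = counter.to_bytes(4, "big")
    mic = hmac.new(session_key, header + payload, hashlib.sha256).digest()[:4]
    return header + payload + mic

def accept_frame(session_key: bytes, frame: bytes, last_counter: int) -> bool:
    header, payload, mic = frame[:4], frame[4:-4], frame[-4:]
    if int.from_bytes(header, "big") <= last_counter:  # replayed frame
        return False
    expected = hmac.new(session_key, header + payload,
                        hashlib.sha256).digest()[:4]
    return hmac.compare_digest(mic, expected)

key = os.urandom(16)
frame = make_frame(key, b"data", counter=1)
print(accept_frame(key, frame, last_counter=0))             # genuine: True
print(accept_frame(key, frame, last_counter=1))             # replay: False
print(accept_frame(os.urandom(16), frame, last_counter=0))  # wrong key: False
```

Real 802.15.4 security uses AES-CCM rather than HMAC-SHA-256, and a denial-of-sleep-resilient design must additionally bound the energy spent on the check itself.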
The ecological consequences of cryptic diversity in rotifers are well understood, but an in-depth comprehension of the underlying molecular mechanisms and of the forces driving speciation is still lacking. Temperature has repeatedly been found to affect species' spatio-temporal distribution and organisms' performance, but we lack information on the mechanisms that provide thermal tolerance to rotifers. High cryptic diversity was recently found in the freshwater rotifer “Brachionus calyciflorus”, showing that the complex comprises at least four species: B. calyciflorus sensu stricto (s.s.), B. fernandoi, B. dorcas, and B. elevatus. The temporal succession among species that has been observed in sympatry led to the idea that temperature might play a crucial role in species differentiation.
The central aim of this study was to unravel differences in thermal tolerance between species of the former B. calyciflorus species complex by comparing phenotypic and gene expression responses. More specifically, I used the critical maximum temperature as a proxy for inter-species differences in heat tolerance; this was modeled as a bi-dimensional phenotypic trait taking into consideration the intensity and the duration of heat stress. Significant differences in heat tolerance between species were detected, with B. calyciflorus s.s. being able to tolerate higher temperatures than B. fernandoi.
Based on evidence of within-species neutral genetic variation, I further examined adaptive genetic variability within two different mtDNA lineages of the heat-tolerant B. calyciflorus s.s. to identify SNPs and genes under selection that might reflect their adaptive history. These analyses did not reveal adaptive genetic variation related to heat; however, they did show putatively adaptive genetic variation which may reflect local adaptation. Functional enrichment of putatively positively selected genes revealed signals of adaptation in genes related to “lipid metabolism”, “xenobiotics biodegradation and metabolism” and “sensory system”, comprising candidate genes which can be utilized in studies on local adaptation. The absence of genetically based differences in thermal adaptation between the two mtDNA lineages, together with our knowledge that B. calyciflorus s.s. can withstand a broad range of temperatures, led to the idea to further investigate shared transcriptomic responses to long-term exposure to high- and low-temperature regimes. With this, I identified candidate genes that are involved in the response to temperature-imposed stress. Lastly, I used comparative transcriptomics to examine responses to imposed heat stress in heat-tolerant and heat-sensitive Brachionus species. I found considerably different patterns of gene expression in the two species. Most striking are the expression patterns of the heat shock proteins (hsps) in the two species. In the heat-tolerant B. calyciflorus s.s., significant up-regulation of hsps at low temperatures was indicative of a stress response at the cooler end of the temperature regimes tested here. In contrast, in the heat-sensitive B. fernandoi, hsps were generally up-regulated with rising temperatures. Overall, the identified differences in gene expression suggest suppression of protein biosynthesis as a mechanism to increase thermal tolerance.
Observed patterns in population growth are correlated with the hsp gene expression differences, indicating that this physiological stress response is indeed related to phenotypic life history performance.
Interactions and feedbacks between tectonics, climate, and upper plate architecture control basin geometry, relief, and depositional systems. The Andes is part of a long-lived continental margin characterized by multiple tectonic cycles which have strongly modified the Andean upper plate architecture. In the Andean retroarc, spatiotemporal variations in the structure of the upper plate and in tectonic regimes have resulted in marked along-strike variations in basin geometry, stratigraphy, deformational style, and mountain belt morphology. These along-strike variations include high-elevation plateaus (Altiplano and Puna) associated with a thin-skinned fold-and-thrust belt, and thick-skinned deformation in broken foreland basins such as the Santa Barbara system and the Sierras Pampeanas. At the confluence of the Puna Plateau, the Santa Barbara system and the Sierras Pampeanas, major along-strike changes in upper plate architecture, mountain belt morphology, basement exhumation, and deformation style can be recognized. I have used a source-to-sink approach to unravel the spatiotemporal tectonic evolution of the Andean retroarc between 26 and 28°S. I obtained a large low-temperature thermochronology data set from basement units which includes apatite fission track, apatite U-Th-Sm/He, and zircon U-Th/He (ZHe) cooling ages. Stratigraphic descriptions of Miocene units were temporally constrained by U-Pb LA-ICP-MS zircon ages from interbedded pyroclastic material.
Modeled ZHe ages suggest that the basement of the study area was exhumed during the Famatinian orogeny (550-450 Ma), followed by a period of relative tectonic quiescence during the Paleozoic and the Triassic. The basement experienced horst exhumation during the Cretaceous development of the Salta rift. After initial exhumation, deposition of thick Cretaceous syn-rift strata caused reheating of several basement blocks within the Santa Barbara system. During the Eocene-Oligocene, the Andean compressional setting was responsible for the exhumation of several disconnected basement blocks. These exhumed blocks were separated by areas of low relief, in which a humid climate and low erosion rates facilitated the development of etchplains on the crystalline basement. The exhumed basement blocks formed an Eocene to Oligocene broken foreland basin in the back-bulge depozone of the Andean foreland. During the Early Miocene, foreland basin strata filled up the preexisting Paleogene topography. The basement blocks in lower-relief positions were reheated; associated geothermal gradients were higher than 25°C/km. Miocene volcanism was responsible for lateral variations in the amount of reheating along the Campo-Arenal basin. Around 12 Ma, a new deformational phase modified the drainage network and fragmented the lacustrine system. As deformation and rock uplift continued, the easily eroded sedimentary cover was efficiently removed and reworked by an ephemeral fluvial system, preventing the development of significant relief. After ~6 Ma, the low erodibility of the basement blocks which began to be exposed caused an increase in relief, leading to the development of stable fluvial systems. Progressive relief development modified atmospheric circulation, creating a rainfall gradient. After 3 Ma, orographic rainfall and high relief led to the development of proximal fluvial-gravitational depositional systems in the surrounding basins.
Weakly electric mormyrid fish comprise about 200 species; 15 species of the genus Campylomormyrus have been described. These are very diverse concerning the trunk-like snout, the shape and duration of the electric organ discharge (EOD), and the anatomy of the electric organ (EO). In this dissertation, data on the reproduction in captivity of four species and on the ontogeny of the EOD and the EO of three species are presented.
Reproduction of the four species C. compressirostris, C. rhynchophorus, C. tshokwe and C. numenius: Cyclical reproduction was provoked by changing only water conductivity (C): decreasing C led to gonadal recrudescence, whereas an increase induced gonad regression. Data on the reproduction and development of three species are presented (in C. numenius, gonad development could only be achieved in males). Agonistic behavior in the C. tshokwe pair forced us to divide the breeding tank; therefore, only ovipositions occurred. However, injection of an artificial GnRH hormone allowed us to obtain ripe eggs and sperm and to perform successful artificial reproduction. All three species (C. compressirostris, C. rhynchophorus, C. tshokwe) are indeterminate fractional spawners. Spawnings/ovipositions occurred during the second half of the night; no parental care was observed; no special spawning substrates were necessary. C. compressirostris successfully spawned in breeding groups, C. rhynchophorus as a pair. Spawning intervals ranged from 6 to 66 days in C. rhynchophorus, 10–75 days in C. tshokwe, and 18 days in C. compressirostris (calculated values). Fecundities (eggs per fractional spawning) ranged from 70 to 1570 eggs in C. rhynchophorus, 100–1192 in C. tshokwe, and 38–246 in C. compressirostris. All three species produce yolky, slightly sticky eggs. Egg diameter ranges from 2.3 to 3.0 mm. Hatching occurred on day 3; feeding started on day 11. Transition from the larval to the juvenile stage occurred at around 20 mm total length (TL). At this size, C. rhynchophorus developed a higher body than the two other species, and differences between the species in the melanin pigmentation of the unpaired fins appeared. Between 32 and 35 mm TL, the upper and lower jaws developed.
C. compressirostris and C. tamandua are similar in morphology and both produce short EODs of ca. 150-200 μs duration. Both species reproduce easily in captivity. We tried to obtain natural hybrids in two breeding groups: 1) four males of C. compressirostris and three females of C. tamandua, and 2) six females of C. compressirostris and four males of C. tamandua. In both combinations, oviposition occurred several times; however, we never found fertilized eggs. In subsequent experiments, not described here, we obtained hybrids between these two species by means of artificial reproduction.
Ontogeny of the EOD and the EO: The Campylomormyrus species are very diverse concerning both the shape and the duration of their EODs. There are species with very short EODs of about 150-200 μs duration (e.g. C. compressirostris), a species with an EOD of about 4-8 ms duration (C. tshokwe), and species with very long EODs of about 25 ms duration (e.g. C. rhynchophorus). Due to the successful breeding of the three species in captivity, we were able to investigate the ontogeny of the EOD in detail. Larvae of the three species C. compressirostris, C. tshokwe and C. rhynchophorus first produce a biphasic larval EOD typical for these small larvae. The first activity of the adult electric organ in the caudal peduncle is a biphasic juvenile EOD. Juvenile C. compressirostris and C. tshokwe start out with a short biphasic EOD of about 160-200 μs duration at sizes between 25 mm (C. compressirostris) and 37 mm (C. tshokwe). Adult C. compressirostris show an EOD identical to that of the juvenile. In C. tshokwe, the juvenile EOD changes continuously during development concerning duration, amplitude and shape; even 18 cm long C. tshokwe do not yet produce an EOD typical for adult fish. Juveniles of C. rhynchophorus produce, at 33 mm total length, a juvenile biphasic EOD of longer duration (about 640 μs) than in the two species mentioned above. This juvenile EOD changes continuously in shape, amplitude and duration with growth until the adult EOD waveform appears at about 15 cm body length. In juveniles about seven cm long, the triphasic feature of the EOD starts to develop due to the appearance of a second head-positive phase. Specific EOD stages are produced in relation to size and not to age. Individual differences in the EOD, concerning both shape and duration, are very small. The basic anatomy of the electrocytes is very similar in all three species: the main stalk, which receives the innervation, is located at the caudal face of the electrocyte.
Membrane penetrations of the stalks do not occur. However, there are differences in the fine structure of the electrocytes in the three species. Papillae, proliferations of the membrane that increase the surface area of the electrocyte and are thought to increase EOD duration, are only found in C. tshokwe and C. rhynchophorus. In these two species, in addition, holes develop in the electrocytes during ontogeny. This might also have an impact on EOD duration.
Electrosynthesis and characterization of molecularly imprinted polymers for peptides and proteins
(2019)
The immense popularity of online communication services in the last decade has not only upended our lives (with news spreading like wildfire on the Web, presidents announcing their decisions on Twitter, and the outcome of political elections being determined on Facebook) but also dramatically increased the amount of data exchanged on these platforms. Therefore, if we wish to understand the needs of modern society better and want to protect it from new threats, we urgently need more robust, higher-quality natural language processing (NLP) applications that can recognize such necessities and menaces automatically, by analyzing uncensored texts. Unfortunately, most NLP programs today have been created for standard language, as we know it from newspapers, or, in the best case, adapted to the specifics of English social media.
This thesis reduces the existing deficit by entering the new frontier of German online communication and addressing one of its most prolific forms—users’ conversations on Twitter. In particular, it explores the ways and means by which people express their opinions on this service, examines current approaches to automatic mining of these feelings, and proposes novel methods, which outperform state-of-the-art techniques. For this purpose, I introduce a new corpus of German tweets that have been manually annotated with sentiments, their targets and holders, as well as lexical polarity items and their contextual modifiers. Using these data, I explore four major areas of sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained opinion mining, (iii) message-level polarity classification, and (iv) discourse-aware sentiment analysis. In the first task, I compare three popular groups of lexicon generation methods: dictionary-, corpus-, and word-embedding–based ones, finding that dictionary-based systems generally yield better polarity lists than the last two groups. Apart from this, I propose a linear projection algorithm, whose results surpass many existing automatically generated lexicons. Afterwards, in the second task, I examine two common approaches to automatic prediction of sentiment spans, their sources, and targets: conditional random fields (CRFs) and recurrent neural networks, obtaining higher scores with the former model and improving these results even further by redefining the structure of CRF graphs. When dealing with message-level polarity classification, I juxtapose three major sentiment paradigms: lexicon-, machine-learning–, and deep-learning–based systems, and try to unite the first and last of these method groups by introducing a bidirectional neural network with lexicon-based attention.
Finally, in order to make the new classifier aware of microblogs' discourse structure, I let it separately analyze the elementary discourse units (EDUs) of each tweet and infer the overall polarity of a message from the scores of its EDUs with the help of two new approaches: latent-marginalized CRFs and Recursive Dirichlet Process.
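The lexicon-based paradigm mentioned above can be illustrated with a toy polarity scorer. This is a minimal sketch, not the thesis's system: the lexicon entries, the negator list, and the scoring rule are all purely illustrative stand-ins for the annotated polarity items and contextual modifiers described in the abstract.

```python
# Minimal sketch of lexicon-based polarity classification: sum the prior
# polarities of known terms, flipping the sign after a contextual negation
# modifier. Lexicon and negator entries are illustrative only.
LEXICON = {"great": 1.0, "love": 1.0, "bad": -1.0, "hate": -1.0}
NEGATORS = {"not", "never", "no"}

def message_polarity(tokens):
    score, negated = 0.0, False
    for tok in tokens:
        if tok in NEGATORS:
            negated = True          # flip the polarity of the next polar term
        elif tok in LEXICON:
            score += -LEXICON[tok] if negated else LEXICON[tok]
            negated = False
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(message_polarity("i do not love this".split()))   # negative
print(message_polarity("great film".split()))           # positive
```

Real systems of this family additionally weight terms by context (intensifiers, diminishers) rather than applying a hard sign flip.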
There is evidence that infants start extracting words from fluent speech around 7.5 months of age (e.g., Jusczyk & Aslin, 1995) and that they use at least two mechanisms to segment word forms from fluent speech: prosodic information (e.g., Jusczyk, Cutler & Redanz, 1993) and statistical information (e.g., Saffran, Aslin & Newport, 1996). However, how these two mechanisms interact and whether they change during development is still not fully understood.
The main aim of the present work is to understand in what way different cues to word segmentation are exploited by infants when learning the language in their environment, and to explore whether this ability is related to later language skills. In Chapter 3 we sought to determine the reliability of the method used in most of the experiments in the present thesis (the Headturn Preference Procedure), and to examine correlations and individual differences between infants’ performance and later language outcomes. In Chapter 4 we investigated how German-speaking adults weigh statistical and prosodic information for word segmentation. We familiarized adults with an auditory string in which statistical and prosodic information indicated different word boundaries and obtained both behavioral and pupillometry responses. We then conducted further experiments to understand in what way different cues to word segmentation are exploited by 9-month-old (Chapter 5) and 6-month-old (Chapter 6) German-learning infants. In addition, we conducted follow-up questionnaires with the infants and obtained language outcomes at later stages of development.
Our findings revealed that (1) German-speaking adults weight prosodic cues strongly, at least for the materials used in this study, and that (2) German-learning infants weigh these two kinds of cues differently depending on age and/or language experience. We observed that, unlike English-learning infants, 6-month-olds relied more strongly on prosodic cues, while 9-month-olds showed no preference for either cue in the word segmentation task. From the present results it remains unclear whether the ability to use prosodic cues for word segmentation relates to later vocabulary. We speculate that prosody provides infants with their first window into the specific acoustic regularities of the signal, which enables them to master the specific stress pattern of German rapidly. Our findings are a step forward in understanding the early impact of native prosody, compared to statistical learning, on early word segmentation.
How species assemble from a regional pool into local metacommunities, and how they colonize and coexist over time and space, is essential to understand how communities respond to their environment, including abiotic and biotic factors. In highly disturbed landscapes, connectivity of isolated habitat patches is essential to maintain biodiversity and entire ecosystem functioning. In northeast Germany, the high density of small water bodies called kettle holes makes them a good system to study metacommunities, since they act as “aquatic islands” suitable for hygrophilous species, surrounded by an unsuitable matrix of crop fields. The main objective of this thesis was to infer the main ecological processes shaping plant communities and their response to the environment from biodiversity patterns and key life-history traits involved in connectivity, using ecological and genetic approaches, and to provide first insights into the role of kettle holes in harboring wild-bee species as important mobile linkers connecting plant communities in this insular system.
At a community level, I compared plant diversity patterns and trait composition in ephemeral vs. permanent kettle holes. My results showed that the type of kettle hole acts as an environmental filter shaping plant diversity, community composition and trait distribution, suggesting species sorting and niche processes in both types of kettle holes. At a population level, I further analyzed the role of dispersal and reproductive strategies of four selected species occurring in permanent kettle holes. Using microsatellites, I found that the breeding system (degree of clonality) is the main factor shaping genetic diversity and genetic divergence. Higher gene flow and lower genetic differentiation among populations were also found in wind- vs. insect-pollinated species, suggesting that dispersal mechanisms play a role in gene flow and connectivity. For most flowering plants, pollinators play an important role in connecting communities. Therefore, as a first insight into the potential mobile linkers of these plant communities, I investigated the diversity of wild bees occurring in these kettle holes. My main results showed that local habitat quality (flower resources) had a positive effect on bee diversity, while habitat heterogeneity (number of natural landscape elements within 100–300 m of the kettle holes) was negatively correlated with it.
This thesis spans from gene flow at the individual and population level to plant community assembly. My results showed how patterns of biodiversity and of dispersal and reproduction strategies in plant populations and communities can be used to infer ecological processes. In addition, I showed the importance of life-history traits and of the relationships between species and their abiotic and biotic interactions. Furthermore, I included a different level of mobile linkers (pollinators) for a better understanding of another level of the system. This integration is essential to understand how communities respond to their surrounding environment and how disturbances such as agriculture, land use and climate change might affect them. I highlight the need to integrate many scientific areas, from genes to ecosystems and across spatiotemporal scales, for a better understanding, management and conservation of our ecosystems.
A new model that links visionary leadership with team performance is postulated. It is proposed that leader prototypicality negatively moderates the effect of visionary leadership on team goal monitoring and performance. This model underlines that teams compensate for the lower prototypicality of a visionary leader by engaging in more goal monitoring, a process that is conducive to team performance. A field study comprising 60 teams, 180 individuals, and 60 team leaders was conducted in Egypt. Parameters were collected at the individual level.
Aggregation measures (rwg, ICC1 & ICC2) were acceptable, and averages were calculated for each team. The proposed three-factor model exhibited a reasonable fit to the data: χ2(130) = 259.93, p < 0.01; CFI = 0.90; RMSEA = 0.13. The hypothesized negative moderation effect of leader prototypicality on the relationship between visionary leadership and team goal monitoring was statistically significant (-0.16; s.e. = 0.06; t = -3.13; p < 0.01; 95% CI: -0.31, -0.07). Results showed a significant index of moderated mediation (-0.07; s.e. = 0.05; 95% CI: -0.20, -0.01). As predicted, the indirect effect of visionary leadership on team performance mediated by team goal monitoring was more strongly positive when leader prototypicality was low (b = 0.27; s.e. = 0.16; 95% CI: 0.04, 0.68), rather than high (b = 0.13; s.e. = 0.10; 95% CI: 0.01, 0.45). A proposal for extending the dimensions of identity-based leadership is discussed. This dissertation makes four significant contributions to theory and research on leadership. First, the main contribution of this research lies in showing that visionary leadership is more strongly positively related to team performance when leader prototypicality is low, rather than high. Second, this dissertation contributes toward overcoming the fragmentation in the leadership literature by integrating the literatures on visionary leadership and leader-team prototypicality. Third, team goal monitoring was identified as a mechanism that explains the interactive effects of visionary leadership and leader prototypicality on team performance. Fourth, this study tests the postulated research model in Egypt, a culture that has in the past received scant attention.
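The conditional indirect effects reported above follow the standard algebra of first-stage moderated mediation, which can be sketched as follows. The path coefficients below are hypothetical, not the dissertation's estimates; only the formulas ((a1 + a3·W)·b for the conditional indirect effect, a3·b for the index of moderated mediation) are standard.

```python
# First-stage moderated mediation (illustrative coefficients):
#   M = a0 + a1*X + a2*W + a3*X*W   (mediator model)
#   Y = b0 + b*M + c*X              (outcome model)
# Conditional indirect effect of X on Y through M at moderator value W:
def conditional_indirect(a1, a3, b, w):
    return (a1 + a3 * w) * b

a1, a3, b = 0.50, -0.16, 0.40       # hypothetical path coefficients
index_mod_med = a3 * b              # negative: mediation weakens as W rises
low = conditional_indirect(a1, a3, b, -1.0)   # moderator one SD below mean
high = conditional_indirect(a1, a3, b, 1.0)   # moderator one SD above mean
print(round(index_mod_med, 3), round(low, 3), round(high, 3))
```

With a negative a3, the indirect effect is stronger at low moderator values than at high ones, mirroring the pattern reported for leader prototypicality.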
Alexander Rhode investigates performance-oriented measures of Contracting Authorities in public tenders conducted within the EU. He finds that Contracting Authorities can improve their performance and attract more suppliers by publishing (as precise as possible) starting prices at the beginning of a tender. First, he reports that, in contrast to private-sector negotiations, starting prices do not create entry barriers in public procurement. Second, he finds that increased numerical precision of starting prices is linearly correlated with better performance and a higher number of bids. In public procurement, suppliers tend to attribute increased credibility to precise starting prices, which reduces their (perceived) entry risks.
Water is essential to life and thus an essential resource. However, freshwater resources are limited and their maintenance is crucial. Pollution with chemicals and pathogens through urbanization and a growing population impairs the quality of freshwater. Furthermore, water can serve as a vector for the transmission of pathogens, resulting in water-borne illness.
The Interdisciplinary Research Group III – "Water" of the Leibniz alliance project INFECTIONS‘21 investigated water as a hub for pathogens focusing on Clostridioides difficile and avian influenza A viruses that may be shed into the water. Another aim of this study was to characterize the bacterial communities in a wastewater treatment plant (WWTP) of the capital Berlin, Germany to further assess potential health risks associated with wastewater management practices.
Bacterial communities of the WWTP inflow and effluent differed significantly. The proportion of fecal/enteric bacteria was relatively low, and OTUs related to potential enteric pathogens were largely removed from inflow to effluent. However, a health risk might exist, as an increased relative abundance of potentially pathogenic Legionella spp. such as L. lytica was observed. Three Clostridioides difficile isolates were obtained from the wastewater inflow and an urban bathing lake in Berlin ('Weisser See') and sequenced. The two isolates from the wastewater did not carry toxin genes, whereas the isolate from the lake was positive for the toxin genes. All three isolates were closely related to human strains. This indicates a potential, but rather sporadic, health risk. Avian influenza A viruses were detected in 38.8% of sediment samples by PCR, but virus isolation failed. An experiment with inoculated freshwater and sediment samples showed that virus isolation from sediment requires relatively high virus concentrations and worked much better in Madin-Darby canine kidney (MDCK) cell cultures than in embryonated chicken eggs, whereas a low titre of influenza virus in freshwater samples was sufficient to recover virus.
In conclusion, this work revealed potential health risks arising from bacterial groups with pathogenic potential, such as Legionella spp., whose relative abundance was higher in the released effluent than in the inflow of the investigated WWTP. It further indicates that water bodies such as wastewater and lake sediments can serve as reservoirs and vectors, even for pathogens that are not typically water-borne or water-transmitted, such as C. difficile.
Supermassive black holes reside in the hearts of almost all massive galaxies. Their evolutionary path seems to be strongly linked to the evolution of their host galaxies, as implied by several empirical relations between the black hole mass (M_BH) and different host galaxy properties. The physical driver of this co-evolution is, however, still not understood. More mass measurements over homogeneous samples and a detailed understanding of systematic uncertainties are required to fathom the origin of the scaling relations.
In this thesis, I present mass estimates for the supermassive black holes in the nuclei of one late-type and thirteen early-type galaxies. Our SMASHING sample extends from the intermediate to the massive galaxy mass regime and was selected to fill gaps in the number of galaxies along the scaling relations. All galaxies were observed at high spatial resolution, making use of the adaptive-optics mode of integral field unit (IFU) instruments on state-of-the-art telescopes (SINFONI, NIFS, MUSE). I extracted the stellar kinematics from these observations and constructed dynamical Jeans and Schwarzschild models to estimate the mass of the central black holes robustly. My new mass estimates increase the number of early-type galaxies with measured black hole masses by 15%. The seven measured galaxies with nuclear light deficits ('cores') augment the sample of cored galaxies with measured black holes by 40%. Besides determining black hole masses, evaluating their accuracy is crucial for understanding the intrinsic scatter of the black hole-host galaxy scaling relations. I therefore tested various sources of systematic uncertainty in my derived mass estimates.
The M_BH estimate for the single late-type galaxy of the sample yielded an upper limit, which I could constrain very robustly. I tested the effects of dust, mass-to-light ratio (M/L) variation, and dark matter on the measured M_BH. Based on these tests, the typically assumed constant M/L can be an adequate assumption to account for the small amounts of dark matter in the center of that galaxy. I also tested the effect of a spatially variable M/L on the M_BH measurement in a second galaxy. When stellar M/L variations were considered in the dynamical modeling, the measured M_BH decreased by 30%. In the future, this test should be performed on additional galaxies to learn how assuming a constant M/L biases the estimated black hole masses.
Based on our upper-limit mass measurement, I confirm previous suggestions that resolving the predicted BH sphere-of-influence is not a strict condition for measuring black hole masses. Instead, it is only a rough guide for the detection of the black hole if high-quality, high signal-to-noise IFU data are used for the measurement. About half of our sample consists of massive early-type galaxies that show nuclear surface brightness cores and signs of triaxiality. While these types of galaxies are typically modeled with axisymmetric methods, the effects of this on M_BH are not yet well studied. The massive galaxies of our sample are well suited to test the effect of different stellar dynamical models on the measured black hole mass in evidently triaxial galaxies. I have compared spherical Jeans and axisymmetric Schwarzschild models and will add triaxial Schwarzschild models to this comparison in the future. The constructed Jeans and Schwarzschild models mostly disagree with each other and cannot reproduce many of the triaxial features of the galaxies (e.g., nuclear sub-components, prolate rotation). The consequence of the axisymmetric assumption for the accuracy of M_BH in triaxial galaxies, and its impact on the black hole-host galaxy relation, needs to be carefully examined in the future.
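The sphere-of-influence criterion discussed above is commonly defined as r_SOI = G·M_BH/σ², the radius inside which the black hole dominates the gravitational potential over the stars. The short sketch below evaluates this definition for illustrative values (a 10⁹ solar-mass black hole, σ = 250 km/s, 20 Mpc distance); the numbers are not taken from the thesis.

```python
# Sphere-of-influence radius r_SOI = G * M_BH / sigma^2 and its angular size.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
PC = 3.086e16          # parsec, m

def r_soi_pc(m_bh_msun, sigma_kms):
    """r_SOI in parsec for M_BH in solar masses, dispersion sigma in km/s."""
    return G * m_bh_msun * M_SUN / (sigma_kms * 1e3) ** 2 / PC

def angular_size_arcsec(r_pc, distance_mpc):
    """Small-angle approximation theta ~ r / D, converted to arcseconds."""
    return r_pc / (distance_mpc * 1e6) * 206265.0

r = r_soi_pc(1e9, 250.0)                     # a massive early-type galaxy
print(round(r, 1), "pc,", round(angular_size_arcsec(r, 20.0), 2), "arcsec")
```

For such a galaxy r_SOI is of order tens of parsec, i.e. well under an arcsecond at typical distances, which is why adaptive-optics IFU observations matter for these measurements.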
In the sample of galaxies with published M_BH, we find measurements based on different dynamical tracers, requiring different observations, assumptions, and methods. Crucially, different tracers do not always give consistent results. I have used two independent tracers (cold molecular gas and stars) to estimate M_BH in a regular galaxy of our sample. While the two estimates are consistent within their errors, the stellar-based measurement is twice as high as the gas-based one. Similar trends have also been found in the literature. Therefore, a rigorous test of the systematics associated with the different modeling methods is required in the future. I caution that the effects of different tracers (and methods) must be taken into account when discussing the scaling relations.
I conclude this thesis by comparing my galaxy sample with the compilation of galaxies with measured black holes from the literature, also adding six SMASHING galaxies that were published outside of this thesis. None of the SMASHING galaxies deviates significantly from the literature measurements. Their inclusion among the published early-type galaxies causes a change towards a shallower slope of the M_BH-effective velocity dispersion relation, which is mainly driven by the massive galaxies of our sample. More unbiased and homogeneous measurements are needed in the future to determine the shape of the relation and understand its physical origin.
Business process management is an established technique for business organizations to manage and support their processes. Those processes are typically represented by graphical models designed with modeling languages, such as the Business Process Model and Notation (BPMN).
Since process models do not only serve the purpose of documentation but are also a basis for the implementation and automation of processes, they have to satisfy certain correctness requirements. In this regard, the notion of soundness of workflow nets was developed, which can be applied to BPMN process models in order to verify their correctness. Because the original soundness criteria are very restrictive regarding the behavior of the model, different variants of the soundness notion have been developed for situations in which certain violations are not actually harmful.
All of those notions consider only the control-flow structure of a process model, however. This poses a problem given that, with the recent release and the ongoing development of the Decision Model and Notation (DMN) standard, an increasing number of process models are complemented by respective decision models. DMN is a dedicated modeling language for decision logic and separates the concerns of process and decision logic into two different models: process and decision models, respectively.
Hence, this thesis is concerned with the development of decision-aware soundness notions, i.e., notions of soundness that build upon the original soundness ideas for process models, but additionally take into account complementary decision models. Similar to the various notions of workflow net soundness, this thesis investigates different notions of decision soundness that can be applied depending on the desired degree of restrictiveness. Since decision tables are a standardized means of DMN to represent decision logic, this thesis also puts special focus on decision tables, discussing how they can be translated into an unambiguous format and how their possible output values can be efficiently determined.
Moreover, a prototypical implementation is described that supports checking a basic version of decision soundness. The decision soundness notions were also empirically evaluated on models from participants of an online course on process and decision modeling as well as from a process management project of a large insurance company. The evaluation demonstrates that violations of decision soundness indeed occur and can be detected with our approach.
The Himalayas are a region that depends heavily on meltwater resources but is also frequently exposed to hazards as those resources change. This mountain belt hosts the highest mountain peaks on Earth, holds the largest reserve of ice outside the polar regions, and has been home to a rapidly growing population in recent decades. One source of hazard in particular has attracted scientific research in the past two decades: glacial lake outburst floods (GLOFs), which occur rarely but mostly with fatal and catastrophic consequences for downstream communities and infrastructure. A GLOF can suddenly release several million cubic meters of water from a naturally impounded meltwater lake. Glacial lakes have grown in number and size owing to ongoing glacial mass losses in the Himalayas. Theory holds that enhanced meltwater production may increase GLOF frequency, but this notion has never been tested. The key challenge in testing it is the high altitude of >4000 m at which these lakes occur, making field work impractical. Moreover, flood waves can attenuate rapidly in mountain channels downstream, so that many GLOFs have likely gone unnoticed in past decades. Our knowledge of GLOFs is hence likely biased towards larger, destructive cases, which hampers a detailed quantification of their frequency and their response to atmospheric warming. Robustly quantifying the magnitude and frequency of GLOFs is essential for risk assessment and management along mountain rivers, not least to implement their return periods in building design codes.
Motivated by this limited knowledge of GLOF frequency and hazard, I developed an algorithm that efficiently detects GLOFs from satellite images. In essence, this algorithm classifies land cover in 30 years (~1988–2017) of continuously recorded Landsat images over the Himalayas and calculates likelihoods for rapidly shrinking water bodies in the stack of land-cover images. I visually assessed the sites detected this way for sediment fans in the river channel downstream, a second key diagnostic of GLOFs. Rigorous tests and validation with known cases from roughly 10% of the Himalayas suggested that this algorithm is robust against frequent image noise and hence capable of identifying previously unknown GLOFs. Extending the search to the entire Himalayan mountain range revealed 22 newly detected GLOFs. I thus more than doubled the existing GLOF count from the 16 cases known since 1988, and found a dominant cluster of GLOFs in the Central and Eastern Himalayas (Bhutan and Eastern Nepal), whereas ranges in the North were affected more rarely. Yet, the total of 38 GLOFs showed no change in annual frequency, so that GLOF activity per unit glacial lake area has decreased over the past 30 years. I discussed possible drivers for this finding, but left a further attribution to distinct GLOF-triggering mechanisms open to future research.
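The core detection idea can be sketched in a few lines. This is a deliberately simplified stand-in for the thesis's algorithm: it assumes binary water masks have already been derived from the land-cover classification, and flags a lake whose water extent drops sharply between consecutive scenes, the tell-tale of a sudden drainage. Thresholds and data are illustrative.

```python
# Sketch: flag sudden lake drainage in a time stack of binary water masks.
import numpy as np

def sudden_shrink_score(water_stack):
    """water_stack: (n_scenes, n_pixels) boolean array for one lake's
    bounding box. Returns the largest fractional single-step area loss."""
    area = water_stack.sum(axis=1).astype(float)
    prev = np.maximum(area[:-1], 1.0)        # avoid division by zero
    losses = (area[:-1] - area[1:]) / prev
    return float(losses.max())

rng = np.random.default_rng(0)
stable = rng.random((10, 100)) < 0.8          # noisy but persistent lake
drained = stable.copy()
drained[6:] &= rng.random((4, 100)) < 0.1     # lake nearly vanishes at t = 6
print(sudden_shrink_score(stable), sudden_shrink_score(drained))
```

A real pipeline would additionally propagate per-pixel classification uncertainty into a likelihood, as described above, and confirm candidates with the downstream sediment-fan diagnostic.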
This updated GLOF frequency was the key input for assessing GLOF hazard for the entire Himalayan mountain belt and several subregions. I used standard definitions in flood hydrology, describing hazard as the annual probability that a given flood peak discharge [m3 s-1] is reached or exceeded at the breach location. I coupled the empirical frequency of GLOFs per region to simulations of physically plausible peak discharges from all ~5,000 existing lakes in the Himalayas. Using an extreme-value model, I could hence calculate flood return periods. I found that the contemporary 100-year GLOF discharge (the flood level that is reached or exceeded on average once in 100 years) is 20,600+2,200/–2,300 m3 s-1 for the entire Himalayas. Given the spatial and temporal distribution of historic GLOFs, contemporary GLOF hazard is highest in the Eastern Himalayas and lower in regions where GLOFs are rarer. I also calculated GLOF hazard for some 9,500 overdeepenings, which could become exposed and fill with water if all Himalayan glaciers eventually melt. Assuming that the current GLOF rate remains unchanged, the 100-year GLOF discharge could double (41,700+5,500/–4,700 m3 s-1), while the regional GLOF hazard may increase most in the Karakoram.
To conclude, these three stages (GLOF detection, frequency analysis, and regional hazard estimation) provide a framework for modern GLOF hazard assessment. Given the rapidly growing population, infrastructure, and hydropower projects in the Himalayas, this thesis helps quantify the purely climate-driven contribution to hazard and risk from GLOFs.
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails incorporating measurement information into the model to gain more insight into a given state governed by a noisy state-space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete, and hence uncertainties are approximated by means of probabilities. Time-continuous filtering therefore holds promise for wider usefulness, for it offers a means of combining noisy measurements with an imperfect model to provide more insight on a given state.
The solution to the time-continuous nonlinear Gaussian filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution. Moreover, numerical approximations based on Taylor expansions above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo techniques have been resorted to. Chief among these are sequential Monte Carlo methods (or particle filters), for they allow online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and the computational costs arising from resampling.
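The predict-weight-resample cycle of the bootstrap particle filter, including the resampling step whose cost motivates the feedback particle filters studied in this thesis, can be sketched as follows. This is a textbook, discrete-time sketch on a 1-D linear-Gaussian model chosen for brevity; all parameters are illustrative.

```python
# Minimal bootstrap particle filter for x_t = a*x_{t-1} + N(0, q),
# y_t = x_t + N(0, r). Illustrates predict, weight, and resample steps.
import numpy as np

def bootstrap_pf(observations, n_particles=500, a=0.9, q=0.5, r=0.5, seed=1):
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # predict: propagate each particle through the state dynamics
        particles = a * particles + rng.normal(0.0, np.sqrt(q), n_particles)
        # weight: likelihood of the observation under each particle
        w = np.exp(-0.5 * (y - particles) ** 2 / r)
        w /= w.sum()
        estimates.append(float(np.dot(w, particles)))
        # resample: multinomial resampling (the costly step feedback
        # particle filters are designed to avoid)
        idx = rng.choice(n_particles, size=n_particles, p=w)
        particles = particles[idx]
    return estimates

# Simulate a short trajectory and filter its noisy observations.
rng = np.random.default_rng(0)
x, xs, ys = 0.0, [], []
for _ in range(50):
    x = 0.9 * x + rng.normal(0.0, np.sqrt(0.5))
    xs.append(x)
    ys.append(x + rng.normal(0.0, np.sqrt(0.5)))
est = bootstrap_pf(ys)
mse = float(np.mean([(e - t) ** 2 for e, t in zip(est, xs)]))
print("filter MSE vs. truth:", round(mse, 2))
```

In this linear-Gaussian case the filter estimate tracks the true state with an error well below the observation noise variance; degeneracy and impoverishment only become acute in higher dimensions or with peaked likelihoods.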
The goals of this thesis are: (i) to review the derivation of the Kushner-Stratonovich equation from first principles along with its extant numerical approximation methods; (ii) to study feedback particle filters as a way of avoiding resampling in particle filters; (iii) to study joint state and parameter estimation in time-continuous settings; and (iv) to apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between the Itô and Stratonovich formulations of stochastic integrals and stochastic partial differential equations is introduced in anticipation of feedback particle filters. With these ideas, and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on coupling of prediction and analysis measures are proposed. They register a better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation to the advection and wave equations with spatially varying velocity. Two methods are employed: Metropolis-Hastings with a filter likelihood, and a dual filter comprising a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.
The Italian Army’s participation in Hitler’s war against the Soviet Union has remained unrecognized and understudied. Bastian Matteo Scianna offers a wide-ranging, in-depth corrective. Mining Italian, German and Russian sources, he examines the history of the Italian campaign in the East between 1941 and 1943, as well as how the campaign was remembered and memorialized in the domestic and international arena during the Cold War. Linking operational military history with memory studies, this book revises our understanding of the Italian Army in the Second World War.
Owing to its bioavailability and (bio)degradability, poly(lactide) (PLA) is an interesting polymer that is already used as a packaging material, surgical suture, and drug delivery system. Depending on various parameters, such as polymer composition, amphiphilicity, sample preparation, and the enantiomeric purity of the lactide, the PLA block in an amphiphilic block copolymer can affect the self-assembly behavior dramatically. Since the sizes and shapes of aggregates critically affect the interactions between biological systems and drug delivery systems, a general understanding of these polymers and their ability to influence self-assembly is of significant interest in science.
The first part of this thesis describes the synthesis and study of a series of linear poly(L-lactide) (PLLA) and poly(D-lactide) (PDLA)-based amphiphilic block copolymers with varying PLA (hydrophobic), and poly(ethylene glycol) (PEG) (hydrophilic) chain lengths and different block copolymer sequences (PEG-PLA and PLA-PEG). The PEG-PLA block copolymers were synthesized by ring-opening polymerization of lactide initiated by a PEG-OH macroinitiator. In contrast, the PLA-PEG block copolymers were produced by a Steglich-esterification of modified PLA with PEG-OH.
The aqueous self-assembly at room temperature of the enantiomerically pure PLLA-based block copolymers and of their stereocomplexed mixtures was investigated by dynamic light scattering (DLS), transmission electron microscopy (TEM), wide-angle X-ray diffraction (WAXD), and differential scanning calorimetry (DSC). Spherical micelles and worm-like structures were obtained, with the resulting morphologies governed by the lactide weight fraction in the block copolymer and by the self-assembly time. The formation of worm-like structures increases with decreasing PLA chain length: they arise from spherical micelles that become colloidally unstable and undergo epitaxial fusion with other micelles. As shown by DSC experiments, the crystallinity of the corresponding PLA blocks increases over the self-assembly time. The stereocomplexed self-assembled structures, however, behave differently from the parent polymers and form irregularly shaped clusters of spherical micelles. In addition, time-dependent self-assembly experiments showed a transformation of already self-assembled morphologies of various shapes into more compact micelles upon stereocomplexation.
In the second part of this thesis, with the objective of influencing the self-assembly of PLA-based block copolymers and their stereocomplexes, poly(methyl phosphonate) (PMeP) and poly(isopropyl phosphonate) (PiPrP) were produced by ring-opening polymerization as an alternative to the hydrophilic PEG block. Although the 1,8-diazabicyclo[5.4.0]undec-7-ene (DBU)- or 1,5,7-triazabicyclo[4.4.0]dec-5-ene (TBD)-mediated synthesis of the corresponding poly(alkyl phosphonate)s was successful, the polymerization of copolymers with PLA-based precursors (PLA homopolymers and PEG-PLA block copolymers) was not. Transesterification between the poly(phosphonate) and PLA blocks, observed by 1H-NMR spectroscopy, caused a high-field-shifted peak splitting of the methine proton of the PLA chain, with splitting intensities that depended on the catalyst used (DBU for PMeP and TBD for PiPrP polymerization). An additionally prepared block copolymer, PiPrP-PLLA, whose polymer sequence was unaffected, was finally used for self-assembly experiments in mixtures with PLA-PEG and PEG-PLA.
This work provides a comprehensive study of the self-assembly behavior of PLA-based block copolymers as influenced by various parameters such as polymer block lengths, self-assembly time, and the stereocomplexation of block copolymer mixtures.