Refine
Has Fulltext
- yes (2540) (remove)
Year of publication
Document Type
- Doctoral Thesis (2540) (remove)
Language
Keywords
- climate change (53)
- Klimawandel (51)
- Modellierung (34)
- Nanopartikel (28)
- machine learning (22)
- Fernerkundung (20)
- Synchronisation (19)
- remote sensing (18)
- Spracherwerb (17)
- Blickbewegungen (16)
Institute
- Institut für Physik und Astronomie (406)
- Institut für Biochemie und Biologie (387)
- Institut für Geowissenschaften (328)
- Institut für Chemie (304)
- Extern (151)
- Institut für Umweltwissenschaften und Geographie (122)
- Institut für Ernährungswissenschaft (102)
- Wirtschaftswissenschaften (97)
- Hasso-Plattner-Institut für Digital Engineering GmbH (92)
- Department Psychologie (89)
Trunk loading and back pain
(2017)
An essential function of the trunk is the compensation of external forces and loads in order to guarantee stability. Stabilising the trunk during sudden, repetitive loading in everyday tasks, as well as during performance is important in order to protect against injury. Hence, reduced trunk stability is accepted as a risk factor for the development of back pain (BP). An altered activity pattern including extended response and activation times as well as increased co-contraction of the trunk muscles as well as a reduced range of motion and increased movement variability of the trunk are evident in back pain patients (BPP). These differences to healthy controls (H) have been evaluated primarily in quasi-static test situations involving isolated loading directly to the trunk. Nevertheless, transferability to everyday, dynamic situations is under debate. Therefore, the aim of this project is to analyse 3-dimensional motion and neuromuscular reflex activity of the trunk as response to dynamic trunk loading in healthy (H) and back pain patients (BPP).
A measurement tool was developed to assess trunk stability, consisting of dynamic test situations. During these tests, loading of the trunk is generated by the upper and lower limbs with and without additional perturbation. Therefore, lifting of objects and stumbling while walking are adequate represents. With the help of a 12-lead EMG, neuromuscular activity of the muscles encompassing the trunk was assessed. In addition, three-dimensional trunk motion was analysed using a newly developed multi-segmental trunk model. The set-up was checked for reproducibility as well as validity. Afterwards, the defined measurement set-up was applied to assess trunk stability in comparisons of healthy and back pain patients.
Clinically acceptable to excellent reliability could be shown for the methods (EMG/kinematics) used in the test situations. No changes in trunk motion pattern could be observed in healthy adults during continuous loading (lifting of objects) of different weights. In contrast, sudden loading of the trunk through perturbations to the lower limbs during walking led to an increased neuromuscular activity and ROM of the trunk. Moreover, BPP showed a delayed muscle response time and extended duration until maximum neuromuscular activity in response to sudden walking perturbations compared to healthy controls. In addition, a reduced lateral flexion of the trunk during perturbation could be shown in BPP.
It is concluded that perturbed gait seems suitable to provoke higher demands on trunk stability in adults. The altered neuromuscular and kinematic compensation pattern in back pain patients (BPP) can be interpreted as increased spine loading and reduced trunk stability in patients. Therefore, this novel assessment of trunk stability is suitable to identify deficits in BPP. Assignment of affected BPP to therapy interventions with focus on stabilisation of the trunk aiming to improve neuromuscular control in dynamic situations is implied. Hence, sensorimotor training (SMT) to enhance trunk stability and compensation of unexpected sudden loading should be preferred.
Trends in precipitation over Germany and the Rhine basin related to changes in weather patterns
(2017)
Precipitation as the central meteorological feature for agriculture, water security, and human well-being amongst others, has gained special attention ever since. Lack of precipitation may have devastating effects such as crop failure and water scarcity. Abundance of precipitation, on the other hand, may as well result in hazardous events such as flooding and again crop failure. Thus, great effort has been spent on tracking changes in precipitation and relating them to underlying processes. Particularly in the face of global warming and given the link between temperature and atmospheric water holding capacity, research is needed to understand the effect of climate change on precipitation.
The present work aims at understanding past changes in precipitation and other meteorological variables. Trends were detected for various time periods and related to associated changes in large-scale atmospheric circulation. The results derived in this thesis may be used as the foundation for attributing changes in floods to climate change. Assumptions needed for the downscaling of large-scale circulation model output to local climate stations are tested and verified here.
In a first step, changes in precipitation over Germany were detected, focussing not only on precipitation totals, but also on properties of the statistical distribution, transition probabilities as a measure for wet/dry spells, and extreme precipitation events.
Shifting the spatial focus to the Rhine catchment as one of the major water lifelines of Europe and the largest river basin in Germany, detected trends in precipitation and other meteorological variables were analysed in relation to states of an ``optimal'' weather pattern classification. The weather pattern classification was developed seeking the best skill in explaining the variance of local climate variables.
The last question addressed whether observed changes in local climate variables are attributable to changes in the frequency of weather patterns or rather to changes within the patterns itself. A common assumption for a downscaling approach using weather patterns and a stochastic weather generator is that climate change is expressed only as a changed occurrence of patterns with the pattern properties remaining constant. This assumption was validated and the ability of the latest generation of general circulation models to reproduce the weather patterns was evaluated.
% Paper 1
Precipitation changes in Germany in the period 1951-2006 can be summarised briefly as negative in summer and positive in all other seasons. Different precipitation characteristics confirm the trends in total precipitation: while winter mean and extreme precipitation have increased, wet spells tend to be longer as well (expressed as increased probability for a wet day followed by another wet day). For summer the opposite was observed: reduced total precipitation, supported by decreasing mean and extreme precipitation and reflected in an increasing length of dry spells.
Apart from this general summary for the whole of Germany, the spatial distribution within the country is much more differentiated. Increases in winter precipitation are most pronounced in the north-west and south-east of Germany, while precipitation increases are highest in the west for spring and in the south for autumn. Decreasing summer precipitation was observed in most regions of Germany, with particular focus on the south and west.
The seasonal picture, however, was again differently represented in the contributing months, e.g.\ increasing autumn precipitation in the south of Germany is formed by strong trends in the south-west in October and in the south-east in November. These results emphasise the high spatial and temporal organisation of precipitation changes.
% Paper 2
The next step towards attributing precipitation trends to changes in large-scale atmospheric patterns was the derivation of a weather pattern classification that sufficiently stratifies the local climate variables under investigation. Focussing on temperature, radiation, and humidity in addition to precipitation, a classification based on mean sea level pressure, near-surface temperature, and specific humidity was found to have the best skill in explaining the variance of the local variables. A rather high number of 40 patterns was selected, allowing typical pressure patterns being assigned to specific seasons by the associated temperature patterns. While the skill in explaining precipitation variance is rather low, better skill was achieved for radiation and, of course, temperature.
Most of the recent GCMs from the CMIP5 ensemble were found to reproduce these weather patterns sufficiently well in terms of frequency, seasonality, and persistence.
% Paper 3
Finally, the weather patterns were analysed for trends in pattern frequency, seasonality, persistence, and trends in pattern-specific precipitation and temperature. To overcome uncertainties in trend detection resulting from the selected time period, all possible periods in 1901-2010 with a minimum length of 31 years were considered. Thus, the assumption of a constant link between patterns and local weather was tested rigorously. This assumption was found to hold true only partly. While changes in temperature are mainly attributable to changes in pattern frequency, for precipitation a substantial amount of change was detected within individual patterns.
Magnitude and even sign of trends depend highly on the selected time period. The frequency of certain patterns is related to the long-term variability of large-scale circulation modes.
Changes in precipitation were found to be heterogeneous not only in space, but also in time - statements on trends are only valid for the specific time period under investigation. While some part of the trends can be attributed to changes in the large-scale circulation, distinct changes were found within single weather patterns as well.
The results emphasise the need to analyse multiple periods for thorough trend detection wherever possible and add some note of caution to the application of downscaling approaches based on weather patterns, as they might misinterpret the effect of climate change due to neglecting within-type trends.
The nutrient exchange between plant and fungus is the key element of the arbuscular mycorrhizal (AM) symbiosis. The fungus improves the plant’s uptake of mineral nutrients, mainly phosphate, and water, while the plant provides the fungus with photosynthetically assimilated carbohydrates. Still, the knowledge about the mechanisms of the nutrient exchange between the symbiotic partners is very limited. Therefore, transport processes of both, the plant and the fungal partner, are investigated in this study. In order to enhance the understanding of the molecular basis underlying this tight interaction between the roots of Medicago truncatula and the AM fungus Rhizophagus irregularis, genes involved in transport processes of both symbiotic partners are analysed here. The AM-specific regulation and cell-specific expression of potential transporter genes of M. truncatula that were found to be specifically regulated in arbuscule-containing cells and in non-arbusculated cells of mycorrhizal roots was confirmed. A model for the carbon allocation in mycorrhizal roots is suggested, in which carbohydrates are mobilized in non-arbusculated cells and symplastically provided to the arbuscule-containing cells. New insights into the mechanisms of the carbohydrate allocation were gained by the analysis of hexose/H+ symporter MtHxt1 which is regulated in distinct cells of mycorrhizal roots. Metabolite profiling of leaves and roots of a knock-out mutant, hxt1, showed that it indeed does have an impact on the carbohydrate balance in the course of the symbiosis throughout the whole plant, and on the interaction with the fungal partner. The primary metabolite profile of M. truncatula was shown to be altered significantly in response to mycorrhizal colonization. Additionally, molecular mechanisms determining the progress of the interaction in the fungal partner of the AM symbiosis were investigated. The R. irregularis transcriptome in planta and in extraradical tissues gave new insight into genes that are differentially expressed in these two fungal tissues. Over 3200 fungal transcripts with a significantly altered expression level in laser capture microdissection-collected arbuscules compared to extraradical tissues were identified. Among them, six previously unknown specifically regulated potential transporter genes were found. These are likely to play a role in the nutrient exchange between plant and fungus. While the substrates of three potential MFS transporters are as yet unknown, two potential sugar transporters are might play a role in the carbohydrate flow towards the fungal partner. In summary, this study provides new insights into transport processes between plant and fungus in the course of the AM symbiosis, analysing M. truncatula on the transcript and metabolite level, and provides a dataset of the R. irregularis transcriptome in planta, providing a high amount of new information for future works.
Wetting and phase transitions play a very important role our daily life. Molecularly thin films of long-chain alkanes at solid/vapour interfaces (e.g. C30H62 on silicon wafers) are very good model systems for studying the relation between wetting behaviour and (bulk) phase transitions. Immediately above the bulk melting temperature the alkanes wet partially the surface (drops). In this temperature range the substrate surface is covered with a molecularly thin ordered, solid-like alkane film ("surface freezing"). Thus, the alkane melt wets its own solid only partially which is a quite rare phenomenon in nature. The thesis treats about how the alkane melt wets its own solid surface above and below the bulk melting temperature and about the corresponding melting and solidification processes. Liquid alkane drops can be undercooled to few degrees below the bulk melting temperature without immediate solidification. This undercooling behaviour is quite frequent and theoretical quite well understood. In some cases, slightly undercooled drops start to build two-dimensional solid terraces without bulk solidification. The terraces grow radially from the liquid drops on the substrate surface. They consist of few molecular layers with the thickness multiple of all-trans length of the molecule. By analyzing the terrace growth process one can find that, both below and above the melting point, the entire substrate surface is covered with a thin film of mobile alkane molecules. The presence of this film explains how the solid terrace growth is feeded: the alkane molecules flow through it from the undercooled drops to the periphery of the terrace. The study shows for the first time the coexistence of a molecularly thin film ("precursor") with partially wetting bulk phase. The formation and growth of the terraces is observed only in a small temperature interval in which the 2D nucleation of terraces is more likely than the bulk solidification. The nucleation mechanisms for 2D solidification are also analyzed in this work. More surprising is the terrace behaviour above bulk the melting temperature. The terraces can be slightly overheated before they melt. The melting does not occur all over the surface as a single event; instead small drops form at the terrace edge. Subsequently these drops move on the surface "eating" the solid terraces on their way. By this they grow in size leaving behind paths from were the material was collected. Both overheating and droplet movement can be explained by the fact that the alkane melt wets only partially its own solid. For the first time, these results explicitly confirm the supposed connection between the absence of overheating in solid and "surface melting": the solids usually start to melt without an energetic barrier from the surface at temperatures below the bulk melting point. Accordingly, the surface freezing of alkanes give rise of an energetic barrier which leads to overheating.
Methicillin resistant Staphylococcus aureus (MRSA) is one of the most important antibiotic-resistant pathogens in hospitals and the community. Recently, a new generation of MRSA, the so called livestock associated (LA) MRSA, has emerged occupying food producing animals as a new niche. LA-MRSA can be regularly isolated from economically important live-stock species including corresponding meats. The present thesis takes a methodological approach to confirm the hypothesis that LA-MRSA are transmitted along the pork, poultry and beef production chain from animals at farm to meat on consumers` table. Therefore two new concepts were developed, adapted to differing data sets.
A mathematical model of the pig slaughter process was developed which simulates the change in MRSA carcass prevalence during slaughter with special emphasis on identifying critical process steps for MRSA transmission. Based on prevalences as sole input variables the model framework is able to estimate the average value range of both the MRSA elimination and contamination rate of each of the slaughter steps. These rates are then used to set up a Monte Carlo simulation of the slaughter process chain. The model concludes that regardless of the initial extent of MRSA contamination low outcome prevalences ranging between 0.15 and 1.15 % can be achieved among carcasses at the end of slaughter. Thus, the model demonstrates that the standard procedure of pig slaughtering in principle includes process steps with the capacity to limit MRSA cross contamination. Scalding and singeing were identified as critical process steps for a significant reduction of superficial MRSA contamination.
In the course of the German national monitoring program for zoonotic agents MRSA prevalence and typing data are regularly collected covering the key steps of different food production chains. A new statistical approach has been proposed for analyzing this cross sectional set of MRSA data with regard to show potential farm to fork transmission. For this purpose, chi squared statistics was combined with the calculation of the Czekanowski similarity index to compare the distributions of strain specific characteristics between the samples from farm, carcasses after slaughter and meat at retail. The method was implemented on the turkey and veal production chains and the consistently high degrees of similarity which have been revealed between all sample pairs indicate MRSA transmission along the chain.
As the proposed methods are not specific to process chains or pathogens they offer a broad field of application and extend the spectrum of methods for bacterial transmission assessment.
Translating innovation
(2017)
This doctoral thesis studies the process of innovation adoption in public administrations, addressing the research question of how an innovation is translated to a local context. The study empirically explores Design Thinking as a new problem-solving approach introduced by a federal government organisation in Singapore. With a focus on user-centeredness, collaboration and iteration Design Thinking seems to offer a new way to engage recipients and other stakeholders of public services as well as to re-think the policy design process from a user’s point of view. Pioneered in the private sector, early adopters of the methodology include civil services in Australia, Denmark, the United Kingdom, the United States as well as Singapore. Hitherto, there is not much evidence on how and for which purposes Design Thinking is used in the public sector.
For the purpose of this study, innovation adoption is framed in an institutionalist perspective addressing how concepts are translated to local contexts. The study rejects simplistic views of the innovation adoption process, in which an idea diffuses to another setting without adaptation. The translation perspective is fruitful because it captures the multidimensionality and ‘messiness’ of innovation adoption. More specifically, the overall research question addressed in this study is: How has Design Thinking been translated to the local context of the public sector organisation under investigation? And from a theoretical point of view: What can we learn from translation theory about innovation adoption processes?
Moreover, there are only few empirical studies of organisations adopting Design Thinking and most of them focus on private organisations. We know very little about how Design Thinking is embedded in public sector organisations. This study therefore provides further empirical evidence of how Design Thinking is used in a public sector organisation, especially with regards to its application to policy work which has so far been under-researched.
An exploratory single case study approach was chosen to provide an in-depth analysis of the innovation adoption process. Based on a purposive, theory-driven sampling approach, a Singaporean Ministry was selected because it represented an organisational setting in which Design Thinking had been embedded for several years, making it a relevant case with regard to the research question. Following a qualitative research design, 28 semi-structured interviews (45-100 minutes) with employees and managers were conducted. The interview data was triangulated with observations and documents, collected during a field research research stay in Singapore.
The empirical study of innovation adoption in a single organisation focused on the intra-organisational perspective, with the aim to capture the variations of translation that occur during the adoption process. In so doing, this study opened the black box often assumed in implementation studies. Second, this research advances translation studies not only by showing variance, but also by deriving explanatory factors. The main differences in the translation of Design Thinking occurred between service delivery and policy divisions, as well as between the first adopter and the rest of the organisation. For the intra-organisational translation of Design Thinking in the Singaporean Ministry the following five factors played a role: task type, mode of adoption, type of expertise, sequence of adoption, and the adoption of similar practices.
Die Strahlentherapie ist neben der Chemotherapie und einer operativen Entfernung die stärkste Waffe für die Bekämpfung bösartiger Tumore in der Krebsmedizin. Nach Herz-Kreislauf-Erkrankungen ist Krebs die zweithäufigste Todesursache in der westlichen Welt, wobei Prostatakrebs heutzutage die häufigste, männliche Krebserkrankung darstellt. Trotz technologischer Fortschritte der radiologischen Verfahren kann es noch viele Jahre nach einer Radiotherapie zu einem Rezidiv kommen, was zum Teil auf die hohe Resistenzfähigkeit einzelner, entarteter Zellen des lokal vorkommenden Tumors zurückgeführt werden kann. Obwohl die moderne Strahlenbiologie viele Aspekte der Resistenzmechanismen näher beleuchtet hat, bleiben Fragestellungen, speziell über das zeitliche Ansprechen eines Tumors auf ionisierende Strahlung, größtenteils unbeantwortet, da systemweite Untersuchungen nur begrenzt vorliegen. Als Zellmodelle wurden vier Prostata-Krebszelllinien (PC3, DuCaP, DU-145, RWPE-1) mit unterschiedlichen Strahlungsempfindlichkeiten kultiviert und auf ihre Überlebensfähigkeit nach ionisierender Bestrahlung durch einen Trypanblau- und MTT-Vitalitätstest geprüft. Die proliferative Kapazität wurde mit einem Koloniebildungstest bestimmt. Die PC3 Zelllinie, als Strahlungsresistente, und die DuCaP Zelllinie, als Strahlungssensitive, zeigten dabei die größten Differenzen bezüglich der Strahlungsempfindlichkeit. Auf Grundlage dieser Ergebnisse wurden die beiden Zelllinien ausgewählt, um anhand ihrer transkriptomweiten Genexpressionen, eine Identifizierung potentieller Marker für die Prognose der Effizienz einer Strahlentherapie zu ermöglichen. Weiterhin wurde mit der PC3 Zelllinie ein Zeitreihenexperiment durchgeführt, wobei zu 8 verschiedenen Zeitpunkten nach Bestrahlung mit 1 Gy die mRNA mittels einer Hochdurchsatz-Sequenzierung quantifiziert wurde, um das dynamisch zeitversetzte Genexpressionsverhalten auf Resistenzmechanismen untersuchen zu können. Durch das Setzen eines Fold Change Grenzwertes in Verbindung mit einem P-Wert < 0,01 konnten aus 10.966 aktiven Genen 730 signifikant differentiell exprimierte Gene bestimmt werden, von denen 305 stärker in der PC3 und 425 stärker in der DuCaP Zelllinie exprimiert werden. Innerhalb dieser 730 Gene sind viele stressassoziierte Gene wiederzufinden, wie bspw. die beiden Transmembranproteingene CA9 und CA12. Durch Berechnung eines Netzwerk-Scores konnten aus den GO- und KEGG-Datenbanken interessante Kategorien und Netzwerke abgeleitet werden, wobei insbesondere die GO-Kategorien Aldehyd-Dehydrogenase [NAD(P)+] Aktivität (GO:0004030) und der KEGG-Stoffwechselweg der O-Glykan Biosynthese (hsa00512) als relevante Netzwerke auffällig wurden. Durch eine weitere Interaktionsanalyse konnten zwei vielversprechende Netzwerke mit den Transkriptionsfaktoren JUN und FOS als zentrale Elemente identifiziert werden. Zum besseren Verständnis des dynamisch zeitversetzten Ansprechens der strahlungsresistenten PC3 Zelllinie auf ionisierende Strahlung, konnten anhand der 10.840 exprimierten Gene und ihrer Expressionsprofile über 8 Zeitpunkte interessante Einblicke erzielt werden. Während es innerhalb von 30 min (00:00 - 00:30) nach Bestrahlung zu einer schnellen Runterregulierung der globalen Genexpression kommt, folgen in den drei darauffolgenden Zeitabschnitten (00:30 - 01:03; 01:03 - 02:12; 02:12 - 04:38) spezifische Expressionserhöhungen, die eine Aktivierung schützender Netzwerke, wie die Hochregulierung der DNA-Reparatursysteme oder die Arretierung des Zellzyklus, auslösen. 
In den abschließenden drei Zeitbereichen (04:38 - 09:43; 09:43 - 20:25; 20:25 - 42:35) liegt wiederum eine Ausgewogenheit zwischen Induzierung und Supprimierung vor, wobei die absoluten Genexpressionsveränderungen ansteigen. Beim Vergleich der Genexpressionen kurz vor der Bestrahlung mit dem letzten Zeitpunkt (00:00 - 42:53) liegen mit 2.670 die meisten verändert exprimierten Gene vor, was einer massiven, systemweiten Genexpressionsänderung entspricht. Signalwege wie die ATM-Regulierung des Zellzyklus und der Apoptose, des NRF2-Signalwegs nach oxidativer Stresseinwirkung und die DNA-Reparaturmechanismen der homologen Rekombination, des nicht-homologen End Joinings, der MisMatch-, der Basen-Exzision- und der Strang-Exzision-Reparatur spielen bei der zellulären Antwort eine tragende Rolle. Äußerst interessant sind weiterhin die hohen Aktivitäten RNA-gesteuerter Ereignisse, insbesondere von small nucleolar RNAs und Pseudouridin-Prozessen. Demnach scheinen diese RNA-modifizierenden Netzwerke einen bisher unbekannten funktionalen und schützenden Einfluss auf das Zellüberleben nach ionisierender Bestrahlung zu haben. All diese schützenden Netzwerke mit ihren zeitspezifischen Interaktionen sind essentiell für das Zellüberleben nach Einwirkung von oxidativem Stress und zeigen ein komplexes aber im Einklang befindliches Zusammenspiel vieler Einzelkomponenten zu einem systemweit ablaufenden Programm.
Die Etablierung der Transkription von kompletten Genen auf planaren Oberflächen soll eine Verbindung zwischen der Mikroarraytechnologie und der Transkriptomforschung herstellen. Darüber hinaus kann mit diesem Verfahren ein Brückenschlag zwischen der Synthese der Gene und ihrer kodierenden Proteine auf einer Oberfläche erfolgen. Alle transkribierten RNAs wurden mittels RT-PCR in cDNA umgeschrieben und in einer genspezifischen PCR amplifiziert. Die PCR-Produkte wurden hierfür entweder per Hand oder maschinell auf die Oberfläche transferiert. Über eine Oberflächen-PCR war es möglich, die Gensequenz des Reportergens EGFP direkt auf der Oberfläche zu synthetisieren und anschließend zu transkribieren. Somit war eine Transkription mit weniger als 1 ng an Matrize möglich. Der Vorteil einer Oberflächen-Transkription gegenüber der in Lösung liegt in der mehrfachen Verwendung der immobilisierten Matrize, wie sie in dieser Arbeit dreimal erfolgreich absolviert wurde. Die Oberflächen-Translation des EGFP-Gens konnte ebenfalls zweimal an einer immobilisierten Matrize gezeigt werden, wobei Zweifel über eine echte Festphasen-Translation nicht ausgeräumt werden konnten. Zusammenfassend kann festgestellt werden, dass die Transkription und Translation von immobilisierten Gensequenzen auf planaren Oberflächen möglich ist, wofür die linearen Matrizen direkt auf der Oberfläche synthetisiert werden können.
Synchronization – the adjustment of rhythms among coupled self-oscillatory systems – is a fascinating dynamical phenomenon found in many biological, social, and technical systems.
The present thesis deals with synchronization in finite ensembles of weakly coupled self-sustained oscillators with distributed frequencies.
The standard model for the description of this collective phenomenon is the Kuramoto model – partly due to its analytical tractability in the thermodynamic limit of infinitely many oscillators. Similar to a phase transition in the thermodynamic limit, an order parameter indicates the transition from incoherence to a partially synchronized state. In the latter, a part of the oscillators rotates at a common frequency. In the finite case, fluctuations occur, originating from the quenched noise of the finite natural frequency sample.
We study intermediate ensembles of a few hundred oscillators in which fluctuations are comparably strong but which also allow for a comparison to frequency distributions in the infinite limit.
First, we define an alternative order parameter for the indication of a collective mode in the finite case. Then we test the dependence of the degree of synchronization and the mean rotation frequency of the collective mode on different characteristics for different coupling strengths.
We find, first numerically, that the degree of synchronization depends strongly on the form (quantified by kurtosis) of the natural frequency sample and the rotation frequency of the collective mode depends on the asymmetry (quantified by skewness) of the sample. Both findings are verified in the infinite limit.
With these findings, we better understand and generalize observations of other authors. A bit aside of the general line of thoughts, we find an analytical expression for the volume contraction in phase space.
The second part of this thesis concentrates on an ordering effect of the finite-size fluctuations. In the infinite limit, the oscillators are separated into coherent and incoherent thus ordered and disordered oscillators. In finite ensembles, finite-size fluctuations can generate additional order among the asynchronous oscillators. The basic principle – noise-induced synchronization – is known from several recent papers. Among coupled oscillators, phases are pushed together by the order parameter fluctuations, as we on the one hand show directly and on the other hand quantify with a synchronization measure from directed statistics between pairs of passive oscillators.
We determine the dependence of this synchronization measure from the ratio of pairwise natural frequency difference and variance of the order parameter fluctuations. We find a good agreement with a simple analytical model, in which we replace the deterministic fluctuations of the order parameter by white noise.
Transient permeability in porous and fractured sandstones mediated by fluid-rock interactions
(2021)
Understanding the fluid transport properties of subsurface rocks is essential for a large number of geotechnical applications, such as hydrocarbon (oil/gas) exploitation, geological storage (CO2/fluids), and geothermal reservoir utilization. To date, the hydromechanically-dependent fluid flow patterns in porous media and single macroscopic rock fractures have received numerous investigations and are relatively well understood. In contrast, fluid-rock interactions, which may permanently affect rock permeability by reshaping the structure and changing connectivity of pore throats or fracture apertures, need to be further elaborated. This is of significant importance for improving the knowledge of the long-term evolution of rock transport properties and evaluating a reservoir’ sustainability. The thesis focuses on geothermal energy utilization, e.g., seasonal heat storage in aquifers and enhanced geothermal systems, where single fluid flow in porous rocks and rock fracture networks under various pressure and temperature conditions dominates.
In this experimental study, outcrop samples (i.e., Flechtinger sandstone, an illite-bearing Lower Permian rock, and Fontainebleau sandstone, consisting of pure quartz) were used for flow-through experiments under simulated hydrothermal conditions. The themes of the thesis are (1) the investigation of clay particle migration in intact Flechtinger sandstone and the coincident permeability damage upon cyclic temperature and fluid salinity variations; (2) the determination of hydro-mechanical properties of self-propping fractures in Flechtinger and Fontainebleau sandstones with different fracture features and contrasting mechanical properties; and (3) the investigation of the time-dependent fracture aperture evolution of Fontainebleau sandstone induced by fluid-rock interactions (i.e., predominantly pressure solution). Overall, the thesis aims to unravel the mechanisms of the instantaneous reduction (i.e., direct responses to thermo-hydro-mechanical-chemical (THMC) conditions) and progressively-cumulative changes (i.e., time-dependence) of rock transport properties.
Permeability of intact Flechtinger sandstone samples was measured under each constant condition, where temperature (room temperature up to 145 °C) and fluid salinity (NaCl: 0 ~ 2 mol/l) were stepwise changed. Mercury intrusion porosimetry (MIP), electron microprobe analysis (EMPA), and scanning electron microscopy (SEM) were performed to investigate the changes of local porosity, microstructures, and clay element contents before and after the experiments. The results indicate that the permeability of illite-bearing Flechtinger sandstones will be impaired by heating and exposure to low salinity pore fluids. The chemically induced permeability variations prove to be path-dependent concerning the applied succession of fluid salinity changes. The permeability decay induced by a temperature increase and a fluid salinity reduction operates by relatively independent mechanisms, i.e., thermo-mechanical and thermo-chemical effects.
Further, the hydro-mechanical investigations of single macroscopic fractures (aligned, mismatched tensile fractures, and smooth saw-cut fractures) illustrate that a relative fracture wall offset could significantly increase fracture aperture and permeability, but the degree of increase depends on fracture surface roughness. X-ray computed tomography (CT) demonstrates that the contact area ratio after the pressure cycles is inversely correlated to the fracture offset. Moreover, rock mechanical properties, determining the strength of contact asperities, are crucial so that relatively harder rock (i.e., Fontainebleau sandstone) would have a higher self-propping potential for sustainable permeability during pressurization. This implies that self-propping rough fractures with a sufficient displacement are efficient pathways for fluid flow if the rock matrix is mechanically strong.
Finally, two long-term flow-through experiments with Fontainebleau sandstone samples containing single fractures were conducted with an intermittent flow (~140 days) and continuous flow (~120 days), respectively. Permeability and fluid element concentrations were measured throughout the experiments. Permeability reduction occurred at the beginning stage when the stress was applied, while it converged at later stages, even under stressed conditions. Fluid chemistry and microstructure observations demonstrate that pressure solution governs the long-term fracture aperture deformation, with remarkable effects of the pore fluid (Si) concentration and the structure of contact grain boundaries. The retardation and the cessation of rock fracture deformation are mainly induced by the contact stress decrease due to contact area enlargement and a dissolved mass accumulation within the contact boundaries. This work implies that fracture closure under constant (pressure/stress and temperature) conditions is likely a spontaneous process, especially at the beginning stage after pressurization when the contact area is relatively small. In contrast, a contact area growth yields changes of fracture closure behavior due to the evolution of contact boundaries and concurrent changes in their diffusive properties. Fracture aperture and thus permeability will likely be sustainable in the long term if no other processes (e.g., mineral precipitations in the open void space) occur.
Classical semiconductor physics has been continuously improving electronic components such as diodes, light-emitting diodes, solar cells and transistors based on highly purified inorganic crystals over the past decades. Organic semiconductors, notably polymeric, are a comparatively young field of research, the first light-emitting diode based on conjugated polymers having been demonstrated in 1990. Polymeric semiconductors are of tremendous interest for high-volume, low-cost manufacturing ("printed electronics"). Due to their rather simple device structure mostly comprising only one or two functional layers, polymeric diodes are much more difficult to optimize compared to small-molecular organic devices. Usually, functions such as charge injection and transport are handled by the same material which thus needs to be highly optimized. The present work contributes to expanding the knowledge on the physical mechanisms determining device performance by analyzing the role of charge injection and transport on device efficiency for blue and white-emitting devices, based on commercially relevant spiro-linked polyfluorene derivatives. It is shown that such polymers can act as very efficient electron conductors and that interface effects such as charge trapping play the key role in determining the overall device efficiency. This work contributes to the knowledge of how charges drift through the polymer layer to finally find neutral emissive trap states and thus allows a quantitative prediction of the emission color of multichromophoric systems, compatible with the observed color shifts upon driving voltage and temperature variation as well as with electrical conditioning effects. In a more methodically oriented part, it is demonstrated that the transient device emission observed upon terminating the driving voltage can be used to monitor the decay of geminately-bound species as well as to determine trapped charge densities. This enables direct comparisons with numerical simulations based on the known properties of charge injection, transport and recombination. The method of charge extraction under linear increasing voltages (CELIV) is investigated in some detail, correcting for errors in the published approach and highlighting the role of non-idealized conditions typically present in experiments. An improved method is suggested to determine the field dependence of charge mobility in a more accurate way. Finally, it is shown that the neglect of charge recombination has led to a misunderstanding of experimental results in terms of a time-dependent mobility relaxation.
River flooding poses a threat to numerous cities and communities all over the world. The detection, quantification and attribution of changes in flood characteristics is key to assess changes in flood hazard and help affected societies to timely mitigate and adapt to emerging risks. The Rhine River is one of the major European rivers and numerous large cities reside at its shores. Runoff from several large tributaries superimposes in the main channel shaping the complex from regime. Rainfall, snowmelt as well as ice-melt are important runoff components. The main objective of this thesis is the investigation of a possible transient merging of nival and pluvial Rhine flood regimes under global warming. Rising temperatures cause snowmelt to occur earlier in the year and rainfall to be more intense. The superposition of snowmelt-induced floods originating from the Alps with more intense rainfall-induced runoff from pluvial-type tributaries might create a new flood type with potentially disastrous consequences.
To introduce the topic of changing hydrological flow regimes, an interactive web application that enables the investigation of runoff timing and runoff season- ality observed at river gauges all over the world is presented. The exploration and comparison of a great diversity of river gauges in the Rhine River Basin and beyond indicates that river systems around the world undergo fundamental changes. In hazard and risk research, the provision of background as well as real-time information to residents and decision-makers in an easy accessible way is of great importance. Future studies need to further harness the potential of scientifically engineered online tools to improve the communication of information related to hazards and risks.
A next step is the development of a cascading sequence of analytical tools to investigate long-term changes in hydro-climatic time series. The combination of quantile sampling with moving average trend statistics and empirical mode decomposition allows for the extraction of high resolution signals and the identification of mechanisms driving changes in river runoff. Results point out that the construction and operation of large reservoirs in the Alps is an important factor redistributing runoff from summer to winter and hint at more (intense) rainfall in recent decades, particularly during winter, in turn increasing high runoff quantiles. The development and application of the analytical sequence represents a further step in the scientific quest to disentangling natural variability, climate change signals and direct human impacts.
The in-depth analysis of in situ snow measurements and the simulations of the Alpine snow cover using a physically-based snow model enable the quantification of changes in snowmelt in the sub-basin upstream gauge Basel. Results confirm previous investigations indicating that rising temperatures result in a decrease in maximum melt rates. Extending these findings to a catchment perspective, a threefold effect of rising temperatures can be identified: snowmelt becomes weaker, occurs earlier and forms at higher elevations. Furthermore, results indicate that due to the wide range of elevations in the basin, snowmelt does not occur simultaneously at all elevation, but elevation bands melt together in blocks. The beginning and end of the release of meltwater seem to be determined by the passage of warm air masses, and the respective elevation range affected by accompanying temperatures and snow availability. Following those findings, a hypothesis describing elevation-dependent compensation effects in snowmelt is introduced: In a warmer world with similar sequences of weather conditions, snowmelt is moved upward to higher elevations, i.e., the block of elevation bands providing most water to the snowmelt-induced runoff is located at higher elevations. The movement upward the elevation range makes snowmelt in individual elevation bands occur earlier. The timing of the snowmelt-induced runoff, however, stays the same. Meltwater from higher elevations, at least partly, replaces meltwater from elevations below.
The insights on past and present changes in river runoff, snow covers and underlying mechanisms form the basis of investigations of potential future changes in Rhine River runoff. The mesoscale Hydrological Model (mHM) forced with an ensemble of climate projection scenarios is used to analyse future changes in streamflow, snowmelt, precipitation and evapotranspiration at 1.5, 2.0 and
3.0 ◦ C global warming. Simulation results suggest that future changes in flood characteristics in the Rhine River Basin are controlled by increased precipitation amounts on the one hand, and reduced snowmelt on the other hand. Rising temperatures deplete seasonal snowpacks. At no time during the year, a warming climate results in an increase in the risk of snowmelt-driven flooding. Counterbalancing effects between snowmelt and precipitation often result in only little and transient changes in streamflow peaks. Although, investigations point at changes in both rainfall and snowmelt-driven runoff, there are no indications of a transient merging of nival and pluvial Rhine flood regimes due to climate warming. Flooding in the main tributaries of the Rhine, such as the Moselle River, as well as the High Rhine is controlled by both precipitation and snowmelt. Caution has to be exercised labelling sub-basins such as the Moselle catchment as purely pluvial-type or the Rhine River Basin at Basel as purely nival-type. Results indicate that this (over-) simplifications can entail misleading assumptions with regard to flood-generating mechanisms and changes in flood hazard. In the framework of this thesis, some progress has been made in detecting, quantifying and attributing past, present and future changes in Rhine flow/flood characteristics. However, further studies are necessary to pin down future changes in the flood genesis of Rhine floods, particularly very rare events.
The optical properties of chromophores, especially organic dyes and optically active inorganic molecules, are determined by their chemical structures, surrounding media, and excited state behaviors. The classical optical go-to techniques for spectroscopic investigations are absorption and luminescence spectroscopy. While both techniques are powerful and easy to apply spectroscopic methods, the limited time resolution of luminescence spectroscopy and its reliance on luminescent properties can make its application, in certain cases, complex, or even impossible. This can be the case when the investigated molecules do not luminesce anymore due to quenching effects, or when they were never luminescent in the first place. In those cases, transient absorption spectroscopy is an excellent and much more sophisticated technique to investigate such systems. This pump-probe laser-spectroscopic method is excellent for mechanistic investigations of luminescence quenching phenomena and photoreactions. This is due to its extremely high time resolution in the femto- and picosecond ranges, where many intermediate or transient species of a reaction can be identified and their kinetic evolution can be observed. Furthermore, it does not rely on the samples being luminescent, due to the active sample probing after excitation. In this work it is shown, that with transient absorption spectroscopy it was possible to identify the luminescence quenching mechanisms and thus luminescence quantum yield losses of the organic dye classes O4-DBD, S4-DBD, and pyridylanthracenes. Hence, the population of their triplet states could be identified as the competitive mechanism to their luminescence. While the good luminophores O4-DBD showed minor losses, the S4-DBD dye luminescence was almost entirely quenched by this process. However, for pyridylanthracenes, this phenomenon is present in both the protonated and unprotonated forms and moderately effects the luminescence quantum yield. Also, the majority of the quenching losses in the protonated forms are caused by additional non-radiative processes introduced by the protonation of the pyridyl rings. Furthermore, transient absorption spectroscopy can be applied to investigate the quenching mechanisms of uranyl(VI) luminescence by chloride and bromide. The reduction of the halides by excited uranyl(VI) leads to the formation of dihalide radicals X^(·−2). This excited state redox process is thus identified as the quenching mechanism for both halides, and this process, being diffusion-limited, can be suppressed by cryogenically freezing the samples or by observing these interactions in media with a lower dielectric constant, such as ACN and acetone.
Der E-Government-Fortschritt wird nach wie vor durch redundante Entwicklungsaktivitäten und isolierte, wenig interoperable Lösungen gehemmt. Die Herausforderung liegt weniger in der Entwicklung und Einführung leistungsstarker Informationssysteme, sondern in der Verbreitung bestehender Lösungen. Die Arbeit identifiziert mögliche Strategien für den Transfer von E-Government-Lösungen zwischen Verwaltungen gleicher wie auch verschiedener föderaler Ebene. Es werden Konzepte zur Diffusion von Innovationen, zum Technologie- wie auch Politiktransfer herangezogen. Weiter werden drei umfangreiche Fallstudien vorgestellt. Sie führen zu transferhemmenden wie auch fördernden Faktoren und somit zu Gestaltungsoptionen für erfolgreiche Transferprozesse unter den vielfältigen Rahmenbedingungen im öffentlichen Sektor.
Background: Individuals with aphasia after stroke (IWA) often present with working memory (WM) deficits. Research investigating the relationship between WM and language abilities has led to the promising hypothesis that treatments of WM could lead to improvements in language, a phenomenon known as transfer. Although recent treatment protocols have been successful in improving WM, the evidence to date is scarce and the extent to which improvements in trained tasks of WM transfer to untrained memory tasks, spoken sentence comprehension, and functional communication is yet poorly understood.
Aims: We aimed at (a) investigating whether WM can be improved through an adaptive n-back training in IWA (Study 1–3); (b) testing whether WM training leads to near transfer to unpracticed WM tasks (Study 1–3), and far transfer to spoken sentence comprehension (Study 1–3), functional communication (Study 2–3), and memory in daily life in IWA (Study 2–3); and (c) evaluating the methodological quality of existing WM treatments in IWA (Study 3). To address these goals, we conducted two empirical studies – a case-controls study with Hungarian speaking IWA (Study 1) and a multiple baseline study with German speaking IWA (Study 2) – and a systematic review (Study 3).
Methods: In Study 1 and 2 participants with chronic, post-stroke aphasia performed an adaptive, computerized n-back training. ‘Adaptivity’ was implemented by adjusting the tasks’ difficulty level according to the participants’ performance, ensuring that they always practiced at an optimal level of difficulty. To assess the specificity of transfer effects and to better understand the underlying mechanisms of transfer on spoken sentence comprehension, we included an outcome measure testing specific syntactic structures that have been proposed to involve WM processes (e.g., non-canonical structures with varying complexity).
Results: We detected a mixed pattern of training and transfer effects across individuals: five participants out of six significantly improved in the n-back training. Our most important finding is that all six participants improved significantly in spoken sentence comprehension (i.e., far transfer effects). In addition, we also found far transfer to functional communication (in two participants out of three in Study 2) and everyday memory functioning (in all three participants in Study 2), and near transfer to unpracticed n-back tasks (in four participants out of six). Pooled data analysis of Study 1 and 2 showed a significant negative relationship between initial spoken sentence comprehension and the amount of improvement in this ability, suggesting that the more severe the participants’ spoken sentence comprehension deficit was at the beginning of training, the more they improved after training. Taken together, we detected both near far and transfer effects in our studies, but the effects varied across participants. The systematic review evaluating the methodological quality of existing WM treatments in stroke IWA (Study 3) showed poor internal and external validity across the included 17 studies. Poor internal validity was mainly due to use of inappropriate design, lack of randomization of study phases, lack of blinding of participants and/or assessors, and insufficient sampling. Low external validity was mainly related to incomplete information on the setting, lack of use of appropriate analysis or justification for the suitability of the analysis procedure used, and lack of replication across participants and/or behaviors. Results in terms of WM, spoken sentence comprehension, and reading are promising, but further studies with more rigorous methodology and stronger experimental control are needed to determine the beneficial effects of WM intervention.
Conclusions: Results of the empirical studies suggest that WM can be improved with a computerized and adaptive WM training, and improvements can lead to transfer effects to spoken sentence comprehension and functional communication in some individuals with chronic post-stroke aphasia. The fact that improvements were not specific to certain syntactic structures (i.e., non-canonical complex sentences) in spoken sentence comprehension suggest that WM is not involved in the online, automatic processing of syntactic information (i.e., parsing and interpretation), but plays a more general role in the later stage of spoken sentence comprehension (i.e., post-interpretive comprehension). The individual differences in treatment outcomes call for future research to clarify how far these results are generalizable to the population level of IWA. Future studies are needed to identify a few mechanisms that may generalize to at least a subpopulation of IWA as well as to investigate baseline non-linguistic cognitive and language abilities that may play a role in transfer effects and the maintenance of such effects. These may require larger yet homogenous samples.
Traditionally, mental disorders have been identified based on specific symptoms and standardized diagnostic systems such as the DSM-5 and ICD-10. However, these symptom-based definitions may only partially represent neurobiological and behavioral research findings, which could impede the development of targeted treatments. A transdiagnostic approach to mental health research, such as the Research Domain Criteria (RDoC) approach, maps resilience and broader aspects of mental health to associated components. By investigating mental disorders in a transnosological way, we can better understand disease patterns and their distinguishing and common factors, leading to more precise prevention and treatment options.
Therefore, this dissertation focuses on (1) the latent domain structure of the RDoC approach in a transnosological sample including healthy controls, (2) its domain associations to disease severity in patients with anxiety and depressive disorders, and (3) an overview of the scientific results found regarding Positive (PVS) and Negative Valence Systems (NVS) associated with mood and anxiety disorders.
The following main results were found: First, the latent RDoC domain structure for PVS and NVS, Cognitive Systems (CS), and Social Processes (SP) could be validated using self-report and behavioral measures in a transnosological sample. Second, we found transdiagnostic and disease-specific associations between those four domains and disease severity in patients with depressive and anxiety disorders. Third, the scoping review showed a sizable amount of RDoC research conducted on PVS and NVS in mood and anxiety disorders, with research gaps for both domains and specific conditions.
In conclusion, the research presented in this dissertation highlights the potential of the transnosological RDoC framework approach in improving our understanding of mental disorders. By exploring the latent RDoC structure and associations with disease severity and disease-specific and transnosological associations for anxiety and depressive disorders, this research provides valuable insights into the full spectrum of psychological functioning. Additionally, this dissertation highlights the need for further research in this area, identifying both RDoC indicators and research gaps. Overall, this dissertation represents an important contribution to the ongoing efforts to improve our understanding and the treatment of mental disorders, particularly within the commonly comorbid disease spectrum of mood and anxiety disorders.
For more than two centuries, plant ecologists have aimed to understand how environmental gradients and biotic interactions shape the distribution and co-occurrence of plant species. In recent years, functional trait–based approaches have been increasingly used to predict patterns of species co-occurrence and species distributions along environmental gradients (trait–environment relationships). Functional traits are measurable properties at the individual level that correlate well with important processes. Thus, they allow us to identify general patterns by synthesizing studies across specific taxonomic compositions, thereby fostering our understanding of the underlying processes of species assembly. However, the importance of specific processes have been shown to be highly dependent on the spatial scale under consideration. In particular, it remains uncertain which mechanisms drive species assembly and allow for plant species coexistence at smaller, more local spatial scales. Furthermore, there is still no consensus on how particular environmental gradients affect the trait composition of plant communities. For example, increasing drought because of climate change is predicted to be a main threat to plant diversity, although it remains unclear which traits of species respond to increasing aridity. Similarly, there is conflicting evidence of how soil fertilization affects the traits related to establishment ability (e.g., seed mass). In this cumulative dissertation, I present three empirical trait-based studies that investigate specific research questions in order to improve our understanding of species distributions along environmental gradients.
In the first case study, I analyze how annual species assemble at the local scale and how environmental heterogeneity affects different facets of biodiversity—i.e. taxonomic, functional, and phylogenetic diversity—at different spatial scales. The study was conducted in a semi-arid environment at the transition zone between desert and Mediterranean ecosystems that features a sharp precipitation gradient (Israel). Different null model analyses revealed strong support for environmentally driven species assembly at the local scale, since species with similar traits tended to co-occur and shared high abundances within microsites (trait convergence). A phylogenetic approach, which assumes that closely related species are functionally more similar to each other than distantly related ones, partly supported these results. However, I observed that species abundances within microsites were, surprisingly, more evenly distributed across the phylogenetic tree than expected (phylogenetic overdispersion). Furthermore, I showed that environmental heterogeneity has a positive effect on diversity, which was higher on functional than on taxonomic diversity and increased with spatial scale. The results of this case study indicate that environmental heterogeneity may act as a stabilizing factor to maintain species diversity at local scales, since it influenced species distribution according to their traits and positively influenced diversity. All results were constant along the precipitation gradient.
In the second case study (same study system as case study one), I explore the trait responses of two Mediterranean annuals (Geropogon hybridus and Crupina crupinastrum) along a precipitation gradient that is comparable to the maximum changes in precipitation predicted to occur by the end of this century (i.e., −30%). The heterocarpic G. hybridus showed strong trends in seed traits, suggesting that dispersal ability increased with aridity. By contrast, the homocarpic C. crupinastrum showed only a decrease in plant height as aridity increased, while leaf traits of both species showed no consistent pattern along the precipitation gradient. Furthermore, variance decomposition of traits revealed that most of the trait variation observed in the study system was actually found within populations. I conclude that trait responses towards aridity are highly species-specific and that the amount of precipitation is not the most striking environmental factor at this particular scale.
In the third case study, I assess how soil fertilization mediates—directly by increased nutrient addition and indirectly by increased competition—the effect of seed mass on establishment ability. For this experiment, I used 22 species differing in seed mass from dry grasslands in northeastern Germany and analyzed the interacting effects of seed mass with nutrient availability and competition on four key components of seedling establishment: seedling emergence, time of seedling emergence, seedling survival, and seedling growth. (Time of) seedling emergence was not affected by seed mass. However, I observed that the positive effect of seed mass on seedling survival is lowered under conditions of high nutrient availability, whereas the positive effect of seed mass on seedling growth was only reduced by competition. Based on these findings, I developed a conceptual model of how seed mass should change along a soil fertility gradient in order to reconcile conflicting findings from the literature. In this model, seed mass shows a U-shaped pattern along the soil fertility gradient as a result of changing nutrient availability and competition.
Overall, the three case studies highlight the role of environmental factors on species distribution and co-occurrence. Moreover, the findings of this thesis indicate that spatial heterogeneity at local scales may act as a stabilizing factor that allows species with different traits to coexist. In the concluding discussion, I critically debate intraspecific trait variability in plant community ecology, the use of phylogenetic relationships and easily measured key functional traits as a proxy for species’ niches. Finally, I offer my outlook for the future of functional plant community research.
The increasing introduction of non-native plant species may pose a threat to local biodiversity. However, the basis of successful plant invasion is not conclusively understood, especially since these plant species can adapt to the new range within a short period of time despite impoverished genetic diversity of the starting populations. In this context, DNA methylation is considered promising to explain successful adaptation mechanisms in the new habitat. DNA methylation is a heritable variation in gene expression without changing the underlying genetic information. Thus, DNA methylation is considered a so-called epigenetic mechanism, but has been studied in mainly clonally reproducing plant species or genetic model plants. An understanding of this epigenetic mechanism in the context of non-native, predominantly sexually reproducing plant species might help to expand knowledge in biodiversity research on the interaction between plants and their habitats and, based on this, may enable more precise measures in conservation biology.
For my studies, I combined chemical DNA demethylation of field-collected seed material from predominantly sexually reproducing species and rearing offsping under common climatic conditions to examine DNA methylation in an ecological-evolutionary context. The contrast of chemically treated (demethylated) plants, whose variation in DNA methylation was artificially reduced, and untreated control plants of the same species allowed me to study the impact of this mechanism on adaptive trait differentiation and local adaptation. With this experimental background, I conducted three studies examining the effect of DNA methylation in non-native species along a climatic gradient and also between climatically divergent regions.
The first study focused on adaptive trait differentiation in two invasive perennial goldenrod species, Solidago canadensis sensu latu and S. gigantea AITON, along a climate gradient of more than 1000 km in length in Central Europe. I found population differences in flowering timing, plant height, and biomass in the temporally longer-established S. canadensis, but only in the number of regrowing shoots for S. gigantea. While S. canadensis did not show any population structure, I was able to identify three genetic groups along this climatic gradient in S. gigantea. Surprisingly, demethylated plants of both species showed no change in the majority of traits studied. In the subsequent second study, I focused on the longer-established goldenrod species S. canadensis and used molecular analyses to infer spatial epigenetic and genetic population differences in the same specimens from the previous study. I found weak genetic but no epigenetic spatial variation between populations. Additionally, I was able to identify one genetic marker and one epigenetic marker putatively susceptible to selection. However, the results of this study reconfirmed that the epigenetic mechanism of DNA methylation appears to be hardly involved in adaptive processes within the new range in S. canadensis.
Finally, I conducted a third study in which I reciprocally transplanted short-lived plant species between two climatically divergent regions in Germany to investigate local adaptation at the plant family level. For this purpose, I used four plant families (Amaranthaceae, Asteraceae, Plantaginaceae, Solanaceae) and here I additionally compared between non-native and native plant species. Seeds were transplanted to regions with a distance of more than 600 kilometers and had either a temperate-oceanic or a temperate-continental climate. In this study, some species were found to be maladapted to their own local conditions, both in non-native and native plant species alike. In demethylated individuals of the plant species studied, DNA methylation had inconsistent but species-specific effects on survival and biomass production. The results of this study highlight that DNA methylation did not make a substantial contribution to local adaptation in the non-native as well as native species studied.
In summary, my work showed that DNA methylation plays a negligible role in both adaptive trait variation along climatic gradients and local adaptation in non-native plant species that either exhibit a high degree of genetic variation or rely mainly on sexual reproduction with low clonal propagation. I was able to show that the adaptive success of these non-native plant species can hardly be explained by DNA methylation, but could be a possible consequence of multiple introductions, dispersal corridors and meta-population dynamics. Similarly, my results illustrate that the use of plant species that do not predominantly reproduce clonally and are not model plants is essential to characterize the effect size of epigenetic mechanisms in an ecological-evolutionary context.
The tropical warm pool waters surrounding Indonesia are one of the equatorial heat and moisture sources that are considered as a driving force of the global climate system. The climate in Indonesia is dominated by the equatorial monsoon system, and has been linked to El Niño-Southern Oscillation (ENSO) events, which often result in severe droughts or floods over Indonesia with profound societal and economic impacts on the populations living in the world's fourth most populated country. The latest IPCC report states that ENSO will remain the dominant mode in the tropical Pacific with global effects in the 21st century and ENSO-related precipitation extremes will intensify. However, no common agreement exists among climate simulation models for projected change in ENSO and the Australian-Indonesian Monsoon. Exploring high-resolution palaeoclimate archives, like tree rings or varved lake sediments, provide insights into the natural climate variability of the past, and thus helps improving and validating simulations of future climate changes. Centennial tree-ring stable isotope records | Within this doctoral thesis the main goal was to explore the potential of tropical tree rings to record climate signals and to use them as palaeoclimate proxies. In detail, stable carbon (δ13C) and oxygen (δ18O) isotopes were extracted from teak trees in order to establish the first well-replicated centennial (AD 1900-2007) stable isotope records for Java, Indonesia. Furthermore, different climatic variables were tested whether they show significant correlation with tree-ring proxies (ring-width, δ13C, δ18O). Moreover, highly resolved intra-annual oxygen isotope data were established to assess the transfer of the seasonal precipitation signal into the tree rings. Finally, the established oxygen isotope record was used to reveal possible correlations with ENSO events. Methodological achievements | A second goal of this thesis was to assess the applicability of novel techniques which facilitate and optimize high-resolution and high-throughput stable isotope analysis of tree rings. Two different UV-laser-based microscopic dissection systems were evaluated as a novel sampling tool for high-resolution stable isotope analysis. Furthermore, an improved procedure of tree-ring dissection from thin cellulose laths for stable isotope analysis was designed. The most important findings of this thesis are: I) The herein presented novel sampling techniques improve stable isotope analyses for tree-ring studies in terms of precision, efficiency and quality. The UV-laser-based microdissection serve as a valuable tool for sampling plant tissue at ultrahigh-resolution and for unprecedented precision. II) A guideline for a modified method of cellulose extraction from wholewood cross-sections and subsequent tree-ring dissection was established. The novel technique optimizes the stable isotope analysis process in two ways: faster and high-throughput cellulose extraction and precise tree-ring separation at annual to high-resolution scale. III) The centennial tree-ring stable isotope records reveal significant correlation with regional precipitation. High-resolution stable oxygen values, furthermore, allow distinguishing between dry and rainy season rainfall. IV) The δ18O record reveals significant correlation with different ENSO flavors and demonstrates the importance of considering ENSO flavors when interpreting palaeoclimatic data in the tropics. 
The findings of my dissertation show that seasonally resolved δ18O records from Indonesian teak trees are a valuable proxy for multi-centennial reconstructions of regional precipitation variability (monsoon signals) and large-scale ocean-atmosphere phenomena (ENSO) for the Indo-Pacific region. Furthermore, the novel methodological achievements offer many unexplored avenues for multidisciplinary research in high-resolution palaeoclimatology.
Assessing the impact of global change on hydrological systems is one of the greatest hydrological challenges of our time. Changes in land cover, land use, and climate have an impact on water quantity, quality, and temporal availability. There is a widespread consensus that, given the far-reaching effects of global change, hydrological systems can no longer be viewed as static in their structure; instead, they must be regarded as entire ecosystems, wherein hydrological processes interact and coevolve with biological, geomorphological, and pedological processes. To accurately predict the hydrological response under the impact of global change, it is essential to understand this complex coevolution. The knowledge of how hydrological processes, in particular the formation of subsurface (preferential) flow paths, evolve within this coevolution and how they feed back to the other processes is still very limited due to a lack of observational data.
At the hillslope scale, this intertwined system of interactions is known as the hillslope feedback cycle. This thesis aims to enhance our understanding of the hillslope feedback cycle by studying the coevolution of hillslope structure and hillslope hydrological response. Using chronosequences of moraines in two glacial forefields developed from siliceous and calcareous glacial till, the four studies shed light on the complex coevolution of hydrological, biological, and structural hillslope properties, as well as subsurface hydrological flow paths over an evolutionary period of 10 millennia in these two contrasting geologies. The findings indicate that the contrasting properties of siliceous and calcareous parent materials lead
to variations in soil structure, permeability, and water storage. As a result, different plant species and vegetation types are favored on siliceous versus calcareous parent material, leading to diverse ecosystems with distinct hydrological dynamics. The siliceous parent material was found to show a higher activity level in driving the coevolution. The soil pH resulting from parent material weathering emerges as a crucial factor, influencing vegetation development, soil formation, and consequently, hydrology. The acidic weathering of the siliceous parent material favored the accumulation of organic matter, increasing the soils’ water storage capacity and attracting acid-loving shrubs, which further promoted organic matter accumulation and ultimately led to podsolization after 10 000 years. Tracer experiments revealed that the subsurface flow path evolution was influenced by soil and vegetation development, and vice versa. Subsurface flow paths changed from vertical, heterogeneous matrix flow to finger-like flow paths over a few hundred years, evolving into macropore flow, water storage, and lateral subsurface flow after several thousand years. The changes in flow paths among younger age classes were driven by weathering processes altering soil structure, as well as by vegetation development and root activity. In the older age
class, the transition to more water storage and lateral flow was attributed to substantial organic matter accumulation and ongoing podsolization. The rapid vertical water transport in the finger-like flow paths, along with the conductive sandy material, contributed to podsolization and thus to the shift in the hillslope hydrological response.
In contrast, the calcareous site possesses a high pH buffering capacity, creating a neutral to basic environment with relatively low accumulation of dead organic matter, resulting in a lower water storage capacity and the establishment of predominantly grass vegetation. The coevolution was found to be less dynamic over the millennia. Similar to the siliceous site, significant changes in subsurface flow paths occurred between the young age classes. However, unlike the siliceous site, the subsurface flow paths at the calcareous site only altered in shape and not in direction. Tracer experiments showed that flow paths changed from vertical, heterogeneous matrix flow to vertical, finger-like flow paths after a few hundred to thousands of years, which was driven by root activities and weathering processes. Despite having a finer soil texture, water storage at the calcareous site was significantly lower than at the siliceous site, and water transport remained primarily rapid and vertical, contributing to the flourishing of grass vegetation.
The studies elucidated that changes in flow paths are predominantly shaped by the characteristics of the parent material and its weathering products, along with their complex interactions with initial water flow paths and vegetation development. Time, on the other hand, was not found to be a primary factor in describing the evolution of the hydrological response. This thesis makes a valuable contribution to closing the gap in the observations of the coevolution of hydrological processes within the hillslope feedback cycle, which is important to improve predictions of hydrological processes in changing landscapes. Furthermore, it emphasizes the importance of interdisciplinary studies in addressing the hydrological challenges arising from global change.
Nowadays, model-driven engineering (MDE) promises to ease software development by decreasing the inherent complexity of classical software development. In order to deliver on this promise, MDE increases the level of abstraction and automation, through a consideration of domain-specific models (DSMs) and model operations (e.g. model transformations or code generations). DSMs conform to domain-specific modeling languages (DSMLs), which increase the level of abstraction, and model operations are first-class entities of software development because they increase the level of automation. Nevertheless, MDE has to deal with at least two new dimensions of complexity, which are basically caused by the increased linguistic and technological heterogeneity. The first dimension of complexity is setting up an MDE environment, an activity comprised of the implementation or selection of DSMLs and model operations. Setting up an MDE environment is both time-consuming and error-prone because of the implementation or adaptation of model operations. The second dimension of complexity is concerned with applying MDE for actual software development. Applying MDE is challenging because a collection of DSMs, which conform to potentially heterogeneous DSMLs, are required to completely specify a complex software system. A single DSML can only be used to describe a specific aspect of a software system at a certain level of abstraction and from a certain perspective. Additionally, DSMs are usually not independent but instead have inherent interdependencies, reflecting (partial) similar aspects of a software system at different levels of abstraction or from different perspectives. A subset of these dependencies are applications of various model operations, which are necessary to keep the degree of automation high. This becomes even worse when addressing the first dimension of complexity. Due to continuous changes, all kinds of dependencies, including the applications of model operations, must also be managed continuously. This comprises maintaining the existence of these dependencies and the appropriate (re-)application of model operations. The contribution of this thesis is an approach that combines traceability and model management to address the aforementioned challenges of configuring and applying MDE for software development. The approach is considered as a traceability approach because it supports capturing and automatically maintaining dependencies between DSMs. The approach is considered as a model management approach because it supports managing the automated (re-)application of heterogeneous model operations. In addition, the approach is considered as a comprehensive model management. Since the decomposition of model operations is encouraged to alleviate the first dimension of complexity, the subsequent composition of model operations is required to counteract their fragmentation. A significant portion of this thesis concerns itself with providing a method for the specification of decoupled yet still highly cohesive complex compositions of heterogeneous model operations. The approach supports two different kinds of compositions - data-flow compositions and context compositions. Data-flow composition is used to define a network of heterogeneous model operations coupled by sharing input and output DSMs alone. Context composition is related to a concept used in declarative model transformation approaches to compose individual model transformation rules (units) at any level of detail. 
In this thesis, context composition provides the ability to use a collection of dependencies as context for the composition of other dependencies, including model operations. In addition, the actual implementation of model operations, which are going to be composed, do not need to implement any composition concerns. The approach is realized by means of a formalism called an executable and dynamic hierarchical megamodel, based on the original idea of megamodels. This formalism supports specifying compositions of dependencies (traceability and model operations). On top of this formalism, traceability is realized by means of a localization concept, and model management by means of an execution concept.
Selenium (Se) is an essential trace element that is ubiquitously present in the environment in small concentrations. Essential functions of Se in the human body are manifested through the wide range of proteins, containing selenocysteine as their active center. Such proteins are called selenoproteins which are found in multiple physiological processes like antioxidative defense and the regulation of thyroid hormone functions. Therefore, Se deficiency is known to cause a broad spectrum of physiological impairments, especially in endemic regions with low Se content. Nevertheless, being an essential trace element, Se could exhibit toxic effects, if its intake exceeds tolerable levels. Accordingly, this range between deficiency and overexposure represents optimal Se supply. However, this range was found to be narrower than for any other essential trace element. Together with significantly varying Se concentrations in soil and the presence of specific bioaccumulation factors, this represents a noticeable difficulty in the assessment of Se
epidemiological status. While Se is acting in the body through multiple selenoproteins, its intake occurs mainly in form of small organic or inorganic molecular mass species. Thus, Se exposure not only depends on daily intake but also on the respective chemical form, in which it is present.
The essential functions of selenium have been known for a long time and its primary forms in different food sources have been described. Nevertheless, analytical capabilities for a comprehensive investigation of Se species and their derivatives have been introduced only in the last decades. A new Se compound was identified in 2010 in the blood and tissues of bluefin tuna. It was called selenoneine (SeN) since it is an isologue of naturally occurring antioxidant ergothioneine (ET), where Se replaces sulfur. In the following years, SeN was identified in a number of edible fish species and attracted attention as a new dietary Se source and potentially strong antioxidant. Studies in populations whose diet largely relies on fish revealed that SeN
represents the main non-protein bound Se pool in their blood. First studies, conducted with enriched fish extracts, already demonstrated the high antioxidative potential of SeN and its possible function in the detoxification of methylmercury in fish. Cell culture studies demonstrated, that SeN can utilize the same transporter as ergothioneine, and SeN metabolite was found in human urine.
Until recently, studies on SeN properties were severely limited due to the lack of ways to obtain the pure compound. As a predisposition to this work was firstly a successful approach to SeN synthesis in the University of Graz, utilizing genetically modified yeasts. In the current study, by use of HepG2 liver carcinoma cells, it was demonstrated, that SeN does not cause toxic effectsup to 100 μM concentration in hepatocytes. Uptake experiments showed that SeN is not bioavailable to the used liver cells.
In the next part a blood-brain barrier (BBB) model, based on capillary endothelial cells from the porcine brain, was used to describe the possible transfer of SeN into the central nervous system (CNS). The assessment of toxicity markers in these endothelial cells and monitoring of barrier conditions during transfer experiments demonstrated the absence of toxic effects from SeN on the BBB endothelium up to 100 μM concentration. Transfer data for SeN showed slow but substantial transfer. A statistically significant increase was observed after 48 hours following SeN incubation from the blood-facing side of the barrier. However, an increase in Se content was clearly visible already after 6 hours of incubation with 1 μM of SeN. While the transfer rate of SeN after application of 0.1 μM dose was very close to that for 1 μM, incubation with 10 μM of SeN resulted in a significantly decreased transfer rate. Double-sided application of SeN caused no side-specific transfer of SeN, thus suggesting a passive diffusion mechanism of SeN across the BBB. This data is in accordance with animal studies, where ET accumulation was observed in the rat brain, even though rat BBB does not have the primary ET transporter – OCTN1. Investigation of capillary endothelial cell monolayers after incubation with SeN and reference selenium compounds showed no significant increase of intracellular selenium concentration. Speciesspecific Se measurements in medium samples from apical and basolateral compartments, as good as in cell lysates, showed no SeN metabolization. Therefore, it can be concluded that SeN may reach the brain without significant transformation.
As the third part of this work, the assessment of SeN antioxidant properties was performed in Caco-2 human colorectal adenocarcinoma cells. Previous studies demonstrated that the intestinal epithelium is able to actively transport SeN from the intestinal lumen to the blood side and accumulate SeN. Further investigation within current work showed a much higher antioxidant potential of SeN compared to ET. The radical scavenging activity after incubation with SeN was close to the one observed for selenite and selenomethionine. However, the SeN effect on the viability of intestinal cells under oxidative conditions was close to the one caused by ET. To answer the question if SeN is able to be used as a dietary Se source and induce the activity of selenoproteins, the activity of glutathione peroxidase (GPx) and the secretion of selenoprotein P (SelenoP) were measured in Caco-2 cells, additionally. As expected, reference selenium compounds selenite and selenomethionine caused efficient induction of GPx activity. In contrast to those SeN had no effect on GPx activity. To examine the possibility of SeN being embedded into the selenoproteome, SelenoP was measured in a culture medium. Even though Caco-2 cells effectively take up SeN in quantities much higher than selenite or selenomethionine, no secretion of SelenoP was observed after SeN incubation.
Summarizing, we can conclude that SeN can hardly serve as a Se source for selenoprotein synthesis. However, SeN exhibit strong antioxidative properties, which appear when sulfur in ET is exchanged by Se. Therefore, SeN is of particular interest for research not as part of Se metabolism, but important endemic dietary antioxidant.
Casualties and damages from urban pluvial flooding are increasing. Triggered by short, localized, and intensive rainfall events, urban pluvial floods can occur anywhere, even in areas without a history of flooding. Urban pluvial floods have relatively small temporal and spatial scales. Although cumulative losses from urban pluvial floods are comparable, most flood risk management and mitigation strategies focus on fluvial and coastal flooding. Numerical-physical-hydrodynamic models are considered the best tool to represent the complex nature of urban pluvial floods; however, they are computationally expensive and time-consuming. These sophisticated models make large-scale analysis and operational forecasting prohibitive. Therefore, it is crucial to evaluate and benchmark the performance of other alternative methods.
The findings of this cumulative thesis are represented in three research articles. The first study evaluates two topographic-based methods to map urban pluvial flooding, fill–spill–merge (FSM) and topographic wetness index (TWI), by comparing them against a sophisticated hydrodynamic model. The FSM method identifies flood-prone areas within topographic depressions while the TWI method employs maximum likelihood estimation to calibrate a TWI threshold (τ) based on inundation maps from the 2D hydrodynamic model. The results point out that the FSM method outperforms the TWI method. The study highlights then the advantage and limitations of both methods.
Data-driven models provide a promising alternative to computationally expensive hydrodynamic models. However, the literature lacks benchmarking studies to evaluate the different models' performance, advantages and limitations. Model transferability in space is a crucial problem. Most studies focus on river flooding, likely due to the relative availability of flow and rain gauge records for training and validation. Furthermore, they consider these models as black boxes. The second study uses a flood inventory for the city of Berlin and 11 predictive features which potentially indicate an increased pluvial flooding hazard to map urban pluvial flood susceptibility using a convolutional neural network (CNN), an artificial neural network (ANN) and the benchmarking machine learning models random forest (RF) and support vector machine (SVM). I investigate the influence of spatial resolution on the implemented models, the models' transferability in space and the importance of the predictive features. The results show that all models perform well and the RF models are superior to the other models within and outside the training domain. The models developed using fine spatial resolution (2 and 5 m) could better identify flood-prone areas. Finally, the results point out that aspect is the most important predictive feature for the CNN models, and altitude is for the other models.
While flood susceptibility maps identify flood-prone areas, they do not represent flood variables such as velocity and depth which are necessary for effective flood risk management. To address this, the third study investigates data-driven models' transferability to predict urban pluvial floodwater depth and the models' ability to enhance their predictions using transfer learning techniques. It compares the performance of RF (the best-performing model in the previous study) and CNN models using 12 predictive features and output from a hydrodynamic model. The findings in the third study suggest that while CNN models tend to generalise and smooth the target function on the training dataset, RF models suffer from overfitting. Hence, RF models are superior for predictions inside the training domains but fail outside them while CNN models could control the relative loss in performance outside the training domains. Finally, the CNN models benefit more from transfer learning techniques than RF models, boosting their performance outside training domains.
In conclusion, this thesis has evaluated both topographic-based methods and data-driven models to map urban pluvial flooding. However, further studies are crucial to have methods that completely overcome the limitation of 2D hydrodynamic models.
Towards unifying approaches in exposure modelling for scenario-based multi-hazard risk assessments
(2023)
This cumulative thesis presents a stepwise investigation of the exposure modelling process for risk assessment due to natural hazards while highlighting its, to date, not much-discussed importance and associated uncertainties. Although “exposure” refers to a very broad concept of everything (and everyone) that is susceptible to damage, in this thesis it is narrowed down to the modelling of large-area residential building stocks. Classical building exposure models for risk applications have been constructed fully relying on unverified expert elicitation over data sources (e.g., outdated census datasets), and hence have been implicitly assumed to be static in time and in space. Moreover, their spatial representation has also typically been simplified by geographically aggregating the inferred composition onto coarse administrative units whose boundaries do not always capture the spatial variability of the hazard intensities required for accurate risk assessments. These two shortcomings and the related epistemic uncertainties embedded within exposure models are tackled in the first three chapters of the thesis. The exposure composition of large-area residential building stocks is studied on the scope of scenario-based earthquake loss models. Then, the proposal of optimal spatial aggregation areas of exposure models for various hazard-related vulnerabilities is presented, focusing on ground-shaking and tsunami risks. Subsequently, once the experience is gained in the study of the composition and spatial aggregation of exposure for various hazards, this thesis moves towards a multi-hazard context while addressing cumulative damage and losses due to consecutive hazard scenarios. This is achieved by proposing a novel method to account for the pre-existing damage descriptions on building portfolios as a key input to account for scenario-based multi-risk assessment. Finally, this thesis shows how the integration of the aforementioned elements can be used in risk communication practices. This is done through a modular architecture based on the exploration of quantitative risk scenarios that are contrasted with social risk perceptions of the directly exposed communities to natural hazards.
In Chapter 1, a Bayesian approach is proposed to update the prior assumptions on such composition (i.e., proportions per building typology). This is achieved by integrating high-quality real observations and then capturing the intrinsic probabilistic nature of the exposure model. Such observations are accounted as real evidence from both: field inspections (Chapter 2) and freely available data sources to update existing (but outdated) exposure models (Chapter 3). In these two chapters, earthquake scenarios with parametrised ground motion fields were transversally used to investigate the role of such epistemic uncertainties related to the exposure composition through sensitivity analyses. Parametrised scenarios of seismic ground shaking were the hazard input utilised to study the physical vulnerability of building portfolios. The second issue that was investigated, which refers to the spatial aggregation of building exposure models, was investigated within two decoupled vulnerability contexts: due to seismic ground shaking through the integration of remote sensing techniques (Chapter 3); and within a multi-hazard context by integrating the occurrence of associated tsunamis (Chapter 4). Therein, a careful selection of the spatial aggregation entities while pursuing computational efficiency and accuracy in the risk estimates due to such independent hazard scenarios (i.e., earthquake and tsunami) are discussed. Therefore, in this thesis, the physical vulnerability of large-area building portfolios due to tsunamis is considered through two main frames: considering and disregarding the interaction at the vulnerability level, through consecutive and decoupled hazard scenarios respectively, which were then contrasted.
Contrary to Chapter 4, where no cumulative damages are addressed, in Chapter 5, data and approaches, which were already generated in former sections, are integrated with a novel modular method to ultimately study the likely interactions at the vulnerability level on building portfolios. This is tested by evaluating cumulative damages and losses after earthquakes with increasing magnitude followed by their respective tsunamis. Such a novel method is grounded on the possibility of re-using existing fragility models within a probabilistic framework. The same approach is followed in Chapter 6 to forecast the likely cumulative damages to be experienced by a building stock located in a volcanic multi-hazard setting (ash-fall and lahars). In that section, special focus was made on the manner the forecasted loss metrics are communicated to locally exposed communities. Co-existing quantitative scientific approaches (i.e., comprehensive exposure models; explorative risk scenarios involving single and multiple hazards) and semi-qualitative social risk perception (i.e., level of understanding that the exposed communities have about their own risk) were jointly considered. Such an integration ultimately allowed this thesis to also contribute to enhancing preparedness, science divulgation at the local level as well as technology transfer initiatives.
Finally, a synthesis of this thesis along with some perspectives for improvement and future work are presented.
Reversible addition-fragmentation transfer (RAFT) was used as a controlling technique for studying the aqueous heterophase polymerization. The polymerization rates obtained by calorimetric investigation of ab initio emulsion polymerization of styrene revealed the strong influence of the type and combination of the RAFT agent and initiator on the polymerization rate and its profile. The studies in all-glass reactors on the evolution of the characteristic data such as average molecular weight, molecular weight distribution, and average particle size during the polymerization revealed the importance of the peculiarities of the heterophase system such as compartmentalization, swelling, and phase transfer. These results illustrated the important role of the water solubility of the initiator in determining the main loci of polymerization and the crucial role of the hydrophobicity of the RAFT agent for efficient transportation to the polymer particles. For an optimum control during ab-initio batch heterophase polymerization of styrene with RAFT, the RAFT agent must have certain hydrophilicity and the initiator must be water soluble in order to minimize reactions in the monomer phase. An analytical method was developed for the quantitative measurements of the sorption of the RAFT agents to the polymer particles based on the absorption of the visible light by the RAFT agent. Polymer nanoparticles, temperature, and stirring were employed to simulate the conditions of a typical aqueous heterophase polymerization system. The results confirmed the role of the hydrophilicity of the RAFT agent on the effectiveness of the control due to its fast transportation to the polymer particles during the initial period of polymerization after particle nucleation. As the presence of the polymer particles were essential for the transportation of the RAFT agents into the polymer dispersion, it was concluded that in an ab initio emulsion polymerization the transport of the hydrophobic RAFT agent only takes place after the nucleation and formation of the polymer particles. While the polymerization proceeds and the particles grow the rate of the transportation of the RAFT agent increases with conversion until the free monomer phase disappears. The degradation of the RAFT agent by addition of KPS initiator revealed unambigueous evidence on the mechanism of entry in heterophase polymerization. These results showed that even extremely hydrophilic primary radicals, such as sulfate ion radical stemming from the KPS initiator, can enter the polymer particles without necessarily having propagated and reached a certain chain length. Moreover, these results recommend the employment of azo-initiators instead of persulfates for the application in seeded heterophase polymerization with RAFT agents. The significant slower rate of transportation of the RAFT agent to the polymer particles when its solvent (styrene) was replaced with a more hydrophilic monomer (methyl methacrylate) lead to the conclusion that a complicated cooperative and competitive interplay of solubility parameters and interaction parameter with the particles exist, determining an effective transportation of the organic molecules to the polymer particles through the aqueous phase. The choice of proper solutions of even the most hydrophobic organic molecules can provide the opportunity of their sorption into the polymer particles. 
Examples to support this idea were given by loading the extremely stiff fluorescent molecule, pentacene, and very hydrophobic dye, Sudan IV, into the polymer particles. Finally, the first application of RAFT at room temperature heterophase polymerization is reported. The results show that the RAFT process is effective at ambient temperature; however, the rate of fragmentation is significantly slower. The elevation of the reaction temperature in the presence of the RAFT agent resulted in faster polymerization and higher molar mass, suggesting that the fragmentation rate coefficient and its dependence on the temperature is responsible for the observed retardation.
This work presents mathematical and computational approaches to cover various aspects of metabolic network modelling, especially regarding the limited availability of detailed kinetic knowledge on reaction rates. It is shown that precise mathematical formulations of problems are needed i) to find appropriate and, if possible, efficient algorithms to solve them, and ii) to determine the quality of the found approximate solutions. Furthermore, some means are introduced to gain insights on dynamic properties of metabolic networks either directly from the network structure or by additionally incorporating steady-state information. Finally, an approach to identify key reactions in a metabolic networks is introduced, which helps to develop simple yet useful kinetic models. The rise of novel techniques renders genome sequencing increasingly fast and cheap. In the near future, this will allow to analyze biological networks not only for species but also for individuals. Hence, automatic reconstruction of metabolic networks provides itself as a means for evaluating this huge amount of experimental data. A mathematical formulation as an optimization problem is presented, taking into account existing knowledge and experimental data as well as the probabilistic predictions of various bioinformatical methods. The reconstructed networks are optimized for having large connected components of high accuracy, hence avoiding fragmentation into small isolated subnetworks. The usefulness of this formalism is exemplified on the reconstruction of the sucrose biosynthesis pathway in Chlamydomonas reinhardtii. The problem is shown to be computationally demanding and therefore necessitates efficient approximation algorithms. The problem of minimal nutrient requirements for genome-scale metabolic networks is analyzed. Given a metabolic network and a set of target metabolites, the inverse scope problem has as it objective determining a minimal set of metabolites that have to be provided in order to produce the target metabolites. These target metabolites might stem from experimental measurements and therefore are known to be produced by the metabolic network under study, or are given as the desired end-products of a biotechological application. The inverse scope problem is shown to be computationally hard to solve. However, I assume that the complexity strongly depends on the number of directed cycles within the metabolic network. This might guide the development of efficient approximation algorithms. Assuming mass-action kinetics, chemical reaction network theory (CRNT) allows for eliciting conclusions about multistability directly from the structure of metabolic networks. Although CRNT is based on mass-action kinetics originally, it is shown how to incorporate further reaction schemes by emulating molecular enzyme mechanisms. CRNT is used to compare several models of the Calvin cycle, which differ in size and level of abstraction. Definite results are obtained for small models, but the available set of theorems and algorithms provided by CRNT can not be applied to larger models due to the computational limitations of the currently available implementations of the provided algorithms. Given the stoichiometry of a metabolic network together with steady-state fluxes and concentrations, structural kinetic modelling allows to analyze the dynamic behavior of the metabolic network, even if the explicit rate equations are not known. 
In particular, this sampling approach is used to study the stabilizing effects of allosteric regulation in a model of human erythrocytes. Furthermore, the reactions of that model can be ranked according to their impact on stability of the steady state. The most important reactions in that respect are identified as hexokinase, phosphofructokinase and pyruvate kinase, which are known to be highly regulated and almost irreversible. Kinetic modelling approaches using standard rate equations are compared and evaluated against reference models for erythrocytes and hepatocytes. The results from this simplified kinetic models can simulate acceptably the temporal behavior for small changes around a given steady state, but fail to capture important characteristics for larger changes. The aforementioned approach to rank reactions according to their influence on stability is used to identify a small number of key reactions. These reactions are modelled in detail, including knowledge about allosteric regulation, while all other reactions were still described by simplified reaction rates. These so-called hybrid models can capture the characteristics of the reference models significantly better than the simplified models alone. The resulting hybrid models might serve as a good starting point for kinetic modelling of genome-scale metabolic networks, as they provide reasonable results in the absence of experimental data, regarding, for instance, allosteric regulations, for a vast majority of enzymatic reactions.
Towards seasonal prediction: stratosphere-troposphere coupling in the atmospheric model ICON-NWP
(2020)
Stratospheric variability is one of the main potential sources for sub-seasonal to seasonal predictability in mid-latitudes in winter. Stratospheric pathways play an important role for long-range teleconnections between tropical phenomena, such as the quasi-biennial oscillation (QBO) and El Niño-Southern Oscillation (ENSO), and the mid-latitudes on the one hand, and linkages between Arctic climate change and the mid-latitudes on the other hand. In order to move forward in the field of extratropical seasonal predictions, it is essential that an atmospheric model is able to realistically simulate the stratospheric circulation and variability. The numerical weather prediction (NWP) configuration of the ICOsahedral Non-hydrostatic atmosphere model ICON is currently being used by the German Meteorological Service for the regular weather forecast, and is intended to produce seasonal predictions in future. This thesis represents the first extensive evaluation of Northern Hemisphere stratospheric winter circulation in ICON-NWP by analysing a large set of seasonal ensemble experiments.
An ICON control climatology simulated with a default setup is able to reproduce the basic behaviour of the stratospheric polar vortex. However, stratospheric westerlies are significantly too weak and major stratospheric warmings too frequent, especially in January. The weak stratospheric polar vortex in ICON is furthermore connected to a mean sea level pressure (MSLP) bias pattern resembling the negative phase of the Arctic Oscillation (AO). Since a good representation of the drag exerted by gravity waves is crucial for a realistic simulation of the stratosphere, three sensitivity experiments with reduced gravity wave drag are performed. Both a reduction of the non-orographic and orographic gravity wave drag respectively, lead to a strengthening of the stratospheric vortex and thus a bias reduction in winter, in particular in January. However, the effect of the non-orographic gravity wave drag on the stratosphere is stronger. A third experiment, combining a reduced orographic and non-orographic drag, exhibits the largest stratospheric bias reductions. The analysis of stratosphere-troposphere coupling based on an index of the Northern Annular Mode demonstrates that ICON realistically represents downward coupling. This coupling is intensified and more realistic in experiments with a reduced gravity wave drag, in particular with reduced non-orographic drag. Tropospheric circulation is also affected by the reduced gravity wave drag, especially in January, when the strongly improved stratospheric circulation reduces biases in the MSLP patterns. Moreover, a retuning of the subgrid-scale orography parameterisations leads to a significant error reduction in the MSLP in all months. In conclusion, the combination of these adjusted parameterisations is recommended as a current optimal setup for seasonal simulations with ICON.
Additionally, this thesis discusses further possible influences on the stratospheric polar vortex, including the influence of tropical phenomena, such as QBO and ENSO, as well as the influence of a rapidly warming Arctic. ICON does not simulate the quasi-oscillatory behaviour of the QBO and favours weak easterlies in the tropical stratosphere. A comparison with a reanalysis composite of the easterly QBO phase reveals, that the shift towards the easterly QBO in ICON further weakens the stratospheric polar vortex. On the other hand, the stratospheric reaction to ENSO events in ICON is realistic. ICON and the reanalysis exhibit a weakened stratospheric vortex in warm ENSO years. Furthermore, in particular in winter, warm ENSO events favour the negative phase of the Arctic Oscillation, whereas cold events favour the positive phase. The ICON simulations also suggest a significant effect of ENSO on the Atlantic-European sector in late winter. To investigate the influence of Arctic climate change on mid-latitude circulation changes, two differing approaches with transient and fixed sea ice conditions are chosen. Neither ICON approach exhibits the mid-latitude tropospheric negative Arctic Oscillation circulation response to amplified Arctic warming, as it is discussed on the basis of observational evidence. Nevertheless, adding a new model to the current and active discussion on Arctic-midlatitude linkages, further contributes to the understanding of divergent conclusions between model and observational studies.
Distance Education or e-Learning platform should be able to provide a virtual laboratory to let the participants have hands-on exercise experiences in practicing their skill remotely. Especially in Cybersecurity e-Learning where the participants need to be able to attack or defend the IT System. To have a hands-on exercise, the virtual laboratory environment must be similar to the real operational environment, where an attack or a victim is represented by a node in a virtual laboratory environment. A node is usually represented by a Virtual Machine (VM). Scalability has become a primary issue in the virtual laboratory for cybersecurity e-Learning because a VM needs a significant and fix allocation of resources. Available resources limit the number of simultaneous users. Scalability can be increased by increasing the efficiency of using available resources and by providing more resources. Increasing scalability means increasing the number of simultaneous users.
In this thesis, we propose two approaches to increase the efficiency of using the available resources. The first approach in increasing efficiency is by replacing virtual machines (VMs) with containers whenever it is possible. The second approach is sharing the load with the user-on-premise machine, where the user-on-premise machine represents one of the nodes in a virtual laboratory scenario. We also propose two approaches in providing more resources. One way to provide more resources is by using public cloud services. Another way to provide more resources is by gathering resources from the crowd, which is referred to as Crowdresourcing Virtual Laboratory (CRVL).
In CRVL, the crowd can contribute their unused resources in the form of a VM, a bare metal system, an account in a public cloud, a private cloud and an isolated group of VMs, but in this thesis, we focus on a VM. The contributor must give the credential of the VM admin or root user to the CRVL system. We propose an architecture and methods to integrate or dis-integrate VMs from the CRVL system automatically. A Team placement algorithm must also be investigated to optimize the usage of resources and at the same time giving the best service to the user. Because the CRVL system does not manage the contributor host machine, the CRVL system must be able to make sure that the VM integration will not harm their system and that the training material will be stored securely in the contributor sides, so that no one is able to take the training material away without permission. We are investigating ways to handle this kind of threats.
We propose three approaches to strengthen the VM from a malicious host admin. To verify the integrity of a VM before integration to the CRVL system, we propose a remote verification method without using any additional hardware such as the Trusted Platform Module chip. As the owner of the host machine, the host admins could have access to the VM's data via Random Access Memory (RAM) by doing live memory dumping, Spectre and Meltdown attacks. To make it harder for the malicious host admin in getting the sensitive data from RAM, we propose a method that continually moves sensitive data in RAM. We also propose a method to monitor the host machine by installing an agent on it. The agent monitors the hypervisor configurations and the host admin activities.
To evaluate our approaches, we conduct extensive experiments with different settings. The use case in our approach is Tele-Lab, a Virtual Laboratory platform for Cyber Security e-Learning. We use this platform as a basis for designing and developing our approaches. The results show that our approaches are practical and provides enhanced security.
Identity management is at the forefront of applications’ security posture. It separates the unauthorised user from the legitimate individual. Identity management models have evolved from the isolated to the centralised paradigm and identity federations. Within this advancement, the identity provider emerged as a trusted third party that holds a powerful position. Allen postulated the novel self-sovereign identity paradigm to establish a new balance. Thus, extensive research is required to comprehend its virtues and limitations. Analysing the new paradigm, initially, we investigate the blockchain-based self-sovereign identity concept structurally. Moreover, we examine trust requirements in this context by reference to patterns. These shapes comprise major entities linked by a decentralised identity provider. By comparison to the traditional models, we conclude that trust in credential management and authentication is removed. Trust-enhancing attribute aggregation based on multiple attribute providers provokes a further trust shift. Subsequently, we formalise attribute assurance trust modelling by a metaframework. It encompasses the attestation and trust network as well as the trust decision process, including the trust function, as central components. A secure attribute assurance trust model depends on the security of the trust function. The trust function should consider high trust values and several attribute authorities. Furthermore, we evaluate classification, conceptual study, practical analysis and simulation as assessment strategies of trust models. For realising trust-enhancing attribute aggregation, we propose a probabilistic approach. The method exerts the principle characteristics of correctness and validity. These values are combined for one provider and subsequently for multiple issuers. We embed this trust function in a model within the self-sovereign identity ecosystem. To practically apply the trust function and solve several challenges for the service provider that arise from adopting self-sovereign identity solutions, we conceptualise and implement an identity broker. The mediator applies a component-based architecture to abstract from a single solution. Standard identity and access management protocols build the interface for applications. We can conclude that the broker’s usage at the side of the service provider does not undermine self-sovereign principles, but fosters the advancement of the ecosystem. The identity broker is applied to sample web applications with distinct attribute requirements to showcase usefulness for authentication and attribute-based access control within a case study.
Towards greener stationary phases : thermoresponsive and carbonaceous chromatographic supports
(2011)
Polymers which are sensitive towards external physical, chemical and electrical stimuli are termed ‘intelligent materials’ and are widely used in medical and engineering applications. Presently, polymers which undergo a physical change when heated to a certain temperature (the cloud point) in water are well studied in separation chemistry, gene and drug delivery, and as surface modifiers. One example of such a polymer is poly(N-isopropylacrylamide) (PNIPAAM), which dissolves well in water below 32 °C, while increasing the temperature further leads to its precipitation. In this work, an alternative polymer, poly(2-(2-methoxyethoxy)ethyl methacrylate-co-oligo(ethylene glycol) methacrylate) (P(MEO2MA-co-OEGMA)), is studied due to its biocompatibility and the ability to vary its cloud point in water. When a layer of the temperature-responsive polymer was attached to a single continuous porous piece of silica-based material known as a monolith, the thermoresponsive characteristic was transferred to the column surface. The hybrid material was demonstrated to act as a simple temperature ‘switch’ in the separation of a mixture of five steroids in water. Different analytes were separated at different column temperatures. Furthermore, more complex biochemical compounds such as proteins were also tested for separation. The importance of this work lies in separation processes utilizing environmentally friendly conditions, since the harsh chemical environments conventionally used to resolve biocompounds can render their biological activities inactive.
Ammonia is a chemical of fundamental importance for nature's vital nitrogen cycle. It is crucial for the growth of living organisms and serves as a food and energy source. Traditionally, industrial ammonia production is dominated by the Haber-Bosch process (HBP), which is based on the direct conversion of N2 and H2 gas at high temperature and high pressure (~500 °C, 150-300 bar). However, it is not an ideal route because of its thermodynamic and kinetic limitations and the need for the energy-intensive production of hydrogen gas by reforming processes. These drawbacks of the HBP motivate the search for an alternative technique for efficient ammonia synthesis via electrochemical catalytic processes, in particular via water electrolysis, using water as the hydrogen source to free the process from gas reforming.
This study investigates the interface effects between imidazolium-based ionic liquids and the surface of porous carbon materials, with a special interest in the nitrogen absorption capability. As a further step, the possibility of establishing this interface as the catalytically active area for the electrochemical reduction of N2 to NH3 (nitrogen reduction reaction, NRR) has been evaluated. This particular combination was chosen because porous carbon materials and ionic liquids (IL) are of significant importance in many scientific fields, including catalysis and electrocatalysis, due to their special structural and physicochemical properties. Primarily, the effects of confining the ionic liquid EmimOAc (1-ethyl-3-methylimidazolium acetate) in carbon pores have been investigated. Salt-templated porous carbons with different porosities (microporous and mesoporous) and nitrogen species were used as model structures to compare the IL confinement at different loadings. The nitrogen uptake of EmimOAc can be increased about 10-fold by confinement in the pores of carbon materials compared to the bulk form. In addition, the most improved nitrogen absorption was observed for IL confined in micropores and in nitrogen-doped carbon materials, as a consequence of the maximized structural changes of the IL. Furthermore, the possible use of such interfaces between EmimOAc and porous carbon for the catalytic activation of dinitrogen during the NRR, which is kinetically challenging due to the limited gas absorption in the electrolyte, was examined. An electrocatalytic NRR system based on the conversion of water and nitrogen gas to ammonia at ambient conditions (1 bar, 25 °C) was operated under an applied electric potential in a single-chamber electrochemical cell, combining the EmimOAc electrolyte with a porous-carbon working electrode and without a traditional electrocatalyst. At a potential of -3 V vs. SCE for 45 minutes, an NH3 production rate of 498.37 μg h⁻¹ cm⁻² and a faradaic efficiency (FE) of 12.14% were achieved. The experimental observations show that an electric double layer, which serves as the catalytically active area, forms between a microporous carbon material and the ions of the EmimOAc electrolyte at a sufficiently high applied electric potential. Compared with typical NRR systems reported in the literature, the presented electrochemical ammonia synthesis approach provides a significantly higher ammonia production rate with a chance to avoid the possible kinetic limitations of the NRR. The operating conditions, ammonia production rate and faradaic efficiency achieved without any synthetic electrocatalyst can be attributed to the electrocatalytic activation of nitrogen in the double layer formed between the carbon and the IL ions.
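As a quick sanity check on such figures, the faradaic efficiency follows from the produced ammonia mass and the total passed charge via FE = z·F·n(NH3)/Q, with z = 3 electrons per NH3 molecule. A minimal sketch (all numbers illustrative, not the thesis data):

```python
# Faradaic efficiency of electrochemical NH3 synthesis: FE = z * F * n_NH3 / Q.
F = 96485.0     # C/mol, Faraday constant
z = 3           # electrons per NH3 (N2 + 6 H+ + 6 e- -> 2 NH3)
M_NH3 = 17.03   # g/mol

def faradaic_efficiency(m_nh3_ug: float, charge_C: float) -> float:
    """FE from the NH3 mass in micrograms and the total charge in coulombs."""
    n_nh3 = m_nh3_ug * 1e-6 / M_NH3   # mol of NH3
    return z * F * n_nh3 / charge_C   # dimensionless fraction

# Illustrative only: 370 ug NH3 collected after 45 min at an average of 20 mA.
print(faradaic_efficiency(370.0, 0.020 * 45 * 60))   # ~0.12, i.e. about 12% FE
```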
The identification of vulnerabilities in IT infrastructures is crucial to enhancing security, because many incidents result from already known vulnerabilities that could have been resolved. Thus, the initial identification of vulnerabilities has to be followed by directly resolving the related weaknesses and mitigating attack possibilities. The nature of vulnerability information requires collecting and normalizing the information prior to any utilization, because it is widely distributed across different sources, each with its own format. Therefore, a comprehensive vulnerability model was defined and the different sources were integrated into one database. Furthermore, different analytic approaches have been designed and implemented in the HPI-VDB, which directly benefit from the comprehensive vulnerability model and especially from the logical preconditions and postconditions.
Firstly, different approaches to detect vulnerabilities both in the IT systems of average users and in the corporate networks of large companies are presented. These approaches mainly focus on the identification of all installed applications, since this is a fundamental step in the detection. The detection is realized differently depending on the target use case, taking into account the experience of the user as well as the layout and capabilities of the target infrastructure. Furthermore, a passive, lightweight detection approach was developed that utilizes existing information in corporate networks to identify applications.
In addition, two different approaches to representing the results using attack graphs are illustrated by comparing traditional attack graphs with a simplified graph version, which was integrated into the database as well. The implementation of these use cases for vulnerability information pays particular attention to usability. Besides the analytic approaches, a high quality of the vulnerability data had to be achieved and guaranteed. The problems of receiving incomplete or unreliable vulnerability information are addressed with different correction mechanisms. The corrections can be carried out with correlation or lookup mechanisms against reliable sources or identifier dictionaries. Furthermore, a machine-learning-based verification procedure is presented that allows an automatic derivation of important characteristics from the textual description of the vulnerabilities.
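A toy illustration of how attack paths can be chained from such logical pre- and postconditions; the vulnerability entries below are invented, not taken from the HPI-VDB:

```python
# Each vulnerability lists what an attacker must already have (pre) and what
# exploiting it grants (post).
vulns = {
    "CVE-A": {"pre": {"net_access"}, "post": {"user_shell"}},
    "CVE-B": {"pre": {"user_shell"}, "post": {"root"}},
    "CVE-C": {"pre": {"root"}, "post": {"db_read"}},
}

def reachable(initial: set) -> tuple:
    """Forward-chain: a vulnerability becomes exploitable once all of its
    preconditions hold; its postconditions then extend the attacker state."""
    state, path = set(initial), []
    progress = True
    while progress:
        progress = False
        for cve, v in vulns.items():
            if cve not in path and v["pre"] <= state:
                state |= v["post"]
                path.append(cve)
                progress = True
    return state, path

state, path = reachable({"net_access"})
print(path)  # ['CVE-A', 'CVE-B', 'CVE-C'] -- one attack path ending at db_read
```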
This doctoral thesis seeks to elaborate how Wittgenstein’s very sparse writings on ethics and ethical thought, together with his later work on the more general problem of normativity and his approach to philosophical problems as a whole, can be applied to contemporary meta-ethical debates about the nature of moral thought and language and the sources of moral obligation. I begin with a discussion of Wittgenstein’s early “Lecture on Ethics”, distinguishing the thesis of a strict fact/value dichotomy that Wittgenstein defends there from the related thesis that all ethical discourse is essentially and intentionally nonsensical, an attempt to go beyond the limits of sense. The first chapter discusses and defends Wittgenstein’s argument that moral valuation always goes beyond any ascertaining of fact; the second chapter seeks to draw out the valuable insights from Wittgenstein’s (early) insistence that value discourse is nonsensical while also arguing that this thesis is ultimately untenable and incompatible with the later Wittgensteinian understanding of language. On the basis of this discussion I then take up the writings of the American philosopher Cora Diamond, who has worked out an ethical approach in a very closely Wittgensteinian spirit, and show how this approach shares many of the valuable insights of the moral expressivism and constructivism of contemporary authors such as Blackburn and Korsgaard while suggesting a way to avoid some of the problems and limitations of their approaches. Subsequently I turn to a criticism of the attempts by Lovibond and McDowell to enlist Wittgenstein in support of a non-naturalist moral realism. A concluding chapter treats the ways that a broadly Wittgensteinian conception expands the subject of metaethics itself by questioning the primacy of discursive argument in moral thought and of moral propositions as the basic units of moral belief.
In our daily life, recurrence plays an important role on many spatial and temporal scales and in different contexts. It is the foundation of learning, be it in an evolutionary or in a neural context. It therefore seems natural that recurrence is also a fundamental concept in theoretical dynamical systems science. The way in which states of a system recur, or develop in a similar way from similar initial states, makes it possible to infer information about the underlying dynamics of the system. The mathematical space in which we define the state of a system (the state space) is often high-dimensional, especially in complex systems that can also exhibit chaotic dynamics. The recurrence plot (RP) enables us to visualize the recurrences of any high-dimensional system in a two-dimensional, binary representation. Certain patterns in RPs can be related to physical properties of the underlying system, making the qualitative and quantitative analysis of RPs an integral part of nonlinear systems science. The presented work has a methodological focus and further develops recurrence analysis (RA) by addressing current research questions related to the increasing amount of available data and advances in machine learning techniques. By automating a central step in RA, namely the reconstruction of the state space from measured experimental time series, and by investigating the impact of important free parameters, this thesis aims to make RA more accessible to researchers outside of physics.
The first part of this dissertation is concerned with the reconstruction of the state space from time series. To this end, a novel idea is proposed which automates the reconstruction problem in the sense that there is no need to preprocess the data or estimate parameters a priori. The key idea is that the goodness of a reconstruction can be evaluated by a suitable objective function and that this function is minimized in the embedding process. In addition, the new method can process multivariate time series as input. This is particularly important because multi-channel, sensor-based observations are ubiquitous in many research areas and their number continues to increase. Building on this, the described minimization problem of the objective function is then handled using a machine learning approach.
In the second part, technical and methodological aspects of RA are discussed. First, we mathematically justify the idea of setting the most influential free parameter in RA, the recurrence threshold ε, in relation to the distribution of all pairwise distances in the data. This is especially important when comparing different RPs and their quantification statistics and is fundamental to any comparative study. Second, some aspects of recurrence quantification analysis (RQA) are examined. As correction schemes for biased RQA statistics based on diagonal lines, we propose a simple method for dealing with border effects of an RP in RQA and a skeletonization algorithm for RPs. This results in less biased (diagonal-line-based) RQA statistics for flow-like data. Third, a novel type of RQA characteristic is developed, which can be viewed as a generalized nonlinear power spectrum of high-dimensional systems. The spike power spectrum transforms a spike-train-like signal into the frequency domain. When the diagonal-line-dependent recurrence rate (τ-RR) of an RP is transformed in this way, characteristic periods, which can be seen in the state-space representation of the system, can be unraveled. This is not the case when Fourier transforming τ-RR.
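To illustrate the threshold convention advocated above, a minimal recurrence-plot sketch in which ε is set as a quantile of the distribution of all pairwise distances; the delay and quantile values are illustrative choices:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def recurrence_plot(x, quantile=0.1):
    """Binary RP of a (possibly multivariate) trajectory x with shape (T, d).
    eps is the given quantile of all pairwise distances, which makes RPs of
    different systems and datasets comparable."""
    D = squareform(pdist(x))           # T x T matrix of pairwise distances
    eps = np.quantile(D, quantile)     # threshold relative to the distance distribution
    return (D <= eps).astype(int)

# Example: a noisy sine, delay-embedded in two dimensions (assumed parameters).
t = np.linspace(0, 8 * np.pi, 400)
s = np.sin(t) + 0.05 * np.random.randn(t.size)
tau = 10                               # hypothetical embedding delay
x = np.column_stack([s[:-tau], s[tau:]])
R = recurrence_plot(x, quantile=0.1)   # diagonal structures reflect the periodicity
```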
Finally, RA and RQA are applied to climate science in the third part and to neuroscience in the fourth part. To the best of our knowledge, this is the first time RPs and RQA have been used to analyze lake sediment data in a paleoclimate context. We therefore first elaborate on the basic formalism and the interpretation of visually identifiable patterns in RPs in relation to the underlying proxy data. We show that these patterns can be used to classify certain types of variability and transitions in the potassium record from six short (<17 m) sediment cores collected during the Chew Bahir Drilling Project. Building on this, the long core (∼ m composite) from the same site is analyzed, and two types of variability and transitions are identified and compared with the ODP Site wetness index from the eastern Mediterranean. Type variability likely reflects the influence of precessional forcing in the lower latitudes at times of maximum values of the long eccentricity cycle ( kyr) of the Earth’s orbit around the sun, with a tendency towards extreme events. Type variability appears to be related to the minimum values of this cycle and corresponds to fairly rapid transitions between relatively dry and relatively wet conditions.
In contrast, RQA has been applied in the neuroscientific context for almost two decades. In the final part, RQA statistics are used to quantify the complexity in a specific frequency band of multivariate EEG (electroencephalography) data. By analyzing experimental data, it can be shown that the complexity of the signal measured in this way across the sensorimotor cortex decreases as motor tasks are performed. The results are consistent with and complement the well-known concepts of motor-related brain processes. We assume that the features of neuronal dynamics in the sensorimotor cortex discovered in this way, together with the robust RQA methods for identifying and classifying them, contribute to the non-invasive, EEG-based development of brain-computer interfaces (BCI) for motor control and rehabilitation.
The present work is an important step towards a robust analysis of complex systems based on recurrence.
Understanding the principles of self-organisation exhibited by block copolymers requires the combination of synthetic and physicochemical knowledge. The ability to synthesise block copolymers with a desired architecture facilitates manipulating their aggregation behaviour, thus providing a key to nanotechnology. Apart from the relative block volumes, the size and morphology of the produced nanostructures are controlled by the effective incompatibility between the different blocks. Since polymerisation techniques allowing for the synthesis of well-defined block copolymers are restricted to a limited number of monomers, the ability to tune the incompatibility is very limited. Nevertheless, polymer-analogous reactions offer another route to functional block copolymers through chemical modification of well-defined polymer precursors. By applying appropriate modification methods, both the volume fractions and the incompatibility can be adjusted. Moreover, copolymers with introduced functional units allow the concept of molecular recognition to be utilized in the world of synthetic polymers. The present work describes a modular synthetic approach towards functional block copolymers. Radical addition of functional mercaptans was employed to introduce diverse functional groups into polybutadiene-containing block copolymers. Various modifications of 1,2-polybutadiene-poly(ethylene oxide) block copolymer precursors are described in detail. Furthermore, extension of the concept to 1,2-polybutadiene-polystyrene block copolymers is demonstrated. Further investigations involved the self-organisation of the modified block copolymers. The aggregates formed in aqueous solutions of block copolymers with introduced carboxylic acid, amine and hydroxyl groups as well as fluorinated chains were characterised. Study of the aggregation behaviour allowed general conclusions to be drawn regarding the influence of the introduced groups on the self-organisation of the modified copolymers. Finally, possibilities for the formation of complexes based on electrostatic or hydrogen-bonding interactions in mixtures of block copolymers bearing mutually interacting functional groups were investigated.
Public administrations confront fundamental challenges, including globalization, digitalization, and an eroding level of trust from society. By developing joint public service delivery with other stakeholders, public administrations can respond to these challenges. This increases the importance of inter-organizational governance—a development often referred to as New Public Governance, which to date has not been realized because public administrations focus on intra-organizational practices and follow the traditional “governmental chain.”
E-government initiatives, which can lead to high levels of interconnected public services, are currently perceived as insufficient to meet this goal. They are not designed holistically and merely affect the interactions of public and non-public stakeholders. A fundamental shift toward a joint public service delivery would require scrutiny of established processes, roles, and interactions between stakeholders.
Various scientists and practitioners within the public sector assume that the use of blockchain institutional technology could fundamentally change the relationship between public and non-public stakeholders. At first glance, inter-organizational, joint public service delivery could benefit from the use of blockchain. This dissertation aims to shed light on this widespread assumption. Hence, the objective of this dissertation is to substantiate the effect of blockchain on the relationship between public administrations and non-public stakeholders.
This objective is pursued by defining three major areas of interest. First, this dissertation strives to answer the question of whether or not blockchain is suited to enable New Public Governance and to identify instances where blockchain may not be the proper solution. The second area aims to understand empirically the status quo of existing blockchain implementations in the public sector and whether they comply with the major theoretical conclusions. The third area investigates the changing role of public administrations, as the blockchain ecosystem can significantly increase the number of stakeholders.
Corresponding research is conducted to provide insights into these areas, for example, combining theoretical concepts with empirical actualities, conducting interviews with subject matter experts and key stakeholders of leading blockchain implementations, and performing a comprehensive stakeholder analysis, followed by visualization of its results.
The results of this dissertation demonstrate that blockchain can support New Public Governance in many ways, while having only a minor impact on certain aspects (e.g., decentralized control) that define this public service paradigm. Furthermore, the existing projects indicate changes to relationships between public administrations and non-public stakeholders, although not necessarily the fundamental shift proposed by New Public Governance. Lastly, the results suggest that power relations are shifting, including a decreasing influence of public administrations within the blockchain ecosystem. The results raise questions about the governance models and regulations required to support mature solutions and the further diffusion of blockchain for public service delivery.
Reversible-deactivation radical polymerization (RDRP) is without any doubt one of the most prevalent and powerful strategies for polymer synthesis, by which well-defined living polymers with targeted molecular weight (MW), low molar dispersity (Ɖ) and diverse morphologies can be prepared in a controlled fashion. Atom transfer radical polymerization (ATRP), one of the most extensively studied types of RDRP, has received particular emphasis due to the high accessibility of hybrid materials, multifunctional copolymers and diverse end-group functionalities via commercially available precursors. However, due to catalyst-induced side reactions and chain-chain coupling termination in the bulk environment, the synthesis of high-MW polymers with uniform chain length (low Ɖ) and highly preserved chain-end fidelity is usually challenging. Besides, owing to its inherently radical nature, controlling the microstructure, namely the tacticity, is another laborious task. Regarding the applied catalysts, the utilization of large amounts of non-reusable transition metal ions, which leads to cumbersome purification, product contamination and complicated reaction procedures, further limits the scope of ATRP techniques.
Metal-organic frameworks (MOFs) are an emerging type of porous material combining the properties of both organic polymers and inorganic crystals, characterized by a well-defined crystalline framework, high specific surface area, tunable porous structure and versatile nanochannel functionalities. These promising properties of MOFs have thoroughly revolutionized academic research and applications in numerous areas, including gas processing, sensing, photoluminescence, catalysis and compartmentalized polymerization. Through functionalization, the microenvironment of a MOF nanochannel can be precisely devised and tailored with specific functional groups for individual host-guest interactions. Furthermore, the high transition metal density, accessible catalytic sites and crystalline particles all mark MOFs as prominent heterogeneous catalysts that open a new avenue towards unprecedented catalytic performance. Despite these beneficial properties in catalysis, strong agglomeration and poor dispersibility restrain the potential catalytic capacity to a certain degree.
Due to the thriving development of MOF science, fundamental polymer science is undergoing a significant transformation, and advanced polymerization strategies can, in turn, remedy the intrinsic drawbacks of MOF solids. Therefore, in the present thesis, the combination of low-dimensional polymers with crystalline MOFs is demonstrated as a robust and comprehensive approach to gain the bilateral advantages of polymers (flexibility, dispersibility) and MOFs (stability, crystallinity). The utilization of MOFs for in-situ polymerizations and catalytic purposes makes it possible to synthesize intriguing polymers in a facile and universal process, expanding the applicability of the conventional ATRP methodology. On the other hand, through the formation of MOF/polymer composites by surface functionalization, MOF particles with environment-adjustable dispersibility and high catalytic activity can be prepared.
In the present thesis, an approach combining the confined porous textures of MOFs with controlled radical polymerization is proposed to advance synthetic polymer chemistry. Zn2(bdc)2(dabco) (Znbdc) and the initiator-functionalized Zn MOF, ZnBrbdc, are utilized as reaction environments for the in-situ polymerization of various size-dependent methacrylate monomers (i.e. methyl, ethyl, benzyl and isobornyl methacrylate) through (surface-initiated) activators regenerated by electron transfer (ARGET/SI-ARGET) ATRP, resulting in polymers with control over dispersity, end functionalities and tacticity with respect to the distinct molecular sizes. When the functionalized MOFs are applied, polymers with molecular weights up to 392,000 can be obtained due to the strengthened compartmentalization effect. Moreover, a significant improvement in end-group fidelity and stereocontrol can be observed. The results highlight that the combination of MOFs and ATRP is a promising and universal methodology to synthesize versatile well-defined polymers with high molecular weight, an increased fraction of isotactic triads and preserved chain-end functionality.
More than being a host only, MOFs can act as heterogeneous catalysts for metal-catalyzed polymerizations. A Cu(II)-based MOF, Cu2(bdc)2(dabco), is demonstrated as a heterogeneous, universal catalyst for both thermally and visible-light-triggered ARGET ATRP with an expanded monomer range. The accessible catalytic metal sites enable the Cu(II) MOF to polymerize various monomers, including benzyl methacrylate (BzMA), styrene, methyl methacrylate (MMA) and 2-(dimethylamino)ethyl methacrylate (DMAEMA), in the fashion of ARGET ATRP. Furthermore, due to its robust framework, and surpassing conventional homogeneous catalysts, the Cu(II) MOF can tolerate strongly coordinating monomers and polymerize challenging monomers (i.e. 4-vinyl pyridine, 2-vinyl pyridine and isoprene) in a well-controlled fashion. The synthetic procedure is thereby significantly simplified, and catalyst-derived chelation is avoided as well. Like other heterogeneous catalysts, the Cu(II) MOF catalytic complexes can easily be collected by centrifugation and recycled an arbitrary number of times.
The Cu(II) MOF, composed of photostimulable metal sites, is further used to catalyze controlled photopolymerization under visible light, requiring no external photoinitiator, dye sensitizer or ligand. A simple light trigger allows the photoreduction of Cu(II) to the active Cu(I) state, enabling controlled polymerization in the form of ARGET ATRP. Beyond the polymerization application, a synergistic effect between the MOF framework and incorporated nucleophilic monomers/molecules is also observed, where the formation of associating complexes adjusts the photochemical and electrochemical properties of the Cu(II) MOF, altering its band gap and light-harvesting behavior. Owing to the tunable photoabsorption resulting from the coordinating guests, photoinduced reversible-deactivation radical polymerization (PRDRP) can be achieved to further simplify and accelerate the polymerization.
Beyond the adjustable photoabsorption ability, the synergistic strategy of combining a controlled/living polymerization technique with crystalline MOFs is evidenced again by MOF-based heterogeneous catalysts with enhanced dispersibility in solution. By introducing hollow pollen pivots with a surface-immobilized environment-responsive polymer, PDMAEMA, highly dispersed MOF nanocrystals can be prepared after association on the polymer brushes via the intrinsic amine functionality of each DMAEMA monomer. Intriguingly, the pollen-PDMAEMA composite can serve as a “smart” anchor to trap nanoMOF particles with improved dispersibility, and thus significantly enhance the liquid-phase photocatalytic performance. Furthermore, the catalytic activity can be switched on and off via the stimulable coil-to-globule transition of the PDMAEMA chains, which exposes or buries the MOF catalytic sites, respectively.
In experiments investigating sentence processing, eye movement measures such as fixation durations and regression proportions while reading are commonly used to draw conclusions about processing difficulties. However, these measures are the result of an interaction of multiple cognitive levels and processing strategies, and thus are only indirect indicators of processing difficulty. In order to properly interpret an eye movement response, one has to understand the underlying principles of adaptive processing, such as trade-off mechanisms between reading speed and depth of comprehension that interact with task demands and individual differences. Therefore, it is necessary to establish explicit models of the respective mechanisms as well as their causal relationship with observable behavior. There are models of lexical processing and eye movement control on the one side and models of sentence parsing and memory processes on the other. However, no model so far combines both sides with explicitly defined linking assumptions.
In this thesis, a model is developed that integrates oculomotor control with a parsing mechanism and a theory of cue-based memory retrieval. On the basis of previous empirical findings and independently motivated principles, adaptive, resource-preserving mechanisms of underspecification are proposed both on the level of memory access and on the level of syntactic parsing. The thesis first investigates the cue-based retrieval model of sentence comprehension of Lewis & Vasishth (2005) with a comprehensive literature review and computational modeling of retrieval interference in dependency processing. The results reveal a great variability in the data that is not explained by the theory. Therefore, two principles, 'distractor prominence' and 'cue confusion', are proposed as an extension to the theory, thus providing a more adequate description of systematic variance in empirical results as a consequence of experimental design, linguistic environment, and individual differences. In the remainder of the thesis, four interfaces between parsing and eye movement control are defined: Time Out, Reanalysis, Underspecification, and Subvocalization. By comparing computationally derived predictions with experimental results from the literature, it is investigated to what extent these four interfaces constitute an appropriate elementary set of assumptions for explaining specific eye movement patterns during sentence processing. Through simulations, it is shown how this system of individually simple assumptions results in predictions of complex, adaptive behavior.
In conclusion, it is argued that, on all levels, the sentence comprehension mechanism seeks a balance between necessary processing effort and reading speed on the basis of experience, task demands, and resource limitations. Theories of linguistic processing therefore need to be explicitly defined and implemented, in particular with respect to linking assumptions between observable behavior and underlying cognitive processes. The comprehensive model developed here integrates multiple levels of sentence processing that hitherto have only been studied in isolation. The model is made publicly available as an expandable framework for future studies of the interactions between parsing, memory access, and eye movement control.
The East African Plateau provides a spectacular example of geodynamic plateau uplift, active continental rifting, and associated climatic forcing. It is an integral part of the East African Rift System and has an average elevation of approximately 1,000 m. Its location coincides with a negative Bouguer gravity anomaly of semi-circular shape, closely related to a mantle plume that has influenced Cenozoic crustal development since its impingement in Eocene-Oligocene time. The uplift of the East African Plateau, preceding volcanism, and rifting formed an important orographic barrier and a tectonically controlled environment that is profoundly influenced by climate-driven processes. Its location within the equatorial realm supports recently proposed hypotheses that topographic changes in this region must be considered the dominant forcing factor influencing atmospheric circulation patterns and rainfall distribution. The uplift of this region has therefore often been associated with fundamental climatic and environmental changes in East Africa and adjacent regions. While the far-reaching influence of the plateau uplift is widely accepted, the timing and the magnitude of the uplift are ambiguous and still subject to ongoing discussion. This dilemma stems from the lack of datable, geomorphically meaningful reference horizons that could record surface uplift. In order to quantify the amount of plateau uplift and to find evidence for the existence of significant relief along the East African Plateau prior to rifting, I analyzed and modeled one of the longest terrestrial lava flows: the 300-km-long Yatta phonolite flow in Kenya. This lava flow is 13.5 Ma old and originated in the region that now corresponds to the eastern rift shoulders. The phonolitic flow followed an old riverbed that once drained the eastern flank of the plateau. Due to differential erosion, this lava flow now forms a positive relief above the parallel-flowing Athi River, which mimics the course of the paleo-river. My approach is lava-flow modeling, based on an improved composition- and temperature-dependent method to parameterize the flow of an arbitrary lava in a rectangular channel. The essential growth pattern is described by a one-dimensional model in which the advance of the Newtonian rheological flow is governed by the development of viscosity and/or velocity in the internal parts of the lava-flow front. Comparative assessments of different magma compositions reveal that length-dominated, channelized lava flows are characterized by high effusion rates, rapid emplacement under approximately isothermal conditions, and laminar flow. By integrating the Yatta lava flow dimensions and the covered paleo-topography (slope angle) into the model, I was able to determine the pre-rift topography of the East African Plateau. The modeling results yield a pre-rift slope of at least 0.2°, suggesting that the lava flow must have originated at a minimum elevation of 1,400 m. Hence, high topography in the region of the present-day Kenya Rift must have existed by at least 13.5 Ma. This inferred mid-Miocene uplift coincides with the two-step expansion of grasslands, as well as with important radiation and speciation events in tropical Africa.
Accordingly, the combination of my results regarding the Yatta lava flow's emplacement history, its location, and its morphologic character validates it as a suitable “paleo-tiltmeter”, and it must thus be considered an important topographic and volcanic feature in the topographic evolution of East Africa.
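A drastically simplified, purely illustrative version of such a one-dimensional Newtonian channel-flow model (not the thesis code; the density, flow depth, viscosity law and run time are assumed values) shows how a gentle pre-rift slope can still carry a cooling lava flow hundreds of kilometres:

```python
import numpy as np

rho, g = 2500.0, 9.81             # kg/m3, m/s2 (assumed phonolite density)
h, slope = 5.0, np.radians(0.2)   # assumed flow depth and the ~0.2 deg pre-rift slope

def viscosity(t, eta0=1e4, tau=3e6):
    """Assumed exponential viscosity increase with time as the lava cools."""
    return eta0 * np.exp(t / tau)   # Pa s

# Explicit time stepping: front velocity of a Newtonian sheet flow down a slope,
# u = rho * g * h^2 * sin(alpha) / (3 * eta).
dt, t, L = 3600.0, 0.0, 0.0
while t < 120 * 24 * 3600:          # integrate over ~120 days (illustrative)
    u = rho * g * h**2 * np.sin(slope) / (3.0 * viscosity(t))
    L += u * dt
    t += dt
print(f"modelled flow length: {L / 1e3:.0f} km")   # on the order of 200 km
```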
This research investigates how top managers deal with dilemmas. Dilemmas are everyday business in top management. The actors concerned are therefore confronted with them again and again, and dealing with them is, in a sense, part of their job description. In addition, there are dilemmas outside the directly business-related sphere, such as those between family time and working time. Yet this field remains a barely studied area of research. While dilemmas have received increasing attention in other domains, their particularities in top management have not been examined in a differentiated way, nor have the associated ways of dealing with them. With regard to the dilemmas of top managers, theory and practice largely stand in opposition; that is, the empirical evidence lacks a theoretical foundation. This study addresses this situation. On the basis of a differentiated and broad review of theories on dilemmas, even where these have not yet been applied to top managers, and an empirical survey, which together form the core of this work, the field of top managers' dilemmas is to be opened up for research. The empirical basis consists primarily of narrative interviews with top managers about their perception of dilemmas, perceived causes, ways of coping, and outcomes. This makes it possible to analytically derive types of top managers as well as the kinds of dilemmas they are or were confronted with. Given the practical relevance of top managers' dilemmas, not only is a theoretical model of this subject developed, but reflections on practice are also offered in the form of recommendations for action. Finally, the general theory of dilemmas, without specific reference to top managers, is contrasted with the empirically grounded theoretical findings of this study. The empirical data collection and analysis follow the grounded theory methodology.
Digitalization is an essential component of current administrative reforms. Despite its high importance and many years of effort, the record of administrative digitalization in Germany remains ambivalent. This study concentrates on three successful digitalization projects under the Online Access Act (Onlinezugangsgesetz, OZG) and uses problem-centered expert interviews to analyze the factors influencing the implementation of OZG projects and the influence of management in this process. The analysis is theory-driven, based on the concept of bounded rationality and the economic theory of bureaucracy. The results suggest that the identified factors affect the reusability and maturity level of administrative services in different ways and can be interpreted as consequences of bounded rationality in the human problem-solving process. Managers support the operational actors during implementation by addressing their bounded rationality with suitable strategies: they can provide resources, contribute their expertise, make information accessible, change decision-making paths, and help resolve conflicts. The study offers valuable insights into actual management practice and derives recommendations for the implementation of public digitalization projects and for the steering of public administrations. It makes an important contribution to understanding the influence of management in administrative digitalization and underscores the need for further research in this area in order to better understand and effectively address the practices and challenges of administrative digitalization.
Lately, the integration of upconverting nanoparticles (UCNP) into industrial, biomedical and scientific applications has accelerated, owing to the exceptional photophysical properties that UCNP offer. Some of the most promising applications lie in the fields of medicine and bioimaging, due to advantages such as deeper tissue penetration, reduced optical background, the possibility of multicolor imaging, and lower toxicity compared to many known luminophores. However, some questions remain unanswered, regarding not only the fundamental photophysical processes but also the interaction of UCNP with other luminescent reporters frequently used for bioimaging and with biological media. These issues were the primary motivation for the presented work.
This PhD thesis investigated several properties of Yb3+,Tm3+-doped NaYF4 upconverting nanoparticles and possibilities for their bioapplication. First, the effect of Gd3+ doping on the structure and upconversion behaviour of the nanocrystals was assessed. The ageing process of the UCNP in cyclohexane was studied over 24 months on samples with different Gd3+ doping concentrations. Structural information was gathered by means of X-ray diffraction (XRD), transmission electron microscopy (TEM) and dynamic light scattering (DLS), and discussed in relation to spectroscopic results obtained through multiparameter upconversion luminescence studies at various temperatures (from 4 K to 295 K). Time-resolved and steady-state emission spectra recorded over this wide temperature range allowed for a deeper understanding of the photophysical processes and their dependence on structural changes of the UCNP.
A new protocol using a commercially available high boiling solvent allowed for faster and more controlled production of very small and homogeneous UCNP with better photophysical properties, and the advantages of a passivating NaYF4 shell were shown.
Förster resonance energy transfer (FRET) between four different species of NaYF4: Yb3+, Tm3+ UCNP (synthesized using the improved protocol) and a small organic dye was studied. The influence of UCNP composition and the proximity of Tm3+ ions (donors in the process of FRET) to acceptor dye molecules have been assessed. The brightest upconversion luminescence was observed in the UCNP with a protective inert shell. UCNP with Tm3+ ions only in the shell were the least bright, but showed the most efficient energy transfer.
In the final part, two surface modification strategies were applied to make the UCNP water-soluble, which simultaneously allowed linking them via a non-toxic, copper-free click reaction to liposomes, which served as models for further cell experiments. The results were assessed on a confocal microscope system, which was made possible by the lesser-known downshifting properties of Yb3+,Tm3+-doped UCNP. Preliminary antibody-staining tests using two primary antibodies and one dye-labelled secondary antibody were performed on MDCK-II cells.
In this thesis, I present my contributions to the field of ultrafast molecular spectroscopy. Using the molecule 2-thiouracil as an example, I use ultrashort x-ray pulses from free-electron lasers to study the relaxation dynamics of gas-phase molecular samples. Taking advantage of the element and site selectivity typical of x-rays, I investigate the charge flow and geometrical changes in the excited states of 2-thiouracil.
In order to understand the photoinduced dynamics of molecules, knowledge about the ground-state structure and the relaxation after photoexcitation is crucial. Therefore, a part of this thesis covers the electronic ground-state spectroscopy of mainly 2-thiouracil to provide the basis for the time-resolved experiments. Many of the previously published studies that focused on the gas-phase time-resolved dynamics of thionated uracils after UV excitation relied on information from solution-phase spectroscopy to determine the excitation energies. This is not an optimal strategy, as solvents alter the absorption spectrum and, hence, there is no guarantee that liquid-phase spectra resemble the gas-phase spectra. Therefore, I measured the UV-absorption spectra of all three thionated uracils to provide a gas-phase reference and, in combination with calculations, we determined the excited states involved in the transitions.
In contrast to the UV absorption, the literature on the x-ray spectroscopy of thionated uracils is sparse. Thus, we measured static photoelectron, Auger-Meitner and x-ray absorption spectra at the sulfur L edge before or in parallel with the time-resolved experiments we performed at FLASH (DESY, Hamburg). In addition, (so far unpublished) measurements were performed at the synchrotron SOLEIL (France), which are included in this thesis and show the spin-orbit splitting of the S 2p photoline and its satellite, which was not observed at the free-electron laser.
The relaxation of 2-thiouracil has been studied extensively in recent years with ultrafast visible and ultraviolet methods, showing the ultrafast nature of the molecular processes after photoexcitation. Ultrafast spectroscopy probing the core-level electrons provides a complementary approach to common optical ultrafast techniques. The method inherits its local sensitivity from the strongly localised core electrons. The core energies and core-valence transitions are strongly affected by local valence charge and geometry changes, and past studies have utilised this sensitivity to investigate the molecular processes reflected in the ultrafast dynamics. We have built an apparatus that meets the requirements for performing time-resolved x-ray spectroscopy on molecules in the gas phase. With this apparatus, we performed UV-pump x-ray-probe electron spectroscopy on the S 2p edge of 2-thiouracil using the free-electron laser FLASH2. While the UV triggers the relaxation dynamics, the x-rays probe the single sulfur atom inside the molecule. I implemented photoline self-referencing for the photoelectron spectral analysis. This minimises the spectral jitter of the FEL, which is due to the underlying self-amplified spontaneous emission (SASE) process. With this approach, we were not only able to study dynamical changes in the binding energy of the electrons but also to detect an oscillatory behaviour in the shift of the observed photoline, which we associate with non-adiabatic dynamics involving several electronic states. Moreover, we were able to link the UV-induced shift in binding energy to the local charge flow at the sulfur, which is directly connected to the electronic state. Furthermore, the analysis of the Auger-Meitner electrons shows that the energy shifts observed at early stages of the photoinduced relaxation are related to a geometry change in the molecule. More specifically, the observed increase in kinetic energy of the Auger-Meitner electrons correlates with a previously predicted C=S bond stretch.
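The photoline self-referencing mentioned above can be sketched as follows: each single-shot spectrum is shifted so that the centroid of a chosen photoline lands on a common reference, removing the SASE-induced photon-energy jitter. All names and the energy window below are illustrative assumptions, not the thesis code:

```python
import numpy as np

def centroid(energies, counts):
    return np.sum(energies * counts) / np.sum(counts)

def self_reference(energies, spectra, window):
    """Shift each single-shot spectrum so the centroid of the photoline inside
    `window` (an (lo, hi) energy slice) lands at the shot-averaged centroid."""
    lo, hi = window
    m = (energies >= lo) & (energies <= hi)
    cents = np.array([centroid(energies[m], s[m]) for s in spectra])
    ref = cents.mean()
    # Resample every shot onto the jitter-corrected energy axis.
    return np.array([np.interp(energies, energies + (ref - c), s)
                     for s, c in zip(spectra, cents)])
```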
The shallow Earth's layers are at the interplay of many physical processes: some are driven by atmospheric forcing (precipitation, temperature, ...), whereas others originate at depth, for instance ground shaking due to seismic activity. These forcings cause the subsurface to continuously change its mechanical properties, thereby modulating the strength of the surface geomaterials and the hydrological fluxes. Because our societies settle on, and rely on, the layers hosting these time-dependent properties, constraining the hydro-mechanical dynamics of the shallow subsurface is crucial for our future geographical development. One way to investigate the ever-changing physical state under our feet is to infer seismic velocity changes from ambient noise, a technique called seismic interferometry. In this dissertation, I use this method to monitor the evolution of groundwater storage and of damage induced by earthquakes. Two research lines are investigated, comprising the key controls of groundwater recharge in steep landscapes, and the predictability and duration of the transient physical properties caused by earthquake ground shaking. These two types of dynamics modulate each other and influence the velocity changes in ways that are challenging to disentangle; a part of my doctoral research also addresses this interaction. Seismic data from a range of field settings spanning several climatic conditions (wet to arid) in various seismically active areas are considered. I constrain the obtained seismic velocity time series using simple physical models, independent datasets, geophysical tools and nonlinear analysis. Additionally, a methodological development is proposed to improve the time resolution of passive seismic monitoring.
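As an illustration of how such velocity changes are commonly inferred in seismic interferometry, the widely used stretching technique compares a current noise correlation function with a long-term reference; the sketch below assumes a homogeneous velocity change, under which dv/v = -ε:

```python
import numpy as np

def stretching_dvv(ref, cur, t, eps_grid):
    """Grid-search the stretch factor eps that maximises the correlation
    between the reference correlation function `ref` and the current one
    `cur`, both sampled at lag times `t`. Returns (dv/v, best correlation)."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        stretched = np.interp(t, t * (1 + eps), cur)   # stretch the lag axis
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc

# Toy usage: a 0.5 % velocity increase contracts the current waveform.
t = np.linspace(0, 10, 2000)
ref = np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.2 * t)
cur = np.interp(t, t / 1.005, ref)                     # simulated dv/v = +0.5 %
dvv, cc = stretching_dvv(ref, cur, t, np.linspace(-0.02, 0.02, 401))
```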
The seismicity of the Dead Sea fault zone (DSFZ) during the last two millennia is characterized by a number of damaging and partly devastating earthquakes. These events pose a considerable seismic hazard and seismic risk to Syria, Lebanon, Palestine, Jordan, and Israel. The occurrence rates of large earthquakes along the DSFZ show indications of temporal changes in the long-term view. The aim of this thesis is to find out whether, and how, the occurrence rates of large earthquakes (Mw ≥ 6) in different parts of the DSFZ are time-dependent. The results are applied to probabilistic seismic hazard assessments (PSHA) in the DSFZ and neighboring areas. Four time-dependent statistical models (distributions), namely Weibull, Gamma, Lognormal and Brownian Passage Time (BPT), are applied besides the exponential distribution (Poisson process) as the classical time-independent model. In order to determine whether the earthquake occurrence rate follows a unimodal or a multimodal form, a nonparametric bootstrap test of multimodality was carried out. A modified method of weighted Maximum Likelihood Estimation (MLE) is applied to estimate the parameters of the models. For the multimodal cases, an Expectation Maximization (EM) method is used in addition to the MLE method. The best model is selected by two methods: the Bayesian Information Criterion (BIC) and a modified Kolmogorov-Smirnov goodness-of-fit test. Finally, the confidence intervals of the estimated parameters corresponding to the candidate models are calculated using bootstrap confidence sets. In this thesis, earthquakes with Mw ≥ 6 along the DSFZ, within a width of about 20 km and 29.5° ≤ latitude ≤ 37°, are considered as the dataset. The completeness of this dataset is assessed from 300 A.D. onwards. The DSFZ has been divided into three subzones: the southern, the central and the northern subzone. The central and the northern subzones have been investigated, but not the southern subzone, because of the lack of sufficient data. The results for the central part of the DSFZ show that the earthquake occurrence rate does not significantly follow a multimodal form. There is also no considerable difference between the time-dependent and time-independent models. Since the time-independent model is easier to interpret, the earthquake occurrence rate in this subzone has been estimated under the exponential distribution assumption (Poisson process) and is considered time-independent at 9.72 × 10⁻³ events/year. The northern part of the DSFZ is a special case, where the last earthquake occurred in 1872 (about 137 years ago), whereas the mean recurrence time of Mw ≥ 6 events in this area is about 51 years. Moreover, about 96 percent of the observed earthquake inter-event times (the time between two successive earthquakes) in the dataset for this subzone are smaller than 137 years. Therefore, it is a zone with an overdue earthquake. The results for this subzone verify that the earthquake occurrence rate is strongly time-dependent, especially shortly after an earthquake occurrence. A bimodal Weibull-Weibull model has been selected as the best fit for this subzone. The earthquake occurrence rate corresponding to the selected model is a smooth function of time and reveals two clusters in the time after an earthquake occurrence. The first cluster begins right after an earthquake occurrence, lasts about 80 years, and is explicitly time-dependent.
The occurrence rate in this first cluster is considerably lower right after an earthquake occurrence, increases strongly during the following ten years to reach its maximum of about 0.024 events/year, and then decreases over the next 70 years to its minimum of about 0.0145 events/year. The second cluster begins 80 years after an earthquake occurrence and lasts until the next earthquake occurs. The occurrence rate in this cluster increases extremely slowly, so that it can be considered almost constant at about 0.015 events/year. The results are applied to calculate the time-dependent PSHA in the northern part of the DSFZ and neighbouring areas.
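A bimodal Weibull-Weibull inter-event-time model yields such a time-varying occurrence rate through its hazard function h(t) = f(t)/S(t). A minimal sketch, with illustrative parameters rather than the values estimated in the thesis:

```python
import numpy as np

def weibull_pdf(t, k, lam):
    return (k / lam) * (t / lam) ** (k - 1) * np.exp(-((t / lam) ** k))

def weibull_sf(t, k, lam):
    return np.exp(-((t / lam) ** k))   # survival function P(T > t)

def mixture_hazard(t, w, p1, p2):
    """Hazard rate h(t) = f(t)/S(t) of the mixture w*Weibull(p1) + (1-w)*Weibull(p2),
    i.e. the occurrence rate at time t since the last large earthquake."""
    f = w * weibull_pdf(t, *p1) + (1 - w) * weibull_pdf(t, *p2)
    S = w * weibull_sf(t, *p1) + (1 - w) * weibull_sf(t, *p2)
    return f / S

t = np.linspace(1, 200, 400)   # years since the last Mw >= 6 event
h = mixture_hazard(t, w=0.7, p1=(2.0, 40.0), p2=(1.1, 150.0))   # assumed parameters
```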
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails incorporating measurement information into the model to gain more insight into a given state governed by a noisy state-space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete, and hence uncertainties are approximated by means of probabilities. Time-continuous filtering therefore holds promise for wide usefulness, for it offers a means of combining noisy measurements with an imperfect model to provide more insight into a given state.
The solution to the time-continuous nonlinear Gaussian filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution. Moreover, numerical approximations based on Taylor expansions above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo sampling have been resorted to. Chief among these methods are sequential Monte Carlo methods (or particle filters), for they allow for online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and computational costs arising from resampling.
The goals of this thesis are: i) to review the derivation of the Kushner-Stratonovich equation from first principles and its extant numerical approximation methods; ii) to study feedback particle filters as a way of avoiding resampling in particle filters; iii) to study joint state and parameter estimation in time-continuous settings; iv) to apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô integrals and stochastic partial differential equations and those of Stratonovich is introduced in anticipation of feedback particle filters. With these ideas and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on coupling of prediction and analysis measures are proposed. They register a better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation to the advection and wave equations, whose velocity is spatially varying. Two methods are employed: Metropolis-Hastings with the filter likelihood, and a dual filter comprising a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.
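For context, a minimal bootstrap particle filter, the resampling-based baseline that the feedback particle filters studied here are designed to avoid; the model, noise levels and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, n_particles, f, h, q_std, r_std):
    """y: observations; f: state transition; h: observation map;
    q_std/r_std: process and measurement noise std. Returns filtered means."""
    x = rng.normal(0.0, 1.0, n_particles)        # initial ensemble
    means = []
    for obs in y:
        x = f(x) + q_std * rng.normal(size=n_particles)       # predict
        w = np.exp(-0.5 * ((obs - h(x)) / r_std) ** 2)        # Gaussian likelihood weights
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)       # multinomial resampling
        x = x[idx]
        means.append(x.mean())
    return np.array(means)

# Toy usage: noisy observations of a slowly decaying scalar state.
truth = 2.0 * 0.95 ** np.arange(50)
y = truth + 0.3 * rng.normal(size=50)
est = bootstrap_pf(y, 500, f=lambda x: 0.95 * x, h=lambda x: x, q_std=0.1, r_std=0.3)
```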
The impact of individual differences in cognitive skills and socioeconomic background on key educational, occupational, and health outcomes, as well as the mechanisms underlying inequalities in these outcomes across the lifespan, are two central questions in lifespan psychology. The contextual embeddedness of such questions in ontogenetic (i.e., individual, age-related) and historical time is a key element of lifespan psychological theoretical frameworks such as the HIstorical changes in DEvelopmental COntexts (HIDECO) framework (Drewelies et al., 2019). Because the dimension of time is also a crucial part of empirical research designs examining developmental change, a third central question in research on lifespan development is how the timing and spacing of observations in longitudinal studies might affect parameter estimates of substantive phenomena. To address these questions in the present doctoral thesis, I applied innovative state-of-the-art methodology including static and dynamic longitudinal modeling approaches, used data from multiple international panel studies, and systematically simulated data based on empirical panel characteristics, in three empirical studies.
The first study of this dissertation, Study I, examined the importance of adolescent intelligence (IQ), grade point average (GPA), and parental socioeconomic status (pSES) for adult educational, occupational, and health outcomes over ontogenetic and historical time. To examine the possible impact of historical changes in the 20th century on the relationships between adolescent characteristics and key adult life outcomes, the study capitalized on data from two representative US cohort studies, the National Longitudinal Surveys of Youth 1979 and 1997, whose participants were born in the late 1960s and 1980s, respectively. Adolescent IQ, GPA, and pSES were positively associated with adult educational attainment, wage levels, and mental and physical health. Across historical time, the influence of IQ and pSES for educational, occupational, and health outcomes remained approximately the same, whereas GPA gained in importance over time for individuals born in the 1980s.
The second study of this dissertation, Study II, aimed to examine strict cumulative advantage (CA) processes as possible mechanisms underlying individual differences and inequality in wage development across the lifespan. It proposed dynamic structural equation models (DSEM) as a versatile statistical framework for operationalizing and empirically testing strict CA processes in research on wages and wage dynamics (i.e., wage levels and growth rates). Drawing on longitudinal representative data from the US National Longitudinal Survey of Youth 1979, the study modeled wage levels and growth rates across 38 years. Only 0.5 % of the sample revealed strict CA processes and explosive wage growth (autoregressive coefficients AR > 1), with the majority of individuals following logarithmic wage trajectories across the lifespan. Adolescent intelligence (IQ) and adult highest educational level explained substantial heterogeneity in initial wage levels and long-term wage growth rates over time.
The third study of this dissertation, Study III, investigated the role of observation timing variability in the estimation of non-experimental intervention effects in panel data. Although longitudinal studies often aim at equally spaced intervals between their measurement occasions, this goal is hardly ever met. Drawing on continuous time dynamic structural equation models, the study examines the –seemingly counterintuitive – potential benefits of measurement intervals that vary both within and between participants (often called individually varying time intervals, IVTs) in a panel study. It illustrates the method by modeling the effect of the transition from primary to secondary school on students’ academic motivation using empirical data from the German National Educational Panel Study (NEPS). Results of a simulation study based on this real-life example reveal that individual variation in time intervals can indeed benefit the estimation precision and recovery of the true intervention effect parameters.
This dissertation focuses on the handling of time in dialogue. Specifically, it investigates how humans bridge time, or “buy time”, when they are expected to convey information that is not yet available to them (e.g. a travel agent searching for a flight in a long list while the customer is on the line, waiting). It also explores the feasibility of modeling such time-bridging behavior in spoken dialogue systems, and it examines
how endowing such systems with more human-like time-bridging capabilities may affect humans’ perception of them.
The relevance of time-bridging in human-human dialogue seems to stem largely from a need to avoid lengthy pauses, as these may cause both confusion and discomfort among the participants of a conversation (Levinson, 1983; Lundholm Fors, 2015). However, this avoidance of prolonged silence is at odds with the incremental nature of speech production in dialogue (Schlangen and Skantze, 2011): Speakers often start to verbalize their contribution before it is fully formulated, and sometimes even before they possess the information they need to provide, which may result in them running out of content mid-turn.
In this work, we elicit conversational data from humans, to learn how they avoid being silent while they search for information to convey to their interlocutor. We identify commonalities in the types of resources employed by different speakers, and we propose a classification scheme. We explore ways of modeling human time-buying behavior computationally, and we evaluate the effect on human listeners of embedding this behavior in a spoken dialogue system.
Our results suggest that a system using conversational speech to bridge time while searching for information to convey (as humans do) can provide a better experience in several respects than one which remains silent for a long period of time. However, not all speech serves this purpose equally: Our experiments also show that a system whose time-buying behavior is more varied (i.e. which exploits several categories from the classification scheme we developed and samples them based on information from human data) can prevent overestimation of waiting time when compared, for example, with a system that repeatedly asks the interlocutor to wait (even if these requests for waiting are phrased differently each time). Finally, this research shows that it is possible to model human time-buying behavior on a relatively small corpus, and that a system using such a model can be preferred by participants over one employing a simpler strategy, such as randomly choosing utterances to produce during the wait —even when the utterances used by both strategies are the same.
It has always been enigmatic which processes control the accretion of the North American terranes towards the Pacific plate and the landward migration of the San Andreas plate boundary. One of the theories suggests that the Pacific plate first cools and captures the uprising mantle in the slab window, and then it causes the accretion of the continental crustal blocks. The alternative theory attributes the accretion to the capture of Farallon plate fragments (microplates) stalled in the ceased Farallon-North America subduction zone. Quantitative judgement between these two end-member concepts requires a 3D thermomechanical numerical modeling. However, the software tool required for such modeling is not available at present in the geodynamic modeling community. The major aim of the presented work is comprised basically of two interconnected tasks. The first task is the development and testing of the research Finite Element code with sufficiently advanced facilities to perform the three-dimensional geological time scale simulations of lithospheric deformation. The second task consists in the application of the developed tool to the Neogene deformations of the crust and the mantle along the San Andreas Fault System in Central and northern California. The geological time scale modeling of lithospheric deformation poses numerous conceptual and implementation challenges for the software tools. Among them is the necessity to handle the brittle-ductile transition within the single computational domain, adequately represent the rock rheology in a broad range of temperatures and stresses, and resolve the extreme deformations of the free surface and internal boundaries. In the framework of this thesis the new Finite Element code (SLIM3D) has been successfully developed and tested. This code includes a coupled thermo-mechanical treatment of deformation processes and allows for an elasto-visco-plastic rheology with diffusion, dislocation and Peierls creep mechanisms and Mohr-Coulomb plasticity. The code incorporates an Arbitrary Lagrangian Eulerian formulation with free surface and Winkler boundary conditions. The modeling technique developed is used to study the aspects influencing the Neogene lithospheric deformation in central and northern California. The model setup is focused on the interaction between three major tectonic elements in the region: the North America plate, the Pacific plate and the Gorda plate, which join together near the Mendocino Triple Junction. Among the modeled effects is the influence of asthenosphere upwelling in the opening slab window on the overlying North American plate. The models also incorporate the captured microplate remnants in the fossil Farallon subduction zone, simplified subducting Gorda slab, and prominent crustal heterogeneity such as the Salinian block. The results show that heating of the mantle roots beneath the older fault zones and the transpression related to fault stepping, altogether, render cooling in the slab window alone incapable to explain eastward migration of the plate boundary. From the viewpoint of the thermomechanical modeling, the results confirm the geological concept, which assumes that a series of microplate capture events has been the primary reason of the inland migration of the San Andreas plate boundary over the recent 20 Ma. 
The remnants of the Farallon slab, stalled in the fossil subduction zone, create much stronger heterogeneity in the mantle than the cooling of the uprising asthenosphere, providing the more efficient and direct way for transferring the North American terranes to Pacific plate. The models demonstrate that a high effective friction coefficient on major faults fails to predict the distinct zones of strain localization in the brittle crust. The magnitude of friction coefficient inferred from the modeling is about 0.075, which is far less than typical values 0.6 – 0.8 obtained by variety of borehole stress measurements and laboratory data. Therefore, the model results presented in this thesis provide additional independent constrain which supports the “weak-fault” hypothesis in the long-term ongoing debate over the strength of major faults in the SAFS.
A key non-destructive technique for analysis, optimization and developing of new functional materials such as sensors, transducers, electro-optical and memory devices is presented. The Thermal-Pulse Tomography (TPT) provides high-resolution three-dimensional images of electric field and polarization distribution in a material. This thermal technique use a pulsed heating by means of focused laser light which is absorbed by opaque electrodes. The diffusion of the heat causes changes in the sample geometry, generating a short-circuit current or change in surface potential, which contains information about the spatial distribution of electric dipoles or space charges. Afterwards, a reconstruction of the internal electric field and polarization distribution in the material is possible via Scale Transformation or Regularization methods. In this way, the TPT was used for the first time to image the inhomogeneous ferroelectric switching in polymer ferroelectric films (candidates to memory devices). The results shows the typical pinning of electric dipoles in the ferroelectric polymer under study and support the previous hypotheses of a ferroelectric reversal at a grain level via nucleation and growth. In order to obtain more information about the impact of the lateral and depth resolution of the thermal techniques, the TPT and its counterpart called Focused Laser Intensity Modulation Method (FLIMM) were implemented in ferroelectric films with grid-shaped electrodes. The results from both techniques, after the data analysis with different regularization and scale methods, are in total agreement. It was also revealed a possible overestimated lateral resolution of the FLIMM and highlights the TPT method as the most efficient and reliable thermal technique. After an improvement in the optics, the Thermal-Pulse Tomography method was implemented in polymer-dispersed liquid crystals (PDLCs) films, which are used in electro-optical applications. The results indicated a possible electrostatic interaction between the COH group in the liquid crystals and the fluorinate atoms of the used ferroelectric matrix. The geometrical parameters of the LC droplets were partially reproduced as they were compared with Scanning Electron Microscopy (SEM) images. For further applications, it is suggested the use of a non-strong-ferroelectric polymer matrix. In an effort to develop new polymerferroelectrets and for optimizing their properties, new multilayer systems were inspected. The results of the TPT method showed the non-uniformity of the internal electric-field distribution in the shaped-macrodipoles and thus suggested the instability of the sample. Further investigation on multilayers ferroelectrets was suggested and the implementation of less conductive polymers layers too.
This cumulative doctoral thesis consists of three empirical studies that examine the role of top-level executives in shaping adverse financial reporting outcomes and other forms of corporate misconduct. The first study examines CEO effects on a wide range of offenses. Using data from enforcement actions by more than 50 U.S. federal agencies, regression re-sults show CEO effects on the likelihood, frequency, and severity of corporate misconduct. The findings hold for financial, labor-related, and environmental offenses; however, CEO effects are more pronounced for non-financial misconduct. Further results show a positive relation between CEO ability and non-financial misconduct, but no relation with financial misconduct, suggesting that higher CEO ability can have adverse consequences for employee welfare and society and public health. The second study focuses on CEO and CFO effects on financial misreporting. Using data on restatements and public enforcement actions, regression results show that the incremental effect of CFOs is economically larger than that of CEOs. This greater economic impact of CFOs is particularly pronounced for fraudulent misreporting. The findings remain consistent across different samples, methods, misreporting measures, and specification choices for the underlying conceptual mechanism, highlighting the important role of the CFO as a key player in the beyond-GAAP setting. The third study reexamines the relation between equity incentives and different reporting outcomes. The literature review reveals large variation in the empirical measures for firm size as standard control variable, equity incentives as key explanatory variables, and the reporting outcome of interest. Regres-sion results show that these design choices have a direct bearing on empirical results, with changes in t-statistics that often exceed typical thresholds for statistical significance. The find-ings hold for aggressive accrual management, earnings management through discretionary accruals, and material misstatements, suggesting that common design choices can have a large impact on whether equity incentives effects are considered significant or not.
Three Essays on EFRAG
(2018)
This cumulative doctoral thesis consists of three papers that deal with the role of one specific European accounting player in the international accounting standard-setting, namely the European Financial Reporting Advisory Group (EFRAG). The first paper examines whether and how EFRAG generally fulfills its role in articulating Europe’s interests toward the International Accounting Standards Board (IASB). The qualitative data from the conducted interviews reveal that EFRAG influences the IASB’s decision making at a very early stage, long before other constituents are officially asked to comment on the IASB’s proposals. The second paper uses quantitative data and investigates the formal participation behavior of European constituents that seek to determine EFRAG’s voice. More precisely, this paper analyzes the nature of the constituents’ participation in EFRAG’s due process in terms of representation (constituent groups and geographical distribution) and the drivers of their participation behavior. EFRAG’s official decision making process is dominated by some specific constituent groups (such as preparers and the accounting profession) and by constituents from some specific countries (e.g. those with effective enforcement regimes). The third paper investigates in a first step who of the European constituents choose which lobbying channel (participation only at IASB, only at EFRAG, or at both institutions) and unveils in a second step possible reasons for their lobbying choices. The paper comprises quantitative and qualitative data. It reveals that English skills, time issues, the size of the constituent, and the country of origin are factors that can explain why the majority participates only in the IASB’s due process.
Modern health care systems are characterized by pronounced prevention and cost-optimized treatments. This dissertation offers novel empirical evidence on how useful such measures can be. The first chapter analyzes how radiation, a main pollutant in health care, can negatively affect cognitive health. The second chapter focuses on the effect of Low Emission Zones on public heath, as air quality is the major external source of health problems. Both chapters point out potentials for preventive measures. Finally, chapter three studies how changes in treatment prices affect the reallocation of hospital resources. In the following, I briefly summarize each chapter and discuss implications for health care systems as well as other policy areas. Based on the National Educational Panel Study that is linked to data on radiation, chapter one shows that radiation can have negative long-term effects on cognitive skills, even at subclinical doses. Exploiting arguably exogenous variation in soil contamination in Germany due to the Chernobyl disaster in 1986, the findings show that people exposed to higher radiation perform significantly worse in cognitive tests 25 years later. Identification is ensured by abnormal rainfall within a critical period of ten days. The results show that the effect is stronger among older cohorts than younger cohorts, which is consistent with radiation accelerating cognitive decline as people get older. On average, a one-standarddeviation increase in the initial level of CS137 (around 30 chest x-rays) is associated with a decrease in the cognitive skills by 4.1 percent of a standard deviation (around 0.05 school years). Chapter one shows that sub-clinical levels of radiation can have negative consequences even after early childhood. This is of particular importance because most of the literature focuses on exposure very early in life, often during pregnancy. However, population exposed after birth is over 100 times larger. These results point to substantial external human capital costs of radiation which can be reduced by choices of medical procedures. There is a large potential for reductions because about one-third of all CT scans are assumed to be not medically justified (Brenner and Hall, 2007). If people receive unnecessary CT scans because of economic incentives, this chapter points to additional external costs of health care policies. Furthermore, the results can inform the cost-benefit trade-off for medically indicated procedures. Chapter two provides evidence about the effectiveness of Low Emission Zones. Low Emission Zones are typically justified by improvements in population health. However, there is little evidence about the potential health benefits from policy interventions aiming at improving air quality in inner-cities. The chapter ask how the coverage of Low Emission Zones air pollution and hospitalization, by exploiting variation in the roll out of Low Emission Zones in Germany. It combines information on the geographic coverage of Low Emission Zones with rich panel data on the universe of German hospitals over the period from 2006 to 2016 with precise information on hospital locations and the annual frequency of detailed diagnoses. 
In order to establish that our estimates of Low Emission Zones’ health impacts can indeed be attributed to improvements in local air quality, we use data from Germany’s official air pollution monitoring system and assign monitor locations to Low Emission Zones and test whether measures of air pollution are affected by the coverage of a Low Emission Zone. Results in chapter two confirm former results showing that the introduction of Low Emission Zones improved air quality significantly by reducing NO2 and PM10 concentrations. Furthermore, the chapter shows that hospitals which catchment areas are covered by a Low Emission Zone, diagnose significantly less air pollution related diseases, in particular by reducing the incidents of chronic diseases of the circulatory and the respiratory system. The effect is stronger before 2012, which is consistent with a general improvement in the vehicle fleet’s emission standards. Depending on the disease, a one-standard-deviation increase in the coverage of a hospitals catchment area covered by a Low Emission Zone reduces the yearly number of diagnoses up to 5 percent. These findings have strong implications for policy makers. In 2015, overall costs for health care in Germany were around 340 billion euros, of which 46 billion euros for diseases of the circulatory system, making it the most expensive type of disease caused by 2.9 million cases (Statistisches Bundesamt, 2017b). Hence, reductions in the incidence of diseases of the circulatory system may directly reduce society’s health care costs. Whereas chapter one and two study the demand-side in health care markets and thus preventive potential, chapter three analyzes the supply-side. By exploiting the same hospital panel data set as in chapter two, chapter three studies the effect of treatment price shocks on the reallocation of hospital resources in Germany. Starting in 2005, the implementation of the German-DRG-System led to general idiosyncratic treatment price shocks for individual hospitals. Thus far there is little evidence of the impact of general price shocks on the reallocation of hospital resources. Additionally, I add to the exiting literature by showing that price shocks can have persistent effects on hospital resources even when these shocks vanish. However, simple OLS regressions would underestimate the true effect, due to endogenous treatment price shocks. I implement a novel instrument variable strategy that exploits the exogenous variation in the number of days of snow in hospital catchment areas. A peculiarity of the reform allowed variation in days of snow to have a persistent impact on treatment prices. I find that treatment price increases lead to increases in input factors such as nursing staff, physicians and the range of treatments offered but to decreases in the treatment volume. This indicates supplier-induced demand. Furthermore, the probability of hospital mergers and privatization decreases. Structural differences in pre-treatment characteristics between hospitals enhance these effects. For instance, private and larger hospitals are more affected. IV estimates reveal that OLS results are biased towards zero in almost all dimensions because structural hospital differences are correlated with the reallocation of hospital resources. These results are important for several reasons. The G-DRG-Reform led to a persistent polarization of hospital resources, as some hospitals were exposed to treatment price increases, while others experienced reductions. 
If hospitals increase the treatment volume as a response to price reductions by offering unnecessary therapies, it has a negative impact on population wellbeing and public spending. However, results show a decrease in the range of treatments if prices decrease. Hospitals might specialize more, thus attracting more patients. From a policy perspective it is important to evaluate if such changes in the range of treatments jeopardize an adequate nationwide provision of treatments. Furthermore, the results show a decrease in the number of nurses and physicians if prices decrease. This could partly explain the nursing crisis in German hospitals. However, since hospitals specialize more they might be able to realize efficiency gains which justify reductions in input factors without loses in quality. Further research is necessary to provide evidence for the impact of the G-DRG-Reform on health care quality. Another important aspect are changes in the organizational structure. Many public hospitals have been privatized or merged. The findings show that this is at least partly driven by the G-DRG-Reform. This can again lead to a lack in services offered in some regions if merged hospitals specialize more or if hospitals are taken over by ecclesiastical organizations which do not provide all treatments due to moral conviction. Overall, this dissertation reveals large potential for preventive health care measures and helps to explain reallocation processes in the hospital sector if treatment prices change. Furthermore, its findings have potentially relevant implications for other areas of public policy. Chapter one identifies an effect of low dose radiation on cognitive health. As mankind is searching for new energy sources, nuclear power is becoming popular again. However, results of chapter one point to substantial costs of nuclear energy which have not been accounted yet. Chapter two finds strong evidence that air quality improvements by Low Emission Zones translate into health improvements, even at relatively low levels of air pollution. These findings may, for instance, be of relevance to design further policies targeted at air pollution such as diesel bans. As pointed out in chapter three, the implementation of DRG-Systems may have unintended side-effects on the reallocation of hospital resources. This may also apply to other providers in the health care sector such as resident doctors.
Diese Arbeit befasst sich mit der Synthese und Charakterisierung von organolöslichen Thiophen und Benzodithiophen basierten Materialien und ihrer Anwendung als aktive lochleitende Halbleiterschichten in Feldeffekttransistoren. Im ersten Teil der Arbeit wird durch eine gezielte Modifikation des Thiophengrundgerüstes eine neue Comonomer-Einheit für die Synthese von Thiophen basierten Copolymeren erfolgreich dargestellt. Die hydrophoben Hexylgruppen in der 3-Position des Thiophens werden teilweise durch hydrophile 3,6-Dioxaheptylgruppen ersetzt. Über die Grignard-Metathese nach McCullough werden statistische Copolymere mit unterschiedlichen molaren Anteilen vom hydrophoben Hexyl- und hydrophilem 3,6-Dioxaheptylgruppen 1:1 (P-1), 1:2 (P-2) und 2:1 (P-3) erfolgreich hergestellt. Auch die Synthese eines definierten Blockcopolymers BP-1 durch sequentielle Addition der Comonomere wird realisiert. Optische und elektrochemische Eigenschaften der neuartigen Copolymere sind vergleichbar mit P3HT. Mit allen Copolymeren wird ein charakteristisches Transistorverhalten in einem Top-Gate/Bottom-Kontakt-Aufbau erhalten. Dabei werden mit P-1 als die aktive Halbleiterschicht im Bauteil, PMMA als Dielektrikum und Silber als Gate-Elektrode Mobilitäten von bis zu 10-2 cm2/Vs erzielt. Als Folge der optimierten Grenzfläche zwischen Dielektrikum und Halbleiter wird eine Verbesserung der Luftstabilität der Transistoren über mehrere Monate festgestellt. Im zweiten Teil der Arbeit werden Benzodithiophen basierte organische Materialien hergestellt. Für die Synthese der neuartigen Benzodithiophen-Derivate wird die Schlüsselverbindung TIPS-BDT in guter Ausbeute dargestellt. Die Difunktionalisierung von TIPS-BDT in den 2,6-Positionen über eine elektrophile Substitution liefert die gewünschten Dibrom- und Distannylmonomere. Zunächst werden über die Stille-Reaktion alternierende Copolymere mit alkylierten Fluoren- und Chinoxalin-Einheiten realisiert. Alle Copolymere zeichnen sich durch eine gute Löslichkeit in gängigen organischen Lösungsmitteln, hohe thermische Stabilität und durch gute Filmbildungseigenschaften aus. Des Weiteren sind alle Copolymere mit HOMO Lagen höher als -6.3 eV, verglichen mit den Thiophen basierten Copolymeren (P-1 bis P-3), sehr oxidationsstabil. Diese Copolymere zeigen amorphes Verhalten in den Halbleiterschichten in OFETs auf und es werden Mobilitäten bis zu 10-4 cm2/Vs erreicht. Eine Abhängigkeit der Bauteil-Leistung von dem Zinngehalt-Rest im Polymer wird nachgewiesen. Ein Zinngehalt von über 0.6 % kann enormen Einfluss auf die Mobilität ausüben, da die funktionellen SnMe3-Gruppen als Fallenzustände wirken können. Alternativ wird das alternierende TIPS-BDT/Fluoren-Copolymer P-5-Stille nach der Suzuki-Methode polymerisiert. Mit P-5-Suzuki als die aktive organische Halbleiterschicht im OFET wird die höchste Mobilität von 10-2 cm2/Vs erzielt. Diese Mobilität ist somit um zwei Größenordnungen höher als bei P-5-Stille, da die Fallenzustände in diesem Fall minimiert werden und folglich der Ladungstransport verbessert wird. Sowohl das Homopolymer P-12 als auch das Copolymer mit dem aromatischen Akzeptor Benzothiadiazol P-9 führen zu schwerlöslichen Polymeren. Aus diesem Grund werden einerseits Terpolymere aus TIPS-BDT/Fluoren/BTD-Einheiten P-10 und P-11 aufgebaut und andererseits wird versucht die TIPS-BDT-Einheit in die Seitenkette des Styrols einzubringen. Mit der Einführung von BTD in die Hauptpolymerkette werden insbesondere die Absorptions- und die elektrochemischen Eigenschaften beeinflusst. 
Im Vergleich zu dem TIPS-BDT/Fluoren-Copolymer reicht die Absorption bis in den sichtbaren Bereich und die LUMO Lage wird zu niederen Werten verschoben. Eine Verbesserung der Leistung in den Bauteilen wird jedoch nicht festgestellt. Die erfolgreiche erstmalige Synthese von TIPS-BDT als Seitenkettenpolymer an Styrol P-13 führt zu einem löslichen und amorphen Polymer mit vergleichbaren Mobilitäten von Styrol basierten Polymeren (µ = 10-5 cm2/Vs) im OFET. Ein weiteres Ziel dieser Arbeit ist die Synthese von niedermolekularen organolöslichen Benzodithiophen-Derivaten. Über Suzuki- und Stille-Reaktionen ist es erstmals möglich, verschiedenartige Aromaten über eine σ-Bindung an TIPS-BDT in den 2,6-Positionen zu knüpfen. Die UV/VIS-Untersuchungen zeigen, dass die Absorption durch die Verlängerung der π-Konjugationslänge zu höheren Wellenlängen verschoben wird. Darüber hinaus ist es möglich, thermisch vernetzbare Gruppen wie Allyloxy in das Molekülgerüst einzubauen. Das Einführen von F-Atomen in das Molekülgerüst resultiert in einer verstärkten Packungsordnung im Fluorbenzen funktionalisiertem TIPS-BDT (SM-4) im Festkörper mit sehr guten elektronischen Eigenschaften im OFET, wobei Mobilitäten bis zu 0.09 cm2/Vs erreicht werden.
Im Rahmen dieser Dissertation wurde der Sauerstoff im Grundgerüst der [1,3]-Dioxolo[4.5-f]benzodioxol-Fluoreszenzfarbstoffe (DBD-Fluoreszenzfarbstoffe) vollständig mit Schwefel ausgetauscht und daraus eine neue Klasse von Fluoreszenzfarbstoffen entwickelt, die Benzo[1,2-d:4,5-d']bis([1,3]dithiol)-Fluorophore (S4-DBD-Fluorophore). Insgesamt neun der besonders interessanten, difunktionalisierten Vertreter konnten synthetisiert werden, die sich in ihren elektronenziehenden Gruppen und in ihrer Anordnung unterschieden.
Durch den Austausch von Sauerstoff mit Schwefel kam es zu teilweise auffälligen Veränderungen in den Fluoreszenzparametern, wie eine Abnahme der Fluoreszenzquantenausbeuten und -lebenszeiten aber auch eine deutliche Rotverschiebung in den Absorptions- und Emissionswellenlängen mit großen STOKES-Verschiebungen. Damit sind die S4-DBD-Fluorophore eine wertvolle Ergänzung für die DBD-Farbstoffe.
Die Ursachen für die Abnahme der Lebenszeiten und Quantenausbeuten konnte auf eine hohe Besetzung des Triplett-Zustandes zurückgeführt werden, welcher durch die verstärkten Spin-Bahn-Kopplungen des Schwefels hervorgerufen wird. Zusammen mit dem Arbeitskreis physikalische Chemie der Universität Potsdam konnten auch die photophysikalischen Prozesse über die Transienten-Absorptionsspektroskopie (TAS) aufgeklärt werden.
Eine Strategie zur Funktionalisierung der S4-DBD-Farbstoffe am Thioacetalgerüst konnte entwickelt werden. So gelang es Alkohol-, Propargyl-, Azid-, NHS-Ester-, Carbonsäure-, Maleimid- und Tosyl-Gruppen an S4-DBD-Dialdehyden anzubringen.
Erweiternd wurden molekulare Stäbe auf Basis von Schwefel-Oligo-Spiro-Ketalen (SOSKs) untersucht, bei denen Sauerstoff durch Schwefel ersetzt wurde. Hier konnten die Synthesen der löslichkeitsvermittelnden TER-Muffe und auch des Tetrathiapentaerythritols als Grundbaustein deutlich verbessert werden. Aus diesen konnte ein einfaches SOSK-Polymer hergestellt werden. Weitere Versuche zum Aufbau eines Stabes müssen aber noch untersucht werden. Um einen S-OSK-Stab aufzubauen hat sich dabei die Dithiocarbonat-Gruppe in ersten Versuchen als potenzielle geeignete Schutzgruppe für das Tetrathiapentaerythritol herausgestellt.
With the rise of nanotechnology in the last decade, nanofluidics has been established as a research field and gained increased interest in science and industry. Natural aqueous nanofluidic systems are very complex, there is often a predominance of liquid interfaces or the fluid contains charged or differently shaped colloids. The effects, promoted by these additives, are far from being completely understood and interesting questions arise with regards to the confinement of such complex fluidic systems. A systematic study of nanofluidic processes requires designing suitable experimental model nano – channels with required characteristics. The present work employed thin liquid films (TLFs) as experimental models. They have proven to be useful experimental tools because of their simple geometry, reproducible preparation, and controllable liquid interfaces. The thickness of the channels can be adjusted easily by the concentration of electrolyte in the film forming solution. This way, channel dimensions from 5 – 100 nm are possible, a high flexibility for an experimental system. TLFs have liquid IFs of different charge and properties and they offer the possibility to confine differently shaped ions and molecules to very small spaces, or to subject them to controlled forces. This makes the foam films a unique “device” available to obtain information about fluidic systems in nanometer dimensions. The main goal of this thesis was to study nanofluidic processes using TLFs as models, or tools, and to subtract information about natural systems plus deepen the understanding on physical chemical conditions. The presented work showed that foam films can be used as experimental models to understand the behavior of liquids in nano – sized confinement. In the first part of the thesis, we studied the process of thinning of thin liquid films stabilized with the non – ionic surfactant n – dodecyl – β – maltoside (β – C₁₂G₂) with primary interest in interfacial diffusion processes during the thinning process dependent on surfactant concentration 64. The surfactant concentration in the film forming solutions was varied at constant electrolyte (NaCl) concentration. The velocity of thinning was analyzed combining previously developed theoretical approaches. Qualitative information about the mobility of the surfactant molecules at the film surfaces was obtained. We found that above a certain limiting surfactant concentration the film surfaces were completely immobile and they behaved as non – deformable, which decelerated the thinning process. This follows the predictions for Reynolds flow of liquid between two non – deformable disks. In the second part of the thesis, we designed a TLF nanofluidic system containing rod – like multivalent ions and compared this system to films containing monovalent ions. We presented first results which recognized for the first time the existence of an additional attractive force in the foam films based on the electrostatic interaction between rod – like ions and oppositely charged surfaces. We may speculate that this is an ion bridging component of the disjoining pressure. The results show that for films prepared in presence of spermidine the transformation of the thicker CF to the thinnest NBF is more probable as films prepared with NaCl at similar conditions of electrostatic interaction. This effect is not a result of specific adsorption of any of the ions at the fluid surfaces and it does not lead to any changes in the equilibrium properties of the CF and NBF. 
Our hypothesis was proven using the trivalent ion Y3+ which does not show ion bridging. The experimental results are compared to theoretical predictions and a quantitative agreement on the system’s energy gain for the change from CF to NBF could be obtained. In the third part of the work, the behavior of nanoparticles in confinement was investigated with respect to their impact on the fluid flow velocity. The particles altered the flow velocity by an unexpected high amount, so that the resulting changes in the dynamic viscosity could not be explained by a realistic change of the fluid viscosity. Only aggregation, flocculation and plug formation can explain the experimental results. The particle systems in the presented thesis had a great impact on the film interfaces due to the stabilizer molecules present in the bulk solution. Finally, the location of the particles with respect to their lateral and vertical arrangement in the film was studied with advanced reflectivity and scattering methods. Neutron Reflectometry studies were performed to investigate the location of nanoparticles in the TLF perpendicular to the IF. For the first time, we study TLFs using grazing incidence small angle X – ray scattering (GISAXS), which is a technique sensitive to the lateral arrangement of particles in confined volumes. This work provides preliminary data on a lateral ordering of particles in the film.
Thermoresponsive Zellkultursubstrate für zeitlich-räumlich gesteuertes Auswachsen neuronaler Zellen
(2019)
Ein wichtiges Ziel der Neurowissenschaften ist das Verständnis der komplexen und zugleich faszinierenden, hochgeordneten Vernetzung der Neurone im Gehirn, welche neuronalen Prozessen, wie zum Beispiel dem Wahrnehmen oder Lernen wie auch Neuropathologien zu Grunde liegt. Für verbesserte neuronale Zellkulturmodelle zur detaillierten Untersuchung dieser Prozesse ist daher die Rekonstruktion von geordneten neuronalen Verbindungen dringend erforderlich. Mit Oberflächenstrukturen aus zellattraktiven und zellabweisenden Beschichtungen können neuronale Zellen und ihre Neuriten in vitro strukturiert werden. Zur Kontrolle der neuronalen Verbindungsrichtung muss das Auswachsen der Axone zu benachbarten Zellen dynamisch gesteuert werden, zum Beispiel über eine veränderliche Zugänglichkeit der Oberfläche.
In dieser Arbeit wurde untersucht, ob mit thermoresponsiven Polymeren (TRP) beschichtete Zellkultursubstrate für eine dynamische Kontrolle des Auswachsens neuronaler Zellen geeignet sind. TRP können über die Temperatur von einem zellabweisenden in einen zellattraktiven Zustand geschaltet werden, womit die Zugänglichkeit der Oberfläche für Zellen dynamisch gesteuert werden kann. Die TRP-Beschichtung wurde mikrostrukturiert, um einzelne oder wenige neuronale Zellen zunächst auf der Oberfläche anzuordnen und das Auswachsen der Zellen und Neuriten über definierte TRP-Bereiche in Abhängigkeit der Temperatur zeitlich und räumlich zu kontrollieren. Das Protokoll wurde mit der neuronalen Zelllinie SH-SY5Y etabliert und auf humane induzierte Neurone übertragen. Die Anordnung der Zellen konnte bei Kultivierung im zellabweisenden Zustand des TRPs für bis zu 7 Tage aufrecht erhalten werden. Durch Schalten des TRPs in den zellattraktiven Zustand konnte das Auswachsen der Neuriten und Zellen zeitlich und räumlich induziert werden. Immunozytochemische Färbungen und Patch-Clamp-Ableitungen der Neurone demonstrierten die einfache Anwendbarkeit und Zellkompatibilität der TRP-Substrate.
Eine präzisere räumliche Kontrolle des Auswachsens der Zellen sollte durch lokales Schalten der TRP-Beschichtung erreicht werden. Dafür wurden Mikroheizchips mit Mikroelektroden zur lokalen Jouleschen Erwärmung der Substratoberfläche entwickelt. Zur Evaluierung der generierten Temperaturprofile wurde eine Temperaturmessmethode entwickelt und die erhobenen Messwerte mit numerisch simulierten Werten abgeglichen. Die Temperaturmessmethode basiert auf einfach zu applizierenden Sol-Gel-Schichten, die den temperatursensitiven Fluoreszenzfarbstoff Rhodamin B enthalten. Sie ermöglicht oberflächennahe Temperaturmessungen in trockener und wässriger Umgebung mit hoher Orts- und Temperaturauflösung. Numerische Simulationen der Temperaturprofile korrelierten gut mit den experimentellen Daten. Auf dieser Basis konnten Geometrie und Material der Mikroelektroden hinsichtlich einer lokal stark begrenzten Temperierung optimiert werden. Ferner wurden für die Kultvierung der Zellen auf den Mikroheizchips eine Zellkulturkammer und Kontaktboard für die elektrische Kontaktierung der Mikroelektroden geschaffen.
Die vorgestellten Ergebnisse demonstrieren erstmalig das enorme Potential thermoresponsiver Zellkultursubstrate für die zeitlich und räumlich gesteuerte Formation geordneter neuronaler Verbindungen in vitro. Zukünftig könnte dies detaillierte Studien zur neuronalen Informationsverarbeitung oder zu Neuropathologien an relevanten, humanen Zellmodellen ermöglichen.
Despite the popularity of thermoresponsive polymers, much is still unknown about their behavior, how it is triggered, and what factors influence it, hindering the full exploitation of their potential. One particularly puzzling phenomenon is called co-nonsolvency, in which a polymer is soluble in two individual solvents, but counter-intuitively becomes insoluble in mixtures of both. Despite the innumerous potential applications of such systems, including actuators, viscosity regulators and as carrier structures, this field has not yet been extensively studied apart from the classical example of poly(N isopropyl acrylamide) (PNIPAM) in mixtures of water and methanol. Therefore, this thesis focuses on evaluating how changes in the chemical structure of the polymers impact the thermoresponsive, aggregation and co-nonsolvency behaviors of both homopolymers and amphiphilic block copolymers. Within this scope, both the synthesis of the polymers and their characterization in solution is investigated. Homopolymers were synthesized by conventional free radical polymerization, whereas block copolymers were synthesized by consecutive reversible addition fragmentation chain transfer (RAFT) polymerizations. The synthesis of the monomers N isopropyl methacrylamide (NIPMAM) and N vinyl isobutyramide (NVIBAM), as well as a few chain transfer agents is also covered. Through turbidimetry measurements, the thermoresponsive and co-nonsolvency behavior of PNIPMAM and PNVIBAM homopolymers is then compared to the well-known PNIPAM, in aqueous solutions with 9 different organic co-solvents. Additionally, the effects of end-groups, molar mass, and concentration are investigated. Despite the similarity of their chemical structures, the 3 homopolymers show significant differences in transition temperatures and some divergences in their co-nonsolvency behavior. More complex systems are also evaluated, namely amphiphilic di- and triblock copolymers of PNIPAM and PNIPMAM with polystyrene and poly(methyl methacrylate) hydrophobic blocks. Dynamic light scattering is used to evaluate their aggregation behavior in aqueous and mixed aqueous solutions, and how it is affected by the chemical structure of the blocks, the chain architecture, presence of cosolvents and polymer concentration. The results obtained shed light into the thermoresponsive, co-nonsolvency and aggregation behavior of these polymers in solution, providing valuable information for the design of systems with a desired aggregation behavior, and that generate targeted responses to temperature and solvent mixture changes.
Thermoresponsive block copolymers of presumably highly biocompatible character exhibiting upper critical solution temperature (UCST) type phase behavior were developed. In particular, these polymers were designed to exhibit UCST-type cloud points (Tcp) in physiological saline solution (9 g/L) within the physiologically interesting window of 30-50°C. Further, their use as carrier for controlled release purposes was explored. Polyzwitterion-based block copolymers were synthesized by atom transfer radical polymerization (ATRP) via a macroinitiator approach with varied molar masses and co-monomer contents. These block copolymers can self-assemble in the amphiphilic state to form micelles, when the thermoresponsive block experiences a coil-to-globule transition upon cooling. Poly(ethylene glycol) methyl ether (mPEG) was used as the permanently hydrophilic block to stabilize the colloids formed, and polyzwitterions as the thermoresponsive block to promote the temperature-triggered assembly-disassembly of the micellear aggregates at low temperature.
Three zwitterionic monomers were used for this studies, namely 3-((2-(methacryloyloxy)ethyl)dimethylammonio)propane-1-sulfonate (SPE), 4-((2-(methacryloyl- oxy)ethyl)dimethylammonio)butane-1-sulfonate (SBE), and 3-((2-(methacryloyloxy)ethyl)- dimethylammonio)propane-1-sulfate) (ZPE). Their (co)polymers were characterized with respect to their molecular structure by proton nuclear magnetic resonance (1H-NMR) and gel permeation chromatography (GPC). Their phase behaviors in pure water as well as in physiological saline were studied by turbidimetry and dynamic light scattering (DLS). These (co)polymers are thermoresponsive with UCST-type phase behavior in aqueous solution. Their phase transition temperatures depend strongly on the molar masses and the incorporation of co-monomers: phase transition temperatures increased with increasing molar masses and content of poorly water-soluble co-monomer. In addition, the presence of salt influenced the phase transition dramatically. The phase transition temperature decreased with increasing salt content in the solution. While the PSPE homopolymers show a phase transition only in pure water, the PZPE homopolymers are able to exhibit a phase transition only in high salinity, as in physiological saline. Although both polyzwitterions have similar chemical structures that differ only in the anionic group (sulfonate group in SPE and sulfate group in ZPE), the water solubility is very different. Therefore, the phase transition temperatures of targeted block copolymers were modulated by using statistical copolymer of SPE and ZPE as thermoresponsive block, and varying the ratio of SPE to ZPE. Indeed, the statistical copolymers of P(SPE-co-ZPE) show phase transitions both in pure water as well as in physiological saline. Surprisingly, it was found that mPEG-b-PSBE block copolymer can display “schizophrenic” behavior in pure water, with the UCST-type cloud point occurring at lower temperature than the LCST-type one.
The block copolymer, which satisfied best the boundary conditions, is block copolymer mPEG114-b-P(SPE43-co-ZPE39) with a cloud point of 45°C in physiological saline. Therefore, it was chosen for solubilization studies of several solvatochromic dyes as models of active agents, using the thermoresponsive block copolymer as “smart” carrier. The uptake and release of the dyes were explored by UV-Vis and fluorescence spectroscopy, following the shift of the wavelength of the absorbance or emission maxima at low and high temperature. These are representative for the loaded and released state, respectively. However, no UCST-transition triggered uptake and release of these dyes could be observed. Possibly, the poor affinity of the polybetaines to the dyes in aqueous environtments may be related to the widely reported antifouling properties of zwitterionic polymers.
Thermophony in real gases
(2016)
A thermophone is an electrical device for sound generation. The advantages of thermophones over conventional sound transducers such as electromagnetic, electrostatic or piezoelectric transducers are their operational principle which does not require any moving parts, their resonance-free behavior, their simple construction and their low production costs.
In this PhD thesis, a novel theoretical model of thermophonic sound generation in real gases has been developed. The model is experimentally validated in a frequency range from 2 kHz to 1 MHz by testing more then fifty thermophones of different materials, including Carbon nano-wires, Titanium, Indium-Tin-Oxide, different sizes and shapes for sound generation in gases such as air, argon, helium, oxygen, nitrogen and sulfur hexafluoride.
Unlike previous approaches, the presented model can be applied to different kinds of thermophones and various gases, taking into account the thermodynamic properties of thermophone materials and of adjacent gases, degrees of freedom and the volume occupied by the gas atoms and molecules, as well as sound attenuation effects, the shape and size of the thermophone surface and the reduction of the generated acoustic power due to photonic emission. As a result, the model features better prediction accuracy than the existing models by a factor up to 100. Moreover, the new model explains previous experimental findings on thermophones which can not be explained with the existing models.
The acoustic properties of the thermophones have been tested in several gases using unique, highly precise experimental setups comprising a Laser-Doppler-Vibrometer combined with a thin polyethylene film which acts as a broadband and resonance-free sound-pressure detector. Several outstanding properties of the thermophones have been demonstrated for the first time, including the ability to generate arbitrarily shaped acoustic signals, a greater acoustic efficiency compared to conventional piezoelectric and electrostatic airborne ultrasound transducers, and applicability as powerful and tunable sound sources with a bandwidth up to the megahertz range and beyond.
Additionally, new applications of thermophones such as the study of physical properties of gases, the thermo-acoustic gas spectroscopy, broad-band characterization of transfer functions of sound and ultrasound detection systems, and applications in non-destructive materials testing are discussed and experimentally demonstrated.
The Andes are a ~7000 km long N-S trending mountain range developed along the South American western continental margin. Driven by the subduction of the oceanic Nazca plate beneath the continental South American plate, the formation of the northern and central parts of the orogen is a type case for a non-collisional orogeny. In the southern Central Andes (SCA, 29°S-39°S), the oceanic plate changes the subduction angle between 33°S and 35°S from almost horizontal (< 5° dip) in the north to a steeper angle (~30° dip) in the south. This sector of the Andes also displays remarkable along- and across- strike variations of the tectonic deformation patterns. These include a systematic decrease of topographic elevation, of crustal shortening and foreland and orogenic width, as well as an alternation of the foreland deformation style between thick-skinned and thin-skinned recorded along- and across the strike of the subduction zone. Moreover, the SCA are a very seismically active region. The continental plate is characterized by a relatively shallow seismicity (< 30 km depth) which is mainly focussed at the transition from the orogen to the lowland areas of the foreland and the forearc; in contrast, deeper seismicity occurs below the interiors of the northern foreland. Additionally, frequent seismicity is also recorded in the shallow parts of the oceanic plate and in a sector of the flat slab segment between 31°S and 33°S. The observed spatial heterogeneity in tectonic and seismic deformation in the SCA has been attributed to multiple causes, including variations in sediment thickness, the presence of inherited structures and changes in the subduction angle of the oceanic slab. However, there is no study that inquired the relationship between the long-term rheological configuration of the SCA and the spatial deformation patterns. Moreover, the effects of the density and thickness configuration of the continental plate and of variations in the slab dip angle in the rheological state of the lithosphere have been not thoroughly investigated yet. Since rheology depends on composition, pressure and temperature, a detailed characterization of the compositional, structural and thermal fields of the lithosphere is needed. Therefore, by using multiple geophysical approaches and data sources, I constructed the following 3D models of the SCA lithosphere: (i) a seismically-constrained structural and density model that was tested against the gravity field; (ii) a thermal model integrating the conversion of mantle shear-wave velocities to temperature with steady-state conductive calculations in the uppermost lithosphere (< 50 km depth), validated by temperature and heat-flow measurements; and (iii) a rheological model of the long-term lithospheric strength using as input the previously-generated models.
The results of this dissertation indicate that the present-day thermal and rheological fields of the SCA are controlled by different mechanisms at different depths. At shallow depths (< 50 km), the thermomechanical field is modulated by the heterogeneous composition of the continental lithosphere. The overprint of the oceanic slab is detectable where the oceanic plate is shallow (< 85 km depth) and the radiogenic crust is thin, resulting in overall lower temperatures and higher strength compared to regions where the slab is steep and the radiogenic crust is thick. At depths > 50 km, largest temperatures variations occur where the descending slab is detected, which implies that the deep thermal field is mainly affected by the slab dip geometry.
The outcomes of this thesis suggests that long-term thermomechanical state of the lithosphere influences the spatial distribution of seismic deformation. Most of the seismicity within the continental plate occurs above the modelled transition from brittle to ductile conditions. Additionally, there is a spatial correlation between the location of these events and the transition from the mechanically strong domains of the forearc and foreland to the weak domain of the orogen. In contrast, seismicity within the oceanic plate is also detected where long-term ductile conditions are expected. I therefore analysed the possible influence of additional mechanisms triggering these earthquakes, including the compaction of sediments in the subduction interface and dehydration reactions in the slab. To that aim, I carried out a qualitative analysis of the state of hydration in the mantle using the ratio between compressional- and shear-wave velocity (vp/vs ratio) from a previous seismic tomography. The results from this analysis indicate that the majority of the seismicity spatially correlates with hydrated areas of the slab and overlying continental mantle, with the exception of the cluster within the flat slab segment. In this region, earthquakes are likely triggered by flexural processes where the slab changes from a flat to a steep subduction angle.
First-order variations in the observed tectonic patterns also seem to be influenced by the thermomechanical configuration of the lithosphere. The mechanically strong domains of the forearc and foreland, due to their resistance to deformation, display smaller amounts of shortening than the relatively weak orogenic domain. In addition, the structural and thermomechanical characteristics modelled in this dissertation confirm previous analyses from geodynamic models pointing to the control of the observed heterogeneities in the orogen and foreland deformation style. These characteristics include the lithospheric and crustal thickness, the presence of weak sediments and the variations in gravitational potential energy.
Specific conditions occur in the cold and strong northern foreland, which is characterized by active seismicity and thick-skinned structures, although the modelled crustal strength exceeds the typical values of externally-applied tectonic stresses. The additional mechanisms that could explain the strain localization in a region that should resist deformation are: (i) increased tectonic forces coming from the steepening of the slab and (ii) enhanced weakening along inherited structures from pre-Andean deformation events. Finally, the thermomechanical conditions of this sector of the foreland could be a key factor influencing the preservation of the flat subduction angle at these latitudes of the SCA.
Widespread landscape changes are presently observed in the Arctic and are most likely to
accelerate in the future, in particular in permafrost regions which are sensitive to climate warming. To assess current and future developments, it is crucial to understand past
environmental dynamics in these landscapes. Causes and interactions of environmental variability can hardly be resolved by instrumental records covering modern time scales. However, long-term
environmental variability is recorded in paleoenvironmental archives. Lake sediments are important archives that allow reconstruction of local limnogeological processes as well as past environmental changes driven directly or indirectly by climate dynamics. This study aims at
reconstructing Late Quaternary permafrost and thermokarst dynamics in central-eastern Beringia,
the terrestrial land mass connecting Eurasia and North America during glacial sea-level low stands. In order to investigate development, processes and influence of thermokarst dynamics, several sediment cores from extant lakes and drained lake basins were analyzed to answer the
following research questions:
1. When did permafrost degradation and thermokarst lake development take place and what were enhancing and inhibiting environmental factors?
2. What are the dominant processes during thermokarst lake development and how are they reflected in proxy records?
3. How did, and how do, thermokarst dynamics contribute to the inventory and properties of organic matter in sediments and to the carbon cycle?
Methods applied in this study are based upon a multi-proxy approach combining sedimentological, geochemical, geochronological, and micropaleontological analyses, as well as analyses of stable isotopes and hydrochemistry of pore-water and ice. Modern field observations of water quality and basin morphometrics complete the environmental investigations.
The investigated sediment cores reveal permafrost degradation and thermokarst dynamics on different time scales. The analysis of a sediment core from GG basin on the northern Seward Peninsula (Alaska) shows prevalent terrestrial accumulation of yedoma throughout the Early to Mid Wisconsin, with intermediate wet conditions at around 44.5 to 41.5 ka BP. This first wetland development was terminated by the accumulation of a 1-meter-thick airfall tephra, most likely originating from the South Killeak Maar eruption at 42 ka BP. A depositional hiatus between 22.5 and 0.23 ka BP may indicate thermokarst lake formation in the surroundings of the site, which still forms a yedoma upland today. The thermokarst lake that formed GG basin initiated at 230 ± 30 cal a BP and drained in spring 2005 AD. Four years after drainage, the lake talik was still unfrozen below 268 cm depth.
A permafrost core from Mama Rhonda basin on the northern Seward Peninsula preserved a full lacustrine record including several lake phases. The first lake generation developed at 11.8 cal ka BP during the Lateglacial-Early Holocene transition; its old basin (Grandma Rhonda) is still partially preserved at the southern margin of the study basin. Around 9.0 cal ka BP a shallow and more dynamic thermokarst lake developed, with actively eroding shorelines and potentially intermediate shallow-water or wetland phases (Mama Rhonda). Mama Rhonda lake drainage at 1.1 cal ka BP was followed by gradual accumulation of terrestrial peat and top-down refreezing of the lake talik. A significantly lower organic carbon content was measured in Grandma Rhonda deposits (mean TOC of 2.5 wt%) than in Mama Rhonda deposits (mean TOC of 7.9 wt%), highlighting the impact of thermokarst dynamics on biogeochemical cycling in successive lake generations through thawing and mobilization of organic carbon into the lake system.
Proximal and distal sediment cores from Peatball Lake on the Arctic Coastal Plain of Alaska revealed young thermokarst dynamics over about the last 1,400 years along a depositional gradient, based on reconstructions from shoreline expansion rates and absolute dating results. After its initiation as a remnant pond of a previously drained lake basin, a rapidly deepening lake with increasing oxygenation of the water column is evident from laminated sediments and higher Fe/Ti and Fe/S ratios in the sediment. The sediment record archived characteristic shifts in depositional regimes and sediment sources, from upland deposits and re-deposited sediments from drained thaw-lake basins, depending on the gradually changing shoreline configuration. These changes are evident from alternating organic inputs into the lake system, which highlights the potential of thermokarst lakes to recycle old carbon from degrading permafrost deposits in their catchments.
The lake sediment record from Herschel Island in the Yukon (Canada) covers the full Holocene period. After its initiation as a thermokarst lake at 11.7 cal ka BP and intense thermokarst activity until 10.0 cal ka BP, the steady sedimentation was interrupted by a depositional hiatus at 1.6 cal ka BP, which likely resulted from lake drainage or allochthonous slumping due to collapsing shorelines. The specific setting of the lake on a push moraine composed of marine deposits is reflected in the sedimentary record. Freshening of the maturing lake is indicated by decreasing electrical conductivity in the pore-water. The alternation of marine and freshwater ostracods and foraminifera likewise confirms decreasing salinity, but also reflects episodic re-deposition of allochthonous marine sediments.
Based on permafrost and lacustrine sediment records, this thesis shows examples of the Late Quaternary evolution of typical Arctic permafrost landscapes in central-eastern Beringia and the complex interaction of local disturbance processes, regional environmental dynamics and global climate patterns. This study confirms that thermokarst lakes are important agents of organic matter recycling in complex and continuously changing landscapes.
Current climate warming is affecting arctic regions at a faster rate than the rest of the world. This has profound effects on permafrost that underlies most of the arctic land area. Permafrost thawing can lead to the liberation of considerable amounts of greenhouse gases as well as to significant changes in the geomorphology, hydrology, and ecology of the corresponding landscapes, which may in turn act as a positive feedback to the climate system. Vast areas of the east Siberian lowlands, which are underlain by permafrost of the Yedoma-type Ice Complex, are particularly sensitive to climate warming because of the high ice content of these permafrost deposits. Thermokarst and thermal erosion are two major types of permafrost degradation in periglacial landscapes. The associated landforms are prominent indicators of climate-induced environmental variations on the regional scale. Thermokarst lakes and basins (alasses) as well as thermo-erosional valleys are widely distributed in the coastal lowlands adjacent to the Laptev Sea. This thesis investigates the spatial distribution and morphometric properties of these degradational features to reconstruct their evolutionary conditions during the Holocene and to deduce information on the potential impact of future permafrost degradation under the projected climate warming. The methodological approach is a combination of remote sensing, geoinformation, and field investigations, which integrates analyses on local to regional spatial scales. Thermokarst and thermal erosion have affected the study region to a great extent. In the Ice Complex area of the Lena River Delta, thermokarst basins cover a much larger area than do present thermokarst lakes on Yedoma uplands (20.0 and 2.2 %, respectively), which indicates that the conditions for large-area thermokarst development were more suitable in the past. This is supported by the reconstruction of the development of an individual alas in the Lena River Delta, which reveals a prolonged phase of high thermokarst activity since the Pleistocene/Holocene transition that created a large and deep basin. After the drainage of the primary thermokarst lake during the mid-Holocene, permafrost aggradation and degradation have occurred in parallel and in shorter alternating stages within the alas, resulting in a complex thermokarst landscape. Though more dynamic than during the first phase, late Holocene thermokarst activity in the alas was not capable of degrading large portions of Pleistocene Ice Complex deposits and substantially altering the Yedoma relief. Further thermokarst development in existing alasses is restricted to thin layers of Holocene ice-rich alas sediments, because the Ice Complex deposits underneath the large primary thermokarst lakes have thawed completely and the underlying deposits are ice-poor fluvial sands. Thermokarst processes on undisturbed Yedoma uplands have the highest impact on the alteration of Ice Complex deposits, but will be limited to smaller areal extents in the future because of the reduced availability of large undisturbed upland surfaces with poor drainage. On Kurungnakh Island in the central Lena River Delta, the area of Yedoma uplands available for future thermokarst development amounts to only 33.7 %. The increasing proximity of newly developing thermokarst lakes on Yedoma uplands to existing degradational features and other topographic lows decreases the possibility for thermokarst lakes to reach large sizes before drainage occurs. 
Drainage of thermokarst lakes due to thermal erosion is common in the study region, but thermo-erosional valleys also provide water to thermokarst lakes and alasses. Besides these direct hydrological interactions between thermokarst and thermal erosion on the local scale, an interdependence between both processes exists on the regional scale. A regional analysis of extensive networks of thermo-erosional valleys in three lowland regions of the Laptev Sea, with a total study area of 5,800 km², found that these features are more common in areas with higher slopes and relief gradients, whereas thermokarst development is more pronounced in flat lowlands with lower relief gradients. The combined results of this thesis highlight the need for comprehensive analyses of both thermokarst and thermal erosion in order to assess past and future impacts and feedbacks of the degradation of ice-rich permafrost on the hydrology and climate of a region.
Research on monolayers of amphiphilic lipids on aqueous solution is of basic importance in surface science. Due to the applicability of a variety of surface-sensitive techniques, floating insoluble monolayers are very suitable model systems for the study of order, structure formation and material transport in two dimensions, or of the interactions of molecules at the interface with ions or molecules in the bulk (keyword 'molecular recognition'). From the behavior of monolayers, conclusions can be drawn on the properties of lipid layers on solid substrates or in biological membranes. This work deals with specific and fundamental interactions in monolayers, both on the molecular and on the microscopic scale, and with their relation to the lattice structure, morphology and thermodynamic behavior of monolayers at the air-water interface. Monolayers of long-chain fatty acids serve as the main model system, since there the molecular interactions can be gradually adjusted by varying the degree of dissociation by means of the subphase pH value. To manipulate the molecular interactions, the temperature and monolayer composition are systematically varied in addition to the subphase composition. The change in the monolayer properties as a function of an external parameter is analyzed by means of isotherm and surface potential measurements, Brewster-angle microscopy, X-ray diffraction at grazing incidence and polarization-modulated infrared reflection absorption spectroscopy. To this end, a quantitative measure of the molecular interactions and of the chain conformational order is derived from the X-ray data. The most interesting results of this work are the elucidation of the origin of regular polygonal and dendritic domain shapes, the various effects of cholesterol on molecular packing and lattice order of long-chain amphiphiles, as well as the detection of an abrupt change in the head group bonding interactions, the chain conformational order and the phase transition pressure between tilted phases in fatty acid monolayers near pH 9. For the interpretation of the latter point, a model of the head group bonding structure in fatty acid monolayers as a function of the pH value is developed.
In the present work, methods of Earth system analysis are applied to the investigation of the habitability of terrestrial exoplanets. Using a parameterised convection model for the Earth, the thermal evolution of terrestrial planets is calculated. As the luminosity of the central star increases, the planetary climate is stabilised via the global carbonate-silicate cycle. For a photosynthetically active biosphere, which can exist within a certain temperature range at a sufficient CO2 concentration, a survival span is estimated. The range of distances around a star within which such a biosphere is productive is defined and calculated as the photosynthetically active habitable zone (pHZ). The point in time at which the pHZ in an extrasolar planetary system finally vanishes defines the maximum lifespan of the biosphere. For super-Earths, massive terrestrial planets, this lifespan is longer the more massive the planet is, and shorter the more of its surface is covered by continents. For super-Earths that are neither pronounced water worlds nor land worlds, the maximum lifespan scales with the planetary mass with an exponent of 0.14. Around K and M stars, the survival span of a biosphere on a planet is always determined by this maximum lifespan and is not limited by the end of the central star's main-sequence evolution. The pHZ concept is applied to the extrasolar planetary system Gliese 581; accordingly, the 8-Earth-mass super-Earth Gliese 581d could be habitable. Based on the presented pHZ concept, the Rare Earth hypothesis put forward by Ward and Brownlee in 1999 is quantified for the Milky Way for the first time. This hypothesis states that complex life is probably very rare in the universe, whereas primitive life could be widespread. Different temperature and CO2 tolerances, as well as a different influence on weathering for complex and primitive life forms, lead to different boundaries of the pHZ and to different estimates of the number of planets that could be inhabited by the respective life forms. It turns out that planets hosting complex life should today be about 100 times rarer than planets hosting primitive life.
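The mass scaling quoted above can be stated compactly (with t_max denoting the maximum lifespan of the biosphere and M the planetary mass; the notation is introduced here for illustration):

t_{\max} \propto M^{0.14}

so that, for example, a 10-Earth-mass super-Earth that is neither a water nor a land world would enjoy a biosphere lifespan roughly 10^{0.14} ≈ 1.4 times that of an otherwise identical Earth-mass planet.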
Thermal cis-trans isomerization of azobenzene studied by path sampling and QM/MM stochastic dynamics
(2017)
Azobenzene-based molecular photoswitches have been applied extensively to biological systems, involving photo-control of peptides, lipids and nucleic acids. The isomerization between the stable trans and the metastable cis state of the azo moieties leads to pronounced changes in shape and other physico-chemical properties of the molecules into which they are incorporated. Fast switching can be induced via transitions to excited electronic states and fine-tuned by a large number of different substituents at the phenyl rings. A rational design of tailor-made azo groups, however, also requires control of their stability in the dark, i.e., the half-life of the cis isomer. In computational chemistry, thermally activated barrier crossing on the ground-state Born-Oppenheimer surface can efficiently be estimated with Eyring's transition state theory (TST) approach; the growing complexity of the azo moiety and a rather heterogeneous environment, however, may render some of the underlying simplifying assumptions problematic.
In this dissertation, a computational approach is established to remove two restrictions at once: the environment is modeled explicitly by employing a quantum mechanics/molecular mechanics (QM/MM) description, and the isomerization process is tracked by analyzing complete dynamical pathways between stable states. The suitability of this description is validated using two test systems, pure azobenzene and a derivative with electron-donating and electron-withdrawing substituents ("push-pull" azobenzene). Each system is studied in the gas phase, in toluene and in polar DMSO solvent. The azo molecules are treated at the QM level using a very recent, semi-empirical approximation to density functional theory (the density functional tight binding approximation). Reactive pathways are sampled by implementing a version of the so-called transition path sampling (TPS) method, without introducing any bias into the system dynamics. By analyzing ensembles of reactive trajectories, the change in isomerization pathway from linear inversion to rotation in going from apolar to polar solvent, predicted by the TST approach, could be verified for the push-pull derivative. At the same time, the mere presence of explicit solvation is seen to broaden the distribution of isomerization pathways, an effect TST cannot account for.
Using likelihood maximization based on the TPS shooting history, an improved reaction coordinate was identified as a sine-cosine combination of the central bend angles and the rotation dihedral, r(ω,α,α′). A computational van't Hoff analysis of the activation entropies was performed to gain further insight into the differential role of the solvent for the unsubstituted and the push-pull azobenzene. In agreement with experiment, it yielded positive activation entropies for azobenzene in DMSO but negative ones for the push-pull derivative, reflecting the induced ordering of solvent around the more dipolar transition state associated with the latter compound. In addition, dynamically corrected rate constants were evaluated using the reactive flux approach; for both azobenzene derivatives, an increase comparable to the experimental one was observed in the high-polarity medium.
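For reference, the TST estimate and the van't Hoff-type analysis mentioned above take their standard form (conventional notation, not quoted from the thesis):

k_{\mathrm{TST}} = \frac{k_B T}{h}\, e^{-\Delta G^{\ddagger}/k_B T} = \frac{k_B T}{h}\, e^{\Delta S^{\ddagger}/k_B}\, e^{-\Delta H^{\ddagger}/k_B T}

Plotting \ln\bigl(k h/(k_B T)\bigr) against 1/T thus yields \Delta H^{\ddagger} from the slope and \Delta S^{\ddagger} from the intercept. A positive \Delta S^{\ddagger}, as found for azobenzene in DMSO, points to a transition state whose solvent environment is less ordered than that of the reactant, whereas a negative value indicates induced solvent ordering, as for the push-pull derivative.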
Introduction: Peanut allergy is one of the most common food allergies in childhood. Even small amounts of peanut (PN) can trigger severe allergic reactions, and peanut is the most frequent cause of life-threatening anaphylaxis in children and adolescents. In contrast to other early-childhood food allergies, patients with peanut allergy only rarely develop natural tolerance. For several years, research has therefore focused on causal treatment options for peanut-allergic patients, in particular on oral immunotherapy (OIT). Initial smaller studies of OIT for peanut allergy showed promising results. In the present work, the clinical efficacy and safety of this treatment option in children with peanut allergy are evaluated in detail within a randomised, double-blind, placebo-controlled study with a larger number of cases. Furthermore, immunological changes as well as quality of life and treatment burden under OIT are examined.
Methods: Children between 3 and 18 years of age with IgE-mediated peanut allergy were included in the study. Before the start of OIT, an oral food challenge with peanut was performed. Patients were randomised 1:1 and assigned to the verum or placebo group. Treatment started with 2-120 mg of peanut or placebo per day, depending on the reaction dose in the oral challenge. The daily OIT dose was first increased slowly every two weeks over about 14 months up to a maintenance dose of at least 500 mg peanut (= 125 mg peanut protein, ~1 small peanut) or placebo. The maximum dose reached was then administered daily at home for two months, followed by another oral challenge with peanut. The primary endpoint of the study was the number of patients in the verum and placebo groups who tolerated ≥1200 mg of peanut in the oral challenge after OIT (= "partial desensitisation"). Both before and after OIT, a skin prick test with peanut was performed and peanut-specific IgE and IgG4 were determined in serum. In addition, basophil activation and the release of T-cell-specific cytokines after stimulation with peanut were measured in vitro. Quality of life before and after OIT, as well as the treatment burden during OIT, were assessed using questionnaires.
Results: 62 patients were included in the study and randomised. After about 16 months of OIT, 74.2% (23/31) of the patients in the verum group but only 16.1% (5/31) of the placebo group showed "partial desensitisation" to peanut (p<0.001). In the challenge after OIT, patients in the verum group tolerated a median of 4000 mg of peanut (~8 small peanuts), whereas patients in the placebo group tolerated only 80 mg (~1/6 of a small peanut) (p<0.001). Almost half of the patients in the verum group (41.9%) tolerated the maximum dose of 18 g of peanut in the challenge ("complete desensitisation"). The safety profile with respect to objective side effects was comparable under verum and placebo OIT. However, subjective side effects such as oral itching or abdominal pain occurred significantly more often under verum OIT than under placebo (3.7% of verum doses vs. 0.5% of placebo doses, p<0.001). Three children in the verum group (9.7%) and seven children in the placebo group (22.6%) left the study prematurely, two patients in each group because of side effects. In contrast to placebo, significant immunological changes were observed under verum OIT: a decrease in the peanut-specific wheal diameter in the skin prick test, an increase in peanut-specific serum IgG4 levels, and a reduced peanut-specific cytokine secretion, in particular of the Th2-specific cytokines IL-4 and IL-5. In contrast, no changes in peanut-specific IgE levels or peanut-specific basophil activation were observed under OIT. Quality of life was significantly improved after OIT in children of the verum group, but not in children of the placebo group. During OIT, the treatment was rated positively by almost all children (82%) and mothers (82%) (= low treatment burden).
Discussion: Peanut OIT led to desensitisation and a markedly increased reaction threshold to peanut in the majority of the peanut-allergic children. The children are thus protected against accidental reactions to peanut in everyday life, which clearly improves their quality of life. Under the controlled study conditions, an acceptable safety profile with predominantly mild symptoms was observed. Clinical desensitisation was accompanied by changes on the immunological level. However, long-term studies of peanut OIT are needed to assess the clinical and immunological efficacy with regard to a possible long-term induction of oral tolerance, as well as the safety of long-term OIT, before this therapeutic concept can be transferred into clinical practice.
Theory of mRNA degradation
(2012)
One of the central themes of biology is to understand how individual cells achieve a high fidelity in gene expression. Each cell needs to ensure accurate protein levels for its proper functioning and its capability to proliferate. Therefore, complex regulatory mechanisms have evolved in order to render the expression of each gene dependent on the expression level of (all) other genes. Regulation can occur at different stages within the framework of the central dogma of molecular biology. One very effective and relatively direct mechanism concerns the regulation of the stability of mRNAs. All organisms have evolved diverse and powerful mechanisms to achieve this. In order to better comprehend the regulation in living cells, biochemists have studied specific degradation mechanisms in detail. In addition, modern high-throughput techniques make it possible to obtain quantitative data on a global scale by parallel analysis of the decay patterns of many different mRNAs from different genes. In previous studies, the interpretation of these mRNA decay experiments relied on a simple theoretical description based on an exponential decay. However, this does not account for the complexity of the responsible mechanisms and, as a consequence, the exponential decay is often not in agreement with the experimental decay patterns. We have developed an improved and more general theory of mRNA degradation which provides a general framework of mRNA expression and allows specific degradation mechanisms to be described. We have attempted to provide detailed models for the regulation in different organisms. In the yeast S. cerevisiae, different degradation pathways are known to compete, and furthermore most of them rely on the biochemical modification of mRNA molecules. In bacteria such as E. coli, degradation proceeds primarily endonucleolytically, i.e. it is governed by the initial cleavage within the coding region. In addition, it is often coupled to the level of maturity and the size of the polysome of an mRNA. Both for S. cerevisiae and E. coli, our descriptions lead to a considerable improvement of the interpretation of experimental data. The general outcome is that the degradation of mRNA must be described by an age-dependent degradation rate, which can be interpreted as a consequence of molecular aging of mRNAs. Within our theory, we find adequate ways to address this much-debated topic from a theoretical perspective. The improved understanding of mRNA degradation can readily be applied to further comprehend mRNA expression under different internal or environmental conditions, such as after the induction of transcription or the application of stress. The role of mRNA decay can also be assessed in the context of translation and protein synthesis. The ultimate goal in understanding gene regulation mediated by mRNA stability will be to identify the relevance and biological function of the different mechanisms. Once more quantitative data become available, our description makes it possible to elaborate the role of each mechanism by devising a suitable model.
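To illustrate the difference between the classical exponential description and an age-dependent degradation rate, consider the survival fraction S(t) = exp(-∫₀ᵗ ω(a) da) of an mRNA cohort after transcription stops. The sketch below compares a constant rate with a hypothetical Weibull-type aging rate; the functional form and parameter values are illustrative assumptions, not the model developed in the thesis:

import numpy as np

# Time after transcription stops (arbitrary units)
t = np.linspace(0.0, 10.0, 101)

# Constant rate omega0: classical exponential decay, S(t) = exp(-omega0 * t)
omega0 = 0.3
S_const = np.exp(-omega0 * t)

# Age-dependent rate omega(a) = omega0 * (a/tau)**m (hypothetical aging law);
# closed-form integral: omega0 * t**(m+1) / ((m+1) * tau**m)
tau, m = 3.0, 2.0
S_aging = np.exp(-omega0 * t ** (m + 1) / (tau ** m * (m + 1)))

for ti, sc, sa in zip(t[::20], S_const[::20], S_aging[::20]):
    print(f"t={ti:4.1f}  constant-rate={sc:.3f}  age-dependent={sa:.3f}")

The age-dependent curve shows a shoulder at early times (young molecules are rarely degraded) followed by an accelerating decay, a qualitative signature of molecular aging that a single exponential cannot reproduce.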
The contribution of the dissertation "Theoriebasierte Betreuung vom Schulpraktikum im Lehramtsstudium Englisch" (theory-based supervision of the school placement in the English teacher-training programme) to the scholarly discourse lies in connecting areas of professionalisation research and applied linguistics with investigations into the university-level guidance and supervision of the first teaching placement of the teacher-training programme, the subject-didactic day placement, at the University of Potsdam. An interaction-analytical approach was employed to further develop the higher-education setting of a cross-disciplinary, subject-based supervision of placements in the complex context of school. The implementation of corresponding formats into the regular degree programme was evaluated at regular intervals in an iterative study spanning three years.
The use of musical technical vocabulary is required by most curricula for music lessons at lower secondary level. However, a substantive elaboration of this requirement is missing not only from the curricula but also from the music-didactic literature. There is therefore no clarity about the content, scope and aim of the musical technical language to be used in school. Empirical studies on the linguistic content of music lessons are likewise lacking, and in many other school subjects the state of research concerning linguistic content is similarly limited. The use of language, however, involves not only processes of communication but at the same time processes of learning within language, from vocabulary expansion to the establishment of content-related thematic connections. These learning processes are influenced by the word choice of learners and teachers, and the word choice of learners in turn allows conclusions about the state of their knowledge and its interconnection. On this basis, the linguistic content of music lessons is the subject of the present work. The aim of the study was to find out to what extent the manner and extent of the use of technical language in music lessons can make learning processes more effective and successful, and align them better with the present and future needs of learners.
Optimization is a core part of technological advancement and is usually heavily aided by computers. However, since many optimization problems are hard, it is unrealistic to expect an optimal solution within reasonable time. Hence, heuristics are employed, that is, computer programs that try to produce solutions of high quality quickly. One special class is that of estimation-of-distribution algorithms (EDAs), which are characterized by maintaining a probabilistic model over the problem domain, which they evolve over time. In an iterative fashion, an EDA uses its model in order to generate a set of solutions, which it then uses to refine the model such that the probability of producing good solutions is increased.
In this thesis, we theoretically analyze the class of univariate EDAs over the Boolean domain, that is, over the space of all length-n bit strings. In this setting, the probabilistic model of a univariate EDA consists of an n-dimensional probability vector, where each component denotes the probability of sampling a 1 at that position when generating a bit string.
Our contribution follows two main directions: first, we analyze general inherent properties of univariate EDAs. Second, we determine the expected run times of specific EDAs on benchmark functions from theory. In the first part, we characterize when EDAs are unbiased with respect to the problem encoding. We then consider a setting where all solutions look equally good to an EDA, and we show that the probabilistic model of an EDA quickly evolves into an incorrect model if it is always updated such that it does not change in expectation.
In the second part, we first show that the algorithms cGA and MMAS-fp are able to efficiently optimize a noisy version of the classical benchmark function OneMax. We perturb the function by adding Gaussian noise with a variance of σ², and we prove that the algorithms are able to generate the true optimum in a time polynomial in σ² and the problem size n. For the MMAS-fp, we generalize this result to linear functions. Further, we prove a run time of Ω(n log(n)) for the algorithm UMDA on (noise-free) OneMax. Last, we introduce a new algorithm that is able to optimize the benchmark functions OneMax and LeadingOnes both in O(n log(n)), which is a novelty for heuristics in the domain we consider.
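The following sketch illustrates a univariate EDA of the kind analyzed here, the compact genetic algorithm (cGA) on OneMax. The update rule, shifting the frequency vector by 1/K towards the better of two samples, is the standard cGA; the parameter values and the frequency borders at 1/n and 1-1/n are illustrative choices, not settings prescribed by the thesis:

import random

def onemax(x):
    # OneMax: number of 1-bits in the string
    return sum(x)

def cga(n=50, K=100, max_iters=200_000):
    """Compact genetic algorithm on OneMax -- a minimal illustrative sketch."""
    p = [0.5] * n                     # univariate model: independent per-bit frequencies
    lo, hi = 1.0 / n, 1.0 - 1.0 / n   # borders keep frequencies away from 0 and 1
    for _ in range(max_iters):
        # Sample two bit strings from the current model
        x = [1 if random.random() < pi else 0 for pi in p]
        y = [1 if random.random() < pi else 0 for pi in p]
        if onemax(y) > onemax(x):
            x, y = y, x               # make x the winner
        # Shift each frequency by 1/K towards the winner where the samples differ
        for i in range(n):
            if x[i] != y[i]:
                step = 1.0 / K if x[i] == 1 else -1.0 / K
                p[i] = min(hi, max(lo, p[i] + step))
        if all(pi >= hi for pi in p): # model has converged towards the optimum
            break
    return p

model = cga()
print("mean frequency:", sum(model) / len(model))

On OneMax the frequencies drift towards the upper border, so the mean frequency printed at the end approaches 1 - 1/n once the model has converged.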
Thematic role assignment and word order preferences in the child language acquisition of Tagalog
(2018)
A critical task in daily communications is identifying who did what to whom in an utterance, or assigning the thematic roles agent and patient in a sentence. This dissertation is concerned with Tagalog-speaking children’s use of word order and morphosyntactic markers for thematic role assignment. It aims to explain children’s difficulties in interpreting sentences with a non-canonical order of arguments (i.e., patient-before-agent) by testing the predictions of the following accounts: the frequency account (Demuth, 1989), the Competition model (MacWhinney & Bates, 1989), and the incremental processing account (Trueswell & Gleitman, 2004). Moreover, the experiments in this dissertation test the influence of a word order strategy in a language like Tagalog, where the thematic roles are always unambiguous in a sentence, due to its verb-initial order and its voice-marking system. In Tagalog’s voice-marking system, the inflection on the verb indicates the thematic role of the noun marked by 'ang.' First, the possible basis for a word order strategy in Tagalog was established using a sentence completion experiment given to adults and 5- and 7-year-old children (Chapter 2) and a child-directed speech corpus analysis (Chapter 3). In general, adults and children showed an agent-before-patient preference, although adults’ preference was also affected by sentence voice. Children’s comprehension was then examined through a self-paced listening and picture verification task (Chapter 3) and an eye-tracking and picture selection task (Chapter 4), where word order (agent-initial or patient-initial) and voice (agent voice or patient voice) were manipulated. Offline (i.e., accuracy) and online (i.e., listening times, looks to the target) measures revealed that 5- and 7-year-old Tagalog-speaking children had a bias to interpret the first noun as the agent. Additionally, the use of word order and morphosyntactic markers was found to be modulated by voice. In the agent voice, children relied more on a word order strategy; while in the patient voice, they relied on the morphosyntactic markers. These results are only partially explained by the accounts being tested in this dissertation. Instead, the findings support computational accounts of incremental word prediction and learning such as Chang, Dell, & Bock’s (2006) model.
Galaxy clusters are the largest known gravitationally bound objects; their study is important both for an intrinsic understanding of these systems and for the investigation of the large-scale structure of the universe. The multi-component nature of galaxy clusters offers multiple observable signals across the electromagnetic spectrum. At X-ray wavelengths, galaxy clusters are simply identified as X-ray luminous, spatially extended, extragalactic sources. X-ray observations offer the most powerful technique for constructing cluster catalogues: X-ray cluster surveys have excellent purity and completeness, and the X-ray observables are tightly correlated with mass, which is indeed the most fundamental parameter of clusters. In my thesis I have conducted the 2XMMi/SDSS galaxy cluster survey, a serendipitous search for galaxy clusters based on the X-ray extended sources in the XMM-Newton Serendipitous Source Catalogue (2XMMi-DR3). The main aims of the survey are to identify new X-ray galaxy clusters, investigate their X-ray scaling relations, identify distant cluster candidates, and study the correlation of the X-ray and optical properties. The survey is constrained to those extended sources that lie in the footprint of the Sloan Digital Sky Survey (SDSS), in order to be able to identify the optical counterparts as well as to measure their redshifts, which are mandatory for measuring their physical properties. The overlap area between the XMM-Newton fields and the SDSS-DR7 imaging, the latest SDSS data release at the start of the survey, is 210 deg^2. The survey comprises 1180 X-ray cluster candidates with at least 80 background-subtracted photon counts, which passed the quality control process. To measure the optical redshifts of the X-ray cluster candidates, I used three procedures: (i) cross-matching the candidates with the most recent and largest optically selected cluster catalogues in the literature, which yielded photometric redshifts for about a quarter of the X-ray cluster candidates; (ii) a finding algorithm I developed to search for overdensities of galaxies at the positions of the X-ray cluster candidates in photometric redshift space and to measure their redshifts from the SDSS-DR8 data, which provided photometric redshifts for 530 groups/clusters; (iii) an algorithm I developed to identify the cluster candidates associated with spectroscopically targeted Luminous Red Galaxies (LRGs) in the SDSS-DR9 and to measure the cluster spectroscopic redshift, which provided 324 groups and clusters with spectroscopic confirmation based on the spectroscopic redshift of at least one LRG. In total, the optically confirmed cluster sample comprises 574 groups and clusters with redshifts (0.03 ≤ z ≤ 0.77), the largest X-ray selected cluster catalogue to date based on observations from the current X-ray observatories (XMM-Newton, Chandra, Suzaku, and Swift/XRT). Among the cluster sample, about 75 percent are newly X-ray discovered groups/clusters and 40 percent are systems new to the literature. To determine the X-ray properties of the optically confirmed cluster sample, I reduced and analysed their X-ray data in an automated way following the standard pipelines for processing XMM-Newton data. In this analysis, I extracted the cluster spectra from EPIC (PN, MOS1, MOS2) images within an optimal aperture chosen to maximise the signal-to-noise ratio.
The spectral fitting procedure provided X-ray temperatures kT (0.5 - 7.5 keV) for 345 systems that have good-quality X-ray data. For the full optically confirmed cluster sample, I measured the physical properties L500 (0.5 x 10^42 – 1.2 x 10^45 erg s^-1) and M500 (1.1 x 10^13 – 4.9 x 10^14 M⊙) from an iterative procedure using published scaling relations. The present X-ray detected groups and clusters are in the low and intermediate luminosity regimes, apart from a few luminous systems, thanks to the XMM-Newton sensitivity and the available XMM-Newton deep fields. The optically confirmed cluster sample with measurements of redshift and X-ray properties can be used for various astrophysical applications. As a first application, I investigated the LX - T relation, for the first time based on a large cluster sample of 345 systems with X-ray spectroscopic parameters drawn from a single survey. The current sample includes groups and clusters with wide ranges of redshifts, temperatures, and luminosities. The slope of the relation is consistent with the published ones for nearby clusters with higher temperatures and luminosities. The derived relation is still much steeper than that predicted by self-similar evolution. I also investigated the evolution of the slope and the scatter of the LX - T relation with cluster redshift. After excluding the low-luminosity groups, I found no significant changes of the slope and the intrinsic scatter of the relation with redshift when dividing the sample into three redshift bins. When including the low-luminosity groups in the low-redshift subsample, I found that its LX - T relation becomes steeper than the relation of the intermediate- and high-redshift subsamples. As a second application of the optically confirmed cluster sample from our ongoing survey, I investigated the correlation between the cluster X-ray and optical parameters, which were determined in a homogeneous way. Firstly, I investigated the correlations between the BCG properties (absolute magnitude and optical luminosity) and the cluster global properties (redshift and mass). Secondly, I computed the richness and the optical luminosity within R500 for a nearby subsample (z ≤ 0.42, with complete membership detection from the SDSS data) with measured X-ray temperatures from our survey. The relation between the estimated optical luminosity and richness is also presented. Finally, the correlations between the cluster optical properties (richness and luminosity) and the cluster global properties (X-ray luminosity, temperature, mass) are investigated.
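Scaling relations of this kind are commonly fitted as power laws in log-log space. The following minimal sketch shows the idea with made-up (T, L500) values; the data and variable names are illustrative assumptions, not measurements from the survey:

import numpy as np

# Hypothetical sample: kT in keV and L500 in units of 1e44 erg/s -- illustrative only
T = np.array([1.0, 1.8, 2.7, 3.9, 5.5, 7.0])
L = np.array([0.04, 0.20, 0.65, 1.90, 5.20, 9.80])

# Fit log10(L) = slope * log10(T) + norm by ordinary least squares
slope, norm = np.polyfit(np.log10(T), np.log10(L), 1)
print(f"best-fit slope: {slope:.2f}, log-normalisation: {norm:.2f}")

# Self-similar evolution predicts a slope of about 2 for LX - T; observed
# cluster samples typically yield steeper slopes (around 3), as noted above.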
During the past several decades, polymer materials have become widely used as components of medical devices and implants, such as hemodialysers, bioartificial organs, and devices for vascular and reconstructive surgery. Most of these devices inevitably come into contact with blood during use. When polymer materials come in contact with blood, they can cause various undesired host responses such as thrombosis, inflammatory reactions and infections. The materials must therefore be hemocompatible in order to minimize these undesired body responses. The earliest and one of the main problems in the use of blood-contacting biomaterials is surface-induced thrombosis. The sequence of thrombus formation on artificial surfaces has been well established. The first event after exposure of biomaterials to blood is the adsorption of blood proteins. Surface physicochemical properties of the materials, such as wettability, greatly influence the amount and the conformational changes of adsorbed proteins. In turn, the type, amount and conformational state of the adsorbed protein layer determine whether or not platelets will adhere to the artificial surface and become activated, thus completing thrombus formation. The adsorption of fibrinogen (FNG), which is present in plasma, has been shown to be closely related to surface-induced thrombosis, as it participates in all processes of thrombus formation, such as fibrin formation, platelet adhesion and aggregation. Studying FNG adsorption on artificial surfaces could therefore contribute to a better understanding of the mechanisms of platelet adhesion and activation and thus to controlling surface-induced thrombosis. Endothelialization of polymer surfaces is one of the strategies for improving the hemocompatibility of materials, and it is believed to be the most promising route towards truly blood-compatible materials. Since at physiological conditions proteins such as FNG and fibronectin (FN) form the usual extracellular matrix (ECM) for endothelial cell (EC) adhesion, precoating of the materials with these proteins has been shown to improve EC adhesion and growth in vitro. ECM proteins play an essential role not only as a structural support for cell adhesion and spreading; they are also an important factor in transmitting signals for different cell functions. The ability of cells to remodel plasma proteins such as FNG and FN into matrix-like structures, together with classical cell parameters such as the actin cytoskeleton and focal adhesion formation, could be used as a criterion for proper cell functioning. The establishment and maintenance of a delicate balance between cell-cell and cell-substrate contacts is another important factor for better EC colonization of implants. The functionality of the newly established endothelium in producing antithrombotic substances should always be considered when EC seeding is used to improve the hemocompatibility of polymer materials. Controlling polymer surface properties such as wettability represents a versatile approach to manipulating the above cellular responses and can therefore be used in biomaterial and tissue engineering applications for producing more hemocompatible materials.
This thesis investigates whether multilingual speakers’ use of grammatical constraints in an additional language (La) is affected by the native (L1) and non-native grammars (L2) of their linguistic repertoire.
Previous studies have used untimed measures of grammatical performance to show that L1 and L2 grammars affect the initial stages of La acquisition. This thesis extends this work by examining whether speakers at intermediate levels of La proficiency, who demonstrate mature untimed/offline knowledge of the target La constraints, are differentially affected by their L1 and L2 knowledge when they comprehend sentences under processing pressure. With this purpose, several groups of La German speakers were tested on word order and agreement phenomena using online/timed measures of grammatical knowledge. Participants had mirror distributions of their prior languages: they were either L1English/L2Spanish speakers or L1Spanish/L2English speakers. Crucially, in half of the phenomena the target La constraint aligned with English but not with Spanish, while in the other half it aligned with Spanish but not with English. Results show that the L1 grammar plays a major role in the use of La constraints under processing pressure, as participants displayed increased sensitivity to La constraints when they aligned with their L1, and reduced sensitivity when they did not. Further, in specific phenomena in which the L2 and La constraints aligned, increased L2 proficiency resulted in an enhanced sensitivity to the La constraint. These findings suggest that both native and non-native grammars affect how speakers use La grammatical constraints under processing pressure. However, L1 and L2 grammars differentially influence participants' performance: while L1 constraints seem to be reliably recruited to cope with the processing demands of real-time La use, proficiency in an L2 can enhance sensitivity to La constraints only in specific circumstances, namely when L2 and La constraints align.
There are many factors which make speaking and understanding a second language (L2) a highly complex challenge. Skills and competencies in both linguistic and metalinguistic areas emerge as parts of a multi-faceted, flexible concept underlying bilingual/multilingual communication. On the linguistic level, a combination of an extended knowledge of idiomatic expressions, a broad lexical familiarity, a large vocabulary size, and the ability to deal with phonetic distinctions and fine phonetic detail has been argued to be necessary for effective nonnative comprehension of spoken language. The scientific interest in these factors has also led to more interest in the L2's information structure, the way in which information is organised and packaged into informational units, both within and between clauses. On a practical level, the information structure of a language can offer the means to assign focus to a certain element considered important. Speakers can draw from a rich pool of linguistic means to express this focus, and listeners can in turn interpret these to guide them to the highlighted information, which facilitates comprehension and results in an appropriate understanding of what has been said. If a speaker does not follow the principles of information structure, and the main accent in a sentence is placed on an unimportant word, there may be inappropriate information transfer within the discourse, and misunderstandings. The concept of focus as part of the information structure of a language, the linguistic means used to express it, and the differential use of focus in native and nonnative language processing are central to this dissertation. Languages exhibit a wide range of ways of directing focus, including prosodic means, syntactic constructions, and lexical means. The general principles underlying information structure seem to contrast structurally across different languages, which can also differ in the way they express focus. In the context of L2 acquisition, characteristics of the L1 linguistic system are argued to influence the acquisition of the L2. Similarly, the conceptual patterns of information structure of the L1 may influence the organization of information in the L2. However, strategies and patterns used to exploit information structure for successful language comprehension in the native L1 may not apply at all, or may work in different ways or to different degrees, in the L2. This means that L2 learners ideally have to understand the way that information structure is expressed in the L2 to fully use its benefits. Knowledge of the information-structural requirements of the L2 could also imply that the learner has to make adjustments regarding the use of information-structural devices in the L2. The general question is whether the various means to mark focus in the learners' native language are also accessible in the nonnative language, and whether an L1-L2 transfer of their usage should be considered desirable. The current work explores how information structure helps the listener to discover and structure the forms and meanings of the L2. The central hypothesis is that the ability to access information structure has an impact on the level of the learners' appropriateness and linguistic competence in the L2. Ultimately, the ability to make use of information structure in the L2 is believed to underpin the L2 learners' ability to communicate effectively in the L2.
The present study investigated how the use of focus markers affects processing speed and word recall in a native-nonnative language comparison. The predominant research question was whether the type of focus marking leads to more efficient and accurate word processing in marked structures than in unmarked structures, and whether differences in processing patterns can be observed between the two language conditions. Three perception studies were conducted, each concentrating on one of the following linguistic parameters: 1. Prosodic prominence: Does prosodic focus conveyed by sentence accent and by word position facilitate word recognition? 2. Syntactic means: Do cleft constructions result in faster and more accurate word processing? 3. Lexical means: Does focus conveyed by the particles even/only (German: sogar/nur) facilitate word processing and word recall? Experiments 2 and 3 additionally investigated the contribution of context in the form of preceding questions. Furthermore, they considered accent and its facilitative effect on the processing of words which are in the scope of syntactic or lexical focus marking. All three experiments tested German learners of English in a native German language condition and in English as their L2. Native English speakers were included as a control for the English language condition. Test materials consisted of single sentences, all dealing with bird life. Experiment 1 tested word recognition in three focus conditions (broad focus, narrow focus on the target, and narrow focus on a constituent other than the target), in one condition using natural unmanipulated sentences, and in the other two conditions using spliced sentences. Experiment 2 (effect of syntactic focus marking) and Experiment 3 (effect of lexical focus marking) used phoneme monitoring as a measure of the speed of word processing. Additionally, a word recall test (4AFC) was conducted to assess the effective entry of target-bearing words into the listeners' memory. Experiment 1: Focus marking by prosodic means. Prosodic focus marking by pitch accent was found to highlight important information (Bolinger, 1972), making the accented word perceptually more prominent (Klatt, 1976; van Santen & Olive, 1990; Eefting, 1991; Koopmans-van Beinum & van Bergem, 1989). However, accent structure seems to be processed faster in native than in nonnative listening (Akker & Cutler, 2003, Expt. 3). Therefore, it is expected that prosodically marked words are better recognised than unmarked words, and that listeners can exploit accent structure better for accurate word recognition in their L1 than in the L2 (L1 > L2). Altogether, a difference in word recognition performance in L1 listening is expected between different focus conditions (narrow focus > broad focus). The results of Experiment 1 show that words were better recognized in native listening than in nonnative listening. Focal accent, however, does not seem to help the German subjects recognize accented words more accurately, in either the L1 or the L2. This could be because the focus conditions were not acoustically distinctive enough. Results of the experiments with spliced materials suggest that the surrounding prosodic sentence contour made listeners remember a target word rather than the local, prosodic realization of the word. Prosody indeed seems to direct listeners' attention to the focus of the sentence (see Cutler, 1976).
Regarding the salience of word position, VanPatten (2002; 2004) postulated a sentence location principle for L2 processing, stating a ranking of initial > final > medial word position. Other evidence mentions a processing advantage for items occurring late in the sentence (Akker & Cutler, 2003), and Rast (2003) observed, in an English L2 production study, a trend towards an advantage for items occurring at the outer ends of the sentence. The current Experiment 1 aimed to keep the sentences to an acceptable length, mainly to keep the task in the nonnative language condition feasible. Word length showed an effect only in combination with word position (Rast, 2003; Rast & Dommergues, 2003); therefore, word length was included in the current experiment as a secondary factor and without hypotheses. Results of Experiment 1 revealed that the length of a word does not seem to be important for its accurate recognition. Word position, specifically the final position, clearly seems to facilitate accurate word recognition in German. A similar trend emerges in the English L2 condition, confirming Klein (1984) and Slobin (1985). The results do not support the sentence location principle of VanPatten (2002; 2004). The salience of the final position is interpreted as a recency effect (Murdock, 1962). In addition, the advantage of the final position may benefit from the discourse convention that relevant background information is referred to first, and what is novel later (Haviland & Clark, 1974). This structure is assumed to cue the listener as to what the speaker considers to be important information, and listeners might have reacted according to this convention. Experiment 2: Focus marking by syntactic means. Atypical syntactic structures often draw listeners' attention to certain information in an utterance, and the cleft structure as a focus marking device appears to be a common surface feature in many languages (Lambrecht, 2001). Surface structure influences sentence processing (Foss & Lynch, 1969; Langford & Holmes, 1979), which leads to competing hypotheses in Experiment 2: on the one hand, the focusing effect of the cleft construction might reduce processing times; on the other, cleft constructions were found to be used less to mark focus in German than in English (Ahlemeyer & Kohlhof, 1999; Doherty, 1999; E. Klein, 1988). The complexity of the constructions and the experience from the native language might work against an advantage of the focus effect in the L2. Results of Experiment 2 show that the cleft structure is an effective device to mark focus in German L1. The processing advantage is explained by the low degree of structural markedness of cleft structures: listeners use the focus function of sentence types headed by the dummy subject es (English: it) due to reliance on 'safe' subject-prominent SVO structures. The benefit of clefts is enhanced when the sentences are presented with context, suggesting a substantial benefit when focus effects of syntactic surface structure and the coherence relation between sentences are integrated. Clefts facilitate word processing for English native speakers. Contrary to German L1, the marked cleft construction does not reduce processing times in English L2. The L1-L2 difference was interpreted as a learner problem of applying specific linguistic structures according to the principles of information structure in the target language. Focus marking by clefts did not help the German learners in native or in nonnative word recall.
This could be attributed to the phonological similarity of the multiple-choice options (Conrad & Hull, 1964) and to the long time span between listening and recall (Birch & Garnsey, 1995; McKoon et al., 1993). Experiment 3: Focus marking by lexical means. Focus particles are elements of structure that can indicate focus (König, 1991), and their function is to emphasize a certain part of the sentence (Paterson et al., 1999). I argue that the focus particles even/only (German: sogar/nur) evoke contrast sets of alternatives or complements to the element in focus (Ni et al., 1996), which causes interpretations of context. Therefore, lexical focus marking is not expected to lead to faster word processing. However, since different mechanisms of encoding seem to underlie word memory, a benefit of the focusing function of particles is expected to show in the recall task: since focus particles are a preferred and well-used feature for native speakers of German, a transfer of this habitual use is expected, resulting in better recall of focused words. Results indicated that focus particles seem to be the weakest option for marking focus: focus marking by lexical particles does not seem to reduce word processing times in German L1, English L2, or English L1. The presence of focus particles is likely to instantiate a complex discourse model which lets the listener await further modifying information (Liversedge et al., 2002); this semantic complexity might slow down processing. There are no indications that focus particles facilitate native-language word recall in German L1 and English L1. This could be because focus particles open sets of conditions and contexts that enlarge the set of representations in listeners rather than narrowing it down to the element in the scope of the focus particle. In word recall, the facilitative effect of focus particles emerges only in the nonnative language condition. It is suggested that L2 learners, when faced with more demanding tasks in an L2, use a broad variety of means that identify focus for a better representation of novel words in memory. In Experiments 2 and 3, evidence suggests that accent is an important factor for efficient word processing and accurate recall in German L1 and English L1, but less so in English L2. This underlines the function of accent as a core speech parameter and consistent cue to the perception of prominence in native language use (see Cutler & Fodor, 1979; Pitt & Samuel, 1990a; Eriksson et al., 2002; Akker & Cutler, 2003); the L1-L2 difference is attributed to patterns of expectation that are employed in the L1 but not (yet?) in the L2. There seems to exist a fine-tuned sensitivity to how accents are distributed in the native language: listeners expect an appropriate distribution and interpret it accordingly (Eefting, 1991). This argues for accent placement as extremely important to L2 proficiency; the current results also suggest that accent and its relationship with other speech parameters have to be newly established in the L2 to fully reveal their benefits for efficient processing of speech. There is evidence that additional context facilitates the processing of complex syntactic structures, but that a surplus of information has no effect if the sentence construction is less challenging for the listener. The increased amount of information to be processed seems to impede better word recall, particularly in the L2.
Altogether, it seems that focus marking devices and context can combine to form an advantageous alliance: a substantial benefit in processing efficiency is found when parameters of focus marking and sentence coherence are integrated. L2 research advocates the beneficial aspects of providing context for efficient L2 word learning (Lawson & Hogben, 1996). The current thesis promotes the view that a context which offers more semantic, prosodic, or lexical connections might compensate for the additional processing load that context constitutes for the listeners. A methodological consideration concerns the order in which language conditions are presented to listeners, i.e., L1-L2 or L2-L1. Findings suggest that presentation order could induce a learning bias, with the performance in the second experiment being influenced by knowledge acquired in the first (see Akker & Cutler, 2003). To conclude this work: the results of the present study suggest that information structure is more accessible in the native language than in the nonnative language. There is, however, some evidence that L2 learners have an understanding of the significance of some information-structural parameters of focus marking. This has a beneficial effect on processing efficiency and recall accuracy; on the cognitive side, it illustrates the benefits, and also the need, of a dynamic exchange of information-structural organization between L1 and L2. The findings of the current thesis encourage the view that an understanding of information structure can help the learner to discover and categorise forms and meanings of the L2. Information structure thus emerges as a valuable resource to advance proficiency in a second language.
The intergalactic medium is kept highly photoionised by the intergalactic UV background radiation field generated by the overall population of quasars and galaxies. In the vicinity of sources of UV photons, such as luminous high-redshift quasars, the UV radiation field is enhanced due to the local source contribution. The higher degree of ionisation is visible as a reduced line density or, more generally, as a decreased level of absorption in the Lyman alpha forest of neutral hydrogen. This so-called proximity effect has been detected with high statistical significance towards luminous quasars. If quasars radiate rather isotropically, background quasar sightlines located near foreground quasars should show a region of decreased Lyman alpha absorption close to the foreground quasar. Despite considerable effort, such a transverse proximity effect has only been detected in a few cases. So far, studies of the transverse proximity effect have mostly been limited by the small number of suitable projected pairs or groups of high-redshift quasars. With the aim of substantially increasing the number of quasar groups in the vicinity of bright quasars, we conducted a targeted survey for faint quasars around 18 well-studied quasars, employing slitless spectroscopy. Among the reduced and calibrated slitless spectra of 29000 objects on a total area of 4.39 square degrees we discover in total 169 previously unknown quasar candidates based on their prominent emission lines. 81 potential z>1.7 quasars are selected for confirmation by slit spectroscopy at the Very Large Telescope (VLT). We are able to confirm 80 of these; 64 of the newly discovered quasars reside at z>1.7. The high success rate of the follow-up observations implies that the majority of the remaining candidates are quasars as well. In 16 of the resulting quasar groups we search for a transverse proximity effect as a systematic underdensity in the HI Lyman alpha absorption. We employ a novel technique to characterise the random absorption fluctuations in the forest in order to estimate the significance of the transverse proximity effect. Neither the low-resolution nor the high-resolution spectra of the background quasars in our groups present evidence for a transverse proximity effect. However, Monte Carlo simulations show that the effect should be detectable only at the 1-2 sigma level near three of the foreground quasars. Thus, we cannot distinguish between the presence or absence of a weak signature of the transverse proximity effect. The systematic effects of quasar variability, quasar anisotropy and intrinsic overdensities near quasars likely explain the apparent lack of the transverse proximity effect. Even in the absence of these systematic effects, we show that a statistically significant detection of the transverse proximity effect requires at least 5 medium-resolution spectra of background quasars near foreground quasars whose UV flux exceeds the UV background by a factor of 3. Therefore, statistical studies of the transverse proximity effect require large numbers of suitable pairs.

Two sightlines towards the central quasars of our survey fields show intergalactic HeII Lyman alpha absorption. A comparison of the HeII absorption to the corresponding HI absorption yields an estimate of the spectral shape of the intergalactic UV radiation field, typically parameterised by the HeII/HI column density ratio eta. We analyse the fluctuating UV spectral shape on both lines of sight and correlate it with seven foreground quasars.
On the line of sight towards Q0302-003 we find a harder radiation field near 4 foreground quasars. In the direct vicinity of the quasars eta is consistent with values of 25-100, whereas at large distances from the quasars eta>200 is required. The second line of sight towards HE2347-4342 probes lower redshifts where eta is directly measurable in the resolved HeII forest. Again we find that the radiation field near the 3 foreground quasars is significantly harder than in general. While eta still shows large fluctuations near the quasars, probably due to radiative transfer, the radiation field is on average harder near the quasars than far away from them. We interpret these discoveries as the first detections of the transverse proximity effect as a local hardness fluctuation in the UV spectral shape. No significant HI proximity effect is predicted for the 7 foreground quasars. In fact, the HI absorption near the quasars is close to or slightly above the average, suggesting that the weak signature of the transverse proximity effect is masked by intrinsic overdensities. However, we show that the UV spectral shape traces the transverse proximity effect even in overdense regions or at large distances. Therefore, the spectral hardness is a sensitive physical measure of the transverse proximity effect that is able to break the density degeneracy affecting the traditional searches.
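To make the quantities in the proximity-effect arguments above concrete: the strength of the effect is conventionally parameterised by the ratio omega of the quasar's local ionising flux to the UV background, omega(r) = L_nu / (16 pi^2 r^2 J_nu) (Bajtlik, Duncan & Ostriker 1988). The following minimal Python sketch uses illustrative values for L_nu and J_nu (not measurements from the thesis) to show out to which proper distance a luminous quasar exceeds the background by the factor of 3 quoted above:

```python
import numpy as np

MPC_CM = 3.086e24  # centimetres per megaparsec

def omega(r_mpc, L_nu=1.0e31, J_nu=1.0e-21):
    """Local-to-background photoionisation ratio omega(r)
    (Bajtlik, Duncan & Ostriker 1988). Illustrative defaults:
    L_nu: quasar Lyman-limit luminosity [erg/s/Hz]
    J_nu: UV background intensity [erg/s/cm^2/Hz/sr]
    r_mpc: proper distance from the quasar [Mpc]"""
    r_cm = r_mpc * MPC_CM
    return L_nu / (16.0 * np.pi**2 * r_cm**2 * J_nu)

# proper distance within which this quasar outshines the background
# by the factor of 3 quoted above (~1.5 Mpc for these values)
r = np.linspace(0.1, 10.0, 2000)
print(f"omega >= 3 out to r ~ {r[omega(r) >= 3].max():.1f} Mpc")
```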
The foreland of the Andes in South America is characterised by distinct along-strike changes in surface deformational styles. These styles are classified into two end-members, the thin-skinned and the thick-skinned style. The surface expression of thin-skinned deformation is a succession of narrowly spaced hills and valleys that form laterally continuous ranges on the foreland-facing side of the orogen. Each of the hills is defined by a reverse fault that roots in a basal décollement within the sedimentary cover and acts as a thrust ramp to stack the sedimentary pile. Thick-skinned deformation is morphologically characterised by spatially disparate, basement-cored mountain ranges. These mountain ranges are uplifted along reactivated high-angle crustal-scale discontinuities, such as suture zones between different tectonic terranes.
Amongst the proposed causes for the observed variation are variations in the dip angle of the Nazca plate, variations in sediment thickness, lithospheric thickening, volcanism, and compositional differences. The proposed mechanisms are predominantly based on geological observations or numerical thermomechanical modelling, but there has been no attempt to understand them from the standpoint of data-integrative 3D modelling. The aim of this dissertation is therefore to understand how lithospheric structure controls the deformational behaviour. The integration of independent data into a consistent model of the lithosphere makes it possible to obtain additional evidence on the causes of the different deformational styles. Northern Argentina encompasses the transition from the thin-skinned fold-and-thrust belt in Bolivia to the thick-skinned Sierras Pampeanas province, which makes this area a well-suited location for such a study. The general workflow followed in this study first involves data-constrained structural and density modelling in order to obtain a model of the study area. This model was then used to predict the steady-state thermal field, which in turn served to assess the present-day rheological state of northern Argentina.
The structural configuration of the lithosphere in northern Argentina was determined by means of data-integrative, 3D density modelling verified by Bouguer gravity. The model delineates the first-order density contrasts in the lithosphere in the uppermost 200 km, and discriminates bodies for the sediments, the crystalline crust, the lithospheric mantle and the subducting Nazca plate. To obtain the intra-crustal density structure, an automated inversion approach was developed and applied to a starting structural model that assumed a homogeneously dense crust. The resulting final structural model indicates that the crustal structure can be represented by an upper crust with a density of 2800 kg/m³, and a lower crust of 3100 kg/m³. The Transbrazilian Lineament, which separates the Pampia terrane from the Río de la Plata craton, is expressed as a zone of low average crustal densities.
In an excursus, we demonstrate in a companion study (Tan et al., 2018) that the gravity inversion method developed to obtain intra-crustal density structures is also applicable to density variations in the uppermost lithospheric mantle. Densities at such sub-crustal depths are difficult to constrain from seismic tomographic models due to the smearing of crustal velocities. With the application to the uppermost lithospheric mantle in the North Atlantic, we demonstrate that lateral density trends of at least 125 km width are robustly recovered by the inversion method, thereby providing an important tool for the delineation of subcrustal density trends.
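The intuition behind such gravity-based density estimates can be conveyed with the simplest possible forward model, the infinite Bouguer slab, delta_g = 2*pi*G*delta_rho*h. The thesis and Tan et al. (2018) use a full 3D inversion, so the following Python sketch with made-up numbers only illustrates the trade-off between anomaly amplitude, layer thickness and density contrast:

```python
import math

G = 6.674e-11   # gravitational constant [m^3 kg^-1 s^-2]
MGAL = 1e-5     # 1 mGal in m/s^2

def slab_anomaly_mgal(drho, h_m):
    """Gravity anomaly [mGal] of an infinite Bouguer slab with density
    contrast drho [kg/m^3] and thickness h_m [m]."""
    return 2.0 * math.pi * G * drho * h_m / MGAL

def invert_drho(dg_mgal, h_m):
    """Invert the slab formula for the density contrast [kg/m^3]."""
    return dg_mgal * MGAL / (2.0 * math.pi * G * h_m)

# a 20 mGal residual anomaly over a 10-km-thick layer maps to ~48 kg/m^3
print(invert_drho(20.0, 10e3))
```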
Due to the genetic link between subduction, orogenesis and retroarc foreland basins, the question arises whether the steady-state assumption is valid in such a dynamic setting. To answer this question, I analysed (i) the impact of subduction on the conductive thermal field of the overlying continental plate, and (ii) the differences between the transient and steady-state thermal fields of a coupled geodynamic model. Both analyses indicate that the assumption of a thermal steady state is applicable in most parts of the study area. Within the orogenic wedge, where the assumption cannot be applied, I estimated the transient thermal field based on the results of the conducted analyses.
Accordingly, the structural model obtained in the first step could be used to compute a 3D conductive steady-state thermal field. The rheological assessment based on this thermal field indicates that the lithosphere of the thin-skinned Subandean ranges is characterised by a relatively strong crust and a weak mantle. In contrast, the adjacent foreland basin consists of a fully coupled, very strong lithosphere. Thus, shortening in northern Argentina can only be accommodated within the weak lithosphere of the orogen and the Subandean ranges. The analysis suggests that the décollements of the fold-and-thrust belt are the shallow continuation of shear zones that reside in the ductile sections of the orogenic crust. Furthermore, the localisation of the faults that transfer strain between the deeper ductile crust and the shallower décollement is strongly influenced by crustal weak zones such as foliation. In contrast to the northern foreland, the lithosphere of the thick-skinned Sierras Pampeanas is fully coupled and characterised by a strong crust and mantle. The high overall strength prevents the generation of crustal-scale faults by tectonic stresses. Even inherited crustal-scale discontinuities, such as sutures, cannot sufficiently reduce the strength of the lithosphere for them to be reactivated. Therefore, magmatism, which had been identified as a precursor of basement uplift in the Sierras Pampeanas, is the key factor that leads to the broken foreland of this province. Due to thermal weakening, and potentially lubrication of the inherited discontinuities, the lithosphere is locally weakened such that tectonic stresses can uplift the basement blocks. This hypothesis explains both the spatially disparate character of the broken foreland and the observed temporal delay between volcanism and basement block uplift.
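The "strong crust"/"weak mantle" statements above rest on yield-strength envelopes: at every depth the lithosphere supports the smaller of a frictional (brittle) and a power-law creep (ductile) differential stress. A schematic Python sketch with a linear geotherm and textbook wet-quartzite flow-law parameters (Gleason & Tullis 1995); all numbers are illustrative stand-ins, not values from the dissertation:

```python
import numpy as np

R = 8.314                     # gas constant [J mol^-1 K^-1]
rho, g = 2800.0, 9.81         # crustal density [kg/m^3], gravity [m/s^2]
lam, B = 0.36, 3.0            # hydrostatic pore-fluid factor, thrust-faulting factor
A, n, Q = 1.1e-4, 4.0, 223e3  # wet quartzite flow law: [MPa^-n s^-1], -, [J/mol]
edot = 1e-15                  # reference strain rate [1/s]

z = np.linspace(1e3, 40e3, 400)   # depth [m]
T = 273.15 + 25e-3 * z            # linear geotherm, 25 K/km (illustrative)

brittle = B * rho * g * z * (1.0 - lam) / 1e6                # [MPa]
ductile = (edot / A) ** (1.0 / n) * np.exp(Q / (n * R * T))  # [MPa]
strength = np.minimum(brittle, ductile)                      # yield-strength envelope

bdt_km = z[np.argmax(ductile < brittle)] / 1e3
print(f"brittle-ductile transition at ~{bdt_km:.0f} km for this geotherm")
```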
This dissertation provides for the first time a data-driven 3D model that is consistent with geophysical data and geological observations, and that is able to causally link the thermo-rheological structure of the lithosphere to the observed variation of surface deformation styles in the retroarc foreland of northern Argentina.
The Central Andes region in South America is characterized by a complex and heterogeneous deformation system. Recorded seismic activity and mapped neotectonic structures indicate that most of the intraplate deformation is located along the margins of the orogen, in the transitions to the foreland and the forearc. Furthermore, the actively deforming provinces of the foreland exhibit distinct deformation styles that vary along strike, as well as characteristic distributions of seismicity with depth. The style of deformation transitions from thin-skinned in the north to thick-skinned in the south, and the thickness of the seismogenic layer increases to the south. Based on geological/geophysical observations and numerical modelling, the most commonly invoked causes for the observed heterogeneity are variations in sediment thickness and composition, the presence of inherited structures, and changes in the dip of the subducting Nazca plate. However, there are still no comprehensive investigations of the relationship between the lithospheric composition of the Central Andes, its rheological state and the observed deformation processes. The central aim of this dissertation is therefore to explore the link between the nature of the lithosphere in the region and the location of active deformation. The study of the lithospheric composition by means of independent-data integration establishes a strong basis for assessing the thermal and rheological state of the Central Andes and its adjacent lowlands, which in turn provides a new foundation for understanding the complex deformation of the region. Along these lines, the general workflow of the dissertation consists of the construction of a 3D data-derived and gravity-constrained density model of the Central Andean lithosphere, followed by the simulation of the steady-state conductive thermal field and the calculation of the strength distribution. Additionally, the dynamic response of the orogen-foreland system to intraplate compression is evaluated by means of 3D geodynamic modelling.
The results of the modelling approach suggest that the inherited heterogeneous composition of the lithosphere controls the present-day thermal and rheological state of the Central Andes, which in turn influences the location and depth of active deformation processes. Most of the seismic activity and neotectonic structures are spatially correlated with regions of high modelled strength gradients, in the transition from the felsic, hot and weak orogenic lithosphere to the more mafic, cooler and stronger lithosphere beneath the forearc and the foreland. Moreover, the results of the dynamic simulation show a strong localization of the second invariant of the deviatoric strain rate in the same region, suggesting that shortening is accommodated at the transition zones between weak and strong domains. The vertical distribution of seismic activity appears to be influenced by the rheological state of the lithosphere as well. The depth at which the frequency distribution of hypocenters starts to decrease in the different morphotectonic units correlates with the position of the modelled brittle-ductile transitions; accordingly, a fraction of the seismic activity is located within the ductile part of the crust. An exhaustive analysis shows that practically all the seismicity in the region is confined above the 600°C isotherm, coinciding with the upper temperature limit for brittle behavior of olivine. Therefore, the occurrence of earthquakes below the modelled brittle-ductile transition could be explained by the presence of strong residual mafic rocks from past tectonic events. Another potential cause of deep earthquakes is the existence of inherited shear zones in which brittle behavior is favored through a decrease in the friction coefficient. This hypothesis is particularly suitable for the broken foreland provinces of the Santa Barbara System and the Pampean Ranges, where geological studies indicate successive reactivation of structures through time. Particularly in the Santa Barbara System, the results indicate that both mafic rocks and a reduction in friction are required to account for the observed deep seismic events.
Intracontinental deformation usually is a result of tectonic forces associated with distant plate collisions. In general, the evolution of mountain ranges and basins in this environment is strongly controlled by the distribution and geometries of preexisting structures. Thus, predictive models usually fail in forecasting the deformation evolution in these kinds of settings. Detailed information on each range and basin-fill is vital to comprehend the evolution of intracontinental mountain belts and basins. In this dissertation, I have investigated the complex Cenozoic tectonic evolution of the western Tien Shan in Central Asia, which is one of the most active intracontinental ranges in the world. The work presented here combines a broad array of datasets, including thermo- and geochronology, paleoenvironmental interpretations, sediment provenance and subsurface interpretations in order to track changes in tectonic deformation. Most of the identified changes are connected and can be related to regional-scale processes that governed the evolution of the western Tien Shan.
The NW-SE trending Talas-Fergana fault (TFF) separates the western from the central Tien Shan and constitutes a world-class example of the influence of preexisting anisotropies on the subsequent structural development of a contractional orogen. While to the east most of the ranges and basins have a sub-parallel E-W trend, the triangular-shaped Fergana basin forms a substantial feature in the western Tien Shan morphology, with ranges on all three sides. In this thesis, I present 55 new thermochronologic ages (apatite fission-track and zircon (U-Th)/He) used to constrain the exhumation histories of several mountain ranges in the western Tien Shan. At the same time, I analyzed the Fergana basin fill, looking for progressive changes in sedimentary paleoenvironments, source areas and stratal geometrical configurations in the subsurface and in outcrops.
The data presented in this thesis suggest that low cooling rates (<1°C Myr⁻¹), calm depositional environments, and low depositional rates (<10 m Myr⁻¹) were widely distributed across the western Tien Shan, describing a quiescent tectonic period throughout the Paleogene. Increased cooling rates in the late Cenozoic occurred diachronously and with variable magnitudes in different ranges. This rapid cooling stage is interpreted to represent increased erosion caused by active deformation and constrains the onset of Cenozoic deformation in the western Tien Shan. Time-temperature histories derived from the northwestern Tien Shan samples show an increase in cooling rates by ~25 Ma. This event is correlated with a synchronous pulse
in the South Tien Shan. I suggest that strike-slip motion along the TFF commenced at the Oligo-Miocene boundary, facilitating counterclockwise rotation of the Fergana basin and enabling exhumation of the linked horsetail splays. Higher depositional rates (~150 m Myr⁻¹) in the Oligo-Miocene section (Massaget Fm.) of the Fergana basin suggest synchronous deformation in the surrounding ranges. The central Alai Range also experienced rapid cooling around this time, suggesting that the onset of intramontane basin fragmentation and isolation is coeval. These results point to deformation starting simultaneously in the late Oligocene – early Miocene in geographically distant mountain ranges. I suggest that these early uplifts are controlled by reactivated structures (like the TFF), which are probably the frictionally weakest and most suitably oriented for accommodating and transferring N-S horizontal shortening along the western Tien Shan.
Afterwards, in the late Miocene (~10 Ma), a period of renewed rapid cooling affected the Tien Shan, and most mountain ranges and inherited structures started to actively deform. This episode is widely distributed, and an increase in exhumation is interpreted for most of the sampled ranges. Moreover, the Pliocene section in the basin subsurface shows the highest depositional rates (>180 m Myr⁻¹) and higher-energy facies. The increase in deformation and exhumation further contributed to intramontane basin partitioning. Overall, the interpretation is that the Tien Shan and much of Central Asia experienced a widespread increase in the rate of horizontal crustal shortening. Previously, stress transfer along the rigid Tarim block or Pamir indentation has been proposed to account for Himalayan hinterland deformation. However, the extent of the episode requires a different and broader geodynamic driver.
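For orientation, the cooling rates quoted in this abstract relate to thermochronologic ages through a simple ratio of closure-temperature drop to age; the thesis itself derives them from inverse time-temperature modelling. A minimal Python sketch with nominal closure temperatures (approximations that in reality depend on cooling rate and grain size):

```python
def mean_cooling_rate(age_ma, t_closure_c, t_surface_c=10.0):
    """Time-averaged cooling rate [C/Myr] since closure of the system."""
    return (t_closure_c - t_surface_c) / age_ma

def exhumation_rate(cooling_c_per_myr, geotherm_c_per_km=25.0):
    """First-order exhumation rate [km/Myr] for a steady geotherm."""
    return cooling_c_per_myr / geotherm_c_per_km

# nominal closure temperatures: apatite fission track ~110 C,
# zircon (U-Th)/He ~180 C (rate- and grain-size-dependent approximations)
rate = mean_cooling_rate(age_ma=25.0, t_closure_c=110.0)
print(rate, exhumation_rate(rate))   # 4.0 C/Myr, 0.16 km/Myr
```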
New chain transfer agents based on dithiobenzoate and trithiocarbonate for free radical polymerization via Reversible Addition-Fragmentation chain Transfer (RAFT) were synthesized. The new compounds bear permanently hydrophilic sulfonate moieties which provide solubility in water independent of the pH. One of them bears a fluorophore, enabling unsymmetrical double end-group labelling as well as the preparation of fluorescently labeled polymers. Their stability against hydrolysis in water was studied and compared with that of the most frequently employed water-soluble RAFT agent, 4-cyano-4-(thiobenzoylsulfanyl)pentanoic acid, using UV-Vis and 1H-NMR spectroscopy. An improved resistance to hydrolysis was found for the new RAFT agents, providing good stabilities in the pH range between 1 and 8 and at temperatures of up to 70°C. Subsequently, a series of non-ionic, anionic and cationic water-soluble monomers were polymerized via RAFT in water. In these experiments, polymerizations were conducted at 48°C or 55°C, temperatures lower than those conventionally employed (>60°C) for RAFT in organic solvents, in order to minimize hydrolysis of the active chain ends (e.g. dithioester and trithiocarbonate) and thus to obtain good control over the polymerization. Under these conditions, controlled polymerization in aqueous solution was possible with styrenic, acrylic and methacrylic monomers: molar masses increase with conversion, polydispersities are low, and the degree of end-group functionalization is high. Polymerizations of methacrylamides, however, were slow at temperatures below 60°C and showed only moderate control. The RAFT process in water also proved to be a powerful method to synthesize di- and triblock copolymers, including the preparation of functional polymers with complex structure, such as amphiphilic and stimuli-sensitive block copolymers. These include polymers containing one or even two stimuli-sensitive hydrophilic blocks. The hydrophilic character of a single block or of several blocks was switched by changing the pH, the temperature or the salt content, to demonstrate the variability of the molecular designs suited for stimuli-sensitive polymeric amphiphiles and to exemplify the concept of multiple-sensitive systems. Furthermore, stable colloidal block ionomer complexes were prepared by mixing anionic surfactants in aqueous media with a double hydrophilic block copolymer synthesized via RAFT in water. The block copolymer is composed of a noncharged hydrophilic block based on poly(ethylene glycol) and a cationic block. The complexes prepared with perfluorodecanoate were found to be so stable that they even withstand dialysis; notably, they do not denature proteins. They are thus potentially useful for biomedical applications in vivo.
Adopting a minimalist framework, the dissertation provides an analysis of the syntactic structure of comparatives, with special attention paid to the derivation of the subclause. The proposed account explains how the comparative subclause is connected to the matrix clause, how the subclause is formed in the syntax, and what additional processes contribute to its final structure. In addition, it casts light upon these problems in cross-linguistic terms and provides a model that allows for synchronic and diachronic differences. This also enables one to give a more adequate explanation for the phenomena found in English comparatives, since the properties of English structures can then be linked to general settings of the language and hence need no longer be considered idiosyncratic features of the grammar of English. First, the dissertation provides a unified analysis of degree expressions, relating the structure of comparatives to that of other degrees. It is shown that gradable adjectives are located within a degree phrase (DegP), which in turn projects a quantifier phrase (QP), and that these two functional layers are always present, irrespective of whether there is a phonologically visible element in these layers. Second, the dissertation presents a novel analysis of Comparative Deletion by reducing it to an overtness constraint holding on operators: in this way, it is reduced to morphological differences, and cross-linguistic variation is not conditioned by postulating an arbitrary parameter. Cross-linguistic differences are ultimately dependent on whether a language has overt operators equipped with the relevant – [+compr] and [+rel] – features. Third, the dissertation provides an adequate explanation for the phenomenon of Attributive Comparative Deletion, as attested in English, by relating it to the regular mechanism of Comparative Deletion. I assume that Attributive Comparative Deletion is not a universal phenomenon, and that its presence in English is conditioned by independent, more general rules, while the absence of such restrictions leads to its absence in other languages. Fourth, the dissertation accounts for certain phenomena related to diachronic changes, examining how changes in the status of comparative operators led to changes in whether Comparative Deletion is attested in a given language: I argue that only operators without a lexical XP can be grammaticalised. The underlying mechanisms are essentially general economy principles, and hence the processes are not language-specific or exceptional. Fifth, the dissertation accounts for optional ellipsis processes that play a crucial role in the derivation of typical comparative subclauses. These processes are not directly related to the structure of degree expressions and hence to the elimination of the quantified expression from the subclause; nevertheless, they are shown to interact with the mechanisms underlying Comparative Deletion or the absence thereof.
The survey of the prevalence of chronic ankle instability in elite Taiwanese basketball athletes
(2021)
BACKGROUND: Ankle sprains are common in basketball. They can develop into Chronic Ankle Instability (CAI), causing decreased quality of life, reduced functional performance, early osteoarthritis, and an increased risk of other injuries. To develop a strategy for CAI prevention, localized epidemiological data and a valid and reliable assessment tool are essential. However, the epidemiological data on CAI from previous studies are inconclusive, and the prevalence of CAI in Taiwanese basketball athletes is unclear. In addition, a valid and reliable Taiwan-Chinese instrument to evaluate ankle instability has been missing.
PURPOSE: The aims were to provide an overview of the prevalence of CAI in sports populations by means of a systematic review, to develop a valid and reliable cross-culturally adapted Taiwan-Chinese version of the Cumberland Ankle Instability Tool (CAIT-TW), and to survey the prevalence of CAI in elite basketball athletes in Taiwan using the CAIT-TW.
METHODS: First, a systematic search was conducted; research articles applying CAI-related questionnaires to survey the prevalence of CAI were included in the review. Second, the English version of the CAIT was translated and cross-culturally adapted into the CAIT-TW. The construct validity, test-retest reliability, internal consistency, and cutoff score of the CAIT-TW were evaluated in an athletic population (N=135). Finally, cross-sectional data on the prevalence of CAI in 388 elite Taiwanese basketball athletes were collected. Demographics, the presence of CAI, and differences in prevalence between genders, competitive levels and playing positions were evaluated.
RESULTS: Across the studies included in the review, the prevalence of CAI was 25%, ranging between 7% and 53%. The prevalence of CAI among participants with a history of ankle sprains was 46%, ranging between 9% and 76%. The cross-culturally adapted CAIT-TW showed moderate to strong construct validity, excellent test-retest reliability, good internal consistency, and a cutoff score of 21.5 for the Taiwanese athletic population. Finally, 26% of Taiwanese basketball athletes had unilateral CAI, while 50% had bilateral CAI. Women athletes in the investigated cohort had a higher prevalence of CAI than men. There was no difference in prevalence between competitive levels or among playing positions.
CONCLUSION: The systematic review shows that the prevalence of CAI varies widely among the included studies. This could be due to differing exclusion criteria, age, sports disciplines, or other factors. Future studies require standardized criteria to investigate the epidemiology of CAI; such studies should be prospective, and factors affecting the prevalence of CAI should be investigated and described. The translated CAIT-TW is a valid and reliable tool to differentiate between stable and unstable ankles in athletes and may be applied in research and daily practice in Taiwan. In the Taiwanese basketball population, CAI is highly prevalent. This might relate to the research method, preexisting ankle instability, and training-related issues. Women showed a higher prevalence of CAI than men; gender should therefore be taken into consideration when implementing preventive measures.
The Government will create a motivated, merit-based, performance-driven, and professional civil service that is resistant to temptations of corruption and which provides efficient, effective and transparent public services that do not force customers to pay bribes.
— (GoIRA, 2006, p. 106)
We were in a black hole! We had an empty glass and had nothing from our side to fill it with! Thus, we accepted anything anybody offered; that is how our glass was filled; that is how we reformed our civil service.
— (Former Advisor to IARCSC, personal communication, August 2015)
How and under what conditions were the post-Taleban Civil Service Reforms of Afghanistan initiated? What were the main components of the reforms? What were their objectives, and to what extent were they achieved? Who were the leading domestic and foreign actors involved in the process? Finally, what specific factors influenced the success and failure of Afghanistan’s Civil Service Reforms since 2002? Guided by these fundamental questions, this research studies the wicked process of reforming the Afghan civil service in an environment where a variety of contextual, programmatic, and external factors affected the design and implementation of reforms that were entirely funded and technically assisted by the international community.
Focusing on the core components of the reforms—recruitment, remuneration, and appraisal of civil servants—the qualitative study provides a detailed picture of the pre-reform civil service and its major human resources developments in the past. Following discussions on the content and purposes of the main reform programs, it then analyzes the extent of changes in policies and practices by examining the outputs and effects of these reforms.
Moreover, the study identifies the specific factors that led the reforms toward a situation in which most of the intended objectives remain unachieved. In doing so, it explores and explains how the overwhelming influence of international actors with conflicting interests, large-scale corruption, political interference, networks of patronage, institutionalized nepotism, culturally accepted cronyism and widespread ethnic favoritism created a very complex environment and prevented the reforms from transforming Afghanistan’s patrimonial civil service into a professional civil service driven by performance and merit.
The South Chilean subduction zone between 41° and 43.5°S: seismicity, structure and state of stress
(2008)
While the northern and central parts of the South American subduction zone have been intensively studied, the southern part has attracted less attention, which may be due to its difficult accessibility and lower seismic activity. However, the southern part exhibits strong seismic and tsunamigenic potential, the prominent example being the Mw=9.5 Valdivia earthquake of May 22, 1960. In this study, data from an amphibious seismic array (project TIPTEQ) are presented. The network reached from the trench to the active magmatic arc, incorporating the island of Chiloé and the north-south trending Liquiñe-Ofqui fault zone (LOFZ). 364 local events were observed in an 11-month period from November 2004 until October 2005. The observed seismicity allows the current state of stress of the subducting plate and the magmatic arc, as well as the local seismic velocity structure, to be constrained for the first time. The downgoing Benioff zone is readily identifiable as an eastward-dipping plane with an inclination of ~30°. Main seismic activity occurred predominantly in a belt parallel to the coast of Chiloé Island in a depth range of 12-30 km, presumably related to the plate interface. The down-dip termination of abundant intermediate-depth seismicity at approximately 70 km depth seems to be related to the young age (and high temperature) of the oceanic plate. A high-quality subset of events was inverted for a 2-D velocity model. The vp model resolves the sedimentary basins and the downgoing slab. Increased velocities below the longitudinal valley and the eastern part of Chiloé Island suggest the existence of a mantle bulge. Apart from the events in the Benioff zone, shallow crustal events were observed mainly in different clusters along the magmatic arc. These crustal clusters of seismicity are related to the LOFZ, as well as to the volcanoes Chaitén, Michinmahuida and Corcovado. Seismic activity up to a magnitude of Mw 3.8 reveals the recent activity of the fault zone. Focal mechanisms for the events along the LOFZ were calculated using a moment tensor inversion of body-wave amplitude spectra; they mostly yield strike-slip mechanisms, indicating a SW-NE orientation of sigma_1 along the LOFZ. Focal mechanism stress inversion indicates a strike-slip regime along the arc and a thrust regime in the Benioff zone. The observed deformation - which is also revealed by teleseismic observations - supports the proposed northward movement of a forearc sliver acting as a detached continental micro-plate.
Among the bloom-forming and potentially harmful cyanobacteria, the genus Microcystis represents one of the most diverse taxa, at the genomic as well as the morphological and secondary-metabolite levels. Microcystis communities are composed of a variety of diversified strains. The focus of this study lies on potential interactions between Microcystis representatives and on the roles of secondary metabolites in these interaction processes.
The role of secondary metabolites as signaling molecules in the investigated interactions is demonstrated exemplarily for the prevalent hepatotoxin microcystin. The extracellular and intracellular roles of microcystin are tested in microarray-based transcriptomic approaches. While an extracellular effect of microcystin on Microcystis transcription is confirmed and connected to a specific gene cluster of another secondary metabolite, intracellularly occurring microcystin is related to several pathways of primary metabolism. A clear correlation between a microcystin knockout and the SigE-mediated regulation of carbon metabolism is found. Based on the acquired transcriptional data, a model is proposed that postulates a regulating effect of microcystin on transcriptional regulators such as the alternative sigma factor SigE, which in turn plays an essential role in sugar catabolism and redox-state regulation.
For the purpose of simulating community conditions as found in the field, Microcystis colonies are isolated from eutrophic lakes near Potsdam, Germany, and established as stably growing cultures under laboratory conditions. In co-habitation simulations, the recently isolated field strain FS2 is shown to specifically induce nearly immediate aggregation reactions in the axenic lab strain Microcystis aeruginosa PCC 7806. In transcriptional studies via microarrays, the expression program induced in PCC 7806 after aggregation induction is shown to involve the reorganization of cell envelope structures, a highly altered nutrient uptake balance and a reorientation of the aggregating cells toward heterotrophic carbon utilization, e.g. via glycolysis. These transcriptional changes are discussed as mechanisms of niche adaptation and acclimation that prevent competition for resources.
The overarching goal of this dissertation is to provide a better understanding of the role of wind and water in shaping Earth’s Cenozoic orogenic plateaus - prominent high-elevation, low-relief sectors in the interior of Cenozoic mountain belts. In particular, the feedbacks between surface uplift, the build-up of topography and ensuing changes in precipitation, erosion, and vegetation patterns are addressed in light of past and future climate change. Regionally, the study focuses on the world’s two largest plateaus, the Altiplano-Puna Plateau of the Andes and the Tibetan Plateau, both characterized by average elevations of >4 km. Both plateaus feature high, deeply incised flanks with pronounced gradients in rainfall, vegetation, hydrology, and surface processes. These characteristics are rooted in the plateaus’ role as efficient orographic barriers to rainfall and in their forcing of changes in atmospheric flow.
The thesis examines the complex topics of tectonic and climatic forcing of the surface-process regime on three different spatial and temporal scales: (1) bedrock wind-erosion rates are quantified in the arid Qaidam Basin of NW Tibet over millennial timescales using cosmogenic radionuclide dating; (2) the present-day stable isotope composition of rainfall is examined across the south-central Andes in three transects between 22° S and 28° S; these data are modeled and assessed with remotely sensed rainfall data of the Tropical Rainfall Measuring Mission and the Moderate Resolution Imaging Spectroradiometer; (3) finally, a 2.5-km-long Mio-Pliocene sedimentary record of the intermontane Angastaco Basin (25°45’ S, 66°00’ W) is presented in the context of hydrogen and carbon isotopic compositions of molecular lipid biomarkers, and oxygen and carbon isotopes obtained from pedogenic carbonates; these records are compared to other environmental proxies, including hydrated volcanic glass shards from volcanic ashes intercalated in the sedimentary strata.
There are few quantitative estimates of eolian bedrock-removal rates from arid, low-relief landscapes. Wind-erosion rates from the western Qaidam Basin based on cosmogenic 10Be measurements document erosion rates between 0.05 and 0.4 mm/yr. This finding indicates that in arid environments with strong winds, hyperaridity, exposure of friable strata, and ongoing rock deformation and uplift, wind erosion can outpace fluvial erosion. The large eroded sediment volumes of the Qaidam Basin and coeval dust deposition on the Chinese Loess Plateau exemplify the importance of dust production within arid plateau environments for marine and terrestrial depositional processes, but also for health issues and the fertilization of soils.
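The standard conversion behind such numbers is the steady-state erosion equation of Lal (1991), N = P / (lambda + rho*eps/Lambda), solved for the erosion rate eps. A minimal Python sketch with illustrative production-rate and concentration values (not the thesis' measured data):

```python
import math

LAMBDA_BE10 = math.log(2) / 1.387e6  # 10Be decay constant [1/yr]
ATT_LENGTH = 160.0                   # spallation attenuation length [g/cm^2]
RHO = 2.65                           # rock density [g/cm^3]

def erosion_rate_mm_per_yr(P, N):
    """Steady-state erosion rate from a surface 10Be concentration
    (Lal 1991). P: local production rate [atoms/g/yr],
    N: measured concentration [atoms/g]."""
    eps_cm_per_yr = (ATT_LENGTH / RHO) * (P / N - LAMBDA_BE10)
    return eps_cm_per_yr * 10.0

# illustrative values for a high-elevation, slowly eroding surface
print(erosion_rate_mm_per_yr(P=30.0, N=2.0e5))   # ~0.09 mm/yr
```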
In the south-central Andes, the analysis of 234 stream-water samples for oxygen and hydrogen isotopes reveals that areas experiencing deep convective storms do not show the commonly observed patterns of isotopic fractionation and the expected co-varying relationships of oxygen and hydrogen with increasing elevation. These convective storms form over semi-arid intermontane basins in the transition between the broken foreland of the Sierras Pampeanas, the Eastern Cordillera, and the Puna Plateau in the interior of the orogen. Here, convective rainfall dominates the precipitation budget and no systematic stable isotope-elevation relationship exists. In regions to the north, in the transition between the broken foreland and the Subandean foreland fold-and-thrust belt, the impact of convection is subdued, with lower degrees of storminess and a stronger isotope-elevation relationship. This finding on present-day fractionation trends of meteoric water is of great importance for paleoenvironmental studies that attempt to use stable isotope relationships to reconstruct paleoelevations.
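The "commonly observed pattern" referred to here is Rayleigh distillation: as an air mass rises and rains out, the residual vapor becomes progressively depleted, delta = (delta_0 + 1000) * f^(alpha-1) - 1000, so rainfall normally becomes isotopically lighter with elevation. A schematic Python sketch with an illustrative equilibrium fractionation factor:

```python
def rayleigh_vapor(delta0, f, alpha=1.0094):
    """delta-18O of the residual vapor after a fraction (1 - f) of the
    moisture has rained out (Rayleigh distillation); alpha is the
    equilibrium liquid-vapor fractionation factor (~1.0094 near 20 C)."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

def rayleigh_rain(delta0, f, alpha=1.0094):
    """delta-18O of the condensate forming when a fraction f of the
    initial vapor remains."""
    return alpha * (rayleigh_vapor(delta0, f, alpha) + 1000.0) - 1000.0

for f in (1.0, 0.8, 0.6, 0.4):
    print(f, round(rayleigh_rain(-12.0, f), 1))
# rainfall becomes isotopically lighter as the air mass dries out;
# deep convection disrupts exactly this monotonic elevation trend
```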
The third part of the thesis focuses on the paleohydrological characteristics of the Mio-Pliocene (10-2 Ma) sedimentary record of the Angastaco Basin, which reveals far-reaching environmental changes during Andean uplift and orographic barrier formation. A precipitation-evapotranspiration record identifies the onset of a precipitation regime related to the South American Low-Level Jet at this latitude after 9 Ma. Humid foreland conditions existed until 7 Ma, followed by orographic barrier uplift to the east of the present-day Angastaco Basin. This was superseded by rapid (~0.5 Myr) aridification in an intermontane basin, highlighting the effects of eastward-directed deformation. A transition in vegetation cover from a humid C3 forest ecosystem to semi-arid C4-dominated vegetation was coeval with continued basin uplift to modern elevations.
In the work presented here, we discuss a series of results that are all, in one way or another, connected to the phenomenon of trapping in black hole spacetimes.
First, we present a comprehensive review of the Kerr-Newman-Taub-NUT-de-Sitter family of black hole spacetimes and their most important properties. From there we proceed to a detailed analysis of the behaviour of null geodesics in the exterior region of a sub-extremal Kerr spacetime. We show that most well-known fundamental properties of null geodesics can be represented in a single plot. In particular, one can see immediately that the ergoregion and trapping are separated in phase space.
We then consider the sets of future/past trapped null geodesics in the exterior region of a sub-extremal Kerr-Newman-Taub-NUT spacetime. We show that from the point of view of any timelike observer outside of such a black hole, trapping can be understood as two smooth sets of spacelike directions on the celestial sphere of the observer. Therefore the topological structure of the trapped set on the celestial sphere of any observer is identical to that in Schwarzschild.
We discuss how this is relevant to the black hole stability problem.
In a further development of these observations, we introduce the notion of what it means for the shadows of two observers to be degenerate. We show that, away from the axis of symmetry, no continuous degeneracy exists between the shadows of observers at any point in the exterior region of any Kerr-Newman black hole spacetime of unit mass. Therefore, except possibly for discrete changes, an observer can, by measuring the black hole's shadow, determine the angular momentum and the charge of the black hole under observation, as well as the observer's radial position and angle of elevation above the equatorial plane. Furthermore, the observer's velocity relative to a standard observer can also be measured. On the other hand, the black hole shadow does not allow for a full parameter resolution in the case of a Kerr-Newman-Taub-NUT black hole, as a continuous degeneracy relating specific angular momentum, electric charge, NUT charge and elevation angle exists in this case.
We then use the celestial sphere to show that trapping is a generic feature of any black hole spacetime.
In the last chapter we prove a generalization of the mode stability result of Whiting (1989) for the Teukolsky equation to the case of real frequencies. The main result of the last chapter states that a separated solution of the Teukolsky equation governing massless test fields on the Kerr spacetime, which is purely outgoing at infinity and purely ingoing at the horizon, must vanish. This has the consequence that, for real frequencies, there are linearly independent fundamental solutions of the radial Teukolsky equation which are purely ingoing at the horizon and purely outgoing at infinity, respectively. This fact yields a representation formula for solutions of the inhomogeneous Teukolsky equation and was recently used by Shlapentokh-Rothman (2015) for the scalar wave equation.
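For reference, the radial Teukolsky equation at issue here reads, in the standard conventions of Teukolsky (1973) for a spin-weight-s field with frequency omega and azimuthal number m (lambda denotes the angular separation constant):

```latex
\Delta^{-s}\frac{d}{dr}\!\left(\Delta^{s+1}\frac{dR}{dr}\right)
  +\left(\frac{K^{2}-2is(r-M)K}{\Delta}+4is\omega r-\lambda\right)R=0,
\qquad K=(r^{2}+a^{2})\omega-am,\qquad \Delta=r^{2}-2Mr+a^{2}
```

Mode stability for real frequencies is then the statement that this ODE admits no nontrivial solution that is simultaneously purely ingoing at the horizon and purely outgoing at infinity.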
The Southern Central Andes (33°-36°S) are an excellent natural laboratory to study orogenic deformation processes, where boundary conditions, such as the geometry of the subducted plate, impose an important control on the evolution of the orogen. On the other hand, the South American plate presents a series of heterogeneities that additionally impart control on the mode of deformation. This thesis aims to test the control of this last factor over the construction of the Cenozoic Andean orogenic system.
From the integration of surface and subsurface information in the southern area (34-36°S), the evolution of Andean deformation over the steeply dipping subduction segment was studied. A structural model was developed to evaluate the stress state from the Miocene to the present day and its influence on the migration of magmatic fluids and hydrocarbons. Based on these data, together with data generated by other researchers in the northern zone of the study area (33-34°S), geodynamic numerical modelling was performed to test the hypothesis that upper-plate heterogeneities play a decisive role in the Andean evolution. Geodynamic codes (LAPEX-2D and ASPECT), which simulate the behavior of materials with elasto-visco-plastic rheologies under deformation, were used. The model results suggest that upper-plate contractional deformation is significantly controlled by the strength of the lithosphere, which is defined by the composition of the upper and lower crust and by the proportion of lithospheric mantle, which in turn is determined by previous tectonic events. In addition, the previous regional tectono-magmatic events also defined the composition of the crust and its geometry, another factor that controls the localization of deformation. Accordingly, with a more felsic lower-crustal composition, deformation follows a pure-shear mode, while more mafic compositions induce a simple-shear deformation mode. On the other hand, it was observed that the initial lithospheric thickness may fundamentally control the location of deformation, with zones of thin lithosphere being prone to concentrate it. Finally, it was found that an asymmetric lithosphere-asthenosphere boundary resulting from corner flow in the mantle wedge of the eastward-directed subduction zone tends to generate east-vergent detachments.
Most reading theories assume that readers aim at word centers for optimal information processing. During reading, saccade targeting turns out to be imprecise: Saccades’ initial landing positions often miss the word centers and have high variance, with an additional systematic error that is modulated by the distance from the launch site to the center of the target word. The performance of the oculomotor system, as reflected in the statistics of within-word landing positions, turns out to be very robust and mostly affected by the spatial information during reading. Hence, it is assumed that the saccade generation is highly automated.
The main goal of this thesis is to explore the performance of the oculomotor system under various reading conditions where orthographic information and the reading direction were manipulated. Additionally, the challenges in understanding the eye movement data to represent the oculomotor process during reading are addressed.
Two experimental studies and one simulation study were conducted for this thesis, which resulted in the following main findings:
(i) Reading texts with orthographic manipulations leads to specific changes in the eye movement patterns, both in temporal and spatial measures. The findings indicate that the oculomotor control of eye movements during reading is dependent on reading conditions (Chapter 2 & 3).
(ii) Saccades’ accuracy and precision can be simultaneously modulated under reversed reading conditions, supporting the assumption that the random and systematic oculomotor errors are not independent. By assuming that readers increase the precision of the sensory observation while maintaining the learned prior knowledge when the reading direction is reversed, a process-oriented Bayesian model of saccade targeting can account for the simultaneous reduction of oculomotor errors (Chapter 2; see the sketch after this list).
(iii) Plausible parameter values serving as proxies for the intended within-word landing positions can be estimated by using the maximum a posteriori estimator from Bayesian inference. Using the mean value of all observations as proxies is insufficient for studies focusing on the launch-site effect because the method exhibits the strongest bias when estimating the size of the effect. Mislocated fixations remain a challenge for the currently known estimation methods, especially when the systematic oculomotor error is large (Chapter 4).
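The Bayesian logic invoked in (ii) and (iii) can be made concrete: with a Gaussian prior over within-word landing positions (the learned, launch-site-dependent expectation) and a Gaussian sensory likelihood, the posterior is Gaussian and the MAP estimate is a precision-weighted mean. A minimal Python sketch of this textbook computation, not the model code from the thesis:

```python
def map_landing_position(mu_prior, sd_prior, x_obs, sd_obs):
    """MAP estimate of a within-word landing position for a Gaussian
    prior (learned saccade-target expectation) combined with a Gaussian
    sensory observation; for Gaussians the MAP is the precision-weighted
    mean of prior and observation."""
    w_prior, w_obs = 1.0 / sd_prior**2, 1.0 / sd_obs**2
    mu_post = (w_prior * mu_prior + w_obs * x_obs) / (w_prior + w_obs)
    sd_post = (w_prior + w_obs) ** -0.5
    return mu_post, sd_post

# the prior pulls the saccade toward the habitual landing site; raising
# sensory precision (smaller sd_obs) shifts the estimate toward the
# observed word center, reducing random and systematic error together
print(map_landing_position(mu_prior=2.0, sd_prior=1.0, x_obs=3.5, sd_obs=0.5))
```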
The results reported in this thesis highlight the role of the oculomotor system, together with underlying cognitive processes, in eye movements during reading. The modulation of oculomotor control can be captured through a precise analysis of landing positions.
During the last two decades, instability training devices have become a popular means in athletic training and rehabilitation for mimicking unstable surfaces during movements like vertical jumps. Of note, under unstable conditions, trunk muscles seem to have a stabilizing function during exercise, facilitating the transfer of torques and angular momentum between the lower and upper extremities. The present thesis addresses the acute effects of surface instability on performance during jump-landing tasks. Additionally, the long-term (i.e., training) effects of surface instability were examined with a focus on the role of the trunk in athletic performance and physical fitness.
Healthy adolescent and young adult subjects participated in three cross-sectional studies and one longitudinal study. Performance in jump-landing tasks on stable and unstable surfaces was assessed by means of a ground reaction force plate. Trunk muscle strength (TMS) was determined using an isokinetic device or the Bourban TMS test. Physical fitness was quantified by the standing long jump, sprint, stand-and-reach, jumping sideways, Emery balance, and Y balance tests on stable surfaces. In addition, the activity of selected trunk and leg muscles and lower-limb kinematics were recorded during the jump-landing tasks.
When performing jump-landing tasks on unstable compared to stable surfaces, jump performance and leg muscle activity were significantly lower. Moreover, significantly smaller knee flexion angles and higher knee valgus angles were observed when jumping and landing under unstable compared to stable conditions, and in women compared to men. Significant but small associations were found between the behavioral and neuromuscular data, irrespective of surface condition. Core strength training on stable as well as on unstable surfaces significantly improved TMS, balance and coordination.
The findings of the present thesis imply that stable rather than unstable surfaces provide sufficient training stimuli during jump exercises (i.e., plyometrics). Additionally, knee motion strategy during plyometrics appears to be modified by surface instability and sex. Of note, irrespective of surface condition, trunk muscles only play a minor role for leg muscle performance/activity during jump exercises. Moreover, when implemented in strength training programs (i.e., core strength training), there is no advantage in using instability training devices compared to stable surfaces in terms of enhancement of athletic performance.
Pronoun resolution normally takes place without conscious effort or awareness, yet the processes behind it are far from straightforward. A large number of cues and constraints have previously been recognised as playing a role in the identification and integration of potential antecedents, yet there is considerable debate over how these operate within the resolution process. The aim of this thesis is to investigate how the parser handles multiple antecedents in order to understand more about how certain information sources play a role during pronoun resolution. I consider how both structural information and information provided by the prior discourse is used during online processing. This is investigated through several eye tracking during reading experiments that are complemented by a number of offline questionnaire experiments. I begin by considering how condition B of the Binding Theory (Chomsky 1981; 1986) has been captured in pronoun processing models; some researchers have claimed that processing is faithful to syntactic constraints from the beginning of the search (e.g. Nicol and Swinney 1989), while others have claimed that potential antecedents which are ruled out on structural grounds nonetheless affect processing, because the parser must also pay attention to a potential antecedent’s features (e.g. Badecker and Straub 2002). My experimental findings demonstrate that the parser is sensitive to the subtle changes in syntactic configuration which either allow or disallow pronoun reference to a local antecedent, and indicate that the parser is normally faithful to condition B at all stages of processing. Secondly, I test the Primitives of Binding hypothesis proposed by Koornneef (2008) based on work by Reuland (2001), which is a modular approach to pronoun resolution in which variable binding (a semantic relationship between pronoun and antecedent) takes place before coreference. I demonstrate that a variable-binding (VB) antecedent is not systematically considered earlier than a coreference (CR) antecedent online. I then go on to explore whether these findings could be attributed to the linear order of the antecedents, and uncover a robust recency preference both online and offline. I consider what role the factor of recency plays in pronoun resolution and how it can be reconciled with the first-mention advantage (Gernsbacher and Hargreaves 1988; Arnold 2001; Arnold et al., 2007). Finally, I investigate how aspects of the prior discourse affect pronoun resolution. Prior discourse status clearly had an effect on pronoun resolution, but an antecedent’s appearance in the previous context was not always facilitative; I propose that this is due to the number of topic switches that a reader must make, leading to a lack of discourse coherence which has a detrimental effect on pronoun resolution. The sensitivity of the parser to structural cues does not entail that cue types can be easily separated into distinct sequential stages, and I therefore propose that the parser is structurally sensitive but not modular. Aspects of pronoun resolution can be captured within a parallel constraints model of pronoun resolution, however, such a model should be sensitive to the activation of potential antecedents based on discourse factors, and structural cues should be strongly weighted.
Flood generation at the scale of large river basins is triggered by the interaction of the hydrological pre-conditions and the meteorological event conditions at different spatial and temporal scales. This interaction controls diverse flood-generating processes and results in floods varying in magnitude and extent, duration as well as socio-economic consequences. For a process-based understanding of the underlying cause-effect relationships, systematic approaches are required. These approaches have to cover the complete causal flood chain, including the flood-triggering meteorological event in combination with the hydrological (pre-)conditions in the catchment, runoff generation, flood routing, possible floodplain inundation and finally flood losses.
In this thesis, a comprehensive probabilistic process-based understanding of the causes and effects of floods is advanced. The spatial and temporal dynamics of flood events as well as the geophysical processes involved in the causal flood chain are revealed and the systematic interconnections within the flood chain are deciphered by means of the classification of their associated causes and effects. This is achieved by investigating the role of the hydrological pre-conditions and the meteorological event conditions with respect to flood occurrence, flood processes and flood characteristics as well as their interconnections at the river basin scale.
Broadening the knowledge about flood triggers, which up to now has been limited to linking large-scale meteorological conditions to flood occurrence, the influence of large-scale pre-event hydrological conditions on flood initiation is investigated. Using the Elbe River basin as an example, a classification of soil moisture, a key variable of pre-event conditions, is developed and a probabilistic link between patterns of soil moisture and flood occurrence is established. The soil moisture classification is applied to continuously simulated soil moisture data generated with the semi-distributed conceptual rainfall-runoff model SWIM. By successively applying a principal component analysis and a cluster analysis, days with similar soil moisture patterns are identified for the period November 1951 to October 2003.
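As an illustration of this two-step classification, the following minimal Python sketch applies a PCA followed by k-means clustering to a hypothetical day-by-cell soil moisture matrix. The data, the number of components and the number of clusters are all assumptions for illustration, not the choices made in the thesis.

```python
# Illustrative sketch (not the thesis code): classify days by their soil
# moisture pattern via PCA followed by k-means clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical input: one row per day, one column per catchment cell
# (roughly 19,000 days for Nov 1951 - Oct 2003).
soil_moisture = rng.random((19000, 500))

# Reduce the spatial field to its leading modes of variability.
pca = PCA(n_components=10)
scores = pca.fit_transform(soil_moisture)

# Group days with similar patterns; the cluster count is a free choice here.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
pattern_id = kmeans.fit_predict(scores)

print(np.bincount(pattern_id))  # number of days assigned to each pattern
```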
The investigation of flood triggers is complemented by including meteorological conditions described by a common weather pattern classification that represents the main modes of atmospheric state variability. The newly developed soil moisture classification thereby provides the basis to study the combined impact of hydrological pre-conditions and large-scale meteorological event conditions on flood occurrence at the river basin scale.
A process-based understanding of flood generation and its associated probabilities is attained by classifying observed flood events into process-based flood types such as snowmelt floods or long-rain floods. Subsequently, the flood types are linked to the soil moisture and weather patterns. Further understanding of the processes is gained by modeling the complete causal flood chain, incorporating a rainfall-runoff model, a 1D/2D hydrodynamic model and a flood loss model. To increase the flood sample size, a reshuffling approach based on weather patterns and the month of their occurrence is developed to generate the synthetic fields of meteorological conditions that drive the model chain. From the large number of simulated flood events, the impact of hydro-meteorological conditions on various flood characteristics is detected through the analysis of conditional cumulative distribution functions and regression trees.
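A minimal sketch of the regression tree step follows, assuming a simple tabular layout of simulated events; the feature names and the toy response are invented and do not reproduce the thesis's actual predictors or model chain output.

```python
# Hedged sketch: relate a simulated flood characteristic to the
# hydro-meteorological (pre-)event conditions with a regression tree.
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(1)
n_events = 5970
X = np.column_stack([
    rng.random(n_events),           # mean pre-event soil saturation [-]
    rng.gamma(2.0, 20.0, n_events), # event precipitation depth [mm]
])
# Toy response: a flood peak dominated by the meteorological event.
y = 0.8 * X[:, 1] + 10.0 * X[:, 0] + rng.normal(0.0, 2.0, n_events)

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["soil_saturation", "event_precip"]))
```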
The results show the existence of catchment-scale soil moisture patterns, which comprise large-scale seasonal wetting and drying components as well as smaller-scale variations related to spatially heterogeneous catchment processes. Soil moisture patterns frequently occurring before the onset of floods are identified. In winter, floods are initiated by catchment-wide high soil moisture, whereas in summer the flood-initiating soil moisture patterns are diverse and the soil moisture conditions are less stable in time. The combined study of both soil moisture and weather patterns shows that the flood-favoring hydro-meteorological patterns as well as their interactions vary seasonally. In the analysis period, 18 % of the weather patterns result in a flood only in the case of preceding soil saturation. The classification of 82 past events into flood types reveals seasonally varying flood processes that can be linked to hydro-meteorological patterns. For instance, the highest flood potential for long-rain floods is associated with a weather pattern that is often detected in the presence of so-called ‘Vb’ cyclones. Rain-on-snow and snowmelt floods are associated with westerly and north-westerly wind directions. The flood characteristics vary among the flood types and can be reproduced by the applied model chain. In total, 5970 events are simulated. They reproduce the observed event characteristics between September 1957 and August 2002 and provide information on flood losses. A regression tree analysis relates the flood processes of the simulated events to the hydro-meteorological (pre-)event conditions and highlights that flood magnitude is primarily controlled by the meteorological event, whereas flood extent is primarily controlled by the soil moisture conditions.
Describing flood occurrence, processes and characteristics as a function of hydro-meteorological patterns, this thesis is part of a paradigm shift towards a process-based understanding of floods. The results highlight that soil moisture patterns as well as weather patterns are not only beneficial to a probabilistic conception of flood initiation but also provide information on the involved flood processes and the resulting flood characteristics.
Background: The concept of self-compassion (SC), a special way of being compassionate with oneself while dealing with stressful life circumstances, has attracted increasing attention in research over the past two decades. Research has already shown that SC has beneficial effects on affective well-being and other mental health outcomes. However, little is known about the ways in which SC might facilitate our affective well-being in stressful situations. Hence, a central concern of this dissertation was the question of which underlying processes might influence the link between SC and affective well-being. Two established components of stress processing that might also play an important role in this context are the amount of experienced stress and the way of coping with a stressor. Thus, using a multi-method approach, this dissertation aimed at determining the extent to which SC might help alleviate experienced stress and promote more salutary coping while dealing with stressful circumstances; these processes might ultimately help improve one’s affective well-being. Derived from that, it was hypothesized that more SC is linked to less perceived stress and intensified use of salutary coping responses. Additionally, it was suggested that perceived stress and coping mediate the relation between SC and affective well-being.
Method: The research questions were addressed in three single studies and one meta-study. To test my assumptions about the relations between SC and coping in particular, a systematic literature search was conducted, resulting in k = 136 samples with an overall sample size of N = 38,913. To integrate the z-transformed Pearson correlation coefficients, random-effects models were calculated. All hypotheses were tested with a three-wave cross-lagged design in two short-term longitudinal online studies assessing SC, perceived stress and coping responses in all waves. The first study explored the assumptions in a student sample (N = 684) with a mean age of 27.91 years over a six-week period, whereas the second implemented the measurements in the GESIS Panel (N = 2,934) with a mean age of 52.76 years, analyzing the hypotheses in a population-based sample across eight weeks. Finally, an ambulatory assessment study was designed to extend the findings of the longitudinal studies to the intraindividual level. Thus, a sample of 213 participants completed questionnaires on momentary SC, perceived stress, engagement and disengagement coping, and affective well-being on their smartphones three times per day over seven consecutive days. The data were analyzed using 1-1-1 multilevel mediation analyses.
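To illustrate the meta-analytic integration step, the following sketch pools Fisher z-transformed Pearson correlations under a DerSimonian-Laird random-effects model; the correlations and sample sizes are made up, and the dissertation's actual estimator and software may differ.

```python
# Hedged sketch: random-effects pooling of correlations via Fisher z.
import numpy as np

r = np.array([0.35, 0.42, 0.28, 0.50])  # example sample correlations
n = np.array([120, 250, 90, 310])       # example sample sizes

z = np.arctanh(r)    # Fisher z-transform
v = 1.0 / (n - 3)    # within-study variances of z
w = 1.0 / v

# DerSimonian-Laird estimate of the between-study variance tau^2.
z_fixed = np.sum(w * z) / np.sum(w)
q = np.sum(w * (z - z_fixed) ** 2)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(z) - 1)) / c)

# Random-effects weights and pooled effect, back-transformed to r.
w_star = 1.0 / (v + tau2)
z_pooled = np.sum(w_star * z) / np.sum(w_star)
print("pooled r:", np.tanh(z_pooled))
```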
Results: The meta-analysis indicated that higher SC is significantly associated with more use of engagement coping and less use of disengagement coping. Across all three single studies, the cross-lagged paths from the longitudinal data as well as the multilevel modeling paths from the ambulatory assessment data indicated notable relations between SC and the relevant stress processing variables. As expected, results showed a significant negative relation of SC with perceived stress and disengagement coping, as well as a positive connection with engagement coping responses at both the dispositional and the intraindividual level. However, regarding the mediation hypothesis, the most promising pathway in the link between SC and affective well-being turned out to be perceived stress in all three studies, while the effects of the mediational pathways through coping responses were less robust.
Conclusion: Thus, a more self-compassionate attitude, as well as higher momentary SC when needed in specific situations, can help people engage in effective stress processing. Regarding the underlying mechanisms in the link between SC and affective well-being, stress perception in particular emerged as the most promising candidate for enhancing affective well-being at both the dispositional and the intraindividual level. Future research should explore the pathways between SC and affective well-being in specific contexts and samples, and also take additional influential factors into account.
Flooding is a widespread problem in many parts of the world, including Europe. It occurs mainly due to extreme weather conditions (e.g. heavy rainfall and snowmelt), and the consequences of flood events can be devastating. Flood risk is commonly defined as a combination of the probability of an event and its potential adverse impacts. It therefore covers three major dynamic components: hazard (the physical characteristics of a flood event), exposure (the people and their physical environment exposed to flooding), and vulnerability (the susceptibility of the elements at risk). Floods are natural phenomena and cannot be fully prevented; their risk, however, can be managed and mitigated. Sound flood risk management and mitigation require a proper risk assessment, which in turn requires a clear understanding of the flood risk dynamics. For instance, human activity may contribute to an increase in flood risk. Anthropogenic climate change causes higher rainfall intensities and sea level rise, and therefore an increase in the scale and frequency of flood events. On the other hand, inappropriate management of risk and structural protection measures may not be very effective for risk reduction. Additionally, risk increases due to the growth in the number of assets and people within flood-prone areas. To address these issues, the first objective of this thesis is to perform a sensitivity analysis to understand the impacts of changes in each flood risk component on overall risk, and further their mutual interactions. A multitude of changes along the risk chain is simulated with a regional flood model (RFM) in which all processes from the atmosphere through the catchment and river system to the damage mechanisms are taken into consideration. The impacts of changes in the risk components are explored through plausible change scenarios for the mesoscale Mulde catchment (a sub-basin of the Elbe) in Germany.
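To make the sensitivity logic concrete, here is a deliberately simplified sketch in which risk is treated as the product of its components and each change scenario perturbs one component. The scenarios and factors are invented and stand in for the much richer RFM model chain.

```python
# Toy sketch of the sensitivity idea (not the RFM): perturb one risk
# component per scenario and compare the resulting risk to a baseline.
baseline = {"hazard": 1.0, "exposure": 1.0, "vulnerability": 1.0}
scenarios = {
    "higher rainfall intensity":  {"hazard": 1.2},
    "improved dike system":       {"hazard": 0.7},
    "asset growth in floodplain": {"exposure": 1.3},
    "reduced vulnerability":      {"vulnerability": 0.8},
}

def risk(components):
    # Risk as the product of its components (a common conceptual model).
    out = 1.0
    for value in components.values():
        out *= value
    return out

base = risk(baseline)
for name, change in scenarios.items():
    scenario = {**baseline, **change}
    print(f"{name}: {100 * (risk(scenario) / base - 1):+.0f}% risk change")
```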
A proper risk assessment requires a reasonable representation of real-world flood events. Traditionally, flood risk is assessed by assuming homogeneous return periods of flood peaks throughout the considered catchment. In reality, however, flood events are spatially heterogeneous, and the traditional assumption therefore misestimates flood risk, especially for large regions. In this thesis, two studies investigate the importance of spatial dependence in large-scale flood risk assessment at different spatial scales. In the first, the “real” spatial dependence of the return periods of flood damages is represented by a continuous risk modelling approach in which spatially coherent patterns of hydrological and meteorological controls (i.e. soil moisture and weather patterns) are included. The risk estimates under this modelled dependence assumption are then compared with those under two other assumptions on the spatial dependence of the return periods of flood damages: complete dependence (homogeneous return periods) and independence (randomly generated heterogeneous return periods), for the Elbe catchment in Germany. The second study represents the “real” spatial dependence by multivariate dependence models. Similar to the first study, the three assumptions on the spatial dependence of the return periods of flood damages are compared, but at national (United Kingdom and Germany) and continental (Europe) scales. Furthermore, the impacts of the different models, of tail dependence, and of the structural flood protection level on flood risk under the different spatial dependence assumptions are investigated.
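The following toy sketch contrasts the three dependence assumptions for aggregate damage. The lognormal marginal, the Gaussian copula and the correlation level are assumptions chosen for illustration, not the dependence models used in the thesis.

```python
# Hedged sketch: aggregate flood damage across regions under three
# spatial dependence assumptions for the regional return periods.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_regions, n_years = 10, 100_000
marginal = stats.lognorm(s=1.0, scale=50.0)  # toy per-region damage marginal

def aggregate(u):
    """Sum regional damages given uniform ranks u (n_years x n_regions)."""
    return marginal.ppf(u).sum(axis=1)

# 1) Complete dependence: identical return period in every region.
u_common = rng.random((n_years, 1)) * np.ones((1, n_regions))
# 2) Independence: return periods drawn independently per region.
u_indep = rng.random((n_years, n_regions))
# 3) "Modelled" dependence: Gaussian copula with moderate correlation.
rho = 0.5
cov = rho * np.ones((n_regions, n_regions)) + (1 - rho) * np.eye(n_regions)
z = rng.multivariate_normal(np.zeros(n_regions), cov, size=n_years)
u_model = stats.norm.cdf(z)

for name, u in [("complete", u_common), ("modelled", u_model),
                ("independent", u_indep)]:
    damage = np.sort(aggregate(u))
    q200 = damage[int(n_years * (1 - 1 / 200))]  # 200-year aggregate damage
    print(f"{name}: {q200:.1f}")
```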
The outcomes of the sensitivity analysis framework suggest that flood risk can vary dramatically under plausible change scenarios. Risk components that have received little attention so far (e.g. changes in dike systems and in vulnerability) may mask the influence of climate change, which is the component most often investigated.
The results of the spatial dependence research in this thesis further show that the damage under the false assumption of complete dependence is 100 % larger than the damage under the modelled dependence assumption for events with return periods greater than approximately 200 years in the Elbe catchment. The complete dependence assumption overestimates the 200-year flood damage, a benchmark indicator for the insurance industry, by 139 %, 188 % and 246 % for the UK, Germany and Europe, respectively. The misestimation of risk under the different assumptions can vary from upstream to downstream of a catchment. Moreover, tail dependence in the model and the flood protection level in the catchments can affect both the risk estimates and the differences between the spatial dependence assumptions.
In conclusion, a comprehensive consideration of all risk components that can affect flood risk, together with the consideration of the spatial dependence of flood return periods, is strongly recommended for a better understanding of flood risk and consequently for sound flood risk management and mitigation.
Protein-metal coordination complexes are well known as active centers in enzymatic catalysis and as contributors to signal transduction, gas transport, and hormone function. Additionally, they are now known to contribute as load-bearing cross-links to the mechanical properties of several biological materials, including the jaws of Nereis worms and the byssal threads of marine mussels. The primary aim of this thesis work is to better understand the role of protein-metal cross-links in the mechanical properties of biological materials, using the mussel byssus as a model system. Specifically, the focus is on histidine-metal cross-links as sacrificial bonds in the fibrous core of the byssal thread (Chapter 4) and on L-3,4-dihydroxyphenylalanine (DOPA)-metal bonds in the protective thread cuticle (Chapter 5).
Byssal threads are protein fibers, which mussels use to attach to various substrates at the seashore. These relatively stiff fibers have the ability to extend up to about 100 % strain, dissipating large amounts of mechanical energy from crashing waves, for example. Remarkably, following damage from cyclic loading, initial mechanical properties are subsequently recovered by a material-intrinsic self-healing capability. Histidine residues coordinated to transition metal ions in the proteins comprising the fibrous thread core have been suggested as reversible sacrificial bonds that contribute to self-healing; however, this remains to be substantiated in situ. In the first part of this thesis, the role of metal coordination bonds in the thread core was investigated using several spectroscopic methods. In particular, X-ray absorption spectroscopy (XAS) was applied to probe the coordination environment of zinc in Mytilus californianus threads at various stages during stretching and subsequent healing. Analysis of the extended X-ray absorption fine structure (EXAFS) suggests that tensile deformation of threads is correlated with the rupture of Zn-coordination bonds and that self-healing is connected with the reorganization of Zn-coordination bond topologies rather than the mere reformation of Zn-coordination bonds. These findings have interesting implications for the design of self-healing metallopolymers.
The byssus cuticle is a protective coating surrounding the fibrous thread core that is both as hard as an epoxy and extensible up to 100 % strain before cracking. It was shown previously that cuticle stiffness and hardness largely depend on the presence of Fe-DOPA coordination bonds. However, the byssus is known to concentrate a large variety of metals from seawater, some of which are also capable of binding DOPA (e.g. V). Therefore, the question arises whether natural variation in metal composition can affect the mechanical performance of the byssal thread cuticle. To investigate this question, nanoindentation and confocal Raman spectroscopy were applied to the cuticle of native threads, threads with metals removed (EDTA treated), and threads in which the metal ions in the native tissue were replaced by either Fe or V. Interestingly, replacement of the metal ions with either Fe or V leads to full recovery of the native mechanical properties, with no statistical difference between the two treatments or from the native properties. This likely indicates that a fixed number of metal coordination sites is maintained within the byssal thread cuticle – possibly established during thread formation – which may provide an evolutionarily relevant mechanism for maintaining reliable mechanics in an unpredictable environment.
While the dynamic exchange of histidine-metal bonds plays a vital role in the mechanical behavior and self-healing of the thread core by allowing them to act as reversible sacrificial bonds, the compatibility of DOPA with different metals gives the thread cuticle an inherent adaptability to changing circumstances. The requirements of both of these materials can be met by the dynamic nature of protein-metal cross-links, whereas covalent cross-linking would provide neither the adaptability of the cuticle nor the self-healing of the core. In summary, these studies of the thread core and the thread cuticle underline the important and dynamic roles of protein-metal coordination in the mechanical function of load-bearing protein fibers such as the mussel byssus.
Partial melting is a first-order process for the chemical differentiation of the crust (Vielzeuf et al., 1990). The redistribution of chemical elements during melt generation crucially influences the composition of the lower and upper crust and provides a mechanism to concentrate and transport chemical elements that may also be of economic interest. Understanding the diverse processes involved and their controlling factors is therefore not only of scientific interest but also of high economic importance for meeting the demand for rare metals.
The redistribution of major and trace elements during partial melting represents a central step in understanding how granite-bound mineralization develops (Hedenquist and Lowenstern, 1994). Partial melt generation and the mobilization of ore elements (e.g. Sn, W, Nb, Ta) into the melt depend on the composition of the sedimentary source and on the melting conditions. Distinct source rocks have different compositions reflecting their deposition and alteration histories. This specific chemical “memory” results in different mineral assemblages and melting reactions for different protolith compositions during prograde metamorphism (Brown and Fyfe, 1970; Thompson, 1982; Vielzeuf and Holloway, 1988). These factors not only exert an important influence on the distribution of chemical elements during melt generation; they also influence the volume of melt that is produced, the extraction of the melt from its source, and its ascent through the crust (Le Breton and Thompson, 1988). On a larger scale, protolith distribution and chemical alteration (weathering), prograde metamorphism with partial melting, melt extraction, and granite emplacement ultimately depend on a (plate-)tectonic control (Romer and Kroner, 2016). Comprehension of the individual stages and their interaction is crucial for understanding how granite-related mineralization forms, thereby allowing estimation of the mineralization potential of certain areas.

Partial melting also influences the isotope systematics of melt and restite. Radiogenic and stable isotopes of magmatic rocks are commonly used to trace back the source of intrusions or to quantify mixing of magmas from different sources with distinct isotopic signatures (DePaolo and Wasserburg, 1979; Lesher, 1990; Chappell, 1996). These applications are based on the fundamental requirement that the isotopic signature of the melt reflects that of the bulk source from which it is derived. However, different minerals in a protolith may have radiogenic isotope compositions that deviate from the whole-rock signature (Ayres and Harris, 1997; Knesel and Davidson, 2002). In particular, old minerals with a distinct parent-to-daughter (P/D) ratio are expected to have a specific radiogenic isotope signature. As the partial melting reaction involves only selected phases in a protolith, the isotopic signature of the melt reflects that of the minerals involved in the melting reaction and, therefore, should differ from the bulk source signature. Similar considerations hold true for stable isotopes.
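To make this argument explicit, a generic mass-balance relation (not taken from the thesis) shows why the melt inherits the signature of the reacting minerals rather than that of the bulk rock; Sr isotopes serve as the example here.

```latex
% Generic isotope mass balance (illustrative): the melt ratio is the
% concentration-weighted mean over the minerals i that enter the melting
% reaction with mass fractions x_i and Sr concentrations c_i.
\[
  \left(\frac{{}^{87}\mathrm{Sr}}{{}^{86}\mathrm{Sr}}\right)_{\mathrm{melt}}
  \;=\;
  \frac{\sum_i x_i \, c_i \left(\frac{{}^{87}\mathrm{Sr}}{{}^{86}\mathrm{Sr}}\right)_i}
       {\sum_i x_i \, c_i}
\]
```

If the reacting phases have parent-to-daughter ratios, and hence isotopic compositions, that differ from the whole rock, the melt ratio given by this balance deviates correspondingly from the bulk source signature.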
Plants and some unicellular algae store carbon in the form of transitory starch on a diurnal basis. The turnover of this glucose polymer is tightly regulated, and timely synthesis as well as mobilization is essential to provide energy for heterotrophic growth. Especially for starch degradation, novel enzymes and mechanisms have been proposed recently. However, the catalytic properties of these enzymes and their coordination with metabolic regulation are still to be discovered. This thesis develops theoretical methods to interpret and analyze these enzymes and their role in starch degradation.

In the first part, a novel description of interfacial enzyme catalysis is proposed. Since the initial steps of starch degradation involve reactions at the starch-stroma interface, a framework is needed that allows the derivation of interfacial enzyme rate laws. A cornerstone of the method is the introduction of the available area function – a concept from surface physics – to describe the adsorption step in the catalytic cycle. The method is applied to derive rate laws for two hydrolases, the Beta-amylase (BAM3) and the Isoamylase (DBE/ISA3), as well as for the Glucan, water dikinase (GWD) and a Phosphoglucan phosphatase (DSP/SEX4).

In the second part, the interfacial rate laws are used to formulate a kinetic model of starch degradation. It aims at reproducing the stimulatory effect of reversible phosphorylation by GWD and DSP on the breakdown of the granule. The model can describe the dynamics of interfacial properties during degradation and suggests that interfacial amylopectin side-chains undergo spontaneous helix-coil transitions. Reversible phosphorylation has a synergistic effect on glucan release, especially in the early phase, which drops off as degradation proceeds. Based on the model, the hypothesis is formulated that interfacial phosphorylation is important for the rapid switch from starch synthesis to starch degradation.

The third part takes a broader perspective on carbohydrate-active enzymes (CAZymes) but is motivated by the organization of the downstream pathway of starch breakdown. This pathway comprises Alpha-1,4-glucanotransferases (DPE1 and DPE2) and Alpha-glucan-phosphorylases (Pho or PHS), both in the stroma and in the cytosol. CAZymes accept many different substrates and catalyze numerous reactions and therefore cannot be characterized in classical enzymological terms. A concise characterization is provided by conceptually linking statistical thermodynamics and polymer biochemistry: each reactant is interpreted as an energy level, and the transitions between levels are constrained by the enzymatic mechanisms. Combinations of in vitro assays of polymer-active CAZymes essential for carbon metabolism in plants confirmed the dominance of entropic gradients, and the principle of entropy maximization provides a generalization of the equilibrium constant. Stochastic simulations confirm these results and suggest that randomization of metabolites in the cytosolic pool of soluble heteroglycans (SHG) may contribute to a robust integration of fluctuating carbon fluxes coming from the chloroplasts.
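To give the general flavor of such interfacial rate laws, the following hedged sketch shows a Langmuir-type form in which the available area function modulates substrate accessibility; the enzyme-specific rate laws actually derived in the thesis for BAM3, ISA3, GWD and SEX4 differ in detail.

```latex
% Hedged sketch of a Langmuir-type interfacial rate law. Here \theta is
% the surface coverage and \phi(\theta) the available area function,
% i.e. the accessible fraction of the interface; \phi(\theta) = 1-\theta
% recovers the classical Langmuir adsorption case.
\[
  v \;=\; \frac{k_{\mathrm{cat}}\, E_{0}\, \phi(\theta)\, s}
               {K^{\mathrm{surf}} + \phi(\theta)\, s}
\]
% E_0: total enzyme concentration; s: surface density of substrate
% (e.g. glucan chain ends); K^{surf}: interfacial half-saturation constant.
```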
Rhythm is a temporal and systematic organization of acoustic events in terms of prominence, timing and grouping, helping to structure our most basic experiences, such as body movement, music and speech. In speech, rhythm groups auditory events, e.g. sounds and pauses, into words, making word boundaries acoustically prominent and aiding word segmentation and recognition by the hearer. After word recognition, the hearer is able to retrieve word meaning from the mental lexicon, integrating it with information from other linguistic domains, such as semantics, syntax and pragmatics, until comprehension is achieved. The importance of speech rhythm, however, is not restricted to word segmentation and recognition. Beyond the word level, rhythm continues to operate as an organizing device, interacting with different linguistic domains, such as syntax and semantics, and grouping words into larger prosodic constituents, organized in a prosodic hierarchy. This dissertation investigates the function of speech rhythm as a sentence segmentation device during syntactic ambiguity processing, possible limitations on its use, i.e. in the context of second language processing, and its transferability as a cognitive skill to the music domain.
The evolution of most orogens typically records cogenetic shortening and extension. Pervasive normal faulting in an orogen, however, has been related to late syn- and post-collisional stages of mountain building, with shortening focused along the peripheral sectors of the orogen. While extensional processes constitute an integral part of orogenic evolution, the spatiotemporal characteristics and the kinematic linkage of structures related to shortening and extension in the core regions of an orogen are often not well known. Related to the India-Eurasia collision, the Himalaya forms the southern margin of the Tibetan Plateau and constitutes the most prominent Cenozoic example of a collisional orogen. While thrusting is presently observed along the foothills of the orogen, several generations of extensional structures, oriented either parallel or perpendicular to the strike of the orogen, have been detected in the internal, high-elevation regions. In the NW Indian Himalaya, earthquake focal mechanisms, seismites and ubiquitous normal faulting in Quaternary deposits, and regional GPS measurements reveal ongoing E-W extension. In contrast to other extensional structures observed in the Himalaya, this extension direction is neither parallel nor perpendicular to the NE-SW regional shortening direction. In this study, I took advantage of this obliquity between the trend of the orogen and the structures related to E-W extension in order to address the question of the driving forces behind the different extension directions: extension might be triggered by processes within the Tibetan Plateau, or it might originate from the curvature of the Himalayan orogen. To elaborate on this topic, I present new fault-kinematic data based on systematic measurements of approximately 2000 outcrop-scale brittle fault planes with displacements of up to several centimeters, covering a large area of the NW Indian Himalaya. This new data set, together with field observations relevant for relative chronology, allows me to distinguish six deformation styles, most of which are temporally and spatially linked and represent protracted shortening, but which also record significant extension. One of the main results is that the overall strain pattern derived from these data reflects the regionally important contractional deformation very well, while also revealing significant extensional deformation. For example, this is the first data set in which a succession of both arc-normal and E-W extension has been documented in the Himalaya. My observations also furnish the basis for a detailed overview of the younger extensional deformation history of the NW Indian Himalaya. Field and remote-sensing based geomorphic analyses and geochronologic 40Ar/39Ar data on synkinematic muscovites along normal faults help elucidate widespread E-W extension in the NW Indian Himalaya, which must have started at approximately 14-16 Ma, if not earlier. In addition, I documented and mapped fault scarps in Quaternary sedimentary deposits using satellite imagery and field inspection. Furthermore, I made field observations of regional normal faults, compiled structures from geological maps and put them into a regional context. Finally, I documented seismites in lake sediments close to the currently most active normal fault in the study area in order to extend the (paleo)seismic record of this particular fault.
Taken together, these data sets document that E-W extension is the dominant active deformation style in the internal parts of the orogen. In addition, the combined field, geomorphic and remote-sensing data sets show that E-W extension occurs in a much larger region toward the south and west than the seismicity data had suggested. In conclusion, the data presented here reveal the importance of extension in a region that is still dominated by ongoing collision and shortening. The regional fault distribution and cross-cutting relationships suggest that extension parallel and perpendicular to the strike of the orogen is an integral part of the southward propagation of the active thrust front and the associated lateral growth of the Himalayan arc. In the light of the wide range of models proposed for extension in the Himalaya and the Tibetan Plateau, I propose that E-W extension in the NW Indian Himalaya is transferred from the Tibetan Plateau due to the inability of the Karakorum fault (KF) to adequately accommodate ongoing E-W extension on the plateau. Furthermore, in line with other observations from Tibet, the onset of E-W normal faulting in the NW Himalaya may also reflect the attainment of high topography in this region, which generated crustal stresses conducive to spatially extensive extension.