The Antarctic plays an important role in the global climate system. On the one hand, the Antarctic Ice Sheet is the largest freshwater reservoir on Earth. On the other hand, a major proportion of the global bottom-water formation takes place in Antarctic shelf regions, forcing the global thermohaline circulation. The main goal of this dissertation is to provide new insights into the dynamics and stability of the East Antarctic Ice Sheet (EAIS) during the Quaternary. Additionally, variations in the activity of bottom-water formation and their causes are investigated. The dissertation is a German contribution to the International Polar Year 2007/2008 and was funded by the Deutsche Forschungsgemeinschaft (DFG) within the scope of priority program 1158, 'Antarctic research with comparative studies in Arctic ice regions'. During RV Polarstern expedition ANT-XXIII/9, glaciomarine sediments were recovered from the Prydz Bay-Kerguelen region. Prydz Bay is a key region for the study of EAIS dynamics, as 16% of the EAIS is drained through the Lambert Glacier into the bay. Thereby, the glacier transports sediment into Prydz Bay, which is then further distributed by calving icebergs or by current transport. The scientific approach of this dissertation is the reconstruction of past glaciomarine environments to infer the response of the Lambert Glacier-Amery Ice Shelf system to climate shifts during the Quaternary. To characterize the depositional setting, sedimentological methods are used and statistical analyses are applied. Mineralogical and (bio)geochemical methods provide a means to reconstruct sediment provenances and to provide evidence of changes in primary production in the surface water column. Age-depth models were constructed based on palaeomagnetic and palaeointensity measurements, diatom stratigraphy and radiocarbon dating. Sea-bed surface sediments in the investigation area show distinct variations in their clay-mineral and heavy-mineral assemblages.
Considerable differences in the mineralogical composition of surface sediments are found on the continental shelf. Clay minerals as well as heavy minerals provide useful parameters to differentiate between sediments that originated from the erosion of crystalline rocks and sediments originating from Permo-Triassic deposits. Consequently, mineralogical parameters can be used to reconstruct the provenance of current-transported and ice-rafted material. The investigated sediment cores cover the last 1.4 Ma (continental slope) and the last 12.8 cal. ka BP (MacRobertson shelf). The sediment deposits were mainly influenced by glacial and oceanographic processes, and further by biological activity (continental shelf), meltwater input and possibly gravitational transport. Sediments from the continental slope document two major deglacial events: the first deglaciation is associated with the mid-Pleistocene warming recognized around the Antarctic. In Prydz Bay, the Lambert Glacier-Amery Ice Shelf retreated far to the south, and high biogenic productivity commenced or biogenic remains were better preserved due to increased sedimentation rates. Thereafter, stable glacial conditions continued until 400-500 ka BP. Calving of icebergs was restricted to the western part of the Lambert Glacier. The deeper bathymetry in this area allows for a floating ice shelf even during times of lowered sea level. Between 400-500 ka BP and the last interglacial (marine isotope stage 5) the glacier was more dynamic. During or shortly after the last interglacial the ice shelf retreated again due to a sea-level rise of 6-9 m. Both deglacial events correlate with a reduction in the thickness of ice masses in the Prince Charles Mountains. This indicates that a disintegration of the Amery Ice Shelf possibly led to increased drainage of ice masses from the Prydz Bay hinterland.
A new end-member modelling algorithm was successfully applied to sediments from the MacRobertson shelf to unmix the sand grain-size fractions sorted by current activity and ice transport, respectively. Ice retreat on the MacRobertson shelf commenced 12.8 cal. ka BP and ended around 5.5 cal. ka BP. During the Holocene, strong fluctuations of bottom-water activity were observed, probably related to variations in sea-ice formation in the Cape Darnley polynya. Increased activity of bottom-water flow was reconstructed at transitions from warm to cool conditions, whereas bottom-water activity receded during the mid-Holocene climate optimum. It can be concluded that the Lambert Glacier-Amery Ice Shelf system was relatively stable with respect to climate variations during the Quaternary. In contrast, bottom-water formation due to polynya activity was very sensitive to changes in atmospheric forcing and should gain more attention in future research.
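The thesis's own end-member algorithm is not reproduced here; as an illustrative stand-in, grain-size unmixing of the kind described above can be sketched with non-negative matrix factorization on synthetic data (the two end-member shapes and mixing proportions below are invented for the example):

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic grain-size data: each row is one sample's sand-fraction
# spectrum over `n_bins` size classes. All shapes are hypothetical.
rng = np.random.default_rng(0)
n_bins = 30
bins = np.arange(n_bins)
# Two invented end-members: a narrow "current-sorted" and a broad
# "ice-rafted" distribution (illustrative only, not the thesis data).
em_current = np.exp(-0.5 * ((bins - 8) / 2.0) ** 2)
em_ice = np.exp(-0.5 * ((bins - 20) / 6.0) ** 2)
em = np.vstack([em_current / em_current.sum(), em_ice / em_ice.sum()])

# Mix the end-members in varying proportions down-core, add noise.
props = rng.dirichlet((2, 2), size=50)            # 50 samples x 2 members
X = np.clip(props @ em + rng.normal(0, 1e-3, (50, n_bins)), 0, None)

# Unmix: W holds down-core loadings, H the recovered end-member spectra.
model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(X)
H = model.components_
print(W.shape, H.shape)  # (50, 2) (2, 30)
```

The down-core loadings in W can then be read as proxies for current versus ice-rafted transport intensity, which is the role the end-member model plays in the reconstruction above.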
Adjustment of empirically derived ground motion prediction equations (GMPEs) from a data-rich region/site, where they have been derived, to a data-poor region/site is one of the major challenges associated with the current practice of seismic hazard analysis. Due to their frequent use in engineering design practice, GMPEs are often derived for response spectral ordinates (e.g., spectral acceleration) of a single-degree-of-freedom (SDOF) oscillator. The functional forms of such GMPEs are based upon concepts borrowed from the Fourier spectral representation of ground motion. This assumption regarding the validity of Fourier spectral concepts in the response spectral domain can lead to consequences that cannot be explained physically.
In this thesis, firstly, results from an investigation that explores the relationship between Fourier and response spectra, and the implications of this relationship for the adjustment issues of GMPEs, are presented. The relationship between Fourier and response spectra is explored by using random vibration theory (RVT), a framework that has been extensively used in earthquake engineering, for instance within the stochastic simulation framework and in site response analysis. For a 5% damped SDOF oscillator, the RVT perspective of response spectra reveals that no one-to-one correspondence exists between Fourier and response spectral ordinates except in a limited range of oscillator frequencies (i.e., below the peak of the response spectrum). The high-oscillator-frequency response spectral ordinates are dominated by contributions from the Fourier spectral ordinates that correspond to frequencies well below the selected oscillator frequency. The peak ground acceleration (PGA) is found to be related to the integral over the entire Fourier spectrum of ground motion, which is in contrast to the popularly held perception that PGA is a high-frequency phenomenon of ground motion.
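The RVT link between the Fourier spectrum and the expected peak motion can be sketched as follows. This is a minimal illustration using Parseval's theorem and a simple asymptotic peak factor, not the thesis's actual formulation; the function name, the toy spectrum and the crude peak-factor approximation are my own assumptions:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (kept explicit for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def rvt_peak(fas, freqs, duration):
    """Expected peak of a motion from its one-sided Fourier amplitude
    spectrum `fas` over `freqs`, for a given ground-motion duration.
    Uses Parseval's theorem for the rms and a simple asymptotic peak
    factor (a crude stand-in for the Cartwright-Longuet-Higgins form)."""
    power = fas ** 2
    m0 = 2.0 * _trapz(power, freqs)                             # 0th moment
    m2 = 2.0 * _trapz((2 * np.pi * freqs) ** 2 * power, freqs)  # 2nd moment
    a_rms = np.sqrt(m0 / duration)                  # rms via Parseval
    n_z = duration / np.pi * np.sqrt(m2 / m0)       # expected zero crossings
    ln2n = 2.0 * np.log(max(n_z, 2.0))
    peak_factor = np.sqrt(ln2n) + 0.5772 / np.sqrt(ln2n)
    return peak_factor * a_rms

# Toy spectrum: note that m0 integrates over the ENTIRE spectrum, which
# is why the estimated PGA depends on all frequencies, not just high ones.
f = np.linspace(0.1, 50.0, 500)
pga_est = rvt_peak(np.exp(-f / 10.0), f, 10.0)
print(pga_est)
```

The point the sketch makes is the one stated above: the zeroth spectral moment, and hence the PGA estimate, is an integral over the whole Fourier spectrum.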
This thesis presents a new perspective for developing a response spectral GMPE that takes the relationship between Fourier and response spectra into account. Essentially, this framework involves a two-step method for deriving a response spectral GMPE: in the first step, two empirical models, one for the FAS and one for a predetermined estimate of the duration of ground motion, are derived; in the next step, predictions from the two models are combined within the same RVT framework to obtain the response spectral ordinates. In addition to that, a stochastic-model-based scheme for extrapolating individual acceleration spectra beyond their usable frequency limits is also presented. To that end, recorded acceleration traces were inverted to obtain the stochastic model parameters that allow making consistent extrapolations in individual (acceleration) Fourier spectra. Moreover, an empirical model for a duration measure that is consistent within the RVT framework is derived. As a next step, an oscillator-frequency-dependent empirical duration model is derived that allows obtaining the most reliable estimates of response spectral ordinates. The framework for deriving the response spectral GMPE presented herein becomes a self-adjusting model with the inclusion of the stress parameter (∆σ) and kappa (κ0) as predictor variables in the two empirical models. The entire analysis of developing the response spectral GMPE is performed on the recently compiled RESORCE-2012 database, which contains recordings from Europe, the Mediterranean and the Middle East. The presented GMPE for response spectral ordinates should be considered valid in the magnitude range of 4 ≤ MW ≤ 7.6 at distances ≤ 200 km.
The India-Eurasia continental collision zone provides a spectacular example of active mountain building and climatic forcing. In order to quantify the critically important process of mass removal, I analyzed spatial and temporal precipitation patterns of the oscillating monsoon system and their geomorphic imprints. I processed passive microwave satellite data to derive high-resolution rainfall estimates for the last decade and identified an abnormal monsoon year in 2002. During this year, precipitation migrated far into the Sutlej Valley in the northwestern part of the Himalaya and reached regions behind orographic barriers that are normally arid. There, sediment flux, mean basin denudation rates, and channel-forming processes such as erosion by debris flows increased significantly. Similarly, during the late Pleistocene and early Holocene, solar forcing increased the strength of the Indian summer monsoon for several millennia and presumably led to precipitation distributions analogous to those observed during 2002. However, the persistent humid conditions in the steep, high-elevation parts of the Sutlej River resulted in deep-seated landsliding. Landslides were exceptionally large, mainly due to two processes that I infer for this time: first, at the onset of the intensified monsoon at 9.7 ka BP, heavy rainfall and high river discharge removed material stored along the river and lowered the base level. Second, enhanced discharge, sediment flux, and increased pore-water pressures along the hillslopes eventually led to exceptionally large landslides that have not been observed in other periods. The excess sediments that were removed from the upstream parts of the Sutlej Valley were rapidly deposited in the low-gradient sectors of the lower Sutlej River. The timing of downcutting correlates with centuries-long weaker monsoon periods that were characterized by lower rainfall.
I explain this relationship by taking sediment flux and rainfall dynamics into account: high sediment flux derived from the upstream parts of the Sutlej River during strong monsoon phases prevents fluvial incision due to oversaturation of the fluvial sediment-transport capacity. In contrast, weaker monsoons result in a lower sediment flux that allows incision in the low-elevation parts of the Sutlej River.
Organizations incorporate the institutional demands from their environment in order to be deemed legitimate and survive. Yet, complexifying societies promulgate multiple and sometimes inconsistent institutional prescriptions. When these prescriptions collide, organizations are said to face “institutional complexity”. How does an organization then incorporate incompatible demands? What are the consequences of institutional complexity for an organization? The literature provides contradictory conceptual and empirical insights on the matter. A central assumption, however, remains that internal incompatibilities generate tensions that, under certain conditions, can escalate into intractable conflicts, resulting in dysfunctionality and loss of legitimacy. The present research is an inquiry into what happens inside an organization when it incorporates complex institutional demands.
To answer this question, I focus on how individuals inside an organization interpret a complex institutional prescription. I examine how members of the French Development Agency interpret ‘results-based management’, a central but complex concept of organizing in the field of development aid. I use an inductive mixed methods design to systematically explore how different interpretations of results-based management relate to one another and to the organizational context in which they are embedded.
The results reveal that results-based management is a contested concept in the French Development Agency. I find multiple interpretations of the concept, which are attached to partly incompatible rationales about “who we are” and “what we do as an organization”. These rationales nevertheless coexist as balanced forces, without escalating into open conflict. The analysis points to four reasons for this peaceful coexistence of diverging rationales inside one and the same organization: 1) individuals’ capacity to manipulate different interpretations of a complex institutional demand, 2) the nature of interpretations, which makes them more or less prone to conflict, 3) the balanced distribution of rationales across the organizational sub-contexts and 4) the shared rules of interpretation provided by the larger socio-cultural context.
This research shows that an organization that incorporates institutional complexity comes to represent different, partly incompatible things to its members without being at war with itself. In doing so, it contributes to our knowledge of institutional complexity and organizational hybridity. It also advances our understanding of internal organizational legitimacy and of the translation of managerial concepts in organizations.
In the first section of the thesis, graphitic carbon nitride was synthesised for the first time using the high-temperature condensation of dicyandiamide (DCDA), a simple molecular precursor, in a eutectic salt melt of lithium chloride and potassium chloride. The extent of condensation, namely next-to-complete conversion of all reactive end groups, was verified by elemental microanalysis and vibrational spectroscopy. TEM and SEM measurements gave detailed insight into the well-defined morphology of these organic crystals, which are not based on 0D or 1D constituents like known molecular or short-chain polymeric crystals but on the packing motif of extended 2D frameworks. The proposed crystal structure of this g-C3N4 species was derived in analogy to graphite by means of extensive powder XRD studies, indexing and refinement. It is based on sheets of hexagonally arranged s-heptazine (C6N7) units that are held together by covalent bonds between C and N atoms. These sheets stack in a graphitic, staggered fashion adopting an AB motif, as corroborated by powder X-ray diffractometry and high-resolution transmission electron microscopy. This study was contrasted with one of the many popular, yet unsuccessful, approaches of the last 30 years of scientific literature to perform the condensation of an extended carbon nitride species through synthesis in the bulk. The second section expands the repertoire of available salt melts, introducing the lithium bromide and potassium bromide eutectic as an excellent medium to obtain a new phase of graphitic carbon nitride. The combination of SEM, TEM, PXRD and electron diffraction reveals that the new graphitic carbon nitride phase stacks in an ABA' motif, forming unprecedentedly large crystals. This section takes up the notion of the preceding chapter that condensation in a eutectic salt melt is the key to obtaining a high degree of conversion, mainly through a solvatory effect.
At the close of this chapter, ionothermal synthesis is established as a powerful tool to overcome the inherent kinetic problems of solid-state reactions, such as incomplete polymerisation and condensation in the bulk, especially when the temperature requirement of the reaction in question falls into the proverbial "no man's land" of classical solvents, i.e. above 250 to 300 °C. The following section puts to the test the claim that the crystalline carbon nitrides obtained from a salt melt are indeed graphitic. A typical property of graphite, namely the accessibility of its interplanar space for guest molecules, is transferred to the graphitic carbon nitride system. Metallic potassium and graphitic carbon nitride are converted to give the potassium intercalation compound K(C6N8)3, designated according to its stoichiometry and proposed crystal structure. Reaction of the intercalate with aqueous solvents triggers the exfoliation of the graphitic carbon nitride material and, for the first time, enables access to single (or multiple) carbon nitride sheets analogous to graphene, as seen in the formation of sheets, bundles and scrolls of carbon nitride in TEM imaging. The exfoliated sheets form a stable, strongly fluorescent solution in aqueous media, which shows no sign in UV/Vis spectroscopy that the aromaticity of the individual sheets was degraded. The final section expands on the mechanism underlying the formation of graphitic carbon nitride by literally expanding the distance between the covalently linked heptazine units which constitute these materials. A close examination of all reaction mechanisms proposed to date, in the light of exhaustive DSC/MS experiments, highlights the possibility that the heptazine unit can be formed from smaller molecules, even if some of the designated leaving groups (such as ammonia) are substituted by an element, R, which later remains linked to the nascent heptazine.
Furthermore, it is suggested that the key functional groups in the process are the triazine (Tz) and carbonitrile (CN) groups. On the basis of these assumptions, molecular precursors are tailored which encompass all the functional groups necessary to form a central heptazine unit of threefold, planar symmetry and still retain outward functionalities for self-propagated condensation in all three directions. Two model systems based on para-aryl (ArCNTz) and para-biphenyl (BiPhCNTz) precursors are devised via a facile synthetic procedure and then condensed in an ionothermal process to yield the heptazine-based frameworks HBF-1 and HBF-2. Due to the structural motifs of their molecular precursors, individual sheets of HBF-1 and HBF-2 span cavities of 14.2 Å and 23.0 Å, respectively, which makes both materials attractive as potential organic zeolites. Crystallographic analysis confirms the formation of ABA'-layered, graphitic systems, and the extent of condensation is confirmed as next-to-perfect by elemental analysis and vibrational spectroscopy.
This publications-based thesis summarizes my contribution to the scientific field of ultrafast structural dynamics. It consists of 16 publications about the generation, detection and coupling of coherent gigahertz longitudinal acoustic phonons, also called hypersonic waves. To generate such high-frequency phonons, femtosecond near-infrared laser pulses were used to heat nanostructures composed of perovskite oxides on an ultrashort timescale. As a consequence, the heated regions of such a nanostructure expand and a high-frequency acoustic phonon pulse is generated. To detect such coherent acoustic sound pulses, I use ultrafast variants of optical Brillouin and x-ray scattering. Here, an incident optical or x-ray photon is scattered by the excited sound wave in the sample. The scattered light intensity measures the occupation of the phonon modes.
The central part of this work is the investigation of coherent high-amplitude phonon wave packets, which can behave nonlinearly, quite similar to shallow-water waves, which show a steepening of wave fronts, or solitons, well known as tsunamis. Due to the high amplitude of the acoustic wave packets in the solid, the acoustic properties can change significantly in the vicinity of the sound pulse. This may lead to a shape change of the pulse. I have observed by time-resolved Brillouin scattering that a single-cycle hypersound pulse shows a wavefront steepening. I excited hypersound pulses with strain amplitudes of up to 1%, which I calibrated by ultrafast x-ray diffraction (UXRD).
On the basis of this first experiment, we developed the idea of the nonlinear mixing of narrowband phonon wave packets, which we call "nonlinear phononics" in analogy with nonlinear optics, which summarizes a kaleidoscope of surprising optical phenomena showing up at very high electric fields. Such phenomena are, for instance, second harmonic generation, four-wave mixing or solitons. But in the case of excited coherent phonons, the wave packets usually have very broad spectra, which makes it nearly impossible to look at elementary scattering processes between phonons with a certain momentum and energy.
For that purpose, I tested different techniques to excite narrowband phonon wave packets which mainly consist of phonons with a certain momentum and frequency. To this end, epitaxially grown metal films on a dielectric substrate were excited with a train of laser pulses. These excitation pulses drive the metal film to oscillate with the frequency given by the inverse of their temporal separation and send a hypersonic wave of this frequency into the substrate. The monochromaticity of these wave packets was proven by ultrafast optical Brillouin and x-ray scattering.
Using the excitation of such narrowband phonon wave packets, I was able to observe second harmonic generation (SHG) of coherent phonons as a first example of nonlinear wave mixing of nanometric phonon wave packets.
It is a common finding that preschoolers have difficulty identifying who is doing what to whom in non-canonical sentences, such as object-verb-subject (OVS) and passive sentences in German. This dissertation investigates how German monolingual and German-Italian simultaneous bilingual children process German OVS sentences (Study 1) and German passives (Study 2). Offline data (i.e., accuracy data) and online data (i.e., eye-gaze and pupillometry data) were analyzed to explore whether children can assign thematic roles during sentence comprehension and processing. Executive functions and language-internal and -external factors were investigated as potential predictors of children's sentence comprehension and processing.
Throughout the literature, there are contradictory findings on the relation between language and executive functions. While some results show a bilingual cognitive advantage over monolingual speakers, others suggest there is no relationship between bilingualism and executive functions. If bilingual children possess more advanced executive function abilities than monolingual children, then this might also be reflected in a better performance on linguistic tasks. In the current studies, monolingual and bilingual children were tested by means of two executive function tasks: the Flanker task and the task-switching paradigm. However, the findings showed no bilingual cognitive advantage and no better performance by bilingual children in the linguistic tasks. Performance was rather comparable between bilingual and monolingual children, or even better for the monolingual group. This may be due to cross-linguistic influences and language experience (i.e., language input and output). Italian was used because it does not syntactically overlap with the structure of German OVS sentences, and it overlapped with only one of the two sentence conditions used for the passive study, considering the subject-(finite)verb alignment. The findings showed a better performance of bilingual children in the passive sentence structure that syntactically overlapped in the two languages, providing evidence for cross-linguistic influences.
Further factors for children's sentence comprehension were considered. The parents' education, the number of older siblings and language experience variables were derived from a language background questionnaire completed by parents. Scores for receptive vocabulary and grammar, visual and short-term memory and reasoning ability were measured by means of standardized tests. It was shown that greater German language experience by bilinguals correlates with better accuracy in German OVS sentences but not in passive sentences. Memory capacity had a positive effect on the comprehension of OVS and passive sentences in the bilingual group. Additionally, executive function abilities played a role in the comprehension of OVS sentences but not of passive sentences. It is suggested that executive function abilities might help children in the sentence comprehension task when the linguistic structures are not yet fully mastered.
Altogether, these findings show that bilinguals’ poorer performance in the comprehension and processing of German OVS is mainly due to reduced language experience in German, and that the different performance of bilingual children with the two types of passives is mainly due to cross-linguistic influences.
Comparative study of gene expression during the differentiation of white and brown preadipocytes
(2002)
Introduction
Mammals have two types of adipose tissue: the lipid-storing white adipose tissue and the brown adipose tissue, characterised by its capacity for non-shivering thermogenesis. White and brown adipocytes have the same origin in mesodermal stem cells. Yet nothing is known so far about the commitment of precursor cells to the white and brown adipose lineages. Several experimental approaches indicate that they originate from the differentiation of two distinct types of precursor cells, white and brown preadipocytes. Based on this hypothesis, the aim of this study was to analyse the gene expression of white and brown preadipocytes in a systematic approach.
Experimental approach
The white and brown preadipocytes to be compared were obtained from primary cell cultures of preadipocytes from the Djungarian dwarf hamster. Representational difference analysis was used to isolate genes potentially differentially expressed between the two cell types. The cDNA libraries thus obtained were spotted on microarrays for a large-scale gene expression analysis in cultured preadipocytes and adipocytes and in tissue samples.
Results
Four genes with higher expression in white preadipocytes (3 members of the complement system and a fatty acid desaturase) and 8 with higher expression in brown preadipocytes were identified. Of the latter, 3 coded for structural proteins (fibronectin, metargidin and α-actinin 4), 3 for proteins involved in transcriptional regulation (necdin, vigilin and the small nuclear ribonucleoprotein polypeptide A) and 2 are of unknown function. Cluster analysis was applied to the gene expression data in order to characterise them and led to the identification of four major typical expression profiles: genes up-regulated during differentiation, genes down-regulated during differentiation, genes more highly expressed in white preadipocytes and genes more highly expressed in brown preadipocytes.
Conclusion
This study shows that white and brown preadipocytes can be distinguished by different expression levels of several genes. These results draw attention to interesting candidate genes for the determination of white and brown preadipocytes (necdin, vigilin and others) and furthermore indicate the potential importance of several functional groups in the differentiation of white and brown preadipocytes, mainly the complement system and the extracellular matrix.
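The cluster analysis described above can be sketched with k-means on synthetic expression profiles; the four prototype patterns mirror the four reported profile types, but all data, noise levels and profile shapes below are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic expression matrix: each gene is a 12-dim vector made of
# 6 differentiation stages in white cells + 6 in brown cells.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 6)
patterns = np.array([
    np.r_[t, t],                               # up-regulated in both lineages
    np.r_[1 - t, 1 - t],                       # down-regulated in both
    np.r_[np.full(6, 1.0), np.full(6, 0.2)],   # white-enriched
    np.r_[np.full(6, 0.2), np.full(6, 1.0)],   # brown-enriched
])
# 25 noisy genes per prototype -> 100 genes total.
genes = np.vstack([p + rng.normal(0, 0.05, 12)
                   for p in patterns for _ in range(25)])

# k-means recovers the four typical profiles.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(genes)
print(len(set(labels)))  # 4
```

With well-separated prototypes, each recovered cluster centroid approximates one of the four profile types, which is how the study's expression profiles were characterised.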
The problem under consideration in this thesis is a two-level atom in a photonic crystal driven by a pumping laser. The photonic crystal provides an environment for the atom that modifies the decay of the excited state, especially if the atomic frequency is close to the band gap. The population inversion is investigated as well as the emission spectrum. The dynamics is analysed in the context of open quantum systems. Due to the multiple reflections in the photonic crystal, the system has a finite memory that precludes the Markovian approximation. In the Heisenberg picture, the equations of motion for the system variables form an infinite hierarchy of integro-differential equations. To get a closed system, approximations such as a weak-coupling approximation are needed. The thesis starts with a simple photonic crystal that is amenable to analytic calculations: a one-dimensional photonic crystal consisting of alternating layers. The Bloch modes inside and the vacuum modes outside a finite crystal are linked with a transformation matrix that is interpreted as a transfer matrix. Formulas for the band structure, the reflection from a semi-infinite crystal, and the local density of states in absorbing crystals are found; defect modes and negative refraction are discussed. The quantum optics section of the work starts with the discussion of three problems that are related to the full resonance fluorescence problem: a pure dephasing model, the driven atom and resonance fluorescence in free space. In the lowest order of the system-environment coupling, the one-time expectation values for the full problem are calculated analytically and the stationary states are discussed for certain cases. For the calculation of the two-time correlation functions and spectra, the additional problem of correlations between the two times appears. In the Markovian case, the quantum regression theorem is valid. In the general case, the fluctuation-dissipation theorem can be used instead.
The two-time correlation functions are calculated by the two different methods. Within the chosen approximations, both methods deliver the same result. Several plots show the dependence of the spectrum on the parameters. Some examples of squeezing spectra are shown with different approximations. A projection operator method is used to establish two kinds of Markovian expansions, with and without time convolution. The lowest order is identical to the lowest order of the system-environment coupling, but higher orders give different results.
The present thesis was born and evolved within the RAdial Velocity Experiment (RAVE), with the goal of measuring chemical abundances from the RAVE spectra and exploiting them to investigate the chemical gradients along the plane of the Galaxy, in order to provide constraints on possible Galactic formation scenarios. RAVE is a large spectroscopic survey which aims to observe ~10^6 stars spectroscopically by the end of 2012 and to measure their radial velocities, atmospheric parameters and chemical abundances. The project makes use of the UK Schmidt telescope at the Australian Astronomical Observatory (AAO) in Siding Spring, Australia, equipped with the multi-object spectrograph 6dF. To date, RAVE has collected and measured more than 450,000 spectra. The precision of the chemical abundance estimates depends on the reliability of the atomic and atmospheric parameters adopted (in particular the oscillator strengths of the absorption lines and the effective temperature, gravity, and metallicity of the stars measured). Therefore we first identified 604 absorption lines in the RAVE wavelength range and refined their oscillator strengths with an inverse spectral analysis. Then, we improved the RAVE stellar parameters by modifying the RAVE pipeline and the spectral library the pipeline relies on. The modifications removed some systematic errors in stellar parameters discovered during this work. To obtain chemical abundances, we developed two different processing pipelines. Both of them measure chemical abundances by assuming stellar atmospheres in Local Thermodynamic Equilibrium (LTE). The first one determines elemental abundances from the equivalent widths of absorption lines. Since this pipeline showed poor sensitivity for abundances relative to iron, it was superseded. The second one exploits chi^2 minimization between observed and model spectra. Thanks to its precision, it was adopted for the creation of the RAVE chemical catalogue.
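The chi^2 minimization between observed and model spectra can be illustrated with a toy grid search; the Gaussian-line model, the grid, and the noise level below are all synthetic stand-ins for the actual RAVE pipeline:

```python
import numpy as np

def best_fit_abundance(observed, sigma, model_grid, abundances):
    """Return the abundance whose model spectrum minimizes chi^2."""
    chi2 = [np.sum(((observed - m) / sigma) ** 2) for m in model_grid]
    return abundances[int(np.argmin(chi2))]

# Toy model: a single Gaussian absorption line whose depth scales
# with the (hypothetical) abundance parameter `a`.
wave = np.linspace(5000.0, 5010.0, 200)
def model(a):
    return 1.0 - a * np.exp(-0.5 * ((wave - 5005.0) / 0.5) ** 2)

abundances = np.linspace(0.1, 0.9, 81)   # model grid, step 0.01
grid = np.array([model(a) for a in abundances])

# Simulate an observed spectrum with known truth and pixel noise.
rng = np.random.default_rng(1)
truth = 0.42
obs = model(truth) + rng.normal(0, 0.01, wave.size)

best = best_fit_abundance(obs, 0.01, grid, abundances)
print(best)
```

At a per-pixel noise of 0.01 the minimum of the chi^2 curve falls within a grid step or two of the true value, which is the same precision argument made for the catalogue pipeline above.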
This pipeline provides abundances with uncertainties of about ~0.2 dex for spectra with signal-to-noise ratio S/N>40 and ~0.3 dex for spectra with 20<S/N<40. For this work, the pipeline measured chemical abundances of up to 7 elements for 217,358 RAVE stars. With these data we investigated the chemical gradients along the Galactic radius of the Milky Way. We found that stars with low vertical velocities |W| (which stay close to the Galactic plane) show an iron abundance gradient in agreement with previous works (~-0.07 dex kpc^-1), whereas stars with larger |W|, which are able to reach larger heights above the Galactic plane, show progressively flatter gradients. The gradients of the other elements follow the same trend. This suggests that efficient radial mixing acts in the Galaxy or that the thick disk formed from homogeneous interstellar matter. In particular, we found hundreds of stars which can be kinematically classified as thick-disk stars but exhibit a chemical composition typical of the thin disk. A few stars of this kind have already been detected by other authors, and their origin is still not clear. One possibility is that they are thin-disk stars that were kinematically heated and then underwent an efficient radial mixing process which blurred (and so flattened) the gradient. Alternatively, they may be a "transition population" which represents an evolutionary bridge between the thin and thick disks. Our analysis shows that the two explanations are not mutually exclusive. Future follow-up high-resolution spectroscopic observations will clarify their role in the evolution of the Galactic disk.
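The chi^2 minimization at the heart of the second pipeline can be illustrated with a toy grid search over model spectra. The forward model (a single Gaussian absorption line whose depth scales with abundance), the noise level and the abundance grid below are invented purely for illustration and are not the actual RAVE pipeline:

```python
import numpy as np

def chi_square(observed, model, sigma):
    """Chi-square statistic between an observed and a model spectrum."""
    return np.sum(((observed - model) / sigma) ** 2)

def best_fit_abundance(observed, sigma, grid, model_for):
    """Return the grid abundance whose model spectrum minimises chi^2."""
    chi2 = [chi_square(observed, model_for(a), sigma) for a in grid]
    return grid[int(np.argmin(chi2))]

# Toy forward model: one Gaussian absorption line whose depth scales
# linearly with abundance (hypothetical, for illustration only).
wave = np.linspace(-5.0, 5.0, 200)

def model_for(abundance):
    return 1.0 - 0.1 * abundance * np.exp(-0.5 * wave ** 2)

rng = np.random.default_rng(0)
true_abundance = 0.8
noise = 0.005  # assumed per-pixel flux uncertainty
observed = model_for(true_abundance) + rng.normal(0.0, noise, wave.size)

grid = np.linspace(0.0, 2.0, 101)
best = best_fit_abundance(observed, noise, grid, model_for)
print(best)
```

With a realistic noise level, the minimum-chi^2 grid point recovers the input abundance to within a few grid steps; the real pipeline compares against a library of synthetic spectra instead of an analytic line profile.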
Streamflow dynamics in mountainous environments are controlled by runoff generation processes in the basin upstream. Runoff generation processes are thus a major control of the terrestrial part of the water cycle, influencing both water quality and water quantity as well as their dynamics. The understanding of these processes becomes especially important for the prediction of floods, erosion, and dangerous mass movements, in particular as hydrological systems often show threshold behavior. In the case of extensive environmental changes, be it in climate or in land use, an understanding of runoff generation processes will allow us to better anticipate the consequences and can thus lead to a more responsible management of resources as well as risks. In this study the runoff generation processes in a small undisturbed catchment in the Chilean Andes were investigated. The research area is characterized by steep hillslopes, volcanic ash soils, undisturbed old-growth forest and high rainfall amounts. The investigation of runoff generation processes in this data-scarce area is of special interest because a) little is known about the hydrological functioning of the young volcanic ash soils, which are characterized by extremely high porosities and hydraulic conductivities, b) no process studies have been carried out in this area at either slope or catchment scale, and c) understanding the hydrological processes in undisturbed catchments provides a basis for improving our understanding of disturbed systems, the shift in processes that followed the disturbance and perhaps also the future process evolution necessary for reaching a new steady state. The catchment studied here thus has the potential to serve as a reference catchment for future investigations. As no long-term records of rainfall and runoff exist, it was necessary to replace long time series with a multitude of experimental methods, the so-called "multi-method approach".
These methods cover as many aspects of runoff generation as possible and include not only the measurement of time series such as discharge, rainfall, soil water dynamics and groundwater dynamics, but also various short-term measurements and experiments such as the determination of throughfall amounts and variability, water chemistry, soil physical parameters, soil mineralogy, geo-electrical soundings and tracer techniques. Assembling the results like pieces of a puzzle produces an admittedly incomplete but nevertheless useful picture of the dynamic ensemble of runoff generation processes in this catchment. The employed methods were then evaluated in terms of their usefulness versus their expenditure (labour and financial costs). Finally, the hypotheses, i.e. the perceptual model of runoff generation derived from the experimental findings, were tested with the physically based model Catflow. Additionally, the process-based model Wasim-ETH was used to investigate the influence of land use on runoff generation at the catchment scale. An initial assessment of the hydrologic response of the catchment was achieved with a linear statistical model for the prediction of event runoff coefficients; the parameters identified as best predictors give a first indication of the important processes. Various results acquired with the "multi-method approach" show that the response to rainfall is generally fast. Preferential vertical flow is of major importance and is reinforced by hydrophobicity during the summer months. Rapid lateral water transport is necessary to produce the fast response signal; however, while lateral subsurface flow was observed at several soil moisture profiles, the location and type of structures causing fast lateral flow at the hillslope scale are still not clear and need to be investigated in more detail. Surface runoff has not been observed and is unlikely given the high hydraulic conductivities of the volcanic ash soils.
Additionally, a large subsurface storage retains most of the incident rainfall during events (>90%, often even >95%) and sustains streamflow even after several weeks of drought. Several findings suggest a shift in processes from summer to winter, causing changes in flow patterns, in the response of stream chemistry to rainfall events and in groundwater-surface water interactions. The results of the modelling study confirm the importance of rapid and preferential flow processes; however, due to the limited knowledge of subsurface structures, the model still does not fully capture the runoff response. Investigating the importance of land use for runoff generation showed that while peak runoff generally increased with deforested area, the location of these areas also had an effect. Overall, the "multi-method approach" of replacing long time series with a multitude of experimental methods was successful in identifying the dominant hydrological processes and thus proved its applicability for data-scarce catchments under the constraint of limited resources.
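The linear statistical model for event runoff coefficients mentioned above can be sketched as an ordinary least-squares regression. The predictor variables and event values below are entirely made up for illustration; the actual study identified its own best predictors from the field data:

```python
import numpy as np

# Hypothetical event table: antecedent soil moisture (-), rainfall depth (mm)
# and maximum intensity (mm/h) as candidate predictors of the event
# runoff coefficient (all numbers invented for this sketch).
X = np.array([
    [0.30, 20.0, 4.0],
    [0.45, 35.0, 6.0],
    [0.55, 50.0, 9.0],
    [0.40, 15.0, 3.0],
    [0.60, 60.0, 12.0],
])
y = np.array([0.05, 0.12, 0.20, 0.06, 0.28])  # observed runoff coefficients

# Add an intercept column and fit by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef
print(np.round(predicted, 3))
```

Inspecting the fitted coefficients (and their significance, with a proper statistics package) is what points to the physically important controls, as described for the study catchment.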
The programmable network envisioned in the 1990s within standardization and research for the Intelligent Network is currently coming into reality using IP-based Next Generation Networks (NGN) and applying Service-Oriented Architecture (SOA) principles for service creation, execution, and hosting. SOA is the foundation for both next-generation telecommunications and middleware architectures, which are rapidly converging on top of commodity transport services. Services such as triple/quadruple play, multimedia messaging, and presence are enabled by the emerging service-oriented IP Multimedia Subsystem (IMS) and allow telecommunications service providers to maintain, if not improve, their position in the marketplace. SOA has become the de facto standard in next-generation middleware systems as the system model of choice to interconnect service consumers and providers within and between enterprises. We leverage previous research activities in overlay networking technologies along with recent advances in network abstraction, service exposure, and service creation to develop a paradigm for a service environment providing converged Internet and telecommunications services that we call the Service Broker. Such a Service Broker provides mechanisms to combine and mediate between different service paradigms from the two domains Internet/WWW and telecommunications. Furthermore, it enables the composition of services across these domains and is capable of defining and applying temporal constraints during creation and execution time. By adding network-awareness into the service fabric, such a Service Broker may also act as a next-generation network-to-service element allowing the composition of cross-domain and cross-layer network and service resources.
The contribution of this research is threefold: first, we analyze and classify principles and technologies from information technology (IT) and telecommunications to identify and discuss issues affecting cross-domain composition in a converging service layer. Second, we discuss service composition methods allowing the creation of converged services on an abstract level; in particular, we present a formalized method for model checking of such compositions. Finally, we propose a Service Broker architecture that converges Internet and telecommunications services. This environment enables cross-domain feature interaction in services through formalized obligation policies acting as constraints during service discovery, creation, and execution time.
The experience of premenstrual syndrome (PMS) affects up to 90% of individuals with an active menstrual cycle and involves a spectrum of aversive physiological and psychological symptoms in the days leading up to menstruation (Tschudin et al., 2010). Despite its high prevalence, the precise origins of PMS remain elusive, with influences ranging from hormonal fluctuations to cognitive, social, and cultural factors (Hunter, 2007; Matsumoto et al., 2013).
Biologically, hormonal fluctuations, particularly in gonadal steroids, are commonly believed to be implicated in PMS, with the central factor being varying susceptibilities to the fluctuations between individuals and cycles (Rapkin & Akopians, 2012). Allopregnanolone (ALLO), a neuroactive steroid and progesterone metabolite, has emerged as a potential link to PMS symptoms (Hantsoo & Epperson, 2020). ALLO is a positive allosteric modulator of the GABAA receptor, influencing inhibitory communication (Rupprecht, 2003; Andréen et al., 2006). Different susceptibility to ALLO fluctuations throughout the cycle may lead to reduced GABAergic signal transmission during the luteal phase of the menstrual cycle.
The GABAergic system's broad influence leads to a number of affected physiological systems, including a consistent reduction in vagally mediated heart rate variability (vmHRV) during the luteal phase (Schmalenberger et al., 2019). This reduction in vmHRV is more pronounced in individuals with high PMS symptoms (Baker et al., 2008; Matsumoto et al., 2007). Fear conditioning studies have shown inconsistent associations with cycle phases, suggesting a complex interplay between physiological parameters and PMS-related symptoms (Carpenter et al., 2022; Epperson et al., 2007; Milad et al., 2006).
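As a concrete illustration of the measure discussed throughout these studies, vmHRV is commonly quantified as the root mean square of successive differences (RMSSD) of the interbeat (RR) intervals. A minimal sketch, with a made-up RR series (real recordings are artefact-corrected first):

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms),
    a standard time-domain index of vagally mediated HRV."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative RR series in milliseconds (invented for this example).
rr = [812, 790, 835, 801, 820, 795, 828]
print(round(rmssd(rr), 1))  # prints 30.9
```

Higher RMSSD reflects stronger beat-to-beat (vagally mediated) variability; a luteal-phase reduction in this index is the effect reported by Schmalenberger et al. (2019).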
The neurovisceral integration model posits that vmHRV reflects the capacity of the central autonomic network (CAN), which is responsible for regulatory processes on behavioral, cognitive, and autonomic levels (Thayer & Lane, 2000, 2009). Fear learning, mediated within the CAN, is suggested to be indicative of vmHRV's capacity for successful
regulation (Battaglia & Thayer, 2022). Given the GABAergic mediation of central inhibitory functional connectivity in the CAN, which may be affected by ALLO fluctuations, this thesis proposes that fluctuating CAN activity in the luteal phase contributes to diverse aversive symptoms in PMS.
A research program was designed to empirically test these propositions. Study 1 investigated fear discrimination during different menstrual cycle phases and its interaction with vmHRV, revealing nuanced effects on acoustic startle response and skin conductance response. While there was heightened fear discrimination in acoustic startle responses in participants in the luteal phase, there was an interaction between menstrual cycle phase and vmHRV in skin conductance responses. In this measure, heightened fear discrimination during the luteal phase was only visible in individuals with high resting vmHRV; those with low vmHRV showed reduced fear discrimination and higher overall responses.
Although PMS affects the vast majority of menstruating people, very few tools are available to reliably assess its symptoms in the German-speaking area. Study 2 aimed to close this gap by translating and validating a German version of the short form of the Premenstrual Assessment Form (Allen et al., 1991), providing a reliable instrument for future investigations in German-speaking research.
Study 3 employed a diary study paradigm to explore daily associations between vmHRV and PMS symptoms. The results showed clear simultaneous fluctuations between the two constructs with a peak in PMS and a low point in vmHRV a few days before menstruation onset. The association between vmHRV and PMS was driven by psychological PMS symptoms.
Based on the theoretical considerations regarding the neurovisceral perspective on PMS, another interesting construct to consider is attentional control, as it is closely related to functions of the CAN. Study 4 delved into attentional control and vmHRV differences between menstrual cycle phases, demonstrating an interaction between cycle phase and PMS symptoms. In a pilot, we found reduced vmHRV and attentional control during the luteal phase only in participants who reported strong PMS.
While Studies 1-4 provided evidence for the mechanisms underlying PMS, Studies 5 and 6 investigated short- and long-term intervention protocols to ameliorate PMS symptomatology. Study 5 explored the potential of heart rate variability biofeedback (HRVB) in alleviating PMS symptoms and a number of other outcome measures. In a waitlist-control design, participants underwent a 4-week smartphone-based HRVB intervention. The results revealed positive effects on PMS, with larger effect sizes on psychological symptoms, as well as on depressive symptoms, anxiety/stress and attentional control.
Finally, Study 6 examined the acute effects of HRVB on attentional control. The study found a positive impact, but only in highly stressed individuals.
The thesis, based on this comprehensive research program, expands our understanding of PMS as an outcome of CAN fluctuations mediated by GABAA receptor reactivity. The results largely support the model. These findings not only deepen our understanding of PMS but also offer potential avenues for therapeutic interventions. The promising results of smartphone-based HRVB training suggest a non-pharmacological approach to managing PMS symptoms, although further research is needed to confirm its efficacy.
In conclusion, this thesis illuminates the complex web of factors contributing to PMS, providing valuable insights into its etiological underpinnings and potential interventions. By elucidating the relationships between hormonal fluctuations, CAN activity, and psychological responses, this research contributes to more effective treatments for individuals grappling with the challenges of PMS. The findings hold promise for improving the quality of life for those affected by this prevalent and often debilitating condition.
‘Heterosis’ is a term used in genetics and breeding referring to hybrid vigour, i.e. the superiority of hybrids over their parents in traits such as size, growth rate, biomass, fertility, yield, nutrient content, disease resistance or tolerance to biotic and abiotic stress. Parental plants, two different inbred (pure) lines with desired traits, are crossed to obtain hybrids; maximum heterosis is observed in the first generation (F1) of such crosses. Heterosis has been utilised in plant and animal breeding programmes for at least 90 years: by the end of the 20th century, 65% of worldwide maize production was hybrid-based. Generally, it is believed that an understanding of the molecular basis of heterosis will allow the creation of new superior genotypes which could either be used directly as F1 hybrids or form the basis for future breeding selection programmes. Two selected accessions of the research model plant Arabidopsis thaliana (thale cress) were crossed to obtain hybrids, which typically exhibited a 60-80% increase in biomass compared to the average weight of both parents. This PhD project focused on investigating the role of selected regulatory genes given their potentially key involvement in heterosis. In the first part of the project, the most appropriate developmental stage for this heterosis study was determined by metabolite measurements and growth observations in parents and hybrids. At the selected stage, around 60 candidate regulatory genes (i.e. genes differentially expressed in hybrids compared to parents) were identified. Of these, the majority were transcription factors, genes that coordinate the expression of other genes. Subsequent expression analyses of the candidate genes in biomass-heterotic hybrids of other Arabidopsis accessions revealed differential expression in a subset of the genes, highlighting their relevance for heterosis.
Moreover, a fraction of the candidate regulatory genes were found within DNA regions closely linked to the genes underlying biomass or growth heterosis. However, additional analyses proved insufficient to establish the role of the selected candidate regulatory genes in heterosis, uncovering a need for the novel approaches discussed in the thesis. Taken together, the work provided insight into the molecular mechanisms underlying heterosis. Although studies on heterosis date back more than one hundred years, this project, like many others, revealed that further investigation will be needed to fully understand this phenomenon.
This work presents the synthesis and the self-assembly of symmetrical amphiphilic ABA and BAB triblock copolymers in dilute, semi-concentrated and highly concentrated aqueous solution. A series of new bifunctional bistrithiocarbonates as RAFT agents was used to synthesise these triblock copolymers, which are characterised by a long hydrophilic middle block and relatively small, but strongly hydrophobic end blocks. As hydrophilic A blocks, poly(N-isopropylacrylamide) (PNIPAM) and poly(methoxy diethylene glycol acrylate) (PMDEGA) were employed, while as hydrophobic B blocks, poly(4-tert-butyl styrene), polystyrene, poly(3,5-dibromo benzyl acrylate), poly(2-ethylhexyl acrylate), and poly(octadecyl acrylate) were explored as building blocks with different hydrophobicities and glass transition temperatures. The five bifunctional trithiocarbonates synthesised belong to two classes: the first are RAFT agents which position the active group of the growing polymer chain at the outer ends of the polymer (Z-C(=S)-S-R-S-C(=S)-Z, type I); the second class places the active groups in the middle of the growing polymer chain (R-S-C(=S)-Z-C(=S)-S-R, type II). These RAFT agents enable the straightforward synthesis of amphiphilic triblock copolymers in only two steps, allowing the nature of the hydrophobic blocks as well as the lengths of the hydrophobic and hydrophilic blocks to be varied broadly with good molar-mass control and narrow polydispersities. Specific side reactions were observed for some RAFT agents, including the elimination of ethylene trithiocarbonate in the early stage of the polymerisation of styrene mediated by certain agents of type II, while the use of RAFT agents of type I resulted in retardation of the chain extension of PNIPAM with styrene. These results underline the need for a careful choice of RAFT agent for a given task. The various copolymers self-assemble in dilute and semi-concentrated aqueous solution into small flower-like micelles.
No indication of the formation of micellar clusters was found; physical hydrogels form only at high concentrations. The reversible thermoresponsive behaviour of the ABA- and BAB-type copolymer solutions in water, with A made of PNIPAM, was examined by turbidimetry and dynamic light scattering (DLS). The cloud point of the copolymers was nearly identical to that of the homopolymer and varied between 28 and 32 °C for concentrations from 0.01 to 50 wt%. This is attributed to the formation of micelles in which the hydrophobic blocks are shielded from direct contact with water, so that the hydrophobic interactions of the copolymers are nearly the same as for pure PNIPAM. Dynamic light scattering measurements showed the presence of small micelles at ambient temperature. The aggregate size increased dramatically above the cloud point, indicating a change of aggregate morphology into clusters due to the thermosensitivity of the PNIPAM block. The rheological behaviour of the amphiphilic BAB triblock copolymers demonstrated the formation of hydrogels at high concentrations, typically above 30-35 wt%. The minimum concentration needed to induce hydrogel formation decreased with increasing glass transition temperature and increasing length of the end blocks. The weak tendency to form hydrogels was attributed to only a small share of bridging micelles, due to the strong-segregation regime. In order to learn about the role of the nature of the thermoresponsive block in the aggregation, a new BAB triblock copolymer consisting of short polystyrene end blocks and PMDEGA as stimuli-responsive middle block was prepared and investigated. Contrary to PNIPAM, dilute aqueous solutions of PMDEGA and of its block copolymers showed reversible phase transition temperatures characterised by a strong dependence on the polymer composition. Moreover, the PMDEGA block copolymer allowed the formation of physical hydrogels at lower concentrations, i.e. from 20 wt%.
This result suggests that PMDEGA has a higher degree of water-swellability than PNIPAM.
Semi-empirical sea-level models (SEMs) exploit physically motivated empirical relationships between global sea level and certain drivers, in the following global mean temperature. This model class evolved as a supplement to process-based models (Rahmstorf, 2007), which were unable to fully represent all relevant processes: they failed to capture past sea-level change (Rahmstorf et al., 2012) and were thought likely to underestimate future sea-level rise. Semi-empirical models were found to be a fast and useful tool for exploring the uncertainties in future sea-level rise, consistently giving significantly higher projections than process-based models.
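The basic relation of this model class, in the form introduced by Rahmstorf (2007), links the rate of sea-level rise to the temperature anomaly: dH/dt = a (T - T0), with sea level H, global mean temperature T, an equilibrium temperature T0 and a sensitivity a. A minimal forward integration might look like the sketch below; the parameter values and the warming scenario are purely illustrative:

```python
import numpy as np

def project_sea_level(temps, a=3.4, t0=-0.5, dt=1.0):
    """Integrate dH/dt = a * (T - T0), a Rahmstorf-(2007)-type SEM.
    temps: annual temperature anomalies (K); a in mm/yr/K (illustrative
    value of the order of the original fit); t0 assumed here.
    Returns sea level (mm) relative to the start of the series."""
    rates = a * (np.asarray(temps, dtype=float) - t0)
    return np.concatenate(([0.0], np.cumsum(rates) * dt))

# Illustrative scenario: linear warming of 1 K over 100 years.
temps = np.linspace(0.0, 1.0, 100)
h = project_sea_level(temps)
print(round(h[-1], 1))  # prints 340.0 (mm) for these made-up parameters
```

In practice the parameters are calibrated against past sea-level and temperature records, and later model variants add further terms (e.g. a time-dependent equilibrium level), which is what the validation studies described below assess.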
In the following, different aspects of semi-empirical sea-level modelling have been studied. Models were first validated using various data sets of global sea level and temperature. SEMs were then applied to the glacier contribution to sea level, and used to infer past global temperature from sea-level data via inverse modelling. The periods studied encompass the instrumental period, covered by tide gauges (starting 1700 CE (Common Era) in Amsterdam) and satellites (first launched in 1992 CE); the era from 1000 BCE (before CE) to the present; and the full length of the Holocene (using proxy data). Accordingly, different data, model formulations and implementations have been used. Bittermann et al. (2013) showed that SEMs correctly predict 20th-century sea level when calibrated with data up to 1900 CE. SEMs also turned out to give better predictions than the models of the Intergovernmental Panel on Climate Change (IPCC) 4th assessment report (AR4, IPCC (2007)) for the period 1961-2003 CE.
With the first multi-proxy reconstruction of global sea level as input, estimates of the human-induced component of modern sea-level change and projections of future sea-level rise were calculated (Kopp et al., 2016). With 90% confidence, more than 40% of the observed 20th-century sea-level rise is anthropogenic. With the new semi-empirical and IPCC (2013) 5th assessment report (AR5) projections, the gap between SEM and process-based model projections closes, lending higher credibility to both. Combining all scenarios, from strong mitigation to business as usual, a global sea-level rise of 28-131 cm relative to 2000 CE is projected with 90% confidence. The decision for a low-carbon pathway could halve the expected global sea-level rise by 2100 CE.
Present-day temperature and thus sea level are driven by the globally acting greenhouse-gas forcing. In contrast, the Milankovitch forcing acting on Holocene timescales results mainly in a northern-hemisphere temperature change. Therefore a semi-empirical model can be driven with northern-hemisphere temperatures, which makes it possible to model the main subcomponent of sea-level change over this period. It was shown that an additional positive constant rate, of the order of the estimated Antarctic sea-level contribution, is then required to explain the sea-level evolution over the Holocene. Thus the global sea level following the climatic optimum can be interpreted as the sum of a temperature-induced sea-level drop and a positive long-term contribution, likely an ongoing response to deglaciation coming from Antarctica.
In this thesis, I investigated the factors influencing the growth and vertical distribution of planktonic algae in extremely acidic mining lakes (pH 2-3). In the focal study site, Lake 111 (pH 2.7; Lusatia, Germany), the chrysophyte Ochromonas sp. dominates in the upper water strata and the chlorophyte Chlamydomonas sp. in the deeper strata, forming a pronounced deep chlorophyll maximum (DCM). Inorganic carbon (IC) limitation constrained the phototrophic growth of Chlamydomonas sp. in the upper water strata, whereas in deeper strata light limited its phototrophic growth. When compared with published data for algae from neutral lakes, Chlamydomonas sp. from Lake 111 exhibited a lower maximum growth rate, an enhanced compensation point and higher dark respiration rates, suggesting higher metabolic costs due to the extreme physico-chemical conditions. The photosynthetic performance of Chlamydomonas sp. decreased in high-light-adapted cells when IC was limiting. In addition, the minimal phosphorus (P) cell quota was suggestive of a higher P requirement under IC limitation. Subsequently, it was shown that Chlamydomonas sp. is a mixotroph, able to enhance its growth rate by taking up dissolved organic carbon (DOC) via osmotrophy; it could therefore survive in the deeper water strata where DOC concentrations were higher and light was limiting. However, neither IC limitation, P availability nor in situ DOC concentrations (bottom-up control) could fully explain the vertical distribution of Chlamydomonas sp. in Lake 111. Conversely, when a novel approach was adopted, grazing by the phagotrophic phototroph Ochromonas sp. was found to exert top-down control on its prey (Chlamydomonas sp.), reducing prey abundance in the upper water strata. This, coupled with the fact that Chlamydomonas sp. uses DOC for growth, leads to a pronounced accumulation of Chlamydomonas sp. cells at depth: an apparent DCM.
Therefore, grazing appears to be the main factor influencing the vertical distribution of algae observed in Lake 111. The knowledge gained from this thesis provides information essential for predicting the effect of strategies to neutralize the acidic mining lakes on the food-web.
Recently, epidemiological studies have highlighted a strong association of dairy intake with lower disease risk, as well as with increased levels of odd-chain fatty acids (OCFA). While the OCFA also show inverse associations with disease incidence, the direct dietary sources and mode of action of the OCFA remain poorly understood.
The overall aim of this thesis was to determine the impact of two main fractions of dairy, milk fat and milk protein, on OCFA levels and their influence on health outcomes under high-fat (HF) diet conditions. Both fractions represent viable sources of OCFA, as milk fats contain a significant amount of OCFA and milk proteins are high in branched chain amino acids (BCAA), namely valine (Val) and isoleucine (Ile), which can produce propionyl-CoA (Pr-CoA), a precursor for endogenous OCFA synthesis, while leucine (Leu) does not. Additionally, this project sought to clarify the specific metabolic effects of the OCFA heptadecanoic acid (C17:0).
Both short-term and long-term feeding studies were performed using male C57BL/6JRj mice fed HF diets supplemented with milk fat or C17:0, as well as milk protein or individual BCAA (Val; Leu) to determine their influences on OCFA and metabolic health. Short-term feeding revealed that both milk fractions induce OCFA in vivo, and the increases elicited by milk protein could be, in part, explained by Val intake. In vitro studies using primary hepatocytes further showed an induction of OCFA after Val treatment via de novo lipogenesis and increased α-oxidation. In the long-term studies, both milk fat and milk protein increased hepatic and circulating OCFA levels; however, only milk protein elicited protective effects on adiposity and hepatic fat accumulation—likely mediated by the anti-obesogenic effects of an increased Leu intake. In contrast, Val feeding did not increase OCFA levels nor improve obesity, but rather resulted in glucotoxicity-induced insulin resistance in skeletal muscle mediated by its metabolite 3-hydroxyisobutyrate (3-HIB). Finally, while OCFA levels correlated with improved health outcomes, C17:0 produced negligible effects in preventing HF-diet induced health impairments.
The results presented herein demonstrate that the beneficial health outcomes associated with dairy intake are likely mediated through the effects of milk protein, while OCFA levels are likely a mere association and do not play a significant causal role in metabolic health under HF conditions. Furthermore, the highly divergent metabolic effects of the two BCAA, Leu and Val, unraveled herein highlight the importance of protein quality.
Adherent cells constantly collect information about the mechanical properties of their extracellular environment by actively pulling on it through cell-matrix contacts, which act as mechanosensors. In recent years, the sophisticated use of elastic substrates has shown that cells respond very sensitively to changes in the effective stiffness of their environment, reorganizing their cytoskeleton in response to mechanical input. We develop a theoretical model to predict cellular self-organization in soft materials on a coarse-grained level. Although cell organization in principle results from complex regulatory events inside the cell, the typical response to mechanical input seems to be a simple preference for large effective stiffness, possibly because force is generated more efficiently in a stiffer environment. The term effective stiffness comprises effects of both rigidity and prestrain in the environment. This observation can be turned into an optimization principle in elasticity theory. By specifying the cellular probing-force pattern and by modeling the environment as a linear elastic medium, one can predict preferred cell orientation and position. Various examples of cell organization of large practical interest are considered theoretically: cells in external strain fields and cells close to boundaries or interfaces, for different sample geometries and boundary conditions. For this purpose the elastic equations are solved exactly for an infinite space, an elastic half-space and an elastic sphere. The predictions of the model are in excellent agreement with experiments on fibroblast cells, both on elastic substrates and in hydrogels. Mechanically active cells like fibroblasts could also interact elastically with each other.
We calculate the optimal structures on elastic substrates as a function of material properties, cell density and the geometry of cell positioning, such that each cell maximizes the effective stiffness in its environment due to the traction of all the other cells. Finally, we apply Monte Carlo simulations to study the effect of noise on cellular structure formation. The model not only contributes to a better understanding of many physiological situations; in the future it could also be used in biomedical applications to optimize protocols for artificial tissues with respect to sample geometry, boundary conditions, material properties or cell density.
The Arctic is considered a focal region in the ongoing climate change debate. The currently observed and predicted climate warming is particularly pronounced in the high northern latitudes. Rising temperatures in the Arctic cause progressively deeper and longer permafrost thawing during the Arctic summer, creating an ‘active layer’ with high bioavailability of nutrients and labile carbon for microbial consumption. The microbial mineralization of permafrost carbon creates large amounts of greenhouse gases, including carbon dioxide and methane, which can be released to the atmosphere, creating a positive feedback to global warming. However, to date, the microbial communities that drive the overall carbon cycle, and specifically methane production, in the Arctic are poorly constrained. To assess how these microbial communities will respond to the predicted climate changes, such as an increase in atmospheric and soil temperatures causing increased bioavailability of organic carbon, it is necessary to investigate not only the current status of this environment but also how these microbial communities reacted to climate changes in the past. This PhD thesis investigated three records from two different study sites in the Russian Arctic, including permafrost, lake shore and lake deposits from Siberia and Chukotka. A combined stratigraphic approach of microbial and molecular organic geochemical techniques was used to identify and quantify characteristic microbial gene and lipid biomarkers. Based on these data it was possible to characterize and identify the climate response of microbial communities involved in past carbon cycling during the Middle Pleistocene and the Late Pleistocene to Holocene. It is shown that previous warmer periods were associated with an expansion of bacterial and archaeal communities throughout the Russian Arctic, similar to present-day conditions.
Unlike during warmer periods, past glacial and stadial periods experienced a substantial decrease in the abundance of Bacteria and Archaea. This trend can also be confirmed for the community of methanogenic archaea, which were highly abundant and diverse during warm and particularly wet conditions. For the terrestrial permafrost, a direct effect of temperature on the microbial communities is likely. In contrast, it is suggested that the temperature rise in the scope of the glacial-interglacial climate variations led to an increase of primary production in the Arctic lake setting, as can be seen in the corresponding biogenic silica distribution. The availability of this algae-derived carbon is suggested to be a driver of the observed pattern in microbial abundance. This work demonstrates the effect of climate changes on the community composition of methanogenic archaea. Methanosarcina-related species were abundant throughout the Russian Arctic and were able to adapt to changing environmental conditions. In contrast, members of the Methanocellales and Methanomicrobiales were not able to adapt to past climate changes. This PhD thesis provides first evidence that past climatic warming led to an increased abundance of microbial communities in the Arctic, closely linked to the cycling of carbon and methane production. With the predicted climate warming, it may therefore be anticipated that microbial communities will expand extensively. Increasing temperatures in the Arctic will affect the temperature-sensitive parts of the current microbial communities, possibly leading to a suppression of cold-adapted species and the prevalence of methanogenic archaea that tolerate or adapt to increasing temperatures. These changes in the composition of methanogenic archaea will likely increase the methane production potential of high-latitude terrestrial regions, changing the Arctic from a carbon sink to a carbon source.
From its first use in the field of biochemistry, instrumental analysis has offered a variety of invaluable tools for the comprehensive description of biological systems. Multi-selective methods that aim to cover as many endogenous compounds as possible in biological samples use different analytical platforms and include methods like gene expression profiling and metabolite profile analysis. The enormous amount of data generated by profiling methods needs to be evaluated in a manner appropriate to the question under investigation. The new field of systems biology rises to the challenge of developing strategies for collecting, processing, interpreting, and archiving this vast amount of data, and of making those data available in the form of databases, tools, models, and networks to the scientific community. Against the background of this development, a multi-selective method for the determination of phytohormones was developed and optimised, complementing the profile analyses already in use (Chapter I). The general feasibility of a simultaneous analysis of plant metabolites and phytohormones in one sample set-up was tested by studies on the analytical robustness of the metabolite profiling protocol. The recovery of plant metabolites proved to be satisfactorily robust against variations in the extraction protocol when common extraction procedures for phytohormones were used; a joint extraction of metabolites and hormones from plant tissue seems practicable (Chapter II). Quantification of compounds within the context of profiling methods requires particular scrutiny (Chapter II). In Chapter III, the potential of stable-isotope in vivo labelling as a normalisation strategy for profiling data acquired with mass spectrometry is discussed. First promising results were obtained for reproducible quantification by stable-isotope in vivo labelling, which was applied in metabolomic studies.
In-parallel application of metabolite and phytohormone analysis to seedlings of the model plant Arabidopsis thaliana exposed to sulfate limitation was used to investigate the relationship between the endogenous concentration of signal elements and the ‘metabolic phenotype’ of a plant. An automated evaluation strategy was developed to process data on compounds of diverse physiological nature, such as signal elements, genes and metabolites, all of which act in vivo in a conditional, time-resolved manner (Chapter IV). Final data analysis focussed on the conditionality of signal-metabolome interactions.
Individuals have an intrinsic need to express themselves to other humans within a given community by sharing their experiences, thoughts, actions, and opinions. As a means, they mostly prefer modern online social media platforms such as Twitter, Facebook, personal blogs, and Reddit. Users of these social networks interact by drafting their own status updates, publishing photos, and giving likes, leaving behind a considerable amount of data to be analyzed. Researchers recently started exploring shared social media data to better understand online users and predict their Big Five personality traits: agreeableness, conscientiousness, extraversion, neuroticism, and openness to experience. This thesis investigates the possible relationship between users’ Big Five personality traits and the information published on their social media profiles. Public Facebook data such as linguistic status updates, meta-data of liked objects, profile pictures, and records of emotions or reactions were adopted to address the proposed research questions. Several machine learning prediction models were constructed in various experiments to utilize the engineered features correlated with the Big Five personality traits. The final predictive performances improved the prediction accuracy compared to state-of-the-art approaches, and the models were evaluated against established benchmarks in the domain. The research experiments were implemented with attention to ethical and privacy concerns. Furthermore, the research aims to raise awareness about privacy among social media users and to show what third parties can reveal about users’ private traits from what they share and how they act on different social networking platforms.
In the second part of the thesis, the variation in personality development is studied within a cross-platform environment comprising the Facebook and Twitter platforms. The personality profiles constructed on these social platforms are compared to evaluate the effect of the platform used on a user’s personality development. Likewise, personality continuity and stability analyses are performed using samples from the two social media platforms. The implemented experiments are based on ten-year longitudinal samples, aiming to understand users’ long-term personality development and to further unlock the potential of cooperation between psychologists and data scientists.
This work analyzes the saving and consumption behavior of agents faced with the possibility of unemployment in a dynamic and stochastic life cycle model. The intertemporal optimization is based on Dynamic Programming with a backward recursion algorithm. The implemented uncertainty is not based on income shocks, as in traditional life cycle models, but on Markov probabilities, where the probability of the agent’s next employment status depends on the current status. The utility function used is a CRRA function (constant relative risk aversion) combined with a CES function (constant elasticity of substitution) and includes several consumption goods, a subsistence level, money and a bequest function.
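The backward-recursion idea can be sketched in a few lines (illustrative only: the two-state employment process, all parameter values and the plain asset grid are assumptions, and the thesis’s CES aggregation over several goods, subsistence level, money and bequest function are omitted here).

```python
import numpy as np

gamma, beta, r = 2.0, 0.96, 0.03          # risk aversion, discount factor, interest (assumed)
income = np.array([0.3, 1.0])             # income in states 0 = unemployed, 1 = employed
P = np.array([[0.5, 0.5],                 # P[s, s'] = Prob(next status s' | current status s)
              [0.1, 0.9]])
assets = np.linspace(0.0, 4.0, 81)        # discrete asset grid
T = 50                                    # finite horizon

def u(c):
    # CRRA utility; infeasible (non-positive) consumption is heavily penalized
    c_safe = np.maximum(c, 1e-9)
    util = c_safe**(1 - gamma) / (1 - gamma)
    return np.where(c > 1e-9, util, -1e12)

V = np.zeros((2, assets.size))            # terminal value (no bequest motive in this sketch)
for t in range(T):                        # backward recursion over time
    V_new = np.empty_like(V)
    for s in range(2):
        # consumption for every (current assets, next assets) pair
        c = income[s] + (1 + r) * assets[:, None] - assets[None, :]
        EV = P[s] @ V                     # expected continuation value over next status
        V_new[s] = np.max(u(c) + beta * EV[None, :], axis=1)
    V = V_new
```

The Markov structure enters only through `P[s] @ V`: unlike an i.i.d. income-shock model, the expected continuation value depends on the current employment status.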
In the living cell, the organization of the complex internal structure relies to a large extent on molecular motors. Molecular motors are proteins that are able to convert chemical energy from the hydrolysis of adenosine triphosphate (ATP) into mechanical work. Being about 10 to 100 nanometers in size, the molecules act on a length scale for which thermal collisions have a considerable impact on their motion. In this way, they constitute paradigmatic examples of thermodynamic machines out of equilibrium. This study develops a theoretical description for the energy conversion by the molecular motor myosin V, using many different aspects of theoretical physics. Myosin V has been studied extensively in both bulk and single molecule experiments. Its stepping velocity has been characterized as a function of external control parameters such as nucleotide concentration and applied forces. In addition, numerous kinetic rates involved in the enzymatic reaction of the molecule have been determined. For forces that exceed the stall force of the motor, myosin V exhibits a 'ratcheting' behaviour: For loads in the direction of forward stepping, the velocity depends on the concentration of ATP, while for backward loads there is no such influence. Based on the chemical states of the motor, we construct a general network theory that incorporates experimental observations about the stepping behaviour of myosin V. The motor's motion is captured through this network description, supplemented by a Markov process for the motor dynamics. This approach has the advantage of directly addressing the chemical kinetics of the molecule and of treating the mechanical and chemical processes on an equal footing. We utilize constraints arising from nonequilibrium thermodynamics to determine motor parameters and demonstrate that the motor behaviour is governed by several chemomechanical motor cycles.
In addition, we investigate the functional dependence of stepping rates on force by deducing the motor's response to external loads via an appropriate Fokker-Planck equation. For substall forces, the dominant pathway of the motor network is profoundly different from the one for superstall forces, which leads to a stepping behaviour that is in agreement with the experimental observations. The extension of our analysis to Markov processes with absorbing boundaries allows for the calculation of the motor's dwell time distributions. These reveal aspects of the coordination of the motor's heads and contain direct information about the backsteps of the motor. Our theory provides a unified description for the myosin V motor as studied in single motor experiments.
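How dwell times follow from a Markov process with an absorbing boundary can be illustrated with a toy two-state sketch (assumed rates, not the myosin V network itself): for the generator Q restricted to the transient states, the mean dwell time is E[T] = p0 (-Q)^{-1} 1, which a Gillespie simulation of the same chain reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Transient-state generator Q (assumed rates): off-diagonal entries are
# rates between the two transient states; the remaining outflow of each
# row is the rate into the absorbing boundary (a completed step).
Q = np.array([[-3.0, 1.0],
              [ 2.0, -2.5]])
absorb = -Q.sum(axis=1)                  # absorption rates: [2.0, 0.5]
p0 = np.array([1.0, 0.0])                # motor starts in transient state 0

# analytic mean dwell time: E[T] = p0 (-Q)^{-1} 1
mean_exact = p0 @ np.linalg.solve(-Q, np.ones(2))

def sample_dwell_time():
    # Gillespie simulation of the chain until absorption
    s, t = 0, 0.0
    while True:
        hop = Q[s, 1 - s]                # rate to the other transient state
        out = hop + absorb[s]            # total escape rate from state s
        t += rng.exponential(1.0 / out)
        if rng.random() * out >= hop:    # absorbed: dwell time complete
            return t
        s = 1 - s

dwells = [sample_dwell_time() for _ in range(20000)]
mean_sim = float(np.mean(dwells))
```

The full dwell time density of such a chain is phase-type, f(t) = p0 exp(Qt) a with a the absorption-rate vector; the sketch only checks its first moment.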
Sucrose synthase (Susy) is a key enzyme of sucrose metabolism, catalysing the reversible conversion of sucrose and UDP to UDP-glucose and fructose. Therefore, its activity, localization and function have been studied in various plant species. It has been shown that Susy can play a role in supplying energy to companion cells for phloem loading (Fu and Park, 1995), provides substrates for starch synthesis (Zrenner et al., 1995), and supplies UDP-glucose for cell wall synthesis (Haigler et al., 2001). Analysis of the Arabidopsis genome identified six Susy isoforms. The expression of these isoforms was investigated using promoter-reporter gene constructs (GUS) and real-time RT-PCR. Although these isoforms are closely related at the protein level, they have radically different spatial and temporal patterns of expression in the plant, with no two isoforms showing the same distribution. More than one isoform is expressed in all organs examined. Some of them show high but specific expression in particular organs or developmental stages, whilst others are constantly expressed throughout the whole plant and across various stages of development. The in planta functions of the six Susy isoforms were explored through analysis of T-DNA insertion mutants and RNAi lines. Plants without expression of individual isoforms show no differences in growth and development, and are not significantly different from wild-type plants in soluble sugar, starch and cellulose contents under all growth conditions investigated. Analysis of a T-DNA insertion mutant lacking the Sus3 isoform, which is exclusively expressed in stomatal cells, revealed only a minor influence on guard cell osmoregulation and/or bioenergetics. Although none of the sucrose synthases appears to be essential for normal growth under our standard growth conditions, they may be necessary for growth under stress conditions. Different isoforms of sucrose synthase respond differently to various abiotic stresses.
It has been shown that oxygen deprivation up-regulates Sus1 and Sus4 and increases total Susy activity. However, the analysis of plants with reduced expression of both Sus1 and Sus4 revealed no obvious effects on plant performance under oxygen deprivation. Low temperature up-regulates Sus1 expression, but the loss of this isoform has no effect on the freezing tolerance of non-acclimated and cold-acclimated plants. These data provide a comprehensive overview of the expression of this gene family, which supports some of the previously reported roles for Susy and indicates the involvement of specific isoforms in metabolism and/or signalling.
Sulphur, a macronutrient essential for plant growth, is among the most versatile elements in living organisms. Unfortunately, little is known about the regulation of sulphate uptake and assimilation by plants. Identification of sulphate signalling processes will make it possible to control sulphate acquisition and assimilation, and may prove useful in the future for improving sulphur-use efficiency in agriculture. Many of the genes involved in sulphate metabolism are regulated at the transcriptional level by the products of other genes, called transcription factors (TFs). Several published experiments revealed TF genes that respond to sulphate deprivation, but none of these has so far been characterized functionally. Thus, we aimed at identifying and characterising transcription factors that control sulphate metabolism in the model plant Arabidopsis thaliana. To achieve that goal, we postulated that factors regulating Arabidopsis responses to inorganic sulphate deficiency change their transcript levels under sulphur-limited conditions. By comparing TF transcript profiles from plants grown under different sulphate regimes, we identified TF genes that may specifically induce or repress changes in the expression of genes that allow plants to adapt to changes in sulphate availability. Candidate genes obtained from this screening were tested by reverse genetics approaches. Transgenic plants constitutively overproducing selected TF genes and mutant plants lacking functional selected TF genes (knock-out) were used. By comparing metabolite and transcript profiles from transgenic and wild-type plants, we aimed at confirming the role of selected AP2 TF candidate genes in plant adaptation to sulphur unavailability.
After preliminary characterisation of the WRKY24 and MYB93 TF genes, we postulate that these factors are involved in a complex multifactorial regulatory network, in which WRKY24 and MYB93 would act as superior factors regulating other transcription factors directly involved in the regulation of S-metabolism genes. Results obtained for plants overproducing the TOE1 and TOE2 TF genes suggest that these factors may be involved in a mechanism that promotes the synthesis of an essential amino acid, methionine, over the synthesis of another amino acid, cysteine. Thus, the TOE1 and TOE2 genes might be part of the transcriptional regulation of methionine synthesis. Approaches creating genetically manipulated plants may produce plant phenotypes of immediate biotechnological interest, such as plants with increased sulphate or sulphur-containing amino acid content, or plants better adapted to sulphate unavailability.
Carbonates play a key role in the chemistry and dynamics of our planet. They are directly connected to the CO2 budget of our atmosphere and have a great impact on the deep carbon cycle. Moreover, recent studies have shown that carbonates are stable along the geothermal gradient down to conditions of the Earth's lower mantle, changing their crystal structure and related properties. Subducted carbonates may also react with silicates to form new phases. These reactions will redistribute elements such as calcium (Ca), magnesium (Mg), iron (Fe) and carbon in the form of carbon dioxide (CO2), but also trace elements that are carried by the carbonates. The trace elements of most interest are strontium (Sr) and rare earth elements (REE), which have been found to be important constituents in the composition of the primitive lower mantle and in mineral inclusions found in super-deep diamonds. However, the stability of carbonates in the presence of mantle silicates at relevant temperatures is far from being well understood. Related to this, very little is known about the distribution processes of trace elements between carbonates and mantle silicates. To shed light on these processes, we studied reactions between Sr- and REE-containing CaCO3 and Mg/Fe-bearing silicates of the system (Mg,Fe)2SiO4 - (Mg,Fe)SiO3 at high pressure and high temperature, using synchrotron-radiation-based μ-X-ray diffraction (μ-XRD) and μ-X-ray fluorescence (μ-XRF) with μm resolution in a laser-heated diamond anvil cell. X-ray diffraction is used to derive the structural changes of the phase reactions, whereas X-ray fluorescence gives information on the chemical changes in the sample. In-situ experiments at high pressure and high temperature were performed at beamline P02.2 at PETRA III (Hamburg, Germany) and at beamline ID27 at the ESRF (Grenoble, France).
In addition to μ-XRD and μ-XRF, ex-situ measurements were made on the recovered sample material using transmission electron microscopy (TEM) and provided further insights into the reaction kinetics of carbonate-silicate reactions.
Our investigations show that CaCO3 is unstable in the presence of mantle silicates above 1700 K, and a reaction takes place in which magnesite plus CaSiO3-perovskite are formed. In addition, we observed that a high iron content in the carbonate-silicate system favours dolomite formation during the reaction. The subduction of natural carbonates with significant amounts of Sr motivated a comprehensive investigation of the stability not only of CaCO3 phases in contact with mantle silicates but also of SrCO3 (and of Sr-bearing CaCO3). We found that SrCO3 reacts with (Mg,Fe)SiO3-perovskite to form magnesite, and we gained evidence for the formation of SrSiO3-perovskite.
To complement our study on the stability of SrCO3 at conditions of the Earth's lower mantle, we performed powder X-ray diffraction and single crystal X-ray diffraction experiments at ambient temperature and up to 49 GPa. We observed a transformation from SrCO3-I into a new high-pressure phase SrCO3-II at around 26 GPa with Pmmn crystal structure and a bulk modulus of 103(10) GPa. This information is essential to fully understand the phase behaviour and stability of carbonates in the Earth's lower mantle and to elucidate the possibility of introducing Sr into mantle silicates by carbonate-silicate reactions.
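The abstract does not state which equation of state was fitted to obtain the bulk modulus of SrCO3-II; as a hedged illustration, the third-order Birch-Murnaghan equation of state, a common choice in such high-pressure diffraction studies, relates pressure and compression as sketched below, evaluated with the reported K0 = 103 GPa and an assumed pressure derivative K0' = 4 (at which the third-order form reduces to second order).

```python
def birch_murnaghan_3rd(V, V0, K0, K0p=4.0):
    """Pressure (same units as K0) at volume V for a third-order
    Birch-Murnaghan equation of state (illustrative, assumed form)."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return (1.5 * K0 * (eta**7 - eta**5)
            * (1.0 + 0.75 * (K0p - 4.0) * (eta**2 - 1.0)))

K0 = 103.0                                          # reported bulk modulus, GPa
P_ambient = birch_murnaghan_3rd(1.0, 1.0, K0)       # zero pressure at V = V0
P_compressed = birch_murnaghan_3rd(0.85, 1.0, K0)   # positive pressure on compression
```

Inverting this relation over the measured pressure-volume points is what yields fitted values such as K0 = 103(10) GPa; the uncertainty in parentheses refers to the last digits.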
Simultaneous recording of μ-XRD and μ-XRF in the μm-range over the heated areas provides spatial information not only about phase reactions but also on the elemental redistribution during the reactions. A comparison of the spatial intensity distribution of the XRF signal before and after heating indicates a change in the elemental distribution of Sr and an increase in Sr-concentration was found around the newly formed SrSiO3-perovskite. With the help of additional TEM analyses on the quenched sample material the elemental redistribution was studied at a sub-micrometer scale. Contrary to expectations from combined μ-XRD and μ-XRF measurements, we found that La and Eu were not incorporated into the silicate phases, instead they tend to form either isolated oxide phases (e.g. Eu2O3, La2O3) or hydroxyl-bastnäsite (La(CO3)(OH)). In addition, we observed the transformation from (Mg,Fe)SiO3-perovskite to low-pressure clinoenstatite during pressure release. The monoclinic structure (P21/c) of this phase allows the incorporation of Ca as shown by additional EDX analyses and, to a minor extent, Sr too.
Based on our experiments, we can conclude that a detection of the trace elements in-situ at high pressure and high temperature remains challenging. However, our first findings imply that silicates may incorporate the trace elements provided by the carbonates and indicate that carbonates may have a major effect on the trace element contents of mantle phases.
One of the main problems in machine learning is to train a predictive model from training data and to make predictions on test data. Most predictive models are constructed under the assumption that the training data are governed by the exact same distribution to which the model will later be exposed. In practice, control over the data collection process is often imperfect. A typical scenario is when labels are collected by questionnaires and one does not have access to the test population; for example, parts of the test population are underrepresented in the survey, out of reach, or do not return the questionnaire. In many applications, training data from the test distribution are scarce because they are difficult to obtain or very expensive, while data from auxiliary sources drawn from similar distributions are often cheaply available. This thesis centers around learning under differing training and test distributions and covers several problem settings with different assumptions on the relationship between training and test distributions, including multi-task learning and learning under covariate shift and sample selection bias. Several new models are derived that directly characterize the divergence between training and test distributions, without the intermediate step of estimating training and test distributions separately. The integral part of these models are rescaling weights that match the rescaled or resampled training distribution to the test distribution. Integrated models are studied in which only one optimization problem needs to be solved for learning under differing distributions. With a two-step approximation to the integrated models, almost any supervised learning algorithm can be adapted to biased training data. In case studies on spam filtering, HIV therapy screening, targeted advertising, and other applications, the performance of the new models is compared to state-of-the-art reference methods.
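The role of the rescaling weights can be illustrated with a small sketch (a synthetic one-dimensional example with known Gaussian densities, not one of the thesis's case studies, and without the direct divergence estimation the thesis develops): weighting each training point by p_test(x)/p_train(x) makes training-sample averages consistent with the test distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# covariate shift: training and test inputs follow different Gaussians
mu_tr, mu_te, sigma = 0.0, 1.0, 1.0
x_tr = rng.normal(mu_tr, sigma, 100_000)

def gauss(x, mu, s):
    return np.exp(-(x - mu)**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

# rescaling weights w(x) = p_test(x) / p_train(x) match the resampled
# training distribution to the test distribution
w = gauss(x_tr, mu_te, sigma) / gauss(x_tr, mu_tr, sigma)
w /= w.mean()                     # self-normalization

# example: estimate E_test[x] using only training samples
est_plain = x_tr.mean()           # biased toward the training mean 0
est_weighted = (w * x_tr).mean()  # consistent for the test mean 1
```

In realistic settings the densities are unknown, which is exactly why the thesis estimates the weights (the density ratio) directly rather than the two distributions separately.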
In this work, approaches for the development of new detection systems for the Analytical Ultracentrifuge (AUC) were explored. Unlike its counterparts among chromatographic fractionation techniques, the AUC has not yet been equipped with a full multidetection system, despite the potential benefit. In this study we tried to couple existing fundamental spectroscopic and scattering techniques, used in day-to-day science as tools for extracting analyte information, to the AUC. Trials were performed for adapting Raman, light scattering and UV/Vis detection (with the possibility to work with the whole range of wavelengths) to the AUC. Raman and light scattering were concluded to be possible detection systems for the AUC, while the development of a fast fibre-optics-based multiwavelength detector was completed. The multiwavelength detector demonstrated data generation matching literature and reference measurement data, with faster data collection than the commercial instrument. It became obvious that with the generation of data in 3-D space in the UV/Vis detection system, the user can select the wavelength for the evaluation of experimental results, as the data set contains information across the whole UV/Vis wavelength range. The advantage of fast data generation was exemplified by the evaluation of data for a mixture of three colloids. These data were in conformity with measurement results from normal radial experiments and showed no significant diffusion broadening. Thus it was concluded that with our multiwavelength detector, meaningful data in 3-D space can be collected at a much higher speed of data generation.
The selective infrared (IR) excitation of molecular vibrations is a powerful tool to control photoreactivity prior to electronic excitation in the ultraviolet/visible (UV/Vis) light regime ("vibrationally mediated chemistry"). For adsorbates on surfaces it has been theoretically predicted that IR preexcitation will lead to higher UV/Vis photodesorption yields and larger cross sections for other photoreactions. In a recent experiment, IR-mediated desorption of molecular hydrogen from a Si(111) surface on which atomic hydrogen and deuterium were co-adsorbed was achieved, following a vibrational mechanism as indicated by the isotope selectivity. In the present work, selective vibrational IR excitation of adsorbate molecules, treated as multi-dimensional oscillators on dissipative surfaces, has been simulated within the framework of open-system density matrix theory. Not only does potential-mediated inter-mode coupling pose an obstacle to selective excitation, but so does the coupling of the adsorbate ("system") modes to the electronic and phononic degrees of freedom of the surface ("bath"). Vibrational relaxation thereby takes place, depending on the availability of energetically fitting electron-hole (e/h) pairs and/or phonons (lattice vibrations) in the surface, on time scales ranging from milliseconds to several hundreds of femtoseconds. On metal surfaces, where relaxation of the adsorbate via the e/h pair mechanism dominates, vibrational lifetimes are usually shorter than on insulator or semiconductor surfaces, in the range of picoseconds, which is also the timescale of the IR pulses used here. Further inhibiting factors for selectivity can be the harmonicity of a mode and weak dipole activities ("dark modes"), rendering vibrational excitation with moderate field intensities difficult.
In addition to simple analytical pulses, optimal control theory (OCT) has been employed here to generate a suitable electric field that maximally populates the target state/mode. The complex OCT fields were analyzed by Husimi transformation, resolving the control field in time and energy. The adsorbate/surface systems investigated were CO/Cu(100), H/Si(100) and 2H/Ru(0001). These systems proved to be suitable models to study the above-mentioned effects. Further, effects of temperature, pure dephasing (elastic scattering processes), pulse duration and dimensionality (up to four degrees of freedom) were studied. It was possible to selectively excite single vibrational modes, often even state-selectively. Special processes like hot-band excitation, vibrationally mediated desorption and the excitation of "dark modes" were simulated. Finally, a novel OCT algorithm in density matrix representation has been developed which allows for time-dependent target operators and thus makes it possible to control the excitation mechanism instead of only the final state. The algorithm is based on a combination of global (iterative) and local (non-iterative) OCT schemes, such that short, globally controlled time intervals are coupled locally in time. Its numerical performance and accuracy were tested and verified, and it was successfully applied to stabilize a two-state linear combination and to enforce a successive "ladder climbing" in a rather harmonic system, where monochromatic, analytical pulses simultaneously excite several states, leading to a population loss in the target state.
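The open-system density matrix propagation underlying this work can be sketched for the simplest possible case (a generic two-level system with a single relaxation channel and assumed rates, not the adsorbate/surface Hamiltonians studied here), integrating a Lindblad-form master equation with an explicit Euler step.

```python
import numpy as np

# Toy open-system sketch: rho' = -i[H, rho]
#   + Gamma * (L rho L^+ - 0.5 {L^+ L, rho}),  with hbar = 1
H = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)   # level energies 0 and 1
L = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # decay channel |1> -> |0>
Gamma = 0.5                                             # relaxation rate (assumed)

def lindblad_rhs(rho):
    comm = -1j * (H @ rho - rho @ H)                    # coherent part
    LdL = L.conj().T @ L
    diss = Gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return comm + diss                                  # dissipative "bath" part

rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex) # start fully excited
dt = 0.001
for _ in range(10_000):                                 # propagate to t = 10
    rho = rho + dt * lindblad_rhs(rho)

p_excited = rho[1, 1].real   # decays roughly as exp(-Gamma * t)
```

Adding a time-dependent dipole coupling -mu(q)E(t) to H turns this propagation into the driven problem that the OCT loop optimizes over the field E(t).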
Nowadays, graph data models are employed when relationships between entities have to be stored and are in the scope of queries. For each entity, such a graph data model locally stores relationships to adjacent entities. Users employ graph queries to query and modify these entities and relationships. These graph queries employ graph patterns to look up all subgraphs in the graph data that satisfy certain graph structures. These subgraphs are called graph pattern matches. However, graph pattern matching is NP-complete for subgraph isomorphism. Thus, graph queries can suffer long response times when the number of entities and relationships in the graph data or the size of the graph patterns increases.
One possibility to improve graph query performance is to employ graph views that keep graph pattern matches for complex graph queries ready for later retrieval. However, these graph views must be maintained by means of incremental graph pattern matching to keep them consistent with the graph data from which they are derived when the graph data changes. This maintenance adds subgraphs that satisfy a graph pattern to the graph views and removes subgraphs that no longer satisfy a graph pattern from the graph views.
Current approaches for incremental graph pattern matching employ Rete networks. Rete networks are discrimination networks that enumerate and maintain all graph pattern matches of certain graph queries by employing a network of condition tests, which implement partial graph patterns that together constitute the overall graph query. Each condition test stores all subgraphs that satisfy its partial graph pattern. Thus, Rete networks suffer from high memory consumption, because they store a large number of partial graph pattern matches. However, it is precisely these partial graph pattern matches that enable Rete networks to update the stored graph pattern matches efficiently, because the network maintenance exploits the already stored partial matches to find new graph pattern matches. Nevertheless, other kinds of discrimination networks exist that can perform better in time and space than Rete networks. Currently, these other kinds of networks are not used for incremental graph pattern matching.
This thesis employs generalized discrimination networks for incremental graph pattern matching. These discrimination networks permit a generalized network structure of condition tests to enable users to steer the trade-off between memory consumption and execution time for the incremental graph pattern matching. For that purpose, this thesis contributes a modeling language for the effective definition of generalized discrimination networks. Furthermore, this thesis contributes an efficient and scalable incremental maintenance algorithm, which updates the (partial) graph pattern matches that are stored by each condition test. Moreover, this thesis provides a modeling evaluation, which shows that the proposed modeling language enables the effective modeling of generalized discrimination networks. Furthermore, this thesis provides a performance evaluation, which shows that a) the incremental maintenance algorithm scales, when the graph data becomes large, and b) the generalized discrimination network structures can outperform Rete network structures in time and space at the same time for incremental graph pattern matching.
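The core idea of incremental maintenance can be sketched for the smallest non-trivial case (an illustrative toy, far simpler than the generalized discrimination networks of this thesis, and handling only edge insertions, not deletions): all matches of a two-edge path pattern a -> b -> c are kept up to date while edges arrive, by joining each new edge with the already stored adjacency information instead of re-running the pattern from scratch.

```python
# Hypothetical minimal incremental matcher for the pattern a -> b -> c.
class IncrementalPathMatcher:
    def __init__(self):
        self.succ = {}        # adjacency: node -> set of successors
        self.pred = {}        # node -> set of predecessors
        self.matches = set()  # all triples (a, b, c) with edges a->b and b->c

    def add_edge(self, u, v):
        self.succ.setdefault(u, set()).add(v)
        self.pred.setdefault(v, set()).add(u)
        # the new edge u->v can extend matches in exactly two ways:
        for w in self.succ.get(v, ()):   # u -> v -> w
            self.matches.add((u, v, w))
        for a in self.pred.get(u, ()):   # a -> u -> v
            self.matches.add((a, u, v))

m = IncrementalPathMatcher()
for edge in [(1, 2), (2, 3), (2, 4), (0, 1)]:
    m.add_edge(*edge)
# m.matches now holds (1,2,3), (1,2,4) and (0,1,2)
```

Here the stored adjacency sets play the role of the partial matches kept by a discrimination network's condition tests: they cost memory, but each update touches only the neighbourhood of the changed edge, which is the time/space trade-off the thesis generalizes.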
Microsaccades
(2015)
The first thing we do upon waking is open our eyes. Rotating them in our eye sockets, we scan our surroundings and assemble the information into a picture in our head. Eye movements can be split into saccades and fixational eye movements, which occur when we attempt to fixate our gaze. The latter consist of microsaccades, drift and tremor. Before we even lift our eyelids, eye movements – such as saccades and microsaccades, which let the eyes jump from one position to another – have partially been prepared in the brain stem. Saccades and microsaccades are often assumed to be generated by the same mechanisms, but how saccades and microsaccades can be classified according to shape has not yet been reported in a statistical manner. Only in the last decade has research put more effort into investigating the properties and generation of microsaccades. Consequently, we are only beginning to understand the dynamic processes governing microsaccadic eye movements. Within this thesis, the dynamics governing the generation of microsaccades are assessed and a model for the underlying processes is developed. Eye movement trajectories from different experiments, recorded with a video-based eye tracking technique, are used, and a novel method is proposed for the scale-invariant detection of saccades (events of large amplitude) and microsaccades (events of small amplitude). Using a time-frequency approach, the method is examined with different experiments and validated against simulated data. A shape model is suggested that allows for a simple estimation of saccade- and microsaccade-related properties. For sequences of microsaccades, a time-dynamic Markov model with a memory horizon that changes over time is proposed, which can best describe such sequences.
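For orientation, a widely used baseline for (micro)saccade detection is the velocity-threshold algorithm of Engbert and Kliegl (2003). The sketch below implements that baseline in Python with assumed parameters and synthetic data; it is not the scale-invariant time-frequency method proposed in the thesis:

```python
import numpy as np

def detect_saccades(x, y, dt, lam=6.0, min_len=3):
    """Velocity-threshold (micro)saccade detection in the spirit of
    Engbert & Kliegl (2003); a common baseline, not the thesis's method."""
    # smoothed velocity estimate over a 5-sample moving window
    vx = (np.roll(x, -2) + np.roll(x, -1) - np.roll(x, 1) - np.roll(x, 2)) / (6 * dt)
    vy = (np.roll(y, -2) + np.roll(y, -1) - np.roll(y, 1) - np.roll(y, 2)) / (6 * dt)
    # median-based elliptic threshold, robust to slow drift and noise
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    crit = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1
    # collect runs of supra-threshold samples of sufficient duration
    events, start = [], None
    for i, flag in enumerate(crit):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start, i - 1))
            start = None
    return events

# synthetic fixation trace with one injected microsaccade-like step
rng = np.random.default_rng(1)
n, dt = 1000, 0.001
x = rng.normal(0, 0.01, n) + np.clip((np.arange(n) - 500) / 20.0, 0, 1)
y = rng.normal(0, 0.01, n)
events = detect_saccades(x, y, dt)
print(events)   # a single event spanning the injected step
```

The threshold is a multiple `lam` of a median-based velocity spread estimate, which makes the criterion robust to drift; runs shorter than `min_len` samples are discarded.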
Change points in time series are perceived as heterogeneities in the statistical or dynamical characteristics of the observations. Unraveling such transitions yields essential information for understanding the observed system's intrinsic evolution and potential external influences. A precise detection of multiple changes is therefore of great importance for various research disciplines, such as environmental sciences, bioinformatics and economics. The primary purpose of the detection approach introduced in this thesis is the investigation of transitions underlying direct or indirect climate observations. In order to develop a diagnostic approach capable of capturing such a variety of natural processes, generic statistical features in terms of central tendency and dispersion are employed in the light of Bayesian inversion. In contrast to established Bayesian approaches to multiple changes, the generic approach proposed in this thesis is not formulated in the framework of specialized, high-dimensional partition models requiring prior specification, but as a robust, low-dimensional kernel-based approach employing least informative prior distributions.
First of all, a local Bayesian inversion approach is developed to robustly infer the location and the generic patterns of a single transition. The analysis of synthetic time series comprising changes of different observational evidence, data loss and outliers validates the performance, consistency and sensitivity of the inference algorithm. To systematically investigate time series for multiple changes, the Bayesian inversion is extended to a kernel-based inference approach. By introducing basic kernel measures, the weighted kernel inference results are combined into a proxy for the posterior distribution of multiple transitions. The detection approach is applied to environmental time series with documented changes: the Nile River record at Aswan and observations from the weather station in Tuscaloosa, Alabama. The method's performance confirms the approach as a powerful diagnostic tool for deciphering multiple changes underlying direct climate observations.
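The principle of Bayesian change-point inversion can be illustrated in its simplest form: with a flat prior over change locations and plug-in segment means, the posterior of a change at time k is proportional to the likelihood of the data split at k. This is an illustrative Python toy for a single shift in central tendency, far simpler than the kernel-based multiple-change approach of the thesis:

```python
import numpy as np

# Toy Bayesian single change-point inference for a shift in the mean
# (flat prior over locations, plug-in segment means, known noise sigma).

def changepoint_posterior(x, sigma=1.0):
    n = len(x)
    logpost = np.full(n, -np.inf)
    for k in range(2, n - 1):              # require two samples per segment
        left, right = x[:k], x[k:]
        # Gaussian log-likelihood with segment means as plug-in estimates
        ll = (-np.sum((left - left.mean()) ** 2)
              - np.sum((right - right.mean()) ** 2)) / (2 * sigma ** 2)
        logpost[k] = ll
    p = np.exp(logpost - logpost.max())
    return p / p.sum()

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(2, 1, 40)])
post = changepoint_posterior(x)
print(int(np.argmax(post)))    # MAP estimate of the change location (true value: 60)
```

The full posterior, not only its maximum, is what carries the uncertainty information; multiple changes require either high-dimensional partition models or, as in the thesis, a kernel-based composition of such local inversions.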
Finally, the kernel-based Bayesian inference approach is used to investigate a set of complex terrigenous dust records interpreted as climate indicators of the African region during the Plio-Pleistocene. A detailed inference unravels multiple transitions underlying the indirect climate observations, which are interpreted as conjoint changes. The identified conjoint changes coincide with established global climate events. In particular, the two-step transition associated with the establishment of the modern Walker circulation contributes to the current discussion about the influence of paleoclimate changes on the environmental conditions in tropical and subtropical Africa around two million years ago.
In the present work, synchronization phenomena in complex dynamical systems exhibiting multiple time scales have been analyzed. Multiple time scales can be active in different manners. Three different systems have been analyzed with different methods from data analysis. The first system studied is a large heterogeneous network of bursting neurons, that is, a system with two predominant time scales: the fast firing of action potentials (spikes) and the bursts of repetitive spikes followed by a quiescent phase. This system has been integrated numerically and analyzed with methods based on recurrence in phase space. One interesting result is the different transitions to synchrony found in the two distinct time scales. Moreover, an anomalous synchronization effect can be observed in the fast time scale, i.e., there is a range of the coupling strength where desynchronization occurs. The second system, analyzed numerically as well as experimentally, is a pair of coupled CO₂ lasers in a chaotic bursting regime. This system is interesting due to its similarity with epidemic models. We explain the bursts by different time scales generated from unstable periodic orbits embedded in the chaotic attractor and perform a synchronization analysis of these different orbits utilizing the continuous wavelet transform. We find a diverse route to synchrony of these different observed time scales. The last system studied is a small network motif of limit-cycle oscillators. Specifically, we have studied a hub motif, which serves as an elementary building block for scale-free networks, a type of network found in many real-world applications. These hubs are of special importance for communication and information transfer in complex networks. Here, a detailed study of the mechanism of synchronization in oscillatory networks with a broad frequency distribution has been carried out. In particular, we find a remote synchronization of nodes in the network which are not directly coupled.
We also explain the responsible mechanism and its limitations and constraints. Further, we derive an analytic expression for it and show that information transmission in pure phase oscillators, such as those of the Kuramoto type, is limited. In addition to the numerical and analytical analysis, an experiment consisting of electrical circuits has been designed. The obtained results confirm the former findings.
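Indirect synchronization through a hub can be illustrated with a Kuramoto-type star motif, in which the leaves couple only to the central node yet frequency-lock to each other. This is a minimal sketch with assumed parameters; the thesis studies more general limit-cycle oscillators and an electronic-circuit realization:

```python
import numpy as np

# Kuramoto star motif: N leaves couple only to a central hub, yet for
# sufficient coupling the leaves lock to each other through the hub,
# i.e. nodes synchronize without a direct link (illustrative parameters).

def simulate_star(K, N=5, T=200.0, dt=0.01, seed=3):
    rng = np.random.default_rng(seed)
    omega = np.concatenate([[0.0], rng.uniform(-0.5, 0.5, N)])  # hub first
    theta = rng.uniform(0, 2 * np.pi, N + 1)
    for _ in range(int(T / dt)):                 # explicit Euler integration
        dtheta = omega.copy()
        dtheta[0] += K * np.sum(np.sin(theta[1:] - theta[0]))   # hub input
        dtheta[1:] += K * np.sin(theta[0] - theta[1:])          # leaf input
        theta += dt * dtheta
    return theta, omega

theta, omega = simulate_star(K=2.0)
# order parameter of the leaves alone: near 1 when the leaves are locked
r_leaves = abs(np.mean(np.exp(1j * theta[1:])))
print(round(r_leaves, 2))
```

For coupling K well above the spread of natural frequencies, the leaves lock through the hub and the leaf-only order parameter approaches 1; for weak K they drift incoherently.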
In this work, an approach to paleoclimate reconstruction for tropical East Africa is presented. After a short summary of modern climate conditions in the tropics and the peculiarities of the East African climate, the potential of reconstructing climate from paleolake sediments is discussed. As demonstrated, the hydrologic sensitivity of high-elevation closed-basin lakes in the Central Kenya Rift provides valuable guarantees for the establishment of long-term climate records. Temporal fluctuations of the limnological characteristics preserved in the lake sediments are used to define variations in the Quaternary climate history. Based on diatom analyses in radiocarbon- and 40Ar/39Ar-dated sediments, a chronology of paleoecologic fluctuations is developed for the Central Kenya Rift lakes Nakuru, Elmenteita and Naivasha. At least during the penultimate interglacial (around 140 to 60 kyr BP) and during the last interglacial (around 12 to 4 kyr BP), these lakes experienced several transgression-regression cycles on time intervals of about 11,000 years. Additionally, a long-term trend of lake evolution is found, suggesting a general succession from deep freshwater lakes towards more saline waters during the last million years. Using ecologic transfer functions and a simple lake-balance model, the observed paleohydrologic fluctuations are linked to potential precipitation-evaporation changes in the lake basins. Although tectonic influences on the drainage pattern and the effect of varying seepage are also investigated, it can be shown that even a small increase in precipitation of about 30±10% may have affected the hydrologic budget of the intra-rift lakes within the reconstructed range. The findings of this study help to assess the natural climate variability of East Africa. They furthermore reflect the sensitivity of the Central Kenya Rift lakes to fluctuations of large-scale climate parameters, such as solar radiation and sea-surface temperatures of the Indian Ocean.
For more than two centuries, plant ecologists have aimed to understand how environmental gradients and biotic interactions shape the distribution and co-occurrence of plant species. In recent years, functional trait–based approaches have been increasingly used to predict patterns of species co-occurrence and species distributions along environmental gradients (trait–environment relationships). Functional traits are measurable properties at the individual level that correlate well with important processes. Thus, they allow us to identify general patterns by synthesizing studies across specific taxonomic compositions, thereby fostering our understanding of the underlying processes of species assembly. However, the importance of specific processes has been shown to be highly dependent on the spatial scale under consideration. In particular, it remains uncertain which mechanisms drive species assembly and allow for plant species coexistence at smaller, more local spatial scales. Furthermore, there is still no consensus on how particular environmental gradients affect the trait composition of plant communities. For example, increasing drought because of climate change is predicted to be a main threat to plant diversity, although it remains unclear which traits of species respond to increasing aridity. Similarly, there is conflicting evidence of how soil fertilization affects the traits related to establishment ability (e.g., seed mass). In this cumulative dissertation, I present three empirical trait-based studies that investigate specific research questions in order to improve our understanding of species distributions along environmental gradients.
In the first case study, I analyze how annual species assemble at the local scale and how environmental heterogeneity affects different facets of biodiversity—i.e. taxonomic, functional, and phylogenetic diversity—at different spatial scales. The study was conducted in a semi-arid environment at the transition zone between desert and Mediterranean ecosystems that features a sharp precipitation gradient (Israel). Different null model analyses revealed strong support for environmentally driven species assembly at the local scale, since species with similar traits tended to co-occur and shared high abundances within microsites (trait convergence). A phylogenetic approach, which assumes that closely related species are functionally more similar to each other than distantly related ones, partly supported these results. However, I observed that species abundances within microsites were, surprisingly, more evenly distributed across the phylogenetic tree than expected (phylogenetic overdispersion). Furthermore, I showed that environmental heterogeneity has a positive effect on diversity, which was stronger for functional than for taxonomic diversity and increased with spatial scale. The results of this case study indicate that environmental heterogeneity may act as a stabilizing factor that maintains species diversity at local scales, since it influenced species distributions according to their traits and positively influenced diversity. All results were consistent along the precipitation gradient.
In the second case study (same study system as case study one), I explore the trait responses of two Mediterranean annuals (Geropogon hybridus and Crupina crupinastrum) along a precipitation gradient that is comparable to the maximum changes in precipitation predicted to occur by the end of this century (i.e., −30%). The heterocarpic G. hybridus showed strong trends in seed traits, suggesting that dispersal ability increased with aridity. By contrast, the homocarpic C. crupinastrum showed only a decrease in plant height as aridity increased, while leaf traits of both species showed no consistent pattern along the precipitation gradient. Furthermore, variance decomposition of traits revealed that most of the trait variation observed in the study system was actually found within populations. I conclude that trait responses towards aridity are highly species-specific and that the amount of precipitation is not the most striking environmental factor at this particular scale.
In the third case study, I assess how soil fertilization mediates—directly by increased nutrient addition and indirectly by increased competition—the effect of seed mass on establishment ability. For this experiment, I used 22 species differing in seed mass from dry grasslands in northeastern Germany and analyzed the interacting effects of seed mass with nutrient availability and competition on four key components of seedling establishment: seedling emergence, time of seedling emergence, seedling survival, and seedling growth. Neither seedling emergence nor its timing was affected by seed mass. However, I observed that the positive effect of seed mass on seedling survival was reduced under conditions of high nutrient availability, whereas the positive effect of seed mass on seedling growth was reduced only by competition. Based on these findings, I developed a conceptual model of how seed mass should change along a soil fertility gradient in order to reconcile conflicting findings from the literature. In this model, seed mass shows a U-shaped pattern along the soil fertility gradient as a result of changing nutrient availability and competition.
Overall, the three case studies highlight the role of environmental factors in species distribution and co-occurrence. Moreover, the findings of this thesis indicate that spatial heterogeneity at local scales may act as a stabilizing factor that allows species with different traits to coexist. In the concluding discussion, I critically debate intraspecific trait variability in plant community ecology, the use of phylogenetic relationships, and the use of easily measured key functional traits as proxies for species' niches. Finally, I offer my outlook for the future of functional plant community research.
Cargo transport by molecular motors is ubiquitous in all eukaryotic cells and is typically driven cooperatively by several molecular motors, which may belong to one or several motor species like kinesin, dynein or myosin. These motor proteins transport cargos such as RNAs, protein complexes or organelles along filaments, from which they unbind after a finite run length. Understanding how these motors interact and how their movements are coordinated and regulated is a central and challenging problem in studies of intracellular transport. In this thesis, we describe a general theoretical framework for the analysis of such transport processes, which enables us to explain the behavior of intracellular cargos based on the transport properties of individual motors and their interactions. Motivated by recent in vitro experiments, we address two different modes of transport: unidirectional transport by two identical motors and cooperative transport by actively walking and passively diffusing motors. The case of cargo transport by two identical motors involves an elastic coupling between the motors that can reduce the motors’ velocity and/or the binding time to the filament. We show that this elastic coupling leads, in general, to four distinct transport regimes. In addition to a weak coupling regime, kinesin and dynein motors are found to exhibit a strong coupling and an enhanced unbinding regime, whereas myosin motors are predicted to attain a reduced velocity regime. All of these regimes, which we derive both by analytical calculations and by general time scale arguments, can be explored experimentally by varying the elastic coupling strength. In addition, using the time scale arguments, we explain why previous studies came to different conclusions about the effect and relevance of motor-motor interference. In this way, our theory provides a general and unifying framework for understanding the dynamical behavior of two elastically coupled molecular motors. 
The second mode of transport studied in this thesis is cargo transport by actively pulling and passively diffusing motors. Although these passive motors do not participate in active transport, they strongly enhance the overall cargo run length. When an active motor unbinds, the cargo is still tethered to the filament by the passive motors, giving the unbound motor the chance to rebind and continue its active walk. We develop a stochastic description for such cooperative behavior and explicitly derive the enhanced run length for a cargo transported by one actively pulling and one passively diffusing motor. We generalize our description to the case of several pulling and diffusing motors and find an exponential increase of the run length with the number of involved motors.
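The rescue effect of a passive tether can be conveyed by a small stochastic simulation: the cargo advances while the active motor is bound, and an unbound active motor may rebind as long as the passive motor still tethers the cargo to the filament. The following Python sketch uses assumed, illustrative rates, not the parameters of a specific motor species:

```python
import numpy as np

# Gillespie-style sketch of cargo run length for one actively pulling
# motor, optionally assisted by one passively diffusing motor that
# tethers the cargo while the active motor is unbound.
# Rates (ea, ep, rebind) are assumed for illustration only.

def mean_run_length(with_passive, n_runs=2000, v=1.0,
                    ea=1.0, ep=0.25, rebind=5.0, seed=7):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_runs):
        active, passive = True, with_passive
        x = 0.0
        while active or passive:
            rates, moves = [], []
            if active:
                rates.append(ea); moves.append("a_off")
            else:
                rates.append(rebind); moves.append("a_on")
            if passive:
                rates.append(ep); moves.append("p_off")
            rtot = sum(rates)
            tau = rng.exponential(1.0 / rtot)
            if active:
                x += v * tau                     # cargo advances only while pulled
            event = rng.choice(moves, p=np.array(rates) / rtot)
            if event == "a_off":
                active = False
                if not passive:                  # nothing tethers the cargo
                    break
            elif event == "a_on":
                active = True
            else:
                passive = False
                if not active:
                    break
        total += x
    return total / n_runs

print(round(mean_run_length(False), 2))   # ~ v/ea = 1 without rescue
print(round(mean_run_length(True), 2))    # enhanced by the passive tether
```

With these assumed rates the passive motor substantially enhances the mean run length, since after each active unbinding the cargo is rescued with probability rebind/(rebind + ep).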
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why, can a software system be designed and realized which supports the stakeholders to do their work. To capture and structure the requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure the consistency of stakeholder scenarios, such methods introduce additional overhead since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes, as they are used in other disciplines such as design, on the other hand, allow designers to validate and iterate concepts and requirements with stakeholders in a feasible way. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios.
By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, through observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
Efficiently managing large state is a key challenge for data management systems. Traditionally, state is split into fast but volatile state in memory for processing and persistent but slow state on secondary storage for durability. Persistent memory (PMem), as a new technology in the storage hierarchy, blurs the lines between these states by offering both byte-addressability and low latency like DRAM as well as persistence like secondary storage. These characteristics have the potential to cause a major performance shift in database systems.
Driven by the potential impact that PMem has on data management systems, in this thesis we explore how such systems can use PMem. We first evaluate the performance of real PMem hardware in the form of Intel Optane in a wide range of setups. To this end, we propose PerMA-Bench, a configurable benchmark framework that allows users to evaluate the performance of customizable database-related PMem access. Based on experimental results obtained with PerMA-Bench, we discuss findings and identify general and implementation-specific aspects that influence PMem performance and should be considered in future work to improve PMem-aware designs. We then propose Viper, a hybrid PMem-DRAM key-value store. Based on PMem-aware access patterns, we show how to leverage PMem and DRAM efficiently to design a key database component. Our evaluation shows that Viper outperforms existing key-value stores by 4–18x for inserts while offering full data persistence and achieving similar or better lookup performance. Next, we show which changes must be made to integrate PMem components into larger systems. By the example of stream processing engines, we highlight limitations of current designs and propose a prototype engine that overcomes these limitations. This allows our prototype to fully leverage PMem's performance for its internal state management. Finally, in light of Optane's discontinuation, we discuss how insights from PMem research can be transferred to future multi-tier memory setups by the example of Compute Express Link (CXL).
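The division of labor in such a hybrid design can be conveyed by a toy sketch: a volatile in-memory index maps keys to offsets in a persistent append-only log, with an ordinary file standing in for PMem. This illustrates the general idea only, not Viper's actual data structures:

```python
import os
import tempfile

# Conceptual sketch of a hybrid key-value store: values live in a
# persistent append-only log (a file stands in for PMem here), while a
# volatile in-memory index maps keys to log offsets.
# Illustration of the idea only, not Viper's implementation.

class HybridKV:
    def __init__(self, path):
        self.log = open(path, "ab+")
        self.index = {}                     # DRAM: key -> (offset, length)

    def put(self, key, value):
        data = value.encode()
        offset = self.log.seek(0, os.SEEK_END)
        self.log.write(data)
        self.log.flush()                    # stand-in for a PMem flush/fence
        self.index[key] = (offset, len(data))

    def get(self, key):
        offset, length = self.index[key]
        self.log.seek(offset)
        return self.log.read(length).decode()

path = os.path.join(tempfile.mkdtemp(), "kv.log")
kv = HybridKV(path)
kv.put("k1", "hello")
kv.put("k2", "world")
print(kv.get("k1"))   # hello
```

On recovery, a real system would rebuild or persist the index; the point here is only that lookups are served from fast volatile memory while the durable bytes live in the persistent log.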
Overall, we show that PMem offers high performance for state management, bridging the gap between fast but volatile DRAM and persistent but slow secondary storage. Although Optane was discontinued, new memory technologies are continuously emerging in various forms and we outline how novel designs for them can build on insights from existing PMem research.
We perform magnetohydrodynamic (MHD) simulations of local box models of the turbulent interstellar medium (ISM) and analyse the process of amplification and saturation of mean magnetic fields with methods of mean-field dynamo theory. It is shown that the saturation of mean fields can be partially described by the prolonged diffusion time scales in the presence of dynamically significant magnetic fields. However, the outward wind also plays an essential role in the saturation in the case of a higher supernova (SN) rate. Algebraic expressions for the back-reaction of the magnetic field onto the turbulent transport coefficients are derived, which allow a complete description of the nonlinear dynamo. We also present the effects of dynamically significant mean fields on the ISM configuration and pressure distribution. We further add a cosmic-ray component to the simulations and investigate the kinematic growth of mean fields from a dynamo perspective.
The work done during the PhD studies focused on measurements of distribution functions of rotating galaxies using integral field spectroscopy observations.
Throughout the main body of research presented here, we use stellar velocity fields from the CALIFA (Calar Alto Legacy Integral Field Area) survey to obtain robust measurements of circular velocities for rotating galaxies of all morphological types. A crucial part of the work was enabled by the well-defined CALIFA sample selection criteria, which made it possible to reconstruct sample-independent distributions of galaxy properties.
In Chapter 2, we measure the distribution in absolute magnitude - circular velocity space for a well-defined sample of 199 rotating CALIFA galaxies using their stellar kinematics. Our aim in this analysis is to avoid subjective selection criteria and to take volume and large-scale structure factors into account. Using stellar velocity fields instead of gas emission line kinematics allows including rapidly rotating early-type galaxies. Our initial sample contains 277 galaxies with available stellar velocity fields and growth curve r-band photometry. After rejecting 51 velocity fields that could not be modelled due to the low number of bins, foreground contamination or significant interaction, we perform Markov Chain Monte Carlo (MCMC) modelling of the velocity fields, obtaining the rotation curve and kinematic parameters together with their realistic uncertainties. We perform an extinction correction and calculate the circular velocity v_circ, accounting for the pressure support of a given galaxy. The resulting galaxy distribution on the M_r - v_circ plane is then modelled as a mixture of two distinct populations, allowing robust and reproducible rejection of outliers, a significant fraction of which are slow rotators. The selection effects are understood well enough that the incompleteness of the sample can be corrected and the 199 galaxies can be weighted by volume and large-scale structure factors, enabling us to fit a volume-corrected Tully-Fisher relation (TFR). More importantly, we also provide the volume-corrected distribution of galaxies in the M_r - v_circ plane, which can be compared with cosmological simulations. The joint distribution of the luminosity and circular velocity space densities, representative over the range of -20 > M_r > -22 mag, can place more stringent constraints on galaxy formation and evolution scenarios than linear TFR fit parameters or the luminosity function alone.
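The flavor of such MCMC modelling can be conveyed with a toy Metropolis sampler that fits an assumed arctan rotation-curve form to mock velocities. This is illustrative Python only; the actual analysis models full 2D stellar velocity fields with many more parameters:

```python
import numpy as np

# Toy Metropolis sampler for rotation-curve parameters (v_c, r_t) with an
# assumed arctan rotation-curve form and mock 1D data; not the CALIFA
# velocity-field pipeline.

def vrot(r, v_c, r_t):
    return v_c * (2 / np.pi) * np.arctan(r / r_t)

rng = np.random.default_rng(4)
r = np.linspace(0.5, 20, 40)
v_obs = vrot(r, 220.0, 2.0) + rng.normal(0, 5.0, r.size)  # mock data

def loglike(p):
    v_c, r_t = p
    if v_c <= 0 or r_t <= 0:
        return -np.inf
    return -0.5 * np.sum((v_obs - vrot(r, v_c, r_t)) ** 2) / 5.0 ** 2

chain, p = [], np.array([150.0, 5.0])
lp = loglike(p)
for _ in range(20000):
    q = p + rng.normal(0, [2.0, 0.1])      # random-walk proposal
    lq = loglike(q)
    if np.log(rng.uniform()) < lq - lp:    # Metropolis acceptance
        p, lp = q, lq
    chain.append(p.copy())
chain = np.array(chain[5000:])             # discard burn-in
print(chain.mean(axis=0).round(1))         # posterior mean of (v_c, r_t)
```

The posterior samples directly yield realistic uncertainties on the fitted parameters, which is the main advantage of the MCMC approach over simple least-squares fits.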
In Chapter 3, we measure one of the marginal distributions of the M_r - v_circ distribution: the circular velocity function of rotating galaxies. The velocity function is a fundamental observable statistic of the galaxy population, of similar importance to the luminosity function but much more difficult to measure. We present the first directly measured circular velocity function that is representative between 60 < v_circ < 320 km s^-1 for galaxies of all morphological types at a given rotation velocity. For the low-mass galaxy population (60 < v_circ < 170 km s^-1), we use the HIPASS velocity function. For the massive galaxy population (170 < v_circ < 320 km s^-1), we use stellar circular velocities from CALIFA. The CALIFA velocity function includes homogeneous velocity measurements of both late- and early-type rotation-supported galaxies. It has the crucial advantage of not missing gas-poor massive ellipticals that HI surveys are blind to. We show that both velocity functions can be combined in a seamless manner, as their ranges of validity overlap. The resulting observed velocity function is compared to velocity functions derived from cosmological simulations of the z = 0 galaxy population. We find that dark matter-only simulations show a strong mismatch with the observed velocity function. Hydrodynamic Illustris simulations fare better, but still do not fully reproduce the observations.
In Chapter 4, we present some further work done during the PhD studies, namely a method that improves the precision of specific angular momentum measurements by combining simultaneous Markov Chain Monte Carlo modelling of ionised gas 2D velocity fields and HI linewidths. To test the method we use a sample of 25 galaxies from the Sydney-AAO Multi-object Integral field (SAMI) survey that had matching ALFALFA HI linewidths. Such a method allows constraining the rotation curve both in the inner regions of a galaxy and in its outskirts, leading to increased precision of specific angular momentum measurements. It could be used to further constrain the observed relation between galaxy mass, specific angular momentum and morphology (Obreschkow & Glazebrook 2014).
Mathematical and computational methods are presented in the appendices.
Causes for slow weathering and erosion in the steep, warm, monsoon-subjected Highlands of Sri Lanka
(2018)
In the Highlands of Sri Lanka, erosion and chemical weathering rates are among the lowest for global mountain denudation. In this tropical humid setting, highly weathered deep saprolite profiles have developed from high-grade metamorphic charnockite during spheroidal weathering of the bedrock. The spheroidal weathering produces rounded corestones and spalled rindlets at the rock-saprolite interface. I used detailed textural, mineralogical, chemical, and electron-microscopic (SEM, FIB, TEM) analyses to identify the factors limiting the rate of weathering front advance in the profile, the sequence of weathering reactions, and the underlying mechanisms. The first mineral attacked by weathering was found to be pyroxene, initiated by in situ Fe oxidation, followed by in situ biotite oxidation. Bulk dissolution of the primary minerals is best described by a dissolution–re-precipitation process, as no chemical gradients towards the mineral surface are observed and structural boundaries are sharp at the nm scale. Only the local oxidation in pyroxene and biotite is better described by an ion-by-ion process. The first secondary phases are oxides and amorphous precipitates, from which secondary minerals (mainly smectite and kaolinite) form. Only for biotite is direct solid-state transformation to kaolinite likely. The initial oxidation of pyroxene and biotite takes place in locally restricted areas and is relatively fast: log J = -11 mol_min/(m² s). However, calculated corestone-scale mineral oxidation rates are comparable to corestone-scale mineral dissolution rates: log R = -13 mol_px/(m² s) and log R = -15 mol_bt/(m² s). The oxidation reaction results in a volume increase. Volumetric calculations suggest that this observed oxidation generates porosity through the formation of micro-fractures in the minerals and the bedrock, allowing for fluid transport and subsequent dissolution of plagioclase.
At the scale of the corestone, this fracture-generating reaction is responsible for the larger fractures that lead to spheroidal weathering and to the formation of rindlets. Since these fractures originate from the initial oxidation-induced volume increase, oxidation is the rate-limiting step for weathering to take place. The ensuing plagioclase weathering leads to the formation of high secondary porosity in the corestone over a distance of only a few cm and eventually to the final disaggregation of bedrock to saprolite. As oxidation is the first weathering reaction, the supply of O2 is a rate-limiting factor for chemical weathering. Hence, the supply of O2 and its consumption at depth connect processes at the weathering front with erosion at the surface in a feedback mechanism. The strength of the feedback depends on the relative weight of advective versus diffusive transport of O2 through the weathering profile. The feedback is stronger when diffusive transport dominates. The low weathering rate ultimately depends on the transport of O2 through the whole regolith, and on lithological factors such as low bedrock porosity and the amount of Fe-bearing primary minerals. In this regard, the low-porosity charnockite with its low content of Fe(II)-bearing minerals impedes fast weathering reactions. Fresh weatherable surfaces are a prerequisite for chemical weathering. However, in the case of the charnockite found in the Sri Lankan Highlands, the only process that generates these surfaces is the fracturing induced by oxidation. Tectonic quiescence in this region and a low pre-anthropogenic erosion rate (attributed to a dense vegetation cover) minimize the rejuvenation of the thick and cohesive regolith column and lower weathering through the feedback with erosion.
New bio-based polymers
(2018)
Redox-responsive polymers, such as poly(disulfide)s, are a versatile class of polymers with potential applications including gene- and drug-carrier systems. Their degradability under reductive conditions allows for a controlled response to the different redox states present throughout the body. Poly(disulfide)s are typically synthesized by step-growth polymerizations, which, however, may suffer from low conversions and therefore low molar masses, limiting potential applications. The purpose of this thesis was therefore to find and investigate new synthetic routes towards amino acid-based poly(disulfide)s.
The routes investigated in this thesis include entropy-driven ring-opening polymerizations of novel macrocyclic monomers derived from cystine derivatives. These monomers were obtained in overall yields of up to 77% and were analyzed by mass spectrometry as well as by 1D and 2D NMR spectroscopy. The kinetics of the entropy-driven ring-opening metathesis polymerization (ED-ROMP) were thoroughly investigated as a function of temperature, monomer concentration, and catalyst concentration. The polymerization was optimized to yield poly(disulfide)s with weight-average molar masses of up to 80 kDa and conversions of ~80% at thermodynamic equilibrium. Additionally, an alternative metal-free polymerization, the entropy-driven ring-opening disulfide metathesis polymerization (ED-RODiMP), was established for the macrocyclic monomers. The effects of different solvents, concentrations, and catalyst loadings on the polymerization process and its kinetics were studied. Polymers with very high weight-average molar masses of up to 177 kDa were obtained. Moreover, various post-polymerization reactions were successfully performed.
This work provides the first example of the homopolymerization of endo-cyclic disulfides by ED-ROMP and the first substantial study into the kinetics of the ED-RODiMP process.
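The behaviour of such an equilibrium-limited polymerization can be sketched as a first-order approach of the conversion to its equilibrium value. Only the ~80% equilibrium conversion is taken from the abstract; the rate constant below is a hypothetical placeholder, not a measured value.

```python
import math

def conversion(t, x_eq=0.80, k_obs=1e-3):
    """Monomer conversion x(t) for a reversible polymerization that relaxes
    to the equilibrium conversion x_eq with an observed rate constant k_obs (1/s).
    """
    return x_eq * (1.0 - math.exp(-k_obs * t))
```

In such a picture, catalyst loading and temperature would enter through k_obs (how fast equilibrium is reached), while monomer concentration shifts x_eq itself via the ring-chain equilibrium.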
Exploring election features from a geographical perspective is the focus of this study. Its primary objective is to develop a scientific approach based on geoinformation technology (GIT) that promotes a deeper understanding of how geographical settings affect the spatial and temporal variations of voting behaviour and election outcomes. For this purpose, the five parliamentary elections (1991-2005) following the political turnaround of 1990 in the South East European reform country Albania have been selected as a case study. Elections, like other social phenomena that do not develop uniformly over a territory, possess a spatial dimension. Despite the fact that elections have been researched by various scientific disciplines ranging from political science to geography, studies that incorporate their spatial dimension are still limited in number and approach. Consequently, the methodologies needed to generate integrated knowledge of the many facets that constitute election features are lacking. This study addresses the characteristics and interactions of the essential elements involved in an election process. Thus, the baseline of the approach presented here is the exploration of relations between three entities: electorate (political and sociodemographic features), election process (electoral system and code), and place (the environment where voters reside). To express this interaction, the concept of the electoral pattern is introduced. Electoral patterns are defined by the study as the final view of election results, chiefly in tabular and/or map form, generated by the complex interaction of social, economic, juridical, and spatial features of the electorate, which has occurred at a specific time and in a particular geographical location. GIT methods of geoanalysis and geovisualization are used to investigate the characteristics of electoral patterns in their spatial and temporal distribution.
Aggregate-level data modelled in map form were used to analyse and visualize the spatial distribution of election pattern components and relations. The spatial dimension of the study is addressed through the following three main relations: first, the relation between place and electorate and its expression through the social, demographic and economic features of the electorate, resulting in the profile of the electorate's context; second, the electorate-election interaction, which forms the baseline for exploring local contextual effects on voting behaviour and election results; third, the relation between geographical location and election outcomes, reflecting the implications of determining constituency boundaries on election results. To address the above relations, three types of variables (geo, independent and dependent) have been elaborated and two models have been created. The Data Model, developed in a GIS environment, facilitates the structuring of election data in order to perform spatial analysis. The peculiarity of electoral patterns – a multidimensional array that contains information on three variables, stored in data layers of dissimilar spatial units of reference and scales of value measurement – prohibits spatial analysis based on the original source data. To perform a joint spatial analysis it is therefore mandatory to restructure the spatial units of reference while preserving their semantic content. In this operation, all relevant electoral as well as socio-demographic data referenced to different administrative spatial entities are re-referenced to uniform grid cells as virtual spatial units of reference. Depending on the scale of data acquisition and map presentation, a cell width of 0.5 km has been determined. The resulting fine grid forms the basis of the subsequent data analyses and correlations.
Conversion of the original vector data layers into target raster layers allows for the unification of spatial units while retaining the existing level of detail of the data (variables, uniform distribution over space). This in turn facilitates the integration of the variables studied and the performance of GIS-based spatial analysis. In addition, conversion to raster format makes it possible to assign new values to the original data based on a common scale, eliminating existing differences in scales of measurement. Raster-format operations of the type described are well-established data analysis techniques in GIT, yet they have rarely been employed to process and analyse electoral data. The Geovisualization Model, developed in a cartographic environment, complements the Data Model. As an analog graphic model, it facilitates efficient communication and exploration of geographical information through cartographic visualization. Based on this model, 52 choropleth maps have been generated. They represent the outcome of the GIS-based electoral data analysis. The analog map form allows for in-depth visual analysis and interpretation of the distribution and correlation of the electoral data studied. For researchers, decision makers and a wider public, the maps provide easily accessible information on, and promote readily understandable insight into, the spatial dimension, regional variation and resulting structures of the electoral patterns defined.
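The re-referencing step described above (assigning each grid cell the attribute of the administrative unit that contains its centre) can be sketched in a few lines. This is a toy point-in-polygon rasterizer, not the GIS tooling used in the study; the polygons and turnout values in the example are invented.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is point (x, y) inside the polygon (list of vertices)?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def rasterize(units, xmin, ymin, nx, ny, cell=0.5):
    """Assign each cell centre the attribute value of the containing unit.

    units: list of (polygon, value) pairs; grid of nx x ny cells of width `cell` (km).
    """
    grid = [[None] * nx for _ in range(ny)]
    for j in range(ny):
        for i in range(nx):
            cx = xmin + (i + 0.5) * cell
            cy = ymin + (j + 0.5) * cell
            for poly, value in units:
                if point_in_polygon(cx, cy, poly):
                    grid[j][i] = value
                    break
    return grid

# two invented administrative units (1 km squares) with turnout values
units = [([(0, 0), (1, 0), (1, 1), (0, 1)], 0.55),
         ([(1, 0), (2, 0), (2, 1), (1, 1)], 0.70)]
grid = rasterize(units, xmin=0.0, ymin=0.0, nx=4, ny=2)
```

Once every layer lives on the same grid, cell-by-cell comparison and correlation of electoral and socio-demographic variables becomes a straightforward array operation.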
The cytoskeletal motor protein kinesin-1 (conventional kinesin) is the fast carrier for intracellular cargo transport along microtubules. So far, most studies have aimed at investigating the transport properties of individual motor molecules. However, transport in cells usually involves the collective work of more than one motor. In the present work, we have studied the movement of beads, serving as artificial loads/organelles, pulled by several kinesin-1 motors in vitro. For a wide range of motor coverages of the beads and different bead (cargo) sizes, the transport parameters walking distance (run length), velocity and force generation were measured. The results indicate that the transport parameters are influenced by the number of motors carrying the bead: while the transport velocity slightly decreases when more motors are involved, an increase in run length was measured and higher forces were determined. The effective number of motors pulling a bead was estimated by measuring the change in the hydrodynamic diameter of kinesin-coated beads using dynamic light scattering. The geometrical constraints imposed by the transport system have been taken into account; thus, results for beads of different size and motor-surface coverage could be compared. In addition, run length distributions obtained for the smallest bead size were matched to theoretically calculated distributions. The latter yielded an average number of pulling motors that is in agreement with the effective motor numbers determined experimentally.
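The dependence of run length on motor number can be illustrated with a minimal stochastic model in which bound motors detach and spare motors rebind; the cargo run ends when no motor remains bound. The rate constants below are illustrative placeholders, not the values measured in this work.

```python
import random

def run_length(n_motors, v=0.8, eps=1.0, pi_ad=5.0, rng=random):
    """Simulated cargo run length (um) for a bead carrying up to n_motors motors.

    v: velocity while bound (um/s); eps: detachment rate per bound motor (1/s);
    pi_ad: rebinding rate per unbound motor (1/s). Illustrative values only.
    """
    bound = 1                        # the run starts when the first motor binds
    t = 0.0
    while bound > 0:
        k_off = bound * eps
        k_on = (n_motors - bound) * pi_ad
        total = k_off + k_on
        t += rng.expovariate(total)  # Gillespie step: time to the next event
        if rng.random() < k_off / total:
            bound -= 1
        else:
            bound += 1
    return v * t

random.seed(0)
mean_1 = sum(run_length(1) for _ in range(2000)) / 2000
mean_2 = sum(run_length(2) for _ in range(2000)) / 2000
```

With a single motor the mean run length is simply v/eps; already a second available motor multiplies it severalfold, mirroring the measured increase in run length with motor number.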
In this thesis, we give two constructions of Riemannian metrics on Seiberg-Witten moduli spaces. Both constructions are naturally induced from the L2-metric on the configuration space. The construction of the so-called quotient L2-metric is very similar to the construction of an L2-metric on Yang-Mills moduli spaces given by Groisser and Parker. To construct a Riemannian metric on the total space of the Seiberg-Witten bundle in a similar way, we define the reduced gauge group as a subgroup of the gauge group. We show that the quotient of the premoduli space by the reduced gauge group is isomorphic as a U(1)-bundle to the quotient of the premoduli space by the based gauge group. The total space of this new representation of the Seiberg-Witten bundle carries a natural quotient L2-metric, and the bundle projection is a Riemannian submersion with respect to these metrics. We compute explicit formulae for the sectional curvature of the moduli space in terms of Green operators of the elliptic complex associated with a monopole. Further, we construct a Riemannian metric on the cobordism between moduli spaces for different perturbations. The second construction of a Riemannian metric on the moduli space uses a canonical global gauge fixing, which represents the total space of the Seiberg-Witten bundle as a finite-dimensional submanifold of the configuration space. We consider the Seiberg-Witten moduli space on a simply connected Kähler surface. We show that the moduli space (when nonempty) is a complex projective space if the perturbation does not admit reducible monopoles, and that the moduli space consists of a single point otherwise. The Seiberg-Witten bundle can then be identified with the Hopf fibration. On the complex projective plane with a special Spin-C structure, our Riemannian metrics on the moduli space are Fubini-Study metrics. Correspondingly, the metrics on the total space of the Seiberg-Witten bundle are Berger metrics.
We show that the diameter of the moduli space shrinks to 0 when the perturbation approaches the wall of reducible perturbations. Finally, we show that the quotient L2-metric on the Seiberg-Witten moduli space on a Kähler surface is a Kähler metric.
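For reference (a standard fact, not taken from the thesis): in homogeneous coordinates on CP^n, the Fubini-Study metric can be written, up to overall normalization, as

```latex
ds^{2}_{\mathrm{FS}}
  = \frac{\lvert z \rvert^{2}\,\lvert dz \rvert^{2}
          - \lvert \langle z , dz \rangle \rvert^{2}}
         {\lvert z \rvert^{4}},
  \qquad z \in \mathbb{C}^{n+1} \setminus \{0\},
```

where the bracket denotes the standard Hermitian inner product. The Berger metrics on the total space then arise, again in the standard normalization, by rescaling the round sphere metric along the Hopf-fibre direction while keeping the horizontal distribution fixed.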
Systems of Systems (SoS) have received a lot of attention recently. In this thesis we focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. We understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved and developed on their own. Structural dynamics is an issue in SoS, as at every point in time systems can join and leave the ensemble. This, and the fact that the cooperation among the constituent systems is not necessarily observable, means that we consider these systems to be open systems. Of course, the system has a clear boundary at each point in time, but this boundary can only be identified by halting the complete SoS. However, halting a system of that size is practically impossible. Often SoS are combinations of software systems and physical systems. Hence a failure in the software system can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and relies on collaborations and roles as an abstraction layer above the components. This allows us to describe SoS at an architectural level. We also give a formal semantics for our modelling approach, which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that is able to cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level --- i.e. the adding and removing of components and services --- is inadequate. Therefore our approach directly supports the modelling and verification of evolution.
Hyperspectral remote sensing of the spatial and temporal heterogeneity of low Arctic vegetation
(2019)
Arctic tundra ecosystems are warming at twice the global average rate, and Arctic vegetation is responding in complex and heterogeneous ways. Shifting productivity, growth, species composition, and phenology at local and regional scales have implications for ecosystem functioning as well as the global carbon and energy balance. Optical remote sensing is an effective tool for monitoring ecosystem functioning in this remote biome. However, sparse field-based spectral characterization of this spatial and temporal heterogeneity limits the accuracy of quantitative optical remote sensing at landscape scales. To address this research gap and support current and future satellite missions, three central research questions were posed:
• Does canopy-level spectral variability differ between dominant low Arctic vegetation communities and does this variability change between major phenological phases?
• How do canopy-level vegetation colour images recorded with high- and low-spectral-resolution devices relate to phenological changes in leaf-level photosynthetic pigment concentrations?
• How does spatial aggregation of high-spectral-resolution data from the ground to the satellite scale influence low Arctic tundra vegetation signatures, and what is the resulting potential of upcoming hyperspectral spaceborne systems for low Arctic vegetation characterization?
To answer these questions, a unique and detailed database was assembled. Field-based canopy-level spectral reflectance measurements, nadir digital photographs, and photosynthetic pigment concentrations of dominant low Arctic vegetation communities were acquired at three major phenological phases representing the early, peak and late season. Data were collected in 2015 and 2016 in the Toolik Lake Research Natural Area, located in north central Alaska on the North Slope of the Brooks Range. In addition to the field data, an aerial AISA hyperspectral image was acquired in the late season of 2016. Simulations of broadband Sentinel-2 and hyperspectral Environmental Mapping and Analysis Program (EnMAP) satellite reflectance spectra from ground-based reflectance spectra, as well as simulations of EnMAP imagery from the aerial hyperspectral imagery, were also obtained.
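The satellite-band simulation mentioned above boils down to weighting a ground-based reflectance spectrum with each band's spectral response function. A minimal sketch with an assumed Gaussian response follows (real Sentinel-2 and EnMAP responses are tabulated functions, not Gaussians):

```python
import math

def band_reflectance(wavelengths, reflectance, center, fwhm):
    """Band-averaged reflectance under a Gaussian spectral response function.

    wavelengths/reflectance: the ground spectrum (nm, unitless reflectance);
    center/fwhm: band centre and full width at half maximum in nm.
    """
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    weights = [math.exp(-0.5 * ((w - center) / sigma) ** 2) for w in wavelengths]
    return sum(wt * r for wt, r in zip(weights, reflectance)) / sum(weights)

# toy spectrum: flat 30% reflectance between 400 and 900 nm
wl = list(range(400, 901))
rf = [0.3] * len(wl)
red_band = band_reflectance(wl, rf, center=665.0, fwhm=30.0)
```

A flat spectrum must reproduce its own value in every band, which makes a handy sanity check; with a real vegetation spectrum each simulated band averages over the fine spectral features inside its response.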
Results showed that canopy-level spectral variability within and between vegetation communities differed by phenological phase. The late season was identified as the most discriminative for identifying many dominant vegetation communities using both ground-based and simulated hyperspectral reflectance spectra. This was due to an overall reduction in spectral variability and comparable or greater differences in spectral reflectance between vegetation communities in the visible near infrared spectrum.
Red, green, and blue (RGB) indices extracted from nadir digital photographs and pigment-driven vegetation indices extracted from ground-based spectral measurements showed strong significant relationships. RGB indices also showed moderate relationships with chlorophyll and carotenoid pigment concentrations. The observed relationships with the broadband RGB channels of the digital camera indicate that vegetation colour strongly influences the response of pigment-driven spectral indices and digital cameras can track the seasonal development and degradation of photosynthetic pigments.
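As an example of the kind of RGB index involved (the specific indices used in the study are not listed here), the green chromatic coordinate normalizes the green channel by total brightness and is widely used to track canopy greenness; the digital numbers below are invented.

```python
def green_chromatic_coordinate(r, g, b):
    """Green chromatic coordinate: G / (R + G + B), robust to overall brightness."""
    return g / (r + g + b)

early = green_chromatic_coordinate(90, 100, 80)   # invented early-season values
peak = green_chromatic_coordinate(60, 140, 70)    # invented peak-season values
```

A rise from the early to the peak season, followed by a decline as pigments degrade, is the kind of seasonal trajectory such an index can capture from ordinary nadir photographs.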
Spatial aggregation of hyperspectral data from the ground to the airborne and then to the simulated satellite scale was influenced by non-photosynthetic components, as demonstrated by a distinct shift of the red edge to shorter wavelengths. Correspondence between spectral reflectance at the three scales was highest in the red spectrum and lowest in the near-infrared. By artificially mixing litter spectra at different proportions into the ground-based spectra, correspondence with the aerial and satellite spectra increased. Greater proportions of litter were required to achieve correspondence at the satellite scale.
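The artificial litter mixing described above is a linear (areal) mixture: each mixed spectrum is a weighted average of the vegetation and litter endmember spectra. A minimal sketch with invented three-band endmembers:

```python
def mix_spectra(veg, litter, f_litter):
    """Linear areal mixture of two endmember spectra with litter fraction f_litter."""
    return [f_litter * l + (1.0 - f_litter) * v for v, l in zip(veg, litter)]

# invented endmember reflectances at (red, red-edge, near-infrared) bands
vegetation = [0.04, 0.20, 0.45]
litter = [0.15, 0.22, 0.30]
mixed = mix_spectra(vegetation, litter, f_litter=0.4)
```

Increasing f_litter raises red reflectance and lowers the near-infrared plateau, flattening the red edge in the way the aggregated spectra showed.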
Overall, this thesis found that integrating multiple temporal, spectral, and spatial data sources is necessary to monitor the complexity and heterogeneity of Arctic tundra ecosystems. The identification of spectrally similar vegetation communities can be optimized using non-peak-season hyperspectral data, leading to a more detailed identification of vegetation communities. The results also highlight the power of vegetation colour to link ground-based and satellite data. Finally, a detailed characterization of non-photosynthetic ecosystem components is crucial for the accurate interpretation of vegetation signals at landscape scales.
Business process management is an acknowledged asset for running an organization in a productive and sustainable way. One of the most important aspects of business process management, occurring on a daily basis at all levels, is decision making. In recent years, a number of decision management frameworks have appeared in addition to existing business process management systems. More recently, the Decision Model and Notation (DMN) was developed by the OMG consortium with the aim of complementing the widely used Business Process Model and Notation (BPMN). One of the reasons for the emergence of DMN is the increasing interest in the evolving paradigm known as the separation of concerns. This paradigm states that modeling decisions complementary to processes reduces process complexity by externalizing decision logic from process models and importing it into a dedicated decision model. Such an approach increases the agility of model design and execution and provides organizations with the flexibility to adapt to the ever more rapid and dynamic changes in the business ecosystem. The research gap we identified is that the separation of concerns recommended by DMN prescribes the externalization of the decision logic of process models into one or more separate decision models, but does not specify how this can be achieved.
The goal of this thesis is to close the presented gap by developing a framework for discovering decision models in a semi-automated way from information about existing process decision making. Thus, in this thesis we develop methodologies to extract decision models from: (1) the control flow and data of process models that exist in enterprises; and (2) event logs recorded by enterprise information systems, encapsulating day-to-day operations. Furthermore, we provide an extension of the methodologies to discover decision models from event logs enriched with fuzziness, a tool for dealing with partial knowledge of process execution information. All the proposed techniques are implemented and evaluated in case studies using real-life and synthetic process models and event logs. The evaluation of these case studies shows that the proposed methodologies provide valid and accurate decision models that can serve as blueprints for executing decisions complementary to process models. These methodologies are thus applicable in the real world and can be used, for example, for compliance checks, which could improve an organization's decision making and hence its overall performance.
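The flavour of decision discovery from an event log can be illustrated by decision-point analysis: collect the attribute values observed each time a process instance reached a branching point, and induce a guard that separates the branches. The sketch below finds a single numeric threshold by minimizing misclassifications; the claim-handling attribute and branch names are invented, and the methodologies in the thesis are more general.

```python
def discover_rule(instances):
    """Induce a simple guard at a process decision point.

    instances: (attribute_value, chosen_branch) pairs observed in the event log.
    Returns (threshold, branch_below, branch_above) minimizing misclassified cases.
    """
    values = sorted(set(v for v, _ in instances))
    best = None
    for low, high in zip(values, values[1:]):
        thr = (low + high) / 2.0          # candidate split between adjacent values
        below = [b for v, b in instances if v <= thr]
        above = [b for v, b in instances if v > thr]
        maj_below = max(set(below), key=below.count)
        maj_above = max(set(above), key=above.count)
        errors = (sum(b != maj_below for b in below)
                  + sum(b != maj_above for b in above))
        if best is None or errors < best[0]:
            best = (errors, thr, maj_below, maj_above)
    return best[1], best[2], best[3]

# invented claim-handling log: claim amount observed at the branching point
observations = [(200, "auto-approve"), (800, "auto-approve"), (900, "auto-approve"),
                (1500, "manual-review"), (3000, "manual-review")]
rule = discover_rule(observations)
```

Here the induced guard ("amount up to the threshold goes to auto-approve, above it to manual review") could be written directly into a DMN decision table complementing the process model.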
One third of the world's population lives in areas where earthquakes causing at least slight damage are frequently expected. Thus, the development and testing of global seismicity models is essential to improving seismic hazard estimates and earthquake-preparedness protocols for effective disaster-risk mitigation. Currently, the availability and quality of geodetic data along plate-boundary regions provide the opportunity to construct global models of plate motion and strain rate, which can be translated into global maps of forecasted seismicity. Moreover, the broad coverage of existing earthquake catalogs now facilitates the calibration and testing of global seismicity models. As a result, modern global seismicity models can integrate two independent factors necessary for physics-based, long-term earthquake forecasting, namely interseismic crustal strain accumulation and sudden lithospheric stress release.
In this dissertation, I present the construction and testing of two global ensemble seismicity models, aimed at providing mean rates of shallow (0-70 km) earthquake activity for seismic hazard assessment. These models depend on the Subduction Megathrust Earthquake Rate Forecast (SMERF2), a stationary seismicity approach for subduction zones based on the conservation-of-moment principle and on regional "geodesy-to-seismicity" parameters, such as corner magnitudes, seismogenic thicknesses and subduction dip angles. Specifically, this interface-earthquake model combines geodetic strain rates with instrumentally recorded seismicity to compute long-term rates of seismic and geodetic moment. Based on this, I derive analytical solutions for seismic coupling and earthquake activity, which provide this earthquake model with the initial abilities to properly forecast interface seismicity. Then, I integrate the SMERF2 interface-seismicity estimates with earthquake computations in non-subduction zones, provided by the Seismic Hazard Inferred From Tectonics approach based on the second iteration of the Global Strain Rate Map, to construct the global Tectonic Earthquake Activity Model (TEAM). TEAM is designed to reduce the inconsistencies in earthquake numbers, and potentially in their spatial distribution, shown by its predecessor tectonic earthquake model during the 2015-2017 period. Also, I combine this new geodesy-based earthquake approach with a global smoothed-seismicity model to create the World Hybrid Earthquake Estimates based on Likelihood scores (WHEEL) model. This updated hybrid model serves as an alternative earthquake-rate approach to the Global Earthquake Activity Rate model for forecasting long-term rates of shallow seismicity everywhere on Earth.
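The conservation-of-moment principle underlying SMERF2 can be illustrated with a deliberately crude calculation: convert an interface's geodetic loading into a moment accumulation rate, then ask how often a corner-magnitude earthquake would be needed to release it. All numbers below are invented for illustration; the actual model distributes the moment over a full magnitude-frequency distribution rather than a single event size.

```python
def moment_rate(slip_rate, area, coupling, mu=3.3e10):
    """Seismic moment accumulation rate (N*m/yr):
    coupling x rigidity (Pa) x fault area (m^2) x slip rate (m/yr)."""
    return coupling * mu * area * slip_rate

def magnitude_to_moment(mw):
    """Scalar seismic moment (N*m) from moment magnitude."""
    return 10.0 ** (1.5 * mw + 9.05)

# invented subduction segment: 1000 km x 120 km, 60 mm/yr convergence, 50% coupled
m_dot = moment_rate(slip_rate=0.06, area=1000e3 * 120e3, coupling=0.5)
recurrence_yr = magnitude_to_moment(9.0) / m_dot  # years between Mw 9.0 events
```

This lands at a few centuries, a plausible order of magnitude for a great-earthquake cycle; the analytical solutions in SMERF2 instead yield full rate curves consistent with the regional corner magnitude and seismogenic thickness.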
Global seismicity models provide scientific hypotheses about when and where earthquakes may occur, and how big they might be. Nonetheless, the veracity of these hypotheses can only be either confirmed or rejected after prospective forecast evaluation. Therefore, I finally test the consistency and relative performance of these global seismicity models with independent observations recorded during the 2014-2019 pseudo-prospective evaluation period. As a result, hybrid earthquake models based on both geodesy and seismicity are the most informative seismicity models during the testing time frame, as they obtain higher information scores than their constituent model components. These results support the combination of interseismic strain measurements with earthquake-catalog data for improved seismicity modeling. However, further prospective evaluations are required to more accurately describe the capacities of these global ensemble seismicity models to forecast longer-term earthquake activity.