Ribosomes decode mRNA to synthesize proteins. Once considered static executing machines, ribosomes are now viewed as dynamic modulators of translation. Increasingly detailed analyses of structural ribosome heterogeneity have led to a paradigm shift toward ribosome specialization for selective translation. As sessile organisms, plants cannot escape harmful environments and have evolved strategies to withstand them. Plant cytosolic ribosomes are in some respects more diverse than those of metazoans, and this diversity may contribute to plant stress acclimation. The goal of this thesis was to determine whether plants use ribosome heterogeneity to regulate protein synthesis through specialized translation. I focused on temperature acclimation, specifically on shifts to low temperatures. During cold acclimation, Arabidopsis ceases growth for seven days while establishing the responses required to resume growth. Earlier results indicate that ribosome biogenesis is essential for cold acclimation. REIL mutants (reil-dkos) lacking a 60S maturation factor do not acclimate successfully and do not resume growth. Using these genotypes, I ascribed cold-induced defects of ribosome biogenesis to the assembly of the polypeptide exit tunnel (PET) by performing spatial statistics of rProtein changes mapped onto the plant 80S structure. I discovered that growth cessation and PET remodeling also occur in barley, suggesting a general cold response in plants. Cold-triggered PET remodeling is consistent with the function of Rei-1, a yeast homolog of REIL that performs PET quality control. Using seminal data on ribosome specialization, I show that yeast remodels the tRNA entry site of its ribosomes upon a change of carbon source and demonstrate that spatially constrained remodeling of ribosomes in metazoans may modulate protein synthesis. I argue that regional remodeling may be a form of ribosome specialization and show that heterogeneous cytosolic polysomes accumulate after cold acclimation, leading to shifts in translational output that differ between wild-type and reil-dkos plants. I found that these heterogeneous complexes consist of newly synthesized and reused proteins. I propose that tailored ribosome complexes enable free 60S subunits to select specific 48S initiation complexes for translation. Through ribosome remodeling, cold-acclimated ribosomes synthesize a novel proteome consistent with known mechanisms of cold acclimation. The main hypothesis arising from my thesis is that heterogeneous/specialized ribosomes alter translation preferences, adjust the proteome and thereby activate plant programs for successful cold acclimation.
The role of biogenic carbonate producers in the evolution of the geometries of carbonate systems has been the subject of numerous research projects. Attempts to classify modern and ancient carbonate systems by their biotic components have led to the broad discrimination of biogenic carbonate producers into Photozoans, which are characterised by an affinity for warm tropical waters and a high dependence on light penetration, and Heterozoans, which are generally associated with both cool-water environments and nutrient-rich settings with little to no light penetration. These broad categories of carbonate sediment producers have also been recognised to dominate specific carbonate systems: Photozoans are commonly dominant in flat-topped platforms with steep margins, while Heterozoans generally dominate carbonate ramps. However, comparatively little is known about how these two main groups of carbonate producers interact in the same system and how they impact depositional geometries in response to changes in environmental conditions such as sea-level fluctuation, antecedent slope and sediment transport processes. This thesis presents numerical models to investigate the evolution of Miocene carbonate systems in the Mediterranean from two shallow-marine domains: 1) a Miocene flat-topped platform dominated by Photozoans, with a significant component of Heterozoans on the slope, and 2) a Heterozoan distally steepened ramp with a seagrass-influenced (Photozoan) inner ramp. The overarching aim of the three articles comprising this cumulative thesis is to provide a numerical study of the role of Photozoans and Heterozoans in the evolution of carbonate system geometries and of how these biotas respond to changes in environmental conditions. This aim was achieved using stratigraphic forward modelling, which provides an approach to quantitatively integrate multi-scale datasets to reconstruct sedimentary processes and products during the evolution of a sedimentary system.
In a Photozoan-dominated carbonate system, such as the Miocene Llucmajor platform in the Western Mediterranean, stratigraphic forward modelling dovetailed with a robust set of sensitivity tests reveals how the geometry of the carbonate system is determined by the complex interaction of Heterozoan and Photozoan biotas in response to variable conditions of sea-level fluctuation, substrate configuration, sediment transport processes and the dominance of Photozoan over Heterozoan production. This study provides an enhanced understanding of the different carbonate systems that are possible under different ecological and hydrodynamic conditions. The research also gives insight into the roles of different biotic associations in the evolution of carbonate geometries through time and space. The results further show that the main driver of platform progradation in a Llucmajor-type system is the lowstand production of Heterozoan sediments, which form the necessary substratum for Photozoan production.
In Heterozoan systems, sediment production is mainly characterised by high-transport deposits that are prone to redistribution by waves and gravity, thereby precluding the development of steep margins. However, in the Menorca ramp, sediment trapping by seagrass led to the evolution of distal slope steepening. We investigated, through numerical modelling, how such a seagrass-influenced ramp responds to the frequency and amplitude of sea-level changes, variable carbonate production between the euphotic and oligophotic zones, and changes in the configuration of the paleoslope. The study reinforces some previous hypotheses and presents alternative scenarios to the established concepts of high-transport ramp evolution. The results of sensitivity experiments show that steep slopes are favoured in ramps that develop under high-frequency sea-level fluctuations with amplitudes between 20 m and 40 m. We also show that ramp profiles are significantly impacted by the paleoslope inclination, such that an optimal antecedent slope of about 0.15 degrees is required for the Menorca distally steepened ramp to develop.
The third part presents an experimental case to argue for the existence of a Photozoan sediment threshold required for the development of steep margins in carbonate platforms. This was carried out by developing sensitivity tests on the forward models of the flat-topped (Llucmajor) platform and the distally steepened (Menorca) platform. The results show that models with Photozoan sediment proportion below a threshold of about 40% are incapable of forming steep slopes. The study also demonstrates that though it is possible to develop steep margins by seagrass sediment trapping, such slopes can only be stabilized by the appropriate sediment fabric and/or microbial binding. In the Photozoan-dominated system, the magnitude of slope steepness depends on the proportion of Photozoan sediments in the system. Therefore, this study presents a novel tool for characterizing carbonate systems based on their biogenic components.
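All three studies rely on stratigraphic forward models in which carbonate production varies with water depth. As a point of reference only, the sketch below implements a Bosscher-Schlager-type, light-controlled growth law of the kind commonly used in such models; the parameter values are illustrative assumptions, not those calibrated for the Llucmajor or Menorca simulations.

```python
# Depth-dependent carbonate production of the kind used in stratigraphic
# forward models (Bosscher & Schlager-type growth law). Parameter values are
# illustrative assumptions, not the calibrated values of the thesis.
import numpy as np

def carbonate_production(depth_m, g_max=1.0, i0=2000.0, ik=300.0, k=0.1):
    """Production rate (arbitrary units) as a function of water depth (m)."""
    light = i0 * np.exp(-k * depth_m)       # exponential light attenuation with depth
    return g_max * np.tanh(light / ik)      # production saturates in shallow water

depths = np.linspace(0.0, 100.0, 11)
print(np.round(carbonate_production(depths), 3))
```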
Background: The role of fatty acid (FA) intake and metabolism in type 2 diabetes (T2D) incidence is controversial. Some FAs are not synthesised endogenously and these circulating FAs therefore reflect dietary intake, for example the trans fatty acids (TFAs), saturated odd-chain fatty acids (OCFAs), and linoleic acid, an n-6 polyunsaturated fatty acid (PUFA). It remains unclear whether TFA intake influences T2D risk and whether industrial TFAs (iTFAs) and ruminant TFAs (rTFAs) exert the same effect. Unlike even-chain saturated FAs, the OCFAs have been inversely associated with T2D risk, but this association is poorly understood. Furthermore, the associations of n-6 PUFA intake with T2D risk are still debated, while delta-5 desaturase (D5D), a key enzyme in the metabolism of PUFAs, has been consistently related to T2D risk. To better understand these relationships, the FA composition of circulating lipid fractions can be used as a biomarker of dietary intake and metabolism. The exploration of TFA subtypes in plasma phospholipids, and of OCFAs and n-6 PUFAs within a wide range of lipid classes, may give insights into the pathophysiology of T2D.
Aim: This thesis mainly aimed to analyse the associations of TFAs, OCFAs and n-6 PUFAs with self-reported dietary intake and prospective T2D risk, using measurements of seven TFA subtypes in plasma phospholipids and deep lipidomics profiling data covering fifteen lipid classes.
Methods: A prospective case-cohort study was designed within the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam study, including all participants who developed T2D (median follow-up 6.5 years) and a random subsample of the full cohort (subcohort: n=1248; T2D cases: n=820). The main analyses included two lipid profiles. The first was an assessment of seven TFAs in plasma phospholipids, with a modified method for the analysis of FAs with very low abundance. The second lipid profile was derived from a high-throughput lipid profiling technology, which identified 940 distinct molecular species and allowed quantification of OCFA and PUFA composition across 15 lipid classes. Delta-5 desaturase (D5D) activity was estimated as the 20:4/20:3 ratio. Using multivariable Cox regression models, we examined the associations of TFA subtypes with incident T2D and the class-specific associations of OCFAs and n-6 PUFAs with T2D risk.
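A minimal sketch of the kind of weighted multivariable Cox model used in such a case-cohort design is given below; the weighting scheme (inverse sampling fraction for subcohort members), the covariates, and the file and column names are illustrative assumptions, not the actual EPIC-Potsdam analysis.

```python
# Sketch of a weighted multivariable Cox model for a case-cohort design.
# Column names, covariates and the weighting scheme are assumptions for
# illustration only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("epic_potsdam_casecohort.csv")   # hypothetical prepared data set

# Estimated delta-5 desaturase (D5D) activity as product-to-precursor ratio 20:4/20:3
df["d5d_activity"] = df["fa_20_4"] / df["fa_20_3"]

# One common case-cohort weighting: subcohort members weighted by the inverse
# sampling fraction, cases outside the subcohort carry weight 1.
df["weight"] = 1.0
df.loc[df["in_subcohort"] == 1, "weight"] = 1.0 / df["sampling_fraction"]

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "t2d_case", "d5d_activity", "age", "bmi", "weight"]],
    duration_col="followup_years",
    event_col="t2d_case",
    weights_col="weight",
    robust=True,   # robust variance estimate for the weighted design
)
cph.print_summary()
```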
Results: 16:1n-7t, 18:1n-7t, and c9t11-CLA were positively correlated with the intake of fat-rich dairy foods. iTFA 18:1 isomers were positively correlated with margarine. After adjustment for confounders and other TFAs, higher plasma phospholipid concentrations of two rTFAs were associated with a lower incidence of T2D: 18:1n-7t and t10c12-CLA. In contrast, the rTFA c9t11-CLA was associated with a higher incidence of T2D. rTFA 16:1n-7t and iTFAs (18:1n-6t, 18:1n-9t, 18:2n-6,9t) were not statistically significantly associated with T2D risk.
We observed heterogeneous integration of OCFAs into different lipid classes, and the contribution of 15:0 versus 17:0 to the total OCFA abundance differed across lipid classes. Consumption of fat-rich dairy and fiber-rich foods was positively, and red meat consumption inversely, correlated with OCFA abundance in plasma phospholipid classes. In women only, higher abundances of 15:0 in phosphatidylcholines (PC) and diacylglycerols (DG), and of 17:0 in PC, lysophosphatidylcholines (LPC), and cholesterol esters (CE), were inversely associated with T2D risk. In men and women, a higher abundance of 15:0 in monoacylglycerols (MG) was also inversely associated with T2D. Conversely, a higher 15:0 concentration in LPC and triacylglycerols (TG) was associated with higher T2D risk in men. Women with a higher concentration of 17:0 as free fatty acids (FFA) also had higher T2D incidence.
The integration of n-6 PUFAs into lipid classes was also heterogeneous. 18:2 was highly abundant in phospholipids (particularly PC), CE, and TG; 20:3 represented a small fraction of FAs in most lipid classes, and 20:4 accounted for a large proportion of circulating phosphatidylinositol (PI) and phosphatidylethanolamines (PE). Higher concentrations of 18:2 were inversely associated with T2D risk, especially within DG, TG, and LPC. However, 18:2 as part of MG was positively associated with T2D risk. Higher concentrations of 20:3 in phospholipids (PC, PE, PI), FFA, CE, and MG were linked to higher T2D incidence. 20:4 was unrelated to risk in most lipid classes, except for positive associations observed for 20:4 in FFA and PE. The estimated D5D activities in PC, PE, PI, LPC, and CE were inversely associated with T2D, and the variance of estimated D5D activity explained by genomic variation in the FADS locus was substantial only in those lipid classes.
Conclusion: The conformation of TFAs is essential to their relationship with diabetes risk, as indicated by plasma concentrations of rTFA subtypes showing opposite directions of association with diabetes risk. Plasma OCFA concentration is linked to T2D risk in a lipid-class- and sex-specific manner. Plasma n-6 PUFA concentrations are associated differently with T2D incidence depending on the specific FA and the lipid class. Overall, these results highlight the complexity of circulating FAs and their heterogeneous association with T2D risk depending on the specific FA structure, lipid class, and sex. My results extend the evidence on the relationship between diet, lipid metabolism, and subsequent T2D risk. In addition, my work generated several potential new biomarkers of dietary intake and prospective T2D risk.
The central gas in half of all galaxy clusters shows short cooling times. Assuming unimpeded cooling, this should lead to high star formation and mass cooling rates, which are not observed. Instead, it is believed that condensing gas is accreted by the central black hole, which powers an active galactic nucleus jet that heats the cluster. The detailed heating mechanism remains uncertain. A promising mechanism invokes cosmic ray protons that scatter on self-generated magnetic fluctuations, i.e. Alfvén waves. Continuous damping of Alfvén waves provides heat to the intracluster medium. Previous work has found steady-state solutions for a large sample of clusters in which cooling is balanced by Alfvénic wave heating. To verify modeling assumptions, we set out to study cosmic ray injection in three-dimensional magnetohydrodynamical simulations of jet feedback in an idealized cluster with the moving-mesh code arepo. We analyze the interaction of jet-inflated bubbles with the turbulent magnetized intracluster medium.
Furthermore, jet dynamics and heating are closely linked to the largely unconstrained jet composition. Interactions of electrons with photons of the cosmic microwave background result in observational signatures that depend on the bubble content. Recent observations of this kind provided evidence for underdense bubbles with a relativistic filling, while adopting simplifying modeling assumptions for the bubbles. By reproducing the observations with our simulations, we confirm the validity of these modeling assumptions and thereby corroborate the important finding of low-(momentum) density jets.
In addition, the velocity and magnetic field structure of the intracluster medium have profound consequences for bubble evolution and heating processes. As velocity and magnetic fields are physically coupled, we demonstrate that numerical simulations can help link and thereby constrain their respective observables. Finally, we implement the currently preferred accretion model, cold accretion, into the moving-mesh code arepo and study feedback by light jets in a radiatively cooling magnetized cluster. While self-regulation is attained independently of the accretion model, jet density and feedback efficiencies, we find that light jets are preferred in order to reproduce the observed cold gas morphology.
Background: The characteristics of osteoporosis are decreased bone mass and deterioration of the microarchitecture of bone tissue, which raise the risk of fracture. Psychosocial stress and osteoporosis are linked by the sympathetic nervous system, the hypothalamic-pituitary-adrenal axis, and other endocrine factors. Psychosocial stress causes a series of effects on the organism, and this long-term depletion at the cellular level is considered to be mitochondrial allostatic load, including mitochondrial dysfunction and oxidative stress. Extracellular vesicles (EVs) are involved in the mitochondrial allostatic load process and may serve as biomarkers in this setting. As critical participants in cell-to-cell communication, EVs serve as transport vehicles for nucleic acids and proteins, alter the phenotypic and functional characteristics of their target cells, and promote cell-to-cell contact. Hence, they play a significant role in the diagnosis and therapy of many diseases, such as osteoporosis.
Aim: This narrative review attempts to outline the features of EVs, investigate their involvement in both psychosocial stress and osteoporosis, and analyze whether EVs can act as mediators between the two.
Methods: The online databases PubMed, Google Scholar, and ScienceDirect were searched for keywords related to the main topic of this study, and the availability of all selected studies was verified. Afterward, the findings from the articles were summarized and synthesized.
Results: Psychosocial stress affects bone remodeling through increased levels of mediators such as glucocorticoids and catecholamines, as well as increased glucose metabolism. Furthermore, psychosocial stress leads to mitochondrial allostatic load, including oxidative stress, which may affect bone remodeling. In vitro and in vivo data suggest that EVs might be involved in the link between psychosocial stress and bone remodeling through the transfer of bioactive substances and could thus be a potential mediator of psychosocial stress leading to osteoporosis.
Conclusions: According to the included studies, psychosocial stress affects bone remodeling, leading to osteoporosis. By summarizing the specific properties of EVs and their functions in psychosocial stress and osteoporosis, respectively, it has been demonstrated that EVs are possible mediators between the two and hold promise for innovative research areas.
In this dissertation, the first total syntheses of the arylnaphthalene lignans Alashinol D, Vitexdoin C, Vitrofolal E, Noralashinol C1 and Ternifoliuslignan E were presented. The key step of the developed method is based on a regioselective intramolecular photo-dehydro-Diels-Alder (PDDA) reaction, carried out with UV irradiation in a flow reactor. For the synthesis of the PDDA precursors (diaryl suberates), a modular, building-block synthesis strategy was pursued. It enables the preparation of asymmetric, complex systems from only a few basic building blocks and the total synthesis of a large number of lignans. Systematic preliminary studies also demonstrated the clear superiority of the intra- over the intermolecular PDDA reaction. Linking the two aryl propiolates via a suberic acid tether in the para position proved to be particularly efficient. When asymmetrically substituted diaryl suberates are used in which one of the terminal ester substituents is replaced by a trimethylsilyl group or a hydrogen atom, these systems undergo a regioselective cyclisation and naphthalenophanes bearing a methyl ester in the 3-position are obtained as the main product. Extensive experiments on the functionalisation of the 4-position further showed that substitution of the nucleophilic cycloallene intermediates during the PDDA reaction is generally possible by adding N-halosuccinimides. In view of the low yields, however, these intermolecular trapping reactions are of no preparative use for the total syntheses of lignans. With the aim of optimising the general photochemical reaction conditions, the triplet-sensitised PDDA reaction was presented for the first time. The use of xanthone as a sensitiser enabled the use of more efficient UVA light sources, minimising the risk of photodecomposition through over-irradiation. Compared to direct excitation with UVB radiation, the yields could be increased significantly with indirect excitation via a photocatalyst. The fundamental insights and synthesis strategies developed in this work can help to drive the discovery of new pharmacologically interesting lignans in the future.
1 To date, only the semisynthetic preparation of Noralashinol C starting from hydroxymatairesinol has been reported in the literature.
In recent decades, astronomy has seen a boom in large-scale stellar surveys of the Galaxy. The detailed information obtained about millions of individual stars in the Milky Way is bringing us a step closer to answering one of the most outstanding questions in astrophysics: how do galaxies form and evolve? The Milky Way is the only galaxy in which we can dissect many stars into their high-dimensional chemical composition and complete phase space, which, like fossil records, can unveil the history of the Galaxy's genesis. The processes that lead to the formation of large structures such as the Milky Way are critical for constraining cosmological models; this line of study is called Galactic archaeology or near-field cosmology.
At the core of this work, we present a collection of efforts to chemically and dynamically characterise the disks and bulge of our Galaxy. The results we present in this thesis have only been possible thanks to the advent of the Gaia astrometric satellite, which has revolutionised the field of Galactic archaeology by precisely measuring the positions, parallax distances and motions of more than a billion stars. Another, no less important, breakthrough is the APOGEE survey, which has observed near-infrared spectra, peering into the dusty regions of the Galaxy and allowing us to determine detailed chemical abundance patterns in hundreds of thousands of stars. To accurately depict the Milky Way's structure, we use and develop the Bayesian isochrone-fitting code StarHorse; this software can predict stellar distances, extinctions and ages by combining astrometry, photometry and spectroscopy on the basis of stellar evolutionary models. The StarHorse code is pivotal for calculating distances where Gaia parallaxes alone do not allow accurate estimates.
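To illustrate only the astrometric ingredient of such a Bayesian combination, the sketch below derives a distance posterior from a Gaussian parallax likelihood and a simple space-density prior on a grid. This is not the StarHorse implementation; the prior and its length scale are assumptions.

```python
# Sketch of a Bayesian distance estimate from a parallax measurement:
# posterior ~ likelihood(parallax | distance) x prior(distance).
# The exponentially decreasing space-density prior is an assumption.
import numpy as np

def distance_posterior(parallax_mas, parallax_err_mas, d_grid_kpc, scale_kpc=1.35):
    likelihood = np.exp(-0.5 * ((parallax_mas - 1.0 / d_grid_kpc) / parallax_err_mas) ** 2)
    prior = d_grid_kpc ** 2 * np.exp(-d_grid_kpc / scale_kpc)
    post = likelihood * prior
    dx = d_grid_kpc[1] - d_grid_kpc[0]
    return post / (post.sum() * dx)          # normalise on the grid

d = np.linspace(0.05, 20.0, 2000)
post = distance_posterior(parallax_mas=0.5, parallax_err_mas=0.2, d_grid_kpc=d)
mean_d = np.sum(d * post) * (d[1] - d[0])
print("posterior-mean distance: %.2f kpc" % mean_d)
```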
We show that by combining Gaia, APOGEE and photometric surveys with StarHorse, we can produce a chemical cartography of the Milky Way disks from their outermost to innermost parts. Such a map is unprecedented in the inner Galaxy. It reveals a continuity of the bimodal chemical pattern previously detected in the solar neighbourhood, indicating two populations with distinct formation histories. Furthermore, the data reveal a chemical gradient within the thin disk, where the content of α-process elements and metals is higher towards the centre. Focusing on a sample in the inner Milky Way, we confirm the extension of the chemical duality to the innermost regions of the Galaxy. We find stars on bar-shaped orbits to show both high- and low-α abundances, suggesting that the bar formed by secular evolution, trapping stars that already existed. By analysing the chemical-orbital space of the inner Galactic regions, we disentangle the multiple populations that inhabit this complex region. We reveal the presence of the thin disk, thick disk, bar, and a counter-rotating population, which resembles the outcome of a perturbed proto-Galactic disk. Our study also finds that the inner Galaxy holds a large number of super-metal-rich stars, up to three times solar metallicity, suggesting that it is a possible repository of the old super-metal-rich stars found in the solar neighbourhood.
We also take on the complicated task of deriving individual stellar ages. With StarHorse, we calculate the ages of main-sequence turn-off and sub-giant stars for several public spectroscopic surveys. We validate our results by investigating linear relations between chemical abundances and time, since the α and neutron-capture elements are sensitive to age, reflecting the different enrichment timescales of these elements. To study the disks in the solar neighbourhood further, we use an unsupervised machine learning algorithm to delineate a multidimensional separation of chrono-chemical stellar groups, revealing the chemical thick disk, the thin disk, and young α-rich stars. The thick disk is shown to have a small age dispersion, indicating its fast formation, in contrast to the thin disk, which spans a wide range of ages.
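The abstract does not name the unsupervised algorithm; as a hedged illustration of such a chrono-chemical grouping, the sketch below applies a Gaussian mixture model to a hypothetical table of [Fe/H], [alpha/Fe] and age.

```python
# Illustrative unsupervised grouping in chrono-chemical space. The algorithm
# (Gaussian mixture), the number of components and the input file are
# assumptions, not the analysis performed in the thesis.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# columns: [Fe/H], [alpha/Fe], age (Gyr) -- hypothetical input table
X = np.loadtxt("solar_neighbourhood_sample.csv", delimiter=",", skiprows=1)

X_scaled = StandardScaler().fit_transform(X)
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X_scaled)   # e.g. thin disk, thick disk, young alpha-rich stars

for k in range(3):
    members = labels == k
    print("group %d: n=%d, mean age %.1f Gyr" % (k, members.sum(), X[members, 2].mean()))
```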
With groundbreaking data, this thesis presents a detailed chemo-dynamical view of the disk and bulge of our Galaxy. Our findings on the Milky Way can be linked to the evolution of high-redshift disk galaxies, helping to solve the conundrum of galaxy formation.
Following the extinction of the dinosaurs, the great adaptive radiation of mammals occurred, giving rise to an astonishing ecological and phenotypic diversity of mammalian species. Even closely related species often inhabit vastly different habitats, where they encounter diverse environmental challenges and are exposed to different evolutionary pressures. In response, mammals evolved various adaptive phenotypes over time, such as morphological, physiological and behavioural ones. Mammalian genomes vary in their content and structure, and this variation represents the molecular basis for the long-term evolution of phenotypic variation. However, understanding this molecular basis of adaptive phenotypic variation is usually not straightforward.
The recent development of sequencing technologies and bioinformatics tools has enabled a better insight into mammalian genomes. Through these advances, it became clear that mammalian genomes differ more, both within and between species, as a consequence of structural variation than of single-nucleotide differences. The structural variant types investigated in this thesis (deletions, duplications, inversions and insertions) represent a change in the structure of the genome, impacting the size, copy number, orientation and content of DNA sequences. Unlike short variants, structural variants can span multiple genes. They can alter gene dosage and cause notable gene expression differences and, subsequently, phenotypic differences. Thus, they can have a more dramatic effect on the fitness (reproductive success) of individuals, the local adaptation of populations and speciation.
In this thesis, I investigated and evaluated the potential functional effect of structural variation on the genomes of mustelid species. To detect genomic regions associated with phenotypic variation, I assembled the first reference genome of the tayra (Eira barbara), relying on linked-read sequencing technology to achieve the high level of genome completeness important for reliable structural variant discovery. I then set up a bioinformatics pipeline to conduct a comparative genomic analysis and explore variation between mustelid species living in different environments. I found numerous genes associated with species-specific phenotypes, related among others to diet, body condition and reproduction, to be impacted by structural variants.
Furthermore, I investigated the effects of artificial selection on structural variants in mice selected for high fertility, increased body mass and high endurance. Through selective breeding of each mouse line, the desired phenotypes have spread within these populations, while maintaining structural variants specific to each line. In comparison to the control line, the litter size has doubled in the fertility lines, individuals in the high body mass lines have become considerably larger, and mice selected for treadmill performance covered substantially more distance. Structural variants were found in higher numbers in these trait-selected lines than in the control line when compared to the mouse reference genome. Moreover, we have found twice as many structural variants spanning protein-coding genes (specific to each line) in trait-selected lines. Several of these variants affect genes associated with selected phenotypic traits. These results imply that structural variation does indeed contribute to the evolution of the selected phenotypes and is heritable.
Finally, I suggest a set of critical metrics of genomic data that should be considered for a stringent structural variation analysis, as comparative genomic studies strongly rely on the contiguity and completeness of genome assemblies. Because most of the available data used to represent reference genomes of mammalian species were generated using short-read sequencing technologies, we may have incomplete knowledge of genomic features. Therefore, a cautious structural variation analysis is required to minimize the effect of technical constraints.
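As one example of a standard contiguity metric that such an assessment would include (the full set of metrics recommended in the thesis is broader), the sketch below computes the assembly N50.

```python
# N50: the contig length L such that contigs of length >= L together cover at
# least half of the total assembly length. A standard contiguity metric shown
# here only as an example.
def n50(contig_lengths):
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2.0
    covered = 0
    for length in lengths:
        covered += length
        if covered >= half_total:
            return length

print(n50([4_000_000, 3_000_000, 2_000_000, 1_000_000]))  # -> 3000000
```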
The impact of structural variants on the adaptive evolution of mammalian genomes is slowly gaining more attention, but structural variation is still incorporated in only a small number of population studies. In my thesis, I advocate the inclusion of structural variants in studies of genomic diversity for a more comprehensive insight into genomic variation within and between species, and into its effect on adaptive evolution.
Cosmic rays (CRs) constitute an important component of the interstellar medium (ISM) of galaxies and are thought to play an essential role in governing their evolution. In particular, they are able to impact the dynamics of a galaxy by driving galactic outflows or heating the ISM, thereby affecting the efficiency of star formation. Hence, in order to understand galaxy formation and evolution, we need to accurately model this non-thermal constituent of the ISM. However, beyond our local environment within the Milky Way, we cannot measure CRs directly in other galaxies. There are nevertheless many ways to observe CRs indirectly via the radiation they emit through their interaction with magnetic and interstellar radiation fields as well as with the ISM.
In this work, I develop a numerical framework to calculate the spectral distribution of CRs in simulations of isolated galaxies where a steady-state between injection and cooling is assumed. Furthermore, I calculate the non-thermal emission processes arising from the modelled CR proton and electron spectra ranging from radio wavelengths up to the very high-energy gamma-ray regime.
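For orientation, the assumed steady state between injection and continuous cooling corresponds, in its simplest textbook form, to

```latex
\partial_t f + \partial_p\bigl[\dot p(p)\,f(p)\bigr] = Q(p)
\quad\Longrightarrow\quad
f_{\mathrm{ss}}(p) = \frac{1}{\lvert \dot p(p) \rvert}\int_p^{\infty} Q(p')\,\mathrm{d}p' ,
```

where Q(p) is the injection spectrum and ṗ(p) < 0 the momentum loss (cooling) rate. The numerical framework of the thesis solves the spectrally resolved problem for CR protons and electrons rather than this closed expression.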
I apply this code to a number of high-resolution magneto-hydrodynamical (MHD) simulations of isolated galaxies that include CRs. This allows me to study their CR spectra and compare them to measurements of the CR proton and electron spectra by the Voyager 1 spacecraft and the AMS-02 instrument, in order to reveal the origin of the measured spectral features.
Furthermore, I provide detailed emission maps, luminosities and spectra of the non-thermal emission from our simulated galaxies, which range from dwarfs to Milky Way analogues to starburst galaxies at different evolutionary stages. I successfully reproduce the observed relations of the radio and gamma-ray luminosities with the far-infrared (FIR) emission of star-forming (SF) galaxies, where the latter is a good tracer of the star formation rate. I find that highly SF galaxies are close to the limit where their CR population would lose all of its energy to the emission of radiation, whereas CRs tend to escape weakly SF galaxies more quickly. On top of that, I investigate the properties of CR transport that are needed in order to match the observed gamma-ray spectra.
Furthermore, I uncover the underlying processes that enable the FIR-radio correlation (FRC) to be maintained even in starburst galaxies and find that thermal free-free emission naturally explains the observed radio spectra in SF galaxies like M82 and NGC 253, thus solving the riddle of the flat radio spectra that had been proposed to contradict the observed tight FRC.
Lastly, I scrutinise the steady-state modelling of the CR proton component by investigating, for the first time, the influence of spectrally resolved CR transport in MHD simulations on the hadronic gamma-ray emission of SF galaxies, revealing new insights into the observational signatures of CR transport, both spectrally and spatially.
Neue Wege ins Lehramt
(2023)
According to the latest forecasts by Klemm (2022), Germany will be short of approximately 127,000 teachers by 2035. This large gap can no longer be covered solely by teachers who have completed a traditional teacher-training degree. In response to the teacher shortage, schools in Germany are therefore increasingly hiring people without a traditional teacher-training degree in order to ensure the provision of instruction (KMK, 2022). Before entering the school service, non-traditionally trained teachers usually complete an alternative qualification programme. These qualification programmes are, however, very heterogeneous in their duration and content and presuppose different entry requirements for applicants (Driesner & Arndt, 2020). They are generally much shorter than traditional teacher-training programmes at universities and colleges in order to enable a quick entry into the school service. The shorter qualification is thus accompanied by a smaller number of learning and teaching opportunities than would be found in a traditional teacher-training degree. Consequently, it can be assumed that non-traditionally trained teachers are less well prepared for the demands of the teaching profession.
This assumption is also frequently voiced in public, and criticism of alternative qualification programmes is strong. In 2019, for example, the president of the German Teachers' Association, Heinz-Peter Meidinger, told the newspaper "Die Welt" that the inadequate qualification of career changers was "a crime against the children" (Die Welt, 2019). However, research in the German-speaking countries that could provide robust evidence to support this criticism is still in its infancy. Initial studies generally point to few differences between traditionally and non-traditionally trained teachers (Kleickmann & Anders, 2011; Kunina-Habenicht et al., 2013; Oettinghaus, Lamprecht & Korneck, 2014). Studies that do find differences show them mainly in the area of pedagogical knowledge, to the disadvantage of non-traditionally trained teachers. The question of further differences, for example in instructional quality or occupational well-being, has so far not been answered for the German context.
The present work aims to close part of these research gaps. In three sub-studies, it addresses the question of differences between traditionally and non-traditionally trained teachers with respect to their professional competence, career choice motivation, well-being and instructional quality. The overarching research question is addressed against the background of the theoretical model of the determinants and consequences of professional competence (Kunter, Kleickmann, Klusmann & Richter, 2011). This model is also used for the theoretical review of the existing national and international research on differences between traditionally and non-traditionally trained teachers.
Sub-study I first examines differences in professional competence between traditionally and non-traditionally trained teachers. Following the competence model of Baumert and Kunter (2006), the two groups are compared on the four aspects of professional competence: professional knowledge, beliefs, motivational orientations and self-regulatory skills. The focus is on traditionally trained teacher candidates and so-called career changers (Quereinsteiger:innen) during the preparatory service. A secondary analysis of data from the COACTIV-R project was carried out, and differences were analysed by means of multivariate analyses of covariance.
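As a hedged illustration only (not the actual COACTIV-R analysis; the formula, covariate and variable names are assumptions), a multivariate analysis of covariance of this kind could be sketched as follows.

```python
# Sketch of a multivariate analysis of covariance comparing two teacher groups
# on several competence measures, with one covariate. Variable names and the
# covariate are illustrative assumptions.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("coactiv_r_secondary.csv")   # hypothetical prepared data set

model = MANOVA.from_formula(
    "pedagogical_knowledge + beliefs + motivation + self_regulation ~ group + age",
    data=df,
)
print(model.mv_test())
```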
Sub-study II examines both determinants and consequences of professional competence. On the side of the determinants, differences in career choice motivation between teachers with and without a traditional teacher-training degree are investigated. In addition, differences in occupational well-being (emotional exhaustion, enthusiasm) and in the intention to remain in the profession are analysed as consequences of professional competence. Data from the 2019 pilot study for the IQB Trends in Student Achievement study of the Institute for Educational Quality Improvement (IQB) were analysed. Differences between traditionally and non-traditionally trained teachers were again estimated by means of multivariate analyses of covariance.
Finally, sub-study III examined differences in instructional quality between traditionally and non-traditionally trained teachers as a consequence of professional competence. For this purpose, data from the 2018 IQB Trends in Student Achievement study were used in a secondary analysis employing doubly latent multilevel models. Differences were examined in the areas of absence of disruptions, cognitive activation and student support.
The final chapter of this dissertation summarises and discusses the central findings of the three sub-studies. The results indicate that traditionally and non-traditionally trained teachers differ significantly in only a few of the aspects examined. Non-traditionally trained teachers have less pedagogical knowledge, have better self-regulatory skills, and do not differ from traditionally trained teachers in their career choice motives, their well-being or their instructional quality. The results open the door to a discussion of the relevance of the traditional teacher-training degree and provide a basis for implications for further research and educational policy. Finally, the studies are evaluated with regard to their limitations.
Research programmes bring together numerous actors with different backgrounds and areas of expertise in individual or collaborative projects, which are, however, largely carried out independently of one another. Given that societal challenges such as global warming increasingly require solutions that cut across disciplines, networking and transfer processes within research programmes should receive more attention. Implementing accompanying research ("Begleitforschung") is one way of meeting this demand. Accompanying research differs from the "usual" projects in its approach and objectives and can occur in different theoretical pure forms. Put briefly, it acts either (1) in a way that complements the content of the individual research projects, (2) on a meta-level with a focus on the processes within the research programme, or (3) as an integrating, synthesising instance for which the networking of the projects in the research programme and knowledge transfer are of central importance. Although these forms can be separated analytically into theoretical pure types, in practice a mix of all three usually emerges.
In this context, the present dissertation follows on, as a complementary study, from previous approaches to the methodological toolkit of accompanying research and focuses on the following questions: On what basis can the actors in a research programme be networked so that they are brought together effectively? Which further methodological elements should build on this in order to generate added value that exceeds the sum of the individual results of the research programme? What kind of added value can this be, and what role does accompanying research play in it?
The first methodological element is the compilation and preparation of an initial database. By indexing project-related texts with keywords based on semantic analysis, a comprehensive database can be generated from the contents of the research projects. The keywords are structured in a keyword catalogue using a controlled vocabulary. In parallel, they are assigned to the respective projects, which thereby acquire thematic attributes. To make thematic overlaps between research projects visible and interpretable, the second element comprises approaches to visualisation. For this purpose, the information is transferred into a network graph that can depict both all projects involved in the research programme and the identified keywords in relation to one another. In this way it becomes visible, for example, which research projects are "closer" to each other than others on the basis of their contents. Precisely this information is used in the third methodological element as a planning basis for different event formats such as working conferences or transfer workshops. The fourth methodological element comprises synthesis building. This is a process that extends over the entire period of cooperation between the accompanying research and the other research projects, since the synthesis incorporates, among other things, interim, partial and final results of the projects as well as content from the various events. Ultimately, this fourth element is also the means of deriving recommendations for future initiatives from the integrated and synthesised information.
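As a hedged illustration of the second element, the sketch below builds a bipartite project-keyword graph and projects it onto the projects so that edge weights count shared keywords; the project names and keywords are hypothetical.

```python
# Bipartite project-keyword graph and its weighted projection onto projects,
# making thematic overlap visible. Project names and keywords are hypothetical.
import networkx as nx
from networkx.algorithms import bipartite

project_keywords = {
    "Project A": {"soil carbon", "emission modelling", "crop rotation"},
    "Project B": {"emission modelling", "life-cycle assessment"},
    "Project C": {"crop rotation", "soil carbon", "irrigation"},
}

B = nx.Graph()
for project, keywords in project_keywords.items():
    B.add_node(project, bipartite=0)
    for kw in keywords:
        B.add_node(kw, bipartite=1)
        B.add_edge(project, kw)

projects = [n for n, d in B.nodes(data=True) if d["bipartite"] == 0]
P = bipartite.weighted_projected_graph(B, projects)   # edge weight = shared keywords

for u, v, data in P.edges(data=True):
    print(f"{u} -- {v}: {data['weight']} shared keyword(s)")
```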
The methodological elements were developed in the ongoing process of the accompanying research project KlimAgrar, which serves as the case study of this dissertation and whose background in the field of climate protection and climate adaptation in agriculture is explained in detail in the text.
Creative intensive processes
(2023)
Creativity – developing something new and useful – is a constant challenge in the working world. Work processes, services, or products must be sensibly adapted to changing times. To be able to analyze and, if necessary, adapt creativity in work processes, a precise understanding of these creative activities is necessary. Process modeling techniques are often used to capture business processes, represent them graphically, and analyze them for adaptation possibilities. For creative work, this has so far been possible only to a very limited extent. An accurate understanding of creative work faces the challenge that, on the one hand, such work is usually very complex and iterative and, on the other hand, it is at least partially unpredictable, as new things emerge. How can the complexity of creative business processes be adequately addressed and simultaneously kept manageable? This dissertation attempts to answer this question by first developing a precise process understanding of creative work. In an interdisciplinary approach, the literature on the process description of creativity-intensive work is analyzed from the perspectives of psychology, organizational studies, and business informatics. In addition, a digital ethnographic study in the context of software development is used to analyze creative work. A model is developed on the basis of which four elementary process components can be analyzed: Intention of the creative activity, Creation to develop the new, Evaluation to assess its meaningfulness, and Planning of the activities arising in the process – in short, the ICEP model. These four process elements are then translated into the Knowledge Modeling and Description Language (KMDL), which was developed to capture and represent knowledge-intensive business processes. The modeling extension based on the ICEP model enables creative business processes to be identified and specified without the need for extensive modeling of all process details. The extension was developed using ethnographic data, then applied to further organizational and business process contexts and evaluated by external parties in two expert studies. The developed ICEP model provides an analytical framework for complex creative work processes. By transforming it into a modeling method, it can be comprehensively integrated into process models, thus expanding the understanding of existing creative work in as-is process analyses.
Insight by de—sign
(2023)
The calculus of design is a diagrammatic approach towards the relationship between design and insight. The thesis I am evolving is that insights are not discovered, gained, explored, revealed, or mined, but are operatively de—signed. The de in design neglects the contingency of the space towards the sign. The — is the drawing of a distinction within the operation. Space collapses through the negativity of the sign; the command draws a distinction that neglects the space for the form's sake. The operation to de—sign is counterintuitively not the creation of signs, but their removal, the exclusion of possible sign propositions of space. De—sign is thus an act of exclusion; the possibilities of space are crossed into form.
Many complex systems that we encounter in the world can be formalized using networks. Consequently, they have been a focus of computer science for decades, where algorithms are developed to understand and utilize these systems.
Surprisingly, our theoretical understanding of these algorithms and their behavior in practice often diverge significantly. In fact, they tend to perform much better on real-world networks than one would expect when considering the theoretical worst-case bounds. One way of capturing this discrepancy is the average-case analysis, where the idea is to acknowledge the differences between practical and worst-case instances by focusing on networks whose properties match those of real graphs. Recent observations indicate that good representations of real-world networks are obtained by assuming that a network has an underlying hyperbolic geometry.
In this thesis, we demonstrate that the connection between networks and hyperbolic space can be utilized as a powerful tool for average-case analysis. To this end, we first introduce strongly hyperbolic unit disk graphs and identify the famous hyperbolic random graph model as a special case of them. We then consider four problems where recent empirical results highlight a gap between theory and practice and use hyperbolic graph models to explain these phenomena theoretically. First, we develop a routing scheme, used to forward information in a network, and analyze its efficiency on strongly hyperbolic unit disk graphs. For the special case of hyperbolic random graphs, our algorithm beats existing performance lower bounds. Afterwards, we use the hyperbolic random graph model to theoretically explain empirical observations about the performance of the bidirectional breadth-first search. Finally, we develop algorithms for computing optimal and nearly optimal vertex covers (problems known to be NP-hard) and show that, on hyperbolic random graphs, they run in polynomial and quasi-linear time, respectively.
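As an illustration of the underlying model, the sketch below samples a threshold hyperbolic random graph by drawing points in a hyperbolic disk and connecting every pair within hyperbolic distance R; the parameter choices are illustrative, and the strongly hyperbolic unit disk graphs studied in the thesis generalise this construction.

```python
# Threshold hyperbolic random graph: sample n points in a hyperbolic disk of
# radius R (radial density ~ sinh(alpha * r), uniform angles) and connect pairs
# whose hyperbolic distance is at most R. Parameters are illustrative.
import numpy as np
import networkx as nx

def hyperbolic_random_graph(n=500, alpha=0.75, R=None, seed=0):
    rng = np.random.default_rng(seed)
    if R is None:
        R = 2.0 * np.log(n)                                        # disk radius ~ 2 ln n
    u = rng.random(n)
    r = np.arccosh(1.0 + u * (np.cosh(alpha * R) - 1.0)) / alpha   # inverse-CDF sampling
    theta = rng.random(n) * 2.0 * np.pi

    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            dtheta = np.pi - abs(np.pi - abs(theta[i] - theta[j]))  # angular distance
            cosh_d = (np.cosh(r[i]) * np.cosh(r[j])
                      - np.sinh(r[i]) * np.sinh(r[j]) * np.cos(dtheta))
            if np.arccosh(max(cosh_d, 1.0)) <= R:                   # hyperbolic law of cosines
                G.add_edge(i, j)
    return G

G = hyperbolic_random_graph()
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```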
Our theoretical analyses reveal interesting properties of hyperbolic random graphs, and our empirical studies present evidence that these properties, as well as our algorithmic improvements, translate back into practice.
Laser cutting is a fast and precise fabrication process. This makes laser cutting a powerful process in custom industrial production. Since the patents on the original technology started to expire, a growing community of tech-enthusiasts embraced the technology and started sharing the models they fabricate online. Surprisingly, the shared models appear to largely be one-offs (e.g., they proudly showcase what a single person can make in one afternoon). For laser cutting to become a relevant mainstream phenomenon (as opposed to the current tech enthusiasts and industry users), it is crucial to enable users to reproduce models made by more experienced modelers, and to build on the work of others instead of creating one-offs.
We create a technological basis that allows users to build on the work of others—a progression that is currently held back by the use of exchange formats that disregard mechanical differences between machines and therefore overlook implications with respect to how well parts fit together mechanically (aka engineering fit).
For the field to progress, we need a machine-independent sharing infrastructure.
In this thesis, we outline three approaches that together get us closer to this:
(1) 2D cutting plans that are tolerant to machine variations. Our initial take is a minimally invasive approach: replacing machine-specific elements in cutting plans with more tolerant elements using mechanical hacks like springs and wedges. The resulting models fabricate on any consumer laser cutter and in a range of materials.
(2) sharing models in 3D. To allow building on the work of others, we build a 3D modeling environment for laser cutting (kyub). After users design a model, they export their 3D models to 2D cutting plans optimized for the machine and material at hand. We extend this volumetric environment with tools to edit individual plates, allowing users to leverage the efficiency of volumetric editing while retaining control over the most detailed elements in laser cutting (plates).
(3) converting legacy 2D cutting plans to 3D models. To handle legacy models, we build software to interactively reconstruct 3D models from 2D cutting plans. This allows users to reuse the models in more productive ways. We revisit this by automating the assembly process for a large subset of models.
The above-mentioned software composes a larger system (kyub, 140,000 lines of code). This system integration enables the push towards actual use, which we demonstrate through a range of workshops in which users build complex models such as fully functional guitars. By simplifying sharing and re-use, and through the resulting increase in model complexity, this line of work forms a small step toward enabling personal fabrication to scale past the maker phenomenon towards a mainstream phenomenon, the same way that other fields, such as print (postscript) and ultimately computing itself (portable programming languages, etc.), reached mass adoption.
In this work, binding interactions between biomolecules were analyzed by a technique that is based on electrically controllable DNA nanolevers. The technique was applied to virus-receptor interactions for the first time. As receptors, primarily peptides on DNA nanostructures and antibodies were utilized. The DNA nanostructures were integrated into the measurement technique and enabled the presentation of the peptides in a controllable geometrical order. The number of peptides could be varied to be compatible to the binding sites of the viral surface proteins.
Influenza A virus served as a model system, on which the general measurability was demonstrated. Variations of the receptor peptide, the surface ligand density, the measurement temperature and the virus subtypes showed the sensitivity and applicability of the technology. Additionally, the immobilization of virus particles enabled the measurement of differences in oligovalent binding of DNA-peptide nanostructures to the viral proteins in their native environment.
When the coronavirus pandemic broke out in 2020, work on binding interactions of a peptide from the hACE2 receptor and the spike protein of the SARS-CoV-2 virus revealed that oligovalent binding can be quantified in the switchSENSE technology. It could also be shown that small changes in the amino acid sequence of the spike protein resulted in complete loss of binding. Interactions of the peptide and inactivated virus material as well as pseudo virus particles could be measured. Additionally, the switchSENSE technology was utilized to rank six antibodies for their binding affinity towards the nucleocapsid protein of SARS-CoV-2 for the development of a rapid antigen test device.
The technique was furthermore employed to show binding of a non-enveloped virus (adenovirus) and a virus-like particle (norovirus-like particle) to antibodies. Apart from binding interactions, the use of DNA origami levers with a length of around 50 nm enabled the switching of virus material. This proved that the technology is also able to size objects with a hydrodynamic diameter larger than 14 nm.
A theoretical work on diffusion and reaction-limited binding interactions revealed that the technique and the chosen parameters enable the determination of binding rate constants in the reaction-limited regime.
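For reference, the standard 1:1 interaction model that underlies such rate-constant determinations in the reaction-limited regime reads

```latex
\frac{\mathrm{d}\theta}{\mathrm{d}t} = k_{\mathrm{on}}\,c\,(1-\theta) - k_{\mathrm{off}}\,\theta ,
\qquad
k_{\mathrm{obs}} = k_{\mathrm{on}}\,c + k_{\mathrm{off}} ,
\qquad
K_D = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}} ,
```

where θ is the fractional occupancy of the surface receptors and c the analyte concentration. This generic form is given for orientation only and is not reproduced from the thesis.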
Overall, the applicability of the switchSENSE technique to virus-receptor binding interactions could be demonstrated on multiple examples. While there are challenges that remain, the setup enables the determination of affinities between viruses and receptors in their native environment. Especially the possibilities regarding the quantification of oligo- and multivalent binding interactions could be presented.
Selenium (Se) is an essential trace element that is ubiquitously present in the environment in small concentrations. Essential functions of Se in the human body are exerted through a wide range of proteins containing selenocysteine as their active center. Such proteins, called selenoproteins, are involved in multiple physiological processes such as antioxidative defense and the regulation of thyroid hormone functions. Se deficiency is therefore known to cause a broad spectrum of physiological impairments, especially in endemic regions with low Se content. Nevertheless, despite being an essential trace element, Se can exhibit toxic effects if its intake exceeds tolerable levels. The range between deficiency and overexposure represents the optimal Se supply; however, this range is narrower than for any other essential trace element. Together with significantly varying Se concentrations in soil and the presence of specific bioaccumulation factors, this makes the assessment of the epidemiological Se status noticeably difficult. While Se acts in the body through multiple selenoproteins, its intake occurs mainly in the form of small organic or inorganic molecular species. Thus, Se exposure depends not only on the daily intake but also on the chemical form in which it is present.
The essential functions of selenium have long been known, and its primary forms in different food sources have been described. Nevertheless, analytical capabilities for a comprehensive investigation of Se species and their derivatives have only been introduced in recent decades. A new Se compound was identified in 2010 in the blood and tissues of bluefin tuna. It was named selenoneine (SeN), since it is an isologue of the naturally occurring antioxidant ergothioneine (ET) in which Se replaces sulfur. In the following years, SeN was identified in a number of edible fish species and attracted attention as a new dietary Se source and potentially strong antioxidant. Studies in populations whose diet relies largely on fish revealed that SeN represents the main non-protein-bound Se pool in their blood. First studies, conducted with enriched fish extracts, already demonstrated the high antioxidative potential of SeN and its possible function in the detoxification of methylmercury in fish. Cell culture studies demonstrated that SeN can utilize the same transporter as ergothioneine, and a SeN metabolite was found in human urine.
Until recently, studies on SeN properties were severely limited by the lack of ways to obtain the pure compound. A prerequisite for this work was therefore a successful approach to SeN synthesis at the University of Graz, utilizing genetically modified yeasts. In the current study, using HepG2 liver carcinoma cells, it was demonstrated that SeN does not cause toxic effects in hepatocytes up to a concentration of 100 μM. Uptake experiments showed that SeN is not bioavailable to the liver cells used.
In the next part, a blood-brain barrier (BBB) model based on capillary endothelial cells from the porcine brain was used to describe the possible transfer of SeN into the central nervous system (CNS). The assessment of toxicity markers in these endothelial cells and the monitoring of barrier conditions during transfer experiments demonstrated the absence of toxic effects of SeN on the BBB endothelium up to a concentration of 100 μM. Transfer data for SeN showed slow but substantial transfer. A statistically significant increase was observed 48 hours after SeN incubation from the blood-facing side of the barrier. However, an increase in Se content was already clearly visible after 6 hours of incubation with 1 μM of SeN. While the transfer rate of SeN after application of a 0.1 μM dose was very close to that for 1 μM, incubation with 10 μM of SeN resulted in a significantly decreased transfer rate. Double-sided application of SeN caused no side-specific transfer of SeN, thus suggesting a passive diffusion mechanism of SeN across the BBB. These data are in accordance with animal studies, where ET accumulation was observed in the rat brain even though the rat BBB lacks the primary ET transporter, OCTN1. Investigation of capillary endothelial cell monolayers after incubation with SeN and reference selenium compounds showed no significant increase in intracellular selenium concentration. Species-specific Se measurements in medium samples from the apical and basolateral compartments, as well as in cell lysates, showed no SeN metabolization. Therefore, it can be concluded that SeN may reach the brain without significant transformation.
As the third part of this work, the assessment of SeN antioxidant properties was performed in Caco-2 human colorectal adenocarcinoma cells. Previous studies demonstrated that the intestinal epithelium is able to actively transport SeN from the intestinal lumen to the blood side and to accumulate SeN. Further investigation within the current work showed a much higher antioxidant potential of SeN compared to ET. The radical scavenging activity after incubation with SeN was close to that observed for selenite and selenomethionine. However, the SeN effect on the viability of intestinal cells under oxidative conditions was close to that caused by ET. To answer the question of whether SeN can be used as a dietary Se source and induce the activity of selenoproteins, the activity of glutathione peroxidase (GPx) and the secretion of selenoprotein P (SelenoP) were additionally measured in Caco-2 cells. As expected, the reference selenium compounds selenite and selenomethionine caused an efficient induction of GPx activity. In contrast, SeN had no effect on GPx activity. To examine the possibility of SeN being embedded into the selenoproteome, SelenoP was measured in the culture medium. Even though Caco-2 cells effectively take up SeN in quantities much higher than selenite or selenomethionine, no secretion of SelenoP was observed after SeN incubation.
In summary, we can conclude that SeN can hardly serve as a Se source for selenoprotein synthesis. However, SeN exhibits strong antioxidative properties, which arise when the sulfur in ET is exchanged for Se. Therefore, SeN is of particular interest for research not as part of Se metabolism, but as an important endemic dietary antioxidant.
Non-local boundary conditions for the spin Dirac operator on spacetimes with timelike boundary
(2023)
Non-local boundary conditions – for example the Atiyah–Patodi–Singer (APS) conditions – for Dirac operators on Riemannian manifolds are rather well-understood, while not much is known for such operators on Lorentzian manifolds. Recently, Bär and Strohmaier [15] and Drago, Große, and Murro [27] introduced APS-like conditions for the spin Dirac operator on Lorentzian manifolds with spacelike and timelike boundary, respectively. While Bär and Strohmaier [15] showed the Fredholmness of the Dirac operator with these boundary conditions, Drago, Große, and Murro [27] proved the well-posedness of the corresponding initial boundary value problem under certain geometric assumptions.
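For orientation, APS-type conditions are usually stated via a spectral projection of the induced operator on the boundary; the following is only a schematic form, since sign and projection conventions differ between references such as [15] and [27]:

    P_{\geq 0}\big(\phi|_{\partial M}\big) = 0

Here P_{\geq 0} denotes the spectral projection onto the non-negative eigenspaces of the elliptic boundary Dirac operator and \phi|_{\partial M} the restriction of the spinor to the boundary. The non-locality lies in the fact that this projection is defined through the full spectral decomposition of the boundary operator rather than through pointwise conditions.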
In this thesis, we will follow in the footsteps of the latter authors and discuss whether the APS-like conditions for Dirac operators on Lorentzian manifolds with timelike boundary can be replaced by more general conditions such that the associated initial boundary value problems are still well-posed.
We consider boundary conditions that are local in time and non-local in the spatial directions. More precisely, we use the spacetime foliation arising from the Cauchy temporal function and split the Dirac operator along this foliation. This gives rise to a family of elliptic operators each acting on spinors of the spin bundle over the corresponding timeslice. The theory of elliptic operators then ensures that we can find families of non-local boundary conditions with respect to this family of operators. Proceeding, we use such a family of boundary conditions to define a Lorentzian boundary condition on the whole timelike boundary. By analyzing the properties of the Lorentzian boundary conditions, we then find sufficient conditions on the family of non-local boundary conditions that lead to the well-posedness of the corresponding Cauchy problems. The well-posedness itself will then be proven by using classical tools including energy estimates and approximation by solutions of the regularized problems.
Moreover, we use this theory to construct explicit boundary conditions for the Lorentzian Dirac operator. More precisely, we will discuss two examples of boundary conditions in our setting – the analogue of the Atiyah–Patodi–Singer conditions and the chirality conditions, respectively. To do this, we will take a closer look at the theory of non-local boundary conditions for elliptic operators and analyze the requirements on the family of non-local boundary conditions for these specific examples.
In the last two decades, process mining has developed from a niche discipline into a significant research area with considerable impact on academia and industry. Process mining enables organisations to identify their running business processes from historical execution data. The first requirement of any process mining technique is an event log, an artifact that represents concrete business process executions in the form of sequences of events. These logs can be extracted from the organisation's information systems and are used by process experts to retrieve deep insights into the organisation's running processes. From the events contained in such logs, process models can be automatically discovered and enhanced or annotated with performance-related information. Besides behavioral information, event logs contain domain-specific data, albeit implicitly. However, such data are usually overlooked and, thus, not utilized to their full potential.
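As a purely illustrative sketch of what such an event log looks like, the hypothetical example below (column names and values are invented, not taken from the thesis) shows events grouped by case, and how the control-flow perspective reduces each case to a trace while domain attributes remain implicit:

    # A minimal, hypothetical event log: one row per event, grouped by case (process instance).
    from collections import defaultdict

    event_log = [
        {"case_id": "order-1", "activity": "Create order",  "timestamp": "2023-01-05T09:12:00", "amount": 120.0},
        {"case_id": "order-1", "activity": "Approve order", "timestamp": "2023-01-05T10:03:00", "amount": 120.0},
        {"case_id": "order-1", "activity": "Ship order",    "timestamp": "2023-01-06T14:41:00", "amount": 120.0},
        {"case_id": "order-2", "activity": "Create order",  "timestamp": "2023-01-05T11:20:00", "amount": 75.5},
        {"case_id": "order-2", "activity": "Cancel order",  "timestamp": "2023-01-05T12:02:00", "amount": 75.5},
    ]

    # Discovery algorithms typically use only the trace of each case, i.e. its activities
    # ordered by timestamp; attributes such as "amount" are the implicit domain data.
    traces = defaultdict(list)
    for event in sorted(event_log, key=lambda e: e["timestamp"]):
        traces[event["case_id"]].append(event["activity"])

    print(dict(traces))
    # {'order-1': ['Create order', 'Approve order', 'Ship order'],
    #  'order-2': ['Create order', 'Cancel order']}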
Within the process mining area, this thesis addresses the research gap of discovering, from event logs, contextual information that cannot be captured by applying existing process mining techniques. Within this research gap, we identify four key problems and tackle them by looking at an event log from different angles. First, we address the problem of deriving an event log in the absence of proper database access and domain knowledge. The second problem is related to the under-utilization of the implicit domain knowledge present in an event log, which could increase the understandability of the discovered process model. Next, there is a lack of a holistic representation of the historical data manipulation at the process model level of abstraction. Last but not least, each process model is presumed to be independent of other process models when discovered from an event log, thus ignoring possible data dependencies between processes within an organisation.
For each of the problems mentioned above, this thesis proposes a dedicated method. The first method provides a solution to extract an event log only from the transactions performed on the database that are stored in the form of redo logs. The second method deals with discovering the underlying data model that is implicitly embedded in the event log, thus, complementing the discovered process model with important domain knowledge information. The third method captures, on the process model level, how the data affects the running process instances. Lastly, the fourth method is about the discovery of the relations between business processes (i.e., how they exchange data) from a set of event logs and explicitly representing such complex interdependencies in a business process architecture.
All the methods introduced in this thesis are implemented as prototypes, and their feasibility is demonstrated by applying them to real-life event logs.
This dissertation addresses early word segmentation in monolingual and bilingual language acquisition. Word segmentation is one of the major challenges infants face in language acquisition, since spoken language is continuous and word boundaries are not reliably marked by acoustic pauses. Numerous studies have shown for several languages that the segmentation abilities of monolingual infants emerge between 6 and 12 months of age (e.g., English: Jusczyk, Houston & Newsome, 1999; French: Nazzi, Mersad, Sundara, Iakimova & Polka, 2014; German: Höhle & Weissenborn, 2003; Bartels, Darcy & Höhle, 2009). Early word segmentation abilities are language-specific (Polka & Sundara, 2012). Cross-linguistic studies showed that cross-language segmentation is only mastered successfully by monolingually raised infants when the non-native language shares rhythmic properties with their native language (Houston, Jusczyk, Kuijpers, Coolen & Cutler, 2000; Höhle, 2002; Polka & Sundara, 2012).
In the four studies of this dissertation, monolingual German-learning and bilingual German-French-learning infants at the age of 9 months were investigated using behavioral (headturn preference paradigm) and electrophysiological methods (electroencephalography). The studies addressed the question of whether monolingual German-learning infants at the age of 9 months are able to segment their native language German and the rhythmically dissimilar language French. In other words: can monolingual 9-month-old infants modify their segmentation procedures, or deviate from their preferred segmentation, in order to successfully segment non-native input as well?
With respect to the bilingual language learners, the question was whether infants growing up bilingually show segmentation abilities comparable to those of monolingually raised infants, and whether language dominance influences the development of word segmentation abilities in a bilingual population.
The chosen methods made it possible to draw on both behavioral and electrophysiological correlates to answer these questions. In addition, the EEG, through event-related potentials (ERPs), provided insight into learning and processing mechanisms that could not be captured with behavioral methods.
The results show that monolingual German-learning infants at the age of 9 months successfully segment both their native language and the non-native language French. The ability to segment the non-native language French is, however, influenced by the native language: monolingual infants who were tested with French first segmented both the French and the subsequently presented German speech material. Monolingual infants who were tested first with German and then with French segmented the German stimuli but not the French speech material.
Bilingual German-French-learning infants successfully segment both of their native languages at the age of 9 months. The results also point to an influence of language dominance on the word segmentation abilities of bilingually raised infants: balanced bilinguals segment both native languages successfully, whereas unbalanced bilinguals show successful segmentation only for their dominant language.
In summary, this work provides the first evidence of successful cross-language segmentation of prosodically different languages from different rhythm classes in a monolingual population. Moreover, the studies of this work provide evidence that bilingually raised infants show a development of word segmentation abilities comparable to that of monolingual language learners. This result extends the findings of previous studies which, for various developmental steps in language acquisition, demonstrated no delay but rather a development within a bilingual population comparable to monolingually raised infants (language discrimination: Byers-Heinlein, Burns & Werker, 2010; Bosch & Sebastian-Galles, 1997; phoneme discrimination: Albareda-Castellot, Pons & Sebastián-Gallés, 2011; perception of rhythmic properties: Bijeljac-Babic, Höhle & Nazzi, 2016).
Hantaviruses (HVs) are a group of zoonotic viruses that infect human beings primarily through aerosol transmission of rodent excreta and urine. HVs are classified geographically into Old World HVs (OWHVs), found in Europe and Asia, and New World HVs (NWHVs), observed in the Americas. These different strains can cause severe hantavirus disease with a pronounced renal syndrome or severe cardiopulmonary distress. HVs can be extremely lethal, with NWHV infections reaching mortality rates of up to 40 %. HVs are known to generate epidemic outbreaks in many parts of the world, including Germany, which has seen periodic HV infections over the past decade. HV has a trisegmented genome: the small segment (S) encodes the nucleocapsid protein (NP); the middle segment (M) encodes the glycoproteins (GPs) Gn and Gc, which upon independent expression form up to tetramers and primarily monomers and dimers, respectively; and the large segment (L) encodes the RNA-dependent RNA polymerase (RdRp). Interactions between these viral proteins are crucial for gaining mechanistic insights into HV virion development. Despite considerable efforts, these associations have not yet been quantified in living cells, although such quantification is required to develop mechanistic models of HV assembly. This dissertation focuses on three key questions pertaining to the initial steps of virion formation, which primarily involve the GPs and NP.
The research investigations in this work were completed using Fluorescence Correlation Spectroscopy (FCS) approaches. FCS is frequently used to assess the biophysical features of biomolecules, including protein concentration and diffusion dynamics, and circumvents the requirement for protein overexpression. In this thesis, FCS was primarily applied to evaluate protein multimerization at single-cell resolution.
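For context, FCS analyses of this kind typically fit the measured intensity fluctuations with an autocorrelation model; the standard model for free three-dimensional diffusion, given here only as a sketch of the general technique (the thesis may use different or extended models), reads:

    G(\tau) = \frac{1}{\langle N \rangle}\left(1 + \frac{\tau}{\tau_D}\right)^{-1}\left(1 + \frac{\tau}{S^{2}\,\tau_D}\right)^{-1/2}

where \langle N \rangle is the average number of fluorescent molecules in the detection volume, \tau_D the diffusion time and S the structure parameter of the confocal volume. The fitted \langle N \rangle, combined with the detected count rate, yields the molecular brightness, which is what allows multimerization to be inferred in single cells.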
The first question addressed which of the GP spike formation models proposed by Hepojoki et al. (2010) appropriately describes the evidence in living cells. A novel in cellulo assay was developed to evaluate the amounts of fluorescently labelled and unlabelled GPs upon co-expression. The results clearly showed that Gn and Gc initially form a heterodimeric Gn:Gc subunit. This subunit then multimerizes with identical Gn:Gc subunits to generate the final GP spike. Based on these interactions, models describing the formation of the GP complex (with multiple GP spike subunits) were additionally developed.
HV GP assembly primarily takes place in the Golgi apparatus (GA) of infected cells. Interestingly, NWHV GPs are hypothesized to assemble at the plasma membrane (PM). This led to the second research question of this thesis, in which a systematic comparison between OWHV and NWHV GPs was conducted to test this hypothesis. Surprisingly, GP localization at the PM was observed for both OWHV and NWHV GPs. Similar results were also obtained for OWHV and NWHV GP localization in the absence of cytoskeletal factors that regulate HV trafficking in cells.
The final question focused on quantifying the NP-GP interactions and understanding their influence on NP and GP multimerization. Gc multimers were detected in the presence of NP, complemented by localized regions of high NP-Gc interaction in the perinuclear region of living cells. The Gc-CT domain was shown to influence NP-Gc associations. Gn, on the other hand, formed up to tetrameric complexes independently of the presence of NP.
The results in this dissertation shed light on the initial steps of HV virion formation by quantifying homo- and heterotypic interactions involving NP and GPs, quantifications that are otherwise very difficult to perform. Finally, the in cellulo methodologies implemented in this work can potentially be extended to other key interactions involved in HV virus assembly.
This work aims to show that the narrative work of the writer Tomás Carrasquilla Naranjo (1858 - 1940) contains a Wahrheitsgehalt (Benjamin, 2012), the temporal concretion of an idea, which materializes through what I have here called the image of popular religiosity. This means that the Antioquian's work is constructed in the manner of a great mosaic in which, despite the varied and uneven elements that compose it, the union of them all produces an image (Bild). In this image, the historical experience of the modern among the popular sectors is represented, arising from the fleeting union between the remnants of ancient traditions and the newest forms of life. Far from the conventions of his time, in which the question of the experience of the modern revolves around metropolitan settings and the role of the artist, Carrasquilla asks what happens in the vast rural areas, in the liminal spaces between the urban and the rural, and in their respective intersections. The subjects who inhabit these spaces, lacking the conceptual tools that would allow them to define this new “living experience”, this new Structure of Feeling as Raymond Williams (2019) calls it, appeal to the only thing they know, the old knowledge transmitted orally, in order to explain their present.
In this sense, it can be argued that Carrasquilla, drawing on this image of popular religiosity, attempted to establish a dialogue in the literary field from which he put forward a differential idea of the modern. On several occasions, the Antioquian stated that literature should incorporate local experiences into the dialogue of the universal. An example of this is his simile of literature and the planetary system: according to him, relations of hierarchy are established when the countries that produce literary fashions, the planets (Europe), relegate the others to being mere satellites, that is, to imitation (Carrasquilla, 1991). Today, that criticism directed at his compatriots, the Antioquian modernists, can be read as a vindication of alterity. It is therefore argued here that, although these lived experiences are not the same as those arising in the nascent metropolitan settings, where commodities represent the new substitutes for faith, in those vast areas, apparently provincial and removed from contact with other cultures and knowledges, the image of popular religiosity comes to play the same role. In other words, “indem an Dingen ihr Gebrauchswert abstirbt” (whether as utility or adoration), the subjectivity of the character charges them with “Intentionen von Wunsch und Angst” (Benjamin, 2013a), turning them into objects of contemplation, whether by carrying them or collecting them. In a similar way, Carrasquilla would have drawn on the residual stock of knowledge (Wissen) of his hypothetical reading public, inherited from diverse cultural areas during the process of colonization, with their respective heterogeneous times and particular languages (Ette, 2019), in order to join it to present-day profane experiences. Thus, the work (short story or novel) would artistically represent popular “forms of life” through which it is “aesthetically experienced” how modernity is survived (überleben) (Ette, 2015) in the marginalized sectors. That is, only from the ancient and ruinous character of popular religiosity, once sacred, is it possible to explain the experience of the modern, its here and now.
Air pollution has been a persistent global problem for the past several hundred years. While some industrialized nations have improved their air quality through stricter regulation, others have experienced declines as they rapidly industrialize. The WHO's 2021 update of its recommended air pollution limit values reflects the substantial impacts of pollutants such as NO2 and O3 on human health, as recent epidemiological evidence suggests considerable long-term health impacts of air pollution even at low concentrations. Alongside developments in our understanding of air pollution's health impacts, the new technology of low-cost sensors (LCS) has been taken up by both academia and industry as a new method for measuring air pollution. Due primarily to their lower cost and smaller size, they can be used in a variety of different applications, including in the development of higher-resolution measurement networks, in source identification, and in measurements of air pollution exposure. While significant efforts have been made to accurately calibrate LCS with reference instrumentation and various statistical models, accuracy and precision remain limited by variable sensor sensitivity. Furthermore, standard procedures for calibration still do not exist, and most proprietary calibration algorithms are black-box and inaccessible to the public. This work seeks to expand the knowledge base on LCS in several different ways: 1) by developing an open-source calibration methodology; 2) by deploying LCS at high spatial resolution in urban environments to test their capability in measuring microscale changes in urban air pollution; 3) by connecting LCS deployments with the implementation of local mobility policies to provide policy advice on resultant changes in air quality.
In a first step, it was found that LCS can be consistently calibrated with good performance against reference instrumentation using seven general steps: 1) assessing raw data distribution, 2) cleaning data, 3) flagging data, 4) model selection and tuning, 5) model validation, 6) exporting final predictions, and 7) calculating associated uncertainty. By emphasizing the need for consistent reporting of details at each step, most crucially on model selection, validation, and performance, this work pushed forward with the effort towards standardization of calibration methodologies. In addition, with the open-source publication of code and data for the seven-step methodology, advances were made towards reforming the largely black-box nature of LCS calibrations.
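As a minimal illustration of the model selection and validation steps (4 and 5) of such a workflow, the sketch below assumes co-located raw sensor and reference measurements in a CSV file; the file name, column names, feature set and random-forest choice are placeholders rather than the specific models published with this work:

    # Hedged sketch: calibrate a low-cost NO2 sensor against a reference instrument.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error, r2_score

    df = pd.read_csv("colocated_data.csv")                     # hypothetical co-location data set
    features = ["no2_raw", "temperature", "rel_humidity"]      # placeholder predictor columns

    # Keep the time order when splitting so validation mimics deployment conditions.
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df["no2_reference"], test_size=0.25, shuffle=False
    )

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)    # held-out performance
    print("R2:  ", r2_score(y_test, pred))

Reporting the feature set, the validation split and these performance metrics alongside the exported predictions and their uncertainty is the kind of transparency the seven-step methodology argues for.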
With a transparent and reliable calibration methodology established, LCS were then deployed in various street canyons between 2017 and 2020. Using two types of LCS, metal oxide (MOS) and electrochemical (EC), their performance in capturing expected patterns of urban NO2 and O3 pollution was evaluated. Results showed that calibrated concentrations from MOS and EC sensors matched general diurnal patterns in NO2 and O3 pollution measured using reference instruments. While MOS proved to be unreliable for discerning differences among measured locations within the urban environment, the concentrations measured with calibrated EC sensors matched expectations from modelling studies on NO2 and O3 pollution distribution in street canyons. As such, it was concluded that LCS are appropriate for measuring urban air quality, including for assisting urban-scale air pollution model development, and can reveal new insights into air pollution in urban environments.
To achieve the last goal of this work, two measurement campaigns were conducted in connection with the implementation of three mobility policies in Berlin. The first involved the construction of a pop-up bike lane on Kottbusser Damm in response to the COVID-19 pandemic, the second concerned the temporary implementation of a community space on Böckhstrasse, and the last focused on the closure of a portion of Friedrichstrasse to all motorized traffic. In all cases, measurements of NO2 were collected before and after the measure was implemented to assess changes in air quality resulting from these policies. Results from the Kottbusser Damm experiment showed that the bike lane reduced the NO2 concentrations that cyclists were exposed to by 22 ± 19%. On Friedrichstrasse, the street closure reduced NO2 concentrations to the level of the urban background without worsening the air quality on side streets. These valuable results were communicated swiftly to partners in the city administration responsible for evaluating the policies' success and future, highlighting the ability of LCS to provide policy-relevant results.
As a new technology, much is still to be learned about LCS and their value to academic research in the atmospheric sciences. Nevertheless, this work has advanced the state of the art in several ways. First, it contributed a novel open-source calibration methodology that can be used by LCS end-users for various air pollutants. Second, it strengthened the evidence base on the reliability of LCS for measuring urban air quality, finding through novel deployments in street canyons that LCS can be used at high spatial resolution to understand microscale air pollution dynamics. Last, it is the first of its kind to connect LCS measurements directly with mobility policies to understand their influences on local air quality, resulting in policy-relevant findings valuable for decision-makers. It serves as an example of the potential for LCS to expand our understanding of air pollution at various scales, as well as their ability to serve as valuable tools in transdisciplinary research.
The East African Rift System (EARS) is a significant example of active tectonics, which provides opportunities to examine the stages of continental faulting and landscape evolution. The southwestern extension of the EARS is today one of its most active segments; however, seismotectonic research in the area has been scarce, despite the fundamental importance of neotectonics. Our first study area is located between the Northern Province of Zambia and the southeastern Katanga Province of the Democratic Republic of Congo. Lakes Mweru and Mweru Wantipa are part of the southwest extension of the EARS. Fault analysis reveals that, since the Miocene, movements along the active Mweru-Mweru Wantipa Fault System (MMFS) have been largely responsible for the reorganization of the landscape and the drainage patterns across the southwestern branch of the EARS. To investigate the spatial and temporal patterns of fluvial-lacustrine landscape development, we determined in-situ cosmogenic 10Be and 26Al in a total of twenty-six quartzitic bedrock samples that were collected from knickpoints across the Mporokoso Plateau (south of Lake Mweru) and the eastern part of the Kundelungu Plateau (north of Lake Mweru). Samples from the Mporokoso Plateau and close to the MMFS provide evidence of temporary burial. By contrast, surfaces located far from the MMFS appear to have remained uncovered since their initial exposure, as they show consistent 10Be and 26Al exposure ages ranging up to ~830 ka. Reconciliation of the observed burial patterns with morphotectonic and stratigraphic analyses reveals the existence of an extensive paleo-lake during the Pleistocene. Through hypsometric analyses of the dated knickpoints, the potential maximum water level of the paleo-lake is constrained to ~1200 m asl (present lake level: 917 m asl). High denudation rates (up to ~40 mm ka-1) along the eastern Kundelungu Plateau suggest that footwall uplift, resulting from normal faulting, caused river incision, possibly controlling paleo-lake drainage. The lake level then fell gradually, reaching its current level at ~350 ka.
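The burial evidence mentioned above rests on the different half-lives of the two nuclides: once a surface is shielded, production stops while both inventories decay, so the 26Al/10Be ratio falls below the surface production ratio. Schematically, and only as a sketch of the general method (not the specific production and decay parameters adopted in the thesis):

    \frac{N_{26}(t)}{N_{10}(t)} \approx R_0\, e^{-(\lambda_{26} - \lambda_{10})\,t}

with R_0 ≈ 6.75 the surface production ratio and \lambda_{26} > \lambda_{10} the decay constants of 26Al (half-life roughly 0.7 Ma) and 10Be (roughly 1.4 Ma); a measured ratio below R_0 therefore records the duration of burial.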
Parallel to the MMFS in the north, the Upemba Fault System (UFS) extends across the southeastern Katanga Province of the Democratic Republic of Congo. This part of our research focuses on the geomorphological behavior of the Kiubo Waterfalls. The waterfalls are the currently active knickpoint of the Lufira River, which flows into the Upemba Depression. Eleven bedrock samples were collected along the Lufira River and its tributary, the Luvilombo River. In-situ cosmogenic 10Be and 26Al were used to constrain the K constant of the stream power law equation. Constraining the K constant allowed us to calculate the knickpoint retreat rate of the Kiubo Waterfalls at ~0.096 m a-1. Combining the calculated retreat rate of the knickpoint with DNA sequencing of fish populations, we present extrapolation models and estimate the location of the onset of the Kiubo Waterfalls, revealing their connection to the seismicity of the UFS.
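For reference, the stream power law referred to above is commonly written in its detachment-limited form; exponents m and n must be assumed or calibrated, and only the general framework is sketched here:

    E = K\, A^{m} S^{n}

where E is the fluvial incision rate, A the upstream drainage area (a proxy for discharge), S the local channel slope and K the erodibility constant. In the associated kinematic-wave view, a knickpoint migrates upstream with a celerity proportional to K A^{m} S^{n-1}, which for n = 1 reduces to K A^{m}; this is what allows a calibrated K to be converted into a retreat rate such as the ~0.096 m a-1 reported here.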
Soil is today considered a non-renewable resource on societal time scales, as the rate of soil loss is higher than the rate of soil formation.
Soil formation is complex, can take several thousands of years and is influenced by a variety of factors, one of which is time. Oftentimes, constant and progressive conditions for soil and/or profile development (i.e., a steady state) are assumed. In reality, for most soils, their (co-)evolution leads to a complex and irregular soil development in time and space, characterised by “progressive” and “regressive” phases.
Lateral transport of soil material (i.e., soil erosion) is one of the principal processes shaping the land surface and soil profile during “regressive” phases and one of the major environmental problems the world faces.
Anthropogenic activities like agriculture can exacerbate soil erosion. Thus, it is of vital importance to determine how short-term soil redistribution rates (i.e., within decades) influenced by human activities differ from long-term natural rates. To do so, soil erosion (and denudation) rates can be determined using a set of isotope methods that cover different time scales at the landscape level.
With the aim of unravelling the co-evolution of weathering, soil profile development and lateral redistribution at the landscape level, we used Plutonium-239+240 (239+240Pu), Beryllium-10 (10Be, in situ and meteoric) and radiocarbon (14C) to calculate short- and long-term erosion rates in two settings, i.e., a natural and an anthropogenic environment in the hummocky ground moraine landscape of the Uckermark, North-eastern Germany. The main research questions were:
1. How do long-term and short-term rates of soil redistributing processes differ?
2. Are rates calculated from in situ 10Be comparable to those of using meteoric 10Be?
3. How do soil redistribution rates (short- and long-term) in an agricultural and in a natural landscape compare to each other?
4. Are the soil patterns observed in northern Germany purely a result of past events (natural and/or anthropogenic) or are they embedded in ongoing processes?
Erosion and deposition are reflected in a catena of soil profiles with no or almost no erosion at flat positions (hilltop), strong erosion on the mid-slope and accumulation of soil material at the toeslope position. These three characteristic process domains were chosen within the CarboZALF-D experimental site, which is characterised by intense anthropogenic activities. Likewise, a hydrosequence in an ancient forest was chosen for this study, regarded as a catena strongly influenced by natural soil transport.
The following main results were obtained using the above-mentioned range of isotope methods available to measure soil redistribution rates depending on the time scale needed (e.g., 239+240Pu, 10Be, 14C):
1. Short-term erosion rates are one order of magnitude higher than long-term rates in agricultural settings.
2. Both meteoric and in situ 10Be are suitable soil tracers for measuring long-term soil redistribution rates, giving similar results in an anthropogenic environment for different landscape positions (e.g., hilltop, mid-slope, toeslope).
3. Short-term rates were extremely low/negligible in a natural landscape and very high in an agricultural landscape – -0.01 t ha-1 yr-1 (average value) and -25 t ha-1 yr-1, respectively. In contrast, long-term rates in the forested landscape are comparable to those calculated in the agricultural area investigated, with average values of -1.00 t ha-1 yr-1 and -0.79 t ha-1 yr-1.
4. Soil patterns observed in the forest might be due to human impact and activities started after the first settlements in the region, earlier than previously postulated, between 4.5 and 6.8 kyr BP, and not a result of recent soil erosion.
5. Furthermore, long-term soil redistribution rates are similar regardless of the setting, meaning that past natural soil mass redistribution processes still overshadow the present anthropogenic erosion processes.
Overall, this study could make important contributions to deciphering the co-evolution of weathering, soil profile development and lateral redistribution in North-eastern Germany. The multi-methodological approach used can be tested further by applying it in a wider range of landscapes and geographic regions.
Development of electrochemical antibody-based and enzymatic assays for mycotoxin analysis in food
(2023)
Electrochemical methods are promising to meet the demand for easy-to-use devices monitoring key parameters in the food industry. Many companies run their own lab procedures for mycotoxin analysis, but it is a major goal to simplify the analysis. The enzyme-linked immunosorbent assay using horseradish peroxidase as the enzymatic label, together with 3,3',5,5'-tetramethylbenzidine (TMB)/H2O2 as substrates, allows sensitive mycotoxin detection with optical detection methods. For the miniaturization of the detection step, an electrochemical system for mycotoxin analysis was developed. To this end, the electrochemical detection of TMB was studied by cyclic voltammetry on different screen-printed electrodes (carbon and gold) and at different pH values (pH 1 and pH 4). A stable electrode reaction, which is the basis for the further construction of the electrochemical detection system, could be achieved at pH 1 on gold electrodes. An amperometric detection method for oxidized TMB, using a custom-made flow cell for screen-printed electrodes, was established and applied to a competitive magnetic bead-based immunoassay for the mycotoxin ochratoxin A. A limit of detection of 150 pM (60 ng/L) could be obtained, and the results were verified with optical detection. The applicability of the magnetic bead-based immunoassay was tested in spiked beer using a handheld potentiostat connected via Bluetooth to a smartphone for amperometric detection, allowing ochratoxin A to be quantified down to 1.2 nM (0.5 µg/L).
Based on the developed electrochemical detection system for TMB, the applicability of the approach was demonstrated with a magnetic bead-based immunoassay for the ergot alkaloid ergometrine. Under optimized assay conditions, a limit of detection of 3 nM (1 µg/L) was achieved, and in spiked rye flour samples ergometrine levels in a range from 25 to 250 µg/kg could be quantified. All results were verified with optical detection. The developed electrochemical detection method for TMB holds great promise for the detection of TMB in many other HRP-based assays.
A new sensing approach based on an enzymatic electrochemical detection system for the mycotoxin fumonisin B1 was established using an Aspergillus niger fumonisin amine oxidase (AnFAO). AnFAO was produced recombinantly in E. coli as a maltose-binding protein fusion protein and catalyzes the oxidative deamination of fumonisins, producing hydrogen peroxide. It was found that AnFAO has high storage and temperature stability. The enzyme was coupled covalently to magnetic particles, and the H2O2 produced enzymatically in the reaction with fumonisin B1 was detected amperometrically in a flow injection system using Prussian blue/carbon electrodes and the custom-made wall-jet flow cell. Fumonisin B1 could be quantified down to 1.5 µM (≈ 1 mg/L). The developed system represents a new approach to detect mycotoxins using enzymes and electrochemical methods.
Evaluation of nitrogen dynamics in high-order streams and rivers based on high-frequency monitoring
(2023)
Nutrient storage, transformation and transport are important processes for achieving environmental and ecological health, as well as for implementing water management plans. Nitrogen is one of the most prominent elements because of the severe consequences of eutrophication it can cause in aquatic systems. Among all nitrogen compounds, research on nitrate is flourishing because of the widespread deployment of in-situ high-frequency sensors. Monitoring and studying nitrate can become a paradigm for other reactive substances that may damage environmental conditions and cause economic losses.
Identifying nitrate storage and its transport within a catchment is valuable for the management of agricultural activities and for municipal planning. Storm events are periods when hydrological dynamics activate the exchange between nitrate storage and flow pathways. In this dissertation, long-term high-frequency monitoring data at three gauging stations in the Selke river were used to quantify event-scale nitrate concentration-discharge (C-Q) hysteretic relationships. The Selke catchment comprises three nested subcatchments with heterogeneous physiographic conditions and land use. With the quantified hysteresis indices, the impacts of seasonality and landscape gradients on C-Q relationships are explored. For example, arable areas hold a deep nitrate legacy that can be activated by high-intensity precipitation during wetting/wet periods (i.e., under strong hydrological connectivity). Hence, specific shapes of C-Q relationships in river networks can identify targeted locations and periods for agricultural management actions within the catchment to decrease nitrate export to downstream aquatic systems such as the ocean.
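To illustrate how such event-scale hysteresis can be quantified, the following is a minimal sketch of one common normalized hysteresis index that compares rising- and falling-limb concentrations at matched discharge levels; the exact index definition and implementation used in this dissertation may differ:

    import numpy as np

    def hysteresis_index(q, c, levels=np.linspace(0.1, 0.9, 9)):
        """Hedged sketch of a normalized C-Q hysteresis index for one storm event.

        q, c : discharge and nitrate concentration sampled over the event; the series
        is split at the discharge peak, and both limbs are assumed roughly monotonic.
        Positive values indicate clockwise hysteresis (higher C on the rising limb)."""
        q, c = np.asarray(q, float), np.asarray(c, float)
        i_peak = int(np.argmax(q))
        q_norm = (q - q.min()) / (q.max() - q.min())   # normalize both axes to [0, 1]
        c_norm = (c - c.min()) / (c.max() - c.min())

        diffs = []
        for lvl in levels:
            c_rise = np.interp(lvl, q_norm[: i_peak + 1], c_norm[: i_peak + 1])
            c_fall = np.interp(lvl, q_norm[i_peak:][::-1], c_norm[i_peak:][::-1])
            diffs.append(c_rise - c_fall)
        return float(np.mean(diffs))

Averaging such an index across events and stations then allows the seasonal and landscape contrasts described above to be compared quantitatively.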
The capacity of streams to remove nitrate is of both scientific and societal interest, which motivates its quantification. Although measurements of nitrate dynamics are advanced compared to those of other substances, the methodology to directly quantify nitrate uptake pathways is still limited in space and time. The major problem is the complex convolution of hydrological and biogeochemical processes, which usually limits in-situ measurements (e.g., isotope addition) to small streams with steady flow conditions. This makes the extrapolation of nitrate dynamics to large streams highly uncertain. Hence, an understanding of in-stream nitrate dynamics in large rivers is still necessary. High-frequency monitoring of the nitrate mass balance between upstream and downstream measurement sites can quantitatively disentangle multi-path nitrate uptake dynamics at the reach scale (3-8 km). In this dissertation, we applied this approach in large stream reaches with varying hydro-morphological and environmental conditions over several periods, confirming its success in disentangling nitrate uptake pathways and their temporal dynamics. Net nitrate uptake, autotrophic assimilation and heterotrophic uptake were disentangled, as well as their diel and seasonal patterns. Natural streams can generally remove more nitrate under similar environmental conditions, and heterotrophic uptake becomes dominant during post-wet seasons. Such two-station monitoring provided novel insights into reach-scale nitrate uptake processes in large streams.
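The reach-scale mass balance underlying the two-station approach can be sketched as follows; lateral inflows and travel-time corrections, which a full analysis has to account for, are omitted here:

    U_{\mathrm{net}} = \frac{Q_{\mathrm{up}}\,C_{\mathrm{up}} - Q_{\mathrm{down}}\,C_{\mathrm{down}}}{A_{\mathrm{reach}}}

where Q and C are discharge and nitrate concentration at the upstream and downstream stations and A_reach is the benthic area of the reach between them. The diel variation of U_net is then what allows light-driven autotrophic assimilation to be separated from light-independent heterotrophic uptake.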
Long-term in-stream nitrate dynamics can also be evaluated with a water quality model. This is among the first studies to use a data-model fusion approach to upscale the two-station methodology to large streams with complex flow dynamics under long-term high-frequency monitoring, assessing in-stream nitrate retention and its responses to drought disturbances from the seasonal to the sub-daily scale. Nitrate retention (both net uptake and net release) exhibited substantial seasonality, which also differed between the investigated normal and drought years. In the normal years, winter and early spring exhibited extensive net release; general net uptake then occurred after the annual high-flow season, in late spring and early summer with autotrophic processes dominating, and during the late summer-autumn low-flow periods with heterotrophic characteristics predominating. Net nitrate release occurred from late autumn until the next early spring. In the drought years, the late-autumn net releases did not persist as consistently as in the normal years, and autotrophic processes predominated across seasons. These comprehensive results on stream-scale nitrate dynamics facilitate the understanding of in-stream processes and underline the importance of scientific monitoring schemes for hydrological and water quality parameters.
Extreme weather and climate events are among the greatest dangers for present-day society. Therefore, it is important to provide reliable statements on what changes in extreme events can be expected along with future global climate change. However, the projected overall response to future climate change is generally a result of a complex interplay between individual physical mechanisms originating within the different climate subsystems. Hence, a profound understanding of these individual contributions is required in order to provide meaningful assessments of future changes in extreme events. One aspect of climate change is the recently observed phenomenon of Arctic Amplification and the related dramatic Arctic sea ice decline, which is expected to continue over the next decades. The question to what extent Arctic sea ice loss is able to affect atmospheric dynamics and extreme events over the mid-latitudes has received a lot of attention over recent years and still remains a highly debated topic.
In this respect, the objective of this thesis is to contribute to a better understanding on the impact of future Arctic sea ice retreat on European temperature extremes and large-scale atmospheric dynamics.
The outcomes are based on model data from the atmospheric general circulation model ECHAM6. Two different sea ice sensitivity simulations from the Polar Amplification Intercomparison Project are employed and contrasted with a present-day reference experiment: one experiment with prescribed future sea ice loss over the entire Arctic, and another one with sea ice reductions prescribed only locally over the Barents-Kara Sea.
The first part of the thesis focuses on how future Arctic sea ice reductions affect large-scale atmospheric dynamics over the Northern Hemisphere in terms of changes in the occurrence frequencies of five preferred Euro-Atlantic circulation regimes. A comparison with circulation regimes computed from ERA5 shows that ECHAM6 is able to realistically simulate the regime structures. Both ECHAM6 sea ice sensitivity experiments exhibit similar regime frequency changes. Consistent with tendencies found in ERA5, a more frequent occurrence of a Scandinavian blocking pattern in midwinter is, for instance, detected under future sea ice conditions in the sensitivity experiments. Changes in the occurrence frequencies of circulation regimes in the summer season are, however, barely detected.
After identifying suitable regime storylines for the occurrence of European temperature extremes in winter, the previously detected regime frequency changes are used to quantify dynamically and thermodynamically driven contributions to sea ice-induced changes in European winter temperature extremes.
It is, for instance, shown how the preferred occurrence of a Scandinavian blocking regime under low sea ice conditions dynamically contributes to more frequent midwinter cold extreme occurrences over Central Europe. In addition, a reduced occurrence frequency of an Atlantic trough regime is linked to reduced winter warm extremes over Central Europe. Furthermore, it is demonstrated how the overall thermodynamic warming effect due to sea ice loss can result in less (more) frequent winter cold (warm) extremes, and consequently counteracts the dynamically induced changes.
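The dynamic/thermodynamic split used here can be sketched with a standard regime-based decomposition; this is only a schematic of the general idea, not necessarily the exact formulation applied in the thesis:

    P(E) = \sum_{R} f_R \, P(E \mid R), \qquad
    \Delta P(E) \approx \sum_{R} \Delta f_R \, P(E \mid R) + \sum_{R} f_R \, \Delta P(E \mid R)

where f_R is the occurrence frequency of regime R and P(E | R) the probability of the temperature extreme E conditional on that regime. The first term collects the dynamically driven contribution (changed regime frequencies), the second the thermodynamically driven one (changed extremes within regimes); a small cross term is neglected.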
Compared to winter season, circulation regimes in summer are less suitable as storylines for the occurrence of summer heat extremes.
Therefore, an approach based on circulation analogues is employed in order to quantify thermodynamically and dynamically driven contributions to sea ice-induced changes of summer heat extremes over three different European sectors. Reduced occurrences of blockings over Western Russia are detected in the ECHAM6 sea ice sensitivity experiments; however, attributing dynamically and thermodynamically induced contributions to changes in summer heat extremes remains rather challenging.
This cumulative dissertation consists of three full empirical investigations based on three separate collections of data dealing with the phenomenon of negotiations in audit processes, which are combined in two research articles. In the first study, I examine internal auditors’ views on negotiation interactions with auditees. My research is based on 23 semi-structured interviews with internal auditors (14 in-house and 9 external service providers) to gain insight into when and about what (RQ1), why (RQ2), and how (RQ3) they negotiate with auditees. By adapting the Gibbins et al. (2001) negotiation framework to the context of internal auditing, I obtain specific process (negotiation issue, auditor-auditee process, and outcome) and context elements that form the basis of my analyses. Through the additional use of inductive procedures, I conclude that internal auditors negotiate when they face professional and non-professional resistance from auditees during the audit process (RQ1). This resistance occurs in a variety of audit types and audit issues. Internal auditors choose negotiations to overcome this resistance primarily out of functional interest, as they cannot simply instruct auditees to acknowledge the findings and implement the required actions (RQ2). I find that the implementation of the required actions is the main goal of the respondents, which is also an important quality factor for internal auditing. Although few respondents interpret these interactions with auditees as negotiations, all respondents use a variety of negotiation strategies to create value (e.g., cost cutting, logrolling, and bridging) and claim value (e.g. positional commitment and threats) (RQ3). Finally, I contribute to empirical research on internal audit negotiations and internal audit quality by shedding light on the black box of internal auditor-auditee interactions. The second study consists of two experiments that examine the effects of tax auditors’ emotion expressions during tax audit negotiations. In the first experiment, we demonstrate that auditors expressing anger obtain more concessions from taxpayers than auditors expressing happiness. This reveals that taxpayers interpret auditors’ emotions strategically and do not respond affectively. In the second experiment, we show that the experience with an auditor who expressed either happiness or anger reduces taxpayers’ post-audit compliance compared to the experience with an emotionally neutral auditor. Apparently, taxpayers use their experience with an emotional auditor to rationalize later noncompliance. Taken together, both experiments show the potentially detrimental effects of positive and negative emotion expressions by the auditor and point to the benefits of avoiding emotion expressions. We find that when auditors avoid emotion expressions this does not result in fewer concessions from taxpayers than when auditors express anger. However, when auditors avoid emotion expressions this leads to a significantly better evaluation of the taxpayer-auditor relationship and significantly reduces taxpayers’ post-audit noncompliance.
Despite the popularity of thermoresponsive polymers, much is still unknown about their behavior, how it is triggered, and what factors influence it, hindering the full exploitation of their potential. One particularly puzzling phenomenon is called co-nonsolvency, in which a polymer is soluble in two individual solvents, but counter-intuitively becomes insoluble in mixtures of both. Despite the innumerable potential applications of such systems, including actuators, viscosity regulators and carrier structures, this field has not yet been extensively studied apart from the classical example of poly(N-isopropylacrylamide) (PNIPAM) in mixtures of water and methanol. Therefore, this thesis focuses on evaluating how changes in the chemical structure of the polymers impact the thermoresponsive, aggregation and co-nonsolvency behaviors of both homopolymers and amphiphilic block copolymers. Within this scope, both the synthesis of the polymers and their characterization in solution are investigated. Homopolymers were synthesized by conventional free radical polymerization, whereas block copolymers were synthesized by consecutive reversible addition-fragmentation chain transfer (RAFT) polymerizations. The synthesis of the monomers N-isopropyl methacrylamide (NIPMAM) and N-vinyl isobutyramide (NVIBAM), as well as of a few chain transfer agents, is also covered. Through turbidimetry measurements, the thermoresponsive and co-nonsolvency behavior of PNIPMAM and PNVIBAM homopolymers is then compared to that of the well-known PNIPAM in aqueous solutions with 9 different organic co-solvents. Additionally, the effects of end-groups, molar mass, and concentration are investigated. Despite the similarity of their chemical structures, the 3 homopolymers show significant differences in transition temperatures and some divergences in their co-nonsolvency behavior. More complex systems are also evaluated, namely amphiphilic di- and triblock copolymers of PNIPAM and PNIPMAM with polystyrene and poly(methyl methacrylate) hydrophobic blocks. Dynamic light scattering is used to evaluate their aggregation behavior in aqueous and mixed aqueous solutions, and how it is affected by the chemical structure of the blocks, the chain architecture, the presence of co-solvents and the polymer concentration. The results obtained shed light on the thermoresponsive, co-nonsolvency and aggregation behavior of these polymers in solution, providing valuable information for the design of systems with a desired aggregation behavior that generate targeted responses to changes in temperature and solvent mixture.
The urge to utilize light in the fabrication of materials is as encouraging as it is challenging. Steadily increasing energy consumption, in line with rapid population growth, requires solutions that keep pace. Therefore, creating, designing and manufacturing materials that can interact with light and can furthermore be applied in photo-based applications attract much attention from researchers. In the era of sustainability and renewable energy systems, semiconductor-based photoactive materials have received great attention, not only for the generation of solar and/or hydrocarbon fuels from solar energy, but also for the successful driving of photocatalytic reactions such as water splitting, pollutant degradation and organic molecule synthesis. A turning point for water splitting was reached when Fujishima and Honda in 1972 successfully achieved water photolysis with an electrochemical cell consisting of a TiO2-Pt electrode pair illuminated by UV light as the energy source rather than driven by an external voltage. Ever since, there has been a great deal of interest in research on semiconductors (e.g. metal oxide, metal-free organic, noble-metal complex) exhibiting an effective band gap for photochemical reactions. With regard to environmental friendliness, the toxicity of metal-based semiconductors places some restrictions on possible applications. In this respect, the very robust and ‘earth-abundant’ organic semiconductor graphitic carbon nitride has been synthesized and successfully applied as a novel photocatalyst in photoinduced applications. Properties such as a suitable band gap, low charge carrier recombination and feasibility of scale-up pave the way for advanced combinations with other catalysts to achieve higher photoactivity based on compatible heterojunctions.
This dissertation aims to demonstrate a series of combinations of the organic semiconductor g-CN with polymer materials that are forged through photochemistry, either in synthesis or in application. The fabrication and design processes as well as the applications performed within the scope of the thesis are elucidated in detail. In addition to UV light, particular attention is placed on visible light as an energy source, with a vision of greater sustainability and better scalability in the creation of novel materials and solar-energy-based applications.
Lithium-ion capacitors (LICs) are promising energy storage devices that asymmetrically combine an anode with a high energy density, close to that of lithium-ion batteries, with a cathode offering a high power density and long-term stability, close to those of supercapacitors. For the further improvement of LICs, the development of electrode materials with hierarchical porosity, nitrogen-rich lithiophilic sites, and good electrical conductivity is essential. Nitrogen-rich all-carbon composite hybrids meet these conditions along with high stability and tunability, promising a breakthrough towards high-performance LICs. In this thesis, two different all-carbon composites are proposed to unveil how the pore structure of lithiophilic composites influences the properties of LICs. Firstly, a composite of zero-dimensional zinc-templated carbon (ZTC) and hexaazatriphenylene-hexacarbonitrile (HAT) is examined to determine how the pore structure is connected to the Li-ion storage properties of an LIC electrode. As the pore structure of the HAT/ZTC composite is easily tunable depending on the synthetic factors and the ratio of the two components, the results allow deeper insights into Li-ion dynamics at different porosities, as well as low-cost synthesis by optimization of the HAT:ZTC ratio. Secondly, a composite of one-dimensional nanoporous carbon fiber (ACF) and cost-effective melamine is proposed as a promising all-carbon hybrid for large-scale application. Since ACF has ultra-micropores, the numerical structure-property relationships are derived not only from the total pore volume but, more specifically, from the ultra-micropore volume. From these results, it becomes possible to understand how hybrid all-carbon composites interact with lithium ions at the nanoscale, and how structural properties affect the energy storage performance. Based on this understanding, derived from simple materials modeling, the work provides clues for designing practical hybrid materials for efficient electrodes in LICs.
The collaboration-based professional development approach Lesson Study (LS), which has its roots in the Japanese education system, has gained international recognition over the past three decades and spread quickly throughout the world. LS is a collaborative approach to professional development (PD) that incorporates multiple characteristics that have been identified in the research literature as key to effective PD. Specifically, LS is a long-term process that consists of successive inquiry cycles, it is site-based and integrated into teachers' practice, it encourages collaboration and reflection, places a strong emphasis on student learning, and it typically involves external experts who support the process or offer additional insights.
Because LS integrates all of these characteristics, it has rapidly gained international popularity since the turn of the 21st century and is currently practiced in over 40 countries around the world. This international transfer of the idea of LS to new national contexts has given rise to a research field that investigates the effectiveness of LS for teacher learning, as well as the circumstances and mechanisms that make LS effective in various settings around the world. Such research is important, as borrowing educational innovations and adapting them to new contexts can be a challenging process. Educational innovations that fail to deliver the expected outcomes tend to be abandoned prematurely, before they have been fully understood or a substantial research base has been established.
To prevent the early abandonment of LS, Lewis and colleagues outlined three critical research needs in 2006, shortly after LS was first introduced to the United States. These research needs included (1) developing a descriptive knowledge base on LS, (2) examining the mechanisms by which teachers learn through LS, and (3) using design-based research cycles to analyze and improve LS.
This dissertation set out to take stock of the progress that has been made on these research needs over the past 20 years. The scoping review conducted for the framework of this dissertation indicates that, while a large and international knowledge base has been developed, the field has not yet produced reliable evidence of the effectiveness of LS. Based on the scoping review, this dissertation makes the case that Lewis et al.’s (2006) critical research needs should be updated. In order to do so, a number of limitations to the current knowledge base on LS need to be addressed. These limitations include (1) the frequent lack of comparable and replicable descriptions of the LS intervention in publications, (2) the incoherent use or lack of use of theoretical frameworks to explain teacher learning through LS, (3) the inconsistent use of terminology and concepts, and (4) the lack of scientific rigor in research studies and of established ways or tools to measure the effectiveness of LS.
This dissertation aims to advance the critical research needs in the field by examining the extent and nature of these limitations in three research studies. The focus of these studies lies on the LS stages of observation and reflection, as these stages have a high potential to facilitate teacher learning. The first study uses a mixed-method design to examine how teachers at German primary schools reflect critically together. The study derives a theory-based definition of critical and collaborative reflection in order to re-frame the reflection element in LS.
The second study, a systematic review of 129 articles on LS, assesses how transparently research articles report the ways in which teachers observed and reflected together. In addition, it investigates whether these articles provide any kind of theorization for the stages of observation and reflection.
The third study proposes a conceptual model for the field of LS that is based on existing models of continuous professional development and research findings on team effectiveness and collaboration. The model describes the dimensions of input, mediating mechanisms, and outcomes in order to provide a conceptual grid for teachers’ continuous professional development through LS.
Reiz der Revolution
(2023)
This dissertation examines the multifaceted entanglements and transfers within German solidarity with Nicaragua in the late 1970s and the 1980s. Even before coming to power, the Sandinistas had courted foreign state and civil support in both blocs. Once in power, while building the Sandinista reform state, they simultaneously shaped an international network of solidarity relations that served to finance their social reform programs but also to legitimize their rule.
In the Federal Republic alone, several hundred solidarity groups emerged. In the GDR, the political leadership initiated a state-directed campaign of solidarity with Nicaragua, joined by tens of thousands of people and by independent grassroots initiatives. Despite being rooted in rival systems and despite the heterogeneity of their worldviews, ranging from Christian social teaching to the critical left, numerous solidarity initiatives in both countries worked toward the same goal: a Nicaragua beyond the blocs. Together with their Nicaraguan project partners, they opened up a new transnational space for communication and, in doing so, encountered differences and disputes over political ideas that stimulated new practices on both sides of the Atlantic.
The research is based on an extensive evaluation of sources in a total of 13 archives, including the archive of the Robert-Havemann-Gesellschaft, the archive of the BStU, various West German social movement archives, and the archival records of the Nicaraguan Ministry of Culture.
Justice structures societies and social relations of any kind; its psychological integration provides a fundamental cornerstone for social, moral, and personality development. The trait justice sensitivity (JS; Schmitt et al., 2005, 2010) captures individual differences in responses toward perceived injustice. JS has shown substantial relations to social and moral behavior in adult and adolescent samples; however, it had not yet been investigated in middle childhood, despite this being a sensitive phase for personality development. JS differentiates into underlying perspectives that are either more self- or other-oriented regarding injustice, with diverging outcome relations. The present research project investigated JS and its perspectives in children aged 6 to 12 years, with a special focus on variables of social and moral development as potential correlates and outcomes, in four cross-sectional studies. Study 1 started with a closer investigation of JS trait manifestation, measurement, and relations to important variables from the nomological network, such as temperamental dimensions, social-cognitive skills, and global pro- and antisocial behavior, in a pilot sample of children from southern Germany. Study 2 investigated relations between JS and distributive behavior following distributive principles in a large-scale data set of children from Berlin and Brandenburg. Study 3 explored the relations of JS with moral reasoning, moral emotions, and moral identity as important precursors of moral development in the same large-scale data set. Study 4 investigated punishment motivation to even out, prevent, or compensate norm transgressions in a subsample, whereby JS was considered as a potential predictor of different punishment motives. All studies indicated that a large-scale, economic measurement of JS is possible at least from middle childhood onward. JS showed relations to temperamental dimensions, social skills, and global social behavior; to distributive decisions and preferences for distributive principles; to moral reasoning, emotions, and identity; as well as to punishment motivation, indicating that trait JS is highly relevant for social and moral development. The underlying self- or other-oriented perspectives showed diverging correlate and outcome relations, mostly in line with theory and previous findings from adolescent and adult samples, but they also provided new theoretical ideas on the construct and its differentiation. The findings point to an early internal justice motive underlying trait JS, but to additional motivations underlying the JS perspectives. Caregivers, educators, and clinical psychologists should pay attention to children’s JS and to promoting adaptive justice-related personality development in order to foster children’s prosocial and moral development as well as their mental health.
Visual perception is a complex and dynamic process that plays a crucial role in how we perceive and interact with the world. The eyes move in a sequence of saccades and fixations, actively modulating perception by moving different parts of the visual world into focus. Eye movement behavior can therefore offer rich insights into the underlying cognitive mechanisms and decision processes. Computational models in combination with a rigorous statistical framework are critical for advancing our understanding in this field, facilitating the testing of theory-driven predictions and accounting for observed data. In this thesis, I investigate eye movement behavior through the development of two mechanistic, generative, and theory-driven models. The first model is based on experimental research regarding the distribution of attention, particularly around the time of a saccade, and explains statistical characteristics of scan paths. The second model implements a self-avoiding random walk within a confining potential to represent the microscopic fixational drift, which is present even while the eye is at rest, and investigates the relationship to microsaccades. Both models are implemented in a likelihood-based framework, which supports the use of data assimilation methods to perform Bayesian parameter inference at the level of individual participants, analyses of the marginal posteriors of the interpretable parameters, model comparisons, and posterior predictive checks. The application of these methods enables a thorough investigation of individual variability in the space of parameters. Results show that dynamical modeling and the data assimilation framework are highly suitable for eye movement research and, more generally, for cognitive modeling.
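As a rough illustration of the second model's core ingredients, the sketch below simulates a self-avoiding walk in a confining potential on a small lattice. It is only a toy analogue under assumed settings; the lattice size, potential strengths, decay rate, and number of steps are illustrative and not parameters from the thesis.

```python
import numpy as np

# Toy self-avoiding walk in a confining potential (illustrative parameters only).
rng = np.random.default_rng(0)
L = 51                                    # lattice size (odd, so there is a centre)
center = L // 2
activation = np.zeros((L, L))             # self-avoidance memory: visited sites become costly
xx, yy = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
confinement = ((xx - center) ** 2 + (yy - center) ** 2) / center ** 2  # quadratic bowl

pos = np.array([center, center])
path = [pos.copy()]
steps = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

for _ in range(2000):
    candidates = pos + steps
    valid = np.all((candidates >= 0) & (candidates < L), axis=1)   # stay on the lattice
    candidates = candidates[valid]
    # total potential = self-avoidance memory + confinement toward the centre
    u = (activation[candidates[:, 0], candidates[:, 1]]
         + confinement[candidates[:, 0], candidates[:, 1]])
    p = np.exp(-u)
    p /= p.sum()
    pos = candidates[rng.choice(len(candidates), p=p)]             # prefer fresh, central sites
    activation[pos[0], pos[1]] += 1.0                              # leave a trace behind
    activation *= 0.99                                             # traces decay over time
    path.append(pos.copy())

path = np.array(path)   # drift-like trajectory confined around the fixation centre
```

In the thesis, such a generative model is embedded in a likelihood so that its parameters can be inferred with Bayesian data assimilation; the sketch only reproduces the mechanistic idea.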
This dissertation focuses on the handling of time in dialogue. Specifically, it investigates how humans bridge time, or “buy time”, when they are expected to convey information that is not yet available to them (e.g. a travel agent searching for a flight in a long list while the customer is on the line, waiting). It also explores the feasibility of modeling such time-bridging behavior in spoken dialogue systems, and it examines how endowing such systems with more human-like time-bridging capabilities may affect humans’ perception of them.
The relevance of time-bridging in human-human dialogue seems to stem largely from a need to avoid lengthy pauses, as these may cause both confusion and discomfort among the participants of a conversation (Levinson, 1983; Lundholm Fors, 2015). However, this avoidance of prolonged silence is at odds with the incremental nature of speech production in dialogue (Schlangen and Skantze, 2011): Speakers often start to verbalize their contribution before it is fully formulated, and sometimes even before they possess the information they need to provide, which may result in them running out of content mid-turn.
In this work, we elicit conversational data from humans, to learn how they avoid being silent while they search for information to convey to their interlocutor. We identify commonalities in the types of resources employed by different speakers, and we propose a classification scheme. We explore ways of modeling human time-buying behavior computationally, and we evaluate the effect on human listeners of embedding this behavior in a spoken dialogue system.
Our results suggest that a system using conversational speech to bridge time while searching for information to convey (as humans do) can provide a better experience in several respects than one which remains silent for a long period of time. However, not all speech serves this purpose equally: Our experiments also show that a system whose time-buying behavior is more varied (i.e. which exploits several categories from the classification scheme we developed and samples them based on information from human data) can prevent overestimation of waiting time when compared, for example, with a system that repeatedly asks the interlocutor to wait (even if these requests for waiting are phrased differently each time). Finally, this research shows that it is possible to model human time-buying behavior on a relatively small corpus, and that a system using such a model can be preferred by participants over one employing a simpler strategy, such as randomly choosing utterances to produce during the wait, even when the utterances used by both strategies are the same.
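A rough sketch of such a varied strategy might sample a filler category according to frequencies estimated from human data and avoid immediate repetition. The category names, weights, and placeholder utterances below are hypothetical and are not taken from the corpus or the system built in the thesis.

```python
import random

# Hypothetical category inventory with frequencies as they might be estimated
# from human time-buying data (all values illustrative).
category_weights = {
    "acknowledge_request": 0.25,   # e.g. "Right, a flight to Lisbon..."
    "announce_search": 0.30,       # e.g. "Let me have a look."
    "give_partial_info": 0.20,     # e.g. "There are a few morning options here..."
    "ask_to_wait": 0.15,           # e.g. "One moment, please."
    "small_talk": 0.10,            # e.g. "Busy travel season, isn't it?"
}
utterances = {cat: [f"<{cat} utterance {i}>" for i in range(3)] for cat in category_weights}

def next_time_buying_utterance(history):
    """Sample a category (avoiding immediate repetition), then an utterance from it."""
    cats, weights = zip(*category_weights.items())
    cat = random.choices(cats, weights=weights, k=1)[0]
    while history and cat == history[-1]:
        cat = random.choices(cats, weights=weights, k=1)[0]
    history.append(cat)
    return random.choice(utterances[cat])

history = []
filler_turns = [next_time_buying_utterance(history) for _ in range(4)]  # produced until the search returns
```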
Advances in hydrogravimetry
(2023)
The interest of the hydrological community in the gravimetric method has steadily increased within the last decade. This is reflected by numerous studies from many different groups with a broad range of approaches and foci. Many of these are traditionally hydrology-oriented groups that recognized gravimetry as a potential added value for their hydrological investigations. While this work produced a variety of interesting and useful findings, extending the respective knowledge and confirming the methodological potential, it also raised many interesting and unresolved questions.
This thesis presents the efforts, analyses and solutions carried out in this regard. By addressing and evaluating many of these unresolved questions, the research contributes to advancing hydrogravimetry, the combination of gravimetric and hydrological methods, and shows that gravimeters are a highly useful tool for applied hydrological field research.
In the first part of the thesis, traditional setups of stationary terrestrial superconducting gravimeters are addressed. They are commonly installed within a dedicated building, the impermeable structure of which shields the underlying soil from the natural exchange of water masses (infiltration, evapotranspiration, groundwater recharge). As gravimeters are most sensitive to mass changes directly beneath the meter, this could impede their suitability for local hydrological process investigations, especially for near-surface water storage changes (WSC). By studying temporal local hydrological dynamics at a dedicated site equipped with traditional hydrological measurement devices, both below and next to the building, the impact of these absent natural dynamics on the gravity observations was quantified. A comprehensive analysis with both a data-based and a model-based approach led to the development of an alternative method for dealing with this limitation. Based on determinable parameters, this approach can be transferred to a broad range of measurement sites where gravimeters are deployed in similar structures. Furthermore, the extensive considerations on this topic enabled a more profound understanding of this so-called umbrella effect.
The second part of the thesis is a pilot study on the field deployment of a superconducting gravimeter. A newly developed field enclosure for this gravimeter was tested in an outdoor installation adjacent to the building used to investigate the umbrella effect. Analyzing and comparing the gravity observations from the indoor and outdoor gravimeters showed that performance with respect to noise and stable environmental conditions was equivalent, while the sensitivity to near-surface WSC was greatly increased for the field-deployed instrument. Furthermore, it was demonstrated that the latter setup registered gravity changes independently of the depth at which mass changes occurred, given a sufficiently wide horizontal extent. As a consequence, the field setup is much better suited to monitoring WSC over both short and long time periods. Based on a coupled data-modeling approach, its gravity time series was successfully used to infer and quantify local water budget components (evapotranspiration, lateral subsurface discharge) on daily to annual time scales.
The third part of the thesis applies data from a gravimeter field deployment to applied hydrological process investigations. To this end, again at the same site, a sprinkling experiment was conducted in a 15 x 15 m area around the gravimeter. A simple hydro-gravimetric model was developed for calculating the gravity response resulting from water redistribution in the subsurface. It was found that, from a theoretical point of view, different subsurface water distribution processes (macropore flow, preferential flow, wetting front advancement, bypass flow and perched water table rise) lead to a characteristic shape of the resulting gravity response curve. Although this approach made it possible to identify a dominant subsurface water distribution process for this site, some clear limitations stood out. Despite the advantage for field installations that gravimetry is a non-invasive and integral method, the problem of non-uniqueness could only be overcome by additional measurements (soil moisture, electrical resistivity tomography) within a joint evaluation. Furthermore, the simple hydrological model was efficient for theoretical considerations but lacked the capability to resolve some heterogeneous spatial structures of water distribution at the required scale. Nevertheless, this unique setup for plot- to small-scale hydrological process research underlines the high potential of gravimetry and the benefit of a field deployment.
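To make the forward-modelling step concrete, the sketch below computes the vertical gravity change caused by a shallow layer of infiltrated water using a point-mass approximation. The grid geometry, infiltration depth and water amount are illustrative assumptions, not values from the experiment.

```python
import numpy as np

# Point-mass forward model for the gravity response of a water storage change
# (illustrative geometry; z is measured positive downward from the sensor).
G = 6.674e-11                                    # gravitational constant [m^3 kg^-1 s^-2]

def gravity_response(cells, d_mass, sensor=np.zeros(3)):
    """cells: (N, 3) cell centres [m]; d_mass: (N,) water mass change [kg].
    Returns the vertical gravity change at the sensor in nm/s^2."""
    r = cells - sensor
    dist = np.linalg.norm(r, axis=1)
    dg_z = G * d_mass * r[:, 2] / dist ** 3      # vertical component of the attraction
    return dg_z.sum() * 1e9                      # m/s^2 -> nm/s^2

# 15 x 15 m sprinkling area, 0.5 m cells, wetting front parked at 0.3 m depth
x = np.arange(-7.5, 7.5, 0.5) + 0.25
xx, yy = np.meshgrid(x, x)
cells = np.column_stack([xx.ravel(), yy.ravel(), np.full(xx.size, 0.3)])
water_mm = 20.0                                  # 20 mm of infiltrated water
d_mass = (water_mm / 1000.0) * 0.25 * 1000.0     # depth [m] * cell area [m^2] * density [kg/m^3]
print(gravity_response(cells, np.full(cells.shape[0], d_mass)))   # simulated gravity increase in nm/s^2
```

Moving the layer deeper or spreading the same mass laterally changes the shape of the response curve, which is the basis for distinguishing the redistribution processes listed above.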
The fourth and last part is dedicated to the evaluation of potential uncertainties arising from the processing of gravity observations. The gravimeter senses all mass variations in an integral way, with the gravitational attraction being directly proportional to the magnitude of the change and inversely proportional to the square of the distance of the change. Consequently, all gravity effects (for example, tides, atmosphere, non-tidal ocean loading, polar motion, global hydrology and local hydrology) are included in an aggregated manner. To isolate the signal components of interest for a particular investigation, all non-desired effects have to be removed from the observations. This process is called reduction. The large-scale effects (tides, atmosphere, non-tidal ocean loading and global hydrology) cannot be measured directly and global model data is used to describe and quantify each effect. Within the reduction process, model errors and uncertainties propagate into the residual, the result of the reduction. The focus of this part of the thesis is quantifying the resulting, propagated uncertainty for each individual correction. Different superconducting gravimeter installations were evaluated with respect to their topography, distance to the ocean and the climate regime. Furthermore, different time periods of aggregated gravity observation data were assessed, ranging from 1 hour up to 12 months. It was found that uncertainties were highest for a frequency of 6 months and smallest for hourly frequencies. Distance to the ocean influences the uncertainty of the non-tidal ocean loading component, while geographical latitude affects uncertainties of the global hydrological component. It is important to highlight that the resulting correction-induced uncertainties in the residual have the potential to mask the signal of interest, depending on the signal magnitude and its frequency. These findings can be used to assess the value of gravity data across a range of applications and geographic settings.
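Conceptually, the reduction step and the propagation of correction uncertainties can be sketched as below; the synthetic series, the list of effects and the assumption of independent errors added in quadrature are illustrative and do not reproduce the thesis's processing chain.

```python
import numpy as np

# Reduction: residual = observed gravity minus the sum of modelled effects.
def reduce_gravity(observed, corrections):
    return observed - sum(corrections.values())

# Assuming independent correction errors, their contributions add in quadrature.
def propagated_uncertainty(correction_sigmas):
    return float(np.sqrt(sum(s ** 2 for s in correction_sigmas.values())))

# Synthetic hourly example in nm/s^2 (values purely illustrative)
t = np.arange(24)
observed = 100.0 * np.sin(2 * np.pi * t / 12.42) + 2.0        # tidal signal + local hydrology
corrections = {
    "tides": 100.0 * np.sin(2 * np.pi * t / 12.42),
    "atmosphere": np.zeros_like(t, dtype=float),
    "non_tidal_ocean_loading": np.zeros_like(t, dtype=float),
    "polar_motion": np.zeros_like(t, dtype=float),
    "global_hydrology": np.zeros_like(t, dtype=float),
}
residual = reduce_gravity(observed, corrections)              # ~2 nm/s^2: the local signal
sigma = propagated_uncertainty({"tides": 0.5, "atmosphere": 0.3,
                                "non_tidal_ocean_loading": 0.4,
                                "polar_motion": 0.1, "global_hydrology": 0.6})
# If sigma approaches the residual's magnitude, the signal of interest may be masked.
```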
In an overarching synthesis, all results and findings are discussed with a general focus on their added value for bringing hydrogravimetric field research to a new level. The conceptual and applied methodological benefits for hydrological studies are highlighted. An outlook on future setups and study designs once again demonstrates the enormous potential that gravimeters offer as hydrological field tools.
This cumulative doctoral thesis consists of five empirical studies examining various aspects of crisis and change from a management-accounting perspective. Within the first study, a bibliometric analysis is conducted. More precisely, based on publications between the financial crisis (since 2007) and the COVID-19 crisis (starting in 2020), the crisis literature in management accounting is investigated to uncover the most influential aspects of the field and to analyze the theoretical foundations of the literature. Moreover, this investigation also serves to identify future research streams and to provide starting points for future research. Based on a survey, the second study investigates the impact of several management-accounting tools on organizational resilience and its effect on a company’s competitive advantage during a crisis. The results show that their target-oriented use positively influences organizational resilience and contributes to the company’s competitive advantage during the crisis. The third study provides a more detailed view on the relationship between budgeting and risk management and their benefit for companies in times of crisis. For this purpose, the relationship between the relevance of budgeting functions and risk management in the company and the corresponding impact on company performance are investigated. The results show a positive relationship. However, a crisis can also affect the relationship between the company and its shareholders: Thus, the fourth study – based on publicly available data and a survey – examines the consequences of virtual annual general meetings on shareholder rights. The results show that, temporarily, particularly the right to information was severely restricted. For the following year, this problem was fixed, and ultimately, the virtual option was introduced permanently. The crisis has thus brought about a lasting change. But not only crises cause changes: The fifth study, also based on survey data, investigates the changes in the role of management accountants caused by digitalization. More precisely, it investigates how management accountants deal with tasks that are considered outdated and unattractive. The results of the study show that different types of personalities also act differently as far as the willingness to do those unattractive tasks is concerned, and career ambitions also influence that willingness. In addition to this, the results provide insights into the motivation of management accountants to conduct tasks and thus counteract existing assumptions based on stereotypes and clichés circulating within the research community.
Casualties and damages from urban pluvial flooding are increasing. Triggered by short, localized, and intensive rainfall events, urban pluvial floods can occur anywhere, even in areas without a history of flooding. Urban pluvial floods have relatively small temporal and spatial scales. Although their cumulative losses are comparable to those of other flood types, most flood risk management and mitigation strategies focus on fluvial and coastal flooding. Numerical-physical-hydrodynamic models are considered the best tool to represent the complex nature of urban pluvial floods; however, they are computationally expensive and time-consuming. The cost of these sophisticated models makes large-scale analysis and operational forecasting prohibitive. Therefore, it is crucial to evaluate and benchmark the performance of alternative methods.
The findings of this cumulative thesis are presented in three research articles. The first study evaluates two topographic methods for mapping urban pluvial flooding, fill–spill–merge (FSM) and the topographic wetness index (TWI), by comparing them against a sophisticated hydrodynamic model. The FSM method identifies flood-prone areas within topographic depressions, while the TWI method employs maximum likelihood estimation to calibrate a TWI threshold (τ) based on inundation maps from the 2D hydrodynamic model. The results indicate that the FSM method outperforms the TWI method. The study then highlights the advantages and limitations of both methods.
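A minimal sketch of the TWI-based mapping is given below, assuming gridded upslope-area and slope rasters and using a simple agreement score as a stand-in for the maximum-likelihood calibration used in the study; all rasters are synthetic.

```python
import numpy as np

# TWI = ln(a / tan(beta)) per cell; cells with TWI >= tau are flagged flood-prone.
def twi(upslope_area, slope_rad):
    return np.log(upslope_area / np.maximum(np.tan(slope_rad), 1e-6))

def calibrate_tau(twi_grid, hydrodynamic_flood_mask, candidates=np.linspace(5, 15, 101)):
    """Choose the threshold that best reproduces the hydrodynamic inundation map."""
    def agreement(tau):
        predicted = twi_grid >= tau
        return np.mean(predicted == hydrodynamic_flood_mask)
    return max(candidates, key=agreement)

# Synthetic rasters stand in for the real terrain and the 2D-model output.
rng = np.random.default_rng(1)
area = rng.lognormal(mean=5.0, sigma=1.0, size=(200, 200))     # upslope contributing area [m^2]
slope = rng.uniform(0.01, 0.5, size=(200, 200))                # slope [rad]
twi_grid = twi(area, slope)
reference_mask = twi_grid >= 9.0                               # pretend hydrodynamic flood map
tau = calibrate_tau(twi_grid, reference_mask)                  # recovers a threshold near 9
flood_prone = twi_grid >= tau
```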
Data-driven models provide a promising alternative to computationally expensive hydrodynamic models. However, the literature lacks benchmarking studies that evaluate the different models' performance, advantages and limitations. Model transferability in space is a crucial problem. Most studies focus on river flooding, likely due to the relative availability of flow and rain gauge records for training and validation. Furthermore, they consider these models as black boxes. The second study uses a flood inventory for the city of Berlin and 11 predictive features that potentially indicate an increased pluvial flooding hazard to map urban pluvial flood susceptibility using a convolutional neural network (CNN), an artificial neural network (ANN) and the benchmark machine learning models random forest (RF) and support vector machine (SVM). I investigate the influence of spatial resolution on the implemented models, the models' transferability in space and the importance of the predictive features. The results show that all models perform well and that the RF models are superior to the other models both within and outside the training domain. The models developed using fine spatial resolution (2 and 5 m) could better identify flood-prone areas. Finally, the results indicate that aspect is the most important predictive feature for the CNN models, while altitude is the most important for the other models.
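As an illustration of this kind of benchmarking setup (not the thesis's actual pipeline), the sketch below trains a random forest on per-cell predictive features against binary flood labels and reads off susceptibility scores and feature importances; the feature names and synthetic data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for raster-derived features at labelled cells.
feature_names = ["altitude", "slope", "aspect", "curvature", "distance_to_drainage"]
rng = np.random.default_rng(42)
X = rng.normal(size=(5000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) < 0).astype(int)  # toy flood label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)

susceptibility = rf.predict_proba(X_test)[:, 1]                   # per-cell susceptibility score
importances = dict(zip(feature_names, rf.feature_importances_))   # here altitude dominates by construction
print(rf.score(X_test, y_test), importances)
```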
While flood susceptibility maps identify flood-prone areas, they do not represent flood variables such as velocity and depth, which are necessary for effective flood risk management. To address this, the third study investigates data-driven models' transferability in predicting urban pluvial floodwater depth and the models' ability to enhance their predictions using transfer learning techniques. It compares the performance of RF (the best-performing model in the previous study) and CNN models using 12 predictive features and output from a hydrodynamic model. The findings of the third study suggest that while CNN models tend to generalise and smooth the target function on the training dataset, RF models suffer from overfitting. Hence, RF models are superior for predictions inside the training domains but fail outside them, while CNN models can limit the relative loss in performance outside the training domains. Finally, the CNN models benefit more from transfer learning techniques than RF models, boosting their performance outside the training domains.
In conclusion, this thesis has evaluated both topographic methods and data-driven models for mapping urban pluvial flooding. However, further studies are needed to develop methods that fully overcome the limitations of 2D hydrodynamic models.
This thesis bridges two areas of mathematics: algebra on the one hand, with the Milnor-Moore theorem (also called the Cartier-Quillen-Milnor-Moore theorem) and the Poincaré-Birkhoff-Witt theorem, and analysis on the other hand, with Shintani zeta functions, which generalise multiple zeta functions.
The first part is devoted to an algebraic formulation of the locality principle in physics and to generalisations of classification theorems such as the Milnor-Moore and Poincaré-Birkhoff-Witt theorems to the locality framework. The locality principle roughly says that events that take place far apart in spacetime do not influence each other. The algebraic formulation of this principle discussed here is useful when analysing singularities which arise from events located far apart in space, in order to renormalise them while keeping a memory of the fact that they do not influence each other. We start by endowing a vector space with a symmetric relation, named the locality relation, which keeps track of elements that are "locally independent". The pair of a vector space together with such a relation is called a pre-locality vector space. This concept is extended to tensor products allowing only tensors made of locally independent elements. We extend this concept to the locality tensor algebra and the locality symmetric algebra of a pre-locality vector space and prove the universal properties of each of these structures. We also introduce pre-locality Lie algebras, together with their associated locality universal enveloping algebras, and prove their universal property. We later upgrade all these structures and results from the pre-locality to the locality context, requiring the locality relation to be compatible with the linear structure of the vector space. This allows us to define locality coalgebras, locality bialgebras, and locality Hopf algebras. Finally, all the previous results are used to prove the locality versions of the Milnor-Moore and Poincaré-Birkhoff-Witt theorems. It is worth noting that the proofs presented not only generalise the results of the usual (non-locality) setup, but also often require fewer tools than their non-locality counterparts.
The second part is devoted to studying the polar structure of the Shintani zeta functions. Such functions, which generalise the Riemann zeta function, multiple zeta functions and Mordell-Tornheim zeta functions, among others, are parametrised by matrices with real non-negative entries. It is known that Shintani zeta functions extend to meromorphic functions with poles on affine hyperplanes. We refine this result by showing that the poles lie on hyperplanes parallel to the facets of certain convex polyhedra associated to the defining matrix of the Shintani zeta function. Explicitly, the latter are the Newton polytopes of the polynomials induced by the columns of the underlying matrix. We then prove that the coefficients of the equations which describe the hyperplanes in the canonical basis are either zero or one, similar to the poles arising when renormalising generic Feynman amplitudes. For that purpose, we introduce an algorithm to distribute weight over a graph such that the weight at each vertex satisfies a given lower bound.
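For orientation, one common convention writes a Shintani zeta function attached to an n x m matrix A = (a_ij) with non-negative real entries as below; the exact convention used in the thesis (for instance, shifts in the summation variables) may differ.

```latex
% One common convention; the thesis's exact normalisation may differ.
\[
  \zeta\bigl(A;\, s_1,\dots,s_m\bigr)
  \;=\;
  \sum_{k_1,\dots,k_n \ge 1}\;
  \prod_{j=1}^{m}
  \Bigl(\sum_{i=1}^{n} a_{ij}\, k_i\Bigr)^{-s_j}.
\]
% For A = (1) this recovers the Riemann zeta function, and suitable choices of A
% recover multiple zeta and Mordell--Tornheim zeta functions.
```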
Movement is a mechanism that shapes biodiversity patterns across spatiotemporal scales. Thereby, the movement process affects species interactions, population dynamics and community composition. In this thesis, I disentangled the effects of movement on the biodiversity of zooplankton, ranging from the individual to the community level. At the level of individual movement, I used video-based analysis to explore the implications of movement behavior for prey-predator interactions. My results showed that swimming behavior was of great importance, as it determined survival in the face of predation. The findings additionally highlighted the relevance of the defense status/morphology of prey, as it affected the prey-predator relationship not only through the defense itself but also through plastic movement behavior. At the level of community movement, I used a field mesocosm experiment to explore the role of dispersal in time (i.e., from the egg bank into the water body) and in space (i.e., between water bodies) in shaping zooplankton metacommunities. My results revealed that priority effects and taxon-specific dispersal limitation influenced community composition. Additionally, different modes of dispersal generated distinct community structures. The egg bank and biotic vectors (i.e. mobile links) played significant roles in the colonization of newly available habitat patches. One crucial aspect that influences zooplankton species after arrival in new habitats is the local environmental conditions. Using common garden experiments, I assessed the performance of zooplankton communities in their home versus away environments in a group of ponds embedded within an agricultural landscape. I identified environmental filtering as a driving factor, as zooplankton communities from individual ponds developed differently in their home and away environments. At the level of individual species, there was no consistent indication of local adaptation. For some species, I found a higher abundance/fitness in their home environment, for others the opposite was the case, and in some cases there was no difference.
Overall, the thesis highlights the links between movement and biodiversity patterns, ranging from the individual active movement to the community level.
Carbonates carried in subducting slabs may play a major role in sourcing and storing carbon in the deep Earth’s interior. Current estimates indicate that between 40 and 66 million tons of carbon per year enter subduction zones, but it is uncertain how much of it reaches the lower mantle. It appears that most of this carbon might be extracted from subducting slabs at the mantle wedge and only a limited amount continues deeper and eventually reaches the deep mantle. However, estimates of deeply subducted carbon broadly range from 0.0001 to 52 million tons of carbon per year. This disparity is primarily due to the limited understanding of the survival of carbonate minerals during their transport to deep mantle conditions. Indeed, carbon has very low solubility in mantle silicates and is therefore expected to be stored primarily in accessory phases such as carbonates. Among those carbonates, magnesite (MgCO3), as a single phase, is the most stable under all mantle conditions. However, experimental investigation of the stability of magnesite in contact with SiO2 at lower mantle conditions suggests that magnesite is stable only along a cold subducted slab geotherm. Furthermore, our understanding of magnesite’s stability when interacting with more complex mantle silicate phases remains incomplete. In the first part of this dissertation, laser-heated diamond anvil cell and multi-anvil apparatus experiments were performed to investigate the stability of magnesite in contact with iron-bearing mantle silicates. Sub-solidus reactions, melting, decarbonation and diamond formation were examined from shallow to mid-lower mantle conditions (25 to 68 GPa; 1300 to 2000 K). Multi-anvil experiments at 25 GPa show the formation of carbonate-rich melt, bridgmanite, and stishovite, with melting occurring at a temperature corresponding to all geotherms except the coldest one. In situ X-ray diffraction in laser-heated diamond anvil cell experiments shows crystallization of bridgmanite and stishovite, but no melt phase was detected in situ at high temperatures. To detect decarbonation phases such as diamond, Raman spectroscopy was used. Crystallization of diamonds is observed as a sub-solidus process even at temperatures relevant to, and lower than, the coldest slab geotherm (1350 K at 33 GPa). Data obtained from this work suggest that magnesite is unstable in contact with the surrounding peridotite mantle in the uppermost lower mantle. The presence of magnesite instead induces melting under oxidized conditions and/or fosters diamond formation under more reduced conditions, at depths of ∼700 km. Consequently, carbonates will be removed from the carbonate-rich slabs at shallow lower mantle conditions, where subducted slabs can stagnate. Therefore, the transport of carbonate to greater depths will be restricted, supporting the presence of a barrier for carbon subduction at the top of the lower mantle. Moreover, the reduction of magnesite to form diamonds provides additional evidence that super-deep diamond crystallization is related to the reduction of carbonates or carbonate-rich melt.
The second part of this dissertation presents the development of a portable laser-heating system optimized for X-ray emission spectroscopy (XES) or nuclear inelastic scattering (NIS) spectroscopy with signal collection at near 90°. The laser-heated diamond anvil cell is the only static pressure device that can replicate the pressures and temperatures of the Earth’s lower mantle and core. The high temperatures are reached by focusing high-powered lasers on the sample contained between the diamond anvils. Moreover, the transparency of diamonds to X-rays enables in situ X-ray spectroscopy measurements that can probe the sample under high-temperature and high-pressure conditions. The development of portable laser-heating systems has therefore brought high-pressure and high-temperature research, combined with high-resolution X-ray spectroscopy techniques, to synchrotron beamlines that do not have a dedicated, permanent laser-heating system. A general description of the system is provided, as well as details on the use of a parabolic mirror as a reflective imaging objective for on-axis laser heating and radiospectrometric temperature measurements with zero attenuation of the incoming X-rays. The parabolic mirror improves the accuracy of temperature measurements, free from chromatic aberrations over a wide spectral range, and its perforation permits in situ X-ray measurements at synchrotron facilities. The parabolic mirror is a well-suited alternative to refractive objectives in laser-heating systems, which will facilitate future applications using CO2 lasers.
In model-driven engineering, the adaptation of large software systems with dynamic structure is enabled by architectural runtime models. Such a model represents an abstract state of the system as a graph of interacting components. Every relevant change in the system is mirrored in the model and triggers an evaluation of model queries, which search the model for structural patterns that should be adapted. This thesis focuses on a type of runtime models where the expressiveness of the model and model queries is extended to capture past changes and their timing. These history-aware models and temporal queries enable more informed decision-making during adaptation, as they support the formulation of requirements on the evolution of the pattern that should be adapted. However, evaluating temporal queries during adaptation poses significant challenges. First, it implies the capability to specify and evaluate requirements on the structure, as well as the ordering and timing in which structural changes occur. Then, query answers have to reflect that the history-aware model represents the architecture of a system whose execution may be ongoing, and thus answers may depend on future changes. Finally, query evaluation needs to be adequately fast and memory-efficient despite the increasing size of the history---especially for models that are altered by numerous, rapid changes.
The thesis presents a query language and a querying approach for the specification and evaluation of temporal queries. These contributions aim to cope with the challenges of evaluating temporal queries at runtime, a prerequisite for history-aware architectural monitoring and adaptation which has not been systematically treated by prior model-based solutions. The distinguishing features of our contributions are: the specification of queries based on a temporal logic which encodes structural patterns as graphs; the provision of formally precise query answers which account for timing constraints and ongoing executions; the incremental evaluation which avoids the re-computation of query answers after each change; and the option to discard history that is no longer relevant to queries. The query evaluation searches the model for occurrences of a pattern whose evolution satisfies a temporal logic formula. Therefore, besides model-driven engineering, another related research community is runtime verification. The approach differs from prior logic-based runtime verification solutions by supporting the representation and querying of structure via graphs and graph queries, respectively, which is more efficient for queries with complex patterns. We present a prototypical implementation of the approach and measure its speed and memory consumption in monitoring and adaptation scenarios from two application domains, with executions of an increasing size. We assess scalability by a comparison to the state-of-the-art from both related research communities. The implementation yields promising results, which pave the way for sophisticated history-aware self-adaptation solutions and indicate that the approach constitutes a highly effective technique for runtime monitoring on an architectural level.
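To give a flavour of what a history-aware query involves, the snippet below keeps a sequence of timestamped architecture snapshots, matches a structural pattern in each, and checks a simple temporal requirement over the matches. It is a generic sketch, not the query language or the incremental algorithm developed in the thesis; in particular, unlike the thesis's approach, this naive version re-evaluates the pattern on every snapshot. Node labels and the property are illustrative assumptions.

```python
import networkx as nx

# history: list of (timestamp, graph) pairs, oldest first
history = []

def record(t, graph):
    """Store a copy of the architectural runtime model at time t."""
    history.append((t, graph.copy()))

def overloaded_servers(graph):
    """Structural pattern: a 'server' node connected to more than 2 'client' nodes."""
    return {n for n, d in graph.nodes(data=True)
            if d.get("kind") == "server"
            and sum(graph.nodes[m].get("kind") == "client" for m in graph[n]) > 2}

def persistently_overloaded(min_duration):
    """Temporal requirement: the pattern holds continuously for at least min_duration."""
    since, result = {}, set()
    for t, g in history:
        matched = overloaded_servers(g)
        for n in matched:
            since.setdefault(n, t)
            if t - since[n] >= min_duration:
                result.add(n)
        for n in list(since):
            if n not in matched:
                del since[n]          # the match was interrupted; reset its start time
    return result
```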
With the recent growth of sensors, cloud computing handles the data processing of many applications. Processing some of this data on the cloud raises, however, many concerns regarding, e.g., privacy, latency, or single points of failure. Alternatively, thanks to the development of embedded systems, smart wireless devices can share their computation capacity, creating a local wireless cloud for in-network processing. In this context, the processing of an application is divided into smaller jobs so that a device can run one or more jobs.
The contribution of this thesis to this scenario is divided into three parts. In part one, I focus on wireless aspects, such as power control and interference management, for deciding which jobs to run on which node and how to route data between nodes. Hence, I formulate optimization problems and develop heuristic and meta-heuristic algorithms to allocate wireless and computation resources. Additionally, to deal with multiple applications competing for these resources, I develop a reinforcement learning (RL) admission controller to decide which application should be admitted. Next, I look into acoustic applications to improve wireless throughput by using microphone clock synchronization to synchronize wireless transmissions.
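As a loose illustration of the admission-control idea (not the controller designed in the thesis), the sketch below trains a tabular Q-learning agent that observes a discretized load level and learns when admitting another application pays off. The state space, reward, and toy environment dynamics are assumptions made purely for the example.

```python
import numpy as np

# Tabular Q-learning admission controller (toy environment, illustrative values).
rng = np.random.default_rng(0)
n_load_levels, n_actions = 10, 2          # actions: 0 = reject, 1 = admit
Q = np.zeros((n_load_levels, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1

load = 0
for step in range(10_000):
    state = min(load, n_load_levels - 1)
    action = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[state]))
    if action == 1:                       # admitting an application adds a job to the network
        load += 1
    load = max(load - rng.integers(0, 2), 0)   # some jobs finish at random
    reward = 1.0 if (action == 1 and load < n_load_levels - 1) else 0.0
    reward -= 5.0 if load >= n_load_levels - 1 else 0.0   # penalize overload of the wireless cloud
    next_state = min(load, n_load_levels - 1)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])

# After training, Q[state] tells the controller whether to admit at a given load level.
```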
In the second part, I jointly work with colleagues from the acoustic processing field to optimize both network and application (i.e., acoustic) qualities. My contribution focuses on the network part, where I study the relation between acoustic and network qualities when selecting a subset of microphones for collecting audio data or selecting a subset of optional jobs for processing these data; too many microphones or too many jobs can lessen quality by unnecessary delays. Hence, I develop RL solutions to select the subset of microphones under network constraints when the speaker is moving while still providing good acoustic quality. Furthermore, I show that autonomous vehicles carrying microphones improve the acoustic qualities of different applications. Accordingly, I develop RL solutions (single and multi-agent ones) for controlling these vehicles.
In the third part, I close the gap between theory and practice. I describe the features of my open-source framework used as a proof of concept for wireless in-network processing. Next, I demonstrate how to run some algorithms developed by colleagues from acoustic processing using my framework. I also use the framework for studying in-network delays (wireless and processing) using different distributions of jobs and network topologies.
Most machine learning methods provide only point estimates when queried to predict on new data. This is problematic when the data is corrupted by noise, e.g. from imperfect measurements, or when the queried data point is very different from the data that the machine learning model has been trained with. Probabilistic modelling in machine learning naturally equips predictions with corresponding uncertainty estimates, which allows a practitioner to incorporate information about measurement noise into the modelling process and to know when not to trust the predictions. A well-understood, flexible probabilistic framework is provided by Gaussian processes, which are ideal as building blocks of probabilistic models. They lend themselves naturally to the problem of regression, i.e., being given a set of inputs and corresponding observations and then predicting likely observations for new unseen inputs, and can also be adapted to many more machine learning tasks. However, exactly inferring the optimal parameters of such a Gaussian process model (in a computationally tractable manner) is only possible for regression tasks in small data regimes. Otherwise, approximate inference methods are needed, the most prominent of which is variational inference.
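For concreteness, exact GP regression in the small-data regime can be sketched as follows, with an RBF kernel and a Cholesky-based solve; the kernel hyperparameters and toy data are illustrative assumptions.

```python
import numpy as np

# Exact Gaussian process regression with an RBF kernel (illustrative settings).
def rbf(x1, x2, lengthscale=0.5, variance=1.0):
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0, 5, 20))
y_train = np.sin(x_train) + rng.normal(scale=0.1, size=x_train.shape)   # noisy observations
x_test = np.linspace(0, 5, 100)

noise = 0.1 ** 2
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
K_s = rbf(x_train, x_test)
K_ss = rbf(x_test, x_test)

L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
mean = K_s.T @ alpha                               # predictive mean at the test inputs
v = np.linalg.solve(L, K_s)
cov = K_ss - v.T @ v                               # predictive covariance
std = np.sqrt(np.diag(cov) + noise)                # per-input uncertainty estimate
```

The cubic cost of the Cholesky factorisation in the number of training points is what makes exact inference infeasible beyond small data sets and motivates the sparse and variational approximations discussed next.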
In this dissertation we study models that are composed of Gaussian processes embedded in other models in order to make those more flexible and/or probabilistic. The first example is deep Gaussian processes, which can be thought of as a small network of Gaussian processes and which can be employed for flexible regression. The second model class that we study is Gaussian process state-space models. These can be used for time-series modelling, i.e., the task of being given a stream of data ordered by time and then predicting future observations. For both model classes the state-of-the-art approaches offer a trade-off between expressive models and computational properties (e.g. speed or convergence properties) and mostly employ variational inference. Our goal is to improve inference in both models by first gaining a deep understanding of the existing methods and then, based on this, designing better inference methods. We achieve this by either exploring the existing trade-offs or by providing general improvements applicable to multiple methods.
We first provide an extensive background, introducing Gaussian processes and their sparse (approximate and efficient) variants. We continue with a description of the models under consideration in this thesis, deep Gaussian processes and Gaussian process state-space models, including detailed derivations and a theoretical comparison of existing methods.
Then we start analysing deep Gaussian processes more closely: Trading off the properties (good optimisation versus expressivity) of state-of-the-art methods in this field, we propose a new variational inference based approach. We then demonstrate experimentally that our new algorithm leads to better calibrated uncertainty estimates than existing methods.
Next, we turn our attention to Gaussian process state-space models, where we closely analyse the theoretical properties of existing methods. The understanding gained in this process leads us to propose a new inference scheme for general Gaussian process state-space models that incorporates effects on multiple time scales. This method is more efficient than previous approaches for long time series and outperforms its comparison partners on data sets in which effects on multiple time scales (fast and slowly varying dynamics) are present.
Finally, we propose a new inference approach for Gaussian process state-space models that trades off the properties of state-of-the-art methods in this field. By combining variational inference with another approximate inference method, the Laplace approximation, we design an efficient algorithm that outperforms its comparison partners since it achieves better calibrated uncertainties.
Today, point clouds are among the most important categories of spatial data, as they constitute digital 3D models of the as-is reality that can be created at unprecedented speed and precision. However, their unique properties, i.e., lack of structure, order, or connectivity information, necessitate specialized data structures and algorithms to leverage their full precision. In particular, this holds true for the interactive visualization of point clouds, which requires balancing hardware limitations regarding GPU memory and bandwidth against a naturally high susceptibility to visual artifacts.
This thesis focuses on concepts, techniques, and implementations of robust, scalable, and portable 3D visualization systems for massive point clouds. To that end, a number of rendering, visualization, and interaction techniques are introduced that extend several basic strategies to decouple rendering efforts and data management: First, a novel visualization technique that facilitates context-aware filtering, highlighting, and interaction within point cloud depictions. Second, hardware-specific optimization techniques that improve rendering performance and image quality in an increasingly diversified hardware landscape. Third, natural and artificial locomotion techniques for nausea-free exploration in the context of state-of-the-art virtual reality devices. Fourth, a framework for web-based rendering that enables collaborative exploration of point clouds across device ecosystems and facilitates the integration into established workflows and software systems.
In cooperation with partners from industry and academia, the practicability and robustness of the presented techniques are showcased via several case studies using representative application scenarios and point cloud data sets. In summary, the work shows that the interactive visualization of point clouds can be implemented by a multi-tier software architecture with a number of domain-independent, generic system components that rely on optimization strategies specific to large point clouds. It demonstrates the feasibility of interactive, scalable point cloud visualization as a key component for distributed IT solutions that operate with spatial digital twins, providing arguments in favor of using point clouds as a universal type of spatial base data usable directly for visualization purposes.
The trace elements selenium (Se) and copper (Cu) play an important role in maintaining normal brain function. Since they have essential functions as cofactors of enzymes or structural components of proteins, an optimal supply as well as well-defined homeostatic regulation are crucial. Disturbances in trace element homeostasis affect health status and contribute to the incidence and severity of various diseases. The brain in particular is vulnerable to oxidative stress due to its extensive oxygen consumption and high energy turnover, among other factors. As components of a number of antioxidant enzymes, both elements are involved in redox homeostasis. However, high concentrations are also associated with the occurrence of oxidative stress, which can induce cellular damage. In particular, high Cu concentrations in some brain areas are associated with the development and progression of neurodegenerative diseases such as Alzheimer's disease (AD). In contrast, reduced Se levels have been measured in the brains of AD patients. The opposing behavior of Cu and Se makes the study of these two trace elements, and of the interactions between them, particularly relevant; both are addressed in this work.
Natural gas hydrates are ice-like crystalline compounds containing water cavities that trap natural gas molecules like methane (CH4), which is a potent greenhouse gas with high energy density. The Mallik site at the Mackenzie Delta in the Canadian Arctic contains a large volume of technically recoverable CH4 hydrate beneath the base of the permafrost. Understanding how the sub-permafrost hydrate is distributed can aid in searching for the ideal locations for deploying CH4 production wells to develop the hydrate as a cleaner alternative to crude oil or coal. Globally, atmospheric warming driving permafrost thaw results in sub-permafrost hydrate dissociation, releasing CH4 into the atmosphere to intensify global warming. It is therefore crucial to evaluate the potential risk of hydrate dissociation due to permafrost degradation. To quantitatively predict hydrate distribution and volume in complex sub-permafrost environments, a numerical framework was developed to simulate sub-permafrost hydrate formation by coupling the equilibrium CH4-hydrate formation approach with a fluid flow and transport simulator (TRANSPORTSE). In addition, integrating the equations of state describing ice melting and forming with TRANSPORTSE enabled this framework to simulate the permafrost evolution during the sub-permafrost hydrate formation. A modified sub-permafrost hydrate formation mechanism for the Mallik site is presented in this study. According to this mechanism, the CH4-rich fluids have been vertically transported since the Late Pleistocene from deep overpressurized zones via geologic fault networks to form the observed hydrate deposits in the Kugmallit–Mackenzie Bay Sequences. The established numerical framework was verified by a benchmark of hydrate formation via dissolved methane. Model calibration was performed based on laboratory data measured during a multi-stage hydrate formation experiment undertaken in the LArge scale Reservoir Simulator (LARS). As the temporal and spatial evolution of simulated and observed hydrate saturation matched well, the LARS model was therefore validated. This laboratory-scale model was then upscaled to a field-scale 2D model generated from a seismic transect across the Mallik site. The simulation confirmed the feasibility of the introduced sub-permafrost hydrate formation mechanism by demonstrating consistency with field observations. The 2D model was extended to the first 3D model of the Mallik site by using well-logs and seismic profiles, to investigate the geologic controls on the spatial hydrate distribution. An assessment of this simulation revealed the hydraulic contribution of each geological element, including relevant fault networks and sedimentary sequences. Based on the simulation results, the observed heterogeneous distribution of sub-permafrost hydrate resulted from the combined factors of the source-gas generation rate, subsurface temperature, and the permeability of geologic elements. Analysis of the results revealed that the Mallik permafrost was heated by 0.8–1.3 °C, induced by the global temperature increase of 0.44 °C and accelerated by Arctic amplification from the early 1970s to the mid-2000s. This study presents a numerical framework that can be applied to study the formation of the permafrost-hydrate system from laboratory to field scales, across timescales ranging from hours to millions of years. 
Overall, these simulations deepen the knowledge about the dominant factors controlling the spatial hydrate distribution in sub-permafrost environments with heterogeneous geologic elements. The framework can support improving the design of hydrate formation experiments and provide valuable contributions to future industrial hydrate exploration and exploitation activities.
Hybrid nanomaterials combine the individual properties of different types of nanoparticles. Some strategies for developing new nanostructures at larger scales rely on the self-assembly of nanoparticles as a bottom-up approach. The use of templates provides ordered assemblies in defined patterns. In a typical soft-template approach, nanoparticles and other surface-active agents are incorporated into non-miscible liquids. The resulting self-organized dispersions mediate nanoparticle interactions to control the subsequent self-assembly. In particular, interactions between nanoparticles of very different dispersibility and functionality can be directed at a liquid-liquid interface.
In this project, water-in-oil microemulsions were formulated from quasi-ternary mixtures with Aerosol-OT as surfactant. Oleyl-capped superparamagnetic iron oxide and/or silver nanoparticles were incorporated in the continuous organic phase, while polyethyleneimine-stabilized gold nanoparticles were confined in the dispersed water droplets. Each type of nanoparticle can modulate the surfactant film and the inter-droplet interactions in diverse ways, and their combination causes synergistic effects. Interfacial assemblies of nanoparticles resulted after phase separation. On the one hand, from a biphasic Winsor type II system at low surfactant concentration, drop-casting of the upper phase afforded thin films of ordered nanoparticles in filament-like networks. Detailed characterization proved that this templated assembly over a surface is based on the controlled clustering of nanoparticles and the elongation of the microemulsion droplets. This process offers the versatility to use different nanoparticle compositions, provided the surface functionalization is kept, in different solvents and over different surfaces. On the other hand, a magnetic heterocoagulate was formed at higher surfactant concentration, whose phase transfer from oleic acid to water was possible with another auxiliary surfactant in an ethanol-water mixture. When the original components were initially mixed under heating, defined oil-in-water, magnetic-responsive nanostructures were obtained, consisting of water-dispersible nanoparticle domains embedded in a matrix-shell of oil-dispersible nanoparticles.
Herein, two different approaches were demonstrated to form diverse hybrid nanostructures from reverse microemulsions as self-organized dispersions of the same components. This shows that microemulsions are versatile soft templates not only for the synthesis of nanoparticles but also for their self-assembly, which suggests new routes toward producing sophisticated nanomaterials at larger scales.