The incorporation of proteins into artificial materials such as membranes offers great opportunities to exploit the manifold qualities of proteins and enzymes that nature has perfected over millions of years. One way to leverage proteins is modification with artificial polymers. To obtain such protein-polymer conjugates, either a polymer can be grown from the protein surface (grafting-from) or a pre-synthesized polymer can be attached to the protein (grafting-to). In this thesis, both techniques were used to synthesize conjugates of different proteins with thermo-responsive polymers.
First, conjugates were analyzed by protein NMR spectroscopy. Typical characterization techniques for conjugates can verify successful conjugation and give hints about the secondary structure of the protein. However, the three-dimensional structure, which is highly important for protein function, cannot be probed by standard techniques. NMR spectroscopy is a unique method that allows even small alterations in the protein structure to be followed. A mutant of the carbohydrate binding module 3b (CBM3bN126W) was used as a model protein and functionalized with poly(N-isopropylacrylamide) (PNIPAm). Analysis of conjugates prepared by grafting-to or grafting-from revealed a strong impact of the conjugation type on protein folding. Whereas grafting a pre-formed polymer to the protein completely preserved protein folding, grafting the polymer from the protein surface led to (partial) disruption of the protein structure.
Next, conjugates of bovine serum albumin (BSA), a cheap and easily accessible protein, were synthesized with PNIPAm and different oligo(ethylene glycol) (meth)acrylates. The obtained protein-polymer conjugates were analyzed by an in-line combination of size exclusion chromatography and multi-angle laser light scattering (SEC-MALS). This technique is particularly advantageous for determining molar masses, as no external calibration of the system is needed. Different SEC column materials and operating conditions were tested to evaluate the applicability of this system for determining absolute molar masses and hydrodynamic properties of heterogeneous conjugates prepared by grafting-from and grafting-to. Hydrophobic and non-covalent interactions of the conjugates led to error-prone values that did not agree with the molar masses expected from conversions and extents of modification.
As an alternative to this method, conjugates were analyzed by sedimentation velocity analytical ultracentrifugation (SV-AUC) to gain insights into the hydrodynamic properties and how they change after conjugation. Within a centrifugal field, a sample moves and fractionates according to the mass, density, and shape of its individual components. Conjugates of BSA with PNIPAm were analyzed below and above the cloud point temperature of the thermo-responsive polymer component. The polymer characteristics were found to be transferred to the conjugate molecule, which then showed decreased ideality – defined as increased deviation from a perfect sphere model – below, and increased ideality above, the cloud point temperature. This effect can be attributed to the polymer chain either extending towards the solvent (expanded state) or wrapping around the protein surface, depending on the applied temperature.
The last project dealt with the synthesis of conjugates of the ferric hydroxamate uptake protein component A (FhuA) with polymers as building blocks for novel membrane materials. The shape of FhuA can be described as a barrel, and removal of a cork domain inside the protein results in a passive channel intended to serve as a pore in the membrane system. The polymer matrix surrounding the membrane protein is composed of a thermo-responsive and a UV-crosslinkable part, thereby incorporating both an external trigger for covalent immobilization of these building blocks in the membrane and switchability of the membrane between different states. The overall performance of membranes prepared by a drying-mediated self-assembly approach was evaluated by permeability and size exclusion experiments. The obtained membranes displayed insufficient interchain crosslinking and therefore lacked performance. Furthermore, the intended switch of the polymer matrix between a hydrophilic and a hydrophobic state did not occur. Correspondingly, size exclusion experiments did not result in retention of analytes larger than the pores defined by the dimensions of the FhuA variant used.
Overall, different paths to generate protein-polymer conjugates by either grafting-from or grafting-to the protein surface were presented, paving the way to new hybrid materials. Different analytical methods were utilized to describe the folding and hydrodynamic properties of the conjugates, providing deeper insight into the overall characteristics of these promising building blocks.
Compound weather events may lead to extreme impacts that can affect many aspects of society, including agriculture. Identifying the underlying mechanisms that cause extreme impacts, such as crop failure, is of crucial importance to improve their understanding and forecasting. In this study, we investigate whether key meteorological drivers of extreme impacts can be identified using the least absolute shrinkage and selection operator (LASSO) in a model environment, a method that allows for automated variable selection and is able to handle collinearity between variables. As an example of an extreme impact, we investigate crop failure using annual wheat yield as simulated by the Agricultural Production Systems sIMulator (APSIM) crop model driven by 1600 years of daily weather data from a global climate model (EC-Earth) under present-day conditions for the Northern Hemisphere. We then apply LASSO logistic regression to determine which weather conditions during the growing season lead to crop failure. We obtain good model performance in central Europe and the eastern half of the United States, while crop failure years in regions in Asia and the western half of the United States are less accurately predicted. Model performance correlates strongly with the annual mean and variability of crop yields; that is, model performance is highest in regions with relatively large annual crop yield mean and variability. Overall, for nearly all grid points, the inclusion of temperature, precipitation and vapour pressure deficit is key to predicting crop failure. In addition, meteorological predictors during all seasons are required for a good prediction. These results illustrate the omnipresence of compounding effects of both meteorological drivers and different periods of the growing season for creating crop failure events. Especially vapour pressure deficit and climate extreme indicators such as diurnal temperature range and the number of frost days are selected by the statistical model as relevant predictors for crop failure at most grid points, underlining their overarching relevance. We conclude that the LASSO regression model is a useful tool to automatically detect compound drivers of extreme impacts and could be applied to other weather impacts such as wildfires or floods. As the detected relationships are of a purely correlative nature, more detailed analyses are required to establish the causal structure between drivers and impacts.
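To make the selection step concrete, here is a minimal sketch of LASSO logistic regression on synthetic data using scikit-learn. The feature matrix, target construction, and regularization strength C are illustrative assumptions, not the study's actual setup; in the study, the predictors are seasonal weather variables and the target marks crop failure years.

```python
# Hedged sketch: L1-penalized (LASSO) logistic regression for a binary
# crop-failure target. All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1600, 12))   # 1600 model years x 12 weather predictors
y = (X[:, 0] - X[:, 3] + rng.normal(size=1600) > 2).astype(int)

# The L1 penalty performs the automated variable selection; C controls
# its strength (smaller C = stronger shrinkage, fewer selected variables).
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
model.fit(X, y)

# Non-zero coefficients mark the predictors LASSO retained as drivers.
coefs = model.named_steps["logisticregression"].coef_.ravel()
print("selected predictor indices:", np.flatnonzero(coefs))
```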
Precipitation forecasting has an important place in everyday life – during the day we may have a dozen small conversations about the likelihood that it will rain this evening or at the weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
A system for precipitation nowcasting comprises two major components: radar-based precipitation estimates, and models to extrapolate that precipitation into the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for nowcast error diagnosis and isolation that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
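As a rough illustration of the tracking-plus-extrapolation scheme described above, here is a minimal sketch using plain OpenCV: a dense Farneback optical flow field is estimated from two consecutive radar frames, and the latest frame is then advected by backward warping. This mirrors the idea behind rainymotion but is not the library's actual API; all parameter values are illustrative.

```python
# Hedged sketch of optical-flow-based extrapolation nowcasting.
import cv2
import numpy as np

def nowcast(prev_frame: np.ndarray, curr_frame: np.ndarray, steps: int = 12):
    """Extrapolate curr_frame `steps` times (e.g. 12 x 5 min = 1 h ahead).

    Both inputs are assumed to be 8-bit single-channel rain images
    (intensities scaled to 0-255), as expected by the Farneback call.
    """
    # Dense motion field between the two most recent radar frames.
    flow = cv2.calcOpticalFlowFarneback(prev_frame, curr_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = curr_frame.shape
    xx, yy = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    latest = curr_frame.astype(np.float32)
    frames = []
    for step in range(1, steps + 1):
        # Backward warping: sample each target pixel upstream along the
        # (assumed constant) motion field.
        map_x = xx - step * flow[..., 0]
        map_y = yy - step * flow[..., 1]
        frames.append(cv2.remap(latest, map_x, map_y, cv2.INTER_LINEAR))
    return frames
```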
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The high benefit of using "big data" for training is among the main reasons for that. Hence, the emerging interest in deep learning in the atmospheric sciences has also been driven by, and is concurrent with, the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this aim, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
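For reference, the two routine verification metrics can be computed as below; CSI is derived from the contingency table of threshold exceedance, CSI = hits / (hits + misses + false alarms). The function names are ours, not those of any particular verification package.

```python
# Hedged sketch of the verification metrics named above.
import numpy as np

def mae(obs: np.ndarray, pred: np.ndarray) -> float:
    """Mean absolute error of predicted rain rates."""
    return float(np.mean(np.abs(obs - pred)))

def csi(obs: np.ndarray, pred: np.ndarray, thr: float) -> float:
    """Critical success index at intensity threshold thr (e.g. 5 mm/h)."""
    hits = np.sum((obs >= thr) & (pred >= thr))
    misses = np.sum((obs >= thr) & (pred < thr))
    false_alarms = np.sum((obs < thr) & (pred >= thr))
    denom = hits + misses + false_alarms
    return float(hits / denom) if denom else np.nan
```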
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
The model development, together with the verification experiments for both conventional and deep learning model predictions, also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help to guide further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures did not allow the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
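A minimal sketch of this feature-tracking idea follows, assuming OpenCV's Shi-Tomasi corner detector and pyramidal Lucas-Kanade tracker as stand-ins for the thesis' actual detector and parameters.

```python
# Hedged sketch: track corners between consecutive radar frames and score
# predicted feature positions by Euclidean distance to the observed ones.
import cv2
import numpy as np

def track_corners(frame0: np.ndarray, frame1: np.ndarray):
    """Return matched corner positions in two consecutive 8-bit frames."""
    p0 = cv2.goodFeaturesToTrack(frame0, maxCorners=200,
                                 qualityLevel=0.2, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(frame0, frame1, p0, None)
    ok = status.ravel() == 1                 # keep successfully tracked corners
    return p0[ok].reshape(-1, 2), p1[ok].reshape(-1, 2)

def location_error(observed: np.ndarray, predicted: np.ndarray) -> np.ndarray:
    """Euclidean distance per feature (in pixels; scale by grid spacing for km)."""
    return np.linalg.norm(observed - predicted, axis=1)
```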
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models, two of which are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method: motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5 percent of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of locational errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also consider the high potential of using deep learning architectures specific to the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
The prevalence of diseases associated with misfolded proteins increases with age. When cellular defense mechanisms become limited, misfolded proteins form aggregates and may also develop more stable cross-β structures, ultimately forming amyloid aggregates. Amyloid aggregates are associated with neurodegenerative diseases such as Alzheimer’s disease and Huntington’s disease. The formation of amyloid deposits, their toxicity, and cellular defense mechanisms have been intensively studied. However, surprisingly little is known about the effects of protein aggregates on cellular signal transduction. It is also not understood whether the presence of aggregation-prone, but still soluble, proteins affects signal transduction.
In this study, the still-soluble, aggregation-prone HttExon1Q74 and its amyloid aggregates were used to analyze the effect of amyloid aggregates on internalization and receptor activation of G protein-coupled receptors (GPCRs), the largest protein family of mammalian cell surface receptors involved in signal transduction. The aggregated HttExon1Q74, but not its soluble form, inhibited ligand-induced clathrin-mediated endocytosis (CME) of various GPCRs. Most likely, this inhibitory effect is based on terminal sequestration to the aggregates of the HSC70 chaperone, which is necessary for CME. Using the vasopressin V1a receptor (V1aR) and the corticotropin-releasing factor receptor 1 (CRF1R) as models, it could be shown that the presence of HttExon1Q74 aggregates and the inhibition of ligand-induced CME lead to an accumulation of desensitized receptors at the plasma membrane. In turn, this disrupts Gq-mediated Ca2+ signaling of the V1aR and Gs-mediated cAMP signaling of the CRF1R. In contrast to HttExon1Q74 amyloid aggregates, soluble HttExon1Q74 as well as amorphous aggregates did not inhibit GPCR internalization and signaling, demonstrating that cellular signal transduction mechanisms are specifically impaired in response to the formation of amyloid aggregates.
In addition, preliminary experiments showed that HttExon1Q74 aggregates provoke an increase in membrane expression of a protein from a structurally and functionally unrelated membrane protein family, namely the serotonin transporter SERT. As SERT is the main pharmacological target in the treatment of depression, this could shed light on this commonly occurring comorbidity in neurodegenerative diseases, in particular in early disease states.
Conceptual knowledge about objects, people and events in the world is central to human cognition, underlying core cognitive abilities such as object recognition and use, and word comprehension. Previous research indicates that concepts consist of perceptual and motor features represented in modality-specific perceptual-motor brain regions. In addition, cross-modal convergence zones integrate modality-specific features into more abstract conceptual representations.
However, several questions remain open: First, to what extent does the retrieval of perceptual-motor features depend on the concurrent task? Second, how do modality-specific and cross-modal regions interact during conceptual knowledge retrieval? Third, which brain regions are causally relevant for conceptually-guided behavior? This thesis addresses these three key issues using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in the healthy human brain.
Study 1 - an fMRI activation study - tested to what extent the retrieval of sound and action features of concepts, and the resulting engagement of auditory and somatomotor brain regions depend on the concurrent task. 40 healthy human participants performed three different tasks - lexical decision, sound judgment, and action judgment - on words with a high or low association to sounds and actions. We found that modality-specific regions selectively respond to task-relevant features: Auditory regions selectively responded to sound features during sound judgments, and somatomotor regions selectively responded to action features during action judgments. Unexpectedly, several regions (e.g. the left posterior parietal cortex; PPC) exhibited a task-dependent response to both sound and action features. We propose these regions to be "multimodal", and not "amodal", convergence zones which retain modality-specific information.
Study 2 - an fMRI connectivity study - investigated the functional interaction between modality-specific and multimodal areas during conceptual knowledge retrieval. Using the above fMRI data, we asked (1) whether modality-specific and multimodal regions are functionally coupled during sound and action feature retrieval, (2) whether their coupling depends on the task, (3) whether information flows bottom-up, top-down, or bidirectionally, and (4) whether their coupling is behaviorally relevant. We found that functional coupling between multimodal and modality-specific areas is task-dependent, bidirectional, and relevant for conceptually-guided behavior. Left PPC acted as a connectivity "switchboard" that flexibly adapted its coupling to task-relevant modality-specific nodes.
Hence, neuroimaging studies 1 and 2 suggested a key role of left PPC as a multimodal convergence zone for conceptual knowledge. However, as neuroimaging is correlational, it remained unknown whether left PPC plays a causal role as a multimodal conceptual hub. Therefore, study 3 - a TMS study - tested the causal relevance of left PPC for sound and action feature retrieval. We found that TMS over left PPC selectively impaired action judgments on low sound-low action words, as compared to sham stimulation. Computational simulations of the TMS-induced electrical field revealed that stronger stimulation of left PPC was associated with worse performance on action, but not sound, judgments. These results indicate that left PPC causally supports conceptual processing when action knowledge is task-relevant and cannot be compensated by sound knowledge. Our findings suggest that left PPC is specialized for action knowledge, challenging the view of left PPC as a multimodal conceptual hub.
Overall, our studies support "hybrid theories" which posit that conceptual processing involves both modality-specific perceptual-motor regions and cross-modal convergence zones. In our new model of the conceptual system, we propose conceptual processing to rely on a representational hierarchy from modality-specific to multimodal up to amodal brain regions. Crucially, this hierarchical system is flexible, with different regions and connections being engaged in a task-dependent fashion. Our model not only reconciles the seemingly opposing grounded cognition and amodal theories, it also incorporates task dependency of conceptually-related brain activity and connectivity, thereby resolving several current issues on the neural basis of conceptual knowledge retrieval.
This paper-based dissertation aims to contribute to the open innovation (OI) and technology management (TM) research fields by investigating their mechanisms and potential at the operational level. The dissertation connects the well-known concept of technology management with OI formats and applies them to specific manufacturing technologies within a clearly defined setting.
Technological breakthroughs force firms to continuously adapt and reinvent themselves. The pace of technological innovation and its impact on firms are constantly increasing due to more connected infrastructure and accessible resources (i.e., data and knowledge). Especially in the manufacturing sector, leveraging new technologies is a key element of staying competitive. These technological shifts call for new management practices.
TM supports firms with various tools to manage these shifts at different levels of the firm. It is a multifunctional and multidisciplinary field, as it deals with all aspects of integrating technological issues into business decision-making and is directly relevant to a number of core business processes. Thus, it makes sense to use this theory and its practices as the foundation of this dissertation. However, considering the increasing complexity and number of technologies, it is no longer sufficient for firms to rely only on established internal R&D and managerial practices. OI can expand these practices by involving distributed innovation processes and accessing further external knowledge sources. This expansion can increase innovation performance and thereby accelerate the time-to-market of technologies.
Research in this dissertation was based on the expectation that OI formats support the R&D activities for manufacturing technologies at the operational level by providing access to resources, knowledge, and leading-edge technology. The dissertation is unique with regard to its rich practical data sets (observations, internal documents, project reviews) drawn from a very large German high-tech firm. The researcher was embedded in an R&D unit within the operational TM department for manufacturing technologies. The analyses include 1) an exploratory in-depth analysis of a crowdsourcing (CS) initiative to elaborate its impact on specific manufacturing technologies, 2) a deductive approach to developing a technology evaluation score model to create a common understanding of the value of selected manufacturing technologies at the operational level, and 3) an abductive reasoning approach in the form of a longitudinal case study to derive important indicators for the in-process activities of the science-based university-industry collaboration format. Thereby, the dissertation contributes to research and practice 1) linkages of TM and OI practices to assimilate technologies at the operational level, 2) insights into the impact of CS on manufacturing technologies and a related guideline for executing CS initiatives in this specific environment, 3) the introduction of manufacturing readiness levels and further criteria into the TM and OI research fields to support decision-makers in the firm in gaining a common understanding of the maturity of manufacturing technologies, and 4) context-specific important indicators for science-based university-industry collaboration projects and a holistic framework connecting TM with the university-industry collaboration approach.
The findings of this dissertation illustrate that OI formats can accelerate the time-to-market of manufacturing technologies and further improve the technical requirements of the product by leveraging external capabilities. The conclusions and implications are intended to foster further research and improve managerial practices to evolve TM into an open, collaborative context with interconnections between all internally and externally involved technologies, individuals, and organizational levels.
Anthropogenic climate change alters the hydrological cycle. While certain areas experience more intense precipitation events, others will experience droughts and increased evaporation, affecting water storage in long-term reservoirs, groundwater, snow, and glaciers. High elevation environments are especially vulnerable to climate change, which will impact the water supply for people living downstream. The Himalaya has been identified as a particularly vulnerable system, with nearly one billion people depending on the runoff in this system as their main water resource. As such, a more refined understanding of spatial and temporal changes in the water cycle in high altitude systems is essential to assess variations in water budgets under different climate change scenarios.
Anthropogenic influences are not the only factor impacting the hydrological cycle: changes can also occur over geological timescales, connected to the interplay between orogenic uplift and climate change. However, their temporal evolution and causes are often difficult to constrain. Using proxies that reflect hydrological changes with increasing elevation, we can unravel the history of orogenic uplift in mountain ranges and its effect on the climate.
In this thesis, stable isotope ratios (expressed as δ2H and δ18O values) of meteoric waters and organic material are combined, as tracers of atmospheric and hydrologic processes, with remote sensing products to better understand water sources in the Himalayas. In addition, the record of modern climatological conditions based on the compound-specific stable isotopes of leaf waxes (δ2Hwax) and brGDGTs (branched glycerol dialkyl glycerol tetraethers) in modern soils in four Himalayan river catchments was assessed to evaluate these compounds as proxies of paleoclimate and (paleo-)elevation. Ultimately, hydrological variations over geological timescales were examined using δ13C and δ18O values of soil carbonates and bulk organic matter originating from sedimentological sections of the pre-Siwalik and Siwalik groups to track the response of vegetation and of monsoon intensity and seasonality on a timescale of 20 Myr.
I find that Rayleigh distillation, with an Indian Summer Monsoon (ISM) moisture source, mainly controls the isotopic composition of surface waters in the studied Himalayan catchments. An increase in d-excess in spring, verified by remote sensing data products, shows the significant impact of runoff from snow-covered and glaciated areas on the surface water isotopic values in the time series.
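For context, the Rayleigh distillation model relates the isotope ratio R of the remaining vapour fraction f to the equilibrium fractionation factor α; in delta notation (a standard textbook form, not copied from the thesis):

```latex
% Rayleigh distillation of a depleting vapour reservoir:
R = R_0 \, f^{\,\alpha - 1},
\qquad
\delta \approx \left(\delta_0 + 1000\right) f^{\,\alpha - 1} - 1000
\quad (\text{in per mil})
```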
In addition, I show that biomarker records such as brGDGTs and δ2Hwax have the potential to record (paleo-)elevation, yielding significant correlations with temperature and surface water δ2H values, respectively, as well as with elevation. Comparing the elevations inferred from brGDGTs and δ2Hwax, large differences were found in arid sections of the elevation transects due to an additional effect of evapotranspiration on δ2Hwax. A combined study of these proxies can improve paleoelevation estimates, and recommendations are provided based on the results of this study.
Ultimately, I infer from the stable isotopic signatures of the two sedimentary sections in the Himalaya (east and west) that the expansion of C4 vegetation between 20 and 1 Myr was not solely dependent on atmospheric pCO2, but also on regional changes in aridity and seasonality.
This thesis shows that the stable isotope chemistry of surface waters can be applied as a tool to monitor the changing Himalayan water budget under projected increasing temperatures. The uncertainties associated with paleo-elevation reconstructions were minimized by combining organic proxies (δ2Hwax and brGDGTs) in Himalayan soil. Stable isotope ratios in bulk soil and soil carbonates traced the evolution of vegetation under the influence of the monsoon during the late Miocene, showing that these proxies can be used to record monsoon intensity, seasonality, and the response of vegetation. In conclusion, the use of organic proxies and stable isotope chemistry in the Himalayas has proven to successfully record changes in climate with increasing elevation. The combination of δ2Hwax and brGDGTs as a new proxy provides a more refined understanding of (paleo-)elevation and the influence of climate.
The mitochondrial chaperone complex HSP60/HSP10 facilitates mitochondrial protein homeostasis by folding more than 300 mitochondrial matrix proteins. It has been shown previously that HSP60 is downregulated in brains of type 2 diabetic (T2D) mice and patients, causing mitochondrial dysfunction and insulin resistance. As HSP60 is also decreased in peripheral tissues of T2D animals, this thesis investigated the effect of an overall reduction of HSP60 on the development of obesity and associated co-morbidities.
To this end, both female and male C57Bl/6N control mice (i.e., without further alterations in their genome, Ctrl) and heterozygous whole-body Hsp60 knock-out (Hsp60+/-) mice, which exhibit a 50 % reduction of HSP60 in all tissues, were fed a normal chow diet (NCD) or a high-fat diet (HFD, 60 % calories from fat) for 16 weeks and were subjected to extensive metabolic phenotyping including indirect calorimetry, NMR spectroscopy, insulin, glucose and pyruvate tolerance tests, vena cava insulin injections, as well as histological and molecular analyses.
Interestingly, NCD feeding did not result in any striking phenotype, only a mild increase in energy expenditure in Hsp60+/- mice. Exposing the mice to a HFD, however, revealed an increased body weight due to higher muscle mass in female Hsp60+/- mice, with a simultaneous decrease in energy expenditure. Additionally, these mice displayed decreased fasting glycemia. In contrast, male Hsp60+/- mice compared to control mice showed lower body weight gain due to decreased fat mass and an increased energy expenditure, strikingly independent of lean mass. Furthermore, only male Hsp60+/- mice displayed improved HOMA-IR and Matsuda insulin sensitivity indices.
Despite the opposite phenotypes with regard to body weight development, Hsp60+/- mice of both sexes showed a significantly higher cell number, as well as a reduction in adipocyte size, in the subcutaneous and gonadal white adipose tissue (sc/gWAT). Curiously, this adipocyte hyperplasia – usually associated with positive aspects of WAT function – is disconnected from metabolic improvements, as the gWAT of male Hsp60+/- mice shows mitochondrial dysfunction, oxidative stress, and insulin resistance. Transcriptomic analysis of gWAT shows an upregulation of genes involved in macroautophagy. Confirming this, the expression of microtubule-associated protein 1A/1B light chain 3B (LC3), a protein marker of autophagy, and directly measured lysosomal activity are increased in the gWAT of male Hsp60+/- mice.
In summary, this thesis revealed a novel gene-nutrient interaction. The reduction of the crucial chaperone HSP60 did not have large effects in mice fed a NCD, but impacted metabolism during diet-induced obesity (DIO) in a sex-specific manner, where, despite opposing body weight and body composition phenotypes, both female and male Hsp60+/- mice show signs of protection from high fat diet-induced systemic insulin resistance.
The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes located in the Khomas Highland of Namibia. H.E.S.S. operates in a wide energy range from several tens of GeV to several tens of TeV, reaching its best sensitivity around 1 TeV or at lower energies. However, there are many important topics – such as the search for Galactic PeVatrons, the study of gamma-ray production scenarios for sources (hadronic vs. leptonic), and EBL absorption studies – which require good sensitivity at energies above 10 TeV. This work aims at improving the sensitivity of H.E.S.S. and increasing the gamma-ray statistics at high energies. The study investigates an enlargement of the H.E.S.S. effective field of view by using events with larger offset angles in the analysis. The greatest challenges in the analysis of large-offset events are a degradation of the reconstruction accuracy and a rise of the background rate as the offset angle increases. A more sophisticated direction reconstruction method (DISP) and improvements to the standard background rejection technique, which by themselves are effective ways to increase the gamma-ray statistics and improve the sensitivity of the analysis, are implemented to overcome these issues. As a result, the angular resolution at the preselection level is improved by 5 - 10% for events at 0.5° offset angle and by 20 - 30% for events at 2° offset angle. The background rate at large offset angles is decreased nearly to a level typical for offset angles below 2.5°. Thereby, sensitivity improvements of 10 - 20% are achieved for the proposed analysis compared to the standard analysis at small offset angles. The developed analysis also allows for the usage of events at large offset angles up to approximately 4°, which was not possible before. This analysis method is applied to the analysis of Galactic plane data above 10 TeV. As a result, 40 of the 78 sources presented in the H.E.S.S. Galactic plane survey (HGPS) are detected above 10 TeV. Among them are representatives of all source classes present in the HGPS catalogue, namely binary systems, supernova remnants, pulsar wind nebulae, and composite objects. The potential of the improved analysis method is demonstrated by investigating the emission above 10 TeV for two objects: the region associated with the shell-type SNR HESS J1731−347 and the PWN candidate associated with PSR J0855−4644 that is coincident with Vela Junior (HESS J0852−463).
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood--Salsburg equations, the Dobrushin contraction principle and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which such a problem has exactly one solution.
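In its standard form (the thesis' exact formulation may differ), a Ruelle bound states that the correlation functions are dominated uniformly by powers of a single constant:

```latex
% Ruelle bound on the n-point correlation functions, uniformly in n:
\rho_n(x_1, \dots, x_n) \;\le\; \xi^{\,n}
\qquad \text{for some constant } \xi = \xi(z, \beta) > 0 .
```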
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
Filaments are omnipresent features of the solar chromosphere, one of the atmospheric layers of the Sun, located above the photosphere, the visible surface of the Sun. They are clouds of plasma reaching from the photosphere into the chromosphere and even into the outermost atmospheric layer, the corona. They are stabilized by the magnetic field. If the magnetic field is disturbed, filaments can erupt as coronal mass ejections (CMEs), releasing plasma into space, which can also hit the Earth. A special type of filament is the polar crown filament, which forms at the interface between the unipolar field of the poles and flux of opposite magnetic polarity that has been transported towards the poles. This flux transport is related to the global dynamo of the Sun and can therefore be analyzed indirectly via polar crown filaments. The main objective of this thesis is to better understand the physical properties and environment of high-latitude and polar crown filaments, which is approached from two perspectives: (1) analyzing the large-scale properties of high-latitude and polar crown filaments with full-disk Hα observations from the Chromospheric Telescope (ChroTel) and (2) determining the relation of polar crown and high-latitude filaments from the chromosphere to the lower-lying photosphere with high-spatial-resolution observations of the Vacuum Tower Telescope (VTT), which reveal the smallest details.
The Chromospheric Telescope (ChroTel) is a small 10-cm robotic telescope at Observatorio del Teide on Tenerife (Spain), which observes the entire Sun in Hα, Ca IIK, and He I 10830 Å. We present a new calibration method that includes limb-darkening correction, removal of non-uniform filter transmission, and determination of He I Doppler velocities. Chromospheric full-disk filtergrams are often obtained with Lyot filters, which may display non-uniform transmission causing large-scale intensity variations across the solar disk. Removal of a 2D symmetric limb-darkening function from full-disk images results in a flat background. However, transmission artifacts remain and are even more distinct in these contrast-enhanced images. Zernike polynomials are uniquely appropriate to fit these large-scale intensity variations of the background. The Zernike coefficients show a distinct temporal evolution for ChroTel data, which is likely related to the telescope’s alt-azimuth mount that introduces image rotation. In addition, applying this calibration to sets of seven filtergrams that cover the He I triplet facilitates determining chromospheric Doppler velocities. To validate the method, we use three datasets with varying levels of solar activity. The Doppler velocities are benchmarked with respect to co-temporal high-resolution spectroscopic data of the GREGOR Infrared Spectrograph (GRIS). Furthermore, this technique can be applied to ChroTel Hα and Ca IIK data. The calibration method for ChroTel filtergrams can be easily adapted to other full-disk data exhibiting unwanted large-scale variations. The spectral region of the He I triplet is a primary choice for high-resolution near-infrared spectropolarimetry. Here, the improved calibration of ChroTel data will provide valuable context data.
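A minimal sketch of such a background fit follows, assuming a low-order (here six-term) Zernike basis and a least-squares fit inside the solar disk; the polynomial set and order cutoff are illustrative assumptions, not ChroTel's actual calibration choices.

```python
# Hedged sketch: remove large-scale transmission variations by fitting
# low-order Zernike polynomials inside the solar disk.
import numpy as np

def zernike_basis(rho, theta):
    """First six (unnormalized) Zernike polynomials on the unit disk."""
    return np.stack([
        np.ones_like(rho),            # piston
        rho * np.cos(theta),          # tilt x
        rho * np.sin(theta),          # tilt y
        2 * rho**2 - 1,               # defocus
        rho**2 * np.cos(2 * theta),   # astigmatism 0/90
        rho**2 * np.sin(2 * theta),   # astigmatism +-45
    ], axis=-1)

def remove_background(img, cx, cy, r_disk):
    """Least-squares fit of the Zernike basis to the on-disk intensity."""
    yy, xx = np.indices(img.shape)
    rho = np.hypot(xx - cx, yy - cy) / r_disk
    theta = np.arctan2(yy - cy, xx - cx)
    disk = rho <= 1.0
    A = zernike_basis(rho[disk], theta[disk])            # (npix, 6)
    coeffs, *_ = np.linalg.lstsq(A, img[disk].astype(float), rcond=None)
    out = img.astype(float).copy()
    out[disk] -= A @ coeffs - coeffs[0]                  # keep mean level
    return out
```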
Polar crown filaments form above the polarity inversion line between the old magnetic flux of the previous cycle and the new magnetic flux of the current cycle. Studying their appearance and their properties can lead to a better understanding of the solar cycle. We use full-disk data of the ChroTel at Observatorio del Teide, Tenerife, Spain, which were taken in three different chromospheric absorption lines (Hα 6563 Å, Ca IIK 3933 Å, and He I 10830 Å), and we create synoptic maps. In addition, the spectroscopic He I data allow us to compute Doppler velocities and to create synoptic Doppler maps. ChroTel data cover the rising and decaying phase of Solar Cycle 24 on about 1000 days between 2012 and 2018. Based on these data, we automatically extract polar crown filaments with image-processing tools and study their properties. We compare contrast maps of polar crown filaments with those of quiet-Sun filaments. Furthermore, we present a super-synoptic map summarizing the entire ChroTel database. In summary, we provide statistical properties, i.e., number and location of filaments, area, and tilt angle for both the maximum and declining phase of Solar Cycle 24. This demonstrates that ChroTel provides a promising dataset to study the solar cycle.
The cyclic behavior of polar crown filaments can be monitored by regular full-disk Hα observations. ChroTel provides such regular observations of the Sun in three chromospheric wavelengths. To analyze the cyclic behavior and the statistical properties of polar crown filaments, we have to extract the filaments from the images. Manual extraction is tedious, and extraction with morphological image processing tools produces a large number of false-positive detections, whose manual removal takes too much time. Automatic object detection and extraction in a reliable manner allows us to process more data in a shorter time. We present an overview of the ChroTel database and a proof of concept of a machine learning application that allows a unified extraction of, for example, filaments from ChroTel data.
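For illustration, here is a sketch of the kind of classical morphological extraction that the machine learning application is meant to improve upon, using scikit-image; the threshold rule and size parameters are illustrative assumptions, not the thesis' actual pipeline.

```python
# Hedged sketch: extract dark (filament-like) features from an Halpha
# full-disk image by thresholding and morphological clean-up.
import numpy as np
from skimage import morphology

def extract_filament_mask(halpha, disk_mask, k=1.5, min_area=250):
    """Return a binary mask of dark features inside the solar disk."""
    inside = halpha[disk_mask]
    thr = inside.mean() - k * inside.std()       # filaments appear dark
    mask = (halpha < thr) & disk_mask
    mask = morphology.binary_opening(mask, morphology.disk(2))
    mask = morphology.remove_small_objects(mask, min_size=min_area)
    return morphology.binary_closing(mask, morphology.disk(3))
```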
The chromospheric Hα spectral line dominates the spectrum of the Sun and other stars. In the stellar regime, this spectral line is already used as a powerful tracer of magnetic activity, whereas for the Sun, other tracers are typically used to monitor solar activity. Nonetheless, the Sun is observed constantly in Hα with globally distributed ground-based full-disk imagers. The aim of this study is to introduce Hα as a tracer of solar activity and compare it to other established indicators. We discuss the newly created imaging Hα excess with a view to possible applications in the modelling of stellar atmospheres. In particular, we try to determine how constant the mean intensity of the Hα excess and the number density of low-activity regions are between solar maximum and minimum. Furthermore, we investigate whether the active region coverage fraction or the changing emission strength in the active regions dominates the time variability in solar Hα observations. We use ChroTel observations of full-disk Hα filtergrams and morphological image processing techniques to extract the positive and negative imaging Hα excess for bright features (plage regions) and dark absorption features (filaments and sunspots), respectively. We describe the evolution of the Hα excess during Solar Cycle 24 and compare it to other well established tracers: the relative sunspot number, the F10.7 cm radio flux, and the Mg II index. Moreover, we discuss possible applications of the Hα excess for stellar activity diagnostics and the contamination of exoplanet transmission spectra. The positive and negative Hα excess follow the behavior of solar activity over the course of the cycle, and the positive Hα excess is closely correlated with the chromospheric Mg II index. The negative Hα excess, created from dark features like filaments and sunspots, is introduced as a tracer of solar activity for the first time. We investigated the mean intensity distribution of active regions at solar minimum and maximum and found that the shapes of both distributions are very similar but with different amplitudes. This might be related to the relatively stable coronal temperature component during the solar cycle. Furthermore, we found that the coverage fraction of the Hα excess and the Hα excess of bright features are strongly correlated, which will influence the modelling of stellar and exoplanet atmospheres.
High-resolution observations of polar crown and high-latitude filaments are scarce. We present a unique sample of such filaments observed in high-resolution Hα narrow-band filtergrams and broad-band images, which were obtained with a new fast camera system at the VTT. ChroTel provided full-disk context observations in Hα, Ca IIK, and He I 10830 Å. The Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) provided line-of-sight magnetograms and ultraviolet (UV) 1700 Å filtergrams, respectively. We study filigree in the vicinity of polar crown and high-latitude filaments and relate their locations to magnetic concentrations at the filaments’ footpoints. Bright points are a well-studied phenomenon in the photosphere at low latitudes, but they had not yet been studied in the quiet network close to the poles. We examine the size, area, and eccentricity of bright points and find that their morphology is very similar to that of their counterparts at lower latitudes, but their sizes and areas are larger. Bright points at the footpoints of polar crown filaments are preferentially located at stronger magnetic flux concentrations, which are related to bright regions at the border of supergranules as observed in UV filtergrams. Examining the evolution of bright points on three consecutive days reveals that their number increases while the filament decays, which indicates that they impact the equilibrium of the cool plasma contained in filaments.
“Embodied Practices – Looking From Small Places” is an edited transcript of a conversation between theatre and performance scholar Sruti Bala (University of Amsterdam) and sociologist, criminologist and anthropologist Dylan Kerrigan (University of Leicester) that took place as an online event in November 2020. Throughout their talk, Bala and Kerrigan engage with the legacy of Haitian anthropologist Michel-Rolph Trouillot. Specifically, they focus on his approach of looking from small units, such as small villages in Dominica, outwards to larger political structures such as global capitalism, social inequalities and the distribution of power. They also share insights from their own research on embodied practices in the Caribbean, Europe and India and answer questions such as: What can research on and through embodied practices tell us about systems of power and domination that move between the local and the global? How can performance practices which are informed by multiple locations and cultures be read and appreciated adequately? Sharing insights from his research into Guyanese prisons, Kerrigan outlines how he aims to connect everyday experiences and struggles of Caribbean people to trans-historical and transnational processes such as racial capitalism and post/coloniality. Furthermore, he elaborates on how he uses performance practices such as spoken word poetry and data verbalisation to connect with systematically excluded groups. Bala challenges naïve notions about the inherent transformative potential of performance in her research on performance and translation. She points to the way in which performance and its reception is always already inscribed in what she calls global or planetary asymmetries. At the conclusion of this conversation, they broach the question: are small places truly as small as they seem?
Active Galactic Nuclei (AGN) are considered to be the main powering source of active galaxies, where central Super Massive Black Holes (SMBHs), with masses between 10^6 and 10^9 M⊙, gravitationally pull in the surrounding material via accretion. The AGN phenomenon spans a very wide range of luminosities, from the most luminous high-redshift quasars (QSOs) to the local Low-Luminosity AGN (LLAGN) with significantly weaker luminosities. While "typical" luminous AGN distinguish themselves by their characteristic blue featureless continuum, Broad Emission Lines (BELs) with Full Widths at Half Maximum (FWHM) on the order of a few thousand km s−1 arising from the so-called Broad Line Region (BLR), and strong radio and/or X-ray emission, the detection of LLAGN is quite challenging due to their extremely weak emission lines and the absence of the power-law continuum. In order to fully understand AGN evolution and their duty cycles across cosmic history, we need a proper knowledge of the AGN phenomenon at all luminosities and redshifts, as well as perspectives from different wavelength bands.
In this thesis I present a search for AGN signatures in the central spectra of 542 local (0.005 < z < 0.03) galaxies from the Calar Alto Legacy Integral Field Area (CALIFA) survey. The adopted aperture of 3′′ × 3′′ corresponds to the central ∼ 100 − 500 pc for the redshift range of CALIFA. Using the standard emission-line ratio diagnostic diagrams, we initially classified all 526 CALIFA emission-line galaxies into star-forming, LINER-like, Seyfert 2, and intermediate types. We further detected signatures of a broad Hα component in 89 spectra from the sample, of which more than 60% are present in the central spectra of LINER-like galaxies. These BELs are very weak, with luminosities in the range 10^38 − 10^41 erg s−1, but with FWHMs between 1000 km s−1 and 6000 km s−1, comparable to those of luminous high-z AGN. This result implies that type 1 AGN are in fact quite frequent in the local Universe. We also identified an additional 29 Seyfert 2 galaxies using the emission-line ratio diagnostic diagrams.
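A minimal sketch of the standard [N II]-based emission-line diagnostics ("BPT" classification) follows, with the Kauffmann et al. (2003) and Kewley et al. (2001) demarcation curves; the exact classification scheme used in the thesis (including the Seyfert/LINER separation) may differ.

```python
# Hedged sketch: classify a spectrum from its [N II]-BPT line ratios.
import numpy as np

def bpt_class(n2_ha: float, o3_hb: float) -> str:
    """n2_ha = log10([N II]6584/Halpha), o3_hb = log10([O III]5007/Hbeta)."""
    kauffmann = 0.61 / (n2_ha - 0.05) + 1.30   # empirical star-forming boundary
    kewley = 0.61 / (n2_ha - 0.47) + 1.19      # theoretical maximum-starburst line
    if n2_ha < 0.05 and o3_hb < kauffmann:
        return "star-forming"
    if n2_ha < 0.47 and o3_hb < kewley:
        return "intermediate"
    return "AGN-like (Seyfert/LINER)"
```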
Using the MBH − σ∗ correlation, we estimated black hole masses of 55 type 1 AGN from CALIFA, a sample for which we had estimates of the bulge stellar velocity dispersion σ∗. We compared these masses to the ones estimated from the virial method and found large discrepancies. We analyzed the validity of both methods for black hole mass estimation of local LLAGN and concluded that virial scaling relations most likely can no longer be applied as a valid MBH estimator in such a low-luminosity regime. These black holes accrete at very low rates, with Eddington ratios in the range 4.1 × 10^−5 − 2.4 × 10^−3. The detection of BELs with such low luminosities and at such low Eddington rates implies that these LLAGN are still able to form the BLR, although probably with a modified structure of the central engine.
In order to obtain a full picture of black hole growth across cosmic time, it is essential that we study black holes in different stages of their activity. For that purpose, we estimated the broad AGN Luminosity Function (AGNLF) of our entire type 1 AGN sample using the 1/Vmax method. The shape of the AGNLF indicates an apparent flattening below luminosities of LHα ∼ 10^39 erg s−1. Correspondingly, we estimated the active Black Hole Mass Function (BHMF) and the Eddington Ratio Distribution Function (ERDF) for a sub-sample of type 1 AGN for which we have MBH and λ estimates. The flattening is also present in both the BHMF and the ERDF, around log(MBH) ∼ 7.7 and log(λ) ∼ −3, respectively. We estimated the fraction of active SMBHs in CALIFA by comparing our active BHMF to that of the local quiescent SMBHs. The shape of the active fraction, which decreases with increasing MBH, as well as the flattening of the AGNLF, BHMF, and ERDF, is consistent with the scenario of AGN cosmic downsizing.
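For reference, the 1/Vmax estimator has the standard form below, where V_max,i is the maximum volume within which object i would still satisfy the survey selection:

```latex
% 1/V_max estimator of the luminosity function in a bin of width \Delta L:
\Phi(L)\,\Delta L \;=\; \sum_{i\,:\,L_i \in [L,\, L+\Delta L)} \frac{1}{V_{\max,i}}
```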
To complete the AGN census in the CALIFA galaxy sample, it is necessary to search for AGN in various wavelength bands. For this purpose, we performed cross-correlations between all 542 CALIFA galaxies and multiwavelength surveys: the Swift-BAT 105-month catalogue (in the hard 15 - 195 keV X-ray band) and the NRAO VLA Sky Survey (NVSS, in the 1.4 GHz radio domain). This added 1 new AGN candidate in the X-ray and 7 in the radio wavelength band to our local LLAGN count.
It is possible to detect AGN emission signatures at distances of 10 – 20 kpc from the central galactic regions. This may happen when the central AGN has recently switched off and the photoionized material is still spread across the galaxy within the light-travel time, or when the photoionized material has been blown away from the nucleus by outflows. In order to detect these extended AGN regions, we constructed spatially resolved emission-line ratio diagnostic diagrams of all emission-line galaxies from CALIFA and found 1 new object that was previously not identified as an AGN.
Obtaining the complete AGN census in CALIFA, with five different AGN types, showed that LLAGN constitute a significant fraction (24%) of the emission-line galaxies in the CALIFA sample. This result implies that AGN are quite common in the local Universe and, although in a very low activity stage, they account for a large fraction of all local SMBHs. Within this thesis, we approached the upper limit of the AGN fraction in the local Universe and gained a deeper understanding of the LLAGN phenomenon.
Transient permeability in porous and fractured sandstones mediated by fluid-rock interactions
(2021)
Understanding the fluid transport properties of subsurface rocks is essential for a large number of geotechnical applications, such as hydrocarbon (oil/gas) exploitation, geological storage (CO2/fluids), and geothermal reservoir utilization. To date, the hydromechanically dependent fluid flow patterns in porous media and single macroscopic rock fractures have been investigated extensively and are relatively well understood. In contrast, fluid-rock interactions, which may permanently affect rock permeability by reshaping the structure and changing the connectivity of pore throats or fracture apertures, need to be further elaborated. This is of significant importance for improving our knowledge of the long-term evolution of rock transport properties and for evaluating a reservoir's sustainability. The thesis focuses on geothermal energy utilization, e.g., seasonal heat storage in aquifers and enhanced geothermal systems, where single-phase fluid flow in porous rocks and rock fracture networks under various pressure and temperature conditions dominates.
In this experimental study, outcrop samples (i.e., Flechtinger sandstone, an illite-bearing Lower Permian rock, and Fontainebleau sandstone, consisting of pure quartz) were used for flow-through experiments under simulated hydrothermal conditions. The themes of the thesis are (1) the investigation of clay particle migration in intact Flechtinger sandstone and the coincident permeability damage upon cyclic temperature and fluid salinity variations; (2) the determination of hydro-mechanical properties of self-propping fractures in Flechtinger and Fontainebleau sandstones with different fracture features and contrasting mechanical properties; and (3) the investigation of the time-dependent fracture aperture evolution of Fontainebleau sandstone induced by fluid-rock interactions (i.e., predominantly pressure solution). Overall, the thesis aims to unravel the mechanisms of the instantaneous reduction (i.e., direct responses to thermo-hydro-mechanical-chemical (THMC) conditions) and progressively-cumulative changes (i.e., time-dependence) of rock transport properties.
The permeability of intact Flechtinger sandstone samples was measured under each constant condition, where temperature (room temperature up to 145 °C) and fluid salinity (NaCl: 0 to 2 mol/l) were changed stepwise. Mercury intrusion porosimetry (MIP), electron microprobe analysis (EMPA), and scanning electron microscopy (SEM) were performed to investigate the changes in local porosity, microstructures, and clay element contents before and after the experiments. The results indicate that the permeability of illite-bearing Flechtinger sandstone is impaired by heating and by exposure to low-salinity pore fluids. The chemically induced permeability variations prove to be path-dependent with respect to the applied succession of fluid salinity changes. The permeability decay induced by a temperature increase and that induced by a fluid salinity reduction operate by relatively independent mechanisms, i.e., thermo-mechanical and thermo-chemical effects, respectively.
Further, the hydro-mechanical investigations of single macroscopic fractures (aligned and mismatched tensile fractures, and smooth saw-cut fractures) illustrate that a relative fracture wall offset can significantly increase fracture aperture and permeability, but the degree of increase depends on fracture surface roughness. X-ray computed tomography (CT) demonstrates that the contact area ratio after the pressure cycles is inversely correlated with the fracture offset. Moreover, the rock mechanical properties, which determine the strength of contact asperities, are crucial: the relatively harder rock (i.e., Fontainebleau sandstone) has a higher self-propping potential for sustaining permeability during pressurization. This implies that self-propped rough fractures with a sufficient displacement are efficient pathways for fluid flow if the rock matrix is mechanically strong.
Finally, two long-term flow-through experiments with Fontainebleau sandstone samples containing single fractures were conducted, one with intermittent flow (~140 days) and one with continuous flow (~120 days). Permeability and fluid element concentrations were measured throughout the experiments. Permeability reduction occurred in the initial stage when the stress was applied, but converged at later stages, even under stressed conditions. Fluid chemistry and microstructure observations demonstrate that pressure solution governs the long-term fracture aperture deformation, with remarkable effects of the pore fluid (Si) concentration and the structure of contact grain boundaries. The retardation and cessation of rock fracture deformation are mainly induced by the decrease in contact stress due to contact area enlargement and by dissolved mass accumulation within the contact boundaries. This work implies that fracture closure under constant (pressure/stress and temperature) conditions is likely a spontaneous process, especially in the initial stage after pressurization when the contact area is relatively small. In contrast, contact area growth changes the fracture closure behavior due to the evolution of the contact boundaries and concurrent changes in their diffusive properties. Fracture aperture, and thus permeability, will likely be sustainable in the long term if no other processes (e.g., mineral precipitation in the open void space) occur.
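For context, fracture permeability in such experiments is commonly interpreted through the parallel-plate (cubic) law, under which the volumetric flow rate scales with the cube of the hydraulic aperture $b$:

\[
Q \propto b^{3}\,\frac{\Delta p}{\mu\,L}, \qquad k = \frac{b^{2}}{12},
\]

so that even small pressure-solution-driven aperture changes translate into large permeability changes. This idealized relation is a standard interpretive tool, not necessarily the exact model applied in the thesis.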
Due to global climate change, providing food security for a growing world population is a major challenge. Abiotic stressors in particular have a strong negative effect on crop yield. To develop climate-adapted crops, a comprehensive understanding of the molecular alterations in the response to varying levels of environmental stress is required. High-throughput or 'omics' technologies can help to identify key regulators and pathways of abiotic stress responses. Beyond obtaining omics data, tools and statistical analyses also need to be designed and evaluated to obtain reliable biological results.
To address these issues, I have conducted three different studies covering two omics technologies. In the first study, I used transcriptomic data from two polymorphic Arabidopsis thaliana accessions, namely Col-0 and N14, to evaluate seven computational tools for their ability to map and quantify Illumina single-end reads. Between 92% and 99% of the reads were mapped against the reference sequence. The raw count distributions obtained from the different tools were highly correlated. When performing a differential gene expression analysis between plants exposed to 20 °C or 4 °C (cold acclimation), a large pairwise overlap between the mappers was obtained. In the second study, I obtained transcript data from ten different Oryza sativa (rice) cultivars by PacBio isoform sequencing, which can capture full-length transcripts. De novo reference transcriptomes were reconstructed, resulting in 38,900 to 54,500 high-quality isoforms per cultivar. Isoforms were collapsed to reduce sequence redundancy and evaluated, e.g. for protein completeness level (BUSCO), transcript length, and number of unique transcripts per gene locus. For the heat- and drought-tolerant aus cultivar N22, I identified around 650 unique and novel transcripts, of which 56 were significantly differentially expressed in developing seeds during combined drought and heat stress. In the last study, I measured and analyzed the changes in metabolite profiles of eight rice cultivars exposed to high night temperature (HNT) stress and grown in the field in the Philippines during the dry and wet seasons. Season-specific changes in metabolite levels, as well as in agronomic parameters, were identified, and metabolic pathways causing a yield decline under HNT conditions were suggested.
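As an illustration of how the pairwise overlap between mappers in the first study can be quantified, the following minimal Python sketch computes Jaccard overlaps of differentially expressed (DE) gene sets; the gene identifiers and sets are placeholders, not the thesis data.

```python
# Minimal sketch: pairwise Jaccard overlap of DE gene sets from different mappers.
from itertools import combinations

de_genes = {  # mapper name -> set of DE gene IDs (illustrative placeholders)
    "mapper_A": {"AT1G01060", "AT3G50970", "AT5G52310"},
    "mapper_B": {"AT1G01060", "AT3G50970", "AT2G42540"},
    "mapper_C": {"AT1G01060", "AT3G50970", "AT5G52310", "AT2G42540"},
}

for (m1, s1), (m2, s2) in combinations(de_genes.items(), 2):
    jaccard = len(s1 & s2) / len(s1 | s2)  # 1.0 = identical DE calls
    print(f"{m1} vs {m2}: Jaccard overlap = {jaccard:.2f}")
```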
In conclusion, the comparison of mapper performance can help plant scientists to decide on the right tool for their data. The de novo reconstruction of transcriptomes for rice cultivars without a genome sequence provides a targeted, cost-efficient approach, applicable to any organism, to identify novel genes responding to stress conditions. With the metabolomics approach for HNT stress in rice, I identified stress- and season-specific metabolites which might be used as molecular markers for crop improvement in the future.
Bottom-up synthetic biology aims at understanding how a cell works. This is pursued by developing techniques to produce lipid-based vesicular structures as cellular mimics. The most common techniques used to produce cellular mimics or synthetic cells are electroformation and the swelling method. However, these techniques cannot efficiently encapsulate macromolecules such as proteins, enzymes, DNA or even liposomes as synthetic organelles. This creates the need to develop new techniques that can circumvent this issue and make the artificial cell a reality, where it is possible to imitate a eukaryotic cell by encapsulating macromolecules. In this thesis, the aim is to construct a cell system using giant unilamellar vesicles (GUVs) to reconstitute the mitochondrial molybdenum cofactor (Moco) biosynthetic pathway. This pathway is highly conserved among all life forms and is therefore known for its biological significance in disorders induced by its malfunctioning. Furthermore, the pathway itself is a multi-step enzymatic reaction that takes place in different compartments. Initially, GTP in the mitochondrial matrix is converted to cPMP in the presence of cPMP synthase. The cPMP produced is then transported across the membrane to the cytosol, where it is converted by MPT synthase into MPT. This pathway makes it possible to address the general challenges faced in the development of a synthetic cell: to encapsulate large biomolecules with good efficiency and greater control, and to evaluate the enzymatic reactions involved in the process.
For this purpose, an emulsion-based technique was developed and optimised to allow rapid production of GUVs (~18 min) with high encapsulation efficiency (80%). This was made possible by optimising various parameters such as density, type of oil, the impact of centrifugation speed/time, lipid concentration, pH, temperature, and emulsion droplet volume. Furthermore, the method was optimised in microtiter plates for direct experimentation and visualisation after GUV formation. Using this technique, the two steps, the formation of cPMP from GTP and the formation of MPT from cPMP, were encapsulated in different sets of GUVs to mimic the two compartments. Two independent fluorescence-based detection systems were established to confirm the successful encapsulation and conversion of the reactants. In parallel, the enzymes were produced by bacterial expression and their activities measured. Following the successful encapsulation and evaluation of the enzymatic reactions, cPMP transport across the mitochondrial membrane was mimicked using GUVs with a complex mitochondrial lipid composition. It was found that the interaction of cPMP with the lipid bilayer results in transient pore formation and leakage of internal contents.
Overall, it can be concluded that in this thesis a novel technique has been optimised for the fast production of functional synthetic cells. The individual enzymatic steps of the Moco biosynthetic pathway have been successfully implemented and quantified within these cellular mimics.
The particle noch (‘still’) can have an additive reading similar to auch (‘also’). We argue that both particles indicate that a previously partially answered QUD (question under discussion) is re-opened to add a further answer. The particles differ in that the QUD, in the case of auch, can be re-opened with respect to the same topic situation, whereas noch indicates that the QUD is re-opened with respect to a new topic situation. This account predicts a difference in the accommodation behavior of the two particles. We present an experiment whose results are in line with this prediction.
Botulinum neurotoxin (BoNT) is produced by the anaerobic bacterium Clostridium botulinum. It is one of the most potent toxins found in nature and can enter motor neurons (MN) to cleave proteins necessary for neurotransmission, resulting in flaccid paralysis. The toxin has applications in both traditional and esthetic medicine. Since BoNT activity varies between batches despite identical protein concentrations, the activity of each lot must be assessed. The gold-standard method is the mouse lethality assay, in which mice are injected with a BoNT dilution series to determine the dose at which half of the animals die of peripheral asphyxia. Ethical concerns surrounding the use of animals in toxicity testing necessitate the creation of alternative model systems to measure the potency of BoNT.
Prerequisites for a successful model are that it is human-specific, that it monitors the complete toxic pathway of BoNT, and that it is highly sensitive, at least in the range of the mouse lethality assay. One model system was developed by our group, in which human SIMA neuroblastoma cells were genetically modified to express a reporter protein (GLuc), which is packaged into neurosecretory vesicles and which, upon cellular depolarization, can be released, or inhibited by BoNT, simultaneously with neurotransmitters. This assay has great potential but carries the inherent disadvantages that the GLuc sequence was randomly inserted into the genome and that the tumor cells have only limited sensitivity and specificity to BoNT. This project aims to remedy these deficits: induced pluripotent stem cells (iPSCs) were genetically modified by the CRISPR/Cas9 method to insert the GLuc sequence into the AAVS1 genomic safe harbor locus, precluding genetic disruption through non-specific integrations. Furthermore, GLuc was modified to associate with signal peptides that direct it to the lumen of both large dense core vesicles (LDCV), which transport neuropeptides, and synaptic vesicles (SV), which package neurotransmitters. Finally, the modified iPSCs were differentiated into motor neurons, the true physiological target of BoNT and hypothetically the most sensitive and specific cells available for the MoN-Light BoNT assay.
iPSCs were transfected to incorporate one of three constructs directing GLuc into LDCVs, one construct directing GLuc into SVs, or one "no tag" GLuc control construct. The LDCV constructs fused GLuc with the signal peptides of proopiomelanocortin (hPOMC-GLuc), chromogranin A (CgA-GLuc), and secretogranin II (SgII-GLuc), which are all proteins found in the LDCV lumen. The SV construct comprises a VAMP2-GLuc fusion sequence, exploiting the SV membrane-associated protein synaptobrevin (VAMP2). The no-tag construct expresses GLuc non-specifically throughout the cell and was created as a comparison for the localization of vesicle-directed GLuc.
The clones were characterized to ensure that the GLuc sequence was incorporated only into the AAVS1 safe harbor locus and that the signal peptides directed GLuc to the correct vesicles. The accurate insertion of GLuc was confirmed by PCR with primers flanking the AAVS1 safe harbor locus, capable of simultaneously amplifying wildtype and modified alleles. The PCR amplicons, along with an insert-specific amplicon from candidate clones, were Sanger sequenced to confirm the correct genomic region and the sequence of the inserted DNA. Off-target integrations were analyzed with the newly developed dc-qcnPCR method, whereby the insert DNA was quantified by qPCR against autosomal and sex-chromosome-encoded genes. While the majority of clones had off-target inserts, at least one on-target clone was identified for each construct.
Finally, immunofluorescence was utilized to localize GLuc in the selected clones. In iPSCs, the vesicle-directed GLuc should travel through the Golgi apparatus along the neurosecretory pathway, while the no-tag GLuc should not follow this pathway. Initial analyses excluded the CgA-GLuc and SgII-GLuc clones due to poor-quality protein visualization. The colocalization of GLuc with the Golgi was analyzed by confocal microscopy and quantified. GLuc was strongly colocalized with the Golgi in the hPOMC-GLuc clone (r = 0.85±0.09), moderately in the VAMP2-GLuc clone (r = 0.65±0.01), and, as expected, only weakly in the no-tag GLuc clone (r = 0.44±0.10). Confocal microscopy of differentiated MNs was used to analyze the colocalization of GLuc with proteins associated with LDCVs and SVs: SgII in the hPOMC-GLuc clone (r = 0.85±0.08) and synaptophysin in the VAMP2-GLuc clone (r = 0.65±0.07). GLuc was also expressed in the same cells as the MN-associated protein Islet1.
A significant portion of GLuc was found in the correct cell type and compartment. However, in the MoN-Light BoNT assay, the hPOMC-GLuc clone could not be provoked to reliably release GLuc upon cellular depolarization. The depolarization protocol for hPOMC-GLuc must be further optimized to produce reliable and specific release of GLuc upon exposure to a stimulus. On the other hand, the VAMP2-GLuc clone could be provoked to release GLuc upon exposure to the muscarinic and nicotinic agonist carbachol. Furthermore, upon simultaneous exposure to the calcium chelator EGTA, the carbachol-provoked release of GLuc could be significantly repressed, indicating the detection of GLuc was likely associated with vesicular fusion at the presynaptic terminal. The application of the VAMP2-GLuc clone in the MoN-Light BoNT assay must still be verified, but the results thus far indicate that this clone could be appropriate for the application of BoNT toxicity assessment.
A tale of shifting relations
(2021)
Understanding the dynamics between the East Asian summer monsoon (EASM) and the East Asian winter monsoon (EAWM) is needed to predict their variability under future global warming scenarios. Here, we investigate the relationship between the EASM and EAWM, as well as the mechanisms driving their variability during the last 10,000 years, by stacking marine and terrestrial (non-speleothem) proxy records from the East Asian realm. This provides a regional and proxy-independent signal for both monsoon systems. The respective signals were subsequently analysed using a linear regression model. We find that the phase relationship between EASM and EAWM is not constant in time and depends significantly on changes in the orbital configuration. In addition, changes in the Atlantic Meridional Overturning Circulation, Arctic sea-ice coverage, the El Niño-Southern Oscillation and sunspot numbers contributed to millennial-scale changes in the EASM and EAWM during the Holocene. We also argue that the bulk signal of monsoonal activity captured by the stacked non-speleothem proxy records supports the previously argued bias of speleothem climate archives towards moisture source changes and/or seasonality.
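The stacking-plus-regression idea can be pictured with the following minimal Python sketch; the synthetic records and record counts are placeholders, not the proxy compilation used in the study.

```python
# Minimal sketch: z-score proxy records, average them into regional stacks,
# then relate the EASM and EAWM stacks by simple linear regression.
import numpy as np

rng = np.random.default_rng(1)
easm_proxies = rng.normal(size=(8, 500)).cumsum(axis=1)  # stand-ins for 8 EASM records
eawm_proxies = rng.normal(size=(6, 500)).cumsum(axis=1)  # stand-ins for 6 EAWM records

def stack(records):
    # Standardize each record, then average: a proxy-independent regional signal.
    z = (records - records.mean(axis=1, keepdims=True)) / records.std(axis=1, keepdims=True)
    return z.mean(axis=0)

easm, eawm = stack(easm_proxies), stack(eawm_proxies)
slope, intercept = np.polyfit(easm, eawm, deg=1)  # linear regression of one stack on the other
print(f"EAWM ~ {slope:.2f} * EASM + {intercept:.2f}")
```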
In the last decades, there has been notable progress in solving the well-known Boolean satisfiability (Sat) problem, as witnessed by powerful Sat solvers. One of the reasons why these solvers are so fast are structural properties of instances that are utilized by the solvers' internals. This thesis deals with the well-studied structural property treewidth, which measures the closeness of an instance to being a tree. In fact, many problems that are hard in general become solvable in time polynomial in the instance size when their treewidth is bounded.
In this work, we study advanced treewidth-based methods and tools for problems in knowledge representation and reasoning (KR). Thereby, we provide means to establish precise runtime results (upper bounds) for canonical problems relevant to KR. Then, we present a new type of problem reduction, which we call decomposition-guided (DG), that allows us to precisely monitor the treewidth when reducing from one problem to another. This new reduction type is the basis for a long-open lower bound result for quantified Boolean formulas and allows us to design a new methodology for establishing runtime lower bounds for problems parameterized by treewidth.
Finally, despite these lower bounds, we provide an efficient implementation of algorithms that exploit treewidth. Our approach finds suitable abstractions of instances, which are subsequently refined in a recursive fashion, and it uses Sat solvers for solving subproblems. It turns out that the resulting solver is quite competitive for two canonical counting problems related to Sat.
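For orientation, the canonical dynamic programming along a tree decomposition of width $k$ solves Sat for a formula $F$ in time

\[
2^{\mathcal{O}(k)} \cdot \mathrm{poly}(\lVert F \rVert),
\]

and under the Exponential Time Hypothesis (ETH) this single-exponential dependence on treewidth cannot be substantially improved. The lower bounds mentioned above concern the analogous question for quantified Boolean formulas, where the dependence on treewidth grows with the number of quantifier alternations.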
There is a general consensus that diverse ecological communities are better equipped to adapt to changes in their environment, but our understanding of the mechanisms by which they do so remains incomplete. Accurately predicting how the global biodiversity crisis affects the functioning of ecosystems, and the services they provide, requires extensive knowledge about these mechanisms.
Mathematical models of food webs have been successful in uncovering many aspects of the link between diversity and ecosystem functioning in small food web modules containing at most two adaptive trophic levels. Meaningful extrapolation of this understanding to the functioning of natural food webs remains difficult due to the presence of complex interactions that are not always accurately captured by bitrophic descriptions of food webs. In this dissertation, we extend this approach to tritrophic food web models by including the third trophic level. Following a functional trait approach, coexistence of all species is ensured by fitness-balancing trade-offs. For example, the defense-growth trade-off implies that species may be defended against predation, but this defense comes at the cost of a lower maximal growth rate. In these food webs, the functional diversity on a given trophic level can be varied by modifying the trait differences between the species on that level.
In the first project, we find that functional diversity promotes high biomass on the top level, which, in turn, leads to a reduction in temporal variability due to compensatory dynamical patterns governed by the top level. Next, these results are generalized by investigating the average behavior of tritrophic food webs over wide intervals of all parameters describing species interactions in the food web. We find that the diversity on the top level is most important for determining the biomass and temporal variability of all other trophic levels, and show that biomass is only transferred efficiently to the top level when diversity is high everywhere in the food web. In the third project, we compare the response of a simple food chain to a nutrient pulse perturbation with that of a food web with diversity on every trophic level. By jointly considering resistance, resilience, and elasticity, we uncover that the response is efficiently buffered when biomass on the top level is high, which is facilitated by functional diversity on every trophic level in the food web. Finally, in the fourth project, we show that even in a simple consumer-resource model without any diversity, top-down control on the intermediate level frequently causes the phase difference between the intermediate and basal levels to deviate from the quarter-cycle lag rule. By adding a top predator, we show that these deviations become even more likely, and anti-phase cycles are often observed.
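To make the modeling approach concrete, the following minimal Python sketch integrates a tritrophic food chain with Holling type-II functional responses; all parameter values are illustrative and not those of the dissertation.

```python
# Minimal sketch: tritrophic chain with basal resource B, consumer C, predator P.
from scipy.integrate import solve_ivp

def chain(t, y, r=1.0, K=1.0, a1=5.0, h1=0.5, a2=3.0, h2=0.5, m1=0.4, m2=0.2):
    B, C, P = y
    f1 = a1 * B / (1 + a1 * h1 * B)  # Holling II: consumption of B by C
    f2 = a2 * C / (1 + a2 * h2 * C)  # Holling II: consumption of C by P
    dB = r * B * (1 - B / K) - f1 * C   # logistic growth minus grazing
    dC = f1 * C - f2 * P - m1 * C       # growth minus predation and mortality
    dP = f2 * P - m2 * P                # top predator dynamics
    return [dB, dC, dP]

sol = solve_ivp(chain, (0, 500), [0.5, 0.3, 0.1], dense_output=True)
print(sol.y[:, -1])  # final biomasses on the three trophic levels
```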
The combined results of these projects show how the properties of the top trophic level, including its functional diversity, have a decisive influence on the functioning of tritrophic food webs from a mechanistic perspective. Because top species are often among the most vulnerable to extinction, our results emphasize the importance of their conservation in ecosystem management and restoration strategies.
As part of our everyday life we consume breaking news and interpret it based on our own viewpoints and beliefs. We have easy access to online social networking platforms and news media websites, where we inform ourselves about current affairs and often post our own views, for example in news comments or social media posts. The media ecosystem enables opinions and facts to travel from news sources to news readers, from news article commenters to other readers, from social network users to their followers, etc. The views of the world many of us hold depend on the information we receive via online news and social media. Hence, it is essential to maintain accurate, reliable and objective online content to safeguard democracy and veracity on the Web. To this end, we contribute to a trustworthy media ecosystem by analyzing news and social media in the context of politics to ensure that media serves the public interest. In this thesis, we use text mining, natural language processing and machine learning techniques to reveal underlying patterns in political news articles and political discourse in social networks.
Mainstream news sources typically cover a great number of the same news stories every day, but they often place them in a different context or report them from different perspectives. In this thesis, we are interested in how distinct and predictable newspaper journalists are in the way they report the news, as a means to understand and identify their different political beliefs. To this end, we propose two models that classify text from news articles, namely reported speech and news comments, to their respective original news source. Our goal is to capture systematic quoting and commenting patterns by journalists and news commenters respectively, which can lead us to the newspaper where the quotes and comments were originally published. Predicting news sources can help us understand the potentially subjective nature behind news storytelling and the magnitude of this phenomenon. Revealing this hidden knowledge can restore our trust in media by advancing transparency and diversity in the news.
Media bias can be expressed in various subtle ways in the text, and it is often challenging to identify these bias manifestations correctly, even for humans. However, media experts, e.g., journalists, are a powerful resource that can help us overcome the vague definition of political media bias, and they can also assist automatic learners in finding the hidden bias in the text. Given the enormous technological advances in artificial intelligence, we hypothesize that identifying political bias in the news can be achieved through the combination of sophisticated deep learning models and domain expertise. Therefore, our second contribution is a high-quality and reliable news dataset annotated by journalists for political bias, together with a state-of-the-art solution for this task based on curriculum learning. Our aim is to discover whether domain expertise is necessary for this task and to provide an automatic solution for this traditionally manually solved problem. User-generated content is fundamentally different from news articles: messages are shorter, they are often personal and opinionated, they refer to specific topics and persons, etc. Regarding political and socio-economic news, individuals in online communities use social networks to keep their peers up to date and to share their own views on ongoing affairs. We believe that social media is as powerful an instrument for information flow as news sources are, and we use its unique characteristic of rapid news coverage for two applications. We analyze Twitter messages and debate transcripts during live political presidential debates to automatically predict the topics that Twitter users discuss. Our goal is to discover the favoured topics in online communities on the dates of political events as a way to understand the political subjects of public interest. With the up-to-dateness of microblogs, an additional opportunity emerges, namely to use social media posts and leverage the real-time information about discussed individuals to find their locations.
That is, given a person of interest who is mentioned in online discussions, we use the wisdom of the crowd to automatically track their physical locations over time. We evaluate our approach in the context of politics, i.e., we predict the locations of US politicians as a proof of concept for important use cases, such as tracking people who pose national security risks, e.g., warlords and wanted criminals.
The propagation of test fields, such as electromagnetic, Dirac or linearized gravity, on a fixed spacetime manifold is often studied by using the geometrical optics approximation. In the limit of infinitely high frequencies, the geometrical optics approximation provides a conceptual transition between the test field and an effective point-particle description. The corresponding point-particles, or wave rays, coincide with the geodesics of the underlying spacetime. For most astrophysical applications of interest, such as the observation of celestial bodies, gravitational lensing, or the observation of cosmic rays, the geometrical optics approximation and the effective point-particle description represent a satisfactory theoretical model. However, the geometrical optics approximation gradually breaks down as test fields of finite frequency are considered.
In this thesis, we consider the propagation of test fields on spacetime, beyond the leading-order geometrical optics approximation. By performing a covariant Wentzel-Kramers-Brillouin analysis for test fields, we show how higher-order corrections to the geometrical optics approximation can be taken into account. The higher-order corrections are related to the dynamics of the internal spin degree of freedom of the considered test field. We obtain an effective point-particle description which contains spin-dependent corrections to the geodesic motion obtained using geometrical optics. This represents a covariant generalization of the well-known spin Hall effect, usually encountered in condensed matter physics and in optics. Our analysis is applied to electromagnetic and massive Dirac test fields, but it can easily be extended to other fields, such as linearized gravity. In the electromagnetic case, we present several examples where the gravitational spin Hall effect of light plays an important role. These include the propagation of polarized light rays on black hole spacetimes and cosmological spacetimes, as well as polarization-dependent effects on the shape of black hole shadows. Furthermore, we show that our effective point-particle equations for polarized light rays reproduce well-known results, such as the spin Hall effect of light in an inhomogeneous medium and the relativistic Hall effect of polarized electromagnetic wave packets encountered in Minkowski spacetime.
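Schematically, the analysis rests on a WKB ansatz for the field, here sketched for a scalar amplitude (a standard form; the index structure and expansion bookkeeping differ for Maxwell and Dirac fields):

\[
\varphi(x) = A(x)\, e^{i S(x)/\epsilon}, \qquad g^{\mu\nu}\,\partial_\mu S\,\partial_\nu S = 0 \quad \text{at leading order},
\]

so that, at leading order, wave rays follow the null geodesics generated by $p_\mu = \partial_\mu S$, while spin-dependent (polarization) terms enter the ray equations at the next order in $\epsilon$.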
The suitability of a newly developed cell-based functional assay was tested for the detection of the activity of a range of neurotoxins and neuroactive pharmaceuticals which act by stimulating or inhibiting calcium-dependent neurotransmitter release. In this functional assay, a reporter enzyme is released from neurosecretory vesicles concomitantly with the neurotransmitter. The current study showed that the release of a luciferase from a differentiated human neuroblastoma-based reporter cell line (SIMA-hPOMC1-26-GLuc cells) can be stimulated by carbachol-mediated activation of the Gq-coupled muscarinic acetylcholine receptor and by the Ca2+-channel-forming spider toxin α-latrotoxin. Carbachol-stimulated luciferase release was completely inhibited by the muscarinic acetylcholine receptor antagonist atropine, and α-latrotoxin-mediated release by the Ca2+ chelator EGTA, demonstrating the specificity of luciferase-release stimulation. SIMA-hPOMC1-26-GLuc cells express mainly L- and N-type and, to a lesser extent, T-type voltage-gated calcium channels (VGCC) on the mRNA and protein level. In accordance with this expression profile, depolarization-stimulated luciferase release by a high-K+ buffer was effectively and dose-dependently inhibited by L-type VGCC inhibitors and, to a lesser extent, by N-type and T-type inhibitors. P/Q- and R-type inhibitors did not affect the K+-stimulated luciferase release. In summary, the newly established cell-based assay may represent a versatile tool to analyze the biological efficacy of a range of neurotoxins and neuroactive pharmaceuticals which mediate their activity through the modulation of calcium-dependent neurotransmitter release.
Geomagnetic field modeling using spherical harmonics requires the inversion for hundreds to thousands of parameters. This large-scale problem can always be formulated as an optimization problem, where a global minimum of a certain cost function has to be found. A variety of approaches is known for solving this inverse problem, e.g. derivative-based methods or least-squares methods and their variants. Each of these methods has its own advantages and disadvantages, affecting, for example, the applicability to non-differentiable cost functions or the runtime of the corresponding algorithm.
In this work, we pursue the goal of finding an algorithm which is faster than the established methods and which is applicable to non-linear problems. Such non-linear problems occur, for example, when estimating Euler angles or when the more robust L1 norm is applied. Therefore, we investigate the usability of stochastic optimization methods from the CMAES (Covariance Matrix Adaptation Evolution Strategy) family for modeling the geomagnetic field of Earth's core. On the one hand, the basics of core field modeling and its parameterization are discussed using examples from the literature. On the other hand, the theoretical background of the stochastic methods is provided. A specific CMAES algorithm was successfully applied to invert data of the Swarm satellite mission and to derive the core field model EvoMag. The EvoMag model agrees well with established models and with observatory data from Niemegk. Finally, we present some observed difficulties and discuss the results of our model.
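The following minimal Python sketch illustrates the ask-and-tell usage of a CMAES optimizer on a robust L1 misfit. It assumes the pycma package and a toy linear forward operator standing in for the spherical-harmonic design matrix, so it illustrates the method class, not the EvoMag setup.

```python
# Minimal sketch: CMAES minimizing a non-differentiable L1 misfit.
import numpy as np
import cma  # pycma package (assumed installed)

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 10))        # toy design matrix (real case: many Gauss coefficients)
m_true = rng.normal(size=10)
d = A @ m_true + rng.laplace(scale=0.1, size=200)  # data with heavy-tailed noise

def l1_misfit(m):
    # Robust L1 cost: non-differentiable, hence awkward for gradient methods.
    return np.abs(A @ m - d).sum()

es = cma.CMAEvolutionStrategy(np.zeros(10), 0.5)   # initial mean and step size
while not es.stop():
    candidates = es.ask()                          # sample a population
    es.tell(candidates, [l1_misfit(m) for m in candidates])  # rank and adapt
print(es.result.xbest)  # recovered model coefficients
```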
As society paves its way towards device miniaturization and precision medicine, micro-scale actuation and guided transport become increasingly prominent research fields, with high potential impact in both technological and clinical contexts. In order to accomplish directed motion of micron-sized objects, such as biosensors and drug-releasing microparticles, towards specific target sites, a promising strategy is the use of living cells as smart biochemically powered carriers, building so-called bio-hybrid systems. Inspired by leukocytes, native cells of living organisms that efficiently migrate to critical targets such as tumor tissue, an emerging concept is to exploit the amoeboid crawling motility of such cells as a means of transport for drug delivery applications.
In the research work described in this thesis, I synergistically applied experimental, computational and theoretical modeling approaches to investigate the behaviour and transport mechanism of a novel kind of bio-hybrid system for active transport at the micro-scale, referred to as cellular truck. This system consists of an amoeboid crawling cell, the carrier, attached to a microparticle, the cargo, which may ideally be drug-loaded for specific therapeutic treatments.
For the purposes of experimental investigation, I employed the amoeba Dictyostelium discoideum as crawling cellular carrier, it being a renowned model organism for leukocyte migration and, in general, for eukaryotic cell motility. The performed experiments revealed a complex recurrent cell-cargo relative motion, together with an intermittent motility of the cellular truck as a whole. The evidence suggests that the presence of cargoes on amoeboid cells acts as a mechanical stimulus guiding cell polarization, thus promoting cell motility and giving rise to the observed intermittent dynamics of the truck. Notably, bursts in cytoskeletal polarity along the cell-cargo axis were found to occur at a rate dependent on cargo geometrical features, such as particle diameter. Overall, the collected experimental evidence points to a pivotal role of cell-cargo interactions in the emergent motion dynamics of the cellular truck. In particular, they can determine the transport capabilities of amoeboid cells, as the cargo size significantly impacts the cytoskeletal activity and repolarization dynamics along the cell-cargo axis, the latter being responsible for truck displacement and reorientation.
Furthermore, I developed a modeling framework, built upon the experimental evidence on cellular truck behaviour, that connects the relative dynamics and interactions arising at the truck scale with the actual particle transport dynamics. In fact, numerical simulations of the proposed model successfully reproduced the phenomenology of the cell-cargo system, while enabling the prediction of the transport properties of cellular trucks over larger spatial and temporal scales. The theoretical analysis provided a deeper understanding of the role of cell-cargo interaction on mass transport, unveiling in particular how the long-time transport efficiency is governed by the interplay between the persistence time of cell polarity and time scales of the relative dynamics stemming from cell-cargo interaction. Interestingly, the model predicts the existence of an optimal cargo size, enhancing the diffusivity of cellular trucks; this is in line with previous independent experimental data, which appeared rather counterintuitive and had no explanation prior to this study.
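A standard point of reference for such predictions is the effective diffusivity of a persistent random walker,

\[
D_{\mathrm{eff}} = \frac{v^{2}\,\tau}{d},
\]

with speed $v$, polarity persistence time $\tau$ and spatial dimension $d$: if the cargo size affects $v$ and $\tau$ in opposite ways, $D_{\mathrm{eff}}$ can peak at an intermediate cargo size. This textbook relation is quoted here for orientation only and is not the full model developed in the thesis.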
In conclusion, my research work sheds light on the importance of cargo-carrier interactions in the context of crawling-cell-mediated particle transport and provides a prototypical, multifaceted framework for the analysis and modelling of such complex bio-hybrid systems and their prospective optimization.
Stereoselective [4+2] Cycloaddition of Singlet Oxygen to Naphthalenes Controlled by Carbohydrates
(2021)
Stereoselective reactions of singlet oxygen are of current interest. Since enantioselective photooxygenations have not been realized efficiently, auxiliary control is an attractive alternative. However, the obtained peroxides are often too labile for isolation or for further transformation into enantiomerically pure products. Herein, we describe the oxidation of naphthalenes by singlet oxygen, where the face selectivity is controlled by carbohydrates for the first time. The synthesis of the precursors is easily achieved starting from naphthoquinone and a protected glucose derivative in only two steps. The photooxygenations proceed smoothly at low temperature, and we detected the corresponding endoperoxides as the sole products by NMR. They are labile and can thermally revert to the parent naphthalenes and singlet oxygen. However, we could isolate and characterize two enantiomerically pure peroxides, which are sufficiently stable at room temperature. An interesting influence of the substituents on the stereoselectivities of the photooxygenations was found, ranging from 51:49 up to 91:9 dr (diastereomeric ratio). We explain this by a hindered rotation of the carbohydrate substituents, substantiated by a combination of NOESY measurements and theoretical calculations. Finally, we could transfer the chiral information from a pure endoperoxide to an epoxide, which was isolated in enantiomerically pure form after cleavage of the sugar chiral auxiliary.
Leveraging large-deviation statistics to decipher the stochastic properties of measured trajectories
(2021)
Extensive time-series encoding the position of particles such as viruses, vesicles, or individual proteins are routinely garnered in single-particle tracking experiments or supercomputing studies. They contain vital clues on how viruses spread or drugs may be delivered in biological cells. Similar time-series are being recorded of stock values in financial markets and of climate data. Such time-series are most typically evaluated in terms of time-averaged mean-squared displacements (TAMSDs), which remain random variables for finite measurement times. Their statistical properties are different for different physical stochastic processes, thus allowing us to extract valuable information on the stochastic process itself. To exploit the full potential of the statistical information encoded in measured time-series we here propose an easy-to-implement and computationally inexpensive new methodology, based on deviations of the TAMSD from its ensemble average counterpart. Specifically, we use the upper bound of these deviations for Brownian motion (BM) to check the applicability of this approach to simulated and real data sets. By comparing the probability of deviations for different data sets, we demonstrate how the theoretical bound for BM reveals additional information about observed stochastic processes. We apply the large-deviation method to data sets of tracer beads tracked in aqueous solution, of tracer beads measured in mucin hydrogels, and of geographic surface temperature anomalies. Our analysis shows how the large-deviation properties can be efficiently used as a simple yet effective routine test to reject the BM hypothesis and unveil relevant information on statistical properties such as ergodicity breaking and short-time correlations.
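For reference, the central observable is the time-averaged mean-squared displacement of a trajectory $x(t)$ of length $T$ at lag time $\Delta$,

\[
\overline{\delta^{2}(\Delta)} = \frac{1}{T-\Delta}\int_{0}^{T-\Delta}\left[x(t+\Delta) - x(t)\right]^{2}\mathrm{d}t,
\]

whose ensemble average for Brownian motion in one dimension is $2D\Delta$; the method evaluates how strongly individual $\overline{\delta^{2}(\Delta)}$ values deviate from this mean.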
This work develops hybrid methods of imaging spectroscopy for open pit mining and examines their feasibility compared with the state of the art. The material distribution within a mine face varies on the small scale and within daily assigned extraction segments. These changes can be relevant to subsequent processing steps but are not always visually identifiable prior to extraction. Misclassifications that cause false allocations of extracted material need to be minimized in order to reduce energy-intensive material re-handling. The use of imaging spectroscopy aims at the allocation of relevant deposit-specific materials before extraction and allows for efficient material handling after extraction. The aim of this work is the parameterization of imaging spectroscopy for pit mining applications and the development and evaluation of a workflow for a ground-based spectral characterization of the mine face. In this work, an application-based sensor adaptation is proposed. The sensor complexity is reduced by downsampling the spectral resolution of the system based on the samples' spectral characteristics. This was achieved by evaluating existing hyperspectral outcrop analysis approaches based on laboratory sample scans from the iron quadrangle in Minas Gerais, Brazil, and by developing a spectral mine face monitoring workflow which was tested for both an operating and an inactive open pit copper mine in the Republic of Cyprus.
The workflow presented here is applied to three regional data sets: 1) iron ore samples from Brazil (laboratory); 2) samples and hyperspectral mine face imagery from the copper-gold-pyrite mine Apliki, Republic of Cyprus (laboratory and mine face data); and 3) samples and hyperspectral mine face imagery from the copper-gold-pyrite deposit Three Hills, Republic of Cyprus (laboratory and mine face data). The hyperspectral laboratory dataset of fifteen Brazilian iron ore samples was used to evaluate different analysis methods and different sensor models. Nineteen commonly used methods to analyze and map hyperspectral data were compared regarding the resulting data products, the mapping accuracy and the computation time of the analysis. Four of the evaluated methods were selected for subsequent analyses as the best-performing algorithms: the spectral angle mapper (SAM), a support vector machine algorithm (SVM), the binary feature fitting algorithm (BFF) and the EnMap geological mapper (EnGeoMap). Next, commercially available imaging spectroscopy sensors were evaluated for their usability under open pit mining conditions. Step-wise downsampling of the data, i.e. the reduction of the number of bands with an increase of each band's bandwidth, was performed to investigate the possible simplification and ruggedization of a sensor without a quality fall-off in the mapping results. The impact of the atmosphere, visible in the spectrum between 1300 and 2010 nm, was reduced by excluding this spectral range from the data for mapping. This tested the feasibility of the method under realistic open pit data conditions. Thirteen datasets based on the different downsampled sensors were analyzed with the four predetermined methods. The optimum sensor for spectral mine face material distinction was determined to be a VNIR-SWIR sensor with 40 nm bandwidths in the VNIR and 15 nm bandwidths in the SWIR spectral range, excluding the atmospherically impacted bands. The Apliki mine sample dataset was used to apply the optimal analyses and sensors thus identified. Thirty-six samples were analyzed geochemically and mineralogically. The sample spectra were compiled into two spectral libraries, both distinguishing between seven different geochemical-spectral clusters. The reflectance dataset was downsampled to five different sensors. The five different datasets were mapped with the SAM, BFF and SVM methods, achieving mapping accuracies of 72-85%, 76-85% and 46-57%, respectively. One mine face scan of Apliki was used to apply the developed workflow. The mapping results were validated against the geochemistry and mineralogy of thirty-six documented field sampling points and against a zonation map of the mine face based on sixty-six samples and field mapping. The mine face was analyzed with SAM and BFF. The analysis maps were visualized on top of a Structure-from-Motion-derived 3D model of the open pit. The mapped geological units and zones correlate well with the expected zonation of the mine face. The third set of hyperspectral imagery, from Three Hills, was available for applying the fully developed workflow. Geochemical sample analyses and laboratory spectral data of fifteen different samples from the Three Hills mine, Republic of Cyprus, were used to analyse a downsampled mine face scan of the open pit. Here, areas of low, medium and high ore content were identified.
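For reference, the spectral angle mapper measures the angle between a test spectrum $\mathbf{t}$ and a reference spectrum $\mathbf{r}$, treated as vectors over the bands:

\[
\alpha = \arccos\!\left(\frac{\mathbf{t}\cdot\mathbf{r}}{\lVert\mathbf{t}\rVert\,\lVert\mathbf{r}\rVert}\right),
\]

where a smaller $\alpha$ indicates a better match; being insensitive to overall brightness, the measure is comparatively robust to illumination differences across a mine face.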
The developed workflow was successfully applied to the open pit mines Apliki and Three Hills, and the spectral maps reflect the prevailing geological conditions. This work guides through the acquisition, preparation and processing of imaging spectroscopy data, the optimum choice of analysis methodology, and the utilization of simplified, robust sensors that meet the requirements of open pit mining conditions. It accentuates the importance of a site-specific and deposit-specific spectral library for mine face analysis and underlines the need for geological and spectral analysis experts to successfully implement imaging spectroscopy in the field of open pit mining.
The aim of the doctoral project was to answer the question of whether structural word-initial noun capitalization, which apart from German is otherwise only found in Luxembourgish, has a function that is advantageous for the reader. The overriding hypothesis was that an advantage is achieved by activating a syntactic category, namely the core of a noun phrase, through the parafoveal perception of the capital letters. This perception from the corner of the eye should make it possible to preprocess the following noun. As a result, sentence processing should be facilitated, which should ultimately be reflected in overall faster reading times and fixation durations.
The structure of the project includes three studies, some of which included different participant groups:
Study 1:
Study design: Semantic priming using garden-path sentences should bring out the functionality of noun capitalization for the reader
Participant groups: German natives reading German
Study 2:
Study design: same design as study 1, but in English
Participant groups:
English natives without any knowledge of German reading English
English natives who regularly read German reading English
Germans with high proficiency in English reading English
Study 3:
Study design:
Influence of the noun frequency on a potential preprocessing using the boundary paradigm; Study languages: German and English
Participant groups:
German natives reading German
English natives without any knowledge of German reading English
Germans with high proficiency in English reading English
Brief summary: Noun capitalization clearly has an impact on sentence processing in both German and English. However, it could not be confirmed that this impact constitutes a substantial, decisive advantage.
Compound values are not universally supported in virtual machine (VM)-based programming systems and languages. However, providing data structures with value characteristics can be beneficial. On the one hand, programming systems and languages can adequately represent physical quantities with compound values and avoid inconsistencies, for example in the representation of large numbers. On the other hand, just-in-time (JIT) compilers, which are often found in VMs, can rely on the fact that compound values are immutable, which is an important property for optimizing programs. Considering this, compound values have an optimization potential that can be put to use by implementing them in VMs in a way that is efficient in memory usage and execution time. Yet, optimized compound values in VMs face certain challenges: to maintain consistency, it should not be observable by the program whether compound values are represented in an optimized way by a VM; an optimization should take into account that the usage of compound values can exhibit certain patterns at run-time; and value-incompatible properties that are necessary due to implementation restrictions should be kept to a minimum.
We propose a technique to detect and compress common patterns of compound value usage at run-time to improve memory usage and execution speed. Our approach identifies patterns of frequent compound value references and introduces abbreviated forms for them. Thus, it is possible to store multiple inter-referenced compound values in an inlined memory representation, reducing the overhead of metadata and object references. We extend our approach by a notion of limited mutability, using cells that act as barriers for our approach and provide a location for shared, mutable access with the possibility of type specialization. We devise an extension to our approach that allows us to express automatic unboxing of boxed primitive data types in terms of our initial technique. We show that our approach is versatile enough to express another optimization technique that relies on values, such as Booleans, that are unique throughout a programming system. Furthermore, we demonstrate how to re-use learned usage patterns and optimizations across program runs, thus reducing the performance impact of pattern recognition.
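The following minimal Python sketch illustrates the flavor of the pattern-based compression described above: a frequent reference "shape" of nested immutable values is detected at run time, and matching values are stored as flat primitive runs. It is an illustration of the idea, not the VM-level implementation.

```python
# Minimal sketch: detect the most frequent compound-value shape and inline it.
from collections import Counter

def shape(v):
    # Describe the reference structure of a nested tuple ("compound value").
    if isinstance(v, tuple):
        return tuple(shape(c) for c in v)
    return type(v).__name__

def flatten(v, out):
    # Store only the primitive leaves: no per-object headers or references.
    if isinstance(v, tuple):
        for c in v:
            flatten(c, out)
    else:
        out.append(v)
    return out

values = [((1, 2.0), 3), ((4, 5.0), 6), ((7, 8.0), 9)]
hot = Counter(shape(v) for v in values).most_common(1)[0][0]  # learned pattern
inlined = [flatten(v, []) for v in values if shape(v) == hot]
print(hot, inlined)  # shared shape descriptor plus flat primitive runs
```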
We show in a best-case prototype that the implementation of our approach is feasible and can also be applied to general purpose programming systems, namely implementations of the Racket language and Squeak/Smalltalk. In several micro-benchmarks, we found that our approach can effectively reduce memory consumption and improve execution speed.
Modern knowledge bases contain and organize knowledge from many different topic areas. Apart from specific entity information, they also store information about their relationships amongst each other. Combining this information results in a knowledge graph that can be particularly helpful in cases where relationships are of central importance. Among other applications, modern risk assessment in the financial sector can benefit from the inherent network structure of such knowledge graphs by assessing the consequences and risks of certain events, such as corporate insolvencies or fraudulent behavior, based on the underlying network structure. As public knowledge bases often do not contain the necessary information for the analysis of such scenarios, the need arises to create and maintain dedicated domain-specific knowledge bases.
This thesis investigates the process of creating domain-specific knowledge bases from structured and unstructured data sources. In particular, it addresses the topics of named entity recognition (NER), duplicate detection, and knowledge validation, which represent essential steps in the construction of knowledge bases.
First, we present a novel method for duplicate detection based on a Siamese neural network that is able to learn a dataset-specific similarity measure, which is used to identify duplicates. Using this specialized network architecture, we design and implement a knowledge transfer between two deduplication networks, which leads to significant performance improvements and a reduction of the required training data.
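A minimal PyTorch sketch of the Siamese idea follows: two records pass through one shared embedding network, and a contrastive loss learns a dataset-specific similarity measure. Dimensions, data and hyperparameters are placeholders, not the thesis configuration.

```python
# Minimal sketch: Siamese duplicate detection with a contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))  # shared network

def contrastive_loss(x1, x2, is_dup, margin=1.0):
    d = F.pairwise_distance(embed(x1), embed(x2))
    # duplicates (is_dup=1): minimize distance; non-duplicates: enforce a margin
    return (is_dup * d.pow(2) + (1 - is_dup) * F.relu(margin - d).pow(2)).mean()

opt = torch.optim.Adam(embed.parameters(), lr=1e-3)
x1, x2 = torch.randn(8, 32), torch.randn(8, 32)   # stand-in record feature vectors
is_dup = torch.randint(0, 2, (8,)).float()        # stand-in duplicate labels
for _ in range(100):
    opt.zero_grad()
    loss = contrastive_loss(x1, x2, is_dup)
    loss.backward()
    opt.step()
# At inference time, a distance threshold on embed(a) vs embed(b) flags duplicates.
```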
Furthermore, we propose a named entity recognition approach that is able to identify company names by integrating external knowledge in the form of dictionaries into the training process of a conditional random field classifier. In this context, we study the effects of different dictionaries on the performance of the NER classifier. We show that both the inclusion of domain knowledge and the generation and use of alias names result in significant performance improvements.
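The dictionary integration can be pictured as one extra token feature fed to the CRF, as in this minimal sketch (toy gazetteer and sentence; the actual feature set of the thesis is richer):

```python
# Minimal sketch: a dictionary (gazetteer) feature for CRF-based NER.
company_dict = {"siemens", "volkswagen", "sap"}  # illustrative gazetteer

def token_features(tokens, i):
    w = tokens[i]
    return {
        "word.lower": w.lower(),
        "word.istitle": w.istitle(),
        "in_company_dict": w.lower() in company_dict,  # the external-knowledge feature
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
    }

sentence = ["SAP", "acquired", "a", "startup", "in", "Berlin"]
X = [token_features(sentence, i) for i in range(len(sentence))]
print(X[0])  # features for "SAP", including in_company_dict=True
# Feature dicts in this format can be fed, e.g., to sklearn-crfsuite's CRF.
```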
For the validation of knowledge represented in a knowledge base, we introduce Colt, a framework for knowledge validation based on the interactive quality assessment of logical rules. In its most expressive implementation, we combine Gaussian processes with neural networks to create Colt-GP, an interactive algorithm for learning rule models. Unlike other approaches, Colt-GP uses knowledge graph embeddings and user feedback to cope with data quality issues of knowledge bases. The learned rule model can be used to conditionally apply a rule and assess its quality.
Finally, we present CurEx, a prototypical system for building domain-specific knowledge bases from structured and unstructured data sources. Its modular design is based on scalable technologies, which, in addition to processing large datasets, ensures that the modules can be easily exchanged or extended. CurEx offers multiple user interfaces, each tailored to the individual needs of a specific user group and is fully compatible with the Colt framework, which can be used as part of the system.
We conduct a wide range of experiments with different datasets to determine the strengths and weaknesses of the proposed methods. To ensure the validity of our results, we compare the proposed methods with competing approaches.
Angular momentum is a particularly sensitive probe of stellar evolution because it changes significantly over the main sequence life of a star. In this thesis, I focus on young main sequence stars, some of which feature a rapid evolution in their rotation rates. This transition from fast to slow rotation is inadequately explored observationally, and this work aims to provide insights into its properties and time scales, but also investigates stellar rotation in young open clusters in general.
I focus on the two open clusters NGC 2516 and NGC 3532, which are ~150 Myr (zero-age main sequence age) and ~300 Myr old, respectively. From 42 d-long photometric time series obtained at the Cerro Tololo Inter-American Observatory, I determine stellar rotation periods in both clusters. With accompanying low-resolution spectroscopy, I measure radial velocities and chromospheric emission for NGC 3532, the former to establish a clean membership and the latter to probe the rotation-activity connection.
The rotation period distribution derived for NGC 2516 is identical to that of four other coeval open clusters, including the Pleiades, which shows the universality of stellar rotation at the zero-age main sequence. Among the similarities (with the Pleiades), the "extended slow rotator sequence" is a new, universal, yet sparse feature in the colour-period diagrams of open clusters. From a membership study, I find NGC 3532 to be one of the richest nearby open clusters, with 660 confirmed radial velocity members, and to be slightly sub-solar in metallicity. The stellar rotation periods for NGC 3532 are the first published for a 300 Myr-old open cluster, a key age for understanding the transition from fast to slow rotation. The fast rotators at this age have evolved significantly beyond what is observed in NGC 2516, which allows us to estimate the spin-down timescale and to explore the issues that angular momentum models have in describing this transition. The transitional sequence is also clearly identified in a colour-activity diagram of stars in NGC 3532. The synergies of the chromospheric activity and the rotation periods allow us to understand the colour-activity-rotation connection for NGC 3532 in unprecedented detail and to estimate additional rotation periods for members of NGC 3532, including stars on the "extended slow rotator sequence".
In conclusion, this thesis probes the transition from fast to slow rotation but also has more general implications for the angular momentum evolution of young open clusters.
Extracellular vesicles: potential mediators of psychosocial stress contribution to osteoporosis?
(2021)
Osteoporosis is characterized by low bone mass and damage to the bone tissue’s microarchitecture, leading to increased fracture risk. Several studies have provided evidence for associations between psychosocial stress and osteoporosis through various pathways, including the hypothalamic-pituitary-adrenocortical axis, the sympathetic nervous system, and other endocrine factors. As psychosocial stress provokes oxidative cellular stress with consequences for mitochondrial function and cell signaling (e.g., gene expression, inflammation), it is of interest whether extracellular vesicles (EVs) may be a relevant biomarker in this context or may act by transporting substances. EVs are intercellular communicators that transfer the substances encapsulated in them, modify the phenotype and function of target cells, and mediate cell-cell communication; they therefore play critical roles in disease progression, clinical diagnosis, and therapy. This review summarizes the characteristics of EVs, their role in stress and osteoporosis, and their value as biological markers. We demonstrate that EVs are potential mediators of psychosocial stress and osteoporosis and may be valuable in innovative research settings.
Geochemical processes such as mineral dissolution and precipitation alter the microstructure of rocks, and thereby affect their hydraulic and mechanical behaviour. Quantifying these property changes and considering them in reservoir simulations is essential for a sustainable utilisation of the geological subsurface. Due to the lack of alternatives, analytical methods and empirical relations are currently applied to estimate the evolving hydraulic and mechanical rock properties associated with chemical reactions. However, the predictive capabilities of analytical approaches remain limited, since they assume idealised microstructures and thus cannot reflect property evolution for dynamic processes. Hence, the aim of the present thesis is to improve the prediction of permeability and stiffness changes resulting from pore space alterations of reservoir sandstones.
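For context, a widely used analytical porosity-permeability relation of the kind such pore-scale simulations are benchmarked against is the Kozeny-Carman equation, given here in one common form for reference (the abstract does not specify which analytical relations were compared):

```latex
% Kozeny-Carman relation: k = permeability, \phi = porosity,
% S = specific surface area, c = empirical (Kozeny) constant.
k \;=\; \frac{\phi^{3}}{c\,S^{2}\,(1-\phi)^{2}}
```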
A detailed representation of rock microstructure, including the morphology and connectivity of pores, is essential to accurately determine physical rock properties. For that purpose, three-dimensional pore-scale models of typical reservoir sandstones, obtained from highly resolved micro-computed tomography (micro-CT), are used to numerically calculate permeability and stiffness. In order to adequately depict characteristic distributions of secondary minerals, the virtual samples are systematically altered and the resulting trends among the geometric, hydraulic, and mechanical rock properties are quantified. It is demonstrated that the geochemical reaction regime controls the location of mineral precipitation within the pore space, and thereby crucially affects the permeability evolution. This emphasises the need to determine distinctive porosity-permeability relationships by means of digital pore-scale models. By contrast, a substantial impact of spatial alteration patterns on the stiffness evolution of reservoir sandstones is only observed for certain microstructures, such as highly porous granular rocks or sandstones comprising framework-supporting cementations. In order to construct synthetic granular samples, a process-based approach including grain deposition and diagenetic cementation is proposed. It is demonstrated that the generated samples reliably represent the microstructural complexity of natural sandstones. Thereby, general limitations of imaging techniques can be overcome and various realisations of granular rocks can be flexibly produced. These can be further altered in virtual experiments, offering a fast and cost-effective way to examine the impact of precipitation, dissolution, or fracturing on various petrophysical correlations.
The presented research work provides methodological principles to quantify trends in permeability and stiffness resulting from geochemical processes. The calculated physical property relations are directly linked to pore-scale alterations, and thus have a higher accuracy than commonly applied analytical approaches. This will considerably improve the predictive capabilities of reservoir models, and is further relevant for assessing and reducing potential risks, such as productivity or injectivity losses as well as reservoir compaction or fault reactivation. Hence, the proposed method is of paramount importance for a wide range of natural and engineered subsurface applications, including geothermal energy systems, hydrocarbon reservoirs, CO2 and energy storage, as well as hydrothermal deposit exploration.
The majority of baryons in the Universe is believed to reside in the intergalactic medium (IGM). This makes the IGM an important component in understanding cosmological structure formation. It is expected to trace the same dark matter distribution as galaxies, forming structures like filaments and clusters. However, whereas galaxies can be observed to be arranged along these large-scale structures, the spatial distribution of the diffuse IGM is not as easily unveiled. Absorption-line studies of quasar (QSO) spectra can help with mapping the IGM, as well as the boundary layer between the IGM and galaxies: the circumgalactic medium (CGM). By studying gas in the Local Group (LG) as well as in the IGM, this study aims to gain a better understanding of how the gas is linked to the large-scale structure of the local Universe and the galaxies residing in that structure.
Chapter 1 gives an introduction to the CGM and IGM, while the methods used in this study are explained in Chapter 2. Chapter 3 starts on a relatively small cosmological scale, namely that of our Local Group, which includes, among others, the Milky Way (MW) and M31. Within the CGM of the MW, there exist denser clouds, some of which are infalling while others are moving away from the Galactic disc. To study these clouds, 29 QSO spectra obtained with the Cosmic Origins Spectrograph (COS) aboard the Hubble Space Telescope (HST) were analysed. Abundances of Si II, Si III, Si IV, C II, and C IV were measured for 69 high-velocity clouds (HVCs) belonging to two samples: one in the direction of the LG’s barycentre and the other in the anti-barycentre direction. Their velocities lie in the range −400 ≤ vLSR ≤ −100 km/s for the barycentre sample and +100 ≤ vLSR ≤ +300 km/s for the anti-barycentre sample. These data could then be combined with Cloudy models to derive gas volume densities for the HVCs. Because the density is tied to the pressure of the ambient medium, which is in turn determined by the Galactic radiation field, the distances of the HVCs could be estimated. From this, a subsample of absorbers located in the direction of M31 was found to lie outside the MW’s virial radius, their low densities (log nH ≤ −3.54) making it likely that they are part of the gas in between the MW and M31. No such low-density absorbers were found in the anti-barycentre sample. Our results thus hint at gas following the dark matter potential, which would be deeper between the MW and M31 as they are by far the most massive members of the LG.
From this bridge of gas in the LG, this study zooms out to the large-scale structure of the local Universe (z ~ 0) in Chapter 4. Galaxy data from the V8k catalogue and QSO spectra from COS were used to study the relation between the galaxies tracing large-scale filaments and the gas existing outside of those galaxies. This study used the filaments defined in Courtois et al. (2013). A total of 587 Lyman α (Lyα) absorbers were found in the 302 QSO spectra in the velocity range 1070–6700 km/s. After selecting sightlines passing through or close to these filaments, model spectra were made for 91 sightlines and 215 (227) Lyα absorbers (components) were measured in this sample. The velocity gradient along each filament was calculated and 74 absorbers were found within 1000 km/s of the nearest filament segment.
In order to determine whether the absorbers are more closely tied to galaxies or to the large-scale structure, equivalent widths of the Lyα absorbers were plotted against both galaxy and filament impact parameters. While stronger absorbers do tend to be closer to either galaxies or filaments, there is a large scatter in this relation. Despite this large scatter, this study found that the absorbers do not follow a random distribution either. They cluster less strongly around filaments than around galaxies, but more strongly than a random distribution, as confirmed by a Kolmogorov-Smirnov test.
Furthermore, the column density distribution function found in this study has a power-law slope −β with β = 1.63 ± 0.12 for the total sample and β = 1.47 ± 0.24 for the absorbers within 1000 km/s of a filament. The shallower slope for the latter subsample could indicate an excess of denser absorbers within the filaments, although the two values are consistent within the errors. These values are in agreement with those found by, e.g., Lehner et al. (2007) and Danforth et al. (2016).
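For reference, the column density distribution function quoted here is conventionally defined as a power law in the H I column density (standard background, not a result specific to this work):

```latex
% Conventional power-law form of the Ly-alpha column density
% distribution function; N_HI is the H I column density, X the
% absorption path length, and -beta the slope quoted above.
f(N_{\mathrm{HI}}) \;\equiv\; \frac{\partial^{2}\mathcal{N}}{\partial N_{\mathrm{HI}}\,\partial X} \;\propto\; N_{\mathrm{HI}}^{-\beta}
```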
The picture that emerges from this study regarding the relation between the IGM and the large-scale structure in the local Universe fits with what is found in other studies: while at least part of the gas traces the same filamentary structure as galaxies, the relation is complex. This study has shown that by taking a large sample of sightlines and comparing the data gathered from those with galaxy data, it is possible to study the gaseous large-scale structure. This approach can be used in the future together with simulations to get a better understanding of structure formation and evolution in the Universe.
The evolution of life on Earth has been driven by disturbances of different types and magnitudes over the 4.6 billion years of Earth’s history (Raup, 1994, Alroy, 2008). One example of such disturbances are mass extinctions, which are characterized by an exceptional increase in the extinction rate affecting a great number of taxa in a short interval of geologic time (Sepkoski, 1986). During the 541 million years of the Phanerozoic, life on Earth suffered five exceptionally severe mass extinctions named the “Big Five Extinctions”. Many mass extinctions are linked to changes in climate (Feulner, 2009). Hence, the study of past mass extinctions is not only intriguing, but can also provide insights into the complex nature of the Earth system. This thesis aims at deepening our understanding of the triggers of mass extinctions and how they affected life. To accomplish this, I investigate changes in climate during two of the Big Five extinctions using a coupled climate model.
During the Devonian (419.2–358.9 million years ago), the first vascular plants and vertebrates evolved on land while extinction events occurred in the ocean (Algeo et al., 1995). The causes of these formative changes, their interactions, and their links to changes in climate are still poorly understood. Therefore, we explore the sensitivity of the Devonian climate to various boundary conditions using an intermediate-complexity climate model (Brugger et al., 2019). In contrast to Le Hir et al. (2011), we find only a minor biogeophysical effect of changes in vegetation cover, a discrepancy we trace to unrealistically high soil albedo values used in the earlier study. In addition, our results do not support the strong influence of orbital parameters on the Devonian climate that was simulated with a climate model employing a strongly simplified ocean model (De Vleeschouwer et al., 2013, 2014, 2017). We can only reproduce the changes in Devonian climate suggested by proxy data by decreasing atmospheric CO2. Still, reconciling our simulations with the evolution of sea surface temperatures reconstructed from proxy data (Joachimski et al., 2009) remains challenging and suggests a lower δ18O ratio of Devonian seawater. Furthermore, our study of the sensitivity of the Devonian climate reveals a prevailing mode of climate variability on a timescale of decades to centuries. The quasi-periodic ocean temperature fluctuations are linked to a physical mechanism involving changing sea-ice cover, ocean convection, and overturning in high northern latitudes.
In the second study of this thesis (Dahl et al., under review), a new reconstruction of atmospheric CO2 for the Devonian, based on CO2-sensitive carbon isotope fractionation in the earliest vascular plant fossils, suggests a much earlier drop in atmospheric CO2 concentration than previously reconstructed, followed by nearly constant CO2 concentrations during the Middle and Late Devonian. Our simulations for the Early Devonian with boundary conditions identical to those in our Devonian sensitivity study (Brugger et al., 2019), but with a low atmospheric CO2 concentration of 500 ppm, show no direct conflict with available proxy and paleobotanical data and confirm that under the simulated climatic conditions carbon isotope fractionation represents a robust proxy for atmospheric CO2. To explain the earlier CO2 drop, we suggest that early forms of vascular land plants had already strongly influenced weathering. This new perspective on the Devonian questions previous ideas about the climatic conditions and earlier explanations for the Devonian mass extinctions.
The second mass extinction investigated in this thesis is the end-Cretaceous mass extinction (66 million years ago), which differs from the Devonian mass extinctions in terms of the processes involved and the timescale on which the extinctions occurred. In the two studies presented here (Brugger et al., 2017, 2021), we model the climatic effects of the Chicxulub impact, one of the proposed causes of the end-Cretaceous extinction, for the first millennium after the impact. The light-dimming effect of stratospheric sulfate aerosols causes severe cooling, with a decrease of global annual mean surface air temperature of at least 26 °C and a recovery to pre-impact temperatures only after more than 30 years. The sudden surface cooling of the ocean induces deep convection, which brings nutrients from the deep ocean to the surface ocean via upwelling. Using an ocean biogeochemistry model, we explore the combined effect of ocean mixing and iron-rich dust originating from the impactor on the marine biosphere. As soon as light levels have recovered, we find a short but prominent peak in marine net primary productivity. This newly discovered mechanism could have toxic effects on marine near-surface ecosystems. Comparison of our model results to proxy data (Vellekoop et al., 2014, 2016, Hull et al., 2020) suggests that carbon release from the terrestrial biosphere is required in addition to the carbon dioxide that can be attributed to the target material. Surface ocean acidification caused by the addition of carbon dioxide and sulfur is only moderate. Taken together, the results indicate a significant contribution of the Chicxulub impact to the end-Cretaceous mass extinction by triggering multiple stressors for the Earth system.
Although the sixth extinction we face today is characterized by human intervention in nature, this thesis shows that we can gain many insights into future extinctions from studying past mass extinctions, such as the importance of the rate of change (Rothman, 2017), the interplay of multiple stressors (Gunderson et al., 2016), and changes in the carbon cycle (Rothman, 2017, Tierney et al., 2020).
Digital inclusion
(2021)
In this thesis, we tackle two social disruptions: recent refugee waves in Germany and the COVID-19 pandemic. We focus on the use of information and communication technology (ICT) as a key means of alleviating these disruptions and promoting social inclusion. As social disruptions typically lead to frustration and fragmentation, it is essential to ensure the social inclusion of individuals and societies during such times.
In the context of the social inclusion of refugees, we focus on the Syrian refugees who arrived in Germany from 2015 onwards, as they form a large and coherent refugee community. In particular, we address the role of ICTs in refugees’ social inclusion and investigate how different ICTs (especially smartphones and social networks) can foster refugees’ integration and social inclusion. In the context of the COVID-19 pandemic, we focus on the now-widespread unconventional working model of work from home (WFH). Our research here centers on the main constructs of WFH and the key differences in WFH experiences based on personal characteristics such as gender and parental status.
We reveal novel insights through four well-established research methods: literature review, mixed methods, qualitative method, and quantitative method. The results of our research have been published in the form of eight articles in major information systems venues and journals. Key results from the refugee research stream include the following: Smartphones represent a central component of refugee ICT use; refugees view ICT as a source of information and power; the social connectedness of refugees is strongly correlated with their Internet use; refugees are not relying solely on traditional methods to learn the German language or pursue further education; the ability to use smartphones anytime and anywhere gives refugees an empowering feeling of global connectedness; and ICTs empower refugees on three levels (community participation, sense of control, and self-efficacy).
Key insights from the COVID-19 WFH stream include: Gender and the presence of children under the age of 18 affect workers’ control over their time, technology usefulness, and WFH conflicts, while not affecting their WFH attitudes; and both personal and technology-related factors affect an individual’s attitude toward WFH and their productivity. Further insights are being gathered at the time of submitting this thesis.
This thesis contributes to the discussion within the information systems community regarding how to use different ICT solutions to promote the social inclusion of refugees in their new communities and foster an inclusive society. It also adds to the growing body of research on COVID-19, in particular on the sudden workplace transformation to WFH. The insights gathered in this thesis reveal theoretical implications and future opportunities for research in the field of information systems, practical implications for relevant stakeholders, and social implications related to the refugee crisis and the COVID-19 pandemic that must be addressed.
We investigate how inviting students to set task-based goals affects usage of an online learning platform and course performance. We design and implement a randomized field experiment in a large mandatory economics course with blended learning elements. The low-cost treatment induces students to use the online learning system more often and more intensively, and to begin exam preparation earlier. Treated students perform better in the course than the control group: they are 18.8% (0.20 SD) more likely to pass the exam and earn 6.7% (0.19 SD) more points on the exam. There is no evidence that treated students spend significantly more time; rather, they tend to shift to more productive learning methods. The heterogeneity analysis suggests that higher treatment effects are associated with higher levels of behavioral bias but also with poor early course behavior.
Foresight in networks
(2021)
The goal of this dissertation is to contribute to the corporate foresight research field by investigating capabilities, practices, and challenges particularly in the context of interorganizational settings and networked organizations informed by the theoretical perspectives of the relational view and dynamic capabilities.
Firms are facing an increasingly complex environment and product and service landscapes that often require multiple organizations to collaborate on innovations and offerings. In the recent past, policy-makers have introduced public-private partnerships targeted at supporting such collaboration. One example of such a partnership is the European Institute of Innovation and Technology (EIT) with its multiple Knowledge and Innovation Communities (KICs). The EIT was initiated by the European Commission in 2008 with the ambition of addressing grand societal challenges, driving the innovativeness of European companies, and supporting systemic change. The resulting network organizations are managed similarly to corporations, with managers, boards, and firm-like governance structures. EIT Digital, one of the EIT KICs, is the central case of this work.
Research in this dissertation was based on the expectation that corporate foresight activities will increasingly be embedded in such interorganizational settings and a) can draw on such settings for their own benefit and b) may contribute to shared visions, trust building, and planning in these network organizations. In this dissertation, the EIT Digital (formerly EIT ICT Labs) is the central case, supplemented with insights from three additional cases. I draw on the rich theoretical understanding of the resource-based view, dynamic capabilities, and particularly the relational view to move the discussion in the field of corporate foresight—defined as foresight in organizations, in contrast to foresight with a macro-economic perspective—towards a relational understanding. Further, I use and revisit Rohrbeck’s Maturity Model for the Future Orientation of Firms as a conceptual frame for corporate foresight in interorganizational settings. The analyses—available as four individual publications complemented by one additional chapter—are designed as exploratory case studies based on multiple data sources, including an interview series with 49 persons, two surveys (N=54, n=20), three supplementary interviews, access to key documents and presentations, and observation through participation in meetings and activities of the EIT Digital. This research setting allowed me to contribute to corporate foresight research and practice by 1) integrating relational constructs primarily drawn from the relational view and dynamic capabilities research into the corporate foresight research stream, 2) exploring and understanding the capabilities that are required for corporate foresight in interorganizational and networked organizations, 3) discussing and extending the Maturity Model for network organizations, and 4) supporting individual organizations in tying their foresight systems effectively to networked foresight systems.
While the last few decades have seen impressive improvements in several areas of Natural Language Processing, asking a computer to make sense of the discourse of utterances in a text remains challenging. There are several different theories that aim to describe and analyse the coherent structure that a well-written text exhibits. These theories have varying degrees of applicability and feasibility for practical use. Presumably the most data-driven of these theories is the paradigm that comes with the Penn Discourse TreeBank, a corpus of over 1 million words annotated for discourse relations. Any language other than English, however, can be considered a low-resource language when it comes to discourse processing.
This dissertation is about shallow discourse parsing (discourse parsing following the paradigm of the Penn Discourse TreeBank) for German. The limited availability of annotated data for German means that the potential of modern, deep-learning-based methods relying on such data is also limited. This dissertation explores to what extent machine-learning and more recent deep-learning-based methods can be combined with traditional, linguistic feature engineering to improve performance on the discourse parsing task. A pivotal role is played by connective lexicons, which exhaustively list the discourse connectives of a particular language along with some of their core properties.
To facilitate training and evaluation of the methods proposed in this dissertation, an existing corpus (the Potsdam Commentary Corpus) has been extended and additional data has been annotated from scratch. The approach to end-to-end shallow discourse parsing for German adopts a pipeline architecture and either presents the first results or improves over the state of the art for German for the individual sub-tasks of the discourse parsing task, which are, in processing order, connective identification, argument extraction, and sense classification. The end-to-end shallow discourse parser for German that has been developed for the purpose of this dissertation is open-source and available online.
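To illustrate the role a connective lexicon plays in the first pipeline step, here is a minimal, purely illustrative sketch of lexicon-based candidate identification. The tiny lexicon, the sense labels, and all names are stand-ins, not the lexicon or parser used in the dissertation; in practice, a classifier must still disambiguate connective from non-connective readings of each candidate.

```python
# Illustrative first pipeline step: flag tokens matching a German
# connective lexicon as candidates for later disambiguation (many
# entries, e.g. "und", are ambiguous between connective and
# non-connective readings).
LEXICON = {
    "weil": ["Contingency.Cause"],
    "aber": ["Comparison.Contrast"],
    "denn": ["Contingency.Cause"],
    "und": ["Expansion.Conjunction"],
}

def connective_candidates(tokens):
    """Return (position, token, possible senses) for lexicon matches."""
    return [
        (i, tok, LEXICON[tok.lower()])
        for i, tok in enumerate(tokens)
        if tok.lower() in LEXICON
    ]

sentence = "Er blieb zu Hause , weil es regnete .".split()
for pos, token, senses in connective_candidates(sentence):
    print(pos, token, senses)   # -> 5 weil ['Contingency.Cause']
```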
In the course of writing this dissertation, work has been carried out on several connective lexicons in different languages. Given their central role and demonstrated usefulness for the methods proposed in this dissertation, strategies are discussed for creating or further developing such lexicons for a particular language, and suggestions are made on how to further increase their usefulness for shallow discourse parsing.
Major challenges during geothermal exploration and exploitation include the structural-geological characterization of the geothermal system and the application of sustainable monitoring concepts to explain changes in a geothermal reservoir during production and/or reinjection of fluids. In the absence of sufficiently permeable reservoir rocks, faults and fracture networks are preferred drilling targets because they can facilitate the migration of hot and/or cold fluids. In volcanic-geothermal systems, considerable amounts of gas can be released at the earth’s surface, often in relation to these fluid-releasing structures.
In this thesis, I developed and evaluated different methodological approaches and measurement concepts to determine the spatial and temporal variation of several soil gas parameters and thus understand the structural control on fluid flow. In order to validate their potential as innovative geothermal exploration and monitoring tools, these methodological approaches were applied to three different volcanic-geothermal systems. At each site, an individual survey design was developed to address the site-specific questions.
The first study presents results of the combined measurement of CO2 flux, ground temperatures, and isotope ratios (δ13CCO2, 3He/4He) across the main production area of the Los Humeros geothermal field, aimed at identifying locations with a connection to its supercritical (T > 374 °C and P > 221 bar) geothermal reservoir. The systematic and large-scale (25 × 200 m) CO2 flux scouting survey proved to be a fast and flexible way to identify areas of anomalous degassing. Subsequent sampling with high-resolution surveys revealed the actual extent and heterogeneous pattern of the anomalous degassing areas. These were related to the internal fault hydraulic architecture and allowed us to assess favourable structural settings for fluid flow, such as fault intersections. Finally, areas of previously unknown structurally controlled permeability with a connection to the superhot geothermal reservoir were identified, which represent promising targets for future geothermal exploration and development.
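For reference, the CO2 flux in such closed-chamber (accumulation chamber) surveys is commonly obtained from the initial rate of concentration increase inside the chamber. This is the generic textbook relation, not the specific instrument calibration used in the study:

```latex
% Accumulation-chamber CO2 flux: V = chamber volume, A = footprint area,
% dC/dt = initial rate of CO2 concentration increase; a molar-density
% factor converts from ppm/s to mol m^-2 s^-1.
F_{\mathrm{CO_2}} \;=\; \frac{V}{A}\,\frac{\mathrm{d}C}{\mathrm{d}t}
```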
In the second study, I introduce a novel monitoring approach that examines the variation of CO2 flux to track changes in the reservoir induced by fluid reinjection. To this end, an automated, multi-chamber CO2 flux system was deployed across the damage zone of a major normal fault crossing the Los Humeros geothermal field. Based on the results of the CO2 flux scouting survey, a suitable site was selected that had a connection to the geothermal reservoir, as indicated by hydrothermal CO2 degassing and hot ground temperatures (> 50 °C). The results revealed a response of gas emissions to changes in reinjection rates within 24 h, demonstrating active hydraulic communication between the geothermal reservoir and the earth’s surface. This is a promising monitoring strategy that provides nearly real-time, in-situ data about changes in the reservoir and allows operators to react promptly to unwanted changes (e.g., pressure decline, seismicity).
The third study presents results from the Aluto geothermal field in Ethiopia, where an area-wide, multi-parameter analysis consisting of measurements of CO2 flux, 222Rn and 220Rn activity concentrations, and ground temperatures was conducted to detect hidden permeable structures. 222Rn and 220Rn activity concentrations are evaluated as soil gas parameters complementary to CO2 flux, to investigate their potential for understanding tectono-volcanic degassing. The combined measurement of all parameters enabled the development of soil gas fingerprints, a novel visualization approach. Depending on the magnitude of the gas emissions and their migration velocities, the study area was divided into volcanic (heat), tectonic (structures), and volcano-tectonic dominated areas. Based on these concepts, the volcano-tectonic dominated areas, where hot hydrothermal fluids migrate along permeable faults, present the most promising targets for future geothermal exploration and development in this geothermal field. Two such areas, which have not yet been targeted for geothermal exploitation, were identified in the south and south-east. Furthermore, two previously unknown areas of structurally controlled permeability could be identified from the 222Rn and 220Rn activity concentrations.
Finally, the fourth study presents a novel measurement approach to detect structurally controlled CO2 degassing in the Ngapouri geothermal area, New Zealand. For the first time, the tunable diode laser (TDL) method was applied in a low-degassing geothermal area to evaluate its potential as a geothermal exploration method. Although the sampling approach is based on profile measurements, which leads to low spatial resolution, the results showed a link between known or inferred faults and increased CO2 concentrations. Thus, the TDL method proved successful in determining structurally controlled permeability, even in areas where no obvious geothermal activity is present. Once an area of anomalous CO2 concentrations has been identified, the survey can easily be complemented by CO2 flux grid measurements to determine the extent and orientation of the degassing segment.
With the results of this work, I was able to demonstrate the applicability of systematic and area-wide soil gas measurements for geothermal exploration and monitoring purposes. In particular, the combination of different soil gases measured with different measurement networks enables the identification and characterization of fluid-bearing structures, an approach that has not yet been used or tested as standard practice. The studies present efficient and cost-effective workflows and demonstrate a hands-on approach to successful and sustainable exploration and monitoring of geothermal resources, which minimizes the resource risk during geothermal project development. Finally, to advance the understanding of the complex structure and dynamics of geothermal systems, a combination of comprehensive and cutting-edge geological, geochemical, and geophysical exploration methods is essential.
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th, 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information: how black holes and neutron stars are formed, what neutron stars are composed of, and how the Universe expands; they also allow testing general relativity in the highly-dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet, this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key aspect of GW searches are waveform models, which encapsulate our best predictions for the gravitational radiation under a certain set of parameters, and which need to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, as well as the development of the Laser Interferometer Space Antenna (LISA), demand accurate waveform models. While available models are sufficient to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, those for sources observed by both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetry in the masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signals’ parameters due to the presence of a foreground of many sources that overlap in the frequency band. This is recognized as one of the biggest challenges for the analysis of future detectors’ data, since such biases might hinder the extraction of important astrophysical and cosmological information. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetry in the masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
For the first task, we focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN; weak-field, small velocities) approximation that is most natural for the bound orbits that are routinely detected in GW searches. However, two other approximations have risen in prominence: the post-Minkowskian (PM; weak-field only) approximation, natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation, typical of binaries in which the mass of one body is much bigger than that of the other. These are most appropriate for binaries with high asymmetry in the masses, which challenge current waveform models. Moreover, they allow one to “cover” regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. The analytical approximations to the relativistic two-body problem can synergically be included within the effective-one-body (EOB) formalism, in which the two-body information from each approximation can be recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resulting models can cover both the low-spin comparable-mass binaries that are routinely detected and the ones that challenge current models. The first part of this thesis is dedicated to a study of how best to incorporate information from the PN, PM, SMR, and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are when compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We show that PM information has the potential to improve currently-employed models for LIGO and Virgo, especially if recast within the EOB formalism. This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of the PM and SMR approximations can be employed to access previously-unknown PN orders, deriving the third subleading PN dynamics for spin-orbit and (aligned) spin1-spin2 couplings. These new results can be included in the EOB models currently used in GW searches and parameter-estimation studies, thereby improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one, as usually done). We show in detail how this is done without incurring the divergences that had affected previous attempts, and compare the resulting model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models compare much better than the PN models, suggesting that SMR-informed EOB models might be the key to modelling binaries in the future.
For the second task of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out.
We illustrate the formalism with simple (yet realistic) LISA sources, and demonstrate its validity against Monte-Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
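For reference, the EOB formalism discussed above rests on a standard energy map (Buonanno & Damour, 1999) relating the real two-body Hamiltonian to that of an effective particle of reduced mass μ in a deformed black-hole background. This is textbook background added here for orientation, not a result of the thesis:

```latex
% Standard EOB energy map: M = m_1 + m_2 is the total mass,
% mu = m_1 m_2 / M the reduced mass, nu = mu / M the symmetric mass ratio.
H_{\mathrm{EOB}} \;=\; M c^{2}\,\sqrt{1 + 2\nu\left(\frac{H_{\mathrm{eff}}}{\mu c^{2}} - 1\right)}
```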
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating directly on 3D point clouds without an intermediate representation.
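The abstract does not give formulas for these metrics; one frequently used covariance-based formulation (an assumption here, not necessarily the thesis's definition) derives them from the eigenvalues of each point's local neighbourhood, as sketched below.

```python
# Sketch of common covariance-based per-point metrics; the eigenvalue
# definitions follow one widely used convention and may differ from
# the thesis's implementation.
import numpy as np
from scipy.spatial import cKDTree

def local_metrics(points, k=16):
    """Planarity and verticality per point from k-nearest neighbourhoods."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    planarity = np.empty(len(points))
    verticality = np.empty(len(points))
    for i, nb in enumerate(idx):
        nhood = points[nb] - points[nb].mean(axis=0)
        cov = nhood.T @ nhood / k
        eigval, eigvec = np.linalg.eigh(cov)   # ascending: l3 <= l2 <= l1
        l3, l2, l1 = eigval
        planarity[i] = (l2 - l3) / l1          # ~1 for planar patches
        normal = eigvec[:, 0]                  # eigenvector of smallest eigenvalue
        verticality[i] = 1.0 - abs(normal[2])  # ~1 for vertical surfaces
    return planarity, verticality

pts = np.random.default_rng(2).uniform(size=(1000, 3))  # toy point cloud
p, v = local_metrics(pts)
```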
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases allow taking advantage of established computer vision methods to efficiently analyze images rendered from mobile mapping data. The two presented semantic classification methods working directly on 3D point clouds are use-case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine learning-based method supports arbitrary semantic classes but requires training the network with ground truth data. Both methods can be used in combination to gradually build this ground truth, with manual corrections made via a respective annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based. Thus, there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate from medical education and have, in a number of studies, been demonstrated to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, yet few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity in portraying a mental disorder, namely depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III have been accepted for publication, and Paper II is under review after a minor revision.
First, Paper I develops and validates an observer-based rating scale to assess SP authenticity in psychotherapeutic contexts. Based on the preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic when they receive an in-depth role script than when they receive only basic information on the patient case. A randomised controlled study design was implemented, and the hypothesis was confirmed. Consequently, when engaging SPs, an in-depth role script with details, e.g., on the nonverbal behaviour and feelings of the patient, should be provided.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients, and it therefore suggests that, with proper training, SPs are a promising tool for psychotherapy training.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
Natural products have proved to be a major resource in the discovery and development of many pharmaceuticals that are in use today. There is a wide variety of biologically active natural products that contain conjugated polyenes or benzofuran structures. Therefore, new synthetic methods for the construction of such building blocks are of great interest to synthetic chemists. The recently developed one-pot tethered ring-closing metathesis approach allows for the formation of Z,E-dienoates with high stereoselectivity. The extension of this method with a Julia-Kocienski olefination protocol would allow for the formation of conjugated trienes in a stereoselective manner. This strategy was applied in the total synthesis of the conjugated-triene-containing (+)-bretonin B. Additionally, investigations of cross metathesis using methyl-substituted olefins were pursued. This methodology was applied, as a one-pot cross metathesis/ring-closing metathesis sequence, in the total synthesis of the benzofuran-containing 7-methoxywutaifuranal. Finally, the design and synthesis of a catalyst for stereoretentive metathesis in aqueous media was investigated.
Biodiversity decline causes a loss of functional diversity, which threatens ecosystems through a dangerous feedback loop: this loss may hamper ecosystems’ ability to buffer environmental changes, leading to further biodiversity losses. In this context, the increasing frequency of human-induced excessive nutrient loading causes major problems in aquatic systems. Previous studies investigating how functional diversity influences the response of food webs to disturbances have mainly considered systems with at most two functionally diverse trophic levels. We investigated the effects of functional diversity on robustness, that is, resistance, resilience, and elasticity, using a tritrophic—and thus more realistic—plankton food web model. We compared a non-adaptive food chain with no diversity within the individual trophic levels to a more diverse food web with three adaptive trophic levels. Species fitness differences were balanced through trade-offs between defense/growth rate for prey and selectivity/half-saturation constant for predators. We showed that the resistance, resilience, and elasticity of tritrophic food webs decreased with larger perturbation sizes and depended on the state of the system when the perturbation occurred. Importantly, we found that a more diverse food web was generally more resistant and resilient, but its elasticity was context-dependent. In particular, functional diversity reduced the probability of a regime shift toward a non-desirable alternative state. The basal-intermediate interaction consistently determined the robustness against a nutrient pulse despite the complex influence of the shape and type of the dynamical attractors. This relationship was strongly influenced by the diversity present and by the third trophic level. Overall, using a food web model of realistic complexity, this study confirms the destructive potential of the positive feedback loop between biodiversity loss and declining robustness by uncovering mechanisms leading to a decrease in resistance, resilience, and potentially elasticity as functional diversity declines.
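The abstract does not give the model equations; as a point of reference, a non-adaptive tritrophic chain of the kind used as the baseline here is commonly written with Holling type II functional responses. All symbols and functional forms below are assumptions for illustration, not taken from the study:

```latex
% Minimal non-adaptive tritrophic chain: B = basal, I = intermediate,
% T = top trophic level; Holling type II grazing assumed for illustration.
\begin{aligned}
\dot{B} &= r B \left(1 - \frac{B}{K}\right) - \frac{g_I B}{h_I + B}\, I \\
\dot{I} &= e_I \frac{g_I B}{h_I + B}\, I - \frac{g_T I}{h_T + I}\, T - d_I I \\
\dot{T} &= e_T \frac{g_T I}{h_T + I}\, T - d_T T
\end{aligned}
```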
Measuring migration 2.0
(2021)
Interest in human migration is at an all-time high, yet data to measure migration are notoriously limited. “Big data” or “digital trace data” have emerged as new sources of migration measurement, complementing “traditional” census, administrative, and survey data. This paper reviews the strengths and weaknesses of eight novel, digital data sources along five domains: reliability, validity, scope, access, and ethics. The review highlights the opportunities for migration scholars but also stresses the ethical and empirical challenges. It is intended to be of service to researchers and policy analysts alike and to help them navigate this new and increasingly complex field.
Janus droplets were prepared by vortex mixing of three immiscible liquids, i.e., olive oil, silicone oil, and water, in the presence of gold nanoparticles (AuNPs) in the aqueous phase and magnetite nanoparticles (MNPs) in the olive oil. The resulting Pickering emulsions were stabilized by a red-colored AuNP layer at the olive oil/water interface and by MNPs at the oil/oil interface. The core–shell droplets can be actuated by an external magnetic field. Surprisingly, an inner rotation of the silicone oil droplet is observed when MNPs are fixed at the inner droplet’s interface. This is the first example of controlled movement of the inner parts of complex double emulsions by magnetic manipulation via interfacially confined magnetic nanoparticles.
Rotational motions play a key role in measuring seismic wavefield properties. Using newly developed portable rotational instruments, it is now possible to directly measure rotational motions in a broad frequency range. Here, we investigated the instrumental self-noise and data quality in a huddle test in Fürstenfeldbruck, Germany, in August 2019. We compare the data from six rotational and three translational sensors. We studied the recorded signals using correlation, coherence analysis, and probabilistic power spectral densities. We sorted the coherent noise into five groups with respect to similarities in the frequency content and shape of the signals. These coherent noise signals were most likely caused by electrical devices, the dehumidifier system in the building, humans, and natural sources such as wind. We calculated self-noise levels through probabilistic power spectral densities and by applying the Sleeman method, a three-sensor technique. Our results from both methods indicate that self-noise levels are stable between 0.5 and 40 Hz. Furthermore, we recorded the ML 3.4 Dettingen earthquake of 29 August 2019. The source directions calculated for all sensors are realistic in comparison to the true back azimuth. We conclude that the five tested blueSeis-3A rotational sensors provide reliable and consistent results with respect to coherent noise, self-noise, and source direction. Hence, field experiments with single rotational sensors can be undertaken.
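The three-sensor technique referred to here estimates each sensor's incoherent self-noise from the auto- and cross-spectra of three collocated instruments recording the same ground motion (Sleeman et al., 2006). Below is a minimal sketch with synthetic data using scipy; windowing details, instrument response, and spectral-convention subtleties are omitted, so treat it as an illustration of the idea rather than a processing recipe.

```python
# Sketch of the Sleeman three-sensor self-noise estimate: for collocated
# sensors recording the same signal, N11 = P11 - P21 * P13 / P23, where
# Pij are (cross-)power spectral densities.
import numpy as np
from scipy.signal import csd, welch

fs, n = 100.0, 2**16
rng = np.random.default_rng(3)
signal = rng.normal(size=n)                 # common ground motion
x1 = signal + 0.05 * rng.normal(size=n)     # sensor 1 = signal + self-noise
x2 = signal + 0.05 * rng.normal(size=n)
x3 = signal + 0.05 * rng.normal(size=n)

nperseg = 4096
f, p11 = welch(x1, fs, nperseg=nperseg)     # auto-spectrum of sensor 1
_, p21 = csd(x2, x1, fs, nperseg=nperseg)   # cross-spectra between sensors
_, p13 = csd(x1, x3, fs, nperseg=nperseg)
_, p23 = csd(x2, x3, fs, nperseg=nperseg)

n11 = np.real(p11 - p21 * p13 / p23)        # self-noise PSD of sensor 1
```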