Centroid moment tensor inversion can provide insight into ongoing tectonic processes and active faults. In the Alpine mountains (central Europe), challenges result from the low signal-to-noise ratios of earthquakes with small to moderate magnitudes and from complex wave propagation effects through the heterogeneous crustal structure of the mountain belt. In this thesis, I make use of the temporary installation of the dense AlpArray seismic network (AASN) to establish a workflow for studying seismic source processes and to enhance the knowledge of Alpine seismicity. The cumulative thesis comprises four publications on the topics of large seismic networks, seismic source processes in the Alps, their link to tectonics and the stress field, and the inclusion of small-magnitude earthquakes in studies of active faults.
Dealing with hundreds of stations of the dense AASN requires automated assessment of data and metadata quality. I developed the open-source toolbox AutoStatsQ to perform automated data quality control. Its first application, to the AlpArray seismic network, revealed significant errors in amplitude gains and sensor orientations. A second application of the orientation test, based on Rayleigh wave polarization, to the Turkish KOERI network further illustrated its potential in comparison to a P-wave polarization method. Taking advantage of the gain and orientation results for the AASN, I tested different inversion settings and input data types to address the specific challenges of centroid moment tensor (CMT) inversions in the Alps. A comparative study was carried out to identify the best-performing procedures.
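Such orientation tests rest on the elliptical polarization of Rayleigh waves: the vertical component leads the radial component by 90 degrees, so the horizontal rotation angle that maximizes the correlation between the radial trace and the Hilbert-transformed vertical trace reveals the sensor misorientation. The following sketch illustrates only this principle; it is not AutoStatsQ code, and the function and variable names are invented for illustration.

```python
# Sketch of a Rayleigh-wave polarization orientation test (the general
# principle only; not the actual AutoStatsQ implementation).
import numpy as np
from scipy.signal import hilbert

def orientation_error(z, n, e, backazimuth_deg):
    """Estimate the sensor misorientation (degrees) from one event.

    z, n, e: surface-wave windows of the vertical/north/east traces.
    backazimuth_deg: catalog backazimuth of the event.
    For a Rayleigh wave the vertical leads the radial by 90 degrees,
    so the Hilbert-transformed vertical correlates with the radial.
    """
    zh = np.imag(hilbert(z))
    best_angle, best_cc = 0.0, -np.inf
    for trial in np.arange(-180.0, 180.0, 1.0):
        # Radial direction assuming a misorientation of `trial` degrees.
        phi = np.deg2rad(backazimuth_deg + 180.0 + trial)
        radial = n * np.cos(phi) + e * np.sin(phi)
        cc = np.corrcoef(zh, radial)[0, 1]
        if cc > best_cc:
            best_angle, best_cc = trial, cc
    return best_angle, best_cc
```

In practice, robust station corrections come from averaging such single-event estimates over many events with good azimuthal coverage.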
The application to four years of seismicity in the Alps (2016–2019) substantially increased the number of moment tensor solutions in the region. We provide moment tensor solutions down to magnitude Mw 3.1. Spatial patterns of typical focal mechanisms were analyzed in their seismotectonic context by comparing them to long-term seismicity, historical earthquakes, and observations of strain rates. Additionally, we use our MT solutions to investigate stress regimes and orientations along the Alpine chain. Finally, I addressed the challenge of including smaller-magnitude events in the study of active faults and source processes. The open-source toolbox Clusty was developed for the clustering of earthquakes based on waveforms recorded across a network of seismic stations. The similarity of waveforms reflects both the locations and the similarity of the source mechanisms. The clustering therefore offers the opportunity to identify earthquakes of similar faulting style, even when centroid moment tensor inversion is not possible due to low signal-to-noise ratios of surface waves or oversimplified velocity models. The toolbox is described through an application to the 2018 Zakynthos aftershock sequence, and I subsequently discuss its potential application to weak earthquakes (Mw < 3.1) in the Alps.
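To make the clustering idea concrete, the sketch below shows one common way to implement network-based waveform-similarity clustering: compute the maximum normalized cross-correlation for each event pair at every station, average across the network, and feed the resulting distance matrix to a density-based clusterer. This is a toy illustration, not Clusty's actual interface or algorithmic choices.

```python
# Toy sketch of waveform-similarity clustering across a station network
# (illustrative only; Clusty's options and interface differ).
import numpy as np
from scipy.signal import correlate
from sklearn.cluster import DBSCAN

def max_norm_cc(a, b):
    """Approximate maximum normalized cross-correlation of two traces
    (fixed normalization; exact sliding-window norm omitted for brevity)."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return correlate(a, b, mode="full").max()

def cluster_events(waveforms, eps=0.2, min_samples=3):
    """waveforms: array of shape (n_events, n_stations, n_samples)."""
    n_ev = waveforms.shape[0]
    dist = np.zeros((n_ev, n_ev))
    for i in range(n_ev):
        for j in range(i + 1, n_ev):
            # Network-averaged similarity of events i and j.
            ccs = [max_norm_cc(waveforms[i, s], waveforms[j, s])
                   for s in range(waveforms.shape[1])]
            dist[i, j] = dist[j, i] = 1.0 - float(np.mean(ccs))
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(dist)
    return labels  # -1 marks unclustered events
```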
The Lie group method, in combination with the Magnus expansion, is utilized to develop a universal method for solving a Sturm–Liouville problem (SLP) of any order with arbitrary boundary conditions. It is shown that the method is able to solve direct regular and some singular SLPs of even orders (tested up to order eight), with a mix of boundary conditions (including non-separable ones and finite singular endpoints), accurately and efficiently.
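As a minimal illustration of the underlying Lie-group idea (restricted to a second-order regular SLP with Dirichlet conditions, far simpler than the general scheme of the thesis), the problem -y'' + q(x)y = λy can be rewritten as a first-order system Y' = A(x)Y, propagated by exponentiating the first Magnus term on each subinterval, and its eigenvalues located by shooting:

```python
# Minimal sketch of a Magnus-expansion shooting solver for the
# second-order regular SLP  -y'' + q(x) y = lam * y,  y(0) = y(1) = 0.
# (The thesis treats higher orders and general boundary conditions;
# this illustrates only the basic Lie-group idea.)
import numpy as np
from scipy.linalg import expm

def shoot(lam, q, n_steps=400):
    """Propagate Y' = A(x) Y with the midpoint Magnus integrator and
    return y(1) for the initial condition y(0) = 0, y'(0) = 1."""
    h = 1.0 / n_steps
    Y = np.array([0.0, 1.0])
    for k in range(n_steps):
        x_mid = (k + 0.5) * h
        A = np.array([[0.0, 1.0],
                      [q(x_mid) - lam, 0.0]])
        Y = expm(h * A) @ Y   # exact exponential of the first Magnus term
    return Y[0]               # boundary mismatch y(1)

def eigenvalue(q, lam_lo, lam_hi, tol=1e-8):
    """Bisect on lam until y(1) changes sign (one eigenvalue bracketed)."""
    f_lo = shoot(lam_lo, q)
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        f_mid = shoot(mid, q)
        if f_lo * f_mid <= 0:
            lam_hi = mid
        else:
            lam_lo, f_lo = mid, f_mid
    return 0.5 * (lam_lo + lam_hi)

# Example: q = 0 gives lam_k = (k*pi)**2; the first eigenvalue is ~9.8696.
print(eigenvalue(lambda x: 0.0, 1.0, 15.0))
```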
The present technique is successfully applied to overcome the difficulties in finding suitable sets of eigenvalues so that the inverse SLP can be solved effectively.
Next, a concrete implementation of the inverse Sturm–Liouville algorithm proposed by Barcilon (1974) is provided. Furthermore, the computational feasibility and applicability of this algorithm for solving inverse Sturm–Liouville problems of order n = 2, 4 are verified successfully. It is observed that the method succeeds even in the presence of significant noise, provided that the assumptions of the algorithm are satisfied.
In conclusion, this work provides methods that can be adapted successfully for solving a direct (regular/singular) or inverse SLP of an arbitrary order with arbitrary boundary conditions.
The aim of the doctoral project was to answer the question of whether structural word-initial noun capitalization, which apart from German is otherwise found only in Luxembourgish, has a function that benefits the reader. The overriding hypothesis was that an advantage arises because the parafoveal perception of the capital letter activates a syntactic category, namely the head of a noun phrase. This perception from the corner of the eye should make it possible to preprocess the upcoming noun. As a result, sentence processing should be facilitated, which should ultimately be reflected in overall faster reading times and shorter fixation durations.
The project comprises three studies, some with several participant groups:
Study 1:
Study design: semantic priming using garden-path sentences, intended to reveal the functionality of noun capitalization for the reader
Participant group: German natives reading German
Study 2:
Study design: same design as Study 1, but in English
Participant groups:
English natives without any knowledge of German, reading English
English natives who regularly read German, reading English
German natives with high proficiency in English, reading English
Study 3:
Study design: influence of noun frequency on potential preprocessing, using the boundary paradigm; study languages: German and English
Participant groups:
German natives reading German
English natives without any knowledge of German, reading English
German natives with high proficiency in English, reading English
Brief summary: Noun capitalization clearly has an impact on sentence processing in both German and English. However, a substantial, decisive advantage for the reader could not be confirmed.
Compound values are not universally supported in virtual machine (VM)-based programming systems and languages. However, providing data structures with value characteristics can be beneficial. On the one hand, programming systems and languages can adequately represent physical quantities with compound values and avoid inconsistencies, for example in the representation of large numbers. On the other hand, just-in-time (JIT) compilers, which are often found in VMs, can rely on the fact that compound values are immutable, an important property for optimizing programs. Compound values therefore have an optimization potential that can be put to use by implementing them in VMs in a way that is efficient in memory usage and execution time. Yet optimized compound values in VMs face certain challenges: to maintain consistency, it should not be observable by the program whether compound values are represented in an optimized way by the VM; an optimization should take into account that the usage of compound values can exhibit certain patterns at run-time; and value-incompatible properties necessitated by implementation restrictions should be kept to a minimum.
We propose a technique to detect and compress common patterns of compound value usage at run-time to improve memory usage and execution speed. Our approach identifies patterns of frequent compound value references and introduces abbreviated forms for them. Thus, it is possible to store multiple inter-referenced compound values in an inlined memory representation, reducing the overhead of metadata and object references. We extend our approach by a notion of limited mutability, using cells that act as barriers for our approach and provide a location for shared, mutable access with the possibility of type specialization. We devise an extension to our approach that allows us to express automatic unboxing of boxed primitive data types in terms of our initial technique. We show that our approach is versatile enough to express another optimization technique that relies on values, such as Booleans, that are unique throughout a programming system. Furthermore, we demonstrate how to re-use learned usage patterns and optimizations across program runs, thus reducing the performance impact of pattern recognition.
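As a loose, language-level caricature of the inlining idea (the thesis operates inside VMs; the class and layout below are invented purely for illustration), a frequently recurring pattern of inter-referenced compound values can be stored in one flat buffer, with the compound values rematerialized transparently on access:

```python
# Hypothetical, simplified illustration of inlining inter-referenced
# immutable compound values; names and layout are invented.
from array import array

class InlinedPoints:
    """Stores a frequently seen pattern -- a sequence of (x, y) pairs --
    as one flat numeric buffer instead of N boxed tuple objects,
    eliminating per-tuple headers and reference indirections."""

    def __init__(self, points):
        self._buf = array("d")  # contiguous, unboxed storage
        for x, y in points:
            self._buf.append(x)
            self._buf.append(y)

    def __getitem__(self, i):
        # Rematerialize the compound value on access; the program
        # cannot observe that it was stored in compressed form.
        return (self._buf[2 * i], self._buf[2 * i + 1])

    def __len__(self):
        return len(self._buf) // 2

pts = InlinedPoints([(0.0, 1.0), (2.5, 3.5)])
assert pts[1] == (2.5, 3.5)  # behaves like a sequence of tuples
```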
We show in a best-case prototype that the implementation of our approach is feasible and can also be applied to general-purpose programming systems, namely implementations of the Racket language and Squeak/Smalltalk. In several micro-benchmarks, we found that our approach can effectively reduce memory consumption and improve execution speed.
This study examines how actors plan and carry out their learning process, with the main focus on the use of learning strategies. The question is which strategies professional learners employ to achieve the secure command of their lines required for their profession, not how learning success can be optimized.
The literature review made clear that current studies on adult learning are situated above all in profession-specific contexts and concern the acquisition of competencies, problem-solving strategies, and social participation. The learning of actors, however, is not driven by an intention to change behavior or to acquire specific knowledge.
For actors, performing is part of their professional culture. Given that precise factual knowledge is of decisive importance as a basis for competent, convincing presentation, the results of the study are also relevant for professional groups that have to appear in public, such as priests, lawyers, and teachers. The same applies to pupils and students who have to give talks and/or present papers.
For the empirical investigation, twelve renowned actors were interviewed using problem-centered interviews, followed by a qualitative content analysis.
The analysis of the data demonstrates a clear connection between the body and speech practice. It likewise shows how important movement is for the learning process. Results were obtained with respect to cognitive, metacognitive, and resource-oriented strategies, with the learning environment and learning with colleagues proving to be of decisive importance.
The propagation of test fields, such as electromagnetic, Dirac, or linearized gravitational fields, on a fixed spacetime manifold is often studied using the geometrical optics approximation. In the limit of infinitely high frequencies, the geometrical optics approximation provides a conceptual transition between the test field and an effective point-particle description. The corresponding point particles, or wave rays, coincide with the geodesics of the underlying spacetime. For most astrophysical applications of interest, such as the observation of celestial bodies, gravitational lensing, or the observation of cosmic rays, the geometrical optics approximation and the effective point-particle description represent a satisfactory theoretical model. However, the geometrical optics approximation gradually breaks down as test fields of finite frequency are considered.
In this thesis, we consider the propagation of test fields on spacetime, beyond the leading-order geometrical optics approximation. By performing a covariant Wentzel-Kramers-Brillouin analysis for test fields, we show how higher-order corrections to the geometrical optics approximation can be considered. The higher-order corrections are related to the dynamics of the spin internal degree of freedom of the considered test field. We obtain an effective point-particle description, which contains spin-dependent corrections to the geodesic motion obtained using geometrical optics. This represents a covariant generalization of the well-known spin Hall effect, usually encountered in condensed matter physics and in optics. Our analysis is applied to electromagnetic and massive Dirac test fields, but it can easily be extended to other fields, such as linearized gravity. In the electromagnetic case, we present several examples where the gravitational spin Hall effect of light plays an important role. These include the propagation of polarized light rays on black hole spacetimes and cosmological spacetimes, as well as polarization-dependent effects on the shape of black hole shadows. Furthermore, we show that our effective point-particle equations for polarized light rays reproduce well-known results, such as the spin Hall effect of light in an inhomogeneous medium, and the relativistic Hall effect of polarized electromagnetic wave packets encountered in Minkowski spacetime.
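In generic notation (not necessarily the conventions of the thesis), the structure of such a WKB analysis can be summarized as follows: the field is expanded in powers of a small parameter ε (an inverse frequency), the leading order yields the eikonal equation whose characteristics are null geodesics, and the next order couples the polarization to the rays.

```latex
% WKB ansatz for a test field F with phase S and amplitude expansion
% (generic notation, assumed for illustration):
F^\alpha = \mathrm{Re}\!\left[\big(A_0^\alpha + \epsilon A_1^\alpha
           + \mathcal{O}(\epsilon^2)\big)\, e^{\,iS/\epsilon}\right]
% Leading order (geometrical optics): the eikonal equation
g^{\mu\nu}\,\partial_\mu S\,\partial_\nu S = 0 ,
% whose characteristics, with momentum p_\mu = \partial_\mu S, are the
% null geodesics of the background metric. At order \epsilon, the
% transport equations for A_0 acquire polarization-dependent terms,
% giving O(\epsilon) spin-dependent corrections to the ray equations:
% the gravitational spin Hall effect.
```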
Today, the Mekong Delta in southern Vietnam is home to 18 million people. The delta also accounts for more than half of the country’s food production and 80% of its exported rice. Due to its low elevation, it is highly susceptible to fluvial and coastal flooding. Although extreme floods often result in excessive damage and economic losses, the annual flood pulse of the Mekong is vital to sustain agricultural cultivation and the livelihoods of millions of delta inhabitants.
Delta-wide risk management and adaptation strategies are required to mitigate the adverse impacts of extreme events while capitalizing on the benefits of floods. However, proper flood risk management has not been implemented in the Vietnamese Mekong Delta (VMD), because the quantification of flood damage is often overlooked and the risks are thus not quantified. So far, flood management has been focused exclusively on engineering measures, i.e. high- and low-dyke systems, aiming at flood-free or partial inundation control without any consideration of the actual risks or a cost-benefit analysis. Therefore, an analysis of future delta flood dynamics driven by these stressors is valuable to facilitate the transition from sole hazard control towards a risk management approach, which is more cost-effective and also robust against future changes in risk.
Building on these research gaps, this thesis investigates the current state and future projections of flood hazard, damage, and risk to rice cultivation, the most important economic activity in the VMD. The study quantifies the changes in risk and hazard brought about by the development of delta-based flood control measures in recent decades, and analyses the expected changes in risk driven by the changing climate, rising sea level, deltaic land subsidence, and the development of hydropower projects in the Mekong Basin. For this purpose, flood trend analyses and comprehensive hydraulic modelling were performed, together with the development of a concept to quantify flood damage and risk to rice cultivation.
The analysis of observed flood levels revealed strong and robust increasing trends in flood peak and duration downstream of the high-dyke areas, with a step change in 2000/2001, i.e. after the disastrous flood that initiated the high-dyke development. These changes stood in contrast to the negative trends detected upstream, suggesting that high-dyke development has shifted flood hazard downstream. The findings of the trend analysis were later confirmed by hydraulic simulations of the two recent extreme floods in 2000 and 2011, in which the hydrological boundaries and dyke system settings were interchanged.
However, a comparative analysis of these two extreme floods proved that the high-dyke system was not the only, and often not the main, cause of the shift in flood hazard. The high-dyke development was responsible for 20–90% of the observed changes in flood level between 2000 and 2011, with large spatial variance. The particular flood hydrographs of the two events had the highest contribution in the northern part of the delta, while the tidal level had a 2–3 times higher influence than the high-dyke development in the lower-central and coastal areas downstream of the high-dyke areas. The impact of the high-dyke development was highest in the areas immediately downstream of the high-dyke area, just south of the Cambodia–Vietnam border. The hydraulic simulations also confirmed that the concurrence of the flood peak with spring tides, i.e. high sea levels along the coast, substantially amplified the flood level and inundation in the central and coastal regions.
The risk assessment quantified the economic losses to rice cultivation at USD 25.0 million and USD 115 million (0.02–0.1% of the total GDP of Vietnam in 2011) for the 10-year and the 100-year floods, respectively, with an expected annual damage of about USD 4.5 million. A particular finding is that the flood damage was highly sensitive to flood timing: a 10-year event with an early peak, i.e. in late August or September, could cause as much damage as a 100-year event peaking in October. This finding underlines the importance of reliable early flood warning, which could substantially reduce the damage to rice crops and thus the risk.
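For orientation, the expected annual damage is the integral of damage over annual exceedance probability, EAD = ∫ D(p) dp. The sketch below approximates this integral from the two return-period losses quoted above, with an assumed damage-free 5-year threshold; with so few support points the estimate is necessarily crude and is not expected to reproduce the thesis's simulation-based value of USD 4.5 million.

```python
# Crude expected-annual-damage (EAD) estimate from return-period losses.
# The damage-free 5-year threshold is an assumption for illustration.
import numpy as np

rp = np.array([5.0, 10.0, 100.0])   # return periods in years
dmg = np.array([0.0, 25.0, 115.0])  # damage in million USD
p = 1.0 / rp                        # annual exceedance probabilities

# Integrate D(p) over p with the trapezoidal rule (p increasing),
# plus a constant-damage tail for events rarer than the 100-year flood.
order = np.argsort(p)
ps, ds = p[order], dmg[order]
ead = np.sum(np.diff(ps) * (ds[1:] + ds[:-1]) / 2.0) + ps[0] * ds[0]
print(f"EAD ~ USD {ead:.1f} million")
```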
The developed risk assessment concept was furthermore applied to investigate two high-dyke development alternatives currently under discussion among the administrative bodies in Vietnam, and also in public. The first option, favouring the utilization of the current high-dyke compartments as flood retention areas instead of rice cropping during the flood season, could reduce flood hazard and expected losses by 5–40%, depending on the region of the delta. By contrast, the second option, promoting the further extension of the areas protected by high dykes to facilitate third rice crop planting on a larger area, would triple the current expected annual flood damage. This finding challenges the expected economic benefit of triple rice cultivation, in addition to the already known reduction of nutrient supply by floodplain sedimentation and the resulting higher costs for fertilizers.
The economic benefits of the high-dyke and triple rice cropping system are further challenged by the changes in flood dynamics to be expected in the future. For the middle of the 21st century (2036–2065), effective sea-level rise was projected to increase the inundation extent by 20–27%. This corresponds to an increase in flood damage to rice crops of USD 26.0, 40.0, and 82.0 million in dry, normal, and wet years, respectively, compared to the baseline period 1971–2000.
Hydraulic simulations indicated that the planned massive development of hydropower dams in the Mekong Basin could potentially compensate for the increase in flood hazard and agricultural losses stemming from climate change. However, the benefits of dams in mitigating flood losses are highly uncertain, because a) the actual development of the dams is highly disputed, b) the operation of the dams is primarily targeted at power generation, not flood control, and c) this would require international agreements and cooperation, which are difficult to achieve in South-East Asia. The theoretical flood mitigation benefit is additionally challenged by a number of negative impacts of dam development, e.g. the disruption of floodplain inundation in normal, non-extreme flood years. Together with the certain reduction of the sediment and nutrient load to the floodplains, hydropower dams will drastically impair rice and agricultural production, the basis of the livelihoods of millions of delta inhabitants.
In conclusion, the VMD is expected to face increasing threats from tidally induced floods in the coming decades. Protection of the entire delta coastline solely with “hard” engineering flood protection structures is neither technically nor economically feasible; adaptation and mitigation actions are urgently required. Better control and reduction of groundwater abstraction is therefore strongly recommended as an immediate, high-priority action to reduce land subsidence and thus tidal flooding and salinity intrusion in the delta. Hydropower development in the Mekong Basin might offer some theoretical flood protection for the Mekong Delta, but due to uncertainties in the operation of the dams and a number of negative effects, dam development cannot be recommended as a strategy for flood management. The Vietnamese authorities are advised to properly maintain the existing flood protection structures and to develop flexible, risk-based flood management plans. In this context, the study showed that the high-dyke compartments can be utilized for emergency flood management in extreme events. For this purpose, a reliable flood forecast is essential, and the action plan should be codified in official documents and legislation to ensure commitment and consistency in implementation and operation.
Over the last decades, the rate of near-surface warming in the Arctic has been at least twice that elsewhere on our planet (Arctic amplification). However, the relative contribution of different feedback processes to Arctic amplification is a topic of ongoing research, including the role of aerosol and clouds. Lidar systems are well suited for the investigation of aerosol and optically thin clouds, as they provide vertically resolved information on fine temporal scales. Global aerosol models fail to converge on the sign of the Arctic aerosol radiative effect (ARE). In the first part of this work, the optical and microphysical properties of Arctic aerosol were characterized at the case-study level in order to assess the short-wave (SW) ARE. A long-range transport episode was investigated first. Geometrically similar aerosol layers were captured over three locations. Although the aerosol size distribution differed between Fram Strait (bi-modal) and Ny-Ålesund (fine mono-modal), the atmospheric column ARE was similar, which is related to the dominance of accumulation-mode aerosol. Over both locations, top-of-the-atmosphere (TOA) warming was accompanied by surface cooling.
Subsequently, the sensitivity of the ARE was investigated with respect to different aerosol and spring-time ambient conditions. A 10% change in the single-scattering albedo (SSA) induced larger ARE perturbations than a 30% change in the aerosol extinction coefficient. With respect to ambient conditions, the TOA ARE was more sensitive to changes in solar elevation than the surface ARE. Over dark surfaces the ARE profile was exclusively negative, while over bright surfaces a shift from negative to positive occurred above the aerosol layers. Consequently, the sign of the ARE can be highly sensitive in spring, since this season is characterized by transitional surface albedo conditions.
As the inversion of the aerosol microphysics is an ill-posed problem, the inferred aerosol size distribution of a low-tropospheric event was compared to the in-situ measured distribution. Both techniques revealed a bi-modal distribution, with good agreement in the total volume concentration. However, in terms of SSA a disagreement was found, with the lidar inversion indicating highly scattering particles and the in-situ measurements pointing to absorbing particles. The discrepancies could stem from assumptions in the inversion (e.g. wavelength-independent refractive index) and errors in the conversion of the in-situ measured light attenuation into absorption. Another source of discrepancy might be related to an incomplete capture of fine particles in the in-situ sensors. The disagreement in the most critical parameter for the Arctic ARE necessitates further exploration in the frame of aerosol closure experiments. Care must be taken in ARE modelling studies, which may use either the in-situ or lidar-derived SSA as input.
Reliable characterization of cirrus geometrical and optical properties is necessary for improving estimates of their radiative effect. In this respect, the detection of sub-visible cirrus is of special importance: the total cloud radiative effect (CRE) can be negatively biased if only the contributions of optically thin and opaque cirrus are considered. To this end, a cirrus retrieval scheme was developed, aiming at increased sensitivity to thin clouds. The cirrus detection was based on the wavelet covariance transform (WCT) method, extended by dynamic thresholds. The dynamic WCT exhibited high sensitivity to faint and geometrically thin cirrus layers (less than 200 m) that were partly or completely missed by the existing static method. The optical characterization scheme extended the Klett–Fernald retrieval by an iterative lidar ratio (LR) determination (constrained Klett). The iterative process was constrained by a reference value indicating the aerosol concentration beneath the cirrus cloud. Contrary to existing approaches, the aerosol-free assumption was not adopted; instead, the aerosol conditions were approximated by an initial guess. The inherent uncertainties of the constrained Klett were higher for optically thinner cirrus, but an overall good agreement was found with two established retrievals. Additionally, existing approaches relying on aerosol-free assumptions showed increased accuracy when the proposed reference value was adopted. The constrained Klett reliably retrieved the optical properties in all cirrus regimes, including upper sub-visible cirrus with a cloud optical depth (COD) down to 0.02.
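The Haar-based WCT measures the step-like contrast of the signal at each height on a chosen dilation scale: the mean signal in the half-window above a height minus the mean in the half-window below it, so that cloud bases and tops appear as extrema of the transform. The sketch below illustrates this transform together with a simple noise-scaled threshold that merely stands in for the dynamic thresholds of the thesis; function names and parameter values are illustrative assumptions.

```python
# Minimal sketch of the Haar wavelet covariance transform (WCT) for
# detecting cloud-layer boundaries in a lidar profile; the dynamic
# thresholding of the thesis is caricatured by a noise-scaled threshold.
import numpy as np

def wct(profile, dz, a):
    """Haar WCT of a range-corrected signal profile.

    profile: backscatter values on a uniform height grid (spacing dz).
    a: wavelet dilation (metres); returns W(b) on the same grid.
    """
    half = max(1, int(round(a / (2 * dz))))
    w = np.full(profile.size, np.nan)
    for i in range(half, profile.size - half):
        upper = profile[i:i + half].sum()
        lower = profile[i - half:i].sum()
        w[i] = (upper - lower) * dz / a
    return w

def detect_layers(profile, dz, a=120.0, k=3.0):
    """Flag candidate layer boundaries where |W| exceeds k times a
    local noise estimate (a stand-in for dynamic thresholds)."""
    w = wct(profile, dz, a)
    noise = np.nanstd(w[np.isfinite(w)][:50])  # near-range noise proxy
    return np.where(np.abs(w) > k * noise)[0]  # indices of candidates
```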
Cirrus is the only cloud type capable of inducing either TOA cooling or heating at daytime. Over the Arctic, however, the properties and CRE of cirrus are under-explored. In the final part of this work, long-term cirrus geometrical and optical properties were investigated for the first time over an Arctic site (Ny-Ålesund), employing the newly developed retrieval scheme. Cirrus layers over Ny-Ålesund appeared to be more absorbing in the visible spectral region than at lower latitudes and to comprise relatively more spherical ice particles. Such meridional differences could be related to differences in absolute humidity and ice nucleation mechanisms. The COD tended to decline for less spherical and smaller ice particles, probably due to reduced water vapor deposition on the particle surface. The cirrus optical properties showed only a weak dependence on ambient temperature and wind conditions.
Over the ten years of the analysis, no clear temporal trend was found, and the seasonal cycle was not pronounced. However, winter cirrus appeared under colder conditions and stronger winds; moreover, it was optically thicker, less absorbing, and consisted of relatively more spherical ice particles. A positive net CRE was revealed for a broad range of representative cloud properties and ambient conditions. A negative net CRE was estimated only for high COD (above 10) and over tundra, and this did not hold true over snow/ice surfaces. Consequently, the COD in combination with the surface albedo seems to play the most critical role in determining the sign of the CRE over the high European Arctic.
The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can promote the development of resistant bacteria in stables. Transmission through direct contact with animals and contamination of food has already been proven. The animals' excrement, combined with a bedding material, provides a further potential pathway of spread into the environment if it is used as organic manure on agricultural land. As most airborne bacteria are attached to particulate matter, the focus of this work is the atmospheric dispersal via the dust fraction.
Field measurements on arable land in Brandenburg, Germany, and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric dust-associated spread of antibiotic-resistant bacteria from agricultural soils fertilized with poultry manure. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion.
PM10 (particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as on the intensity of manure spreading by the manure spreader. During poultry manure application, PM10 emissions ranged between 0.05 kg ha⁻¹ and 8.37 kg ha⁻¹. For comparison, the subsequent land preparation contributed 0.35–1.15 kg ha⁻¹ of PM10 emissions. Manure particles were still part of the dust emissions, but they accounted for less than 1% of total PM10 emissions, owing to the dilution of the poultry manure in the soil after incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent incorporation, although the PM10 emissions of manure incorporation were larger than those of manure application for the non-dried manure variants.
Wind erosion leads to the preferential detachment of manure particles from sandy soils when poultry manure has recently been incorporated. Sorting effects between the low-density organic particles of manure origin and the soil particles of mineral origin were determined just above the threshold wind speed of 7 m s⁻¹. Depending on the wind speed, potential erosion rates between 101 and 854 kg ha⁻¹ were identified when 6 t ha⁻¹ of poultry manure was applied. Microbial investigation showed that manure bacteria were detached more easily from the soil surface during wind erosion, owing to their attachment to manure particles.
Although antibiotic-resistant bacteria (ESBL-producing E. coli) were still found in the poultry barns, no contamination with them could be detected in the manure, the fertilized soils, or the dust generated by manure application, land preparation, or wind erosion. Parallel studies within this project showed that storing poultry manure for a few days (36–72 h) is sufficient to inactivate ESBL-producing E. coli. Other antibiotic-resistant bacteria, i.e. MRSA and VRE, were found only sporadically in the stables and not at all in the dust. Therefore, based on the results of this work, the risk of a potential infection by dust-associated antibiotic-resistant bacteria can be considered low.
Deoxyribonucleic acid (DNA) nanostructures enable the attachment of functional molecules to nearly any unique location on their underlying structure. Due to their single-base-pair structural resolution, several ligands can be spatially arranged and closely controlled according to the geometry of their desired target, resulting in optimized binding and/or signaling interactions.
This dissertation covers three main projects, all of which use variations of functionalized DNA nanostructures that act as platforms for the oligovalent presentation of ligands. The purpose of this work was to evaluate the ability of DNA nanostructures to precisely display different types of functional molecules and thereby to enhance their efficacy according to the concept of multivalency. Moreover, functionalized DNA structures were examined for their suitability in functional screening assays. The developed DNA-based compound ligands were used to target structures in different biological systems.
One part of this dissertation attempted to bind pathogens with small modified DNA nanostructures. Pathogens like viruses and bacteria are known for their multivalent attachment to host cell membranes. By blocking, in an oligovalent manner, the receptors they use for recognition and/or fusion with their host, the objective was to impede their ability to adhere to and invade cells. For influenza A, only enhanced binding of oligovalent peptide-DNA constructs compared to the monovalent peptide could be observed, whereas in the case of respiratory syncytial virus (RSV), binding as well as blocking of the target receptors led to an increased inhibition of infection in vitro.
In the final part, the ability of chimeric DNA-peptide constructs to bind to and activate signaling receptors on the surface of cells was investigated. The specific binding of DNA trimers conjugated with up to three peptides to cells expressing the EphA2 receptor was evaluated in flow cytometry experiments. Subsequently, their ability to activate these receptors via phosphorylation was assessed. EphA2 phosphorylation was significantly increased by DNA trimers carrying three peptides compared to the monovalent peptide. As a result of activation, cells underwent characteristic morphological changes, "rounding up" and retracting their periphery.
The results obtained in this work comprehensively show the capability of DNA nanostructures to serve as stable, biocompatible, controllable platforms for the oligovalent presentation of functional ligands. Functionalized DNA nanostructures were used to enhance biological effects and as a tool for the functional screening of bioactivity. This work demonstrates that modified DNA structures have the potential to improve drug development and to help unravel the activation of signaling pathways.
Elucidating the molecular basis of enhanced growth in the Arabidopsis thaliana accession Bur-0
(2021)
The life cycle of flowering plants is a dynamic process that involves the successful passage through several developmental phases, and tremendous progress has been made in revealing the cellular and molecular regulatory mechanisms underlying these phases, morphogenesis, and growth. Although several key regulators of plant growth and developmental phase transitions have been identified in Arabidopsis, little is known about factors that become active during embryogenesis and seed development as well as during further postembryonic growth. Even less is known about accession-specific factors that determine plant architecture and organ size. Bur-0 has been reported as a natural Arabidopsis thaliana accession with exceptionally big seeds and a large rosette; its phenotype makes it an interesting candidate for studying growth and developmental aspects in plants. However, the molecular basis underlying this big phenotype remains to be elucidated. Thus, the general aim of this PhD project was to investigate and unravel the molecular mechanisms underlying the big phenotype of Bur-0.
Several natural Arabidopsis accessions and late-flowering mutant lines were analysed in this study, including Bur-0. Phenotypes were characterized by determining rosette size, seed size, flowering time, shoot apical meristem (SAM) size, and growth in different photoperiods during embryonic and postembryonic development. Our results demonstrate that Bur-0 stands out as an interesting accession combining larger rosettes, a larger SAM, a later-flowering phenotype, larger seeds, and also larger embryos. Interestingly, inter-accession crosses (F1) resulted in bigger seeds than the parental self-crossed accessions, particularly when Bur-0 was used as the female parental genotype, suggesting parental effects on seed size that might be maternally controlled. Furthermore, developmental-stage-based comparisons revealed that the large embryo size of Bur-0 is achieved during late embryogenesis and the large rosette size during late postembryonic growth. Interestingly, developmental phase progression analyses revealed that, from germination onwards, the developmental phases of postembryonic growth are prolonged in Bur-0, suggesting that the mechanisms regulating developmental phase progression are, in general, shared across developmental phases.
On the other hand, a detailed physiological characterization of different tissues at different developmental stages revealed accession-specific physiological and metabolic traits that underlie accession-specific phenotypes; in particular, more carbon resources during embryonic and postembryonic development were found in Bur-0, suggesting an important role of carbohydrates in the determination of the bigger Bur-0 phenotype. Additionally, differences in cellular organization, nuclear DNA content, and ploidy level were analyzed in different tissues and cell types, and we found that the large organ size of Bur-0 can be attributed mainly to its larger cells and to higher cell proliferation in the SAM, but not to a different ploidy level.
Furthermore, RNA-seq analysis of embryos at the torpedo and mature stages, as well as of SAMs at the vegetative and floral transition stages, from Bur-0 and Col-0 was conducted to identify accession-specific genetic determinants of plant phenotypes shared across tissues and developmental stages during embryonic and postembryonic growth. Potential candidate genes were identified, and further validation of the transcriptome data by expression analyses of candidate genes as well as known key regulators of organ size and growth during embryonic and postembryonic development confirmed that the high-confidence transcriptome datasets generated in this study are reliable for elucidating the molecular mechanisms regulating plant growth and accession-specific phenotypes in Arabidopsis.
Taken together, this PhD project contributes to the plant development research field by providing a detailed analysis of the mechanisms underlying plant growth and development at different levels of biological organization, focusing on Arabidopsis accessions with remarkable phenotypic differences. For this, the natural accession Bur-0 was an ideal outlier candidate, and different mechanisms at the organ and tissue, cell, metabolic, and transcript/gene expression levels were identified, providing a better understanding of the factors involved in plant growth regulation and of the mechanisms underlying different growth patterns in nature.
Bottom-up synthetic biology aims at understanding how a cell works by developing techniques to produce lipid-based vesicular structures as cellular mimics. The most common techniques for producing cellular mimics, or synthetic cells, are electroformation and swelling. However, these techniques cannot efficiently encapsulate macromolecules such as proteins, enzymes, DNA, or even liposomes acting as synthetic organelles. This creates a need for new techniques that circumvent this issue and bring the artificial cell closer to reality, making it possible to imitate a eukaryotic cell by encapsulating macromolecules. The aim of this thesis was to construct a cell system using giant unilamellar vesicles (GUVs) to reconstitute the mitochondrial molybdenum cofactor (Moco) biosynthetic pathway. This pathway is highly conserved among all life forms and is therefore known for its biological significance in disorders induced by its malfunctioning. Furthermore, the pathway itself is a multi-step enzymatic reaction that takes place in different compartments: initially, GTP in the mitochondrial matrix is converted to cPMP in the presence of cPMP synthase; the cPMP produced is then transported across the membrane to the cytosol, where it is converted by MPT synthase into MPT. This pathway offers the possibility to address the general challenges faced in the development of a synthetic cell: to encapsulate large biomolecules with good efficiency and greater control, and to evaluate the enzymatic reactions involved in the process.
For this purpose, an emulsion-based technique was developed and optimised to allow the rapid production of GUVs (~18 min) with high encapsulation efficiency (80%). This was made possible by optimizing various parameters, such as density, type of oil, centrifugation speed and time, lipid concentration, pH, temperature, and emulsion droplet volume. Furthermore, the method was optimised in microtiter plates for direct experimentation and visualization after GUV formation. Using this technique, the two steps, the formation of cPMP from GTP and the formation of MPT from cPMP, were encapsulated in different sets of GUVs to mimic the two compartments. Two independent fluorescence-based detection systems were established to confirm the successful encapsulation and conversion of the reactants. In parallel, the enzymes were produced by bacterial expression and characterized. Following the successful encapsulation and evaluation of the enzymatic reactions, cPMP transport across the mitochondrial membrane was mimicked using GUVs with a complex mitochondrial lipid composition. It was found that the interaction of cPMP with the lipid bilayer results in transient pore formation and leakage of internal contents.
Overall, it can be concluded that in this thesis a novel technique has been optimised for the fast production of functional synthetic cells. The individual enzymatic steps of the Moco biosynthetic pathway have been successfully implemented and quantified within these cellular mimics.
On the influence of adaptivity on the perception of complexity in human-technology interaction
(2021)
We live in a society shaped by a constant desire for innovation and progress. Consequences of this desire are the ever-advancing digitalization and informational networking of all areas of life, which lead to increasingly complex socio-technical systems. The goals of these systems include supporting people, improving their living conditions or quality of life, and expanding human possibilities. However, new complex technical systems do not have only positive social and societal effects. There are often undesirable side effects that become visible only in use, and both designers and users of complex networked technologies often feel disoriented. The consequences can range from declining acceptance to a complete loss of trust in networked software systems. As complex applications, and with them increasingly complex human-technology interactions, gain ever more relevance, it is all the more important to regain orientation. To do so, we must first identify the elements that contribute to complexity in the interaction with networked socio-technical systems and thus create a need for orientation.
This thesis aims to enable structured reflection on the complexity of networked socio-technical systems throughout the entire design process. To this end, a definition of complexity and complex systems is first developed that goes beyond the computer science understanding of complexity (i.e., the complicatedness of problems, algorithms, or data). The focus is instead on socio-technical interaction with and within complex networked systems. Based on this definition, an analysis tool is developed that makes it possible to render the complexity in the interaction with socio-technical systems visible and describable.
One area in which networked socio-technical systems are increasingly being adopted is that of digital educational technologies. Adaptive educational technologies in particular have been attributed great potential in recent decades. Two adaptive teaching and training systems are therefore examined as examples using the analysis tool developed in this thesis, with particular attention to the influence of adaptivity on the complexity of human-technology interaction situations. In empirical studies, the experiences of designers and users of these adaptive systems are examined in order to identify the decisive criteria for complexity. In this way, recurring questions of orientation in the development of adaptive educational technologies can be uncovered, and interaction situations perceived as complex can be identified. These situations show where, owing to the complexity of the system, users' established everyday routines no longer suffice to fully grasp the consequences of interacting with the system. This knowledge can help both designers and users to deal better with the inherent complexity of modern educational technologies in the future.
Monoclonal antibodies are essential tools in modern laboratory analytics as well as in medical therapy and diagnostics. The production of monoclonal antibodies is a time-consuming and labor-intensive process. Conventional methods rely on the immunization of laboratory animals, which can take several months. Subsequently, the antibody-producing B lymphocytes, or their antibody genes, are isolated and examined in screening procedures to identify suitable binders.
Transferring the humoral immune response to an in vitro environment shortens the process and circumvents the need for in vivo immunization. However, reproducing the complex interplay of all the immune cells involved in vitro has proven difficult. The focus of this work was therefore the realization of a simplified in vitro immunization that concentrates on the protagonists of antibody production: the B lymphocytes. In addition, a permanent cell line was to be established that could be used for antibody production and would replace the use of primary cells.
In the first part of the work, a protocol for the in vitro immunization of murine B lymphocytes was established. In preliminary experiments, the optimal conditions for the antigen-specific activation of purified splenic B lymphocytes from non-immunized mice were determined. For this purpose, the influence of various stimuli on the production of unspecific and specific antibodies was examined. A combination of the model antigen VP1 (hamster polyomavirus coat protein 1), an anti-CD40 antibody, interleukin 4 (IL-4), and lipopolysaccharide (LPS) or IL-7 demonstrably induced an antigen-specific antibody response in vitro. Rapid proliferation and the expression of characteristic activation markers on the cell surface were detected as indicators of successful B-lymphocyte activation following in vitro stimulation. In a time series over ten days, the highest relative concentration of antigen-specific IgG antibodies in the culture supernatant of the stimulated cells was detected on day ten of the in vitro immunization.
As a next step, a permanent cell line was to be produced that could be used for the previously established in vitro immunization instead of primary B lymphocytes. For this purpose, retroviral vectors were produced that were intended to manipulate the proliferation behavior of murine B lymphocytes, or their precursor cells, by transferring various oncogenes. Retroviruses with doxycycline-inducible expression cassettes containing the oncogenes c-myc, Bcl-2, and Bcl-xL and the fusion gene NUP98-HOXB4 were generated. A test cell line was successfully transduced with the produced retroviruses, and the functionality of the viruses was demonstrated in various assays. The transferred genes were detected in the test cell line at the DNA level, or the overexpression of the corresponding proteins was detected by Western blot. Finally, B lymphocytes, or their immature precursor cells, were transduced with the generated retroviruses and co-cultured with bone-marrow-like stromal cells. So far, no cell line or long-term culture could be established from any of the transduced preparations.
In the final part of the work, the effectiveness and transferability of the previously established protocol for the in vitro immunization of murine B lymphocytes were demonstrated using various antigens. Specific IgG responses were induced in vitro against VP1, against Legionella pneumophila, and against the protein Mip, a peptide of which was integrated into the VP1 used for immunization. The stimulated B lymphocytes were transformed into permanent antibody-producing cell lines by fusion with myeloma cells.
Several hybridoma cell lines were generated that produce specific IgG antibodies against VP1 or Mip. The generated antibodies specifically bound the corresponding antigen both in Western blot and in ELISA (enzyme-linked immunosorbent assay).
The in vitro immunization established here offers an effective alternative to previous methods for producing specific antibodies. It replaces the immunization of laboratory animals and considerably reduces the time required. In combination with hybridoma technology, the in vitro immunized cells can be used, as demonstrated here, for the generation of hybridoma cell lines and the production of monoclonal antibodies. In order to replace the use of laboratory animals in this method with an adequate permanent cell line, the genetic modification of B lymphocytes and immature hematopoietic cells must be optimized. The results provide a basis for a universal, species-independent methodology for antibody production and for the establishment of an ideal, animal-free in vitro immunization.
Flooding is a major problem in many parts of the world, including Europe. It occurs mainly due to extreme weather conditions (e.g. heavy rainfall and snowmelt), and the consequences of flood events can be devastating. Flood risk is commonly defined as a combination of the probability of an event and its potential adverse impacts. It therefore covers three major dynamic components: hazard (the physical characteristics of a flood event), exposure (the people and physical environment exposed to the flood), and vulnerability (the susceptibility of the elements at risk). Floods are natural phenomena and cannot be fully prevented, but their risk can be managed and mitigated. Sound flood risk management and mitigation require a proper risk assessment, which in turn requires a clear understanding of flood risk dynamics. For instance, human activity may contribute to an increase in flood risk: anthropogenic climate change causes more intense rainfall and sea-level rise, and thereby an increase in the scale and frequency of flood events. Moreover, inappropriate risk management and structural protection measures may not be very effective for risk reduction, and risk increases as the number of assets and people within flood-prone areas grows. To address these issues, the first objective of this thesis is to perform a sensitivity analysis to understand the impact of changes in each flood risk component on the overall risk, as well as their mutual interactions. A multitude of changes along the risk chain is simulated with a regional flood model (RFM) in which all processes, from the atmosphere through the catchment and river system to the damage mechanisms, are taken into consideration. The impacts of changes in the risk components are explored through plausible change scenarios for the mesoscale Mulde catchment (a sub-basin of the Elbe) in Germany.
A proper risk assessment requires a reasonable representation of real-world flood events. Traditionally, flood risk is assessed by assuming homogeneous return periods of flood peaks throughout the considered catchment. In reality, however, flood events are spatially heterogeneous, and the traditional assumption therefore misestimates flood risk, especially for large regions. In this thesis, two studies investigate the importance of spatial dependence in large-scale flood risk assessment at different spatial scales. In the first, the “real” spatial dependence of the return periods of flood damages is represented by a continuous risk modelling approach in which spatially coherent patterns of hydrological and meteorological controls (i.e. soil moisture and weather patterns) are included. The risk estimates under this modelled dependence assumption are then compared with those under two other assumptions on the spatial dependence of the return periods of flood damages: complete dependence (homogeneous return periods) and independence (randomly generated heterogeneous return periods), for the Elbe catchment in Germany. The second study represents the “real” spatial dependence by multivariate dependence models. Similar to the first study, the three assumptions on the spatial dependence of the return periods of flood damages are compared, but at national (United Kingdom and Germany) and continental (Europe) scales. Furthermore, the impacts of the different models, of tail dependence, and of the structural flood protection level on the flood risk under the different spatial dependence assumptions are investigated.
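The effect of the dependence assumption can be illustrated with a toy Monte Carlo experiment: give every site the same marginal damage distribution, couple the sites through a Gaussian copula with correlation rho, and compare the 200-year aggregate damage for rho = 1 (complete dependence), an intermediate rho (a stand-in for modelled dependence), and rho = 0 (independence). This sketch is purely conceptual and unrelated to the regional flood model or the fitted dependence models of the thesis; all numbers are invented.

```python
# Toy Monte Carlo: how the spatial-dependence assumption drives the
# 200-year aggregate damage (conceptual illustration only).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_sites, n_years = 50, 100_000

def aggregate_200yr(rho):
    """200-year quantile of aggregate damage when the sites' annual
    losses are coupled by a Gaussian copula with correlation rho."""
    cov = rho * np.ones((n_sites, n_sites)) + (1.0 - rho) * np.eye(n_sites)
    z = rng.multivariate_normal(np.zeros(n_sites), cov, size=n_years)
    u = norm.cdf(z)                 # uniform margins at every site
    damage = (u ** 8).sum(axis=1)   # convex marginal: rare events are costly
    return np.quantile(damage, 1.0 - 1.0 / 200.0)

for rho, label in [(1.0, "complete dependence"),
                   (0.4, "intermediate (modelled) dependence"),
                   (0.0, "independence")]:
    print(f"{label:>35s}: {aggregate_200yr(rho):6.1f}")
```

With identical margins, the complete-dependence run yields a several-fold larger 200-year aggregate than the independence run, mirroring the overestimation quantified below.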
The outcomes of the sensitivity analysis framework suggest that flood risk can vary dramatically under plausible change scenarios. Risk components that have received little attention so far (e.g. changes in dyke systems and in vulnerability) may mask the influence of climate change, the component that is most often investigated.
The results of the spatial dependence research in this thesis further show that the damage under the false assumption of complete dependence is 100% larger than the damage under the modelled dependence assumption for events with return periods greater than approximately 200 years in the Elbe catchment. The complete dependence assumption overestimates the 200-year flood damage, a benchmark indicator for the insurance industry, by 139%, 188% and 246% for the UK, Germany and Europe, respectively. The misestimation of risk under the different assumptions can vary from upstream to downstream parts of a catchment. In addition, tail dependence in the model and the flood protection level in the catchments can affect the risk estimates and the differences between the spatial dependence assumptions.
In conclusion, a broader consideration of all risk components that may affect flood risk, together with consideration of the spatial dependence of flood return periods, is strongly recommended for a better understanding of flood risk and consequently for sound flood risk management and mitigation.
The incorporation of proteins into artificial materials such as membranes offers great opportunities to harness the manifold qualities of proteins and enzymes perfected by nature over millions of years. One way to leverage proteins is modification with artificial polymers. To obtain such protein-polymer conjugates, either a polymer can be grown from the protein surface (grafting-from) or a pre-synthesized polymer can be attached to the protein (grafting-to). Both techniques were used in this thesis to synthesize conjugates of different proteins with thermo-responsive polymers.
First, conjugates were analyzed by protein NMR spectroscopy. Typical characterization techniques for conjugates can verify successful conjugation and give hints about the secondary structure of the protein; however, the three-dimensional structure, which is highly important for protein function, cannot be probed by standard techniques. NMR spectroscopy is a unique method that allows even small alterations in the protein structure to be followed. A mutant of the carbohydrate binding module 3b (CBM3bN126W) was used as a model protein and functionalized with poly(N-isopropylacrylamide) (PNIPAm). Analysis of conjugates prepared by grafting-to or grafting-from revealed a strong impact of the conjugation type on protein folding. Whereas grafting a pre-formed polymer to the protein completely preserved the protein fold, grafting the polymer from the protein surface led to (partial) disruption of the protein structure.
Next, conjugates of bovine serum albumin (BSA), a cheap and easily accessible protein, were synthesized with PNIPAm and different oligoethylene glycol (meth)acrylates. The obtained protein-polymer conjugates were analyzed by an in-line combination of size exclusion chromatography and multi-angle laser light scattering (SEC-MALS). This technique is particularly advantageous for determining molar masses, as no external calibration of the system is needed. Different SEC column materials and operating conditions were tested to evaluate the applicability of this system for determining absolute molar masses and hydrodynamic properties of heterogeneous conjugates prepared by grafting-from and grafting-to. Hydrophobic and non-covalent interactions of the conjugates led to error-prone values that did not agree with the molar masses expected from conversions and extents of modification.
As an alternative to this method, conjugates were analyzed by sedimentation velocity analytical ultracentrifugation (SV-AUC) to gain insights into their hydrodynamic properties and how these change upon conjugation. Within a centrifugal field, a sample moves and fractionates according to the mass, density, and shape of its individual components. Conjugates of BSA with PNIPAm were analyzed below and above the cloud point temperature of the thermo-responsive polymer component. The polymer characteristics were found to be transferred to the conjugate molecule, which then showed a decreased ideality – defined as an increased deviation from a perfect sphere model – below, and an increased ideality above, the cloud point temperature. This effect can be attributed to the polymer chain either pointing towards the solvent (expanded state) or wrapping around the protein surface, depending on the applied temperature.
The last project dealt with the synthesis of ferric hydroxamate uptake protein component A (FhuA)-polymer conjugates as building blocks for novel membrane materials. FhuA is barrel-shaped, and removal of a cork domain inside the protein yields a passive channel intended to serve as a pore in the membrane system. The polymer matrix surrounding the membrane protein is composed of a thermo-responsive and a UV-crosslinkable part, providing an external trigger for covalent immobilization of these building blocks in the membrane as well as switchability of the membrane between different states. The overall performance of membranes prepared by a drying-mediated self-assembly approach was evaluated by permeability and size exclusion experiments. The obtained membranes displayed insufficient interchain crosslinking and therefore lacked performance. Furthermore, the intended switch between a hydrophilic and a hydrophobic state of the polymer matrix did not occur. Correspondingly, size exclusion experiments did not result in retention of analytes larger than the pores defined by the dimensions of the FhuA variant used.
Overall, different routes to protein-polymer conjugates by either grafting-from or grafting-to the protein surface were presented, paving the way to the generation of new hybrid materials. Different analytical methods were utilized to describe the folding and hydrodynamic properties of the conjugates, providing deeper insight into the overall characteristics of these seminal building blocks.
Virtualizing physical space
(2021)
The true cost of virtual reality is not the hardware, but the physical space it requires, as a one-to-one mapping of physical space to virtual space allows for the most immersive way of navigating in virtual reality. Such "real-walking" requires the physical space to be of the same size and shape as the virtual world it represents. This generally prevents real-walking applications from running in any space they were not designed for.
To reduce virtual reality’s demand for physical space, creators of such applications let users navigate virtual space by means of a treadmill, altered mappings of physical to virtual space, hand-held controllers, or gesture-based techniques. While all of these solutions succeed at reducing virtual reality’s demand for physical space, none of them reach the same level of immersion that real-walking provides.
Our approach is to virtualize physical space: instead of accessing physical space directly, we allow applications to express their need for space in an abstract way, which our software systems then map to the physical space available. We allow real-walking applications to run in spaces of different size, different shape, and in spaces containing different physical objects. We also allow users immersed in different virtual environments to share the same space.
Our systems achieve this by using a tracking volume-independent representation of real-walking experiences: a graph structure that expresses the spatial and logical relationships between virtual locations, the virtual elements contained within those locations, and user interactions with those elements. When run in a specific physical space, this graph representation is used to define a custom mapping between the elements of the virtual reality application and the physical space by parsing the graph with a constraint solver. To re-use space, our system splits virtual scenes and overlaps virtual geometry. The system derives this split by hierarchically clustering the virtual objects, which form the nodes of a bipartite directed graph representing the logical ordering of events in the experience. We let applications express their demands for physical space and use pre-emptive scheduling between applications to have them share space. We present several application examples enabled by our system. They all enable real-walking, despite being mapped to physical spaces of different size and shape, containing different physical objects or other users.
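As a rough illustration of this mapping step, the sketch below encodes virtual locations and their logical adjacency as a small graph and assigns them to zones of a physical space by a brute-force constraint search. The names, areas, and neighborhood constraints are invented for illustration; the actual system uses a far richer graph and a proper constraint solver.

```python
# Hedged sketch: map virtual locations (with area demands and logical
# adjacency) onto physical zones; all names and numbers are hypothetical.
import itertools

locations = {"lobby": 9.0, "corridor": 4.0, "vault": 5.5}   # required area, m^2
adjacent = [("lobby", "corridor"), ("corridor", "vault")]    # logical ordering
zones = {"A": 10.0, "B": 6.5, "C": 6.0}                      # available area, m^2
zone_neighbors = {("A", "B"), ("B", "A"), ("B", "C"), ("C", "B")}

def valid(assign):
    # every location must fit its zone, and logically adjacent locations
    # must map to physically neighboring zones
    return all(zones[assign[l]] >= a for l, a in locations.items()) and \
           all((assign[u], assign[v]) in zone_neighbors for u, v in adjacent)

for perm in itertools.permutations(zones, len(locations)):
    assign = dict(zip(locations, perm))
    if valid(assign):
        print(assign)  # e.g. {'lobby': 'A', 'corridor': 'B', 'vault': 'C'}
        break
```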
We see substantial real-world impact in our systems. Today's commercial virtual reality applications are generally designed to be navigated using less immersive solutions, as this allows them to be operated on any tracking volume. While this is a commercial necessity for the developers, it misses out on the higher immersion offered by real-walking. We let developers overcome this hurdle by allowing experiences to bring real-walking to any tracking volume, thus potentially bringing real-walking to consumers.
While patients are known to respond differently to drug therapies, current clinical practice often still follows a standardized dosage regimen for all patients. For drugs with a narrow range of effective and safe concentrations, this approach may lead to a high incidence of adverse events or subtherapeutic dosing when patient variability is high. Model-informed precision dosing (MIPD) is a quantitative approach to dose individualization based on mathematical modeling of dose-response relationships that integrates therapeutic drug/biomarker monitoring (TDM) data. MIPD may considerably improve the efficacy and safety of many drug therapies. Current MIPD approaches, however, rely either on pre-calculated dosing tables or on simple point predictions of the therapy outcome. These approaches lack a quantification of uncertainties and the ability to account for delayed effects. In addition, the underlying models are not improved while being applied to patient data. Current approaches are therefore not well suited for informed clinical decision-making based on a differentiated understanding of the individually predicted therapy outcome.
The objective of this thesis is to develop mathematical approaches for MIPD, which (i) provide efficient fully Bayesian forecasting of the individual therapy outcome including associated uncertainties, (ii) integrate Markov decision processes via reinforcement learning (RL) for a comprehensive decision framework for dose individualization, (iii) allow for continuous learning across patients and hospitals. Cytotoxic anticancer chemotherapy with its major dose-limiting toxicity, neutropenia, serves as a therapeutically relevant application example.
For more comprehensive therapy forecasting, we apply Bayesian data assimilation (DA) approaches, integrating patient-specific TDM data into mathematical models of chemotherapy-induced neutropenia that build on prior population analyses. The value of uncertainty quantification is demonstrated, as it allows reliable computation of patient-specific probabilities of relevant clinical quantities, e.g., the neutropenia grade. In view of novel home monitoring devices that increase the amount of available TDM data, the data processing of sequential DA methods proves to be more efficient and facilitates the handling of variability between dosing events.
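For intuition, here is a minimal sketch of sequential DA as a bootstrap particle filter for a single patient parameter. The one-parameter decay model, noise level, and TDM values are invented for illustration and are far simpler than the population neutropenia models used in the thesis.

```python
# Hedged sketch: sequential Bayesian data assimilation (bootstrap particle
# filter) updating one toy pharmacological parameter from TDM data.
import numpy as np

rng = np.random.default_rng(1)
n_particles = 5000
theta = rng.lognormal(mean=0.0, sigma=0.5, size=n_particles)  # prior ensemble

def predict_biomarker(theta, t):
    # toy structural model: exponential decline governed by theta
    return 10.0 * np.exp(-theta * t)

for t, y_obs in [(1.0, 6.3), (2.0, 4.1), (3.0, 2.8)]:  # hypothetical TDM data
    sigma = 0.5                                         # assumed residual error
    w = np.exp(-0.5 * ((y_obs - predict_biomarker(theta, t)) / sigma) ** 2)
    w /= w.sum()
    theta = rng.choice(theta, size=n_particles, p=w)    # resample posterior
    print(f"t={t}: posterior mean theta = {theta.mean():.3f}")
```

Each new measurement reweights and resamples the ensemble, so the full posterior, not just a point estimate, is carried forward between dosing events.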
By transferring concepts from DA and RL, we develop novel approaches for MIPD. While DA-guided dosing integrates individualized uncertainties into dose selection, RL-guided dosing provides a framework to consider the delayed effects of dose selections. The combined DA-RL approach takes both aspects into account simultaneously and thus represents a holistic approach to MIPD. Additionally, we show that RL can be used to gain insights into the patient characteristics that are important for dose selection. In a simulation study based on a recent clinical study (CEPAC-TDM trial), the novel dosing strategies substantially reduce the occurrence of both subtherapeutic and life-threatening neutropenia grades compared to currently used MIPD approaches.
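A minimal sketch of the RL ingredient follows: tabular Q-learning over a toy discretized neutropenia-grade state and three dose levels. The dynamics and rewards are invented placeholders; in the thesis, RL is coupled to the pharmacokinetic/pharmacodynamic model and to DA rather than to such a toy environment.

```python
# Hedged sketch: tabular Q-learning for dose selection in a toy environment.
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 5, 3          # neutropenia grades 0-4, three dose levels
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    # invented dynamics: a higher dose pushes toward a higher grade, with noise
    nxt = int(np.clip(state + action - 1 + rng.integers(-1, 2), 0, n_states - 1))
    reward = -abs(nxt - 2)          # target a mid-range grade, penalize extremes
    return nxt, reward

state = 0
for _ in range(20_000):
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    nxt, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print(Q.argmax(axis=1))  # learned dose level per observed grade
```

The discounted future value in the update is what lets such a policy credit doses for effects that only materialize several cycles later.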
If MIPD is to be implemented in routine clinical practice, a certain bias of the underlying model is inevitable, as models are typically based on data from comparably small clinical trials that reflect the diversity of real-world patient populations only to a limited extent. We propose a sequential hierarchical Bayesian inference framework that enables continuous learning across patients of the model parameters of the target patient population. Importantly, the approach requires only summary information of the individual patient data to update the model. This separation of individual inference from population inference enables implementation across different centers of care.
The proposed approaches substantially improve on current MIPD approaches, taking into account new trends in health care and aspects of practical applicability. They enable progress towards more informed clinical decision-making, ultimately increasing patient benefit beyond current practice.
Spatiotemporal variations of key air pollutants and greenhouse gases in the Himalayan foothills
(2021)
South Asia is a rapidly developing, densely populated and highly polluted region that is facing the impacts of increasing air pollution and climate change, and yet it remains one of the scientifically least studied regions of the world. In recognition of this situation, this thesis focuses on (i) the spatial and temporal variation of key greenhouse gases (CO2 and CH4) and air pollutants (CO and O3) and (ii) the vertical distribution of air pollutants (PM, BC) in the foothills of the Himalaya. Five sites were selected in the Kathmandu Valley, the capital region of Nepal, along with two sites outside the valley in the Makawanpur and Kaski districts, and measurements were conducted during 2013-2014 and 2016. These measurements are analyzed in this thesis.
The CO measurements at multiple sites in the Kathmandu Valley showed a clear diurnal cycle: morning and evening levels were high, with an afternoon dip. The diurnal cycles of CO2 and CH4 differed slightly, with their mixing ratios increasing after the afternoon dip until the morning peak of the next day. The mixing layer height (MLH) of the nocturnal stable layer is relatively constant (~200 m) during the night, after which it transitions to a convective mixing layer during the day, and the MLH increases up to 1200 m in the afternoon. Pollutants are thus largely trapped in the valley from the evening until sunrise of the following day, and their concentrations increase due to emissions during the night. During the afternoon, the pollutants are diluted by the circulation of the valley winds after the break-up of the mixing layer. The major emission sources of GHGs and air pollutants in the valley are the transport sector, residential cooking, brick kilns, trash burning, and agro-residue burning. Brick industries are influential in the winter and pre-monsoon seasons. The contribution of regional forest fires and agro-residue burning is seen during the pre-monsoon season. In addition, relatively higher CO values were observed at the valley outskirts (Bhimdhunga and Naikhandi), indicating the contribution of regional emission sources. This was also supported by the higher concentrations of O3 during the pre-monsoon season.
The mixing ratios of CO2 (419.3 ± 6.0 ppm) and CH4 (2.192 ± 0.066 ppm) in the valley were much higher than at background sites, including the Mauna Loa observatory (CO2: 396.8 ± 2.0 ppm, CH4: 1.831 ± 0.110 ppm) and Waliguan (CO2: 397.7 ± 3.6 ppm, CH4: 1.879 ± 0.009 ppm) in China, as well as at the urban site Shadnagar (CH4: 1.92 ± 0.07 ppm) in India.
The daily maximum 8-hour O3 average in the Kathmandu Valley exceeded the WHO-recommended value on more than 80% of days during the pre-monsoon period, which represents a significant risk for human health and ecosystems in the region. Moreover, measurements of the vertical distribution of particulate matter, which were made using an ultralight aircraft and are the first of their kind in the region, detected an elevated polluted layer at ca. 3000 m a.s.l. over the Pokhara Valley. This layer can be associated with large-scale regional transport of pollution. These contributions towards understanding the distributions of key air pollutants and their main sources provide helpful information for developing management plans and policies to reduce the risks for the millions of people living in the region.
The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes located in the Khomas Highland of Namibia. H.E.S.S. operates in a wide energy range from several tens of GeV to several tens of TeV, reaching its best sensitivity around 1 TeV or at lower energies. However, many important topics – such as the search for Galactic PeVatrons, the study of gamma-ray production scenarios of sources (hadronic vs. leptonic), and EBL absorption studies – require good sensitivity at energies above 10 TeV. This work aims at improving the sensitivity of H.E.S.S. and increasing the gamma-ray statistics at high energies. The study investigates an enlargement of the effective H.E.S.S. field of view by including events with larger offset angles in the analysis. The greatest challenges in the analysis of large-offset events are the degradation of the reconstruction accuracy and the rise of the background rate with increasing offset angle. To overcome these issues, a more sophisticated direction reconstruction method (DISP) and improvements to the standard background rejection technique are implemented, which by themselves are effective ways to increase the gamma-ray statistics and improve the sensitivity of the analysis. As a result, the angular resolution at the preselection level is improved by 5-10% for events at 0.5° offset angle and by 20-30% for events at 2° offset angle. The background rate at large offset angles is decreased to nearly the level typical for offset angles below 2.5°. Thereby, sensitivity improvements of 10-20% are achieved for the proposed analysis compared to the standard analysis at small offset angles. The developed analysis also allows for the use of events at offset angles up to approximately 4°, which was not possible before. This analysis method is applied to the Galactic plane data above 10 TeV. As a result, 40 of the 78 sources presented in the H.E.S.S. Galactic plane survey (HGPS) are detected above 10 TeV. Among them are representatives of all source classes present in the HGPS catalogue, namely binary systems, supernova remnants, pulsar wind nebulae and composite objects. The potential of the improved analysis method is demonstrated by investigating the emission above 10 TeV from two objects: the region associated with the shell-type SNR HESS J1731−347 and the PWN candidate associated with PSR J0855−4644 that is coincident with Vela Junior (HESS J0852−463).
Modern knowledge bases contain and organize knowledge from many different topic areas. Apart from specific entity information, they also store information about the relationships between entities. Combining this information results in a knowledge graph that can be particularly helpful in cases where relationships are of central importance. Among other applications, modern risk assessment in the financial sector can benefit from the inherent network structure of such knowledge graphs by assessing the consequences and risks of certain events, such as corporate insolvencies or fraudulent behavior, based on the underlying network structure. As public knowledge bases often do not contain the information necessary for the analysis of such scenarios, the need arises to create and maintain dedicated domain-specific knowledge bases.
This thesis investigates the process of creating domain-specific knowledge bases from structured and unstructured data sources. In particular, it addresses the topics of named entity recognition (NER), duplicate detection, and knowledge validation, which represent essential steps in the construction of knowledge bases.
First, we present a novel method for duplicate detection based on a Siamese neural network that learns a dataset-specific similarity measure, which is then used to identify duplicates. Using this specialized network architecture, we design and implement a knowledge transfer between two deduplication networks, which leads to significant performance improvements and a reduction of the required training data.
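The following minimal sketch illustrates the Siamese setup: twin encoders with shared weights embed two records, and a contrastive loss pulls duplicate pairs together while pushing non-duplicates apart. Layer sizes, the feature representation, and the loss variant are assumptions for illustration, not the thesis architecture.

```python
# Hedged sketch: Siamese encoder with contrastive loss for deduplication.
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    def __init__(self, in_dim=64, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))

    def forward(self, a, b):
        return self.net(a), self.net(b)   # shared weights for both inputs

def contrastive_loss(ea, eb, is_dup, margin=1.0):
    d = torch.norm(ea - eb, dim=1)
    # pull duplicates together; push non-duplicates at least `margin` apart
    return (is_dup * d.pow(2) + (1 - is_dup) * (margin - d).clamp(min=0).pow(2)).mean()

model = SiameseEncoder()
a, b = torch.randn(8, 64), torch.randn(8, 64)   # toy record feature vectors
labels = torch.randint(0, 2, (8,)).float()      # 1 = duplicate pair
loss = contrastive_loss(*model(a, b), labels)
loss.backward()
```

Because both records pass through the same weights, the learned distance is symmetric, and the encoder weights are a natural candidate for the knowledge transfer between deduplication networks mentioned above.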
Furthermore, we propose a named entity recognition approach that identifies company names by integrating external knowledge in the form of dictionaries into the training process of a conditional random field classifier. In this context, we study the effects of different dictionaries on the performance of the NER classifier. We show that both the inclusion of domain knowledge and the generation and use of alias names result in significant performance improvements.
For the validation of knowledge represented in a knowledge base, we introduce Colt, a framework for knowledge validation based on the interactive quality assessment of logical rules. In its most expressive implementation, we combine Gaussian processes with neural networks to create Colt-GP, an interactive algorithm for learning rule models. Unlike other approaches, Colt-GP uses knowledge graph embeddings and user feedback to cope with data quality issues of knowledge bases. The learned rule model can be used to conditionally apply a rule and assess its quality.
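As a rough sketch of the Colt-GP idea, the snippet below trains a Gaussian process on user feedback over (toy) knowledge-graph embeddings of rule groundings and applies the rule only where the predicted quality is high. sklearn's GaussianProcessClassifier stands in for the thesis' combination of Gaussian processes and neural networks; all data here are random placeholders.

```python
# Hedged sketch: GP-based quality scores for rule groundings from feedback.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(5)
emb = rng.normal(size=(40, 16))          # toy embeddings of rule groundings
feedback = (emb[:, 0] > 0).astype(int)   # toy user labels: rule holds or not

gp = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(emb, feedback)

candidates = rng.normal(size=(5, 16))
p = gp.predict_proba(candidates)[:, 1]
# conditionally apply the rule only where the model is confident it holds
print([(i, round(float(pi), 2)) for i, pi in enumerate(p) if pi > 0.8])
```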
Finally, we present CurEx, a prototypical system for building domain-specific knowledge bases from structured and unstructured data sources. Its modular design is based on scalable technologies which, in addition to processing large datasets, ensure that the modules can easily be exchanged or extended. CurEx offers multiple user interfaces, each tailored to the needs of a specific user group, and is fully compatible with the Colt framework, which can be used as part of the system.
We conduct a wide range of experiments with different datasets to determine the strengths and weaknesses of the proposed methods. To ensure the validity of our results, we compare the proposed methods with competing approaches.
Natural products have proved to be a major resource in the discovery and development of many pharmaceuticals in use today. A wide variety of biologically active natural products contain conjugated polyenes or benzofuran structures. New synthetic methods for the construction of such building blocks are therefore of great interest to synthetic chemists. The recently developed one-pot tethered ring-closing metathesis approach allows the formation of Z,E-dienoates with high stereoselectivity. Extending this method with a Julia-Kocienski olefination protocol would allow conjugated trienes to be formed in a stereoselective manner. This strategy was applied in the total synthesis of the conjugated-triene-containing (+)-bretonin B. Additionally, cross metathesis using methyl-substituted olefins was investigated. This methodology was applied, as a one-pot cross metathesis/ring-closing metathesis sequence, in the total synthesis of the benzofuran-containing 7-methoxywutaifuranal. Finally, the design and synthesis of a catalyst for stereoretentive metathesis in aqueous media was investigated.
The survey of the prevalence of chronic ankle instability in elite Taiwanese basketball athletes
(2021)
BACKGROUND: Ankle sprains are common in basketball. They can develop into chronic ankle instability (CAI), causing decreased quality of life, reduced functional performance, early osteoarthritis, and an increased risk of other injuries. To develop a strategy for CAI prevention, localized epidemiological data and a valid and reliable assessment tool are essential. However, the epidemiological data on CAI from previous studies are not conclusive, and the prevalence of CAI in Taiwanese basketball athletes is not clear. In addition, a valid and reliable Taiwan-Chinese instrument to evaluate ankle instability was missing.
PURPOSE: The aims were to obtain an overview of the prevalence of CAI in sports populations using a systematic review, to develop a valid and reliable cross-culturally adapted Taiwan-Chinese version of the Cumberland Ankle Instability Tool (CAIT-TW), and to survey the prevalence of CAI in elite basketball athletes in Taiwan using the CAIT-TW.
METHODS: First, a systematic search was conducted. Research articles applying CAI-related questionnaires to survey the prevalence of CAI were included in the review. Second, the English version of the CAIT was translated and cross-culturally adapted into the CAIT-TW. The construct validity, test-retest reliability, internal consistency, and cutoff score of the CAIT-TW were evaluated in an athletic population (N=135). Finally, cross-sectional data on the prevalence of CAI in 388 elite Taiwanese basketball athletes were collected. Demographics, the presence of CAI, and differences in prevalence between genders, competitive levels and playing positions were evaluated.
RESULTS: In the systematic review, the prevalence of CAI was 25%, ranging between 7% and 53%. The prevalence of CAI among participants with a history of ankle sprains was 46%, ranging between 9% and 76%. The cross-culturally adapted CAIT-TW showed moderate to strong construct validity, excellent test-retest reliability, good internal consistency, and a cutoff score of 21.5 for the Taiwanese athletic population. Finally, 26% of the Taiwanese basketball athletes had unilateral CAI and 50% had bilateral CAI. Women athletes in the investigated cohort had a higher prevalence of CAI than men. There was no difference in prevalence between competitive levels or among playing positions.
CONCLUSION: The systematic review shows that the prevalence of CAI varies widely among the included studies. This could be due to different exclusion criteria, ages, sports disciplines, or other factors in the included studies. Future studies require standardized criteria for investigating the epidemiology of CAI; such studies should be prospective, and the factors affecting the prevalence of CAI should be investigated and described. The translated CAIT-TW is a valid and reliable tool to differentiate between stable and unstable ankles in athletes and may be applied in research or daily practice in Taiwan. In the Taiwanese basketball population, CAI is highly prevalent. This might relate to the research method, preexisting ankle instability, and training-related issues. Women showed a higher prevalence of CAI than men, so gender should be taken into consideration when applying preventive measures.
As society paves its way towards device miniaturization and precision medicine, micro-scale actuation and guided transport are becoming increasingly prominent research fields, with high potential impact in both technological and clinical contexts. A promising strategy to accomplish the directed motion of micron-sized objects, such as biosensors and drug-releasing microparticles, towards specific target sites is the use of living cells as smart, biochemically powered carriers, forming so-called bio-hybrid systems. Inspired by leukocytes, native cells of living organisms that efficiently migrate to critical targets such as tumor tissue, an emerging concept is to exploit the amoeboid crawling motility of such cells as a means of transport for drug delivery applications.
In the research work described in this thesis, I synergistically applied experimental, computational and theoretical modeling approaches to investigate the behaviour and transport mechanism of a novel kind of bio-hybrid system for active transport at the micro-scale, referred to as a cellular truck. This system consists of an amoeboid crawling cell, the carrier, attached to a microparticle, the cargo, which may ideally be drug-loaded for specific therapeutic treatments.
For the experimental investigation, I employed the amoeba Dictyostelium discoideum as the crawling cellular carrier, a renowned model organism for leukocyte migration and, more generally, for eukaryotic cell motility. The experiments revealed a complex recurrent cell-cargo relative motion, together with an intermittent motility of the cellular truck as a whole. The evidence suggests that the cargo attached to an amoeboid cell acts as a mechanical stimulus that drives cell polarization, thus promoting cell motility and giving rise to the observed intermittent dynamics of the truck. In particular, bursts of cytoskeletal polarity along the cell-cargo axis were found to occur at a rate dependent on the geometrical features of the cargo, such as the particle diameter. Overall, the collected experimental evidence points to a pivotal role of cell-cargo interactions in the emergent motion dynamics of the cellular truck. These interactions can determine the transport capabilities of amoeboid cells, as the cargo size significantly impacts the cytoskeletal activity and the repolarization dynamics along the cell-cargo axis, the latter being responsible for truck displacement and reorientation.
Furthermore, I developed a modeling framework, built upon the experimental evidence on cellular truck behaviour, that connects the relative dynamics and interactions arising at the truck scale with the actual particle transport dynamics. Numerical simulations of the proposed model successfully reproduced the phenomenology of the cell-cargo system, while enabling the prediction of the transport properties of cellular trucks over larger spatial and temporal scales. The theoretical analysis provided a deeper understanding of the role of the cell-cargo interaction in mass transport, unveiling in particular how the long-time transport efficiency is governed by the interplay between the persistence time of cell polarity and the time scales of the relative dynamics stemming from the cell-cargo interaction. Interestingly, the model predicts the existence of an optimal cargo size that enhances the diffusivity of cellular trucks; this is in line with previous independent experimental data, which appeared rather counterintuitive and had no explanation prior to this study.
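One way to picture the modeling idea, under strong simplifying assumptions, is a persistent random walk whose polarity direction decorrelates on a persistence time tau_p; in two dimensions the long-time diffusivity is then D = v^2 * tau_p / 2. The parameter values below are illustrative and not fitted to the experiments.

```python
# Hedged sketch: persistent random walk as a toy picture of a cellular truck.
import numpy as np

rng = np.random.default_rng(3)
v, tau_p, dt, n_steps, n_walkers = 1.0, 5.0, 0.1, 5000, 200
angle = rng.uniform(0, 2 * np.pi, n_walkers)
pos = np.zeros((n_walkers, 2))

for _ in range(n_steps):
    # rotational diffusion of the polarity axis with correlation time tau_p
    angle += np.sqrt(2 * dt / tau_p) * rng.normal(size=n_walkers)
    pos += v * dt * np.column_stack((np.cos(angle), np.sin(angle)))

msd = (pos ** 2).sum(axis=1).mean()
D_est = msd / (4 * n_steps * dt)            # MSD = 4 D t in 2D at long times
print(D_est, v ** 2 * tau_p / 2)            # estimated vs. theoretical D
```

In this picture, cell-cargo interactions that shorten or lengthen the effective persistence time directly change D, which is the intuition behind an optimal cargo size.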
In conclusion, my research work sheds light on the importance of cargo-carrier interactions in the context of crawling-cell-mediated particle transport, and provides a prototypical, multifaceted framework for the analysis and modelling of such complex bio-hybrid systems and their prospective optimization.
The presented study investigated the influence of microbial and biogeochemical processes on the physical transport-related properties and the fate of microplastics in freshwater reservoirs. The overarching goal was to elucidate the mechanisms leading to the sedimentation and deposition of microplastics in such environments. This is of importance, as large amounts of initially buoyant microplastics are found in reservoir sediments worldwide, yet the transport processes that lead to the accumulation of microplastics in sediments have so far been understudied.
The impact of biofilm formation on the density and subsequent sedimentation of microplastics was investigated in the eutrophic Bautzen reservoir (Chapter 2). Biofilms are complex microbial communities fixed to submerged surfaces through a slimy organic film. The mineral calcite was detected in the biofilms and led to the sinking of the overgrown microplastic particles. The calcite was of biogenic origin, most likely precipitated by sessile cyanobacteria within the biofilms.
Biofilm formation was also studied in the mesotrophic Malter reservoir. Unlike in Bautzen reservoir, biofilm formation did not govern the sedimentation of different microplastics in Malter reservoir (Chapter 3). Instead, autumnal lake mixing led to the formation of sinking aggregates of microplastics and iron colloids. Such colloids form when anoxic, iron-rich water from the hypolimnion mixes with the oxygenated epilimnetic waters. The colloids bind organic material from the lake water, which leads to the formation of large, sinking iron-organo flocs.
Hence, iron-organo floc formation and its influence on the buoyancy of microplastics and their burial in the sediments of Bautzen reservoir were studied in laboratory experiments (Chapter 4). Microplastics of different shapes (fiber, fragment, sphere) and sizes were readily incorporated into sinking iron-organo flocs. In this way, initially buoyant polyethylene microplastics were transported onto sediments from Bautzen reservoir. Shortly after deposition, the microplastic-bearing flocs started to subside and transported the pollutants into deeper sediment layers. The microplastics were not released from the sediments within two months of laboratory incubation.
The stability of the floc-mediated microplastic deposition was further investigated in experiments with the iron-reducing model organism Shewanella oneidensis (Chapter 5). It was shown that reduction or re-mineralization of the iron minerals did not affect the integrity of the iron-organo flocs: the organic matrix was stable under iron-reducing conditions, and hence no incorporated microplastics were released from the flocs. As similar processes are likely to take place in natural sediments, this might explain the previously described low release of microplastics from the sediments.
This thesis introduced different mechanisms leading to the sedimentation of initially buoyant microplastics and to their subsequent deposition in freshwater reservoirs. Novel processes, such as aggregation with iron-organo flocs, were identified, and the understudied issue of biofilm densification through biogenic mineral formation was investigated further. The findings have implications for the fate of microplastics within river-reservoir systems and outline the role of freshwater reservoirs as important accumulation zones for microplastics. Microplastics deposited in the sediments of reservoirs might not be transported further by the through-flowing river. Hence, the study might contribute to better risk assessments and transport balances for these anthropogenic contaminants.
As part of our everyday life we consume breaking news and interpret it based on our own viewpoints and beliefs. We have easy access to online social networking platforms and news media websites, where we inform ourselves about current affairs and often post about our own views, such as in news comments or social media posts. The media ecosystem enables opinions and facts to travel from news sources to news readers, from news article commenters to other readers, from social network users to their followers, etc. The views of the world many of us have depend on the information we receive via online news and social media. Hence, it is essential to maintain accurate, reliable and objective online content to ensure democracy and verity on the Web. To this end, we contribute to a trustworthy media ecosystem by analyzing news and social media in the context of politics to ensure that media serves the public interest. In this thesis, we use text mining, natural language processing and machine learning techniques to reveal underlying patterns in political news articles and political discourse in social networks.
Mainstream news sources typically cover a great number of the same news stories every day, but they often place them in a different context or report them from different perspectives. In this thesis, we are interested in how distinct and predictable newspaper journalists are in the way they report the news, as a means to understand and identify their different political beliefs. To this end, we propose two models that classify text from news articles to their respective original news source, using both reported speech and news comments. Our goal is to capture systematic quoting and commenting patterns by journalists and news commenters respectively, which can lead us to the newspaper where the quotes and comments were originally published. Predicting news sources can help us understand the potentially subjective nature of news storytelling and the magnitude of this phenomenon. Revealing this hidden knowledge can restore our trust in media by advancing transparency and diversity in the news.
Media bias can be expressed in various subtle ways in the text, and it is often challenging to identify these bias manifestations correctly, even for humans. However, media experts, e.g., journalists, are a powerful resource that can help us overcome the vague definition of political media bias, and they can also assist automatic learners in finding the hidden bias in the text. Given the enormous technological advances in artificial intelligence, we hypothesize that identifying political bias in the news can be achieved through the combination of sophisticated deep learning models and domain expertise. Our second contribution is therefore a high-quality and reliable news dataset annotated by journalists for political bias, and a state-of-the-art solution for this task based on curriculum learning. Our aim is to discover whether domain expertise is necessary for this task and to provide an automatic solution for this traditionally manually solved problem.
User-generated content is fundamentally different from news articles: messages are shorter, they are often personal and opinionated, and they refer to specific topics and persons. Regarding political and socio-economic news, individuals in online communities use social networks to keep their peers up to date and to share their own views on ongoing affairs. We believe that social media is as powerful an instrument for information flow as news sources are, and we use its unique characteristic of rapid news coverage for two applications. We analyze Twitter messages and debate transcripts during live political presidential debates to automatically predict the topics that Twitter users discuss. Our goal is to discover the favoured topics in online communities on the dates of political events as a way to understand the political subjects of public interest. With the up-to-dateness of microblogs, an additional opportunity emerges, namely to use social media posts and leverage the real-time verity about discussed individuals to find their locations. That is, given a person of interest who is mentioned in online discussions, we use the wisdom of the crowd to automatically track her physical locations over time. We evaluate our approach in the context of politics, i.e., we predict the locations of US politicians as a proof of concept for important use cases, such as tracking people who are national risks, e.g., warlords and wanted criminals.
The goal of this dissertation is to empirically evaluate the predictions of two classes of models applied to language processing: the similarity-based interference models (Lewis & Vasishth, 2005; McElree, 2000) and the group of smaller-scale accounts that we will refer to as faulty encoding accounts (Eberhard, Cutting, & Bock, 2005; Bock & Eberhard, 1993). Both types of accounts make predictions with regard to processing the same class of structures: sentences containing a non-subject (interfering) noun in addition to a subject noun and a verb. Both accounts make the same predictions for processing ungrammatical sentences with a number-mismatching interfering noun, and this prediction finds consistent support in the data. However, the similarity-based interference accounts predict similar effects not only for morphosyntactic, but also for the semantic level of language organization. We verified this prediction in three single-trial online experiments, where we found consistent support for the predictions of the similarity-based interference account. In addition, we report computational simulations further supporting the similarity-based interference accounts. The combined evidence suggests that the faulty encoding accounts are not required to explain comprehension of ill-formed sentences.
For the processing of grammatical sentences, the accounts make conflicting predictions, and neither the slowdown predicted by the similarity-based interference account, nor the complementary slowdown predicted by the faulty encoding accounts were systematically observed. The majority of studies found no difference between the compared configurations. We tested one possible explanation for the lack of predicted difference, namely, that both slowdowns are present simultaneously and thus conceal each other. We decreased the amount of similarity-based interference: if the effects were concealing each other, decreasing one of them should allow the other to surface. Surprisingly, throughout three larger-sample single-trial online experiments, we consistently found the slowdown predicted by the faulty encoding accounts, but no effects consistent with the presence of inhibitory interference.
The overall pattern of the results observed across all the experiments reported in this dissertation is consistent with previous findings: predictions of the interference accounts for the processing of ungrammatical sentences receive consistent support, but the predictions for the processing of grammatical sentences are not always met. Recent proposals by Nicenboim et al. (2016) and Mertzen et al. (2020) suggest that interference might arise only in people with high working memory capacity or under deep processing mode. Following these proposals, we tested whether interference effects might depend on the depth of processing: we manipulated the complexity of the training materials preceding the grammatical experimental sentences while making no changes to the experimental materials themselves. We found that the slowdown predicted by the faulty encoding accounts disappears in the deep processing mode, but the effects consistent with the predictions of the similarity-based interference account do not arise.
Independently of whether similarity-based interference arises under deep processing mode or not, our results suggest that the faulty encoding accounts cannot be dismissed since they make unique predictions with regard to processing grammatical sentences, which are supported by data. At the same time, the support is not unequivocal: the slowdowns are present only in the superficial processing mode, which is not predicted by the faulty encoding accounts. Our results might therefore favor a much simpler system that superficially tracks number features and is distracted by every plural feature.
Smart contracts promise to reform the legal domain by automating clerical and procedural work, and minimizing the risk of fraud and manipulation. Their core idea is to draft contract documents in a way which allows machines to process them, to grasp the operational and non-operational parts of the underlying legal agreements, and to use tamper-proof code execution alongside established judicial systems to enforce their terms. The implementation of smart contracts has been largely limited by the lack of an adequate technological foundation which does not place an undue amount of trust in any contract party or external entity. Only recently did the emergence of Decentralized Applications (DApps) change this: Stored and executed via transactions on novel distributed ledger and blockchain networks, powered by complex integrity and consensus protocols, DApps grant secure computation and immutable data storage while at the same time eliminating virtually all assumptions of trust.
However, research on how to effectively capture, deploy, and most of all enforce smart contracts with DApps in mind is still in its infancy. Starting from the initial expression of a smart contract's intent and logic, to the operation of concrete instances in practical environments, to the limits of automatic enforcement, many challenges remain to be solved before a widespread use and acceptance of smart contracts can be achieved.
This thesis proposes a model-driven smart contract management approach to tackle some of these issues. A metamodel and semantics of smart contracts are presented, containing concepts such as legal relations, autonomous and non-autonomous actions, and their interplay. Guided by the metamodel, the notion and a system architecture of a Smart Contract Management System (SCMS) is introduced, which facilitates smart contracts in all phases of their lifecycle. Relying on DApps in heterogeneous multi-chain environments, the SCMS approach is evaluated by a proof-of-concept implementation showing both its feasibility and its limitations.
Further, two specific enforceability issues are explored in detail: The performance of fully autonomous tamper-proof behavior with external off-chain dependencies and the evaluation of temporal constraints within DApps, both of which are essential for smart contracts but challenging to support in the restricted transaction-driven and closed environment of blockchain networks. Various strategies of implementing or emulating these capabilities, which are ultimately applicable to all kinds of DApp projects independent of smart contracts, are presented and evaluated.
Conceptual knowledge about objects, people and events in the world is central to human cognition, underlying core cognitive abilities such as object recognition and use, and word comprehension. Previous research indicates that concepts consist of perceptual and motor features represented in modality-specific perceptual-motor brain regions. In addition, cross-modal convergence zones integrate modality-specific features into more abstract conceptual representations.
However, several questions remain open: First, to what extent does the retrieval of perceptual-motor features depend on the concurrent task? Second, how do modality-specific and cross-modal regions interact during conceptual knowledge retrieval? Third, which brain regions are causally relevant for conceptually-guided behavior? This thesis addresses these three key issues using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in the healthy human brain.
Study 1 - an fMRI activation study - tested to what extent the retrieval of sound and action features of concepts, and the resulting engagement of auditory and somatomotor brain regions depend on the concurrent task. 40 healthy human participants performed three different tasks - lexical decision, sound judgment, and action judgment - on words with a high or low association to sounds and actions. We found that modality-specific regions selectively respond to task-relevant features: Auditory regions selectively responded to sound features during sound judgments, and somatomotor regions selectively responded to action features during action judgments. Unexpectedly, several regions (e.g. the left posterior parietal cortex; PPC) exhibited a task-dependent response to both sound and action features. We propose these regions to be "multimodal", and not "amodal", convergence zones which retain modality-specific information.
Study 2 - an fMRI connectivity study - investigated the functional interaction between modality-specific and multimodal areas during conceptual knowledge retrieval. Using the above fMRI data, we asked (1) whether modality-specific and multimodal regions are functionally coupled during sound and action feature retrieval, (2) whether their coupling depends on the task, (3) whether information flows bottom-up, top-down, or bidirectionally, and (4) whether their coupling is behaviorally relevant. We found that functional coupling between multimodal and modality-specific areas is task-dependent, bidirectional, and relevant for conceptually-guided behavior. Left PPC acted as a connectivity "switchboard" that flexibly adapted its coupling to task-relevant modality-specific nodes.
Hence, neuroimaging studies 1 and 2 suggested a key role of left PPC as a multimodal convergence zone for conceptual knowledge. However, as neuroimaging is correlational, it remained unknown whether left PPC plays a causal role as a multimodal conceptual hub. Therefore, study 3 - a TMS study - tested the causal relevance of left PPC for sound and action feature retrieval. We found that TMS over left PPC selectively impaired action judgments on low sound-low action words, as compared to sham stimulation. Computational simulations of the TMS-induced electrical field revealed that stronger stimulation of left PPC was associated with worse performance on action, but not sound, judgments. These results indicate that left PPC causally supports conceptual processing when action knowledge is task-relevant and cannot be compensated by sound knowledge. Our findings suggest that left PPC is specialized for action knowledge, challenging the view of left PPC as a multimodal conceptual hub.
Overall, our studies support "hybrid theories" which posit that conceptual processing involves both modality-specific perceptual-motor regions and cross-modal convergence zones. In our new model of the conceptual system, we propose conceptual processing to rely on a representational hierarchy from modality-specific to multimodal up to amodal brain regions. Crucially, this hierarchical system is flexible, with different regions and connections being engaged in a task-dependent fashion. Our model not only reconciles the seemingly opposing grounded cognition and amodal theories, it also incorporates task dependency of conceptually-related brain activity and connectivity, thereby resolving several current issues on the neural basis of conceptual knowledge retrieval.
In our daily life, recurrence plays an important role on many spatial and temporal scales and in different contexts. It is the foundation of learning, be it in an evolutionary or in a neural context. It therefore seems natural that recurrence is also a fundamental concept in theoretical dynamical systems science. The way in which states of a system recur, or develop similarly from similar initial states, makes it possible to infer information about the underlying dynamics of the system. The mathematical space in which we define the state of a system (the state space) is often high-dimensional, especially for complex systems, which can also exhibit chaotic dynamics. The recurrence plot (RP) enables us to visualize the recurrences of any high-dimensional system in a two-dimensional, binary representation. Certain patterns in RPs can be related to physical properties of the underlying system, making the qualitative and quantitative analysis of RPs an integral part of nonlinear systems science. The presented work has a methodological focus and further develops recurrence analysis (RA) by addressing current research questions related to the increasing amount of available data and advances in machine learning techniques. By automating a central step in RA, namely the reconstruction of the state space from measured experimental time series, and by investigating the impact of important free parameters, this thesis aims to make RA more accessible to researchers outside of physics.
The first part of this dissertation is concerned with the reconstruction of the state space from time series. To this end, a novel idea is proposed which automates the reconstruction problem in the sense that there is no need to preprocess the data or estimate parameters a priori. The key idea is that the goodness of a reconstruction can be evaluated by a suitable objective function, which is minimized in the embedding process. In addition, the new method can process multivariate time series as input data. This is particularly important because multi-channel, sensor-based observations are ubiquitous in many research areas and continue to increase. Building on this, the described minimization problem of the objective function is then processed using a machine learning approach.
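For illustration, here is a greatly simplified stand-in for that idea: scan candidate embedding delays and pick the one minimizing an objective function, using the lagged mutual information as a classic proxy objective. The actual objective function and the treatment of multivariate input in the thesis are more sophisticated.

```python
# Hedged sketch: choose a delay by minimizing an objective over candidates.
import numpy as np

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

t = np.linspace(0, 16 * np.pi, 2000)
s = np.sin(t) + 0.1 * np.random.default_rng(6).normal(size=t.size)

scores = [mutual_information(s[:-tau], s[tau:]) for tau in range(1, 100)]
best_tau = 1 + int(np.argmin(scores))   # delay minimizing the objective
print(best_tau)
```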
In the second part, technical and methodological aspects of RA are discussed. First, we mathematically justify the idea of setting the most influential free parameter in RA, the recurrence threshold ε, in relation to the distribution of all pairwise distances in the data. This is especially important when comparing different RPs and their quantification statistics, and is fundamental to any comparative study. Second, some aspects of recurrence quantification analysis (RQA) are examined. As correction schemes for biased RQA statistics based on diagonal lines, we propose a simple method for dealing with border effects of an RP in RQA and a skeletonization algorithm for RPs. This results in less biased (diagonal-line-based) RQA statistics for flow-like data. Third, a novel type of RQA characteristic is developed, which can be viewed as a generalized nonlinear power spectrum of high-dimensional systems. The spike power spectrum transforms a spike-train-like signal into the frequency domain. When the diagonal-line-dependent recurrence rate (τ-RR) of an RP is transformed in this way, characteristic periods that can be seen in the state space representation of the system can be unraveled; this is not the case when τ-RR is Fourier-transformed directly.
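A minimal sketch of the thresholding idea reads as follows: fix ε as a quantile of the distribution of all pairwise distances, so that the recurrence rate is comparable whenever RPs of different systems are analyzed side by side. The toy signal and the simple delay embedding are illustrative choices.

```python
# Hedged sketch: recurrence plot with epsilon tied to the distance distribution.
import numpy as np

def recurrence_plot(x, quantile=0.05):
    # pairwise Euclidean distances between (embedded) state vectors
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    eps = np.quantile(d, quantile)   # threshold from the distance distribution
    return (d <= eps).astype(int), eps

# toy state space: a noisy sine, delay-embedded with a lag of 10 samples
t = np.linspace(0, 8 * np.pi, 400)
s = np.sin(t) + 0.05 * np.random.default_rng(4).normal(size=t.size)
tau = 10
x = np.column_stack((s[:-tau], s[tau:]))

rp, eps = recurrence_plot(x)
print(eps, rp.mean())  # the recurrence rate matches the chosen quantile
```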
Finally, RA and RQA are applied to climate science in the third part and to neuroscience in the fourth part. To the best of our knowledge, this is the first time RPs and RQA have been used to analyze lake sediment data in a paleoclimate context. We therefore first elaborate on the basic formalism and the interpretation of visible patterns in RPs in relation to the underlying proxy data. We show that these patterns can be used to classify certain types of variability and transitions in the potassium record from six short (< 17 m) sediment cores collected during the Chew Bahir Drilling Project. Building on this, the long composite core from the same site is analyzed, two types of variability and transitions are identified, and these are compared with a wetness index from an ODP site in the eastern Mediterranean. Type 1 variability likely reflects the influence of precessional forcing at lower latitudes at times of maximum values of the long eccentricity cycle of the Earth's orbit around the sun, with a tendency towards extreme events. Type 2 variability appears to be related to the minimum values of this cycle and corresponds to fairly rapid transitions between relatively dry and relatively wet conditions.
In contrast, RQA has been applied in the neuroscientific context for almost two decades. In the final part, RQA statistics are used to quantify the complexity of multivariate EEG (electroencephalography) data in a specific frequency band. By analyzing experimental data, it can be shown that the complexity of the signal measured in this way across the sensorimotor cortex decreases as motor tasks are performed. The results are consistent with, and complement, the well-known concepts of motor-related brain processes. We expect that the features of neuronal dynamics in the sensorimotor cortex discovered in this way, together with the robust RQA methods for identifying and classifying them, will contribute to the non-invasive, EEG-based development of brain-computer interfaces (BCI) for motor control and rehabilitation.
The present work is an important step towards a robust analysis of complex systems based on recurrence.
Identification of chemical mediators that regulate the specialized metabolism in Nostoc punctiforme
(2021)
Specialized metabolites, so-called natural products, are produced by a variety of organisms, including bacteria and fungi. Due to their wide range of biological activities, including pharmaceutically relevant properties, microbial natural products are an important source for drug development. They are encoded by biosynthetic gene clusters (BGCs), groups of locally clustered genes. By screening genomic data for genes encoding typical core biosynthetic enzymes, modern bioinformatic approaches are able to predict a wide range of BGCs. To date, the associated products have been identified for only a small fraction of the predicted BGCs.
The phylum of the cyanobacteria has been shown to be a prolific but largely untapped source of natural products. Multicellular cyanobacterial genera in particular, like Nostoc, harbor a large number of BGCs in their genomes.
A main goal of this study was to develop new concepts for the discovery of natural products in cyanobacteria. Due to its diverse set of orphan BGCs and its amenability to genetic manipulation, Nostoc punctiforme PCC 73102 (N. punctiforme) appeared to be a promising candidate for establishment as a model organism for natural product discovery in cyanobacteria. By combining genome mining, bioactivity screening, variation of culture conditions and metabolic engineering, not only were two new polyketides discovered, but first insights into the regulation of the specialized metabolism in N. punctiforme were also gained during this study.
The cultivation of N. punctiforme to very high densities by utilizing increasing light intensities and CO2 levels led to an enhanced metabolite production, resulting in rather complex metabolite extracts. Using a library of CFP reporter mutant strains, each strain reporting for one of the predicted BGCs, it was shown that eight out of 15 BGCs were upregulated under high-density (HD) cultivation conditions. Furthermore, it could be demonstrated that the supernatant of an HD culture can increase the expression of four of the affected BGCs, even under conventional cultivation conditions. This led to the hypothesis that a chemical mediator encoded by one of the affected BGCs accumulates in the HD supernatant and is able to increase the expression of other BGCs as part of a cell-density-dependent regulatory circuit. To identify which of the BGCs could be a main trigger of the presumed regulatory circuit, we attempted to selectively activate four BGCs (pks1, pks2, ripp3, ripp4) by overexpressing putative pathway-specific regulatory genes found inside the gene clusters. Transcriptional analysis of the mutants revealed that only the mutant strain targeting the pks1 BGC, called AraC_PKS1, was able to upregulate the expression of its associated BGC. An RNA sequencing study of the AraC_PKS1 mutant strain revealed that, besides pks1, the orphan BGCs ripp3 and ripp4 were also upregulated in the mutant strain. Furthermore, it was observed that secondary metabolite production in the AraC_PKS1 mutant strain is further enhanced under high-light and high-CO2 cultivation conditions. The increased production of the pks1 regulator NvlA also had an impact on other regulatory factors, including sigma factors and the RNA chaperone Hfq. Analysis of the AraC_PKS1 cell and supernatant extracts led to the discovery of two novel polyketides, nostoclide and nostovalerolactone, both encoded by the pks1 BGC. Addition of the polyketides to N. punctiforme WT demonstrated that the pks1-derived compounds are able to partly reproduce the effects on secondary metabolite production found in the AraC_PKS1 mutant strain. This indicates that both compounds act as extracellular signaling factors within a regulatory network. Since not all transcriptional effects found in the AraC_PKS1 mutant strain could be reproduced by the pks1 products, it can be assumed that the regulator NvlA has a global effect and is not exclusively specific to the pks1 pathway.
This study was the first to use a putative pathway-specific regulator for the targeted activation of BGC expression in cyanobacteria. This strategy not only led to the detection of two novel polyketides, it also provided first insights into the regulatory mechanisms of the specialized metabolism in N. punctiforme. The study illustrates that understanding regulatory pathways can aid the discovery of novel natural products. Its findings can guide the design of new screening strategies for bioactive compounds in cyanobacteria and help to develop high-titer production platforms for cyanobacterial natural products.
Silicate melts are major components of the Earth's interior and as such make an essential contribution to igneous processes, the dynamics of the solid Earth and the chemical evolution of the entire Earth. Macroscopic physical and chemical properties such as density, compressibility, viscosity and degree of polymerization are determined by the atomic structure of the melt. Depending on the pressure, but also on the temperature and the chemical composition, silicate melts show different structural properties. These properties are best described by the local coordination environment, i.e. the symmetry and number of neighbors (coordination number) of an atom, as well as the distance between the central atom and its neighbors (interatomic distance). With increasing pressure and temperature, i.e. with increasing depth in the Earth, the density of the melt increases, which can lead to changes in coordination numbers and distances. If the coordination number remains the same, the distance usually decreases; if the coordination number increases, the distance can increase. These general trends can, however, vary greatly, which can be attributed in particular to the chemical composition.
Because natural melts of the deep Earth are not accessible to direct investigation, extensive experimental and theoretical studies have been carried out to understand their properties under the relevant conditions. This has often been done using amorphous samples of the end-members SiO2 and GeO2, with the latter serving as a structural and chemical analog of SiO2. Commonly, the experiments were carried out at high pressure and room temperature. Natural melts are chemically much more complex than the simple end-members SiO2 and GeO2, so that observations made on the latter may lead to incorrect compression models. Furthermore, investigations on glasses at room temperature can deviate strongly from the properties of melts under natural thermodynamic conditions.
The aim of this thesis was to explain the influence of composition and temperature on the structural properties of melts at high pressures. To this end, we studied complex alumino-germanate and alumino-silicate glasses. More precisely, we studied synthetic glasses with a composition like the mineral albite and like a mixture of albite-diopside at the eutectic point. The albite glass is structurally similar to a simplified granitic melt, while the albite-diopside glass simulates a simplified basaltic melt. To study the local coordination environment of the elements, we used X-ray absorption spectroscopy in combination with a diamond anvil cell. Because the diamonds strongly absorb X-rays with energies below 10 keV, the direct investigation of geologically relevant elements such as Si, Al, Ca and Mg with this spectroscopic probe in combination with a diamond anvil cell is not possible. Therefore, the glasses were doped with Ge and Sr. These elements serve partially or fully as substitutes for important major elements: Ge serves as a substitute for Si and other network formers, while Sr replaces network modifiers such as Ca, Na and Mg, as well as other cations with a large ionic radius.
In the first step we studied the Ge K-edge in Ge-albite glass, NaAlGe3O8, at room temperature up to 131 GPa. This glass has a higher chemical complexity than SiO2 and GeO2, but is still fully polymerized. The differences in the compression mechanism between this glass and the simple oxides can clearly be attributed to the higher chemical complexity. The albite and albite-diopside compositions, partially doped with Ge and Sr, were probed at room temperature up to 164 GPa for Ge and up to 42 GPa for Sr. While the albite glass is nominally fully polymerized like NaAlGe3O8, the albite-diopside glass is partially depolymerized. The results show that structural changes take place in all three glasses within the first 25 to at most 30 GPa, with Ge and Sr reaching maximum coordination numbers of 6 and ∼9, respectively. At higher pressures, only isostructural shrinkage of the coordination polyhedra takes place in the glasses. The most important finding of the high-pressure studies on the alumino-silicate and alumino-germanate glasses is that in these complex glasses the polyhedra show a much higher compressibility than what is observed in the end-members. This is shown in particular by the strong shortening of the Ge-O distances in the amorphous NaAlGe3O8 and albite-diopside glass at pressures above 30 GPa.
In addition to the effects of composition on the compaction process, we investigated the influence of temperature on the structural changes. To do this, we probed the albite-diopside glass, as it is chemically most similar to the melts in the lower mantle. We studied the Ge K-edge of the sample with a resistively heated and a laser-heated diamond anvil cell, for pressures up to 48 GPa and temperatures up to 5000 K. High temperatures at which the sample is liquid, and which are relevant for the Earth's mantle, have a significant impact on the structural transformation, shifting it by approx. 30% to significantly lower pressures compared to the glasses at room temperature and below 1000 K.
The results of this thesis represent an important contribution to the understanding of the properties of melts at conditions of the lower mantle. In the context of the discussion about the existence and origin of ultra-dense silicate melts at the core-mantle boundary, these investigations show that the higher density compared to the surrounding material cannot be explained by structural features alone, but requires a distinct chemical composition. The results also suggest that only very low solubilities of noble gases are to be expected for melts in the lower mantle, so that the structural properties clearly influence the overall budget and transport of noble gases in the Earth's mantle.
In Systems Medicine, in addition to high-throughput molecular data (*omics), the wealth of clinical characterization plays a major role in the overall understanding of a disease. Unique problems and challenges arise from the heterogeneity of the data and require new software and analysis solutions. The SMART and EurValve studies establish a Systems Medicine approach to valvular heart disease, the primary cause of subsequent heart failure.
With the aim of ascertaining a holistic understanding, different *omics as well as the clinical picture of patients with aortic stenosis (AS) and mitral regurgitation (MR) are collected. Our task within the SMART consortium was to develop an IT platform for Systems Medicine as a basis for data storage, processing and analysis, a prerequisite for collaborative research. Based on this platform, this thesis deals on the one hand with transferring established Systems Biology methods to the Systems Medicine context and on the other hand with the clinical and biomolecular differences between the two heart valve diseases. To advance differential expression/abundance (DE/DA) analysis software for use in Systems Medicine, we state 21 general software requirements and features of automated DE/DA software, including a novel concept for the simple formulation of experimental designs that can represent complex hypotheses, such as the comparison of multiple experimental groups, and demonstrate our handling of the wealth of clinical data in two research applications, DEAME and Eatomics. In user interviews, we show that novice users are empowered to formulate and test their multiple DE hypotheses based on clinical phenotype. Furthermore, we describe insights into users' general impression and expectations of the software's performance and show their intention to continue using the software for their work in the future. Both research applications cover most of the features of existing tools or even extend them, especially with respect to complex experimental designs. Eatomics is freely available to the research community as a user-friendly R Shiny application.
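As an illustration of how such experimental designs can be expressed, the following generic sketch encodes a multi-group comparison with a covariate as a design matrix; this is a toy example with invented data, not the actual DEAME/Eatomics implementation:

```python
# Generic sketch of encoding a multi-group experimental design as a design
# matrix for differential-abundance testing; illustrative only, not the
# DEAME/Eatomics implementation.
import pandas as pd

clinical = pd.DataFrame({
    "sample":  ["s1", "s2", "s3", "s4", "s5", "s6"],
    "disease": ["AS", "AS", "MR", "MR", "control", "control"],
    "sex":     ["f",  "m",  "f",  "m",  "f",  "m"],
})

# Indicator columns for the disease groups (relative to a reference level)
# plus a sex covariate; a linear model fit per protein against these columns
# can then test hypotheses such as AS vs control or AS vs MR via contrasts.
design = pd.get_dummies(clinical[["disease", "sex"]], drop_first=True)
print(design)
```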
Eatomics subsequently helped drive the collaborative analysis and interpretation of the proteomic profile of 75 human left myocardial tissue samples from the SMART and EurValve studies. Here, we investigate molecular changes within the two most common types of valvular heart disease: aortic valve stenosis (AS) and mitral valve regurgitation (MR). Through DE/DA analyses, we explore shared and disease-specific protein alterations, particularly signatures that could only be found in the sex-stratified analysis. In addition, we relate changes in the myocardial proteome to parameters from clinical imaging. We find comparable cardiac hypertrophy but differences in ventricular size, the extent of fibrosis, and cardiac function. AS and MR show many shared remodeling effects, the most prominent of which is an increase in the extracellular matrix and a decrease in metabolism; both effects are stronger in AS. Among muscle and cytoskeletal adaptations, we see a greater increase in mechanotransduction in AS and an increase in the cortical cytoskeleton in MR. The decrease in proteostasis proteins is mainly attributable to the signature of female patients with AS. We also identify relevant therapeutic targets.
In addition to the new findings, our work confirms several concepts from animal and heart failure studies by providing the largest collection of human tissue from in vivo collected biopsies to date. Our dataset contributes a resource for isoform-specific protein expression in two of the most common valvular heart diseases. Beyond the general proteomic landscape, we demonstrate the added value of the dataset by showing proteomic and transcriptomic evidence of increased expression of the SARS-CoV-2 receptor under pressure load but not under volume load in the left ventricle, and we provide the basis for a newly developed metabolic model of the heart.
Over the past decades, natural hazards, many of which are aggravated by climate change and show an increasing trend in frequency and intensity, have caused significant human and economic losses and pose a considerable obstacle to sustainable development. Dedicated action toward disaster risk reduction is therefore needed to understand the underlying drivers and create efficient risk mitigation plans. Such action is requested by the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), a global agreement launched in 2015 that establishes priorities for action, e.g. an improved understanding of disaster risk. Turkey is one of the SFDRR contracting countries and has been severely affected by many natural hazards, in particular earthquakes and floods. However, disproportionately little is known about flood hazards and risks in Turkey. This thesis therefore presents the first comprehensive analysis of flood hazards in Turkey, from triggering drivers to impacts. It is intended to contribute to a better understanding of flood risks, to improvements in flood risk mitigation and to the facilitated monitoring of progress and achievements in implementing the SFDRR.
In order to investigate the occurrence and severity of flooding in comparison to other natural hazards in Turkey and to provide an overview of the temporal and spatial distribution of flood losses, the Turkey Disaster Database (TABB) was examined for the years 1960-2014. The TABB database was reviewed through comparison with the Emergency Events Database (EM-DAT), the Dartmouth Flood Observatory database, the scientific literature and news archives. In addition, data on the most severe flood events between 1960 and 2014 were retrieved. These served as a basis for analyzing triggering mechanisms (i.e. atmospheric circulation and precipitation amounts) and aggravating pathways (i.e. topographic features, catchment size, land use types and soil properties). For this, a new approach was developed and the events were classified using hierarchical cluster analyses to identify the main influencing factor per event and provide additional information about the dominant flood pathways for severe floods. The main idea of the study was to start from the event impacts, in a bottom-up approach, and identify the causes that created damaging events, instead of applying a model chain with long-term series as input and searching for potentially impacting events as model outcomes. However, the frequency analysis of the flood-triggering circulation pattern types revealed that some heavy-precipitation events were not included in the list of most severe floods, i.e. their impacts were not recorded in national and international loss databases, although they were mentioned in news archives and reported by the Turkish State Meteorological Service. This finding challenges bottom-up modelling approaches and underlines the urgent need for consistent event and loss documentation. Therefore, as a next step, the aim was to enhance the flood loss documentation by calibrating, validating and applying the United Nations Office for Disaster Risk Reduction (UNDRR) loss estimation method to the recent severe flood events (2015-2020). This provided a consistent flood loss estimation model for Turkey, allowing governments to estimate losses as quickly as possible after events, e.g. to better coordinate financial aid.
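The core logic of unit-based loss estimation can be sketched as follows; the sector names, damage ratios and unit costs below are invented for illustration, while the actual UNDRR methodology prescribes sector-specific parameters:

```python
# Sketch of unit-based direct loss estimation: number of physically damaged
# units per sector times a calibrated damage ratio and a country-specific
# unit cost. All values below are made up for illustration.
damaged_units = {"houses": 120, "roads_km": 8.5, "cropland_ha": 300}
unit_cost_usd = {"houses": 25_000, "roads_km": 150_000, "cropland_ha": 1_200}
damage_ratio  = {"houses": 0.4, "roads_km": 0.6, "cropland_ha": 0.8}

loss = sum(damaged_units[s] * damage_ratio[s] * unit_cost_usd[s]
           for s in damaged_units)
print(f"estimated direct economic loss: US$ {loss:,.0f}")
```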
This thesis reveals that, after earthquakes, floods have the second most destructive effects in Turkey in terms of human and economic impacts, with over 800 fatalities and US$ 885.7 million in economic losses between 1960 and 2020, and that they deserve more attention on the national scale. The clustering results of the dominant flood-producing mechanisms (e.g. circulation pattern types, extreme rainfall, sudden snowmelt) provide crucial information for source and pathway identification, which can be used as base information for hazard identification in the preliminary risk assessment process. The implementation of the UNDRR loss estimation model shows that the model, with country-specific parameters, calibrated damage ratios and sufficient event documentation (i.e. physically damaged units), can be recommended for providing first estimates of the magnitude of direct economic losses, even shortly after events have occurred, since it performed well when estimates were compared to documented losses.
The presented results can contribute to improving the national disaster loss database in Turkey and thus enable a better monitoring of the national progress and achievements with regard to the targets stated by the SFDRR. In addition, the outcomes can be used to better characterize and classify flood events. Information on the main underlying factors and aggravating flood pathways further supports the selection of suitable risk reduction policies.
All input variables used in this thesis were obtained from publicly available data. The results are openly accessible and can be used for further research.
As an overall conclusion, consistent loss data collection and better event documentation should receive more attention to allow reliable monitoring of the implementation of the SFDRR. Better event documentation should be established in Turkey according to a globally accepted standard for disaster classification and loss estimation. Ultimately, this enables stakeholders to create better risk mitigation actions based on clear hazard definitions, flood event classification and consistent loss estimations.
The development and optimization of carbonaceous materials is of great interest for several applications, including gas sorption, electrochemical storage and conversion, and heterogeneous catalysis. In this thesis, the exploration and optimization of nitrogen-containing carbonaceous materials by direct condensation of carefully chosen molecular precursors is presented. As suggested by the concept of noble carbons, choosing a stable, nitrogen-containing precursor leads to an even more stable, nitrogen-doped carbonaceous material with controlled structure and electronic properties. Molecules fulfilling this requirement are, for example, nucleobases. The direct condensation of nucleobases leads to carbonaceous materials with high nitrogen content without any further post- or pretreatment. By using salt melt templating, the pore structure can be adjusted without the use of hazardous or toxic reagents, and the template can be reused.
Using these simple tools, the synergetic effect of the pore structure and nitrogen content of the materials can be explored. Within this thesis, the influence of the condensation parameters is correlated with the structure and performance of the materials. First, the influence of the condensation temperature on the porosity and nitrogen content of condensed guanine is discussed, and the discovery of highly CO2-selective structural pores in C1N1 materials is shown. Further tuning of the pore structure by salt melt templating is then explored, and the potential of the prepared materials as heterogeneous catalysts and their basic catalytic strength are correlated with their nitrogen content and pore morphology. A similar approach is used to explore the water sorption behavior of uric acid derived carbonaceous materials as potential sorbents for heat transformation applications. Changes in the maximum water uptake and hydrophilicity of the prepared materials are correlated with the nitrogen content and pore architecture. Due to the high thermal stability, porosity and nitrogen content of ionic liquid derived nitrogen-doped carbonaceous materials, a simple impregnation and calcination route can be used to obtain copper nanocluster decorated nitrogen-doped carbonaceous materials. Their activity as catalysts for the oxygen reduction reaction is shown, and structure-performance relations are discussed.
In conclusion, the versatility of nitrogen-doped carbonaceous materials with a nitrogen-to-carbon ratio of up to one is demonstrated. The possibility of tuning both the pore structure and the nitrogen content by a simple procedure based on salt melt templating and molecular precursors, and the effect of these parameters on performance, are discussed.
This work develops hybrid methods of imaging spectroscopy for open pit mining and examines their feasibility compared with the state of the art. The material distribution within a mine face differs on the small scale and within daily assigned extraction segments. These changes can be relevant to subsequent processing steps but are not always visually identifiable prior to extraction. Misclassifications that cause false allocations of extracted material need to be minimized in order to reduce energy-intensive material re-handling. The use of imaging spectroscopy aims at the allocation of relevant deposit-specific materials before extraction and allows for efficient material handling after extraction. The aim of this work is the parameterization of imaging spectroscopy for pit mining applications and the development and evaluation of a workflow for ground-based spectral characterization of a mine face. In this work, an application-based sensor adaptation is proposed. The sensor complexity is reduced by down-sampling the spectral resolution of the system based on the samples' spectral characteristics. This was achieved by evaluating existing hyperspectral outcrop analysis approaches based on laboratory sample scans from the iron quadrangle in Minas Gerais, Brazil, and by developing a spectral mine face monitoring workflow, which was tested for both an operating and an inactive open pit copper mine in the Republic of Cyprus.
The workflow presented here is applied to three regional data sets: 1) iron ore samples from Brazil (laboratory); 2) samples and hyperspectral mine face imagery from the copper-gold-pyrite mine Apliki, Republic of Cyprus (laboratory and mine face data); and 3) samples and hyperspectral mine face imagery from the copper-gold-pyrite deposit Three Hills, Republic of Cyprus (laboratory and mine face data). The hyperspectral laboratory dataset of fifteen Brazilian iron ore samples was used to evaluate different analysis methods and different sensor models. Nineteen commonly used methods for analyzing and mapping hyperspectral data were compared regarding their resulting data products, mapping accuracy and computation time. Four of the evaluated methods were selected for subsequent analyses as the best-performing algorithms: the spectral angle mapper (SAM), a support vector machine algorithm (SVM), the binary feature fitting algorithm (BFF) and the EnMap geological mapper (EnGeoMap). Next, commercially available imaging spectroscopy sensors were evaluated for their usability under open pit mining conditions. Step-wise downsampling of the data, i.e. reducing the number of bands while increasing each band's bandwidth, was performed to investigate a possible simplification and ruggedization of a sensor without a quality fall-off in the mapping results. The impact of the atmosphere, visible in the spectrum between 1300-2010 nm, was reduced by excluding this spectral range from the data for mapping. This tested the feasibility of the method under realistic open pit data conditions. Thirteen datasets based on the different downsampled sensors were analyzed with the four predetermined methods. The optimum sensor for spectral mine face material distinction was determined to be a VNIR-SWIR sensor with 40 nm bandwidths in the VNIR and 15 nm bandwidths in the SWIR spectral range, excluding the atmospherically impacted bands. The Apliki mine sample dataset was used for the application of the identified optimal analyses and sensors. Thirty-six samples were analyzed geochemically and mineralogically. The sample spectra were compiled into two spectral libraries, both distinguishing between seven different geochemical-spectral clusters. The reflectance dataset was downsampled to five different sensors. The five datasets were mapped with the SAM, BFF and SVM methods, achieving mapping accuracies of 85-72%, 85-76% and 57-46%, respectively. One mine face scan of Apliki was used for the application of the developed workflow. The mapping results were validated against the geochemistry and mineralogy of thirty-six documented field sampling points and a zonation map of the mine face based on sixty-six samples and field mapping. The mine face was analyzed with SAM and BFF. The analysis maps were visualized on top of a Structure-from-Motion-derived 3D model of the open pit. The mapped geological units and zones correlate well with the expected zonation of the mine face. The third set of hyperspectral imagery, from Three Hills, was available for applying the fully developed workflow. Geochemical sample analyses and laboratory spectral data of fifteen different samples from the Three Hills mine, Republic of Cyprus, were used to analyze a downsampled mine face scan of the open pit. Here, areas of low, medium and high ore content were identified.
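As an illustration of the SAM algorithm used above, the following minimal sketch assigns each pixel of a hyperspectral cube to the library spectrum with the smallest spectral angle; array shapes and names are illustrative:

```python
# Minimal sketch of the spectral angle mapper (SAM): each pixel is assigned
# to the reference spectrum with the smallest spectral angle.
import numpy as np

def spectral_angle(pixel, ref):
    """Angle in radians between two spectra (1-d arrays of band values)."""
    cos = np.dot(pixel, ref) / (np.linalg.norm(pixel) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(cube, library):
    """cube: (rows, cols, bands); library: (n_refs, bands). Returns class map."""
    angles = np.stack([
        np.apply_along_axis(spectral_angle, 2, cube, ref) for ref in library
    ], axis=-1)
    return angles.argmin(axis=-1)

# toy example: 2x2 image, 4 bands, 2 reference spectra
cube = np.random.rand(2, 2, 4)
library = np.random.rand(2, 4)
print(sam_classify(cube, library))
```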
The developed workflow is successfully applied to the open pit mines Apliki and Three Hills and the spectral maps reflect the prevailing geological conditions. This work leads through the acquisition, preparation and processing of imaging spectroscopy data, the optimum choice of analysis methodology, and the utilization of simplified, robust sensors that meet the requirements of open pit mining conditions. It accentuates the importance of a site-specific and deposit-specific spectral library for the mine face analysis and underlines the need for geological and spectral analysis experts to successfully implement imaging spectroscopy in the field of open pit mining.
This thesis addresses the preparation and characterization of mixed-matrix membranes (MMMs) for gas separation. Various fillers were combined with the membrane material polysulfone to produce MMMs. Three active and two passive fillers were used. The active fillers possessed pore openings capable of separating gases according to molecular size, resulting in a higher ideal separation factor for certain gas pairs than in polysulfone itself. Because the pores form permanent channels in the active fillers, gas transport (permeability) is faster than in polysulfone. The active fillers were the zeolite SAPO-34 and two batches of a zeolitic imidazolate framework (ZIF), ZIF-8. The two ZIF-8 batches differed in their specific surface area, so that this influence could be explicitly included in the gas transport investigations. The passive fillers were an amino-functionalized silica gel and non-porous (dense) glass beads. The silica gel had pores too large to separate gases effectively, and the glass beads could not separate gases at all, as they had no pores.
It is known from the literature that embedding fillers often leads to defects in MMMs. One aim of this work was therefore to optimize the embedding. Furthermore, gas transport in the MMMs of this work was to be compared with that in an unloaded polysulfone membrane. Owing to the more selective separation behavior of the active fillers compared with the membrane material, embedding active fillers was expected to improve the separation performance of the MMMs progressively with increasing filler loading.
To investigate the properties of the MMMs, they were characterized by scanning electron microscopy (SEM), gas permeation measurements (GP) and thermogravimetric analysis coupled with mass spectrometry (TGA-MS).
SEM investigations showed improved embedding when a polymeric adhesion promoter was used. The optimized embedding was compared with embedding without an adhesion promoter and with results from the literature describing the use of various silanes as adhesion promoters. Despite the improved embedding, only a small increase in the ideal separation factor of the MMMs relative to the unloaded polysulfone membranes was observed, and only at low filler loadings (10 and 20 wt% relative to the membrane material). At higher filler loadings (30, 40 and 50 wt%), a marked increase in permeability accompanied by a strongly decreasing ideal separation factor was observed. TGA-MS measurements further revealed that the zeolite SAPO-34 used had pore openings blocked by water molecules. This prevented gas transport in the filler, so that its separation capability could not be exploited. The fillers ZIF-8 (both batches) and amino-functionalized silica gel showed no blocked pores. Nevertheless, no improvement in gas separation or gas transport properties was seen in these MMMs either. MMMs with dense glass beads as filler showed the same gas separation and gas transport behavior as all MMMs with the aforementioned fillers.
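For context, gas transport in MMMs is often described by the ideal separation factor and, as a first-order model, the Maxwell equation; the sketch below uses these textbook relations and is not necessarily the analysis applied in this thesis:

```python
# Sketch: ideal separation factor and the Maxwell model, a common first-order
# description of MMM permeability; not necessarily the analysis used in this
# thesis. P_c: continuous (polymer) phase permeability, P_d: dispersed
# (filler) phase permeability, phi: filler volume fraction.
def ideal_separation_factor(p_gas_a, p_gas_b):
    return p_gas_a / p_gas_b

def maxwell_permeability(p_c, p_d, phi):
    return p_c * (p_d + 2 * p_c - 2 * phi * (p_c - p_d)) / \
                 (p_d + 2 * p_c + phi * (p_c - p_d))

# Example: a filler 10x more permeable than the polymer at 20 vol% loading
print(maxwell_permeability(p_c=1.0, p_d=10.0, phi=0.2))  # ~1.5x enhancement
```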
In this work, despite the optimized embedding of inorganic fillers, no improvement of the gas separation or gas transport properties of MMMs could be demonstrated. Rather, an influence of the filler amount on the gas transport properties of the MMMs was found. The changes in the MMMs relative to polysulfone result from the consequences of embedding fillers in the matrix polymer: embedding alters the properties of the matrix polymer, so that gas transport is also affected. Furthermore, it was documented that the resulting membrane structure is influenced by the filler loading, independently of the filler type. A correlation between filler amount and altered membrane structure was found.
Insulin resistance is a central component of the metabolic syndrome and contributes substantially to the development of type 2 diabetes. One possible cause of insulin resistance is a chronic low-grade inflammation originating in the adipose tissue of overweight individuals. Infiltrating macrophages produce increased amounts of pro-inflammatory mediators such as cytokines and prostaglandins, raising the concentrations of these substances both locally and systemically. In addition, overweight individuals exhibit a disturbed fatty acid metabolism and increased intestinal permeability. An increased flux of free fatty acids from adipose tissue into other organs raises local concentrations in these organs, and increased intestinal permeability facilitates the entry of pathogens and other foreign substances into the body.
The aim of this work was to investigate whether high concentrations of insulin, the bacterial component lipopolysaccharide (LPS) or the free fatty acid palmitate can trigger or amplify an inflammatory response in macrophages, and whether this inflammatory response can contribute to the development of insulin resistance. It was further investigated whether metabolites and signaling substances whose concentrations are elevated in the metabolic syndrome can promote the production of prostaglandin (PG) E2, and whether PGE2 in turn can regulate the inflammatory response and its own production in macrophages. To investigate the influence of these factors on the production of pro-inflammatory mediators in macrophages, monocyte-like cell lines and primary human monocytes isolated from the blood of healthy donors were differentiated into macrophages and incubated with insulin, LPS, palmitate and/or PGE2. In addition, primary rat hepatocytes were isolated and incubated with supernatants of insulin-stimulated macrophages to investigate whether the inflammatory response in macrophages contributes to the development of insulin resistance in hepatocytes.
Insulin induced the expression of pro-inflammatory cytokines in macrophage-like cell lines, most likely primarily via the phosphoinositide 3-kinase (PI3K)-Akt pathway with subsequent activation of the transcription factor NF-κB (nuclear factor 'kappa-light-chain-enhancer' of activated B cells). Supernatants of insulin-stimulated macrophages, containing the released cytokines, inhibited the insulin-induced expression of glucokinase in primary rat hepatocytes.
LPS and palmitate, whose local concentrations are elevated in the metabolic syndrome, were also able to stimulate the expression of pro-inflammatory cytokines in macrophage-like cell lines. While LPS, according to the literature, undisputedly acts via activation of toll-like receptor (TLR) 4, palmitate appeared to act largely independently of TLR4; instead, de novo ceramide synthesis seemed to play a decisive role. Moreover, insulin amplified both the LPS- and the palmitate-induced inflammatory response in both cell lines. The results obtained in cell lines were largely confirmed in primary human macrophages.
Furthermore, insulin as well as LPS and palmitate induced the production of PGE2 in the macrophages investigated. The data suggest that this is due to an increased expression of PGE2-synthesizing enzymes.
PGE2, in turn, inhibited the stimulus-dependent expression of the pro-inflammatory cytokine tumor necrosis factor (TNF) α in U937 macrophages, but amplified the expression of the pro-inflammatory cytokines interleukin (IL) 1β and IL-8. It also enhanced the expression of IL-6-type cytokines, which can act both pro- and anti-inflammatorily. In addition, PGE2 enhanced the expression of PGE2-synthesizing enzymes and thus appears to be able to amplify its own synthesis.
In summary, the release of pro-inflammatory mediators from macrophages in the course of hyperinsulinemia can promote the development of insulin resistance. Insulin is thus able to set in motion a vicious circle of ever-increasing insulin resistance.
Metabolites and signaling substances whose concentrations are elevated in the metabolic syndrome (for example LPS, free fatty acids and PGE2) also triggered inflammatory responses in macrophages. The mutual interplay of insulin with these metabolites and signaling substances elicited a stronger inflammatory response in macrophages than any of the individual components alone. The cytokines released in this way could contribute to the manifestation of insulin resistance and the metabolic syndrome.
Membrane contact sites are of particular interest in the fields of synthetic biology and biophysics. They are involved in a great variety of cellular functions. They form between two cellular organelles, or between an organelle and the plasma membrane, in order to establish a communication path for molecule transport or signal transmission.
The development of an artificial membrane system that can mimic membrane contact sites using bottom-up synthetic biology was the goal of this research study. For this, a multi-compartmentalised giant unilamellar vesicle (GUV) system was created, with the membrane of the outer vesicle mimicking the plasma membrane and the inner GUVs posing as cellular organelles.
In the following steps, three different strategies were used to achieve internal membrane-membrane adhesion.
In the context of religious studies, this study charts a path toward investigating the modification and reorientation of a single Christian pictorial motif whose pictorial formula has persisted to the present day.
In contemporary art, the pictorial motif of the Pietà is increasingly used as an innovative pictorial formula in political or social contexts to articulate existential life experiences or socio-critical and political accusations. It is experiencing a relaunch in media coverage, art, films and everyday culture. Artists and photojournalists increasingly give their works the title Pietà, or the title is ascribed to them from outside. The semantics of this specific pictorial motif evidently strike a chord and can evoke an emotional attunement in viewers. Of interest for this study is the system of norms and values together with the underlying process of transmission and transformation. To date, no monograph has analyzed the connections between the revival of a primarily Christian pictorial motif and its contemporary references to violence, death, fear, transience, aging or loss.
The focus is on the question of a modification or reinterpretation of this iconography. Tracing a possible dynamic development process of the pictorial motif is intended to clarify which changed functions are ascribed to the Pietà motif in contemporary art. Using a set of internationally renowned contemporary artists, possible changes and an associated shift in societal meaning since the 21st century are analyzed.
Against this background, the question of the cross-religious efficacy of the iconic presence of a religious pictorial motif in art and visual media is of current relevance. This study makes an exemplary contribution to affect research, which in recent years has increasingly addressed the representation and communication of emotions in audiovisual media.
This thesis addresses the synthesis and characterization of new functionalized ionic liquids and their polymerization. The ionic liquids were prepared with both polymerizable cations and polymerizable anions. Azobis(isobutyronitrile) (AIBN) was used as the radical initiator for thermally initiated polymerizations, while bis-4-(methoxybenzoyl)diethylgermanium (Ivocerin®) served as the initiator for photochemically initiated polymerizations.
The homopolymer poly(dimethylaminoethyl methacrylate) was investigated by gel permeation chromatography (GPC) and was modified in a polymer-analogous fashion only after the GPC measurements. After quaternization and subsequent anion metathesis, the intrinsic viscosities of these polymers were determined and compared with those of the directly polymerized ionic liquids. For the directly polymerized poly(N-[2-(methacryloyloxy)ethyl]-N-butyl-N,N-dimethylammonium bis(trifluoromethylsulfonyl)imide), [η_Huggins] was 100 mL/g, whereas the polymer prepared by polymer-analogous modification gave [η_Huggins] = 40 mL/g.
The ionic liquids with polymerizable functional groups were investigated by photo-DSC with respect to the maximum polymerization rate (Rp,max), the time at which this maximum was reached (t_max), the glass transition temperature (Tg) and the conversion of vinyl protons. These measurements analyzed the influence of different alkyl chain lengths on the ammonium ion as well as the influence of different anions with a constant cation structure. The ethyl-substituted cation polymerized most slowly, with a t_max of 21 seconds and a maximum polymerization rate (Rp,max) of 3.3·10⁻² s⁻¹. The t_max values of the other alkyl-substituted ionic liquids with one polymerizable functional group were between 10 and 15 seconds. The glass transition temperatures of the polymers prepared by photoinduced polymerization were close together, at 44 to 55 °C. All monomers showed a high conversion of vinyl protons, between 93 and 100%.
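For context, Rp,max and conversion are typically derived from the photo-DSC heat flow via the theoretical polymerization enthalpy per double bond; the following sketch uses a synthetic exotherm and an assumed literature value for a methacrylate C=C, not data from this thesis:

```python
# Generic photo-DSC evaluation sketch: the polymerization rate follows from
# the measured heat flow and the theoretical reaction enthalpy per mole of
# double bonds; conversion is the running integral. Numbers are illustrative.
import numpy as np

dt = 0.1                                   # s, sampling interval
t = np.arange(0, 60, dt)
# synthetic exotherm in W/g standing in for a measured photo-DSC trace
heat_flow = 2.0 * np.exp(-0.5 * ((t - 15) / 5) ** 2)

dh_theor = 54.9e3    # J/mol, literature value for a methacrylate C=C (assumption)
molar_mass = 400.0   # g/mol of the monomer (illustrative)
n0 = 1.0 / molar_mass          # mol of double bonds per g, one C=C per monomer

rate = heat_flow / (dh_theor * n0)     # d(conversion)/dt in 1/s
conversion = np.cumsum(rate) * dt
print(f"Rp,max = {rate.max():.2e} 1/s at t_max = {t[rate.argmax()]:.1f} s, "
      f"final conversion = {conversion[-1]:.0%}")
```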
Polymer films could be produced using a belt conveyor unit equipped with an LED (λ = 395 nm). The conversion of double-bond equivalents in these films was determined by 1H NMR spectroscopy. In dynamic mechanical analysis, the polymer films were subjected to periodically alternating loads at a constant heating rate and frequency in order to determine the glass transition temperatures. The lowest Tg, 26 °C, was found for the butyl-substituted N-[2-(methacryloyloxy)ethyl]-N-butyl-N,N-dimethylammonium bis(trifluoromethylsulfonyl)imide prepared as a polymer film with Ivocerin® as initiator, whereas the highest Tg, 51 °C, was found for the same polymer prepared directly by free radical bulk polymerization of the ionic liquid with AIBN. In addition, the topography of the films was examined with an atomic force microscope, which revealed a domain structure for the polymer N-[2-(methacryloyloxy)ethyl]-N-butyl-N,N-dimethylammonium tris(pentafluoroethyl)trifluorophosphate.
Active Galactic Nuclei (AGN) are considered the main powering source of active galaxies, where central supermassive black holes (SMBHs) with masses between 10⁶ and 10⁹ M⊙ gravitationally pull in the surrounding material via accretion. The AGN phenomenon spans a very wide range of luminosities, from the most luminous high-redshift quasars (QSOs) to the local low-luminosity AGN (LLAGN) with significantly weaker luminosities. While "typical" luminous AGN distinguish themselves by their characteristic blue featureless continuum, broad emission lines (BELs) with full widths at half maximum (FWHM) of the order of a few thousand km s⁻¹ arising from the so-called broad line region (BLR), and strong radio and/or X-ray emission, the detection of LLAGN is quite challenging due to their extremely weak emission lines and the absence of the power-law continuum. In order to fully understand AGN evolution and their duty cycles across cosmic history, we need a proper knowledge of the AGN phenomenon at all luminosities and redshifts, as well as perspectives from different wavelength bands.
In this thesis I present a search for AGN signatures in central spectra of 542 local (0.005 < z < 0.03) galaxies from the Calar Alto Legacy Integral Field Area (CALIFA) survey. The adopted aperture of 3′′ × 3′′ corresponds to the central ∼100-500 pc for the redshift range of CALIFA. Using the standard emission-line ratio diagnostic diagrams, we initially classified all CALIFA emission-line galaxies (526) into star-forming, LINER-like, Seyfert 2 and intermediate classes. We further detected signatures of a broad Hα component in 89 spectra from the sample, of which more than 60% are present in the central spectra of LINER-like galaxies. These BELs are very weak, with luminosities in the range 10³⁸-10⁴¹ erg s⁻¹, but with FWHMs between 1000 km s⁻¹ and 6000 km s⁻¹, comparable to those of luminous high-z AGN. This result implies that type 1 AGN are in fact quite frequent in the local Universe. We also identified an additional 29 Seyfert 2 galaxies using the emission-line ratio diagnostic diagrams.
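For reference, the standard emission-line ratio (BPT) classification mentioned above can be sketched with the widely used Kauffmann et al. (2003) and Kewley et al. (2001) demarcation lines; the simple decision logic below is illustrative and omits the Seyfert/LINER separation:

```python
# Sketch of the emission-line ratio (BPT) classification, using the standard
# Kauffmann et al. (2003) and Kewley et al. (2001) demarcation lines.
def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a spectrum from log([NII]/Halpha) and log([OIII]/Hbeta)."""
    kauffmann = 0.61 / (log_nii_ha - 0.05) + 1.3   # empirical SF upper bound
    kewley = 0.61 / (log_nii_ha - 0.47) + 1.19     # theoretical maximum starburst
    if log_nii_ha < 0.05 and log_oiii_hb < kauffmann:
        return "star-forming"
    if log_nii_ha < 0.47 and log_oiii_hb < kewley:
        return "intermediate"
    # AGN branch; the Seyfert/LINER separation needs further criteria
    return "AGN (Seyfert/LINER)"

print(bpt_class(-0.5, 0.0))   # -> star-forming
print(bpt_class(0.3, 1.0))    # -> AGN (Seyfert/LINER)
```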
Using the MBH-σ∗ correlation, we estimated the black hole masses of 55 type 1 AGN from CALIFA, a sample for which we had estimates of the bulge stellar velocity dispersion σ∗. We compared these masses to those estimated from the virial method and found large discrepancies. We analyzed the validity of both methods for black hole mass estimation of local LLAGN and concluded that virial scaling relations most likely can no longer be applied as a valid MBH estimator in such a low-luminosity regime. These black holes accrete at very low rates, with Eddington ratios in the range 4.1 × 10⁻⁵ - 2.4 × 10⁻³. The detection of BELs with such low luminosities and at such low Eddington ratios implies that these LLAGN are still able to form the BLR, although probably with a modified structure of the central engine.
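The two mass estimators compared here have the following generic forms (in LaTeX notation; the calibration constants α, β, f and γ differ between published calibrations and are not taken from this thesis):

```latex
% Generic forms; calibration constants (alpha, beta, f, gamma) vary between
% published calibrations and are not taken from the thesis.
\log\left(\frac{M_\mathrm{BH}}{M_\odot}\right)
  = \alpha + \beta\,\log\left(\frac{\sigma_*}{200\ \mathrm{km\,s^{-1}}}\right)
  \quad \text{($M_\mathrm{BH}$--$\sigma_*$ relation)}

M_\mathrm{BH}^{\mathrm{vir}} = f\,\frac{R_\mathrm{BLR}\,(\Delta V)^2}{G},
  \qquad R_\mathrm{BLR} \propto L^{\gamma}
  \quad \text{(virial method; $\Delta V$ from the broad H$\alpha$ FWHM)}
```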
In order to obtain a full picture of black hole growth across cosmic time, it is essential to study black holes in different stages of their activity. For that purpose, we estimated the broad AGN luminosity function (AGNLF) of our entire type 1 AGN sample using the 1/Vmax method. The shape of the AGNLF indicates an apparent flattening below luminosities LHα ∼ 10³⁹ erg s⁻¹. Correspondingly, we estimated the active black hole mass function (BHMF) and Eddington ratio distribution function (ERDF) for a sub-sample of type 1 AGN for which we have MBH and λ estimates. The flattening is also present in both the BHMF and the ERDF, around log(MBH) ∼ 7.7 and log(λ) < −3, respectively. We estimated the fraction of active SMBHs in CALIFA by comparing our active BHMF to that of the local quiescent SMBHs. The shape of the active fraction, which decreases with increasing MBH, as well as the flattening of the AGNLF, BHMF and ERDF, is consistent with the scenario of AGN cosmic downsizing.
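The 1/Vmax estimator used for the AGNLF can be sketched as follows; the mock luminosities and the toy volume model are purely illustrative:

```python
# Sketch of the 1/Vmax luminosity function estimator: each object contributes
# the inverse of the maximum comoving volume in which it would still pass the
# survey selection; values and the volume model here are illustrative.
import numpy as np

def vmax_lf(lum, vmax, bins):
    """lum: luminosities; vmax: max detection volume per object;
    bins: log10-luminosity bin edges. Returns Phi per bin (per dex per volume)."""
    phi = np.zeros(len(bins) - 1)
    logl = np.log10(lum)
    for i in range(len(bins) - 1):
        sel = (logl >= bins[i]) & (logl < bins[i + 1])
        phi[i] = np.sum(1.0 / vmax[sel]) / (bins[i + 1] - bins[i])
    return phi

lum = 10 ** np.random.uniform(38, 41, 200)     # mock Halpha luminosities
vmax = (lum / 1e38) ** 1.5 * 1e4               # toy flux-limited volumes, Mpc^3
print(vmax_lf(lum, vmax, bins=np.arange(38, 41.5, 0.5)))
```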
To complete the AGN census in the CALIFA galaxy sample, it is necessary to search for AGN in various wavelength bands. For this purpose, we cross-correlated all 542 CALIFA galaxies with multiwavelength surveys: the Swift-BAT 105-month catalogue (in the hard 15-195 keV X-ray band) and the NRAO VLA Sky Survey (NVSS, in the 1.4 GHz radio domain). This added 1 new AGN candidate in the X-ray and 7 in the radio band to our local LLAGN count.
It is possible to detect AGN emission signatures within 10-20 kpc outside of the central galactic regions. This may happen when the central AGN has recently switched off and the photoionized material is spread across the galaxy within the light-travel time, or when the photoionized material is blown away from the nucleus by outflows. In order to detect these extended AGN regions, we constructed spatially resolved emission-line ratio diagnostic diagrams of all emission-line galaxies from CALIFA and found 1 new object that was previously not identified as an AGN.
Obtaining the complete AGN census in CALIFA, with five different AGN types, showed that LLAGN make up a significant fraction, 24%, of the emission-line galaxies in the CALIFA sample. This result implies that AGN are quite common in the local Universe and, although in a very low activity stage, account for a large fraction of all local SMBHs. Within this thesis we approached the upper limit of the AGN fraction in the local Universe and gained a deeper understanding of the LLAGN phenomenon.
Summary of the dissertation "Neuartige DBD-Fluoreszenzfarbstoffe: Synthese, Untersuchungen und Anwendungen" (Novel DBD fluorescent dyes: synthesis, investigations and applications) by Leonard John
In this work, two new concepts for the preparation of unsymmetrically functionalized DBD fluorophores were developed on the basis of the established [1,3]dioxolo[4,5-f][1,3]benzodioxole (DBD) fluorescent dyes. Varying the electron-withdrawing substituents extended the color spectrum of DBD fluorophores, while all other spectroscopic parameters (fluorescence lifetime, fluorescence quantum yield and Stokes shift) retained their characteristically high values. In addition to varying the electron-withdrawing substituents, the π-system of the DBD dye was enlarged by introducing stilbene and tolane derivatives. The stilbene derivatives showed spectroscopic properties similar to those of the established DBD dyes.
Fluorophores with long-wavelength emission are particularly interesting for biological applications because of the large tissue penetration depth. Since the most red-shifted representative of the O4-DBD dyes is poorly soluble in polar media, a route for introducing solubilizing groups was sought. A carboxylic acid group was chosen to increase the hydrophilicity. One of four investigated methods proved successful, so that the desired molecule could be isolated. However, an increased water solubility was not observed.
Fluorescence-labeled lipids are needed for research on lipid metabolism disorders such as Alzheimer's disease. In order to probe different regions of a membrane, the aim was to place the fluorophore at different positions within the fatty acid, with the total chain length of the DBD lipid corresponding to a C18 chain, analogous to stearic acid. By stepwise introduction of the substituents, three DBD lipids were prepared in which the fluorophore is located at different positions within the chain. The photophysical properties of the lipids deviate only marginally from those of the pure fluorophores. Incorporation into giant unilamellar vesicles (GUVs) was observed for two derivatives, although neither was domain-specific.
A further aim of this work was to replace the four oxygen atoms in the DBD core stepwise by sulfur atoms and to vary the ring sizes of the DBD fluorophore. Regarding ring size, the 1,2-S2-DBD with two five-membered rings showed the best spectroscopic properties. Through the synthesis of two further sulfur-containing DBD cores (S1- and 1,4-S2-DBD), a total of three new dye classes were made accessible. For all new chromophores, electron-withdrawing substituents (aldehyde, acyl, ester, carboxy) were introduced and the respective derivatives were investigated spectroscopically. With an increasing number of sulfur atoms in the core, a bathochromic shift of the emission is observed, while the values for the fluorescence lifetime and quantum yield decrease. The 1,4-S2 dialdehyde derivative shows the optimal combination of long-wavelength emission, high fluorescence lifetime and high quantum yield. For the S1 and 1,2-S2 dialdehyde derivatives, concepts were developed to introduce bioreactive groups (alkyne, HOSu, maleimide) so that the fluorophores can be applied in biological systems.
Major challenges during geothermal exploration and exploitation include the structural-geological characterization of the geothermal system and the application of sustainable monitoring concepts to explain changes in a geothermal reservoir during production and/or reinjection of fluids. In the absence of sufficiently permeable reservoir rocks, faults and fracture networks are preferred drilling targets because they can facilitate the migration of hot and/or cold fluids. In volcanic-geothermal systems, considerable amounts of gas can be released at the Earth's surface, often along these fluid-releasing structures.
In this thesis, I developed and evaluated different methodological approaches and measurement concepts to determine the spatial and temporal variation of several soil gas parameters and to understand the structural control on fluid flow. In order to validate their potential as innovative geothermal exploration and monitoring tools, these methodological approaches were applied to three different volcanic-geothermal systems. At each site an individual survey design was developed to address the site-specific questions.
The first study presents results of combined measurements of CO2 flux and ground temperature, together with the analysis of isotope ratios (δ13C-CO2, 3He/4He), across the main production area of the Los Humeros geothermal field, to identify locations with a connection to its supercritical (T > 374 °C and P > 221 bar) geothermal reservoir. The systematic, large-scale (25 x 200 m) CO2 flux scouting survey proved to be a fast and flexible way to identify areas of anomalous degassing. Subsequent sampling with high-resolution surveys revealed the actual extent and heterogeneous pattern of the anomalously degassing areas. These were related to the internal fault hydraulic architecture and allowed favourable structural settings for fluid flow, such as fault intersections, to be assessed. Finally, areas of previously unknown structurally controlled permeability with a connection to the superhot geothermal reservoir were identified, which represent promising targets for future geothermal exploration and development.
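Anomalous degassing areas in such surveys are commonly delineated by separating a (roughly log-normal) background flux population from anomalous fluxes; the following minimal sketch uses a robust threshold in log space on mock data, and the thesis may use a different statistical procedure:

```python
# Minimal sketch: flag anomalous CO2 flux measurements against a log-normal
# background population using a robust threshold (median + 2 robust sigma in
# log space). The actual survey evaluation may differ; all data are mock.
import numpy as np

rng = np.random.default_rng(1)
background = rng.lognormal(mean=1.0, sigma=0.5, size=480)   # g m-2 d-1, mock
anomalies = rng.lognormal(mean=4.0, sigma=0.6, size=20)     # hydrothermal, mock
flux = np.concatenate([background, anomalies])

log_flux = np.log(flux)
mad = np.median(np.abs(log_flux - np.median(log_flux)))     # robust spread
threshold = np.exp(np.median(log_flux) + 2 * 1.4826 * mad)
print(f"threshold = {threshold:.1f} g m-2 d-1, "
      f"{np.sum(flux > threshold)} anomalous of {flux.size} points")
```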
In the second study, I introduce a novel monitoring approach that examines the variation of CO2 flux to monitor changes in the reservoir induced by fluid reinjection. For this purpose, an automated multi-chamber CO2 flux system was deployed across the damage zone of a major normal fault crossing the Los Humeros geothermal field. Based on the results of the CO2 flux scouting survey, a suitable site with a connection to the geothermal reservoir was selected, as indicated by hydrothermal CO2 degassing and hot ground temperatures (> 50 °C). The results revealed a response of the gas emissions to changes in reinjection rates within 24 h, proving an active hydraulic communication between the geothermal reservoir and the Earth's surface. This is a promising monitoring strategy that provides nearly real-time, in-situ data about changes in the reservoir and allows a timely reaction to unwanted changes (e.g., pressure decline, seismicity).
The third study presents results from the Aluto geothermal field in Ethiopia, where an area-wide, multi-parameter analysis consisting of measurements of CO2 flux, 222Rn and 220Rn activity concentrations, and ground temperatures was conducted to detect hidden permeable structures. 222Rn and 220Rn activity concentrations were evaluated as soil gas parameters complementary to CO2 flux in order to investigate their potential for understanding tectono-volcanic degassing. The combined measurement of all parameters enabled the development of soil gas fingerprints, a novel visualization approach. Depending on the magnitude of the gas emissions and their migration velocities, the study area was divided into volcanic (heat), tectonic (structures) and volcano-tectonic dominated areas. Based on these concepts, volcano-tectonic dominated areas, where hot hydrothermal fluids migrate along permeable faults, represent the most promising targets for future geothermal exploration and development in this field. Two such areas were identified in the south and south-east that have not yet been targeted for geothermal exploitation. Furthermore, two previously unknown areas of structurally related permeability could be identified from the 222Rn and 220Rn activity concentrations.
Finally, the fourth study presents a novel measurement approach to detect structurally controlled CO2 degassing in the Ngapouri geothermal area, New Zealand. For the first time, the tunable diode laser (TDL) method was applied in a low-degassing geothermal area to evaluate its potential as a geothermal exploration method. Although the sampling approach is based on profile measurements, which leads to low spatial resolution, the results showed a link between known or inferred faults and increased CO2 concentrations. Thus, the TDL method proved successful in determining structurally related permeability, even in areas where no obvious geothermal activity is present. Once an area of anomalous CO2 concentrations has been identified, the survey can easily be complemented by CO2 flux grid measurements to determine the extent and orientation of the degassing segment.
With the results of this work, I was able to demonstrate the applicability of systematic, area-wide soil gas measurements for geothermal exploration and monitoring. In particular, combining different soil gases measured across different networks enables the identification and characterization of fluid-bearing structures, an approach that has not yet been adopted as standard practice. The individual studies present efficient, cost-effective workflows and demonstrate a hands-on approach to the successful and sustainable exploration and monitoring of geothermal resources, minimizing the resource risk during geothermal project development. Finally, to advance the understanding of the complex structure and dynamics of geothermal systems, a combination of comprehensive, state-of-the-art geological, geochemical, and geophysical exploration methods is essential.
Soft actuators have drawn significant attention due to their relevance for applications such as artificial muscles in devices developed for medicine and robotics. Tuning their performance and expanding their functionality are frequently achieved by chemical modification. The introduction of structural elements that make non-synthetic tuning of performance possible, allow control over physical appearance, and facilitate recycling is of great interest in the field of smart materials. The primary aim of this thesis was to create a shape-memory polymeric actuator in which the capability for non-synthetic tuning of the actuation performance is combined with reprocessability. Physically cross-linked polymeric matrices provide a solid material platform in which in situ processing methods can be employed to modify composition and morphology, enabling fine tuning of the related mechanical properties and the shape-memory actuation capability.
The morphological features required for shape-memory polymeric actuators, namely two crystallisable domains and anchoring points for physical cross-links, were embedded into a multiblock copolymer with poly(ε-caprolactone) and poly(L-lactide) segments (PLLA-PCL). Here, the melting transition of PCL was bisected into actuating and skeleton-forming units, while cross-linking was introduced via PLA stereocomplexation in blends with oligomeric poly(D-lactide) (ODLA). A PLLA segment number-average length of 12-15 repeating units was experimentally determined to enable PLA stereocomplex formation while remaining insufficient for isotactic crystallisation. The multiblock structure and phase dilution broaden the PCL melting transition, facilitating its separation into two conditionally independent crystalline domains. The low molar mass of the PLA stereocomplex components and the multiblock structure enable processing and reprocessing of the PLLA-PCL / ODLA blends with common non-destructive techniques. The modularity of the PLLA-PCL structure and of the synthetic approach allows independent tuning of the properties of its components. The designed material establishes a solid platform for the non-synthetic tuning of the thermomechanical and structural properties of thermoplastic elastomers.
To evaluate the thermomechanical stability of the formed physical network, three criteria were appraised: as physical cross-links, the PLA stereocomplexes have to be evenly distributed within the material matrix; their melting temperature must not overlap with the thermal transitions of the PCL domains; and they have to maintain structural integrity within the strain (ε) ranges subsequently applied in the shape-memory actuation experiments. Assigning PCL the function of the skeleton-forming and actuating units, and the PLA stereocomplexes the role of physical netpoints, shape-memory actuation was realised in the PLLA-PCL / ODLA blends. The reversible strain of shape-memory actuation was found to be a function of PLA stereocomplex crystallinity, i.e. of the physical cross-linking density, with a maximum of 13.4 ± 1.5% at a PLA stereocomplex content of 3.1 ± 0.3 wt%. In this way, shape-memory actuation can be tuned by adjusting the composition of the PLLA-PCL / ODLA blend, making the developed material a valuable asset for the production of cost-effective, tunable soft polymeric actuators for applications in medicine and soft robotics.
The Central Andes region in South America is characterized by a complex and heterogeneous deformation system. Recorded seismic activity and mapped neotectonic structures indicate that most of the intraplate deformation is located along the margins of the orogen, in the transitions to the foreland and the forearc. Furthermore, the actively deforming provinces of the foreland exhibit distinct deformation styles that vary along strike, as well as characteristic distributions of seismicity with depth: the style of deformation transitions from thin-skinned in the north to thick-skinned in the south, and the thickness of the seismogenic layer increases to the south. Based on geological and geophysical observations and numerical modelling, the most commonly invoked causes for the observed heterogeneity are variations in sediment thickness and composition, the presence of inherited structures, and changes in the dip of the subducting Nazca plate. However, there are still no comprehensive investigations of the relationship between the lithospheric composition of the Central Andes, its rheological state, and the observed deformation processes. The central aim of this dissertation is therefore to explore the link between the nature of the lithosphere in the region and the location of active deformation. Studying the lithospheric composition by means of independent-data integration establishes a strong basis for assessing the thermal and rheological state of the Central Andes and its adjacent lowlands, which in turn provides a new foundation for understanding the complex deformation of the region. Accordingly, the general workflow of the dissertation consists of the construction of a 3D data-derived and gravity-constrained density model of the Central Andean lithosphere, followed by the simulation of the steady-state conductive thermal field and the calculation of the strength distribution. Additionally, the dynamic response of the orogen-foreland system to intraplate compression is evaluated by means of 3D geodynamic modelling.
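For orientation, the 1D special case of the simulated steady-state conductive thermal field has a closed-form geotherm. Assuming uniform thermal conductivity $k$, surface temperature $T_0$, surface heat flow $q_0$, and radiogenic heat production $A$ (the thesis solves the full 3D problem with heterogeneous properties), the standard relation is

$$T(z) = T_0 + \frac{q_0}{k}\, z - \frac{A}{2k}\, z^{2}.$$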
The results of the modelling approach suggest that the inherited heterogeneous composition of the lithosphere controls the present-day thermal and rheological state of the Central Andes, which in turn influences the location and depth of active deformation processes. Most of the seismic activity and neotectonic structures are spatially correlated with regions of modelled high strength gradients, at the transition from the felsic, hot, and weak orogenic lithosphere to the more mafic, cooler, and stronger lithosphere beneath the forearc and the foreland. Moreover, the results of the dynamic simulation show a strong localization of the second invariant of the deviatoric strain rate in the same region, suggesting that shortening is accommodated at the transition zones between weak and strong domains. The vertical distribution of seismic activity appears to be influenced by the rheological state of the lithosphere as well. The depth at which the frequency distribution of hypocenters starts to decrease in the different morphotectonic units correlates with the position of the modelled brittle-ductile transitions; accordingly, a fraction of the seismic activity is located within the ductile part of the crust. An exhaustive analysis shows that practically all the seismicity in the region is restricted to depths above the 600 °C isotherm, coinciding with the upper temperature limit for brittle behavior of olivine. The occurrence of earthquakes below the modelled brittle-ductile transition could therefore be explained by the presence of strong residual mafic rocks from past tectonic events. Another potential cause of deep earthquakes is the existence of inherited shear zones in which brittle behavior is favored by a decrease in the friction coefficient. This hypothesis is particularly suitable for the broken foreland provinces of the Santa Barbara System and the Pampean Ranges, where geological studies indicate successive reactivation of structures through time. In the Santa Barbara System in particular, the results indicate that both mafic rocks and a reduction in friction are required to account for the observed deep seismic events.
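The brittle-ductile transition invoked here follows from comparing a frictional (Byerlee-type) strength with a temperature-dependent power-law creep strength and taking the weaker of the two at each depth. The Python sketch below computes such a yield strength envelope for a single crustal column; all parameter values (friction coefficient, creep constants, geotherm) are generic placeholders, not the calibrated values of the dissertation.

```python
import numpy as np

R = 8.314                 # gas constant, J mol^-1 K^-1
rho, g = 2800.0, 9.81     # crustal density (kg m^-3), gravity (m s^-2)
mu = 0.6                  # friction coefficient (reduced in inherited shear zones)
# Illustrative quartz-dominated creep parameters (placeholders only):
A_c, n, Q = 1e-28, 4.0, 223e3   # pre-factor (Pa^-n s^-1), stress exponent, Q (J mol^-1)
eps = 1e-15               # reference strain rate, s^-1

z = np.linspace(1e3, 50e3, 500)   # depth, m
T = 273.0 + 25e-3 * z             # simple linear geotherm, 25 K/km

sigma_brittle = mu * rho * g * z                                  # frictional strength
sigma_ductile = (eps / A_c) ** (1 / n) * np.exp(Q / (n * R * T))  # creep strength

strength = np.minimum(sigma_brittle, sigma_ductile)               # yield strength envelope
bdt = z[np.argmin(np.abs(sigma_brittle - sigma_ductile))]
print(f"brittle-ductile transition ~ {bdt / 1e3:.1f} km for these parameters")
```

Lowering mu in this sketch, mimicking an inherited shear zone, weakens the frictional branch and pushes the crossover to greater depth, which is consistent with the reactivation hypothesis discussed above.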
This dissertation centres on the rediscovery, analysis, and educational-historical contextualization of the progressive-education school project of Eugenie SCHWARZWALD (1872-1940) in Vienna during the first third of the 20th century. The genesis of the school's development reveals the progressive-education entanglements of a school project of supra-regional significance, which decisively shaped the profile as well as the content-related and didactic-methodological design of the school, school life, and instruction. The introduction (Ch. 1) sets out the research interest, the central questions, the source holdings evaluated, and the methodological procedure of the study as a historical-critical analysis of the sources consulted. The topic is developed systematically across three central chapters. The analysis focuses on the societal and educational-historical contextualization of the school project within the world of ideas and the socio-structural reality of Vienna (Ch. 2), and on biographical approaches to the school's founder; the founding, genesis, formation, and termination of the school project; its structural and pedagogical characteristics; and its progressive-education features in the first third of the 20th century (Ch. 3). At the same time, exemplary entanglements with contemporary currents of progressive education are made visible, as is the associated stimulus that the SCHWARZWALD school project provided to the school systems of Vienna and Austria. One focus of the study is the analysis of the manifold networks of the SCHWARZWALD school with respect to the artistic avant-garde (Ch. 4). The thesis-style summary (Ch. 5) acknowledges SCHWARZWALD's achievements for the Austrian school and education system, including higher education for girls. Finally, the study asks how far the progressive-education impulses associated with the school project reached, and it systematizes the conditions under which the school reform process succeeded or failed. This makes the study, with a view to transfer considerations, relevant to current questions of school development.
Digitalization enables us to interact with partners (e.g., companies, institutions) in an IT-supported environment and to carry out activities that were formerly done manually. One goal of digitalization is to combine services from different professional domains into processes and to make them accessible to many user groups according to their needs. To this end, providers supply technical services that can be integrated into different applications.
Digitalization confronts application development with new challenges. One aspect is connecting users to services according to their needs. For human users to interact with the services, user interfaces are required that are tailored to those needs. This calls for variants for specific user groups (domain-specific variants) and for varying environments (technical variants). Increasingly, these must be combinable with services from other providers in order to link processes across domains into applications with added value for the end user (e.g., a flight booking with an optional travel insurance).
The diversity of variants makes the creation of user interfaces appear complex and the results highly individual. In practice, the variants are therefore predominantly created manually. This leads to the parallel development of a multitude of very similar applications with little potential for reuse, and consequently to high development and maintenance effort. As a result, support for small user groups with special requirements (e.g., people with physical impairments) is often dropped, so that these groups remain excluded from digitalization.
This thesis presents a consistent solution to these new challenges by means of model-driven development. It introduces an approach for modelling user interfaces, variants, and compositions, and for their automatic generation for digital services in a distributed environment. The thesis provides a solution for the reuse and shared use of user interfaces across provider boundaries, leading to an infrastructure in which a multitude of providers can contribute their expertise to collaborative applications.
The individual contributions consist of concepts and metamodels for modelling user interfaces, variants, and compositions, together with a procedure for their fully automated transformation into functional user interfaces. To realize shared usability, these are complemented by a universal representation of the models, a methodology for connecting different service providers, and an architecture for the distributed use of the artefacts and procedures in a service-oriented environment.
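To make the model-to-interface transformation tangible, here is a deliberately tiny Python analogue: an abstract, technology-neutral interface model is rendered into one concrete variant (HTML). The schema, field names, and widget mapping are invented for illustration and do not reproduce the metamodels of the thesis.

```python
# Minimal analogue of a model-to-UI transformation: an abstract interface
# model (technology-neutral) is rendered into one concrete variant (HTML).
# The model schema and all field names are invented for illustration only.

ui_model = {
    "title": "Flight booking",
    "fields": [
        {"name": "passenger", "type": "text",     "label": "Name"},
        {"name": "date",      "type": "date",     "label": "Departure date"},
        {"name": "insurance", "type": "checkbox", "label": "Travel insurance"},
    ],
}

WIDGETS = {  # mapping abstract field types onto concrete widgets
    "text":     '<input type="text" name="{name}">',
    "date":     '<input type="date" name="{name}">',
    "checkbox": '<input type="checkbox" name="{name}">',
}

def render_html(model):
    rows = [f"<h1>{model['title']}</h1>"]
    for field in model["fields"]:
        widget = WIDGETS[field["type"]].format(**field)
        rows.append(f"<label>{field['label']} {widget}</label>")
    return "\n".join(rows)

print(render_html(ui_model))
```

Swapping the widget table for a different target (e.g., a voice or accessibility-oriented front end) would yield a technical variant without touching the abstract model, which mirrors the core idea of the approach.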
The approach offers the opportunity to let the most diverse people participate in digitalization according to their needs. The thesis thereby provides impulses for future methods of application development in an increasingly diverse environment.
The dissertation focuses on synchronic and diachronic variation in the use of the French causal conjunction parce que and on its interaction with the extralinguistic variables age and socio-professional category. Building on previous macro-diachronic studies, which suggest that the conjunction has undergone and continues to undergo a process of pragmaticalization, a study corpus of 56 interviews was extracted from the diachronically distinct corpora ESLO1, ESLO2, and LangAge. This corpus served as the basis for panel and trend studies designed to verify the pragmaticalization of parce que from a micro-diachronic point of view. In addition to the diachronic perspective, a synchronic perspective was adopted in order to attribute variation in the use of the conjunction to a diachronic phenomenon such as age grading or apparent time. Starting from construction grammar, constructions containing parce que were annotated bottom-up and categorized into five degrees of pragmaticality (pra0-pra4). These were then quantified and analysed as a function of the (male) speakers' year of birth and socio-professional category using several R models such as ctrees, trees, lm, hclust, and kmeans.
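As a rough illustration of the clustering step (the thesis itself works in R with hclust and kmeans), the following Python sketch groups speakers by their profile of pragmaticality-degree frequencies; all data values are invented and serve only to show the shape of such an analysis.

```python
import numpy as np
from sklearn.cluster import KMeans

# Invented per-speaker frequencies of the five pragmaticality degrees
# (occurrences of parce-que constructions per 1000 words).
#                     pra0  pra1  pra2  pra3  pra4
speakers = np.array([[4.1,  2.0,  1.1,  0.4,  0.1],
                     [3.8,  1.9,  1.3,  0.5,  0.2],
                     [1.2,  0.9,  2.4,  1.8,  0.9],
                     [1.0,  1.1,  2.2,  1.7,  1.1]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(speakers)
print(km.labels_)           # groups speakers with similar usage profiles
print(km.cluster_centers_)  # average pragmaticality profile per group
```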
The frequency development of the pragmaticality degrees confirmed the pragmaticalization hypothesis within a micro-diachronic framework. In addition, a quantitative decline was observed in the use of constructions at the non- or less pragmaticalized pole (pra0, pra1), while usages of higher degrees of pragmaticalization (pra2-pra4) remained comparatively stable over 40 years.
Although no significant change emerged for pra2, its development among middle-aged speakers, as well as the synchronic pattern as a function of age (or year of birth) and of socio-professional category, nevertheless pointed towards underlying diachronic variation. This could be interpreted as a phenomenon of age grading catalysed by the social transformations of the 1960s and 1970s. For the usages situated closer to the pragmatic pole (pra3 and pra4), no clear tendency could be identified.
The results challenge diachronic concepts such as age grading and apparent time by questioning the simplicity of the underlying mechanisms as well as the established methods used to identify them.
The dissertation pursues the fundamental research question of how the Liberal Democratic Party of Germany (LDPD) filled the role ascribed to it in everyday political life at the local level, how it related to the system of the GDR, and what scope for action existed and was used. Its local party work from the building of the Berlin Wall into the 1980s has remained largely unexamined by research, since interest has focused mainly on the ruling SED or on the LDPD's rebellious tendencies in the 1940s and late 1980s. The present study takes a first step towards examining the liberal party at the district and local level and contributes to closing these gaps. Using the case studies of Gotha, the city of Erfurt, and Eisenach, the dissertation illuminates the internal party organization, the behaviour and motivations of the members, and, drawing on network-theoretical approaches, the interconnections of local party officials who involved themselves in municipal work on the ground. Information and situation reports as well as correspondence and organizational documents provided insight into self-images, activity, topics, and aspects of communication. What becomes clear are the strict control mechanisms within the party and the tension between clear support for SED policy and individually wilful, self-directed ("eigen-sinnig") behaviour.
Using the analytical category of "Eigen-Sinn" as a form of multilayered appropriation of structures of rule, as distinct from the concepts of opposition and resistance, it is shown that the LDPD members in the districts examined did take liberties in voicing criticism and largely determined the degree of their activity themselves, yet without touching the fundamental questions of the system. The actors inhabited many different lifeworlds, depending on field of activity, motivation, and environment, which led to different tactics and expressions of Eigen-Sinn among rank-and-file members and local officials. Through their municipal involvement, however, the Liberal Democrats in the communities attended to the most pressing supply problems and, by actively recruiting their members for work programmes and competitions, ensured the LDPD's participation in remedying the worst deficiencies in public space. In doing so, they helped to dampen general dissatisfaction and indirectly strengthened the GDR system. In return, the SED granted them limited and clearly defined scope for action. Owing to the professional anchoring of most active Liberal Democrats in the economic sphere, considerable practical knowledge was built up, with which the LDPD associations examined intervened quite self-confidently in municipal processes within the latitude granted to them. They thus played an important role in stabilizing the system over the long period between the building and the fall of the Wall.
The mixture of distancing, acceptance, contradiction, and obedience makes the party base, as well as the active party officials at the lower level, a highly interesting field of study, and one that is far from exhausted.