The present work focuses on minimising the use of toxic chemicals by integrating biobased monomers, derived from fatty acid esters, into photopolymerization processes, which are known to be environmentally friendly. The internal double bond of oleic acid was converted into a more reactive (meth)acrylate or epoxy group. Biobased starting materials functionalized with different pendant groups were used in photopolymerizable formulations to design new polymeric structures under an ultraviolet light-emitting diode (UV-LED, 395 nm) via free radical or cationic polymerization.
New (meth)acrylates (2, 3 and 4), each consisting of two isomers — methyl 9-((meth)acryloyloxy)-10-hydroxyoctadecanoate / methyl 9-hydroxy-10-((meth)acryloyloxy)octadecanoate (2 and 3) and methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4) — derived from an oleic acid isomer mixture, and ionic liquid monomers (1a and 1b) bearing long alkyl chains were polymerized photochemically. The new (meth)acrylates are based on vegetable oil, and ionic liquids (ILs) are nonvolatile; both monomer types therefore follow a green approach. Photoinitiated polymerization of the new (meth)acrylates and ionic liquids was investigated in the presence of ethyl (2,4,6-trimethylbenzoyl)phenylphosphinate (Irgacure® TPO-L) or di(4-methoxybenzoyl)diethylgermane (Ivocerin®) as photoinitiator (PI). Additionally, the results were compared with those obtained for the commercial 1,6-hexanediol di(meth)acrylates (5 and 6) to probe more deeply the potential of the biobased monomers to replace petroleum-derived materials with renewable resources in possible coating applications. The kinetic study shows that methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4) and the ionic liquids (1a and 1b) reach quantitative conversion after irradiation, which is important for practical applications. On the other hand, heat generation occurs over a longer time during the polymerization of the biobased systems or ILs.
Poly(meth)acrylates obtained from (meth)acrylated fatty acid methyl ester monomers generally show a low glass transition temperature because of the long aliphatic chains in the polymer structure, whereas poly(meth)acrylates containing aromatic groups have higher glass transition temperatures. Therefore, the new monomer 4-(4-methacryloyloxyphenyl)-butan-2-one (7) was synthesized, a promising candidate for green techniques such as light-induced polymerization. The photokinetics of the new monomer, 4-(4-methacryloyloxyphenyl)-butan-2-one (7), were investigated using Irgacure® TPO-L or Ivocerin® as photoinitiator. The reactivity of this monomer was compared with that of commercial 2-phenoxyethyl methacrylate (8) and phenyl methacrylate (9) on the basis of the differences in monomer structure. The photopolymer of 4-(4-methacryloyloxyphenyl)-butan-2-one (7) may be an interesting candidate for coating applications, combining quantitative conversion and high molecular weight; it also shows a higher glass transition temperature.
In addition to the linear systems based on renewable materials, new crosslinked polymers were also designed in this thesis. For this purpose, an isomer mixture consisting of ethane-1,2-diyl bis(9-methacryloyloxy-10-hydroxyoctadecanoate), ethane-1,2-diyl 9-hydroxy-10-methacryloyloxy-9'-methacryloyloxy-10'-hydroxyoctadecanoate and ethane-1,2-diyl bis(9-hydroxy-10-methacryloyloxyoctadecanoate) (10), not previously described in the literature, was synthesized by derivatization of oleic acid. A crosslinked material based on this biobased monomer was produced by photoinitiated free radical polymerization using Irgacure® TPO-L or Ivocerin® as photoinitiator. Furthermore, the material properties were diversified by copolymerization of 10 with 4-(4-methacryloyloxyphenyl)-butan-2-one (7) or methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4). In addition, the influence of comonomers with different chemical structures on the network was investigated by analysing the thermo-mechanical properties, the crosslink density and the molecular weight between two crosslink junctions. The increase in glass transition temperature caused by copolymerization of biobased monomer 10 with an excess of 4-(4-methacryloyloxyphenyl)-butan-2-one (7) was confirmed by both differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA). On the other hand, the crosslink density decreased as a result of the copolymerization reactions owing to the reduced mean functionality of the system. Finally, the surfaces were characterized by contact angle measurements using solvents of different polarity.
This work also contributes to the limited data reported on the cationic photopolymerization of epoxidized vegetable oils, which contrasts with the widely investigated thermal curing of biorenewable epoxy monomers. In addition to 9,10-epoxystearic acid methyl ester (11), the new monomer bis(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) was synthesized from oleic acid. These two biobased epoxides were polymerized via cationic photoinitiated polymerization in the presence of bis(t-butyl)iodonium tetrakis(perfluoro-t-butoxy)aluminate ([Al(O-t-C4F9)4]−) and isopropylthioxanthone (ITX) as the photoinitiating system. The polymerization kinetics of 9,10-epoxystearic acid methyl ester (11) and bis(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) were investigated and compared with those of the commercial monomers 3,4-epoxycyclohexylmethyl-3',4'-epoxycyclohexane carboxylate (13), 1,4-butanediol diglycidyl ether (14), and the diglycidyl ether of bisphenol A (15). Both biobased epoxides (11 and 12) showed higher conversion than the cycloaliphatic epoxide (13) but lower reactivity than 1,4-butanediol diglycidyl ether (14). Additional network systems were designed by copolymerizing bis(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) and the diglycidyl ether of bisphenol A (15) in different molar ratios (1:1, 1:5, 1:9). The results indicate that the final conversion depends on the polymerization rate as well as on physical processes such as vitrification during polymerization. Moreover, the low glass transition temperature of the homopolymer derived from bis(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) was successfully increased by copolymerization with the diglycidyl ether of bisphenol A (15). On the other hand, the surface produced from bis(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) is hydrophobic, and a higher concentration of the biobased diepoxide (12) in the copolymerizing mixture decreases the surface free energy.
The network systems were also analysed in terms of rubber elasticity theory. The crosslinked polymer derived from a 1:5 molar mixture of bis(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) and the diglycidyl ether of bisphenol A (15) exhibits an almost ideal polymer network.
Models for the predictions of monetary losses from floods mainly blend data deemed to represent a single flood type and region. Moreover, these approaches largely ignore indicators of preparedness and how predictors may vary between regions and events, challenging the transferability of flood loss models. We use a flood loss database of 1812 German flood-affected households to explore how Bayesian multilevel models can estimate normalised flood damage stratified by event, region, or flood process type. Multilevel models acknowledge natural groups in the data and allow each group to learn from others. We obtain posterior estimates that differ between flood types, with credibly varying influences of water depth, contamination, duration, implementation of property-level precautionary measures, insurance, and previous flood experience; these influences overlap across most events or regions, however. We infer that the underlying damaging processes of distinct flood types deserve further attention. Each reported flood loss and affected region involved mixed flood types, likely explaining the uncertainty in the coefficients. Our results emphasise the need to consider flood types as an important step towards applying flood loss models elsewhere. We argue that failing to do so may unduly generalise the model and systematically bias loss estimations from empirical data.
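The partial-pooling idea behind such multilevel models can be sketched numerically. The toy example below uses invented flood-type groups and loss values — not the study's data or its full Bayesian model with predictors such as water depth — and shows how a sparsely observed group "learns from others" by being shrunk toward the grand mean:

```python
import numpy as np

# Toy relative-loss observations grouped by flood type. Group names and values
# are invented for illustration only.
groups = {
    "fluvial": np.array([0.10, 0.15, 0.12, 0.11, 0.14, 0.13, 0.12, 0.15]),
    "pluvial": np.array([0.05, 0.07, 0.06, 0.08]),
    "flash":   np.array([0.30, 0.35]),   # few observations -> strong shrinkage
}

# Empirical-Bayes partial pooling for a normal-normal model:
# group mean ~ N(mu, tau^2), observation ~ N(group mean, sigma^2).
all_obs = np.concatenate(list(groups.values()))
mu = all_obs.mean()                                           # grand mean
sigma2 = all_obs.var()                                        # crude observation variance
tau2 = np.array([g.mean() for g in groups.values()]).var()    # between-group variance

shrunk = {}
for name, obs in groups.items():
    n = len(obs)
    # precision-weighted compromise between the group's own mean and the grand mean
    w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)
    shrunk[name] = w * obs.mean() + (1.0 - w) * mu
    print(f"{name}: raw mean {obs.mean():.3f} -> partially pooled {shrunk[name]:.3f}")
```

The group with only two observations ("flash") is pulled noticeably toward the grand mean, while the well-observed group barely moves — the same mechanism that lets the study's flood-type strata borrow strength from one another.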
The Internet of Things (IoT) is a system of physical objects that can be discovered, monitored, controlled, or interacted with by electronic devices that communicate over various networking interfaces and can eventually be connected to the wider Internet [Guinard and Trifa, 2016]. IoT devices are equipped with sensors and/or actuators and may be constrained in terms of memory, computational power, network bandwidth, and energy. Interoperability — the ability of different types of systems to work together smoothly — can help to manage such heterogeneous devices. There are four levels of interoperability: physical, network and transport, integration, and data. Data interoperability is further subdivided into syntactic and semantic interoperability. Semantic data describes the meaning of data and establishes a common understanding of the vocabulary, e.g. with the help of dictionaries, taxonomies, or ontologies. To achieve interoperability overall, semantic interoperability is necessary.
Many organizations and companies are working on standards and solutions for interoperability in the IoT. However, commercial solutions produce vendor lock-in and focus on centralized approaches such as cloud-based solutions. This thesis proposes a decentralized approach, namely Edge Computing. Edge Computing is based on the concepts of mesh networking and distributed processing. Its advantage is that information collection and processing are placed closer to the sources of this information. The goals are to reduce traffic and latency, and to be robust against a lossy or failed Internet connection.
We view the management of IoT devices from the network configuration management perspective. This thesis proposes a framework for network configuration management of heterogeneous, constrained IoT devices that uses semantic descriptions for interoperability. MYNO is an acronym for MQTT, YANG, NETCONF and Ontology. The NETCONF protocol is the IETF standard for network configuration management, and the MQTT protocol is the de facto standard in the IoT. We picked up the idea of the NETCONF-MQTT bridge, originally proposed by Scheffler and Bonneß [2017], and extended it with semantic device descriptions. These descriptions capture the device capabilities; they are based on the oneM2M Base Ontology and formalized using Semantic Web standards.
The novel approach uses an ontology-based device description directly on a constrained device in combination with the MQTT protocol. The bridge was extended to query such descriptions. Through semantic annotation, the device capabilities become self-descriptive, machine-readable and reusable.
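To give a flavour of such a description, the sketch below shows what an ontology-based device description in JSON-LD form might look like. The vocabulary here is hypothetical shorthand invented for illustration — the thesis's real descriptions use the oneM2M Base Ontology terms, and the topic and identifiers below are made up:

```python
import json

# Illustrative device description in JSON-LD form. The "ex:" vocabulary,
# device id, and MQTT topic are hypothetical, not the actual oneM2M terms.
device_description = {
    "@context": {"ex": "http://example.org/iot#"},
    "@id": "ex:tempSensor42",
    "@type": "ex:Device",
    "ex:hasService": {
        "@type": "ex:Service",
        "ex:hasOperation": {
            "ex:name": "read_temperature",
            "ex:mqttTopic": "devices/tempSensor42/temperature",
            "ex:outputDataType": "xsd:float",
        },
    },
}

# A constrained device would publish this serialized description via MQTT,
# and the bridge would query it to generate the YANG model.
payload = json.dumps(device_description)
print(len(payload), "bytes")
```

The point of the format is that the capabilities (here, one temperature-reading operation) are machine-readable, so the bridge can discover them without device-specific code.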
The concept of a Virtual Device, based on semantic device descriptions, was introduced and implemented. A Virtual Device aggregates the capabilities of all devices in the edge network and therefore contributes to scalability: it makes it possible to control all devices via a single RPC call.
The model-driven NETCONF Web-Client is generated automatically from the YANG model, which in turn is generated by the bridge from the semantic device description. The Web-Client provides a user-friendly interface, offers RPC calls and displays sensor values. We demonstrate the feasibility of this approach in different use cases: sensor and actuator scenarios, as well as event configuration and triggering.
The semantic approach incurs increased memory overhead. We therefore evaluated CBOR and RDF HDT for optimizing ontology-based device descriptions for use on constrained devices. The evaluation shows that CBOR is not suitable for long strings and that RDF HDT is a promising candidate but is still only a W3C Member Submission. In the end, we used an optimized JSON-LD format for the syntax of the device descriptions.
One of the security tasks of network management is the distribution of firmware updates. The MYNO Update Protocol (MUP) was developed and evaluated on constrained CC2538dk devices over 6LoWPAN. The MYNO update process focuses on the freshness and authenticity of the firmware. The evaluation shows that it is challenging but feasible to deliver firmware updates to constrained devices using MQTT. As a new requirement for the next MQTT version, we propose a slicing feature to better support constrained devices: the MQTT broker should slice data to the maximum packet size specified by the device and transfer it slice by slice.
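The proposed slicing behaviour can be sketched in a few lines. This is an illustration only — the helper name, dummy firmware and 512-byte packet size are invented, not part of MQTT or MUP:

```python
def slice_payload(payload: bytes, max_packet_size: int):
    """Split a payload into slices no larger than the receiver's maximum packet size."""
    if max_packet_size <= 0:
        raise ValueError("max_packet_size must be positive")
    return [payload[i:i + max_packet_size]
            for i in range(0, len(payload), max_packet_size)]

# A broker could then publish a firmware image slice by slice:
firmware = bytes(range(256)) * 10        # 2560-byte dummy firmware image
slices = slice_payload(firmware, 512)    # e.g. device advertises 512-byte packets
print(len(slices), "slices")
assert b"".join(slices) == firmware      # reassembly on the device restores the image
```

Each slice fits the device's advertised maximum packet size, so the constrained receiver never has to buffer more than one packet at a time.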
For the performance and scalability evaluation of the MYNO framework, we set up a High Precision Agriculture demonstrator with 10 ESP-32 NodeMCU boards at the edge of the network. The ESP-32 NodeMCU boards, connected via WLAN, were equipped with six sensors and two actuators. The performance evaluation shows that processing ontology-based descriptions on a Raspberry Pi 3B with RDFLib is challenging in terms of computational power. Nevertheless, it is feasible because it must be done only once per device, during the discovery process.
The MYNO framework was tested with heterogeneous devices such as CC2538dk from Texas Instruments, Arduino Yún Rev 3, and ESP-32 NodeMCU, and IP-based networks such as 6LoWPAN and WLAN.
In summary, with the MYNO framework we could show that the semantic approach is feasible on constrained devices in the IoT.
This volume contains all keynote lectures from the Lateintage (Latin Days) of 2018 to 2020. On the topic "Im Schatten der Gesellschaft? Roms Umgang mit sozialen Randgruppen" ("In the Shadow of Society? Rome's Treatment of Marginalized Groups"), Meike Rühl and Nicola Hömke spoke in 2018. Under the heading "Im Zentrum der Macht: Forum Romanum" ("At the Centre of Power: the Forum Romanum"), Jon Albers, Filippo Carlà-Uhink and Jan Reimann examined that influential site in its various facets in 2019. In 2020, Holger Sonnabend gave Latin students an insight into the topic "Nero – Kaiser und Künstler" ("Nero – Emperor and Artist"). The lectures are printed in the order in which they were delivered at the respective Lateintag.
Permafrost is warming globally, which leads to widespread permafrost thaw and impacts the surrounding landscapes, ecosystems and infrastructure. Ice-rich permafrost in particular is vulnerable to rapid and abrupt thaw resulting from the melting of excess ground ice. Over the last two decades, local remote sensing studies have detected increasing rates of abrupt permafrost disturbances, such as thermokarst lake change and drainage, coastal erosion and retrogressive thaw slumps (RTS), all of which indicate an acceleration of permafrost degradation.
RTS in particular are abrupt disturbances that expand by up to several metres each year; they alter local and regional topographic gradients and hydrological pathways, mobilise sediment and nutrients into aquatic systems, and increase permafrost carbon mobilisation. The feedback between abrupt permafrost thaw and the carbon cycle is a crucial component of the Earth system and a relevant driver in global climate models. However, both an assessment of RTS at high temporal resolution, to resolve the dynamic thaw processes and identify the main thaw drivers, and a continental-scale assessment across diverse permafrost regions are still lacking.
In northern high latitudes, optical remote sensing is restricted by environmental factors and frequent cloud cover. This reduces image availability and thus constrains the application of automated time series algorithms for detecting large-scale abrupt permafrost disturbances at high temporal resolution. Since models and observations suggest that abrupt permafrost disturbances will intensify, we require continental-scale disturbance products that allow for meaningful integration into Earth system models.
The main aim of this dissertation is therefore to enhance our knowledge of the spatial extent and temporal dynamics of abrupt permafrost disturbances in a large-scale assessment. To address this, three research objectives were posed:
1. Assess the comparability and compatibility of Landsat-8 and Sentinel-2 data for a combined use in multi-spectral analysis in northern high latitudes.
2. Adapt an image mosaicking method for Landsat and Sentinel-2 data to create combined mosaics of high quality as input for high temporal disturbance assessments in northern high latitudes.
3. Automatically map retrogressive thaw slumps on the landscape-scale and assess their high temporal thaw dynamics.
We assessed the comparability of Landsat-8 and Sentinel-2 imagery by spectral comparison of corresponding bands. Based on overlapping same-day acquisitions of Landsat-8 and Sentinel-2, we derived spectral bandpass adjustment coefficients for North Siberia to adjust Sentinel-2 reflectance values to resemble Landsat-8 and harmonise the two data sets. Furthermore, we adapted a workflow to combine Landsat and Sentinel-2 images into homogeneous, gap-free annual mosaics. We determined the number of images and cloud-free pixels, the spatial coverage and the quality of the mosaics with spectral comparisons to demonstrate the relevance of the Landsat+Sentinel-2 mosaics. Lastly, we adapted the automatic disturbance detection algorithm LandTrendr for large-scale RTS identification and mapping at high temporal resolution. For this, we modified the temporal segmentation algorithm for annual gradual and abrupt disturbance detection to incorporate the annual Landsat+Sentinel-2 mosaics. We further parametrised the temporal segmentation and spectral filtering for optimised RTS detection, conducted additional spatial masking and filtering, and implemented a binary object classification algorithm with machine learning to derive RTS from the LandTrendr disturbance output. We applied the algorithm to North Siberia, covering an area of 8.1 × 10⁶ km².
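The abrupt-disturbance detection step can be illustrated with a deliberately simplified sketch. LandTrendr fits piecewise-linear segments to the full spectral trajectory; here, on an invented index series, we merely flag the year with the largest one-year drop:

```python
import numpy as np

# Invented annual spectral-index trajectory for one pixel: stable vegetation
# signal with a built-in abrupt disturbance (e.g. RTS initiation) in 2013.
years = np.arange(2000, 2020)
rng = np.random.default_rng(1)
ndvi = np.full(years.size, 0.6) + rng.normal(0, 0.01, years.size)
ndvi[years >= 2013] -= 0.25

# Flag the largest year-to-year drop as the disturbance year.
drops = np.diff(ndvi)
year_of_disturbance = years[np.argmin(drops) + 1]
magnitude = -drops.min()
print(year_of_disturbance, round(magnitude, 2))
```

A real temporal segmentation additionally distinguishes gradual trends from abrupt breaks and filters noise-induced drops, but the core signal — a sharp, persistent offset in the annual trajectory — is the same.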
The spectral band comparison between same-day Landsat-8 and Sentinel-2 acquisitions already showed an overall good fit between the two satellite products. Applying the derived spectral bandpass coefficients to adjust the Sentinel-2 reflectance values, however, resulted in a near-perfect alignment between the same-day images. It can therefore be concluded that the spectral band adjustment succeeds in matching Sentinel-2 spectral values to those of Landsat-8 in North Siberia.
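Such a bandpass adjustment is, at its core, a per-band linear regression between paired same-day reflectances. The sketch below uses invented data and coefficients, not the thesis's derived values for North Siberia:

```python
import numpy as np

# Toy same-day reflectance pairs for one band: Sentinel-2 (x) vs Landsat-8 (y).
# The slope/offset and noise level are invented for illustration.
rng = np.random.default_rng(0)
s2 = rng.uniform(0.02, 0.4, 500)                      # Sentinel-2 reflectance
l8 = 0.97 * s2 + 0.004 + rng.normal(0, 0.002, 500)    # Landsat-8 with slight bias

# Ordinary least-squares bandpass adjustment: l8 ≈ slope * s2 + intercept
slope, intercept = np.polyfit(s2, l8, 1)
s2_adjusted = slope * s2 + intercept                  # S2 harmonised to L8

rmse_before = np.sqrt(np.mean((l8 - s2) ** 2))
rmse_after = np.sqrt(np.mean((l8 - s2_adjusted) ** 2))
print(f"slope={slope:.3f} intercept={intercept:.4f}")
print(f"RMSE before={rmse_before:.4f} after={rmse_after:.4f}")
```

After the adjustment, only the residual sensor noise remains — the analogue of the "near-perfect alignment" reported for the real coefficient sets.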
The number of available cloud-free images increased steadily between 1999 and 2019, with a marked intensification after 2016 owing to the addition of Sentinel-2 images. This signifies a greatly improved input database for the mosaicking workflow. In a comparison of annual mosaics, the Landsat+Sentinel-2 mosaics always fully covered the study areas, while Landsat-only mosaics contained data gaps for the same years. The spectral comparison of input images and Landsat+Sentinel-2 mosaics showed a high correlation between the input images and the mosaic bands, attesting to the high quality of the mosaicking results. Our results show that mosaic coverage for northern, coastal areas in particular was substantially improved with the Landsat+Sentinel-2 mosaics. By combining data from both Landsat and Sentinel-2 sensors, we reliably created input mosaics at high spatial resolution for comprehensive time series analyses.
This research presents the first automatically derived assessment of RTS distribution and temporal dynamics at continental scale. In total, we identified 50,895 RTS, primarily located in ice-rich permafrost regions, and found a steady increase in RTS-affected areas between 2001 and 2019 across North Siberia. From 2016 onward, the RTS area increased more abruptly, indicating heightened thaw slump dynamics in this period. Overall, the RTS-affected area increased by 331 % within the observation period. In contrast, five focus sites show spatiotemporal variability in their annual RTS dynamics, alternating between periods of increased and decreased RTS development, which suggests a close relationship to varying thaw drivers. The majority of the identified RTS were already active from 2000 onward, and only a small proportion initiated during the assessment period. This highlights that the increase in RTS-affected area was mainly caused by the enlargement of existing RTS, not by newly initiated RTS.
Overall, this research demonstrated the advantages of combining Landsat and Sentinel-2 data in northern high latitudes and the improvements in spatial and temporal coverage of the combined annual mosaics. The mosaics form the database for automated disturbance detection to reliably map RTS and other abrupt permafrost disturbances at continental scale. The assessment at high temporal resolution further attests to the increasing impact of abrupt permafrost disturbances and likewise emphasises the spatio-temporal variability of thaw dynamics across landscapes. Obtaining such consistent disturbance products is necessary to parametrise regional and global climate change models and enable an improved representation of the permafrost thaw feedback.
This dissertation was carried out as part of the international and interdisciplinary graduate school StRATEGy, which aims to investigate geological processes that act on different temporal and spatial scales and have shaped the southern Central Andes. This study focuses on claystones and carbonates of the Yacoraite Fm. that were deposited between the Maastrichtian and the Danian in the Cretaceous Salta Rift Basin. The former rift basin is located in northwest Argentina and is divided into the Tres Cruces, Metán-Alemanía and Lomas de Olmedo sub-basins. The overall motivation for this study was to gain new insights into the evolution of marine and lacustrine conditions during deposition of the Yacoraite Fm. in the Tres Cruces and Metán-Alemanía sub-basins. Other important aspects examined within the scope of this dissertation are the conversion of organic matter of the Yacoraite Fm. into oil and its genetic relationship to selected produced oils and natural oil seeps. The results of my study show that deposition of the Yacoraite Fm. began under marine conditions and that a lacustrine environment had developed by the end of deposition in the Tres Cruces and Metán-Alemanía basins. In general, the kerogen of the Yacoraite Fm. consists mainly of kerogen types II, III and II/III mixtures. Type III kerogen is mainly found in samples of the Yacoraite Fm. with low TOC values. Owing to the adsorption of hydrocarbons on mineral surfaces (mineral matrix effect), the content of type III kerogen determined by Rock-Eval pyrolysis in these samples could be overestimated. Organic petrographic investigations show that the organic particles of the Yacoraite Fm. consist mainly of alginite and some vitrinite-like particles. Pyrolysis gas chromatography of the rock samples showed that the Yacoraite Fm. generates low-sulfur oils with a predominantly low-wax, paraffinic-naphthenic-aromatic composition as well as paraffinic, wax-rich oils.
Small proportions of paraffinic, low-wax oils and a gas-condensate-generating facies are also predicted. Here, too, mineral matrix effects, which can lead to a quantitative overestimation of the gas-prone character, were taken into account.
The results of an additional 1D basin modeling study show that the onset (10 % TR) of oil generation occurred between ≈10 Ma and ≈4 Ma. Most of the oil (≈50 % to 65 %) was generated before the structural traps formed during the Plio-Pleistocene Diaguita deformation phase developed. Only ≈10 % of the total oil generated was formed, and potentially trapped, after the formation of the structural traps. Important factors in the risk assessment of this petroleum system, which may explain the small amounts of generated and migrated oil, are the generally low TOC contents and the variable thickness of the Yacoraite Fm. Additional risks are associated with the sparse information on potentially existing reservoir structures and with the quality of the overburden.
Climatic change alters the frequency and intensity of natural hazards. In order to assess potential future changes in flood seasonality in the Rhine River Basin, we analyse changes in streamflow, snowmelt, precipitation, and evapotranspiration at 1.5, 2.0 and 3.0 °C global warming levels. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios (five general circulation models under three representative concentration pathways), is used to simulate the present and future climate conditions of both pluvial and nival hydrological regimes. Our results indicate that the interplay between changes in snowmelt- and rainfall-driven runoff is crucial to understanding changes in streamflow maxima in the Rhine River. Climate projections suggest that future changes in flood characteristics in the entire Rhine River are controlled by both more intense precipitation events and diminishing snowpacks. The nature of this interplay defines the type of change in runoff peaks. At the sub-basin level (the Moselle River), more intense rainfall during winter is mostly counterbalanced by a reduced snowmelt contribution to streamflow. In the High Rhine (gauge at Basel), the strongest increases in streamflow maxima show up during winter, when strong increases in liquid precipitation intensity encounter almost unchanged snowmelt-driven runoff. The analysis of snowmelt events suggests that at no point during the snowmelt season does a warming climate result in an increased risk of snowmelt-driven flooding. We do not find indications of a transient merging of pluvial and nival floods due to climate warming.
River flooding poses a threat to numerous cities and communities all over the world. The detection, quantification and attribution of changes in flood characteristics are key to assessing changes in flood hazard and help affected societies to mitigate and adapt to emerging risks in time. The Rhine River is one of the major European rivers, and numerous large cities lie on its shores. Runoff from several large tributaries superimposes in the main channel, shaping the complex flow regime. Rainfall, snowmelt and ice melt are all important runoff components. The main objective of this thesis is the investigation of a possible transient merging of nival and pluvial Rhine flood regimes under global warming. Rising temperatures cause snowmelt to occur earlier in the year and rainfall to be more intense. The superposition of snowmelt-induced floods originating from the Alps with more intense rainfall-induced runoff from pluvial-type tributaries might create a new flood type with potentially disastrous consequences.
To introduce the topic of changing hydrological flow regimes, an interactive web application is presented that enables the investigation of runoff timing and runoff seasonality observed at river gauges all over the world. The exploration and comparison of a great diversity of river gauges in the Rhine River Basin and beyond indicates that river systems around the world are undergoing fundamental changes. In hazard and risk research, providing background as well as real-time information to residents and decision-makers in an easily accessible way is of great importance. Future studies need to further harness the potential of scientifically engineered online tools to improve the communication of information related to hazards and risks.
A next step is the development of a cascading sequence of analytical tools to investigate long-term changes in hydro-climatic time series. The combination of quantile sampling with moving-average trend statistics and empirical mode decomposition allows for the extraction of high-resolution signals and the identification of mechanisms driving changes in river runoff. Results point out that the construction and operation of large reservoirs in the Alps is an important factor redistributing runoff from summer to winter, and hint at more (intense) rainfall in recent decades, particularly during winter, in turn increasing high runoff quantiles. The development and application of the analytical sequence represents a further step in the scientific quest to disentangle natural variability, climate change signals and direct human impacts.
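The first two tools of such a sequence — quantile sampling and a moving-average trend — can be sketched as follows. The runoff data, drift, quantile level and window length below are invented for illustration, not the thesis's data or exact method:

```python
import numpy as np

# Invented daily runoff with a gradual upward drift across 60 years.
rng = np.random.default_rng(7)
n_years = 60
daily = [rng.gamma(2.0, 50 + 0.5 * y, 365) for y in range(n_years)]

# Quantile sampling: reduce each year to its high-flow quantile (here Q95).
q95 = np.array([np.quantile(year, 0.95) for year in daily])

# Moving-average trend statistic on the annual quantile series.
window = 11
kernel = np.ones(window) / window
trend = np.convolve(q95, kernel, mode="valid")
print(f"Q95 trend: {trend[0]:.1f} -> {trend[-1]:.1f}")
```

Sampling a high quantile rather than the mean isolates changes in the upper tail of the runoff distribution, which is what matters for flood hazard; the moving average then suppresses year-to-year variability so the long-term signal stands out.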
The in-depth analysis of in situ snow measurements and simulations of the Alpine snow cover using a physically based snow model enable the quantification of changes in snowmelt in the sub-basin upstream of gauge Basel. Results confirm previous investigations indicating that rising temperatures result in a decrease in maximum melt rates. Extending these findings to a catchment perspective, a threefold effect of rising temperatures can be identified: snowmelt becomes weaker, occurs earlier and forms at higher elevations. Furthermore, results indicate that owing to the wide range of elevations in the basin, snowmelt does not occur simultaneously at all elevations; rather, elevation bands melt together in blocks. The beginning and end of the release of meltwater seem to be determined by the passage of warm air masses, and the respective elevation range affected by the accompanying temperatures and snow availability. Following these findings, a hypothesis describing elevation-dependent compensation effects in snowmelt is introduced: in a warmer world with similar sequences of weather conditions, snowmelt moves upward to higher elevations, i.e., the block of elevation bands providing most of the water to the snowmelt-induced runoff is located at higher elevations. This upward shift makes snowmelt in individual elevation bands occur earlier; the timing of the snowmelt-induced runoff, however, stays the same. Meltwater from higher elevations, at least partly, replaces meltwater from elevations below.
The insights into past and present changes in river runoff, snow cover and the underlying mechanisms form the basis for investigating potential future changes in Rhine River runoff. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios, is used to analyse future changes in streamflow, snowmelt, precipitation and evapotranspiration at 1.5, 2.0 and 3.0 °C global warming. Simulation results suggest that future changes in flood characteristics in the Rhine River Basin are controlled by increased precipitation amounts on the one hand and reduced snowmelt on the other. Rising temperatures deplete seasonal snowpacks. At no time during the year does a warming climate result in an increased risk of snowmelt-driven flooding. Counterbalancing effects between snowmelt and precipitation often result in only small and transient changes in streamflow peaks. Although investigations point to changes in both rainfall- and snowmelt-driven runoff, there are no indications of a transient merging of nival and pluvial Rhine flood regimes due to climate warming. Flooding in the main tributaries of the Rhine, such as the Moselle River, as well as in the High Rhine, is controlled by both precipitation and snowmelt. Caution has to be exercised in labelling sub-basins such as the Moselle catchment as purely pluvial-type, or the Rhine River Basin at Basel as purely nival-type; results indicate that such (over)simplifications can entail misleading assumptions with regard to flood-generating mechanisms and changes in flood hazard. In the framework of this thesis, some progress has been made in detecting, quantifying and attributing past, present and future changes in Rhine flow and flood characteristics. However, further studies are necessary to pin down future changes in the genesis of Rhine floods, particularly very rare events.
Computation of the instantaneous phase and amplitude via the Hilbert transform is a powerful tool of data analysis. This approach finds many applications in various branches of science and engineering but is not suitable for causal estimation because it requires knowledge of the signal's past and future. However, several problems require real-time estimation of phase and amplitude; an illustrative example is phase-locked or amplitude-dependent stimulation in neuroscience. In this paper, we discuss and compare three causal algorithms that do not rely on the Hilbert transform but exploit two well-known physical phenomena, synchronization and resonance. After testing the algorithms on a synthetic data set, we illustrate their performance by computing phase and amplitude for accelerometer tremor measurements and for the beta-band brain activity of a Parkinsonian patient.
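The resonance-based idea can be sketched as a causally driven damped oscillator: the signal forces an oscillator tuned near the frequency of interest, and phase and amplitude are read off the oscillator state using only past samples. This is a minimal illustration, not the authors' published algorithm; the tuning frequency and damping below are assumptions.

```python
import numpy as np

def causal_resonator_phase(sig, fs, f0, gamma=5.0):
    """Causally estimate phase/amplitude by driving a damped harmonic
    oscillator tuned to f0 (a resonance-based estimator; the exact
    published algorithm may differ). Only samples up to the current
    one are used at every step."""
    w0 = 2.0 * np.pi * f0
    dt = 1.0 / fs
    x, v = 0.0, 0.0
    phase = np.empty(len(sig))
    amp = np.empty(len(sig))
    for i, s in enumerate(sig):
        # semi-implicit Euler step of x'' + gamma*x' + w0^2*x = w0^2*s(t)
        a = w0**2 * (s - x) - gamma * v
        v += a * dt
        x += v * dt
        phase[i] = np.arctan2(-v / w0, x)  # oscillator state -> phase
        amp[i] = np.hypot(x, v / w0)       # state-space amplitude
    return phase, amp
```

Driving the oscillator with a 10 Hz sine, the unwrapped phase advances at roughly 2π·10 rad/s once the transient has decayed, which is the real-time behavior a phase-locked stimulation setup would rely on.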
Implementing innovation laboratories to leverage intrapreneurship is an increasingly popular organizational practice. A typical feature of these creative environments is semi-autonomous teams in which multiple members collectively exert leadership influence, thereby challenging traditional command-and-control conceptions of leadership. An extensive body of research on the team-centric concept of shared leadership has recognized the potential of pluralized leadership structures for enhancing team effectiveness; however, little empirical work has been conducted in organizational contexts in which creativity is key. This study set out to explore antecedents of shared leadership and its influence on team creativity in an innovation lab. Building on extant shared leadership and innovation research, we propose antecedents customary to creative teamwork, that is, experimental culture, task reflexivity, and voice. Multisource data were collected from 104 team members and 49 evaluations of 29 coaches nested in 21 teams working in a prototypical innovation lab. We identify factors specific to creative teamwork that facilitate the emergence of shared leadership by providing room for experimentation, encouraging team members to speak up in the creative process, and cultivating a reflective application of entrepreneurial thinking. We provide specific exemplary activities for innovation lab teams to increase levels of shared leadership.
Populations adapt to novel environmental conditions by genetic changes or phenotypic plasticity. Plastic responses are generally faster and can buffer fitness losses under variable conditions. Plasticity is typically modeled as random noise and linear reaction norms that assume simple one-to-one genotype–phenotype maps and no limits to the phenotypic response. Most studies on plasticity have focused on its effect on population viability. However, it is not clear whether the advantage of plasticity depends solely on environmental fluctuations or also on the genetic and demographic properties (life histories) of populations. Here we present an individual-based model and study the relative importance of adaptive and nonadaptive plasticity for populations of sexual species with different life histories experiencing directional stochastic climate change. Environmental fluctuations were simulated using differentially autocorrelated climatic stochasticity, or noise color, and scenarios of directional climate change. Nonadaptive plasticity was simulated as a random environmental effect on trait development, and adaptive plasticity as a linear, saturating, or sinusoidal reaction norm. The last two imposed limits to the plastic response and emphasized flexible interactions of the genotype with the environment. Interestingly, this assumption led to (a) smaller phenotypic than genotypic variance in the population (a many-to-one genotype–phenotype map) and the coexistence of polymorphisms, and (b) the maintenance of higher genetic variation, compared to linear reaction norms and genetic determinism, even when the population was exposed to a constant environment for several generations. Limits to plasticity led to genetic accommodation when costs were negligible, and to the appearance of cryptic variation when limits were exceeded. We found that adaptive plasticity promoted population persistence under red environmental noise and was particularly important for life histories with low fecundity. Populations producing more offspring could cope with environmental fluctuations solely by genetic changes or random plasticity, unless environmental change was too fast.
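The three reaction-norm shapes can be illustrated with simple functional forms. The parameterization below (slope `b`, saturation `limit`, `period`) is hypothetical and only meant to show how the saturating and sinusoidal variants bound the plastic response, unlike the unbounded linear norm.

```python
import numpy as np

def linear_rn(g, e, b=0.5):
    """Linear reaction norm: phenotype shifts proportionally with environment e."""
    return g + b * e

def saturating_rn(g, e, b=0.5, limit=1.0):
    """Saturating reaction norm: plastic response levels off at `limit`."""
    return g + limit * np.tanh(b * e / limit)

def sinusoidal_rn(g, e, b=0.5, period=4.0):
    """Sinusoidal reaction norm: bounded, non-monotonic plastic response."""
    return g + b * np.sin(2.0 * np.pi * e / period)
```

Because the saturating and sinusoidal norms are non-injective in the environment, different environments can yield the same phenotype, which corresponds to the many-to-one genotype–phenotype map discussed in the abstract.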
Learning analytics at scale
(2021)
Digital technologies are paving the way for innovative educational approaches. The learning format of Massive Open Online Courses (MOOCs) provides a highly accessible path to lifelong learning while being more affordable and flexible than face-to-face courses. As a result, thousands of learners can enroll in courses, mostly without admission restrictions, but this also raises challenges. Individual supervision by teachers is barely feasible, and learning persistence and success depend on students' self-regulatory skills. Here, technology provides the means for support. The use of data for decision-making is already transforming many fields, whereas in education it is still a young research discipline. Learning Analytics (LA) is defined as the measurement, collection, analysis, and reporting of data about learners and their learning contexts with the purpose of understanding and improving learning and learning environments. The vast amount of data that MOOCs produce on the learning behavior and success of thousands of students provides the opportunity to study human learning and to develop approaches addressing the demands of learners and teachers.
The overall purpose of this dissertation is to investigate the implementation of LA at the scale of MOOCs and to explore how data-driven technology can support learning and teaching in this context. To this end, several research prototypes were iteratively developed for the HPI MOOC Platform and then tested and evaluated in an authentic real-world learning environment. Most of the results can also be applied on a conceptual level to other MOOC platforms. The research contribution of this thesis thus provides practical insights beyond the purely theoretical. In total, four system components were developed and extended:
(1) The Learning Analytics Architecture: A technical infrastructure to collect, process, and analyze event-driven learning data based on schema-agnostic pipelining in a service-oriented MOOC platform. (2) The Learning Analytics Dashboard for Learners: A tool for data-driven support of self-regulated learning, in particular enabling learners to evaluate and plan their learning activities, progress, and success by themselves. (3) Personalized Learning Objectives: A set of features that better connect learners' success to their personal intentions: based on selected learning objectives, the platform offers guidance and aligns the provided data-driven insights with learners' progress. (4) The Learning Analytics Dashboard for Teachers: A tool supporting teachers with data-driven insights to enable them to monitor their courses with thousands of learners, identify potential issues, and take informed action.
For all aspects examined in this dissertation, related research is presented, development processes and implementation concepts are explained, and evaluations are conducted in case studies. Among other findings, the usage of the learner dashboard in combination with personalized learning objectives demonstrated certification rate improvements of 11.62% to 12.63%. Furthermore, it was observed that the teacher dashboard is a key tool and an integral part of teaching in MOOCs. In addition to the results and contributions, general limitations of the work are discussed, which altogether provides a solid foundation for practical implications and future research.
Objective: This study investigated intraindividual differences of intratendinous blood flow (IBF) in response to running exercise in participants with Achilles tendinopathy.
Design: This is a cross-sectional study.
Setting: The study was conducted at the University Outpatient Clinic.
Participants: Sonographic detectable intratendinous blood flow was examined in symptomatic and contralateral asymptomatic Achilles tendons of 19 participants (42 ± 13 years, 178 ± 10 cm, 76 ± 12 kg, VISA-A 75 ± 16) with clinically diagnosed unilateral Achilles tendinopathy and sonographic evident tendinosis.
Intervention: IBF was assessed using Doppler ultrasound “Advanced Dynamic Flow” before (Upre) and 5, 30, 60, and 120 min (U5–U120) after a standardized submaximal constant load run.
Main Outcome Measure: IBF was quantified by counting the number (n) of vessels in each tendon.
Results: At Upre, IBF was higher in symptomatic compared with asymptomatic tendons [mean 6.3 (95% CI: 2.8–9.9) and 1.7 (0.4–2.9), p < 0.01]. Overall, 63% of symptomatic and 47% of asymptomatic Achilles tendons responded to exercise, whereas 16 and 11% showed persisting IBF and 21 and 42% remained avascular throughout the investigation. At U5, IBF increased in both symptomatic and asymptomatic tendons [difference to baseline: 2.4 (0.3–4.5) and 0.9 (0.5–1.4), p = 0.05]. At U30 to U120, IBF was still increased in symptomatic but not in asymptomatic tendons [mean difference to baseline: 1.9 (0.8–2.9) and 0.1 (-0.9 to 1.2), p < 0.01].
Conclusion: Irrespective of pathology, 47–63% of Achilles tendons responded to exercise with an immediate acute physiological IBF increase by an average of one to two vessels (“responders”). A higher amount of baseline IBF (approximately five vessels) and a prolonged exercise-induced IBF response found in symptomatic ATs indicate a pain-associated altered intratendinous “neovascularization.”
Background: The relationship between exercise-induced intratendinous blood flow (IBF) and tendon pathology or training exposure is unclear.
Objective: This study investigates the acute effect of running exercise on sonographic detectable IBF in healthy and tendinopathic Achilles tendons (ATs) of runners and recreational participants.
Methods: 48 participants (43 ± 13 years, 176 ± 9 cm, 75 ± 11 kg) performed a standardized submaximal 30-min constant load treadmill run with Doppler ultrasound “Advanced dynamic flow” examinations before (Upre) and 5, 30, 60, and 120 min (U5-U120) afterward. Included were runners (>30 km/week) and recreational participants (<10 km/week) with healthy (Hrun, n = 10; Hrec, n = 15) or tendinopathic (Trun, n = 13; Trec, n = 10) ATs. IBF was assessed by counting number [n] of intratendinous vessels. IBF data are presented descriptively (%, median [minimum to maximum range] for baseline-IBF and IBF-difference post-exercise). Statistical differences for group and time point IBF and IBF changes were analyzed with Friedman and Kruskal-Wallis ANOVA (α = 0.05).
Results: At baseline, IBF was detected in 40% (3 [1–6]) of Hrun, in 53% (4 [1–5]) of Hrec, in 85% (3 [1–25]) of Trun, and in 70% (10 [2–30]) of Trec. At U5, IBF responded to exercise in 30% (3 [−1 to 9]) of Hrun, in 53% (4 [−2 to 6]) of Hrec, in 70% (4 [−10 to 10]) of Trun, and in 80% (5 [1–10]) of Trec. While IBF in 80% of healthy responding ATs returned to baseline at U30, IBF remained elevated until U120 in 60% of tendinopathic ATs. Within groups, IBF changes from Upre to U120 were significant for Hrec (p < 0.01), Trun (p = 0.05), and Trec (p < 0.01). Between groups, IBF changes in consecutive examinations were not significantly different (p > 0.05), but the IBF level was significantly higher at all measurement time points in tendinopathic versus healthy ATs (p < 0.05).
Conclusion: Irrespective of training status and tendon pathology, running leads to an immediate increase of IBF in responding tendons. This increase occurs shortly in healthy and prolonged in tendinopathic ATs. Training exposure does not alter IBF occurrence, but IBF level is elevated in tendon pathology. While an immediate exercise-induced IBF increase is a physiological response, prolonged IBF is considered a pathological finding associated with Achilles tendinopathy.
Background
Artificial intelligence (AI) is one of the most promising areas in medicine with many possibilities for improving health and wellness. Already today, diagnostic decision support systems may help patients to estimate the severity of their complaints. This fictional case study aimed to test the diagnostic potential of an AI algorithm for common sports injuries and pathologies.
Methods
Based on a literature review and clinical expert experience, five fictional “common” cases of acute, and subacute injuries or chronic sport-related pathologies were created: Concussion, ankle sprain, muscle pain, chronic knee instability (after ACL rupture) and tennis elbow. The symptoms of these cases were entered into a freely available chatbot-guided AI app and its diagnoses were compared to the pre-defined injuries and pathologies.
Results
On average, the app asked 25–36 questions per patient, with optional explanations of certain questions or illustrative photos available on demand. It was stressed that the symptom analysis would not replace a doctor's consultation. A 23-year-old male patient case with a mild concussion was correctly diagnosed. An ankle sprain of a 27-year-old female without ligament or bony lesions was also detected, and an ER visit was suggested. Muscle pain in the thigh of a 19-year-old male was correctly diagnosed. In the case of a 26-year-old male with chronic ACL instability, the algorithm did not sufficiently cover the chronic aspect of the pathology, but the given recommendation of seeing a doctor would have helped the patient. Finally, the condition of chronic epicondylitis in a 41-year-old male was correctly detected.
Conclusions
All chosen injuries and pathologies were either correctly diagnosed or at least accompanied by the right advice on when it is urgent to seek a medical specialist. However, the quality of AI-based results presumably depends on the data-driven experience of these programs as well as on the understanding of their users. Further studies should compare existing AI programs and their diagnostic accuracy for medical injuries and pathologies.
Magmatic continental rifts often constitute the earliest stage of nascent plate boundaries. These extensional tectonic provinces are characterized by ubiquitous normal faulting and volcanic activity; the spatial pattern, the geometry, and the age of these normal faults can help to unravel the spatiotemporal relationships between extensional deformation, magmatism, and long-wavelength crustal deformation of continental rift provinces. This study focuses on the active faulting in the Kenya Rift of the Cenozoic East African Rift System (EARS) with a focus on the mid-Pleistocene to the present-day.
To examine the early stages of continental break-up in the EARS, this thesis presents a time-averaged minimum extension rate for the inner graben of the Northern Kenya Rift (NKR) for the last 0.5 m.y. Using the TanDEM-X digital elevation model, fault-scarp geometries and associated throws are determined across the volcano-tectonic axis of the inner graben of the NKR. By integrating existing geochronology of faulted units with new ⁴⁰Ar/³⁹Ar radioisotopic dates, time-averaged extension rates are calculated. This study reveals that in the inner graben of the NKR, the long-term extension rate based on mid-Pleistocene to recent brittle deformation has minimum values of 1.0 to 1.6 mm yr⁻¹, locally with values up to 2.0 mm yr⁻¹. In light of virtually inactive border faults of the NKR, we show that extension is focused in the region of the active volcano-tectonic axis in the inner graben, thus highlighting the maturing of continental rifting in the NKR.
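The time-averaged minimum extension rate described above follows from summing fault heaves across a transect and dividing by the age of the faulted surface. The throws, dip angle, and surface age in this sketch are hypothetical placeholders, not the TanDEM-X-derived measurements of the thesis.

```python
import numpy as np

def min_extension_rate_mm_yr(throws_m, dip_deg, age_myr):
    """Minimum time-averaged horizontal extension rate across a fault array.

    heave = throw / tan(dip); rate = total heave / age of faulted surface.
    Inputs here are illustrative, not the thesis measurements.
    """
    heaves = np.asarray(throws_m, dtype=float) / np.tan(np.radians(dip_deg))
    return 1e3 * heaves.sum() / (age_myr * 1e6)  # m -> mm, Myr -> yr

# e.g. five hypothetical normal faults with 60° dip cutting a 0.5 Myr surface
rate = min_extension_rate_mm_yr([120, 260, 340, 180, 140], 60.0, 0.5)
```

With these placeholder values the rate comes out near 1.2 mm yr⁻¹, i.e., within the 1.0–1.6 mm yr⁻¹ range quoted for the inner graben; the point of the sketch is the arithmetic, not the numbers.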
The phenomenon of focused extension is further investigated with a structural analysis of the youngest volcanic manifestations of the Kenya Rift, their relationship with extensional structures, and their overprint by Holocene faulting. In this context, I analyzed the fault characteristics at the ~36 ka old Menengai Caldera and adjacent areas in the Central Kenya Rift using detailed field mapping and a structure-from-motion-based DEM generated from UAV data. In general, the Holocene intra-rift normal faults are dip-slip faults that strike NNE and thus reflect the present-day tectonic stress field; inside the Menengai Caldera, however, persistent magmatic activity and magmatic resurgence significantly overprint these young structures. The caldera is located at the center of an actively extending rift segment, and this and the other volcanic edifices of the Kenya Rift may constitute nucleation points of faulting and magmatic extensional processes that ultimately lead to a future stage of magma-assisted rifting.
When viewed at the scale of the entire Kenya Rift, the protracted normal faulting in this region compartmentalizes the larger rift depressions and influences the sedimentology and the hydrology of the intra-rift basins at a scale of less than 100 km. In the present day, most of the fault-bounded sub-basins of the Kenya Rift are hydrologically isolated, because the combination of faulting and magmatic activity has generated efficient hydrological barriers that maintain these basins as semi-independent geomorphic entities. This isolation, however, was overcome during wetter climatic conditions in the past, when the basins were transiently connected. I therefore also investigated the hydrological connectivity of the rift basins during the African Humid Period of the early Holocene, when the climate was wetter. With the help of DEM analysis, lake-highstand indicators, radiocarbon dating, and a review of the fossil record, two lake-river cascades could be identified: one directed southward and one directed northward. Both cascades connected presently isolated rift basins during the early Holocene via spillovers of lakes and incised river gorges. This hydrological connection fostered the dispersal of aquatic faunas along the rift; in addition, the water divide between the two river systems represented the only terrestrial dispersal corridor across the Kenya Rift. The reconstruction explains isolated distributions of Nilotic fish species in Kenya Rift lakes and of Guineo-Congolian mammal species in forests east of the Kenya Rift. On longer timescales, repeated episodes of connectivity and isolation must have occurred. To address this problem, I participated in research to analyze a sediment drill core from the Koora basin of the Southern Kenya Rift, which provides a paleo-environmental record of the last 1 Ma.
Based on this record it can be concluded that at ~400 ka relatively stable environmental conditions were disrupted by tectonic, hydrological, and ecological changes, resulting in increasingly large and frequent fluctuations in water availability, grassland communities, and woody plant cover. The major environmental shifts reflected in the drill core data coincide with phases where volcano-tectonic activity affected the basin. This thesis therefore shows how protracted extensional tectonic processes and the resulting geomorphologic conditions can affect the hydrology, the paleo-environment and the biodiversity of extensional zones in Kenya and elsewhere.
The quantification of plant biomass by means of efficient measurement methods is of great importance for various fields of science. The present work aims to infer the amount of hydrogen contained in an apple and a cherry orchard at the Marquardt research site (Potsdam) from single-tree-based estimates of their aboveground biomass. To this end, the volume of 13 cherry and 11 apple trees was determined by dividing them into segments, measuring each segment individually, and assigning the segments to diameter classes. In addition, the density of the branches and the mean foliage mass were determined. To calculate the biomass, a literature value for the wood density of the respective tree species was also used. The distribution of the woody biomass across the individual diameter classes was examined, and easily measurable tree parameters as well as data from a terrestrial laser scanner were used as predictor variables for a regression analysis. The experimentally determined density values increased with increasing branch diameter; they deviated slightly from the literature value for cherry wood and more strongly for apple wood. The foliage mass surveys were conducted independently of the measured trees, and the results were subject to large variance, so no relationship between woody and foliage biomass could be established and only average values could be determined. The share of the various diameter classes in the total mass proved to be highly variable, so estimating the biomass from the weight of a few thick tree segments is not suitable. A reliable and efficient estimate of the aboveground woody biomass can, however, be achieved by applying the models developed here. For the present population of individuals of the same age and similar size, a linear regression yielded the best results.
While the variables based on laser data hardly correlated with the woody biomass, linear models with the trunk diameter d or d² as predictor showed high significance (p < 0.001) and a very good fit (R² > 0.8) for both tree species.
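A linear model with d² as predictor can be sketched as an ordinary least-squares fit. The diameters and biomass values below are invented for illustration and do not reproduce the thesis data; only the model form, biomass ≈ a + b·d², follows the text.

```python
import numpy as np

# Hypothetical trunk diameters d (cm) and woody biomass (kg) per tree;
# illustrative numbers only, not the Marquardt measurements.
d = np.array([6.1, 6.8, 7.2, 7.9, 8.4, 9.0])
biomass = np.array([4.2, 5.3, 6.0, 7.4, 8.3, 9.6])

X = np.column_stack([np.ones_like(d), d**2])        # design matrix [1, d^2]
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)  # OLS fit: a, b
pred = X @ coef
r2 = 1.0 - ((biomass - pred)**2).sum() / ((biomass - biomass.mean())**2).sum()
```

The d² predictor is a common allometric choice because woody volume, and hence mass, scales roughly with the cross-sectional area of the trunk.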
This study investigates the relationship between teacher quality and teachers’ engagement in professional development (PD) activities using data on 229 German secondary school mathematics teachers. We assessed different aspects of teacher quality (e.g. professional knowledge, instructional quality) using a variety of measures, including standardised tests of teachers’ content knowledge, to determine what characteristics are associated with high participation in PD. The results show that teachers with higher scores for teacher quality variables take part in more content-focused PD than teachers with lower scores for these variables. This suggests that teacher learning may be subject to a Matthew effect, whereby more proficient teachers benefit more from PD than less proficient teachers.
Historiography has so far dated the end of German Zionism to the Nazi ban of the Zionist Federation of Germany (Zionistische Vereinigung für Deutschland) in the wake of the November pogrom of 1938. By that time, however, German Zionism had outgrown its geographical context and already put down new roots in Eretz Israel. Zionists from Germany now set out, with their specific horizon of experience, their standards of value, and the ideological equipment they had brought with them, to help shape the development of the Jewish national home and to pave the way for a comprehensive economic, cultural, and political acculturation of the German Aliyah. Contrary to all Zionist theory, they founded the self-help organization Hitachduth Olej Germania on a compatriot (landsmannschaftlich) basis in 1932 and, during the World War, the party Alija Chadascha.
The dissertation provides a comprehensive account of German Zionism in its final phase, from 1932 to 1948; at the same time, it illuminates the history of the roughly 60,000 Jews from Germany who immigrated to Palestine during the period relevant to this study. The first part presents, in chronological order, the final gathering and reorganization of German Zionism in its new-old homeland, beginning in 1932: the formative years, as it were, in personal, organizational, and ideological-political terms, which, after the almost complete failure of the political integration of the German Aliyah, concluded with the founding of the Alija Chadascha, a step that in retrospect appears almost inevitable. The second part presents the positions of the German Zionists on the existential questions facing the Jewish community in Palestine, known in Hebrew as the Yishuv, during the period in focus. These were, first, the question of immigration, which was inseparably linked to the demand, indispensable in Zionist theory, for a Jewish majority in Palestine; second, the question of the political shape of the future Jewish polity; and third, the question of the Yishuv's adequate response to the Shoah. The question of the proper relationship to the British Mandatory power is woven into each of these thematic complexes, which are treated in separate chapters. Here the German Zionists had to put the intellectual and ideological equipment they had brought with them to a practical test and search for answers grounded in Realpolitik.
The meteoric rise of the Alija Chadascha, which retained its compatriot character, was followed in the first post-war years by an equally rapid decline. A few months after the founding of the State of Israel, it dissolved quietly, and the bulk of its activists integrated into the party system of the new state. German Zionism as a political movement had now truly come to an end. This study thus traces, on the one hand, the struggle of the German Aliyah for social recognition and political participation in the Yishuv and, on the other, situates German Zionism intellectually and ideologically in its final phase, revealing tendencies of ideological reorientation. In addition, commonplaces in the historiography, such as the almost universally accepted thesis of the German Zionists' failure in their new homeland, are subjected to scrutiny. The last remaining gap in the scholarly canon on the more than fifty-year history of German Zionism is thereby closed.
In many degree programs, the often heterogeneous prior knowledge of students in the introductory phase leads to a lack of motivation through over- or under-challenge. This problem also occurs in basic music theory training at universities. Motivation can be increased by using elements familiar from entertainment contexts; the use of such elements is referred to as gamification.
The aim of the present work is to analyze, using basic music theory training as a case study, whether learning opportunities can be supported by a gamified interactive prototype of a learning environment. To this end, the following research question is posed: To what extent does gamification affect learners' motivation to engage with the topic of (musical) functional analysis?
To answer the research question, a systematic, theory-driven process model for the gamification of learning environments was first developed and applied. The resulting prototype was then stripped of all game design elements and compared with the gamified variant in an experimental study with two independent groups.
The study showed that gamifying a learning application according to the developed process model has, in principle, the potential to positively influence some aspects of the user experience (UX). In particular, gamification had positive effects on joy of use and immersion. However, the magnitude of the observed effects fell well short of the expectations derived from various motivation theories.
Gamification therefore appears particularly promising in contexts outside universities where the focus is on increasing joy of use or immersion. Nevertheless, the study yields new insights into the emotional effects of gamification and into a systematic approach to gamifying learning applications.
Further research could build on these findings by examining the emotional effects of gamification and their influence on motivation in more detail. In addition, it should consider gamification from a decision-theoretic perspective and develop analytical methods for deciding whether using gamification to increase motivation is expedient in a specific use case. Building on the developed process model, it may also be worthwhile to examine more closely which factors are decisive for the success of a gamification measure in educational contexts. The findings of such a study could contribute decisively to improving and validating the process model.
The sharing economy is gaining momentum and is developing a major economic impact on traditional markets and firms. However, only rudimentary theoretical and empirical insights exist into how sharing networks, i.e., focal firms, shared goods providers, and customers, create and capture value in their sharing-based business models. We conducted a qualitative study to find key differences in sharing-based business models that are decisive for their value configurations. Our results show that (1) customization versus standardization of shared goods and (2) centralization versus particularization of property rights over the shared goods are two important dimensions for distinguishing value configurations. A second, quantitative study confirms the visibility and relevance of these dimensions to customers. We discuss strategic options for focal firms to design value configurations along the two dimensions to optimize value creation and value capture in sharing networks. Firms can use this two-dimensional search grid to explore untapped opportunities in the sharing economy.
The numerous applications of rare earth elements (REE) have led to a growing global demand and to the search for new REE deposits. One promising technique for the exploration of these deposits is laser-induced breakdown spectroscopy (LIBS). Among the advantages of the technique is the possibility to perform on-site measurements without sample preparation. Since the exploration of a deposit is based on the analysis of various geological compartments of the surrounding area, REE-bearing rock and soil samples were analyzed in this work. The field samples are from three European REE deposits in Sweden and Norway. The focus is on the REE cerium, lanthanum, neodymium, and yttrium. Two different approaches of data analysis were used for the evaluation. The first approach is univariate regression (UVR). While this approach was successful for the analysis of synthetic REE samples, the quantitative analysis of field samples from different sites was influenced by matrix effects. Principal component analysis (PCA) can be used to determine the origin of the samples from the three deposits. The second approach is based on multivariate regression methods, in particular interval PLS (iPLS) regression. In comparison to UVR, this method is better suited for the determination of REE contents in heterogeneous field samples.
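Univariate regression in this context means calibrating the intensity of a single emission line against standards of known content and inverting the calibration line for unknowns. The intensities and concentrations below are illustrative placeholders, not measured LIBS data, and the single-line model deliberately ignores the matrix effects that motivate the multivariate (iPLS) approach.

```python
import numpy as np

# Hypothetical calibration: intensity of one Ce emission line vs. known
# Ce content of synthetic standards (illustrative values only).
conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])              # wt% Ce
intensity = np.array([120.0, 480.0, 830.0, 1570.0, 3050.0])  # a.u.

slope, intercept = np.polyfit(conc, intensity, 1)       # calibration line

def predict_conc(i):
    """Invert the univariate calibration line for an unknown sample."""
    return (i - intercept) / slope
```

Matrix effects shift slope and intercept between sample types, which is why a calibration built on synthetic standards can misestimate heterogeneous field samples; iPLS mitigates this by using many spectral intervals at once.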
Holocene temperature proxy records are commonly used in quantitative syntheses and model-data comparisons. However, comparing correlations between time series from records collected in proximity to one another with the correlations expected from climate model simulations indicates either regional or noisy climate signals in Holocene temperature proxy records. In this study, we evaluate whether the spatial correlations present in Holocene proxy records are consistent with those found in data from the Last Glacial Maximum (LGM). Specifically, we predict the correlations expected in LGM proxy records if the only differences from the Holocene were more time uncertainty and more climate variability in the LGM. We compare this simple prediction to the actual correlation structure in the LGM proxy records. We found that time series of ice-core stable isotope records and of planktonic foraminifera Mg/Ca ratios were consistent between the Holocene and the LGM, while time series of Uk'37 proxy records were not, as we found no correlation between nearby LGM records. Our results support the finding of highly regional or noisy marine proxy records in the compilation analysed here and suggest the need for further studies on the role of climate proxies and the processes of climate-signal recording and preservation.
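The effect of time (age-model) uncertainty on expected correlations can be sketched with a toy simulation: pairs of records share one climate signal but carry independent dating offsets and measurement noise. The signal shape, error magnitudes, and noise levels below are assumptions for illustration, not the study's values or its actual prediction method.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10_000.0, 500)  # common age axis (yr)

def signal(age):
    """Shared 'true' climate signal: a single 2 kyr cycle (illustrative)."""
    return np.sin(2.0 * np.pi * age / 2_000.0)

def mean_pair_corr(time_err_sd, noise_sd, n_pairs=200):
    """Mean correlation between record pairs sharing signal() but carrying
    independent age-model offsets (sd time_err_sd, yr) and noise."""
    rs = []
    for _ in range(n_pairs):
        a = signal(t + rng.normal(0.0, time_err_sd)) + rng.normal(0.0, noise_sd, t.size)
        b = signal(t + rng.normal(0.0, time_err_sd)) + rng.normal(0.0, noise_sd, t.size)
        rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs))

r_small_err = mean_pair_corr(100.0, 0.3)  # modest dating errors
r_large_err = mean_pair_corr(500.0, 0.5)  # larger errors and noise
```

Larger dating errors decorrelate nearby records even when they share the same underlying signal, which is the mechanism behind predicting weaker expected LGM correlations from Holocene ones.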
By regulating the concentration of carbon in our atmosphere, the global carbon cycle drives changes in our planet’s climate and habitability. Earth surface processes play a central, yet insufficiently constrained role in regulating fluxes of carbon between terrestrial reservoirs and the atmosphere. River systems drive global biogeochemical cycles by redistributing significant masses of carbon across the landscape. During fluvial transit, the balance between carbon oxidation and preservation determines whether this mass redistribution is a net atmospheric CO2 source or sink. Existing models for fluvial carbon transport fail to integrate the effects of sediment routing processes, resulting in large uncertainties in fluvial carbon fluxes to the oceans.
In this Ph.D. dissertation, I address this knowledge gap through three studies that focus on the timescale and routing pathways of fluvial mass transfer and show their effect on the composition and fluxes of organic carbon exported by rivers. The hypotheses posed in these three studies were tested in an analog lowland alluvial river system – the Rio Bermejo in Argentina. The Rio Bermejo annually exports more than 100 Mt of sediment and organic matter from the central Andes, and transports this material nearly 1300 km downstream across the lowland basin without influence from tributaries, allowing me to isolate the effects of geomorphic processes on fluvial organic carbon cycling. These studies focus primarily on the geochemical composition of suspended sediment collected from river depth profiles along the length of the Rio Bermejo.
In Chapter 3, I aimed to determine the mean fluvial sediment transit time for the Rio Bermejo and evaluate the geomorphic processes that regulate the rate of downstream sediment transfer. I developed a framework to use meteoric cosmogenic 10Be (10Bem) as a chronometer to track the duration of sediment transit from the mountain front downstream along the ~1300 km channel of the Rio Bermejo. I measured 10Bem concentrations in suspended sediment sampled from depth profiles, and found a 230% increase along the fluvial transit pathway. I applied a simple model for the time-dependent accumulation of 10Bem on the floodplain to estimate a mean sediment transit time of 8.5±2.2 kyr. Furthermore, I show that sediment transit velocity is influenced by lateral migration rate and channel morphodynamics. This approach to measuring sediment transit time is much more precise than other methods previously used and shows promise for future applications.
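The time-dependent accumulation chronometer behind this estimate can be reduced to a back-of-the-envelope calculation: the downstream gain in meteoric 10Be divided by the rate at which 10Be is delivered to stored sediment. The concentrations and delivery rate below are illustrative placeholders, not the study's measured values:

```python
# Toy version of the 10Bem transit-time chronometer; numbers are invented.
def transit_time_kyr(n_upstream, n_downstream, delivery_rate):
    """Mean sediment transit time (kyr) from the downstream gain in
    meteoric 10Be concentration (atoms/g), assuming a constant delivery
    rate (atoms/g/yr) during floodplain storage."""
    return (n_downstream - n_upstream) / delivery_rate / 1e3  # yr -> kyr

# e.g. a 230% increase over an upstream concentration of 1e8 atoms/g
t = transit_time_kyr(1e8, 3.3e8, delivery_rate=2.7e4)
```

With these placeholder numbers the calculation happens to return a transit time on the order of the 8.5 kyr reported above, but the real model also has to account for depth-dependent accumulation and exchange with stored floodplain sediment.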
In Chapter 4, I aimed to quantify the effects of hydrodynamic sorting on the composition and quantity of particulate organic carbon (POC) transported by lowland rivers. I first used scanning electron microscopy (SEM) coupled with nanoscale secondary ion mass spectrometry (NanoSIMS) analyses to show that the Bermejo transports two principal types of POC: 1) mineral-bound organic carbon associated with <4 µm, platy grains, and 2) coarse discrete organic particles. Using n-alkane stable isotope data and particle shape analysis, I showed that these two carbon pools are vertically sorted in the water column due to differences in particle settling velocity. This vertical sorting may cause modern POC to be transported efficiently from source to sink, driving efficient CO2 drawdown. Simultaneously, vertical sorting may cause degraded, mineral-bound POC to be deposited overbank and stored on the floodplain for centuries to millennia, resulting in enhanced POC remineralization. In the Rio Bermejo, selective deposition of coarse material causes the proportion of mineral-bound POC to increase with distance downstream, but the majority of exported POC is composed of discrete organic particles, suggesting that the river is a net carbon sink. In summary, this study shows that selective deposition and hydraulic sorting control the composition and fate of POC during fluvial transit.
In Chapter 5, I characterized and quantified POC transformation and oxidation during fluvial transit. I analyzed the radiocarbon content and stable carbon isotopic composition of Rio Bermejo suspended sediment and found that POC ages during fluvial transit, but is also degraded and oxidized during transient floodplain storage. Using these data, I developed a conceptual model for fluvial POC cycling that allows the estimation of POC oxidation relative to POC export, and ultimately reveals whether a river is a net source or sink of CO2 to the atmosphere. Through this study, I found that the Rio Bermejo annually exports more POC than is oxidized during transit, largely due to high rates of lateral migration that cause erosion of floodplain vegetation and soil into the river. These results imply that human engineering of rivers could alter the fluvial carbon balance, by reducing lateral POC inputs and increasing the mean sediment transit time.
Together, these three studies quantitatively link geomorphic processes to rates of POC transport and degradation across sub-annual to millennial time scales and nanoscale to 10³ km spatial scales, laying the groundwork for a global-scale fluvial organic carbon cycling model.
3D printing technology has developed rapidly over the past decades. In industry, ever more advanced and specialized printing processes are emerging, while in the hobby and home-user sector, increasingly affordable and easy-to-use devices are becoming available. Only in education does the topic seem to be gaining ground slowly, even though numerous points of contact for its use can be found in a wide range of subjects. In the subject Wirtschaft-Arbeit-Technik in particular, the interfaces with the Berlin/Brandenburg framework curriculum are evident, yet concrete and systematic didactic concepts and suggestions for embedding the topic in classroom practice exist only sporadically. In this thesis, the author therefore seeks to demonstrate the relevance of the topic for technical education, to give a brief technical introduction to the FDM printing process, which is particularly suitable for use in schools, and, building on this, to present concrete implementation proposals: on the one hand, a general phase model for planning technology lessons, and on the other, an exemplary teaching concept. Using a chess set as an example, it is illustrated how students can use digital CAD programs to prepare design documents and then manufacture the parts additively with a 3D printer.
This manuscript serves to prepare teachers for the examination of expert knowledge in radiation protection. It contains important fundamentals of nuclear physics, in particular the properties of alpha, beta, gamma, neutron, and X-ray radiation. This is followed by a short description of the influence of radiation on living matter. Important paragraphs of the Radiation Protection Ordinance (Strahlenschutzverordnung) are described. A collection of exercises serves for illustration and practice.
Mycotoxins and pesticides regularly co-occur in agricultural products worldwide. Humans can thus be exposed to toxic contaminants and pesticides simultaneously, and multi-analyte methods assessing the occurrence of various food contaminants and residues in a single run are necessary. A two-dimensional high-performance liquid chromatography tandem mass spectrometry method for the analysis of 40 (modified) mycotoxins, two plant growth regulators, two tropane alkaloids, and 334 pesticides in cereals was developed. After an acetonitrile/water/formic acid (79:20:1, v/v/v) multi-analyte extraction procedure, extracts were injected into the two-dimensional setup, and an online clean-up was performed. The method was validated according to Commission Decision (EC) No. 657/2002 and document N° SANTE/12682/2019. Good linearity (R² > 0.96), recoveries between 70 and 120%, repeatability and reproducibility values < 20%, and expanded measurement uncertainties < 50% were obtained for a wide range of analytes, including very polar substances such as deoxynivalenol-3-glucoside and methamidophos. However, results for fumonisins, zearalenone-14,16-disulfate, acid-labile pesticides, and carbamates were unsatisfactory. Limits of quantification meeting maximum (residue) limits were achieved for most analytes. Matrix effects varied widely (−85 to +1574%) and were mainly observed for analytes eluting in the first dimension and early-eluting analytes in the second dimension. Application of the method demonstrated the co-occurrence of 28 toxins and pesticides in different types of cereals. Overall, 86% of the samples showed positive findings for at least one mycotoxin, plant growth regulator, or pesticide.
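The validation figures quoted above (recovery, matrix effects) follow standard definitions in residue analysis; a minimal sketch with invented example numbers, not values from the study:

```python
# Standard validation metrics for residue methods; example numbers invented.
def recovery_pct(measured, spiked):
    """Recovery (%) = measured concentration / spiked concentration * 100."""
    return measured / spiked * 100.0

def matrix_effect_pct(slope_matrix, slope_solvent):
    """Matrix effect (%) = (matrix-matched slope / solvent slope - 1) * 100.
    Negative values indicate ion suppression, positive values enhancement."""
    return (slope_matrix / slope_solvent - 1.0) * 100.0

rec = recovery_pct(85.0, 100.0)          # inside the 70-120% acceptance band
me = matrix_effect_pct(0.45, 1.0)        # strong suppression, ~ -55%
```

On these definitions, the reported range of −85 to +1574% spans everything from near-total ion suppression to a fifteen-fold signal enhancement, which is why matrix-matched calibration matters for the early-eluting analytes.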
Atmospheric water vapour content is a key variable that controls the development of deep convective storms and rainfall extremes over the central Andes. Direct measurements of water vapour are challenging; however, recent developments in microwave processing allow the use of phase delays from L-band radar to measure the water vapour content throughout the atmosphere: Global Navigation Satellite System (GNSS)-based integrated water vapour (IWV) monitoring shows promise for measuring vertically integrated water vapour at high temporal resolution. Previous work also identified convective available potential energy (CAPE) as a key climatic variable for the formation of deep convective storms and rainfall in the central Andes. Our analysis relies on GNSS data from the Argentine Continuous Satellite Monitoring Network (Red Argentina de Monitoreo Satelital Continuo, RAMSAC) from 1999 to 2013. CAPE is derived from version 2.0 of the ECMWF (European Centre for Medium-Range Weather Forecasts) Re-Analysis (ERA-Interim) and rainfall from the TRMM (Tropical Rainfall Measuring Mission) product. In this study, we first analyse the rainfall characteristics of two GNSS-IWV stations by comparing their complementary cumulative distribution functions (CCDFs). Second, we separately derive the relations of rainfall to CAPE and to GNSS-IWV. Based on our distribution fitting analysis, we observe an exponential relation of rainfall to GNSS-IWV. In contrast, we report a power-law relationship between the daily mean values of rainfall and CAPE at the GNSS-IWV station locations in the eastern central Andes that is close to the theoretical relationship based on parcel theory. Third, we generate a joint regression model through a multivariable regression analysis using CAPE and GNSS-IWV to explain the contribution of both variables, in the presence of each other, to extreme rainfall during the austral summer season.
We found that rainfall can be characterised with higher statistical significance for higher rainfall quantiles, e.g., the 0.9 quantile, based on the goodness-of-fit criterion for quantile regression. We observed different contributions of CAPE and GNSS-IWV to rainfall at each station for the 0.9 quantile. Fourth, we identify the temporal relation between extreme rainfall (the 90th, 95th, and 99th percentiles) and both GNSS-IWV and CAPE at 6-h time steps. We observed an increase in both GNSS-IWV and CAPE before the rainfall event and at the time of peak rainfall. We show higher values of CAPE and GNSS-IWV for higher rainfall percentiles (the 99th and 95th) compared to the 90th percentile at a 6-h temporal scale. Based on our correlation analyses and the dynamics of the time series, we show that GNSS-IWV and CAPE had comparable magnitudes, and we argue for considering both climatic variables when investigating their effect on rainfall extremes.
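The joint quantile-regression step can be sketched as below, assuming daily arrays of rainfall, CAPE, and GNSS-IWV. The synthetic data, coefficients, and units are illustrative only, not the RAMSAC/ERA-Interim/TRMM values themselves:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "iwv": rng.uniform(10, 60, 500),      # GNSS-IWV, kg/m^2 (synthetic)
    "cape": rng.uniform(0, 2000, 500),    # CAPE, J/kg (synthetic)
})
# synthetic rainfall with positive dependence on both predictors
df["rain"] = 0.05 * df["iwv"] + 0.002 * df["cape"] + rng.gamma(2, 1, 500)

# joint model at the 0.9 quantile: contribution of each variable to
# extreme rainfall in the presence of the other
fit = smf.quantreg("rain ~ iwv + cape", df).fit(q=0.9)
coefs = fit.params
```

Fitting at several quantiles (e.g., 0.5, 0.9, 0.99) and comparing goodness-of-fit is one way to see the stronger characterisation of higher rainfall quantiles reported above.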
Postural balance represents a fundamental movement skill for the successful performance of everyday and sport-related activities. There is ample evidence on the effectiveness of balance training on balance performance in athletic and non-athletic populations. However, less is known on potential transfer effects of other training types, such as plyometric jump training (PJT), on measures of balance. Given that PJT is a highly dynamic exercise mode with various forms of jump-landing tasks, high levels of postural control are needed to successfully perform PJT exercises. Accordingly, PJT has the potential to improve not only measures of muscle strength and power but also balance. The objective of this study was to systematically review and synthesize evidence from randomized and non-randomized controlled trials regarding the effects of PJT on measures of balance in apparently healthy participants. Systematic literature searches were performed in the electronic databases PubMed, Web of Science, and SCOPUS. A PICOS approach was applied to define inclusion criteria: (i) apparently healthy participants, with no restrictions on their fitness level, sex, or age; (ii) a PJT program; (iii) active controls (any sport-related activity) or specific active controls (a specific exercise type such as balance training); (iv) assessment of dynamic and/or static balance pre- and post-PJT; (v) randomized controlled trials and controlled trials. The methodological quality of studies was assessed using the Physiotherapy Evidence Database (PEDro) scale. The meta-analysis was computed using the inverse variance random-effects model. The significance level was set at p < 0.05. The initial search retrieved 8,251 records, plus 23 identified through other sources. Forty-two articles met our inclusion criteria for qualitative analysis and 38 for quantitative analysis (1,806 participants [990 males, 816 females], age range 9–63 years). PJT interventions lasted between 4 and 36 weeks.
The median PEDro score was 6, and no study had low methodological quality (≤3). The analysis revealed significant small effects of PJT on overall (dynamic and static) balance (ES = 0.46; 95% CI = 0.32–0.61; p < 0.001), dynamic (e.g., Y-balance test) balance (ES = 0.50; 95% CI = 0.30–0.71; p < 0.001), and static (e.g., flamingo balance test) balance (ES = 0.49; 95% CI = 0.31–0.67; p < 0.001). The moderator analyses revealed that sex and/or age did not moderate balance performance outcomes. When PJT was compared to specific active controls (i.e., participants undergoing balance training, whole-body vibration training, or resistance training), both PJT and the alternative training methods showed similar effects on overall (dynamic and static) balance (p = 0.534). Specifically, when PJT was compared to balance training, both training types showed similar effects on overall (dynamic and static) balance (p = 0.514). Conclusion: Compared to active controls, PJT showed small effects on overall balance, dynamic balance, and static balance. Additionally, PJT produced balance improvements similar to those of other training types (i.e., balance training). Although PJT is widely used in athletic and recreational sport settings to improve athletes' physical fitness (e.g., jumping, sprinting), our systematic review with meta-analysis is novel inasmuch as it indicates that PJT also improves balance performance. The observed PJT-related balance enhancements were irrespective of sex and participants' age. Therefore, PJT appears to be an adequate training regime to improve balance in both athletic and recreational settings.
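The inverse-variance random-effects pooling named above is commonly implemented with the DerSimonian-Laird between-study variance estimator; a minimal sketch with invented effect sizes and standard errors, not the study's extracted data:

```python
import numpy as np

def random_effects(es, se):
    """Pool effect sizes with inverse-variance weights and the
    DerSimonian-Laird tau^2 estimate of between-study variance."""
    es, se = np.asarray(es, float), np.asarray(se, float)
    w = 1.0 / se**2                          # fixed-effect weights
    es_fe = np.sum(w * es) / np.sum(w)       # fixed-effect pooled ES
    q = np.sum(w * (es - es_fe)**2)          # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(es) - 1)) / c) # between-study variance
    w_re = 1.0 / (se**2 + tau2)              # random-effects weights
    pooled = np.sum(w_re * es) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se_pooled, tau2

pooled, se_p, tau2 = random_effects([0.3, 0.5, 0.6], [0.1, 0.15, 0.2])
```

A 95% CI like those reported above would then follow as pooled ± 1.96 × se_pooled.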
Janus droplets were prepared by vortex mixing of three non-mixable liquids, i.e., olive oil, silicone oil, and water, in the presence of gold nanoparticles (AuNPs) in the aqueous phase and magnetite nanoparticles (MNPs) in the olive oil. The resulting Pickering emulsions were stabilized by a red-colored AuNP layer at the olive oil/water interface and by MNPs at the oil/oil interface. The core–shell droplets can be stimulated by an external magnetic field. Surprisingly, an inner rotation of the silicone droplet is observed when MNPs are fixed at the inner silicone droplet interface. This is the first example of a controlled movement of the inner parts of complex double emulsions by magnetic manipulation via interfacially confined magnetic nanoparticles.
Adapted pathogens possess a range of virulence mechanisms to suppress plant immune responses below a threshold of effective resistance. This enables them to proliferate and cause disease on a particular host. An essential virulence strategy of Gram-negative bacteria is the translocation of so-called type III effector proteins (T3Es) directly into the host cell, where they disturb the host's immune response or promote the establishment of an environment favorable to the pathogen. A critical component of plant immunity against invading pathogens is the rapid transcriptional reprogramming of the attacked cell. Many adapted bacterial plant pathogens use T3Es to interfere with the induction of defense-associated genes. Elucidating effector functions and identifying their plant target proteins are essential for understanding bacterial pathogenesis. The aim of this work was to functionally characterize the type III effector protein XopS from Xanthomonas campestris pv. vesicatoria (Xcv). A particular focus was placed on investigating the interaction between XopS and its plant interaction partner WRKY40, a transcriptional regulator of defense-associated gene expression identified in preliminary work. It was shown that XopS is an essential virulence factor of the phytopathogen Xcv during the pre-invasive immune response: xopS-deficient Xcv bacteria showed markedly reduced virulence compared to wild-type Xcv when inoculated onto the leaf surface of susceptible pepper plants. Translocation of XopS by Xcv, as well as ectopic expression of XopS in Arabidopsis or N. benthamiana, prevented stomatal closure in response to bacteria or a pathogen-associated stimulus, and this was shown to occur in a WRKY40-dependent manner.
It was further shown that XopS is able to manipulate the expression of defense-associated genes, indicating that XopS interferes with both pre-invasive and post-invasive, apoplastic defense. Phytohormone signaling networks play an important role in mounting an efficient plant immune response, and XopS appears to interfere with precisely these networks. Ectopic expression of the effector in Arabidopsis, for example, led to a significant induction of the phytohormone jasmonic acid (JA), while infection of susceptible pepper plants with a xopS-deficient Xcv strain likewise led to a significant accumulation of salicylic acid (SA).
At this point it can therefore be assumed that XopS promotes the virulence of Xcv by inducing JA-dependent signaling pathways while simultaneously suppressing SA-dependent signaling. Virus-induced gene silencing of the XopS interaction partner WRKY40a in pepper increased the plant's tolerance to Xcv infection, suggesting that this protein is a transcriptional repressor of plant immune responses. The hypothesis that WRKY40 represses defense-associated gene expression was corroborated here by several experimental approaches. For example, it was shown that WRKY40 represses the expression of various defense genes, including the SA-dependent gene PR1 and JAZ8, a negative regulator of JA signaling. To ensure defense-associated gene expression upon pathogen attack, WRKY40, as a negative regulator, must be degraded. Preliminary work showed that WRKY40 is degraded via the 26S proteasome. The present study further confirmed that the T3E XopS stabilizes the WRKY40 protein by preventing its degradation via the 26S proteasome in an as yet unresolved manner. The results of this work suggest that the stabilization of the immune-response negative regulator WRKY40 by XopS leads to a manipulation of defense-associated gene expression and a redirection of phytohormonal interactions that promote the spread of Xcv on susceptible pepper plants. A further aim of this work was to identify additional potential in planta interaction partners of XopS that could be relevant for its interaction with WRKY40 or for deciphering its mechanism of action. The deubiquitinase UBP12 was identified as an additional plant interaction partner of both XopS and WRKY40.
This enzyme is able to modify the ubiquitination of substrate proteins, and its function could thus be a link between XopS and its interference with the proteasomal degradation of WRKY40. During a compatible Xcv-host interaction, virus-induced gene silencing of UBP12 led to reduced resistance of the plant to the pathogen Xcv, indicating a positive regulatory role of UBP12 during the immune response. Western blot analyses also showed that the WRKY40 protein accumulates when UBP12 is downregulated and that this accumulation is further enhanced by the presence of the T3E XopS. Further analyses of the biochemical characteristics of the XopS/WRKY40/UBP12 interaction should be carried out in the future to further decipher the exact mechanism of action of the XopS T3E.
Semi-natural habitats (SNHs) are becoming increasingly scarce in modern agricultural landscapes. This may reduce natural ecosystem services such as pest control, with its putatively positive effect on crop production. In agreement with other studies, we recently reported wheat yield reductions at field borders which were linked to the type of SNH and the distance to the border. In this experimental landscape-wide study, we asked whether these yield losses have a biotic origin by analyzing fungal seed and fungal leaf pathogens, herbivory by cereal leaf beetles, and weed cover as hypothesized mediators between SNHs and yield. We established experimental winter wheat plots of a single variety within conventionally managed wheat fields at fixed distances either to a hedgerow or to an in-field kettle hole. For each plot, we recorded the fungal infection rate on seeds, fungal infection and herbivory rates on leaves, and weed cover. Using several generalized linear mixed-effects models as well as a structural equation model, we tested the effects of SNHs at the field scale (SNH type and distance to SNH) and at the landscape scale (percentage and diversity of SNHs within a 1000-m radius). In the dry year of 2016, we detected one putative biotic culprit: weed cover was negatively associated with yield at a 1-m and 5-m distance from the field border with a SNH. None of the fungal and insect pests, however, significantly affected yield, either on their own or in dependence on the type of or distance to a SNH. However, the pest groups themselves responded differently to SNHs at the field scale and at the landscape scale. Our findings highlight that crop losses at field borders may be caused by biotic culprits; however, their negative impact seems weak and is putatively reduced by conventional farming practices.
Boon and bane (2021)
Semi-natural habitats (SNHs) in agricultural landscapes represent important refugia for biodiversity including organisms providing ecosystem services. Their spill-over into agricultural fields may lead to the provision of regulating ecosystem services such as biological pest control ultimately affecting agricultural yield. Still, it remains largely unexplored, how different habitat types and their distributions in the surrounding landscape shape this provision of ecosystem services within arable fields. Hence, in this thesis I investigated the effect of SNHs on biodiversity-driven ecosystem services and disservices affecting wheat production with an emphasis on the role and interplay of habitat type, distance to the habitat and landscape complexity.
I established transects from the field border into the wheat field, starting either from a field-to-field border, a hedgerow, or a kettle hole, and assessed beneficial and detrimental organisms and their ecosystem functions as well as wheat yield at several in-field distances. Using this study design, I conducted three studies where I aimed to relate the impacts of SNHs at the field and at the landscape scale on ecosystem service providers to crop production.
In the first study, I observed yield losses close to SNHs for all transect types. Woody habitats, such as hedgerows, reduced yields more strongly than kettle holes, most likely due to shading by the tall vegetation. To identify the biotic drivers of these yield losses close to SNHs, I measured infestation by selected wheat pests as potential ecosystem disservices to crop production in the second study. Besides relating their damage rates to the wheat yield of experimental plots, I studied the effect of SNHs on these pest rates at the field and at the landscape scale. Only weed cover could be associated with yield losses, with the strongest impact on wheat yield close to the SNH. While fungal seed infection rates did not respond to SNHs, fungal leaf infection and herbivory rates of cereal leaf beetle larvae were positively influenced by kettle holes. The latter even increased at kettle holes with increasing landscape complexity, suggesting a release from natural enemies at isolated habitats within the field interior.
In the third study, I found that ecosystem service providers also benefit from the presence of kettle holes. The distance to a SNH decreased the species richness of ecosystem service providers, whereby the spatial range depended on species mobility: arable weeds diminished rapidly, while carabids were less affected by the distance to a SNH. In contrast, weed seed predation increased with distance, suggesting that higher food availability at field borders might have diluted predation on the experimental seeds. Intriguingly, responses to landscape complexity were mixed: while weed species richness generally increased with landscape complexity, carabids followed a hump-shaped curve, with the highest species numbers and activity-density in simple landscapes. The latter hints that carabids profit from a minimum endowment of SNHs, while a further increase impedes their mobility. Weed seed predation was affected differently by landscape complexity depending on the weed species presented. However, in habitat-rich landscapes seed predation of the different weed species converged to similar rates, emphasising that landscape complexity can stabilize the provision of ecosystem services. Lastly, I could relate higher weed seed predation to an increase in wheat yield, even though seed predation did not diminish weed cover. The exact mechanisms by which weed control contributes to crop production remain to be investigated in future studies.
In conclusion, I found habitat-specific responses of ecosystem (dis)service providers and their functions emphasizing the need to evaluate the effect of different habitat types on the provision of ecosystem services not only at the field scale, but also at the landscape scale. My findings confirm that besides identifying species richness of ecosystem (dis)service providers the assessment of their functions is indispensable to relate the actual delivery of ecosystem (dis)services to crop production.
Exercise is known for its beneficial effects in preventing cardiometabolic diseases (CMDs) in the general population. People living with the human immunodeficiency virus (PLWH) are prone to sedentarism, further raising their already elevated risk of developing CMDs compared to individuals without HIV. The aim of this cross-sectional study was to determine whether exercise is associated with a reduced risk of self-reported CMDs in a German HIV-positive sample (n = 446). Participants completed a self-report survey to assess exercise levels, date of HIV diagnosis, CD4 cell count, antiretroviral therapy, and CMDs. Participants were classified into exercising or sedentary conditions. Generalized linear models with Poisson regression were conducted to assess the prevalence ratio (PR) of PLWH reporting a CMD. Exercising PLWH were less likely to report a heart arrhythmia with increasing exercise duration (PR: 0.20; 95% CI: 0.10–0.62, p < 0.01) and diabetes mellitus with each additional exercise session per week (PR: 0.40; 95% CI: 0.10–1.00, p < 0.01). Exercise frequency and duration are thus associated with a decreased risk of reporting arrhythmia and diabetes mellitus in PLWH. Further studies are needed to elucidate the mechanisms underlying exercise as a protective factor against CMDs in PLWH.
Development of chronic pain after a low back pain episode is associated with increased pain sensitivity, altered pain processing mechanisms, and the influence of psychosocial factors. Although there is some evidence that multimodal therapy (such as behavioral or motor control therapy) may be an important therapeutic strategy, its long-term effect on pain reduction and psychosocial load is still unclear. Prospective longitudinal designs providing information about the extent of such possible long-term effects are missing. This study aims to investigate the long-term effects of a home-based uni- and multidisciplinary motor control exercise program on low back pain intensity, disability, and psychosocial variables. Fourteen months after completion of a multicenter study comparing uni- and multidisciplinary exercise interventions, a sample from one study center (n = 154) was assessed once more. Participants filled in questionnaires regarding their low back pain symptoms (characteristic pain intensity and related disability), stress and vital exhaustion (short version of the Maastricht Vital Exhaustion Questionnaire), anxiety and depression experiences (the Hospital Anxiety and Depression Scale), and pain-related cognitions (the Fear Avoidance Beliefs Questionnaire). Repeated-measures mixed ANCOVAs were calculated to determine the long-term effects of the interventions on characteristic pain intensity and disability as well as on the psychosocial variables. Fifty-four percent of the sub-sample responded to the questionnaires (n = 84). Longitudinal analyses revealed a significant long-term effect of the exercise intervention on pain disability. The multidisciplinary group missed statistical significance yet showed a medium-sized long-term effect. The groups did not differ in their changes in the psychosocial variables of interest. There was evidence of long-term effects of the interventions on pain-related disability, but no effect on the other variables of interest.
This may be partially explained by participants' low comorbidities at baseline. The results are important with regard to cost-free home-based alternatives for back pain patients and prevention tasks. Furthermore, this study closes the gap of missing long-term effect analyses in this field.
Quantitative geomorphic research depends on accurate topographic data often collected via remote sensing. Lidar, and photogrammetric methods like structure-from-motion, provide the highest quality data for generating digital elevation models (DEMs). Unfortunately, these data are restricted to relatively small areas, and may be expensive or time-consuming to collect. Global and near-global DEMs with 1 arcsec (∼30 m) ground sampling from spaceborne radar and optical sensors offer an alternative gridded, continuous surface at the cost of resolution and accuracy. Accuracy is typically defined with respect to external datasets, often, but not always, in the form of point or profile measurements from sources like differential Global Navigation Satellite System (GNSS), spaceborne lidar (e.g., ICESat), and other geodetic measurements. Vertical point or profile accuracy metrics can miss the pixel-to-pixel variability (sometimes called DEM noise) that is unrelated to true topographic signal, but rather sensor-, orbital-, and/or processing-related artifacts. This is most concerning in selecting a DEM for geomorphic analysis, as this variability can affect derivatives of elevation (e.g., slope and curvature) and impact flow routing. We use (near) global DEMs at 1 arcsec resolution (SRTM, ASTER, ALOS, TanDEM-X, and the recently released Copernicus) and develop new internal accuracy metrics to assess inter-pixel variability without reference data. Our study area is in the arid, steep Central Andes, and is nearly vegetation-free, creating ideal conditions for remote sensing of the bare-earth surface. We use a novel hillshade-filtering approach to detrend long-wavelength topographic signals and accentuate short-wavelength variability. 
Fourier transformations of the spatial signal to the frequency domain allow us to quantify: 1) artifacts in the un-projected 1 arcsec DEMs at wavelengths greater than the Nyquist (twice the nominal resolution, so > 2 arcsec); and 2) the relative variance of adjacent pixels in DEMs resampled to 30-m resolution (UTM projected). We translate results into their impact on hillslope and channel slope calculations, and we highlight the quality of the five DEMs. We find that the Copernicus DEM, which is based on a carefully edited commercial version of the TanDEM-X, provides the highest quality landscape representation, and should become the preferred DEM for topographic analysis in areas without sufficient coverage of higher-quality local DEMs.
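The core of the internal-variability idea, detrending the surface and inspecting the short-wavelength end of the spectrum, can be sketched in one dimension. This is a schematic illustration on a hypothetical elevation profile, not the paper's hillshade-filtering pipeline; a real analysis would operate on 2D grids and use numpy.fft.

```python
import cmath
import math

def moving_average(signal, window):
    """Simple low-pass filter standing in for the long-wavelength trend."""
    half = window // 2
    n = len(signal)
    out = []
    for i in range(n):
        chunk = signal[max(0, i - half):min(n, i + half + 1)]
        out.append(sum(chunk) / len(chunk))
    return out

def dft_power(signal):
    """Naive DFT power spectrum (illustration only; use numpy.fft in practice)."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(signal))) ** 2 / n
            for k in range(n // 2 + 1)]

# Hypothetical profile: a ramp, a long-wavelength undulation, and an
# alternating pixel-to-pixel artifact that point-based accuracy metrics
# would largely miss.
n = 128
profile = [100.0 + 0.5 * i + 2.0 * math.sin(2 * math.pi * i / 64)
           + 0.3 * (-1) ** i for i in range(n)]
residual = [p - t for p, t in zip(profile, moving_average(profile, 33))]
power = dft_power(residual)
# The artifact concentrates power in the highest-frequency (Nyquist) bin.
```

The detrending step plays the role of the hillshade filtering in the paper: after it, any remaining power near the Nyquist frequency is inter-pixel variability rather than topographic signal.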
Background
Millions of people in Germany suffer from chronic pain, whose course and intensity are multifactorial. Besides physical injuries, certain psychosocial risk factors are involved in the disease process. The national health care guidelines for the diagnosis and treatment of non-specific low back pain recommend the screening of psychosocial risk factors as early as possible, to be able to adapt the therapy to patient needs (e.g., unimodal or multimodal). However, such a procedure has been difficult to implement in practice and has not yet been integrated into the rehabilitation care structures across the country.
Methods
The aim of this study is to implement an individualized therapy and aftercare program within the rehabilitation offer of the German Pension Insurance in the area of orthopedics and to examine its success and sustainability in comparison to the previous standard aftercare program.
The study is a multicenter randomized controlled trial including 1204 patients from six orthopedic rehabilitation clinics. A 2:1 allocation ratio to the intervention group (individualized and home-based rehabilitation aftercare) versus the control group (regular outpatient rehabilitation aftercare) is set. Upon admission to the rehabilitation clinic, participants in the intervention group will be screened according to their psychosocial risk profile. Depending on the profile, they then receive either unimodal or multimodal aftercare, together with an individualized training program. The program is instructed in the clinic (approximately 3 weeks) and will afterwards be continued independently at home for 3 months. The success of the program is examined by means of a total of four surveys. The co-primary outcomes are the Characteristic Pain Intensity and Disability Score assessed by the German version of the Chronic Pain Grade questionnaire (CPG).
Discussion
An improvement in terms of pain, work ability, patient compliance, and acceptance in our intervention program compared to the standard aftercare is expected. The study contributes to providing individualized care also to patients living far away from clinical centers.
Trial registration
DRKS, DRKS00020373. Registered on 15 April 2020
Using the computer game Brothers as an example, this contribution proposes how narrative and gameplay gaps ("Leerstellen") can be used in language-learning settings. We present the function of such gaps in reading (Iser), in computer games (Aarseth and Pias), and in language didactics, and subsequently make concrete suggestions for how the game can be used in language-learning settings.
Teachers' professional knowledge is one of the most important levers of school education. Its core areas are content knowledge and pedagogical content knowledge, which are mainly acquired during university teacher training.
The present work aims to contribute to the continuous improvement and quality assurance of teacher training at the University of Potsdam and asks: What content knowledge and pedagogical content knowledge do student mathematics teachers possess after attending the courses Arithmetik und ihre Didaktik I and II? The students' knowledge was examined exemplarily in the domain of rational numbers, with a focus on understanding the density of fractions. Density is one of the most difficult concepts to acquire when learning fractions; it demands conceptual rethinking and the reorganization of previously acquired notions. To answer the research question, 112 student teachers were given a written test on their knowledge of the density of rational numbers in a qualitative study. To understand the students' thinking processes and to identify conceptual hurdles, additional qualitative interviews were conducted in the form of group discussions. The data were analysed computer-assisted by means of qualitative content analysis.
A wide range of knowledge levels emerged. The results for pedagogical content knowledge lagged behind those for content knowledge. The students found it most difficult to contrast essential properties of rational and natural numbers at the metacognitive level. Besides positive results, which speak for the effectiveness of the course design, various conceptual hurdles became apparent. Deficits in content knowledge, such as an insufficient understanding of equivalent fractions or errors in expanding fractions, reveal inadequately developed basic notions of rational numbers among the students. Difficulties in the pedagogical content tasks, such as formulating a child-appropriate explanation or depicting the mathematical content pictorially, can be traced back to these deficits in content knowledge. In addition, limitations in the students' motivation and attribution of relevance became apparent.
The results lead to targeted suggestions for revising the course design. It is recommended that various learning offers, such as homework and weekly self-tests for individual monitoring of learning goals, be made obligatory for all course participants, and that motivational aspects be addressed more strongly. In addition, an expansion of concrete exercises at the enactive level is recommended in order to foster the development of the necessary basic notions of rational numbers and thus to counter conceptual hurdles in a targeted way.
Halide perovskites are a class of novel photovoltaic materials that have recently attracted much attention in the photovoltaics research community due to their highly promising optoelectronic properties, including large absorption coefficients and long carrier lifetimes. The charge carrier mobility of halide perovskites is investigated in this thesis by THz spectroscopy, a contact-free technique that yields the intra-grain sum mobility of electrons and holes in a thin film.
The polycrystalline halide perovskite thin films, provided by Potsdam University, show moderate mobilities in the range from 21.5 to 33.5 cm^2 V^-1 s^-1. It is shown in this work that the room-temperature mobility is limited by charge carrier scattering at polar optical phonons. The mobility at low temperature is likely limited by scattering at charged and neutral impurities at impurity concentrations N = 10^17-10^18 cm^-3. Furthermore, it is shown that exciton formation may decrease the mobility at low temperatures. Scattering at acoustic phonons can be neglected at both low and room temperature. The analysis of mobility spectra over a broad range of temperatures for perovskites with various cation compositions shows that the cations have a minor impact on the charge carrier mobility.
The low-dimensional thin films of quasi-2D perovskites with different numbers of [PbI6]^4- sheets (n = 2-4), alternating with long organic spacer molecules, were provided by S. Zhang from Potsdam University. They exhibit mobilities in the range from 3.7 to 8 cm^2 V^-1 s^-1. A clear decrease of the mobility is observed with decreasing number of metal-halide sheets n, which likely arises from charge carrier confinement within the metal-halide layers. Modelling the measured THz mobility with the modified Drude-Smith model yields localization lengths from 0.9 to 3.7 nm, which agree well with the thicknesses of the metal-halide layers. Additionally, the mobilities are found to depend on the orientation of the layers. The charge carrier dynamics also depend on the number of metal-halide sheets n. For the thin films with n = 3-4 the dynamics are similar to those of 3D MHPs. However, the thin film with n = 2 shows clearly different dynamics, where signs of exciton formation are observed within a 390 fs timeframe after photoexcitation.
Finally, the charge carrier dynamics of CsPbI3 perovskite nanocrystals were investigated, in particular the effect of post-treatments on the charge carrier transport.
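For reference, the Drude-Smith form underlying the modelling above describes the complex mobility with a persistence parameter c in [-1, 0] that suppresses the DC response of localized carriers. The sketch below uses hypothetical parameter values (tau and effective mass are not the thesis' fitted values) and omits the localization-length mapping of the modified model:

```python
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron rest mass, kg

def drude_smith_mobility(omega, tau, m_eff, c):
    """Complex AC mobility mu(omega) of the Drude-Smith model.

    mu(w) = (e * tau / m_eff) / (1 - i*w*tau) * (1 + c / (1 - i*w*tau)),
    where c = 0 recovers the plain Drude model and c = -1 fully
    suppresses the DC mobility (complete carrier backscattering).
    """
    d = 1.0 - 1j * omega * tau
    return (E_CHARGE * tau / m_eff) / d * (1.0 + c / d)

# Hypothetical illustration values: tau = 30 fs, m_eff = 0.2 m_e.
tau, m_eff = 30e-15, 0.2 * M_ELECTRON
mu_drude = drude_smith_mobility(0.0, tau, m_eff, 0.0)       # free-carrier limit
mu_localized = drude_smith_mobility(0.0, tau, m_eff, -1.0)  # fully localized
```

The fit of such a curve to the measured complex THz mobility is what yields the localization lengths quoted above.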
With reference to Rainer E. Zimmermann's book "Metaphysik als Grundlegung von Naturdialektik. Zum Sagbaren und Unsagbaren im spekulativen Denken", the approach of a transcendental materialism developed by Zimmermann is discussed, which stands in the tradition of Schelling's dialectics on the one hand and the spin-foam approach of quantum gravity theory on the other. The reduction of structures of reality to mathematical structures, i.e. to the processing of numbers, is problematized.
Centroid moment tensor inversion can provide insight into ongoing tectonic processes and active faults. In the Alpine mountains (central Europe), challenges result from low signal-to-noise ratios of earthquakes with small to moderate magnitudes and complex wave propagation effects through the heterogeneous crustal structure of the mountain belt. In this thesis, I make use of the temporary installation of the dense AlpArray seismic network (AASN) to establish a workflow to study seismic source processes and enhance the knowledge of Alpine seismicity. The cumulative thesis comprises four publications on the topics of large seismic networks, seismic source processes in the Alps, their link to tectonics and the stress field, and the inclusion of small-magnitude earthquakes into studies of active faults.
Dealing with hundreds of stations of the dense AASN requires the automated assessment of data and metadata quality. I developed the open source toolbox AutoStatsQ to perform an automated data quality control. Its first application to the AlpArray seismic network has revealed significant errors of amplitude gains and sensor orientations. A second application of the orientation test to the Turkish KOERI network, based on Rayleigh wave polarization, further illustrated the potential in comparison to a P wave polarization method. Taking advantage of the gain and orientation results of the AASN, I tested different inversion settings and input data types to approach the specific challenges of centroid moment tensor (CMT) inversions in the Alps. A comparative study was carried out to define the best fitting procedures.
The application to 4 years of seismicity in the Alps (2016-2019) substantially enhanced the number of moment tensor solutions in the region. We provide a list of moment tensor solutions down to magnitude Mw 3.1. Spatial patterns of typical focal mechanisms were analyzed in the seismotectonic context by comparing them to long-term seismicity, historical earthquakes and observations of strain rates. Additionally, we use our MT solutions to investigate stress regimes and orientations along the Alpine chain. Finally, I addressed the challenge of including smaller-magnitude events in the study of active faults and source processes. The open-source toolbox Clusty was developed for the clustering of earthquakes based on waveforms recorded across a network of seismic stations. The similarity of waveforms reflects both the location and the similarity of source mechanisms. Therefore the clustering bears the opportunity to identify earthquakes of similar faulting styles, even when centroid moment tensor inversion is not possible due to low signal-to-noise ratios of surface waves or oversimplified velocity models. The toolbox is described through an application to the Zakynthos 2018 aftershock sequence, and I subsequently discuss its potential application to weak earthquakes (Mw < 3.1) in the Alps.
Background
Reproducible benchmarking is important for assessing the effectiveness of novel feature selection approaches applied on gene expression data, especially for prior knowledge approaches that incorporate biological information from online knowledge bases. However, no full-fledged benchmarking system exists that is extensible, provides built-in feature selection approaches, and offers a comprehensive result assessment encompassing classification performance, robustness, and biological relevance. Moreover, the particular needs of prior knowledge feature selection approaches, i.e. uniform access to knowledge bases, are not addressed. As a consequence, prior knowledge approaches are not evaluated amongst each other, leaving open questions regarding their effectiveness.
Results
We present the Comprior benchmark tool, which facilitates the rapid development and effortless benchmarking of feature selection approaches, with a special focus on prior knowledge approaches. Comprior is extensible by custom approaches, offers built-in standard feature selection approaches, enables uniform access to multiple knowledge bases, and provides a customizable evaluation infrastructure to compare multiple feature selection approaches regarding their classification performance, robustness, runtime, and biological relevance.
Conclusion
Comprior allows reproducible benchmarking especially of prior knowledge approaches, which facilitates their applicability and for the first time enables a comprehensive assessment of their effectiveness.
Cellulose and chitin are the most abundant polymeric organic carbon sources globally. Thus, microbes degrading these polymers significantly influence global carbon cycling and greenhouse gas production. Fungi are recognized as important for cellulose decomposition in terrestrial environments, but are far less studied in marine environments, where bacterial organic matter degradation pathways tend to receive more attention. In this study, we investigated the potential of fungi to degrade kelp detritus, which is a major source of cellulose in marine systems. Given that kelp detritus can be transported considerable distances in the marine environment, we were specifically interested in the capability of endophytic fungi, which are transported with detritus, to ultimately contribute to kelp detritus degradation. We isolated 10 species and two strains of endophytic fungi from the kelp Ecklonia radiata. We then used a dye decolorization assay to assess their ability to degrade organic polymers (lignin, cellulose, and hemicellulose) under both oxic and anoxic conditions and compared their degradation ability with common terrestrial fungi. Under oxic conditions, there was evidence that Ascomycota isolates produced cellulose-degrading extracellular enzymes (associated with manganese peroxidase and sulfur-containing lignin peroxidase), while Mucoromycota isolates appeared to produce both lignin- and cellulose-degrading extracellular enzymes, and all Basidiomycota isolates produced lignin-degrading enzymes (associated with laccase and lignin peroxidase). Under anoxic conditions, only three kelp endophytes degraded cellulose. We concluded that kelp fungal endophytes can contribute to cellulose degradation in both oxic and anoxic environments. Thus, endophytic kelp fungi may play a significant role in marine carbon cycling via polymeric organic matter degradation.
Background:
Epidemiological evidence indicates that diets rich in plant foods are associated with a lower risk of ischaemic heart disease (IHD), but there is sparse information on fruit and vegetable subtypes and sources of dietary fibre. This study examined the associations of major plant foods, their subtypes and dietary fibre with risk of IHD in the European Prospective Investigation into Cancer and Nutrition (EPIC).
Methods:
We conducted a prospective analysis of 490 311 men and women without a history of myocardial infarction or stroke at recruitment (12.6 years of follow-up, n cases = 8504), in 10 European countries. Dietary intake was assessed using validated questionnaires, calibrated with 24-h recalls. Multivariable Cox regressions were used to estimate hazard ratios (HR) of IHD.
Results:
There was a lower risk of IHD with a higher intake of fruit and vegetables combined [HR per 200 g/day higher intake 0.94, 95% confidence interval (CI): 0.90-0.99, P-trend = 0.009], and with total fruits (per 100 g/day 0.97, 0.95-1.00, P-trend = 0.021). There was no evidence of a reduced risk for fruit subtypes, except for bananas. Risk was lower with higher intakes of nuts and seeds (per 10 g/day 0.90, 0.82-0.98, P-trend = 0.020), total fibre (per 10 g/day 0.91, 0.85-0.98, P-trend = 0.015), fruit and vegetable fibre (per 4 g/day 0.95, 0.91-0.99, P-trend = 0.022) and fruit fibre (per 2 g/day 0.97, 0.95-1.00, P-trend = 0.045). No associations were observed between vegetables, vegetable subtypes, legumes, cereals and IHD risk.
Conclusions:
In this large prospective study, we found some small inverse associations between plant foods and IHD risk, with fruit and vegetables combined being the most strongly inversely associated with risk. Whether these small associations are causal remains unclear.
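Because Cox models are log-linear in the exposure, hazard ratios reported per one increment can be rescaled to another on the log scale; for instance, the HR of 0.94 per 200 g/day for combined fruit and vegetables corresponds to about 0.97 per 100 g/day. A minimal sketch of this conversion (illustrative arithmetic, not the study's analysis code):

```python
import math

def rescale_hr(hr, from_increment, to_increment):
    """Rescale a Cox hazard ratio from one exposure increment to another.

    The model is log-linear in the covariate, so the log hazard ratio
    scales proportionally with the increment size.
    """
    beta_per_unit = math.log(hr) / from_increment
    return math.exp(beta_per_unit * to_increment)

hr_per_100g = rescale_hr(0.94, 200.0, 100.0)  # ~0.97 per 100 g/day
```

Note that the confidence interval limits rescale the same way, since they are also exponentiated multiples of the log hazard ratio's standard error.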
The Lie group method in combination with the Magnus expansion is utilized to develop a universal method applicable to solving a Sturm–Liouville problem (SLP) of any order with arbitrary boundary conditions. It is shown that the method is able to solve direct regular and some singular SLPs of even orders (tested up to order eight), with a mix of boundary conditions (including non-separable conditions and finite singular endpoints), accurately and efficiently.
The present technique is successfully applied to overcome the difficulties in finding suitable sets of eigenvalues so that the inverse SLP can be effectively solved.
Next, a concrete implementation of the inverse Sturm–Liouville algorithm proposed by Barcilon (1974) is provided. Furthermore, the computational feasibility and applicability of this algorithm to solving inverse Sturm–Liouville problems of orders n = 2 and 4 are verified successfully. It is observed that the method succeeds even in the presence of significant noise, provided that the assumptions of the algorithm are satisfied.
In conclusion, this work provides methods that can be adapted successfully for solving a direct (regular/singular) or inverse SLP of an arbitrary order with arbitrary boundary conditions.
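The Lie-group/Magnus idea for the direct problem can be illustrated on the simplest regular SLP, -y'' + q(x)y = lambda*y on [0, pi] with Dirichlet conditions (eigenvalues lambda_n = n^2 for q = 0): write the equation as a first-order system Y' = A(x)Y, propagate with the matrix exponential of the first Magnus term on each step, and locate eigenvalues as zeros of the boundary miss. This is a schematic sketch under those assumptions, not the thesis implementation:

```python
import math

def mat2_mul(X, Y):
    """Product of two 2x2 matrices stored as nested lists."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

def mat2_expm(A):
    """exp(A) for a 2x2 matrix via scaling-and-squaring with a Taylor series."""
    norm = max(abs(A[0][0]) + abs(A[0][1]), abs(A[1][0]) + abs(A[1][1]))
    s = 0
    while norm > 0.5:
        norm /= 2.0
        s += 1
    B = [[a / 2 ** s for a in row] for row in A]
    E = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, 20):
        term = mat2_mul(term, [[b / k for b in row] for row in B])
        E = [[E[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    for _ in range(s):
        E = mat2_mul(E, E)
    return E

def boundary_miss(lam, q=lambda x: 0.0, n_steps=200):
    """Shooting function for -y'' + q(x) y = lam y on [0, pi], y(0) = 0.

    The system Y' = A(x) Y with A = [[0, 1], [q(x) - lam, 0]] is propagated
    with the first Magnus term exp(h * A(x_mid)) per step (midpoint rule).
    Returns y(pi); eigenvalues are its zeros.
    """
    h = math.pi / n_steps
    y, yp = 0.0, 1.0
    for i in range(n_steps):
        x_mid = (i + 0.5) * h
        P = mat2_expm([[0.0, h], [h * (q(x_mid) - lam), 0.0]])
        y, yp = P[0][0] * y + P[0][1] * yp, P[1][0] * y + P[1][1] * yp
    return y

def eigenvalue(lo, hi, tol=1e-10):
    """Bisection on the boundary miss function (assumes a sign change)."""
    f_lo = boundary_miss(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = boundary_miss(mid)
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)
```

For q = 0 the sketch recovers lambda_1 = 1 from eigenvalue(0.5, 2.5) and lambda_2 = 4 from eigenvalue(3.0, 5.0); a nonzero potential q(x) simply changes the per-step matrix, which is where the Magnus expansion earns its keep.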
Background:
Research into the application of virtual reality technology in the health care sector has rapidly increased, resulting in a large body of research that is difficult to keep up with.
Objective:
We provide an overview of the annual publication numbers in this field and the most productive and influential countries, journals, and authors, as well as the most used, most co-occurring, and most recent keywords.
Methods:
Based on a data set of 356 publications and 20,363 citations derived from Web of Science, we conducted a bibliometric analysis using BibExcel, HistCite, and VOSviewer.
Results:
The strongest growth in publications occurred in 2020, accounting for 29.49% of all publications so far. The most productive countries are the United States, the United Kingdom, and Spain; the most influential countries are the United States, Canada, and the United Kingdom. The most productive journals are the Journal of Medical Internet Research (JMIR), JMIR Serious Games, and the Games for Health Journal; the most influential journals are Patient Education and Counselling, Medical Education, and Quality of Life Research. The most productive authors are Riva, del Piccolo, and Schwebel; the most influential authors are Finset, del Piccolo, and Eide. The most frequently occurring keywords other than “virtual” and “reality” are “training,” “trial,” and “patients.” The most relevant research themes are communication, education, and novel treatments; the most recent research trends are fitness and exergames.
Conclusions:
The analysis shows that the field has left its infant state and its specialization is advancing, with a clear focus on patient usability.
The aim of the doctoral project was to answer the question of whether structural word-initial noun capitalization, which besides German otherwise exists only in Luxembourgish, has a function that is advantageous for the reader. The overriding hypothesis was that an advantage is achieved by activating a syntactic category, namely the core of a noun phrase, through the parafoveal perception of the capital letters. This perception from the corner of the eye should make it possible to preprocess the following noun. As a result, sentence processing should be facilitated, which should ultimately be reflected in overall faster reading times and fixation durations.
The project comprises three studies, some with different participant groups:

Study 1:
Study design: semantic priming using garden-path sentences, intended to bring out the functionality of noun capitalization for the reader
Participant group: German natives reading German

Study 2:
Study design: same design as Study 1, but in English
Participant groups:
- English natives without any knowledge of German, reading English
- English natives who regularly read German, reading English
- German natives with high proficiency in English, reading English

Study 3:
Study design: influence of noun frequency on a potential preprocessing, using the boundary paradigm; study languages: German and English
Participant groups:
- German natives reading German
- English natives without any knowledge of German, reading English
- German natives with high proficiency in English, reading English

Brief summary: Noun capitalization clearly has an impact on sentence processing in both German and English. However, it cannot be confirmed that this constitutes a substantial, decisive advantage.
The suitability of a newly developed cell-based functional assay was tested for detecting the activity of a range of neurotoxins and neuroactive pharmaceuticals that act by stimulating or inhibiting calcium-dependent neurotransmitter release. In this functional assay, a reporter enzyme is released concomitantly with the neurotransmitter from neurosecretory vesicles. The current study showed that the release of a luciferase from a differentiated human neuroblastoma-based reporter cell line (SIMA-hPOMC1-26-GLuc cells) can be stimulated by a carbachol-mediated activation of the Gq-coupled muscarinic acetylcholine receptor and by the Ca2+-channel-forming spider toxin α-latrotoxin. Carbachol-stimulated luciferase release was completely inhibited by the muscarinic acetylcholine receptor antagonist atropine, and α-latrotoxin-mediated release by the Ca2+-chelator EGTA, demonstrating the specificity of luciferase-release stimulation. SIMA-hPOMC1-26-GLuc cells express mainly L- and N-type and, to a lesser extent, T-type voltage-gated calcium channels (VGCC) at the mRNA and protein level. In accordance with this expression profile, depolarization-stimulated luciferase release by a high-K+ buffer was effectively and dose-dependently inhibited by L-type VGCC inhibitors and to a lesser extent by N-type and T-type inhibitors. P/Q- and R-type inhibitors did not affect the K+-stimulated luciferase release. In summary, the newly established cell-based assay may represent a versatile tool to analyze the biological efficiency of a range of neurotoxins and neuroactive pharmaceuticals that mediate their activity by the modulation of calcium-dependent neurotransmitter release.
Compound values are not universally supported in virtual machine (VM)-based programming systems and languages. However, providing data structures with value characteristics can be beneficial. On the one hand, programming systems and languages can adequately represent physical quantities with compound values and avoid inconsistencies, for example in the representation of large numbers. On the other hand, just-in-time (JIT) compilers, which are often found in VMs, can rely on the fact that compound values are immutable, which is an important property for optimizing programs. Considering this, compound values have an optimization potential that can be put to use by implementing them in VMs in a way that is efficient in memory usage and execution time. Yet, optimized compound values in VMs face certain challenges: to maintain consistency, it should not be observable by the program whether compound values are represented in an optimized way by the VM; an optimization should take into account that the usage of compound values can exhibit certain patterns at run-time; and value-incompatible properties that are necessary due to implementation restrictions should be reduced.
We propose a technique to detect and compress common patterns of compound value usage at run-time to improve memory usage and execution speed. Our approach identifies patterns of frequent compound value references and introduces abbreviated forms for them. Thus, it is possible to store multiple inter-referenced compound values in an inlined memory representation, reducing the overhead of metadata and object references. We extend our approach by a notion of limited mutability, using cells that act as barriers for our approach and provide a location for shared, mutable access with the possibility of type specialization. We devise an extension to our approach that allows us to express automatic unboxing of boxed primitive data types in terms of our initial technique. We show that our approach is versatile enough to express another optimization technique that relies on values, such as Booleans, that are unique throughout a programming system. Furthermore, we demonstrate how to re-use learned usage patterns and optimizations across program runs, thus reducing the performance impact of pattern recognition.
We show in a best-case prototype that the implementation of our approach is feasible and can also be applied to general purpose programming systems, namely implementations of the Racket language and Squeak/Smalltalk. In several micro-benchmarks, we found that our approach can effectively reduce memory consumption and improve execution speed.
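One prerequisite of the approach, that structurally equal immutable compound values may safely share storage, can be illustrated with a simple hash-consing (interning) table. This is only a schematic illustration of value sharing, not the in-VM inlined representation or pattern compression described above:

```python
class ValueInterner:
    """Intern immutable compound values so structurally equal ones share storage.

    Because the values are immutable, the sharing is unobservable to the
    program except through reduced memory use, which mirrors the consistency
    requirement for optimized compound values stated above.
    """

    def __init__(self):
        self._table = {}

    def intern(self, *fields):
        # Recursively intern nested compound values first, so that equal
        # substructures are also shared (cf. inter-referenced compound values).
        fields = tuple(self.intern(*f) if isinstance(f, tuple) else f
                       for f in fields)
        return self._table.setdefault(fields, fields)

interner = ValueInterner()
a = interner.intern(1, (2, 3))
b = interner.intern(1, (2, 3))
# a and b are the same object: one stored copy serves both references.
```

A VM-level implementation additionally compresses frequent reference patterns into inlined memory layouts; the table above only captures the deduplication aspect.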
Increasingly fast development cycles and individualized products pose major challenges for today's smart production systems in times of Industry 4.0. The systems must be flexible and continuously adapt to changing conditions while still guaranteeing high throughput and robustness against external disruptions. Deep reinforcement learning (RL) algorithms, which already achieved impressive success with Google DeepMind's AlphaGo, are increasingly transferred to production systems to meet related requirements. Unlike supervised and unsupervised machine learning techniques, deep RL algorithms learn from recently collected sensor and process data in direct interaction with the environment and are able to make decisions in real time. As such, deep RL algorithms seem promising given their potential to provide decision support in complex environments, such as production systems, and simultaneously adapt to changing circumstances. While different use cases for deep RL have emerged, a structured overview and integration of findings on their application are missing. To address this gap, this contribution provides a systematic literature review of existing deep RL applications in the field of production planning and control as well as production logistics. From a performance perspective, it became evident that deep RL can significantly outperform heuristics and provides superior solutions to various industrial use cases. Nevertheless, safety and reliability concerns must be overcome before widespread use of deep RL is possible, which presumes more intensive testing of deep RL in real-world applications beyond the already ongoing intensive simulations.
Background: To handle competition demands, sparring drills are used for specific technical–tactical training as well as physical–physiological conditioning in combat sports. While the effects of different area sizes and numbers of within-round sparring partners on physiological and perceptive responses in combat sports were examined in previous studies, technical and tactical aspects were not investigated. This study investigated the effect of varying the number of within-round sparring partners (i.e., at a time: 1 vs. 1, 1 vs. 2, and 1 vs. 4) and the area size (2 m × 2 m, 4 m × 4 m, and 6 m × 6 m) on the technical–tactical aspects of small combat games in kickboxing.
Method: Twenty male kickboxers (mean ± standard deviation, age: 20.3 ± 0.9 years), regularly competing in regional and national events randomly performed nine different kickboxing combats, lasting 2 min each. All combats were video recorded and analyzed using the software Dartfish.
Results: Results showed that the total number of punches was significantly higher in 1 versus 4 compared with 1 versus 1 (p = 0.011, d = 0.83). Further, the total number of kicks was significantly higher in 1 versus 4 compared with 1 versus 1 and 1 versus 2 (p < 0.001; d = 0.99 and d = 0.83, respectively). Moreover, the total number of kick combinations was significantly higher in 1 versus 4 compared with 1 versus 1 and 1 versus 2 (p < 0.001; d = 1.05 and d = 0.95, respectively). The same outcome was significantly lower in 2 m × 2 m compared with 4 m × 4 m and 6 m × 6 m areas (p = 0.010 and d = − 0.45; p < 0.001 and d = − 0.6, respectively). The number of block-and-parry was significantly higher in 1 versus 4 compared with 1 versus 1 (p < 0.001, d = 1.45) and 1 versus 2 (p = 0.046, d = 0.61) and in 2 m × 2 m compared with 4 m × 4 m and 6 m × 6 m areas (p < 0.001; d = 0.47 and d = 0.66, respectively). Backwards lean actions occurred more often in 2 m × 2 m compared with 4 m × 4 m (p = 0.009, d = 0.53) and 6 m × 6 m (p = 0.003, d = 0.60). However, the number of foot defenses was significantly lower in 2 m × 2 m compared with 6 m × 6 m (p < 0.001, d = 1.04) and 4 m × 4 m (p = 0.004, d = 0.63). Additionally, the number of clinches was significantly higher in 1 versus 1 compared with 1 versus 2 (p = 0.002, d = 0.7) and 1 versus 4 (p = 0.034, d = 0.45).
Conclusions: This study provides practical insights into how to manipulate within-round sparring partners’ number and/or area size to train specific kickboxing technical–tactical fundamentals.
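The d values reported above are Cohen's d effect sizes for two independent conditions. A minimal sketch of the pooled-standard-deviation form (illustrative only; the sample data below are hypothetical, not the study's counts):

```python
import math

def cohens_d(sample1, sample2):
    """Cohen's d for two independent samples, using the pooled SD."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    var1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical punch counts for two sparring formats:
d = cohens_d([12, 15, 14, 16, 13], [10, 11, 12, 10, 11])
```

By the usual convention, |d| around 0.2 is a small, 0.5 a medium, and 0.8 a large effect, which puts several of the contrasts above in the medium-to-large range.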
The present study deals with the planning and execution of actors' learning processes, with the main focus on the use of learning strategies. It asks which strategies professional learners employ to achieve the mastery of their lines required for their profession, not how to optimize learning success.
The literature review made clear that current studies on adult learning are mainly situated in profession-specific contexts and concern the acquisition of competences, problem-solving strategies, and social participation. Actors' learning, however, is not based on an intention to change behaviour or to gain specific knowledge.
For actors, performing is part of their professional culture. Given that precise factual knowledge is of decisive importance as a basis for competent, convincing presentation, the results of the study are also relevant for professions that must perform in public, such as priests, lawyers, and teachers. The same applies to pupils and students who have to give talks and/or present papers.
For the empirical investigation, twelve renowned actors were interviewed by means of problem-centred interviews, followed by a qualitative content analysis.
The analysis of the data demonstrates a clear connection between the body and speaking practice. It also shows how important movement is for the learning process. Results were obtained with respect to cognitive, metacognitive, and resource-oriented strategies, with the learning environment and learning with colleagues being of decisive importance.
Synchronization of coupled oscillators manifests itself in many natural and man-made systems, including circadian clocks, central pattern generators, laser arrays, power grids, and chemical and electrochemical oscillators, to name just a few. The mathematical description of this phenomenon is often based on the paradigmatic Kuramoto model, which represents each oscillator by a single scalar variable, its phase. When coupled, phase oscillators constitute a high-dimensional dynamical system that exhibits complex behaviour, ranging from synchronized uniform oscillation to quasiperiodicity and chaos. The corresponding collective rhythms can be useful or harmful to the normal operation of various systems, and they have therefore been the subject of much research.
Initially, synchronization phenomena were studied in systems with all-to-all (global) and nearest-neighbour (local) coupling, or on random networks. In recent decades, however, there has been much interest in more complicated coupling structures, which take into account the spatially distributed nature of real-world oscillator systems and the distance-dependent nature of the interaction between their components. Examples of such systems abound in biology and neuroscience. They include spatially distributed cell populations, cilia carpets and neural networks relevant to working memory. In many cases, these systems support a rich variety of patterns of synchrony and disorder with remarkable properties that have not been observed in other continuous media. Such patterns are usually referred to as coherence-incoherence patterns, but in symmetrically coupled oscillator systems they are also known as chimera states.
The main goal of this work is to give an overview of the different types of collective behaviour in large networks of spatially distributed phase oscillators and to develop mathematical methods for their analysis. We focus on the Kuramoto models for one-, two- and three-dimensional oscillator arrays with nonlocal coupling, where the coupling extends over a range wider than nearest-neighbour coupling and depends on separation. We use the fact that, for a special (but still quite general) phase interaction function, the long-term coarse-grained dynamics of the above systems can be described by a certain integro-differential equation that follows from the mathematical approach called the Ott-Antonsen theory. We show that this equation adequately represents all relevant patterns of synchrony and disorder, including stationary, periodically breathing and moving coherence-incoherence patterns. Moreover, we show that this equation can be used to completely solve the existence and stability problem for each of these patterns and to reliably predict their main properties in many application-relevant situations.
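For readers unfamiliar with the Kuramoto model, the following is a minimal generic sketch (explicit-Euler integration, all-to-all coupling, invented parameters), not the nonlocally coupled arrays or Ott-Antonsen reduction analysed in the thesis:

```python
import math, random

def kuramoto_step(phases, omegas, coupling, dt):
    """One Euler step of the globally coupled Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(phases)
    new = []
    for theta, omega in zip(phases, omegas):
        interaction = sum(math.sin(pj - theta) for pj in phases) / n
        new.append(theta + dt * (omega + coupling * interaction))
    return new

def order_parameter(phases):
    """Kuramoto order parameter r in [0, 1]; r near 1 means synchrony."""
    n = len(phases)
    re = sum(math.cos(t) for t in phases) / n
    im = sum(math.sin(t) for t in phases) / n
    return math.hypot(re, im)

random.seed(0)
N = 100
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]
omegas = [random.gauss(0.0, 0.1) for _ in range(N)]  # narrow frequency spread
for _ in range(2000):
    phases = kuramoto_step(phases, omegas, coupling=2.0, dt=0.01)
print(round(order_parameter(phases), 2))  # strong coupling: r close to 1
```

With coupling well above the synchronization threshold, the initially random phases lock and the order parameter approaches 1; weak coupling leaves the population incoherent.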
The propagation of test fields, such as electromagnetic, Dirac or linearized gravity, on a fixed spacetime manifold is often studied by using the geometrical optics approximation. In the limit of infinitely high frequencies, the geometrical optics approximation provides a conceptual transition between the test field and an effective point-particle description. The corresponding point-particles, or wave rays, coincide with the geodesics of the underlying spacetime. For most astrophysical applications of interest, such as the observation of celestial bodies, gravitational lensing, or the observation of cosmic rays, the geometrical optics approximation and the effective point-particle description represent a satisfactory theoretical model. However, the geometrical optics approximation gradually breaks down as test fields of finite frequency are considered.
In this thesis, we consider the propagation of test fields on spacetime beyond the leading-order geometrical optics approximation. By performing a covariant Wentzel-Kramers-Brillouin analysis for test fields, we show how higher-order corrections to the geometrical optics approximation can be taken into account. The higher-order corrections are related to the dynamics of the internal spin degree of freedom of the considered test field. We obtain an effective point-particle description, which contains spin-dependent corrections to the geodesic motion obtained using geometrical optics. This represents a covariant generalization of the well-known spin Hall effect, usually encountered in condensed matter physics and in optics. Our analysis is applied to electromagnetic and massive Dirac test fields, but it can easily be extended to other fields, such as linearized gravity. In the electromagnetic case, we present several examples where the gravitational spin Hall effect of light plays an important role. These include the propagation of polarized light rays on black hole spacetimes and cosmological spacetimes, as well as polarization-dependent effects on the shape of black hole shadows. Furthermore, we show that our effective point-particle equations for polarized light rays reproduce well-known results, such as the spin Hall effect of light in an inhomogeneous medium and the relativistic Hall effect of polarized electromagnetic wave packets encountered in Minkowski spacetime.
The main aim of this bachelor's thesis is a theoretical examination of water habituation in the home. Building on this, the author produces, as a theory-to-practice transfer, a handout for parents and guardians that presents the most relevant information from her thesis in condensed form. So that parents and guardians can proactively support their children, the handout is meant to be addressee-appropriate and concise without withholding essential details. Parents and guardians receive a handout containing the most important information on water habituation at home. Among other things, they learn about the maximum length of time children should spend in the water and the optimal temperature of the bath water. They also receive important information about bodily reactions that can occur in or because of the water, for example the eyelid-closure reflex or the cold stimulus. They are informed about essential safety aspects and given a compact overview of rules of conduct, the so-called do's and don'ts. The exercises/games are selected according to the current DGUV (2019) guidelines for the content of water habituation and structured according to the conditions at home. The handout will also contain exercises/games in which no properties or effects of the water are explored, and breathing and diving exercises are described as well. Fear of water, once it has taken hold, is known to be the greatest obstacle for non-swimmers (DGUV, 2019).
With the discussion of this fear in her thesis and in the handout, the author therefore intends to enable parents and guardians to take away children's fear of the water, or to preserve their existing freedom from fear, and thereby to allow children to enjoy moving in the water. "The more fun children have bathing in early childhood and the less fear they associate with the medium, the faster they will later learn to swim" (DGUV, 2016, p. 6).
The theoretical foundations of the handout are the central aspects and aims of water habituation, taken from the 2019 publication of the German Social Accident Insurance (Deutsche Gesetzliche Unfallversicherung, DGUV), which is authoritative in the school context. These concern the perception of the specific properties of water and the processes of approaching and becoming accustomed to it. The children experience the element's density, pressure and temperature as well as the influence of water on the body: water resistance, buoyancy and the force of the water. Exercises in which the children get to know the water, or come into intensive contact with it for the first time, are presented first. These are followed by exercises, predominantly in the form of games, intended to awaken enjoyment. The final phase consists of exercises that require a specific handling of the water. This structure follows the first three phases of Baumeister's (1984) methodology for water habituation; in addition, the methodological principle of progressing from the simple to the complex is used as a theoretical basis. Legahn (2007) describes several learning models that can be applied in water habituation depending on age and developmental stage. The author draws on these in the handout and sets out suitable learning techniques, for example learning from a model (imitating people, animals or puppets) or active learning (a playful build-up of movement improves skills). The materials required are listed in the handout under the heading of each exercise/game and serve as initial information. Next to the heading, the properties and effects of the water that can be experienced in that specific exercise are named, for instance pressure and buoyancy for water pressure and hydrostatic lift. The respective exercise is described underneath.
As visualisation, the author creates her own drawings. Below these images there is often also a suitable game variant to add further enjoyment to the exercise, and suitable variations of the exercises or tips are likewise mentioned several times.
Effects of manganese on genomic integrity in the multicellular model organism Caenorhabditis elegans
(2021)
Although manganese (Mn) is an essential trace element, overexposure is associated with Mn-induced toxicity and neurological dysfunction. Even though Mn-induced oxidative stress has been discussed extensively, neither the underlying mechanisms linking Mn-induced oxidative stress to DNA damage and DNA repair, nor the potentially resulting toxicity, have been characterized yet. In this study, we use the model organism Caenorhabditis elegans to investigate the mode of action of Mn toxicity, focusing on genomic integrity by means of DNA damage and the DNA damage response. Experiments were conducted to analyze Mn bioavailability, lethality, and induction of DNA damage. Different deletion mutant strains were then used to investigate the role of base excision repair (BER) and dePARylation (DNA damage response) proteins in Mn-induced toxicity. The results indicate a dose- and time-dependent uptake of Mn, resulting in increased lethality. Excessive exposure to Mn decreases genomic integrity and activates BER. Altogether, this study characterizes the consequences of Mn exposure on genomic integrity and thereby broadens the molecular understanding of the pathways underlying Mn-induced toxicity. Additionally, studying the basal poly(ADP-ribosylation) (PARylation) of worms lacking the poly(ADP-ribose) glycohydrolase (PARG) orthologues parg-1 or parg-2 indicates that parg-1 accounts for most of the glycohydrolase activity in worms.
Today, the Mekong Delta in southern Vietnam is home to 18 million people. The delta also accounts for more than half of the country's food production and 80% of its exported rice. Due to its low elevation, it is highly susceptible to fluvial and coastal flooding. Although extreme floods often result in excessive damage and economic losses, the annual flood pulse from the Mekong is vital to sustain agricultural cultivation and the livelihoods of millions of delta inhabitants.
Delta-wide risk management and adaptation strategies are required to mitigate the adverse impacts of extreme events while capitalising on the benefits of floods. However, proper flood risk management has not been implemented in the VMD, because the quantification of flood damage is often overlooked and the risks are thus not quantified. So far, flood management has focused exclusively on engineering measures, i.e. high- and low-dyke systems, aiming at flood-free conditions or partial inundation control without any consideration of the actual risks or a cost-benefit analysis. Therefore, an analysis of future delta flood dynamics driven by these stressors is valuable to facilitate the transition from sole hazard control towards a risk management approach, which is more cost-effective and also robust against future changes in risk.
Building on these research gaps, this thesis investigates the current state and future projections of flood hazard, damage and risk to rice cultivation, the most important economic activity in the VMD. The study quantifies the changes in risk and hazard brought about by the development of delta-based flood control measures in recent decades, and analyses the expected changes in risk driven by the changing climate, rising sea level and deltaic land subsidence, and finally by the development of hydropower projects in the Mekong Basin. For this purpose, flood trend analyses and comprehensive hydraulic modelling were performed, together with the development of a concept to quantify flood damage and risk to rice plantations.
The analysis of observed flood levels revealed strong and robust increasing trends in flood peak and duration downstream of the high-dyke areas, with a step change in 2000/2001, i.e. after the disastrous flood which initiated the high-dyke development. These changes contrast with the negative trends detected upstream, suggesting that high-dyke development has shifted flood hazard downstream. The findings of the trend analysis were later confirmed by hydraulic simulations of the two recent extreme floods in 2000 and 2011, in which the hydrological boundaries and dyke system settings were interchanged.
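The summary does not name the specific trend test used. A common nonparametric choice for such hydrological series is the Mann-Kendall test; the sketch below computes its S statistic on hypothetical flood-peak data (this is an assumed method for illustration, not necessarily the thesis's procedure):

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: the sum of signs over all pairwise
    differences. S >> 0 suggests an increasing trend, S << 0 a
    decreasing one; a significance test would also need Var(S)."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)
    return s

# Hypothetical annual flood peaks (m) downstream of the high-dyke areas
peaks = [2.1, 2.0, 2.3, 2.4, 2.2, 2.6, 2.7, 2.9, 2.8, 3.1]
print(mann_kendall_s(peaks))  # positive S: increasing trend
```

Being rank-based, the test is robust to outliers and non-normal data, which is why it is widely used for flood series.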
However, the high-dyke system was not the only, and often not the main, cause of the shift in flood hazard, as a comparative analysis of these two extreme floods showed. The high-dyke development was responsible for 20–90% of the observed changes in flood level between 2000 and 2011, with large spatial variation. The particular flood hydrographs of the two events had the highest contribution in the northern part of the delta, while the tidal level had a 2–3 times higher influence than the high-dyke development in the lower-central and coastal areas downstream of the high-dyke areas. The impact of the high-dyke development was highest in the areas immediately downstream of the high-dyke area, just south of the Cambodia-Vietnam border. The hydraulic simulations also confirmed that the concurrence of the flood peak with spring tides, i.e. high sea level along the coast, substantially amplified the flood level and inundation in the central and coastal regions.
The risk assessment quantified the economic losses to rice cultivation at USD 25.0 million and USD 115 million (0.02–0.1% of the total GDP of Vietnam in 2011) for the 10-year and 100-year floods, respectively, with an expected annual damage of about USD 4.5 million. A particular finding is that the flood damage was highly sensitive to flood timing: a 10-year event with an early peak, i.e. in late August-September, could cause as much damage as a 100-year event peaking in October. This finding underlines the importance of reliable early flood warning, which could substantially reduce the damage to rice crops and thus the risk.
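Expected annual damage of this kind is conventionally estimated by integrating the damage-exceedance-probability curve over all return periods. The sketch below uses trapezoidal integration on hypothetical numbers; it is not the thesis's actual damage curve or integration scheme:

```python
def expected_annual_damage(return_periods, damages):
    """Approximate EAD by trapezoidal integration of damage over
    exceedance probability. return_periods in years, damages in a
    consistent monetary unit (here: million USD)."""
    probs = [1.0 / t for t in return_periods]   # exceedance probabilities
    pairs = sorted(zip(probs, damages))         # integrate over ascending p
    ead = 0.0
    for (p0, d0), (p1, d1) in zip(pairs, pairs[1:]):
        ead += 0.5 * (d0 + d1) * (p1 - p0)
    return ead

# Hypothetical damages (million USD) for 2-, 10- and 100-year floods
print(round(expected_annual_damage([2, 10, 100], [0.0, 25.0, 115.0]), 1))
```

In practice the curve is sampled at many more return periods, and the tails (events rarer than the largest modelled flood, and frequent harmless events) need explicit treatment.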
The developed risk assessment concept was furthermore applied to investigate two high-dyke development alternatives currently under discussion among the administrative bodies in Vietnam, and also in the public. The first option, favouring the use of the current high-dyke compartments as flood retention areas instead of for rice cropping during the flood season, could reduce flood hazard and expected losses by 5–40%, depending on the region of the delta. On the contrary, the second option, promoting the further extension of the areas protected by high dykes to facilitate third rice crop planting on a larger area, tripled the current expected annual flood damage. This finding challenges the expected economic benefit of triple rice cultivation, in addition to the already known reduction of nutrient supply from floodplain sedimentation and thus higher costs for fertilizers.
The economic benefits of the high-dyke and triple rice cropping system are further challenged by the changes in flood dynamics to be expected in the future. For the middle of the 21st century (2036-2065), effective sea-level rise was projected to increase the inundation extent by 20–27%. This corresponds to an increase in flood damage to rice crops of USD 26.0, 40.0 and 82.0 million in dry, normal and wet years, respectively, compared to the baseline period 1971-2000.
Hydraulic simulations indicated that the planned massive development of hydropower dams in the Mekong Basin could potentially compensate for the increase in flood hazard and agricultural losses stemming from climate change. However, the benefits of the dams for mitigating flood losses are highly uncertain, because a) the actual development of the dams is highly disputed, b) the operation of the dams is primarily targeted at power generation, not flood control, and c) this would require international agreements and cooperation, which are difficult to achieve in South-East Asia. The theoretical flood mitigation benefit is additionally challenged by a number of negative impacts of the dam development, e.g. the disruption of floodplain inundation in normal, non-extreme flood years. Together with the certain reduction of sediment and nutrient loads to the floodplains, hydropower dams will drastically impair rice and agricultural production, the basis of the livelihoods of millions of delta inhabitants.
In conclusion, the VMD is expected to face increasing threats from tide-induced floods in the coming decades. Protecting the entire delta coastline solely with “hard” engineering flood protection structures is neither technically nor economically feasible; adaptation and mitigation actions are urgently required. Better control and reduction of groundwater abstraction is therefore strongly recommended as an immediate, high-priority action to reduce land subsidence and thus tidal flooding and salinity intrusion in the delta. Hydropower development in the Mekong Basin might offer some theoretical flood protection for the Mekong Delta, but due to uncertainties in the operation of the dams and a number of negative effects, dam development cannot be recommended as a strategy for flood management. For the Vietnamese authorities, it is advisable to properly maintain the existing flood protection structures and to develop flexible, risk-based flood management plans. In this context, the study showed that the high-dyke compartments can be utilized for emergency flood management in extreme events. For this purpose, a reliable flood forecast is essential, and the action plan should be laid down in official documents and legislation to ensure commitment and consistency in implementation and operation.
The present study aims to identify the optimal body-size/shape and maturity characteristics associated with superior fitness test performances, having controlled for body-size, sex, and chronological-age differences. The sample consisted of 597 Tunisian children (396 boys and 201 girls) aged 8 to 15 years. Three sprint speeds recorded at 10, 20 and 30 m, two vertical and two horizontal jump tests, a change-of-direction test and a handgrip-strength test were assessed during physical-education classes. Allometric modelling was used to identify the benefit of being an early or late maturer. Findings showed that being tall and light is the ideal shape for success at most physical fitness tests, but the height-to-weight “shape” ratio seems to be test-dependent. Having controlled for body-size/shape, sex, and chronological age, the model identified maturity offset as an additional predictor. Boys who go through peak height velocity (PHV) earlier/younger outperform those who go through it at a later/older age. For girls, however, performance on most physical-fitness tests peaked at the age at PHV and declined thereafter. Girls whose age at PHV was near the middle of the age range would appear to have an advantage over early or late maturers. These findings have important implications for talent scouts and coaches wishing to recruit children into their sports/athletic clubs.
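Allometric models of the kind mentioned above are typically fitted on the logarithmic scale, where a power law becomes linear. The sketch below recovers a power-law exponent from noise-free toy data; the variables, exponent and data are hypothetical, not the study's actual model:

```python
import math

def fit_power_law(x, y):
    """Fit y = a * x**k by ordinary least squares on the log-log scale,
    the standard way allometric exponents are estimated."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    k = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - k * mx)
    return a, k

# Hypothetical: grip strength (N) versus body mass (kg), exponent 2/3
mass = [30, 40, 50, 60, 70]
strength = [5.0 * m ** (2 / 3) for m in mass]
a, k = fit_power_law(mass, strength)
print(round(a, 2), round(k, 3))  # recovers a = 5.0, k = 0.667
```

Scaling performance by mass raised to the fitted exponent is what allows body-size effects to be separated from maturity effects, as in the study's approach.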
The project “Forschungsdatenmanagement in Brandenburg (FDM-BB)” generated fundamental insights into the requirements for, and the status quo of, research data management (RDM) at the eight Brandenburg universities, with the aim of deriving concrete recommendations for action and implementation for Brandenburg.
With the help of targeted surveys at the universities (fact sheets, the FDM-Palette) and interviews with the other funded state-level RDM initiatives, the next steps on the way towards institutional and sustainable research data management were prioritised. These steps fall within the areas of responsibility of three groups of actors: the Brandenburg Ministry for Economic Affairs, Research and Culture (MWFK); the individual university; and, for joint measures, cooperative implementation by (almost) all universities.
In addition, implementation recommendations were developed, such as building up local expertise at the individual universities in Brandenburg, the cooperative provision of IT services and support services of state-wide relevance, and the coordination of FDM-BB.
A further aim is to jointly formulate a research data strategy for Brandenburg that involves all Brandenburg institutions and that, with the help of cooperatively distributed responsibilities, can do justice to the (still) very dynamic topic of research data management.
Floodplains are threatened ecosystems that are not only ecologically meaningful but also important for humans, creating multiple benefits. Many underlying functions, like nutrient retention, carbon sequestration or water regulation, strongly depend on regular inundation. So far, these have been approached on the basis of what are called ‘active floodplains’. Active floodplains, defined as being statistically inundated once every 100 years, represent less than 10% of a floodplain’s original size. Still, should this remaining area be considered as one homogeneous surface in terms of floodplain function, or are there alternative approaches to quantify ecologically active floodplains? With the European Flood Hazard Maps, the extent of not only medium floods (T-medium) but also frequent floods (T-frequent) must be modelled by all member states of the European Union. For large German rivers, both scenarios were compared to quantify the extent, as well as selected indicators of naturalness derived from inundation. It is assumed that the more natural a floodplain is, the more inundation occurs and the better the functioning. Real inundation was quantified using measured discharges from relevant gauges over the past 20 years. As a result, land uses indicating strong human impacts changed significantly from T-frequent to T-medium floodplains. Furthermore, the extent, water depth and water volume stored in the T-frequent and T-medium floodplains are significantly different. Even T-frequent floodplains experienced inundation for only half of the considered gauges during the past 20 years. This study provides evidence for considering regulation functions on the basis of ecologically active floodplains, i.e. floodplains with more frequent inundation than the T-medium floodplains delineate.
Inhibition of acid sphingomyelinase (ASM), a lysosomal enzyme that catalyzes the hydrolysis of sphingomyelin into ceramide and phosphorylcholine, may serve as an investigational tool or a therapeutic intervention to control many diseases. Specific ASM inhibitors are currently not sufficiently characterized. Here, we found that 1-aminodecylidene bis-phosphonic acid (ARC39) specifically and efficiently (>90%) inhibits both lysosomal and secretory ASM in vitro. Results from investigating sphingomyelin phosphodiesterase 1 (SMPD1/Smpd1) mRNA and ASM protein levels suggested that ARC39 directly inhibits ASM's catalytic activity in cultured cells, a mechanism that differs from that of functional inhibitors of ASM. We further provide evidence that ARC39 dose- and time-dependently inhibits lysosomal ASM in intact cells, and we show that ARC39 also reduces platelet- and ASM-promoted adhesion of tumor cells. The observed toxicity of ARC39 is low at concentrations relevant for ASM inhibition in vitro, and it does not strongly alter the lysosomal compartment or induce phospholipidosis in vitro. When applied intraperitoneally in vivo, even subtoxic high doses administered short-term induced sphingomyelin accumulation only locally in the peritoneal lavage without significant accumulation in plasma, liver, spleen, or brain. These findings require further investigation with other possible chemical modifications. In conclusion, our results indicate that ARC39 potently and selectively inhibits ASM in vitro and highlight the need for developing compounds that can reach tissue concentrations sufficient for ASM inhibition in vivo.
Over the last decades, the rate of near-surface warming in the Arctic has been at least double that elsewhere on our planet (Arctic amplification). However, the relative contribution of different feedback processes to Arctic amplification is a topic of ongoing research, including the role of aerosol and clouds. Lidar systems are well suited to the investigation of aerosol and optically thin clouds, as they provide vertically resolved information on fine temporal scales. Global aerosol models fail to converge on the sign of the Arctic aerosol radiative effect (ARE). In the first part of this work, the optical and microphysical properties of Arctic aerosol were characterized at the case-study level in order to assess the short-wave (SW) ARE. A long-range transport episode was investigated first. Geometrically similar aerosol layers were captured over three locations. Although the aerosol size distribution differed between Fram Strait (bi-modal) and Ny-Ålesund (fine mono-modal), the atmospheric column ARE was similar. The latter was related to the domination of accumulation-mode aerosol. Over both locations, top-of-the-atmosphere (TOA) warming was accompanied by surface cooling.
Subsequently, the sensitivity of the ARE was investigated with respect to different aerosol and spring-time ambient conditions. A 10% change in the single-scattering albedo (SSA) induced larger ARE perturbations than a 30% change in the aerosol extinction coefficient. With respect to ambient conditions, the ARE at the TOA was more sensitive to changes in solar elevation than the ARE at the surface. Over dark surfaces the ARE profile was exclusively negative, while over bright surfaces a shift from negative to positive occurred above the aerosol layers. Consequently, the sign of the ARE can be highly sensitive in spring, since this season is characterized by transitional surface albedo conditions.
As the inversion of the aerosol microphysics is an ill-posed problem, the inferred aerosol size distribution of a low-tropospheric event was compared to the in-situ measured distribution. Both techniques revealed a bi-modal distribution, with good agreement in the total volume concentration. However, in terms of SSA a disagreement was found, with the lidar inversion indicating highly scattering particles and the in-situ measurements pointing to absorbing particles. The discrepancies could stem from assumptions in the inversion (e.g. wavelength-independent refractive index) and errors in the conversion of the in-situ measured light attenuation into absorption. Another source of discrepancy might be related to an incomplete capture of fine particles in the in-situ sensors. The disagreement in the most critical parameter for the Arctic ARE necessitates further exploration in the frame of aerosol closure experiments. Care must be taken in ARE modelling studies, which may use either the in-situ or lidar-derived SSA as input.
Reliable characterization of cirrus geometrical and optical properties is necessary for improving estimates of their radiative effect. In this respect, the detection of sub-visible cirrus is of special importance: the total cloud radiative effect (CRE) can be negatively biased should only the optically thin and opaque cirrus contributions be considered. To this end, a cirrus retrieval scheme was developed, aiming at increased sensitivity to thin clouds. The cirrus detection was based on the wavelet covariance transform (WCT) method, extended by dynamic thresholds. The dynamic WCT exhibited high sensitivity to faint and thin cirrus layers (less than 200 m) that were partly or completely missed by the existing static method. The optical characterization scheme extended the Klett–Fernald retrieval by an iterative lidar ratio (LR) determination (constrained Klett). The iterative process was constrained by a reference value, which indicated the aerosol concentration beneath the cirrus cloud. Contrary to existing approaches, the aerosol-free assumption was not adopted; instead, the aerosol conditions were approximated by an initial guess. The inherent uncertainties of the constrained Klett were higher for optically thinner cirrus, but an overall good agreement was found with two established retrievals. Additionally, existing approaches that rely on aerosol-free assumptions showed increased accuracy when the proposed reference value was adopted. The constrained Klett reliably retrieved the optical properties in all cirrus regimes, including upper sub-visible cirrus with a COD down to 0.02.
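The wavelet covariance transform locates layer boundaries by convolving a vertical signal profile with a Haar function, so that sharp gradients produce large transform values. A minimal sketch on a synthetic profile follows, with the thesis's dynamic-threshold extension omitted; bin indices, values and the sign convention shown are illustrative:

```python
def haar_wct(profile, dz, a):
    """Wavelet covariance transform of a vertical profile with a Haar
    wavelet of dilation a (in bins). With this convention, strongly
    negative values flag sharp increases in signal (layer bases) and
    strongly positive values flag sharp decreases (layer tops)."""
    half = a // 2
    n = len(profile)
    wct = [0.0] * n
    for b in range(half, n - half):
        lower = sum(profile[b - half:b])   # below centre: Haar = +1
        upper = sum(profile[b:b + half])   # above centre: Haar = -1
        wct[b] = (lower - upper) * dz / a
    return wct

# Synthetic backscatter profile: background with a layer in bins 40-59
profile = [1.0] * 100
for i in range(40, 60):
    profile[i] = 5.0
w = haar_wct(profile, dz=1.0, a=10)
base = min(range(len(w)), key=lambda i: w[i])  # strongest negative value
top = max(range(len(w)), key=lambda i: w[i])   # strongest positive value
print(base, top)
```

In a real retrieval, a candidate edge counts as a detection only where the transform exceeds a threshold; making that threshold depend on the local signal (rather than a fixed constant) is what the dynamic extension described above changes.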
Cirrus is the only cloud type capable of inducing TOA cooling or heating at daytime. Over the Arctic, however, the properties and CRE of cirrus are under-explored. In the final part of this work, long-term cirrus geometrical and optical properties were investigated for the first time over an Arctic site (Ny-Ålesund). To this end, the newly developed retrieval scheme was employed. Cirrus layers over Ny-Ålesund seemed to be more absorbing in the visible spectral region compared to lower latitudes and comprise relatively more spherical ice particles. Such meridional differences could be related to discrepancies in absolute humidity and ice nucleation mechanisms. The COD tended to decline for less spherical and smaller ice particles probably due to reduced water vapor deposition on the particle surface. The cirrus optical properties presented weak dependence on ambient temperature and wind conditions.
Over the 10 years of the analysis, no clear temporal trend was found and the seasonal cycle was not pronounced. However, winter cirrus appeared under colder conditions and stronger winds; moreover, they were optically thicker, less absorbing and consisted of relatively more spherical ice particles. A positive net CRE was primarily found for a broad range of representative cloud properties and ambient conditions. Only for high COD (above 10) and over tundra was a negative net CRE estimated, and this did not hold true over snow/ice surfaces. Consequently, the COD in combination with the surface albedo seems to play the most critical role in determining the sign of the CRE over the high European Arctic.
Schire Simroh
(2021)
Arno Nadel ist 1878 in Wilna geboren und 1943 in Auschwitz ermordet worden. Es sind nur einige wenige Dokumente überliefert, anhand derer sich der Lebensweg von Arno Nadel rekonstruieren lässt. Das ist nur wenig verwunderlich, denn die Welt von Arno Nadel ist drei Mal untergegangen: zuerst die jüdische Welt von Wilna, dann die deutsche von Königsberg und schließlich die deutsch-jüdische von Berlin. Es ist allerdings erstaunlich, wie gründlich Arno Nadels Wirken danach in Vergessenheit geriet. Allein seine Vielseitigkeit hätte eigentlich diesen außergewöhnlichen Menschen vor dem Vergessen bewahren müssen. Arno Nadel war Dichter, Philosoph, Bühnenautor, Religionsgelehrter, Übersetzer, Maler und Grafiker, Komponist, Musik- und Literaturwissenschaftler, Ethnologe, Chordirigent, Pianist, Organist und Musikpublizist. Wenn man von Beschäftigungen zum reinen Broterwerb absieht, wie seiner Anstellung als Lehrer an einer Schule. All diese vielseitigen Tätigkeiten waren keineswegs dilettantische Versuche eines zerstreuten Menschen, sondern vollwertige Berufe und Berufungen, die er mehr oder weniger gleichzeitig mit höchster Intensität und Professionalität ausübte. In dieser Hinsicht war Nadel eine nicht nur zu seiner Zeit einzigartige Erscheinung, ein Phänomen, das eher an die Künstlerpersönlichkeiten der Renaissance erinnert. Auf jedem seiner Schaffensgebiete war Nadel unwahrscheinlich produktiv, so produktiv, dass man sich mit Ehrfurcht fragen muss, wie ein Mensch im Laufe seines Lebens derart viele geistige Werte zu schaffen vermochte. Obwohl ein großer Teil seines Nachlasses den Zweiten Weltkrieg nicht überdauerte, ist die Fülle der erhaltenen Manuskripte und publizierten Werke kaum zu überblicken. Um sein gesamtes Werk umfassend auszuwerten, bedürfte es der Anstrengungen eines ganzen Teams von Wissenschaftlern aus unterschiedlichen Disziplinen.
For this CD production, five compositions for chasan, choir, and organ for the Friday evening service were recorded, among others. They originally appeared in the anthology "Schire Simroh", which collected synagogal compositions by contemporary authors. They were written for the 1926 competition of the Allgemeiner Deutscher Kantoren-Verband e. V. and published by J. Kauffmann in Frankfurt am Main. A further special printing followed in 1930. This extremely rare edition was reproduced in 1968 in the Journal of Synagogue Music. Among these pieces is "W'schomru", which, like the other four published pieces, belonged to the compendium. This composition gives an impression of Nadel's expressive style, which "combines Eastern freedom of expression with Western instrumentation, a largely traditional Jewish melodic structure with European polyphony and harmonic boldness". (Cf. Jascha Nemtsov: Arno Nadel. Sein Beitrag zur jüdischen Musikkultur. Berlin 2008.)
The CD also includes the piece "J'hi Scholom", a work for chasan, choir, and organ that Arno Nadel composed for the dedication of the Friedenstempel in Berlin.
Psalm 24, published for Moritz Schaefer's 70th birthday on May 21, 1927, with the dedication "To Prof. Moritz Schaefer, the friend of all great Jewish endeavors", is an a cappella composition for male choir with cantor solo, written for the liturgy of the Torah reading.
Three solo organ preludes frame this recording. The first was composed for the High Holidays, based on the most representative liturgical motifs of "Bar'chu" and "Hamelech" for the evening and morning prayers.
Arno Nadel wrote the middle organ prelude for the three weeks before Tisha B'Av, the saddest weeks of the Jewish people, during which the destruction of both Temples in Jerusalem is commemorated. This prelude is based on the motifs of the liturgy for the days on which the "Kinnot" are recited, liturgical poems describing the suffering of the people of Israel in exile, as well as the Lamentations of Jeremiah, which describe the grief and horror of the Temple's destruction. The final piece on this CD opens the last part of the service on Yom Kippur, the Day of Atonement. It is called "Ne'ilah" and corresponds to the prayer spoken before the closing of the Temple gates in Jerusalem at the end of the day.
The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can promote the development of resistant bacteria in stables. Transmission through direct contact with animals and through contamination of food has already been proven. The animals' excreta, combined with a binding material, open up a further potential path of spread into the environment if they are used as organic manure on agricultural land. As most airborne bacteria are attached to particulate matter, this work focuses on atmospheric dispersal via the dust fraction.
Field measurements on arable lands in Brandenburg, Germany and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric dust-associated spread of antibiotic-resistant bacteria from poultry manure fertilized agricultural soils. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion.
PM10 (particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as by the intensity of manure spreading by the manure spreader. During poultry manure application, PM10 emissions ranged between 0.05 kg ha-1 and 8.37 kg ha-1. For comparison, the subsequent land preparation contributed 0.35 – 1.15 kg ha-1 of PM10 emissions. Manure particles were still part of the dust emissions, but they accounted for less than 1% of total PM10 emissions owing to the dilution of the poultry manure in the soil after incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent incorporation, although PM10 emissions from incorporation were larger than those from application for the non-dried manure variants.
Wind erosion leads to preferential detachment of manure particles from sandy soils when poultry manure has recently been incorporated. Sorting effects between the low-density organic particles of manure origin and the mineral soil particles were determined just above the threshold wind speed of 7 m s-1. Depending on the wind speed, potential erosion rates between 101 and 854 kg ha-1 were identified when 6 t ha-1 of poultry manure had been applied. Microbial investigation showed that manure bacteria were detached more easily from the soil surface during wind erosion because of their attachment to manure particles.
Although antibiotic-resistant bacteria (ESBL-producing E. coli) were found in the poultry barns, no contamination with them could be detected in the manure, in the fertilized soils, or in the dust generated by manure application, land preparation, or wind erosion. Parallel studies within this project showed that storing poultry manure for a few days (36 – 72 h) is sufficient to inactivate ESBL-producing E. coli. Other antibiotic-resistant bacteria, i.e. MRSA and VRE, were found only sporadically in the stables and not at all in the dust. Based on the results of this work, the risk of a potential infection by dust-associated antibiotic-resistant bacteria can therefore be considered low.
Sensing and Responding of Cardiomyocytes to Changes of Tissue Stiffness in the Diseased Heart
(2021)
Cardiomyocytes are permanently exposed to mechanical stimulation due to cardiac contractility. Passive myocardial stiffness is a crucial factor that defines the physiological ventricular compliance and the volume of diastolic filling with blood. Heart diseases often present with increased myocardial stiffness, for instance when fibrotic changes modify the composition of the cardiac extracellular matrix (ECM). Consequently, the ventricle loses its compliance, and the diastolic blood volume is reduced. Recent advances in the field of cardiac mechanobiology have revealed that disease-related changes in environmental stiffness cause severe alterations in cardiomyocyte cellular behavior and function. Here, we review the molecular mechanotransduction pathways that enable cardiomyocytes to sense stiffness changes and translate them into altered gene expression. We also summarize current knowledge about when myocardial stiffness increases in the diseased heart. Sophisticated in vitro studies have revealed functional changes when cardiomyocytes face a stiffer matrix. Finally, we highlight recent studies that described modulations of cardiac stiffness, and thus myocardial performance, in vivo. Mechanobiology research is just at the cusp of systematic investigations of mechanical changes in the diseased heart, but what is already known paves the way for new therapeutic approaches in regenerative biology.
The sediment profile from Lake Gościąż in central Poland comprises a continuous, seasonally resolved, and exceptionally well-preserved archive of the Younger Dryas (YD) climate variation. This provides a unique opportunity for detailed investigation of lake system responses during periods of rapid climate cooling (YD onset) and warming (YD termination). The new varve record of Lake Gościąż presented here spans 1662 years from the late Allerød (AL) to the early Preboreal (PB). Microscopic varve counting provides an independent chronology with a YD duration of 1149 +14/-22 years, which confirms previous results of 1140 ± 40 years. We link stable oxygen isotopes and chironomid-based air temperature reconstructions with the response of various geochemical and varve microfacies proxies, focusing especially on the onset and termination of the YD. Cooling at the YD onset lasted ~180 years, which is about a century longer than the terminal warming, which was completed in ~70 years. During the AL/YD transition, environmental proxy data lagged the onset of cooling by ~90 years and revealed an increase in lake productivity and internal lake re-suspension as well as slightly higher detrital sediment input. In contrast, rapid warming and environmental changes during the YD/PB transition occurred simultaneously. However, initial changes, such as declining diatom deposition and detrital input, occurred a few centuries before the rapid warming at the YD/PB transition. These environmental changes likely reflect a gradual increase in summer air temperatures already during the YD. Our data indicate complex and differing environmental responses to the major climate changes related to the YD, involving different proxy sensitivities and threshold processes.
Deoxyribonucleic acid (DNA) nanostructures enable the attachment of functional molecules to nearly any unique location on their underlying structure. Due to their single-base-pair structural resolution, several ligands can be spatially arranged and closely controlled according to the geometry of their desired target, resulting in optimized binding and/or signaling interactions.
This dissertation covers three main projects, all of which use variations of functionalized DNA nanostructures that act as platforms for the oligovalent presentation of ligands. The purpose of this work was to evaluate the ability of DNA nanostructures to precisely display different types of functional molecules and thereby to enhance their efficacy according to the concept of multivalency. Moreover, functionalized DNA structures were examined for their suitability for functional screening assays. The developed DNA-based compound ligands were used to target structures in different biological systems.
One part of this dissertation addressed the binding of pathogens with small modified DNA nanostructures. Pathogens such as viruses and bacteria are known for their multivalent attachment to host cell membranes. By blocking their receptors for recognition and/or fusion with the targeted host in an oligovalent manner, the objective was to impede their ability to adhere to and invade cells. For influenza A, only enhanced binding of the oligovalent peptide-DNA constructs compared to the monovalent peptide could be observed, whereas in the case of respiratory syncytial virus (RSV), binding as well as blocking of the target receptors led to an increased inhibition of infection in vitro.
In the final part, the ability of chimeric DNA-peptide constructs to bind to and activate signaling receptors on the surface of cells was investigated. Specific binding of DNA trimers, conjugated with up to three peptides, to EphA2 receptor expressing cells was evaluated in flow cytometry experiments. Subsequently, their ability to activate these receptors via phosphorylation was assessed. EphA2 phosphorylation was significantly increased by DNA trimers carrying three peptides compared to monovalent peptide. As a result of activation, cells underwent characteristic morphological changes, where they "round up" and retract their periphery.
The results obtained in this work comprehensively demonstrate the capability of DNA nanostructures to serve as stable, biocompatible, and controllable platforms for the oligovalent presentation of functional ligands. Functionalized DNA nanostructures were used to enhance biological effects and as a tool for functional screening of bioactivity. This work demonstrates that modified DNA structures have the potential to improve drug development and to unravel the activation of signaling pathways.
Welche Eigenschaften machen das Computerspiel zum geeigneten Medium, das den pädagogischen Einsatz im Unterricht bereichern kann? Welche Computerspiele bieten welche Möglichkeiten zur Auseinandersetzung mit welchen Themen? Wie kann das Computerspiel auch im schulischen Umfeld den für den Lernprozess so wichtigen Lebensweltbezug herstellen?
Diese und viele weitere Fragen beantworten die Autor*innen des Bandes „Didaktik des digitalen Spielens“. Dafür begeben sie sich in einen Dialog der Wissenschaftsdisziplinen, leiten Möglichkeiten zum Einsatz von Computerspielen ab und werten Erfahrungen mit dem Einsatz von Computerspielen – auch in der Lehrendenbildung – aus. Mit ihren verschiedenen Zugängen zu Fragestellungen rund um eine „Didaktik des digitalen Spielens“ liefern sie einen Beitrag zu einem Diskurs, der besonders in Zeiten von Distanzunterricht notwendig und folgerichtig geführt werden muss. Die im Rahmen der gleichnamigen interdisziplinären Ringvorlesung im Wintersemester 2018/19 an der Universität Potsdam gehaltenen Vorträge sind durch die Diskussionen mit Studierenden geprägt und ausgewertet worden, so dass sie in der nun veröffentlichten Form auf mehreren Ebenen einen mehrperspektivischen Blick auf den Gegenstand „Computerspiel im Unterricht“ legen.
Geomagnetic field modeling using spherical harmonics requires the inversion for hundreds to thousands of parameters. This large-scale problem can always be formulated as an optimization problem in which a global minimum of a certain cost function has to be found. A variety of approaches is known for solving this inverse problem, e.g. derivative-based methods or least-squares methods and their variants. Each of these methods has its own advantages and disadvantages, which affect, for example, the applicability to non-differentiable functions or the runtime of the corresponding algorithm.
In this work, we pursue the goal of finding an algorithm that is faster than the established methods and applicable to non-linear problems. Such non-linear problems occur, for example, when estimating Euler angles or when the more robust L1 norm is applied. Therefore, we investigate the usability of stochastic optimization methods from the CMA-ES family for modeling the geomagnetic field of Earth's core. On the one hand, the basics of core field modeling and its parameterization are discussed using examples from the literature. On the other hand, the theoretical background of the stochastic methods is provided. A specific CMA-ES algorithm was successfully applied to invert data from the Swarm satellite mission and to derive the core field model EvoMag. The EvoMag model agrees well with established models and with observatory data from Niemegk. Finally, we present some observed difficulties and discuss the results of our model.
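The optimization view described above can be illustrated with a small derivative-free search. The sketch below is a deliberately simplified evolution strategy (isotropic sampling with a fixed step-size decay) standing in for the full CMA-ES, applied to a hypothetical linear toy problem with the non-differentiable L1 misfit mentioned in the text; none of the numbers, names, or parameters come from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy inverse problem (not the thesis setup): recover model
# coefficients m from observations d = G @ m_true + noise, minimizing the
# non-differentiable L1 misfit, which rules out standard gradient methods.
m_true = np.array([2.0, -1.0, 0.5])
G = rng.normal(size=(50, 3))
d = G @ m_true + 0.05 * rng.standard_normal(50)

def cost(m):
    """Robust L1-norm data misfit."""
    return float(np.abs(G @ m - d).sum())

def es_minimize(cost, dim, iters=300, pop=20, mu=5, sigma=1.0):
    """Minimal (mu, lambda) evolution strategy: sample a Gaussian cloud,
    keep the mu best offspring, recombine, and shrink the step size.
    A simplified stand-in for CMA-ES (no covariance adaptation)."""
    mean = np.zeros(dim)
    for _ in range(iters):
        samples = mean + sigma * rng.standard_normal((pop, dim))
        fitness = np.array([cost(s) for s in samples])
        mean = samples[np.argsort(fitness)[:mu]].mean(axis=0)
        sigma *= 0.97  # geometric decay instead of path-based step-size control
    return mean

m_est = es_minimize(cost, dim=3)
print(np.round(m_est, 2))
```

A real CMA-ES additionally adapts the full covariance matrix of the sampling distribution, which is what makes it efficient on ill-conditioned problems such as large spherical harmonic inversions.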
Interoception is an often neglected but crucial aspect of the human minimal self. In this perspective, we extend the embodiment account of interoceptive inference to explain the development of the minimal self in humans. To do so, we first provide a comparative overview of the central accounts addressing the link between interoception and the minimal self. Grounding our arguments in the embodiment framework, we propose a bidirectional relationship between motor and interoceptive states, which jointly contribute to the development of the minimal self. We present empirical findings on interoception in development and discuss its role in the emergence of the minimal self. Moreover, we make theoretical predictions that can be tested in future experiments. Our goal is to provide a comprehensive view of the mechanisms underlying the minimal self by explaining the role interoception plays in its development.
Elucidating the molecular basis of enhanced growth in the Arabidopsis thaliana accession Bur-0
(2021)
The life cycle of flowering plants is a dynamic process that involves successfully passing through several developmental phases, and tremendous progress has been made in revealing the cellular and molecular regulatory mechanisms underlying these phases, morphogenesis, and growth. Although several key regulators of plant growth or developmental phase transitions have been identified in Arabidopsis, little is known about factors that become active during embryogenesis and seed development as well as during further postembryonic growth. Even less is known about accession-specific factors that determine plant architecture and organ size. Bur-0 has been reported as a natural Arabidopsis thaliana accession with exceptionally big seeds and a large rosette; its phenotype makes it an interesting candidate for studying growth and developmental aspects in plants. However, the molecular basis underlying this big phenotype remains to be elucidated. Thus, the general aim of this PhD project was to investigate and unravel the molecular mechanisms underlying the big phenotype of Bur-0.
Several natural Arabidopsis accessions and late-flowering mutant lines were analyzed in this study, including Bur-0. Phenotypes were characterized by determining rosette size, seed size, flowering time, and SAM size and growth in different photoperiods, during embryonic and postembryonic development. Our results demonstrate that Bur-0 stands out as an interesting accession with simultaneously larger rosettes, a larger SAM, a later-flowering phenotype, larger seeds, and also larger embryos. Interestingly, inter-accession crosses (F1) resulted in bigger seeds than the parental self-crossed accessions, particularly when Bur-0 was used as the female parental genotype, suggesting parental effects on seed size that might be maternally controlled. Furthermore, developmental stage-based comparisons revealed that the large embryo size of Bur-0 is achieved during late embryogenesis and the large rosette size during late postembryonic growth. Interestingly, developmental phase progression analyses revealed that from germination onwards, the progression of developmental phases during postembryonic growth is delayed in Bur-0, suggesting that, in general, the mechanisms that regulate developmental phase progression are shared across developmental phases.
On the other hand, a detailed physiological characterization of different tissues at different developmental stages revealed accession-specific physiological and metabolic traits that underlie accession-specific phenotypes; in particular, more carbon resources were found in Bur-0 during embryonic and postembryonic development, suggesting an important role of carbohydrates in determining the bigger Bur-0 phenotype. Additionally, differences in cellular organization, nuclear DNA content, and ploidy level were analyzed in different tissues and cell types, and we found that the large organ size of Bur-0 can be attributed mainly to its larger cells and to higher cell proliferation in the SAM, but not to a different ploidy level.
Furthermore, RNA-seq analysis of embryos at torpedo and mature stage, as well as SAMs at vegetative and floral transition stage from Bur-0 and Col-0 was conducted to identify accession-specific genetic determinants of plant phenotypes, shared across tissues and developmental stages during embryonic and postembryonic growth. Potential candidate genes were identified and further validation of transcriptome data by expression analyses of candidate genes as well as known key regulators of organ size and growth during embryonic and postembryonic development confirmed that the high confidence transcriptome datasets generated in this study are reliable for elucidation of molecular mechanisms regulating plant growth and accession-specific phenotypes in Arabidopsis.
Taken together, this PhD project contributes to the plant development research field providing a detailed analysis of mechanisms underlying plant growth and development at different levels of biological organization, focusing on Arabidopsis accessions with remarkable phenotypical differences. For this, the natural accession Bur-0 was an ideal outlier candidate and different mechanisms at organ and tissue level, cell level, metabolism, transcript and gene expression level were identified, providing a better understanding of different factors involved in plant growth regulation and mechanisms underlying different growth patterns in nature.
Bottom-up synthetic biology seeks to understand how a cell works by developing techniques to produce lipid-based vesicular structures as cellular mimics. The most common techniques used to produce cellular mimics, or synthetic cells, are electroformation and the swelling method. However, these techniques cannot efficiently encapsulate macromolecules such as proteins, enzymes, DNA, or even liposomes as synthetic organelles. This highlights the need to develop new techniques that circumvent this issue and make the artificial cell a reality, in which a eukaryotic cell can be imitated by encapsulating macromolecules. The aim of this thesis was to construct a cell system using giant unilamellar vesicles (GUVs) to reconstitute the mitochondrial molybdenum cofactor (Moco) biosynthetic pathway. This pathway is highly conserved among all life forms and is therefore known for its biological significance in disorders induced by its malfunctioning. Furthermore, the pathway itself is a multi-step enzymatic reaction that takes place in different compartments. Initially, GTP in the mitochondrial matrix is converted to cPMP in the presence of cPMP synthase. The cPMP produced is then transported across the membrane to the cytosol, where it is converted by MPT synthase into MPT. This pathway provides an opportunity to address the general challenges faced in the development of a synthetic cell: to encapsulate large biomolecules with good efficiency and greater control, and to evaluate the enzymatic reactions involved in the process.
For this purpose, an emulsion-based technique was developed and optimized to allow rapid production of GUVs (~18 min) with high encapsulation efficiency (80%). This was made possible by optimizing various parameters such as density, type of oil, the impact of centrifugation speed and time, lipid concentration, pH, temperature, and emulsion droplet volume. Furthermore, the method was adapted to microtiter plates for direct experimentation and visualization after GUV formation. Using this technique, the two steps, the formation of cPMP from GTP and the formation of MPT from cPMP, were encapsulated in different sets of GUVs to mimic the two compartments. Two independent fluorescence-based detection systems were established to confirm the successful encapsulation and conversion of the reactants. In addition, the enzymes were produced by bacterial expression and measured. Following the successful encapsulation and evaluation of the enzymatic reactions, cPMP transport across the mitochondrial membrane was mimicked using GUVs with a complex mitochondrial lipid composition. It was found that the interaction of cPMP with the lipid bilayer results in transient pore formation and leakage of internal contents.
Overall, it can be concluded that in this thesis a novel technique was optimized for the fast production of functional synthetic cells. The individual enzymatic steps of the Moco biosynthetic pathway were successfully implemented and quantified within these cellular mimics.
On the Influence of Adaptivity on the Perception of Complexity in Human-Technology Interaction
(2021)
We live in a society shaped by a constant desire for innovation and progress. The consequences of this desire are the ever-advancing digitalization and computational networking of all areas of life, leading to increasingly complex socio-technical systems. The goals of these systems include supporting people, improving their living situation or quality of life, and extending human capabilities. Yet new complex technical systems do not only have positive social and societal effects. There are often undesirable side effects that only become visible in use, and both designers and users of complex networked technologies often feel disoriented. The consequences can range from declining acceptance to a complete loss of trust in networked software systems. Since complex applications, and with them ever more complex human-technology interactions, are becoming increasingly relevant, it is all the more important to regain orientation. To do so, we must first identify the elements that contribute to complexity in the interaction with networked socio-technical systems and thus create a need for orientation.
This thesis aims to contribute to enabling structured reflection on the complexity of networked socio-technical systems throughout the entire design process. To this end, a definition of complexity and complex systems is first developed that goes beyond the computer science understanding of complexity (i.e., the complexity of problems, algorithms, or data). The focus is instead on socio-technical interaction with and within complex networked systems. Based on this definition, an analysis tool is then developed that makes it possible to render the complexity of interactions with socio-technical systems visible and describable.
One area in which networked socio-technical systems are increasingly being adopted is that of digital educational technologies. Adaptive educational technologies in particular have been ascribed great potential in recent decades. Two adaptive teaching and training systems are therefore examined as examples using the analysis tool developed in this thesis, with particular attention to the influence of adaptivity on the complexity of human-technology interaction situations. Empirical studies investigate the experiences of designers and users of these adaptive systems in order to identify the decisive criteria for complexity. In this way, recurring questions of orientation in the development of adaptive educational technologies can be uncovered on the one hand, and interaction situations perceived as complex can be identified on the other. These situations show where, owing to the complexity of the system, users' established everyday routines no longer suffice to fully grasp the consequences of interacting with the system. This knowledge can help both designers and users to deal better with the inherent complexity of modern educational technologies in the future.
Portal Wissen = Change
(2021)
Change makes everything different. Let’s be honest: Just about everything is constantly in transformation. Even huge massifs that seem like eternity turned to stone will eventually dissolve into dust. So is change itself the only constant? The Greek philosopher Heraclitus certainly thought so. He said, “The only thing that is constant is change.”
Change is frightening. A change that we cannot explain throws us into turmoil – like a magic trick we cannot decipher. Viruses that mutate, ecosystems that collapse, stars that perish – they all seem to threaten the fragile balance that makes our existence possible. Humanity is late in recognizing that we ourselves are all too often the impetus for dangerous transformations.
Change gives hope. People have always been fascinated by change and felt compelled to explore its origin and essence. Quite successfully. We understand many things much better than generations before. But well enough? Not at all. Alexander von Humboldt said, “Every law of nature that reveals itself to the observer suggests a higher, as yet unrecognized one.” There is still much to be done.
The current issue of Portal Wissen is all about change. We spoke to an astrophysicist who has found her happiness in researching the formation and change of stars. We also look at different aspects of the very earthly climate change and its consequences: A geoscientist explains how global warming affects the stability of mountain ranges.
A legal expert makes clear that the call for a right to climate protection has gone largely unheard until now. How human land use affects biodiversity is being investigated by young researchers of the "BioMove" research training group, who have provided us with insights into their work on brown hares, water fleas, and mallard ducks. Other researchers focus on change in the contexts of humans. A group of nutrition scientists at the German Institute of Human Nutrition (DIfE) and sports scientists at the University of Potsdam are investigating the factors that cause our bodies to change as we age – and why some people lose muscles more quickly than others.
Despite all these changes, we do not lose sight of the diversity of research at the University of Potsdam. A visit to the laboratory of the project “OptiZeD” gives us an idea of the possibilities offered by optical sensors for the personalized medicine of tomorrow, while an educational researcher explains why cultural diversity is an asset beneficial to our education. In addition, a cultural scientist reports on the fascination of comics. They are all part of the hopeful change that science is initiating and accomplishing! Enjoy the read!
Monoclonal antibodies are essential tools in modern laboratory analytics as well as in medical therapy and diagnostics. The production of monoclonal antibodies is a time-consuming and labor-intensive process. Conventional methods rely on the immunization of laboratory animals, which can take several months. The antibody-producing B lymphocytes, or their antibody genes, are then isolated and examined in screening procedures to identify suitable binders.
Transferring the humoral immune response to an in vitro environment shortens this process and avoids the need for in vivo immunization. However, reproducing the complex interplay of all the immune cells involved in vitro proves difficult. The focus of this work was therefore the realization of a simplified in vitro immunization that concentrates on the protagonists of antibody production: the B lymphocytes. In addition, a permanent cell line was to be established that could be used for antibody production and would replace the use of primary cells.
In the first part of this work, a protocol for the in vitro immunization of murine B lymphocytes was established. In preliminary experiments, the optimal conditions for the antigen-specific activation of purified splenic B lymphocytes from non-immunized mice were determined. For this purpose, the influence of various stimuli on the production of unspecific and specific antibodies was investigated. A combination of the model antigen VP1 (hamster polyomavirus capsid protein 1), an anti-CD40 antibody, interleukin 4 (IL-4), and lipopolysaccharide (LPS) or IL-7 was shown to induce an antigen-specific antibody response in vitro. Rapid proliferation and the expression of characteristic activation markers on the cell surface were demonstrated as indicators of successful B-lymphocyte activation following in vitro stimulation. In a time series over ten days, the comparatively highest concentration of antigen-specific IgG antibodies was detected in the culture supernatant of the stimulated cells on day ten of the in vitro immunization.
As a next step, a permanent cell line was to be generated that could be used instead of primary B lymphocytes for the previously established in vitro immunization. For this purpose, retroviral vectors were produced that were intended to manipulate the proliferation behavior of murine B lymphocytes, or their precursor cells, by transferring various oncogenes. Retroviruses with doxycycline-inducible expression cassettes carrying the oncogenes c-myc, Bcl-2, and Bcl-xL and the fusion gene NUP98-HOXB4 were generated. A test cell line was successfully transduced with the produced retroviruses, and the functionality of the viruses was demonstrated in various assays. The transferred genes were detected in the test cell line at the DNA level, or the overexpression of the corresponding proteins was detected by Western blot. Finally, B lymphocytes, or their immature precursor cells, were transduced with the generated retroviruses and co-cultured with bone marrow-like stromal cells. So far, no cell line or long-term culture could be established from any of the transduced approaches.
Im letzten Teil der Arbeit wurde die Effektivität und Übertragbarkeit des zuvor etablierten Protokolls zur In vitro Immunisierung muriner B-Lymphozyten anhand verschiedener Antigene gezeigt. Es konnten in vitro spezifische IgG-Antworten gegen VP1, Legionella pneumophila und das Protein Mip, von dem ein Peptid in das zur Immunisierung eingesetzte VP1 integriert wurde, induziert werden. Die stimulierten B-Lymphozyten wurden durch Fusion mit Myelomzellen in permanente Antikörper-produzierende Zelllinien transformiert.
Dabei konnten mehrere Hybridomzelllinien generiert werden, die spezifische IgGAntikörper gegen VP1 oder Mip produzieren. Die generierten Antikörper konnten sowohl im Western Blot als auch im ELISA (Enzyme-Linked Immunosorbent Assay) das entsprechende Antigen spezifisch binden.
Die hier etablierte In vitro Immunisierung bietet eine effektive Alternative zu bisherigen Verfahren zur Herstellung spezifischer Antikörper. Sie ersetzt die Immunisierung von Versuchstieren und reduziert den Zeitaufwand erheblich. In Kombination mit der Hybridomtechnologie können die in vitro immunisierten Zellen, wie hier demonstriert, zur Generation von Hybridomzelllinien und zur Herstellung monoklonaler Antikörper genutzt werden. Um die Verwendung von Versuchstieren in dieser Methode durch eine adäquate permanente Zelllinie zu ersetzen, muss die genetische Veränderung von B-Lymphozyten und unreifen hämatopoetischen Zellen optimiert werden. Die Ergebnisse bieten eine Basis für eine universelle, Spezies-unabhängige Methodik zur Antikörperherstellung und für die
Etablierung einer idealen, tierfreien In vitro Immunisierung.
Flooding is a major problem in many parts of the world, including Europe. It occurs mainly due to extreme weather conditions (e.g. heavy rainfall and snowmelt), and the consequences of flood events can be devastating. Flood risk is commonly defined as a combination of the probability of an event and its potential adverse impacts. It therefore covers three major dynamic components: hazard (the physical characteristics of a flood event), exposure (the people and their physical environment exposed to flooding), and vulnerability (the susceptibility of the elements at risk). Floods are natural phenomena and cannot be fully prevented, but their risk can be managed and mitigated. Sound flood risk management and mitigation requires a proper risk assessment, which in turn requires a clear understanding of flood risk dynamics. For instance, human activity may contribute to an increase in flood risk: anthropogenic climate change causes higher rainfall intensities and sea level rise, and therefore an increase in the scale and frequency of flood events. Moreover, inappropriate risk management and structural protection measures may not be very effective for risk reduction, and risk increases as the number of assets and people in flood-prone areas grows. To address these issues, the first objective of this thesis is to perform a sensitivity analysis to understand the impact of changes in each flood risk component on overall risk and, further, their mutual interactions. A multitude of changes along the risk chain are simulated with a regional flood model (RFM) in which all processes from the atmosphere through the catchment and river system to the damage mechanisms are taken into consideration. The impacts of changes in the risk components are explored through plausible change scenarios for the mesoscale Mulde catchment (a sub-basin of the Elbe) in Germany.
A proper risk assessment requires a reasonable representation of real-world flood events. Traditionally, flood risk is assessed by assuming homogeneous return periods of flood peaks throughout the considered catchment. In reality, however, flood events are spatially heterogeneous, and the traditional assumption therefore misestimates flood risk, especially for large regions. In this thesis, two studies investigate the importance of spatial dependence in large-scale flood risk assessment at different spatial scales. In the first, the “real” spatial dependence of the return periods of flood damages is represented by a continuous risk modelling approach that includes spatially coherent patterns of hydrological and meteorological controls (i.e. soil moisture and weather patterns). The risk estimates under this modelled dependence assumption are then compared with those under two other assumptions on the spatial dependence of the return periods of flood damages: complete dependence (homogeneous return periods) and independence (randomly generated heterogeneous return periods), for the Elbe catchment in Germany. The second study represents the “real” spatial dependence by multivariate dependence models. Similar to the first study, the three assumptions on the spatial dependence of the return periods of flood damages are compared, but at national (United Kingdom and Germany) and continental (Europe) scales. Furthermore, the impacts of the different models, of tail dependence, and of the structural flood protection level on flood risk under the different spatial dependence assumptions are investigated.
The outcomes of the sensitivity analysis framework suggest that flood risk can vary dramatically under the possible change scenarios. Risk components that have received little attention (e.g. changes in dike systems and in vulnerability) may mask the influence of climate change, which is the most frequently investigated component.
The results of the spatial dependence research in this thesis further show that the damage under the false assumption of complete dependence is 100 % larger than the damage under the modelled dependence assumption for events with return periods greater than approximately 200 years in the Elbe catchment. The complete dependence assumption overestimates the 200-year flood damage, a benchmark indicator for the insurance industry, by 139 %, 188 % and 246 % for the UK, Germany and Europe, respectively. The misestimation of risk under the different assumptions can vary from upstream to downstream of a catchment. In addition, tail dependence in the model and the flood protection level in the catchments can affect the risk estimates and the differences between the spatial dependence assumptions.
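The effect of the spatial dependence assumption on aggregate damage can be illustrated with a small Monte Carlo sketch (the damage function, region count, and all numbers are illustrative assumptions, not values from the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_events = 10, 100_000

# Hypothetical regional damage curve: damage grows with the local
# return period T of the flood peak (purely illustrative).
def damage(T):
    return np.log1p(T)

# Complete dependence: one homogeneous return period for all regions.
u = rng.uniform(size=n_events)
T_dep = 1.0 / (1.0 - u)                       # T from the uniform quantile
total_dep = n_regions * damage(T_dep)

# Independence: every region draws its return period separately.
u_ind = rng.uniform(size=(n_events, n_regions))
total_ind = damage(1.0 / (1.0 - u_ind)).sum(axis=1)

# Complete dependence inflates the upper tail of the aggregate damage,
# mirroring the overestimation of the 200-year damage reported above.
q99_dep = np.quantile(total_dep, 0.99)
q99_ind = np.quantile(total_ind, 0.99)
```

Replacing the independent draws with a fitted multivariate dependence model (e.g. a copula over regional flood peaks) interpolates between these two extremes.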
In conclusion, the broader consideration of the risk components, which possibly affect the flood risk in a comprehensive way, and the consideration of the spatial dependence of flood return periods are strongly recommended for a better understanding of flood risk and consequently for a sound flood risk management and mitigation.
This paper examines attempts at implementing components of the concept called the Civilizational Hexagon as a pathway to civilizing conflict in Sub-Saharan Africa in the post-Cold War period. Despite a significant decline in violent conflict and substantial socio-economic progress in this period, most states in the region have faced challenges on their way to civilizing conflict, related to the absence of an inclusive political system, a weak state unable to monopolize the use of violence in its territory, and social injustice. On the other hand, states like Botswana and Mauritius managed to civilize conflict through significant improvements in democratic consolidation. Besides their relative success in implementing the six elements, these states were able to integrate traditional institutions with the modern state apparatus, which helped them to fill the gap created as a result of the exogenous state formation process and the resulting unfinished nation-building project. Additionally, traditional institutions contributed to managing diversity.
Proceedings of the HPI Research School on Service-oriented Systems Engineering 2020 Fall Retreat
(2021)
Design and implementation of service-oriented architectures raise a large number of research questions from the fields of software engineering, system analysis and modeling, adaptability, and application integration. Component orientation and web services are two approaches for the design and realization of complex web-based systems. Both approaches allow for dynamic application adaptation as well as the integration of enterprise applications.
Service-Oriented Systems Engineering represents a symbiosis of best practices in object-orientation, component-based development, distributed computing, and business process management. It provides integration of business and IT concerns.
The annual Ph.D. retreat of the research school provides each member the opportunity to present the current state of his or her research and to give an outline of a prospective Ph.D. thesis. Due to the interdisciplinary structure of the research school, this technical report covers a wide range of topics. These include, but are not limited to: Human Computer Interaction and Computer Vision as Service; Service-oriented Geovisualization Systems; Algorithm Engineering for Service-oriented Systems; Modeling and Verification of Self-adaptive Service-oriented Systems; Tools and Methods for Software Engineering in Service-oriented Systems; Security Engineering of Service-based IT Systems; Service-oriented Information Systems; Evolutionary Transition of Enterprise Applications to Service Orientation; Operating System Abstractions for Service-oriented Computing; and Services Specification, Composition, and Enactment.
The Anthropocene is the era of urbanization. The accelerating expansion of cities occurs at the expense of natural reservoirs of biodiversity and presents animals with challenges for which their evolutionary past might not have prepared them. Cognitive and behavioral adjustments to novelty could promote animals’ persistence under these altered conditions. We investigated the structure of, and covariance between, different aspects of responses to novelty in rural and urban small mammals of two non-commensal rodent species. We ran replicated experiments testing responses to three novelty types (object, food, or space) of 47 individual common voles (Microtus arvalis) and 41 individual striped field mice (Apodemus agrarius). We found partial support for the hypothesis that responses to novelty are structured, clustering (i) speed of responses, (ii) intensity of responses, and (iii) responses to food into separate dimensions. Rural and urban small mammals did not differ in most responses to novelty, suggesting that urban habitats do not reduce neophobia in these species. Further studies investigating whether comparable response patterns are found throughout different stages of colonization, and along synurbanization processes of different duration, will help illuminate the dynamics of animals’ cognitive adjustments to urban life.
Formal modeling and analysis are of crucial importance for software development processes following the model-based approach. We present the formalism of Interval Probabilistic Timed Graph Transformation Systems (IPTGTSs) as a high-level modeling language. This language supports structure dynamics (based on graph transformation), timed behavior (based on clocks, guards, resets, and invariants, as in Timed Automata (TA)), and interval probabilistic behavior (based on discrete interval probability distributions). That is, for the probabilistic behavior, the modeler using IPTGTSs does not need to provide precise probabilities, which are often impossible to obtain, but instead provides a probability range from which a precise probability is chosen nondeterministically. This way of capturing probabilistic behavior distinguishes IPTGTSs from the Probabilistic Timed Graph Transformation Systems (PTGTSs) presented earlier.
Following earlier work on Interval Probabilistic Timed Automata (IPTA) and PTGTSs, we also provide an analysis tool chain for IPTGTSs based on inter-formalism transformations. In particular, we provide in our tool AutoGraph a translation of IPTGTSs to IPTA and rely on a mapping of IPTA to Probabilistic Timed Automata (PTA) to allow for the use of the Prism model checker. Prism can then be used to analyze the resulting PTA with respect to probabilistic real-time queries asking for worst-case and best-case probabilities of reaching a given set of target states within a given amount of time.
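The notion of a discrete interval probability distribution can be made concrete with a small feasibility check (a hypothetical helper, not part of AutoGraph or Prism): interval bounds admit at least one precise distribution exactly when the lower bounds sum to at most 1 and the upper bounds sum to at least 1.

```python
def admits_distribution(intervals):
    """intervals: list of (lo, hi) probability bounds, one per outcome.
    Returns True iff some precise distribution p with lo_i <= p_i <= hi_i
    and sum(p) == 1 exists."""
    if any(not (0.0 <= lo <= hi <= 1.0) for lo, hi in intervals):
        return False
    lo_sum = sum(lo for lo, _ in intervals)
    hi_sum = sum(hi for _, hi in intervals)
    return lo_sum <= 1.0 <= hi_sum

# The modeler gives ranges; the semantics picks precise values within them:
feasible = admits_distribution([(0.2, 0.5), (0.3, 0.6), (0.1, 0.4)])  # True
infeasible = admits_distribution([(0.6, 0.7), (0.5, 0.8)])           # False
```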
We use ultrafast x-ray diffraction to investigate the effect of expansive phononic and contractive magnetic stress driving the picosecond strain response of a metallic perovskite SrRuO3 thin film upon femtosecond laser excitation. We exemplify how the anisotropic bulk equilibrium thermal expansion can be used to predict the response of the thin film to ultrafast deposition of energy. It is key to consider that the laterally homogeneous laser excitation changes the strain response compared to the near-equilibrium thermal expansion because the balanced in-plane stresses suppress the Poisson stress on the picosecond timescale. We find a very large negative Grüneisen constant describing the large contractive stress imposed by a small amount of energy in the spin system. The temperature and fluence dependence of the strain response for a double-pulse excitation scheme demonstrates the saturation of the magnetic stress in the high-fluence regime.
The incorporation of proteins into artificial materials such as membranes offers great opportunities to exploit the diverse qualities of proteins and enzymes perfected by nature over millions of years. One way to leverage proteins is modification with artificial polymers. To obtain such protein-polymer conjugates, either a polymer can be grown from the protein surface (grafting-from) or a pre-synthesized polymer can be attached to the protein (grafting-to). Both techniques were used in this thesis to synthesize conjugates of different proteins with thermo-responsive polymers.
First, conjugates were analyzed by protein NMR spectroscopy. Typical characterization techniques for conjugates can verify successful conjugation and give hints about the secondary structure of the protein. However, the three-dimensional structure, which is highly important for protein function, cannot be probed by standard techniques. NMR spectroscopy is a unique method that allows even small alterations in the protein structure to be followed. A mutant of the carbohydrate binding module 3b (CBM3bN126W) was used as a model protein and functionalized with poly(N-isopropylacrylamide). Analysis of conjugates prepared by grafting-to or grafting-from revealed a strong impact of the conjugation type on protein folding. Whereas grafting a pre-formed polymer to the protein completely preserved protein folding, grafting the polymer from the protein surface led to (partial) disruption of the protein structure.
Next, conjugates of bovine serum albumin (BSA), a cheap and easily accessible protein, were synthesized with PNIPAm and different oligo(ethylene glycol) (meth)acrylates. The obtained protein-polymer conjugates were analyzed by an in-line combination of size exclusion chromatography and multi-angle laser light scattering (SEC-MALS). This technique is particularly advantageous for determining molar masses, as no external calibration of the system is needed. Different SEC column materials and operating conditions were tested to evaluate the applicability of this system for determining absolute molar masses and hydrodynamic properties of heterogeneous conjugates prepared by grafting-from and grafting-to. Hydrophobic and non-covalent interactions of the conjugates led to error-prone values not in accordance with the molar masses expected from conversions and extents of modification.
As an alternative to this method, conjugates were analyzed by sedimentation velocity analytical ultracentrifugation (SV-AUC) to gain insights into their hydrodynamic properties and how these change after conjugation. Within a centrifugal field, a sample moves and fractionates according to the mass, density, and shape of its individual components. Conjugates of BSA with PNIPAm were analyzed below and above the cloud point temperature of the thermo-responsive polymer component. The polymer characteristics were found to be transferred to the conjugate molecule, which then showed decreased ideality (defined as an increased deviation from a perfect sphere model) below, and increased ideality above, the cloud point temperature. This effect can be attributed to the polymer chain pointing towards the solvent (expanded state) or wrapping around the protein surface, depending on the applied temperature.
The last project dealt with the synthesis of conjugates of the ferric hydroxamate uptake protein component A (FhuA) with polymers as building blocks for novel membrane materials. The shape of FhuA can be described as a barrel, and removal of a cork domain inside the protein yields a passive channel intended to serve as pores in the membrane system. The polymer matrix surrounding the membrane protein is composed of a thermo-responsive and a UV-crosslinkable part, thereby incorporating an external trigger for the covalent immobilization of these building blocks in the membrane and for the switchability of the membrane between different states. The overall performance of membranes prepared by a drying-mediated self-assembly approach was evaluated by permeability and size exclusion experiments. The obtained membranes displayed insufficient interchain crosslinking and therefore lacked performance. Furthermore, the intended switch between a hydrophilic and a hydrophobic state of the polymer matrix did not occur. Correspondingly, size exclusion experiments did not result in retention of analytes larger than the pores defined by the dimensions of the FhuA variant used.
Overall, different paths to generating protein-polymer conjugates by either grafting from or grafting to the protein surface were presented, paving the way to new hybrid materials. Different analytical methods were utilized to describe the folding and hydrodynamic properties of the conjugates, providing deeper insight into the overall characteristics of these promising building blocks.
Virtualizing physical space
(2021)
The true cost of virtual reality is not the hardware but the physical space it requires, as a one-to-one mapping of physical to virtual space allows for the most immersive way of navigating in virtual reality. Such “real-walking” requires the physical space to be of the same size and shape as the virtual world represented. This generally prevents real-walking applications from running in any space they were not designed for.
To reduce virtual reality’s demand for physical space, creators of such applications let users navigate virtual space by means of a treadmill, altered mappings of physical to virtual space, hand-held controllers, or gesture-based techniques. While all of these solutions succeed at reducing virtual reality’s demand for physical space, none of them reach the same level of immersion that real-walking provides.
Our approach is to virtualize physical space: instead of accessing physical space directly, we allow applications to express their need for space in an abstract way, which our software systems then map to the physical space available. We allow real-walking applications to run in spaces of different size, different shape, and in spaces containing different physical objects. We also allow users immersed in different virtual environments to share the same space.
Our systems achieve this by using a tracking-volume-independent representation of real-walking experiences: a graph structure that expresses the spatial and logical relationships between virtual locations, the virtual elements contained within those locations, and user interactions with those elements. When run in a specific physical space, this graph representation is used to define a custom mapping between the elements of the virtual reality application and the physical space by parsing the graph with a constraint solver. To re-use space, our system splits virtual scenes and overlaps virtual geometry. The system derives this split by hierarchically clustering the virtual objects, which form the nodes of a bipartite directed graph representing the logical ordering of events in the experience. We let applications express their demands for physical space and use pre-emptive scheduling between applications to let them share space. We present several application examples enabled by our system. They all enable real-walking, despite being mapped to physical spaces of different size and shape, containing different physical objects or other users.
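The mapping from abstract space demands to a concrete room can be sketched as a toy assignment problem (a greedy stand-in for the constraint solver; all names and areas are invented):

```python
def map_locations(locations, zones):
    """locations: {virtual location: required floor area in m^2};
    zones: {physical zone: available floor area in m^2}.
    Returns {location: zone} or None if some demand cannot be met.
    Zones may be re-used: locations that are never active at the same
    time can share one physical zone."""
    assignment = {}
    # Greedy heuristic: place the largest space demands first.
    for loc, need in sorted(locations.items(), key=lambda kv: -kv[1]):
        zone = next((z for z, cap in zones.items() if cap >= need), None)
        if zone is None:
            return None                 # no physical zone is big enough
        assignment[loc] = zone
    return assignment

plan = map_locations({"lobby": 9.0, "corridor": 4.0, "vault": 6.0},
                     {"living_room": 10.0, "hallway": 5.0})
```

A real solver would additionally respect room shape, contained physical objects, and the event ordering encoded in the graph.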
We see substantial real-world impact in our systems. Today’s commercial virtual reality applications are generally designed to be navigated using less immersive solutions, as this allows them to operate on any tracking volume. While this is a commercial necessity for developers, it misses out on the higher immersion offered by real-walking. We let developers overcome this hurdle by allowing experiences to bring real-walking to any tracking volume, thus potentially bringing real-walking to consumers.
-Christoph Markschies: Foreword
-Ulrich Päßler: Christian Gottfried Ehrenberg: Life Portraits of a Naturalist
-Mathias Grote: “Worlds Are Built from the Small” – Christian Gottfried Ehrenberg’s Ecological Microbiology avant la lettre
-Anne Greenwood MacKinney: Staging Naturalist Scholarship in Collecting: Christian Gottfried Ehrenberg’s and Wilhelm Hemprich’s North African Expedition (1820–1825)
-Ulrich Päßler: Travels in the Near East. Drawings
-Ulrich Päßler: Christian Gottfried Ehrenberg and Biogeography: The Russian-Siberian Journey with Alexander von Humboldt (1829)
-Ulrich Päßler: The Russian-Siberian Journey. Drawings
-Wolf-Henning Kusber, Regine Jahn: Christian Gottfried Ehrenberg’s Drawings: An Early Scientific Documentation of Microscopic Organisms
-Ferdinand Damaschun: Christian Gottfried Ehrenberg and the Development of Microscope Technology in the 19th Century
-Ulrich Päßler: The Journey into the Smallest Parts of Nature. Drawings
-Katrin Böhme: The Big Picture: Christian Gottfried Ehrenberg and the Gesellschaft Naturforschender Freunde zu Berlin
Infants show impressive speech decoding abilities and detect acoustic regularities that highlight the syntactic relations of a language, often coded via non-adjacent dependencies (NADs, e.g., is singing). It has been claimed that infants learn NADs implicitly and associatively through passive listening, and that there is a shift from effortless associative learning to a more controlled learning of NADs after the age of 2 years, potentially driven by the maturation of the prefrontal cortex. To investigate whether older children are able to learn NADs, Lammertink et al. (2019) recently developed a word-monitoring serial reaction time (SRT) task and showed that 6–11-year-old children learned the NADs, as their reaction times (RTs) increased when they were presented with violated NADs. In the current study we adapted their experimental paradigm and tested NAD learning in a younger group of 52 children between the ages of 4 and 8 years in a remote, web-based, game-like setting (whack-a-mole). Children were exposed to Italian phrases containing NADs and had to monitor the occurrence of a target syllable, which was the second element of the NAD. After exposure, children did a “Stem Completion” task in which they were presented with the first element of the NAD and had to choose the second element of the NAD to complete the stimulus. Our findings show that, despite large variability in the data, children aged 4–8 years are sensitive to NADs: they show the expected differences in RTs in the SRT task and could transfer the NAD rule in the Stem Completion task. We discuss these results with respect to the development of NAD learning in childhood and the practical impact and limitations of collecting such data in a web-based setting.
Reliably modelling the demographic and distributional responses of a species to environmental changes can be crucial for successful conservation and management planning. Process-based models have the potential to achieve this goal, but so far they remain underused for predictions of species' distributions. Individual-based models offer the additional capability to model inter-individual variation and evolutionary dynamics and thus capture adaptive responses to environmental change. We present RangeShiftR, an R implementation of a flexible individual-based modelling platform which simulates eco-evolutionary dynamics in a spatially explicit way. The package provides flexible and fast simulations by making the software RangeShifter available for the widely used statistical programming platform R. The package features additional auxiliary functions to support model specification and analysis of results. We provide an outline of the package's functionality, describe the underlying model structure with its main components and present a short example. RangeShiftR offers substantial model complexity, especially for the demographic and dispersal processes. It comes with elaborate tutorials and comprehensive documentation to facilitate learning the software and provide help at all levels. As the core code is implemented in C++, the computations are fast. The complete source code is published under a public licence, making adaptations and contributions feasible. The RangeShiftR package facilitates the application of individual-based and mechanistic modelling to eco-evolutionary questions by operating a flexible and powerful simulation model from R. It allows effortless interoperation with existing packages to create streamlined workflows that can include data preparation, integrated model specification and results analysis. Moreover, the implementation in R strengthens the potential for coupling RangeShiftR with other models.
While patients are known to respond differently to drug therapies, current clinical practice often still follows a standardized dosage regimen for all patients. For drugs with a narrow range of both effective and safe concentrations, this approach may lead to a high incidence of adverse events or subtherapeutic dosing in the presence of high patient variability. Model-informed precision dosing (MIPD) is a quantitative approach towards dose individualization based on mathematical modeling of dose-response relationships integrating therapeutic drug/biomarker monitoring (TDM) data. MIPD may considerably improve the efficacy and safety of many drug therapies. Current MIPD approaches, however, rely either on pre-calculated dosing tables or on simple point predictions of the therapy outcome. These approaches lack a quantification of uncertainties and the ability to account for delayed effects. In addition, the underlying models are not improved while being applied to patient data. Therefore, current approaches are not well suited for informed clinical decision-making based on a differentiated understanding of the individually predicted therapy outcome.
The objective of this thesis is to develop mathematical approaches for MIPD, which (i) provide efficient fully Bayesian forecasting of the individual therapy outcome including associated uncertainties, (ii) integrate Markov decision processes via reinforcement learning (RL) for a comprehensive decision framework for dose individualization, (iii) allow for continuous learning across patients and hospitals. Cytotoxic anticancer chemotherapy with its major dose-limiting toxicity, neutropenia, serves as a therapeutically relevant application example.
For more comprehensive therapy forecasting, we apply Bayesian data assimilation (DA) approaches, integrating patient-specific TDM data into mathematical models of chemotherapy-induced neutropenia that build on prior population analyses. The value of uncertainty quantification is demonstrated, as it allows reliable computation of patient-specific probabilities of relevant clinical quantities, e.g., the neutropenia grade. In view of novel home monitoring devices that increase the amount of available TDM data, the data processing of sequential DA methods proves to be more efficient and facilitates handling of the variability between dosing events.
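A minimal sequential DA step can be sketched as particle reweighting (the one-parameter model, priors, and units are invented for illustration; the thesis uses full pharmacokinetic/pharmacodynamic models):

```python
import numpy as np

rng = np.random.default_rng(1)

# Particles: candidate values of an individual clearance parameter,
# drawn from an assumed log-normal population prior (toy values).
n = 5000
clearance = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=n)  # L/h
weights = np.full(n, 1.0 / n)

def assimilate(weights, observed_conc, dose=100.0, sigma_obs=0.5):
    """Reweight particles by the likelihood of one TDM measurement
    under a toy steady-state model: conc = dose / clearance."""
    predicted = dose / clearance
    lik = np.exp(-0.5 * ((observed_conc - predicted) / sigma_obs) ** 2)
    w = weights * lik
    return w / w.sum()

weights = assimilate(weights, observed_conc=18.0)    # one TDM sample
posterior_mean = float(np.sum(weights * clearance))  # shifts towards 100/18
```

The weighted particle cloud directly yields patient-specific probabilities of clinical quantities, e.g. the probability that a predicted grade threshold is exceeded.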
By transferring concepts from DA and RL we develop novel approaches for MIPD. While DA-guided dosing integrates individualized uncertainties into dose selection, RL-guided dosing provides a framework to consider delayed effects of dose selections. The combined
DA-RL approach takes into account both aspects simultaneously and thus represents a holistic approach towards MIPD. Additionally, we show that RL can be used to gain insights into important patient characteristics for dose selection. The novel dosing strategies substantially reduce the occurrence of both subtherapeutic and life-threatening neutropenia grades in a simulation study based on a recent clinical study (CEPAC-TDM trial) compared to currently used MIPD approaches.
If MIPD is to be implemented in routine clinical practice, a certain model bias with respect to the underlying model is inevitable, as the models are typically based on data from comparably small clinical trials that reflect only to a limited extent the diversity in real-world patient populations. We propose a sequential hierarchical Bayesian inference framework that enables continuous cross-patient learning to learn the underlying model parameters of the target patient population. It is important to note that the approach only requires summary information of the individual patient data to update the model. This separation of the individual inference from population inference enables implementation across different centers of care.
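The separation of individual and population inference can be sketched with a conjugate normal-normal model in which each center contributes only a per-patient summary (mean and number of observations) rather than raw data; the model and numbers are purely illustrative stand-ins for the hierarchical framework:

```python
def update_population(prior_mean, prior_var, patient_summaries, obs_var=1.0):
    """Sequentially fold per-patient summary statistics (mean, n) into a
    normal-normal conjugate posterior for a population-level parameter.
    Only summaries cross the center boundary, never individual data."""
    mean, var = prior_mean, prior_var
    for patient_mean, n in patient_summaries:
        lik_var = obs_var / n  # variance of the patient-level summary
        post_var = 1.0 / (1.0 / var + 1.0 / lik_var)
        mean = post_var * (mean / var + patient_mean / lik_var)
        var = post_var
    return mean, var

# Each tuple is a summary from one patient's individual inference: (mean, n).
mean, var = update_population(0.0, 4.0, [(1.2, 5), (0.8, 3), (1.0, 8)])
```

Each summary shrinks the population-level uncertainty, so the posterior variance decreases monotonically as patients accrue across centers.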
The proposed approaches substantially improve current MIPD approaches, taking into account new trends in health care and aspects of practical applicability. They enable progress towards more informed clinical decision-making, ultimately increasing patient benefits beyond the current practice.
Spatiotemporal variations of key air pollutants and greenhouse gases in the Himalayan foothills
(2021)
South Asia is a rapidly developing, densely populated and highly polluted region that is facing the impacts of increasing air pollution and climate change, and yet it remains one of the least studied regions of the world scientifically. In recognition of this situation, this thesis focuses on studying (i) the spatial and temporal variation of key greenhouse gases (CO2 and CH4) and air pollutants (CO and O3) and (ii) the vertical distribution of air pollutants (PM, BC) in the foothills of the Himalaya. Five sites were selected in the Kathmandu Valley, the capital region of Nepal, along with two sites outside of the valley in the Makawanpur and Kaski districts, and measurements were conducted during 2013-2014 and 2016. These measurements are analyzed in this thesis.
The CO measurements at multiple sites in the Kathmandu Valley showed a clear diurnal cycle: morning and evening levels were high, with an afternoon dip. There are slight differences in the diurnal cycles of CO2 and CH4, with the CO2 and CH4 mixing ratios increasing after the afternoon dip until the morning peak the next day. The mixing layer height (MLH) of the nocturnal stable layer is relatively constant (~ 200 m) during the night, after which it transitions to a convective mixing layer during the day and the MLH increases up to 1200 m in the afternoon. Pollutants are thus largely trapped in the valley from the evening until sunrise the following day, and the concentration of pollutants increases due to emissions during the night. During the afternoon, the pollutants are diluted by the circulation of the valley winds after the break-up of the mixing layer. The major emission sources of GHGs and air pollutants in the valley are the transport sector, residential cooking, brick kilns, trash burning, and agro-residue burning. Brick industries are influential in the winter and pre-monsoon seasons. The contributions of regional forest fires and agro-residue burning are seen during the pre-monsoon season. In addition, relatively higher CO values were also observed at the valley outskirts (Bhimdhunga and Naikhandi), which indicates the contribution of regional emission sources. This was also supported by the presence of higher concentrations of O3 during the pre-monsoon season.
The mixing ratios of CO2 (419.3 ± 6.0 ppm) and CH4 (2.192 ± 0.066 ppm) in the valley were much higher than at background sites, including the Mauna Loa observatory (CO2: 396.8 ± 2.0 ppm, CH4: 1.831 ± 0.110 ppm) and Waliguan (CO2: 397.7 ± 3.6 ppm, CH4: 1.879 ± 0.009 ppm), China, as well as at the urban site Shadnagar (CH4: 1.92 ± 0.07 ppm) in India.
The daily maximum 8-hour O3 average in the Kathmandu Valley exceeds the WHO recommended value on more than 80% of days during the pre-monsoon period, which represents a significant risk for human health and ecosystems in the region. Moreover, in the measurements of the vertical distribution of particulate matter, which were made using an ultralight aircraft and are the first of their kind in the region, an elevated polluted layer at ca. 3000 m a.s.l. was detected over the Pokhara Valley. The layer could be associated with large-scale regional transport of pollution. These contributions towards understanding the distributions of key air pollutants and their main sources will provide helpful information for developing management plans and policies to help reduce the risks for the millions of people living in the region.
The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes located in the Khomas Highland of Namibia. H.E.S.S. operates in a wide energy range from several tens of GeV to several tens of TeV, reaching its best sensitivity around 1 TeV and below. However, there are many important topics – such as the search for Galactic PeVatrons, the study of gamma-ray production scenarios for sources (hadronic vs. leptonic), and EBL absorption studies – which require good sensitivity at energies above 10 TeV. This work aims at improving the sensitivity of H.E.S.S. and increasing the gamma-ray statistics at high energies. The study investigates an enlargement of the effective H.E.S.S. field of view by using events with larger offset angles in the analysis. The greatest challenges in the analysis of large-offset events are a degradation of the reconstruction accuracy and a rise of the background rate as the offset angle increases. A more sophisticated direction reconstruction method (DISP) and improvements to the standard background rejection technique, which by themselves are effective ways to increase the gamma-ray statistics and improve the sensitivity of the analysis, are implemented to overcome these issues. As a result, the angular resolution at the preselection level is improved by 5 - 10% for events at 0.5◦ offset angle and by 20 - 30% for events at 2◦ offset angle. The background rate at large offset angles is decreased nearly to a level typical for offset angles below 2.5◦. Thereby, sensitivity improvements of 10 - 20% are achieved for the proposed analysis compared to the standard analysis at small offset angles. The developed analysis also allows for the usage of events at offset angles up to approximately 4◦, which was not possible before. This analysis method is applied to the analysis of the Galactic plane data above 10 TeV. As a result, 40 of the 78 sources presented in the H.E.S.S.
Galactic plane survey (HGPS) are detected above 10 TeV. Among them are representatives of all source classes present in the HGPS catalogue, namely binary systems, supernova remnants, pulsar wind nebulae, and composite objects. The potential of the improved analysis method is demonstrated by investigating the emission above 10 TeV for two objects: the region associated with the shell-type SNR HESS J1731−347 and the PWN candidate associated with PSR J0855−4644 that is coincident with Vela Junior (HESS J0852−463).
In this paper, we study the effect of exogenous global crop price changes on migration from agricultural and non-agricultural households in Sub-Saharan Africa. We show that, similar to the effect of positive local weather shocks, the effect of a locally relevant global crop price increase on household out-migration depends on initial household wealth. Higher international producer prices relax the budget constraint of poor agricultural households and facilitate migration. The magnitude of a standardized price effect is approximately one third of the standardized effect of a local weather shock. Unlike positive weather shocks, which mostly facilitate internal rural-urban migration, positive income shocks through rising producer prices only increase migration to neighboring African countries, likely due to the simultaneous decrease in real income in nearby urban areas. Finally, we show that while higher producer prices induce conflict, conflict does not play a role in the household decision to send a member as a labor migrant.
It is usually a homogeneous understanding of the school system, one that seemingly leads to the success of schooling and education, that still sits deep in the minds of many educators, teachers, parents, and education policymakers. That the heterogeneity which actually exists, and which teachers and educators encounter every day, is perceived as problematic and conflict-laden is an experience I have made frequently through numerous personal and professional exchanges in teacher training courses as well as in parent work and school committees. Where diversity and differences are seen and felt, discrimination cannot be ruled out.
But what explains this negative attitude towards, and handling of, diversity? Which problem areas can be identified in the engagement with diversity at schools in Berlin? Can diversity and anti-discrimination concepts bring about something positive in the professional handling of diversity at schools? Where do the opportunities in implementing such concepts lie? Which obstacles stand in the way? Can diversity be seen as a hallmark of a school?
As a researcher, it is essential for me to pursue the questions that remain open with regard to the problem-oriented view of diversity at schools, and likewise to present, within the scope of this thesis, the diverse perspectives of experts on this topic in order to explore the opportunities and challenges surrounding the above questions in the school context. This master's thesis can also provide impulses for reforming school development work as well as impetus for further research on diversity processes at our schools. In the following chapter, I address some key points that lend relevance to this research.
Computer-based training of social cognition shows good short-term learning gains in people with autism. However, training drop-outs due to insufficient motivation are repeatedly observed, which impairs long-term effects in everyday life. Both gamified and adaptive approaches can help here. Using selected examples, this contribution shows how such training systems can be designed for the field of emotion recognition and for the particular target group of people with autism. Finally, the experiences gathered in the processes of constructing and using these systems are reflected upon.
Modern knowledge bases contain and organize knowledge from many different topic areas. Apart from specific entity information, they also store information about the relationships among those entities. Combining this information results in a knowledge graph that can be particularly helpful in cases where relationships are of central importance. Among other applications, modern risk assessment in the financial sector can benefit from the inherent network structure of such knowledge graphs by assessing the consequences and risks of certain events, such as corporate insolvencies or fraudulent behavior, based on the underlying network structure. As public knowledge bases often do not contain the necessary information for the analysis of such scenarios, the need arises to create and maintain dedicated domain-specific knowledge bases.
This thesis investigates the process of creating domain-specific knowledge bases from structured and unstructured data sources. In particular, it addresses the topics of named entity recognition (NER), duplicate detection, and knowledge validation, which represent essential steps in the construction of knowledge bases.
As such, we present a novel method for duplicate detection based on a Siamese neural network that is able to learn a dataset-specific similarity measure which is used to identify duplicates. Using the specialized network architecture, we design and implement a knowledge transfer between two deduplication networks, which leads to significant performance improvements and a reduction of required training data.
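The core Siamese idea — both records pass through the same embedding function and are compared by a distance — can be sketched in a few lines of NumPy; the bag-of-characters featurization, the untrained weight matrix, and the similarity mapping below are illustrative placeholders, not the thesis's architecture:

```python
import numpy as np

def embed(text, W):
    """Shared 'tower': both records pass through the same parameters W."""
    # Bag-of-characters vector over a-z (toy featurization).
    v = np.zeros(26)
    for ch in text.lower():
        if 'a' <= ch <= 'z':
            v[ord(ch) - ord('a')] += 1
    return np.tanh(W @ v)

def similarity(a, b, W):
    """Siamese score: distance between the two shared embeddings,
    mapped to (0, 1] — higher means more likely a duplicate."""
    d = np.linalg.norm(embed(a, W) - embed(b, W))
    return float(np.exp(-d))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 26))  # untrained weights, illustration only

s_dup = similarity("Acme Corp", "ACME Corp.", W)
s_diff = similarity("Acme Corp", "Zebra Logistics", W)
```

In a real deduplication network, W would be trained, e.g. with a contrastive loss on labeled duplicate and non-duplicate pairs, and the knowledge transfer described above then amounts, roughly, to initializing one network's weights from another's.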
Furthermore, we propose a named entity recognition approach that is able to identify company names by integrating external knowledge in the form of dictionaries into the training process of a conditional random field classifier. In this context, we study the effects of different dictionaries on the performance of the NER classifier. We show that both the inclusion of domain knowledge as well as the generation and use of alias names results in significant performance improvements.
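Dictionary knowledge typically enters a CRF as additional token-level features; the sketch below, with an invented toy gazetteer and invented feature names, shows the general shape of such feature functions as used with common CRF toolkits:

```python
COMPANY_DICT = {"siemens", "basf", "acme corp"}  # toy gazetteer
ALIASES = {"acme"}                                # generated alias names

def token_features(tokens, i):
    """Feature dict for token i, including dictionary-lookup features
    that inject external knowledge into CRF training."""
    tok = tokens[i]
    feats = {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "in_dict": tok.lower() in COMPANY_DICT,
        "in_alias_dict": tok.lower() in ALIASES,
    }
    # A bigram lookup lets multi-word names like "acme corp" match.
    if i + 1 < len(tokens):
        feats["bigram_in_dict"] = f"{tok} {tokens[i + 1]}".lower() in COMPANY_DICT
    return feats

sent = ["Acme", "Corp", "sued", "BASF", "yesterday"]
feats = [token_features(sent, i) for i in range(len(sent))]
```

Swapping in different gazetteers or alias lists changes only the feature dictionaries, which is what makes it straightforward to study their individual effect on classifier performance.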
For the validation of knowledge represented in a knowledge base, we introduce Colt, a framework for knowledge validation based on the interactive quality assessment of logical rules. In its most expressive implementation, we combine Gaussian processes with neural networks to create Colt-GP, an interactive algorithm for learning rule models. Unlike other approaches, Colt-GP uses knowledge graph embeddings and user feedback to cope with data quality issues of knowledge bases. The learned rule model can be used to conditionally apply a rule and assess its quality.
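The interactive loop behind such rule assessment — collect user verdicts on rule instantiations, refine a quality estimate, and apply the rule only when its estimated quality is high — can be caricatured with a simple Beta-Bernoulli belief standing in for the Gaussian-process model; everything here is an illustrative stand-in, not Colt's actual algorithm:

```python
def rule_quality(alpha, beta):
    """Posterior mean quality of a rule under a Beta(alpha, beta) belief."""
    return alpha / (alpha + beta)

def incorporate_feedback(alpha, beta, verdicts):
    """Fold user verdicts (True = rule instantiation judged correct)
    into the Beta belief about the rule's quality."""
    for ok in verdicts:
        if ok:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

# Start from an uninformative belief, then assess a candidate rule.
alpha, beta = 1.0, 1.0
alpha, beta = incorporate_feedback(alpha, beta, [True, True, False, True])
q = rule_quality(alpha, beta)  # apply the rule only if q exceeds a threshold
```

The conditional-application idea above corresponds to thresholding q: a rule with mixed feedback is applied only where the learned model deems it reliable.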
Finally, we present CurEx, a prototypical system for building domain-specific knowledge bases from structured and unstructured data sources. Its modular design is based on scalable technologies, which, in addition to processing large datasets, ensures that the modules can be easily exchanged or extended. CurEx offers multiple user interfaces, each tailored to the individual needs of a specific user group and is fully compatible with the Colt framework, which can be used as part of the system.
We conduct a wide range of experiments with different datasets to determine the strengths and weaknesses of the proposed methods. To ensure the validity of our results, we compare the proposed methods with competing approaches.
Natural products have proved to be a major resource in the discovery and development of many pharmaceuticals that are in use today. There is a wide variety of biologically active natural products that contain conjugated polyenes or benzofuran structures. Therefore, new synthetic methods for the construction of such building blocks are of great interest to synthetic chemists. The recently developed one-pot tethered ring-closing metathesis approach allows for the formation of Z,E-dienoates in high stereoselectivity. The extension of this method with a Julia-Kocienski olefination protocol would allow for the formation of conjugated trienes in a stereoselective manner. This strategy was applied in the total synthesis of conjugated triene containing (+)-bretonin B. Additionally, investigations of cross metathesis using methyl substituted olefins were pursued. This methodology was applied, as a one-pot cross metathesis/ring-closing metathesis sequence, in the total synthesis of benzofuran containing 7-methoxywutaifuranal. Finally, the design and synthesis of a catalyst for stereoretentive metathesis in aqueous media was investigated.
Transitory starch plays a central role in the life cycle of plants. Many aspects of this important metabolism remain unknown; however, starch granules provide insight into this persistent metabolic process. Therefore, monitoring alterations in starch granules with high temporal resolution provides one significant avenue to improve understanding. Here, a previously established method that combines LCSM and safranin-O staining for in vivo imaging of transitory starch granules in leaves of Arabidopsis thaliana was employed to demonstrate, for the first time, the alterations in starch granule size and morphology that occur both throughout the day and during leaf aging. Several starch-related mutants were included, which revealed differences among the generated granules. In ptst2 and sex1-8, the starch granules in old leaves were much larger than those in young leaves; however, the typical flattened discoid morphology was maintained. In ss4 and dpe2/phs1/ss4, the morphology of starch granules in young leaves was altered, with a more rounded shape observed. With leaf development, the starch granules became spherical exclusively in dpe2/phs1/ss4. Thus, the presented data provide new insights to contribute to the understanding of starch granule morphogenesis.
Transitory starch granules result from complex carbon turnover and reflect the momentary state of starch synthesis and degradation. The fundamental mechanisms that specify starch granule characteristics, such as granule size, morphology, and the number per chloroplast, are largely unknown. Transitory starch is found in various cell types of the leaves of Arabidopsis thaliana, but comparative analyses across cell types are lacking. Here, we adopted a fast method of laser confocal scanning microscopy to analyze the starch granules in a series of Arabidopsis mutants with altered starch metabolism. This allowed us to separately analyze the starch particles in the mesophyll and in guard cells. In all mutants, the guard cells were always found to contain more but smaller plastidial starch granules than mesophyll cells. The morphological properties of the starch granules, however, were indistinguishable between the two types of leaf cells.
Background: Chronic ankle instability, developing from ankle sprain, is one of the most common sports injuries. Beyond the ankle itself, chronic ankle instability can also lead to additional injuries. Investigating the epidemiology of chronic ankle instability is an essential step towards developing an adequate injury prevention strategy. However, the epidemiology of chronic ankle instability remains unknown. Therefore, the purpose of this study was to investigate the epidemiology of chronic ankle instability through valid and reliable self-reported tools in active populations.
Methods: An electronic search was performed on PubMed and Web of Science in July 2020. The inclusion criteria for articles were peer-reviewed, published between 2006 and 2020, using one of the valid and reliable tools to evaluate ankle instability, determining chronic ankle instability based on the criteria of the International Ankle
Consortium, and including the outcome of epidemiology of chronic ankle instability. The risk of bias of the included studies was evaluated with an adapted tool for the sports injury review method.
Results: After removing duplicated studies, 593 articles were screened for eligibility. Twenty full-texts were screened and finally nine studies were included, assessing 3804 participants in total. The participants were between 15 and 32 years old and represented soldiers, students, athletes and active individuals with a history of ankle sprain. The prevalence of chronic ankle instability was 25%, ranging between 7 and 53%. The prevalence of chronic ankle instability within participants with a history of ankle sprains was 46%, ranging between 9 and 76%. Five included studies identified chronic ankle instability based on the standard criteria, and four studies applied adapted exclusion criteria to conduct the study. Five out of nine included studies showed a low risk of bias.
Conclusions: The prevalence of chronic ankle instability shows a wide range. This could be due to the different exclusion criteria, age, sports discipline, or other factors among the included studies. For future studies, standardized criteria to investigate the epidemiology of chronic ankle instability are required. Future epidemiological studies of chronic ankle instability should be prospective. Factors affecting the prevalence of chronic ankle instability should be investigated and clearly described.
The survey of the prevalence of chronic ankle instability in elite Taiwanese basketball athletes
(2021)
BACKGROUND: Ankle sprains are common in basketball. They can develop into Chronic Ankle Instability (CAI), causing decreased quality of life, impaired functional performance, early osteoarthritis, and an increased risk of other injuries. To develop a strategy for CAI prevention, localized epidemiological data and a valid and reliable assessment tool are essential. However, the epidemiological data on CAI from previous studies are inconclusive, and the prevalence of CAI in Taiwanese basketball athletes is not clear. In addition, a valid and reliable Taiwan-Chinese instrument to evaluate ankle instability is missing.
PURPOSE: The aims were to provide an overview of the prevalence of CAI in sports populations through a systematic review, to develop a valid and reliable cross-culturally adapted Cumberland Ankle Instability Tool (CAIT) in Taiwan-Chinese (CAIT-TW), and to survey the prevalence of CAI in elite basketball athletes in Taiwan using the CAIT-TW.
METHODS: First, a systematic search was conducted. Research articles applying CAI-related questionnaires to survey the prevalence of CAI were included in the review. Second, the English version of the CAIT was translated and cross-culturally adapted into the CAIT-TW. The construct validity, test-retest reliability, internal consistency, and cutoff score of the CAIT-TW were evaluated in an athletic population (N=135). Finally, the cross-sectional data on CAI prevalence in 388 elite Taiwanese basketball athletes were presented. Demographics, presence of CAI, and differences in prevalence between genders, competitive levels, and playing positions were evaluated.
RESULTS: The prevalence of CAI was 25%, ranging between 7% and 53%. The prevalence of CAI among participants with a history of ankle sprains was 46%, ranging between 9% and 76%. In addition, the cross-culturally adapted CAIT-TW showed moderate to strong construct validity, excellent test-retest reliability, good internal consistency, and a cutoff score of 21.5 for the Taiwanese athletic population. Finally, 26% of Taiwanese basketball athletes had unilateral CAI while 50% of them had bilateral CAI. In addition, women athletes in the investigated cohort had a higher prevalence of CAI than men. There was no difference in prevalence between competitive levels or among playing positions.
CONCLUSION: The systematic review shows that the prevalence of CAI has a wide range among the included studies. This could be due to the different exclusion criteria, age, sports discipline, or other factors among the included studies. For future studies, standardized criteria to investigate the epidemiology of CAI are required. CAI epidemiological studies should be prospective. Factors affecting the prevalence of CAI should be investigated and described. The translated CAIT-TW is a valid and reliable tool to differentiate between stable and unstable ankles in athletes and may further be applied in research or daily practice in Taiwan. In the Taiwanese basketball population, CAI is highly prevalent. This might relate to the research method, preexisting ankle instability, and training-related issues. Women showed a higher prevalence of CAI than men. When applying preventive measures, gender should be taken into consideration.