The large literature that aims to find evidence of climate migration delivers mixed findings. This meta-regression analysis i) summarizes direct links between adverse climatic events and migration, ii) maps patterns of climate migration, and iii) explains the variation in outcomes. Using a set of limited dependent variable models, we meta-analyze the most comprehensive sample to date of 3,625 estimates from 116 original studies and produce novel insights on climate migration. We find that extremely high temperatures and drying conditions increase migration. We do not find a significant effect of sudden-onset events. Climate migration is most likely to emerge due to contemporaneous events, to originate in rural areas, and to take place in middle-income countries, internally, to cities. The likelihood of becoming trapped in affected areas is higher for women and in low-income countries, particularly in Africa. We uniquely quantify how pitfalls typical of the broader empirical climate impact literature affect climate migration findings. We also find evidence of different publication biases.
The increasing development of antibiotic resistance in bacteria has been a major problem for years, both in human and veterinary medicine. Prophylactic measures, such as the use of vaccines, are of great importance in reducing the use of antibiotics in livestock. These vaccines are mainly produced based on formaldehyde inactivation. However, formaldehyde damages the recognition elements of the bacterial proteins and could thus reduce the immune response in the animal. An alternative inactivation method developed in this work is based on gentle photodynamic inactivation using carbon nanodots (CNDs) at excitation wavelengths λex > 290 nm. The photodynamic inactivation was characterized on the nonvirulent laboratory strain Escherichia coli K12 using synthesized CNDs. For a gentle inactivation, the CNDs must be absorbed into the cytoplasm of the E. coli cell. Thus, the inactivation through photoinduced formation of reactive oxygen species only takes place inside the bacterium, which means that the outer membrane is neither damaged nor altered. The loading of the CNDs into E. coli was examined using fluorescence microscopy. Complete loading of the bacterial cells could be achieved in less than 10 min. These studies revealed a reversible uptake process allowing the recovery and reuse of the CNDs after irradiation and before the administration of the vaccine. The success of photodynamic inactivation was verified by viability assays on agar. In a custom-built flow photoreactor, successful inactivation was achieved with irradiation times as short as 34 s, demonstrating that CND-based photodynamic inactivation is highly effective. The membrane integrity of the bacteria after irradiation was verified by slide agglutination and atomic force microscopy. The method developed for the laboratory strain E. coli K12 could then be successfully applied to the important avian pathogens Bordetella avium and Ornithobacterium rhinotracheale to aid the development of novel vaccines.
Models are useful tools for understanding and predicting ecological patterns and processes. Under ongoing climate and biodiversity change, they can greatly facilitate decision-making in conservation and restoration and help design adequate management strategies for an uncertain future. Here, we review the use of spatially explicit models for decision support and identify key gaps in current modelling in conservation and restoration. Of 650 reviewed publications, 217 publications had a clear management application and were included in our quantitative analyses. Overall, modelling studies were biased towards static models (79%), towards the species and population level (80%) and towards conservation (rather than restoration) applications (71%). Correlative niche models were the most widely used model type. Dynamic models as well as the gene-to-individual level and the community-to-ecosystem level were underrepresented, and explicit cost optimisation approaches were only used in 10% of the studies. We present a new model typology for selecting models for animal conservation and restoration, characterising model types according to organisational levels, biological processes of interest and desired management applications. This typology will help to more closely link models to management goals. Additionally, future efforts need to overcome important challenges related to data integration, model integration and decision-making. We conclude with five key recommendations, suggesting that wider usage of spatially explicit models for decision support can be achieved by 1) developing a toolbox with multiple, easier-to-use methods, 2) improving calibration and validation of dynamic modelling approaches and 3) developing best-practice guidelines for applying these models. Further, more robust decision-making can be achieved by 4) combining multiple modelling approaches to assess uncertainty, and 5) placing models at the core of adaptive management. These efforts must be accompanied by long-term funding for modelling and monitoring, and improved communication between research and practice to ensure optimal conservation and restoration outcomes.
Portal Wissen = Departure
(2021)
On October 20, 1911, the Norwegian Roald Amundsen left the safe base camp “Framheim” at the Bay of Whales together with four other explorers and 52 sledge dogs to be the first person to reach the South Pole. Ahead of them lay the perpetual ice at temperatures of 20 to 30 degrees Celsius below zero and a distance of 1,400 kilometers. After eight weeks, the group reached its destination on December 14. The men planted the Norwegian flag in the lonely snow and shortly afterwards set off to make their way back – celebrated, honored as conquerors of the South Pole and laden with information and knowledge from the world of Antarctica. The voyage of Amundsen and his companions is undoubtedly so extraordinary because the five proved that it was possible and were the first to succeed. It is, however, also a symbol of what enables humans to push the boundaries of their world: the urge to set out into the unknown, to discover what has not yet been found, explored, and described.
What distinguishes science – even before any discovery and new knowledge – is the element of departure. Questioning apparent certainties, taking a critical look at outdated knowledge, and breaking down encrusted thought patterns is the starting point of exploratory curiosity. And to set out from there for new knowledge is the essence of scientific activities – neither protected nor supported by the reliable and known. Probing, trying, courageously questioning, and sensing that the solid ground, which still lies hidden, can only be reached again in this way. “Research is always a departure for new shoreless waters,” said chemist Prof. Dr. Hans-Jürgen Quadbeck-Seeger. Leaving behind the safe harbor, trusting that new shores are waiting and can be reached is the impetus that makes science so important and valuable.
For the current issue of the University of Potsdam’s research magazine, we looked over the shoulders of some researchers as they set out on new research journeys – whether in the lab, in the library, in space, or in the mind. Astrophysicist Lidia Oskinova, for example, uses the Hubble telescope to search for particularly massive stars, while hydrologist Thorsten Wagener is trying to better understand the paths of water on Earth. Economists and social scientists such as Elmar Kriegler and Maik Heinemann are researching in different projects what politics can do to achieve a turnaround in climate policy and stop climate change.
Time and again, however, such departures are themselves the focus of research: And a group of biologists and environmental scientists is investigating how nature revives forest fire areas and how the newly emerging forests can become more resilient to future fires.
Since – as has already been said – a departure is inherent in every research question, this time the entire issue of “Portal Wissen” is actually devoted to the cover topic. And so we invite you to set out with Romance linguist Annette Gerstenberg to research language in old age, with immunologist Katja Hanack to develop a quick and safe SARS-CoV-2 test, and with the team of the Potsdam Center for Industry 4.0 to the virtual factory of tomorrow. And we will show you how evidence-based economic research can inform and advise politicians, and how a warning system is intended to prevent future accidents involving cyclists.
So, what are you waiting for?!
Noise is ubiquitous in nature and usually results in rich dynamics in stochastic systems such as oscillatory systems, which arise in fields as diverse as physics, biology, and complex networks. The correlation and synchronization of two or many oscillators have been widely studied topics in recent years.
In this thesis, we mainly investigate two problems, i.e., the stochastic bursting phenomenon in noisy excitable systems and synchronization in a three-dimensional Kuramoto model with noise. Stochastic bursting here refers to a coherent sequence of spikes in which each spike has a random number of followers due to the combined effects of time delay and noise. Synchronization, as a universal phenomenon in nonlinear dynamical systems, is well illustrated in the Kuramoto model, a prominent model in the description of collective motion.
In the first part of this thesis, an idealized point process, valid if the characteristic timescales in the problem are well separated, is used to describe statistical properties such as the power spectral density and the interspike interval distribution. We show how the main parameters of the point process – the spontaneous excitation rate and the probability to induce a spike during the delay action – can be calculated from the solutions of a stationary and a forced Fokker-Planck equation. We extend the approach to the delay-coupled case and derive analytically the statistics of the spikes in each neuron, the pairwise correlations between any two neurons, and the spectrum of the total output from the network.
In the second part, we investigate the three-dimensional noisy Kuramoto model, which can be used to describe synchronization in a swarming model with helical trajectories. In the case without natural frequency, the Kuramoto model can be connected with the Vicsek model, which is widely studied in collective motion and swarming of active matter. We analyze the linear stability of the incoherent state and derive the critical coupling strength above which the incoherent state loses stability. In the limit of no natural frequency, an exact self-consistent equation for the mean field is derived and extended straightforwardly to higher-dimensional cases.
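For orientation, the synchronization transition described above can be illustrated with a minimal simulation of the standard (one-dimensional) noisy Kuramoto model; the thesis itself studies a three-dimensional generalisation, and all parameter values below are illustrative.

```python
# Euler-Maruyama simulation of the noisy Kuramoto model in mean-field form.
import numpy as np

rng = np.random.default_rng(0)
N, K, D = 200, 4.0, 0.1            # oscillators, coupling strength, noise intensity
dt, steps = 0.01, 2000
omega = rng.normal(0.0, 0.5, N)    # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)  # random initial phases

for _ in range(steps):
    # Mean field: r e^{i psi} = (1/N) sum_j e^{i theta_j}
    z = np.mean(np.exp(1j * theta))
    r, psi = np.abs(z), np.angle(z)
    drift = omega + K * r * np.sin(psi - theta)
    theta += drift * dt + np.sqrt(2 * D * dt) * rng.normal(size=N)

r_final = np.abs(np.mean(np.exp(1j * theta)))
print(f"order parameter r = {r_final:.2f}")
```

With the coupling K chosen well above the critical value, the order parameter r settles at a large value, signalling a coherent state; reducing K below the critical coupling would leave r near zero (incoherence), which is the instability threshold the thesis derives analytically.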
Reciprocal space slicing
(2021)
An experimental technique that allows faster assessment of out-of-plane strain dynamics of thin film heterostructures via x-ray diffraction is presented. In contrast to conventional high-speed reciprocal space-mapping setups, our approach reduces the measurement time drastically due to a fixed measurement geometry with a position-sensitive detector. This means that neither the incident (ω) nor the exit (2θ) diffraction angle is scanned during the strain assessment via x-ray diffraction. Shifts of diffraction peaks on the fixed x-ray area detector originate from an out-of-plane strain within the sample. Quantitative strain assessment requires the determination of a factor relating the observed shift to the change in the reciprocal lattice vector. The factor depends only on the widths of the peak along certain directions in reciprocal space, the diffraction angle of the studied reflection, and the resolution of the instrumental setup. We provide a full theoretical explanation and exemplify the concept with picosecond strain dynamics of a thin layer of NbO2.
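A rough sense of the strain quantification can be had by differentiating Bragg's law at fixed wavelength, which relates a peak shift to a lattice change via Δd/d = -cotθ · Δθ. This is a textbook relation, not the full conversion factor of the paper (which also involves peak widths and instrumental resolution), and the numbers below are hypothetical rather than from the NbO2 experiment.

```python
# Back-of-the-envelope out-of-plane strain from a Bragg peak shift.
import numpy as np

two_theta = np.radians(37.0)      # hypothetical diffraction angle 2θ
d_two_theta = np.radians(-0.02)   # hypothetical peak shift Δ(2θ) after excitation

theta = two_theta / 2
# Differentiating Bragg's law (λ = 2 d sinθ) at fixed λ gives Δd/d = -cotθ · Δθ
strain = -(1 / np.tan(theta)) * (d_two_theta / 2)
print(f"out-of-plane strain: {strain:.2e}")
```

A shift of the peak to smaller angles corresponds to a positive out-of-plane strain (lattice expansion), which is the sign convention used above.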
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood--Salsburg equations, the Dobrushin contraction principle and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed-point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which such a problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
When it comes to teacher attitudes towards teaching and learning, research relies heavily on explicit measures (e.g., questionnaires). These attitudes are generally conceptualized as constructivist and transmissive views on teaching and learning, with constructivism often considered to be more desirable. In explicit measures, this can have drawbacks like socially desirable responding. It is for this reason that, in this study, we investigated implicit attitudes as well as explicit attitudes towards constructivism and transmission. N = 100 preservice teachers worked on a questionnaire and two Single-Target Implicit Association Tests (ST-IAT constructivism and ST-IAT transmission) before (T1) and after (T2) a single master’s semester. One group (n = 50) did student teaching while a second group (n = 50) took master’s courses. We evaluated preservice teachers’ views on teaching at the end of their master’s studies. Participants agreed with transmission and constructivism (T1) on both an explicit and implicit level. Implicit measures seem to exceed explicit measures in differentially assessing constructivist and transmissive views on teaching and learning. After student teaching (T2), there was no overall effect of attitude development, but changes in rank indicate that participants’ implicit attitudes towards constructivism and transmission developed differently for each individual.
BACKGROUND: The orbitofrontal cortex (OFC) is implicated in depression. The hypothesis investigated was whether the OFC sensitivity to reward and nonreward is related to the severity of depressive symptoms.
METHODS: Activations in the monetary incentive delay task were measured in the IMAGEN cohort at ages 14 years (n = 1877) and 19 years (n = 1140) with a longitudinal design. Clinically relevant subgroups were compared at ages 19 (high-severity group: n = 116; low-severity group: n = 206) and 14.
RESULTS: The medial OFC exhibited graded activation increases to reward, and the lateral OFC had graded activation increases to nonreward. In this general population, the medial and lateral OFC activations were associated with concurrent depressive symptoms at both ages 14 and 19 years. In a stratified high-severity depressive symptom group versus control group comparison, the lateral OFC showed greater sensitivity for the magnitudes of activations related to nonreward in the high-severity group at age 19 (p = .027), and the medial OFC showed decreased sensitivity to the reward magnitudes in the high-severity group at both ages 14 (p = .002) and 19 (p = .002). In a longitudinal design, there was greater sensitivity to nonreward of the lateral OFC at age 14 for those who exhibited high depressive symptom severity later at age 19 (p = .003).
CONCLUSIONS: Activations in the lateral OFC relate to sensitivity to not winning, were associated with high depressive symptom scores, and at age 14 predicted the depressive symptoms at ages 16 and 19. Activations in the medial OFC were related to sensitivity to winning, and reduced reward sensitivity was associated with concurrent high depressive symptom scores.
With ongoing anthropogenic global warming, some of the most vulnerable components of the Earth system might become unstable and undergo a critical transition. These subsystems are the so-called tipping elements. They are believed to exhibit threshold behaviour and would, if triggered, result in severe consequences for the biosphere and human societies. Furthermore, it has been shown that climate tipping elements are not isolated entities, but interact across the entire Earth system. Therefore, this thesis aims at mapping out the potential for tipping events and feedbacks in the Earth system mainly by the use of complex dynamical systems and network science approaches, but partially also by more detailed process-based models of the Earth system.
In the first part of this thesis, the theoretical foundations are laid by the investigation of networks of interacting tipping elements. For this purpose, the conditions for the emergence of global cascades are analysed against the structure of paradigmatic network types such as Erdős–Rényi, Barabási–Albert, Watts–Strogatz and explicitly spatially embedded networks. Furthermore, micro-scale structures are detected that are decisive for the transition of local to global cascades. These so-called motifs link the micro- to the macro-scale in the network of tipping elements. Alongside a model description paper, all these results are entered into the Python software package PyCascades, which is publicly available on GitHub.
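The cascade mechanism on such networks can be sketched with a toy threshold model on an Erdős–Rényi graph: a node tips once a sufficient fraction of its neighbours has tipped. This is a much-simplified stand-in for PyCascades (which models the elements as continuous dynamical systems), with all parameters chosen for illustration.

```python
# Toy threshold cascade on an Erdős–Rényi graph of "tipping elements".
import numpy as np

rng = np.random.default_rng(1)
N, p = 100, 0.08                     # nodes and ER edge probability
upper = np.triu(rng.random((N, N)) < p, 1)
adj = (upper | upper.T).astype(int)  # symmetric adjacency, no self-loops
degree = np.maximum(adj.sum(axis=1), 1)

threshold = 0.1                      # fraction of tipped neighbours needed to tip
tipped = np.zeros(N, dtype=bool)
tipped[0] = True                     # externally trigger one element

while True:
    frac_tipped = (adj @ tipped) / degree
    new = tipped | (frac_tipped >= threshold)
    if (new == tipped).all():        # fixed point reached: cascade has stopped
        break
    tipped = new

print(f"cascade size: {int(tipped.sum())} of {N}")
```

Varying the threshold and the edge probability reproduces the qualitative finding that network structure decides whether a local perturbation stays local or turns into a global cascade.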
In the second part of this dissertation, the tipping element framework is first applied to components of the Earth system such as the cryosphere and to parts of the biosphere. Afterwards it is applied to a set of interacting climate tipping elements on a global scale. Using the Earth system model of intermediate complexity (EMIC) CLIMBER-2, the temperature feedbacks are quantified, which would arise if some of the large cryosphere elements disintegrate over a long span of time. The cryosphere components that are investigated are the Arctic summer sea ice, the mountain glaciers, the Greenland and the West Antarctic Ice Sheets. The committed temperature increase, in case the ice masses disintegrate, is on the order of an additional half a degree on a global average (0.39-0.46 °C), while local to regional additional temperature increases can exceed 5 °C. This means that, once tipping has begun, additional reinforcing feedbacks are able to increase global warming and with that the risk of further tipping events.
This is also the case in the Amazon rainforest, whose parts are dependent on each other via the so-called moisture-recycling feedback. In this thesis, the importance of drought-induced tipping events in the Amazon rainforest is investigated in detail. Although the Amazon rainforest is assumed to be adapted to past environmental conditions, it is found that tipping events increase sharply if the drought conditions become too intense within too short a time, outpacing the adaptive capacity of the Amazon rainforest. In these cases, the frequency of tipping cascades also increases to 50% (or above) of all tipping events. In the model that was developed in this study, the southeastern region of the Amazon basin is hit hardest by the simulated drought patterns. This is also the region that already suffers from extensive human-induced changes due to large-scale deforestation, cattle ranching, and infrastructure projects.
Moreover, on the larger, Earth-system-wide scale, a network of conceptualised climate tipping elements is constructed in this dissertation, drawing on an extensive literature review, expert knowledge and topological properties of the tipping elements. Tipping cascades are detected even under modest global warming scenarios that limit warming to 2 °C above pre-industrial levels. In addition, the structural roles of the climate tipping elements in the network are revealed. While the large ice sheets on Greenland and Antarctica are the initiators of tipping cascades, the Atlantic Meridional Overturning Circulation (AMOC) acts as the transmitter of cascades. Furthermore, in our conceptual climate tipping element model, it is found that the ice sheets are of particular importance for the stability of the entire system of investigated climate tipping elements.
In the last part of this thesis, the results from the temperature feedback study with the EMIC CLIMBER-2 are combined with the conceptual model of climate tipping elements. There, it is observed that the likelihood of further tipping events slightly increases due to the temperature feedbacks even if no further CO2 were added to the atmosphere.
Although the developed network model is conceptual in nature, this work makes it possible for the first time to quantify the risk of tipping events between interacting components of the Earth system under global warming scenarios while simultaneously allowing for dynamic temperature feedbacks.
As competition over peer status becomes intense during adolescence, some adolescents develop insecure feelings regarding their social standing among their peers (i.e., social status insecurity). These adolescents sometimes use aggression to defend or promote their status. The aim of this study was to examine the relationships among social status insecurity, callous-unemotional (CU) traits, and popularity-motivated aggression and prosocial behaviors among adolescents, while controlling for gender. Another purpose was to examine the potential moderating role of CU traits in these relationships. Participants were 1,047 adolescents (49.2% girls; Mage = 12.44 years; age range from 11 to 14 years) in the 7th or 8th grade from a large Midwestern city. They completed questionnaires on social status insecurity, CU traits, and popularity-motivated relational aggression, physical aggression, cyberaggression, and prosocial behaviors. A structural regression model was conducted, with gender as a covariate. The model had adequate fit. Social status insecurity was associated positively with callousness, unemotionality, and popularity-motivated aggression and related negatively to popularity-motivated prosocial behaviors. High social status insecurity was related to greater popularity-motivated aggression when adolescents had high callousness traits. The findings have implications for understanding the individual characteristics associated with social status insecurity.
Exendin-4 is a pharmaceutical peptide used in the control of insulin secretion. Structural information on exendin-4 and related peptides especially on the level of quaternary structure is scarce. We present the first published association equilibria of exendin-4 directly measured by static and dynamic light scattering. We show that exendin-4 oligomerization is pH dependent and that these oligomers are of low compactness. We relate our experimental results to a structural hypothesis to describe molecular details of exendin-4 oligomers. Discussion of the validity of this hypothesis is based on NMR, circular dichroism and fluorescence spectroscopy, and light scattering data on exendin-4 and a set of exendin-4 derived peptides. The essential forces driving oligomerization of exendin-4 are helix–helix interactions and interactions of a conserved hydrophobic moiety. Our structural hypothesis suggests that key interactions of exendin-4 monomers in the experimentally supported trimer take place between a defined helical segment and a hydrophobic triangle constituted by the Phe22 residues of the three monomeric subunits. Our data rationalize that Val19 might function as an anchor in the N-terminus of the interacting helix-region and that Trp25 is partially shielded in the oligomer by C-terminal amino acids of the same monomer. Our structural hypothesis suggests that the Trp25 residues do not interact with each other, but with C-terminal Pro residues of their own monomers.
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating on 3D point clouds without intermediate representation.
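Two of the per-point metrics mentioned above, planarity and verticality, are commonly derived from the eigen-decomposition of a local neighbourhood's covariance matrix. The sketch below uses one standard formulation, which is not necessarily the exact definition used in the thesis.

```python
# Eigenvalue-based local geometric features for a 3D point neighbourhood.
import numpy as np

def local_metrics(neighbors: np.ndarray) -> tuple[float, float]:
    """neighbors: (k, 3) array of points around a query point."""
    cov = np.cov(neighbors.T)                  # 3x3 covariance of the patch
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    l3, l2, l1 = eigvals                       # so that l1 >= l2 >= l3
    planarity = (l2 - l3) / l1                 # near 1 for flat patches
    normal = eigvecs[:, 0]                     # eigenvector of smallest eigenvalue
    verticality = 1.0 - abs(normal[2])         # 0 for horizontal surfaces
    return planarity, verticality

# A flat, horizontal patch should give high planarity and low verticality
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(0, 1, 50),
                         rng.uniform(0, 1, 50),
                         rng.normal(0, 1e-4, 50)])
p, v = local_metrics(patch)
print(f"planarity={p:.2f}, verticality={v:.2f}")
```

Such features feed both the geometric analysis and, as input channels, the machine learning approaches described above.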
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases can take advantage of established computer vision methods to efficiently analyze images rendered from mobile mapping data. The two presented semantic classification methods working directly on 3D point clouds are use-case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine-learning-based method supports arbitrary semantic classes but requires training the network with ground-truth data. Both methods can be used in combination to gradually build this ground truth with manual corrections via a respective annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
Biodiversity decline causes a loss of functional diversity, which threatens ecosystems through a dangerous feedback loop: This loss may hamper ecosystems’ ability to buffer environmental changes, leading to further biodiversity losses. In this context, the increasing frequency of human-induced excessive loading of nutrients causes major problems in aquatic systems. Previous studies investigating how functional diversity influences the response of food webs to disturbances have mainly considered systems with at most two functionally diverse trophic levels. We investigated the effects of functional diversity on the robustness, that is, resistance, resilience, and elasticity, using a tritrophic—and thus more realistic—plankton food web model. We compared a non-adaptive food chain with no diversity within the individual trophic levels to a more diverse food web with three adaptive trophic levels. The species fitness differences were balanced through trade-offs between defense/growth rate for prey and selectivity/half-saturation constant for predators. We showed that the resistance, resilience, and elasticity of tritrophic food webs decreased with larger perturbation sizes and depended on the state of the system when the perturbation occurred. Importantly, we found that a more diverse food web was generally more resistant and resilient but its elasticity was context-dependent. Particularly, functional diversity reduced the probability of a regime shift toward a non-desirable alternative state. The basal-intermediate interaction consistently determined the robustness against a nutrient pulse despite the complex influence of the shape and type of the dynamical attractors. This relationship was strongly influenced by the diversity present and the third trophic level. 
Overall, using a food web model of realistic complexity, this study confirms the destructive potential of the positive feedback loop between biodiversity loss and robustness, by uncovering mechanisms leading to a decrease in resistance, resilience, and potentially elasticity as functional diversity declines.
The effects of exercise interventions on unspecific chronic low back pain (CLBP) have been investigated in many studies, but the results are inconclusive regarding exercise types, efficiency, and sustainability. This may be because the influence of psychosocial factors on exercise-induced adaptation in CLBP is neglected. Therefore, this study assessed psychosocial characteristics that moderate and mediate the effects of sensorimotor exercise on LBP. A single-blind 3-arm multicenter randomized controlled trial was conducted for 12 weeks. Three exercise groups, sensorimotor exercise (SMT), sensorimotor and behavioral training (SMT-BT), and regular routines (CG), were randomly assigned to 662 volunteers. Primary outcomes (pain intensity and disability) and psychosocial characteristics were assessed at baseline (M1) and follow-up (3/6/12/24 weeks, M2-M5). Multiple regression models were used to analyze whether psychosocial characteristics moderate the relationship between exercise and pain, meaning that psychosocial factors and exercise interact. Causal mediation analyses were conducted to analyze whether psychosocial characteristics mediate the exercise effect on pain. A total of 453 participants with intermittent pain (mean age = 39.5 ± 12.2 years, 62% female) completed the training. Depressive symptomatology (at M4, M5), vital exhaustion (at M4), and perceived social support (at M5) were shown to be significant moderators of the relationship between exercise and the reduction of pain intensity. Furthermore, depressive mood (at M4), social satisfaction (at M4), and anxiety (at M5, SMT) significantly moderated the exercise effect on pain disability. The amount of moderation was of clinical relevance. In contrast, no psychosocial variables mediated exercise effects on pain. 
In conclusion, psychosocial variables can moderate the effects of sensorimotor exercise on CLBP, which may explain conflicting past results regarding the merit of exercise interventions in CLBP. The results further suggest an early identification of psychosocial risk factors by diagnostic tools, which may essentially support the planning of personalized exercise therapy.
Level of Evidence: Level I.
Clinical Trial Registration: DRKS00004977, LOE: I, MiSpEx: grant-number: 080102A/11-14. https://www.drks.de/drks_web/navigate.do?navigationId=trial.HTML&TRIAL_ID=DRKS00004977.
Supernova remnants (SNRs) are discussed as the most promising sources of galactic cosmic rays (CRs). The diffusive shock acceleration (DSA) theory predicts particle spectra in rough agreement with observations. Upon closer inspection, however, the photon spectra of observed SNRs indicate that the particle spectra produced at SNR shocks deviate from the standard expectation. This work suggests a viable explanation for a softening of the particle spectra in SNRs. The basic idea is the re-acceleration of particles in the turbulent region immediately downstream of the shock. This thesis shows that the re-acceleration of particles by fast-mode waves in the downstream region can be efficient enough to impact particle spectra over several decades in energy. To demonstrate this, a generic SNR model is presented, where the evolution of particles is described by the reduced transport equation for CRs. It is shown that the resulting particle and the corresponding synchrotron spectra are significantly softer compared to the standard case. Next, this work outlines RATPaC, a code developed to model particle acceleration and the corresponding photon emission in SNRs. RATPaC solves the particle transport equation in the test-particle mode using hydrodynamic simulations of the SNR plasma flow. The background magnetic field can either be computed from the induction equation or follow analytic profiles. This work presents an extended version of RATPaC that accounts for stochastic re-acceleration by fast-mode waves, which provide diffusion of particles in momentum space. This version is then applied to model the young historical SNR Tycho. According to radio observations, Tycho’s SNR features a radio spectral index of approximately −0.65. In previous modeling approaches, this has been attributed to strongly distinctive Alfvénic drift, which is assumed to operate in the shock vicinity. In this work, the problems and inconsistencies of this scenario are discussed. 
Instead, stochastic re-acceleration of electrons in the immediate downstream region of Tycho’s SNR is suggested as the cause of the soft radio spectrum. Furthermore, this work investigates two different scenarios for the magnetic-field distribution inside Tycho’s SNR. It is concluded that magnetic-field damping is needed to account for the observed filaments in the radio range. Two models are presented for Tycho’s SNR, both of which feature a strong hadronic contribution; a purely leptonic model is therefore considered very unlikely. In addition to the detailed modeling of Tycho’s SNR, this dissertation presents a relatively simple one-zone model for the young SNR Cassiopeia A and an interpretation of the recently analyzed VERITAS and Fermi-LAT data. It shows that the γ-ray emission of Cassiopeia A cannot be explained without a hadronic contribution and that the remnant accelerates protons up to TeV energies. Thus, Cassiopeia A is unlikely to be a PeVatron.
Geochemical processes such as mineral dissolution and precipitation alter the microstructure of rocks, and thereby affect their hydraulic and mechanical behaviour. Quantifying these property changes and considering them in reservoir simulations is essential for a sustainable utilisation of the geological subsurface. Due to the lack of alternatives, analytical methods and empirical relations are currently applied to estimate evolving hydraulic and mechanical rock properties associated with chemical reactions. However, the predictive capabilities of analytical approaches remain limited, since they assume idealised microstructures, and thus are not able to reflect property evolution for dynamic processes. Hence, the aim of the present thesis is to improve the prediction of permeability and stiffness changes resulting from pore space alterations of reservoir sandstones.
A detailed representation of rock microstructure, including the morphology and connectivity of pores, is essential to accurately determine physical rock properties. For that purpose, three-dimensional pore-scale models of typical reservoir sandstones, obtained from highly resolved micro-computed tomography (micro-CT), are used to numerically calculate permeability and stiffness. In order to adequately depict characteristic distributions of secondary minerals, the virtual samples are systematically altered, and the resulting trends among the geometric, hydraulic, and mechanical rock properties are quantified. It is demonstrated that the geochemical reaction regime controls the location of mineral precipitation within the pore space, and thereby crucially affects the permeability evolution. This emphasises the need to determine distinctive porosity-permeability relationships by means of digital pore-scale models. By contrast, a substantial impact of spatial alteration patterns on the stiffness evolution of reservoir sandstones is only observed for certain microstructures, such as highly porous granular rocks or sandstones comprising framework-supporting cementations. In order to construct synthetic granular samples, a process-based approach is proposed that includes grain deposition and diagenetic cementation. It is demonstrated that the generated samples reliably represent the microstructural complexity of natural sandstones. Thereby, general limitations of imaging techniques can be overcome, and various realisations of granular rocks can be flexibly produced. These can be further altered in virtual experiments, offering a fast and cost-effective way to examine the impact of precipitation, dissolution, or fracturing on various petrophysical correlations.
The presented research work provides methodological principles to quantify trends in permeability and stiffness resulting from geochemical processes. The calculated physical property relations are directly linked to pore-scale alterations, and thus have a higher accuracy than commonly applied analytical approaches. This will considerably improve the predictive capabilities of reservoir models, and is further relevant to assess and reduce potential risks, such as productivity or injectivity losses as well as reservoir compaction or fault reactivation. Hence, the proposed method is of paramount importance for a wide range of natural and engineered subsurface applications, including geothermal energy systems, hydrocarbon reservoirs, CO2 and energy storage as well as hydrothermal deposit exploration.
The Big Five personality traits play a major role in student achievement. As such, there is consistent evidence that students who are more conscientious receive better teacher-assigned grades in secondary school. However, research often does not support the claim that more conscientious students similarly achieve higher scores in domain-specific standardized achievement tests. Based on the Invest-and-Accrue Model, we argue that conscientiousness explains to some extent why certain students receive better grades despite similar academic accomplishments (i.e., achieving similar scores in domain-specific standardized achievement tests). Therefore, the present study examines to what extent the relationship between student personality and teacher-assigned grades consists of direct as opposed to indirect associations (via subject-specific standardized test scores). We used a representative sample of 14,710 ninth-grade students to estimate these direct and indirect pathways in mathematics and German. Structural equation models showed that test scores explained between 8 and 11% of the variance in teacher-assigned grades in mathematics and German. The Big Five personality traits in students additionally explained between 8 and 10% of the variance in grades. Finally, the personality-grade relationship consisted of direct (0.02 ≤ |β| ≤ 0.27) and indirect associations via test scores (0.01 ≤ |β| ≤ 0.07). Conscientiousness explained discrepancies between teacher-assigned grades and students’ scores in domain-specific standardized tests to a greater extent than any of the other Big Five personality traits. Our findings suggest that students who are more conscientious may invest more effort to accomplish classroom goals, but fall short of mastery.
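The direct/indirect decomposition examined in this abstract can be illustrated with a minimal product-of-coefficients mediation sketch. The simulated data, coefficient values, and variable names below are illustrative assumptions only, not the study's data or its structural equation model implementation.

```python
import random

random.seed(1)

# Simulate: conscientiousness (X) -> test score (M) -> grade (Y),
# plus a direct X -> Y path. All coefficients are illustrative.
n = 5000
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.3 * x + random.gauss(0, 1) for x in X]                        # a-path ~ 0.3
Y = [0.2 * x + 0.5 * m + random.gauss(0, 1) for x, m in zip(X, M)]   # c' ~ 0.2, b ~ 0.5

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

# a-path: simple regression M ~ X
a = cov(X, M) / cov(X, X)

# Y ~ X + M: solve the 2x2 normal equations for (c_prime, b)
sxx, sxm, smm = cov(X, X), cov(X, M), cov(M, M)
sxy, smy = cov(X, Y), cov(M, Y)
det = sxx * smm - sxm * sxm
c_prime = (sxy * smm - smy * sxm) / det   # direct effect
b = (smy * sxx - sxy * sxm) / det         # b-path

indirect = a * b                 # indirect (mediated) effect
total = cov(X, Y) / cov(X, X)    # total effect c

print(round(c_prime, 2), round(indirect, 2), round(total, 2))
```

In linear models the decomposition c = c' + a*b holds exactly, which is the sense in which the abstract partitions the personality-grade relationship into direct and indirect associations.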
Background
Building on the Realistic Accuracy Model, this paper explores whether it is easier for teachers to assess the achievement of some students than others. Accordingly, we suggest that certain individual characteristics of students, such as extraversion, academic self-efficacy, and conscientiousness, may guide teachers' evaluations of student achievement, resulting in more appropriate judgements and a stronger alignment of assigned grades with students' actual achievement level (as measured using standardized tests).
Aims
We examine whether extraversion, academic self-efficacy, and conscientiousness moderate the relations between teacher-assigned grades and students' standardized test scores in mathematics.
Sample
This study uses a representative sample of N = 5,919 seventh-grade students in Germany (48.8% girls; mean age: M = 12.5, SD = 0.62) who participated in a national, large-scale assessment focusing on students' academic development.
Methods
We specified structural equation models to examine the inter-relations of teacher-assigned grades with students' standardized test scores in mathematics, Big Five personality traits, and academic self-efficacy, while controlling for students' socioeconomic status, gender, and age.
Results
The correlation between teacher-assigned grades and standardized test scores in mathematics was r = .40. Teacher-assigned grades were more closely related to standardized test scores when students reported higher levels of conscientiousness (β = .05, p = .002). Students’ extraversion and academic self-efficacy did not moderate the relationship between teacher-assigned grades and standardized test scores.
Conclusions
Our findings indicate that students' conscientiousness is a personality trait that seems to be important when it comes to how closely mathematics teachers align their grades to standardized test scores.
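The moderation test described in this abstract amounts to an interaction term in a regression model: a positive grade-by-conscientiousness interaction means grades track test scores more closely for more conscientious students. The following sketch illustrates the idea on simulated data; all coefficients and variable names are illustrative assumptions, not the study's estimates or its actual modeling setup (which used structural equation models with covariates).

```python
import random

random.seed(2)

# Illustrative moderation model: the alignment between grades (Y) and
# test scores (T) strengthens with conscientiousness (C).
n = 4000
T = [random.gauss(0, 1) for _ in range(n)]
C = [random.gauss(0, 1) for _ in range(n)]
Y = [0.4 * t + 0.1 * c + 0.05 * t * c + random.gauss(0, 1) for t, c in zip(T, C)]

# Design matrix with intercept and the T*C interaction term
Xmat = [[1.0, t, c, t * c] for t, c in zip(T, C)]

def ols(X, y):
    """Solve the normal equations (X'X) beta = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for i in range(k):                      # forward elimination with pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    beta = [0.0] * k
    for i in reversed(range(k)):            # back substitution
        beta[i] = (A[i][k] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

b0, b_test, b_consc, b_inter = ols(Xmat, Y)
# b_inter > 0 recovers the simulated moderation effect.
print(round(b_inter, 2))
```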
Background: High-intensity muscle actions have the potential to temporarily improve performance, an effect that has been denoted as postactivation performance enhancement.
Objectives: This study determined the acute effects of different stretch-shortening (fast vs. slow) and strength (dynamic vs. isometric) exercises executed during one training session on subsequent balance performance in youth weightlifters.
Materials and Methods: Sixteen male and female young weightlifters, aged 11.3 ± 0.6 years, performed four strength exercise conditions in randomized order, including dynamic strength (DYN; 3 sets of 3 repetitions at the 10 RM) and isometric strength exercises (ISOM; 3 sets of maintaining the 10 RM of the back squat for 3 s), as well as fast (FSSC; 3 sets of 3 repetitions of 20-cm drop jumps) and slow (SSSC; 3 sets of 3 hurdle jumps over a 20-cm obstacle) stretch-shortening cycle protocols. Balance performance was tested before and after each of the four exercise conditions in bipedal stance on an unstable surface (i.e., a BOSU ball with the flat side facing up) using two dependent variables, i.e., center of pressure surface area (CoP SA) and velocity (CoP V).
Results: There was a significant effect of time on CoP SA and CoP V [F(1,60) = 54.37, d = 1.88, p < 0.0001; F(1,60) = 9.07, d = 0.77, p = 0.003]. In addition, a statistically significant effect of condition on CoP SA and CoP V [F(3,60) = 11.81, d = 1.53, p < 0.0001; F(3,60) = 7.36, d = 1.21, p = 0.0003] was observed. Statistically significant condition-by-time interactions were found for the balance parameters CoP SA (p < 0.003, d = 0.54) and CoP V (p < 0.002, d = 0.70). In the contrast analyses, all specified hypotheses were tested; they demonstrated that FSSC yielded significantly greater improvements than all other conditions in CoP SA and CoP V [p < 0.0001 (d = 1.55); p = 0.0004 (d = 1.19), respectively]. In addition, FSSC yielded significantly greater improvements than the two strength exercise conditions for both balance parameters [p < 0.0001 (d = 2.03); p < 0.0001 (d = 1.45)].
Conclusion: Fast stretch-shortening cycle exercises appear to be more effective at improving short-term balance performance in young weightlifters. Due to the importance of balance for overall competitive achievement in weightlifting, it is recommended that young weightlifters implement dynamic plyometric exercises in the fast stretch-shortening cycle during the warm-up to improve their balance performance.
Energy is at the heart of the climate crisis—but also at the heart of any efforts for climate change mitigation. Energy consumption is responsible for approximately three quarters of global anthropogenic greenhouse gas (GHG) emissions. Therefore, central to any serious plans to stave off a climate catastrophe is a major transformation of the world's energy system, which would move society away from fossil fuels and towards a net-zero energy future. Considering that fossil fuels are also a major source of air pollutant emissions, the energy transition has important implications for air quality as well, and thus also for human and environmental health. Both Europe and Germany have set the goal of becoming GHG neutral by 2050, and moreover have demonstrated their deep commitment to a comprehensive energy transition. Two of the most significant developments in energy policy over the past decade have been the interest in expansion of shale gas and hydrogen, which accordingly have garnered great interest and debate among public, private and political actors.
In this context, sound scientific information can play an important role by informing stakeholder dialogue and future research investments, and by supporting evidence-based decision-making. This thesis examines anticipated environmental impacts from possible, relevant changes in the European energy system, in order to impart valuable insight and fill critical gaps in knowledge. Specifically, it investigates possible future shale gas development in Germany and the United Kingdom (UK), as well as a hypothetical, complete transition to hydrogen mobility in Germany. Moreover, it assesses the impacts on GHG and air pollutant emissions, and on tropospheric ozone (O3) air quality. The analysis is facilitated by constructing emission scenarios and performing air quality modeling via the Weather Research and Forecasting model coupled with chemistry (WRF-Chem). The work of this thesis is presented in three research papers.
The first paper finds that methane (CH4) leakage rates from upstream shale gas development in Germany and the UK would range between 0.35% and 1.36% in a realistic, business-as-usual case, while they would be significantly lower - between 0.08% and 0.15% - in an optimistic, strict regulation and high compliance case, thus demonstrating the value and potential of measures to substantially reduce emissions. Yet, while the optimistic case is technically feasible, it is unlikely that the practices and technologies assumed would be applied and accomplished on a systematic, regular basis, owing to economics and limited monitoring resources. The realistic CH4 leakage rates estimated in this study are comparable to values reported by studies carried out in the US and elsewhere. In contrast, the optimistic rates are similar to official CH4 leakage data from upstream gas production in Germany and in the UK. Considering that there is a lack of systematic, transparent and independent reports supporting the official values, this study further highlights the need for more research efforts in this direction. Compared with national energy sector emissions, this study suggests that shale gas emissions of volatile organic compounds (VOCs) could be significant, though relatively insignificant for other air pollutants. Similar to CH4, measures could be effective for reducing VOCs emissions.
The second paper shows that VOC and nitrogen oxides (NOx) emissions from a future shale gas industry in Germany and the UK have potentially harmful consequences for European O3 air quality on both the local and regional scale. The results indicate a peak increase in maximum daily 8-hour average O3 (MDA8) ranging from 3.7 µg m-3 to 28.3 µg m-3. Findings suggest that shale gas activities could result in additional exceedances of MDA8 at a substantial percentage of regulatory measurement stations both locally and in neighboring and distant countries, with up to circa one third of stations in the UK and one fifth of stations in Germany experiencing additional exceedances. Moreover, the results reveal that the shale gas impact on the cumulative health-related metric SOMO35 (annual Sum of Ozone Means Over 35 ppb) could be substantial, with a maximum increase of circa 28%. Overall, the findings suggest that shale gas VOC emissions could play a critical role in O3 enhancement, while NOx emissions would contribute to a lesser extent. Thus, the results indicate that stringent regulation of VOC emissions would be important in the event of future European shale gas development to minimize deleterious health outcomes.
The third paper demonstrates that a hypothetical, complete transition of the German vehicle fleet to hydrogen fuel cell technology could contribute substantially to Germany's climate and air quality goals. The results indicate that if the hydrogen were to be produced via renewable-powered water electrolysis (green hydrogen), German carbon dioxide equivalent (CO2eq) emissions would decrease by 179 MtCO2eq annually, though if electrolysis were powered by the current electricity mix, emissions would instead increase by 95 MtCO2eq annually. The findings generally reveal a notable anticipated decrease in German energy emissions of regulated air pollutants. The results suggest that vehicular hydrogen demand is 1000 PJ annually, which would require between 446 TWh and 525 TWh for electrolysis, hydrogen transport and storage. When only the heavy duty vehicle segment (HDVs) is shifted to green hydrogen, the results of this thesis show that vehicular hydrogen demand drops to 371 PJ, while a deep emissions cut is still realized (-57 MtCO2eq), suggesting that HDVs are a low-hanging fruit for contributing to decarbonization of the German road transport sector with hydrogen energy.
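As a back-of-envelope check on the magnitudes in this abstract, the conversion from the 1000 PJ hydrogen demand to electricity demand can be sketched as follows; the overall conversion efficiencies used here are illustrative assumptions chosen for the example, not values taken from the thesis.

```python
# Back-of-envelope check of the hydrogen figures. The overall efficiencies
# (hydrogen energy out per unit of electricity in, covering electrolysis,
# transport and storage) are illustrative assumptions, not thesis values.
PJ_PER_TWH = 3.6                            # 1 TWh = 3.6 PJ

h2_demand_pj = 1000.0                       # annual vehicular hydrogen demand
h2_demand_twh = h2_demand_pj / PJ_PER_TWH   # ~ 277.8 TWh of hydrogen energy

# Electricity demand for two assumed overall efficiencies:
electricity_twh = [round(h2_demand_twh / eff) for eff in (0.62, 0.53)]
print(electricity_twh)  # spans roughly the reported 446-525 TWh range
```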
Background
Relatively little is known about protective factors and the emergence and maintenance of positive outcomes in the field of adolescents with chronic conditions. Therefore, the primary aim of the study is to acquire a deeper understanding of the dynamic process of resilience factors, coping strategies and psychosocial adjustment of adolescents living with chronic conditions.
Methods/design
We plan to consecutively recruit N = 450 adolescents (12–21 years) from three German patient registries for chronic conditions (type 1 diabetes, cystic fibrosis, or juvenile idiopathic arthritis). Based on screening for anxiety and depression, adolescents are assigned to two parallel groups – “inconspicuous” (PHQ-9 and GAD-7 < 7) vs. “conspicuous” (PHQ-9 or GAD-7 ≥ 7) – participating in a prospective online survey at baseline and 12-month follow-up. At two time points (T1, T2), we assess (1) intra- and interpersonal resiliency factors, (2) coping strategies, and (3) health-related quality of life, well-being, satisfaction with life, anxiety and depression. Using a cross-lagged panel design, we will examine the bidirectional longitudinal relations between resiliency factors and coping strategies, psychological adaptation, and psychosocial adjustment. To monitor Covid-19 pandemic effects, participants are also invited to take part in an intermediate online survey.
Discussion
The study will provide a deeper understanding of adaptive, potentially modifiable processes and will therefore help to develop novel, tailored interventions supporting a positive adaptation in youths with a chronic condition. These strategies should not only support those at risk but also promote the maintenance of a successful adaptation.
Trial registration
German Clinical Trials Register (DRKS), no. DRKS00025125. Registered on May 17, 2021.
Mitochondria are critical for hypothalamic function and regulators of metabolism. Hypothalamic mitochondrial dysfunction with decreased mitochondrial chaperone expression is present in type 2 diabetes (T2D). Recently, we demonstrated that a dysregulated mitochondrial stress response (MSR) with reduced chaperone expression in the hypothalamus is an early event in obesity development due to insufficient insulin signaling. Although insulin activates this response and improves metabolism, the metabolic impact of one of its members, the mitochondrial chaperone heat shock protein 10 (Hsp10), is unknown. Thus, we hypothesized that a reduction of Hsp10 in hypothalamic neurons will impair mitochondrial function and impact brain insulin action. Therefore, we investigated the role of chaperone Hsp10 by introducing a lentiviral-mediated Hsp10 knockdown (KD) in the hypothalamic cell line CLU-183 and in the arcuate nucleus (ARC) of C57BL/6N male mice. We analyzed mitochondrial function and insulin signaling utilizing qPCR, Western blot, XF96 Analyzer, immunohistochemistry, and microscopy techniques. We show that Hsp10 expression is reduced in T2D mice brains and regulated by leptin in vitro. Hsp10 KD in hypothalamic cells induced mitochondrial dysfunction with altered fatty acid metabolism and increased mitochondria-specific oxidative stress resulting in neuronal insulin resistance. Consequently, the reduction of Hsp10 in the ARC of C57BL/6N mice caused hypothalamic insulin resistance with acute liver insulin resistance.
Magnetic strain contributions in laser-excited metals studied by time-resolved X-ray diffraction
(2021)
In this work I explore the impact of magnetic order on the laser-induced ultrafast strain response of metals. Few experiments with femto- or picosecond time-resolution have so far investigated magnetic stresses. This is contrasted by the industrial usage of magnetic invar materials or magnetostrictive transducers for ultrasound generation, which already utilize magnetostrictive stresses in the low frequency regime.
In the reported experiments I investigate how the energy deposition by the absorption of femtosecond laser pulses in thin metal films leads to an ultrafast stress generation. I utilize that this stress drives an expansion that emits nanoscopic strain pulses, so-called hypersound, into adjacent layers. Both the expansion and the strain pulses change the average inter-atomic distance in the sample, which can be tracked with sub-picosecond time resolution using an X-ray diffraction setup at a laser-driven plasma X-ray source. Ultrafast X-ray diffraction can also be applied to buried layers within heterostructures that cannot be accessed by optical methods, which exhibit a limited penetration into metals. The reconstruction of the initial energy transfer processes from the shape of the strain pulse in buried detection layers represents a contribution of this work to the field of picosecond ultrasonics.
A central point for the analysis of the experiments is the direct link between the deposited energy density in the nano-structures and the resulting stress on the crystal lattice. The underlying thermodynamical concept of a Grüneisen parameter provides the theoretical framework for my work. I demonstrate how the Grüneisen principle can be used for the interpretation of the strain response on ultrafast timescales in various materials and that it can be extended to describe magnetic stresses. The class of heavy rare-earth elements exhibits especially large magnetostriction effects, which can even lead to an unconventional contraction of the laser-excited transducer material. Such a dominant contribution of the magnetic stress to the motion of atoms has not been demonstrated previously. The observed rise time of the magnetic stress contribution in dysprosium is identical to the decrease in the helical spin order that has been found previously using time-resolved resonant X-ray diffraction. This indicates that the strength of the magnetic stress can be used as a proxy of the underlying magnetic order. Such magnetostriction measurements are applicable even in the case of antiparallel or non-collinear alignment of the magnetic moments and a vanishing magnetization.
The strain response of metal films is usually determined by the pressure of electrons and lattice vibrations. I have developed a versatile two-pulse excitation routine that can be used to extract the magnetic contribution to the strain response even if systematic measurements above and below the magnetic ordering temperature are not feasible. A first laser pulse leads to a partial ultrafast demagnetization, so that the amplitude and shape of the strain response triggered by the second pulse depend on the remaining magnetic order. With this method I could identify a strongly anisotropic magnetic stress contribution in the magnetic data storage material iron-platinum and track the recovery of the magnetic order by varying the pulse-to-pulse delay. The stark contrast between the expansion of iron-platinum nanograins and that of thin films shows that the different constraints for the in-plane expansion have a strong influence on the out-of-plane expansion, due to the Poisson effect. I show how such transverse strain contributions need to be accounted for when interpreting the ultrafast out-of-plane strain response using thermal expansion coefficients obtained in near-equilibrium conditions.
This work contributes an investigation of magnetostriction on ultrafast timescales to the literature of magnetic effects in materials. It develops a method to extract spatially and temporally varying stress contributions based on a model for the amplitude and shape of the emitted strain pulses. Energy transfer processes result in a change of the stress profile with respect to the initial absorption of the laser pulses. One interesting example occurs in nanoscopic gold-nickel heterostructures, where excited electrons rapidly transport energy into a distant nickel layer, which takes up much more energy and expands faster and more strongly than the laser-excited gold capping layer. Magnetic excitations in rare-earth materials represent a large energy reservoir that delays the energy transfer into adjacent layers. Such magneto-caloric effects are known in thermodynamics but not extensively covered on ultrafast timescales. The combination of ultrafast X-ray diffraction and time-resolved techniques with direct access to the magnetization has a large potential to uncover and quantify such energy transfer processes.
Compound weather events may lead to extreme impacts that can affect many aspects of society including agriculture. Identifying the underlying mechanisms that cause extreme impacts, such as crop failure, is of crucial importance to improve their understanding and forecasting. In this study, we investigate whether key meteorological drivers of extreme impacts can be identified using the least absolute shrinkage and selection operator (LASSO) in a model environment, a method that allows for automated variable selection and is able to handle collinearity between variables. As an example of an extreme impact, we investigate crop failure using annual wheat yield as simulated by the Agricultural Production Systems sIMulator (APSIM) crop model driven by 1600 years of daily weather data from a global climate model (EC-Earth) under present-day conditions for the Northern Hemisphere. We then apply LASSO logistic regression to determine which weather conditions during the growing season lead to crop failure. We obtain good model performance in central Europe and the eastern half of the United States, while crop failure years in regions in Asia and the western half of the United States are less accurately predicted. Model performance correlates strongly with annual mean and variability of crop yields; that is, model performance is highest in regions with relatively large annual crop yield mean and variability. Overall, for nearly all grid points, the inclusion of temperature, precipitation and vapour pressure deficit is key to predict crop failure. In addition, meteorological predictors during all seasons are required for a good prediction. These results illustrate the omnipresence of compounding effects of both meteorological drivers and different periods of the growing season for creating crop failure events. 
Especially vapour pressure deficit and climate extreme indicators such as diurnal temperature range and the number of frost days are selected by the statistical model as relevant predictors for crop failure at most grid points, underlining their overarching relevance. We conclude that the LASSO regression model is a useful tool to automatically detect compound drivers of extreme impacts and could be applied to other weather impacts such as wildfires or floods. As the detected relationships are of purely correlative nature, more detailed analyses are required to establish the causal structure between drivers and impacts.
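The variable-selection behavior that makes LASSO attractive for detecting compound drivers can be demonstrated with a minimal coordinate-descent sketch. For simplicity this uses a linear (not logistic) LASSO on simulated data; all predictors, coefficients, and the penalty value are illustrative assumptions, not the study's setup.

```python
import random

random.seed(3)

def soft_threshold(z, g):
    """Soft-thresholding operator, the core of LASSO coordinate descent."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso(X, y, lam, iters=200):
    """Coordinate-descent LASSO for centered predictors (no intercept)."""
    n, k = len(X), len(X[0])
    beta = [0.0] * k
    for _ in range(iters):
        for j in range(k):
            # partial residual excluding feature j
            r = [yi - sum(b * xi for h, (b, xi) in enumerate(zip(beta, row)) if h != j)
                 for yi, row in zip(y, X)]
            rho = sum(row[j] * ri for row, ri in zip(X, r)) / n
            denom = sum(row[j] ** 2 for row in X) / n
            beta[j] = soft_threshold(rho, lam) / denom
    return beta

# Toy crop-failure-style data: the outcome depends on temperature and vapour
# pressure deficit, not on the third (irrelevant) predictor. Names are
# illustrative only.
n = 300
temp = [random.gauss(0, 1) for _ in range(n)]
vpd = [random.gauss(0, 1) for _ in range(n)]
irr = [random.gauss(0, 1) for _ in range(n)]   # irrelevant driver
y = [0.8 * t + 0.5 * v + random.gauss(0, 0.5) for t, v in zip(temp, vpd)]
X = list(map(list, zip(temp, vpd, irr)))

beta = lasso(X, y, lam=0.2)
# The L1 penalty shrinks the irrelevant coefficient to exactly zero,
# which is how LASSO performs automated variable selection.
print([round(b, 2) for b in beta])
```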
The co-occurrence of warm spells and droughts can lead to detrimental socio-economic and ecological impacts, largely surpassing the impacts of either warm spells or droughts alone. We quantify changes in the number of compound warm spells and droughts from 1979 to 2018 in the Mediterranean Basin using the ERA5 data set. We analyse two types of compound events: 1) warm season compound events, which are extreme in absolute terms in the warm season from May to October, and 2) year-round deseasonalised compound events, which are extreme in relative terms respective to the time of the year. The number of compound events increases significantly and especially warm spells are increasing strongly – with annual growth rates of 3.9 (3.5)% for warm season (deseasonalised) compound events and 4.6 (4.4)% for warm spells – whereas for droughts the change is more ambiguous depending on the applied definition. Therefore, the rise in the number of compound events is primarily driven by temperature changes and not the lack of precipitation. The months July and August show the highest increases in warm season compound events, whereas the highest increases of deseasonalised compound events occur in spring and early summer. This increase in deseasonalised compound events can potentially have a significant impact on the functioning of Mediterranean ecosystems as this is the peak phase of ecosystem productivity and a vital phenophase.
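An annual growth rate like those reported in this abstract is typically obtained from a log-linear trend fit over the study period, which can be sketched as follows; the simulated counts and the 4% trend are illustrative assumptions, not the study's data.

```python
import math
import random

random.seed(4)

# Illustrative yearly event counts growing at ~4% per year with noise
# (the true rate here is chosen for the example, not taken from the study).
years = list(range(1979, 2019))
counts = [10.0 * (1.04 ** (y - 1979)) * math.exp(random.gauss(0, 0.1))
          for y in years]

# Log-linear trend: ln(count) = a + b*year, annual growth rate = exp(b) - 1
x = [float(y) for y in years]
z = [math.log(c) for c in counts]
mx, mz = sum(x) / len(x), sum(z) / len(z)
b = (sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
     / sum((xi - mx) ** 2 for xi in x))
growth_pct = (math.exp(b) - 1) * 100

print(round(growth_pct, 1))  # close to the simulated 4% annual growth
```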
Mediterranean ecosystems are particularly vulnerable to climate change and the associated increase in climate anomalies. This study investigates extreme ecosystem responses evoked by climatic drivers in the Mediterranean Basin for the time span 1999–2019 with a specific focus on seasonal variations as the seasonal timing of climatic anomalies is considered essential for impact and vulnerability assessment. A bivariate vulnerability analysis is performed for each month of the year to quantify which combinations of the drivers temperature (obtained from ERA5-Land) and soil moisture (obtained from ESA CCI and ERA5-Land) lead to extreme reductions in ecosystem productivity using the fraction of absorbed photosynthetically active radiation (FAPAR; obtained from the Copernicus Global Land Service) as a proxy.
The bivariate analysis clearly showed that, in many cases, it is not just one but a combination of both drivers that causes ecosystem vulnerability. The overall pattern shows that Mediterranean ecosystems are prone to three soil moisture regimes during the yearly cycle: they are vulnerable to hot and dry conditions from May to July, to cold and dry conditions from August to October, and to cold conditions from November to April, illustrating the shift from a soil-moisture-limited regime in summer to an energy-limited regime in winter. In late spring, a month with significant vulnerability to hot conditions only often precedes the next stage of vulnerability to both hot and dry conditions, suggesting that high temperatures lead to critically low soil moisture levels with a certain time lag. In the eastern Mediterranean, the period of vulnerability to hot and dry conditions within the year is much longer than in the western Mediterranean. Our results show that it is crucial to account for both spatial and temporal variability to adequately assess ecosystem vulnerability. The seasonal vulnerability approach presented in this study helps to provide detailed insights regarding the specific phenological stage of the year in which ecosystem vulnerability to a certain climatic condition occurs.
Vogel, J., Paton, E., and Aich, V.: Seasonal ecosystem vulnerability to climatic anomalies in the Mediterranean, Biogeosciences, 18, 5903–5927, https://doi.org/10.5194/bg-18-5903-2021, 2021.
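The bivariate vulnerability analysis described above asks, for each month, which joint temperature/soil-moisture conditions coincide with extreme FAPAR reductions. A heavily simplified toy version (no calendar-month stratification, synthetic data, and quadrant thresholds of my own choosing, not the study's method) can be sketched as:

```python
import numpy as np

def bivariate_vulnerability(temp, sm, fapar, q_impact=0.1):
    """For each quadrant of standardised temperature/soil-moisture anomalies,
    return the fraction of samples with an extreme FAPAR reduction
    (FAPAR below its q_impact quantile). Purely illustrative thresholds."""
    impact = fapar < np.quantile(fapar, q_impact)
    quadrants = {
        "hot-dry":  (temp > 0) & (sm < 0),
        "hot-wet":  (temp > 0) & (sm >= 0),
        "cold-dry": (temp <= 0) & (sm < 0),
        "cold-wet": (temp <= 0) & (sm >= 0),
    }
    return {name: float(impact[mask].mean()) if mask.any() else float("nan")
            for name, mask in quadrants.items()}

rng = np.random.default_rng(1)
n = 1000
temp = rng.standard_normal(n)
sm = -0.6 * temp + 0.8 * rng.standard_normal(n)            # dry months tend to be hot
fapar = 0.5 * sm - 0.3 * temp + 0.4 * rng.standard_normal(n)  # toy productivity proxy
vuln = bivariate_vulnerability(temp, sm, fapar)
print(vuln)  # extreme reductions cluster in the hot-dry quadrant
```

In the synthetic setup, productivity losses concentrate in the hot-dry quadrant, mimicking the summer soil-moisture-limited regime described in the abstract; the real analysis does this per month and per pixel.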
In response to the impending spread of COVID-19, universities worldwide abruptly stopped face-to-face teaching and switched to technology-mediated teaching. As a result, the use of technology in the learning processes of students of different disciplines became essential and the only way to teach, communicate and collaborate for months. In this crisis context, we conducted a longitudinal study in four German universities, in which we collected a total of 875 responses from students of information systems and music and arts at four points in time during the spring–summer 2020 semester. Our study focused on (1) the students’ acceptance of technology-mediated learning, (2) any change in this acceptance during the semester and (3) the differences in acceptance between the two disciplines. We applied the Technology Acceptance Model and were able to validate it for the extreme situation of the COVID-19 pandemic. We extended the model with three new variables (time flexibility, learning flexibility and social isolation) that influenced the construct of perceived usefulness. Furthermore, we detected differences between the disciplines and over time. In this paper, we present and discuss our study’s results and derive short- and long-term implications for science and practice.
Electrical muscle stimulation (EMS) is an increasingly popular training method and has become the focus of research in recent years. New EMS devices offer a wide range of mobile applications for whole-body EMS (WB-EMS) training, e.g., the intensification of dynamic low-intensity endurance exercises through WB-EMS. The present study aimed to determine the differences in exercise intensity between conventional and WB-EMS-superimposed walking (W, WB-EMS-W) and between conventional and WB-EMS-superimposed Nordic walking (NW, WB-EMS-NW) during a treadmill test. Eleven participants (52.0 ± years; 85.9 ± 7.4 kg, 182 ± 6 cm, BMI 25.9 ± 2.2 kg/m2) performed a 10 min treadmill test at a given velocity (6.5 km/h) in four test situations: walking (W) and Nordic walking (NW), each conventional and WB-EMS-superimposed. Oxygen uptake in absolute terms (VO2) and relative to body weight (rel. VO2), lactate, and the rating of perceived exertion (RPE) were measured before and after the test. WB-EMS intensity was adjusted individually according to the feedback of the participant. Descriptive statistics are given as mean ± SD. For the statistical analyses, one-factorial ANOVA for repeated measures and two-factorial ANOVA [factors: EMS, W/NW, and their combination (EMS*W/NW)] were performed (α = 0.05). Significant effects were found for the EMS and W/NW factors for the outcome variables VO2 (EMS: p = 0.006, r = 0.736; W/NW: p < 0.001, r = 0.870), relative VO2 (EMS: p < 0.001, r = 0.850; W/NW: p < 0.001, r = 0.937), and lactate (EMS: p = 0.003, r = 0.771; W/NW: p = 0.003, r = 0.764), with both factors producing higher values. However, the differences in VO2 and relative VO2 are within the range of biological variability of ± 12%. The factor combination EMS*W/NW was statistically non-significant for all three variables. WB-EMS resulted in higher RPE values (p = 0.035, r = 0.613); RPE differences for W/NW and EMS*W/NW were not significant.
The current study results indicate that WB-EMS influences the parameters of exercise intensity. The impact on exercise intensity and the clinical relevance of WB-EMS-superimposed walking (WB-EMS-W) exercise are questionable because of the marginal differences in the outcome variables.
The Role of Interoceptive Sensibility and Emotional Conceptualization for the Experience of Emotions
(2021)
The theory of constructed emotions suggests that different psychological components, including core affect (mental and neural representations of bodily changes), and conceptualization (meaning-making based on prior experiences and semantic knowledge), are involved in the formation of emotions. However, little is known about their role in experiencing emotions. In the current study, we investigated how individual differences in interoceptive sensibility and emotional conceptualization (as potential correlates of these components) interact to moderate three important aspects of emotional experiences: emotional intensity (strength of emotion felt), arousal (degree of activation), and granularity (ability to differentiate emotions with precision). To this end, participants completed a series of questionnaires assessing interoceptive sensibility and emotional conceptualization and underwent two emotion experience tasks, which included standardized material (emotion differentiation task; ED task) and self-experienced episodes (day reconstruction method; DRM). Correlational analysis showed that individual differences in interoceptive sensibility and emotional conceptualization were related to each other. Principal Component Analysis (PCA) revealed two independent factors that were referred to as Sensibility and Monitoring. The Sensibility factor, interpreted as beliefs about the accuracy of an individual in detecting internal physiological and emotional states, predicted higher granularity for negative words. The Monitoring factor, interpreted as the tendency to focus on the internal states of an individual, was negatively related to emotional granularity and intensity. Additionally, Sensibility scores were more strongly associated with greater well-being and adaptability measures than Monitoring scores.
Our results indicate that independent processes underlying individual differences in interoceptive sensibility and emotional conceptualization contribute to emotion experiencing.
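The PCA step that extracts two independent factors from correlated questionnaire scales can be sketched in a few lines. The data below are synthetic, and the latent factors, scale count, and loadings are assumptions for illustration only, not the study's instruments.

```python
import numpy as np

def pca(X, n_components=2):
    """PCA via SVD of z-scored data: returns loadings and explained-variance ratios."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return Vt[:n_components], explained[:n_components]

rng = np.random.default_rng(2)
n = 300
sensibility = rng.standard_normal(n)  # hypothetical latent "Sensibility" factor
monitoring = rng.standard_normal(n)   # hypothetical latent "Monitoring" factor
# six made-up questionnaire scales, each loading on one latent factor plus noise
X = np.column_stack(
    [sensibility + 0.3 * rng.standard_normal(n) for _ in range(3)]
    + [monitoring + 0.3 * rng.standard_normal(n) for _ in range(3)]
)
loadings, explained = pca(X)
print(explained)  # two dominant components, mirroring a two-factor structure
```

When the scales really share two latent sources, the first two components absorb most of the variance, which is the pattern a two-factor PCA solution reports.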
Anthropogenic climate change alters the hydrological cycle. While certain areas experience more intense precipitation events, others will experience droughts and increased evaporation, affecting water storage in long-term reservoirs, groundwater, snow, and glaciers. High elevation environments are especially vulnerable to climate change, which will impact the water supply for people living downstream. The Himalaya has been identified as a particularly vulnerable system, with nearly one billion people depending on the runoff in this system as their main water resource. As such, a more refined understanding of spatial and temporal changes in the water cycle in high altitude systems is essential to assess variations in water budgets under different climate change scenarios.
Anthropogenic influences are not the only drivers of the hydrological cycle: changes also occur over geological timescales, connected to the interplay between orogenic uplift and climate change. Their temporal evolution and causes are, however, often difficult to constrain. Using proxies that reflect hydrological changes with an increase in elevation, we can unravel the history of orogenic uplift in mountain ranges and its effect on the climate.
In this thesis, stable isotope ratios (expressed as δ2H and δ18O values) of meteoric waters and organic material are combined as tracers of atmospheric and hydrologic processes with remote sensing products to better understand water sources in the Himalayas. In addition, the record of modern climatological conditions based on the compound specific stable isotopes of leaf waxes (δ2Hwax) and brGDGTs (branched Glycerol dialkyl glycerol tetraethers) in modern soils in four Himalayan river catchments was assessed as proxies of the paleoclimate and (paleo-) elevation. Ultimately, hydrological variations over geological timescales were examined using δ13C and δ18O values of soil carbonates and bulk organic matter originating from sedimentological sections from the pre-Siwalik and Siwalik groups to track the response of vegetation and monsoon intensity and seasonality on a timescale of 20 Myr.
I find that Rayleigh distillation, with an Indian Summer Monsoon (ISM) moisture source, mainly controls the isotopic composition of surface waters in the studied Himalayan catchments. An increase in d-excess in the spring, verified by remote sensing data products, shows the significant impact of runoff from snow-covered and glaciated areas on the surface water isotopic values in the timeseries.
In addition, I show that biomarker records such as brGDGTs and δ2Hwax have the potential to record (paleo-) elevation by yielding a significant correlation with the temperature and surface water δ2H values, respectively, as well as with elevation. Comparing the elevation inferred from both brGDGT and δ2Hwax, large differences were found in arid sections of the elevation transects due to an additional effect of evapotranspiration on δ2Hwax. A combined study of these proxies can improve paleoelevation estimates and provide recommendations based on the results found in this study.
Ultimately, I infer that the expansion of C4 vegetation between 20 and 1 Myr was not solely dependent on atmospheric pCO2 but also on regional changes in aridity and seasonality, as inferred from the stable isotopic signatures of the two sedimentary sections in the Himalaya (east and west).
This thesis shows that the stable isotope chemistry of surface waters can be applied as a tool to monitor the changing Himalayan water budget under projected increasing temperatures. Uncertainties associated with paleo-elevation reconstructions were minimized by combining organic proxies (δ2Hwax and brGDGTs) in Himalayan soil. Stable isotope ratios in bulk soil and soil carbonates showed the evolution of vegetation influenced by the monsoon during the late Miocene, proving that these proxies can be used to record monsoon intensity, seasonality, and the response of vegetation. In conclusion, the use of organic proxies and stable isotope chemistry in the Himalayas has proven to successfully record changes in climate with increasing elevation. The combination of δ2Hwax and brGDGTs as a new proxy provides a more refined understanding of (paleo-)elevation and the influence of climate.
While previous research underscores the role of leaders in stimulating employee voice behaviour, comparatively little is known about what affects leaders' support for such constructive but potentially threatening employee behaviours. We introduce leader member exchange quality (LMX) as a central predictor of leaders' support for employees' ideas for constructive change. Apart from a general benefit of high LMX for leaders' idea support, we propose that high LMX is particularly critical to leaders' idea support if the idea voiced by an employee constitutes a power threat to the leader. We investigate leaders' attribution of prosocial and egoistic employee intentions as mediators of these effects. Hypotheses were tested in a quasi-experimental vignette study (N = 160), in which leaders evaluated a simulated employee idea, and a field study (N = 133), in which leaders evaluated an idea that had been voiced to them at work. Results show an indirect effect of LMX on leaders' idea support via attributed prosocial intentions but not via attributed egoistic intentions, and a buffering effect of high LMX on the negative effect of power threat on leaders' idea support. Results differed across studies with regard to the main effect of LMX on idea support.
Root water uptake is an essential process for terrestrial plants that strongly affects the spatiotemporal distribution of water in vegetated soil. Fast neutron tomography is a recently established non-invasive imaging technique capable of capturing the 3D architecture of root systems in situ; it even allows for tracking of three-dimensional water flow in soil and roots. We present an in vivo analysis of local water uptake and transport by roots of soil-grown maize plants—for the first time measured in a three-dimensional, time-resolved manner. Using deuterated water as a tracer in infiltration experiments, we visualized soil imbibition and local root uptake, and tracked the transport of deuterated water throughout the fibrous root system for a day and night situation. This revealed significant differences in water transport between different root types. The primary root was the preferred water transport path in the 13-day-old plants, while seminal roots of comparable size and length contributed little to plant water supply. The results underline the unique potential of fast neutron tomography to provide time-resolved 3D in vivo information on the water uptake and transport dynamics of plant root systems, thus contributing to a better understanding of the complex interactions of plant, soil and water.
Inertial measurement units (IMUs) enable easy to operate and low-cost data recording for gait analysis. When combined with treadmill walking, a large number of steps can be collected in a controlled environment without the need of a dedicated gait analysis laboratory. In order to evaluate existing and novel IMU-based gait analysis algorithms for treadmill walking, a reference dataset that includes IMU data as well as reliable ground truth measurements for multiple participants and walking speeds is needed. This article provides a reference dataset consisting of 15 healthy young adults who walked on a treadmill at three different speeds. Data were acquired using seven IMUs placed on the lower body, two different reference systems (Zebris FDMT-HQ and OptoGait), and two RGB cameras. Additionally, in order to validate an existing IMU-based gait analysis algorithm using the dataset, an adaptable modular data analysis pipeline was built. Our results show agreement between the pressure-sensitive Zebris and the photoelectric OptoGait system (r = 0.99), demonstrating the quality of our reference data. As a use case, the performance of an algorithm originally designed for overground walking was tested on treadmill data using the data pipeline. The accuracy of stride length and stride time estimations was comparable to that reported in other studies with overground data, indicating that the algorithm is equally applicable to treadmill data. The Python source code of the data pipeline is publicly available, and the dataset will be provided by the authors upon request, enabling future evaluations of IMU gait analysis algorithms without the need of recording new data.
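The agreement between the two reference systems can be illustrated with a toy version of the comparison: derive stride times from heel-strike timestamps and correlate the two systems' estimates. The timestamps, jitter level, and variable names below are hypothetical, not the published dataset, and the r = 0.99 in the abstract comes from the real recordings.

```python
import numpy as np

def stride_times(heel_strike_times):
    """Stride times as intervals between consecutive heel strikes of one foot."""
    return np.diff(np.asarray(heel_strike_times))

def pearson_r(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(3)
# hypothetical ground truth: ~1.1 s stride times on a treadmill, 50 strides
true_strides = 1.1 + 0.05 * rng.standard_normal(50)
events = np.concatenate([[0.0], np.cumsum(true_strides)])
# two reference systems observing the same events with small independent jitter (5 ms)
zebris_like = stride_times(events + 0.005 * rng.standard_normal(events.size))
optogait_like = stride_times(events + 0.005 * rng.standard_normal(events.size))
print(round(pearson_r(zebris_like, optogait_like), 2))  # high agreement
```

Because both systems observe the same gait events with only millisecond-scale noise, their stride-time series correlate almost perfectly, which is the kind of check used to validate the reference data.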
Generative adversarial networks (GANs) have been broadly applied to a wide range of application domains since their proposal. In this thesis, we propose several methods that aim to tackle different existing problems in GANs. Particularly, even though GANs are generally able to generate high-quality samples, the diversity of the generated set is often sub-optimal. Moreover, the common increase of the number of models in the original GANs framework, as well as their architectural sizes, introduces additional costs. Additionally, even though challenging, the proper evaluation of a generated set is an important direction to ultimately improve the generation process in GANs. We start by introducing two diversification methods that extend the original GANs framework to multiple adversaries to stimulate sample diversity in a generated set. Then, we introduce a new post-training compression method based on Monte Carlo methods and importance sampling to quantize and prune the weights and activations of pre-trained neural networks without any additional training. The previous method may be used to reduce the memory and computational costs introduced by increasing the number of models in the original GANs framework. Moreover, we use a similar procedure to quantize and prune gradients during training, which also reduces the communication costs between different workers in a distributed training setting. We introduce several topology-based evaluation methods to assess data generation in different settings, namely image generation and language generation. Our methods retrieve both single-valued and double-valued metrics, which, given a real set, may be used to broadly assess a generated set or separately evaluate sample quality and sample diversity, respectively. Moreover, two of our metrics use locality-sensitive hashing to accurately assess the generated sets of highly compressed GANs. 
The analysis of the compression effects in GANs paves the way for their efficient employment in real-world applications. Given their general applicability, the methods proposed in this thesis may be extended beyond the context of GANs. Hence, they may be generally applied to enhance existing neural networks and, in particular, generative frameworks.
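The thesis's compression method combines Monte Carlo methods and importance sampling; the general idea of importance-sampling-based sparsification of a weight vector can be sketched as follows. This is a generic textbook construction under my own assumptions, not the thesis's algorithm: indices are drawn with probability proportional to weight magnitude and reweighted so the sparse vector is an unbiased estimate of the original.

```python
import numpy as np

def is_sparsify(w, k, rng):
    """Unbiased sparse estimate of a weight vector w: draw k indices with
    probability p_i proportional to |w_i| (with replacement) and average the
    importance-weighted one-hot contributions w_i / (k * p_i)."""
    p = np.abs(w) / np.abs(w).sum()
    idx = rng.choice(w.size, size=k, replace=True, p=p)
    est = np.zeros_like(w, dtype=float)
    np.add.at(est, idx, w[idx] / (k * p[idx]))  # accumulate, since indices may repeat
    return est

rng = np.random.default_rng(6)
w = rng.standard_normal(1000)   # stand-in for a pre-trained layer's weight vector
est = is_sparsify(w, 200, rng)
print(np.count_nonzero(est))    # at most 200 nonzero entries survive
```

Because E[est] = w, such estimators can prune (and, with a finite codebook, quantize) pre-trained weights without retraining; the same trick applied to gradients reduces communication in distributed training, as the abstract describes.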
Measuring migration 2.0
(2021)
The interest in human migration is at its all-time high, yet data to measure migration is notoriously limited. “Big data” or “digital trace data” have emerged as new sources of migration measurement complementing ‘traditional’ census, administrative and survey data. This paper reviews the strengths and weaknesses of eight novel, digital data sources along five domains: reliability, validity, scope, access and ethics. The review highlights the opportunities for migration scholars but also stresses the ethical and empirical challenges. This review intends to be of service to researchers and policy analysts alike and help them navigate this new and increasingly complex field.
Leveraging large-deviation statistics to decipher the stochastic properties of measured trajectories
(2021)
Extensive time-series encoding the position of particles such as viruses, vesicles, or individual proteins are routinely garnered in single-particle tracking experiments or supercomputing studies. They contain vital clues on how viruses spread or drugs may be delivered in biological cells. Similar time-series are being recorded of stock values in financial markets and of climate data. Such time-series are most typically evaluated in terms of time-averaged mean-squared displacements (TAMSDs), which remain random variables for finite measurement times. Their statistical properties are different for different physical stochastic processes, thus allowing us to extract valuable information on the stochastic process itself. To exploit the full potential of the statistical information encoded in measured time-series we here propose an easy-to-implement and computationally inexpensive new methodology, based on deviations of the TAMSD from its ensemble average counterpart. Specifically, we use the upper bound of these deviations for Brownian motion (BM) to check the applicability of this approach to simulated and real data sets. By comparing the probability of deviations for different data sets, we demonstrate how the theoretical bound for BM reveals additional information about observed stochastic processes. We apply the large-deviation method to data sets of tracer beads tracked in aqueous solution, tracer beads measured in mucin hydrogels, and of geographic surface temperature anomalies. Our analysis shows how the large-deviation properties can be efficiently used as a simple yet effective routine test to reject the BM hypothesis and unveil relevant information on statistical properties such as ergodicity breaking and short-time correlations.
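The central quantity here, the TAMSD of a single trajectory, is easy to compute; a minimal sketch on a simulated Brownian trajectory (not the paper's data, and without the paper's large-deviation bound, which I do not reproduce) shows how the TAMSD can be compared against its ensemble-average counterpart:

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean-squared displacement of one trajectory at a given lag."""
    disp = x[lag:] - x[:-lag]
    return float(np.mean(disp**2))

rng = np.random.default_rng(4)
T = 10_000
traj = np.cumsum(rng.standard_normal(T))  # unit-step Brownian motion, ensemble MSD(t) = t
lags = np.array([1, 2, 4, 8, 16])
curves = np.array([tamsd(traj, lag) for lag in lags])
print(curves / lags)  # for ergodic BM the TAMSD tracks the ensemble MSD, so the ratio is near 1
```

For finite measurement times the ratio fluctuates around 1; the size and probability of those fluctuations is exactly what the paper's large-deviation bound constrains, so systematic excursions flag non-Brownian behaviour such as ergodicity breaking.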
Massive Open Online Courses (MOOCs) open up new opportunities to learn a wide variety of skills online and are thus well suited for individual education, especially where proficient teachers are not available locally. At the same time, modern society is undergoing a digital transformation, requiring the training of large numbers of current and future employees. Abstract thinking, logical reasoning, and the need to formulate instructions for computers are becoming increasingly relevant. A holistic way to train these skills is to learn how to program. Programming, in addition to being a mental discipline, is also considered a craft, and practical training is required to achieve mastery. In order to effectively convey programming skills in MOOCs, practical exercises are incorporated into the course curriculum to offer students the necessary hands-on experience to reach an in-depth understanding of the programming concepts presented. Our preliminary analysis showed that while being an integral and rewarding part of courses, practical exercises bear the risk of overburdening students who are struggling with conceptual misunderstandings and unknown syntax. In this thesis, we develop, implement, and evaluate different interventions with the aim to improve the learning experience, sustainability, and success of online programming courses. Data from four programming MOOCs, with a total of over 60,000 participants, are employed to determine criteria for practical programming exercises best suited for a given audience.
Based on over five million executions and scoring runs from students' task submissions, we deduce exercise difficulties, students' patterns in approaching the exercises, and potential flaws in exercise descriptions as well as preparatory videos. The primary issue in online learning is that students face a social gap caused by their isolated physical situation. Each individual student usually learns alone in front of a computer and suffers from the absence of a pre-determined time structure as provided in traditional school classes. Furthermore, online learning usually presses students into a one-size-fits-all curriculum, which presents the same content to all students, regardless of their individual needs and learning styles. Any means of personalizing content or providing individual feedback on the problems students encounter is mostly ruled out by the discrepancy between the number of learners and the number of instructors. This results in a high demand for self-motivation and determination of MOOC participants. Social distance exists between individual students as well as between students and course instructors. It decreases engagement and poses a threat to learning success. Within this research, we approach the identified issues within MOOCs and suggest scalable technical solutions, improving social interaction and balancing content difficulty.
Our contributions include situational interventions, approaches for personalizing educational content as well as concepts for fostering collaborative problem-solving. With these approaches, we reduce counterproductive struggles and create a universal improvement for future programming MOOCs. We evaluate our approaches and methods in detail to improve programming courses for students as well as instructors and to advance the state of knowledge in online education.
Data gathered from our experiments show that receiving peer feedback on one's programming problems improves overall course scores by up to 17%. Merely the act of phrasing a question about one's problem improved overall scores by about 14%. The rate of students reaching out for help was significantly improved by situational just-in-time interventions. Request for Comment interventions increased the share of students asking for help by up to 158%. Data from our four MOOCs further provide detailed insight into the learning behavior of students. We outline additional significant findings with regard to student behavior and demographic factors. Our approaches, the technical infrastructure, the numerous educational resources developed, and the data collected provide a solid foundation for future research.
The Big Naryn Complex (BNC) in the East Djetim-Too Range of the Kyrgyz Middle Tianshan block is a tectonized, at least 2 km thick sequence of predominantly felsic to intermediate volcanic rocks intruded by porphyric rhyolite sills. It overlies a basement of metamorphic rocks and is overlain by late Neoproterozoic Djetim-Too Formation sediments; these also occur as tectonic intercalations in the BNC. The up to ca. 1100 m thick Lower Member is composed of predominantly rhyolites-to-dacites and minor basalts, while the at least 900 m thick pyroclastic Upper Member is dominated by rhyolitic-to-dacitic ignimbrites. Porphyric rhyolite sills are concentrated at the top of the Lower Member. A Lower Member rhyolite and a sill sample have LA-ICP-MS U-Pb zircon crystallization ages of 726.1 +/- 2.2 Ma and 720.3 +/- 6.5 Ma, respectively, showing that most of the magmatism occurred within a short time span in the late Tonian-early Cryogenian. Inherited zircons in the sill sample have Neoarchean (2.63, 2.64 Ga), Paleo- (2.33-1.81 Ga), Meso- (1.55 Ga), and Neoproterozoic (ca. 815 Ma) ages, and were derived from a heterogeneous Kuilyu Complex basement. A 1751 +/- 7 Ma Ar-40/Ar-39 age for amphibole from metagabbro is the age of cooling subsequent to Paleoproterozoic metamorphism of the Kuilyu Complex. The large amount of pyroclastic rocks, and their major and trace element compositions, the presence of Neoarchean to Neoproterozoic inherited zircons and a depositional basement of metamorphic rocks point to formation of the BNC in a continental magmatic arc setting.
Partial synchronous states exist in systems of coupled oscillators between full synchrony and asynchrony. They are an important research topic because of their variety of different dynamical states. Frequently, they are studied using phase dynamics. This is a caveat, as phase dynamics are generally obtained in the weak coupling limit of a first-order approximation in the coupling strength. The generalization to higher orders in the coupling strength is an open problem. Of particular interest in the research of partial synchrony are systems containing both attractive and repulsive coupling between the units. Such a mix of coupling yields very specific dynamical states that may help understand the transition between full synchrony and asynchrony. This thesis investigates partial synchronous states in mixed-coupling systems. First, a method for higher-order phase reduction is introduced to observe interactions beyond the pairwise one in the first-order phase description, hoping that these may apply to mixed-coupling systems. This new method for coupled systems with known phase dynamics of the units gives correct results but, like most comparable methods, is computationally expensive. It is applied to three Stuart-Landau oscillators coupled in a line with a uniform coupling strength. A numerical method is derived to verify the analytical results. These results are interesting but give importance to simpler phase models that still exhibit exotic states. Such simple models that are rarely considered are Kuramoto oscillators with attractive and repulsive interactions. Depending on how the units are coupled and the frequency difference between the units, it is possible to achieve many different states. Rich synchronization dynamics, such as a Bellerophon state, are observed when considering a Kuramoto model with attractive interaction in two subpopulations (groups) and repulsive interactions between groups. 
In two groups, one attractive and one repulsive, of identical oscillators with a frequency difference, an interesting solitary state appears directly between full and partial synchrony. This system can be described very well analytically.
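The mixed-coupling Kuramoto setting with two subpopulations can be sketched with a toy simulation: attractive coupling within each group, repulsive coupling between them. All parameters below (group size, coupling strengths, frequency offset, forward-Euler integration) are arbitrary illustration choices, not the thesis's configuration.

```python
import numpy as np

def simulate(K_in=1.0, K_out=-0.5, n=50, steps=4000, dt=0.01, seed=5):
    """Two Kuramoto populations: attractive coupling within each group (K_in > 0)
    and repulsive coupling between groups (K_out < 0), integrated by forward Euler."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
    group = np.repeat([0, 1], n)
    # pairwise coupling: K_in within a group, K_out across groups, normalised by N
    K = np.where(group[:, None] == group[None, :], K_in, K_out) / (2 * n)
    omega = np.where(group == 0, 1.0, 1.2)  # small frequency difference between groups
    for _ in range(steps):
        drift = omega + np.sum(K * np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta = theta + dt * drift
    # Kuramoto order parameter of each group: 1 means full internal synchrony
    return [float(np.abs(np.mean(np.exp(1j * theta[group == g])))) for g in (0, 1)]

r0, r1 = simulate()
print(round(r0, 2), round(r1, 2))  # each group synchronises internally
```

With identical oscillators inside each group, the attractive intra-group coupling synchronises each subpopulation, while the repulsive inter-group coupling and the frequency offset control the phase relation between the two coherent clusters; richer states such as the solitary and Bellerophon states discussed above arise from more delicate parameter choices.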
The objective of this work was to investigate the potential effect of cereal α-amylase/trypsin inhibitors (ATIs) on growth parameters and selective digestive enzymes of Tenebrio molitor L. larvae. The approach consisted of feeding the larvae with wheat, sorghum and rice meals containing different levels and composition of α-amylase/trypsin inhibitors. The developmental and biochemical characteristics of the larvae were assessed over feeding periods of 5 h, 5 days and 10 days, and the relative abundance of α-amylase and selected proteases in larvae was determined using liquid chromatography tandem mass spectrometry. Overall, weight gains ranged from 21% to 42% after five days of feeding. The larval death rate significantly increased in all groups after 10 days of feeding (p < 0.05), whereas the pupation rate was about 25% among larvae fed with rice (Oryza sativa L.) and Siyazan/Esperya wheat meals, and only 8% and 14% among those fed with Damougari and S35 sorghum meals. As determined using the Lowry method, the protein contents of the sodium phosphate extracts ranged from 7.80 ± 0.09 to 9.42 ± 0.19 mg/mL and those of the ammonium bicarbonate/urea reached 19.78 ± 0.16 to 37.47 ± 1.38 mg/mL. The total protein contents of the larvae according to the Kjeldahl method ranged from 44.0 to 49.9 g/100 g. The relative abundance of α-amylase, CLIP domain-containing serine protease, modular serine protease zymogen and C1 family cathepsin significantly decreased in the larvae, whereas dipeptidylpeptidase I and chymotrypsin increased within the first hours after feeding (p < 0.05). Trypsin content was found to be constant independently of time or feed material. Finally, based on the results we obtained, it was difficult to draw substantive conclusions on the likely effects of meal ATI composition on larval developmental characteristics, but their effects on the digestive enzyme expression remain relevant.
The detection and quantification of nut allergens remains a major challenge. Liquid chromatography tandem mass spectrometry (LC-MS/MS) is emerging as one of the most widely used methods, but sample preparation prior to the analysis is still a key issue. The objective of this work was to establish optimized protocols for extraction, tryptic digestion and LC-MS analysis of almond, cashew, hazelnut, peanut, pistachio and walnut samples. Ammonium bicarbonate/urea extraction (Ambi/urea), SDS buffer extraction (SDS), polyvinylpolypyrrolidone (PVPP) extraction, trichloroacetic acid/acetone extraction (TCA/acetone) and chloroform/methanol/sodium chloride precipitation (CM/NaCl) as well as the performances of conventional tryptic digestion and microwave-assisted breakdown were investigated. Overall, the protein extraction yields ranged from 14.9 ± 0.5 (almond extract from CM/NaCl) to 76.5 ± 1.3% (hazelnut extract from Ambi/urea). Electrophoretic profiling showed that the SDS extraction method clearly yielded a higher amount of extracted proteins in the ranges of 0–15 kDa, 15–35 kDa, 35–70 kDa and 70–250 kDa compared to the other methods. The linearity of the LC-MS methods in the range of 0 to 0.4 µg equivalent defatted nut flour was assessed, and recovery of the internal standards GWGG and DPLNV(d8)LKPR ranged from 80 to 120%. The identified biomarker peptides were used to relatively quantify selected allergenic proteins from the investigated nut samples. Considering the overall results, it can be concluded that SDS buffer allows better protein extraction from almond, peanut and walnut samples, while PVPP buffer is more appropriate for cashew, pistachio and hazelnut samples. It was also found that conventional overnight digestion is indicated for cashew, pistachio and hazelnut samples, while microwave-assisted tryptic digestion is recommended for almond, hazelnut and peanut extracts.
The spread of shrubs in Namibian savannas raises questions about the resilience of these ecosystems to global change. This makes it necessary to understand the past dynamics of the vegetation, since there is no consensus on whether shrub encroachment is a new phenomenon, nor on its main drivers. However, the lack of long-term vegetation datasets for the region and the scarcity of suitable palaeoecological archives make reconstructing the past vegetation and land cover of these savannas a challenge.
To help meet this challenge, this study addresses three main research questions: 1) Is pollen analysis a suitable tool to reflect the vegetation change associated with shrub encroachment in savanna environments? 2) Does the current encroached landscape correspond to an alternative stable state of savanna vegetation? 3) To what extent do pollen-based quantitative vegetation reconstructions reflect changes in past land cover?
The research focuses on north-central Namibia where, despite it being the region most affected by shrub invasion, particularly since the beginning of the 21st century, little is known about the dynamics of this phenomenon.
Field-based vegetation data were compared with modern pollen data to assess their correspondence in terms of composition and diversity along precipitation and grazing-intensity gradients. In addition, two sediment cores from Lake Otjikoto were analysed to reveal changes in vegetation composition that have occurred in the region over the past 170 years and their possible drivers. For this, a multiproxy approach (fossil pollen, sedimentary ancient DNA (sedaDNA), biomarkers, compound-specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) was applied at high taxonomic and temporal resolution. REVEALS modelling of the fossil pollen record from Lake Otjikoto was applied to quantitatively reconstruct past vegetation cover. For this, we first derived pollen productivity estimates (PPEs) of the most relevant savanna taxa in the region using the extended R-value model and two pollen dispersal options (a Gaussian plume model and a Lagrangian stochastic model). The REVEALS-based vegetation reconstruction was then validated against remote-sensing-based regional vegetation data.
The results show that modern pollen reflects the composition of the vegetation well, but its diversity less well. Interestingly, precipitation and grazing explain a significant amount of the compositional change in the pollen and vegetation spectra. The multiproxy record shows that a state change from open Combretum woodland to encroached Terminalia shrubland can occur within a century, and that the transition between states spans around 80 years and is characterized by a unique vegetation composition. This transition is supported by gradual environmental changes induced by management (i.e. broad-scale logging for the mining industry, selective grazing and reduced fire activity associated with intensified farming) and related land-use change. Derived environmental changes (i.e. reduced soil moisture, reduced grass cover, changes in species composition and competitiveness, reduced fire intensity) may have affected the resilience of open Combretum woodlands, making them more susceptible to change to an encroached state through stochastic events, such as consecutive wet and drought years, and through elevated pCO2. We assume that the resulting encroached state was further stabilized by feedback mechanisms that favour the establishment and competitiveness of woody vegetation.
The REVEALS-based quantitative estimates of plant taxa indicate the predominance of a semi-open landscape throughout the 20th century and a reduction in grass cover to below 50% since the beginning of the 21st century, associated with the spread of encroacher woody taxa. The cover estimates show a close match with regional vegetation data, supporting the vegetation dynamics inferred from the multiproxy analyses. Reasonable PPEs were obtained for all woody taxa, but not for Poaceae.
In conclusion, pollen analysis is a suitable tool to reconstruct past vegetation dynamics in savannas. However, because pollen cannot identify grasses beyond family level, a multiproxy approach, particularly the use of sedaDNA, is required. I was able to separate stable encroached states from mere woodland phases, and could identify drivers and speculate about related feedbacks. In addition, the REVEALS-based quantitative vegetation reconstruction clearly reflects the magnitude of the changes in the vegetation cover that occurred during the last 130 years, despite the limitations of some PPEs.
This research provides new insights into pollen-vegetation relationships in savannas and highlights the importance of multiproxy approaches when reconstructing past vegetation dynamics in semi-arid environments. It also provides the first time series with sufficient taxonomic resolution to show changes in vegetation composition during shrub encroachment, as well as the first quantitative reconstruction of past land cover in the region. These results help to identify the different stages in savanna dynamics and can be used to calibrate predictive models of vegetation change, which are highly relevant to land management.
Carbonatite magmatism is a highly efficient transport mechanism from Earth’s mantle to the crust, thus providing insights into the chemistry and dynamics of the Earth’s mantle. One evolving and promising tool for tracing magma interaction is stable iron isotope analysis, particularly because iron isotope fractionation is controlled by oxidation state and bonding environment. Meanwhile, a large data set on iron isotope fractionation in igneous rocks exists, comprising bulk rock compositions and fractionation between mineral groups. Iron isotope data from natural carbonatite rocks are extremely light and remarkably variable. This resembles iron isotope data from mantle xenoliths, which are characterized by a variability in δ56Fe spanning three times the range found in basalts, and by the extremely light values of some whole-rock samples, reaching δ56Fe as low as -0.69 ‰ in a spinel lherzolite. This large range of variation may be caused by metasomatic processes involving metasomatic agents such as volatile-bearing, highly alkaline silicate melts or carbonate melts. The expected effects of metasomatism on iron isotope fractionation vary with parameters such as the melt/rock ratio, reaction time, and the nature of the metasomatic agents and mineral reactions involved. An alternative or additional way to enrich light isotopes in the mantle could be multiple phases of melt extraction. To interpret the existing data sets, more knowledge of iron isotope fractionation factors is needed.
To investigate the behavior of iron isotopes in carbonatite systems, kinetic and equilibration experiments between immiscible silicate and carbonate melts were performed in natrocarbonatite systems in an internally heated gas pressure vessel at intrinsic redox conditions, at temperatures between 900 and 1200 °C and pressures of 0.5 and 0.7 GPa. The iron isotope compositions of the coexisting silicate and carbonate melts were analyzed by solution MC-ICP-MS. The kinetic experiments, employing a 58Fe-spiked starting material, show that isotopic equilibrium is reached after 48 hours. The experiments on equilibrium iron isotope fractionation between immiscible silicate and carbonate melts show that light isotopes are enriched in the carbonatite melt. The highest Δ56Fesil.m.-carb.melt (mean) of 0.13 ‰ was determined in a system with a strongly peralkaline silicate melt composition (ASI ≥ 0.21, Na/Al ≤ 2.7). In three systems with extremely peralkaline silicate melt compositions (ASI between 0.11 and 0.14), iron isotope fractionation could not be resolved analytically. The lowest Δ56Fesil.m.-carb.melt (mean) of 0.02 ‰ was determined in a system with an extremely peralkaline silicate melt composition (ASI ≤ 0.11, Na/Al ≥ 6.1). The observed iron isotope fractionation is most likely governed by the redox conditions of the system. Yet in the systems where no fractionation occurred, structural changes induced by compositional changes possibly overrule the influence of redox conditions. This interpretation implies that the iron isotope system has the potential to be useful not only for exploring redox conditions in magmatic systems, but also for detecting structural changes in a melt.
In situ iron isotope analyses by femtosecond laser ablation coupled to MC-ICP-MS were performed on magnetite and olivine grains to reveal variations in iron isotope composition at the micro scale. The investigated sample is a melilitite bomb from the Salt Lake Crater group at Honolulu (Oahu, Hawaii) showing strong evidence of interaction with a carbonatite melt. While magnetite grains are rather homogeneous in their iron isotope composition, olivine grains span a far larger range in iron isotope ratios. δ56Fe in magnetite is limited to between -0.17 ‰ (± 0.11 ‰, 2SE) and +0.08 ‰ (± 0.09 ‰, 2SE), whereas δ56Fe in olivine ranges from -0.66 ‰ (± 0.11 ‰, 2SE) to +0.10 ‰ (± 0.13 ‰, 2SE). Olivine and magnetite grains hold different information regarding kinetic and equilibrium fractionation due to their different Fe diffusion coefficients. The observations made in the experiments and in the in situ iron isotope analyses suggest that the extremely light iron isotope signatures found in carbonatites are generated by several steps of isotope fractionation during carbonatite genesis. These may involve equilibrium and kinetic fractionation. Since iron isotopic signatures in natural systems are generated by a combination of multiple factors (pressure, temperature, redox conditions, phase composition and structure, time scale), multi-tracer approaches are needed to explain the signatures found in natural rocks.
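The δ56Fe values quoted above are per-mil deviations of a sample's 56Fe/54Fe ratio from the IRMM-014 reference standard. As a minimal illustration of the notation (not part of the analytical workflow described here; the reference ratio and the sample value are assumed for the example), the conversion can be sketched in Python:

```python
def delta56fe(r_sample: float, r_standard: float) -> float:
    """Per-mil (per thousand) deviation of a sample's 56Fe/54Fe ratio
    from the reference standard's ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Commonly cited 56Fe/54Fe ratio of the IRMM-014 reference material
R_IRMM014 = 15.698

# Hypothetical ratio of an isotopically light olivine, chosen so that
# it corresponds to delta56Fe = -0.66 per mil (the lightest value above)
r_olivine = R_IRMM014 * (1 - 0.66e-3)

print(round(delta56fe(r_olivine, R_IRMM014), 2))  # → -0.66
```

A δ56Fe of -0.66 ‰ thus corresponds to a ratio less than 0.1% below the standard, which is why high-precision MC-ICP-MS measurements are required to resolve such signatures.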
One of the key challenges in modern Facility Management (FM) is to digitally reflect the current state of the built environment, referred to as the as-is or as-built (as opposed to as-designed) representation. While the use of Building Information Modeling (BIM) can address the issue of digital representation, the generation and maintenance of BIM data require a considerable amount of manual work and domain expertise. Another key challenge is monitoring the current state of the built environment, which is used to provide feedback and enhance decision making. The need for an integrated solution for all data associated with the operational life cycle of a building is becoming more pronounced as practices from Industry 4.0 are currently being evaluated and adopted for FM use. This research presents an approach for the digital representation of indoor environments in their current state within the life cycle of a given building. Such an approach requires the fusion of various sources of digital data. The key to solving such a complex issue of digital data integration, processing and representation is the use of a Digital Twin (DT). A DT is a digital duplicate of the physical environment, its states, and processes. A DT fuses as-designed and as-built digital representations of the built environment with as-is data, typically in the form of floorplans, point clouds and BIMs, with additional information layers pertaining to the current and predicted states of an indoor environment or a complete building (e.g., sensor data). The design, implementation and initial testing of prototypical DT software services for indoor environments are presented and described. These DT software services are implemented within a service-oriented paradigm, and their feasibility is demonstrated through functioning and tested key software components within prototypical Service-Oriented System (SOS) implementations.
The main outcome of this research shows that key data related to the built environment can be semantically enriched and combined to enable digital representations of indoor environments, based on the concept of a DT. Furthermore, the outcomes of this research show that digital data, related to FM and Architecture, Construction, Engineering, Owner and Occupant (AECOO) activity, can be combined, analyzed and visualized in real-time using a service-oriented approach. This has great potential to benefit decision making related to Operation and Maintenance (O&M) procedures within the scope of the post-construction life cycle stages of typical office buildings.
Seed traits matter
(2021)
Although many plants are dispersed by wind and their seeds can travel long distances across unsuitable matrix areas, a large proportion relies on co-evolved zoochorous seed dispersal to connect populations in isolated habitat islands. Particularly in agricultural landscapes, where the remaining habitat patches are often very small and highly isolated, mobile linkers acting as zoochorous seed dispersers are critical for the population dynamics of numerous plant species. However, knowledge about the qualitative or quantitative characterization of such mobile-link processes, especially in agricultural landscapes, is still limited. In a controlled feeding experiment, we recorded the seed intake and germination success after complete digestion by the European brown hare (Lepus europaeus) and explored its mobile-link potential as an endozoochorous seed disperser. Utilizing a suite of common, rare, and potentially invasive plant species, we disentangled the effects of seed morphological traits on germination success while controlling for phylogenetic relatedness. Further, we measured the landscape connectivity provided by hares in two contrasting agricultural landscapes (simple: few natural and semi-natural structures, large fields; complex: high amount of natural and semi-natural structures, small fields) using GPS-based movement data. Of 34,710 seeds of 44 plant species fed, about one in 200 (0.51%) germinated from the feces, yielding seedlings of 33 species. Germination after complete digestion was positively related to denser seeds with comparatively small surface area and a relatively slender, elongated shape, suggesting that, for hares, the most critical seed characteristics for successful endozoochorous seed dispersal minimize exposure of the seed to the stomach and the associated digestive system. Furthermore, we could show that a hare's retention time is long enough to interconnect different habitats, especially grasslands and fields.
Thus, besides other seed dispersal mechanisms, this most likely allows hares to act as effective mobile linkers contributing to ecosystem stability in times of agricultural intensification, not only in complex but also in simple landscapes.
Anthropogenic activities such as continuous landscape change threaten biodiversity at both local and regional scales. Metacommunity models attempt to combine these two scales and continuously contribute to a better mechanistic understanding of how spatial processes and constraints, such as fragmentation, affect biodiversity. There is a strong consensus that such structural changes of the landscape tend to negatively affect the stability of metacommunities. However, the interplay between complex trophic communities and landscape structure in particular is not yet fully understood.
In the present dissertation, a metacommunity approach is used based on a dynamic and spatially explicit model that integrates population dynamics at the local scale and dispersal dynamics at the regional scale. This approach allows assessing the effects of complex spatial landscape components, such as habitat clustering, on complex species communities, as well as analysing the population dynamics of a single species. In addition to the impact of a fixed landscape structure, periodic environmental disturbances are also considered, in which a periodic change in habitat availability temporarily alters landscape structure, such as the seasonal drying of a water body.
On the local scale, the model results suggest that large-bodied animal species, such as predator species at high trophic positions, are more prone to extinction in a state of large patch isolation than smaller species at lower trophic levels.
Increased metabolic losses for species with a lower body mass lead to increased energy limitation for species at higher trophic levels and serve as an explanation for the predominant loss of these species. This effect is particularly pronounced for food webs in which species are more sensitive to increased metabolic losses through dispersal and changes in landscape structure.
In addition to the impact of food-web species composition on diversity, the strength of local foraging interactions likewise affects the synchronization of population dynamics. Reduced predation pressure leads to more asynchronous population dynamics, which benefits the stability of population dynamics as it reduces the risk of correlated extinction events among habitats. On the regional scale, two landscape aspects, mean patch isolation and the formation of local clusters of two patches, promote an increase in β-diversity. Yet the individual composition and robustness of the local species community equally explain a large proportion of the observed diversity patterns.
A combination of periodic environmental disturbance and patch isolation has a particular impact on the population dynamics of a species. While the periodic disturbance has a synchronizing effect, it can even override emerging asynchronous dynamics in a state of large patch isolation and unifies trends in synchronization between different species communities.
In summary, the findings underline a large local impact of species composition and interactions on local diversity patterns of a metacommunity. In comparison, landscape structures such as fragmentation have a negligible effect on local diversity patterns, but increase their impact for regional diversity patterns. In contrast, at the level of population dynamics, regional characteristics such as periodic environmental disturbance and patch isolation have a particularly strong impact and contribute substantially to the understanding of the stability of population dynamics in a metacommunity. These studies demonstrate once again the complexity of our ecosystems and the need for further analysis for a better understanding of our surrounding environment and more targeted conservation of biodiversity.
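The β-diversity discussed above can be operationalised in several ways; the dissertation does not state which metric it uses, but a common choice is Whittaker's multiplicative β (regional species richness divided by mean local richness). A minimal sketch under that assumption, with a hypothetical three-patch metacommunity:

```python
def whittaker_beta(patch_communities: list) -> float:
    """Multiplicative beta diversity: regional richness / mean local richness.

    One common operationalisation (Whittaker 1960); illustrative only, since
    the original work does not specify its beta-diversity metric.
    Each element of patch_communities is a set of species names in one patch.
    """
    gamma = len(set().union(*patch_communities))          # regional richness
    alpha = sum(len(c) for c in patch_communities) / len(patch_communities)
    return gamma / alpha

# Hypothetical metacommunity of three habitat patches
patches = [{"predator", "herbivore", "plant"},
           {"herbivore", "plant"},
           {"plant"}]
print(whittaker_beta(patches))  # gamma = 3, mean alpha = 2 → 1.5
```

Higher values indicate that local communities differ more strongly from one another, which is how patch isolation and local clustering can raise β-diversity even when local (α) richness declines.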
Kenya and Uganda are amongst the countries that, for different historical, political, and economic reasons, have embarked on law reform processes with regard to citizenship. In 2009, Uganda made provisions in its laws to allow citizens to hold dual citizenship, while Kenya’s 2010 constitution similarly introduced it, lifting the general prohibition on dual citizenship while retaining a ban on state officers, including the President and Deputy President, being dual nationals (Manby, 2018).
Against this background, I analysed why these countries, which previously held stringent laws and policies against dual citizenship, made this shift in close temporal proximity. Given their geo-political roles, location, and regional, continental, and international obligations, I conducted a comparative study of the processes, actors, impacts, and effects. The period from 2000 to 2010 was researched, that is, from when the debates on law reform emerged through the implementation of the reform processes, covering the actors involved and the implications.
According to Rubenstein (2000, p. 520), citizenship is observed in terms of “political institutions” that are free to act according to the will of, in the interests of, or with authority over, their citizenry. Institutions are emergent national or international higher-order factors above the individual level, reflecting the interests and political involvement of their actors without requiring recurring collective mobilisation or imposing intervention to realise these regularities. Transnational institutions are organisations with authority beyond single governments. Given their international obligations, I analysed the role of the UN, AU, and EAC in influencing the citizenship debates and reforms in Kenya and Uganda. Further, non-state actors, such as civil society, were considered.
Veblen (1899) describes institutions as a set of settled habits of thought common to the generality of men. Institutions function only because the rules involved are rooted in shared habits of thought and behaviour, although there is some ambiguity in the definition of the term “habit”. Whereas abstractions and definitions depend on different analytical procedures, institutions restrain some forms of action and facilitate others. Transnational institutions both restrict and aid behaviour. The famous “invisible hand” is nothing else but transnational institutions. Transnational theories, as applied to politics, posit two distinct forms of influence over policy and political action (Veblen, 1899). This influence and the durability of institutions are “a function of the degree to which they are instilled in political actors at the individual or organisational level, and the extent to which they thereby ‘tie up’ material resources and networks”. Against this background, transnational networks with connections to Kenya and Uganda were considered, alongside the diaspora from these two countries and their role in the debate and reforms on dual citizenship.
Sterian (2013, p. 310) notes that nation states may be vulnerable to institutional influence, and this vulnerability can pose a threat to a nation’s autonomy, political legitimacy, and democratic public law. Transnational institutions sometimes “collide with the sovereignty of the state when they create new structures for regulating cross-border relationships”. However, Griffin (2003) disputes that transnational institutional behaviour is premised on the principles of neutrality, impartiality, and independence. Transnational institutions have become a main target of lobby groups and civil society, consequently leading to excessive politicisation. Kenya and Uganda are member states not only of the broader African Union but also of the EAC, which has adopted elements of socio-economic uniformity. Therefore, in the comparative analysis, I examine the role of the East African Community and its partners in the dual citizenship debate in the two countries.
I argue in the analysis that it is not only important to be a citizen within Kenya or Uganda but also important to discover how the issue of dual citizenship is legally interpreted within the borders of each individual nation-state. In light of this discussion, I agree with Mamdani’s definition of the nation-state as a unique form of power introduced in Africa by colonial powers between 1880 and 1940 whose outcomes can be viewed as “debris of a modernist postcolonial project, an attempt to create a centralised modern state as the bearer of Westphalia sovereignty against the background of indirect rule” (Mamdani, 1996, p. xxii). I argue that this project has impacted the citizenship debate through the adopted legal framework of post colonialism, built partly on a class system, ethnic definitions, and political affiliation. I, however, insist that the nation-state should still be a vital custodian of the citizenship debate, not in any way denying the individual the rights to identity and belonging. The question then that arises is which type of nation-state? Mamdani (1996, p. 298) asserts that the core agenda that African states faced at independence was threefold: deracialising civil society; detribalising the native authority; and developing the economy in the context of unequal international relations. Post-independence governments grappled with overcoming the citizen and subject dichotomy through either preserving the customary in the name of “defending tradition against alien encroachment or abolishing it in the name of overcoming backwardness and embracing triumphant modernism”. Kenya and Uganda are among countries that have reformed their citizenship laws attesting to Mamdani’s latter assertion.
Mamdani’s (1996) assertions on how African states continue to deal with the issue of citizenship through either the defence of tradition against subjects or abolishing it in the name of overcoming backwardness and acceptance of triumphant modernism are based on the colonial legal theory and the citizen-subject dichotomy within Africa communities. To further create a wider perspective on legal theory, I argue that those assertions above, point to the historical divergence between the republican model of citizenship, which places emphasis on political agency as envisioned in Rousseau´s social contract, as opposed to the liberal model of citizenship, which stresses the legal status and protection (Pocock, 1995).
I therefore compare the contexts of both Kenya and Uganda, the actors, and the implications of transnationalism and post-nationalism for the citizens, the nation-state, and the region. I conclude by highlighting the shortcomings in the law reforms that allowed dual citizenship, further demonstrating an urgent need to address issues such as child statelessness, gendered nationality laws, and the rights of dual citizens. Ethnicity, a weak nation state, and inconsistent citizenship law reforms are closely linked to the historical factors of both countries. I further indicate the economic and political incentives that influenced the reforms.
Keywords: Citizenship, dual citizenship, nation state, republicanism, liberalism, transnationalism, post-nationalism
Charities typically ask potential donors repeatedly for a donation. These repeated requests might trigger avoidance behavior. Considering that, this paper analyzes the impact of offering an ask avoidance option on charitable giving. In a proposed utility framework, the avoidance option decreases the social pressure to donate. At the same time, it induces feelings of gratitude toward the fundraiser, which may lead to a reciprocal increase in donations. The results of a lab experiment designed to disentangle the two channels show no negative impact of the option to avoid repeated asking on donations. Instead, the full model indicates a positive impact of the reciprocity channel. This finding suggests that it might be beneficial for charities to introduce an ask avoidance option during high-frequency fundraising campaigns.
Forming as a result of the collision between the Adriatic and European plates, the Alpine orogen exhibits significant lithospheric heterogeneity due to the long history of interplay between these plates, other continental and oceanic blocks in the region, and inherited features from preceding orogenies. This implies that the thermal and rheological configuration of the lithosphere also varies significantly throughout the region. Lithology and temperature/pressure conditions exert a first-order control on rock strength, principally via thermally activated creep deformation, and on the depth distribution of the brittle-ductile transition zone, which can be regarded as the lower bound of the seismogenic zone. Therefore, they influence the spatial distribution of seismicity within a lithospheric plate. In light of this, accurately constrained geophysical models of the heterogeneous Alpine lithospheric configuration are crucial for describing regional deformation patterns. However, despite the amount of research focussing on the area, different hypotheses still exist regarding the present-day lithospheric state and how it might relate to the present-day seismicity distribution.
This dissertation seeks to constrain the Alpine lithospheric configuration through a fully 3D integrated modelling workflow that utilises multiple geophysical techniques and integrates all available data sources. The aim is therefore to shed light on how lithospheric heterogeneity may influence the heterogeneous patterns of seismicity distribution observed within the region. This was accomplished through the generation of: (i) 3D seismically constrained structural and density models of the lithosphere, adjusted to match the observed gravity field; (ii) 3D models of the lithospheric steady-state thermal field, adjusted to match observed wellbore temperatures; and (iii) 3D rheological models of long-term lithospheric strength, with the results of each step used as input for the following steps.
Results indicate that the highest strengths within the crust (~1 GPa) and upper mantle (>2 GPa) occur at temperatures characteristic of specific phase transitions (more felsic crust: 200–400 °C; more mafic crust and upper lithospheric mantle: ~600 °C), with almost all seismicity occurring in these regions. However, inherited lithospheric heterogeneity significantly influences this pattern: seismicity in the thinner and more mafic Adriatic crust (~22.5 km, 2800 kg m−3, 1.30E-06 W m-3) occurs up to higher temperatures (~600 °C) than in the thicker and more felsic European crust (~27.5 km, 2750 kg m−3, 1.3–2.6E-06 W m-3, ~450 °C). Correlations between seismicity in the orogen forelands and lithospheric strength also show different trends, reflecting their different tectonic settings. As such, events in the plate-boundary setting of the southern foreland correlate with the integrated lithospheric strength, occurring mainly in the weaker lithosphere surrounding the strong Adriatic indenter. Events in the intraplate setting of the northern foreland instead correlate with crustal strength, occurring mainly in the weaker and warmer crust beneath the Upper Rhine Graben.
Therefore, the findings presented in this work not only represent a state-of-the-art understanding of the lithospheric configuration beneath the Alps and their forelands, but also a significant improvement in the knowledge of the features that influence the occurrence of seismicity within the region. This highlights the importance of considering the lithospheric state when explaining observed patterns of deformation.
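The brittle-ductile transition referred to above is conventionally located where frictional (Byerlee-type) strength exceeds thermally activated creep strength. The sketch below illustrates this standard yield-strength-envelope construction with generic, assumed parameter values (friction coefficient, pore-pressure factor, wet-quartzite-like creep parameters, linear geotherm); it shows the general method only, not the dissertation's actual 3D rheological model:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def brittle_strength(z_m, rho=2800.0, g=9.81, f=0.6, lam=0.36):
    """Byerlee-type frictional strength (Pa) at depth z_m (metres).
    f (friction coefficient) and lam (pore-fluid factor) are assumed values."""
    return f * rho * g * z_m * (1.0 - lam)

def ductile_strength(T_K, strain_rate=1e-15, A=1e-25, n=3.0, Q=2.2e5):
    """Power-law dislocation creep strength (Pa) at temperature T_K.
    Wet-quartzite-like parameters, assumed for illustration only."""
    return (strain_rate / A) ** (1.0 / n) * math.exp(Q / (n * R * T_K))

def strength_at_depth(z_km, geotherm_K_per_km=25.0, T_surface=283.0):
    """Rock strength = min(brittle, ductile) under a linear geotherm."""
    T = T_surface + geotherm_K_per_km * z_km
    return min(brittle_strength(z_km * 1e3), ductile_strength(T))

# The brittle-ductile transition is the shallowest depth at which the
# creep strength drops below the frictional strength (0.1 km steps).
bdt_km = next(z / 10 for z in range(1, 500)
              if ductile_strength(283.0 + 25.0 * z / 10)
              < brittle_strength(z / 10 * 1e3))
```

With these illustrative parameters the transition falls near 20 km depth; in the models described above its position varies laterally with the heterogeneous crustal composition and thermal field, which is what shapes the depth extent of the seismogenic zone.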
The ubiquitin-proteasome system (UPS) is a cellular cascade involving three enzymatic steps of protein ubiquitination that target proteins to the 26S proteasome for proteolytic degradation. Several components of the UPS have been shown to be central to the regulation of defense responses during infections with phytopathogenic bacteria. Upon recognition of the pathogen, local defense is induced, which also primes the plant for systemic acquired resistance (SAR) and enhanced immune responses upon challenging infections. Here, ubiquitinated proteins were shown to accumulate locally and systemically during infections with Psm and after treatment with the SAR-inducing metabolites salicylic acid (SA) and pipecolic acid (Pip). The role of the 26S proteasome in local defense has been described in several studies, but its potential role during SAR remains elusive and was therefore investigated in this project by characterizing the Arabidopsis proteasome mutants rpt2a-2 and rpn12a-1 during priming and infections with Pseudomonas. Bacterial replication assays reveal decreased basal and systemic immunity in both mutants, which was verified at the molecular level by showing impaired activation of defense and SAR genes. rpt2a-2 and rpn12a-1 accumulate wild-type-like levels of camalexin but less SA. Exogenous SA treatment restores local PR gene expression but does not rescue the SAR phenotype. An RNAseq experiment with Col-0 and rpt2a-2 reveals weak or absent induction of defense genes in the proteasome mutant during priming. Thus, a functional 26S proteasome was found to be required for the induction of SAR, while compensatory mechanisms can still be initiated.
E3 ubiquitin ligases conduct the last step of substrate ubiquitination and thereby convey specificity to proteasomal protein turnover. Using RNAseq, 11 E3 ligases were found to be differentially expressed during priming in Col-0, of which plant U-box 54 (PUB54) and ariadne 12 (ARI12) were further investigated to gain a deeper understanding of their potential roles during priming.
PUB54 was shown to be expressed during priming and/or triggering with virulent Pseudomonas. The pub54-I and pub54-II mutants display local and systemic defense comparable to Col-0. The heavy-metal-associated protein 35 (HMP35) was identified as a potential substrate of PUB54 in yeast, which was verified in vitro and in vivo. PUB54 was shown to be an active E3 ligase, exhibiting auto-ubiquitination activity and ubiquitinating HMP35. Proteasomal turnover of HMP35 was observed, indicating that PUB54 targets HMP35 for ubiquitination and subsequent proteasomal degradation. Furthermore, hmp35-I shows increased resistance in bacterial replication assays. Thus, HMP35 is potentially a negative regulator of defense that is targeted and ubiquitinated by PUB54 to regulate downstream defense signaling. ARI12 is transcriptionally activated during priming or triggering and hyperinduced during combined priming and triggering. Its expression is not inducible by the defense-related hormone salicylic acid (SA) and is dampened in npr1 and fmo1 mutants, indicating dependence on functional SA and Pip pathways, respectively. ARI12 accumulates systemically after priming with SA, Pip, or Pseudomonas. ari12 mutants are not altered in resistance, but stable overexpression leads to increased resistance in local and systemic tissue. During priming and triggering, unbalanced ARI12 levels (i.e., knockout or overexpression) lead to enhanced FMO1 activation, indicating a role of ARI12 in Pip-mediated SAR. ARI12 was shown to be an active E3 ligase with auto-ubiquitination activity, likely required for activation, with an identified ubiquitination site at K474. Potential substrates identified by mass spectrometry have not yet been verified by additional experiments but suggest an involvement of ARI12 in the regulation of ROS, which in turn regulates Pip-dependent SAR pathways.
Thus, data from this project provide strong indications of an involvement of the 26S proteasome in SAR and identify the two hitherto barely characterized E3 ubiquitin ligases PUB54 and ARI12 as novel components of plant defense.
Objective
The Caribbean is an important global biodiversity hotspot. Adaptive radiations there lead to many speciation events within a limited period and hence are particularly prominent biodiversity generators. A prime example is the freshwater fish genus Limia, endemic to the Greater Antilles. Within Hispaniola, nine species have been described from a single isolated site, Lake Miragoâne, pointing towards extraordinary sympatric speciation. This study examines the evolutionary history of the Limia species in Lake Miragoâne relative to their congeners throughout the Caribbean.
Results
For 12 Limia species, we obtained almost complete sequences of the mitochondrial cytochrome b gene, a well-established marker for lower-level taxonomic relationships. We included sequences of six further Limia species from GenBank (total N = 18 species). Our phylogenies are in concordance with other published phylogenies of Limia. There is strong support that the species found in Lake Miragoâne in Haiti are monophyletic, confirming a recent local radiation. Within Lake Miragoâne, speciation is likely extremely recent, leading to incomplete lineage sorting in the mtDNA. Future studies using multiple unlinked genetic markers are needed to disentangle the relationships within the Lake Miragoâne clade.
Different forms of methodological and ontological naturalism constitute the current near-orthodoxy in analytic philosophy. Many prominent figures have called naturalism a (scientific) image (Sellars, W. 1962. “Philosophy and the Scientific Image of Man.” In Wilfrid Sellars, Science, Perception, Reality, 1–40. Ridgeview Publishing), a Weltanschauung (Loewer, B. 2001. “From Physics to Physicalism.” In Physicalism and its Discontents, edited by C. Gillett, and B. Loewer. Cambridge: Cambridge University Press; Stoljar, D. 2010. Physicalism. Routledge), or even a “philosophical ideology” (Kim, J. 2003. “The American Origins of Philosophical Naturalism.” Journal of Philosophical Research 28: 83–98). This suggests that naturalism is indeed something over-and-above an ordinary philosophical thesis (e.g. in contrast to the justified true belief-theory of knowledge). However, these thinkers fail to tease out the host of implications this idea – naturalism being a worldview – presents. This paper draws on (somewhat underappreciated) remarks of Dilthey and Jaspers on the concept of worldviews (Weltanschauung, Weltbild) in order to demonstrate that naturalism as a worldview is a presuppositional background assumption which is left untouched by arguments against naturalism as a thesis. The concluding plea is (in order to make dialectical progress) to re-organize the existing debate on naturalism in a way that treats naturalism not as a first-order philosophical claim, but rather shifts its focus on naturalism’s status as a worldview.
This open access book presents a topical, comprehensive and differentiated analysis of Germany’s public administration and reforms. It provides an overview on key elements of German public administration at the federal, Länder and local levels of government as well as on current reform activities of the public sector. It examines the key institutional features of German public administration; the changing relationships between public administration, society and the private sector; the administrative reforms at different levels of the federal system and numerous sectors; and new challenges and modernization approaches like digitalization, Open Government and Better Regulation. Each chapter offers a combination of descriptive information and problem-oriented analysis, presenting key topical issues in Germany which are relevant to an international readership.
Background: A growing body of research has documented negative effects of sexualization in the media on individuals’ self-objectification. This research is predominantly built on studies examining traditional media, such as magazines and television, and on young female samples. Furthermore, longitudinal studies are scarce, and studies examining mediators of the relationship are missing. The first aim of the present PhD thesis was to investigate the relations between the use of sexualized interactive media and social media and self-objectification. The second aim of this work was to examine the presumed processes in understudied samples, such as males and females beyond college age, thus investigating the moderating roles of age and gender. The third aim was to shed light on possible mediators of the relation between sexualized media and self-objectification.
Method: The research aims were addressed within the scope of four studies. In an experiment, women’s self-objectification and body satisfaction were measured after playing a video game with a sexualized vs. a nonsexualized character that was either personalized or generic. The second study investigated the cross-sectional links between sexualized television use and both self-objectification and the consideration of cosmetic surgery in a sample of women across a broad age spectrum, examining the role of age in these relations. The third study looked at the cross-sectional link between male and female sexualized images on Instagram and their associations with self-objectification in a sample of male and female adolescents. Using a two-wave longitudinal design, the fourth study examined sexualized video game and Instagram use as predictors of adolescents’ self-objectification. Path models were conceptualized for the second, third, and fourth studies, in which media use predicted body surveillance via appearance comparisons (Study 4), thin-ideal internalization (Study 2, 3, 4), muscular-ideal internalization (Study 3, 4), and valuing appearance (all studies).
Results: The results of the experimental study revealed no effect of sexualized video game characters on women’s self-objectification and body satisfaction. No moderating effect of personalization emerged. Sexualized television use was associated with consideration of cosmetic surgery via body surveillance and valuing appearance for women of all ages in Study 2, while no moderating effect of age was found. Study 3 revealed that seeing sexualized male images on Instagram was indirectly associated with higher body surveillance via muscular-ideal internalization for boys and girls. Sexualized female images were indirectly linked to higher body surveillance via thin-ideal internalization and valuing appearance over competence only for girls. The longitudinal analysis of Study 4 showed no moderating effect of gender: for boys and girls, sexualized video game use at T1 predicted body surveillance at T2 via appearance comparisons, thin-ideal internalization, and valuing appearance over competence. Furthermore, the use of sexualized Instagram images at T1 predicted body surveillance at T2 via valuing appearance.
Conclusion: The findings show that sexualization in the media is linked to self-objectification across a variety of media formats and within diverse groups of people. While the longitudinal study indicates that sexualized media predict self-objectification over time, the experimental null findings warrant caution regarding this temporal order. The results demonstrate that several mediating variables might be involved in this link. Possible implications for research and practice, such as intervention programs and policy-making, are discussed.
Media artists have been struggling for financial survival ever since media art came into being. The non-material value of the artwork, a provocative attitude towards the traditional art world, and the originally anti-capitalist mindset of the movement make it particularly difficult to provide a constructive solution. However, a cultural entrepreneurial approach can be used to build a framework for finding a balance between culture and business while ensuring that the cultural mission remains the top priority.
The presence of impermeable surfaces in urban areas hinders natural drainage and directs the surface runoff to storm drainage systems with finite capacity, which makes these areas prone to pluvial flooding. The occurrence of pluvial flooding depends on the existence of minimal areas for surface runoff generation and concentration. Detailed hydrologic and hydrodynamic simulations are computationally expensive and resource-intensive. This study compared and evaluated the performance of two simplified methods for identifying urban pluvial flood-prone areas, namely the fill–spill–merge (FSM) method and the topographic wetness index (TWI) method, and used the TELEMAC-2D hydrodynamic numerical model for benchmarking and validation. The FSM method uses common GIS operations to identify flood-prone depressions from a high-resolution digital elevation model (DEM). The TWI method employs the maximum likelihood method (MLE) to probabilistically calibrate a TWI threshold (τ) based on the inundation maps from a 2D hydrodynamic model for a given spatial window (W) within the urban area. We found that the FSM method clearly outperforms the TWI method both conceptually and in terms of model performance.
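The depression-identification step at the core of FSM-style methods can be illustrated with a small priority-flood sketch on a DEM grid. This is a generic illustration of the idea, not the authors' implementation; the toy grid, function name, and simple four-neighbour drainage rule are assumptions.

```python
import heapq

def priority_flood(dem):
    """Fill depressions in a DEM grid. Cells raised by the fill mark
    potential ponding (flood-prone) areas, as in FSM-style methods."""
    rows, cols = len(dem), len(dem[0])
    filled = [row[:] for row in dem]
    visited = [[False] * cols for _ in range(rows)]
    pq = []
    # Seed the queue with all boundary cells (water can drain off the edge).
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(pq, (dem[r][c], r, c))
                visited[r][c] = True
    while pq:
        level, r, c = heapq.heappop(pq)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr][nc]:
                visited[nr][nc] = True
                # A neighbour below the current spill level lies in a depression.
                filled[nr][nc] = max(dem[nr][nc], level)
                heapq.heappush(pq, (filled[nr][nc], nr, nc))
    return filled

# A 4x4 toy DEM with an interior depression that spills over the "4" cell.
dem = [[5, 5, 5, 5],
       [5, 1, 2, 5],
       [5, 2, 1, 5],
       [5, 5, 4, 5]]
filled = priority_flood(dem)
depth = [[f - o for f, o in zip(frow, orow)] for frow, orow in zip(filled, dem)]
```

Here `depth` is nonzero exactly for the depression cells, whose water level is set by the lowest spill point on the rim.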
Crochet is a popular handcraft all over the world. While other techniques such as knitting or weaving have received technical support over the years through machines, crochet is still a purely manual craft. Not only the act of crocheting itself is manual, but also the process of creating instructions for new crochet patterns, which is barely supported by domain-specific digital solutions. This leads to unstructured and often also ambiguous and erroneous pattern instructions. In this report, we propose a concept to digitally represent crochet patterns. This format incorporates crochet techniques, which allows domain-specific support for crochet pattern designers during the pattern creation and instruction writing process. As contributions, we present a thorough domain analysis, the concept of a graph structure used as a domain-specific language to specify crochet patterns, and a prototype of a projectional editor using the graph as the representation format of patterns together with a diagramming system to visualize them in 2D and 3D. By analyzing the domain, we learned about crochet techniques and the pain points of designers in their pattern creation workflow. These insights are the basis on which we defined the pattern representation. To evaluate our concept, we built a prototype demonstrating the feasibility of the concept, and we tested the software with professional crochet designers, who approved of the concept.
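The idea of a graph-based pattern representation can be sketched roughly as follows: stitches are nodes, and "worked into" relations are edges back to earlier stitches. The stitch kinds, class names, and serialization below are hypothetical simplifications, not the report's actual format.

```python
from dataclasses import dataclass, field

@dataclass
class Stitch:
    kind: str                      # e.g. "ch" (chain), "sc" (single crochet)
    inserted_into: list = field(default_factory=list)  # indices of anchor stitches

@dataclass
class Pattern:
    stitches: list = field(default_factory=list)

    def add(self, kind, into=()):
        """Append a stitch node, optionally linked to earlier stitches."""
        self.stitches.append(Stitch(kind, list(into)))
        return len(self.stitches) - 1

    def instructions(self):
        # Naively serialize the graph back into written-style instructions.
        return " ".join(s.kind for s in self.stitches)

# A foundation chain of 4, then one single crochet worked into the 2nd chain.
p = Pattern()
chain = [p.add("ch") for _ in range(4)]
p.add("sc", into=[chain[1]])
```

Keeping the anchor edges explicit is what lets a tool detect ambiguous or erroneous instructions (e.g. a stitch worked into a non-existent anchor) instead of leaving that to the reader of a written pattern.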
We investigate models for incremental binary classification, an example of supervised online learning. Our starting point is a model for human and machine learning suggested by E. M. Gold.
In the first part, we consider incremental learning algorithms that use all of the available binary labeled training data to compute the current hypothesis. For this model, we observe that the algorithm can be assumed to always terminate and that the distribution of the training data does not influence learnability. This remains true if we pose additional delayable requirements, i.e., requirements that remain valid despite a hypothesis output delayed in time. Additionally, we consider the non-delayable requirement of consistent learning. Our corresponding results underpin the claim that delayability is a suitable structural property to describe and collectively investigate a major part of learning success criteria. Our first theorem states the pairwise implications or incomparabilities between an established collection of delayable learning success criteria, the so-called complete map. In particular, the learning algorithm can be assumed to change its last hypothesis only when it is inconsistent with the current training data. Such learning behaviour is called conservative.
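Conservative learning over binary labeled data can be sketched as follows. The finite concept class, the first-consistent tie-breaking rule, and all names are illustrative assumptions, far simpler than the computability-theoretic model of the thesis.

```python
def conservative_learner(concept_class, data):
    """Return the hypothesis sequence of a conservative learner: the
    hypothesis changes only when it contradicts the data seen so far.

    concept_class: list of candidate concepts (sets over some universe)
    data: sequence of (element, label) pairs with label in {0, 1}
    """
    def consistent(h, seen):
        return all((x in h) == bool(y) for x, y in seen)

    seen, hyp, trace = [], concept_class[0], []
    for x, y in data:
        seen.append((x, y))
        if not consistent(hyp, seen):
            # Only now may the learner revise: pick the first consistent concept.
            hyp = next(h for h in concept_class if consistent(h, seen))
        trace.append(hyp)
    return trace

concepts = [{0}, {0, 1}, {1, 2}]
data = [(0, 1), (1, 1), (2, 0)]
trace = conservative_learner(concepts, data)
```

In this run the learner keeps `{0}` after the first datum, is forced to revise to `{0, 1}` by the second, and conservatively keeps `{0, 1}` on the third, which it already classifies correctly.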
By referring to learning functions, we obtain a hierarchy of approximative learning success criteria. Here we allow the concept hypothesized by the learning algorithm to differ from the concept to be learned in an increasing finite number of errors. Moreover, we observe a duality depending on whether vacillations between infinitely many different correct hypotheses are still considered successful learning behaviour. This contrasts with the vacillatory hierarchy for learning from solely positive information.
We also consider a hypothesis space located between the two most common hypothesis space types in the nearby relevant literature and provide the complete map.
In the second part, we model more efficient learning algorithms. These update their hypothesis based on the current datum, without direct access to past training data. We focus on iterative (hypothesis-based) and BMS (state-based) learning algorithms. Iterative learning algorithms use the last hypothesis and the current datum to infer the new hypothesis.
Past research analyzed, for example, the above-mentioned pairwise relations between delayable learning success criteria when learning from purely positive training data. We compare delayable learning success criteria with respect to iterative learning algorithms, for learning from either exclusively positive or binary labeled data. The existence of concept classes that can be learned by an iterative learning algorithm, but not in a conservative way, had already been observed, showing that conservativeness is restrictive. An additional requirement arising from cognitive science research is non-U-shapedness, demanding that the learning algorithm never abandons a correct hypothesis. We show that forbidding U-shapes also restricts iterative learners from binary labeled data.
In order to compute the next hypothesis, BMS learning algorithms refer to the currently observed datum and the current state of the learning algorithm. For learning algorithms equipped with an infinite amount of states, we provide the complete map. A learning success criterion is semantic if it still holds when the learning algorithm outputs other parameters standing for the same classifier. Syntactic (non-semantic) learning success criteria, for example conservativeness and syntactic non-U-shapedness, restrict BMS learning algorithms. For proving the equivalence of the syntactic requirements, we refer to witness-based learning processes. In these, every change of the hypothesis is justified by a witness from the training data that is correctly classified later on. Moreover, for every semantic delayable learning requirement, iterative and BMS learning algorithms are equivalent. In case the considered learning success criterion incorporates syntactic non-U-shapedness, BMS learning algorithms can learn more concept classes than iterative learning algorithms.
The proofs are combinatorial, inspired by the investigation of formal languages, or employ results from computability theory, such as infinite recursion theorems (fixed point theorems).
Skilled reading requires information processing of the fixated and the not-yet-fixated words to generate precise control of gaze. Over the last 30 years, experimental research provided evidence that word processing is distributed across the perceptual span, which permits recognition of the fixated (foveal) word as well as preview of parafoveal words to the right of fixation. However, theoretical models have been unable to differentiate the specific influences of foveal and parafoveal information on saccade control. Here we show how parafoveal word difficulty modulates spatial and temporal control of gaze in a computational model to reproduce experimental results. In a fully Bayesian framework, we estimated model parameters for different models of parafoveal processing and carried out large-scale predictive simulations and model comparisons for a gaze-contingent reading experiment. We conclude that mathematical modeling of data from gaze-contingent experiments permits the precise identification of pathways from parafoveal information processing to gaze control, uncovering potential mechanisms underlying the parafoveal contribution to eye-movement control.
During sentence reading the eyes quickly jump from word to word to sample visual information with the high acuity of the fovea. Lexical properties of the currently fixated word are known to affect the duration of the fixation, reflecting an interaction of word processing with oculomotor planning. While low-level properties of words in the parafovea can likewise affect the current fixation duration, results concerning the influence of lexical properties have been ambiguous (Drieghe, Rayner, & Pollatsek, 2008; Kliegl, Nuthmann, & Engbert, 2006). Experimental investigations of such lexical parafoveal-on-foveal effects using the boundary paradigm have instead shown that lexical properties of parafoveal previews affect fixation durations on the upcoming target words (Risse & Kliegl, 2014). However, the results were potentially confounded with effects of preview validity.
The notion of parafoveal processing of lexical information challenges extant models of eye movements during reading. Models containing serial word processing assumptions have trouble explaining such effects, as they usually couple successful word processing to saccade planning, resulting in skipping of the parafoveal word. Although models with parallel word processing are less restricted, in the SWIFT model (Engbert, Longtin, & Kliegl, 2002) only processing of the foveal word can directly influence the saccade latency.
Here we combine the results of a boundary experiment (Chapter 2) with a predictive modeling approach using the SWIFT model, where we explore mechanisms of parafoveal inhibition in a simulation study (Chapter 4). We construct a likelihood function for the SWIFT model (Chapter 3) and utilize the experimental data in a Bayesian approach to parameter estimation (Chapter 3 & 4).
The experimental results show a substantial effect of parafoveal preview frequency on fixation durations on the target word, which can be clearly distinguished from the effect of preview validity. Using the eye movement data from the participants, we demonstrate the feasibility of the Bayesian approach even for a small set of estimated parameters, by comparing summary statistics of experimental and simulated data. Finally, we show that the SWIFT model can account for the lexical preview effects when a mechanism for parafoveal inhibition is added. The effects of preview validity were modeled best when processing-dependent saccade cancellation was added for invalid trials. In the simulation study, only the control condition of the experiment was used for parameter estimation, allowing for cross-validation. Simultaneously, the number of free parameters was increased. High correlations of summary statistics demonstrate the capabilities of the parameter estimation approach. Taken together, the results advocate for a better integration of experimental data into computational modeling via parameter estimation.
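The logic of comparing summary statistics of experimental and simulated data can be illustrated with a deliberately minimal likelihood-free sketch. The exponential "model", the parameter grid, and the mean as sole summary statistic are stand-ins far simpler than the SWIFT likelihood actually constructed in the thesis.

```python
import random

def simulate_durations(rate, n, rng):
    # Toy stand-in for a stochastic fixation-duration model:
    # exponential waiting times with the given rate parameter.
    return [rng.expovariate(rate) for _ in range(n)]

def estimate_rate(observed, grid, n_sim=2000, seed=1):
    """Pick the parameter whose simulated summary statistic (the mean)
    is closest to that of the observed data -- the simplest form of
    summary-statistic-based, likelihood-free inference."""
    rng = random.Random(seed)
    obs_mean = sum(observed) / len(observed)
    best, best_dist = None, float("inf")
    for rate in grid:
        sim = simulate_durations(rate, n_sim, rng)
        dist = abs(sum(sim) / n_sim - obs_mean)
        if dist < best_dist:
            best, best_dist = rate, dist
    return best

rng = random.Random(0)
observed = simulate_durations(4.0, 2000, rng)   # synthetic data, "true" rate 4.0
estimate = estimate_rate(observed, grid=[1.0, 2.0, 4.0, 8.0])
```

Recovering a known generating parameter from synthetic data, as above, is the same sanity check that makes cross-validation against held-out experimental conditions meaningful.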
To achieve a sustainable energy economy, it is necessary to turn away from the combustion of fossil fuels as a means of energy production and switch to renewable sources. However, their temporal availability does not match societal consumption needs, meaning that renewably generated energy must be stored at times of peak generation and allocated during periods of peak consumption. Electrochemical energy storage (EES) in general is well suited for this due to its infrastructural independence and scalability. The lithium-ion battery (LIB) takes a special place among EES systems due to its energy density and efficiency, but the scarcity and uneven geological occurrence of minerals and ores vital for many cell components, and hence the high and fluctuating costs, will decelerate its further distribution.
The sodium ion battery (SIB) is a promising successor to LIB technology, as the fundamental setup and cell chemistry is similar in the two systems. Yet, the most widespread negative electrode material in LIBs, graphite, cannot be used in SIBs, as it cannot store sufficient amounts of sodium at reasonable potentials. Hence, another carbon allotrope, non-graphitizing or hard carbon (HC) is used in SIBs. This material consists of turbostratically disordered, curved graphene layers, forming regions of graphitic stacking and zones of deviating layers, so-called internal or closed pores.
The structural features of HC have a substantial impact on the charge-potential curve exhibited by the carbon when it is used as the negative electrode in an SIB. At defects and edges, an adsorption-like mechanism of sodium storage is prevalent, causing a sloping voltage curve ill-suited for the practical application in SIBs, whereas a constant voltage plateau of relatively high capacity is found immediately after the sloping region, which recent research attributed to the deposition of quasimetallic sodium into the closed pores of HC.
Literature on the general mechanism of sodium storage in HCs, and especially on the role of the closed pores, is abundant, but research on the influence of the pore geometry and chemical nature of the HC on low-potential sodium deposition is still at an early stage. Therefore, the scope of this thesis is to investigate these relationships using suitable synthetic and characterization methods. Materials of precisely known morphology, porosity, and chemical structure are prepared, in clear distinction to commonly obtained ones, and their impact on the sodium storage characteristics is observed. Electrochemical impedance spectroscopy combined with distribution of relaxation times analysis is further established as a technique to study the sodium storage process, in addition to classical direct-current techniques, and an equivalent circuit model is proposed to qualitatively describe the HC sodiation mechanism based on the recorded data. The obtained knowledge is used to develop a method for preparing closed-porous and non-porous materials from open-porous ones, not only proving the necessity of closed pores for efficient sodium storage but also providing a method for effective pore closure and hence for increasing the sodium storage capacity and efficiency of carbon materials.
The insights obtained and methods developed within this work hence not only contribute to the better understanding of the sodium storage mechanism in carbon materials of SIBs, but can also serve as guidance for the design of efficient electrode materials.
Background
The metabolic syndrome (MetS) is a risk cluster for a number of secondary diseases. The implementation of prevention programs requires early detection of individuals at risk. However, access to health care providers is limited in structurally weak regions. Brandenburg, a rural federal state in Germany, has an especially high MetS prevalence and disease burden. This study aims to validate and test the feasibility of a setup for mobile diagnostics of MetS and its secondary diseases, to evaluate the MetS prevalence and its association with moderating factors in Brandenburg and to identify new ways of early prevention, while establishing a “Mobile Brandenburg Cohort” to reveal new causes and risk factors for MetS.
Methods
In a pilot study, setups for mobile diagnostics of MetS and secondary diseases will be developed and validated. A van will be equipped as an examination room using point-of-care blood analyzers and by mobilizing standard methods. In study part A, these mobile diagnostic units will be placed at different locations in Brandenburg to locally recruit 5000 participants aged 40-70 years. They will be examined for MetS and advice on nutrition and physical activity will be provided. Questionnaires will be used to evaluate sociodemographics, stress perception, and physical activity. In study part B, participants with MetS, but without known secondary diseases, will receive a detailed mobile medical examination, including MetS diagnostics, medical history, clinical examinations, and instrumental diagnostics for internal, cardiovascular, musculoskeletal, and cognitive disorders. Participants will receive advice on nutrition and an exercise program will be demonstrated on site. People unable to participate in these mobile examinations will be interviewed by telephone. If necessary, participants will be referred to general practitioners for further diagnosis.
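For illustration, MetS screening of the kind performed in study part A can be expressed as counting risk components and flagging participants who meet at least three of five criteria. The cut-offs below follow the commonly used harmonized definition; the study's exact criteria are not stated in the abstract, so these thresholds and all names are assumptions.

```python
def mets_components(sex, waist_cm, triglycerides, hdl, sys_bp, dia_bp, glucose):
    """Count MetS components (waist in cm, lipids/glucose in mg/dL, BP in mmHg).
    Thresholds follow the commonly used harmonized definition and are an
    assumption here, not the study's documented cut-offs."""
    flags = [
        waist_cm >= (102 if sex == "m" else 88),   # abdominal obesity
        triglycerides >= 150,                      # elevated triglycerides
        hdl < (40 if sex == "m" else 50),          # reduced HDL cholesterol
        sys_bp >= 130 or dia_bp >= 85,             # elevated blood pressure
        glucose >= 100,                            # elevated fasting glucose
    ]
    return sum(flags)

def has_mets(**kw):
    # MetS is flagged when at least three of the five components are present.
    return mets_components(**kw) >= 3

example = has_mets(sex="m", waist_cm=105, triglycerides=180,
                   hdl=45, sys_bp=128, dia_bp=80, glucose=110)
```

A rule of this shape is what makes point-of-care measurements in a mobile unit sufficient for an immediate risk classification and referral decision.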
Discussion
The mobile diagnostics approach enables early detection of individuals at risk, and their targeted referral to local health care providers. Evaluation of the MetS prevalence, its relation to risk-increasing factors, and the “Mobile Brandenburg Cohort” create a unique database for further longitudinal studies on the implementation of home-based prevention programs to reduce mortality, especially in rural regions.
Trial registration
German Clinical Trials Register, DRKS00022764; registered 07 October 2020—retrospectively registered.
Cyber-physical systems often encompass complex concurrent behavior with timing constraints and probabilistic failures on demand. The analysis whether such systems with probabilistic timed behavior adhere to a given specification is essential. When the states of the system can be represented by graphs, the rule-based formalism of Probabilistic Timed Graph Transformation Systems (PTGTSs) can be used to suitably capture structure dynamics as well as probabilistic and timed behavior of the system. Model checking support for PTGTSs w.r.t. properties specified using Probabilistic Timed Computation Tree Logic (PTCTL) has already been presented. Moreover, for timed graph-based runtime monitoring, Metric Temporal Graph Logic (MTGL) has been developed for stating metric temporal properties on identified subgraphs and their structural changes over time. In this paper, we (a) extend MTGL to the Probabilistic Metric Temporal Graph Logic (PMTGL) by allowing for the specification of probabilistic properties, (b) adapt our MTGL satisfaction checking approach to PTGTSs, and (c) combine the approaches for PTCTL model checking and MTGL satisfaction checking to obtain a Bounded Model Checking (BMC) approach for PMTGL. In our evaluation, we apply an implementation of our BMC approach in AutoGraph to a running example.
Spinal muscular atrophy (SMA) and Duchenne muscular dystrophy (DMD) are both rare genetic neuromuscular diseases with progressive loss of motor ability. The neuromotor developmental course of these diseases is well documented. In contrast, there is only little evidence about characteristics of general and specific cognitive development. In both conditions the final motor outcome is characterized by an inability to move autonomously: children with SMA never accomplish independent motor exploration of their environment, while children with DMD do but later lose this ability again. These profound differences in developmental pathways might affect the cognitive development of SMA vs. DMD children, as cognition is shaped by individual motor experiences. DMD patients show impaired executive functions, working memory, and verbal IQ, whereas only motor ability seems to be impaired in SMA. Advanced cognitive capacity in SMA may serve as a compensatory mechanism for achievement in education, career progression, and social satisfaction. This study aimed to relate differences in basic numerical concepts and arithmetic achievement between SMA and DMD patients to differences in their motor development and the resulting sensorimotor and environmental experiences. Horizontal and vertical spatial-numerical associations were explored in SMA/DMD children aged 6 to 12 years using the random number generation task. Furthermore, arithmetic skills as well as general cognitive ability were assessed. Groups differed in spatial number processing as well as in arithmetic and domain-general cognitive functions. Children with SMA showed no horizontal and even reversed vertical spatial-numerical associations. Children with DMD, on the other hand, revealed patterns of spatial-numerical associations comparable to healthy developing children. From the embodied cognition perspective, early sensorimotor experience does play a role in the development of mental number representations.
However, it remains open whether and how this becomes relevant for the acquisition of higher order cognitive and arithmetic skills.
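One common way to score a random number generation task for directional (spatial-numerical) bias is to compare ascending and descending steps in the produced sequence. The index below is a generic illustration, not necessarily the measure used in this study.

```python
def directional_bias(sequence):
    """Share of ascending minus descending steps in a produced number
    sequence; a value away from zero indicates a directional bias
    in random number generation."""
    ups = sum(1 for a, b in zip(sequence, sequence[1:]) if b > a)
    downs = sum(1 for a, b in zip(sequence, sequence[1:]) if b < a)
    steps = len(sequence) - 1
    return (ups - downs) / steps

# Hypothetical sequence produced by a participant asked to name digits randomly.
produced = [3, 5, 8, 2, 4, 9, 1, 6]
bias = directional_bias(produced)
```

A positive value indicates a tendency to step upward through the number range, which under spatial-numerical association accounts would correspond to a rightward or upward drift along the mental number line.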
Detecting and categorizing particular entities in the environment are important visual tasks that humans have had to solve at various points in our evolutionary time. The question arises whether characteristics of entities that were of ecological significance for humans play a particular role during the development of visual categorization.
The current project addressed this question by investigating the effects of developing visual abilities, visual properties, and ecological significance on categorization early in life. Our stimuli were monochromatic photographs of structure-like assemblies and surfaces taken from three categories: vegetation, non-living natural elements, and artifacts. A set of computational and rated visual properties was assessed for these stimuli. Three empirical studies applied coherent research concepts and methods in young children and adults, comprising (a) two card-sorting tasks with preschool children (age: 4.1-6.1 years) and adults (age: 18-50 years) which assessed classification and similarity judgments, and (b) a gaze-contingent eye-tracking search task which investigated the impact of visual properties and category membership on 8-month-olds' ability to segregate visual structure. Because eye-tracking with infants still poses challenges, a methodological study (c) assessed the effect of infant eye-tracking procedures on data quality in 8- to 12-month-old infants and adults.
In the categorization tasks we found that category membership and visual properties impacted the performance of all participant groups. Sensitivity to the respective categories varied between tasks and over the age groups. For example, artifact images hindered infants' visual search but were classified best by adults, whereas sensitivity to vegetation was highest during similarity judgments. Overall, preschool children relied less on visual properties than adults, but some properties (e.g., rated depth, shading) were drawn upon similarly strongly. In children and infants, depth predicted task performance more strongly than shape-related properties. Moreover, children and infants were sensitive to variations in the complexity of low-level visual statistics. These results suggest that the classification of visual structures, and attention to particular visual properties, is affected by the functional or ecological significance these categories and properties may have for each of the respective age groups.
Based on this, the project highlights the importance of further developmental research on visual categorization with naturalistic, structure-like stimuli. As intended with the current work, this would allow establishing important links between developmental and adult research.
Botulinum neurotoxin (BoNT) is produced by the anaerobic bacterium Clostridium botulinum. It is one of the most potent toxins found in nature and can enter motor neurons (MN) to cleave proteins necessary for neurotransmission, resulting in flaccid paralysis. The toxin has applications in both traditional and esthetic medicine. Since BoNT activity varies between batches despite identical protein concentrations, the activity of each lot must be assessed. The gold standard method is the mouse lethality assay, in which mice are injected with a BoNT dilution series to determine the dose at which half of the animals suffer death from peripheral asphyxia. Ethical concerns surrounding the use of animals in toxicity testing necessitate the creation of alternative model systems to measure the potency of BoNT.
Prerequisites of a successful model are that it is human-specific; that it monitors the complete toxic pathway of BoNT; and that it is highly sensitive, at least in the range of the mouse lethality assay. One model system was developed by our group, in which human SIMA neuroblastoma cells were genetically modified to express a reporter protein (GLuc), which is packaged into neurosecretory vesicles, and which, upon cellular depolarization, can be released, or inhibited by BoNT, simultaneously with neurotransmitters. This assay has great potential, but includes the inherent disadvantages that the GLuc sequence was randomly inserted into the genome and that the tumor cells have only limited sensitivity and specificity to BoNT. This project aims to address these deficits: induced pluripotent stem cells (iPSCs) were genetically modified by the CRISPR/Cas9 method to insert the GLuc sequence into the AAVS1 genomic safe harbor locus, precluding genetic disruption through non-specific integrations. Furthermore, GLuc was modified to associate with signal peptides that direct it to the lumen of both large dense core vesicles (LDCV), which transport neuropeptides, and synaptic vesicles (SV), which package neurotransmitters. Finally, the modified iPSCs were differentiated into motor neurons, the true physiological target of BoNT, and hypothetically the most sensitive and specific cells available for the MoN-Light BoNT assay.
iPSCs were transfected to incorporate one of three constructs directing GLuc into LDCVs, one construct directing GLuc into SVs, or one "no-tag" GLuc control construct. The LDCV constructs fused GLuc with the signal peptides for proopiomelanocortin (hPOMC-GLuc), chromogranin-A (CgA-GLuc), and secretogranin II (SgII-GLuc), which are all proteins found in the LDCV lumen. The SV construct comprises a VAMP2-GLuc fusion sequence, exploiting the SV membrane-associated protein synaptobrevin (VAMP2). The no-tag construct expresses GLuc non-specifically throughout the cell and was created as a comparator for the localization of the vesicle-directed GLuc.
The clones were characterized to ensure that the GLuc sequence was incorporated only into the AAVS1 safe harbor locus and that the signal peptides directed GLuc to the correct vesicles. The accurate insertion of GLuc was confirmed by PCR with primers flanking the AAVS1 safe harbor locus, capable of simultaneously amplifying wildtype and modified alleles. The PCR amplicons, along with an insert-specific amplicon from candidate clones, were Sanger-sequenced to confirm the correct genomic region and sequence of the inserted DNA. Off-target integrations were analyzed with the newly developed dc-qcnPCR method, whereby the insert DNA was quantified by qPCR against autosomal and sex-chromosome-encoded genes. While the majority of clones had off-target inserts, at least one on-target clone was identified for each construct.
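The core arithmetic of such a qPCR-based copy-number estimate can be sketched with a minimal ΔCt calculation. This is my own simplification for illustration, not the published dc-qcnPCR procedure: the function name is hypothetical, and it assumes perfect amplification efficiency and a reference gene present at two copies per diploid genome.

```python
def copy_number(ct_insert: float, ct_reference: float,
                reference_copies: int = 2, efficiency: float = 2.0) -> float:
    """Estimate insert copies per genome from qPCR Ct values.

    Assumes the reference gene is present at `reference_copies` per
    diploid genome and perfect doubling per cycle (efficiency = 2.0).
    A lower Ct for the insert means more template, i.e. more copies.
    """
    return reference_copies * efficiency ** (ct_reference - ct_insert)

# An on-target single-allele clone should give roughly one insert copy
# per genome; off-target integrations inflate this estimate.
```

Under this simplification, an insert amplifying one cycle later than the two-copy reference corresponds to a single integration.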
Finally, immunofluorescence was utilized to localize GLuc in the selected clones. In iPSCs, the vesicle-directed GLuc should travel through the Golgi apparatus along the neurosecretory pathway, while the no-tag GLuc should not follow this pathway. Initial analyses excluded the CgA-GLuc and SgII-GLuc clones due to poor-quality protein visualization. The colocalization of GLuc with the Golgi was analyzed by confocal microscopy and quantified: GLuc was strongly colocalized with the Golgi in the hPOMC-GLuc clone (r = 0.85±0.09), moderately in the VAMP2-GLuc clone (r = 0.65±0.01), and, as expected, only weakly in the no-tag GLuc clone (r = 0.44±0.10). Confocal microscopy of differentiated MNs was used to analyze the colocalization of GLuc with proteins associated with LDCVs and SVs: SgII in the hPOMC-GLuc clone (r = 0.85±0.08) and synaptophysin in the VAMP2-GLuc clone (r = 0.65±0.07). GLuc was also expressed in the same cells as the MN-associated protein Islet1.
A significant portion of GLuc was found in the correct cell type and compartment. However, in the MoN-Light BoNT assay, the hPOMC-GLuc clone could not be provoked to reliably release GLuc upon cellular depolarization. The depolarization protocol for hPOMC-GLuc must be further optimized to produce reliable and specific release of GLuc upon exposure to a stimulus. On the other hand, the VAMP2-GLuc clone could be provoked to release GLuc upon exposure to the muscarinic and nicotinic agonist carbachol. Furthermore, upon simultaneous exposure to the calcium chelator EGTA, the carbachol-provoked release of GLuc could be significantly repressed, indicating the detection of GLuc was likely associated with vesicular fusion at the presynaptic terminal. The application of the VAMP2-GLuc clone in the MoN-Light BoNT assay must still be verified, but the results thus far indicate that this clone could be appropriate for the application of BoNT toxicity assessment.
Stable isotopes represent a unique approach to provide insights into the ecology of organisms. δ13C and δ15N have specifically been used to obtain information on trophic ecology and food-web interactions. Trophic discrimination factors (TDF, Δ13C and Δ15N) describe the isotopic fractionation occurring from diet to consumer tissue, and these factors are critical for obtaining precise estimates within any application of δ13C and δ15N values. It is widely acknowledged that metabolism influences TDF, being responsible for different TDF between tissues of variable metabolic activity (e.g., liver vs. muscle tissue) or species body size (small vs. large). However, the connection between the variation of metabolism occurring within a single species during its ontogeny and TDF has rarely been considered. Here, we conducted a 9-month feeding experiment to report Δ13C and Δ15N of muscle and liver tissues for several weight classes of Eurasian perch (Perca fluviatilis), a widespread teleost often studied using stable isotopes, but without established TDF for feeding on a natural diet. In addition, we assessed the relationship between the standard metabolic rate (SMR) and TDF by measuring the oxygen consumption of the individuals. Our results showed a significant negative relationship of SMR with Δ13C, and a significant positive relationship of SMR with Δ15N, of muscle tissue, but not with the TDF of liver tissue. SMR varies inversely with size, which translated into significantly different TDF of muscle tissue between size classes. In summary, our results emphasize the role of metabolism in shaping tissue-specific TDF (i.e., Δ13C and Δ15N of muscle tissue) and especially highlight the substantial differences between individuals of different ontogenetic stages within a species. Our findings thus have direct implications for the use of stable isotope data and the application of stable isotopes in food-web studies.
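The quantities in this abstract follow standard definitions: a δ value is the per-mil deviation of a sample's heavy-to-light isotope ratio from a standard, and a TDF is the tissue-minus-diet difference in δ. A minimal sketch with illustrative numbers (the values below are not from the study):

```python
def delta_value(r_sample: float, r_standard: float) -> float:
    """Isotope delta in per mil: relative deviation of the sample's
    heavy-to-light isotope ratio from an international standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

def tdf(delta_tissue: float, delta_diet: float) -> float:
    """Trophic discrimination factor, e.g. D15N = d15N(tissue) - d15N(diet)."""
    return delta_tissue - delta_diet

# Illustrative: muscle d15N of 12.4 per mil on a diet of 9.0 per mil
# gives a D15N of 3.4 per mil.
muscle_tdf = tdf(12.4, 9.0)
```

A positive Δ15N like this reflects the usual enrichment of the heavy isotope from diet to consumer tissue; the study's point is that this offset varies with metabolic rate and ontogenetic stage.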
Mechanical muscular oscillations are rarely the objective of investigations aiming to identify a biomarker for Parkinson's disease (PD). Therefore, the aim of this study was to investigate whether or not this specific motor output differs between PD patients and controls. The novelty is that patients without tremor were investigated performing a unilateral isometric motor task. The force of the arm flexors and the forearm acceleration (ACC) were recorded, as well as the mechanomyography of the biceps brachii (MMGbi), brachioradialis (MMGbra) and pectoralis major (MMGpect) muscles, using a piezoelectric-sensor-based system during a unilateral motor task at 70% of the maximal voluntary isometric contraction (MVIC). The frequency, a power-frequency ratio, the amplitude variation, the slope of amplitudes, and their interlimb asymmetries were analysed. The results indicate that the oscillatory behavior of muscular output in PD without tremor deviates from controls in some parameters: significant differences appeared for the power-frequency ratio (p = 0.001, r = 0.43) and for the amplitude variation (p = 0.003, r = 0.34) of MMGpect. The interlimb asymmetries differed significantly concerning the power-frequency ratio of MMGbi (p = 0.013, r = 0.42) and MMGbra (p = 0.048, r = 0.39), as well as regarding the mean frequency (p = 0.004, r = 0.48) and amplitude variation of MMGpect (p = 0.033, r = 0.37). The mean (M) and coefficient of variation (CV) of the slope of ACC differed significantly (M: p = 0.022, r = 0.33; CV: p = 0.004, r = 0.43). All other parameters showed no significant differences between PD and controls. It remains open whether this altered mechanical muscular output is reproducible and specific for PD.
The olfactomotor system is mainly investigated by examining sniffing in reaction to olfactory stimuli. The motor output of respiratory-independent muscles has seldom been considered with regard to possible influences of smells. The Adaptive Force (AF) characterizes the capability of the neuromuscular system to adapt to external forces in a holding manner and was suggested to be more vulnerable to possible interfering stimuli due to the underlying complex control processes. The aim of this pilot study was to measure the effects of olfactory inputs on the AF of the hip and elbow flexors, respectively. The AF of 10 subjects was examined manually by experienced testers while the subjects smelled sniffing sticks with neutral, pleasant or disgusting odours. The reaction force and the limb position were recorded by a handheld device. The results show, inter alia, a significantly lower maximal isometric AF and a significantly higher AF at the onset of oscillations when perceiving disgusting odours compared to pleasant or neutral ones (p < 0.001). The adaptive holding capacity seems to reflect the functionality of neuromuscular control, which can be impaired by disgusting olfactory inputs. An undisturbed, properly functioning neuromuscular system appears to be characterized by proper length-tension control and by an earlier onset of mutual oscillations during an external force increase. This highlights the strong connection between olfaction and motor control, also with regard to respiratory-independent muscles.
Mechanotendography (MTG) is a method for analyzing the mechanical oscillations of tendons during muscular actions. The aim of this investigation was to evaluate the technical reliability of a piezo-based measurement system used for MTG. The reliability measurements were performed using audio samples played by a subwoofer; the pressure waves generated in this way were recorded by the piezo-based measurement system. A 40 Hz sine oscillation and four different MTG signals formerly recorded in vivo were converted into audio files and used as test signals. Five trials with each audio file were performed, and one audio file was used for repetition trials on another day. The signals' agreement was estimated by Spearman correlation coefficients (MCC), intraclass correlation coefficients (ICC(3,1)), Cronbach's alpha (CA), and mean distances (MD). All parameters were compared between repetition trials and randomized matched signals. The repetition trials showed high correlations (MCC: 0.86 ± 0.13; ICC: 0.89 ± 0.12; CA: 0.98 ± 0.03) and low MD (0.03 ± 0.03 V) and differed significantly from the randomized matched signals (MCC: 0.15 ± 0.10; ICC: 0.17 ± 0.09; CA: 0.37 ± 0.16; MD: 0.19 ± 0.01 V) (p = 0.001-0.043). This indicates excellent reliability of the measurement system. Presuming the skin above superficial tendons oscillates adequately, we consider this tool valid for application to the musculoskeletal system.
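Two of the agreement measures used here, a rank correlation and the mean distance between two recordings, can be sketched in a few lines of numpy on a synthetic 40 Hz test signal. This is my own illustration, not the study's analysis pipeline; ICC(3,1) and Cronbach's alpha are omitted, and the tie-free Spearman formula below is a simplification.

```python
import numpy as np

def spearman(a: np.ndarray, b: np.ndarray) -> float:
    """Spearman rank correlation (no tie correction): Pearson on ranks."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

def mean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute amplitude difference between two recordings (V)."""
    return float(np.mean(np.abs(a - b)))

# A repeated recording of the same test signal should correlate highly
# and show a small mean distance; unrelated signals should not.
t = np.linspace(0.0, 1.0, 1000)
sine = np.sin(2 * np.pi * 40 * t)  # synthetic 40 Hz test signal
repeat = sine + np.random.default_rng(0).normal(0.0, 0.01, t.size)
rho = spearman(sine, repeat)
md = mean_distance(sine, repeat)
```

With small sensor noise the repetition reproduces the ordering of the samples almost perfectly, which is the pattern the reported MCC and MD values reflect.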
In sports and movement sciences, isometric muscle function is usually measured by pushing against a stable resistance. Subjectively, however, one can hold or push isometrically, and several investigations suggest a distinction between these forms. The aim of this study was to investigate whether the two forms of isometric muscle action can be distinguished by objective parameters in an interpersonal setting. Twenty subjects were grouped into 10 same-sex pairs, in which one partner performed the pushing isometric muscle action (PIMA) while the other executed the holding isometric muscle action (HIMA). The partners had contact at the distal forearms via an interface that included a strain gauge and an acceleration sensor. The mechanical oscillations of the triceps brachii muscle (MMGtri), its tendon (MTGtri) and the abdominal muscle (MMGobl) were recorded by a piezoelectric-sensor-based measurement system. Each partner performed three 15 s trials (80% of the maximal voluntary isometric contraction, MVIC) and two fatiguing trials (90% MVIC) during PIMA and HIMA, respectively. Parameters used to compare PIMA and HIMA were the mean frequency, the normalized mean amplitude, the amplitude variation, the power in the frequency range of 8 to 15 Hz, a special power-frequency ratio, and the number of task failures during HIMA or PIMA (the partner who quit the task). A "HIMA failure" occurred in 85% of trials (p < 0.001). No significant differences between PIMA and HIMA were found for the mean frequency and normalized amplitude. The MMGobl showed significantly higher values of amplitude variation (15 s: p = 0.013; fatiguing: p = 0.007) and of the power-frequency ratio (15 s: p = 0.040; fatiguing: p = 0.002) during HIMA, and a higher power in the range of 8 to 15 Hz during PIMA (15 s: p = 0.001; fatiguing: p = 0.011). MMGtri and MTGtri showed no significant differences. Based on these findings, it is suggested that holding and pushing isometric muscle actions can be distinguished objectively, whereby a more complex neural control is assumed for HIMA.
Due to global climate change, providing food security for a growing world population is a major challenge. Abiotic stressors in particular have a strong negative effect on crop yield. To develop climate-adapted crops, a comprehensive understanding of the molecular alterations in the response to varying levels of environmental stress is required. High-throughput or 'omics' technologies can help to identify key regulators and pathways of abiotic stress responses. Besides obtaining omics data, tools and statistical analyses also need to be designed and evaluated to obtain reliable biological results.
To address these issues, I conducted three different studies covering two omics technologies. In the first study, I used transcriptomic data from two polymorphic Arabidopsis thaliana accessions, namely Col-0 and N14, to evaluate seven computational tools for their ability to map and quantify Illumina single-end reads. Between 92% and 99% of the reads were mapped against the reference sequence. The raw count distributions obtained from the different tools were highly correlated. When performing a differential gene expression analysis between plants exposed to 20 °C or 4 °C (cold acclimation), a large pairwise overlap between the mappers was obtained. In the second study, I obtained transcript data from ten different Oryza sativa (rice) cultivars by PacBio isoform sequencing, which can capture full-length transcripts. De novo reference transcriptomes were reconstructed, resulting in 38,900 to 54,500 high-quality isoforms per cultivar. Isoforms were collapsed to reduce sequence redundancy and evaluated, e.g., for protein completeness (BUSCO), transcript length, and the number of unique transcripts per gene locus. For the heat- and drought-tolerant aus cultivar N22, I identified around 650 unique and novel transcripts, of which 56 were significantly differentially expressed in developing seeds during combined drought and heat stress. In the last study, I measured and analyzed the changes in the metabolite profiles of eight rice cultivars exposed to high night temperature (HNT) stress and grown during the dry and wet seasons in the field in the Philippines. Season-specific changes in metabolite levels, as well as in agronomic parameters, were identified, and metabolic pathways causing a yield decline under HNT conditions were suggested.
In conclusion, the comparison of mapper performances can help plant scientists decide on the right tool for their data. The de novo reconstruction of reference transcriptomes for cultivars without a genome sequence provides a targeted, cost-efficient approach to identify novel stress-responsive genes in any organism. With the metabolomics approach for HNT stress in rice, I identified stress- and season-specific metabolites which might be used as molecular markers for crop improvement in the future.
Previous studies have not considered the potential influence of maturity status on the relationship between mental imagery and change of direction (CoD) speed in youth soccer. Accordingly, this cross-sectional study examined the association between mental imagery and CoD performance in young elite soccer players of different maturity status. Forty young male soccer players, aged 10-17 years, were assigned to two groups according to their predicted age at peak height velocity (PHV) (pre-PHV; n = 20 and post-PHV; n = 20). Participants were evaluated on soccer-specific tests of CoD with (CoDBall-15m) and without (CoD-15m) the ball. Participants completed the movement imagery questionnaire (MIQ) with its three-dimensional structure: internal visual imagery (IVI), external visual imagery (EVI), and kinesthetic imagery (KI). The post-PHV players achieved significantly better results than pre-PHV players in EVI (ES = 1.58, large; p < 0.001), CoD-15m (ES = 2.09, very large; p < 0.001) and CoDBall-15m (ES = 1.60, large; p < 0.001). Correlations differed significantly between maturity groups: for the pre-PHV group, a very large negative correlation was observed between CoDBall-15m and KI (r = -0.73, p = 0.001). For the post-PHV group, large negative correlations were observed between CoD-15m and IVI (r = -0.55, p = 0.011), EVI (r = -0.62, p = 0.003), and KI (r = -0.52, p = 0.020). A large negative correlation of CoDBall-15m with EVI (r = -0.55, p = 0.012) and a very large correlation with KI (r = -0.79, p = 0.001) were also observed. This study provides evidence of the theoretical and practical usefulness of imagery for CoD tasks. We recommend that sport psychology specialists, coaches, and athletes integrate imagery for CoD tasks in pre-pubertal soccer players to further improve CoD-related performance.
Polymeric films and coatings derived from semi-crystalline oligomers are of relevance for medical and pharmaceutical applications. In this context, the material surface is of particular importance, as it mediates the interaction with the biological system. Two-dimensional (2D) systems and ultrathin films are used to model this interface. However, conventional techniques for their preparation, such as spin coating or dip coating, have disadvantages, since the morphology and chain packing of the generated films can only be controlled to a limited extent and adsorption on the substrate affects the behavior of the films. Detaching and transferring films prepared by such techniques requires additional sacrificial or supporting layers, and free-standing or self-supporting domains are usually of very limited lateral extension. The aim of this thesis is to study and modulate crystallization, melting, degradation, and chemical reactions in ultrathin films of oligo(ε-caprolactone)s (OCLs) with different end-groups under ambient conditions. Here, oligomeric ultrathin films are assembled at the air-water interface using the Langmuir technique. The water surface allows lateral movement and aggregation of the oligomers, which, unlike solid substrates, enables dynamic physical and chemical interaction of the molecules. Parameters like surface pressure (π), temperature and mean molecular area (MMA) allow controlled assembly and manipulation of the oligomer molecules. The π-MMA isotherms, Brewster angle microscopy (BAM), and interfacial infrared spectroscopy assist in detecting morphological and physicochemical changes in the film. Ultrathin films can easily be transferred to a solid silicon surface via the Langmuir-Schaefer (LS) method (horizontal substrate dipping). Here, the films transferred onto silicon are investigated using atomic force microscopy (AFM) and optical microscopy and are compared to the films on the water surface.
The semi-crystalline morphology (lamellar thickness, crystal number density, and lateral crystal dimensions) is tuned by the chemical structure of the OCL end-groups (hydroxy or methacrylate) and by the crystallization temperature (Tc; 12 or 21 °C) or the MMA. Compression to a low MMA of ~2 Å² results in the formation of a highly crystalline film consisting of tightly packed single crystals. The preparation of tightly packed single crystals on a cm² scale is not possible by conventional techniques. Upon transfer to a solid surface, these films retain their crystalline morphology, whereas amorphous films undergo dewetting.
The melting temperature (Tm) of OCL single crystals at the water and the solid surface is found to be proportional to the inverse crystal thickness and is generally lower than the Tm of bulk PCL. The impact of the OCL end-groups on the melting behavior is most noticeable at the air-solid interface, where the methacrylate end-capped OCL (OCDME) melts at lower temperatures than the hydroxy end-capped OCL (OCDOL). Comparing the underlying substrates, melting/recrystallization of OCL ultrathin films is possible at lower temperatures at the air-water interface than at the air-solid interface, where recrystallization is not observed. Recrystallization at the air-water interface usually occurs at a higher temperature than the initial Tc.
Controlled degradation is crucial for the predictable performance of degradable polymeric biomaterials. Degradation of the ultrathin films was carried out under acidic (pH ~ 1) or enzymatic catalysis (lipase from Pseudomonas cepacia) on the water surface or on a silicon surface as transferred films. A high crystallinity strongly reduces the hydrolytic, but not the enzymatic, degradation rate. Regarding the influence of the end-groups, the methacrylate end-capped linear oligomer OCDME (~85 ± 2% end-group functionalization) degrades hydrolytically faster than the hydroxy end-capped linear oligomer OCDOL (~95 ± 3% end-group functionalization) at different temperatures. Differences in the acceleration of the hydrolytic degradation of semi-crystalline films were observed upon complete melting, partial melting of the crystals, or heating to temperatures close to Tm. Therefore, films of densely packed single crystals are suitable as barrier layers with thermally switchable degradation rates.
Chemical modification in ultrathin films is an intricate process applicable to connecting functionalized molecules, imparting stability, or creating stimuli-sensitive cross-links. The reaction of the end-groups was explored for transferred single crystals on a solid surface and for amorphous monolayers at the air-water interface. Bulky methacrylate end-groups are expelled to the crystal surface during chain-folded crystallization. The density of end-groups is inversely proportional to the molecular weight and hence very pronounced for oligomers. The methacrylate end-groups at the crystal surface, which are present at high concentration, can be used for further chemical functionalization. This is demonstrated by fluorescence microscopy after reaction with fluorescein dimethacrylate. The thermoswitching behavior (melting and recrystallization) of fluorescein-functionalized single crystals shows the temperature-dependent distribution of the chemically linked fluorescein moieties, which accumulate on the surfaces of the crystals and are homogeneously dispersed when the crystals are molten. In amorphous monolayers at the air-water interface, reversible cross-linking of hydroxy-terminated oligo(ε-caprolactone) monolayers with a dialdehyde (glyoxal) led to the formation of 2D networks. A pronounced contraction in area occurred for the 2D OCL films in dependence of surface pressure and time, indicating the reaction progress. Cross-linking inhibited crystallization and retarded the enzymatic degradation of the OCL film. Lowering the subphase pH to ~2 led to cleavage of the covalent acetal cross-links. Besides serving as model systems, these reversibly cross-linked films are applicable to drug delivery systems or as cell substrates modulating adhesion at biointerfaces.
This study examined the concurrent validity of an inverse dynamic (force computed from barbell acceleration [reference method]) and a work-energy (force computed from work at the barbell [alternative method]) approach to measure the mean vertical barbell force during the snatch using kinematic data from video analysis. For this purpose, the acceleration phase of the snatch was analyzed in thirty male medal winners of the 2018 weightlifting World Championships (age: 25.2±3.1 years; body mass: 88.9±28.6 kg). Vertical barbell kinematics were measured using a custom-made 2D real-time video analysis software. Agreement between the two computational approaches was assessed using Bland-Altman analysis, Deming regression, and Pearson product-moment correlation. Further, principal component analysis in conjunction with multiple linear regression was used to assess whether individual differences related to the two approaches are due to the waveforms of the acceleration time-series data. Results indicated no mean difference (p > 0.05; d = −0.04) and an extremely large correlation (r = 0.99) between the two approaches. Despite the high agreement, the total error of individual differences was 8.2% (163.0 N). The individual differences can be explained by a multiple linear regression model (R2adj = 0.86) on principal component scores from the principal component analysis of vertical barbell acceleration time-series waveforms. Findings from this study indicate that the individual errors of force measures can be associated with the inverse dynamic approach. This approach uses vertical barbell acceleration data from video analysis that is prone to error. Therefore, it is recommended to use the work-energy approach to compute mean vertical barbell force as this approach did not rely on vertical barbell acceleration.
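The two computational approaches compared in this study can be sketched from first principles. The functions below are my own reading of the abstract, not the authors' software, and the numbers in the consistency check are illustrative: the inverse dynamic route needs the barbell's mean vertical acceleration (noisy when differentiated from video positions), while the work-energy route needs only the lift height and the velocity at the end of the acceleration phase.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def force_inverse_dynamics(mass: float, mean_acceleration: float) -> float:
    """Reference method: F = m * (a + g), using the barbell's mean
    vertical acceleration over the acceleration phase."""
    return mass * (mean_acceleration + G)

def force_work_energy(mass: float, height: float, final_velocity: float) -> float:
    """Alternative method: distance-averaged force from the work-energy
    theorem, F * h = m*g*h + 0.5*m*v**2, i.e. F = m * (g + v**2 / (2*h))."""
    return mass * (G + final_velocity**2 / (2.0 * height))

# Consistency check under constant acceleration, where v = sqrt(2*a*h):
# with a = 4 m/s^2 over h = 0.8 m the two methods must agree exactly.
f_id = force_inverse_dynamics(150.0, 4.0)
f_we = force_work_energy(150.0, 0.8, math.sqrt(2.0 * 4.0 * 0.8))
```

The study's point is that real video-derived accelerations are not this clean, which is why the two methods diverge on individual lifts even though they agree on average.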
The present work focuses on minimising the use of toxic chemicals by integrating biobased monomers, derived from fatty acid esters, into photopolymerization processes, which are known to be environmentally friendly. The internal double bond of oleic acid was converted to a more reactive (meth)acrylate or epoxy group. The biobased starting materials, functionalized with different pendant groups, were used in photopolymerizable formulations to design new polymeric structures via free radical or cationic polymerization under an ultraviolet light emitting diode (UV-LED) (395 nm).
New (meth)acrylates (2, 3 and 4) consisting of two isomers, methyl 9-((meth)acryloyloxy)-10-hydroxyoctadecanoate / methyl 9-hydroxy-10-((meth)acryloyloxy)octadecanoate (2 and 3) and methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4), derived from an oleic acid mix, and ionic liquid monomers (1a and 1b) bearing a long alkyl chain were polymerized photochemically. The new (meth)acrylates are based on vegetable oil, and the ionic liquids (ILs) are non-volatile; both monomer types therefore represent a green approach. The photoinitiated polymerization of the new (meth)acrylates and ionic liquids was investigated in the presence of ethyl (2,4,6-trimethylbenzoyl) phenylphosphinate (Irgacure® TPO-L) or di(4-methoxybenzoyl)diethylgermane (Ivocerin®) as photoinitiator (PI). Additionally, the results were compared with those obtained for commercial 1,6-hexanediol di(meth)acrylates (5 and 6) to assess the potential of the biobased monomers to substitute petroleum-derived materials with renewable resources for possible coating applications. The kinetic study shows that methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4) and the ionic liquids (1a and 1b) reach quantitative conversion after irradiation, which is important for practical applications. On the other hand, heat generation occurs over a longer time during the polymerization of the biobased systems or ILs.
Poly(meth)acrylates derived from (meth)acrylated fatty acid methyl ester monomers generally show a low glass transition temperature because of the long aliphatic chains in the polymer structure, whereas poly(meth)acrylates containing aromatic groups have higher glass transition temperatures. Therefore, the new monomer 4-(4-methacryloyloxyphenyl)-butan-2-one (7) was synthesized, which can be a promising candidate for green techniques such as light-induced polymerization. The photokinetics of the new monomer 7 were investigated using Irgacure® TPO-L or Ivocerin® as photoinitiator. The reactivity of this monomer was compared to that of commercial 2-phenoxyethyl methacrylate (8) and phenyl methacrylate (9) on the basis of the differences in monomer structure. The photopolymer of 7 might be an interesting candidate for coating applications, combining quantitative conversion, high molecular weight, and a higher glass transition temperature.
In addition to the linear systems based on renewable materials, new crosslinked polymers were also designed in this thesis. For this purpose, an isomer mixture consisting of ethane-1,2-diyl bis(9-methacryloyloxy-10-hydroxy octadecanoate), ethane-1,2-diyl 9-hydroxy-10-methacryloyloxy-9'-methacryloyloxy-10'-hydroxy octadecanoate and ethane-1,2-diyl bis(9-hydroxy-10-methacryloyloxy octadecanoate) (10), which had not previously been described in the literature, was synthesized by derivatization of oleic acid. A crosslinked material based on this biobased monomer was produced by photoinitiated free radical polymerization using Irgacure® TPO-L or Ivocerin® as photoinitiator. Furthermore, the material properties were diversified by copolymerization of 10 with 4-(4-methacryloyloxyphenyl)-butan-2-one (7) or methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4). The influence of comonomers with different chemical structures on the network was investigated by analysis of the thermo-mechanical properties, the crosslink density, and the molecular weight between two crosslink junctions. An increase in the glass transition temperature caused by copolymerization of the biobased monomer 10 with an excess of 7 was confirmed by both differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA). On the other hand, the crosslink density decreased as a result of the copolymerization reactions due to the reduction in the mean functionality of the system. Furthermore, the surfaces were characterized by contact angle measurements using solvents of different polarity.
This work also contributes to the limited data on the cationic photopolymerization of epoxidized vegetable oils, in contrast to the widely investigated thermal curing of biorenewable epoxy monomers. In addition to 9,10-epoxystearic acid methyl ester (11), the new monomer bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) was synthesized from oleic acid. These two biobased epoxies were polymerized via cationic photoinitiated polymerization in the presence of bis(t-butyl)-iodonium-tetrakis(perfluoro-t-butoxy)aluminate ([Al(O-t-C4F9)4]-) and isopropylthioxanthone (ITX) as the photoinitiating system. The polymerization kinetics of 11 and 12 were investigated and compared with those of the commercial monomers 3,4-epoxycyclohexylmethyl-3',4'-epoxycyclohexane carboxylate (13), 1,4-butanediol diglycidyl ether (14), and the diglycidyl ether of bisphenol A (15). Both biobased epoxies (11 and 12) showed higher conversion than the cycloaliphatic epoxy (13) and lower reactivity than 1,4-butanediol diglycidyl ether (14). Additional network systems were designed by copolymerization of 12 and 15 in different molar ratios (1:1; 1:5; 1:9). The results show that the final conversion depends on the polymerization rate as well as on physical processes such as vitrification during polymerization. Moreover, the low glass transition temperature of the homopolymer derived from 12 was successfully increased by copolymerization with 15. On the other hand, the surface produced from 12 shows hydrophobic character; a higher concentration of the biobased diepoxy (12) in the copolymerizing mixture decreases the surface free energy.
The network systems were also investigated in light of rubber elasticity theory. The crosslinked polymer derived from the mixture of bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) and the diglycidyl ether of bisphenol-A (15) (molar ratio 1:5) exhibits an almost ideal polymer network.
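The rubber-elasticity analysis mentioned above is commonly based on the rubbery-plateau modulus. A minimal sketch of such a calculation, assuming the standard relations E' = 3νRT (ν = crosslink density) and M_c = ρ/ν; all numerical values below are illustrative placeholders, not results from the thesis:

```python
# Illustrative rubber-elasticity estimate of crosslink density and
# molar mass between crosslink junctions. Placeholder inputs, not
# the thesis' measured values.
R = 8.314  # gas constant, J/(mol*K)

def crosslink_density(E_rubbery_Pa, T_K):
    """Crosslink density nu (mol/m^3) from the rubbery storage modulus via E' = 3*nu*R*T."""
    return E_rubbery_Pa / (3.0 * R * T_K)

def molar_mass_between_crosslinks(rho_kg_m3, nu_mol_m3):
    """M_c (g/mol) between crosslink junctions from density rho and crosslink density nu."""
    return rho_kg_m3 / nu_mol_m3 * 1000.0  # kg/mol -> g/mol

nu = crosslink_density(E_rubbery_Pa=5.0e6, T_K=298.15)  # hypothetical 5 MPa plateau
Mc = molar_mass_between_crosslinks(1000.0, nu)          # hypothetical 1 g/cm^3 density
```

Copolymerization that lowers the mean functionality shows up in such an analysis as a lower ν and correspondingly higher M_c.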
Models for the prediction of monetary losses from floods mainly blend data deemed to represent a single flood type and region. Moreover, these approaches largely ignore indicators of preparedness and how predictors may vary between regions and events, challenging the transferability of flood loss models. We use a flood loss database of 1812 German flood-affected households to explore how Bayesian multilevel models can estimate normalised flood damage stratified by event, region, or flood process type. Multilevel models acknowledge natural groups in the data and allow each group to learn from the others. We obtain posterior estimates that differ between flood types, with credibly varying influences of water depth, contamination, duration, implementation of property-level precautionary measures, insurance, and previous flood experience; these influences overlap across most events or regions, however. We infer that the underlying damaging processes of distinct flood types deserve further attention. Each reported flood loss and affected region involved mixed flood types, likely explaining the uncertainty in the coefficients. Our results emphasise the need to consider flood types as an important step towards applying flood loss models elsewhere. We argue that failing to do so may unduly generalise the model and systematically bias loss estimations from empirical data.
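A multilevel (partial-pooling) loss model of the kind described, with households $i$ nested in groups $j$ (flood type, event, or region), can be written schematically as follows; this is an illustrative sketch, not the authors' exact specification:

$$y_i \sim \mathcal{N}\!\left(\alpha_{j[i]} + \boldsymbol{\beta}_{j[i]}^{\top}\mathbf{x}_i,\; \sigma^2\right), \qquad \alpha_j \sim \mathcal{N}(\mu_\alpha, \tau_\alpha^2), \qquad \beta_{jk} \sim \mathcal{N}(\mu_{\beta_k}, \tau_{\beta_k}^2),$$

where $y_i$ is the normalised loss, $\mathbf{x}_i$ collects predictors such as water depth, contamination, duration, precaution, insurance, and flood experience, and the group-level priors let sparsely observed groups borrow strength from the population-level means, which is what allows each group to learn from the others.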
The Internet of Things (IoT) is a system of physical objects that can be discovered, monitored, controlled, or interacted with by electronic devices that communicate over various networking interfaces and can eventually be connected to the wider Internet [Guinard and Trifa, 2016]. IoT devices are equipped with sensors and/or actuators and may be constrained in terms of memory, computational power, network bandwidth, and energy. Interoperability, the ability of different types of systems to work together smoothly, helps to manage such heterogeneous devices. There are four levels of interoperability: physical, network and transport, integration, and data. Data interoperability is further subdivided into syntactic and semantic interoperability. Semantic data describe the meaning of data and establish a common understanding of vocabulary, e.g., with the help of dictionaries, taxonomies, and ontologies. To achieve full interoperability, semantic interoperability is necessary.
Many organizations and companies are working on standards and solutions for interoperability in the IoT. However, commercial solutions produce vendor lock-in and focus on centralized approaches such as cloud-based solutions. This thesis proposes a decentralized approach, namely Edge Computing. Edge Computing is based on the concepts of mesh networking and distributed processing. This approach has the advantage that information collection and processing are placed closer to the sources of the information. The goals are to reduce traffic and latency and to be robust against a lossy or failed Internet connection.
We view the management of IoT devices from the network configuration management perspective. This thesis proposes a framework for network configuration management of heterogeneous, constrained IoT devices that uses semantic descriptions for interoperability. MYNO is an acronym for MQTT, YANG, NETCONF and Ontology. The NETCONF protocol is the IETF standard for network configuration management, and the MQTT protocol is the de facto standard in the IoT. We picked up the idea of the NETCONF-MQTT bridge, originally proposed by Scheffler and Bonneß [2017], and extended it with semantic device descriptions. These descriptions capture the device capabilities; they are based on the oneM2M Base Ontology and formalized using Semantic Web standards.
The novel aspect of this approach is the use of an ontology-based device description directly on a constrained device in combination with the MQTT protocol. The bridge was extended to query such descriptions. Through semantic annotation, the device capabilities become self-descriptive, machine-readable, and reusable.
The concept of a Virtual Device, based on semantic device descriptions, was introduced and implemented. A Virtual Device aggregates the capabilities of all devices in the edge network and therefore contributes to scalability: all devices can be controlled via a single RPC call.
The model-driven NETCONF Web-Client is generated automatically from the YANG model, which in turn is generated by the bridge from the semantic device description. The Web-Client provides a user-friendly interface, offers RPC calls, and displays sensor values. We demonstrate the feasibility of this approach in different use cases: sensor and actuator scenarios, as well as event configuration and triggering.
The semantic approach results in increased memory overhead. We therefore evaluated CBOR and RDF HDT for the optimization of ontology-based device descriptions for use on constrained devices. The evaluation shows that CBOR is not suitable for long strings and that RDF HDT is a promising candidate but is still only a W3C Member Submission. Finally, we used an optimized JSON-LD format for the syntax of the device descriptions.
One of the security tasks of network management is the distribution of firmware updates. The MYNO Update Protocol (MUP) was developed and evaluated on constrained CC2538dk devices in a 6LoWPAN network. The MYNO update process focuses on the freshness and authenticity of the firmware. The evaluation shows that it is challenging but feasible to deliver firmware updates to constrained devices using MQTT. As a new requirement for the next MQTT version, we propose a slicing feature to better support constrained devices: the MQTT broker should slice data to the maximum packet size specified by the device and transfer it slice by slice.
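The proposed broker-side slicing can be sketched as follows; this is a minimal illustration of the idea (split a payload into slices no larger than the device's maximum packet size, indexed for reassembly), not the MUP wire format:

```python
def slice_payload(payload: bytes, max_packet: int):
    """Split a firmware payload into slices of at most max_packet bytes,
    the maximum packet size advertised by the constrained device. Each
    slice carries its index and the total count so the receiver can
    reassemble the image and detect missing slices. Illustrative sketch."""
    if max_packet <= 0:
        raise ValueError("max_packet must be positive")
    chunks = [payload[i:i + max_packet] for i in range(0, len(payload), max_packet)]
    total = len(chunks)
    return [(idx, total, chunk) for idx, chunk in enumerate(chunks)]

def reassemble(slices):
    """Device-side reassembly: order slices by index and concatenate."""
    return b"".join(chunk for _, _, chunk in sorted(slices, key=lambda s: s[0]))
```

With such a scheme the broker, rather than the constrained device, bears the cost of fragmentation, which is the motivation behind the proposed MQTT feature.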
For the performance and scalability evaluation of the MYNO framework, we set up the High Precision Agriculture demonstrator with 10 ESP-32 NodeMCU boards at the edge of the network. The boards, connected via WLAN, were equipped with six sensors and two actuators. The performance evaluation shows that processing ontology-based descriptions on a Raspberry Pi 3B with RDFLib is demanding in terms of computational power. Nevertheless, it is feasible because it has to be done only once per device, during the discovery process.
The MYNO framework was tested with heterogeneous devices such as CC2538dk from Texas Instruments, Arduino Yún Rev 3, and ESP-32 NodeMCU, and IP-based networks such as 6LoWPAN and WLAN.
In summary, the MYNO framework shows that the semantic approach on constrained devices is feasible in the IoT.
Permafrost is warming globally, which leads to widespread permafrost thaw and impacts the surrounding landscapes, ecosystems and infrastructure. Ice-rich permafrost in particular is vulnerable to rapid and abrupt thaw resulting from the melting of excess ground ice. Local remote sensing studies have detected increasing rates of abrupt permafrost disturbances, such as thermokarst lake change and drainage, coastal erosion and retrogressive thaw slumps (RTS), in the last two decades. All of these indicate an acceleration of permafrost degradation.
In particular, retrogressive thaw slumps (RTS) are abrupt disturbances that can expand by up to several meters each year; they alter local and regional topographic gradients, hydrological pathways, and sediment and nutrient mobilisation into aquatic systems, and they increase permafrost carbon mobilisation. The feedback between abrupt permafrost thaw and the carbon cycle is a crucial component of the Earth system and a relevant driver in global climate models. However, an assessment of RTS at high temporal resolution, which would resolve the dynamic thaw processes and identify the main thaw drivers, is still lacking, as is a continental-scale assessment across diverse permafrost regions.
In northern high latitudes, optical remote sensing is restricted by environmental factors and frequent cloud cover. This reduces image availability and thus constrains the application of automated time series algorithms for detecting large-scale abrupt permafrost disturbances at high temporal resolution. Since models and observations suggest that abrupt permafrost disturbances will intensify, continental-scale disturbance products are required that allow for meaningful integration into Earth system models.
The main aim of this dissertation, therefore, is to enhance our knowledge of the spatial extent and temporal dynamics of abrupt permafrost disturbances in a large-scale assessment. To address this, three research objectives were posed:
1. Assess the comparability and compatibility of Landsat-8 and Sentinel-2 data for a combined use in multi-spectral analysis in northern high latitudes.
2. Adapt an image mosaicking method for Landsat and Sentinel-2 data to create combined mosaics of high quality as input for high temporal disturbance assessments in northern high latitudes.
3. Automatically map retrogressive thaw slumps on the landscape-scale and assess their high temporal thaw dynamics.
We assessed the comparability of Landsat-8 and Sentinel-2 imagery by spectral comparison of corresponding bands. Based on overlapping same-day acquisitions of Landsat-8 and Sentinel-2, we derived spectral bandpass adjustment coefficients for North Siberia to adjust Sentinel-2 reflectance values to resemble Landsat-8 and harmonise the two data sets. Furthermore, we adapted a workflow to combine Landsat and Sentinel-2 images into homogeneous and gap-free annual mosaics. We determined the number of images and cloud-free pixels, the spatial coverage, and the quality of the mosaics via spectral comparisons to demonstrate the relevance of the Landsat+Sentinel-2 mosaics. Lastly, we adapted the automatic disturbance detection algorithm LandTrendr for large-scale RTS identification and mapping at high temporal resolution. For this, we modified the temporal segmentation algorithm for annual gradual and abrupt disturbance detection to incorporate the annual Landsat+Sentinel-2 mosaics. We further parametrised the temporal segmentation and spectral filtering for optimised RTS detection, conducted additional spatial masking and filtering, and implemented a binary object classification algorithm with machine learning to derive RTS from the LandTrendr disturbance output. We applied the algorithm to North Siberia, covering an area of 8.1 × 10^6 km².
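Bandpass adjustment between sensors is commonly applied as a per-band linear transform fitted to coincident acquisitions. A minimal sketch of this step, assuming a linear form r_L8 ≈ slope · r_S2 + intercept; the coefficients below are illustrative placeholders, not the values derived in the study:

```python
def adjust_band(s2_reflectance, slope, intercept):
    """Apply a per-band linear bandpass adjustment so Sentinel-2 surface
    reflectance resembles Landsat-8 OLI. Coefficients are hypothetical,
    fitted in practice from overlapping same-day acquisitions."""
    return [slope * r + intercept for r in s2_reflectance]

# Hypothetical red-band coefficients applied to three example pixels
red_adjusted = adjust_band([0.05, 0.12, 0.30], slope=0.98, intercept=0.004)
```

Once every band of the Sentinel-2 scenes has been adjusted this way, both sensors can feed a single mosaicking and time series pipeline.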
The spectral band comparison between same-day Landsat-8 and Sentinel-2 acquisitions already showed an overall good fit between the two satellite products. Applying the derived spectral bandpass coefficients to adjust the Sentinel-2 reflectance values resulted in a near-perfect alignment between the same-day images. We therefore conclude that the spectral band adjustment succeeds in adjusting Sentinel-2 spectral values to those of Landsat-8 in North Siberia.
The number of available cloud-free images increased steadily between 1999 and 2019, especially after 2016 with the addition of Sentinel-2 images, signifying a greatly improved input database for the mosaicking workflow. In a comparison of annual mosaics, the Landsat+Sentinel-2 mosaics always fully covered the study areas, while Landsat-only mosaics contained data gaps for the same years. The spectral comparison of input images and Landsat+Sentinel-2 mosaics showed a high correlation between the input images and the mosaic bands, attesting to the high quality of the mosaicking results. Our results show that especially the mosaic coverage of northern, coastal areas was substantially improved with the Landsat+Sentinel-2 mosaics. By combining data from both Landsat and Sentinel-2 sensors, we reliably created input mosaics at high spatial resolution for comprehensive time series analyses.
This research presents the first automatically derived assessment of RTS distribution and temporal dynamics at continental scale. In total, we identified 50,895 RTS, primarily located in ice-rich permafrost regions, and found a steady increase in RTS-affected area between 2001 and 2019 across North Siberia. From 2016 onward, the RTS area increased more abruptly, indicating heightened thaw slump dynamics in this period. Overall, the RTS-affected area increased by 331% within the observation period. In contrast, five focus sites show spatiotemporal variability in their annual RTS dynamics, alternating between periods of increased and decreased RTS development, which suggests a close relationship to varying thaw drivers. The majority of the identified RTS were already active from 2000 onward and only a small proportion initiated during the assessment period, highlighting that the increase in RTS-affected area was mainly caused by the enlargement of existing RTS rather than by newly initiated ones.
Overall, this research demonstrated the advantages of combining Landsat and Sentinel-2 data in northern high latitudes and the improvements in spatial and temporal coverage of the combined annual mosaics. The mosaics form the database for automated disturbance detection to reliably map RTS and other abrupt permafrost disturbances at continental scale. The assessment at high temporal resolution further attests to the increasing impact of abrupt permafrost disturbances and likewise emphasises the spatio-temporal variability of thaw dynamics across landscapes. Obtaining such consistent disturbance products is necessary to parametrise regional and global climate change models, enabling an improved representation of the permafrost thaw feedback.
This dissertation was carried out as part of the international and interdisciplinary research training group StRATEGy, which investigates geological processes that act on different temporal and spatial scales and have shaped the southern Central Andes. This study focuses on claystones and carbonates of the Yacoraite Fm. that were deposited between the Maastrichtian and the Danian in the Cretaceous Salta Rift Basin. The former rift basin is located in northwest Argentina and is divided into the Tres Cruces, Metán-Alemanía and Lomas de Olmedo sub-basins. The overall motivation for this study was to gain new insight into the evolution of marine and lacustrine conditions during deposition of the Yacoraite Fm. in the Tres Cruces and Metán-Alemanía sub-basins. Other important aspects examined within the scope of this dissertation are the conversion of organic matter of the Yacoraite Fm. into oil and its genetic relationship to selected produced oils and natural oil seeps. The results of my study show that deposition of the Yacoraite Fm. began under marine conditions and that a lacustrine environment had developed by the end of deposition in the Tres Cruces and Metán-Alemanía basins. In general, the kerogen of the Yacoraite Fm. consists mainly of kerogen types II, III and II/III mixtures. Type III kerogen is mainly found in samples from the Yacoraite Fm. with low TOC values. Due to the adsorption of hydrocarbons on mineral surfaces (mineral matrix effect), the content of type III kerogen determined by Rock-Eval pyrolysis may be overestimated in these samples. Organic petrography shows that the organic particles of the Yacoraite Fm. consist mainly of alginites and some vitrinite-like particles. Pyrolysis-GC of the rock samples showed that the Yacoraite Fm. generates low-sulfur oils with a predominantly low-wax, paraffinic-naphthenic-aromatic composition as well as paraffinic, wax-rich oils.
Small proportions of paraffinic, low-wax oils and a gas-condensate-generating facies are also predicted. Here, too, mineral matrix effects were taken into account, as they can lead to a quantitative overestimation of the gas-forming character.
The results of an additional 1D basin modeling study show that the onset of oil generation (10% transformation ratio, TR) occurred between ≈10 Ma and ≈4 Ma. Most of the oil (≈50% to 65%) was generated prior to the development of the structural traps formed during the Plio-Pleistocene Diaguita deformation phase. Only ≈10% of the total oil generated was formed, and potentially trapped, after the formation of the structural traps. Important factors in the risk assessment of this petroleum system, which may explain the small amounts of generated and migrated oil, are the generally low TOC contents and the variable thickness of the Yacoraite Fm. Additional risks are associated with the low density of information about potentially existing reservoir structures and the quality of the overburden.
Climatic change alters the frequency and intensity of natural hazards. In order to assess potential future changes in flood seasonality in the Rhine River Basin, we analyse changes in streamflow, snowmelt, precipitation, and evapotranspiration at 1.5, 2.0 and 3.0 °C global warming levels. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios (five general circulation models under three representative concentration pathways), is used to simulate the present and future climate conditions of both pluvial and nival hydrological regimes. Our results indicate that the interplay between changes in snowmelt-driven and rainfall-driven runoff is crucial to understanding changes in streamflow maxima in the Rhine River. Climate projections suggest that future changes in flood characteristics in the entire Rhine River are controlled by both more intense precipitation events and diminishing snowpacks. The nature of this interplay defines the type of change in runoff peaks. At the sub-basin level (the Moselle River), more intense rainfall during winter is mostly counterbalanced by a reduced snowmelt contribution to streamflow. In the High Rhine (gauge at Basel), the strongest increases in streamflow maxima show up during winter, when strong increases in liquid precipitation intensity encounter almost unchanged snowmelt-driven runoff. The analysis of snowmelt events suggests that at no point during the snowmelt season does a warming climate increase the risk of snowmelt-driven flooding. We do not find indications of a transient merging of pluvial and nival floods due to climate warming.
River flooding poses a threat to numerous cities and communities all over the world. The detection, quantification and attribution of changes in flood characteristics are key to assessing changes in flood hazard and help affected societies to mitigate and adapt to emerging risks in time. The Rhine River is one of the major European rivers, and numerous large cities lie on its shores. Runoff from several large tributaries superimposes in the main channel, shaping the complex flow regime. Rainfall, snowmelt and ice melt are important runoff components. The main objective of this thesis is the investigation of a possible transient merging of nival and pluvial Rhine flood regimes under global warming. Rising temperatures cause snowmelt to occur earlier in the year and rainfall to become more intense. The superposition of snowmelt-induced floods originating from the Alps with more intense rainfall-induced runoff from pluvial-type tributaries might create a new flood type with potentially disastrous consequences.
To introduce the topic of changing hydrological flow regimes, an interactive web application is presented that enables the investigation of runoff timing and runoff seasonality observed at river gauges all over the world. The exploration and comparison of a great diversity of river gauges in the Rhine River Basin and beyond indicates that river systems around the world are undergoing fundamental changes. In hazard and risk research, providing background as well as real-time information to residents and decision-makers in an easily accessible way is of great importance. Future studies need to further harness the potential of scientifically engineered online tools to improve the communication of information related to hazards and risks.
A next step is the development of a cascading sequence of analytical tools to investigate long-term changes in hydro-climatic time series. The combination of quantile sampling with moving-average trend statistics and empirical mode decomposition allows for the extraction of high-resolution signals and the identification of mechanisms driving changes in river runoff. Results point out that the construction and operation of large reservoirs in the Alps is an important factor redistributing runoff from summer to winter, and hint at more (intense) rainfall in recent decades, particularly during winter, in turn increasing high runoff quantiles. The development and application of this analytical sequence represents a further step in the scientific quest to disentangle natural variability, climate change signals and direct human impacts.
The in-depth analysis of in situ snow measurements and simulations of the Alpine snow cover using a physically based snow model enable the quantification of changes in snowmelt in the sub-basin upstream of gauge Basel. Results confirm previous investigations indicating that rising temperatures result in a decrease in maximum melt rates. Extending these findings to a catchment perspective, a threefold effect of rising temperatures can be identified: snowmelt becomes weaker, occurs earlier, and forms at higher elevations. Furthermore, results indicate that due to the wide range of elevations in the basin, snowmelt does not occur simultaneously at all elevations; instead, elevation bands melt together in blocks. The beginning and end of meltwater release seem to be determined by the passage of warm air masses, and the elevation range affected by the accompanying temperatures and snow availability. Following these findings, a hypothesis describing elevation-dependent compensation effects in snowmelt is introduced: in a warmer world with similar sequences of weather conditions, snowmelt moves upward to higher elevations, i.e., the block of elevation bands providing most water to the snowmelt-induced runoff is located at higher elevations. This upward shift makes snowmelt in individual elevation bands occur earlier; the timing of the snowmelt-induced runoff, however, stays the same, as meltwater from higher elevations at least partly replaces meltwater from the elevations below.
The insights into past and present changes in river runoff, snow cover and the underlying mechanisms form the basis for investigations of potential future changes in Rhine River runoff. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios, is used to analyse future changes in streamflow, snowmelt, precipitation and evapotranspiration at 1.5, 2.0 and 3.0 °C global warming. Simulation results suggest that future changes in flood characteristics in the Rhine River Basin are controlled by increased precipitation amounts on the one hand and reduced snowmelt on the other. Rising temperatures deplete seasonal snowpacks. At no time during the year does a warming climate result in an increase in the risk of snowmelt-driven flooding. Counterbalancing effects between snowmelt and precipitation often result in only small and transient changes in streamflow peaks. Although investigations point to changes in both rainfall-driven and snowmelt-driven runoff, there are no indications of a transient merging of nival and pluvial Rhine flood regimes due to climate warming. Flooding in the main tributaries of the Rhine, such as the Moselle River, as well as in the High Rhine, is controlled by both precipitation and snowmelt. Caution has to be exercised when labelling sub-basins such as the Moselle catchment as purely pluvial-type, or the Rhine River Basin at Basel as purely nival-type. Results indicate that such (over)simplifications can entail misleading assumptions with regard to flood-generating mechanisms and changes in flood hazard. In the framework of this thesis, some progress has been made in detecting, quantifying and attributing past, present and future changes in Rhine flow and flood characteristics. However, further studies are necessary to pin down future changes in the genesis of Rhine floods, particularly very rare events.
Computation of the instantaneous phase and amplitude via the Hilbert Transform is a powerful data analysis tool. This approach finds many applications in various science and engineering branches but is not suitable for causal estimation because it requires knowledge of the signal's past and future. However, several problems require real-time estimation of phase and amplitude; an illustrative example is phase-locked or amplitude-dependent stimulation in neuroscience. In this paper, we discuss and compare three causal algorithms that do not rely on the Hilbert Transform but exploit two well-known physical phenomena, synchronization and resonance. After testing the algorithms on a synthetic data set, we illustrate their performance by computing phase and amplitude for accelerometer tremor measurements and the beta-band brain activity of a Parkinsonian patient.
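The resonance idea can be illustrated with a minimal sketch: drive a damped linear resonator tuned near the signal frequency and read the phase off its state, which acts as a causal proxy for the analytic signal. This is a simplified illustration of the physical principle, not the authors' algorithms; parameters and the integration scheme are assumptions:

```python
import math

def causal_phase(signal, dt, w0, zeta=0.1):
    """Causal instantaneous-phase estimate via a driven damped resonator:
    u'' + 2*zeta*w0*u' + w0^2*u = w0^2*s(t), integrated with semi-implicit
    Euler. The pair (u, v/w0) rotates with the input, so atan2 of the
    state yields the phase, using only past samples."""
    u, v = 0.0, 0.0
    phases = []
    for s in signal:
        a = w0 * w0 * (s - u) - 2.0 * zeta * w0 * v  # oscillator acceleration
        v += a * dt                                   # update velocity first
        u += v * dt                                   # then position (semi-implicit)
        phases.append(math.atan2(-v / w0, u))         # phase, increasing with time
    return phases
```

For a sinusoidal input at frequency w0, the estimated phase advances at rate w0 once the resonator's transient has decayed; unlike the Hilbert Transform, each estimate uses only samples already observed.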
Implementing innovation laboratories to leverage intrapreneurship is an increasingly popular organizational practice. A typical feature of these creative environments is semi-autonomous teams in which multiple members collectively exert leadership influence, thereby challenging traditional command-and-control conceptions of leadership. An extensive body of research on the team-centric concept of shared leadership has recognized the potential of pluralized leadership structures for enhancing team effectiveness; however, little empirical work has been conducted in organizational contexts in which creativity is key. This study set out to explore antecedents of shared leadership and its influence on team creativity in an innovation lab. Building on extant shared leadership and innovation research, we propose antecedents customary to creative teamwork, that is, experimental culture, task reflexivity, and voice. Multisource data were collected from 104 team members and 49 evaluations of 29 coaches nested in 21 teams working in a prototypical innovation lab. We identify factors specific to creative teamwork that facilitate the emergence of shared leadership by providing room for experimentation, encouraging team members to speak up in the creative process, and cultivating a reflective application of entrepreneurial thinking. We provide specific exemplary activities for innovation lab teams to increase their levels of shared leadership.
Populations adapt to novel environmental conditions by genetic changes or phenotypic plasticity. Plastic responses are generally faster and can buffer fitness losses under variable conditions. Plasticity is typically modeled as random noise and linear reaction norms that assume simple one-to-one genotype–phenotype maps and no limits to the phenotypic response. Most studies on plasticity have focused on its effect on population viability. However, it is not clear whether the advantage of plasticity depends solely on environmental fluctuations or also on the genetic and demographic properties (life histories) of populations. Here we present an individual-based model and study the relative importance of adaptive and nonadaptive plasticity for populations of sexual species with different life histories experiencing directional stochastic climate change. Environmental fluctuations were simulated using differentially autocorrelated climatic stochasticity (noise color) and scenarios of directional climate change. Nonadaptive plasticity was simulated as a random environmental effect on trait development, and adaptive plasticity as a linear, saturating, or sinusoidal reaction norm. The last two imposed limits on the plastic response and emphasized flexible interactions of the genotype with the environment. Interestingly, this assumption led to (a) smaller phenotypic than genotypic variance in the population (a many-to-one genotype–phenotype map) and the coexistence of polymorphisms, and (b) the maintenance of higher genetic variation, compared to linear reaction norms and genetic determinism, even when the population was exposed to a constant environment for several generations. Limits to plasticity led to genetic accommodation when costs were negligible, and to the appearance of cryptic variation when limits were exceeded. We found that adaptive plasticity promoted population persistence under red environmental noise and was particularly important for life histories with low fecundity. Populations producing more offspring could cope with environmental fluctuations solely by genetic changes or random plasticity, unless environmental change was too fast.
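The three reaction-norm shapes can be sketched as simple functions mapping an environmental cue to a plastic phenotypic adjustment; the functional forms and parameters below are illustrative assumptions, not the model's exact equations:

```python
import math

# Illustrative reaction norms mapping an environmental cue e to a plastic
# phenotypic response. The saturating and sinusoidal forms impose limits
# on plasticity, unlike the linear norm; parameters are placeholders.

def linear_norm(e, slope=1.0):
    return slope * e                      # unbounded one-to-one response

def saturating_norm(e, vmax=1.0, k=1.0):
    return vmax * math.tanh(k * e)        # response capped at +/- vmax

def sinusoidal_norm(e, amp=1.0, freq=1.0):
    return amp * math.sin(freq * e)       # flexible but bounded response
```

For large cue values the saturating norm returns roughly ±vmax while the linear norm grows without bound; it is this bounding that creates limits to the plastic response and allows several genotypes to map onto the same phenotype.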
Learning analytics at scale (2021)
Digital technologies are paving the way for innovative educational approaches. The learning format of Massive Open Online Courses (MOOCs) provides a highly accessible path to lifelong learning while being more affordable and flexible than face-to-face courses. Thousands of learners can enroll in courses, mostly without admission restrictions, but this also raises challenges: individual supervision by teachers is barely feasible, and learning persistence and success depend on students' self-regulatory skills. Here, technology provides the means for support. The use of data for decision-making is already transforming many fields, whereas in education it is still a young research discipline. Learning Analytics (LA) is defined as the measurement, collection, analysis, and reporting of data about learners and their learning contexts with the purpose of understanding and improving learning and learning environments. The vast amount of data that MOOCs produce on the learning behavior and success of thousands of students provides the opportunity to study human learning and to develop approaches addressing the demands of learners and teachers.
The overall purpose of this dissertation is to investigate the implementation of LA at the scale of MOOCs and to explore how data-driven technology can support learning and teaching in this context. To this end, several research prototypes were iteratively developed for the HPI MOOC Platform, where they were tested and evaluated in an authentic real-world learning environment. Most of the results can also be applied on a conceptual level to other MOOC platforms. The research contribution of this thesis thus provides practical insights beyond what is theoretically possible. In total, four system components were developed and extended:
(1) The Learning Analytics Architecture: A technical infrastructure to collect, process, and analyze event-driven learning data based on schema-agnostic pipelining in a service-oriented MOOC platform. (2) The Learning Analytics Dashboard for Learners: A tool for data-driven support of self-regulated learning, in particular to enable learners to evaluate and plan their learning activities, progress, and success by themselves. (3) Personalized Learning Objectives: A set of features to better connect learners' success to their personal intentions based on selected learning objectives to offer guidance and align the provided data-driven insights about their learning progress. (4) The Learning Analytics Dashboard for Teachers: A tool supporting teachers with data-driven insights to enable the monitoring of their courses with thousands of learners, identify potential issues, and take informed action.
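Schema-agnostic event pipelining of the kind named in component (1) typically stores a fixed envelope with a free-form payload, so new event types can be ingested without pipeline changes. A minimal sketch of such an event record; all field names are illustrative assumptions, not the platform's actual schema:

```python
import json
from datetime import datetime, timezone

def make_event(user_id, verb, resource, context=None):
    """Build a learning-event record: a fixed envelope (who, what, when)
    plus a schema-agnostic context payload for event-specific keys.
    Field names are hypothetical, for illustration only."""
    return {
        "user": user_id,
        "verb": verb,                      # e.g. "visited", "submitted"
        "resource": resource,              # e.g. a video or quiz identifier
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "context": context or {},          # arbitrary, event-specific keys
    }

event = make_event("u-123", "submitted", "quiz-7", {"score": 0.8, "attempt": 2})
serialized = json.dumps(event)             # ready for a processing pipeline
```

Because only the envelope is fixed, downstream consumers such as learner and teacher dashboards can aggregate over `user`, `verb`, and `resource` while ignoring context keys they do not understand.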
For all aspects examined in this dissertation, related research is presented, development processes and implementation concepts are explained, and evaluations are conducted in case studies. Among other findings, the usage of the learner dashboard in combination with personalized learning objectives demonstrated improved certification rates of 11.62% to 12.63%. Furthermore, it was observed that the teacher dashboard is a key tool and an integral part of teaching in MOOCs. In addition to the results and contributions, general limitations of the work are discussed, which altogether provides a solid foundation for practical implications and future research.
Objective: This study investigated intraindividual differences of intratendinous blood flow (IBF) in response to running exercise in participants with Achilles tendinopathy.
Design: This is a cross-sectional study.
Setting: The study was conducted at the University Outpatient Clinic.
Participants: Sonographic detectable intratendinous blood flow was examined in symptomatic and contralateral asymptomatic Achilles tendons of 19 participants (42 ± 13 years, 178 ± 10 cm, 76 ± 12 kg, VISA-A 75 ± 16) with clinically diagnosed unilateral Achilles tendinopathy and sonographic evident tendinosis.
Intervention: IBF was assessed using Doppler ultrasound “Advanced Dynamic Flow” before (Upre) and 5, 30, 60, and 120 min (U5–U120) after a standardized submaximal constant load run.
Main Outcome Measure: IBF was quantified by counting the number (n) of vessels in each tendon.
Results: At Upre, IBF was higher in symptomatic compared with asymptomatic tendons [mean 6.3 (95% CI: 2.8–9.9) and 1.7 (0.4–2.9), p < 0.01]. Overall, 63% of symptomatic and 47% of asymptomatic Achilles tendons responded to exercise, whereas 16 and 11% showed persisting IBF and 21 and 42% remained avascular throughout the investigation. At U5, IBF increased in both symptomatic and asymptomatic tendons [difference to baseline: 2.4 (0.3–4.5) and 0.9 (0.5–1.4), p = 0.05]. At U30 to U120, IBF was still increased in symptomatic but not in asymptomatic tendons [mean difference to baseline: 1.9 (0.8–2.9) and 0.1 (-0.9 to 1.2), p < 0.01].
Conclusion: Irrespective of pathology, 47–63% of Achilles tendons responded to exercise with an immediate acute physiological IBF increase of one to two vessels on average (“responders”). A higher level of baseline IBF (approximately five vessels) and a prolonged exercise-induced IBF response in symptomatic ATs indicate a pain-associated altered intratendinous “neovascularization.”
Background: The relationship between exercise-induced intratendinous blood flow (IBF) and tendon pathology or training exposure is unclear.
Objective: This study investigates the acute effect of running exercise on sonographic detectable IBF in healthy and tendinopathic Achilles tendons (ATs) of runners and recreational participants.
Methods: 48 participants (43 ± 13 years, 176 ± 9 cm, 75 ± 11 kg) performed a standardized submaximal 30-min constant load treadmill run with Doppler ultrasound “Advanced dynamic flow” examinations before (Upre) and 5, 30, 60, and 120 min (U5–U120) afterward. Included were runners (>30 km/week) and recreational participants (<10 km/week) with healthy (Hrun, n = 10; Hrec, n = 15) or tendinopathic (Trun, n = 13; Trec, n = 10) ATs. IBF was assessed by counting the number (n) of intratendinous vessels. IBF data are presented descriptively (%, median [minimum to maximum range] for baseline IBF and post-exercise IBF difference). Statistical differences in IBF and IBF changes across groups and time points were analyzed with Friedman and Kruskal-Wallis ANOVA (α = 0.05).
Results: At baseline, IBF was detected in 40% (3 [1–6]) of Hrun, in 53% (4 [1–5]) of Hrec, in 85% (3 [1–25]) of Trun, and in 70% (10 [2–30]) of Trec. At U5, IBF responded to exercise in 30% (3 [−1–9]) of Hrun, in 53% (4 [−2–6]) of Hrec, in 70% (4 [−10–10]) of Trun, and in 80% (5 [1–10]) of Trec. While IBF in 80% of healthy responding ATs returned to baseline at U30, IBF remained elevated until U120 in 60% of tendinopathic ATs. Within groups, IBF changes from Upre to U120 were significant for Hrec (p < 0.01), Trun (p = 0.05), and Trec (p < 0.01). Between groups, IBF changes in consecutive examinations were not significantly different (p > 0.05), but the IBF level was significantly higher at all measurement time points in tendinopathic versus healthy ATs (p < 0.05).
Conclusion: Irrespective of training status and tendon pathology, running leads to an immediate increase of IBF in responding tendons. This increase occurs shortly in healthy and prolonged in tendinopathic ATs. Training exposure does not alter IBF occurrence, but IBF level is elevated in tendon pathology. While an immediate exercise-induced IBF increase is a physiological response, prolonged IBF is considered a pathological finding associated with Achilles tendinopathy.
Background
Artificial intelligence (AI) is one of the most promising areas in medicine with many possibilities for improving health and wellness. Already today, diagnostic decision support systems may help patients to estimate the severity of their complaints. This fictional case study aimed to test the diagnostic potential of an AI algorithm for common sports injuries and pathologies.
Methods
Based on a literature review and clinical expert experience, five fictional “common” cases of acute, and subacute injuries or chronic sport-related pathologies were created: Concussion, ankle sprain, muscle pain, chronic knee instability (after ACL rupture) and tennis elbow. The symptoms of these cases were entered into a freely available chatbot-guided AI app and its diagnoses were compared to the pre-defined injuries and pathologies.
Results
The app asked between 25 and 36 questions per patient case, with optional explanations of certain questions or illustrative photos on demand. It was stressed that the symptom analysis would not replace a doctor’s consultation. A 23-yr-old male patient case with a mild concussion was correctly diagnosed. An ankle sprain of a 27-yr-old female without ligament or bony lesions was also detected, and an ER visit was suggested. Muscle pain in the thigh of a 19-yr-old male was correctly diagnosed. In the case of a 26-yr-old male with chronic ACL instability, the algorithm did not sufficiently cover the chronic aspect of the pathology, but the given recommendation of seeing a doctor would have helped the patient. Finally, the condition of chronic epicondylitis in a 41-yr-old male was correctly detected.
Conclusions
All chosen injuries and pathologies were either correctly diagnosed or at least tagged with the right advice on when it is urgent to seek a medical specialist. However, the quality of AI-based results could presumably depend on the data-driven experience of these programs as well as on the understanding of their users. Further studies should compare existing AI programs and their diagnostic accuracy for medical injuries and pathologies.
Magmatic continental rifts often constitute the earliest stage of nascent plate boundaries. These extensional tectonic provinces are characterized by ubiquitous normal faulting and volcanic activity; the spatial pattern, the geometry, and the age of these normal faults can help to unravel the spatiotemporal relationships between extensional deformation, magmatism, and long-wavelength crustal deformation of continental rift provinces. This study examines active faulting in the Kenya Rift of the Cenozoic East African Rift System (EARS), with a focus on the mid-Pleistocene to the present day.
To examine the early stages of continental break-up in the EARS, this thesis presents a time-averaged minimum extension rate for the inner graben of the Northern Kenya Rift (NKR) for the last 0.5 m.y. Using the TanDEM-X digital elevation model, fault-scarp geometries and associated throws are determined across the volcano-tectonic axis of the inner graben of the NKR. By integrating existing geochronology of faulted units with new ⁴⁰Ar/³⁹Ar radioisotopic dates, time-averaged extension rates are calculated. This study reveals that in the inner graben of the NKR, the long-term extension rate based on mid-Pleistocene to recent brittle deformation has minimum values of 1.0 to 1.6 mm yr⁻¹, locally with values up to 2.0 mm yr⁻¹. In light of virtually inactive border faults of the NKR, we show that extension is focused in the region of the active volcano-tectonic axis in the inner graben, thus highlighting the maturing of continental rifting in the NKR.
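The time-averaged extension rates above follow from summing fault heaves across the volcano-tectonic axis and dividing by the age of the faulted units. A minimal sketch of that arithmetic, with hypothetical throws and an assumed uniform fault dip (illustrative numbers, not the study's data):

```python
import math

def extension_rate_mm_yr(throws_m, dips_deg, age_myr):
    """Time-averaged minimum extension rate from summed fault heaves.

    Each fault's heave (horizontal component of offset) is throw / tan(dip);
    summing heaves across the profile and dividing by the age of the faulted
    surface gives a minimum time-averaged extension rate in mm/yr.
    """
    heave_m = sum(t / math.tan(math.radians(d))
                  for t, d in zip(throws_m, dips_deg))
    return heave_m * 1000.0 / (age_myr * 1.0e6)  # m per Myr -> mm per yr

# Hypothetical scarp throws (m) with an assumed 60 degree fault dip,
# on a surface dated to 0.5 Myr
rate = extension_rate_mm_yr(
    throws_m=[120, 80, 150, 95, 110], dips_deg=[60] * 5, age_myr=0.5
)
print(round(rate, 2))  # -> 0.64
```

Because eroded scarps and buried fault tips only reduce the measurable throw, the result is a minimum bound, which is why the study reports minimum rates.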
The phenomenon of focused extension is further investigated with a structural analysis of the youngest volcanic manifestations of the Kenya Rift, their relationship with extensional structures, and their overprint by Holocene faulting. In this context I analyzed the fault characteristics at the ~36 ka old Menengai Caldera and adjacent areas in the Central Kenya Rift using detailed field mapping and a structure-from-motion-based DEM generated from UAV data. In general, the Holocene intra-rift normal faults are dip-slip faults which strike NNE and thus reflect the present-day tectonic stress field; however, inside Menengai Caldera persistent magmatic activity and magmatic resurgence significantly overprint these young structures. The caldera is located at the center of an actively extending rift segment, and this and the other volcanic edifices of the Kenya Rift may constitute nucleation points of faulting and magmatic extensional processes that ultimately lead to a future stage of magma-assisted rifting.
When viewed at the scale of the entire Kenya Rift the protracted normal faulting in this region compartmentalizes the larger rift depressions, and influences the sedimentology and the hydrology of the intra-rift basins at a scale of less than 100 km. In the present day, most of the fault-bounded sub-basins of the Kenya Rift are hydrologically isolated due to this combination of faulting and magmatic activity that has generated efficient hydrological barriers that maintain these basins as semi-independent geomorphic entities. This isolation, however, was overcome during wetter climatic conditions in the past when the basins were transiently connected. I therefore also investigated the hydrological connectivity of the rift basins during the African Humid Period of the early Holocene, when climate was wetter. With the help of DEM analysis, lake-highstand indicators, radiocarbon dating, and a review of the fossil record, two lake-river cascades could be identified: one directed southward, and one directed northward. Both cascades connected presently isolated rift basins during the early Holocene via spillovers of lakes and incised river gorges. This hydrological connection fostered the dispersal of aquatic faunas along the rift, and in addition, the water divide between the two river systems represented the only terrestrial dispersal corridor across the Kenya Rift. The reconstruction explains isolated distributions of Nilotic fish species in Kenya Rift lakes and of Guineo-Congolian mammal species in forests east of the Kenya Rift. On longer timescales, repeated episodes of connectivity and isolation must have occurred. To address this problem, I participated in research analyzing a sediment drill core from the Koora basin of the Southern Kenya Rift, which provides a paleo-environmental record of the last 1 Ma.
Based on this record it can be concluded that at ~400 ka relatively stable environmental conditions were disrupted by tectonic, hydrological, and ecological changes, resulting in increasingly large and frequent fluctuations in water availability, grassland communities, and woody plant cover. The major environmental shifts reflected in the drill core data coincide with phases where volcano-tectonic activity affected the basin. This thesis therefore shows how protracted extensional tectonic processes and the resulting geomorphologic conditions can affect the hydrology, the paleo-environment and the biodiversity of extensional zones in Kenya and elsewhere.
This study investigates the relationship between teacher quality and teachers’ engagement in professional development (PD) activities using data on 229 German secondary school mathematics teachers. We assessed different aspects of teacher quality (e.g. professional knowledge, instructional quality) using a variety of measures, including standardised tests of teachers’ content knowledge, to determine what characteristics are associated with high participation in PD. The results show that teachers with higher scores for teacher quality variables take part in more content-focused PD than teachers with lower scores for these variables. This suggests that teacher learning may be subject to a Matthew effect, whereby more proficient teachers benefit more from PD than less proficient teachers.
The numerous applications of rare earth elements (REE) have led to a growing global demand and to the search for new REE deposits. One promising technique for the exploration of these deposits is laser-induced breakdown spectroscopy (LIBS). Among the advantages of the technique is the possibility to perform on-site measurements without sample preparation. Since the exploration of a deposit is based on the analysis of various geological compartments of the surrounding area, REE-bearing rock and soil samples were analyzed in this work. The field samples are from three European REE deposits in Sweden and Norway. The focus is on the REE cerium, lanthanum, neodymium, and yttrium. Two different approaches of data analysis were used for the evaluation. The first approach is univariate regression (UVR). While this approach was successful for the analysis of synthetic REE samples, the quantitative analysis of field samples from different sites was influenced by matrix effects. Principal component analysis (PCA) can be used to determine the origin of the samples from the three deposits. The second approach is based on multivariate regression methods, in particular interval PLS (iPLS) regression. In comparison to UVR, this method is better suited for the determination of REE contents in heterogeneous field samples.
Holocene temperature proxy records are commonly used in quantitative synthesis and model-data comparisons. However, comparing correlations between time series from records collected in proximity to one another with the expected correlations based on climate model simulations indicates either regional or noisy climate signals in Holocene temperature proxy records. In this study, we evaluate the consistency of spatial correlations present in Holocene proxy records with those found in data from the Last Glacial Maximum (LGM). Specifically, we predict correlations expected in LGM proxy records if the only difference to Holocene correlations would be due to more time uncertainty and more climate variability in the LGM. We compare this simple prediction to the actual correlation structure in the LGM proxy records. We found that time series data of ice-core stable isotope records and planktonic foraminifera Mg/Ca ratios were consistent between the Holocene and LGM periods, while time series of Uk'37 proxy records were not, as we found no correlation between nearby LGM records. Our results support the finding of highly regional or noisy marine proxy records in the compilation analysed here and suggest the need for further studies on the role of climate proxies and the processes of climate signal recording and preservation.
By regulating the concentration of carbon in our atmosphere, the global carbon cycle drives changes in our planet’s climate and habitability. Earth surface processes play a central, yet insufficiently constrained role in regulating fluxes of carbon between terrestrial reservoirs and the atmosphere. River systems drive global biogeochemical cycles by redistributing significant masses of carbon across the landscape. During fluvial transit, the balance between carbon oxidation and preservation determines whether this mass redistribution is a net atmospheric CO2 source or sink. Existing models for fluvial carbon transport fail to integrate the effects of sediment routing processes, resulting in large uncertainties in fluvial carbon fluxes to the oceans.
In this Ph.D. dissertation, I address this knowledge gap through three studies that focus on the timescale and routing pathways of fluvial mass transfer and show their effect on the composition and fluxes of organic carbon exported by rivers. The hypotheses posed in these three studies were tested in an analog lowland alluvial river system – the Rio Bermejo in Argentina. The Rio Bermejo annually exports more than 100 Mt of sediment and organic matter from the central Andes, and transports this material nearly 1300 km downstream across the lowland basin without influence from tributaries, allowing me to isolate the effects of geomorphic processes on fluvial organic carbon cycling. These studies focus primarily on the geochemical composition of suspended sediment collected from river depth profiles along the length of the Rio Bermejo.
In Chapter 3, I aimed to determine the mean fluvial sediment transit time for the Rio Bermejo and evaluate the geomorphic processes that regulate the rate of downstream sediment transfer. I developed a framework to use meteoric cosmogenic ¹⁰Be (¹⁰Beₘ) as a chronometer to track the duration of sediment transit from the mountain front downstream along the ~1300 km channel of the Rio Bermejo. I measured ¹⁰Beₘ concentrations in suspended sediment sampled from depth profiles, and found a 230% increase along the fluvial transit pathway. I applied a simple model for the time-dependent accumulation of ¹⁰Beₘ on the floodplain to estimate a mean sediment transit time of 8.5±2.2 kyr. Furthermore, I show that sediment transit velocity is influenced by lateral migration rate and channel morphodynamics. This approach to measuring sediment transit time is much more precise than other methods previously used and shows promise for future applications.
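The chronometer in Chapter 3 rests on simple accumulation arithmetic: the longer sediment resides on the floodplain, the more meteoric ¹⁰Be it gains, so the downstream concentration increase divided by an accumulation rate yields a mean transit time. A minimal sketch with illustrative placeholder numbers (the study's model and rates are more involved):

```python
# Sketch of the transit-time estimate from downstream accumulation of
# meteoric 10Be, assuming a constant accumulation rate during floodplain
# storage. All numbers are illustrative, not the study's data.

def transit_time_kyr(n_upstream, n_downstream, accumulation_rate):
    """Mean sediment transit time (kyr) from the gain in 10Be concentration.

    n_upstream, n_downstream : concentrations (atoms/g) at the mountain
                               front and at the basin outlet
    accumulation_rate        : atoms/g gained per kyr of floodplain storage
    """
    return (n_downstream - n_upstream) / accumulation_rate

# Illustrative example mirroring a 230% downstream increase
n_in = 1.0e8               # atoms/g entering the lowland reach (hypothetical)
n_out = n_in * (1 + 2.3)   # 230% higher at the outlet
acc = 3.9e7                # atoms/g per kyr (hypothetical)
print(round(transit_time_kyr(n_in, n_out, acc), 1))  # -> about 5.9
```

The precision of the method comes from the fact that the concentration gain integrates the full storage history, rather than sampling individual grains' ages.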
In Chapter 4, I aimed to quantify the effects of hydrodynamic sorting on the composition and quantity of particulate organic carbon (POC) transported by lowland rivers. I first used scanning electron microscopy (SEM) coupled with nanoscale secondary ion mass spectrometry (NanoSIMS) analyses to show that the Bermejo transports two principal types of POC: 1) mineral-bound organic carbon associated with <4 µm, platy grains, and 2) coarse discrete organic particles. Using n-alkane stable isotope data and particle shape analysis, I showed that these two carbon pools are vertically sorted in the water column, due to differences in particle settling velocity. This vertical sorting may drive modern POC to be transported efficiently from source to sink, driving efficient CO2 drawdown. Simultaneously, vertical sorting may drive degraded, mineral-bound POC to be deposited overbank and stored on the floodplain for centuries to millennia, resulting in enhanced POC remineralization. In the Rio Bermejo, selective deposition of coarse material causes the proportion of mineral-bound POC to increase with distance downstream, but the majority of exported POC is composed of discrete organic particles, suggesting that the river is a net carbon sink. In summary, this study shows that selective deposition and hydraulic sorting control the composition and fate of fluvial POC during fluvial transit.
In Chapter 5, I characterized and quantified POC transformation and oxidation during fluvial transit. I analyzed the radiocarbon content and stable carbon isotopic composition of Rio Bermejo suspended sediment and found that POC ages during fluvial transit, but is also degraded and oxidized during transient floodplain storage. Using these data, I developed a conceptual model for fluvial POC cycling that allows the estimation of POC oxidation relative to POC export, and ultimately reveals whether a river is a net source or sink of CO2 to the atmosphere. Through this study, I found that the Rio Bermejo annually exports more POC than is oxidized during transit, largely due to high rates of lateral migration that cause erosion of floodplain vegetation and soil into the river. These results imply that human engineering of rivers could alter the fluvial carbon balance, by reducing lateral POC inputs and increasing the mean sediment transit time.
Together, these three studies quantitatively link geomorphic processes to rates of POC transport and degradation across sub-annual to millennial time scales and nanoscale to 10³ km spatial scales, laying the groundwork for a global-scale fluvial organic carbon cycling model.
Mycotoxins and pesticides regularly co-occur in agricultural products worldwide. Thus, humans can be exposed to both toxic contaminants and pesticides simultaneously, and multi-analyte methods that assess various food contaminants and residues in a single run are necessary. A two-dimensional high performance liquid chromatography tandem mass spectrometry method for the analysis of 40 (modified) mycotoxins, two plant growth regulators, two tropane alkaloids, and 334 pesticides in cereals was developed. After an acetonitrile/water/formic acid (79:20:1, v/v/v) multi-analyte extraction procedure, extracts were injected into the two-dimensional setup, and an online clean-up was performed. The method was validated according to Commission Decision (EC) no. 657/2002 and document N° SANTE/12682/2019. Good linearity (R2 > 0.96), recoveries between 70 and 120%, repeatability and reproducibility values < 20%, and expanded measurement uncertainties < 50% were obtained for a wide range of analytes, including very polar substances like deoxynivalenol-3-glucoside and methamidophos. However, results for fumonisins, zearalenone-14,16-disulfate, acid-labile pesticides, and carbamates were unsatisfactory. Limits of quantification meeting maximum (residue) limits were achieved for most analytes. Matrix effects varied widely (−85 to +1574%) and were mainly observed for analytes eluting in the first dimension and early-eluting analytes in the second dimension. The application of the method demonstrated the co-occurrence of 28 toxins and pesticides in different types of cereals. Overall, 86% of the samples showed positive findings with at least one mycotoxin, plant growth regulator, or pesticide.
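The wide matrix-effect range reported above (−85 to +1574%) follows the usual slope-ratio definition, comparing calibration in matrix against calibration in pure solvent. A minimal sketch with hypothetical slopes (a common convention, not necessarily the paper's exact calculation):

```python
def matrix_effect_percent(slope_matrix, slope_solvent):
    """Signal suppression/enhancement from calibration slopes:
    ME% = (slope_matrix / slope_solvent - 1) * 100.
    Negative values indicate suppression, positive values enhancement.
    """
    return (slope_matrix / slope_solvent - 1.0) * 100.0

# Hypothetical calibration slopes (matrix-matched vs. solvent standards)
print(matrix_effect_percent(0.15, 1.0))   # -> -85.0 (strong suppression)
print(matrix_effect_percent(16.74, 1.0))  # -> about +1574 (strong enhancement)
```

Values far from zero on either side signal that matrix-matched calibration or standard addition is needed for reliable quantification.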
Atmospheric water vapour content is a key variable that controls the development of deep convective storms and rainfall extremes over the central Andes. Direct measurements of water vapour are challenging; however, recent developments in microwave processing allow the use of phase delays from L-band radar to measure the water vapour content throughout the atmosphere: Global Navigation Satellite System (GNSS)-based integrated water vapour (IWV) monitoring shows promising results to measure vertically integrated water vapour at high temporal resolutions. Previous works also identified convective available potential energy (CAPE) as a key climatic variable for the formation of deep convective storms and rainfall in the central Andes. Our analysis relies on GNSS data from the Argentine Continuous Satellite Monitoring Network, Red Argentina de Monitoreo Satelital Continuo (RAMSAC), from 1999 to 2013. CAPE is derived from version 2.0 of the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim reanalysis, and rainfall from the TRMM (Tropical Rainfall Measuring Mission) product. In this study, we first analyse the rainfall characteristics of two GNSS-IWV stations by comparing their complementary cumulative distribution function (CCDF). Second, we separately derive the relation between rainfall vs. CAPE and GNSS-IWV. Based on our distribution fitting analysis, we observe an exponential relation of rainfall to GNSS-IWV. In contrast, we report a power-law relationship between the daily mean value of rainfall and CAPE at the GNSS-IWV station locations in the eastern central Andes that is close to the theoretical relationship based on parcel theory. Third, we generate a joint regression model through a multivariable regression analysis using CAPE and GNSS-IWV to explain the contribution of both variables in the presence of each other to extreme rainfall during the austral summer season.
We found that rainfall can be characterised with a higher statistical significance for higher rainfall quantiles, e.g., the 0.9 quantile based on goodness-of-fit criterion for quantile regression. We observed different contributions of CAPE and GNSS-IWV to rainfall for each station for the 0.9 quantile. Fourth, we identify the temporal relation between extreme rainfall (the 90th, 95th, and 99th percentiles) and both GNSS-IWV and CAPE at 6 h time steps. We observed an increase before the rainfall event and at the time of peak rainfall—both for GNSS-integrated water vapour and CAPE. We show higher values of CAPE and GNSS-IWV for higher rainfall percentiles (99th and 95th percentiles) compared to the 90th percentile at a 6-h temporal scale. Based on our correlation analyses and the dynamics of the time series, we show that both GNSS-IWV and CAPE had comparable magnitudes, and we argue to consider both climatic variables when investigating their effect on rainfall extremes.
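The two functional forms reported here, an exponential dependence of rainfall on IWV and a power law in CAPE, can both be fitted by linearising with logarithms. A minimal sketch on synthetic data (illustrative values, not the station records):

```python
import math

def fit_exponential(x, y):
    """Fit y = a * exp(b*x) by least squares on ln(y) = ln(a) + b*x."""
    lny = [math.log(v) for v in y]
    b, lna = _linfit(x, lny)
    return math.exp(lna), b

def fit_powerlaw(x, y):
    """Fit y = c * x**d by least squares on ln(y) = ln(c) + d*ln(x)."""
    lnx = [math.log(v) for v in x]
    lny = [math.log(v) for v in y]
    d, lnc = _linfit(lnx, lny)
    return math.exp(lnc), d

def _linfit(x, y):
    """Ordinary least squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic check: data generated from known parameters are recovered
iwv = [20, 30, 40, 50, 60]                      # kg/m^2 (illustrative)
rain = [0.5 * math.exp(0.08 * v) for v in iwv]  # exact exponential
a, b = fit_exponential(iwv, rain)
print(round(a, 3), round(b, 3))                 # -> 0.5 0.08
```

Fitting in log space weights relative rather than absolute errors, which suits heavy-tailed rainfall distributions; the quantile-regression analysis in the study goes a step further by fitting the upper quantiles directly.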
Postural balance represents a fundamental movement skill for the successful performance of everyday and sport-related activities. There is ample evidence on the effectiveness of balance training on balance performance in athletic and non-athletic populations. However, less is known on potential transfer effects of other training types, such as plyometric jump training (PJT), on measures of balance. Given that PJT is a highly dynamic exercise mode with various forms of jump-landing tasks, high levels of postural control are needed to successfully perform PJT exercises. Accordingly, PJT has the potential to not only improve measures of muscle strength and power but also balance. The objective of this study was to systematically review and synthesize evidence from randomized and non-randomized controlled trials regarding the effects of PJT on measures of balance in apparently healthy participants. Systematic literature searches were performed in the electronic databases PubMed, Web of Science, and SCOPUS. A PICOS approach was applied to define inclusion criteria: (i) apparently healthy participants, with no restrictions on their fitness level, sex, or age, (ii) a PJT program, (iii) active controls (any sport-related activity) or specific active controls (a specific exercise type such as balance training), (iv) assessment of dynamic and static balance pre- and post-PJT, (v) randomized controlled trials and controlled trials. The methodological quality of studies was assessed using the Physiotherapy Evidence Database (PEDro) scale. This meta-analysis was computed using the inverse variance random-effects model. The significance level was set at p < 0.05. The initial search retrieved 8,251 records plus 23 identified through other sources. Forty-two articles met our inclusion criteria for qualitative and 38 for quantitative analysis (1,806 participants [990 males, 816 females], age range 9–63 years). PJT interventions lasted between 4 and 36 weeks.
The median PEDro score was 6 and no study had low methodological quality (≤3). The analysis revealed significant small effects of PJT on overall (dynamic and static) balance (ES = 0.46; 95% CI = 0.32–0.61; p < 0.001), dynamic (e.g., Y-balance test) balance (ES = 0.50; 95% CI = 0.30–0.71; p < 0.001), and static (e.g., flamingo balance test) balance (ES = 0.49; 95% CI = 0.31–0.67; p < 0.001). The moderator analyses revealed that sex and/or age did not moderate balance performance outcomes. When PJT was compared to specific active controls (i.e., participants undergoing balance training, whole body vibration training, resistance training), both PJT and alternative training methods showed similar effects on overall (dynamic and static) balance (p = 0.534). Specifically, when PJT was compared to balance training, both training types showed similar effects on overall (dynamic and static) balance (p = 0.514). Conclusion: Compared to active controls, PJT showed small effects on overall balance, dynamic and static balance. Additionally, PJT produced similar balance improvements compared to other training types (i.e., balance training). Although PJT is widely used in athletic and recreational sport settings to improve athletes' physical fitness (e.g., jumping; sprinting), our systematic review with meta-analysis is novel insofar as it indicates that PJT also improves balance performance. The observed PJT-related balance enhancements were irrespective of sex and participants' age. Therefore, PJT appears to be an adequate training regime to improve balance in both athletic and recreational settings.
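The pooled effect sizes above come from an inverse variance random-effects model. A minimal sketch of such pooling, assuming the common DerSimonian-Laird estimator for the between-study variance and using hypothetical study-level data (not the review's):

```python
import math

def random_effects_pool(effects, variances):
    """Inverse-variance random-effects pooling (DerSimonian-Laird estimator).

    effects   : per-study effect sizes (e.g. standardized mean differences)
    variances : per-study sampling variances
    Returns (pooled effect, 95% CI lower bound, 95% CI upper bound).
    """
    w = [1.0 / v for v in variances]
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with tau^2 added to each study's sampling variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical study-level effect sizes and variances
es, lo, hi = random_effects_pool(
    effects=[0.2, 0.5, 0.8, 0.45], variances=[0.02, 0.02, 0.02, 0.02]
)
print(round(es, 2), round(lo, 2), round(hi, 2))
```

Adding tau² widens the confidence interval relative to a fixed-effect model, which is the appropriate behaviour when true effects differ across studies.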
Janus droplets were prepared by vortex mixing of three non-mixable liquids, i.e., olive oil, silicone oil and water, in the presence of gold nanoparticles (AuNPs) in the aqueous phase and magnetite nanoparticles (MNPs) in the olive oil. The resulting Pickering emulsions were stabilized by a red-colored AuNP layer at the olive oil/water interface and MNPs at the oil/oil interface. The core–shell droplets can be stimulated by an external magnetic field. Surprisingly, an inner rotation of the silicone droplet is observed when MNPs are fixed at the inner silicone droplet interface. This is the first example of a controlled movement of the inner parts of complex double emulsions by magnetic manipulation via interfacially confined magnetic nanoparticles.