With the latest technological developments and the associated new possibilities in teaching, the personalisation of learning is gaining increasing importance. It assumes that individual learning experiences and outcomes can generally be improved when personal learning preferences are considered. To do justice to the complexity of personalising teaching and learning processes, we illustrate the components of learning and teaching in the digital environment, and their interdependencies, in an initial model. Furthermore, in a pre-study, we investigate the relationships between the learner's ability to (digitally) self-organise, the learner's prior knowledge, learning in different modes, and learning outcomes as one part of this model. With this pre-study, we take the first step towards a holistic model of teaching and learning in digital environments.
This paper examines the intertext between Tolkien’s Ithilien episode in The Two Towers and artistic presentations of plants in the art and literature of Augustan Rome. We argue that the evident ‘superbloom’ depicted in the ekphrasis of the flora of Ithilien recalls both Vergilian botanical adynata (especially in the Georgics) and Roman wall paintings of the Augustan period.
Scrollytellings are an innovative form of web content. Combining the benefits of books, images, movies, and video games, they are a tool to tell compelling stories and provide excellent learning opportunities. Due to their multi-modality, creating high-quality scrollytellings is not an easy task. Different professions, such as content designers, graphic designers, and developers, need to collaborate to get the best out of the possibilities the scrollytelling format provides. Collaboration unlocks great potential. However, content designers cannot create scrollytellings directly and always need to consult with developers to implement their vision. This can result in misunderstandings. Often, the resulting scrollytelling will not match the designer’s vision sufficiently, causing unnecessary iterations. Our project partner Typeshift specializes in the creation of individualized scrollytellings for their clients. The existing solutions we examined for authoring interactive content are not optimally suited to creating highly customized scrollytellings whose elements can all still be manipulated programmatically. Based on Typeshift's experience and expertise, we developed an editor for authoring scrollytellings in the lively.next live-programming environment. In this environment, a graphical user interface for content design is combined with powerful possibilities for programming behavior through the morphic system. The editor allows content designers to take on large parts of the creation process of scrollytellings on their own, such as creating the visible elements, animating content, and fine-tuning the scrollytelling. Hence, developers can focus on interactive elements such as simulations and games. Together with Typeshift, we evaluated the tool by recreating an existing scrollytelling and identified possible future enhancements. Our editor streamlines the creation process of scrollytellings. Content designers and developers can now both work on the same scrollytelling.
Because the editor lives inside the lively.next environment, both groups can work with a set of tools familiar to them and suited to their respective skills. Thus, we mitigate unnecessary iterations and misunderstandings by enabling content designers to realize large parts of their vision of a scrollytelling on their own, while developers add advanced and individual behavior. Developers and content designers thus benefit from a clearer distribution of tasks while keeping the benefits of collaboration.
Touching at a Distance
(2023)
- Studies the capacity of Shakespeare’s plays to touch and to think about touch
- Based on plays from all major genres: Hamlet, The Tempest, Richard III, Much Ado About Nothing and Troilus and Cressida
- Centres on creative, close readings of Shakespeare’s plays, which aim to generate critical impulses for the 21st-century reader
- Brings Shakespeare Studies into touch with philosophers and theoreticians from a range of disciplinary areas (continental philosophy, literary criticism, psychoanalysis, sociology, phenomenology, law, linguistics): Friedrich Nietzsche, Maurice Blanchot, Jacques Lacan, Luce Irigaray, Jacques Derrida, Roland Barthes, Niklas Luhmann, Hans Blumenberg, Carl Schmitt, J. L. Austin

Theatre has a remarkable capacity: it touches from a distance. The audience is affected, despite their physical separation from the stage. The spectators are moved, even though the fictional world presented to them will never come into direct touch with their real lives. Shakespeare is clearly one of the master practitioners of theatrical touch. As the study shows, his exceptional dramaturgic talent is intrinsically connected with his being one of the great thinkers of touch. His plays fathom the complexity and power of a fascinating notion – touch as a productive proximity characterised by unbridgeable distance – which philosophers like Friedrich Nietzsche, Maurice Blanchot, Jacques Derrida, Luce Irigaray and Jean-Luc Nancy would write about centuries later. By playing with touch and its metatheatrical implications, Shakespeare raises questions that make his theatrical art point towards modernity: how are communities to form when traditional institutions begin to crumble? What happens to selfhood when time speeds up, when oneness and timeless truth can no longer serve as reliable foundations? What is the role and the capacity of language in a world that has lost its seemingly unshakeable belief and trust in meaning?
How are we to conceive of the unthinkable extremes of human existence – birth and death – when religious orthodoxy slowly ceases to give satisfactory explanations? Shakespeare’s theatre not only prompts these questions but also provides us with answers. They are all related to touch, and they are all theatrical at their core: they are argued and performed through the striking experience of theatre’s capacity to touch – at a distance.
The cryosphere in mountain regions is rapidly declining, a trend that is expected to accelerate over the next several decades due to anthropogenic climate change. A cascade of effects will result, extending from mountains to lowlands, with associated impacts on human livelihoods, economies, and ecosystems. With rising air temperatures and increased radiative forcing, glaciers will become smaller and, in some cases, disappear; the area of frozen ground will diminish; the ratio of snow to rainfall will decrease; and the timing and magnitude of both maximum and minimum streamflow will change. These changes will affect erosion rates, sediment and nutrient flux, and the biogeochemistry of rivers and proglacial lakes, all of which influence water quality, aquatic habitat, and biotic communities. Changes in the length of the growing season will allow low-elevation plants and animals to expand their ranges upward. Slope failures due to thawing alpine permafrost, and outburst floods from glacier- and moraine-dammed lakes, will threaten downstream populations. Societies even well beyond the mountains depend on meltwater from glaciers and snow for drinking water supplies, irrigation, mining, hydropower, agriculture, and recreation. Here, we review and, where possible, quantify the impacts of anticipated climate change on the alpine cryosphere, hydrosphere, and biosphere, and consider the implications for adaptation to a future of mountains without permanent snow and ice.
Management of agricultural soil quality requires fast and cost-efficient methods to identify multiple stressors that can affect soil organisms and associated ecological processes. Here, we propose to use soil protists, which have a great yet poorly explored potential for bioindication. They are ubiquitous, highly diverse, and respond to various stresses to agricultural soils caused by frequent management or environmental changes. We test an approach that combines metabarcoding data and machine learning algorithms to identify potential stressors of soil protist community composition and diversity. We measured 17 key variables that reflect various potential stresses on soil protists across 132 plots in 28 Swiss vineyards over 2 years. We identified the taxa showing strong responses to the selected soil variables (potential bioindicator taxa) and tested their predictive power. Changes in protist taxa occurrence and, to a lesser extent, diversity metrics exhibited great predictive power for the considered soil variables. Soil copper concentration, moisture, pH, and basal respiration were the best-predicted soil variables, suggesting that protists are particularly responsive to stresses caused by these variables. The most responsive taxa were found within the clades Rhizaria and Alveolata. Our results also reveal that a majority of the potential bioindicators identified in this study can be used across years, in different regions, and across different grape varieties. Altogether, soil protist metabarcoding data combined with machine learning can help identify specific abiotic stresses on microbial communities caused by agricultural management. Such an approach provides complementary information to existing soil monitoring tools and can help manage the impact of agricultural practices on soil biodiversity and quality.
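The bioindicator-screening step described above (fit a model that predicts a soil variable from community data, then rank the most responsive taxa) can be sketched in a few lines. Everything below is synthetic and simplified: the data are random, and a plain random forest stands in for the authors' actual metabarcoding pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_plots, n_taxa = 132, 40

# Hypothetical presence/absence matrix of protist taxa across vineyard plots
taxa = rng.integers(0, 2, size=(n_plots, n_taxa)).astype(float)

# Plant a signal: taxa 0 and 1 respond strongly to soil copper, the rest are noise
copper = 30 + 25 * taxa[:, 0] - 20 * taxa[:, 1] + rng.normal(0, 5, n_plots)

# Predict the soil variable from community composition, then rank taxa
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(taxa, copper)
ranked = np.argsort(rf.feature_importances_)[::-1]
print(ranked[:5])  # the two planted responder taxa should rank highest
```

In a real application, the predictive power of the model (e.g. cross-validated R²) indicates how responsive the community is to that stressor, and the top-ranked taxa are the candidate bioindicators.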
The immense advances in computer power achieved in the last decades have had a significant impact on Earth science, providing valuable research outputs that allow the simulation of complex natural processes and systems and generate improved forecasts. The development and implementation of innovative geoscientific software is currently evolving towards sustainable and efficient development by integrating models of different aspects of the Earth system. This will set the foundation for a future digital twin of the Earth. The codification and updating of this software require great effort from research groups, and it therefore needs to be preserved for reuse by future generations of geoscientists. Here, we report on Geo-Soft-CoRe, a Geoscientific Software & Code Repository hosted at the archive DIGITAL.CSIC. This is an open-source, multidisciplinary and multiscale collection of software and code developed to analyze different aspects of the Earth system, encompassing tools to: 1) analyze climate variability; 2) assess hazards; and 3) characterize the structure and dynamics of the solid Earth. Due to the broad range of applications of these software packages, this collection is useful not only for basic research in Earth science, but also for applied research and educational purposes, reducing the gap between the geosciences and society. By providing each software package and code with a permanent identifier (DOI), we ensure its sustainability and compliance with the FAIR (Findable, Accessible, Interoperable and Reusable) principles. In this way, we aim for a more transparent science, transferring knowledge more easily to the geoscience community and encouraging an integrated use of computational infrastructure.
Anomalous diffusion, the departure of the spreading dynamics of diffusing particles from the traditional law of Brownian motion, is a signature feature of a large number of complex soft-matter and biological systems. Anomalous diffusion emerges due to a variety of physical mechanisms, e.g., trapping interactions or the viscoelasticity of the environment. However, sometimes a system's dynamics are erroneously claimed to be anomalous even though the true motion is Brownian, or vice versa. This ambiguity in establishing whether the dynamics are normal or anomalous can have far-reaching consequences, e.g., in predictions for reaction or relaxation laws. Demonstrating that a system exhibits normal or anomalous diffusion is therefore highly desirable for a host of applications. Here, we present a criterion for anomalous diffusion based on the power spectral analysis of single trajectories. The robustness of this criterion is studied for trajectories of fractional Brownian motion, a ubiquitous stochastic process for the description of anomalous diffusion, in the presence of two types of measurement errors. In particular, we find that our criterion is very robust for subdiffusion. Various tests on surrogate data in the absence or presence of additional positional noise demonstrate the efficacy of this method in practical contexts. Finally, we provide a proof-of-concept based on diverse experiments exhibiting both normal and anomalous diffusion.
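The spectral idea behind such a criterion can be illustrated on ordinary Brownian motion, whose single-trajectory power spectral density scales as 1/f^2 at low frequencies; a fitted log-log exponent far from -2 would then hint at anomalous diffusion. This is a minimal sketch on synthetic data, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt = 2**14, 1.0

# Ordinary Brownian motion: cumulative sum of Gaussian increments (H = 0.5)
x = np.cumsum(rng.normal(0.0, np.sqrt(dt), N))

# Single-trajectory periodogram S(f) = |x_hat(f)|^2 * dt / N
f = np.fft.rfftfreq(N, dt)[1:]          # drop the zero frequency
S = (np.abs(np.fft.rfft(x)) ** 2) * dt / N
S = S[1:]

# Fit the low-frequency scaling, where the f^-2 law holds for Brownian motion
mask = f < 0.05
slope = np.polyfit(np.log(f[mask]), np.log(S[mask]), 1)[0]
print(f"fitted spectral exponent: {slope:.2f}")  # near -2 for Brownian motion
```

For fractional Brownian motion the low-frequency exponent generalizes to -(2H + 1), so deviations of the fitted slope from -2 discriminate sub- and superdiffusion in the same spirit.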
Efforts have been made in the past to enhance building exposure models on a regional scale with increasing spatial resolution by integrating different data sources. This work follows a similar path and focuses on the downscaling of the existing SARA exposure model that was proposed for the residential building stock of the communes of Valparaiso and Vina del Mar (Chile). Although this model allowed great progress in harmonising building classes and characterising their differential physical vulnerabilities, it is now outdated, and in any case it is spatially aggregated over large administrative units. Hence, to more accurately consider the impact of future earthquakes on these cities, it is necessary to employ more reliable exposure models. For such a purpose, we propose updating this existing model through a Bayesian approach by integrating ancillary data that has been made increasingly available through Volunteered Geographic Information (VGI) activities. Its spatial representation is also optimised in higher-resolution aggregation units that avoid the inconvenience of incomplete building-by-building footprints. A worst-case earthquake scenario is presented to calculate direct economic losses and to highlight, within a sensitivity analysis, the degree of uncertainty introduced by exposure models in comparison with other parameters used to generate the seismic ground motions. This example study shows the great potential of using increasingly available VGI to update worldwide building exposure models, as well as its importance in scenario-based seismic risk assessment.
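The Bayesian updating of an outdated exposure model with new observational counts has a simple conjugate analogue: a Dirichlet prior over building-class shares updated with building counts from VGI. The class names, prior shares, and counts below are all hypothetical; this toy sketch only illustrates the mechanism, not the paper's actual model.

```python
import numpy as np

classes = ["masonry", "reinforced_concrete", "timber"]  # hypothetical classes

# Outdated SARA-style class proportions, encoded as a Dirichlet prior;
# prior_strength acts as pseudo-counts expressing confidence in the old model
prior_shares = np.array([0.5, 0.3, 0.2])
prior_strength = 50
alpha_prior = prior_shares * prior_strength

# Hypothetical building counts per class from a VGI survey of one district
vgi_counts = np.array([120, 260, 20])

# Conjugate update: Dirichlet posterior = prior pseudo-counts + observed counts
alpha_post = alpha_prior + vgi_counts
posterior_mean = alpha_post / alpha_post.sum()
print(dict(zip(classes, np.round(posterior_mean, 3))))
```

The posterior shares shift toward the observed VGI data, with the prior strength controlling how much weight the outdated model retains.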
Answer Set Programming (ASP) is a successful rule-based formalism for modeling and solving knowledge-intense combinatorial (optimization) problems. Despite its success in both academia and industry, open challenges such as automatic source-code optimization and software engineering remain. This is because a problem encoded in ASP might not have the desired solving performance compared to an equivalent representation. Motivated by these two challenges, this paper makes three main contributions. First, we propose a development process towards a methodology for implementing ASP programs, faithful to existing methods. Second, we present ASP encodings that serve as the basis for this development process. Third, we demonstrate the use of ASP to reverse the standard solving process: knowing the answer sets in advance, together with desired strong-equivalence properties, we exhaustively reconstruct ASP programs if they exist. This paper was originally motivated by the search for propositional formulas (if they exist) that represent the semantics of a new aggregate operator, namely a parity aggregate. This aggregate improves on the existing parity (xor) constraints from xorro, which lack expressiveness even though they fit perfectly for reasoning modes like sampling or model counting. To this end, this extended version covers the fundamentals of parity constraints as well as the xorro system. Hence, we delve a little deeper into the examples and the proposed methodology for parity constraints. Finally, we discuss our results by showing the only available representation that satisfies different properties of the classical-logic xor operator and is also consistent with the semantics of the parity constraints from xorro.
Information technology and digital solutions as enablers in the tourism sector require continuous development of skills, as digital transformation is characterized by fast change, complexity, and uncertainty. This research investigates how a cMOOC concept could support the tourism industry. A consortium of three universities, a tourism association, and a tourist attraction investigates the online learning needs and habits of tourism industry stakeholders in the field of digitalization in a cross-border study in the Baltic Sea region. The multi-national survey (n = 244) reveals a high interest in participating in an online learning community, with two-thirds of respondents seeing opportunities to contribute to such a community beyond merely consuming knowledge. The paper demonstrates preferred ways of learning, motivating and hampering aspects, as well as types of possible contributions.
Artificial intelligence (AI)-based technologies can increasingly perform knowledge work tasks, such as medical diagnosis. It is expected that humans will not be replaced by AI but will instead work closely with AI-based technology (“augmentation”). Augmentation has ethical implications for humans (e.g., impact on autonomy, opportunities to flourish through work); thus, developers and managers of AI-based technology have a responsibility to anticipate and mitigate risks to human workers. However, doing so can be difficult, as AI encompasses a wide range of technologies, some of which enable fundamentally new forms of interaction. In this research-in-progress paper, we propose the development of a taxonomy to categorize the unique characteristics of AI-based technology that influence this interaction and have ethical implications for human workers. The completed taxonomy will support researchers in forming cumulative knowledge on the ethical implications of augmentation and assist practitioners in the ethical design and management of AI-based technology in knowledge work.
We investigate whether the distribution of maximum seasonal streamflow is significantly affected by the catchment or climate state of the season/month ahead. We fit the Generalized Extreme Value (GEV) distribution to extreme seasonal streamflow for around 600 stations across Europe by conditioning the GEV location and scale parameters on 14 indices, which represent the season-ahead climate or catchment state. The comparison of these climate-informed models with the classical GEV distribution with time-constant parameters suggests that there is substantial potential for seasonal forecasting of flood probabilities. The potential varies between seasons and regions. Overall, the season-ahead catchment wetness shows the highest potential, although climate indices based on large-scale atmospheric circulation, sea surface temperature or sea ice concentration also show some skill for certain regions and seasons. Spatially coherent patterns and a substantial fraction of climate-informed models are promising signs for early alerts that could increase flood preparedness a season in advance.
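The contrast between a classical time-constant GEV and a climate-informed GEV whose location parameter depends on a season-ahead index can be sketched by maximum likelihood. All data, coefficients, and the AIC-based comparison below are illustrative assumptions, not the study's setup or inference method.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

rng = np.random.default_rng(42)
n = 60

# Hypothetical standardized season-ahead catchment-wetness index
wetness = rng.normal(0.0, 1.0, n)
# Synthetic seasonal maxima whose GEV location depends linearly on wetness
maxima = genextreme.rvs(c=-0.1, loc=100 + 15 * wetness, scale=20, random_state=rng)

def nll_const(p):
    # Classical GEV: constant location, scale, shape
    loc, log_scale, c = p
    return -genextreme.logpdf(maxima, c, loc=loc, scale=np.exp(log_scale)).sum()

def nll_informed(p):
    # Climate-informed GEV: location = b0 + b1 * wetness
    b0, b1, log_scale, c = p
    return -genextreme.logpdf(maxima, c, loc=b0 + b1 * wetness,
                              scale=np.exp(log_scale)).sum()

fit0 = minimize(nll_const, [maxima.mean(), np.log(maxima.std()), 0.0],
                method="Nelder-Mead")
fit1 = minimize(nll_informed, [maxima.mean(), 1.0, np.log(maxima.std()), 0.0],
                method="Nelder-Mead")

aic_const = 2 * 3 + 2 * fit0.fun
aic_informed = 2 * 4 + 2 * fit1.fun
print(aic_informed < aic_const)  # the climate-informed model should win here
```

A lower AIC (or a likelihood-ratio test) for the conditional model is what indicates forecasting potential from the season-ahead index.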
Using time-resolved x-ray diffraction, we demonstrate the manipulation of the picosecond strain response of a metallic heterostructure, consisting of a dysprosium (Dy) transducer and a niobium (Nb) detection layer, by an external magnetic field. We utilize the first-order ferromagnetic–antiferromagnetic phase transition of the Dy layer, which provides an additional large contractive stress upon laser excitation compared to its zero-field response. This enhances the laser-induced contraction of the transducer and changes the shape of the picosecond strain pulses driven in Dy and detected within the buried Nb layer. Based on our experiment with rare-earth metals, we discuss the properties required for functional transducers, which may allow for novel field control of the emitted picosecond strain pulses.
Identifying urban pluvial flood-prone areas is necessary, but the application of two-dimensional hydrodynamic models is limited to small areas. Data-driven models have shown their ability to map flood susceptibility, but their application to urban pluvial flooding is still rare. A flood inventory (4333 flooded locations) and 11 factors which potentially indicate an increased hazard for pluvial flooding were used to implement a convolutional neural network (CNN), artificial neural network (ANN), random forest (RF) and support vector machine (SVM) to: (1) map flood susceptibility in Berlin at 30, 10, 5, and 2 m spatial resolutions; (2) evaluate the trained models' transferability in space; and (3) estimate the most useful factors for flood susceptibility mapping. The models' performance was validated using the Kappa statistic and the area under the receiver operating characteristic curve (AUC). The results indicated that all models perform very well (minimum AUC = 0.87 for the testing dataset). The RF models outperformed all other models at all spatial resolutions, and the RF model at 2 m spatial resolution was superior for the present flood inventory and predictor variables. The majority of the models had a moderate performance for predictions outside the training area based on the Kappa evaluation (minimum AUC = 0.8). Aspect and altitude were the most influential factors for the image-based and point-based models, respectively. Data-driven models can be a reliable tool for urban pluvial flood susceptibility mapping wherever a reliable flood inventory is available.
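The core workflow (train a classifier on flooded vs. non-flooded locations described by predisposing factors, then validate with AUC) can be sketched with a random forest. The data here are entirely synthetic; none of the values reflect the study's inventory, factors, or tuning.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a flood inventory: 11 predisposing factors per
# location, labeled flooded (1) or non-flooded (0)
X, y = make_classification(n_samples=4000, n_features=11, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Susceptibility maps are built from predicted probabilities; AUC validates them
auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
print(f"test AUC = {auc:.2f}")
```

In the mapping step, `predict_proba` would be evaluated on a raster grid of factor values to produce the susceptibility surface at the desired resolution.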
Biological invasions may result from multiple introductions, which might compensate for reduced gene pools caused by bottleneck events, but could also dilute adaptive processes. A previous common-garden experiment showed heritable latitudinal clines in fitness-related traits in the invasive goldenrod Solidago canadensis in Central Europe. These latitudinal clines remained stable even in plants chemically treated with zebularine to reduce epigenetic variation. However, despite the heritability of traits investigated, genetic isolation-by-distance was non-significant. Utilizing the same specimens, we applied a molecular analysis of (epi)genetic differentiation with standard and methylation-sensitive (MSAP) AFLPs. We tested whether this variation was spatially structured among populations and whether zebularine had altered epigenetic variation. Additionally, we used genome scans to mine for putative outlier loci susceptible to selection processes in the invaded range. Despite the absence of isolation-by-distance, we found spatial genetic neighborhoods among populations and two AFLP clusters differentiating northern and southern Solidago populations. Genetic and epigenetic diversity were significantly correlated, but not linked to phenotypic variation. Hence, no spatial epigenetic patterns were detected along the latitudinal gradient sampled. Applying genome-scan approaches (BAYESCAN, BAYESCENV, RDA, and LFMM), we found 51 genetic and epigenetic loci putatively responding to selection. One of these genetic loci was significantly more frequent in populations at the northern range. Also, one epigenetic locus was more frequent in populations in the southern range, but this pattern was lost under zebularine treatment. Our results point to some genetic, but not epigenetic adaptation processes along a large-scale latitudinal gradient of S. canadensis in its invasive range.
There is an ongoing debate about how to test and operationalize self-control. This limited understanding is in large part due to a variety of different tests and measures used to assess self-control, as well as the lack of empirical studies examining the temporal dynamics during the exertion of self-control. In order to track changes that occur over the course of exposure to a self-control task, we investigate and compare behavioral, subjective, and physiological indicators during the exertion of self-control. Participants completed both a task requiring inhibitory control (Go/No-Go task) and a control task (two-choice task). Behavioral performance and pupil size were measured during the tasks. Subjective vitality was measured before and after the tasks. While pupil size and subjective vitality showed similar trajectories in the two tasks, behavioral performance decreased in the inhibitory control-demanding task, but not in the control task. However, behavioral, subjective, and physiological measures were not significantly correlated. These results suggest that there is a disconnect between different measures of self-control with high intra- and interindividual variability. Theoretical and methodological implications for self-control theory and future empirical work are discussed.
Many researchers and politicians believe that the COVID-19 crisis may have opened a "window of opportunity" to spur sustainability transformations. Still, evidence for such a dynamic is currently lacking. Here, we propose the linkage of "big data" and "thick data" methods for monitoring debates on transformation processes by following the COVID-19 discourse on ecological sustainability in Germany. We analysed variations in the topics discussed by applying text mining techniques to a corpus of 84,500 newspaper articles published during the first COVID-19 wave. This allowed us to attain a unique and previously inaccessible "bird's eye view" of how these topics evolved. To deepen our understanding of prominent frames, a qualitative content analysis was undertaken. Furthermore, we investigated public awareness by analysing online search behaviour. The findings show an underrepresentation of sustainability topics in the German news during the early stages of the crisis. Similarly, public awareness regarding climate change was found to be reduced. Nevertheless, by examining the newspaper data in detail, we found that the pandemic is often seen as a chance for sustainability transformations, though not without a set of challenges. Our mixed-methods approach enabled us to bridge knowledge gaps between qualitative and quantitative research by "thickening" and providing context to data-driven analyses. By monitoring whether or not the current crisis is seen as a chance for sustainability transformations, we provide insights for environmental policy in times of crisis.
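One common text mining technique for tracking topic variation in a news corpus is latent Dirichlet allocation; the abstract does not name its exact method, so the following is only a generic sketch on a toy corpus, not a reproduction of the study's pipeline.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Tiny hypothetical corpus standing in for 84,500 newspaper articles
docs = [
    "climate change emissions energy transition",
    "lockdown infection hospital vaccine pandemic",
    "renewable energy climate policy emissions",
    "pandemic lockdown economy vaccine",
]

X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Each row is one article's topic mixture; the rows sum to 1
doc_topics = lda.transform(X)
print(doc_topics.shape)
```

Aggregating these per-article topic shares over publication dates is what yields the "bird's eye view" of how topic prevalence evolves during a crisis.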
Training intervention effects on cognitive performance and neuronal plasticity — A pilot study
(2022)
Studies suggest that people suffering from chronic pain may have altered brain plasticity, along with altered functional connectivity between pain-processing brain regions. These may be related to decreased mood and cognitive performance. There is some debate as to whether physical activity combined with behavioral therapy (e.g. cognitive distraction, body scan) may counteract these changes. However, the underlying neuronal mechanisms are unclear. The aim of the current pilot study, with a 3-armed randomized controlled trial design, was to examine the effects of sensorimotor training for non-specific chronic low back pain on (1) cognitive performance; (2) fMRI activity co-fluctuations (functional connectivity) between pain-related brain regions; and (3) the relationship between functional connectivity and subjective variables (pain and depression). Six hundred and sixty-two volunteers with non-specific chronic low back pain were randomly allocated to a unimodal (sensorimotor training) intervention, a multidisciplinary (sensorimotor training and behavioral therapy) intervention, or a control group within a multicenter study. A subsample of patients (n = 21) from one study center participated in the pilot study presented here. Measurements were taken at baseline, during the intervention (3 weeks, M2), and after the intervention (12 weeks, M4 and 24 weeks, M5). Cognitive performance was measured by the Trail Making Test and functional connectivity by fMRI. Pain perception and depression were assessed by the Von Korff questionnaire and the Hospital Anxiety and Depression Scale. Group differences were calculated by univariate and repeated-measures ANOVA and Bayesian statistics; correlations by Pearson's r. Change and correlation of functional connectivity were analyzed within a pooled intervention group (uni- and multidisciplinary groups). Results revealed that participants with increased pain intensity at baseline showed higher functional connectivity between the pain-related brain areas used as ROIs in this study.
Though small sample sizes limit generalization, cognitive performance increased in the multimodal group. Increased functional connectivity was observed in participants with increased pain ratings. Pain ratings and connectivity in pain-related brain regions decreased after the intervention. The results provide preliminary indication that intervention effects can potentially be achieved on the cognitive and neuronal level. The intervention may be suitable for therapy and prevention of non-specific chronic low back pain.
Trait means or variance
(2021)
One of the few laws in ecology is that communities consist of few common and many rare taxa. Functional traits may help to identify the underlying mechanisms of this community pattern, since they correlate with different niche dimensions. However, comprehensive studies investigating the effects of species mean traits (niche position) and intraspecific trait variability (ITV, niche width) on species abundance are missing. In this study, we investigated fragmented dry grasslands to reveal trait-occurrence relationships in plants at local and regional scales. We predicted that (a) at the local scale, species occurrence is highest for species with intermediate traits; (b) at the regional scale, habitat specialists have a lower species occurrence than generalists, and thus traits associated with stress tolerance have a negative effect on species occurrence; and (c) ITV increases species occurrence irrespective of the scale. We measured three plant functional traits (SLA = specific leaf area, LDMC = leaf dry matter content, plant height) in 21 local dry grassland communities (10 m × 10 m) and analyzed the effect of these traits and their variation on species occurrence. At the local scale, mean LDMC had a positive effect on species occurrence, indicating that stress-tolerant species are the most abundant rather than species with intermediate traits (hypothesis 1). We found limited support for lower specialist occurrence at the regional scale (hypothesis 2). Further, ITV of LDMC and plant height had a positive effect on local occurrence, supporting hypothesis 3. In contrast, at the regional scale, plants with a higher ITV of plant height were less frequent. We found no evidence that the consideration of phylogenetic relationships in our analyses influenced our findings. In conclusion, both species mean traits (in particular LDMC) and ITV were differently related to species occurrence with respect to spatial scale.
Therefore, our study underlines the strong scale-dependency of trait-abundance relationships.
African weakly electric fish of the mormyrid genus Campylomormyrus generate pulse-type electric organ discharges (EODs) for orientation and communication. Their pulse durations are species-specific and elongated EODs are a derived trait. So far, differential gene expression among tissue-specific transcriptomes across species with different pulses and point mutations in single ion channel genes indicate a relation of pulse duration and electrocyte geometry/excitability. However, a comprehensive assessment of expressed Single Nucleotide Polymorphisms (SNPs) throughout the entire transcriptome of African weakly electric fish, with the potential to identify further genes influencing EOD duration, is still lacking. This is of particular value, as discharge duration is likely based on multiple cellular mechanisms and various genes. Here we provide the first transcriptome-wide SNP analysis of African weakly electric fish species (genus Campylomormyrus) differing by EOD duration to identify candidate genes and cellular mechanisms potentially involved in the determination of an elongated discharge of C. tshokwe. Non-synonymous substitutions specific to C. tshokwe were found in 27 candidate genes with inferred positive selection among Campylomormyrus species. These candidate genes had mainly functions linked to transcriptional regulation, cell proliferation and cell differentiation. Further, by comparing gene annotations between C. compressirostris (ancestral short EOD) and C. tshokwe (derived elongated EOD), we identified 27 GO terms and 2 KEGG pathway categories for which C. tshokwe significantly more frequently exhibited a species-specific expressed substitution than C. compressirostris. The results indicate that transcriptional regulation as well cell proliferation and differentiation take part in the determination of elongated pulse durations in C. tshokwe. 
Those cellular processes are pivotal for tissue morphogenesis and might determine the shape of electric organs, supporting the observed correlation between electrocyte geometry/tissue structure and discharge duration. The inferred expressed SNPs and their functional implications are a valuable resource for future investigations of EOD duration.
The oil palm (Elaeis guineensis Jacq.) produces a large amount of oil from its fruit. However, increasing the oil production in this fruit is still challenging. A recent study has shown that starch metabolism is essential for oil synthesis in fruit-producing species. Therefore, a transcriptomic analysis by RNA-seq was performed to observe alterations in the expression of starch metabolism genes throughout the maturity stages of oil palm fruit with different oil yields. Gene expression profiles were examined for three oil yield groups (low, medium, and high) at six fruit development phases (4, 8, 12, 16, 20, and 22 weeks after pollination). We successfully identified and analyzed differentially expressed genes in oil palm mesocarps during development. The results showed that the transcriptome profile of each developmental phase was unique. Sucrose flux to the mesocarp tissue, rapid starch turnover, and high glycolytic activity were identified as critical factors for oil production in oil palms. For starch metabolism and the glycolytic pathway, we identified specific expression of enzyme isoforms (isozymes) that correlated with oil production and may determine the oil content. This study provides valuable information for creating new high-oil-yielding palm varieties via breeding programs or genome editing approaches.
Transcriptomic dataset for early inflorescence stages of oil palm in response to defoliation stress
(2022)
Oil palm breeding and seed development have been hindered by the male parent's incapacity to produce male inflorescences as a source of pollen under normal conditions. On the other hand, young oil palm plantations have low pollination rates due to a lack of male flowers. These are common sex-ratio problems in the oil palm industry. Nevertheless, the regulation of the sex ratio in oil palm is a complex mechanism and remains an open question. Researchers have previously used complete defoliation to induce male inflorescences, but the biological and molecular mechanisms underlying this morphological change have yet to be discovered. Here, we present an RNA-seq dataset from three early stages of oil palm inflorescences under normal conditions and complete defoliation stress. This transcriptomic dataset is a valuable resource for improving our understanding of sex determination mechanisms in oil palm inflorescences.
Previous clinical research found that invasive vagus nerve stimulation (VNS) enhanced word recognition memory in epileptic patients, an effect assumed to be related to the activation of brainstem arousal systems. In this study, we applied non-invasive transcutaneous auricular VNS (tVNS) to replicate and extend the previous work. Using a single-blind, randomized, between-subject design, 60 healthy volunteers received active or sham stimulation during a lexical decision task, in which emotional and neutral stimuli were classified as words or non-words. In a subsequent recognition memory task (1 day after stimulation), participants' memory performance on these words and their subjective memory confidence were tested. Salivary alpha-amylase (sAA) levels, a putative indirect measure of central noradrenergic activation, were also measured before and after stimulation. During encoding, pleasant words were more accurately detected than neutral and unpleasant words. However, no tVNS effects were observed on task performance or on overall sAA level changes. tVNS also did not modulate overall recognition memory, which was particularly enhanced for pleasant emotional words. However, when hit rates were split based on confidence ratings reflecting familiarity- and recollection-based memory, higher recollection-based memory performance (irrespective of emotional category) was observed during active stimulation than during sham stimulation. To summarize, we replicated prior findings of enhanced processing and memory for emotional (pleasant) words. Whereas tVNS showed no effects on word processing, subtle effects on recollection-based memory performance emerged, which may indicate that tVNS facilitates hippocampus-mediated consolidation processes.
Transferability of data-driven models to predict urban pluvial flood water depth in Berlin, Germany
(2023)
Data-driven models have recently been suggested as surrogates for computationally expensive hydrodynamic models in flood hazard mapping. However, most studies have focused on developing models for the same area or the same precipitation event. It is thus not obvious how transferable the models are in space. This study evaluates the performance of a convolutional neural network (CNN) based on the U-Net architecture and the random forest (RF) algorithm in predicting flood water depth, the models' transferability in space, and performance improvements from transfer learning techniques. We used three study areas in Berlin to train, validate and test the models. The results showed that (1) the RF models outperformed the CNN models for predictions within the training domain, presumably at the cost of overfitting; (2) the CNN models had significantly higher potential than the RF models to generalize beyond the training domain; and (3) the CNN models could benefit more from transfer learning techniques to boost their performance outside the training domains than the RF models.
I study the deterministic dynamics of chiral active particles in two dimensions. Particles are modeled as discs interacting via elastic repulsive forces. An ensemble of particles, started from random initial conditions, demonstrates chaotic collisions resulting in normal diffusion. This chaos is transient, as a synchronous collisionless state establishes itself rather abruptly. The lifetime of chaos grows exponentially with the number of particles. External forcing (periodic or chaotic) is shown to facilitate the synchronization transition.
Biodiversity conservation and agricultural production have been largely framed as separate goals for landscapes in the discourse on land use. Although there is an increasing tendency to move away from this dichotomy in theory, the tendency is perpetuated by the spatially explicit approaches used in research and management practice. Transition zones (TZ) have previously been defined as areas where two adjacent fields or patches interact, and so they occur abundantly throughout agricultural landscapes. Biodiversity patterns in TZ have been extensively studied, but their relationship to yield patterns and social-ecological dimensions has been largely neglected. Focusing on European, temperate agricultural landscapes, we outline three areas of research and management that together demonstrate how TZ might be used to facilitate an integrated landscape approach: (i) plant and animal species' use and response to boundaries and the resulting effects on yield, for a deeper understanding of how landscape structure shapes quantity and quality of TZ; (ii) local knowledge on field or patch-level management and its interactions with biodiversity and yield in TZ, and (iii) conflict prevention and collaborative management across land-use boundaries.
Background and Objectives: Low back pain is a worldwide health problem. An early diagnosis is required to develop personalized treatment strategies. The Risk Stratification Index (RSI) was developed to serve this purpose. The aim of this pilot study was to cross-culturally translate the RSI into a French version (RSI-F) and evaluate the test-retest reliability of the RSI-F in a French active population. Materials and Methods: The RSI was translated from German into French (RSI-F) following the guidelines for cross-cultural adaptation of self-report measures. A total of 42 French recreational athletes (age 18–63 years) with non-specific low back pain were recruited and filled in the RSI-F twice. Test-retest reliability was examined using the intraclass correlation coefficient (ICC1,2) and the Pearson correlation coefficient. Results: Finally, 33 questionnaires were analyzed (14 males and 19 females, age 31 ± 10 years, 9.5 ± 3.2 h/week of training). The test-retest reliability of the RSI-F CPI and DISS was excellent (CPI: ICC1,2 = 0.989, p < 0.001; r = 0.989, p < 0.001; DISS: ICC1,2 = 0.991, p < 0.001; r = 0.991, p < 0.001), as was that of Korff pain intensity (ICC1,2 = 0.995, p < 0.001; r = 0.995, p < 0.001) and disability (ICC1,2 = 0.998, p < 0.001; r = 0.998, p < 0.001). Conclusion: The RSI-F is linguistically accurate and reliable for use in a French-speaking active population with non-specific low back pain. The RSI-F can be considered a tool to examine the evolution of psychosocial factors and therefore the risk of chronicity and the prognosis of pain. Further properties, such as internal and external validity and responsiveness, should be evaluated in a larger population.
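A test-retest analysis of the kind reported above can be sketched in a few lines of pure Python. The example below is illustrative only: the data are invented, and the ICC form shown (two-way random effects, single measures, absolute agreement, often written ICC(2,1)) is one common reading of the ICC notation used in the abstract, not a claim about the study's exact software or formula.

```python
from statistics import mean

def icc_2_1(ratings):
    """Two-way random-effects, single-measures ICC (absolute agreement).
    `ratings` holds one [session1, session2, ...] list per subject."""
    n, k = len(ratings), len(ratings[0])
    grand = mean(v for row in ratings for v in row)
    # mean squares from a two-way ANOVA without replication
    ms_rows = k * sum((mean(row) - grand) ** 2 for row in ratings) / (n - 1)
    ms_cols = n * sum((mean(col) - grand) ** 2 for col in zip(*ratings)) / (k - 1)
    ss_total = sum((v - grand) ** 2 for row in ratings for v in row)
    ss_err = ss_total - (n - 1) * ms_rows - (k - 1) * ms_cols
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def pearson(x, y):
    """Pearson correlation between two equally long score sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# hypothetical test-retest questionnaire scores for six participants
scores = [[12, 13], [25, 24], [31, 33], [18, 18], [40, 41], [22, 21]]
print(icc_2_1(scores), pearson(*zip(*scores)))
```

With consistent scores across sessions, both statistics approach 1, matching the pattern of near-perfect ICC and r values reported in the abstract.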
Broad-spectrum antibiotic combination therapy is frequently applied due to the increasing resistance development of infective pathogens. The objective of the present study was to evaluate two common empiric broad-spectrum combination therapies consisting of either linezolid (LZD) or vancomycin (VAN) combined with meropenem (MER) against Staphylococcus aureus (S. aureus), the most frequent causative pathogen of severe infections. A semimechanistic pharmacokinetic-pharmacodynamic (PK-PD) model mimicking a simplified bacterial life-cycle of S. aureus was developed based on time-kill curve data to describe the effects of LZD, VAN, and MER alone and in dual combinations. The PK-PD model was successfully (i) evaluated with external data from two clinical S. aureus isolates and further drug combinations and (ii) challenged to predict common clinical PK-PD indices and breakpoints. Finally, clinical trial simulations were performed, which revealed that the combination of VAN-MER might be favorable over LZD-MER due to an unfavorable antagonistic interaction between LZD and MER.
TransPipe
(2021)
Online learning environments, such as Massive Open Online Courses (MOOCs), often rely on videos as a major component to convey knowledge. However, these videos exclude potential participants who do not understand the lecturer’s language, whether due to language unfamiliarity or hearing impairments. Subtitles and/or interactive transcripts solve this issue, ease navigation based on the content, and enable indexing and retrieval by search engines. Although there are several automated speech-to-text converters and translation tools, their quality varies and the process of integrating them can be quite tedious. Thus, in practice, many videos on MOOC platforms only receive subtitles after the course is already finished (if at all) due to a lack of resources. This work describes an approach to tackle this issue by providing a dedicated tool that closes this gap between MOOC platforms and transcription and translation tools, offering a simple workflow that can easily be handled by users with a less technical background. The proposed method is designed and evaluated through qualitative interviews with three major MOOC providers.
Thus far, research into reservations to treaties has often overlooked reservations formulated to both European Social Charters (and its Protocols) and the relevant European Committee of Social Rights practices. There are several pressing reasons to further explore this gap in existing literature. First, an analysis of practices within the European Social Charters (and Protocols) will provide a fuller picture of the reservations and responses of treaty bodies. Second, in the context of previous landmark events it is worth noting the practices of another human rights treaty monitoring body that is often omitted from analyses. Third, the very fact that the formulation of reservations to treaties gives parties such far-reaching flexibility to shape their contractual obligations (à la carte) is surprising. An important outcome of the research is the finding that, despite the far-reaching flexibility present in the treaties analysed, both the States Parties and the European Committee of Social Rights generally treat them as conventional treaties to which the general rules on reservations apply. Consequently, there is no basis for assuming that the mere fact of adopting the à la carte system in a treaty with no reservation clause implies a formal prohibition of reservations or otherwise discourages their formulation.
Trends in streamflow, rainfall and potential evapotranspiration (PET) time series, from 1970 to 2017, were assessed for five important hydrological basins in Southeastern Brazil. The concept of elasticity was also used to assess the sensitivity of streamflow to changes in climate variables, for annual data and 5-, 10- and 20-year moving averages. Significant negative trends in streamflow and rainfall and a significant increasing trend in PET were detected. For the annual analysis, elasticity revealed that a 1% decrease in rainfall resulted in a 1.21-2.19% decrease in streamflow, while a 1% increase in PET induced reductions in streamflow ranging from 2.45% to 9.67%. When both PET and rainfall were included in the elasticity calculation, results were positive for some basins. Elasticity analysis considering 20-year moving averages revealed that impacts on streamflow were cumulative: a 1% decrease in rainfall resulted in a 1.83-4.75% decrease in streamflow, while a 1% increase in PET induced a 3.47-28.3% decrease in streamflow. This different temporal response may be associated with the hydrological memory of the basins. Streamflow appears to be more sensitive in less rainy basins. This study provides useful information to support strategic government decisions, especially when the security of water resources and drought mitigation are considered in the face of climate change.
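The elasticity concept used above can be illustrated with a standard nonparametric estimator (the median-based estimator of Sankarasubramanian et al. is a common choice; whether the study used this exact form is an assumption, and the data below are invented for illustration):

```python
from statistics import mean, median

def climate_elasticity(q, p):
    """Nonparametric elasticity of streamflow Q with respect to a climate
    driver P: the median of year-by-year relative changes. A value of 2
    means a 1% change in P maps to roughly a 2% change in Q."""
    q_bar, p_bar = mean(q), mean(p)
    return median((qi - q_bar) / (pi - p_bar) * p_bar / q_bar
                  for qi, pi in zip(q, p) if pi != p_bar)

# hypothetical annual rainfall (mm) and streamflow (m³/s) records
rain = [1200, 1100, 1350, 1000, 1280, 1150]
flow = [310, 260, 390, 215, 355, 285]
print(climate_elasticity(flow, rain))
```

Applying the same estimator to 5-, 10- or 20-year moving averages of `q` and `p`, as in the abstract, probes how cumulative the response is.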
Thousands of glacier lakes have been forming behind natural dams in high mountains following glacier retreat since the early 20th century. Some of these lakes abruptly released pulses of water and sediment with disastrous downstream consequences. Yet it remains unclear whether the reported rise of these glacier lake outburst floods (GLOFs) has been fueled by a warming atmosphere and enhanced meltwater production, or simply by a growing research effort. Here we estimate trends and biases in GLOF reporting based on the largest global catalog of 1,997 dated glacier-related floods in six major mountain ranges from 1901 to 2017. We find that the positive trend in the number of reported GLOFs has decayed distinctly after a break in the 1970s, coinciding with independently detected trend changes in annual air temperatures and in the annual number of field-based glacier surveys (a proxy of scientific reporting). We observe that GLOF reports and glacier surveys decelerated, while temperature rise accelerated in the past five decades. Enhanced warming alone can thus hardly explain the annual number of reported GLOFs, suggesting that temperature-driven glacier lake formation, growth, and failure are weakly coupled, or that outbursts have been overlooked. Indeed, our analysis emphasizes a distinct geographic and temporal bias in GLOF reporting, and we project that, on average, two to four out of five GLOFs might have gone unnoticed in the early to mid-20th century. We recommend that such biases be considered, or better, corrected for, when attributing the frequency of reported GLOFs to atmospheric warming.
Like conventional software projects, projects in model-driven software engineering require adequate management of multiple versions of development artifacts, importantly allowing developers to live with temporary inconsistencies. In the case of model-driven software engineering, the employed versioning approaches also have to handle situations where different artifacts, that is, different models, are linked via automatic model transformations.
In this report, we propose a technique for jointly handling the transformation of multiple versions of a source model into corresponding versions of a target model, which enables the use of a more compact representation that may afford improved execution time of both the transformation and further analysis operations. Our approach is based on the well-known formalism of triple graph grammars and a previously introduced encoding of model version histories called multi-version models. In addition to showing the correctness of our approach with respect to the standard semantics of triple graph grammars, we conduct an empirical evaluation that demonstrates the potential benefit regarding execution time performance.
Background
Secondary endosymbionts of aphids provide benefits to their hosts, but also impose costs such as reduced lifespan and reproductive output. The aphid Aphis fabae is host to different strains of the secondary endosymbiont Hamiltonella defensa, which encode different putative toxins. These strains have very different phenotypes: they reach different densities in the host, and the costs and benefits (protection against parasitoid wasps) they confer to the host vary strongly.
Results
We used RNA-Seq to generate hypotheses on why four of these strains inflict such different costs on A. fabae. We found that different H. defensa strains cause strain-specific changes in aphid gene expression, but have little effect on the gene expression of the primary endosymbiont, Buchnera aphidicola. The highly costly and over-replicating H. defensa strain H85 was associated with strongly reduced aphid expression of hemocytin, a marker of hemocytes in Drosophila. The closely related strain H15 was associated with downregulation of ubiquitin-related modifier 1, which is related to nutrient sensing and oxidative stress in other organisms. Strain H402 was associated with strong differential regulation of a set of hypothetical proteins, the majority of which were only differentially regulated in the presence of H402.
Conclusions
Overall, our results suggest that costs of different strains of H. defensa are likely caused by different mechanisms, and that these costs are imposed by interacting with the host rather than the host's obligatory endosymbiont B. aphidicola.
TRIPOD
(2021)
Inertial measurement units (IMUs) enable easy-to-operate and low-cost data recording for gait analysis. When combined with treadmill walking, a large number of steps can be collected in a controlled environment without the need for a dedicated gait analysis laboratory. In order to evaluate existing and novel IMU-based gait analysis algorithms for treadmill walking, a reference dataset that includes IMU data as well as reliable ground truth measurements for multiple participants and walking speeds is needed. This article provides a reference dataset consisting of 15 healthy young adults who walked on a treadmill at three different speeds. Data were acquired using seven IMUs placed on the lower body, two different reference systems (Zebris FDMT-HQ and OptoGait), and two RGB cameras. Additionally, in order to validate an existing IMU-based gait analysis algorithm using the dataset, an adaptable modular data analysis pipeline was built. Our results show agreement between the pressure-sensitive Zebris and the photoelectric OptoGait system (r = 0.99), demonstrating the quality of our reference data. As a use case, the performance of an algorithm originally designed for overground walking was tested on treadmill data using the data pipeline. The accuracy of stride length and stride time estimations was comparable to that reported in other studies with overground data, indicating that the algorithm is equally applicable to treadmill data. The Python source code of the data pipeline is publicly available, and the dataset will be provided by the authors upon request, enabling future evaluations of IMU gait analysis algorithms without the need to record new data.
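Agreement between two measurement systems, reported above via Pearson's r, is often complemented by Bland-Altman limits of agreement. The following minimal sketch uses made-up stride-length pairs, not the actual dataset, and is one standard way such a validation step could look:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired
    measurements from two systems (e.g. IMU-derived vs. reference)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)  # assumes roughly normal differences
    return bias, bias - half_width, bias + half_width

# hypothetical stride lengths (m) from two systems for eight strides
sys_a = [1.32, 1.28, 1.35, 1.30, 1.41, 1.27, 1.33, 1.38]
sys_b = [1.30, 1.29, 1.33, 1.31, 1.39, 1.28, 1.31, 1.37]
bias, lo, hi = bland_altman(sys_a, sys_b)
print(f"bias={bias:.3f} m, LoA=[{lo:.3f}, {hi:.3f}] m")
```

A near-zero bias with narrow limits would corroborate a high correlation such as the r = 0.99 reported between the two reference systems.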
Peatlands represent large terrestrial carbon banks. Given that most peat accumulates in boreal regions, where low temperatures and water saturation preserve organic matter, the existence of peat in (sub)tropical regions remains enigmatic. Here we examined peat and plant chemistry across a latitudinal transect from the Arctic to the tropics. Near-surface low-latitude peat has lower carbohydrate and greater aromatic content than near-surface high-latitude peat, creating a reduced oxidation state and resulting recalcitrance. This recalcitrance allows peat to persist in the (sub)tropics despite warm temperatures. Because we observed similar declines in carbohydrate content with depth in high-latitude peat, our data explain recent field-scale deep peat warming experiments in which catotelm (deeper) peat remained stable despite temperature increases up to 9 degrees C. We suggest that high-latitude deep peat reservoirs may be stabilized in the face of climate change by their ultimately lower carbohydrate and higher aromatic composition, similar to tropical peats.
The application of the fractional calculus in the mathematical modelling of relaxation processes in complex heterogeneous media has attracted a considerable amount of interest lately.
The reason for this is the successful implementation of fractional stochastic and kinetic equations in the studies of non-Debye relaxation.
In this work, we consider the rotational diffusion equation with a generalised memory kernel in the context of dielectric relaxation processes in a medium composed of polar molecules. We give an overview of existing models on non-exponential relaxation and introduce an exponential resetting dynamic in the corresponding process.
The autocorrelation function and complex susceptibility are analysed in detail.
We show that stochastic resetting leads to a saturation of the autocorrelation function to a constant value, in contrast to the case without resetting, for which it decays to zero. The behaviour of the autocorrelation function, as well as the complex susceptibility in the presence of resetting, confirms that the dielectric relaxation dynamics can be tuned by an appropriate choice of the resetting rate.
The presented results are general and flexible, and they will be of interest for the theoretical description of non-trivial relaxation dynamics in heterogeneous systems composed of polar molecules.
In the present paper we empirically investigate the psychometric properties of some of the most famous statistical and logical cognitive illusions from the "heuristics and biases" research program by Daniel Kahneman and Amos Tversky, who nearly 50 years ago introduced fascinating brain teasers such as the famous Linda problem, the Wason card selection task, and so-called Bayesian reasoning problems (e.g., the mammography task). In the meantime, a great number of articles have been published that empirically examine single cognitive illusions, theoretically explain people's faulty thinking, or propose and experimentally implement measures to foster insight and to make these problems accessible to the human mind. Yet these problems have thus far usually been analyzed empirically on an individual-item level only (e.g., by experimentally comparing participants' performance on various versions of one of these problems). In this paper, by contrast, we examine these illusions as a group and look at the ability to solve them as a psychological construct. Based on a sample of N = 2,643 Luxembourgish school students aged 16-18, we investigate the internal psychometric structure of these illusions (i.e., are they substantially correlated? Do they form a reflective or a formative construct?), their connection to related constructs (e.g., are they distinguishable from intelligence or mathematical competence in a confirmatory factor analysis?), and the question of which of a person's abilities predict the correct solution of these brain teasers (by means of a regression analysis).
Ulcerative colitis (UC) belongs to the inflammatory bowel diseases, and moderate to severe UC patients can be treated with anti-tumour necrosis factor alpha monoclonal antibodies, including infliximab (IFX). Even though treatment of UC patients with IFX has been in place for over a decade, many gaps in the modelling of IFX pharmacokinetics (PK) in this population remain. This is even more true for acute severe UC (ASUC) patients, for whom early prediction of IFX PK could highly improve treatment outcome. Thus, this review aims to compile and analyse published population PK models of IFX in UC and ASUC patients, and to assess the current knowledge on the impact of disease activity on IFX PK. For this, a semi-systematic literature search was conducted, from which 26 publications including a population PK model analysis of UC patients receiving IFX therapy were selected. Amongst those, only four developed a model specifically for UC patients, and only three populations included severe UC patients. Investigations of the impact of disease activity on PK were reported in only 4 of the 14 models selected. In addition, the lack of reported model code and of assessments of predictive performance makes the use of published models in a clinical setting challenging. Thus, more comprehensive investigation of PK in UC and ASUC is needed, as well as more adequate reporting of developed models and their evaluation, in order to apply them in a clinical setting.
Uncertainty in climate change impact studies for irrigated maize cropping systems in southern Spain
(2022)
This study investigates the main drivers of uncertainties in simulated irrigated maize yield under historical conditions as well as scenarios of increased temperatures and altered irrigation water availability.
Using the APSIM, MONICA, and SIMPLACE crop models, we quantified the relative contributions of three irrigation water allocation strategies, three sowing dates, and three maize cultivars to the uncertainty in simulated yields.
The water allocation strategies were derived from historical records of farmers' allocation patterns in the drip-irrigation scheme of the Genil-Cabra region, Spain (2014-2017).
By considering combinations of allocation strategies, the adjusted R² values (showing the degree of agreement between simulated and observed yields) increased by 29% compared to the unrealistic assumption of considering only near-optimal or deficit irrigation scheduling. The factor decomposition analysis based on historical climate showed that the irrigation strategy was the main driver of uncertainty in simulated yields (66%).
However, under temperature increase scenarios, the contribution of crop model and cultivar choice to the uncertainty in simulated yields was as important as that of the irrigation strategy. This was partially due to differences in model structure in the processes related to temperature responses.
Our study calls for including information on irrigation strategies conducted by farmers to reduce the uncertainty in simulated yields at field scale.
The growing worldwide impact of flood events has motivated the development and application of global flood hazard models (GFHMs). These models have become useful tools for flood risk assessment and management, especially in regions where little local hazard information is available. One of the key uncertainties associated with GFHMs is the estimation of extreme flood magnitudes to generate flood hazard maps. In this study, the 1-in-100 year flood (Q100) magnitude was estimated using flow outputs from four global hydrological models (GHMs) and two global flood frequency analysis datasets for 1350 gauges across the conterminous US. The annual maximum flows of the observed and modelled streamflow time series were bootstrapped to evaluate the sensitivity of the underlying data to extrapolation. Results show that there are clear spatial patterns of bias associated with each method. GHMs show a general tendency to overpredict at Western US gauges and underpredict at Eastern US gauges. The GloFAS and HYPE models underpredict Q100 by more than 25% at 68% and 52% of gauges, respectively. The PCR-GLOBWB and CaMa-Flood models overestimate Q100 by more than 25% at 60% and 65% of gauges in the West and Central US, respectively. The global frequency analysis datasets have spatial variabilities that differ from the GHMs. We found that river basin area and topographic elevation explain some of the spatial variability in predictive performance found in this study. However, there is no single model or method that performs best everywhere, and we therefore recommend that a weighted ensemble of predictions of extreme flood magnitudes be used for large-scale flood hazard assessment.
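Estimating a 1-in-100 year flood from annual maxima, as described above, can be illustrated with a simple Gumbel fit. This is one standard textbook approach (method of moments), not necessarily the study's exact procedure, and the bootstrap mirrors the sensitivity check it describes; the gauge record below is invented.

```python
import math
import random
from statistics import mean, stdev

def gumbel_quantile(annual_max, t=100):
    """T-year flood from an annual maximum series via a Gumbel
    distribution fitted by the method of moments."""
    beta = stdev(annual_max) * math.sqrt(6) / math.pi
    mu = mean(annual_max) - 0.5772 * beta          # Euler-Mascheroni constant
    y_t = -math.log(-math.log(1 - 1 / t))          # Gumbel reduced variate
    return mu + beta * y_t

def bootstrap_interval(annual_max, t=100, n_boot=1000, seed=0):
    """Resample the annual maxima with replacement to gauge how
    sensitive the T-year estimate is to the underlying sample."""
    rng = random.Random(seed)
    est = sorted(
        gumbel_quantile([rng.choice(annual_max) for _ in annual_max], t)
        for _ in range(n_boot))
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]

# hypothetical annual maximum flows (m³/s) for a single gauge
ams = [420.0, 510.0, 380.0, 610.0, 450.0, 530.0, 700.0, 490.0,
       560.0, 410.0, 640.0, 470.0, 520.0, 590.0, 430.0, 480.0]
print(gumbel_quantile(ams), bootstrap_interval(ams))
```

Comparing such observation-based Q100 estimates with quantiles derived from modelled streamflow is the essence of the gauge-by-gauge bias evaluation described in the abstract.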
Intuitively, strongly constraining contexts should lead to stronger probabilistic representations of sentences in memory. Encountering unexpected words could therefore be expected to trigger costlier shifts in these representations than expected words. However, psycholinguistic measures commonly used to study probabilistic processing, such as the N400 event-related potential (ERP) component, are sensitive to word predictability but not to contextual constraint. Some research suggests that constraint-related processing cost may be measurable via an ERP positivity following the N400, known as the anterior post-N400 positivity (PNP). The PNP is argued to reflect update of a sentence representation and to be distinct from the posterior P600, which reflects conflict detection and reanalysis. However, constraint-related PNP findings are inconsistent. We sought to conceptually replicate Federmeier et al. (2007) and Kuperberg et al. (2020), who observed that the PNP, but not the N400 or the P600, was affected by constraint at unexpected but plausible words. Using a pre-registered design and statistical approach maximising power, we demonstrated a dissociated effect of predictability and constraint: strong evidence for predictability but not constraint in the N400 window, and strong evidence for constraint but not predictability in the later window. However, the constraint effect was consistent with a P600 and not a PNP, suggesting increased conflict between a strong representation and unexpected input rather than greater update of the representation. We conclude that either a simple strong/weak constraint design is not always sufficient to elicit the PNP, or that previous PNP constraint findings could be an artifact of smaller sample size.
Worldwide, companies are increasingly making claims about their current climate efforts and their future mitigation commitments. These claims tend to be underpinned by carbon credits issued in voluntary carbon markets to offset emissions. Corporate climate claims are largely unregulated which means that they are often (perceived to be) misleading and deceptive. As such, corporate climate claims risk undermining, rather than contributing to, global climate mitigation. This paper takes as its point of departure the proposition that a better understanding of corporate climate claims is needed to govern such claims in a manner that adequately addresses potential greenwashing risks. To that end, the paper reviews the nascent literature on corporate climate claims relying on the use of voluntary carbon credits. Drawing on the reviewed literature, three key dimensions of corporate climate claims as related to carbon credits are discussed: 1) the intended use of carbon credits: offsetting versus non-offsetting claims; 2) the framing and meaning of headline terms: net-zero versus carbon neutral claims; and 3) the status of the claim: future aspirational commitments versus stated achievements. The paper thereby offers a preliminary categorization of corporate climate claims and discusses risks associated with and governance implications for each of these categories.
Year-to-year variations in crop yields can have major impacts on the livelihoods of subsistence farmers and may trigger significant global price fluctuations, with severe consequences for people in developing countries. Fluctuations can be induced by weather conditions, management decisions, weeds, diseases, and pests. Although an explicit quantification and deeper understanding of weather-induced crop-yield variability is essential for adaptation strategies, so far it has only been addressed by empirical models. Here, we provide conservative estimates of the fraction of reported national yield variabilities that can be attributed to weather by state-of-the-art, process-based crop model simulations. We find that observed weather variations can explain more than 50% of the variability in wheat yields in Australia, Canada, Spain, Hungary, and Romania. For maize, weather sensitivities exceed 50% in seven countries, including the United States. The explained variance exceeds 50% for rice in Japan and South Korea and for soy in Argentina. Avoiding water stress by simulating yields assuming full irrigation shows that water limitation is a major driver of the observed variations in most of these countries. Identifying the mechanisms leading to crop-yield fluctuations is not only fundamental for dampening fluctuations, but is also important in the context of the debate on the attribution of loss and damage to climate change. Since process-based crop models not only account for weather influences on crop yields, but also provide options to represent human-management measures, they could become essential tools for differentiating these drivers, and for exploring options to reduce future yield fluctuations.
The passive and active motion of micron-sized tracer particles in crowded liquids and inside living biological cells is ubiquitously characterised by 'viscoelastic' anomalous diffusion, in which the increments of the motion feature long-ranged negative and positive correlations. While viscoelastic anomalous diffusion is typically modelled by a Gaussian process with correlated increments, so-called fractional Gaussian noise, an increasing number of systems are reported, in which viscoelastic anomalous diffusion is paired with non-Gaussian displacement distributions. Following recent advances in Brownian yet non-Gaussian diffusion we here introduce and discuss several possible versions of random-diffusivity models with long-ranged correlations. While all these models show a crossover from non-Gaussian to Gaussian distributions beyond some correlation time, their mean squared displacements exhibit strikingly different behaviours: depending on the model crossovers from anomalous to normal diffusion are observed, as well as a priori unexpected dependencies of the effective diffusion coefficient on the correlation exponent. Our observations of the non-universality of random-diffusivity viscoelastic anomalous diffusion are important for the analysis of experiments and a better understanding of the physical origins of 'viscoelastic yet non-Gaussian' diffusion.
Introduction
Balance is vital for human health, and experiments have been conducted to measure the mechanisms of postural control, for example studying reflex responses to simulated perturbations. Such studies are frequent in walking but less common in running, and an understanding of reflex responses to trip-like disturbances could enhance our understanding of human gait and improve approaches to training and rehabilitation. Therefore, the primary aim of this study was to investigate the technical validity and reliability of a treadmill running protocol with perturbations. A further exploratory aim was to evaluate the associated neuromuscular reflex responses to the perturbations in the lower limbs.
Methods
Twelve healthy participants completed a running protocol (9 km/h) test-retest (2 weeks apart), whereby 30 unilateral perturbations were executed via the treadmill belts (presets: 2.0 m/s amplitude; 150 ms delay post-heel contact; 100 ms duration). Validity of the perturbations was assessed via mean +/- SD comparison, percentage error calculation between the preset and recorded perturbation characteristics (PE%), and coefficient of variation (CV%). Test-retest reliability (TRV%) and Bland-Altman analysis (BLA; bias +/- 1.96 * SD) were calculated for reliability. To measure reflex activity, electromyography (EMG) was applied to both legs. EMG amplitudes (root mean square, normalized to unperturbed strides) and latencies [ms] were analysed descriptively.
Results
Left-side perturbation amplitude was 1.9 +/- 0.1 m/s, delay 105 +/- 2 ms, and duration 78 +/- 1 ms. Right-side perturbation amplitude was 1.9 +/- 0.1 m/s, delay 118 +/- 2 ms, and duration 78 +/- 1 ms. PE% ranged from 5% to 30% for the recorded perturbations. CV% of the perturbations ranged from 19.5% to 76.8%. TRV% for the perturbations was 6.4-16.6%. BLA for the left side was amplitude: 0.0 +/- 0.3 m/s, delay: 0 +/- 17 ms, duration: 2 +/- 13 ms; for the right side, amplitude: 0.1 +/- 0.7 m/s, delay: 4 +/- 40 ms, duration: 1 +/- 35 ms. EMG amplitudes ranged from 175 +/- 141% to 454 +/- 359% in both limbs. Latencies were 109 +/- 12 to 116 +/- 23 ms in the tibialis anterior, and 128 +/- 49 to 157 +/- 20 ms in the biceps femoris.
Discussion
Generally, this study indicated sufficient validity and reliability of the current setup considering the technical challenges and limitations, although the reliability of the right-sided perturbations could be questioned. The protocol provoked reflex responses in the lower extremities, especially in the leading leg. Acute neuromusculoskeletal adjustments to the perturbations could be studied and compared in clinical and healthy running populations, and the protocol could be utilised to monitor chronic adaptations to interventions over time.
Since COVID-19 became a pandemic, many studies are being conducted to get a better understanding of the disease itself and its spread. One crucial indicator is the prevalence of SARS-CoV-2 infections. Since this measure is an important foundation for political decisions, its estimate must be reliable and unbiased. This paper presents reasons for biases in prevalence estimates due to unit nonresponse in typical studies. Since it is difficult to avoid bias in situations with mostly unknown nonresponse mechanisms, we propose the maximum amount of bias as one measure to assess the uncertainty due to nonresponse. An interactive web application is presented that calculates the limits of such a conservative unit nonresponse confidence interval (CUNCI).
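The exact CUNCI construction is defined in the paper and its web application; the snippet below only sketches the standard worst-case (Manski-style) logic that such a bound builds on, under the assumption that every nonrespondent could be either infected or uninfected. All parameter names here are illustrative, not the paper's notation.

```python
from math import sqrt

def nonresponse_bounds(p_resp, r, n_resp, z=1.96):
    """Worst-case bounds on prevalence under unit nonresponse.

    p_resp : prevalence observed among respondents
    r      : response rate (share of the sample that responded)
    n_resp : number of respondents
    The sampling CI for p_resp is widened by the maximum possible
    nonresponse bias: all nonrespondents negative (lower bound)
    versus all nonrespondents positive (upper bound).
    """
    se = z * sqrt(p_resp * (1 - p_resp) / n_resp)
    lower = max(0.0, p_resp - se) * r
    upper = min(1.0, p_resp + se) * r + (1 - r)
    return lower, upper
```

For example, a 2% respondent prevalence with a 60% response rate yields a conservative interval that is far wider than the conventional CI, which is precisely the uncertainty the CUNCI is meant to make visible.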
Efficiency is central to understanding the communicative and cognitive underpinnings of language. However, efficiency management is a complex mechanism in which different efficiency effects (articulatory, processing, and planning ease; mental accessibility; informativity; online and offline efficiency effects) conspire to yield the coding of linguistic signs. While we do not yet fully understand how these different effects interact, we argue that universal attractors are an important component of any dynamic theory of efficiency that aims to predict efficiency effects across languages. Attractors are defined as universal states around which language evolution revolves. Methodologically, we approach efficiency from a cross-linguistic perspective on the basis of a world-wide sample of 383 languages from 53 families, balancing all six macro-areas (Eurasia, North and South America, Australia, Africa, and Oceania). We explore the grammatical domain of verbal person-number subject indexes. We claim that there is an attractor state in this domain towards which languages tend to develop and which they tend not to leave if they happen to comply with the attractor in earlier stages of their evolution. The attractor is characterized by different lengths for each person and number combination, structured along Zipf's predictions. Moreover, the attractor strongly prefers non-compositional, cumulative coding of person and number. On the basis of these and other properties of the attractor, we conclude that two efficiency pressures are most powerful in this domain: the strive towards less processing effort and towards less articulatory effort. The latter, however, is overridden by the pressure towards constant information flow. The strive towards lower lexicon complexity and memory costs is a weaker efficiency pressure for this grammatical category due to its order of frequency.
Stochastic models based on random diffusivities, such as the diffusing-diffusivity approach, are popular concepts for the description of non-Gaussian diffusion in heterogeneous media. Studies of these models typically focus on the moments and the displacement probability density function. Here we develop the complementary power spectral description for a broad class of random-diffusivity processes. In our approach we cater for typical single particle tracking data in which a small number of trajectories with finite duration are garnered. Apart from the diffusing-diffusivity model we study a range of previously unconsidered random-diffusivity processes, for which we obtain exact forms of the probability density function. These new processes are different versions of jump processes as well as functionals of Brownian motion. The resulting behaviour subtly depends on the specific model details. Thus, the central part of the probability density function may be Gaussian or non-Gaussian, and the tails may assume Gaussian, exponential, log-normal, or even power-law forms. For all these models we derive analytically the moment-generating function for the single-trajectory power spectral density. We establish the generic 1/f²-scaling of the power spectral density as function of frequency in all cases. Moreover, we establish the probability density for the amplitudes of the random power spectral density of individual trajectories. The latter functions reflect the very specific properties of the different random-diffusivity models considered here. Our exact results are in excellent agreement with extensive numerical simulations.
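The generic 1/f²-scaling of the single-trajectory power spectral density can be illustrated on the simplest member of this family, ordinary Brownian motion. This is a hedged numerical sketch using a plain periodogram, not the paper's analytical moment-generating-function results; the normalisation and parameter values are illustrative.

```python
import numpy as np

def single_traj_psd(x, dt):
    """Periodogram of one finite trajectory: S(f) ~ |FFT(x)|^2 * dt / N.
    Returns positive frequencies only (zero-frequency bin dropped)."""
    n = x.size
    freqs = np.fft.rfftfreq(n, d=dt)
    spec = (np.abs(np.fft.rfft(x)) ** 2) * dt / n
    return freqs[1:], spec[1:]

rng = np.random.default_rng(2)
x = np.cumsum(rng.standard_normal(2**14))   # one ordinary Brownian path
f, S = single_traj_psd(x, dt=1.0)

# Fit the log-log slope in the low-frequency regime, where the
# 1/f^2 behaviour is expected to hold for Brownian-type motion.
mask = f < 0.05
slope = np.polyfit(np.log(f[mask]), np.log(S[mask]), 1)[0]
```

A single-trajectory periodogram is noisy (its amplitude fluctuates strongly from run to run, which is exactly the amplitude statistics the paper characterises), but the fitted slope should scatter around -2.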
We analyze historical data of stock-market prices for multiple financial indices using the concept of delay-time averaging for the financial time series (FTS). The region of validity of our recent theoretical predictions [Cherstvy A G et al 2017 New J. Phys. 19 063045] for the standard and delayed time-averaged mean-squared 'displacements' (TAMSDs) of the historical FTS is extended to all lag times. As the first novel element, we perform extensive computer simulations of the stochastic differential equation describing geometric Brownian motion (GBM) which demonstrate a quantitative agreement with the analytical long-term price-evolution predictions in terms of the delayed TAMSD (for all stock-market indices in crisis-free times). Secondly, we present a robust procedure of determination of the model parameters of GBM via fitting the features of the price-evolution dynamics in the FTS for stocks and cryptocurrencies. The employed concept of single-trajectory-based time averaging can serve as a predictive tool (proxy) for a mathematically based assessment and rationalization of probabilistic trends in the evolution of stock-market prices.
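A minimal sketch of the simulation side of this approach, assuming the standard exact discretisation of GBM and the usual definition of the single-trajectory time-averaged MSD; the drift and volatility values are illustrative placeholders, not fitted market parameters, and this is not the authors' code.

```python
import numpy as np

def simulate_gbm(mu, sigma, x0, dt, n_steps, rng):
    """Exact discretisation of geometric Brownian motion
    dX = mu*X dt + sigma*X dW, starting at x0."""
    incr = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    return x0 * np.exp(np.concatenate(([0.0], np.cumsum(incr))))

def tamsd(x, lag):
    """Single-trajectory time-averaged mean-squared 'displacement'
    at a given lag (in steps)."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

rng = np.random.default_rng(0)
# Ten "years" of daily prices with illustrative parameters.
price = simulate_gbm(mu=0.05, sigma=0.2, x0=1.0, dt=1 / 252, n_steps=2520, rng=rng)
lags = [1, 5, 20, 60]
curve = [tamsd(price, lag) for lag in lags]
```

Fitting such TAMSD curves computed from historical index data, rather than from a simulation, is the essence of the parameter-determination procedure described above.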
Universitat Politècnica de València's Experience with edX MOOC Initiatives During the COVID Lockdown
(2021)
In March 2020, when massive lockdowns started to be enforced around the world to contain the spread of the COVID-19 pandemic, edX launched two initiatives to help students around the world by providing free certificates for its courses: RAP, for member institutions, and OCE, for any accredited academic institution. In this paper we analyze how Universitat Politècnica de València (UPV) contributed its courses to both initiatives, providing almost 14,000 free certificate codes in total, and how UPV used the RAP initiative as a customer, describing the mechanism used to distribute more than 22,000 codes for free certificates to more than 7,000 UPV community members, which led to the achievement of more than 5,000 free certificates. We also comment on the results of a post-initiative survey answered by 1,612 UPV members about 3,241 edX courses, in which they reported a satisfaction of 4.69 out of 5 with the initiative.
A detailed investigation of the energy levels of perylene-3,4,9,10-tetracarboxylic tetraethylester as a representative compound for the whole family of perylene esters was performed. Electrochemical measurements revealed that one oxidation and two reductions take place. The bandgaps determined via the electrochemical approach are in good agreement with the optical bandgap obtained from the absorption spectra via a Tauc plot. In addition, absorption spectra as a function of the electrochemical potential were the basis for extensive quantum-chemical calculations of the neutral, monoanionic, and dianionic molecules. For this purpose, calculations based on density functional theory were compared with post-Hartree-Fock methods, and the CAM-B3LYP functional proved to be the most reliable choice for the calculation of absorption spectra. Furthermore, spectral features found experimentally could be reproduced with vibronic calculations, which allowed us to understand their origins. In particular, the two lowest-energy absorption bands of the anion are not caused by absorption of two distinct electronic states, as might have been expected from vertical excitation calculations; rather, both states exhibit a strong vibronic progression resulting in contributions to both bands.
Anomalous diffusion or, more generally, anomalous transport, with nonlinear dependence of the mean-squared displacement on the measurement time, is ubiquitous in nature. It has been observed in processes ranging from microscopic movement of molecules to macroscopic, large-scale paths of migrating birds. Using data from multiple empirical systems, spanning 12 orders of magnitude in length and 8 orders of magnitude in time, we employ a method to detect the individual underlying origins of anomalous diffusion and transport in the data. This method decomposes anomalous transport into three primary effects: long-range correlations (“Joseph effect”), fat-tailed probability density of increments (“Noah effect”), and nonstationarity (“Moses effect”). We show that such a decomposition of real-life data allows us to infer nontrivial behavioral predictions and to resolve open questions in the fields of single-particle tracking in living cells and movement ecology.
The Big Five personality traits play a major role in student achievement. As such, there is consistent evidence that students who are more conscientious receive better teacher-assigned grades in secondary school. However, research often does not support the claim that students who are more conscientious similarly achieve higher scores in domain-specific standardized achievement tests. Based on the Invest-and-Accrue Model, we argue that conscientiousness explains to some extent why certain students receive better grades despite similar academic accomplishments (i.e., achieving similar scores in domain-specific standardized achievement tests). Therefore, the present study examines to what extent the relationship between student personality and teacher-assigned grades consists of direct as opposed to indirect associations (via subject-specific standardized test scores). We used a representative sample of 14,710 ninth-grade students to estimate these direct and indirect pathways in mathematics and German. Structural equation models showed that test scores explained between 8 and 11% of the variance in teacher-assigned grades in mathematics and German. The Big Five personality traits in students additionally explained between 8 and 10% of the variance in grades. Finally, the personality-grade relationship consisted of direct (0.02 ≤ |β| ≤ 0.27) and indirect associations via test scores (0.01 ≤ |β| ≤ 0.07). Conscientiousness explained discrepancies between teacher-assigned grades and students' scores in domain-specific standardized tests to a greater extent than any of the other Big Five personality traits. Our findings suggest that students who are more conscientious may invest more effort to accomplish classroom goals, but fall short of mastery.
A better understanding of precipitation dynamics in the Indian subcontinent is required since India's society depends heavily on reliable monsoon forecasts. We introduce a non-linear, multiscale approach, based on wavelets and event synchronization, for unravelling teleconnection influences on precipitation. We consider those climate patterns with the highest relevance for Indian precipitation. Our results suggest significant influences which are not well captured by only the wavelet coherence analysis, the state-of-the-art method in understanding linkages at multiple timescales. We find substantial variation across India and across timescales. In particular, El Niño–Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD) mainly influence precipitation in the south-east at interannual and decadal scales, respectively, whereas the North Atlantic Oscillation (NAO) has a strong connection to precipitation, particularly in the northern regions. The effect of the Pacific Decadal Oscillation (PDO) stretches across the whole country, whereas the Atlantic Multidecadal Oscillation (AMO) influences precipitation particularly in the central arid and semi-arid regions. The proposed method provides a powerful approach for capturing the dynamics of precipitation and, hence, helps improve precipitation forecasting.
Salt marshes filter pollutants, protect coastlines against storm surges, and sequester carbon, yet are under threat from sea level rise and anthropogenic modification. The sustained existence of the salt marsh ecosystem depends on the topographic evolution of marsh platforms. Quantifying marsh platform topography is vital for improving the management of these valuable landscapes. The determination of platform boundaries currently relies on supervised classification methods requiring near-infrared data to detect vegetation, or demands labour-intensive field surveys and digitisation. We propose a novel, unsupervised method to reproducibly isolate salt marsh scarps and platforms from a digital elevation model (DEM), referred to as Topographic Identification of Platforms (TIP). Field observations and numerical models show that salt marshes mature into subhorizontal platforms delineated by subvertical scarps. Based on this premise, we identify scarps as lines of local maxima on a slope raster, then fill landmasses from the scarps upward, thus isolating mature marsh platforms. We test the TIP method using lidar-derived DEMs from six salt marshes in England with varying tidal ranges and geometries, for which topographic platforms were manually isolated from tidal flats. Agreement between manual and unsupervised classification exceeds 94% for DEM resolutions of 1 m, with all but one site maintaining an accuracy superior to 90% for resolutions up to 3 m. For resolutions of 1 m, platforms detected with the TIP method are comparable in surface area to digitised platforms and have similar elevation distributions. We also find that our method allows for the accurate detection of local block failures as small as 3 times the DEM resolution. Detailed inspection reveals that although tidal creeks were digitised as part of the marsh platform, unsupervised classification categorises them as part of the tidal flat, causing an increase in false negatives and overall platform perimeter. 
This suggests our method may benefit from combination with existing creek detection algorithms. Fallen blocks and high tidal flat portions, associated with potential pioneer zones, can also lead to differences between our method and supervised mapping. Although pioneer zones prove difficult to classify using a topographic method, we suggest that these transition areas should be considered when analysing erosion and accretion processes, particularly in the case of incipient marsh platforms. Ultimately, we have shown that unsupervised classification of marsh platforms from high-resolution topography is possible and sufficient to monitor and analyse topographic evolution.
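The TIP premise (scarps as lines of steep slope delimiting subhorizontal platforms) can be conveyed with a deliberately simplified toy sketch. This is not the published algorithm: the real method extracts local slope maxima and fills landmasses upward from the scarps, whereas the illustration below merely thresholds a slope raster and classifies by scarp-crest elevation, with an arbitrary quantile threshold as a stand-in assumption.

```python
import numpy as np

def tip_classify(dem, slope_quantile=0.9):
    """Toy illustration of the TIP idea on a DEM array:
    flag steep cells on the slope raster as 'scarp', then treat cells
    at or above the scarp crest elevation as marsh platform."""
    gy, gx = np.gradient(dem)             # elevation gradients per axis
    slope = np.hypot(gx, gy)              # slope magnitude raster
    scarp = slope >= np.quantile(slope, slope_quantile)
    crest = dem[scarp].max() if scarp.any() else dem.max()
    platform = dem >= crest
    return scarp, platform

# Synthetic DEM: tidal flat at 0 m, platform at 2 m, sharp scarp between.
dem = np.zeros((20, 20))
dem[:, 10:] = 2.0
scarp, platform = tip_classify(dem)
```

Even this crude version separates flat from platform on an idealised step profile; the published method's slope-maxima tracing and upward filling are what make the separation robust on real, noisy lidar DEMs.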
We extend the scope of European palaeogenomics by sequencing the genomes of Late Upper Palaeolithic (13,300 years old, 1.4-fold coverage) and Mesolithic (9,700 years old, 15.4-fold) males from western Georgia in the Caucasus and a Late Upper Palaeolithic (13,700 years old, 9.5-fold) male from Switzerland. While we detect Late Palaeolithic-Mesolithic genomic continuity in both regions, we find that Caucasus hunter-gatherers (CHG) belong to a distinct ancient clade that split from western hunter-gatherers ~45 kya, shortly after the expansion of anatomically modern humans into Europe, and from the ancestors of Neolithic farmers ~25 kya, around the Last Glacial Maximum. CHG genomes significantly contributed to the Yamnaya steppe herders who migrated into Europe ~3,000 BC, supporting a formative Caucasus influence on this important Early Bronze Age culture. CHG left their imprint on modern populations from the Caucasus and also central and south Asia, possibly marking the arrival of Indo-Aryan languages.
Marked along-strike changes in stratigraphy, mountain belt morphology, basement exhumation, and deformation styles characterize the Andean retroarc; these changes have previously been related to spatiotemporal variations in the subduction angle. We modeled new apatite fission track and apatite (U-Th-Sm)/He data from nine ranges located between 26°S and 28°S. Using new and previously published data, we constructed a Cretaceous to Pliocene paleogeographic model that delineates a four-stage tectonic evolution: extensional tectonics during the Cretaceous (120-75 Ma), the formation of a broken foreland basin between 55 and 30 Ma, reheating due to burial beneath sedimentary rocks (18-13 Ma), and deformation, exhumation, and surface uplift during the Late Miocene and the Pliocene (13-3 Ma). Our model highlights how preexisting upper plate structures control the deformation patterns of broken foreland basins. Because retroarc deformation predates flat-slab subduction, we propose that slab anchoring may have been the precursor of Eocene-Oligocene compression in the Andean retroarc. Our model challenges models which consider broken foreland basins and retroarc deformation in the NW Argentinian Andes to be directly related to Miocene flat subduction.
This introductory essay is structured as follows: first, several forms of urbanisation (I.) are introduced and the processes of urbanisation and dis-urbanisation (II.) are defined. Then four fields of law which are deeply affected by urbanisation are brought into focus: local government law (III.), public building law (IV.), civil service law (V.), and public finance law (VI.). Afterwards, the effects of the corona pandemic on these fields of law are contemplated, taking account of the process of urbanisation (VII.). Finally, the main results are summarised (VIII.).
Background The use of iodine-based contrast agents entails the risk of contrast-induced nephropathy (CIN). Radiocontrast agents are the third most common cause of nephropathy among hospitalized patients, accounting for 11-12% of cases. CIN is associated with clinically significant consequences, including increased morbidity, prolonged hospitalization, increased risk of complications, potential need for dialysis, and increased mortality rate. The number of in-hospital examinations using iodine-based contrast media has been increasing significantly over the last decade. In order to protect patients from possible complications of such examinations, new biomarkers are needed that are able to predict the risk of contrast-induced nephropathy. Urinary and plasma cyclic guanosine monophosphate (cGMP) concentrations are influenced by renal function. Urinary cGMP is primarily of renal cellular origin. Therefore, we assessed whether urinary cGMP concentration may predict major adverse renal events (MARE) after contrast media exposure during coronary angiography. Methods Urine samples were prospectively collected from non-randomized consecutive patients with either diabetes or preexisting impaired kidney function receiving intra-arterial contrast medium (CM) for emergent or elective coronary angiography at the Charite Campus Mitte, University Hospital Berlin. Urinary cGMP concentration in spot urine was analyzed 24 hours after CM exposure. Patients were followed up over 90 days for occurrence of death, initiation of dialysis, doubling of plasma creatinine concentration, or MARE. Results In total, 289 consecutive patients were included into the study.
Urine cGMP/creatinine ratio 24 hours before CM exposure, expressed as mean +/- SD, was predictive for the need of dialysis (no dialysis: 89.77 +/- 92.85 µM/mM, n = 277; need for dialysis: 140.3 +/- 82.90 µM/mM, n = 12, p = 0.008), death (no death during follow-up: 90.60 +/- 92.50 µM/mM, n = 280; death during follow-up: 169.88 +/- 81.52 µM/mM, n = 9; p = 0.002), and the composite endpoint MARE (no MARE: 86.02 +/- 93.17 µM/mM, n = 271; MARE: 146.64 +/- 74.68 µM/mM, n = 18, p < 0.001) during the follow-up of 90 days after contrast media application. The cGMP/creatinine ratio stayed significantly increased at values exceeding 120 µM/mM in patients who developed MARE, required dialysis, or died. Conclusions Urinary cGMP/creatinine ratio ≥ 120 µM/mM before CM exposure is a promising biomarker for the need of dialysis and all-cause mortality 90 days after CM exposure in patients with preexisting renal impairment or diabetes.
URSA-PQ
(2020)
We present a highly flexible and portable instrument to perform pump-probe spectroscopy with an optical and an X-ray pulse in the gas phase. The so-called URSA-PQ (German for 'Ultraschnelle Röntgenspektroskopie zur Abfrage der Photoenergiekonversion an Quantensystemen', Engl. 'ultrafast X-ray spectroscopy for probing photoenergy conversion in quantum systems') instrument is equipped with a magnetic bottle electron spectrometer (MBES) and tools to characterize the spatial and temporal overlap of optical and X-ray laser pulses. Its adherence to the CAMP instrument dimensions allows for a wide range of sample sources as well as other spectrometers to be included in the setup. We present the main design and technical features of the instrument. The MBES performance was evaluated using Kr M4,5NN Auger lines from backfilled Kr gas, with an energy resolution ΔE/E ≅ 1/40 in the integrating operative mode. The time resolution of the setup at FLASH 2 FL 24 has been characterized with the help of an experiment on 2-thiouracil that is inserted via the instrument's capillary oven. We find a time resolution of 190 fs using the molecular 2p photoline shift and attribute this to different origins in the UV-pump and X-ray-probe setup.
As structural membrane components and signaling effector molecules sphingolipids influence a plethora of host cell functions, and by doing so also the replication of viruses. Investigating the effects of various inhibitors of sphingolipid metabolism in primary human peripheral blood lymphocytes (PBL) and the human B cell line BJAB we found that not only the sphingosine kinase (SphK) inhibitor SKI-II, but also the acid ceramidase inhibitor ceranib-2 efficiently inhibited measles virus (MV) replication. Virus uptake into the target cells was not grossly altered by the two inhibitors, while titers of newly synthesized MV were reduced by approximately 1 log (90%) in PBL and 70-80% in BJAB cells. Lipidomic analyses revealed that in PBL SKI-II led to increased ceramide levels, whereas in BJAB cells ceranib-2 increased ceramides. SKI-II treatment decreased sphingosine-1-phosphate (S1P) levels in PBL and BJAB cells. Furthermore, we found that MV infection of lymphocytes induced a transient (0.5-6 h) increase in S1P, which was prevented by SKI-II. Investigating the effect of the inhibitors on the metabolic (mTORC1) activity we found that ceranib-2 reduced the phosphorylation of p70 S6K in PBL, and that both inhibitors, ceranib-2 and SKI-II, reduced the phosphorylation of p70 S6K in BJAB cells. As mTORC1 activity is required for efficient MV replication, this effect of the inhibitors is one possible antiviral mechanism. In addition, reduced intracellular S1P levels affect a number of signaling pathways and functions including Hsp90 activity, which was reported to be required for MV replication. Accordingly, we found that pharmacological inhibition of Hsp90 with the inhibitor 17-AAG strongly impaired MV replication in primary PBL. 
Thus, our data suggest that treatment of lymphocytes with both acid ceramidase and SphK inhibitors impairs MV replication by affecting a number of cellular activities, including mTORC1 and Hsp90, which alter the metabolic state of the cells, causing a hostile environment for the virus.
Background
Artificial intelligence (AI) is one of the most promising areas in medicine with many possibilities for improving health and wellness. Already today, diagnostic decision support systems may help patients to estimate the severity of their complaints. This fictional case study aimed to test the diagnostic potential of an AI algorithm for common sports injuries and pathologies.
Methods
Based on a literature review and clinical expert experience, five fictional “common” cases of acute, and subacute injuries or chronic sport-related pathologies were created: Concussion, ankle sprain, muscle pain, chronic knee instability (after ACL rupture) and tennis elbow. The symptoms of these cases were entered into a freely available chatbot-guided AI app and its diagnoses were compared to the pre-defined injuries and pathologies.
Results
A mean of 25–36 questions were asked by the app per patient case, with optional explanations of certain questions or illustrative photos available on demand. It was stressed that the symptom analysis would not replace a doctor's consultation. A 23-yr-old male patient case with a mild concussion was correctly diagnosed. An ankle sprain of a 27-yr-old female without ligament or bony lesions was also detected, and an ER visit was suggested. Muscle pain in the thigh of a 19-yr-old male was correctly diagnosed. In the case of a 26-yr-old male with chronic ACL instability, the algorithm did not sufficiently cover the chronic aspect of the pathology, but the given recommendation of seeing a doctor would have helped the patient. Finally, the chronic epicondylitis of a 41-yr-old male was correctly detected.
Conclusions
All chosen injuries and pathologies were either correctly diagnosed or at least tagged with the right advice of when it is urgent for seeking a medical specialist. However, the quality of AI-based results could presumably depend on the data-driven experience of these programs as well as on the understanding of their users. Further studies should compare existing AI programs and their diagnostic accuracy for medical injuries and pathologies.
In his essay, Mel Ainscow looks at inclusion and equity from an international perspective and makes suggestions on how to develop inclusive education in a ‘whole-system approach’. After discussing different conceptions of inclusion and equity, he describes international policies which address them. From this international macro-level, Ainscow zooms in to the meso-level of the school and its immediate environment, defining dimensions to be considered for an inclusive school development. One of these dimensions is the ‘use of evidence’. In my comment, I want to focus on this dimension and discuss its scope and the potential to apply it in inclusive education development. As a first and important precondition, Ainscow explains that different circumstances lead to different linguistic uses of the term ‘inclusive education’. Thus, the term ‘inclusive education’ does not refer to an identical set of objectives across countries, and neither does the term ‘equity’.
User Experience (UX) describes the holistic experience of a user before, during, and after interaction with a platform, product, or service. UX adds value and attractiveness beyond mere functionality and is therefore highly relevant for firms. The increased interest in UX has produced a vast amount of scholarly research since 1983; the research field is therefore complex and scattered. Conducting a bibliometric analysis, we aim to structure the field quantitatively and at a rather abstract level. We employed citation analyses, co-citation analyses, and content analyses to evaluate the productivity and impact of extant research. We suggest that future research should focus more on business- and management-related topics.
The main aim of this article is to explore how learning analytics and synchronous collaboration could improve course completion and learner outcomes in MOOCs, which traditionally have been delivered asynchronously. Based on our experience with developing BigBlueButton, a virtual classroom platform that provides educators with live analytics, this paper explores three scenarios with business-focused MOOCs to improve outcomes and strengthen learned skills.
A commonly used approach to parameter estimation in computational models is the so-called grid search procedure: the entire parameter space is searched in small steps to determine the parameter value that provides the best fit to the observed data. This approach has several disadvantages: first, it can be computationally very expensive; second, only one optimal point value of the parameter is reported as the best-fit value, so we cannot quantify our uncertainty about the parameter estimate. In the main journal article that this methods article accompanies (Jager et al., 2020, Interference patterns in subject-verb agreement and reflexives revisited: A large-sample study, Journal of Memory and Language), we carried out parameter estimation using Approximate Bayesian Computation (ABC), a Bayesian approach that allows us to quantify our uncertainty about the parameter values given the data. This approach has the further advantage that it allows us to generate both prior and posterior predictive distributions of reading times from the cue-based retrieval model of Lewis and Vasishth (2005). In short: instead of the conventional grid search, we use ABC for parameter estimation in the [4] model, with the advantage that the uncertainty of the parameter can be quantified.
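The core idea of ABC rejection sampling described above can be sketched in a few lines. The sketch below is a generic illustration, not the authors' actual simulator: the uniform prior range, the Gaussian stand-in simulator, the observed summary value, and the tolerance are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed summary statistic (e.g., a mean reading time in ms).
observed_mean_rt = 420.0

def simulate_mean_rt(theta, n=200):
    """Stand-in simulator: reading times whose mean depends on theta."""
    return rng.normal(loc=theta, scale=50.0, size=n).mean()

def abc_rejection(n_draws=20000, tolerance=5.0):
    """ABC rejection sampling: keep prior draws whose simulated summary
    lies within `tolerance` of the observed one; the kept draws
    approximate the posterior over theta."""
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(300.0, 600.0)  # uniform prior on theta
        if abs(simulate_mean_rt(theta) - observed_mean_rt) < tolerance:
            accepted.append(theta)
    return np.array(accepted)

posterior = abc_rejection()
# Unlike a grid-search point estimate, the accepted sample lets us
# quantify uncertainty, e.g. via a 95% credible interval.
lo, hi = np.quantile(posterior, [0.025, 0.975])
```

Tightening the tolerance trades acceptance rate against approximation accuracy; in practice richer summary statistics than a single mean would be matched.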
Background: The COVID-19 pandemic has highlighted the importance of scientific endeavors. The goal of this systematic review is to evaluate the quality of the research on physical activity (PA) behavior change and its potential to contribute to policy-making processes in the early days of COVID-19 related restrictions.
Methods: Following PRISMA guidelines, we conducted a systematic review of the methodological quality of articles on PA behavior change, identified via PubMed and Web of Science, that were published within 365 days after COVID-19 was declared a pandemic by the World Health Organization (WHO). Items from the JBI checklist and the AXIS tool were used for additional risk-of-bias assessment. Evidence mapping was used to visualize the main results. Conclusions about the significance of published articles are based on hypotheses on PA behavior change in the light of the COVID-19 pandemic.
Results: Among the 1,903 identified articles, 36% were opinion pieces, 53% empirical studies, and 9% reviews. Of the 332 studies included in the systematic review, 213 used self-report measures to recollect prepandemic behavior in often small convenience samples. Most focused on changes in PA volume, whereas changes in PA types were rarely measured. The majority had methodological reporting flaws. Few had very large samples with objective measures and repeated-measures designs (before and during the pandemic). In addition to the expected decline in PA duration, these studies show that many of those who were active prepandemic continued to be active during the pandemic.
Conclusions: Research responded quickly at the onset of the pandemic. However, most of the studies lacked robust methodology, and PA behavior change data lacked the accuracy needed to guide policy makers. To improve the field, we propose the implementation of longitudinal cohort studies by larger organizations such as WHO to ease access to data on PA behavior, and suggest those institutions set clear standards for this research. Researchers need to ensure a better fit between the measurement method and the construct being measured, and use both objective and subjective measures where appropriate to complement each other and provide a comprehensive picture of PA behavior.
In this report we describe, for the first time, Cy5-dUTP labelling of recombinase polymerase amplification (RPA) products directly during the amplification process. Nucleic acid amplification techniques, especially polymerase chain reaction as well as various isothermal amplification methods such as RPA, have become promising tools for the detection of pathogens and target-specific genes. RPA in particular has become popular in point-of-care diagnostics because of its speed and sensitivity, but it requires pre-labelled primers or probes for the subsequent detection of the amplicons. To overcome this disadvantage, we labelled RPA amplicons with Cy5-dUTP without the need for pre-labelled primers. The amplification results for various multiple antibiotic resistance genes indicate great potential as a flexible and promising tool with highly specific and sensitive detection of the target genes. After determining an appropriate ratio of 1% Cy5-dUTP to 99% unlabelled dTTP, we were able to detect the bla(CTX-M15) gene in less than 1.6E-03 ng of genomic DNA, corresponding to approximately 200 cfu of Escherichia coli cells, in only 40 min of amplification time.
Many institutions struggle to tap into the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the usage of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage. The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated if the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. 
We found that a moving average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
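The two core operations described above, the quality-weighted bias estimate per SR overpass and the moving-average interpolation of the bias time series, can be sketched as follows. The function names, the simple centered window, and the example values are illustrative, not the study's actual implementation.

```python
import numpy as np

def weighted_bias(gr_dbz, sr_dbz, quality):
    """Calibration bias (dB) as the quality-weighted mean GR-SR difference."""
    diff = np.asarray(gr_dbz, dtype=float) - np.asarray(sr_dbz, dtype=float)
    return float(np.average(diff, weights=np.asarray(quality, dtype=float)))

def interpolate_bias(bias_series, window=3):
    """Centered moving average over per-overpass bias estimates, usable to
    correct GR data between SR overpasses (edges are damped by the zero
    padding implied by mode='same')."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(bias_series, dtype=float), kernel, mode="same")

# Hypothetical matched reflectivities: low-quality matches (e.g., attenuated
# or beam-blocked bins) receive small weights and barely move the estimate.
bias = weighted_bias([2.0, 2.0, 4.0], [0.0, 0.0, 0.0], [1.0, 1.0, 2.0])
```

In the study's setting, the quality index would combine beam blockage and path-integrated attenuation; here it is just an abstract weight.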
MOOCs have been produced using a variety of instructional design approaches and frameworks. This paper presents experiences from the instructional approach based on the ADDIE model applied to designing and producing MOOCs in the Erasmus+ strategic partnership on Open Badge Ecosystem for Research Data Management (OBERRED). Specifically, this paper describes the case study of the production of the MOOC “Open Badges for Open Science”, delivered on the European MOOC platform EMMA. The key goal of this MOOC is to help learners develop a capacity to use Open Badges in the field of Research Data Management (RDM). To produce the MOOC, the ADDIE model was applied as a generic instructional design model and a systematic approach to design and development, following the five design phases: Analysis, Design, Development, Implementation, and Evaluation. This paper outlines the MOOC production, including the methods, templates, and tools used in this process, such as the interactive micro-content created with H5P in the form of Open Educational Resources and the digital credentials created with Open Badges and issued to MOOC participants upon successful completion of MOOC levels. The paper also outlines the results of a qualitative evaluation, which applied the cognitive walkthrough methodology to elicit user requirements. The paper ends with conclusions about the pros and cons of using the ADDIE model in MOOC production and formulates recommendations for further work in this area.
Scaling agriculture to the globally rising population demands new approaches for future crop production such as multilayer and multitrophic indoor farming. Moreover, there is a current trend towards sustainable local solutions for aquaculture and saline agriculture. In this context, halophytes are becoming increasingly important for research and the food industry. As Salicornia europaea is a highly salt-tolerant obligate halophyte that can be used as a food crop, indoor cultivation with saline water is of particular interest. Therefore, finding a sustainable alternative to the use of seawater in non-coastal regions is crucial. Our goal was to determine whether natural brines, which are widely distributed and often available in inland areas, provide an alternative water source for the cultivation of saline organisms. This case study investigated the potential use of natural brines for the production of S. europaea. In the control group, which reflects the optimal growth conditions, fresh weight was increased, but there was no significant difference between the treatment groups comparing natural brines with artificial sea water. A similar pattern was observed for carotenoids and chlorophylls. Individual components showed significant differences. However, within treatments, there were mostly no changes. In summary, we showed that the influence of the different chloride concentrations was higher than the salt composition. Moreover, nutrient-enriched natural brine was demonstrated to be a suitable alternative for cultivation of S. europaea in terms of yield and nutritional quality. Thus, the present study provides the first evidence for the future potential of natural brine waters for the further development of aquaculture systems and saline agriculture in inland regions.
Satisfaction and frustration of the needs for autonomy, competence, and relatedness, as assessed with the 24-item Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS), have been found to be crucial indicators of individuals’ psychological health. To increase the usability of this scale within a clinical and health services research context, we aimed to validate a German short version (12 items) of this scale in individuals with depression, including the examination of the relations from need frustration and need satisfaction to ill-being and quality of life (QOL). This cross-sectional study involved 344 adults diagnosed with depression (Mage (SD) = 47.5 years (11.1); 71.8% females). Confirmatory factor analyses indicated that the short version of the BPNSFS was not only reliable, but also fitted a six-factor structure (i.e., satisfaction/frustration × type of need). Subsequent structural equation modeling showed that need frustration related positively to indicators of ill-being and negatively to QOL. Surprisingly, need satisfaction did not predict differences in ill-being or QOL. The short form of the BPNSFS represents a practical instrument to measure need satisfaction and frustration in people with depression. Further, the results support recent evidence on the importance of especially need frustration in the prediction of psychopathology.
In order to improve a recently established cell-based assay to assess the potency of botulinum neurotoxin, neuroblastoma-derived SiMa cells and induced pluripotent stem-cells (iPSC) were modified to incorporate the coding sequence of a reporter luciferase into a genetic safe harbor utilizing CRISPR/Cas9. A novel method, the double-control quantitative copy number PCR (dc-qcnPCR), was developed to detect off-target integrations of donor DNA. The donor DNA insertion success rate and targeted insertion success rate were analyzed in clones of each cell type. The dc-qcnPCR reliably quantified the copy number in both cell lines. The probability of incorrect donor DNA integration was significantly increased in SiMa cells in comparison to the iPSCs. This can possibly be explained by the lower bundled relative gene expression of a number of double-strand repair genes (BRCA1, DNA2, EXO1, MCPH1, MRE11, and RAD51) in SiMa clones than in iPSC clones. The dc-qcnPCR offers an efficient and cost-effective method to detect off-target CRISPR/Cas9-induced donor DNA integrations.
Background
Benefit finding, defined as perceiving positive life changes resulting from adversity and negative life stressors, has gained growing attention in the context of chronic illness. The study aimed to examine the psychometric properties of the Benefit Finding Scale for Children (BFSC) in a sample of German youth facing chronic conditions.
Methods
A sample of adolescents with various chronic conditions (N = 304; 12–21 years) completed the 10-item BFSC along with measures of intra- and interpersonal resources, coping strategies, and health-related quality of life (hrQoL). The total sample was randomly divided into two subsamples for conducting exploratory and confirmatory factor analyses (EFA/CFA).
Results
EFA revealed that the BFSC scores had a one-dimensional factor structure. CFA verified the one-dimensional factor structure with an acceptable fit. The BFSC exhibited acceptable internal consistency (α = 0.87 – 0.88) and construct validity. In line with our hypotheses, benefit finding was positively correlated with optimism, self-esteem, self-efficacy, sense of coherence, and support seeking. There were no correlations with avoidance, wishful thinking, emotional reaction, and hrQoL. Sex differences in benefit finding were not consistent across subsamples. Benefit finding was also positively associated with age, disease severity, and social status.
Conclusions
The BFSC is a psychometrically sound instrument to assess benefit finding in adolescents with chronic illness and may facilitate further research on positive adaptation processes in adolescents, irrespective of their specific diagnosis.
Background
Depression is one of the key factors contributing to difficulties in one’s ability to work, and serves as one of the major reasons why employees apply for psychotherapy and receive insurance subsidization of treatments. Hence, a growing number of studies rely on workability assessment scales as their primary outcome measure. The Work and Social Adjustment Scale (WSAS) has been documented as one of the most psychometrically reliable and valid tools developed specifically to assess workability and social functioning in patients with mental health problems. Yet, the application of the WSAS in Germany has been limited due to the absence of a validated German-language version. The objective of the present study was to translate the WSAS, a brief and easily administrable tool, into German and to test its psychometric properties in a sample of adults with depression.
Methods
Two hundred seventy-seven patients (M = 48.3 years, SD = 11.1) with mild to moderately severe depression were recruited. A multistep translation from English into the German language was performed and the factorial validity, criterion validity, convergent validity, discriminant validity, internal consistency, and floor and ceiling effects were examined.
Results
The confirmatory factor analysis results confirmed the one-factor structure of the WSAS. Significant correlations with the WHODAS 2.0 questionnaire, a measure of functionality, demonstrated good convergent validity. Significant correlations with depression and quality of life demonstrated good criterion validity. The WSAS also demonstrated strong internal consistency (α = .89), and the absence of floor and ceiling effects indicated good sensitivity of the instrument.
Conclusions
The results of the present study demonstrated that the German version of the WSAS has good psychometric properties comparable to other international versions of this scale. The findings recommend a global assessment of psychosocial functioning with the sum score of the WSAS.
Introduction: General and, in particular, sport-specific testing is an integral aspect of performance optimization in artistic gymnastics. To date, however, only non-specific field tests have been used to assess endurance performance in artistic gymnastics (e.g., the Multistage Shuttle Run Test; Cooper's Test).
Methods: This study aimed to examine the validity of a new sport-specific endurance test in artistic gymnastics. Fourteen elite-level gymnasts (i.e., eight males and six females) participated in this study. The newly developed artistic gymnastics-specific endurance test (AGSET) was conducted on two different occasions seven days apart to determine its reliability. To assess the concurrent validity of the AGSET, participants performed the multistage shuttle run test (MSRT). Maximum oxygen uptake (VO2max) and respiratory exchange ratio (RER) were directly assessed using a portable gas analyzer system during both protocols. Additionally, the total time maintained (TTM) during the AGSET, maximum heart rate (HRmax), maximal aerobic speed (MAS), and blood lactate concentration (BLa) during the two protocols were collected.
Results: The main findings indicated that all variables derived from the AGSET (i.e., VO2max, MAS, HRmax, BLa, and RER) displayed very good relative (all intraclass correlation coefficients [ICC] > 0.90) and absolute (all typical errors of measurement [TEM] < 5%) reliability. Further, results showed that the ability of the AGSET to detect small changes in VO2max, MAS, BLa, and RER was good (smallest worthwhile change [SWC0.2] > TEM), except for HRmax (SWC0.2 < TEM). Additionally, results showed a nearly perfect association between the VO2max values derived from the AGSET and the MSRT (r = 0.985; coefficient of determination [R²] = 97%) with no statistically significant differences (p > 0.05). The mean bias ± 95% limits of agreement between the two protocols were 0.28 ± 0.55 ml·min⁻¹·kg⁻¹.
Discussion: The AGSET seems to present very good reliability and concurrent validity for assessing endurance performance in elite artistic gymnastics. In addition, the newly developed protocol presents a good ability to detect small changes in performance.
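The reliability statistics used above follow standard definitions: the typical error of measurement (TEM) is the standard deviation of test-retest differences divided by √2, and the smallest worthwhile change (SWC0.2) is 0.2 times the between-subject standard deviation. A minimal sketch with hypothetical VO2max values (the data below are invented, not the study's):

```python
import numpy as np

def typical_error(test, retest):
    """Absolute reliability: SD of test-retest differences / sqrt(2)."""
    diff = np.asarray(test, dtype=float) - np.asarray(retest, dtype=float)
    return diff.std(ddof=1) / np.sqrt(2)

def smallest_worthwhile_change(values, factor=0.2):
    """SWC: factor (0.2 for a 'small' effect) x between-subject SD."""
    return factor * np.asarray(values, dtype=float).std(ddof=1)

# Hypothetical VO2max values (ml·min⁻¹·kg⁻¹) for four athletes.
test = [50.0, 52.0, 48.0, 55.0]
retest = [50.5, 52.0, 48.5, 55.0]
tem = typical_error(test, retest)
swc = smallest_worthwhile_change(test)
# swc > tem indicates the test can detect small true changes,
# which is the criterion applied to the AGSET variables above.
```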
The Colorado Learning Attitudes about Science Survey (CLASS) is an instrument which is widely used in physics education to characterize students' attitudes toward physics and learning physics and compare them with those of experts. While CLASS has been extensively validated for use in the context of higher education institutions in the United States, there has been less information about its use with European students. We have studied the structural, content, and substantive aspects of validity of CLASS by first doing a confirmatory factor analysis of N = 642 sets of student answers from the University of Helsinki, Finland. The students represented a culturally and demographically different subset of university physics students than in previous studies. The confirmatory factor analysis used a 3-factor, 15-item factor structure as a starting point, and the resulting factor structure was similar to the original. Only minor modifications were needed for fit parameters to fall in the acceptable range. We explored the differences through student interviews and consultation with experts. With the exception of one item, these supported the new 14-item, 3-factor structure. The results show that the interpretations made from CLASS results are mostly transferable, and CLASS remains a useful instrument for a wide variety of populations.
Plastic pollution is an increasing environmental problem, but a comprehensive understanding of its effects in the environment is still missing. The wide variety of sizes, shapes, and polymer compositions of plastics impedes an adequate risk assessment. We investigated the effect of differently sized polystyrene beads (1-, 3-, and 6-µm; PS) and polyamide fragments (5–25 µm, PA), as well as non-plastic items such as silica beads (3-µm, SiO2), on the population growth, reproduction (egg ratio), and survival of two common aquatic microinvertebrates: the rotifer species Brachionus calyciflorus and Brachionus fernandoi. The MP treatments were combined with different food quantities (limiting and saturating food concentrations) and with food of different quality. We found variable fitness responses, with a significant effect of 3-µm PS on the population growth rate in both rotifer species depending on food quantity. An interaction between food quality and the MP treatments was found in the reproduction of B. calyciflorus. PA and SiO2 beads had no effect on fitness responses. This study provides further evidence of the indirect effects of MPs on planktonic rotifers and the importance of testing different environmental conditions that could influence the effect of MPs.
Genetic divergence and the frequency of hybridization are central for defining species delimitations, especially among cryptic species where morphological differences are merely absent. Rotifers are known for their high cryptic diversity and therefore are ideal model organisms to investigate such patterns. Here, we used the recently resolved Brachionus calyciflorus species complex to investigate whether previously observed between species differences in thermotolerance and gene expression are also reflected in their genomic footprint. We identified a Heat Shock Protein gene (HSP 40 kDa) which exhibits cross species pronounced sequence variation. This gene exhibits species-specific fixed sites, alleles, and sites putatively under positive selection. These sites are located in protein binding regions involved in chaperoning and may therefore reflect adaptive diversification. By comparing three genetic markers (ITS, COI, HSP 40 kDa), we revealed hybridization events between the cryptic species. The low frequency of introgressive haplotypes/alleles suggest a tight, but not fully impermeable boundary between the cryptic species.
With the growing size and use of night light time series from the Visible Infrared Imaging Radiometer Suite Day/Night Band (DNB), it is important to understand the stability of the dataset. All satellites observe differences in pixel values during repeat observations. In the case of night light data, these changes can be due to both environmental effects and changes in light emission. Here we examine the stability of individual locations of particular large scale light sources (e.g., airports and prisons) in the monthly composites of DNB data from April 2012 to September 2017. The radiances for individual pixels of most large light emitters are approximately normally distributed, with a standard deviation of typically 15-20% of the mean. Greenhouses and flares, however, are not stable sources. We observe geospatial autocorrelation in the monthly variations for nearby sites, while the correlation for sites separated by large distances is small. This suggests that local factors contribute most to the variation in the pixel radiances and furthermore that averaging radiances over large areas will reduce the total variation. A better understanding of the causes of temporal variation would improve the sensitivity of DNB to lighting changes.
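The per-pixel stability metric described above, the temporal standard deviation expressed as a percentage of the temporal mean, can be sketched as follows; the monthly radiance values are hypothetical, not actual DNB data.

```python
import numpy as np

def radiance_stability(monthly_radiance):
    """Temporal mean, SD, and relative SD (%) for one pixel's
    monthly-composite radiance series."""
    r = np.asarray(monthly_radiance, dtype=float)
    mean = r.mean()
    sd = r.std(ddof=1)
    return mean, sd, 100.0 * sd / mean

# Hypothetical monthly radiances for one large stable emitter.
mean, sd, rel_sd = radiance_stability([90.0, 110.0, 100.0, 95.0, 105.0])
```

For a stable emitter the relative SD would fall near the 15-20% range reported above; unstable sources such as flares or greenhouses would show much larger values.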
The starting point of this article is the occurrence of determiner-less and bare que relative complementizers like (en) que, ‘(in) that’, instead of (en) el que, ‘(in) which’, in Yucatecan Spanish (southeast Mexico). While reference grammars treat complementizers with a determiner as the standard option, previous diachronic research has shown that determiner-less complementizers actually predate relative complementizers with a determiner. Additionally, Yucatecan Spanish has been in long-standing contact with Yucatec Maya. Relative complementation in Yucatec Maya differs from that in Spanish (at least) in that the non-complex complementizer tu’ux (‘where’) is generally the only option for locative complementation. The paper explores monolingual and bilingual data from Yucatecan Spanish to discuss whether the determiner-less and bare que relative complementizers in our data constitute a historic remnant or a dialectal recast, possibly (but not necessarily) due to language contact. Although our pilot study may not answer these far-reaching questions, it does reveal two separate, but intertwined developments: (i) a generally increased rate of bare que relative complementation, across both monolingual speakers of Spanish and Spanish-Maya bilinguals, compared to other Spanish varieties, and (ii) a preference for donde at the cost of other locative complementizer constructions in the bilingual group. Our analysis thus reveals intriguing differences between the complementizer preferences of monolingual and bilingual speakers, suggesting that different variational patterns caused by different (socio-)linguistic factors can co-develop in parallel in one and the same region.
Variational Bayesian inference for nonlinear Hawkes processes with Gaussian Process self-effects
(2022)
Traditionally, Hawkes processes are used to model time-continuous point processes with history dependence. Here, we propose an extended model where the self-effects are of both excitatory and inhibitory types and follow a Gaussian Process. Whereas previous work either relies on a less flexible parameterization of the model or requires a large amount of data, our formulation allows for both a flexible model and learning when data are scarce. We continue the line of work on Bayesian inference for Hawkes processes and derive an inference algorithm by performing inference on an aggregated sum of Gaussian Processes. Approximate Bayesian inference is achieved via data augmentation, and we describe a mean-field variational inference approach to learn the model parameters. To demonstrate the flexibility of the model, we apply our methodology to data from different domains and compare it to previously reported results.
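The abstract's model places a Gaussian Process prior on the self-effects, allowing both excitation and inhibition. As a simpler, runnable stand-in for the history dependence it describes, the sketch below simulates a classical Hawkes process with an exponential excitation kernel via Ogata's thinning algorithm; all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + sum_i alpha*exp(-beta*(t - t_i))."""
    if not events:
        return mu
    return mu + alpha * np.exp(-beta * (t - np.asarray(events))).sum()

def simulate_hawkes(mu, alpha, beta, t_max):
    """Ogata thinning: between events the intensity only decays, so the
    intensity at the current time is a valid upper bound for proposing
    the next candidate point."""
    events, t = [], 0.0
    while True:
        lam_bar = intensity(t, events, mu, alpha, beta)
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_max:
            break
        # Accept the candidate with probability lambda(t) / lam_bar.
        if rng.uniform() <= intensity(t, events, mu, alpha, beta) / lam_bar:
            events.append(t)
    return events

# Branching ratio alpha/beta = 0.8 < 1 keeps the process stable.
events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.0, t_max=200.0)
```

The paper's GP formulation replaces the fixed exponential kernel with a learned, possibly negative self-effect function; the thinning idea above is only the simulation side, not the variational inference.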
This article proposes several conceptual frameworks for examining the widespread use of classical intertexts depicting the supernatural in popular media. Whether the supernatural is viewed as reality or simply a trope, it represents the human capacity and desire to explore worlds and meanings beyond the obvious and mundane. Representations of classical gods, heroes, and monsters evoke the power of mythic stories to probe and explain human psychology, social concerns, philosophical questions, and religious beliefs, including belief about the paranormal and supernatural. The entertainment value of popular media allows creators and audiences to engage with larger issues in non-dogmatic and playful ways that help them negotiate tensions among various beliefs and identities. This paper also gives an overview of the other articles in this journal issue, showing overlapping themes and patterns that connect with these tensions. By combining knowledge of classical myths in their original contexts with knowledge about contemporary culture, classical scholars contribute unique perspectives about why classical intertexts dominate in popular media today.
Relationships between climate, species composition, and species richness are of particular importance for understanding how boreal ecosystems will respond to ongoing climate change. This study aims to reconstruct changes in terrestrial vegetation composition and taxa richness during the glacial Late Pleistocene and the interglacial Holocene in the sparsely studied southeastern Yakutia (Siberia) by using pollen and sedimentary ancient DNA (sedaDNA) records. Pollen and sedaDNA metabarcoding data using the trnL g and h markers were obtained from a sediment core from Lake Bolshoe Toko. Both proxies were used to reconstruct the vegetation composition, while metabarcoding data were also used to investigate changes in plant taxa richness. The combination of pollen and sedaDNA approaches allows a robust estimation of regional and local past terrestrial vegetation composition around Bolshoe Toko during the last ~35,000 years. Both proxies suggest that during the Late Pleistocene, southeastern Siberia was covered by open steppe-tundra dominated by graminoids and forbs with patches of shrubs, confirming that steppe-tundra extended far south in Siberia. Both proxies show disturbance at the transition between the Late Pleistocene and the Holocene, suggesting a period with scarce vegetation, changes in the hydrochemical conditions in the lake, and changes in sedimentation rates. Both proxies document drastic changes in vegetation composition in the early Holocene, with an increased number of trees and shrubs and the appearance of new tree taxa in the lake's vicinity. The sedaDNA method suggests that the Late Pleistocene steppe-tundra vegetation supported a higher number of terrestrial plant taxa than the forested Holocene. This could be explained, for example, by the "keystone herbivore" hypothesis, which suggests that Late Pleistocene megaherbivores were able to maintain a high plant diversity.
The higher richness is also discussed in the light of the broadly accepted species-area hypothesis, as steppe-tundra covered such an extensive area during the Late Pleistocene.
Ecological niche models (ENMs) are often used to investigate how climatic variables from known occurrence records can estimate potential species range distribution. Although climate-based ENMs provide critical baseline information, the inclusion of non-climatic predictors related to vegetation cover might generate more realistic scenarios. This assumption is particularly relevant for species with life-history traits related to forest habitats and sensitive to habitat loss and fragmentation. Here, we developed ENMs for 36 Atlantic Forest endemic birds considering two sets of predictor variables: (i) climatic variables only and (ii) climatic variables combined with the percentage of remaining native vegetation. We hypothesized that the inclusion of native vegetation data would decrease the potential range distribution of forest-dependent species by limiting their occurrence in regions harboring small areas of native vegetation habitats, despite otherwise favorable climatic conditions. We also expected that habitat restriction in the climate-vegetation models would be more pronounced for highly forest-dependent birds. The inclusion of vegetation data in the modeling procedures restricted the final distribution ranges of 22 out of 36 modeled species, while the 14 remaining presented an expansion of their ranges. We observed that species with high and medium forest dependency showed higher restriction in range size predictions between predictor sets than species with low forest dependency, which showed no alteration or range expansion. Overall, our results suggest that ENMs based on climatic and landscape variables may be a useful tool for conservationists to better understand the dynamics of bird species distributions in threatened and highly fragmented regions such as the Atlantic Forest hotspot. © 2021 Associação Brasileira de Ciência Ecológica e Conservação. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Shrub encroachment has far-reaching ecological and economic consequences in many ecosystems worldwide. Yet, compositional changes associated with shrub encroachment are often overlooked despite having important effects on ecosystem functioning. We document the compositional change and potential drivers for a northern Namibian Combretum woodland transitioning into a Terminalia shrubland. We use a multiproxy record (pollen, sedimentary ancient DNA, biomarkers, compound-specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) from Lake Otjikoto at high taxonomic and temporal resolution. We provide evidence that state changes in semiarid environments may occur on a scale of one century and that transitions between stable states can span around 80 years and are characterized by a unique vegetation composition. We demonstrate that the current grass/woody ratio is exceptional for the last 170 years, as supported by n-alkane distributions and the δ13C and δ13Corg records. Comparing vegetation records to environmental proxy data and census data, we infer a complex network of global and local drivers of vegetation change. While our δD record suggests physiological adaptations of woody species to higher atmospheric pCO2 concentration and drought, our vegetation records reflect the impact of broad-scale logging for the mining industry, and the macrocharcoal record suggests a decrease in fire activity associated with the intensification of farming. Impact of selective grazing is reflected by changes in abundance and taxonomic composition of grasses and by an increase of nonpalatable and trampling-resistant taxa. In addition, grain-size and spore records suggest changes in the erodibility of soils because of reduced grass cover. Synthesis.
We conclude that transitions to an encroached savanna state are supported by gradual environmental changes induced by management strategies, which affected the resilience of savanna ecosystems. In addition, feedback mechanisms that reflect the interplay between management legacies and climate change maintain the encroached state.
We present an X-ray-optical cross-correlator for the soft (> 150 eV) up to the hard X-ray regime based on a molybdenum-silicon superlattice. The cross-correlation is done by probing intensity and position changes of superlattice Bragg peaks caused by photoexcitation of coherent phonons. This approach is applicable for a wide range of X-ray photon energies as well as for a broad range of excitation wavelengths and requires no external fields or changes of temperature. Moreover, the cross-correlator can be employed on a 10 ps or 100 fs time scale featuring up to 50% total X-ray reflectivity and transient signal changes of more than 20%. (C) 2016 Author(s).
We report on the detection of very high energy (VHE; E > 100 GeV) gamma-ray emission from the BL Lac objects KUV 00311-1938 and PKS 1440-389 with the High Energy Stereoscopic System (H.E.S.S.). H.E.S.S. observations were accompanied or preceded by multiwavelength observations with Fermi/LAT, XRT and UVOT onboard the Swift satellite, and ATOM. Based on an extrapolation of the Fermi/LAT spectrum towards the VHE gamma-ray regime, we deduce a 95 per cent confidence level upper limit on the unknown redshift of KUV 00311-1938 of z < 0.98 and of PKS 1440-389 of z < 0.53. When combined with previous spectroscopy results, the redshift of KUV 00311-1938 is constrained to 0.51 ≤ z < 0.98 and of PKS 1440-389 to 0.14 ≲ z < 0.53.
The Victoria microplate between the Eastern and Western Branches of the East African Rift System is one of the largest continental microplates on Earth. In striking contrast to its neighboring plates, Victoria rotates counterclockwise with respect to Nubia. The underlying cause of this distinctive rotation has remained elusive so far. Using 3D numerical models, we investigate the role of pre-existing lithospheric heterogeneities in continental microplate rotation. We find that Victoria's rotation is primarily controlled by the distribution of rheologically stronger zones that transmit the drag of the major plates to the microplate and of the mechanically weaker mobile belts surrounding Victoria that facilitate rotation. Our models reproduce Victoria's GPS-derived counterclockwise rotation as well as key complexities of the regional tectonic stress field. These results reconcile competing ideas on the opening of the rift system by highlighting differences in orientation of the far-field divergence, local extension, and the minimum horizontal stress. One of the largest continental microplates on Earth is situated in the center of the East African Rift System, and oddly, the Victoria microplate rotates counterclockwise with respect to the neighboring African tectonic plate. Here, the authors' modelling results suggest that Victoria microplate rotation is caused by edge-driven lithospheric processes related to the specific geometry of rheologically weak and strong regions.
Vienna (2021)
This book explores and debates the urban transformations that have taken place in Vienna over the past 30 years and their consequences in policy fields such as labour and housing, political and social participation and the environment. Historically, European cities have been characterised by a strong association between social cohesion, quality of life, economic ambition and a robust State. Vienna is an excellent example of this. In more recent years, however, cities have been pressured to change policy principles and mechanisms in the context of demographic shifts, post-industrial transformations and welfare recalibration, which have led to worsened social conditions in many cities. Each chapter in this volume discusses Vienna's responses to these pressures in key policy arenas, looking at outcomes from the context-specific local arrangements. Against a theoretical framework debating the European city as a model of inclusion and social justice, authors explore the local capacity to innovate urban policies and to address new social risks, while paying attention to potential trade-offs.
The book questions and assesses the city's resilience using time series and an institutional analysis of four key dimensions that characterise the European city model within the context of post-industrial transition: redistribution, recognition, representation and sustainability. It offers a multiscalar perspective of urban governance through labour, housing, participatory and environmental policies, bringing together different levels and public policy types.
Background: Members of the same social group tend to have the same body height. Migrants tend to adjust in height to their host communities.
Objectives: Social-Economic-Political-Emotional (SEPE) factors influence growth. We hypothesized that Vietnamese young adult migrants in Germany (1) are taller than their parents, (2) are as tall as their German peers, and (3) are as tall as predicted by height expectation at age 13 years.
Sample: The study was conducted in 30 male and 54 female Vietnamese migrants (mean age 26.23 years, SD = 4.96) in Germany in 2020.
Methods: Information on age, sex, body height, school and education, job, height and ethnicity of best friend, migration history and cultural identification, parental height and education, and recalled information on their personal height expectations at age 13 years were obtained by questionnaire. The data were analyzed by St. Nicolas House Analysis (SNHA) and multiple regression.
Results: Vietnamese young adults are taller than their parents (females by 3.85 cm, males by 7.44 cm), but do not fully attain the height of their German peers. Body height is positively associated with the height of the best friend (p < 0.001), the height expectation at age 13 years (p < 0.001), and father's height (p = 0.001).
Conclusion: Body height of Vietnamese migrants in Germany reflects competitive growth and strategic growth adjustments. The magnitude of this intergenerational trend supports the concept that human growth depends on SEPE factors.
Background:
Research into the application of virtual reality technology in the health care sector has rapidly increased, resulting in a large body of research that is difficult to keep up with.
Objective:
We provide an overview of the annual publication numbers in this field and the most productive and influential countries, journals, and authors, as well as the most used, most co-occurring, and most recent keywords.
Methods:
Based on a data set of 356 publications and 20,363 citations derived from Web of Science, we conducted a bibliometric analysis using BibExcel, HistCite, and VOSviewer.
Results:
The strongest growth in publications occurred in 2020, accounting for 29.49% of all publications so far. The most productive countries are the United States, the United Kingdom, and Spain; the most influential countries are the United States, Canada, and the United Kingdom. The most productive journals are the Journal of Medical Internet Research (JMIR), JMIR Serious Games, and the Games for Health Journal; the most influential journals are Patient Education and Counselling, Medical Education, and Quality of Life Research. The most productive authors are Riva, del Piccolo, and Schwebel; the most influential authors are Finset, del Piccolo, and Eide. The most frequently occurring keywords other than “virtual” and “reality” are “training,” “trial,” and “patients.” The most relevant research themes are communication, education, and novel treatments; the most recent research trends are fitness and exergames.
Conclusions:
The analysis shows that the field has moved beyond its infancy and that its specialization is advancing, with a clear focus on patient usability.