Background
Benefit finding, defined as perceiving positive life changes resulting from adversity and negative life stressors, has gained growing attention in the context of chronic illness. The present study examined the psychometric properties of the Benefit Finding Scale for Children (BFSC) in a sample of German youth facing chronic conditions.
Methods
A sample of adolescents with various chronic conditions (N = 304; 12–21 years) completed the 10-item BFSC along with measures of intra- and interpersonal resources, coping strategies, and health-related quality of life (hrQoL). The total sample was randomly divided into two subsamples for exploratory and confirmatory factor analyses (EFA/CFA).
Results
EFA revealed that the BFSC scores had a one-dimensional factor structure. CFA verified the one-dimensional factor structure with an acceptable fit. The BFSC exhibited acceptable internal consistency (α = 0.87–0.88) and construct validity. In line with our hypotheses, benefit finding was positively correlated with optimism, self-esteem, self-efficacy, sense of coherence, and support seeking. There were no correlations with avoidance, wishful thinking, emotional reaction, or hrQoL. Sex differences in benefit finding were not consistent across subsamples. Benefit finding was also positively associated with age, disease severity, and social status.
Conclusions
The BFSC is a psychometrically sound instrument to assess benefit finding in adolescents with chronic illness and may facilitate further research on positive adaptation processes in adolescents, irrespective of their specific diagnosis.
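The internal consistency reported above (α = 0.87–0.88) is Cronbach's alpha. As background, a minimal sketch of how this coefficient is computed from raw item scores; the data below are invented for illustration and are not the study's.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items (10 for the BFSC)
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented data: 6 respondents answering 4 items on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(round(cronbach_alpha(scores), 2))  # → 0.93
```

Values around 0.9, as here and in the study, indicate that the items measure one underlying construct consistently, which fits the one-dimensional factor structure found by the EFA/CFA.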
In order to improve a recently established cell-based assay to assess the potency of botulinum neurotoxin, neuroblastoma-derived SiMa cells and induced pluripotent stem cells (iPSC) were modified to incorporate the coding sequence of a reporter luciferase into a genetic safe harbor utilizing CRISPR/Cas9. A novel method, the double-control quantitative copy number PCR (dc-qcnPCR), was developed to detect off-target integrations of donor DNA. The donor DNA insertion success rate and targeted insertion success rate were analyzed in clones of each cell type. The dc-qcnPCR reliably quantified the copy number in both cell lines. The probability of incorrect donor DNA integration was significantly increased in SiMa cells in comparison to the iPSCs. This can possibly be explained by the lower bundled relative gene expression of a number of double-strand repair genes (BRCA1, DNA2, EXO1, MCPH1, MRE11, and RAD51) in SiMa clones than in iPSC clones. The dc-qcnPCR offers an efficient and cost-effective method to detect off-target CRISPR/Cas9-induced donor DNA integrations.
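The abstract does not spell out how the dc-qcnPCR converts qPCR signals into copy numbers. As general background, relative copy numbers are commonly derived from Ct values with the delta-delta-Ct method; the sketch below is an illustrative simplification assuming perfect amplification efficiency, not the authors' protocol, and all numbers are invented.

```python
def relative_copy_number(ct_target: float, ct_reference: float,
                         ct_target_cal: float, ct_reference_cal: float,
                         efficiency: float = 2.0) -> float:
    """Relative copy number via the delta-delta-Ct method.

    ct_target / ct_reference: Ct values of the donor-DNA amplicon and a
    single-copy reference amplicon in the sample; *_cal: the same for a
    calibrator known to carry one copy. efficiency=2.0 assumes perfect
    doubling per cycle (an idealization).
    """
    d_sample = ct_target - ct_reference
    d_cal = ct_target_cal - ct_reference_cal
    return efficiency ** -(d_sample - d_cal)

# A donor amplicon coming up one cycle earlier than in the single-copy
# calibrator suggests roughly two integrated copies.
print(round(relative_copy_number(24.0, 25.0, 25.0, 25.0), 1))  # → 2.0
```

In a copy-number screen, values well above the expected single targeted integration would flag clones with additional off-target donor-DNA insertions.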
Satisfaction and frustration of the needs for autonomy, competence, and relatedness, as assessed with the 24-item Basic Psychological Need Satisfaction and Frustration Scale (BPNSFS), have been found to be crucial indicators of individuals’ psychological health. To increase the usability of this scale within a clinical and health services research context, we aimed to validate a German short version (12 items) of this scale in individuals with depression, including the examination of the relations from need frustration and need satisfaction to ill-being and quality of life (QOL). This cross-sectional study involved 344 adults diagnosed with depression (mean age (SD) = 47.5 (11.1) years; 71.8% female). Confirmatory factor analyses indicated that the short version of the BPNSFS was not only reliable, but also fitted a six-factor structure (i.e., satisfaction/frustration × type of need). Subsequent structural equation modeling showed that need frustration related positively to indicators of ill-being and negatively to QOL. Surprisingly, need satisfaction did not predict differences in ill-being or QOL. The short form of the BPNSFS represents a practical instrument to measure need satisfaction and frustration in people with depression. Further, the results support recent evidence on the particular importance of need frustration in the prediction of psychopathology.
Scaling agriculture to feed a globally rising population demands new approaches for future crop production, such as multilayer and multitrophic indoor farming. Moreover, there is a current trend towards sustainable local solutions for aquaculture and saline agriculture. In this context, halophytes are becoming increasingly important for research and the food industry. As Salicornia europaea is a highly salt-tolerant obligate halophyte that can be used as a food crop, indoor cultivation with saline water is of particular interest. Therefore, finding a sustainable alternative to the use of seawater in non-coastal regions is crucial. Our goal was to determine whether natural brines, which are widely distributed and often available in inland areas, provide an alternative water source for the cultivation of saline organisms. This case study investigated the potential use of natural brines for the production of S. europaea. In the control group, which reflects the optimal growth conditions, fresh weight was increased, but there was no significant difference between the treatment groups comparing natural brines with artificial sea water. A similar pattern was observed for carotenoids and chlorophylls: individual components showed significant differences between groups, but within treatments there were mostly no changes. In summary, we showed that the influence of the different chloride concentrations was greater than that of the salt composition. Moreover, nutrient-enriched natural brine was demonstrated to be a suitable alternative for the cultivation of S. europaea in terms of yield and nutritional quality. Thus, the present study provides the first evidence for the future potential of natural brine waters for the further development of aquaculture systems and saline agriculture in inland regions.
MOOCs have been produced using a variety of instructional design approaches and frameworks. This paper presents experiences from the instructional approach based on the ADDIE model applied to designing and producing MOOCs in the Erasmus+ strategic partnership on Open Badge Ecosystem for Research Data Management (OBERRED). Specifically, this paper describes the case study of the production of the MOOC “Open Badges for Open Science”, delivered on the European MOOC platform EMMA. The key goal of this MOOC is to help learners develop a capacity to use Open Badges in the field of Research Data Management (RDM). To produce the MOOC, the ADDIE model was applied as a generic instructional design model and a systematic approach to design and development, following the five phases: Analysis, Design, Development, Implementation, and Evaluation. This paper outlines the MOOC production, including the methods, templates, and tools used in the process, such as the interactive micro-content created with H5P in the form of Open Educational Resources and the digital credentials created with Open Badges and issued to MOOC participants upon successful completion of MOOC levels. The paper also outlines the results of a qualitative evaluation, which applied the cognitive walkthrough methodology to elicit user requirements. The paper ends with conclusions about the pros and cons of using the ADDIE model in MOOC production and formulates recommendations for further work in this area.
Many institutions struggle to tap into the potential of their large archives of radar reflectivity: these data are often affected by miscalibration, yet the bias is typically unknown and temporally volatile. Still, relative calibration techniques can be used to correct the measurements a posteriori. For that purpose, the usage of spaceborne reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) platforms has become increasingly popular: the calibration bias of a ground radar (GR) is estimated from its average reflectivity difference to the spaceborne radar (SR). Recently, Crisologo et al. (2018) introduced a formal procedure to enhance the reliability of such estimates: each match between SR and GR observations is assigned a quality index, and the calibration bias is inferred as a quality-weighted average of the differences between SR and GR. The relevance of quality was exemplified for the Subic S-band radar in the Philippines, which is greatly affected by partial beam blockage. The present study extends the concept of quality-weighted averaging by accounting for path-integrated attenuation (PIA) in addition to beam blockage. This extension becomes vital for radars that operate at the C or X band. Correspondingly, the study setup includes a C-band radar that substantially overlaps with the S-band radar. Based on the extended quality-weighting approach, we retrieve, for each of the two ground radars, a time series of calibration bias estimates from suitable SR overpasses. As a result of applying these estimates to correct the ground radar observations, the consistency between the ground radars in the region of overlap increased substantially. Furthermore, we investigated if the bias estimates can be interpolated in time, so that ground radar observations can be corrected even in the absence of prompt SR overpasses. 
We found that a moving average approach was most suitable for that purpose, although limited by the absence of explicit records of radar maintenance operations.
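The core of the quality-weighted averaging described above can be sketched as follows. The reflectivity values and quality weights are hypothetical, and the real procedure first matches SR and GR samples in 3-D space; only the final weighted averaging step is shown.

```python
import numpy as np

def calibration_bias(gr_dbz: np.ndarray, sr_dbz: np.ndarray,
                     quality: np.ndarray) -> float:
    """Quality-weighted mean reflectivity difference SR - GR (in dB).

    gr_dbz, sr_dbz: matched ground-radar and spaceborne-radar
    reflectivities; quality: per-match quality index in [0, 1],
    e.g. penalizing beam blockage and path-integrated attenuation.
    """
    diff = sr_dbz - gr_dbz
    return float(np.average(diff, weights=quality))

# Hypothetical matched samples: the GR reads ~2 dB low; the last match
# is blocked/attenuated and is almost fully discounted by its weight.
gr = np.array([28.0, 31.5, 25.0, 40.0])
sr = np.array([30.0, 33.5, 27.0, 30.0])
q  = np.array([1.0, 1.0, 1.0, 0.05])
print(round(calibration_bias(gr, sr, q), 2))  # → 1.8
```

Without the quality weighting, the single low-quality match would drag the estimate towards zero; down-weighting it recovers the roughly 2 dB underestimation of this hypothetical ground radar.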
In this report we describe, for the first time, Cy5-dUTP labelling of recombinase polymerase amplification (RPA) products directly during the amplification process. Nucleic acid amplification techniques, especially the polymerase chain reaction as well as various isothermal amplification methods such as RPA, have become promising tools for the detection of pathogens and target-specific genes. RPA offers additional advantages: this isothermal method has become popular in point-of-care diagnostics because of its speed and sensitivity, but it requires pre-labelled primers or probes for subsequent detection of the amplicons. To overcome this disadvantage, we labelled RPA amplicons with Cy5-dUTP without the need for pre-labelled primers. The amplification results for various multiple antibiotic resistance genes indicate great potential as a flexible and promising tool with highly specific and sensitive detection of the target genes. After determining an appropriate ratio of 1% Cy5-dUTP to 99% unlabelled dTTP, we were able to detect the bla(CTX-M15) gene in less than 1.6E-03 ng of genomic DNA, corresponding to approximately 200 cfu of Escherichia coli cells, in an amplification time of only 40 min.
Background: The COVID-19 pandemic has highlighted the importance of scientific endeavors. The goal of this systematic review is to evaluate the quality of the research on physical activity (PA) behavior change and its potential to contribute to policy-making processes in the early days of COVID-19 related restrictions.
Methods: We conducted a systematic review, following PRISMA guidelines and using PubMed and Web of Science, of the methodological quality of articles on PA behavior change that were published within 365 days after COVID-19 was declared a pandemic by the World Health Organization (WHO). Items from the JBI checklist and the AXIS tool were used for additional risk-of-bias assessment. Evidence mapping is used to visualize the main results. Conclusions about the significance of published articles are based on hypotheses on PA behavior change in the light of the COVID-19 pandemic.
Results: Among the 1,903 identified articles, there were 36% opinion pieces, 53% empirical studies, and 9% reviews. Of the 332 studies included in the systematic review, 213 used self-report measures to recollect prepandemic behavior, often in small convenience samples. Most focused on changes in PA volume, whereas changes in PA types were rarely measured. The majority had methodological reporting flaws. Few had very large samples with objective measures using repeated-measures designs (before and during the pandemic). In addition to the expected decline in PA duration, these studies show that many of those who were active prepandemic continued to be active during the pandemic.
Conclusions: Research responded quickly at the onset of the pandemic. However, most of the studies lacked robust methodology, and PA behavior change data lacked the accuracy needed to guide policy makers. To improve the field, we propose the implementation of longitudinal cohort studies by larger organizations such as WHO to ease access to data on PA behavior, and suggest those institutions set clear standards for this research. Researchers need to ensure a better fit between the measurement method and the construct being measured, and use both objective and subjective measures where appropriate to complement each other and provide a comprehensive picture of PA behavior.
A commonly used approach to parameter estimation in computational models is the so-called grid search procedure: the entire parameter space is searched in small steps to determine the parameter value that provides the best fit to the observed data. This approach has several disadvantages: first, it can be computationally very expensive; second, one optimal point value of the parameter is reported as the best-fit value, so we cannot quantify our uncertainty about the parameter estimate. In the main journal article that this methods article accompanies (Jager et al., 2020, Interference patterns in subject-verb agreement and reflexives revisited: A large-sample study, Journal of Memory and Language), we carried out parameter estimation using Approximate Bayesian Computation (ABC), a Bayesian approach that allows us to quantify our uncertainty about the parameter's value given the data. This approach has the further advantage that it allows us to generate both prior and posterior predictive distributions of reading times from the cue-based retrieval model of Lewis and Vasishth (2005). Instead of the conventional grid search method, we use ABC for parameter estimation in the [4] model; the ABC method has the advantage that the uncertainty of the parameter can be quantified.
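A minimal rejection-ABC sketch illustrates the contrast with grid search. It uses a toy Gaussian simulator with an invented latency parameter mu rather than the cue-based retrieval model, and the prior range, tolerance, and summary statistic are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the simulator: observed reading times assumed to be
# normally distributed around an unknown latency parameter mu (ms).
observed = rng.normal(400.0, 50.0, size=200)

def simulate(mu: float, n: int = 200) -> np.ndarray:
    return rng.normal(mu, 50.0, size=n)

def abc_rejection(n_draws: int = 20000, eps: float = 5.0) -> np.ndarray:
    """Rejection ABC: keep prior draws whose simulated summary
    statistic (here, the mean reading time) lands within eps of the
    observed summary statistic."""
    obs_stat = observed.mean()
    prior = rng.uniform(200.0, 600.0, size=n_draws)  # vague prior on mu
    accepted = [mu for mu in prior
                if abs(simulate(mu).mean() - obs_stat) < eps]
    return np.asarray(accepted)

posterior = abc_rejection()
# The accepted draws approximate the posterior: we get both a point
# estimate and an uncertainty interval, unlike a grid-search best fit.
print(round(posterior.mean()), np.percentile(posterior, [2.5, 97.5]).round())
```

Grid search would instead evaluate the fit at every step of a fixed grid over [200, 600] and report only the single best-fitting mu, with no uncertainty quantification.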
The main aim of this article is to explore how learning analytics and synchronous collaboration could improve course completion and learner outcomes in MOOCs, which traditionally have been delivered asynchronously. Based on our experience with developing BigBlueButton, a virtual classroom platform that provides educators with live analytics, this paper explores three scenarios with business-focused MOOCs to improve outcomes and strengthen learned skills.
User Experience (UX) describes the holistic experience of a user before, during, and after interaction with a platform, product, or service. UX adds value and attraction beyond sole functionality and is therefore highly relevant for firms. The increased interest in UX has produced a vast amount of scholarly research since 1983. The research field is, therefore, complex and scattered. Conducting a bibliometric analysis, we aim to structure the field quantitatively and at a high level of abstraction. We employed citation analyses, co-citation analyses, and content analyses to evaluate the productivity and impact of extant research. We suggest that future research should focus more on business- and management-related topics.
In his essay, Mel Ainscow looks at inclusion and equity from an international perspective and makes suggestions on how to develop inclusive education in a ‘whole-system approach’. After discussing different conceptions of inclusion and equity, he describes international policies which address them. From this international macro-level, Ainscow zooms in to the meso-level of the school and its immediate environment, defining dimensions to be considered for an inclusive school development. One of these dimensions is the ‘use of evidence’. In my comment, I want to focus on this dimension and discuss its scope and the potential to apply it in inclusive education development. As a first and important precondition, Ainscow explains that different circumstances lead to different linguistic uses of the term ‘inclusive education’. Thus, the term ‘inclusive education’ does not refer to an identical set of objectives across countries, and neither does the term ‘equity’.
Background
Artificial intelligence (AI) is one of the most promising areas in medicine with many possibilities for improving health and wellness. Already today, diagnostic decision support systems may help patients to estimate the severity of their complaints. This fictional case study aimed to test the diagnostic potential of an AI algorithm for common sports injuries and pathologies.
Methods
Based on a literature review and clinical expert experience, five fictional “common” cases of acute and subacute injuries or chronic sport-related pathologies were created: concussion, ankle sprain, muscle pain, chronic knee instability (after ACL rupture), and tennis elbow. The symptoms of these cases were entered into a freely available chatbot-guided AI app, and its diagnoses were compared to the predefined injuries and pathologies.
Results
The mean number of questions asked by the app per patient ranged from 25 to 36, with optional explanations of certain questions or illustrative photos available on demand. It was stressed that the symptom analysis would not replace a doctor’s consultation. A 23-year-old male patient case with a mild concussion was correctly diagnosed. An ankle sprain in a 27-year-old female without ligament or bony lesions was also detected, and an ER visit was suggested. Muscle pain in the thigh of a 19-year-old male was correctly diagnosed. In the case of a 26-year-old male with chronic ACL instability, the algorithm did not sufficiently cover the chronic aspect of the pathology, but the given recommendation of seeing a doctor would have helped the patient. Finally, the condition of chronic epicondylitis in a 41-year-old male was correctly detected.
Conclusions
All chosen injuries and pathologies were either correctly diagnosed or at least tagged with the right advice on when it is urgent to seek a medical specialist. However, the quality of AI-based results presumably depends on the data-driven experience of these programs as well as on the understanding of their users. Further studies should compare existing AI programs and their diagnostic accuracy for medical injuries and pathologies.
As structural membrane components and signaling effector molecules, sphingolipids influence a plethora of host cell functions and, by doing so, also the replication of viruses. Investigating the effects of various inhibitors of sphingolipid metabolism in primary human peripheral blood lymphocytes (PBL) and the human B cell line BJAB, we found that not only the sphingosine kinase (SphK) inhibitor SKI-II but also the acid ceramidase inhibitor ceranib-2 efficiently inhibited measles virus (MV) replication. Virus uptake into the target cells was not grossly altered by the two inhibitors, while titers of newly synthesized MV were reduced by approximately 1 log (90%) in PBL and 70-80% in BJAB cells. Lipidomic analyses revealed that in PBL SKI-II led to increased ceramide levels, whereas in BJAB cells ceranib-2 increased ceramides. SKI-II treatment decreased sphingosine-1-phosphate (S1P) levels in PBL and BJAB cells. Furthermore, we found that MV infection of lymphocytes induced a transient (0.5-6 h) increase in S1P, which was prevented by SKI-II. Investigating the effect of the inhibitors on metabolic (mTORC1) activity, we found that ceranib-2 reduced the phosphorylation of p70 S6K in PBL, and that both inhibitors, ceranib-2 and SKI-II, reduced the phosphorylation of p70 S6K in BJAB cells. As mTORC1 activity is required for efficient MV replication, this effect of the inhibitors is one possible antiviral mechanism. In addition, reduced intracellular S1P levels affect a number of signaling pathways and functions including Hsp90 activity, which was reported to be required for MV replication. Accordingly, we found that pharmacological inhibition of Hsp90 with the inhibitor 17-AAG strongly impaired MV replication in primary PBL.
Thus, our data suggest that treatment of lymphocytes with both acid ceramidase and SphK inhibitors impairs MV replication by affecting a number of cellular activities, including mTORC1 and Hsp90, which alter the metabolic state of the cells, creating a hostile environment for the virus.
URSA-PQ (2020)
We present a highly flexible and portable instrument to perform pump-probe spectroscopy with an optical and an X-ray pulse in the gas phase. The so-called URSA-PQ (German for ‘Ultraschnelle Röntgenspektroskopie zur Abfrage der Photoenergiekonversion an Quantensystemen’, Engl. ‘ultrafast X-ray spectroscopy for probing photoenergy conversion in quantum systems’) instrument is equipped with a magnetic bottle electron spectrometer (MBES) and tools to characterize the spatial and temporal overlap of optical and X-ray laser pulses. Its adherence to the CAMP instrument dimensions allows a wide range of sample sources as well as other spectrometers to be included in the setup. We present the main design and technical features of the instrument. The MBES performance was evaluated using Kr M4,5NN Auger lines from backfilled Kr gas, with an energy resolution ΔE/E ≅ 1/40 in the integrating operation mode. The time resolution of the setup at FLASH 2 FL 24 was characterized with the help of an experiment on 2-thiouracil, which is inserted via the instrument’s capillary oven. We find a time resolution of 190 fs using the molecular 2p photoline shift and attribute this to different origins in the UV-pump/X-ray-probe setup.
Background The use of iodine-based contrast agents entails the risk of contrast-induced nephropathy (CIN). Radiocontrast agents are the third most common cause of nephropathy among hospitalized patients, accounting for 11-12% of cases. CIN is associated with clinically significant consequences, including increased morbidity, prolonged hospitalization, increased risk of complications, potential need for dialysis, and increased mortality. The number of in-hospital examinations using iodine-based contrast media has increased significantly over the last decade. To protect patients from possible complications of such examinations, new biomarkers are needed that are able to predict the risk of contrast-induced nephropathy. Urinary and plasma cyclic guanosine monophosphate (cGMP) concentrations are influenced by renal function, and urinary cGMP is primarily of renal cellular origin. Therefore, we assessed whether urinary cGMP concentration may predict major adverse renal events (MARE) after contrast media exposure during coronary angiography. Methods Urine samples were prospectively collected from non-randomized consecutive patients with either diabetes or preexisting impaired kidney function receiving intra-arterial contrast medium (CM) for emergent or elective coronary angiography at the Charite Campus Mitte, University Hospital Berlin. Urinary cGMP concentration in spot urine was analyzed 24 hours after CM exposure. Patients were followed up over 90 days for occurrence of death, initiation of dialysis, doubling of plasma creatinine concentration, or MARE. Results In total, 289 consecutive patients were included in the study.
The urine cGMP/creatinine ratio 24 hours before CM exposure, expressed as mean +/- SD, was predictive of the need for dialysis (no dialysis: 89.77 +/- 92.85 μM/mM, n = 277; need for dialysis: 140.3 +/- 82.90 μM/mM, n = 12, p = 0.008), death (no death during follow-up: 90.60 +/- 92.50 μM/mM, n = 280; death during follow-up: 169.88 +/- 81.52 μM/mM, n = 9; p = 0.002), and the composite endpoint MARE (no MARE: 86.02 +/- 93.17 μM/mM, n = 271; MARE: 146.64 +/- 74.68 μM/mM, n = 18, p < 0.001) during the follow-up of 90 days after contrast media application. The cGMP/creatinine ratio remained significantly increased at values exceeding 120 μM/mM in patients who developed MARE, required dialysis, or died. Conclusions A urinary cGMP/creatinine ratio >= 120 μM/mM before CM exposure is a promising biomarker for the need of dialysis and all-cause mortality 90 days after CM exposure in patients with preexisting renal impairment or diabetes.
This introductory essay is structured as follows: first, several forms of urbanisation (I.) are introduced and the processes of urbanisation and dis-urbanisation (II.) are defined. Then four fields of law that are deeply affected by urbanisation are brought into focus: local government law (III.), public building law (IV.), civil service law (V.), and public finance law (VI.). Afterwards, the effects of the corona pandemic on these fields of law are considered, taking account of the process of urbanisation (VII.). Finally, the main results are summarised (VIII.).
Marked along-strike changes in stratigraphy, mountain belt morphology, basement exhumation, and deformation styles characterize the Andean retroarc; these changes have previously been related to spatiotemporal variations in the subduction angle. We modeled new apatite fission track and apatite (U-Th-Sm)/He data from nine ranges located between 26 degrees S and 28 degrees S. Using new and previously published data, we constructed a Cretaceous to Pliocene paleogeographic model that delineates a four-stage tectonic evolution: extensional tectonics during the Cretaceous (120-75 Ma), the formation of a broken foreland basin between 55 and 30 Ma, reheating due to burial beneath sedimentary rocks (18-13 Ma), and deformation, exhumation, and surface uplift during the Late Miocene and the Pliocene (13-3 Ma). Our model highlights how preexisting upper plate structures control the deformation patterns of broken foreland basins. Because retroarc deformation predates flat-slab subduction, we propose that slab anchoring may have been the precursor of Eocene-Oligocene compression in the Andean retroarc. Our model challenges models which consider broken foreland basins and retroarc deformation in the NW Argentinian Andes to be directly related to Miocene flat subduction.
We extend the scope of European palaeogenomics by sequencing the genomes of Late Upper Palaeolithic (13,300 years old, 1.4-fold coverage) and Mesolithic (9,700 years old, 15.4-fold) males from western Georgia in the Caucasus and a Late Upper Palaeolithic (13,700 years old, 9.5-fold) male from Switzerland. While we detect Late Palaeolithic-Mesolithic genomic continuity in both regions, we find that Caucasus hunter-gatherers (CHG) belong to a distinct ancient clade that split from western hunter-gatherers ~45 kya, shortly after the expansion of anatomically modern humans into Europe, and from the ancestors of Neolithic farmers ~25 kya, around the Last Glacial Maximum. CHG genomes significantly contributed to the Yamnaya steppe herders who migrated into Europe ~3,000 BC, supporting a formative Caucasus influence on this important Early Bronze Age culture. CHG left their imprint on modern populations from the Caucasus and also central and south Asia, possibly marking the arrival of Indo-Aryan languages.
An increase in general temperatures due to climate change, and the associated increase in heat waves, led the State Agency for Nature, Environment and Consumer Protection of North Rhine-Westphalia (LANUV) to publish a guideline for protecting the positive climate function of urban soils. Building on this, the cooling capacity of urban soils was quantified at the regional level for the city of Düsseldorf in order to identify areas particularly worthy of protection. Within the ExTrass project, the cooling capacity of urban soils within Remscheid was now to be quantified, but on the basis of freely available data. Such a data basis rules out modelling the soil water balance, which was the basis of the quantification in Düsseldorf, for Remscheid. However, the approach presented here makes it possible to carry out such an analysis in other municipalities within Germany with relatively little effort.
The cooling capacity of the soils was estimated via the usable field capacity (nFK), which gives the water storage volume of the uppermost rooted soil zone. This is the soil water store that supplies water for evapotranspiration and thus largely determines a soil's cooling capacity, i.e. through direct evaporation of soil water and through transpiration of water by plants. Three inputs went into the map: (a) the soil map of North Rhine-Westphalia (BK50), to determine the usable field capacity (nFK) per map unit; (b) the UrbanAtlas 2012 land-use dataset, combined with a literature review, to derive the influence of land use on nFK values, particularly with regard to sealing and compaction; and (c) OpenStreetMap (OSM), to determine the proportion of sealed surfaces more precisely than would have been possible from the UrbanAtlas alone.
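The map-overlay logic of combining nFK, sealing, and compaction can be sketched in a few lines. This is an illustrative sketch only, assuming a simple multiplicative reduction; the factor values and the function itself are hypothetical placeholders, not those used in the Remscheid study.

```python
# Hypothetical sketch of the overlay: the effective plant-available water
# capacity (nFK) of a map unit is reduced by the sealed fraction (from OSM)
# and by a land-use compaction factor (from the literature review).
# Factor values below are made-up placeholders.

def effective_nfk(nfk_mm, sealed_fraction, compaction_factor=1.0):
    """Plant-available water storage [mm] after accounting for sealing."""
    if not 0.0 <= sealed_fraction <= 1.0:
        raise ValueError("sealed_fraction must be in [0, 1]")
    return nfk_mm * (1.0 - sealed_fraction) * compaction_factor

# Example: a soil unit with 120 mm nFK, 40 % sealed, slight compaction
print(effective_nfk(120.0, 0.4, 0.9))  # ≈ 64.8 mm remain available
```

Applied cell by cell to the BK50/UrbanAtlas/OSM overlay, this yields a raster of potential cooling capacity.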
This approach proved suitable for investigating the spatial distribution of the potential soil cooling function within a city. Note that the influence of groundwater could not be taken into account for Remscheid: owing to the geological and topographical situation, groundwater conditions there are subject to small-scale variation, so there is no continuous, mapped aquifer.
Allotment gardens, parks and cemeteries in the inner city, and the land-use classes forest and grassland in general, were identified as areas with a particularly high potential soil cooling function. Such areas are especially worthy of protection. The analysis of the storage fill levels of the upper soil zone, based on the derived map of the potential soil cooling function and the climatic water balance, showed that inner-city areas with a small soil water store in particular lose their cooling function early in the summer of a dry year and thus have a reduced positive climate function during heat waves. This finding is supported by an evaluation of the normalized difference vegetation index (NDVI), which was used to examine the change in plant vitality before and after a heat period in June/July 2018.
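The storage fill-level analysis rests on a simple bucket logic: the upper soil zone holds at most its usable field capacity, gains precipitation, and loses evapotranspiration. A minimal sketch of that idea (not the study's actual procedure; numbers are illustrative):

```python
# Minimal bucket model for the storage fill level of the upper soil zone:
# the store is bounded by the effective nFK, filled by precipitation P and
# drained by evapotranspiration ET; once empty, no water is left for cooling.

def storage_series(nfk_mm, precip_mm, et_mm, initial_fill=1.0):
    """Daily storage [mm], clipped to [0, nfk_mm]."""
    storage = nfk_mm * initial_fill
    series = []
    for p, et in zip(precip_mm, et_mm):
        storage = min(max(storage + p - et, 0.0), nfk_mm)
        series.append(storage)
    return series

# A small store (30 mm) runs dry within a few hot, mostly rain-free days:
print(storage_series(30.0, [0, 0, 0, 5, 0], [8, 8, 8, 8, 8]))
```

A cell whose series reaches zero early in summer has lost its cooling function for the remainder of the dry spell.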
Measurements with meteobikes, a setup that continuously records temperature during a bicycle ride, support the finding that inner-city green spaces such as parks have a positive effect on the urban microclimate. These measurements further show that the topography within the study area presumably co-determines the heating of individual areas and the temperature distribution. The map of the potential cooling function for Remscheid presented here should be incorporated into the climate function map for Remscheid and should replace the existing layer "flächenhafte Klimafunktion" (areal climate function), which considers land use only.
School placement phases are an important practice-oriented learning opportunity in teacher-training programmes, as they offer room for extensive reflection on one's own learning experience. The theoretical-formal knowledge acquired during university studies stands in contrast here to practical knowledge and skills. Given the professional development that takes place during the induction phase (Referendariat), especially in the competence area of teaching, it can be inferred that reflection among students on primarily subject-specific aspects widens, during the induction phase, into reflection on primarily cross-disciplinary and pedagogical aspects. An analysis of N = 55 written reflections on others' teaching by prospective physics teachers from university studies and the induction phase supported this hypothesis for the area of lesson analysis and reflection. In addition, a workshop offering for teachers in the second and third phases of teacher education was developed from the video vignette, trialled and evaluated.
Salt marshes filter pollutants, protect coastlines against storm surges, and sequester carbon, yet are under threat from sea level rise and anthropogenic modification. The sustained existence of the salt marsh ecosystem depends on the topographic evolution of marsh platforms. Quantifying marsh platform topography is vital for improving the management of these valuable landscapes. The determination of platform boundaries currently relies on supervised classification methods requiring near-infrared data to detect vegetation, or demands labour-intensive field surveys and digitisation. We propose a novel, unsupervised method to reproducibly isolate salt marsh scarps and platforms from a digital elevation model (DEM), referred to as Topographic Identification of Platforms (TIP). Field observations and numerical models show that salt marshes mature into subhorizontal platforms delineated by subvertical scarps. Based on this premise, we identify scarps as lines of local maxima on a slope raster, then fill landmasses from the scarps upward, thus isolating mature marsh platforms. We test the TIP method using lidar-derived DEMs from six salt marshes in England with varying tidal ranges and geometries, for which topographic platforms were manually isolated from tidal flats. Agreement between manual and unsupervised classification exceeds 94% for DEM resolutions of 1 m, with all but one site maintaining an accuracy superior to 90% for resolutions up to 3 m. For resolutions of 1 m, platforms detected with the TIP method are comparable in surface area to digitised platforms and have similar elevation distributions. We also find that our method allows for the accurate detection of local block failures as small as 3 times the DEM resolution. Detailed inspection reveals that although tidal creeks were digitised as part of the marsh platform, unsupervised classification categorises them as part of the tidal flat, causing an increase in false negatives and overall platform perimeter. 
This suggests our method may benefit from combination with existing creek detection algorithms. Fallen blocks and high tidal flat portions, associated with potential pioneer zones, can also lead to differences between our method and supervised mapping. Although pioneer zones prove difficult to classify using a topographic method, we suggest that these transition areas should be considered when analysing erosion and accretion processes, particularly in the case of incipient marsh platforms. Ultimately, we have shown that unsupervised classification of marsh platforms from high-resolution topography is possible and sufficient to monitor and analyse topographic evolution.
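The TIP premise (subhorizontal platforms bounded by subvertical scarps) can be illustrated on a single shore-to-land transect. The following is a hypothetical 1-D sketch of the idea, not the published 2-D implementation: locate the scarp as the point of maximal slope, then "fill upward" by marking everything landward of and at least as high as the scarp toe as platform.

```python
import numpy as np

# 1-D transect sketch of the TIP idea (illustrative only): the scarp is
# the point of locally maximal slope; cells landward of it and at or above
# the scarp elevation are treated as marsh platform.

def classify_transect(elev):
    """Return a boolean mask marking platform cells on a shore-to-land transect."""
    slope = np.abs(np.gradient(elev))
    scarp = int(np.argmax(slope))       # steepest point = scarp
    platform_floor = elev[scarp]        # fill upward from the scarp
    mask = np.zeros(elev.shape, dtype=bool)
    mask[scarp:] = elev[scarp:] >= platform_floor
    return mask

# Tidal flat (~0.1 m), subvertical scarp, subhorizontal platform (~1.5 m):
z = np.array([0.10, 0.12, 0.11, 0.13, 0.80, 1.45, 1.50, 1.48, 1.52])
print(classify_transect(z))
```

The real method works on a slope raster and flood-fills in 2-D, but the classification premise is the same.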
Unsere Würde in Euren Händen
(2024)
A better understanding of precipitation dynamics in the Indian subcontinent is required since India's society depends heavily on reliable monsoon forecasts. We introduce a non-linear, multiscale approach, based on wavelets and event synchronization, for unravelling teleconnection influences on precipitation. We consider those climate patterns with the highest relevance for Indian precipitation. Our results suggest significant influences which are not well captured by only the wavelet coherence analysis, the state-of-the-art method in understanding linkages at multiple timescales. We find substantial variation across India and across timescales. In particular, El Niño–Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD) mainly influence precipitation in the south-east at interannual and decadal scales, respectively, whereas the North Atlantic Oscillation (NAO) has a strong connection to precipitation, particularly in the northern regions. The effect of the Pacific Decadal Oscillation (PDO) stretches across the whole country, whereas the Atlantic Multidecadal Oscillation (AMO) influences precipitation particularly in the central arid and semi-arid regions. The proposed method provides a powerful approach for capturing the dynamics of precipitation and, hence, helps improve precipitation forecasting.
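Event synchronization, one building block of the approach above, counts near-coincident extreme events in two series. A simplified sketch (using a fixed coincidence window rather than the adaptive local window of the full method; the day indices are made up):

```python
# Simplified event-synchronization count: the fraction of events in series A
# that have a partner event in series B within a lag window tau.
# (Sketch only; the full method uses an adaptive, locally defined window.)

def event_sync(events_a, events_b, tau=2):
    """Fraction of events in A with a coincident event in B (|lag| <= tau)."""
    hits = sum(any(abs(a - b) <= tau for b in events_b) for a in events_a)
    return hits / len(events_a) if events_a else 0.0

# Extreme-rainfall days at two grid cells (day indices):
print(event_sync([3, 17, 42, 60], [4, 18, 55]))  # 2 of 4 events coincide
```

Combined with a wavelet decomposition, such counts can be evaluated separately at interannual and decadal timescales, as done in the paper.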
The Big Five personality traits play a major role in student achievement. There is consistent evidence that students who are more conscientious receive better teacher-assigned grades in secondary school. However, research often does not support the claim that more conscientious students similarly achieve higher scores in domain-specific standardized achievement tests. Based on the Invest-and-Accrue Model, we argue that conscientiousness explains to some extent why certain students receive better grades despite similar academic accomplishments (i.e., similar scores in domain-specific standardized achievement tests). The present study therefore examines to what extent the relationship between student personality and teacher-assigned grades consists of direct as opposed to indirect associations (via subject-specific standardized test scores). We used a representative sample of 14,710 ninth-grade students to estimate these direct and indirect pathways in mathematics and German. Structural equation models showed that test scores explained between 8 and 11% of the variance in teacher-assigned grades in mathematics and German. The Big Five personality traits additionally explained between 8 and 10% of the variance in grades. Finally, the personality-grade relationship consisted of direct (0.02 ≤ |β| ≤ 0.27) and indirect associations via test scores (0.01 ≤ |β| ≤ 0.07). Conscientiousness explained discrepancies between teacher-assigned grades and students' scores in domain-specific standardized tests to a greater extent than any of the other Big Five personality traits. Our findings suggest that more conscientious students may invest more effort to accomplish classroom goals, but fall short of mastery.
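The direct/indirect decomposition follows standard mediation logic: the indirect path is the product of (trait → test score) and (test score → grade), the direct path is the trait coefficient that remains once the score is controlled for. A toy sketch with simulated data (not the study's sample or model):

```python
import numpy as np

# Toy mediation decomposition: regress grade on trait and test score; the
# indirect path is a*b = (trait -> score) x (score -> grade | trait), the
# direct path is the remaining trait coefficient. Data are simulated.
rng = np.random.default_rng(5)
n = 5000
trait = rng.standard_normal(n)                 # e.g. conscientiousness
score = 0.3 * trait + rng.standard_normal(n)   # standardized test score
grade = 0.2 * trait + 0.4 * score + rng.standard_normal(n)

a = np.polyfit(trait, score, 1)[0]             # trait -> score
X = np.column_stack([trait, score, np.ones(n)])
b_direct, b_score, _ = np.linalg.lstsq(X, grade, rcond=None)[0]
print(round(b_direct, 2), round(a * b_score, 2))  # direct ≈ 0.2, indirect ≈ 0.12
```

In the paper the same decomposition is estimated within a structural equation model rather than by two-step regression.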
Anomalous diffusion or, more generally, anomalous transport, with nonlinear dependence of the mean-squared displacement on the measurement time, is ubiquitous in nature. It has been observed in processes ranging from microscopic movement of molecules to macroscopic, large-scale paths of migrating birds. Using data from multiple empirical systems, spanning 12 orders of magnitude in length and 8 orders of magnitude in time, we employ a method to detect the individual underlying origins of anomalous diffusion and transport in the data. This method decomposes anomalous transport into three primary effects: long-range correlations (“Joseph effect”), fat-tailed probability density of increments (“Noah effect”), and nonstationarity (“Moses effect”). We show that such a decomposition of real-life data allows us to infer nontrivial behavioral predictions and to resolve open questions in the fields of single-particle tracking in living cells and movement ecology.
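The starting point of any such analysis is the anomalous exponent α of the mean-squared displacement, MSD(t) ~ t^α. A rough sketch of estimating α from a single 1-D trajectory by log-log regression (the Joseph/Noah/Moses decomposition itself requires further estimators not shown here):

```python
import numpy as np

# Estimate the anomalous-diffusion exponent alpha from one trajectory via
# the time-averaged MSD and a log-log linear fit (sketch, not the paper's
# full decomposition method).

def msd_exponent(x, max_lag=50):
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return alpha

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(10_000))  # ordinary Brownian motion
print(round(msd_exponent(x), 2))            # close to 1 (normal diffusion)
```

Subdiffusion gives α < 1, superdiffusive transport α > 1; the decomposition then attributes deviations from α = 1 to correlations, fat tails, or nonstationarity.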
A detailed investigation of the energy levels of perylene-3,4,9,10-tetracarboxylic tetraethylester, as a representative compound for the whole family of perylene esters, was performed. Electrochemical measurements revealed that one oxidation and two reductions take place. The bandgaps determined via the electrochemical approach are in good agreement with the optical bandgap obtained from the absorption spectra via a Tauc plot. In addition, absorption spectra recorded as a function of the electrochemical potential formed the basis for extensive quantum-chemical calculations of the neutral, monoanionic, and dianionic molecules. For this purpose, calculations based on density functional theory were compared with post-Hartree-Fock methods, and the CAM-B3LYP functional proved to be the most reliable choice for the calculation of absorption spectra. Furthermore, spectral features found experimentally could be reproduced with vibronic calculations, which allowed us to understand their origins. In particular, the two lowest-energy absorption bands of the anion are not caused by absorption of two distinct electronic states, as might have been expected from vertical excitation calculations; rather, both states exhibit a strong vibronic progression that contributes to both bands.
Universitat Politècnica de València’s Experience with EDX MOOC Initiatives During the Covid Lockdown
(2021)
In March 2020, when massive lockdowns started to be enforced around the world to contain the spread of the COVID-19 pandemic, edX launched two initiatives to help students around the world by providing free certificates for its courses: RAP, for member institutions, and OCE, for any accredited academic institution. In this paper we analyze how Universitat Politècnica de València (UPV) contributed its courses to both initiatives, providing almost 14,000 free certificate codes in total, and how UPV used the RAP initiative as a customer, describing the mechanism used to distribute more than 22,000 codes for free certificates to more than 7,000 UPV community members, which led to the achievement of more than 5,000 free certificates. We also discuss the results of a post-initiative survey answered by 1,612 UPV members about 3,241 edX courses, in which they reported a satisfaction of 4.69 out of 5 with the initiative.
We analyze historical data of stock-market prices for multiple financial indices using the concept of delay-time averaging for the financial time series (FTS). The region of validity of our recent theoretical predictions [Cherstvy A G et al 2017 New J. Phys. 19 063045] for the standard and delayed time-averaged mean-squared 'displacements' (TAMSDs) of the historical FTS is extended to all lag times. As the first novel element, we perform extensive computer simulations of the stochastic differential equation describing geometric Brownian motion (GBM) which demonstrate a quantitative agreement with the analytical long-term price-evolution predictions in terms of the delayed TAMSD (for all stock-market indices in crisis-free times). Secondly, we present a robust procedure of determination of the model parameters of GBM via fitting the features of the price-evolution dynamics in the FTS for stocks and cryptocurrencies. The employed concept of single-trajectory-based time averaging can serve as a predictive tool (proxy) for a mathematically based assessment and rationalization of probabilistic trends in the evolution of stock-market prices.
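The two ingredients named above, a simulated GBM price path and its single-trajectory time-averaged MSD, can be sketched directly. Parameter values below are illustrative, not fitted to any index:

```python
import numpy as np

# Simulate geometric Brownian motion, dS = mu*S*dt + sigma*S*dW, via its
# exact log-space solution, and compute the single-trajectory time-averaged
# mean-squared 'displacement' (TAMSD) of the price series.

def simulate_gbm(s0, mu, sigma, dt, n_steps, rng):
    dw = rng.standard_normal(n_steps) * np.sqrt(dt)
    log_s = np.log(s0) + np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dw)
    return np.concatenate([[s0], np.exp(log_s)])

def tamsd(s, lag):
    """Time-averaged mean-squared 'displacement' at a given lag."""
    return np.mean((s[lag:] - s[:-lag]) ** 2)

rng = np.random.default_rng(42)
s = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, dt=1 / 252, n_steps=2520, rng=rng)
print([round(tamsd(s, lag), 1) for lag in (1, 5, 25)])
```

Fitting the lag dependence of such TAMSD curves to the analytical GBM predictions is the parameter-determination procedure the paper describes.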
Stochastic models based on random diffusivities, such as the diffusing-diffusivity approach, are popular concepts for the description of non-Gaussian diffusion in heterogeneous media. Studies of these models typically focus on the moments and the displacement probability density function. Here we develop the complementary power spectral description for a broad class of random-diffusivity processes. In our approach we cater for typical single particle tracking data in which a small number of trajectories with finite duration are garnered. Apart from the diffusing-diffusivity model we study a range of previously unconsidered random-diffusivity processes, for which we obtain exact forms of the probability density function. These new processes are different versions of jump processes as well as functionals of Brownian motion. The resulting behaviour subtly depends on the specific model details. Thus, the central part of the probability density function may be Gaussian or non-Gaussian, and the tails may assume Gaussian, exponential, log-normal, or even power-law forms. For all these models we derive analytically the moment-generating function for the single-trajectory power spectral density. We establish the generic 1/f²-scaling of the power spectral density as function of frequency in all cases. Moreover, we establish the probability density for the amplitudes of the random power spectral density of individual trajectories. The latter functions reflect the very specific properties of the different random-diffusivity models considered here. Our exact results are in excellent agreement with extensive numerical simulations.
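The single-trajectory power spectral density and its generic 1/f² scaling can be demonstrated already for plain Brownian motion. A sketch (illustrative normalization; not the paper's analytical derivation):

```python
import numpy as np

# Single-trajectory periodogram S(f) = |FFT of x(t)|^2 * dt / T for one
# finite Brownian path; the generic 1/f^2 decay is visible in a log-log fit.

def single_traj_psd(x, dt=1.0):
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=dt)
    psd = np.abs(np.fft.rfft(x) * dt) ** 2 / (n * dt)
    return freqs[1:], psd[1:]  # drop the zero-frequency bin

rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(2**14))   # Brownian motion
f, p = single_traj_psd(x)
slope, _ = np.polyfit(np.log(f[10:1000]), np.log(p[10:1000]), 1)
print(round(slope, 1))                      # log-log slope should be near -2
```

For the random-diffusivity models discussed above, the paper shows the same 1/f² frequency scaling, while the amplitude statistics distinguish the models.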
Efficiency is central to understanding the communicative and cognitive underpinnings of language. However, efficiency management is a complex mechanism in which different efficiency effects (articulatory, processing and planning ease, mental accessibility, informativity, and online as well as offline efficiency effects) conspire to yield the coding of linguistic signs. While we do not yet fully understand how these different effects interact, we argue that universal attractors are an important component of any dynamic theory of efficiency aimed at predicting efficiency effects across languages. Attractors are defined as universal states around which language evolution revolves. Methodologically, we approach efficiency from a cross-linguistic perspective on the basis of a worldwide sample of 383 languages from 53 families, balancing all six macro-areas (Eurasia, North and South America, Australia, Africa, and Oceania). We explore the grammatical domain of verbal person-number subject indexes. We claim that there is an attractor state in this domain towards which languages tend to develop and which they tend not to leave if they comply with the attractor in earlier stages of their evolution. The attractor is characterized by different lengths for each person-number combination, structured along Zipf's predictions. Moreover, the attractor strongly prefers non-compositional, cumulative coding of person and number. On the basis of these and other properties of the attractor, we conclude that two efficiency pressures are most powerful in this domain: the drive towards less processing effort and the drive towards less articulatory effort. The latter, however, is overridden by the pressure for constant information flow. The drive towards lower lexicon complexity and memory costs is a weaker efficiency pressure for this grammatical category due to its order of frequency.
Since COVID-19 became a pandemic, many studies are being conducted to get a better understanding of the disease itself and its spread. One crucial indicator is the prevalence of SARS-CoV-2 infections. Since this measure is an important foundation for political decisions, its estimate must be reliable and unbiased. This paper presents reasons for biases in prevalence estimates due to unit nonresponse in typical studies. Since it is difficult to avoid bias in situations with mostly unknown nonresponse mechanisms, we propose the maximum amount of bias as one measure to assess the uncertainty due to nonresponse. An interactive web application is presented that calculates the limits of such a conservative unit nonresponse confidence interval (CUNCI).
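The "maximum amount of bias" logic rests on worst-case bounds: if part of the sample does not respond, the nonrespondents could in the extreme all be negative or all be positive. A minimal sketch of these bounds (not the web application's exact CUNCI computation, which also incorporates sampling error):

```python
# Worst-case bounds on prevalence under unit nonresponse: with n_sampled
# persons invited, n_respondents tested and n_positive positives, the
# nonrespondents may all be negative (lower bound) or all positive (upper).

def prevalence_bounds(n_sampled, n_respondents, n_positive):
    lower = n_positive / n_sampled                                # all negative
    upper = (n_positive + n_sampled - n_respondents) / n_sampled  # all positive
    return lower, upper

# 1000 invited, 600 respond, 30 positive:
print(prevalence_bounds(1000, 600, 30))  # (0.03, 0.43)
```

The width of this interval grows with the nonresponse rate, which is exactly why low response rates make prevalence estimates so uncertain.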
Introduction
Balance is vital for human health, and experiments have been conducted to measure the mechanisms of postural control, for example by studying reflex responses to simulated perturbations. Such studies are frequent in walking but less common in running, and an understanding of reflex responses to trip-like disturbances could enhance our understanding of human gait and improve approaches to training and rehabilitation. Therefore, the primary aim of this study was to investigate the technical validity and reliability of a treadmill running protocol with perturbations. A further exploratory aim was to evaluate the associated neuromuscular reflex responses to the perturbations in the lower limbs.
Methods
Twelve healthy participants completed a test-retest (2 weeks apart) of a running protocol (9 km/h) in which 30 unilateral perturbations were executed via the treadmill belts (presets: 2.0 m/s amplitude; 150 ms delay post-heel contact; 100 ms duration). Validity of the perturbations was assessed via comparison of means ± SD, the percentage error between the preset and recorded perturbation characteristics (PE%), and the coefficient of variation (CV%). Test-retest reliability (TRV%) and Bland-Altman analysis (BLA; bias ± 1.96 × SD) were calculated to assess reliability. To measure reflex activity, electromyography (EMG) was applied to both legs. EMG amplitudes (root mean square, normalized to unperturbed strides) and latencies [ms] were analysed descriptively.
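The validity and reliability metrics named above can be sketched as follows; the recorded delays used in the example are made-up numbers, not the study's data:

```python
import numpy as np

# Sketch of the metrics: percentage error against a preset, coefficient of
# variation of recorded values, and Bland-Altman bias with limits of
# agreement between test and retest sessions (illustrative numbers only).

def percentage_error(preset, recorded):
    return abs(np.mean(recorded) - preset) / preset * 100

def coeff_variation(recorded):
    return np.std(recorded, ddof=1) / np.mean(recorded) * 100

def bland_altman(test, retest):
    diff = np.asarray(test, float) - np.asarray(retest, float)
    return np.mean(diff), 1.96 * np.std(diff, ddof=1)  # bias, limits of agreement

delays_ms = [104, 106, 103, 107, 105]              # recorded vs. 150 ms preset
print(round(percentage_error(150, delays_ms), 1))  # 30.0
```

A large PE% with a small CV%, as in this example, indicates a systematic offset from the preset rather than noisy execution.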
Results
Left-side perturbation amplitude was 1.9 ± 0.1 m/s, delay 105 ± 2 ms, and duration 78 ± 1 ms. Right-side perturbation amplitude was 1.9 ± 0.1 m/s, delay 118 ± 2 ms, and duration 78 ± 1 ms. PE% ranged from 5% to 30% for the recorded perturbations. CV% of the perturbations ranged from 19.5% to 76.8%. TRV% for the perturbations ranged from 6.4% to 16.6%. BLA for the left side was amplitude: 0.0 ± 0.3 m/s, delay: 0 ± 17 ms, duration: 2 ± 13 ms; for the right side it was amplitude: 0.1 ± 0.7 m/s, delay: 4 ± 40 ms, duration: 1 ± 35 ms. EMG amplitudes ranged from 175 ± 141% to 454 ± 359% in both limbs. Latencies were 109 ± 12 to 116 ± 23 ms in the tibialis anterior, and 128 ± 49 to 157 ± 20 ms in the biceps femoris.
Discussion
Generally, this study indicated sufficient validity and reliability of the current setup considering the technical challenges and limitations, although the reliability of the right-sided perturbations could be questioned. The protocol provoked reflex responses in the lower extremities, especially in the leading leg. Acute neuromusculoskeletal adjustments to the perturbations could be studied and compared in clinical and healthy running populations, and the protocol could be utilised to monitor chronic adaptations to interventions over time.
The passive and active motion of micron-sized tracer particles in crowded liquids and inside living biological cells is ubiquitously characterised by 'viscoelastic' anomalous diffusion, in which the increments of the motion feature long-ranged negative and positive correlations. While viscoelastic anomalous diffusion is typically modelled by a Gaussian process with correlated increments, so-called fractional Gaussian noise, an increasing number of systems are reported, in which viscoelastic anomalous diffusion is paired with non-Gaussian displacement distributions. Following recent advances in Brownian yet non-Gaussian diffusion we here introduce and discuss several possible versions of random-diffusivity models with long-ranged correlations. While all these models show a crossover from non-Gaussian to Gaussian distributions beyond some correlation time, their mean squared displacements exhibit strikingly different behaviours: depending on the model crossovers from anomalous to normal diffusion are observed, as well as a priori unexpected dependencies of the effective diffusion coefficient on the correlation exponent. Our observations of the non-universality of random-diffusivity viscoelastic anomalous diffusion are important for the analysis of experiments and a better understanding of the physical origins of 'viscoelastic yet non-Gaussian' diffusion.
Year-to-year variations in crop yields can have major impacts on the livelihoods of subsistence farmers and may trigger significant global price fluctuations, with severe consequences for people in developing countries. Fluctuations can be induced by weather conditions, management decisions, weeds, diseases, and pests. Although an explicit quantification and deeper understanding of weather-induced crop-yield variability is essential for adaptation strategies, so far it has only been addressed by empirical models. Here, we provide conservative estimates of the fraction of reported national yield variabilities that can be attributed to weather by state-of-the-art, process-based crop model simulations. We find that observed weather variations can explain more than 50% of the variability in wheat yields in Australia, Canada, Spain, Hungary, and Romania. For maize, weather sensitivities exceed 50% in seven countries, including the United States. The explained variance exceeds 50% for rice in Japan and South Korea and for soy in Argentina. Avoiding water stress by simulating yields assuming full irrigation shows that water limitation is a major driver of the observed variations in most of these countries. Identifying the mechanisms leading to crop-yield fluctuations is not only fundamental for dampening fluctuations, but is also important in the context of the debate on the attribution of loss and damage to climate change. Since process-based crop models not only account for weather influences on crop yields, but also provide options to represent human-management measures, they could become essential tools for differentiating these drivers, and for exploring options to reduce future yield fluctuations.
Worldwide, companies are increasingly making claims about their current climate efforts and their future mitigation commitments. These claims tend to be underpinned by carbon credits issued in voluntary carbon markets to offset emissions. Corporate climate claims are largely unregulated which means that they are often (perceived to be) misleading and deceptive. As such, corporate climate claims risk undermining, rather than contributing to, global climate mitigation. This paper takes as its point of departure the proposition that a better understanding of corporate climate claims is needed to govern such claims in a manner that adequately addresses potential greenwashing risks. To that end, the paper reviews the nascent literature on corporate climate claims relying on the use of voluntary carbon credits. Drawing on the reviewed literature, three key dimensions of corporate climate claims as related to carbon credits are discussed: 1) the intended use of carbon credits: offsetting versus non-offsetting claims; 2) the framing and meaning of headline terms: net-zero versus carbon neutral claims; and 3) the status of the claim: future aspirational commitments versus stated achievements. The paper thereby offers a preliminary categorization of corporate climate claims and discusses risks associated with and governance implications for each of these categories.
Intuitively, strongly constraining contexts should lead to stronger probabilistic representations of sentences in memory. Encountering unexpected words could therefore be expected to trigger costlier shifts in these representations than expected words. However, psycholinguistic measures commonly used to study probabilistic processing, such as the N400 event-related potential (ERP) component, are sensitive to word predictability but not to contextual constraint. Some research suggests that constraint-related processing cost may be measurable via an ERP positivity following the N400, known as the anterior post-N400 positivity (PNP). The PNP is argued to reflect update of a sentence representation and to be distinct from the posterior P600, which reflects conflict detection and reanalysis. However, constraint-related PNP findings are inconsistent. We sought to conceptually replicate Federmeier et al. (2007) and Kuperberg et al. (2020), who observed that the PNP, but not the N400 or the P600, was affected by constraint at unexpected but plausible words. Using a pre-registered design and statistical approach maximising power, we demonstrated a dissociated effect of predictability and constraint: strong evidence for predictability but not constraint in the N400 window, and strong evidence for constraint but not predictability in the later window. However, the constraint effect was consistent with a P600 and not a PNP, suggesting increased conflict between a strong representation and unexpected input rather than greater update of the representation. We conclude that either a simple strong/weak constraint design is not always sufficient to elicit the PNP, or that previous PNP constraint findings could be an artifact of smaller sample size.
The growing worldwide impact of flood events has motivated the development and application of global flood hazard models (GFHMs). These models have become useful tools for flood risk assessment and management, especially in regions where little local hazard information is available. One of the key uncertainties associated with GFHMs is the estimation of extreme flood magnitudes to generate flood hazard maps. In this study, the 1-in-100 year flood (Q100) magnitude was estimated using flow outputs from four global hydrological models (GHMs) and two global flood frequency analysis datasets for 1350 gauges across the conterminous US. The annual maximum flows of the observed and modelled time series of streamflow were bootstrapped to evaluate the sensitivity of the underlying data to extrapolation. Results show that there are clear spatial patterns of bias associated with each method. GHMs show a general tendency to overpredict at Western US gauges and underpredict at Eastern US gauges. The GloFAS and HYPE models underpredict Q100 by more than 25% in 68% and 52% of gauges, respectively. The PCR-GLOBWB and CaMa-Flood models overestimate Q100 by more than 25% at 60% and 65% of gauges in the West and Central US, respectively. The global frequency analysis datasets have spatial variabilities that differ from the GHMs. We found that river basin area and topographic elevation explain some of the spatial variability in predictive performance found in this study. However, no single model or method performs best everywhere, and we therefore recommend that a weighted ensemble of predictions of extreme flood magnitudes be used for large-scale flood hazard assessment.
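The bootstrap sensitivity test can be illustrated with a standard flood-frequency recipe: fit an extreme-value distribution to annual maxima, read off the 100-year return level, and resample the record to gauge extrapolation uncertainty. The sketch below uses a Gumbel fit by the method of moments with synthetic data; the study's actual frequency-analysis choices may differ.

```python
import numpy as np

# Gumbel fit (method of moments) to annual maximum flows, 1-in-100 year
# return level, and a bootstrap of the record (synthetic data, sketch only).
EULER_GAMMA = 0.5772156649

def gumbel_q100(annual_maxima):
    am = np.asarray(annual_maxima, dtype=float)
    beta = np.std(am, ddof=1) * np.sqrt(6) / np.pi   # scale
    mu = np.mean(am) - EULER_GAMMA * beta            # location
    p = 1 - 1 / 100                                  # non-exceedance probability
    return mu - beta * np.log(-np.log(p))

rng = np.random.default_rng(7)
am = rng.gumbel(loc=500, scale=120, size=40)         # synthetic 40-year record
boot = [gumbel_q100(rng.choice(am, size=am.size)) for _ in range(1000)]
print(round(gumbel_q100(am)), round(np.percentile(boot, 5)), round(np.percentile(boot, 95)))
```

The spread between the bootstrap percentiles shows how sensitive a 100-year estimate is when extrapolated from a few decades of annual maxima.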
Uncertainty in climate change impact studies for irrigated maize cropping systems in southern Spain
(2022)
This study investigates the main drivers of uncertainties in simulated irrigated maize yield under historical conditions as well as scenarios of increased temperatures and altered irrigation water availability.
Using the APSIM, MONICA, and SIMPLACE crop models, we quantified the relative contributions of three irrigation water allocation strategies, three sowing dates, and three maize cultivars to the uncertainty in simulated yields.
The water allocation strategies were derived from historical records of farmers' allocation patterns in the drip-irrigation scheme of the Genil-Cabra region, Spain (2014-2017).
By considering combinations of allocation strategies, the adjusted R² values (indicating the degree of agreement between simulated and observed yields) increased by 29% compared to the unrealistic assumption of only near-optimal or deficit irrigation scheduling. A factor decomposition analysis based on historical climate showed that the irrigation strategy was the main driver of uncertainty in simulated yields (66%).
However, under temperature-increase scenarios, the contributions of crop model and cultivar choice to the uncertainty in simulated yields were as important as the irrigation strategy. This was partly due to differences in model structure in processes related to temperature responses.
Our study calls for including information on the irrigation strategies actually applied by farmers in order to reduce the uncertainty in simulated yields at the field scale.
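The factor decomposition idea can be sketched as a simple variance decomposition over a full factorial of simulated yields: the share attributable to each factor is the variance of its marginal means relative to the total variance. The numbers below are made up and the decomposition is simplified (main effects only, no interactions), so it is an illustration of the concept rather than the study's method.

```python
import numpy as np

# Main-effect variance shares from a full factorial of simulated yields,
# indexed by (model, cultivar, irrigation). Illustrative data only.

def main_effect_shares(yields):
    total = np.var(yields)
    shares = []
    for axis_keep in range(yields.ndim):
        other = tuple(a for a in range(yields.ndim) if a != axis_keep)
        shares.append(np.var(yields.mean(axis=other)) / total)
    return shares

rng = np.random.default_rng(3)
base = rng.normal(10.0, 0.2, size=(3, 3, 3))   # t/ha, small model/cultivar noise
base += np.array([0.0, 1.0, 3.0])              # strong effect of irrigation axis
print([round(s, 2) for s in main_effect_shares(base)])
```

With the dominant irrigation effect built into the toy data, the last factor captures almost all of the variance, mirroring the 66% share reported for irrigation strategy under historical climate.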
International shipping hopes to cut transport costs by more than 20% through the development of unmanned ships, which are monitored only by personnel in shore-based control centres and otherwise operate largely autonomously, powered by electric motors and solar energy and equipped with self-learning navigation software. This advancing technical development will pose particular challenges for the international law of the sea. Against this background, the work primarily examines the compatibility of such ships with the United Nations Convention on the Law of the Sea. First, a definition of "ship" for the Convention is developed and the applicability of the Convention to autonomous ships is examined. The work then addresses problem areas such as compliance with duties by these ships, the need for special protective rights, especially regarding coercive measures taken on board by coastal states, and the applicability of the existing piracy provisions to these ships. The work further raises the question of whether the community of states has a duty under the Convention to promote unmanned ships, particularly with a view to the protection of the marine environment. Finally, the cyber-security measures required for this particular type of ship are addressed. Overall, the analysis shows that the Convention, with manageable adjustments, can readily be applied to autonomous ships.
Digital research data are becoming increasingly important and pose new challenges for academic institutions and their researchers. The term research data management covers all activities associated with preparing, storing, archiving, and publishing research data. Because handling research data involves generic, disciplinary, legal, and technical aspects, researchers need to be supported by a broad range of services, from information and advice to discipline-specific standards and IT infrastructures.
This report first clarifies the starting situation and the terminology surrounding research data management, and then presents the most important national and international strategies and developments. Guidelines and recommendations for research data (management) provide the framework within which all stakeholders can move towards sustainable research data management. Initiatives at the level of the federal states lay the groundwork and support the cultural shift towards open data.
A research data strategy for Brandenburg must foreground the importance of digital research data as a scholarly asset, both by raising awareness of it and by agreeing on concrete requirements and guidelines at the state and institutional levels. Good scientific practice is supported by a suitable infrastructure that takes into account the heterogeneous needs and circumstances of all parties involved. The goals should be the institutionalisation of research data management at the universities and cooperation among Brandenburg's institutions.
Ulcerative colitis (UC) is one of the inflammatory bowel diseases, and patients with moderate to severe UC can be treated with anti-tumour necrosis factor alpha monoclonal antibodies, including infliximab (IFX). Even though IFX has been used to treat UC patients for over a decade, many gaps remain in modelling the pharmacokinetics (PK) of IFX in this population. This is even more true for patients with acute severe UC (ASUC), for whom early prediction of IFX PK could greatly improve treatment outcomes. This review therefore aims to compile and analyse published population PK models of IFX in UC and ASUC patients, and to assess the current knowledge of how disease activity affects IFX PK. To this end, a semi-systematic literature search was conducted, from which 26 publications including a population PK model analysis of UC patients receiving IFX therapy were selected. Among those, only four developed a model specifically for UC patients, and only three study populations included severe UC patients. Investigations of the impact of disease activity on PK were reported for only 4 of the 14 models selected. In addition, the lack of reported model code and of assessments of predictive performance makes the use of published models in a clinical setting challenging. More comprehensive investigation of PK in UC and ASUC is therefore needed, along with more adequate reporting of developed models and their evaluation, so that they can be applied in a clinical setting.
The dance creativity test is a validated instrument based on dance-specific tasks and designed for the differentiated, standardised assessment of dance creativity in children aged 8 to 12. It can be used not only to address questions about the state and development of creative dance abilities in childhood, but also to provide valuable information for optimising training, support, and teaching measures. The following creative dance abilities are assessed: 1) variety and originality in locomotion and body positions, and 2) richness of ideas, variety, and originality in the composition of movement patterns and choreographies. The test can be administered to larger groups with minimal material effort, is not time-limited, and makes it possible to identify different performance levels. It offers researchers and teachers a valuable way to analyse and foster children's creative dance abilities.
In the present paper we empirically investigate the psychometric properties of some of the best-known statistical and logical cognitive illusions from the "heuristics and biases" research program of Daniel Kahneman and Amos Tversky, who nearly 50 years ago introduced fascinating brain teasers such as the famous Linda problem, the Wason card selection task, and so-called Bayesian reasoning problems (e.g., the mammography task). Since then, a great number of articles have been published that empirically examine single cognitive illusions, theoretically explain people's faulty thinking, or propose and experimentally implement measures to foster insight and to make these problems accessible to the human mind. Yet these problems have thus far usually been analysed empirically only at the level of individual items (e.g., by experimentally comparing participants' performance on different versions of one of these problems). In this paper, by contrast, we examine these illusions as a group and treat the ability to solve them as a psychological construct. Based on a sample of N = 2,643 Luxembourgish school students aged 16 to 18, we investigate the internal psychometric structure of these illusions (i.e., are they substantially correlated? Do they form a reflective or a formative construct?), their connection to related constructs (e.g., are they distinguishable from intelligence or mathematical competence in a confirmatory factor analysis?), and the question of which of a person's abilities predict the correct solution of these brain teasers (by means of a regression analysis).
The application of the fractional calculus in the mathematical modelling of relaxation processes in complex heterogeneous media has attracted a considerable amount of interest lately.
The reason for this is the successful implementation of fractional stochastic and kinetic equations in the studies of non-Debye relaxation.
In this work, we consider the rotational diffusion equation with a generalised memory kernel in the context of dielectric relaxation processes in a medium composed of polar molecules. We give an overview of existing models of non-exponential relaxation and introduce exponential resetting dynamics into the corresponding process.
The autocorrelation function and complex susceptibility are analysed in detail.
We show that stochastic resetting leads to a saturation of the autocorrelation function to a constant value, in contrast to the case without resetting, for which it decays to zero. The behaviour of the autocorrelation function, as well as the complex susceptibility in the presence of resetting, confirms that the dielectric relaxation dynamics can be tuned by an appropriate choice of the resetting rate.
The presented results are general and flexible, and they will be of interest for the theoretical description of non-trivial relaxation dynamics in heterogeneous systems composed of polar molecules.
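The saturation effect described above can be illustrated with a minimal simulation sketch. This is not the generalised-memory-kernel model of the paper, but the simplest related case: free rotational diffusion of a planar dipole angle with Poissonian resetting to its initial orientation, where a first-renewal argument predicts that the orientational autocorrelation saturates at r/(r + D) instead of decaying to zero. The parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch: rotational diffusion of a dipole angle theta with rotational
# diffusivity D, reset to the initial orientation theta = 0 at rate r.
# Without resetting, C(t) = <cos theta(t)> decays as exp(-D t); with
# resetting it saturates at the plateau r / (r + D).

rng = np.random.default_rng(0)
D, r = 1.0, 1.0                    # diffusivity and resetting rate (assumed)
dt, n_steps, n_traj = 0.01, 500, 20000

theta = np.zeros(n_traj)           # all trajectories start at theta = 0
for _ in range(n_steps):
    theta += np.sqrt(2.0 * D * dt) * rng.standard_normal(n_traj)
    theta[rng.random(n_traj) < r * dt] = 0.0   # Poissonian reset events

C_late = np.cos(theta).mean()      # autocorrelation at t = 5 >> 1/(r + D)
print(C_late, r / (r + D))         # simulated vs predicted plateau
```

At t = 5 the transient exp(-(r + D)t) has long decayed, so the simulated value sits near the plateau 0.5 rather than near zero, which is the qualitative signature of resetting discussed in the abstract.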
Tu felix Camelot nube!
(2020)
This article explores the ways in which the Yiddish Arthurian romance Viduvilt (sixteenth century) reworks its Middle High German model text, Wirnt von Grafenberg’s Wigalois (1210/1220), for an early modern Jewish audience. Through seemingly minor changes, the adaptor creates a story world in which family politics play an essential role and become the driving force behind the development of the story. Part of this change is the reevaluation of female figures, in particular mothers. In contrast to its model, the Arthurian knight in Viduvilt is drawn as a figure who relies and depends largely on the decisions made by mothers, who are portrayed as powerful matres familias.
Träume statt Theurgie
(2020)
In his work De insomniis (On Dreams), Synesios adopts a rather critical view of theurgy, resembling Porphyry’s attitude; his wording shows polemical exaggeration. His insistence on the usefulness of dream revelation for hunting might be read as a (not too serious) claim to the divine inspiration of his work κυνηγετικαί.