"Orfeo out of Care"
(2022)
The paper focuses on an example of multiple-step reception: the contribution of the classical story of Orpheus and Eurydice and the mediaeval lay Sir Orfeo to Tolkien’s work.
In the first part, I compare the lay with Virgilian and Ovidian versions of Orpheus’ myth. This comparison shows the anonymous author’s deep knowledge of the ancient texts and complex way of rewriting them through stealing and hybridization.
The lay was highly esteemed by Tolkien, who translated it and took inspiration from it while describing the Elven kingdom in The Hobbit and building the storyline of Beren and Lúthien in The Silmarillion. Through this key tale, Orpheus/Orfeo’s romance also has a deep influence on Aragorn and Arwen’s story in The Lord of the Rings. The most important element that Tolkien takes from the Sir Orfeo figuration of the ancient story is undoubtedly the insertion of a political theme: the link established between the recovery of the main character’s beloved and the return to royal responsibility.
The second part of the paper is, thus, dedicated to the reception of Sir Orfeo and the classical myth in Tolkien. It shows how in his work the different steps of the tradition of Orpheus’ story are co-present, creating an inextricable substrate of inspiration that nourishes his imagination.
The offense of receiving stolen goods (§ 259 StGB), highly relevant for the state law examination, has gained a new point of contention between criminal-law scholarship and the courts as a result of a recent decision of the German Federal Court of Justice (BGH): according to the BGH, the handover of the stolen item from the prior offender (or prior possessor) to the subsequent offender, obtained by deception, constitutes „procuring" within the meaning of the provision. The prevailing view in the academic literature disagrees. This article seeks to show that legal scholarship is right.
The paper investigates Tolkien’s narratives of decline through the lens of their classical ancestry. Narratives of decline are widespread in ancient culture, in both philosophical and literary discourses. They normally posit a gradual degradation (moral and ontological) from an idealized Golden Age, which went hand-in-hand with the increasing detachment of gods from mortal affairs. Narratives of decline are also at the core of Tolkien’s mythology, constituting yet another under-researched aspect of classical influence on Tolkien. Such classical narratives reverberate, e.g., in Tolkien’s division of Arda’s history into ages, from an idealized First Age filled with Joy and Light to a Third Age, described as “Twilight Age (…) the first of the broken and changed world” (Letters 131). More generally, these narratives are related to Tolkien’s well-known perception of history as a “long defeat” (Letters 195) and to that “heart-racking sense of the vanished past” which pervades Tolkien’s works – the emotion which, in his words, moved him “supremely” and which he found “small difficulty in evoking” (Letters 91). The paper analyses the reception of narratives of decline in Tolkien’s legendarium, pointing out similarities but also contrasts and differences, with the aim of discussing some key patterns of (classical) reception in Tolkien’s theory and practice (‘renewal’, ‘accommodation’, ‘focalization’).
Molecular excitons play a central role in processes of solar energy conversion, both natural and artificial. It is therefore no wonder that numerous experimental and theoretical investigations in the last decade, employing state-of-the-art spectroscopic techniques and computational methods, have been driven by the common aim to unravel exciton dynamics in multichromophoric systems. Theoretically, exciton (de)localization and transfer dynamics are most often modelled using either mixed quantum-classical approaches (e.g., trajectory surface hopping) or fully quantum mechanical treatments (either using model diabatic Hamiltonians or direct dynamics). Yet terms such as "exciton localization" or "exciton transfer" may bear different meanings in different works depending on the method in use (quantum-classical vs. fully quantum). Here, we relate different views on exciton (de)localization. For this purpose, we perform molecular surface hopping simulations on several tetracene dimers differing in the magnitude of their exciton coupling and carry out quantum dynamical as well as surface hopping calculations on a relevant model system. The molecular surface hopping simulations are done using an efficient long-range-corrected time-dependent density-functional tight-binding electronic structure method, allowing us to gain insight into different regimes of exciton dynamics in the studied systems.
Ground-penetrating radar (GPR) is a method that can provide detailed information about the near subsurface in sedimentary and carbonate environments.
The classical interpretation of GPR data (e.g., based on manual feature selection) is often labor-intensive and limited by the experience of the interpreter. Attribute-based approaches, typically used for seismic interpretation, can provide faster, more repeatable, and less biased interpretations. We have recorded a 3D GPR data set across a paleokarst breccia pipe in the Billefjorden area on Spitsbergen, Svalbard. After performing advanced processing, we compare the results of a classical GPR interpretation to the results of an attribute-based classification.
Our attribute classification incorporates a selection of dip and textural attributes as the input for a k-means clustering approach. Similar to the results of the classical interpretation, the resulting classes differentiate between undisturbed strata and breccias or fault zones.
The classes also reveal details inside the breccia pipe that are not discerned in the classical interpretation. We infer that the intrapipe GPR facies result from subtle differences, such as breccia lithology, clast size, or pore-space filling.
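The k-means attribute classification described above can be sketched as follows. This is a minimal illustration, not the authors' workflow: the two attribute dimensions (dip magnitude, texture measure), the synthetic feature values, and the two-class setup are all invented for the example.

```python
import random

def kmeans(points, centers, n_iter=100):
    """Plain Lloyd's algorithm on 2-D feature vectors: alternate
    nearest-center assignment and center updates until stable."""
    centers = [tuple(c) for c in centers]
    labels = []
    for _ in range(n_iter):
        labels = [min(range(len(centers)),
                      key=lambda j: (p[0] - centers[j][0]) ** 2
                                  + (p[1] - centers[j][1]) ** 2)
                  for p in points]
        new = []
        for j in range(len(centers)):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                new.append((sum(p[0] for p in members) / len(members),
                            sum(p[1] for p in members) / len(members)))
            else:
                new.append(centers[j])  # keep empty clusters in place
        if new == centers:
            break
        centers = new
    return labels, centers

# Synthetic attribute vectors per GPR sample: (dip magnitude, texture chaos).
# Undisturbed strata: low dip, uniform texture; breccia: high dip, chaotic.
rnd = random.Random(1)
undisturbed = [(rnd.gauss(0.1, 0.05), rnd.gauss(0.2, 0.05)) for _ in range(50)]
breccia = [(rnd.gauss(0.8, 0.05), rnd.gauss(0.9, 0.05)) for _ in range(50)]
samples = undisturbed + breccia

labels, centers = kmeans(samples, centers=[samples[0], samples[50]])
```

With well-separated attribute populations, the two recovered classes coincide with the two facies, mirroring how the clustering separates undisturbed strata from breccias in the study.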
The region of West Bohemia and Upper Palatinate belongs to the West Bohemian Massif. The study area is situated at the junction of three different Variscan tectonic units and hosts the ENE-WSW trending Ohre Rift as well as many different fault systems. The entire region is characterized by ongoing magmatic processes in the intra-continental lithospheric mantle, expressed by a series of phenomena including the occurrence of repeated earthquake swarms and massive degassing of mantle-derived CO2 in the form of mineral springs and mofettes. Ongoing active tectonics is mainly manifested by Cenozoic volcanism, represented by different Quaternary volcanic structures. All these phenomena make the Ohre Rift a unique target area for European intra-continental geo-scientific research. With magnetotelluric (MT) measurements we image the subsurface distribution of the electrical resistivity and map possible fluid pathways. Two-dimensional (2D) inversion results by Munoz et al. (2018) reveal a conductive channel in the vicinity of the earthquake swarm region that extends from the lower crust to the surface, forming a pathway for fluids into the region of the mofettes. A second conductive channel is present in the south of their model; however, their 2D inversions allow ambiguous interpretations of this feature. Therefore, we conducted a large 3D MT field experiment extending the study area towards the south. The 3D inversion result matches the known geology well, imaging different fluid/magma reservoirs at crust-mantle depth and mapping possible fluid pathways from the reservoirs to the surface that feed known mofettes and spas. A comparison of 3D and 2D inversion results suggests that the 2D inversion results are strongly influenced by 3D and off-profile structures. In this context, the new results indicate that the swarm earthquakes are located in the resistive host rock surrounding the conductive channels, a finding in line with observations, e.g., at the San Andreas Fault, California.
40Ar/39Ar dating of a hydrothermal pegmatitic buddingtonite–muscovite assemblage from Volyn, Ukraine
(2022)
We determined 40Ar/39Ar ages of buddingtonite, occurring together with muscovite, with the laser-ablation method. This is the first attempt to date the NH4-feldspar buddingtonite, which is typical of sedimentary-diagenetic environments in organic-rich sediments, or of hydrothermal environments associated with volcanic geyser systems. The sample is a hydrothermal breccia from the Paleoproterozoic pegmatite field of the Korosten Plutonic Complex, Volyn, Ukraine. A detailed characterization by optical methods, electron microprobe analyses, backscattered electron imaging, and IR analyses showed that the buddingtonite consists of euhedral-appearing platy crystals, tens of micrometers wide and 100 or more micrometers long, which in turn consist of fine-grained fibers of ≤ 1 μm thickness. The crystals are sector- and growth-zoned in terms of K-NH4-H3O content. The K content allows an age determination with the 40Ar/39Ar method, as does the accompanying muscovite, which is intimately intergrown with the buddingtonite. The determinations on muscovite yielded an age of 1491 ± 9 Ma, interpreted as the hydrothermal event forming the breccia. However, buddingtonite apparent ages ranged from 563 ± 14 Ma down to 383 ± 12 Ma, which we interpret as reset ages due to Ar loss from the fibrous buddingtonite crystals during later heating. We conclude that buddingtonite is suited for 40Ar/39Ar age determinations as a supplementary method, together with other methods and minerals; however, it requires a detailed mineralogical characterization, and the ages will likely represent minimum ages.
The shape and the actuation capability of state-of-the-art robotic devices typically rely on multimaterial systems combining geometry-determining materials and actuation components. Here, we present multifunctional 4D actuators processable by 3D printing, in which the actuator functionality is integrated into the shaped body. The materials are based on crosslinked poly(carbonate-urea-urethane) (PCUU) networks, synthesized in an integrated process applying reactive extrusion and subsequent water-based curing. Actuation capability could be added to the PCUU, prepared from aliphatic oligocarbonate diol, isophorone diisocyanate (IPDI) and water, in a thermomechanical programming process. When programmed with a strain of ε_prog = 1400%, the PCUU networks exhibited actuation apparent as a reversible elongation ε′_rev of up to 22%. In a gripper, a reversible bending ε′_rev,bend in the range of 37-60% was achieved when the actuation temperature (T_high) was varied between 45 °C and 49 °C. The integration of actuation and shape formation was demonstrated in two PCUU-based reversible fastening systems, which were able to hold weights of up to 1.1 kg. The multifunctional materials are thus interesting candidates for robotic applications where freedom in shape design and actuation is required, as well as for sustainable fastening systems.
50 Jahre Grundlagenvertrag (50 years of the Basic Treaty)
(2022)
The compound [Nb6Cl14(pyrazine)4]·2CH2Cl2 (1) was investigated for its suitability as a starting compound for new ligand-supported hexanuclear niobium cluster compounds. The synthesis, stability to air and increased temperature, solubility and usability for subsequent reactions of 1, and purification and separation of the reaction products are discussed. The compounds with cluster units [Nb6Cl14L4], where L = iso-quinoline N-oxides (2), 1,1-dimethylethylenediamines (3), or thiazoles (4), and [Nb6Cl14(PEt3)3.76(Et3PO)0.24][Nb6Cl14(MeCN)4]·4MeCN (5) are presented as follow-up products. The crystal structures of compounds 1-5 are analyzed, and the structures are discussed with respect to their intra- and intermolecular bonding situations and crystal packing. In addition to hydrogen bonds and π-π interactions, the appearance of chalcogen and halogen bonds and lone pair-π interactions between Nb6 cluster units was observed for the first time.
With the advent of increasingly powerful computational architectures, scientists use these possibilities to create simulations of ever-increasing size and complexity. Large-scale simulations of environmental systems require huge amounts of resources. Managing these in an operational way becomes increasingly complex and difficult to handle for individual scientists. State-of-the-art simulation infrastructures usually provide the necessary resources in a centralised setup, which often results in an all-or-nothing choice for the user. Here, we outline an alternative approach to handling this complexity, while rendering the use of high-performance hardware and large datasets still possible. It retains a number of desirable properties: (i) a decentralised structure, (ii) easy sharing of resources to promote collaboration and (iii) secure access to everything, including natural delegation of authority across levels and system boundaries. We show that the object capability paradigm covers these issues, and present the first steps towards developing a simulation infrastructure based on these principles.
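The object-capability idea mentioned above can be illustrated with a short sketch. This is our conceptual example, not the proposed infrastructure, and the `Dataset` and `ReadOnly` classes are hypothetical: authority is held as an unforgeable object reference, and delegation means handing on a (possibly attenuated) reference, with no ambient global access.

```python
class Dataset:
    """Full-authority object: holding a reference to it *is* the
    permission to both read and modify the data."""
    def __init__(self, values):
        self._values = list(values)
    def read(self):
        return list(self._values)
    def append(self, value):
        self._values.append(value)

class ReadOnly:
    """Attenuated capability: forwards read() only, so a collaborator
    given this reference can observe but never modify the data."""
    def __init__(self, dataset):
        self._dataset = dataset
    def read(self):
        return self._dataset.read()

full = Dataset([1, 2, 3])   # the owner keeps the full capability
shared = ReadOnly(full)     # and delegates only a read-only view
```

Delegation across system boundaries then reduces to passing `shared` on: whoever receives it gains exactly the read authority it embodies, and nothing more.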
A Cell-free Expression Pipeline for the Generation and Functional Characterization of Nanobodies
(2022)
Cell-free systems are well-established platforms for the rapid synthesis, screening, engineering and modification of all kinds of recombinant proteins, ranging from membrane proteins to soluble proteins, enzymes and even toxins. Within the antibody field, too, cell-free technology has gained considerable attention with respect to the clinical research pipeline, including antibody discovery and production. Besides the classical full-length monoclonal antibodies (mAbs), so-called "nanobodies" (Nbs) have come into focus. A Nb is the smallest naturally derived functional antibody fragment known and represents the variable domain (VHH, approximately 15 kDa) of a camelid heavy-chain-only antibody (HCAb). Based on their nanoscale size and their special structure, Nbs display striking advantages concerning their production, but also in their characteristics as binders, such as high stability, diversity, improved tissue penetration and access to cavity-like epitopes. The classical way to produce Nbs depends on the use of living cells as a production host. Though cell-based production is well established, it is still time-consuming, laborious and hardly amenable to high-throughput applications. Here, we present, for the first time to our knowledge, the synthesis of functional Nbs in a standardized mammalian cell-free system based on Chinese hamster ovary (CHO) cell lysates. Cell-free reactions were shown to be time-efficient and easy to handle, allowing for the "on demand" synthesis of Nbs. Taken together, we complement available methods and demonstrate a promising new system for Nb selection and validation.
Incorporation of noncanonical amino acids (ncAAs) with bioorthogonal reactive groups by amber suppression allows the generation of synthetic proteins with desired novel properties. Such modified molecules are in high demand for basic research and therapeutic applications such as cancer treatment and in vivo imaging. The positioning of the ncAA-responsive codon within the protein's coding sequence is critical in order to maintain protein function, achieve high yields of ncAA-containing protein, and allow effective conjugation. Cell-free ncAA incorporation is of particular interest due to the open nature of cell-free systems and their concurrent ease of manipulation. In this study, we report a straightforward workflow to assess ncAA positions with regard to incorporation efficiency and protein functionality in a Chinese hamster ovary (CHO) cell-free system. As a model, the well-established orthogonal translation components Escherichia coli tyrosyl-tRNA synthetase (TyrRS) and tRNATyr(CUA) were used to site-specifically incorporate the ncAA p-azido-l-phenylalanine (AzF) in response to UAG codons. A total of seven ncAA sites within an anti-epidermal growth factor receptor (EGFR) single-chain variable fragment (scFv), N-terminally fused to the red fluorescent protein mRFP1 and C-terminally fused to the green fluorescent protein sfGFP, were investigated for ncAA incorporation efficiency and impact on antigen binding. The characterized cell-free dual fluorescence reporter system allows screening for ncAA incorporation sites with high incorporation efficiency that maintain protein activity. It is parallelizable, scalable, and easy to operate. We propose that the established CHO-based cell-free dual fluorescence reporter system can be of particular interest for the development of antibody-drug conjugates (ADCs).
Next-generation sequencing methods provide comprehensive data for structural and functional analysis of the genome. Draft genomes with a low contig number and a high N50 value give insight into the structure of the genome and provide information for its annotation. In this study, we designed a pipeline that can be used to assemble prokaryotic draft genomes with a low number of contigs and a high N50 value. We combined two de novo assembly tools (SPAdes and IDBA-Hybrid) and evaluated the impact of this approach on the quality metrics of the assemblies. The pipeline was tested on raw short-read (< 300) sequence data for a total of 10 species from four different genera. To obtain the final draft genomes, we first assembled the sequences using SPAdes and identified a closely related organism from the 16S rRNA extracted from this assembly. The IDBA-Hybrid assembler was then used to obtain a second assembly, guided by the genome of the closely related organism. Finally, SPAdes was run again using the second assembly, produced by IDBA-Hybrid, as a hint. The results were evaluated using QUAST and BUSCO. The pipeline succeeded in reducing contig numbers and increasing N50 values in the draft genome assemblies while preserving their coverage.
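As a side note on the quality metrics used above, the N50 value can be computed as follows. This small helper is ours, not part of the described pipeline; the example contig lengths are invented.

```python
def n50(contig_lengths):
    """N50: the contig length at which the cumulative sum of lengths,
    sorted from longest to shortest, first reaches half of the total
    assembly length."""
    total = sum(contig_lengths)
    running = 0
    for length in sorted(contig_lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length

# Fewer, longer contigs give a higher N50 at the same total assembly size,
# which is why the pipeline targets both a low contig count and a high N50.
fragmented = [50] * 20               # 20 contigs, 1000 bp total
consolidated = [400, 300, 200, 100]  # 4 contigs, 1000 bp total
```

Here `n50(fragmented)` is far lower than `n50(consolidated)` even though both assemblies span the same total length.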
Genomic prediction has revolutionized crop breeding despite remaining issues of transferability of models to unseen environmental conditions and environments. Using endophenotypes rather than genomic markers makes it possible to build phenomic prediction models that can account, in part, for this challenge. Here, we compare and contrast genomic prediction and phenomic prediction models for 3 growth-related traits, namely, leaf count, tree height, and trunk diameter, from 2 coffee 3-way hybrid populations exposed to a series of treatment-inducing environmental conditions. The models are based on 7 different statistical methods built with genomic markers and chlorophyll a fluorescence (ChlF) data used as predictors. This comparative analysis demonstrates that the best-performing phenomic prediction models show higher predictability than the best genomic prediction models for the considered traits and environments in the vast majority of comparisons within 3-way hybrid populations. In addition, we show that phenomic prediction models are transferable between conditions, but to a lesser extent between populations, and we conclude that ChlF data can serve as alternative predictors in statistical models of coffee hybrid performance. Future directions will explore their combination with other endophenotypes to further improve the prediction of growth-related traits for crops.
A comparative whole-genome approach identifies bacterial traits for marine microbial interactions
(2022)
Luca Zoccarato, Daniel Sher et al. leverage publicly available bacterial genomes from marine and other environments to examine traits underlying microbial interactions.
Their results provide a valuable resource to investigate clusters of functional and linked traits to better understand marine bacteria community assembly and dynamics.
Microbial interactions shape the structure and function of microbial communities with profound consequences for biogeochemical cycles and ecosystem health. Yet, most interaction mechanisms are studied only in model systems and their prevalence is unknown. To systematically explore the functional and interaction potential of sequenced marine bacteria, we developed a trait-based approach, and applied it to 473 complete genomes (248 genera), representing a substantial fraction of marine microbial communities.
We identified genome functional clusters (GFCs) which group bacterial taxa with common ecology and life history. Most GFCs revealed unique combinations of interaction traits, including the production of siderophores (10% of genomes), phytohormones (3-8%) and different B vitamins (57-70%). Specific GFCs, comprising Alpha- and Gammaproteobacteria, displayed more interaction traits than expected by chance, and are thus predicted to preferentially interact synergistically and/or antagonistically with bacteria and phytoplankton. Linked trait clusters (LTCs) identify traits that may have evolved to act together (e.g., secretion systems, nitrogen metabolism regulation and B vitamin transporters), providing testable hypotheses for complex mechanisms of microbial interactions.
Our approach translates multidimensional genomic information into an atlas of marine bacteria and their putative functions, relevant for understanding the fundamental rules that govern community assembly and dynamics.
The quantification of the spatial propagation of extreme precipitation events is vital in water resources planning and disaster mitigation. However, quantifying these extreme events has always been challenging, as many traditional methods are insufficient to capture the nonlinear interrelationships between extreme event time series. Therefore, it is crucial to develop suitable methods for analyzing the dynamics of extreme events over a river basin with a diverse climate and complicated topography. Over the last decade, complex network analysis emerged as a powerful tool to study the intricate spatiotemporal relationships between many variables in a compact way. In this study, we employ two nonlinear concepts, event synchronization and edit distance, to investigate the extreme precipitation pattern in the Ganga river basin. We use the network degree to understand the spatial synchronization pattern of extreme rainfall and to identify essential sites in the river basin with respect to potential prediction skill. The study also attempts to quantify the influence of precipitation seasonality and topography on extreme events. The findings reveal that (1) the network degree decreases in the southwest to northwest direction, (2) the timing of the 50th percentile of precipitation within a year influences the spatial distribution of degree, (3) this timing is inversely related to elevation, and (4) lower elevation greatly influences the connectivity of the sites. The study highlights that edit distance could be a promising alternative for analyzing event-like data by incorporating event timing and amplitude when constructing complex networks of climate extremes.
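The event-synchronization idea behind the network construction can be sketched in a deliberately simplified form. This is our toy illustration, not the authors' method: the synchronization measure below, the time window `tau`, the edge threshold, and the event times are all invented for the example.

```python
def event_sync(a, b, tau=2):
    """Simplified synchronization score between two event-time series:
    number of near-coincident event pairs (within tau days), normalised
    by the geometric mean of the event counts."""
    hits = sum(1 for ta in a for tb in b if abs(ta - tb) <= tau)
    return hits / (len(a) * len(b)) ** 0.5

# Toy event days (extreme-rainfall occurrences) at three sites.
site1 = [10, 45, 120, 200]
site2 = [11, 44, 121, 199]   # nearly coincident with site1
site3 = [5, 70, 150, 260]    # largely unrelated

# Draw an edge when synchronization exceeds a threshold; a site's
# network degree is then its number of edges.
sites = {"s1": site1, "s2": site2, "s3": site3}
names = list(sites)
degree = {n: 0 for n in names}
for i, u in enumerate(names):
    for v in names[i + 1:]:
        if event_sync(sites[u], sites[v]) > 0.5:  # arbitrary edge threshold
            degree[u] += 1
            degree[v] += 1
```

In this toy network only `s1` and `s2` are connected, so they carry degree 1 while the unrelated site has degree 0, mirroring how spatial degree patterns single out synchronized regions in the basin.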
A comprehensive workflow to analyze ensembles of globally inverted 2D electrical resistivity models
(2022)
Electrical resistivity tomography (ERT) aims at imaging the subsurface resistivity distribution and provides valuable information for different geological, engineering, and hydrological applications. To obtain a subsurface resistivity model from measured apparent resistivities, stochastic or deterministic inversion procedures may be employed. Typically, the inversion of ERT data results in non-unique solutions; i.e., an ensemble of different models explains the measured data equally well. In this study, we perform inference analysis of model ensembles generated using a well-established global inversion approach to assess uncertainties related to the nonuniqueness of the inverse problem. Our interpretation strategy starts by establishing model selection criteria based on different statistical descriptors calculated from the data residuals. Then, we perform cluster analysis considering the inverted resistivity models and the corresponding data residuals. Finally, we evaluate model uncertainties and residual distributions for each cluster. To illustrate the potential of our approach, we use a particle swarm optimization (PSO) algorithm to obtain an ensemble of 2D layer-based resistivity models from a synthetic data example and a field data set collected in Loon-Plage, France. Our strategy performs well for both synthetic and field data and allows us to extract different plausible model scenarios with their associated uncertainties and data residual distributions. Although we demonstrate our workflow using 2D ERT data and a PSO-based inversion approach, the proposed strategy is general and can be adapted to analyze model ensembles generated from other kinds of geophysical data and using different global inversion approaches.
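The particle-swarm step behind such a global inversion can be sketched on a toy misfit function. This is our minimal illustration, not the authors' inversion code: the two-parameter "model", its bounds, and the inertia/acceleration coefficients are generic textbook choices.

```python
import random

def pso(misfit, bounds, n_particles=20, n_iter=200, seed=0):
    """Textbook particle swarm: each particle moves with inertia while being
    attracted to its personal best and to the swarm's best position."""
    rnd = random.Random(seed)
    dim = len(bounds)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [misfit(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rnd.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rnd.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = misfit(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Stand-in two-parameter misfit; the true "model" sits at (100, 10).
best, best_f = pso(lambda m: (m[0] - 100.0) ** 2 + (m[1] - 10.0) ** 2,
                   bounds=[(1.0, 1000.0), (1.0, 100.0)])
```

In the actual workflow the misfit would compare predicted and measured apparent resistivities, and the many particle positions visited form exactly the kind of model ensemble that the clustering and residual analysis then interrogate.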
Model-informed precision dosing (MIPD) is a quantitative dosing framework that combines prior knowledge on the drug-disease-patient system with patient data from therapeutic drug/biomarker monitoring (TDM) to support individualized dosing in ongoing treatment. Structural models and prior parameter distributions used in MIPD approaches typically build on prior clinical trials that involve only a limited number of patients selected according to some exclusion/inclusion criteria. Compared to the prior clinical trial population, the patient population in clinical practice can be expected to also include altered behavior and/or increased interindividual variability, the extent of which, however, is typically unknown. Here, we address the question of how to adapt and refine models on the level of the model parameters to better reflect this real-world diversity. We propose an approach for continued learning across patients during MIPD using a sequential hierarchical Bayesian framework. The approach builds on two stages to separate the update of the individual patient parameters from the update of the population parameters. Consequently, it enables continued learning across hospitals or study centers, because only summary patient data (on the level of model parameters) need to be shared, but no individual TDM data. We illustrate this continued learning approach with neutrophil-guided dosing of paclitaxel. The present study constitutes an important step toward building confidence in MIPD and eventually establishing MIPD in everyday therapeutic use.
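The sequential, summary-only updating of population parameters can be illustrated with a deliberately simplified conjugate example. This is our sketch, not the authors' hierarchical framework: a single population mean with known variances, updated patient by patient from shared summary estimates; all numbers are invented.

```python
def update(prior_mean, prior_var, obs, obs_var):
    """Conjugate Bayesian update of a normal population mean
    (variances assumed known), given one summary observation."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

mean, var = 50.0, 100.0  # population prior from the original clinical trial
# Only per-patient parameter summaries cross the site boundary, never raw
# TDM data; the population belief is refined one summary at a time.
for patient_estimate in [62.0, 58.0, 65.0, 61.0]:
    mean, var = update(mean, var, patient_estimate, obs_var=25.0)
```

After the four updates the population mean has drifted toward the real-world patients and the population uncertainty has shrunk, which is the essence of continued learning across centers.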
A conundrum of trends
(2022)
This comment is meant to reiterate two warnings: one applies to the uncritical use of ready-made (openly available) program packages, and one to the estimation of trends in serially correlated time series. Both warnings apply to the recent publication of Lischeid et al. about lake-level trends in Germany.
Instruments for measuring the absorbed dose and dose rate under radiation exposure, known as radiation dosimeters, are indispensable in space missions. They are composed of radiation sensors that generate a current or voltage response when exposed to ionizing radiation, and processing electronics for computing the absorbed dose and dose rate. Among the wide range of existing radiation sensors, Radiation Sensitive Field Effect Transistors (RADFETs) have unique advantages for absorbed dose measurement and a proven record of successful exploitation in space missions. It has been shown that RADFETs may also be used for dose rate monitoring. In that regard, we propose a unique design concept that supports the simultaneous operation of a single RADFET as an absorbed dose and dose rate monitor. This reduces the cost of implementation, since the need for other types of radiation sensors can be minimized or eliminated. For processing the RADFET's response, we propose a readout system composed of an analog signal conditioner (ASC) and a self-adaptive multiprocessing system-on-chip (MPSoC). The soft error rate of the MPSoC is monitored in real time with embedded sensors, allowing autonomous switching between three operating modes (high-performance, de-stress and fault-tolerant), according to the application requirements and radiation conditions.
A different class of refugee: university scholarships and developmentalism in late 1960s Africa
(2022)
Using documents assembled in connection with the 1967 Conference on the Legal, Economic and Social Aspects of African Refugee Problems, this article discusses African refugee higher-education discourses in the 1960s at the level of international organizations, volunteer agencies, and government representatives. Education and development history have recently been studied together, but this article focuses on the history of refugee higher education, which, it argues, needs to be understood within the development framework of human-capital theory, meant to support pan-African political concerns for a decolonized continent and merged with humanitarian arguments to create a hybrid form of humanitarian developmentalism. The article zooms in on higher-education scholarships, above all for refugees from Southern Africa, as a means of support for human-capital development. It shows that refugee higher education was both a result and a driver of increased international exchanges, as evidenced at the 1967 conference.
A cationic surfactant containing a spiropyran unit is prepared that exhibits dual-responsive adjustability of its surface-active characteristics. The switching mechanism of the system relies on the reversible conversion of the non-ionic spiropyran (SP) to a zwitterionic merocyanine (MC) and can be controlled by adjusting the pH value and via light, resulting in a pH-dependent photoactivity: while the compound shows a pronounced difference in surface activity between both forms under acidic conditions, this behavior is suppressed at a neutral pH level. The underlying switching processes are investigated in detail, and a thermodynamic explanation based on a combination of theoretical and experimental results is provided. This complex stimuli-responsive behavior enables remote control of colloidal systems. To demonstrate its applicability, the surfactant is utilized for the pH-dependent manipulation of oil-in-water emulsions.
Wages and wage dynamics directly affect individuals' and families' daily lives. In this article, we show how major theoretical branches of research on wages and inequality, namely cumulative advantage (CA), human capital theory, and the lifespan perspective, can be integrated into a coherent statistical framework and analyzed with multilevel dynamic structural equation modeling (DSEM). This opens up a new way to empirically investigate the mechanisms that drive growing inequality over time. We demonstrate the new approach using longitudinal, representative U.S. data (NLSY-79). Analyses revealed fundamental between-person differences in both initial wages and autoregressive wage growth rates across the lifespan. Only 0.5% of the sample experienced a "strict" CA and unbounded wage growth, whereas most individuals showed logarithmic wage growth over time. Adolescent intelligence and adult educational levels explained substantial heterogeneity in both parameters. We discuss how DSEM may help researchers study CA processes and related developmental dynamics, and we highlight the extensions and limitations of the DSEM framework.
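The contrast drawn above between bounded (logarithmic) wage growth and "strict" cumulative advantage can be illustrated with a minimal autoregressive sketch. The parameter values below are hypothetical, chosen only for illustration, not those estimated from the NLSY-79 data:

```python
import numpy as np

def wage_path(w0, ar, growth, t=40):
    # Simple autoregressive wage dynamics: w[t+1] = ar * w[t] + growth.
    # ar < 1 yields bounded, decelerating growth toward growth / (1 - ar);
    # ar >= 1 yields unbounded growth, resembling a "strict" cumulative advantage.
    w = [float(w0)]
    for _ in range(t):
        w.append(ar * w[-1] + growth)
    return np.array(w)

bounded = wage_path(10.0, 0.9, 2.0)     # approaches the asymptote 2 / (1 - 0.9) = 20
unbounded = wage_path(10.0, 1.02, 2.0)  # grows without bound
```

The autoregressive parameter thus separates the two growth regimes the abstract describes: most individuals correspond to the first path, the small "strict CA" subgroup to the second.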
The investigation of metabolic fluxes and metabolite distributions within cells by means of tracer molecules is a valuable tool to unravel the complexity of biological systems. Technological advances in mass spectrometry (MS), such as atmospheric pressure chemical ionization (APCI) coupled with high resolution (HR), not only allow for highly sensitive analyses but also broaden the usefulness of tracer-based experiments, as interesting signals can be annotated de novo when not yet present in a compound library. However, several effects in the APCI ion source, i.e., fragmentation and rearrangement, lead to superimposed mass isotopologue distributions (MID) within the mass spectra, which need to be corrected during data evaluation as they would otherwise impair enrichment calculation. Here, we present and evaluate a novel software tool to automatically perform such corrections. We discuss the different effects, explain the implemented algorithm, and show its application on several experimental datasets. This adjustable tool is available as an R package from CRAN.
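At its core, such an MID correction is a linear model: the measured distribution is a correction matrix (encoding the superimposing effects) applied to the true labeling distribution. The following is a minimal sketch of the standard matrix-based correction for the natural-abundance effect only, with hypothetical inputs; the tool described above additionally models APCI-specific fragmentation and rearrangement:

```python
import math
import numpy as np

def correction_matrix(n_carbons, p13c=0.0107):
    # Column j holds the measured MID expected if exactly j carbons are
    # 13C-labeled and the remaining n_carbons - j follow natural abundance.
    n = n_carbons + 1
    cm = np.zeros((n, n))
    for j in range(n):
        for k in range(n_carbons - j + 1):
            cm[j + k, j] = (math.comb(n_carbons - j, k)
                            * p13c ** k * (1 - p13c) ** (n_carbons - j - k))
    return cm

def correct_mid(measured, n_carbons):
    # Solve cm @ true = measured in the least-squares sense, then renormalize.
    cm = correction_matrix(n_carbons)
    true, *_ = np.linalg.lstsq(cm, np.asarray(measured, float), rcond=None)
    true = np.clip(true, 0.0, None)
    return true / true.sum()
```

For an unlabeled compound, the corrected MID collapses onto the M+0 isotopologue, which is the sanity check typically applied to such correction routines.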
As resources are valuable assets, organizations have to decide which resources to allocate to business process tasks so that the process is executed not only effectively but also efficiently. Traditional role-based resource allocation leads to effective process executions, since each task is performed by a resource that has the required skills and competencies. However, the resulting allocations are typically not as efficient as they could be, since optimization techniques have yet to find their way into traditional business process management scenarios. Operations research, on the other hand, provides a rich set of analytical methods for supporting problem-specific decisions on resource allocation. This paper provides a novel framework that creates transparency regarding existing tasks and resources, supports individualized allocations for each activity in a process, and allows the integration of problem-specific analytical methods from the operations research domain. To validate the framework, the paper reports on the design and prototypical implementation of a software architecture that extends a traditional process engine with a dedicated resource management component. This component allows specific resource allocation problems to be defined at design time and facilitates optimized resource allocation at run time. The framework is evaluated using a real-world parcel delivery process. The evaluation shows that the quality of the allocation results increases significantly with a technique from operations research compared to the traditionally applied rule-based approach.
Deep hydrothermal Mo, W, and base metal mineralization at the Sweet Home mine (Detroit City portal) formed in response to magmatic activity during the Oligocene. Microthermometric data of fluid inclusions trapped in greisen quartz and fluorite suggest that the early-stage mineralization at the Sweet Home mine precipitated from low- to medium-salinity (1.5-11.5 wt% equiv. NaCl), CO2-bearing fluids at temperatures between 360 and 415 °C and at depths of at least 3.5 km. Stable isotope and noble gas isotope data indicate that greisen formation and base metal mineralization at the Sweet Home mine were related to fluids of different origins. Early magmatic fluids were the principal source of mantle-derived volatiles (CO2, H2S/SO2, noble gases), which subsequently mixed with significant amounts of heated meteoric water. Mixing of magmatic fluids with meteoric water is constrained by the δ²Hw–δ¹⁸Ow relationships of fluid inclusions. The deep hydrothermal mineralization at the Sweet Home mine shows features similar to deep hydrothermal vein mineralization at Climax-type Mo deposits or on their periphery. This suggests that fluid migration and the deposition of ore and gangue minerals in the Sweet Home mine were triggered by a deep-seated magmatic intrusion. The findings of this study are in good agreement with the results of previous fluid inclusion studies of the mineralization at the Sweet Home mine and at Climax-type Mo porphyry deposits in the Colorado Mineral Belt.
We demonstrate a recycling system for synthetic nicotinamide cofactor analogues using a soluble hydrogenase with a turnover number of >1000 for reduction of the cofactor analogues by H2.
Coupling this system to an ene reductase, we show quantitative conversion of N-ethylmaleimide to N-ethylsuccinimide.
The biocatalyst system retained >50% activity after 7 h.
A large landslide (frozen debris avalanche) occurred at Assapaat on the south coast of the Nuussuaq Peninsula in Central West Greenland on June 13, 2021, at 04:04 local time. We present a compilation of available data from field observations, photos, remote sensing, and seismic monitoring to describe the event. Analysis of these data, in combination with an analysis of pre- and post-failure digital elevation models, results in the first description of this type of landslide. The frozen debris avalanche initiated as a 6.9 × 10⁶ m³ failure of a permafrozen talus slope and the underlying colluvium and till at 600-880 m elevation. It entrained a large volume of permafrozen colluvium along its 2.4 km path in two subsequent entrainment phases, accumulating a total volume between 18.3 × 10⁶ and 25.9 × 10⁶ m³. About 3.9 × 10⁶ m³ is estimated to have entered the Vaigat strait; however, no tsunami was reported or is evident in the field. This is probably because the second stage of entrainment, along with a flattening of the slope angle, reduced the mobility of the frozen debris avalanche. We hypothesise that the initial talus slope failure was dynamically conditioned by warming of the ice matrix that binds the permafrozen talus slope. When the slope ice temperature rises to a critical level, its shear resistance is reduced, resulting in an unstable talus slope prone to failure. Likewise, we attribute the large-scale entrainment to increasing slope temperature and take the frozen debris avalanche as a strong sign that the permafrost in this region is increasingly at a critical state. Global warming is enhanced in the Arctic, and frequent landslide events in Western Greenland in the past decade lead us to hypothesise that continued warming will increase the frequency and magnitude of these types of landslides. Essential data for critical Arctic slopes, such as precipitation, snowmelt, and ground and surface temperature, are still missing to further test this hypothesis. Research funds should therefore be made available to better predict the changing landslide threat in the Arctic.
We use the prolonged Greek crisis as a case study to understand how a lasting economic shock affects the innovation strategies of firms in economies with moderate innovation activities. Adopting the 3-stage CDM model, we explore the link between R&D, innovation, and productivity for different size groups of Greek manufacturing firms during the prolonged crisis. At the first stage, we find that the continuation of the crisis is harmful to the R&D engagement of smaller firms, while it increases the willingness to undertake R&D activities among larger ones. At the second stage, among smaller firms knowledge production remains unaffected by R&D investments, while among larger firms the R&D decision is positively correlated with the probability of producing innovation, albeit the relationship weakens as the crisis continues. At the third stage, innovation output benefits only larger firms in terms of labor productivity, while the innovation-productivity nexus is insignificant for smaller firms during the lasting crisis.
Older adults with amnestic mild cognitive impairment (aMCI) who, in addition to their memory deficits, also suffer from frontal-executive dysfunctions have a higher risk of developing dementia later in life than older adults with aMCI without executive deficits and older adults with non-amnestic MCI (naMCI). Handgrip strength (HGS) is also correlated with the risk of cognitive decline in the elderly. Hence, the current study aimed to investigate the associations between HGS and executive functioning in individuals with aMCI, naMCI and healthy controls. Older, right-handed adults with aMCI, naMCI, and healthy controls (HC) performed a handgrip strength measurement via a handheld dynamometer. Executive functions were assessed with the Trail Making Test (TMT A&B). Normalized handgrip strength (nHGS, normalized to Body Mass Index (BMI)) was calculated, and its associations with executive functions (operationalized through z-scores of the TMT B/A ratio) were investigated through partial correlation analyses (i.e., accounting for age, sex, and severity of depressive symptoms). A positive, low-to-moderate correlation between nHGS and executive functioning was observed in older adults with aMCI (right: rp(22) = 0.364, p = 0.063; left: rp(22) = 0.420, p = 0.037) but not in naMCI or HC. Our results suggest that higher levels of nHGS are linked to better executive functioning in aMCI but not in naMCI and HC. This relationship is perhaps driven by alterations in the integrity of the hippocampal-prefrontal network occurring in older adults with aMCI. Further research is needed to provide empirical evidence for this assumption.
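The partial-correlation step described above (correlating nHGS with an executive-function score while accounting for covariates) amounts to residualizing both variables on the covariates and correlating the residuals. A sketch on simulated data, with a single hypothetical covariate standing in for age, sex, and depressive symptoms:

```python
import numpy as np

def partial_corr(x, y, covars):
    # Regress x and y on the covariates (plus intercept) via least squares
    # and correlate the residuals: the partial correlation of x and y.
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(0)
age = rng.normal(70.0, 5.0, 1000)                 # shared covariate (simulated)
nhgs = 2.0 * age + rng.normal(0.0, 5.0, 1000)     # stand-in for grip strength / BMI
tmt_ratio = -1.0 * age + rng.normal(0.0, 5.0, 1000)
```

In this simulation the raw correlation between the two measures is strong but entirely covariate-driven, so the partial correlation is near zero, which is exactly the confound the study's analysis controls for.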
Industry 4.0 is transforming how businesses innovate and, as a result, companies are spearheading the movement towards 'Digital Transformation'. While some scholars advocate the use of design thinking to identify new innovative behaviours, cognition experts emphasise the importance of top managers in supporting employees to develop these behaviours. However, there is a dearth of research in this domain, and companies are struggling to implement the required behaviours. To address this gap, this study aims to identify and prioritise behavioural strategies conducive to design thinking in order to inform the creation of a managerial mental model. We identify 20 behavioural strategies from interviews with 45 practitioners and educators and combine them with the concepts of 'paradigm-mindset-mental model' from cognition theory. The paper contributes to the body of knowledge by identifying and prioritising specific behavioural strategies that form a novel set of survival conditions aligned to the new industrial paradigm of Industry 4.0.
In this study, we model a sequence of a confined and a full eruption, employing the relaxed end state of the confined eruption of a kink-unstable flux rope as the initial condition for the ejective one. The full eruption, a model of a coronal mass ejection, develops as a result of converging motions imposed at the photospheric boundary, which drive flux cancellation. In this process, parts of the positive and negative external flux converge toward the polarity inversion line, reconnect, and cancel each other. Flux of the same amount as the canceled flux transfers to a flux rope, increasing the free magnetic energy of the coronal field. With sustained flux cancellation and the associated progressive weakening of the magnetic tension of the overlying flux, we find that a flux reduction of approximately 11% initiates the torus instability of the flux rope, which leads to a full eruption. These results demonstrate that a homologous full eruption, following a confined one, can be driven by flux cancellation.
A multidimensional and analytical perspective on Open Educational Practices in the 21st century
(2022)
Participatory approaches to teaching and learning are experiencing a new lease on life in the 21st century as a result of rapid technological development. Knowledge, practices, and tools can be shared across spatial and temporal boundaries in higher education by means of Open Educational Resources, Massive Open Online Courses, and open-source technologies. In this context, the Open Education Movement calls for new didactic approaches that encourage greater learner participation in formal higher education. Based on a representative literature review and focus group research, this study develops an analytical framework that enables researchers and practitioners to assess the form of participation in formal, collaborative teaching and learning practices. The analytical framework is focused on the micro-level of higher education, in particular on the interaction between students and lecturers when organizing the curriculum. For this purpose, the research reflects anew on the concept of participation, taking into account existing stage models for participation in the educational context. These are then brought together with the dimensions of teaching and learning processes, such as methods, objectives and content. This paper aims to make a valuable contribution to the opening up of learning and teaching, and expands the discourse around possibilities for interpreting Open Educational Practices.
A new evidence-based diet score to capture associations of food consumption and chronic disease risk
(2022)
Previously, attempts to compile the German dietary guidelines into a diet score were largely unsuccessful with regard to preventing chronic diseases in the EPIC-Potsdam study. Current guidelines were therefore supplemented by the latest evidence from systematic reviews and expert papers published between 2010 and 2020 on the prevention potential of food groups for chronic diseases such as type 2 diabetes, cardiovascular diseases and cancer. A diet score was developed by scoring the food groups according to a recommended low, moderate or high intake. The relative validity and reliability of the diet score, assessed by a food frequency questionnaire, were investigated. The consideration of current evidence resulted in 10 key food groups being preventive of the chronic diseases of interest. They served as components in the diet score and were scored from 0 to 1 point, depending on their recommended intake, resulting in a maximum of 10 points. Both the reliability (r = 0.53) and the relative validity (r = 0.43) were deemed sufficient to consider the diet score a stable construct in future investigations. This new diet score can be a promising tool for investigating dietary intake in etiological research by concentrating on 10 key dietary determinants with evidence-based prevention potential for chronic diseases.
Quantifying the extremeness of heavy precipitation allows for the comparison of events. Conventional quantitative indices, however, typically neglect the spatial extent or the duration, while both are important to understand potential impacts. In 2014, the weather extremity index (WEI) was suggested to quantify the extremeness of an event and to identify the spatial and temporal scale at which the event was most extreme. However, the WEI does not account for the fact that one event can be extreme at various spatial and temporal scales. To better understand and detect the compound nature of precipitation events, we suggest complementing the original WEI with a “cross-scale weather extremity index” (xWEI), which integrates extremeness over relevant scales instead of determining its maximum.
Based on a set of 101 extreme precipitation events in Germany, we outline and demonstrate the computation of both WEI and xWEI. We find that the choice of the index can lead to considerable differences in the assessment of past events but that the most extreme events are ranked consistently, independently of the index. Even then, the xWEI can reveal cross-scale properties which would otherwise remain hidden. This also applies to the disastrous event from July 2021, which clearly outranks all other analyzed events with regard to both WEI and xWEI.
While demonstrating the added value of xWEI, we also identify various methodological challenges along the required computational workflow: these include the parameter estimation for the extreme value distributions, the definition of maximum spatial extent and temporal duration, and the weighting of extremeness at different scales. These challenges, however, also represent opportunities to adjust the retrieval of WEI and xWEI to specific user requirements and application scenarios.
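The difference between the two indices can be made concrete: if E(a, d) denotes the extremeness evaluated at spatial extent a and duration d, WEI takes the maximum of E over all scales, while xWEI integrates E over the (log-scale) plane. The following is a schematic numpy sketch under that simplified reading; the published definitions involve return-period-based extremeness estimates, which are omitted here:

```python
import numpy as np

def wei(ext):
    # extremeness maximized over all (area, duration) combinations
    return float(np.max(ext))

def xwei(ext, log_areas, log_durations):
    # extremeness integrated over the (log area, log duration) plane,
    # approximated by a weighted sum with np.gradient cell widths
    weights = np.outer(np.gradient(log_areas), np.gradient(log_durations))
    return float(np.sum(ext * weights))

log_a = np.linspace(0.0, 3.0, 10)
log_d = np.linspace(0.0, 2.0, 8)
peaked = np.zeros((10, 8)); peaked[5, 4] = 5.0   # extreme at one scale only
broad = np.full((10, 8), 5.0)                    # extreme across all scales
```

Both synthetic events share the same WEI, but the broadly extreme event has a far larger xWEI, which is precisely the cross-scale property the index is designed to reveal.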
Van Allen Probes measurements revealed the presence of the most unusual structures in the ultra-relativistic radiation belts. Detailed modeling, analysis of pitch angle distributions, analysis of the difference between relativistic and ultra-relativistic electron evolution, along with theoretical studies of the scattering and wave growth, all indicate that electromagnetic ion cyclotron (EMIC) waves can produce a very efficient loss of the ultra-relativistic electrons in the heart of the radiation belts. Moreover, a detailed analysis of the profiles of phase space densities provides direct evidence for localized loss by EMIC waves. The evolution of multi-MeV fluxes shows dramatic and very sudden enhancements of electrons for selected storms. Analysis of phase space density profiles reveals that growing peaks at different values of the first invariant are formed at approximately the same radial distance from the Earth and show the sequential formation of the peaks from lower to higher energies, indicating that local energy diffusion is the dominant source of the acceleration from MeV to multi-MeV energies. Further simultaneous analysis of the background density and ultra-relativistic electron fluxes shows that the acceleration to multi-MeV energies only occurs when plasma density is significantly depleted outside of the plasmasphere, which is consistent with the modeling of acceleration due to chorus waves.
In postsocialist Potsdam, religious diversity has risen surprisingly in public life since 1990, although more than 80% of the residents have no religious affiliation. City and state authorities have actively embraced issues around immigration and integration as well as the promotion of religious diversity and interreligious dialogue, and have linked this to the agenda of rejuvenating the city's religious heritage. For years, negotiations have been going on about the need for a mosque, the reconstruction of a synagogue, and the so-called "Garrison Church," a landmark military church building. These initiatives have dominated public space for different reasons; beyond religion, they raised questions of memory, identity, immigration, and culture. This article puts these three cases into perspective to offer a nuanced understanding of the importance of religious spaces in secular contexts in light of city politics.
We discuss Neumann problems for self-adjoint Laplacians on (possibly infinite) graphs. Under the assumption that the heat semigroup is ultracontractive, we establish unique solvability for non-empty subgraphs with respect to the vertex boundary and provide analytic and probabilistic representations for Neumann solutions. A second result deals with Neumann problems on canonically compactifiable graphs with respect to the Royden boundary and provides conditions for unique solvability as well as analytic and probabilistic representations.
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that using extreme value statistics to practically model the tail of the frequency-magnitude distribution of earthquakes can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by the data. Using synthetic data allows this bias to be quantified in detail. The implicit assumption that the true Mmax is close to the maximum observed magnitude Mmax,observed restricts the class of potential models a priori to those with Mmax = Mmax,observed + ΔM, with an increment ΔM ≈ 0.5-1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009), labeled "Mmax equals Mobs plus an increment." This incomplete consideration of the entire model family for the frequency-magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
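The sampling argument can be reproduced with synthetic data: magnitudes drawn from a Gutenberg-Richter distribution truncated at a true Mmax rarely reach that bound, so the observed maximum systematically underestimates it. A sketch with illustrative parameter values (b-value, magnitude range, and catalog size are all assumptions for the demonstration):

```python
import numpy as np

def sample_truncated_gr(n, b=1.0, m_min=4.0, m_max_true=8.0, seed=0):
    # Inverse-CDF sampling from a Gutenberg-Richter (exponential) magnitude
    # distribution truncated at m_max_true.
    rng = np.random.default_rng(seed)
    beta = b * np.log(10.0)
    c = 1.0 - np.exp(-beta * (m_max_true - m_min))
    u = rng.random(n)
    return m_min - np.log(1.0 - u * c) / beta

mags = sample_truncated_gr(10_000)
# Even this large synthetic catalog undersamples the tail: the observed
# maximum magnitude sits below the true m_max_true of 8.0.
```

Any model selection that anchors Mmax to the observed maximum therefore inherits this downward bias, which is the point of the note above.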
A novel approach for estimating precipitation patterns is developed here and applied to generate a new hydrologically corrected daily precipitation dataset, called RAIN4PE (Rain for Peru and Ecuador), at 0.1° spatial resolution for the period 1981-2015, covering Peru and Ecuador. It is based on the application of (1) the random forest method to merge multisource precipitation estimates (gauge, satellite, and reanalysis) with terrain elevation, and (2) observed and modeled streamflow data, first to detect biases and then to further adjust gridded precipitation by inversely applying the simulated results of the ecohydrological model SWAT (Soil and Water Assessment Tool). Hydrological results using RAIN4PE as input for the Peruvian and Ecuadorian catchments were compared against those obtained when feeding other uncorrected (CHIRP and ERA5) and gauge-corrected (CHIRPS, MSWEP, and PISCO) precipitation datasets into the model. For that purpose, SWAT was calibrated and validated at 72 river sections for each dataset using a range of performance metrics, including hydrograph goodness of fit and flow duration curve signatures. Results showed that gauge-corrected precipitation datasets outperformed uncorrected ones for streamflow simulation. However, CHIRPS, MSWEP, and PISCO showed limitations for streamflow simulation in several catchments draining into the Pacific Ocean and the Amazon River. RAIN4PE provided the best overall performance for streamflow simulation, including flow variability (low, high, and peak flows) and water budget closure. The overall good performance of RAIN4PE as input for hydrological modeling provides a valuable criterion of its applicability for robust countrywide hydrometeorological applications, including hydroclimatic extremes such as droughts and floods.
Significance Statement: We developed a novel precipitation dataset, RAIN4PE, for Peru and Ecuador by merging multisource precipitation data (satellite, reanalysis, and ground-based precipitation) with terrain elevation using the random forest method. Furthermore, RAIN4PE was hydrologically corrected using streamflow data in watersheds with precipitation underestimation through reverse hydrology. The results of a comprehensive hydrological evaluation showed that RAIN4PE outperformed state-of-the-art precipitation datasets such as CHIRP, ERA5, CHIRPS, MSWEP, and PISCO in terms of daily and monthly streamflow simulations, including extremely low and high flows, in almost all Peruvian and Ecuadorian catchments. This underlines the suitability of RAIN4PE for hydrometeorological applications in this region. Furthermore, our approach for the generation of RAIN4PE can be used in other data-scarce regions.
Hegel's many remarks that seem to imply that philosophy should proceed completely a priori pose a problem for his philosophy of nature since, on this reading, Hegel offers an a priori derivation of empirical results of natural sciences. We show how this perception can be mitigated by interpreting Hegel's remarks as broadly in line with the pre-Kantian rationalist notion of a priori and offer reasons for doing so. We show that, rather than being a peculiarity of Hegel's philosophy, the practice of demonstrating a priori the results of empirical sciences was widespread in the pre-Kantian rationalist tradition. We argue that this practice was intelligible in light of the notion of a priori that was still quite prominent during Hegel's life. This notion of a priori differs from Kant's in that, while the latter's notion concerns propositions, the former concerned only their demonstration. According to it, the same proposition could be demonstrated both a posteriori and a priori. Post-Kantian idealists likewise developed projects of demonstrating specific scientific contents a priori. We then make our discussion more concrete by examining a particular case of an a priori derivation of a natural law, namely the law of fall, by both Leibniz and Hegel.
The digital transformation sets new requirements for all classes of enterprise systems in companies. ERP systems in particular, which represent the dominant class of enterprise systems, are struggling to meet the new requirements at all levels of the architecture. Therefore, there is an urgent need to reconsider the overall architecture of these systems and address the root of the related issues. Given that many of the restrictions ERP systems pose on adaptability are related to the standardization of data, we address the database layer of ERP systems. Since databases serve as the foundation for data storage and retrieval, they limit the flexibility of enterprise systems and their ability to adapt to new requirements. So far, relational databases have been widely used. Using a systematic literature approach, recent requirements for ERP systems were identified, and prominent database approaches were assessed against the 23 requirements identified. The results reveal the strengths and weaknesses of recent database approaches and highlight the demand to combine multiple database approaches to fulfill recent business requirements. From a conceptual point of view, this paper supports the idea of federated, interoperable databases that fulfill future requirements and support business operation. This research forms the basis for a renewal of the current generation of ERP systems and proposes that ERP vendors use different database concepts in the future.
As the use of free electron laser (FEL) sources increases, so do reports of non-linear phenomena occurring in these experiments, such as saturable absorption, induced transparency and scattering breakdowns. These effects are well known in the laser community, but are still rarely understood or anticipated in the X-ray community, which to date lacks tools and theories to accurately predict the respective experimental parameters and results. We present a simple theoretical framework for light-matter interactions induced by the intense short X-ray pulses available at FEL sources. Our approach makes it possible to investigate effects such as saturable absorption, induced transparency and scattering suppression, stimulated emission, and transmission spectra, while including the density-of-states influence relevant to soft X-ray spectroscopy in, for example, transition metal complexes or functional materials. This computationally efficient, rate-model-based approach is intuitively adaptable to most solid-state sample systems in the soft X-ray spectrum, with the potential to be extended to liquid and gas sample systems as well. The feasibility of the model for estimating the named effects and the influence of the density of states is demonstrated using the example of CoPd transition metal systems at the Co edge. We believe this work is an important contribution to the preparation, performance, and understanding of FEL-based high-intensity and short-pulse experiments, especially on functional materials in the soft X-ray spectrum.
Fluctuating asymmetries (FA) are small, stress-induced random deviations from perfect symmetry that arise during the development of bilaterally symmetrical traits. One of the factors that can reduce the developmental stability of individuals and cause FA at the population level is the loss of genetic variation. Populations of founding colonists frequently have lower genetic variation than their ancestral populations, which could be reflected in a higher level of FA. The European starling (Sturnus vulgaris) is native to Eurasia and was introduced successfully in the USA in 1890 and in Argentina in 1983. In this study, we documented the genetic diversity and FA of starlings from England (ancestral population), the USA (primary introduction) and Argentina (secondary introduction). We predicted the Argentinean starlings would have the highest level of FA and the lowest genetic diversity of the three populations. We captured wild adult European starlings in England, the USA, and Argentina, measured their mtDNA diversity, and allowed them to molt under standardized conditions to evaluate the FA of their primary feathers. For genetic analyses, we extracted DNA from blood samples of individuals from Argentina and the USA and from feather samples of individuals from England, and sequenced the mitochondrial control region. Starlings in Argentina showed the highest composite FA and exhibited the lowest haplotype and nucleotide diversity. The USA population showed a level of FA and genetic diversity similar to the native population. Therefore, the level of asymmetry and genetic diversity found among these populations was consistent with our predictions based on their invasion history.
A review of source models to further the understanding of the seismicity of the Groningen field
(2022)
The occurrence of felt earthquakes due to gas production in Groningen has initiated numerous studies and model attempts to understand and quantify induced seismicity in this region. The whole bandwidth of available models spans the range from fully deterministic models to purely empirical and stochastic models. In this article, we summarise the most important model approaches, describing their main achievements and limitations. In addition, we discuss remaining open questions and potential future directions of development.