"Orfeo out of Care"
(2022)
The paper focuses on an example of multiple-step reception: the contribution of the classical story of Orpheus and Eurydice and the mediaeval lay Sir Orfeo to Tolkien’s work.
In the first part, I compare the lay with Virgilian and Ovidian versions of Orpheus’ myth. This comparison shows the anonymous author’s deep knowledge of the ancient texts and complex way of rewriting them through stealing and hybridization.
The lay was highly esteemed by Tolkien, who translated it and took inspiration from it while describing the Elven kingdom in The Hobbit and building the storyline of Beren and Lúthien in The Silmarillion. Through this key tale, Orpheus/Orfeo’s romance also deeply influences Aragorn and Arwen’s story in The Lord of the Rings. The most important element that Tolkien takes from the Sir Orfeo figuration of the ancient story is undoubtedly the insertion of a political theme: the link established between the recovery of the main character’s beloved and the return to royal responsibility.
The second part of the paper is, thus, dedicated to the reception of Sir Orfeo and the classical myth in Tolkien. It shows how in his work the different steps of the tradition of Orpheus’ story are co-present, creating an inextricable substrate of inspiration that nourishes his imagination.
In its practical outlook, interdisciplinary colonial discourse theory is often criticized for its totalizing tendencies regarding the structure of the examined discourse and the power relations prevailing in this framework. As a result of this structural totalization, the subjects concerned are disempowered and degraded to mere passive objects incapable of raising their voices within the discourse. Based on this justified criticism, this thesis investigates the role colonial subjects played in the emergence and distribution, as well as in the questioning and critiquing, of the colonial discourse during the initial phase of British colonialism in West Africa. The focal point lies on three themes relevant to the period between 1874 and 1914: the Ashanti Wars, the creation of an educational system, and the issue of the so-called "Europeanized Africans." Newspapers published by the colonial elite serve as the central source material for reconstructing African perspectives on these subjects. First, the discursive trajectory of the first two themes is reconstructed, and it is then shown why the initial support of the elite gradually declined towards the end of the century. Eventually, the analyzed tendencies culminated in the emergence of the "African Regeneration" discourse, which was able to reverse the colonial discourse's basic assumptions, at least on a theoretical level. Consequently, Africans were portrayed as the "civilizers" of Europe. On the structural level, however, this discourse likewise employed a totalizing picture of African and European societies, respectively.
The offense of handling stolen goods (§ 259 StGB), highly relevant to the state examination, has gained a new point of contention between criminal-law scholarship and the courts as a result of a recent decision of the Federal Court of Justice (BGH): according to the BGH, the handover of the stolen item from the prior offender (or prior possessor) to the subsequent offender, obtained by deception, is said to constitute "procuring" ("Verschaffen") within the meaning of the provision. The prevailing view in the legal literature disagrees. This contribution seeks to show that criminal-law scholarship is right.
"A pale, thin young Jewish scholar, with a yellow beard, a thick shock of hair, and large eyes": eighty years since the death of Hillel Zeitlin
(2022)
#Gesellschaftslehre 7/8
(2022)
#WAT
(2022)
#WAT 1
(2022)
'Tools' in public management
(2022)
Tools are methods or procedures, and thus operational patterns of action, applied in public administrations to solve standard problems. They can also be considered structured communication according to professional standards, aimed at complexity reduction. Tools in management regularly rest on a deductive-synoptic rationale offering a seemingly ‘objective’ decision basis. They have a strong formative influence on the organization, regularly also beyond the intended effects. The prominence of tools is sometimes confused with management as such; e.g., introducing tools is mistaken for managing for a particular purpose. However, tools have to be closely and carefully managed with regard to the objectives and purposes they should serve.
The paper investigates Tolkien’s narratives of decline through the lens of their classical ancestry. Narratives of decline are widespread in ancient culture, in both philosophical and literary discourses. They normally posit a gradual degradation (moral and ontological) from an idealized Golden Age, which went hand-in-hand with the increasing detachment of gods from mortal affairs. Narratives of decline are also at the core of Tolkien’s mythology, constituting yet another under-researched aspect of classical influence on Tolkien. Such classical narratives reverberate, e.g., in Tolkien’s division of Arda’s history into ages, from an idealized First Age filled with Joy and Light to a Third Age, described as “Twilight Age (…) the first of the broken and changed world” (Letters 131). More generally, these narratives are related to Tolkien’s notorious perception of history as a “long defeat” (Letters 195) and to that “heart-racking sense of the vanished past” which pervades Tolkien’s works – the emotion which, in his words, moved him “supremely” and which he found “small difficulty in evoking” (Letters 91). The paper analyses the reception of narratives of decline in Tolkien’s legendarium, pointing out similarities but also contrasts and differences, with the aim of discussing some key patterns of (classical) reception in Tolkien’s theory and practice (‘renewal’, ‘accommodation’, ‘focalization’).
Molecular excitons play a central role in processes of solar energy conversion, both natural and artificial. It is therefore no wonder that numerous experimental and theoretical investigations in the last decade, employing state-of-the-art spectroscopic techniques and computational methods, have been driven by the common aim to unravel exciton dynamics in multichromophoric systems. Theoretically, exciton (de)localization and transfer dynamics are most often modelled using either mixed quantum-classical approaches (e.g., trajectory surface hopping) or fully quantum mechanical treatments (either using model diabatic Hamiltonians or direct dynamics). Yet, terms such as "exciton localization" or "exciton transfer" may bear different meanings in different works depending on the method in use (quantum-classical vs. fully quantum). Here, we relate different views on exciton (de)localization. For this purpose, we perform molecular surface hopping simulations on several tetracene dimers differing in the magnitude of exciton coupling, and carry out quantum dynamical as well as surface hopping calculations on a relevant model system. The molecular surface hopping simulations are done using an efficient long-range-corrected time-dependent density-functional tight-binding electronic structure method, allowing us to gain insight into different regimes of exciton dynamics in the studied systems.
Ground-penetrating radar (GPR) is a method that can provide detailed information about the near subsurface in sedimentary and carbonate environments.
The classical interpretation of GPR data (e.g., based on manual feature selection) is often labor-intensive and limited by the experience of the interpreter. Attribute-based classification, typically used for seismic interpretation, can provide faster, more repeatable, and less biased interpretations. We have recorded a 3D GPR data set across a paleokarst breccia pipe in the Billefjorden area on Spitsbergen, Svalbard. After performing advanced processing, we compare the results of a classical GPR interpretation to those of an attribute-based classification.
Our attribute classification incorporates a selection of dip and textural attributes as the input for a k-means clustering approach. Similar to the results of the classical interpretation, the resulting classes differentiate between undisturbed strata and breccias or fault zones.
The classes also reveal details inside the breccia pipe that are not discerned in the classical interpretation. We infer that the intrapipe GPR facies result from subtle differences, such as breccia lithology, clast size, or pore-space filling.
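The attribute-based classification step described above can be sketched in miniature as k-means clustering of per-sample dip and textural attributes. The attribute values below are random placeholders rather than real GPR attributes, and the minimal k-means is a stand-in for whatever implementation the authors actually used:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal k-means: return an integer cluster label for each row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # distance of every sample to every cluster center
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(1)
# Stand-ins for per-sample dip and textural attributes of the GPR cube
dip = rng.normal(size=400)
texture = rng.normal(size=400)
attributes = np.column_stack([dip, texture])

# Each label marks a GPR facies class, e.g. undisturbed strata vs. breccia/fault zone
labels = kmeans(attributes, k=3)
```

In practice the attributes would be standardized first and the cluster count chosen to match the expected number of facies.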
The region of West Bohemia and Upper Palatinate belongs to the West Bohemian Massif. The study area is situated at the junction of three different Variscan tectonic units and hosts the ENE-WSW trending Ohre Rift as well as many different fault systems. The entire region is characterized by ongoing magmatic processes in the intra-continental lithospheric mantle, expressed by a series of phenomena including the occurrence of repeated earthquake swarms and massive degassing of mantle-derived CO2 in the form of mineral springs and mofettes. Ongoing active tectonics is mainly manifested by Cenozoic volcanism represented by different Quaternary volcanic structures. All these phenomena make the Ohre Rift a unique target area for European intra-continental geo-scientific research. With magnetotelluric (MT) measurements we image the subsurface distribution of the electrical resistivity and map possible fluid pathways. Two-dimensional (2D) inversion results by Munoz et al. (2018) reveal a conductive channel in the vicinity of the earthquake swarm region that extends from the lower crust to the surface, forming a pathway for fluids into the region of the mofettes. A second conductive channel is present in the south of their model; however, their 2D inversions allow ambiguous interpretations of this feature. Therefore, we conducted a large 3D MT field experiment extending the study area towards the south. The 3D inversion result matches well with the known geology, imaging different fluid/magma reservoirs at crust-mantle depth and mapping possible fluid pathways from the reservoirs to the surface feeding known mofettes and spas. A comparison of 3D and 2D inversion results suggests that the 2D inversion results are strongly influenced by 3D and off-profile structures. In this context, the new results advocate for the swarm earthquakes being located in the resistive host rock surrounding the conductive channels; a finding in line with observations, e.g., at the San Andreas Fault, California.
40Ar/39Ar dating of a hydrothermal pegmatitic buddingtonite–muscovite assemblage from Volyn, Ukraine
(2022)
We determined 40Ar/39Ar ages of buddingtonite, occurring together with muscovite, using the laser-ablation method. This is the first attempt to date the NH4-feldspar buddingtonite, which is typical of sedimentary-diagenetic environments of sediments rich in organic matter, or of hydrothermal environments associated with volcanic geyser systems. The sample is a hydrothermal breccia from the Paleoproterozoic pegmatite field of the Korosten Plutonic Complex, Volyn, Ukraine. A detailed characterization by optical methods, electron microprobe analyses, backscattered electron imaging, and IR analyses showed that the buddingtonite consists of euhedral-appearing platy crystals, tens of micrometers wide and 100 or more micrometers long, which consist of fine-grained fibers of ≤1 µm thickness. The crystals are sector and growth zoned in terms of K-NH4-H3O content. The K content allows for an age determination with the 40Ar/39Ar method, as well as in the accompanying muscovite, intimately intergrown with the buddingtonite. The determinations on muscovite yielded an age of 1491 ± 9 Ma, interpreted as the hydrothermal event forming the breccia. However, buddingtonite apparent ages ranged from 563 ± 14 Ma down to 383 ± 12 Ma, which are interpreted as reset ages due to Ar loss of the fibrous buddingtonite crystals during later heating. We conclude that buddingtonite is suited for 40Ar/39Ar age determinations as a supplementary method, together with other methods and minerals; however, it requires a detailed mineralogical characterization, and the ages will likely represent minimum ages.
The shape and the actuation capability of state-of-the-art robotic devices typically rely on multimaterial systems combining geometry-determining materials and actuation components. Here, we present multifunctional 4D actuators processable by 3D printing, in which the actuator functionality is integrated into the shaped body. The materials are based on crosslinked poly(carbonate-urea-urethane) networks (PCUU), synthesized in an integrated process applying reactive extrusion and subsequent water-based curing. Actuation capability could be added to the PCUU, prepared from aliphatic oligocarbonate diol, isophorone diisocyanate (IPDI), and water, in a thermomechanical programming process. When programmed with a strain of ε_prog = 1400%, the PCUU networks exhibited actuation apparent as a reversible elongation ε′_rev of up to 22%. In a gripper, a reversible bending ε′_rev(bend) in the range of 37-60% was achieved when the actuation temperature (T_high) was varied between 45 °C and 49 °C. The integration of actuation and shape formation was impressively demonstrated in two PCUU-based reversible fastening systems, which were able to hold weights of up to 1.1 kg. In this way, the multifunctional materials are interesting candidates for robotic applications where freedom in shape design and actuation is required, as well as for sustainable fastening systems.
50 years of the Grundlagenvertrag (Basic Treaty)
(2022)
The compound [Nb6Cl14(pyrazine)4]·2CH2Cl2 (1) was investigated for its suitability as a starting compound for new ligand-supported hexanuclear niobium cluster compounds. The synthesis, stability to air and increased temperature, solubility and usability for subsequent reactions of 1, and purification and separation of the reaction products are discussed. The compounds with cluster units [Nb6Cl14L4], where L = iso-quinoline N-oxides (2), 1,1-dimethylethylenediamines (3), or thiazoles (4), and [Nb6Cl14(PEt3)3.76(Et3PO)0.24][Nb6Cl14(MeCN)4]·4MeCN (5) are presented as follow-up products. The crystal structures of compounds 1-5 are analyzed, and the structures are discussed with respect to their intra- and intermolecular bonding situations and crystal packing. In addition to hydrogen bonds and π-π interactions, the appearance of chalcogen and halogen bonds and lone pair-π interactions between Nb6 cluster units was observed for the first time.
Data stream processing systems (DSPSs) are a key enabler for integrating continuously generated data, such as sensor measurements, into enterprise applications. DSPSs make it possible to continuously analyze information from data streams, e.g., to monitor manufacturing processes and enable fast reactions to anomalous behavior. Moreover, DSPSs continuously filter, sample, and aggregate incoming streams of data, which reduces the data size, and thus data storage costs.
The growing volumes of generated data have increased the demand for high-performance DSPSs, leading to a higher interest in these systems and to the development of new DSPSs. While having more DSPSs is favorable for users as it allows choosing the system that satisfies their requirements the most, it also introduces the challenge of identifying the most suitable DSPS regarding current needs as well as future demands. Having a solution to this challenge is important because replacements of DSPSs require the costly re-writing of applications if no abstraction layer is used for application development. However, quantifying performance differences between DSPSs is a difficult task. Existing benchmarks fail to integrate all core functionalities of DSPSs and lack tool support, which hinders objective result comparisons. Moreover, no current benchmark covers the combination of streaming data with existing structured business data, which is particularly relevant for companies.
This thesis proposes a performance benchmark for enterprise stream processing called ESPBench. With enterprise stream processing, we refer to the combination of streaming and structured business data. Our benchmark design represents real-world scenarios and allows for an objective result comparison as well as scaling of data. The defined benchmark query set covers all core functionalities of DSPSs. The benchmark toolkit automates the entire benchmark process and provides important features, such as query result validation and a configurable data ingestion rate.
To validate ESPBench and to ease the use of the benchmark, we propose an example implementation of the ESPBench queries leveraging the Apache Beam software development kit (SDK). The Apache Beam SDK is an abstraction layer designed for developing stream processing applications that is applied in academic as well as enterprise contexts. It allows the defined applications to run on any of the supported DSPSs. The performance impact of Apache Beam is studied in this dissertation as well. The results show that there is a significant influence that differs among DSPSs and stream processing applications. For validating ESPBench, we use the example implementation of the ESPBench queries developed using the Apache Beam SDK. We benchmark the implemented queries executed on three modern DSPSs: Apache Flink, Apache Spark Streaming, and Hazelcast Jet. The results of the study prove the functioning of ESPBench and its toolkit. ESPBench is capable of quantifying performance characteristics of DSPSs and of unveiling differences among systems.
The benchmark proposed in this thesis covers all requirements to be applied in enterprise stream processing settings, and thus represents an improvement over the current state-of-the-art.
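The core stream operations that benchmark queries of this kind exercise, filtering and windowed aggregation, can be illustrated in plain Python. This is a conceptual sketch only, not code from ESPBench or any DSPS, and all names and thresholds are illustrative:

```python
from statistics import mean

def tumbling_window_avg(stream, window_size, threshold):
    """Drop readings above `threshold`, then emit the mean of each
    consecutive window of `window_size` accepted readings."""
    window = []
    for value in stream:
        if value > threshold:           # filter: discard anomalous readings
            continue
        window.append(value)
        if len(window) == window_size:  # aggregate: one value per full window
            yield mean(window)
            window = []

sensor_readings = [1.0, 2.0, 99.0, 3.0, 4.0, 5.0, 6.0]
windowed = list(tumbling_window_avg(sensor_readings, window_size=2, threshold=50))
print(windowed)  # -> [1.5, 3.5, 5.5]
```

A real DSPS performs the same logical steps continuously, in parallel, and with time-based rather than count-based windows.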
With the advent of increasingly powerful computational architectures, scientists use these possibilities to create simulations of ever-increasing size and complexity. Large-scale simulations of environmental systems require huge amounts of resources. Managing these in an operational way becomes increasingly complex and difficult to handle for individual scientists. State-of-the-art simulation infrastructures usually provide the necessary resources in a centralised setup, which often results in an all-or-nothing choice for the user. Here, we outline an alternative approach to handling this complexity, while rendering the use of high-performance hardware and large datasets still possible. It retains a number of desirable properties: (i) a decentralised structure, (ii) easy sharing of resources to promote collaboration and (iii) secure access to everything, including natural delegation of authority across levels and system boundaries. We show that the object capability paradigm will cover these issues, and present the first steps towards developing a simulation infrastructure based on these principles.
A Cell-free Expression Pipeline for the Generation and Functional Characterization of Nanobodies
(2022)
Cell-free systems are well-established platforms for the rapid synthesis, screening, engineering and modification of all kinds of recombinant proteins, ranging from membrane proteins to soluble proteins, enzymes and even toxins. Within the antibody field, too, cell-free technology has gained considerable attention with respect to the clinical research pipeline, including antibody discovery and production. Besides the classical full-length monoclonal antibodies (mAbs), so-called "nanobodies" (Nbs) have come into focus. A Nb is the smallest naturally-derived functional antibody fragment known and represents the variable domain (VHH, ∼15 kDa) of a camelid heavy-chain-only antibody (HCAb). Based on their nanoscale and their special structure, Nbs display striking advantages concerning their production, but also their characteristics as binders, such as high stability, diversity, improved tissue penetration and reaching of cavity-like epitopes. The classical way to produce Nbs depends on the use of living cells as production host. Though cell-based production is well-established, it is still time-consuming, laborious and hardly amenable to high-throughput applications. Here we present, to our knowledge for the first time, the synthesis of functional Nbs in a standardized mammalian cell-free system based on Chinese hamster ovary (CHO) cell lysates. Cell-free reactions were shown to be time-efficient and easy to handle, allowing for the "on demand" synthesis of Nbs. Taken together, we complement available methods and demonstrate a promising new system for Nb selection and validation.
Incorporation of noncanonical amino acids (ncAAs) with bioorthogonal reactive groups by amber suppression allows the generation of synthetic proteins with desired novel properties. Such modified molecules are in high demand for basic research and therapeutic applications such as cancer treatment and in vivo imaging. The positioning of the ncAA-responsive codon within the protein's coding sequence is critical in order to maintain protein function, achieve high yields of ncAA-containing protein, and allow effective conjugation. Cell-free ncAA incorporation is of particular interest due to the open nature of cell-free systems and their concurrent ease of manipulation. In this study, we report a straightforward workflow to assess ncAA positions with regard to incorporation efficiency and protein functionality in a Chinese hamster ovary (CHO) cell-free system. As a model, the well-established orthogonal translation components Escherichia coli tyrosyl-tRNA synthetase (TyrRS) and tRNATyr(CUA) were used to site-specifically incorporate the ncAA p-azido-l-phenylalanine (AzF) in response to UAG codons. A total of seven ncAA sites within an anti-epidermal growth factor receptor (EGFR) single-chain variable fragment (scFv), N-terminally fused to the red fluorescent protein mRFP1 and C-terminally fused to the green fluorescent protein sfGFP, were investigated for ncAA incorporation efficiency and impact on antigen binding. The characterized cell-free dual fluorescence reporter system allows screening for ncAA incorporation sites with high incorporation efficiency that maintain protein activity. It is parallelizable, scalable, and easy to operate. We propose that the established CHO-based cell-free dual fluorescence reporter system can be of particular interest for the development of antibody-drug conjugates (ADCs).
Next-generation sequencing methods provide comprehensive data for the structural and functional analysis of the genome. Draft genomes with a low contig number and a high N50 value can give insight into the structure of the genome as well as provide information for its annotation. In this study, we designed a pipeline that can be used to assemble prokaryotic draft genomes with a low number of contigs and a high N50 value. We aimed to use a combination of two de novo assembly tools (SPAdes and IDBA-Hybrid) and to evaluate the impact of this approach on the quality metrics of the assemblies. The pipeline was tested on raw short-read (< 300) sequence data for a total of 10 species from four different genera. To obtain the final draft genomes, we first assembled the sequences using SPAdes and identified a closely related organism using the 16S rRNA extracted from this assembly. The IDBA-Hybrid assembler was then used to obtain a second assembly, guided by the genome of the closely related organism. Finally, SPAdes was run again using the second assembly, produced by IDBA-Hybrid, as a hint. The results were evaluated using QUAST and BUSCO. The pipeline succeeded in reducing contig numbers and increasing N50 values in the draft genome assemblies while preserving the coverage of the draft genomes.
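One of the quality metrics the pipeline optimizes, N50, is simple to compute from the list of contig lengths. A minimal illustration (not part of the authors' pipeline):

```python
def n50(contig_lengths):
    """N50: the contig length L such that contigs of length >= L
    together cover at least half of the total assembly length."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length

print(n50([100, 200, 300, 400]))  # -> 300
```

A higher N50 at the same total length means the assembly is carried by fewer, longer contigs, which is why reducing the contig count and raising N50 go hand in hand.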
Genomic prediction has revolutionized crop breeding despite remaining issues of transferability of models to unseen environmental conditions and environments. Usage of endophenotypes rather than genomic markers leads to the possibility of building phenomic prediction models that can account, in part, for this challenge. Here, we compare and contrast genomic prediction and phenomic prediction models for 3 growth-related traits, namely, leaf count, tree height, and trunk diameter, from 2 coffee 3-way hybrid populations exposed to a series of treatment-inducing environmental conditions. The models are based on 7 different statistical methods built with genomic markers and chlorophyll a fluorescence (ChlF) data used as predictors. This comparative analysis demonstrates that the best-performing phenomic prediction models show higher predictability than the best genomic prediction models for the considered traits and environments in the vast majority of comparisons within 3-way hybrid populations. In addition, we show that phenomic prediction models are transferable between conditions, but to a lower extent between populations, and we conclude that ChlF data can serve as alternative predictors in statistical models of coffee hybrid performance. Future directions will explore their combination with other endophenotypes to further improve the prediction of growth-related traits for crops.
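As an illustration of the kind of statistical model compared in such studies, a ridge-regression predictor over marker or ChlF columns can be sketched with NumPy. The data below are random placeholders, and ridge regression is only one of many methods such comparisons typically include:

```python
import numpy as np

rng = np.random.default_rng(1)
n_hybrids, n_predictors = 50, 20
X = rng.normal(size=(n_hybrids, n_predictors))  # marker or ChlF predictor matrix
beta_true = rng.normal(size=n_predictors)
y = X @ beta_true + rng.normal(scale=0.1, size=n_hybrids)  # e.g. tree height

lam = 1.0  # ridge penalty
# Closed-form ridge solution: (X'X + lam*I) beta = X'y
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_predictors), X.T @ y)
predicted = X @ beta_hat

# Predictability scored as the correlation between predicted and observed values
predictability = np.corrcoef(predicted, y)[0, 1]
```

In a real evaluation, predictability would be estimated by cross-validation rather than on the training data, and transferability by fitting in one condition or population and predicting in another.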
A comparative whole-genome approach identifies bacterial traits for marine microbial interactions
(2022)
Luca Zoccarato, Daniel Sher et al. leverage publicly available bacterial genomes from marine and other environments to examine traits underlying microbial interactions.
Their results provide a valuable resource to investigate clusters of functional and linked traits to better understand marine bacteria community assembly and dynamics.
Microbial interactions shape the structure and function of microbial communities with profound consequences for biogeochemical cycles and ecosystem health. Yet, most interaction mechanisms are studied only in model systems and their prevalence is unknown. To systematically explore the functional and interaction potential of sequenced marine bacteria, we developed a trait-based approach, and applied it to 473 complete genomes (248 genera), representing a substantial fraction of marine microbial communities.
We identified genome functional clusters (GFCs) which group bacterial taxa with common ecology and life history. Most GFCs revealed unique combinations of interaction traits, including the production of siderophores (10% of genomes), phytohormones (3-8%) and different B vitamins (57-70%). Specific GFCs, comprising Alpha- and Gammaproteobacteria, displayed more interaction traits than expected by chance, and are thus predicted to preferentially interact synergistically and/or antagonistically with bacteria and phytoplankton. Linked trait clusters (LTCs) identify traits that may have evolved to act together (e.g., secretion systems, nitrogen metabolism regulation and B vitamin transporters), providing testable hypotheses for complex mechanisms of microbial interactions.
Our approach translates multidimensional genomic information into an atlas of marine bacteria and their putative functions, relevant for understanding the fundamental rules that govern community assembly and dynamics.
The quantification of spatial propagation of extreme precipitation events is vital in water resources planning and disaster mitigation. However, quantifying these extreme events has always been challenging, as many traditional methods are insufficient to capture the nonlinear interrelationships between extreme event time series. Therefore, it is crucial to develop suitable methods for analyzing the dynamics of extreme events over a river basin with a diverse climate and complicated topography. Over the last decade, complex network analysis emerged as a powerful tool to study the intricate spatiotemporal relationship between many variables in a compact way. In this study, we employ two nonlinear concepts of event synchronization and edit distance to investigate the extreme precipitation pattern in the Ganga river basin. We use the network degree to understand the spatial synchronization pattern of extreme rainfall and identify essential sites in the river basin with respect to potential prediction skills. The study also attempts to quantify the influence of precipitation seasonality and topography on extreme events. The findings of the study reveal that (1) the network degree decreases in the southwest-to-northwest direction, (2) the timing of the 50th percentile of precipitation within a year influences the spatial distribution of degree, (3) this timing is inversely related to elevation, and (4) lower elevation greatly influences the connectivity of the sites. The study highlights that edit distance could be a promising alternative for analyzing event-like data by incorporating event time and amplitude and constructing complex networks of climate extremes.
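The degree computation at the heart of such a network analysis can be sketched as thresholding a pairwise synchronization matrix. The values below are placeholders, not real event-synchronization scores:

```python
import numpy as np

# Symmetric pairwise synchronization strengths between three sites (placeholders)
sync = np.array([
    [1.0, 0.8, 0.2],
    [0.8, 1.0, 0.6],
    [0.2, 0.6, 1.0],
])

threshold = 0.5
adjacency = (sync >= threshold).astype(int)
np.fill_diagonal(adjacency, 0)    # no self-links

degree = adjacency.sum(axis=0)    # number of network links per site
```

Sites with high degree are synchronized with many others, which is what makes them candidates for "essential sites" with potential prediction skill.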
A comprehensive workflow to analyze ensembles of globally inverted 2D electrical resistivity models
(2022)
Electrical resistivity tomography (ERT) aims at imaging the subsurface resistivity distribution and provides valuable information for different geological, engineering, and hydrological applications. To obtain a subsurface resistivity model from measured apparent resistivities, stochastic or deterministic inversion procedures may be employed. Typically, the inversion of ERT data results in non-unique solutions; i.e., an ensemble of different models explains the measured data equally well. In this study, we perform inference analysis of model ensembles generated using a well-established global inversion approach to assess uncertainties related to the non-uniqueness of the inverse problem. Our interpretation strategy starts by establishing model selection criteria based on different statistical descriptors calculated from the data residuals. Then, we perform cluster analysis considering the inverted resistivity models and the corresponding data residuals. Finally, we evaluate model uncertainties and residual distributions for each cluster. To illustrate the potential of our approach, we use a particle swarm optimization (PSO) algorithm to obtain an ensemble of 2D layer-based resistivity models from a synthetic data example and a field data set collected in Loon-Plage, France. Our strategy performs well for both synthetic and field data and allows us to extract different plausible model scenarios with their associated uncertainties and data residual distributions. Although we demonstrate our workflow using 2D ERT data and a PSO-based inversion approach, the proposed strategy is general and can be adapted to analyze model ensembles generated from other kinds of geophysical data and using different global inversion approaches.
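One statistical descriptor that such model-selection criteria can build on is the per-model RMS data residual. A minimal sketch with illustrative numbers (not data from the study):

```python
import numpy as np

measured = np.array([100.0, 120.0, 90.0])      # measured apparent resistivities
ensemble_predictions = np.array([
    [101.0, 119.0, 91.0],                      # forward response of model 1
    [110.0, 130.0, 80.0],                      # forward response of model 2
])

residuals = ensemble_predictions - measured
rms = np.sqrt((residuals ** 2).mean(axis=1))   # one RMS residual per model
```

Models whose RMS exceeds a chosen cutoff would be discarded before the cluster analysis; the surviving models and their residuals then feed the clustering step.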
Model-informed precision dosing (MIPD) is a quantitative dosing framework that combines prior knowledge on the drug-disease-patient system with patient data from therapeutic drug/biomarker monitoring (TDM) to support individualized dosing in ongoing treatment. Structural models and prior parameter distributions used in MIPD approaches typically build on prior clinical trials that involve only a limited number of patients selected according to some exclusion/inclusion criteria. Compared to the prior clinical trial population, the patient population in clinical practice can be expected to also include altered behavior and/or increased interindividual variability, the extent of which, however, is typically unknown. Here, we address the question of how to adapt and refine models on the level of the model parameters to better reflect this real-world diversity. We propose an approach for continued learning across patients during MIPD using a sequential hierarchical Bayesian framework. The approach builds on two stages to separate the update of the individual patient parameters from updating the population parameters. Consequently, it enables continued learning across hospitals or study centers, because only summary patient data (on the level of model parameters) need to be shared, but no individual TDM data. We illustrate this continued learning approach with neutrophil-guided dosing of paclitaxel. The present study constitutes an important step toward building confidence in MIPD and eventually establishing MIPD increasingly in everyday therapeutic use.
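The idea of updating population parameters from patient-level summaries can be caricatured with a conjugate normal-normal update. This is a deliberately simplified sketch with illustrative priors and values, not the authors' hierarchical model:

```python
def update_population(prior_mean, prior_var, patient_estimate, obs_var):
    """One sequential Bayesian update of the population mean from a single
    patient-level parameter estimate (normal prior, normal likelihood)."""
    precision = 1.0 / prior_var + 1.0 / obs_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + patient_estimate / obs_var)
    return post_mean, post_var

mean, var = 1.0, 1.0               # prior on the population parameter
for estimate in [1.4, 1.6, 1.5]:   # summary estimates from successive patients
    mean, var = update_population(mean, var, estimate, obs_var=1.0)

print(round(mean, 3), round(var, 3))  # -> 1.375 0.25
```

Note how only the parameter estimates cross the loop boundary, mirroring the paper's point that hospitals need to share summary parameters, not individual TDM data; the posterior variance also shrinks with every patient.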
A conundrum of trends
(2022)
This comment is meant to reiterate two warnings: One applies to the uncritical use of ready-made (openly available) program packages, and one to the estimation of trends in serially correlated time series. Both warnings apply to the recent publication of Lischeid et al. about lake-level trends in Germany.
The aim of this dissertation was to conduct a larger-scale cross-linguistic empirical investigation of similarity-based interference effects in sentence comprehension.
Interference studies can offer valuable insights into the mechanisms that are involved in long-distance dependency completion.
Many studies have investigated similarity-based interference effects, showing that syntactic and semantic information is employed during long-distance dependency formation (e.g., Arnett & Wagers, 2017; Cunnings & Sturt, 2018; Van Dyke, 2007; Van Dyke & Lewis, 2003; Van Dyke & McElree, 2011). Nevertheless, there are some important open questions in the interference literature that are critical to our understanding of the constraints involved in dependency resolution.
The first research question concerns the relative timing of syntactic and semantic interference in online sentence comprehension. Only a few interference studies have investigated this question, and, to date, there is not enough data to draw conclusions with regard to their time course (Van Dyke, 2007; Van Dyke & McElree, 2011).
Our first cross-linguistic study explores the relative timing of syntactic and semantic interference in two eye-tracking reading experiments that implement the study design used in Van Dyke (2007). The first experiment tests English sentences. The second, larger-sample experiment investigates the two interference types in German.
Overall, the data suggest that syntactic and semantic interference can arise simultaneously during retrieval.
The second research question concerns a special case of semantic interference: We investigate whether cue-based retrieval interference can be caused by semantically similar items which are not embedded in a syntactic structure.
This second interference study builds on a landmark study by Van Dyke & McElree (2006). The study design used in their study is unique in that it is able to pin down the source of interference as a consequence of cue overload during retrieval, when semantic retrieval cues do not uniquely match the retrieval target. Unlike most other interference studies, this design is able to rule out encoding interference as an alternative explanation. Encoding accounts postulate that it is not cue overload at the retrieval site but the erroneous encoding of similar linguistic items in memory that leads to interference (Lewandowsky et al., 2008; Oberauer & Kliegl, 2006). While Van Dyke & McElree (2006) reported cue-based retrieval interference from sentence-external distractors, the evidence for this effect was weak. A subsequent study did not show interference of this type (Van Dyke et al., 2014). Given these inconclusive findings, further research is necessary to investigate semantic cue-based retrieval interference.
The second study in this dissertation provides a larger-scale cross-linguistic investigation of cue-based retrieval interference from sentence-external items. Three larger-sample eye-tracking studies in English, German, and Russian tested cue-based interference in the online processing of filler-gap dependencies. This study further extends the previous research by investigating interference in each language under varying task demands (Logačev & Vasishth, 2016; Swets et al., 2008).
Overall, we see some very modest support for proactive cue-based retrieval interference in English. Unexpectedly, this was observed only under a low task demand. In German and Russian, there is some evidence against the interference effect. It is possible that interference is attenuated in languages with richer case marking.
In sum, the cross-linguistic experiments on the time course of syntactic and semantic interference from sentence-internal distractors support existing evidence of syntactic and semantic interference during sentence comprehension. Our data further show that both types of interference effects can arise simultaneously. Our cross-linguistic experiments investigating semantic cue-based retrieval interference from sentence-external distractors suggest that this type of interference may arise only in specific linguistic contexts.
A different class of refugee: university scholarships and developmentalism in late 1960s Africa
(2022)
Using documents assembled in connection with the 1967 Conference on the Legal, Economic and Social Aspects of African Refugee Problems, this article discusses African refugee higher-education discourses in the 1960s at the level of international organizations, volunteer agencies, and government representatives. Education and development history have recently been studied together, but this article focuses on the history of refugee higher education, which, it argues, needs to be understood within the development framework of human-capital theory, meant to support pan-African political concerns for a decolonized continent and merged with humanitarian arguments to create a hybrid form of humanitarian developmentalism. The article zooms in on higher-education scholarships, above all for refugees from Southern Africa, as a means of support for human-capital development. It shows that refugee higher education was both a result and a driver of increased international exchanges, as evidenced at the 1967 conference.
A cationic surfactant containing a spiropyran unit is prepared exhibiting a dual-responsive adjustability of its surface-active characteristics. The switching mechanism of the system relies on the reversible conversion of the non-ionic spiropyran (SP) to a zwitterionic merocyanine (MC) and can be controlled by adjusting the pH value and via light, resulting in a pH-dependent photoactivity: While the compound possesses a pronounced difference in surface activity between both forms under acidic conditions, this behavior is suppressed at a neutral pH level. The underlying switching processes are investigated in detail, and a thermodynamic explanation based on a combination of theoretical and experimental results is provided. This complex stimuli-responsive behavior enables remote control of colloidal systems. To demonstrate its applicability, the surfactant is utilized for the pH-dependent manipulation of oil-in-water emulsions.
Wages and wage dynamics directly affect individuals' and families' daily lives. In this article, we show how major theoretical branches of research on wages and inequality, that is, cumulative advantage (CA), human capital theory, and the lifespan perspective, can be integrated into a coherent statistical framework and analyzed with multilevel dynamic structural equation modeling (DSEM). This opens up a new way to empirically investigate the mechanisms that drive growing inequality over time. We demonstrate the new approach by making use of longitudinal, representative U.S. data (NLSY-79). Analyses revealed fundamental between-person differences in both initial wages and autoregressive wage growth rates across the lifespan. Only 0.5% of the sample experienced a "strict" CA and unbounded wage growth, whereas most individuals revealed logarithmic wage growth over time. Adolescent intelligence and adult educational levels explained substantial heterogeneity in both parameters. We discuss how DSEM may help researchers study CA processes and related developmental dynamics, and we highlight the extensions and limitations of the DSEM framework.
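The contrast between bounded, logarithmic-like growth and "strict" cumulative advantage can be illustrated with the kind of person-specific AR(1) process that DSEM estimates; the parameter values below are illustrative, not estimates from the NLSY-79:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_wages(intercept, phi, t_max=40, w0=10.0):
    """AR(1) wage trajectory: w_t = intercept + phi * w_{t-1} + noise."""
    w = [w0]
    for _ in range(t_max):
        w.append(intercept + phi * w[-1] + rng.normal(0.0, 0.1))
    return np.array(w)

# phi < 1: wages level off near intercept / (1 - phi), the bounded,
# logarithmic-like growth that most individuals showed.
bounded = simulate_wages(intercept=2.0, phi=0.8)

# phi >= 1: "strict" cumulative advantage, wages grow without bound
# (the rare 0.5% pattern).
unbounded = simulate_wages(intercept=2.0, phi=1.05)
```

In the multilevel DSEM framework both the intercept and the autoregressive coefficient would be random effects varying across persons, with covariates such as adolescent intelligence predicting that heterogeneity.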
The investigation of metabolic fluxes and metabolite distributions within cells by means of tracer molecules is a valuable tool to unravel the complexity of biological systems. Technological advances in mass spectrometry (MS), such as atmospheric pressure chemical ionization (APCI) coupled with high resolution (HR), not only allow for highly sensitive analyses but also broaden the usefulness of tracer-based experiments, as interesting signals can be annotated de novo when not yet present in a compound library. However, several effects in the APCI ion source, i.e., fragmentation and rearrangement, lead to superimposed mass isotopologue distributions (MID) within the mass spectra, which need to be corrected during data evaluation, as they would otherwise impair enrichment calculation. Here, we present and evaluate a novel software tool to automatically perform such corrections. We discuss the different effects, explain the implemented algorithm, and show its application on several experimental datasets. This adjustable tool is available as an R package from CRAN.
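The correction step itself, deconvolving superimposed MIDs through a correction matrix, reduces to a linear solve. A minimal numpy sketch with an invented correction matrix for a 3-carbon fragment (the actual tool derives such matrices from elemental composition and the observed source effects):

```python
import numpy as np

# Invented correction matrix for a 3-carbon fragment: column j gives the
# measured isotope pattern that a pure M+j species would produce (natural
# abundance, fragmentation, and rearrangement folded together).
C = np.array([
    [0.968, 0.000, 0.000, 0.0],
    [0.031, 0.978, 0.000, 0.0],
    [0.001, 0.021, 0.989, 0.0],
    [0.000, 0.001, 0.011, 1.0],
])

# Superimposed (measured) MID, here generated from a known true MID.
measured = np.array([0.4840, 0.2111, 0.1036, 0.2013])

# Deconvolve: solve C @ true_mid = measured, clip negatives, renormalise.
true_mid, *_ = np.linalg.lstsq(C, measured, rcond=None)
true_mid = np.clip(true_mid, 0.0, None)
true_mid /= true_mid.sum()
# Recovers the underlying labelling pattern [0.5, 0.2, 0.1, 0.2].
```

Enrichment calculation then proceeds on `true_mid` rather than on the raw, superimposed spectrum.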
As resources are valuable assets, organizations have to decide which resources to allocate to business process tasks in a way that the process is executed not only effectively but also efficiently. Traditional role-based resource allocation leads to effective process executions, since each task is performed by a resource that has the required skills and competencies to do so. However, the resulting allocations are typically not as efficient as they could be, since optimization techniques have yet to find their way into traditional business process management scenarios. On the other hand, operations research provides a rich set of analytical methods for supporting problem-specific decisions on resource allocation. This paper provides a novel framework for creating transparency on existing tasks and resources, supporting individualized allocations for each activity in a process, and the possibility to integrate problem-specific analytical methods of the operations research domain. To validate the framework, the paper reports on the design and prototypical implementation of a software architecture, which extends a traditional process engine with a dedicated resource management component. This component allows us to define specific resource allocation problems at design time, and it also facilitates optimized resource allocation at run time. The framework is evaluated using a real-world parcel delivery process. The evaluation shows that the quality of the allocation results increases significantly with a technique from operations research compared to the traditionally applied rule-based approach.
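The gap between rule-based and optimized allocation is easy to see on a toy assignment problem; the 4x4 cost matrix below is invented, and a production component would call a Hungarian-algorithm or ILP solver rather than brute force:

```python
import numpy as np
from itertools import permutations

# Invented cost matrix: expected handling time (minutes) for 4 couriers x 4 parcels.
cost = np.array([
    [12,  9, 14, 10],
    [11, 13,  8, 12],
    [ 9, 12, 10, 15],
    [14, 10, 11,  9],
])

# Rule-based baseline: each qualified courier simply takes the next parcel.
rule_based = int(cost[np.arange(4), np.arange(4)].sum())

# OR-style optimum: exhaustive search of the assignment problem (fine for n=4).
best = min(permutations(range(4)), key=lambda p: cost[np.arange(4), list(p)].sum())
optimal = int(cost[np.arange(4), list(best)].sum())
# optimal (35) beats the rule-based total (44)
```

The point of the framework is exactly this plug-in: the process engine supplies tasks and resources, and a problem-specific OR method replaces the first-qualified-resource rule.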
Deep hydrothermal Mo, W, and base metal mineralization at the Sweet Home mine (Detroit City portal) formed in response to magmatic activity during the Oligocene. Microthermometric data of fluid inclusions trapped in greisen quartz and fluorite suggest that the early-stage mineralization at the Sweet Home mine precipitated from low- to medium-salinity (1.5-11.5 wt% equiv. NaCl), CO2-bearing fluids at temperatures between 360 and 415 °C and at depths of at least 3.5 km. Stable isotope and noble gas isotope data indicate that greisen formation and base metal mineralization at the Sweet Home mine were related to fluids of different origins. Early magmatic fluids were the principal source for mantle-derived volatiles (CO2, H2S/SO2, noble gases), which subsequently mixed with significant amounts of heated meteoric water. Mixing of magmatic fluids with meteoric water is constrained by δ²H(w)-δ¹⁸O(w) relationships of fluid inclusions. The deep hydrothermal mineralization at the Sweet Home mine shows features similar to deep hydrothermal vein mineralization at Climax-type Mo deposits or on their periphery. This suggests that fluid migration and the deposition of ore and gangue minerals in the Sweet Home mine were triggered by a deep-seated magmatic intrusion. The findings of this study are in good agreement with the results of previous fluid inclusion studies of the mineralization of the Sweet Home mine and from Climax-type Mo porphyry deposits in the Colorado Mineral Belt.
We demonstrate a recycling system for synthetic nicotinamide cofactor analogues using a soluble hydrogenase with a turnover number of >1000 for reduction of the cofactor analogues by H₂.
Coupling this system to an ene reductase, we show quantitative conversion of N-ethylmaleimide to N-ethylsuccinimide.
The biocatalyst system retained >50% activity after 7 h.
What are the consequences of unemployment and precarious employment for individuals' health in Europe? What are the moderating factors that may offset (or increase) the health consequences of labor-market risks? How do the effects of these risks vary across different contexts, which differ in their institutional and cultural settings? Does gender, regarded as a social structure, play a role, and how? Answering these questions is the aim of my cumulative thesis. This study aims to advance our knowledge about the health consequences that unemployment and precariousness cause over the life course. In particular, I investigate how several moderating factors, such as gender, the family, and the broader cultural and institutional context, may offset or increase the impact of employment instability and insecurity on individual health.
In my first paper, 'The buffering role of the family in the relationship between job loss and self-perceived health: Longitudinal results from Europe, 2004-2011', I and my co-authors measure the causal effect of job loss on health and the role of the family and welfare states (regimes) as moderating factors. Using EU-SILC longitudinal data (2004-2011), we estimate the probability of experiencing 'bad health' following a transition to unemployment by applying linear probability models and undertake separate analyses for men and women. Firstly, we measure whether changes in the independent variable 'job loss' lead to changes in the dependent variable 'self-rated health' for men and women separately. Then, by adding into the model different interaction terms, we measure the moderating effect of the family, both in terms of emotional and economic support, and how much it varies across different welfare regimes. As an identification strategy, we first implement static fixed-effect panel models, which control for time-varying observables and indirect health selection—i.e., constant unobserved heterogeneity. Secondly, to control for reverse causality and path dependency, we implement dynamic fixed-effect panel models, adding a lagged dependent variable to the model.
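The identification logic of the static fixed-effects step, removing constant unobserved heterogeneity by demeaning within persons, can be sketched on simulated data; the effect sizes and the confounding "trait" below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented panel: 200 persons x 8 years. Job loss lowers health by 0.15, but a
# time-constant unobserved trait both raises job-loss risk and lowers health
# (indirect health selection), which biases naive estimates.
n, t = 200, 8
trait = rng.normal(0.0, 1.0, n)
p_loss = 0.2 + 0.2 * (trait > 0)                       # per person-year risk
job_loss = (rng.random((n, t)) < p_loss[:, None]).astype(float)
health = 0.5 - 0.15 * job_loss - 0.5 * trait[:, None] + rng.normal(0, 0.1, (n, t))

# Pooled OLS slope: confounded by the trait, overstates the damage.
xp, yp = job_loss - job_loss.mean(), health - health.mean()
beta_pooled = (xp * yp).sum() / (xp * xp).sum()

# Fixed-effects (within) slope: demeaning per person removes the trait.
xw = job_loss - job_loss.mean(axis=1, keepdims=True)
yw = health - health.mean(axis=1, keepdims=True)
beta_fe = (xw * yw).sum() / (xw * xw).sum()            # close to the true -0.15
```

The paper's dynamic specification additionally adds a lagged dependent variable to address reverse causality and path dependency; the within transformation shown here is only the static step.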
We explore the role of the family by focusing on close ties within households: we consider the presence of a stable partner and his/her working status as a source of social and economic support. According to previous literature, having a partner should reduce the stress from adverse events, thanks to the symbolic and emotional dimensions that such a relationship entails, regardless of any economic benefits. Our results, however, suggest that benefits linked to the presence of a (female) partner also come from the financial stability that (s)he can provide in terms of a second income. Furthermore, we find partners' employment to be at least as important as the mere presence of the partner in reducing the negative effect of job loss on the individual's health by maintaining the household's standard of living and decreasing economic strain on the family. Our results are in line with previous research, which has highlighted that some people cope better than others with adverse life circumstances, and the support provided by the family is a crucial resource in that regard.
We also reported an important interaction between the family and the welfare state in moderating the health consequences of unemployment, showing how the compensation effect of the family varies across welfare regimes. The family plays a decisive role in cushioning the adverse consequences of labor market risks in Southern and Eastern welfare states, which are characterized by less developed social protection systems and, especially in the Southern regime, a high level of familialism.
The first paper also found important gender differences concerning job loss, family, and welfare effects. Of particular interest is the evidence suggesting that health selection works differently for men and women, playing a more prominent role for women than for men in explaining the relationship between job loss and self-perceived health. The second paper, 'Gender roles and selection mechanisms across contexts: A comparative analysis of the relationship between unemployment, self-perceived health, and gender', investigates the unemployment-driven gender differential in health in more depth.
As this is a highly contested issue in the literature, we study whether men are penalized more than women, or the other way around, and which mechanisms may explain the gender difference. To do that, we rely on two theoretical arguments: the availability of alternative roles and social selection. The first argument builds on the idea that men and women may compensate for the detrimental health consequences of unemployment through the commitment to 'alternative roles,' which can provide for the resources needed to fulfill people's socially constructed needs. Notably, the availability of alternative options depends on the different positions that men and women have in society.
Further, we merge the availability of the 'alternative roles' argument with the health selection argument. We assume that health selection could be contingent on people's social position as defined by gender and, thus, explain the gender differential in the relationship between unemployment and health. Ill people might be less reluctant to fall or remain (i.e., self-select) in unemployment if they have alternative roles. In Western societies, women generally have more alternative roles than men and thus more discretion in their labor market attachment. Therefore, health selection should be stronger for them, explaining why unemployment is less of a menace for women than for their male counterparts.
Finally, relying on the idea of different gender regimes, we extended these arguments to comparisons across contexts. For example, in contexts where being a caregiver is assumed to be women's traditional and primary role and the primary breadwinner role is reserved for men, unemployment is less stigmatized, and taking up alternative roles is more socially accepted for women than for men (Hp.1). Accordingly, social (self-)selection should be stronger for women than for men in traditional contexts, where, in the case of ill-health, the separation from work is eased by the availability of alternative roles (Hp.2).
By focusing on contexts that are representative of different gender regimes, we implement a multiple-step comparative approach. Firstly, by using EU-SILC longitudinal data (2004-2015), our analysis tests gender roles and selection mechanisms for Sweden and Italy, representing radically different gender regimes, thus providing institutional and cultural variation. Then, we limit institutional heterogeneity by focusing on Germany and comparing East and West Germany and older and younger cohorts—for West Germany (SOEP data 1995-2017). Next, to assess the differential impact of unemployment for men and women, we compare (unemployed and employed) men with (unemployed and employed) women. To do so, we calculate predicted probabilities and average marginal effects from two distinct random-effects probit models. Our first step is estimating random-effects models that assess the association between unemployment and self-perceived health, controlling for observable characteristics. In the second step, our fully adjusted model controls for both direct and indirect selection. We do this using dynamic correlated random-effects (CRE) models. Further, based on the fully adjusted model, we test our hypothesis on alternative roles (Hp.1) by comparing several contexts – models are estimated separately for each context. For this hypothesis, we pool men and women and include an interaction term between unemployment and gender, which has the advantage of allowing us to test directly whether gender differences in the effect of unemployment exist and are statistically significant. Finally, we test the role of selection mechanisms (Hp.2), using the KHB method to compare coefficients across nested nonlinear models. Specifically, we test the role of selection for the relationship between unemployment and health by comparing the partially-adjusted and fully-adjusted models. To allow selection mechanisms to operate differently between genders, we estimate separate models for men and women.
We found support for our first hypothesis: the context in which people are embedded structures the relationship between unemployment, health, and gender. We found no gendered effect of unemployment on health in the egalitarian context of Sweden. Conversely, in the traditional context of Italy, we observed substantive and statistically significant gender differences in the effect of unemployment on bad health, with women suffering less than men. We found the same pattern when comparing East and West Germany and younger and older cohorts in West Germany.
On the contrary, our results did not support our theoretical argument on social selection. We found that in Sweden, women are more selected out of employment than men. In contrast, in Italy, health selection does not seem to be the primary mechanism behind the gender differential—Italian men and women seem to be selected out of employment to the same extent. Namely, we do not find any evidence that health selection is stronger for women in more traditional countries (Hp.2), even though the institutional and cultural context would offer them a wider range of 'alternative roles' than men. Moreover, our second hypothesis is also rejected in the second and third comparisons, where cross-country heterogeneity is reduced to maximize cultural differences within the same institutional context. Further research that addresses selection into inactivity is needed to evaluate the interplay between selection and social roles across gender regimes.
While the health consequences of unemployment have been on the research agenda for a long time, interest in precarious employment—defined as the linking of the vulnerable worker to work that is characterized by uncertainty and insecurity concerning pay, the stability of the work arrangement, limited access to social benefits, and statutory protections—emerged only later. Since the 80s, scholars from different disciplines have raised concerns about the social consequences of the de-standardization of employment relationships. However, while work has undoubtedly become more precarious, very little is known about its causal effect on individual health and the role of gender as a moderator. These questions are at the core of my third paper: 'Bad job, bad health? A longitudinal analysis of the interaction between precariousness, gender and self-perceived health in Germany'. Herein, I investigate the multidimensional nature of precarious employment and its causal effect on health, particularly focusing on gender differences.
With this paper, I aim to overcome three major shortcomings of earlier studies. The first one regards the cross-sectional nature of the data, which prevents the authors from ruling out unobserved heterogeneity as a mechanism for the association between precarious employment and health. Indeed, several unmeasured individual characteristics—such as cognitive abilities—may confound the relationship between precarious work and health, leading to biased results. Secondly, only a few studies have directly addressed the role of gender in shaping the relationship. Moreover, available results on the gender differential are mixed and inconsistent: some found precarious employment to be more detrimental to women's health, while others found no gender differences or a stronger negative association for men. Finally, previous attempts at an empirical translation of the employment precariousness (EP) concept have not always been coherent with their theoretical framework. EP is usually assumed to be a multidimensional and continuous phenomenon; it is characterized by different dimensions of insecurity that may overlap in the same job and lead to different "degrees of precariousness." However, researchers have predominantly focused on one-dimensional indicators—e.g., temporary employment, subjective job insecurity—to measure EP and study the association with health. Besides the fact that this approach only partially captures the phenomenon's complexity, the major problem is the inconsistency of the evidence it has produced. Indeed, this line of inquiry generally reveals an ambiguous picture, with some studies finding substantial adverse effects of temporary over permanent employment, while others report only minor differences.
To measure the (causal) effect of precarious work on self-rated health and its variation by gender, I focus on Germany and use four waves of SOEP data (2003, 2007, 2011, and 2015). Germany is a suitable context for my study. Indeed, since the 1980s, the labor market and welfare system have been restructured in many ways to increase the German economy's competitiveness in the global market. As a result, the (standard) employment relationship has been de-standardized: non-standard and atypical employment arrangements—i.e., part-time work, fixed-term contracts, mini-jobs, and work agencies—have increased over time while wages have lowered, even among workers with standard work. In addition, the power of unions has also fallen over the last three decades, leaving a large share of workers without collective protection. Because of this process of de-standardization, the link between wage employment and strong social rights has eroded, leaving workers less powerful and more vulnerable to labor market risks than in the past. EP refers to this uneven distribution of power in the employment relationship, which can be detrimental to workers' health. Indeed, by affecting individuals' access to power and other resources, EP puts precarious workers at risk of experiencing health shocks and influences their ability to gain and accumulate health advantages (Hp.1).
Further, the focus on Germany allows me to investigate my second research question on the gender differential. Germany is usually regarded as a traditionalist gender regime: a context characterized by a traditional configuration of gender roles. Here, being a caregiver is assumed to be women's primary role, whereas the primary breadwinner role is reserved for men. Although much progress has been made over the last decades towards a greater equalization of opportunities and more egalitarianism, the breadwinner model has shifted only towards a modified version. Thus, women usually take on the double role of workers (the so-called secondary earners) and caregivers, and men still devote most of their time to paid work activities. Moreover, the overall upward trend towards more egalitarian gender ideologies has leveled off over the last decades, notably moving towards more traditional gender ideologies.
In this setting, two alternative hypotheses are possible. Firstly, I assume that the negative relationship between EP and health is stronger for women than for men. This is because women are systematically more disadvantaged than men in the public and private spheres of life, having less access to formal and informal sources of power. These gender-related power asymmetries may interact with EP-related power asymmetries resulting in a stronger effect of EP on women's health than on men's health (Hp.2).
An alternative way of looking at the gender differential is to consider the interaction that precariousness might have with men's and women's gender identities. According to this view, the negative relationship between EP and health is weaker for women than for men (Hp.2a). In a society with a gendered division of labor and a strong link between masculine identities and a stable and well-rewarded job—i.e., a job that confers the role of primary family provider—a male worker in precarious employment might violate the traditional male gender role. Men in precarious jobs may be perceived by themselves (and by others) as possessing a socially undesirable characteristic, which conflicts with the stereotypical idea of themselves as the male breadwinner. Engaging in behaviors that contradict stereotypical gender identity may decrease self-esteem and foster feelings of inferiority, helplessness, and jealousy, leading to poor health.
I develop a new indicator of EP that empirically translates a definition of EP as a multidimensional and continuous phenomenon. I assume that EP is a latent construct composed of seven dimensions of insecurity chosen according to theory and previous empirical research: income insecurity, social insecurity, legal insecurity, employment insecurity, working-time insecurity, representation insecurity, and workers' vulnerability. The seven dimensions are proxied by eight indicators available in the four waves of the SOEP dataset. The EP composite indicator is obtained by performing a multiple correspondence analysis (MCA) on the eight indicators. This approach aims to construct a summary scale in which all dimensions contribute jointly to the measured experience of precariousness and its health impact.
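The essence of the MCA step, extracting one latent precariousness dimension from categorical insecurity items, can be sketched as correspondence analysis of the one-hot indicator matrix; three invented binary items stand in for the eight SOEP indicators:

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented sample: 300 workers, three binary insecurity items driven by one
# latent precariousness factor (stand-ins for the eight SOEP indicators).
latent = rng.normal(0.0, 1.0, 300)
items = (rng.normal(0.0, 1.0, (300, 3)) + latent[:, None] > 0).astype(int)

# One-hot indicator matrix over all categories, as in MCA.
Z = np.concatenate([1 - items, items], axis=1).astype(float)
P = Z / Z.sum()
r, c = P.sum(axis=1), P.sum(axis=0)                    # row / column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))     # standardized residuals
U, s, Vt = np.linalg.svd(S, full_matrices=False)

# First-dimension row scores form the composite precariousness scale.
score = (U[:, 0] * s[0]) / np.sqrt(r)
if np.corrcoef(score, items.sum(axis=1))[0, 1] < 0:    # orientation is arbitrary
    score = -score
```

With only binary items this first dimension is close to a weighted sum of the items; the value of MCA in the paper is that multi-category indicators with unequal margins contribute jointly to one continuous scale.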
Further, the relationship between EP and 'general self-perceived health' is estimated by applying ordered probit random-effects estimators and calculating average marginal effects (AME). Then, to control for unobserved heterogeneity, I implement correlated random-effects models that add the within-individual means of the time-varying independent variables to the model. To test the significance of the gender differential, I add an interaction term between EP and gender in the fully adjusted model in the pooled sample.
My correlated random-effects models showed EP's negative and substantial 'effect' on self-perceived health for both men and women. Although nonsignificant, the evidence seems in line with previous cross-sectional literature and supports the hypothesis that employment precariousness could be detrimental to workers' health. Further, my results showed the crucial role of unobserved heterogeneity in shaping the health consequences of precarious employment. This is particularly important because the accumulating evidence is still mostly descriptive.
Moreover, my results revealed a substantial difference between men and women in the relationship between EP and health: when EP increases, the risk of experiencing poor health increases much more for men than for women. This evidence contradicts previous theory, according to which the gender differential is contingent on the structurally disadvantaged position of women in Western societies. In contrast, my findings seem to confirm the idea that men in precarious work could experience role conflict to a larger extent than women, as their self-standard is supposed to be the stereotypical breadwinner worker with a good and well-rewarded job. Finally, results from the multiple correspondence analysis contribute to the methodological debate on precariousness, showing that a multidimensional and continuous indicator can express a latent variable of EP.
All in all, complementarities are revealed in the results on unemployment and employment precariousness, which have two implications: Policy-makers need to be aware that the total costs of unemployment and precariousness go far beyond the economic and material realm, penetrating other fundamental life domains such as individual health. Moreover, they need to balance the trade-off between adequately protecting unemployed people and fostering high-quality employment in reaction to the highlighted market pressures. In this sense, the further development of a (universalistic) welfare state certainly helps mitigate the adverse health effects of unemployment and, therefore, the future costs to both individuals' health and welfare spending. In addition, the presence of a working partner is crucial for reducing the health consequences of employment instability. Therefore, policies aiming to increase female labor market participation should be promoted, especially in those contexts where the welfare state is less developed.
Moreover, my results underscore the importance of adopting a gender perspective in health research. The findings of the three articles show that job loss, unemployment, and precarious employment generally have adverse effects on men's health but weaker or absent consequences for women's health. This suggests the importance of labor and health policies that consider and further distinguish the specific needs of the male and female labor force in Europe. A further implication emerges as well: the health consequences of employment instability and de-standardization need to be investigated in light of gender arrangements and the transforming gender relationships in specific cultural and institutional contexts. My results indeed suggest that women's health advantage may be a transitory phenomenon, contingent on the predominant gendered institutional and cultural context. As the structural difference between men's and women's positions in society erodes and egalitarianism becomes the dominant norm, the gender difference in the health consequences of job loss and precariousness will probably erode as well. Therefore, while gender equality in opportunities and roles is desirable for contemporary societies and a political goal that cannot be postponed further, this thesis raises a further and perhaps more crucial question: what kind of equality should be pursued to provide men and women with both a good quality of life and equal chances in the public and private spheres? In this sense, I believe that social and labor policies aiming to reduce gender inequality should focus not only on improving women's integration into the labor market but also on implementing policies that target men and facilitate their involvement in the private sphere of life. An equal redistribution of social roles could activate a crucial transformation of gender roles and of the cultural models that sustain and still legitimate gender inequality in Western societies.
A large landslide (frozen debris avalanche) occurred at Assapaat on the south coast of the Nuussuaq Peninsula in Central West Greenland on June 13, 2021, at 04:04 local time. We present a compilation of available data from field observations, photos, remote sensing, and seismic monitoring to describe the event. Analysis of these data, combined with an analysis of pre- and post-failure digital elevation models, results in the first description of this type of landslide. The frozen debris avalanche initiated as a 6.9 × 10⁶ m³ failure of a permafrozen talus slope and underlying colluvium and till at 600–880 m elevation. It entrained a large volume of permafrozen colluvium along its 2.4 km path in two subsequent entrainment phases, accumulating a total volume between 18.3 × 10⁶ and 25.9 × 10⁶ m³. About 3.9 × 10⁶ m³ is estimated to have entered the Vaigat strait; however, no tsunami was reported, nor is one evident in the field. This is probably because the second stage of entrainment, along with a flattening of the slope angle, reduced the mobility of the frozen debris avalanche. We hypothesise that the initial talus slope failure was dynamically conditioned by warming of the ice matrix that binds the permafrozen talus slope. When the slope ice temperature rises to a critical level, its shear resistance is reduced, resulting in an unstable talus slope prone to failure. Likewise, we attribute the large-scale entrainment to increasing slope temperature and take the frozen debris avalanche as a strong sign that the permafrost in this region is increasingly approaching a critical state. Global warming is enhanced in the Arctic, and frequent landslide events in Western Greenland over the past decade lead us to hypothesise that continued warming will increase the frequency and magnitude of these types of landslides. Essential data for critical Arctic slopes, such as precipitation, snowmelt, and ground and surface temperature, are still missing to further test this hypothesis.
Research funds should therefore urgently be made available to better predict the changing landslide threat in the Arctic.
We use the prolonged Greek crisis as a case study to understand how a lasting economic shock affects the innovation strategies of firms in economies with moderate innovation activities. Adopting the 3-stage CDM model, we explore the link between R&D, innovation, and productivity for different size groups of Greek manufacturing firms during the prolonged crisis. At the first stage, we find that the continuation of the crisis harms the R&D engagement of smaller firms while increasing the willingness to undertake R&D activities among larger ones. At the second stage, knowledge production among smaller firms remains unaffected by R&D investments, while among larger firms the R&D decision is positively correlated with the probability of producing innovation, although the relationship weakens as the crisis continues. At the third stage, innovation output benefits only larger firms in terms of labor productivity, while the innovation–productivity nexus is insignificant for smaller firms during the lasting crisis.
Older adults with amnestic mild cognitive impairment (aMCI) who, in addition to their memory deficits, also suffer from frontal-executive dysfunctions have a higher risk of later developing dementia than older adults with aMCI without executive deficits and older adults with non-amnestic MCI (naMCI). Handgrip strength (HGS) is also correlated with the risk of cognitive decline in the elderly. Hence, the current study aimed to investigate the associations between HGS and executive functioning in individuals with aMCI, individuals with naMCI, and healthy controls. Older, right-handed adults with aMCI, naMCI, and healthy controls (HC) underwent a handgrip strength measurement via a handheld dynamometer. Executive functions were assessed with the Trail Making Test (TMT A&B). Normalized handgrip strength (nHGS, normalized to Body Mass Index (BMI)) was calculated, and its associations with executive functions (operationalized through z-scores of the TMT B/A ratio) were investigated through partial correlation analyses (i.e., accounting for age, sex, and severity of depressive symptoms). A positive and low-to-moderate correlation between right nHGS (rp(22) = 0.364; p = 0.063) or left nHGS (rp(22) = 0.420; p = 0.037) and executive functioning was observed in older adults with aMCI but not in those with naMCI or HC. Our results suggest that higher levels of nHGS are linked to better executive functioning in aMCI but not in naMCI or HC. This relationship is perhaps driven by alterations in the integrity of the hippocampal-prefrontal network occurring in older adults with aMCI. Further research is needed to provide empirical evidence for this assumption.
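The partial-correlation step described above can be sketched in a few lines: both variables are residualized on the covariates (age, sex, depressive symptoms) and the residuals are then correlated. This is an illustrative reimplementation with made-up data and variable names, not the study's code or data.

```python
import numpy as np

def partial_corr(x, y, covariates):
    """Correlation of x and y after removing the linear influence of the covariates."""
    Z = np.column_stack([np.ones(len(x)), covariates])   # design matrix with intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]    # residual of x given covariates
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]    # residual of y given covariates
    return np.corrcoef(rx, ry)[0, 1]

# Toy data standing in for nHGS, the TMT B/A z-score, and the three covariates.
rng = np.random.default_rng(0)
cov = rng.normal(size=(40, 3))                               # "age, sex, depression"
x = cov @ [0.5, 0.2, -0.3] + rng.normal(size=40)             # "nHGS"
y = 0.6 * x + cov @ [0.1, -0.2, 0.4] + rng.normal(size=40)   # "executive function"
r = partial_corr(x, y, cov)
print(round(r, 2))
```

The residualize-then-correlate approach is equivalent to the usual partial correlation formula and makes the covariate adjustment explicit.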
Industry 4.0 is transforming how businesses innovate and, as a result, companies are spearheading the movement towards 'Digital Transformation'. While some scholars advocate the use of design thinking to identify new innovative behaviours, cognition experts emphasise the importance of top managers in supporting employees to develop these behaviours. However, there is a dearth of research in this domain and companies are struggling to implement the required behaviours. To address this gap, this study aims to identify and prioritise behavioural strategies conducive to design thinking to inform the creation of a managerial mental model. We identify 20 behavioural strategies from 45 interviews with practitioners and educators and combine them with the concepts of 'paradigm-mindset-mental model' from cognition theory. The paper contributes to the body of knowledge by identifying and prioritising specific behavioural strategies to form a novel set of survival conditions aligned to the new industrial paradigm of Industry 4.0.
An important goal in biotechnology and (bio-) medical research is the isolation of single cells from a heterogeneous cell population. These specialised cells are of great interest for bioproduction, diagnostics, drug development, (cancer) therapy and research. To tackle emerging questions, an ever finer differentiation between target cells and non-target cells is required. This precise differentiation is a challenge for a growing number of available methods.
Since the physiological properties of cells are closely linked to their morphology, it is beneficial to include their appearance in the sorting decision. For established methods, however, this is a parameter that cannot be addressed, requiring new methods for the identification and isolation of target cells. Consequently, a variety of new flow-based methods that use 2D imaging data to identify target cells within a sample have been developed and presented in recent years. As these methods aim for high throughput, the devices developed typically require highly complex fluid-handling techniques, making them expensive while offering limited image quality.
In this work, a new continuous flow system for image-based cell sorting was developed that uses dielectrophoresis to precisely handle cells in a microchannel. Dielectrophoretic forces are exerted by inhomogeneous alternating electric fields on polarisable particles (here: cells). In the present system, the electric fields can be switched on and off precisely and quickly by a signal generator. In addition to the resulting simple and effective cell handling, the system is characterised by the outstanding quality of the image data generated and its compatibility with standard microscopes. These aspects result in low complexity, making it both affordable and user-friendly.
With the developed cell sorting system, cells could be sorted reliably and efficiently according to their cytosolic staining as well as morphological properties at different optical magnifications. The achieved purity of the target cell population was up to 95% and about 85% of the sorted cells could be recovered from the system. Good agreement was achieved between the results obtained and theoretical considerations. The achieved throughput of the system was up to 12,000 cells per hour. Cell viability studies indicated a high biocompatibility of the system.
The results presented demonstrate the potential of image-based cell sorting using dielectrophoresis. The outstanding image quality and highly precise yet gentle handling of the cells set the system apart from other technologies. This results in enormous potential for processing valuable and sensitive cell samples.
In this study, we model a sequence of a confined and a full eruption, employing the relaxed end state of the confined eruption of a kink-unstable flux rope as the initial condition for the ejective one. The full eruption, a model of a coronal mass ejection, develops as a result of converging motions imposed at the photospheric boundary, which drive flux cancellation. In this process, parts of the positive and negative external flux converge toward the polarity inversion line, reconnect, and cancel each other. Flux of the same amount as the canceled flux transfers to a flux rope, increasing the free magnetic energy of the coronal field. With sustained flux cancellation and the associated progressive weakening of the magnetic tension of the overlying flux, we find that a flux reduction of approximately 11% initiates the torus instability of the flux rope, which leads to a full eruption. These results demonstrate that a homologous full eruption, following a confined one, can be driven by flux cancellation.
A multidimensional and analytical perspective on Open Educational Practices in the 21st century
(2022)
Participatory approaches to teaching and learning are experiencing a new lease on life in the 21st century as a result of rapid technological development. Knowledge, practices, and tools can be shared across spatial and temporal boundaries in higher education by means of Open Educational Resources, Massive Open Online Courses, and open-source technologies. In this context, the Open Education Movement calls for new didactic approaches that encourage greater learner participation in formal higher education. Based on a representative literature review and focus group research, in this study an analytical framework was developed that enables researchers and practitioners to assess the form of participation in formal, collaborative teaching and learning practices. The analytical framework is focused on the micro-level of higher education, in particular on the interaction between students and lecturers when organizing the curriculum. For this purpose, the research reflects anew on the concept of participation, taking into account existing stage models for participation in the educational context. These are then brought together with the dimensions of teaching and learning processes, such as methods, objectives and content. This paper aims to make a valuable contribution to the opening up of learning and teaching and to expand the discourse around possibilities for interpreting Open Educational Practices.
A new evidence-based diet score to capture associations of food consumption and chronic disease risk
(2022)
Previously, attempts to compile the German dietary guidelines into a diet score were largely unsuccessful with regard to preventing chronic diseases in the EPIC-Potsdam study. Current guidelines were supplemented by the latest evidence from systematic reviews and expert papers published between 2010 and 2020 on the preventive potential of food groups against chronic diseases such as type 2 diabetes, cardiovascular diseases and cancer. A diet score was developed by scoring the food groups according to a recommended low, moderate or high intake. The relative validity and reliability of the diet score, assessed via a food frequency questionnaire, were investigated. The consideration of current evidence yielded 10 key food groups preventive of the chronic diseases of interest. They served as components of the diet score and were scored from 0 to 1 point, depending on their recommended intake, resulting in a maximum of 10 points. Both the reliability (r = 0.53) and the relative validity (r = 0.43) were deemed sufficient to consider the diet score a stable construct in future investigations. This new diet score can be a promising tool for investigating dietary intake in etiological research by concentrating on 10 key dietary determinants with evidence-based preventive potential for chronic diseases.
Quantifying the extremeness of heavy precipitation allows for the comparison of events. Conventional quantitative indices, however, typically neglect the spatial extent or the duration, while both are important to understand potential impacts. In 2014, the weather extremity index (WEI) was suggested to quantify the extremeness of an event and to identify the spatial and temporal scale at which the event was most extreme. However, the WEI does not account for the fact that one event can be extreme at various spatial and temporal scales. To better understand and detect the compound nature of precipitation events, we suggest complementing the original WEI with a “cross-scale weather extremity index” (xWEI), which integrates extremeness over relevant scales instead of determining its maximum.
Based on a set of 101 extreme precipitation events in Germany, we outline and demonstrate the computation of both WEI and xWEI. We find that the choice of the index can lead to considerable differences in the assessment of past events but that the most extreme events are ranked consistently, independently of the index. Even then, the xWEI can reveal cross-scale properties which would otherwise remain hidden. This also applies to the disastrous event from July 2021, which clearly outranks all other analyzed events with regard to both WEI and xWEI.
While demonstrating the added value of xWEI, we also identify various methodological challenges along the required computational workflow: these include the parameter estimation for the extreme value distributions, the definition of maximum spatial extent and temporal duration, and the weighting of extremeness at different scales. These challenges, however, also represent opportunities to adjust the retrieval of WEI and xWEI to specific user requirements and application scenarios.
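The core difference between the two indices can be sketched in a few lines, assuming we already have an extremeness value for each combination of spatial extent and duration (in the published method this value is derived from return periods of areal precipitation; the grid and scale steps below are purely illustrative): WEI picks the single most extreme scale combination, while xWEI integrates over all of them.

```python
def wei(extremeness):
    """WEI-style aggregation: the maximum over all (extent, duration) scales."""
    return max(e for row in extremeness for e in row)

def xwei(extremeness, d_area, d_duration):
    """xWEI-style aggregation: integrate extremeness over the scale grid
    (here a simple Riemann sum with grid steps d_area and d_duration)."""
    return sum(e for row in extremeness for e in row) * d_area * d_duration

# Toy extremeness grid: rows = spatial extents, columns = durations.
E = [[0.2, 0.5, 0.4],
     [0.6, 1.0, 0.7],
     [0.3, 0.8, 0.5]]

print(wei(E))             # → 1.0
print(xwei(E, 1.0, 1.0))  # → 5.0
```

Two events with the same peak extremeness (same WEI) can thus differ substantially in xWEI if one of them is extreme across many scales, which is exactly the cross-scale compound property the index is designed to capture.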
Van Allen Probes measurements revealed the presence of the most unusual structures in the ultra-relativistic radiation belts. Detailed modeling, analysis of pitch angle distributions, analysis of the difference between relativistic and ultra-relativistic electron evolution, along with theoretical studies of the scattering and wave growth, all indicate that electromagnetic ion cyclotron (EMIC) waves can produce a very efficient loss of the ultra-relativistic electrons in the heart of the radiation belts. Moreover, a detailed analysis of the profiles of phase space densities provides direct evidence for localized loss by EMIC waves. The evolution of multi-MeV fluxes shows dramatic and very sudden enhancements of electrons for selected storms. Analysis of phase space density profiles reveals that growing peaks at different values of the first invariant are formed at approximately the same radial distance from the Earth and show the sequential formation of the peaks from lower to higher energies, indicating that local energy diffusion is the dominant source of the acceleration from MeV to multi-MeV energies. Further simultaneous analysis of the background density and ultra-relativistic electron fluxes shows that the acceleration to multi-MeV energies only occurs when plasma density is significantly depleted outside of the plasmasphere, which is consistent with the modeling of acceleration due to chorus waves.
In postsocialist Potsdam, religious diversity has risen surprisingly in public life since 1990, although more than 80% of residents have no religious affiliation. City and state authorities have actively embraced issues around immigration and integration as well as the promotion of religious diversity and interreligious dialogue, and have linked this to the agenda of rejuvenating the city's religious heritage. For years, negotiations have been going on about the need for a mosque, the reconstruction of a synagogue, and the so-called "Garrison Church," a landmark military church building. These initiatives have dominated the public sphere for different reasons: beyond religion, they imply questions of memory, identity, immigration, and culture. This article puts these three cases into perspective to offer a nuanced understanding of the importance of religious spaces in secular contexts, considering city politics.
We discuss Neumann problems for self-adjoint Laplacians on (possibly infinite) graphs. Under the assumption that the heat semigroup is ultracontractive we discuss the unique solvability for non-empty subgraphs with respect to the vertex boundary and provide analytic and probabilistic representations for Neumann solutions. A second result deals with Neumann problems on canonically compactifiable graphs with respect to the Royden boundary and provides conditions for unique solvability and analytic and probabilistic representations.
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that using extreme value statistics to model the tail of the frequency–magnitude distribution of earthquakes in practice can produce biased and thus misleading results, because it is unknown to what degree the tail of the true distribution is sampled by the data. Using synthetic data allows this bias to be quantified in detail. The implicit assumption that the true Mmax is close to the maximum observed magnitude Mmax,observed restricts the class of potential models a priori to those with Mmax = Mmax,observed + ΔM, with an increment ΔM ≈ 0.5–1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009) and labeled "Mmax equals Mobs plus an increment." The incomplete consideration of the entire model family for the frequency–magnitude distribution neglects, however, the scenario of a large, so far unobserved earthquake.
Hegel's many remarks that seem to imply that philosophy should proceed completely a priori pose a problem for his philosophy of nature since, on this reading, Hegel offers an a priori derivation of empirical results of natural sciences. We show how this perception can be mitigated by interpreting Hegel's remarks as broadly in line with the pre-Kantian rationalist notion of a priori and offer reasons for doing so. We show that, rather than being a peculiarity of Hegel's philosophy, the practice of demonstrating a priori the results of empirical sciences was widespread in the pre-Kantian rationalist tradition. We argue that this practice was intelligible in light of the notion of a priori that was still quite prominent during Hegel's life. This notion of a priori differs from Kant's in that, while the latter's notion concerns propositions, the former concerned only their demonstration. According to it, the same proposition could be demonstrated both a posteriori and a priori. Post-Kantian idealists likewise developed projects of demonstrating specific scientific contents a priori. We then make our discussion more concrete by examining a particular case of an a priori derivation of a natural law, namely the law of fall, by both Leibniz and Hegel.
The digital transformation sets new requirements for all classes of enterprise systems in companies. ERP systems in particular, which represent the dominant class of enterprise systems, are struggling to meet the new requirements at all levels of the architecture. There is therefore an urgent need to reconsider the overall architecture of these systems and address the root of the related issues. Given that many of the restrictions ERP systems pose on their adaptability are related to the standardization of data, the database layer of ERP systems is addressed. Since databases serve as the foundation for data storage and retrieval, they limit the flexibility of enterprise systems and their ability to adapt to new requirements. To date, relational databases have been widely used. Using a systematic literature approach, recent requirements for ERP systems were identified. Prominent database approaches were assessed against the 23 requirements identified. The results reveal the strengths and weaknesses of recent database approaches and highlight the need to combine multiple database approaches to fulfill recent business requirements. From a conceptual point of view, this paper supports the idea of federated, interoperable databases to fulfill future requirements and support business operations. This research forms the basis for a renewal of the current generation of ERP systems and proposes that ERP vendors use different database concepts in the future.
As the use of free electron laser (FEL) sources increases, so do reports of non-linear phenomena occurring in these experiments, such as saturable absorption, induced transparency and scattering breakdowns. These are well known in the laser community, but are still rarely understood and anticipated in the X-ray community, which to date lacks tools and theories to accurately predict the respective experimental parameters and results. We present a simple theoretical framework for light–matter interactions induced by intense short X-ray pulses, as available at FEL sources. Our approach allows effects such as saturable absorption, induced transparency and scattering suppression, stimulated emission, and transmission spectra to be investigated, while including the density-of-states influence relevant to soft X-ray spectroscopy in, for example, transition metal complexes or functional materials. This computationally efficient, rate-model-based approach is intuitively adaptable to most solid-state sample systems in the soft X-ray spectrum, with the potential to be extended to liquid and gas sample systems as well. The feasibility of the model for estimating the named effects and the influence of the density of states is demonstrated using the example of CoPd transition metal systems at the Co edge. We believe this work is an important contribution to the preparation, performance, and understanding of FEL-based high-intensity, short-pulse experiments, especially on functional materials in the soft X-ray spectrum.
Fluctuating asymmetries (FA) are small stress-induced random deviations from perfect symmetry that arise during the development of bilaterally symmetrical traits. One of the factors that can reduce developmental stability of the individuals and cause FA at a population level is the loss of genetic variation. Populations of founding colonists frequently have lower genetic variation than their ancestral populations that could be reflected in a higher level of FA. The European starling (Sturnus vulgaris) is native to Eurasia and was introduced successfully in the USA in 1890 and Argentina in 1983. In this study, we documented the genetic diversity and FA of starlings from England (ancestral population), USA (primary introduction) and Argentina (secondary introduction). We predicted the Argentinean starlings would have the highest level of FA and lowest genetic diversity of the three populations. We captured wild adult European starlings in England, USA, and Argentina, measured their mtDNA diversity and allowed them to molt under standardized conditions to evaluate their FA of primary feathers. For genetic analyses, we extracted DNA from blood samples of individuals from Argentina and USA and from feather samples from individuals from England and sequenced the mitochondrial control region. Starlings in Argentina showed the highest composite FA and exhibited the lowest haplotype and nucleotide diversity. The USA population showed a level of FA and genetic diversity similar to the native population. Therefore, the level of asymmetry and genetic diversity found among these populations was consistent with our predictions based on their invasion history.
Manufacturing companies still have relatively few points of contact with the circular economy. In particular, extending the lifetime of whole products or of individual parts via remanufacturing is a promising approach to reducing waste. However, the necessary cost-efficient assessment of the condition of the individual parts is challenging, and the assessment procedures are technically complex (e.g., scanning and testing procedures). Furthermore, these assessment procedures are usually only available after the disassembly process has been completed. This is where conceptualization, data acquisition and simulation of remanufacturing processes can help. One major constraint on remanufacturing is the reduction of logistic effort, since logistics also have negative external effects on the environment. Regionalization is thus an additional, but ultimately consequential, challenge for remanufacturing. This article aims to fill a gap by providing a regional remanufacturing approach, in particular the design of local remanufacturing chains. A further focus lies on modeling and simulating alternative courses of action, including a feasibility study and an economic assessment.
A review of source models to further the understanding of the seismicity of the Groningen field
(2022)
The occurrence of felt earthquakes due to gas production in Groningen has initiated numerous studies and model attempts to understand and quantify induced seismicity in this region. The whole bandwidth of available models spans the range from fully deterministic models to purely empirical and stochastic models. In this article, we summarise the most important model approaches, describing their main achievements and limitations. In addition, we discuss remaining open questions and potential future directions of development.
Starch is a complex carbohydrate polymer produced by plants, and by crops in particular in huge amounts. It consists of amylose and amylopectin, which are built from alpha-1,4- and alpha-1,6-linked glucose units. Despite this simple chemistry, starch metabolism as a whole is complex, involving various (iso)enzymes and proteins whose interplay is not yet fully understood. Starch is essential for humans and animals as a source of nutrition and energy. Nowadays, starch is also commonly used in non-food industrial sectors for a variety of purposes. However, native starches do not always satisfy the needs of the wide range of (industrial) applications. This review summarizes the structural properties of starch, analytical methods for starch characterization, and in planta starch modifications.
A rigorous construction of the supersymmetric path integral associated to a compact spin manifold
(2022)
We give a rigorous construction of the path integral in N = 1/2 supersymmetry as an integral map for differential forms on the loop space of a compact spin manifold. It is defined on the space of differential forms which can be represented by extended iterated integrals in the sense of Chen and Getzler-Jones-Petrack. Via the iterated integral map, we compare our path integral to the non-commutative loop space Chern character of Guneysu and the second author. Our theory provides a rigorous background to various formal proofs of the Atiyah-Singer index theorem for twisted Dirac operators using supersymmetric path integrals, as investigated by Alvarez-Gaume, Atiyah, Bismut and Witten.
Groundwater recharge (GWR) is one of the most challenging water fluxes to estimate, as it relies on observed data that are often limited in many developing countries.
This study developed an innovative water budget method using satellite products for estimating the spatially distributed GWR at monthly and annual scales in tropical wet sedimentary regions despite cloudy conditions.
The distinctive features proposed in this study include the capacity to address 1) evapotranspiration estimations in tropical wet regions frequently overlaid by substantial cloud cover; and 2) seasonal root-zone water storage estimations in sedimentary regions prone to monthly variations.
The method also utilises satellite-based information of the precipitation and surface runoff. The GWR was estimated and validated for the hydrologically contrasting years 2016 and 2017 over a tropical wet sedimentary region located in North-eastern Brazil, which has substantial potential for groundwater abstraction.
This study showed that applying a cloud-cleaning procedure based on monthly compositions of biophysical data enables the production of a reasonable evapotranspiration proxy, suitable for estimating groundwater recharge by the water-budget method.
The resulting GWR rates were 219 (2016) and 302 (2017) mm yr⁻¹, showing good correlations (CC = 0.68 to 0.83) and slight underestimations (PBIAS = −13 to −9%) when compared with reference estimates obtained by the water-table fluctuation method for 23 monitoring wells. Sensitivity analysis shows that water storage changes account for +19% to −22% of our monthly evaluation.
The satellite-based approach consistently demonstrated that the consideration of cloud-cleaned evapotranspiration and root-zone soil water storage changes are essential for a proper estimation of spatially distributed GWR in tropical wet sedimentary regions because of their weather seasonality and cloudy conditions.
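The monthly water-budget closure described above can be sketched as simple bookkeeping; the closure GWR = P − ET − Q − ΔS and all numbers below are illustrative assumptions of how such a budget is assembled, not the study's data:

```python
def monthly_gwr(p_mm, et_mm, runoff_mm, dstorage_mm):
    """Close a monthly water budget for groundwater recharge (all in mm):
    recharge = precipitation - evapotranspiration - surface runoff
               - change in root-zone water storage.
    Months with a negative closure yield zero recharge."""
    return max(p_mm - et_mm - runoff_mm - dstorage_mm, 0.0)

# hypothetical satellite-derived monthly series (mm), wet-to-dry season
precip = [180, 220, 150, 60, 10, 5]
et     = [ 90, 100,  95, 80, 60, 50]
runoff = [ 30,  45,  20,  5,  0,  0]
dstore = [ 20,  30, -10, -40, -30, -20]   # root-zone storage change

annual = sum(monthly_gwr(p, e, q, ds)
             for p, e, q, ds in zip(precip, et, runoff, dstore))
```

Note how the storage term changes sign between seasons; ignoring it would bias the dry-season months, consistent with the finding that storage changes account for a substantial share of the monthly evaluation.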
Little is known about the current state of research on the involvement of young people in hate speech. This systematic review therefore presents findings on a) the prevalence of hate speech among children and adolescents, and the hate speech definitions that guide prevalence assessments for this population; and b) the theoretical and empirical overlap of hate speech with related concepts. The review followed the Cochrane approach. To be included, publications had to deal with real-life experiences of hate speech, provide empirical prevalence data for samples aged 5 to 21 years, and be published in academic formats. Included publications were full-text coded by two raters (kappa = .80) and their quality was assessed. The string-guided electronic search (ERIC, SocInfo, PsycInfo, Psyndex) yielded 1,850 publications. Eighteen publications based on 10 studies met the inclusion criteria, and their findings were systematized. Twelve publications were of medium quality due to minor deficiencies in their theoretical or methodological foundations. All studies used samples of adolescents, none of younger children. Nine of the 10 studies applied quantitative methodologies. Results showed that frequencies of hate speech exposure were higher than those of victimization and perpetration. Definitions of hate speech and assessment instruments were heterogeneous. Empirical evidence was found for the often theorized overlap between hate speech and bullying. The paper concludes by presenting a definition of hate speech, including implications for practice, policy, and research.
A task-based parallel elliptic solver for numerical relativity with discontinuous Galerkin methods
(2022)
Elliptic partial differential equations are ubiquitous in physics. In numerical relativity---the study of computational solutions to the Einstein field equations of general relativity---elliptic equations govern the initial data that seed every simulation of merging black holes and neutron stars. In the quest to produce detailed numerical simulations of these most cataclysmic astrophysical events in our Universe, numerical relativists resort to the vast computing power offered by current and future supercomputers. To leverage these computational resources, numerical codes for the time evolution of general-relativistic initial value problems are being developed with a renewed focus on parallelization and computational efficiency. Their capability to solve elliptic problems for accurate initial data must keep pace with the increasing detail of the simulations, but elliptic problems are traditionally hard to parallelize effectively.
In this thesis, I develop new numerical methods to solve elliptic partial differential equations on computing clusters, with a focus on initial data for orbiting black holes and neutron stars. I develop a discontinuous Galerkin scheme for a wide range of elliptic equations, and a stack of task-based parallel algorithms for their iterative solution. The resulting multigrid-Schwarz preconditioned Newton-Krylov elliptic solver proves capable of parallelizing over 200 million degrees of freedom to at least a few thousand cores, and already solves initial data for a black hole binary about ten times faster than the numerical relativity code SpEC. I also demonstrate the applicability of the new elliptic solver across physical disciplines, simulating the thermal noise in thin mirror coatings of interferometric gravitational-wave detectors to unprecedented accuracy. The elliptic solver is implemented in the new open-source SpECTRE numerical relativity code, and set up to support simulations of astrophysical scenarios for the emerging era of gravitational-wave and multimessenger astronomy.
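As a purely didactic illustration of solving an elliptic problem iteratively (a plain, unpreconditioned conjugate-gradient solve of a 1D Poisson equation with NumPy; this is a far cry from the multigrid-Schwarz preconditioned Newton-Krylov scheme and discontinuous Galerkin discretization developed in the thesis):

```python
import numpy as np

def solve_poisson_1d(n=200, tol=1e-10):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 by conjugate gradients.
    With f = pi^2 sin(pi x) the exact solution is u = sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)

    def apply_A(u):
        # matrix-free second-difference operator (Dirichlet boundaries)
        au = 2.0 * u
        au[:-1] -= u[1:]
        au[1:] -= u[:-1]
        return au / h**2

    u = np.zeros(n)
    r = f - apply_A(u)       # initial residual
    p = r.copy()             # initial search direction
    rs = r @ r
    for _ in range(10 * n):
        ap = apply_A(p)
        alpha = rs / (p @ ap)
        u += alpha * p
        r -= alpha * ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, u
```

Even this toy hints at why preconditioning matters: the iteration count of plain CG grows with resolution, which is exactly what multigrid-type preconditioners are designed to suppress.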
This contribution examines traces of a transmedial aesthetic in texts that Josef Winkler published around the turn of the millennium. The first part works out thematic and structural commonalities between Natura morta (1998) and Wenn es soweit ist (2001). The analysis focuses on the technique of repetition and, connected with it, on continuities between the two texts. Second, the organizational and structural principles of the texts are analysed and compared on the basis of two spatial models, the models of sedimentation and accumulation. Third, starting from the principle of repetition that underlies both the sedimentation and the accumulation model, performative aspects of Winkler's narration are demonstrated and first traces of a transmedial aesthetic in Winkler's œuvre are uncovered.
Different lake systems may reflect different climate elements of climate changes, while the responses of the lake systems themselves are also diverse and not yet completely understood. A comparison of lakes in different climate zones during the high-amplitude and abrupt climate fluctuations of the Last Glacial to Holocene transition therefore provides an exceptional opportunity to investigate distinct natural lake-system responses to different abrupt climate changes. The aim of this doctoral thesis was to reconstruct climatic and environmental fluctuations down to (sub-)annual resolution in two different lake systems during the Last Glacial-Interglacial transition (~17 to 11 ka). Lake Gościąż, situated in temperate central Poland, developed in the Allerød after the recession of the Last Glacial ice sheets. The Dead Sea is located in the Levant (eastern Mediterranean) within a steep gradient from sub-humid to hyper-arid climate, and formed in the mid-Miocene. Despite their differences in sedimentation processes, both lakes form annual laminations (varves), which are crucial for studies of abrupt climate fluctuations. This doctoral thesis was carried out within the DFG project PALEX-II (Paleohydrology and Extreme Floods from the Dead Sea ICDP Core), which investigates extreme hydro-meteorological events in the ICDP core in relation to climate changes, and ICLEA (Virtual Institute of Integrated Climate and Landscape Evolution Analyses), which aims to improve the understanding of climate dynamics and landscape evolution in north-central Europe since the Last Glacial. Further, it contributes to the Helmholtz Climate Initiative REKLIM (Regional Climate Change and Humans) Research Theme 3 "Extreme events across temporal and spatial scales", which investigates extreme events using climate data, paleo-records and model-based simulations.
The three main aims were to (1) establish robust chronologies of the lakes, (2) investigate how major and abrupt climate changes affect the lake systems, and (3) to compare the responses of the two varved lakes to these hemispheric-scale climate changes.
Robust chronologies are a prerequisite for highly resolved climate and environmental reconstructions, as well as for comparisons between archives. Thus, addressing the first aim, a novel chronology of Lake Gościąż was established by microscopic varve counting and Bayesian age-depth modelling in Bacon for a non-varved section, and was corroborated by independent age constraints from 137Cs activity concentration measurements, AMS radiocarbon dating and pollen analysis. The varve chronology reaches from the late Allerød until AD 2015, revealing more Holocene varves than a previous study of Lake Gościąż suggested. Varve formation throughout the complete Younger Dryas (YD) even allowed the identification of annually- to decadally-resolved leads and lags in proxy responses at the YD transitions.
The lateglacial chronology of the Dead Sea (DS) was thus far mainly based on radiocarbon and U/Th dating. In the unique ICDP core from the deep lake centre, a continuous search for cryptotephra was carried out in the lateglacial sediments between two prominent gypsum deposits, the Upper and Additional Gypsum Units (UGU and AGU, respectively). Two cryptotephras were identified whose glass analyses correlate with tephra deposits from the Süphan and Nemrut volcanoes, indicating that the AGU is ~1000 years younger than previously assumed, shifting it into the YD and the underlying varved interval into the Bølling/Allerød, contradicting previous assumptions.
Using microfacies analyses, stable isotopes and temperature reconstructions, the second aim was achieved at Lake Gościąż. The YD lake system was dynamic, characterized by higher aquatic bioproductivity, more re-suspended material and less anoxia than during the Allerød and Early Holocene, mainly influenced by stronger water circulation and catchment erosion due to stronger westerly winds and less lake sheltering. Cooling at the YD onset was ~100 years longer than the final warming, while environmental proxies lagged the onset of cooling by ~90 years, but occurred contemporaneously during the termination of the YD. Chironomid-based temperature reconstructions support recent studies indicating mild YD summer temperatures. Such a comparison of annually-resolved proxy responses to both abrupt YD transitions is rare, because most European lake archives do not preserve varves during the YD.
To accomplish the second aim at the DS, microfacies analyses were performed between the UGU (~17 ka) and the Holocene onset (~11 ka) in shallow-water (Masada) and deep-water (ICDP core) environments. This time interval is marked by a huge but fluctuating lake-level drop, and the complete transition into the Holocene is therefore only recorded in the deep-basin ICDP core. In this thesis, this transition was investigated continuously and in detail for the first time. The final two pronounced lake-level drops, recorded by the deposition of the UGU and AGU, were interrupted by one millennium of relative depositional stability and a positive water budget, as recorded by aragonite varve deposition interrupted by only a few event layers. Further, the intercalation of aragonite varves between the gypsum beds of the UGU and AGU shows that these generally dry intervals were also marked by decadal- to centennial-long rises in lake level. While continuous aragonite varves indicate decadal-long stable phases, the occurrence of thicker and more frequent event layers suggests generally more instability during the gypsum units. These results suggest a pattern of complex and variable hydroclimate at different time scales during the Lateglacial at the DS.
The third aim was accomplished based on the individual studies above, which jointly provide an integrated picture of different lake responses to different climate elements of hemispheric-scale abrupt climate changes during the Last Glacial-Interglacial transition. In general, climatically driven facies changes are more dramatic in the DS than at Lake Gościąż. Further, Lake Gościąż is characterized by continuous varve formation throughout nearly the complete profile, whereas the DS record is largely characterized by extreme event layers, hampering the establishment of a continuous varve chronology. The lateglacial sedimentation in Lake Gościąż is influenced mainly by westerly winds and to a lesser extent by changes in catchment vegetation, whereas the DS is primarily influenced by changes in winter precipitation, which are caused by temperature variations in the Mediterranean. Interestingly, sedimentation in both archives is more stable during the Bølling/Allerød and more dynamic during the YD, even though the sedimentation processes are different.
In summary, this doctoral thesis presents seasonally resolved records from two lake archives during the Lateglacial (ca 17-11 ka) to investigate the impact of abrupt climate changes on different lake systems. New age constraints from the identification of volcanic glass shards in the lateglacial sediments of the DS allowed the first lithology-based interpretation of the YD in the DS record and its comparison to Lake Gościąż. This highlights the importance of constructing a robust chronology, and provides a first step towards the synchronization of the DS with other eastern Mediterranean archives. Further, climate reconstructions from the lake sediments showed variability on different time scales in the different archives, i.e. decadal to millennial fluctuations in the lateglacial DS, and even annual variations and sub-decadal leads and lags in proxy responses during the rapid YD transitions in Lake Gościąż. This demonstrates the importance of comparing different lake archives to better understand the regional and local impacts of hemispheric-scale climate variability. The results provide a striking example of how different lake systems show different responses and react to different climate elements of abrupt climate changes, further highlighting the importance of understanding the respective lake system for climate reconstructions.
This final report presents the results of the BMBF-funded joint project "E-LANE: E-Learning in der Lehrerfortbildung: Angebote, Nutzung und Erträge" (e-learning in teacher professional development: offerings, use, and outcomes), carried out jointly by the University of Potsdam (Prof. Dr. Dirk Richter) and Leuphana University Lüneburg (Prof. Dr. Marc Kleinknecht). The aim of the project was to examine the offering of digital and digitally supported professional development courses for teachers in the federal states of Berlin, Brandenburg, and Schleswig-Holstein. In four sub-studies, database analyses of the professional development offerings in the respective states were conducted, along with written surveys of trainers and of participants in online courses. In addition, an online professional development course for teachers on the topic of feedback was specifically designed and delivered.
This final report covers the results of the scientific evaluation of the workshop programme "Schule leiten" ("Leading Schools"). The workshop is an 18-month professional development programme for school leaders, designed by the Deutsche Schulakademie and delivered in cooperation with the Saarland Ministry of Education and Culture and the Saarland Landesinstitut für Pädagogik und Medien. Between 2016 and 2020, two members of the leadership team of general-education schools each completed, for the first time, various components of the workshop in a total of three cohorts. In addition, participants were tasked with planning and developing an individual school development project over the course of the programme and implementing it at their school under the guidance of the workshop. The University of Potsdam, research group of Prof. Dr. Dirk Richter, was commissioned to assess the perceived quality and the effectiveness of the programme. This report presents the evaluation results for cohorts 2 and 3.
The evaluation centres on the following research questions: (1) How do participants rate the quality of the workshop "Schule leiten"? (2) To what extent has the workshop contributed to strengthening participants' leadership competencies (including attitudes and leadership practices)? (3) How have school structures and processes for promoting school development changed in the participating schools as a result of the workshop? To answer these questions, a series of written surveys was conducted with the participating school leaders and with the teachers of the participating schools. These surveys took place both alongside the programme (after completion of the individual components) and in a pre-post-follow-up design. Furthermore, within a qualitative companion study, various persons (principals, members of the school leadership team, teachers) from a total of five schools were interviewed at three points in time about how the planning, development, and implementation of the school development projects proceeded.
Overall, the evaluation findings indicate that the quality of the workshop "Schule leiten" as a whole, as well as its individual components, was rated very positively. Participants in cohort 2 perceived the workshop more positively on a number of characteristics than participants in cohort 3. The results further suggest that certain facets of the participants' leadership practice changed positively over time; evidence for this exists both from the perspective of the participants themselves and from the perspective of the teachers at their schools. Motivational characteristics of the participants, and those aspects of leadership practice relating to supporting cooperation within the school, remained largely constant from the participants' perspective. With regard to changes in school structures for school development, the results point to a positive development in the perceived openness towards cooperation within the teaching staff from the teachers' perspective. The findings of the qualitative companion study provide further information on the perceived quality of the workshop and on changes among the participants and in school structures.
We study the diffusive motion of a particle in a subharmonic potential of the form U(x) = |x|^c (0 < c < 2), driven by long-range correlated, stationary fractional Gaussian noise ξ_α(t) with 0 < α ≤ 2. In the absence of the potential, the particle exhibits free fractional Brownian motion with anomalous diffusion exponent α. While for a harmonic external potential the dynamics converges to a Gaussian stationary state, we demonstrate through extensive numerical analysis that for shallower-than-harmonic potentials stationary states exist only as long as the relation c > 2(1 − 1/α) holds. We analyse the motion in terms of the mean squared displacement and (when it exists) the stationary probability density function. Moreover, we discuss analogies to the non-stationarity of Lévy flights in shallow external potentials.
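A minimal numerical sketch of motion in such a subharmonic potential, restricted for simplicity to the uncorrelated white-noise limit α = 1 (ordinary Brownian motion, for which the stationarity condition c > 2(1 − 1/α) = 0 holds for all 0 < c < 2); the fractional Gaussian noise of the actual study is not reproduced here:

```python
import numpy as np

def simulate(c=1.0, diffusivity=1.0, dt=1e-3, n_steps=200_000, seed=0):
    """Euler-Maruyama integration of overdamped motion in U(x) = |x|**c,
    driven by white Gaussian noise (the alpha = 1 limit).
    For c = 1 the stationary density is p(x) ~ exp(-|x|/D), so <|x|> = D."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = 0.0
    noise = np.sqrt(2.0 * diffusivity * dt) * rng.standard_normal(n_steps)
    for i in range(1, n_steps):
        xi = x[i - 1]
        # force -U'(x) = -c * sign(x) * |x|**(c - 1); taken as zero at x = 0
        force = -c * np.sign(xi) * abs(xi) ** (c - 1.0) if xi != 0.0 else 0.0
        x[i] = xi + force * dt + noise[i]
    return x
```

For c = 1 the long-run average of |x| converges to the diffusivity D, which gives a quick sanity check on the discretization; shallower exponents c broaden the stationary density.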
There is a longstanding and widely held misconception about the relative remoteness of abstract concepts from concrete experiences. This review examines the current evidence for external influences and internal constraints on the processing, representation, and use of abstract concepts such as truth, friendship, and number. We highlight the theoretical benefit of distinguishing between grounded and embodied cognition and then ask which roles perception, action, language, and social interaction play in acquiring, representing and using abstract concepts. Reviewing several studies, we show that abstract concepts are, contrary to the accepted definition, not detached from perception and action. Focussing on magnitude-related concepts, we also discuss evidence for cultural influences on abstract knowledge and explore how internal processes such as inner speech, metacognition, and inner bodily signals (interoception) influence the acquisition and retrieval of abstract knowledge. Finally, we discuss some methodological developments. Specifically, we focus on the importance of studies that investigate the time course of conceptual processing, and we argue that, because of the paramount role of sociality for abstract concepts, new methods are necessary to study concepts in interactive situations. We conclude that bodily, linguistic, and social constraints provide important theoretical limitations for our theories of conceptual knowledge.
Abzug unter Beobachtung
(2022)
For more than four decades, the armed forces and military intelligence services of the NATO states observed the Soviet troops in the GDR. In the Federal Republic of Germany, the Bundesnachrichtendienst (BND) was responsible for foreign military intelligence, employing the means and methods of an intelligence service. The Bundeswehr, by contrast, conducted tactical signals and electronic intelligence, above all intercepting the radio traffic of the "Group of Soviet Forces in Germany" (GSSD). With the establishment of a central agency for military intelligence, the Amt für Nachrichtenwesen der Bundeswehr, the Federal Ministry of Defence consolidated and at the same time expanded its analytical capacities in the 1980s. The BND's monopoly on foreign military intelligence was thereby increasingly called into question by the Bundeswehr.
After German reunification on 3 October 1990, more than 300,000 Soviet soldiers were still stationed on German territory. The GSSD, renamed the Western Group of Forces (WGT) in 1989, was to withdraw completely by 1994 under the Two Plus Four Treaty. The treaty also prohibited the three Western powers from engaging in military activity in the new federal states. The Western powers' military liaison missions, until then indispensable for military intelligence, had to cease their services. But what happened to this "allied legacy"? Who, on the German side, took over intelligence on the Soviet troops, and who monitored the withdrawal?
The study examines the role of the Bundeswehr and the BND during the withdrawal of the WGT between 1990 and 1994, asking about cooperation and competition between armed forces and intelligence services. What military and intelligence means and capabilities did the Federal Government provide to manage the withdrawal after the Western military liaison missions were dissolved? How did the demands on the BND's foreign military intelligence change? To what extent did competition and cooperation between the Bundeswehr and the BND continue during the withdrawal? What role did the former Western powers play? The study is intended as a contribution not only to military history but also to the history of the German intelligence services.
The use of alternating current (AC) electrokinetic forces, such as dielectrophoresis and AC electroosmosis, as a simple and fast method to immobilize sub-micrometer objects onto nanoelectrode arrays is presented. Due to its medical relevance, the influenza virus is chosen as a model organism. One of the outstanding features is that the immobilization of viral material on the electrodes can be achieved permanently, allowing subsequent handling independently of the electrical setup. Thus, using merely electric fields, we demonstrate that the need for prior chemical surface modification could become obsolete. The accumulation of viral material over time is observed by fluorescence microscopy. The influence of side effects such as electrothermal fluid flow, which induces fluid motion above the electrodes and an intensity gradient within the electrode array, is discussed. Owing to the improved resolution obtained by combining fluorescence microscopy with deconvolution, it is shown that the viral material is mainly drawn to the electrode edge and to a lesser extent to the electrode surface. Finally, areas of application for this functionalization technique are presented.
This paper aims to contribute to exploring the design possibilities of robots for use in human-robot interaction. In an experiment, we investigate the influence of the human's personality and of the robot's design, especially its humanization, on the robot's acceptance. We use the Almere model, the Big 5 personality traits, and anthropomorphic gestalt variants as the foundation for our investigation. The assumption that an anthropomorphized robot variant would, in principle, be preferred to the standard variant when a choice is enforced was not supported by our experiment. This allows the interpretation that anthropomorphism does not necessarily lead to intentional perception and, consequently, does not guarantee acceptance.
This study examines the access to healthcare for children and adolescents with three common chronic diseases (type-1 diabetes (T1D), obesity, or juvenile idiopathic arthritis (JIA)) within the 4th (Delta), 5th (Omicron), and beginning of the 6th (Omicron) wave (June 2021 until July 2022) of the COVID-19 pandemic in Germany in a cross-sectional study using three national patient registries. A paper-and-pencil questionnaire was given to parents of pediatric patients (<21 years) during the routine check-ups. The questionnaire contains self-constructed items assessing the frequency of healthcare appointments and cancellations, remote healthcare, and satisfaction with healthcare. In total, 905 parents participated in the T1D-sample, 175 in the obesity-sample, and 786 in the JIA-sample. In general, satisfaction with healthcare (scale: 0–10; 10 reflecting the highest satisfaction) was quite high (median values: T1D 10, JIA 10, obesity 8.5). The proportion of children and adolescents with canceled appointments was relatively small (T1D 14.1%, JIA 11.1%, obesity 20%), with a median of 1 missed appointment, respectively. Only a few parents (T1D 8.6%; obesity 13.1%; JIA 5%) reported obstacles regarding health services during the pandemic. To conclude, it seems that access to healthcare was largely preserved for children and adolescents with chronic health conditions during the COVID-19 pandemic in Germany.
People pursue successful communication throughout their lives. To effectively convey information to others, speakers employ various linguistic tools, such as word order, prosodic cues, and lexical choices. The study of these linguistic cues is known as the study of information structure (IS), and an important issue in child language acquisition is how children acquire IS. This thesis seeks to improve our understanding of how children acquire different tools of focus marking (prosodic cues, syntactic cues, and the focus particle only) from a cross-linguistic perspective.
In the first study, following Szendrői and colleagues (2017), a sentence-picture verification task was used to investigate whether three- to five-year-old Mandarin-speaking children, as well as Mandarin-speaking adults, could apply prosodic information to recognize focus in sentences. In the second study, German-speaking adults and children were included alongside Mandarin-speaking adults and children to test the assumption that children show adult-like performance in understanding sentence focus by identifying language-specific cues in their mother tongue from early on. This study employed the same sentence-picture verification paradigm as the first, combined with the eye-tracking method. Finally, the last study examined whether five-year-old Mandarin-speaking children could understand sentences with pre-subject only, and whether prosodic information would help them better understand such sentences.
The overall results suggest that Mandarin-speaking children can make use of the specific linguistic cues in their ambient language from early on. That is, in Mandarin, a topic-prominent tone language, word order plays a more important role than prosodic information, and even three-year-old Mandarin-speaking children could follow word order cues. Although German-speaking children could follow prosodic information, they did not show adult-like performance in the object-accented condition. A plausible reason for this result is that German offers more ways of marking focus, such as flexible word order, prosodic information, and focus particles, so it takes German-speaking children longer to master these linguistic tools. Another important empirical finding regarding syntactically marked focus in German is that the cleft construction does not appear to be a valid focus construction, corroborating previous observations (Dufter, 2009). Further, the eye-tracking method helped uncover how the parser directs attention when recognizing focus. The final study showed that, given explicit verbal context, Mandarin-speaking children could understand sentences with pre-subject only, providing a better understanding of how Mandarin-speaking children acquire the focus particle only.
Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established hypothesis is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with considerable heterogeneity among existing studies on the hypothesis and causal evidence still limited, a final verdict on its robustness is still pending. To contribute to this ongoing debate, we conducted a week-long randomized control trial with N = 381 adult Instagram users recruited via Prolific. Specifically, we tested how active SNS use, operationalized as picture postings on Instagram, affects different dimensions of well-being. The results depicted a positive effect on users' positive affect but null findings for other well-being outcomes. The findings broadly align with the recent criticism against the active use hypothesis and support the call for a more nuanced view on the impact of SNSs. <br /> Lay Summary Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established assumption is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with great diversity among conducted studies on the hypothesis and a lack of causal evidence, a final verdict on its viability is still pending. To contribute to this ongoing debate, we conducted a week-long experimental investigation with 381 adult Instagram users. Specifically, we tested how posting pictures on Instagram affects different aspects of well-being. The results of this study depicted a positive effect of posting Instagram pictures on users' experienced positive emotions but no effects on other aspects of well-being. 
The findings broadly align with the recent criticism against the active use hypothesis and support the call for a more nuanced view on the impact of SNSs on users.
Dielectrophoresis (DEP) is an AC electrokinetic effect mainly used to manipulate cells. Smaller particles, like virions, antibodies, enzymes, and even dye molecules, can be immobilized by DEP as well. It has been shown in principle that enzymes remain active after immobilization by DEP, but the retained activity had not yet been quantified. In this study, the activity of the enzyme horseradish peroxidase (HRP) is quantified after immobilization by DEP. For this, HRP is immobilized on regular arrays of titanium nitride ring electrodes of 500 nm diameter and 20 nm width. The activity of HRP on the electrode chip is measured with a limit of detection of 60 fg HRP by observing the enzymatic turnover of Amplex Red and H2O2 to fluorescent resorufin by fluorescence microscopy. The initial activity of the permanently immobilized HRP equals up to 45% of the activity that can be expected for an ideal monolayer of HRP molecules on all electrodes of the array. Localization of the immobilizate on the electrodes is accomplished by staining with the fluorescent product of the enzyme reaction. The high residual activity of enzymes after AC field-induced immobilization shows the method's suitability for biosensing and research applications.
Fragmentation of peptides leaves characteristic patterns in mass spectrometry data, which can be used to identify protein sequences, but this method is challenging for mutated or modified sequences for which limited information exists. Altenburg et al. use an ad hoc learning approach to learn relevant patterns directly from unannotated fragmentation spectra.
Mass spectrometry-based proteomics provides a holistic snapshot of the entire protein set of living cells on a molecular level. Currently, only a few deep learning approaches exist that involve peptide fragmentation spectra, which represent partial sequence information of proteins.
Commonly, these approaches lack the ability to characterize less studied or even unknown patterns in spectra because of their use of explicit domain knowledge.
Here, to elevate unrestricted learning from spectra, we introduce 'ad hoc learning of fragmentation' (AHLF), a deep learning model that is end-to-end trained on 19.2 million spectra from several phosphoproteomic datasets. AHLF is interpretable, and we show that peak-level feature importance values and pairwise interactions between peaks are in line with corresponding peptide fragments.
We demonstrate our approach by detecting post-translational modifications, specifically protein phosphorylation based on only the fragmentation spectrum without a database search. AHLF increases the area under the receiver operating characteristic curve (AUC) by an average of 9.4% on recent phosphoproteomic data compared with the current state of the art on this task.
Furthermore, use of AHLF in rescoring search results increases the number of phosphopeptide identifications by a margin of up to 15.1% at a constant false discovery rate. To show the broad applicability of AHLF, we use transfer learning to also detect cross-linked peptides, as used in protein structure analysis, with an AUC of up to 94%.
Flood risk management in Germany follows an integrative approach in which both private households and businesses can make an important contribution to reducing flood damage by implementing property-level adaptation measures. While the flood adaptation behavior of private households has already been widely researched, comparatively less attention has been paid to the adaptation strategies of businesses. However, their ability to cope with flood risk plays an important role in the social and economic development of a flood-prone region. Therefore, using quantitative survey data, this study aims to identify different strategies and adaptation drivers of 557 businesses damaged by a riverine flood in 2013 and 104 businesses damaged by pluvial or flash floods between 2014 and 2017. Our results indicate that a low perceived self-efficacy may be an important factor that can reduce the motivation of businesses to adapt to flood risk. Furthermore, property-owners tended to act more proactively than tenants. In addition, high experience with previous flood events and low perceived response costs could strengthen proactive adaptation behavior. These findings should be considered in business-tailored risk communication.
Reducing greenhouse gas emissions in food systems is becoming more challenging as food is increasingly consumed away from producer regions, highlighting the need to consider emissions embodied in trade in agricultural emissions accounting.
To address this, our study explores recent trends in trade-adjusted agricultural emissions of food items at the global, regional, and national levels.
We find that emissions are largely dependent on a country’s consumption patterns and their agricultural emission intensities relative to their trading partners’.
The absolute differences between the production-based and trade-adjusted emissions accounting approaches are especially apparent for major agricultural exporters and importers and where large shares of emission-intensive items such as ruminant meat, milk products and rice are involved.
In relative terms, some low-income and emerging and developing economies that consume emission-intensive food products show large differences between approaches.
Similar trends are also found under various specifications that account for trade and re-exports differently.
These findings could serve as an important element towards constructing national emissions reduction targets that consider trading partners, leading to more effective emissions reductions overall.
Not much is known about how bystanders' emotional reactions after not intervening in cyberbullying might impact their health. Narrowing this gap in the literature, the present study examined the moderating effects of emotional reactions (i.e., guilt, sadness, anger) after not intervening in cyberbullying on the longitudinal relationship between cyberbullying bystanding and health issues (i.e., subjective health complaints, suicidal ideation, non-suicidal self-harm). Participants were 1,067 adolescents between 12 and 15 years old (M-age = 13.67; 51% girls). The findings showed a positive association between Time 1 cyberbullying bystanding and Time 2 health issues. Guilt moderated the positive relationships among Time 1 cyberbullying bystanding and Time 2 subjective health complaints, suicidal ideation, and non-suicidal self-harm. Time 1 sadness also moderated the relationship between Time 1 cyberbullying bystanding and Time 2 suicidal ideation and non-suicidal self-harm. However, anger did not moderate any of the associations.
Background
Wearables, as small portable computer systems worn on the body, can track user fitness and health data, which can be used to customize health insurance contributions individually. In particular, insured individuals with a healthy lifestyle can receive a reduction in the contributions they pay. However, this potential is hardly used in practice.
Objective
This study aims to identify which barrier factors impede the use of wearables for assessing individual risk scores for health insurance, despite its technological feasibility, and to rank these barriers according to their relevance.
Methods
To reach these goals, we conducted a ranking-type Delphi study with the following three stages. First, we collected possible barrier factors from a panel of 16 experts and consolidated them into a list of 11 barrier categories. Second, the panel was asked to rank them regarding their relevance. Third, to enhance the panel consensus, the ranking was revealed to the experts, who were then asked to re-rank the barriers.
Results
The results suggest that regulation is the most important barrier. Other relevant barriers are false or inaccurate measurements and application errors caused by the users. Additionally, insurers could lack the required technological competence to use the wearable data appropriately.
Conclusion
A wider use of wearables and health apps could be achieved through regulatory modifications, especially regarding privacy issues. Even with stricter regulations, users’ privacy concerns could partly remain if the data exchange between wearables manufacturers, health app providers, and health insurers does not become more transparent.
The aim of this work is the study of silica Arrayed Waveguide Gratings (AWGs) in the context of applications in astronomy. The specific focus lies on the investigation of the feasibility and technology limits of customized silica AWG devices for high resolution near-infrared spectroscopy. In a series of theoretical and experimental studies, AWG devices of varying geometry, footprint and spectral resolution are constructed, simulated using a combination of a numerical beam propagation method and Fraunhofer diffraction, and fabricated devices are characterized with respect to transmission efficiency, spectral resolution and polarization sensitivity. The impact of effective index non-uniformities on the performance of high-resolution AWG devices is studied numerically. Characterization results of fabricated devices are used to extrapolate the technology limits of the silica platform. The important issues of waveguide birefringence and defocus aberration are discussed theoretically and addressed experimentally by selection of an appropriate aberration-minimizing anastigmatic AWG layout structure. The drawbacks of the anastigmatic AWG geometry are discussed theoretically. From the results of the experimental studies, it is concluded that fabrication-related phase errors and waveguide birefringence are the primary limiting factors for the growth of AWG spectral resolution. It is shown that, without post-processing, the spectral resolving power is phase-error-limited to R < 40,000 and, in the case of unpolarized light, birefringence-limited to R < 30,000 in the AWG devices presented in this work. Necessary measures, such as special waveguide geometries and post-fabrication phase error correction, are proposed for future designs. The elimination of defocus aberration using an anastigmatic AWG geometry is successfully demonstrated in experiment. Finally, a novel, non-planar dispersive in-fibre waveguide structure is proposed, discussed and studied theoretically.
Advances in characteristics improvement of polymeric membranes/separators for zinc-air batteries
(2022)
Zinc-air batteries (ZABs) are gaining popularity for a wide range of applications due to their high energy density, excellent safety, and environmental friendliness. A membrane/separator is a critical component of ZABs, with substantial implications for battery performance and stability, particularly in the case of a battery in solid state format, which has captured increased attention in recent years. In this review, recent advances as well as insight into the architecture of polymeric membrane/separators for ZABs including porous polymer separators (PPSs), gel polymer electrolytes (GPEs), solid polymer electrolytes (SPEs) and anion exchange membranes (AEMs) are discussed. The paper puts forward strategies to enhance stability, ionic conductivity, ionic selectivity, electrolyte storage capacity and mechanical properties for each type of polymeric membrane. In addition, the remaining major obstacles as well as the most potential avenues for future research are examined in detail.
The Turkish language in the diaspora is undergoing change due to the different language constellations of immigrants and the dominance of majority languages. This has generated great interest across various research areas, particularly in linguistics. Against this background, this study focuses on developmental change in the use of adverbial clause-combining constructions in Turkish-German bilingual students’ oral and written text production. It illustrates the use of non-finite constructions and some unique alternative strategies to express adverbial relations with authentic examples in Turkish and German. The findings contribute to a better understanding of how bilingual competencies vary in expressing adverbial relations depending on language contact and extra-linguistic factors.
To date, there has been little research on how advocacy coalitions influence the dynamic relationships between norms. Addressing norm collisions as a particular type of norm dynamics, we ask if and how advocacy coalitions and the constellations between them bring such norm collisions to the fore. Norm collisions surface in situations in which actors claim that two or more norms are incompatible with each other, promoting different, even opposing, behavioural choices. We examine the effect of advocacy coalition constellations (ACC) on the activation and varying evolution of norm collisions in three issue areas: international drug control, human trafficking, and child labour. These areas have a legally codified prohibitive regime in common. At the same time, they differ with regard to the specific ACC present. Exploiting this variation, we generate insights into how power asymmetries and other characteristics of ACC affect norm collisions across our three issue areas.
Africa today
(2022)
Africa Today publishes peer-reviewed, scholarly articles and book reviews in a broad range of academic disciplines on topics related to contemporary Africa. We seek to be a venue for interdisciplinary approaches, diverse perspectives, and original research in the humanities and social sciences. This includes work on social, cultural, political, historical, and economic subjects. Recent special issues have been on topics such as the future of African artistic practices, the socio-cultural life of bus stations in Africa, and family-based health care in Ghana. Africa Today has been at the forefront of African Studies research since 1954. Please review our submission guidelines and then contact the Managing Editor or any of the editors with any questions you might have about publishing in Africa Today.
Afropolitan Encounters
(2022)
Afropolitan Encounters: Literature and Activism in London and Berlin explores what Afropolitanism does. Mobile people of African descent use this term to address their own lived realities creatively, which often includes countering stereotypical notions of being African. Afropolitan practices are enormously heterogeneous and malleable, which constitutes both their strength and a source of tension.
This book traces the theoretical beginnings of Afropolitanism and moves on to explore Afropolitan practices in London and Berlin. Afropolitanism can take different forms, such as that of an identity, a political and ethical stance, a dead-end road, networks, a collective self-care practice or a strategic label. In spite of harsh criticism, Afropolitanism remains attractive as a way for people to deal with the meanings of Africa and Africanness, questions of belonging, equal rights and opportunities.
While not a unitary project, the vast variety of Afropolitan practices provides approaches to contemporary political problems in Europe and beyond. In this book, Afropolitan practices are read against the specific context of German and British colonial histories and structures of racism, the histories of Black Europeans, and contemporary right-wing resurgence in Germany and England, respectively.