Nils-Hendrik Grohmann examines the ongoing strengthening process of the UN human rights treaty bodies. He analyses the legal powers of the committees, whether they can put forward proposals on their own initiative, and to what extent they have so far harmonized their working methods with one another. A further focus lies on the cooperation between the various committees and the question of what role the meeting of chairpersons can play in the strengthening process.
Moss-microbe associations are often characterised by syntrophic interactions between the microorganisms and their hosts, but the structure of the microbial consortia and their role in peatland development remain unknown.
In order to study microbial communities of dominant peatland mosses, Sphagnum and brown mosses, and the respective environmental drivers, four study sites representing different successional stages of natural northern peatlands were chosen on a large geographical scale: two brown moss-dominated, circumneutral peatlands from the Arctic and two Sphagnum-dominated, acidic peat bogs from subarctic and temperate zones.
The family Acetobacteraceae was the dominant bacterial taxon of Sphagnum mosses from various geographical origins and formed an integral part of the moss core community. This core community was shared among all investigated bryophytes and consisted of few but highly abundant prokaryotes, many of which appear as endophytes of Sphagnum mosses. Moreover, brown mosses and Sphagnum mosses represent habitats for archaea, which had not previously been studied in association with peatland mosses. Euryarchaeota capable of methane production (methanogens) made up the majority of the moss-associated archaeal communities. Moss-associated methanogenesis was detected for the first time, but it was mostly negligible under laboratory conditions. In contrast, substantial moss-associated methane oxidation was measured on both brown mosses and Sphagnum mosses, supporting the view that methanotrophic bacteria, as part of the moss microbiome, may contribute to the reduction of methane emissions from pristine and rewetted peatlands of the northern hemisphere.
Among the investigated abiotic and biotic environmental parameters, the peatland type and the host moss taxon were identified as having the greatest impact on the structure of moss-associated bacterial communities; archaeal community structures, in contrast, were similar among the investigated bryophytes. For the first time it was shown that different bog development stages harbour distinct bacterial communities, while at the same time a small core community is shared among all investigated bryophytes, independent of geography and peatland type.
The present thesis presents the first large-scale, systematic assessment of bacterial and archaeal communities associated with both brown mosses and Sphagnum mosses. It suggests that some host-specific moss-associated taxa have the potential to play a key role in host moss establishment and peatland development.
Global warming, driven primarily by the excessive emission of greenhouse gases such as carbon dioxide into the atmosphere, has led to severe and detrimental environmental impacts. Rising global temperatures have triggered a cascade of adverse effects, including melting glaciers and polar ice caps, more frequent and intense heat waves, disrupted weather patterns, and the acidification of oceans. These changes adversely affect ecosystems, biodiversity, and human societies, threatening food security, water availability, and livelihoods. One promising solution to mitigate the harmful effects of global warming is the widespread adoption of solar cells, also known as photovoltaic cells. Solar cells harness sunlight to generate electricity without emitting greenhouse gases or other pollutants. By replacing fossil fuel-based energy sources, solar cells can significantly reduce emissions of CO2, a major contributor to global warming. This transition to clean, renewable energy can help curb the increasing concentration of greenhouse gases in the atmosphere, thereby slowing the rate of global temperature rise.
Solar energy’s positive impact extends beyond emission reduction. As solar panels become more efficient and affordable, they empower individuals, communities, and even entire nations to generate electricity and become less dependent on fossil fuels. This decentralized energy generation can enhance resilience in the face of climate-related challenges. Moreover, implementing solar cells creates green jobs and stimulates technological innovation, further promoting sustainable economic growth. As solar technology advances, its integration with energy storage systems and smart grids can ensure a stable and reliable energy supply, reducing the need for backup fossil fuel power plants that exacerbate environmental degradation.
The market-dominant solar cell technology is silicon-based: a highly mature technology with a highly systematic production procedure. However, it suffers from several drawbacks: 1) Cost: still relatively high, owing to the energy needed to melt and purify silicon and to the use of silver as an electrode, which hinders widespread availability, especially in low-income countries. 2) Efficiency: theoretically, silicon should deliver around 29%; however, the efficiency of most commercially available silicon-based solar cells ranges from 18% to 22%. 3) Temperature sensitivity: the efficiency decreases as temperature increases, affecting output. 4) Resource constraints: silicon as a raw material is not available in all countries, creating supply chain challenges.
Perovskite solar cells emerged in 2011 and matured very rapidly over the last decade into a highly efficient and versatile solar cell technology. With an efficiency of 26%, high absorption coefficients, solution processability, and a tunable band gap, the technology attracted the attention of the solar cell community and raised hopes for cheap, efficient, and easily processable next-generation solar cells. However, lead toxicity may be the stumbling block that keeps perovskite solar cells from reaching the market. Lead is a heavy, bioavailable element that makes perovskite solar cells an environmentally unfriendly technology. As a result, scientists have tried to replace lead with a more environmentally friendly element. Among several possible alternatives, tin has proven the most suitable owing to the similarity of its electronic and atomic structure to that of lead.
Tin perovskites were developed to alleviate the challenge of lead toxicity. Theoretically, they show very high absorption coefficients, an optimal band gap of 1.35 eV for FASnI3, and a very high short-circuit current, making them candidates to deliver the highest possible efficiency of a single-junction solar cell, around 30.1% according to the Shockley-Queisser limit. However, the efficiency of tin perovskites still lags below 15% and is hard to reproduce, especially from lab to lab. This modest performance can be attributed to three causes: 1) Oxidation of tin(II) to tin(IV), driven by oxygen, water, or even, as was discovered recently, by the solvent itself. 2) Fast crystallization dynamics, caused by the lateral exposure of the p-orbitals of the tin atom, which enhances its reactivity and accelerates crystallization. 3) Energy band misalignment: the energy bands at the interfaces between the perovskite absorber and the charge-selective layers are not aligned, leading to high interfacial charge recombination, which degrades the photovoltaic performance. To address these issues, we implemented several techniques and approaches that enhanced the efficiency of tin halide perovskites, providing new chemically safe solvents and antisolvents. In addition, we studied the energy band alignment between the charge transport layers and the tin perovskite absorber.
Recent research has shown that the principal source of tin oxidation is the solvent dimethyl sulfoxide (DMSO), which also happens to be one of the most effective solvents for processing perovskite. Finding a stable solvent might therefore make all the difference for the stability of tin-based perovskites. Starting from a database of over 2,000 solvents, we narrowed the field to a series of 12 new solvents suitable for processing FASnI3 experimentally, by examining 1) the solubility of the precursor chemicals FAI and SnI2, 2) the thermal stability of the precursor solution, and 3) the potential to form perovskite. Finally, we show that solar cells can be manufactured using a novel solvent system that outperforms those produced using DMSO. Our results provide guidance for the search for novel solvents, or solvent mixtures, for manufacturing stable tin-based perovskites.
Because tin crystallizes quickly, depositing tin-based perovskite films from solution is more difficult than manufacturing the more commonly used lead-based perovskite films. The most successful route to high efficiencies is to deposit perovskite from dimethyl sulfoxide (DMSO), which slows down the rapid assembly of the tin-iodine network responsible for perovskite formation. The disadvantage of this method is that the DMSO used in processing causes tin oxidation. This research presents a promising alternative in which 4-(tert-butyl)pyridine substitutes for dimethyl sulfoxide in regulating crystallization without causing tin oxidation. Perovskite films deposited from pyridine show a much-reduced defect density, resulting in increased charge mobility and better photovoltaic performance and making pyridine a desirable alternative for the deposition of tin perovskite films.
Precise control of perovskite precursor crystallization within a thin film is of utmost importance for optimizing the efficiency and manufacturing of solar cells. Depositing tin-based perovskite films from solution is difficult because tin crystallizes quickly compared with the more commonly employed lead perovskite. The established approach for attaining high efficiencies uses dimethyl sulfoxide (DMSO) as the deposition medium; this solvent impedes the fast aggregation of the tin-iodine network, which plays a crucial role in perovskite formation. Nevertheless, the methodology is limited because dimethyl sulfoxide oxidizes tin during processing. In this thesis, we present a potentially advantageous alternative in which 4-(tert-butyl)pyridine replaces dimethyl sulfoxide in regulating crystallization while avoiding tin oxidation. Perovskite films formed with pyridine as solvent have a notably reduced defect density, resulting in higher charge mobility and improved photovoltaic performance. The use of pyridine for the deposition of tin perovskite films is therefore advantageous.
Tin perovskites suffer from an apparent energy band misalignment, yet the band diagrams published so far contradict one another, leaving no consensus; comprehensive information about charge extraction dynamics is also lacking. This thesis aims to ascertain the energy band positions of tin perovskites using Kelvin probe (KP) and photoelectron yield spectroscopy measurements, and to construct a precise band diagram for the commonly utilized device stack. A comprehensive analysis then assesses the energy deficits inherent in the current energetic structure of tin halide perovskites. In addition, we investigate the influence of BCP on the improvement of electron extraction in C60/BCP systems, with a specific emphasis on the energetics involved. Furthermore, transient surface photovoltage was used to investigate the charge extraction kinetics of frequently studied charge transport layers, such as NiOx and PEDOT as hole transport layers and C60, ICBA, and PCBM as electron transport layers. The Hall effect, KP, and TRPL approaches were used to accurately determine the p-doping concentration in FASnI3; the results consistently gave a value of 1.5 × 10^17 cm^-3. Our findings highlight the need to design the charge extraction layers for tin halide perovskites independently, rather than adopting those used for lead perovskites.
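As a quick plausibility check on a doping figure like the one quoted above, the standard Hall-effect relation p = I·B/(q·t·V_H) can be evaluated directly. The instrument settings in this sketch (current, field, film thickness, Hall voltage) are invented for illustration, not taken from the thesis.

```python
# Back-of-the-envelope Hall-effect carrier-density estimate for a p-type
# thin film: p = I*B / (q * t * V_H). All measurement values below are
# hypothetical; only the formula is standard.
Q = 1.602176634e-19  # elementary charge, C

def hall_density(I, B, t, V_H):
    """Carrier density in cm^-3 from current I (A), magnetic field B (T),
    film thickness t (m), and measured Hall voltage V_H (V)."""
    p_m3 = I * B / (Q * t * V_H)  # density in m^-3
    return p_m3 * 1e-6            # convert m^-3 -> cm^-3

# e.g. 1 uA drive current, 0.5 T field, 200 nm film, ~0.1 mV Hall voltage
print(f"{hall_density(1e-6, 0.5, 200e-9, 1.04e-4):.2e}")  # -> 1.50e+17
```

With these made-up settings the formula lands on the same order of magnitude as the 1.5 × 10^17 cm^-3 value reported, showing that sub-millivolt Hall voltages are the regime such a measurement operates in.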
The crystallization of perovskite precursors relies mainly on two solvents. The first dissolves the perovskite powder to form the precursor solution and is usually called the solvent. The second precipitates the perovskite precursor, forming the wet film: a supersaturated solution of perovskite precursor in the remnants of the solvent and the antisolvent. Upon annealing, this wet film crystallizes into a fully crystallized perovskite film. In our research, we proposed new solvents to dissolve FASnI3, but when we tried to form films, most of them did not crystallize. This is attributed to the high coordination strength between the metal halide and the solvent molecules, which cannot be broken by traditionally used antisolvents such as toluene and chlorobenzene. To solve this issue, we introduce a high-throughput antisolvent screening in which around 73 selected antisolvents were screened against 15 solvents that can form a 1 M FASnI3 solution. For the first time in tin perovskite research, we used a machine learning algorithm to understand and predict the effect of an antisolvent on the crystallization of a precursor solution in a particular solvent, relying on film darkness as the primary criterion for judging the efficacy of a solvent-antisolvent pair. We found that the relative polarity between solvent and antisolvent is the primary factor governing the solvent-antisolvent interaction. Based on these findings, we prepared several high-quality, DMSO-free tin perovskite films and achieved an efficiency of 9%, the highest for a DMSO-free tin perovskite device so far.
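The screening logic described above can be illustrated with a minimal sketch: a one-feature logistic classifier that maps the solvent-antisolvent polarity gap to the probability of obtaining a dark (well-crystallized) film. All numbers below are hypothetical placeholders, not data from the screening; only the choice of feature mirrors the finding that relative polarity dominates.

```python
# Toy sketch: predict whether a solvent/antisolvent pair yields a dark
# FASnI3 film from the polarity gap alone. Data points are invented.
import math

# (solvent polarity, antisolvent polarity, film_dark?) -- hypothetical
pairs = [
    (0.44, 0.10, 1), (0.40, 0.05, 1), (0.46, 0.12, 1), (0.35, 0.01, 1),
    (0.30, 0.28, 0), (0.42, 0.40, 0), (0.25, 0.22, 0), (0.38, 0.36, 0),
]
X = [abs(s - a) for s, a, _ in pairs]  # polarity gap as the lone feature
y = [label for _, _, label in pairs]

# One-feature logistic regression fitted by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(5000):
    gw = gb = 0.0
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w * xi + b)))
        gw += (p - yi) * xi
        gb += (p - yi)
    w -= 0.5 * gw
    b -= 0.5 * gb

def p_dark(solvent_pol, antisolvent_pol):
    """Predicted probability that the pair gives a dark film."""
    x = abs(solvent_pol - antisolvent_pol)
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

print(round(p_dark(0.44, 0.10), 2))  # large polarity gap -> likely dark
print(round(p_dark(0.40, 0.38), 2))  # small polarity gap -> likely not
```

The actual screening would add more descriptors and a richer model; the point of the sketch is only that a single physically motivated feature already separates the hypothetical classes.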
Rapidly growing seismic and macroseismic databases and simplified access to advanced machine learning methods have in recent years opened up vast opportunities to address challenges in engineering and strong motion seismology from novel, data-centric perspectives. In this thesis, I explore the opportunities of such perspectives for the tasks of ground motion modeling and rapid earthquake impact assessment, tasks with major implications for long-term earthquake disaster mitigation.
In my first study, I utilize the rich strong motion database from the Kanto basin, Japan, and apply the U-Net artificial neural network architecture to develop a deep learning based ground motion model. The operational prototype provides statistical estimates of expected ground shaking, given descriptions of a specific earthquake source, wave propagation paths, and geophysical site conditions. The U-Net interprets ground motion data in its spatial context, potentially taking into account, for example, the geological properties in the vicinity of observation sites. Predictions of ground motion intensity are thereby calibrated to individual observation sites and earthquake locations.
The second study addresses the explicit incorporation of rupture forward directivity into ground motion modeling. Incorporating this phenomenon, which causes strong, pulse-like ground shaking in the vicinity of earthquake sources, is usually associated with an intolerable increase in computational demand during probabilistic seismic hazard analysis (PSHA) calculations. I suggest an approach in which I utilize an artificial neural network to efficiently approximate the average, directivity-related adjustment to ground motion predictions for earthquake ruptures from the 2022 New Zealand National Seismic Hazard Model. The practical implementation in an actual PSHA calculation demonstrates the efficiency and operational readiness of my model. In a follow-up study, I present a proof of concept for an alternative strategy in which I target generalizing applicability to ruptures other than those from the New Zealand National Seismic Hazard Model.
In the third study, I address the usability of pseudo-intensity reports obtained from macroseismic observations by non-expert citizens for rapid impact assessment. I demonstrate that the statistical properties of pseudo-intensity collections describing the intensity of shaking are correlated with the societal impact of earthquakes. In a second step, I develop a probabilistic model that, within minutes of an event, quantifies the probability of an earthquake to cause considerable societal impact. Under certain conditions, such a quick and preliminary method might be useful to support decision makers in their efforts to organize auxiliary measures for earthquake disaster response while results from more elaborate impact assessment frameworks are not yet available.
The application of machine learning methods to datasets that only partially reveal the characteristics of Big Data qualifies the majority of results obtained in this thesis as explorative insights rather than ready-to-use solutions to real-world problems. The practical usefulness of this work will be better assessed in the future by applying the approaches developed here to growing and increasingly complex data sets.
Extreme-right terrorism is a threat that is often underestimated by the public at large. As this paper argues, this is partly due to a concept of terrorism utilized by policymakers, intelligence agents, and police investigators that is based on experience of international terrorism perpetrated by leftists or jihadists as opposed to domestic extreme-right violence. This was one reason why investigators failed to identify the crimes committed by the National Socialist Underground (NSU) in Germany (2000–2011) as extreme-right terrorism, for example. While scholarly debate focused on the Red Army Faction and Al Qaeda, terrorist tendencies among those perpetrating racist and extreme-right violence tended to be disregarded. Influential researchers in the field of “extremism” denied that terrorist acts were committed by right-wingers. By mapping the specifics regarding the strategic use of violence, target selection, addressing of different audiences etc., this paper proposes a more accurate definition of extreme-right terrorism. In comparing it to other forms of terrorism, extreme-right terrorism is distinguished by its specific framework of ideologies and practices, with the underlying idea of an essential inequality that is compensated for through the affirmation of violence. It can be differentiated from other forms of extreme-right violence based on its use of strategic, premeditated and planned attacks against targets of a symbolic nature.
Microalgae have been recognized as a promising green production platform for recombinant proteins. The majority of studies on recombinant protein expression have been conducted in the green microalga Chlamydomonas reinhardtii. While promising improvements regarding nuclear transgene expression in this alga have been made, expression is still inefficient due to epigenetic silencing, often resulting in low yields that are not competitive with other expression organisms. Other microalgal species might be better suited for high-level protein expression but are limited in the availability of molecular tools.
The red microalga Porphyridium purpureum recently emerged as a candidate for the production of recombinant proteins. It is promising in that transformation vectors are episomally maintained as autonomously replicating plasmids in the nucleus at a high copy number, leading to high expression levels in this red alga.
In this work, we expand the genetic tools for P. purpureum and investigate parameters that govern efficient transgene expression. We provide an improved transformation protocol to streamline the generation of transgenic lines in this organism. Once transgenic lines could be generated efficiently, we showed that codon usage is a main determinant of high-level transgene expression, not only at the protein level but also at the level of mRNA accumulation. The optimized expression constructs resulted in YFP accumulation of up to an unprecedented 5% of the total soluble protein. Furthermore, we designed new constructs conferring efficient secretion of the expressed proteins into the culture medium, simplifying purification and harvesting of recombinant proteins. To further improve transgene expression, we tested endogenous promoters driving the most highly transcribed genes in P. purpureum and found only a minor increase in YFP accumulation.
We employed these findings to express complex viral antigens from the hepatitis B virus and the hepatitis C virus in P. purpureum, demonstrating its feasibility as a producer of biopharmaceuticals. The viral glycoproteins were successfully produced at high levels and could reach their native conformation, indicating a functional glycosylation machinery and an appropriate folding environment in this red alga. We successfully upscaled the biomass production of transgenic lines and thereby provided enough material for immunization trials in mice, performed in collaboration. These trials showed no toxicity of either the biomass or the purified antigens, and the algal-produced antigens were able to elicit a strong and specific immune response.
The results presented in this work pave the way for P. purpureum as a new promising producer organism for biopharmaceuticals in the microalgal field.
Predicting the electron population of Earth's ring current during geomagnetic storms still remains a challenging task.
In this work, we investigate the sensitivity of 10 keV ring current electrons to different driving processes, parameterised by the Kp index, during several moderate and intense storms.
Results are validated against measurements from the Van Allen Probes satellites. Perturbing the Kp index allows us to identify the most dominant processes for moderate and intense storms respectively.
We find that during moderate storms (Kp < 6) the drift velocities mostly control the behaviour of low energy electrons, while loss from wave-particle interactions is the most critical parameter for quantifying the evolution of intense storms (Kp > 6). Perturbations of the Kp index used to drive the boundary conditions at GEO and set the plasmapause location only show a minimal effect on simulation results over a limited L range.
It is further shown that the flux at L ∼ 3 is more sensitive to changes in the Kp index compared to higher L shells, making it a good proxy for validating the source-loss balance of a ring current model.
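The perturbation test described above can be sketched schematically: run a model with the nominal Kp history and with Kp shifted by ±1, then compare the outputs. The toy flux model below is an invented placeholder with a Kp-dependent source term and a wave-loss term, not the actual ring current simulation; in this toy, the inner L shell happens to come out more sensitive, echoing (but in no way demonstrating) the finding above.

```python
# Schematic Kp-perturbation sensitivity test. The flux model is a
# deliberately over-simplified, invented stand-in for a ring current code.
def toy_flux(kp_series, L):
    """10 keV flux proxy after a Kp time history; invented functional form."""
    flux = 1.0
    for kp in kp_series:
        source = 0.1 * kp * (6.6 / L)      # convective transport term
        loss = 0.02 * max(kp - 4, 0) ** 2  # wave-particle loss term
        flux = max(flux + source - loss, 0.0)
    return flux

def sensitivity(kp_series, L, dkp=1.0):
    """Normalized spread of the model output under a +/- dkp perturbation."""
    up = toy_flux([min(k + dkp, 9) for k in kp_series], L)
    dn = toy_flux([max(k - dkp, 0) for k in kp_series], L)
    return abs(up - dn) / toy_flux(kp_series, L)

storm = [2, 3, 5, 7, 7, 6, 4, 3, 2]  # invented Kp history for one storm
for L in (3.0, 5.0):
    print(L, round(sensitivity(storm, L), 3))
```

The structure, not the numbers, is the point: the same "perturb the driver, compare the outputs" loop applies unchanged when `toy_flux` is replaced by a full physics-based simulation.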
Motivated by recent epidemic outbreaks, including those of COVID-19, we solve the canonical problem of calculating the dynamics and likelihood of extensive outbreaks in a population within a large class of stochastic epidemic models with demographic noise, including the susceptible-infected-recovered (SIR) model and its general extensions.
In the limit of large populations, we compute the probability distribution for all extensive outbreaks, including those that entail unusually large or small (extreme) proportions of the population infected.
Our approach reveals that, unlike other well-known examples of rare events occurring in discrete-state stochastic systems, the statistics of extreme outbreaks emanate from a full continuum of Hamiltonian paths, each satisfying unique boundary conditions with a conserved probability flux.
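For orientation, the deterministic large-population limit of the SIR model pins down the typical outbreak size via the classic final-size relation r = 1 − exp(−R0·r); the extreme outbreaks studied here are fluctuations around this value. A minimal fixed-point solver:

```python
# Solve the SIR final-size relation r = 1 - exp(-R0 * r) by fixed-point
# iteration; for R0 > 1 the iteration converges to the nontrivial root.
import math

def final_size(R0, tol=1e-12):
    r = 0.999  # start near 1 to avoid the trivial root r = 0
    while True:
        r_new = 1.0 - math.exp(-R0 * r)
        if abs(r_new - r) < tol:
            return r_new
        r = r_new

print(round(final_size(2.0), 4))  # ~0.7968: about 80% infected at R0 = 2
```

In the stochastic treatment, outbreaks both far above and far below this typical fraction acquire computable (exponentially small) probabilities, which is what the Hamiltonian-path analysis quantifies.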
The water swelling and subsequent solvent exchange, including co-nonsolvency behavior, of thin films of a doubly thermo-responsive diblock copolymer (DBC) are studied via spectral reflectance, time-of-flight neutron reflectometry, and Fourier transform infrared spectroscopy.
The DBC consists of a thermo-responsive zwitterionic (poly(4-((3-methacrylamidopropyl) dimethylammonio) butane-1-sulfonate)) (PSBP) block, featuring an upper critical solution temperature transition in aqueous media but being insoluble in acetone, and a nonionic poly(N-isopropylmethacrylamide) (PNIPMAM) block, featuring a lower critical solution temperature transition in water, while being soluble in acetone.
Homogeneous DBC films of 50-100 nm thickness are first swollen in saturated water vapor (H2O or D2O), before they are subjected to a contraction process by exposure to mixed saturated water/acetone vapor (H2O or D2O/acetone-d6 = 9:1 v/v).
The affinity of the DBC film toward H2O is stronger than for D2O, as inferred from the higher film thickness in the swollen state and the higher absorbed water content, thus revealing a pronounced isotope sensitivity.
During the co-solvent-induced switching by mixed water/acetone vapor, a two-step film contraction is observed, which is attributed to the delayed expulsion of water molecules and uptake of acetone molecules.
The swelling kinetics are compared for both mixed vapors (H2O/acetone-d6 and D2O/acetone-d6) and with those of the related homopolymer films.
Moreover, the concomitant variations of the local environment around the hydrophilic groups located in the PSBP and PNIPMAM blocks are followed.
The first contraction step turns out to be dominated by the behavior of the PSBP block, whereas the second one is dominated by the PNIPMAM block.
The unusual swelling and contraction behavior of the latter block is attributed to its co-nonsolvency behavior.
Furthermore, we observe cooperative hydration effects in the DBC films, that is, both polymer blocks influence each other's solvation behavior.
Caenorhabditis elegans (C. elegans) is a model organism that has been increasingly used in health and environmental toxicity assessments. The in vivo quantification of ionic species can assist studies that seek to relate the exposure concentration to possible biological effects.
Therefore, this study is the first to propose a method of quantitative analysis of 21 ions by ion chromatography (IC), which can be applied in different toxicity studies in C. elegans.
The developed method was validated for 12 anionic species (fluoride, acetate, chloride, nitrite, bromide, nitrate, sulfate, oxalate, molybdate, dichromate, phosphate, and perchlorate), and 9 cationic species (lithium, sodium, ammonium, thallium, potassium, magnesium, manganese, calcium, and barium).
The method showed no interference from other species, with R2 varying between 0.9991 and 0.9999 and a linear range from 1 to 100 µg L-1.
Limits of detection (LOD) ranged from 0.2319 µg L-1 to 1.7160 µg L-1, and limits of quantification (LOQ) from 0.7028 µg L-1 to 5.1999 µg L-1.
The intraday and interday precision tests showed a relative standard deviation (RSD) below 10.0%, and recovery ranged from 71.0% to 118.0% with a maximum RSD of 5.5%.
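For context, LOD and LOQ figures of this kind are commonly derived from the calibration curve via the ICH convention, LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the linear fit and S its slope. The thesis's exact procedure is not restated here, and the calibration data below are invented for illustration.

```python
# LOD/LOQ from a linear calibration, ICH-style: fit signal vs. concentration,
# take the residual standard deviation sigma and slope S, then
# LOD = 3.3*sigma/S and LOQ = 10*sigma/S. Calibration points are hypothetical.
def linfit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return slope, intercept

def lod_loq(conc, signal):
    slope, intercept = linfit(conc, signal)
    resid = [yi - (slope * xi + intercept) for xi, yi in zip(conc, signal)]
    sigma = (sum(r * r for r in resid) / (len(conc) - 2)) ** 0.5
    return 3.3 * sigma / slope, 10.0 * sigma / slope

conc = [1, 5, 10, 25, 50, 100]               # ug/L, hypothetical standards
signal = [0.9, 5.2, 10.3, 24.8, 50.4, 99.9]  # detector response (arbitrary)
lod, loq = lod_loq(conc, signal)
print(round(lod, 3), round(loq, 3))
```

By construction LOQ/LOD = 10/3.3 ≈ 3.03, which is a quick consistency check on any pair of reported values derived this way.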
The method was applied to real samples of C. elegans treated with 200 µM thallium acetate solution, determining the uptake and the bioaccumulated Tl+ content during acute exposure.
This dissertation examines the integration of incongruent visual-scene and morphological-case information (“cues”) in building thematic-role representations of spoken relative clauses in German.
Addressing the mutual influence of visual and linguistic processing, the Coordinated Interplay Account (CIA) describes a two-step mechanism supporting visuo-linguistic integration (Knoeferle & Crocker, 2006, Cog Sci). However, the outcomes and dynamics of integrating incongruent thematic-role representations from distinct sources have scarcely been investigated. Further, there is evidence that both second-language (L2) and older speakers may rely on non-syntactic cues relatively more than first-language (L1)/young speakers. Yet, the role of visual information for thematic-role comprehension has not been measured in L2 speakers, and only to a limited extent across the adult lifespan.
Thematically unambiguous canonically ordered (subject-extracted) and noncanonically ordered (object-extracted) spoken relative clauses in German (see 1a-b) were presented in isolation and alongside visual scenes conveying either the same (congruent) or the opposite (incongruent) thematic relations as the sentence did.
(1a) Das ist der Koch, der die Braut verfolgt.
     this is the.NOM cook who.NOM the.ACC bride follows
     'This is the cook who is following the bride.'
(1b) Das ist der Koch, den die Braut verfolgt.
     this is the.NOM cook whom.ACC the.NOM bride follows
     'This is the cook whom the bride is following.'
The relative contribution of each cue to thematic-role representations was assessed with agent identification. Accuracy and latency data were collected post-sentence from a sample of L1 and L2 speakers (Zona & Felser, 2023), and from a sample of L1 speakers from across the adult lifespan (Zona & Reifegerste, under review). In addition, the moment-by-moment dynamics of thematic-role assignment were investigated with mouse tracking in a young L1 sample (Zona, under review).
The following questions were addressed: (1) How do visual scenes influence thematic-role representations of canonical and noncanonical sentences? (2) How does reliance on visual-scene, case, and word-order cues vary in L1 and L2 speakers? (3) How does reliance on visual-scene, case, and word-order cues change across the lifespan?
The results showed reliable effects of the incongruence of visually and linguistically conveyed thematic relations on thematic-role representations. Incongruent (vs. congruent) scenes yielded slower and less accurate responses to agent-identification probes presented post-sentence. The recently inspected agent was considered the most likely agent ~300 ms after trial onset, and the convergence of visual scenes and word order enabled comprehenders to assign thematic roles predictively.
L2 (vs. L1) participants relied more on word order overall. In response to noncanonical clauses presented with incongruent visual scenes, sensitivity to case predicted the size of incongruence effects better than L1-L2 grouping. These results suggest that the individual’s ability to exploit specific cues might predict their weighting.
Sensitivity to case was stable throughout the lifespan, while visual effects increased with increasing age and were modulated by individual interference-inhibition levels. Thus, age-related changes in comprehension may stem from stronger reliance on visually (vs. linguistically) conveyed meaning.
These patterns represent evidence for a recent-role preference – i.e., a tendency to re-assign visually conveyed thematic roles to the same referents in temporally coordinated utterances. The findings (i) extend the generalizability of CIA predictions across stimuli, tasks, populations, and measures of interest, (ii) contribute to specifying the outcomes and mechanisms of detecting and indexing incongruent representations within the CIA, and (iii) speak to current efforts to understand the sources of variability in sentence comprehension.
Since 2013, the Committee on Economic, Social and Cultural Rights can examine individual communications under the Optional Protocol to the International Covenant on Economic, Social and Cultural Rights (ICESCR). This opens up the possibility to interpret Covenant provisions in a thorough manner. With regard to forced evictions and the right to housing under Article 11 ICESCR, one can discern a fast-developing approach concerning the proportionality analysis of evictions, entailing the establishment of specific criteria that may guide such analysis. This paper seeks to delineate these developments and will also shed light on possible general trends on the topic of limitations within the Committee’s emerging jurisprudence. In doing so, the paper will address if, and how, the developing proportionality analysis under the individual complaints procedure takes into consideration multi-discriminatory dimensions of State measures and how it specifically relates to or incorporates other ICESCR-concepts, such as minimum core obligations or the reasonableness review under Article 8(4) OP ICESCR.
Nils-Hendrik Grohmann examines the still-ongoing process of strengthening the UN human rights treaty bodies. He analyses which legal powers the committees have, whether they can put forward proposals on their own initiative, and to what extent they have so far harmonised their working procedures. A further focus lies on the cooperation between the various committees and the question of what role the meeting of chairpersons can play in the strengthening process.
The conception of property at the basis of Hegel’s conception of abstract right seems committed to a problematic form of “possessive individualism.” It seems to conceive of right as the expression of human mastery over nature and as based upon an irreducible opposition of person and nature, rightful will, and rightless thing. However, this chapter argues that Hegel starts with a form of possessive individualism only to show that it undermines itself. This is evident in the way Hegel unfolds the nature of property as it applies to external things as well as in the way he explains our self-ownership of our own bodies and lives. Hegel develops the idea of property to a point where it reaches a critical limit and encounters the “true right” that life possesses against the “formal” and “abstract right” of property. Ultimately, Hegel’s account suggests that nature should precisely not be treated as a rightless object at our arbitrary disposal but acknowledged as the inorganic body of right.
In his 1844 Economic and Philosophic Manuscripts, Marx famously claims that the human being is or has a ‘Gattungswesen.’ This is often understood to mean that the human being is a ‘species-being’ and is determined by a given ‘species-essence.’ In this chapter, I argue that this reading is mistaken. What Marx calls Gattungswesen is precisely not a ‘species-being,’ but a being that, in a very specific sense, transcends the limits of its own given species. This different understanding of the genus-character of the human being opens up a new perspective on the naturalism of the early Marx. He is not informed by a problematic speciesist and essentialist naturalism, as is often assumed, but by a different form of naturalism which I propose to call ‘dialectical naturalism.’ The chapter starts (I) by developing Hegel’s account of genus which provides us with a useful background for (II) understanding Marx’s original notion of a genus-being and its practical, social, developmental character. In the last section, I show that (III) the actualization of our genus-being thus depends on the production of a specific type of ‘second nature’ that is at the heart of Marx’s dialectical naturalism.
The art of second nature
(2022)
Symbiotic X-ray binaries are systems hosting a neutron star accreting from the wind of a late-type companion. These are rare objects, and so far only a handful of them are known. One of the most puzzling aspects of symbiotic X-ray binaries is the possibility that they contain strongly magnetized neutron stars. These are expected to be evolutionarily much younger than their evolved companions and could thus be formed through the (yet poorly known) accretion-induced collapse of a white dwarf. In this paper, we perform broad-band X-ray and soft gamma-ray spectroscopy of two known symbiotic binaries, Sct X-1 and 4U 1700+24, looking for cyclotron scattering features that could confirm the presence of strongly magnetized NSs. We exploited available Chandra, Swift, and NuSTAR data. We find no evidence of cyclotron resonant scattering features (CRSFs) in Sct X-1, but in 4U 1700+24 we suggest a possible CRSF at ~16 keV and its first harmonic at ~31 keV, although we could not exclude alternative spectral models for the broad-band fit. If confirmed by future observations, 4U 1700+24 could be the second symbiotic X-ray binary with a highly magnetized accretor. We also report on our long-term monitoring of the most recently discovered symbiotic X-ray binary, IGR J17329-2731, performed with Swift/XRT. The monitoring revealed that, as predicted, in 2017 this object became a persistent and variable source, showing X-ray flares lasting a few days and intriguing obscuration events that are interpreted in the context of clumpy wind accretion.
Drought and the limited availability of mineable phosphorus minerals used for fertilization are two of the important issues agriculture faces in the future. High phosphorus availability in soils is necessary to maintain high agricultural yields, while drought is one of the major threats to terrestrial ecosystem performance and crop production. Among the measures proposed to cope with intensifying drought stress and to decrease the need for phosphorus fertilizer application is fertilization with silica (Si). Here we tested the importance of soil Si fertilization for wheat phosphorus concentration as well as wheat performance during drought at the field scale. Our data clearly showed higher soil moisture in the Si-fertilized plots. This higher soil moisture contributed to better plant performance in terms of higher photosynthetic activity, later senescence, and faster stomatal responses, ensuring higher productivity during drought periods. Plant phosphorus concentration was also higher in Si-fertilized plots than in control plots. Overall, Si fertilization, or management of the soil Si pools, seems to be a promising tool to maintain crop production under the longer and more severe droughts predicted for the future while reducing phosphorus fertilizer requirements.
Non-fullerene acceptors (NFAs) are far more emissive than their fullerene-based counterparts. Here, we study the spectral properties of photocurrent generation and recombination of the blend of the donor polymer PM6 with the NFA Y6. We find that the radiative recombination of free charges is almost entirely due to the re-occupation and decay of Y6 singlet excitons, but that this pathway contributes less than 1% to the total recombination. As such, the open-circuit voltage of the PM6:Y6 blend is determined by the energetics and kinetics of the charge-transfer (CT) state. Moreover, we find that no information on the energetics of the CT state manifold can be gained from the low-energy tail of the photovoltaic external quantum efficiency spectrum, which is dominated by the excitation spectrum of the Y6 exciton. Finally, we estimate the charge-separated state to lie only 120 meV below the Y6 singlet exciton energy, meaning that this blend indeed represents a high-efficiency system with a low energetic offset.
Identification of protein complexes from protein-protein interaction (PPI) networks is a key problem in PPI mining, usually addressed by parameter-dependent approaches that suffer from low recall rates. Here we introduce GCC-v, a family of efficient, parameter-free algorithms to accurately predict protein complexes using the (weighted) clustering coefficient of proteins in PPI networks. Through comparative analyses with gold standards and PPI networks from Escherichia coli, Saccharomyces cerevisiae, and Homo sapiens, we demonstrate that GCC-v outperforms twelve state-of-the-art approaches for identification of protein complexes with respect to twelve performance measures in at least 85.71% of scenarios. We also show that GCC-v results in the exact recovery of ~35% of protein complexes in a pan-plant PPI network and discover 144 new protein complexes in Arabidopsis thaliana, with high support from GO semantic similarity. Our results indicate that findings from GCC-v are robust to network perturbations, which has direct implications to assess the impact of the PPI network quality on the predicted protein complexes.
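A minimal sketch of the graph quantity at the heart of GCC-v, assuming only what the abstract states, namely that proteins are scored by their local clustering coefficient; the toy network, the protein names, and this plain-Python implementation are illustrative and not the authors' code:

```python
from itertools import combinations

# Local clustering coefficient: C(v) = 2 * links among neighbours / (k * (k - 1)),
# where k is the degree of v. Densely clustered neighbourhoods (C close to 1)
# are natural candidates for protein complexes.
def clustering(adj):
    coeff = {}
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeff[v] = 0.0
            continue
        links = sum(1 for u, w in combinations(nbrs, 2) if w in adj[u])
        coeff[v] = 2 * links / (k * (k - 1))
    return coeff

# Toy PPI network: proteins A, B, C form a triangle; D hangs off A.
adj = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"A"}}
print({v: round(c, 2) for v, c in clustering(adj).items()})
# prints {'A': 0.33, 'B': 1.0, 'C': 1.0, 'D': 0.0}
```

Here B and C sit entirely inside the triangle (coefficient 1.0), flagging it as a cohesive candidate complex, while the pendant protein D scores 0.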
Deliberative and paternalistic interaction styles for conversational agents in digital health
(2021)
Background:
Recent years have witnessed a constant increase in the number of people with chronic conditions requiring ongoing medical support in their everyday lives. However, global health systems are not adequately equipped for this extraordinarily time-consuming and cost-intensive development. Here, conversational agents (CAs) can offer easily scalable and ubiquitous support. Moreover, different aspects of CAs have not yet been sufficiently investigated to fully exploit their potential. One such trait is the interaction style between patients and CAs. In human-to-human settings, the interaction style is an imperative part of the interaction between patients and physicians. Patient-physician interaction is recognized as a critical success factor for patient satisfaction, treatment adherence, and subsequent treatment outcomes. However, so far, it remains effectively unknown how different interaction styles can be implemented into CA interactions and whether these styles are recognizable by users.
Objective:
The objective of this study was to develop an approach to reproducibly induce 2 specific interaction styles into CA-patient dialogs and subsequently test and validate them in a chronic health care context.
Methods:
On the basis of the Roter Interaction Analysis System and iterative evaluations by scientific experts and medical health care professionals, we identified 10 communication components that characterize the 2 developed interaction styles: deliberative and paternalistic interaction styles. These communication components were used to develop 2 CA variations, each representing one of the 2 interaction styles. We assessed them in a web-based between-subject experiment. The participants were asked to put themselves in the position of a patient with chronic obstructive pulmonary disease. These participants were randomly assigned to interact with one of the 2 CAs and subsequently asked to identify the respective interaction style. A chi-square test was used to assess the correct identification of the CA-patient interaction style.
Results:
A total of 88 individuals (42/88, 48% female; mean age 31.5 years, SD 10.1 years) fulfilled the inclusion criteria and participated in the web-based experiment. The participants in both the paternalistic and deliberative conditions correctly identified the underlying interaction styles of the CAs in more than 80% of the assessments (χ²(1, N=88)=38.2; P<.001; φ=0.68). The validation of the procedure was hence successful.
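The reported statistic can be reproduced by hand from a 2x2 contingency table of condition by identified style. The cell counts below are hypothetical, chosen only to roughly match the reported values (χ²(1, N=88)=38.2, φ=0.68); the study does not publish the raw table:

```python
# Hypothetical 2x2 table: rows = assigned CA condition,
# columns = interaction style identified by the participant.
a, b = 37, 7   # paternalistic CA: identified as paternalistic / deliberative
c, d = 8, 36   # deliberative CA:  identified as paternalistic / deliberative
n = a + b + c + d

# Chi-square for a 2x2 table: n * (ad - bc)^2 / row and column totals
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
phi = (chi2 / n) ** 0.5   # effect size: phi coefficient
print(round(chi2, 1), round(phi, 2))  # prints 38.2 0.66
```

With these counts, both groups identify their CA's style correctly in over 80% of cases, matching the pattern described in the results.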
Conclusions:
We developed an approach that is tailored for a medical context to induce a paternalistic and deliberative interaction style into a written interaction between a patient and a CA. We successfully tested and validated the procedure in a web-based experiment involving 88 participants. Future research should implement and test this approach among actual patients with chronic diseases and compare the results in different medical conditions. This approach can further be used as a starting point to develop dynamic CAs that adapt their interaction styles to their users.
Pathogens and animal pests (P&A) are a major threat to global food security as they directly affect the quantity and quality of food. The Southern Amazon, Brazil's largest domestic region for soybean, maize and cotton production, is particularly vulnerable to the outbreak of P&A due to its (sub)tropical climate and intensive farming systems. However, little is known about the spatial distribution of P&A and the related yield losses. Machine learning approaches for the automated recognition of plant diseases can help to overcome this research gap. The main objectives of this study are to (1) evaluate the performance of Convolutional Neural Networks (ConvNets) in classifying P&A, (2) map the spatial distribution of P&A in the Southern Amazon, and (3) quantify perceived yield and economic losses for the main soybean and maize P&A. The objectives were addressed by making use of data collected with the smartphone application Plantix. The core of the app's functioning is the automated recognition of plant diseases via ConvNets. Data on expected yield losses were gathered through a short survey included in an "expert" version of the application, which was distributed among agronomists. Between 2016 and 2020, Plantix users collected approximately 78,000 georeferenced P&A images in the Southern Amazon. The study results indicate a high performance of the trained ConvNets in classifying 420 different crop-disease combinations. Spatial distribution maps and expert-based yield loss estimates indicate that maize rust, bacterial stalk rot and the fall armyworm are among the most severe maize P&A, whereas soybean is mainly affected by P&A like anthracnose, downy mildew, frogeye leaf spot, stink bugs and brown spot. Perceived soybean and maize yield losses amount to 12 and 16%, respectively, resulting in annual yield losses of approximately 3.75 million tonnes for each crop and economic losses of US$2 billion for both crops together. 
The high level of accuracy of the trained ConvNets, when paired with the widespread use that a citizen-science approach enables, results in a data source that will shed new light on yield loss estimates, e.g., for the analysis of yield gaps and the development of measures to minimise them.
The correct orientation of seismic sensors is critical for studies such as full moment tensor inversion, receiver function analysis, and shear-wave splitting. Therefore, the orientation of horizontal components needs to be checked and verified systematically. This study relies on two different waveform-based approaches to assess the sensor orientations of the broadband network of the Kandilli Observatory and Earthquake Research Institute (KOERI). The network is an important backbone for seismological research in the Eastern Mediterranean Region and provides a comprehensive seismic data set for the North Anatolian fault. In recent years, this region has become a worldwide field laboratory for continental transform faults. A systematic survey of the sensor orientations of the entire network, as presented here, facilitates related seismic studies. We apply two independent orientation tests, based on the polarization of P waves and Rayleigh waves, to 123 broadband seismic stations, covering a period of 15 yr (2004-2018). For 114 stations, we obtain stable results with both methods. Approximately 80% of the results agree with each other within 10 degrees. Both methods indicate that about 40% of the stations are misoriented by more than 10 degrees. Among these, 20 stations are misoriented by more than 20 degrees. We observe temporal changes of sensor orientation that coincide with maintenance work or instrument replacement. We provide time-dependent sensor misorientation correction values for the KOERI network in the supplemental material.
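The core idea of a P-wave polarization test can be sketched as follows: for a correctly oriented sensor, the horizontal particle motion of a P arrival points along the event back-azimuth, so the angle between the measured polarization and the expected back-azimuth estimates the misorientation. The synthetic signal and the covariance-eigenvector estimator below are a hypothetical illustration, not the study's actual processing chain:

```python
import numpy as np

def estimate_azimuth(north, east):
    # Polarization azimuth = direction of largest horizontal variance,
    # i.e. the principal eigenvector of the 2x2 covariance matrix.
    cov = np.cov(np.vstack([north, east]))
    _, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    n_comp, e_comp = vecs[:, -1]             # eigenvector of largest eigenvalue
    return np.degrees(np.arctan2(e_comp, n_comp)) % 180.0  # 180-deg ambiguity

# Synthetic P arrival truly polarized at 60 deg; sensor misoriented by +10 deg,
# so the apparent polarization on the recorded components is 50 deg.
rng = np.random.default_rng(0)
signal = rng.standard_normal(500)
true_az, misorientation = 60.0, 10.0
apparent = np.radians(true_az - misorientation)
north = signal * np.cos(apparent)
east = signal * np.sin(apparent)
estimated = estimate_azimuth(north, east)
print(round(true_az - estimated, 1))         # recovered misorientation: 10.0
```

In practice this estimate is stacked over many events with known back-azimuths to obtain a stable per-station correction.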
The first detections of black hole-neutron star mergers (GW200105 and GW200115) by the LIGO-Virgo-KAGRA Collaboration mark a significant scientific breakthrough. The physical interpretation of pre- and postmerger signals requires careful cross-examination between observational and theoretical modelling results. Here we present the first set of black hole-neutron star simulations obtained with the numerical-relativity code BAM. Our initial data are constructed using the public LORENE spectral library, which employs an excision of the black hole interior. BAM, in contrast, uses the moving-puncture gauge for the evolution. Therefore, we need to "stuff" the black hole interior with smooth initial data to evolve the binary system in time. This procedure introduces constraint violations, so the constraint damping properties of the evolution system are essential to increase the accuracy of the simulation and, in particular, to reduce spurious center-of-mass drifts. Within BAM we evolve the Z4c equations, and we compare our gravitational-wave results with those of the SXS collaboration and results obtained with the SACRA code. While we find generally good agreement with the reference solutions and phase differences ≲0.5 rad at the moment of merger, the absence of a clean convergence order in our simulations does not allow for a proper error quantification. We finally present a set of different initial conditions to explore how the merger of black hole-neutron star systems depends on the involved masses, spins, and equations of state.
Water bodies are a highly abundant feature of Arctic permafrost ecosystems and strongly influence their hydrology, ecology and biogeochemical cycling. While very high resolution satellite images enable detailed mapping of these water bodies, the increasing availability and abundance of this imagery calls for fast, reliable and automatized monitoring. This technical work presents a largely automated and scalable workflow that removes image noise, detects water bodies, removes potential misclassifications from infrastructural features, derives lake shoreline geometries and retrieves their movement rate and direction on the basis of ortho-ready very high resolution satellite imagery from Arctic permafrost lowlands. We applied this workflow to typical Arctic lake areas on the Alaska North Slope and achieved a successful and fast detection of water bodies. We derived representative values for shoreline movement rates ranging from 0.40–0.56 m yr⁻¹ for lake sizes of 0.10–23.04 ha. The approach also gives an insight into seasonal water level changes. Based on an extensive quantification of error sources, we discuss how the results of the automated workflow can be further enhanced by incorporating additional information on weather conditions and image metadata and by improving the input database. The workflow is suitable for the seasonal to annual monitoring of lake changes on a sub-meter scale in the study areas in northern Alaska and can readily be scaled for application across larger regions within certain accuracy limitations.
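One common ingredient of automated water-body detection from multispectral imagery is a spectral index threshold; the abstract does not specify the method used, so the NDWI-based mask below (McFeeters' NDWI = (green − NIR)/(green + NIR), water typically > 0) is only a hypothetical sketch of such a detection step, with made-up reflectance values:

```python
import numpy as np

def water_mask(green, nir, threshold=0.0):
    # NDWI is high for water (strong green reflectance, low NIR) and
    # low for vegetation and bare ground; epsilon avoids division by zero.
    ndwi = (green - nir) / (green + nir + 1e-9)
    return ndwi > threshold

# Toy 2x2 reflectance patch: left column water-like, right column land-like.
green = np.array([[0.30, 0.05], [0.28, 0.06]])
nir   = np.array([[0.05, 0.40], [0.04, 0.35]])
print(water_mask(green, nir).tolist())  # prints [[True, False], [True, False]]
```

In a full workflow, a mask like this would then be cleaned of noise and infrastructure false positives before shoreline geometries are extracted.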
This paper sheds new light on the role of communication for cartel formation. Using machine learning to evaluate free-form chat communication among firms in a laboratory experiment, we identify typical communication patterns for both explicit cartel formation and indirect attempts to collude tacitly. We document that firms are less likely to communicate explicitly about price fixing and more likely to use indirect messages when sanctioning institutions are present. This effect of sanctions on communication reinforces the direct cartel-deterring effect of sanctions as collusion is more difficult to reach and sustain without an explicit agreement. Indirect messages have no, or even a negative, effect on prices.
The leniency rule revisited
(2021)
The experimental literature on antitrust enforcement provides robust evidence that communication plays an important role for the formation and stability of cartels. We extend these studies through a design that distinguishes between innocuous communication and communication about a cartel, sanctioning only the latter. To this aim, we introduce a participant in the role of the competition authority, who is properly incentivized to judge the communication content and price setting behavior of the firms. Using this novel design, we revisit the question whether a leniency rule successfully destabilizes cartels. In contrast to existing experimental studies, we find that a leniency rule does not affect cartelization. We discuss potential explanations for this contrasting result.
COVID-19
(2021)
We investigate how the economic consequences of the pandemic and the government-mandated measures to contain its spread affect the self-employed — particularly women — in Germany. For our analysis, we use representative, real-time survey data in which respondents were asked about their situation during the COVID-19 pandemic. Our findings indicate that among the self-employed, who generally face a higher likelihood of income losses due to COVID-19 than employees, women are about one-third more likely to experience income losses than their male counterparts. We do not find a comparable gender gap among employees. Our results further suggest that the gender gap among the self-employed is largely explained by the fact that women disproportionately work in industries that are more severely affected by the COVID-19 pandemic. Our analysis of potential mechanisms reveals that women are significantly more likely to be impacted by government-imposed restrictions, e.g., the regulation of opening hours. We conclude that future policy measures intending to mitigate the consequences of such shocks should account for this considerable variation in economic hardship.
Detrimental effects of adverse family conditions for children's wellbeing are well-documented, but little is known about the impact of specific risk factors, or about potential protective factors that buffer the effects of family risk factors on negative development.
We investigated the impact of five important family risk factors (e.g., parental conflict) on internalizing and externalizing problems and the potential buffering effects of peer acceptance and academic skills at two measurement points two years apart in 1195 7- to 10-year-olds (T1: mean age = 8.54 years).
Latent change models showed that increases in risk factors over the two years predicted increasing internalizing and externalizing problems. Parental conflict was the most impactful risk factor, although peer acceptance and academic skills showed some buffering effects.
The results highlight the necessity of investigating cumulative and single risk factors, specifically interparental conflict, and emphasize the need to strengthen children's internal and social resources to buffer the effects of adverse family conditions.
Previous literature has shown that task-based goal-setting and distributed learning is beneficial to university-level course performance. We investigate the effects of making these insights salient to students by sending out goal-setting prompts in a blended learning environment with bi-weekly quizzes. The randomized field experiment in a large mandatory economics course shows promising results: the treated students outperform the control group. They are 18.8% (0.20 SD) more likely to pass the exam and earn 6.7% (0.19 SD) more points on the exam. While we cannot causally disentangle the effects of goal-setting from the prompt sent, we observe that treated students use the online learning platform earlier in the semester and attempt more online exercises compared to the control group. The heterogeneity analysis suggests that higher treatment effects are associated with low performance at the beginning of the course.
Looking for participation
(2022)
A stronger learner orientation through participatory learning increases learning motivation and results. But what does participatory learning mean? Where do learning factories and fabrication laboratories (FabLabs) stand in this context, and how can didactic implementation be improved in this respect? Using a newly developed analytical framework, which contains elements of the stage model of participation and general media didactics, we compare a FabLab and a learning factory example concerning the degree of participation. From this, we derive guidelines for designing participative teaching and learning processes in learning factories. We explain how FabLabs can be an inspiration for the didactic design of learning factories.
This study deals with the East Beni Suef Basin (Eastern Desert, Egypt) and aims to evaluate the source-generative potential, reconstruct the burial and thermal history, examine the most influential parameters on thermal maturity modeling, and improve on the models already published for the West Beni Suef to ultimately formulate a complete picture of the whole basin evolution.
Source rock evaluation was carried out based on TOC, Rock-Eval pyrolysis, and visual kerogen petrography analyses. Three kerogen types (II, II/III, and III) are distinguished in the East Beni Suef Basin, where the Abu Roash "F" Member acts as the main source rock with good to excellent source potential, oil-prone mainly type II kerogen, and immature to marginal maturity levels.
The burial history shows four depositional and erosional phases linked with the tectonic evolution of the basin. A hiatus (due to erosion or non-deposition) has occurred during the Late Eocene-Oligocene in the East Beni Suef Basin, while the West Beni Suef Basin has continued subsiding.
Sedimentation began later (Middle to Late Albian) with lower rates in the East Beni Suef Basin compared with the West Beni Suef Basin (Early Albian). The Abu Roash "F" source rock exists in the early oil window with a present-day transformation ratio of about 19% and 21% in the East and West Beni Suef Basin, respectively, while the Lower Kharita source rock, which is only recorded in the West Beni Suef Basin, has reached the late oil window with a present-day transformation ratio of about 70%.
The magnitude of erosion and heat flow have proportional and mutual effects on thermal maturity.
We present three possible scenarios of basin modeling in the East Beni Suef Basin concerning the erosion from the Apollonia and Dabaa formations.
Results of this work can serve as a basis for subsequent 2D and/or 3D basin modeling, which are highly recommended to further investigate the petroleum system evolution of the Beni Suef Basin.
The subsurface is a temporally dynamic and spatially heterogeneous compartment of the Earth's critical zone, and biogeochemical transformations taking place in this compartment are crucial for the cycling of nutrients.
The impact of spatial heterogeneity on such microbially mediated nutrient cycling is not well known, which poses a severe challenge for predicting in situ biogeochemical transformation rates and, further, the nutrient loading contributed by groundwater to surface water bodies.
Therefore, we used a numerical modelling approach to evaluate the sensitivity of groundwater microbial biomass distribution and nutrient cycling to spatial heterogeneity in different scenarios accounting for various residence times.
The model results gave us an insight into domain characteristics with respect to the presence of oxic niches in predominantly anoxic zones and vice versa depending on the extent of spatial heterogeneity and the flow regime.
The obtained results show that microbial abundance, distribution, and activity are sensitive to the applied flow regime and that the mobile (i.e. observable by groundwater sampling) fraction of microbial biomass is a variable, yet small, fraction of the total biomass in a domain. Furthermore, spatial heterogeneity resulted in anaerobic niches in the domain and shifts in microbial biomass between active and inactive states. Neglecting spatial heterogeneity can thus result in inaccurate estimation of microbial activity; in most cases this leads to an overestimation of nutrient removal (up to twice the actual amount) along a flow path.
We conclude that the governing factors for evaluating this are the residence time of solutes and the Damköhler number (Da) of the biogeochemical reactions in the domain. We propose a relationship to scale the impact of spatial heterogeneity on nutrient removal, governed by log₁₀(Da).
This relationship may be applied in upscaled descriptions of microbially mediated nutrient cycling dynamics in the subsurface thereby resulting in more accurate predictions of, for example, carbon and nitrogen cycling in groundwater over long periods at the catchment scale.
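For a first-order reaction, the Damköhler number is the ratio of the transport (residence) timescale to the reaction timescale, Da = k·τ; Da ≫ 1 means solutes react before leaving the domain, Da ≪ 1 means they are flushed through largely unreacted. The rate constant and residence time below are hypothetical values for illustration only:

```python
import math

def damkohler(k, tau):
    # First-order Damkohler number: rate constant k [1/day] times
    # residence time tau [day]; dimensionless.
    return k * tau

da = damkohler(k=0.5, tau=20.0)   # hypothetical: slow flow, moderate reactivity
print(da, math.log10(da))         # prints 10.0 1.0
```

A scaling law expressed in log₁₀(Da) then treats a reaction-limited domain (log₁₀(Da) < 0) and a transport-limited one (log₁₀(Da) > 0) on a single axis.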
An increasing number of clinicians (i.e., nurses and physicians) suffer from mental health-related issues like depression and burnout. These, in turn, strain communication, collaboration, and decision-making, areas in which Conversational Agents (CAs) have been shown to be useful. Thus, in this work, we followed a mixed-method approach and systematically analysed the literature on factors affecting the well-being of clinicians and on CAs' potential to improve that well-being by providing support in communication, collaboration, and decision-making in hospitals. In this respect, we are guided by Brigham et al.'s (2018) model of factors influencing well-being. Based on an initial number of 840 articles, we analysed 52 papers in more detail and identified the influences of CAs' fields of application on external and individual factors affecting clinicians' well-being. As our second method, we will conduct interviews with clinicians and experts on CAs to verify and extend these influencing factors.
Epidemiological data suggest that consuming diets rich in carotenoids can reduce the risk of developing several non-communicable diseases. Thus, we investigated the extent to which carotenoid contents of foods can be increased by the choice of food matrices with naturally high carotenoid contents and thermal processing methods that maintain their stability. For this purpose, carotenoids of 15 carrot (Daucus carota L.) cultivars of different colors were assessed with UHPLC-DAD-ToF-MS. Additionally, the processing effects of air drying, air frying, and deep frying on carotenoid stability were applied. Cultivar selection accounted for up to 12.9-fold differences in total carotenoid content in differently colored carrots and a 2.2-fold difference between orange carrot cultivars. Air frying for 18 and 25 min and deep frying for 10 min led to a significant decrease in total carotenoid contents. TEAC assay of lipophilic extracts showed a correlation between carotenoid content and antioxidant capacity in untreated carrots.
Thermally stable photoswitches that are driven with low-energy light are rare, yet crucial for extending the applicability of photoresponsive molecules and materials towards, e.g., living systems. Combined ortho-fluorination and -amination couples high visible light absorptivity of o-aminoazobenzenes with the extraordinary bistability of o-fluoroazobenzenes. Herein, we report a library of easily accessible o-aminofluoroazobenzenes and establish structure-property relationships regarding spectral qualities, visible light isomerization efficiency and thermal stability of the cis-isomer with respect to the degree of o-substitution and choice of amino substituent. We rationalize the experimental results with quantum chemical calculations, revealing the nature of low-lying excited states and providing insight into thermal isomerization. The synthesized azobenzenes absorb at up to 600 nm and their thermal cis-lifetimes range from milliseconds to months. The most unique example can be driven from trans to cis with any wavelength from UV up to 595 nm, while still exhibiting a thermal cis-lifetime of 81 days.
Poly(ionic liquid)s (PILs) are common precursors for heteroatom-doped carbon materials. Despite relatively high carbonization yields, the PIL-to-carbon conversion process faces challenges in preserving morphological and structural motifs on the nanoscale. Assisted by a thin polydopamine coating route and ion exchange, imidazolium-based PIL nanovesicles were successfully applied in morphology-maintaining carbonization to prepare carbon composite nanocapsules. Extending this strategy further to their composites, we demonstrate the synthesis of carbon composite nanocapsules functionalized with iron nitride nanoparticles of an ultrafine, uniform size of 3-5 nm (termed "FexN@C"). Owing to its unique nanostructure, the sulfur-loaded FexN@C electrode was shown to efficiently mitigate the notorious shuttle effect of lithium polysulfides (LiPSs) in Li-S batteries. The cavity of the carbon nanocapsules was found to increase the sulfur loading content. The well-dispersed iron nitride nanoparticles effectively catalyze the conversion of LiPSs to Li2S, owing to their high electronic conductivity and strong binding affinity to LiPSs. Benefiting from this well-crafted composite nanostructure, the constructed FexN@C/S cathode demonstrated a high initial discharge capacity of 1085 mAh g(-1) at 0.5 C and retained 930 mAh g(-1) after 200 cycles. In addition, it exhibits an excellent rate capability with a high initial discharge capacity of 889.8 mAh g(-1) at 2 C. This facile PIL-to-nanocarbon synthetic approach is applicable to the exquisite design of complex hybrid carbon nanostructures with potential use in electrochemical energy storage and conversion.
The study of perceptual flexibility in speech depends on a variety of tasks that feature a large degree of variability between participants. Of critical interest is whether measures are consistent within an individual or across stimulus contexts. This is particularly key for individual difference designs that are deployed to examine the neural basis or clinical consequences of perceptual flexibility. In the present set of experiments, we assess the split-half reliability and construct validity of five measures of perceptual flexibility: three of learning in a native language context (e.g., understanding someone with a foreign accent) and two of learning in a non-native context (e.g., learning to categorize non-native speech sounds). We find that most of these tasks show an appreciable level of split-half reliability, although construct validity was sometimes weak. This provides good evidence for reliability for these tasks, while highlighting possible upper limits on expected effect sizes involving each measure.
Changing climatic conditions and unsustainable land use are major threats to savannas worldwide. Historically, many African savannas were used intensively for livestock grazing, which contributed to widespread patterns of bush encroachment across savanna systems. To reverse bush encroachment, it has been proposed to change the cattle-dominated land use to one dominated by comparatively specialized, usually native browsing herbivores. However, the consequences for ecosystem properties and processes remain largely unclear. We used the ecohydrological, spatially explicit model EcoHyD to assess the impacts of two contrasting herbivore land-use strategies on a Namibian savanna: grazer- versus browser-dominated herbivore communities. We varied the densities of grazers and browsers and determined the resulting composition and diversity of the plant community, total vegetation cover, soil moisture, and water use by plants. Our results showed that plant types that are less palatable to herbivores were best adapted to grazing or browsing animals at all simulated densities. Also, plant types that had a competitive advantage under limited water availability were among the dominant ones irrespective of land-use scenario. Overall, the results were in line with our expectations: under high grazer densities, we found heavy bush encroachment and the loss of the perennial grass matrix. Importantly, regardless of the density of browsers, grass cover and plant functional diversity were significantly higher in browsing scenarios. Browsing herbivores increased grass cover, and the higher total cover in turn improved water uptake by plants overall. We concluded that, in contrast to grazing-dominated land-use strategies, land-use strategies dominated by browsing herbivores, even at high herbivore densities, sustain diverse vegetation communities with high cover of perennial grasses, resulting in lower erosion risk and bolstering ecosystem services.
The investigation of metabolic fluxes and metabolite distributions within cells by means of tracer molecules is a valuable tool to unravel the complexity of biological systems. Technological advances in mass spectrometry (MS), such as atmospheric pressure chemical ionization (APCI) coupled with high resolution (HR), not only allow for highly sensitive analyses but also broaden the usefulness of tracer-based experiments, as interesting signals can be annotated de novo when not yet present in a compound library. However, several effects in the APCI ion source, i.e., fragmentation and rearrangement, lead to superimposed mass isotopologue distributions (MIDs) within the mass spectra, which need to be corrected during data evaluation, as they would otherwise impair enrichment calculations. Here, we present and evaluate a novel software tool to automatically perform such corrections. We discuss the different effects, explain the implemented algorithm, and show its application on several experimental datasets. This adjustable tool is available as an R package from CRAN.
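The abstract's tool corrects for in-source fragmentation and rearrangement effects; as a simpler illustration of the general principle behind MID correction (a linear correction matrix mapping the underlying distribution to the measured one, inverted by least squares), the following sketch corrects a measured MID for natural 13C abundance. It is not the algorithm of the R package described above; the function names and the binomial-abundance assumption are illustrative only.

```python
import math
import numpy as np

def correction_matrix(n_carbons, p13c=0.0107):
    # cm[i, j]: probability that a molecule whose tracer-derived state is
    # M+j is measured as M+i, assuming binomial natural 13C incorporation
    # in the remaining n_carbons - j unlabeled positions.
    size = n_carbons + 1
    cm = np.zeros((size, size))
    for j in range(size):
        rest = n_carbons - j
        for k in range(rest + 1):
            cm[j + k, j] = math.comb(rest, k) * p13c**k * (1 - p13c)**(rest - k)
    return cm

def correct_mid(measured, n_carbons):
    # Solve measured = CM @ true for the underlying MID, then clip small
    # negative values caused by noise and renormalize to sum to 1.
    cm = correction_matrix(n_carbons)
    est, *_ = np.linalg.lstsq(cm, np.asarray(measured, float), rcond=None)
    est = np.clip(est, 0.0, None)
    return est / est.sum()
```

Superimposed distributions from fragmentation can in principle be handled the same way, by stacking the per-effect contribution matrices into one linear system before inversion.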
ABSTRACT: Structural evolution of cesium triiodide at high pressures has been revealed by synchrotron single-crystal X-ray diffraction. Cesium triiodide undergoes a first-order phase transition above 1.24(3) GPa from an orthorhombic to a trigonal system. This transition is coupled with severe reorganization of the polyiodide network from a layered to three-dimensional architecture. Quantum chemical calculations show that even though the two polymorphic phases are nearly isoenergetic under ambient conditions, the PV term is decisive in stabilizing the trigonal polymorph above the transition point. Phonon calculations using a non-local correlation functional that accounts for dispersion interactions confirm that this polymorph is dynamically unstable under ambient conditions. The high-pressure behavior of crystalline CsI3 can be correlated with other alkali metal trihalides, which undergo a similar sequence of structural changes upon load.
Digital Platforms (DPs) have established themselves in recent years as a central concept in information technology research. Due to the great diversity of digital platform concepts, clear definitions are still required. Furthermore, DPs are subject to dynamic changes from internal and external factors, which pose challenges for digital platform operators, developers and customers. Which research directions should be pursued to address these challenges has so far remained open. The following paper aims to contribute to this by outlining a systematic literature review (SLR) on digital platform concepts in the context of the Industrial Internet of Things (IIoT) for manufacturing companies and provides a basis for (1) a selection of definitions of current digital platform and ecosystem concepts and (2) a selection of current digital platform research directions. These directions are divided into (a) occurrence of digital platforms, (b) emergence of digital platforms, (c) evaluation of digital platforms, (d) development of digital platforms, and (e) selection of digital platforms.
Concepts and techniques for 3D-embedded treemaps and their application to software visualization
(2024)
This thesis addresses concepts and techniques for interactive visualization of hierarchical data using treemaps. It explores (1) how treemaps can be embedded in 3D space to improve their information content and expressiveness, (2) how the readability of treemaps can be improved using level-of-detail and degree-of-interest techniques, and (3) how to design and implement a software framework for the real-time web-based rendering of treemaps embedded in 3D. With a particular emphasis on their application, use cases from software analytics are taken to test and evaluate the presented concepts and techniques.
Concerning the first challenge, this thesis shows that a 3D attribute space offers enhanced possibilities for the visual mapping of data compared to classical 2D treemaps. In particular, embedding in 3D allows for improved implementation of visual variables (e.g., by sketchiness and color weaving), provision of new visual variables (e.g., by physically based materials and in situ templates), and integration of visual metaphors (e.g., by reference surfaces and renderings of natural phenomena) into the three-dimensional representation of treemaps.
For the second challenge—the readability of an information visualization—the work shows that the generally higher visual clutter and increased cognitive load typically associated with three-dimensional information representations can be kept low in treemap-based representations of both small and large hierarchical datasets. By introducing an adaptive level-of-detail technique, we can not only declutter the visualization results, thereby reducing cognitive load and mitigating occlusion problems, but also summarize and highlight relevant data. Furthermore, this approach facilitates automatic labeling, supports the emphasis on data outliers, and allows visual variables to be adjusted via degree-of-interest measures.
The third challenge is addressed by developing a real-time rendering framework with WebGL and accumulative multi-frame rendering. The framework removes hardware constraints and graphics API requirements, reduces interaction response times, and simplifies high-quality rendering. At the same time, the implementation effort for a web-based deployment of treemaps is kept reasonable.
The presented visualization concepts and techniques are applied and evaluated for use cases in software analysis. In this domain, data about software systems, especially about the state and evolution of the source code, does not have a descriptive appearance or natural geometric mapping, making information visualization a key technology here. In particular, software source code can be visualized with treemap-based approaches because of its inherently hierarchical structure. With treemaps embedded in 3D, we can create interactive software maps that visually map software metrics, software developer activities, or information about the evolution of software systems alongside their hierarchical module structure.
Discussions on remaining challenges and opportunities for future research for 3D-embedded treemaps and their applications conclude the thesis.
The intensity of cosmic radiation may vary over five orders of magnitude within a few hours or days during Solar Particle Events (SPEs), increasing the probability of Single Event Upsets (SEUs) in space-borne electronic systems by several orders of magnitude. Therefore, it is vital to enable the early detection of SEU rate changes in order to ensure timely activation of dynamic radiation hardening measures. In this paper, an embedded approach for the prediction of SPEs and the SRAM SEU rate is presented. The proposed solution combines a real-time SRAM-based SEU monitor, an offline-trained machine learning model, and an online learning algorithm for the prediction. With respect to the state of the art, our solution brings the following benefits: (1) use of the existing on-chip data-storage SRAM as a particle detector, thus minimizing the hardware and power overhead; (2) prediction of the SRAM SEU rate one hour in advance, with fine-grained hourly tracking of SEU variations during SPEs as well as under normal conditions; (3) online optimization of the prediction model to enhance prediction accuracy at run-time; (4) negligible hardware accelerator design cost for the implementation of the selected machine learning model and online learning algorithm. The proposed design is intended for a highly dependable and self-adaptive multiprocessing system employed in space applications, allowing radiation mitigation mechanisms to be triggered before the onset of high radiation levels.
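The abstract does not specify the machine learning model or the online learning algorithm, so the following is only a minimal sketch of the combined offline/online idea: a linear autoregressive predictor of the next hourly SEU count whose weights (which could be pre-trained offline) are refined at run-time by stochastic gradient descent. The class name and hyperparameters are hypothetical.

```python
import numpy as np

class OnlineSEUPredictor:
    """Illustrative stand-in for the paper's offline-trained, online-refined
    model: predicts the next hourly SEU count from the last `window` counts
    with a linear model updated online by gradient descent."""

    def __init__(self, window=4, lr=1e-3):
        self.window = window
        self.lr = lr
        self.w = np.zeros(window + 1)  # linear weights plus a bias term

    def _features(self, history):
        # Last `window` hourly counts, with a constant 1.0 for the bias.
        return np.append(np.asarray(history[-self.window:], float), 1.0)

    def predict(self, history):
        return float(self.w @ self._features(history))

    def update(self, history, observed):
        # One SGD step on the squared one-hour-ahead prediction error.
        x = self._features(history)
        err = self.w @ x - observed
        self.w -= self.lr * err * x
```

In the paper's setting, the input counts would come from the SRAM-based SEU monitor, and a step like `update` would run each hour to adapt the model during an evolving SPE.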
Studies have evaluated the effectiveness of dual career (DC) support services among student-athletes by examining scholastic performances.
These studies investigated self-reported grades of student-athletes or focused on career choices student-athletes made after leaving school. Most of these studies examined scholastic performances cross-sectionally among lower secondary school student-athletes or student-athletes in higher education.
The present longitudinal field study in a quasi-experimental design aims to evaluate the development of scholastic performances among upper secondary school students aged 16-19 by using standardized scholastic assessments and grade points in the subject English over a course of 3-4 years.
A sample of 159 students (54.4% females) at three German Elite Sport Schools (ESS) and three comprehensive schools participated in the study. The sample was split into six groups according to three criteria: (1) students' athletic engagement, (2) school type attendance, and (3) usage of DC support services in secondary school.
Repeated-measurement analyses of variance were conducted in order to evaluate the impact of the three previously mentioned criteria as well as their interaction on the development of scholastic performances.
Findings indicated that the development of English performance levels differed among the six groups.
Invention
(2023)
This entry addresses invention from five different perspectives: (i) definition of the term, (ii) mechanisms underlying invention processes, (iii) (pre-)history of human inventions, (iv) intellectual property protection vs open innovation, and (v) case studies of great inventors. Regarding the definition, an invention is the outcome of a creative process taking place within a technological milieu, which is recognized as successful in terms of its effectiveness as an original technology. In the process of invention, a technological possibility becomes realized. Inventions are distinct from either discovery or innovation. In human creative processes, seven mechanisms of invention can be observed, yielding characteristic outcomes: (1) basic inventions, (2) invention branches, (3) invention combinations, (4) invention toolkits, (5) invention exaptations, (6) invention values, and (7) game-changing inventions. The development of humanity has been strongly shaped by inventions ever since early stone tools and the conception of agriculture. An “explosion of creativity” has been associated with Homo sapiens, and inventions in all fields of human endeavor have followed suit, engendering an exponential growth of cumulative culture. This culture development emerges essentially through a reuse of previous inventions, their revision, amendment and rededication. In sociocultural terms, humans have increasingly regulated processes of invention and invention-reuse through concepts such as intellectual property, patents, open innovation and licensing methods. Finally, three case studies of great inventors are considered: Edison, Marconi, and Montessori, alongside a discussion of human invention processes as collaborative endeavors.
Current attempts to prevent and manage type 2 diabetes have been moderately effective, and a better understanding of the molecular roots of this complex disease is important to develop more successful and precise treatment options.
Recently, we initiated the collective diabetes cross, where four mouse inbred strains differing in their diabetes susceptibility were crossed with the obese and diabetes-prone NZO strain and identified the quantitative trait loci (QTL) Nidd13/NZO, a genomic region on chromosome 13 that correlates with hyperglycemia in NZO allele carriers compared to B6 controls.
Subsequent analysis of the critical region, harboring 644 genes, included expression studies in pancreatic islets of congenic Nidd13/NZO mice, integration of single-cell data from parental NZO and B6 islets as well as haplotype analysis.
Finally, of the five genes (Acot12, S100z, Ankrd55, Rnf180, and Iqgap2) within the polymorphic haplotype block that are differentially expressed in islets of B6 compared to NZO mice, we identified the gene encoding the calcium-binding protein S100z as affecting islet cell proliferation as well as apoptosis when overexpressed in MIN6 cells. In summary, we define S100z as the most striking candidate causal gene for the diabetes QTL Nidd13/NZO, affecting beta-cell proliferation and apoptosis. Thus, S100z is an entirely novel diabetes gene regulating islet cell function.
The business problem of having inefficient processes, imprecise process analyses and simulations as well as non-transparent artificial neuronal network models can be overcome by an easy-to-use modeling concept. With the aim of developing a flexible and efficient approach to modeling, simulating and optimizing processes, this paper proposes a flexible Concept of Neuronal Modeling (CoNM). The modeling concept, which is described by the modeling language designed and its mathematical formulation and is connected to a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulations (NPS) and Neuronal Process Optimizations (NPO). The efficacy of the designed artifacts was demonstrated rigorously by means of six experiments and a simulator of real industrial production processes.
Model uncertainty quantification is an essential component of effective data assimilation. Model errors associated with sub-grid scale processes are often represented through stochastic parameterizations of the unresolved process. Many existing stochastic parameterization schemes are only applicable when knowledge of the true sub-grid scale process or full observations of the coarse scale process are available, which is typically not the case in real applications. We present a methodology for estimating the statistics of sub-grid scale processes for the more realistic case that only partial observations of the coarse scale process are available. Model error realizations are estimated over a training period by minimizing their conditional sum of squared deviations given some informative covariates (e.g., state of the system), constrained by available observations and assuming that the observation errors are smaller than the model errors. From these realizations a conditional probability distribution of additive model errors given these covariates is obtained, allowing for complex non-Gaussian error structures. Random draws from this density are then used in actual ensemble data assimilation experiments. We demonstrate the efficacy of the approach through numerical experiments with the multi-scale Lorenz 96 system using both small and large time scale separations between slow (coarse scale) and fast (fine scale) variables. The resulting error estimates and forecasts obtained with this new method are superior to those from two existing methods.
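The last step described above, drawing random additive model errors from a covariate-conditioned, possibly non-Gaussian distribution, can be illustrated with a minimal nonparametric sketch: pool previously estimated error realizations by binned covariate value (e.g., system state), then resample within the matching bin. This glosses over the paper's actual constrained estimation of the error realizations; the function and parameter names are hypothetical.

```python
import numpy as np

def build_error_sampler(covariates, errors, n_bins=10, rng=None):
    """Given training-period (covariate, model-error) pairs, return a
    function that draws a state-dependent additive error by resampling
    from the quantile bin matching the supplied covariate value."""
    rng = rng or np.random.default_rng(0)
    errors = np.asarray(errors, float)
    # Quantile bin edges over the covariate, so each pool is well populated.
    edges = np.quantile(covariates, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, covariates, side="right") - 1,
                  0, n_bins - 1)
    pools = [errors[idx == b] for b in range(n_bins)]

    def sample(x):
        b = int(np.clip(np.searchsorted(edges, x, side="right") - 1,
                        0, n_bins - 1))
        # Fall back to the full pool if a bin happens to be empty.
        pool = pools[b] if len(pools[b]) else errors
        return float(rng.choice(pool))

    return sample
```

In an ensemble assimilation experiment, each ensemble member would call `sample` with its own current state at every forecast step, so that the injected errors remain consistent with the state-dependent, non-Gaussian structure seen in training.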
The nature of the sources powering nebular He II emission in star-forming galaxies remains debated, and various types of objects have been considered, including Wolf-Rayet stars, X-ray binaries, and Population III stars.
Modern X-ray observations show the ubiquitous presence of hot gas filling star-forming galaxies. We use a collisional ionization plasma code to compute the specific He II ionizing flux produced by hot gas and show that if its temperature is not too high (≲2.5 MK), then the observed levels of soft diffuse X-ray radiation could explain He II ionization in galaxies.
To gain a physical understanding of this result, we propose a model that combines the hydrodynamics of cluster winds and hot superbubbles with observed populations of young massive clusters in galaxies. We find that in low-metallicity galaxies, the temperature of hot gas is lower and the production rate of He II ionizing photons is higher compared to high-metallicity galaxies. The reason is that the slower stellar winds of massive stars in lower-metallicity galaxies input less mechanical energy into the ambient medium.
Furthermore, we show that ensembles of star clusters up to ∼10-20 Myr old in galaxies can produce enough soft X-rays to induce nebular He II emission. We discuss observations of the template low-metallicity galaxy I Zw 18 and suggest that the He II nebula in this galaxy is powered by a hot superbubble.
Finally, appreciating the complex nature of stellar feedback, we suggest that soft X-rays from hot superbubbles are among the dominant sources of He II ionizing flux in low-metallicity star-forming galaxies.
It has been highlighted many times how difficult it is to draw a boundary between gift and bribe, and how the same transfer can be interpreted in different ways according to the position of the observer and the narrative frame into which it is inserted. This also applied of course to Ancient Rome; in both the Republic and Principate lawgivers tried to define the limits of acceptable transfers and thus also to identify what we might call ‘corruption’. Yet, such definitions remained to a large extent blurred, and what was constructed was mostly a ‘code of conduct’, allowing Roman politicians to perform their own ‘honesty’ in public duty – while being aware at all times that their involvement in different kinds of transfer might be used by their opponents against them and presented as a case of ‘corrupt’ behaviour.
Widespread on social networking sites (SNSs), envy has been linked to an array of detrimental outcomes for users’ well-being. While envy has been considered a status-related emotion and is likely to be experienced in response to perceiving another’s higher status, there is a lack of research exploring how status perceptions influence the emergence of envy on SNSs. This is important because SNSs typically quantify social interactions and reach with metrics that indicate users’ relative rank and status in the network. To understand how status perceptions impact SNS users, we introduce a new form of metric-based digital status rooted in SNS metrics that are available and visible on a platform. Drawing on social comparison theory and status literature, we conducted an online experiment to investigate how different forms of status contribute to the proliferation of envy on SNSs. Our findings shed light on how metric-based digital status influences feelings of envy on SNSs. Specifically, we could show that metric-based digital status impacts envy through increasing perceptions of others’ socioeconomic and sociometric statuses. Our study contributes to the growing discourse on the negative outcomes associated with SNS use and its consequences for users and society.
Who has the future in mind?
(2022)
An individual's relation to time may be an important driver of pro-environmental behaviour. We studied whether young individuals' gender and time-orientation are associated with pro-environmental behaviour. In a controlled laboratory environment with students in Germany, participants earned money by performing a real-effort task and were then offered the opportunity to invest their money into an environmental project that supports climate protection. Afterwards, we controlled for their time-orientation. In this consequential behavioural setting, we find that males who scored higher on future-negative orientation showed significantly more pro-environmental behaviour compared to females who scored higher on future-negative orientation and males who scored lower on future-negative orientation. Interestingly, our results are completely reversed when it comes to past-positive orientation. These findings have practical implications regarding the most appropriate way to address individuals in order to achieve more pro-environmental behaviour.
The envy spiral
(2020)
On Social Networking Sites (SNS), users disclose mostly positive and often self-enhancing information. Scholars refer to this phenomenon as the positivity bias in SNS communication (PBSC). However, while theoretical explanations for this phenomenon have been proposed, an empirical proof of these theorized mechanisms is still missing. The project presented in this Research-in-Progress paper aims at explaining the PBSC with the mechanism specified in the self-enhancement envy spiral. Specifically, we hypothesize that feelings of envy drive people to post positive and self-enhancing content on SNS. To test this hypothesis, we developed an experimental design allowing us to examine the causal effect of envy on the positivity of users' subsequently posted content. In a preliminary study, we tested our manipulation of envy and could show its effectiveness in inducing different levels of envy between our groups. Our project will help to broaden the understanding of the complex dynamics of SNS and the potentially adverse driving forces underlying them.
This paper studies how individuals discount the utility they derive from their provision of goods over spatial distance. In a controlled laboratory experiment in Germany, we elicit preferences for the provision of the same good at different locations. To isolate spatial preferences from any other direct value of the goods being close to the individual, we focus on goods with “existence value.” We find that individuals put special weight on the provision of these goods in their immediate vicinity. This “vicinity bias” represents a spatial analogy to the “present bias” in the time dimension.
Enhancing economic efficiency in modular production systems through deep reinforcement learning
(2024)
In times of increasingly complex production processes and volatile customer demands, production adaptability is crucial for a company's profitability and competitiveness. The ability to cope with rapidly changing customer requirements and unexpected internal and external events guarantees robust and efficient production processes, requiring a dedicated control concept at the shop floor level. Yet in today's practice, conventional control approaches remain in use, which may not keep up with the dynamic behaviour due to their scenario-specific and rigid properties. To address this challenge, deep learning methods have increasingly been deployed due to their optimization and scalability properties. However, these approaches have often been tested in specific operational applications and have focused on technical performance indicators such as order tardiness or total throughput. In this paper, we propose a deep reinforcement learning based production control to optimize combined techno-financial performance measures. Based on pre-defined manufacturing modules that are supplied and operated by multiple agents, positive effects were observed in terms of increased revenue and reduced penalties due to lower throughput times and fewer delayed products. The combined modular and multi-staged approach as well as the distributed decision-making further leverage scalability and transferability to other scenarios.
Shape-memory hydrogels (SMH) are multifunctional, actively moving polymers of interest in biomedicine. In loosely crosslinked polymer networks, gelatin chains may form triple helices, which can act as temporary net points in SMH, depending on the presence of salts. Here, we show programming and initiation of the shape-memory effect of such networks based on a thermomechanical process compatible with the physiological environment. The SMH were synthesized by reaction of glycidylmethacrylated gelatin with oligo(ethylene glycol) (OEG) α,ω-dithiols of varying crosslinker length and amount. Triple helicalization of gelatin chains is shown directly by wide-angle X-ray scattering and indirectly via the mechanical behavior at different temperatures. The ability to form triple helices increased with the molar mass of the crosslinker. Hydrogels had storage moduli of 0.27-23 kPa and Young's moduli of 215-360 kPa at 4 degrees C. The hydrogels were hydrolytically degradable, with full degradation to water-soluble products within one week at 37 degrees C and pH = 7.4. A thermally-induced shape-memory effect is demonstrated in bending as well as in compression tests, in which shape recovery with excellent shape-recovery rates Rr close to 100% was observed. In the future, the material presented here could be applied, e.g., as self-anchoring devices mechanically resembling the extracellular matrix.
With the many challenges facing the agricultural system, such as water scarcity, loss of arable land due to climate change, population growth, urbanization or trade disruptions, new agri-food systems are needed to ensure food security in the future. In addition, healthy diets are needed to combat non-communicable diseases. Therefore, plant-based diets rich in health-promoting plant secondary metabolites are desirable. A saline indoor farming system represents a sustainable and resilient new agri-food system and can preserve valuable fresh water. Since indoor farming relies on artificial lighting, assessment of lighting conditions is essential. In this thesis, the cultivation of halophytes in a saline indoor farming system was evaluated and the influence of cultivation conditions was assessed with the aim of improving the nutritional quality of halophytes for human consumption. Therefore, five selected edible halophyte species (Brassica oleracea var. palmifolia, Cochlearia officinalis, Atriplex hortensis, Chenopodium quinoa, and Salicornia europaea) were cultivated in saline indoor farming. The halophyte species were selected according to their salt-tolerance levels and mechanisms. First, the suitability of halophytes for saline indoor farming and the influence of salinity on their nutritional properties, e.g. plant secondary metabolites and minerals, were investigated. Changes in plant performance and nutritional properties were observed as a function of salinity. The response to salinity was found to be species-specific and related to the salt-tolerance mechanism of the halophytes. At their optimal salinity levels, the halophytes showed improved carotenoid contents. In addition, a negative correlation was found between the nitrate and chloride content of halophytes as a function of salinity. Since chloride and nitrate can be antinutrient compounds, depending on their content, monitoring is essential, especially in halophytes.
Second, regional brine water was introduced as an alternative saline water resource in the saline indoor farming system. Brine water was shown to be feasible for saline indoor farming of halophytes, as there was no adverse effect on growth or nutritional properties, e.g. carotenoids. Carotenoids were shown to be less affected by salt composition than by salt concentration. In addition, the interaction between salinity and the light regime in indoor farming and greenhouse cultivation was studied, showing that the interaction of light regime and salinity alters the content of carotenoids and chlorophylls. Furthermore, glucosinolate and nitrate contents were also shown to be influenced by the light regime. Finally, the influence of UVB light on halophytes was investigated using supplemental narrow-band UVB LEDs. UVB light was shown to affect the growth, phenotype and metabolite profile of halophytes, with a species-specific UVB response. Furthermore, a modulation of the carotenoid content in S. europaea could be achieved to enhance health-promoting properties and thus improve nutritional quality. This effect was dose-dependent, and the underlying mechanisms of carotenoid accumulation were also investigated, revealing that carotenoid accumulation is related to oxidative stress.
In conclusion, this work demonstrated the potential of halophytes as alternative vegetables produced in a saline indoor farming system for future diets that could contribute to ensuring food security in the future. To improve the sustainability of the saline indoor farming system, LED lamps and regional brine water could be integrated into the system. Since the nutritional properties have been shown to be influenced by salt, light regime and UVB light, these abiotic stressors must be taken into account when considering halophytes as alternative vegetables for human nutrition.
Starting from the observation that the reduced state of a system strongly coupled to a bath is, in general, an athermal state, we introduce and study a cyclic battery-charger quantum device that is in thermal equilibrium, or in a ground state, during the charge storing stage. The cycle has four stages: the equilibrium storage stage is interrupted by disconnecting the battery from the charger, then work is extracted from the battery, and then the battery is reconnected with the charger; finally, the system is brought back to equilibrium. At no point during the cycle are the battery-charger correlations artificially erased. We study the case where the battery and charger together comprise a spin-1/2 Ising chain, and show that the main characteristics, the extracted energy and the thermodynamic efficiency, can be enhanced by operating the cycle close to the quantum phase transition point. When the battery is just a single spin, we find that the output work and efficiency show a scaling behavior at criticality and derive the corresponding critical exponents. Due to the ever-present correlations between the battery and the charger, operations that are equivalent from the perspective of the battery can entail different energetic costs for switching the battery-charger coupling. This happens only when the coupling term does not commute with the battery's bare Hamiltonian, and we use this purely quantum leverage to further optimize the performance of the device.
Charitable giving
(2023)
We investigate how different levels of information influence the allocation decisions of donors who are entitled to freely distribute a fixed monetary endowment between themselves and a charitable organization in both giving and taking frames. Participants donate significantly higher amounts when the decision is described as taking rather than giving. This framing effect becomes smaller if more information about the charity is provided.
The influence of the process gas, laser scan speed, and sample thickness on the build-up of residual stresses and porosity in Ti-6Al-4V produced by laser powder bed fusion was studied. Pure argon and helium, as well as a mixture of those (30% helium), were employed to establish process atmospheres with a low residual oxygen content of 100 ppm O2. The results highlight that the subsurface residual stresses measured by X-ray diffraction were significantly lower in the thin samples (220 MPa) than in the cuboid samples (645 MPa). This difference was attributed to the shorter laser vector length, resulting in heat accumulation and thus in-situ stress relief. The addition of helium to the process gas did not introduce additional subsurface residual stresses in the simple geometries, even for the increased scanning speed. Finally, larger deflection was found in the cantilevers built under helium (after removal from the baseplate) than in those produced under argon and an argon-helium mixture. This result demonstrates that complex designs involving large scanned areas could be subjected to higher residual stress when manufactured under helium due to the gas's high thermal conductivity, heat capacity, and thermal diffusivity.
In control theory, to solve a finite-horizon sequential decision problem (SDP) commonly means to find a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman's backward induction. Textbook authors (e.g. Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is correct as a solution method for deterministic and stochastic SDPs. Botta, Jansson and Ionescu propose a generic framework for finite-horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards. In the present paper, we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg-Moore algebra. They hold in familiar settings like those of deterministic or stochastic SDPs, but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.'s generic framework. Our development is formalised in Idris as an extension of the Botta et al. framework and the sources are available as supplementary material.
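The ordinary (deterministic) backward induction that the monadic version generalizes can be sketched in a few lines. The following is an illustrative toy, not the authors' Idris formalization; all names and the example problem are hypothetical.

```python
# Sketch of Bellman's backward induction for a deterministic finite-horizon SDP.
# step(t, x, a) gives the next state, reward(t, x, a) the immediate reward.

def backward_induction(states, actions, step, reward, horizon):
    """Return the optimal value function at t=0 and one decision rule per step."""
    value = {x: 0.0 for x in states}          # value after the final step
    policy = []
    for t in reversed(range(horizon)):
        new_value, rule = {}, {}
        for x in states:
            best_a, best_v = None, float("-inf")
            for a in actions(t, x):
                v = reward(t, x, a) + value[step(t, x, a)]
                if v > best_v:
                    best_a, best_v = a, v
            new_value[x], rule[x] = best_v, best_a
        value = new_value
        policy.insert(0, rule)               # decision rule for time t
    return value, policy


# Toy problem: walk on states 0..3, move left or right, reward = resulting state.
states = [0, 1, 2, 3]
actions = lambda t, x: [-1, +1]
step = lambda t, x, a: min(max(x + a, 0), 3)
reward = lambda t, x, a: step(t, x, a)

value, policy = backward_induction(states, actions, step, reward, horizon=3)
# value[0] is 6.0: the path 0 -> 1 -> 2 -> 3 collects rewards 1 + 2 + 3
```

The monadic generalization replaces the deterministic `step` by a monadic transition function and the sum by a generic measure; the sketch above corresponds to the identity-monad case.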
Scaling agriculture to the globally rising population demands new approaches for future crop production such as multilayer and multitrophic indoor farming. Moreover, there is a current trend towards sustainable local solutions for aquaculture and saline agriculture. In this context, halophytes are becoming increasingly important for research and the food industry. As Salicornia europaea is a highly salt-tolerant obligate halophyte that can be used as a food crop, indoor cultivation with saline water is of particular interest. Therefore, finding a sustainable alternative to the use of seawater in non-coastal regions is crucial. Our goal was to determine whether natural brines, which are widely distributed and often available in inland areas, provide an alternative water source for the cultivation of saline organisms. This case study investigated the potential use of natural brines for the production of S. europaea. In the control group, which reflects the optimal growth conditions, fresh weight was increased, but there was no significant difference between the treatment groups comparing natural brines with artificial sea water. A similar pattern was observed for carotenoids and chlorophylls. Individual components showed significant differences. However, within treatments, there were mostly no changes. In summary, we showed that the influence of the different chloride concentrations was greater than that of the salt composition. Moreover, nutrient-enriched natural brine was demonstrated to be a suitable alternative for cultivation of S. europaea in terms of yield and nutritional quality. Thus, the present study provides the first evidence for the future potential of natural brine waters for the further development of aquaculture systems and saline agriculture in inland regions.
Sarcopenic obesity is increasingly found in youth, but its health consequences remain unclear.
Therefore, we studied the prevalence of sarcopenia and its association with cardiometabolic risk factors as well as muscular and cardiorespiratory fitness using data from the German Children's Health InterventionaL Trial (CHILT III) programme.
In addition to anthropometric data and blood pressure, muscle and fat mass were determined with bioelectrical impedance analysis.
Sarcopenia was classified via muscle-to-fat ratio. A fasting blood sample was taken, muscular fitness was determined using the standing long jump, and cardiorespiratory fitness was determined using bicycle ergometry. Of the 119 obese participants included in the analysis (47.1% female, mean age 12.2 years), 83 (69.7%) had sarcopenia. Affected individuals had higher gamma-glutamyl transferase, higher glutamate pyruvate transaminase, higher high-sensitivity C-reactive protein, higher diastolic blood pressure, and lower muscular and cardiorespiratory fitness (each p < 0.05) compared to participants who were 'only' obese.
No differences were found in other parameters. In our study, sarcopenic obesity was associated with various disorders in children and adolescents.
However, the clinical value must be tested with larger samples and reference populations to develop a unique definition and appropriate methods in terms of identification but also related preventive or therapeutic approaches.
Forest microclimate can buffer biotic responses to summer heat waves, which are expected to become more extreme under climate warming. Prediction of forest microclimate is limited because meteorological observation standards seldom include situations inside forests.
We use eXtreme Gradient Boosting (a machine learning technique) to predict the microclimate of forest sites in Brandenburg, Germany, using seasonal data comprising weather features.
The analysis was complemented by applying SHapley Additive exPlanations (SHAP) to show the interaction effects of variables and individualised feature attributions.
We evaluate model performance in comparison to artificial neural networks, random forest, support vector machine, and multi-linear regression.
After implementing a feature selection, an ensemble approach was applied to combine individual models for each forest and improve robustness over a given single prediction model.
The resulting model can be applied to translate climate change scenarios into temperatures inside forests to assess temperature-related ecosystem services provided by forests.
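The boosting principle behind eXtreme Gradient Boosting can be illustrated with a minimal sketch: stumps are fitted additively to the residuals of the running prediction. The code below is a pure-Python toy with hypothetical data, not the study's model, features, or ensemble setup.

```python
# Minimal gradient-boosting sketch with one-feature decision stumps.
# Illustrative only; the study's actual model and data are not reproduced.

def fit_stump(x, residuals):
    """Find the single split threshold minimising squared error on the residuals."""
    best = None
    for thr in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= thr]
        right = [r for xi, r in zip(x, residuals) if xi > thr]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda xi: lmean if xi <= thr else rmean

def boost(x, y, rounds=20, lr=0.5):
    """Additively fit stumps to residuals; return the ensemble predictor."""
    stumps, pred = [], [0.0] * len(y)
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

# Hypothetical data: open-air temperature (x) vs. forest-interior temperature (y).
x = [10, 12, 15, 20, 25, 30]
y = [9, 11, 13, 17, 20, 23]
model = boost(x, y)
```

Libraries such as xgboost add regularization, second-order gradients, and multi-feature trees on top of this additive scheme, and SHAP then attributes each prediction to the input features.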
Insights in electrosynthesis, target binding, and stability of peptide-imprinted polymer nanofilms
(2021)
Molecularly imprinted polymer (MIP) nanofilms have been successfully implemented for the recognition of different target molecules; however, the underlying mechanistic details remained vague.
This paper provides new insights in the preparation and binding mechanism of electrosynthesized peptide-imprinted polymer nanofilms for selective recognition of the terminal pentapeptides of the beta-chains of human adult hemoglobin, HbA, and its glycated form HbA1c.
To differentiate between peptides differing solely in a glucose adduct, MIP nanofilms were prepared by a two-step hierarchical electrosynthesis that involves first the chemisorption of a cysteinyl derivative of the pentapeptide, followed by electropolymerization of scopoletin.
This approach was compared with a random single-step electrosynthesis using scopoletin/pentapeptide mixtures. Electrochemical monitoring of the peptide binding to the MIP nanofilms by means of redox probe gating revealed a superior affinity of the hierarchical approach, with a Kd value of 64.6 nM towards the related target.
Changes in the electrosynthesized non-imprinted polymer and MIP nanofilms during chemical, electrochemical template removal and rebinding were substantiated in situ by monitoring the characteristic bands of both target peptides and polymer with surface enhanced infrared absorption spectroscopy.
This rational approach led to MIPs with excellent selectivity and provided key mechanistic insights with respect to electrosynthesis, rebinding and stability of the formed MIPs.
Enhanced charge selectivity via anodic-C60 layer reduces nonradiative losses in organic solar cells
(2021)
Interfacial layers in conjunction with suitable charge-transport layers can significantly improve the performance of optoelectronic devices by facilitating efficient charge carrier injection and extraction.
This work uses a neat C60 interlayer on the anode to experimentally reveal that surface recombination is a significant contributor to nonradiative recombination losses in organic solar cells.
These losses are shown to proportionally increase with the extent of contact between donor molecules in the photoactive layer and a molybdenum oxide (MoO3) hole extraction layer, proven by calculating voltage losses in low- and high-donor-content bulk heterojunction device architectures.
Using a novel in-device determination of the built-in voltage, the suppression of surface recombination, due to the insertion of a thin anodic-C60 interlayer on MoO3, is attributed to an enhanced built-in potential.
The increased built-in voltage reduces the presence of minority charge carriers at the electrodes: a new perspective on the principle of selective charge extraction layers.
The benefit to device efficiency is limited by a critical interlayer thickness, which depends on the donor material in bilayer devices.
Given the high popularity of MoO3 as an efficient hole extraction and injection layer and the increasingly popular discussion on interfacial phenomena in organic optoelectronic devices, these findings are relevant to and address different branches of organic electronics, providing insights for future device design.
Alpine glacial erosion exerts a first-order control on mountain topography and sediment production, but its mechanisms are poorly understood. Observational data capable of testing glacial erosion and transport laws in glacial models are mostly lacking. New insights, however, can be gained from detrital tracer thermochronology. Detrital tracer thermochronology works on the premise that thermochronometer bedrock ages vary systematically with elevation, and that detrital downstream samples can be used to infer the source elevation sectors of sediments. We analyze six new detrital samples of different grain sizes (sand and pebbles) from glacial deposits and the modern river channel integrated with data from 18 previously analyzed bedrock samples from an elevation transect in the Leones Valley, Northern Patagonian Icefield, Chile (46.7 degrees S). We present 622 new detrital zircon (U-Th)/He (ZHe) single-grain analyses and 22 new bedrock ZHe analyses for two of the bedrock samples to determine age reproducibility. Results suggest that glacial erosion was focused at and below the Last Glacial Maximum and neoglacial equilibrium line altitudes, supporting previous modeling studies. Furthermore, grain age distributions from different grain sizes (sand, pebbles) might indicate differences in erosion mechanisms, including mass movements at steep glacial valley walls. Finally, our results highlight complications and opportunities in assessing glacigenic environments, such as dynamics of sediment production, transport, transient storage, and final deposition, that arise from settings with large glacio-fluvial catchments.
Frequency-domain electromagnetic (FDEM) data are commonly inverted to characterize subsurface geoelectrical properties using smoothness constraints in 1D inversion schemes assuming a layered medium.
Smoothness constraints are suitable for imaging gradual transitions of subsurface geoelectrical properties caused, for example, by varying sand, clay, or fluid content. However, such inversion approaches are limited in characterizing sharp interfaces. Alternative regularizations based on the minimum gradient support (MGS) stabilizers can, instead, be used to promote results with different levels of smoothness/sharpness selected by simply acting on the so-called focusing parameter.
The MGS regularization has been implemented for different kinds of geophysical data inversion strategies. However, concerning FDEM data, the MGS regularization has only been implemented for vertically constrained inversion (VCI) approaches but not for laterally constrained inversion (LCI) approaches.
We present a novel LCI approach for FDEM data using the MGS regularization for the vertical and lateral direction. Using synthetic and field data examples, we demonstrate that our approach can efficiently and automatically provide a set of model solutions characterized by different levels of sharpness and variable lateral consistencies.
In terms of data misfit, the obtained set of solutions contains equivalent models allowing us also to investigate the non-uniqueness of FDEM data inversion.
The addition of nano-Al2O3 has been shown to enhance the breakdown voltage of epoxy resin, but reported flashover results remain disputed. This work concentrates on the surface charge variation and dc flashover performance of epoxy resin with nano-Al2O3 doping. The dispersion of nano-Al2O3 in epoxy is characterized by scanning electron microscopy (SEM) and atomic force microscopy (AFM). The dc flashover voltages of samples under either positive or negative polarity are measured with a finger-electrode system, and the surface charge variations before and after flashovers were identified from the surface potential mapping. The results show that nano-Al2O3 leads to a 16.9% voltage drop for the negative flashovers and a 6.8% drop for positive cases. It is found that a single flashover clears most of the accumulated surface charges, regardless of polarity. As a result, the ground electrode is neighbored by an equipotential zone enclosed with low-density heterocharges. The equipotential zone tends to be broadened after 20 flashovers. Nano-Al2O3 is found to be beneficial in downsizing the equipotential zone owing to its capability for charge migration, which helps maintain the flashover voltage at a high level after multiple flashovers. Hence, nano-Al2O3 plays a significant role in improving the resistance of epoxy to multiple flashovers.
In liquid-chromatography-tandem-mass-spectrometry-based proteomics, information about the presence and stoichiometry of protein modifications is not readily available. To overcome this problem, we developed multiFLEX-LF, a computational tool that builds upon FLEXIQuant, which detects modified peptide precursors and quantifies their modification extent by monitoring the differences between observed and expected intensities of the unmodified precursors. multiFLEX-LF relies on robust linear regression to calculate the modification extent of a given precursor relative to a within-study reference. multiFLEX-LF can analyze entire label-free discovery proteomics data sets in a precursor-centric manner without preselecting a protein of interest. To analyze modification dynamics and coregulated modifications, we hierarchically clustered the precursors of all proteins based on their computed relative modification scores. We applied multiFLEX-LF to a data-independent-acquisition-based data set acquired using the anaphase-promoting complex/cyclosome (APC/C) isolated at various time points during mitosis. The clustering of the precursors allows for identifying varying modification dynamics and ordering the modification events. Overall, multiFLEX-LF enables the fast identification of potentially differentially modified peptide precursors and the quantification of their differential modification extent in large data sets using a personal computer. Additionally, multiFLEX-LF can drive the large-scale investigation of the modification dynamics of peptide precursors in time-series and case-control studies. multiFLEX-LF is available at https://gitlab.com/SteenOmicsLab/multiflex-lf.
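The robust-regression idea that FLEXIQuant and multiFLEX-LF build on can be sketched as follows: fit a robust line of sample versus reference precursor intensities, then score each precursor by how far it falls below the fit. This is an illustrative simplification with made-up intensities, not the tool's actual implementation.

```python
# Sketch: Huber-weighted robust slope (no intercept) via iteratively
# reweighted least squares, then a relative-modification-style score.
# All data and thresholds here are hypothetical.

def robust_slope(ref, obs, iters=10, delta=1.0):
    """Slope of obs ~ slope * ref, downweighting large-residual outliers."""
    w = [1.0] * len(ref)
    slope = 1.0
    for _ in range(iters):
        num = sum(wi * r * o for wi, r, o in zip(w, ref, obs))
        den = sum(wi * r * r for wi, r in zip(w, ref))
        slope = num / den
        resid = [o - slope * r for r, o in zip(ref, obs)]
        w = [1.0 if abs(e) <= delta else delta / abs(e) for e in resid]
    return slope

def modification_scores(ref, obs):
    """Score ~ observed / expected intensity; values well below 1 suggest a
    loss of unmodified precursor, i.e. potential modification."""
    slope = robust_slope(ref, obs)
    return [o / (slope * r) for r, o in zip(ref, obs)]

ref = [10, 20, 30, 40]   # reference-run precursor intensities (hypothetical)
obs = [10, 20, 30, 8]    # last precursor loses most of its unmodified signal
scores = modification_scores(ref, obs)
```

Because the outlier precursor is downweighted, the fitted slope stays close to the trend of the unmodified precursors, and only the fourth precursor receives a low score.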
As the complexity of learning task requirements, computer infrastructures and knowledge acquisition for artificial neuronal networks (ANN) is increasing, it is challenging to talk about ANN without creating misunderstandings. An efficient, transparent and failure-free design of learning tasks by models is not supported by any tool at all. For this purpose, in particular the consideration of data, information and knowledge on the basis of an integration with knowledge-intensive business process models and a process-oriented knowledge management is attractive. With the aim of making the design of learning tasks expressible by models, this paper proposes a graphical modeling language called Neuronal Training Modeling Language (NTML), which allows the repetitive use of learning designs. An example ANN project of AI-based dynamic GUI adaptation exemplifies its use as a first demonstration.
Earthquake site responses or site effects are the modifications of seismic waves by the surface geology. How well can we predict the site effects (averaged over many earthquakes) at individual sites so far? To address this question, we tested and compared the effectiveness of different estimation techniques in predicting the outcrop Fourier site responses separated using the general inversion technique (GIT) from recordings. Techniques being evaluated are (a) the empirical correction to the horizontal-to-vertical spectral ratio of earthquakes (c-HVSR), (b) one-dimensional ground response analysis (GRA), and (c) the square-root-impedance (SRI) method (also called the quarter-wavelength approach). Our results show that c-HVSR can capture significantly more site-specific features in site responses than both GRA and SRI in the aggregate, especially at relatively high frequencies. c-HVSR achieves a "good match" in spectral shape at ~80%-90% of 145 testing sites, whereas GRA and SRI fail at most sites. GRA and SRI results have a high level of parametric and/or modeling errors which can be constrained, to some extent, by collecting on-site recordings.
A different class of refugee: university scholarships and developmentalism in late 1960s Africa
(2022)
Using documents assembled in connection with the 1967 Conference on the Legal, Economic and Social Aspects of African Refugee Problems, this article discusses African refugee higher-education discourses in the 1960s at the level of international organizations, volunteer agencies, and government representatives. Education and development history have recently been studied together, but this article focuses on the history of refugee higher education, which, it argues, needs to be understood within the development framework of human-capital theory, meant to support pan-African political concerns for a decolonized continent and merged with humanitarian arguments to create a hybrid form of humanitarian developmentalism. The article zooms in on higher-education scholarships, above all for refugees from Southern Africa, as a means of support for human-capital development. It shows that refugee higher education was both a result and a driver of increased international exchanges, as evidenced at the 1967 conference.
The 2020s are an essential decade for achieving the 2030 Agenda and its Sustainable Development Goals (SDGs). For this, SDG research needs to provide evidence that can be translated into concrete actions. However, studies use different SDG data, resulting in incomparable findings. Researchers primarily use SDG databases provided by the United Nations (UN), the World Bank Group (WBG), and the Bertelsmann Stiftung & Sustainable Development Solutions Network (BE-SDSN). We compile these databases into one unified SDG database and examine the effects of the data selection on our understanding of SDG interactions. Among the databases, we observed more different than similar SDG interactions. Differences in synergies and trade-offs mainly occur for SDGs that are environmentally oriented. Due to the increased data availability, the unified SDG database offers a more nuanced and reliable view of SDG interactions. Thus, the SDG data selection may lead to diverse findings, fostering actions that might neglect or exacerbate trade-offs.
Labor unions’ greatest potential for political influence likely arises from their direct connection to millions of individuals at the workplace. There, they may change the ideological positions of both unionizing workers and their non-unionizing management. In this paper, we analyze the workplace-level impact of unionization on workers’ and managers’ political campaign contributions over the 1980-2016 period in the United States. To do so, we link establishment-level union election data with transaction-level campaign contributions to federal and local candidates. In a difference-in-differences design that we validate with regression discontinuity tests and a novel instrumental variables approach, we find that unionization leads to a leftward shift of campaign contributions. Unionization increases the support for Democrats relative to Republicans not only among workers but also among managers, which speaks against an increase in political cleavages between the two groups. We provide evidence that our results are not driven by compositional changes of the workforce and are weaker in states with Right-to-Work laws where unions can invest fewer resources in political activities.
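The difference-in-differences logic used in the study can be made concrete with a toy calculation; the numbers below are purely hypothetical and are not the study's estimates.

```python
# Toy difference-in-differences calculation with hypothetical Democratic
# contribution shares before and after a union election.

share_pre_union, share_post_union = 0.50, 0.62  # workplaces that unionized
share_pre_ctrl, share_post_ctrl = 0.50, 0.54    # workplaces that did not

# DiD: change in the treated group minus change in the control group.
did = (share_post_union - share_pre_union) - (share_post_ctrl - share_pre_ctrl)
# did is roughly 0.08: the extra leftward shift attributable to unionization,
# under the parallel-trends assumption that both groups would otherwise have
# moved alike.
```

The regression version of this comparison additionally controls for workplace and time fixed effects, which is what the establishment-level panel design in the paper implements.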
Nocardioides alcanivorans sp. nov., a novel hexadecane-degrading species isolated from plastic waste
(2022)
Strain NGK65(T), a novel hexadecane degrading, non-motile, Gram-positive, rod-to-coccus shaped, aerobic bacterium, was isolated from plastic polluted soil sampled at a landfill.
Strain NGK65(T) hydrolysed casein, gelatin and urea, and was catalase-positive. It grew optimally at 28 degrees C, in 0-1% NaCl and at pH 7.5-8.0. Glycerol, D-glucose, arbutin, aesculin, salicin, potassium 5-ketogluconate, sucrose, acetate, pyruvate and hexadecane were used as sole carbon sources.
The predominant membrane fatty acids were iso-C16:0, followed by iso-C17:0 and C18:1 ω9c. The major polar lipids were phosphatidylglycerol, phosphatidylethanolamine, phosphatidylinositol and hydroxyphosphatidylinositol.
The cell-wall peptidoglycan type was A3γ, with LL-diaminopimelic acid and glycine as the diagnostic amino acids. MK-8(H4) was the predominant menaquinone. Phylogenetic analysis based on 16S rRNA gene sequences indicated that strain NGK65(T) belongs to the genus Nocardioides (phylum Actinobacteria), appearing most closely related to Nocardioides daejeonensis MJ31(T) (98.6%) and Nocardioides dubius KSL-104(T) (98.3%).
The genomic DNA G+C content of strain NGK65(T) was 68.2%.
Strain NGK65(T) and the type strains of species involved in the analysis had average nucleotide identity values of 78.3-71.9% as well as digital DNA-DNA hybridization values between 22.5 and 19.7%, which clearly indicated that the isolate represents a novel species within the genus Nocardioides.
Based on phenotypic and molecular characterization, strain NGK65(T) can clearly be differentiated from its phylogenetic neighbours to establish a novel species, for which the name Nocardioides alcanivorans sp. nov. is proposed.
The type strain is NGK65(T) (=DSM 113112(T)=NCCB 100846(T)).
R-Group stabilization in methylated formamides observed by resonant inelastic X-ray scattering
(2022)
The inherent stability of methylated formamides is traced to a stabilization of the deep-lying sigma-framework by resonant inelastic X-ray scattering at the nitrogen K-edge. Charge transfer from the amide nitrogen to the methyl groups underlies this stabilization mechanism, which leaves the aldehyde group essentially unaltered and explains the stability of secondary and tertiary amides.
In light of substantial new discoveries of hot subdwarfs by ongoing spectroscopic surveys and the availability of the Gaia mission Early Data Release 3 (EDR3), we compiled new releases of two catalogues of hot subluminous stars: the data release 3 (DR3) catalogue of the known hot subdwarf stars contains 6616 unique sources and provides multi-band photometry, and astrometry from Gaia EDR3 as well as classifications based on spectroscopy and colours.
This is an increase of 742 objects over the DR2 catalogue.
This new catalogue provides atmospheric parameters for 3087 stars and radial velocities for 2791 stars from the literature. In addition, we have updated the Gaia Data Release 2 (DR2) catalogue of hot subluminous stars using the improved accuracy of the Gaia EDR3 data set together with updated quality and selection criteria to produce the Gaia EDR3 catalogue of 61 585 hot subluminous stars, representing an increase of 21 785 objects.
The improvements in Gaia EDR3 astrometry and photometry compared to Gaia DR2 have enabled us to define more sophisticated selection functions.
In particular, we improved hot subluminous star detection in the crowded regions of the Galactic plane as well as in the direction of the Magellanic Clouds by including sources with close apparent neighbours but with flux levels that dominate the neighbourhood.
Methane (CH4) from aquatic ecosystems contributes to about half of total global CH4 emissions to the atmosphere. Until recently, aquatic biogenic CH4 production was exclusively attributed to methanogenic archaea living under anoxic or suboxic conditions in sediments, bottom waters, and wetlands. However, evidence for oxic CH4 production (OMP) in freshwater, brackish, and marine habitats is increasing. Possible sources were found to be driven by various planktonic organisms supporting different OMP mechanisms. Surprisingly, submerged macrophytes have been fully ignored in studies on OMP, yet they are key components of littoral zones of ponds, lakes, and coastal systems. High CH4 concentrations in these zones have been attributed to organic substrate production promoting classic methanogenesis in the absence of oxygen. Here, we review existing studies and argue that, similar to terrestrial plants and phytoplankton, macroalgae and submerged macrophytes may directly or indirectly contribute to CH4 formation in oxic waters. We propose several potential direct and indirect mechanisms: (1) direct production of CH4; (2) production of CH4 precursors and facilitation of their bacterial breakdown or chemical conversion; (3) facilitation of classic methanogenesis; and (4) facilitation of CH4 ebullition. As submerged macrophytes occur in many freshwater and marine habitats, they are important in global carbon budgets and can strongly vary in their abundance due to seasonal and boom-bust dynamics. Knowledge on their contribution to OMP is therefore essential to gain a better understanding of spatial and temporal dynamics of CH4 emissions and thus to substantially reduce current uncertainties when estimating global CH4 emissions from aquatic ecosystems.
In the present paper we empirically investigate the psychometric properties of some of the most famous statistical and logical cognitive illusions from the "heuristics and biases" research program by Daniel Kahneman and Amos Tversky, who nearly 50 years ago introduced fascinating brain teasers such as the famous Linda problem, the Wason card selection task, and so-called Bayesian reasoning problems (e.g., the mammography task). In the meantime, a great number of articles have been published that empirically examine single cognitive illusions, theoretically explain people's faulty thinking, or propose and experimentally implement measures to foster insight and to make these problems accessible to the human mind. Yet these problems have thus far usually been empirically analyzed on an individual-item level only (e.g., by experimentally comparing participants' performance on various versions of one of these problems). In this paper, by contrast, we examine these illusions as a group and look at the ability to solve them as a psychological construct. Based on a sample of N = 2,643 Luxembourgish school students of age 16-18, we investigate the internal psychometric structure of these illusions (i.e., Are they substantially correlated? Do they form a reflective or a formative construct?), their connection to related constructs (e.g., Are they distinguishable from intelligence or mathematical competence in a confirmatory factor analysis?), and the question of which of a person's abilities can predict the correct solution of these brain teasers (by means of a regression analysis).
In the past years, work-time in many industries has become more flexible, opening up a new channel for intertemporal substitution: workers might, instead of saving, adjust their work-time to smooth consumption. To study this channel, we set up a two-period consumption/saving model with wage uncertainty. This extends the standard saving model by also allowing a worker to allocate a fixed time budget between two work-shifts. To test the comparative statics implied by these two different channels, we conduct a laboratory experiment. A novel feature of our experiments is that we tie income to a real-effort style task. In four treatments, we turn on and off the two channels for consumption smoothing: saving and time allocation. Our main finding is that savings are strictly positive for at least 85 percent of subjects. We find that a majority of subjects also uses time allocation to smooth consumption and use saving and time shifting as substitutes, though not perfect substitutes. Part of the observed heterogeneity of precautionary behavior can be explained by risk preferences and motivations different from expected utility maximization.
It’s personal
(2021)
The new technologies of the Fourth Industrial Revolution (4IR) are disrupting traditional models of work and learning. While the impact of digitalization on education was already a point of serious deliberation, the COVID-19 pandemic has expedited ongoing transitions. With 90% of the world's student population having been impacted by national lockdowns, online learning has gone from being a luxury to a necessity, in a context where around 3.6 billion people are offline. As the impacts of the 4IR unfold alongside the current crisis, it is not enough for future policy pathways to prioritize educational attainment in the traditional sense; it is essential to reimagine both education itself and its delivery entirely. Future policy narratives will need to evaluate the very process of learning and identify the ways in which technology can help reduce existing disparities and enhance digital access, literacy and fluency in a scalable manner. In this context, this chapter analyses the status quo of online learning in India and Germany. Drawing on the experiences of these two economies with distinct trajectories of digitalization, the chapter explores how new technologies intersect with traditional education and local sociocultural conditions. Further, the limitations and opportunities presented by dominant ed-tech models are critically analyzed against the backdrop of the ongoing COVID-19 pandemic.
Simple and robust
(2021)
Some 7562 publications on Molecularly Imprinted Polymers (MIPs) have appeared in the literature within the last ten years (Scopus, September 7, 2020). Around 10 % of the papers published on MIPs describe the recognition of proteins. The straightforward synthesis of MIPs is a significant advantage compared with the preparation of enzymes or antibodies. MIPs have been synthesized from only one and up to six functional monomers, while proteins are made up of 20 natural amino acids. Furthermore, they can be synthesized against structures of low immunogenicity and allow multi-analyte measurements via multi-target synthesis. Electrochemical methods allow simple polymer synthesis, removal of the template and readout. Among the different sensor configurations, electrochemical MIP-sensors provide the broadest spectrum of protein analytes. The sensitivity of MIP-sensors is sufficiently high for biomarkers in the sub-nanomolar region; nevertheless, cross-reactivity with highly abundant proteins in human serum is still a challenge. MIPs for proteins offer innovative tools not only for clinical and environmental analysis, but also for bioimaging, therapy and protein engineering.
CovRadar
(2022)
The ongoing pandemic caused by SARS-CoV-2 emphasizes the importance of genomic surveillance to understand the evolution of the virus, to monitor the viral population, and to plan epidemiological responses. Detailed analysis, easy visualization and intuitive filtering of the latest viral sequences are powerful tools for this purpose. We present CovRadar, a tool for genomic surveillance of the SARS-CoV-2 Spike protein. CovRadar consists of an analytical pipeline and a web application that enable the analysis and visualization of hundreds of thousands of sequences. First, CovRadar extracts the regions of interest using local alignment, then builds a multiple sequence alignment, infers variants and a consensus sequence, and finally presents the results in an interactive app, making accessing and reporting simple, flexible and fast.
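The consensus-and-variant step of such a pipeline can be sketched in a few lines. This is an illustrative toy, assuming already-aligned sequences of equal length with `-` for gaps; the function names are hypothetical and not CovRadar's actual API.

```python
# Toy sketch: per-column majority consensus and variant calls on an
# alignment (illustrative only, not CovRadar's implementation).
from collections import Counter

def consensus(aligned_seqs):
    """Return the majority residue of each alignment column."""
    columns = zip(*aligned_seqs)
    return "".join(Counter(col).most_common(1)[0][0] for col in columns)

def call_variants(reference, query):
    """List (1-based position, ref, alt) mismatches against the reference."""
    return [(i + 1, r, q)
            for i, (r, q) in enumerate(zip(reference, query))
            if r != q and q != "-"]

aligned = ["MFVFLVLLPL", "MFVFLVLLPL", "MFVFLILLPL"]
ref = consensus(aligned)                    # majority sequence
variants = call_variants(ref, aligned[2])   # e.g. a V->I substitution at column 6
```

A real pipeline would additionally handle ambiguous bases, insertions relative to the reference, and per-position coverage before reporting variants.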
We and AI
(2021)
Many phenomena of high relevance for economic development such as human capital, geography and climate vary considerably within countries as well as between them. Yet, global data sets of economic output are typically available at the national level only, thereby limiting the accuracy and precision of insights gained through empirical analyses. Recent work has used interpolation and downscaling to yield estimates of sub-national economic output at a global scale, but respective data sets based on official, reported values only are lacking. We here present DOSE — the MCC-PIK Database Of Sub-national Economic Output. DOSE contains harmonised data on reported economic output from 1,661 sub-national regions across 83 countries from 1960 to 2020. To avoid interpolation, values are assembled from numerous statistical agencies, yearbooks and the literature and harmonised for both aggregate and sectoral output. Moreover, we provide temporally- and spatially-consistent data for regional boundaries, enabling matching with geo-spatial data such as climate observations. DOSE provides the opportunity for detailed analyses of economic development at the subnational level, consistent with reported values.
It is well-documented that academic achievement is associated with students' self-perceptions of their academic abilities, that is, their academic self-concepts. However, low-achieving students may apply self-protective strategies to maintain a favorable academic self-concept when evaluating their academic abilities. Consequently, the relation between achievement and academic self-concept might not be linear across the entire achievement continuum. Capitalizing on representative data from three large-scale assessments (i.e., TIMSS, PIRLS, PISA; N = 470,804), we conducted an integrative data analysis to address nonlinear trends in the relations between achievement and the corresponding self-concepts in mathematics and the verbal domain across 13 countries and 2 age groups (i.e., elementary and secondary school students). Polynomial and interrupted regression analyses showed nonlinear relations in secondary school students, demonstrating that the relations between achievement and the corresponding self-concepts were weaker for lower achieving students than for higher achieving students. Nonlinear effects were also present in younger students, but the pattern of results was rather heterogeneous. We discuss implications for theory as well as for the assessment and interpretation of self-concept.
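The interrupted-regression logic (separate slopes fitted below and above an achievement cutoff, then compared) can be illustrated with a toy sketch; the helper names and the cutoff value are hypothetical and do not reproduce the models estimated in the study.

```python
# Toy sketch of interrupted regression: fit separate OLS slopes for
# observations below and above a cutoff and compare them.
def ols_slope(xs, ys):
    """Simple least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def segment_slopes(achievement, self_concept, cutoff):
    """Return (slope below cutoff, slope at/above cutoff)."""
    pairs = list(zip(achievement, self_concept))
    low = [(x, y) for x, y in pairs if x < cutoff]
    high = [(x, y) for x, y in pairs if x >= cutoff]
    return ols_slope(*zip(*low)), ols_slope(*zip(*high))

# Weaker achievement/self-concept relation below the cutoff, stronger above:
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.0, 2.1, 2.1, 2.2, 3.0, 4.0, 5.0, 6.0]
low_b, high_b = segment_slopes(x, y, cutoff=5)
```

A markedly smaller slope in the lower segment is the kind of pattern the abstract describes for secondary school students.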
Income inequality and taxes
(2023)
Economic literature offers several distinct explanations for the rising income inequality observed in several countries. In the debate about the causes of inequality, a growing strand of research focuses on the effects of taxation on income inequality. We contribute to this literature by providing a systematic empirical account of the relationship between income inequality and personal income taxation (PIT) for a set of countries over the period 1981–2005. In order to take alternative explanations into account and to isolate the effects of tax progressivity, we include a wide range of control variables. We address potential reverse causality between inequality and PIT by using the variation in tax schedules of neighbouring countries. Our results confirm a statistically significant negative association between the progressivity of PIT and income inequality. Overall, we find that especially the average and the marginal tax rate have the potential to reduce income inequality. This finding is qualitatively robust across various empirical specifications.
Although the literature on the determinants of training has considered individual and firm-related characteristics, it has generally neglected regional factors. This is surprising, given that labour markets differ across regions. Regional factors are often ignored because (both in Germany and abroad) many data sets covering training information do not include detailed geographical identifiers that would allow a merging of information on the regional level. The regional identifiers of the National Educational Panel Study (Starting Cohort 6) offer opportunities to advance research on several regional factors. This article summarizes the results from two studies that exploit these unique opportunities to investigate the relationship between training participation and (a) the local level of firm competition for workers within specific sectors of the economy and (b) the regional supply of training, measured as the number of firms offering courses or seminars for potential training participants.
Personal data increasingly serve as inputs to public goods. Like other types of contributions to public goods, personal data are likely to be underprovided. We investigate whether classical remedies to underprovision are also applicable to personal data and whether the privacy-sensitive nature of personal data must be additionally accounted for. In a randomized field experiment on a public online education platform, we prompt users to complete their profiles with personal information. Compared to a control message, we find that making public benefits salient increases the number of personal data contributions significantly. This effect is even stronger when additionally emphasizing privacy protection, especially for sensitive information. Our results further suggest that emphasis on both public benefits and privacy protection attracts personal data from a more diverse set of contributors.
Does loss aversion apply to social image concerns? In a laboratory experiment, we first induce social image in a relevant domain, intelligence, through public ranking. In a second stage, subjects experience a change in rank and are offered scope for lying to improve their final, also publicly reported rank. Subjects who care about social image and experience a decline in rank lie more than those experiencing gains. Moreover, we document a discontinuity in lying behavior when moving from rank losses to gains. Our results are in line with loss aversion in social image concerns.
From learners to educators
(2020)
The rapid growth of technology and its evolving potential to support the transformation of teaching and learning in post-secondary institutions is a major challenge to the basic understanding of both the university and the communities it serves. In higher education, the standard forms of learning and teaching are increasingly being challenged and a more comprehensive process of differentiation is taking place. Student-centered teaching methods are becoming increasingly important in course design, and the role of the lecturer is changing from knowledge mediator to moderator and learning companion. However, this is accelerating the need for strategically planned faculty support and a reassessment of the role of teaching and learning. Even though the benefits of experience-based learning approaches for the development of life skills are well known, most knowledge transfer in higher education is still realized through lectures. Teachers aim to design the curriculum and new assignments and to share insights into evolving pedagogy. Student engagement could be the most important factor in the learning success of university students, regardless of the university program or teaching format. Against this background, this article presents the development, application, and initial findings of an innovative learning concept. In this concept, students engage with a scientific topic, but instead of giving a presentation and submitting a written elaboration, their examination consists of developing an online course, in terms of content, didactics, and concept, and implementing it in a state-of-the-art learning environment. The online courses include both self-created teaching material and interactive tasks. After a review process, the courses are made available to other students as learning material and are thus incorporated into the curriculum.
Atwood analyzes the effects of the 1963 U.S. measles vaccination on long-run labor market outcomes, using a generalized difference-in-differences approach. We reproduce the results of this paper and perform a battery of robustness checks. Overall, we confirm that the measles vaccination had positive labor market effects. While the negative effect on the likelihood of living in poverty and the positive effect on the probability of being employed are very robust across the different specifications, the headline estimate—the effect on earnings—is more sensitive to the exclusion of certain regions and survey years.
Using data from the German Socio-Economic Panel and exploiting the staggered implementation of a compulsory schooling reform in West Germany, this article finds that an additional year of schooling lowers the probability of being very concerned about immigration to Germany by around six percentage points (20 percent). Furthermore, our findings imply significant spillovers from maternal education to immigration attitudes of her offspring. While we find no evidence for returns to education within a range of labor market outcomes, higher social trust appears to be an important mechanism behind our findings.