Caenorhabditis elegans (C. elegans) is a model organism that has been increasingly used in health and environmental toxicity assessments. The quantification of ionic species in vivo can assist studies that seek to relate exposure concentration to possible biological effects.
Therefore, this study is the first to propose a method of quantitative analysis of 21 ions by ion chromatography (IC), which can be applied in different toxicity studies in C. elegans.
The developed method was validated for 12 anionic species (fluoride, acetate, chloride, nitrite, bromide, nitrate, sulfate, oxalate, molybdate, dichromate, phosphate, and perchlorate), and 9 cationic species (lithium, sodium, ammonium, thallium, potassium, magnesium, manganese, calcium, and barium).
The method showed no interference from coexisting species, with R² varying between 0.9991 and 0.9999 and a linear range from 1 to 100 μg L⁻¹.
Limit of detection (LOD) and limit of quantification (LOQ) values ranged from 0.2319 to 1.7160 μg L⁻¹ and from 0.7028 to 5.1999 μg L⁻¹, respectively.
The intraday and interday precision tests showed a relative standard deviation (RSD) below 10.0% and recovery ranging from 71.0% to 118.0%, with a maximum RSD of 5.5%.
The method was applied to real samples of C. elegans treated with 200 μM thallium acetate solution, determining the uptake and bioaccumulated Tl⁺ content during acute exposure.
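Validation figures like the LOD, LOQ, and RSD values above follow standard analytical-chemistry formulas. Below is a minimal sketch assuming the common ICH-style estimates (LOD = 3.3σ/S and LOQ = 10σ/S, with σ the blank standard deviation and S the calibration slope); all numbers are purely illustrative, not the study's data:

```python
import math

def lod_loq(sd_blank, slope):
    """ICH-style estimates: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    return 3.3 * sd_blank / slope, 10.0 * sd_blank / slope

def rsd_percent(values):
    """Relative standard deviation: sample SD as a percentage of the mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return math.sqrt(var) / mean * 100.0

# Hypothetical calibration: blank SD 0.07 ug/L, slope 1.0 (area units per ug/L)
lod, loq = lod_loq(sd_blank=0.07, slope=1.0)
print(round(lod, 3), round(loq, 3))  # 0.231 0.7
```

Intraday and interday precision would then be the RSD of repeated injections at a given concentration level.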
Since 2013, the Committee on Economic, Social and Cultural Rights can examine individual communications under the Optional Protocol to the International Covenant on Economic, Social and Cultural Rights (ICESCR). This opens up the possibility to interpret Covenant provisions in a thorough manner. With regard to forced evictions and the right to housing under Article 11 ICESCR, one can discern a fast-developing approach concerning the proportionality analysis of evictions, entailing the establishment of specific criteria that may guide such analysis. This paper seeks to delineate these developments and also sheds light on possible general trends on the topic of limitations within the Committee’s emerging jurisprudence. In doing so, the paper addresses whether, and how, the developing proportionality analysis under the individual complaints procedure takes into consideration multi-discriminatory dimensions of State measures and how it specifically relates to or incorporates other ICESCR concepts, such as minimum core obligations or the reasonableness review under Article 8(4) OP ICESCR.
The conception of property at the basis of Hegel’s conception of abstract right seems committed to a problematic form of “possessive individualism.” It seems to conceive of right as the expression of human mastery over nature and as based upon an irreducible opposition of person and nature, rightful will, and rightless thing. However, this chapter argues that Hegel starts with a form of possessive individualism only to show that it undermines itself. This is evident in the way Hegel unfolds the nature of property as it applies to external things as well as in the way he explains our self-ownership of our own bodies and lives. Hegel develops the idea of property to a point where it reaches a critical limit and encounters the “true right” that life possesses against the “formal” and “abstract right” of property. Ultimately, Hegel’s account suggests that nature should precisely not be treated as a rightless object at our arbitrary disposal but acknowledged as the inorganic body of right.
In his 1844 Economic and Philosophic Manuscripts, Marx famously claims that the human being is or has a ‘Gattungswesen.’ This is often understood to mean that the human being is a ‘species-being’ and is determined by a given ‘species-essence.’ In this chapter, I argue that this reading is mistaken. What Marx calls Gattungswesen is precisely not a ‘species-being,’ but a being that, in a very specific sense, transcends the limits of its own given species. This different understanding of the genus-character of the human being opens up a new perspective on the naturalism of the early Marx. He is not informed by a problematic speciesist and essentialist naturalism, as is often assumed, but by a different form of naturalism which I propose to call ‘dialectical naturalism.’ The chapter starts (I) by developing Hegel’s account of genus which provides us with a useful background for (II) understanding Marx’s original notion of a genus-being and its practical, social, developmental character. In the last section, I show that (III) the actualization of our genus-being thus depends on the production of a specific type of ‘second nature’ that is at the heart of Marx’s dialectical naturalism.
The art of second nature
(2022)
Symbiotic X-ray binaries are systems hosting a neutron star accreting from the wind of a late-type companion. These are rare objects, and so far only a handful of them are known. One of the most puzzling aspects of symbiotic X-ray binaries is the possibility that they contain strongly magnetized neutron stars. These are expected to be evolutionarily much younger than their evolved companions and could thus be formed through the (yet poorly known) accretion-induced collapse of a white dwarf. In this paper, we perform broad-band X-ray and soft gamma-ray spectroscopy of two known symbiotic binaries, Sct X-1 and 4U 1700+24, looking for cyclotron scattering features that could confirm the presence of strongly magnetized NSs. We exploited available Chandra, Swift, and NuSTAR data. We find no evidence of cyclotron resonant scattering features (CRSFs) in the case of Sct X-1, but in the case of 4U 1700+24 we suggest the presence of a possible CRSF at ~16 keV and its first harmonic at ~31 keV, although we could not exclude alternative spectral models for the broad-band fit. If confirmed by future observations, 4U 1700+24 could be the second symbiotic X-ray binary with a highly magnetized accretor. We also report on our long-term monitoring of the last discovered symbiotic X-ray binary, IGR J17329-2731, performed with Swift/XRT. The monitoring revealed that, as predicted, in 2017 this object became a persistent and variable source, showing X-ray flares lasting for a few days and intriguing obscuration events that are interpreted in the context of clumpy wind accretion.
Drought and the availability of mineable phosphorus minerals used for fertilization are two of the important issues agriculture faces in the future. High phosphorus availability in soils is necessary to maintain high agricultural yields, while drought is one of the major threats to terrestrial ecosystem performance and future crop production. Among the measures proposed to cope with intensifying drought stress and to decrease the need for phosphorus fertilizer application is fertilization with silica (Si). Here we tested the importance of soil Si fertilization for wheat phosphorus concentration and wheat performance during drought at the field scale. Our data clearly showed higher soil moisture in the Si-fertilized plots. This higher soil moisture contributes to better plant performance in terms of higher photosynthetic activity, later senescence, and faster stomatal responses, ensuring higher productivity during drought periods. The plant phosphorus concentration was also higher in Si-fertilized compared to control plots. Overall, Si fertilization, or management of the soil Si pools, seems to be a promising tool to maintain crop production under the predicted longer and more severe droughts of the future while reducing phosphorus fertilizer requirements.
Non-fullerene acceptors (NFAs) are far more emissive than their fullerene-based counterparts. Here, we study the spectral properties of photocurrent generation and recombination of the blend of the donor polymer PM6 with the NFA Y6. We find that the radiative recombination of free charges is almost entirely due to the re-occupation and decay of Y6 singlet excitons, but that this pathway contributes less than 1% to the total recombination. As such, the open-circuit voltage of the PM6:Y6 blend is determined by the energetics and kinetics of the charge-transfer (CT) state. Moreover, we find that no information on the energetics of the CT state manifold can be gained from the low-energy tail of the photovoltaic external quantum efficiency spectrum, which is dominated by the excitation spectrum of the Y6 exciton. We, finally, estimate the charge-separated state to lie only 120 meV below the Y6 singlet exciton energy, meaning that this blend indeed represents a high-efficiency system with a low energetic offset.
Identification of protein complexes from protein-protein interaction (PPI) networks is a key problem in PPI mining, typically solved by parameter-dependent approaches that suffer from small recall rates. Here we introduce GCC-v, a family of efficient, parameter-free algorithms to accurately predict protein complexes using the (weighted) clustering coefficient of proteins in PPI networks. Through comparative analyses with gold standards and PPI networks from Escherichia coli, Saccharomyces cerevisiae, and Homo sapiens, we demonstrate that GCC-v outperforms twelve state-of-the-art approaches for the identification of protein complexes with respect to twelve performance measures in at least 85.71% of scenarios. We also show that GCC-v exactly recovers ~35% of protein complexes in a pan-plant PPI network and discovers 144 new protein complexes in Arabidopsis thaliana, with high support from GO semantic similarity. Our results indicate that findings from GCC-v are robust to network perturbations, which has direct implications for assessing the impact of PPI network quality on the predicted protein complexes.
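GCC-v builds on the (weighted) clustering coefficient of proteins in the PPI network. A minimal unweighted sketch of that underlying quantity on a toy graph (GCC-v itself and its weighting scheme are not reproduced here):

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient of node v in an undirected graph:
    the fraction of pairs of v's neighbours that also interact."""
    ns = sorted(adj[v])
    k = len(ns)
    if k < 2:
        return 0.0
    links = sum(1 for i, a in enumerate(ns) for b in ns[i + 1:] if b in adj[a])
    return 2.0 * links / (k * (k - 1))

# Toy PPI graph: a fully connected triad A-B-C plus a pendant protein D.
ppi = {"A": {"B", "C", "D"}, "B": {"A", "C"}, "C": {"A", "B"}, "D": {"A"}}
print(clustering_coefficient(ppi, "B"))  # 1.0 (B's neighbours A and C interact)
```

Densely interconnected neighbourhoods (high coefficients) are the natural candidates for complex membership.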
Deliberative and paternalistic interaction styles for conversational agents in digital health
(2021)
Background:
Recent years have witnessed a constant increase in the number of people with chronic conditions requiring ongoing medical support in their everyday lives. However, global health systems are not adequately equipped for this extraordinarily time-consuming and cost-intensive development. Here, conversational agents (CAs) can offer easily scalable and ubiquitous support. Yet, different aspects of CAs have not been sufficiently investigated to fully exploit their potential. One such trait is the interaction style between patients and CAs. In human-to-human settings, the interaction style is an imperative part of the interaction between patients and physicians. Patient-physician interaction is recognized as a critical success factor for patient satisfaction, treatment adherence, and subsequent treatment outcomes. However, so far, it remains effectively unknown how different interaction styles can be implemented into CA interactions and whether these styles are recognizable by users.
Objective:
The objective of this study was to develop an approach to reproducibly induce 2 specific interaction styles into CA-patient dialogs and subsequently test and validate them in a chronic health care context.
Methods:
On the basis of the Roter Interaction Analysis System and iterative evaluations by scientific experts and medical health care professionals, we identified 10 communication components that characterize the 2 developed interaction styles: deliberative and paternalistic interaction styles. These communication components were used to develop 2 CA variations, each representing one of the 2 interaction styles. We assessed them in a web-based between-subject experiment. The participants were asked to put themselves in the position of a patient with chronic obstructive pulmonary disease. These participants were randomly assigned to interact with one of the 2 CAs and subsequently asked to identify the respective interaction style. Chi-square test was used to assess the correct identification of the CA-patient interaction style.
Results:
A total of 88 individuals (42/88, 48% female; mean age 31.5 years, SD 10.1 years) fulfilled the inclusion criteria and participated in the web-based experiment. The participants in both the paternalistic and deliberative conditions correctly identified the underlying interaction styles of the CAs in more than 80% of the assessments (χ²₁=38.2; P<.001; φ=0.68). The validation of the procedure was hence successful.
Conclusions:
We developed an approach that is tailored for a medical context to induce a paternalistic and deliberative interaction style into a written interaction between a patient and a CA. We successfully tested and validated the procedure in a web-based experiment involving 88 participants. Future research should implement and test this approach among actual patients with chronic diseases and compare the results in different medical conditions. This approach can further be used as a starting point to develop dynamic CAs that adapt their interaction styles to their users.
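The reported χ² test and its φ effect size can be reproduced in a few lines. A sketch with hypothetical counts (not the study's raw data), assuming a chance-level 50/50 expectation:

```python
import math

def chi2_gof(observed, expected):
    """Pearson chi-square goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def phi_coefficient(chi2, n):
    """Effect size for a 1-df chi-square test: phi = sqrt(chi2 / N)."""
    return math.sqrt(chi2 / n)

# Hypothetical split: 74 of 88 participants identify the style correctly,
# tested against a chance-level 50/50 expectation.
chi2 = chi2_gof([74, 14], [44, 44])
print(round(chi2, 1), round(phi_coefficient(chi2, 88), 2))  # 40.9 0.68
```

A φ around 0.68, as reported, corresponds to a large effect by conventional benchmarks.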
Pathogens and animal pests (P&A) are a major threat to global food security as they directly affect the quantity and quality of food. The Southern Amazon, Brazil's largest domestic region for soybean, maize and cotton production, is particularly vulnerable to the outbreak of P&A due to its (sub)tropical climate and intensive farming systems. However, little is known about the spatial distribution of P&A and the related yield losses. Machine learning approaches for the automated recognition of plant diseases can help to overcome this research gap. The main objectives of this study are to (1) evaluate the performance of Convolutional Neural Networks (ConvNets) in classifying P&A, (2) map the spatial distribution of P&A in the Southern Amazon, and (3) quantify perceived yield and economic losses for the main soybean and maize P&A. The objectives were addressed by making use of data collected with the smartphone application Plantix. The core of the app's functioning is the automated recognition of plant diseases via ConvNets. Data on expected yield losses were gathered through a short survey included in an "expert" version of the application, which was distributed among agronomists. Between 2016 and 2020, Plantix users collected approximately 78,000 georeferenced P&A images in the Southern Amazon. The study results indicate a high performance of the trained ConvNets in classifying 420 different crop-disease combinations. Spatial distribution maps and expert-based yield loss estimates indicate that maize rust, bacterial stalk rot and the fall armyworm are among the most severe maize P&A, whereas soybean is mainly affected by P&A like anthracnose, downy mildew, frogeye leaf spot, stink bugs and brown spot. Perceived soybean and maize yield losses amount to 12 and 16%, respectively, resulting in annual yield losses of approximately 3.75 million tonnes for each crop and economic losses of US$2 billion for both crops together. 
The high level of accuracy of the trained ConvNets, when paired with widespread use from following a citizen-science approach, results in a data source that will shed new light on yield loss estimates, e.g., for the analysis of yield gaps and the development of measures to minimise them.
The correct orientation of seismic sensors is critical for studies such as full moment tensor inversion, receiver function analysis, and shear-wave splitting. Therefore, the orientation of horizontal components needs to be checked and verified systematically. This study relies on two different waveform-based approaches to assess the sensor orientations of the broadband network of the Kandilli Observatory and Earthquake Research Institute (KOERI). The network is an important backbone for seismological research in the Eastern Mediterranean Region and provides a comprehensive seismic data set for the North Anatolian fault. In recent years, this region has become a worldwide field laboratory for continental transform faults. A systematic survey of the sensor orientations of the entire network, as presented here, facilitates related seismic studies. We apply two independent orientation tests, based on the polarization of P waves and Rayleigh waves, to 123 broadband seismic stations, covering a period of 15 yr (2004-2018). For 114 stations, we obtain stable results with both methods. Approximately 80% of the results agree with each other within 10 degrees. Both methods indicate that about 40% of the stations are misoriented by more than 10 degrees; among these, 20 stations are misoriented by more than 20 degrees. We observe temporal changes of sensor orientation that coincide with maintenance work or instrument replacement. We provide time-dependent sensor misorientation correction values for the KOERI network in the supplemental material.
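The P-wave polarization test rests on recovering the horizontal polarization azimuth from the covariance of the N and E components and comparing it with the known event back-azimuth. A simplified single-event sketch on synthetic data (the published workflow aggregates many events with quality criteria):

```python
import math

def polarization_azimuth(n, e):
    """Dominant horizontal polarization azimuth (degrees from north, with a
    180-degree ambiguity), from the principal axis of the 2x2 horizontal
    covariance of the N and E component samples."""
    snn = sum(x * x for x in n)
    see = sum(x * x for x in e)
    sne = sum(x * y for x, y in zip(n, e))
    return math.degrees(0.5 * math.atan2(2.0 * sne, snn - see)) % 180.0

# Synthetic P arrival polarized along a 40-degree back-azimuth, recorded on a
# sensor misoriented clockwise by 25 degrees (all values hypothetical).
true_baz, misorientation = 40.0, 25.0
s = [math.sin(0.3 * t) * math.exp(-0.05 * t) for t in range(100)]
a = math.radians(true_baz - misorientation)  # apparent azimuth on the sensor
n = [x * math.cos(a) for x in s]
e = [x * math.sin(a) for x in s]
est = polarization_azimuth(n, e)
print(round(true_baz - est, 1))  # 25.0 -- the recovered misorientation
```

In practice, the 180-degree ambiguity is resolved with the P-wave vertical component, and station estimates are medians over many events.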
The first detections of black hole-neutron star mergers (GW200105 and GW200115) by the LIGO-Virgo-KAGRA Collaboration mark a significant scientific breakthrough. The physical interpretation of pre- and postmerger signals requires careful cross-examination between observational and theoretical modelling results. Here we present the first set of black hole-neutron star simulations obtained with the numerical-relativity code BAM. Our initial data are constructed using the public LORENE spectral library, which employs an excision of the black hole interior. BAM, in contrast, uses the moving-puncture gauge for the evolution. Therefore, we need to "stuff" the black hole interior with smooth initial data to evolve the binary system in time. This procedure introduces constraint violations, so the constraint damping properties of the evolution system are essential to increase the accuracy of the simulation and, in particular, to reduce spurious center-of-mass drifts. Within BAM we evolve the Z4c equations, and we compare our gravitational-wave results with those of the SXS collaboration and with results obtained with the SACRA code. While we find generally good agreement with the reference solutions and phase differences ≲0.5 rad at the moment of merger, the absence of a clean convergence order in our simulations does not allow for a proper error quantification. We finally present a set of different initial conditions to explore how the merger of black hole-neutron star systems depends on the involved masses, spins, and equations of state.
Water bodies are a highly abundant feature of Arctic permafrost ecosystems and strongly influence their hydrology, ecology and biogeochemical cycling. While very high resolution satellite images enable detailed mapping of these water bodies, the increasing availability and abundance of this imagery calls for fast, reliable and automatized monitoring. This technical work presents a largely automated and scalable workflow that removes image noise, detects water bodies, removes potential misclassifications from infrastructural features, derives lake shoreline geometries and retrieves their movement rate and direction on the basis of ortho-ready very high resolution satellite imagery from Arctic permafrost lowlands. We applied this workflow to typical Arctic lake areas on the Alaska North Slope and achieved a successful and fast detection of water bodies. We derived representative values for shoreline movement rates ranging from 0.40 to 0.56 m yr⁻¹ for lake sizes of 0.10 to 23.04 ha. The approach also gives an insight into seasonal water level changes. Based on an extensive quantification of error sources, we discuss how the results of the automated workflow can be further enhanced by incorporating additional information on weather conditions and image metadata and by improving the input database. The workflow is suitable for the seasonal to annual monitoring of lake changes on a sub-meter scale in the study areas in northern Alaska and can readily be scaled for application across larger regions within certain accuracy limitations.
This paper sheds new light on the role of communication for cartel formation. Using machine learning to evaluate free-form chat communication among firms in a laboratory experiment, we identify typical communication patterns for both explicit cartel formation and indirect attempts to collude tacitly. We document that firms are less likely to communicate explicitly about price fixing and more likely to use indirect messages when sanctioning institutions are present. This effect of sanctions on communication reinforces the direct cartel-deterring effect of sanctions as collusion is more difficult to reach and sustain without an explicit agreement. Indirect messages have no, or even a negative, effect on prices.
The leniency rule revisited
(2021)
The experimental literature on antitrust enforcement provides robust evidence that communication plays an important role for the formation and stability of cartels. We extend these studies through a design that distinguishes between innocuous communication and communication about a cartel, sanctioning only the latter. To this aim, we introduce a participant in the role of the competition authority, who is properly incentivized to judge the communication content and price setting behavior of the firms. Using this novel design, we revisit the question whether a leniency rule successfully destabilizes cartels. In contrast to existing experimental studies, we find that a leniency rule does not affect cartelization. We discuss potential explanations for this contrasting result.
COVID-19
(2021)
We investigate how the economic consequences of the pandemic and the government-mandated measures to contain its spread affect the self-employed — particularly women — in Germany. For our analysis, we use representative, real-time survey data in which respondents were asked about their situation during the COVID-19 pandemic. Our findings indicate that among the self-employed, who generally face a higher likelihood of income losses due to COVID-19 than employees, women are about one-third more likely to experience income losses than their male counterparts. We do not find a comparable gender gap among employees. Our results further suggest that the gender gap among the self-employed is largely explained by the fact that women disproportionately work in industries that are more severely affected by the COVID-19 pandemic. Our analysis of potential mechanisms reveals that women are significantly more likely to be impacted by government-imposed restrictions, e.g., the regulation of opening hours. We conclude that future policy measures intending to mitigate the consequences of such shocks should account for this considerable variation in economic hardship.
Detrimental effects of adverse family conditions for children's wellbeing are well-documented, but little is known about the impact of specific risk factors, or about potential protective factors that buffer the effects of family risk factors on negative development.
We investigated the impact of five important family risk factors (e.g., parental conflict) on internalizing and externalizing problems, and the potential buffering effects of peer acceptance and academic skills, at two measurement points two years apart in 1195 7- to 10-year-olds (T1: mean age = 8.54 years).
Latent change models showed that increases in risk factors over the two years predicted increasing internalizing and externalizing problems. Parental conflict was the most impactful risk factor, although peer acceptance and academic skills showed some buffering effects.
The results highlight the necessity of investigating cumulative and single risk factors, specifically interparental conflict, and emphasize the need to strengthen children's internal and social resources to buffer the effects of adverse family conditions.
Previous literature has shown that task-based goal-setting and distributed learning is beneficial to university-level course performance. We investigate the effects of making these insights salient to students by sending out goal-setting prompts in a blended learning environment with bi-weekly quizzes. The randomized field experiment in a large mandatory economics course shows promising results: the treated students outperform the control group. They are 18.8% (0.20 SD) more likely to pass the exam and earn 6.7% (0.19 SD) more points on the exam. While we cannot causally disentangle the effects of goal-setting from the prompt sent, we observe that treated students use the online learning platform earlier in the semester and attempt more online exercises compared to the control group. The heterogeneity analysis suggests that higher treatment effects are associated with low performance at the beginning of the course.
Looking for participation
(2022)
A stronger learner orientation through participatory learning increases learning motivation and results. But what does participatory learning mean? Where do learning factories and fabrication laboratories (FabLabs) stand in this context, and how can didactic implementation be improved in this respect? Using a newly developed analytical framework, which contains elements of the stage model of participation and general media didactics, we compare a FabLab and a learning factory example concerning the degree of participation. From this, we derive guidelines for designing participative teaching and learning processes in learning factories. We explain how FabLabs can be an inspiration for the didactic design of learning factories.
This study deals with the East Beni Suef Basin (Eastern Desert, Egypt) and aims to evaluate the source-generative potential, reconstruct the burial and thermal history, examine the most influential parameters on thermal maturity modeling, and improve on the models already published for the West Beni Suef to ultimately formulate a complete picture of the whole basin evolution.
Source rock evaluation was carried out based on TOC, Rock-Eval pyrolysis, and visual kerogen petrography analyses. Three kerogen types (II, II/III, and III) are distinguished in the East Beni Suef Basin, where the Abu Roash "F" Member acts as the main source rock with good to excellent source potential, oil-prone mainly type II kerogen, and immature to marginal maturity levels.
The burial history shows four depositional and erosional phases linked with the tectonic evolution of the basin. A hiatus (due to erosion or non-deposition) has occurred during the Late Eocene-Oligocene in the East Beni Suef Basin, while the West Beni Suef Basin has continued subsiding.
Sedimentation began later (Middle to Late Albian) with lower rates in the East Beni Suef Basin compared with the West Beni Suef Basin (Early Albian). The Abu Roash "F" source rock exists in the early oil window with a present-day transformation ratio of about 19% and 21% in the East and West Beni Suef Basin, respectively, while the Lower Kharita source rock, which is only recorded in the West Beni Suef Basin, has reached the late oil window with a present-day transformation ratio of about 70%.
The magnitude of erosion and heat flow have proportional and mutual effects on thermal maturity.
We present three possible scenarios of basin modeling in the East Beni Suef Basin concerning the erosion from the Apollonia and Dabaa formations.
Results of this work can serve as a basis for subsequent 2D and/or 3D basin modeling, which are highly recommended to further investigate the petroleum system evolution of the Beni Suef Basin.
The subsurface is a temporally dynamic and spatially heterogeneous compartment of the Earth's critical zone, and biogeochemical transformations taking place in this compartment are crucial for the cycling of nutrients.
The impact of spatial heterogeneity on such microbially mediated nutrient cycling is not well known, which poses a severe challenge for predicting in situ biogeochemical transformation rates and, further, the nutrient loading contributed by groundwater to surface water bodies.
Therefore, we used a numerical modelling approach to evaluate the sensitivity of groundwater microbial biomass distribution and nutrient cycling to spatial heterogeneity in different scenarios accounting for various residence times.
The model results gave us an insight into domain characteristics with respect to the presence of oxic niches in predominantly anoxic zones and vice versa depending on the extent of spatial heterogeneity and the flow regime.
The obtained results show that microbial abundance, distribution, and activity are sensitive to the applied flow regime and that the mobile (i.e. observable by groundwater sampling) fraction of microbial biomass is a varying, yet small, fraction of the total biomass in a domain. Furthermore, spatial heterogeneity resulted in anaerobic niches in the domain and shifts in microbial biomass between active and inactive states. Neglecting spatial heterogeneity can thus result in inaccurate estimation of microbial activity; in most cases this leads to an overestimation of nutrient removal (up to twice the actual amount) along a flow path.
We conclude that the governing factors for evaluating this are the residence time of solutes and the Damköhler number (Da) of the biogeochemical reactions in the domain. We propose a relationship to scale the impact of spatial heterogeneity on nutrient removal governed by log₁₀(Da).
This relationship may be applied in upscaled descriptions of microbially mediated nutrient cycling dynamics in the subsurface thereby resulting in more accurate predictions of, for example, carbon and nitrogen cycling in groundwater over long periods at the catchment scale.
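The Damköhler number contrasts reaction and transport timescales. A minimal sketch assuming first-order kinetics in a plug-flow setting, with hypothetical rate and residence-time values (not the authors' reactive-transport model):

```python
import math

def damkoehler(rate_const, residence_time):
    """Da for first-order kinetics: reaction rate relative to transport rate."""
    return rate_const * residence_time

def removal_fraction(da):
    """Solute fraction removed along a flow path under first-order,
    plug-flow assumptions: 1 - exp(-Da)."""
    return 1.0 - math.exp(-da)

# Hypothetical values: k = 0.1 per day, residence time = 30 days.
da = damkoehler(0.1, 30.0)
print(round(da, 2), round(removal_fraction(da), 3))  # 3.0 0.95
```

Da >> 1 means the reaction completes well within the flow path, while Da << 1 means most of the solute is exported unreacted.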
An increasing number of clinicians (i.e., nurses and physicians) suffer from mental health-related issues like depression and burnout. These, in turn, strain communication, collaboration, and decision-making—areas in which Conversational Agents (CAs) have been shown to be useful. Thus, in this work, we followed a mixed-method approach and systematically analysed the literature on factors affecting the well-being of clinicians and on CAs’ potential to improve said well-being by providing support in communication, collaboration, and decision-making in hospitals. In this respect, we are guided by the model of factors influencing well-being by Brigham et al. (2018). Based on an initial number of 840 articles, we analysed 52 papers in more detail and identified the influences of CAs’ fields of application on external and individual factors affecting clinicians’ well-being. As our second method, we will conduct interviews with clinicians and experts on CAs to verify and extend these influencing factors.
Epidemiological data suggest that consuming diets rich in carotenoids can reduce the risk of developing several non-communicable diseases. Thus, we investigated the extent to which carotenoid contents of foods can be increased by the choice of food matrices with naturally high carotenoid contents and by thermal processing methods that maintain their stability. For this purpose, carotenoids of 15 carrot (Daucus carota L.) cultivars of different colors were assessed with UHPLC-DAD-ToF-MS. Additionally, the effects of three processing methods (air drying, air frying, and deep frying) on carotenoid stability were assessed. Cultivar selection accounted for up to 12.9-fold differences in total carotenoid content in differently colored carrots and a 2.2-fold difference between orange carrot cultivars. Air frying for 18 and 25 min and deep frying for 10 min led to a significant decrease in total carotenoid contents. TEAC assay of lipophilic extracts showed a correlation between carotenoid content and antioxidant capacity in untreated carrots.
Thermally stable photoswitches that are driven with low-energy light are rare, yet crucial for extending the applicability of photoresponsive molecules and materials towards, e.g., living systems. Combined ortho-fluorination and -amination couples high visible light absorptivity of o-aminoazobenzenes with the extraordinary bistability of o-fluoroazobenzenes. Herein, we report a library of easily accessible o-aminofluoroazobenzenes and establish structure-property relationships regarding spectral qualities, visible light isomerization efficiency and thermal stability of the cis-isomer with respect to the degree of o-substitution and choice of amino substituent. We rationalize the experimental results with quantum chemical calculations, revealing the nature of low-lying excited states and providing insight into thermal isomerization. The synthesized azobenzenes absorb at up to 600 nm and their thermal cis-lifetimes range from milliseconds to months. The most unique example can be driven from trans to cis with any wavelength from UV up to 595 nm, while still exhibiting a thermal cis-lifetime of 81 days.
Poly(ionic liquid)s (PILs) are common precursors for heteroatom-doped carbon materials. Despite a relatively high carbonization yield, the PIL-to-carbon conversion process faces challenges in preserving morphological and structural motifs on the nanoscale. Assisted by a thin polydopamine coating route and ion exchange, imidazolium-based PIL nanovesicles were successfully applied in morphology-maintaining carbonization to prepare carbon composite nanocapsules. Extending this strategy further to their composites, we demonstrate the synthesis of carbon composite nanocapsules functionalized with iron nitride nanoparticles of an ultrafine, uniform size of 3-5 nm (termed "FexN@C"). Owing to its unique nanostructure, the sulfur-loaded FexN@C electrode was shown to efficiently mitigate the notorious shuttle effect of lithium polysulfides (LiPSs) in Li-S batteries. The cavity of the carbon nanocapsules was found to increase the sulfur loading content. The well-dispersed iron nitride nanoparticles effectively catalyze the conversion of LiPSs to Li2S, owing to their high electronic conductivity and strong binding affinity for LiPSs. Benefiting from this well-crafted composite nanostructure, the constructed FexN@C/S cathode delivered a high initial discharge capacity of 1085 mAh g⁻¹ at 0.5 C and retained 930 mAh g⁻¹ after 200 cycles. In addition, it exhibits excellent rate capability with a high initial discharge capacity of 889.8 mAh g⁻¹ at 2 C. This facile PIL-to-nanocarbon synthetic approach is applicable to the exquisite design of complex hybrid carbon nanostructures with potential use in electrochemical energy storage and conversion.
The study of perceptual flexibility in speech depends on a variety of tasks that feature a large degree of variability between participants. Of critical interest is whether measures are consistent within an individual or across stimulus contexts. This is particularly key for individual difference designs that are deployed to examine the neural basis or clinical consequences of perceptual flexibility. In the present set of experiments, we assess the split-half reliability and construct validity of five measures of perceptual flexibility: three of learning in a native language context (e.g., understanding someone with a foreign accent) and two of learning in a non-native context (e.g., learning to categorize non-native speech sounds). We find that most of these tasks show an appreciable level of split-half reliability, although construct validity was sometimes weak. This provides good evidence for reliability for these tasks, while highlighting possible upper limits on expected effect sizes involving each measure.
Changing climatic conditions and unsustainable land use are major threats to savannas worldwide. Historically, many African savannas were used intensively for livestock grazing, which contributed to widespread patterns of bush encroachment across savanna systems. To reverse bush encroachment, it has been proposed to change the cattle-dominated land use to one dominated by comparatively specialized browsers and usually native herbivores. However, the consequences for ecosystem properties and processes remain largely unclear. We used the ecohydrological, spatially explicit model EcoHyD to assess the impacts of two contrasting, herbivore land-use strategies on a Namibian savanna: grazer- versus browser-dominated herbivore communities. We varied the densities of grazers and browsers and determined the resulting composition and diversity of the plant community, total vegetation cover, soil moisture, and water use by plants. Our results showed that plant types that are less palatable to herbivores were best adapted to grazing or browsing animals in all simulated densities. Also, plant types that had a competitive advantage under limited water availability were among the dominant ones irrespective of land-use scenario. Overall, the results were in line with our expectations: under high grazer densities, we found heavy bush encroachment and the loss of the perennial grass matrix. Importantly, regardless of the density of browsers, grass cover and plant functional diversity were significantly higher in browsing scenarios. Browsing herbivores increased grass cover, and the higher total cover in turn improved water uptake by plants overall. We concluded that, in contrast to grazing-dominated land-use strategies, land-use strategies dominated by browsing herbivores, even at high herbivore densities, sustain diverse vegetation communities with high cover of perennial grasses, resulting in lower erosion risk and bolstering ecosystem services.
The investigation of metabolic fluxes and metabolite distributions within cells by means of tracer molecules is a valuable tool to unravel the complexity of biological systems. Technological advances in mass spectrometry (MS), such as atmospheric pressure chemical ionization (APCI) coupled with high resolution (HR), not only allow for highly sensitive analyses but also broaden the usefulness of tracer-based experiments, as interesting signals can be annotated de novo when not yet present in a compound library. However, several effects in the APCI ion source, i.e., fragmentation and rearrangement, lead to superimposed mass isotopologue distributions (MID) within the mass spectra, which need to be corrected during data evaluation as they would otherwise impair enrichment calculations. Here, we present and evaluate a novel software tool to automatically perform such corrections. We discuss the different effects, explain the implemented algorithm, and show its application on several experimental datasets. This adjustable tool is available as an R package from CRAN.
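As a hypothetical illustration of the kind of correction such a tool performs (the package's actual algorithm and interface are not shown here), the classical natural-abundance correction can be sketched as a linear inverse problem: a correction matrix maps the true labeling distribution to the measured, superimposed one, and is inverted to recover the former. The function names and the single-element (13C) simplification below are illustrative assumptions.

```python
import numpy as np
from math import comb

def natural_mid(n_carbons, p13c=0.0107):
    # Binomial distribution of naturally occurring 13C over n carbon positions
    return np.array([comb(n_carbons, k) * p13c**k * (1 - p13c)**(n_carbons - k)
                     for k in range(n_carbons + 1)])

def correction_matrix(n_carbons, p13c=0.0107):
    # Column j: expected measured MID if every molecule carried exactly j
    # labeled carbons; the remaining (n - j) positions follow natural abundance.
    n = n_carbons + 1
    M = np.zeros((n, n))
    for j in range(n):
        sub = natural_mid(n_carbons - j, p13c)
        M[j:j + len(sub), j] = sub
    return M

def correct_mid(measured, n_carbons, p13c=0.0107):
    # Solve M @ corrected = measured; a non-negative least-squares fit would
    # be more robust for noisy data, but a plain solve suffices here.
    M = correction_matrix(n_carbons, p13c)
    corrected = np.linalg.solve(M, np.asarray(measured, float))
    corrected = np.clip(corrected, 0, None)
    return corrected / corrected.sum()
```

For example, a 50:50 mixture of unlabeled and fully labeled three-carbon molecules is recovered exactly after the measured spectrum has been "contaminated" by natural abundance.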
ABSTRACT: Structural evolution of cesium triiodide at high pressures has been revealed by synchrotron single-crystal X-ray diffraction. Cesium triiodide undergoes a first-order phase transition above 1.24(3) GPa from an orthorhombic to a trigonal system. This transition is coupled with severe reorganization of the polyiodide network from a layered to three-dimensional architecture. Quantum chemical calculations show that even though the two polymorphic phases are nearly isoenergetic under ambient conditions, the PV term is decisive in stabilizing the trigonal polymorph above the transition point. Phonon calculations using a non-local correlation functional that accounts for dispersion interactions confirm that this polymorph is dynamically unstable under ambient conditions. The high-pressure behavior of crystalline CsI3 can be correlated with other alkali metal trihalides, which undergo a similar sequence of structural changes upon load.
Digital Platforms (DPs) have established themselves in recent years as a central concept of information technology research. Due to the great diversity of digital platform concepts, clear definitions are still required. Furthermore, DPs are subject to dynamic changes driven by internal and external factors, which pose challenges for digital platform operators, developers, and customers. Which research directions digital platform research should take to address these challenges remains open so far. This paper aims to contribute to closing this gap by presenting a systematic literature review (SLR) of digital platform concepts in the context of the Industrial Internet of Things (IIoT) for manufacturing companies. It provides a basis for (1) a selection of definitions of current digital platform and ecosystem concepts and (2) a selection of current digital platform research directions. These directions are divided into (a) the occurrence of digital platforms, (b) the emergence of digital platforms, (c) the evaluation of digital platforms, (d) the development of digital platforms, and (e) the selection of digital platforms.
The intensity of cosmic radiation may vary over five orders of magnitude within a few hours or days during Solar Particle Events (SPEs), increasing the probability of Single Event Upsets (SEUs) in space-borne electronic systems by several orders of magnitude. It is therefore vital to detect changes in the SEU rate early, so that dynamic radiation-hardening measures can be activated in time. In this paper, an embedded approach for the prediction of SPEs and the SRAM SEU rate is presented. The proposed solution combines a real-time SRAM-based SEU monitor, an offline-trained machine learning model, and an online learning algorithm for prediction. With respect to the state of the art, our solution brings the following benefits: (1) use of the existing on-chip data-storage SRAM as a particle detector, minimizing the hardware and power overhead; (2) prediction of the SRAM SEU rate one hour in advance, with fine-grained hourly tracking of SEU variations during SPEs as well as under normal conditions; (3) online optimization of the prediction model to enhance prediction accuracy at run-time; (4) negligible hardware-accelerator design cost for implementing the selected machine learning model and online learning algorithm. The proposed design is intended for a highly dependable, self-adaptive multiprocessing system for space applications, allowing radiation mitigation mechanisms to be triggered before the onset of high radiation levels.
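The combination of an offline-trained model with online updates can be illustrated, in heavily simplified form, by an autoregressive predictor whose weights are refined with one stochastic-gradient step as each hour's SEU count arrives. The class and parameter names are invented for illustration; the paper's actual model and hardware accelerator are not reproduced here.

```python
import numpy as np

class OnlineSEUPredictor:
    """Toy autoregressive predictor of the next-hour SEU count.

    Illustrative stand-in: the offline-trained model is replaced by a
    linear model whose weights are updated online via SGD."""

    def __init__(self, n_lags=4, lr=0.01):
        self.w = np.zeros(n_lags + 1)  # lag weights plus a bias term
        self.lr = lr
        self.n_lags = n_lags

    def predict(self, history):
        # Predict next-hour count from the last n_lags observed counts
        x = np.append(history[-self.n_lags:], 1.0)
        return float(self.w @ x)

    def update(self, history, observed):
        # One SGD step on the squared prediction error (online learning)
        x = np.append(history[-self.n_lags:], 1.0)
        err = self.w @ x - observed
        self.w -= self.lr * err * x
```

On a stationary SEU rate the predictor converges to the observed level; during an SPE the rapid updates track the rising rate hour by hour.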
Studies have evaluated the effectiveness of dual career (DC) support services among student-athletes by examining scholastic performances.
These studies investigated self-reported grades of student-athletes or focused on the career choices student-athletes made after leaving school. Most of these studies examined scholastic performances cross-sectionally among lower secondary school student-athletes or student-athletes in higher education.
The present longitudinal field study in a quasi-experimental design aims to evaluate the development of scholastic performances among upper secondary school students aged 16-19 by using standardized scholastic assessments and grade points in the subject English over a course of 3-4 years.
A sample of 159 students (54.4% females) at three German Elite Sport Schools (ESS) and three comprehensive schools participated in the study. The sample was split into six groups according to three criteria: (1) students' athletic engagement, (2) school type attendance, and (3) usage of DC support services in secondary school.
Repeated-measurement analyses of variance were conducted in order to evaluate the impact of the three previously mentioned criteria as well as their interaction on the development of scholastic performances.
Findings indicated that the development of English performance levels differed among the six groups.
Invention
(2023)
This entry addresses invention from five different perspectives: (i) definition of the term, (ii) mechanisms underlying invention processes, (iii) (pre-)history of human inventions, (iv) intellectual property protection vs open innovation, and (v) case studies of great inventors. Regarding the definition, an invention is the outcome of a creative process taking place within a technological milieu, which is recognized as successful in terms of its effectiveness as an original technology. In the process of invention, a technological possibility becomes realized. Inventions are distinct from either discovery or innovation. In human creative processes, seven mechanisms of invention can be observed, yielding characteristic outcomes: (1) basic inventions, (2) invention branches, (3) invention combinations, (4) invention toolkits, (5) invention exaptations, (6) invention values, and (7) game-changing inventions. The development of humanity has been strongly shaped by inventions ever since early stone tools and the conception of agriculture. An “explosion of creativity” has been associated with Homo sapiens, and inventions in all fields of human endeavor have followed suit, engendering an exponential growth of cumulative culture. This culture development emerges essentially through a reuse of previous inventions, their revision, amendment and rededication. In sociocultural terms, humans have increasingly regulated processes of invention and invention-reuse through concepts such as intellectual property, patents, open innovation and licensing methods. Finally, three case studies of great inventors are considered: Edison, Marconi, and Montessori, next to a discussion of human invention processes as collaborative endeavors.
Current attempts to prevent and manage type 2 diabetes have been moderately effective, and a better understanding of the molecular roots of this complex disease is important to develop more successful and precise treatment options.
Recently, we initiated the collective diabetes cross, where four mouse inbred strains differing in their diabetes susceptibility were crossed with the obese and diabetes-prone NZO strain and identified the quantitative trait loci (QTL) Nidd13/NZO, a genomic region on chromosome 13 that correlates with hyperglycemia in NZO allele carriers compared to B6 controls.
Subsequent analysis of the critical region, harboring 644 genes, included expression studies in pancreatic islets of congenic Nidd13/NZO mice, integration of single-cell data from parental NZO and B6 islets as well as haplotype analysis.
Finally, of the five genes (Acot12, S100z, Ankrd55, Rnf180, and Iqgap2) within the polymorphic haplotype block that are differentially expressed in islets of B6 compared to NZO mice, we found that overexpression of the calcium-binding protein gene S100z affects islet cell proliferation as well as apoptosis in MIN6 cells. In summary, we identify S100z as the strongest candidate causal gene for the diabetes QTL Nidd13/NZO, affecting beta-cell proliferation and apoptosis. Thus, S100z is an entirely novel diabetes gene regulating islet cell function.
The business problem of inefficient processes, imprecise process analyses and simulations, and non-transparent artificial neural network models can be overcome by an easy-to-use modeling concept. Aiming at a flexible and efficient approach to modeling, simulating and optimizing processes, this paper proposes the flexible Concept of Neuronal Modeling (CoNM). The modeling concept, which is described by its dedicated modeling language and mathematical formulation and is connected to a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulation (NPS) and Neuronal Process Optimization (NPO). The efficacy of the designed artifacts was rigorously demonstrated in six experiments and with a simulator of real industrial production processes.
Model uncertainty quantification is an essential component of effective data assimilation. Model errors associated with sub-grid scale processes are often represented through stochastic parameterizations of the unresolved process. Many existing Stochastic Parameterization schemes are only applicable when knowledge of the true sub-grid scale process or full observations of the coarse scale process are available, which is typically not the case in real applications. We present a methodology for estimating the statistics of sub-grid scale processes for the more realistic case that only partial observations of the coarse scale process are available. Model error realizations are estimated over a training period by minimizing their conditional sum of squared deviations given some informative covariates (e.g., state of the system), constrained by available observations and assuming that the observation errors are smaller than the model errors. From these realizations a conditional probability distribution of additive model errors given these covariates is obtained, allowing for complex non-Gaussian error structures. Random draws from this density are then used in actual ensemble data assimilation experiments. We demonstrate the efficacy of the approach through numerical experiments with the multi-scale Lorenz 96 system using both small and large time scale separations between slow (coarse scale) and fast (fine scale) variables. The resulting error estimates and forecasts obtained with this new method are superior to those from two existing methods.
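The core idea of drawing state-conditioned model errors can be sketched minimally as follows. The details here are assumptions for illustration (a scalar state covariate, equal-probability bins, and an empirical per-bin error pool), not the paper's estimation procedure: error realizations from a training period are binned by a covariate, and additive errors are later drawn from the bin matching the current state, which naturally accommodates non-Gaussian error structures.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_error_library(states, errors, n_bins=10):
    """Bin training-period model-error realizations by a state covariate,
    yielding an empirical conditional error distribution per bin."""
    edges = np.quantile(states, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, states, side="right") - 1,
                  0, n_bins - 1)
    return edges, [errors[idx == b] for b in range(n_bins)]

def draw_error(state, edges, library):
    """Draw a random additive model error conditioned on the current state."""
    b = int(np.clip(np.searchsorted(edges, state, side="right") - 1,
                    0, len(library) - 1))
    return float(rng.choice(library[b]))
```

In an ensemble data assimilation experiment, each member's forecast tendency would be perturbed by one such draw, so that the injected model error depends on where the system currently sits in state space.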
The nature of the sources powering nebular He II emission in star-forming galaxies remains debated, and various types of objects have been considered, including Wolf-Rayet stars, X-ray binaries, and Population III stars.
Modern X-ray observations show the ubiquitous presence of hot gas filling star-forming galaxies. We use a collisional ionization plasma code to compute the specific He II ionizing flux produced by hot gas and show that if its temperature is not too high (≲2.5 MK), then the observed levels of soft diffuse X-ray radiation could explain He II ionization in galaxies.
To gain a physical understanding of this result, we propose a model that combines the hydrodynamics of cluster winds and hot superbubbles with observed populations of young massive clusters in galaxies. We find that in low-metallicity galaxies, the temperature of hot gas is lower and the production rate of He II ionizing photons is higher compared to high-metallicity galaxies. The reason is that the slower stellar winds of massive stars in lower-metallicity galaxies input less mechanical energy in the ambient medium.
Furthermore, we show that ensembles of star clusters up to ∼10–20 Myr old in galaxies can produce enough soft X-rays to induce nebular He II emission. We discuss observations of the template low-metallicity galaxy I Zw 18 and suggest that the He II nebula in this galaxy is powered by a hot superbubble.
Finally, appreciating the complex nature of stellar feedback, we suggest that soft X-rays from hot superbubbles are among the dominant sources of He II ionizing flux in low-metallicity star-forming galaxies.
It has been highlighted many times how difficult it is to draw a boundary between gift and bribe, and how the same transfer can be interpreted in different ways according to the position of the observer and the narrative frame into which it is inserted. This also applied of course to Ancient Rome; in both the Republic and Principate lawgivers tried to define the limits of acceptable transfers and thus also to identify what we might call ‘corruption’. Yet, such definitions remained to a large extent blurred, and what was constructed was mostly a ‘code of conduct’, allowing Roman politicians to perform their own ‘honesty’ in public duty – while being aware at all times that their involvement in different kinds of transfer might be used by their opponents against them and presented as a case of ‘corrupt’ behaviour.
Widespread on social networking sites (SNSs), envy has been linked to an array of detrimental outcomes for users’ well-being. While envy has been considered a status-related emotion and is likely to be experienced in response to perceiving another’s higher status, there is a lack of research exploring how status perceptions influence the emergence of envy on SNSs. This is important because SNSs typically quantify social interactions and reach with metrics that indicate users’ relative rank and status in the network. To understand how status perceptions impact SNS users, we introduce a new form of metric-based digital status rooted in SNS metrics that are available and visible on a platform. Drawing on social comparison theory and status literature, we conducted an online experiment to investigate how different forms of status contribute to the proliferation of envy on SNSs. Our findings shed light on how metric-based digital status influences feelings of envy on SNSs. Specifically, we could show that metric-based digital status impacts envy through increasing perceptions of others’ socioeconomic and sociometric statuses. Our study contributes to the growing discourse on the negative outcomes associated with SNS use and its consequences for users and society.
Who has the future in mind?
(2022)
An individual's relation to time may be an important driver of pro-environmental behaviour. We studied whether young individuals' gender and time-orientation are associated with pro-environmental behaviour. In a controlled laboratory environment with students in Germany, participants earned money by performing a real-effort task and were then offered the opportunity to invest their money into an environmental project that supports climate protection. Afterwards, we controlled for their time-orientation. In this consequential behavioural setting, we find that males who scored higher on future-negative orientation showed significantly more pro-environmental behaviour compared to females who scored higher on future-negative orientation and males who scored lower on future-negative orientation. Interestingly, our results are completely reversed when it comes to past-positive orientation. These findings have practical implications regarding the most appropriate way to address individuals in order to achieve more pro-environmental behaviour.
The envy spiral
(2020)
On Social Networking Sites (SNS), users disclose mostly positive and often self-enhancing information. Scholars refer to this phenomenon as the positivity bias in SNS communication (PBSC). However, while theoretical explanations for this phenomenon have been proposed, an empirical proof of these theorized mechanisms is still missing. The project presented in this Research-in-Progress paper aims at explaining the PBSC with the mechanism specified in the self-enhancement envy spiral. Specifically, we hypothesize that feelings of envy drive people to post positive and self-enhancing content on SNS. To test this hypothesis, we developed an experimental design allowing us to examine the causal effect of envy on the positivity of users' subsequently posted content. In a preliminary study, we tested our manipulation of envy and could show its effectiveness in inducing different levels of envy between our groups. Our project will help to broaden the understanding of the complex dynamics of SNS and the potentially adverse driving forces underlying them.
This paper studies how individuals discount the utility they derive from their provision of goods over spatial distance. In a controlled laboratory experiment in Germany, we elicit preferences for the provision of the same good at different locations. To isolate spatial preferences from any other direct value of the goods being close to the individual, we focus on goods with “existence value.” We find that individuals put special weight on the provision of these goods in their immediate vicinity. This “vicinity bias” represents a spatial analogy to the “present bias” in the time dimension.
Enhancing economic efficiency in modular production systems through deep reinforcement learning
(2024)
In times of increasingly complex production processes and volatile customer demands, production adaptability is crucial for a company's profitability and competitiveness. The ability to cope with rapidly changing customer requirements and unexpected internal and external events guarantees robust and efficient production processes, requiring a dedicated control concept at the shop floor level. Yet in today's practice, conventional control approaches remain in use, which may not keep up with this dynamic behaviour due to their scenario-specific and rigid properties. To address this challenge, deep learning methods have increasingly been deployed due to their optimization and scalability properties. However, these approaches have often been tested in specific operational applications and have focused on technical performance indicators such as order tardiness or total throughput. In this paper, we propose a deep reinforcement learning based production control to optimize combined techno-financial performance measures. Based on pre-defined manufacturing modules that are supplied and operated by multiple agents, positive effects were observed in terms of increased revenue and reduced penalties due to lower throughput times and fewer delayed products. The combined modular and multi-staged approach as well as the distributed decision-making further leverage scalability and transferability to other scenarios.
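The described control principle can be illustrated, in drastically simplified form, with tabular Q-learning on a toy dispatch problem whose reward combines revenue with lateness penalties, i.e., a techno-financial measure. The jobs, penalty weight, and hyperparameters below are invented for illustration; the paper's deep RL agents and modular production setup are not reproduced here.

```python
import itertools
import random

# Toy dispatch problem: three jobs, each with (duration, due date, revenue).
JOBS = [(2, 2, 10), (1, 3, 6), (3, 5, 12)]
PENALTY = 4  # cost per time unit of lateness

def step(state, t, action):
    # Process one job; state is a bitmask of completed jobs
    dur, due, rev = JOBS[action]
    t += dur
    reward = rev - PENALTY * max(0, t - due)
    return state | (1 << action), t, reward

def train(episodes=5000, alpha=0.2, gamma=1.0, eps=0.2, seed=1):
    # Tabular Q-learning with epsilon-greedy exploration
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        state, t = 0, 0
        while state != 0b111:
            actions = [a for a in range(3) if not state & (1 << a)]
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: Q.get(((state, t), b), 0.0))
            nstate, nt, r = step(state, t, a)
            nxt = [Q.get(((nstate, nt), b), 0.0)
                   for b in range(3) if not nstate & (1 << b)]
            target = r + gamma * (max(nxt) if nxt else 0.0)
            key = ((state, t), a)
            Q[key] = Q.get(key, 0.0) + alpha * (target - Q.get(key, 0.0))
            state, t = nstate, nt
    return Q

def greedy_total(Q):
    # Run the learned greedy policy and return the total reward
    state, t, total = 0, 0, 0.0
    while state != 0b111:
        actions = [a for a in range(3) if not state & (1 << a)]
        a = max(actions, key=lambda b: Q.get(((state, t), b), 0.0))
        state, t, r = step(state, t, a)
        total += r
    return total
```

On this tiny instance the learned policy recovers the job sequence that maximizes revenue minus tardiness penalties, which can be verified against a brute-force search over all job orders.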
Shape-memory hydrogels (SMHs) are multifunctional, actively moving polymers of interest in biomedicine. In loosely crosslinked polymer networks, gelatin chains may form triple helices, which can act as temporary net points in SMHs, depending on the presence of salts. Here, we show programming and initiation of the shape-memory effect of such networks based on a thermomechanical process compatible with the physiological environment. The SMHs were synthesized by reaction of glycidylmethacrylated gelatin with oligo(ethylene glycol) (OEG) α,ω-dithiols of varying crosslinker length and amount. Triple helicalization of gelatin chains is shown directly by wide-angle X-ray scattering and indirectly via the mechanical behavior at different temperatures. The ability to form triple helices increased with the molar mass of the crosslinker. Hydrogels had storage moduli of 0.27-23 kPa and Young's moduli of 215-360 kPa at 4 °C. The hydrogels were hydrolytically degradable, with full degradation to water-soluble products within one week at 37 °C and pH 7.4. A thermally induced shape-memory effect is demonstrated in bending as well as in compression tests, in which excellent shape-recovery rates Rr close to 100% were observed. In the future, the material presented here could be applied, e.g., as self-anchoring devices mechanically resembling the extracellular matrix.
Starting from the observation that the reduced state of a system strongly coupled to a bath is, in general, an athermal state, we introduce and study a cyclic battery-charger quantum device that is in thermal equilibrium, or in a ground state, during the charge storing stage. The cycle has four stages: the equilibrium storage stage is interrupted by disconnecting the battery from the charger, then work is extracted from the battery, and then the battery is reconnected with the charger; finally, the system is brought back to equilibrium. At no point during the cycle are the battery-charger correlations artificially erased. We study the case where the battery and charger together comprise a spin-1/2 Ising chain, and show that the main characteristics (the extracted energy and the thermodynamic efficiency) can be enhanced by operating the cycle close to the quantum phase transition point. When the battery is just a single spin, we find that the output work and efficiency show a scaling behavior at criticality and derive the corresponding critical exponents. Due to always present correlations between the battery and the charger, operations that are equivalent from the perspective of the battery can entail different energetic costs for switching the battery-charger coupling. This happens only when the coupling term does not commute with the battery's bare Hamiltonian, and we use this purely quantum leverage to further optimize the performance of the device.
Charitable giving
(2023)
We investigate how different levels of information influence the allocation decisions of donors who are entitled to freely distribute a fixed monetary endowment between themselves and a charitable organization in both giving and taking frames. Participants donate significantly higher amounts when the decision is described as taking rather than giving. This framing effect becomes smaller if more information about the charity is provided.
The influence of the process gas, laser scan speed, and sample thickness on the build-up of residual stresses and porosity in Ti-6Al-4V produced by laser powder bed fusion was studied. Pure argon and helium, as well as a mixture of those (30% helium), were employed to establish process atmospheres with a low residual oxygen content of 100 ppm O2. The results highlight that the subsurface residual stresses measured by X-ray diffraction were significantly lower in the thin samples (220 MPa) than in the cuboid samples (645 MPa). This difference was attributed to the shorter laser vector length, resulting in heat accumulation and thus in-situ stress relief. The addition of helium to the process gas did not introduce additional subsurface residual stresses in the simple geometries, even for the increased scanning speed. Finally, larger deflection was found in the cantilever built under helium (after removal from the baseplate), than in those produced under argon and an argon-helium mixture. This result demonstrates that complex designs involving large scanned areas could be subjected to higher residual stress when manufactured under helium due to the gas's high thermal conductivity, heat capacity, and thermal diffusivity.
In control theory, to solve a finite-horizon sequential decision problem (SDP) commonly means to find a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman's backward induction. Textbook authors (e.g. Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is correct as a solution method for deterministic and stochastic SDPs. Botta, Jansson and Ionescu propose a generic framework for finite-horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards. In the present paper, we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg-Moore algebra. They hold in familiar settings like those of deterministic or stochastic SDPs, but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.'s generic framework. Our development is formalised in Idris as an extension of the Botta et al. framework and the sources are available as supplementary material.
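For contrast with the monadic generalization, ordinary backward induction for a deterministic finite-horizon SDP can be sketched as follows. This is a textbook-style baseline in Python, not the Idris formalisation: the monadic version would replace the bare next state with a monadic value (e.g. a probability distribution) and aggregate rewards with a measure function.

```python
def backward_induction(horizon, states, actions, trans, reward):
    """Bellman backward induction for a deterministic finite-horizon SDP.

    actions(t, s) -> iterable of admissible actions,
    trans(t, s, a) -> next state, reward(t, s, a) -> immediate reward.
    Returns (policies, values), both indexed by time step."""
    value = {s: 0.0 for s in states}          # value after the final step
    policies, values = [], [value]
    for t in reversed(range(horizon)):        # sweep backwards in time
        new_value, policy = {}, {}
        for s in states:
            best = max(actions(t, s),
                       key=lambda a: reward(t, s, a) + value[trans(t, s, a)])
            policy[s] = best
            new_value[s] = reward(t, s, best) + value[trans(t, s, best)]
        policies.insert(0, policy)
        values.insert(0, new_value)
        value = new_value
    return policies, values
```

As a usage example, on a four-state line where moving up is rewarded, the computed value of the start state equals the total reward of always stepping up, and the extracted decision rules agree with that optimal behaviour.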
Scaling agriculture to the globally rising population demands new approaches for future crop production such as multilayer and multitrophic indoor farming. Moreover, there is a current trend towards sustainable local solutions for aquaculture and saline agriculture. In this context, halophytes are becoming increasingly important for research and the food industry. As Salicornia europaea is a highly salt-tolerant obligate halophyte that can be used as a food crop, indoor cultivation with saline water is of particular interest. Therefore, finding a sustainable alternative to the use of seawater in non-coastal regions is crucial. Our goal was to determine whether natural brines, which are widely distributed and often available in inland areas, provide an alternative water source for the cultivation of saline organisms. This case study investigated the potential use of natural brines for the production of S. europaea. In the control group, which reflects the optimal growth conditions, fresh weight was increased, but there was no significant difference between the treatment groups comparing natural brines with artificial sea water. A similar pattern was observed for carotenoids and chlorophylls. Individual components showed significant differences. However, within treatments, there were mostly no changes. In summary, we showed that the influence of the different chloride concentrations was higher than the salt composition. Moreover, nutrient-enriched natural brine was demonstrated to be a suitable alternative for cultivation of S. europaea in terms of yield and nutritional quality. Thus, the present study provides the first evidence for the future potential of natural brine waters for the further development of aquaculture systems and saline agriculture in inland regions.
Sarcopenic obesity is increasingly found in youth, but its health consequences remain unclear.
Therefore, we studied the prevalence of sarcopenia and its association with cardiometabolic risk factors as well as muscular and cardiorespiratory fitness using data from the German Children's Health InterventionaL Trial (CHILT III) programme.
In addition to anthropometric data and blood pressure, muscle and fat mass were determined with bioelectrical impedance analysis.
Sarcopenia was classified via muscle-to-fat ratio. A fasting blood sample was taken, muscular fitness was determined using the standing long jump, and cardiorespiratory fitness was determined using bicycle ergometry. Of the 119 obese participants included in the analysis (47.1% female, mean age 12.2 years), 83 (69.7%) had sarcopenia. Affected individuals had higher gamma-glutamyl transferase, higher glutamate pyruvate transaminase, higher high-sensitivity C-reactive protein, higher diastolic blood pressure, and lower muscular and cardiorespiratory fitness (each p < 0.05) compared to participants who were 'only' obese.
No differences were found in other parameters. In our study, sarcopenic obesity was associated with various disorders in children and adolescents.
However, the clinical value must be tested with larger samples and reference populations to develop a unique definition and appropriate methods in terms of identification but also related preventive or therapeutic approaches.
Forest microclimate can buffer biotic responses to summer heat waves, which are expected to become more extreme under climate warming. Prediction of forest microclimate is limited because meteorological observation standards seldom include situations inside forests.
We use eXtreme Gradient Boosting - a Machine Learning technique - to predict the microclimate of forest sites in Brandenburg, Germany, using seasonal data comprising weather features.
The analysis was complemented by applying SHapley Additive exPlanations (SHAP) to show the interaction effects of variables and individualised feature attributions.
We evaluate model performance in comparison to artificial neural networks, random forest, support vector machine, and multi-linear regression.
After implementing a feature selection, an ensemble approach was applied to combine individual models for each forest and improve robustness over a given single prediction model.
The resulting model can be applied to translate climate change scenarios into temperatures inside forests to assess temperature-related ecosystem services provided by forests.
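The ensemble step described above can be sketched as simple prediction averaging over per-forest models; the linear stand-in models below are hypothetical, not the fitted gradient-boosting models of the study.

```python
# Sketch of combining individual per-forest models into an averaging ensemble,
# one common way to improve robustness over any single prediction model.
# The stand-in models are hypothetical, not the study's fitted XGBoost models.

def ensemble_predict(models, features):
    """Average the predictions of several fitted models for one input."""
    preds = [m(features) for m in models]
    return sum(preds) / len(preds)

# hypothetical fitted models mapping weather features to in-forest temperature
model_a = lambda x: 0.8 * x["air_temp"] + 0.5
model_b = lambda x: 0.7 * x["air_temp"] + 1.1
model_c = lambda x: 0.9 * x["air_temp"] - 0.2

t_inside = ensemble_predict([model_a, model_b, model_c], {"air_temp": 30.0})
```

Averaging dampens the idiosyncratic errors of any single model, which is the robustness gain the ensemble approach targets.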
Insights in electrosynthesis, target binding, and stability of peptide-imprinted polymer nanofilms
(2021)
Molecularly imprinted polymer (MIP) nanofilms have been successfully implemented for the recognition of different target molecules; however, the underlying mechanistic details remained vague.
This paper provides new insights in the preparation and binding mechanism of electrosynthesized peptide-imprinted polymer nanofilms for selective recognition of the terminal pentapeptides of the beta-chains of human adult hemoglobin, HbA, and its glycated form HbA1c.
To differentiate between peptides differing solely in a glucose adduct, MIP nanofilms were prepared by a two-step hierarchical electrosynthesis that involves first the chemisorption of a cysteinyl derivative of the pentapeptide followed by electropolymerization of scopoletin.
This approach was compared with a random single-step electrosynthesis using scopoletin/pentapeptide mixtures. Electrochemical monitoring of the peptide binding to the MIP nanofilms by means of redox probe gating revealed a superior affinity of the hierarchical approach, with a Kd value of 64.6 nM towards the related target.
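Assuming simple 1:1 Langmuir binding, which is an illustrative assumption and not a claim from the paper, the reported Kd translates into fractional occupancy of the imprinted sites as follows.

```python
# Back-of-the-envelope occupancy from a dissociation constant, assuming
# simple 1:1 Langmuir binding (illustrative, not the paper's model).

def occupancy(conc_nM, kd_nM):
    """Fractional site occupancy theta = C / (Kd + C)."""
    return conc_nM / (kd_nM + conc_nM)

# at a peptide concentration equal to Kd, half of the sites are occupied
theta = occupancy(conc_nM=64.6, kd_nM=64.6)
```

A low Kd such as 64.6 nM means substantial occupancy is already reached at nanomolar peptide concentrations, which is why it indicates superior affinity.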
Changes in the electrosynthesized non-imprinted polymer and MIP nanofilms during chemical, electrochemical template removal and rebinding were substantiated in situ by monitoring the characteristic bands of both target peptides and polymer with surface enhanced infrared absorption spectroscopy.
This rational approach led to MIPs with excellent selectivity and provided key mechanistic insights with respect to electrosynthesis, rebinding and stability of the formed MIPs.
Enhanced charge selectivity via anodic-C60 layer reduces nonradiative losses in organic solar cells
(2021)
Interfacial layers in conjunction with suitable charge-transport layers can significantly improve the performance of optoelectronic devices by facilitating efficient charge carrier injection and extraction.
This work uses a neat C-60 interlayer on the anode to experimentally reveal that surface recombination is a significant contributor to nonradiative recombination losses in organic solar cells.
These losses are shown to proportionally increase with the extent of contact between donor molecules in the photoactive layer and a molybdenum oxide (MoO3) hole extraction layer, proven by calculating voltage losses in low- and high-donor-content bulk heterojunction device architectures.
Using a novel in-device determination of the built-in voltage, the suppression of surface recombination, due to the insertion of a thin anodic-C-60 interlayer on MoO3, is attributed to an enhanced built-in potential.
The increased built-in voltage reduces the presence of minority charge carriers at the electrodes-a new perspective on the principle of selective charge extraction layers.
The benefit to device efficiency is limited by a critical interlayer thickness, which depends on the donor material in bilayer devices.
Given the high popularity of MoO3 as an efficient hole extraction and injection layer and the increasingly popular discussion on interfacial phenomena in organic optoelectronic devices, these findings are relevant to and address different branches of organic electronics, providing insights for future device design.
Alpine glacial erosion exerts a first-order control on mountain topography and sediment production, but its mechanisms are poorly understood. Observational data capable of testing glacial erosion and transport laws in glacial models are mostly lacking. New insights, however, can be gained from detrital tracer thermochronology. Detrital tracer thermochronology works on the premise that thermochronometer bedrock ages vary systematically with elevation, and that detrital downstream samples can be used to infer the source elevation sectors of sediments. We analyze six new detrital samples of different grain sizes (sand and pebbles) from glacial deposits and the modern river channel integrated with data from 18 previously analyzed bedrock samples from an elevation transect in the Leones Valley, Northern Patagonian Icefield, Chile (46.7 degrees S). We present 622 new detrital zircon (U-Th)/He (ZHe) single-grain analyses and 22 new bedrock ZHe analyses for two of the bedrock samples to determine age reproducibility. Results suggest that glacial erosion was focused at and below the Last Glacial Maximum and neoglacial equilibrium line altitudes, supporting previous modeling studies. Furthermore, grain age distributions from different grain sizes (sand, pebbles) might indicate differences in erosion mechanisms, including mass movements at steep glacial valley walls. Finally, our results highlight complications and opportunities in assessing glacigenic environments, such as dynamics of sediment production, transport, transient storage, and final deposition, that arise from settings with large glacio-fluvial catchments.
Frequency-domain electromagnetic (FDEM) data are commonly inverted to characterize subsurface geoelectrical properties using smoothness constraints in 1D inversion schemes assuming a layered medium.
Smoothness constraints are suitable for imaging gradual transitions of subsurface geoelectrical properties caused, for example, by varying sand, clay, or fluid content. However, such inversion approaches are limited in characterizing sharp interfaces. Alternative regularizations based on the minimum gradient support (MGS) stabilizers can, instead, be used to promote results with different levels of smoothness/sharpness selected by simply acting on the so-called focusing parameter.
The MGS regularization has been implemented for different kinds of geophysical data inversion strategies. However, concerning FDEM data, the MGS regularization has only been implemented for vertically constrained inversion (VCI) approaches but not for laterally constrained inversion (LCI) approaches.
We present a novel LCI approach for FDEM data using the MGS regularization for the vertical and lateral direction. Using synthetic and field data examples, we demonstrate that our approach can efficiently and automatically provide a set of model solutions characterized by different levels of sharpness and variable lateral consistencies.
In terms of data misfit, the obtained set of solutions contains equivalent models allowing us also to investigate the non-uniqueness of FDEM data inversion.
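A minimal sketch of the MGS stabilizer idea follows, assuming the common form g^2/(g^2 + beta^2) summed over model gradients, with beta the focusing parameter; the actual inversion implementation is more involved.

```python
# Sketch of the minimum-gradient-support (MGS) stabilizer for a 1D layered
# model; beta is the focusing parameter selecting smooth vs. sharp results.
# Assumed common form of the stabilizer, not the authors' exact code.

def mgs_stabilizer(model, beta):
    """Sum of g^2 / (g^2 + beta^2) over vertical model gradients g."""
    grads = [b - a for a, b in zip(model, model[1:])]
    return sum(g * g / (g * g + beta * beta) for g in grads)

sharp  = [10.0, 10.0, 100.0, 100.0]   # one sharp conductivity jump
smooth = [10.0, 40.0, 70.0, 100.0]    # same endpoints, gradual transition

small_beta = 1e-3
```

With a small focusing parameter the stabilizer essentially counts the nonzero gradients, so a sharp single-interface model is penalised less than a gradual one; a large beta reverses this preference towards smooth solutions.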
The addition of nano-Al2O3 has been shown to enhance the breakdown voltage of epoxy resin, but its effect on flashover performance remains disputed. This work concentrates on the surface charge variation and dc flashover performance of epoxy resin with nano-Al2O3 doping. The dispersion of nano-Al2O3 in epoxy is characterized by scanning electron microscopy (SEM) and atomic force microscopy (AFM). The dc flashover voltages of samples under either positive or negative polarity are measured with a finger-electrode system, and the surface charge variations before and after flashovers are identified from surface potential mapping. The results show that nano-Al2O3 leads to a 16.9% voltage drop for negative flashovers and a 6.8% drop for positive cases. It is found that a single flashover clears most of the accumulated surface charges, regardless of polarity. As a result, the ground electrode is neighbored by an equipotential zone enclosed by low-density heterocharges. The equipotential zone tends to broaden after 20 flashovers. Nano-Al2O3 is found to be beneficial in downsizing the equipotential zone owing to its capability for charge migration, which helps maintain the flashover voltage at a high level after multiple flashovers. Hence, nano-Al2O3 plays a significant role in making epoxy highly resistant to multiple flashovers.
In liquid-chromatography-tandem-mass-spectrometry-based proteomics, information about the presence and stoichiometry of protein modifications is not readily available. To overcome this problem, we developed multiFLEX-LF, a computational tool that builds upon FLEXIQuant, which detects modified peptide precursors and quantifies their modification extent by monitoring the differences between observed and expected intensities of the unmodified precursors. multiFLEX-LF relies on robust linear regression to calculate the modification extent of a given precursor relative to a within-study reference. multiFLEX-LF can analyze entire label-free discovery proteomics data sets in a precursor-centric manner without preselecting a protein of interest. To analyze modification dynamics and coregulated modifications, we hierarchically clustered the precursors of all proteins based on their computed relative modification scores. We applied multiFLEX-LF to a data-independent-acquisition-based data set acquired using the anaphase-promoting complex/cyclosome (APC/C) isolated at various time points during mitosis. The clustering of the precursors allows for identifying varying modification dynamics and ordering the modification events. Overall, multiFLEX-LF enables the fast identification of potentially differentially modified peptide precursors and the quantification of their differential modification extent in large data sets using a personal computer. Additionally, multiFLEX-LF can drive the large-scale investigation of the modification dynamics of peptide precursors in time-series and case-control studies. multiFLEX-LF is available at https://gitlab.com/SteenOmicsLab/multiflex-lf.
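The core FLEXIQuant quantity behind multiFLEX-LF can be illustrated in a deliberately simplified form: the shortfall of observed versus expected unmodified-precursor intensity. Here the expected intensity is assumed given, whereas the tool derives it by robust linear regression against a within-study reference.

```python
# Simplified illustration of the FLEXIQuant idea used by multiFLEX-LF
# (not the published implementation): modification extent is inferred
# from how much of the expected unmodified-precursor intensity is missing.

def modification_extent(observed, expected):
    """Fraction of the precursor pool inferred to carry a modification."""
    return 1.0 - observed / expected

# if only 3.0e6 of an expected 4.0e6 intensity units remain unmodified,
# roughly a quarter of the precursor pool is modified
extent = modification_extent(observed=3.0e6, expected=4.0e6)
```

In the tool itself this score is computed per precursor and sample relative to the regression-derived reference, which is what makes whole-data-set, precursor-centric analysis possible.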
As the complexity of learning task requirements, computer infrastructures and knowledge acquisition for artificial neuronal networks (ANN) is increasing, it is challenging to talk about ANNs without creating misunderstandings. An efficient, transparent and failure-free design of learning tasks by models is not supported by any tool at all. For this purpose, in particular the consideration of data, information and knowledge on the basis of an integration with knowledge-intensive business process models and a process-oriented knowledge management is attractive. With the aim of making the design of learning tasks expressible by models, this paper proposes a graphical modeling language called Neuronal Training Modeling Language (NTML), which allows the repetitive use of learning designs. An example ANN project of AI-based dynamic GUI adaptation exemplifies its use as a first demonstration.
Earthquake site responses or site effects are the modifications of surface geology to seismic waves. How well can we predict the site effects (averaged over many earthquakes) at individual sites so far? To address this question, we tested and compared the effectiveness of different estimation techniques in predicting the outcrop Fourier site responses separated using the general inversion technique (GIT) from recordings. Techniques being evaluated are (a) the empirical correction to the horizontal-to-vertical spectral ratio of earthquakes (c-HVSR), (b) one-dimensional ground response analysis (GRA), and (c) the square-root-impedance (SRI) method (also called the quarter-wavelength approach). Our results show that c-HVSR can capture significantly more site-specific features in site responses than both GRA and SRI in the aggregate, especially at relatively high frequencies. c-HVSR achieves a "good match" in spectral shape at ~80%-90% of 145 testing sites, whereas GRA and SRI fail at most sites. GRA and SRI results have a high level of parametric and/or modeling errors which can be constrained, to some extent, by collecting on-site recordings.
A different class of refugee: university scholarships and developmentalism in late 1960s Africa
(2022)
Using documents assembled in connection with the 1967 Conference on the Legal, Economic and Social Aspects of African Refugee Problems, this article discusses African refugee higher-education discourses in the 1960s at the level of international organizations, volunteer agencies, and government representatives. Education and development history have recently been studied together, but this article focuses on the history of refugee higher education, which, it argues, needs to be understood within the development framework of human-capital theory, meant to support political pan African concerns for a decolonized continent and merged with humanitarian arguments to create a hybrid form of humanitarian developmentalism. The article zooms in on higher-education scholarships, above all for refugees from Southern Africa, as a means of support for human-capital development. It shows that refugee higher education was both a result and a driver of increased international exchanges, as evidenced at the 1967 conference.
The 2020s are an essential decade for achieving the 2030 Agenda and its Sustainable Development Goals (SDGs). For this, SDG research needs to provide evidence that can be translated into concrete actions. However, studies use different SDG data, resulting in incomparable findings. Researchers primarily use SDG databases provided by the United Nations (UN), the World Bank Group (WBG), and the Bertelsmann Stiftung & Sustainable Development Solutions Network (BE-SDSN). We compile these databases into one unified SDG database and examine the effects of the data selection on our understanding of SDG interactions. Among the databases, we observed more different than similar SDG interactions. Differences in synergies and trade-offs mainly occur for SDGs that are environmentally oriented. Due to the increased data availability, the unified SDG database offers a more nuanced and reliable view of SDG interactions. Thus, the SDG data selection may lead to diverse findings, fostering actions that might neglect or exacerbate trade-offs.
Labor unions’ greatest potential for political influence likely arises from their direct connection to millions of individuals at the workplace. There, they may change the ideological positions of both unionizing workers and their non-unionizing management. In this paper, we analyze the workplace-level impact of unionization on workers’ and managers’ political campaign contributions over the 1980-2016 period in the United States. To do so, we link establishment-level union election data with transaction-level campaign contributions to federal and local candidates. In a difference-in-differences design that we validate with regression discontinuity tests and a novel instrumental variables approach, we find that unionization leads to a leftward shift of campaign contributions. Unionization increases the support for Democrats relative to Republicans not only among workers but also among managers, which speaks against an increase in political cleavages between the two groups. We provide evidence that our results are not driven by compositional changes of the workforce and are weaker in states with Right-to-Work laws where unions can invest fewer resources in political activities.
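The difference-in-differences logic can be reduced to the textbook 2x2 case; the numbers below are hypothetical, not the paper's estimates.

```python
# Minimal 2x2 difference-in-differences sketch (illustrative values only):
# effect = (treated_post - treated_pre) - (control_post - control_pre),
# valid under the parallel-trends assumption.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Average treatment effect from group means before/after treatment."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# hypothetical mean Democratic share of contributions at establishments
# before/after a union election win vs. a comparison group
effect = did_estimate(treated_pre=0.55, treated_post=0.68,
                      control_pre=0.54, control_post=0.58)
```

Subtracting the control group's change nets out common time trends, which is why the design isolates the leftward shift attributable to unionization rather than to election cycles or secular trends.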
Nocardioides alcanivorans sp. nov., a novel hexadecane-degrading species isolated from plastic waste
(2022)
Strain NGK65(T), a novel hexadecane degrading, non-motile, Gram-positive, rod-to-coccus shaped, aerobic bacterium, was isolated from plastic polluted soil sampled at a landfill.
Strain NGK65(T) hydrolysed casein, gelatin, urea and was catalase-positive. It grew optimally at 28 degrees C, in 0-1% NaCl and at pH 7.5-8.0. Glycerol, D-glucose, arbutin, aesculin, salicin, potassium 5-ketogluconate, sucrose, acetate, pyruvate and hexadecane were used as sole carbon sources.
The predominant membrane fatty acids were iso-C-16:0 followed by iso-C-17:0 and C-18:1 omega 9c. The major polar lipids were phosphatidylglycerol, phosphatidylethanolamine, phosphatidylinositol and hydroxyphosphatidylinositol.
The cell-wall peptidoglycan type was A3 gamma, with LL-diaminopimelic acid and glycine as the diagnostic amino acids. MK 8 (H-4) was the predominant menaquinone. Phylogenetic analysis based on 16S rRNA gene sequences indicated that strain NGK65(T) belongs to the genus Nocardioides (phylum Actinobacteria), appearing most closely related to Nocardioides daejeonensis MJ31(T) (98.6%) and Nocardioides dubius KSL-104(T) (98.3%).
The genomic DNA G+C content of strain NGK65(T) was 68.2%.
Strain NGK65(T) and the type strains of species involved in the analysis had average nucleotide identity values of 71.9-78.3% as well as digital DNA-DNA hybridization values between 19.7 and 22.5%, which clearly indicated that the isolate represents a novel species within the genus Nocardioides.
Based on phenotypic and molecular characterization, strain NGK65(T) can clearly be differentiated from its phylogenetic neighbours to establish a novel species, for which the name Nocardioides alcanivorans sp. nov. is proposed.
The type strain is NGK65(T) (=DSM 113112(T)=NCCB 100846(T)).
R-Group stabilization in methylated formamides observed by resonant inelastic X-ray scattering
(2022)
The inherent stability of methylated formamides is traced to a stabilization of the deep-lying sigma-framework by resonant inelastic X-ray scattering at the nitrogen K-edge. Charge transfer from the amide nitrogen to the methyl groups underlie this stabilization mechanism that leaves the aldehyde group essentially unaltered and explains the stability of secondary and tertiary amides.
In light of substantial new discoveries of hot subdwarfs by ongoing spectroscopic surveys and the availability of the Gaia mission Early Data Release 3 (EDR3), we compiled new releases of two catalogues of hot subluminous stars: the data release 3 (DR3) catalogue of the known hot subdwarf stars contains 6616 unique sources and provides multi-band photometry, and astrometry from Gaia EDR3 as well as classifications based on spectroscopy and colours.
This is an increase of 742 objects over the DR2 catalogue.
This new catalogue provides atmospheric parameters for 3087 stars and radial velocities for 2791 stars from the literature. In addition, we have updated the Gaia Data Release 2 (DR2) catalogue of hot subluminous stars using the improved accuracy of the Gaia EDR3 data set together with updated quality and selection criteria to produce the Gaia EDR3 catalogue of 61 585 hot subluminous stars, representing an increase of 21 785 objects.
The improvements in Gaia EDR3 astrometry and photometry compared to Gaia DR2 have enabled us to define more sophisticated selection functions.
In particular, we improved hot subluminous star detection in the crowded regions of the Galactic plane as well as in the direction of the Magellanic Clouds by including sources with close apparent neighbours but with flux levels that dominate the neighbourhood.
Methane (CH4) from aquatic ecosystems contributes to about half of total global CH4 emissions to the atmosphere. Until recently, aquatic biogenic CH4 production was exclusively attributed to methanogenic archaea living under anoxic or suboxic conditions in sediments, bottom waters, and wetlands. However, evidence for oxic CH4 production (OMP) in freshwater, brackish, and marine habitats is increasing. Possible sources were found to be driven by various planktonic organisms supporting different OMP mechanisms. Surprisingly, submerged macrophytes have been fully ignored in studies on OMP, yet they are key components of littoral zones of ponds, lakes, and coastal systems. High CH4 concentrations in these zones have been attributed to organic substrate production promoting classic methanogenesis in the absence of oxygen. Here, we review existing studies and argue that, similar to terrestrial plants and phytoplankton, macroalgae and submerged macrophytes may directly or indirectly contribute to CH4 formation in oxic waters. We propose several potential direct and indirect mechanisms: (1) direct production of CH4; (2) production of CH4 precursors and facilitation of their bacterial breakdown or chemical conversion; (3) facilitation of classic methanogenesis; and (4) facilitation of CH4 ebullition. As submerged macrophytes occur in many freshwater and marine habitats, they are important in global carbon budgets and can strongly vary in their abundance due to seasonal and boom-bust dynamics. Knowledge on their contribution to OMP is therefore essential to gain a better understanding of spatial and temporal dynamics of CH4 emissions and thus to substantially reduce current uncertainties when estimating global CH4 emissions from aquatic ecosystems.
In the present paper we empirically investigate the psychometric properties of some of the most famous statistical and logical cognitive illusions from the "heuristics and biases" research program by Daniel Kahneman and Amos Tversky, who nearly 50 years ago introduced fascinating brain teasers such as the famous Linda problem, the Wason card selection task, and so-called Bayesian reasoning problems (e.g., the mammography task). In the meantime, a great number of articles has been published that empirically examine single cognitive illusions, theoretically explaining people's faulty thinking, or proposing and experimentally implementing measures to foster insight and to make these problems accessible to the human mind. Yet these problems have thus far usually been empirically analyzed on an individual-item level only (e.g., by experimentally comparing participants' performance on various versions of one of these problems). In this paper, by contrast, we examine these illusions as a group and look at the ability to solve them as a psychological construct. Based on a sample of N = 2,643 Luxembourgish school students aged 16-18, we investigate the internal psychometric structure of these illusions (i.e., Are they substantially correlated? Do they form a reflective or a formative construct?), their connection to related constructs (e.g., Are they distinguishable from intelligence or mathematical competence in a confirmatory factor analysis?), and the question of which of a person's abilities can predict the correct solution of these brain teasers (by means of a regression analysis).
In the past years, work-time in many industries has become more flexible, opening up a new channel for intertemporal substitution: workers might, instead of saving, adjust their work-time to smooth consumption. To study this channel, we set up a two-period consumption/saving model with wage uncertainty. This extends the standard saving model by also allowing a worker to allocate a fixed time budget between two work-shifts. To test the comparative statics implied by these two different channels, we conduct a laboratory experiment. A novel feature of our experiments is that we tie income to a real-effort style task. In four treatments, we turn on and off the two channels for consumption smoothing: saving and time allocation. Our main finding is that savings are strictly positive for at least 85 percent of subjects. We find that a majority of subjects also uses time allocation to smooth consumption and use saving and time shifting as substitutes, though not perfect substitutes. Part of the observed heterogeneity of precautionary behavior can be explained by risk preferences and motivations different from expected utility maximization.
It’s personal
(2021)
The new technologies of the Fourth Industrial Revolution (4IR) are disrupting traditional models of work and learning. While the impact of digitalization on education was already a point of serious deliberation, the COVID-19 pandemic has expedited ongoing transitions. With 90% of the world’s student population having been impacted by national lockdowns, online learning has gone from being a luxury to a necessity, in a context where around 3.6 billion people are offline. As the impacts of the 4IR unfold alongside the current crisis, it is not enough for future policy pathways to prioritize educational attainment in the traditional sense; it is essential to reimagine education itself as well as its delivery entirely. Future policy narratives will need to evaluate the very process of learning and identify the ways in which technology can help reduce existing disparities and enhance digital access, literacy and fluency in a scalable manner. In this context, this chapter analyses the status quo of online learning in India and Germany. Drawing on the experiences of these two economies with distinct trajectories of digitalization, the chapter explores how new technologies intersect with traditional education and local sociocultural conditions. Further, the limitations and opportunities presented by dominant ed-tech models are critically analyzed against the ongoing COVID-19 pandemic.
Simple and robust
(2021)
Some 7562 publications on Molecularly Imprinted Polymers (MIPs) have appeared in the literature within the last ten years (Scopus, September 7, 2020). Around 10 % of the papers published on MIPs describe the recognition of proteins. The straightforward synthesis of MIPs is a significant advantage as compared with the preparation of enzymes or antibodies. MIPs have been synthesized from only one up to six functional monomers while proteins are made up of 20 natural amino acids. Furthermore, they can be synthesized against structures of low immunogenicity and allow multi-analyte measurements via multi-target synthesis. Electrochemical methods allow simple polymer synthesis, removal of the template and readout. Among the different sensor configurations electrochemical MIP-sensors provide the broadest spectrum of protein analytes. The sensitivity of MIP-sensors is sufficiently high for biomarkers in the sub-nanomolar region, nevertheless the cross-reactivity of highly abundant proteins in human serum is still a challenge. MIPs for proteins offer innovative tools not only for clinical and environmental analysis, but also for bioimaging, therapy and protein engineering.
CovRadar
(2022)
The ongoing pandemic caused by SARS-CoV-2 emphasizes the importance of genomic surveillance to understand the evolution of the virus, to monitor the viral population, and to plan epidemiological responses. Detailed analysis, easy visualization and intuitive filtering of the latest viral sequences are powerful for this purpose. We present CovRadar, a tool for genomic surveillance of the SARS-CoV-2 Spike protein. CovRadar consists of an analytical pipeline and a web application that enable the analysis and visualization of hundreds of thousands of sequences. First, CovRadar extracts the regions of interest using local alignment, then builds a multiple sequence alignment, infers variants and consensus and finally presents the results in an interactive app, making accessing and reporting simple, flexible and fast.
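One step of such a pipeline, consensus calling from a multiple sequence alignment, can be sketched as a column-wise majority vote; this is a toy illustration, not CovRadar's actual code.

```python
# Toy consensus calling from already-aligned, equal-length sequences:
# take the most frequent residue in each alignment column.
from collections import Counter

def consensus(aligned_seqs):
    """Most frequent residue per alignment column (ties broken arbitrarily)."""
    return "".join(
        Counter(col).most_common(1)[0][0] for col in zip(*aligned_seqs)
    )

# three aligned Spike fragments with one variant position (hypothetical data)
seqs = ["NITNLCPF", "NITNLCPF", "NITKLCPF"]
```

Comparing each incoming sequence against such a consensus is one simple way to flag candidate variant positions for reporting.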
We and AI
(2021)
Many phenomena of high relevance for economic development such as human capital, geography and climate vary considerably within countries as well as between them. Yet, global data sets of economic output are typically available at the national level only, thereby limiting the accuracy and precision of insights gained through empirical analyses. Recent work has used interpolation and downscaling to yield estimates of sub-national economic output at a global scale, but respective data sets based on official, reported values only are lacking. We here present DOSE — the MCC-PIK Database Of Sub-national Economic Output. DOSE contains harmonised data on reported economic output from 1,661 sub-national regions across 83 countries from 1960 to 2020. To avoid interpolation, values are assembled from numerous statistical agencies, yearbooks and the literature and harmonised for both aggregate and sectoral output. Moreover, we provide temporally- and spatially-consistent data for regional boundaries, enabling matching with geo-spatial data such as climate observations. DOSE provides the opportunity for detailed analyses of economic development at the subnational level, consistent with reported values.
It is well-documented that academic achievement is associated with students' self-perceptions of their academic abilities, that is, their academic self-concepts. However, low-achieving students may apply self-protective strategies to maintain a favorable academic self-concept when evaluating their academic abilities. Consequently, the relation between achievement and academic self-concept might not be linear across the entire achievement continuum. Capitalizing on representative data from three large-scale assessments (i.e., TIMSS, PIRLS, PISA; N = 470,804), we conducted an integrative data analysis to address nonlinear trends in the relations between achievement and the corresponding self-concepts in mathematics and the verbal domain across 13 countries and 2 age groups (i.e., elementary and secondary school students). Polynomial and interrupted regression analyses showed nonlinear relations in secondary school students, demonstrating that the relations between achievement and the corresponding self-concepts were weaker for lower achieving students than for higher achieving students. Nonlinear effects were also present in younger students, but the pattern of results was rather heterogeneous. We discuss implications for theory as well as for the assessment and interpretation of self-concept.
Income inequality and taxes
(2023)
Economic literature offers several distinct explanations for the rising income inequality observed in several countries. In the debate about the causes of inequality a growing strand of research focuses on the effects of taxation on income inequality. We contribute to this literature by providing a systematic empirical account of the relationship between income inequality and personal income taxation (PIT) for a set of countries over the period 1981–2005. In order to take alternative explanations into account and to isolate the effects of tax progressivity, we include a wide range of control variables. We address potential reverse causality between inequality and PIT by using the variation in tax schedules of neighbouring countries. Our results confirm a statistically significant negative association between the progressivity of PIT and income inequality. Overall, we find that especially the average and the marginal tax rate have the potential to reduce income inequality. This finding is qualitatively robust across various different empirical specifications.
Although the literature on the determinants of training has considered individual and firm-related characteristics, it has generally neglected regional factors. This is surprising, given the fact that labour markets differ by regions. Regional factors are often ignored because (both in Germany and abroad) many data sets covering training information do not include detailed geographical identifiers that would allow a merging of information on the regional level. The regional identifiers of the National Educational Panel Study (Starting Cohort 6) offer opportunities to advance research on several regional factors. This article summarizes the results from two studies that exploit these unique opportunities to investigate the relationship between training participation and (a) the local level of firm competition for workers within specific sectors of the economy and (b) the regional supply of training measured as the number of firms offering courses or seminars for potential training participants.
Personal data increasingly serve as inputs to public goods. Like other types of contributions to public goods, personal data are likely to be underprovided. We investigate whether classical remedies to underprovision are also applicable to personal data and whether the privacy-sensitive nature of personal data must be additionally accounted for. In a randomized field experiment on a public online education platform, we prompt users to complete their profiles with personal information. Compared to a control message, we find that making public benefits salient increases the number of personal data contributions significantly. This effect is even stronger when additionally emphasizing privacy protection, especially for sensitive information. Our results further suggest that emphasis on both public benefits and privacy protection attracts personal data from a more diverse set of contributors.
Does loss aversion apply to social image concerns? In a laboratory experiment, we first induce social image in a relevant domain, intelligence, through public ranking. In a second stage, subjects experience a change in rank and are offered scope for lying to improve their final, also publicly reported rank. Subjects who care about social image and experience a decline in rank lie more than those experiencing gains. Moreover, we document a discontinuity in lying behavior when moving from rank losses to gains. Our results are in line with loss aversion in social image concerns.
From learners to educators
(2020)
The rapid growth of technology and its evolving potential to support the transformation of teaching and learning in post-secondary institutions is a major challenge to the basic understanding of both the university and the communities it serves. In higher education, the standard forms of learning and teaching are increasingly being challenged and a more comprehensive process of differentiation is taking place. Student-centered teaching methods are becoming increasingly important in course design, and the role of the lecturer is changing from knowledge mediator to moderator and learning companion. However, this is accelerating the need for strategically planned faculty support and a reassessment of the role of teaching and learning. Even though the benefits of experience-based learning approaches for the development of life skills are well known, most knowledge transfer in higher education is still realized through lectures. Teachers aim to design the curriculum, create new assignments, and share insights into evolving pedagogy. Student engagement could be the most important factor in the learning success of university students, regardless of the university program or teaching format. Against this background, this article presents the development, application, and initial findings of an innovative learning concept. In this concept, students engage with a scientific topic, but instead of giving a presentation and a written elaboration, their examination consists of developing an online course, in terms of content, didactics, and concept, and implementing it in a state-of-the-art learning environment. The online courses include both self-created teaching material and interactive tasks. After a review process, the courses are made available to other students as learning material and are thus incorporated into the curriculum.
Atwood analyzes the effects of the 1963 U.S. measles vaccination on long-run labor market outcomes, using a generalized difference-in-differences approach. We reproduce the results of this paper and perform a battery of robustness checks. Overall, we confirm that the measles vaccination had positive labor market effects. While the negative effect on the likelihood of living in poverty and the positive effect on the probability of being employed are very robust across the different specifications, the headline estimate—the effect on earnings—is more sensitive to the exclusion of certain regions and survey years.
Using data from the German Socio-Economic Panel and exploiting the staggered implementation of a compulsory schooling reform in West Germany, this article finds that an additional year of schooling lowers the probability of being very concerned about immigration to Germany by around six percentage points (20 percent). Furthermore, our findings imply significant spillovers from maternal education to immigration attitudes of her offspring. While we find no evidence for returns to education within a range of labor market outcomes, higher social trust appears to be an important mechanism behind our findings.
This paper provides novel evidence on the impact of public transport subsidies on air pollution. We obtain causal estimates by leveraging a unique policy intervention in Germany that temporarily reduced nationwide prices for regional public transport to a monthly flat rate of 9 Euros. Using DiD estimation strategies on air pollutant data, we show that this intervention causally reduced a benchmark air pollution index by more than eight percent; after the policy's termination, pollution increased again. Our results illustrate that public transport subsidies, especially in the context of spatially constrained cities, offer a viable alternative for policymakers and city planners to improve air quality, which has been shown to crucially affect health outcomes.
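The difference-in-differences (DiD) logic behind this kind of estimate can be sketched on simulated data. Everything here is illustrative and not the paper's design: the unit count, the treatment effect of -8 index points, and the two-period structure are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative DiD sketch: compare a pollution index in treated
# vs. control units, before and during a hypothetical fare-
# reduction period. The true effect (-8) is assumed, not estimated
# from real data.
n_units, effect = 200, -8.0
unit_fe = rng.normal(100.0, 5.0, n_units)   # unit fixed effects
treated = np.arange(n_units) < n_units // 2

rows = []
for period in (0, 1):  # 0 = before, 1 = during policy
    for i in range(n_units):
        y = unit_fe[i] + 2.0 * period + effect * period * treated[i]
        rows.append((treated[i], period, y + rng.normal(0.0, 1.0)))

t = np.array(rows)

def cell_mean(tr, pe):
    sel = (t[:, 0] == tr) & (t[:, 1] == pe)
    return t[sel, 2].mean()

# DiD estimate: (treated after - before) - (control after - before);
# unit fixed effects and the common period shock cancel out.
did = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
print(round(did, 2))  # close to the assumed effect of -8
```

The key identifying step is visible in the last line: the common time trend (+2.0 per period) and the unit-level baselines difference out, leaving only the treatment effect.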
House price expectations
(2023)
This study examines short-, medium-, and long-run price expectations in housing markets. At the heart of our analysis is the combination of data from a tailored in-person household survey, past sale offerings, satellite imagery on developable land, and an information treatment (RCT). As a novel finding, we show that price expectations exhibit no momentum effects in the long run. We also do not find much evidence for behavioural biases in expectations related to individual housing tenure decisions. Confirming existing findings, we find momentum effects in the short run and that individuals, to a limited extent, use aggregate price information to update local expectations. Lastly, we provide suggestive evidence corroborating existing findings that expectations are relevant for portfolio choice.
Insulin is the main anabolic hormone secreted by β-cells of the pancreas, stimulating the assimilation and storage of glucose in muscle and fat cells. It modulates the postprandial balance of carbohydrates, lipids, and proteins by enhancing lipogenesis, glycogen and protein synthesis and suppressing glucose generation and its release from the liver. Insulin resistance is a severe metabolic disorder characterized by a diminished response of peripheral tissues to insulin action and signaling. This leads to disturbed glucose homeostasis that precedes the onset of type 2 diabetes (T2D), a disease reaching epidemic proportions. A large number of studies have reported an association between elevated circulating fatty acids and the development of insulin resistance. The increased fatty acid flux results in the accumulation of lipid droplets in a variety of tissues. However, lipid intermediates such as diacylglycerols and ceramides are also formed in response to elevated fatty acid levels. These bioactive lipids have been associated with the pathogenesis of insulin resistance. More recently, sphingosine 1-phosphate (S1P), another bioactive sphingolipid derivative, has also been shown to increase in T2D and obesity. Although many studies propose a protective role of S1P metabolism in insulin signaling in peripheral tissues, other studies suggest a causal role of S1P in insulin resistance. In this review, we critically summarize the current state of knowledge of S1P metabolism and its modulating role in insulin resistance. A particular emphasis is placed on S1P and insulin signaling in hepatocytes, skeletal muscle cells, adipocytes, and pancreatic β-cells. In particular, modulation of the receptors and enzymes that regulate S1P metabolism can be considered a new therapeutic option for the treatment of insulin resistance and T2D.
The role of the monoaminergic system in the feeding behavior of neonatal chicks has been reported, but the functional relationship between the metabolism of monoamines and appetite-related neuropeptides is still unclear. This study aimed to investigate the changes in catecholamine and indolamine metabolism in response to the central action of neuropeptide Y (NPY) in different feeding statuses and the underlying mechanisms. In Experiment 1, the diencephalic concentrations of amino acids and monoamines following intracerebroventricular (ICV) injection of NPY (375 pmol/10 μl/chick) or saline under ad libitum and fasting conditions for 30 min were determined. Central NPY significantly decreased the concentration of L-tyrosine, the precursor of catecholamines, under feeding but not fasting conditions. Central NPY significantly increased dopamine metabolites, including 3,4-dihydroxyphenylacetic acid and homovanillic acid (HVA). The concentration of 3-methoxy-4-hydroxyphenylglycol was significantly reduced under feeding conditions, but did not change under fasting conditions with NPY. However, no effects of NPY on indolamine metabolism were found in either feeding status. Therefore, the mechanism of action of catecholamines with central NPY under feeding conditions was elucidated in Experiment 2. Central NPY significantly attenuated diencephalic gene expression of catecholaminergic synthetic enzymes, such as tyrosine hydroxylase, L-aromatic amino acid decarboxylase, and GTP cyclohydrolase I, after 30 min of feeding. In Experiment 3, co-injection of alpha-methyl-L-tyrosine, an inhibitor of tyrosine hydroxylase, with NPY moderately attenuated the orexigenic effect of NPY, accompanied by a significant positive correlation between food intake and HVA levels.
In Experiment 4, there was a significant interaction between NPY and clorgyline, an inhibitor of monoamine oxidase A, upon ICV co-injection, which implies that the co-existence of NPY and clorgyline enhances the orexigenic effect of NPY. In conclusion, central NPY modifies part of catecholamine metabolism, as illustrated by the involvement of dopamine transmission and metabolism under feeding, but not fasting, conditions.
From about 7 months of age onward, infants start to reliably fixate the goal of an observed action, such as a grasp, before the action is complete. The available research has identified a variety of factors that influence such goal-anticipatory gaze shifts, including the experience with the shown action events and familiarity with the observed agents. However, the underlying cognitive processes are still heavily debated. We propose that our minds (i) tend to structure sensorimotor dynamics into probabilistic, generative event-predictive, and event boundary predictive models, and, meanwhile, (ii) choose actions with the objective to minimize predicted uncertainty. We implement this proposition by means of event-predictive learning and active inference. The implemented learning mechanism induces an inductive, event-predictive bias, thus developing schematic encodings of experienced events and event boundaries. The implemented active inference principle chooses actions by aiming at minimizing expected future uncertainty. We train our system on multiple object-manipulation events. As a result, the generation of goal-anticipatory gaze shifts emerges while learning about object manipulations: the model starts fixating the inferred goal already at the start of an observed event after having sampled some experience with possible events and when a familiar agent (i.e., a hand) is involved. Meanwhile, the model keeps reactively tracking an unfamiliar agent (i.e., a mechanical claw) that is performing the same movement. We qualitatively compare these modeling results to behavioral data of infants and conclude that event-predictive learning combined with active inference may be critical for eliciting goal-anticipatory gaze behavior in infants.
This study is dedicated to the interdependencies between digital sovereignty and sustainable digitalization, which need to be explicitly linked to an increasing degree in political discourse, academia, and societal debates. Digital skills are the prerequisites for shaping digitalization in the interest of society and sustainable development.
The management of knowledge in organizations encompasses both established long-term processes and cooperation in agile project teams. Since knowledge can be both tacit and explicit, its transfer from the individual to the organizational knowledge base poses a challenge in organizations. This challenge increases when the fluctuation of knowledge carriers is exceptionally high. Especially in large projects involving external consultants, there is a risk that critical, company-relevant knowledge generated in the project will leave the company with the external knowledge carrier and thus be lost. In this paper, we show the advantages of an early warning system for knowledge management to avoid this loss. In particular, the potential of visual analytics in the context of knowledge management systems is presented and discussed. We present a project for the development of a business-critical software system and discuss the first implementations and results.
The fluxes of water and solutes in the subsurface compartment of the Critical Zone are temporally dynamic, and it is unclear how this affects microbially mediated nutrient cycling in the spatially heterogeneous subsurface. To investigate this, we undertook numerical modeling, simulating transport in a wide range of spatially heterogeneous domains and the biogeochemical transformation of organic carbon and nitrogen compounds by a complex microbial community with four distinct functional groups in water-saturated subsurface compartments. We performed a comprehensive uncertainty analysis accounting for varying residence times and spatial heterogeneity. While the aggregated removal of chemical species in the domains over the entire simulation period was approximately the same as under steady-state conditions, the sub-scale temporal variation of microbial biomass and chemical discharge from a domain depended strongly on the interplay of spatial heterogeneity and the temporal dynamics of the forcing. We showed that the travel time and the Damköhler number (Da) can be used to predict the temporally varying chemical discharge from a spatially heterogeneous domain. In homogeneous domains, chemical discharge under temporally dynamic conditions could be double that under steady-state conditions, while microbial biomass varied by up to 75% of its steady-state value. In heterogeneous domains, the interquartile range of uncertainty in chemical discharge in reaction-dominated systems (log10 Da > 0) was double that under steady-state conditions. However, highly heterogeneous domains produced outliers in which chemical discharge could be as high as 10-20 times the steady-state value during high-flow periods. In transport-dominated systems (log10 Da < 0), the chemical discharge could be half the steady-state value under unusually low-flow conditions.
In conclusion, ignoring spatio-temporal heterogeneities in a numerical modeling approach may lead to inaccurate estimates of nutrient export and microbial biomass. The results are relevant to long-term field monitoring studies and to homogeneous soil-column-scale experiments investigating the role of temporal dynamics in microbial redox dynamics.
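The regime classification used above can be made concrete with the standard definition of the Damköhler number as the ratio of the transport timescale to the reaction timescale (equivalently, travel time multiplied by a first-order reaction rate). The numerical values below are illustrative only, not taken from the study.

```python
import math

# Damköhler number: Da = travel time / reaction timescale
#                      = travel_time * first-order reaction rate.
# log10(Da) > 0 -> reaction-dominated; log10(Da) < 0 -> transport-dominated.
def damkohler(travel_time_days: float, reaction_rate_per_day: float) -> float:
    return travel_time_days * reaction_rate_per_day

# Two hypothetical flow regimes with the same reaction rate:
for tau, k in [(100.0, 0.1), (1.0, 0.1)]:
    da = damkohler(tau, k)
    regime = "reaction-dominated" if math.log10(da) > 0 else "transport-dominated"
    print(f"travel time {tau} d, rate {k}/d -> Da = {da:g} ({regime})")
```

A long travel time relative to the reaction timescale (Da = 10 in the first case) means solutes react before leaving the domain; a short one (Da = 0.1) means they are flushed out largely unreacted, which is why discharge in such systems is more sensitive to flow variability.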
Ethnic-racial identity (ERI) is an important aspect of youth development and has been well-studied for the last several decades. One issue less discussed is how the construct of ERI translates across different countries and cultures. The purpose of our paper is to describe the sociohistorical context of Germany and implications for the study of ethnic-racial identity in Europe. We discuss the German adaption of the Identity Project, an 8-week school-based ethnic-racial identity exploration intervention developed in the United States. We use this as a concrete example of how we thought through the focal construct of ERI to figure out how and whether it is a salient social identity category for youth in Germany where, in response to the history of racially motivated genocide, discussions of "race" are taboo. Digging into the ways ERI may not be directly transferable to different contexts can help us understand its nature as a socially constructed identity with real-life implications. Our hope with this paper is to further discussion, question our conceptualizations, and acknowledge how a detailed understanding of sociohistorical contexts is needed for the study of ERI.
Destabilization of super-rotating Taylor-Couette flows by current-free helical magnetic fields
(2021)
In an earlier paper we showed that the combination of azimuthal magnetic fields and super-rotation in Taylor-Couette flows of conducting fluids can be unstable against non-axisymmetric perturbations if the magnetic Prandtl number of the fluid is Pm ≠ 1. Here we demonstrate that the addition of a weak axial field component allows axisymmetric perturbation patterns for Pm of order unity depending on the boundary conditions. The axisymmetric modes only occur for magnetic Mach numbers (of the azimuthal field) of order unity, while higher values are necessary for the non-axisymmetric modes. The typical growth time of the instability and the characteristic time scale of the axial migration of the axisymmetric mode are long compared with the rotation period, but short compared with the magnetic diffusion time. The modes travel in the positive or negative z direction along the rotation axis depending on the sign of BφBz. We also demonstrate that the azimuthal components of flow and field perturbations travel in phase if |Bφ| >> |Bz|, independent of the form of the rotation law. Within a short-wave approximation for thin gaps it is also shown (in an appendix) that for ideal fluids the considered helical magnetorotational instability only exists for rotation laws with negative shear.
The influence of acute sprint interval training on cognitive performance of healthy younger adults
(2022)
There is considerable evidence showing that an acute bout of physical exercise can improve cognitive performance, but the optimal exercise characteristics (e.g., exercise type and exercise intensity) remain elusive. In this regard, there is a gap in the literature regarding the extent to which sprint interval training (SIT) can enhance cognitive performance. Thus, this study aimed to investigate the effect of a time-efficient SIT, termed "shortened-sprint reduced-exertion high-intensity interval training" (SSREHIT), on cognitive performance. Nineteen healthy adults aged 20-28 years were enrolled and assessed for attentional performance (via the d2 test), working memory performance (via Digit Span Forward/Backward), and peripheral blood lactate concentration immediately before and 10 min after an SSREHIT and a cognitive engagement control condition (i.e., reading). We observed that SSREHIT can enhance specific aspects of attentional performance, as it improved the percent error rate (F%) in the d2 test (t(18) = -2.249, p = 0.037, d = -0.516), which constitutes a qualitative measure of precision and thoroughness. However, SSREHIT did not change other measures of attentional or working memory performance. In addition, we observed that the exercise-induced increase in peripheral blood lactate levels correlated with changes in attentional performance, i.e., the total number of responses (GZ) (r_m = 0.70, p < 0.001), objective measures of concentration (SKL) (r_m = 0.73, p < 0.001), and F% (r_m = -0.54, p = 0.015). The present study provides initial evidence that a single bout of SSREHIT can improve specific aspects of attentional performance and confirming evidence for a positive link between cognitive improvements and changes in peripheral blood lactate levels.
Due to the COVID-19 pandemic, all schools in Germany were locked down for several months in 2020. How schools implemented teaching during the school lockdown varied greatly from school to school. N = 2,647 parents participated in an online survey and rated the following activities of teachers in mathematics, language arts (German), English, and science / biology during the school lockdown: frequency of sending task assignments, sending task solutions and requesting solutions, giving task-related feedback, grading tasks, providing lessons per videoconference, and communicating via telecommunication tools with students and / or parents. Parents also reported student academic outcomes during the school lockdown (child's learning motivation, competent and independent learning, learning progress). Parents further reported student characteristics and social background variables: child's negative emotionality, school engagement, mathematical and language competencies, and child's social and cultural capital. Data were separately analyzed for elementary and secondary schools. In both samples, frequency of student-teacher communication was associated with all academic outcomes, except for learning progress in elementary school. Frequency of parent-teacher communication was associated with motivation and learning progress, but not with competent and independent learning, in both samples. Other distant teaching activities were differentially related to students' academic outcomes in elementary vs. secondary school. School engagement explained most additional variance in all students' outcomes during the school lockdown. Parents' highest school leaving certificate incrementally predicted students' motivation and competent and independent learning in secondary school, as well as learning progress in elementary school. The variable "child has own bedroom" additionally explained variance in students' competent and independent learning during the school lockdown in both samples.
Thus, both teaching activities during the school lockdown as well as children's characteristics and social background were independently important for students' motivation, competent and independent learning, and learning progress. Results are discussed with regard to their practical implications for realizing distant teaching.
The nanoscale combination of a conductive carbon and a carbon-based material with abundant heteroatoms for battery electrodes is a method to overcome the limitation that the latter has high affinity to alkali metal ions but low electronic conductivity. The synthetic protocol and the individual ratios and structures are important aspects influencing the properties of such multifunctional compounds. Their interplay is, herein, investigated by infiltration of a porous ZnO-templated carbon (ZTC) with nitrogen-rich carbon obtained by condensation of hexaazatriphenylene-hexacarbonitrile (HAT-CN) at 550–1000 °C. The density of lithiophilic sites can be controlled by HAT-CN content and condensation temperature. Lithium storage properties are significantly improved in comparison with those of the individual compounds and their physical mixtures. Depending on the uniformity of the formed composite, loading ratio and condensation temperature have a different influence. The most stable operation at high capacity per used monomer is achieved with a slowly dried composite with an HAT-CN:ZTC mass ratio of 4:1, condensed at 550 °C, providing more than 400 mAh g⁻¹ discharge capacity at 0.1 A g⁻¹ and a capacity retention of 72% after 100 cycles of operation at 0.5 A g⁻¹, owing to the homogeneity of the composite and the high content of lithiophilic sites.