This dissertation examines the integration of incongruent visual-scene and morphological-case information (“cues”) in building thematic-role representations of spoken relative clauses in German.
Addressing the mutual influence of visual and linguistic processing, the Coordinated Interplay Account (CIA) describes a two-step mechanism supporting visuo-linguistic integration (Knoeferle & Crocker, 2006, Cog Sci). However, the outcomes and dynamics of integrating incongruent thematic-role representations from distinct sources have scarcely been investigated. Further, there is evidence that both second-language (L2) and older speakers may rely on non-syntactic cues relatively more than first-language (L1)/young speakers. Yet, the role of visual information for thematic-role comprehension has not been measured in L2 speakers, and only to a limited extent across the adult lifespan.
Thematically unambiguous canonically ordered (subject-extracted) and noncanonically ordered (object-extracted) spoken relative clauses in German (see 1a-b) were presented in isolation and alongside visual scenes conveying either the same (congruent) or the opposite (incongruent) thematic relations as the sentence did.
1 a Das ist der Koch, der die Braut verfolgt.
This is the.NOM cook who.NOM the.ACC bride follows
This is the cook who is following the bride.
b Das ist der Koch, den die Braut verfolgt.
This is the.NOM cook whom.ACC the.NOM bride follows
This is the cook whom the bride is following.
The relative contribution of each cue to thematic-role representations was assessed with agent identification. Accuracy and latency data were collected post-sentence from a sample of L1 and L2 speakers (Zona & Felser, 2023), and from a sample of L1 speakers from across the adult lifespan (Zona & Reifegerste, under review). In addition, the moment-by-moment dynamics of thematic-role assignment were investigated with mouse tracking in a young L1 sample (Zona, under review).
The following questions were addressed: (1) How do visual scenes influence thematic-role representations of canonical and noncanonical sentences? (2) How does reliance on visual-scene, case, and word-order cues vary in L1 and L2 speakers? (3) How does reliance on visual-scene, case, and word-order cues change across the lifespan?
The results showed reliable effects of incongruence between visually and linguistically conveyed thematic relations on thematic-role representations. Incongruent (vs. congruent) scenes yielded slower and less accurate responses to agent-identification probes presented post-sentence. The recently inspected agent was considered the most likely agent ~300 ms after trial onset, and the convergence of visual scenes and word order enabled comprehenders to assign thematic roles predictively.
L2 (vs. L1) participants relied more on word order overall. In response to noncanonical clauses presented with incongruent visual scenes, sensitivity to case predicted the size of incongruence effects better than L1-L2 grouping. These results suggest that the individual’s ability to exploit specific cues might predict their weighting.
Sensitivity to case was stable throughout the lifespan, while visual effects increased with increasing age and were modulated by individual interference-inhibition levels. Thus, age-related changes in comprehension may stem from stronger reliance on visually (vs. linguistically) conveyed meaning.
These patterns represent evidence for a recent-role preference – i.e., a tendency to re-assign visually conveyed thematic roles to the same referents in temporally coordinated utterances. The findings (i) extend the generalizability of CIA predictions across stimuli, tasks, populations, and measures of interest, (ii) contribute to specifying the outcomes and mechanisms of detecting and indexing incongruent representations within the CIA, and (iii) speak to current efforts to understand the sources of variability in sentence comprehension.
Since 2013, the Committee on Economic, Social and Cultural Rights can examine individual communications under the Optional Protocol to the International Covenant on Economic, Social and Cultural Rights (ICESCR). This opens up the possibility of interpreting Covenant provisions in a thorough manner. With regard to forced evictions and the right to housing under Article 11 ICESCR, one can discern a fast-developing approach concerning the proportionality analysis of evictions, entailing the establishment of specific criteria that may guide such analysis. This paper seeks to delineate these developments and will also shed light on possible general trends on the topic of limitations within the Committee’s emerging jurisprudence. In doing so, the paper will address whether, and how, the developing proportionality analysis under the individual complaints procedure takes into consideration multi-discriminatory dimensions of State measures and how it specifically relates to or incorporates other ICESCR concepts, such as minimum core obligations or the reasonableness review under Article 8(4) OP ICESCR.
The conception of property at the basis of Hegel’s conception of abstract right seems committed to a problematic form of “possessive individualism.” It seems to conceive of right as the expression of human mastery over nature and as based upon an irreducible opposition of person and nature, rightful will, and rightless thing. However, this chapter argues that Hegel starts with a form of possessive individualism only to show that it undermines itself. This is evident in the way Hegel unfolds the nature of property as it applies to external things as well as in the way he explains our self-ownership of our own bodies and lives. Hegel develops the idea of property to a point where it reaches a critical limit and encounters the “true right” that life possesses against the “formal” and “abstract right” of property. Ultimately, Hegel’s account suggests that nature should precisely not be treated as a rightless object at our arbitrary disposal but acknowledged as the inorganic body of right.
In his 1844 Economic and Philosophic Manuscripts, Marx famously claims that the human being is or has a ‘Gattungswesen.’ This is often understood to mean that the human being is a ‘species-being’ and is determined by a given ‘species-essence.’ In this chapter, I argue that this reading is mistaken. What Marx calls Gattungswesen is precisely not a ‘species-being,’ but a being that, in a very specific sense, transcends the limits of its own given species. This different understanding of the genus-character of the human being opens up a new perspective on the naturalism of the early Marx. He is not informed by a problematic speciesist and essentialist naturalism, as is often assumed, but by a different form of naturalism which I propose to call ‘dialectical naturalism.’ The chapter starts (I) by developing Hegel’s account of genus, which provides us with a useful background for (II) understanding Marx’s original notion of a genus-being and its practical, social, developmental character. In the last section, I show that (III) the actualization of our genus-being thus depends on the production of a specific type of ‘second nature’ that is at the heart of Marx’s dialectical naturalism.
The art of second nature
(2022)
Symbiotic X-ray binaries are systems hosting a neutron star accreting from the wind of a late-type companion. These are rare objects, and so far only a handful of them are known. One of the most puzzling aspects of the symbiotic X-ray binaries is the possibility that they contain strongly magnetized neutron stars. These are expected to be evolutionarily much younger than their evolved companions and could thus be formed through the (yet poorly known) accretion-induced collapse of a white dwarf. In this paper, we perform broad-band X-ray and soft gamma-ray spectroscopy of two known symbiotic binaries, Sct X-1 and 4U 1700+24, looking for the presence of cyclotron scattering features that could confirm the presence of strongly magnetized neutron stars. We exploited available Chandra, Swift, and NuSTAR data. We find no evidence of cyclotron resonant scattering features (CRSFs) in the case of Sct X-1, but in the case of 4U 1700+24 we suggest the presence of a possible CRSF at ~16 keV and its first harmonic at ~31 keV, although we could not exclude alternative spectral models for the broad-band fit. If confirmed by future observations, 4U 1700+24 could be the second symbiotic X-ray binary with a highly magnetized accretor. We also report on our long-term monitoring of the last discovered symbiotic X-ray binary, IGR J17329-2731, performed with Swift/XRT. The monitoring revealed that, as predicted, in 2017 this object became a persistent and variable source, showing X-ray flares lasting for a few days and intriguing obscuration events that are interpreted in the context of clumpy wind accretion.
Drought and the availability of mineable phosphorus minerals used for fertilization are two of the important issues agriculture is facing in the future. High phosphorus availability in soils is necessary to maintain high agricultural yields, while drought is one of the major threats to terrestrial ecosystem performance and crop production. Among the measures proposed to cope with the upcoming challenges of intensifying drought stress and to decrease the need for phosphorus fertilizer application is fertilization with silica (Si). Here we tested the effect of soil Si fertilization on wheat phosphorus concentration as well as on wheat performance during drought at the field scale. Our data clearly showed higher soil moisture in the Si-fertilized plots. This higher soil moisture contributes to better plant performance in terms of higher photosynthetic activity, later senescence, and faster stomatal responses, ensuring higher productivity during drought periods. The plant phosphorus concentration was also higher in Si-fertilized plots compared to control plots. Overall, Si fertilization, or management of the soil Si pools, seems to be a promising tool for maintaining crop production under the predicted longer and more severe droughts of the future, while reducing phosphorus fertilizer requirements.
Non-fullerene acceptors (NFAs) are far more emissive than their fullerene-based counterparts. Here, we study the spectral properties of photocurrent generation and recombination of the blend of the donor polymer PM6 with the NFA Y6. We find that the radiative recombination of free charges is almost entirely due to the re-occupation and decay of Y6 singlet excitons, but that this pathway contributes less than 1% to the total recombination. As such, the open-circuit voltage of the PM6:Y6 blend is determined by the energetics and kinetics of the charge-transfer (CT) state. Moreover, we find that no information on the energetics of the CT state manifold can be gained from the low-energy tail of the photovoltaic external quantum efficiency spectrum, which is dominated by the excitation spectrum of the Y6 exciton. We, finally, estimate the charge-separated state to lie only 120 meV below the Y6 singlet exciton energy, meaning that this blend indeed represents a high-efficiency system with a low energetic offset.
Identification of protein complexes from protein-protein interaction (PPI) networks is a key problem in PPI mining, solved by parameter-dependent approaches that suffer from small recall rates. Here we introduce GCC-v, a family of efficient, parameter-free algorithms to accurately predict protein complexes using the (weighted) clustering coefficient of proteins in PPI networks. Through comparative analyses with gold standards and PPI networks from Escherichia coli, Saccharomyces cerevisiae, and Homo sapiens, we demonstrate that GCC-v outperforms twelve state-of-the-art approaches for identification of protein complexes with respect to twelve performance measures in at least 85.71% of scenarios. We also show that GCC-v results in the exact recovery of ~35% of protein complexes in a pan-plant PPI network and discover 144 new protein complexes in Arabidopsis thaliana, with high support from GO semantic similarity. Our results indicate that findings from GCC-v are robust to network perturbations, which has direct implications for assessing the impact of PPI network quality on the predicted protein complexes. (C) 2021 The Author(s). Published by Elsevier B.V. on behalf of Research Network of Computational and Structural Biotechnology.
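The quantity at the core of GCC-v, the clustering coefficient of a protein in the PPI network, can be sketched in a few lines. The toy graph and function below are our illustration, not the GCC-v implementation (the weighted variant additionally uses edge weights rather than plain triangle counts):

```python
from itertools import combinations

def clustering_coefficient(adj, v):
    """Local clustering coefficient of node v.

    adj: dict mapping node -> set of neighbours (undirected PPI graph).
    C(v) = 2 * T(v) / (deg(v) * (deg(v) - 1)), where T(v) counts the
    edges among v's neighbours (closed triangles through v).
    """
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    triangles = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * triangles / (k * (k - 1))

# Toy PPI graph: a triangle {A, B, C} plus a pendant protein D.
ppi = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}
print(clustering_coefficient(ppi, "B"))  # → 1.0 (both neighbours interact)
print(clustering_coefficient(ppi, "A"))  # → 1/3 (one of three neighbour pairs linked)
```

Proteins whose neighbourhoods are densely interconnected (high coefficient) are natural seeds for complex candidates, which is the intuition a clustering-coefficient-based predictor exploits.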
Deliberative and paternalistic interaction styles for conversational agents in digital health
(2021)
Background:
Recent years have witnessed a constant increase in the number of people with chronic conditions requiring ongoing medical support in their everyday lives. However, global health systems are not adequately equipped for this extraordinarily time-consuming and cost-intensive development. Here, conversational agents (CAs) can offer easily scalable and ubiquitous support. However, different aspects of CAs have not yet been sufficiently investigated to fully exploit their potential. One such trait is the interaction style between patients and CAs. In human-to-human settings, the interaction style is an imperative part of the interaction between patients and physicians. Patient-physician interaction is recognized as a critical success factor for patient satisfaction, treatment adherence, and subsequent treatment outcomes. However, so far, it remains effectively unknown how different interaction styles can be implemented into CA interactions and whether these styles are recognizable by users.
Objective:
The objective of this study was to develop an approach to reproducibly induce 2 specific interaction styles into CA-patient dialogs and subsequently test and validate them in a chronic health care context.
Methods:
On the basis of the Roter Interaction Analysis System and iterative evaluations by scientific experts and medical health care professionals, we identified 10 communication components that characterize the 2 developed interaction styles: deliberative and paternalistic interaction styles. These communication components were used to develop 2 CA variations, each representing one of the 2 interaction styles. We assessed them in a web-based between-subject experiment. The participants were asked to put themselves in the position of a patient with chronic obstructive pulmonary disease. These participants were randomly assigned to interact with one of the 2 CAs and subsequently asked to identify the respective interaction style. Chi-square test was used to assess the correct identification of the CA-patient interaction style.
Results:
A total of 88 individuals (42/88, 48% female; mean age 31.5 years, SD 10.1 years) fulfilled the inclusion criteria and participated in the web-based experiment. The participants in both the paternalistic and deliberative conditions correctly identified the underlying interaction styles of the CAs in more than 80% of the assessments (χ²(1, N=88)=38.2; P<.001; phi coefficient r_φ=0.68). The validation of the procedure was hence successful.
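The test statistic behind such a validation is a Pearson chi-square on the 2×2 table of assigned condition × identified style, with the phi coefficient as effect size. A minimal sketch follows; the cell counts are hypothetical (chosen so that ~84% of participants per condition identify the style correctly), not the study's raw data:

```python
import math

def chi2_2x2(table):
    """Pearson chi-square statistic (no continuity correction) and phi
    coefficient for a 2x2 contingency table.

    table = [[a, b], [c, d]]: rows = assigned condition (paternalistic /
    deliberative), columns = identified style (paternalistic / deliberative).
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    phi = math.sqrt(chi2 / n)
    return chi2, phi

# Hypothetical counts for 88 participants, ~84% correct in each condition.
chi2, phi = chi2_2x2([[37, 7], [7, 37]])
print(round(chi2, 1), round(phi, 2))  # → 40.9 0.68
```

With 1 degree of freedom, a chi-square this large corresponds to P < .001, and φ ≈ 0.68 indicates a strong association between assigned and identified style.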
Conclusions:
We developed an approach that is tailored for a medical context to induce a paternalistic and deliberative interaction style into a written interaction between a patient and a CA. We successfully tested and validated the procedure in a web-based experiment involving 88 participants. Future research should implement and test this approach among actual patients with chronic diseases and compare the results in different medical conditions. This approach can further be used as a starting point to develop dynamic CAs that adapt their interaction styles to their users.
Pathogens and animal pests (P&A) are a major threat to global food security as they directly affect the quantity and quality of food. The Southern Amazon, Brazil's largest domestic region for soybean, maize and cotton production, is particularly vulnerable to the outbreak of P&A due to its (sub)tropical climate and intensive farming systems. However, little is known about the spatial distribution of P&A and the related yield losses. Machine learning approaches for the automated recognition of plant diseases can help to overcome this research gap. The main objectives of this study are to (1) evaluate the performance of Convolutional Neural Networks (ConvNets) in classifying P&A, (2) map the spatial distribution of P&A in the Southern Amazon, and (3) quantify perceived yield and economic losses for the main soybean and maize P&A. The objectives were addressed by making use of data collected with the smartphone application Plantix. The core of the app's functioning is the automated recognition of plant diseases via ConvNets. Data on expected yield losses were gathered through a short survey included in an "expert" version of the application, which was distributed among agronomists. Between 2016 and 2020, Plantix users collected approximately 78,000 georeferenced P&A images in the Southern Amazon. The study results indicate a high performance of the trained ConvNets in classifying 420 different crop-disease combinations. Spatial distribution maps and expert-based yield loss estimates indicate that maize rust, bacterial stalk rot and the fall armyworm are among the most severe maize P&A, whereas soybean is mainly affected by P&A like anthracnose, downy mildew, frogeye leaf spot, stink bugs and brown spot. Perceived soybean and maize yield losses amount to 12 and 16%, respectively, resulting in annual yield losses of approximately 3.75 million tonnes for each crop and economic losses of US$2 billion for both crops together. 
The high level of accuracy of the trained ConvNets, when paired with widespread use from following a citizen-science approach, results in a data source that will shed new light on yield loss estimates, e.g., for the analysis of yield gaps and the development of measures to minimise them.
The correct orientation of seismic sensors is critical for studies such as full moment tensor inversion, receiver function analysis, and shear-wave splitting. Therefore, the orientation of horizontal components needs to be checked and verified systematically. This study relies on two different waveform-based approaches to assess the sensor orientations of the broadband network of the Kandilli Observatory and Earthquake Research Institute (KOERI). The network is an important backbone for seismological research in the Eastern Mediterranean Region and provides a comprehensive seismic data set for the North Anatolian fault. In recent years, this region has become a worldwide field laboratory for continental transform faults. A systematic survey of the sensor orientations of the entire network, as presented here, facilitates related seismic studies. We apply two independent orientation tests, based on the polarization of P waves and Rayleigh waves, to 123 broadband seismic stations, covering a period of 15 yr (2004-2018). For 114 stations, we obtain stable results with both methods. Approximately 80% of the results agree with each other within 10 degrees. Both methods indicate that about 40% of the stations are misoriented by more than 10 degrees. Among these, 20 stations are misoriented by more than 20 degrees. We observe temporal changes of sensor orientation that coincide with maintenance work or instrument replacement. We provide time-dependent sensor misorientation correction values for the KOERI network in the supplemental material.
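The P-wave polarization test rests on a simple principle: for rectilinear P-wave motion, the azimuth of the horizontal particle motion should match the event back-azimuth, so any systematic offset measures the sensor misorientation. The synthetic sketch below is our illustration of that principle only (noise-free signal, invented variable names); the study's actual processing also uses Rayleigh waves and resolves the 180° polarity ambiguity:

```python
import math

def pwave_azimuth(north, east):
    """Azimuth (degrees, mod 180) of rectilinear particle motion, taken
    from the principal axis of the horizontal covariance."""
    snn = sum(n * n for n in north)
    see = sum(e * e for e in east)
    sne = sum(n * e for n, e in zip(north, east))
    az = 0.5 * math.degrees(math.atan2(2 * sne, snn - see))
    return az % 180.0

# Synthetic test: true back-azimuth 60 deg, sensor rotated clockwise by 12 deg.
true_baz, misorientation = 60.0, 12.0
signal = [math.sin(0.1 * t) for t in range(200)]  # radial P pulse
# Project onto geographic N/E, then rotate into the misoriented sensor frame.
n_true = [s * math.cos(math.radians(true_baz)) for s in signal]
e_true = [s * math.sin(math.radians(true_baz)) for s in signal]
a = math.radians(misorientation)
n_rec = [n * math.cos(a) + e * math.sin(a) for n, e in zip(n_true, e_true)]
e_rec = [-n * math.sin(a) + e * math.cos(a) for n, e in zip(n_true, e_true)]

measured = pwave_azimuth(n_rec, e_rec)
print(round(true_baz - measured, 1))  # → 12.0 (recovered misorientation)
```

Averaging such single-event estimates over many earthquakes with known back-azimuths yields a stable correction angle per station, which is the quantity tabulated in the study's supplemental material.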
The first detections of black hole-neutron star mergers (GW200105 and GW200115) by the LIGO-Virgo-KAGRA Collaboration mark a significant scientific breakthrough. The physical interpretation of pre- and postmerger signals requires careful cross-examination between observational and theoretical modelling results. Here we present the first set of black hole-neutron star simulations that were obtained with the numerical-relativity code BAM. Our initial data are constructed using the public LORENE spectral library, which employs an excision of the black hole interior. BAM, in contrast, uses the moving-puncture gauge for the evolution. Therefore, we need to "stuff" the black hole interior with smooth initial data to evolve the binary system in time. This procedure introduces constraint violations, such that the constraint damping properties of the evolution system are essential to increase the accuracy of the simulation and in particular to reduce spurious center-of-mass drifts. Within BAM we evolve the Z4c equations, and we compare our gravitational-wave results with those of the SXS collaboration and results obtained with the SACRA code. While we find generally good agreement with the reference solutions and phase differences of ≲0.5 rad at the moment of merger, the absence of a clean convergence order in our simulations does not allow for a proper error quantification. We finally present a set of different initial conditions to explore how the merger of black hole-neutron star systems depends on the involved masses, spins, and equations of state.
Water bodies are a highly abundant feature of Arctic permafrost ecosystems and strongly influence their hydrology, ecology and biogeochemical cycling. While very high resolution satellite images enable detailed mapping of these water bodies, the increasing availability and abundance of this imagery calls for fast, reliable and automated monitoring. This technical work presents a largely automated and scalable workflow that removes image noise, detects water bodies, removes potential misclassifications from infrastructural features, derives lake shoreline geometries and retrieves their movement rate and direction on the basis of ortho-ready very high resolution satellite imagery from Arctic permafrost lowlands. We applied this workflow to typical Arctic lake areas on the Alaska North Slope and achieved a successful and fast detection of water bodies. We derived representative values for shoreline movement rates ranging from 0.40-0.56 m yr⁻¹ for lake sizes of 0.10-23.04 ha. The approach also gives an insight into seasonal water level changes. Based on an extensive quantification of error sources, we discuss how the results of the automated workflow can be further enhanced by incorporating additional information on weather conditions and image metadata and by improving the input database. The workflow is suitable for the seasonal to annual monitoring of lake changes on a sub-meter scale in the study areas in northern Alaska and can readily be scaled for application across larger regions within certain accuracy limitations.
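Water-body detection in workflows of this kind is commonly based on a spectral water index such as the Normalized Difference Water Index (NDWI), which exploits the fact that water reflects green light but absorbs near-infrared. The sketch below is an illustrative assumption on our part (function name, band values, and threshold are invented), not the paper's exact classifier:

```python
def ndwi_water_mask(green, nir, threshold=0.0):
    """Classify pixels as water where NDWI = (G - NIR) / (G + NIR) > threshold.

    green, nir: 2D lists of surface reflectance values (same shape).
    Returns a 2D boolean mask (True = water).
    """
    mask = []
    for g_row, n_row in zip(green, nir):
        row = []
        for g, n in zip(g_row, n_row):
            ndwi = (g - n) / (g + n) if (g + n) else 0.0
            row.append(ndwi > threshold)
        mask.append(row)
    return mask

# Toy 2x3 scene: water reflects strongly in green, weakly in NIR.
green = [[0.30, 0.28, 0.10], [0.32, 0.09, 0.08]]
nir   = [[0.05, 0.06, 0.40], [0.04, 0.35, 0.42]]
print(ndwi_water_mask(green, nir))  # → [[True, True, False], [True, False, False]]
```

In a full workflow, the resulting mask would then be cleaned of noise and infrastructure misclassifications, and the mask boundaries vectorized into shoreline geometries whose positions can be compared across acquisition dates.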
This paper sheds new light on the role of communication for cartel formation. Using machine learning to evaluate free-form chat communication among firms in a laboratory experiment, we identify typical communication patterns for both explicit cartel formation and indirect attempts to collude tacitly. We document that firms are less likely to communicate explicitly about price fixing and more likely to use indirect messages when sanctioning institutions are present. This effect of sanctions on communication reinforces the direct cartel-deterring effect of sanctions as collusion is more difficult to reach and sustain without an explicit agreement. Indirect messages have no, or even a negative, effect on prices.
The leniency rule revisited
(2021)
The experimental literature on antitrust enforcement provides robust evidence that communication plays an important role for the formation and stability of cartels. We extend these studies through a design that distinguishes between innocuous communication and communication about a cartel, sanctioning only the latter. To this aim, we introduce a participant in the role of the competition authority, who is properly incentivized to judge the communication content and price setting behavior of the firms. Using this novel design, we revisit the question whether a leniency rule successfully destabilizes cartels. In contrast to existing experimental studies, we find that a leniency rule does not affect cartelization. We discuss potential explanations for this contrasting result.
COVID-19
(2021)
We investigate how the economic consequences of the pandemic and the government-mandated measures to contain its spread affect the self-employed — particularly women — in Germany. For our analysis, we use representative, real-time survey data in which respondents were asked about their situation during the COVID-19 pandemic. Our findings indicate that among the self-employed, who generally face a higher likelihood of income losses due to COVID-19 than employees, women are about one-third more likely to experience income losses than their male counterparts. We do not find a comparable gender gap among employees. Our results further suggest that the gender gap among the self-employed is largely explained by the fact that women disproportionately work in industries that are more severely affected by the COVID-19 pandemic. Our analysis of potential mechanisms reveals that women are significantly more likely to be impacted by government-imposed restrictions, e.g., the regulation of opening hours. We conclude that future policy measures intending to mitigate the consequences of such shocks should account for this considerable variation in economic hardship.
Detrimental effects of adverse family conditions for children's wellbeing are well-documented, but little is known about the impact of specific risk factors, or about potential protective factors that buffer the effects of family risk factors on negative development.
We investigated the impact of five important family risk factors (e.g., parental conflict) on internalizing and externalizing problems, and the potential buffering effects of peer acceptance and academic skills, at two measurement points two years apart in 1195 7- to 10-year-olds (T1: M_age = 8.54 years).
Latent change models showed that increases in risk factors over the two years predicted increasing internalizing and externalizing problems. Parental conflict was the most impactful risk factor, although peer acceptance and academic skills showed some buffering effects.
The results highlight the necessity of investigating cumulative and single risk factors, specifically interparental conflict, and emphasize the need to strengthen children's internal and social resources to buffer the effects of adverse family conditions.
Previous literature has shown that task-based goal-setting and distributed learning are beneficial to university-level course performance. We investigate the effects of making these insights salient to students by sending out goal-setting prompts in a blended learning environment with bi-weekly quizzes. The randomized field experiment in a large mandatory economics course shows promising results: the treated students outperform the control group. They are 18.8% (0.20 SD) more likely to pass the exam and earn 6.7% (0.19 SD) more points on the exam. While we cannot causally disentangle the effects of goal-setting from the prompt sent, we observe that treated students use the online learning platform earlier in the semester and attempt more online exercises compared to the control group. The heterogeneity analysis suggests that higher treatment effects are associated with low performance at the beginning of the course.
Looking for participation
(2022)
A stronger learner orientation through participatory learning increases learning motivation and improves learning outcomes. But what does participatory learning mean? Where do learning factories and fabrication laboratories (FabLabs) stand in this context, and how can didactic implementation be improved in this respect? Using a newly developed analytical framework, which contains elements of the stage model of participation and general media didactics, we compare a FabLab and a learning factory example concerning the degree of participation. From this, we derive guidelines for designing participative teaching and learning processes in learning factories. We explain how FabLabs can be an inspiration for the didactic design of learning factories.
This study deals with the East Beni Suef Basin (Eastern Desert, Egypt) and aims to evaluate the source-generative potential, reconstruct the burial and thermal history, examine the most influential parameters on thermal maturity modeling, and improve on the models already published for the West Beni Suef to ultimately formulate a complete picture of the whole basin evolution.
Source rock evaluation was carried out based on TOC, Rock-Eval pyrolysis, and visual kerogen petrography analyses. Three kerogen types (II, II/III, and III) are distinguished in the East Beni Suef Basin, where the Abu Roash "F" Member acts as the main source rock with good to excellent source potential, oil-prone mainly type II kerogen, and immature to marginal maturity levels.
The burial history shows four depositional and erosional phases linked with the tectonic evolution of the basin. A hiatus (due to erosion or non-deposition) has occurred during the Late Eocene-Oligocene in the East Beni Suef Basin, while the West Beni Suef Basin has continued subsiding.
Sedimentation began later (Middle to Late Albian) with lower rates in the East Beni Suef Basin compared with the West Beni Suef Basin (Early Albian). The Abu Roash "F" source rock exists in the early oil window with a present-day transformation ratio of about 19% and 21% in the East and West Beni Suef Basin, respectively, while the Lower Kharita source rock, which is only recorded in the West Beni Suef Basin, has reached the late oil window with a present-day transformation ratio of about 70%.
The magnitude of erosion and heat flow have proportional and mutual effects on thermal maturity.
We present three possible scenarios of basin modeling in the East Beni Suef Basin concerning the erosion from the Apollonia and Dabaa formations.
Results of this work can serve as a basis for subsequent 2D and/or 3D basin modeling, which are highly recommended to further investigate the petroleum system evolution of the Beni Suef Basin.
The subsurface is a temporally dynamic and spatially heterogeneous compartment of the Earth's critical zone, and biogeochemical transformations taking place in this compartment are crucial for the cycling of nutrients.
The impact of spatial heterogeneity on such microbially mediated nutrient cycling is not well known, which poses a severe challenge for predicting in situ biogeochemical transformation rates and, further, the nutrient load that groundwater contributes to surface water bodies.
Therefore, we used a numerical modelling approach to evaluate the sensitivity of groundwater microbial biomass distribution and nutrient cycling to spatial heterogeneity in different scenarios accounting for various residence times.
The model results gave us an insight into domain characteristics with respect to the presence of oxic niches in predominantly anoxic zones and vice versa depending on the extent of spatial heterogeneity and the flow regime.
The obtained results show that microbial abundance, distribution, and activity are sensitive to the applied flow regime and that the mobile (i.e. observable by groundwater sampling) fraction of microbial biomass is a varying yet small fraction of the total biomass in a domain. Furthermore, spatial heterogeneity resulted in anaerobic niches in the domain and in shifts of microbial biomass between active and inactive states. Neglecting spatial heterogeneity can thus result in an inaccurate estimation of microbial activity; in most cases this leads to an overestimation of nutrient removal (up to twice the actual amount) along a flow path.
We conclude that the governing factors are the residence time of solutes and the Damköhler number (Da) of the biogeochemical reactions in the domain. We propose a relationship to scale the impact of spatial heterogeneity on nutrient removal, governed by log10(Da).
This relationship may be applied in upscaled descriptions of microbially mediated nutrient cycling dynamics in the subsurface thereby resulting in more accurate predictions of, for example, carbon and nitrogen cycling in groundwater over long periods at the catchment scale.
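The role of the Damköhler number (Da) in the abstract above can be illustrated with a toy calculation. The study's actual scaling relationship is not reproduced here; the sketch below only shows, under the standard first-order definition and with purely illustrative values, how Da combines the two governing factors the authors identify, the reaction rate constant and the solute residence time:

```python
import math

def damkoehler(rate_const_per_day: float, residence_time_days: float) -> float:
    """First-order Damkoehler number: ratio of the solute residence time
    to the characteristic reaction timescale (1 / rate constant)."""
    return rate_const_per_day * residence_time_days

# Da >> 1: the reaction completes within the flow path (transport-limited);
# Da << 1: solutes leave the domain before reacting (kinetics-limited).
da_fast = damkoehler(rate_const_per_day=0.5, residence_time_days=100.0)  # 50.0
da_slow = damkoehler(rate_const_per_day=0.01, residence_time_days=10.0)  # 0.1

print(round(math.log10(da_fast), 2), round(math.log10(da_slow), 2))
```

Working on the log10(Da) scale, as proposed in the study, compresses these order-of-magnitude differences into a quantity that can be related linearly to the heterogeneity effect.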
An increasing number of clinicians (i.e., nurses and physicians) suffer from mental health-related issues like depression and burnout. These, in turn, stress communication, collaboration, and decision-making—areas in which Conversational Agents (CAs) have been shown to be useful. Thus, in this work, we followed a mixed-method approach and systematically analysed the literature on factors affecting the well-being of clinicians and on CAs' potential to improve said well-being by providing support in communication, collaboration, and decision-making in hospitals. In this respect, we are guided by the model of factors influencing well-being by Brigham et al. (2018). Based on an initial number of 840 articles, we further analysed 52 papers in more detail and identified the influences of CAs' fields of application on external and individual factors affecting clinicians' well-being. As our second method, we will conduct interviews with clinicians and experts on CAs to verify and extend these influencing factors.
Epidemiological data suggest that consuming diets rich in carotenoids can reduce the risk of developing several non-communicable diseases. Thus, we investigated the extent to which carotenoid contents of foods can be increased by the choice of food matrices with naturally high carotenoid contents and thermal processing methods that maintain their stability. For this purpose, carotenoids of 15 carrot (Daucus carota L.) cultivars of different colors were assessed with UHPLC-DAD-ToF-MS. Additionally, the processing effects of air drying, air frying, and deep frying on carotenoid stability were applied. Cultivar selection accounted for up to 12.9-fold differences in total carotenoid content in differently colored carrots and a 2.2-fold difference between orange carrot cultivars. Air frying for 18 and 25 min and deep frying for 10 min led to a significant decrease in total carotenoid contents. TEAC assay of lipophilic extracts showed a correlation between carotenoid content and antioxidant capacity in untreated carrots.
Thermally stable photoswitches that are driven with low-energy light are rare, yet crucial for extending the applicability of photoresponsive molecules and materials towards, e.g., living systems. Combined ortho-fluorination and -amination couples high visible light absorptivity of o-aminoazobenzenes with the extraordinary bistability of o-fluoroazobenzenes. Herein, we report a library of easily accessible o-aminofluoroazobenzenes and establish structure-property relationships regarding spectral qualities, visible light isomerization efficiency and thermal stability of the cis-isomer with respect to the degree of o-substitution and choice of amino substituent. We rationalize the experimental results with quantum chemical calculations, revealing the nature of low-lying excited states and providing insight into thermal isomerization. The synthesized azobenzenes absorb at up to 600 nm and their thermal cis-lifetimes range from milliseconds to months. The most unique example can be driven from trans to cis with any wavelength from UV up to 595 nm, while still exhibiting a thermal cis-lifetime of 81 days.
Poly(ionic liquid)s (PILs) are common precursors for heteroatom-doped carbon materials. Despite a relatively high carbonization yield, the PIL-to-carbon conversion process faces challenges in preserving morphological and structural motifs on the nanoscale. Assisted by a thin polydopamine coating route and ion exchange, imidazolium-based PIL nanovesicles were successfully applied in morphology-maintaining carbonization to prepare carbon composite nanocapsules. Extending this strategy further to their composites, we demonstrate the synthesis of carbon composite nanocapsules functionalized with iron nitride nanoparticles of an ultrafine, uniform size of 3-5 nm (termed "FexN@C"). Owing to its unique nanostructure, the sulfur-loaded FexN@C electrode was shown to efficiently mitigate the notorious shuttle effect of lithium polysulfides (LiPSs) in Li-S batteries. The cavity of the carbon nanocapsules was found to enhance the sulfur loading content. The well-dispersed iron nitride nanoparticles effectively catalyze the conversion of LiPSs to Li2S, owing to their high electronic conductivity and strong binding to LiPSs. Benefiting from this well-crafted composite nanostructure, the constructed FexN@C/S cathode demonstrated a fairly high initial discharge capacity of 1085 mAh g(-1) at 0.5 C and retained 930 mAh g(-1) after 200 cycles. In addition, it exhibits excellent rate capability with a high initial discharge capacity of 889.8 mAh g(-1) at 2 C. This facile PIL-to-nanocarbon synthetic approach is applicable to the exquisite design of complex hybrid carbon nanostructures with potential use in electrochemical energy storage and conversion.
The study of perceptual flexibility in speech depends on a variety of tasks that feature a large degree of variability between participants. Of critical interest is whether measures are consistent within an individual or across stimulus contexts. This is particularly key for individual difference designs that are deployed to examine the neural basis or clinical consequences of perceptual flexibility. In the present set of experiments, we assess the split-half reliability and construct validity of five measures of perceptual flexibility: three of learning in a native language context (e.g., understanding someone with a foreign accent) and two of learning in a non-native context (e.g., learning to categorize non-native speech sounds). We find that most of these tasks show an appreciable level of split-half reliability, although construct validity was sometimes weak. This provides good evidence for reliability for these tasks, while highlighting possible upper limits on expected effect sizes involving each measure.
Changing climatic conditions and unsustainable land use are major threats to savannas worldwide. Historically, many African savannas were used intensively for livestock grazing, which contributed to widespread patterns of bush encroachment across savanna systems. To reverse bush encroachment, it has been proposed to change the cattle-dominated land use to one dominated by comparatively specialized browsers and usually native herbivores. However, the consequences for ecosystem properties and processes remain largely unclear. We used the ecohydrological, spatially explicit model EcoHyD to assess the impacts of two contrasting, herbivore land-use strategies on a Namibian savanna: grazer- versus browser-dominated herbivore communities. We varied the densities of grazers and browsers and determined the resulting composition and diversity of the plant community, total vegetation cover, soil moisture, and water use by plants. Our results showed that plant types that are less palatable to herbivores were best adapted to grazing or browsing animals in all simulated densities. Also, plant types that had a competitive advantage under limited water availability were among the dominant ones irrespective of land-use scenario. Overall, the results were in line with our expectations: under high grazer densities, we found heavy bush encroachment and the loss of the perennial grass matrix. Importantly, regardless of the density of browsers, grass cover and plant functional diversity were significantly higher in browsing scenarios. Browsing herbivores increased grass cover, and the higher total cover in turn improved water uptake by plants overall. We concluded that, in contrast to grazing-dominated land-use strategies, land-use strategies dominated by browsing herbivores, even at high herbivore densities, sustain diverse vegetation communities with high cover of perennial grasses, resulting in lower erosion risk and bolstering ecosystem services.
The investigation of metabolic fluxes and metabolite distributions within cells by means of tracer molecules is a valuable tool to unravel the complexity of biological systems. Technological advances in mass spectrometry (MS) technology such as atmospheric pressure chemical ionization (APCI) coupled with high resolution (HR), not only allows for highly sensitive analyses but also broadens the usefulness of tracer-based experiments, as interesting signals can be annotated de novo when not yet present in a compound library. However, several effects in the APCI ion source, i.e., fragmentation and rearrangement, lead to superimposed mass isotopologue distributions (MID) within the mass spectra, which need to be corrected during data evaluation as they will impair enrichment calculation otherwise. Here, we present and evaluate a novel software tool to automatically perform such corrections. We discuss the different effects, explain the implemented algorithm, and show its application on several experimental datasets. This adjustable tool is available as an R package from CRAN.
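The kind of superimposition correction described above can be illustrated in generic terms. The sketch below is not the R package from the study: it only shows the standard linear-algebra idea behind MID correction (solving measured = C · true for a known correction matrix C), with a hypothetical 3-isotopologue example and made-up matrix values:

```python
import numpy as np

def correct_mid(measured: np.ndarray, correction: np.ndarray) -> np.ndarray:
    """Recover the tracer-derived MID by solving measured = C @ true
    for 'true' via least squares, then renormalizing to fractions."""
    true, *_ = np.linalg.lstsq(correction, measured, rcond=None)
    true = np.clip(true, 0.0, None)  # enforce non-negative fractions
    return true / true.sum()

# Toy 3-isotopologue example (M+0, M+1, M+2): each column of C gives the
# measured pattern a pure isotopologue would produce after superimposing
# effects such as natural isotope abundance (values are illustrative).
C = np.array([[0.95, 0.00, 0.00],
              [0.05, 0.95, 0.00],
              [0.00, 0.05, 0.95]])
true_mid = np.array([0.7, 0.2, 0.1])
measured = C @ true_mid
recovered = correct_mid(measured, C)
print(np.round(recovered, 3))  # → [0.7 0.2 0.1]
```

In the APCI setting described in the abstract, the correction matrix would additionally have to encode the fragmentation and rearrangement effects that superimpose MIDs in the ion source.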
Structural evolution of cesium triiodide at high pressures has been revealed by synchrotron single-crystal X-ray diffraction. Cesium triiodide undergoes a first-order phase transition above 1.24(3) GPa from an orthorhombic to a trigonal system. This transition is coupled with severe reorganization of the polyiodide network from a layered to a three-dimensional architecture. Quantum chemical calculations show that even though the two polymorphic phases are nearly isoenergetic under ambient conditions, the PV term is decisive in stabilizing the trigonal polymorph above the transition point. Phonon calculations using a non-local correlation functional that accounts for dispersion interactions confirm that this polymorph is dynamically unstable under ambient conditions. The high-pressure behavior of crystalline CsI3 can be correlated with other alkali metal trihalides, which undergo a similar sequence of structural changes under load.
Digital Platforms (DPs) have established themselves in recent years as a central concept in information technology research. Due to the great diversity of digital platform concepts, clear definitions are still required. Furthermore, DPs are subject to dynamic changes driven by internal and external factors, which pose challenges for digital platform operators, developers, and customers. Which current digital platform research directions should be taken to address these challenges remains open so far. The following paper aims to contribute to this by outlining a systematic literature review (SLR) on digital platform concepts in the context of the Industrial Internet of Things (IIoT) for manufacturing companies, providing a basis for (1) a selection of definitions of current digital platform and ecosystem concepts and (2) a selection of current digital platform research directions. These directions are divided into (a) occurrence of digital platforms, (b) emergence of digital platforms, (c) evaluation of digital platforms, (d) development of digital platforms, and (e) selection of digital platforms.
Concepts and techniques for 3D-embedded treemaps and their application to software visualization
(2024)
This thesis addresses concepts and techniques for interactive visualization of hierarchical data using treemaps. It explores (1) how treemaps can be embedded in 3D space to improve their information content and expressiveness, (2) how the readability of treemaps can be improved using level-of-detail and degree-of-interest techniques, and (3) how to design and implement a software framework for the real-time web-based rendering of treemaps embedded in 3D. With a particular emphasis on their application, use cases from software analytics are taken to test and evaluate the presented concepts and techniques.
Concerning the first challenge, this thesis shows that a 3D attribute space offers enhanced possibilities for the visual mapping of data compared to classical 2D treemaps. In particular, embedding in 3D allows for improved implementation of visual variables (e.g., by sketchiness and color weaving), provision of new visual variables (e.g., by physically based materials and in situ templates), and integration of visual metaphors (e.g., by reference surfaces and renderings of natural phenomena) into the three-dimensional representation of treemaps.
For the second challenge—the readability of an information visualization—the work shows that the generally higher visual clutter and increased cognitive load typically associated with three-dimensional information representations can be kept low in treemap-based representations of both small and large hierarchical datasets. By introducing an adaptive level-of-detail technique, we can not only declutter the visualization results, thereby reducing cognitive load and mitigating occlusion problems, but also summarize and highlight relevant data. Furthermore, this approach facilitates automatic labeling, supports the emphasis on data outliers, and allows visual variables to be adjusted via degree-of-interest measures.
The third challenge is addressed by developing a real-time rendering framework with WebGL and accumulative multi-frame rendering. The framework removes hardware constraints and graphics API requirements, reduces interaction response times, and simplifies high-quality rendering. At the same time, the implementation effort for a web-based deployment of treemaps is kept reasonable.
The presented visualization concepts and techniques are applied and evaluated for use cases in software analysis. In this domain, data about software systems, especially about the state and evolution of the source code, does not have a descriptive appearance or natural geometric mapping, making information visualization a key technology here. In particular, software source code can be visualized with treemap-based approaches because of its inherently hierarchical structure. With treemaps embedded in 3D, we can create interactive software maps that visually map software metrics, software developer activities, or information about the evolution of software systems alongside their hierarchical module structure.
Discussions on remaining challenges and opportunities for future research for 3D-embedded treemaps and their applications conclude the thesis.
The intensity of cosmic radiation may vary over five orders of magnitude within a few hours or days during Solar Particle Events (SPEs), increasing the probability of Single Event Upsets (SEUs) in space-borne electronic systems by several orders of magnitude. It is therefore vital to enable the early detection of SEU rate changes in order to ensure timely activation of dynamic radiation-hardening measures. In this paper, an embedded approach for the prediction of SPEs and the SRAM SEU rate is presented. The proposed solution combines a real-time SRAM-based SEU monitor, an offline-trained machine learning model, and an online learning algorithm for the prediction. With respect to the state of the art, our solution brings the following benefits: (1) use of the existing on-chip data-storage SRAM as a particle detector, thus minimizing the hardware and power overhead; (2) prediction of the SRAM SEU rate one hour in advance, with fine-grained hourly tracking of SEU variations during SPEs as well as under normal conditions; (3) online optimization of the prediction model to enhance prediction accuracy during run-time; (4) negligible cost of the hardware accelerator design for the implementation of the selected machine learning model and online learning algorithm. The proposed design is intended for a highly dependable and self-adaptive multiprocessing system employed in space applications, allowing radiation mitigation mechanisms to be triggered before the onset of high radiation levels.
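The combination of an offline-trained model with an online learning step, as described above, can be sketched in a few lines. This is not the paper's actual design: the class below is a hypothetical, minimal linear predictor whose weights (standing in for the offline-trained model) are refined by one SGD step per observed hourly SEU count:

```python
from collections import deque

class OnlineSEUPredictor:
    """Minimal online-updated linear predictor of the next hour's SEU rate
    from the last `window` hourly SEU counts (illustrative only)."""

    def __init__(self, window: int = 4, lr: float = 0.01):
        self.window = window
        self.lr = lr
        # Stand-in for offline-trained weights (here: a simple moving average).
        self.weights = [1.0 / window] * window
        self.history: deque = deque(maxlen=window)

    def predict(self) -> float:
        if len(self.history) < self.window:
            return 0.0  # not enough history yet
        return sum(w * x for w, x in zip(self.weights, self.history))

    def update(self, observed_rate: float) -> None:
        """One SGD step on the squared prediction error, then log the observation."""
        if len(self.history) == self.window:
            error = self.predict() - observed_rate
            self.weights = [w - self.lr * error * x
                            for w, x in zip(self.weights, self.history)]
        self.history.append(observed_rate)

pred = OnlineSEUPredictor()
for rate in [2.0, 2.0, 2.0, 2.0, 2.0]:  # steady hourly SEU counts
    pred.update(rate)
print(pred.predict())  # → 2.0
```

In the real system, the input features would come from the SRAM-based SEU monitor and the update would run on the dedicated hardware accelerator mentioned in the abstract.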
Studies have evaluated the effectiveness of dual career (DC) support services among student-athletes by examining scholastic performances.
These studies investigated self-reported grades of student-athletes or focused on career choices student-athletes made after leaving school. Most of these studies examined scholastic performances cross-sectionally among lower secondary school student-athletes or student-athletes in higher education.
The present longitudinal field study in a quasi-experimental design aims to evaluate the development of scholastic performances among upper secondary school students aged 16-19 by using standardized scholastic assessments and grade points in the subject English over a course of 3-4 years.
A sample of 159 students (54.4% females) at three German Elite Sport Schools (ESS) and three comprehensive schools participated in the study. The sample was split into six groups according to three criteria: (1) students' athletic engagement, (2) school type attendance, and (3) usage of DC support services in secondary school.
Repeated-measurement analyses of variance were conducted in order to evaluate the impact of the three previously mentioned criteria as well as their interaction on the development of scholastic performances.
Findings indicated that the development of English performance levels differs among the six groups.
Invention
(2023)
This entry addresses invention from five different perspectives: (i) definition of the term, (ii) mechanisms underlying invention processes, (iii) (pre-)history of human inventions, (iv) intellectual property protection vs open innovation, and (v) case studies of great inventors. Regarding the definition, an invention is the outcome of a creative process taking place within a technological milieu, which is recognized as successful in terms of its effectiveness as an original technology. In the process of invention, a technological possibility becomes realized. Inventions are distinct from either discovery or innovation. In human creative processes, seven mechanisms of invention can be observed, yielding characteristic outcomes: (1) basic inventions, (2) invention branches, (3) invention combinations, (4) invention toolkits, (5) invention exaptations, (6) invention values, and (7) game-changing inventions. The development of humanity has been strongly shaped by inventions ever since early stone tools and the conception of agriculture. An “explosion of creativity” has been associated with Homo sapiens, and inventions in all fields of human endeavor have followed suit, engendering an exponential growth of cumulative culture. This cultural development emerges essentially through a reuse of previous inventions, their revision, amendment and rededication. In sociocultural terms, humans have increasingly regulated processes of invention and invention-reuse through concepts such as intellectual property, patents, open innovation and licensing methods. Finally, three case studies of great inventors are considered: Edison, Marconi, and Montessori, alongside a discussion of human invention processes as collaborative endeavors.
Current attempts to prevent and manage type 2 diabetes have been moderately effective, and a better understanding of the molecular roots of this complex disease is important to develop more successful and precise treatment options.
Recently, we initiated the collective diabetes cross, where four mouse inbred strains differing in their diabetes susceptibility were crossed with the obese and diabetes-prone NZO strain and identified the quantitative trait loci (QTL) Nidd13/NZO, a genomic region on chromosome 13 that correlates with hyperglycemia in NZO allele carriers compared to B6 controls.
Subsequent analysis of the critical region, harboring 644 genes, included expression studies in pancreatic islets of congenic Nidd13/NZO mice, integration of single-cell data from parental NZO and B6 islets as well as haplotype analysis.
Finally, of the five genes (Acot12, S100z, Ankrd55, Rnf180, and Iqgap2) within the polymorphic haplotype block that are differentially expressed in islets of B6 compared to NZO mice, we identified the calcium-binding protein gene S100z to affect islet cell proliferation as well as apoptosis when overexpressed in MIN6 cells. In summary, we identify S100z as the most likely causal gene for the diabetes QTL Nidd13/NZO, affecting beta-cell proliferation and apoptosis. Thus, S100z is an entirely novel diabetes gene regulating islet cell function.
The business problem of having inefficient processes, imprecise process analyses and simulations as well as non-transparent artificial neuronal network models can be overcome by an easy-to-use modeling concept. With the aim of developing a flexible and efficient approach to modeling, simulating and optimizing processes, this paper proposes a flexible Concept of Neuronal Modeling (CoNM). The modeling concept, which is described by the modeling language designed and its mathematical formulation and is connected to a technical substantiation, is based on a collection of novel sub-artifacts. As these have been implemented as a computational model, the set of CoNM tools carries out novel kinds of Neuronal Process Modeling (NPM), Neuronal Process Simulations (NPS) and Neuronal Process Optimizations (NPO). The efficacy of the designed artifacts was demonstrated rigorously by means of six experiments and a simulator of real industrial production processes.
Model uncertainty quantification is an essential component of effective data assimilation. Model errors associated with sub-grid scale processes are often represented through stochastic parameterizations of the unresolved process. Many existing Stochastic Parameterization schemes are only applicable when knowledge of the true sub-grid scale process or full observations of the coarse scale process are available, which is typically not the case in real applications. We present a methodology for estimating the statistics of sub-grid scale processes for the more realistic case that only partial observations of the coarse scale process are available. Model error realizations are estimated over a training period by minimizing their conditional sum of squared deviations given some informative covariates (e.g., state of the system), constrained by available observations and assuming that the observation errors are smaller than the model errors. From these realizations a conditional probability distribution of additive model errors given these covariates is obtained, allowing for complex non-Gaussian error structures. Random draws from this density are then used in actual ensemble data assimilation experiments. We demonstrate the efficacy of the approach through numerical experiments with the multi-scale Lorenz 96 system using both small and large time scale separations between slow (coarse scale) and fast (fine scale) variables. The resulting error estimates and forecasts obtained with this new method are superior to those from two existing methods.
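The core idea of estimating a conditional, possibly non-Gaussian error distribution from training-period realizations and then drawing from it during assimilation can be sketched as follows. This is a strong simplification of the method described above: the covariate binning, the toy data, and all function names are illustrative assumptions, not the paper's implementation:

```python
import random
from collections import defaultdict

def build_conditional_errors(covariates, errors, n_bins=3):
    """Group training-period model-error realizations into covariate bins,
    yielding an empirical conditional error distribution per bin."""
    lo, hi = min(covariates), max(covariates)
    width = (hi - lo) / n_bins or 1.0  # guard against a zero-width range
    bins = defaultdict(list)
    for c, e in zip(covariates, errors):
        idx = min(int((c - lo) / width), n_bins - 1)
        bins[idx].append(e)
    return lo, width, n_bins, bins

def sample_error(state, model, rng=random):
    """Draw one additive model-error realization conditioned on the state."""
    lo, width, n_bins, bins = model
    idx = min(max(int((state - lo) / width), 0), n_bins - 1)
    return rng.choice(bins[idx])

# Toy training data: errors grow with the covariate (e.g., the system state).
covs = [0.1, 0.2, 0.9, 1.0, 1.9, 2.0]
errs = [-0.5, -0.4, 0.0, 0.1, 0.8, 0.9]
model = build_conditional_errors(covs, errs)
print(sample_error(1.95, model) in (0.8, 0.9))  # high-covariate bin → True
```

Because each bin stores raw realizations rather than fitted moments, draws reproduce whatever non-Gaussian structure the training errors exhibit, in the spirit of the conditional density described in the abstract.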
The nature of the sources powering nebular He II emission in star-forming galaxies remains debated, and various types of objects have been considered, including Wolf-Rayet stars, X-ray binaries, and Population III stars.
Modern X-ray observations show the ubiquitous presence of hot gas filling star-forming galaxies. We use a collisional ionization plasma code to compute the specific He II ionizing flux produced by hot gas and show that if its temperature is not too high (≲2.5 MK), then the observed levels of soft diffuse X-ray radiation could explain He II ionization in galaxies.
To gain a physical understanding of this result, we propose a model that combines the hydrodynamics of cluster winds and hot superbubbles with observed populations of young massive clusters in galaxies. We find that in low-metallicity galaxies, the temperature of hot gas is lower and the production rate of He II ionizing photons is higher compared to high-metallicity galaxies. The reason is that the slower stellar winds of massive stars in lower-metallicity galaxies input less mechanical energy in the ambient medium.
Furthermore, we show that ensembles of star clusters up to ~10-20 Myr old in galaxies can produce enough soft X-rays to induce nebular He II emission. We discuss observations of the template low-metallicity galaxy I Zw 18 and suggest that the He II nebula in this galaxy is powered by a hot superbubble.
Finally, appreciating the complex nature of stellar feedback, we suggest that soft X-rays from hot superbubbles are among the dominant sources of He II ionizing flux in low-metallicity star-forming galaxies.
It has been highlighted many times how difficult it is to draw a boundary between gift and bribe, and how the same transfer can be interpreted in different ways according to the position of the observer and the narrative frame into which it is inserted. This also applied of course to Ancient Rome; in both the Republic and Principate lawgivers tried to define the limits of acceptable transfers and thus also to identify what we might call ‘corruption’. Yet, such definitions remained to a large extent blurred, and what was constructed was mostly a ‘code of conduct’, allowing Roman politicians to perform their own ‘honesty’ in public duty – while being aware at all times that their involvement in different kinds of transfer might be used by their opponents against them and presented as a case of ‘corrupt’ behaviour.
Widespread on social networking sites (SNSs), envy has been linked to an array of detrimental outcomes for users’ well-being. While envy has been considered a status-related emotion and is likely to be experienced in response to perceiving another’s higher status, there is a lack of research exploring how status perceptions influence the emergence of envy on SNSs. This is important because SNSs typically quantify social interactions and reach with metrics that indicate users’ relative rank and status in the network. To understand how status perceptions impact SNS users, we introduce a new form of metric-based digital status rooted in SNS metrics that are available and visible on a platform. Drawing on social comparison theory and status literature, we conducted an online experiment to investigate how different forms of status contribute to the proliferation of envy on SNSs. Our findings shed light on how metric-based digital status influences feelings of envy on SNSs. Specifically, we could show that metric-based digital status impacts envy through increasing perceptions of others’ socioeconomic and sociometric statuses. Our study contributes to the growing discourse on the negative outcomes associated with SNS use and its consequences for users and society.
Who has the future in mind?
(2022)
An individual's relation to time may be an important driver of pro-environmental behaviour. We studied whether young individuals' gender and time-orientation are associated with pro-environmental behaviour. In a controlled laboratory environment with students in Germany, participants earned money by performing a real-effort task and were then offered the opportunity to invest their money into an environmental project that supports climate protection. Afterwards, we controlled for their time-orientation. In this consequential behavioural setting, we find that males who scored higher on future-negative orientation showed significantly more pro-environmental behaviour compared to females who scored higher on future-negative orientation and males who scored lower on future-negative orientation. Interestingly, our results are completely reversed when it comes to past-positive orientation. These findings have practical implications regarding the most appropriate way to address individuals in order to achieve more pro-environmental behaviour.
The envy spiral
(2020)
On social networking sites (SNS), users disclose mostly positive and often self-enhancing information. Scholars refer to this phenomenon as the positivity bias in SNS communication (PBSC). However, while theoretical explanations for this phenomenon have been proposed, empirical proof of these theorized mechanisms is still missing. The project presented in this Research-in-Progress paper aims to explain the PBSC through the mechanism specified in the self-enhancement envy spiral. Specifically, we hypothesize that feelings of envy drive people to post positive and self-enhancing content on SNS. To test this hypothesis, we developed an experimental design that allows us to examine the causal effect of envy on the positivity of users’ subsequently posted content. In a preliminary study, we tested our manipulation of envy and demonstrated its effectiveness in inducing different levels of envy between our groups. Our project will help to broaden the understanding of the complex dynamics of SNS and the potentially adverse driving forces underlying them.
This paper studies how individuals discount the utility they derive from their provision of goods over spatial distance. In a controlled laboratory experiment in Germany, we elicit preferences for the provision of the same good at different locations. To isolate spatial preferences from any other direct value of the goods being close to the individual, we focus on goods with “existence value.” We find that individuals put special weight on the provision of these goods in their immediate vicinity. This “vicinity bias” represents a spatial analogy to the “present bias” in the time dimension.
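The analogy drawn in this abstract can be made concrete with the quasi-hyperbolic (β-δ) model of present bias and its spatial counterpart; the notation below is illustrative, not the paper's own formalization:

```latex
% Present bias: quasi-hyperbolic discounting over time t
U = u(c_0) + \beta \sum_{t \ge 1} \delta^{t}\, u(c_t), \qquad 0 < \beta < 1
% Spatial analogue ("vicinity bias"): discounting over distance d
V = v(g_0) + \beta_s \sum_{d \ge 1} \delta_s^{d}\, v(g_d), \qquad 0 < \beta_s < 1
```

Here $\beta_s < 1$ would capture the extra weight placed on provision of a good in the immediate vicinity ($d = 0$), just as $\beta < 1$ captures the extra weight on immediate consumption ($t = 0$).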
Enhancing economic efficiency in modular production systems through deep reinforcement learning
(2024)
In times of increasingly complex production processes and volatile customer demands, production adaptability is crucial for a company's profitability and competitiveness. The ability to cope with rapidly changing customer requirements and unexpected internal and external events guarantees robust and efficient production processes, requiring a dedicated control concept at the shop-floor level. Yet conventional control approaches, which remain in use in today's practice, may not keep up with this dynamic behaviour because of their scenario-specific and rigid properties. To address this challenge, deep learning methods have increasingly been deployed for their optimization and scalability properties. However, these approaches have often been tested in specific operational applications and have focused on technical performance indicators such as order tardiness or total throughput. In this paper, we propose a deep reinforcement learning-based production control that optimizes combined techno-financial performance measures. Based on pre-defined manufacturing modules that are supplied and operated by multiple agents, positive effects were observed in terms of increased revenue and reduced penalties due to lower throughput times and fewer delayed products. The combined modular and multi-staged approach, as well as the distributed decision-making, further leverages scalability and transferability to other scenarios.
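The techno-financial reward idea described above (revenue for completed orders minus penalties for delays) can be sketched with a minimal reinforcement-learning loop. This is a toy tabular Q-learning illustration, not the authors' deep RL implementation: the environment dynamics, action set, and reward weights here are all assumptions for demonstration only.

```python
# Toy production-control loop trained with tabular Q-learning.
# State: queue length of waiting orders (0..5); actions are hypothetical.
import random

random.seed(0)
ACTIONS = ["process", "wait"]

def step(queue_len, action):
    """Toy dynamics: processing clears one order for revenue; orders
    arrive stochastically; a queue penalty proxies throughput time."""
    reward = 0.0
    if action == "process" and queue_len > 0:
        queue_len -= 1
        reward += 10.0            # revenue for a completed order
    if random.random() < 0.5:     # stochastic order arrival
        queue_len = min(queue_len + 1, 5)
    reward -= 1.0 * queue_len     # penalty for waiting / delayed orders
    return queue_len, reward

def train(episodes=500, alpha=0.1, gamma=0.95, eps=0.1):
    q = {(s, a): 0.0 for s in range(6) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(20):       # fixed-length episode
            a = random.choice(ACTIONS) if random.random() < eps \
                else max(ACTIONS, key=lambda x: q[(s, x)])
            s2, r = step(s, a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

q = train()
# With orders waiting, the learned policy should prefer processing.
print(q[(3, "process")] > q[(3, "wait")])
```

A deep RL variant would replace the Q-table with a neural network and the toy dynamics with a simulation of the modular production system, but the reward structure (revenue minus tardiness/throughput penalties) plays the same role.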
Shape-memory hydrogels (SMH) are multifunctional, actively-moving polymers of interest in biomedicine. In loosely crosslinked polymer networks, gelatin chains may form triple helices, which can act as temporary net points in SMH, depending on the presence of salts. Here, we show programming and initiation of the shape-memory effect of such networks based on a thermomechanical process compatible with the physiological environment. The SMH were synthesized by reaction of glycidylmethacrylated gelatin with oligo(ethylene glycol) (OEG) α,ω-dithiols of varying crosslinker length and amount. Triple helicalization of gelatin chains is shown directly by wide-angle X-ray scattering and indirectly via the mechanical behavior at different temperatures. The ability to form triple helices increased with the molar mass of the crosslinker. Hydrogels had storage moduli of 0.27–23 kPa and Young's moduli of 215–360 kPa at 4 °C. The hydrogels were hydrolytically degradable, with full degradation to water-soluble products within one week at 37 °C and pH = 7.4. A thermally-induced shape-memory effect is demonstrated in bending as well as in compression tests, in which shape recovery with excellent shape-recovery rates R_r close to 100% was observed. In the future, the material presented here could be applied, e.g., as self-anchoring devices mechanically resembling the extracellular matrix.