Anomaly detection in process mining aims to recognize outlying or unexpected behavior in event logs for purposes such as the removal of noise and identification of conformance violations. Existing techniques for this task are primarily frequency-based, arguing that behavior is anomalous because it is uncommon. However, such techniques ignore the semantics of recorded events and, therefore, do not take the meaning of potential anomalies into consideration. In this work, we address this limitation and focus on the detection of anomalies from a semantic perspective, arguing that anomalies can be recognized when process behavior does not make sense. To achieve this, we propose an approach that exploits the natural language associated with events. Our key idea is to detect anomalous process behavior by identifying semantically inconsistent execution patterns. To detect such patterns, we first automatically extract business objects and actions from the textual labels of events. We then compare these against a process-independent knowledge base. By populating this knowledge base with patterns from various kinds of resources, our approach can be used in a range of contexts and domains. We demonstrate the capability of our approach to successfully detect semantic execution anomalies through an evaluation based on a set of real-world and synthetic event logs and show the complementary nature of semantics-based anomaly detection to existing frequency-based techniques.
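To make the idea concrete, here is a minimal sketch (not the authors' implementation; the naive label parser, the knowledge-base format, and the archive/modify pattern are all hypothetical) of flagging semantically inconsistent execution patterns in a trace:

```python
# Hypothetical knowledge base of anomalous action orderings: an object
# should not be modified after it has been archived.
KNOWLEDGE_BASE = {
    ("archive", "modify"): "anomalous",
}

def extract_action_object(label: str):
    """Naive label parser: 'approve purchase order' -> ('approve', 'purchase order')."""
    action, _, business_object = label.lower().partition(" ")
    return action, business_object

def semantic_anomalies(trace):
    """Flag event pairs on the same business object whose action order
    matches an anomalous pattern in the knowledge base."""
    seen = []  # (action, object) pairs in order of occurrence
    anomalies = []
    for label in trace:
        action, obj = extract_action_object(label)
        for prev_action, prev_obj in seen:
            if prev_obj == obj and KNOWLEDGE_BASE.get((prev_action, action)) == "anomalous":
                anomalies.append((prev_action, action, obj))
        seen.append((action, obj))
    return anomalies

print(semantic_anomalies(["archive invoice", "modify invoice"]))
# -> [('archive', 'modify', 'invoice')]
```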
Helping overcome distance, the use of videoconferencing tools has surged during the pandemic. To shed light on the consequences of videoconferencing at work, this study takes a granular look at the implications of the self-view feature for meeting outcomes. Building on self-awareness research and self-regulation theory, we argue that by heightening the state of self-awareness, self-view engagement depletes participants’ mental resources and thereby can undermine online meeting outcomes. Evaluation of our theoretical model on a sample of 179 employees reveals a nuanced picture. Self-view engagement while speaking and while listening is positively associated with self-awareness, which, in turn, is negatively associated with satisfaction with the meeting process, perceived productivity, and meeting enjoyment. The communication role emerges as critical: looking at oneself while listening to other attendees has a negative direct and indirect effect on meeting outcomes; however, looking at oneself while speaking produces equivocal effects.
Despite the phenomenal growth of Big Data Analytics in the last few years, little research has been done to explicate the relationship between Big Data Analytics Capability (BDAC) and the indirect strategic value derived from such digital capabilities. We attempt to address this gap by proposing a conceptual model of the BDAC–innovation relationship using dynamic capability theory. The work expands on BDAC business value research and extends the scant research done on the BDAC–innovation link. We focus on BDAC's relationship with different innovation objects, namely product, business process, and business model innovation, impacting all value chain activities. The insights gained will stimulate academic and practitioner interest in explicating the strategic value generated from BDAC and serve as a framework for future research on the subject.
Digital inclusion
(2021)
In this thesis, we tackle two social disruptions: recent refugee waves in Germany and the COVID-19 pandemic. We focus on the use of information and communication technology (ICT) as a key means of alleviating these disruptions and promoting social inclusion. As social disruptions typically lead to frustration and fragmentation, it is essential to ensure the social inclusion of individuals and societies during such times.
In the context of the social inclusion of refugees, we focus on the Syrian refugees who have arrived in Germany since 2015, as they form a large and coherent refugee community. In particular, we address the role of ICTs in refugees’ social inclusion and investigate how different ICTs (especially smartphones and social networks) can foster refugees’ integration and social inclusion. In the context of the COVID-19 pandemic, we focus on the widespread unconventional working model of work from home (WFH). Our research here centers on the main constructs of WFH and the key differences in WFH experiences based on personal characteristics such as gender and parental status.
We reveal novel insights through four well-established research methods: literature review, mixed methods, qualitative method, and quantitative method. The results of our research have been published in the form of eight articles in major information systems venues and journals. Key results from the refugee research stream include the following: Smartphones represent a central component of refugee ICT use; refugees view ICT as a source of information and power; the social connectedness of refugees is strongly correlated with their Internet use; refugees are not relying solely on traditional methods to learn the German language or pursue further education; the ability to use smartphones anytime and anywhere gives refugees an empowering feeling of global connectedness; and ICTs empower refugees on three levels (community participation, sense of control, and self-efficacy).
Key insights from the COVID-19 WFH stream include: Gender and the presence of children under the age of 18 affect workers’ control over their time, technology usefulness, and WFH conflicts, while not affecting their WFH attitudes; and both personal and technology-related factors affect an individual’s attitude toward WFH and their productivity. Further insights are being gathered at the time of submitting this thesis.
This thesis contributes to the discussion within the information systems community regarding how to use different ICT solutions to promote the social inclusion of refugees in their new communities and foster an inclusive society. It also adds to the growing body of research on COVID-19, in particular on the sudden workplace transformation to WFH. The insights gathered in this thesis reveal theoretical implications and future opportunities for research in the field of information systems, practical implications for relevant stakeholders, and social implications related to the refugee crisis and the COVID-19 pandemic that must be addressed.
Immigrant integration has become a primary political concern for leaders in Germany and the United States. The information systems (IS) community has begun to research how information and communications technologies can assist immigrants and refugees, such as by examining how countries can facilitate social-inclusion processes. Migrants face the challenge of joining closed communities that cannot integrate them or are afraid to do so. We conducted a panel discussion at the 2019 Americas Conference on Information Systems (AMCIS) in Cancun, Mexico, to introduce multiple viewpoints on immigration. In particular, the panel discussed how technology can both support immigrants in their quest and prevent them from succeeding. We conducted the panel to stimulate a thoughtful and dynamic discussion on best practices and recommendations to enhance the discipline's impact on alleviating the challenges that immigrants face in their host countries. In this panel report, we introduce the topic of using ICT to help immigrants integrate and identify differences between North/Central America and Europe. We also discuss how immigrants (particularly refugees) use ICT to connect with others, feel that they belong, and maintain their identity. We also uncover the dark and bright sides of how governments use ICT to deter illegal immigration. Finally, we present recommendations for researchers and practitioners on how to best use ICT to assist with immigration.
The coronavirus disease of 2019 (COVID-19) pandemic has forced most academics to work from home. This sudden venue change can affect academics' productivity and exacerbate the challenges that confront universities as they face an uncertain future. In this paper, we identify factors that influence academics' productivity while working from home during the mandate to self-isolate. From analyzing results from a global survey we conducted, we found that both personal and technology-related factors affect an individual's attitude toward working from home and productivity. Our results should prove valuable to university administrators to better address the work-life challenges that academics face.
Since the beginning of the recent global refugee crisis, researchers have been tackling many of its associated aspects, investigating how we can help to alleviate this crisis, in particular using ICT capabilities. In our research, we investigated the use of ICT solutions by refugees to foster the social inclusion process in the host community. To tackle this topic, we conducted thirteen interviews with Syrian refugees in Germany. Our findings reveal different ICT usages by refugees and how these contribute to feeling empowered. Moreover, we show the sources of empowerment that refugees gain through ICT use. Finally, we identified the two types of social inclusion benefits that were derived from these empowerment sources. Our results provide practical implications for different stakeholders and decision-makers on how ICT usage can empower refugees, how this can foster the social inclusion of refugees, and what should be considered to support them in their integration efforts.
Personal data increasingly serve as inputs to public goods. Like other types of contributions to public goods, personal data are likely to be underprovided. We investigate whether classical remedies to underprovision are also applicable to personal data and whether the privacy-sensitive nature of personal data must be additionally accounted for. In a randomized field experiment on a public online education platform, we prompt users to complete their profiles with personal information. Compared to a control message, we find that making public benefits salient increases the number of personal data contributions significantly. This effect is even stronger when additionally emphasizing privacy protection, especially for sensitive information. Our results further suggest that emphasis on both public benefits and privacy protection attracts personal data from a more diverse set of contributors.
During the observation of goal-directed actions, infants usually predict the goal at an earlier age when the agent is familiar (e.g., human hand) compared to unfamiliar (e.g., mechanical claw). These findings implicate a crucial role of the developing agentive self for infants' processing of others' action goals. Recent theoretical accounts suggest that predictive gaze behavior relies on an interplay between infants' agentive experience (top-down processes) and perceptual information about the agent and the action-event (bottom-up information; e.g., agency cues). The present study examined 7-, 11-, and 18-month-old infants' predictive gaze behavior for a grasping action performed by an unfamiliar tool, depending on infants' age-related action knowledge about tool-use and the display of the agency cue of producing a salient action effect. The results are in line with the notion of a systematic interplay between experience-based top-down processes and cue-based bottom-up information: Regardless of the salient action effect, predictive gaze shifts did not occur in the 7-month-olds (least experienced age group), but did occur in the 18-month-olds (most experienced age group). In the 11-month-olds, however, predictive gaze shifts occurred only when a salient action effect was presented. This sheds new light on how the developing agentive self, in interplay with available agency cues, supports infants' action-goal prediction also for observed tool-use actions.
This article discusses how Alex Garland’s The Beach (1996) engages with conceptions of utopian islands, nation, and colonialism in modernity and how it, from this basis, develops a different spatiality that reflects on a more deterritorialized form of imperial domination within late twentieth-century globalization, as exercised by the United States. The novel is shown to subvert, but not to abolish, two spatial formations that originated in early modernity: nation and utopia. Building on Jean Baudrillard’s elaborations regarding simulation and simulacra, the article argues that The Beach creates a hyperreal narrative that does away with the idea of isolated, bounded spaces and that in form and content corresponds with the worldwide dominance of the United States at the end of the twentieth century.
While W.E.B. Du Bois’s first novel, The Quest of the Silver Fleece (1911), is set squarely in the USA, his second work of fiction, Dark Princess: A Romance (1928), abandons this national framework, depicting the treatment of African Americans in the USA as embedded into an international system of economic exploitation based on racial categories. Ultimately, the political visions offered in the novels differ starkly, but both employ a Western literary canon – so-called ‘classics’ from Greek, German, English, French, and US American literature. With this, Du Bois attempts to create a new space for African Americans in the world (literature) of the 20th century. Weary of the traditions of this ‘world literature’, the novels complicate and begin to decenter the canon that they draw on. This reading traces what I interpret as subtle signs of frustration over the limits set by the literature that underlies Dark Princess, while its predecessor had been more optimistic in its appropriation of Eurocentric fiction for its propagandist aims.
Diabetes is a major public health problem with increasing global prevalence. Type 2 diabetes (T2D), which accounts for 90% of all diagnosed cases, is a complex polygenic disease also modulated by epigenetics and lifestyle factors. For the identification of T2D-associated genes, linkage analyses combined with mouse breeding strategies and bioinformatic tools were useful in the past. In a previous study in which a backcross population of the lean and diabetes-prone dilute brown non-agouti (DBA) mouse and the obese and diabetes-susceptible New Zealand obese (NZO) mouse was characterized, a major diabetes quantitative trait locus (QTL) was identified on chromosome 4. The locus was designated non-insulin dependent diabetes from DBA (Nidd/DBA). The aim of this thesis was (i) to perform a detailed phenotypic characterization of the Nidd/DBA mice, (ii) to further narrow the critical region and (iii) to identify the responsible genetic variant(s) of the Nidd/DBA locus. The phenotypic characterization of recombinant congenic mice carrying a 13.6 Mbp Nidd/DBA fragment with 284 genes presented a gradually worsening metabolic phenotype. Nidd/DBA allele carriers exhibited severe hyperglycemia (~19.9 mM) and impaired glucose clearance at 12 weeks of age. Ex vivo perifusion experiments with islets of 13-week-old congenic mice revealed a tendency towards reduced insulin secretion in homozygous DBA mice. In addition, 16-week-old mice showed a severe loss of β-cells and reduced pancreatic insulin content. Pathway analysis of transcriptome data from islets of congenic mice pointed towards a downregulation of cell survival genes. Morphological analysis of pancreatic sections displayed a reduced number of bi-hormonal cells co-expressing glucagon and insulin in homozygous DBA mice, which could indicate a reduced plasticity of endocrine cells in response to hyperglycemic stress. Further generation and phenotyping of recombinant congenic mice enabled the isolation of a 3.3 Mbp fragment that was still able to induce hyperglycemia and contained 61 genes. Bioinformatic analyses including haplotype mapping, sequence and transcriptome analysis were integrated in order to further reduce the number of candidate genes and to identify the presumable causative gene variant. Four putative candidate genes (Ttc39a, Kti12, Osbpl9, Calr4) were defined, which were either differentially expressed or carried a sequence variant. In addition, in silico ChIP-Seq analyses of the 3.3 Mbp region indicated a high number of SNPs located in active regions of binding sites of β-cell transcription factors. This points towards potentially altered cis-regulatory elements that could be responsible for the phenotype conferred by the Nidd/DBA locus. In summary, the Nidd/DBA locus mediates impaired glucose homeostasis and reduced insulin secretion capacity which finally leads to β-cell death. The downregulation of cell survival genes and reduced plasticity of endocrine cells could further contribute to the β-cell loss. The critical region was narrowed down to a 3.3 Mbp fragment containing 61 genes, of which four might be involved in the development of the diabetogenic Nidd/DBA phenotype.
Purpose
The objective of the investigation was to determine the concomitant effects of upper arm blood flow restriction (BFR) and inversion on elbow flexors neuromuscular responses.
Methods
Thirteen volunteers performed four conditions, in randomized order, in a within-subject design: rest (1-min upright position without BFR), control (1-min upright with BFR), 1-min inverted (without BFR), and 1-min inverted with BFR. Evoked and voluntary contractile properties before, during and after a 30-s maximum voluntary contraction (MVC) exercise intervention were examined, as well as pain scale ratings.
Results
Inversion induced significant pre-exercise intervention decreases in elbow flexors MVC (21.1%, η²p = 0.48, p = 0.02) and resting evoked twitch forces (29.4%, η²p = 0.34, p = 0.03). The 30-s MVC induced significantly greater pre- to post-test decreases in potentiated twitch force (η²p = 0.61, p = 0.0009) during inversion (75%) than upright (65.3%) conditions. Overall, BFR decreased MVC force 4.8% (η²p = 0.37, p = 0.05). For the upright position, BFR induced 21.0% reductions in M-wave amplitude (η²p = 0.44, p = 0.04). There were no significant differences for electromyographic activity or voluntary activation as measured with the interpolated twitch technique. For all conditions, there was a significant increase in pain scale between the 40–60 s intervals and post-30-s MVC (upright < inversion, and without BFR < BFR).
Conclusion
The concomitant application of inversion with elbow flexors BFR only amplified neuromuscular performance impairments to a small degree. Individuals who execute forceful contractions when inverted or with BFR should be cognizant that force output may be impaired.
We present a microcontact printing (µCP) routine suitable for introducing defined (sub-)microscale patterns onto surface substrates that exhibit a high capillary activity and are receptive to a silane-based chemistry. This is achieved by transferring functional trivalent alkoxysilanes, such as (3-aminopropyl)-triethoxysilane (APTES) as a low-molecular weight ink, via reversible covalent attachment to polymer brushes grafted from elastomeric polydimethylsiloxane (PDMS) stamps. The brushes consist of poly{N-[tris(hydroxymethyl)-methyl]acrylamide} (PTrisAAm) synthesized by reversible addition-fragmentation chain-transfer (RAFT) polymerization and used for immobilization of the alkoxysilane-based ink by substituting the alkoxy moieties with polymer-bound hydroxyl groups. Upon physical contact of the silane-carrying polymers with surfaces, the conjugated silane transfers to the substrate, thus completely suppressing ink flow and, in turn, maximizing printing accuracy even for otherwise not addressable substrate topographies. We provide a concise investigation of polymer brush formation using atomic force microscopy (AFM) and ellipsometry as well as of ink immobilization utilizing two-dimensional proton nuclear Overhauser enhancement spectroscopy (¹H-¹H NOESY NMR). We analyze the µCP process by printing onto Si wafers and show how even distinctively rough surfaces can be addressed, which otherwise represent particularly challenging substrates.
Paths Are Made by Walking
(2021)
Investigation of Sirtuin 3 overexpression as a genetic model of fasting in hypothalamic neurons
(2021)
The controlled dosage of substances from a device to its environment, such as a tissue or an organ in medical applications or a reactor, room, machinery or ecosystem in technical ones, should ideally match the requirements of the application, e.g. in terms of the time point at which the cargo is released. On-demand dosage systems may enable such a desired release pattern if the device contains suitable features that can translate external signals into a release function. This study is motivated by the opportunities arising from microsystems capable of an on-demand release and the contributions that geometrical design may have in realizing such features. The goals of this work included the design, fabrication, characterization and experimental proof-of-concept of a geometry-assisted triggerable dosing effect (a) with a sequential dosing release and (b) in a self-sufficient dosage system. Structure-function relationships were addressed on the molecular, morphological and, with particular attention, the device design level, which is on the micrometer scale. Models and/or computational tools were used to screen the parameter space and provide guidance for experiments.
The present work gives a detailed analysis of the metamorphic and structural evolution of the back-arc portion of the Famatinian Orogen exposed in the southern Sierra de Aconquija (Cuesta de La Chilca segment) in the Sierras Pampeanas Orientales (Eastern Pampean Sierras). The Pampeanas Orientales include from north to south the Aconquija, Ambato and Ancasti mountains. They are mainly composed of middle to high grade metasedimentary units and magmatic rocks.
At the south end of the Sierra de Aconquija, along an east to west segment extending over nearly 10 km (Cuesta de La Chilca), large volumes of metasedimentary rocks crop out. The eastern metasediments were defined as members of the El Portezuelo Metamorphic-Igneous Complex (EPMIC) or Eastern block, while the western ones belong to the Quebrada del Molle Metamorphic Complex (QMMC) or Western block. The two blocks are divided by the La Chilca Shear Zone, which is reactivated as the Río Chañarito fault.
The EPMIC, forming the hanging wall, is composed of schists, gneisses and rare amphibolites, calc-silicate schists, marbles and migmatites. The rocks underwent multiple episodes of deformation and a late high strain-rate episode with gradually increasing mylonitization to the west. Metamorphism progrades from an M1 phase to the peak M3, characterized by the reactions: Qtz + Pl + Bt ± Ms → Grt + Bt2 + Pl2 ± Sil ± Kfs, Qtz + Bt + Sil → Crd + Kfs and Qtz + Grt + Sil → Crd. The M3 assemblage is coeval with the dominant foliation related to a third deformational phase (D3).
The QMMC, forming the footwall, is made up of fine-grained banded quartz-biotite schists with quartz veins and quartz-feldspar-rich pegmatites. To the east, schists are also overprinted by mylonitization. The M3 peak assemblage is quartz + biotite + plagioclase ± garnet ± sillimanite ± muscovite ± ilmenite ± magnetite ± apatite.
The studied segment suffered multiphase deformation and metamorphism. Some of these phases can be correlated between both blocks. D1 is locally preserved in scarce outcrops in the EPMIC but is dominant in the QMMC, where S1 is nearly parallel to S0. In the EPMIC, D2 is represented by the S2 foliation, related to the F2 folding that overprints S1, with dominant strike NNW-SSE and high-angle dips to the E. D3 in the EPMIC produced F3 folds with axes oblique to S2; the S3 foliation strikes NW-SE, dips steeply to the E or W, and develops interference patterns. In the QMMC, S2 (D2) is a discontinuous cleavage oblique to S1 and transposed by S3 (D3), subparallel to S1. Such structures in the QMMC developed at subsolidus conditions and can be correlated to those of the EPMIC, which formed under higher P-T conditions. The penetrative deformation D2 in the EPMIC occurred during a prograde path with syntectonic growth of garnet, reaching P-T conditions of 640 °C and 0.54 GPa. This stage was followed by a penetrative deformation D3 with syn-kinematic growth of garnet, cordierite and plagioclase. Peak P-T conditions calculated for M3 are 710 °C and 0.60 GPa, preserved in the western part of the EPMIC, west of the unnamed fault.
The schists from the QMMC underwent an early low-grade M1 metamorphism with minimum P-T conditions of ca. 400 °C and 0.35 GPa, comparable to the fine schists (M1) outcropping to the east. The D2 deformation is associated with the prograde M2 metamorphism. The penetrative D3 stage is related to a medium-grade metamorphism M3, with peak conditions at ca. 590 °C and 0.55 GPa.
The superimposed stages of deformation and metamorphism reached high P-T conditions followed by isothermal decompression, defining a clockwise orogenic P-T path. During the Lower Paleozoic, folding was superimposed and recrystallization as well as partial melting at peak conditions occurred. Similar characteristics have been described for the basement of other Famatinian-dominated locations of the Sierra de Aconquija and other ranges of the Sierras Pampeanas Orientales.
We investigate how inviting students to set task-based goals affects usage of an online learning platform and course performance. We design and implement a randomized field experiment in a large mandatory economics course with blended learning elements. The low-cost treatment induces students to use the online learning system more often, more intensively, and to begin earlier with exam preparation. Treated students perform better in the course than the control group: they are 18.8% (0.20 SD) more likely to pass the exam and earn 6.7% (0.19 SD) more points on the exam. There is no evidence that treated students spend significantly more time, rather they tend to shift to more productive learning methods. The heterogeneity analysis suggests that higher treatment effects are associated with higher levels of behavioral bias but also with poor early course behavior.
With the downscaling of CMOS technologies, the radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become an imperative to address the SET effects in the early phases of the radiation-hard IC design. In general, the soft error mitigation solutions should accommodate both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should address synergistically three main aspects: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, accuracy of predictive SET models, and efficiency of SET mitigation measures are still critical issues. Therefore, this work addresses the following topics: (i) Characterization and modeling of SET effects in standard combinational cells, (ii) Static mitigation of SET effects in standard combinational cells, and (iii) Online particle detection, as a support for dynamic soft error mitigation.
Since the standard digital libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for the Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for the SPICE-based standard cell characterization with the reduced number of simulations, improved SET models and optimized SET sensitivity database. It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on characterization results, the fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to the state-of-the-art characterization methodologies which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way the amount of characterization data in the SET sensitivity database is reduced significantly.
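As an illustration of this storage scheme, the sketch below keeps only per-cell fitting coefficients and evaluates the sensitivity model on demand, instead of storing raw simulation sweeps; the model form (saturating exponential in LET, linear in supply voltage) and all numbers are invented placeholders, not the thesis' fitted models:

```python
import numpy as np

# Hypothetical coefficient-based SET sensitivity database: one small
# record per (cell, input) pair instead of a large raw-simulation LUT.
FIT_LUT = {
    ("NAND2_X1", "A"): {"w0": 120.0, "w1": 310.0, "lam": 0.08, "kv": -45.0},
}

def set_pulse_width_ps(cell, input_pin, let, vdd, vdd_nom=1.2):
    """Evaluate the fitted SET pulse width (ps) for one cell/input pair
    as a function of particle LET and supply voltage."""
    c = FIT_LUT[(cell, input_pin)]
    width_at_nominal = c["w0"] + c["w1"] * (1.0 - np.exp(-c["lam"] * let))
    return width_at_nominal + c["kv"] * (vdd - vdd_nom)

print(set_pulse_width_ps("NAND2_X1", "A", let=30.0, vdd=1.2))
```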
The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, significant improvement of the overall SER can be achieved with minimum area, delay and power overheads. For the SET mitigation in standard cells, it is essential to employ the techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By insertion of two decoupling cells at the output of a target cell, the critical charge of the cell’s output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work introduces a comprehensive characterization of gate-level mitigation techniques aimed to quantify their impact on the SET robustness improvement, as well as introduced area, delay and power overhead per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the required effort in subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selection of the most appropriate mitigation solution for a given design.
As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of energetic particles causing the soft errors. This allows activating the power-greedy fault-tolerant configurations based on N-modular redundancy only at high radiation levels. To enable such a functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing the pulse stretching inverters connected in parallel enables measuring the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to standard detectors which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency, low power consumption, and immunity to error accumulation.
The results achieved in this thesis can serve as a basis for establishment of an overall soft-error-aware database for a given digital library, and a comprehensive multi-level radiation-hard design flow that can be implemented with the standard IC design tools. The following step will be to evaluate the achieved results with the irradiation experiments.
This study investigates the use of pulse stretching (skew-sized) inverters for monitoring the variation of count rate and linear energy transfer (LET) of energetic particles. The basic particle detector is a cascade of two pulse stretching inverters, and the required sensing area is obtained by connecting up to 12 two-inverter cells in parallel and employing the required number of parallel arrays. The incident particles are detected as single-event transients (SETs), whereby the SET count rate denotes the particle count rate, while the SET pulse width distribution depicts the LET variations. The advantage of the proposed solution is the possibility to sense the LET variations using fully digital processing logic. SPICE simulations conducted on IHP's 130-nm CMOS technology have shown that the SET pulse width varies by approximately 550 ps over the LET range from 1 to 100 MeV·cm²·mg⁻¹. The proposed detector is intended for triggering the fault-tolerant mechanisms within a self-adaptive multiprocessing system employed in space. It can be implemented as a standalone detector or integrated in the same chip with the target system.
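A possible read-out of such a detector can be sketched as follows; the linear width-to-LET calibration below merely mimics the reported ~550 ps span over the 1-100 MeV·cm²·mg⁻¹ range, and the reference width is an assumed placeholder rather than the published device characteristic:

```python
import numpy as np

W_REF_PS = 200.0           # assumed pulse width at LET = 1 MeV*cm^2/mg
PS_PER_LET = 550.0 / 99.0  # assumed slope over the 1-100 LET range

def summarize_sets(widths_ps, window_s):
    """Estimate particle count rate and LET spread from recorded SETs.

    `widths_ps` is an array of SET pulse widths (ps) observed during a
    `window_s`-second window: the count tracks particle flux, while the
    width distribution tracks LET variation.
    """
    widths = np.asarray(widths_ps, dtype=float)
    count_rate = widths.size / window_s  # detected particles per second
    let_est = 1.0 + (widths - W_REF_PS) / PS_PER_LET
    return count_rate, np.percentile(let_est, [10, 50, 90])

rate, let_spread = summarize_sets([210.0, 340.0, 620.0], window_s=60.0)
print(rate, let_spread)
```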
In this paper, we employ a comparative life course approach for Canada and Germany to unravel the relationships among general and vocational educational attainment and different life course activities, with a focus on labour market and income inequality by gender. Life course theory and related concepts of 'time,' 'normative patterns,' 'order and disorder,' and 'discontinuities' are used to inform the analyses. Data from the Paths on Life's Way (Paths) project in British Columbia, Canada and the German Pathways from Late Childhood to Adulthood (LifE) study, which span 28 and 33 years, respectively, are employed to examine life trajectories from leaving school to around age 45. Sequence analysis and cluster analyses portray within- and between-country differences, in particular gender differences, in educational attainment, employment, and other activities across the life course, which have an impact on ultimate labour market participation and income levels. 'Normative' life courses that follow a traditional order correspond with higher levels of full-time work and higher incomes; in Germany more so than in Canada, these clusters are male dominated. Clusters characterised by 'disordered' and 'discontinuous' life courses in both countries are female dominated and associated with lower income levels.
The leniency rule revisited
(2021)
The experimental literature on antitrust enforcement provides robust evidence that communication plays an important role for the formation and stability of cartels. We extend these studies through a design that distinguishes between innocuous communication and communication about a cartel, sanctioning only the latter. To this aim, we introduce a participant in the role of the competition authority, who is properly incentivized to judge the communication content and price setting behavior of the firms. Using this novel design, we revisit the question whether a leniency rule successfully destabilizes cartels. In contrast to existing experimental studies, we find that a leniency rule does not affect cartelization. We discuss potential explanations for this contrasting result.
Coherent network partitions
(2021)
We continue to study coherent partitions of graphs whereby the vertex set is partitioned into subsets that induce biclique spanned subgraphs. The problem of identifying the minimum number of edges to obtain biclique spanned connected components (CNP), called the coherence number, is NP-hard even on bipartite graphs. Here, we propose a graph transformation geared towards obtaining an O (log n)-approximation algorithm for the CNP on a bipartite graph with n vertices. The transformation is inspired by a new characterization of biclique spanned subgraphs. In addition, we study coherent partitions on prime graphs, and show that finding coherent partitions reduces to the problem of finding coherent partitions in a prime graph. Therefore, these results provide future directions for approximation algorithms for the coherence number of a given graph.
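As a small aid to intuition for the objects involved, the sketch below tests whether a connected component is biclique spanned via a folklore complement-graph observation (a graph on at least two vertices has a spanning complete bipartite subgraph exactly when its complement is disconnected); this is an illustration, not the paper's new characterization, and the singleton convention is a modeling choice:

```python
import networkx as nx

def is_biclique_spanned(g: nx.Graph) -> bool:
    """A connected graph on >= 2 vertices contains a spanning biclique
    iff its complement is disconnected (take one complement component
    as part A and the remaining vertices as part B)."""
    if g.number_of_nodes() < 2:
        return True  # convention for singleton parts; adjust as needed
    return not nx.is_connected(nx.complement(g))

def is_coherent_partition(g: nx.Graph, parts) -> bool:
    """Check that every part induces a connected, biclique spanned subgraph."""
    return all(
        nx.is_connected(g.subgraph(p)) and is_biclique_spanned(g.subgraph(p))
        for p in parts
    )

# A 4-cycle is the biclique K_{2,2}, hence biclique spanned.
print(is_biclique_spanned(nx.cycle_graph(4)))  # True
```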
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14th, 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information, ranging from how black holes and neutron stars are formed and what neutron stars are composed of to how the Universe expands, and they allow testing general relativity in the highly-dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet, this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key aspect of GW searches are waveform models, which encapsulate our best predictions for the gravitational radiation under a certain set of parameters, and that need to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, as well as the development of the Laser Interferometer Space Antenna (LISA), demand accurate waveform models. While available models are enough to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, those for sources from both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetry in the masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signals’ parameters due to the presence of a foreground of many sources that overlap in the frequency band. This is recognized as one of the biggest challenges for the analysis of future detectors’ data, since biases might hinder the extraction of important astrophysical and cosmological information. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetry in the masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
For the first task, we will focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN, weak-field, small velocities) approximation that is most natural for the bound orbits that are routinely detected in GW searches. However, two other approximations have risen in prominence: the post-Minkowskian (PM, weak-field only) approximation, natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation, typical of binaries in which the mass of one body is much bigger than the other. These are most appropriate to binaries with high asymmetry in the masses that challenge current waveform models. Moreover, they allow one to “cover” regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. The analytical approximations to the relativistic two-body problem can synergistically be included within the effective-one-body (EOB) formalism, in which the two-body information from each approximation can be recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resultant models can cover both the low-spin comparable-mass binaries that are routinely detected, and the ones that challenge current models.
The first part of this thesis is dedicated to a study of how to best incorporate information from the PN, PM, SMR and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are, as compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We will show that PM information has the potential to improve currently-employed models for LIGO and Virgo, especially if recast within the EOB formalism. This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of PM and SMR approximations can be employed to access previously-unknown PN orders, deriving the third subleading PN dynamics for spin-orbit and (aligned) spin1-spin2 couplings. Such new results can then be included in the EOB models currently used in GW searches and parameter estimation studies, thereby improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one as usually done). We show how this is done in detail without incurring the divergences that had affected previous attempts, and compare the resultant model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models compare much better than the PN models, suggesting that SMR-informed EOB models might be the key to model binaries in the future.
In the second task of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out.
We illustrate the formalism with simple (yet realistic) LISA sources, and demonstrate its validity against Monte-Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
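For orientation, metrics of this kind build on the standard linear-signal (Fisher-matrix) machinery; in generic notation (an illustration, not the thesis' specific metrics), the leading-order bias induced on the parameter estimates by an unfitted residual δh reads:

```latex
\Delta\theta^{i} \;\approx\; \left(\Gamma^{-1}\right)^{ij}
\bigl(\partial_{j}h \,\big|\, \delta h\bigr),
\qquad
\Gamma_{ij} = \bigl(\partial_{i}h \,\big|\, \partial_{j}h\bigr),
\qquad
\bigl(a \,\big|\, b\bigr) = 4\,\mathrm{Re}\int_{0}^{\infty}
\frac{\tilde{a}(f)\,\tilde{b}^{*}(f)}{S_{n}(f)}\,\mathrm{d}f,
```

where h(θ) is the waveform model, S_n(f) the detector noise power spectral density, and δh collects the unfitted foreground and the residuals of imperfectly subtracted signals.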
Elaeidobius kamerunicus Faust. (Coleoptera: Curculionidae) is an essential insect pollinator in oil palm plantations. Recently, research has been undertaken to improve pollination efficiency using this species. A fundamental understanding of the genes related to this pollinator's behavior is necessary to achieve this goal. Here, we present the draft genome sequence, annotation, and simple sequence repeat (SSR) marker data for this pollinator. In total, 34.97 Gb of sequence data from one male individual (monoisolate) were obtained using the Illumina short-read platform NextSeq 500. The draft genome assembly was found to be 269.79 Mb, with about 59.9% completeness based on Benchmarking Universal Single-Copy Orthologs (BUSCO) assessment. Functional gene annotation predicted about 26,566 genes. Also, a total of 281,668 putative SSR markers were identified. This draft genome sequence is a valuable resource for understanding the population genetics, phylogenetics, dispersal patterns, and behavior of this species.
Polymeric semiconductors are strong contenders for replacing traditional inorganic semiconductors in electronic applications requiring low power, low cost and flexibility, such as biosensors, flexible solar cells and electronic displays. Molecular doping has the potential to enable this revolution by improving the conductivity and charge transport properties of this class of materials. Despite decades of research in this field, gaps in our understanding of the nature of dopant–polymer interactions have resulted in limited commercialization of this technology. This work aims at providing a deeper insight into the underlying mechanisms of molecular p-doping of semiconducting polymers in the solution and solid state, and thereby to bring the scientific community closer to realizing the dream of making organic semiconductors commonplace in the electronics industry. The role of 1) dopant size/shape, 2) polymer chain aggregation and 3) charge delocalization on the doping mechanism and efficiency is addressed using optical (UV-Vis-NIR) and electron paramagnetic resonance (EPR) spectroscopies. By conducting a comprehensive study of the nature and concentration of the doping-induced species in solutions of the polymer poly(3-hexylthiophene) (P3HT) with 3 different dopants, we identify the unique optical signatures of the delocalized polaron, localized polaron and charge-transfer complex, and report their extinction coefficient values. Furthermore, with X-ray diffraction, atomic force microscopy and electrical conductivity measurements, we study the impact of processing technique and doping mechanism on the morphology and, thereby, charge transport through the doped films.
This work demonstrates that the doping mechanism and type of doping-induced species formed are strongly influenced by the polymer backbone arrangement rather than dopant shape/size. The ability of the polymer chain to aggregate is found to be crucial for efficient charge transfer (ionization) and polaron delocalization. At the same time, our results suggest that the high ionization efficiency of a dopant–polymer system in solution may subsequently hinder efficient charge transport in the solid-state due to the reduction in the fraction of tie chains, which enable charges to move efficiently between aggregated domains in the films. This study demonstrates the complex multifaceted nature of polymer doping while providing important hints for the future design of dopant-host systems and film fabrication techniques.
The goal of limiting global warming to well below 2°C as set out in the Paris Agreement calls for a strategic assessment of societal pathways and policy strategies. Besides policy makers, new powerful actors from the private sector, including finance, have stepped up to engage in forward-looking assessments of a Paris-compliant and climate-resilient future. Climate change scenarios have addressed this demand by providing scientific insights on the possible pathways ahead to limit warming in line with the Paris climate goal. Despite the increased interest, the potential of climate change scenarios has not been fully unleashed, mostly due to the lack of an intermediary service that provides guidance and access to climate change scenarios. To overcome this shortcoming, this perspective presents the concept of a climate change scenario service, its components, and a prototypical implementation, aiming to make scenarios accessible to a broader audience of societal actors and decision makers.
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based. Thus, there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate from medical education and have, in a number of studies, been demonstrated to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, but few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity while portraying a mental disorder, depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III have been accepted for publication, and Paper II is under review after a minor revision.
First, Paper I develops and validates an observer-based rating-scale to assess SP authenticity in psychotherapeutic contexts. Based on the preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic after they receive an in-depth role-script compared to those SPs who only receive basic information on the patient case. To test this assumption, a randomised controlled study design was implemented and the hypothesis could be confirmed. As a consequence, when engaging SPs, an in-depth role-script with details, e.g. on nonverbal behaviour and feelings of the patient, should be provided.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients and therefore suggests that, with proper training, SPs are a promising training method for psychotherapy.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a maximum a posteriori (MAP) estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager-Machlup (OM) functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the Gamma-convergence of OM functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.
We derive Onsager-Machlup functionals for countable product measures on weighted ℓ^p subspaces of the sequence space ℝ^ℕ. Each measure in the product is a shifted and scaled copy of a reference probability measure on ℝ that admits a sufficiently regular Lebesgue density. We study the equicoercivity and Gamma-convergence of sequences of Onsager-Machlup functionals associated to convergent sequences of measures within this class. We use these results to establish analogous results for probability measures on separable Banach or Hilbert spaces, including Gaussian, Cauchy, and Besov measures with summability parameter 1 ≤ p ≤ 2. Together with part I of this paper, this provides a basis for analysis of the convergence of maximum a posteriori estimators in Bayesian inverse problems and most likely paths in transition path theory.
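For orientation across both parts: a functional I is an Onsager-Machlup functional for a measure μ when it captures the small-ball asymptotics of μ (standard definition, stated here in generic notation):

```latex
\lim_{r \to 0} \frac{\mu\bigl(B_{r}(u)\bigr)}{\mu\bigl(B_{r}(v)\bigr)}
= \exp\bigl(I(v) - I(u)\bigr)
\qquad \text{for all admissible } u, v,
```

so that MAP estimators are precisely the minimisers of I, and equicoercivity together with Gamma-convergence of the OM functionals yields convergence of those minimisers.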
Precipitation forecasting has an important place in everyday life – during the day we may have many small conversations discussing the likelihood that it will rain this evening or weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components comprising a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation to the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on the model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for nowcast errors diagnosis and isolation that can guide model development.
The present landscape of computational models for precipitation nowcasting still suffers from a lack of open software implementations that could serve as benchmarks for measuring progress. Addressing this gap, we developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with, or even superior to, state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available on GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that can serve as a benchmark for further model development and hypothesis testing.
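To make the tracking-plus-extrapolation idea concrete, here is a minimal Python sketch of the general scheme such models follow; it is not the rainymotion API itself, and the OpenCV parameter values are illustrative assumptions:

```python
import numpy as np
import cv2
from scipy.ndimage import map_coordinates

def constant_vector_nowcast(frame_prev, frame_curr, n_steps=12):
    """Toy tracking-and-extrapolation nowcast: estimate a dense motion field
    between two consecutive radar frames (8-bit grayscale) and advect the
    latest frame forward by backward warping, one step per future time step."""
    # Dense optical flow via the Farneback algorithm (parameters illustrative).
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_curr, None,
                                        0.5, 3, 15, 3, 5, 1.1, 0)
    h, w = frame_curr.shape
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    field = frame_curr.astype(float)
    nowcasts = []
    for step in range(1, n_steps + 1):
        # Backward warping: each pixel looks upstream along its motion vector.
        coords = np.array([yy - step * flow[..., 1], xx - step * flow[..., 0]])
        nowcasts.append(map_coordinates(field, coords, order=1, mode="constant"))
    return nowcasts
```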
One of the promising directions for model development is to explore the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning has shown promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it has started to dramatically outperform reference methods.
The high benefit of using "big data" for training is among the main reasons for that. Hence, the emerging interest in deep learning in the atmospheric sciences is also driven by the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this end, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
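The recursive scheme itself is simple; the following sketch assumes a Keras-like `model.predict` interface and a four-frame input stack (both assumptions for illustration, not the published RainNet code):

```python
import numpy as np

def recursive_nowcast(model, recent_frames, n_steps=12):
    """Reach lead times beyond the native 5 min by feeding each prediction
    back in as the newest input frame (assumed 4-frame input, channels last)."""
    frames = list(recent_frames)          # e.g. the last four radar composites
    predictions = []
    for _ in range(n_steps):              # 12 x 5 min = 1 h lead time
        x = np.stack(frames[-4:], axis=-1)[None]   # shape (1, H, W, 4)
        y = model.predict(x)[0, ..., 0]            # next 5-min field
        predictions.append(y)
        frames.append(y)                  # recursion: prediction becomes input
    return predictions
```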
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
The model development, together with the verification experiments for both conventional and deep learning model predictions, also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help to guide further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
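In code, the error metric is nothing more than a per-lead-time Euclidean distance between tracks; a minimal sketch (the array shapes are illustrative assumptions):

```python
import numpy as np

def location_error(observed_track, predicted_track):
    """Location error per lead time: Euclidean distance between observed and
    predicted feature positions. Both tracks have shape (n_lead_times, 2),
    holding (x, y) coordinates in kilometres."""
    obs = np.asarray(observed_track, dtype=float)
    pred = np.asarray(predicted_track, dtype=float)
    return np.linalg.norm(obs - pred, axis=1)
```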
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models, two of which are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method: motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5% of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of locational errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that end, we also point to the high potential of deep learning architectures designed for the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
For around a decade, deep learning – the sub-field of machine learning that refers to artificial neural networks comprised of many computational layers – has been reshaping the landscape of statistical model development in many research areas, such as image classification, machine translation, and speech recognition. Geoscientific disciplines in general, and the field of hydrology in particular, do not stand aside from this movement. Recently, modern deep learning-based techniques and methods have been gaining popularity for solving a wide range of hydrological problems: modeling and forecasting of river runoff, regionalization of hydrological model parameters, assessment of available water resources, and identification of the main drivers of recent changes in water balance components. This growing popularity of deep neural networks is primarily due to their high universality and efficiency. These qualities, together with the rapidly growing amount of accumulated environmental information and the increasing availability of computing facilities and resources, allow us to speak of deep neural networks as a new generation of mathematical models designed, if not to replace existing solutions, then to significantly enrich the field of geophysical process modeling. This paper provides a brief overview of the current state of the development and application of deep neural networks in hydrology. It also provides a qualitative long-term forecast of how deep learning technology may develop for the corresponding hydrological modeling challenges, based on the "Gartner Hype Cycle", which describes, in general terms, the life cycle of modern technologies.
We systematically explore the effect of calibration data length on the performance of a conceptual hydrological model, GR4H, in comparison to two Artificial Neural Network (ANN) architectures: Long Short-Term Memory networks (LSTM) and Gated Recurrent Units (GRU), which have only recently been introduced to the field of hydrology. We implemented a case study for six river basins across the contiguous United States, with 25 years of meteorological and discharge data. Nine years were reserved for independent validation; two years were used as warm-up periods, one for each of the calibration and validation periods; from the remaining 14 years, we sampled increasing amounts of data for model calibration and found pronounced differences in model performance. While GR4H required less data to converge, LSTM and GRU caught up at a remarkable rate, considering their number of parameters. Also, LSTM and GRU exhibited higher calibration instability in comparison to GR4H. These findings confirm the potential of modern deep learning architectures in rainfall-runoff modelling, but also highlight the noticeable differences between them with regard to the effect of calibration data length.
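As a rough illustration of the architecture class compared against GR4H, here is a minimal PyTorch sketch of an LSTM rainfall-runoff model (layer sizes and forcing variables are assumptions for illustration, not the study's configuration):

```python
import torch
import torch.nn as nn

class RunoffLSTM(nn.Module):
    """Minimal LSTM rainfall-runoff model: a sequence of meteorological
    forcings (e.g. precipitation, temperature) in, one discharge value out."""
    def __init__(self, n_inputs=2, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_inputs, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                 # x: (batch, time, n_inputs)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # discharge at the last time step
```

A GRU variant differs only in swapping `nn.LSTM` for `nn.GRU`, which is part of what makes such comparisons straightforward to set up.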
The paper introduces the principle Maximise Presupposition and its cognates. The main focus of the literature, and of this article, is on the inferences that arise as a result of reasoning with Maximise Presupposition ('anti-presuppositions'). I will review the arguments put forward for distinguishing them from other inference types, most notably presuppositions and conversational implicatures. I will zoom in on three main issues regarding Maximise Presupposition and these inferences that have been critically discussed in the literature: epistemic strength(ening), projection, and the role of alternatives. I will discuss more recent views which argue for a uniform treatment of anti-presuppositions and implicatures and/or a revision of the original principle in light of new data and developments in pragmatics.
Internships during tertiary education have become substantially more common over the past decades in many industrialised countries. This study examines the impact of a voluntary intra-curricular internship experience during university studies on the probability of being invited to a job interview. To estimate a causal relationship, we conducted a randomised field experiment in which we sent 1248 fictitious, but realistic, resumes to real job openings. We find that applicants with internship experience have, on average, a 12.6% higher probability of being invited to a job interview.
The trace elements zinc and manganese are essential for human health, especially due to their enzymatic and protein-stabilizing functions. If these elements are ingested in amounts exceeding the requirements, the regulatory processes for maintaining their physiological concentrations (homeostasis) can be disturbed. Such homeostatic dysregulations can cause severe health effects, including the emergence of neurodegenerative disorders such as Parkinson’s disease (PD). The concentrations of essential trace elements also change during the aging process. However, the relations of cause and consequence between increased manganese and zinc uptake and its influence on the aging process and the emergence of aging-associated PD are still poorly understood. This doctoral thesis therefore aimed to investigate the influence of a nutritive zinc and/or manganese oversupply on metal homeostasis during the aging process. For this, the model organism Caenorhabditis elegans (C. elegans) was used. This nematode is well suited as an aging and PD model due to properties such as its short life cycle and its completely sequenced, genetically amenable genome. Different protocols for the propagation of zinc- and/or manganese-supplemented young, middle-aged and aged C. elegans were established, using wildtypes as well as genetically modified worm strains modeling inheritable forms of parkinsonism. To identify homeostatic and neurological alterations, the nematodes were investigated with different methods, including the analysis of total metal contents via inductively coupled plasma tandem mass spectrometry, a specific probe-based method for quantifying labile zinc, survival assays, gene expression analysis, and fluorescence microscopy for the identification and quantification of dopaminergic neurodegeneration. During aging, the levels of iron, as well as of zinc and manganese, increased. Furthermore, the simultaneous oversupply with zinc and manganese increased the total zinc and manganese contents to a greater extent than the single-metal supplementation. In this context, the C. elegans metallothionein 1 (MTL-1) was identified as an important regulator of metal homeostasis. The total zinc content and the concentration of labile zinc were age-dependently, but differently, regulated. This underscores the importance of distinguishing these parameters as two independent biomarkers of the zinc status. Not the metal oversupply, but aging increased the levels of dopaminergic neurodegeneration. Additionally, nearly all of these results revealed differences in the aging-dependent regulation of trace element homeostasis between wildtypes and PD models. This confirms that an increased zinc and manganese intake can influence the aging process as well as parkinsonism by altering homeostasis, although the underlying mechanisms need to be clarified in further studies.
Manganese (Mn) and zinc (Zn) are not only essential trace elements, but also potential exogenous risk factors for various diseases. Since the disturbed homeostasis of single metals can result in detrimental health effects, concerns have emerged regarding the consequences of excessive exposures to multiple metals, either via nutritional supplementation or parenteral nutrition. This study focuses on Mn-Zn-interactions in the nematode Caenorhabditis elegans (C. elegans) model, taking into account aspects related to aging and age-dependent neurodegeneration.
“Embodied Practices – Looking From Small Places” is an edited transcript of a conversation between theatre and performance scholar Sruti Bala (University of Amsterdam) and sociologist, criminologist and anthropologist Dylan Kerrigan (University of Leicester) that took place as an online event in November 2020. Throughout their talk, Bala and Kerrigan engage with the legacy of Haitian anthropologist Michel-Rolph Trouillot. Specifically, they focus on his approach of looking from small units, such as small villages in Dominica, outwards to larger political structures such as global capitalism, social inequalities and the distribution of power. They also share insights from their own research on embodied practices in the Caribbean, Europe and India and answer questions such as: What can research on and through embodied practices tell us about systems of power and domination that move between the local and the global? How can performance practices which are informed by multiple locations and cultures be read and appreciated adequately? Sharing insights from his research into Guyanese prisons, Kerrigan outlines how he aims to connect everyday experiences and struggles of Caribbean people to trans-historical and transnational processes such as racial capitalism and post/coloniality. Furthermore, he elaborates on how he uses performance practices such as spoken word poetry and data verbalisation to connect with systematically excluded groups. Bala challenges naïve notions about the inherent transformative potential of performance in her research on performance and translation. She points to the way in which performance and its reception is always already inscribed in what she calls global or planetary asymmetries. At the conclusion of this conversation, they broach the question: are small places truly as small as they seem?
Label-free optical sensors are attractive candidates, for example, for detecting toxic substances and monitoring biomolecular interactions. Their performance can be pushed by the design of the sensor through clever material choices and integration of components. In this work, two porous materials, namely, porous silicon and plasmonic nanohole arrays, are combined in order to obtain increased sensitivity and dual-mode sensing capabilities. For this purpose, porous silicon monolayers are prepared by electrochemical etching and plasmonic nanohole arrays are obtained using a bottom-up strategy. Hybrid sensors of these two materials are realized by transferring the plasmonic nanohole array on top of the porous silicon. Reflectance spectra of the hybrid sensors are characterized by a fringe pattern resulting from the Fabry–Pérot interference at the porous silicon borders, which is overlaid with a broad dip based on surface plasmon resonance in the plasmonic nanohole array. In addition, the hybrid sensor shows a significantly higher reflectance in comparison to the porous silicon monolayer. The sensitivities of the hybrid sensor to refractive index changes are separately determined for both components. A significant increase in sensitivity from 213 ± 12 to 386 ± 5 nm/RIU is determined for the transfer of the plasmonic nanohole array sensors from solid glass substrates to porous silicon monolayers. In contrast, the spectral position of the interference pattern of porous silicon monolayers in different media is not affected by the presence of the plasmonic nanohole array. However, the changes in fringe pattern reflectance of the hybrid sensor are increased 3.7-fold after being covered with plasmonic nanohole arrays and could be used for high-sensitivity sensing. Finally, the capability of the hybrid sensor for simultaneous and independent dual-mode sensing is demonstrated.
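The sensitivity figures quoted here follow the standard definition for refractometric sensors: the shift of the resonance wavelength per unit change of the ambient refractive index,

```latex
S = \frac{\Delta\lambda_{\mathrm{res}}}{\Delta n} \quad [\mathrm{nm/RIU}].
```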
Less is more!
(2021)
Enhancing consumer satisfaction and well-being is an important objective of companies, retailers and public policy makers. In the current debate on climate change, a consistent theme is that consumers in developed countries must learn to consume less. The present study (based on representative data sets from the US, N = 1,017, and Germany, N = 1,030) addresses these issues by using a scenario-based experiment to analyze how satisfied voluntary simplifiers (people who voluntarily abstain from consumption) are with their purchase decisions in the case of a muesli brand. The research question is whether people who follow a sustainable, simple lifestyle are more satisfied with their daily consumption choices than people who have a more consumerist lifestyle. If so, it would be easier for many people to change their lifestyles and consume less. In addition, this scenario experiment manipulates consumer empowerment and decision complexity since both factors are supposed to influence purchase satisfaction. The results are consistent across both countries and indicate that voluntary simplifiers experience a higher level of purchasing satisfaction than non-simplifiers, whereby empowerment and decision complexity play different roles.
Metal sulfide nanoparticle synthesis with ionic liquids: state of the art and future perspectives
(2021)
Metal sulfides are among the most promising materials for a wide variety of technologically relevant applications ranging from energy to environment and beyond. Incidentally, ionic liquids (ILs) have been among the top research subjects for the same applications and also for inorganic materials synthesis. As a result, the exploitation of the peculiar properties of ILs for metal sulfide synthesis could provide attractive new avenues for the generation of new, highly specific metal sulfides for numerous applications. This article therefore describes current developments in metal sulfide nanoparticle synthesis as exemplified by a number of highlight examples. Moreover, the article demonstrates how ILs have been used in metal sulfide synthesis and discusses the benefits of using ILs over more traditional approaches. Finally, the article outlines some technological challenges and how ILs could be used to further advance the production and specific property engineering of metal sulfide nanomaterials, again based on a number of selected examples.
Enzymes can support the synthesis or degradation of biomacromolecules in natural processes. Here, we demonstrate that enzymes can induce a macroscopic directed movement of microstructured hydrogels following a mechanism that we call a "Jack-in-the-box" effect. The material's design is based on the formation of internal stresses induced by a deformation load on an architectured microscale, which are kinetically frozen by the generation of polyester locking domains, similar to a Jack-in-the-box toy (i.e., a compressed spring stabilized by a closed box lid). To induce the controlled macroscopic movement, the locking domains are equipped with enzyme-specific cleavable bonds (i.e., a box with a lock and key system). As a result of the enzymatic reaction, a transformed shape is achieved by the release of internal stresses. There is an increase in entropy in combination with a swelling-supported stretching of polymer chains within the microarchitectured hydrogel (i.e., the encased clown pops up with a pre-stressed movement when the box is unlocked). This utilization of an enzyme as a physiological stimulus may offer new approaches to create interactive and enzyme-specific materials for different applications, such as an optical indicator of the enzyme's presence, or actuators and sensors in biotechnology and in fermentation processes.
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, who want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause, as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they represent, as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would benefit greatly from interactions for exploring the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows for a consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that allow the efficient rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to the domain problems and our developed visualization concepts.
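A minimal Python sketch of the underlying idea, one dot per individual with a bidirectional index mapping so that a click navigates back to the raw record (the data and field names are invented for illustration; the report's prototypes were built in Lively4, not matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy illustration of the "one dot per individual" concept: the i-th dot is
# the i-th record, so picking a dot recovers the underlying data directly.
records = [{"id": k, "answer": f"voice {k}"} for k in range(200)]
xy = np.random.rand(len(records), 2)

fig, ax = plt.subplots()
dots = ax.scatter(xy[:, 0], xy[:, 1], picker=True)

def on_pick(event):
    for i in event.ind:        # indices of picked dots == record indices
        print(records[i])      # navigate back to the original statement

fig.canvas.mpl_connect("pick_event", on_pick)
plt.show()
```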
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
In this short survey article, we showcase a number of non-trivial geometric problems that have recently been resolved by marrying methods from functional calculus and real-variable harmonic analysis. We give a brief description of these methods as well as their interplay. This is a succinct survey that hopes to inspire geometers and analysts alike to study these methods so that they can be further developed to be potentially applied to a broader range of questions.
The Eastern Mediterranean is the most seismically active region in Europe due to the complex interactions of the Arabian, African, and Eurasian tectonic plates. Deformation is achieved by faulting in the brittle crust, distributed flow in the viscoelastic lower crust and mantle, and Hellenic subduction, but the long-term partitioning of these mechanisms is still unknown. We exploit an extensive suite of geodetic observations to build a kinematic model connecting strike-slip deformation, extension, subduction, and shear localization across Anatolia and the Aegean Sea by mapping the distribution of slip and strain accumulation on major active geological structures. We find that tectonic escape is facilitated by a plate-boundary-like, translithospheric shear zone extending from the Gulf of Evia to the Turkish-Iranian Plateau that underlies the surface trace of the North Anatolian Fault. Additional deformation in Anatolia is taken up by a series of smaller-scale conjugate shear zones that reach the upper mantle, the largest of which is located beneath the East Anatolian Fault. Rapid north-south extension in the western part of the system, driven primarily by Hellenic Trench retreat, is accommodated by rotation and broadening of the North Anatolian mantle shear zone from the Sea of Marmara across the north Aegean Sea, and by a system of distributed transform faults and rifts including the rapidly extending Gulf of Corinth in central Greece and the active grabens of western Turkey. Africa-Eurasia convergence along the Hellenic Arc occurs at a median rate of 49.8 mm/yr in a largely trench-normal direction, except near eastern Crete, where variably oriented slip on the megathrust coincides with mixed-mode and strike-slip deformation in the overlying accretionary wedge near the Ptolemy-Pliny-Strabo trenches. Our kinematic model illustrates the competing roles the North Anatolian mantle shear zone, Hellenic Trench, overlying mantle wedge, and active crustal faults play in accommodating tectonic indentation, slab rollback and associated Aegean extension. Viscoelastic flow in the lower crust and upper mantle dominates the surface velocity field across much of Anatolia, and a clear transition to megathrust-related slab pull occurs in western Turkey, the Aegean Sea and Greece. Crustal-scale faults and the Hellenic wedge contribute only a minor amount to the large-scale, regional pattern of Eastern Mediterranean interseismic surface deformation.
Cyanobacteria are an abundant bacterial group and are found in a variety of ecological niches all around the globe. When they form blooms, they can pose a real threat to fish or mammals and can restrict the use of lakes or rivers for recreational purposes or as a source of drinking water. One of the most abundant bloom-forming cyanobacteria is Microcystis aeruginosa.
In the first part of the study, the role and possible dynamics of RubisCO in M. aeruginosa during high-light irradiation were examined. Its response was analyzed on the protein and peptide level via immunoblotting, immunofluorescence microscopy and with high performance liquid chromatography (HPLC). It was revealed that large amounts of RubisCO were located outside of carboxysomes under the applied high light stress. RubisCO aggregated mainly underneath the cytoplasmic membrane. There it forms a putative Calvin-Benson-Bassham (CBB) super complex together with other enzymes of photosynthesis. This complex could be part of an alternative carbon-concentrating mechanism (CCM) in M. aeruginosa, which enables a faster, and energy saving adaptation to high light stress of the whole bloom.
Furthermore, the re-localization of RubisCO was delayed in the microcystin-deficient mutant ΔmcyB, and RubisCO was more evenly distributed over the cell in comparison to the wild type. Since the growth of ΔmcyB is not impaired, other cyanopeptides produced, such as aeruginosin or cyanopeptolin, may also play a role in the stabilization of RubisCO and the putative CBB complex, especially in the microcystin-free mutant.
In the second part of this work, the possible role of microcystin as an extracellular signaling peptide during the diurnal cycle was studied. HPLC analysis showed a strong increase of extracellular microcystin in the wild type when the population entered nighttime, and this continued into the next day. Together with the increase of extracellular microcystin, a strong decrease of protein-bound intracellular microcystin was observed via immunoblot analysis. Interestingly, the signal of the large subunit of RubisCO (RbcL) also diminished when high amounts of microcystin were present in the surrounding medium. Microcystin addition experiments with M. aeruginosa WT and ΔmcyB cultures support this observation, since the immunoblot signals of both subunits of RubisCO and of CcmK, a shell protein of carboxysomes, diminished after the addition of microcystin. In addition, the fluctuation of cyanopeptolin during the diurnal cycle indicates a more prominent role of other cyanopeptides besides microcystin as signaling peptides, intracellularly as well as extracellularly.
Frailty assessment is recommended before elective transcatheter aortic valve implantation (TAVI) to determine the post-interventional prognosis. Several studies have investigated frailty in TAVI patients using numerous assessments; however, it remains unclear which is the most appropriate tool for clinical practice. Therefore, we evaluated which frailty assessments are most commonly used and meaningful for ≤30-day and ≥1-year prognosis in TAVI patients. Randomized controlled or observational studies (prospective/retrospective) investigating all-cause mortality in older (≥70 years) TAVI patients were identified (PubMed; May 2020). In total, 79 studies investigating frailty with 49 different assessments were included. As single markers of frailty, mostly gait speed (23 studies) and serum albumin (16 studies) were used. A higher risk of 1-year mortality was predicted by slower gait speed (highest Hazard Ratio (HR): 14.71; 95% confidence interval (CI) 6.50–33.30) and lower serum albumin level (highest HR: 3.12; 95% CI 1.80–5.42). Composite indices (five items; seven studies) were associated with 30-day (highest Odds Ratio (OR): 15.30; 95% CI 2.71–86.10) and 1-year mortality (highest OR: 2.75; 95% CI 1.55–4.87). In conclusion, single markers of frailty, in particular gait speed, were widely used to predict 1-year mortality. Composite indices were appropriate, as was a comprehensive assessment of frailty.
Pivots revisited
(2021)
The term "pivot" usually refers to two overlapping syntactic units such that the completion of the first unit simultaneously launches the second. In addition, pivots are generally said to be characterized by the smooth prosodic integration of their syntactic parts. This prosodic integration is typically achieved by prosodic-phonetic matching of the pivot components. As research on such turns in a range of languages has illustrated, speakers routinely deploy pivots so as to be able to continue past a point of possible turn completion, in the service of implementing some additional or revised action. This article seeks to build on, and complement, earlier research by exploring two issues in more detail as follows: (1) what exactly do pivotal turn extensions accomplish on the action dimension, and (2) what role does prosodic-phonetic packaging play in this? We will show that pivot constructions not only exhibit various degrees of prosodic-phonetic (non-)integration, i.e., differently strong cesuras, but that they can be ordered on a continuum, and that this cline maps onto the relationship of the actions accomplished by the components of the pivot construction. While tighter prosodic-phonetic integration, i.e., weak(er) cesuring, co-occurs with post-pivot actions whose relationship to that of the pre-pivot tends to be rather retrospective in character, looser prosodic-phonetic integration, i.e., strong(er) cesuring, is associated with a more prospective orientation of the post-pivot's action. These observations also raise more general questions with regard to the analysis of action.
“Chunking” spoken language
(2021)
In this introductory paper to the special issue on “Weak cesuras in talk-in-interaction”, we aim to guide the reader into current work on the “chunking” of naturally occurring talk. It is conducted in the methodological frameworks of Conversation Analysis and Interactional Linguistics – two approaches that consider the interactional aspect of humans talking with each other to be a crucial starting point for its analysis. In doing so, we will (1) lay out the background of this special issue (what is problematic about “chunking” talk-in-interaction, the characteristics of the methodological approach chosen by the contributors, the cesura model), (2) highlight what can be gained from such a revised understanding of “chunking” in talk-in-interaction by referring to previous work with this model as well as the findings of the contributions to this special issue, and (3) indicate further directions such work could take starting from papers in this special issue. We hope to induce a fruitful exchange on the phenomena discussed, across methodological divides.
The life cycle of higher plants is based on recurring phases of growth and development, consisting of repetitive sequences of cell division, cell expansion and cell differentiation. This dissertation comprises two projects, each investigating a different topic related to cell expansion. The first project examines an Arabidopsis thaliana mutant exhibiting overall cell enlargement, and the second project analyses two naturally occurring floral morphs of Amsinckia spectabilis (Boraginaceae) differing (among other traits) in style length and anther height due to differences in longitudinal cell elongation. The EMS mutant eop1 was shown to exhibit a petal size increase of 26% caused by cell enlargement. Further phenotypes were detected, such as an increase in cotyledon size (based on larger cells) as well as increased carpel, sepal, leaf and pollen sizes. Plant height was increased, and more highly branched trichomes explained the hairy eop1 phenotype. Fine mapping revealed the causal SNP to be a C to T transition at the last nucleotide of intron 7 of the INCURVATA11 (ICU11) gene, a 2-oxoglutarate/Fe(II)-dependent dioxygenase, causing mis-splicing of the mRNA. Two T-DNA insertion lines (icu11-2 and icu11-4) confirmed ICU11 as the causal gene by exhibiting increased petal size. A comparison of three icu11 alleles, which possessed different mutation-related changes, either overexpressing ICU11 or producing modified mRNAs, was the basis for investigating the molecular mechanism underlying the observed phenotype. Different approaches yielded contradictory results regarding ICU11 protein functionality in the icu11 mutants. A complementation assay proved the three mutants to be exchangeable, and ICU11 overexpression in the wild-type led to an icu11-like phenotype, arguing for all three icu11 mutants being gain-of-function (GOF) mutants. Contradicting this conclusion, the icu11-4 line could be rescued by a genomic ICU11 transgene. A model was proposed based on the assumption that overexpression of ICU11 inhibits the function of the protein and thus causes the same effect as a loss-of-function (LOF) protein. Furthermore, icu11-3 (eop1) mutants were shown to have an increased resistance towards paclobutrazol, a gibberellin (GA) inhibitor, and an upregulation of AtGA20ox2, a main GA biosynthesis gene. Additionally, the subcellular localization of ICU11 was found to be cytoplasmic, supporting the assumption that ICU11 affects GA biosynthesis and overall GA levels, possibly explaining the observed (GA-overdose) phenotype.
The second project aimed to identify the genetic basis of the S-locus in Amsinckia spectabilis, as the genus Amsinckia exhibits characteristics atypical for a heterostylous species, such as no obvious self-incompatibility (SI) and the repeated transition towards homostylous and fully selfing variants. The work was based on three forms of Amsinckia spectabilis: a heterostylous form, consisting of two floral morphs with reciprocal positioning of the sexual organs (S-morph: high anthers and a short style; L-morph: low anthers and a long style), and two homostylous forms, one large-flowered and partially selfing, the other small-flowered and fully selfing. The maintenance of the two floral morphs is genetically based on the S-locus region, which contains genes encoding the morph-specific traits and is marked by tight linkage due to suppressed recombination. Natural populations are found to possess a 1:1 S:L morph ratio, which can be explained by predominant disassortative mating of the two morphs, causing the dominant S-allele to occur only in the heterozygous state (heterozygous (Ss) for the S-morph and homozygous recessive (ss) for the L-morph). Investigation of morph-specific phenotypes detected 56% longer L-morph styles and 58% higher-positioned S-morph anthers. Approximately 50% of the observed size differences were explained by an increase in cell elongation. Moreover, additional phenotypes were found, such as 21% enlarged S-morph pollen and no obvious SI, confirmed by seed counts after hand pollination, in vivo pollen tube growth, and the development of homozygous dominant SS individuals via selfing. The S-locus of Amsinckia spectabilis was assumed to consist of at least the G- (style length), the A- (anther height) and the P- (pollen size) locus. Comparative transcriptomics of the two morphs revealed 22 differentially expressed markers located within two contigs of a PacBio genome assembly of an SS individual, allowing the S-locus to be delimited to a region of approximately 23 Mb. In contrast to S-loci revealed elsewhere in the plant kingdom, no strong evidence was found for a hemizygous region causing the suppressed recombination of the S-locus, so that an inversion was assumed to be the causal mechanism.
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images displaying natural scenes. Nowadays, other fields, such as cultural heritage, where an abundance of data is available, have also come into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision", in which students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to the available data is the scarcity of annotated training data, as the dataset provided by the Getty Research Institute does not contain a sufficient number of annotated samples for the training of deep neural networks. However, throughout the report we show that it is possible to achieve satisfying to very good results by using further publicly available datasets, such as the WikiArt dataset, for the training of machine learning models.
The notion of clearly separated core-shell structures is already outdated by the fact that core-shell nanoparticles lag behind their respective bulk materials in efficiency due to intermixing between core and shell dopant ions. In order to optimize the photoluminescence of core-shell UCNPs, the intermixing should be as small as possible, and therefore key parameters of this process need to be identified. In the present work, the Ln(III) ion migration in the host lattices NaYF4 and NaGdF4 was monitored. These investigations were performed by laser spectroscopy with the help of lanthanide resonance energy transfer (LRET) between Eu(III) as donor and Pr(III) or Nd(III) as acceptor. The LRET is evaluated based on the Förster theory. The findings corroborate the literature and point to the migration of ions in the host lattices. Based on the introduced LRET model, the acceptor concentration in the surrounding of one donor clearly depends on the design of the applied core-shell-shell nanoparticles. In general, thinner intermediate insulating shells lead to a higher acceptor concentration, stronger quenching of the Eu(III) donor, and subsequently stronger sensitization of the Pr(III) or Nd(III) acceptors. The choice of the host lattice as well as the synthesis temperature are parameters to be considered for the intermixing process.
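For reference, the Förster relation on which such LRET evaluations rest gives the transfer efficiency as a steep function of the donor-acceptor distance r and the Förster radius R₀:

```latex
E = \frac{R_0^{6}}{R_0^{6} + r^{6}}.
```

This sixth-power distance dependence is what makes the measured quenching of the Eu(III) donor such a sensitive probe of ion intermixing across the shells.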
Moving spiral wave chimeras
(2021)
We consider a two-dimensional array of heterogeneous nonlocally coupled phase oscillators on a flat torus and study the bound states of two counter-rotating spiral chimeras, in short, two-core spiral chimeras, observed in this system. In contrast to other known spiral chimeras with motionless incoherent cores, the two-core spiral chimeras typically show a drift motion. Due to this drift, their incoherent cores become spatially modulated and develop specific fingerprint patterns of varying synchrony levels. In the continuum limit of infinitely many oscillators, the two-core spiral chimeras can be studied using the Ott-Antonsen equation. Numerical analysis of this equation allows us to reveal the stability regions of different spiral chimeras, which we group into three main classes: symmetric, asymmetric, and meandering spiral chimeras.
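In one common formulation for nonlocally coupled Kuramoto-Sakaguchi oscillators with Lorentzian-distributed frequencies (centre ω₀, width γ), the Ott-Antonsen equation for the local order parameter z(x, t) reads (notation is illustrative, not necessarily the paper's):

```latex
\partial_t z = (i\omega_0 - \gamma)\,z
  + \tfrac{1}{2}\left(e^{-i\alpha} W - e^{i\alpha} \bar{W}\, z^{2}\right),
\qquad
W(x,t) = \int G(x-x')\, z(x',t)\, \mathrm{d}x',
```

where G is the nonlocal coupling kernel and α the phase lag. Coherent regions correspond to |z| = 1 and the incoherent cores to |z| < 1.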
Stereoselective [4+2] Cycloaddition of Singlet Oxygen to Naphthalenes Controlled by Carbohydrates
(2021)
Stereoselective reactions of singlet oxygen are of current interest. Since enantioselective photooxygenations have not been realized efficiently, auxiliary control is an attractive alternative. However, the obtained peroxides are often too labile for isolation or further transformations into enantiomerically pure products. Herein, we describe the oxidation of naphthalenes by singlet oxygen, where the face selectivity is controlled by carbohydrates for the first time. The synthesis of the precursors is easily achieved starting from naphthoquinone and a protected glucose derivative in only two steps. Photooxygenations proceed smoothly at low temperature, and we detected the corresponding endoperoxides as sole products by NMR. They are labile and can thermally react back to the parent naphthalenes and singlet oxygen. However, we could isolate and characterize two enantiomerically pure peroxides, which are sufficiently stable at room temperature. An interesting influence of substituents on the stereoselectivities of the photooxygenations has been found, ranging from 51:49 up to 91:9 dr (diastereomeric ratio). We explain this by a hindered rotation of the carbohydrate substituents, substantiated by a combination of NOESY measurements and theoretical calculations. Finally, we could transfer the chiral information from a pure endoperoxide to an epoxide, which was isolated in enantiomerically pure form after cleavage of the sugar chiral auxiliary.
Large-scale literature mining to assess the relation between anti-cancer drugs and cancer types
(2021)
Background:
There is a huge body of scientific literature describing the relation between tumor types and anti-cancer drugs. The vast amount of scientific literature makes it impossible for researchers and physicians to extract all relevant information manually.
Methods:
In order to cope with the large amount of literature, we applied an automated text mining approach to assess the relations between the 30 most frequent cancer types and 270 anti-cancer drugs. We applied two different approaches: a classical text mining approach based on named entity recognition, and an AI-based approach employing word embeddings. The consistency of the literature mining results was validated with three independent methods: first, using data from FDA approvals; second, using experimentally measured IC50 cell line data; and third, using clinical patient survival data.
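A minimal sketch of the embedding-based variant (the names and the embedding source are illustrative assumptions; the study's actual pipeline is not reproduced here):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_drugs_for_cancer(cancer, drugs, embeddings):
    """Score each drug against a cancer type by embedding similarity.
    `embeddings` maps a term to its vector, assumed to come from a model
    pre-trained on biomedical abstracts (e.g. word2vec on PubMed)."""
    scores = {d: cosine(embeddings[cancer], embeddings[d]) for d in drugs}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```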
Results:
We demonstrated that the automated text mining was able to successfully assess the relations between cancer types and anti-cancer drugs. All validation methods showed a good correspondence between the results from literature mining and the independent confirmatory approaches. The relations between the most frequent cancer types and the drugs employed for their treatment were visualized in a large heatmap. All results are accessible in an interactive web-based knowledge base using the following link: .
Conclusions:
Our approach is able to assess the relations between compounds and cancer types in an automated manner. Both cancer types and compounds could be grouped into different clusters. Researchers can use the interactive knowledge base to inspect the presented results and follow their own research questions, for example the identification of novel indication areas for known drugs.
The rapid emergence of online targeted political advertising has raised concerns over data privacy and what the government's response should be. This paper tested and confirmed the hypothesis that public attitudes toward stricter regulation of online targeted political advertising are partially motivated by partisan self-interest. We conducted an experiment using an online survey of 1549 Americans who identify as either Democrats or Republicans. Our findings show that Democrats and Republicans believe that online targeted political advertising benefits the opposing party. This belief is based on their conviction that their political opponents are more likely to be mobilized by online targeted political advertising than are supporters of their own party. We exogenously manipulated the partisan self-interest considerations of a random subset of participants by truthfully informing them that, in the past, online targeted political advertising has benefited Republicans. Our findings show that Republicans informed about this had less favorable attitudes toward regulation than did their uninformed co-partisans. This suggests that Republicans' attitudes regarding stricter regulation are based not solely on concerns about privacy violations but are also driven, in part, by beliefs about partisan advantage. The results imply that people are willing to accept violations of their privacy if their preferred party benefits from the use of online targeted political advertising.
Nonribosomal peptides (NRP) are crucial molecular mediators in microbial ecology and provide indispensable drugs. Nevertheless, the evolution of the flexible biosynthetic machineries that correlates with the stunning structural diversity of NRPs is poorly understood. Here, we show that recombination is a key driver in the evolution of bacterial NRP synthetase (NRPS) genes across distant bacterial phyla, which has guided structural diversification in a plethora of NRP families by extensive mixing and matching of biosynthesis genes. The systematic dissection of a large number of individual recombination events not only unveiled a striking plurality in the nature and origin of the exchange units but also allowed the deduction of overarching principles that enable the efficient exchange of adenylation (A) domain substrates while maintaining the functionality of the dynamic multienzyme complexes. In the majority of cases, recombination events have targeted variable portions of the Acore domains, while domain interfaces and the flexible Asub domain remained untapped. Our results strongly contradict the widespread assumption that adenylation and condensation (C) domains coevolve and significantly challenge the attributed role of C domains as a stringent selectivity filter during NRP synthesis. Moreover, they teach valuable lessons on the choice of natural exchange units in the evolution of NRPS diversity, which may guide future engineering approaches.
One third of the world's population lives in areas where earthquakes causing at least slight damage are frequently expected. Thus, the development and testing of global seismicity models is essential to improving seismic hazard estimates and earthquake-preparedness protocols for effective disaster-risk mitigation. Currently, the availability and quality of geodetic data along plate-boundary regions provide the opportunity to construct global models of plate motion and strain rate, which can be translated into global maps of forecasted seismicity. Moreover, the broad coverage of existing earthquake catalogs nowadays facilitates the calibration and testing of global seismicity models. As a result, modern global seismicity models can integrate two independent factors necessary for physics-based, long-term earthquake forecasting, namely interseismic crustal strain accumulation and sudden lithospheric stress release.
In this dissertation, I present the construction of, and testing results for, two global ensemble seismicity models, aimed at providing mean rates of shallow (0-70 km) earthquake activity for seismic hazard assessment. These models depend on the Subduction Megathrust Earthquake Rate Forecast (SMERF2), a stationary seismicity approach for subduction zones based on the conservation-of-moment principle and the use of regional "geodesy-to-seismicity" parameters, such as corner magnitudes, seismogenic thicknesses and subduction dip angles. Specifically, this interface-earthquake model combines geodetic strain rates with instrumentally recorded seismicity to compute long-term rates of seismic and geodetic moment. Based on this, I derive analytical solutions for seismic coupling and earthquake activity, which provide this earthquake model with the initial ability to properly forecast interface seismicity. Then, I integrate SMERF2 interface-seismicity estimates with earthquake computations in non-subduction zones provided by the Seismic Hazard Inferred From Tectonics approach, based on the second iteration of the Global Strain Rate Map, to construct the global Tectonic Earthquake Activity Model (TEAM). TEAM is designed to reduce the inconsistencies in earthquake numbers, and potentially in their spatial distribution, exhibited by its predecessor tectonic earthquake model during the 2015-2017 period. Also, I combine this new geodesy-based earthquake approach with a global smoothed-seismicity model to create the World Hybrid Earthquake Estimates based on Likelihood scores (WHEEL) model. This updated hybrid model serves as an alternative earthquake-rate approach to the Global Earthquake Activity Rate model for forecasting long-term rates of shallow seismicity everywhere on Earth.
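The moment-conservation idea can be caricatured in a few lines: a coupled fraction of the geodetically accumulated moment rate must eventually be released seismically, which bounds the long-term earthquake rate. The sketch below uses the standard Hanks-Kanamori magnitude-moment conversion and a deliberately crude single-magnitude release assumption; the actual SMERF2 derivation works with magnitude-frequency distributions and corner magnitudes instead:

```python
def moment_from_magnitude(mw):
    """Seismic moment (N·m) from moment magnitude (Hanks & Kanamori, 1979)."""
    return 10 ** (1.5 * mw + 9.1)

def crude_event_rate(geodetic_moment_rate, coupling, m_char):
    """Long-term event rate (per year) if a fraction `coupling` of the
    geodetic moment rate (N·m/yr) is released entirely in earthquakes of
    magnitude `m_char` -- a strong simplification for illustration only."""
    return coupling * geodetic_moment_rate / moment_from_magnitude(m_char)

# Example: 1e18 N·m/yr accumulated, 50% coupled, released in Mw 8.0 events
# gives roughly one such event every ~2500 years.
print(crude_event_rate(1e18, 0.5, 8.0))
```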
Global seismicity models provide scientific hypotheses about when and where earthquakes may occur, and how big they might be. Nonetheless, the veracity of these hypotheses can only be either confirmed or rejected after prospective forecast evaluation. Therefore, I finally test the consistency and relative performance of these global seismicity models with independent observations recorded during the 2014-2019 pseudo-prospective evaluation period. As a result, hybrid earthquake models based on both geodesy and seismicity are the most informative seismicity models during the testing time frame, as they obtain higher information scores than their constituent model components. These results support the combination of interseismic strain measurements with earthquake-catalog data for improved seismicity modeling. However, further prospective evaluations are required to more accurately describe the capacities of these global ensemble seismicity models to forecast longer-term earthquake activity.
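The moment-balance logic behind such "geodesy-to-seismicity" models can be illustrated with a short sketch: given a geodetic moment accumulation rate, a coupling fraction, and a tapered Gutenberg-Richter magnitude distribution with a corner magnitude, a balancing annual earthquake rate follows. All parameter values below are generic assumptions for illustration, not those calibrated in SMERF2.

```python
import numpy as np

def moment(mw):
    # Hanks & Kanamori (1979): scalar seismic moment in N*m
    return 10.0 ** (1.5 * mw + 9.05)

def tapered_gr_survival(mw, mw_min, beta, mw_corner):
    # Kagan-style tapered Gutenberg-Richter survival function in moment space
    m, m_t, m_c = moment(mw), moment(mw_min), moment(mw_corner)
    return (m_t / m) ** beta * np.exp((m_t - m) / m_c)

def balanced_rate(moment_rate, coupling=0.8, mw_min=5.0, beta=0.65, mw_corner=8.5):
    """Annual rate of Mw >= mw_min events whose expected moment release
    balances the coupled fraction of the geodetic moment rate (N*m/yr)."""
    mws = np.linspace(mw_min, mw_corner + 1.5, 4000)
    pdf = -np.gradient(tapered_gr_survival(mws, mw_min, beta, mw_corner), mws)
    mean_moment = np.trapz(pdf * moment(mws), mws)  # expected moment per event
    return coupling * moment_rate / mean_moment

print(balanced_rate(5e18))  # events/yr with Mw >= 5 for a zone accumulating 5e18 N*m/yr
```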
The mean free path of ionizing photons, λ(mfp), is a key factor in the photoionization of the intergalactic medium (IGM). At z ≳ 5, however, λ(mfp) may be short enough that measurements towards QSOs are biased by the QSO proximity effect. We present new direct measurements of λ(mfp) that address this bias and extend up to z ∼ 6 for the first time. Our measurements at z ∼ 5 are based on data from the Giant Gemini GMOS survey and new Keck LRIS observations of low-luminosity QSOs. At z ∼ 6 we use QSO spectra from Keck ESI and VLT X-Shooter. We measure λ(mfp) = 9.09 (+1.62/−1.28) proper Mpc and 0.75 (+0.65/−0.45) proper Mpc (68 percent confidence) at z = 5.1 and 6.0, respectively. The results at z = 5.1 are consistent with existing measurements, suggesting that bias from the proximity effect is minor at this redshift. At z = 6.0, however, we find that neglecting the proximity effect biases the result high by a factor of two or more. Our measurement at z = 6.0 falls well below extrapolations from lower redshifts, indicating rapid evolution in λ(mfp) over 5 < z < 6. This evolution disfavours models in which reionization ended early enough that the IGM had time to fully relax hydrodynamically by z = 6, but is qualitatively consistent with models wherein reionization completed at z = 6 or even significantly later. Our mean free path results are most consistent with late reionization models wherein the IGM is still 20 percent neutral at z = 6, although our measurement at z = 6.0 is even lower than these models prefer.
A characterization of the essential spectrum of Schrödinger operators on infinite graphs is derived involving the concept of R-limits. This concept, which was introduced previously for operators on ℕ and ℤ^d as "right-limits", captures the behaviour of the operator at infinity. For graphs with sub-exponential growth rate, we show that each point in σ(ess)(H) corresponds to a bounded generalized eigenfunction of a corresponding R-limit of H. If, additionally, the graph is of uniform sub-exponential growth, the converse inclusion holds as well.
With recent experimental advances in laser-driven electron dynamics in polyatomic molecules, the need arises for their reliable theoretical modelling. Efficient, yet fairly accurate methods for many-electron dynamics include Time-Dependent Configuration Interaction Singles (TD-CIS), a Wave Function Theory (WFT) method, and Real-Time Time-Dependent Density Functional Theory (RT-TD-DFT). Here we compare TD-CIS combined with extended Atomic Orbital (AO) bases, TD-CIS/AO, with RT-TD-DFT in a grid representation of the Kohn-Sham orbitals, RT-TD-DFT/Grid. Possible ionization losses are treated by complex absorbing potentials in energy space (for TD-CIS/AO) or real space (for RT-TD-DFT/Grid). The comparison is made for two test cases: (i) state-to-state transitions using resonant lasers (π-pulses), i.e., bound electron motion, and (ii) large-amplitude electron motion leading to High Harmonic Generation (HHG). Test systems are the H2 molecule and cis- and trans-1,2-dichloroethene, C2H2Cl2 (DCE). From time-dependent electronic energies, dipole moments, and HHG spectra, the following observations are made: first, for bound state-to-state transitions enforced by π-pulses, TD-CIS nicely accounts for the expected population inversion, in contrast to RT-TD-DFT, in agreement with earlier findings. Secondly, under non-resonant laser pulses, TD-CIS/AO yields dipole moments and lower harmonics in HHG spectra that are in good agreement with those obtained with RT-TD-DFT/Grid. Deviations become larger for higher harmonics and at low laser intensities, i.e., for low-intensity HHG signals. We also carefully test the effects of basis sets for TD-CIS/AO and grid size for RT-TD-DFT/Grid, of different exchange-correlation functionals in RT-TD-DFT, and of absorbing boundaries. Finally, for the present examples, TD-CIS/AO is observed to be at least an order of magnitude more computationally efficient than RT-TD-DFT/Grid.
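Both methods ultimately yield HHG spectra from the time-dependent dipole moment. As a method-agnostic illustration of that post-processing step (a minimal sketch assuming atomic units and a uniform time grid; not code from the paper), the spectrum can be obtained from the windowed Fourier transform of the dipole acceleration:

```python
import numpy as np

def hhg_spectrum(t, dipole, omega0):
    """HHG spectrum from a time-dependent dipole d(t), using the
    dipole-acceleration form S(w) ~ |FT[d''(t) * w(t)]|^2 with a Hann
    window to suppress spectral leakage; omega0 is the driving laser
    (angular) frequency, so the returned axis is in harmonic orders."""
    dt = t[1] - t[0]
    accel = np.gradient(np.gradient(dipole, dt), dt)  # d''(t)
    spec = np.abs(np.fft.rfft(accel * np.hanning(len(t)))) ** 2
    omega = 2 * np.pi * np.fft.rfftfreq(len(t), dt)   # angular frequency grid
    return omega / omega0, spec                       # harmonic order, intensity
```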
The chemical nature, the number and length of integrated building blocks, as well as their sequence structure impact the phase morphology of multiblock copolymers (MBC) consisting of two non-miscible block types. We hypothesized that a strictly alternating sequence should favour phase segregation and thereby the elastic properties. A library of well-defined MBCs composed of two different hydrophobic, semi-crystalline blocks providing domains with well-separated melting temperatures (T(m)s) was synthesized from the same type of precursor building blocks as strictly alternating (MBCs(alt)) or random (MBCs(ran)) MBCs and compared. Three different series of MBCs(alt) or MBCs(ran) were synthesized by high-throughput synthesis by coupling oligo(ε-caprolactone) (OCL) of different molecular weights (2, 4, and 8 kDa) with oligotetrahydrofuran (OTHF, 2.9 kDa) via Steglich esterification, in which the molar ratio of the reaction partners was slightly adjusted. The maximum weight-average molecular weights (M(w)) were 65,000 g·mol(-1), 165,000 g·mol(-1), and 168,000 g·mol(-1) for MBCs(alt) and 80,500 g·mol(-1), 100,000 g·mol(-1), and 147,600 g·mol(-1) for MBCs(ran). With increasing M(w), a decrease of both T(m)s, associated with the melting of the OCL and OTHF domains, was observed for all MBCs. T(m)(OTHF) of MBCs(ran) was always higher than T(m)(OTHF) of MBCs(alt), which was attributed to a better phase segregation. In addition, the elongation at break of MBCs(alt) was almost half that of MBCs(ran). In this way, this study elucidates the role of block length and sequence structure in MBCs and enables a quantitative discussion of the structure-function relationship when two semi-crystalline block segments are utilized for the design of block copolymers.
The chemical nature, the number and length of integrated building blocks, as well as their sequence structure impact the phase morphology of multiblock copolymers (MBC) consisting of two non-miscible block types. It is hypothesized that a strictly alternating sequence should impact phase segregation. A library of well-defined MBC obtained by coupling oligo(ε-caprolactone) (OCL) of different molecular weights (2, 4, and 8 kDa) with oligotetrahydrofuran (OTHF, 2.9 kDa) via Steglich esterification results in strictly alternating (MBC(alt)) or random (MBC(ran)) MBC. The three different series have weight-average molecular weights (M(w)) of 65,000, 165,000, and 168,000 g·mol(-1) for MBC(alt) and 80,500, 100,000, and 147,600 g·mol(-1) for MBC(ran). When the chain length of the OCL building blocks is increased, phase segregation is facilitated, which is attributed to the decrease in chain mobility within the MBC. Furthermore, it is found that the phase segregation disturbs the crystallization by causing heterogeneities in the semi-crystalline alignment, attributed to an increase of the disorder of the OCL semi-crystalline alignment.
Introduction
(2021)
Confidence Counts
(2021)
The increasing reliance on online learning in higher education has been further expedited by the on-going Covid-19 pandemic. Students need to be supported as they adapt to this new learning environment. Research has established that learners with positive online learning self-efficacy beliefs are more likely to persevere and achieve their higher education goals when learning online. In this paper, we explore how MOOC design can contribute to the four sources of self-efficacy beliefs posited by Bandura [4]. Specifically, we will explore, drawing on learner reflections, whether design elements of the MOOC, The Digital Edge: Essentials for the Online Learner, provided participants with the necessary mastery experiences, vicarious experiences, verbal persuasion, and affective regulation opportunities, to evaluate and develop their online learning self-efficacy beliefs. Findings from a content analysis of discussion forum posts show that learners referenced three of the four information sources when reflecting on their experience of the MOOC. This paper illustrates the potential of MOOCs as a pedagogical tool for enhancing online learning self-efficacy among students.
We report on the multiple response of microgels triggered by a single optical stimulus. Under irradiation, the volume of the microgels is reversibly switched by more than 20 times. The irradiation initiates two different processes: photo-isomerization of the photo-sensitive surfactant, which forms a complex with the anionic microgel, rendering it photo-responsive; and local heating due to a thermo-plasmonic effect within the structured gold layer on which the microgel is deposited. The photo-responsivity is related to the reversible accommodation/release of the photo-sensitive surfactant depending on its photo-isomerization state, while the thermo-sensitivity is intrinsically built in. We show that under exposure to green light, the thermo-plasmonic effect generates a local hot spot in the gold layer, resulting in the shrinkage of the microgel. This process competes with the simultaneous photo-induced swelling. Depending on the position of the laser spot, reversible particle shrinking/swelling can thus be controlled spatiotemporally, to a predefined extent and on a per-second basis.
CrashNet
(2021)
Destructive car crash tests are an elaborate, time-consuming, and expensive necessity of the automotive development process. Today, finite element method (FEM) simulations are used to reduce costs by simulating car crashes computationally. We propose CrashNet, an encoder-decoder deep neural network architecture that reduces costs further and models specific outcomes of car crashes very accurately. We achieve this by formulating car crash events as time series prediction enriched with a set of scalar features. Traditional sequence-to-sequence models are usually composed of convolutional neural network (CNN) and CNN transpose layers. We propose to concatenate those with an MLP capable of learning how to inject the given scalars into the output time series. In addition, we replace the CNN transpose layers with 2D CNN transpose layers in order to force the model to process the hidden state of the set of scalars as one time series. The proposed CrashNet model can be trained efficiently and is able to process scalars and time series as input in order to infer the results of crash tests. CrashNet produces results faster and at a lower cost compared to destructive tests and FEM simulations. Moreover, it represents a novel approach in the car safety management domain.
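A rough sketch of such a model is given below; the layer sizes, the fusion of the scalar embedding by broadcast addition, and the final reduction are illustrative assumptions, not the published CrashNet design:

```python
import torch
import torch.nn as nn

class CrashNetSketch(nn.Module):
    """Minimal sketch of an encoder-decoder that fuses a time series with
    scalar side inputs; all details here are assumptions for illustration."""

    def __init__(self, in_ch=1, n_scalars=8, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(        # CNN encoder: (B, in_ch, 256) -> (B, 64, 64)
            nn.Conv1d(in_ch, hidden, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.scalar_mlp = nn.Sequential(     # MLP that learns how to inject the scalars
            nn.Linear(n_scalars, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        self.decoder = nn.Sequential(        # 2D transposed convs treat the fused
            nn.ConvTranspose2d(1, 8, 4, stride=2, padding=1), nn.ReLU(),  # hidden state
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),             # as one map
        )

    def forward(self, series, scalars):
        h = self.encoder(series)                        # (B, hidden, 64)
        h = h + self.scalar_mlp(scalars).unsqueeze(-1)  # broadcast scalars over time
        img = self.decoder(h.unsqueeze(1))              # (B, 1, 64, 64) -> (B, 1, 256, 256)
        return img.mean(dim=2).squeeze(1)               # collapse to output series (B, 256)

# usage: CrashNetSketch()(torch.randn(2, 1, 256), torch.randn(2, 8)).shape == (2, 256)
```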
The existential threat to small businesses, which play a crucial role in the economy, lies behind the plethora of scholarly studies published in 2020, the first year of the COVID-19 pandemic. Examining the 15 contributions of the special issue on the “Economic effects of the COVID-19 pandemic on entrepreneurship and small businesses,” the paper comprises four parts: a systematic review of the literature on the effect on entrepreneurship and small businesses; a discussion of four literature strands based on this review; an overview of the contributions in this special issue; and some ideas for post-pandemic economic research.
"BreaThink"
(2021)
Cognition is shaped by signals from outside and within the body. Following recent evidence of interoceptive signals modulating higher-level cognition, we examined whether breathing changes the production and perception of quantities. In Experiment 1, 22 adults verbally produced on average larger random numbers after inhaling than after exhaling. In Experiment 2, 24 further adults estimated the numerosity of dot patterns that were briefly shown after either inhaling or exhaling. Again, we obtained on average larger responses following inhalation than exhalation. These converging results extend models of situated cognition according to which higher-level cognition is sensitive to transient interoceptive states.
Digital software platforms such as iOS or Android evolve quickly. Through regular updates, their set of built-in (core) features increases. While innovation allows strengthening platforms amidst competition, it can hurt contributors when introducing core features that are already provided by third-party developers (Platform Coring).
This book addresses the underexplored phenomenon of Platform Coring and provides strategic guidance for platform owners and third-party contributors. Platform owners are well-advised to carefully consider the benefits and risks for their platform ecosystem.
The book contributes by highlighting avenues to employ Platform Coring for the competitive advantage of the platform and ecosystem simultaneously.
Future ERP Systems
(2021)
This paper presents a research agenda for the current generation of ERP systems, developed on the basis of a literature review of their current problems. The problems are presented following the ERP life cycle. In the next step, the identified problems are mapped onto a reference architecture model of ERP systems, an extension of the three-tier architecture model that is widely used in practice. The research agenda is structured according to this reference architecture model and addresses the identified problems at the data, infrastructure, adaptation, process, and user-interface layers.
Today’s mobile devices are part of powerful business ecosystems, which usually involve digital platforms. To better understand the complex phenomenon of coring and related dynamics, this paper presents a case study comparing iMessage, as part of Apple’s iOS, with WhatsApp. Specifically, it investigates platform coring activities, i.e., the integration of functionality provided by third-party applications into the platform core. The paper makes three contributions. First, a systematization of coring activities is developed, differentiating coring modes by the amount of coring and application maintenance. Second, the case study reveals that the phenomenon of platform coring is present on digital platforms for mobile devices. Third, the fundamentals of coring are discussed as a first step towards theory development. Even though coring constitutes a potential threat to third-party developers regarding their functional differentiation, an idea of what a beneficial partnership incorporating coring activities could look like is developed here.
The idea of the continuous improvement process (CIP) helps companies to continuously improve their operations and thereby contributes to their competitiveness. Through digitization, new potentials emerge to solve known CIP issues. This contribution specifically addresses the individual motivation of employees to contribute to the CIP, since related initiatives typically lack contributions over time. The use of gamification is a promising way to achieve continuous participation by addressing the individual needs of participants. While the use of extrinsic motivation elements is common in practice, the idea of this approach is to specifically address intrinsic motivations, which serve as a long-term motivator. This article contributes a gamification concept for the continuous improvement process. The main results include an adapted CIP, a gamification concept, and a market mechanism. Furthermore, the concept is implemented and demonstrated as a prototype in an online platform.
Government as a platform?
(2021)
Digital platforms, by their design, allow the coordination of multiple entities to achieve a common goal. Motivated by the success of platforms in the private sector, they increasingly receive attention in the public sector. However, different understandings of the platform concept prevail. To guide development and further research, a coherent understanding is required. To address this gap, we identify the constitutive elements of platforms in the public sector. Moreover, their potential to coordinate partially autonomous entities, as is typical for federally organized states, is highlighted.
This study contributes a uniform understanding of public service platforms. Beyond the constitutive elements, the proposed framework for platforms in the public sector may guide future analyses. The analysis framework is applied to platforms of federal states in the European Union.
Software platforms allow for the extension of features by third-party contributors. Thereby, platform innovation is an important aspect of a platform's attractiveness for users and complementors. While previous research has focused on the introduction of new features, the removal and discontinuation of features on software platforms has been disregarded. To explore the phenomenon of and motivations for feature removal on software platforms, a review of recent literature is provided. To illustrate the existence of and motivations for feature removal, a case study of the browser platform Mozilla Firefox is presented. The results reveal that feature removal regularly occurs on browser platforms, for both user- and developer-related features. Frequent reasons for feature removal involve unused features, security concerns, and bugs. Related motivations for feature removal are discussed from the platform owner's perspective, and implications for complementors and users are highlighted.
Software platforms regularly introduce new features to remain competitive. While platform innovation is considered to be a critical success factor, adding certain features could hurt the ecosystem. If platform owners provide functionality that was previously provided by a contributor, the owners enter complementary product spaces. Complementary market entry frequently occurs on software platforms and is a major concern for third-party developers.
Divergent findings on the impact of complementary market entry call for the consideration of additional factors. As prior research has neglected the third-party perspective, this contribution aims to address this gap. We explore the use of measures to prevent complementary market entry using a survey on browser platforms. The research model is tested with 655 responses from developers for Mozilla Firefox and Google Chrome. Developers' attitudes towards complementary market entry and its perceived likelihood are important in explaining the employment of countermeasures. The results reveal that developers employ countermeasures if complementary market entry is assessed negatively and perceived as likely for their extension. Differences among browser platforms concerning complementary market entry are identified. Product spaces of extensions available on multiple platforms are less likely to be entered and are more heavily protected. Implications for research and for stakeholders, i.e., platform owners and contributors, are discussed.
Modern browsers are digital software platforms, as they allow third parties to extend functionality by providing extensions. In a highly competitive environment, differentiation through provided functionality is a key factor for browser platforms. As the development of browsers progresses, new functions are constantly being released. Browsers can thus enter complementary markets by adding functionality previously provided by third-party extensions, which is referred to as ‘platform coring’. Previous studies have missed the perspective of the parties involved. To address this gap, we conducted interviews with third-party and core developers in the security and privacy domain from Firefox and Chrome. This study provides three contributions: first, insights into stakeholder-specific issues concerning coring; second, measures to prevent coring; and third, strategic guidance for developers and owners. Third-party vendors have experienced, and core developers confirmed, that coring occurs on browser platforms. While developers with extrinsic motivations assess coring negatively, developers with intrinsic motivations perceive coring positively.
Viper
(2021)
Key-value stores (KVSs) have found wide application in modern software systems. For persistence, their data resides in slow secondary storage, which requires KVSs to employ various techniques to increase their read and write performance from and to the underlying medium. Emerging persistent memory (PMem) technologies offer data persistence at close-to-DRAM speed, making them a promising alternative to classical disk-based storage. However, simply replacing existing storage with PMem as a drop-in does not yield good results, as block-based access behaves differently in PMem than on disk and ignores PMem's byte addressability, layout, and unique performance characteristics. In this paper, we propose three PMem-specific access patterns and implement them in a hybrid PMem-DRAM KVS called Viper. We employ a DRAM-based hash index and a PMem-aware storage layout to utilize the random-write speed of DRAM and the efficient sequential-write performance of PMem. Our evaluation shows that Viper significantly outperforms existing KVSs for core KVS operations while providing full data persistence. Moreover, Viper outperforms existing PMem-only, hybrid, and disk-based KVSs by 4-18x for write workloads, while matching or surpassing their get performance.
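The core design can be illustrated with a toy sketch: a volatile (DRAM) hash index over a persistent, sequentially written value log. Here, an mmap-backed file stands in for PMem, and recovery and concurrency concerns are omitted; this is an assumption-laden illustration, not Viper's implementation:

```python
import mmap, os

class HybridKVSketch:
    """Toy hybrid KVS: a volatile dict as the DRAM hash index over a
    persistent, append-only value log (mmap'ed file standing in for PMem)."""

    def __init__(self, path, size=1 << 20):
        fd = os.open(path, os.O_CREAT | os.O_RDWR)
        os.ftruncate(fd, size)
        self.log = mmap.mmap(fd, size)  # persistent, byte-addressable region
        self.tail = 0                   # next sequential write offset
        self.index = {}                 # DRAM hash index: key -> (offset, length)

    def put(self, key: bytes, value: bytes):
        off = self.tail
        self.log[off:off + len(value)] = value  # sequential, PMem-friendly write
        self.tail += len(value)
        self.index[key] = (off, len(value))     # fast random write, but in DRAM

    def get(self, key: bytes) -> bytes:
        off, length = self.index[key]
        return bytes(self.log[off:off + length])
```

A real PMem KVS would additionally flush cache lines for durability and rebuild the volatile index from the log on restart.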
Trait means or variance
(2021)
One of the few laws in ecology is that communities consist of few common and many rare taxa. Functional traits may help to identify the underlying mechanisms of this community pattern, since they correlate with different niche dimensions. However, comprehensive studies are missing that investigate the effects of species mean traits (niche position) and intraspecific trait variability (ITV, niche width) on species abundance. In this study, we investigated fragmented dry grasslands to reveal trait-occurrence relationships in plants at local and regional scales. We predicted that (a) at the local scale, species occurrence is highest for species with intermediate traits, (b) at the regional scale, habitat specialists have a lower species occurrence than generalists, and thus, traits associated with stress-tolerance have a negative effect on species occurrence, and (c) ITV increases species occurrence irrespective of the scale. We measured three plant functional traits (SLA = specific leaf area, LDMC = leaf dry matter content, plant height) at 21 local dry grassland communities (10 m × 10 m) and analyzed the effect of these traits and their variation on species occurrence. At the local scale, mean LDMC had a positive effect on species occurrence, indicating that stress-tolerant species are the most abundant rather than species with intermediate traits (hypothesis 1). We found limited support for lower specialist occurrence at the regional scale (hypothesis 2). Further, ITV of LDMC and plant height had a positive effect on local occurrence supporting hypothesis 3. In contrast, at the regional scale, plants with a higher ITV of plant height were less frequent. We found no evidence that the consideration of phylogenetic relationships in our analyses influenced our findings. In conclusion, both species mean traits (in particular LDMC) and ITV were differently related to species occurrence with respect to spatial scale. Therefore, our study underlines the strong scale-dependency of trait-abundance relationships.
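The kind of trait-occurrence model described here can be sketched, for instance, as a binomial GLM of plot occupancy on a species' trait mean and its ITV. The data and model below are invented stand-ins for illustration; the study's actual analysis (e.g., accounting for phylogeny) may differ:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-species data (invented for illustration):
occupied  = np.array([18, 4, 9, 15, 2, 11])                 # plots occupied (of 21)
plots     = np.full(6, 21)
ldmc_mean = np.array([310., 180., 240., 330., 150., 260.])  # species mean LDMC (mg/g)
ldmc_itv  = np.array([0.12, 0.25, 0.18, 0.10, 0.30, 0.15])  # ITV as CV of LDMC

X = sm.add_constant(np.column_stack([ldmc_mean, ldmc_itv]))
y = np.column_stack([occupied, plots - occupied])  # (successes, failures) per species
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.params)  # positive coefficients -> higher local occurrence
```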
Future Outlook and Scenarios
(2021)
Where is local self-government heading in the future? Among the trends identified is, first, an intensification of multilevel, intermunicipal, and cross-border governance: in the future, even more cooperation and coordination among different political and administrative levels will be required, as territorial boundaries have become increasingly incongruent with functional public activities. Secondly, the innovative potential of introducing markets as templates for organisational reform has reached its end. Future reforms will most likely try to adapt market reforms to local public contexts, or even reverse the development. Finally, a tightening of state steering and an increased dependence on state funding to uphold local services is expected. Waves of amalgamations might slow down this process, but they will not make financial problems disappear completely.
The efficiency of sediment routing from land to the ocean depends on the position of submarine canyon heads with regard to terrestrial sediment sources. We aim to identify the main controls on whether a submarine canyon head remains connected to terrestrial sediment input during Holocene sea-level rise. Globally, we identified 798 canyon heads that are currently located at the 120 m depth contour (the Last Glacial Maximum shoreline) and 183 canyon heads that are connected to the shore (within a distance of 6 km) during the present-day highstand. Regional hotspots of shore-connected canyons are the Mediterranean active margin and the Pacific coast of Central and South America. We used 34 terrestrial and marine predictor variables to predict shore-connected canyon occurrence using Bayesian regression. Our analysis shows that steep and narrow shelves facilitate canyon-head connectivity to the shore. Moreover, shore-connected canyons occur preferentially along active margins characterized by resistant bedrock and high river-water discharge.
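To illustrate the type of occurrence model described (with a plain logistic regression standing in for the paper's Bayesian regression; predictor names and values below are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented canyon-head predictors: shelf width (km), shelf gradient (deg),
# log10 river water discharge (m^3/s)
X = np.array([[5, 4.0, 2.8], [60, 0.5, 1.2], [12, 2.5, 3.1],
              [80, 0.3, 0.9], [8, 3.2, 2.2], [45, 0.8, 1.5]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = canyon head connected to the shore

clf = LogisticRegression().fit(X, y)
print(dict(zip(["shelf_width", "shelf_gradient", "log_discharge"], clf.coef_[0])))
# expectation: narrow width (negative sign), steep gradient and high
# discharge (positive signs) raise the connection probability
```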
Although teen dating violence (TDV) is internationally recognized as a serious threat to adolescents' health and well-being, almost no data are available for Slovenian youth. Hence, the purpose of this study was to examine the prevalence and predictors of TDV among Slovenian adolescents for the first time. Using data from the SPMAD study (Study of Parental Monitoring and Adolescent Delinquency), 330 high school students were asked about physical TDV victimization and perpetration as well as about their dating history, relationship conflicts, peers' antisocial behavior, and informal social control by family and school. A substantial number of female and male adolescents reported victimization (16.7% of female and 12.7% of male respondents) and perpetration (21.1% of female and 6.0% of male respondents). Furthermore, the results revealed that lower age at the first relationship, relationship conflicts, and school informal social control were associated with victimization, whereas being female, relationship conflicts, having antisocial peers, and family informal social control were linked to perpetration. Implications of the study findings are discussed.
Energy system developments and investments in the decisive decade for the Paris Agreement goals
(2021)
The Paris Agreement does not only stipulate limiting the global average temperature increase to well below 2 °C, it also calls for 'making finance flows consistent with a pathway towards low greenhouse gas emissions'. Consequently, there is an urgent need to understand the implications of climate targets for energy systems and to quantify the associated investment requirements in the coming decade. A meaningful analysis must, however, consider the near-term mitigation requirements to avoid the overshoot of a temperature goal. It must also include the recently observed fast technological progress in key mitigation options. Here, we use a new and unique scenario ensemble that limits peak warming by construction and that stems from seven up-to-date integrated assessment models. This allows us to study the near-term implications of different limits to peak temperature increase under a consistent and up-to-date set of assumptions. We find that ambitious immediate action allows for limiting median warming outcomes to well below 2 °C in all models. By contrast, current nationally determined contributions for 2030 would add around 0.2 °C of peak warming, leading to an unavoidable transgression of 1.5 °C in all models, and of 2 °C in some. In contrast to the incremental changes foreseen by current plans, ambitious peak warming targets require decisive emission cuts by 2030, with the most substantial contribution to decarbonization coming from the power sector. Therefore, investments into low-carbon power generation need to increase beyond current levels to meet the Paris goals, especially for solar and wind technologies and related system enhancements for electricity transmission, distribution, and storage. Estimates of absolute investment levels, of the up-scaling of other low-carbon power generation technologies, and of investment shares in less ambitious scenarios vary considerably across models. In scenarios limiting peak warming to below 2 °C, coal is phased out quickly, while oil and gas are still used significantly until 2030, albeit at lower than current levels. This requires continued investments into existing oil and gas infrastructure, but investments into new fields in such scenarios might not be needed. The results show that credible and effective policy action is essential for ensuring efficient allocation of investments aligned with medium-term climate targets.
TransPipe
(2021)
Online learning environments, such as Massive Open Online Courses (MOOCs), often rely on videos as a major component to convey knowledge. However, these videos exclude potential participants who do not understand the lecturer’s language, whether due to language unfamiliarity or aural handicaps. Subtitles and/or interactive transcripts solve this issue, ease navigation based on the content, and enable indexing and retrieval by search engines. Although there are several automated speech-to-text converters and translation tools, their quality varies and the process of integrating them can be quite tedious. Thus, in practice, many videos on MOOC platforms only receive subtitles after the course is already finished (if at all) due to a lack of resources. This work describes an approach to tackle this issue by providing a dedicated tool that closes this gap between MOOC platforms and transcription and translation tools and offers a simple workflow that can easily be handled by users with a less technical background. The proposed method is designed and evaluated through qualitative interviews with three major MOOC providers.
Smend, Rudolf
(2021)
Background: Advanced glycation end-products are proteins that become glycated after contact with sugars and are implicated in endothelial dysfunction and arterial stiffening. We aimed to investigate the relationships between advanced glycation end-products, measured as skin autofluorescence, and vascular stiffness in various glycemic strata. Methods: We performed a cross-sectional analysis within the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam cohort, comprising n = 3535 participants (median age 67 years, 60% women). Advanced glycation end-products were measured as skin autofluorescence with the AGE-Reader™; vascular stiffness was measured as pulse wave velocity, augmentation index, and ankle-brachial index with the Vascular Explorer™. A subset of 1348 participants underwent an oral glucose tolerance test. Participants were sub-phenotyped into normoglycemic, prediabetes, and diabetes groups. Associations between skin autofluorescence and various indices of vascular stiffness were assessed by multivariable regression analyses, adjusted for age, sex, measures of adiposity and lifestyle, blood pressure, prevalent conditions, medication use, and blood biomarkers. Results: Skin autofluorescence was associated with pulse wave velocity, augmentation index, and ankle-brachial index; adjusted beta coefficients (95% CI) per unit skin autofluorescence increase were 0.38 (0.21; 0.55) for carotid-femoral pulse wave velocity, 0.25 (0.14; 0.37) for aortic pulse wave velocity, 1.00 (0.29; 1.70) for aortic augmentation index, 4.12 (2.24; 6.00) for brachial augmentation index, and −0.04 (−0.05; −0.02) for ankle-brachial index. The associations were strongest in men and in younger individuals and were consistent across all glycemic strata: for carotid-femoral pulse wave velocity, 0.36 (0.12; 0.60) in the normoglycemic, 0.33 (−0.01; 0.67) in the prediabetes, and 0.45 (0.09; 0.80) in the diabetes group, with similar estimates for aortic pulse wave velocity. Augmentation index was associated with skin autofluorescence only in the normoglycemic and diabetes groups. Ankle-brachial index was inversely associated with skin autofluorescence across all sex, age, and glycemic strata. Conclusions: Our findings indicate that advanced glycation end-products, measured as skin autofluorescence, might be involved in vascular stiffening independently of age and other cardiometabolic risk factors, not only in individuals with diabetes but also in normoglycemic and prediabetic conditions. Skin autofluorescence might prove to be a rapid and non-invasive method for assessing macrovascular disease progression across all glycemic strata.
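The adjusted-association analysis described can be sketched as a multivariable linear regression of pulse wave velocity on skin autofluorescence plus covariates. The synthetic data, column names, and effect sizes below are assumptions for illustration only, not EPIC-Potsdam data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in for the analysis table (all columns and effects invented):
df = pd.DataFrame({
    "age": rng.uniform(45, 85, n),
    "sex": rng.integers(0, 2, n),   # 0 = men, 1 = women
    "bmi": rng.normal(27, 4, n),
    "sbp": rng.normal(135, 15, n),  # systolic blood pressure
})
df["saf"] = 1.0 + 0.02 * df["age"] + rng.normal(0, 0.3, n)               # skin autofluorescence
df["pwv_cf"] = 5 + 0.08 * df["age"] + 0.38 * df["saf"] + rng.normal(0, 1, n)

# Adjusted association: carotid-femoral PWV per unit skin autofluorescence
fit = smf.ols("pwv_cf ~ saf + age + C(sex) + bmi + sbp", data=df).fit()
print(fit.params["saf"], fit.conf_int().loc["saf"].values)  # beta and 95% CI
```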
A quote from Fight Club (Chuck Palahniuk, 1996) may seem unusual for a Classicist. Nevertheless, this famous sentence summarises the contents of this special issue of thersites perfectly. As specialists in classical reception frequently witness, there is a sort of déjà-vu effect when it comes to the presence of Antiquity within popular culture. In 2019, to try to better understand the phenomenon, Antiquipop invited researchers to take an interest in the construction and semantic path of these “masterpieces” in contemporary popular culture, with a particular focus on the 21st century.