Anomaly detection in process mining aims to recognize outlying or unexpected behavior in event logs for purposes such as the removal of noise and identification of conformance violations. Existing techniques for this task are primarily frequency-based, arguing that behavior is anomalous because it is uncommon. However, such techniques ignore the semantics of recorded events and, therefore, do not take the meaning of potential anomalies into consideration. In this work, we address this limitation and focus on the detection of anomalies from a semantic perspective, arguing that anomalies can be recognized when process behavior does not make sense. To achieve this, we propose an approach that exploits the natural language associated with events. Our key idea is to detect anomalous process behavior by identifying semantically inconsistent execution patterns. To detect such patterns, we first automatically extract business objects and actions from the textual labels of events. We then compare these against a process-independent knowledge base. By populating this knowledge base with patterns from various kinds of resources, our approach can be used in a range of contexts and domains. We demonstrate the capability of our approach to successfully detect semantic execution anomalies through an evaluation based on a set of real-world and synthetic event logs, and we show the complementary nature of semantics-based anomaly detection to existing frequency-based techniques.
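The core idea of the abstract above (extract action-object pairs from event labels and check them against a process-independent knowledge base) can be illustrated with a minimal sketch. Everything here is a hypothetical toy: the label parsing, the ordering-constraint knowledge base, and the function names are illustrative assumptions, not the authors' actual method.

```python
# Toy knowledge base of semantic ordering constraints: the pair (A, B) means
# "action A must never occur after action B for the same business object".
ORDER_CONSTRAINTS = {
    ("approve", "reject"),   # approving after rejecting is inconsistent
    ("ship", "cancel"),      # shipping after cancelling is inconsistent
}

def parse_label(label):
    """Naively split an event label like 'reject order' into (action, object)."""
    action, _, obj = label.partition(" ")
    return action, obj

def find_semantic_anomalies(trace):
    """Return pairs of events in a trace that violate an ordering constraint."""
    anomalies = []
    parsed = [parse_label(label) for label in trace]
    for i, (a1, o1) in enumerate(parsed):
        for a2, o2 in parsed[i + 1:]:
            # A later action a2 conflicts with the earlier action a1
            # when both act on the same object and the KB forbids the order.
            if o1 == o2 and (a2, a1) in ORDER_CONSTRAINTS:
                anomalies.append((f"{a1} {o1}", f"{a2} {o2}"))
    return anomalies

trace = ["create order", "reject order", "approve order"]
print(find_semantic_anomalies(trace))  # → [('reject order', 'approve order')]
```

A real implementation would extract actions and objects with NLP rather than whitespace splitting, but the sketch shows why such checks are frequency-independent: the flagged pattern is anomalous because it contradicts the meaning of the actions, not because it is rare.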
Helping overcome distance, videoconferencing tools have surged in use during the pandemic. To shed light on the consequences of videoconferencing at work, this study takes a granular look at the implications of the self-view feature for meeting outcomes. Building on self-awareness research and self-regulation theory, we argue that by heightening the state of self-awareness, self-view engagement depletes participants’ mental resources and can thereby undermine online meeting outcomes. Evaluation of our theoretical model on a sample of 179 employees reveals a nuanced picture. Self-view engagement while speaking and while listening is positively associated with self-awareness, which, in turn, is negatively associated with satisfaction with the meeting process, perceived productivity, and meeting enjoyment. The criticality of the communication role is put forward: looking at oneself while listening to other attendees has negative direct and indirect effects on meeting outcomes; however, looking at oneself while speaking produces equivocal effects.
Despite the phenomenal growth of Big Data Analytics in the last few years, little research has been done to explicate the relationship between Big Data Analytics Capability (BDAC) and the indirect strategic value derived from such digital capabilities. We attempt to address this gap by proposing a conceptual model of the BDAC-innovation relationship using dynamic capability theory. The work expands on BDAC business value research and extends the nominal research done on BDAC and innovation. We focus on BDAC's relationship with different innovation objects, namely product, business process, and business model innovation, impacting all value chain activities. The insights gained will stimulate academic and practitioner interest in explicating the strategic value generated from BDAC and serve as a framework for future research on the subject.
Contract drafting is a highly relevant topic in practice that, owing to the judiciary-centered orientation of legal scholarship, has so far been treated largely as an afterthought. This study provides a remedy, at least for the particularly delicate constellation of complex cooperations between the state and private parties (public-private partnerships, PPP). The analysis is grounded in a well-founded typology and characterization of the problems of such projects. The theoretical framework is provided by an efficiency-oriented study of institutional-economics approaches, namely transaction cost theory and principal-agent theory, cross-checked against the practice-oriented ground rules of contractual risk allocation. On this basis, the study arrives at practical drafting proposals for standard problems of contract design, such as performance specifications, adjustment mechanisms, dispute resolution rules, information mechanisms, and termination rules. These are also explained in terms of the conditions for their success.
Digital inclusion
(2021)
In this thesis, we tackle two social disruptions: recent refugee waves in Germany and the COVID-19 pandemic. We focus on the use of information and communication technology (ICT) as a key means of alleviating these disruptions and promoting social inclusion. As social disruptions typically lead to frustration and fragmentation, it is essential to ensure the social inclusion of individuals and societies during such times.
In the context of the social inclusion of refugees, we focus on the Syrian refugees who arrived in Germany as of 2015, as they form a large and coherent refugee community. In particular, we address the role of ICTs in refugees’ social inclusion and investigate how different ICTs (especially smartphones and social networks) can foster refugees’ integration and social inclusion. In the context of the COVID-19 pandemic, we focus on the widespread unconventional working model of work from home (WFH). Our research here centers on the main constructs of WFH and the key differences in WFH experiences based on personal characteristics such as gender and parental status.
We reveal novel insights through four well-established research methods: literature review, mixed methods, qualitative method, and quantitative method. The results of our research have been published in the form of eight articles in major information systems venues and journals. Key results from the refugee research stream include the following: Smartphones represent a central component of refugee ICT use; refugees view ICT as a source of information and power; the social connectedness of refugees is strongly correlated with their Internet use; refugees are not relying solely on traditional methods to learn the German language or pursue further education; the ability to use smartphones anytime and anywhere gives refugees an empowering feeling of global connectedness; and ICTs empower refugees on three levels (community participation, sense of control, and self-efficacy).
Key insights from the COVID-19 WFH stream include: Gender and the presence of children under the age of 18 affect workers’ control over their time, technology usefulness, and WFH conflicts, while not affecting their WFH attitudes; and both personal and technology-related factors affect an individual’s attitude toward WFH and their productivity. Further insights are being gathered at the time of submitting this thesis.
This thesis contributes to the discussion within the information systems community regarding how to use different ICT solutions to promote the social inclusion of refugees in their new communities and foster an inclusive society. It also adds to the growing body of research on COVID-19, in particular on the sudden workplace transformation to WFH. The insights gathered in this thesis reveal theoretical implications and future opportunities for research in the field of information systems, practical implications for relevant stakeholders, and social implications related to the refugee crisis and the COVID-19 pandemic that must be addressed.
Immigrant integration has become a primary political concern for leaders in Germany and the United States. The information systems (IS) community has begun to research how information and communications technologies can assist immigrants and refugees, such as by examining how countries can facilitate social-inclusion processes. Migrants face the challenge of joining closed communities that cannot integrate or fear doing so. We conducted a panel discussion at the 2019 Americas Conference on Information Systems (AMCIS) in Cancun, Mexico, to introduce multiple viewpoints on immigration. In particular, the panel discussed how technology can both support and prevent immigrants from succeeding in their quest. We conducted the panel to stimulate a thoughtful and dynamic discussion on best practices and recommendations to enhance the discipline's impact on alleviating the challenges that occur for immigrants in their host countries. In this panel report, we introduce the topic of using ICT to help immigrants integrate and identify differences between North/Central America and Europe. We also discuss how immigrants (particularly refugees) use ICT to connect with others, feel that they belong, and maintain their identity. We also uncover the dark and bright sides of how governments use ICT to deter illegal immigration. Finally, we present recommendations for researchers and practitioners on how to best use ICT to assist with immigration.
The coronavirus disease of 2019 (COVID-19) pandemic has forced most academics to work from home. This sudden venue change can affect academics' productivity and exacerbate the challenges that confront universities as they face an uncertain future. In this paper, we identify factors that influence academics' productivity while working from home during the mandate to self-isolate. From analyzing results from a global survey we conducted, we found that both personal and technology-related factors affect an individual's attitude toward working from home and productivity. Our results should prove valuable to university administrators to better address the work-life challenges that academics face.
Since the beginning of the recent global refugee crisis, researchers have been tackling many of its associated aspects, investigating how we can help to alleviate this crisis, in particular, using ICTs capabilities. In our research, we investigated the use of ICT solutions by refugees to foster the social inclusion process in the host community. To tackle this topic, we conducted thirteen interviews with Syrian refugees in Germany. Our findings reveal different ICT usages by refugees and how these contribute to feeling empowered. Moreover, we show the sources of empowerment for refugees that are gained by ICT use. Finally, we identified the two types of social inclusion benefits that were derived from empowerment sources. Our results provide practical implications to different stakeholders and decision-makers on how ICT usage can empower refugees, which can foster the social inclusion of refugees, and what should be considered to support them in their integration effort.
Personal data increasingly serve as inputs to public goods. Like other types of contributions to public goods, personal data are likely to be underprovided. We investigate whether classical remedies to underprovision are also applicable to personal data and whether the privacy-sensitive nature of personal data must be additionally accounted for. In a randomized field experiment on a public online education platform, we prompt users to complete their profiles with personal information. Compared to a control message, we find that making public benefits salient increases the number of personal data contributions significantly. This effect is even stronger when additionally emphasizing privacy protection, especially for sensitive information. Our results further suggest that emphasis on both public benefits and privacy protection attracts personal data from a more diverse set of contributors.
Based on a survey of 300 public-sector employees, this article examines the possible effects of the digital transformation on the job profiles of public-sector staff. On the one hand, there are initial indications of significant efficiency potential through automation in the public sector. On the other hand, it becomes clear that the majority of employees view this development positively and want to play an active part in improving services. Numerous practical implications for change projects can be derived from these findings. At the same time, this article calls for further research into the consequences of the digital transformation for employees.
During the observation of goal-directed actions, infants usually predict the goal at an earlier age when the agent is familiar (e.g., human hand) compared to unfamiliar (e.g., mechanical claw). These findings implicate a crucial role of the developing agentive self for infants' processing of others' action goals. Recent theoretical accounts suggest that predictive gaze behavior relies on an interplay between infants' agentive experience (top-down processes) and perceptual information about the agent and the action-event (bottom-up information; e.g., agency cues). The present study examined 7-, 11-, and 18-month-old infants' predictive gaze behavior for a grasping action performed by an unfamiliar tool, depending on infants' age-related action knowledge about tool-use and the display of the agency cue of producing a salient action effect. The results are in line with the notion of a systematic interplay between experience-based top-down processes and cue-based bottom-up information: Regardless of the salient action effect, predictive gaze shifts did not occur in the 7-month-olds (least experienced age group), but did occur in the 18-month-olds (most experienced age group). In the 11-month-olds, however, predictive gaze shifts occurred only when a salient action effect was presented. This sheds new light on how the developing agentive self, in interplay with available agency cues, supports infants' action-goal prediction also for observed tool-use actions.
This article discusses how Alex Garland’s The Beach (1996) engages with conceptions of utopian islands, nation, and colonialism in modernity and how it, from this basis, develops a different spatiality that reflects on a more deterritorialized form of imperial domination within late twentieth-century globalization, as exercised by the United States. The novel is shown to subvert, but not to abolish, two spatial formations that originated in early modernity: nation and utopia. Building on Jean Baudrillard’s elaborations regarding simulation and simulacra, the article argues that The Beach creates a hyperreal narrative that does away with the idea of isolated, bounded spaces and that in form and content corresponds with the worldwide dominance of the United States at the end of the twentieth century.
While W.E.B. Du Bois’s first novel, The Quest of the Silver Fleece (1911), is set squarely in the USA, his second work of fiction, Dark Princess: A Romance (1928), abandons this national framework, depicting the treatment of African Americans in the USA as embedded into an international system of economic exploitation based on racial categories. Ultimately, the political visions offered in the novels differ starkly, but both employ a Western literary canon – so-called ‘classics’ from Greek, German, English, French, and US American literature. With this, Du Bois attempts to create a new space for African Americans in the world (literature) of the 20th century. Weary of the traditions of this ‘world literature’, the novels complicate and begin to decenter the canon that they draw on. This reading traces what I interpret as subtle signs of frustration over the limits set by the literature that underlies Dark Princess, while its predecessor had been more optimistic in its appropriation of Eurocentric fiction for its propagandist aims.
Over the course of a one-year development process, the questionnaire module "Attitudes Toward Social Inequality" was developed under the direction of the research infrastructure SOEP and fielded for the first time in wave 38 of the main survey of the German Socio-Economic Panel. The final questionnaire module comprises 43 items on the topics of social comparisons, social mobility, the welfare state, and non-material inequality. In the tradition of the SOEP as a research-based infrastructure institution, the questionnaire was developed in close collaboration with external researchers from the fields of attitude and inequality research. In addition to the established use of the SOEP Innovation Sample (SOEP-IS) for quantitative pretests of newly developed questions, a cognitive pretest was employed for the first time. This report documents the development process from conception to the final questionnaire.
Diabetes is a major public health problem with increasing global prevalence. Type 2 diabetes (T2D), which accounts for 90% of all diagnosed cases, is a complex polygenic disease also modulated by epigenetics and lifestyle factors. For the identification of T2D-associated genes, linkage analyses combined with mouse breeding strategies and bioinformatic tools were useful in the past. In a previous study in which a backcross population of the lean and diabetes-prone dilute brown non-agouti (DBA) mouse and the obese and diabetes-susceptible New Zealand obese (NZO) mouse was characterized, a major diabetes quantitative trait locus (QTL) was identified on chromosome 4. The locus was designated non-insulin dependent diabetes from DBA (Nidd/DBA). The aim of this thesis was (i) to perform a detailed phenotypic characterization of the Nidd/DBA mice, (ii) to further narrow the critical region and (iii) to identify the responsible genetic variant(s) of the Nidd/DBA locus. The phenotypic characterization of recombinant congenic mice carrying a 13.6 Mbp Nidd/DBA fragment with 284 genes presented a gradually worsening metabolic phenotype. Nidd/DBA allele carriers exhibited severe hyperglycemia (~19.9 mM) and impaired glucose clearance at 12 weeks of age. Ex vivo perifusion experiments with islets of 13-week-old congenic mice revealed a tendency towards reduced insulin secretion in homozygous DBA mice. In addition, 16-week-old mice showed a severe loss of β-cells and reduced pancreatic insulin content. Pathway analysis of transcriptome data from islets of congenic mice pointed towards a downregulation of cell survival genes. Morphological analysis of pancreatic sections displayed a reduced number of bi-hormonal cells co-expressing glucagon and insulin in homozygous DBA mice, which could indicate a reduced plasticity of endocrine cells in response to hyperglycemic stress. 
Further generation and phenotyping of recombinant congenic mice enabled the isolation of a 3.3 Mbp fragment that was still able to induce hyperglycemia and contained 61 genes. Bioinformatic analyses including haplotype mapping, sequence and transcriptome analysis were integrated in order to further reduce the number of candidate genes and to identify the presumable causative gene variant. Four putative candidate genes (Ttc39a, Kti12, Osbpl9, Calr4) were defined, which were either differentially expressed or carried a sequence variant. In addition, in silico ChIP-Seq analyses of the 3.3 Mbp region indicated a high number of SNPs located in active regions of binding sites of β-cell transcription factors. This points towards potentially altered cis-regulatory elements that could be responsible for the phenotype conferred by the Nidd/DBA locus. In summary, the Nidd/DBA locus mediates impaired glucose homeostasis and reduced insulin secretion capacity which finally leads to β-cell death. The downregulation of cell survival genes and reduced plasticity of endocrine cells could further contribute to the β-cell loss. The critical region was narrowed down to a 3.3 Mbp fragment containing 61 genes, of which four might be involved in the development of the diabetogenic Nidd/DBA phenotype.
Purpose
The objective of the investigation was to determine the concomitant effects of upper arm blood flow restriction (BFR) and inversion on elbow flexors neuromuscular responses.
Methods
Thirteen randomly allocated volunteers performed four conditions in a within-subject design: rest (control; 1 min upright without BFR), control (1 min upright with BFR), 1 min inverted (without BFR), and 1 min inverted with BFR. Evoked and voluntary contractile properties were examined before, during, and after a 30-s maximum voluntary contraction (MVC) exercise intervention, along with pain scale ratings.
Results
Inversion induced significant pre-exercise intervention decreases in elbow flexor MVC (21.1%, η²p = 0.48, p = 0.02) and resting evoked twitch forces (29.4%, η²p = 0.34, p = 0.03). The 30-s MVC induced significantly greater pre- to post-test decreases in potentiated twitch force (η²p = 0.61, p = 0.0009) during inversion (75%) than upright (65.3%) conditions. Overall, BFR decreased MVC force by 4.8% (η²p = 0.37, p = 0.05). For the upright position, BFR induced a 21.0% reduction in M-wave amplitude (η²p = 0.44, p = 0.04). There were no significant differences for electromyographic activity or voluntary activation as measured with the interpolated twitch technique. For all conditions, there was a significant increase in pain scale between the 40-60 s intervals and post-30-s MVC (upright < inversion, and without BFR < with BFR).
Conclusion
The concomitant application of inversion with elbow flexors BFR only amplified neuromuscular performance impairments to a small degree. Individuals who execute forceful contractions when inverted or with BFR should be cognizant that force output may be impaired.
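The effect-size measure cited throughout the Results above is partial eta squared (η²p), which relates an effect's sum of squares to that effect plus its error term. A minimal sketch of the computation follows; the numeric inputs are made-up illustrations, not values from the study.

```python
def partial_eta_squared(ss_effect, ss_error):
    """Partial eta squared: SS_effect / (SS_effect + SS_error).

    Unlike classical eta squared, the denominator excludes the sums of
    squares of other effects, so values from different effects in the
    same ANOVA need not sum to 1.
    """
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares chosen so the result matches the first
# reported effect size (η²p = 0.48): 12 / (12 + 13) = 0.48.
print(round(partial_eta_squared(12.0, 13.0), 2))  # → 0.48
```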
Vienna
(2021)
This book explores and debates the urban transformations that have taken place in Vienna over the past 30 years and their consequences in policy fields such as labour and housing, political and social participation and the environment. Historically, European cities have been characterised by a strong association between social cohesion, quality of life, economic ambition and a robust State. Vienna is an excellent example for that. In more recent years, however, cities were pressured to change policy principles and mechanisms in the context of demographic shifts, post-industrial transformations and welfare recalibration which have led to worsened social conditions in many cities. Each chapter in this volume discusses Vienna's responses to these pressures in key policy arenas, looking at outcomes from the context-specific local arrangements. Against a theoretical framework debating the European city as a model of inclusion and social justice, authors explore the local capacity to innovate urban policies and to address new social risks, while paying attention to potential trade-offs.
The book questions and assesses the city's resilience using time series and an institutional analysis of four key dimensions that characterise the European city model within the context of post-industrial transition: redistribution, recognition, representation and sustainability. It offers a multiscalar perspective of urban governance through labour, housing, participatory and environmental policies, bringing together different levels and public policy types.
We present a microcontact printing (µCP) routine suitable to introduce defined (sub-)microscale patterns on surface substrates exhibiting a high capillary activity and receptive to a silane-based chemistry. This is achieved by transferring functional trivalent alkoxysilanes, such as (3-aminopropyl)triethoxysilane (APTES), as a low-molecular-weight ink via reversible covalent attachment to polymer brushes grafted from elastomeric polydimethylsiloxane (PDMS) stamps. The brushes consist of poly{N-[tris(hydroxymethyl)methyl]acrylamide} (PTrisAAm) synthesized by reversible addition-fragmentation chain-transfer (RAFT) polymerization and used for immobilization of the alkoxysilane-based ink by substituting the alkoxy moieties with polymer-bound hydroxyl groups. Upon physical contact of the silane-carrying polymers with surfaces, the conjugated silane transfers to the substrate, thus completely suppressing ink flow and, in turn, maximizing printing accuracy even for otherwise not addressable substrate topographies. We provide a concise investigation of polymer brush formation using atomic force microscopy (AFM) and ellipsometry, as well as of ink immobilization utilizing two-dimensional proton nuclear Overhauser enhancement spectroscopy (¹H-¹H NOESY NMR). We analyze the µCP process by printing onto Si wafers and show how even distinctively rough surfaces, which otherwise represent particularly challenging substrates, can be addressed.
This thesis addresses the question of whether, and if so to what extent, the Spanish que can function as a discourse marker (DM) alongside its classical uses as a pronoun and conjunction, that is, whether que loses its propositional content in certain contexts and takes on purely discursive functions.
A total of 128 examples of sentence-initial que were examined that could not initially be classified unambiguously as grammatical elements. The examples come from a corpus based on a transcript produced from the second season of the Netflix series "Élite". The material was analyzed against five criteria derived from the research literature and, depending on whether the criteria were met, assigned to the categories "not pragmaticalized" (NP), "partially pragmaticalized" (TP), and "pragmaticalized" (P). Within each of these categories, the corresponding grammatical or pragmatic function(s) were specified and the results compiled in a grid. For the assignment of functions in category (P), the DM classification of Martín Zorraquino and Portolés (1999) was used and in some cases further refined.
The analysis classified 89 examples as P, 34 as TP, and five as NP. Of the 89 que classified as P, the majority (84) were described as "comentador", a DM that introduces a comment. In total, 72 que were classified as DMs that introduce an explanatory comment.
This yields an objective classification of que as a DM, which at the same time provides initial insights into the specific functions of que as a DM. The use of concrete criteria for analyzing potential DMs ensures objectivity and contributes to systematizing DM research, a field partly marked by disagreement and divergent interpretation.
Paths Are Made by Walking
(2021)
Investigation of Sirtuin 3 overexpression as a genetic model of fasting in hypothalamic neurons
(2021)
The controlled dosage of substances from a device to its environment, such as a tissue or an organ in medical applications or a reactor, room, machine, or ecosystem in technical ones, should ideally match the requirements of the application, e.g., in terms of the time point at which the cargo is released. On-demand dosage systems may enable such a desired release pattern if the device contains suitable features that can translate external signals into a release function. This study is motivated by the opportunities arising from microsystems capable of on-demand release and by the contributions that geometrical design may make in realizing such features. The goals of this work included the design, fabrication, characterization, and experimental proof-of-concept of a geometry-assisted triggerable dosing effect (a) with a sequential dosing release and (b) in a self-sufficient dosage system. Structure-function relationships were addressed on the molecular, morphological and, with particular attention, the device design level, which is on the micrometer scale. Models and/or computational tools were used to screen the parameter space and provide guidance for experiments.
The present work gives a detailed analysis of the metamorphic and structural evolution of the back-arc portion of the Famatinian Orogen exposed in the southern Sierra de Aconquija (Cuesta de La Chilca segment) in the Sierras Pampeanas Orientales (Eastern Pampean Sierras). The Pampeanas Orientales include from north to south the Aconquija, Ambato and Ancasti mountains. They are mainly composed of middle to high grade metasedimentary units and magmatic rocks.
At the south end of the Sierra de Aconquija, along an east-west segment extending over nearly 10 km (Cuesta de La Chilca), large volumes of metasedimentary rocks crop out. The eastern metasediments were defined as members of the El Portezuelo Metamorphic-Igneous Complex (EPMIC), or Eastern block, and the western ones relate to the Quebrada del Molle Metamorphic Complex (QMMC), or Western block. The two blocks are divided by the La Chilca Shear Zone, which is reactivated as the Río Chañarito fault.
The EPMIC, forming the hanging wall, is composed of schists, gneisses and rare amphibolites, calc-silicate schists, marbles and migmatites. The rocks underwent multiple episodes of deformation and a late high-strain-rate episode with gradually increasing mylonitization to the west. Metamorphism progrades from an M1 phase to the peak M3, characterized by the reactions Qtz + Pl + Bt ± Ms → Grt + Bt2 + Pl2 ± Sil ± Kfs, Qtz + Bt + Sil → Crd + Kfs, and Qtz + Grt + Sil → Crd. The M3 assemblage is coeval with the dominant foliation related to a third deformational phase (D3).
The QMMC, forming the footwall, is made up of fine-grained banded quartz-biotite schists with quartz veins and quartz-feldspar-rich pegmatites. To the east, the schists are also overprinted by mylonitization. The M3 peak assemblage is quartz + biotite + plagioclase ± garnet ± sillimanite ± muscovite ± ilmenite ± magnetite ± apatite.
The studied segment suffered multiphase deformation and metamorphism. Some of these phases can be correlated between both blocks. D-1 is locally preserved in scarce outcrops in the EPMIC but is the dominant in the QMMC, where S-1 is nearly parallel to S-0. In the EPMIC, D-2 is represented by the S-2 foliation, related to the F-2 folding that overprints S-1, with dominant strike NNW - SSE and high angles dip to the E. D-3 in the EPMIC have F-3 folds with axis oblique to S-2; the S-3 foliation has striking NW - SE dipping steeply to the E or W and develops interference patterns. In the QMMC, S-2 (D-2) is a discontinuous cleavage oblique to S-1 and transposed by S-3 (D-3), subparallel to S-1. Such structures in the QMMC developed at subsolidus conditions and could be correlated to those of the EPMIC, which formed under higher P-T conditions. The penetrative deformation D-2 in the EPMIC occurred during a prograde path with syntectonic growth of garnet reaching P-T conditions of 640 degrees C and 0.54 GPa in the EPMIC. This stage was followed by a penetrative deformation D-3 with syn-kinematic growth of garnet, cordierite and plagioclase. Peak P-T conditions calculated for M-3 are 710 degrees C and 0.60 GPa, preserved in the western part of the EPMIC, west of the unnamed fault.
The schists from the QMMC underwent the early low-grade M-1 metamorphism with minimum P-T conditions of ca 400 degrees C and 0.35 GPa, comparable to the fine schists (M-1) outcropping to the east. The D-2 deformation is associated with the prograde M-2 metamorphism. The penetrative D-3 stage is related to a medium-grade metamorphism M-3, with peak conditions at ca 590 degrees C and 0.55 GPa.
The superimposed stages of deformation and metamorphism, reaching high P-T conditions followed by isothermal decompression, define a clockwise orogenic P-T path. During the Lower Paleozoic, folds were superimposed and recrystallization as well as partial melting at peak conditions occurred. Similar characteristics have been described from the basement of other Famatinian-dominated locations of the Sierra de Aconquija and other ranges of the Sierras Pampeanas Orientales.
We investigate how inviting students to set task-based goals affects usage of an online learning platform and course performance. We design and implement a randomized field experiment in a large mandatory economics course with blended learning elements. The low-cost treatment induces students to use the online learning system more often, more intensively, and to begin earlier with exam preparation. Treated students perform better in the course than the control group: they are 18.8% (0.20 SD) more likely to pass the exam and earn 6.7% (0.19 SD) more points on the exam. There is no evidence that treated students spend significantly more time; rather, they tend to shift to more productive learning methods. The heterogeneity analysis suggests that higher treatment effects are associated with higher levels of behavioral bias but also with poor early course behavior.
Mildred Harnack, née Fish, originally came from Milwaukee, Wisconsin. Together with her husband Arvid Harnack she moved to Germany and lived in Berlin from 1930. There, the literary scholar taught at the Friedrich-Wilhelms-Universität (today Humboldt-Universität) and at the Berlin Abendgymnasium, an evening high school (today the Peter A. Silbermann-Schule). Shortly after Adolf Hitler's seizure of power, a circle of friends had formed around the Harnack couple that opposed National Socialist rule. Among them were Karl Behrens and Bodo Schlösinger, both students of Mildred Harnack at the Berlin Abendgymnasium. Using her contacts with the American embassy, Mildred Harnack was able to obtain information for her students that was otherwise inaccessible in National Socialist Germany.
Because of the circle's radio contacts with the Soviet Union, the National Socialists called the group the Rote Kapelle (Red Orchestra): "red" referred to its left-wing orientation, and "Kapelle" (orchestra) alluded to radio operators, who play like pianists in an orchestra. Until it was crushed by the National Socialists, the Berlin opposition circle comprised about 150 people from a wide range of professions, party-political views and confessions. The group produced opposition leaflets and passed information to the American embassy as well as to the Soviet Union. Like many of her fellow activists, Mildred Harnack was sentenced to death by the Reich Court-Martial after her arrest and guillotined in Plötzensee on 16 February 1943.
In this volume, students of the University of Potsdam and students of the Peter A. Silbermann-Schule (Berlin), after a brief overview of resistance to National Socialism in Germany, vividly present the network of the Rote Kapelle as well as the biographies of Mildred Harnack and her students Karl Behrens and Bodo Schlösinger from the Berlin Abendgymnasium.
Development of functional and stable solid polymer electrolytes (SPEs) for battery applications is an important step towards both safer batteries and the realization of lithium-based or anode-less batteries. The interface between the lithium and the solid polymer electrolyte is one of the bottlenecks, where severe degradation is expected. Here, the stability of three different SPEs - poly(ethylene oxide) (PEO), poly(epsilon-caprolactone) (PCL) and poly(trimethylene carbonate) (PTMC) - together with lithium bis(trifluoromethanesulfonyl)imide (LiTFSI) salt, is investigated after exposure to lithium metal under UHV conditions. Degradation compounds, e.g. Li-O-R, LiF and LixSyOz, are identified for all SPEs using soft X-ray photoelectron spectroscopy. A competing degradation between polymer and salt is identified in the outermost surface region (<7 nm), and is dependent on the polymer host. PTMC:LiTFSI shows the most severe decomposition of both polymer and salt, followed by PCL:LiTFSI and PEO:LiTFSI. In addition, the movement of lithium species through the decomposed interface shows large variation depending on the polymer electrolyte system.
With the downscaling of CMOS technologies, radiation-induced Single Event Transient (SET) effects in combinational logic have become a critical reliability issue for modern integrated circuits (ICs) intended for operation under harsh radiation conditions. The SET pulses generated in combinational logic may propagate through the circuit and eventually result in soft errors. It has thus become imperative to address SET effects in the early phases of radiation-hard IC design. In general, soft error mitigation solutions should accommodate both static and dynamic measures to ensure the optimal utilization of available resources. An efficient soft-error-aware design should address three main aspects synergistically: (i) characterization and modeling of soft errors, (ii) multi-level soft error mitigation, and (iii) online soft error monitoring. Although significant results have been achieved, the effectiveness of SET characterization methods, the accuracy of predictive SET models, and the efficiency of SET mitigation measures are still critical issues. Therefore, this work addresses the following topics: (i) characterization and modeling of SET effects in standard combinational cells, (ii) static mitigation of SET effects in standard combinational cells, and (iii) online particle detection, as a support for dynamic soft error mitigation.
Since standard digital cell libraries are widely used in the design of radiation-hard ICs, the characterization of SET effects in standard cells and the availability of accurate SET models for Soft Error Rate (SER) evaluation are the main prerequisites for efficient radiation-hard design. This work introduces an approach for SPICE-based standard cell characterization with a reduced number of simulations, improved SET models and an optimized SET sensitivity database. It has been shown that the inherent similarities in the SET response of logic cells for different input levels can be utilized to reduce the number of required simulations. Based on the characterization results, fitting models for the SET sensitivity metrics (critical charge, generated SET pulse width and propagated SET pulse width) have been developed. The proposed models are based on the principle of superposition, and they express explicitly the dependence of the SET sensitivity of individual combinational cells on design, operating and irradiation parameters. In contrast to state-of-the-art characterization methodologies, which employ extensive look-up tables (LUTs) for storing the simulation results, this work proposes the use of LUTs for storing the fitting coefficients of the SET sensitivity models derived from the characterization results. In that way, the amount of characterization data in the SET sensitivity database is reduced significantly.
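The coefficient-based storage scheme can be illustrated with a minimal sketch. The linear model form, parameter ranges and numbers below are invented for illustration only; the actual thesis models are superposition-based fits of the SET sensitivity metrics, not this placeholder.

```python
import numpy as np

# Hypothetical illustration: instead of storing a dense look-up table of
# critical-charge values Qcrit(C_load, V_dd) for every simulated point,
# store only the fitted coefficients of a sensitivity model per cell.

def fit_sensitivity_model(c_load, v_dd, q_crit):
    """Least-squares fit of the assumed model Qcrit ~ a0 + a1*C_load + a2*V_dd."""
    A = np.column_stack([np.ones_like(c_load), c_load, v_dd])
    coeffs, *_ = np.linalg.lstsq(A, q_crit, rcond=None)
    return coeffs  # three numbers replace the full simulation grid

def eval_sensitivity_model(coeffs, c_load, v_dd):
    return coeffs[0] + coeffs[1] * c_load + coeffs[2] * v_dd

# Synthetic "characterization data" for one cell (arbitrary units).
rng = np.random.default_rng(0)
c_load = rng.uniform(1.0, 10.0, 100)      # load capacitance sweep
v_dd = rng.uniform(0.8, 1.2, 100)         # supply voltage sweep
q_crit = 0.5 + 0.3 * c_load + 2.0 * v_dd  # idealized simulator output

coeffs = fit_sensitivity_model(c_load, v_dd, q_crit)
pred = eval_sensitivity_model(coeffs, 5.0, 1.0)
print(f"stored coefficients: {coeffs.round(3)}, Qcrit(5.0, 1.0) = {pred:.3f}")
```

Here 100 simulated grid points collapse to three stored coefficients, which is the kind of database reduction the abstract describes.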
The initial step in enhancing the robustness of combinational logic is the application of gate-level mitigation techniques. As a result, significant improvement of the overall SER can be achieved with minimum area, delay and power overheads. For SET mitigation in standard cells, it is essential to employ techniques that do not require modifying the cell structure. This work introduces the use of decoupling cells for improving the robustness of standard combinational cells. By inserting two decoupling cells at the output of a target cell, the critical charge of the cell's output node is increased and the attenuation of short SETs is enhanced. In comparison to the most common gate-level techniques (gate upsizing and gate duplication), the proposed approach provides better SET filtering. However, as there is no single gate-level mitigation technique with optimal performance, a combination of multiple techniques is required. This work introduces a comprehensive characterization of gate-level mitigation techniques aimed at quantifying their impact on the SET robustness improvement, as well as the introduced area, delay and power overhead per gate. By characterizing the gate-level mitigation techniques together with the standard cells, the required effort in subsequent SER analysis of a target design can be reduced. The characterization database of the hardened standard cells can be utilized as a guideline for selecting the most appropriate mitigation solution for a given design.
As a support for dynamic soft error mitigation techniques, it is important to enable the online detection of the energetic particles causing the soft errors. This allows activating the power-greedy fault-tolerant configurations based on N-modular redundancy only at high radiation levels. To enable such functionality, it is necessary to monitor both the particle flux and the variation of particle LET, as these two parameters contribute significantly to the system SER. In this work, a particle detection approach based on custom-sized pulse stretching inverters is proposed. Employing pulse stretching inverters connected in parallel makes it possible to measure the particle flux in terms of the number of detected SETs, while the particle LET variations can be estimated from the distribution of SET pulse widths. This approach requires purely digital processing logic, in contrast to standard detectors, which require complex mixed-signal processing. Besides the possibility of LET monitoring, additional advantages of the proposed particle detector are low detection latency and power consumption, and immunity to error accumulation.
The results achieved in this thesis can serve as a basis for the establishment of an overall soft-error-aware database for a given digital library, and for a comprehensive multi-level radiation-hard design flow that can be implemented with standard IC design tools. The next step will be to validate the achieved results with irradiation experiments.
This study investigates the use of pulse stretching (skew-sized) inverters for monitoring the variation of the count rate and linear energy transfer (LET) of energetic particles. The basic particle detector is a cascade of two pulse stretching inverters, and the required sensing area is obtained by connecting up to 12 two-inverter cells in parallel and employing the required number of parallel arrays. The incident particles are detected as single-event transients (SETs), whereby the SET count rate denotes the particle count rate, while the SET pulse-width distribution depicts the LET variations. The advantage of the proposed solution is the possibility of sensing the LET variations using fully digital processing logic. SPICE simulations conducted on IHP's 130-nm CMOS technology have shown that the SET pulse width varies by approximately 550 ps over the LET range from 1 to 100 MeV·cm²·mg⁻¹. The proposed detector is intended for triggering the fault-tolerant mechanisms within a self-adaptive multiprocessing system employed in space. It can be implemented as a standalone detector or integrated in the same chip with the target system.
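The digital post-processing idea behind such a detector — counting SETs for the particle rate and using the pulse-width distribution as an LET proxy — can be sketched as follows. All numbers are invented for illustration; this is not measured detector data or the thesis implementation.

```python
import statistics

# Hypothetical sketch: from a list of detected SET pulse widths in a
# time window, derive (i) the particle count rate and (ii) simple
# statistics of the pulse-width distribution that track mean LET and
# LET variation.

def analyse_sets(pulse_widths_ps, window_s):
    count_rate = len(pulse_widths_ps) / window_s    # particles per second
    mean_width = statistics.fmean(pulse_widths_ps)  # proxy for mean LET
    spread = statistics.pstdev(pulse_widths_ps)     # proxy for LET variation
    return count_rate, mean_width, spread

widths = [120.0, 180.0, 150.0, 600.0, 420.0]  # ps, invented SET pulse widths
rate, mean_w, spread = analyse_sets(widths, window_s=2.0)
print(rate, mean_w, spread)
```

A real detector would map pulse width back to LET through a SPICE-derived calibration curve; the statistics above only illustrate the purely digital nature of the processing.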
This paper presents two new pollen records and quantitative climate reconstructions from northern Chukotka documenting environmental changes over the last 27.9 ka. Open tundra- and steppe-like habitats dominated between 27.9 and 18.7 cal. ka BP. Betula and Alnus shrubs might have grown in sheltered microhabitats but disappeared after 18.7 cal. ka BP. Although the climate was rather harsh, local herb-dominated communities supported herbivores, as is evident from the presence of coprophilous spores in the sediments. The increase in Salix and Cyperaceae ~16.1 cal. ka BP suggests climate amelioration. Shrub Betula appeared ~15.9 cal. ka BP and became dominant after ~15.52 cal. ka BP, whilst typical steppe communities were drastically reduced. The very high presence of Botryococcus in the Lateglacial sediments reflects widespread shallow habitats, probably due to a lake-level increase. Shrub Alnus became common after ~13 cal. ka BP, reflecting further climate amelioration. Simultaneously, herb communities gradually decreased in the vegetation, reaching a minimum ~11.8 cal. ka BP. A gradual decrease in algae remains suggests a reduction of shallow-water habitats. Shrubby and graminoid tundra was dominant ~11.8-11.1 cal. ka BP; later, Salix stands significantly decreased. The forest-tundra ecotone established in the Early Holocene, shortly after 11.1 cal. ka BP. Low contents of green algae in the Early Holocene sediments likely reflect deeper aquatic conditions. The most favourable climate conditions occurred between ~10.6 and 7 cal. ka BP. Vegetation became similar to the modern after ~7 cal. ka BP, but Pinus pumila arrived in the Ilirney area at about 1.2 cal. ka BP. It is important to emphasize that the study area provided refugia for Betula and Alnus during MIS 2.
It is also notable that our records do not reflect evidence of Younger Dryas cooling, which is inconsistent with some regional environmental records but in good accordance with some others.
Karl Peters (1904–1998)
(2021)
This book traces the life and work of the eminent criminal law scholar Karl Peters, with a focus on the National Socialist period. A public prosecutor from 1932, he was appointed full professor in Greifswald only in 1942 on account of his Catholic confession, was professor in Münster from 1946 to 1962, and then in Tübingen until 1972. Peters' work impresses by its breadth. Alongside an intensive engagement with criminal procedure, prison law and juvenile criminal law, he conducted research in criminology, sociology, psychology, medicine and pedagogy. Guided by his fundamental Christian convictions, Peters set high standards for himself and for (criminal) lawyers. The study of miscarriages of justice and the law governing the reopening of proceedings became his principal concern.
In this paper, we employ a comparative life course approach for Canada and Germany to unravel the relationships among general and vocational educational attainment and different life course activities, with a focus on labour market and income inequality by gender. Life course theory and the related concepts of 'time,' 'normative patterns,' 'order and disorder,' and 'discontinuities' are used to inform the analyses. Data from the Paths on Life's Way (Paths) project in British Columbia, Canada and the German Pathways from Late Childhood to Adulthood (LifE) project, which span 28 and 33 years, respectively, are employed to examine life trajectories from leaving school to around age 45. Sequence analysis and cluster analyses portray both within- and between-country differences - and in particular gender differences - in educational attainment, employment, and other activities across the life course, which have an impact on ultimate labour market participation and income levels. 'Normative' life courses that follow a traditional order correspond with higher levels of full-time work and higher incomes; in Germany more so than in Canada, these clusters are male dominated. Clusters characterised by 'disordered' and 'discontinuous' life courses in both countries are female dominated and associated with lower income levels.
The leniency rule revisited
(2021)
The experimental literature on antitrust enforcement provides robust evidence that communication plays an important role for the formation and stability of cartels. We extend these studies through a design that distinguishes between innocuous communication and communication about a cartel, sanctioning only the latter. To this aim, we introduce a participant in the role of the competition authority, who is properly incentivized to judge the communication content and price setting behavior of the firms. Using this novel design, we revisit the question whether a leniency rule successfully destabilizes cartels. In contrast to existing experimental studies, we find that a leniency rule does not affect cartelization. We discuss potential explanations for this contrasting result.
Coherent network partitions
(2021)
We continue the study of coherent partitions of graphs, whereby the vertex set is partitioned into subsets that induce biclique-spanned subgraphs. The problem of identifying the minimum number of edges to obtain biclique-spanned connected components (CNP), called the coherence number, is NP-hard even on bipartite graphs. Here, we propose a graph transformation geared towards obtaining an O(log n)-approximation algorithm for the CNP on a bipartite graph with n vertices. The transformation is inspired by a new characterization of biclique-spanned subgraphs. In addition, we study coherent partitions on prime graphs, and show that finding coherent partitions reduces to the problem of finding coherent partitions in a prime graph. Therefore, these results provide future directions for approximation algorithms for the coherence number of a given graph.
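As a toy illustration of the central notion, a hypothetical verifier can check whether a given bipartition of a vertex subset forms a complete bipartite (biclique) subgraph of the input graph. The function name, adjacency format and example graph below are illustrative only, not taken from the paper:

```python
# Hypothetical check: a vertex subset is "biclique spanned" by the
# bipartition (part_a, part_b) if every vertex of part_a is adjacent
# to every vertex of part_b in the input graph.

def spans_biclique(adj, part_a, part_b):
    """Return True iff (part_a, part_b) is a complete bipartite subgraph."""
    return all(b in adj[a] for a in part_a for b in part_b)

# Small example graph as an adjacency-set dictionary.
adj = {
    1: {3, 4}, 2: {3, 4},
    3: {1, 2}, 4: {1, 2},
    5: {6}, 6: {5},
}

print(spans_biclique(adj, {1, 2}, {3, 4}))  # K_{2,2}: prints True
print(spans_biclique(adj, {1, 5}, {3, 6}))  # edges 1-6, 5-3 missing: prints False
```

Verifying a candidate partition is the easy direction; as the abstract notes, finding the minimum number of edges to add to reach such a partition (the coherence number) is NP-hard even on bipartite graphs.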
Gravitational-wave (GW) astrophysics is a field in full blossom. Since the landmark detection of GWs from a binary black hole on September 14, 2015, fifty-two compact-object binaries have been reported by the LIGO-Virgo collaboration. Such events carry astrophysical and cosmological information, ranging from how black holes and neutron stars form, to what neutron stars are composed of and how the Universe expands, and they allow testing general relativity in the highly dynamical strong-field regime. It is the goal of GW astrophysics to extract such information as accurately as possible. Yet, this is only possible if the tools and technology used to detect and analyze GWs are advanced enough. A key aspect of GW searches are waveform models, which encapsulate our best predictions for the gravitational radiation under a certain set of parameters and need to be cross-correlated with data to extract GW signals. Waveforms must be very accurate to avoid missing important physics in the data, which might be the key to answering the fundamental questions of GW astrophysics. The continuous improvements of the current LIGO-Virgo detectors, the development of next-generation ground-based detectors such as the Einstein Telescope or the Cosmic Explorer, as well as the development of the Laser Interferometer Space Antenna (LISA), demand accurate waveform models. While available models are enough to capture the low-spin, comparable-mass binaries routinely detected in LIGO-Virgo searches, those for sources from both current and next-generation ground-based and spaceborne detectors must be accurate enough to detect binaries with large spins and asymmetry in the masses. Moreover, the thousands of sources that we expect to detect with future detectors demand accurate waveforms to mitigate biases in the estimation of signals' parameters due to the presence of a foreground of many sources that overlap in the frequency band.
This is recognized as one of the biggest challenges for the analysis of future detectors' data, since biases might hinder the extraction of important astrophysical and cosmological information. In the first part of this thesis, we discuss how to improve waveform models for binaries with high spins and asymmetry in the masses. In the second, we present the first generic metrics that have been proposed to predict biases in the presence of a foreground of many overlapping signals in GW data.
For the first task, we will focus on several classes of analytical techniques. Current models for LIGO and Virgo studies are based on the post-Newtonian (PN, weak-field, small velocities) approximation that is most natural for the bound orbits that are routinely detected in GW searches. However, two other approximations have risen in prominence: the post-Minkowskian (PM, weak-field only) approximation, natural for unbound (scattering) orbits, and the small-mass-ratio (SMR) approximation, typical of binaries in which the mass of one body is much bigger than that of the other. These are most appropriate for binaries with high asymmetry in the masses that challenge current waveform models. Moreover, they allow one to “cover” regions of the parameter space of coalescing binaries, thereby improving the interpolation (and faithfulness) of waveform models. The analytical approximations to the relativistic two-body problem can be synergistically included within the effective-one-body (EOB) formalism, in which the two-body information from each approximation can be recast into an effective problem of a mass orbiting a deformed Schwarzschild (or Kerr) black hole. The hope is that the resultant models can cover both the low-spin, comparable-mass binaries that are routinely detected and the ones that challenge current models. The first part of this thesis is dedicated to a study of how best to incorporate information from the PN, PM, SMR and EOB approaches in a synergistic way. We also discuss how accurate the resulting waveforms are, as compared against numerical-relativity (NR) simulations. We begin by comparing PM models, whether alone or recast in the EOB framework, against PN models and NR simulations. We will show that PM information has the potential to improve currently employed models for LIGO and Virgo, especially if recast within the EOB formalism.
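For reference, the standard EOB energy map from the literature (a well-known result, not specific to this thesis) relates the real two-body energy to that of the effective particle orbiting the deformed black hole:

```latex
E_{\mathrm{real}} = M\sqrt{1 + 2\nu\left(\frac{E_{\mathrm{eff}}}{\mu} - 1\right)},
\qquad M = m_1 + m_2,\quad \mu = \frac{m_1 m_2}{M},\quad \nu = \frac{\mu}{M},
```

where the symmetric mass ratio ν interpolates between the SMR limit (ν → 0) and the equal-mass case (ν = 1/4), which is what allows the two-body information from the PN, PM and SMR approximations to be recast into one effective problem.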
This is very important, as the PM approximation comes with a host of new computational techniques from particle physics to exploit. Then, we show how a combination of the PM and SMR approximations can be employed to access previously unknown PN orders, deriving the third subleading PN dynamics for spin-orbit and (aligned) spin1-spin2 couplings. Such new results can then be included in the EOB models currently used in GW searches and parameter estimation studies, thereby improving them when the binaries have high spins. Finally, we build an EOB model for quasi-circular nonspinning binaries based on the SMR approximation (rather than the PN one, as usually done). We show how this is done in detail without incurring the divergences that had affected previous attempts, and compare the resultant model against NR simulations. We find that the SMR approximation is an excellent approximation for all (quasi-circular nonspinning) binaries, including both the equal-mass binaries that are routinely detected in GW searches and the ones with highly asymmetric masses. In particular, the SMR-based models compare much better than the PN models, suggesting that SMR-informed EOB models might be the key to modelling binaries in the future. In the second task of this thesis, we work within the linear-signal approximation and describe generic metrics to predict inference biases on the parameters of a GW source of interest in the presence of confusion noise from unfitted foregrounds and from residuals of other signals that have been incorrectly fitted out. We illustrate the formalism with simple (yet realistic) LISA sources, and demonstrate its validity against Monte-Carlo simulations. The metrics we describe pave the way for more realistic studies to quantify the biases with future ground-based and spaceborne detectors.
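The standard linear-signal bias formula, Δθ^i ≈ (Γ⁻¹)^{ij} ⟨∂_j h, δh⟩ with Fisher matrix Γ_{ij} = ⟨∂_i h, ∂_j h⟩, can be demonstrated with a toy model. The two basis functions and the "confusion" residual below are invented stand-ins, not LISA waveforms or the thesis metrics, and the noise-weighted inner product is replaced by a plain white-noise dot product:

```python
import numpy as np

# Toy model h(theta) = theta1*g1 + theta2*g2 on a time grid, with a
# white-noise inner product <a, b> = sum(a*b). For a linear model the
# bias formula delta_theta = Gamma^{-1} <dh, delta_h> is exact.

t = np.linspace(0.0, 1.0, 1000)
g1 = np.sin(2 * np.pi * 5 * t)  # partial derivative of h w.r.t. theta1
g2 = np.cos(2 * np.pi * 5 * t)  # partial derivative of h w.r.t. theta2

def inner(a, b):
    return float(np.dot(a, b))

derivs = [g1, g2]
gamma = np.array([[inner(a, b) for b in derivs] for a in derivs])  # Fisher matrix

# Unfitted residual: a piece parallel to g1 plus an off-frequency component.
delta_h = 0.01 * g1 + 0.02 * np.sin(2 * np.pi * 11 * t)

bias = np.linalg.solve(gamma, [inner(g, delta_h) for g in derivs])
print(bias.round(4))  # approximately [0.01, 0.0]
```

Only the part of the residual that projects onto the waveform derivatives biases the recovered parameters; the nearly orthogonal 11 Hz component contributes almost nothing, which is the intuition behind confusion-noise bias metrics.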
The framework concept of the Universitätsschule Potsdam describes the value base and the pedagogical-didactic as well as the scientific foundation of a university school to be founded in Potsdam. Like other university schools, this school is to be characterized by a close, institutionalized relationship between school and university that supports a constant transfer of knowledge between school practice, research, teacher education and school administration. The framework concept lays the foundations for an inclusive school whose students represent a cross-section of society and which offers all school-leaving qualifications of the state of Brandenburg in educational programmes sensitive to inequality. The university school is intended to counteract the strong segregation processes in Potsdam.
The mission statement sets out the core values (sustainability, inclusion and educational equity, human rights and democracy, community, holism) and the educational goals (transfer capability, critical-reflexive thinking and lifelong learning, diversity awareness and transculturality, self-competence and relational competence, cultural techniques and digital competence) of the university school. The pedagogical concept illustrates how these values and educational goals can be realized in the areas of school type, school culture, learning culture, and learning places and environments. Finally, the university school is described as a learning and teaching institution that serves as a site for the transfer of educational innovations. To this end, a transfer workshop is to be embedded in the school to support and shape the exchange of knowledge among school-relevant actors.
Introduction
(2021)
Elaeidobius kamerunicus Faust (Coleoptera: Curculionidae) is an essential insect pollinator in oil palm plantations. Recently, research has been undertaken to improve pollination efficiency using this species. A fundamental understanding of the genes related to this pollinator's behavior is necessary to achieve this goal. Here, we present the draft genome sequence, annotation, and simple sequence repeat (SSR) marker data for this pollinator. In total, 34.97 Gb of sequence data from one male individual (monoisolate) were obtained using the Illumina short-read platform NextSeq 500. The draft genome assembly was found to be 269.79 Mb and about 59.9% complete based on Benchmarking Universal Single-Copy Orthologs (BUSCO) assessment. Functional gene annotation predicted about 26,566 genes. Also, a total of 281,668 putative SSR markers were identified. This draft genome sequence is a valuable resource for understanding the population genetics, phylogenetics, dispersal patterns, and behavior of this species.
In recent years, the didactics of political education has increasingly engaged with the use of narration in civics teaching, since, alongside non-fiction texts, fiction also offers an opportunity to engage with political topics. The literature of Ferdinand von Schirach in particular has found growing resonance in society. Von Schirach's texts take up socially critical themes, illuminate them from different perspectives and challenge readers to form their own opinions. For this reason, von Schirach's narratives hold great potential for political education. Political education also includes legal education. Ferdinand von Schirach's Der Fall Collini engages with both legal and political topics in the sense of legal education. This master's thesis examines the extent to which the novel Der Fall Collini by Ferdinand von Schirach, as a narration, offers an opportunity for political-legal learning in civics teaching. To answer the research question, the learning opportunities and limits of the novel with respect to its subject matter and genre, as well as the competences it fosters, are worked out, and the cross-curricular connections it enables are shown. Through engagement with von Schirach's work, students deal with political-legal topics such as the tension between law and justice, the course of criminal proceedings, and the theoretical claims of the rule of law and its real weaknesses. Moreover, engagement with the novel Der Fall Collini fosters the four subject-specific competences of political education, as well as multiperspectivity and exemplary learning.
Furthermore, the novel links historical, political-legal and moral-ethical aspects, so that cross-curricular connections can be made with the subjects history, German and L-E-R. In addition, as a narration, the courtroom novel also appeals to its readers emotionally and thus fosters holistic and lasting knowledge transfer in the sense of legal education. It has been shown that Der Fall Collini by Ferdinand von Schirach is particularly well suited for classroom engagement within political education.
In the present work, a multidisciplinary investigation was carried out combining tectonic geomorphology methods with geophysical and structural studies, focused mainly on the neotectonic characterization of both flanks of the Sierra de La Candelaria and the southern end of the Metán basin. The study area is located in the border region between the provinces of Salta and Tucumán and belongs to the Santa Bárbara System geological province.
The main objective was to contextualize the evidence of Quaternary tectonic activity in the region by proposing a novel structural model, with the aim of increasing the information available on neotectonic structures and their seismogenic potential. To this end, various techniques were applied and integrated, such as the interpretation of seismic reflection lines, the construction of balanced structural sections, and shallow geophysical methods, in order to verify the behavior at depth both of the geological structures identified at the surface and of the possible crustal blind faults involved.
First, a regional survey of the study area was carried out using LANDSAT and SENTINEL 2 multispectral satellite images, which allowed different levels of Quaternary alluvial fans and fluvial terraces to be recognized. By determining various morphometric indicators on digital elevation models (DEMs), together with field observations, it was possible to identify evidence of deformation on these Quaternary levels that has been genetically related to four neotectonic faults. Three of them (the Arias, El Quemado and Copo Quile faults) were selected for more detailed study through the application of shallow geophysical methods (electrical resistivity tomography (ERT) and seismic refraction tomography (SRT)), which made it possible to corroborate their existence at depth, make geometric and kinematic inferences, and estimate the magnitude of the recent deformation. The Arias and El Quemado faults were interpreted as reverse faults related to interstratal flexural slip, while the Copo Quile fault was interpreted as a low-angle blind reverse fault. A joint interpretation of seismic reflection lines and exploratory wells from hydrocarbon areas of the Choromoro and Metán basins was also carried out in order to contextualize the main recognized structures within the regional stratigraphic and tectonic framework. All the information was integrated into a balanced structural section using kinematic modeling techniques. This model allows it to be inferred that the recognized Quaternary deformation is related to basement displacement along a blind thrust responsible for the uplift of the Sierra de La Candelaria and Cerro Cantero. Likewise, the kinematic model allows the approximate location of the main detachment levels controlling the deformation style to be interpreted.
The shallowest detachment level, which controls the deformation of the sedimentary cover, lies at a depth of 4 km; another subhorizontal shear zone within the basement is inferred at 21 km.
Finally, by integrating all the results obtained, the seismogenic potential of the faults in the study area was evaluated. The first-order faults that control deformation in the area are responsible for the large earthquakes, whereas the Quaternary flexural-slip and reverse faults affect only the sedimentary cover and would be second-order structures that accommodate deformation; they were activated during the Quaternary by aseismic slip and/or seismic events of very low magnitude.
These results suggest that the La Candelaria thrust constitutes an important potential seismogenic source for the region, where numerous towns and major civil works are located. Moreover, the balanced structural cross-section implies the presence of other blind faults of different orders of magnitude that could be additional deep seismogenic sources, underscoring the need to continue this type of study in this tectonically active region.
Polymeric semiconductors are strong contenders for replacing traditional inorganic semiconductors in electronic applications requiring low power, low cost and flexibility, such as biosensors, flexible solar cells and electronic displays. Molecular doping has the potential to enable this revolution by improving the conductivity and charge transport properties of this class of materials. Despite decades of research in this field, gaps in our understanding of the nature of dopant–polymer interactions have resulted in limited commercialization of this technology. This work aims to provide deeper insight into the underlying mechanisms of molecular p-doping of semiconducting polymers in solution and in the solid state, and thereby to bring the scientific community closer to realizing the dream of making organic semiconductors commonplace in the electronics industry. The roles of (1) dopant size/shape, (2) polymer chain aggregation and (3) charge delocalization in the doping mechanism and efficiency are addressed using optical (UV-Vis-NIR) and electron paramagnetic resonance (EPR) spectroscopies. By conducting a comprehensive study of the nature and concentration of the doping-induced species in solutions of the polymer poly(3-hexylthiophene) (P3HT) with three different dopants, we identify the unique optical signatures of the delocalized polaron, the localized polaron and the charge-transfer complex, and report their extinction coefficient values. Furthermore, with X-ray diffraction, atomic force microscopy and electrical conductivity measurements, we study the impact of processing technique and doping mechanism on the morphology and, thereby, charge transport through the doped films.
This work demonstrates that the doping mechanism and type of doping-induced species formed are strongly influenced by the polymer backbone arrangement rather than dopant shape/size. The ability of the polymer chain to aggregate is found to be crucial for efficient charge transfer (ionization) and polaron delocalization. At the same time, our results suggest that the high ionization efficiency of a dopant–polymer system in solution may subsequently hinder efficient charge transport in the solid-state due to the reduction in the fraction of tie chains, which enable charges to move efficiently between aggregated domains in the films. This study demonstrates the complex multifaceted nature of polymer doping while providing important hints for the future design of dopant-host systems and film fabrication techniques.
We consider particles sedimented at a solid wall and immersed in water containing small amounts of photosensitive ionic surfactants. It is shown that illumination with an appropriate wavelength, beam intensity profile, shape and size can lead to a variety of dynamic particle configurations, both unsteady and steady-state. These dynamic, well-controlled and switchable particle patterns at the wall are due to an emerging diffusio-osmotic flow that originates in the electrostatic diffuse layer adjacent to the wall, where concentration gradients of surfactant are induced by light. Conventional nonporous particles are passive and can move only with the already generated flow. Porous colloids, however, actively participate in the flow-generation mechanism at the wall, which also sets up their interactions, which can be very long-ranged. This light-induced diffusio-osmosis opens novel avenues for manipulating colloidal particles and assembling them into various patterns. In particular, we show how to optically create and split confined regions of particles of tunable size and shape, in which well-controlled flow-induced forces on the colloids can result in crystalline packing, the formation of dilute lattices of well-separated particles, and other states.
Zur Einführung
(2021)
Universität
(2021)
Fast Holocene slip and localized strain along the Liquiñe-Ofqui strike-slip fault system, Chile
(2021)
In active tectonic settings dominated by strike-slip kinematics, slip partitioning across subparallel faults is a common feature; therefore, assessing the degree of partitioning and strain localization is paramount for seismic hazard assessments. Here, we estimate a slip rate of 18.8 ± 2.0 mm/year over the past 9.0 ± 0.1 ka for a single strand of the Liquiñe-Ofqui Fault System, which straddles the Main Cordillera in Southern Chile. This Holocene rate accounts for ~82% of the trench-parallel component of oblique plate convergence and is similar to million-year estimates integrated over the entire fault system. Our results imply that strain localizes on a single fault at millennial time scales, but over longer time scales strain localization is not sustained. The fast millennial slip rate in the absence of historical Mw > 6.5 earthquakes along the Liquiñe-Ofqui Fault System implies either a component of aseismic slip or Mw ~7 earthquakes involving multi-trace ruptures and >150-year repeat times. Our results have implications for the understanding of strike-slip fault system dynamics within volcanic arcs and for seismic hazard assessments.
The goal of limiting global warming to well below 2°C as set out in the Paris Agreement calls for a strategic assessment of societal pathways and policy strategies. Besides policy makers, new powerful actors from the private sector, including finance, have stepped up to engage in forward-looking assessments of a Paris-compliant and climate-resilient future. Climate change scenarios have addressed this demand by providing scientific insights on the possible pathways ahead to limit warming in line with the Paris climate goal. Despite the increased interest, the potential of climate change scenarios has not been fully unleashed, mostly due to the lack of an intermediary service that provides guidance and access to climate change scenarios. This perspective presents the concept of a climate change scenario service, its components, and a prototypical implementation to overcome this shortcoming, aiming to make scenarios accessible to a broader audience of societal actors and decision makers.
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based; thus, there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate from medical education and have been shown in a number of studies to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, but few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity while portraying a mental disorder, depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III had been accepted for publication, and Paper II was under review after a minor revision.
First, Paper I develops and validates an observer-based rating scale to assess SP authenticity in psychotherapeutic contexts. Based on the preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic after receiving an in-depth role script compared with SPs who receive only basic information on the patient case. To test this assumption, a randomised controlled study design was implemented, and the hypothesis was confirmed. Consequently, when engaging SPs, an in-depth role script with details, e.g. on the nonverbal behaviour and feelings of the patient, should be provided.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients and therefore suggests that, with proper training, SPs are a promising training method for psychotherapy.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a maximum a posteriori (MAP) estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager-Machlup (OM) functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the Gamma-convergence of OM functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.
We derive Onsager-Machlup functionals for countable product measures on weighted ℓ^p subspaces of the sequence space R^N. Each measure in the product is a shifted and scaled copy of a reference probability measure on R that admits a sufficiently regular Lebesgue density. We study the equicoercivity and Gamma-convergence of sequences of Onsager-Machlup functionals associated to convergent sequences of measures within this class. We use these results to establish analogous results for probability measures on separable Banach or Hilbert spaces, including Gaussian, Cauchy, and Besov measures with summability parameter 1 ≤ p ≤ 2. Together with Part I of this paper, this provides a basis for the analysis of the convergence of maximum a posteriori estimators in Bayesian inverse problems and of most likely paths in transition path theory.
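For orientation, the connection between MAP estimation and the Onsager-Machlup functional can be written down in the standard Gaussian-prior setting; this is a textbook fact from the Bayesian inverse problems literature and is not specific to the notation of these papers:

```latex
% Standard setting (assumed for illustration): Bayesian inverse problem
% with Gaussian prior \mu_0 = N(0, C) on a separable Banach space,
% Cameron--Martin space (E, \lVert \cdot \rVert_E), and data misfit \Phi.
% The Onsager--Machlup functional of the posterior is
I(u) = \Phi(u) + \tfrac{1}{2} \lVert u \rVert_{E}^{2},
% and a MAP estimator is any minimiser of I, i.e. the variational solution
u_{\mathrm{MAP}} \in \operatorname*{arg\,min}_{u \in E} I(u).
```

Gamma-convergence of such functionals is then the natural tool for relating convergence of variational (regularised) solutions to convergence of Bayesian modes.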
Precipitation forecasting has an important place in everyday life – during the day we may have tens of small talks discussing the likelihood that it will rain this evening or weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components comprising a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation into the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for the diagnosis and isolation of nowcast errors that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
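The track-then-extrapolate idea behind such benchmark models can be sketched minimally. The following is a hypothetical illustration, not the rainymotion implementation: Eulerian persistence simply repeats the latest radar frame, while a Lagrangian-persistence variant advects that frame along a constant motion vector (here simplified to integer-pixel shifts).

```python
import numpy as np

def persistence_nowcast(last_frame, n_steps):
    # Eulerian persistence: the latest radar frame is repeated unchanged
    # for every lead time; the trivial benchmark any model must beat.
    return np.repeat(last_frame[np.newaxis], n_steps, axis=0)

def advection_nowcast(last_frame, motion, n_steps):
    # Lagrangian persistence: shift the latest frame by a constant,
    # integer-pixel motion vector (dy, dx) once per time step.
    # Rain moving off the edge wraps around here (np.roll) purely
    # for simplicity; real implementations pad or warp instead.
    return np.stack([
        np.roll(last_frame, shift=(k * motion[0], k * motion[1]), axis=(0, 1))
        for k in range(1, n_steps + 1)
    ])
```

In an operational setting the constant vector would be replaced by a dense motion field estimated by an optical flow algorithm, followed by image warping.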
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The high benefit of using "big data" for training is among the main reasons for that. Hence, the emerging interest in deep learning in the atmospheric sciences is also driven by, and concurrent with, the increasing availability of data – both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this aim, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
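The two verification metrics used here have simple grid-cell-wise definitions; a minimal sketch (the exact verification setup of the thesis, e.g. masking and averaging over events, is assumed away):

```python
import numpy as np

def mae(obs, pred):
    # Mean absolute error over all grid cells (e.g. in mm/h)
    return float(np.mean(np.abs(np.asarray(obs) - np.asarray(pred))))

def csi(obs, pred, threshold):
    # Critical success index at an intensity threshold:
    # hits / (hits + misses + false alarms)
    obs_yes = np.asarray(obs) >= threshold
    pred_yes = np.asarray(pred) >= threshold
    hits = np.sum(obs_yes & pred_yes)
    misses = np.sum(obs_yes & ~pred_yes)
    false_alarms = np.sum(~obs_yes & pred_yes)
    denom = hits + misses + false_alarms
    return float(hits / denom) if denom > 0 else float("nan")
```

Because CSI thresholds the fields before scoring, a spatially smoothed forecast can score well at low thresholds (0.125, 1 mm/h) yet fail at high ones (10, 15 mm/h), which is consistent with the behaviour reported above.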
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
The model development together with the verification experiments for both conventional and deep learning model predictions also revealed the need to better understand the source of forecast errors. Understanding the dominant sources of error in specific situations should help in guiding further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures did not allow to isolate the location error, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
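The location error defined above reduces to a Euclidean distance per lead time; a minimal sketch (array shapes and units are illustrative assumptions):

```python
import numpy as np

def location_error(observed_track, predicted_track):
    # Euclidean distance (e.g. in km) between the observed and the
    # predicted feature location at each lead time. Both tracks are
    # arrays of shape (n_lead_times, 2) holding (x, y) coordinates.
    obs = np.asarray(observed_track, dtype=float)
    pred = np.asarray(predicted_track, dtype=float)
    return np.linalg.norm(obs - pred, axis=1)
```

Averaging this quantity over many tracked features and forecasts yields the mean location error as a function of lead time reported in the benchmarking study below.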
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models, two of which are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method: motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5 percent of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales – just as a result of locational errors – can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that aim, we also consider the high potential of using deep learning architectures specific to the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
For around a decade, deep learning - the subfield of machine learning that refers to artificial neural networks comprising many computational layers - has been reshaping the landscape of statistical model development in many research areas, such as image classification, machine translation, and speech recognition. Geoscientific disciplines in general, and the field of hydrology in particular, do not stand aside from this movement. Recently, modern deep learning-based techniques and methods have been gaining popularity for solving a wide range of hydrological problems: modeling and forecasting of river runoff, regionalization of hydrological model parameters, assessment of available water resources, and identification of the main drivers of recent changes in water balance components. This growing popularity of deep neural networks is primarily due to their high universality and efficiency. These qualities, together with the rapidly growing amount of accumulated environmental information and the increasing availability of computing facilities and resources, allow us to speak of deep neural networks as a new generation of mathematical models designed, if not to replace existing solutions, then to significantly enrich the field of geophysical process modeling. This paper provides a brief overview of the current state of the development and application of deep neural networks in hydrology. A qualitative long-term forecast of the development of deep learning technology for the corresponding hydrological modeling challenges is also provided, based on the "Gartner Hype Cycle", which describes in general terms the life cycle of modern technologies.
We systematically explore the effect of calibration data length on the performance of a conceptual hydrological model, GR4H, in comparison to two Artificial Neural Network (ANN) architectures: Long Short-Term Memory Networks (LSTM) and Gated Recurrent Units (GRU), which have only recently been introduced to the field of hydrology. We implemented a case study for six river basins across the contiguous United States, with 25 years of meteorological and discharge data. Nine years were reserved for independent validation; two years were used as a warm-up period, one year for each of the calibration and validation periods, respectively; from the remaining 14 years, we sampled increasing amounts of data for model calibration, and found pronounced differences in model performance. While GR4H required less data to converge, LSTM and GRU caught up at a remarkable rate, considering their number of parameters. LSTM and GRU also exhibited higher calibration instability than GR4H. These findings confirm the potential of modern deep-learning architectures in rainfall-runoff modelling, but also highlight the noticeable differences between them with regard to the effect of calibration data length.
Donors of development assistance for health typically provide funding for a range of disease focus areas, such as maternal and child health, malaria, HIV/AIDS, and other infectious diseases. But funding for each disease category does not closely match the disability and loss of life it causes or the cost-effectiveness of interventions. We argue that peer influences in the social construction of global health priorities contribute to explaining this misalignment. Aid policy-makers are embedded in a social environment encompassing other donors, health experts, advocacy groups, and international officials. This social environment influences the conceptual and normative frameworks of decision-makers, which in turn affect their funding priorities. Aid policy-makers are especially likely to emulate decisions on funding priorities taken by peers with whom they are most closely involved in the context of expert and advocacy networks. We draw on novel data on donor connectivity through health IGOs and health INGOs and assess the argument by applying spatial regression models to health aid disbursed globally between 1990 and 2017. The analysis provides strong empirical support for our argument that involvement in overlapping expert and advocacy networks shapes funding priorities regarding disease categories and recipient countries in health aid.
The paper introduces the principle Maximise Presupposition and its cognates. The main focus of the literature and this article is on the inferences that arise as a result of reasoning with Maximise Presupposition ('anti-presuppositions'). I will review the arguments put forward for distinguishing them from other inference types, most notably presuppositions and conversational implicatures. I will zoom in on three main issues regarding Maximise Presupposition and these inferences critically discussed in the literature: epistemic strength(ening), projection, and the role of alternatives. I will discuss more recent views which argue for either a uniform treatment of anti-presuppositions and implicatures and/or a revision of the original principle in light of new data and developments in pragmatics.
Internships during tertiary education have become substantially more common over the past decades in many industrialised countries. This study examines the impact of a voluntary intra-curricular internship experience during university studies on the probability of being invited to a job interview. To estimate a causal relationship, we conducted a randomised field experiment in which we sent 1248 fictitious, but realistic, resumes to real job openings. We find that applicants with internship experience have, on average, a 12.6% higher probability of being invited to a job interview.
The trace elements zinc and manganese are essential for human health, especially due to their enzymatic and protein-stabilizing functions. If these elements are ingested in amounts exceeding requirements, the regulatory processes that maintain their physiological concentrations (homeostasis) can be disturbed. Such homeostatic dysregulation can cause severe health effects, including the emergence of neurodegenerative disorders such as Parkinson’s disease (PD). The concentrations of essential trace elements also change during the aging process. However, the relations of cause and consequence between increased manganese and zinc uptake and their influence on the aging process and the emergence of aging-associated PD are still poorly understood. This doctoral thesis therefore aimed to investigate the influence of a nutritive zinc and/or manganese oversupply on metal homeostasis during the aging process. For this purpose, the model organism Caenorhabditis elegans (C. elegans) was used. This nematode is well suited as an aging and PD model due to properties such as its short life cycle and its completely sequenced, genetically amenable genome. Different protocols for the propagation of zinc- and/or manganese-supplemented young, middle-aged and aged C. elegans were established, using wildtypes as well as genetically modified worm strains modeling inheritable forms of parkinsonism. To identify homeostatic and neurological alterations, the nematodes were investigated with different methods, including the analysis of total metal contents via inductively coupled plasma tandem mass spectrometry, a specific probe-based method for quantifying labile zinc, survival assays, gene expression analysis, and fluorescence microscopy for the identification and quantification of dopaminergic neurodegeneration. During aging, the levels of iron, zinc and manganese increased.
Furthermore, simultaneous oversupply with zinc and manganese increased the total zinc and manganese contents to a greater extent than single-metal supplementation. In this context, the C. elegans metallothionein 1 (MTL-1) was identified as an important regulator of metal homeostasis. The total zinc content and the concentration of labile zinc were both age-dependently regulated, but in different ways. This elucidates the importance of distinguishing these parameters as two independent biomarkers of zinc status. Aging, rather than metal oversupply, increased the levels of dopaminergic neurodegeneration. Additionally, nearly all of these results revealed differences in the aging-dependent regulation of trace element homeostasis between wildtypes and PD models. This confirms that an increased zinc and manganese intake can influence the aging process as well as parkinsonism by altering homeostasis, although the underlying mechanisms need to be clarified in further studies.
Manganese (Mn) and zinc (Zn) are not only essential trace elements, but also potential exogenous risk factors for various diseases. Since the disturbed homeostasis of single metals can result in detrimental health effects, concerns have emerged regarding the consequences of excessive exposures to multiple metals, either via nutritional supplementation or parenteral nutrition. This study focuses on Mn-Zn-interactions in the nematode Caenorhabditis elegans (C. elegans) model, taking into account aspects related to aging and age-dependent neurodegeneration.
The central element of this work is the synthesis and characterization of practically usable ionogels. The polymer ionogels are based on the model polymer poly(methyl methacrylate). Ionic liquids derived from the widely used imidazolium cation serve as additives. The properties of the embedded ionic liquids give the ionogels their function. The functionality of the respective gels, and thus the transfer of the properties of the ionic liquids to the ionogels, was verified and confirmed in this work by means of numerous characterization techniques. Through ionogel formation, macroscopic ionogel objects in the form of films and nonwoven mats were produced. Film casting and electrospinning were used as methods to produce these films and mats, each resulting in a model system. The present work is therefore divided into the topics of "electrically semiconducting ionogel films" and "antimicrobially active ionogel mats". The use of triiodide-containing ionic liquids and a polymer matrix in a discontinuous casting process results in electrically semiconducting ionogel films. The flexible and transparent films could be at the center of numerous new fields of application in flexible electronics. Electrospinning poly(methyl methacrylate) with an ionic liquid produced a homogeneous ionogel mat, which serves as a model for transferring the antimicrobially active properties of ionic liquids to porous structures for filtration. At the same time, it is the first example of a copper-chloride-containing ionogel. Ionogels are attractive materials with numerous potential applications. The present work extends the spectrum of ionogels by an electrically semiconducting and an antimicrobially active ionogel.
At the same time, this work added to the class of ionic liquids three examples of electrically semiconducting ionic liquids as well as numerous copper(II) chloride-based ionic liquids.
“Embodied Practices – Looking From Small Places” is an edited transcript of a conversation between theatre and performance scholar Sruti Bala (University of Amsterdam) and sociologist, criminologist and anthropologist Dylan Kerrigan (University of Leicester) that took place as an online event in November 2020. Throughout their talk, Bala and Kerrigan engage with the legacy of Haitian anthropologist Michel-Rolph Trouillot. Specifically, they focus on his approach of looking from small units, such as small villages in Dominica, outwards to larger political structures such as global capitalism, social inequalities and the distribution of power. They also share insights from their own research on embodied practices in the Caribbean, Europe and India and answer questions such as: What can research on and through embodied practices tell us about systems of power and domination that move between the local and the global? How can performance practices which are informed by multiple locations and cultures be read and appreciated adequately? Sharing insights from his research into Guyanese prisons, Kerrigan outlines how he aims to connect everyday experiences and struggles of Caribbean people to trans-historical and transnational processes such as racial capitalism and post/coloniality. Furthermore, he elaborates on how he uses performance practices such as spoken word poetry and data verbalisation to connect with systematically excluded groups. Bala challenges naïve notions about the inherent transformative potential of performance in her research on performance and translation. She points to the way in which performance and its reception is always already inscribed in what she calls global or planetary asymmetries. At the conclusion of this conversation, they broach the question: are small places truly as small as they seem?
Label-free optical sensors are attractive candidates, for example, for detecting toxic substances and monitoring biomolecular interactions. Their performance can be pushed by the design of the sensor through clever material choices and integration of components. In this work, two porous materials, namely, porous silicon and plasmonic nanohole arrays, are combined in order to obtain increased sensitivity and dual-mode sensing capabilities. For this purpose, porous silicon monolayers are prepared by electrochemical etching and plasmonic nanohole arrays are obtained using a bottom-up strategy. Hybrid sensors of these two materials are realized by transferring the plasmonic nanohole array on top of the porous silicon. Reflectance spectra of the hybrid sensors are characterized by a fringe pattern resulting from the Fabry–Pérot interference at the porous silicon borders, which is overlaid with a broad dip based on surface plasmon resonance in the plasmonic nanohole array. In addition, the hybrid sensor shows a significantly higher reflectance than the porous silicon monolayer. The sensitivities of the hybrid sensor to refractive index changes are determined separately for both components. A significant increase in sensitivity, from 213 ± 12 to 386 ± 5 nm/RIU, is determined for the transfer of the plasmonic nanohole array sensors from solid glass substrates to porous silicon monolayers. In contrast, the spectral position of the interference pattern of porous silicon monolayers in different media is not affected by the presence of the plasmonic nanohole array. However, the changes in fringe pattern reflectance of the hybrid sensor are increased 3.7-fold after being covered with plasmonic nanohole arrays and could be used for high-sensitivity sensing. Finally, the capability of the hybrid sensor for simultaneous and independent dual-mode sensing is demonstrated.
The ecological and social problems of the present force far-reaching changes in industrial production and value-creation processes and in private consumption styles. This book addresses both sides of the coin: it examines the contributions that companies can make through sustainable management towards a socially just and ecologically sound future, as well as the possibilities for consumers to contribute to a livable future through their consumption decisions. Each chapter opens with a statement of learning objectives and closes with a learning assessment. Numerous insights into practice support understanding, and up-to-date links to websites of companies and institutions round off the book. The book is aimed in particular at students of business and economics, but also at anyone interested in the topic. In sum: this compact and accessible introduction creates a deeper understanding of the link between sustainable management and consumer behavior.
Choice-Based Conjoint Analysis
(2021)
Choice-based conjoint analysis (CBC) is currently probably the most popular variant of conjoint analysis. Reasons for this lie, on the one hand, in the easy availability of user-friendly software (e.g., R, Sawtooth Software); on the other hand, the method also has methodological and practical strengths owing to its special position. In contrast to rating-based conjoint analysis, a CBC collects and analyzes not preference judgments but discrete choices made by respondents. Strictly speaking, CBC is therefore a discrete choice analysis (DCA) applied to a conjoint experimental design. Both terms are still in use; this chapter discusses the methodology from the ground up and by means of a worked application example.
Less is more!
(2021)
Enhancing consumer satisfaction and well-being is an important objective of companies, retailers and public policy makers. In the current debate on climate change, a consistent theme is that consumers in developed countries must learn to consume less. The present study (based on representative data sets from the US, N = 1,017, and Germany, N = 1,030) addresses these issues by using a scenario-based experiment to analyze how satisfied voluntary simplifiers (people who voluntarily abstain from consumption) are with their purchase decisions in the case of a muesli brand. The research question is whether people who follow a sustainable, simple lifestyle are more satisfied with their daily consumption choices than people who have a more consumerist lifestyle. If so, it would be easier for many people to change their lifestyles and consume less. In addition, this scenario experiment manipulates consumer empowerment and decision complexity since both factors are supposed to influence purchase satisfaction. The results are consistent across both countries and indicate that voluntary simplifiers experience a higher level of purchasing satisfaction than non-simplifiers, whereby empowerment and decision complexity play different roles.
Metal sulfide nanoparticle synthesis with ionic liquids: state of the art and future perspectives
(2021)
Metal sulfides are among the most promising materials for a wide variety of technologically relevant applications ranging from energy to environment and beyond. Incidentally, ionic liquids (ILs) have been among the top research subjects for the same applications and also for inorganic materials synthesis. As a result, the exploitation of the peculiar properties of ILs for metal sulfide synthesis could provide attractive new avenues for the generation of new, highly specific metal sulfides for numerous applications. This article therefore describes current developments in metal sulfide nanoparticle synthesis as exemplified by a number of highlight examples. Moreover, the article demonstrates how ILs have been used in metal sulfide synthesis and discusses the benefits of using ILs over more traditional approaches. Finally, the article outlines some technological challenges and how ILs could be used to further advance the production and specific property engineering of metal sulfide nanomaterials, again based on a number of selected examples.
Enzymes can support the synthesis or degradation of biomacromolecules in natural processes. Here, we demonstrate that enzymes can induce a macroscopic-directed movement of microstructured hydrogels following a mechanism that we call a "Jack-in-the-box" effect. The material's design is based on the formation of internal stresses induced by a deformation load on an architectured microscale, which are kinetically frozen by the generation of polyester locking domains, similar to a Jack-in-the-box toy (i.e., a compressed spring stabilized by a closed box lid). To induce the controlled macroscopic movement, the locking domains are equipped with enzyme-specific cleavable bonds (i.e., a box with a lock and key system). As a result of enzymatic reaction, a transformed shape is achieved by the release of internal stresses. There is an increase in entropy in combination with a swelling-supported stretching of polymer chains within the microarchitectured hydrogel (i.e., the encased clown pops up with a pre-stressed movement when the box is unlocked). This utilization of an enzyme as a physiological stimulus may offer new approaches to create interactive and enzyme-specific materials for different applications such as an optical indicator of the enzyme's presence or actuators and sensors in biotechnology and in fermentation processes.
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, who want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause, as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they represent, as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would benefit greatly from interactions for exploring the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
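As an aside, the core idea described above, one dot per individual with a bidirectional link between each dot and its underlying record, can be sketched in a few lines. This is a hypothetical, minimal illustration; the class and field names are invented here and do not appear in the report or in Lively4:

```python
class DotPlot:
    """Sketch of a dot-per-individual visualization with a
    bidirectional mapping between dots and data records."""

    def __init__(self, records):
        self.records = list(records)
        # Forward mapping: one dot per record, linked by index.
        self.dots = [{"id": i, "selected": False} for i in range(len(self.records))]

    def select(self, dot_ids):
        """Simulate dragging a selection around some dots."""
        for dot in self.dots:
            dot["selected"] = dot["id"] in dot_ids

    def selected_records(self):
        """Backward mapping: from selected dots to the original data."""
        return [self.records[d["id"]] for d in self.dots if d["selected"]]


plot = DotPlot([{"name": "A", "age": 30}, {"name": "B", "age": 42}])
plot.select({1})
print(plot.selected_records())  # the record behind the selected dot
```

The index-based link shown here is only one of the mapping strategies the report weighs; the trade-off is between the cheap lookup of an index and the robustness of an explicit key when records are filtered or reordered.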
In this short survey article, we showcase a number of non-trivial geometric problems that have recently been resolved by marrying methods from functional calculus and real-variable harmonic analysis. We give a brief description of these methods as well as their interplay. This is a succinct survey that hopes to inspire geometers and analysts alike to study these methods so that they can be further developed to be potentially applied to a broader range of questions.
Informatics education plays an increasingly central role in the education of a twenty-first-century society. For chemistry teaching, two aspects follow from this: on the one hand, concepts from informatics education can help to foster ways of thinking and working specific to chemistry and the natural sciences; on the other hand, chemistry teaching can contribute to informatics education. This article addresses both aspects and attempts to show the mutual benefits of informatics education and science education in chemistry classes.
On the abolition of the expert review procedure (Gutachterverfahren) in statutory psychotherapy – a loss of quality?
(2021)
Objective: This article examines the question of the extent to which the expert review procedure in statutory psychotherapy constitutes a reliable quality-assurance instrument, and whether the planned abolition of the procedure entails a risk of reduced quality in outpatient psychotherapy.
Methods: A literature search was conducted. Works from the years 2000 to 2020 that deal with the expert review procedure as a quality criterion of outpatient psychotherapy were considered. To discuss the differing positions of the cited authors, further literature was also consulted.
Results: Empirically, the expert review procedure cannot reliably be regarded as an established quality criterion of outpatient psychotherapy. The assumption that statutory psychotherapy without expert review would lead to a reduction in the quality of psychotherapy is, on the whole, not supported by the works summarized here.
The Eastern Mediterranean is the most seismically active region in Europe due to the complex interactions of the Arabian, African, and Eurasian tectonic plates. Deformation is achieved by faulting in the brittle crust, distributed flow in the viscoelastic lower-crust and mantle, and Hellenic subduction, but the long-term partitioning of these mechanisms is still unknown. We exploit an extensive suite of geodetic observations to build a kinematic model connecting strike-slip deformation, extension, subduction, and shear localization across Anatolia and the Aegean Sea by mapping the distribution of slip and strain accumulation on major active geological structures. We find that tectonic escape is facilitated by a plate-boundary-like, translithospheric shear zone extending from the Gulf of Evia to the Turkish-Iranian Plateau that underlies the surface trace of the North Anatolian Fault. Additional deformation in Anatolia is taken up by a series of smaller-scale conjugate shear zones that reach the upper mantle, the largest of which is located beneath the East Anatolian Fault. Rapid north-south extension in the western part of the system, driven primarily by Hellenic Trench retreat, is accommodated by rotation and broadening of the North Anatolian mantle shear zone from the Sea of Marmara across the north Aegean Sea, and by a system of distributed transform faults and rifts including the rapidly extending Gulf of Corinth in central Greece and the active grabens of western Turkey. Africa-Eurasia convergence along the Hellenic Arc occurs at a median rate of 49.8 mm yr⁻¹ in a largely trench-normal direction except near eastern Crete where variably oriented slip on the megathrust coincides with mixed-mode and strike-slip deformation in the overlying accretionary wedge near the Ptolemy-Pliny-Strabo trenches.
Our kinematic model illustrates the competing roles that the North Anatolian mantle shear zone, the Hellenic Trench, the overlying mantle wedge, and active crustal faults play in accommodating tectonic indentation, slab rollback and associated Aegean extension. Viscoelastic flow in the lower crust and upper mantle dominates the surface velocity field across much of Anatolia, and a clear transition to megathrust-related slab pull occurs in western Turkey, the Aegean Sea and Greece. Crustal-scale faults and the Hellenic wedge contribute only a minor amount to the large-scale, regional pattern of Eastern Mediterranean interseismic surface deformation.
Cyanobacteria are an abundant bacterial group found in a variety of ecological niches around the globe. When they form blooms, they can pose a real threat to fish and mammals and can restrict the use of lakes or rivers for recreational purposes or as a source of drinking water. One of the most abundant bloom-forming cyanobacteria is Microcystis aeruginosa.
In the first part of the study, the role and possible dynamics of RubisCO in M. aeruginosa during high-light irradiation were examined. Its response was analyzed at the protein and peptide level via immunoblotting, immunofluorescence microscopy and high-performance liquid chromatography (HPLC). It was revealed that large amounts of RubisCO were located outside of carboxysomes under the applied high-light stress. RubisCO aggregated mainly underneath the cytoplasmic membrane, where it formed a putative Calvin-Benson-Bassham (CBB) supercomplex together with other enzymes of photosynthesis. This complex could be part of an alternative carbon-concentrating mechanism (CCM) in M. aeruginosa, enabling a faster and more energy-saving adaptation of the whole bloom to high-light stress.
Furthermore, the re-localization of RubisCO was delayed in the microcystin-deficient mutant ΔmcyB, and RubisCO was more evenly distributed over the cell in comparison to the wild type. Since ΔmcyB is not impaired in its growth, other cyanopeptides produced by the cell, such as aeruginosin or cyanopeptolin, possibly also play a role in stabilizing RubisCO and the putative CBB complex, especially in the microcystin-free mutant.
In the second part of this work, the possible role of microcystin as an extracellular signaling peptide during the diurnal cycle was studied. HPLC analysis showed a strong increase of extracellular microcystin in the wild type when the population entered nighttime, which resumed on the following day. Together with the increase of extracellular microcystin, a strong decrease of protein-bound intracellular microcystin was observed via immunoblot analysis. Interestingly, the signal of the large subunit of RubisCO (RbcL) also diminished when high amounts of microcystin were present in the surrounding medium. Microcystin addition experiments with M. aeruginosa WT and ΔmcyB cultures support this observation, since the immunoblot signals of both subunits of RubisCO and of CcmK, a shell protein of carboxysomes, diminished after the addition of microcystin. In addition, the fluctuation of cyanopeptolin during the diurnal cycle indicates a more prominent role of other cyanopeptides besides microcystin as signaling peptides, both intracellularly and extracellularly.
Frailty assessment is recommended before elective transcatheter aortic valve implantation (TAVI) to determine post-interventional prognosis. Several studies have investigated frailty in TAVI patients using numerous assessments; however, it remains unclear which is the most appropriate tool for clinical practice. Therefore, we evaluated which frailty assessments are most commonly used and meaningful for ≤30-day and ≥1-year prognosis in TAVI patients. Randomized controlled or observational studies (prospective/retrospective) investigating all-cause mortality in older (≥70 years) TAVI patients were identified (PubMed; May 2020). In total, 79 studies investigating frailty with 49 different assessments were included. As single markers of frailty, mostly gait speed (23 studies) and serum albumin (16 studies) were used. A higher risk of 1-year mortality was predicted by slower gait speed (highest hazard ratio (HR): 14.71; 95% confidence interval (CI) 6.50–33.30) and lower serum albumin level (highest HR: 3.12; 95% CI 1.80–5.42). Composite indices (five items; seven studies) were associated with 30-day (highest odds ratio (OR): 15.30; 95% CI 2.71–86.10) and 1-year mortality (highest OR: 2.75; 95% CI 1.55–4.87). In conclusion, single markers of frailty, in particular gait speed, were widely used to predict 1-year mortality. Composite indices were appropriate, as was a comprehensive assessment of frailty.
Pivots revisited
(2021)
The term "pivot" usually refers to two overlapping syntactic units such that the completion of the first unit simultaneously launches the second. In addition, pivots are generally said to be characterized by the smooth prosodic integration of their syntactic parts. This prosodic integration is typically achieved by prosodic-phonetic matching of the pivot components. As research on such turns in a range of languages has illustrated, speakers routinely deploy pivots so as to be able to continue past a point of possible turn completion, in the service of implementing some additional or revised action. This article seeks to build on, and complement, earlier research by exploring two issues in more detail as follows: (1) what exactly do pivotal turn extensions accomplish on the action dimension, and (2) what role does prosodic-phonetic packaging play in this? We will show that pivot constructions not only exhibit various degrees of prosodic-phonetic (non-)integration, i.e., differently strong cesuras, but that they can be ordered on a continuum, and that this cline maps onto the relationship of the actions accomplished by the components of the pivot construction. While tighter prosodic-phonetic integration, i.e., weak(er) cesuring, co-occurs with post-pivot actions whose relationship to that of the pre-pivot tends to be rather retrospective in character, looser prosodic-phonetic integration, i.e., strong(er) cesuring, is associated with a more prospective orientation of the post-pivot's action. These observations also raise more general questions with regard to the analysis of action.
“Chunking” spoken language
(2021)
In this introductory paper to the special issue on “Weak cesuras in talk-in-interaction”, we aim to guide the reader into current work on the “chunking” of naturally occurring talk. It is conducted in the methodological frameworks of Conversation Analysis and Interactional Linguistics – two approaches that consider the interactional aspect of humans talking with each other to be a crucial starting point for its analysis. In doing so, we will (1) lay out the background of this special issue (what is problematic about “chunking” talk-in-interaction, the characteristics of the methodological approach chosen by the contributors, the cesura model), (2) highlight what can be gained from such a revised understanding of “chunking” in talk-in-interaction by referring to previous work with this model as well as the findings of the contributions to this special issue, and (3) indicate further directions such work could take starting from papers in this special issue. We hope to induce a fruitful exchange on the phenomena discussed, across methodological divides.
This article discusses the so-called 'Apocalypse' of Carour, a text preserved in a Codex (M586) of the famous Hamuli-find, that originally emanated from the environment of the Pachomian monastic enterprise. It addresses, through metaphorical imagery and similes, a series of disasters and communal deficiencies that struck the community after the death of its founding father Pachomios. After presenting a few conjectures to the editio princeps and providing a German translation, the 'Apocalypse' is contextualized within the historical and liturgical background of this late antique monastic community. The author asserts that this unique text not only displays the symptoms of disaster, but also gives us new insights into how the Pachomians productively coped with crises. In contrast to modern scholarship, the author argues that the 'Apocalypse' is in fact a prophecy (ex eventu) that was based on an instruction, which was publicly read at the large Easter assembly of the Pachomians, most likely by Horsiesos, the third abbot of the Koinonia. Using the figure of the frog, C(h)arour, to symbolize the biblical plague but also the Egyptian concept of rebirth, the instruction was intended to strengthen group cohesion and especially to prepare the novices who were about to receive their baptism during the Easter celebration for a life devoted to the Koinonia and its principles. To this initial prophecy, which developed an antithesis to the ideal monastic life envisioned by the Pachomians, another text was later added that narrated an unsuccessful attempt to overthrow Apa Besarion, the fifth abbot of the Koinonia. In a much more practical manner this second part of the prophecy elaborated on the same themes while also displaying the resilience of the community in averting crises through remembering and recommitting to its founding precepts.
The convoluted text we possess now should therefore be equally viewed as a testament to the communication structures of the Pachomians as well as their memorial culture, which targeted moments of crisis and despair to imbue future generations with the necessary persistence to overcome possible disasters themselves and secure the long-term existence of the Koinonia.
The life cycle of higher plants consists of recurring phases of growth and development, built on repetitive sequences of cell division, cell expansion and cell differentiation. This dissertation comprises two projects, each investigating a different topic related to cell expansion. The first project examines an Arabidopsis thaliana mutant exhibiting overall cell enlargement, and the second analyses two naturally occurring floral morphs of Amsinckia spectabilis (Boraginaceae) that differ (among other traits) in style length and anther height due to differences in longitudinal cell elongation. The EMS mutant eop1 was shown to exhibit a petal size increase of 26% caused by cell enlargement. Further phenotypes were detected, such as a cotyledon size increase (based on larger cells) as well as increased carpel, sepal, leaf and pollen sizes. Plant height was increased, and more highly branched trichomes explained the hairy eop1 phenotype. Fine mapping revealed the causal SNP to be a C-to-T transition at the last nucleotide of intron 7 of the INCURVATA11 (ICU11) gene, a 2-oxoglutarate/Fe(II)-dependent dioxygenase, causing missplicing of the mRNA. Two T-DNA insertion lines (icu11-2 and icu11-4) confirmed ICU11 as the causal gene by exhibiting increased petal size. A comparison of three icu11 alleles, which carried different mutation-related changes, either overexpressing ICU11 or producing modified mRNAs, formed the basis for investigating the molecular mechanism underlying the observed phenotype. Different approaches yielded contradictory results regarding ICU11 protein functionality in the icu11 mutants. A complementation assay proved the three mutants to be interchangeable, and ICU11 overexpression in the wild type led to an icu11-like phenotype, arguing that all three icu11 mutants are gain-of-function (GOF) mutants. Contradicting this conclusion, the icu11-4 line could be rescued by a genomic ICU11 transgene.
A model was proposed based on the assumption that overexpression of ICU11 inhibits the function of the protein and thus causes the same effect as a loss-of-function (LOF) protein. Furthermore, icu11-3 (eop1) mutants were shown to have an increased resistance towards paclobutrazol, a gibberellin (GA) inhibitor, as well as an upregulation of AtGA20ox2, a main GA biosynthesis gene. Additionally, the subcellular localization of ICU11 was found to be cytoplasmic, supporting the assumption that ICU11 affects GA biosynthesis and the overall GA level, possibly explaining the observed (GA-overdose) phenotype.
The second project aimed to identify the genetic basis of the S-locus in Amsinckia spectabilis, as the genus Amsinckia displays characteristics atypical for a heterostylous species, such as no obvious self-incompatibility (SI) and the repeated transition towards homostylous, fully selfing variants. The work was based on three forms of Amsinckia spectabilis: a heterostylous form, consisting of two floral morphs with reciprocal positioning of the sexual organs (S-morph: high anthers and a short style; L-morph: low anthers and a long style), and two homostylous forms, one large-flowered and partially selfing, the other small-flowered and fully selfing. The maintenance of the two floral morphs is genetically based on the S-locus region, which contains the genes encoding the morph-specific traits and is marked by tight linkage due to suppressed recombination. Natural populations show a 1:1 S:L morph ratio, which can be explained by predominantly disassortative mating of the two morphs, so that the dominant S-allele occurs only in the heterozygous state (heterozygous (Ss) for the S-morph and homozygous recessive (ss) for the L-morph). Investigation of morph-specific phenotypes detected 56% longer L-morph styles and 58% higher positioned S-morph anthers. Approximately 50% of the observed size differences were explained by increased cell elongation. Moreover, additional phenotypes were found, such as 21% enlarged S-morph pollen and no obvious SI, confirmed by seed counts after hand pollination, in vivo pollen tube growth and the development of homozygous dominant SS individuals via selfing. The Amsinckia spectabilis S-locus was assumed to consist of at least the G- (style length), the A- (anther height) and the P- (pollen size) locus.
Comparative transcriptomics of the two morphs revealed 22 differentially expressed markers located within two contigs of a PacBio genome assembly of an SS individual, allowing the S-locus to be delimited to a region of approximately 23 Mb. In contrast to S-loci characterized elsewhere in the plant kingdom, no strong evidence was found that a hemizygous region causes the suppressed recombination of the S-locus, so that an inversion was assumed to be the causal mechanism.
The accelerating climatic changes and new infrastructure development across the Arctic require more robust risk and environmental assessment, but thus far there has been no consistent record of human impact. We provide a first panarctic satellite-based record of expanding infrastructure and anthropogenic impacts along all permafrost-affected coasts (100 km buffer, approximately 6.2 million km²), named the Sentinel-1/2 derived Arctic Coastal Human Impact (SACHI) dataset. Its completeness and thematic content go beyond traditional satellite-based approaches as well as other publicly accessible data sources. Three classes are considered: linear transport infrastructure (roads and railways), buildings, and other impacted area. C-band synthetic aperture radar and multi-spectral information (2016-2020) is exploited within a machine learning framework (gradient boosting machines and deep learning) and combined for retrieval at 10 m nominal resolution. In total, an area of 1243 km² constitutes human-built infrastructure as of 2016-2020. Depending on the region, SACHI contains 8%-48% more information (human presence) than OpenStreetMap. 221 (78%) more settlements are identified than in a recently published dataset for this region. 47% of the impacted area is not covered in a global night-time light dataset from 2016. At least 15% (180 km²) corresponds to new or increased detectable human impact since 2000, according to a Landsat-based normalized difference vegetation index trend comparison within the analysis extent. Most of the expanded presence occurred in Russia, with some in Canada and the US. 31% and 5% of the impacted area associated predominantly with the oil/gas and mining industries, respectively, appeared after 2000.
55% of the identified human-impacted area will shift to above 0 °C ground temperature at two-meter depth by 2050 if current permafrost warming trends continue at the pace of the last two decades, highlighting the critical importance of better understanding how much, and where, Arctic infrastructure may become threatened by permafrost thaw.
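The pixel-wise classification described above can be sketched with a gradient boosting machine on synthetic multi-band features. Everything below is an illustrative stand-in, not the SACHI pipeline: the six "bands" mimic Sentinel-1 SAR and Sentinel-2 multi-spectral inputs, and the data are fabricated so the toy task is learnable.

```python
# Minimal sketch of pixel-wise classification with a gradient boosting
# machine, in the spirit of the framework described above. The six
# "bands" are synthetic stand-ins for Sentinel-1 SAR and Sentinel-2
# multi-spectral features; the class scheme mirrors the three SACHI
# classes plus background, but all data are fabricated.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_bands = 2000, 6
# 0 = background, 1 = linear transport, 2 = buildings, 3 = other impact
y = rng.integers(0, 4, size=n_pixels)
# shift the band means per class so the toy problem is separable
X = rng.normal(size=(n_pixels, n_bands)) + 0.8 * y[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

In a real pipeline each pixel's feature vector would come from co-registered radar and multi-spectral composites, and the gradient boosting output would be fused with deep-learning predictions before the 10 m retrieval.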
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images displaying natural scenes. Nowadays, other fields, such as cultural heritage, where an abundance of data is available, are also coming into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision". In this seminar, students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to the available data is the scarcity of annotated training data, as the dataset provided by the Getty Research Institute does not contain a sufficient number of annotated samples for training deep neural networks. However, throughout the report we show that it is possible to achieve satisfying to very good results by using further publicly available datasets, such as the WikiArt dataset, for the training of machine learning models.
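The report's central remedy for scarce annotations, leveraging a larger public dataset before fitting a classifier on few labeled samples, can be reduced to a toy two-stage setup: a feature extractor fitted on a large auxiliary set (a crude stand-in for WikiArt pre-training) followed by a lightweight classifier head trained on a tiny annotated target set. All data, sizes, and the PCA-as-extractor choice below are illustrative assumptions, not the seminar's actual models.

```python
# Toy illustration of the transfer idea: fit a feature extractor on a
# large unlabeled auxiliary set (stand-in for pre-training on WikiArt),
# then train only a small classifier head on a tiny annotated target
# set. All data here are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
aux = rng.normal(size=(5000, 64))            # large auxiliary "dataset"
extractor = PCA(n_components=8).fit(aux)     # "pre-trained" features

# tiny annotated target set: two classes with shifted means
y_t = rng.integers(0, 2, size=60)
X_t = rng.normal(size=(60, 64)) + 1.0 * y_t[:, None]

head = LogisticRegression().fit(extractor.transform(X_t), y_t)
acc = head.score(extractor.transform(X_t), y_t)
print(f"training accuracy of the head: {acc:.2f}")
```

In practice the extractor would be a deep network pre-trained on the auxiliary image corpus, with only the final layers fine-tuned on the annotated Getty samples; the two-stage structure is the same.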
The picture of clearly separated core-shell structures is outdated: core-shell nanoparticles lag behind their respective bulk materials in efficiency due to intermixing of dopant ions between core and shell. To optimize the photoluminescence of core-shell UCNPs, this intermixing should be as small as possible, and the key parameters of the process therefore need to be identified. In the present work, Ln(III) ion migration in the host lattices NaYF4 and NaGdF4 was monitored. These investigations were performed by laser spectroscopy, exploiting lanthanide resonance energy transfer (LRET) between Eu(III) as donor and Pr(III) or Nd(III) as acceptor. The LRET is evaluated based on Förster theory. The findings corroborate the literature and demonstrate the migration of ions in the host lattices. Based on the introduced LRET model, the acceptor concentration in the surroundings of a donor clearly depends on the design of the applied core-shell-shell nanoparticles. In general, thinner intermediate insulating shells lead to higher acceptor concentrations, stronger quenching of the Eu(III) donor and, subsequently, stronger sensitization of the Pr(III) or Nd(III) acceptors. The choice of host lattice as well as the synthesis temperature are parameters to be considered for the intermixing process.
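The LRET evaluation rests on Förster theory, in which the energy transfer efficiency falls off with the sixth power of the donor-acceptor distance; this steep distance dependence is what makes the measured Eu(III) quenching a sensitive probe of ion intermixing across the insulating shell. The standard Förster expression (not specific to this work's parametrization) reads:

```latex
% Förster energy transfer efficiency E as a function of the
% donor-acceptor distance r and the Förster radius R_0
E(r) = \frac{R_0^{6}}{R_0^{6} + r^{6}} = \frac{1}{1 + (r/R_0)^{6}}
```

At r = R_0 the efficiency is exactly 50%, so a thicker insulating shell that pushes acceptors beyond R_0 rapidly suppresses the transfer, consistent with the shell-thickness trend described above.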
Moving spiral wave chimeras
(2021)
We consider a two-dimensional array of heterogeneous, nonlocally coupled phase oscillators on a flat torus and study the bound states of two counter-rotating spiral chimeras, in short two-core spiral chimeras, observed in this system. In contrast to other known spiral chimeras with motionless incoherent cores, the two-core spiral chimeras typically show a drift motion. Due to this drift, their incoherent cores become spatially modulated and develop specific fingerprint patterns of varying synchrony levels. In the continuum limit of infinitely many oscillators, the two-core spiral chimeras can be studied using the Ott-Antonsen equation. Numerical analysis of this equation allows us to reveal the stability regions of different spiral chimeras, which we group into three main classes: symmetric, asymmetric, and meandering spiral chimeras.
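As a loose illustration of the model class (not the paper's two-dimensional torus system or its parameters), a ring of nonlocally coupled identical phase oscillators can be integrated directly. The exponential kernel, phase lag, and step size below are arbitrary illustrative choices; the local order parameter merely distinguishes more coherent from less coherent regions, the quantity in which chimera states are diagnosed.

```python
# Sketch: nonlocally coupled identical phase oscillators on a ring,
# a 1-D analogue of the 2-D torus system discussed above. Kernel
# shape, phase lag alpha, and step size are illustrative choices.
import numpy as np

N, alpha, dt = 64, 1.45, 0.05
rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 2.0 * np.pi, N)      # oscillator phases

idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, N - d)                      # distance on the ring
G = np.exp(-4.0 * d / N)                      # nonlocal coupling kernel
G /= G.sum(axis=1, keepdims=True)             # row-normalize the kernel

for _ in range(200):                          # explicit Euler steps
    phase_diff = theta[:, None] - theta[None, :] + alpha
    theta = (theta - dt * (G * np.sin(phase_diff)).sum(axis=1)) % (2 * np.pi)

# local order parameter: ~1 in coherent regions, < 1 in incoherent ones
R = np.abs((G * np.exp(1j * theta[None, :])).sum(axis=1))
```

The continuum-limit Ott-Antonsen analysis referenced above effectively evolves this local order parameter field directly instead of the individual phases.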
Stereoselective [4+2] Cycloaddition of Singlet Oxygen to Naphthalenes Controlled by Carbohydrates
(2021)
Stereoselective reactions of singlet oxygen are of current interest. Since enantioselective photooxygenations have not been realized efficiently, auxiliary control is an attractive alternative. However, the obtained peroxides are often too labile for isolation or further transformation into enantiomerically pure products. Herein, we describe the oxidation of naphthalenes by singlet oxygen, where the face selectivity is controlled by carbohydrates for the first time. The synthesis of the precursors is easily achieved starting from naphthoquinone and a protected glucose derivative in only two steps. Photooxygenations proceed smoothly at low temperature, and we detected the corresponding endoperoxides as sole products by NMR. They are labile and can thermally react back to the parent naphthalenes and singlet oxygen. However, we could isolate and characterize two enantiomerically pure peroxides, which are sufficiently stable at room temperature. An interesting influence of substituents on the stereoselectivities of the photooxygenations has been found, ranging from 51:49 up to 91:9 dr (diastereomeric ratio). We explain this by hindered rotation of the carbohydrate substituents, substantiated by a combination of NOESY measurements and theoretical calculations. Finally, we could transfer the chiral information from a pure endoperoxide to an epoxide, which was isolated in enantiomerically pure form after cleavage of the sugar chiral auxiliary.
Large-scale literature mining to assess the relation between anti-cancer drugs and cancer types
(2021)
Background:
There is a huge body of scientific literature describing the relation between tumor types and anti-cancer drugs. Its sheer volume makes it impossible for researchers and physicians to extract all relevant information manually.
Methods:
In order to cope with the large amount of literature, we applied an automated text-mining approach to assess the relations between the 30 most frequent cancer types and 270 anti-cancer drugs. We applied two different approaches: classical text mining based on named entity recognition, and an AI-based approach employing word embeddings. The consistency of the literature-mining results was validated with three independent methods: first, using data from FDA approvals; second, using experimentally measured IC50 cell-line data; and third, using clinical patient survival data.
Results:
We demonstrated that the automated text mining was able to successfully assess the relation between cancer types and anti-cancer drugs. All validation methods showed good correspondence between the results from literature mining and the independent confirmatory approaches. The relations between the most frequent cancer types and the drugs employed for their treatment were visualized in a large heatmap. All results are accessible in an interactive web-based knowledge base using the following link: .
Conclusions:
Our approach is able to assess the relations between compounds and cancer types in an automated manner. Both cancer types and compounds could be grouped into distinct clusters. Researchers can use the interactive knowledge base to inspect the presented results and pursue their own research questions, for example the identification of novel indication areas for known drugs.
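The word-embedding side of the approach can be reduced to a toy sketch: score a drug-cancer pair by the cosine similarity of their vectors. The three-dimensional vectors below are fabricated stand-ins; the actual method would use embeddings trained on the literature corpus, and the specific numbers here carry no biomedical meaning.

```python
# Toy sketch of embedding-based relation scoring: cosine similarity
# between fabricated word vectors. Real embeddings would be trained
# on the literature corpus; these numbers are meaningless stand-ins.
import numpy as np

embeddings = {
    "tamoxifen":     np.array([0.9, 0.1, 0.0]),
    "breast_cancer": np.array([0.8, 0.2, 0.1]),
    "melanoma":      np.array([0.1, 0.9, 0.3]),
}

def cosine(a, b):
    """Cosine similarity between two vectors, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

s_related = cosine(embeddings["tamoxifen"], embeddings["breast_cancer"])
s_unrelated = cosine(embeddings["tamoxifen"], embeddings["melanoma"])
print(f"related pair: {s_related:.2f}, unrelated pair: {s_unrelated:.2f}")
```

Ranking all drug-cancer pairs by such a score, and cross-checking the ranking against FDA approvals, IC50 data, and survival data, mirrors the validation strategy described above.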
Landesrecht Brandenburg
(2021)
This study book presents, in a clear and systematic form, the most important training-relevant areas of Brandenburg constitutional and administrative law. The authors cover the core fields relevant for the state examination and legal practice (constitutional law, administrative organization law, municipal law, police and regulatory law, and building regulations law), taking case law and the literature into account. Numerous examples aid understanding, and exam notes sharpen the eye for error-prone questions.
The rapid emergence of online targeted political advertising has raised concerns over data privacy and what the government's response should be. This paper tested and confirmed the hypothesis that public attitudes toward stricter regulation of online targeted political advertising are partially motivated by partisan self-interest. We conducted an experiment using an online survey of 1549 Americans who identify as either Democrats or Republicans. Our findings show that Democrats and Republicans believe that online targeted political advertising benefits the opposing party. This belief is based on their conviction that their political opponents are more likely to be mobilized by online targeted political advertising than are supporters of their own party. We exogenously manipulated partisan self-interest considerations of a random subset of participants by truthfully informing them that, in the past, online targeted political advertising has benefited Republicans. Our findings show that Republicans informed about this had less favorable attitudes toward regulation than did their uninformed co-partisans. This suggests that Republicans' attitudes regarding stricter regulation are not based solely on concerns about privacy violations but are also driven, in part, by beliefs about partisan advantage. The results imply that people are willing to accept violations of their privacy if their preferred party benefits from the use of online targeted political advertising.
Strafrecht Allgemeiner Teil
(2021)