(2021)
The goal of limiting global warming to well below 2°C, as set out in the Paris Agreement, calls for a strategic assessment of societal pathways and policy strategies. Besides policy makers, powerful new actors from the private sector, including finance, have stepped up to engage in forward-looking assessments of a Paris-compliant and climate-resilient future. Climate change scenarios have addressed this demand by providing scientific insights into the possible pathways ahead to limit warming in line with the Paris climate goal. Despite the increased interest, the potential of climate change scenarios has not been fully unleashed, mostly due to the lack of an intermediary service that provides guidance on, and access to, climate change scenarios. This perspective presents the concept of a climate change scenario service, its components, and a prototypical implementation to overcome this shortcoming, aiming to make scenarios accessible to a broader audience of societal actors and decision makers.
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based; thus, there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate in medical education and have been shown in a number of studies to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, and few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity while portraying a mental disorder, depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III have been accepted for publication, and Paper II is under review after a minor revision.
First, Paper I develops and validates an observer-based rating scale to assess SP authenticity in psychotherapeutic contexts. Based on the preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic after receiving an in-depth role script, compared with SPs who only receive basic information on the patient case. This assumption was tested in a randomised controlled study design, and the hypothesis was confirmed. Consequently, when engaging SPs, an in-depth role script should be provided, with details on, for example, the patient's nonverbal behaviour and feelings.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients and therefore suggests that, with proper training, SPs are a promising training method for psychotherapy.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a maximum a posteriori (MAP) estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager-Machlup (OM) functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the Gamma-convergence of OM functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.
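For the Gaussian case, the connection between the MAP estimator and the OM functional takes a standard form in the literature on Bayesian inverse problems (the paper's exact definitions may differ). With negative log-likelihood Φ(·; y) and a Gaussian prior N(0, C) with Cameron–Martin norm ‖·‖_C, the MAP estimator minimises:

```latex
% OM functional of the posterior for a Gaussian prior N(0, C):
% data misfit plus the Cameron--Martin norm of the prior.
I(u) \;=\; \Phi(u; y) \;+\; \tfrac{1}{2}\,\lVert u \rVert_{C}^{2},
\qquad
u_{\mathrm{MAP}} \;\in\; \operatorname*{arg\,min}_{u}\, I(u).
```

Seen this way, the MAP estimate coincides with a Tikhonov-regularised variational solution, which is the relationship whose stability under perturbations the Gamma-convergence theory addresses.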
We derive Onsager-Machlup functionals for countable product measures on weighted ℓ^p subspaces of the sequence space ℝ^ℕ. Each measure in the product is a shifted and scaled copy of a reference probability measure on ℝ that admits a sufficiently regular Lebesgue density. We study the equicoercivity and Gamma-convergence of sequences of Onsager-Machlup functionals associated to convergent sequences of measures within this class. We use these results to establish analogous results for probability measures on separable Banach or Hilbert spaces, including Gaussian, Cauchy, and Besov measures with summability parameter 1 ≤ p ≤ 2. Together with part I of this paper, this provides a basis for analysis of the convergence of maximum a posteriori estimators in Bayesian inverse problems and most likely paths in transition path theory.
Precipitation forecasting has an important place in everyday life – during the day we may have tens of small talks discussing the likelihood that it will rain this evening or weekend. Should you take an umbrella for a walk? Or should you invite your friends for a barbecue? It will certainly depend on what your weather application shows.
While for years people were guided by the precipitation forecasts issued for a particular region or city several times a day, the widespread availability of weather radars allowed us to obtain forecasts at much higher spatiotemporal resolution of minutes in time and hundreds of meters in space. Hence, radar-based precipitation nowcasting, that is, very-short-range forecasting (typically up to 1–3 h), has become an essential technique, also in various professional application contexts, e.g., early warning, sewage control, or agriculture.
There are two major components of a system for precipitation nowcasting: radar-based precipitation estimates, and models to extrapolate that precipitation into the imminent future. While acknowledging the fundamental importance of radar-based precipitation retrieval for precipitation nowcasts, this thesis focuses only on model development: the establishment of open and competitive benchmark models, the investigation of the potential of deep learning, and the development of procedures for the diagnosis and isolation of nowcast errors that can guide model development.
The present landscape of computational models for precipitation nowcasting still struggles with the availability of open software implementations that could serve as benchmarks for measuring progress. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. We distribute the corresponding set of models as a software library, rainymotion, which is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion). That way, the library acts as a tool for providing fast, open, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing.
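The tracking-plus-extrapolation idea behind such models can be sketched in a few lines of NumPy. The sketch below is purely illustrative and is not rainymotion's implementation: it estimates a single global displacement between two consecutive radar frames via FFT-based phase correlation and extrapolates by repeatedly applying that shift with periodic boundaries, whereas the actual library uses dense optical flow fields and proper image warping.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate a single integer displacement (dy, dx) between two
    consecutive radar frames via FFT-based phase correlation."""
    f1, f2 = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = np.conj(f1) * f2
    cross /= np.abs(cross) + 1e-12          # normalise to unit magnitude
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape                       # map peaks to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def nowcast(prev, curr, steps):
    """Lagrangian persistence: advect the latest frame by repeatedly
    applying the estimated displacement (periodic boundaries)."""
    dy, dx = estimate_shift(prev, curr)
    frames, field = [], curr
    for _ in range(steps):
        field = np.roll(field, shift=(dy, dx), axis=(0, 1))
        frames.append(field)
    return frames
```

Replacing the single global vector with a dense motion field, and the integer roll with sub-pixel warping, is essentially what distinguishes the operational optical-flow models from this toy version.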
One of the promising directions for model development is to challenge the potential of deep learning – a subfield of machine learning that refers to artificial neural networks with deep architectures, which may consist of many computational layers. Deep learning showed promising results in many fields of computer science, such as image and speech recognition, or natural language processing, where it started to dramatically outperform reference methods.
The great benefit of using "big data" for training is among the main reasons for this success. Accordingly, the emerging interest in deep learning in the atmospheric sciences has grown in concert with the increasing availability of data, both observational and model-based. The large archives of weather radar data provide a solid basis for investigating the potential of deep learning in precipitation nowcasting: one year of national 5-min composites for Germany comprises around 85 billion data points.
To this aim, we present RainNet, a deep convolutional neural network for radar-based precipitation nowcasting. RainNet was trained to predict continuous precipitation intensities at a lead time of 5 min, using several years of quality-controlled weather radar composites provided by the German Weather Service (DWD). That data set covers Germany with a spatial domain of 900 km x 900 km and has a resolution of 1 km in space and 5 min in time. Independent verification experiments were carried out on 11 summer precipitation events from 2016 to 2017. In these experiments, RainNet was applied recursively in order to achieve lead times of up to 1 h. In the verification experiments, trivial Eulerian persistence and a conventional model based on optical flow served as benchmarks. The latter is available in the previously developed rainymotion library.
RainNet significantly outperformed the benchmark models at all lead times up to 60 min for the routine verification metrics mean absolute error (MAE) and critical success index (CSI) at intensity thresholds of 0.125, 1, and 5 mm/h. However, rainymotion turned out to be superior in predicting the exceedance of higher intensity thresholds (here 10 and 15 mm/h). The limited ability of RainNet to predict high rainfall intensities is an undesirable property which we attribute to a high level of spatial smoothing introduced by the model. At a lead time of 5 min, an analysis of power spectral density confirmed a significant loss of spectral power at length scales of 16 km and below.
Obviously, RainNet had learned an optimal level of smoothing to produce a nowcast at 5 min lead time. In that sense, the loss of spectral power at small scales is informative, too, as it reflects the limits of predictability as a function of spatial scale. Beyond the lead time of 5 min, however, the increasing level of smoothing is a mere artifact – an analogue to numerical diffusion – that is not a property of RainNet itself but of its recursive application. In the context of early warning, the smoothing is particularly unfavorable since pronounced features of intense precipitation tend to get lost over longer lead times. Hence, we propose several options to address this issue in prospective research on model development for precipitation nowcasting, including an adjustment of the loss function for model training, model training for longer lead times, and the prediction of threshold exceedance.
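The routine verification metrics used above, MAE and the threshold-based critical success index, are compact to define. The following is a minimal sketch of their standard definitions, not the exact verification code used in the thesis:

```python
import numpy as np

def mae(obs, pred):
    """Mean absolute error over all pixels (same units as the input)."""
    return np.mean(np.abs(obs - pred))

def csi(obs, pred, threshold):
    """Critical success index: hits / (hits + misses + false alarms),
    based on exceedance of an intensity threshold (e.g. in mm/h)."""
    o = obs >= threshold
    p = pred >= threshold
    hits = np.sum(o & p)
    misses = np.sum(o & ~p)
    false_alarms = np.sum(~o & p)
    denom = hits + misses + false_alarms
    return hits / denom if denom > 0 else np.nan
```

Because CSI is computed per threshold, a model can score well at 1 mm/h yet poorly at 10 mm/h, which is exactly the pattern observed for a spatially smoothing model such as RainNet.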
The model development, together with the verification experiments for both conventional and deep learning model predictions, also revealed the need to better understand the sources of forecast errors. Understanding the dominant sources of error in specific situations should help guide further model improvement. The total error of a precipitation nowcast consists of an error in the predicted location of a precipitation feature and an error in the change of precipitation intensity over lead time. So far, verification measures have not allowed the location error to be isolated, making it difficult to specifically improve nowcast models with regard to location prediction.
To fill this gap, we introduced a framework to directly quantify the location error. To that end, we detect and track scale-invariant precipitation features (corners) in radar images. We then consider these observed tracks as the true reference in order to evaluate the performance (or, inversely, the error) of any model that aims to predict the future location of a precipitation feature. Hence, the location error of a forecast at any lead time ahead of the forecast time corresponds to the Euclidean distance between the observed and the predicted feature location at the corresponding lead time.
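The core quantities of this framework are simple to state. Below is a minimal illustrative sketch, not the thesis's implementation, of linear extrapolation of a tracked feature and the resulting per-lead-time location error:

```python
import numpy as np

def linear_extrapolation(track, steps):
    """Predict future feature positions by persisting the last observed
    displacement of a tracked feature (corner); track is a sequence of
    (x, y) positions, e.g. in km."""
    track = np.asarray(track, dtype=float)
    velocity = track[-1] - track[-2]
    return np.array([track[-1] + (i + 1) * velocity for i in range(steps)])

def location_error(obs_track, pred_track):
    """Euclidean distance between observed and predicted feature
    positions at each lead time."""
    obs = np.asarray(obs_track, dtype=float)
    pred = np.asarray(pred_track, dtype=float)
    return np.linalg.norm(obs - pred, axis=1)
```

The observed track serves as the reference, so the same `location_error` can be applied to any competing extrapolation model, which is what makes the metric model-agnostic.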
Based on this framework, we carried out a benchmarking case study using one year's worth of weather radar composites of the DWD. We evaluated the performance of four extrapolation models, two of which are based on the linear extrapolation of corner motion, while the remaining two are based on the Dense Inverse Search (DIS) method: motion vectors obtained from DIS are used to predict feature locations by linear and Semi-Lagrangian extrapolation.
For all competing models, the mean location error exceeds a distance of 5 km after 60 min, and 10 km after 110 min. At least 25% of all forecasts exceed an error of 5 km after 50 min, and of 10 km after 90 min. Even for the best models in our experiment, at least 5% of the forecasts will have a location error of more than 10 km after 45 min. When we relate such errors to application scenarios that are typically suggested for precipitation nowcasting, e.g., early warning, it becomes obvious that location errors matter: the order of magnitude of these errors is about the same as the typical extent of a convective cell. Hence, the uncertainty of precipitation nowcasts at such length scales, solely as a result of location errors, can be substantial already at lead times of less than 1 h. Being able to quantify the location error should hence guide any model development that is targeted towards its minimization. To that end, we also see high potential in deep learning architectures designed for the assimilation of sequential (track) data.
Last but not least, the thesis demonstrates the benefits of a general movement towards open science for model development in the field of precipitation nowcasting. All the presented models and frameworks are distributed as open repositories, thus enhancing transparency and reproducibility of the methodological approach. Furthermore, they are readily available to be used for further research studies, as well as for practical applications.
For around a decade, deep learning, the subfield of machine learning that refers to artificial neural networks comprising many computational layers, has been reshaping the landscape of statistical model development in many research areas, such as image classification, machine translation, and speech recognition. Geoscientific disciplines in general, and the field of hydrology in particular, have not stood aside from this movement. Recently, modern deep learning-based techniques and methods have been gaining popularity for solving a wide range of hydrological problems: modeling and forecasting of river runoff, regionalization of hydrological model parameters, assessment of available water resources, and identification of the main drivers of recent changes in water balance components. This growing popularity of deep neural networks is primarily due to their high universality and efficiency. These qualities, together with the rapidly growing amount of accumulated environmental information and the increasing availability of computing facilities and resources, allow us to speak of deep neural networks as a new generation of mathematical models designed, if not to replace existing solutions, then to significantly enrich the field of geophysical process modeling. This paper provides a brief overview of the current state of the development and application of deep neural networks in hydrology. The study also provides a qualitative long-term forecast of how deep learning technology may develop to address the corresponding hydrological modeling challenges, based on the Gartner Hype Cycle, which describes in general terms the life cycle of modern technologies.
We systematically explore the effect of calibration data length on the performance of a conceptual hydrological model, GR4H, in comparison to two Artificial Neural Network (ANN) architectures: Long Short-Term Memory networks (LSTM) and Gated Recurrent Units (GRU), which have only recently been introduced to the field of hydrology. We implemented a case study for six river basins across the contiguous United States, with 25 years of meteorological and discharge data. Nine years were reserved for independent validation; two years were used as a warm-up period, one year for each of the calibration and validation periods, respectively; from the remaining 14 years, we sampled increasing amounts of data for model calibration, and found pronounced differences in model performance. While GR4H required less data to converge, LSTM and GRU caught up at a remarkable rate, considering their number of parameters. However, LSTM and GRU exhibited higher calibration instability than GR4H. These findings confirm the potential of modern deep-learning architectures in rainfall-runoff modelling, but also highlight the noticeable differences between them with regard to the effect of calibration data length.
Donors of development assistance for health typically provide funding for a range of disease focus areas, such as maternal and child health, malaria, HIV/AIDS, and other infectious diseases. But funding for each disease category does not closely match its contribution to disability and loss of life or the cost-effectiveness of interventions. We argue that peer influences in the social construction of global health priorities contribute to explaining this misalignment. Aid policy-makers are embedded in a social environment encompassing other donors, health experts, advocacy groups, and international officials. This social environment influences the conceptual and normative frameworks of decision-makers, which in turn affect their funding priorities. Aid policy-makers are especially likely to emulate decisions on funding priorities taken by peers with whom they are most closely involved in expert and advocacy networks. We draw on novel data on donor connectivity through health IGOs and health INGOs and assess the argument by applying spatial regression models to health aid disbursed globally between 1990 and 2017. The analysis provides strong empirical support for our argument that involvement in overlapping expert and advocacy networks shapes funding priorities regarding disease categories and recipient countries in health aid.
The paper introduces the principle Maximise Presupposition and its cognates. The main focus of the literature and this article is on the inferences that arise as a result of reasoning with Maximise Presupposition ('anti-presuppositions'). I will review the arguments put forward for distinguishing them from other inference types, most notably presuppositions and conversational implicatures. I will zoom in on three main issues regarding Maximise Presupposition and these inferences critically discussed in the literature: epistemic strength(ening), projection, and the role of alternatives. I will discuss more recent views which argue for either a uniform treatment of anti-presuppositions and implicatures and/or a revision of the original principle in light of new data and developments in pragmatics.
Internships during tertiary education have become substantially more common over the past decades in many industrialised countries. This study examines the impact of a voluntary intra-curricular internship experience during university studies on the probability of being invited to a job interview. To estimate a causal relationship, we conducted a randomised field experiment in which we sent 1248 fictitious, but realistic, resumes to real job openings. We find that applicants with internship experience have, on average, a 12.6% higher probability of being invited to a job interview.
The trace elements zinc and manganese are essential for human health, especially due to their enzymatic and protein-stabilizing functions. If these elements are ingested in amounts exceeding the requirements, the regulatory processes maintaining their physiological concentrations (homeostasis) can be disturbed. Such homeostatic dysregulation can cause severe health effects, including the emergence of neurodegenerative disorders such as Parkinson's disease (PD). The concentrations of essential trace elements also change during the aging process. However, the relations of cause and consequence between increased manganese and zinc uptake and its influence on the aging process and the emergence of aging-associated PD are still poorly understood. This doctoral thesis therefore aimed to investigate the influence of a nutritive zinc and/or manganese oversupply on metal homeostasis during the aging process. For this, the model organism Caenorhabditis elegans (C. elegans) was used. This nematode is well suited as an aging and PD model due to properties such as its short life cycle and its completely sequenced, genetically amenable genome. Different protocols for the propagation of zinc- and/or manganese-supplemented young, middle-aged, and aged C. elegans were established, using wildtypes as well as genetically modified worm strains modeling heritable forms of parkinsonism. To identify homeostatic and neurological alterations, the nematodes were investigated with different methods, including the analysis of total metal contents via inductively coupled plasma tandem mass spectrometry, a specific probe-based method for quantifying labile zinc, survival assays, gene expression analysis, and fluorescence microscopy for the identification and quantification of dopaminergic neurodegeneration. During aging, the levels of iron, as well as zinc and manganese, increased.
Furthermore, the simultaneous oversupply with zinc and manganese increased the total zinc and manganese contents to a greater extent than the single-metal supplementation. In this context, the C. elegans metallothionein 1 (MTL-1) was identified as an important regulator of metal homeostasis. The total zinc content and the concentration of labile zinc were both age-dependently regulated, but in different ways. This underscores the importance of distinguishing these parameters as two independent biomarkers of zinc status. It was not the metal oversupply but aging that increased the levels of dopaminergic neurodegeneration. Additionally, nearly all of these results revealed differences in the aging-dependent regulation of trace element homeostasis between wildtypes and PD models. This confirms that an increased zinc and manganese intake can influence the aging process as well as parkinsonism by altering homeostasis, although the underlying mechanisms need to be clarified in further studies.
Manganese (Mn) and zinc (Zn) are not only essential trace elements, but also potential exogenous risk factors for various diseases. Since the disturbed homeostasis of single metals can result in detrimental health effects, concerns have emerged regarding the consequences of excessive exposures to multiple metals, either via nutritional supplementation or parenteral nutrition. This study focuses on Mn-Zn-interactions in the nematode Caenorhabditis elegans (C. elegans) model, taking into account aspects related to aging and age-dependent neurodegeneration.
The central element of this work is the synthesis and characterization of practically usable ionogels. The basis of the polymer ionogels is the model polymer poly(methyl methacrylate). Ionic liquids derived from the widely used imidazolium cation serve as additives. The properties of the embedded ionic liquids define the functionality of the ionogels. This transfer of properties from the ionic liquids to the ionogels, and thus the functionality of the respective gels, was examined and confirmed in this work using numerous characterization techniques. Through ionogel formation, macroscopic ionogel objects in the form of films and nonwoven mats were produced, using film casting and electrospinning as fabrication methods, each resulting in a model system. The work is thus divided into the thematic areas of "electrically semiconducting ionogel films" and "antimicrobially active ionogel mats". The use of triiodide-containing ionic liquids and a polymer matrix in a discontinuous casting process results in electrically semiconducting ionogel films. These flexible and transparent films may open up numerous new fields of application in flexible electronics. Electrospinning poly(methyl methacrylate) with an ionic liquid yielded a homogeneous ionogel mat, which serves as a model for transferring the antimicrobial properties of ionic liquids to porous structures for filtration. At the same time, it is the first example of a copper chloride-containing ionogel. Ionogels are attractive materials with numerous possible applications; this work extends their spectrum by an electrically semiconducting and an antimicrobially active ionogel.
At the same time, this work added to the class of ionic liquids three examples of electrically semiconducting ionic liquids as well as numerous copper(II) chloride-based ionic liquids.
“Embodied Practices – Looking From Small Places” is an edited transcript of a conversation between theatre and performance scholar Sruti Bala (University of Amsterdam) and sociologist, criminologist and anthropologist Dylan Kerrigan (University of Leicester) that took place as an online event in November 2020. Throughout their talk, Bala and Kerrigan engage with the legacy of Haitian anthropologist Michel-Rolph Trouillot. Specifically, they focus on his approach of looking from small units, such as small villages in Dominica, outwards to larger political structures such as global capitalism, social inequalities and the distribution of power. They also share insights from their own research on embodied practices in the Caribbean, Europe and India and answer questions such as: What can research on and through embodied practices tell us about systems of power and domination that move between the local and the global? How can performance practices which are informed by multiple locations and cultures be read and appreciated adequately? Sharing insights from his research into Guyanese prisons, Kerrigan outlines how he aims to connect everyday experiences and struggles of Caribbean people to trans-historical and transnational processes such as racial capitalism and post/coloniality. Furthermore, he elaborates on how he uses performance practices such as spoken word poetry and data verbalisation to connect with systematically excluded groups. Bala challenges naïve notions about the inherent transformative potential of performance in her research on performance and translation. She points to the way in which performance and its reception is always already inscribed in what she calls global or planetary asymmetries. At the conclusion of this conversation, they broach the question: are small places truly as small as they seem?
Label-free optical sensors are attractive candidates, for example, for detecting toxic substances and monitoring biomolecular interactions. Their performance can be pushed by sensor design through clever material choices and the integration of components. In this work, two porous materials, namely porous silicon and plasmonic nanohole arrays, are combined in order to obtain increased sensitivity and dual-mode sensing capabilities. For this purpose, porous silicon monolayers are prepared by electrochemical etching and plasmonic nanohole arrays are obtained using a bottom-up strategy. Hybrid sensors of these two materials are realized by transferring the plasmonic nanohole array on top of the porous silicon. Reflectance spectra of the hybrid sensors are characterized by a fringe pattern resulting from the Fabry–Pérot interference at the porous silicon borders, which is overlaid with a broad dip based on surface plasmon resonance in the plasmonic nanohole array. In addition, the hybrid sensor shows significantly higher reflectance in comparison to the porous silicon monolayer. The sensitivities of the hybrid sensor to refractive index changes are determined separately for both components. A significant increase in sensitivity from 213 ± 12 to 386 ± 5 nm/RIU is determined for the transfer of the plasmonic nanohole array sensors from solid glass substrates to porous silicon monolayers. In contrast, the spectral position of the interference pattern of porous silicon monolayers in different media is not affected by the presence of the plasmonic nanohole array. However, the changes in fringe pattern reflectance of the hybrid sensor are increased 3.7-fold after being covered with plasmonic nanohole arrays and could be used for high-sensitivity sensing. Finally, the capability of the hybrid sensor for simultaneous and independent dual-mode sensing is demonstrated.
The ecological and social problems of the present demand profound changes in industrial production and value-creation processes as well as in private consumption styles. This book addresses both sides of the coin: it examines both the contributions companies can make to a socially just and ecologically sound future through sustainable management, and the possibilities consumers have to contribute to a liveable future through their consumption decisions. Each chapter opens with a statement of learning objectives and closes with a learning-progress check. Numerous insights into practice support understanding, and up-to-date links to the websites of companies and institutions round off the book. The book is aimed in particular at students of economics and business, but also at anyone interested in this topic. In short: this compact and accessible introduction builds a deeper understanding of the link between sustainable management and consumer behaviour.
Choice-Based Conjoint Analysis
(2021)
Choice-based conjoint analysis (CBC) is currently the most popular variant of conjoint analysis. One reason is the ready availability of user-friendly software (e.g., R, Sawtooth Software); another is that, owing to its special position, the method also has methodological and practical strengths. In contrast to rating-based conjoint analysis, a CBC collects and analyses discrete choices made by respondents rather than preference judgements. Strictly speaking, CBC is therefore a discrete choice analysis (DCA) applied to a conjoint survey design. Both terms remain in use. This chapter discusses the fundamentals of the method and illustrates it with an application example.
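The discrete choices collected in a CBC are typically analysed with a multinomial logit model, which maps the summed part-worth utilities of each alternative to choice probabilities. A minimal sketch with purely illustrative part-worth values (not an example from the chapter):

```python
import math

def logit_choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j)."""
    exp_u = [math.exp(u) for u in utilities]
    total = sum(exp_u)
    return [e / total for e in exp_u]

# Hypothetical part-worth utilities for three alternatives in a choice set,
# each the sum of its attribute-level part-worths (illustrative values only).
alternatives = {
    "brand A, low price":  1.2 + 0.8,
    "brand A, high price": 1.2 - 0.5,
    "brand B, low price":  0.4 + 0.8,
}
probabilities = logit_choice_probabilities(list(alternatives.values()))
for name, p in zip(alternatives, probabilities):
    print(f"{name}: {p:.2f}")
```

In estimation, the part-worths are of course the unknowns; they are fitted so that the predicted probabilities best match the observed choices.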
Less is more!
(2021)
Enhancing consumer satisfaction and well-being is an important objective of companies, retailers and public policy makers. In the current debate on climate change, a consistent theme is that consumers in developed countries must learn to consume less. The present study (based on representative data sets from the US, N = 1,017, and Germany, N = 1,030) addresses these issues by using a scenario-based experiment to analyze how satisfied voluntary simplifiers (people who voluntarily abstain from consumption) are with their purchase decisions in the case of a muesli brand. The research question is whether people who follow a sustainable, simple lifestyle are more satisfied with their daily consumption choices than people who have a more consumerist lifestyle. If so, it would be easier for many people to change their lifestyles and consume less. In addition, this scenario experiment manipulates consumer empowerment and decision complexity, since both factors are supposed to influence purchase satisfaction. The results are consistent across both countries and indicate that voluntary simplifiers experience a higher level of purchasing satisfaction than non-simplifiers, with empowerment and decision complexity playing different roles.
Metal sulfide nanoparticle synthesis with ionic liquids: state of the art and future perspectives
(2021)
Metal sulfides are among the most promising materials for a wide variety of technologically relevant applications ranging from energy to environment and beyond. Incidentally, ionic liquids (ILs) have been among the top research subjects for the same applications and also for inorganic materials synthesis. As a result, the exploitation of the peculiar properties of ILs for metal sulfide synthesis could provide attractive new avenues for the generation of new, highly specific metal sulfides for numerous applications. This article therefore describes current developments in metal sulfide nanoparticle synthesis as exemplified by a number of highlight examples. Moreover, the article demonstrates how ILs have been used in metal sulfide synthesis and discusses the benefits of using ILs over more traditional approaches. Finally, the article highlights some technological challenges and how ILs could be used to further advance the production and specific property engineering of metal sulfide nanomaterials, again based on a number of selected examples.
Enzymes can support the synthesis or degradation of biomacromolecules in natural processes. Here, we demonstrate that enzymes can induce a macroscopic-directed movement of microstructured hydrogels following a mechanism that we call a "Jack-in-the-box" effect. The material's design is based on the formation of internal stresses induced by a deformation load on an architectured microscale, which are kinetically frozen by the generation of polyester locking domains, similar to a Jack-in-the-box toy (i.e., a compressed spring stabilized by a closed box lid). To induce the controlled macroscopic movement, the locking domains are equipped with enzyme-specific cleavable bonds (i.e., a box with a lock and key system). As a result of enzymatic reaction, a transformed shape is achieved by the release of internal stresses. There is an increase in entropy in combination with a swelling-supported stretching of polymer chains within the microarchitectured hydrogel (i.e., the encased clown pops up with a pre-stressed movement when the box is unlocked). This utilization of an enzyme as a physiological stimulus may offer new approaches to create interactive and enzyme-specific materials for different applications such as an optical indicator of the enzyme's presence or actuators and sensors in biotechnology and in fermentation processes.
The noble way to substantiate decisions that affect many people is to ask these people for their opinions. For governments that run whole countries, this means asking all citizens for their views to consider their situations and needs.
Organizations such as Africa's Voices Foundation, which want to facilitate communication between decision-makers and citizens of a country, have difficulty mediating between these groups. To enable understanding, statements need to be summarized and visualized. Accomplishing these goals in a way that does justice to the citizens' voices and situations proves challenging. Standard charts do not help this cause as they fail to create empathy for the people behind their graphical abstractions. Furthermore, these charts do not create trust in the data they are representing as there is no way to see or navigate back to the underlying code and the original data. To fulfill these functions, visualizations would highly benefit from interactions to explore the displayed data, which standard charts often provide only to a limited extent.
To help improve the understanding of people's voices, we developed and categorized 80 ideas for new visualizations, new interactions, and better connections between different charts, which we present in this report. From those ideas, we implemented 10 prototypes and two systems that integrate different visualizations. We show that this integration allows consistent appearance and behavior of visualizations. The visualizations all share the same main concept: representing each individual with a single dot. To realize this idea, we discuss technologies that efficiently allow the rendering of a large number of these dots. With these visualizations, direct interactions with representations of individuals are achievable by clicking on them or by dragging a selection around them. This direct interaction is only possible with a bidirectional connection from the visualization to the data it displays. We discuss different strategies for bidirectional mappings and the trade-offs involved. Having unified behavior across visualizations enhances exploration. For our prototypes, that includes grouping, filtering, highlighting, and coloring of dots. Our prototyping work was enabled by the development environment Lively4. We explain which parts of Lively4 facilitated our prototyping process. Finally, we evaluate our approach to domain problems and our developed visualization concepts.
Our work provides inspiration and a starting point for visualization development in this domain. Our visualizations can improve communication between citizens and their government and motivate empathetic decisions. Our approach, combining low-level entities to create visualizations, provides value to an explorative and empathetic workflow. We show that the design space for visualizing this kind of data has a lot of potential and that it is possible to combine qualitative and quantitative approaches to data analysis.
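The bidirectional connection between dots and data described above can be illustrated with a minimal sketch (our own illustration in Python, not code from the report; the actual prototypes were built in the Lively4 environment):

```python
class DotMapping:
    """Two-way link between rendered dots (by index) and the data records
    they represent, so that clicking a dot resolves its record and
    selecting a record resolves its dot for highlighting."""

    def __init__(self, records):
        self.records = list(records)  # dot index -> record
        # record identity -> dot index (records may be unhashable dicts)
        self._index_of = {id(r): i for i, r in enumerate(self.records)}

    def record_at(self, dot_index):
        return self.records[dot_index]

    def dot_for(self, record):
        return self._index_of[id(record)]

# Usage: two voices, each represented by one dot
voices = [{"name": "A", "view": "pro"}, {"name": "B", "view": "contra"}]
mapping = DotMapping(voices)
print(mapping.record_at(1)["name"])  # resolve dot 1 back to its record
print(mapping.dot_for(voices[0]))    # resolve a record to its dot index
```

The trade-off mentioned in the report applies here as well: an identity-based map is cheap to build but must be rebuilt whenever records are added or reordered.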
In this short survey article, we showcase a number of non-trivial geometric problems that have recently been resolved by marrying methods from functional calculus and real-variable harmonic analysis. We give a brief description of these methods as well as their interplay. This is a succinct survey that hopes to inspire geometers and analysts alike to study these methods so that they can be further developed to be potentially applied to a broader range of questions.
On the abolition of the review procedure (Gutachterverfahren) in contract psychotherapy – a loss of quality?
(2021)
Objective: This article addresses the question of the extent to which the review procedure in contract psychotherapy constitutes a reliable quality instrument and whether the planned abolition of the review procedure entails a risk of reduced quality in outpatient psychotherapy.
Methods: A literature search was conducted. Publications from 2000 to 2020 that deal with the review procedure as a quality criterion of outpatient psychotherapy were considered. Further literature was also consulted in order to discuss the differing positions of the cited authors.
Results: Empirically, the review procedure does not appear to be a reliable quality criterion of outpatient psychotherapy. Overall, the studies summarized here do not support the assumption that contract psychotherapy without the review procedure would lead to a reduction in the quality of psychotherapy.
The Eastern Mediterranean is the most seismically active region in Europe due to the complex interactions of the Arabian, African, and Eurasian tectonic plates. Deformation is achieved by faulting in the brittle crust, distributed flow in the viscoelastic lower-crust and mantle, and Hellenic subduction, but the long-term partitioning of these mechanisms is still unknown. We exploit an extensive suite of geodetic observations to build a kinematic model connecting strike-slip deformation, extension, subduction, and shear localization across Anatolia and the Aegean Sea by mapping the distribution of slip and strain accumulation on major active geological structures. We find that tectonic escape is facilitated by a plate-boundary-like, translithospheric shear zone extending from the Gulf of Evia to the Turkish-Iranian Plateau that underlies the surface trace of the North Anatolian Fault. Additional deformation in Anatolia is taken up by a series of smaller-scale conjugate shear zones that reach the upper mantle, the largest of which is located beneath the East Anatolian Fault. Rapid north-south extension in the western part of the system, driven primarily by Hellenic Trench retreat, is accommodated by rotation and broadening of the North Anatolian mantle shear zone from the Sea of Marmara across the north Aegean Sea, and by a system of distributed transform faults and rifts including the rapidly extending Gulf of Corinth in central Greece and the active grabens of western Turkey. Africa-Eurasia convergence along the Hellenic Arc occurs at a median rate of 49.8 mm/yr in a largely trench-normal direction except near eastern Crete where variably oriented slip on the megathrust coincides with mixed-mode and strike-slip deformation in the overlying accretionary wedge near the Ptolemy-Pliny-Strabo trenches.
Our kinematic model illustrates the competing roles the North Anatolian mantle shear zone, Hellenic Trench, overlying mantle wedge, and active crustal faults play in accommodating tectonic indentation, slab rollback and associated Aegean extension. Viscoelastic flow in the lower crust and upper mantle dominates the surface velocity field across much of Anatolia, and a clear transition to megathrust-related slab pull occurs in western Turkey, the Aegean Sea and Greece. Crustal-scale faults and the Hellenic wedge contribute only a minor amount to the large-scale, regional pattern of Eastern Mediterranean interseismic surface deformation.
Cyanobacteria are an abundant bacterial group found in a variety of ecological niches all around the globe. When they form blooms, they can pose a real threat to fish and mammals and can restrict the use of lakes or rivers for recreational purposes or as a source of drinking water. One of the most abundant bloom-forming cyanobacteria is Microcystis aeruginosa.
In the first part of the study, the role and possible dynamics of RubisCO in M. aeruginosa during high-light irradiation were examined. Its response was analyzed on the protein and peptide level via immunoblotting, immunofluorescence microscopy and high performance liquid chromatography (HPLC). It was revealed that large amounts of RubisCO were located outside of carboxysomes under the applied high-light stress. RubisCO aggregated mainly underneath the cytoplasmic membrane, where it formed a putative Calvin-Benson-Bassham (CBB) super complex together with other enzymes of photosynthesis. This complex could be part of an alternative carbon-concentrating mechanism (CCM) in M. aeruginosa, which enables a faster and more energy-saving adaptation of the whole bloom to high-light stress.
Furthermore, the re-localization of RubisCO was delayed in the microcystin-deficient mutant ΔmcyB, and RubisCO was more evenly distributed over the cell in comparison to the wild type. Since growth of ΔmcyB is not impaired, other cyanopeptides such as aeruginosin or cyanopeptolin may also play a role in the stabilization of RubisCO and the putative CBB complex, especially in the microcystin-free mutant.
In the second part of this work, the possible role of microcystin as an extracellular signaling peptide during the diurnal cycle was studied. HPLC analysis showed a strong increase of extracellular microcystin in the wild type when the population entered nighttime, and this continued into the next day. Together with the increase of extracellular microcystin, a strong decrease of protein-bound intracellular microcystin was observed via immunoblot analysis. Interestingly, the signal of the large subunit of RubisCO (RbcL) also diminished when high amounts of microcystin were present in the surrounding medium. Microcystin addition experiments with M. aeruginosa WT and ΔmcyB cultures support this observation, since the immunoblot signal of both subunits of RubisCO and of CcmK, a shell protein of carboxysomes, diminished after the addition of microcystin. In addition, the fluctuation of cyanopeptolin during the diurnal cycle indicates a more prominent role of other cyanopeptides besides microcystin as signaling peptides, intracellularly as well as extracellularly.
Frailty assessment is recommended before elective transcatheter aortic valve implantation (TAVI) to determine post-interventional prognosis. Several studies have investigated frailty in TAVI patients using numerous assessments; however, it remains unclear which is the most appropriate tool for clinical practice. Therefore, we evaluated which frailty assessments are most commonly used and meaningful for ≤30-day and ≥1-year prognosis in TAVI patients. Randomized controlled or observational studies (prospective/retrospective) investigating all-cause mortality in older (≥70 years) TAVI patients were identified (PubMed; May 2020). In total, 79 studies investigating frailty with 49 different assessments were included. As single markers of frailty, gait speed (23 studies) and serum albumin (16 studies) were used most often. Higher risk of 1-year mortality was predicted by slower gait speed (highest hazard ratio (HR): 14.71; 95% confidence interval (CI) 6.50–33.30) and lower serum albumin level (highest HR: 3.12; 95% CI 1.80–5.42). Composite indices (five items; seven studies) were associated with 30-day (highest odds ratio (OR): 15.30; 95% CI 2.71–86.10) and 1-year mortality (highest OR: 2.75; 95% CI 1.55–4.87). In conclusion, single markers of frailty, in particular gait speed, were widely used to predict 1-year mortality. Composite indices were appropriate, as was a comprehensive assessment of frailty.
Pivots revisited
(2021)
The term "pivot" usually refers to two overlapping syntactic units such that the completion of the first unit simultaneously launches the second. In addition, pivots are generally said to be characterized by the smooth prosodic integration of their syntactic parts. This prosodic integration is typically achieved by prosodic-phonetic matching of the pivot components. As research on such turns in a range of languages has illustrated, speakers routinely deploy pivots so as to be able to continue past a point of possible turn completion, in the service of implementing some additional or revised action. This article seeks to build on, and complement, earlier research by exploring two issues in more detail: (1) what exactly do pivotal turn extensions accomplish on the action dimension, and (2) what role does prosodic-phonetic packaging play in this? We will show that pivot constructions not only exhibit various degrees of prosodic-phonetic (non-)integration, i.e., cesuras of different strengths, but that they can be ordered on a continuum, and that this cline maps onto the relationship of the actions accomplished by the components of the pivot construction. While tighter prosodic-phonetic integration, i.e., weak(er) cesuring, co-occurs with post-pivot actions whose relationship to that of the pre-pivot tends to be rather retrospective in character, looser prosodic-phonetic integration, i.e., strong(er) cesuring, is associated with a more prospective orientation of the post-pivot's action. These observations also raise more general questions with regard to the analysis of action.
“Chunking” spoken language
(2021)
In this introductory paper to the special issue on “Weak cesuras in talk-in-interaction”, we aim to guide the reader into current work on the “chunking” of naturally occurring talk. It is conducted in the methodological frameworks of Conversation Analysis and Interactional Linguistics – two approaches that consider the interactional aspect of humans talking with each other to be a crucial starting point for its analysis. In doing so, we will (1) lay out the background of this special issue (what is problematic about “chunking” talk-in-interaction, the characteristics of the methodological approach chosen by the contributors, the cesura model), (2) highlight what can be gained from such a revised understanding of “chunking” in talk-in-interaction by referring to previous work with this model as well as the findings of the contributions to this special issue, and (3) indicate further directions such work could take starting from papers in this special issue. We hope to induce a fruitful exchange on the phenomena discussed, across methodological divides.
This article discusses the so-called 'Apocalypse' of Carour, a text preserved in a codex (M586) of the famous Hamuli find that originally emanated from the environment of the Pachomian monastic enterprise. Through metaphorical imagery and similes, it addresses a series of disasters and communal deficiencies that struck the community after the death of its founding father Pachomios. After presenting a few conjectures to the editio princeps and providing a German translation, the 'Apocalypse' is contextualized within the historical and liturgical background of this late antique monastic community. The author asserts that this unique text not only displays the symptoms of disaster, but also gives us new insights into how the Pachomians productively coped with crises. In contrast to modern scholarship, the author argues that the 'Apocalypse' is in fact a prophecy (ex eventu) that was based on an instruction, which was publicly read at the large Easter assembly of the Pachomians, most likely by Horsiesos, the third abbot of the Koinonia. Using the figure of the frog, C(h)arour, to symbolize the biblical plague but also the Egyptian concept of rebirth, the instruction was intended to strengthen group cohesion and especially to prepare the novices who were about to receive their baptism during the Easter celebration for a life devoted to the Koinonia and its principles. To this initial prophecy, which developed an antithesis to the ideal monastic life envisioned by the Pachomians, another text was later added that narrated an unsuccessful attempt to overthrow Apa Besarion, the fifth abbot of the Koinonia. In a much more practical manner this second part of the prophecy elaborated on the same themes while also displaying the resilience of the community in averting crises through remembering and recommitting to its founding precepts.
The convoluted text we possess today should therefore be viewed as a testament both to the communication structures of the Pachomians and to their memorial culture, which targeted moments of crisis and despair to imbue future generations with the necessary persistence to overcome possible disasters themselves and to secure the long-term existence of the Koinonia.
The life cycle of higher plants consists of recurring phases of growth and development, driven by repetitive sequences of cell division, cell expansion and cell differentiation. This dissertation covers two projects, each investigating a different topic related to cell expansion. The first project examines an Arabidopsis thaliana mutant exhibiting overall cell enlargement, and the second project analyses two naturally occurring floral morphs of Amsinckia spectabilis (Boraginaceae) that differ (amongst others) in style length and anther height due to differences in longitudinal cell elongation. The EMS mutant eop1 was shown to exhibit a petal size increase of 26% caused by cell enlargement. Further phenotypes were detected, such as an increase in cotyledon size (based on larger cells) as well as increased carpel, sepal, leaf and pollen sizes. Plant height was increased, and more highly branched trichomes explained the hairy eop1 phenotype. Fine mapping revealed the causal SNP to be a C-to-T transition at the last nucleotide of intron 7 of the INCURVATA11 (ICU11) gene, a 2-oxoglutarate/Fe(II)-dependent dioxygenase, causing missplicing of the mRNA. Two T-DNA insertion lines (icu11-2 and icu11-4) confirmed ICU11 as the causal gene by exhibiting increased petal size. A comparison of three icu11 alleles, which carried different mutation-related changes, either overexpressing ICU11 or producing modified mRNAs, formed the basis for investigating the molecular mechanism underlying the observed phenotype. Different approaches yielded contradictory results regarding ICU11 protein functionality in the icu11 mutants. A complementation assay proved the three mutants to be exchangeable, and ICU11 overexpression in the wild type led to an icu11-like phenotype, arguing for all three icu11 mutants being gain-of-function (GOF) mutants. Contradicting this conclusion, the icu11-4 line could be rescued by a genomic ICU11 transgene.
A model was proposed based on the assumption that overexpression of ICU11 inhibits the function of the protein and thus causes the same effect as a loss-of-function (LOF) protein. Furthermore, icu11-3 (eop1) mutants were shown to have an increased resistance towards paclobutrazol, a gibberellin (GA) biosynthesis inhibitor, and an upregulation of AtGA20ox2, a main GA biosynthesis gene. Additionally, the subcellular localization of ICU11 was found to be cytoplasmic, supporting the assumption that ICU11 affects GA biosynthesis and the overall GA level, possibly explaining the observed (GA-overdose) phenotype.
The second project aimed to identify the genetic basis of the S-locus in Amsinckia spectabilis, as the genus Amsinckia exhibits characteristics atypical of heterostylous species, such as no obvious self-incompatibility (SI) and the repeated transition towards homostylous, fully selfing variants. The work was based on three Amsinckia spectabilis forms: a heterostylous form, consisting of two floral morphs with reciprocal positioning of the sexual organs (S-morph: high anthers and a short style; L-morph: low anthers and a long style), and two homostylous forms, one large-flowered and partially selfing, the other small-flowered and fully selfing. The maintenance of the two floral morphs is genetically based on the S-locus region, which contains the genes encoding the morph-specific traits; these are tightly linked due to suppressed recombination. Natural populations possess a 1:1 S:L morph ratio, which can be explained by predominantly disassortative mating of the two morphs, so that the dominant S-allele occurs only in the heterozygous state (heterozygous (Ss) for the S-morph and homozygous recessive (ss) for the L-morph). Investigation of morph-specific phenotypes detected 56% longer L-morph styles and 58% higher positioned S-morph anthers. Approximately 50% of the observed size differences were explained by an increase in cell elongation. Moreover, additional phenotypes were found, such as 21% larger S-morph pollen and no obvious SI, confirmed by seed counts after hand pollination, in vivo pollen tube growth and the development of homozygous dominant SS individuals via selfing. The A. spectabilis S-locus was assumed to consist of at least the G- (style length), the A- (anther height) and the P- (pollen size) locus.
Comparative transcriptomics of the two morphs revealed 22 differentially expressed markers located within two contigs of a PacBio genome assembly of an SS individual, allowing the S-locus to be delimited to a region of approximately 23 Mb. In contrast to S-loci described elsewhere in the plant kingdom, no strong evidence was found for a hemizygous region causing the suppressed recombination of the S-locus, so that an inversion was assumed to be the causal mechanism.
In recent years, computer vision algorithms based on machine learning have seen rapid development. In the past, research mostly focused on solving computer vision problems such as image classification or object detection on images displaying natural scenes. Nowadays, other fields, such as cultural heritage, where an abundance of data is available, are also coming into the focus of research. In line with current research endeavours, we collaborated with the Getty Research Institute, which provided us with a challenging dataset containing images of paintings and drawings. In this technical report, we present the results of the seminar "Deep Learning for Computer Vision", in which students of the Hasso Plattner Institute evaluated state-of-the-art approaches for image classification, object detection and image recognition on the dataset of the Getty Research Institute. The main challenge when applying modern computer vision methods to the available data is the availability of annotated training data, as the dataset provided by the Getty Research Institute does not contain a sufficient number of annotated samples for the training of deep neural networks. However, throughout the report we show that it is possible to achieve satisfying to very good results when using further publicly available datasets, such as the WikiArt dataset, for the training of machine learning models.
The notion of clearly separated core-shell structures is already outdated: core-shell nanoparticles lag behind their respective bulk materials in efficiency due to intermixing of core and shell dopant ions. In order to optimize the photoluminescence of core-shell upconversion nanoparticles (UCNPs), the intermixing should be as small as possible, and key parameters of this process therefore need to be identified. In the present work, the Ln(III) ion migration in the host lattices NaYF4 and NaGdF4 was monitored. These investigations were performed by laser spectroscopy with the help of lanthanide resonance energy transfer (LRET) between Eu(III) as donor and Pr(III) or Nd(III) as acceptor. The LRET is evaluated based on Förster theory. The findings corroborate the literature and confirm the migration of ions in the host lattices. Based on the introduced LRET model, the acceptor concentration in the surrounding of one donor clearly depends on the design of the applied core-shell-shell nanoparticles. In general, thinner intermediate insulating shells lead to higher acceptor concentrations, stronger quenching of the Eu(III) donor and subsequently stronger sensitization of the Pr(III) or Nd(III) acceptors. The choice of the host lattice as well as the synthesis temperature are parameters to be considered for the intermixing process.
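For context, Förster theory relates the energy-transfer efficiency $E$ to the donor-acceptor distance $r$ and the Förster radius $R_0$ (the distance at which the efficiency is 50%). This is the standard relation underlying such evaluations, not a formula specific to this study:

```latex
E \;=\; \frac{1}{1 + \left( r / R_0 \right)^{6}} \;=\; \frac{R_0^{6}}{R_0^{6} + r^{6}}
```

The steep $r^{-6}$ dependence is what makes LRET a sensitive ruler for the shell thicknesses and intermixing distances discussed here.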
Moving spiral wave chimeras
(2021)
We consider a two-dimensional array of heterogeneous nonlocally coupled phase oscillators on a flat torus and study the bound states of two counter-rotating spiral chimeras, in short two-core spiral chimeras, observed in this system. In contrast to other known spiral chimeras with motionless incoherent cores, the two-core spiral chimeras typically show a drift motion. Due to this drift, their incoherent cores become spatially modulated and develop specific fingerprint patterns of varying synchrony levels. In the continuum limit of infinitely many oscillators, the two-core spiral chimeras can be studied using the Ott-Antonsen equation. Numerical analysis of this equation allows us to reveal the stability region of different spiral chimeras, which we group into three main classes: symmetric, asymmetric, and meandering spiral chimeras.
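The class of models described above can be sketched as a finite grid of Kuramoto-Sakaguchi phase oscillators with nonlocal coupling on a torus, where the local order parameter Z (a kernel-weighted average of exp(i*theta)) distinguishes coherent regions (|Z| near 1) from incoherent cores (lower |Z|). The exponential kernel, grid size, phase lag alpha, and frequency spread below are illustrative assumptions; the paper's actual analysis works in the continuum limit via the Ott-Antonsen equation.

```python
import numpy as np

def periodic_kernel(n, decay=4.0):
    """Nonlocal coupling kernel on an n x n torus, centered at the origin
    so that FFT multiplication implements periodic (circular) convolution."""
    d = np.minimum(np.arange(n), n - np.arange(n))   # periodic distance
    dx, dy = np.meshgrid(d, d, indexing="ij")
    k = np.exp(-np.sqrt(dx ** 2 + dy ** 2) / decay)
    return k / k.sum()                               # weights sum to 1

def step(theta, omega, kernel_fft, alpha, dt):
    """One Euler step of the nonlocally coupled Kuramoto-Sakaguchi model:
    dtheta/dt = omega - |Z| * sin(theta - arg(Z) + alpha)."""
    z = np.fft.ifft2(np.fft.fft2(np.exp(1j * theta)) * kernel_fft)
    return theta + dt * (omega - np.abs(z) * np.sin(theta - np.angle(z) + alpha))

rng = np.random.default_rng(1)
n = 32
theta = rng.uniform(0.0, 2.0 * np.pi, (n, n))        # random initial phases
omega = rng.normal(0.0, 0.01, (n, n))                # weak heterogeneity
kernel_fft = np.fft.fft2(periodic_kernel(n))
for _ in range(400):
    theta = step(theta, omega, kernel_fft, alpha=1.45, dt=0.05)
# local synchrony map: high |Z| in coherent regions, lower in chimera cores
sync = np.abs(np.fft.ifft2(np.fft.fft2(np.exp(1j * theta)) * kernel_fft))
```

Since Z is a convex combination of unit vectors, |Z| is bounded by 1, which makes `sync` directly interpretable as a local synchrony level.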
Stereoselective [4+2] Cycloaddition of Singlet Oxygen to Naphthalenes Controlled by Carbohydrates
(2021)
Stereoselective reactions of singlet oxygen are of current interest. Since enantioselective photooxygenations have not been realized efficiently, auxiliary control is an attractive alternative. However, the obtained peroxides are often too labile for isolation or further transformations into enantiomerically pure products. Herein, we describe the oxidation of naphthalenes by singlet oxygen, where the face selectivity is controlled by carbohydrates for the first time. The synthesis of the precursors is easily achieved starting from naphthoquinone and a protected glucose derivative in only two steps. Photooxygenations proceed smoothly at low temperature, and we detected the corresponding endoperoxides as sole products by NMR. They are labile and can thermally react back to the parent naphthalenes and singlet oxygen. However, we could isolate and characterize two enantiomerically pure peroxides, which are sufficiently stable at room temperature. An interesting influence of substituents on the stereoselectivities of the photooxygenations has been found, ranging from 51:49 up to 91:9 dr (diastereomeric ratio). We explain this by a hindered rotation of the carbohydrate substituents, substantiated by a combination of NOESY measurements and theoretical calculations. Finally, we could transfer the chiral information from a pure endoperoxide to an epoxide, which was isolated in enantiomerically pure form after cleavage of the sugar chiral auxiliary.
Large-scale literature mining to assess the relation between anti-cancer drugs and cancer types
(2021)
Background:
There is a huge body of scientific literature describing the relations between tumor types and anti-cancer drugs. Its sheer volume makes it impossible for researchers and physicians to extract all relevant information manually.
Methods:
In order to cope with the large amount of literature, we applied an automated text mining approach to assess the relations between the 30 most frequent cancer types and 270 anti-cancer drugs. We used two different approaches: classical text mining based on named entity recognition, and an AI-based approach employing word embeddings. The consistency of the literature mining results was validated with three independent methods: first, using data from FDA approvals; second, using experimentally measured IC50 cell line data; and third, using clinical patient survival data.
Results:
We demonstrated that the automated text mining successfully assessed the relations between cancer types and anti-cancer drugs. All validation methods showed a good correspondence between the results from literature mining and the independent confirmatory approaches. The relations between the most frequent cancer types and the drugs employed for their treatment were visualized in a large heatmap. All results are accessible in an interactive web-based knowledge base using the following link: .
Conclusions:
Our approach is able to assess the relations between compounds and cancer types in an automated manner. Both cancer types and compounds could be grouped into different clusters. Researchers can use the interactive knowledge base to inspect the presented results and pursue their own research questions, for example the identification of novel indication areas for known drugs.
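A minimal sketch of the co-mention idea behind such literature mining: count documents in which a drug term and a cancer term appear together, then score each pair by pointwise mutual information (PMI) so that pairs co-occurring more often than chance get positive scores. The substring matching and toy corpus below are simplifying assumptions; the study itself uses named entity recognition and word embeddings over the full literature.

```python
from collections import Counter
from math import log

def pmi_scores(documents, drugs, cancers):
    """Score drug-cancer relations by pointwise mutual information of
    co-mentions across documents; a toy stand-in for NER-based mining."""
    n = len(documents)
    drug_hits, cancer_hits, pair_hits = Counter(), Counter(), Counter()
    for doc in documents:
        text = doc.lower()
        found_d = [d for d in drugs if d in text]
        found_c = [c for c in cancers if c in text]
        drug_hits.update(found_d)
        cancer_hits.update(found_c)
        pair_hits.update((d, c) for d in found_d for c in found_c)
    # PMI > 0: the pair co-occurs more often than independence would predict
    return {(d, c): log(k * n / (drug_hits[d] * cancer_hits[c]))
            for (d, c), k in pair_hits.items()}
```

The resulting score matrix over all drug-cancer pairs is exactly the kind of table that can be rendered as a heatmap and clustered along both axes.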
Landesrecht Brandenburg
(2021)
This textbook presents, in a clear and systematic form, the most important training-relevant parts of Brandenburg constitutional and administrative law. The authors cover the core areas relevant for exams and practice (constitutional law, administrative organization law, municipal law, police and regulatory law, and building regulations law), taking case law and legal literature into account. Numerous examples aid understanding, and exam notes sharpen the eye for error-prone questions.
The rapid emergence of online targeted political advertising has raised concerns over data privacy and what the government's response should be. This paper tested and confirmed the hypothesis that public attitudes toward stricter regulation of online targeted political advertising are partially motivated by partisan self-interest. We conducted an experiment using an online survey of 1549 Americans who identify as either Democrats or Republicans. Our findings show that Democrats and Republicans believe that online targeted political advertising benefits the opposing party. This belief is based on their conviction that their political opponents are more likely to be mobilized by online targeted political advertising than are supporters of their own party. We exogenously manipulated partisan self-interest considerations of a random subset of participants by truthfully informing them that, in the past, online targeted political advertising has benefited Republicans. Our findings show that Republicans informed about this had less favorable attitudes toward regulation than did their uninformed co-partisans. This suggests that Republicans' attitudes regarding stricter regulation are based not solely on concerns about privacy violations, but also, in part, are caused by beliefs about partisan advantage. The results imply that people are willing to accept violations of their privacy if their preferred party benefits from the use of online targeted political advertising.
Strafrecht Allgemeiner Teil
(2021)
The career orientation of children of primary-school age has so far been researched only in rudimentary form. Nevertheless, there are career-orientation programmes that address primary-school children on various levels. The study focuses on current research findings, selected initiatives, children's books, teaching materials, etc. concerning children's career orientation. With the aim of developing and differentiating a multifaceted occupational self-concept in children, specific potentials for research and development are identified.
Nonribosomal peptides (NRPs) are crucial molecular mediators in microbial ecology and provide indispensable drugs. Nevertheless, the evolution of the flexible biosynthetic machineries that correlates with the stunning structural diversity of NRPs is poorly understood. Here, we show that recombination is a key driver in the evolution of bacterial NRP synthetase (NRPS) genes across distant bacterial phyla, which has guided structural diversification in a plethora of NRP families by extensive mixing and matching of biosynthesis genes. The systematic dissection of a large number of individual recombination events not only unveiled a striking plurality in the nature and origin of the exchange units but also allowed the deduction of overarching principles that enable the efficient exchange of adenylation (A) domain substrates while keeping the functionality of the dynamic multienzyme complexes. In the majority of cases, recombination events have targeted variable portions of the A(core) domains, yet domain interfaces and the flexible A(sub) domain remained untapped. Our results strongly contradict the widespread assumption that adenylation and condensation (C) domains coevolve and significantly challenge the attributed role of C domains as a stringent selectivity filter during NRP synthesis. Moreover, they teach valuable lessons on the choice of natural exchange units in the evolution of NRPS diversity, which may guide future engineering approaches.
The study examines the contested limits of fraud by omission and provides clarity for legal practice by laying open the doctrinal guidelines of the case law. At its center is the courts' interpretation of the fraud-specific duty of a guarantor (Garantenstellung). Since this cannot, in the end, be explained by the supposedly prevailing triad of legal sources, namely statute, contract and prior dangerous conduct (Ingerenz), the notion of trust is worked out and delineated as the material criterion on the basis of a thorough review of the entire body of fraud case law. Whether this actually closes the legislative gap in § 13 Abs. 1 StGB in an appropriate way is critically discussed in conclusion.
One third of the world's population lives in areas where earthquakes causing at least slight damage are frequently expected. Thus, the development and testing of global seismicity models is essential to improving seismic hazard estimates and earthquake-preparedness protocols for effective disaster-risk mitigation. Currently, the availability and quality of geodetic data along plate-boundary regions provide the opportunity to construct global models of plate motion and strain rate, which can be translated into global maps of forecasted seismicity. Moreover, the broad coverage of existing earthquake catalogs nowadays facilitates the calibration and testing of global seismicity models. As a result, modern global seismicity models can integrate two independent factors necessary for physics-based, long-term earthquake forecasting, namely interseismic crustal strain accumulation and sudden lithospheric stress release.
In this dissertation, I present the construction of and testing results for two global ensemble seismicity models, aimed at providing mean rates of shallow (0-70 km) earthquake activity for seismic hazard assessment. These models depend on the Subduction Megathrust Earthquake Rate Forecast (SMERF2), a stationary seismicity approach for subduction zones based on the moment-conservation principle and the use of regional "geodesy-to-seismicity" parameters, such as corner magnitudes, seismogenic thicknesses and subduction dip angles. Specifically, this interface-earthquake model combines geodetic strain rates with instrumentally recorded seismicity to compute long-term rates of seismic and geodetic moment. Based on this, I derive analytical solutions for seismic coupling and earthquake activity, which give this earthquake model the ability to properly forecast interface seismicity. Then, I integrate the SMERF2 interface-seismicity estimates with earthquake computations in non-subduction zones, provided by the Seismic Hazard Inferred From Tectonics approach based on the second iteration of the Global Strain Rate Map, to construct the global Tectonic Earthquake Activity Model (TEAM). TEAM is designed to reduce the inconsistencies in earthquake numbers, and potentially in their spatial distribution, shown by its predecessor tectonic earthquake model during the 2015-2017 period. Also, I combine this new geodesy-based earthquake approach with a global smoothed-seismicity model to create the World Hybrid Earthquake Estimates based on Likelihood scores (WHEEL) model. This hybrid model serves as an alternative earthquake-rate approach to the Global Earthquake Activity Rate model for forecasting long-term rates of shallow seismicity everywhere on Earth.
Global seismicity models provide scientific hypotheses about when and where earthquakes may occur, and how big they might be. Nonetheless, the veracity of these hypotheses can only be either confirmed or rejected after prospective forecast evaluation. Therefore, I finally test the consistency and relative performance of these global seismicity models with independent observations recorded during the 2014-2019 pseudo-prospective evaluation period. As a result, hybrid earthquake models based on both geodesy and seismicity are the most informative seismicity models during the testing time frame, as they obtain higher information scores than their constituent model components. These results support the combination of interseismic strain measurements with earthquake-catalog data for improved seismicity modeling. However, further prospective evaluations are required to more accurately describe the capacities of these global ensemble seismicity models to forecast longer-term earthquake activity.
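The "geodesy-to-seismicity" conversion at the heart of such models can be illustrated with a deliberately simplified moment balance: given a tectonic moment rate, a Gutenberg-Richter magnitude distribution truncated at a maximum magnitude fixes the annual rate of earthquakes above any threshold (a classical argument after Molnar, 1979). This sketch is not SMERF2, whose analytic solutions use tapered distributions, corner magnitudes and regionally calibrated coupling; the b-value, magnitudes and moment rate below are assumed example values.

```python
def moment_from_mw(mw):
    """Scalar seismic moment in N*m from moment magnitude
    (Hanks & Kanamori relation)."""
    return 10.0 ** (1.5 * mw + 9.05)

def annual_rate_above(m_min, m_max, moment_rate, b=1.0, coupling=1.0):
    """Annual rate of earthquakes with Mw >= m_min implied by a tectonic
    moment rate (N*m/yr), assuming a Gutenberg-Richter distribution
    truncated at m_max and a given seismic coupling fraction."""
    beta = 2.0 * b / 3.0                      # GR exponent in the moment domain
    m0_min = moment_from_mw(m_min)
    m0_max = moment_from_mw(m_max)
    # N(>=M0) = a * M0**-beta; integrating M0 against the event-rate density
    # up to M0_max and equating to the coupled moment rate fixes a:
    a = coupling * moment_rate * (1.0 - beta) / (beta * m0_max ** (1.0 - beta))
    return a * m0_min ** (-beta)
```

Two properties follow directly and match intuition: the forecast rate scales linearly with the (coupled) moment rate, and it decreases monotonically as the magnitude threshold rises.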