Almost half of the Turkish Republic's political life has been lived under state of emergency and state of siege policies. Despite this striking continuity in the deployment of legal emergency powers, only a few legal and political studies have examined the reasons for such permanency in governing practices. To fill this gap, this paper discusses one of the most important sources of the 'permanent' political crisis in the country: the historical evolution of legal emergency power. To highlight how these policies have intensified the highly fragile citizenship regime by weakening the separation of powers, repressing the use of political rights, and increasing the discretionary power of both the executive and judicial authorities, the paper sheds light on the emergence and production of a specific form of legality based on the idea of emergency and the principle of executive prerogative. In that context, it provides a genealogical explanation of the evolution of the exceptional form of the nation-state, grounded in the way political society, representation, and legitimacy have been instituted and in the accompanying failure of the ruling classes to build hegemony in the country.
Since 1980, Iraq has passed through various wars and conflicts, including the Iran-Iraq war; Saddam Hussein's Anfal and Halabja campaigns against the Kurds and the killing campaigns against the Shia in 1986; Saddam Hussein's invasion of Kuwait in August 1990 and the ensuing Gulf war; the Iraq war of 2003 and the fall of Saddam; the conflict and chaos over the transfer of power after the death of Saddam; and the war against ISIS. All these wars left severe marks on most households in Iraq, and on women and children in particular.
The consequences of such long wars can be observed in all sectors, including the economic, social, cultural, and religious. The social structure, norms, and attitudes have been intensely affected. Many women, especially divorced women, found themselves facing difficult social and economic situations. Divorced women in Iraqi Kurdistan are therefore the focus of this research.
Considering that there is very little empirical research on this topic, a constructivist grounded theory (CGT) methodology was viewed as a reliable way to build a comprehensive picture of the everyday life of divorced women in Iraqi Kurdistan. Data were collected in Sulaimani city in Iraqi Kurdistan. The work of Kathy Charmaz was chosen as the main methodological framework, and the main data collection method was individual intensive narrative interviews with divorced women.
Women in general, and divorced women in particular, in Iraqi Kurdistan live in a patriarchal society that is passing through many changes, due to the above-mentioned wars among many other factors. This research studies the everyday life of divorced women in such circumstances and the forms of social insecurity they experience. The social institutions, from the family as a highly significant institution for women to the governmental and non-governmental institutions working to support women, as well as women's coping strategies, are the focus of this research. The main argument is that the family plays an ambivalent role in divorced women's lives: on the one hand, families were revealed to be an essential source of security for most respondents; on the other hand, families also posed many threats and restrictions on these women. This argument is supported by what Suad Joseph calls "the paradox of support and suppression". Another important finding is that state institutions (laws, constitutions, and the Offices of Combating Violence against Women and Family) support women to some extent and offer them protection from insecurities, but it is clear that the existence of laws does not stop violence against women in Iraqi Kurdistan. As Pateman explains, this is because the law, or the contract, is a sexual-social contract that upholds the sex rights of males and grants them more privileges than females. Political instability and tribal social norms also play a major role in influencing the rule of law.
It is noteworthy that the analysis of the interviews showed that, although divorced women live with insecurities and face difficulties, most of the respondents try to find coping strategies to tackle difficult situations and to deal with the violence they face; these strategies include bargaining, and sometimes compromising or resisting. Different theories are used to explain these coping strategies, such as Kandiyoti's 'bargaining with patriarchy': women living under certain restraints struggle to find ways and strategies to improve their situations. The research findings also revealed that the Western liberal feminist view of agency is limited; this agrees with Saba Mahmood's account of Muslim women's agency. For my respondents, divorced women, agency reveals itself in different ways: in resisting, compromising with, or even obeying the power of male relatives and the normative system of the society. Agency also explains the behaviour of women who contact formal state institutions, such as the police or the Offices of Combating Violence against Women and Family, in cases of violence.
Since its beginnings, spaceflight has been an object of study for the most varied disciplines. Philosophy, too, has taken a critical perspective on this activity. Yet a philosophically systematic approach with a genuinely 'anthropological' point of view has so far been lacking. This gap has become ever more apparent since, following the discovery of the first exoplanets, new 'astro-sciences' (e.g., astrobiology, astrocognition, astrosociology) have emerged that either explicitly presuppose humans as space travelers or discuss the 'detachability' of human characteristics. This Master's thesis attempts to uncover the presuppositions necessary for understanding humans as 'spacefaring living beings', without falling into naturalistic or culturalistic reductions. To this end, the systematic framework of Helmuth Plessner's Philosophical Anthropology is chosen, since it offers a comprehensive 'species-neutral' investigation of the matter in question (i.e., it allows one to think about humans, animals, and extraterrestrials alike, without 'anthropocentric' or 'speciesist' prejudices). To exemplify this framework, and thereby to elaborate a philosophically systematic approach to spaceflight that can conceptualize spacefaring extraterrestrials without anthropomorphization and can also take the treatment of extraterrestrials into account in ethical and political respects, the thematic fields of astrobiology, astroethics, and astropolitics are discussed in individual chapters. Finally, against all expectation, the chosen approach is defended as a 'critical post-humanist' option.
The immense popularity of online communication services in the last decade has not only upended our lives (with news spreading like wildfire on the Web, presidents announcing their decisions on Twitter, and the outcome of political elections being determined on Facebook) but also dramatically increased the amount of data exchanged on these platforms. Therefore, if we wish to understand the needs of modern society better and want to protect it from new threats, we urgently need more robust, higher-quality natural language processing (NLP) applications that can recognize such necessities and menaces automatically, by analyzing uncensored texts. Unfortunately, most NLP programs today have been created for standard language, as we know it from newspapers, or, in the best case, adapted to the specifics of English social media.
This thesis reduces the existing deficit by entering the new frontier of German online communication and addressing one of its most prolific forms—users’ conversations on Twitter. In particular, it explores the ways and means by which people express their opinions on this service, examines current approaches to automatic mining of these feelings, and proposes novel methods, which outperform state-of-the-art techniques. For this purpose, I introduce a new corpus of German tweets that have been manually annotated with sentiments, their targets and holders, as well as lexical polarity items and their contextual modifiers. Using these data, I explore four major areas of sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained opinion mining, (iii) message-level polarity classification, and (iv) discourse-aware sentiment analysis. In the first task, I compare three popular groups of lexicon generation methods: dictionary-, corpus-, and word-embedding–based ones, finding that dictionary-based systems generally yield better polarity lists than the last two groups. Apart from this, I propose a linear projection algorithm, whose results surpass many existing automatically generated lexicons. Afterwards, in the second task, I examine two common approaches to automatic prediction of sentiment spans, their sources, and targets: conditional random fields (CRFs) and recurrent neural networks, obtaining higher scores with the former model and improving these results even further by redefining the structure of CRF graphs. When dealing with message-level polarity classification, I juxtapose three major sentiment paradigms: lexicon-, machine-learning–, and deep-learning–based systems, and try to unite the first and last of these method groups by introducing a bidirectional neural network with lexicon-based attention.
Finally, in order to make the new classifier aware of microblogs' discourse structure, I let it separately analyze the elementary discourse units of each tweet and infer the overall polarity of a message from the scores of its EDUs with the help of two new approaches: latent-marginalized CRFs and Recursive Dirichlet Process.
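The lexicon-based side of message-level polarity classification can be illustrated with a minimal sketch. All lexicon entries, scores, and the single-token negation scope below are invented for illustration; they are not the thesis's corpus or method:

```python
# Minimal lexicon-based polarity scorer with a toy contextual modifier
# (negation). Lexicon entries and scores are invented examples.
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "awful": -2.0}
NEGATORS = {"not", "never", "no"}

def message_polarity(tokens):
    """Sum lexicon scores over the message, flipping the polarity of a
    token that directly follows a negator."""
    score, negated = 0.0, False
    for tok in tokens:
        tok = tok.lower()
        if tok in NEGATORS:
            negated = True
            continue
        if tok in LEXICON:
            score += -LEXICON[tok] if negated else LEXICON[tok]
        negated = False  # negation scope here: the next token only
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Real systems of the kind compared in the thesis use far richer lexicons and learned weighting; this sketch only shows why contextual modifiers must be annotated alongside polarity items.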
Academic freedom is a fundamental right whose meaning and interpretation, in the context of reforms of the higher education system, repeatedly give rise to discussion, not only in the judiciary but also within academia itself, as in the course of the introduction of so-called quality management of studying and teaching at German universities. This dissertation presents the results of an empirical study that contributes to this discussion with a sociological examination of quality management at different universities.
Based on the premise that the course and consequences of an organizational innovation can only be understood if the organization members' everyday handling of the new structures and processes is included in the analysis, the study starts from the question of how actors at German universities use their organizations' quality management systems. The qualitative content analysis of 26 guided interviews with vice-rectors, quality management staff, and deans of studies at nine universities shows that the strategies of the actor groups, in interplay with structural aspects, give rise to different dynamics with implications for the freedom of teaching: while at some universities the autonomy of teachers is supported by quality management, at others both autonomy and responsibility for studying and teaching are the subject of ongoing conflicts that also encompass quality management.
Business process management is an established technique for business organizations to manage and support their processes. Those processes are typically represented by graphical models designed with modeling languages, such as the Business Process Model and Notation (BPMN).
Since process models do not only serve the purpose of documentation but are also a basis for implementation and automation of the processes, they have to satisfy certain correctness requirements. In this regard, the notion of soundness of workflow nets was developed, which can be applied to BPMN process models in order to verify their correctness. Because the original soundness criteria are very restrictive regarding the behavior of the model, different variants of the soundness notion have been developed for situations in which certain violations are not actually harmful.
However, all of those notions consider only the control-flow structure of a process model. This poses a problem given that, with the recent release and ongoing development of the Decision Model and Notation (DMN) standard, an increasing number of process models are complemented by respective decision models. DMN is a dedicated modeling language for decision logic and separates the concerns of process and decision logic into two different models: process models and decision models, respectively.
Hence, this thesis is concerned with the development of decision-aware soundness notions, i.e., notions of soundness that build upon the original soundness ideas for process models but additionally take complementary decision models into account. Similar to the various notions of workflow net soundness, this thesis investigates different notions of decision soundness that can be applied depending on the desired degree of restrictiveness. Since decision tables are a standardized means of DMN to represent decision logic, this thesis also puts special focus on decision tables, discussing how they can be translated into an unambiguous format and how their possible output values can be efficiently determined.
Moreover, a prototypical implementation is described that supports checking a basic version of decision soundness. The decision soundness notions were also empirically evaluated on models from participants of an online course on process and decision modeling as well as from a process management project of a large insurance company. The evaluation demonstrates that violations of decision soundness indeed occur and can be detected with our approach.
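Determining the possible output values of a decision table can be sketched for the simple case of one numeric input with interval conditions. The rule set, attribute, and outputs below are invented examples, not DMN syntax or the thesis's algorithm:

```python
# A DMN-style decision table as plain data: each rule maps a half-open
# input interval (here over a single hypothetical "age" input) to an
# output value. Rules and outputs are invented for illustration.
from math import inf

RULES = [
    ((0, 18), "reject"),        # age in [0, 18)
    ((18, 65), "standard"),     # age in [18, 65)
    ((65, inf), "senior"),      # age in [65, inf)
    ((30, 20), "unreachable"),  # empty interval: this rule can never fire
]

def possible_outputs(rules):
    """Collect outputs of all rules whose condition is satisfiable,
    i.e., whose input interval is non-empty."""
    return {out for (lo, hi), out in rules if lo < hi}
```

Even this toy case shows why output determination matters for decision soundness: a rule with an unsatisfiable condition contributes no output, so a process branch guarded by that output can never be taken.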
There is evidence that infants start extracting words from fluent speech around 7.5 months of age (e.g., Jusczyk & Aslin, 1995) and that they use at least two mechanisms to segment words forms from fluent speech: prosodic information (e.g., Jusczyk, Cutler & Redanz, 1993) and statistical information (e.g., Saffran, Aslin & Newport, 1996). However, how these two mechanisms interact and whether they change during development is still not fully understood.
The main aim of the present work is to understand in what way different cues to word segmentation are exploited by infants when learning the language in their environment, as well as to explore whether this ability is related to later language skills. In Chapter 3 we sought to determine the reliability of the method used in most of the experiments in the present thesis (the Headturn Preference Procedure), as well as to examine correlations and individual differences between infants’ performance and later language outcomes. In Chapter 4 we investigated how German-speaking adults weigh statistical and prosodic information for word segmentation. We familiarized adults with an auditory string in which statistical and prosodic information indicated different word boundaries and obtained both behavioral and pupillometry responses. Then, we conducted further experiments to understand in what way different cues to word segmentation are exploited by 9-month-old German-learning infants (Chapter 5) and by 6-month-old German-learning infants (Chapter 6). In addition, we conducted follow-up questionnaires with the infants and obtained language outcomes at later stages of development.
Our findings revealed that (1) German-speaking adults give strong weight to prosodic cues, at least for the materials used in this study, and that (2) German-learning infants weight these two kinds of cues differently depending on age and/or language experience. We observed that, unlike English-learning infants, 6-month-olds relied more strongly on prosodic cues, while 9-month-olds showed no preference for either cue in the word segmentation task. From the present results it remains unclear whether the ability to use prosodic cues for word segmentation relates to later vocabulary. We speculate that prosody provides infants with their first window into the specific acoustic regularities of the signal, enabling them to master the specific stress pattern of German rapidly. Our findings are a step forward in understanding the early impact of native prosody, compared to statistical learning, on early word segmentation.
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails incorporation of measurements' information into the model to gain more insight into a given state governed by a noisy state space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete; and hence uncertainties are approximated by means of probabilities. Time-continuous filtering, therefore, holds promise for wider usefulness, for it offers a means of combining noisy measurements with imperfect model to provide more insight on a given state.
The solution to the time-continuous nonlinear Gaussian filtering problem is provided by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution. Moreover, numerical approximations based on Taylor expansion above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo methods have been resorted to. Chief among these are sequential Monte Carlo methods (or particle filters), as they allow for online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and computational costs arising from resampling.
The goals of this thesis are to: i) review the derivation of the Kushner-Stratonovich equation from first principles and its extant numerical approximation methods; ii) study feedback particle filters as a way of avoiding resampling in particle filters; iii) study joint state and parameter estimation in time-continuous settings; and iv) apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô and Stratonovich integrals and the corresponding stochastic partial differential equations is introduced in anticipation of feedback particle filters. With these ideas, and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on a coupling of prediction and analysis measures are proposed. They register a better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation to the advection and wave equations, whose velocity is spatially varying. Two methods are employed: Metropolis-Hastings with filter likelihood, and a dual filter consisting of a Kalman-Bucy filter and an ensemble Kalman-Bucy filter. The former performs better than the latter.
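The resampling step that feedback particle filters aim to avoid can be seen in a minimal bootstrap particle filter for a linear-Gaussian toy model. The model, parameter values, and particle count below are illustrative choices, not taken from the thesis:

```python
# Bootstrap particle filter for the toy state-space model
#   x_t = a * x_{t-1} + N(0, q^2),   y_t = x_t + N(0, r^2).
# Multinomial resampling at every step illustrates the cost that
# resampling-free (feedback) particle filters seek to avoid.
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(ys, a=0.9, q=0.5, r=0.5, n=1000):
    """Return the filtered mean of x_t for each observation y_t."""
    x = rng.normal(0.0, 1.0, n)                      # initial particle cloud
    means = []
    for y in ys:
        x = a * x + rng.normal(0.0, q, n)            # predict (propagate)
        logw = -0.5 * ((y - x) / r) ** 2             # weight by likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        x = rng.choice(x, size=n, p=w)               # multinomial resampling
        means.append(x.mean())
    return np.array(means)

# demo: constant observations at 1.0; the filtered mean settles near 1
means = bootstrap_pf(np.full(30, 1.0))
```

Repeated resampling duplicates high-weight particles and discards the rest, which is exactly the source of the sample impoverishment mentioned above.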
A standard approach to study time-dependent stochastic processes is the power spectral density (PSD), an ensemble-averaged property defined as the Fourier transform of the autocorrelation function of the process in the asymptotic limit of long observation times, T → ∞. In many experimental situations one is able to garner only relatively few stochastic time series of finite T, such that practically neither an ensemble average nor the asymptotic limit T → ∞ can be achieved. To accommodate a meaningful analysis of such finite-length data we here develop the framework of single-trajectory spectral analysis for one of the standard models of anomalous diffusion, scaled Brownian motion. We demonstrate that the frequency dependence of the single-trajectory PSD is exactly the same as for standard Brownian motion, which may lead one to the erroneous conclusion that the observed motion is normal-diffusive. However, a distinctive feature is shown to be provided by the explicit dependence on the measurement time T, and this ageing phenomenon can be used to deduce the anomalous diffusion exponent. We also compare our results to the single-trajectory PSD behaviour of another standard anomalous diffusion process, fractional Brownian motion, and work out the commonalities and differences. Our results represent an important step in establishing single-trajectory PSDs as an alternative (or complement) to analyses based on the time-averaged mean squared displacement.
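For a discretely sampled trajectory, the single-trajectory PSD can be estimated directly from its definition, S(f, T) = |∫₀ᵀ x(t) e^{2πift} dt|² / T. The FFT-based discretization and the Brownian-motion test signal below are a generic sketch (normalization conventions vary), not the paper's code:

```python
# Single-trajectory power spectral density of a sampled process x(t),
# estimated as |windowless finite-time Fourier transform|^2 / T.
import numpy as np

rng = np.random.default_rng(1)

def single_traj_psd(x, dt):
    """Return (frequencies, S(f, T)) for one trajectory sampled at step dt."""
    T = len(x) * dt
    xf = np.fft.rfft(x) * dt                 # approximate the time integral
    freqs = np.fft.rfftfreq(len(x), dt)
    return freqs[1:], (np.abs(xf[1:]) ** 2) / T   # drop the f = 0 bin

# demo trajectory: standard Brownian motion (cumulative Gaussian increments);
# its PSD should scale as 1/f^2 for f well above 1/T
dt, n = 0.01, 2 ** 14
x = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))
f, S = single_traj_psd(x, dt)
```

A single trajectory gives a noisy spectrum; the 1/f² frequency dependence discussed above emerges cleanly only after averaging several such spectra, while the dependence on T survives in each one.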
Fractional Brownian motion (FBM) is a Gaussian stochastic process with stationary, long-time correlated increments and is frequently used to model anomalous diffusion processes. We study numerically FBM confined to a finite interval with reflecting boundary conditions. The probability density function of this reflected FBM at long times converges to a stationary distribution showing distinct deviations from the fully flat distribution of amplitude 1/L in an interval of length L found for reflected normal Brownian motion. While for superdiffusion, corresponding to a mean squared displacement (MSD) 〈X² (t)〉 ⋍ tᵅ with 1 < α < 2, the probability density function is lowered in the centre of the interval and rises towards the boundaries, for subdiffusion (0 < α < 1) this behaviour is reversed and the particle density is depleted close to the boundaries. The MSD in these cases at long times converges to a stationary value, which is, remarkably, monotonically increasing with the anomalous diffusion exponent α. Our a priori surprising results may have interesting consequences for the application of FBM for processes such as molecule or tracer diffusion in the confines of living biological cells or organelles, or other viscoelastic environments such as dense liquids in microfluidic chambers.
Many studies on biological and soft matter systems report the joint presence of a linear mean-squared displacement and a non-Gaussian probability density exhibiting, for instance, exponential or stretched-Gaussian tails. This phenomenon is ascribed to the heterogeneity of the medium and is captured by random parameter models such as ‘superstatistics’ or ‘diffusing diffusivity’. Independently, scientists working in the area of time series analysis and statistics have studied a class of discrete-time processes with similar properties, namely, random coefficient autoregressive models. In this work we try to reconcile these two approaches and thus provide a bridge between physical stochastic processes and autoregressive models. We start from the basic Langevin equation of motion with time-varying damping or diffusion coefficients and establish the link to random coefficient autoregressive processes. By exploring that link we gain access to efficient statistical methods which can help to identify data exhibiting Brownian yet non-Gaussian diffusion.
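The link can be illustrated by simulating a random-coefficient AR(1) process, x_t = (a + η_t) x_{t−1} + ε_t, whose stationary distribution has heavier-than-Gaussian tails (positive excess kurtosis) even though its autocorrelation matches an ordinary AR(1). Parameter values are illustrative and chosen to satisfy the stationarity condition a² + σ_η² < 1:

```python
# Random-coefficient AR(1): the AR coefficient itself fluctuates in time,
# x_t = (a + eta_t) * x_{t-1} + eps_t, mimicking a medium with a randomly
# varying damping/diffusion coefficient.
import numpy as np

rng = np.random.default_rng(2)

def rca1(n, a=0.5, sigma_eta=0.3, sigma_eps=1.0):
    """Simulate n steps of a random-coefficient AR(1) process."""
    b = a + rng.normal(0.0, sigma_eta, n)     # random coefficients
    e = rng.normal(0.0, sigma_eps, n)         # innovations
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        x[t] = b[t] * x[t - 1] + e[t]
    return x

x = rca1(200_000)
# excess kurtosis: 0 for a Gaussian, > 0 for the heavy-tailed RCA process
kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2 - 3.0
```

For these parameters the stationary variance is σ_ε²/(1 − a² − σ_η²) ≈ 1.52 and the excess kurtosis is positive (about 0.4 analytically), which is the 'Brownian yet non-Gaussian' signature in discrete time.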
Land cover change is a dynamic phenomenon driven by synergetic biophysical and socioeconomic effects. It involves massive transitions from natural to less natural habitats and thereby threatens ecosystems and the services they provide. To retain intact ecosystems and reduce land cover change to a minimum of natural transition processes, a dense network of protected areas has been established across Europe. However, even protected areas and in particular the zones around protected areas have been shown to undergo land cover changes. The aim of our study was to compare land cover changes in protected areas, non-protected areas, and 1 km buffer zones around protected areas and analyse their relationship to climatic and socioeconomic factors across Europe between 2000 and 2012 based on earth observation data. We investigated land cover flows describing major change processes: urbanisation, afforestation, deforestation, intensification of agriculture, extensification of agriculture, and formation of water bodies. Based on boosted regression trees, we modelled correlations between land cover flows and climatic and socioeconomic factors. The results show that land cover changes were most frequent in 1 km buffer zones around protected areas (3.0% of all buffer areas affected). Overall, land cover changes within protected areas were less frequent than outside, although they still amounted to 18,800 km2 (1.5% of all protected areas) from 2000 to 2012. In some parts of Europe, urbanisation and intensification of agriculture still accounted for up to 25% of land cover changes within protected areas. Modelling revealed meaningful relationships between land cover changes and a combination of influencing factors. 
Demographic factors (accessibility to cities and population density) were most important for coarse-scale patterns of land cover changes, whereas fine-scale patterns were most related to longitude (representing the general east/west economic gradient) and latitude (representing the north/south climatic gradient).
Local observations indicate that climate change and shifting disturbance regimes are causing permafrost degradation. However, the occurrence and distribution of permafrost region disturbances (PRDs) remain poorly resolved across the Arctic and Subarctic. Here we quantify the abundance and distribution of three primary PRDs using time-series analysis of 30-m resolution Landsat imagery from 1999 to 2014. Our dataset spans four continental-scale transects in North America and Eurasia, covering ~10% of the permafrost region. Lake area loss (−1.45%) dominated the study domain, with enhanced losses occurring at the boundary between discontinuous and continuous permafrost regions. Fires were the most extensive PRD across boreal regions (6.59%), but in tundra regions (0.63%) they were limited to Alaska. Retrogressive thaw slumps were abundant but highly localized (<10−5%). Our analysis underscores the global-scale importance of PRDs. The findings highlight the need to include PRDs in next-generation land surface models to project the permafrost carbon feedback.
The devil has been portrayed many times in Russian literature, and his images and functions change through the centuries, in keeping with the shifts of epochs and literary fashions. In conceptions of the devil, folk animistic elements mingle with the biblical concepts of devils and demons. Literature draws on both reservoirs, in part mocking naive belief in the devil, yet also delighting in frightening ostensibly enlightened skeptics with apparitions of the devil. The devil is a central motif of Russian literature; to retell its history is to retell a central strand of Russian literature itself, sub specie diaboli.
Even though he held a prominent place in Russian literature long before the Romantics, above all Nikolaj Gogol', folk conceptions have since then mingled with the biblical heritage. Among the common people, conceptions of the devil remain popular to this day, while the educated classes tend to be skeptical, which is why realist literature, with the great exception of Fedor Dostoevskij, rather avoided the devil; the modernists, by contrast, portrayed him all the more eagerly. He reaches a high point in Michail Bulgakov. Contemporary writers often lack the religious subtext.
Background: Although clinical supervision is considered to be a major component of the development and maintenance of psychotherapeutic competencies, and despite an increase in supervision research, the empirical evidence on the topic remains sparse.
Methods: Because most previous reviews lack methodological rigor, we aimed to review the status and quality of the empirical literature on clinical supervision, and to provide suggestions for future research. MEDLINE, PsycInfo and the Web of Science Core Collection were searched and the review was conducted according to current guidelines. From the review results, we derived suggestions for future research on clinical supervision.
Results: The systematic literature search identified 19 publications from 15 empirical studies. Taking the review results into account, the following suggestions for further research emerged: supervision research would benefit from proper descriptions of how studies are conducted according to current guidelines, from more methodologically rigorous empirical studies, from the investigation of active supervision interventions, from taking diverse outcome domains into account, and from investigating supervision from a meta-theoretical perspective.
Conclusions: In all, the systematic review supported the notion that supervision research often lags behind psychotherapy research in general. Still, the results offer detailed starting points for further supervision research.
Due to the enhanced electromagnetic field at the tips of metal nanoparticles, the spiked structure of gold nanostars (AuNSs) is promising for surface-enhanced Raman scattering (SERS). The challenge, therefore, is the synthesis of well-designed particles with sharp tips. The influence of different surfactants, i.e., dioctyl sodium sulfosuccinate (AOT), sodium dodecyl sulfate (SDS), and benzylhexadecyldimethylammonium chloride (BDAC), as well as the combination of surfactant mixtures, on the formation of nanostars in the presence of Ag⁺ ions and ascorbic acid was investigated. By varying the amount of BDAC in mixed micelles, the core/spike-shell morphology of the resulting AuNSs can be tuned from small cores to large ones with sharp and large spikes. The concomitant red-shift in the absorption toward the NIR region, without loss of SERS enhancement, enables their use for biological applications and for time-resolved spectroscopic studies of chemical reactions, which require a permanent supply of a fresh and homogeneous solution. HRTEM micrographs and energy-dispersive X-ray (EDX) experiments allow us to verify the mechanism of nanostar formation according to silver underpotential deposition on the spike surface in combination with micelle adsorption.
Quorum-sensing bacteria in a growing colony of cells send out signalling molecules (so-called “autoinducers”) and themselves sense the autoinducer concentration in their vicinity. Once the concentration of autoinducers exceeds a threshold value, due to increased local cell density inside a “cluster” of the growing colony, cells in this cluster get “induced” into a communal, multi-cell biofilm-forming mode in a cluster-wide burst event. We analyse quantitatively the influence of spatial disorder, the local heterogeneity of the spatial distribution of cells in the colony, and additional physical parameters such as the autoinducer signal range on the induction dynamics of the cell colony. Spatial inhomogeneity with higher local cell concentrations in clusters leads to earlier but more localised induction events, while homogeneous distributions lead to comparatively delayed but more concerted induction of the cell colony, and, thus, a behaviour close to the mean-field dynamics. We quantify the induction dynamics with quantifiers such as the time series of induction events and burst sizes, the grouping into induction families, and the mean autoinducer concentration levels. Consequences for different scenarios of biofilm growth are discussed, providing possible cues for biofilm control in both health care and biotechnology.
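The effect of spatial heterogeneity on induction can be sketched with a static toy model in which each cell senses the summed signal of all cells through a Gaussian kernel whose width plays the role of the autoinducer signal range. The kernel form, threshold value, and cell geometries below are invented for illustration, not the paper's model:

```python
# Toy quorum-sensing snapshot: each cell senses the sum of Gaussian signal
# kernels produced by all cells; cells above a threshold count as "induced".
import numpy as np

def autoinducer_field(cells, ell=1.0):
    """Concentration sensed by each cell: sum over all producers
    (including the cell's own contribution, exp(0) = 1)."""
    d2 = ((cells[:, None, :] - cells[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * ell ** 2)).sum(axis=1)

def induced(cells, threshold, ell=1.0):
    """Boolean mask of cells whose sensed concentration crosses threshold."""
    return autoinducer_field(cells, ell) >= threshold

rng = np.random.default_rng(3)
uniform = rng.uniform(0.0, 10.0, size=(100, 2))   # homogeneous colony
cluster = rng.normal(5.0, 0.5, size=(100, 2))     # same cell count, clustered
```

With the same number of cells and the same threshold, the clustered configuration crosses the induction threshold while the homogeneous one does not, which is the static counterpart of the earlier-but-more-localised induction described above.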
Research on weight-loss interventions in emerging adulthood is warranted. Therefore, a cognitive-behavioral group treatment (CBT), including development-specific topics for adolescents and young adults with obesity (YOUTH), was developed. In a controlled study, we compared the efficacy of this age-specific CBT group intervention to an age-unspecific CBT group delivered across ages in an inpatient setting. The primary outcome was the body mass index standard deviation score (BMI-SDS) over the course of one year; secondary outcomes were health-related and disease-specific quality of life (QoL). A total of 266 participants aged 16 to 21 years (65% female) were randomized. Intention-to-treat (ITT) and per-protocol analyses (PPA) were performed. For both group interventions, we observed significant and clinically relevant improvements in BMI-SDS and QoL over the course of time, with small to large effect sizes. Contrary to our hypothesis, the age-specific intervention was not superior to the age-unspecific CBT approach.
The Mandate and Potential of the Peacebuilding Commission within the United Nations System
(2019)
Against the background of the internationally rising number of conflict relapses, particularly following civil wars already officially declared over, and the resulting growing relevance of the international community's peacebuilding measures, this contribution examines the work of the United Nations Peacebuilding Commission. After some introductory remarks on the concept of peacebuilding itself and on the composition and functioning of the Commission, its individual mandates are first worked out systematically and placed in the context of the United Nations peacebuilding system, and an evaluative balance of their fulfilment to date is drawn. This is followed by an account of the Commission's future possibilities in the field of peacebuilding, with particular regard to its potential within the United Nations system and to the relevant aspects of international law.
Transcending the conventional debate around efficiency in sustainable consumption, anti-consumption patterns leading to decreased levels of material consumption have been gaining importance. Change agents are crucial for the promotion of such patterns, so there may be lessons for governance interventions that can be learnt from the every-day experiences of those who actively implement and promote sustainability in the field of anti-consumption. Eighteen social innovation pioneers, who engage in and diffuse practices of voluntary simplicity and collaborative consumption as sustainable options of anti-consumption share their knowledge and personal insights in expert interviews for this research. Our qualitative content analysis reveals drivers, barriers, and governance strategies to strengthen anti-consumption patterns, which are negotiated between the market, the state, and civil society. Recommendations derived from the interviews concern entrepreneurship, municipal infrastructures in support of local grassroots projects, regulative policy measures, more positive communication to strengthen the visibility of initiatives and emphasize individual benefits, establishing a sense of community, anti-consumer activism, and education. We argue for complementary action between top-down strategies, bottom-up initiatives, corporate activities, and consumer behavior. The results are valuable to researchers, activists, marketers, and policymakers who seek to enhance their understanding of materially reduced consumption patterns based on the real-life experiences of active pioneers in the field.
This work demonstrates the fabrication of 1D nanostrands composed of stimuli-responsive microgels. Microgels are well-known materials able to respond to various stimuli from the outer environment. Since these microgels respond to an external stimulus via a volume change, a targeted mechanical response can be achieved. By carefully choosing the right composition of the polymer matrix, microgels can be designed to react precisely to the targeted stimuli (e.g. drug delivery via pH and temperature changes, or selective contractions through changes in electrical current [125]).
This work aimed to create flexible nano-filaments capable of fast anisotropic contractions similar to muscle filaments. For the fabrication of such filaments or strands, nanostructured templates (PDMS wrinkles) were chosen because of their facile, low-cost fabrication and the versatile tunability of their dimensions. Additionally, wrinkling is a well-known, lithography-free method that enables the fabrication of nanostructures in a reproducible manner and with high long-range periodicity.
In Chapter 2.1, it was shown for the first time that microgels, as soft-matter particles, can be aligned into densely packed microgel arrays of various lateral dimensions. The alignment of microgels with different compositions (e.g. VCL/AAEM, NIPAAm, NIPAAm/VCL and charged microgels) was demonstrated using different assembly techniques (e.g. spin-coating, template-confined molding). One experimental parameter was deliberately kept constant: the SiOx surface composition of the templates and substrates (e.g. oxidized PDMS wrinkles, Si wafers and glass slides). The fabrication of nanoarrays proved feasible with all tested microgel types. Although the microgels exhibited different deformability when aligned on a flat surface, they retained their thermo-responsivity and swelling behavior.
Toward the fabrication of 1D microgel strands, interparticle connectivity was pursued. This was achieved via different cross-linking methods (i.e. cross-linking via UV irradiation and host-guest complexation), discussed in Chapter 2.2. The microgel arrays created by the different assembly methods and microgel types were tested for their cross-linking suitability. It was observed that NIPAAm-based microgels cannot be cross-linked with UV light. Furthermore, these microgels exhibit a strong surface-particle interaction and therefore could not be detached from the given substrates. In contrast, VCL/AAEM-based microgels could both be UV cross-linked, based on the keto-enol tautomerism of the AAEM comonomer, and be detached from the substrate owing to their lower adhesion energy toward SiOx surfaces. With VCL/AAEM microgels, long one-dimensional microgel strands could be re-dispersed in water for further analysis. It was also shown that at least one lateral dimension of the freely dispersed 1D microgel strands is easily controllable by adjusting the wavelength of the wrinkled template. For further work, only VCL/AAEM-based microgels were used, in order to focus on the main aim of this work, i.e. the fabrication of 1D microgel nanostrands.
As an alternative to the unspecific and harsh UV cross-linking, host-guest complexation via diazobenzene cross-linkers and cyclodextrin hosts was explored. The idea behind this approach was to enable a future construction-kit-like approach through the incorporation of cyclodextrin comonomers into a broad variety of particle systems (e.g. microgels, nanoparticles). For this purpose, VCL/AAEM microgels were copolymerized with different amounts of mono-acrylate-functionalized β-cyclodextrin (CD). After the cross-linking capability had been successfully tested in solution, cross-linking of aligned VCL/AAEM/CD microgels was attempted. Although the cross-linking worked well, the single arrays agglomerated as soon as they came into contact with each other. Residual amounts of mono-complexed diazobenzene linkers were suspected as the reason for this behavior. Thus, end-capping strategies (e.g. excess amounts of β-cyclodextrin and coverage with azobenzene-functionalized AuNPs) were tried, but proved unsuccessful. On closer consideration, entropy effects were identified that favor the release of the complexed diazobenzene linker and thereby lead to agglomeration. To circumvent this entropy-driven effect, a multifunctional polymer bearing 50% azobenzene groups (Harada polymer) was used. First experiments with this polymer showed promising results in the form of less pronounced agglomeration (Figure 77), so this approach could be pursued in the future. In this chapter it was found that, in contrast to pearl-necklace and ribbon-like formations, particle alignment in zigzag formation provided the best compromise between stability in dispersion (see Figure 44a and Figure 51) and sufficient flexibility.
For this reason, microgel strands in zigzag formation were used for the motion analysis described in Chapter 2.3. The aim was to observe the properties of unrestrained microgel strands in solution (e.g. diffusion behavior, rotational properties and, ideally, anisotropic contraction upon a temperature increase). Initially, 1D microgel strands were manipulated via AFM in a liquid-cell setup. The strands required a higher load force than single microgels to be detached from the surface. However, the AFM did not allow controlled detachment of the strands; instead, it led to the complete removal of single microgel particles or to the strands being torn off the surface. Therefore, confocal microscopy was used to observe the motion behavior of unrestrained microgel strands in solution. Furthermore, coating the substrate surface with a repulsive polymer film proved beneficial in hindering adsorption of the strands. Confocal and wide-field microscopy videos showed that the microgel strands exhibit translational and rotational diffusive motion in solution without perceptible bending. Unfortunately, the detection of the anisotropic stimuli-responsive contraction of the freely moving microgel strands was not possible with these methods. In summary, the flexibility of microgel strands is more comparable to the mechanical behavior of a semi-flexible cable than to that of a yarn. The strands studied here consist of dozens or even hundreds of discrete submicron units strung together by cross-linking, and have few parallels in nanotechnology.
With the insights gained in this work on microgel-surface interactions, a targeted functionalization of the template and substrate surfaces can in the future be conducted to actively prevent unwanted microgel adsorption for a given microgel system (e.g. PVCL and a polystyrene coating [235]). This measure would make the discussed alignment methods more versatile. As shown herein, the assembly methods enable versatile microgel alignment (e.g. microgel meshes, double and triple strands). Going further, one could use more complex templates (e.g. ceramic rhombs and star-shaped wrinkles, Figure 14) to expand the possibilities of microgel alignment and to precisely control the aspect ratios (e.g. microgel rods with homogeneous size distributions).
The University Campus Golm
(2019)
To the west of Potsdam's city centre lies the Golm campus, the largest site of the University of Potsdam. Its very different buildings tell of the numerous institutions that have occupied the area over time: from the mid-1930s the Walther Wever Barracks stood here, housing from 1943 the Air Signals Division of the Commander-in-Chief of the Luftwaffe. In 1951 a training institution of the Ministry for State Security moved in, which existed, under various names, until 1989. In July 1991 the newly founded University of Potsdam took over the premises, which today form part of the Golm Science Park.
The book traces the history of the site and invites the reader on a walk across today's university campus. With more than 110 photos and a detailed site map.
“Blame it on the Russians”
(2019)
The politics of zoom
(2019)
Following the mandate in the Paris Agreement for signatories to provide “climate services” to their constituents, “downscaled” climate visualizations are proliferating. But the process of downscaling climate visualizations does not neutralize the political problems with their synoptic global sources—namely, their failure to empower communities to take action and their replication of neoliberal paradigms of globalization. In this study we examine these problems as they apply to interactive climate‐visualization platforms, which allow their users to localize global climate information to support local political action. By scrutinizing the political implications of the “zoom” tool from the perspective of media studies and rhetoric, we add to perspectives of cultural cartography on the issue of scaling from our fields. Namely, we break down the cinematic trope of “zooming” to reveal how it imports the political problems of synopticism to the level of individual communities. As a potential antidote to the politics of zoom, we recommend a downscaling strategy of connectivity, which associates rather than reduces situated views of climate to global ones.
Unfolding the history of one of the oldest human values, the freedom of expression, while defining its limits, is a complicated task. Does freedom stop where hate starts? This very old dilemma is, now more than ever before, revealing new dimensions. Politicians and new laws aim at regulating free expression, while disagreements over such regulation gradually become a source of endless conflict in newly formed multicultural, interconnected, and digitized societies. The example of the Network Enforcement Act is used to understand the idea of restrictive legal practices in Germany, but also to highlight the fact that law is a human construction created in order to regulate communication among individuals. Alternatives to purely legal practices are summarized to show other dimensions of regulating hate speech that do not involve top-down approaches. The article proposes the approach of restorative justice as a combination of legal and mediative practices in cases of hate speech. One advantage of the restorative justice approach elaborated in this article is its potential to remedy the inner hate and the pain of both the victim and the perpetrator. Finally, revealing parts of the history and new aspects of the ‘hate speech puzzle’ leads to a questioning of contemporary social structures that possibly generate hate itself.
We aimed at unveiling the role of executive functions (EFs) and language-related skills in spelling for mono- versus multilingual primary school children. We focused on EF and language-related skills, in particular lexicon size and phonological awareness (PA), because these factors were found to predict spelling in studies predominantly conducted with monolinguals, and because multilingualism can modulate these factors. There is evidence for (a) a bilingual advantage in EF due to constant high cognitive demands through language control, (b) a smaller mental lexicon in German and (c) possibly better PA. Multilinguals in Germany show on average poorer German language proficiency, which can negatively influence performance on language-based tasks. Thus, we included two spelling tasks to tease apart spelling based on lexical knowledge (i.e., word spelling) from spelling based on non-lexical strategies (i.e., non-word spelling). Our sample consisted of heterogeneous third graders from Germany: 69 monolinguals (age: M = 108 months) and 57 multilinguals (age: M = 111 months). On less language-dependent tasks (e.g., non-word spelling, PA, intelligence, short-term memory (STM) and three EF tasks testing switching, inhibition, and working memory), performance of both groups did not differ significantly. However, multilinguals performed significantly more poorly than monolinguals on tasks measuring German lexicon size and word spelling. Regression analyses revealed that for multilinguals, inhibition was related to spelling, whereas switching was the only EF component to influence word spelling in monolinguals and non-word spelling performance in both groups. By adding lexicon size and other language-related factors to the regression models, the influence of switching was reduced to insignificance, but inhibition remained significant for multilinguals.
Language-related skills best predicted spelling, and both language groups shared those variables: PA for word spelling, and STM for non-word spelling. Additionally, multilinguals' word spelling performance was also predicted by their German lexicon size, and their non-word spelling performance by PA. This study offers an in-depth look at spelling acquisition at a certain point of literacy development. Mono- and multilinguals have the predominant factors for spelling in common, but, probably owing to superior language knowledge, monolinguals were already able to make use of EF during spelling. For multilinguals, German lexicon size was more important for spelling than EF; for them, these functions might come into play only at a later stage.
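The reported regression logic, an EF predictor whose effect shrinks once language-related predictors enter the model, can be sketched with plain least squares on simulated data; all variable names and effect sizes below are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_r2(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Simulated data (illustrative only): spelling driven mainly by lexicon size
# and phonological awareness; 'switching' correlates with lexicon size, so it
# looks predictive on its own but loses ground once lexicon size is added.
n = 120
lexicon = rng.normal(size=n)
pa = rng.normal(size=n)
switching = 0.6 * lexicon + 0.8 * rng.normal(size=n)
spelling = 1.0 * lexicon + 0.8 * pa + 0.5 * rng.normal(size=n)

r2_step1 = ols_r2(switching[:, None], spelling)                   # EF only
r2_step2 = ols_r2(np.column_stack([switching, lexicon, pa]), spelling)

# Adding language-related predictors absorbs the variance the EF predictor
# shared with them, mirroring the reported reduction of the switching effect.
assert r2_step2 > r2_step1
```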
There is evidence both for mental number representations along a horizontal mental number line with larger numbers to the right of smaller numbers (for Western cultures) and for a physically grounded, vertical representation where “more is up.” Few studies have compared effects in the horizontal and vertical dimension, and none so far have combined both dimensions within a single paradigm where numerical magnitude was task-irrelevant and neither dimension was primed by a response dimension. We now investigated number representations over both dimensions, building on findings that mental representations of numbers and space co-activate each other. In a Go/No-go experiment, participants were auditorily primed with a relatively small or large number and then visually presented with quasi-randomly distributed distractor symbols and one Arabic target number (in Go trials only). Participants pressed a central button whenever they detected the target number and otherwise refrained from responding. Responses were not more efficient when small numbers were presented to the left and large numbers to the right. However, results indicated that large numbers were associated with upper space more strongly than small numbers. This suggests that in two-dimensional space, when no response dimension is given, numbers are conceptually associated with vertical, but not horizontal, space.
Hantavirus assembly and budding are governed by the surface glycoproteins Gn and Gc. In this study, we investigated the glycoproteins of Puumala, the most abundant Hantavirus species in Europe, using fluorescently labeled wild-type constructs and cytoplasmic tail (CT) mutants. We analyzed their intracellular distribution, co-localization and oligomerization, applying comprehensive live, single-cell fluorescence techniques, including confocal microscopy, imaging flow cytometry, anisotropy imaging and Number & Brightness analysis. We demonstrate that Gc is significantly enriched in the Golgi apparatus in the absence of other viral components, while Gn is mainly restricted to the endoplasmic reticulum (ER). Importantly, upon co-expression both glycoproteins were found in the Golgi apparatus. Furthermore, we show that an intact CT of Gc is necessary for efficient Golgi localization, while the CT of Gn influences protein stability. Finally, we found that Gn assembles into higher-order homo-oligomers, mainly dimers and tetramers, in the ER, while Gc was present as a mixture of monomers and dimers within the Golgi apparatus. Our findings suggest that PUUV Gc is the driving factor of the targeting of Gc and Gn to the Golgi region, while Gn possesses a significantly stronger self-association potential.
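For reference, the Number & Brightness analysis mentioned above infers, per pixel, an apparent brightness and an apparent particle number from the mean and variance of the intensity fluctuations (standard definitions, stated here as background, not restated from this abstract):

```latex
% Apparent brightness B and apparent number N from the intensity mean <k>
% and variance sigma^2; higher B indicates higher-order oligomers.
\[
  B = \frac{\sigma^{2}}{\langle k \rangle}, \qquad
  N = \frac{\langle k \rangle^{2}}{\sigma^{2}}
\]
```

Comparing B against a monomeric reference is what allows the dimer/tetramer assignment for Gn and the monomer/dimer mixture for Gc.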
Using the example of the earth and environmental sciences (including the landscape- and site-related subfields of the agricultural sciences), this contribution shows that personal research data also occur in seemingly “unsuspicious” disciplines. A review of the literature shows that general guidelines on data protection in research offer little support for working with the cases that are particularly relevant to these disciplines. For the spatial data that are especially relevant in the earth and environmental sciences, there is the additional problem that even legal experts disagree on how such data are to be assessed under data protection law. The results of an empirical pilot study reveal a whole range of different types of personal research data that play a role in the research practice of the earth and environmental sciences. They also suggest that, owing to a lack of familiarity with data protection, the handling of personal data in this research practice does not always meet the legal requirements. Support from learned societies and infrastructure institutions (for instance, discipline-specific guidelines, qualified advice, or institutionalized options for secure data archiving and, where necessary, access-restricted publication) is also largely absent. This situation creates challenges for the further development of the disciplinary data culture and data infrastructure, for example within the process of establishing a National Research Data Infrastructure (NFDI). This contribution outlines options for infrastructure institutions to support this development.
Force plays a fundamental role in the regulation of biological processes. Cells can sense the mechanical properties of the extracellular matrix (ECM) by applying forces and transmitting mechanical signals. They further use mechanical information for regulating a wide range of cellular functions, including adhesion, migration, proliferation, as well as differentiation and apoptosis. Even though it is well understood that mechanical signals play a crucial role in directing cell fate, surprisingly little is known about the range of forces that define cell-ECM interactions at the molecular level.
Recently, synthetic molecular force sensor (MFS) designs have been established for measuring the molecular forces acting at the cell-ECM interface. MFSs detect the traction forces generated by cells and convert this mechanical input into an optical readout. They are composed of calibrated mechanoresponsive building blocks and are usually equipped with a fluorescence reporter system. To date, many different MFS designs have been introduced and successfully used for measuring forces involved in the adhesion of mammalian cells. These MFSs utilize different molecular building blocks, such as double-stranded deoxyribonucleic acid (dsDNA) molecules, DNA hairpins and synthetic polymers like polyethylene glycol (PEG). However, the currently available MFS designs lack ECM-mimicking properties.
In this work, I introduce a new MFS building block for cell biology applications, derived from the natural ECM. It combines mechanical tunability with the ability to mimic the native cellular microenvironment. Inspired by structural ECM proteins with load bearing function, this new MFS design utilizes coiled coil (CC)-forming peptides. CCs are involved in structural and mechanical tasks in the cellular microenvironment and many of the key protein components of the cytoskeleton and the ECM contain CC structures. The well-known folding motif of CC structures, an easy synthesis via solid phase methods and the many roles CCs play in biological processes have inspired studies to use CCs as tunable model systems for protein design and assembly. All these properties make CCs ideal candidates as building blocks for MFSs. In this work, a series of heterodimeric CCs were designed, characterized and further used as molecular building blocks for establishing a novel, next-generation MFS prototype.
A mechanistic molecular understanding of their structural response to mechanical load is essential for revealing the sequence-structure-mechanics relationships of CCs. Here, synthetic heterodimeric CCs of different length were loaded in shear geometry and their mechanical response was investigated using a combination of atomic force microscope (AFM)-based single-molecule force spectroscopy (SMFS) and steered molecular dynamics (SMD) simulations. SMFS showed that the rupture forces of short heterodimeric CCs (3-5 heptads) lie in the range of 20-50 pN, depending on CC length, pulling geometry and the applied loading rate (dF/dt). Upon shearing, an initial rise in the force, followed by a force plateau and ultimately strand separation was observed in SMD simulations. A detailed structural analysis revealed that CC response to shear load depends on the loading rate and involves helix uncoiling, uncoiling-assisted sliding in the direction of the applied force and uncoiling-assisted dissociation perpendicular to the force axis.
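The loading-rate dependence of the rupture force noted above follows the standard Bell-Evans picture of single-molecule force spectroscopy (a textbook relation, not spelled out in the abstract): the most probable rupture force grows logarithmically with the loading rate r = dF/dt:

```latex
% Bell-Evans model: most probable rupture force vs. loading rate r = dF/dt,
% with x_beta the distance to the transition state along the pulling
% coordinate and k_off the zero-force dissociation rate.
\[
  F^{*} = \frac{k_{\mathrm{B}}T}{x_{\beta}}
          \ln\!\left(\frac{r\, x_{\beta}}{k_{\mathrm{off}}\, k_{\mathrm{B}}T}\right)
\]
```

Fitting F* against ln r across loading rates is the usual way such 20-50 pN rupture forces are calibrated, which is what makes the CCs usable as force sensors.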
The application potential of these mechanically characterized CCs as building blocks for MFSs has been tested in 2D cell culture applications with the goal of determining the threshold force for cell adhesion. Fully calibrated, 4- to 5-heptad-long CC motifs (CC-A4B4 and CC-A5B5) were used for functionalizing glass surfaces with MFSs. 3T3 fibroblasts and endothelial cells carrying mutations in a signaling pathway linked to cell adhesion and mechanotransduction processes were used as model systems for time-dependent adhesion experiments. A5B5-MFS efficiently supported cell attachment to the functionalized surfaces for both cell types, while A4B4-MFS failed to maintain attachment of 3T3 fibroblasts after the first 2 hours of initial cell adhesion. This difference in cell adhesion behavior demonstrates that the magnitude of cell-ECM forces varies depending on the cell type and further supports the application potential of CCs as mechanoresponsive and tunable molecular building blocks for the development of next-generation protein-based MFSs. This novel CC-based MFS design is expected to provide a powerful new tool for observing cellular mechanosensing processes at the molecular level and to deliver new insights into the mechanisms and forces involved. This MFS design, utilizing mechanically tunable CC building blocks, will not only allow for measuring the molecular forces acting at the cell-ECM interface, but also yield a new platform for the development of mechanically controlled materials for a large number of biological and medical applications.
In this work we investigated ultrafast demagnetization in a Heusler alloy. This material is a half-metal and exists in a ferromagnetic phase. A special feature of the investigated alloy is the structure of its electronic bands, which leads to a specific density of states: the majority electrons form a metal-like band structure, while the minority electrons exhibit a gap near the Fermi level, as in a semiconductor. This particularity makes the material well suited as a model system for proof-of-principle studies of demagnetization. Using pump-probe experiments, we carried out time-resolved measurements to determine the demagnetization times. For pumping we used ultrashort laser pulses with a duration of around 100 fs. We employed two excitation regimes with two different wavelengths, namely 400 nm and 1240 nm. By decreasing the photon energy to the size of the minority-electron gap, we explored the effect of the gap on the demagnetization dynamics. In this work we used, for the first time, an optical parametric amplifier (OPA) for generating the laser irradiation in the long-wavelength regime, and tested it at the FemtoSpeX beamline of the BESSY II electron storage ring. With this new technique we measured wavelength-dependent demagnetization dynamics. We found that the demagnetization time correlates with the photon energy of the excitation pulse: higher photon energy leads to faster demagnetization in our material. We attribute this result to the existence of the energy gap for minority electrons and explain it via Elliott-Yafet scattering events. Additionally, we applied a new probe method for the magnetization state and verified its effectiveness: the well-known XMCD (X-ray magnetic circular dichroism), which we adapted for measurements in reflection geometry. Static experiments confirmed that the purely electronic dynamics can be separated from the magnetic dynamics.
We used the photon energy fixed at the L3 edge of the corresponding elements, with circular polarization. The appropriate angle of incidence was determined from static measurements. Using this probe method in dynamic measurements, we explored the electronic and magnetic dynamics in this alloy.
Regulatory focus is a motivational construct that describes humans' motivational orientation during goal pursuit. It is conceptualized as a chronic, trait-like, as well as a momentary, state-like orientation. Whereas there is a large number of measures to capture chronic regulatory focus, measures for its momentary assessment are only just emerging. This paper presents the development and validation of a measure of Momentary-Chronic Regulatory Focus. Our development incorporates the distinction between self-guide and reference-point definitions of regulatory focus. Ideal and ought striving are the promotion and prevention dimensions in the self-guide system; gain and non-loss regulatory focus are the respective dimensions within the reference-point system. Three survey-based studies test the structure, psychometric properties, and validity of the measure in its version to assess chronic regulatory focus (two samples of working participants, N = 389, N = 672; one student sample [time 1, N = 105; time 2, n = 91]). In two further studies, an experience sampling study with students (N = 84, k = 1649) and a daily-diary study with working individuals (N = 129, k = 1766), the measure was applied to assess momentary regulatory focus. Multilevel analyses test the momentary measure's factorial structure, provide support for its sensitivity to capture within-person fluctuations, and provide evidence for concurrent construct validity.
Optical flow models as an open benchmark for radar-based precipitation nowcasting (rainymotion v0.1)
(2019)
Quantitative precipitation nowcasting (QPN) has become an essential technique in various application contexts, such as early warning or urban sewage control. A common heuristic prediction approach is to track the motion of precipitation features from a sequence of weather radar images and then to displace the precipitation field to the imminent future (minutes to hours) based on that motion, assuming that the intensity of the features remains constant (“Lagrangian persistence”). In that context, “optical flow” has become one of the most popular tracking techniques. Yet the present landscape of computational QPN models still struggles to produce open software implementations. Focusing on this gap, we have developed and extensively benchmarked a stack of models based on different optical flow algorithms for the tracking step and a set of parsimonious extrapolation procedures based on image warping and advection. We demonstrate that these models provide skillful predictions comparable with or even superior to state-of-the-art operational software. Our software library (“rainymotion”) for precipitation nowcasting is written in the Python programming language and openly available at GitHub (https://github.com/hydrogo/rainymotion, Ayzel et al., 2019). That way, the library may provide fast, free, and transparent solutions that could serve as a benchmark for further model development and hypothesis testing, a benchmark that is far more advanced than the conventional benchmark of Eulerian persistence commonly used in QPN verification experiments.
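rainymotion's own algorithms are not reproduced here; as an illustration of the general "track, then displace" recipe, a minimal pure-NumPy sketch estimates a single global displacement between two radar frames by FFT phase correlation (a crude stand-in for per-pixel optical flow) and advects the latest field with it, i.e. Lagrangian persistence:

```python
import numpy as np

def phase_correlation_shift(prev, curr):
    """Estimate the integer (dy, dx) shift between two frames via FFT
    phase correlation; returns the shift s such that rolling prev by
    (-dy, -dx) reproduces curr."""
    F1, F2 = np.fft.fft2(prev), np.fft.fft2(curr)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12       # normalize to keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak coordinates to signed shifts (wrap-around convention).
    if dy > prev.shape[0] // 2:
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return dy, dx

def nowcast(prev, curr, steps=1):
    """Lagrangian persistence: displace the newest field along the
    estimated motion, keeping intensities constant."""
    dy, dx = phase_correlation_shift(prev, curr)
    # The per-frame motion is (-dy, -dx); extrapolate it 'steps' frames ahead.
    return np.roll(curr, (-dy * steps, -dx * steps), axis=(0, 1))

# Synthetic check: a rain cell moving 2 px down and 1 px right per frame.
field = np.zeros((64, 64))
field[10:15, 10:15] = 1.0
frame0 = field
frame1 = np.roll(field, (2, 1), axis=(0, 1))
pred = nowcast(frame0, frame1, steps=1)
expected = np.roll(field, (4, 2), axis=(0, 1))
assert np.allclose(pred, expected)
```

Real nowcasting models replace the single global vector with a dense flow field and the `np.roll` with sub-pixel image warping, but the persistence assumption is the same.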
Background: Core-specific sensorimotor exercises are proven to enhance neuromuscular activity of the trunk, improve athletic performance and prevent back pain. However, the dose-response relationship and, therefore, the dose required to improve trunk function is still under debate. The purpose of the present trial will be to compare four different intervention strategies of sensorimotor exercises that will result in improved trunk function.
Methods/design: A single-blind, four-armed, randomized controlled trial with a 3-week (home-based) intervention phase and two measurement days pre and post intervention (M1/M2) is designed. Experimental procedures on both measurement days will include evaluation of maximum isokinetic and isometric trunk strength (extension/flexion, rotation) including perturbations, as well as neuromuscular trunk activity while performing strength testing. The primary outcome is trunk strength (peak torque). Neuromuscular activity (amplitude, latencies as a response to perturbation) serves as secondary outcome. The control group will perform a standardized exercise program of four sensorimotor exercises (three sets of 10 repetitions) in each of six training sessions (30 min duration) over 3 weeks. The intervention groups’ programs differ in the number of exercises, sets per exercise and, therefore, overall training amount (group I: six sessions, three exercises, two sets; group II: six sessions, two exercises, two sets; group III: six sessions, one exercise, three sets). The intervention programs of groups I, II and III include additional perturbations for all exercises to increase both the difficulty and the efficacy of the exercises performed. Statistical analysis will be performed after examining the underlying assumptions for parametric and non-parametric testing.
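For illustration only (the abstract does not specify the trial's actual allocation procedure), a generic block-randomization scheme for a four-armed trial such as this one could look as follows; the arm names follow the group labels above.

```python
import random

# Hypothetical sketch of block randomization into four study arms
# (control plus intervention groups I-III), in shuffled blocks of four
# so that arm sizes stay balanced throughout recruitment.

ARMS = ["control", "group I", "group II", "group III"]

def block_randomize(n_participants, seed=42):
    """Assign participants to arms in shuffled blocks of len(ARMS)."""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = ARMS[:]
        rng.shuffle(block)   # random order within each block
        allocation.extend(block)
    return allocation[:n_participants]
```

With 40 participants this yields exactly 10 per arm, which is the balance property block randomization is chosen for.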
Discussion: The results of the study will be clinically relevant, not only for researchers but also for (sports) therapists, physicians, coaches, athletes and the general population who have the aim of improving trunk function.
Crime fiction is considered a reliable seismograph of the inner state of a society, whose handling of deviations from the norm becomes an indicator of social and political conditions. Their shared past unites and divides the states of East-Central, Eastern, and Southeastern Europe in equal measure. The fateful upheavals of the twentieth century naturally also found their way into the respective national crime literatures. The texts examined in this volume are as diverse as the individual countries and regions. On the one hand, they offer insights into the process by which crime fiction emerged and established itself in the Slavic world; on the other, they reflect current developments in this popular and timeless genre.
Literary crime accompanied Prof. Dr. Norbert P. Franz throughout his active academic career. In his honor, a scholarly conference was held at the University of Potsdam in spring 2017, whose contributions are collected in this volume.
Since German reunification, the municipal system of the state of Brandenburg has been transformed by numerous territorial and functional administrative reforms.
This working paper of the Kommunalwissenschaftliches Institut of the University of Potsdam presents these past reforms as well as the current administrative structure and population structure of the state of Brandenburg (as of 1 July 2018). Demographic development was, and remains, an important driver of reform. Constitutional foundations for municipal reforms in Brandenburg are also discussed.
Subsequently, the possible effects of the Act on the Further Development of the Municipal Level (Gesetz zur Weiterentwicklung der gemeindlichen Ebene) of 15 October 2018 on future reforms of Brandenburg's municipal system are discussed on the basis of a case study from the Oderland model region. This act marks a turning point in Brandenburg's reform strategy to date, as reforms are to be carried out on a voluntary basis for the first time.
A network analysis in the case study focuses in particular on actor constellations in the reform process. It shows that the chief administrative officers of reform-minded municipalities exert considerable influence on decision-making processes.
Language education and German as a second language (DaZ) have so far not been systematically anchored as a cross-cutting task in secondary-level teacher education at the University of Potsdam. Against this background, the project "Linguistic Heterogeneity as a Challenge in Teacher Education" (led by Prof. Christoph Schroeder, subproject 3.2 of the PSI project, 2015-2018) aimed to initiate a cross-disciplinary engagement with this topic. To this end, courses on "linguistic heterogeneity" and "language education in the subject classroom" were held in cooperation with subject-didactics units. These subject-specific guides emerged from those courses. As freely accessible documents, they serve both instructors and students as sources of information: each thematic subsection contains a central diagram or quotation accompanied by a brief explanation. For teaching, individual subsections can be selected as from a construction kit, without all subsections having to be covered. The reproduced and transcribed student products in particular provide a pool of material for competence-oriented and application-based teacher education in the field of language education / DaZ.
Membrane adhesion is a fundamental biological process in which membranes are attached to neighboring membranes or surfaces. Membrane adhesion emerges from a complex interplay between the binding of membrane-anchored receptors/ligands and the membrane properties. In this work, we study membrane adhesion mediated by lipid-anchored saccharides using microsecond-long full-atomistic molecular dynamics simulations. Motivated by neutron scattering experiments on membrane adhesion via lipid-anchored saccharides, we investigate the role of LeX, Lac1, and Lac2 saccharides and membrane fluctuations in membrane adhesion.
We study the binding of saccharides in three different systems: for saccharides in water, for saccharides anchored to essentially planar membranes at fixed separations, and for saccharides anchored to apposing fluctuating membranes. Our simulations of two saccharides in water indicate that the saccharides engage in weak interactions to form dimers. We find that the binding occurs in a continuum of bound states rather than in a certain number of well-defined bound structures, which we term "diffuse binding".
The binding of saccharides anchored to essentially planar membranes strongly depends on the separation of the membranes, which is fixed in our simulation system. We show that the binding constants for trans-interactions of two lipid-anchored saccharides monotonically decrease with increasing separation. Saccharides anchored to the same membrane leaflet engage in cis-interactions with binding constants comparable to the trans-binding constants at the smallest membrane separations. The interplay of cis- and trans-binding can be investigated in simulation systems with many lipid-anchored saccharides. For Lac2, our simulation results indicate a positive cooperativity of trans- and cis-binding. In this cooperative binding, the trans-binding constant is enhanced by the cis-interactions. For LeX, in contrast, we observe no cooperativity between trans- and cis-binding. In addition, we determine the forces generated by trans-binding of lipid-anchored saccharides in planar membranes from the binding-induced deviations of the lipid anchors. We find that the forces acting on trans-bound saccharides increase with increasing membrane separation, to values of the order of 10 pN.
The binding of saccharides anchored to the fluctuating membranes results from an interplay between the binding properties of the lipid-anchored saccharides and membrane fluctuations. Our simulations, which have the same average separation of the membranes as obtained from the neutron scattering experiments, yield a binding constant larger than in planar membranes with the same separation. This result demonstrates that membrane fluctuations play an important role at average membrane separations which are seemingly too large for effective binding. We further show that the probability distribution of the local separation can be well approximated by a Gaussian distribution. We calculate the relative membrane roughness and show that our results are in good agreement with the roughness values reported from the neutron scattering experiments.
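A simplified sketch of the roughness analysis described above: if the relative membrane roughness is taken as the standard deviation of the sampled local separations, and the separation histogram is compared against a Gaussian of the same mean and width, the computation reduces to a few lines. Function names are illustrative and not from the study's analysis code.

```python
import math

# Hypothetical sketch: roughness as the standard deviation of local
# membrane separations, plus the Gaussian approximation of their
# probability distribution mentioned in the abstract.

def roughness(separations):
    """Relative roughness = standard deviation of local separations."""
    n = len(separations)
    mean = sum(separations) / n
    var = sum((s - mean) ** 2 for s in separations) / n
    return math.sqrt(var)

def gaussian_pdf(x, mean, sigma):
    """Gaussian approximation of the local-separation distribution."""
    z = (x - mean) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))
```

Comparing the empirical histogram of separations against `gaussian_pdf` with the same mean and `roughness` as sigma is one simple way to check the Gaussian approximation reported in the text.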
“I mean, no soy psicóloga”
(2019)
This paper is concerned with the qualitative analysis of the use of the English discourse marker I mean in Spanish and Portuguese online discourses (in online fora, blogs or user comments on websites). The examples are retrieved from the Corpus del Español (Web/Dialects) as well as the Corpus do Português (Web/Dialects).
This dissertation aims, in general, to justify the application of the dialectical methodology to the field of the philosophy of language and to carry out a systematic treatment of a limited part of the philosophy of language by means of dialectics. To clarify and establish this approach, which is scarcely, if at all, represented in the research literature, I first draw on the philosophical reflections of two authors: Hegel and Wittgenstein.
At first glance, Hegel and Wittgenstein are authors with little in common, except that both engaged with philosophy as a discipline and inevitably treated a shared topic, language; neither a substantive nor a methodological connection between them would seem apparent. The first premise of this dissertation, with regard to the history of ideas, is to show that Hegel's concept of spirit (Geist) and Wittgenstein's form of life are two approaches to, and results of, a philosophical effort that jointly undertake the necessary dissolution or overcoming of skeptical argumentation. Indeed, in his Philosophical Investigations Wittgenstein developed an argument that has been labeled the "rule-following paradox" and that the secondary literature (chiefly Kripke) has read as a kind of skeptical argument. Accordingly, Wittgenstein's theory of language has been interpreted either as a dissolution of this skepticism or simply as itself a skeptical text (Brandom). The first aim of my dissertation is to show that this paradox, as a skeptical argument, has remained incomplete, and that it can be regarded as the first decisive step toward the highest form of the skeptical challenge, the antinomy. A complete skeptical argument means that both the sole resolution of the paradox, dispositionalism, and the negation of that theory are provable. Starting from the resolution of the rule-following paradox presented in the Investigations, I will therefore attempt to complete an antinomy of the concept of normativity with respect to linguistic rules, analogous to the cosmological antinomy developed by Kant (thesis cum antithesis). The second aim of my dissertation is consequently to show, first, that the Kantian resolution of the antinomy is ineffective against the antinomy of normativity; second, that this antinomy entails a necessary confrontation with a radical skepticism, and that we are logically compelled not merely to redefine some theory within the philosophy of language but to question, fundamentally and more deeply, our methodology itself, that is, the application of the usual norms of rationality; and third, that Hegel's dialectic emerges as the methodological resolution of such a radical skeptical challenge, indeed as the resolution of an antinomy as such. It is for this methodological revision that Hegel's dialectic is invoked.
Nevertheless, the purpose of this dissertation is not limited to presenting an interpretation of Hegel's dialectic or an overcoming of Wittgenstein's form of life; rather, it returns to the problems and principles of the concepts of the form of life and of theoretical spirit and, by means of Hegel's dialectic, moves beyond them in order to better understand the place and function of language. This work is carried out within the framework of a scientific project; in other words, it uses the methodological results of two philosophers to present a scientific program. Its claim is accordingly to gain new knowledge about language by drawing on Hegel's dialectic, constructively combining the two contradictory moments of cognition: normativity as effected by consciousness and normativity as effected by dispositions. The concrete gain of this methodology is the ability to establish a philosophy of language as a system, one that makes it possible to grasp linguistic phenomena in all their aspects in a coherent way. In terms of content, this program aims to derive dialectically the general stage of the concept of language as a moment of the concept of spirit, that is, to determine the proper sense of language. A complete treatment of the philosophy of language by means of dialectics could not, however, be carried out here; the scope of the linguistic categories derived with the help of dialectics is limited to the doctrine of the imagination, which includes the doctrines of general semiology and of grammar.
Taking a reflective view of one's own practice is an important task for teachers in their everyday professional lives. Reflective competence is a crucial prerequisite for drawing meaningful conclusions for oneself and for the design of teaching-learning processes. This is one reason why, already during the practical semester, in which student teachers gather their first extended practical experience as teachers, great emphasis is placed on fostering and developing reflective skills. In his thesis, Thomas Auge addresses the central question of the extent to which, and in what form, students already take a reflective view of themselves and their own work during the practical semester. The data consisted of weekly written reviews that students produced online during the practical semester on a platform developed at the University of Potsdam (padup.uni-potsdam.de). The data were analyzed qualitatively using content analysis. In addition to a detailed scholarly engagement with the topic of reflective competence, the thesis offers deeper insight into how student teachers look at their own teaching. The results reveal a continuing need for action with regard to fostering reflective competences in teacher education.
Light-switchable proteins are being used increasingly to understand and manipulate complex molecular systems. The success of this approach has fueled the development of tailored photo-switchable proteins, to enable targeted molecular events to be studied using light. The development of novel photo-switchable tools has to date largely relied on rational design. Complementing this approach with directed evolution would be expected to facilitate these efforts. Directed evolution, however, has been relatively infrequently used to develop photo-switchable proteins due to the challenge presented by high-throughput evaluation of switchable protein activity. This thesis describes the development of two genetic circuits that can be used to evaluate libraries of switchable proteins, enabling optimization of both the on- and off-states. A screening system is described, which permits detection of DNA-binding activity based on conditional expression of a fluorescent protein. In addition, a tunable selection system is presented, which allows for the targeted selection of protein-protein interactions of a desired affinity range. This thesis additionally describes the development and characterization of a synthetic protein that was designed to investigate chromophore reconstitution in photoactive yellow protein (PYP), a promising scaffold for engineering photo-controlled protein tools.
Preface
(2019)
The Forgotten War: Yemen
(2019)
The conflict in Yemen seems forgotten amid the world's severe humanitarian catastrophes. Yet since it escalated around four years ago, it has become one of the worst humanitarian crises in recent history, with no end in sight. Thousands of people have been killed, even more displaced, and the country faces tremendous food insecurity as well as the world's largest cholera outbreak. It is no longer just a civil war between the Houthi and Hadi factions. International interests play a major role and have turned it into a proxy war between Saudi Arabia (and its allies) on one side and Iran on the other, all at the expense of the civilian population. It is therefore urgent to analyse the actors involved and their interests in the conflict, and to search for possibilities to overcome it.
This paper deals with the contact-induced, or borrowed, coding of evidentiality in Paraguayan Spanish. It focuses in particular on the use of the Guaraní particle ndaje in Paraguayan newspaper Spanish. In this context, an attempt is made to classify the linguistic phenomenon, and a qualitative corpus analysis is carried out.
Experimenting with Lurchi
(2019)
A form-function mismatch?
(2019)
Skarn deposits are found on every continent and formed at different times from the Precambrian to the Tertiary. Typically, skarn formation is induced by a granitic intrusion into carbonate-rich sedimentary rocks. During contact metamorphism, fluids derived from the granite interact with the sedimentary host rocks, resulting in the formation of calc-silicate minerals at the expense of carbonates. These newly formed minerals generally develop in a zoned metamorphic aureole, with garnet in the proximal and pyroxene in the distal zone. Ore elements contained in the magmatic fluids precipitate in response to changes in fluid composition. The temperature decrease of the entire system, due to the cooling of the magmatic fluids and the influx of meteoric water, allows retrogression of some prograde minerals.
The Hämmerlein skarn deposit has a multi-stage history, with skarn formation during regional metamorphism and retrogression of primary skarn minerals during the granitic intrusion. Tin was mobilized during both events. The 340 Ma old tin-bearing skarn minerals show that tin was present in the sediments before the granite intrusion, and that a first Sn enrichment occurred during skarn formation from regional metamorphic fluids. In a second step, at ca. 320 Ma, tin-bearing fluids were produced by the intrusion of the Eibenstock granite. Tin, added by the granite and remobilized from skarn calc-silicates, precipitated as cassiterite.
Compared to clay or marl, the skarn is enriched in Sn, W, In, Zn, and Cu. These metals were supplied during both regional metamorphism and granite emplacement. In addition, isotopic and chemical data from skarn samples show that the granite selectively added elements such as Sn, and that there was no visible granitic contribution to the sedimentary signature of the skarn.
The example of Hämmerlein shows that a tin-rich skarn can form without an associated granite when tin has already been transported from tin-bearing sediments by aqueous metamorphic fluids during regional metamorphism. Such skarns are not economically interesting if tin is contained only in the skarn minerals. Later alteration of the skarn (for which the heat and fluid source need not be a granite), however, can lead to the formation of secondary cassiterite (SnO2), with which the skarn can become economically highly interesting.
This review summarizes features of professional development programs that aim to prepare in-service teachers to improve students’ academic language proficiency when teaching subject areas. The 38 studies reviewed suggest that all of the profiled interventions were effective to some extent. The programs share many characteristics considered important in successful teacher professional development across different subject areas. They also include some features that appear to be specific to teacher training in this particular domain. This review supports the idea that professional development helps change teachers’ thinking and practice and benefits students, if certain features are taken into consideration in its design and implementation.
Presupposition triggers differ with respect to whether their presupposition is easily accommodatable. The presupposition of focus-sensitive additive particles like also or too is often classified as hard to accommodate, i.e., these triggers are infelicitous if their presupposition is not entailed by the immediate linguistic or non-linguistic context. We tested two competing accounts for the German additive particle auch concerning this requirement: First, that it requires a focus alternative to the whole proposition to be salient, and second, that it merely requires an alternative to the focused constituent (e.g., an individual) to be salient. We conducted two experiments involving felicity judgments as well as questions asking for the truth of the presupposition to be accommodated. Our results suggest that the latter account is too weak: mere previous mention of a potential alternative to the focused constituent is not enough to license the use of auch. However, our results also suggest that the former account is too strong: when an alternative of the focused constituent is prementioned and certain other accommodation-enhancing factors are present, the context does not have to entail the presupposed proposition. We tested the following two potentially accommodation-enhancing factors: First, whether the discourse can be construed to be from the perspective of the individual that the presupposition is about, and second, whether the presupposition is needed to establish coherence between the host sentence of the additive particle and the preceding context. The factor coherence was found to play a significant role. Our results thus corroborate the results of other researchers showing that discourse participants go to great lengths in order to identify a potential presupposition to accommodate, and we contribute to these results by showing that coherence is one of the factors that enhance accommodation.
Today an active civil society is taken for granted as a relevant actor in the political process. This holds for the domestic sphere as well as for the level of international law. The engagement of civil-society actors within the constitutionally bounded framework of the political process raises questions that this essay approaches. First, the concept of civil society is derived (I); then the functions of the public sphere in a republican polity governed by the rule of law are discussed (II); finally, current topics that have developed in recent years are presented and formulated as initial research questions (III).
Preface
(2019)
Docendo Discimus
(2019)
Study orientation grounded in personality psychology through online-based self-assessments
(2019)
We combine ultrafast X-ray diffraction (UXRD) and time-resolved Magneto-Optical Kerr Effect (MOKE) measurements to monitor the strain pulses in laser-excited TbFe2/Nb heterostructures. Spatial separation of the Nb detection layer from the laser excitation region allows for a background-free characterization of the laser-generated strain pulses. We clearly observe symmetric bipolar strain pulses if the excited TbFe2 surface terminates the sample and a decomposition of the strain wavepacket into an asymmetric bipolar and a unipolar pulse, if a SiO2 glass capping layer covers the excited TbFe2 layer. The inverse magnetostriction of the temporally separated unipolar strain pulses in this sample leads to a MOKE signal that linearly depends on the strain pulse amplitude measured through UXRD. Linear chain model simulations accurately predict the timing and shape of UXRD and MOKE signals that are caused by the strain reflections from multiple interfaces in the heterostructure.
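A toy version of the linear chain model mentioned above: point masses coupled by harmonic springs, integrated with velocity Verlet, with an initial displacement at one end standing in for the laser excitation. All parameters are illustrative and are not those of the TbFe2/Nb heterostructure.

```python
# Minimal 1D linear chain (masses m, spring constant k) of the kind used
# to simulate strain-pulse propagation. Free boundaries; the strain
# between neighbours is the quantity probed in such models.

def simulate_chain(n, steps, dt=0.05, k=1.0, m=1.0, kick=1.0):
    """Velocity-Verlet integration; the first mass starts displaced."""
    x = [0.0] * n
    v = [0.0] * n
    x[0] = kick  # stand-in for the laser-excited surface layer

    def accel(pos):
        a = [0.0] * n
        for i in range(n):
            left = pos[i - 1] - pos[i] if i > 0 else 0.0
            right = pos[i + 1] - pos[i] if i < n - 1 else 0.0
            a[i] = k * (left + right) / m
        return a

    a = accel(x)
    for _ in range(steps):
        for i in range(n):
            x[i] += v[i] * dt + 0.5 * a[i] * dt * dt
        a_new = accel(x)
        for i in range(n):
            v[i] += 0.5 * (a[i] + a_new[i]) * dt
        a = a_new
    return x, v

def strain(x):
    """Local strain between neighbouring masses."""
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]
```

Because all forces are internal, total momentum is conserved, which is a convenient sanity check on the integrator.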
The importance of Fachschaftsräte (student departmental councils) in the study entry phase: the example of the University of Potsdam
(2019)
Cocoa Bean Proteins
(2019)
The protein fractions of cocoa have been implicated in influencing both the bioactive potential and the sensory properties of cocoa and cocoa products. The objective of the present review is to show the impact of the different stages of cultivation and processing on the changes induced in the protein fractions. Special focus is placed on the major seed storage proteins throughout the different stages of processing. The review begins with a classical introduction of the extraction and characterization methods used, while addressing how classification approaches for cocoa proteins have evolved over time. The changes in protein composition during ripening and maturation of cocoa seeds are documented, together with the possible modifications during post-harvest processing (fermentation, drying, and roasting). Finally, the bioactive potential arising directly or indirectly from cocoa proteins is elucidated. The state of the art suggests that other potentially bioactive components in cocoa need to be explored, while considering the complexity of the reaction products formed during the roasting phase of post-harvest processing. Finally, the use of partially processed cocoa beans (e.g., fermented beans given a mild thermal treatment) can be recommended, as they provide a large reservoir of bioactive potential arising from the protein components that could be instrumental in functionalizing foods.
When dealing with issues that are of high societal relevance, the Earth sciences still face a lack of acceptance, which is partly rooted in insufficient communication strategies at the individual and local community level. To increase the efficiency of communication routines, science has to transform its outreach concepts to become more aware of individual needs and demands. The "encoding/decoding" concept as well as critical intercultural communication studies can offer pivotal approaches for this transformation.
Trait-based approaches to investigate (short- and long-term) phytoplankton dynamics and community assembly have become increasingly popular in freshwater and marine science. Although the nature of the pelagic habitat and the main phytoplankton taxa and their ecology are relatively similar in marine and freshwater systems, the lines of research have evolved, at least in part, separately. We compare and contrast the approaches adopted in marine and freshwater ecosystems with respect to phytoplankton functional traits. We note differences in study goals between uses of functional traits that assess community assembly and uses that relate to ecosystem processes and biogeochemical cycling, which affect the type of characteristics assigned as traits to phytoplankton taxa. Specific phytoplankton traits relevant for ecological function are examined in relation to herbivory, the amplitude of environmental change, and the spatial and temporal scales of study. Major differences are identified, including the shorter time scale of regular environmental change in freshwater ecosystems compared to the open oceans, as well as the type of sampling done by researchers based on site accessibility. Overall, we encourage researchers to better motivate why they apply trait-based analyses to their studies and to make use of process-driven approaches, which are more common in marine studies. We further propose fully comparative trait studies conducted along the habitat gradient spanning freshwater to brackish to marine systems, or along geographic gradients. Such studies will benefit from the combined strength of both fields.
The size structure of autotroph communities – the relative abundance of small vs. large individuals – shapes the functioning of ecosystems. Whether common mechanisms underpin the size structure of unicellular and multicellular autotrophs is, however, unknown. Using a global data compilation, we show that individual body masses in tree and phytoplankton communities follow power-law distributions and that the average exponents of these individual size distributions (ISD) differ. Phytoplankton communities are characterized by an average ISD exponent consistent with three-quarter-power scaling of metabolism with body mass and equivalence in energy use among mass classes. Tree communities deviate from this pattern in a manner consistent with equivalence in energy use among diameter size classes. Our findings suggest that whilst universal metabolic constraints ultimately underlie the emergent size structure of autotroph communities, divergent aspects of body size (volumetric vs. linear dimensions) shape the ecological outcome of metabolic scaling in forest vs. pelagic ecosystems.
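The power-law individual size distributions discussed above are commonly characterized by a maximum-likelihood exponent estimate. A minimal sketch of the continuous Hill-type MLE, under the assumption p(m) ∝ m^(-alpha) for m ≥ m_min (not necessarily the estimator used in the study), is:

```python
import math

# Hedged sketch: MLE of the power-law exponent alpha for an individual
# size distribution p(m) ~ m^(-alpha), m >= m_min (continuous case).

def powerlaw_exponent(masses, m_min):
    """alpha_hat = 1 + n / sum(ln(m_i / m_min)) over the tail m >= m_min."""
    tail = [m for m in masses if m >= m_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(m / m_min) for m in tail)
```

On synthetic data drawn (here via deterministic quantiles) from a pure power law with alpha = 2, the estimator recovers the exponent closely; on real tree or phytoplankton data, the choice of m_min matters greatly.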
Alluvial and transport-limited bedrock rivers constitute the majority of fluvial systems on Earth. Their long profiles hold clues to their present state and past evolution. We currently possess first-principles-based governing equations for flow, sediment transport, and channel morphodynamics in these systems, which we lack for detachment-limited bedrock rivers. Here we formally couple these equations for transport-limited gravel-bed river long-profile evolution. The result is a new predictive relationship whose functional form and parameters are grounded in theory and defined through experimental data. From this, we produce a power-law analytical solution and a finite-difference numerical solution to long-profile evolution. Steady-state channel concavity and steepness are diagnostic of external drivers: concavity decreases with increasing uplift rate, and steepness increases with an increasing sediment-to-water supply ratio. Constraining free parameters explains common observations of river form: to match observed channel concavities, gravel-sized sediments must weather and fine – typically rapidly – and valleys typically should widen gradually. To match the empirical square-root width–discharge scaling in equilibrium-width gravel-bed rivers, downstream fining must occur. The ability to assign a cause to such observations is the direct result of a deductive approach to developing equations for landscape evolution.
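As a generic illustration of the finite-difference approach mentioned above (and only that: this sketch uses a linear-diffusion stand-in, not the paper's gravel-bed transport law), a long profile can be marched forward in time with a fixed base level as follows.

```python
# Hedged sketch: explicit finite-difference evolution of a river long
# profile under dz/dt = U + D * d2z/dx2, a linear-diffusion stand-in for
# transport-limited morphodynamic equations. Stable for dt*D/dx^2 <= 0.5.

def evolve_profile(z, dx, dt, steps, diffusivity=1.0, uplift=0.0):
    """March elevations z forward in time; z[-1] is a fixed base level,
    and the upstream end is treated as a zero-flux drainage divide."""
    z = list(z)
    n = len(z)
    for _ in range(steps):
        new = list(z)
        for i in range(1, n - 1):
            curv = (z[i - 1] - 2 * z[i] + z[i + 1]) / dx ** 2
            new[i] = z[i] + dt * (uplift + diffusivity * curv)
        new[0] = new[1]  # zero-flux divide at the headwater
        z = new          # downstream end stays at base level
    return z
```

In such models, steady state emerges when uplift is balanced by the divergence of sediment flux; here a perturbation simply diffuses away toward the fixed base level.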
Geomagnetic paleosecular variations (PSVs) are an expression of geodynamo processes inside the Earth’s liquid outer core. These paleomagnetic time series provide insights into the properties of the Earth’s magnetic field, from normal behavior with a dominating dipolar geometry, over field crises, such as pronounced intensity lows and geomagnetic excursions with a distorted field geometry, to the complete reversal of the dominating dipole contribution. Particularly, long-term high-resolution and high-quality PSV time series are needed for properly reconstructing the higher frequency components in the spectrum of geomagnetic field variations and for a better understanding of the effects of smoothing during the recording of such paleomagnetic records by sedimentary archives.
In this doctoral study, full-vector paleomagnetic records were derived from 16 sediment cores recovered from the southeastern Black Sea. Age models are based on radiocarbon dating and on correlating warming/cooling cycles, monitored by high-resolution X-ray fluorescence (XRF) elemental ratios as well as ice-rafted debris (IRD) in Black Sea sediments, to the sequence of Dansgaard-Oeschger (DO) events defined from Greenland ice-core oxygen isotope stratigraphy.
In order to identify the carriers of magnetization in Black Sea sediments, core MSM33-55-1, recovered from the southeast Black Sea, was subjected to detailed rock magnetic and electron microscopy investigations. The younger part of core MSM33-55-1 has been deposited continuously since 41 ka. Before 17.5 ka, the magnetic mineral assemblage was dominated by a mixture of greigite (Fe3S4) and titanomagnetite (Fe3-xTixO4) in samples with SIRM/κLF > 10 kA m⁻¹, or exclusively by titanomagnetite in samples with SIRM/κLF ≤ 10 kA m⁻¹. Greigite was found to be generally present as crustal aggregates in locally reducing micro-environments. From 17.5 ka to 8.3 ka, the dominant magnetic mineral in this transition phase changed from greigite (17.5 to ~10.0 ka) to, probably, silicate-hosted titanomagnetite (~10.0 to 8.3 ka). After 8.3 ka, the anoxic Black Sea was a favorable environment for the formation of non-magnetic pyrite (FeS2) framboids.
To avoid compromising the paleomagnetic data with erroneous directions carried by greigite, data from samples with SIRM/κLF > 10 kA m⁻¹, shown by various methods to contain greigite, were removed from the obtained records. Consequently, full-vector paleomagnetic records, comprising directional data and relative paleointensity (rPI), were derived only from samples with SIRM/κLF ≤ 10 kA m⁻¹ from 16 Black Sea sediment cores. The obtained data sets were used to create a stack covering the time window between 68.9 and 14.5 ka, with a temporal resolution between 40 and 100 years depending on sedimentation rates.
According to the results obtained from Black Sea sediments, the second deepest minimum in relative paleointensity of the past 69 ka occurred at 64.5 ka. This field minimum during MIS 4 is associated with large declination swings beginning about 3 ka before the minimum. While a swing to 50°E is accompanied by steep inclinations (50-60°) at the coring site latitude of 42°N, the subsequent declination swing to 30°W is accompanied by shallow inclinations down to 40°. Nevertheless, these large deviations from the direction of a geocentric axial dipole field (I=61°, D=0°) cannot yet be termed 'excursional', since the latitudes of the corresponding VGPs only reach down to 51.5°N (120°E) and 61.5°N (75°W), respectively. However, these VGP positions on opposite sides of the globe imply VGP drift rates of up to 0.2° per year in between. These extreme secular variations might be the mid-latitude expression of the Norwegian-Greenland Sea excursion found at several sites much further north, in Arctic marine sediments between 69°N and 81°N.
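The VGP latitudes quoted above follow from the standard dipole conversion of a site-level direction (declination D, inclination I) into a virtual geomagnetic pole. A minimal sketch of that textbook conversion; the site coordinates below are illustrative values for the SE Black Sea region (~42°N), not the exact coring position:

```python
import math

# Standard conversion of a paleomagnetic site direction to a virtual
# geomagnetic pole (VGP). All angles in degrees; sketch for illustration.
def vgp(site_lat, site_lon, dec, inc):
    lam, D, I = (math.radians(a) for a in (site_lat, dec, inc))
    p = math.atan2(2.0, math.tan(I))                  # magnetic colatitude
    plat = math.asin(math.sin(lam) * math.cos(p)
                     + math.cos(lam) * math.sin(p) * math.cos(D))
    beta = math.asin(math.sin(p) * math.sin(D) / math.cos(plat))
    if math.cos(p) >= math.sin(lam) * math.sin(plat):
        plon = site_lon + math.degrees(beta)
    else:
        plon = site_lon + 180.0 - math.degrees(beta)
    return math.degrees(plat), plon % 360.0

# A GAD direction (D=0, I=61) observed at 42N maps to a pole near 90N:
pole_lat, pole_lon = vgp(42.0, 41.0, 0.0, 61.0)
```

Feeding in an excursional-looking direction (e.g. D=50°E with I=55°) drops the pole latitude to the low 50s °N, which is how directional swings translate into the VGP latitudes discussed in the text.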
At about 34.5 ka, the Mono Lake excursion is evidenced in the stacked Black Sea PSV record by both an rPI minimum and directional shifts. The associated VGPs from the stacked Black Sea data migrated from Alaska, via central Asia and the Tibetan Plateau, to Greenland, performing a clockwise loop. This agrees with data recorded in the Wilson Creek Formation, USA, and in Arctic sediment core PS2644-5 from the Iceland Sea, suggesting a dominant dipole field. On the other hand, the Auckland lava flows, New Zealand, Summer Lake, USA, and the Arctic sediment core from ODP Site 919 yield distinct VGPs located in the central Pacific Ocean, due to a presumably non-dipolar (multipolar) field configuration.
A directional anomaly at 18.5 ka, associated with pronounced swings in inclination and declination as well as a low in rPI, is probably contemporaneous with the Hilina Pali excursion originally reported from Hawaiian lava flows. However, virtual geomagnetic poles (VGPs) calculated from the Black Sea sediments do not reach latitudes lower than 60°N, which denotes normal, though pronounced, secular variation. During the postulated Hilina Pali excursion, the VGPs calculated from the Black Sea data migrated along the coasts of the Arctic Ocean from NE Canada (20.0 ka), via Alaska (18.6 ka) and NE Siberia (18.0 ka), to Svalbard (17.0 ka), looping clockwise through the eastern Arctic Ocean.
In addition to the Mono Lake and the Norwegian–Greenland Sea excursions, the Laschamp excursion was evidenced in the Black Sea PSV record with the lowest paleointensities at about 41.6 ka and a short-term (~500 years) full reversal centered at 41 ka. These excursions are further evidenced by an abnormal PSV index, though only the Laschamp and the Mono Lake excursions exhibit excursional VGP positions. The stacked Black Sea paleomagnetic record was also converted into one component parallel to the direction expected from a geocentric axial dipole (GAD) and two components perpendicular to it, representing only non-GAD components of the geomagnetic field. The Laschamp and the Norwegian–Greenland Sea excursions are characterized by extremely low GAD components, while the Mono Lake excursion is marked by large non-GAD contributions. Notably, negative values of the GAD component, indicating a fully reversed geomagnetic field, are observed only during the Laschamp excursion.
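The GAD decomposition described above amounts to a vector projection: convert the measured direction and intensity to Cartesian components and project onto the local geocentric-axial-dipole direction (D=0°, tan I = 2 tan λ); the remainder gives the two non-GAD components. A hedged illustration of that projection, not the study's actual code:

```python
import math

# Decompose a full-vector paleomagnetic datum into one component parallel
# to the local GAD direction and two components perpendicular to it.
def to_xyz(dec, inc, f=1.0):
    d, i = math.radians(dec), math.radians(inc)
    return (f * math.cos(i) * math.cos(d),
            f * math.cos(i) * math.sin(d),
            f * math.sin(i))

def gad_components(dec, inc, f, site_lat):
    x, y, z = to_xyz(dec, inc, f)
    inc_gad = math.atan(2.0 * math.tan(math.radians(site_lat)))
    gx, gy, gz = to_xyz(0.0, math.degrees(inc_gad))   # unit GAD vector
    parallel = x * gx + y * gy + z * gz               # GAD component
    east = y                                          # perpendicular 1
    other = -x * gz + z * gx                          # perpendicular 2
    return parallel, east, other

# A fully reversed field yields a negative GAD component, the signature
# noted above for the Laschamp excursion:
p, e, o = gad_components(180.0, -61.0, 1.0, 42.0)
```

A direction along the local GAD axis projects entirely onto the parallel component, so excursions dominated by non-GAD contributions show up as large `east`/`other` values instead.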
In summary, this doctoral thesis reconstructed high-resolution and high-fidelity PSV records from SE Black Sea sediments. The obtained record comprises three geomagnetic excursions, the Norwegian-Greenland Sea, Laschamp, and Mono Lake excursions, characterized by abnormal secular variations of different amplitudes centered at about 64.5 ka, 41.0 ka and 34.5 ka, respectively. In addition, the obtained PSV record from the Black Sea does not provide evidence for the postulated Hilina Pali excursion at about 18.5 ka. Nevertheless, the Black Sea paleomagnetic record, covering field fluctuations from normal secular variation, through excursions, to a short but full reversal, points to a geomagnetic field characterized by a large dynamic range in intensity and a highly variable superposition of dipole and non-dipole contributions from the geodynamo during the period from 68.9 to 14.5 ka.
Bertha’s Channel
(2019)
In this thesis we introduce the concept of the degree of formality. It is directed against a dualistic point of view that distinguishes only between formal and informal proofs. This dualistic attitude does not do justice to the differences among the argumentations classified as informal, and it is unproductive because the individual potential of the respective argumentation styles cannot be appreciated and remains untapped.
This thesis has two parts. In the first we analyse the concept of the degree of formality (including a discussion of the respective benefits of each degree), while in the second we demonstrate its usefulness in three case studies. In the first case study we repair Haskell B. Curry's view of mathematics, which incidentally is of great importance in the first part of this thesis, in light of the different degrees of formality. In the second case study we delineate how awareness of the different degrees of formality can be used to help students learn how to prove. Third, we show how the advantages of proofs of different degrees of formality can be combined through the development of so-called tactics having a medium degree of formality. Together the three case studies show that the degrees of formality provide a convincing solution to the problem of untapped potential.
Carbonate-rich silicate and carbonate melts play a crucial role in deep Earth magmatic processes and their melt structure is a key parameter, as it controls physical and chemical properties. Carbonate-rich melts can be strongly enriched in geochemically important trace elements. The structural incorporation mechanisms of these elements are difficult to study because such melts generally cannot be quenched to glasses, which are usually employed for structural investigations. This thesis investigates the influence of CO2 on the local environments of trace elements contained in silicate glasses with variable CO2 concentrations as well as in silicate and carbonate melts. The compositions studied include sodium-rich peralkaline silicate melts and glasses and carbonate melts similar to those occurring naturally at Oldoinyo Lengai volcano, Tanzania.
The local environments of the three elements yttrium (Y), lanthanum (La) and strontium (Sr) were investigated in synthesized glasses and melts using X-ray absorption fine structure (XAFS) spectroscopy. In particular, extended X-ray absorption fine structure (EXAFS) spectroscopy provides element-specific information on local structure, such as bond lengths, coordination numbers and the degree of disorder. To cope with the enhanced structural disorder present in glasses and melts, the EXAFS analysis was based on fitting approaches using an asymmetric distribution function as well as a correlation model according to bond valence theory. Firstly, silicate glasses quenched from high-pressure/temperature melts with up to 7.6 wt% CO2 were investigated. In strongly and extremely peralkaline glasses the local structure of Y is unaffected by the CO2 content (with oxygen bond lengths of ~2.29 Å). In contrast, the bond lengths for Sr-O and La-O increase with increasing CO2 content in the strongly peralkaline glasses, from ~2.53 to ~2.57 Å and from ~2.52 to ~2.54 Å, respectively, while they remain constant in extremely peralkaline glasses (at ~2.55 Å and ~2.54 Å, respectively). Furthermore, silicate and unquenchable carbonate melts were investigated in situ at high pressure/temperature conditions (2.2 to 2.6 GPa, 1200 to 1500 °C) using a Paris-Edinburgh press. A novel design of the pressure-medium assembly for this press was developed, which features increased mechanical stability as well as enhanced transmittance at the relevant energies, allowing transmission EXAFS on elements at low concentrations. Compared to the glasses, the Y-O, La-O and Sr-O bond lengths are elongated by up to +3% in the melts and exhibit more asymmetric pair distributions. For all investigated silicate melt compositions the Y-O bond length was found to be constant at ~2.37 Å, while in the carbonate melt the Y-O length increases slightly to 2.41 Å.
The La-O bond lengths, in turn, increase systematically over the whole silicate-carbonate melt join, from 2.55 to 2.60 Å. Sr-O bond lengths in melts increase from ~2.60 to 2.64 Å from the pure silicate to the silicate-bearing carbonate composition, with constant, elevated bond lengths within the carbonate region.
For comparison and deeper insight, glass and melt structures of Y- and Sr-bearing sodium-rich silicate to carbonate compositions were simulated in an explorative ab initio molecular dynamics (MD) study. The simulations confirm the observed patterns of CO2-dependent local changes around Y and Sr and additionally provide further insights into the detailed incorporation mechanisms of the trace elements and CO2. Principal findings include that in sodium-rich silicate compositions carbon is mainly incorporated either as a free carbonate group or sharing one oxygen with a network former (Si or [4]Al) to form a non-bridging carbonate. Of minor importance are bridging carbonates between two network formers; here, a clear preference for two [4]Al as adjacent network formers occurs compared with what a statistical distribution would suggest. In C-bearing silicate melts minor amounts of molecular CO2 are present, which is almost entirely dissolved as carbonate in the quenched glasses.
The combination of experiment and simulation provides extraordinary insights into glass and melt structures. The new data are interpreted on the basis of bond valence theory and used to deduce potential mechanisms for the structural incorporation of the investigated elements, which allow predictions of their partitioning behavior in natural melts. Furthermore, the study provides unique insights into the dissolution mechanisms of CO2 in silicate melts and into the carbonate melt structure. For the latter, a structural model is suggested, based on planar CO3 groups linking 7- to 9-fold coordinated cation polyhedra, in accordance with structural units found in the Na-Ca carbonate nyerereite. Ultimately, the outcome of this study helps to rationalize the unique physical properties and geological phenomena related to carbonated silicate-carbonate melts.
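The bond-valence interpretation mentioned above rests on a simple relation: each metal-oxygen bond of length R contributes a valence s = exp((R0 − R)/b), and the sum over a cation's bonds should approach its formal charge. A hedged sketch of that consistency check; b = 0.37 Å is the conventional constant, and the R0 value for Y-O (~2.02 Å) is a tabulated parameter used here for illustration:

```python
import math

# Bond-valence sum: each bond of length r contributes exp((r0 - r) / b).
# b = 0.37 A is the conventional constant; r0 is a tabulated pair constant
# (here an illustrative value for Y-O). Sketch, not the study's analysis.
def bond_valence_sum(bond_lengths, r0, b=0.37):
    return sum(math.exp((r0 - r) / b) for r in bond_lengths)

# A 6-fold Y site with Y-O bonds near the glass value of ~2.29 A sums
# close to the formal valence of Y3+:
v = bond_valence_sum([2.29] * 6, r0=2.019)
```

This is the kind of check that lets a measured bond length and an assumed coordination number be tested jointly against the cation's formal charge.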
Veni, vidi, falsi nuntii
(2019)
Report on www.BrAnD2
(2019)
Background
Semi-natural plant communities such as field boundaries play an important ecological role in agricultural landscapes, e.g., provision of refuge for plant and other species, food web support or habitat connectivity. To prevent undesired effects of herbicide applications on these communities and their structure, herbicide registration and application are regulated by risk assessment schemes in many industrialized countries. Standardized individual-level greenhouse experiments are conducted on a selection of crop and wild plant species to characterize the effects on non-target plants of the herbicide loads potentially reaching off-field areas. Uncertainties regarding the protectiveness of such approaches to risk assessment might be addressed by assessment factors, which are often under discussion. As an alternative approach, plant community models can be used to predict potential effects on plant communities of interest based on extrapolation of the individual-level effects measured in the standardized greenhouse experiments. In this study, we analyzed the reliability and adequacy of the plant community model IBC-grass (individual-based plant community model for grasslands) by comparing model predictions with empirically measured effects at the plant community level.
Results
We showed that the effects predicted by the model IBC-grass were in accordance with the empirical data. Based on the species-specific dose responses (calculated from empirical effects in monocultures measured 4 weeks after application), the model was able to realistically predict short-term herbicide impacts on communities when compared to empirical data.
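The species-specific dose responses feeding the model are conventionally summarized with a log-logistic curve; a minimal sketch with assumed parameter values (`ed50` and `slope` are illustrative, not values from the study):

```python
import numpy as np

# Illustrative log-logistic dose-response: fraction of control biomass
# remaining at a given dose. ed50 is the dose halving biomass, slope the
# curve steepness; both are assumptions for illustration.
def log_logistic(dose, ed50, slope):
    dose = np.asarray(dose, dtype=float)
    return np.where(dose > 0.0, 1.0 / (1.0 + (dose / ed50) ** slope), 1.0)

doses = np.array([0.0, 0.1, 1.0, 10.0])          # e.g. fractions of field rate
effect = 1.0 - log_logistic(doses, ed50=1.0, slope=2.0)
```

Curves of this form, fitted per species to the monoculture data, are what a community model can then translate into competition-mediated effects at the community level.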
Conclusion
The results presented in this study demonstrate how the current standard greenhouse experiments, which measure herbicide impacts at the individual level, can be coupled with the model IBC-grass to estimate effects at the plant community level. In this way, the model can serve as a tool in ecological risk assessment.
The trace gases CO2 and CH4 are among the most relevant greenhouse gases and constitute important exchange fluxes of the global carbon (C) cycle. Their atmospheric abundance has increased significantly since the mid-18th century as a result of intensified anthropogenic activities, especially land use and land-use change. To mitigate global climate change and ensure food security, land-use systems need to be developed that favor reduced trace gas emissions and sustainable soil carbon management. This requires the accurate and precise quantification of the influence of land use and land-use change on CO2 and CH4 emissions. A common method to determine the trace gas dynamics and the C sink or source function of a particular ecosystem is the closed-chamber method. This method is often used on the assumption that accuracy and precision are high enough to determine differences in C gas emissions, e.g., for treatment comparisons or different ecosystem components.
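The flux computation at the heart of the closed-chamber method can be sketched as follows: regress the headspace concentration against time during chamber closure and convert the slope to an areal flux via the ideal gas law. Chamber dimensions and all numbers below are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Basic closed-chamber CO2 flux calculation: the slope of concentration
# vs. time (ppm/s) is converted to mol m-2 s-1 using the molar density of
# air from the ideal gas law. All values are illustrative.
def chamber_flux(t_s, c_ppm, volume_m3, area_m2, temp_k=293.15, p_pa=101325.0):
    R = 8.314                                    # J mol-1 K-1
    slope = np.polyfit(t_s, c_ppm, 1)[0]         # ppm s-1
    air_mol_m3 = p_pa / (R * temp_k)             # mol air per m3
    return slope * 1e-6 * air_mol_m3 * volume_m3 / area_m2

t = np.arange(0.0, 300.0, 30.0)                  # 5-minute closure, s
c = 400.0 + 0.02 * t                             # ppm, assumed linear rise
flux = chamber_flux(t, c, volume_m3=0.05, area_m2=0.25)   # mol CO2 m-2 s-1
```

Choices hidden in this small computation (linear vs. nonlinear slope fitting, which part of the closure time to use, quality filters) are exactly the data-processing decisions whose standardization the thesis addresses.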
However, the broad range of chamber designs, related operational procedures and data-processing strategies described in the scientific literature contributes to the overall uncertainty of closed-chamber-based emission estimates. The value of meta-analyses is hence limited, since these methodological differences hamper comparability between studies. A standardization of closed-chamber data acquisition and processing is therefore much needed.
Within this thesis, a set of case studies was performed to: (I) develop standardized routines for unbiased data acquisition and processing, with the aim of providing traceable, reproducible and comparable closed-chamber-based C emission estimates; (II) validate those routines by comparing C emissions derived using closed chambers with independent C emission estimates; and (III) reveal processes driving the spatio-temporal dynamics of C emissions by developing (data-processing-based) flux separation approaches.
The case studies showed: (I) the importance of testing chamber designs under field conditions for adequate sealing integrity to ensure unbiased flux measurements (compared to sealing integrity, the use of a pressure vent and fan was of minor importance, affecting mainly measurement precision); (II) that the developed standardized data-processing routines proved to be a powerful and flexible tool to estimate C gas emissions and can be successfully applied to a broad range of flux data sets from very different ecosystems; (III) that automatic chamber measurements capture the temporal dynamics of CO2 and CH4 fluxes very well and, most importantly, accurately detect small-scale spatial differences in the development of soil C when validated against repeated soil inventories; and (IV) that a simple algorithm separating CH4 fluxes into ebullition and diffusion improves the identification of environmental drivers, which allows for accurate gap-filling of measured CH4 fluxes.
Overall, the proposed standardized data acquisition and processing routines strongly improved the detection accuracy and precision of source/sink patterns of gaseous C emissions. Hence, future studies, which consider the recommended improvements, will deliver valuable new data and insights to broaden our understanding of spatio-temporal C gas dynamics, their particular environmental drivers and underlying processes.
"Encounter with the Foreign" is the theme of a supplementary round of the second Brandenburger Antike-Denkwerk, a project that seeks to inspire Latin students at selected Brandenburg grammar schools with enthusiasm for antiquity and is funded by the Robert Bosch Stiftung.
This volume contains the keynote lectures of the 13th Potsdamer Lateintag in October and of the special Lateintag in December 2017: Prof. Dr. Anja Klöckner discusses the influence of the Mithras cult on the Romano-Germanic population along the Limes; PD Dr. Nicola Hömke presents original letters of Roman soldiers from Hadrian's Wall in northern England, where legionaries and Celtic locals met. Dr. Hermann Krüssel presents, on the basis of the Poblicius monument, his findings on life in Augustan Cologne. Also documented are the creative and academically sound presentations that the students, together with their student mentors, developed over several months and presented at their own student congress in March 2018.
On 30 June 2018, the fourteenth conference of the Forschungskreis Vereinte Nationen took place, on the topic "Challenges for current German UN policy".
The conference addresses its main topic in three lectures, which present the official view of the Federal Foreign Office on the one hand and analyze the challenges from an academic perspective on the other. Tanja Brühl offers an overall assessment of German UN policy from a role-theoretical perspective, while Theodor Rathgeber examines German human rights policy in light of the challenges facing the UN Human Rights Council.
The volume contains three further texts presenting research findings on current developments. Klaus Hüfner discusses Germany's current financial engagement in the United Nations. Yanina Bloch takes stock of the first eight years of UN Women's work. Helmut Volger analyzes recent progress in reforming the working methods of the UN Security Council.
With RISE-DE, an outcome of the FDMentor project, a reference model for strategy processes in institutional research data management (RDM) is now available. RISE-DE offers an evaluation framework for self-assessment and goal setting and is suited as a tool for designing structured, stakeholder-oriented strategy development for RDM at universities and research institutions.
RISE-DE is based on the Research Infrastructure Self-Evaluation Framework (RISE v1.1) of the Digital Curation Centre (DCC), which is loosely oriented toward maturity models, but has been substantially revised for use in participatory processes and adapted in content to the German academic context and to developments in good RDM practice. A strategy developed with the help of RISE-DE also fulfills the recommendations formulated by the German Rectors' Conference (HRK) and the League of European Research Universities (LERU).
The present RISE-DE version 1.0 incorporates experience from its pilot use at the University of Potsdam as well as feedback from the community. Compared to the previous version, it contains changes to the topics of the reference model; in addition, the recommendations for RDM beginners have been substantially expanded, and explanations for conducting participatory strategy processes have been added. A digital evaluation tool was also developed in cooperation with the Hochschule für Angewandte Wissenschaften Hamburg.
Supermassive black holes reside in the hearts of almost all massive galaxies. Their evolutionary path seems to be strongly linked to the evolution of their host galaxies, as implied by several empirical relations between the black hole mass (M_BH) and different host galaxy properties. The physical driver of this co-evolution is, however, still not understood. More mass measurements over homogeneous samples and a detailed understanding of systematic uncertainties are required to fathom the origin of the scaling relations.
In this thesis, I present mass estimates of supermassive black holes in the nuclei of one late-type and thirteen early-type galaxies. Our SMASHING sample extends from the intermediate to the massive galaxy mass regime and was selected to fill gaps in the number of galaxies along the scaling relations. All galaxies were observed at high spatial resolution, making use of the adaptive-optics mode of integral field unit (IFU) instruments on state-of-the-art telescopes (SINFONI, NIFS, MUSE). I extracted the stellar kinematics from these observations and constructed dynamical Jeans and Schwarzschild models to estimate the mass of the central black holes robustly. My new mass estimates increase the number of early-type galaxies with measured black hole masses by 15%. The seven measured galaxies with nuclear light deficits ('cores') augment the sample of cored galaxies with measured black holes by 40%. Besides determining black hole masses, evaluating their accuracy is crucial for understanding the intrinsic scatter of the black hole-host galaxy scaling relations. I tested various sources of systematic uncertainty on my derived mass estimates.
The M_BH estimate of the single late-type galaxy of the sample yielded an upper limit, which I could constrain very robustly. I tested the effects of dust, mass-to-light ratio (M/L) variation, and dark matter on my measured M_BH. Based on these tests, the typically assumed constant M/L ratio can be an adequate assumption to account for the small amounts of dark matter in the center of that galaxy. I also tested the effect of a variable M/L on the M_BH measurement of a second galaxy. By considering stellar M/L variations in the dynamical modeling, the measured M_BH decreased by 30%. In the future, this test should be performed on additional galaxies to learn how assuming a constant M/L biases the estimated black hole masses.
Based on our upper-limit mass measurement, I confirm previous suggestions that resolving the predicted BH sphere of influence is not a strict condition to measure black hole masses. Instead, it is only a rough guide for the detection of the black hole if high-quality, high signal-to-noise IFU data are used for the measurement. About half of our sample consists of massive early-type galaxies which show nuclear surface brightness cores and signs of triaxiality. While these types of galaxies are typically modeled with axisymmetric modeling methods, the effects on M_BH are not yet well studied. The massive galaxies of our sample are well suited to test the effect of different stellar dynamical models on the measured black hole mass in evidently triaxial galaxies. I have compared spherical Jeans and axisymmetric Schwarzschild models and will add triaxial Schwarzschild models to this comparison in the future. The constructed Jeans and Schwarzschild models mostly disagree with each other and cannot reproduce many of the triaxial features of the galaxies (e.g., nuclear sub-components, prolate rotation). The consequence of the axisymmetric-triaxial assumption on the accuracy of M_BH and its impact on the black hole-host galaxy relation needs to be carefully examined in the future.
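The sphere-of-influence criterion discussed above is commonly taken as r_SOI = G·M_BH/σ², and its angular size on the sky is what the adaptive-optics observations need to resolve. A hedged numerical sketch with illustrative input values, not measurements from the thesis:

```python
import math

# Black-hole sphere-of-influence radius r_soi = G * M_BH / sigma^2 and its
# angular size at distance d. All input values are illustrative.
G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
PC = 3.086e16          # m

def soi_arcsec(m_bh_msun, sigma_kms, dist_mpc):
    r_soi = G * m_bh_msun * M_SUN / (sigma_kms * 1e3) ** 2   # metres
    return math.degrees(r_soi / (dist_mpc * 1e6 * PC)) * 3600.0

# A 1e8 M_sun black hole at sigma = 200 km/s seen from 20 Mpc subtends
# roughly a tenth of an arcsecond, i.e. at the limit of seeing-limited
# data but within reach of adaptive optics:
theta = soi_arcsec(1e8, 200.0, 20.0)
```

Comparing this angle with the achieved spatial resolution is the "rough guide" the text refers to, rather than a strict detectability threshold.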
In the sample of galaxies with published M_BH, we find measurements based on different dynamical tracers, requiring different observations, assumptions, and methods. Crucially, different tracers do not always give consistent results. I have used two independent tracers (cold molecular gas and stars) to estimate M_BH in a regular galaxy of our sample. While the two estimates are consistent within their errors, the stellar-based measurement is twice as high as the gas-based one. Similar trends have also been found in the literature. Therefore, a rigorous test of the systematics associated with the different modeling methods is required in the future. I caution to take the effects of different tracers (and methods) into account when discussing the scaling relations.
I conclude this thesis by comparing my galaxy sample with the compilation of galaxies with measured black holes from the literature, also adding six SMASHING galaxies that were published outside of this thesis. None of the SMASHING galaxies deviates significantly from the literature measurements. Their inclusion among the published early-type galaxies causes a change towards a shallower slope for the M_BH-effective velocity dispersion relation, which is mainly driven by the massive galaxies of our sample. More unbiased and homogeneous measurements are needed in the future to determine the shape of the relation and understand its physical origin.
For many years, psycholinguistic evidence has been predominantly based on findings from native speakers of Indo-European languages, primarily English, thus providing a rather limited perspective on the human language system. In recent years a growing body of experimental research has been devoted to broadening this picture, testing a wide range of speakers and languages and aiming to understand the factors that lead to variability in linguistic performance. The present dissertation investigates sources of variability within the morphological domain, examining how and to what extent morphological processes and representations are shaped by specific properties of languages and speakers. Firstly, the present work focuses on a less explored language, Hebrew, to investigate how the unique non-concatenative morphological structure of Hebrew, namely a non-linear combination of consonantal roots and vowel patterns to form lexical entries (L-M-D + CiCeC = limed ‘teach’), affects morphological processes and representations in the Hebrew lexicon. Secondly, a less investigated population was tested: late learners of a second language. We directly compare native (L1) and non-native (L2) speakers, specifically highly proficient and immersed late learners of Hebrew. Throughout all publications, we have focused on the morphological phenomenon of inflectional classes (called binyanim; singular: binyan), comparing a productive (Piel, e.g., limed ‘teach’) and an unproductive (Paal, e.g., lamad ‘learn’) verbal inflectional class.
By using this test case, two psycholinguistic aspects of morphology were examined: (i) how morphological structure affects online recognition of complex words, using masked priming (Publications I and II) and cross-modal priming (Publication III) techniques, and (ii) what type of cues are used when extending morpho-phonological patterns to novel complex forms, a process referred to as morphological generalization, using an elicited production task (Publication IV).
The findings obtained in the four manuscripts, either published or under review, provide significant insights into the role of productivity in Hebrew morphological processing and generalization in L1 and L2 speakers. Firstly, the present L1 data revealed a close relationship between the productivity of Hebrew verbal classes and the recognition process, as revealed in both priming techniques. The consonantal root was accessed only in the productive class (Piel) but not in the unproductive class (Paal). Another dissociation between the two classes was revealed in the cross-modal priming, yielding a semantic relatedness effect only for Paal but not Piel primes. These findings are taken to reflect that Hebrew mental representations display a balance between stored, undecomposable, unstructured stems (Paal) and decomposed, structured stems (Piel), in a manner similar to a typical dual-route architecture, showing that the Hebrew mental lexicon is less unique than previously claimed in psycholinguistic research. The results of the generalization study, however, indicate that there are still substantial differences between the inflectional classes of Hebrew and those of Indo-European languages, particularly in the type of information they rely on in generalization to novel forms: Hebrew binyan generalization relies more on cues of argument structure and less on phonological cues.
Secondly, clear L1/L2 differences were observed in the sensitivity to abstract morphological and morpho-syntactic information during complex word recognition and generalization. While L1 Hebrew speakers were sensitive to the binyan information during recognition, expressed by the contrast in root priming, L2 speakers showed similar root priming effects for both classes, but only when the primes were presented in the infinitive form; no root priming effect was obtained for primes in a finite form. These patterns are interpreted as evidence for a reduced sensitivity of L2 speakers to morphological information, such as information about inflectional classes, and for processing costs in the recognition of forms carrying complex morpho-syntactic information. Reduced reliance on structural cues was also found in the production of novel verbal forms, where the L2 group displayed a weaker effect of argument structure for Piel responses in comparison to the L1 group. Given the L2 results, we suggest that morphological and morpho-syntactic information remains challenging for late bilinguals, even at high proficiency levels.
Molecularly imprinted polymers (MIPs) mimic the binding sites of antibodies by substituting the amino acid scaffold of proteins with synthetic polymers. In this work, the first MIP for the recognition of the diagnostically relevant enzyme butyrylcholinesterase (BuChE) is presented. The MIP was prepared using electropolymerization of the functional monomer o-phenylenediamine and was deposited as a thin film on a glassy carbon electrode by oxidative potentiodynamic polymerization. Rebinding and removal of the template were detected by cyclic voltammetry using ferricyanide as a redox marker. Furthermore, the enzymatic activity of BuChE rebound to the MIP was measured via the anodic oxidation of thiocholine, the reaction product of butyrylthiocholine. The response was linear between 50 pM and 2 nM concentrations of BuChE, with a detection limit of 14.7 pM. In addition to the high sensitivity for BuChE, the sensor responded to pseudo-irreversible inhibitors in the lower millimolar range.
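The reported linear range and detection limit correspond to a standard calibration-line analysis; a hedged sketch with made-up numbers (not the paper's data), using the common 3·σ_blank/slope estimate for the detection limit:

```python
import numpy as np

# Sensor calibration sketch: fit a line to signal vs. BuChE concentration
# and estimate the limit of detection (LOD) as 3 * sd(blank) / slope.
# All numbers are illustrative assumptions, not the study's data.
conc = np.array([0.05, 0.25, 0.5, 1.0, 2.0])        # nM
signal = np.array([0.11, 0.52, 1.04, 2.07, 4.12])   # arbitrary units
slope, intercept = np.polyfit(conc, signal, 1)
blank_sd = 0.01                                      # assumed blank noise
lod_nm = 3.0 * blank_sd / slope                      # detection limit, nM
```

With these illustrative numbers the LOD lands in the low-picomolar range, the same order as the 14.7 pM reported, showing how a steep calibration slope and low blank noise together set the detection limit.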
While some pronouncements of expert treaty bodies have been considered ‘key catalysts’ for the development of international human rights law, others are only selectively referred to in legal practice. This article argues that this varying normative impact is due to the informal character of the pronouncements. In the absence of treaty provisions specifying their legal effect, practitioners tend to rely on different factors and arguments when either drawing on or rejecting certain pronouncements. Scholars in turn face difficulties when trying to identify explanatory patterns within this diverging practice, as the informal character confronts both international lawyers and international relations scholars with their respective methodological ‘blind spots’. In light of these intradisciplinary challenges, this article explores the extent to which an interdisciplinary approach helps to assess the reasons for the varying impact of pronouncements. After analysing the factors determining their legal significance on the basis of State practice and the academic debate, the article identifies the drafting process as a factor that promises to be particularly insightful when explored from an interdisciplinary perspective, and sketches out a framework for future research.
Tupaia’s map is one of the most famous and most enigmatic artefacts to emerge from the earliest encounters between Europeans and Pacific Islanders. It was produced between August 1769 and February 1770 by Tupaia, an ’arioi priest, royal advisor and master navigator from Ra’iātea in the Leeward Society Islands, in collaboration with various members of the crew of James Cook’s Endeavour, in two distinct phases of mapmaking and three drafts. The identity of many of the islands it depicts, and the logic of their arrangement, had until now remained enigmas. Drawing in part on previously overlooked archival materials, we propose in this long essay a new understanding of its cartographic logic, a detailed reconstruction of its genesis and thus, for the very first time, a comprehensive reading. Tupaia’s map not only illustrates the scope and mastery of Polynesian navigation; it also achieves a remarkable representational synthesis of two very different systems of wayfinding.
Municipal utility operations (Stadtwerke), at least those active in the electricity and gas sectors, are mostly no longer organized as municipally owned operations (Eigenbetrieb); over the past two decades the municipalities have spun them off into the private-law form of the GmbH. In addition, these municipal companies operate in an internal energy market created by EU market liberalization. The entrepreneurial autonomy of the Stadtwerke GmbH from political steering is reinforced by the credo of the New Public Management model (Neues Steuerungsmodell), which sees entrepreneurial independence as the very precondition for economic success. These framework conditions force municipal enterprises to orient themselves exclusively towards entrepreneurial and market-driven systems. That the logic of entrepreneurial action leaves no room for political steering of the companies becomes a legitimacy problem for the municipal economy, because an exclusive orientation towards the surpluses of municipal enterprises does not legitimize the public purpose, neither politically nor in terms of organizational law. Orientation towards the common good is a constitutive element of municipal economic activity. The thesis advanced here is that, in this situation, the Stadtwerke permit citizen participation in order to mitigate this legitimacy deficit. Two cases are qualitatively analyzed and compared: first, Stadtwerke Wolfhagen GmbH, which sought to generate acceptance for a wind farm through citizen participation; and second, Stadtwerke Potsdam GmbH, which, out of a situation described here as a PR crisis, attempted to restore legitimacy using various instruments of citizen participation.
The Central Andes host large reserves of base and precious metals, and in 2017 the region accounted for a substantial share of worldwide mining activity. Three principal types of deposits have been identified and studied: 1) porphyry-type deposits extending from central Chile and Argentina to Bolivia and Northern Peru, 2) iron oxide-copper-gold (IOCG) deposits extending from central Peru to central Chile, and 3) epithermal tin polymetallic deposits extending from Southern Peru to Northern Argentina, which make up a large part of the deposits of the Bolivian Tin Belt (BTB). Deposits in the BTB can be divided into two major types: (1) tin-tungsten-zinc pluton-related polymetallic deposits, and (2) tin-silver-lead-zinc epithermal polymetallic vein deposits.
Mina Pirquitas is a tin-silver-lead-zinc epithermal polymetallic vein deposit located in north-west Argentina that was once one of the country's most important tin-silver producing mines. It has been interpreted as part of the BTB and shares similar mineral associations with the southern pluton-related BTB epithermal deposits. Two major mineralization events, related to three pulses of magmatic fluids mixed with meteoric water, have been identified. The first event can be divided into two stages: 1) stage I-1, with quartz, pyrite, and cassiterite precipitating from fluids at 233 to 370 °C with salinities between 0 and 7.5 wt%, corresponding to a first pulse of fluids, and 2) stage I-2, with sphalerite and tin-silver-lead-antimony sulfosalts precipitating from fluids at 213 to 274 °C with salinities up to 10.6 wt%, corresponding to a new pulse of magmatic fluids into the hydrothermal system. Mineralization event II deposited the richest silver ores at Pirquitas. Event II fluid temperatures and salinities range from 190 to 252 °C and from 0.9 to 4.3 wt%, respectively, corresponding to the waning supply of magmatic fluids. Noble gas isotopic compositions and concentrations in ore-hosted fluid inclusions demonstrate a significant contribution of magmatic fluids to the Pirquitas mineralization, although no intrusive rocks are exposed in the mine area.
Lead and sulfur isotopic measurements on ore minerals show that Pirquitas shares a similar signature with the southern pluton-related polymetallic deposits in the BTB. Furthermore, most of the sulfur isotopic values of sulfide and sulfosalt minerals from Pirquitas fall within the field of sulfur derived from igneous rocks, suggesting that the main contribution of sulfur to the hydrothermal system at Pirquitas is likely magma-derived. The precise age of the deposit remains unknown, but a wolframite dating result of 2.9 ± 9.1 Ma together with local structural observations suggests that the late mineralization event is younger than 12 Ma.
In recent years there has been increasing awareness that historical land cover changes and associated land use legacies may be important drivers of present-day species richness and biodiversity, owing to time-delayed extinctions or colonizations in response to historical environmental changes. Historically altered habitat patches may therefore exhibit an extinction debt or colonization credit and can be expected to lose or gain species in the future. However, extinction debts and colonization credits are difficult to detect, and their actual magnitudes or payments have rarely been quantified, because species richness patterns and dynamics are also shaped by recent environmental conditions and recent environmental changes.
In this thesis we aimed to determine patterns of herb-layer species richness and recent species richness dynamics of forest herb layer plants and link those patterns and dynamics to historical land cover changes and associated land use legacies. The study was conducted in the Prignitz, NE-Germany, where the forest distribution remained stable for the last ca. 100 years but where a) the deciduous forest area had declined by more than 90 per cent (leaving only remnants of "ancient forests"), b) small new forests had been established on former agricultural land ("post-agricultural forests"). Here, we analyzed the relative importance of land use history and associated historical land cover changes for herb layer species richness compared to recent environmental factors and determined magnitudes of extinction debt and colonization credit and their payment in ancient and post-agricultural forests, respectively.
We showed that present-day species richness patterns were still shaped by historical land cover changes reaching back more than a century. Although recent environmental conditions were largely comparable, we found significantly more forest specialists, species with short-distance dispersal capabilities, and clonal species in ancient forests than in post-agricultural forests. These species richness differences were largely attributable to a colonization credit in post-agricultural forests of up to 9 species (average 4.7), while the extinction debt in ancient forests had almost completely been paid. Environmental legacies from historical agricultural land use played a minor role in the species richness differences; instead, patch connectivity was most important. Species richness in ancient forests still depended on historical connectivity, indicating a last glimpse of an extinction debt, and the colonization credit was highest in isolated post-agricultural forests. In post-agricultural forests that were better connected or directly adjacent to ancient forest patches, the colonization credit was considerably smaller, and we were able to verify a gradual payment of the colonization credit from 2.7 to 1.5 species over the last six decades.