On doubling unconditionals
(2019)
We show that the codifference is a useful tool in studying the ergodicity breaking and non-Gaussianity properties of stochastic time series. While the codifference is a measure of dependence that was previously studied mainly in the context of stable processes, we here extend its range of applicability to random-parameter and diffusing-diffusivity models which are important in contemporary physics, biology and financial engineering. We prove that the codifference detects forms of dependence and ergodicity breaking which are not visible from analysing the covariance and correlation functions. We also discuss a related measure of dispersion, which is a nonlinear analogue of the mean squared displacement.
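As a concrete illustration (our sketch, not the authors' code), the codifference of a stationary zero-mean process can be estimated from an ensemble of trajectories via empirical characteristic functions; the normalization by θ² is chosen so that the estimate reduces to the covariance for Gaussian data, which is what makes deviations from it a non-Gaussianity diagnostic:

```python
import numpy as np

def codifference(paths, lag, theta=1.0):
    """Empirical codifference CD(lag) between X_t and X_{t+lag}:
    ln E[e^{i*theta*(X_{t+lag}-X_t)}] - ln E[e^{i*theta*X_{t+lag}}]
                                      - ln E[e^{-i*theta*X_t}].
    `paths` holds one trajectory per row; column 0 is taken as time t."""
    x_t = paths[:, 0]
    x_lag = paths[:, lag]
    ecf = lambda z: np.mean(np.exp(1j * z))  # empirical characteristic function
    cd = (np.log(ecf(theta * (x_lag - x_t)))
          - np.log(ecf(theta * x_lag))
          - np.log(ecf(-theta * x_t)))
    # for zero-mean Gaussian data, CD / theta^2 reduces to the covariance
    return cd.real / theta**2
```

For non-Gaussian or ergodicity-breaking processes the normalized codifference departs from the covariance, which is exactly the extra dependence information the abstract refers to.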
Many studies on biological and soft matter systems report the joint presence of a linear mean-squared displacement and a non-Gaussian probability density exhibiting, for instance, exponential or stretched-Gaussian tails. This phenomenon is ascribed to the heterogeneity of the medium and is captured by random parameter models such as ‘superstatistics’ or ‘diffusing diffusivity’. Independently, scientists working in the area of time series analysis and statistics have studied a class of discrete-time processes with similar properties, namely, random coefficient autoregressive models. In this work we try to reconcile these two approaches and thus provide a bridge between physical stochastic processes and autoregressive models. We start from the basic Langevin equation of motion with time-varying damping or diffusion coefficients and establish the link to random coefficient autoregressive processes. By exploring that link we gain access to efficient statistical methods which can help to identify data exhibiting Brownian yet non-Gaussian diffusion.
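To make the link concrete, here is a hedged simulation sketch (not taken from the paper) of the simplest random-parameter model of Brownian yet non-Gaussian diffusion: an ordinary Gaussian random walk whose diffusivity D is drawn once per trajectory from an exponential distribution, a standard superstatistics assumption. The ensemble mean-squared displacement grows linearly, as for Brownian motion, while the displacement distribution is leptokurtic:

```python
import numpy as np

rng = np.random.default_rng(1)
n_traj, n_steps = 20000, 100

# Superstatistics assumption: each trajectory gets its own diffusivity D ~ Exp(1).
D = rng.exponential(1.0, size=n_traj)

# Ordinary Gaussian random walk, but with trajectory-wise step variance 2*D
# (unit time step): a discrete analogue of a Langevin equation with a random
# diffusion coefficient.
steps = np.sqrt(2.0 * D)[:, None] * rng.standard_normal((n_traj, n_steps))
X = np.cumsum(steps, axis=1)

# Ensemble mean-squared displacement: linear in time, i.e. "Brownian".
msd = np.mean(X**2, axis=0)

# Kurtosis ratio E[X^4]/E[X^2]^2: 3 for a Gaussian, ~6 for this mixture.
kurt = np.mean(X[:, -1]**4) / np.mean(X[:, -1]**2)**2
```

The point of the sketch is that the MSD alone cannot distinguish this process from Brownian motion; only a higher-order statistic such as the kurtosis (or the codifference discussed above) reveals the hidden parameter randomness.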
Persönlichkeitspsychologisch fundierte Studienorientierung durch onlinebasierte Self-Assessments
(2019)
Background: Agility in general and change-of-direction speed (CoD) in particular represent important performance determinants in elite soccer.
Objectives: The objectives of this study were to determine the effects of a 6-week neuromuscular training program on agility performance, and to determine differences in movement times between the slower and faster turning directions in elite soccer players.
Materials and Methods: Twenty male elite soccer players from the Stade Rennais Football Club (Ligue 1, France) participated in this study. The players were randomly assigned to a neuromuscular training group (NTG, n = 10) or an active control group (CG, n = 10) according to their playing position. NTG participated in a 6-week neuromuscular training program, performed twice per week, that included CoD, plyometric and dynamic stability exercises. Neuromuscular training replaced the regular warm-up program. Each training session lasted 30 min. CG continued their regular training program. Training volume was similar between groups. Before and after the intervention, the two groups performed a reactive agility test that included 180° left and right body rotations followed by a 5-m linear sprint. The weak side was defined as the left/right turning direction that produced slower overall movement times (MT). Reaction time (RT) was assessed and defined as the time from the first appearance of a visual stimulus until the athlete’s first movement. MT corresponded to the time from the first movement until the athlete reached the arrival gate (5-m distance).
Results: No significant between-group baseline differences were observed for RT or MT. Significant group × time interactions were found for MT (p = 0.012, effect size = 0.332, small) for the slower and faster directions (p = 0.011, effect size = 0.627, moderate). Significant pre- to post-test improvements in MT were observed for NTG but not CG (p = 0.011, effect size = 0.877, moderate). For NTG, post hoc analyses revealed significant MT improvements for the slower (p = 0.012, effect size = 0.897, moderate) and faster directions (p = 0.017, effect size = 0.968, moderate).
Conclusion: Our results illustrate that 6 weeks of neuromuscular training, performed twice per week as part of the warm-up program, significantly enhanced agility performance in elite soccer players. Moreover, improvements were found for both turning directions during body rotations. Thus, practitioners are advised to train both turning directions in their programs.
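The effect sizes quoted above are standardized mean differences (Cohen's d). A generic sketch of the computation, not the study's own analysis code:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd
```

Expressing the group difference in pooled-SD units is what allows the magnitude labels (small, moderate) used in the Results to be compared across outcome measures with different units.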
The innovative dual-purpose chicken approach aims to contribute to the transition towards sustainable poultry production by avoiding the culling of male chicks. To successfully integrate sustainability aspects into innovation, goal congruency among actors and clear communication of the added value within the actor network and to consumers are needed. The challenge of identifying common sustainability goals calls for decision support tools. The objective of our research was to investigate whether such a tool could assist in improving communication and marketing with respect to sustainability and in optimizing the organization of the value chain. Three actor groups participated in the tool application, in which quantitative and qualitative data were collected. The results showed that there were manifold sustainability goals within the innovation network, but only some goals overlapped, and perceptions of their implementation also diverged. While easily marketable goals such as ‘animal welfare’ were perceived as largely implemented, economic goals were prioritized less often and their implementation was perceived as rather low. By visualizing congruencies and differences in the goals, the tool helped identify fields of action, such as improved information flows, and prompted thinking processes. We conclude that the tool is useful for managing complex decision processes involving several actors.
In canoe sprint, the trunk muscles play an important role in stabilizing the body in an unstable environment (the boat) and in generating forces that are transmitted through the shoulders and arms to the paddle for propulsion of the boat. Isokinetic training is well suited for sports in which propulsion is generated against water resistance, due to similarities in the resistive mode. Thus, the purpose of this study was to determine the effects of isokinetic training, in addition to regular sport-specific training, on trunk muscular fitness and body composition in world-class canoeists, and to evaluate associations between trunk muscular fitness and canoe-specific performance. Nine world-class canoeists (age: 25.6 ± 3.3 years; three females; four world champions; three Olympic gold medalists) participated in an 8-week progressive isokinetic training program with a 6-week “muscle hypertrophy” block and a 2-week “muscle power” block. Pre- and post-tests included the assessment of peak isokinetic torque at different velocities in concentric (30 and 140°/s) and eccentric (30 and 90°/s) mode, trunk muscle endurance, and body composition (e.g., body fat, segmental lean mass). Additionally, peak paddle force was assessed in the flume at a water current of 3.4 m/s. Significant pre-to-post increases were found for peak torque of the trunk rotators at 30°/s (p = 0.047; d = 0.4) and 140°/s (p = 0.014; d = 0.7) in concentric mode. No significant pre-to-post changes were detected for eccentric trunk rotator torque, trunk muscle endurance, or body composition (p > 0.148). Significant medium-to-large correlations were observed between concentric trunk rotator torque, but not trunk muscle endurance, and peak paddle force, irrespective of the isokinetic movement velocity (all r ≥ 0.886; p ≤ 0.008). Isokinetic trunk rotator training is thus effective in improving concentric trunk rotator strength in world-class canoe sprinters. It is recommended to progressively increase the angular velocity from 30°/s to 140°/s over the course of the training period.
Portal = Wohnen
(2019)
Home. A lovely word, if you have one. A word of longing if you do not, or if your own home is not secure. Home is at stake, as the news of recent months and years has shown, in Potsdam and Berlin just as in many other cities. Everywhere there is a shortage of housing that people can afford.
For months, one question has been making the rounds in our office, too: Any news about your apartment? A dispute with the landlord, a change of ownership, or an exhausting apartment hunt: what occupies us personally can currently be heard everywhere. That is why, in the current issue of the university magazine Portal, we want to get to the bottom of the topic of housing.
What does the lack of affordable housing mean for the social mix, and how can policymakers intervene? We asked a social scientist. And we asked around to learn how students and employees of the University of Potsdam actually live, what home means to them, and what worries them. We ventured a look into the dormitories on the Golm campus and show you a vision of the site as a living space on a human scale. The climate does not leave us cold either: how can a city like Potsdam, home to almost 180,000 people, better prepare for weather extremes in the future?
As you have surely noticed, Portal appears in a new look. Yet as always, we have visited the people at the University, in the hope that you will get to know one another a little better at this large institution with its three campuses. And we also want to introduce life at our university to readers who do not know the University of Potsdam so well.
We met students who are especially committed: to climate protection, to equality of all genders, or on the faculty council. Others make music together. In the series “Mein Arbeitstag” (“My Workday”) we ask which tasks have to be handled day to day in the university library. A trainee shared her everyday internship life in far-away Hong Kong with us, while a teacher-training seminar whisks us off into virtual reality. We get to know hypervelocity stars and the best remedy for back pain. What the human voice has to do with the movements of the Earth is revealed in a “lab visit.” In the “conversation” section, a climate researcher talks with a school student, and we show where university and city have found one another. We take you into the emerging “European Digital UniverCity” and, in an internal “expert query,” ask about a new women’s movement in the Catholic Church. Curious as ever, we put 15 forthright questions to a Slavicist. A linguist explains whether and how we could understand extraterrestrials, should they wish to speak with us. We talked with an award-winning legal scholar about the death penalty and with a postdoctoral researcher about self-playing pianos. Since even a young university like ours is getting older, in the “time travel” section we look back at the infancy of our alma mater and, with a dose of humor, blow the dust off the files. In the series “Es war einmal” (“Once Upon a Time”), two researchers comment on a historical anniversary. And because our (and hopefully also your) thirst for knowledge knows no bounds, we have turned one scholarly buzzword around: the “turn.” What that is and why it can make you dizzy: read for yourself!
For a long time, there were things on this planet that only humans could do, but this time might be coming to an end. By using the universal tool that makes us unique – our intelligence – we have worked to eliminate our uniqueness, at least when it comes to solving cognitive tasks. Artificial intelligence is now able to play chess, understand language, and drive a car – and often better than we do.
How did we get here? The philosopher Aristotle formulated the first “laws of thought” in his syllogisms, and the mathematicians Blaise Pascal and Wilhelm Leibniz built some of the earliest calculating machines. The mathematician George Boole was the first to introduce a formal language to represent logic. The natural scientist Alan Turing created his deciphering machine “Colossus,” the first programmable computer. Philosophers, mathematicians, psychologists, and linguists – for centuries, scientists have been developing formulas, machines, and theories that were supposed to enable us to reproduce and possibly even enhance our most valuable ability.
But what exactly is “artificial intelligence”? Even the name calls for comparison. Is artificial intelligence like human intelligence? Alan Turing came up with a test in 1950 to provide a satisfying operational definition of intelligence: According to him, a machine is intelligent if its thinking abilities equal those of humans. It has to reach human levels for any cognitive task. The machine has to prove this by convincing a human interrogator that it is human. Not an easy task: After all, it has to process natural language, store knowledge, draw conclusions, and learn something new. In fact, over the past ten years, a number of AI systems have emerged that have passed the test one way or another in chat conversations with automatically generated texts or images. Nowadays, the discussion usually centers on other questions: Does AI still need its creators? Will it not only outperform humans but someday replace them – be it in the world of work or even beyond? Will AI solve our problems in the age of all-encompassing digital networking – or will it become a part of the problem?
Artificial intelligence, its nature, its limitations, its potential, and its relationship to humans were being discussed even before it existed. Literature and film have created scenarios with very different endings. But what is the view of the scientists who are actually researching with or about artificial intelligence? For the current issue of our research magazine, a cognitive scientist, an education researcher, and a computer scientist shared their views. We also searched the University for projects whose professional environment reveals the numerous opportunities that AI offers for various disciplines. We cover the geosciences and computer science as well as economics, health, and literature studies.
At the same time, we have not lost sight of the broad research spectrum at the University: a legal expert introduces us to the not-so-distant sphere of space law while astrophysicists work on ensuring that state-of-the-art telescopes observe those regions in space where something “is happening” at the right time. A chemist explains why the battery of the future will come from a printer, and molecular biologists explain how they will breed stress-resistant plants. You will read about all this in this issue as well as about current studies on restless legs syndrome in children and the situation of Muslims in Brandenburg. Last but not least, we will introduce you to the sheep currently grazing in Sanssouci Park – all on behalf of science. Quite clever!
Enjoy your read!
THE EDITORS
Transcending the conventional debate around efficiency in sustainable consumption, anti-consumption patterns leading to decreased levels of material consumption have been gaining importance. Change agents are crucial for the promotion of such patterns, so there may be lessons for governance interventions that can be learnt from the every-day experiences of those who actively implement and promote sustainability in the field of anti-consumption. Eighteen social innovation pioneers, who engage in and diffuse practices of voluntary simplicity and collaborative consumption as sustainable options of anti-consumption share their knowledge and personal insights in expert interviews for this research. Our qualitative content analysis reveals drivers, barriers, and governance strategies to strengthen anti-consumption patterns, which are negotiated between the market, the state, and civil society. Recommendations derived from the interviews concern entrepreneurship, municipal infrastructures in support of local grassroots projects, regulative policy measures, more positive communication to strengthen the visibility of initiatives and emphasize individual benefits, establishing a sense of community, anti-consumer activism, and education. We argue for complementary action between top-down strategies, bottom-up initiatives, corporate activities, and consumer behavior. The results are valuable to researchers, activists, marketers, and policymakers who seek to enhance their understanding of materially reduced consumption patterns based on the real-life experiences of active pioneers in the field.
The purpose of this study was to compare the effects of combined resistance and plyometric/sprint training with plyometric/sprint training or typical soccer training alone on muscle strength, power, speed, and change-of-direction ability in young soccer players. Thirty-one young (14.5 ± 0.52 years; Tanner stage 3–4) soccer players were randomly assigned to either a combined-training (COMB, n = 14), plyometric-training (PLYO, n = 9) or an active control group (CONT, n = 8). Two training sessions were added to the regular soccer training, consisting of one session of light-load high-velocity resistance exercises combined with one session of plyometric/sprint training (COMB), two sessions of plyometric/sprint training (PLYO) or two soccer training sessions (CONT). Training volume was similar between the experimental groups. Before and after 7 weeks of training, peak torque, as well as absolute and relative (normalized to peak torque; RTDr) rate of torque development (RTD) during maximal voluntary isometric contraction of the knee extensors (KE), were monitored at time intervals from the onset of contraction to 200 ms. Jump height, sprinting speed at 5, 10 and 20 m, and change-of-direction performance were also assessed. There were no significant between-group baseline differences. Both COMB and PLYO significantly increased their jump height (Δ14.3%; ES = 0.94; Δ12.1%; ES = 0.54, respectively) and RTD at mid-to-late phases, but with greater within-group effect sizes in COMB than in PLYO. However, significant increases in peak torque (Δ16.9%; p < 0.001; ES = 0.58), RTD (Δ44.3%; ES = 0.71), RTDr (Δ27.3%; ES = 0.62) and 5-m sprint performance (Δ−4.7%; p < 0.001; ES = 0.73) were found in COMB without any significant pre-to-post change in the PLYO and CONT groups. Our results suggest that COMB is more effective than PLYO or CONT for enhancing strength, sprint and jump performance.
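The RTD outcome used above is simply the slope of the torque-time curve over a window starting at contraction onset. A generic sketch of the computation (our illustration, assuming an onset-aligned sampled torque signal; not the study's analysis pipeline):

```python
import numpy as np

def rtd(torque, time_s, t_end=0.200):
    """Rate of torque development: slope of the torque-time curve from
    contraction onset (time_s[0]) to t_end, in N*m/s. Assumes `time_s`
    is sorted and starts at the onset of contraction."""
    i = min(np.searchsorted(time_s, t_end), len(time_s) - 1)
    return (torque[i] - torque[0]) / (time_s[i] - time_s[0])
```

The relative variant (RTDr) reported in the study would then divide this slope by the subject's peak torque, which removes the dependence on absolute strength.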
We combine ultrafast X-ray diffraction (UXRD) and time-resolved Magneto-Optical Kerr Effect (MOKE) measurements to monitor the strain pulses in laser-excited TbFe2/Nb heterostructures. Spatial separation of the Nb detection layer from the laser excitation region allows for a background-free characterization of the laser-generated strain pulses. We clearly observe symmetric bipolar strain pulses if the excited TbFe2 surface terminates the sample and a decomposition of the strain wavepacket into an asymmetric bipolar and a unipolar pulse, if a SiO2 glass capping layer covers the excited TbFe2 layer. The inverse magnetostriction of the temporally separated unipolar strain pulses in this sample leads to a MOKE signal that linearly depends on the strain pulse amplitude measured through UXRD. Linear chain model simulations accurately predict the timing and shape of UXRD and MOKE signals that are caused by the strain reflections from multiple interfaces in the heterostructure.
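The linear chain model the authors refer to can be sketched in a few lines: point masses coupled by springs, with the sudden photoinduced stress modeled as a changed equilibrium length of the bonds in the excited film. The geometry and parameter values below are illustrative unit-scale choices, not those of the TbFe2/Nb samples:

```python
import numpy as np

# Unit masses and springs; the first 10 cells play the role of the laser-excited film.
N, k, m, dt, steps = 400, 1.0, 1.0, 0.05, 2000
u = np.zeros(N)                      # displacements
v = np.zeros(N)                      # velocities
eq = np.zeros(N - 1)                 # equilibrium bond extensions
eq[:10] = 0.01                       # sudden photoinduced stress: film bonds want to expand

for _ in range(steps):               # symplectic Euler integration
    tension = k * (np.diff(u) - eq)  # force carried by each bond
    f = np.zeros(N)
    f[:-1] += tension                # pull from the bond on the right
    f[1:] -= tension                 # pull from the bond on the left
    v += f / m * dt
    u += v * dt

strain = np.diff(u)                  # strain pulse travelling at ~1 cell per time unit
```

With free surfaces on both ends, the expanding film launches the symmetric bipolar strain pulse described in the abstract; in the experiment, covering the film with a capping layer is what decomposes the wavepacket into unipolar and asymmetric bipolar parts.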
The functional characterization of therapeutically relevant proteins can be limited simply by the provision of the target protein in adequate amounts. This applies particularly to membrane proteins, which, owing to cytotoxic effects on the production cell line and their tendency to form aggregates, can yield only small amounts of active protein. The living organism can be bypassed by using translationally active cell lysates: the basis of cell-free protein synthesis. At the beginning of this work, the ATP-dependent translation of a lysate based on cultured insect cells (Sf21) was analyzed. For this purpose, an ATP-binding aptamer was employed, through which the translation of nanoluciferase could be regulated. With this demonstrated application of aptamers, they could in future be used in cell-free systems to visualize transcription and translation, allowing, for example, complex processes to be validated.
Beyond mere protein production, factors such as post-translational modifications and integration into a lipid membrane can be essential for the functionality of a membrane protein. In the second part, for the G-protein-coupled receptor endothelin B, both integration into the endogenously present endoplasmic reticulum-derived membrane structures and glycosylation were identified in the cell-free Sf21 system.
Building on the successful synthesis of the ET-B receptor, various methods for fluorescence labeling of the adenosine receptor A2a (Adora2a) were applied and optimized. In the third part, Adora2a was labeled in the cell-free Chinese hamster ovary (CHO) system with the aid of a precharged tRNA coupled to a fluorescent amino acid. In addition, using a modified tRNA/aminoacyl-tRNA synthetase pair, a non-canonical amino acid was incorporated into the polypeptide chain at the position of an integrated amber stop codon, and its functional group was subsequently coupled to a fluorescent dye. Owing to their open character, cell-free protein synthesis systems are particularly well suited to integrating exogenous components into the translation process. Using the fluorescence label, a ligand-mediated conformational change in Adora2a was detected via bioluminescence resonance energy transfer. With the establishment of amber suppression, the hormone erythropoietin was furthermore PEGylated, altering properties of the protein such as stability and half-life.
Finally, a new tRNA/aminoacyl-tRNA synthetase pair based on the Methanosarcina mazei pyrrolysine synthetase was established in order to expand the repertoire of non-canonical amino acids and the coupling reactions associated with them. In summary, this work demonstrates the potential of cell-free systems for producing complex membrane proteins and for characterizing them through position-specific fluorescence labeling, opening up new possibilities for the analysis and functionalization of complex proteins.
-Jill Rabea Zaun: Eduard Hildebrandts „Wunderbild“ in einem Brief von Alexander von Humboldt an Henriette Mendelssohn, geb. Meyer
-Ottmar Ette: Paris/Berlin/Havanna: Alexander von Humboldts transareale Wissenschaft und die Revolution nach der Revolution
-Frank Holl: Alexander von Humboldt und der Klimawandel. Mythen und Fakten
-Cettina Rapisarda: Antike Marmorarten nach Zoëga’s Bestimmungen. Alexander von Humboldts Sammlung und Gesteinsstudien in Rom
-Sandra Rebok: De la pintura de viaje a la fotografía: Alexander von Humboldt y la representación artística del Nuevo Mundo
-Ingo Schwarz: Der zweite Entdecker Kubas
The correspondence between Alexander von Humboldt and the Mendelssohns, edited by Sebastian Panwitz and Ingo Schwarz in 2011, documents their lifelong bond. A letter from Humboldt to Henriette Mendelssohn, which the Mendelssohn-Gesellschaft acquired from a private collector and which has been on display only since this year in the Mendelssohn-Remise in Berlin-Mitte, in the permanent exhibition Die Mendelssohns in der Jägerstraße, was not yet accessible at the time of that publication. This letter brings the social and artistic life of the city around 1850 into view and holds several riddles: What is the story behind the uncommented note bearing an incorrectly rendered book title by the American author Harriet Beecher Stowe, and which “Wunderbild” (“miracle picture”) does the world-famous explorer mention in his letter to Henriette Mendelssohn?
Interactions and feedbacks between tectonics, climate, and upper-plate architecture control basin geometry, relief, and depositional systems. The Andes are part of a long-lived continental margin characterized by multiple tectonic cycles, which have strongly modified the Andean upper-plate architecture. In the Andean retroarc, spatiotemporal variations in the structure of the upper plate and in tectonic regimes have resulted in marked along-strike variations in basin geometry, stratigraphy, deformational style, and mountain-belt morphology. These along-strike variations include high-elevation plateaus (Altiplano and Puna) associated with a thin-skinned fold-and-thrust belt, and thick-skinned deformation in broken foreland basins such as the Santa Barbara system and the Sierras Pampeanas. At the confluence of the Puna Plateau, the Santa Barbara system, and the Sierras Pampeanas, major along-strike changes in upper-plate architecture, mountain-belt morphology, basement exhumation, and deformation style can be recognized. I have used a source-to-sink approach to unravel the spatiotemporal tectonic evolution of the Andean retroarc between 26 and 28°S. I obtained a large low-temperature thermochronology data set from basement units, which includes apatite fission-track, apatite U-Th-Sm/He, and zircon U-Th/He (ZHe) cooling ages. Stratigraphic descriptions of Miocene units were temporally constrained by U-Pb LA-ICP-MS zircon ages from interbedded pyroclastic material.
Modeled ZHe ages suggest that the basement of the study area was exhumed during the Famatinian orogeny (550-450 Ma), followed by a period of relative tectonic quiescence during the Paleozoic and the Triassic. The basement experienced horst exhumation during the Cretaceous development of the Salta rift. After this initial exhumation, deposition of thick Cretaceous syn-rift strata caused reheating of several basement blocks within the Santa Barbara system. During the Eocene-Oligocene, the Andean compressional setting was responsible for the exhumation of several disconnected basement blocks. These exhumed blocks were separated by areas of low relief, in which a humid climate and low erosion rates facilitated the development of etchplains on the crystalline basement. The exhumed basement blocks formed an Eocene to Oligocene broken foreland basin in the back-bulge depozone of the Andean foreland. During the Early Miocene, foreland basin strata filled the preexisting Paleogene topography. The basement blocks in lower-relief positions were reheated; the associated geothermal gradients were higher than 25°C/km. Miocene volcanism was responsible for lateral variations in the amount of reheating along the Campo-Arenal basin. Around 12 Ma, a new deformational phase modified the drainage network and fragmented the lacustrine system. As deformation and rock uplift continued, the easily eroded sedimentary cover was efficiently removed and reworked by an ephemeral fluvial system, preventing the development of significant relief. After ~6 Ma, the low erodibility of the basement blocks that began to be exposed caused relief to increase, leading to the development of stable fluvial systems. Progressive relief development modified atmospheric circulation, creating a rainfall gradient. After 3 Ma, orographic rainfall and high relief led to the development of proximal fluvial-gravitational depositional systems in the surrounding basins.
Almost half of the Turkish Republic's political life has been lived under states of emergency and states of siege. In spite of this striking record and the continuity in the deployment of legal emergency powers, only a few legal and political studies have examined the reasons for such permanency in governing practices. To fill this gap, this paper discusses one of the most important sources of the ‘permanent’ political crisis in the country: the historical evolution of legal emergency power. In order to highlight how these policies have intensified the highly fragile citizenship regime by weakening the separation of powers, repressing the use of political rights, and increasing the discretionary power of both the executive and judicial authorities, the paper sheds light on the emergence and production of a specific form of legality based on the idea of emergency and the principle of executive prerogative. In that context, it aims to provide a genealogical explanation of the evolution of the exceptional form of the nation-state, which is based on the way political society, representation, and legitimacy have been instituted and on the accompanying failure of the ruling classes to build hegemony in the country.
Supercapacitors are electrochemical energy storage devices with rapid charge/discharge rates and long cycle life. Their biggest challenge is their inferior energy density compared to other electrochemical energy storage devices such as batteries. As the most widespread type of supercapacitor, electrochemical double-layer capacitors (EDLCs) store energy by electrosorption of electrolyte ions on the surface of charged electrodes. As a more recent development, Na-ion capacitors (NICs) are a promising approach to tackling this inferior energy density owing to their higher-capacity electrodes and larger operating voltage. Charges are stored simultaneously by ion adsorption on the surface of the capacitive-type cathode and via a faradaic process in the battery-type anode. Porous carbon electrodes are of great importance in these devices, but the paramount problems are the lack of facile synthetic routes to high-performance carbons and the lack of fundamental understanding of the energy storage mechanisms. Therefore, the aim of the present dissertation is to develop novel synthetic methods for (nitrogen-doped) porous carbon materials with superior performance, and to reach a deeper understanding of the energy storage mechanisms of EDLCs and NICs.
The first part introduces a novel synthetic method for hierarchical ordered meso-microporous carbon electrode materials for EDLCs. The large number of micropores and the highly ordered mesopores provide abundant sites for charge storage and efficient electrolyte transport, respectively, giving rise to superior EDLC performance in different electrolytes. More importantly, the controversial energy storage mechanism of EDLCs employing ionic liquid (IL) electrolytes is investigated using a series of porous model carbons as electrodes. The results not only allow conclusions about the relations between porosity and ion transport dynamics, but also deliver deeper insights into the energy storage mechanism of IL-based EDLCs, which differs from the compression double-layer mechanism that usually dominates in solvent-based electrolytes.
The other part focuses on anodes for NICs, for which the novel synthesis of nitrogen-rich porous carbon electrodes and their sodium storage mechanism are investigated. Free-standing fibrous nitrogen-doped carbon materials are synthesized by electrospinning, using the nitrogen-rich monomer hexaazatriphenylene-hexacarbonitrile (C18N12) as the precursor, followed by condensation at high temperature. These fibers provide superior capacity and desirable charge/discharge rates for sodium storage. This work also provides insights into the sodium storage mechanism in nitrogen-doped carbons. Based on this mechanism, further optimization is achieved by designing a composite material composed of nitrogen-rich carbon nanoparticles embedded in a conductive carbon matrix for a better charge/discharge rate. The energy density of the assembled NICs significantly exceeds that of common EDLCs while maintaining high power density and long cycle life.
The goal of this three-year longitudinal study was to examine the buffering effect of parental mediation of adolescents’ technology use (i.e., restrictive, co-viewing, and instructive) on the relationships between cyber aggression involvement and substance use (i.e., alcohol use, marijuana use, cigarette smoking, and non-marijuana illicit drug use). Overall, 867 8th-grade adolescents from the Midwestern United States (M age = 13.67, age range 13–15 years, 51% female, 49% White) participated in this study during the 6th grade (Wave 1), 7th grade (Wave 2), and 8th grade (Wave 3). Results revealed that higher levels of Wave 2 instructive mediation weakened the association between Wave 1 cyber victimization and Wave 3 alcohol use and Wave 3 non-marijuana illicit drug use; conversely, this relationship was stronger when adolescents reported lower levels of Wave 2 instructive mediation. At lower levels of Wave 2 instructive mediation, the association between Wave 1 cyber aggression perpetration and Wave 3 non-marijuana illicit drug use was also stronger. Implications of these findings are discussed in the context of parents recognizing their role in helping to mitigate the negative consequences associated with adolescents’ cyber aggression involvement.
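The “buffering” result above is, statistically, a moderation (interaction) effect. A minimal sketch of how such an effect is detected follows; all variable names, coefficients, and data are simulated illustrations, not the study's actual model or results:

```python
import numpy as np

# Hedged sketch: "buffering" corresponds to a negative interaction between
# Wave 1 cyber victimization (cv) and Wave 2 instructive mediation (im)
# when predicting Wave 3 alcohol use (y). Everything here is simulated.
rng = np.random.default_rng(0)
n = 867                                  # sample size from the abstract
cv = rng.normal(size=n)                  # cyber victimization (z-scored)
im = rng.normal(size=n)                  # instructive mediation (z-scored)
# Simulated outcome: the effect of cv is dampened at high mediation.
y = 0.4 * cv - 0.2 * im - 0.3 * cv * im + rng.normal(scale=0.5, size=n)

# Fit y ~ cv + im + cv*im by ordinary least squares.
X = np.column_stack([np.ones(n), cv, im, cv * im])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b_cv, b_im, b_int = beta

# "Simple slopes": effect of cv at low (-1 SD) vs. high (+1 SD) mediation.
slope_low = b_cv + b_int * (-1.0)
slope_high = b_cv + b_int * (+1.0)
print(f"slope at low mediation:  {slope_low:.2f}")
print(f"slope at high mediation: {slope_high:.2f}")
```

A negative interaction coefficient, with a steeper simple slope at low mediation, is the pattern the abstract describes.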
While the consequences of cyberbullying victimization have received some attention in the literature, to date, little is known about the multiple types of strains in adolescents’ lives, such as whether cyberbullying victimization and peer rejection increase their vulnerability to depression and anxiety. Even though some research has found that adolescents with disabilities are at higher risk of cyberbullying victimization, most research has focused on typically developing adolescents. Thus, the present study examined the moderating effect of peer rejection in the relationships between cyberbullying victimization, depression, and anxiety among adolescents with autism spectrum disorder. There were 128 participants (89% male; ages ranging from 11 to 16 years) with autism spectrum disorder in the sixth, seventh, or eighth grade at 16 middle schools in the United States. Participants completed questionnaires on cyberbullying victimization, peer rejection, depression, and anxiety. Results revealed that cyberbullying victimization was associated positively with peer rejection, anxiety, and depression among adolescents with autism spectrum disorder. Further, peer rejection was linked positively with depression and anxiety. Peer rejection moderated the positive relationship between cyberbullying victimization and depression, but not anxiety. Implications for prevention programs and future research are discussed.
Cyber victimization research reveals various personal and contextual correlates and negative consequences associated with this experience. Despite increasing attention on cyber victimization, few studies have examined such experiences among ethnic minority adolescents. The purpose of the present study was to examine the moderating effect of ethnicity in the longitudinal associations among cyber victimization, school-belongingness, and psychological consequences (i.e., depression, loneliness, anxiety). These associations were investigated among 416 Latinx and white adolescents (46% female; M age = 13.89, SD = 0.41) from one middle school in the United States. They answered questionnaires on cyber victimization, school-belongingness, depression, loneliness, and anxiety in the 7th grade (Time 1). One year later, in the 8th grade (Time 2), they completed questionnaires on depression, loneliness, and anxiety. Low levels of school-belongingness strengthened the positive relationships between cyber victimization and Time 2 depression and anxiety, especially among Latinx adolescents. The positive association between cyber victimization and Time 2 loneliness was strengthened at low levels of school-belongingness for all adolescents. These findings may indicate that cyber victimization threatens adolescents’ school-belongingness, which has implications for their emotional adjustment. Such findings underscore the importance of considering diverse populations when examining cyber victimization.
Modern health care systems are characterized by pronounced prevention and cost-optimized treatments. This dissertation offers novel empirical evidence on how useful such measures can be. The first chapter analyzes how radiation, a main pollutant in health care, can negatively affect cognitive health. The second chapter focuses on the effect of Low Emission Zones on public health, as air quality is the major external source of health problems. Both chapters point out potentials for preventive measures. Finally, chapter three studies how changes in treatment prices affect the reallocation of hospital resources. In the following, I briefly summarize each chapter and discuss implications for health care systems as well as other policy areas. Based on the National Educational Panel Study linked to data on radiation, chapter one shows that radiation can have negative long-term effects on cognitive skills, even at subclinical doses. Exploiting arguably exogenous variation in soil contamination in Germany due to the Chernobyl disaster in 1986, the findings show that people exposed to higher radiation perform significantly worse in cognitive tests 25 years later. Identification is ensured by abnormal rainfall within a critical period of ten days. The results show that the effect is stronger among older cohorts than younger cohorts, which is consistent with radiation accelerating cognitive decline as people get older. On average, a one-standard-deviation increase in the initial level of CS137 (around 30 chest x-rays) is associated with a decrease in cognitive skills by 4.1 percent of a standard deviation (around 0.05 school years). Chapter one shows that subclinical levels of radiation can have negative consequences even after early childhood. This is of particular importance because most of the literature focuses on exposure very early in life, often during pregnancy. However, the population exposed after birth is over 100 times larger.
These results point to substantial external human capital costs of radiation, which can be reduced by the choice of medical procedures. There is a large potential for reductions because about one-third of all CT scans are assumed to be not medically justified (Brenner and Hall, 2007). If people receive unnecessary CT scans because of economic incentives, this chapter points to additional external costs of health care policies. Furthermore, the results can inform the cost-benefit trade-off for medically indicated procedures. Chapter two provides evidence on the effectiveness of Low Emission Zones. Low Emission Zones are typically justified by improvements in population health. However, there is little evidence about the potential health benefits of policy interventions aiming at improving air quality in inner cities. The chapter asks how the coverage of Low Emission Zones affects air pollution and hospitalization, exploiting variation in the roll-out of Low Emission Zones in Germany. It combines information on the geographic coverage of Low Emission Zones with rich panel data on the universe of German hospitals over the period from 2006 to 2016, with precise information on hospital locations and the annual frequency of detailed diagnoses. In order to establish that our estimates of Low Emission Zones’ health impacts can indeed be attributed to improvements in local air quality, we use data from Germany’s official air pollution monitoring system, assign monitor locations to Low Emission Zones, and test whether measures of air pollution are affected by the coverage of a Low Emission Zone. Results in chapter two confirm former results showing that the introduction of Low Emission Zones improved air quality significantly by reducing NO2 and PM10 concentrations.
Furthermore, the chapter shows that hospitals whose catchment areas are covered by a Low Emission Zone record significantly fewer diagnoses of air-pollution-related diseases, in particular a reduced incidence of chronic diseases of the circulatory and respiratory systems. The effect is stronger before 2012, which is consistent with a general improvement in the vehicle fleet’s emission standards. Depending on the disease, a one-standard-deviation increase in the share of a hospital’s catchment area covered by a Low Emission Zone reduces the yearly number of diagnoses by up to 5 percent. These findings have strong implications for policy makers. In 2015, overall costs for health care in Germany were around 340 billion euros, of which 46 billion euros were for diseases of the circulatory system, making it the most expensive type of disease, with 2.9 million cases (Statistisches Bundesamt, 2017b). Hence, reductions in the incidence of diseases of the circulatory system may directly reduce society’s health care costs. Whereas chapters one and two study the demand side in health care markets and thus preventive potential, chapter three analyzes the supply side. Exploiting the same hospital panel data set as in chapter two, chapter three studies the effect of treatment price shocks on the reallocation of hospital resources in Germany. Starting in 2005, the implementation of the German DRG system led to general idiosyncratic treatment price shocks for individual hospitals. Thus far there is little evidence on the impact of general price shocks on the reallocation of hospital resources. Additionally, I add to the existing literature by showing that price shocks can have persistent effects on hospital resources even when these shocks vanish. However, simple OLS regressions would underestimate the true effect, due to endogenous treatment price shocks.
I implement a novel instrumental variable strategy that exploits exogenous variation in the number of days of snow in hospital catchment areas. A peculiarity of the reform allowed variation in days of snow to have a persistent impact on treatment prices. I find that treatment price increases lead to increases in input factors such as nursing staff, physicians, and the range of treatments offered, but to decreases in the treatment volume. This indicates supplier-induced demand. Furthermore, the probability of hospital mergers and privatization decreases. Structural differences in pre-treatment characteristics between hospitals enhance these effects; for instance, private and larger hospitals are more affected. IV estimates reveal that OLS results are biased towards zero in almost all dimensions because structural hospital differences are correlated with the reallocation of hospital resources. These results are important for several reasons. The G-DRG-Reform led to a persistent polarization of hospital resources, as some hospitals were exposed to treatment price increases while others experienced reductions. If hospitals increase the treatment volume in response to price reductions by offering unnecessary therapies, this has a negative impact on population well-being and public spending. However, the results show a decrease in the range of treatments if prices decrease. Hospitals might specialize more, thus attracting more patients. From a policy perspective, it is important to evaluate whether such changes in the range of treatments jeopardize an adequate nationwide provision of treatments. Furthermore, the results show a decrease in the number of nurses and physicians if prices decrease. This could partly explain the nursing crisis in German hospitals. However, since hospitals specialize more, they might be able to realize efficiency gains that justify reductions in input factors without losses in quality.
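The instrumental-variable logic described above can be illustrated with a toy two-stage least squares computation. All data and coefficients below are simulated assumptions, not the chapter's estimates; the sketch only shows why OLS can be biased towards zero while the IV recovers the true effect under the stated exclusion assumption:

```python
import numpy as np

# Hedged sketch: days of snow in a hospital's catchment area (z) shifts
# treatment prices (p) but is assumed unrelated to unobserved determinants
# of, e.g., nursing staff (y). Everything below is simulated illustration.
rng = np.random.default_rng(1)
n = 1500
z = rng.poisson(20, size=n).astype(float)    # instrument: days of snow
u = rng.normal(size=n)                       # unobserved hospital quality
p = 0.1 * z + 0.8 * u + rng.normal(size=n)   # price shock, endogenous via u
y = 0.5 * p - 1.0 * u + rng.normal(size=n)   # staffing responds to prices

def ols(X, target):
    """Least-squares coefficients for target ~ X."""
    return np.linalg.lstsq(X, target, rcond=None)[0]

# Naive OLS of y on p: biased towards zero in this construction,
# because u raises p but lowers y.
b_ols = ols(np.column_stack([np.ones(n), p]), y)[1]

# 2SLS: first stage p ~ z, then regress y on the fitted prices.
Z = np.column_stack([np.ones(n), z])
p_hat = Z @ ols(Z, p)
b_iv = ols(np.column_stack([np.ones(n), p_hat]), y)[1]
print(f"OLS estimate: {b_ols:.2f}, IV estimate: {b_iv:.2f} (true 0.5)")
```

The IV estimate approaches the true coefficient of 0.5 because the instrument moves prices independently of the confounder, mirroring the bias-towards-zero pattern the chapter reports.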
Further research is necessary to provide evidence on the impact of the G-DRG-Reform on health care quality. Another important aspect concerns changes in the organizational structure. Many public hospitals have been privatized or merged. The findings show that this is at least partly driven by the G-DRG-Reform. This can again lead to a lack of services offered in some regions if merged hospitals specialize more or if hospitals are taken over by ecclesiastical organizations that do not provide all treatments due to moral conviction. Overall, this dissertation reveals large potential for preventive health care measures and helps to explain reallocation processes in the hospital sector if treatment prices change. Furthermore, its findings have potentially relevant implications for other areas of public policy. Chapter one identifies an effect of low-dose radiation on cognitive health. As mankind is searching for new energy sources, nuclear power is becoming popular again. However, the results of chapter one point to substantial costs of nuclear energy that have not yet been accounted for. Chapter two finds strong evidence that air quality improvements from Low Emission Zones translate into health improvements, even at relatively low levels of air pollution. These findings may, for instance, be of relevance for the design of further policies targeted at air pollution, such as diesel bans. As pointed out in chapter three, the implementation of DRG systems may have unintended side effects on the reallocation of hospital resources. This may also apply to other providers in the health care sector, such as resident doctors.
Perovskite solar cells combine high carrier mobilities with long carrier lifetimes and high radiative efficiencies. Despite this, full devices suffer from significant nonradiative recombination losses, limiting their VOC to values well below the Shockley-Queisser limit. Here, recent advances in understanding nonradiative recombination in perovskite solar cells from picoseconds to steady state are presented, with an emphasis on the interfaces between the perovskite absorber and the charge transport layers. Quantification of the quasi-Fermi level splitting in perovskite films with and without attached transport layers makes it possible to identify the origin of nonradiative recombination and to explain the VOC of operational devices. These measurements prove that in state-of-the-art solar cells, nonradiative recombination at the interfaces between the perovskite and the transport layers is more important than processes in the bulk or at grain boundaries. Optical pump-probe techniques give complementary access to the interfacial recombination pathways and provide quantitative information on transfer rates and recombination velocities. Promising optimization strategies are also highlighted, in particular in view of the role of energy level alignment and the importance of surface passivation. Recent record perovskite solar cells with low nonradiative losses are presented in which interfacial recombination is effectively overcome, paving the way to the thermodynamic efficiency limit.
Partial melting is a first-order process for the chemical differentiation of the crust (Vielzeuf et al., 1990). The redistribution of chemical elements during melt generation crucially influences the composition of the lower and upper crust and provides a mechanism to concentrate and transport chemical elements that may also be of economic interest. Understanding the diverse processes involved and their controlling factors is therefore not only of scientific interest but also of high economic importance for meeting the demand for rare metals.
The redistribution of major and trace elements during partial melting is a central step in understanding how granite-bound mineralization develops (Hedenquist and Lowenstern, 1994). Partial melt generation and the mobilization of ore elements (e.g. Sn, W, Nb, Ta) into the melt depend on the composition of the sedimentary source and the melting conditions. Distinct source rocks have different compositions reflecting their deposition and alteration histories. This specific chemical “memory” results in different mineral assemblages and melting reactions for different protolith compositions during prograde metamorphism (Brown and Fyfe, 1970; Thompson, 1982; Vielzeuf and Holloway, 1988). These factors not only exert an important influence on the distribution of chemical elements during melt generation; they also influence the volume of melt that is produced, the extraction of the melt from its source, and its ascent through the crust (Le Breton and Thompson, 1988). On a larger scale, protolith distribution and chemical alteration (weathering), prograde metamorphism with partial melting, melt extraction, and granite emplacement ultimately depend on (plate-)tectonic controls (Romer and Kroner, 2016). Comprehension of the individual stages and their interaction is crucial for understanding how granite-related mineralization forms, thereby allowing estimation of the mineralization potential of certain areas. Partial melting also influences the isotope systematics of melt and restite. Radiogenic and stable isotopes of magmatic rocks are commonly used to trace back the source of intrusions or to quantify mixing of magmas from different sources with distinct isotopic signatures (DePaolo and Wasserburg, 1979; Lesher, 1990; Chappell, 1996). These applications are based on the fundamental requirement that the isotopic signature of the melt reflects that of the bulk source from which it is derived.
Different minerals in a protolith may have radiogenic isotope compositions that deviate from the whole-rock signature (Ayres and Harris, 1997; Knesel and Davidson, 2002). In particular, old minerals with a distinct parent-to-daughter (P/D) ratio are expected to have a specific radiogenic isotope signature. As the partial melting reaction involves only certain phases in a protolith, the isotopic signature of the melt reflects that of the minerals involved in the melting reaction and should therefore differ from the bulk source signature. Similar considerations hold true for stable isotopes.
Verwaltungswissenschaft
(2019)
The first part of the work is devoted to the foundations and cross-cutting issues of administrative science. The author first introduces the objects of study, "Verwaltungswissenschaft" (administrative science) and "Öffentliche Verwaltung" (public administration). He then familiarizes the reader with the tasks, cultures, reforms, and oversight of the administration. The second part takes a closer look at administrative agencies as organizations and systems of action. The chapters concerned deal with organizational structure, personnel, coordination, procedure, and decision-making.
Ultrafast magnetisation dynamics have been investigated intensely for two decades. The recovery process after demagnetisation, however, has rarely been studied experimentally and discussed in detail. The focus of this work lies on the investigation of the magnetisation on long timescales after laser excitation. It combines two ultrafast time-resolved methods to study the relaxation of the magnetic and lattice systems after excitation with a high-fluence ultrashort laser pulse. The magnetic system is investigated by time-resolved measurements of the magneto-optical Kerr effect. The experimental setup was implemented in the scope of this work. The lattice dynamics were obtained with ultrafast X-ray diffraction. The combination of both techniques leads to a better understanding of the mechanisms involved in magnetisation recovery from a non-equilibrium condition. Three different groups of samples are investigated in this work: thin Nickel layers capped with nonmagnetic materials, a continuous sample of the ordered L10 phase of Iron Platinum, and a sample consisting of Iron Platinum nanoparticles embedded in a carbon matrix. The study of the remagnetisation reveals a general trend for all of the samples: the remagnetisation process can be described by two time dependences, a first exponential recovery that slows down with an increasing amount of energy absorbed in the system until an approximately linear time dependence is observed, followed by a second exponential recovery. In the case of low-fluence excitation, the first recovery is faster than the second. With increasing fluence the first recovery slows down and can be described as a linear function. If the pump-induced temperature increase in the sample is sufficiently high, a phase transition to a paramagnetic state is observed. In the remagnetisation process, the transition into the ferromagnetic state is characterised by a distinct transition between the linear and exponential recovery.
From the combination of the transient lattice temperature Tp(t) obtained from ultrafast X-ray measurements and the magnetisation M(t) gained from magneto-optical measurements, we construct transient magnetisation-versus-temperature relations M(Tp). If the lattice temperature remains below the Curie temperature, the remagnetisation curve M(Tp) is linear and stays below the equilibrium M(T) curve for the continuous transition-metal layers. When the sample is heated above the phase transition, the remagnetisation converges towards the static temperature dependence. For the granular Iron Platinum sample the M(Tp) curves for different fluences coincide, i.e. the remagnetisation follows a similar path irrespective of the initial laser-induced temperature jump.
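Constructing a transient M(Tp) relation from two separately measured time series amounts to interpolating one onto the time grid of the other and pairing the values point by point. A minimal sketch, with synthetic stand-in curves for the measured Tp(t) and M(t):

```python
import numpy as np

# Hedged sketch: Tp(t) from X-ray diffraction and M(t) from magneto-optics
# are typically sampled on different delay grids. We interpolate Tp onto the
# magneto-optical grid and pair (Tp, M). The exponential curves and all
# parameters below are illustrative, not the measured data.
t_xrd = np.linspace(0.0, 1000.0, 50)    # ps, X-ray delay points
t_moke = np.linspace(0.0, 1000.0, 200)  # ps, magneto-optics delay points

Tp = 300.0 + 250.0 * np.exp(-t_xrd / 400.0)   # cooling lattice (K)
M = 1.0 - 0.8 * np.exp(-t_moke / 300.0)       # recovering magnetisation (norm.)

# Interpolate Tp onto the magneto-optical time axis, then sort by temperature
# to obtain the transient M(Tp) curve.
Tp_on_moke = np.interp(t_moke, t_xrd, Tp)
order = np.argsort(Tp_on_moke)
Tp_curve, M_curve = Tp_on_moke[order], M[order]

# As the lattice cools, the magnetisation recovers: M decreases with Tp.
print(Tp_curve[0], M_curve[0])   # coolest point pairs with the largest M
```

The resulting (Tp_curve, M_curve) pairs can then be compared against the equilibrium M(T) curve, as done in the work summarized above.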
Sphingolipids are a class of lipids that share a sphingoid base backbone. They exert various effects in eukaryotes, ranging from structural roles in plasma membranes to cellular signaling. De novo sphingolipid synthesis takes place in the endoplasmic reticulum (ER), where the condensation of the activated C₁₆ fatty acid palmitoyl-CoA and the amino acid L-serine is catalyzed by serine palmitoyltransferase (SPT). The product, 3-ketosphinganine, is then converted into more complex sphingolipids by additional ER-bound enzymes, resulting in the formation of ceramides. Since sphingolipid homeostasis is crucial to numerous cellular functions, improved assessment of sphingolipid metabolism will be key to better understanding several human diseases. To date, no assay exists capable of monitoring de novo sphingolipid synthesis in its entirety. Here, we have established a cell-free assay utilizing rat liver microsomes containing all the enzymes necessary for bottom-up synthesis of ceramides. Following lipid extraction, we were able to track the different intermediates of the sphingolipid metabolism pathway, namely 3-ketosphinganine, sphinganine, dihydroceramide, and ceramide. This was achieved by chromatographic separation of sphingolipid metabolites followed by detection of their accurate mass and characteristic fragmentations through high-resolution mass spectrometry and tandem-mass spectrometry. We were able to distinguish, unequivocally, between de novo synthesized sphingolipids and intrinsic species, inevitably present in the microsome preparations, through the addition of stable isotope-labeled palmitate-d₃ and L-serine-d₃. To the best of our knowledge, this is the first demonstration of a method monitoring the entirety of ER-associated sphingolipid biosynthesis. Proof-of-concept data was provided by modulating the levels of supplied cofactors (e.g., NADPH) or the addition of specific enzyme inhibitors (e.g., fumonisin B₁).
The presented microsomal assay may serve as a useful tool for monitoring alterations in sphingolipid de novo synthesis in cells or tissues. Additionally, our methodology may be used for metabolism studies of atypical substrates – naturally occurring or chemically tailored – as well as novel inhibitors of enzymes involved in sphingolipid de novo synthesis.
Alluvial and transport-limited bedrock rivers constitute the majority of fluvial systems on Earth. Their long profiles hold clues to their present state and past evolution. We currently possess first-principles-based governing equations for flow, sediment transport, and channel morphodynamics in these systems, which we lack for detachment-limited bedrock rivers. Here we formally couple these equations for transport-limited gravel-bed river long-profile evolution. The result is a new predictive relationship whose functional form and parameters are grounded in theory and defined through experimental data. From this, we produce a power-law analytical solution and a finite-difference numerical solution to long-profile evolution. Steady-state channel concavity and steepness are diagnostic of external drivers: concavity decreases with increasing uplift rate, and steepness increases with an increasing sediment-to-water supply ratio. Constraining free parameters explains common observations of river form: to match observed channel concavities, gravel-sized sediments must weather and fine – typically rapidly – and valleys typically should widen gradually. To match the empirical square-root width–discharge scaling in equilibrium-width gravel-bed rivers, downstream fining must occur. The ability to assign a cause to such observations is the direct result of a deductive approach to developing equations for landscape evolution.
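A finite-difference treatment of transport-limited long-profile evolution can be sketched with a strongly simplified, diffusion-like stand-in for the paper's governing equation. The linear form, parameter values, and boundary conditions below are illustrative assumptions, not the authors' actual relationship:

```python
import numpy as np

# Hedged sketch: in the simplest transport-limited case, bed elevation z(x,t)
# evolves as  dz/dt = U + d/dx( k(x) dz/dx ),  with uplift U and a transport
# coefficient k(x) growing downstream with water supply. This is a generic
# stand-in, not the paper's derived gravel-bed relationship.

def evolve_profile(z, k, U, dx, dt, nsteps):
    """Explicit finite-difference update; divide and outlet elevations
    are held fixed as simple boundary conditions."""
    z = z.copy()
    for _ in range(nsteps):
        flux = k[:-1] * np.diff(z) / dx          # flux between adjacent nodes
        dzdt = np.zeros_like(z)
        dzdt[1:-1] = U + np.diff(flux) / dx      # interior mass balance + uplift
        z += dt * dzdt
    return z

dx, dt = 500.0, 10.0                   # node spacing (m), time step (yr)
x = np.arange(0.0, 50_000.0, dx)       # 50 km reach
k = 0.05 * (1.0 + x / 10_000.0)        # transport coefficient, rising downstream
z = np.linspace(500.0, 0.0, x.size)    # initial ramp down to base level
z_new = evolve_profile(z, k, U=1e-3, dx=dx, dt=dt, nsteps=5000)
print(z_new[x.size // 2])              # interior elevation after 50 kyr of uplift
```

The explicit scheme is stable here because dt is far below dx²/(2·max(k)); a production model would instead use the paper's calibrated transport law and moving boundary conditions.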
In daily life, we automatically form impressions of other individuals on the basis of subtle facial features that convey trustworthiness. Because these face-based judgements influence current and future social interactions, we investigated how the perceived trustworthiness of faces affects long-term memory using event-related potentials (ERPs). In the current study, participants incidentally viewed 60 neutral faces differing in trustworthiness and, one week later, performed a surprise recognition memory task in which the same old faces were presented intermixed with novel ones. We found that after one week untrustworthy faces were better recognized than trustworthy faces and that untrustworthy faces prompted early (350–550 ms) enhanced frontal ERP old/new differences (larger positivity for correctly remembered old faces compared to novel ones) during recognition. Our findings point toward an enhanced long-lasting, likely familiarity-based, memory for untrustworthy faces. Even when trust judgements about a person do not necessarily need to be accurate, fast access to memories predicting potential harm may be important to guide social behaviour in daily life.
Wissensmanagement
(2019)
Knowledge is an important resource for coping with administrative tasks.
This raises the question of how the necessary knowledge can be generated, preserved, distributed, and made retrievable. Such knowledge management can improve the quality and efficiency of agencies' work. Nevertheless, knowledge has so far been managed only inadequately in administrative practice.
Systematic knowledge management requires personnel, financial, and technical resources. Where these are not available, administrations can initially draw on individual knowledge-management instruments to improve their work with limited effort.
Editorial
(2019)
Today an active civil society is taken for granted as a relevant actor in the political process. This applies to the domestic sphere as well as to the level of international law. The engagement of civil-society actors within the constitutionally bounded framework of the political process raises questions that this essay will approach. First, the concept of civil society is derived (I); then the functions of the public sphere in a republican polity constituted under the rule of law are discussed (II); finally, current topics that have developed in recent years are presented and formulated as initial research questions (III).
Trait-based approaches to investigate (short- and long-term) phytoplankton dynamics and community assembly have become increasingly popular in freshwater and marine science. Although the nature of the pelagic habitat and the main phytoplankton taxa and their ecology are relatively similar in marine and freshwater systems, the lines of research have evolved, at least in part, separately. We compare and contrast the approaches adopted in marine and freshwater ecosystems with respect to phytoplankton functional traits. We note differences between study goals that use functional traits to assess community assembly and those that relate them to ecosystem processes and biogeochemical cycling; these differences affect the type of characteristics assigned as traits to phytoplankton taxa. Specific phytoplankton traits relevant for ecological function are examined in relation to
herbivory, amplitude of environmental change and spatial and temporal scales of study. Major differences are identified, including the shorter time scale for regular environmental change in freshwater ecosystems compared to that in the open oceans as well as the
type of sampling done by researchers based on site accessibility. Overall, we encourage researchers to better motivate why they apply trait-based analyses in their studies and to make use of process-driven approaches, which are more common in marine studies. We further propose fully comparative trait studies conducted along the habitat gradient spanning freshwater to brackish to marine systems, or along geographic gradients. Such studies will benefit from the combined strength of both fields.
Zionistische Debatten im Kontext des Ersten Weltkriegs am Beispiel der Herzl-Bund-Blätter 1914–1918
(2019)
The significance of the First World War as a central context for the negotiation, adaptation, and rejection of different concepts of Jewish identity in the German Empire, and beyond its borders, has been discussed in recent research from various angles. The war experience gave Jewish-national and Zionist groups in particular important impulses and furthered the concretization of their strategies for building a Jewish national entity in Palestine. The present study seeks to broaden the focus of historical-sociological research on the academic Zionist youth movement by placing at its center a Zionist youth organization that has so far received little scholarly attention: the Herzl-Bund, an association of young Zionist-minded merchants founded in Halberstadt in 1912. The author examines the publications of its members in the context of the First World War in order to trace how the “great themes” that shaped the work and debates of the Zionist movement in the German Empire at this time were negotiated at the level of the Herzl-Bund and its affiliated Herzl-Clubs. Drawing on the internal newsletter, the Herzl-Bund-Blätter, the study investigates which topics entered the debates of the Zionist youth. It centers on three thematic complexes: 1) German-Jewish nationalism versus the Jewish national movement, 2) antisemitism, and 3) the encounter with Eastern European Jews. The aim is to uncover discursive processes of self-understanding along these themes, which also serve to answer the question of whether the experiences of the First World War can be understood as templates for a reassessment of the Herzl-Bund’s self-conception and its own work.
The development of new and better optimization and approximation methods for Job Shop Scheduling Problems (JSP) relies on simulations to compare their performance. The test data required for this has an uncertain influence on the simulation results, because small variations of the initial problem model can change the feasible search space drastically. Methods could benefit from this to varying degrees. This speaks in favor of defining standardized and reusable test data for JSP problem classes, which in turn requires that the test data be systematically describable in order to compile problem-adequate data sets. This article reviews the test data used for comparing methods in the literature. It also shows how and why the differences in test data have to be taken into account. From this, corresponding challenges are derived which the management of test data must face in the context of JSP research.
In nature as well as in the context of infection and medical applications, bacteria often have to move in highly complex environments such as soil or tissues. Previous studies have shown that bacteria strongly interact with their surroundings and are often guided by confinements. Here, we investigate theoretically how the dispersal of swimming bacteria can be augmented by microfluidic environments and validate our theoretical predictions experimentally. We consider a system of bacteria performing the prototypical run-and-tumble motion inside a labyrinth with square lattice geometry. Narrow channels between the square obstacles limit the possibility for bacteria to reorient during tumbling events to areas where channels cross. Thus, by varying the geometry of the lattice it might be possible to control the dispersal of cells. We present a theoretical model quantifying the diffusive spreading of a run-and-tumble random walker in a square lattice. Numerical simulations validate our theoretical predictions for the dependence of the diffusion coefficient on the lattice geometry. We show that bacteria moving in square labyrinths exhibit enhanced dispersal compared to unconfined cells. Importantly, confinement significantly extends the duration of the phase with strongly non-Gaussian diffusion, when the geometry of the channels is imprinted in the density profiles of spreading cells. Finally, in good agreement with our theoretical findings, we observe the predicted behaviors in experiments with E. coli bacteria swimming in a square lattice labyrinth created in a microfluidic device. Altogether, our comprehensive understanding of bacterial dispersal in a simple two-dimensional labyrinth is a first step toward the analysis of more complex geometries relevant for real-world applications.
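The diffusive spreading of a run-and-tumble walker can be illustrated with a minimal simulation sketch. Note that this toy omits the obstacle lattice of the paper entirely and lets the walker tumble at any step with a fixed rate; the rates, step counts, and ensemble size below are invented for illustration and are not the parameters of the study.

```python
import random

def run_and_tumble(n_steps, tumble_rate, seed=0):
    """Run-and-tumble walker on a square lattice: keep moving in the
    current direction; with probability `tumble_rate` per step, pick a
    new random lattice direction (a tumble)."""
    rng = random.Random(seed)
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    d = rng.choice(directions)
    x = y = 0
    traj = [(0, 0)]
    for _ in range(n_steps):
        if rng.random() < tumble_rate:
            d = rng.choice(directions)
        x, y = x + d[0], y + d[1]
        traj.append((x, y))
    return traj

def diffusion_coefficient(trajectories, t):
    """Estimate D from the 2D mean-squared displacement, <r^2(t)> = 4 D t."""
    msd = sum(tr[t][0] ** 2 + tr[t][1] ** 2 for tr in trajectories) / len(trajectories)
    return msd / (4 * t)

# Persistent walkers (rare tumbles) disperse much faster than walkers
# that reorient frequently.
frequent = [run_and_tumble(1000, 0.5, seed=s) for s in range(300)]
persistent = [run_and_tumble(1000, 0.05, seed=s) for s in range(300)]
print(diffusion_coefficient(frequent, 1000), diffusion_coefficient(persistent, 1000))
```

For a tumble rate p the directional correlation decays geometrically, so the long-time diffusion coefficient grows roughly like 1/p; in the actual experiment the confining channels enter by restricting where a tumble can change the direction of motion.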
Research on weight-loss interventions in emerging adulthood is warranted. Therefore, a cognitive-behavioral group treatment (CBT), including development-specific topics for adolescents and young adults with obesity (YOUTH), was developed. In a controlled study, we compared the efficacy of this age-specific CBT group intervention to an age-unspecific CBT group delivered across ages in an inpatient setting. The primary outcome was body mass index standard deviation score (BMI-SDS) over the course of one year; secondary outcomes were health-related and disease-specific quality of life (QoL). 266 participants aged 16 to 21 years (65% females) were randomized. Intention-to-treat (ITT) and per-protocol analyses (PPA) were performed. For both group interventions, we observed significant and clinically relevant improvements in BMI-SDS and QoL over the course of time with small to large effect sizes. Contrary to our hypothesis, the age-specific intervention was not superior to the age-unspecific CBT-approach.
The growing energy demand of modern economies leads to increased consumption of fossil fuels in the form of coal, oil, and natural gas as the main sources. The combustion of these carbon-based fossil fuels inevitably produces greenhouse gases, especially CO2. Approaches to tackle the CO2 problem are to capture it from combustion sources or directly from air, as well as to avoid CO2 production in energy-consuming sources (e.g., in the refrigeration sector). In the former, relatively low CO2 concentrations and competitive adsorption of other gases often lead to low CO2 capacities and selectivities. In both approaches, the interaction of gas molecules with porous materials plays a key role. Porous carbon materials possess unique properties including electric conductivity, tunable porosity, as well as thermal and chemical stability. Nevertheless, pristine carbon materials offer weak polarity and thus low CO2 affinity. This can be overcome by nitrogen doping, which enhances the affinity of carbon materials towards acidic or polar guest molecules (e.g., CO2, H2O, or NH3). In contrast to heteroatom-free materials, such carbon materials are in most cases “noble”, that is, they oxidize other matter rather than being oxidized, due to the very positive working potential of their electrons. The challenging task here is to achieve a homogeneous distribution of significant nitrogen content with similar bonding motifs throughout the carbon framework and a uniform pore size/distribution to maximize host-guest interactions. The aim of this thesis is the development of novel synthesis pathways towards nitrogen-doped nanoporous noble carbon materials with precise design on a molecular level and an understanding of their structure-related performance in energy and environmental applications, namely gas adsorption and electrochemical energy storage.
A template-free synthesis approach towards nitrogen-doped noble microporous carbon materials with high pyrazinic nitrogen content and C2N-type stoichiometry was established via thermal condensation of a hexaazatriphenylene derivative. The materials exhibited high uptake of guest molecules, such as H2O and CO2 at low concentrations, as well as moderate CO2/N2 selectivities. In the following step, the CO2/N2 selectivity was enhanced towards molecular sieving of CO2 via kinetic size exclusion of N2. The precise control over the degree of condensation, and thus over the atomic construction and porosity of the resulting materials, led to remarkable CO2/N2 selectivities, CO2 capacities, and heats of CO2 adsorption. The ultrahydrophilic nature of the pore walls and the narrow microporosity of these carbon materials served as an ideal basis for the investigation of interface effects with guest molecules more polar than CO2, namely H2O and NH3.
H2O vapor physisorption measurements, as well as NH3 temperature-programmed desorption and thermal response measurements, showed exceptionally high affinity towards H2O vapor and NH3 gas. Another series of nitrogen-doped carbon materials was synthesized by direct condensation of a pyrazine-fused conjugated microporous polymer, and their structure-related performance in electrochemical energy storage, namely as anode materials for sodium-ion batteries, was investigated.
All in all, the findings in this thesis exemplify the value of molecularly designed nitrogen-doped carbon materials with remarkable heteroatom content implemented as well-defined structural motifs. The simultaneous adjustment of the porosity renders these materials suitable candidates for fundamental studies of the interactions between nitrogen-doped carbon materials and different guest species.
Online hate is a topic that has received considerable interest lately, as online hate represents a risk to self-determination and peaceful coexistence in societies around the globe. However, not much is known about the explanations for adolescents posting or forwarding hateful online material or how adolescents cope with this newly emerging online risk. Thus, we sought to better understand the relationship between a bystander to and perpetrator of online hate, and the moderating effects of problem-focused coping strategies (e.g., assertive, technical coping) within this relationship. Self-report questionnaires on witnessing and committing online hate and assertive and technical coping were completed by 6829 adolescents between 12 and 18 years of age from eight countries. The results showed that increases in witnessing online hate were positively related to being a perpetrator of online hate. Assertive and technical coping strategies were negatively related with perpetrating online hate. Bystanders of online hate reported fewer instances of perpetrating online hate when they reported higher levels of assertive and technical coping strategies, and more frequent instances of perpetrating online hate when they reported lower levels of assertive and technical coping strategies. In conclusion, our findings suggest that, if effective, prevention and intervention programs that target online hate should consider educating young people about problem-focused coping strategies, self-assertiveness, and media skills. Implications for future research are discussed.
Hepcidin-25 (Hep-25) plays a crucial role in the control of iron homeostasis. Since the dysfunction of the hepcidin pathway leads to multiple diseases as a result of iron imbalance, hepcidin represents a potential target for the diagnosis and treatment of disorders of iron metabolism. Despite intense research over the last decade aimed at developing a selective immunoassay for the diagnosis and treatment of iron disorders and at better understanding the ferroportin-hepcidin interaction, questions remain. The key to resolving these underlying questions is exact knowledge of the 3D structure of native Hep-25. Since it was determined that the N-terminus, which is responsible for the bioactivity of Hep-25, contains a small Cu(II)-binding site known as the ATCUN motif, it was assumed that the Hep-25-Cu(II) complex is the native, bioactive form of hepcidin. This structure has thus far not been elucidated in detail. Owing to the lack of structural information on metal-bound Hep-25, little is known about its possible biological role in iron metabolism. Therefore, this work focuses on structurally characterizing metal-bound Hep-25 by NMR spectroscopy and molecular dynamics simulations. For the present work, a protocol was developed to prepare and purify properly folded Hep-25 in high quantities. In order to overcome the low solubility of Hep-25 at neutral pH, we introduced the C-terminal DEDEDE solubility tag. The metal binding was investigated through a series of NMR spectroscopic experiments to identify the most affected amino acids that mediate metal coordination. Based on the obtained NMR data, a structural calculation was performed in order to generate a model structure of the Hep-25-Ni(II) complex. The DEDEDE tag was excluded from the structural calculation due to a lack of NMR restraints. The dynamic nature and fast exchange of some of the amide protons with solvent reduced the overall number of NMR restraints needed for a high-quality structure.
The NMR data revealed that the 20 C-terminal Hep-25 amino acids experienced no significant conformational changes, compared to published results, as a result of a pH change from pH 3 to pH 7 and of metal binding. A 3D model of the Hep-25-Ni(II) complex was constructed from NMR data recorded for the hexapeptide-Ni(II) complex and the Hep-25-DEDEDE-Ni(II) complex in combination with the fixed conformation of the 19 C-terminal amino acids. The NMR data of the Hep-25-DEDEDE-Ni(II) complex indicate that the ATCUN motif moves independently from the rest of the structure. The 3D model structure of metal-bound Hep-25 allows future work to elucidate hepcidin’s interaction with its receptor ferroportin and should serve as a starting point for the development of antibodies with improved selectivity.
Since 2003, the political landscape of Iraq has changed considerably, initiating a process of reshaping the Iraqi legal order. The Iraqi constitution of 2005, for the first time in Iraq’s history, establishes Islam and democracy as two fundamental principles to be observed side by side in legislation. Despite this significant change in the Iraqi legal system and considerable international developments in private international law and international civil procedure law (PIL/ICPL), the statutory rules on PIL/ICPL in Iraq, contained mainly in the Iraqi Civil Code of 1951, remain in force. This work was therefore written to advocate a reform of Iraqi PIL/ICPL.
The work is the first comprehensive academic study dealing with the current content and the future reform of Iraqi private international law and international civil procedure law (PIL/ICPL).
The author provides an overall survey of the Iraqi private international and civil procedure law currently in force, with occasional selective reference to German, Islamic, Turkish, and Tunisian law, points out its weaknesses, and submits corresponding reform proposals.
Because of the particular importance of international contract law for the economy of Iraq, and in part also for Germany, the author gives a closer overview of Iraqi international contract law and at the same time underlines its need for reform.
The presentation of the important developments in German-European law, in traditional Islamic law, and in Turkish and Tunisian private international and civil procedure law in the second chapter serves as a foundation that can be drawn upon in reforming Iraqi PIL/ICPL. Since knowledge of Islamic law is not necessarily part of legal studies, Islamic law is also presented with regard to its origins and its sources of law.
At the end of the work, a draft of a federal statute on private international law in Iraq is set out which, within the framework of the Iraqi constitution, is compatible with both Islam and democracy.
In light of the debate on the consequences of competitive contracting out of traditionally public services, this research compares two mechanisms used to allocate funds in development cooperation—direct awarding and competitive contracting out—aiming to identify their potential advantages and disadvantages.
Agency theory is applied within the framework of rational-choice institutionalism to study the institutional arrangements that surround the two different money allocation mechanisms, to identify the incentives they create for the behavior of individual actors in the field, and to examine how these then translate into measurable differences in the managerial quality of development aid projects. In this work, project management quality is seen as an important determinant of overall project success.
For data-gathering purposes, the German development agency, the Gesellschaft für Internationale Zusammenarbeit (GIZ), is used due to its unique way of working. Whereas the majority of projects receive funds via a direct-award mechanism, there is a commercial department, GIZ International Services (GIZ IS), that has to compete for project funds.
The data concerning project management practices on GIZ and GIZ IS projects was gathered via a web-based, self-administered survey of project team leaders. Principal component analysis was applied to reduce the dimensionality of the independent variable to a total of five components of project management. Furthermore, multiple regression analysis identified the differences between the separate components on these two project types. Enriched by qualitative data gathered via interviews, this thesis offers insights into everyday managerial practices in development cooperation and identifies the advantages and disadvantages of the two allocation mechanisms.
The thesis first reiterates the responsibility of donors and implementers for overall aid effectiveness. It shows that the mechanism of competitive contracting out leads to better oversight and control of implementers, fosters deeper cooperation between implementers and beneficiaries, and has the potential to strengthen ownership by recipient countries. On the other hand, it shows that evaluation quality does not benefit greatly from the competitive allocation mechanism and that the quality of the component knowledge management and learning is better when direct-award mechanisms are used. This raises questions about the limited possibilities of actors in the field to learn from past mistakes and to incorporate the findings into future interventions, which is one of the fundamental issues of aid effectiveness. Finally, the findings show immense deficiencies with regard to the oversight and control of individual projects in German development cooperation.
The 11th Herbsttreffen Patholinguistik, with the main topic »Gut gestimmt: Diagnostik und Therapie bei Dysphonie« (Well Tuned: Diagnosis and Therapy of Dysphonia), took place on 18 November 2017 in Potsdam. The Herbsttreffen has been organized annually since 2007 by the Verband für Patholinguistik e.V. (vpl). The present proceedings contain the keynote lectures on the main topic as well as the contributions to the short presentations »Spektrum Patholinguistik« and the poster session on further topics from speech and language therapy research and practice.
Stimmstörungen bei Kindern
(2019)
Stimmstörungen
(2019)
Peer cultural socialisation
(2019)
This study investigated how peers can contribute to cultural minority students’ cultural identity, life satisfaction, and school values (school importance, utility, and intrinsic values) by talking about cultural values, beliefs, and behaviours associated with heritage and mainstream culture (peer cultural socialisation). We further distinguished between heritage and mainstream identity as two separate dimensions of cultural identity. Analyses were based on self-reports of 662 students of the first, second, and third migrant generation in Germany (Mean age = 14.75 years, 51% female). Path analyses revealed that talking about heritage culture with friends was positively related to heritage identity. Talking about mainstream culture with friends was negatively associated with heritage identity, but positively with mainstream identity as well as school values. Both dimensions of cultural identity related to higher life satisfaction and more positive school values. As expected, heritage and mainstream identity mediated the link between peer cultural socialisation and adjustment outcomes. Findings highlight the potential of peers as socialisation agents to help promote cultural belonging as well as positive adjustment of cultural minority youth in the school context.
The Role of Bargaining Power
(2019)
Neoclassical theory omits the role of bargaining power in the determination of wages. As a result, the importance of changes in the bargaining position for the development of income shares in the last decades is underestimated. This paper presents a theoretical argument why collective bargaining power is a main determinant of workers’ share of income and how its decline contributed to the severe changes in the distribution of income since the 1980s. In order to confirm this hypothesis, a panel data regression analysis is performed that suggests that unions significantly influence the distribution of income in developed countries.
The Himalayas are among the regions most dependent on, but also most frequently endangered by, changing meltwater resources. This mountain belt hosts the highest peaks on Earth, holds the largest reserve of ice outside the polar regions, and has seen a rapidly growing population in recent decades. One source of hazard in particular has attracted scientific research in the past two decades: glacial lake outburst floods (GLOFs) occur rarely, but mostly with fatal and catastrophic consequences for downstream communities and infrastructure. Such GLOFs can suddenly release several million cubic meters of water from naturally impounded meltwater lakes. Glacial lakes have grown in number and size owing to ongoing glacial mass losses in the Himalayas. Theory holds that enhanced meltwater production may increase GLOF frequency, but this notion has never been tested. The key challenge in testing it is the high altitude of >4000 m at which these lakes occur, which makes field work impractical. Moreover, flood waves can attenuate rapidly in mountain channels downstream, so that many GLOFs have likely gone unnoticed in past decades. Our knowledge of GLOFs is hence likely biased towards larger, destructive cases, which challenges a detailed quantification of their frequency and their response to atmospheric warming. Robustly quantifying the magnitude and frequency of GLOFs is essential for risk assessment and management along mountain rivers, not least to implement their return periods in building design codes.
Motivated by this limited knowledge of GLOF frequency and hazard, I developed an algorithm that efficiently detects GLOFs from satellite images. In essence, this algorithm classifies land cover in 30 years (~1988–2017) of continuously recorded Landsat images over the Himalayas, and calculates likelihoods for rapidly shrinking water bodies in the stack of land cover images. I visually assessed such detected tell-tale sites for sediment fans in the river channel downstream, a second key diagnostic of GLOFs. Rigorous tests and validation with known cases from roughly 10% of the Himalayas suggested that this algorithm is robust against frequent image noise, and hence capable of identifying previously unknown GLOFs. Extending the search to the entire Himalayan mountain range revealed 22 newly detected GLOFs. I thus more than doubled the existing GLOF count from the 16 cases known since 1988, and found a dominant cluster of GLOFs in the Central and Eastern Himalayas (Bhutan and Eastern Nepal), compared to the more rarely affected ranges in the North. Yet, the total of 38 GLOFs showed no change in annual frequency, so that the activity of GLOFs per unit glacial lake area has decreased in the past 30 years. I discussed possible drivers for this finding, but left a further attribution to distinct GLOF-triggering mechanisms open to future research.
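The core detection logic, scanning a per-lake time series of land-cover classifications for an abrupt and persistent loss of water, can be caricatured on a single lake; the window size and thresholds below are invented for illustration and are not the parameters used in the thesis.

```python
def detect_sudden_drainage(water_series, window=3, drop=0.6):
    """Flag a candidate outburst if the fraction of 'water' observations
    falls abruptly between two adjacent windows and stays low afterwards.

    water_series: chronological booleans, True = lake classified as water
    in that (cloud-free) satellite scene.
    Returns the index of the suspected drop, or None.
    """
    n = len(water_series)
    for i in range(window, n - window + 1):
        before = sum(water_series[i - window:i]) / window
        after = sum(water_series[i:i + window]) / window
        persistently_low = sum(water_series[i:]) / (n - i) <= 1 - drop
        if before - after >= drop and persistently_low:
            return i
    return None

stable_lake = [True] * 12                # lake present throughout
drained_lake = [True] * 6 + [False] * 6  # abrupt, persistent loss
noisy_lake = [True, True, False, True, True, True,
              False, True, True, True, True, True]  # classification noise

# Only the drained lake is flagged; isolated misclassifications are ignored.
print(detect_sudden_drainage(stable_lake),
      detect_sudden_drainage(drained_lake),
      detect_sudden_drainage(noisy_lake))
```

Requiring the water fraction to stay low after the drop is what separates a drained lake from transient cloud or classification noise, which is the role the sediment-fan check plays in the actual workflow.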
This updated GLOF frequency was the key input for assessing GLOF hazard for the entire Himalayan mountain belt and several subregions. I used standard definitions from flood hydrology, describing hazard as the annual probability of exceeding a given flood peak discharge [m3 s-1] at the breach location. I coupled the empirical frequency of GLOFs per region to simulations of physically plausible peak discharges from all ~5,000 existing lakes in the Himalayas. Using an extreme-value model, I could hence calculate flood return periods. I found that the contemporary 100-year GLOF discharge (the flood level that is reached or exceeded on average once in 100 years) is 20,600+2,200/–2,300 m3 s-1 for the entire Himalayas. Given the spatial and temporal distribution of historic GLOFs, contemporary GLOF hazard is highest in the Eastern Himalayas, and lower in regions where GLOFs are rarer. I also calculated GLOF hazard for some 9,500 overdeepenings, which could become exposed and fill with water if all Himalayan glaciers eventually melt. Assuming that the current GLOF rate remains unchanged, the 100-year GLOF discharge could double (41,700+5,500/–4,700 m3 s-1), while the regional GLOF hazard may increase most in the Karakoram.
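The return-period arithmetic can be sketched with a Gumbel (Type I extreme-value) distribution fitted by the method of moments. This is a generic stand-in, not the extreme-value model of the thesis, and the synthetic discharges below are invented.

```python
import math
import random

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def fit_gumbel(annual_maxima):
    """Method-of-moments fit of a Gumbel distribution to annual maxima."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    scale = math.sqrt(6 * var) / math.pi
    loc = mean - EULER_GAMMA * scale
    return loc, scale

def return_level(loc, scale, T):
    """Discharge reached or exceeded on average once every T years:
    the (1 - 1/T) quantile of the fitted Gumbel distribution."""
    return loc - scale * math.log(-math.log(1 - 1 / T))

# Synthetic annual-maximum peak discharges (m^3/s), drawn from a known
# Gumbel distribution via inverse-transform sampling.
rng = random.Random(42)
true_loc, true_scale = 5000.0, 2000.0
maxima = [true_loc - true_scale * math.log(-math.log(rng.random()))
          for _ in range(30)]

loc, scale = fit_gumbel(maxima)
q10, q100 = return_level(loc, scale, 10), return_level(loc, scale, 100)
print(round(q10), round(q100))  # the 100-year level exceeds the 10-year level
```

The 100-year level is simply a high quantile of the fitted distribution, which is why the estimated GLOF rate per region and the simulated peak discharges together determine the regional hazard.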
To conclude, these three stages–from GLOF detection, to analysing their frequency and estimating regional GLOF hazard–provide a framework for modern GLOF hazard assessment. Given the rapidly growing population, infrastructure, and hydropower projects in the Himalayas, this thesis assists in quantifying the purely climate-driven contribution to hazard and risk from GLOFs.
A new micro/mesoporous hybrid clay nanocomposite prepared from kaolinite clay, Carica papaya seeds, and ZnCl2 via calcination in an inert atmosphere is presented. Regardless of the synthesis temperature, the specific surface area of the nanocomposite material is between ≈150 and 300 m2/g. The material contains both micro- and mesopores in roughly equal amounts. X-ray diffraction, infrared spectroscopy, and solid-state nuclear magnetic resonance spectroscopy suggest the formation of several new bonds in the materials upon reaction of the precursors, thus confirming the formation of a new hybrid material. Thermogravimetric analysis/differential thermal analysis and elemental analysis confirm the presence of carbonaceous matter. The new composite is stable up to 900 °C and is an efficient adsorbent for the removal of a water micropollutant, 4-nitrophenol, and a pathogen, E. coli, from an aqueous medium, suggesting applications in water remediation are feasible.
This article traces a semantic network around the concept of ‘touching’ in Plato’s Symposium. The verb ἅπτομαι serves as a central relay, mediating between the much-discussed ‘philosophical content’ of the text and its frame narrative, whose performative contribution is usually underestimated. Tracing the constellations of touching shows that touching, as touching, cannot be grasped conceptually; it eludes the appropriating grasp. Touching is precisely not a concept. The Symposium must therefore approach touch in a different way, namely by touching, for which the narratological construction of the text is of decisive importance. It practices philo-logy, that is, it uses the power of words, which arises precisely from the fact that they act between the participants in a very precise way out of a constitutive distance.
Pedagogy of integrity
(2019)
The master thesis “Pedagogy of Integrity: an Analysis of the Conceptualization and Implementation of the MA Program Anglophone Modernities in Literature and Culture” deals with colonial patterns in higher education practices. It provides a theoretical framework for the decolonization of academic teaching-learning practices on the micro- and meso-didactic levels and suggests concrete solutions for decolonized educational practices, especially for degree programs whose content focuses on postcolonial issues. In addition, through an exemplary analysis of the conceptualization and implementation of the MA Program Anglophone Modernities in Literature and Culture, the work explores patterns of colonial heritage as well as the will to decolonize them. The central thesis is that (higher) education should be liberated from colonial patterns so that real participation of all students in collective knowledge production becomes possible.
In the theoretical elaboration, different concepts of critical and radical pedagogy, e.g. those of Paulo Freire and bell hooks, in combination with concepts of modalities of adult learning (e.g. transformative learning) and approaches that seek to combine learning and social justice (e.g. Social Justice Learning), are systematized and explored for their substance and their potential to contribute to a catalogue of criteria for decolonized educational practices. Attention is also paid to higher education research showing that students who belong to underrepresented groups at university (non-traditional students) in their societies of origin face more difficulties and discrimination as international students at Western universities than ‘traditional’ international students do. Based on the theoretical elaboration, the work claims that:
(1) the homogeneity-preserving dynamics, found in Western colleges, are an inheritance of colonial time and mindsets, which continue to function in education and multiply social inequality in the context of internationalization, migration, and participation;
(2) all, but especially those higher educational programs, dealing explicitly with inequality phenomena, social and cultural diversity, power relations and issues of domination, as well as with postcolonial criticism, should establish premises of equity and provide de-facto equal opportunities for participation through embodiment of social justice as a way to remain credible;
(3) decolonization of the educational space can be enabled through appropriate didactic action on both the meso-didactic (institution) and micro-didactic (teaching-learning arrangements) levels of agency, given sufficient will and willingness on the part of the responsible professionals.
By examining representative documents published by the MA Program Anglophone Modernities in Literature and Culture using the close-reading method, as well as through an exemplary analysis of the concept of one of the program's teaching-learning events and of a student survey, the work examines to what extent the master's degree program represents a space of decolonised higher education. The results of the analysis indicate the need for a stronger normative value positioning of the study program, while many practices showing commitment to participation, social justice and diversity were identified.
In the last chapter, the results of the theoretical elaboration and of the program analysis are synthesized into the concept of an integrity-based pedagogy, called Pedagogy of Integrity, and suggestions are formulated for teaching practice in the study program that are meant to help overcome the discrepancy between will and practice on the way towards a decolonised educational space.
The individual’s mental lexicon comprises all known words as well as related information on semantics, orthography and phonology. Moreover, entries connect due to similarities in these language domains, building a large network structure. Access to lexical information is crucial for the processing of words and sentences. Thus, a lack of information inhibits retrieval and can cause language-processing difficulties. Hence, the composition of the mental lexicon is essential for language skills, and its assessment is a central topic of linguistic and educational research.
In early childhood, measurement of the mental lexicon is uncomplicated, for example through parental questionnaires or the analysis of speech samples. However, with growing content the measurement becomes more challenging: with more and more words in the mental lexicon, the inclusion of all possibly known words into a test or questionnaire becomes impossible. That is why there is a lack of methods to assess the mental lexicon of school children and adults. For the same reason, there are only few findings on the course of lexical development during the school years as well as on its specific effect on other language skills. This dissertation aims to close this gap by pursuing two major goals: First, I wanted to develop a method to assess lexical features, namely lexicon size and lexical structure, for children of different age groups. Second, I aimed to describe the results of this method in terms of the development of lexicon size and structure. The findings were intended to help understand the mechanisms of lexical acquisition and to inform theories of vocabulary growth.
The approach is based on the dictionary method, in which a sample of words out of a dictionary is tested and the results are projected onto the whole dictionary to determine an individual's lexicon size. In the present study, the childLex corpus, a written-language corpus for children in German, served as the basis for lexicon size estimation. The corpus is assumed to comprise all words children attending primary school could know. Testing a sample of words out of the corpus enables projection of the results onto the whole corpus. For this purpose, a vocabulary test based on the corpus was developed. Afterwards, the test performance of virtual participants was simulated by drawing lexicons of different sizes from the corpus and checking whether the test items were included in the lexicon or not. This established the relation between test performance and total lexicon size, which could then be transferred to a sample of real participants. Besides lexicon size, lexical content could be approximated with this approach and analyzed in terms of lexical structure.
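The projection step described above can be sketched in a few lines. The toy corpus, the test items, and the frequency-ranked simplification below are illustrative assumptions, not the childLex data or the dissertation's actual procedure:

```python
def simulate_performance(corpus, test_items, lexicon_size):
    """Virtual participant: 'knows' the lexicon_size most frequent corpus
    words (a simplification of drawing lexicons from the corpus)."""
    lexicon = set(corpus[:lexicon_size])
    return sum(item in lexicon for item in test_items) / len(test_items)

def estimate_lexicon_size(corpus, test_items, observed_score, candidate_sizes):
    """Invert the simulation: pick the lexicon size whose simulated test
    score is closest to the observed score of a real participant."""
    return min(candidate_sizes,
               key=lambda n: abs(simulate_performance(corpus, test_items, n)
                                 - observed_score))

corpus = [f"word{i}" for i in range(10_000)]  # hypothetical frequency-ranked corpus
test_items = corpus[::500][:20]               # 20 test items spread across ranks
score = simulate_performance(corpus, test_items, 5_000)  # 'participant' knows half
estimate = estimate_lexicon_size(corpus, test_items, score,
                                 range(1_000, 10_001, 1_000))
```

Because the virtual participant knew the 5,000 most frequent words, the inversion recovers a lexicon size of 5,000 from the test score alone.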
To pursue the presented aims and establish the sampling method, I conducted three consecutive studies. Study 1 comprises the development of a vocabulary test based on the childLex corpus. The testing was based on the yes/no format and included three versions for different age groups. The validation, based on the Rasch model, shows that it is a valid instrument to measure vocabulary for primary school children in German. In Study 2, I established the method to estimate lexicon sizes and present results on lexical development during primary school. Plausible results demonstrate that lexical growth follows a quadratic function, starting with about 6,000 words at the beginning of school and reaching about 73,000 words on average for young adults. Moreover, the study revealed large interindividual differences. Study 3 focused on the analysis of network structures in the mental lexicon, and of their development, based on orthographic similarities. It demonstrates that these networks possess small-world characteristics and decrease in interconnectivity with age.
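The orthographic-similarity networks of Study 3 can be illustrated with a minimal sketch. It assumes, as is common in this literature though not specified here, that two words are neighbours if they differ by a single letter substitution, insertion, or deletion; the word list is invented:

```python
def edit_distance_one(a, b):
    """True if the two words differ by exactly one substitution,
    insertion, or deletion."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    # deletion check: removing one letter from the longer word yields the shorter
    return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))

def orthographic_network(words):
    """Adjacency lists of the orthographic-similarity graph."""
    return {w: [v for v in words if v != w and edit_distance_one(w, v)]
            for w in words}

net = orthographic_network(["cat", "hat", "car", "cart", "dog"])
```

Here "cat" is connected to "hat", "car", and "cart", while "dog" is an isolated node; network measures such as clustering or connectivity can then be computed on such adjacency lists.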
Taken together, this dissertation provides an innovative approach for the assessment and description of the development of the mental lexicon from primary school onwards. The studies provide up-to-date results on lexical acquisition in different age groups that were missing before. They impressively show the importance of this period and reveal extensive interindividual differences in lexical development. One central aim of future research must be to address the causes and prevention of these differences. In addition, the application of the method in further research (e.g. adaptation for other target groups) and for teaching purposes (e.g. adaptation of texts for different target groups) appears promising.
The sensitivity of fluvial systems to tectonic and climatic boundary conditions allows us to use the geomorphic and stratigraphic records as quantitative archives of past climatic and tectonic conditions. Thus, fluvial terraces that form on alluvial fans and floodplains as well as the rate of sediment export to oceanic and continental basins are commonly used to reconstruct paleoenvironments. However, we currently lack a systematic and quantitative understanding of the transient evolution of fluvial systems and their associated sediment storage and release in response to changes in base level, water input, and sediment input. Such knowledge is necessary to quantify past environmental change from terrace records or sedimentary deposits and to disentangle the multiple possible causes for terrace formation and sediment deposition. Here, we use a set of seven physical experiments to explore terrace formation and sediment export from a single, braided channel that is perturbed by changes in upstream water discharge or sediment supply, or through downstream base-level fall. Each perturbation differently affects (1) the geometry of terraces and channels, (2) the timing of terrace cutting, and (3) the transient response of sediment export from the basin. In general, an increase in water discharge leads to near-instantaneous channel incision across the entire fluvial system and consequent local terrace cutting, thus preserving the initial channel slope on terrace surfaces, and it also produces a transient increase in sediment export from the system. In contrast, a decreased upstream sediment-supply rate may result in longer lag times before terrace cutting, leading to terrace slopes that differ from the initial channel slope, and also lagged responses in sediment export. Finally, downstream base-level fall triggers the upstream propagation of a diffuse knickzone, forming terraces with upstream-decreasing ages. 
The slope of terraces triggered by base-level fall mimics that of the newly adjusted active channel, whereas slopes of terraces triggered by a decrease in upstream sediment discharge or an increase in upstream water discharge are steeper compared to the new equilibrium channel. By combining fill-terrace records with constraints on sediment export, we can distinguish among environmental perturbations that would otherwise remain unresolved when using just one of these records.
Alexander von Humboldts und Aimé Bonplands Pflanzen im Herbarium der Universität Halle-Wittenberg
(2019)
The plant collection (herbarium) of the University of Halle-Wittenberg contains a considerable number of plant specimens collected by Alexander von Humboldt and Aimé Bonpland during their American journey (1799–1804). We explain the scientific significance of the herbarium specimens and how they found their way to Halle.
Accusative Unaccusatives
(2019)
In this study, we analyze the forecast accuracy and profitability of buy recommendations published in five major German financial magazines for private households based on fundamental analysis. The results show a high average forecast accuracy but with a very high standard deviation, which indicates poor forecast accuracy with regard to individual stocks. The recommendation profitability slightly exceeds the performance of the MSCI World index. Considering the involved risk, which is represented by a high standard deviation, the excess returns appear to be insufficient.
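The trade-off reported here, a modest average excess return eroded by its dispersion, can be illustrated with hypothetical numbers; the returns below are invented for the sketch and are not the study's data:

```python
from statistics import mean, stdev

def excess_return_stats(recommendation_returns, benchmark_returns):
    """Average excess return over the benchmark and its standard deviation,
    the dispersion that makes a small average edge unattractive once risk
    is considered."""
    excess = [r - b for r, b in zip(recommendation_returns, benchmark_returns)]
    return mean(excess), stdev(excess)

recs = [0.12, -0.30, 0.45, 0.02, 0.18]  # hypothetical recommendation returns
bench = [0.08] * 5                       # hypothetical flat benchmark return
avg_excess, sd_excess = excess_return_stats(recs, bench)
```

In this toy case the average excess return is 1.4 percentage points, but its standard deviation is roughly twenty times larger, mirroring the pattern the study describes.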
The growing global demand for meat is being thwarted by shrinking agricultural areas and conflicts with efforts to mitigate methane emissions and to improve public health. Cultured meat could contribute to solving these problems, but will such meat be marketable, competitive, and accepted? Using the Delphi method, this study explored the potential development of cultured meat by 2027. Despite the acknowledged urgency to develop sustainable meat alternatives, participants doubt that the challenges regarding mass production, production costs, and consumer acceptance will be overcome by 2027. Considering the noticeable impacts of global warming, further research and development as well as a change in consumer perceptions are indispensable.
This paper challenges the solely rational view of the scenario technique as a strategy and foresight tool designed to cope with uncertainty by considering multiple possible future states. The paper employs an affordance-based view that allows for the identification and structuring of hidden, emergent attributes of the scenario technique beyond the intended ones. The suggested framework distinguishes between affordances (1) that are intended by the organization and relate to its goals, (2) that emergently generate organizational benefits, and (3) that do not relate to organizational but individual interests. Also, constraints in the use of scenarios are discussed. Affordance theory’s specific lens shows that the emergence of such attributes depends on the users’ specific intentions.
In 2018, the University of Potsdam successfully participated in the Times Higher Education Ranking (THE Ranking) for the second time. To this end, the publication output was again determined, including the prevalence of author IDs such as ORCiD, Researcher ID, or Google Scholar ID. The measures derived from the findings of the first bibliometric output analysis were then assessed for their effectiveness. This article compares the results of both analyses and discusses the developments and implications since the first measures were taken.
Es darf gekocht werden
(2019)
An eight-member delegation of the University of Potsdam (UP) visited Tel Aviv University (TAU) in Israel from 18 to 21 November 2018. The cooperation between the two institutions was intensified through a staff exchange, allowing UP staff to meet their counterparts at TAU and exchange ideas. This report is based on conversations with three of the five library directors as well as with two further colleagues from acquisitions and cataloguing. Various topics from the library field were discussed, along with the respective views and the state of affairs at TAU and UP. The report addresses the topics of library systems, open access, and the library as a space.
Additive manufacturing (AM) by laser powder-bed fusion (L-PBF) offers new prospects for the design of parts and therefore enables the production of lattice structures. These lattice structures are to be implemented in various industrial applications (e.g. gas turbines) for reasons of material savings or cooling channels. However, internal defects, residual stress, and structural deviations from the nominal geometry are unavoidable.
In this work, the structural integrity of lattice structures manufactured by means of L-PBF was non-destructively investigated using a multiscale approach.
A workflow for quantitative 3D powder analysis in terms of particle size, particle shape, particle porosity, inter-particle distance and packing density was established. Synchrotron computed tomography (CT) was used to correlate the packing density with the particle size and particle shape. It was also observed that at least about 50% of the powder porosity was released during production of the struts.
Struts are the basic components of lattice structures and were investigated by means of laboratory CT. The focus was on the influence of the build angle on part porosity and surface quality. The surface-topography analysis was advanced by the quantitative characterisation of re-entrant surface features. This characterisation was compared with conventional surface parameters, showing their complementary information but also the need for AM-specific surface parameters.
The mechanical behaviour of the lattice structure was investigated with in-situ CT under compression and subsequent digital volume correlation (DVC). The deformation was found to be knot-dominated; the lattice therefore folds unit-cell layer by unit-cell layer.
The residual stress in such lattice structures was determined experimentally for the first time. Neutron diffraction was used for the non-destructive 3D stress investigation. The principal stress directions and values were determined as a function of the number of measured directions. While a significantly uniaxial stress state was found in the strut, a more hydrostatic stress state was found in the knot. In both cases, strut and knot, at least seven directions were needed to find reliable principal stress directions.
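The need for at least six (here seven) measurement directions follows from the six independent components of the symmetric stress tensor. The generic least-squares sketch below illustrates the idea; it works directly with normal stresses rather than the lattice strains actually measured by diffraction (elastic constants omitted), and the stress values are invented:

```python
import numpy as np

def stress_tensor_from_directions(directions, normal_stresses):
    """Least-squares fit of the six independent components of the symmetric
    stress tensor from normal stresses measured along unit directions
    (at least six linearly independent directions are required)."""
    A = np.array([[n[0]**2, n[1]**2, n[2]**2,
                   2*n[0]*n[1], 2*n[0]*n[2], 2*n[1]*n[2]] for n in directions])
    s11, s22, s33, s12, s13, s23 = np.linalg.lstsq(
        A, np.asarray(normal_stresses), rcond=None)[0]
    return np.array([[s11, s12, s13],
                     [s12, s22, s23],
                     [s13, s23, s33]])

def principal_stresses(sigma):
    """Principal stress values and directions via eigendecomposition."""
    values, directions = np.linalg.eigh(sigma)
    return values, directions

# hypothetical check: recover a known stress state from seven directions
rng = np.random.default_rng(0)
true_sigma = np.array([[200.,  30.,   0.],
                       [ 30.,  50.,  10.],
                       [  0.,  10., -80.]])  # MPa, invented
dirs = rng.normal(size=(7, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
measured = np.array([n @ true_sigma @ n for n in dirs])
fitted = stress_tensor_from_directions(dirs, measured)
```

With seven well-spread directions the fit is overdetermined, which is what makes the recovered principal directions reliable.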
Supermassive black holes reside in the hearts of almost all massive galaxies. Their evolutionary path seems to be strongly linked to the evolution of their host galaxies, as implied by several empirical relations between the black hole mass (M_BH) and different host galaxy properties. The physical driver of this co-evolution is, however, still not understood. More mass measurements over homogeneous samples and a detailed understanding of systematic uncertainties are required to fathom the origin of the scaling relations.
In this thesis, I present the mass estimations of supermassive black holes in the nuclei of one late-type and thirteen early-type galaxies. Our SMASHING sample extends from the intermediate to the massive galaxy mass regime and was selected to fill in gaps in the number of galaxies along the scaling relations. All galaxies were observed at high spatial resolution, making use of the adaptive-optics mode of integral field unit (IFU) instruments on state-of-the-art telescopes (SINFONI, NIFS, MUSE). I extracted the stellar kinematics from these observations and constructed dynamical Jeans and Schwarzschild models to estimate the mass of the central black holes robustly. My new mass estimates increase the number of early-type galaxies with measured black hole masses by 15%. The seven measured galaxies with nuclear light deficits ('cores') augment the sample of cored galaxies with measured black holes by 40%. Next to determining massive black hole masses, evaluating the accuracy of black hole masses is crucial for understanding the intrinsic scatter of the black hole-host galaxy scaling relations. I tested various sources of systematic uncertainty on my derived mass estimates.
The M_BH estimate of the single late-type galaxy of the sample yielded an upper limit, which I could constrain very robustly. I tested the effects of dust, mass-to-light (M/L) ratio variation, and dark matter on the measured M_BH. Based on these tests, the typically assumed constant M/L ratio can be an adequate assumption to account for the small amounts of dark matter in the center of that galaxy. I also tested the effect of a variable M/L on the M_BH measurement in a second galaxy. By considering stellar M/L variations in the dynamical modeling, the measured M_BH decreased by 30%. In the future, this test should be performed on additional galaxies to learn how the assumption of a constant M/L biases the estimated black hole masses.
Based on our upper-limit mass measurement, I confirm previous suggestions that resolving the predicted black hole sphere of influence is not a strict condition for measuring black hole masses. Instead, it is only a rough guide for the detection of the black hole if high-quality, high signal-to-noise IFU data are used for the measurement. About half of our sample consists of massive early-type galaxies which show nuclear surface-brightness cores and signs of triaxiality. While these types of galaxies are typically modeled with axisymmetric modeling methods, the effects on M_BH are not yet well studied. The massive galaxies of our sample are well suited to test the effect of different stellar dynamical models on the measured black hole mass in evidently triaxial galaxies. I have compared spherical Jeans and axisymmetric Schwarzschild models and will add triaxial Schwarzschild models to this comparison in the future. The constructed Jeans and Schwarzschild models mostly disagree with each other and cannot reproduce many of the triaxial features of the galaxies (e.g., nuclear sub-components, prolate rotation). The consequence of the axisymmetric-triaxial assumption for the accuracy of M_BH and its impact on the black hole-host galaxy relation needs to be carefully examined in the future.
In the sample of galaxies with published M_BH, we find measurements based on different dynamical tracers, requiring different observations, assumptions, and methods. Crucially, different tracers do not always give consistent results. I have used two independent tracers (cold molecular gas and stars) to estimate M_BH in a regular galaxy of our sample. While the two estimates are consistent within their errors, the stellar-based measurement is twice as high as the gas-based one. Similar trends have also been found in the literature. Therefore, a rigorous test of the systematics associated with the different modeling methods is required in the future. I caution that the effects of different tracers (and methods) must be taken into account when discussing the scaling relations.
I conclude this thesis by comparing my galaxy sample with the compilation of galaxies with measured black holes from the literature, also adding six SMASHING galaxies that were published outside of this thesis. None of the SMASHING galaxies deviates significantly from the literature measurements. Their inclusion among the published early-type galaxies causes a change towards a shallower slope of the M_BH-effective velocity dispersion relation, which is mainly driven by the massive galaxies of our sample. More unbiased and homogeneous measurements are needed in the future to determine the shape of the relation and understand its physical origin.
Predators can have numerical and behavioral effects on prey animals. While numerical effects are well explored, the impact of behavioral effects is unclear. Furthermore, behavioral effects are generally either analyzed with a focus on single individuals or with a focus on consequences for other trophic levels. Thereby, the impact of fear on the level of prey communities is overlooked, despite potential consequences for conservation and nature management. In order to improve our understanding of predator-prey interactions, an assessment of the consequences of fear in shaping prey community structures is crucial.
In this thesis, I evaluated how fear alters prey space use, community structure and composition, focusing on terrestrial mammals. By integrating landscapes of fear in an existing individual-based and spatially-explicit model, I simulated community assembly of prey animals via individual home range formation. The model comprises multiple hierarchical levels from individual home range behavior to patterns of prey community structure and composition. The mechanistic approach of the model allowed for the identification of underlying mechanism driving prey community responses under fear.
My results show that fear modified prey space use and community patterns. Under fear, prey animals shifted their home ranges towards safer areas of the landscape. Furthermore, fear decreased the total biomass and the diversity of the prey community and reinforced shifts in community composition towards smaller animals. These effects could be mediated by an increasing availability of refuges in the landscape. Under landscape changes, such as habitat loss and fragmentation, fear intensified negative effects on prey communities. Prey communities in risky environments were subject to a non-proportional diversity loss of up to 30% if fear was taken into account. Regarding habitat properties, I found that well-connected, large safe patches can reduce the negative consequences of habitat loss and fragmentation on prey communities. Including variation in risk perception among prey animals had consequences for prey space use. Animals with a high risk perception predominantly used safe areas of the landscape, while animals with a low risk perception preferred areas with a high food availability. On the community level, prey diversity was higher in heterogeneous landscapes of fear if individuals varied in their risk perception compared to scenarios in which all individuals had the same risk perception.
Overall, my findings give a first, comprehensive assessment of the role of fear in shaping prey communities. The linkage between individual home range behavior and patterns at the community level allows for a mechanistic understanding of the underlying processes. My results underline the importance of the structure of the landscape of fear as a key driver of prey community responses, especially if the habitat is threatened by landscape changes. Furthermore, I show that individual landscapes of fear can improve our understanding of the consequences of trait variation on community structures. Regarding conservation and nature management, my results support calls for modern conservation approaches that go beyond single species and address the protection of biotic interactions.
Since German reunification, the municipal system of the state of Brandenburg has been transformed by a multitude of territorial and functional administrative reforms.
This working paper of the kommunalwissenschaftliches Institut of the University of Potsdam presents these past reforms as well as the current administrative structure and the population structure of the state of Brandenburg (as of 1 July 2018). Demographic development was, and remains, an important driver of reform. In addition, the constitutional foundations for municipal reforms in the state of Brandenburg are discussed.
Subsequently, the possible effects of the Act on the Further Development of the Municipal Level of 15 October 2018 on future reforms of Brandenburg's municipal system are discussed on the basis of a case study from the Oderlandregion model region. This act marks a turning point in Brandenburg's previous reform strategy, as reforms are for the first time to be carried out on a voluntary basis.
A network analysis in the case study focuses in particular on actor constellations in the reform process. It shows that the chief administrative officers of reform-minded municipalities exert considerable influence on decision-making processes.
Background
Postoperative delirium is a common disorder in older adults that is associated with higher morbidity and mortality, prolonged cognitive impairment, development of dementia, higher institutionalization rates, and rising healthcare costs. The probability of delirium after surgery increases with patients' age, pre-existing cognitive impairment, and comorbidities, and its diagnosis and treatment depend on the medical staff's knowledge of diagnostic criteria, risk factors, and treatment options. In this study, we will investigate whether a cross-sectoral and multimodal intervention for preventing delirium can reduce the prevalence of delirium and postoperative cognitive decline (POCD) in patients older than 70 years undergoing elective surgery. Additionally, we will analyze whether the intervention is cost-effective.
Methods
The study will be conducted at five medical centers (with two or three surgical departments each) in the southwest of Germany. The study employs a stepped-wedge design with cluster randomization of the medical centers. Measurements are performed at six consecutive points: preadmission, preoperative, and postoperative with daily delirium screening up to day 7, and POCD evaluations at 2, 6, and 12 months after surgery. The recruitment goal is to enroll 1500 patients older than 70 years undergoing elective operative procedures (cardiac, thoracic, vascular, proximal big joints and spine, genitourinary, gastrointestinal, and general elective surgery procedures).
Discussion
Results of the trial should form the basis of future standards for preventing delirium and POCD in surgical wards. Key aims are the improvement of patient safety and quality of life, as well as the reduction of the long-term risk of conversion to dementia. Furthermore, from an economic perspective, we expect benefits and decreased costs for hospitals, patients, and healthcare insurances.
Trial registration
German Clinical Trials Register, DRKS00013311. Registered on 10 November 2017.
The Collatz conjecture is a number-theoretical problem which has puzzled countless researchers using myriad approaches. Presently, there are scarcely any methodologies to describe and treat the problem from the perspective of the Algebraic Theory of Automata. Such an approach is promising with respect to facilitating the comprehension of the Collatz sequence’s "mechanics". The systematic technique of a state machine is both simpler and can be fully described by algebraic means.
The current gap in research forms the motivation behind the present contribution. The present authors are convinced that exploring the Collatz conjecture in an algebraic manner, relying on findings and fundamentals of Graph Theory and Automata Theory, will simplify the problem as a whole.
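For reference, the Collatz map itself is simple to state. The following sketch shows only the plain iteration, not the authors' automata-theoretic construction:

```python
def collatz_step(n):
    """One application of the Collatz map: halve an even n, map an odd n to 3n + 1."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def collatz_trajectory(n):
    """Iterate the map until reaching 1. Termination for every positive
    integer n is exactly what the conjecture asserts."""
    sequence = [n]
    while n != 1:
        n = collatz_step(n)
        sequence.append(n)
    return sequence
```

For example, starting from 6 the trajectory is 6, 3, 10, 5, 16, 8, 4, 2, 1; an automata-theoretic treatment would model such transitions as moves of a state machine.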
Municipal utility companies (Stadtwerke), at least those active in the electricity and gas sector, are mostly no longer organised as municipally owned operations (Eigenbetrieb) but have been spun off by the municipalities into the private-law form of a GmbH over the past two decades. In addition, these municipal companies operate in an internal energy market created by EU market liberalisation. The entrepreneurial autonomy of the Stadtwerke GmbH from political steering is reinforced by the credo of the New Steering Model, which sees entrepreneurial independence as a precondition for economic success. These framework conditions force municipal enterprises to orient themselves exclusively towards entrepreneurial and market-induced systems. That the logic of entrepreneurial action leaves no room for political steering of the companies becomes a legitimacy problem for the municipal economy, since an exclusive orientation towards the companies' profits does not legitimise the public purpose, either politically or in terms of organisational law. Orientation towards the common good is a constitutive element of municipal economic activity. The thesis advanced here is that, in this situation, the Stadtwerke allow citizen participation in order to mitigate this legitimacy deficit. Two cases are qualitatively analysed and compared: first, Stadtwerke Wolfhagen GmbH, which sought to generate acceptance for a wind farm through citizen participation; second, Stadtwerke Potsdam GmbH, which, starting from a situation described here as a PR crisis, attempted to restore legitimacy using various instruments of citizen participation.
This thesis investigates whether multilingual speakers’ use of grammatical constraints in an additional language (La) is affected by the native (L1) and non-native grammars (L2) of their linguistic repertoire.
Previous studies have used untimed measures of grammatical performance to show that L1 and L2 grammars affect the initial stages of La acquisition. This thesis extends this work by examining whether speakers at intermediate levels of La proficiency, who demonstrate mature untimed/offline knowledge of the target La constraints, are differentially affected by their L1 and L2 knowledge when they comprehend sentences under processing pressure. With this purpose, several groups of La German speakers were tested on word order and agreement phenomena using online/timed measures of grammatical knowledge. Participants had mirror distributions of their prior languages: they were either L1English/L2Spanish speakers or L1Spanish/L2English speakers. Crucially, in half of the phenomena the target La constraint aligned with English but not with Spanish, while in the other half it aligned with Spanish but not with English. Results show that the L1 grammar plays a major role in the use of La constraints under processing pressure, as participants displayed increased sensitivity to La constraints when they aligned with their L1, and reduced sensitivity when they did not. Further, in specific phenomena in which the L2 and La constraints aligned, increased L2 proficiency resulted in an enhanced sensitivity to the La constraint. These findings suggest that both native and non-native grammars affect how speakers use La grammatical constraints under processing pressure. However, L1 and L2 grammars differentially influence participants' performance: while L1 constraints seem to be reliably recruited to cope with the processing demands of real-time La use, proficiency in an L2 can enhance sensitivity to La constraints only in specific circumstances, namely when L2 and La constraints align.
PLATON
(2019)
Lesson planning is both an important and demanding task, especially as part of teacher training. This paper presents the requirements for a lesson planning system and evaluates existing systems against these requirements. One major drawback of existing software tools is that most are limited to a text- or form-based representation of the lesson designs. In this article, a new approach with a graphical, time-based representation with (automatic) analysis methods is proposed, and the system architecture and domain model are described in detail. The approach is implemented in an interactive, web-based prototype called PLATON, which additionally supports the management of lessons in units as well as the modelling of teacher- and student-generated resources. The prototype was evaluated in a study with 61 prospective teachers (bachelor's and master's preservice teachers as well as teacher trainees in post-university teacher training) in Berlin, Germany, with a focus on usability. The results show that this approach proved usable for lesson planning and has positive effects on the perception of time and on self-reflection.
Bienenfresserortungsversuch
(2019)
On a planetary scale, human populations need to adapt to both socio-economic and environmental problems amidst rapid global change. This holds true for coupled human-environment (socio-ecological) systems in rural and urban settings alike. Two examples are drylands and urban coasts. Such socio-ecological systems have a global distribution. Therefore, advancing the knowledge base for identifying socio-ecological adaptation needs with local vulnerability assessments alone is infeasible: the systems cover vast areas, while funding, time, and human resources for local assessments are limited. These resources are lacking in low- and middle-income countries (LICs and MICs) in particular.
But places within a specific socio-ecological system are not only unique and complex – they also exhibit similarities. A global patchwork of local assessments of rural drylands populations' vulnerability to socio-ecological and environmental problems has already been reduced to a limited number of problem structures that typically cause vulnerability. However, the question arises whether this is also possible in urban socio-ecological systems. The question also arises whether these typologies provide added value beyond global change research. Finally, the methodology employed for drylands needs to be refined and standardized to increase its uptake in the scientific community. In this dissertation, I set out to fill these three gaps in research.
The geographical focus of my dissertation is on LICs and MICs, which generally have lower capacities to adapt, and greater adaptation needs, in the face of rapid global change. Using a spatially explicit, indicator-based methodology, I combine geospatial and clustering methods to identify typical configurations of the key factors that case studies show to cause vulnerability of human populations in two specific socio-ecological systems. I then use statistical and analytical methods to interpret and appraise both the typical configurations and the global typologies they constitute.
First, I improve the indicator-based methodology and then reanalyze typical global problem structures of socio-ecological drylands vulnerability with seven indicator datasets. The reanalysis confirms the key tenets and produces a more realistic and nuanced typology of eight spatially explicit problem structures, or vulnerability profiles: two new profiles with typically high natural resource endowment emerge, in which overpopulation has led to medium or high soil erosion. Second, I determine whether the new drylands typology and its socio-ecological vulnerability concept advance a thematically linked scientific debate in human security studies: what drives violent conflict in drylands? The typology is a much better predictor of conflict distribution and incidence in drylands than the regression models typically used in peace research. Third, I analyze global problem structures typically causing vulnerability in an urban socio-ecological system – the rapidly urbanizing coastal fringe (RUCF) – with eleven indicator datasets. The RUCF also yields a robust typology, and its seven profiles show huge asymmetries in vulnerability and adaptive capacity. The fastest population increase, lowest income, most ineffective governments, most prevalent poverty, and lowest adaptive capacity are all typically stacked in two profiles in LICs. This shows that, beyond local case studies, tropical cyclones and/or coastal flooding are stalling neither rapid population growth nor urban expansion in the RUCF. I propose entry points for scaling up successful vulnerability reduction strategies among coastal cities within the same vulnerability profile.
This dissertation shows that patchworks of local vulnerability assessments can be generalized to structure global socio-ecological vulnerabilities in both rural and urban socio-ecological systems according to typical problems. In terms of climate-related extreme events in the RUCF, conflicting problem structures and means to deal with them are threatening to widen the development gap between LICs and high-income countries unless successful vulnerability reduction measures are comprehensively scaled up. The explanatory power for human security in drylands warrants further applications of the methodology beyond global environmental change research in the future. Thus, analyzing spatially explicit global typologies of socio-ecological vulnerability is a useful complement to local assessments: The typologies provide entry points for where to consider which generic measures to reduce typical problem structures – including the countless places without local assessments. This can save limited time and financial resources for adaptation under rapid global change.
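The core analytical step of the dissertation – grouping places described by normalized indicator values into a small number of typical vulnerability profiles – can be sketched as follows. This is a minimal illustration, assuming a plain k-means routine and a synthetic indicator matrix; the dissertation's actual datasets, indicator choices, and clustering procedure are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical indicator matrix: rows = places (e.g. grid cells),
# columns = vulnerability indicators (e.g. income, soil erosion,
# government effectiveness). Real analyses use curated global datasets.
n_places, n_indicators = 500, 7
X = rng.normal(size=(n_places, n_indicators))

# z-score normalization so that no single indicator dominates
# the distance metric used for clustering
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

def kmeans(X, k, n_iter=50, rng=None):
    """Plain k-means: assign each place to the nearest of k profile
    centroids, update the centroids, and repeat."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # distance of every place to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# eight profiles, mirroring the drylands typology in the abstract
labels, profiles = kmeans(Xz, k=8, rng=rng)
```

Each resulting centroid is one "typical problem structure": a characteristic combination of indicator levels shared by all places assigned to that profile.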
Address on the opening of the Alexander von Humboldt Season
in Quito, Ecuador, on 13 February 2019
(2019)
Transitional Justice
(2019)
A standard approach to studying time-dependent stochastic processes is the power spectral density (PSD), an ensemble-averaged property defined as the Fourier transform of the autocorrelation function of the process in the asymptotic limit of long observation times, T → ∞. In many experimental situations one is able to garner only relatively few stochastic time series of finite T, such that practically neither an ensemble average nor the asymptotic limit T → ∞ can be achieved. To allow for a meaningful analysis of such finite-length data, we here develop the framework of single-trajectory spectral analysis for one of the standard models of anomalous diffusion, scaled Brownian motion. We demonstrate that the frequency dependence of the single-trajectory PSD is exactly the same as for standard Brownian motion, which may lead one to the erroneous conclusion that the observed motion is normal-diffusive. However, a distinctive feature is shown to be provided by the explicit dependence on the measurement time T, and this ageing phenomenon can be used to deduce the anomalous diffusion exponent. We also compare our results to the single-trajectory PSD behaviour of another standard anomalous diffusion process, fractional Brownian motion, and work out the commonalities and differences. Our results represent an important step in establishing single-trajectory PSDs as an alternative (or complement) to analyses based on the time-averaged mean squared displacement.
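The single-trajectory spectral analysis described above can be illustrated numerically. The sketch below is an assumption-laden toy version, not the authors' implementation: the scaled-Brownian-motion generator, the trajectory length, and the fitting window are all illustrative choices. It computes the periodogram of one finite trajectory, S_T(f) = |∫₀ᵀ x(t) e^{ift} dt|² / T, and checks that its frequency dependence is the Brownian-like 1/f².

```python
import numpy as np

rng = np.random.default_rng(0)

def single_trajectory_psd(x, dt):
    """Periodogram of one finite trajectory: S_T(f) = |FT of x|^2 / T,
    with the zero-frequency bin dropped."""
    T = len(x) * dt
    xf = np.fft.rfft(x) * dt
    f = np.fft.rfftfreq(len(x), dt)
    return f[1:], np.abs(xf[1:]) ** 2 / T

def scaled_bm(n, dt, alpha, rng):
    """Scaled Brownian motion: Gaussian increments with time-dependent
    diffusivity D(t) ~ alpha * t^(alpha - 1), so that MSD ~ t^alpha."""
    t = (np.arange(n) + 1) * dt
    incr = rng.normal(0.0, 1.0, n) * np.sqrt(alpha * t ** (alpha - 1) * dt)
    return np.cumsum(incr)

n, dt, alpha = 2 ** 14, 1e-2, 0.5   # subdiffusive example, alpha < 1
x = scaled_bm(n, dt, alpha, rng)
f, S = single_trajectory_psd(x, dt)

# Fit the log-log slope over an intermediate frequency window:
# the frequency dependence is ~ 1/f^2, as for ordinary Brownian motion,
# even though the process itself is anomalous (the alpha-dependence
# sits in the T-dependent amplitude, i.e. the ageing effect).
lo, hi = 10, 1000
slope = np.polyfit(np.log(f[lo:hi]), np.log(S[lo:hi]), 1)[0]
```

The fitted slope comes out close to -2 regardless of alpha, which is exactly the pitfall the abstract warns about: the exponent must instead be read off from how the PSD amplitude scales with the measurement time T.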