The devil has been depicted many times in Russian literature, and his images and functions change over the centuries, corresponding to the shifts of epochs and literary fashions. Conceptions of the devil mix folk animistic elements with the biblical concepts of devils and demons. Literature draws on both reservoirs, at times mocking naive belief in the devil, yet also delighting in frightening ostensibly enlightened skeptics with apparitions of the devil. The devil is a central motif of Russian literature, and to retell its history is to retell a central strand of Russian literature, sub specie diaboli.
Even though he held a prominent place in Russian literature long before the Romantics, above all Nikolaj Gogol', folk conceptions have since mingled with the biblical heritage. Among the common people, beliefs in the devil remain popular to this day, while the educated classes tend toward skepticism. For this reason realist literature, with the great exception of Fedor Dostoevskij, tended to avoid the devil, whereas the modernists portrayed him all the more eagerly. He reaches a high point in Michail Bulgakov. In contemporary authors the religious subtext is often absent.
„Polnische Leichen“
(2019)
„Blame it on the Russians“
(2019)
“Thanks in Advance”
(2019)
This paper studies the effect of the commonly used phrase “thanks in advance” on compliance with a small request. In a controlled laboratory experiment we ask participants to give a detailed answer to an open question. The treatment variable is whether or not they see the phrase “thanks in advance.” Our participants react to the treatment by exerting less effort in answering the request even though they perceive the phrase as polite.
The historical feature film is among the most popular forms of articulating historical culture. As such, it is the subject of controversial debates about an appropriate didactic approach. Against this background, the aim of the present work is to develop an integrative, theoretically and empirically grounded model of analysis that asks about the deep structures of historical narration in the medium of the feature film while taking into account the different manifestations of historical feature films. The considerations therefore move within an interdisciplinary field of tension between theories of historical narration and concepts from literary and film studies. The discussion and synthesis of these different concepts proceeds from the subject matter itself, on the basis of a large body of material, and is inductive in design. As guidance for practical work, toolkits are developed at the end of each chapter that are intended to encourage a deeper engagement with historical feature films.
“Mason without apron”
(2019)
While the lack of religion in Alexander von Humboldt’s work and the criticism he received is well known, his relationship with Freemasonry is relatively unexplored. Humboldt appears on some lists of “illustrious Masons,” and several lodges carry his name, but was he really a member? If so, when and where did he join a lodge? Are there any comments by him about Freemasonry? Who were the renowned Masons he was surrounded by? This paper examines these questions, but more importantly it analyzes what a membership might have meant for Humboldt’s scholarly work. It looks particularly at the unprecedented success he enjoyed in the United States in the early 19th century and the factors behind it. What could he have gained from these connections and how was he viewed by Masonic leaders and lodges in the trans-Atlantic world?
“I mean, no soy psicóloga”
(2019)
This paper is concerned with the qualitative analysis of the use of the English discourse marker I mean in Spanish and Portuguese online discourse (in online fora, blogs, or user comments on websites). The examples are retrieved from the Corpus del Español (Web/Dialects) as well as the Corpus do Português (Web/Dialects).
Studies of the journey center on Humboldt's experiences, publications, and scientific contributions. This article examines the Spanish political situation in 1799 and the position and influence of the powerful minister Urquijo, whose fall and its possible consequences in the American territories are mentioned by Humboldt in passing and with concern. It transcribes Humboldt's unusual registration as a passenger on the corvette Castor, with details, some of them distorted, about the influence of Georg Forster on his formation, his continued Prussian employment, his advance toward Portuguese territory, and the doubts about the speed of the voyage. These data, together with some contemporary and later criticisms in Quito, as well as the problems at least two of his interlocutors had with the Inquisition in Lima, offer an admittedly partial "American view" of Humboldt and possibly explain the scarcity of data on, and the negative perception of, his stay in Lima.
The object of this study is the rise and fall of a theological legitimation of poetry that took place in the Viceroyalty of Peru between the end of the 16th century and the second half of the 17th century. Its high point is marked by the emergence of an "Academia Antártica" in the first decades of the 17th century, while its end can be seen at the close of the same century, when scholars of the religious orders, especially Juan de Espinosa y Medrano in his texts in defense of poetry and the sciences, denied poetry any theological status while nonetheless making use of it in writing their sermons and texts. Starting from the rise and fall of this theological legitimation in the Viceroyalty of Peru, this study shows the existence of two movements that form a chiasmus between a theologization of poetry and a poetization of theology, at whose veiled center the theoretical and practical knowledge of poetry is in dispute. What is in dispute in this sense is not poetry, understood as a summit of belles lettres, but the legitimate possession of an analogical and typological mode of reading the order of the universe, founded on the Holy Scriptures and the history of salvation, and of a poetic mode of instructing all members of viceregal society in accordance with that mode of reading.
Their exceptionally high conversion efficiencies of over 20% and simple cell fabrication make hybrid perovskites hot candidates for alternative solar-cell materials. CH3NH3PbI3, the archetype of this material class, possesses exceptional properties such as a very efficient conversion of solar energy, with ferroelectricity in particular coming into focus as a possible explanation. Ferroelectricity, however, requires a non-centrosymmetric crystal structure as a necessary precondition. Here we present an explanation of the symmetry breaking in this material on a crystallographic, i.e. long-range-order-based, footing. While the molecular cation CH3NH3+ is intrinsically polar, it is extremely disordered and therefore cannot be the sole explanation. It does, however, distort the surrounding crystal lattice and thereby induces a displacement of the iodine atoms away from the centrosymmetric positions.
The present contribution deals with the coding of evidentiality in Paraguayan Spanish as influenced by, or borrowed through, language contact. It focuses in particular on the use of the Guaraní particle ndaje in Paraguayan newspaper Spanish. In this context, an attempt is made to classify the linguistic phenomenon, and a qualitative corpus analysis is carried out.
This special issue of the publication series of the Chair of Public and Nonprofit Management presents the results of a student consulting project from the winter semester of 2018/19, in which a vision for a digitalized public administration was drafted. Using scenario methods, future scenarios were developed and tested that deal either with citizens and companies as customers of the administration, with public employees, or with the administration's organizational and process structures.
Zionistische Debatten im Kontext des Ersten Weltkriegs am Beispiel der Herzl-Bund-Blätter 1914–1918
(2019)
The significance of the First World War as a central context for the negotiation, adaptation, and rejection of different concepts of Jewish identity in the German Empire, and beyond its borders, has been discussed in recent research from various angles. The experience of the war gave national-Jewish and Zionist groups in particular important food for thought and furthered the concretization of their strategies for building a Jewish national entity in Palestine. The present study seeks to broaden the focus of historical-sociological research on the academic Zionist youth movement by placing at its center a Zionist youth organization that has so far received little scholarly attention: the Herzl-Bund, an association of young Zionist-minded merchants founded in Halberstadt in 1912. The author engages with the publications of its members in the context of the First World War in order to trace how the "big issues" that shaped the work and debates of the Zionist movement in the German Empire at that time were negotiated at the level of the Herzl-Bund and the Herzl clubs united within it. Drawing on the internal newsletter, the Herzl-Bund-Blätter, the study examines which topics found their way into the debates of the Zionist youth. The discussion centers on three thematic complexes: 1) German-Jewish nationalism versus the Jewish national movement, 2) antisemitism, and 3) the encounter with Eastern European Jews. The aim is to uncover discursive processes of self-understanding along these themes, which also serve to answer the question of whether the experiences of the First World War can be understood as templates for a reassessment of the Herzl-Bund's self-conception and of its own work.
Yiddish in the Andes
(2019)
This article elucidates the efforts of Chilean-Jewish activists to create, manage and protect Chilean Yiddish culture. It illuminates how Yiddish cultural leaders in small diasporas, such as Chile, worked to maintain dialogue with other Jewish centers. Chilean culturists maintained that a unique Latin American Jewish culture existed and needed to be strengthened through the joint efforts of all Yiddish actors on the continent. Chilean activists envisioned a modern Jewish culture informed by both Eastern European influences and local Jewish cultural production, as well as by exchanges with non-Jewish Latin American majority cultures.
"Encountering the Foreign" is the topic of an additional round of the second Brandenburger Antike-Denkwerk, which aims to inspire Latin students at selected Brandenburg grammar schools with enthusiasm for antiquity and is funded by the Robert Bosch Stiftung.
The present volume contains the keynote lectures of the 13th Potsdamer Lateintag in October and of the special Latin day in December 2017: Prof. Dr. Anja Klöckner discusses the influence of the Mithras cult on the Romano-Germanic population along the Limes; PD Dr. Nicola Hömke presents original letters of Roman soldiers from Hadrian's Wall in northern England, where legionaries and Celtic locals met; Dr. Hermann Krüssel presents, on the basis of the Poblicius monument, his findings on life in Augustan Cologne. Also documented are the creative and academically well-grounded presentations that the students developed over several months together with their student mentors and presented at their own student congress in March 2018.
There is evidence that infants start extracting words from fluent speech around 7.5 months of age (e.g., Jusczyk & Aslin, 1995) and that they use at least two mechanisms to segment word forms from fluent speech: prosodic information (e.g., Jusczyk, Cutler & Redanz, 1993) and statistical information (e.g., Saffran, Aslin & Newport, 1996). However, how these two mechanisms interact and whether they change during development is still not fully understood.
The main aim of the present work is to understand in what way different cues to word segmentation are exploited by infants when learning the language in their environment, as well as to explore whether this ability is related to later language skills. In Chapter 3 we sought to determine the reliability of the method used in most of the experiments in the present thesis (the Headturn Preference Procedure), as well as to examine correlations and individual differences between infants' performance and later language outcomes. In Chapter 4 we investigated how German-speaking adults weigh statistical and prosodic information for word segmentation. We familiarized adults with an auditory string in which statistical and prosodic information indicated different word boundaries and obtained both behavioral and pupillometry responses. We then conducted further experiments to understand in what way different cues to word segmentation are exploited by 9-month-old German-learning infants (Chapter 5) and by 6-month-old German-learning infants (Chapter 6). In addition, we conducted follow-up questionnaires with the infants and obtained language outcomes at later stages of development.
Our findings revealed that (1) German-speaking adults assign a strong weight to prosodic cues, at least for the materials used in this study, and that (2) German-learning infants weight these two kinds of cues differently depending on age and/or language experience. We observed that, unlike English-learning infants, 6-month-olds relied more strongly on prosodic cues, while 9-month-olds showed no preference for either cue in the word segmentation task. From the present results it remains unclear whether the ability to use prosodic cues for word segmentation relates to later vocabulary. We speculate that prosody provides infants with their first window into the specific acoustic regularities of the signal, enabling them to master the specific stress pattern of German rapidly. Our findings are a step forward in understanding the early impact of native prosody, compared to statistical learning, on early word segmentation.
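The statistical segmentation mechanism referred to above (Saffran et al.) relies on transitional probabilities between syllables: TPs are high inside words and dip at word boundaries. The following is a minimal sketch of that idea; the syllable stream and its ordering are constructed purely for illustration and do not reproduce any experimental stimuli.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(a -> b) = count of the pair (a, b) / count of a, over the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold=1.0):
    """Posit a word boundary wherever the TP dips below the threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Miniature artificial stream built from two three-syllable "words";
# word-internal TPs are 1.0, cross-word TPs are lower.
tupiro, golabu = ["tu", "pi", "ro"], ["go", "la", "bu"]
stream = (tupiro + golabu + tupiro + tupiro + golabu + golabu) * 5
tps = transitional_probabilities(stream)
print(segment(stream, tps)[:4])  # ['tupiro', 'golabu', 'tupiro', 'tupiro']
```

With fluent, pause-free input like this, the TP dips alone recover the word inventory; the thesis asks how such statistics are weighed against prosodic cues when the two conflict.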
Since 1980, Iraq has passed through various wars and conflicts, including the Iran-Iraq war, Saddam Hussein's Anfal and Halabja campaigns against the Kurds and the killing campaigns against the Shia in 1986, Saddam Hussein's invasion of Kuwait in August 1990, the Gulf war of 1990, the Iraq war of 2003 and the fall of Saddam, the conflicts and chaos in the transfer of power after Saddam's death, and the war against ISIS. All these wars left severe marks on most households in Iraq, on women and children in particular.
The consequences of such long wars can be observed in all sectors, including the economic, social, cultural, and religious sectors. The social structure, norms, and attitudes have been intensely affected. Many women, specifically divorced women, found themselves facing difficult social and economic situations. Divorced women in Iraqi Kurdistan are therefore the focus of this research.
Given that there is very little empirical research on this topic, a constructivist grounded theory (CGT) methodology was considered suitable for arriving at a comprehensive picture of the everyday life of divorced women in Iraqi Kurdistan. Data were collected in the city of Sulaimani in Iraqi Kurdistan. The work of Kathy Charmaz was chosen as the main methodological framework, and the main data collection method was individual intensive narrative interviews with divorced women.
Women in general, and divorced women in particular, live in a patriarchal society in Iraqi Kurdistan that is passing through many changes due to the above-mentioned wars, among many other factors. This research studies the everyday life of divorced women in these circumstances and the forms of social insecurity they experience. It focuses on social institutions, from the family, a highly significant institution for women, to the governmental and non-governmental institutions working to support women, as well as on coping strategies. The main argument is that the family plays an ambivalent role in divorced women's lives: on the one hand, families proved to be an essential source of security for most respondents; on the other hand, they also posed many threats and restrictions on these women. This argument is supported by what Suad Joseph has called "the paradox of support and suppression." Another important finding is that state institutions (laws, the constitution, and the offices for combating violence against women and the family) support women to some extent and offer them protection from insecurities, but the existence of laws clearly does not stop violence against women in Iraqi Kurdistan. As Pateman explains, the law, or the contract, is a sexual-social contract that upholds the sex rights of males and grants them more privileges than females. Political instability and tribal social norms also play a major role in weakening the rule of law.
It is noteworthy that the analysis of the interviews showed that, although divorced women live with insecurities and face difficulties, most respondents try to find coping strategies to handle difficult situations and to deal with the violence they face; these strategies include bargaining, compromising, and resisting. Different theories are used to explain these coping strategies, such as bargaining with patriarchy: Kandiyoti states that women living under certain constraints struggle to find ways and strategies to improve their situations. The research findings also revealed that the Western liberal feminist view of agency is limited, in agreement with Saba Mahmood's account of Muslim women's agency. For my respondents, who are divorced women, agency reveals itself in different ways: in resisting, compromising with, or even obeying the power of male relatives and the normative system of society. Agency also explains the behavior of women who contact formal state institutions, such as the police or the offices for combating violence against women and the family, in cases of violence.
Genetic divergence is impacted by many factors, including phylogenetic history, gene flow, genetic drift, and divergent selection. Rotifers are an important component of aquatic ecosystems, and genetic variation is essential to their ongoing adaptive diversification and local adaptation. In addition to coding sequence divergence, variation in gene expression may relate to variable heat tolerance, and can impose ecological barriers within species. Temperature plays a significant role in aquatic ecosystems by affecting species abundance, spatio-temporal distribution, and habitat colonization. Recently described (formerly cryptic) species of the Brachionus calyciflorus complex exhibit different temperature tolerance both in natural and in laboratory studies, and show that B. calyciflorus sensu stricto (s.s.) is a thermotolerant species. Even within B. calyciflorus s.s., there is a tendency for further temperature specializations. Comparison of expressed genes allows us to assess the impact of stressors on both expression and sequence divergence among disparate populations within a single species. Here, we have used RNA-seq to explore expressed genetic diversity in B. calyciflorus s.s. in two mitochondrial DNA lineages with different phylogenetic histories and differences in thermotolerance. We identify a suite of candidate genes that may underlie local adaptation, with a particular focus on the response to sustained high or low temperatures. We do not find adaptive divergence in established candidate genes for thermal adaptation. Rather, we detect divergent selection among our two lineages in genes related to metabolism (lipid metabolism, metabolism of xenobiotics).
Wissensmanagement
(2019)
Knowledge is an important resource for accomplishing administrative tasks.
This raises the question of how the necessary knowledge can be generated, preserved, distributed, and made findable. Such knowledge management can improve the quality and efficiency of the work of public authorities. Nevertheless, knowledge is so far managed only insufficiently in administrative practice.
Systematic knowledge management requires personnel, financial, and technical resources. Where these are lacking, administrations can initially draw on individual knowledge management instruments to improve their work with limited effort.
Introduction
The implantation of a total knee or hip replacement (TEP) is one of the most frequent surgical procedures. Following surgery and postoperative rehabilitation, exercise therapy is an essential component of treatment for improving joint function and quality of life. In structurally weak regions, such services are available only in insufficient density, and a widespread shortage of skilled staff in physiotherapy is emerging. Telemedical aftercare therefore offers an innovative approach to post-rehabilitation care. The aim of the present study was to examine the efficacy of an interactive telemedical aftercare intervention for patients with a total knee or hip replacement compared with usual care. To this end, functional capacity and return to work were investigated.
Methods
Between August 2016 and August 2017, 111 patients (54.9 ± 6.8 years, 54.3% female) were enrolled in this randomized, controlled, multicenter study at the beginning of their inpatient follow-up rehabilitation after implantation of a total knee or hip replacement. After discharge from orthopedic follow-up rehabilitation (baseline), the intervention group (IG) completed a three-month interactive training program via a telerehabilitation system. A supervising physiotherapist compiled an individual training plan from 38 exercises for improving strength and postural control. To adapt the training plan, the system transmitted data on the quantity and quality of the training to the physiotherapist. The control group (CG) could use conventional care services. To assess the efficacy of the intervention, the difference between IG and CG in the improvement in the 6-minute walk test (6MWT) after three months was defined as the primary endpoint. Secondary endpoints were the return-to-work rate and functional mobility, assessed with the Stair Ascend Test, the Five-Times-Sit-to-Stand Test, and the Timed Up and Go Test. Furthermore, health-related quality of life was evaluated with the Short Form 36 (SF-36) and joint-related impairments with the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC). The primary and secondary endpoints were analyzed using baseline-adjusted analyses of covariance in an intention-to-treat approach. In addition, participation in aftercare services and the intervention group's adherence to the telemedical aftercare were recorded and evaluated.
Results
At the end of the intervention, both groups showed a statistically significant increase in their 6MWT distance (p < 0.001). At this point, participants in the IG covered a mean of 530.8 ± 79.7 m and those in the CG 514.2 ± 71.2 m. The improvement in walking distance was 88.3 ± 57.7 m in the IG and 79.6 ± 48.7 m in the CG. The primary endpoint thus showed no significant group difference (p = 0.951). Regarding return to work, however, a significantly higher rate was found in the IG (64.6% versus 46.2%; p = 0.014). For the secondary endpoints of functional mobility, quality of life, and joint-related complaints, the results demonstrate equivalence of the two groups at the end of the intervention.
Conclusion
Telemedically assisted exercise therapy for patients with a total knee or hip replacement is equivalent to usual aftercare with respect to the improvements achieved in functional mobility, health-related quality of life, and joint-related complaints. In this patient population, clinically relevant improvements were achieved regardless of the form of exercise therapy. With respect to return to work, a significantly higher rate was seen in the intervention group. Telemedically assisted exercise therapy thus appears to be a suitable form of aftercare that can be carried out independently of place and time, meets the needs of working patients, and can be integrated into patients' everyday lives. Telemedical aftercare should therefore be offered as an optional and complementary form of post-rehabilitation care. In view of the increasing shortage of skilled staff in physiotherapy and existing gaps in care in structurally weak regions, telemedical aftercare can also offer innovative and needs-based solutions.
Regulatory focus is a motivational construct that describes humans' motivational orientation during goal pursuit. It is conceptualized both as a chronic, trait-like orientation and as a momentary, state-like orientation. Whereas there is a large number of measures to capture chronic regulatory focus, measures for its momentary assessment are only just emerging. This paper presents the development and validation of a measure of Momentary-Chronic Regulatory Focus. Our development incorporates the distinction between self-guide and reference-point definitions of regulatory focus: ideals and ought striving are the promotion and prevention dimensions in the self-guide system; gain and non-loss regulatory focus are the respective dimensions within the reference-point system. Three survey-based studies test the structure, psychometric properties, and validity of the measure in its version for assessing chronic regulatory focus (two samples of working participants, N = 389, N = 672; one student sample [time 1, N = 105; time 2, n = 91]). In two further studies, an experience sampling study with students (N = 84, k = 1649) and a daily-diary study with working individuals (N = 129, k = 1766), the measure was applied to assess momentary regulatory focus. Multilevel analyses test the momentary measure's factorial structure, support its sensitivity to capturing within-person fluctuations, and provide evidence for concurrent construct validity.
What Makes an Employer?
(2019)
As the policy debate on entrepreneurship increasingly centers on firm growth in terms of job creation, it is important to better understand which variables influence the first hiring decision and which ones influence the subsequent survival as an employer. Using the German Socio-Economic Panel (SOEP), we analyze what role individual characteristics of entrepreneurs play in sustainable job creation. While human and social capital variables positively influence the hiring decision and the survival as an employer in the same direction, we show that none of the personality traits affect the two outcomes in the same way. Some traits are only relevant for survival as an employer but do not influence the hiring decision; other traits even unfold a revolving-door effect, in the sense that employers tend to fail due to the same characteristics that positively influenced their hiring decision.
Wege entstehen beim Gehen
(2019)
Musical theater work in schools, from mini-musicals to large-scale school musicals, enjoys great popularity among students as well as music teachers and often attracts extraordinary, sometimes underestimated, commitment. Nevertheless, there is little music-didactic literature on this topic, and so far only a few research studies exist that could guide the realization of musical projects at schools. Musical theater work also plays only a marginal role in music teacher training.
The present publication aims to help close this gap. It is the result of the master's seminar "Musicalarbeit in der Schule" at the Chair of Music Education and Music Didactics of the University of Potsdam, which took place in the summer semester of 2018 alongside the artistic production of the musical "Elion" by students of the University of Potsdam. The seminar centered on pedagogical and methodological-didactic questions in the areas of singing, choreography, and theater work. In addition, the possibilities and pedagogical potentials of cross-curricular and interdisciplinary work were discussed.
Musical theater experts from various school contexts were invited to the seminar; they gave the students insights into their many years of practical experience and opened those experiences up for discussion.
The present publication was finally compiled by the seminar participants themselves and summarizes the seminar. It is intended as an aid for deciding for or against musical theater work in schools and as a guide for getting started in practice.
In this work we investigated ultrafast demagnetization in a Heusler alloy. This material belongs to the half-metals and exists in a ferromagnetic phase. A special feature of the investigated alloy is its electronic band structure, which leads to a specific density of states: majority electrons form a metal-like structure, while minority electrons exhibit a gap near the Fermi level, as in a semiconductor. This peculiarity makes the material well suited as a model system for proof-of-principle studies of demagnetization. Using pump-probe experiments, we carried out time-resolved measurements to determine the demagnetization times. For pumping we used ultrashort laser pulses with a duration of around 100 fs. We used two excitation regimes with two different wavelengths, namely 400 nm and 1240 nm. By decreasing the photon energy toward the size of the minority-electron gap, we explored the effect of the gap on the demagnetization dynamics. In this work we used, for the first time, an OPA (optical parametric amplifier) to generate the laser radiation in the long-wavelength regime. We tested it at the FemtoSpeX beamline of the BESSY II electron storage ring. With this new technique we measured wavelength-dependent demagnetization dynamics. We found that the demagnetization time correlates with the photon energy of the excitation pulse: higher photon energy leads to faster demagnetization in our material. We attribute this result to the existence of the energy gap for minority electrons and explain it via Elliott-Yafet scattering events. Additionally, we applied a new probe method for the magnetization state and verified its effectiveness: the well-known XMCD (X-ray magnetic circular dichroism), which we adapted for measurements in reflection geometry. Static experiments confirmed that the purely electronic dynamics can be separated from the magnetic dynamics.
We used photon energy fixed on the L3 of the corresponding elements with circular polarization. Appropriate incidence angel was estimated from static measurements. Using this probe method in dynamic measurements we explored electronic and magnetic dynamics in this alloy.
Warschauer Topographie
(2019)
Vorwort
(2019)
Vom Monomer zum Glykopolymer
(2019)
Glycopolymers are synthetic and naturally occurring polymers that carry a glycan unit in the side chain of the polymer. Through glycan-protein interactions, glycans are responsible for many biological processes. The involvement of glycans in these biological processes makes it possible to imitate and analyze the interactions with suitable model compounds, e.g. glycopolymers. This system of glycan-protein interaction was to be investigated and studied using glycopolymers in order to demonstrate the specific and selective binding of proteins to the glycopolymers. Proteins that are able to bind carbohydrate structures selectively are called lectins.
In this dissertation, various glycopolymers were synthesized, with attention paid to an efficient and cost-effective synthetic route.
Various glycopolymers were prepared from monomers functionalized with different sugars, such as mannose, lactose, galactose or N-acetylglucosamine, as functional groups. From these functionalized glycomonomers, glycopolymers were synthesized via ATRP and RAFT polymerization.
The resulting glycopolymers were used as the hydrophilic block in diblock copolymers, and their self-assembly in aqueous solution was investigated. In aqueous solution the polymers formed micelles in which the sugar block sits at the micelle surface. The micelles were loaded with a hydrophobic fluorescent dye, which allowed the CMC of micelle formation to be determined.
In addition, the glycopolymers were attached to various surfaces as coatings, either via "grafting from" with SI-ATRP or via "grafting to". The glycopolymer-coated surfaces allowed the glycan-protein interaction to be studied by spectroscopic methods such as SPR and microring resonators. In this way, the specific and selective binding of lectins to the glycopolymers was demonstrated and the binding strength investigated.
By exchanging the glycan unit, the synthesized glycopolymers could be made addressable for other lectins and thus open up a wide range of other proteins. The biocompatible glycopolymers would be alternatives for use in biological processes as carriers of drugs or dyes into the body. Furthermore, the functionalized surfaces could be used in diagnostics for the detection of lectins. Glycans that do not bind proteins selectively and specifically could be used as anti-adsorptive surface coatings, e.g. in cell biology.
Verwaltungswissenschaft
(2019)
The first part of the work is devoted to the foundations and cross-cutting issues of administrative science. The author first introduces the objects of study, "administrative science" and "public administration", and then acquaints the reader with the tasks, cultures, reforms and oversight of administration. The second part examines administrative authorities more closely as organizations and systems of action; the chapters in question cover organizational structure, personnel, coordination, procedure and decision-making.
Verum focus and negation
(2019)
Municipal utility companies (Stadtwerke), at least those active in the electricity and gas sectors, are mostly no longer organized as municipally run enterprises; over the past two decades, municipalities have spun them off into the private-law form of the GmbH. Moreover, these municipal companies operate in an internal energy market created by EU market liberalization. The entrepreneurial autonomy of the Stadtwerke GmbH from political steering is reinforced by the credo of the New Public Management model, which sees entrepreneurial independence as the very precondition for economic success. These conditions force municipal enterprises to orient themselves exclusively towards entrepreneurial and market-driven systems. That the logic of entrepreneurial action leaves no room for political steering of the companies becomes a legitimacy problem for the municipal economy, since an exclusive orientation towards the surpluses of municipal companies does not legitimize their public purpose, either politically or in terms of organizational law. Orientation towards the common good is a constitutive element of municipal economic activity. The thesis advanced here is that, in this situation, Stadtwerke permit citizen participation in order to mitigate this legitimacy deficit. Two cases are analyzed and compared qualitatively: first, Stadtwerke Wolfhagen GmbH, which sought to generate acceptance for a wind farm through citizen participation; second, Stadtwerke Potsdam GmbH, which, out of a situation described here as a PR crisis, attempted to restore legitimacy with various instruments of citizen participation.
Veni, vidi, falsi nuntii
(2019)
Many human infants grow up learning more than one language simultaneously but only recently has research started to study early language acquisition in this population more systematically. The paper gives an overview on findings on early language acquisition in bilingual infants during the first two years of life and compares these findings to current knowledge on early language acquisition in monolingual infants. Given the state of the research, the overview focuses on research on phonological and early lexical development in the first two years of life. We will show that the developmental trajectory of early language acquisition in these areas is very similar in mono- and bilingual infants suggesting that these early steps into language are guided by mechanisms that are rather robust against the differences in the conditions of language exposure that mono- and bilingual infants typically experience.
Word forms such as walked or walker are decomposed into their morphological constituents (walk + -ed/-er) during language comprehension. Yet, the efficiency of morphological decomposition seems to vary for different languages and morphological types, as well as for first and second language speakers. The current study reports results from a visual masked priming experiment focusing on different types of derived word forms (specifically prefixed vs. suffixed) in first and second language speakers of German. We compared the present findings with results from previous studies on inflection and compounding and proposed an account of morphological decomposition that captures both the variability and the consistency of morphological decomposition for different morphological types and for first and second language speakers. Open Practices This article has been awarded an Open Materials badge. Study materials are publicly accessible via the Open Science Framework at . Learn more about the Open Practices badges from the Center for Open Science.
This paper – which is based on the Thomas Franck Lecture held by the author at Humboldt University Berlin on 13 May 2019 – argues that the most likely development of international law to be expected will be the coexistence of two "legal worlds". On the one hand, an inter-State law brutally regulating political relations between human groups whitewashed by nationalism; on the other hand, a transnational or "a-national" law regulating economic relations between private as well as public interests. Further, the paper argues that there are two obvious victims – of very different nature – of this foreseeable evolution: the human being on the one hand, the certainty and effectiveness of the rule of law itself on the other hand.
Accurate weather observations are the keystone to many quantitative applications, such as precipitation monitoring and nowcasting, hydrological modelling and forecasting, climate studies, as well as understanding precipitation-driven natural hazards (i.e. floods, landslides, debris flow). Weather radars have been an increasingly popular tool since the 1940s to provide high spatial and temporal resolution precipitation data at the mesoscale, bridging the gap between synoptic and point scale observations. Yet, many institutions still struggle to tap the potential of the large archives of reflectivity, as there is still much to understand about factors that contribute to measurement errors, one of which is calibration. Calibration represents a substantial source of uncertainty in quantitative precipitation estimation (QPE). A miscalibration of a few dBZ can easily deteriorate the accuracy of precipitation estimates by an order of magnitude. Instances where rain cells carrying torrential rains are misidentified by the radar as moderate rain could mean the difference between a timely warning and a devastating flood.
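The order-of-magnitude sensitivity of rain-rate estimates to a few dBZ of miscalibration can be illustrated with a short sketch. This is not taken from the thesis; it assumes the widely used Marshall-Palmer Z-R relation (Z = 200·R^1.6) purely for illustration:

```python
# Illustrative sketch (not from the thesis): how a few dBZ of calibration
# bias propagate into rain-rate errors via the Marshall-Palmer relation
# Z = 200 * R**1.6 (an assumed, commonly used default Z-R relation).

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Convert reflectivity (dBZ) to rain rate (mm/h) via Z = a * R**b."""
    z_linear = 10.0 ** (dbz / 10.0)      # dBZ -> linear reflectivity (mm^6/m^3)
    return (z_linear / a) ** (1.0 / b)   # invert Z = a * R**b

true_dbz = 45.0                           # heavy rain
for bias in (0.0, -3.0, -6.0):            # radar reading low by 0, 3, 6 dBZ
    r = rain_rate_from_dbz(true_dbz + bias)
    print(f"bias {bias:+.0f} dBZ -> {r:.1f} mm/h")
```

Under this relation, a radar reading 6 dBZ low underestimates the rain rate by nearly 60%, which is why calibration matters so much for QPE.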
Since 2012, the Philippine Atmospheric, Geophysical, and Astronomical Services Administration (PAGASA) has been expanding the country’s ground radar network. We had a first look into the dataset from one of the longest running radars (the Subic radar) after devastating week-long torrential rains and thunderstorms in August 2012 caused by the annual southwest monsoon and enhanced by the north-passing Typhoon Haikui. The analysis of the rainfall spatial distribution revealed the added value of radar-based QPE in comparison to interpolated rain gauge observations. However, when compared with local gauge measurements, severe miscalibration of the Subic radar was found. As a consequence, the radar-based QPE would have underestimated the rainfall amount by up to 60% if they had not been adjusted by rain gauge observations—a technique that is not only affected by other uncertainties, but which is also not feasible in other regions of the country with very sparse rain gauge coverage.
Relative calibration techniques, or the assessment of bias from the reflectivity of two radars, have been steadily gaining popularity. Previous studies have demonstrated that reflectivity observations from the Tropical Rainfall Measuring Mission (TRMM) and its successor, the Global Precipitation Measurement (GPM), are accurate enough to serve as a calibration reference for ground radars over low-to-mid-latitudes (± 35 deg for TRMM; ± 65 deg for GPM). Comparing spaceborne radars (SR) and ground radars (GR) requires cautious consideration of differences in measurement geometry and instrument specifications, as well as temporal coincidence. For this purpose, we implement a 3-D volume matching method developed by Schwaller and Morris (2011) and extended by Warren et al. (2018) to five years' worth of observations from the Subic radar. In this method, only the volumetric intersections of the SR and GR beams are considered.
Calibration bias affects reflectivity observations homogeneously across the entire radar domain. Yet, other sources of systematic measurement errors are highly heterogeneous in space, and can either enhance or balance the bias introduced by miscalibration. In order to account for such heterogeneous errors, and thus isolate the calibration bias, we assign a quality index to each matching SR–GR volume, and thus compute the GR calibration bias as a quality-weighted average of reflectivity differences in any sample of matching SR–GR volumes. We exemplify the idea of quality-weighted averaging by using beam blockage fraction (BBF) as a quality variable. Quality-weighted averaging is able to increase the consistency of SR and GR observations by decreasing the standard deviation of the SR–GR differences, and thus increasing the precision of the bias estimates.
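The quality-weighted averaging idea can be sketched in a few lines. This is a minimal illustration, not the thesis code; variable names and values are invented, and the weights here simply use one minus the beam blockage fraction:

```python
import numpy as np

# Sketch of a quality-weighted calibration-bias estimate, as described above.
# Names and numbers are illustrative assumptions, not from the thesis code.

def quality_weighted_bias(dbz_sr, dbz_gr, quality):
    """Weighted mean of GR-SR reflectivity differences over matched volumes.

    quality: per-volume weights in [0, 1], e.g. 1 - beam blockage fraction.
    Returns the estimated GR calibration bias in dB.
    """
    diff = np.asarray(dbz_gr, float) - np.asarray(dbz_sr, float)
    w = np.asarray(quality, float)
    return float(np.sum(w * diff) / np.sum(w))

# Matched volumes: the last two are heavily blocked, so they get low weight.
sr = [30.0, 32.0, 28.0, 31.0]
gr = [27.5, 29.5, 22.0, 23.0]   # blocked volumes read far too low
q  = [1.0, 1.0, 0.1, 0.1]
print(quality_weighted_bias(sr, gr, q))   # ~ -2.9 dB, dominated by good volumes
```

Without the weights, the blocked volumes would drag the estimate towards a spuriously large negative bias; the weighting keeps the estimate close to the differences seen in unblocked volumes.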
To extend this framework further, the SR–GR quality-weighted bias estimation is applied to the neighboring Tagaytay radar, but this time focusing on path-integrated attenuation (PIA) as the source of uncertainty. Tagaytay is a C-band radar operating at a shorter wavelength and is therefore more affected by attenuation. Applying the same method used for the Subic radar, a time series of calibration bias is also established for the Tagaytay radar.
Tagaytay radar sits at a higher altitude than the Subic radar and is surrounded by gentler terrain, so beam blockage is negligible, especially in the overlapping region. Conversely, Subic radar is largely affected by beam blockage in the overlapping region, but being an S-band radar, attenuation is considered negligible. These coincidentally independent uncertainty contributions of each radar in the region of overlap provide an ideal environment to experiment with different scenarios of quality filtering when comparing reflectivities from the two ground radars. The standard deviation of the GR–GR differences already decreases if we consider either BBF or PIA to compute the quality index and thus the weights. However, combining them multiplicatively resulted in the largest decrease in standard deviation, suggesting that taking both factors into account increases the consistency between the matched samples.
The overlap between the two radars and the instances of the SR passing over the two radars at the same time allows for verification of the SR–GR quality-weighted bias estimation method. In this regard, the consistency between the two ground radars is analyzed before and after bias correction is applied. For cases when all three radars are coincident during a significant rainfall event, the correction of GR reflectivities with calibration bias estimates from SR overpasses dramatically improves the consistency between the two ground radars which have shown incoherent observations before correction. We also show that for cases where adequate SR coverage is unavailable, interpolating the calibration biases using a moving average can be used to correct the GR observations for any point in time to some extent. By using the interpolated biases to correct GR observations, we demonstrate that bias correction reduces the absolute value of the mean difference in most cases, and therefore improves the consistency between the two ground radars.
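The moving-average interpolation of sparse per-overpass bias estimates can be sketched as follows. This is a hedged illustration: the window length, time axis, and values are assumptions, not the thesis configuration:

```python
import numpy as np

# Sketch: interpolate sparse SR-overpass bias estimates to an arbitrary
# time with a moving (window) average, as described above. The window
# length and all numbers are illustrative assumptions.

def interpolated_bias(t_query, t_obs, bias_obs, window_days=30.0):
    """Average all bias estimates within +/- window_days of t_query."""
    t_obs = np.asarray(t_obs, float)
    bias_obs = np.asarray(bias_obs, float)
    mask = np.abs(t_obs - t_query) <= window_days
    if not mask.any():
        return float("nan")   # no overpass close enough in time
    return float(bias_obs[mask].mean())

# Days since start of archive vs. bias estimate (dB) from SR overpasses
t = [0, 10, 25, 60, 90]
b = [-2.0, -2.4, -2.2, -1.0, -0.8]
print(interpolated_bias(20, t, b))   # averages the three early estimates
```

A GR observation at the query time could then be corrected by subtracting the interpolated bias from the measured reflectivity.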
This thesis demonstrates that in general, taking into account systematic sources of uncertainty that are heterogeneous in space (e.g. BBF) and time (e.g. PIA) allows for a more consistent estimation of calibration bias, a homogeneous quantity. The bias still exhibits an unexpected variability in time, which hints that there are still other sources of errors that remain unexplored. Nevertheless, the increase in consistency between SR and GR as well as between the two ground radars, suggests that considering BBF and PIA in a weighted-averaging approach is a step in the right direction.
Despite the ample room for improvement, the approach that combines volume matching between radars (either SR–GR or GR–GR) and quality-weighted comparison is readily available for application or further scrutiny. As a step towards reproducibility and transparency in atmospheric science, the 3D matching procedure and the analysis workflows as well as sample data are made available in public repositories. Open-source software such as Python and wradlib are used for all radar data processing in this thesis. This approach towards open science provides both research institutions and weather services with a valuable tool that can be applied to radar calibration, from monitoring to a posteriori correction of archived data.
The interactions between atmosphere and steep topography in the eastern south–central Andes result in complex relations with inhomogenous rainfall distributions. The atmospheric conditions leading to deep convection and extreme rainfall and their spatial patterns—both at the valley and mountain-belt scales—are not well understood. In this study, we aim to identify the dominant atmospheric conditions and their spatial variability by analyzing the convective available potential energy (CAPE) and dew-point temperature (Td). We explain the crucial effect of temperature on extreme rainfall generation along the steep climatic and topographic gradients in the NW Argentine Andes stretching from the low-elevation eastern foreland to the high-elevation central Andean Plateau in the west. Our analysis relies on version 2.0 of the ECMWF’s (European Centre for Medium-Range Weather Forecasts) Re-Analysis (ERA-interim) data and TRMM (Tropical Rainfall Measuring Mission) data. We make the following key observations: First, we observe distinctive gradients along and across strike of the Andes in dew-point temperature and CAPE that both control rainfall distributions. Second, we identify a nonlinear correlation between rainfall and a combination of dew-point temperature and CAPE through a multivariable regression analysis. The correlation changes in space along the climatic and topographic gradients and helps to explain controlling factors for extreme-rainfall generation. Third, we observe more contribution (or higher importance) of Td in the tropical low-elevation foreland and intermediate-elevation areas as compared to the high-elevation central Andean Plateau for 90th percentile rainfall. In contrast, we observe a higher contribution of CAPE in the intermediate-elevation area between low and high elevation, especially in the transition zone between the tropical and subtropical areas for the 90th percentile rainfall.
Fourth, we find that the parameters of the multivariable regression using CAPE and Td can explain rainfall with higher statistical significance for the 90th percentile compared to lower rainfall percentiles. Based on our results, the spatial pattern of rainfall-extreme events during the past ∼16 years can be described by a combination of dew-point temperature and CAPE in the south–central Andes.
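The kind of multivariable regression described above can be sketched with synthetic data. This is only an illustration of the method; the coefficients and noise level are invented and do not reflect the study's fitted values:

```python
import numpy as np

# Minimal sketch of a multivariable regression of rainfall on CAPE and
# dew-point temperature (Td), in the spirit of the analysis above.
# Synthetic data; all coefficients are assumptions for illustration only.

rng = np.random.default_rng(0)
n = 500
cape = rng.uniform(0, 3000, n)      # J/kg
td = rng.uniform(5, 25, n)          # deg C
rain = 0.004 * cape + 0.9 * td + rng.normal(0, 2.0, n)  # synthetic rainfall

# Least-squares fit: rain ~ b0 + b1*CAPE + b2*Td
X = np.column_stack([np.ones(n), cape, td])
coef, *_ = np.linalg.lstsq(X, rain, rcond=None)
print(coef)   # recovers roughly [0, 0.004, 0.9]
```

The spatial analysis in the study amounts to fitting such a model per grid cell or region and comparing the relative contributions of the CAPE and Td terms.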
Water is essential to life and thus, an essential resource. However, freshwater resources are limited and their maintenance is crucial. Pollution with chemicals and pathogens through urbanization and a growing population impair the quality of freshwater. Furthermore, water can serve as vector for the transmission of pathogens resulting in water-borne illness.
The Interdisciplinary Research Group III – "Water" of the Leibniz alliance project INFECTIONS‘21 investigated water as a hub for pathogens focusing on Clostridioides difficile and avian influenza A viruses that may be shed into the water. Another aim of this study was to characterize the bacterial communities in a wastewater treatment plant (WWTP) of the capital Berlin, Germany to further assess potential health risks associated with wastewater management practices.
Bacterial communities of WWTP inflow and effluent differed significantly. The proportion of fecal/enteric bacteria was relatively low and OTUs related to potential enteric pathogens were largely removed from inflow to effluent. However, a health risk might exist as an increased relative abundance of potential pathogenic Legionella spp. such as L. lytica was observed. Three Clostridioides difficile isolates from wastewater inflow and an urban bathing lake in Berlin ("Weisser See") were obtained and sequenced. The two isolates from the wastewater did not carry toxin genes, whereas the isolate from the lake was positive for the toxin genes. All three isolates were closely related to human strains. This indicates a potential, but rather sporadic health risk. Avian influenza A viruses were detected in 38.8% of sediment samples by PCR, but virus isolation failed. An experiment with inoculated freshwater and sediment samples showed that virus isolation from sediment requires relatively high virus concentrations and worked much better in Madin-Darby Canine Kidney (MDCK) cell cultures than in embryonated chicken eggs, but a low titre of influenza contamination in freshwater samples was sufficient to recover virus.
In conclusion, this work revealed potential health risks coming from bacterial groups with pathogenic potential such as Legionella spp. whose relative abundance is higher in the released effluent than in the inflow of the investigated WWTP. It further indicates that water bodies such as wastewater and lake sediments can serve as reservoir and vector, even for non-typical water-borne or water-transmitted pathogens such as C. difficile.
Untersuchungen an neuartigen sauerstoffsubstituierten Donoren und Akzeptoren für Singulettsauerstoff
(2019)
In the course of this work, aromatics such as naphthalenes and anthracenes were converted with singlet oxygen, a reactive form of ordinary oxygen, into so-called endoperoxides. The systems used here were modified with functional groups linked to the aromatic via an oxygen bridge. The resulting endoperoxides are mostly particularly labile and could be isolated and comprehensively investigated in this work.
On the one hand, the reaction behavior was investigated. It was shown that the aromatics react with singlet oxygen at different rates depending on their functional groups. The reactivities determined in this way were additionally supported by theoretical calculations.
The resulting endoperoxides were examined for their stability under various conditions, such as elevated temperature or acidic and basic media. It was shown that the naphthalene-based endoperoxides often release the bound singlet oxygen in good yields even at very low temperatures (−40 to 0 °C). These compounds can therefore be used as mild sources of this reactive oxygen species. Furthermore, decomposition mechanisms of the anthracene endoperoxides were elucidated, and other reactive oxygen species such as hydrogen peroxide or peracids were detected.
The modifications of the aromatics also include glucose residues. The endoperoxides prepared here could therefore prove to be promising compounds in cancer therapy, since cancer cells require carbohydrate-rich compounds for their metabolism to a much greater extent than healthy cells. Cleavage of endoperoxides with glucose substituents likewise releases reactive oxygen species, which could thus lead to cell death.
For many years, psycholinguistic evidence has been predominantly based on findings from native speakers of Indo-European languages, primarily English, thus providing a rather limited perspective into the human language system. In recent years a growing body of experimental research has been devoted to broadening this picture, testing a wide range of speakers and languages, aiming to understand the factors that lead to variability in linguistic performance. The present dissertation investigates sources of variability within the morphological domain, examining how and to what extent morphological processes and representations are shaped by specific properties of languages and speakers. Firstly, the present work focuses on a less explored language, Hebrew, to investigate how the unique non-concatenative morphological structure of Hebrew, namely a non-linear combination of consonantal roots and vowel patterns to form lexical entries (L-M-D + CiCeC = limed ‘teach’), affects morphological processes and representations in the Hebrew lexicon. Secondly, a less investigated population was tested: late learners of a second language. We directly compare native (L1) and non-native (L2) speakers, specifically highly proficient and immersed late learners of Hebrew. Throughout all publications, we have focused on a morphological phenomenon of inflectional classes (called binyanim; singular: binyan), comparing productive (class Piel, e.g., limed ‘teach’) and unproductive (class Paal, e.g., lamad ‘learn’) verbal inflectional classes.
By using this test case, two psycholinguistic aspects of morphology were examined: (i) how morphological structure affects online recognition of complex words, using masked priming (Publications I and II) and cross-modal priming (Publication III) techniques, and (ii) what type of cues are used when extending morpho-phonological patterns to novel complex forms, a process referred to as morphological generalization, using an elicited production task (Publication IV).
The findings obtained in the four manuscripts, either published or under review, provide significant insights into the role of productivity in Hebrew morphological processing and generalization in L1 and L2 speakers. Firstly, the present L1 data revealed a close relationship between productivity of Hebrew verbal classes and recognition process, as revealed in both priming techniques. The consonantal root was accessed only in the productive class (Piel) but not the unproductive class (Paal). Another dissociation between the two classes was revealed in the cross-modal priming, yielding a semantic relatedness effect only for Paal but not Piel primes. These findings are taken to reflect that the Hebrew mental representations display a balance between stored undecomposable unstructured stems (Paal) and decomposed structured stems (Piel), in a similar manner to a typical dual-route architecture, showing that the Hebrew mental lexicon is less unique than previously claimed in psycholinguistic research. The results of the generalization study, however, indicate that there are still substantial differences between inflectional classes of Hebrew and other Indo-European classes, particularly in the type of information they rely on in generalization to novel forms. Hebrew binyan generalization relies more on cues of argument structure and less on phonological cues.
Secondly, clear L1/L2 differences were observed in the sensitivity to abstract morphological and morpho-syntactic information during complex word recognition and generalization. While L1 Hebrew speakers were sensitive to the binyan information during recognition, expressed by the contrast in root priming, L2 speakers showed similar root priming effects for both classes, but only when the primes were presented in an infinitive form. A root priming effect was not obtained for primes in a finite form. These patterns are interpreted as evidence for a reduced sensitivity of L2 speakers to morphological information, such as information about inflectional classes, and evidence for processing costs in recognition of forms carrying complex morpho-syntactic information. Reduced reliance on structural information cues was found in production of novel verbal forms, when the L2 group displayed a weaker effect of argument structure for Piel responses, in comparison to the L1 group. Given the L2 results, we suggest that morphological and morphosyntactic information remains challenging for late bilinguals, even at high proficiency levels.
Previous research offers equivocal results regarding the effect of social networking site use on individuals’ self-esteem. We conduct a systematic literature review to examine the existing literature and develop a theoretical framework in order to classify the results. The framework proposes that self-esteem is affected by three distinct processes that incorporate self-evaluative information: social comparison processes, social feedback processing, and self-reflective processes. Due to particularities of the social networking site environment, the accessibility and quality of self-evaluative information is altered, which leads to online-specific effects on users’ self-esteem. Results of the reviewed studies suggest that when a social networking site is used to compare oneself with others, it mostly results in decreases in users’ self-esteem. On the other hand, receiving positive social feedback from others or using these platforms to reflect on one’s own self is mainly associated with benefits for users’ self-esteem. Nevertheless, inter-individual differences and the specific activities performed by users on these platforms should be considered when predicting individual effects.
Undisclosed desires
(2019)
Following decades of quality management featuring in higher education settings, questions regarding its implementation, impact and outcomes remain. Indeed, leaving aside anecdotal case studies and value-laden documentaries of best practice, current research still knows very little about the implementation of quality management in teaching and learning within higher education institutions. Referring to data collected from German higher education institutions in which a quality management department or functional equivalent was present, this article theorises and provides evidence for the supposition that the implementation of quality management follows two implicit logics. Specifically, it tends either towards the logic of appropriateness or, contrastingly, towards the logic of consequentialism. This study’s results also suggest that quality managers’ socialisation is related to these logics and that it influences their views on quality management in teaching and learning.
Predators can have numerical and behavioral effects on prey animals. While numerical effects are well explored, the impact of behavioral effects is unclear. Furthermore, behavioral effects are generally either analyzed with a focus on single individuals or with a focus on consequences for other trophic levels. Thereby, the impact of fear on the level of prey communities is overlooked, despite potential consequences for conservation and nature management. In order to improve our understanding of predator-prey interactions, an assessment of the consequences of fear in shaping prey community structures is crucial.
In this thesis, I evaluated how fear alters prey space use, community structure and composition, focusing on terrestrial mammals. By integrating landscapes of fear in an existing individual-based and spatially-explicit model, I simulated community assembly of prey animals via individual home range formation. The model comprises multiple hierarchical levels from individual home range behavior to patterns of prey community structure and composition. The mechanistic approach of the model allowed for the identification of underlying mechanism driving prey community responses under fear.
My results show that fear modified prey space use and community patterns. Under fear, prey animals shifted their home ranges towards safer areas of the landscape. Furthermore, fear decreased the total biomass and the diversity of the prey community and reinforced shifts in community composition towards smaller animals. These effects could be mediated by an increasing availability of refuges in the landscape. Under landscape changes, such as habitat loss and fragmentation, fear intensified negative effects on prey communities. Prey communities in risky environments were subject to a non-proportional diversity loss of up to 30% if fear was taken into account. Regarding habitat properties, I found that well-connected, large safe patches can reduce the negative consequences of habitat loss and fragmentation on prey communities. Including variation in risk perception between prey animals had consequences for prey space use. Animals with a high risk perception predominantly used safe areas of the landscape, while animals with a low risk perception preferred areas with a high food availability. On the community level, prey diversity was higher in heterogeneous landscapes of fear if individuals varied in their risk perception compared to scenarios in which all individuals had the same risk perception.
Overall, my findings give a first, comprehensive assessment of the role of fear in shaping prey communities. The linkage between individual home range behavior and patterns at the community level allows for a mechanistic understanding of the underlying processes. My results underline the importance of the structure of the landscape of fear as a key driver of prey community responses, especially if the habitat is threatened by landscape changes. Furthermore, I show that individual landscapes of fear can improve our understanding of the consequences of trait variation on community structures. Regarding conservation and nature management, my results support calls for modern conservation approaches that go beyond single species and address the protection of biotic interactions.
This is a publication-based dissertation comprising three original research studies (one published, one submitted and one ready for submission; status March 2019). The dissertation introduces a generic computer model as a tool to investigate the behaviour and population dynamics of animals in cyclic environments. The model is further employed for analysing how migratory birds respond to various scenarios of altered food supply under global change. Here, ecological and evolutionary time-scales are considered, as well as the biological constraints and trade-offs the individual faces, which ultimately shape response dynamics at the population level. Further, the effect of fine-scale temporal patterns in resource supply is studied, which is challenging to achieve experimentally. My findings predict population declines, altered behavioural timing and negative carry-over effects arising in migratory birds under global change. They thus stress the need for intensified research on how ecological mechanisms are affected by global change and for effective conservation measures for migratory birds. The open-source modelling software created for this dissertation can now be used for other taxa and related research questions. Overall, this thesis improves our mechanistic understanding of the impacts of global change on migratory birds as one prerequisite to comprehend ongoing global biodiversity loss. The research results are discussed in a broader ecological and scientific context in a concluding synthesis chapter.
Ultrafast magnetisation dynamics have been investigated intensely for two decades. The recovery process after demagnetisation, however, has rarely been studied experimentally and discussed in detail. The focus of this work lies on the investigation of the magnetisation on long timescales after laser excitation. It combines two ultrafast time-resolved methods to study the relaxation of the magnetic and lattice systems after excitation with a high-fluence ultrashort laser pulse. The magnetic system is investigated by time-resolved measurements of the magneto-optical Kerr effect. The experimental setup was implemented in the scope of this work. The lattice dynamics were obtained with ultrafast X-ray diffraction. The combination of both techniques leads to a better understanding of the mechanisms involved in magnetisation recovery from a non-equilibrium condition. Three different groups of samples are investigated in this work: thin nickel layers capped with nonmagnetic materials, a continuous sample of the ordered L10 phase of iron platinum, and a sample consisting of iron platinum nanoparticles embedded in a carbon matrix. The study of the remagnetisation reveals a general trend for all of the samples: the remagnetisation process can be described by two time dependences, a first exponential recovery that slows down with an increasing amount of energy absorbed in the system until an approximately linear time dependence is observed, followed by a second exponential recovery. In the case of low-fluence excitation, the first recovery is faster than the second. With increasing fluence the first recovery is slowed down and can be described as a linear function. If the pump-induced temperature increase in the sample is sufficiently high, a phase transition to a paramagnetic state is observed. In the remagnetisation process, the transition into the ferromagnetic state is characterised by a distinct transition between the linear and exponential recovery.
From the combination of the transient lattice temperature Tp(t) obtained from ultrafast X-ray measurements and the magnetisation M(t) gained from magneto-optical measurements, we construct the transient magnetisation-versus-temperature relations M(Tp). If the lattice temperature remains below the Curie temperature, the remagnetisation curve M(Tp) is linear and stays below the equilibrium M(T) curve in the continuous transition metal layers. When the sample is heated above the phase transition, the remagnetisation converges towards the static temperature dependence. For the granular iron platinum sample the M(Tp) curves for different fluences coincide, i.e. the remagnetisation follows a similar path irrespective of the initial laser-induced temperature jump.
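Constructing the transient M(Tp) relation amounts to eliminating the common time axis from the two measured signals. A minimal sketch with synthetic Tp(t) and M(t) curves (the functional forms and all numbers are illustrative stand-ins, not the measured data):

```python
import numpy as np

# Hypothetical transient lattice temperature Tp(t) and magnetisation M(t)
# sampled on a common delay grid t (in the experiment these would come from
# UXRD and MOKE measurements, respectively).
t = np.linspace(0.0, 2.0, 200)              # pump-probe delay (synthetic units)
Tp = 300.0 + 200.0 * np.exp(-t / 0.8)       # cooling lattice temperature
M = 1.0 - 0.9 * np.exp(-t / 0.8)            # recovering magnetisation

# Because both signals share the time axis, eliminating t yields the
# transient M(Tp) relation point by point.
order = np.argsort(Tp)
Tp_sorted, M_of_Tp = Tp[order], M[order]
```

Plotting M_of_Tp against Tp_sorted would give the transient magnetisation-versus-temperature curve that can be compared to the equilibrium M(T).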
Skarn deposits are found on every continent and were formed at different times from the Precambrian to the Tertiary. Typically, the formation of a skarn is induced by a granitic intrusion into carbonate-rich sedimentary rocks. During contact metamorphism, fluids derived from the granite interact with the sedimentary host rocks, which results in the formation of calc-silicate minerals at the expense of carbonates. These newly formed minerals generally develop in a zoned metamorphic aureole, with garnet in the proximal and pyroxene in the distal zone. Ore elements contained in magmatic fluids are precipitated due to the change in fluid composition. The temperature decrease of the entire system, due to the cooling of the magmatic fluids and the influx of meteoric water, allows retrogression of some prograde minerals.
The Hämmerlein skarn deposit has a multi-stage history, with a skarn formation during regional metamorphism and a retrogression of primary skarn minerals during the granitic intrusion. Tin was mobilized during both events. The 340 Ma old tin-bearing skarn minerals show that tin was present in the sediments before the granite intrusion, and that the first Sn enrichment occurred during skarn formation by regional metamorphic fluids. In a second step at ca. 320 Ma, tin-bearing fluids were produced with the intrusion of the Eibenstock granite. Tin, which was added by the granite and remobilized from skarn calc-silicates, precipitated as cassiterite.
Compared to clay or marl, the skarn is enriched in Sn, W, In, Zn, and Cu. These metals were supplied during both regional metamorphism and granite emplacement. In addition, isotopic and chemical data of the skarn samples show that the granite selectively added elements such as Sn, and that there was no visible granitic contribution to the sedimentary signature of the skarn.
The example of Hämmerlein shows that it is possible to form a tin-rich skarn without an associated granite when tin has already been transported from tin-bearing sediments by aqueous metamorphic fluids during regional metamorphism. Such skarns are not economically interesting if tin is contained only in the skarn minerals. Later alteration of the skarn (the heat and fluid source need not be a granite), however, can lead to the formation of secondary cassiterite (SnO2), which can make the skarn economically highly attractive.
We study travelling chimera states in a ring of nonlocally coupled heterogeneous (with Lorentzian distribution of natural frequencies) phase oscillators. These states are coherence-incoherence patterns moving in the lateral direction because of the broken reflection symmetry of the coupling topology. To explain the results of direct numerical simulations we consider the continuum limit of the system. In this case travelling chimera states correspond to smooth travelling wave solutions of some integro-differential equation, called the Ott–Antonsen equation, which describes the long time coarse-grained dynamics of the oscillators. Using the Lyapunov–Schmidt reduction technique we suggest a numerical approach for the continuation of these travelling waves. Moreover, we perform their linear stability analysis and show that travelling chimera states can lose their stability via fold and Hopf bifurcations. Some of the Hopf bifurcations turn out to be supercritical resulting in the observation of modulated travelling chimera states.
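The model described above can be sketched numerically in a few lines. The following is an illustrative simulation of a ring of nonlocally coupled phase oscillators with Lorentzian-distributed natural frequencies; the cosine coupling kernel, the phase lag, and all parameter values are placeholders of my own choosing, not the settings used in the study, and the run is not tuned to the travelling chimera regime:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128                                    # oscillators on a ring
gamma = 0.01                               # half-width of the Lorentzian
# Lorentzian (Cauchy) natural frequencies via inverse transform sampling
omega = gamma * np.tan(np.pi * (rng.random(N) - 0.5))

x = 2 * np.pi * np.arange(N) / N           # positions on the ring
# Nonlocal cosine coupling kernel; the phase lag alpha plays the role of a
# symmetry-affecting parameter here (purely illustrative choices).
A, alpha, K = 0.9, 1.45, 1.0
G = (1 + A * np.cos(x[:, None] - x[None, :])) / N

theta = 2 * np.pi * rng.random(N)
dt = 0.05
for _ in range(2000):                      # Euler integration of the phase model
    coupling = (G * np.sin(theta[None, :] - theta[:, None] - alpha)).sum(axis=1)
    theta = theta + dt * (omega + K * coupling)

# Global Kuramoto order parameter magnitude, between 0 and 1
r = abs(np.exp(1j * theta).mean())
```

A coherence-incoherence pattern would be diagnosed from a local (windowed) order parameter along the ring rather than from the global r computed here.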
This paper addresses issues of translating both words and rituals as Muslim cemetery keepers care for Jewish graves and recite traditional prayers for the dead in Morocco. Several issues of translation must be dealt with while considering these rare and disappearing practices. The first issue to be discussed is the translation of Hebrew inscriptions into French by cemetery keepers. One cemetery keeper in Meknes has tried to compile an exhaustive index of the names and dates represented on the gravestones under her care. The Muslim guard of the Jewish cemetery in Sefrou, on the other hand, has somewhat famously told visitors differing stories about his ability and willingness to pray the Kaddish over the graves of emigrated relatives who cannot return to mark an anniversary death. These practices provide the context for considering how the act of Muslims caring for Jewish graves creates linguistic and ritual translations of traditional Jewish ancestor care.
Transitional Justice
(2019)
Synchronization – the adjustment of rhythms among coupled self-oscillatory systems – is a fascinating dynamical phenomenon found in many biological, social, and technical systems.
The present thesis deals with synchronization in finite ensembles of weakly coupled self-sustained oscillators with distributed frequencies.
The standard model for the description of this collective phenomenon is the Kuramoto model – partly due to its analytical tractability in the thermodynamic limit of infinitely many oscillators. Similar to a phase transition in the thermodynamic limit, an order parameter indicates the transition from incoherence to a partially synchronized state. In the latter, a part of the oscillators rotates at a common frequency. In the finite case, fluctuations occur, originating from the quenched noise of the finite natural frequency sample.
We study intermediate ensembles of a few hundred oscillators in which fluctuations are comparably strong but which also allow for a comparison to frequency distributions in the infinite limit.
First, we define an alternative order parameter for the indication of a collective mode in the finite case. Then we test the dependence of the degree of synchronization and the mean rotation frequency of the collective mode on different characteristics for different coupling strengths.
We find, first numerically, that the degree of synchronization depends strongly on the form (quantified by kurtosis) of the natural frequency sample and the rotation frequency of the collective mode depends on the asymmetry (quantified by skewness) of the sample. Both findings are verified in the infinite limit.
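The quantities discussed here (the order parameter and the skewness and kurtosis of one quenched frequency sample) are straightforward to reproduce numerically. A minimal sketch of a finite mean-field Kuramoto ensemble, with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, dt, steps = 300, 2.0, 0.02, 5000
omega = rng.standard_normal(N)            # one quenched natural-frequency sample

# Sample skewness and excess kurtosis of this particular frequency draw
m = omega - omega.mean()
skew = (m**3).mean() / (m**2).mean()**1.5
kurt = (m**4).mean() / (m**2).mean()**2 - 3.0

theta = 2 * np.pi * rng.random(N)
for _ in range(steps):                    # mean-field Kuramoto dynamics
    z = np.exp(1j * theta).mean()         # complex order parameter
    theta += dt * (omega + K * abs(z) * np.sin(np.angle(z) - theta))
z = np.exp(1j * theta).mean()             # final order parameter
```

Repeating this over many frequency draws and regressing abs(z) on kurt, and the rotation frequency of the collective mode on skew, would mimic the numerical part of the analysis described above.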
With these findings, we better understand and generalize observations of other authors. Somewhat aside from the general line of thought, we find an analytical expression for the volume contraction in phase space.
The second part of this thesis concentrates on an ordering effect of the finite-size fluctuations. In the infinite limit, the oscillators are separated into coherent and incoherent, i.e. ordered and disordered, oscillators. In finite ensembles, finite-size fluctuations can generate additional order among the asynchronous oscillators. The basic principle, noise-induced synchronization, is known from several recent papers. Among coupled oscillators, phases are pushed together by the order parameter fluctuations, which we show directly on the one hand and, on the other hand, quantify with a synchronization measure from directional statistics between pairs of passive oscillators.
We determine the dependence of this synchronization measure on the ratio of the pairwise natural frequency difference to the variance of the order parameter fluctuations. We find good agreement with a simple analytical model in which we replace the deterministic fluctuations of the order parameter by white noise.
We combine ultrafast X-ray diffraction (UXRD) and time-resolved Magneto-Optical Kerr Effect (MOKE) measurements to monitor the strain pulses in laser-excited TbFe2/Nb heterostructures. Spatial separation of the Nb detection layer from the laser excitation region allows for a background-free characterization of the laser-generated strain pulses. We clearly observe symmetric bipolar strain pulses if the excited TbFe2 surface terminates the sample and a decomposition of the strain wavepacket into an asymmetric bipolar and a unipolar pulse, if a SiO2 glass capping layer covers the excited TbFe2 layer. The inverse magnetostriction of the temporally separated unipolar strain pulses in this sample leads to a MOKE signal that linearly depends on the strain pulse amplitude measured through UXRD. Linear chain model simulations accurately predict the timing and shape of UXRD and MOKE signals that are caused by the strain reflections from multiple interfaces in the heterostructure.
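The linear chain model mentioned above treats the sample as masses coupled by springs. A heavily simplified, illustrative sketch (unit masses and spring constants, a constant driving stress standing in for laser heating; none of the numbers correspond to the actual heterostructure):

```python
import numpy as np

# Minimal linear chain: N unit masses coupled by unit springs; a sudden
# expansive stress in the first few "excited" cells launches a strain pulse
# that propagates into the bulk.
N, excited, steps, dt = 400, 20, 300, 0.1
u = np.zeros(N)                    # displacements
v = np.zeros(N)                    # velocities
force0 = np.zeros(N)
force0[:excited] = 0.01            # quasi-static stress mimicking laser heating

def accel(u):
    a = np.zeros_like(u)
    a[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]   # nearest-neighbour spring forces
    a[0] = u[1] - u[0]                        # free surface
    a[-1] = u[-2] - u[-1]
    return a + force0

for _ in range(steps):             # velocity Verlet integration
    a = accel(u)
    u += v * dt + 0.5 * a * dt**2
    v += 0.5 * (a + accel(u)) * dt

strain = np.diff(u)                # local strain between neighbouring cells
```

In a full simulation, the chain would carry layer-specific masses, spring constants and stresses, and the strain in a buried detection layer would be converted to X-ray diffraction and MOKE signals.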
Towards Eurasia
(2019)
In order to heed the call in world literature studies to work against disciplinary Eurocentrism by refiguring both what constitutes world literature and how this is read, in this article I propose world literature as an archive of world-making practices and as an impulse for the articulation of alternative methodological approaches. This takes world literature from the postcolonial South as, following Pheng Cheah, instantiating a modality of world literature in which the need for imagining worlds with alternative centres to those determined by coloniality is particularly acute. A response to this is facilitated and illustrated by a reading of Bengali poet Rabindranath Tagore’s Letters from Russia (1930), and South African writer/activist Alex La Guma’s A Soviet Journey (1978). By drawing forward connections between the postcolonial South and the former Soviet Union, this complicates traditional colonial arrangements of the colonial ‘centre’ as cradle of civilisation and culture, as well as postcolonial scholarship’s cumulative fetishisation of ‘Europe’, by allowing a reshuffling of the co-ordinates determining ‘centres’ and ‘peripheries’ and a more nuanced grasp of ‘Europe’ simultaneously. These imaginative journeys destabilise ‘Europe’ as closed category and call forth Eurasia as a more appropriate categorical–cartographical framework for thinking this space and the connections and (hi)story-telling it stages and fosters.
The identification of vulnerabilities in IT infrastructures is a crucial problem in enhancing security, because many incidents resulted from already known vulnerabilities that could have been resolved. Thus, the initial identification of vulnerabilities has to be used to directly resolve the related weaknesses and mitigate attack possibilities. The nature of vulnerability information requires collection and normalization of the information prior to any utilization, because the information is widely distributed across different sources, each with its own format. Therefore, a comprehensive vulnerability model was defined and the different sources were integrated into one database. Furthermore, different analytic approaches have been designed and implemented into the HPI-VDB, which directly benefit from the comprehensive vulnerability model and especially from the logical preconditions and postconditions.
Firstly, different approaches to detect vulnerabilities in both the IT systems of average users and the corporate networks of large companies are presented. The approaches mainly focus on the identification of all installed applications, since this is a fundamental step in the detection. This detection is realized differently depending on the target use case: the experience of the user, as well as the layout and capabilities of the target infrastructure, are taken into account. Furthermore, a passive, lightweight detection approach was developed that utilizes existing information in corporate networks to identify applications.
In addition, two different approaches to represent the results using attack graphs are illustrated in a comparison between traditional attack graphs and a simplified graph version, which was integrated into the database as well. The implementation of these use cases for vulnerability information pays particular attention to usability. Besides the analytic approaches, a high data quality of the vulnerability information had to be achieved and guaranteed. The problems of receiving incomplete or unreliable information on vulnerabilities are addressed with different correction mechanisms. The corrections can be carried out with correlation or lookup mechanisms in reliable sources or identifier dictionaries. Furthermore, a machine-learning-based verification procedure is presented that allows an automatic derivation of important characteristics from the textual description of the vulnerabilities.
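The role of logical preconditions and postconditions in attack graph generation can be illustrated with a toy forward-chaining procedure. All records, condition names and the data layout below are hypothetical and only mimic the spirit of the described model:

```python
# Hypothetical vulnerability records with logical pre-/postconditions, in the
# spirit of the described model (identifiers and conditions are invented).
vulns = [
    {"id": "CVE-A", "pre": {"network_access"}, "post": {"user_shell"}},
    {"id": "CVE-B", "pre": {"user_shell"}, "post": {"root"}},
    {"id": "CVE-C", "pre": {"physical_access"}, "post": {"root"}},
]

def attack_graph(initial, vulns):
    """Forward-chain exploits: a vulnerability fires when all of its
    preconditions are contained in the current attacker state; its
    postconditions are then added, possibly enabling further exploits."""
    state, edges = set(initial), []
    changed = True
    while changed:
        changed = False
        for v in vulns:
            if v["pre"] <= state and not v["post"] <= state:
                edges.append((tuple(sorted(state)), v["id"]))
                state |= v["post"]
                changed = True
    return state, edges

final, edges = attack_graph({"network_access"}, vulns)
```

Starting from network access, the chain CVE-A then CVE-B is derived, while CVE-C never fires because its precondition is unreachable; the recorded edges form a (simplistic) attack graph.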
Topologische Datenanalyse
(2019)
When analysing higher-dimensional data, the shape of the data can provide important information about the data set. Given a point cloud sampled from an unknown topological space, topological data analysis (TDA) attempts to reconstruct the original space. This contribution gives an introduction to topological data analysis and focuses on two important aspects: persistent homology and the Mapper. First, the necessary theoretical foundations are presented; the methodology is then applied to the visualization of data.
Persistent homology is one of the standard tools of TDA. It is applied, for example, in shape recognition and shape description. The Mapper, as the second important concept of TDA, converts large, higher-dimensional data sets into simplicial complexes and can thereby determine geometric and topological properties of the data. Furthermore, the Mapper method is a useful tool for visualizing multidimensional data where statistical methods fail.
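As a small illustration of persistent homology, the zero-dimensional persistence of a point cloud (the birth and death of connected components along the Vietoris-Rips filtration) can be computed with a union-find structure over edges sorted by length. A self-contained sketch; the point cloud is made up for the example:

```python
import math

# Zero-dimensional persistent homology: components of the Vietoris-Rips
# filtration tracked with union-find. Every merge of two components at
# edge length d kills one component born at scale 0.
points = [(0, 0), (0.1, 0), (5, 5), (5.1, 5), (10, 0)]

def persistence_0d(points):
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i in range(n) for j in range(i + 1, n)
    )
    bars = []
    for d, i, j in edges:                   # process edges by increasing length
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, d))           # a component born at 0 dies at d
    bars.append((0.0, math.inf))            # one component persists forever
    return bars

bars = persistence_0d(points)
```

For the five points above, two short bars (the tight pairs) die early and the remaining merges happen at much larger scales, which is exactly the kind of shape information a persistence diagram encodes.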
This research study examines how top managers deal with dilemmas. Dilemmas are everyday business in top management. The actors concerned are therefore repeatedly confronted with them, and dealing with them is, in a sense, part of their job description. Added to this are dilemmas outside the immediate business sphere, such as those between family time and working time. Yet this field remains a scarcely examined area of research. While dilemmas have received increasing attention in other areas, their specific characteristics in top management have not been examined in any differentiated way, nor have the associated coping strategies. With regard to top managers' dilemmas, theory and practice stand largely in opposition; in other words, the empirical evidence lacks a theoretical foundation. This study addresses that situation. On the basis of a differentiated and broad survey of theories on dilemmas, even where these have not yet been related to top managers, and an empirical study, which forms the core of this work, the field of top managers' dilemmas is to be opened up for research. The empirical basis consists primarily of narrative interviews with top managers about their perception of dilemmas, identified causes, ways of dealing with them, and outcomes. This makes it possible to derive analytically both types of top managers and the kinds of dilemmas they are or were confronted with. Given the practical relevance of top managers' dilemmas, however, the study not only develops a theoretical model of the topic but also reflects on practice in the form of recommendations for action. Finally, the general theory of dilemmas, without specific reference to top managers, is contrasted with the theoretical insights of this study on an empirical basis. The empirical data collection and analysis follow the approach of grounded theory methodology.
Data assimilation has been an active area of research in recent years, owing to its wide utility. At the core of data assimilation are filtering, prediction, and smoothing procedures. Filtering entails incorporation of measurements' information into the model to gain more insight into a given state governed by a noisy state space model. Most natural laws are governed by time-continuous nonlinear models. For the most part, the knowledge available about a model is incomplete; and hence uncertainties are approximated by means of probabilities. Time-continuous filtering, therefore, holds promise for wider usefulness, for it offers a means of combining noisy measurements with imperfect model to provide more insight on a given state.
The solution to time-continuous nonlinear Gaussian filtering problem is provided for by the Kushner-Stratonovich equation. Unfortunately, the Kushner-Stratonovich equation lacks a closed-form solution. Moreover, the numerical approximations based on Taylor expansion above third order are fraught with computational complications. For this reason, numerical methods based on Monte Carlo methods have been resorted to. Chief among these methods are sequential Monte-Carlo methods (or particle filters), for they allow for online assimilation of data. Particle filters are not without challenges: they suffer from particle degeneracy, sample impoverishment, and computational costs arising from resampling.
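The particle degeneracy and resampling issues mentioned above can be seen in a minimal bootstrap particle filter. A sketch on a one-dimensional linear-Gaussian toy model; all parameters and the model itself are invented for illustration:

```python
import numpy as np

# Minimal bootstrap particle filter for the toy model
#   x_{k+1} = a x_k + process noise,   y_k = x_k + measurement noise.
rng = np.random.default_rng(3)
a, q, r, T, Np = 0.9, 0.1, 0.2, 50, 500

# Simulate a ground-truth trajectory and noisy observations
x = 0.0
xs, ys = [], []
for _ in range(T):
    x = a * x + np.sqrt(q) * rng.standard_normal()
    xs.append(x)
    ys.append(x + np.sqrt(r) * rng.standard_normal())

particles = rng.standard_normal(Np)
estimates = []
for y in ys:
    particles = a * particles + np.sqrt(q) * rng.standard_normal(Np)  # predict
    w = np.exp(-0.5 * (y - particles) ** 2 / r)                       # weight
    w /= w.sum()
    estimates.append(np.dot(w, particles))                            # estimate
    idx = rng.choice(Np, size=Np, p=w)                                # resample
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - np.array(xs)) ** 2))
```

The multinomial resampling step is precisely what feedback particle filters avoid: instead of reweighting and resampling, each particle is steered by a feedback control term driven by the innovation.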
The goals of this thesis are: (i) to review the derivation of the Kushner-Stratonovich equation from first principles and its extant numerical approximation methods; (ii) to study feedback particle filters as a way of avoiding resampling in particle filters; (iii) to study joint state and parameter estimation in time-continuous settings; (iv) to apply the notions studied to linear hyperbolic stochastic differential equations.
The interconnection between Itô and Stratonovich integrals, and between the corresponding stochastic partial differential equations, is introduced in anticipation of feedback particle filters. With these ideas, and motivated by the variants of ensemble Kalman-Bucy filters founded on the structure of the innovation process, a feedback particle filter with randomly perturbed innovation is proposed. Moreover, feedback particle filters based on coupling of prediction and analysis measures are proposed. They register a better performance than the bootstrap particle filter at lower ensemble sizes.
We study joint state and parameter estimation, both by means of extended state spaces and by use of dual filters. Feedback particle filters seem to perform well in both cases. Finally, we apply joint state and parameter estimation in the advection and wave equation, whose velocity is spatially varying. Two methods are employed: Metropolis Hastings with filter likelihood and a dual filter comprising of Kalman-Bucy filter and ensemble Kalman-Bucy filter. The former performs better than the latter.
Modern health care systems are characterized by pronounced prevention and cost-optimized treatments. This dissertation offers novel empirical evidence on how useful such measures can be. The first chapter analyzes how radiation, a main pollutant in health care, can negatively affect cognitive health. The second chapter focuses on the effect of Low Emission Zones on public health, as air quality is the major external source of health problems. Both chapters point out potentials for preventive measures. Finally, chapter three studies how changes in treatment prices affect the reallocation of hospital resources. In the following, I briefly summarize each chapter and discuss implications for health care systems as well as other policy areas. Based on the National Educational Panel Study linked to data on radiation, chapter one shows that radiation can have negative long-term effects on cognitive skills, even at subclinical doses. Exploiting arguably exogenous variation in soil contamination in Germany due to the Chernobyl disaster in 1986, the findings show that people exposed to higher radiation perform significantly worse in cognitive tests 25 years later. Identification is ensured by abnormal rainfall within a critical period of ten days. The results show that the effect is stronger among older cohorts than younger cohorts, which is consistent with radiation accelerating cognitive decline as people get older. On average, a one-standard-deviation increase in the initial level of Cs-137 (around 30 chest x-rays) is associated with a decrease in cognitive skills of 4.1 percent of a standard deviation (around 0.05 school years). Chapter one shows that sub-clinical levels of radiation can have negative consequences even after early childhood. This is of particular importance because most of the literature focuses on exposure very early in life, often during pregnancy. However, the population exposed after birth is over 100 times larger.
These results point to substantial external human capital costs of radiation which can be reduced by the choice of medical procedures. There is a large potential for reductions because about one-third of all CT scans are assumed to be not medically justified (Brenner and Hall, 2007). If people receive unnecessary CT scans because of economic incentives, this chapter points to additional external costs of health care policies. Furthermore, the results can inform the cost-benefit trade-off for medically indicated procedures. Chapter two provides evidence about the effectiveness of Low Emission Zones. Low Emission Zones are typically justified by improvements in population health. However, there is little evidence about the potential health benefits of policy interventions aiming at improving air quality in inner cities. The chapter asks how the coverage of Low Emission Zones affects air pollution and hospitalization, exploiting variation in the roll-out of Low Emission Zones in Germany. It combines information on the geographic coverage of Low Emission Zones with rich panel data on the universe of German hospitals over the period from 2006 to 2016, with precise information on hospital locations and the annual frequency of detailed diagnoses. In order to establish that our estimates of Low Emission Zones' health impacts can indeed be attributed to improvements in local air quality, we use data from Germany's official air pollution monitoring system, assign monitor locations to Low Emission Zones, and test whether measures of air pollution are affected by the coverage of a Low Emission Zone. Results in chapter two confirm earlier findings that the introduction of Low Emission Zones improved air quality significantly by reducing NO2 and PM10 concentrations.
Furthermore, the chapter shows that hospitals whose catchment areas are covered by a Low Emission Zone diagnose significantly fewer air-pollution-related diseases, in particular through a reduction in the incidence of chronic diseases of the circulatory and respiratory systems. The effect is stronger before 2012, which is consistent with a general improvement in the vehicle fleet's emission standards. Depending on the disease, a one-standard-deviation increase in the share of a hospital's catchment area covered by a Low Emission Zone reduces the yearly number of diagnoses by up to 5 percent. These findings have strong implications for policy makers. In 2015, overall costs for health care in Germany were around 340 billion euros, of which 46 billion euros were attributable to diseases of the circulatory system, making it the most expensive disease group, with 2.9 million cases (Statistisches Bundesamt, 2017b). Hence, reductions in the incidence of diseases of the circulatory system may directly reduce society's health care costs. Whereas chapters one and two study the demand side of health care markets and thus preventive potential, chapter three analyzes the supply side. Exploiting the same hospital panel data set as in chapter two, chapter three studies the effect of treatment price shocks on the reallocation of hospital resources in Germany. Starting in 2005, the implementation of the German DRG system led to general idiosyncratic treatment price shocks for individual hospitals. Thus far there is little evidence of the impact of general price shocks on the reallocation of hospital resources. Additionally, I add to the existing literature by showing that price shocks can have persistent effects on hospital resources even when these shocks vanish. However, simple OLS regressions would underestimate the true effect due to endogenous treatment price shocks.
I implement a novel instrumental variable strategy that exploits the exogenous variation in the number of days of snow in hospital catchment areas. A peculiarity of the reform allowed variation in days of snow to have a persistent impact on treatment prices. I find that treatment price increases lead to increases in input factors such as nursing staff, physicians and the range of treatments offered, but to decreases in treatment volume. This indicates supplier-induced demand. Furthermore, the probability of hospital mergers and privatization decreases. Structural differences in pre-treatment characteristics between hospitals enhance these effects. For instance, private and larger hospitals are more affected. IV estimates reveal that OLS results are biased towards zero in almost all dimensions because structural hospital differences are correlated with the reallocation of hospital resources. These results are important for several reasons. The G-DRG reform led to a persistent polarization of hospital resources, as some hospitals were exposed to treatment price increases while others experienced reductions. If hospitals increase the treatment volume in response to price reductions by offering unnecessary therapies, this has a negative impact on population wellbeing and public spending. However, results show a decrease in the range of treatments if prices decrease. Hospitals might specialize more, thus attracting more patients. From a policy perspective it is important to evaluate whether such changes in the range of treatments jeopardize an adequate nationwide provision of treatments. Furthermore, the results show a decrease in the number of nurses and physicians if prices decrease. This could partly explain the nursing crisis in German hospitals. However, since hospitals specialize more, they might be able to realize efficiency gains which justify reductions in input factors without losses in quality.
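The logic of such an instrumental variable strategy can be sketched with simulated data: an endogenous regressor is instrumented by an exogenous variable, and two-stage least squares recovers the true effect where OLS is biased. Everything below is simulated; the variable names merely echo the setting described above:

```python
import numpy as np

# Simulated IV setting: price shocks (x) are endogenous because an unobserved
# confounder (u) drives both x and the outcome (y); days of snow (z) serve as
# the instrument. The true causal effect of x on y is 1.0 by construction.
rng = np.random.default_rng(4)
n = 2000
z = rng.poisson(10, n).astype(float)             # instrument: days of snow
u = rng.standard_normal(n)                       # unobserved confounder
x = 0.5 * z + u + rng.standard_normal(n)         # endogenous price shock
y = 1.0 * x + 2.0 * u + rng.standard_normal(n)   # outcome

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Two-stage least squares
X1 = np.column_stack([np.ones(n), z])
x_hat = X1 @ ols(X1, x)                          # first stage: x on instrument
beta_iv = ols(np.column_stack([np.ones(n), x_hat]), y)[1]   # second stage

beta_ols = ols(np.column_stack([np.ones(n), x]), y)[1]      # biased benchmark
```

With this data-generating process the naive OLS coefficient is pushed above the true effect by the confounder, while the 2SLS estimate lands close to 1.0; the sign of the OLS bias depends on the confounding structure and need not match the attenuation described in the chapter.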
Further research is necessary to provide evidence on the impact of the G-DRG reform on health care quality. Another important aspect concerns changes in the organizational structure. Many public hospitals have been privatized or merged. The findings show that this is at least partly driven by the G-DRG reform. This can again lead to a lack of services offered in some regions if merged hospitals specialize more or if hospitals are taken over by ecclesiastical organizations which do not provide all treatments due to moral convictions. Overall, this dissertation reveals large potential for preventive health care measures and helps to explain reallocation processes in the hospital sector when treatment prices change. Furthermore, its findings have potentially relevant implications for other areas of public policy. Chapter one identifies an effect of low-dose radiation on cognitive health. As mankind is searching for new energy sources, nuclear power is becoming popular again. However, results of chapter one point to substantial costs of nuclear energy which have not been accounted for yet. Chapter two finds strong evidence that air quality improvements by Low Emission Zones translate into health improvements, even at relatively low levels of air pollution. These findings may, for instance, be of relevance for the design of further policies targeted at air pollution, such as diesel bans. As pointed out in chapter three, the implementation of DRG systems may have unintended side effects on the reallocation of hospital resources. This may also apply to other providers in the health care sector, such as resident doctors.
Thermoresponsive Zellkultursubstrate für zeitlich-räumlich gesteuertes Auswachsen neuronaler Zellen
(2019)
An important goal of the neurosciences is to understand the complex yet fascinating, highly ordered connectivity of neurons in the brain, which underlies neuronal processes such as perception and learning as well as neuropathologies. For improved neuronal cell culture models that allow these processes to be studied in detail, the reconstruction of ordered neuronal connections is therefore urgently required. With surface patterns of cell-attractive and cell-repellent coatings, neuronal cells and their neurites can be patterned in vitro. To control the direction of neuronal connections, the outgrowth of axons towards neighbouring cells must be controlled dynamically, for example via a switchable accessibility of the surface.
In dieser Arbeit wurde untersucht, ob mit thermoresponsiven Polymeren (TRP) beschichtete Zellkultursubstrate für eine dynamische Kontrolle des Auswachsens neuronaler Zellen geeignet sind. TRP können über die Temperatur von einem zellabweisenden in einen zellattraktiven Zustand geschaltet werden, womit die Zugänglichkeit der Oberfläche für Zellen dynamisch gesteuert werden kann. Die TRP-Beschichtung wurde mikrostrukturiert, um einzelne oder wenige neuronale Zellen zunächst auf der Oberfläche anzuordnen und das Auswachsen der Zellen und Neuriten über definierte TRP-Bereiche in Abhängigkeit der Temperatur zeitlich und räumlich zu kontrollieren. Das Protokoll wurde mit der neuronalen Zelllinie SH-SY5Y etabliert und auf humane induzierte Neurone übertragen. Die Anordnung der Zellen konnte bei Kultivierung im zellabweisenden Zustand des TRPs für bis zu 7 Tage aufrecht erhalten werden. Durch Schalten des TRPs in den zellattraktiven Zustand konnte das Auswachsen der Neuriten und Zellen zeitlich und räumlich induziert werden. Immunozytochemische Färbungen und Patch-Clamp-Ableitungen der Neurone demonstrierten die einfache Anwendbarkeit und Zellkompatibilität der TRP-Substrate.
A more precise spatial control of cell outgrowth was to be achieved by locally switching the TRP coating. To this end, micro-heater chips with microelectrodes for local Joule heating of the substrate surface were developed. To evaluate the generated temperature profiles, a temperature measurement method was developed and the acquired measurements were compared with numerically simulated values. The temperature measurement method is based on easily applied sol-gel layers containing the temperature-sensitive fluorescent dye Rhodamine B. It enables near-surface temperature measurements in dry and aqueous environments with high spatial and temperature resolution. Numerical simulations of the temperature profiles correlated well with the experimental data. On this basis, the geometry and material of the microelectrodes could be optimized for strongly localized heating. Furthermore, a cell culture chamber and a contact board for electrically contacting the microelectrodes were built for cultivating cells on the micro-heater chips.
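Temperature readout from Rhodamine B fluorescence rests on a calibration of relative intensity against temperature. The linear model and the roughly 2 %/K sensitivity in this sketch are illustrative assumptions, not calibration values from this work:

```python
def temperature_from_intensity(i_rel, t_ref=20.0, sensitivity=0.02):
    """Hypothetical linear calibration for Rhodamine B thermometry.

    Fluorescence intensity is assumed to drop ~2 % per kelvin relative to
    a reference intensity recorded at t_ref (deg C). i_rel is the measured
    intensity divided by that reference intensity. Both default values are
    illustrative, not results from the thesis.
    """
    return t_ref + (1.0 - i_rel) / sensitivity

# Under this toy calibration, 80 % relative intensity maps to about 30 deg C
t = temperature_from_intensity(0.80)
```

In practice, the calibration curve would be measured for the specific sol-gel layer and corrected for photobleaching before converting intensity maps into temperature maps.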
The presented results demonstrate for the first time the enormous potential of thermoresponsive cell culture substrates for the temporally and spatially controlled formation of ordered neuronal connections in vitro. In the future, this could enable detailed studies of neuronal information processing or of neuropathologies in relevant human cell models.
The cause of most scolioses is still unknown, making causal treatment of those affected impossible. The present work assumes that the trigger for so-called idiopathic scoliosis is a functional disorder of muscles, manifested in a reduced relative holding force. Targeted voluntary muscle contractions might make it possible to act compensatorily on the deformity in order to prevent progression or even induce regression. Patient groups with a high risk of progression, such as adolescents in their growth years, could benefit in particular.
Muscle training can be carried out with a wide variety of aids and methods. Climbing is one such option. At its core, this work therefore presents a training concept for therapeutic climbing for adolescents with scoliosis. The author draws on the Potsdam model, which allows targeted strength exercises to be carried out systematically on the climbing wall at jump-off height. Belaying equipment is therefore not required, and any necessary corrections or assistance can be provided directly. The main content of a training session following the presented concept is playful movement experience within the sport of climbing and a system boulder training.
An accompanying exercise catalog provides options for the practical implementation of the latter. The exercises focus on activating and training the muscles that derotate the vertebral bodies. In the main part of a training session, they can then be performed in combination with the correction of the lateral curvature and the sagittal profile (3D auto-correction) under the supervision of a trained therapist. The work aims to justify, for a professional reader, the selection of the exercises and the individual adaptation to the patient they contain from a functional-anatomical perspective.
In the near future, the concept will be examined in a randomized controlled trial. All necessary preparations were made within the scope of this work.
Optimization is a core part of technological advancement and is usually heavily aided by computers. However, since many optimization problems are hard, it is unrealistic to expect an optimal solution within reasonable time. Hence, heuristics are employed, that is, computer programs that try to produce solutions of high quality quickly. One special class is estimation-of-distribution algorithms (EDAs), which are characterized by maintaining a probabilistic model over the problem domain, which they evolve over time. In an iterative fashion, an EDA uses its model in order to generate a set of solutions, which it then uses to refine the model such that the probability of producing good solutions is increased.
In this thesis, we theoretically analyze the class of univariate EDAs over the Boolean domain, that is, over the space of all length-n bit strings. In this setting, the probabilistic model of a univariate EDA consists of an n-dimensional probability vector where each component denotes the probability to sample a 1 for that position in order to generate a bit string.
Our contribution follows two main directions: first, we analyze general inherent properties of univariate EDAs. Second, we determine the expected run times of specific EDAs on benchmark functions from theory. In the first part, we characterize when EDAs are unbiased with respect to the problem encoding. We then consider a setting where all solutions look equally good to an EDA, and we show that the probabilistic model of an EDA quickly evolves into an incorrect model if it is always updated such that it does not change in expectation.
In the second part, we first show that the algorithms cGA and MMAS-fp are able to efficiently optimize a noisy version of the classical benchmark function OneMax. We perturb the function by adding Gaussian noise with a variance of σ², and we prove that the algorithms are able to generate the true optimum in a time polynomial in σ² and the problem size n. For the MMAS-fp, we generalize this result to linear functions. Further, we prove a run time of Ω(n log(n)) for the algorithm UMDA on (unnoisy) OneMax. Last, we introduce a new algorithm that is able to optimize the benchmark functions OneMax and LeadingOnes both in O(n log(n)), which is a novelty for heuristics in the domain we consider.
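For illustration, a univariate EDA of the kind analyzed above can be sketched as a toy compact genetic algorithm (cGA) on OneMax. The border values 1/n and 1 − 1/n and the update parameter K follow the standard formulation of the cGA; all other choices (seed, iteration cap) are illustrative:

```python
import random

def onemax(x):
    """Fitness: number of 1-bits; maximized by the all-ones string."""
    return sum(x)

def cga(n, K, max_iters=100_000, seed=42):
    """Toy sketch of the compact genetic algorithm (cGA) on OneMax.

    The probabilistic model is a frequency vector p, initialized at 0.5.
    Each iteration samples two bit strings and shifts p by 1/K towards
    the better one at every position where the two differ. Frequencies
    are capped at the borders 1/n and 1 - 1/n so that no position is
    fixed irreversibly.
    """
    rng = random.Random(seed)
    p = [0.5] * n
    for _ in range(max_iters):
        x = [1 if rng.random() < pi else 0 for pi in p]
        y = [1 if rng.random() < pi else 0 for pi in p]
        if onemax(x) < onemax(y):
            x, y = y, x                          # x is now the winner
        if onemax(x) == n:
            return x                             # true optimum sampled
        for i in range(n):
            if x[i] != y[i]:
                p[i] += 1.0 / K if x[i] == 1 else -1.0 / K
                p[i] = min(1 - 1.0 / n, max(1.0 / n, p[i]))
    return None

solution = cga(n=32, K=64)
```

With K chosen large enough relative to n, the frequency vector drifts towards the optimum rather than being dominated by sampling noise, which mirrors the parameter trade-offs studied in the run time analyses.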
This thesis investigates whether multilingual speakers’ use of grammatical constraints in an additional language (La) is affected by the native (L1) and non-native grammars (L2) of their linguistic repertoire.
Previous studies have used untimed measures of grammatical performance to show that L1 and L2 grammars affect the initial stages of La acquisition. This thesis extends this work by examining whether speakers at intermediate levels of La proficiency, who demonstrate mature untimed/offline knowledge of the target La constraints, are differentially affected by their L1 and L2 knowledge when they comprehend sentences under processing pressure. To this end, several groups of La German speakers were tested on word order and agreement phenomena using online/timed measures of grammatical knowledge. Participants had mirror-image distributions of their prior languages: they were either L1English/L2Spanish speakers or L1Spanish/L2English speakers. Crucially, in half of the phenomena the target La constraint aligned with English but not with Spanish, while in the other half it aligned with Spanish but not with English. Results show that the L1 grammar plays a major role in the use of La constraints under processing pressure, as participants displayed increased sensitivity to La constraints when these aligned with their L1, and reduced sensitivity when they did not. Further, in specific phenomena in which the L2 and La constraints aligned, increased L2 proficiency resulted in enhanced sensitivity to the La constraint. These findings suggest that both native and non-native grammars affect how speakers use La grammatical constraints under processing pressure. However, L1 and L2 grammars influence participants’ performance differently: while L1 constraints seem to be reliably recruited to cope with the processing demands of real-time La use, proficiency in an L2 can enhance sensitivity to La constraints only in specific circumstances, namely when L2 and La constraints align.
The foreland of the Andes in South America is characterised by distinct along-strike changes in surface deformation styles. These styles are classified into two end-members, the thin-skinned and the thick-skinned style. The surface expression of thin-skinned deformation is a succession of narrowly spaced hills and valleys that form laterally continuous ranges on the foreland-facing side of the orogen. Each of the hills is defined by a reverse fault that roots in a basal décollement surface within the sedimentary cover and acts as a thrust ramp to stack the sedimentary pile. Thick-skinned deformation is morphologically characterised by spatially disparate, basement-cored mountain ranges. These mountain ranges are uplifted along reactivated high-angle crustal-scale discontinuities, such as suture zones between different tectonic terranes.
Proposed causes for the observed variation include variations in the dip angle of the Nazca plate, variations in sediment thickness, lithospheric thickening, volcanism, and compositional differences. The proposed mechanisms are predominantly based on geological observations or numerical thermomechanical modelling, but there has been no attempt to understand them from the perspective of data-integrative 3D modelling. The aim of this dissertation is therefore to understand how lithospheric structure controls the deformational behaviour. Integrating independent data into a consistent model of the lithosphere yields additional evidence that helps to understand the causes of the different deformational styles. Northern Argentina encompasses the transition from the thin-skinned fold-and-thrust belt in Bolivia to the thick-skinned Sierras Pampeanas province, which makes this area a well-suited location for such a study. The general workflow of this study first involves data-constrained structural and density modelling in order to obtain a model of the study area. This model was then used to predict the steady-state thermal field, which in turn served to assess the present-day rheological state of northern Argentina.
The structural configuration of the lithosphere in northern Argentina was determined by means of data-integrative, 3D density modelling verified by Bouguer gravity. The model delineates the first-order density contrasts in the lithosphere in the uppermost 200 km, and discriminates bodies for the sediments, the crystalline crust, the lithospheric mantle and the subducting Nazca plate. To obtain the intra-crustal density structure, an automated inversion approach was developed and applied to a starting structural model that assumed a homogeneously dense crust. The resulting final structural model indicates that the crustal structure can be represented by an upper crust with a density of 2800 kg/m³, and a lower crust of 3100 kg/m³. The Transbrazilian Lineament, which separates the Pampia terrane from the Río de la Plata craton, is expressed as a zone of low average crustal densities.
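To give a sense of scale for such crustal density contrasts, the classic Bouguer slab approximation estimates the gravity effect of a laterally extensive layer. The sketch below reuses the modelled upper- and lower-crustal densities; the 1 km layer thickness is an arbitrary illustration, not a value from the model:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def bouguer_slab(delta_rho, thickness):
    """Gravity effect (m/s^2) of an infinite horizontal slab with density
    contrast delta_rho (kg/m^3) and thickness (m): dg = 2*pi*G*delta_rho*h."""
    return 2 * math.pi * G * delta_rho * thickness

# 300 kg/m^3 contrast (3100 vs. 2800 kg/m^3) over an illustrative 1 km thick
# layer, converted to mGal (1 mGal = 1e-5 m/s^2); roughly 12.6 mGal
dg_mgal = bouguer_slab(3100 - 2800, 1000) / 1e-5
```

Effects of this magnitude are well above typical Bouguer anomaly uncertainties, which is why gravity data can discriminate between competing intra-crustal density structures.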
In a side study, we demonstrate that the gravity inversion method developed to obtain intra-crustal density structures is also applicable to density variations in the uppermost lithospheric mantle. Densities at such sub-crustal depths are difficult to constrain from seismic tomographic models due to smearing of crustal velocities. With the application to the uppermost lithospheric mantle in the North Atlantic, we demonstrate in Tan et al. (2018) that lateral density trends of at least 125 km width are robustly recovered by the inversion method, thereby providing an important tool for the delineation of subcrustal density trends.
Due to the genetic link between subduction, orogenesis and retroarc foreland basins, the question arises whether the steady-state assumption is valid in such a dynamic setting. To answer this question, I analysed (i) the impact of subduction on the conductive thermal field of the overlying continental plate and (ii) the differences between the transient and steady-state thermal fields of a coupled geodynamic model. Both studies indicate that the assumption of a thermal steady state is applicable in most parts of the study area. Within the orogenic wedge, where the assumption cannot be applied, I estimated the transient thermal field based on the results of the conducted analyses.
Accordingly, the structural model obtained in the first step could be used to derive a 3D conductive steady-state thermal field. The rheological assessment based on this thermal field indicates that the lithosphere of the thin-skinned Subandean ranges is characterised by a relatively strong crust and a weak mantle. By contrast, the adjacent foreland basin consists of a fully coupled, very strong lithosphere. Thus, shortening in northern Argentina can only be accommodated within the weak lithosphere of the orogen and the Subandean ranges. The analysis suggests that the décollements of the fold-and-thrust belt are the shallow continuation of shear zones that reside in the ductile sections of the orogenic crust. Furthermore, the localisation of the faults that transfer strain between the deeper ductile crust and the shallower décollement is strongly influenced by crustal weak zones such as foliation. In contrast to the northern foreland, the lithosphere of the thick-skinned Sierras Pampeanas is fully coupled and characterised by a strong crust and mantle. The high overall strength prevents the generation of crustal-scale faults by tectonic stresses. Even inherited crustal-scale discontinuities, such as sutures, cannot reduce the strength of the lithosphere sufficiently to be reactivated. Therefore, magmatism, which has been identified as a precursor of basement uplift in the Sierras Pampeanas, is the key factor leading to the broken foreland of this province. Through thermal weakening, and potentially lubrication of the inherited discontinuities, the lithosphere is locally weakened such that tectonic stresses can uplift the basement blocks. This hypothesis explains both the spatially disparate character of the broken foreland and the observed temporal delay between volcanism and basement block uplift.
This dissertation provides for the first time a data-driven 3D model that is consistent with geophysical data and geological observations, and that is able to causally link the thermo-rheological structure of the lithosphere to the observed variation of surface deformation styles in the retroarc foreland of northern Argentina.
The Government will create a motivated, merit-based, performance-driven, and professional civil service that is resistant to temptations of corruption and which provides efficient, effective and transparent public services that do not force customers to pay bribes.
— (GoIRA, 2006, p. 106)
We were in a black hole! We had an empty glass and had nothing from our side to fill it with! Thus, we accepted anything anybody offered; that is how our glass was filled; that is how we reformed our civil service.
— (Former Advisor to IARCSC, personal communication, August 2015)
How and under what conditions were the post-Taleban Civil Service Reforms of Afghanistan initiated? What were the main components of the reforms? What were their objectives and to what extent were they achieved? Who were the leading domestic and foreign actors involved in the process? Finally, what specific factors influenced the success and failure of Afghanistan’s Civil Service Reforms since 2002? Guided by such fundamental questions, this research studies the wicked process of reforming the Afghan civil service in an environment where a variety of contextual, programmatic, and external factors affected the design and implementation of reforms that were entirely funded and technically assisted by the international community.
Focusing on the core components of the reforms—recruitment, remuneration, and appraisal of civil servants—this qualitative study provides a detailed picture of the pre-reform civil service and its major human resources developments. Following a discussion of the content and purposes of the main reform programs, it then analyzes the extent of changes in policies and practices by examining the outputs and effects of these reforms.
Moreover, the study identifies the specific factors that led the reforms toward a situation in which most of the intended objectives remain unachieved. In doing so, it explores and explains how an overwhelming influence of international actors with conflicting interests, large-scale corruption, political interference, networks of patronage, institutionalized nepotism, culturally accepted cronyism and widespread ethnic favoritism created a very complex environment and prevented the reforms from transforming Afghanistan’s patrimonial civil service into a professional civil service driven by performance and merit.
Partial melting is a first-order process for the chemical differentiation of the crust (Vielzeuf et al., 1990). The redistribution of chemical elements during melt generation crucially influences the composition of the lower and upper crust and provides a mechanism to concentrate and transport chemical elements that may also be of economic interest. Understanding the diverse processes and their controlling factors is therefore not only of scientific interest but also of high economic importance for covering the demand for rare metals.
The redistribution of major and trace elements during partial melting represents a central step in understanding how granite-bound mineralization develops (Hedenquist and Lowenstern, 1994). Partial melt generation and the mobilization of ore elements (e.g. Sn, W, Nb, Ta) into the melt depend on the composition of the sedimentary source and the melting conditions. Distinct source rocks have different compositions reflecting their deposition and alteration histories. This specific chemical “memory” results in different mineral assemblages and melting reactions for different protolith compositions during prograde metamorphism (Brown and Fyfe, 1970; Thompson, 1982; Vielzeuf and Holloway, 1988). These factors not only exert an important influence on the distribution of chemical elements during melt generation, they also influence the volume of melt that is produced, the extraction of the melt from its source, and its ascent through the crust (Le Breton and Thompson, 1988). On a larger scale, protolith distribution and chemical alteration (weathering), prograde metamorphism with partial melting, melt extraction, and granite emplacement ultimately depend on (plate-)tectonic control (Romer and Kroner, 2016). Comprehension of the individual stages and their interaction is crucial for understanding how granite-related mineralization forms, thereby allowing estimation of the mineralization potential of certain areas. Partial melting also influences the isotope systematics of melt and restite. Radiogenic and stable isotopes of magmatic rocks are commonly used to trace back the source of intrusions or to quantify mixing of magmas from different sources with distinct isotopic signatures (DePaolo and Wasserburg, 1979; Lesher, 1990; Chappell, 1996). These applications are based on the fundamental requirement that the isotopic signature of the melt reflects that of the bulk source from which it is derived.
Different minerals in a protolith may have isotopic compositions of radiogenic isotopes that deviate from their whole rock signature (Ayres and Harris, 1997; Knesel and Davidson, 2002). In particular, old minerals with a distinct parent-to-daughter (P/D) ratio are expected to have a specific radiogenic isotope signature. As the partial melting reaction only involves selective phases in a protolith, the isotopic signature of the melt reflects that of the minerals involved in the melting reaction and, therefore, should be different from the bulk source signature. Similar considerations hold true for stable isotopes.
The Postmasburg Manganese Field (PMF), Northern Cape Province, South Africa, once represented one of the largest sources of manganese ore worldwide. Two belts of manganese ore deposits have been distinguished in the PMF, namely the Western Belt of ferruginous manganese ores and the Eastern Belt of siliceous manganese ores. Prevailing models of ore formation in these two belts invoke karstification of manganese-rich dolomites and residual accumulation of manganese wad which later underwent diagenetic and low-grade metamorphic processes. For the most part, the role of hydrothermal processes and metasomatic alteration in ore formation has not been adequately discussed. Here we report an abundance of common and some rare Al-, Na-, K- and Ba-bearing minerals, particularly aegirine, albite, microcline, banalsite, sérandite-pectolite, paragonite and natrolite, in Mn ores of the PMF, indicative of hydrothermal influence. Enrichments in Na, K and/or Ba in the ores are generally on a percentage level for most samples analysed through bulk-rock techniques. The presence of As-rich tokyoite also suggests the presence of As and V in the hydrothermal fluid. The fluid was likely oxidized and alkaline in nature, akin to a mature basinal brine. Various replacement textures, particularly of Na- and K-rich minerals by Ba-bearing phases, suggest sequential deposition of gangue as well as ore minerals from the hydrothermal fluid, with Ba phases being deposited at a later stage. The stratigraphic variability of the studied ores and their deviation from the strict classification of ferruginous and siliceous ores in the literature suggest that a re-evaluation of genetic models is warranted. New Ar-Ar ages for K-feldspars suggest a late Neoproterozoic timing for hydrothermal activity.
This corroborates previous geochronological evidence for regional hydrothermal activity that affected Mn ores at the PMF but also, possibly, the high-grade Mn ores of the Kalahari Manganese Field to the north. A revised, all-encompassing model for the development of the manganese deposits of the PMF is then proposed, whereby the source of metals is attributed to underlying carbonate rocks beyond the Reivilo Formation of the Campbellrand Subgroup. The main process by which metals are primarily accumulated is attributed to karstification of the dolomitic substrate. The overlying Asbestos Hills Subgroup banded iron formation (BIF) is suggested as a potential source of alkali metals, which also provides a mechanism for leaching of these BIFs to form high-grade residual iron ore deposits.
The role of case and animacy in bi- and monolingual children’s sentence interpretation in German
(2019)
German-speaking children appear to have a strong N1-bias when interpreting non-canonical OVS sentences. During sentence interpretation, unambiguous accusative and dative case markers in particular (den ‘the-ACC’ and dem ‘the-DAT’) weaken the N1-bias and help build up sentence interpretation strategies on the basis of morphological cues. Still, the N1-bias prevails beyond the age of five (Brandt et al. 2016, Cristante 2016, Dittmar et al. 2008) and remains until puberty (Lidzba et al. 2013). This paper investigates whether prototypical case-animacy coalitions (den-ACC + inanimate noun and dem-DAT + animate noun) strengthen a morphologically based sentence interpretation strategy in German. The experiment discussed in this paper tests for effects of such case-animacy coalitions in mono- and bilingual primary school children. 20 German monolinguals, 12 Dutch-German and 17 Russian-German bilinguals with a mean age of 9;6 were tested in a forced-choice off-line experiment. Results indicate that case-animacy coalitions weaken the N1-bias in OVS conditions in German monolinguals and Dutch-German bilinguals, while no effects were found for Russian-German bilinguals. Together with an analysis of individual differences, these group-specific effects are discussed in terms of a developmental approach that represents a gradual cue strength adjustment process in mono- and bilingual children.
The Role of Bargaining Power
(2019)
Neoclassical theory omits the role of bargaining power in the determination of wages. As a result, the importance of changes in the bargaining position for the development of income shares in the last decades is underestimated. This paper presents a theoretical argument why collective bargaining power is a main determinant of workers’ share of income and how its decline contributed to the severe changes in the distribution of income since the 1980s. In order to confirm this hypothesis, a panel data regression analysis is performed that suggests that unions significantly influence the distribution of income in developed countries.
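A fixed-effects (within) panel regression of the kind described can be sketched in a few lines. The data below are synthetic and the variable names (union density, labor share) merely illustrate the setup; they are not the paper's actual dataset or estimator:

```python
import random

def within_estimator(panel):
    """One-regressor fixed-effects (within) estimator sketch.

    panel: dict mapping country -> list of (union_density, labor_share)
    observations. Country means are subtracted to absorb country fixed
    effects; the slope is cov(x, y) / var(x) on the demeaned data.
    """
    xs, ys = [], []
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            xs.append(x - mx)
            ys.append(y - my)
    var = sum(x * x for x in xs)
    cov = sum(x * y for x, y in zip(xs, ys))
    return cov / var

# Synthetic illustration: labor share rises with union density (true slope 0.5)
rng = random.Random(0)
panel = {}
for c in range(5):
    alpha = rng.uniform(40, 60)            # country fixed effect
    panel[c] = []
    for t in range(20):
        x = rng.uniform(10, 80)            # union density (%)
        y = alpha + 0.5 * x + rng.gauss(0, 1.0)
        panel[c].append((x, y))
beta = within_estimator(panel)             # estimate close to 0.5
```

Demeaning removes any time-invariant country characteristics, so the estimated slope reflects within-country covariation of union density and the labor share, the identifying variation in such panel designs.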
The public encounter
(2019)
This thesis puts the citizen-state interaction at its center. Building on a comprehensive model incorporating various perspectives on this interaction, I derive selected research gaps, which the three articles comprising this thesis address. A focal role is played by citizens’ administrative literacy, the competences and knowledge necessary to successfully interact with public organizations. The first article elaborates on the different dimensions of administrative literacy and develops a survey instrument to assess them. The second study shows that public employees change their behavior according to the competences that citizens display during public encounters. They give preferential treatment to citizens who are well prepared and able to persuade them of their application’s potential; such citizens signal a higher potential to meet bureaucratic success criteria, which leads to the employees’ cream-skimming behavior. The third article examines the dynamics of employees’ communication strategies when recovering from a service failure. The study finds that different explanation strategies have different effects on the client’s frustration. While accepting responsibility and explaining the reasons for a failure alleviates frustration and anger, refusing responsibility has no effect or even reinforces the client’s frustration. The results emphasize the different dynamics that characterize the nature of citizen-state interactions and how they shape their short- and long-term outcomes.
The politics of zoom
(2019)
Following the mandate in the Paris Agreement for signatories to provide “climate services” to their constituents, “downscaled” climate visualizations are proliferating. But the process of downscaling climate visualizations does not neutralize the political problems with their synoptic global sources—namely, their failure to empower communities to take action and their replication of neoliberal paradigms of globalization. In this study we examine these problems as they apply to interactive climate-visualization platforms, which allow their users to localize global climate information to support local political action. By scrutinizing the political implications of the “zoom” tool from the perspective of media studies and rhetoric, we add the perspectives of our fields to those of cultural cartography on the issue of scaling. Namely, we break down the cinematic trope of “zooming” to reveal how it imports the political problems of synopticism to the level of individual communities. As a potential antidote to the politics of zoom, we recommend a downscaling strategy of connectivity, which associates rather than reduces situated views of climate to global ones.
When dealing with issues that are of high societal relevance, Earth sciences still face a lack of acceptance, which is partly rooted in insufficient communication strategies on the individual and local community level. To increase the efficiency of communication routines, science has to transform its outreach concepts to become more aware of individual needs and demands. The “encoding/decoding” concept as well as critical intercultural communication studies can offer pivotal approaches for this transformation.
The individual’s mental lexicon comprises all known words as well as related information on semantics, orthography and phonology. Moreover, entries are connected due to similarities in these language domains, building a large network structure. Access to lexical information is crucial for the processing of words and sentences. Thus, a lack of information inhibits retrieval and can cause language processing difficulties. Hence, the composition of the mental lexicon is essential for language skills, and its assessment is a central topic of linguistic and educational research.
In early childhood, measurement of the mental lexicon is uncomplicated, for example through parental questionnaires or the analysis of speech samples. However, with growing content the measurement becomes more challenging: with more and more words in the mental lexicon, the inclusion of all possibly known words into a test or questionnaire becomes impossible. That is why there is a lack of methods to assess the mental lexicon of school children and adults. For the same reason, there are only few findings on the course of lexical development during the school years as well as its specific effect on other language skills. This dissertation is supposed to close this gap by pursuing two major goals: first, I wanted to develop a method to assess lexical features, namely lexicon size and lexical structure, for children of different age groups. Second, I aimed to describe the results of this method in terms of the development of lexicon size and structure. The findings were intended to help understand mechanisms of lexical acquisition and inform theories of vocabulary growth.
The approach is based on the dictionary method where a sample of words out of a dictionary is tested and results are projected on the whole dictionary to determine an indi-vidual’s lexicon size. In the present study, the childLex corpus, a written language corpus for children in German, served as the basis for lexicon size estimation. The corpus is assumed to comprise all words children attending primary school could know. Testing a sample of words out of the corpus enables projection of the results on the whole corpus. For this purpose, a vocabulary test based on the corpus was developed. Afterwards, test performance of virtual participants was simulated by drawing different lexicon sizes from the corpus and comparing whether the test items were included in the lexicon or not. This allowed determination of the relation between test performance and total lexicon size and thus could be transferred to a sample of real participants. Besides lexicon size, lexical content could be approximated with this approach and analyzed in terms of lexical structure.
To pursue the presented aims and establish the sampling method, I conducted three consecutive studies. Study 1 includes the development of a vocabulary test based on the childLex corpus. The testing was based on the yes/no format and included three versions for different age groups. The validation grounded on the Rasch Model shows that it is a valid instrument to measure vocabulary for primary school children in German. In Study 2, I estab-lished the method to estimate lexicon sizes and present results on lexical development dur-ing primary school. Plausible results demonstrate that lexical growth follows a quadratic function starting with about 6,000 words at the beginning of school and about 73,000 words on average for young adults. Moreover, the study revealed large interindividual differences. Study 3 focused on the analysis of network structures and their development in the mental lexicon due to orthographic similarities. It demonstrates that networks possess small-word characteristics and decrease in interconnectivity with age.
Taken together, this dissertation provides an innovative approach for assessing and describing the development of the mental lexicon from primary school onwards. The studies provide previously missing results on lexical acquisition in different age groups. They impressively show the importance of this period and reveal extensive interindividual differences in lexical development. A central aim of future research must be to address the causes and prevention of these differences. In addition, the application of the method for further research (e.g., adaptation for other target groups) and for teaching purposes (e.g., adaptation of texts for different target groups) appears promising.
Rabbi Jacob ben Isaac of Yanova (d. 1623) is best known as the author of the Ze’enah U-Re’enah; the Melits Yosher (“Intercessor before God”) is one of his lesser known works. It was first published in Lublin in 1622 and reprinted once in Amsterdam in 1688. Like the Ze’enah U-Re’enah, it was a Torah commentary, but one composed for men who had some yeshivah education yet could not continue their studies. The commentary on the Song of Songs by Isaac Sulkes is another Yiddish work that addresses the same audience as the Melits Yosher. The purpose of this article is to bring to scholarly attention an audience that has not been noticed or studied in previous scholarship on early modern Yiddish literature.
The thesis comprises three experimental studies, which were carried out to unravel the short- as well as the long-term mechanical properties of shale rocks. Short-term mechanical properties such as compressive strength and Young’s modulus were taken from recorded stress-strain curves of constant strain rate tests. Long-term mechanical properties are represented by the time-dependent creep behavior of shales, obtained from constant stress experiments with test durations ranging from a couple of minutes up to two weeks. A profound knowledge of the mechanical behavior of shales is crucial to reliably estimate the potential of a shale reservoir for economical and sustainable extraction of hydrocarbons (HC). In addition, the healing of clay-rich cap rocks, involving creep and compaction, is important for the underground storage of carbon dioxide and nuclear waste.
Chapter 1 introduces general aspects of the research topic at hand and highlights the motivation for conducting this study. At present, the shift from energy recovered from conventional resources (e.g., coal) towards energy provided by renewable resources such as wind or water is a big challenge. Gas recovered from unconventional reservoirs (shale plays) is considered a potential bridge technology.
In Chapter 2, short-term mechanical properties of two European mature shale rocks are presented, which were determined from constant strain rate experiments performed at ambient and in situ deformation conditions (confining pressure, pc ≤ 100 MPa, temperature, T ≤ 125 °C, representing pc-T conditions at < 4 km depth) using a Paterson-type gas deformation apparatus. The investigated shales were mainly from drill core material of Posidonia (Germany) shale and weathered material of Bowland (United Kingdom) shale. The results are compared with mechanical properties of North American shales. Triaxial compression tests performed perpendicular to bedding revealed semibrittle deformation behavior of Posidonia shale with pronounced inelastic deformation. This is in contrast to Bowland shale samples, which deformed in a brittle manner and displayed predominantly elastic deformation. The static Young’s modulus, E, and triaxial compressive strength, σTCS, determined from recorded stress-strain curves strongly depended on the applied confining pressure and sample composition, whereas the influence of temperature and strain rate on E and σTCS was minor. Shales with larger amounts of weak minerals (clay, mica, total organic carbon) exhibited lower E and σTCS. This may be related to a shift from deformation supported by a load-bearing framework of hard phases (e.g., quartz) towards deformation of interconnected weak minerals, particularly for higher fractions of about 25 – 30 vol% weak phases. Comparing mechanical properties determined at reservoir conditions with mechanical data derived from effective medium theories revealed that E and σTCS of Posidonia and Bowland shale are close to the lower (Reuss) bound. Brittleness, B, is often quoted as a measure indicating the response of a shale formation to stimulation and economic production. The brittleness of Posidonia and Bowland shale, estimated from E, is in good agreement with the experimental results.
This correlation may be useful to predict B from sonic logs, from which the (dynamic) Young’s modulus can be retrieved.
Chapter 3 presents a study of the long-term creep properties of an immature Posidonia shale. Constant stress experiments (σ = const.) were performed at elevated confining pressures (pc = 50 – 200 MPa) and temperatures (T = 50 – 200 °C) to simulate reservoir pc-T conditions. The Posidonia shale samples were acquired from a quarry in South Germany. At stresses below ≈ 84 % of the compressive strength of Posidonia shale, at high temperature and low confining pressure, samples showed pronounced transient (primary) creep with high deformation rates in the semibrittle regime. Sample deformation was mainly accommodated by creep of weak sample constituents and pore space reduction. An empirical power law relation between strain and time, which also accounts for the influence of pc, T and σ on creep strain, was formulated to describe the primary creep phase. Extrapolation of the results to a creep period of several years, which is the typical time interval for a large production decline, suggests that fracture closure is unlikely at low stresses. At high stresses, as expected for example at the contact between fracture surfaces and proppants added during stimulation measures, subcritical crack growth may lead to secondary and tertiary creep. An empirical power law is suggested to describe secondary creep of shale rocks as a function of stress, pressure and temperature. The predicted closure rates agree with typical production decline curves recorded during the extraction of hydrocarbons. At the investigated conditions, the creep behavior of Posidonia shale was found to correlate with brittleness calculated from sample composition.
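The kind of empirical strain-time power law described above can be sketched numerically. The functional form and every coefficient below (prefactor, exponents, activation term) are illustrative assumptions for a generic power-law creep model, not the fitted values from this study:

```python
import math

def primary_creep_strain(t, sigma, pc, T,
                         A=1e-4, n=0.3, m=1.5, p=-0.5, Q=20e3, R=8.314):
    """Illustrative primary creep law:
    strain = A * sigma^m * pc^p * exp(-Q / (R*T)) * t^n
    with t in seconds, sigma and pc in MPa, T in kelvin.
    Creep strain grows with time, stress and temperature and
    decreases with confining pressure (p < 0)."""
    return A * sigma**m * pc**p * math.exp(-Q / (R * T)) * t**n

# Strain accumulated over a two-week lab test vs. several years in a reservoir,
# at fixed illustrative conditions (sigma = 100 MPa, pc = 50 MPa, T = 350 K).
two_weeks = 14 * 24 * 3600
five_years = 5 * 365 * 24 * 3600
print(primary_creep_strain(two_weeks, sigma=100, pc=50, T=350))
print(primary_creep_strain(five_years, sigma=100, pc=50, T=350))
```

Because the time exponent n is well below 1, extrapolated strain grows only slowly over multi-year periods, which is the qualitative basis for the conclusion that fracture closure is unlikely at low stresses.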
In Chapter 4, the creep properties of mature Posidonia and Bowland shales are presented. The observed long-term creep behavior is compared to the short-term behavior determined in Chapter 2. Creep experiments were performed at simulated reservoir conditions of pc = 50 – 115 MPa and T = 75 – 150 °C. Similar to the mechanical response of the immature Posidonia shale samples investigated in Chapter 3, creep strain rates of mature Bowland and Posidonia shales increased with increasing stress and temperature and with decreasing confining pressure. Depending on the applied deformation conditions, samples displayed either only a primary (decelerating) creep phase or, in addition, a secondary (quasi-steady-state) and subsequently a tertiary (accelerating) creep phase before failure. At the same deformation conditions, the creep strain of Posidonia shale, which is rich in weak constituents, is far higher than that of quartz-rich Bowland shale. Typically, primary creep strain is again mostly accommodated by deformation of weak minerals and local pore space reduction. At the onset of tertiary creep, most of the deformation was accommodated by microcrack growth. A power law was used to characterize the primary creep phase of Posidonia and Bowland shale. The primary creep strain of shale rocks is inversely correlated with triaxial compressive strength and brittleness, as described in Chapter 2.
Chapter 5 provides a synthesis of the experimental findings and summarizes the major results of the studies presented in Chapters 2 – 4 and potential applications in the Exploration & Production industry.
Chapter 6 gives a brief outlook on potential future experimental research that would help to further improve our understanding of the processes leading to fracture closure involving proppant embedment in unconventional shale gas reservoirs. Such insights may help improve stimulation techniques aimed at maintaining the economical extraction of hydrocarbons over several years.
In recent years, awareness has been growing that historical land cover changes and associated land use legacies may be important drivers of present-day species richness and biodiversity, due to time-delayed extinctions or colonizations in response to historical environmental changes. Historically altered habitat patches may therefore exhibit an extinction debt or colonization credit and can be expected to lose or gain species in the future. However, extinction debts and colonization credits are difficult to detect, and their actual magnitudes or payments have rarely been quantified, because species richness patterns and dynamics are also shaped by recent environmental conditions and recent environmental changes.
In this thesis we aimed to determine patterns of herb-layer species richness and recent species richness dynamics of forest herb layer plants and link those patterns and dynamics to historical land cover changes and associated land use legacies. The study was conducted in the Prignitz, NE-Germany, where the forest distribution remained stable for the last ca. 100 years but where a) the deciduous forest area had declined by more than 90 per cent (leaving only remnants of "ancient forests"), b) small new forests had been established on former agricultural land ("post-agricultural forests"). Here, we analyzed the relative importance of land use history and associated historical land cover changes for herb layer species richness compared to recent environmental factors and determined magnitudes of extinction debt and colonization credit and their payment in ancient and post-agricultural forests, respectively.
We showed that present-day species richness patterns were still shaped by historical land cover changes reaching back more than a century. Although recent environmental conditions were largely comparable, we found significantly more forest specialists, species with short-distance dispersal capabilities, and clonal species in ancient forests than in post-agricultural forests. These species richness differences were largely attributable to a colonization credit in post-agricultural forests that ranged up to 9 species (average 4.7), while the extinction debt in ancient forests had almost completely been paid. Environmental legacies of historical agricultural land use played a minor role in species richness differences; instead, patch connectivity was most important. Species richness in ancient forests still depended on historical connectivity, indicating a last glimpse of an extinction debt, and the colonization credit was highest in isolated post-agricultural forests. In post-agricultural forests that were better connected or directly adjacent to ancient forest patches, the colonization credit was much smaller, and we were able to verify a gradual payment of the colonization credit from 2.7 species to 1.5 species over the last six decades.
Supermassive black holes reside in the hearts of almost all massive galaxies. Their evolutionary path seems to be strongly linked to the evolution of their host galaxies, as implied by several empirical relations between the black hole mass (M_BH) and different host galaxy properties. The physical driver of this co-evolution is, however, still not understood. More mass measurements over homogeneous samples and a detailed understanding of systematic uncertainties are required to fathom the origin of the scaling relations.
In this thesis, I present the mass estimations of supermassive black holes in the nuclei of one late-type and thirteen early-type galaxies. Our SMASHING sample extends from the intermediate to the massive galaxy mass regime and was selected to fill in gaps in the number of galaxies along the scaling relations. All galaxies were observed at high spatial resolution, making use of the adaptive-optics mode of integral field unit (IFU) instruments on state-of-the-art telescopes (SINFONI, NIFS, MUSE). I extracted the stellar kinematics from these observations and constructed dynamical Jeans and Schwarzschild models to robustly estimate the mass of the central black holes. My new mass estimates increase the number of early-type galaxies with measured black hole masses by 15%. The seven measured galaxies with nuclear light deficits (’cores’) augment the sample of cored galaxies with measured black holes by 40%. Next to determining massive black hole masses, evaluating the accuracy of black hole masses is crucial for understanding the intrinsic scatter of the black hole-host galaxy scaling relations. I tested various sources of systematic uncertainty on my derived mass estimates.
The M_BH estimate of the single late-type galaxy of the sample yielded an upper limit, which I could constrain very robustly. I tested the effects of dust, mass-to-light ratio (M/L) variation, and dark matter on my measured M_BH. Based on these tests, the typically assumed constant M/L can be an adequate assumption to account for the small amounts of dark matter in the center of that galaxy. I also tested the effect of a spatially varying M/L on the M_BH measurement in a second galaxy. By considering stellar M/L variations in the dynamical modeling, the measured M_BH decreased by 30%. In the future, this test should be performed on additional galaxies to learn how the assumption of a constant M/L biases the estimated black hole masses.
Based on our upper limit mass measurement, I confirm previous suggestions that resolving the predicted BH sphere of influence is not a strict condition to measure black hole masses. Instead, it is only a rough guide for the detection of the black hole if high-quality, high signal-to-noise IFU data are used for the measurement. About half of our sample consists of massive early-type galaxies which show nuclear surface brightness cores and signs of triaxiality. While these types of galaxies are typically modeled with axisymmetric methods, the effects on M_BH are not yet well studied. The massive galaxies of our sample are well suited to test the effect of different stellar dynamical models on the measured black hole mass in evidently triaxial galaxies. I have compared spherical Jeans and axisymmetric Schwarzschild models and will add triaxial Schwarzschild models to this comparison in the future. The constructed Jeans and Schwarzschild models mostly disagree with each other and cannot reproduce many of the triaxial features of the galaxies (e.g., nuclear sub-components, prolate rotation). The consequences of the axisymmetric-versus-triaxial assumption for the accuracy of M_BH and its impact on the black hole-host galaxy relation need to be carefully examined in the future.
In the sample of galaxies with published M_BH, we find measurements based on different dynamical tracers, requiring different observations, assumptions, and methods. Crucially, different tracers do not always give consistent results. I have used two independent tracers (cold molecular gas and stars) to estimate M_BH in a regular galaxy of our sample. While the two estimates are consistent within their errors, the stellar-based measurement is twice as high as the gas-based one. Similar trends have also been found in the literature. Therefore, a rigorous test of the systematics associated with the different modeling methods is required in the future. I caution that the effects of different tracers (and methods) must be taken into account when discussing the scaling relations.
I conclude this thesis by comparing my galaxy sample with the compilation of galaxies with measured black holes from the literature, also adding six SMASHING galaxies that were published outside of this thesis. None of the SMASHING galaxies deviates significantly from the literature measurements. Adding them to the published early-type galaxies shifts the M_BH-effective velocity dispersion relation towards a shallower slope, driven mainly by the massive galaxies of our sample. More unbiased and homogeneous measurements are needed in the future to determine the shape of the relation and understand its physical origin.
The instrumental -er suffix
(2019)
In recent literature, sodium bicarbonate (NaHCO3) has been proposed as a performance-enhancing aid that reduces acidosis during exercise. The aim of the current review is to investigate whether the duration of exercise is an essential factor for the effect of NaHCO3. To collect the latest studies from the PubMed electronic database, publication dates were restricted to December 2006 through December 2016; the search was updated in July 2018. The studies were divided into exercise durations of > 4 or ≤ 4 minutes for easier comparability of their effects across different exercises. Only randomized controlled trials were included in this review. Of the 775 studies, 35 met the inclusion criteria. Study design, subjects, effects, and outcome criteria were inconsistent across the studies. Seventeen of these studies reported performance-enhancing effects after supplementing NaHCO3. Eleven of twenty studies with an exercise duration of ≤ 4 minutes showed positive results and four showed diverse results after supplementing NaHCO3. On the other hand, six of fifteen studies with an exercise duration of > 4 minutes showed performance-enhancing effects and two showed diverse results. Consequently, the duration of exercise might influence whether supplementing NaHCO3 induces a performance-enhancing effect, but to what extent remains unclear due to the inconsistencies in the study results.
The increasing age of the worldwide population is a major contributor to the rising prevalence of major pathologies and diseases such as type 2 diabetes, which is mediated by massive insulin resistance and a decline in functional beta-cell mass and is highly associated with an elevated incidence of obesity. Thus, the impact of aging, under physiological conditions and in combination with diet-induced metabolic stress, on the characteristics of pancreatic islets and beta-cells, with a focus on functionality and structural integrity, was investigated in the present dissertation.
Obesity, followed by systemic inflammation and peripheral insulin resistance, develops over time, primarily induced by malnutrition due to chronic and excessive intake of high-caloric diets containing large amounts of carbohydrates and fats, thereby initiating metabolic stress conditions. Elevated insulin demands initiate an adaptive response through beta-cell mass expansion due to increased proliferation, but prolonged stress conditions drive beta-cell failure and loss. Aging has also been shown to affect beta-cell functionality and morphology, in particular through proliferative limitations. However, most studies in rodents were performed under beta-cell-challenging conditions, such as high-fat diet interventions. Thus, in the first part of the thesis (publication I), age-related alterations of pancreatic islets and beta-cells were characterized using plasma samples and pancreatic tissue sections of standard diet-fed C57BL/6J wild-type mice in several age groups (2.5, 5, 10, 15 and 21 months).
Aging was accompanied by a decreased but sustained islet proliferative potential as well as an induction of cellular senescence. This was associated with a progressive islet expansion to maintain normoglycemia throughout the lifespan. Moreover, beta-cell function and mass were not impaired, although advanced glycation end products (AGEs) formed and accumulated, located predominantly in the islet vasculature and accompanied by an induction of oxidative and nitrosative (redox) stress.
Nutritional behavior throughout the human lifespan, however, is not restricted to a balanced diet. This emphasizes the importance of investigating malnutrition through the intake of high-energy diets, which induces metabolic stress conditions that, synergistically with aging, might amplify the detrimental effects on the endocrine pancreas. Using diabetes-prone NZO mice aged 7 weeks, fed a dietary regimen of carbohydrate restriction for different periods (young mice: 11 weeks; middle-aged mice: 32 weeks) followed by a carbohydrate intervention for 3 weeks, offered the opportunity to distinguish the effects of diet-induced metabolic stress at different ages on the functionality and integrity of pancreatic islets and their beta-cells (publication II, manuscript).
Interestingly, while young NZO mice exhibited massive hyperglycemia in response to diet-induced metabolic stress, accompanied by beta-cell dysfunction and apoptosis, middle-aged animals revealed only moderate hyperglycemia owing to the maintenance of functional beta-cells. The loss of functional beta-cell mass in islets of young mice was associated with reduced expression of the PDX1 transcription factor, increased endocrine AGE formation and related redox stress, as well as TXNIP-dependent induction of the mitochondrial death pathway. Although the amounts of secreted insulin and the proliferative potential were comparable in both age groups, islets of middle-aged mice exhibited sustained PDX1 expression, almost regular insulin secretory function, increased capacity for cell cycle progression, and a maintained redox potential.
The results of the present thesis indicate a loss of functional beta-cell mass in young diabetes-prone NZO mice, occurring through redox imbalance and induction of apoptotic signaling pathways. In contrast, aging under physiological conditions in C57BL/6J mice, and in combination with diet-induced metabolic stress in NZO mice, does not appear to have adverse effects on the functionality and structural integrity of pancreatic islets and beta-cells, being associated with adaptive responses to changing metabolic demands. However, considering the detrimental effects of aging, it has to be assumed that the compensatory potential of the mice might be exhausted at a later point in time, finally leading to a loss of functional beta-cell mass and the onset and progression of type 2 diabetes.
The polygenic, diabetes-prone NZO mouse is a suitable model for the investigation of human obesity-associated type 2 diabetes. However, mice at an advanced age showed an attenuated diabetic phenotype or did not respond to the dietary stimuli. This might be explained by the middle age of the mice, corresponding to a human age of about 38-40 years, at which the compensatory mechanisms of pancreatic islets and beta-cells towards metabolic stress conditions are presumably more active.
Most of the matter in the universe consists of hydrogen. The hydrogen in the intergalactic medium (IGM), the matter between the galaxies, underwent a change of its ionisation state at the epoch of reionisation, in the redshift range 6 < z < 10, or ~10^8 years after the Big Bang. At this time, the mostly neutral hydrogen in the IGM was ionised, but the source of the responsible hydrogen-ionising emission remains unclear. In this thesis I discuss the most likely candidates for the emission of this ionising radiation: a type of galaxy called Lyman alpha emitters (LAEs). As implied by their name, they emit Lyman alpha radiation, produced after a hydrogen atom has been ionised and recombines with a free electron. The ionising radiation itself (also called Lyman continuum emission), which is needed for this process inside the LAEs, could also be responsible for ionising the IGM around those galaxies at the epoch of reionisation, given that enough Lyman continuum escapes. Through this mechanism, Lyman alpha and Lyman continuum radiation are closely linked, and both are studied to better understand the properties of high-redshift galaxies and the reionisation state of the universe.
Before I can analyse their Lyman alpha emission lines and the escape of Lyman continuum emission from them, the first step is the detection and correct classification of LAEs in integral field spectroscopic data, specifically taken with the Multi-Unit Spectroscopic Explorer (MUSE). After detecting emission line objects in the MUSE data, the task of classifying them and determining their redshift is performed with the graphical user interface QtClassify, which I developed during the work on this thesis. It uses the strength of the combination of spectroscopic and photometric information that integral field spectroscopy offers to enable the user to quickly identify the nature of the detected emission lines. The reliable classification of LAEs and determination of their redshifts is a crucial first step towards an analysis of their properties.
Through radiative transfer processes, the properties of the neutral hydrogen clouds in and around LAEs are imprinted on the shape of the Lyman alpha line. Thus after identifying the LAEs in the MUSE data, I analyse the properties of the Lyman alpha emission line, such as the equivalent width (EW) distribution, the asymmetry and width of the line as well as the double peak fraction. I challenge the common method of displaying EW distributions as histograms without taking the limits of the survey into account and construct a more independent EW distribution function that better reflects the properties of the underlying population of galaxies. I illustrate this by comparing the fraction of high EW objects between the two surveys MUSE-Wide and MUSE-Deep, both consisting of MUSE pointings (each with the size of one square arcminute) of different depths. In the 60 MUSE-Wide fields of one hour exposure time I find a fraction of objects with extreme EWs above EW_0>240A of ~20%, while in the MUSE-Deep fields (9 fields with an exposure time of 10 hours and one with an exposure time of 31 hours) I find a fraction of only ~1%, which is due to the differences in the limiting line flux of the surveys. The highest EW I measure is EW_0 = 600.63 +- 110A, which hints at an unusual underlying stellar population, possibly with a very low metallicity.
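The survey-limit argument above can be illustrated with a small sketch: computing the high-EW fraction only over objects brighter than a survey's limiting line flux. The catalogue entries, thresholds, and flux limits below are invented for illustration:

```python
def high_ew_fraction(objects, ew_threshold, flux_limit):
    """Fraction of objects with rest-frame EW above a threshold,
    counting only objects detectable above the survey's flux limit."""
    detectable = [o for o in objects if o["line_flux"] >= flux_limit]
    if not detectable:
        return 0.0
    high = [o for o in detectable if o["ew0"] > ew_threshold]
    return len(high) / len(detectable)

# Toy catalogue: ew0 in Angstrom, line_flux in erg/s/cm^2 (invented values).
catalogue = [
    {"ew0": 300, "line_flux": 5e-17},
    {"ew0": 80,  "line_flux": 4e-17},
    {"ew0": 260, "line_flux": 9e-18},
    {"ew0": 50,  "line_flux": 6e-18},
    {"ew0": 40,  "line_flux": 3e-18},
]
print(high_ew_fraction(catalogue, ew_threshold=240, flux_limit=2e-17))  # shallow survey
print(high_ew_fraction(catalogue, ew_threshold=240, flux_limit=1e-18))  # deep survey
```

In the toy catalogue the shallow flux cut excludes several faint objects and so yields a different high-EW fraction than the deep cut, mirroring in direction (not in magnitude) the MUSE-Wide versus MUSE-Deep comparison and why limiting flux must enter the EW distribution function.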
With the knowledge of the redshifts and positions of the LAEs detected in the MUSE-Wide survey, I also look for Lyman continuum emission coming from these galaxies and analyse the connection between Lyman continuum emission and Lyman alpha emission. I use ancillary Hubble Space Telescope (HST) broadband photometry in the bands that contain the Lyman continuum and find six Lyman continuum leaker candidates. To test whether the Lyman continuum emission of LAEs is coming only from those individual objects or the whole population, I select LAEs that are most promising for the detection of Lyman continuum emission, based on their rest-frame UV continuum and Lyman alpha line shape properties. After this selection, I stack the broadband data of the resulting sample and detect a signal in Lyman continuum with a significance of S/N = 5.5, pointing towards a Lyman continuum escape fraction of ~80%. If the signal is reliable, it strongly favours LAEs as the providers of the hydrogen ionising emission at the epoch of reionisation and beyond.
West of Potsdam’s city center lies the Golm Campus, the largest campus of the University of Potsdam. Its different buildings tell of the numerous institutions that were established at this site over the years: From the mid-1930s, the Walther Wever Barracks were located here. From 1943, it housed the Air Intelligence Division of the German Airforce Supreme Commander. In 1951, a training institution of the Ministry of State Security moved in, which existed until 1989 under different names. In July 1991, the newly founded University of Potsdam took over the premises, which are now part of the Potsdam-Golm Science Park.
The book takes you on a journey through the history of this site and invites you to take a walk across today’s campus. It includes over 110 photos and a detailed map.
Almost half of the political life of the Turkish Republic has been experienced under state of emergency and state of siege policies. In spite of such a striking number and continuity in the deployment of legal emergency powers, there are only a few legal and political studies examining the reasons for such permanency in governing practices. To fill this gap, this paper discusses one of the most important sources of the ‘permanent’ political crisis in the country: the historical evolution of legal emergency power. In order to highlight how these policies have intensified the highly fragile citizenship regime by weakening the separation of powers, repressing the use of political rights and increasing the discretionary power of both the executive and judiciary authorities, the paper sheds light on the emergence and production of a specific form of legality based on the idea of emergency and the principle of executive prerogative. In that context, it aims to provide a genealogical explanation of the evolution of the exceptional form of the nation-state, which is based on the way political society, representation, and legitimacy have been instituted and on the accompanying failure of the ruling classes to build hegemony in the country.
The Forgotten War: Yemen
(2019)
The conflict in Yemen seems forgotten among the world's severe humanitarian catastrophes. Nevertheless, since it escalated around four years ago, it has become one of the worst humanitarian crises in recent history, with no end in sight. Thousands of people have been killed, even more have been displaced, and the country is facing tremendous food insecurity as well as the world’s largest cholera outbreak. It is no longer just a civil war between the Houthi and Hadi factions: international interests play a major role and have made it a proxy war between Saudi Arabia (and its allies) on one side and Iran on the other. All this happens at the expense of the civilian population. It is therefore urgent to analyse the actors involved and their interests in the conflict, and to search for ways to overcome it.