The starting point of the dissertation is the question of why there are relatively few female auditors in Germany. According to the membership statistics of the German Chamber of Public Accountants (Wirtschaftsprüferkammer) as of 1 January 2020, the proportion of women in the profession is around 17%. The relevant literature shows that at the entry level in the segment of the ten largest audit firms, the gender ratio is fairly balanced. However, the proportion of women at the "manager" level, for which a passed professional examination is usually a prerequisite, is already considerably lower and decreases with each further step up the hierarchy. The objective of the dissertation was therefore specified as analysing the factors that may contribute to the declining relative representation of women in the segment of Germany's ten largest audit firms from the manager level upwards (i.e., usually from the threshold of qualified auditors). The analysis accordingly focuses on experienced audit assistants (seniors), in order to examine in detail this threshold immediately below the manager level.
In addition to evaluating findings from international auditing research, an empirical study was conducted among the seniors of six of the ten largest audit firms in Germany. The empirical results were evaluated by means of descriptive data analysis and examined to determine for which of the previously defined aspects significant gender-specific differences can be observed. For selected aspects, it was also analysed whether there are differences between female and male seniors with and without children. Overall, gender-specific differences and differences between seniors with and without children were found for numerous aspects. It also emerges that, in addition to the professional situation, individual characteristics and the private environment are relevant. Within the professional situation, both the perception of the current professional situation and, among other things, the seniors' expectations regarding a possible future manager position, the auditor examination, and further career prospects play a role.
Rehabilitationspädagogik
(2021)
Rehabilitation pedagogy is a younger, independent hybrid science within the human sciences. In line with Book Nine of the German Social Code (SGB IX), its theory building starts from the longer-term consequences of an illness or a biological deficit. Conceptually, it is guided, for example, by the UN Convention on the Rights of Persons with Disabilities (UN-BRK) and by the International Classification of Functioning, Disability and Health (ICF), and further by the concepts of K.-F. Wessel's human ontogenetics, in particular: the whole human being, the hierarchy of competences, sensitive phases, and sovereignty.
Rehabilitation pedagogy is part of comprehensive health rehabilitation and a subdiscipline of general pedagogy. The guiding aim of the rehabilitation-pedagogical process is to support a person's full participation in their individual spheres of life through rehabilitation-pedagogical means, methods, and organizational forms.
Using hermeneutic methods, the dissertation engages critically and constructively with the GDR rehabilitation pedagogy of K.-P. Becker and his team of authors. It presents an up-to-date, continuing theory of rehabilitation pedagogy that takes the UN-BRK, the ICF, and SGB IX into account, and offers a new view of rehabilitation pedagogy from both a historical and a contemporary perspective.
The present publication of the dissertation "Nutzungsfokussierte Evaluation in der Lehrkräftefortbildung Belcantare Brandenburg für musikunterrichtende Grundschullehrer*innen im ländlichen Raum" (utilization-focused evaluation of the teacher training programme Belcantare Brandenburg for primary school teachers teaching music in rural areas) is an actor-oriented, exploratory evaluation. Since 2011, the Landesmusikrat Brandenburg e.V., in cooperation with several institutions, has been running the two-year training programme for both out-of-field and trained music teachers across the regions of the state of Brandenburg, focusing on the competence field of singing and song didactics.
The underlying evaluation approach places at the centre of the research the interests of the cooperating partners, who intend to draw practical consequences from the results of the evaluation. It is thus commissioned research. The evaluation serves the functions of assuring and optimizing the content quality of the teacher training, extending the knowledge gained for designing subject-didactic coaching, making the research results visible for purposes of legitimation and participation, and providing analytical decision support for continuing Belcantare Brandenburg beyond 2022.
The research concerns brought to the author by the actors were condensed into four questions:
1. How satisfied are the participants with the series of events?
2. Which subject-related, didactic, and personal developments do the participating teachers perceive over the course of the training period?
3. How do those involved in the coaching assess the opportunities and limits of music-didactic coaching as a form of professional development?
4. What conclusions regarding professional teacher training can be drawn from contrasting the empirical findings with those of the theory?
These research questions were answered in two research phases:
1. The empirical data corpus was compiled between 2011 and 2015. During this time, research questions 1, 2, and 3 were of particular relevance to the project-accompanying quality assurance and continuation of the pilot and follow-up rounds of Belcantare Brandenburg. The evaluation study is exploratory in design: the variables for research questions 1 and 2 were successively derived through document analyses and the evaluation of interviews with the project management and participating teachers. Likewise, the semi-closed questionnaires, as the central survey instruments for research questions 1 and 2, correspond to the exploratory character and thereby ensured that the participants (N=40) were given the opportunity to contribute their own perspectives. With an overall grade of "very good" (1.39) from the surveyed teachers, the design of the series of events counts as a best-practice example: for the teachers, the action-oriented development of teaching content, learning objects, and matching classroom materials that are suited to their pupils, thematically appropriate, and either immediately usable or repeatedly practised constitutes the essential criterion for taking up such a professionalization measure. The teacher developments in both rounds studied show that the out-of-field teachers perceive greater developmental gains after the end of the project than the subject specialists. At the same time, the self-assessed subject competence of the out-of-field teachers at the end of the training remains below that of the subject specialists.
Research question 3 is based on an exclusively qualitative design (N=16). As a result, the open form of subject-didactic coaching could be defined, its parameters described, and essential properties of coach constellations for internally differentiated coaching in teacher training identified.
2. In May 2019, in view of the worsening shortage of qualified teachers in Brandenburg, the cooperation partners resolved to continue the teacher training beyond 2022 as a quality assurance measure. This situation led in 2019 to the addition of research question 4, which implied a comprehensive and updated analysis of the theoretical and education-policy background of the intervention, with the aim of deepening the evaluation's findings for a renewed recommendation. Addressing and designing self-directed learning processes in professionalizing teacher training emerged here as a central feature of an innovative learning culture.
The publication is divided into four parts: Part I presents the state of research on professionalizing teacher training from the perspectives of educational science and music pedagogy. Part II establishes the complex justificatory context of the object of evaluation. Part III contains the evaluation study. Its inductively derived findings are contrasted in Part IV with the state of research on professionalizing teacher training.
Smart contracts promise to reform the legal domain by automating clerical and procedural work, and minimizing the risk of fraud and manipulation. Their core idea is to draft contract documents in a way which allows machines to process them, to grasp the operational and non-operational parts of the underlying legal agreements, and to use tamper-proof code execution alongside established judicial systems to enforce their terms. The implementation of smart contracts has been largely limited by the lack of an adequate technological foundation which does not place an undue amount of trust in any contract party or external entity. Only recently did the emergence of Decentralized Applications (DApps) change this: Stored and executed via transactions on novel distributed ledger and blockchain networks, powered by complex integrity and consensus protocols, DApps grant secure computation and immutable data storage while at the same time eliminating virtually all assumptions of trust.
However, research on how to effectively capture, deploy, and most of all enforce smart contracts with DApps in mind is still in its infancy. Starting from the initial expression of a smart contract's intent and logic, to the operation of concrete instances in practical environments, to the limits of automatic enforcement---many challenges remain to be solved before a widespread use and acceptance of smart contracts can be achieved.
This thesis proposes a model-driven smart contract management approach to tackle some of these issues. A metamodel and semantics of smart contracts are presented, containing concepts such as legal relations, autonomous and non-autonomous actions, and their interplay. Guided by the metamodel, the notion and a system architecture of a Smart Contract Management System (SCMS) are introduced, which facilitates smart contracts in all phases of their lifecycle. Relying on DApps in heterogeneous multi-chain environments, the SCMS approach is evaluated by a proof-of-concept implementation, showing both its feasibility and its limitations.
Further, two specific enforceability issues are explored in detail: The performance of fully autonomous tamper-proof behavior with external off-chain dependencies and the evaluation of temporal constraints within DApps, both of which are essential for smart contracts but challenging to support in the restricted transaction-driven and closed environment of blockchain networks. Various strategies of implementing or emulating these capabilities, which are ultimately applicable to all kinds of DApp projects independent of smart contracts, are presented and evaluated.
Membrane contact sites are of particular interest in the fields of synthetic biology and biophysics, as they are involved in a great variety of cellular functions. They form between two cellular organelles, or between an organelle and the plasma membrane, in order to establish a communication path for molecule transport or signal transmission.
The goal of this research study was to develop an artificial membrane system that can mimic membrane contact sites using bottom-up synthetic biology. For this, a multi-compartmentalised giant unilamellar vesicle (GUV) system was created, with the membrane of the outer vesicle mimicking the plasma membrane and the inner GUVs acting as cellular organelles.
In the following steps, three different strategies were used to achieve internal membrane-membrane adhesion.
The present work deals with variation in the linearisation of German infinitival complements from a diachronic perspective. Based on the observation that in present-day German the position of infinitival complements is restricted by properties of the matrix verb (Haider 2010; Wurmbrand 2001), whereas word order appears much more liberal in older stages of German (Demske 2008; Maché and Abraham 2011; Demske 2015), this dissertation investigates the emergence of those restrictions and the factors that have led to a reduced, yet still existing, variability. The study contrasts infinitival complements of two types of matrix verbs, namely raising and control verbs. In present-day German, these show different syntactic behaviour and opposite preferences as far as the position of the infinitive is concerned: while infinitival complements of raising verbs build a single clausal domain with the matrix verb and occur obligatorily intraposed, infinitival complements of control verbs can form clausal constituents and occur predominantly extraposed. This correlation is not attested in older stages of German, at least not until Early New High German.
Drawing on diachronic corpus data, the present work provides a description of the changes in the linearisation of infinitival complements from Early New High German to present-day German, aiming to determine when the correlation between infinitive type and word order emerged, and it further examines possible causes. The study shows that word order change in German infinitival complements is not a case of syntactic change in the narrow sense, but that the diachronic variation results from the interaction of different language-internal and language-external factors, reflecting, on the one hand, the influence of language modality on the emerging standard language and, on the other hand, a process of specialisation.
Detecting and categorizing particular entities in the environment are important visual tasks that humans have had to solve at various points in our evolutionary history. The question arises whether characteristics of entities that were of ecological significance for humans play a particular role in the development of visual categorization.
The current project addressed this question by investigating the effects of developing visual abilities, visual properties, and ecological significance on categorization early in life. Our stimuli were monochromatic photographs of structure-like assemblies and surfaces taken from three categories: vegetation, non-living natural elements, and artifacts. A set of computational and rated visual properties was assessed for these stimuli. Three empirical studies applied coherent research concepts and methods in young children and adults, comprising (a) two card-sorting tasks with preschool children (age: 4.1-6.1 years) and adults (age: 18-50 years), which assessed classification and similarity judgments, and (b) a gaze-contingent eye-tracking search task, which investigated the impact of visual properties and category membership on 8-month-olds' ability to segregate visual structure. Because eye-tracking with infants still poses challenges, a methodological study (c) assessed the effect of infant eye-tracking procedures on data quality with 8- to 12-month-old infants and adults.
In the categorization tasks, we found that category membership and visual properties affected the performance of all participant groups. Sensitivity to the respective categories varied between tasks and across the age groups. For example, artifact images hindered infants' visual search but were classified best by adults, whereas sensitivity to vegetation was highest during similarity judgments. Overall, preschool children relied less on visual properties than adults, but some properties (e.g., rated depth, shading) were drawn upon to a similarly strong degree. In children and infants, depth predicted task performance more strongly than shape-related properties. Moreover, children and infants were sensitive to variations in the complexity of low-level visual statistics. These results suggest that the classification of visual structures and attention to particular visual properties are affected by the functional or ecological significance these categories and properties may have for each of the respective age groups.
Based on this, the project highlights the importance of further developmental research on visual categorization with naturalistic, structure-like stimuli. As intended with the current work, this would allow establishing important links between developmental and adult research.
Grant-funded start-up support services were an important element of university-based start-up promotion in the state of Brandenburg during the EU funding periods 2007-2013 and 2014-2020. Owing to the state's positive economic development, however, the funding volume steadily declined over the same period. A further reduction of funding is already certain for the EU funding period 2021-2027. As a consequence, without adjustments to the established funding structures, start-up support services at Brandenburg universities will be further reduced or will erode. This thesis therefore addresses, among other things, the question of how a theoretical reference model for grant-funded university start-up counselling can be designed to cope with reduced funding rates while maintaining the diversity of services offered.
To answer this question, the funding project BIEM Startup Navigator is used as the object of investigation. This start-up counselling project was carried out at six Brandenburg universities from 2010 to 2014. Using the models and premises of principal-agent theory, a theoretical framework is first established as the basis for the empirical investigation. Principal-agent theory is used to identify the organizations, individuals, and institutions involved. Furthermore, the main problem areas and solution approaches of principal-agent theory are discussed for the investigation of the BIEM Startup Navigator.
In the course of the investigation, the concepts for implementing the funding project at six university locations, as well as the data of 610 participants and 288 start-ups, are analysed in order to identify and describe logical relationships and interactions. Different theoretical assumptions regarding project effectiveness and efficiency, cost distribution, and conceptual design are formulated as 24 working hypotheses and applied to the investigation. The hypotheses are verified or falsified on the basis of the combined findings from literature reviews and the results of the empirical investigation.
In the course of the thesis, the agency costs arising in principal-agent theory are also described using the example of the BIEM Startup Navigator, and ex post inefficiencies in the screening and signaling processes carried out are identified.
The theoretical reference model for grant-funded start-up counselling at Brandenburg universities developed in this thesis is intended to make it possible to cope with declining EU funding without a simultaneous reduction of start-up support services at the universities. To this end, the theoretical reference model shows how the results of the empirical investigation can be used to reduce the agency costs of grant-funded start-up counselling.
Centroid moment tensor inversion can provide insight into ongoing tectonic processes and active faults. In the Alpine mountains (central Europe), challenges result from low signal-to-noise ratios of earthquakes with small to moderate magnitudes and complex wave propagation effects through the heterogeneous crustal structure of the mountain belt. In this thesis, I make use of the temporary installation of the dense AlpArray seismic network (AASN) to establish a workflow for studying seismic source processes and to enhance the knowledge of Alpine seismicity. The cumulative thesis comprises four publications on the topics of large seismic networks, seismic source processes in the Alps, their link to tectonics and the stress field, and the inclusion of small-magnitude earthquakes in studies of active faults.
Dealing with hundreds of stations of the dense AASN requires automated assessment of data and metadata quality. I developed the open-source toolbox AutoStatsQ to perform automated data quality control. Its first application, to the AlpArray seismic network, revealed significant errors in amplitude gains and sensor orientations. A second application of the orientation test, based on Rayleigh wave polarization, to the Turkish KOERI network further illustrated its potential in comparison to a P-wave polarization method. Taking advantage of the gain and orientation results for the AASN, I tested different inversion settings and input data types to address the specific challenges of centroid moment tensor (CMT) inversion in the Alps. A comparative study was carried out to determine the best-fitting procedures.
The application to four years of seismicity in the Alps (2016-2019) substantially increased the number of moment tensor solutions in the region. We provide a list of moment tensor solutions down to magnitude Mw 3.1. Spatial patterns of typical focal mechanisms were analyzed in their seismotectonic context by comparing them to long-term seismicity, historical earthquakes, and observations of strain rates. Additionally, we used our MT solutions to investigate stress regimes and orientations along the Alpine chain. Finally, I addressed the challenge of including smaller-magnitude events in the study of active faults and source processes. The open-source toolbox Clusty was developed for clustering earthquakes based on waveforms recorded across a network of seismic stations. The similarity of waveforms reflects both the location and the similarity of source mechanisms; the clustering therefore offers the opportunity to identify earthquakes of similar faulting styles even when centroid moment tensor inversion is not possible due to low signal-to-noise ratios of surface waves or oversimplified velocity models. The toolbox is described through an application to the Zakynthos 2018 aftershock sequence, and I subsequently discuss its potential application to weak earthquakes (Mw < 3.1) in the Alps.
Learning analytics at scale
(2021)
Digital technologies are paving the way for innovative educational approaches. The learning format of Massive Open Online Courses (MOOCs) provides a highly accessible path to lifelong learning while being more affordable and flexible than face-to-face courses. As a result, thousands of learners can enroll in courses, mostly without admission restrictions, but this also raises challenges. Individual supervision by teachers is barely feasible, and learning persistence and success depend on students' self-regulatory skills. Here, technology provides the means for support. The use of data for decision-making is already transforming many fields, whereas in education, it is still a young research discipline. Learning Analytics (LA) is defined as the measurement, collection, analysis, and reporting of data about learners and their learning contexts with the purpose of understanding and improving learning and learning environments. The vast amount of data that MOOCs produce on the learning behavior and success of thousands of students provides the opportunity to study human learning and develop approaches addressing the demands of learners and teachers.
The overall purpose of this dissertation is to investigate the implementation of LA at the scale of MOOCs and to explore how data-driven technology can support learning and teaching in this context. To this end, several research prototypes were iteratively developed for the HPI MOOC Platform and were tested and evaluated in an authentic real-world learning environment. Most of the results can be applied on a conceptual level to other MOOC platforms as well. The research contribution of this thesis thus provides practical insights that go beyond the purely theoretical. In total, four system components were developed and extended:
(1) The Learning Analytics Architecture: A technical infrastructure to collect, process, and analyze event-driven learning data based on schema-agnostic pipelining in a service-oriented MOOC platform. (2) The Learning Analytics Dashboard for Learners: A tool for data-driven support of self-regulated learning, in particular to enable learners to evaluate and plan their learning activities, progress, and success by themselves. (3) Personalized Learning Objectives: A set of features to better connect learners' success to their personal intentions based on selected learning objectives to offer guidance and align the provided data-driven insights about their learning progress. (4) The Learning Analytics Dashboard for Teachers: A tool supporting teachers with data-driven insights to enable the monitoring of their courses with thousands of learners, identify potential issues, and take informed action.
For all aspects examined in this dissertation, related research is presented, development processes and implementation concepts are explained, and evaluations are conducted in case studies. Among other findings, the usage of the learner dashboard in combination with personalized learning objectives demonstrated improved certification rates of 11.62% to 12.63%. Furthermore, it was observed that the teacher dashboard is a key tool and an integral part of teaching in MOOCs. In addition to the results and contributions, general limitations of the work are discussed. Altogether, this provides a solid foundation for practical implications and future research.
Was ist HipHop?
(2021)
This dissertation is an investigative research study dealing with the dynamically changing phenomenon of HipHop. The author explains the lasting appeal of the cultural phenomenon of HipHop and seeks to account more precisely for its constant reproducibility. He therefore begins with a historical discourse analysis of HipHop culture, analysing its forms, protagonists, and discourses in order to understand it better. By working out HipHop's genuine property of multiple codability, common explanatory patterns from scholarship and the media are relativized and critiqued. In his study, the author combines literature from cultural and educational studies with diverse current and historical representations and images. Above all, image-based self-stagings of HipHop artists and self-testimonies from narrative interviews that he himself conducted with various HipHop artists in Germany are evaluated. Alongside the narrative interviews, image interpretation following Bohnsack serves as the main source for developing the thesis of multiple codability. Two images of the HipHop artists Lady Bitch Ray and Kollegah are interpreted following Bohnsack (2014), showing how HipHop is also staged and produced visually, in addition to its lyrical and sonic components. From this it is concluded that HipHop makes it possible to present and convey contrary points of view while simultaneously using typical cultural practices such as boasting. The constant openness of HipHop becomes evident in practices such as sampling or the battle, and the author explains that these techniques produce the generative property of multiple codability.
He thus advances a kind of modular-kit theory, according to which, in principle, anyone can draw on the HipHop kit according to their preferences, interests, and affinities. The variety of opinions on HipHop that the author obtains by coding the narrative interviews illustrates this thesis and makes clear that HipHop is more than just a fashion. Owing to the openness it carries within it, HipHop has the fundamental capacity to constantly reinvent itself and thereby gain in popularity. The present work thus extends the ever-growing body of research in HipHop studies and sets important accents for further research and for making HipHop more comprehensible.
Adapted pathogens possess a range of virulence mechanisms to suppress plant immune responses below a threshold of effective resistance. This enables them to multiply and cause disease on a particular host. An essential virulence strategy of Gram-negative bacteria is the translocation of so-called type III effector proteins (T3Es) directly into the host cell, where they disturb the host's immune response or promote the establishment of an environment favourable to the pathogen. A critical component of plant immunity against invading pathogens is the rapid transcriptional reprogramming of the attacked cell. Many adapted bacterial plant pathogens use T3Es to disrupt the induction of defence-associated genes. Elucidating effector functions and identifying their plant target proteins are essential for understanding bacterial pathogenesis. The aim of this work was the functional characterization of the type III effector protein XopS from Xanthomonas campestris pv. vesicatoria (Xcv). A particular focus was the interaction between XopS and its plant interaction partner WRKY40, a transcriptional regulator of defence-associated gene expression identified in preliminary work. It could be shown that XopS is an essential virulence factor of the phytopathogen Xcv during the pre-invasive immune response: xopS-deficient Xcv bacteria showed markedly reduced virulence compared with wild-type Xcv when inoculated onto the leaf surface of susceptible pepper plants. Translocation of XopS by Xcv, as well as ectopic expression of XopS in Arabidopsis or N. benthamiana, prevented stomatal closure in response to bacteria or a pathogen-associated stimulus, and this was shown to occur in a WRKY40-dependent manner.
It could further be shown that XopS is able to manipulate the expression of defence-associated genes, indicating that XopS interferes with both pre-invasive and post-invasive apoplastic defence. Phytohormone signalling networks play an important role in establishing an efficient plant immune response, and XopS appears to interfere with precisely these networks. For example, ectopic expression of the effector in Arabidopsis led to a significant induction of the phytohormone jasmonic acid (JA), whereas infection of susceptible pepper plants with a xopS-deficient Xcv strain likewise led to a significant accumulation of salicylic acid (SA).
So kann zu diesem Zeitpunkt vermutet werden, dass XopS die Virulenz von Xcv fördert, indem JA-abhängige Signalwege induziert werden und es gleichzeitig zur Unterdrückung SA-abhängiger Signalwege kommt. Die Virus-induzierte Genstilllegung des XopS Interaktionspartners WRKY40a in Paprika erhöhte die Toleranz der Pflanze gegenüber einer Xcv Infektion, was darauf hindeutet, dass es sich bei diesem Protein um einen transkriptionellen Repressor pflanzlicher Immunantworten handelt. Die Hypothese, dass WRKY40 die Abwehr-assoziierte Genexpression reprimiert, konnte hier über verschiedene experimentelle Ansätze bekräftigt werden. So wurde beispielsweise gezeigt, dass die Expression von verschiedenen Abwehrgenen einschließlich des SA-abhängigen Gens PR1 und die des Negativregulators des JA-Signalwegs JAZ8 von WRKY40 gehemmt wird. Um bei einem Pathogenangriff die Abwehr-assoziierte Genexpression zu gewährleisten, muss WRKY40 als Negativregulator abgebaut werden. Vorarbeiten zeigten, dass WRKY40 über das 26S Proteasom abgebaut wird. In der hier vorliegenden Studie konnte weiter bestätigt, dass der T3E XopS zu einer Stabilisierung des WRKY40 Proteins führt, indem er auf bislang ungeklärte Weise dessen Abbau über das 26S Proteasom verhindert. Die Ergebnisse aus der hier vorliegenden Arbeit lassen die Vermutung zu, dass die Stabilisierung des Negativregulators der Immunantwort WRKY40 seitens XopS dazu führt, dass eine darüber vermittelte Manipulation der Abwehr-assoziierten Genexpression, sowie eine Umsteuerung phytohormoneller Wechselwirkungen die Ausbreitung von Xcv auf suszeptiblen Paprikapflanzen fördert. Ein weiteres Ziel dieser Arbeit war es, weitere potentielle in planta Interaktionspartner von XopS zu identifizieren die für seine Interaktion mit WRKY40 bzw. für die Aufschlüsselung seines Wirkmechanismus relevant sein könnten. So konnte die Deubiquitinase UBP12 als weiterer pflanzlicher Interaktionspartner sowohl von XopS als auch von WRKY40 gefunden werden. 
Dieses Enzym ist in der Lage, die Ubiquitinierung von Substratproteinen zu modifizieren und seine Funktion könnte somit ein Bindeglied zwischen XopS und dessen Interferenz mit dem proteasomalen Abbau von WRKY40 sein. Während einer kompatiblen Xcv-Wirtsinteraktion führte die Virus-induzierte Genstilllegung von UBP12 zu einer reduzierten Resistenz der Pflanze gegenüber des Pathogens Xcv, was auf dessen positiv-regulatorische Wirkung während der Immunantwort hindeutet. Zudem zeigten Western Blot Analysen, dass das Protein WRKY40 bei einer Herunterregulierung von UBP12 akkumuliert und dass diese Akkumulation von der Anwesenheit des T3Es XopS zusätzlich verstärkt wird. Weiterführende Analysen zur biochemischen Charakterisierung der XopS/WRKY40/UBP12 Interaktion sollten in Zukunft durchgeführt werden, um den genauen Wirkmechanismus des XopS T3Es weiter aufzuschlüsseln.
With ongoing anthropogenic global warming, some of the most vulnerable components of the Earth system might become unstable and undergo a critical transition. These subsystems are the so-called tipping elements. They are believed to exhibit threshold behaviour and would, if triggered, result in severe consequences for the biosphere and human societies. Furthermore, it has been shown that climate tipping elements are not isolated entities, but interact across the entire Earth system. Therefore, this thesis aims at mapping out the potential for tipping events and feedbacks in the Earth system mainly by the use of complex dynamical systems and network science approaches, but partially also by more detailed process-based models of the Earth system.
In the first part of this thesis, the theoretical foundations are laid by the investigation of networks of interacting tipping elements. For this purpose, the conditions for the emergence of global cascades are analysed against the structure of paradigmatic network types such as Erdős-Rényi, Barabási-Albert, Watts-Strogatz and explicitly spatially embedded networks. Furthermore, micro-scale structures are detected that are decisive for the transition from local to global cascades. These so-called motifs link the micro- to the macro-scale in the network of tipping elements. Alongside a model description paper, all these results were incorporated into the Python software package PyCascades, which is publicly available on GitHub.
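The threshold dynamics behind such cascades can be illustrated with a minimal, self-contained toy model (this sketch is not the PyCascades API; the graph size, coupling strength and tipping threshold are arbitrary assumptions for illustration):

```python
import random

def erdos_renyi(n, p, rng):
    """Undirected Erdős-Rényi random graph as adjacency sets."""
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return nbrs

def cascade(nbrs, seed, strength, threshold):
    """Tip `seed`, then let a node tip once the coupling from its
    tipped neighbours (strength * fraction tipped) reaches `threshold`."""
    tipped = {seed}
    changed = True
    while changed:
        changed = False
        for node, nb in nbrs.items():
            if node not in tipped and nb:
                load = strength * len(nb & tipped) / len(nb)
                if load >= threshold:
                    tipped.add(node)
                    changed = True
    return tipped

rng = random.Random(42)
graph = erdos_renyi(100, 0.05, rng)
print(len(cascade(graph, seed=0, strength=1.0, threshold=0.3)))
```

Varying the edge probability or the coupling strength shows how the network structure controls whether a local perturbation stays local or grows into a global cascade.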
In the second part of this dissertation, the tipping element framework is first applied to components of the Earth system such as the cryosphere and to parts of the biosphere. Afterwards it is applied to a set of interacting climate tipping elements on a global scale. Using the Earth system Model of Intermediate Complexity (EMIC) CLIMBER-2, the temperature feedbacks are quantified, which would arise if some of the large cryosphere elements disintegrate over a long span of time. The cryosphere components that are investigated are the Arctic summer sea ice, the mountain glaciers, the Greenland and the West Antarctic Ice Sheets. The committed temperature increase, in case the ice masses disintegrate, is on the order of an additional half a degree on a global average (0.39-0.46 °C), while local to regional additional temperature increases can exceed 5 °C. This means that, once tipping has begun, additional reinforcing feedbacks are able to increase global warming and with that the risk of further tipping events.
This is also the case in the Amazon rainforest, whose parts are dependent on each other via the so-called moisture-recycling feedback. In this thesis, the importance of drought-induced tipping events in the Amazon rainforest is investigated in detail. Although the Amazon rainforest is assumed to be adapted to past environmental conditions, it is found that tipping events increase sharply if drought conditions become too intense in too short a time, outpacing the adaptive capacity of the rainforest. In these cases, the frequency of tipping cascades also increases to 50% (or above) of all tipping events. In the model that was developed in this study, the southeastern region of the Amazon basin is hit hardest by the simulated drought patterns. This is also the region that already suffers from extensive human-induced change due to large-scale deforestation, cattle ranching and infrastructure projects.
Moreover, on the larger, Earth-system-wide scale, a network of conceptualised climate tipping elements is constructed in this dissertation, drawing on an extensive literature review, expert knowledge and topological properties of the tipping elements. Tipping cascades are detected even under modest climate change scenarios that limit global warming to 2 °C above pre-industrial levels. In addition, the structural roles of the climate tipping elements in the network are revealed: while the large ice sheets on Greenland and Antarctica are the initiators of tipping cascades, the Atlantic Meridional Overturning Circulation (AMOC) acts as the transmitter of cascades. Furthermore, in our conceptual climate tipping element model, the ice sheets are found to be of particular importance for the stability of the entire system of investigated climate tipping elements.
In the last part of this thesis, the results from the temperature feedback study with the EMIC CLIMBER-2 are combined with the conceptual model of climate tipping elements. There, it is observed that the likelihood of further tipping events slightly increases due to the temperature feedbacks even if no further CO$_2$ were added to the atmosphere.
Although the developed network model is conceptual in nature, this work makes it possible for the first time to quantify the risk of tipping events among interacting components of the Earth system under global warming scenarios while simultaneously allowing for dynamic temperature feedbacks.
In the GDR, geography was one of the school subjects most heavily loaded with political content in the spirit of Marxism-Leninism. Another aspect is the set of socialist educational goals that ranked highly in GDR schooling, centred on raising children to become socialist personalities. This thesis takes a close look at this situation in order to establish what was demanded of teachers and how it was to be implemented in the classroom.
With the fall of the Berlin Wall, a restructuring of the East German education system naturally became inevitable. Here, the thesis offers insights into how geography teachers supported and implemented this transformation. Which traits from their socialisation in the GDR persisted in how they designed their lessons and aligned them with the new educational goals?
To this end, geography teachers were interviewed who had taught both in the GDR and in reunified Germany. The questions focused primarily on how they taught before, during and after the fall of the Wall and the ensuing system transformation.
The interviews show that, thematically, geography teaching in the GDR did not differ greatly from that in the FRG, so no extensive changes to the content of geography lessons were required. Even in GDR times, teachers apparently often expanded ideology-free physical-geography topics on their own initiative in order to reduce the subject's ideological load. Most therefore found it relatively easy to adapt their teaching to the West German system. The humanistic values education of the GDR school system was likewise continued, with its socialist aspect set aside, since here too there were many parallels to the West German system. Notably, the East German teachers characterise the subject as a natural science, even though it is assigned to the social sciences in schools and also had a strong economic-geography orientation in the GDR.
With the end of the GDR, teachers were released from the responsibility of raising socialist personalities, and the interview excerpts presented in this thesis leave no doubt that most respondents did not regret this, even though they still orient themselves by the values of the GDR era to this day.
In C3 plants, CO2 diffuses into the leaf and is assimilated by the Calvin-Benson cycle in the mesophyll cells. This arrangement leaves Rubisco open to its side reaction with O2, resulting in a wasteful cycle known as photorespiration. A sharp fall in atmospheric CO2 levels about 30 million years ago has further increased this side reaction with O2. The pressure to reduce photorespiration led, in over 60 plant genera, to the evolution of a CO2-concentrating mechanism called C4 photosynthesis; in this mode, CO2 is initially incorporated into four-carbon organic acids, which diffuse to the bundle sheath and are decarboxylated to provide CO2 to Rubisco. Some genera, like Flaveria, contain several species that represent different steps in this complex evolutionary process. However, the majority of terrestrial plant species did not evolve a CO2-concentrating mechanism and perform C3 photosynthesis.
This thesis compares photosynthetic metabolism in several species with C3, C4 and intermediate modes of photosynthesis. Metabolite profiling and stable isotope labelling were performed to detect inter-specific differences in metabolite profiles and, hence, in how a pathway operates. The results obtained were subjected to integrative data analyses such as hierarchical clustering and principal component analysis, and were deepened by correlation analyses to uncover specific metabolic features and reaction steps that were conserved or differed between species.
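As a rough illustration of the correlation analyses mentioned above, the following sketch computes a Pearson correlation between two metabolite series (the metabolite names and values are invented for illustration and are not data from this thesis):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy example: hypothetical levels of two Calvin-Benson cycle
# intermediates measured at four irradiance steps (invented numbers).
rubp = [1.0, 1.4, 2.1, 2.9]
pga = [0.8, 1.1, 1.9, 2.5]
print(round(pearson(rubp, pga), 3))
```

In practice, such pairwise correlations across a full metabolite matrix highlight reaction steps whose pool sizes co-vary, and thus which parts of the pathway behave conservatively across species.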
The main findings are that Calvin-Benson cycle metabolite profiles differ between C3 and C4 species and between different C3 species, including a very different response to rising irradiance in Arabidopsis and rice. These findings confirm that Calvin-Benson cycle operation diverged between C3 and C4 species and, most unexpectedly, even between different C3 species. Moreover, primary metabolite profiles supported the current C4 evolutionary model in the genus Flaveria while also providing new insights and opening up new questions. Metabolite profiles also point toward a progressive adjustment of the Calvin-Benson cycle during the evolution of C4 photosynthesis. Overall, this thesis points to the importance of a metabolite-centric approach for uncovering underlying differences between species apparently sharing the same photosynthetic routes, and establishes it as a valid method to investigate evolutionary transitions between C3 and C4 photosynthesis.
The majority of baryons in the Universe is believed to reside in the intergalactic medium (IGM). This makes the IGM an important component in understanding cosmological structure formation. It is expected to trace the same dark matter distribution as galaxies, forming structures like filaments and clusters. However, whereas galaxies can be observed to be arranged along these large-scale structures, the spatial distribution of the diffuse IGM is not as easily unveiled. Absorption line studies of quasar (QSO) spectra can help with mapping the IGM, as well as the boundary layer between IGM and galaxies: the circumgalactic medium (CGM). By studying gas in the Local Group, as well as in the IGM, this study aims to get a better understanding of how the gas is linked to the large-scale structure of the local Universe and the galaxies residing in that structure.
Chapter 1 gives an introduction to the CGM and IGM, while the methods used in this study are explained in Chapter 2. Chapter 3 starts on a relatively small cosmological scale, namely that of our Local Group (LG), which includes, among others, the Milky Way (MW) and M31. Within the CGM of the MW, there exist denser clouds, some of which are infalling while others are moving away from the Galactic disc. To study these clouds, 29 QSO spectra obtained with the Cosmic Origins Spectrograph (COS) aboard the Hubble Space Telescope (HST) were analysed. Abundances of Si II, Si III, Si IV, C II, and C IV were measured for 69 high-velocity clouds (HVCs) belonging to two samples: one in the direction of the LG's barycentre and the other in the anti-barycentre direction. Their velocities lie in the range -400 km/s ≤ vLSR ≤ -100 km/s for the barycentre sample and +100 km/s ≤ vLSR ≤ +300 km/s for the anti-barycentre sample. Using Cloudy models, these data could then be used to derive gas volume densities for the HVCs. Because of the relationship between density and the pressure of the ambient medium, which is in turn determined by the Galactic radiation field, the distances of the HVCs could be estimated. From this, a subsample of absorbers located in the direction of M31 was found to exist outside of the MW's virial radius, their low densities (log nH ≤ -3.54) making it likely that they are part of the gas in between the MW and M31. No such low-density absorbers were found in the anti-barycentre sample. Our results thus hint at gas following the dark matter potential, which would be deeper between the MW and M31 as they are by far the most massive members of the LG.
From this bridge of gas in the LG, this study zooms out to the large-scale structure of the local Universe (z ~ 0) in Chapter 4. Galaxy data from the V8k catalogue and QSO spectra from COS were used to study the relation between the galaxies tracing large-scale filaments and the gas existing outside of those galaxies. This study used the filaments defined in Courtois et al. (2013). A total of 587 Lyman α (Lyα) absorbers were found in the 302 QSO spectra in the velocity range 1070 - 6700 km/s. After selecting sightlines passing through or close to these filaments, model spectra were made for 91 sightlines and 215 (227) Lyα absorbers (components) were measured in this sample. The velocity gradient along each filament was calculated and 74 absorbers were found within 1000 km/s of the nearest filament segment.
To determine whether the absorbers are more closely tied to galaxies or to the large-scale structure, equivalent widths of the Lyα absorbers were plotted against both galaxy and filament impact parameters. While stronger absorbers do tend to be closer to either galaxies or filaments, there is a large scatter in this relation. Despite this large scatter, this study found that the absorbers do not follow a random distribution either. They cluster less strongly around filaments than around galaxies, but more strongly than random distributions would, as confirmed by a Kolmogorov-Smirnov test.
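A two-sample Kolmogorov-Smirnov comparison of this kind can be sketched as follows (pure-Python illustration; the impact-parameter values are invented, not data from this study):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical cumulative distribution functions of a and b."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < n and j < m:
        x = min(a[i], b[j])
        while i < n and a[i] == x:  # step past ties together
            i += 1
        while j < m and b[j] == x:
            j += 1
        d = max(d, abs(i / n - j / m))
    return d

# Hypothetical impact parameters in Mpc: absorbers clustered close to
# filaments versus a spread-out random control sample (invented numbers).
observed = [0.1, 0.2, 0.2, 0.3, 0.5, 0.8]
control = [0.4, 0.9, 1.3, 1.8, 2.2, 2.6]
print(ks_statistic(observed, control))
```

A large statistic indicates that the observed impact parameters are unlikely to come from the same distribution as the random control; in published analyses a library routine such as `scipy.stats.ks_2samp` would also supply the p-value.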
Furthermore, the column density distribution function found in this study has a slope −β with β = 1.63 ± 0.12 for the total sample and β = 1.47 ± 0.24 for the absorbers within 1000 km/s of a filament. The shallower slope for the latter subsample could indicate an excess of denser absorbers within the filaments, but the two values are consistent within the errors. These values are in agreement with those found by e.g. Lehner et al. (2007) and Danforth et al. (2016).
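The slope of a distribution f(N) ∝ N^(−β) is typically obtained from a straight-line fit in log-log space, which can be sketched as follows (the bin values are invented for illustration, generated from an exact power law rather than taken from this study):

```python
from math import log10

def loglog_slope(col_density, counts):
    """Least-squares slope of log10(counts) versus log10(N); for a
    distribution f(N) proportional to N^(-beta) the slope equals -beta."""
    xs = [log10(v) for v in col_density]
    ys = [log10(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical binned absorber counts drawn from an exact N^-1.6
# power law (invented numbers).
col_density = [10**13.0, 10**13.5, 10**14.0, 10**14.5]
counts = [n**-1.6 * 1e25 for n in col_density]
beta = -loglog_slope(col_density, counts)
print(round(beta, 2))  # recovers beta = 1.6
```

Real measurements scatter around the fitted line, so the quoted β values carry the uncertainties given above rather than being exact.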
The picture that emerges from this study regarding the relation between the IGM and the large-scale structure in the local Universe fits with what is found in other studies: while at least part of the gas traces the same filamentary structure as galaxies, the relation is complex. This study has shown that by taking a large sample of sightlines and comparing the data gathered from those with galaxy data, it is possible to study the gaseous large-scale structure. This approach can be used in the future together with simulations to get a better understanding of structure formation and evolution in the Universe.
As part of our everyday life we consume breaking news and interpret it based on our own viewpoints and beliefs. We have easy access to online social networking platforms and news media websites, where we inform ourselves about current affairs and often post about our own views, such as in news comments or social media posts. The media ecosystem enables opinions and facts to travel from news sources to news readers, from news article commenters to other readers, from social network users to their followers, etc. The views of the world many of us have depend on the information we receive via online news and social media. Hence, it is essential to maintain accurate, reliable and objective online content to safeguard democracy and veracity on the Web. To this end, we contribute to a trustworthy media ecosystem by analyzing news and social media in the context of politics to ensure that media serves the public interest. In this thesis, we use text mining, natural language processing and machine learning techniques to reveal underlying patterns in political news articles and political discourse in social networks.
Mainstream news sources typically cover a great amount of the same news stories every day, but they often place them in a different context or report them from different perspectives. In this thesis, we are interested in how distinct and predictable newspaper journalists are in the way they report the news, as a means to understand and identify their different political beliefs. To this end, we propose two models that classify text from news articles to their respective original news source, based on two text types: reported speech and news comments. Our goal is to capture systematic quoting and commenting patterns by journalists and news commenters respectively, which can lead us to the newspaper where the quotes and comments were originally published. Predicting news sources can help us understand the potentially subjective nature of news storytelling and the magnitude of this phenomenon. Revealing this hidden knowledge can restore our trust in media by advancing transparency and diversity in the news.
Media bias can be expressed in various subtle ways in the text and it is often challenging to identify these bias manifestations correctly, even for humans. However, media experts, e.g., journalists, are a powerful resource that can help us overcome the vague definition of political media bias, and they can also assist automatic learners in finding the hidden bias in the text. Given the enormous technological advances in artificial intelligence, we hypothesize that identifying political bias in the news could be achieved through the combination of sophisticated deep learning models and domain expertise. Therefore, our second contribution is a high-quality and reliable news dataset annotated by journalists for political bias, together with a state-of-the-art solution for this task based on curriculum learning. Our aim is to discover whether domain expertise is necessary for this task and to provide an automatic solution for this traditionally manually solved problem. User-generated content is fundamentally different from news articles: messages are shorter, they are often personal and opinionated, they refer to specific topics and persons, etc. Regarding political and socio-economic news, individuals in online communities use social networks to keep their peers up to date and to share their own views on ongoing affairs. We believe that social media is as powerful an instrument for information flow as news sources are, and we use its unique characteristic of rapid news coverage for two applications. We analyze Twitter messages and debate transcripts during live presidential debates to automatically predict the topics that Twitter users discuss. Our goal is to discover the favoured topics in online communities on the dates of political events as a way to understand the political subjects of public interest.
With the up-to-dateness of microblogs, an additional opportunity emerges, namely to use social media posts and leverage real-time information about discussed individuals to find their locations. That is, given a person of interest who is mentioned in online discussions, we use the wisdom of the crowd to automatically track their physical locations over time. We evaluate our approach in the context of politics, i.e., we predict the locations of US politicians as a proof of concept for important use cases, such as tracking people who pose national risks, e.g., warlords and wanted criminals.
The outstanding mechanical properties of natural inorganic-organic composite materials such as bone or mussel shells arise from their hierarchical structure, which extends from the nano- up to the macroscopic scale, and from a controlled connection along the interfaces of the inorganic and organic components.
Building on these key principles of biological materials design, this work investigated two concepts for the bioinspired structure formation of composites, based on gluing nano- or mesocrystals together with functionalised poly(2-oxazoline) block copolymers, and explored their potential for producing bioinspired, self-assembled, hierarchical inorganic-organic composite structures without external forces. The concepts differed in the inorganic particles used and in the mode of structure formation.
Using a modular approach of polymer synthesis and polymer-analogous thiol-ene functionalisation, a library of poly(2-oxazoline)s with different functionalities was successfully established. The block copolymers consist of a short particle-affine "adhesive block" of thiol-ene-functionalised poly(2-(3-butenyl)-2-oxazoline) and a long water-soluble, structure-forming block of thermoresponsive, crystallisable poly(2-isopropyl-2-oxazoline), which forms hierarchical morphologies. Analytical methods such as turbidimetry, DLS, DSC, SEM and XRD gave access to the thermoresponsive and crystallisation behaviour of the block copolymers as a function of the introduced adhesive block. These polymers were found to exhibit complex temperature- and pH-dependent clouding behaviour. With respect to crystallisation, the adhesive block did not alter the nanoscopic crystal structure, but it did influence the crystallisation time, the degree of crystallinity and the hierarchical morphology. This result was attributed to the different aggregation behaviour of the polymers in water.
For composite preparation, concept 1 used micrometre-sized copper oxalate mesocrystals with an internal nanostructure. Structure formation via the inorganic component was pursued by gluing and arranging these particles. Concept 1 yielded homogeneous, free-standing, stable composite films with a high inorganic content. However, the particle-polymer combination united unfavourable properties: the length scales of the components were too dissimilar, which prevented self-assembly of the particles. Owing to the low aspect ratio of copper oxalate, mutual alignment by external forces also remained unsuccessful. As a result, the copper oxalate/poly(2-oxazoline) model system is not suitable for producing hierarchical composite structures.
In contrast, concept 2 used disc-shaped Laponite® nanoparticles and crystallisable block copolymers for structure formation via the organic component through polymer-mediated self-assembly. Complementary analytical methods (zeta potential, DLS, SEM, XRD, DSC, TEM) revealed both a controlled interaction between the components in aqueous solution and a controlled structure formation resulting in self-assembled nanocomposites whose structure spans several length scales. It was shown that the negatively charged adhesive blocks bind specifically and selectively to the positively charged rims of the Laponite® particles, producing polymer-Laponite® nanohybrid particles that serve as building blocks for composite formation. At room temperature the hybrid particles are electrosterically stabilised: sterically by their long, water-interacting poly(2-isopropyl-2-oxazoline) blocks and electrostatically via the negatively charged Laponite® faces. Concept 2, and thus structure formation via the organic component, was implemented successfully. The Laponite®/poly(2-oxazoline) model system opened the way to self-assembled, layered, quasi-hierarchical nanocomposite structures with a high inorganic content. Depending on the freely available polymer concentration during composite formation, two different composite types emerged. In addition, this work proposed an explanatory model for the polymer-mediated formation process of the composite structures.
Overall, this work reveals structure-process-property relationships for forming self-assembled bioinspired composite structures and provides new insights into a suitable combination of components and preparation conditions that permit controlled, self-assembled structure formation with functionalised poly(2-oxazoline) block copolymers.
For autistic learners, institutional education involves manifold and specific obstacles. This is particularly true in the context of inclusion, whose relevance is established not least by the United Nations Convention on the Rights of Persons with Disabilities.
This thesis discusses numerous learning-relevant particularities in the context of autism and highlights discrepancies with institutional teaching concepts that are not always sufficiently appropriate. A central thesis is that the unusually intense attention autistic people devote to their special interests can be harnessed to ease learning with externally set content. Building on this, possible solutions are discussed, resulting in a novel concept for a digital, multi-device learning game.
A key challenge in designing game-based learning lies in adequately embedding learning content in a captivating narrative context. Using the example of exercises on the emotional interpretation of facial expressions, which are used to teach socio-emotional skills particularly within therapy concepts for autism, an appropriate narrative is presented that allows this very specific learning content to be embedded with minimal disruption.
The effects of the individual design elements are examined using a prototype learning game. Building on this, a quantitative study demonstrated the game's good acceptance and usability and, above all, confirmed the comprehensibility of the narrative and the game elements. A further focus lies on the minimally invasive investigation of possible disruptions to the gaming experience caused by switching between different devices, for which an innovative measurement method was developed.
In conclusion, this thesis illuminates the significance and the limits of game-based approaches for autistic learners. A large part of the concepts presented can be transferred to other learning scenarios. The technical framework developed for realising narrative learning paths is likewise prepared for use in further learning scenarios, especially in institutional contexts.
Permafrost is warming globally, which leads to widespread permafrost thaw and impacts the surrounding landscapes, ecosystems and infrastructure. Especially ice-rich permafrost is vulnerable to rapid and abrupt thaw resulting from the melting of excess ground ice. Local remote sensing studies have detected increasing rates of abrupt permafrost disturbances, such as thermokarst lake change and drainage, coastal erosion and retrogressive thaw slumps (RTS), in the last two decades. All of these indicate an acceleration of permafrost degradation.
Retrogressive thaw slumps (RTS) in particular are abrupt disturbances that expand by up to several metres each year; they alter local to regional topographic gradients and hydrological pathways, increase sediment and nutrient mobilisation into aquatic systems, and enhance permafrost carbon mobilisation. The feedback between abrupt permafrost thaw and the carbon cycle is a crucial component of the Earth system and a relevant driver in global climate models. However, an assessment of RTS at high temporal resolution, which would resolve the dynamic thaw processes and identify the main thaw drivers, as well as a continental-scale assessment across diverse permafrost regions, are still lacking.
In northern high latitudes optical remote sensing is restricted by environmental factors and frequent cloud coverage. This decreases image availability and thus constrains the application of automated algorithms for time series disturbance detection for large-scale abrupt permafrost disturbances at high temporal resolution. Since models and observations suggest that abrupt permafrost disturbances will intensify, we require disturbance products at continental-scale, which allow for meaningful integration into Earth system models.
The main aim of this dissertation, therefore, is to enhance our knowledge of the spatial extent and temporal dynamics of abrupt permafrost disturbances in a large-scale assessment. To address this, three research objectives were posed:
1. Assess the comparability and compatibility of Landsat-8 and Sentinel-2 data for a combined use in multi-spectral analysis in northern high latitudes.
2. Adapt an image mosaicking method for Landsat and Sentinel-2 data to create combined mosaics of high quality as input for high temporal disturbance assessments in northern high latitudes.
3. Automatically map retrogressive thaw slumps at the landscape scale and assess their thaw dynamics at high temporal resolution.
We assessed the comparability of Landsat-8 and Sentinel-2 imagery by spectral comparison of corresponding bands. Based on overlapping same-day acquisitions of Landsat-8 and Sentinel-2, we derived spectral bandpass adjustment coefficients for North Siberia to adjust Sentinel-2 reflectance values to resemble Landsat-8 and harmonise the two data sets. Furthermore, we adapted a workflow to combine Landsat and Sentinel-2 images into homogeneous and gap-free annual mosaics. We determined the number of images and cloud-free pixels, the spatial coverage and the quality of the mosaic with spectral comparisons to demonstrate the relevance of the Landsat+Sentinel-2 mosaics. Lastly, we adapted the automatic disturbance detection algorithm LandTrendr for large-scale RTS identification and mapping at high temporal resolution. For this, we modified the temporal segmentation algorithm for annual gradual and abrupt disturbance detection to incorporate the annual Landsat+Sentinel-2 mosaics. We further parametrised the temporal segmentation and spectral filtering for optimised RTS detection, conducted additional spatial masking and filtering, and implemented a binary object classification algorithm with machine learning to derive RTS from the LandTrendr disturbance output. We applied the algorithm to North Siberia, covering an area of 8.1 × 10⁶ km².
The spectral band comparison between same-day Landsat-8 and Sentinel-2 acquisitions already showed an overall good fit between the two satellite products. However, applying the derived spectral bandpass coefficients to adjust the Sentinel-2 reflectance values resulted in a near-perfect alignment between the same-day images. It can therefore be concluded that the spectral band adjustment succeeds in adjusting Sentinel-2 spectral values to those of Landsat-8 in North Siberia.
The number of available cloud-free images increased steadily between 1999 and 2019, intensifying after 2016 with the addition of Sentinel-2 images. This signifies a greatly improved input database for the mosaicking workflow. In a comparison of annual mosaics, the Landsat+Sentinel-2 mosaics always fully covered the study areas, while Landsat-only mosaics contained data gaps for the same years. The spectral comparison of input images and Landsat+Sentinel-2 mosaics showed a high correlation between the input images and the mosaic bands, attesting to the high quality of the mosaicking results. Our results show that especially the mosaic coverage of northern, coastal areas was substantially improved by the Landsat+Sentinel-2 mosaics. By combining data from both Landsat and Sentinel-2 sensors, we reliably created input mosaics at high spatial resolution for comprehensive time series analyses.
This research presents the first automatically derived assessment of RTS distribution and temporal dynamics at continental scale. In total, we identified 50,895 RTS, primarily located in ice-rich permafrost regions, as well as a steady increase in RTS-affected areas between 2001 and 2019 across North Siberia. From 2016 onward the RTS area increased more abruptly, indicating heightened thaw slump dynamics in this period. Overall, the RTS-affected area increased by 331 % within the observation period. In contrast, five focus sites show spatiotemporal variability in their annual RTS dynamics, alternating between periods of increased and decreased RTS development, which suggests a close relationship to varying thaw drivers. The majority of identified RTS were active from 2000 onward and only a small proportion initiated during the assessment period. This highlights that the increase in RTS-affected area was caused mainly by the enlargement of existing RTS and not by newly initiated RTS.
Overall, this research showed the advantages of combining Landsat and Sentinel-2 data in northern high latitudes and the improvements in spatial and temporal coverage of the combined annual mosaics. The mosaics form the database for automated disturbance detection to reliably map RTS and other abrupt permafrost disturbances at continental scale. The assessment at high temporal resolution further attests to the increasing impact of abrupt permafrost disturbances and likewise emphasises the spatio-temporal variability of thaw dynamics across landscapes. Obtaining such consistent disturbance products is necessary to parametrise regional and global climate change models, enabling an improved representation of the permafrost thaw feedback.
Teachers' content knowledge is of great importance for the development of pedagogical content expertise. However, it is still largely unclear which characteristics university courses should have in order to impart profession-specific content knowledge to pre-service teachers.
Within the PSI-Potsdam project, the cross-disciplinary model of extended content knowledge for the school context was developed on a theoretical basis. As an approach to improving the biology teacher-training programme, this model served as the conceptual foundation for an additional course. The course offers learning opportunities to apply content knowledge of cell biology acquired at university to school contexts, e.g. through the deconstruction and subsequent reconstruction of school learning texts. The effect of the seminar was studied over several cycles in the research format of didactical design research (Fachdidaktische Entwicklungsforschung). One of the central research questions is: How can a learning opportunity for pre-service biology teachers be designed to foster extended content knowledge for the school context in the cell-biology topic area "structure and function of the biomembrane"?
Cross-case analyses (n = 29) in the empirical part reveal the attitudes toward the teacher-training programme present in the sample. An important finding is that the students studied show a strikingly different subject interest in content taught at school versus at university, with school knowledge attracting markedly higher interest. Students frequently judge the professional relevance of subject content by its connection to school knowledge.
Detailed single-case analyses (n = 6) use learning pathways to show how subject-matter concepts developed across several design experiments. The description focuses primarily on key moments and hurdles in the learning process. Based on these results, the iterations made in the individual cycles are described and likewise presented in terms of the iterative development of the design principles.
It could be shown that the key moments emerge very individually, depending on the content a student subjectively focuses on. Mostly, however, they occur when different subject-matter concepts are linked or when concepts are unpacked cooperatively. Subject-matter hurdles, by contrast, could be identified across cases in the form of scientifically inappropriate conceptions. These include the conception of the biomembrane as a wall, which goes along with the conceptions of a protective and a shape-giving function of the biomembrane.
Furthermore, it is examined how extended content knowledge for the school context was applied in working on the learning tasks. It emerged that particular learning opportunities are suited to fostering particular facets of extended content knowledge.
Overall, the model of extended content knowledge for the school context appears highly suitable for designing learning opportunities, or design principles for them, on the basis of its facets and their descriptions. For the teaching-learning arrangement studied, minor adaptations of the model proved useful. With regard to methodology, conclusions could be drawn for applying didactical design research to additional subject-matter courses of this kind.
To strengthen the professional relevance of the subject-matter components of teacher-training programmes, further integration of extended content knowledge for the school context into these components is highly desirable.
Insulin resistance is a central component of the metabolic syndrome and contributes substantially to the development of type 2 diabetes. One possible cause of insulin resistance is a chronic low-grade inflammation originating in the adipose tissue of obese individuals. Infiltrating macrophages produce increased amounts of pro-inflammatory mediators such as cytokines and prostaglandins, raising the concentrations of these substances both locally and systemically. In addition, obese individuals exhibit a disturbed fatty-acid metabolism and increased intestinal permeability. An increased flux of free fatty acids from adipose tissue into other organs raises their local concentrations in those organs, while increased intestinal permeability facilitates the entry of pathogens and other foreign substances into the body.
The aim of this work was to investigate whether high concentrations of insulin, the bacterial component lipopolysaccharide (LPS) or the free fatty acid palmitate can trigger or amplify an inflammatory response in macrophages, and whether this inflammatory response can contribute to the development of insulin resistance. A further aim was to examine whether metabolites and signalling substances whose concentrations are elevated in the metabolic syndrome can promote the production of prostaglandin (PG) E2, and whether PGE2 in turn can regulate the inflammatory response and its own production in macrophages. To study the influence of these factors on the production of pro-inflammatory mediators in macrophages, monocyte-like cell lines and primary human monocytes isolated from the blood of healthy donors were differentiated into macrophages and incubated with insulin, LPS, palmitate and/or PGE2. In addition, primary rat hepatocytes were isolated and incubated with supernatants of insulin-stimulated macrophages to examine whether the inflammatory response of macrophages contributes to the development of insulin resistance in hepatocytes.
Insulin induced the expression of pro-inflammatory cytokines in macrophage-like cell lines, most likely primarily via the phosphoinositide 3-kinase (PI3K)-Akt pathway with subsequent activation of the transcription factor NF-κB (nuclear factor 'kappa-light-chain-enhancer' of activated B cells). Delivered as supernatants of insulin-stimulated macrophages, the cytokines released in this way inhibited insulin-induced glucokinase expression in primary rat hepatocytes.
LPS and palmitate, whose local concentrations are elevated in the metabolic syndrome, were likewise able to stimulate the expression of pro-inflammatory cytokines in macrophage-like cell lines. While LPS, according to the literature, indisputably acts via activation of Toll-like receptor (TLR) 4, palmitate appears to act largely independently of TLR4; instead, de novo ceramide synthesis seemed to play a decisive role. Moreover, insulin amplified both the LPS- and the palmitate-induced inflammatory response in both cell lines. The results obtained in cell lines were largely confirmed in primary human macrophages.
Furthermore, insulin, LPS and palmitate all induced the production of PGE2 in the macrophages studied. The data suggest that this is due to an increased expression of PGE2-synthesising enzymes.
PGE2 in turn inhibited, on the one hand, the stimulus-dependent expression of the pro-inflammatory cytokine tumour necrosis factor (TNF) α in U937 macrophages. On the other hand, it enhanced the expression of the pro-inflammatory cytokines interleukin (IL-) 1β and IL-8. It also enhanced the expression of IL-6-type cytokines, which can act both pro- and anti-inflammatorily, as well as the expression of PGE2-synthesising enzymes; PGE2 therefore appears able to amplify its own synthesis.
In summary, the release of pro-inflammatory mediators from macrophages during hyperinsulinaemia can promote the development of insulin resistance. Insulin is thus capable of setting in motion a vicious circle of ever-worsening insulin resistance.
Metabolites and signalling substances whose concentrations are elevated in the metabolic syndrome (for example LPS, free fatty acids and PGE2) also triggered inflammatory responses in macrophages. The mutual interplay of insulin with these metabolites and signalling substances elicited a stronger inflammatory response in macrophages than any of the individual components alone. The cytokines released in this way could contribute to the manifestation of insulin resistance and the metabolic syndrome.
Spatiotemporal variations of key air pollutants and greenhouse gases in the Himalayan foothills
(2021)
South Asia is a rapidly developing, densely populated and highly polluted region that is facing the impacts of increasing air pollution and climate change, and yet it remains one of the least studied regions of the world scientifically. In recognition of this situation, this thesis focuses on studying (i) the spatial and temporal variation of key greenhouse gases (CO2 and CH4) and air pollutants (CO and O3) and (ii) the vertical distribution of air pollutants (PM, BC) in the foothills of the Himalaya. Measurements were conducted during 2013-2014 and 2016 at five sites in the Kathmandu Valley, the capital region of Nepal, and at two sites outside the valley in the Makawanpur and Kaski districts. These measurements are analyzed in this thesis.
The CO measurements at multiple sites in the Kathmandu Valley showed a clear diurnal cycle: morning and evening levels were high, with an afternoon dip. The diurnal cycles of CO2 and CH4 differ slightly, with their mixing ratios increasing after the afternoon dip until the morning peak the next day. The mixing layer height (MLH) of the nocturnal stable layer is relatively constant (~ 200 m) during the night, after which it transitions to a convective mixing layer during the day, with the MLH increasing up to 1200 m in the afternoon. Pollutants are thus largely trapped in the valley from the evening until sunrise the following day, and their concentrations increase due to emissions during the night. During the afternoon, the pollutants are diluted by the circulation of the valley winds after the break-up of the mixing layer. The major emission sources of GHGs and air pollutants in the valley are the transport sector, residential cooking, brick kilns, trash burning, and agro-residue burning. Brick industries are influential in the winter and pre-monsoon seasons, and contributions from regional forest fires and agro-residue burning are seen during the pre-monsoon season. In addition, relatively higher CO values were observed at the valley outskirts (Bhimdhunga and Naikhandi), indicating the contribution of regional emission sources; this was also supported by the presence of higher concentrations of O3 during the pre-monsoon season.
The mixing ratios of CO2 (419.3 ± 6.0 ppm) and CH4 (2.192 ± 0.066 ppm) in the valley were much higher than at background sites, including the Mauna Loa observatory (CO2: 396.8 ± 2.0 ppm, CH4: 1.831 ± 0.110 ppm) and Waliguan, China (CO2: 397.7 ± 3.6 ppm, CH4: 1.879 ± 0.009 ppm), as well as at the urban site Shadnagar, India (CH4: 1.92 ± 0.07 ppm).
The daily maximum 8-hour O3 average in the Kathmandu Valley exceeded the WHO recommended value on more than 80% of days during the pre-monsoon period, which represents a significant risk for human health and ecosystems in the region. Moreover, the measurements of the vertical distribution of particulate matter, made using an ultralight aircraft and the first of their kind in the region, detected an elevated polluted layer at ca. 3000 m a.s.l. over the Pokhara Valley. This layer could be associated with large-scale regional transport of pollution. These contributions towards understanding the distributions of key air pollutants and their main sources will provide helpful information for developing management plans and policies to help reduce the risks for the millions of people living in the region.
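The exceedance metric used here, the daily maximum of the 8-hour running-mean O3, can be computed as sketched below. The hourly values are illustrative, not the measured Kathmandu data, and the 50 ppb threshold is only an approximate conversion of the WHO 8-hour guideline of 100 µg/m³:

```python
import numpy as np

def daily_max_8h_mean(hourly_o3):
    """Daily maximum 8-hour running-mean O3 from 24 hourly values (ppb)."""
    hourly_o3 = np.asarray(hourly_o3, dtype=float)
    # 8-hour running means for windows starting at hours 0..16
    windows = [hourly_o3[i:i + 8].mean() for i in range(len(hourly_o3) - 7)]
    return max(windows)

# Illustrative diurnal profile with an afternoon peak (ppb)
hourly = [20] * 8 + [40, 55, 70, 80, 85, 80, 70, 55] + [30] * 8
metric = daily_max_8h_mean(hourly)
exceeds_who = metric > 50  # ~100 µg/m³ expressed in ppb (approximate)
```

Counting days where this flag is set over the pre-monsoon period yields the exceedance frequency reported above.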
In Systems Medicine, in addition to high-throughput molecular data (*omics), the wealth of clinical characterization plays a major role in the overall understanding of a disease. Unique problems and challenges arise from the heterogeneity of data and require new solutions to software and analysis methods. The SMART and EurValve studies establish a Systems Medicine approach to valvular heart disease -- the primary cause of subsequent heart failure.
With the aim of ascertaining a holistic understanding, different *omics as well as the clinical picture of patients with aortic stenosis (AS) and mitral regurgitation (MR) are collected. Our task within the SMART consortium was to develop an IT platform for Systems Medicine as a basis for data storage, processing, and analysis, and thus as a prerequisite for collaborative research. Based on this platform, this thesis deals on the one hand with transferring established Systems Biology methods to the Systems Medicine context, and on the other hand with the clinical and biomolecular differences between the two heart valve diseases. To advance differential expression/abundance (DE/DA) analysis software for use in Systems Medicine, we state 21 general requirements and features of automated DE/DA software, including a novel concept for the simple formulation of experimental designs that can represent complex hypotheses, such as comparisons of multiple experimental groups, and demonstrate our handling of the wealth of clinical data in two research applications, DEAME and Eatomics. In user interviews, we show that novice users are empowered to formulate and test their multiple DE hypotheses based on clinical phenotype. Furthermore, we describe insights into users' general impression and expectation of the software's performance and show their intention to continue using the software for their work in the future. Both research applications cover most of the features of existing tools or even extend them, especially with respect to complex experimental designs. Eatomics is freely available to the research community as a user-friendly R Shiny application.
Eatomics subsequently helped drive the collaborative analysis and interpretation of the proteomic profile of 75 human left myocardial tissue samples from the SMART and EurValve studies. Here, we investigate molecular changes within the two most common types of valvular heart disease: aortic valve stenosis (AS) and mitral valve regurgitation (MR). Through DE/DA analyses, we explore shared and disease-specific protein alterations, particularly signatures that could only be found in the sex-stratified analysis. In addition, we relate changes in the myocardial proteome to parameters from clinical imaging. We find comparable cardiac hypertrophy but differences in ventricular size, the extent of fibrosis, and cardiac function. AS and MR show many shared remodeling effects, the most prominent of which is an increase in the extracellular matrix and a decrease in metabolism; both effects are stronger in AS. Among muscle and cytoskeletal adaptations, we see a greater increase in mechanotransduction in AS and an increase in the cortical cytoskeleton in MR. The decrease in proteostasis proteins is mainly attributable to the signature of female patients with AS. We also identify relevant therapeutic targets.
In addition to the new findings, our work confirms several concepts from animal and heart failure studies by providing the largest collection to date of human tissue from in vivo collected biopsies. Our dataset contributes a resource for isoform-specific protein expression in two of the most common valvular heart diseases. Beyond the general proteomic landscape, we demonstrate the added value of the dataset by showing proteomic and transcriptomic evidence for increased expression of the SARS-CoV-2 receptor under pressure load but not under volume load in the left ventricle, and we also provide the basis of a newly developed metabolic model of the heart.
The Earth's electron radiation belts exhibit a two-zone structure, with the outer belt being highly dynamic due to the constant competition between a number of physical processes, including acceleration, loss, and transport. The flux of electrons in the outer belt can vary over several orders of magnitude, reaching levels that may disrupt satellite operations. Therefore, understanding the mechanisms that drive these variations is of high interest to the scientific community.
In particular, the important role played by loss mechanisms in controlling relativistic electron dynamics has become increasingly clear in recent years. It is now widely accepted that radiation belt electrons can be lost either by precipitation into the atmosphere or by transport across the magnetopause, called magnetopause shadowing. Precipitation of electrons occurs due to pitch-angle scattering by resonant interaction with various types of waves, including whistler mode chorus, plasmaspheric hiss, and electromagnetic ion cyclotron (EMIC) waves. In addition, the compression of the magnetopause due to increases in solar wind dynamic pressure can substantially deplete electrons at high L shells, where they find themselves on open drift paths, whereas electrons at low L shells can be lost through outward radial diffusion. Nevertheless, the role played by each physical process during electron flux dropouts remains a fundamental puzzle.
Differentiation between these processes and quantification of their relative contributions to the evolution of radiation belt electrons requires high-resolution profiles of phase space density (PSD). However, such profiles of PSD are difficult to obtain due to restrictions of spacecraft observations to a single measurement in space and time, which is also compounded by the inaccuracy of instruments. Data assimilation techniques aim to blend incomplete and inaccurate spaceborne data with physics-based models in an optimal way. In the Earth's radiation belts, it is used to reconstruct the entire radial profile of electron PSD, and it has become an increasingly important tool in validating our current understanding of radiation belt dynamics, identifying new physical processes, and predicting the near-Earth hazardous radiation environment.
In this study, sparse measurements from Van Allen Probes A and B and Geostationary Operational Environmental Satellites (GOES) 13 and 15 are assimilated into the three-dimensional Versatile Electron Radiation Belt (VERB-3D) diffusion model by means of a split-operator Kalman filter over a four-year period from 1 October 2012 to 1 October 2016. In comparison to previous works, the 3D model accounts for more physical processes, namely mixed pitch angle-energy diffusion, scattering by EMIC waves, and magnetopause shadowing. It is shown how data assimilation, by means of the innovation vector (the residual between observations and model forecast), can be used to account for missing physics in the model. This method is used to identify the radial distances from the Earth and the geomagnetic conditions under which the model is inconsistent with the measured PSD for different values of the adiabatic invariants μ and K. As a result, the Kalman filter adjusts the predictions to match the observations, which is interpreted as evidence of where and when additional source or loss processes are active.
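The innovation-vector diagnostic can be illustrated with a minimal scalar Kalman update; this is a toy sketch with invented numbers, not the split-operator VERB-3D setup:

```python
def kalman_update(x_forecast, p_forecast, y_obs, r_obs):
    """Scalar Kalman filter update step.

    The innovation d = y_obs - x_forecast is the residual between the
    observation and the model forecast; persistently nonzero innovations
    indicate physics missing from the model (an unmodelled source or loss).
    """
    innovation = y_obs - x_forecast
    gain = p_forecast / (p_forecast + r_obs)       # Kalman gain
    x_analysis = x_forecast + gain * innovation    # state pulled toward data
    p_analysis = (1.0 - gain) * p_forecast         # reduced uncertainty
    return x_analysis, p_analysis, innovation

# Toy example: the model under-predicts PSD; the observation pulls it up.
x, p, d = kalman_update(x_forecast=1.0, p_forecast=0.5, y_obs=1.6, r_obs=0.25)
```

Inspecting where and when the innovation is systematically positive or negative, as a function of location and geomagnetic conditions, is exactly the diagnostic the thesis applies in three dimensions.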
Furthermore, two distinct loss mechanisms responsible for the rapid dropouts of radiation belt electrons are investigated: EMIC wave-induced scattering and magnetopause shadowing. The innovation vector is inspected for values of the invariant μ ranging from 300 to 3000 MeV/G, and a statistical analysis is performed to quantitatively assess the effect of both processes as a function of various geomagnetic indices, solar wind parameters, and radial distance from the Earth. The results of this work are in agreement with previous studies that demonstrated the energy dependence of these two mechanisms. EMIC wave scattering dominates loss at lower L shells and may amount to between 10%/hr and 30%/hr of the maximum value of PSD over all L shells for fixed first and second adiabatic invariants. On the other hand, magnetopause shadowing is found to deplete electrons across all energies, mostly at higher L shells, resulting in losses from 50%/hr to 70%/hr of the maximum PSD. Nevertheless, during times of enhanced geomagnetic activity, both processes can operate beyond these locations and encompass the entire outer radiation belt.
The results of this study are two-fold. Firstly, it demonstrates that the 3D data-assimilative code provides a comprehensive picture of the radiation belts and is an important step toward performing reanalysis using observations from current and future missions. Secondly, it achieves a better understanding of, and provides critical clues to, the dominant loss mechanisms responsible for the rapid dropouts of electrons at different locations across the outer radiation belt.
Carbonatite magmatism is a highly efficient transport mechanism from the Earth's mantle to the crust, thus providing insights into the chemistry and dynamics of the Earth's mantle. One evolving and promising tool for tracing magma interaction is stable iron isotopes, particularly because iron isotope fractionation is controlled by oxidation state and bonding environment. Meanwhile, a large data set on iron isotope fractionation in igneous rocks exists, comprising bulk rock compositions and fractionation between mineral groups. Iron isotope data from natural carbonatite rocks are extremely light and remarkably variable. This resembles iron isotope data from mantle xenoliths, which are characterized by a variability in δ56Fe spanning three times the range found in basalts and by the extremely light values of some whole rock samples, reaching δ56Fe as low as -0.69 ‰ in a spinel lherzolite. The cause of this large range of variations may be metasomatic processes involving metasomatic agents such as volatile-bearing, highly alkaline silicate melts or carbonate melts. The expected effects of metasomatism on iron isotope fractionation vary with parameters such as the melt/rock ratio, the reaction time, and the nature of the metasomatic agents and mineral reactions involved. An alternative or additional way to enrich light isotopes in the mantle could be multiple phases of melt extraction. To interpret the existing data sets, more knowledge of iron isotope fractionation factors is needed.
To investigate the behavior of iron isotopes in carbonatite systems, kinetic and equilibration experiments in natro-carbonatite systems between immiscible silicate and carbonate melts were performed in an internally heated gas pressure vessel at intrinsic redox conditions, at temperatures between 900 and 1200 °C and pressures of 0.5 and 0.7 GPa. The iron isotope compositions of the coexisting silicate and carbonate melts were analyzed by solution MC-ICP-MS. The kinetic experiments employing a 58Fe-spiked starting material show that isotopic equilibrium is obtained after 48 hours. The experiments on equilibrium iron isotope fractionation between immiscible silicate and carbonate melts have shown that light isotopes are enriched in the carbonatite melt. The highest mean Δ56Fesil.melt-carb.melt of 0.13 ‰ was determined in a system with a strongly peralkaline silicate melt composition (ASI ≥ 0.21, Na/Al ≤ 2.7). In three systems with extremely peralkaline silicate melt compositions (ASI between 0.11 and 0.14), iron isotope fractionation could not be analytically resolved. The lowest mean Δ56Fesil.melt-carb.melt of 0.02 ‰ was determined in a system with an extremely peralkaline silicate melt composition (ASI ≤ 0.11, Na/Al ≥ 6.1). The observed iron isotope fractionation is most likely governed by the redox conditions of the system. Yet, in the systems where no fractionation occurred, structural changes induced by compositional changes possibly overrule the influence of redox conditions. This interpretation implies that the iron isotope system holds the potential to be useful not only for exploring redox conditions in magmatic systems, but also for detecting structural changes in a melt.
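The δ56Fe and Δ56Fe quantities used throughout follow the standard delta notation relative to the IRMM-014 reference standard. A small sketch makes them explicit; the isotope-ratio values below are illustrative, not measured:

```python
def delta56fe(r_sample, r_standard):
    """δ56Fe in per mil: relative deviation of a sample's 56Fe/54Fe
    ratio from the reference standard (conventionally IRMM-014)."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Illustrative 56Fe/54Fe ratios (placeholders, not measured values)
r_std = 15.698
d_sil = delta56fe(15.7008, r_std)   # silicate melt
d_carb = delta56fe(15.6988, r_std)  # carbonate melt, isotopically lighter
cap_delta = d_sil - d_carb          # Δ56Fe(sil.melt - carb.melt), in ‰
```

A positive Δ56Fe(sil.melt - carb.melt), as in this sketch, corresponds to the reported enrichment of light isotopes in the carbonatite melt.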
In situ iron isotope analyses by femtosecond laser ablation coupled to MC-ICP-MS were performed on magnetite and olivine grains to reveal variations in iron isotope composition on the micro scale. The investigated sample is a melilitite bomb from the Salt Lake Crater group at Honolulu (Oahu, Hawaii), showing strong evidence for interaction with a carbonatite melt. While the magnetite grains are rather homogeneous in their iron isotope compositions, the olivine grains span a far larger range in iron isotope ratios. The variability of δ56Fe in magnetite is limited, from -0.17 ‰ (± 0.11 ‰, 2SE) to +0.08 ‰ (± 0.09 ‰, 2SE), whereas δ56Fe in olivine ranges from -0.66 ‰ (± 0.11 ‰, 2SE) to +0.10 ‰ (± 0.13 ‰, 2SE). Olivine and magnetite grains hold different information regarding kinetic and equilibrium fractionation due to their different Fe diffusion coefficients. The observations made in the experiments and in the in situ iron isotope analyses suggest that the extremely light iron isotope signatures found in carbonatites are generated by several steps of isotope fractionation during carbonatite genesis, which may involve both equilibrium and kinetic fractionation. Since iron isotopic signatures in natural systems are generated by a combination of multiple factors (pressure, temperature, redox conditions, phase composition and structure, time scale), multi-tracer approaches are needed to explain the signatures found in natural rocks.
Boon and bane
(2021)
Semi-natural habitats (SNHs) in agricultural landscapes represent important refugia for biodiversity, including organisms providing ecosystem services. Their spill-over into agricultural fields may lead to the provision of regulating ecosystem services such as biological pest control, ultimately affecting agricultural yield. Still, it remains largely unexplored how different habitat types and their distributions in the surrounding landscape shape this provision of ecosystem services within arable fields. Hence, in this thesis I investigated the effect of SNHs on biodiversity-driven ecosystem services and disservices affecting wheat production, with an emphasis on the role and interplay of habitat type, distance to the habitat, and landscape complexity.
I established transects from the field border into the wheat field, starting either from a field-to-field border, a hedgerow, or a kettle hole, and assessed beneficial and detrimental organisms and their ecosystem functions, as well as wheat yield, at several in-field distances. Using this study design, I conducted three studies in which I related the impacts of SNHs on ecosystem service providers, at both the field and the landscape scale, to crop production.
In the first study, I observed yield losses close to SNHs for all transect types. Woody habitats, such as hedgerows, reduced yields more strongly than kettle holes, most likely due to shading from the tall vegetation structure. In order to find the biotic drivers of these yield losses close to SNHs, in the second study I measured infestation by selected wheat pests as potential ecosystem disservices to crop production. Besides relating their damage rates to the wheat yield of experimental plots, I studied the effect of SNHs on these pest rates at the field and at the landscape scale. Only weed cover could be associated with yield losses, with its strongest impact on wheat yield close to the SNH. While fungal seed infection rates did not respond to SNHs, fungal leaf infection and herbivory rates of cereal leaf beetle larvae were positively influenced by kettle holes. The latter even increased at kettle holes with increasing landscape complexity, suggesting a release from natural enemies at isolated habitats within the field interior.
In the third study, I found that ecosystem service providers also benefit from the presence of kettle holes. The distance to a SNH decreased the species richness of ecosystem service providers, whereby the spatial range depended on species mobility; arable weeds diminished rapidly, while carabids were less affected by the distance to a SNH. Conversely, weed seed predation increased with distance, suggesting that higher food availability at field borders might have diluted the predation on experimental seeds. Intriguingly, responses to landscape complexity were rather mixed: while weed species richness generally increased with landscape complexity, carabids followed a hump-shaped curve, with the highest species numbers and activity-density in simple landscapes. The latter might hint that carabids profit from a minimum endowment of SNHs, while a further increase impedes their mobility. Weed seed predation was affected differently by landscape complexity depending on the weed species offered. However, in habitat-rich landscapes, seed predation of the different weed species converged to similar rates, emphasising that landscape complexity can stabilise the provision of ecosystem services. Lastly, I could relate higher weed seed predation to an increase in wheat yield, even though seed predation did not diminish weed cover. The exact mechanisms of the provision of weed control to crop production remain to be investigated in future studies.
In conclusion, I found habitat-specific responses of ecosystem (dis)service providers and their functions, emphasizing the need to evaluate the effect of different habitat types on the provision of ecosystem services not only at the field scale, but also at the landscape scale. My findings confirm that, beyond recording the species richness of ecosystem (dis)service providers, assessing their functions is indispensable for relating the actual delivery of ecosystem (dis)services to crop production.
Virtualizing physical space
(2021)
The true cost of virtual reality is not the hardware, but the physical space it requires, as a one-to-one mapping of physical space to virtual space allows for the most immersive way of navigating in virtual reality. Such “real-walking” requires the physical space to be of the same size and shape as the virtual world it represents. This generally prevents real-walking applications from running in any space they were not designed for.
To reduce virtual reality’s demand for physical space, creators of such applications let users navigate virtual space by means of a treadmill, altered mappings of physical to virtual space, hand-held controllers, or gesture-based techniques. While all of these solutions succeed at reducing virtual reality’s demand for physical space, none of them reach the same level of immersion that real-walking provides.
Our approach is to virtualize physical space: instead of accessing physical space directly, we allow applications to express their need for space in an abstract way, which our software systems then map to the physical space available. We allow real-walking applications to run in spaces of different size, different shape, and in spaces containing different physical objects. We also allow users immersed in different virtual environments to share the same space.
Our systems achieve this by using a tracking-volume-independent representation of real-walking experiences — a graph structure that expresses the spatial and logical relationships between virtual locations, the virtual elements contained within those locations, and user interactions with those elements. When run in a specific physical space, this graph representation is used to define a custom mapping between the elements of the virtual reality application and the physical space by parsing the graph with a constraint solver. To re-use space, our system splits virtual scenes and overlaps virtual geometry. The system derives this split by hierarchically clustering the virtual objects, which form the nodes of a bipartite directed graph representing the logical ordering of events in the experience. We let applications express their demands for physical space and use pre-emptive scheduling between applications to have them share space. We present several application examples enabled by our system. They all enable real-walking, despite being mapped to physical spaces of different size and shape, containing different physical objects or other users.
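The core idea — an abstract graph of virtual locations mapped onto available physical space by a constraint solver — can be illustrated with a toy backtracking solver. The grid abstraction, names, and adjacency constraint below are illustrative simplifications, not the thesis's actual system:

```python
from itertools import product

def map_experience(edges, locations, grid_w, grid_h):
    """Toy constraint solver: place each virtual location on a distinct
    cell of a room grid so that locations connected by a walk edge land
    on 4-adjacent cells (so users can physically walk between them)."""
    cells = list(product(range(grid_w), range(grid_h)))
    assignment = {}

    def ok(loc, cell):
        if cell in assignment.values():
            return False  # one location per physical cell
        for a, b in edges:
            other = b if a == loc else a if b == loc else None
            if other in assignment:
                ox, oy = assignment[other]
                if abs(ox - cell[0]) + abs(oy - cell[1]) != 1:
                    return False  # a walk edge needs adjacent cells
        return True

    def solve(i):
        if i == len(locations):
            return True
        for cell in cells:
            if ok(locations[i], cell):
                assignment[locations[i]] = cell
                if solve(i + 1):
                    return True
                del assignment[locations[i]]
        return False

    return assignment if solve(0) else None

# A small linear experience, lobby -> hall -> vault, mapped into a 2x2 room.
layout = map_experience(
    edges=[("lobby", "hall"), ("hall", "vault")],
    locations=["lobby", "hall", "vault"],
    grid_w=2, grid_h=2)
print(layout)
```

The same experience graph can be handed a different grid (a different physical room) and re-solved, which is the sense in which the representation is tracking-volume-independent.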
We see substantial real-world impact in our systems. Today’s commercial virtual reality applications are generally designed to be navigated using less immersive solutions, as this allows them to be operated on any tracking volume. While this is a commercial necessity for the developers, it misses out on the higher immersion offered by real-walking. We let developers overcome this hurdle by allowing experiences to bring real-walking to any tracking volume, thus potentially bringing real-walking to consumers.
The present work focuses on minimising the use of toxic chemicals by integrating biobased monomers derived from fatty acid esters into photopolymerization processes, which are considered environmentally friendly. The internal double bond of oleic acid was converted into a more reactive (meth)acrylate or epoxy group. Biobased starting materials functionalized with different pendant groups were used in photopolymerizable formulations to design new polymeric structures under an ultraviolet light-emitting diode (UV-LED, 395 nm) via free-radical or cationic polymerization.
New (meth)acrylates (2, 3 and 4), each consisting of two isomers, namely methyl 9-((meth)acryloyloxy)-10-hydroxyoctadecanoate / methyl 9-hydroxy-10-((meth)acryloyloxy)octadecanoate (2 and 3) and methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4), were obtained by modifying an oleic acid mixture and were polymerized photochemically together with ionic liquid monomers (1a and 1b) bearing a long alkyl chain. The new (meth)acrylates are based on vegetable oil, and the ionic liquids (ILs) are non-volatile; both monomer types therefore follow a green approach. The photoinitiated polymerization of the new (meth)acrylates and ionic liquids was investigated in the presence of ethyl (2,4,6-trimethylbenzoyl)phenylphosphinate (Irgacure® TPO−L) or di(4-methoxybenzoyl)diethylgermane (Ivocerin®) as photoinitiator (PI). Additionally, the results were compared with those obtained for commercial 1,6-hexanediol di(meth)acrylate (5 and 6) to probe the potential of the biobased monomers to replace petroleum-derived materials with renewable resources in possible coating applications. The kinetic study shows that methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4) and the ionic liquids (1a and 1b) reach quantitative conversion after irradiation, which is important for practical applications. On the other hand, heat generation occurs over a longer time during the polymerization of the biobased systems or ILs.
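Conversion in such photopolymerization kinetic studies is typically obtained by integrating the photo-DSC exotherm and normalizing by the theoretical polymerization enthalpy. The sketch below uses a synthetic heat-flow curve and an illustrative monomer molar mass; the value ≈54.4 kJ/mol per methacrylate double bond is a common literature figure, not a number from the thesis:

```python
import numpy as np

DH_MOL = 54.4e3      # J/mol, commonly cited enthalpy per methacrylate C=C
M_MONOMER = 368.0    # g/mol, illustrative molar mass of a fatty-acid methacrylate
dh_per_gram = DH_MOL / M_MONOMER   # ~148 J/g theoretical full-conversion heat

# Synthetic photo-DSC exotherm: heat flow (W/g) decaying after the lamp is on.
t = np.linspace(0.0, 300.0, 600)             # time in seconds
heat_flow = 3.5 * np.exp(-t / 40.0)          # W/g (synthetic curve)

# Trapezoidal integration of heat flow over time gives the released heat (J/g);
# dividing by the theoretical enthalpy gives the double-bond conversion.
released = float(np.sum((heat_flow[1:] + heat_flow[:-1]) / 2 * np.diff(t)))
conversion = released / dh_per_gram
print(f"double-bond conversion: {conversion:.1%}")
```

A slowly decaying exotherm, as for the biobased systems above, means the same total heat is released over a longer time, i.e. a lower polymerization rate at comparable final conversion.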
Poly(meth)acrylates derived from (meth)acrylated fatty acid methyl ester monomers generally show a low glass transition temperature because of the long aliphatic chains in the polymer structure, whereas poly(meth)acrylates containing aromatic groups have a higher glass transition temperature. Therefore, the new monomer 4-(4-methacryloyloxyphenyl)-butan-2-one (7) was synthesized as a promising candidate for green techniques such as light-induced polymerization. Its photokinetics were investigated using Irgacure® TPO−L or Ivocerin® as photoinitiator, and its reactivity was compared with that of commercial 2-phenoxyethyl methacrylate (8) and phenyl methacrylate (9) on the basis of the differences in monomer structure. The photopolymer of 4-(4-methacryloyloxyphenyl)-butan-2-one (7), combining quantitative conversion with high molecular weight and a higher glass transition temperature, might be an interesting candidate for coating applications.
In addition to the linear systems based on renewable materials, new crosslinked polymers were designed in this thesis. To this end, an isomer mixture consisting of ethane-1,2-diyl bis(9-methacryloyloxy-10-hydroxy octadecanoate), ethane-1,2-diyl 9-hydroxy-10-methacryloyloxy-9’-methacryloyloxy-10’-hydroxy octadecanoate and ethane-1,2-diyl bis(9-hydroxy-10-methacryloyloxy octadecanoate) (10), not previously described in the literature, was synthesized by derivatization of oleic acid. A crosslinked material based on this biobased monomer was produced by photoinitiated free-radical polymerization using Irgacure® TPO−L or Ivocerin® as photoinitiator. Furthermore, the material properties were diversified by copolymerization of 10 with 4-(4-methacryloyloxyphenyl)-butan-2-one (7) or methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4). The influence of comonomers with different chemical structures on the network was investigated by analysing the thermo-mechanical properties, the crosslink density and the molecular weight between crosslink junctions. An increase in the glass transition temperature caused by copolymerization of the biobased monomer 10 with an excess of 4-(4-methacryloyloxyphenyl)-butan-2-one (7) was confirmed by both differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA). On the other hand, the crosslink density decreased as a result of the copolymerization reactions, owing to the reduction in the mean functionality of the system. Furthermore, the surfaces were characterized by contact-angle measurements using solvents of different polarity.
This work also adds to the limited data on the cationic photopolymerization of epoxidized vegetable oils, which contrasts with the widely investigated thermal curing of biorenewable epoxy monomers. In addition to 9,10-epoxystearic acid methyl ester (11), the new monomer bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) was synthesized from oleic acid. These two biobased epoxies were polymerized via cationic photoinitiated polymerization in the presence of bis(t-butyl)-iodonium-tetrakis(perfluoro-t-butoxy)aluminate ([Al(O-t-C4F9)4]−) and isopropylthioxanthone (ITX) as the photoinitiating system. The polymerization kinetics of 9,10-epoxystearic acid methyl ester (11) and bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) were investigated and compared with those of the commercial monomers 3,4-epoxycyclohexylmethyl-3’,4’-epoxycyclohexane carboxylate (13), 1,4-butanediol diglycidyl ether (14) and diglycidyl ether of bisphenol-A (15). Both biobased epoxies (11 and 12) showed higher conversion than the cycloaliphatic epoxy (13) and lower reactivity than 1,4-butanediol diglycidyl ether (14). Additional network systems were designed by copolymerization of bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) and diglycidyl ether of bisphenol-A (15) in different molar ratios (1:1, 1:5, 1:9). The results indicate that the final conversion depends on the polymerization rate as well as on physical processes such as vitrification during polymerization. Moreover, the low glass transition temperature of the homopolymer derived from bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) was successfully increased by copolymerization with diglycidyl ether of bisphenol-A (15). On the other hand, the surface produced from bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) is hydrophobic, and a higher concentration of the biobased diepoxy (12) in the copolymerizing mixture decreases the surface free energy.
The network systems were also analysed according to rubber elasticity theory. The crosslinked polymer derived from the mixture of bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) and diglycidyl ether of bisphenol-A (15) (molar ratio 1:5) exhibits an almost ideal polymer network.
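The rubber-elasticity analysis referred to above commonly estimates the crosslink density from the rubbery-plateau storage modulus via E' = 3νRT, and the molar mass between crosslinks via Mc = ρ/ν. A sketch with purely illustrative input values (not measurements from the thesis):

```python
R = 8.314  # J/(mol*K), universal gas constant

def crosslink_density(e_prime_pa, temp_k):
    """Crosslink density nu (mol/m^3) from the rubbery-plateau storage
    modulus via the affine rubber-elasticity relation E' = 3*nu*R*T."""
    return e_prime_pa / (3 * R * temp_k)

def mc(density_kg_m3, nu_mol_m3):
    """Average molar mass between crosslink junctions, Mc = rho/nu (kg/mol)."""
    return density_kg_m3 / nu_mol_m3

# Illustrative inputs: E' = 3 MPa in the rubbery plateau at 423 K,
# polymer density 1100 kg/m^3 (assumed numbers, not measurements).
nu = crosslink_density(3e6, 423)
print(f"nu = {nu:.0f} mol/m^3, Mc = {mc(1100, nu) * 1e3:.0f} g/mol")
```

A lower plateau modulus thus translates directly into a lower crosslink density and a larger Mc, which is how copolymerization-induced changes in network density are quantified.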
The presented study investigated the influence of microbial and biogeochemical processes on the transport-related physical properties and the fate of microplastics in freshwater reservoirs. The overarching goal was to elucidate the mechanisms leading to the sedimentation and deposition of microplastics in such environments. This is important, as large amounts of initially buoyant microplastics are found in reservoir sediments worldwide, yet the transport processes that lead to microplastic accumulation in sediments have so far been understudied.
The impact of biofilm formation on the density and subsequent sedimentation of microplastics was investigated in the eutrophic Bautzen reservoir (Chapter 2). Biofilms are complex microbial communities fixed to submerged surfaces by a slimy organic film. The mineral calcite was detected in the biofilms and led to the sinking of the overgrown microplastic particles. The calcite was of biogenic origin, most likely precipitated by sessile cyanobacteria within the biofilms.
Biofilm formation was also studied in the mesotrophic Malter reservoir. Unlike in Bautzen reservoir, biofilm formation did not govern the sedimentation of the different microplastics there (Chapter 3). Instead, autumnal lake mixing led to the formation of sinking aggregates of microplastics and iron colloids. Such colloids form when anoxic, iron-rich water from the hypolimnion mixes with the oxygenated epilimnetic waters. The colloids bind organic material from the lake water, which leads to the formation of large, sinking iron-organo flocs.
Hence, the formation of iron-organo flocs and their influence on the buoyancy of microplastics and their burial in sediments of Bautzen reservoir were studied in laboratory experiments (Chapter 4). Microplastics of different shapes (fibres, fragments, spheres) and sizes were readily incorporated into sinking iron-organo flocs. In this way, initially buoyant polyethylene microplastics were transported onto sediments from Bautzen reservoir. Shortly after deposition, the microplastic-bearing flocs started to subside and transported the pollutants into deeper sediment layers. The microplastics were not released from the sediments within two months of laboratory incubation.
The stability of floc-mediated microplastic deposition was further investigated in experiments with the iron-reducing model organism Shewanella oneidensis (Chapter 5). It was shown that reduction or re-mineralization of the iron minerals did not affect the integrity of the iron-organo flocs: the organic matrix was stable under iron-reducing conditions, and hence no incorporated microplastics were released from the flocs. As similar processes are likely to take place in natural sediments, this might explain the low microplastic release from the sediments described above.
This thesis introduced different mechanisms leading to the sedimentation of initially buoyant microplastics and to their subsequent deposition in freshwater reservoirs. Novel processes such as aggregation with iron-organo flocs were identified, and the understudied issue of biofilm densification through biogenic mineral formation was investigated further. The findings have implications for the fate of microplastics within river-reservoir systems and outline the role of freshwater reservoirs as important accumulation zones for microplastics: microplastics deposited in reservoir sediments might not be transported further by the through-flowing river. The study might thus contribute to better risk assessments and transport balances for these anthropogenic contaminants.
This project describes the nominal, verbal and ‘truncation’ systems of Awing and explains the syntactic and semantic functions of the multifunctional lə (LE) morpheme in copular and wh-focused constructions. Awing is a Grassfields Bantu language spoken in the North West region of Cameroon. The work begins with morphological processes, viz. deverbals, compounding, reduplication and borrowing, and a thorough presentation of the pronominal system, and then takes on verbal categories, viz. tense, aspect, mood, verbal extensions, negation, adverbs and the triggers of a homorganic N(asal)-prefix that attaches to the verb and other verbal categories. Awing grammar also exhibits a very unusual phenomenon whereby nouns and verbs take long and short forms; a chapter entitled Truncation is dedicated to it. It is observed that the truncation process does not apply to bare singular NPs, proper names or nouns derived via morphological processes. On the other hand, with the exception of the 1st person non-emphatic possessive determiner and the class 7 noun prefix, nouns generally take the truncated form with modifiers (i.e., articles, demonstratives and other possessives). It is concluded that nominal truncation reflects movement within the DP system (Abney 1987). Truncation of the verb occurs in three contexts: a mass/plurality conspiracy (or lattice structuring in terms of Link 1983) between the verb and its internal argument (i.e., direct object); a means to align (exhaustive) focus (in terms of Féry 2013); and a means to form polar questions.
The second part of the work focuses on the role of the LE morpheme in copular and wh-focused clauses. Firstly, the syntax of the Awing copular clause is presented, and it is shown that copular clauses in Awing have ‘subject-focus’ vs ‘topic-focus’ partitions and that the LE morpheme indirectly relates such functions. Semantically, it is shown that LE expresses neither contrast nor exhaustivity in copular clauses. Turning to wh-constructions, the work adheres to Hamblin’s (1973) idea that the meaning of a question is the set of its possible answers and, based on Rooth’s (1985) underspecified semantic notion of focus alternatives, concludes that the LE morpheme is not a focus marker (FM) in Awing: LE does not generate or indicate the presence of alternatives (Krifka 2007). Rather, the LE morpheme can associate with wh-elements as a focus-sensitive operator whose semantic import operates on the focus alternatives by presupposing an exhaustive answer, among other notions. With focalized categories, the project uses a number of diagnostics to further substantiate the claim in Fominyam & Šimík (2017) that exhaustivity is part of the semantics of the LE morpheme and not derived via contextual implicature. Hence, unlike in copular clauses, the LE morpheme with wh-focused categories is analysed as a morphological exponent of a functional head Exh corresponding to Horvath’s (2010) EI (Exhaustive Identification). The work ends with the syntax of verb focus and negation and modifies the idea in Fominyam & Šimík (2017) that the focalized verb associating with the exhaustive (LE) particle is a lower copy of the finite verb that has been moved to Agr. It is argued that the LE-focused verb ‘cluster’ is an instantiation of adjunction. The conclusion is that verb doubling under verb focus in Awing is neither a realization of two copies of one and the same verb (Fominyam & Šimík 2017), nor the result of a copy triggered by a focus marker (Aboh & Dyakonova 2009).
Rather, the focalized copy is merged directly as the complement of LE, forming a type of adjoining cluster.
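The combination of Hamblin question semantics and the exhaustivity ascribed to LE can be illustrated with a toy set-theoretic model, where propositions are sets of worlds. The tiny model, the names, and the operator definition below are illustrative only, not part of the thesis:

```python
# A question denotes the set of its possible answers (Hamblin 1973);
# each proposition is modelled as the set of worlds in which it is true.
WORLDS = frozenset({"w1", "w2", "w3"})

ann_came = frozenset({"w1", "w2"})   # worlds where "Ann came" is true
ben_came = frozenset({"w2"})         # worlds where "Ben came" is true
who_came = {ann_came, ben_came}      # Hamblin alternative set for "Who came?"

def exh(answer, alternatives, worlds=WORLDS):
    """Exhaustified answer: the answer is true and every alternative it
    does not entail is false (exhaustive identification in the spirit of
    Horvath's EI, here attributed to LE)."""
    result = set(answer)
    for alt in alternatives:
        if not answer <= alt:        # p entails q iff p's worlds are a subset of q's
            result &= worlds - alt   # negate each non-entailed alternative
    return frozenset(result)

# EXH("Ann came") = "Ann and no one else came": true only in w1,
# because in w2 Ben came as well.
print(exh(ann_came, who_came))
```

On this picture LE contributes no alternatives of its own; it merely operates on the alternative set that focus makes available, which is the sense in which it is a focus-sensitive operator rather than a focus marker.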
The life cycle of higher plants is based on recurring phases of growth and development built on repetitive sequences of cell division, cell expansion and cell differentiation. This dissertation comprises two projects, each investigating a different topic related to cell expansion. The first project examines an Arabidopsis thaliana mutant exhibiting overall cell enlargement, and the second analyses two naturally occurring floral morphs of Amsinckia spectabilis (Boraginaceae) differing (among other traits) in style length and anther height due to differences in longitudinal cell elongation. The EMS mutant eop1 was shown to exhibit a petal size increase of 26% caused by cell enlargement. Further phenotypes were detected, such as increased cotyledon size (based on larger cells) as well as increased carpel, sepal, leaf and pollen sizes. Plant height was increased, and more highly branched trichomes explained the hairy eop1 phenotype. Fine mapping revealed the causal SNP to be a C-to-T transition at the last nucleotide of intron 7 of the INCURVATA11 (ICU11) gene, encoding a 2-oxoglutarate/Fe(II)-dependent dioxygenase, thus causing missplicing of the mRNA. Two T-DNA insertion lines (icu11-2 and icu11-4) confirmed ICU11 as the causal gene by exhibiting increased petal size. A comparison of three icu11 alleles with different mutation-related changes, either overexpressing ICU11 or producing modified mRNAs, formed the basis for investigating the molecular mechanism underlying the observed phenotype. Different approaches yielded contradictory results regarding ICU11 protein functionality in the icu11 mutants: a complementation assay proved the three mutants to be interchangeable, and ICU11 overexpression in the wild type led to an icu11-like phenotype, arguing for all three icu11 mutants being gain-of-function (GOF) mutants. Contradicting this conclusion, the icu11-4 line could be rescued by a genomic ICU11 transgene.
A model based on the assumption that overexpression of ICU11 inhibits the function of the protein, thus causing the same effect as a loss-of-function (LOF) protein, was proposed. Furthermore, icu11-3 (eop1) mutants were shown to have increased resistance towards paclobutrazol, a gibberellin (GA) biosynthesis inhibitor, and an upregulation of AtGA20ox2, a main GA biosynthesis gene. Additionally, the subcellular localization of ICU11 was found to be cytoplasmic, supporting the assumption that ICU11 affects GA biosynthesis and the overall GA level, possibly explaining the observed (GA-overdose) phenotype.
The second project aimed to identify the genetic basis of the S-locus in Amsinckia spectabilis, as the genus Amsinckia exhibits characteristics untypical of heterostylous species, such as no obvious self-incompatibility (SI) and repeated transitions towards homostylous, fully selfing variants. The work was based on three Amsinckia spectabilis forms: a heterostylous form consisting of two floral morphs with reciprocal positioning of the sexual organs (S-morph: high anthers and a short style; L-morph: low anthers and a long style), and two homostylous forms, one large-flowered and partially selfing, the other small-flowered and fully selfing. The maintenance of the two floral morphs is genetically based on the S-locus region, which contains the genes encoding the morph-specific traits and is marked by tight linkage due to suppressed recombination. Natural populations possess a 1:1 S:L morph ratio, which can be explained by predominantly disassortative mating between the two morphs, so that the dominant S-allele occurs only in the heterozygous state (heterozygous (Ss) in the S-morph and homozygous recessive (ss) in the L-morph). Investigation of the morph-specific phenotypes detected 56% longer L-morph styles and 58% higher-positioned S-morph anthers; approximately 50% of the observed size differences were explained by increased cell elongation. Moreover, additional phenotypes were found, such as 21% enlarged S-morph pollen and no obvious SI, the latter confirmed by seed counts after hand pollination, in vivo pollen tube growth and the development of homozygous dominant SS individuals via selfing. The Amsinckia spectabilis S-locus was assumed to consist of at least the G- (style length), the A- (anther height) and the P- (pollen size) locus.
Comparative transcriptomics of the two morphs revealed 22 differentially expressed markers located within two contigs of a PacBio genome assembly of an SS individual, allowing the S-locus to be delimited to a region of approximately 23 Mb. In contrast to S-loci characterized elsewhere in the plant kingdom, no strong evidence was found for a hemizygous region causing the suppressed recombination of the S-locus, so an inversion was assumed to be the causal mechanism.
This dissertation was carried out as part of the international and interdisciplinary graduate school StRATEGy, whose goal is to investigate geological processes that take place on different temporal and spatial scales and have shaped the southern Central Andes. This study focuses on claystones and carbonates of the Yacoraite Fm. that were deposited between the Maastrichtian and the Danian in the Cretaceous Salta Rift Basin. The former rift basin is located in northwest Argentina and is divided into the Tres Cruces, Metán-Alemanía and Lomas de Olmedo sub-basins. The overall motivation for this study was to gain new insights into the evolution of marine and lacustrine conditions during the deposition of the Yacoraite Fm. in the Tres Cruces and Metán-Alemanía sub-basins. Further important aspects examined within the scope of this dissertation are the conversion of organic matter of the Yacoraite Fm. into oil and its genetic relationship to selected produced oils and natural oil seeps. The results of my study show that deposition of the Yacoraite Fm. began under marine conditions and that a lacustrine environment had developed by the end of deposition in the Tres Cruces and Metán-Alemanía basins. In general, the kerogen of the Yacoraite Fm. consists mainly of kerogen types II, III and II/III mixtures. Type III kerogen is mainly found in Yacoraite Fm. samples with low TOC values; due to the adsorption of hydrocarbons on mineral surfaces (mineral matrix effect), the content of type III kerogen determined by Rock-Eval pyrolysis in these samples may be overestimated. Organic petrography shows that the organic particles of the Yacoraite Fm. consist mainly of alginites and some vitrinite-like particles. Pyrolysis-GC of the rock samples showed that the Yacoraite Fm. generates low-sulfur oils with a predominantly low-wax, paraffinic-naphthenic-aromatic composition as well as paraffinic, wax-rich oils.
Small proportions of paraffinic, low-wax oils and a gas-condensate-generating facies are also predicted. Here, too, mineral matrix effects were taken into account, as they can lead to a quantitative overestimation of the gas-generating character.
The results of an additional 1D basin modeling show that the onset (10% transformation ratio, TR) of oil generation occurred between ≈10 Ma and ≈4 Ma. Most of the oil (≈50% to 65%) was generated prior to the development of the structural traps formed during the Plio-Pleistocene Diaguita deformation phase; only ≈10% of the total oil generated was formed, and potentially trapped, after the formation of these structural traps. Important factors in the risk assessment of this petroleum system, which may account for the small amounts of generated and migrated oil, are the generally low TOC contents and the variable thickness of the Yacoraite Fm. Additional risks are associated with the sparse information on potentially existing reservoir structures and with the quality of the overburden (seal).
“If you can’t measure it, you can’t manage it.” This slogan, attributed variously to Peter Drucker, Henry Deming, or Robert Kaplan and David Norton, expresses a deep conviction in the necessity and usefulness of performance management, an approach that has also reached and shaped public administration. At the same time, it implies a decisive role for performance information. This dissertation places that critical element, performance information, at the centre of its research interest, more precisely the use of performance indicators.
The starting point is the scientific observation that performance indicators are not always and automatically used in the way that theory requires and predicts. Poor implementation of the management approach or flaws in its theoretical foundation are possible explanations. The review of the state of research made it evident that explanations are sought primarily in the organizational setting and in factors related to performance management, indicative of a rather technocratic, implementation-oriented perspective on the problem of use. The intrapersonal level, which is important from a neuroscientific point of view, plays a subordinate role.
Against this background, an empirical study building on neuroscientific findings examined the effect of experience-related variables on usage behaviour. It analysed how experience arises at the organizational level and how, in detail, it affects usage behaviour. Police executives served as the research subjects. The data were collected online at the end of 2016 and the beginning of 2017.
From the data analysis and the discussion of the findings, the following insights stand out:
(1) Experience influences the use of performance information. The type of experience with performance indicators acts as a mediator variable. Organizational factors in particular, such as the maturity of the performance management system, affect usage behaviour via the experience factor.
(2) It is also worth noting that engaging with performance indicators positively influences both the stock of experience and the use of the indicators. Overall, the neuroscience-inspired variables proved to be promising explanatory factors.
(3) Furthermore, the study corroborated existing findings, above all the effect of the aforementioned maturity level. However, differences also emerged: for example, the transformational leadership style, in combination with the type of experience, loses its positive effect on indicator use.
(4) The results of the laboratory and quasi-experiments are also of interest. For the first time, non-purposeful types of use were observed experimentally. In addition, neuroeconomic and behavioural-economic explanatory approaches were identified and discussed, enriching the research discourse. They offer a new perspective on usage behaviour and provide impulses for further research.
For New Public Management (NPM), in whose toolbox this management approach plays a key role, these research findings weigh heavily. Without a functioning performance management system, the important reform goal of outcome orientation cannot be achieved. NPM thus runs the risk of developing dysfunctions of its own.
Overall, it seems advisable to place a stronger focus on intrapersonal factors when examining management systems. Behavioural anomalies in the context of management and their implications should also be investigated more closely. It is further evident that a purely technocratic view of performance management is not expedient. Consequently, performance management needs to be developed further both theoretically and conceptually.
The thesis thus delivers important new insights into the use of performance information and the understanding of performance management. Above all, it extends the research discourse by demonstrating the explanatory power of intrapersonal factors and by opening up new perspectives on the problem of use, methodologically through a mixed-methods (multi-method) design and theoretically through neuroeconomics and behavioural economics.
Soft actuators have drawn significant attention due to their relevance for applications such as artificial muscles in devices developed for medicine and robotics. Tuning their performance and expanding their functionality is frequently done by means of chemical modification. Introducing structural elements that enable non-synthetic modification of the performance, control over the physical appearance and easier recycling is a subject of great interest in the field of smart materials. The primary aim of this thesis was to create a shape-memory polymeric actuator in which the capability for non-synthetic tuning of the actuation performance is combined with reprocessability. Physically cross-linked polymeric matrices provide a solid material platform on which in situ processing methods can be employed to modify composition and morphology, resulting in fine tuning of the related mechanical properties and of the shape-memory actuation capability.
The morphological features required for shape-memory polymeric actuators, namely two crystallisable domains and anchoring points for physical cross-links, were embedded into a multiblock copolymer with poly(ε-caprolactone) and poly(L-lactide) segments (PLLA-PCL). Here, the melting transition of PCL was bisected into actuating and skeleton-forming units, while cross-linking was introduced via PLA stereocomplexation in blends with oligomeric poly(D-lactide) (ODLA). A PLLA segment number-average length of 12-15 repeating units was experimentally determined to enable PLA stereocomplex formation while remaining insufficient for isotactic crystallisation. The multiblock structure and phase dilution broaden the PCL melting transition, facilitating its separation into two conditionally independent crystalline domains. The low molar mass of the PLA stereocomplex components and the multiblock structure enable processing and reprocessing of the PLLA-PCL / ODLA blends with common non-destructive techniques. The modularity of the PLLA-PCL structure and of the synthetic approach allows for independent tuning of the properties of its components. The designed material establishes a solid platform for non-synthetic tuning of the thermomechanical and structural properties of thermoplastic elastomers.
To evaluate the thermomechanical stability of the formed physical network, three criteria were appraised: as physical cross-links, the PLA stereocomplexes have to be evenly distributed within the material matrix, their melting temperature must not overlap with the thermal transitions of the PCL domains, and they have to maintain structural integrity over the strain (ε) ranges subsequently applied in the shape-memory actuation experiments. Assigning PCL the function of the skeleton-forming and actuating units, and the PLA stereocomplexes the role of physical netpoints, shape-memory actuation was realised in the PLLA-PCL / ODLA blends. The reversible strain of shape-memory actuation was found to be a function of PLA stereocomplex crystallinity, i.e. physical cross-linking density, with a maximum of 13.4 ± 1.5% at a PLA stereocomplex content of 3.1 ± 0.3 wt%. In this way, shape-memory actuation can be tuned by adjusting the composition of the PLLA-PCL / ODLA blend. This makes the developed material a valuable asset for the production of cost-effective, tunable soft polymeric actuators for applications in medicine and soft robotics.
River flooding poses a threat to numerous cities and communities all over the world. The detection, quantification and attribution of changes in flood characteristics are key to assessing changes in flood hazard and help affected societies mitigate and adapt to emerging risks in a timely manner. The Rhine River is one of the major European rivers, and numerous large cities lie on its shores. Runoff from several large tributaries superimposes in the main channel, shaping the complex flow regime. Rainfall, snowmelt as well as ice-melt are important runoff components. The main objective of this thesis is the investigation of a possible transient merging of the nival and pluvial Rhine flood regimes under global warming. Rising temperatures cause snowmelt to occur earlier in the year and rainfall to be more intense. The superposition of snowmelt-induced floods originating from the Alps with more intense rainfall-induced runoff from pluvial-type tributaries might create a new flood type with potentially disastrous consequences.
To introduce the topic of changing hydrological flow regimes, an interactive web application is presented that enables the investigation of runoff timing and runoff seasonality observed at river gauges all over the world. The exploration and comparison of a great diversity of river gauges in the Rhine River Basin and beyond indicates that river systems around the world undergo fundamental changes. In hazard and risk research, the provision of background as well as real-time information to residents and decision-makers in an easily accessible way is of great importance. Future studies need to further harness the potential of scientifically engineered online tools to improve the communication of information related to hazards and risks.
A next step is the development of a cascading sequence of analytical tools to investigate long-term changes in hydro-climatic time series. The combination of quantile sampling with moving-average trend statistics and empirical mode decomposition allows for the extraction of high-resolution signals and the identification of mechanisms driving changes in river runoff. Results point out that the construction and operation of large reservoirs in the Alps is an important factor redistributing runoff from summer to winter, and hint at more (intense) rainfall in recent decades, particularly during winter, in turn increasing high runoff quantiles. The development and application of the analytical sequence represents a further step in the scientific quest to disentangle natural variability, climate change signals and direct human impacts.
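As a rough illustration of the quantile-sampling idea (not the thesis's actual tool chain, which also combines moving-average trend statistics with empirical mode decomposition), the following sketch computes an annual high-runoff quantile from synthetic daily data and smooths it with a moving average; the data, quantile level and window length are made-up assumptions:

```python
import numpy as np

def annual_quantile_trend(runoff, years, q=0.9, window=5):
    """Sample an annual runoff quantile, then smooth it with a moving average.

    runoff : daily runoff values (1-D array)
    years  : year of each daily value (same length as runoff)
    q      : quantile sampled per year (e.g. 0.9 for high runoff)
    window : moving-average window length in years
    """
    unique_years = np.unique(years)
    annual_q = np.array([np.quantile(runoff[years == y], q) for y in unique_years])
    # Centred moving average; "valid" mode drops the incomplete edges.
    kernel = np.ones(window) / window
    trend = np.convolve(annual_q, kernel, mode="valid")
    return unique_years, annual_q, trend

# Synthetic example: 20 years of daily runoff with a slight upward drift.
rng = np.random.default_rng(0)
years = np.repeat(np.arange(2000, 2020), 365)
runoff = rng.gamma(2.0, 50.0, size=years.size) + (years - 2000) * 2.0
yrs, annual_q, trend = annual_quantile_trend(runoff, years, q=0.9, window=5)
print(trend[0], trend[-1])  # smoothed high-quantile signal, start vs. end
```

The smoothed series isolates the slow signal in the high quantile, which is the kind of change the analytical sequence is designed to detect.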
The in-depth analysis of in situ snow measurements and simulations of the Alpine snow cover using a physically based snow model enable the quantification of changes in snowmelt in the sub-basin upstream of gauge Basel. Results confirm previous investigations indicating that rising temperatures result in a decrease in maximum melt rates. Extending these findings to a catchment perspective, a threefold effect of rising temperatures can be identified: snowmelt becomes weaker, occurs earlier and forms at higher elevations. Furthermore, results indicate that, due to the wide range of elevations in the basin, snowmelt does not occur simultaneously at all elevations; instead, blocks of adjacent elevation bands melt together. The beginning and end of the release of meltwater seem to be determined by the passage of warm air masses, while the elevation range affected is determined by the accompanying temperatures and snow availability. Following these findings, a hypothesis describing elevation-dependent compensation effects in snowmelt is introduced: in a warmer world with similar sequences of weather conditions, snowmelt is moved upward to higher elevations, i.e., the block of elevation bands providing most water to the snowmelt-induced runoff is located at higher elevations. This upward shift makes snowmelt in individual elevation bands occur earlier. The timing of the snowmelt-induced runoff, however, stays the same: meltwater from higher elevations, at least partly, replaces meltwater from the elevations below.
The insights on past and present changes in river runoff, snow covers and the underlying mechanisms form the basis of investigations of potential future changes in Rhine River runoff. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios, is used to analyse future changes in streamflow, snowmelt, precipitation and evapotranspiration at 1.5, 2.0 and 3.0 °C global warming. Simulation results suggest that future changes in flood characteristics in the Rhine River Basin are controlled by increased precipitation amounts on the one hand and reduced snowmelt on the other. Rising temperatures deplete seasonal snowpacks. At no time during the year does a warming climate result in an increase in the risk of snowmelt-driven flooding. Counterbalancing effects between snowmelt and precipitation often result in only small and transient changes in streamflow peaks. Although investigations point to changes in both rainfall- and snowmelt-driven runoff, there are no indications of a transient merging of the nival and pluvial Rhine flood regimes due to climate warming. Flooding in the main tributaries of the Rhine, such as the Moselle River, as well as in the High Rhine is controlled by both precipitation and snowmelt. Caution must be exercised when labelling sub-basins such as the Moselle catchment as purely pluvial-type or the Rhine River Basin at Basel as purely nival-type. Results indicate that these (over-)simplifications can entail misleading assumptions with regard to flood-generating mechanisms and changes in flood hazard. In the framework of this thesis, some progress has been made in detecting, quantifying and attributing past, present and future changes in Rhine flow and flood characteristics. However, further studies are necessary to pin down future changes in the genesis of Rhine floods, particularly very rare events.
Over the past decades, natural hazards, many of which are aggravated by climate change and show an increasing trend in frequency and intensity, have caused significant human and economic losses and pose a considerable obstacle to sustainable development. Hence, dedicated action toward disaster risk reduction is needed to understand the underlying drivers and create efficient risk mitigation plans. Such action is requested by the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), a global agreement launched in 2015 that establishes priorities for action, e.g. an improved understanding of disaster risk. Turkey is one of the SFDRR contracting countries and has been severely affected by many natural hazards, in particular earthquakes and floods. However, disproportionately little is known about flood hazards and risks in Turkey. Therefore, this thesis aims to carry out, for the first time, a comprehensive analysis of flood hazards in Turkey, from triggering drivers to impacts. It is intended to contribute to a better understanding of flood risks, to improvements in flood risk mitigation, and to the facilitated monitoring of progress and achievements in implementing the SFDRR.
In order to investigate the occurrence and severity of flooding in comparison to other natural hazards in Turkey and to provide an overview of the temporal and spatial distribution of flood losses, the Turkey Disaster Database (TABB) was examined for the years 1960-2014. The TABB database was reviewed through comparison with the Emergency Events Database (EM-DAT), the Dartmouth Flood Observatory database, the scientific literature and news archives. In addition, data on the most severe flood events between 1960 and 2014 were retrieved. These served as a basis for analyzing triggering mechanisms (i.e. atmospheric circulation and precipitation amounts) and aggravating pathways (i.e. topographic features, catchment size, land use types and soil properties). For this, a new approach was developed in which the events were classified using hierarchical cluster analyses to identify the main influencing factor per event and provide additional information about the dominant flood pathways for severe floods. The main idea of the study was to start from the event impacts in a bottom-up approach and to identify the causes that created damaging events, instead of applying a model chain with long-term series as input and searching for potentially impacting events among the model outcomes. However, the frequency analysis of the flood-triggering circulation pattern types revealed that some heavy-precipitation events were not included in the list of the most severe floods, i.e. their impacts were not recorded in national and international loss databases even though they were mentioned in news archives and reported by the Turkish State Meteorological Service. This finding challenges bottom-up modelling approaches and underlines the urgent need for consistent event and loss documentation.
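A minimal sketch of how events can be grouped by hierarchical cluster analysis, assuming each flood event is described by a small standardized feature vector. The feature names, values and cluster count below are hypothetical stand-ins, not the thesis's actual data or variables:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative event feature matrix (rows = flood events). The four
# columns stand in for the kinds of drivers and pathways analysed in
# the thesis (e.g. precipitation amount, catchment size, slope,
# impervious land-use fraction) with made-up numbers.
rng = np.random.default_rng(1)
events = np.vstack([
    rng.normal([120.0, 50.0, 5.0, 0.1], 5.0, size=(10, 4)),   # rainfall-dominated
    rng.normal([30.0, 500.0, 15.0, 0.6], 5.0, size=(10, 4)),  # pathway-dominated
])

# Standardise features so no single unit dominates the distance metric.
z = (events - events.mean(axis=0)) / events.std(axis=0)

# Ward linkage builds the hierarchy; cut it into two clusters.
tree = linkage(z, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # one cluster id per event
```

Inspecting the feature means within each cluster then suggests the dominant influencing factor per event group, which is the bottom-up reading described above.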
Therefore, as a next step, the aim was to enhance the flood loss documentation by calibrating, validating and applying the United Nations Office for Disaster Risk Reduction (UNDRR) loss estimation method to the recent severe flood events (2015-2020). This provided a consistent flood loss estimation model for Turkey, allowing governments to estimate losses as quickly as possible after events, e.g. to better coordinate financial aid.
This thesis reveals that, after earthquakes, floods have the second most destructive effects in Turkey in terms of human and economic impacts, with over 800 fatalities and US$ 885.7 million in economic losses between 1960 and 2020, and that floods should receive more attention on the national scale. The clustering results for the dominant flood-producing mechanisms (e.g. circulation pattern types, extreme rainfall, sudden snowmelt) present crucial information regarding source and pathway identification, which can be used as base information for hazard identification in the preliminary risk assessment process. The implementation of the UNDRR loss estimation model shows that the model, with country-specific parameters, calibrated damage ratios and sufficient event documentation (i.e. physically damaged units), can be recommended for providing first estimates of the magnitude of direct economic losses, even shortly after events have occurred, since it performed well when estimates were compared to documented losses.
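The logic of such a loss estimation can be illustrated with a toy calculation: physically damaged units per sector are multiplied by a replacement cost and a calibrated damage ratio, then summed. The sectors, unit costs and ratios below are placeholder assumptions for illustration, not the UNDRR parameters calibrated for Turkey:

```python
# Hedged sketch of a direct economic loss estimate in the spirit of the
# UNDRR methodology described above. All numbers are made-up placeholders.
damaged_units = {"housing": 120, "roads_km": 8.0, "agriculture_ha": 350}
replacement_cost = {"housing": 45_000.0, "roads_km": 250_000.0, "agriculture_ha": 1_200.0}
damage_ratio = {"housing": 0.35, "roads_km": 0.5, "agriculture_ha": 0.8}

# Loss per sector = damaged units x replacement cost x damage ratio.
loss = sum(damaged_units[s] * replacement_cost[s] * damage_ratio[s]
           for s in damaged_units)
print(f"estimated direct loss: US$ {loss:,.0f}")
```

Because only counts of physically damaged units are needed as event-specific input, such an estimate can be produced shortly after an event, before full loss documentation exists.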
The presented results can contribute to improving the national disaster loss database in Turkey and thus enable a better monitoring of the national progress and achievements with regard to the targets stated by the SFDRR. In addition, the outcomes can be used to better characterize and classify flood events. Information on the main underlying factors and aggravating flood pathways further supports the selection of suitable risk reduction policies.
All input variables used in this thesis were obtained from publicly available data. The results are openly accessible and can be used for further research.
As an overall conclusion, it can be stated that consistent loss data collection and better event documentation should gain more attention for a reliable monitoring of the implementation of the SFDRR. Better event documentation should be established according to a globally accepted standard for disaster classification and loss estimation in Turkey. Ultimately, this enables stakeholders to create better risk mitigation actions based on clear hazard definitions, flood event classification and consistent loss estimations.
Forming as a result of the collision between the Adriatic and European plates, the Alpine orogen exhibits significant lithospheric heterogeneity due to the long history of interplay between these plates, other continental and oceanic blocks in the region, and inherited features from preceding orogenies. This implies that the thermal and rheological configuration of the lithosphere also varies significantly throughout the region. Lithology and temperature/pressure conditions exert a first-order control on rock strength, principally via thermally activated creep deformation and via the distribution at depth of the brittle-ductile transition zone, which can be regarded as the lower bound of the seismogenic zone. Therefore, they influence the spatial distribution of seismicity within a lithospheric plate. In light of this, accurately constrained geophysical models of the heterogeneous Alpine lithospheric configuration are crucial for describing regional deformation patterns. However, despite the amount of research focussing on the area, different hypotheses still exist regarding the present-day lithospheric state and how it might relate to the present-day seismicity distribution.
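The strength control described above is commonly summarised in a yield-strength envelope: at each depth, rock strength is the minimum of frictional (brittle) strength and thermally activated creep strength, and their crossover marks the brittle-ductile transition. A generic sketch with textbook-style placeholder parameters (not the calibrated rheological values of this dissertation):

```python
import numpy as np

# Generic yield-strength envelope for a single crustal column.
# All parameter values are illustrative assumptions.
g, rho = 9.81, 2750.0            # gravity (m/s^2), crustal density (kg/m^3)
R = 8.314                        # gas constant (J/mol/K)
A, n, Q = 1e-26, 3.0, 2.5e5      # creep prefactor (Pa^-n s^-1), stress exponent, activation energy (J/mol)
strain_rate = 1e-15              # tectonic strain rate (1/s)
geotherm = 20e-3                 # linear geothermal gradient (K/m)

z = np.linspace(1e3, 40e3, 400)  # depth (m)
T = 273.15 + geotherm * z        # temperature (K)

brittle = 0.6 * rho * g * z                                       # friction-controlled strength (Pa)
ductile = (strain_rate / A) ** (1 / n) * np.exp(Q / (n * R * T))  # power-law creep strength (Pa)
strength = np.minimum(brittle, ductile)

# Brittle-ductile transition: shallowest depth where creep is the weaker mechanism.
bdt_km = z[np.argmax(ductile < brittle)] / 1e3
print(f"brittle-ductile transition at ~{bdt_km:.0f} km depth")
```

Because the creep term depends exponentially on temperature, changes in lithology (via A, n, Q) or in the thermal field shift the transition depth strongly, which is why the thesis's thermal and rheological models matter for the seismicity distribution.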
This dissertation seeks to constrain the Alpine lithospheric configuration through a fully 3D integrated modelling workflow that utilises multiple geophysical techniques and integrates all available data sources. The aim is to shed light on how lithospheric heterogeneity may influence the heterogeneous patterns of seismicity distribution observed within the region. This was accomplished through the generation of: (i) 3D seismically constrained structural and density models of the lithosphere, adjusted to match the observed gravity field; (ii) 3D models of the lithospheric steady-state thermal field, adjusted to match observed wellbore temperatures; and (iii) 3D rheological models of long-term lithospheric strength, with the results of each step used as input for the following steps.
Results indicate that the highest strengths within the crust (~1 GPa) and upper mantle (>2 GPa) occur at temperatures characteristic of specific phase transitions (more felsic crust: 200 – 400 °C; more mafic crust and upper lithospheric mantle: ~600 °C), with almost all seismicity occurring in these regions. However, inherited lithospheric heterogeneity significantly influences this pattern: seismicity in the thinner and more mafic Adriatic crust (~22.5 km, 2800 kg m−3, 1.30E-06 W m-3) occurs up to higher temperatures (~600 °C) than in the thicker and more felsic European crust (~27.5 km, 2750 kg m−3, 1.3–2.6E-06 W m-3, ~450 °C). Correlations between seismicity in the orogen forelands and lithospheric strength also show different trends, reflecting the different tectonic settings. As such, events in the plate-boundary setting of the southern foreland correlate with the integrated lithospheric strength, occurring mainly in the weaker lithosphere surrounding the strong Adriatic indenter. Events in the intraplate setting of the northern foreland instead correlate with crustal strength, occurring mainly in the weaker and warmer crust beneath the Upper Rhine Graben.
Therefore, the findings presented in this work represent not only a state-of-the-art understanding of the lithospheric configuration beneath the Alps and their forelands, but also a significant improvement in knowledge of the features that influence the occurrence of seismicity within the region. This highlights the importance of considering the lithospheric state when explaining observed patterns of deformation.
Proteins of halophilic organisms that accumulate molar concentrations of KCl in their cytoplasm have a much higher content of acidic amino acids than proteins of mesophilic organisms. It has been proposed that this excess is necessary to keep proteins hydrated in an environment with low water activity: either via direct interactions between water and the carboxylate groups of acidic amino acids, or via cooperative interactions between acidic amino acids and hydrated cations, which would stabilize the folded protein. In the course of this Ph.D. study, we investigated these possibilities using atomistic molecular dynamics simulations and classical force fields. High-quality parameters describing the interaction between K+ and the carboxylate groups present in acidic amino acids are indispensable for this study. We first evaluated the quality of the default parameters for these ions within the widely used AMBER ff14SB force field for proteins and found that they perform poorly. We propose new parameters, which reproduce solution activity derivatives of potassium acetate solutions up to 2 mol/kg and the distances between potassium ions and carboxylate groups observed in X-ray structures of proteins. To understand the role of acidic amino acids in protein hydration, we investigated this aspect for 5 halophilic proteins in comparison with 5 mesophilic ones. Our results do not support the necessity of acidic amino acids to keep folded proteins hydrated. Proteins with a larger fraction of acidic amino acids indeed have higher hydration levels. However, the hydration level of each protein is identical at low (b_KCl = 0.15 mol/kg) and high (b_KCl = 2 mol/kg) KCl concentration. It has also been proposed that cooperative interactions of acidic amino acids with nearby hydrated cations stabilize the folded protein and slow down its solvation shell; according to this theory, the cations would be preferentially excluded from the unfolded structure.
We investigate this possibility through extensive free energy calculation simulations. We find that cooperative interactions between neighboring acidic amino acids exist and are mediated by the ions in solution but are present in both folded and unfolded structures of halophilic proteins. The translational dynamics of the solvation shell is barely distinguishable between halophilic and mesophilic proteins; therefore, such a cooperative effect does not result in unusually slow solvent dynamics as has been suggested.
Within the context of religious studies, the present study explores the modification and reorientation of a single Christian pictorial motif whose visual formula has persisted to the present day.
In contemporary art, the Pietà motif is increasingly used as an innovative visual formula in political or social contexts to articulate existential experiences as well as socially critical and political accusations. It is experiencing a relaunch in media coverage, art, film and everyday culture. Artists and photojournalists increasingly give their works the title Pietà, or the title is attributed to them from outside. The semantics of this specific motif evidently strikes a chord and can evoke an emotional disposition in viewers. Of interest for this study is the system of norms and values together with the underlying processes of transmission and transformation. To date there has been no monograph analysing the connections between the revival of a primarily Christian pictorial motif and its contemporary references to violence, death, fear, transience, ageing or loss.
The central question concerns a modification or reinterpretation of this iconography. Tracing a possible dynamic development of the motif is intended to clarify which changed functions are ascribed to the Pietà motif in contemporary art. Drawing on a set of internationally renowned contemporary artists, possible changes and an associated shift in social meaning since the 21st century are analysed.
Against this background, the question of the cross-religious efficacy of the iconic presence of a religious pictorial motif in art and in visual media is of current relevance. This study makes an exemplary contribution to affect research, which in recent years has increasingly addressed the depiction and communication of emotion in audiovisual media.
Silicate melts are major components of the Earth’s interior and as such make an essential contribution to igneous processes, to the dynamics of the solid Earth and to the chemical development of the entire Earth. Macroscopic physical and chemical properties such as density, compressibility, viscosity and degree of polymerization are determined by the atomic structure of the melt. Depending on the pressure, but also on the temperature and the chemical composition, silicate melts show different structural properties. These properties are best described by the local coordination environment, i.e. the symmetry and number of neighbors (coordination number) of an atom, as well as the distance between the central atom and its neighbors (inter-atomic distance). With increasing pressure and temperature, i.e. with increasing depth in the Earth, the density of the melt increases, which can lead to changes in coordination numbers and distances. If the coordination number remains the same, the distance usually decreases; if the coordination number increases, the distance can increase. These general trends can, however, vary greatly, which can be attributed in particular to the chemical composition.
Because natural melts of the deep Earth are not accessible to direct investigation, extensive experimental and theoretical studies have been carried out to understand their properties under the relevant conditions. This has often been done using amorphous samples of the end-members SiO2 and GeO2, with the latter serving as a structural and chemical analogue of SiO2. Commonly, the experiments were carried out at high pressure and room temperature. Natural melts are chemically much more complex than the simple end-members SiO2 and GeO2, so observations made on the end-members may lead to incorrect compression models. Furthermore, investigations on glasses at room temperature can deviate strongly from the properties of melts under natural thermodynamic conditions.
The aim of this thesis was to explain the influence of composition and temperature on the structural properties of melts at high pressures. To this end, we studied complex alumino-germanate and alumino-silicate glasses. More precisely, we studied synthetic glasses with a composition like the mineral albite and like a mixture of albite-diopside at the eutectic point. The albite glass is structurally similar to a simplified granitic melt, while the albite-diopside glass simulates a simplified basaltic melt. To study the local coordination environment of the elements, we used X-ray absorption spectroscopy in combination with a diamond anvil cell. Because the diamonds have a high absorbance for X-rays with energies below 10 keV, the direct investigation of geologically relevant elements such as Si, Al, Ca and Mg with this spectroscopic probe in combination with a diamond anvil cell is not possible. Therefore the glasses were doped with Ge and Sr. These elements serve partially or fully as substitutes for important major elements: Ge serves as a substitute for Si and other network formers, while Sr replaces network modifiers such as Ca, Na and Mg, as well as other cations with a large ionic radius.
In the first step we studied the Ge K-edge in Ge-albite glass, NaAlGe3O8, at room temperature up to 131 GPa. This glass has a higher chemical complexity than SiO2 and GeO2, but it is still fully polymerized. The differences in the compression mechanism between this glass and the simple oxides can clearly be attributed to the higher chemical complexity. The albite and albite-diopside compositions partially doped with Ge and Sr were probed at room temperature for Ge up to 164 GPa and for Sr up to 42 GPa. While the albite glass is nominally fully polymerized like NaAlGe3O8, the albite-diopside glass is partially depolymerized. The results show that structural changes take place in all three glasses within the first 25 to at most 30 GPa, with Ge and Sr reaching maximum coordination numbers of 6 and ∼9, respectively. At higher pressures, only isostructural shrinkage of the coordination polyhedra takes place in the glasses. The most important finding of the high-pressure studies on the alumino-silicate and alumino-germanate glasses is that, in these complex glasses, the polyhedra show a much higher compressibility than what is observed in the end-members. This is shown in particular by the strong shortening of the Ge-O distances in amorphous NaAlGe3O8 and the albite-diopside glass at pressures above 30 GPa.
In addition to the effects of composition on the compaction process, we investigated the influence of temperature on the structural changes. To do this, we probed the albite-diopside glass, as it is chemically most similar to the melts of the lower mantle. We studied the Ge K-edge of the sample with a resistively heated and a laser-heated diamond anvil cell over a pressure range of up to 48 GPa and a temperature range of up to 5000 K. High temperatures, at which the sample is liquid and which are relevant for the Earth’s mantle, have a significant impact on the structural transformation, shifting it by approx. 30% to significantly lower pressures compared to the glasses at room temperature and below 1000 K.
The results of this thesis represent an important contribution to understanding the properties of melts at conditions of the lower mantle. In the context of the discussion about the existence and origin of ultra-dense silicate melts at the core-mantle boundary, these investigations show that the higher density compared to the surrounding material cannot be explained by structural features alone, but requires a distinct chemical composition. The results also suggest that only very low solubilities of noble gases are to be expected for melts in the lower mantle, so that the structural properties clearly influence the overall budget and transport of noble gases in the Earth’s mantle.
Botulinum neurotoxin (BoNT) is produced by the anaerobic bacterium Clostridium botulinum. It is one of the most potent toxins found in nature and can enter motor neurons (MN) to cleave proteins necessary for neurotransmission, resulting in flaccid paralysis. The toxin has applications in both traditional and esthetic medicine. Since BoNT activity varies between batches despite identical protein concentrations, the activity of each lot must be assessed. The gold-standard method is the mouse lethality assay, in which mice are injected with a BoNT dilution series to determine the dose at which half of the animals die of peripheral asphyxia. Ethical concerns surrounding the use of animals in toxicity testing necessitate the creation of alternative model systems to measure the potency of BoNT.
Prerequisites for a successful model are that it is human-specific, that it monitors the complete toxic pathway of BoNT, and that it is highly sensitive, at least in the range of the mouse lethality assay. One model system was developed by our group, in which human SIMA neuroblastoma cells were genetically modified to express a reporter protein (GLuc) that is packaged into neurosecretory vesicles and that, upon cellular depolarization, is released – or inhibited by BoNT – simultaneously with neurotransmitters. This assay has great potential but suffers from the inherent disadvantages that the GLuc sequence was randomly inserted into the genome and that the tumor cells have only limited sensitivity and specificity to BoNT. This project aims to remedy these deficits: induced pluripotent stem cells (iPSCs) were genetically modified by the CRISPR/Cas9 method to insert the GLuc sequence into the AAVS1 genomic safe-harbor locus, precluding genetic disruption through non-specific integrations. Furthermore, GLuc was modified to associate with signal peptides that direct it to the lumen of both large dense-core vesicles (LDCV), which transport neuropeptides, and synaptic vesicles (SV), which package neurotransmitters. Finally, the modified iPSCs were differentiated into motor neurons (MNs), the true physiological target of BoNT and hypothetically the most sensitive and specific cells available for the MoN-Light BoNT assay.
iPSCs were transfected to incorporate one of three constructs directing GLuc into LDCVs, one construct directing GLuc into SVs, or one "no tag" GLuc control construct. The LDCV constructs fused GLuc with the signal peptides of proopiomelanocortin (hPOMC-GLuc), chromogranin A (CgA-GLuc), and secretogranin II (SgII-GLuc), all proteins found in the LDCV lumen. The SV construct comprises a VAMP2-GLuc fusion sequence, exploiting the SV membrane-associated protein synaptobrevin (VAMP2). The no-tag construct expresses GLuc non-specifically throughout the cell and serves as a comparator for the localization of the vesicle-directed GLuc.
The clones were characterized to ensure that the GLuc sequence was incorporated only into the AAVS1 safe-harbor locus and that the signal peptides directed GLuc to the correct vesicles. Accurate insertion of GLuc was confirmed by PCR with primers flanking the AAVS1 locus, capable of simultaneously amplifying wildtype and modified alleles. The PCR amplicons, along with an insert-specific amplicon from candidate clones, were Sanger-sequenced to confirm the correct genomic region and sequence of the inserted DNA. Off-target integrations were analyzed with the newly developed dc-qcnPCR method, in which the insert DNA is quantified by qPCR against autosomal and sex-chromosome-encoded genes. While the majority of clones carried off-target inserts, at least one on-target clone was identified for each construct.
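The quantification step behind such an approach rests on standard relative-quantification arithmetic: the insert's Ct value is compared against a reference gene of known copy number. A minimal sketch of that arithmetic (illustrative only; the function name, efficiency assumption, and Ct values are hypothetical, not the dc-qcnPCR implementation):

```python
def copies_per_genome(ct_insert, ct_reference, efficiency=2.0, reference_copies=2):
    """Relative quantification: insert copies per genome, estimated against a
    two-copy autosomal reference gene, assuming equal amplification efficiency."""
    delta_ct = ct_insert - ct_reference
    return reference_copies * efficiency ** (-delta_ct)

# An insert amplifying one cycle later than the two-copy reference suggests
# roughly one integration per genome (i.e., a single on-target insert).
print(copies_per_genome(ct_insert=25.0, ct_reference=24.0))  # → 1.0
```

A clone with substantially more than the expected copies per genome would point to additional off-target integrations.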
Finally, immunofluorescence was utilized to localize GLuc in the selected clones. In iPSCs, the vesicle-directed GLuc should travel through the Golgi apparatus along the neurosecretory pathway, while the no-tag GLuc should not follow this pathway. Initial analyses excluded the CgA-GLuc and SgII-GLuc clones due to poor-quality protein visualization. The colocalization of GLuc with the Golgi was analyzed by confocal microscopy and quantified: GLuc was strongly colocalized with the Golgi in the hPOMC-GLuc clone (r = 0.85±0.09), moderately in the VAMP2-GLuc clone (r = 0.65±0.01), and, as expected, only weakly in the no-tag GLuc clone (r = 0.44±0.10). Confocal microscopy of differentiated MNs was used to analyze the colocalization of GLuc with vesicle-associated proteins: SgII for LDCVs in the hPOMC-GLuc clone (r = 0.85±0.08) and synaptophysin for SVs in the VAMP2-GLuc clone (r = 0.65±0.07). GLuc was also expressed in the same cells as the MN-associated protein Islet1.
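The r values reported here are Pearson correlation coefficients computed pixel-wise between two fluorescence channels. A minimal sketch of that computation, assuming pre-registered and background-corrected images (NumPy-based; a stand-in for, not a reproduction of, the actual analysis pipeline):

```python
import numpy as np

def pearson_colocalization(channel_a, channel_b):
    """Pixel-wise Pearson correlation r between two fluorescence channels."""
    a = np.asarray(channel_a, dtype=float).ravel()
    b = np.asarray(channel_b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

# Two identical images are perfectly colocalized (r = 1).
img = np.random.default_rng(0).random((64, 64))
print(round(pearson_colocalization(img, img), 3))  # → 1.0
```

Values near 1 indicate strong colocalization, values near 0 indicate none, as in the Golgi comparison above.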
A significant portion of GLuc was found in the correct cell type and compartment. However, in the MoN-Light BoNT assay, the hPOMC-GLuc clone could not be provoked to reliably release GLuc upon cellular depolarization; the depolarization protocol for this clone must be further optimized to produce reliable and specific GLuc release upon stimulation. The VAMP2-GLuc clone, on the other hand, released GLuc upon exposure to the muscarinic and nicotinic agonist carbachol. Furthermore, simultaneous exposure to the calcium chelator EGTA significantly repressed the carbachol-provoked release, indicating that the detected GLuc was likely released through vesicular fusion at the presynaptic terminal. The application of the VAMP2-GLuc clone in the MoN-Light BoNT assay must still be verified, but the results thus far indicate that this clone could be suitable for BoNT toxicity assessment.
Inequalities in health are a prevalent feature of societies, and as societies we condemn inequalities that are rooted in immutable circumstances such as gender, race, and parental background. Consequently, policy makers are interested in measuring and understanding the causes of health inequalities rooted in circumstances. However, identifying causal estimates of these relationships is very ambitious, for reasons such as the presence of confounders or measurement error in the data. This thesis contributes to this ambitious endeavour by addressing these challenges in four chapters.
In the first chapter, I use 25 years of rich health information to describe three features of intergenerational health mobility in Germany. First, I describe the joint permanent-health distribution of parents and their children: a ten-percentile increase in parental permanent health is associated with a 2.3-percentile increase in their child's health. Second, a percentile-point increase in the permanent-health rank is associated with a 0.8% to 1.4% increase in permanent income for children and parents, respectively. Non-linearities in the association between permanent health and income create incentives to escape the bottom of the permanent-health distribution. Third, upward mobility in permanent health varies with parental socio-economic status.
In the second chapter, we estimate the effect of maternal schooling on children's mental health in adulthood. Using the Socio-Economic Panel and a mental health measure based on the SF-12 questionnaire, we exploit a compulsory schooling law reform to identify the causal effect of maternal schooling on children's mental health. While theoretical predictions are ambiguous, we do not find that a mother's schooling has an effect on her children's mental health. However, we find a positive effect on children's physical health, operating mainly through physical functioning. In addition, despite the absence of a reduced-form effect on mental health, we find evidence that the number of friends moderates the relationship between maternal schooling and children's mental health.
In the third chapter, against a backdrop of increasing violence against non-natives, we estimate the effect of hate crime on refugees' mental health in Germany. For this purpose, we combine two datasets: administrative records on xenophobic crimes against refugee shelters from the Federal Criminal Police Office and the IAB-BAMF-SOEP Survey of Refugees. We apply a regression discontinuity design in time to estimate the effect of interest. Our results indicate that hate crime has a substantial negative effect on several mental health indicators, including the Mental Component Summary score and the Patient Health Questionnaire-4 score. The effects are stronger for refugees living in closer geographic proximity to the focal hate crime and for refugees with low country-specific human capital. While the estimated effect is only transitory, we argue that negative mental health shocks during the critical period after arrival can have important long-term consequences.
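A regression discontinuity design in time compares outcomes just before and just after the focal event, allowing for separate time trends on each side of the cutoff. A self-contained sketch on synthetic data (illustrative only; the chapter's actual specification, bandwidth, and controls differ):

```python
import numpy as np

rng = np.random.default_rng(7)

def rdd_in_time(days, outcome, bandwidth=30):
    """Sharp regression discontinuity in time: local linear fit on either side
    of the event at day 0; the jump is the coefficient on the 'after' dummy."""
    sel = np.abs(days) <= bandwidth
    d, y = days[sel], outcome[sel]
    after = (d >= 0).astype(float)
    X = np.column_stack([np.ones_like(d), after, d, d * after])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]  # estimated discontinuity at the cutoff

# Synthetic outcome dropping by 2 points after an event on day 0.
days = rng.integers(-60, 61, size=2000).astype(float)
y = 50 + 0.01 * days - 2.0 * (days >= 0) + rng.normal(0, 1, size=2000)
est = rdd_in_time(days, y)  # should be close to the true jump of -2
print(round(est, 2))
```

In the chapter, the "event" is the focal hate crime and the outcome is a mental health score of surveyed refugees interviewed around that date.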
In the last chapter of this thesis, we investigate how the economic consequences of the pandemic and the government-mandated measures to contain its spread affect the self-employed – particularly women – in Germany. For our analysis, we use representative real-time survey data in which respondents were asked about their situation during the COVID-19 pandemic. Our findings indicate that among the self-employed, who generally face a higher likelihood of income losses due to COVID-19 than employees, women are 35% more likely to experience income losses than their male counterparts. We do not find a comparable gender gap among employees. Our results further suggest that the gender gap among the self-employed is largely explained by the fact that women disproportionately work in industries that are more severely affected by the COVID-19 pandemic. Our analysis of potential mechanisms reveals that women are significantly more likely to be affected by government-imposed restrictions, e.g., the regulation of opening hours. We conclude that future policy measures intended to mitigate the consequences of such shocks should account for this considerable variation in economic hardship.
Halide perovskites are a class of novel photovoltaic materials that have recently attracted much attention in the photovoltaics research community due to their highly promising optoelectronic properties, including large absorption coefficients and long carrier lifetimes. The charge carrier mobility of halide perovskites is investigated in this thesis by THz spectroscopy, a contact-free technique that yields the intra-grain sum mobility of electrons and holes in a thin film.
The polycrystalline halide perovskite thin films, provided by Potsdam University, show moderate mobilities in the range of 21.5 to 33.5 cm²V⁻¹s⁻¹. It is shown in this work that the room-temperature mobility is limited by charge carrier scattering at polar optical phonons. The mobility at low temperature is likely limited by scattering at charged and neutral impurities at impurity concentrations N = 10¹⁷-10¹⁸ cm⁻³. Furthermore, it is shown that exciton formation may decrease the mobility at low temperatures. Scattering at acoustic phonons can be neglected at both low and room temperature. The analysis of mobility spectra over a broad range of temperatures for perovskites with various cation compositions shows that the cations have a minor impact on the charge carrier mobility.
Low-dimensional thin films of quasi-2D perovskites with different numbers of [PbI₆]⁴⁻ sheets (n = 2-4), alternating with long organic spacer molecules, were provided by S. Zhang from Potsdam University. They exhibit mobilities in the range of 3.7 to 8 cm²V⁻¹s⁻¹. A clear decrease of the mobility is observed with decreasing number of metal-halide sheets n, which likely arises from charge carrier confinement within the metal-halide layers. Modelling the measured THz mobility with the modified Drude-Smith model yields localization lengths from 0.9 to 3.7 nm, in good agreement with the thicknesses of the metal-halide layers. Additionally, the mobilities are found to depend on the orientation of the layers. The charge carrier dynamics also depends on the number of metal-halide sheets n. For thin films with n = 3-4, the dynamics is similar to that of the 3D metal-halide perovskites. However, the thin film with n = 2 shows clearly different dynamics, with signs of exciton formation observed within 390 fs after photoexcitation.
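The Drude-Smith analysis mentioned above fits the complex THz mobility spectrum with a Drude term carrying a backscattering parameter c, where c → −1 signals full carrier localization. A minimal sketch of the classic Drude-Smith form (parameter values are illustrative; the modified model used in the thesis additionally maps c to a localization length):

```python
import numpy as np

E = 1.602176634e-19     # elementary charge (C)
M_E = 9.1093837015e-31  # electron rest mass (kg)

def drude_smith_mobility(freq_thz, tau_fs, c=0.0, m_eff=0.2):
    """Complex AC mobility mu(w) = (e*tau/m*) / (1 - i*w*tau) * (1 + c/(1 - i*w*tau)).
    c = 0 recovers the plain Drude model; c -> -1 models full backscattering."""
    omega = 2 * np.pi * freq_thz * 1e12   # THz -> rad/s
    tau = tau_fs * 1e-15                  # fs  -> s
    x = 1 - 1j * omega * tau
    return (E * tau / (m_eff * M_E)) / x * (1 + c / x)

# DC limit (freq -> 0): mu = (e*tau/m*) * (1 + c); multiply by 1e4 for cm^2/(V s).
mu_dc = drude_smith_mobility(0.0, tau_fs=30.0, c=-0.5, m_eff=0.2).real * 1e4
print(round(mu_dc, 1))
```

Fitting the measured real and imaginary mobility spectra with such a form yields the scattering time and c, from which a localization length on the scale of the metal-halide layer thickness can be inferred.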
Finally, the charge carrier dynamics of CsPbI3 perovskite nanocrystals was investigated, in particular the effect of post-treatments on the charge carrier transport.
This paper-based dissertation aims to contribute to the open innovation (OI) and technology management (TM) research fields by investigating their mechanisms and potential at the operational level. The dissertation connects the well-established concept of technology management with OI formats and applies them to specific manufacturing technologies within a clearly defined setting.
Technological breakthroughs force firms to continuously adapt and reinvent themselves. The pace of technological innovation and its impact on firms are constantly increasing due to more connected infrastructure and accessible resources (i.e., data and knowledge). Especially in the manufacturing sector, leveraging new technologies is a key element of staying competitive. These technological shifts call for new management practices.
TM supports firms with various tools to manage these shifts at different levels of the firm. It is a multifunctional and multidisciplinary field, as it deals with all aspects of integrating technological issues into business decision-making and is directly relevant to a number of core business processes. Thus, it makes sense to use this theory and its practices as a foundation of this dissertation. However, considering the increasing complexity and number of technologies, it is no longer sufficient for firms to rely solely on internal R&D and established managerial practices. OI can expand these practices by involving distributed innovation processes and accessing further external knowledge sources. This expansion can increase innovation performance and thereby accelerate the time-to-market of technologies.
Research in this dissertation was based on the expectation that OI formats support the R&D activities for manufacturing technologies at the operational level by providing access to resources, knowledge, and leading-edge technology. The dissertation is unique in its rich practical data sets (observations, internal documents, project reviews) drawn from a very large German high-tech firm; the researcher was embedded in an R&D unit within the operational TM department for manufacturing technologies. The analyses include (1) an exploratory in-depth analysis of a crowdsourcing (CS) initiative to elaborate its impact on specific manufacturing technologies, (2) a deductive approach to developing a technology evaluation score model that creates a common understanding of the value of selected manufacturing technologies at the operational level, and (3) an abductive reasoning approach in the form of a longitudinal case study to derive important indicators for the in-process activities of a science-based university-industry collaboration format. Thereby, the dissertation contributes to research and practice: (1) linkages of TM and OI practices to assimilate technologies at the operational level; (2) insights into the impact of CS on manufacturing technologies, together with a guideline for executing CS initiatives in this specific environment; (3) the introduction of manufacturing readiness levels and further criteria into the TM and OI research fields to support decision-makers in gaining a common understanding of the maturity of manufacturing technologies; and (4) context-specific indicators for science-based university-industry collaboration projects, along with a holistic framework connecting TM with the university-industry collaboration approach.
The findings of this dissertation illustrate that OI formats can help accelerate the time-to-market of manufacturing technologies and, by leveraging external capabilities, help fulfill the technical requirements of the product. The conclusions and implications are intended to foster further research and improve managerial practices, evolving TM into an open, collaborative context with interconnections between all internal and external technologies, individuals, and organizational levels involved.
While patients are known to respond differently to drug therapies, current clinical practice often still follows a standardized dosage regimen for all patients. For drugs with a narrow range of effective and safe concentrations, this approach may lead to a high incidence of adverse events or subtherapeutic dosing in the presence of high inter-patient variability. Model-informed precision dosing (MIPD) is a quantitative approach to dose individualization based on mathematical modeling of dose-response relationships that integrates therapeutic drug/biomarker monitoring (TDM) data. MIPD may considerably improve the efficacy and safety of many drug therapies. Current MIPD approaches, however, rely either on pre-calculated dosing tables or on simple point predictions of the therapy outcome. These approaches lack a quantification of uncertainties and the ability to account for delayed effects. In addition, the underlying models are not improved while being applied to patient data. Therefore, current approaches are not well suited for informed clinical decision-making based on a differentiated understanding of the individually predicted therapy outcome.
The objective of this thesis is to develop mathematical approaches for MIPD, which (i) provide efficient fully Bayesian forecasting of the individual therapy outcome including associated uncertainties, (ii) integrate Markov decision processes via reinforcement learning (RL) for a comprehensive decision framework for dose individualization, (iii) allow for continuous learning across patients and hospitals. Cytotoxic anticancer chemotherapy with its major dose-limiting toxicity, neutropenia, serves as a therapeutically relevant application example.
For more comprehensive therapy forecasting, we apply Bayesian data assimilation (DA) approaches, integrating patient-specific TDM data into mathematical models of chemotherapy-induced neutropenia that build on prior population analyses. The value of uncertainty quantification is demonstrated, as it allows reliable computation of patient-specific probabilities of relevant clinical quantities, e.g., the neutropenia grade. In view of novel home-monitoring devices that increase the amount of available TDM data, the data processing of sequential DA methods proves to be more efficient and facilitates handling of the variability between dosing events.
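At its core, sequential DA of TDM data amounts to repeatedly reweighting a sample of individual parameter values by the likelihood of each new monitoring observation. A toy particle-filter update illustrating this idea (all numbers and the linear "model" are illustrative, not the thesis's neutropenia model):

```python
import numpy as np

rng = np.random.default_rng(42)

def sequential_update(particles, weights, observation, predict, obs_sd):
    """One cycle of sequential Bayesian updating: reweight prior samples of an
    individual parameter by the Gaussian likelihood of the newest observation."""
    likelihood = np.exp(-0.5 * ((observation - predict(particles)) / obs_sd) ** 2)
    weights = weights * likelihood
    return weights / weights.sum()

# Toy setup: parameter theta with prior N(1, 0.5^2), model prediction y = 2*theta,
# observed y = 3.0 with measurement sd 0.2; posterior mean should be near 1.48.
particles = rng.normal(1.0, 0.5, size=5000)
weights = np.full(5000, 1 / 5000)
weights = sequential_update(particles, weights, 3.0, lambda th: 2 * th, 0.2)
posterior_mean = float(np.sum(weights * particles))
print(round(posterior_mean, 2))
```

Each new TDM observation triggers one such update, so the individual posterior sharpens over the course of therapy without refitting the full model.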
By transferring concepts from DA and RL, we develop novel approaches for MIPD. While DA-guided dosing integrates individualized uncertainties into dose selection, RL-guided dosing provides a framework that considers the delayed effects of dose selections. The combined DA-RL approach takes both aspects into account simultaneously and thus represents a holistic approach to MIPD. Additionally, we show that RL can be used to gain insights into which patient characteristics are important for dose selection. In a simulation study based on a recent clinical study (the CEPAC-TDM trial), the novel dosing strategies substantially reduce the occurrence of both subtherapeutic and life-threatening neutropenia grades compared to currently used MIPD approaches.
If MIPD is to be implemented in routine clinical practice, a certain bias of the underlying model is inevitable, as such models are typically based on data from comparably small clinical trials that reflect the diversity of real-world patient populations only to a limited extent. We propose a sequential hierarchical Bayesian inference framework that enables continuous cross-patient learning of the underlying model parameters of the target patient population. Importantly, the approach requires only summary information of the individual patient data to update the model. This separation of individual inference from population inference enables implementation across different centers of care.
The proposed approaches substantially improve current MIPD approaches, taking into account new trends in health care and aspects of practical applicability. They enable progress towards more informed clinical decision-making, ultimately increasing patient benefits beyond the current practice.
Conceptual knowledge about objects, people and events in the world is central to human cognition, underlying core cognitive abilities such as object recognition and use, and word comprehension. Previous research indicates that concepts consist of perceptual and motor features represented in modality-specific perceptual-motor brain regions. In addition, cross-modal convergence zones integrate modality-specific features into more abstract conceptual representations.
However, several questions remain open: First, to what extent does the retrieval of perceptual-motor features depend on the concurrent task? Second, how do modality-specific and cross-modal regions interact during conceptual knowledge retrieval? Third, which brain regions are causally relevant for conceptually-guided behavior? This thesis addresses these three key issues using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in the healthy human brain.
Study 1 - an fMRI activation study - tested to what extent the retrieval of sound and action features of concepts, and the resulting engagement of auditory and somatomotor brain regions, depend on the concurrent task. Forty healthy human participants performed three different tasks - lexical decision, sound judgment, and action judgment - on words with a high or low association to sounds and actions. We found that modality-specific regions selectively respond to task-relevant features: auditory regions selectively responded to sound features during sound judgments, and somatomotor regions selectively responded to action features during action judgments. Unexpectedly, several regions (e.g. the left posterior parietal cortex; PPC) exhibited a task-dependent response to both sound and action features. We propose that these regions are "multimodal", not "amodal", convergence zones which retain modality-specific information.
Study 2 - an fMRI connectivity study - investigated the functional interaction between modality-specific and multimodal areas during conceptual knowledge retrieval. Using the above fMRI data, we asked (1) whether modality-specific and multimodal regions are functionally coupled during sound and action feature retrieval, (2) whether their coupling depends on the task, (3) whether information flows bottom-up, top-down, or bidirectionally, and (4) whether their coupling is behaviorally relevant. We found that functional coupling between multimodal and modality-specific areas is task-dependent, bidirectional, and relevant for conceptually-guided behavior. Left PPC acted as a connectivity "switchboard" that flexibly adapted its coupling to task-relevant modality-specific nodes.
Hence, neuroimaging studies 1 and 2 suggested a key role of left PPC as a multimodal convergence zone for conceptual knowledge. However, as neuroimaging is correlational, it remained unknown whether left PPC plays a causal role as a multimodal conceptual hub. Therefore, study 3 - a TMS study - tested the causal relevance of left PPC for sound and action feature retrieval. We found that TMS over left PPC selectively impaired action judgments on low sound-low action words, as compared to sham stimulation. Computational simulations of the TMS-induced electrical field revealed that stronger stimulation of left PPC was associated with worse performance on action, but not sound, judgments. These results indicate that left PPC causally supports conceptual processing when action knowledge is task-relevant and cannot be compensated by sound knowledge. Our findings suggest that left PPC is specialized for action knowledge, challenging the view of left PPC as a multimodal conceptual hub.
Overall, our studies support "hybrid theories", which posit that conceptual processing involves both modality-specific perceptual-motor regions and cross-modal convergence zones. In our new model of the conceptual system, we propose that conceptual processing relies on a representational hierarchy from modality-specific to multimodal up to amodal brain regions. Crucially, this hierarchical system is flexible, with different regions and connections being engaged in a task-dependent fashion. Our model not only reconciles the seemingly opposing grounded cognition and amodal theories, but also incorporates the task dependency of conceptually related brain activity and connectivity, thereby resolving several current issues on the neural basis of conceptual knowledge retrieval.
Supernova remnants (SNRs) are discussed as the most promising sources of galactic cosmic rays (CRs). The theory of diffusive shock acceleration (DSA) predicts particle spectra in rough agreement with observations. Upon closer inspection, however, the photon spectra of observed SNRs indicate that the particle spectra produced at SNR shocks deviate from the standard expectation. This work suggests a viable explanation for the softening of the particle spectra in SNRs. The basic idea is the re-acceleration of particles in the turbulent region immediately downstream of the shock. This thesis shows that the re-acceleration of particles by fast-mode waves in the downstream region can be efficient enough to impact particle spectra over several decades in energy. To demonstrate this, a generic SNR model is presented, in which the evolution of particles is described by the reduced transport equation for CRs. It is shown that the resulting particle spectra and the corresponding synchrotron spectra are significantly softer than in the standard case. Next, this work outlines RATPaC, a code developed to model particle acceleration and the corresponding photon emission in SNRs. RATPaC solves the particle transport equation in test-particle mode using hydrodynamic simulations of the SNR plasma flow. The background magnetic field can either be computed from the induction equation or follow analytic profiles. This work presents an extended version of RATPaC that accounts for stochastic re-acceleration by fast-mode waves, which provide diffusion of particles in momentum space. This version is then applied to model the young historical SNR Tycho. According to radio observations, Tycho's SNR features a radio spectral index of approximately −0.65. In previous modeling approaches, this has been attributed to strong Alfvénic drift, which is assumed to operate in the shock vicinity. In this work, the problems and inconsistencies of this scenario are discussed.
Instead, stochastic re-acceleration of electrons in the immediate downstream region of Tycho's SNR is suggested as the cause of the soft radio spectrum. Furthermore, this work investigates two different scenarios for the magnetic-field distribution inside Tycho's SNR. It is concluded that magnetic-field damping is needed to account for the filaments observed in the radio range. Two models are presented for Tycho's SNR, both of which feature a strong hadronic contribution; a purely leptonic model is thus considered very unlikely. In addition to the detailed modeling of Tycho's SNR, this dissertation presents a relatively simple one-zone model for the young SNR Cassiopeia A and an interpretation of the recently analyzed VERITAS and Fermi-LAT data. It shows that the γ-ray emission of Cassiopeia A cannot be explained without a hadronic contribution and that the remnant accelerates protons up to TeV energies. Thus, Cassiopeia A is unlikely to be a PeVatron.
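For reference, the reduced CR transport equation referred to above, including the momentum-diffusion term that encodes stochastic re-acceleration, can be written in one standard form (notation assumed here: $N$ the differential number density, $\mathbf{u}$ the plasma velocity, $D_r$ and $D_p$ the spatial and momentum diffusion coefficients, $\dot{p}$ the momentum-loss rate, $Q$ the injection term):

```latex
\[
\frac{\partial N}{\partial t}
  = \nabla \cdot \bigl( D_r \nabla N - \mathbf{u}\, N \bigr)
  + \frac{\partial}{\partial p}\!\left[ p^{2} D_p \,\frac{\partial}{\partial p}\,\frac{N}{p^{2}} \right]
  - \frac{\partial}{\partial p}\!\left[ \Bigl( \dot{p} - \frac{p}{3}\, \nabla \cdot \mathbf{u} \Bigr) N \right]
  + Q
\]
```

Setting $D_p = 0$ recovers standard test-particle transport without re-acceleration; the fast-mode scenario above corresponds to a non-zero $D_p$ confined to the turbulent downstream region.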
Filaments are omnipresent features in the solar chromosphere, the atmospheric layer of the Sun located above the photosphere, the visible surface of the Sun. They are clouds of plasma reaching from the photosphere into the chromosphere, and even into the outermost atmospheric layer, the corona, and they are stabilized by the magnetic field. If the magnetic field is disturbed, filaments can erupt as coronal mass ejections (CMEs), releasing plasma into space that can also hit the Earth. A special type of filament is the polar crown filament, which forms at the interface between the unipolar field of the poles and flux of opposite magnetic polarity that has been transported towards the poles. This flux transport is related to the global dynamo of the Sun and can therefore be analyzed indirectly through polar crown filaments. The main objective of this thesis is to better understand the physical properties and environment of high-latitude and polar crown filaments, which is approached from two perspectives: (1) analyzing the large-scale properties of high-latitude and polar crown filaments with full-disk Hα observations from the Chromospheric Telescope (ChroTel) and (2) determining the relation of polar crown and high-latitude filaments from the chromosphere to the lower-lying photosphere with high-spatial-resolution observations of the Vacuum Tower Telescope (VTT), which reveal the smallest details.
The Chromospheric Telescope (ChroTel) is a small 10-cm robotic telescope at the Observatorio del Teide on Tenerife (Spain), which observes the entire Sun in Hα, Ca IIK, and He I 10830 Å. We present a new calibration method that includes limb-darkening correction, removal of non-uniform filter transmission, and determination of He I Doppler velocities. Chromospheric full-disk filtergrams are often obtained with Lyot filters, which may display non-uniform transmission causing large-scale intensity variations across the solar disk. Removal of a 2D symmetric limb-darkening function from full-disk images results in a flat background. However, transmission artifacts remain and are even more distinct in these contrast-enhanced images. Zernike polynomials are uniquely appropriate for fitting these large-scale intensity variations of the background. The Zernike coefficients show a distinct temporal evolution for ChroTel data, which is likely related to the telescope's alt-azimuth mount, which introduces image rotation. In addition, applying this calibration to sets of seven filtergrams that cover the He I triplet facilitates determining chromospheric Doppler velocities. To validate the method, we use three datasets with varying levels of solar activity. The Doppler velocities are benchmarked with respect to co-temporal high-resolution spectroscopic data of the GREGOR Infrared Spectrograph (GRIS). Furthermore, this technique can be applied to ChroTel Hα and Ca IIK data. The calibration method for ChroTel filtergrams can easily be adapted to other full-disk data exhibiting unwanted large-scale variations. The spectral region of the He I triplet is a primary choice for high-resolution near-infrared spectropolarimetry; here, the improved calibration of ChroTel data will provide valuable context data.
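Fitting Zernike polynomials to the residual background is, at its core, a linear least-squares problem on the solar disk. A minimal sketch using the first six (unnormalized) Cartesian Zernike terms (grid size and test background are illustrative, not the ChroTel pipeline):

```python
import numpy as np

def zernike_background(image, mask=None):
    """Least-squares fit of low-order Zernike terms (piston, tilts, defocus,
    astigmatism) modelling smooth large-scale intensity variations on the disk."""
    ny, nx = image.shape
    y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
    r2 = x**2 + y**2
    disk = r2 <= 1 if mask is None else mask
    # First six Zernike polynomials on the unit disk, in Cartesian form.
    basis = [np.ones_like(x), x, y, 2 * r2 - 1, x**2 - y**2, 2 * x * y]
    A = np.stack([b[disk] for b in basis], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, image[disk], rcond=None)
    model = np.zeros_like(image)
    model[disk] = A @ coeffs
    return model, coeffs

# A purely tilted background (1 + 0.3*x) should be recovered almost exactly.
ny = nx = 65
y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]
flat = 1.0 + 0.3 * x
model, coeffs = zernike_background(flat)
print(round(coeffs[1], 3))
```

Subtracting the fitted model from the limb-darkening-corrected filtergram leaves a flat background, on which filaments and plage stand out for further processing.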
Polar crown filaments form above the polarity inversion line between the old magnetic flux of the previous cycle and the new magnetic flux of the current cycle. Studying their appearance and properties can lead to a better understanding of the solar cycle. We use full-disk data from ChroTel at the Observatorio del Teide, Tenerife, Spain, taken in three different chromospheric absorption lines (Hα 6563 Å, Ca IIK 3933 Å, and He I 10830 Å), and we create synoptic maps. In addition, the spectroscopic He I data allow us to compute Doppler velocities and to create synoptic Doppler maps. ChroTel data cover the rising and decaying phases of Solar Cycle 24 on about 1000 days between 2012 and 2018. Based on these data, we automatically extract polar crown filaments with image-processing tools and study their properties. We compare contrast maps of polar crown filaments with those of quiet-Sun filaments. Furthermore, we present a super-synoptic map summarizing the entire ChroTel database. In summary, we provide statistical properties, i.e., the number and location of filaments, their area, and their tilt angle, for both the maximum and the declining phase of Solar Cycle 24. This demonstrates that ChroTel provides a promising dataset to study the solar cycle.
The cyclic behavior of polar crown filaments can be monitored by regular full-disk Hα observations, and ChroTel provides such regular observations of the Sun in three chromospheric wavelengths. To analyze the cyclic behavior and the statistical properties of polar crown filaments, we have to extract the filaments from the images. Manual extraction is tedious, and extraction with morphological image-processing tools produces a large number of false-positive detections, whose manual removal takes too much time. Automatic and reliable object detection and extraction allows us to process more data in a shorter time. We present an overview of the ChroTel database and a proof of concept of a machine-learning application that enables a unified extraction of, for example, filaments from ChroTel data.
The chromospheric Hα spectral line dominates the spectrum of the Sun and other stars. In the stellar regime, this spectral line is already used as a powerful tracer of magnetic activity, whereas for the Sun other tracers are typically used to monitor solar activity. Nonetheless, the Sun is observed constantly in Hα with globally distributed ground-based full-disk imagers. The aim of this study is to introduce Hα as a tracer of solar activity and to compare it to other established indicators. We discuss the newly created imaging Hα excess with a view to possible applications in the modelling of stellar atmospheres. In particular, we try to determine how constant the mean intensity of the Hα excess and the number density of low-activity regions are between solar maximum and minimum. Furthermore, we investigate whether the active-region coverage fraction or the changing emission strength in the active regions dominates the time variability in solar Hα observations. We use ChroTel observations of full-disk Hα filtergrams and morphological image-processing techniques to extract the positive and negative imaging Hα excess for bright features (plage regions) and dark absorption features (filaments and sunspots), respectively. We describe the evolution of the Hα excess during Solar Cycle 24 and compare it to other well-established tracers: the relative sunspot number, the F10.7 cm radio flux, and the Mg II index. Moreover, we discuss possible applications of the Hα excess for stellar activity diagnostics and for the contamination of exoplanet transmission spectra. The positive and negative Hα excess follow the behavior of solar activity over the course of the cycle, and the positive Hα excess is closely correlated with the chromospheric Mg II index. The negative Hα excess, created from dark features like filaments and sunspots, is introduced as a tracer of solar activity for the first time.
We investigated the mean intensity distribution of active regions for solar minimum and maximum and found that the shapes of both distributions are very similar, but with different amplitudes. This might be related to the relatively stable coronal temperature component during the solar cycle. Furthermore, we found that the coverage fraction of the Hα excess and the Hα excess of bright features are strongly correlated, which is relevant for the modelling of stellar and exoplanet atmospheres.
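The morphological extraction of bright and dark features from normalized Hα filtergrams described above can be sketched in a few lines. This is a minimal illustration, not the thesis code: the thresholds, the structuring element, and the minimum feature size are hypothetical choices.

```python
import numpy as np
from scipy import ndimage

def segment_halpha(img, bright_thr=1.1, dark_thr=0.9, min_pixels=25):
    """Segment bright (plage) and dark (filament/sunspot) candidate
    features from a quiet-Sun-normalized full-disk H-alpha filtergram.
    All parameter values are illustrative, not from the thesis."""
    bright = img > bright_thr   # positive H-alpha excess candidates
    dark = img < dark_thr       # negative H-alpha excess candidates
    # Morphological opening removes isolated noise pixels.
    struct = np.ones((3, 3), dtype=bool)
    bright = ndimage.binary_opening(bright, structure=struct)
    dark = ndimage.binary_opening(dark, structure=struct)

    def prune(mask):
        # Discard connected components smaller than min_pixels.
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        return np.isin(labels, 1 + np.flatnonzero(sizes >= min_pixels))

    return prune(bright), prune(dark)
```

On a synthetic disk image, a compact plage-like patch ends up in the bright mask and a filament-like patch in the dark mask, while single noisy pixels are removed by the opening step.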
High-resolution observations of polar crown and high-latitude filaments are scarce. We present a unique sample of such filaments observed in high-resolution Hα narrow-band filtergrams and broad-band images, which were obtained with a new fast camera system at the VTT. ChroTel provided full-disk context observations in Hα, Ca II K, and He I 10830 Å. The Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) provided line-of-sight magnetograms and ultraviolet (UV) 1700 Å filtergrams, respectively. We study filigree in the vicinity of polar crown and high-latitude filaments and relate their locations to magnetic concentrations at the filaments’ footpoints. Bright points are a well-studied phenomenon in the photosphere at low latitudes, but they have not yet been studied in the quiet network close to the poles. We examine size, area, and eccentricity of bright points and find that their morphology is very similar to that of their counterparts at lower latitudes, but their sizes and areas are larger. Bright points at the footpoints of polar crown filaments are preferentially located at stronger magnetic flux concentrations, which are related to bright regions at the border of supergranules as observed in UV filtergrams. Examining the evolution of bright points on three consecutive days reveals that their number increases while the filament decays, which indicates that they affect the equilibrium of the cool plasma contained in filaments.
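Shape statistics such as area and eccentricity of a segmented bright point can be derived from the second moments of its pixel coordinates. The following is a hedged sketch of that standard computation (the function name and the moment-based eccentricity definition are illustrative choices, not necessarily those used in the thesis):

```python
import numpy as np

def blob_shape(mask):
    """Area (pixel count) and eccentricity of a single binary feature,
    e.g. a segmented bright point, from the eigenvalues of the covariance
    matrix of its pixel coordinates. Eccentricity is 0 for a circularly
    symmetric blob and approaches 1 for a line-like one."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    cov = np.cov(np.vstack([ys, xs]))        # 2x2 coordinate covariance
    lmax, lmin = np.sort(np.linalg.eigvalsh(cov))[::-1]
    ecc = np.sqrt(1.0 - lmin / lmax)         # ratio of principal axes
    return area, ecc
```

A square patch yields eccentricity near 0, a one-pixel-wide strip yields eccentricity 1, matching the intuitive extremes.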
Today, the Mekong Delta in southern Vietnam is home to 18 million people. The delta also accounts for more than half of the country’s food production and 80% of its exported rice. Due to its low elevation, it is highly susceptible to the risk of fluvial and coastal flooding. Although extreme floods often result in extensive damage and economic losses, the annual flood pulse from the Mekong is vital to sustain agricultural cultivation and the livelihoods of millions of delta inhabitants.
Delta-wide risk management and adaptation strategies are required to mitigate the adverse impacts of extreme events while capitalising on the benefits of floods. However, proper flood risk management has not yet been implemented in the Vietnamese Mekong Delta (VMD), because the quantification of flood damage is often overlooked and the risks are thus not quantified. So far, flood management has focused exclusively on engineering measures, i.e. high- and low-dyke systems, aiming at flood-free conditions or partial inundation control without any consideration of the actual risks or a cost-benefit analysis. An analysis of future delta flood dynamics driven by climatic and anthropogenic stressors is therefore valuable to facilitate the transition from sole hazard control towards a risk management approach, which is more cost-effective and also robust against future changes in risk.
Building on these research gaps, this thesis investigates the current state and future projections of flood hazard, damage and risk to rice cultivation, the most important economic activity in the VMD. The study quantifies the changes in risk and hazard brought about by the development of delta-based flood control measures in recent decades, and analyses the expected changes in risk driven by the changing climate, rising sea level, deltaic land subsidence and, finally, the development of hydropower projects in the Mekong Basin. For this purpose, flood trend analyses and comprehensive hydraulic modelling were performed, together with the development of a concept to quantify flood damage and risk to rice cultivation.
The analysis of observed flood levels revealed strong and robust increasing trends in flood peak and duration downstream of the high-dyke areas, with a step change in 2000/2001, i.e. after the disastrous flood which initiated the high-dyke development. These changes were in contrast to the negative trends detected upstream, suggesting that the high-dyke development has shifted flood hazard downstream. The findings of the trend analysis were later confirmed by hydraulic simulations of the two recent extreme floods in 2000 and 2011, in which the hydrological boundaries and dyke system settings were interchanged.
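The Mann-Kendall test is a standard nonparametric choice for detecting monotonic trends in hydrological series like the flood peaks and durations above. A minimal sketch (not necessarily the exact test variant used in the thesis; ties are ignored for simplicity):

```python
import math
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test on a 1-D series (no tie correction).
    Returns the S statistic and the normal-approximation Z score;
    positive Z indicates an increasing trend, |Z| > 1.96 is significant
    at the 5% level."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S counts concordant minus discordant pairs over all i < j.
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

Applied to a strictly increasing series of annual flood peaks, the test returns the maximum possible S and a significantly positive Z; a decreasing series gives the mirrored result.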
However, the high-dyke system was not the only, and often not the main, cause of the shift in flood hazard, as a comparative analysis of these two extreme floods showed. The high-dyke development was responsible for 20–90% of the observed changes in flood level between 2000 and 2011, with large spatial variability. The particular flood hydrograph of the two events had the highest contribution in the northern part of the delta, while the tidal level had a 2–3 times higher influence than the high-dyke development in the lower-central and coastal areas downstream of the high-dyke areas. The impact of the high-dyke development was highest in the areas immediately downstream of the high-dyke areas, just south of the Cambodia-Vietnam border. The hydraulic simulations also confirmed that the concurrence of the flood peak with spring tides, i.e. high sea level along the coast, substantially amplified the flood level and inundation in the central and coastal regions.
The risk assessment quantified the economic losses to rice cultivation at USD 25 million and USD 115 million (0.02–0.1% of the total GDP of Vietnam in 2011) for the 10-year and 100-year floods, respectively, with an expected annual damage of about USD 4.5 million. A particular finding is that the flood damage is highly sensitive to flood timing: a 10-year event with an early peak, i.e. in late August to September, could cause as much damage as a 100-year event peaking in October. This finding underlines the importance of reliable early flood warning, which could substantially reduce the damage to rice crops and thus the risk.
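The expected annual damage (EAD) cited above is conventionally computed as the integral of damage over the annual exceedance probability of the flood events. A minimal sketch of this standard risk integral follows; the damage figures in the usage example are illustrative only and are not meant to reproduce the thesis result of USD 4.5 million:

```python
import numpy as np

def expected_annual_damage(return_periods, damages):
    """Expected annual damage as the integral of damage over annual
    exceedance probability p = 1/T, via the trapezoidal rule.
    Note: this truncates the integral at the smallest and largest
    return periods provided."""
    p = 1.0 / np.asarray(return_periods, dtype=float)
    d = np.asarray(damages, dtype=float)
    order = np.argsort(p)  # integrate from rare (small p) to frequent events
    p, d = p[order], d[order]
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(p)))

# Illustrative numbers only (USD): 10-, 50- and 100-year flood damages.
ead = expected_annual_damage([10, 50, 100], [25e6, 80e6, 115e6])
```

With these hypothetical inputs the trapezoidal integral evaluates to about USD 5.2 million per year; truncating the integral below the 10-year event is a common simplification, since frequent floods in the VMD are largely beneficial rather than damaging.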
The developed risk assessment concept was furthermore applied to investigate two high-dyke development alternatives, which are currently under discussion among the administrative bodies in Vietnam, but also in the public. The first option, favouring the utilization of the current high-dyke compartments as flood retention areas instead of for rice cropping during the flood season, could reduce flood hazard and expected losses by 5–40%, depending on the region of the delta. In contrast, the second option, promoting the further extension of the areas protected by high dykes to facilitate a third rice crop on a larger area, tripled the current expected annual flood damage. This finding challenges the expected economic benefit of triple rice cultivation, in addition to the already known reduction of the nutrient supply from floodplain sedimentation and thus higher costs for fertilizers.
The economic benefits of the high-dyke and triple rice cropping system are further challenged by the changes in flood dynamics expected in the future. For the middle of the 21st century (2036–2065), an increase of the inundation extent by 20–27% was projected as a consequence of effective sea-level rise. This corresponds to an increase of flood damage to rice crops by USD 26, 40 and 82 million in dry, normal and wet years, respectively, compared to the baseline period 1971–2000.
Hydraulic simulations indicated that the planned massive development of hydropower dams in the Mekong Basin could potentially compensate for the increase in flood hazard and agricultural losses stemming from climate change. However, the benefits of the dams for flood loss mitigation are highly uncertain, because a) the actual development of the dams is highly disputed, b) the operation of the dams is primarily targeted at power generation, not flood control, and c) this would require international agreements and cooperation, which are difficult to achieve in South-East Asia. The theoretical flood mitigation benefit is additionally challenged by a number of negative impacts of the dam development, e.g. the disruption of floodplain inundation in normal, non-extreme flood years. Together with the certain reduction of the sediment and nutrient load to the floodplains, hydropower dams will drastically impair rice and agricultural production, the basis of the livelihoods of millions of delta inhabitants.
In conclusion, the VMD is expected to face increasing threats of tidally induced floods in the coming decades. Protection of the entire delta coastline solely with “hard” engineering flood protection structures is neither technically nor economically feasible; adaptation and mitigation actions are therefore urgently required. Better control and reduction of groundwater abstraction is strongly recommended as an immediate, high-priority action to reduce land subsidence and thus tidal flooding and salinity intrusion in the delta. Hydropower development in the Mekong Basin might offer some theoretical flood protection for the Mekong Delta, but due to uncertainties in the operation of the dams and a number of negative effects, dam development cannot be recommended as a strategy for flood management. The Vietnamese authorities are advised to properly maintain the existing flood protection structures and to develop flexible, risk-based flood management plans. In this context, the study showed that the high-dyke compartments can be utilized for emergency flood management in extreme events. For this purpose, a reliable flood forecast is essential, and the action plan should be laid down in official documents and legislation to assure commitment and consistency in implementation and operation.