Spatio-temporal data denotes a category of data that contains spatial as well as temporal components. For example, time-series of geo-data, thematic maps that change over time, or tracking data of moving entities can be interpreted as spatio-temporal data.
In today's automated world, an increasing number of data sources constantly generate spatio-temporal data. Examples include traffic surveillance systems that gather data about human or vehicle movements, remote-sensing systems that frequently scan our surroundings and produce digital representations of cities and landscapes, and sensor networks in domains such as logistics, animal behavior studies, or climate research.
For the analysis of spatio-temporal data, exploratory analysis methods based on interactive visualization are employed in addition to automatic statistical and data-mining methods. These methods let users explore a data set by interactively manipulating a visualization, thereby harnessing the users' cognitive abilities and domain knowledge to find patterns and gain insight into the data.
This thesis describes a software framework for the visualization of spatio-temporal data, which consists of GPU-based techniques to enable the interactive visualization and exploration of large spatio-temporal data sets. The developed techniques include data management, processing, and rendering, facilitating real-time processing and visualization of large geo-temporal data sets. It includes three main contributions:
- Concept and Implementation of a GPU-Based Visualization Pipeline.
The developed visualization methods are based on the concept of a GPU-based visualization pipeline, in which all steps -- processing, mapping, and rendering -- are implemented on the GPU. With this concept, spatio-temporal data is represented directly in GPU memory, using shader programs to process and filter the data, apply mappings to visual properties, and finally generate the geometric representations for a visualization during the rendering process. Data processing, filtering, and mapping are thereby executed in real-time, enabling dynamic control over the mapping and a visualization process that users can steer interactively.
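As an illustrative, CPU-side sketch of the pipeline idea (the thesis executes these stages in GPU shader programs; all function and variable names here are hypothetical), the filter and attribute-to-color mapping stages might look like this:

```python
import numpy as np

def process(positions, attributes, attr_range):
    """Filter stage: keep only records whose attribute lies in a range."""
    lo, hi = attr_range
    mask = (attributes >= lo) & (attributes <= hi)
    return positions[mask], attributes[mask]

def map_to_color(attributes):
    """Mapping stage: normalize attribute values onto a blue-to-red ramp."""
    rng = np.ptp(attributes) or 1.0
    t = (attributes - attributes.min()) / rng
    # RGB: interpolate between blue (0,0,1) and red (1,0,0)
    return np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)

# toy data: 5 points with one scalar attribute (e.g., speed)
positions = np.arange(10, dtype=float).reshape(5, 2)
speed = np.array([1.0, 3.0, 5.0, 7.0, 9.0])

pos_f, speed_f = process(positions, speed, (3.0, 9.0))  # keep speeds in [3, 9]
colors = map_to_color(speed_f)                          # per-vertex colors
```

On the GPU, the same logic would run per vertex in a shader, with the mapping parameters supplied as uniforms so a user can change them interactively without re-uploading the data.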
- Attributed 3D Trajectory Visualization.
A visualization method has been developed for the interactive exploration of large numbers of 3D movement trajectories. The trajectories are visualized in a virtual geographic environment, supporting basic geometries such as lines, ribbons, spheres, or tubes. Interactive mapping can be applied to visualize the values of per-node or per-trajectory attributes, supporting shape, height, size, color, texturing, and animation as visual properties. Using the dynamic mapping system, several kinds of visualization methods have been implemented, such as focus+context visualization of trajectories using interactive density maps, and space-time cube visualization to focus on the temporal aspects of individual movements.
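The density-map idea behind the focus+context visualization can be sketched, under simplifying assumptions, as an accumulation of trajectory points into a 2D grid; the sample points and grid resolution below are purely illustrative:

```python
import numpy as np

# hypothetical trajectory points (x, y) sampled from many movement paths
points = np.array([[0.10, 0.10], [0.20, 0.15], [0.80, 0.90],
                   [0.15, 0.12], [0.85, 0.88], [0.82, 0.91]])

# accumulate points into a coarse 2D density grid over the unit square
density, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                               bins=4, range=[[0, 1], [0, 1]])

# normalize so the densest cell maps to full opacity in a context layer
opacity = density / density.max()
```

Rendering the normalized grid as a semi-transparent overlay de-emphasizes sparse regions while dense flight corridors remain visually prominent.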
- Geographic Network Visualization.
A method for the interactive exploration of geo-referenced networks has been developed, which enables the visualization of large numbers of nodes and edges in a geographic context. Several geographic environments are supported, such as a 3D globe, as well as 2D maps using different map projections, to enable the analysis of networks in different contexts and scales. Interactive filtering, mapping, and selection can be applied to analyze these geographic networks, and visualization methods for specific types of networks, such as coupled 3D networks or temporal networks, have been implemented.
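The two kinds of geographic environments mentioned, a 3D globe and a projected 2D map, can be illustrated with a minimal node-positioning sketch using standard spherical-coordinate and equirectangular formulas; the function names are assumptions, not the thesis' API:

```python
import math

def latlon_to_globe(lat_deg, lon_deg, radius=1.0):
    """Place a geographic coordinate on a 3D globe of the given radius."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.cos(lat) * math.sin(lon)
    z = radius * math.sin(lat)
    return x, y, z

def latlon_to_map(lat_deg, lon_deg):
    """Equirectangular (plate carrée) projection onto a 2D map in [0,1]^2."""
    return (lon_deg + 180.0) / 360.0, (lat_deg + 90.0) / 180.0

# a network node at the equator/prime meridian lies on the globe's x-axis
# and at the center of the 2D map
globe_pos = latlon_to_globe(0.0, 0.0)
map_pos = latlon_to_map(0.0, 0.0)
```

Edges between nodes would then be drawn between the projected positions, with the projection switchable at runtime to support analysis at different contexts and scales.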
As a demonstration of the developed visualization concepts, interactive visualization tools for two distinct use cases have been developed. The first contains the visualization of attributed 3D movement trajectories of airplanes around an airport. It allows users to explore and analyze the trajectories of approaching and departing aircraft, which were recorded over the period of a month. By applying the interactive visualization methods for trajectory visualization and interactive density maps, analysts can derive insight from the data, such as common flight paths, regular and irregular patterns, or uncommon incidents such as missed approaches at the airport.
The second use case involves the visualization of climate networks, which are geographic networks in the climate research domain. They represent the dynamics of the climate system using a network structure that expresses statistical interrelationships between different regions. The interactive tool allows climate analysts to explore these large networks, analyzing the network's structure and relating it to the geographic background. Interactive filtering and selection enable them to find patterns in the climate data and identify, for example, clusters or flow patterns in the networks.
Delay
(2018)
The emblem of a "diabolical game with time machines" reflects a view of the music effect "delay" and its reciprocal relationship between technology, music production, and reception. The effect, produced by deliberately delaying an acoustic signal and often described as "echo", can be employed and perceived not only as an echo-like delay but also in other ways. This is demonstrated by dub music, in which delay has been a defining stylistic feature from its earliest days to its present form. Lee "Scratch" Perry is both a witness to and an exponent of this development. His work provides a fitting occasion to explore the zeitgeist of delay and its perception as a psychoacoustic effect. The book thus leads from a general description and repositioning of delay to its prominent context of use in dub music, exploring a multilayered cosmos of references spanning nature, technology, religion, occultism, and (music) culture.
Recent years have seen increasingly sophisticated attacks against enterprises. Traditional security solutions like firewalls, anti-virus systems, and intrusion detection systems (IDSs) in general are no longer sufficient to protect an enterprise against these advanced attacks. One popular approach to tackle this issue is to collect and analyze the events generated across the IT landscape of an enterprise. This task is achieved by Security Information and Event Management (SIEM) systems. However, the majority of currently existing SIEM solutions are not capable of handling the massive volume of data and the diversity of event representations. Even if these solutions can collect the data in a central place, they are able neither to extract all relevant information from the events nor to correlate events across various sources. Hence, only rather simple attacks are detected, whereas complex attacks consisting of multiple stages remain undetected. Undoubtedly, security operators of large enterprises are faced with a typical Big Data problem.
In this thesis, we propose and implement a prototypical SIEM system named Real-Time Event Analysis and Monitoring System (REAMS) that addresses the Big Data challenges of event data with common paradigms, such as data normalization, multi-threading, in-memory storage, and distributed processing. In particular, a mostly stream-based event processing workflow is proposed that collects, normalizes, persists, and analyzes events in near real-time. In this regard, we have made various contributions in the SIEM context. First, we propose a high-performance normalization algorithm that is highly parallelized across threads and distributed across nodes. Second, we persist normalized events into an in-memory database for fast querying and correlation in the context of attack detection. Third, we propose various analysis layers, such as anomaly- and signature-based detection, that run on top of the normalized and correlated events. As a result, we demonstrate our capabilities to detect previously known as well as unknown attack patterns. Lastly, we have investigated the integration of cyber threat intelligence (CTI) into the analytical process, for instance, by correlating monitored user accounts with previously collected public identity leaks to identify possibly compromised user accounts.
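The normalization step can be sketched, in a greatly simplified form, as parallel parsing of raw log lines into a common schema; the log format, regex pattern, and field names below are illustrative assumptions, not the actual REAMS schema:

```python
import re
from concurrent.futures import ThreadPoolExecutor

# hypothetical raw log format: timestamp, host, event type, user field
LOG_PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<host>\S+) (?P<event>\w+) user=(?P<user>\w+)")

def normalize(line):
    """Map one raw log line onto a common event schema (a dict), or None."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

raw_events = [
    "2018-01-01T10:00:00 web01 login user=alice",
    "2018-01-01T10:00:05 db02 logout user=bob",
]

# normalize events in parallel across worker threads
with ThreadPoolExecutor(max_workers=4) as pool:
    normalized = list(pool.map(normalize, raw_events))
```

In a production pipeline, each worker would consume from an event stream and write the normalized records into the in-memory store, where correlation queries across sources become simple lookups on the shared schema.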
In summary, we show that a SIEM system can indeed monitor a large enterprise environment with a massive load of incoming events. As a result, complex attacks spanning across the whole network can be uncovered and mitigated, which is an advancement in comparison to existing SIEM systems on the market.
This paper introduces a novel measure to assess similarity between event hydrographs. It is based on Cross Recurrence Plots and Recurrence Quantification Analysis, which have recently gained attention in a range of disciplines dealing with complex systems. The method attempts to quantify the event runoff dynamics and is based on the time delay embedded phase space representation of discharge hydrographs. A phase space trajectory is reconstructed from the event hydrograph, and pairs of hydrographs are compared to each other based on the distance of their phase space trajectories. Time delay embedding allows considering the multi-dimensional relationships between different points in time within the event. Hence, the temporal succession of discharge values is taken into account, including, for example, the impact of the initial conditions on the runoff event. We provide an introduction to Cross Recurrence Plots and discuss their parameterization. An application example based on flood time series demonstrates how the method can be used to measure the similarity or dissimilarity of events, and how it can be used to detect events with rare runoff dynamics. It is argued that this method provides a more comprehensive approach to quantifying hydrograph similarity compared to conventional hydrological signatures.
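A minimal sketch of the approach, under simplifying assumptions (embedding dimension 2, delay 1, a fixed distance threshold; the data and names are illustrative), might compute a cross recurrence matrix from two toy hydrographs like this:

```python
import numpy as np

def embed(series, dim=2, delay=1):
    """Time-delay embedding of a 1D series into dim-dimensional phase space."""
    n = len(series) - (dim - 1) * delay
    return np.array([series[i:i + (dim - 1) * delay + 1:delay]
                     for i in range(n)])

def cross_recurrence(x, y, eps):
    """Cross recurrence matrix: 1 where phase-space points are closer than eps."""
    a, b = embed(x), embed(y)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return (d < eps).astype(int)

# two hypothetical event hydrographs (discharge values over time)
h1 = np.array([1.0, 2.0, 4.0, 3.0, 1.5])
h2 = np.array([1.0, 2.1, 3.9, 3.1, 1.4])

crp = cross_recurrence(h1, h2, eps=0.5)
# the fraction of recurrent points serves as a simple similarity score
similarity = crp.mean()
```

Recurrence Quantification Analysis would then derive further measures from the structure of this matrix (e.g., diagonal line lengths), rather than using only the recurrence rate as done here.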
In January 1916, the army of the Habsburg Empire conquered the Kingdom of Montenegro, the smallest and least populous state in Southeastern Europe, which had entered the First World War on the side of Serbia. As early as the summer of 1916, armed resistance against the occupiers formed; in 1918 it escalated into an insurgency. This study of the Austro-Hungarian military general governorate in Montenegro makes clear the relevance of (mis)judgments and (mis)decisions in situations of occupation. It also highlights the importance of the geography of the occupied territory, the strategic context of the occupation, and the sociocultural frames of reference of both occupiers and occupied.
Previous studies on native language (L1) anaphor resolution have found that monolingual native speakers are sensitive to syntactic, pragmatic, and semantic constraints on pronouns and reflexive resolution. However, most studies have focused on English and other Germanic languages, and little is currently known about the online (i.e., real-time) processing of anaphors in languages with syntactically less restricted anaphors, such as Turkish. We also know relatively little about how 'non-standard' populations such as non-native (L2) speakers and heritage speakers (HSs) resolve anaphors.
This thesis investigates the interpretation and real-time processing of anaphors in German and in a typologically different and as yet understudied language, Turkish. It compares hypotheses about differences between native speakers' (L1ers) and L2 speakers' (L2ers) sentence processing, looking into differences in processing mechanisms as well as the possibility of cross-linguistic influence. To help fill the current research gap regarding HS sentence comprehension, it compares findings for this group with those for L2ers.
To investigate the representation and processing of anaphors in these three populations, I carried out a series of offline questionnaires and Visual-World eye-tracking experiments on the resolution of reflexives and pronouns in both German and Turkish. In the German experiments, native German speakers as well as L2ers of German were tested, while in the Turkish experiments, non-bilingual native Turkish speakers as well as HSs of Turkish with L2 German were tested. This allowed me to observe both cross-linguistic differences as well as population differences between monolinguals' and different types of bilinguals' resolution of anaphors.
Regarding the comprehension of Turkish anaphors by L1ers, contrary to what has been previously assumed, I found that Turkish has no reflexive that follows Condition A of Binding theory (Chomsky, 1981). Furthermore, I propose more general cross-linguistic differences between Turkish and German, in the form of a stronger reliance on pragmatic information in anaphor resolution overall in Turkish compared to German.
As for the processing differences between L1ers and L2ers of a language, I found evidence in support of hypotheses which propose that L2ers of German rely more strongly on non-syntactic information compared to L1ers (Clahsen & Felser, 2006, 2017; Cunnings, 2016, 2017) independent of a potential influence of their L1. HSs, on the other hand, showed a tendency to overemphasize interpretational contrasts between different Turkish anaphors compared to monolingual native speakers. However, lower-proficiency HSs were likely to merge different forms for simplified representation and processing. Overall, L2ers and HSs showed differences from monolingual native speakers both in their final interpretation of anaphors and during online processing. However, these differences were not parallel between the two types of bilingual and thus do not support a unified model of L2 and HS processing (cf. Montrul, 2012).
The findings of this thesis contribute to the field of anaphor resolution by providing data from a previously unexplored language, Turkish, as well as contributing to research on native and non-native processing differences. My results also illustrate the importance of considering individual differences in the acquisition process when studying bilingual language comprehension. Factors such as age of acquisition, language proficiency and the type of input a language learner receives may influence the processing mechanisms they develop and employ, both between and within different bilingual populations.
Terrain de je
(2018)
Funde und Fiktionen
(2018)
Prehistory on television as a mirror of contemporary social issues. Given the thin source base, our secure knowledge of human prehistory is very limited. Nevertheless, highly successful television documentaries have long been broadcast that present everyday life in the most distant human past in minute detail. Georg Koch examines how these depictions emerged from the interplay of filmmakers and scholars, and how they have changed in the Federal Republic of Germany and in Great Britain since the 1950s. He shows how archaeologists became media stars of early British television, how high-tech, exoticism, and adventure found their way into the portrayal of archaeology, and how staged narratives reached audiences of millions. It becomes clear that views of prehistory always contain projections of the present that offer timely answers to societal questions.
This thesis examines the current provisions of German residence law concerning the possibilities of family reunification. It identifies weaknesses in the current rules and considers their causes, possible justifications, and potential solutions.
The analysis focuses on the conflicts that can be grouped under the concept of reverse discrimination against nationals (Inländerdiskriminierung). The phenomenon of reverse discrimination is examined, along with the case law of the European Court of Justice (ECJ) on family reunification in this context. Particular attention is paid to the requirement of a cross-border element, which the ECJ has by now effectively dissolved. This part of the thesis concludes that distinguishing between reunification with Germans and reunification with Union citizens violates principles of equal treatment and should be abolished.
The thesis further considers various alternative models of family life beyond the classic different-sex marriage. With respect to same-sex partnerships, it identifies remaining weaknesses even after the introduction of "marriage for all", chiefly because reunification rights depend on the existence of an institution that does not exist in large parts of the world. With regard to non-marital partnerships, by contrast, the current legal situation is considered sufficient. Finally, the thesis considers marriage models that German law neither provides for nor recognizes: forced marriage, child marriage, and polygamous marriage. It examines how German law, and residence law in particular, treats these marriages and what purpose the existing rules pursue. While the legislature had the protection of the victims of such marriages in mind, the study concludes that the rules instead create further endangerment, which could only be avoided if these marriage models were first recognized and the victims then offered assistance within Germany.
Overall, the thesis identifies serious human-rights deficiencies in the existing law of family reunification and proposes a general reorganization.
The provisions considered reflect the state of the law as of July 2018.
Studies on educational success in Germany point to several dimensions of inequality. A close relationship between social origin and school success has been documented repeatedly. Gender differences in educational success are likewise a frequently reported finding, discussed both scientifically and publicly. In contrast to the large number of studies devoted to one of these dimensions of inequality, however, there is a need for systematic knowledge about the interaction of gender and social origin in educational success. Against this background, the present thesis aims to investigate the interplay of gender and social origin, guided by two overarching research questions examined in four studies. First, the interplay of gender and socioeconomic status (SES) was analyzed for different facets of educational success and for occupational aspirations (Studies 1-3). Second, it was examined to what extent parents' gender-role attitudes are associated with their child's school achievement; in this context, the relationship between parental gender-role attitudes and characteristics of the family background was also analyzed (Study 4). Taken together, the results of the studies point to an interaction of gender and social origin in educational success and in occupational aspirations, even though the corresponding effects are rather small. Contrary to the societal connotation of mathematics as a "boys' subject", the findings thus suggest, for example, that the frequently cited gender differences in mathematical competencies should be understood not as "natural" but as malleable.
The results thereby underscore the importance of the socialization context for the development of boys' and girls' abilities and goals, as emphasized in various theories and as shown by the international variability of gender differences in school achievement.
This doctoral dissertation aims to elucidate the development of hot and cool executive functions (EF) in middle childhood and to gain insight into their role in childhood overweight. The dissertation is based on three empirical studies which have been published in peer-reviewed journals. Data from a large 3-year longitudinal study (the "PIER-study") was used.
The findings presented in the dissertation demonstrated that both hot and cool EF abilities increase during middle childhood. They also supported the notion that hot and cool EF facets are distinguishable from each other in middle childhood, that they have distinct developmental trajectories, and different predictors.
Evidence was found for associations of hot and cool EF with body weight in middle childhood, which is in line with the notion that they might play a role in the self-regulation of eating and the multifactorial etiology of childhood overweight.
This dissertation consists of four self-contained papers that deal with the implications of financial market imperfections and heterogeneity. The analysis mainly relates to the class of incomplete-markets models but covers different research topics.
The first paper deals with the distributional effects of financial integration for developing countries. Based on a simple heterogeneous-agent approach, it is shown that capital owners experience large welfare losses while workers gain only moderately through higher wages. The large welfare losses for capital owners contrast with the small average welfare gains from representative-agent economies and indicate that strong opposition to capital market opening is to be expected.
The second paper considers the puzzling observation of capital flows from poor to rich countries and the accompanying changes in domestic economic development. Motivated by the mixed results from the literature, we employ an incomplete-markets model with different types of idiosyncratic risk and borrowing constraints. Based on different scenarios, we analyze under what conditions the presence of financial market imperfections contributes to explain the empirical findings and how the conditions may change with different model assumptions.
The third paper deals with the interplay of incomplete information and financial market imperfections in an incomplete-markets economy. In particular, it analyzes the impact of incomplete information about idiosyncratic income shocks on aggregate saving. The results show that the effect of incomplete information is not only quantitatively substantial but also qualitatively ambiguous and varies with the influence of the income risk and the borrowing constraint.
Finally, the fourth paper analyzes the influence of different types of fiscal rules on the response of key macroeconomic variables to a government spending shock. We find that a strong temporary increase in public debt contributes to stabilizing consumption and leisure in the first periods following the change in government spending, whereas a non-debt-intensive fiscal rule leads to a faster recovery of consumption, leisure, capital and output in later periods. Regarding optimal debt policy, we find that a debt-intensive fiscal rule leads to the largest aggregate welfare benefit and that the individual welfare gain is particularly high for wealth-poor agents.
Metamaterial devices
(2018)
Digital fabrication machines such as 3D printers excel at producing arbitrary shapes, such as for decorative objects. In recent years, researchers started to engineer not only the outer shape of objects, but also their internal microstructure. Such objects, typically based on 3D cell grids, are known as metamaterials. Metamaterials have been used to create materials that, e.g., change their volume, or have variable compliance.
While metamaterials were initially understood as materials, we propose to think of them as devices.
We argue that thinking of metamaterials as devices enables us to create internal structures that offer functionalities to implement an input-process-output model without electronics, purely within the material’s internal structure. In this thesis, we investigate three aspects of such metamaterial devices that implement parts of the input-process-output model: (1) materials that process analog inputs by implementing mechanisms based on their microstructure, (2) materials that process digital signals by embedding mechanical computation into the object’s microstructure, and (3) interactive metamaterial objects that output to the user by changing their outer shape to interact with their environment. The input to our metamaterial devices is provided directly by users physically interacting with the material, e.g., turning a handle or pushing a button.
The design of such intricate microstructures, which enable the functionality of metamaterial devices, is not obvious. The complexity of the design arises from the fact that not only is a suitable cell geometry necessary, but the cells additionally need to play together in a well-defined way. To support users in creating such microstructures, we research and implement interactive design tools. These tools allow experts to freely edit their materials, while supporting novice users by auto-generating cell assemblies from high-level input. Our tools implement easy-to-use interactions like brushing, interactively simulate the cell structures’ deformation directly in the editor, and export the geometry as a 3D-printable file. Our goal is to foster more research and innovation on metamaterial devices by allowing the broader public to contribute.
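A minimal sketch of the cell-grid editing idea, assuming a 2D grid and a square brush with purely hypothetical cell types and names (the actual tools operate on 3D cell grids with simulated deformation), could look like this:

```python
import numpy as np

# illustrative cell types: rigid cells vs. cells that can shear
RIGID, SHEAR = 0, 1

class CellGrid:
    def __init__(self, rows, cols):
        # all cells start rigid
        self.cells = np.full((rows, cols), RIGID)

    def brush(self, row, col, radius, cell_type):
        """Assign cell_type to all cells within a square brush radius."""
        r0, r1 = max(0, row - radius), min(self.cells.shape[0], row + radius + 1)
        c0, c1 = max(0, col - radius), min(self.cells.shape[1], col + radius + 1)
        self.cells[r0:r1, c0:c1] = cell_type

grid = CellGrid(5, 5)
grid.brush(2, 2, 1, SHEAR)   # paint a 3x3 patch of shearing cells
```

An editor built around such a grid would re-simulate the structure's deformation after each brush stroke and finally export the resulting cell geometry as a 3D-printable file.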
The general practitioner Georg Otto Schneider, born on 15 June 1875 in Frankfurt (Oder) and practicing for many years in his adopted home of Potsdam, was one of the most important representatives of the medical profession in the first half of the 20th century. His name is closely associated with a straightforward, liberal professional policy and with the development and preservation of professional self-administration within the medical profession in Brandenburg and in Germany as a whole. As a leading member of several provincial and national associations, Schneider worked across four historical epochs for the free practice and autonomous administration of the medical profession.
In the German Empire, Schneider's professional-political activity was initially limited to the regional level. In 1912 he initiated the founding of a protective association for the physicians of the Potsdam district, which he chaired for more than ten years. In the Weimar Republic, Schneider then rose to become a key figure in health policy and medical professional politics. In 1920 he revived the physicians' association for the Province of Brandenburg, and from 1928 he additionally headed the Brandenburg medical chamber. Two years earlier, he had already taken over the management of the Deutscher Ärztevereinsbund. After the National Socialists seized power, Schneider resigned from all offices by mid-1934; his efforts to preserve professional autonomy were in vain. The situation initially looked different after the end of the Second World War. In the Soviet occupation zone, Schneider chaired the physicians' section of the Free German Trade Union Federation in Brandenburg and defended the possibilities of independent professional administration. From 1946 until his death on 26 October 1949, he was also parliamentary group leader of the Liberal Democratic Party in the Brandenburg state parliament.
Against the background of Georg Schneider's life and work, the dissertation examines continuities and ruptures in the organizational structures of the medical profession, from the German Empire through the Weimar era and National Socialism to the period of Soviet occupation. It contrasts the effects of the respective political, socioeconomic, and societal developments on the medical profession with the corresponding reactions of its representatives, above all Georg Schneider. In doing so, it asks to what extent the professional organizational structures adapted to the respective system and what influence Schneider, as an individual, was able to exert within the larger institutions.
Legislative majorities in parliamentary systems, with their dualism of governing camp and opposition parties, do not form freely. Rather, their coordination takes place in a field of tension between the programmatic positions of the actors and their opportunistic competition with one another. The thesis breaks this problem down into three concrete research questions, examining the patterns of conflict between actors in legislative majority coordination under majority governments in the German state parliaments: 1) To what extent does it depend on programmatic positions or on the opportunistic competition of the "new dualism" between governing camp and opposition parties whether opposition parties and the governing camp cooperate or conflict in forming legislative majorities? 2) To what extent, against the background of differing programmatic positions and opportunistic considerations, does conflict rather than cooperation arise between coalition actors in forming joint legislative majorities? This second question is then also embedded in the context of Germany's cooperative federalism: 3) To what extent is the formation of legislative majorities in the implementation of federal laws accompanied by more conflict in mixed coalitions (consisting of parties that face each other in competing camps at the federal level) than in governing coalitions that are congruent across levels of government?
Theoretically, a rationalist model of the basic incentives in forming legislative majorities in the German state parliaments is developed. On this basis, the thesis examines how actors strategically weigh programmatic and opportunistic incentives for conflict and cooperation. Concrete determinants are then derived and tested, predominantly but not exclusively with quantitative methods. The analysis draws on a largely newly compiled legislative database of 3,359 legislative procedures from 23 legislative terms between 1990 and 2013 in the states of Hamburg, Hesse, Mecklenburg-Western Pomerania, North Rhine-Westphalia, and Saxony-Anhalt.
The analysis of conflict patterns between opposition parties and the governing camp shows that an opposition party's programmatic distance from the governing camp matters for opposition behavior; however, this also holds for opportunistic aspects (for example, more competitive opposition behavior can be observed when the last election resulted in a complete change of government). Opposition behavior appears quite fine-grained: differences occur not only between legislative terms but also within them, between actors and between individual bills. The analysis of general coalition conflict indicates that a considerable share of coalition conflict is structurally conditioned. If a governing coalition is the preferred coalition of the participating parties, coalition conflict is reduced; the same holds for a larger majority margin of the governing camp. Moreover, there is evidence that the implementation of federal laws under mixed coalitions, when the coalition partners differentiate themselves along federal lines, is accompanied by more coalition conflict than implementation under congruent coalitions.
The contribution of the thesis is multifaceted. First, it helps us better understand the strategies of actors in the legislative process. Second, as a normative contribution, it advances research on possible detrimental effects of the new dualism under majority governments. Third, taken together, the thesis aims to illuminate the mechanics of the parliamentary systems in the German states themselves and to enable a better normative assessment of them, against the background of decades-old debates about the best system and format of government for the German states as subnational entities. The third research question also enriches this debate with a new aspect: knowledge of the extent to which the implementation of federal laws in the states is associated with a "coalition governance" problem, depending on the cross-level coalition pattern, adds a new and noteworthy facet to research on federal decision-making in Germany. It reveals a federally induced mechanical impairment of majority coordination within the state parliaments themselves, one that inhibits the federal flexibility potentially available in implementing federal laws. This paves the way for new debates about how more legislative voting flexibility could be enabled in the German states than under the hitherto customary majority coalition governments.
Today, more than half of the world’s population lives in urban areas. With a high density of population and assets, urban areas are not only the economic, cultural and social hubs of every society, they are also highly susceptible to natural disasters. As a consequence of rising sea levels and an expected increase in extreme weather events caused by a changing climate in combination with growing cities, flooding is an increasing threat to many urban agglomerations around the globe.
To mitigate the destructive consequences of flooding, appropriate risk management and adaptation strategies are required. So far, flood risk management in urban areas has focused almost exclusively on managing river and coastal flooding. Often overlooked is the risk from small-scale rainfall-triggered flooding, where rainfall intensity exceeds the capacity of urban drainage systems, leading to immediate flooding. Referred to as pluvial flooding, this flood type exclusive to urban areas has caused severe losses in cities around the world. Without further intervention, losses from pluvial flooding are expected to increase in many urban areas due to an increase of impervious surfaces, combined with an aging drainage infrastructure and a projected increase in heavy precipitation events. While this requires the integration of pluvial flood risk into risk management plans, so far little is known about the adverse consequences of pluvial flooding due to a lack of both detailed data sets and studies on pluvial flood impacts. As a consequence, methods for reliably estimating pluvial flood losses, needed for pluvial flood risk assessment, are still missing.
Therefore, this thesis investigates how pluvial flood losses to private households can be reliably estimated, based on an improved understanding of the drivers of pluvial flood loss. For this purpose, detailed data from pluvial flood-affected households was collected through structured telephone and web surveys following pluvial flood events in Germany and the Netherlands.
Pluvial flood losses to households are the result of complex interactions between impact characteristics, such as the water depth, and a household's resistance as determined by its risk awareness, preparedness, emergency response, building properties and other influencing factors. Both exploratory analysis and machine-learning approaches were used to analyze differences in resistance and impacts between households and their effects on the resulting losses. The comparison of case studies showed that awareness of pluvial flooding among private households is quite low. Low awareness not only challenges the effective dissemination of early warnings, but was also found to influence the implementation of private precautionary measures. The latter were predominantly implemented by households with previous experience of pluvial flooding. Even cases where previous flood events affected a different part of the same city did not lead to an increase in preparedness of the surveyed households, highlighting the need to account for small-scale variability in both impact and resistance parameters when assessing pluvial flood risk.
While it was concluded that the combination of low awareness, ineffective early warning and the fact that only a minority of buildings were adapted to pluvial flooding impaired the coping capacities of private households, the often low water levels still enabled households to mitigate or even prevent losses through a timely and effective emergency response.
These findings were confirmed by the detection of loss-influencing variables, showing that cases in which households were able to prevent any loss to the building structure are predominantly explained by resistance variables such as the household's risk awareness, while the degree of loss is mainly explained by impact variables.
Based on the important loss-influencing variables detected, different flood loss models were developed. As with flood loss models for river floods, the empirical data from the preceding data collection was used to train flood loss models describing the relationship between impact and resistance parameters and the resulting loss to building structures. Different approaches were adapted from river flood loss models, using both models with the water depth as the only predictor for building structure loss and models incorporating additional variables from the preceding variable detection routine.
The high predictive errors of all compared models showed that point predictions are not suitable for estimating losses on the building level, as they severely impair the reliability of the estimates. For that reason, a new probabilistic framework based on Bayesian inference was introduced that is able to provide predictive distributions instead of single loss estimates. These distributions not only give a range of probable losses, they also provide information on how likely a specific loss value is, representing the uncertainty in the loss estimate.
Using probabilistic loss models, it was found that the certainty and reliability of a loss estimate on the building level is not only determined by the use of additional predictors, as shown in previous studies, but also by the choice of response distribution defining the shape of the predictive distribution. Here, a mixture of a beta and a Bernoulli distribution, accounting for households that are able to prevent losses to their building's structure, was found to provide significantly more certain and reliable estimates than previous approaches using Gaussian or non-parametric response distributions.
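A minimal sketch of such a zero-inflated (Bernoulli-beta) predictive distribution is shown below; the parameter values are invented for illustration, not the fitted values from the thesis:

```python
import numpy as np

def sample_loss_ratio(p_zero, alpha, beta, size, rng=None):
    """Draw from a zero-inflated beta predictive distribution.

    With probability p_zero a household prevents any structural loss
    (loss ratio exactly 0); otherwise the loss ratio follows
    Beta(alpha, beta). All parameter values here are hypothetical.
    """
    rng = rng or np.random.default_rng(42)
    zero = rng.random(size) < p_zero          # Bernoulli "no loss" component
    losses = rng.beta(alpha, beta, size)      # beta-distributed loss ratios
    losses[zero] = 0.0
    return losses

# A predictive distribution for one building: instead of a single point
# estimate, we obtain a range of probable loss ratios with quantiles.
samples = sample_loss_ratio(p_zero=0.3, alpha=2.0, beta=5.0, size=10_000)
q05, q50, q95 = np.quantile(samples, [0.05, 0.5, 0.95])
```

Aggregating such samples over many buildings propagates the per-building uncertainty consistently to coarser levels, which is the property the thesis exploits.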
The successful model transfer and post-event application to estimate building structure loss in Houston, TX, caused by pluvial flooding during Hurricane Harvey confirmed previous findings, and demonstrated the potential of the newly developed multi-variable beta model for future risk assessments. The highly detailed input data set constructed from openly available data sources containing over 304,000 affected buildings in Harris County further showed the potential of data-driven, building-level loss models for pluvial flood risk assessment.
In conclusion, pluvial flood losses to private households are the result of complex interactions between impact and resistance variables, which should be represented in loss models. The local occurrence of pluvial floods requires loss estimates on high spatial resolutions, i.e. on the building level, where losses are variable and uncertainties are high.
Therefore, probabilistic loss estimates describing the uncertainty of the estimate should be used instead of point predictions. While the performance of probabilistic models on the building level is mainly driven by the choice of response distribution, multi-variable models are recommended for two reasons:
First, additional resistance variables improve the detection of cases in which households were able to prevent structural losses.
Second, the added variability of additional predictors provides a better representation of the uncertainties when loss estimates from multiple buildings are aggregated.
This leads to the conclusion that data-driven probabilistic loss models on the building level allow for a reliable loss estimation at an unprecedented level of detail, with a consistent quantification of uncertainties on all aggregation levels. This makes the presented approach suitable for a wide range of applications, from decision support in spatial planning to impact-based early warning systems.
Plant-derived Transcription Factors for Orthologous Regulation of Gene Expression in the Yeast Saccharomyces cerevisiae
Control of gene expression by transcription factors (TFs) is central in many synthetic biology projects where tailored expression of one or multiple genes is often needed. As TFs from evolutionarily distant organisms are unlikely to affect gene expression in a host of choice, they represent excellent candidates for establishing orthogonal control systems. To establish orthogonal regulators for use in yeast (Saccharomyces cerevisiae), we chose TFs from the plant Arabidopsis thaliana. We established a library of 106 different combinations of chromosomally integrated TFs, activation domains (yeast GAL4 AD, herpes simplex virus VP64, and plant EDLL) and synthetic promoters harbouring cognate cis-regulatory motifs driving a yEGFP reporter. Transcriptional output of the different driver/reporter combinations varied over a wide spectrum, with EDLL being a considerably stronger transcription activation domain in yeast than the GAL4 activation domain, in particular when fused to Arabidopsis NAC TFs. Notably, the strength of several NAC-EDLL fusions exceeded that of the strong yeast TDH3 promoter by 6- to 10-fold. We furthermore show that plant TFs can be used to build regulatory systems encoded by centromeric or episomal plasmids. Our library of TF-binding-site combinations offers an excellent tool for diverse synthetic biology applications in yeast.
COMPASS: Rapid combinatorial optimization of biochemical pathways based on artificial transcription factors
We established a high-throughput cloning method, called COMPASS for COMbinatorial Pathway ASSembly, for the balanced expression of multiple genes in Saccharomyces cerevisiae. COMPASS employs orthogonal, plant-derived artificial transcription factors (ATFs) for controlling the expression of pathway genes, and homologous recombination-based cloning for the generation of thousands of individual DNA constructs in parallel. The method relies on a positive selection of correctly assembled pathway variants from both in vivo and in vitro cloning procedures. To decrease the turnaround time in genomic engineering, we equipped COMPASS with multi-locus CRISPR/Cas9-mediated modification capacity. In its current realization, COMPASS allows combinatorial optimization of up to ten pathway genes, each transcriptionally controlled by nine different ATFs spanning a 10-fold difference in expression strength. The application of COMPASS was demonstrated by generating cell libraries producing beta-carotene and co-producing beta-ionone and biosensor-responsive naringenin. COMPASS will have many applications in other synthetic biology projects that require gene expression balancing.
CaPRedit: Genome editing using CRISPR-Cas9 and plant-derived transcriptional regulators for the redirection of flux through the FPP branch-point in yeast
Technologies developed over the past decade have made Saccharomyces cerevisiae a promising platform for the production of different natural products. We developed a CRISPR/Cas9- and plant-derived-regulator-mediated genome editing approach (CaPRedit) to greatly accelerate strain modification and to facilitate very low to very high expression of key enzymes using inducible regulators. CaPRedit can be implemented to enhance the production of endogenous or heterologous metabolites in the yeast S. cerevisiae. The CaPRedit system aims to facilitate the modification of multiple targets within a complex metabolic pathway by providing new tools for increased expression of genes encoding rate-limiting enzymes, decreased expression of essential genes, and eliminated expression of competing pathways. The approach is based on CRISPR/Cas9-mediated one-step double-strand breaks to integrate modules containing IPTG-inducible plant-derived artificial transcription factor and promoter pair(s) at a desired locus or loci. Here, we used CaPRedit to redirect the endogenous metabolic flux of yeast toward the production of farnesyl diphosphate (FPP), a central precursor of nearly all yeast isoprenoid products, by overexpressing the enzymes that produce FPP from glutamate. We found significantly higher beta-carotene accumulation in the CaPRedit-modified strain than in the wild-type (WT) strain. More specifically, the CaPRedit_FPP 1.0 strain was generated, in which three genes involved in FPP synthesis, tHMG1, ERG20, and GDH2, were inducibly overexpressed under the control of strong plant-derived ATFs.
Beta-carotene accumulated in the CaPRedit_FPP 1.0 strain to a level 1.3-fold higher than in a previously reported optimized strain that carries the same overexpressed genes (as well as additional genetic modifications to redirect the endogenous yeast metabolism toward FPP production). Furthermore, the genetic modifications implemented in the CaPRedit_FPP 1.0 strain resulted in only a very small growth defect (a change in growth rate relative to the WT of only about -0.03).
Incidence and treatment of depression in patients with osteoporosis and rheumatoid arthritis
(2018)
The majority of stroke patients suffer from impaired walking ability. Treating the consequences of stroke is one of the most frequent indications for neurological rehabilitation, where the focus lies on restoring sensorimotor functions, in particular walking ability, and social participation.
In Germany, gait rehabilitation after stroke often relies on neurophysiological gait training according to Bobath (NGB), whose effectiveness is, however, viewed critically. Treatment guidelines primarily recommend treadmill training (LT), for which there is evidence of improvements in gait speed and walking endurance. Comparable evidence for stroke patients also exists for rhythmic auditory stimulation (RAS), i.e. overground gait training with acoustic stimulation.
The aim of the study was to clarify whether the use of RAS improves the effectiveness of LT. The effects of four weeks of music-supported treadmill training on the gait rehabilitation of stroke patients were examined.
For the combined therapy of RAS with treadmill training (RAS-LT), special training music was developed, adapted to each patient's individual treadmill cadence and systematically increased in step with the belt speed. The study examined whether RAS-LT leads to greater improvements in walking ability in stroke patients than the standard therapies NGB and LT. To this end, a clinical evaluation was conducted in a prospective, randomized, controlled parallel-group design with 45 stroke patients. 45 patients with hemiparesis of the lower extremity or an unsteady and asymmetric gait were enrolled in the acute phase after stroke. For 10 patients the study was discontinued during the intervention phase, including 1 patient with an adverse effect resulting from LT.
In addition to procedures for assessing walking function, such as the fast gait speed test, the 3-minute walking time test and instrumented gait analysis with the Bessou locometer, the test battery included static posturography and a kinematic 2D gait analysis on the treadmill. Extending the existing literature, the latter method was designed and applied in this form for the first time for this research question and patient population; it enabled a differentiated, side-specific assessment of movement quality.
The primary endpoints of the study were the longitudinal gait parameters cadence, gait speed and stride length. Secondary endpoints were step symmetry, walking endurance, static balance and the movement quality of walking.
Pre-post effects were computed for the entire sample and for each group using t-tests, or the Wilcoxon signed-rank test when normality was not given. To determine differences in effect between the three interventions, an analysis of covariance with two covariates was performed: (1) the respective pre-intervention parameter and (2) the time between the acute event and study entry. For some outcome measures the assumptions of the analysis of covariance were not met, so a Kruskal-Wallis H test was performed instead. The significance level was set to p < 0.05, and to p < 0.016 for group-specific pre-post effects. Effect sizes were calculated with Cohen's d.
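The analysis pipeline described here (normality check, paired t-test or its nonparametric fallback, Cohen's d, and a Kruskal-Wallis test when covariance-analysis assumptions fail) can be sketched as follows; the data are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical pre/post cadence values (steps/min) for one group of 12 patients
pre = rng.normal(90.0, 10.0, 12)
post = pre + rng.normal(5.0, 4.0, 12)   # assumed mean improvement of ~5
diff = post - pre

# Paired t-test if the differences pass a normality check,
# Wilcoxon signed-rank test otherwise
if stats.shapiro(diff).pvalue > 0.05:
    stat, p = stats.ttest_rel(post, pre)
else:
    stat, p = stats.wilcoxon(post, pre)

# Cohen's d for paired samples (mean difference / SD of differences)
d = diff.mean() / diff.std(ddof=1)

# Kruskal-Wallis H test across three hypothetical groups, used when
# the assumptions of the analysis of covariance are not met
g1 = rng.normal(5.0, 2.0, 11)
g2 = rng.normal(6.0, 2.0, 13)
g3 = rng.normal(8.0, 2.0, 11)
h, p_kw = stats.kruskal(g1, g2, g3)
```

The Bonferroni-style tightening of the threshold to p < 0.016 for group-specific pre-post effects corresponds to dividing 0.05 by the three comparisons.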
Data sets of 35 patients (RAS-LT: N = 11, LT: N = 13, NGB: N = 11), aged 63.6 ± 8.6 years and with a time between the acute event and study entry of 42.1 ± 23.7 days, were analyzed. The statistical analysis of the follow-up assessment showed greater improvements under RAS-LT in cadence (F(2,34) = 7.656, p = 0.002; partial η² = 0.338), with the group contrasts also showing significant differences in favor of RAS-LT, and a tendency toward greater improvement in gait speed (F(2,34) = 3.864, p = 0.032; partial η² = 0.205). The results for step symmetry and movement quality likewise pointed to a superiority of the new RAS-LT approach, although no statistical significance was reached there in the group comparison. Stride length, walking endurance and static balance showed no specific effects of RAS-LT.
The study provides the first indications of a clinical superiority of RAS-LT over the standard therapies. Further development and research on this innovative therapeutic approach may contribute to improved gait rehabilitation of stroke patients in the future.
The modern British intelligence architecture emerged during the first half of the twentieth century, at the same time as British society underwent an unprecedented democratization. The thesis seeks to show how even supposedly arcane areas of state activity are embedded in public processes of negotiation, and therefore reconstructs, for the first time systematically, public and expert discourses on Britain's intelligence services in the age of the world wars.
The scientific drilling campaign PALEOVAN was conducted in the summer of 2010 as part of the International Continental Scientific Drilling Program (ICDP). The main goal of the campaign was the recovery of a sensitive climate archive in eastern Anatolia: the lacustrine deposits beneath the floor of Lake Van. The core material was recovered from two locations, the Ahlat Ridge and the Northern Basin. A composite core was constructed from cored material of seven parallel boreholes at the Ahlat Ridge and covers an almost complete lacustrine history of Lake Van. The composite record offers sensitive climate proxies such as variations in total organic carbon, K/Ca ratios, or the relative abundance of arboreal pollen. These proxies revealed patterns similar to climate proxy variations from Greenland ice cores. Climate variations in Greenland ice cores have been dated by modelling how orbital forcing affects the climate; volatiles from melted ice aliquots are often taken as high-resolution proxies and provide a basis for fitting the corresponding temporal models.
The ICDP PALEOVAN scientific team fitted proxy data from the lacustrine drilling record to the ice core data and constructed an age model. Embedded volcaniclastic layers had to be dated radiometrically in order to provide independent age constraints for the climate-stratigraphic age model. Solving this task by applying the 40Ar/39Ar method was the main objective of this thesis. Earlier attempts at 40Ar/39Ar dating had resulted in inaccuracies that could not be explained satisfactorily.
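For reference, the standard 40Ar/39Ar age equation underlying all ages discussed below relates the measured ratio of radiogenic 40Ar* to neutron-induced 39ArK via the irradiation parameter J, which is determined from a co-irradiated mineral standard of known age (this is the textbook form of the method, not a formula specific to this thesis):

```latex
t \;=\; \frac{1}{\lambda}\,\ln\!\left(1 + J\,\frac{{}^{40}\mathrm{Ar}^{*}}{{}^{39}\mathrm{Ar}_{K}}\right),
\qquad
J \;=\; \frac{e^{\lambda t_{s}} - 1}{\left({}^{40}\mathrm{Ar}^{*}/{}^{39}\mathrm{Ar}_{K}\right)_{s}}
```

where λ is the total decay constant of 40K and t_s the age of the standard; excess 40Ar or inherited 40Ar from xenocrysts inflates the measured 40Ar*/39ArK ratio and thus the apparent age.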
The absence of K-rich feldspars in suitable tephra layers implied that feldspar crystals needed to be at least 500 μm in size in order to apply single-crystal 40Ar/39Ar dating. Some of the samples did not contain these grain sizes at all, or contained only very few crystals of that size. To overcome this problem, this study applied a combined single-crystal and multi-crystal approach with different crystal fractions from the same sample. The preferred method, stepwise heating analysis of an aliquot of feldspar crystals, was applied to three samples. The Na-rich crystals and their young geological age required 20 mg of inclusion-free, non-corroded feldspars. Small sample volumes (usually 25% aliquots of 5 cm³ of sample material, a spoonful of tephra) and the widespread presence of melt inclusions led to the application of combined single- and multigrain total fusion analyses. 40Ar/39Ar analyses of single crystals have the advantage of allowing the presence of excess 40Ar and of detrital or xenocrystic contamination to be monitored; multigrain analyses may hide these effects. The results of the multigrain analyses are therefore discussed with respect to the corresponding cogenetic single-crystal ages. Some of the samples in this study were dated by 40Ar/39Ar on multigrain feldspar separates, where available in combination with a few single crystals. The 40Ar/39Ar ages of two samples deviated statistically from the age model; all other samples yielded identical ages. The deviations were toward older ages than those obtained from the age model. t-tests compared the radiometric ages with available age control points from various proxies and from the relative paleointensity of the Earth's magnetic field within a stratigraphic range of ± 10 m. Concordant age control points from different relative chronometers indicated that the deviations are a result of erroneous 40Ar/39Ar ages.
The thesis discusses two potential reasons for these erroneous ages: (1) the irregular occurrence of 40Ar from rare melt and fluid inclusions, and (2) the contamination of the samples with older crystals due to a rapid combination of assimilation and ejection.
Another aliquot of the feldspar crystals separated for 40Ar/39Ar dating was investigated for geochemical inhomogeneities. Magmatic zoning is ubiquitous in the volcaniclastic feldspar crystals; four different types were detected: compositional zoning (C-type), pseudo-oscillatory zoning of trace element concentrations (PO-type), chaotic and patchy zoning of major and trace element concentrations (R-type), and concentric zoning of trace elements (CC-type). Samples with deviating 40Ar/39Ar ages showed C-type zoning, R-type zoning, or a mix of different zoning types (C-type and PO-type). Feldspars showing PO-type zoning typically represent the smallest grain-size fractions in the samples. The constant major element compositions of these crystals are interpreted to represent the latest stages in the compositional evolution of feldspars in a peralkaline melt. PO-type crystals contain fewer melt inclusions than other zoning types and are rarely corroded. This thesis concludes that feldspars showing PO-type zoning are the most promising chronometers for the 40Ar/39Ar method when samples provide mixed zoning types of Quaternary anorthoclase feldspars.
Five samples were dated by applying the 40Ar/39Ar method to volcanic glass. High fractions of atmospheric Ar (typically > 98%) significantly hampered the precision of the 40Ar/39Ar ages and resulted in rough age estimates that widely overlap the age model. The Ar isotopes indicated that the glasses bear a chlorine-rich Ar end member. The chlorine-derived 38Ar points to chlorine-rich fluid inclusions or to hydration of the volcanic glass shards. This strengthens the evidence that irregularly distributed melt inclusions, and thus irregularly distributed excess 40Ar, affected the problematic feldspar 40Ar/39Ar ages. Whether the corrected initial 40Ar/36Ar ratios of the glasses are connected to the 40Ar/36Ar ratios of the pore waters remains unclear.
This thesis offers an alternative age model, similarly based on the interpolation of temporal tie points from geophysical and climate-stratigraphic data. The new model uses a PCHIP interpolation (piecewise cubic Hermite interpolating polynomial), whereas the older age model used a spline interpolation. Samples whose feldspar 40Ar/39Ar ages match the earlier published age model were additionally assigned an age from the PCHIP interpolation. These modelled ages allowed a recalculation of the Alder Creek sanidine mineral standard. The climate-stratigraphic calibration of a 40Ar/39Ar mineral standard proved that the age-versus-depth interpolations from the PALEOVAN drilling cores are accurate, and that the applied chronometers recorded the temporal evolution of Lake Van synchronously.
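To illustrate why a shape-preserving interpolant suits age-depth models, the sketch below compares PCHIP with an unconstrained cubic spline on hypothetical tie points (the depths and ages are invented for the example, not PALEOVAN data):

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Hypothetical age-depth tie points: depth below lake floor in metres,
# age in thousands of years (ka). Both increase monotonically.
depth = np.array([0.0, 20.0, 45.0, 80.0, 130.0, 210.0])
age = np.array([0.0, 30.0, 60.0, 100.0, 200.0, 600.0])

pchip = PchipInterpolator(depth, age)   # shape-preserving interpolant
spline = CubicSpline(depth, age)        # unconstrained cubic spline

query = np.linspace(0.0, 210.0, 500)
pchip_ages = pchip(query)
spline_ages = spline(query)

# PCHIP preserves the monotonicity of the tie points, so interpolated
# ages never decrease with depth; an unconstrained cubic spline may
# overshoot between unevenly spaced tie points, producing physically
# implausible age reversals.
monotone = bool(np.all(np.diff(pchip_ages) >= -1e-9))
```

An age model built this way stays stratigraphically consistent (older material is always deeper), which is the property that matters when assigning interpolated ages to dated tephra layers.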
This thesis also provides a petrochemical discrimination of the sampled volcaniclastic material: 41 of 57 sampled volcaniclastic layers indicate Nemrut as their provenance. The criteria used for the provenance assignment are provided and reviewed critically. Detailed correlations of selected PALEOVAN volcaniclastics with onshore samples described in detail by earlier studies are also discussed. The sampled volcaniclastics are predominantly < 40 cm thick and were ejected by small- to medium-sized eruptions. Onshore deposits from these types of eruptions are potentially eroded by the predominant strong winds on the Nemrut and Süphan slopes; an exact correlation with the data presented here is therefore equivocal or not possible at all.
Deviating feldspar 40Ar/39Ar ages could possibly be explained by inherited 40Ar from feldspar xenocrysts contaminating the samples. To test this hypothesis, diffusion couples of Ba were investigated in compositionally zoned feldspar crystals. The diffusive behaviour of Ba in feldspar is known, and the gradients of the changing concentrations allow the duration of a crystal's magmatic development since the formation of the zoning interface to be calculated. These durations were compared with degassing scenarios that model the Ar loss during assimilation and subsequent ejection of the xenocrysts. Diffusive equilibration of the contrasting Ba concentrations is assumed to yield maximum durations, as the gradient could have developed over several growth and heating stages. The modelling shows no indication of an involvement of inherited 40Ar in any of the deviating samples. However, the analytical set-up represents the lower limit of the required spatial resolution, so it cannot be excluded that the degassing modelling relies on a significant overestimation of the maximum duration of the magmatic history. Nevertheless, the modelling of xenocryst degassing indicates that the irregular incorporation of excess 40Ar via melt and fluid inclusions is the most critical problem to be overcome in dating volcaniclastic feldspars from the PALEOVAN drill cores. This thesis provides the complete background for generating and presenting 40Ar/39Ar ages that are compared with age data from a climate-stratigraphic model: deviations are identified statistically and then discussed in order to find explanations from the age model and/or from 40Ar/39Ar geochronology. Most of the PALEOVAN stratigraphy provides several chronometers whose synchronicity has been demonstrated. The lacustrine deposits of Lake Van represent a key archive for reconstructing the climate evolution of the eastern Mediterranean and the Near East.
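The duration estimates from the Ba diffusion couples rest on the standard relation between the width of a diffusion profile and time, with an Arrhenius law for the diffusivity (the form of these relations is standard diffusion chronometry; the symbols are generic, not values from the thesis):

```latex
t \;\approx\; \frac{x^{2}}{4\,D(T)},
\qquad
D(T) \;=\; D_{0}\,\exp\!\left(-\frac{E_{a}}{R\,T}\right)
```

where x is the characteristic half-width of the Ba concentration gradient, D_0 the pre-exponential factor, E_a the activation energy, R the gas constant, and T the magmatic temperature; a wider profile or a lower temperature thus implies a longer magmatic residence, which is why equilibrated gradients bound the maximum duration.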
The PALEOVAN record offers a climate-stratigraphic age model with remarkable accuracy and resolution.
Die Präsenz der Dinge (The Presence of Things)
(2018)
Human-like things challenge us in a particular way. They trigger emotions and imaginations; they influence our posture and our facial expressions. Where do our sometimes strong reactions to anthropomorphic artifacts come from? Why do we tend to animate them against our better knowledge? Jana Scholz is the first to ask specifically about the agency of artistic artifacts in human form. Using three examples from photography, fashion, and literature, she explores the relationship between material-visual staging and aesthetic perception. This opens up new perspectives on the relations between things and humans, at a time when these seem increasingly impenetrable.
This publication on processes of language change in Russian and Ukrainian describes a decisive phase in the recent language history of Russia and Ukraine (1985-2008). It focuses on Anglicization as one of the main tendencies in the current linguistic destandardization of European languages. Using Anglicization in the language of advertising as an example, the author traces the destandardization of Russian and Ukrainian after 1985. This corpus-based investigation comprises both a quantitative (statistical) and a qualitative (systemic-linguistic) analysis of the advertising-language corpus. The quantitative chronological analysis documents the markedly stronger dynamic of Anglicization in Ukrainian after 1998. The qualitative analysis illustrates the divergent and shared intralinguistic processes in the two languages, in particular the integration of Anglicisms and paths of standardization.
To manipulate the plant immune system, gram-negative pathogenic bacteria translocate type III effector proteins (T3Es) into the plant host cell via a type III secretion system (T3SS). There, T3Es localize to various subcellular compartments, where they modify target proteins and thus promote infection. HopZ1a, a T3E of the plant pathogen Pseudomonas syringae pv. syringae, is an acetyltransferase and localizes to the host cell plasma membrane via a myristoylation motif. Although HopZ1a has been shown to disturb early signaling at the plasma membrane, no plasma-membrane-associated target protein had so far been identified for this T3E. To identify previously unknown HopZ1a targets, a yeast two-hybrid screen with a tobacco cDNA library was carried out prior to this work, which identified an uncharacterized remorin as an interactor.
Bei dem Remorin handelt es sich um einen Vertreter der Gruppe 4 der Remorin-Familie, weshalb es in NbREM4 umbenannt wurde. Durch den Einsatz verschiedener Interaktionsstudien konnte demonstriert werden, dass HopZ1a mit NbREM4 in Hefe, in vitro und in planta wechselwirkt. Es wurde ferner deutlich, dass HopZ1a auf spezifische Weise mit dem konservierten C-Terminus von NbREM4 interagiert, das Remorin jedoch in vitro nicht acetyliert. Analysen mittels BiFC haben zudem ergeben, dass NbREM4 in Homodimeren an der Plasmamembran lokalisiert, wo auch die Interaktion mit HopZ1a stattfindet.
A functional characterization of NbREM4 revealed that the remorin plays a specific role in the plant immune system. Its transient expression in N. benthamiana induces the expression of defense genes as well as an altered leaf phenotype. In A. thaliana, HopZ1a is recognized via the decoy ZED1 and the R protein ZAR1, which triggers a strong hypersensitive response (HR). In this work it was shown that ZAR1 is conserved in N. benthamiana, but that NbREM4 does not act as a decoy in ETI. Using a yeast two-hybrid screen with NbZAR1 as bait, two proteins, the catalase CAT1 and the proton pump interactor PPI1, were identified as interactors of NbZAR1 that may play a role in the regulation of the HR.
Preliminary studies had indicated that NbREM4 might interact with further, uncharacterized tobacco proteins. A phylogenetic classification showed that these are the known immune kinase PBS1 and two E3 ubiquitin ligases, NbSINA1 and NbSINAL3. PBS1 interacts with NbREM4 at the plasma membrane and phosphorylates the remorin within its intrinsically disordered N-terminus. By mass spectrometry, the serines at positions 64 and 65 of the NbREM4 amino acid sequence were identified as PBS1-dependent phosphorylation sites.
NbSINA1 and NbSINAL3 possess ubiquitination activity in vitro, form homo- and heterodimers, and likewise interact with the N-terminal part of NbREM4, although they do not ubiquitinate the remorin in vitro.
From the results obtained in this work it can be concluded that the bacterial T3E HopZ1a specifically interacts with the tobacco remorin NbREM4 at the plasma membrane and interferes with the plant immune system via an as yet unknown mechanism, with NbREM4 possibly acting as an adapter or anchor protein through which HopZ1a interacts with further immune components. NbREM4 is part of a larger immune network that includes the known immune kinase PBS1 and two E3 ubiquitin ligases. With NbREM4, a membrane-bound protein with a function in the plant immune system was thus identified for the first time as a target protein of HopZ1a.
How strongly a person attends to numerosities in everyday surroundings (spontaneous focusing on numerosity, SFON) varies considerably between individuals. While there is ample evidence for a relationship between SFON and counting skills, subitizing, and basic as well as higher arithmetic skills in kindergarten and early primary school, an assessment of the relative importance of SFON compared with established, well-documented predictors has been lacking. Moreover, previous work has focused primarily on counting skills. The child's competencies in quantity perception and processing, as well as the knowledge of Arabic numerals already present at preschool age, remain open questions.
The data for this work were collected within a large epidemiological study (SCHUES). A sample of 1868 kindergarten children (964 boys and 904 girls) was first assessed twelve months before school entry, at a mean age of 63 months. 1704 children could be tested again about nine months later (on average three months before the start of school), at a mean age of 72 months. The numerical-mathematical skills assessed fall into three subdomains: counting skills, knowledge of numerals, and arithmetic/quantity perception. In addition, SFON, nonverbal and verbal intelligence, the phonological loop, the visuospatial sketchpad, the central executive, and attention were assessed at both measurement points.
The SFON tendency showed moderate stability over time, and numerical-mathematical skills showed moderate to high stability. The relationship between the SFON tendency and numerical-mathematical skills, previously found in considerably smaller samples, was replicated in the present work. However, neither poor nor very good numerical-mathematical performance could be predicted with sufficient accuracy, either cross-sectionally or longitudinally. The reciprocal relationship between SFON and numerical-mathematical skills described in the literature was also replicated here in a large sample. In addition, evidence on the causal structure of the relationship was obtained: the results showed that numerical-mathematical skills predicted SFON better than vice versa. The path analyses further showed that SFON makes an independent contribution to the development of numerical and mathematical skills, alongside the important constructs of working memory, intelligence, and attention. SFON also has a significant influence on further numerical and mathematical development up to shortly before school entry; however, this influence operates indirectly via prior numerical-mathematical knowledge.
Colorectal cancer (CRC) is the third most common cancer worldwide. Besides age, diet plays an important role in the development of the disease. A presumably cancer-preventive effect is attributed to the trace element selenium, which is taken up almost exclusively through food. For example, a low selenium status is associated with the lifetime risk of developing CRC. Selenium exerts its functions mainly through selenoproteins, into which it is incorporated in the form of selenocysteine. Among the best-studied selenoproteins with a possible function in CRC are the glutathione peroxidases (GPXs). Owing to their hydroperoxide-reducing properties, the members of this family contribute decisively to protecting cells against oxidative stress. Depending on the type and stage of the tumor, this can act either to suppress or to promote cancer, since transformed cells also benefit from this protective function.
In this work, GPX2 was knocked down in HT29 colon cancer cells using stably transfected shRNA in order to investigate the function of the enzyme, particularly with respect to the signaling pathways it regulates. A knockdown (KD) of the structurally similar GPX1 was also employed in order to distinguish isoform-specific functions. Using a PCR array, signaling pathways were identified that pointed to an influence of both proteins on cell growth. Subsequent analyses suggested a reduced differentiation status in the GPX1 and GPX2 KDs, based on a lower activity of alkaline phosphatase. In addition, cell viability in the neutral red uptake (NRU) assay was reduced in the absence of GPX1 or GPX2 compared with the control. The results of the PCR array, and for GPX2 in particular earlier studies by the group, further indicated a role of both proteins in inflammation-driven carcinogenesis. Therefore, possible interactions with the NFκB signaling pathway were also analyzed. Stimulation of the cells with the proinflammatory cytokine IL1β was accompanied by enhanced activation of the MAP kinases ERK1/2 in cells with GPX1 or GPX2 KD. Simultaneous treatment with the antioxidant NAC did not reverse these effects in the KDs, suggesting that not only the antioxidative properties of the enzymes may be relevant for the interaction with these signaling proteins.
Furthermore, the substrate spectrum of GPX2 was analyzed in HCT116 cells overexpressing the protein. NRU assays and DNA laddering showed that GPX2 protects particularly against the proapoptotic effects of treatment with the lipid hydroperoxides HPODE and HPETE.
In contrast to GPX2, selenoprotein H (SELENOH) is more strongly influenced by dietary selenium intake. However, its possible use as a biomarker, or even as a target for the prevention or treatment of CRC, is hampered by incomplete knowledge of the protein's function. For a more detailed characterization of SELENOH, stably transfected KD clones were therefore generated in HT29 and Caco2 cells and first examined for their tumorigenicity.
Cells with SELENOH KD formed more and larger colonies in soft agar and showed an increased proliferation and migration potential compared with the control.
A xenograft in nude mice additionally resulted in stronger tumor formation after injection of KD cells. Studies on the involvement of SELENOH in cell cycle regulation point to an inhibitory role of the protein in the G1/S phase.
The further observed upregulation of SELENOH in human adenocarcinomas and precancerous mouse tissue may be explained by its postulated protective function against oxidative cell and DNA damage. In healthy intestinal epithelial cells, the protein was localized primarily at the crypt base, consistent with a potential role in gastrointestinal differentiation.
On a small scale
(2018)
This study argues that micro relations matter in peacekeeping. Asking what makes the implementation of peacekeeping interventions complex and how complexity is resolved, I find that formal, contractual mechanisms only rarely effectively reduce complexity – and that micro relations fill this gap. Micro relations are personal relationships resulting from frequent face-to-face interaction in professional and – equally importantly – social contexts.
This study offers an explanation as to why micro relations are important for coping with complexity, in the form of a causal mechanism. For this purpose, I bring together theoretical and empirical knowledge: I draw upon the current debate on ‘institutional complexity’ (Greenwood et al. 2011) in organizational institutionalism as well as original empirical evidence from a within-case study of the peacekeeping intervention in Haiti, gained in ten weeks of field research. In this study, scholarship on institutional complexity serves to identify theoretical causal channels which guide empirical analysis. An additional, secondary aim is pursued with this mechanism-centered approach: testing the utility of Beach and Pedersen’s (2013) theory-testing process tracing.
Regarding the first research question – what makes the implementation of peacekeeping interventions complex – the central finding is that complexity manifests itself in the dual role of organizations as cooperation partners and competitors for (scarce) resources, turf, and influence. UN organizations, donor agencies and international NGOs implementing peacekeeping activities in post-conflict environments have chronic difficulty mastering both roles because they entail contradictory demands: effective cooperation requires information exchange, resource and responsibility-sharing as well as external scrutiny, whereas prevailing over competitors demands that organizations conceal information, guard resources, increase relative turf and influence, as well as shield themselves from scrutiny. Competition fuels organizational distrust and friction – and impedes cooperation.
How is this complexity resolved? The answer to this second research question is that deep-seated organizational competition is routinely mediated – and cooperation motivated – in micro relations and micro interaction. Regular, frequent face-to-face interaction between individual organizational members generates social resources that help to transcend organizational distrust and conflict, most importantly familiarity with each other, personal trust and belief in reciprocity. Furthermore, informal conflict mediation and control mechanisms – namely, open discussion, mutual monitoring in direct interaction and social exclusion – enhance solidarity and mutual support.
Eta Carinae
(2018)
The exceptional binary star Eta Carinae has been fascinating scientists and the people of the Southern hemisphere alike for hundreds of years. It survived an enormous outbreak, comparable in energy to a supernova, and for a short period became the brightest star in the night sky. From observations ranging from the radio regime to X-rays, the system's characteristics and its emission at photon energies up to ~50 keV are well studied today. The binary is composed of two massive stars of ~30 and ~100 solar masses. Either star drives a strong stellar wind that continuously carries away a fraction of its mass. The collision of these winds leads to a shock on each side of the encounter. In the wind-wind collision region, plasma is heated when it is overrun by the shocks. Part of the emission seen in X-rays can be attributed to this plasma. Above ~50 keV the emission is no longer of thermal origin: the required plasma temperature would exceed the available mechanical energy input of the stellar winds. In contrast to its observational history at thermal energies, observational evidence of Eta Carinae's non-thermal emission has built up only recently. In high-energy gamma-rays, Eta Carinae is the only binary of its kind that has been detected unambiguously. Its energy spectrum reaches up to ~100 GeV, a regime where satellite-based gamma-ray experiments run out of statistics. Ground-based gamma-ray experiments have the advantage of large photon collection areas. H.E.S.S. is the only gamma-ray experiment located in the Southern hemisphere and thus able to observe Eta Carinae in this energy range. H.E.S.S. measures gamma-rays via the electromagnetic showers of particles that very-high-energy gamma-rays initiate in the atmosphere. The main challenge in observations of Eta Carinae with H.E.S.S. is the UV emission of the Carina nebula, which leads to a background up to 10 times stronger than usual for H.E.S.S.
This thesis presents the first detection of a colliding-wind binary in very-high-energy gamma-rays and documents the studies that led to it. The differential gamma-ray energy spectrum of Eta Carinae is measured up to 700 GeV. Hadronic and leptonic origins of the gamma-ray emission are discussed, and based on a comparison of cooling times, a hadronic scenario is favoured.
Ferroic materials have attracted a lot of attention over the years due to their wide range of applications in sensors, actuators, and memory devices. Their technological applications originate from their unique properties such as ferroelectricity and piezoelectricity. In order to optimize these materials, it is necessary to understand the coupling between their nanoscale structure and transient response, which are related to the atomic structure of the unit cell.
In this thesis, synchrotron X-ray diffraction is used to investigate the structure of ferroelectric thin film capacitors during application of a periodic electric field. Combining electrical measurements with time-resolved X-ray diffraction on a working device allows for visualization of the interplay between charge flow and structural motion. This constitutes the core of this work. The first part of this thesis discusses the electrical and structural dynamics of a ferroelectric Pt/Pb(Zr0.2,Ti0.8)O3/SrRuO3 heterostructure during charging, discharging, and polarization reversal. After polarization reversal a non-linear piezoelectric response develops on a much longer time scale than the RC time constant of the device. The reversal process is inhomogeneous and induces a transient disordered domain state. The structural dynamics under sub-coercive field conditions show that this disordered domain state can be remanent and can be erased with an appropriate voltage pulse sequence. The frequency-dependent dynamic characterization of a Pb(Zr0.52,Ti0.48)O3 layer, at the morphotropic phase boundary, shows that at high frequency, the limited domain wall velocity causes a phase lag between the applied field and both the structural and electrical responses. An external modification of the RC time constant of the measurement delays the switching current and widens the electromechanical hysteresis loop while achieving a higher compressive piezoelectric strain within the crystal.
In the second part of this thesis, time-resolved reciprocal space maps of multiferroic BiFeO3 thin films were measured to identify the domain structure and investigate the development of an inhomogeneous piezoelectric response during the polarization reversal. The presence of 109° domains is evidenced by the splitting of the Bragg peak.
The last part of this work investigates the effect of an optically excited ultrafast strain or heat pulse propagating through a ferroelectric BaTiO3 layer, where we observed an additional current response due to the laser pulse excitation of the metallic bottom electrode of the heterostructure.
Virtual 3D city models represent and integrate a variety of spatial data and georeferenced data related to urban areas. With the help of improved remote-sensing technology, official 3D cadastral data, open data or geodata crowdsourcing, the quantity and availability of such data are constantly expanding and its quality is ever improving for many major cities and metropolitan regions. There are numerous fields of applications for such data, including city planning and development, environmental analysis and simulation, disaster and risk management, navigation systems, and interactive city maps.
The dissemination and the interactive use of virtual 3D city models represent key technical functionality required by nearly all corresponding systems, services, and applications. The size and complexity of virtual 3D city models, their management, their handling, and especially their visualization represent challenging tasks. For example, mobile applications can hardly handle these models due to their massive data volume and data heterogeneity. Therefore, the efficient usage of all computational resources (e.g., storage, processing power, main memory, and graphics hardware) is a key requirement for software engineering in this field. Common approaches are based on complex clients that require the 3D model data (e.g., 3D meshes and 2D textures) to be transferred to them and then render the received 3D models. However, these applications have to implement most stages of the visualization pipeline on the client side. Thus, as high-quality 3D rendering processes strongly depend on locally available computer graphics resources, software engineering faces the challenge of building robust cross-platform client implementations.
Web-based provisioning aims at providing a service-oriented software architecture that consists of tailored functional components for building web-based and mobile applications that manage and visualize virtual 3D city models. This thesis presents corresponding concepts and techniques for web-based provisioning of virtual 3D city models. In particular, it introduces services that allow us to efficiently build applications for virtual 3D city models based on a fine-grained service concept. The thesis covers five main areas:
1. A Service-Based Concept for Image-Based Provisioning of Virtual 3D City Models. It creates a frame for a broad range of services related to the rendering and image-based dissemination of virtual 3D city models.
2. 3D Rendering Service for Virtual 3D City Models. This service provides efficient, high-quality 3D rendering functionality for virtual 3D city models. In particular, it copes with requirements such as standardized data formats, massive model texturing, detailed 3D geometry, access to associated feature data, and non-assumed frame-to-frame coherence for parallel service requests. In addition, it supports thematic and artistic styling based on an expandable graphics effects library.
3. Layered Map Service for Virtual 3D City Models. It generates a map-like representation of virtual 3D city models using an oblique view. It provides high visual quality, fast initial loading times, simple map-based interaction, and feature data access. Based on a configurable client framework, mobile and web-based applications for virtual 3D city models can be created easily.
4. Video Service for Virtual 3D City Models. It creates and synthesizes videos from virtual 3D city models. Without requiring client-side 3D rendering capabilities, users can create camera paths via a map-based user interface and configure scene contents, styling, image overlays, text overlays, and their transitions. The service significantly reduces the manual effort typically required to produce such videos. The videos can be updated automatically when the underlying data changes.
5. Service-Based Camera Interaction. It supports task-based 3D camera interactions, which can be integrated seamlessly into service-based visualization applications. It is demonstrated how to build such web-based interactive applications for virtual 3D city models using this camera service.
These contributions provide a framework for the design, implementation, and deployment of future web-based applications, systems, and services for virtual 3D city models. The approach shows how to decompose the complex, monolithic functionality of current 3D geovisualization systems into independently designed, implemented, and operated service-oriented units. In that sense, this thesis also contributes to microservice architectures for 3D geovisualization systems, addressing a key challenge of today's IT systems engineering: building scalable IT solutions.
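The image-based provisioning concept described above implies thin clients that request server-rendered views instead of raw 3D model data. As a minimal sketch of what such a stateless request might look like (the endpoint path, the parameter names, and the `build_render_request` helper are hypothetical illustrations, not an API defined in the thesis), a client could assemble a render request URL as follows:

```python
from urllib.parse import urlencode

def build_render_request(base_url, camera, style="default", width=1024, height=768):
    """Assemble a request URL for a hypothetical image-based 3D rendering
    service. The camera state and styling are encoded entirely in the
    query string, so the service needs no per-client session state."""
    params = {
        "lon": camera["lon"],          # camera position (geographic coordinates)
        "lat": camera["lat"],
        "alt": camera["alt"],          # altitude in meters
        "heading": camera["heading"],  # view direction in degrees
        "pitch": camera["pitch"],
        "style": style,                # e.g., a thematic or artistic style preset
        "width": width,                # requested image size in pixels
        "height": height,
    }
    return f"{base_url}/render?{urlencode(params)}"

url = build_render_request(
    "https://example.org/3dcs",
    {"lon": 13.06, "lat": 52.39, "alt": 500.0, "heading": 90, "pitch": -30},
    style="cartoon",
)
```

Because each request carries the complete camera and styling state, parallel requests from many clients can be served without assuming frame-to-frame coherence, which matches the requirements stated for the 3D rendering service above.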
There are numerous situations in which people ask for something or make a request, e.g. asking a favor, asking for help, or requesting compliance with specific norms. For this reason, how to ask for something in order to increase people's willingness to fulfill such requests is one of the most important questions for many people working in fields as diverse as charitable giving, marketing, management, and policy making.
This dissertation consists of four chapters that deal with the effects of small changes in the decision-making environment on altruistic decision-making and compliance behavior. Most notably, written communication as an influencing factor is the focus of the first three chapters. The starting point was the question of how to devise a request in order to maximize its chance of success (Chapter 1). The results of the first chapter gave rise to the ideas for the second and third chapters. Chapter 2 analyzes how communication by a neutral third party, i.e. a text from the experimenters that either reminds potential benefactors of their responsibility or highlights their freedom of choice, affects altruistic decision-making. Chapter 3 elaborates on the effect of thanking people in advance when asking them for help. While not as closely related to the others as the first three chapters are, the fourth chapter also deals with the question of how compliance (here: compliance with norms and rules) is affected by subtle manipulations of the environment in which decisions are made. This chapter analyzes the effect of default settings in a tax return on tax compliance.
In order to study the research questions outlined above, controlled experiments were conducted. Chapter 1, which analyzes the effect of text messages on the decision to give something to another person, employs a mini-dictator game. The recipient sends a free-form text message to the dictator before the latter makes a binary decision whether or not to give part of her or his endowment to the recipient. We find that putting effort into the message by writing a long note without spelling mistakes increases dictators’ willingness to give. Moreover, writing in a humorous way and mentioning reasons why the money is needed pays off. Furthermore, men and women seem to react differently to some message categories. Only men react positively to efficiency arguments, while only women react to messages that emphasize the dictator’s power and responsibility.
Building on this last result, Chapter 2 attempts to disentangle the effect of reminding potential benefactors of their responsibility for the potential beneficiary and the effect of highlighting their decision power and freedom of choice on altruistic decision-making by studying the effects of two different texts on giving in a dictator game. We find that only men react positively to a text that stresses their responsibility for the recipient by giving more to her or him, whereas only women seem to react positively to a text that emphasizes their decision power and freedom of choice.
Chapter 3 focuses on the compliance with a request. In the experiment, participants are asked to provide a detailed answer to an open question. Compliance is measured by the effort participants spend on answering the question. The treatment variable is whether or not they see the text “thanks in advance.” We find that participants react negatively by putting less effort into complying with the request in response to the phrase “thanks in advance.”
Chapter 4 studies the effect of prefilled tax returns with mostly inaccurate default values on tax compliance. In a laboratory experiment, participants earn income by performing a real-effort task and must subsequently file a tax return for three consecutive rounds. In the main treatment, the tax return is prefilled with a default value, resulting from participants’ own performance in previous rounds, which varies in its relative size. The results suggest that there is no lasting effect of a default value on tax honesty, neither for relatively low nor relatively high defaults. However, participants who face a default that is lower than their true income in the first round evade significantly and substantially more taxes in this round than participants in the control treatment without a default.
This research addressed the question of whether it is possible to simplify current microcontact printing systems for the production of anisotropic building blocks or patchy particles by using common chemicals, while still maintaining reproducibility, high precision, and tunability of the Janus-balance.
Chapter 2 introduced the microcontact printing materials as well as their defined electrostatic interactions. In particular, polydimethylsiloxane stamps, silica particles, and high-molecular-weight polyethylenimine ink were used in this research. All of these components are commercially available in large quantities and affordable, which gives this approach great potential for further up-scaling. The benefits of polymeric over molecular inks were described, including their flexible influence on the printing pressure. With this alteration of the µCP concept, a new solvent-assisted particle release mechanism enabled the switch from two-dimensional surface modification to three-dimensional structure printing on colloidal silica particles, without changing printing parameters or starting materials. This effect opened the way to using the internal volume of the achieved patches for the incorporation of nano-additives, introducing additional physical properties into the patches without altering the surface chemistry.
The success of this system and its achievable range were further investigated in chapter 3, which gives detailed information about patch geometry parameters including diameter, thickness, and yield. For this purpose, silica particles in a size range between 1 µm and 5 µm were printed with different ink concentrations to change the Janus-balance of these single-patched particles. A necessary intermediate step for the production of trivalent particles using "sandwich" printing, consisting of air-plasma treatment, was discovered, and comparative studies concerning the patch geometry of single- and double-patched particles were conducted. Additionally, the usage of structured PDMS stamps during printing was described. These results demonstrate the excellent precision of this approach and open the pathway to even greater accuracy, as further parameters, e.g. humidity and temperature during stamp loading, can be finely tuned and investigated.
The performance of these synthesized anisotropic colloids was further investigated in chapter 4, starting with studies of their behaviour in alcoholic and aqueous dispersions. Here, the stability of the applied patches was studied over a broad pH range, revealing a release mechanism that disables the electrostatic bonding between the particle surface and the polyelectrolyte ink. Furthermore, the absence of strong attractive forces between divalent particles in water was investigated using XPS measurements. These results lead to the conclusion that the transfer of small PDMS oligomers onto the patch surface shields charges and prevents colloidal agglomeration. Based on this knowledge, further patch modifications for particle self-assembly were introduced, including physical approaches using magnetic nano-additives, chemical patch functionalization with avidin-biotin or the light-responsive cyclodextrin-arylazopyrazole coupling, as well as particle surface modification for the synthesis of highly amphiphilic colloids. The coupling success, efficiency, stability, and behaviour in different solvents were evaluated to find a suitable coupling system for future assembly experiments. These results open up the possibility of more sophisticated structures through colloidal self-assembly.
Certain findings needed further analysis to understand their underlying mechanics, including the relatively broad patch diameter distribution and the decreasing patch thickness for smaller silica particles. Mathematical models for both effects are introduced in chapter 5. First, they demonstrate the connection between the naturally occurring particle size distribution and the broadening of the patch diameter, indicating an even higher precision of this µCP approach. Second, they explain the increase in contact area between particle and ink surface due to denser particle packing, which leads to a decrease in printing pressure for smaller particles.
These calculations ultimately led to the development of a new mechanical microcontact printing approach that uses centrifugal forces for precise pressure control and excellent parallel alignment of the printing substrates. First results with this device, and their comparison with the previously conducted by-hand experiments, conclude this research. They furthermore display the advantages of such a device for future applications using a mechanical printing approach, especially for accessing even smaller nanoparticles with great precision and excellent yield.
In conclusion, this work demonstrates the successful adaptation of the µCP approach using commercially available and affordable silica particles and polyelectrolytes for high flexibility, reduced costs, and better scale-up potential. Furthermore, it was possible to increase the modification potential by introducing three-dimensional patches that provide additional functionalization volume. While maintaining high colloidal stability, different coupling systems demonstrated the self-assembly capabilities of this toolbox for anisotropic particles.
East Africa is a natural laboratory: Studying its unique geological and biological history can help us better inform our theories and models. Studying its present and future can help us protect its globally important biodiversity and ecosystem services. East African vegetation plays a central role in all these aspects, and this dissertation aims to quantify its dynamics through computer simulations.
Computer models help us recreate past settings, forecast into the future, or conduct simulation experiments that we cannot otherwise perform in the field. But before all that, one needs to test their performance. The outputs that the model produced using present-day inputs agreed well with present-day observations of East African vegetation. Next, I simulated past vegetation for which we have fossil pollen data to compare against. With computer models, we can fill the gaps in knowledge between the sites from which we have fossil pollen data and create a more complete picture of the past. A good level of agreement between model and pollen data where they overlapped in space further validated the model's performance.
Once the model was tested and validated for the region, it became possible to probe one of the long-standing questions regarding East African vegetation: How did East Africa lose its tropical forests? The present-day vegetation in the tropics is characterized by continuous forests worldwide, except in tropical East Africa, where forests occur only as patches. In a series of simulation experiments, I was able to show under which conditions these forest patches could have been connected and fragmented in the past. This study demonstrated the sensitivity of East African vegetation to climate change and variability of the kind expected under future climate change.
El Niño-Southern Oscillation (ENSO) events, which result from fluctuations in temperature between the ocean and atmosphere, bring further variability to East African climate and are predicted to increase in intensity in the future. But climate models still struggle to capture the patterns of these events. In a study in which I quantified the influence of ENSO events on East African vegetation, I showed how different the future vegetation could be from what we currently predict with climate models that lack an accurate ENSO contribution. Considering these discrepancies is important for future global carbon budget calculations and management decisions.
Movement and navigation are essential for many organisms during some parts of their lives. This is also true for bacteria, which can move along surfaces and swim through liquid environments. They are able to sense their environment and move towards environmental cues in a directed fashion.
These abilities enable microbial lifecycles in biofilms, improved food uptake, host infection, and much more. In this thesis, we study aspects of the swimming movement, or motility, of the soil bacterium Pseudomonas putida. Like most bacteria, P. putida swims by rotating its helical flagella, but their arrangement differs from that of the main model organism in bacterial motility research, Escherichia coli. P. putida is known for its intriguing motility strategy, in which fast and slow episodes follow one another. Until now, it was not known how these two speeds are produced and what advantages they might confer on this bacterium.
Normally the flagella, the main component of thrust generation in bacteria, are not observable by ordinary light microscopy. To elucidate this behavior, we therefore used a fluorescent staining technique on a mutant strain of this species to specifically label the flagella while leaving the cell body only faintly stained. This allowed us to image the flagella of swimming bacteria with high spatial and temporal resolution using a customized high-speed fluorescence microscopy setup. Our observations show that P. putida can swim in three different modes. First, it can swim with the flagella pushing the cell body, which is the main mode of swimming motility previously known from other bacteria. Second, it can swim with the flagella pulling the cell body, which was thought not to be possible with multiple flagella. Lastly, it can wrap its flagellar bundle around the cell body, which results in a speed that is slower by a factor of two. In this mode, the flagella adopt a different physical conformation with a larger radius, so that the cell body can fit inside. These three swimming modes explain the earlier observation of two speeds, as well as the non-strict alternation between the different speeds.
Because most bacterial swimming in nature does not occur in smooth-walled glass enclosures under a microscope, we used an artificial, microfluidic system of obstacles to study the motion of our model organism in a structured environment. Bacteria were observed with video microscopy and cell tracking in microchannels containing cylindrical obstacles of different sizes and spacings. We analyzed turning angles, run times, and run lengths, which we compared to a minimal model for movement in structured geometries. Our findings show that hydrodynamic interactions with the walls guide the bacteria along obstacles. When comparing the observed behavior with the statistics of a particle that is deflected at every obstacle contact, we find that cells run for longer distances than that model predicts.
Navigation in chemical gradients is one of the main applications of motility in bacteria. We studied the swimming response of P. putida cells to chemical stimuli (chemotaxis) of the common food preservative sodium benzoate. Using a microfluidic gradient generation device, we created gradients of varying strength and observed the motion of cells with a video microscope and subsequent cell tracking. Analysis of different motility parameters, like run lengths and times, shows that P. putida employs the classical chemotaxis strategy of E. coli: runs up the gradient are biased to be longer than runs down the gradient. Using the two different run speeds we observed due to the different swimming modes, we classify runs into `fast' and `slow' modes with a Gaussian mixture model (GMM). We find no evidence that P. putida uses its swimming modes to perform chemotaxis.
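The GMM-based classification of run speeds can be sketched as follows. This is a minimal two-component expectation-maximization fit in one dimension, written from scratch for illustration; the speed values, the median-split initialisation, and the function names are assumptions of this sketch, not the thesis's actual pipeline.

```python
import math


def _std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))


def _pdf(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))


def fit_gmm_1d(speeds, n_iter=200):
    """Fit a two-component 1D Gaussian mixture to run speeds by EM.

    Returns (means, stds, weights), slow component first.  Initialised
    by splitting the data at the median (an assumption of this sketch).
    """
    med = sorted(speeds)[len(speeds) // 2]
    lo = [s for s in speeds if s <= med]
    hi = [s for s in speeds if s > med]
    mu = [sum(lo) / len(lo), sum(hi) / len(hi)]
    sd = [max(_std(lo), 1e-6), max(_std(hi), 1e-6)]
    w = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observed speed
        resp = []
        for s in speeds:
            p = [w[k] * _pdf(s, mu[k], sd[k]) for k in range(2)]
            tot = sum(p) or 1e-300
            resp.append([pk / tot for pk in p])
        # M-step: re-estimate weights, means and standard deviations
        for k in range(2):
            nk = max(sum(r[k] for r in resp), 1e-12)
            w[k] = nk / len(speeds)
            mu[k] = sum(r[k] * s for r, s in zip(resp, speeds)) / nk
            var = sum(r[k] * (s - mu[k]) ** 2 for r, s in zip(resp, speeds)) / nk
            sd[k] = max(math.sqrt(var), 1e-6)
    return mu, sd, w


def classify_run(speed, mu, sd, w):
    """Assign a run speed to the 'slow' (0) or 'fast' (1) component."""
    p = [w[k] * _pdf(speed, mu[k], sd[k]) for k in range(2)]
    return 0 if p[0] >= p[1] else 1
```

On a synthetic bimodal speed distribution, the fitted component means recover the two underlying modes, after which each run can be labelled by its most probable component.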
In most studies of bacterial motility, cell tracking is used to gather trajectories of individual swimming cells. These trajectories then have to be decomposed into run sections and tumble sections. Several algorithms have been developed to this end, but most require manual tuning of a number of parameters or extensive measurements on chemotaxis mutant strains. Together with our collaborators, we developed a novel motility analysis scheme based on generalized Kramers-Moyal coefficients. From the underlying stochastic model, many parameters, such as run length, can be inferred by an optimization procedure without the need for an explicit run-and-tumble classification. The method can, however, be extended to a fully fledged tumble classifier. Using this method, we analyzed E. coli chemotaxis measurements in an aspartate analog and found evidence for a chemotactic bias in the tumble angles.
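The core idea of Kramers-Moyal analysis can be illustrated on a one-dimensional time series: the first coefficient (the drift) is estimated from conditional averages of increments. This is only a sketch of the underlying idea, assuming a simple binned estimator; the published scheme infers the parameters of a full stochastic model by optimization rather than by binning.

```python
def drift_coefficient(x, dt, n_bins=10):
    """Binned estimate of the first Kramers-Moyal coefficient (drift),

        D1(x) = <x(t+dt) - x(t) | x(t)> / dt.

    Returns bin centers, drift estimates, and sample counts per bin
    (empty bins yield NaN).
    """
    lo, hi = min(x), max(x)
    width = (hi - lo) / n_bins or 1.0
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for a, b in zip(x[:-1], x[1:]):
        # conditional average: increments are collected per bin of x(t)
        i = min(int((a - lo) / width), n_bins - 1)
        sums[i] += (b - a) / dt
        counts[i] += 1
    centers = [lo + (i + 0.5) * width for i in range(n_bins)]
    d1 = [s / c if c else float("nan") for s, c in zip(sums, counts)]
    return centers, d1, counts
```

Applied to a simulated Ornstein-Uhlenbeck process with relaxation rate 1, the estimated drift is approximately linear with slope -1, which a weighted regression over the bins recovers.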
The movement of organisms has shaped our planet like few other processes. Movements shape populations, communities, and entire ecosystems, and guarantee fundamental ecosystem functions and services, like seed dispersal and pollination. Global, regional and local anthropogenic impacts influence animal movements across ecosystems all around the world. In particular, land-use modification, such as habitat loss and fragmentation, disrupts movements between habitats with profound consequences, from increased disease transmission to reduced species richness and abundance. However, neither the influence of anthropogenic change on animal movement processes nor the resulting effects on ecosystems are well understood. Therefore, we need a coherent understanding of organismal movement processes and their underlying mechanisms to predict and prevent altered animal movements and their consequences for ecosystem functions.
In this thesis I aim to understand the influence of anthropogenic land-use change on animal movement processes and their underlying mechanisms. In particular, I am interested in the synergistic influence of large-scale landscape structure and fine-scale habitat features on basic-level movement behaviours (e.g. the daily amount of time spent running, foraging, and resting) and the higher-level movements emerging from them (home range formation). Based on my findings, I identify the likely consequences of altered animal movements that lead to the loss of species richness and abundance.
The study system of my thesis is hares in agricultural landscapes. European brown hares (Lepus europaeus) are perfectly suited for studying animal movements in agricultural landscapes, as hares are hemerophiles and prefer open habitats. They have historically thrived in agricultural landscapes, but their numbers are in decline. Agricultural areas are undergoing strong land-use changes due to increasing food demand and fast-developing agricultural technologies. They are already the largest land-use class, covering 38% of the world's terrestrial surface. To consider the relevance of landscape structure for animal movement behaviour, I selected two differently structured agricultural landscapes: a simple landscape in Northern Germany with large fields and few landscape elements (e.g. hedges and tree stands), and a complex landscape in Southern Germany with small fields and many landscape elements.
I fitted hares with GPS devices (hourly fixes) containing internal high-resolution accelerometers (4-min samples), yielding an almost continuous record of the animals' behaviours via acceleration analysis. I used the spatial and behavioural information in combination with remote sensing data (the normalized difference vegetation index, or NDVI, a proxy for resource availability), generating an almost complete picture of what the animal was doing when, why and where. Apart from landscape structure (represented by the two differently structured study areas), I specifically tested whether the following fine-scale habitat features influence animal movements: resources, agricultural management events, habitat diversity, and habitat structure.
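The NDVI used here as a resource proxy is a standard two-band index. A minimal sketch of its computation from near-infrared and red surface reflectance (the thresholds in the comment are common rules of thumb, not values from this study):

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared (nir)
    and red surface reflectance, both in [0, 1].

    NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1.  Dense green
    vegetation typically scores high (roughly > 0.6), bare soil near 0.
    """
    return (nir - red) / (nir + red)
```

For example, a healthy crop pixel with high NIR and low red reflectance scores much higher than a bare-soil pixel with similar reflectance in both bands.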
My results show that, irrespective of the movement process or mechanism and the type of fine-scale habitat features, landscape structure was the overarching variable influencing hare movement behaviour. High resource variability forces hares to enlarge their home ranges, but only in the simple and not in the complex landscape. Agricultural management events result in home range shifts in both landscapes, but force hares to increase their home ranges only in the simple landscape. The preference for habitat patches with low vegetation and the avoidance of high vegetation were also stronger in the simple landscape. High and dense crop fields temporarily restricted hare movements to very local and small habitat patch remnants. Such insuperable barriers can separate habitat patches that were previously connected by mobile links. Hence, the transport of nutrients and genetic material is temporarily disrupted. This mechanism also operates on a global scale, as human-induced changes, from habitat loss and fragmentation to expanding monocultures, cause a reduction in animal movements worldwide.
The mechanisms behind those findings show that higher-level movements, like increasing home ranges, emerge from underlying basic-level movements, like the behavioural modes. Increasing landscape simplicity first acts on the behavioural modes, i.e. hares run and forage more but have less time to rest. Hence, the emergence of increased home range sizes in simple landscapes is based on an increased proportion of time spent running and foraging, largely due to longer travelling times between distant habitats and scarce resource items in the landscape. This relationship was especially strong during the reproductive phase, demonstrating the importance of high-quality habitat for reproduction and the need to prioritize self-maintenance in low-quality areas. These changes in movement behaviour may trigger a cascade of processes that starts with more time being allocated to running and foraging, resulting in increased energy expenditure, and may lead to a decline in individual fitness. A decrease in individual fitness and reproductive output will ultimately affect population viability, leading to local extinctions.
In conclusion, I show that landscape structure has one of the most important effects on hare movement behaviour. Synergistic effects of landscape structure and fine-scale habitat features first affect and modify basic-level movement behaviours, which can scale up to altered higher-level movements and may even lead to declines in species richness and abundance and to the disruption of ecosystem functions. Understanding the connection between movement mechanisms and processes can help to predict and prevent anthropogenically induced changes in movement behaviour. Given the paramount importance of landscape structure, I strongly recommend decreasing the size of agricultural fields and increasing crop diversity. On the small scale, conservation policies should assure the year-round provision of areas with low vegetation height and high-quality forage. This could be done by generating wildflower strips and additional (semi-)natural habitat patches. This will not only help to increase the populations of European brown hares and other farmland species, but also ensure and protect the continuity of mobile links and their intrinsic value for sustaining important ecosystem functions and services.
For more than two centuries, plant ecologists have aimed to understand how environmental gradients and biotic interactions shape the distribution and co-occurrence of plant species. In recent years, functional trait-based approaches have been increasingly used to predict patterns of species co-occurrence and species distributions along environmental gradients (trait-environment relationships). Functional traits are measurable properties at the individual level that correlate well with important processes. Thus, they allow us to identify general patterns by synthesizing studies across specific taxonomic compositions, thereby fostering our understanding of the underlying processes of species assembly. However, the importance of specific processes has been shown to be highly dependent on the spatial scale under consideration. In particular, it remains uncertain which mechanisms drive species assembly and allow for plant species coexistence at smaller, more local spatial scales. Furthermore, there is still no consensus on how particular environmental gradients affect the trait composition of plant communities. For example, increasing drought due to climate change is predicted to be a main threat to plant diversity, although it remains unclear which traits of species respond to increasing aridity. Similarly, there is conflicting evidence on how soil fertilization affects traits related to establishment ability (e.g., seed mass). In this cumulative dissertation, I present three empirical trait-based studies that investigate specific research questions in order to improve our understanding of species distributions along environmental gradients.
In the first case study, I analyze how annual species assemble at the local scale and how environmental heterogeneity affects different facets of biodiversity—i.e. taxonomic, functional, and phylogenetic diversity—at different spatial scales. The study was conducted in a semi-arid environment at the transition zone between desert and Mediterranean ecosystems that features a sharp precipitation gradient (Israel). Different null model analyses revealed strong support for environmentally driven species assembly at the local scale, since species with similar traits tended to co-occur and shared high abundances within microsites (trait convergence). A phylogenetic approach, which assumes that closely related species are functionally more similar to each other than distantly related ones, partly supported these results. However, I observed that species abundances within microsites were, surprisingly, more evenly distributed across the phylogenetic tree than expected (phylogenetic overdispersion). Furthermore, I showed that environmental heterogeneity has a positive effect on diversity, which was stronger for functional than for taxonomic diversity and increased with spatial scale. The results of this case study indicate that environmental heterogeneity may act as a stabilizing factor maintaining species diversity at local scales, since it influenced species distributions according to their traits and positively influenced diversity. All results were consistent along the precipitation gradient.
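The logic of a null model test for trait convergence can be sketched with a simple permutation scheme. This is a generic illustration, assuming mean within-site trait variance as the test statistic and a shuffle of trait values across sites; it is not the specific randomization design used in the study.

```python
import random
import statistics


def trait_convergence_p(site_traits, n_null=999, seed=0):
    """One-sided permutation test for trait convergence within microsites.

    site_traits: one list of trait values per microsite.  The observed
    statistic is the mean within-site trait variance; the null model
    shuffles all trait values across sites while keeping site sizes.
    A small p-value indicates convergence, i.e. co-occurring species
    are more similar in their traits than expected by chance.
    """
    rng = random.Random(seed)

    def stat(sites):
        return statistics.mean(statistics.pvariance(s) for s in sites)

    obs = stat(site_traits)
    pool = [t for site in site_traits for t in site]
    sizes = [len(site) for site in site_traits]
    n_le = 0
    for _ in range(n_null):
        rng.shuffle(pool)
        shuffled, i = [], 0
        for k in sizes:
            shuffled.append(pool[i:i + k])
            i += k
        if stat(shuffled) <= obs:
            n_le += 1
    # add-one correction so p is never exactly zero
    return (n_le + 1) / (n_null + 1)
```

A community in which each microsite holds species with near-identical trait values yields a small p-value (convergence), while one in which every microsite spans the full trait range does not.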
In the second case study (same study system as case study one), I explore the trait responses of two Mediterranean annuals (Geropogon hybridus and Crupina crupinastrum) along a precipitation gradient that is comparable to the maximum changes in precipitation predicted to occur by the end of this century (i.e., −30%). The heterocarpic G. hybridus showed strong trends in seed traits, suggesting that dispersal ability increased with aridity. By contrast, the homocarpic C. crupinastrum showed only a decrease in plant height as aridity increased, while leaf traits of both species showed no consistent pattern along the precipitation gradient. Furthermore, variance decomposition of traits revealed that most of the trait variation observed in the study system was actually found within populations. I conclude that trait responses towards aridity are highly species-specific and that the amount of precipitation is not the most striking environmental factor at this particular scale.
In the third case study, I assess how soil fertilization mediates—directly by increased nutrient addition and indirectly by increased competition—the effect of seed mass on establishment ability. For this experiment, I used 22 species differing in seed mass from dry grasslands in northeastern Germany and analyzed the interacting effects of seed mass with nutrient availability and competition on four key components of seedling establishment: seedling emergence, time of seedling emergence, seedling survival, and seedling growth. (Time of) seedling emergence was not affected by seed mass. However, I observed that the positive effect of seed mass on seedling survival is lowered under conditions of high nutrient availability, whereas the positive effect of seed mass on seedling growth was only reduced by competition. Based on these findings, I developed a conceptual model of how seed mass should change along a soil fertility gradient in order to reconcile conflicting findings from the literature. In this model, seed mass shows a U-shaped pattern along the soil fertility gradient as a result of changing nutrient availability and competition.
Overall, the three case studies highlight the role of environmental factors in species distribution and co-occurrence. Moreover, the findings of this thesis indicate that spatial heterogeneity at local scales may act as a stabilizing factor that allows species with different traits to coexist. In the concluding discussion, I critically debate intraspecific trait variability in plant community ecology and the use of phylogenetic relationships and easily measured key functional traits as proxies for species' niches. Finally, I offer my outlook for the future of functional plant community research.
To reach its climate targets, the European Union has to implement a major sustainability transition in the coming decades. While the socio-technical change required for this transition is well discussed in the academic literature, the economics that go along with it are often reduced to a cost-benefit perspective on climate policy measures. By investigating climate change mitigation as a coordination problem, this thesis offers a novel perspective: It integrates the economic and the socio-technical dimensions and thus allows a better understanding of the opportunities of a sustainability transition in Europe.
First, a game-theoretic framework is developed to illustrate coordination on green or brown investment from an agent perspective. A model based on the coordination game "stag hunt" is used to discuss the influence of narratives and signals for green investment as a means of coordinating expectations towards green growth. Public and private green investment impulses, triggered by credible climate policy measures and targets, serve as an example of a green growth perspective for Europe in line with a sustainability transition. This perspective also embodies a critical view of classical analyses of climate policy measures.
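The stag-hunt structure of the green/brown investment choice can be sketched as a small payoff matrix with two pure-strategy equilibria. The payoff numbers below are illustrative assumptions, not a calibration from the thesis; the point is that both coordinated green investment and the brown status quo are self-enforcing, which is why expectations and signals matter.

```python
def pure_nash_equilibria(payoffs):
    """Pure-strategy Nash equilibria of a two-player game.

    payoffs[(a1, a2)] = (payoff to player 1, payoff to player 2).
    A profile is an equilibrium if neither player gains by deviating
    unilaterally.
    """
    actions = sorted({a for a, _ in payoffs})
    equilibria = []
    for a1 in actions:
        for a2 in actions:
            u1, u2 = payoffs[(a1, a2)]
            stable1 = all(payoffs[(b, a2)][0] <= u1 for b in actions)
            stable2 = all(payoffs[(a1, b)][1] <= u2 for b in actions)
            if stable1 and stable2:
                equilibria.append((a1, a2))
    return equilibria


# Stag hunt framed as a green/brown investment choice (illustrative numbers):
investment_game = {
    ("green", "green"): (4, 4),  # coordinated green investment pays off most
    ("green", "brown"): (0, 3),  # a lone green investor is stranded
    ("brown", "green"): (3, 0),
    ("brown", "brown"): (3, 3),  # safe but inferior brown status quo
}
```

Running the solver on this matrix returns both ("green", "green") and ("brown", "brown"): credible policy signals are what shift expectations from the safe equilibrium to the payoff-dominant green one.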
Secondly, this analysis is enriched with empirical results derived from stakeholder involvement. In interviews and with a survey among European insurance companies, coordination mechanisms such as market and policy signals are identified and evaluated by their impact on investment strategies for green infrastructure. The latter, here defined as renewable energy, electricity distribution and transmission as well as energy efficiency improvements, is considered a central element of the transition to a low-carbon society.
Thirdly, this thesis identifies and analyzes major criticisms raised towards stakeholder involvement in sustainability science. On a conceptual level, different ways of conducting such qualitative research are classified. This conceptualization is then evaluated by scientists, thereby generating empirical evidence on ideals and practices of stakeholder involvement in sustainability science.
Through the combination of theoretical and empirical research on coordination problems, this thesis offers several contributions: On the one hand, it outlines an approach for assessing the economic opportunities of sustainability transitions. This is helpful for policy makers in Europe who are striving to implement climate policy measures that address the targets of the Paris Agreement and to encourage a shift of investments towards green infrastructure. On the other hand, this thesis helps consolidate the theoretical foundations of sustainability science. It can therefore aid researchers who involve stakeholders when studying sustainability transitions.
BACKGROUND: Physical activity involving high spinal load has been shown to play a crucial role in the genesis of acute and chronic low back pain and disorders. High spinal loads are presumed in drop landings, for which strenuous bending loads have previously been demonstrated for the structures of the lower extremity. To date, clinical studies have revealed that repetitive landing impacts can evoke benign structural adaptations of, or damage to, the lumbar vertebrae. However, the causes of these observations have not yet been conclusively established, since the actual spinal load has not been experimentally documented. Moreover, it is still unknown how physiological activation of the trunk musculature compensates for spinal loads induced by landing impacts, and to what extent trunk activity and spinal load are affected by landing demands and performer characteristics. The AIMS of this study are (1) the localisation and quantification of spinal bending loads under various landing demands and (2) the identification of compensatory trunk muscle activity patterns that potentially alleviate spinal load magnitudes. Three consecutive hypotheses (H1 - H3) were postulated: H1 posits that spinal bending loads in separate motion planes can feasibly and reliably be evaluated from peak segmental angular accelerations of the spine. H2 furthermore assumes that vertical drop landings elicit the highest spinal bending load in sagittal flexion of the lumbar spine. Building on these verifications, a second study tests the successive hypothesis (H3) that varied landing conditions, such as the performer's landing familiarity and gender, as well as the implementation of an instantaneous follow-up task, affect the emerging lumbar spinal bending load. It is moreover surmised that lumbar spinal bending loads under distinct landing conditions are predominantly modulated by correspondingly different conditioned pre-activations of the trunk muscles.
METHODS: To test the above hypotheses, two successive studies were carried out. In STUDY 1, 17 subjects were repeatedly assessed performing various drop landings (height: 15, 30, 45, 60 cm; unilateral, bilateral, blindfolded, catching a ball) in a test-retest design. Individual peak angular accelerations [αMAX] were derived from three-dimensional motion data of four trunk segments (upper thoracic, lower thoracic, lumbar, pelvis). αMAX was assessed in flexion, lateral flexion, and rotation of each spinal joint formed by two adjacent segments. Reliability of αMAX within and between test days was evaluated by CV%, ICC 2.1, TRV%, and Bland-Altman analysis (bias ± LoA). Subsequently, peak flexion acceleration of the lumbo-pelvic joint [αFLEX[LS-PV]] was statistically compared to the αMAX values of every other assessed spinal joint and motion plane (mean ± SD, independent-samples t-test). STUDY 2 deliberately assessed only peak lumbo-pelvic flexion accelerations [αFLEX[LS-PV]] and electromyographic trunk pre-activity prior to αFLEX[LS-PV] in 43 subjects performing varied landing tasks (height 45 cm; with definite or indefinite predictability of a subsequent instant follow-up jump). Subjects were contrasted with respect to their previous landing familiarity (>1000 vs. <100 landings performed in the past 10 years) and gender. Differences in αFLEX[LS-PV] and muscular pre-activity between the contrasted subject groups, as well as between landing tasks, were statistically tested by three-way mixed ANOVA with post-hoc tests. Associations between αFLEX[LS-PV] and muscular pre-activity were assessed factor-specifically by Spearman's rank-order correlation coefficient (rS). Complementarily, muscular pre-activity was subdivided by landing phase [DROP, IMPACT] and assessed separately for phase-specific associations with αFLEX[LS-PV]. Each muscle's activity was moreover compared pairwise between DROP and IMPACT (mean ± SD, dependent-samples t-test).
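Two of the reliability measures named above have simple closed forms, sketched below with generic helper names (the actual test-retest data and any software used in the study are not reproduced here): the Bland-Altman bias with 95% limits of agreement, and the coefficient of variation in percent.

```python
import math


def bland_altman(test, retest):
    """Bias and 95% limits of agreement between two measurement sessions.

    bias = mean of paired differences; LoA = bias +/- 1.96 * SD of the
    differences (sample SD, n - 1 denominator).
    """
    diffs = [a - b for a, b in zip(test, retest)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)


def cv_percent(values):
    """Coefficient of variation in percent (sample SD over the mean)."""
    m = sum(values) / len(values)
    sd = math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))
    return 100.0 * sd / m
```

For repeated αMAX measurements, a narrow limits-of-agreement band and a low CV% would indicate the kind of between-day reproducibility the study design requires.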
RESULTS: αMAX showed overall high variability within test days (CV = 36%). The lowest intra-individual variability and highest reproducibility of αMAX between test days were found for flexion of the spine. αFLEX[LS-PV] showed largely consistent, significantly higher magnitudes compared to the αMAX values of more cranial spinal joints and other motion planes. αFLEX[LS-PV] moreover increased gradually with landing height. Landing-unfamiliar subjects presented significantly higher αFLEX[LS-PV] than landing-familiar ones (p = .016). M. Obliquus Int. together with M. Transversus Abd. (66 ± 32 %MVC) and M. Erector Spinae (47 ± 15 %MVC) showed markedly the highest activity, in contrast to the lowest activity of M. Rectus Abd. (10 ± 4 %MVC). Landing-unfamiliar subjects showed significantly higher activity of M. Obliquus Ext. than landing-familiar ones (17 ± 8 %MVC vs. 12 ± 7 %MVC, p = .044). M. Obliquus Ext. and its co-contraction ratio with M. Erector Spinae moreover exhibited low but significant positive correlations with αFLEX[LS-PV] (rs = .39, rs = .31). Each trunk muscle distributed the larger share of its activity to DROP, whereas peak activations of most muscles emerged in the proportionally shorter IMPACT phase. Generally increased muscular pre-activation, particularly at IMPACT, was found in landings with a contrived follow-up jump and in female subjects, whereby αFLEX[LS-PV] was only marginally affected. DISCUSSION: The highest segmental angular accelerations of the spine in drop landings emerge in sagittal flexion of the lumbar spine. The compensatory stabilisation of the spine appears to be provided predominantly by a dorso-ventral co-contraction of M. Obliquus Int., M. Transversus Abd. and M. Erector Spinae. Elevated pre-activity of M. Obliquus Ext. presumably characterises poor landing experience, which might engender increased bending loads on the lumbar spine.
The pervasively large variability of spinal angular accelerations measured across all landing types suggests a multifarious use of diverse mechanisms that compensate for spinal impacts during landings. A standardised assessment and valid evaluation of landing-evoked lumbar bending loads is thereby largely confined. CONCLUSION: Drop landings elicit the most strenuous lumbo-pelvic flexion accelerations, which can be regarded as representative of high-energy bending loads on the spine. These entail the highest risk of overloading the spinal tissue when landing demands exceed the individual's landing skill. Previous landing experience and training appear to effectively improve muscular spine stabilisation patterns, diminishing spinal bending loads.
Utilization of sunlight for energy harvesting has been foreseen as a sustainable replacement for fossil fuels, one that would also eliminate side effects of fossil fuel consumption such as the drastic increase of CO2 in the Earth's atmosphere. Semiconductor materials can be employed for energy harvesting, and the design of ideal energy harvesting devices relies on effective semiconductors with low recombination rates, ease of processing, long-term stability, non-toxicity, and synthesis from abundant sources. These criteria have attracted broad interest in graphitic carbon nitride (g-CN) materials, metal-free semiconductors that can be synthesized from low-cost and abundant precursors. Furthermore, physical properties such as band gap, surface area and absorption can be tuned. g-CN has been investigated as a heterogeneous catalyst, with applications ranging from water splitting to CO2 reduction and organic coupling reactions. However, the low dispersibility of g-CN in water and organic solvents has been an obstacle to further improvements.
Tissue engineering aims to mimic natural tissues mechanically and biologically, so that synthetic materials can replace natural ones in the future. Hydrogels are crosslinked networks with high water content and are therefore prime candidates for tissue engineering. However, the first requirement is the synthesis of hydrogels with mechanical properties matching those of natural tissues. Among the different approaches to reinforcement, nanocomposite reinforcement is highly promising.
This thesis aims to investigate aqueous and organic dispersions of g-CN materials. Aqueous g-CN dispersions were utilized for visible-light-induced hydrogel synthesis, where g-CN acts as both reinforcer and photoinitiator. A variety of methodologies was presented for enhancing g-CN dispersibility, from the co-solvent method to prepolymer formation, and it was shown that hydrogels with diverse mechanical properties (from skin-like to cartilage-like) are accessible via g-CN utilization. A one-pot photografting method was introduced for functionalizing the g-CN surface, providing functional groups for enhanced dispersibility in aqueous and organic media. Grafting vinyl thiazole groups yields stable, additive-free organodispersions of g-CN that are electrostatically stabilized and show improved photophysical properties. The colloidal stability of the organic systems enables transparent g-CN coatings and the printing of g-CN from commercial inkjet printers.
Overall, the application of g-CN in dispersed media is highly promising, and a variety of materials becomes accessible via the utilization of g-CN and visible light with simple chemicals and synthetic conditions. g-CN in dispersed media will bridge emerging research areas from tissue engineering to energy harvesting in the near future.
Signals stored in sediment
(2018)
Tectonic and climatic boundary conditions determine the amount and the characteristics (size distribution and composition) of sediment that is generated and exported from mountain regions. On millennial timescales, rivers adjust their morphology such that the incoming sediment (Qs,in) can be transported downstream by the available water discharge (Qw). Changes in climatic and tectonic boundary conditions thus trigger an adjustment of the downstream river morphology. Understanding the sensitivity of river morphology to perturbations in boundary conditions is therefore of major importance, for example, for flood assessments, infrastructure and habitats. Although we have a general understanding of how rivers evolve over longer timescales, the prediction of channel response to changes in boundary conditions on a more local scale and over shorter timescales remains a major challenge. To better predict morphological channel evolution, we need to test (i) how channels respond to perturbations in boundary conditions and (ii) how signals reflecting the persisting conditions are preserved in sediment characteristics. This information can then be applied to reconstruct how local river systems have evolved over time.
In this thesis, I address these questions by combining targeted field-data collection in the Quebrada del Toro (Southern Central Andes of NW Argentina) with cosmogenic nuclide analysis and remote sensing data. In particular, I (1) investigate how information on hillslope processes is preserved in the 10Be concentration (geochemical composition) of fluvial sediments and how those signals are altered during downstream transport. I complement the field-based approach with physical experiments in the laboratory, in which I (2) explore how changes in sediment supply (Qs,in) or water discharge (Qw) generate distinct signals in the amount of sediment discharged at the basin outlet (Qs,out). With the same set of experiments, I (3) study the adjustment of alluvial channel morphology to changes in Qw and Qs,in, with a particular focus on fill-terrace formation. I transfer the findings from the experiments to the field to (4) reconstruct the evolution of a several-hundred-meter-thick fluvial fill-terrace sequence in the Quebrada del Toro, creating a detailed terrace chronology and reconstructing paleo-Qs and Qw from the terrace deposits. In the following paragraphs, I summarize my findings on each of these four topics.
First, I sampled detrital sediment at the outlets of tributaries and along the main stem of the Quebrada del Toro, analyzed their 10Be concentration ([10Be]) and compared the data to a detailed hillslope-process inventory. The often-observed non-linear increase of catchment-mean denudation rate (inferred from [10Be] in fluvial sediment) with catchment-median slope, which has commonly been explained by an adjustment in landslide frequency, coincided with a shift in the main type of hillslope process. In addition, the [10Be] in fluvial sediments varied with grain size. I defined the normalized sand-gravel index (NSGI) as the 10Be-concentration difference between the sand and gravel fractions divided by their summed concentrations. The NSGI increased with median catchment slope and coincided with a shift in the prevailing hillslope processes active in the catchments, making the NSGI a potential proxy for reconstructing the evolution of hillslope processes over time from sedimentary deposits. However, the NSGI recorded hillslope processes less well in regions of reduced hillslope-channel connectivity and can, in addition, be altered during downstream transport by lateral sediment input, size-selective sediment transport and abrasion.
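The NSGI defined above reduces to a simple normalized difference of the two grain-size fractions' 10Be concentrations. A minimal sketch in Python, with purely hypothetical concentration values:

```python
def nsgi(c_sand: float, c_gravel: float) -> float:
    """Normalized sand-gravel index (NSGI): the difference between the
    10Be concentrations of the sand and gravel fractions, divided by
    their sum. Ranges from -1 to 1; positive when the sand fraction
    carries the higher concentration."""
    return (c_sand - c_gravel) / (c_sand + c_gravel)

# Hypothetical concentrations in atoms 10Be per gram of quartz:
print(nsgi(2.0e5, 1.0e5))  # -> 0.333... (sand-enriched)
```

Being dimensionless and bounded, the index allows catchments with very different absolute nuclide concentrations to be compared directly.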
Second, my physical experiments revealed that sediment discharge at the basin outlet (Qs,out) varied in response to changes in Qs,in or Qw. While changes in Qw caused a distinct signal in Qs,out during the transient adjustment phase of the channel to new boundary conditions, signals related to changes in Qs,in were buffered during the transient phase and likely only become apparent once the channel is adjusted to the new conditions. The temporal buffering is related to the negative feedback between Qs,in and channel-slope adjustments. In addition, I inferred from this result that signals extracted from the geochemical composition of sediments (e.g., [10Be]) are more likely to represent modern-day conditions during times of aggradation, whereas the signal will be temporally buffered due to mixing with older, remobilized sediment during times of channel incision.
Third, the same set of experiments revealed that river incision, channel-width narrowing and terrace cutting were initiated by either an increase in Qw, a decrease in Qs,in or a drop in base level. The lag-time between the external perturbation and the terrace cutting determined (1) how well terrace surfaces preserved the channel profile prior to perturbation and (2) the degree of reworking of terrace-surface material. Short lag-times and well preserved profiles occurred in cases with a rapid onset of incision. Also, lag-times were synchronous along the entire channel after upstream perturbations (Qw, Qs,in), whereas base-level fall triggered an upstream migrating knickzone, such that lag-times increased with distance upstream. Terraces formed after upstream perturbations (Qw, Qs,in) were always steeper when compared to the active channel in new equilibrium conditions. In the base-level fall experiment, the slope of the terrace-surfaces and the modern channel were similar. Hence, slope comparisons between the terrace surface and the modern channel can give insights into the mechanism of terrace formation.
Fourth, my detailed terrace-formation chronology indicated that cut-and-fill episodes in the Quebrada del Toro followed a ~100-kyr cyclicity, with the oldest terraces ~500 kyr old. The terraces were formed by variability in upstream Qw and Qs. Reconstructions of paleo-Qs over the last 500 kyr, which were restricted to times of sediment deposition, indicated only minor (up to four-fold) variations in paleo-denudation rates. Reconstructions of paleo-Qw were limited to the times around the onset of river incision and revealed discharge enhanced by 10 to 85% compared to today. Such increases in Qw are in agreement with other quantitative paleo-hydrological reconstructions from the Eastern Andes, but have the advantage of extending further back in time.
Water at α-alumina surfaces
(2018)
The (0001) surface of α-Al₂O₃ is the most stable surface cut under UHV conditions and has been studied by many groups both theoretically and experimentally. Reaction barriers computed with GGA functionals are known to be underestimated. Using an example reaction at the (0001) surface, this work seeks to improve the computed rate by applying a hybrid-functional method and local second-order perturbation theory (LMP2) with an atomic-orbital basis rather than a plane-wave basis. In addition to activation barriers, we calculate the stability and vibrational frequencies of water on the surface. Adsorption energies were compared to plane-wave calculations and confirmed the PBE+D2/PW stability results. In particular, the vibrational frequencies calculated for the (0001) surface with the B3LYP hybrid functional are in good agreement with experimental findings. Concerning the barriers and the reaction rate constant, the expectations are fully met: recalculating the transition state leads to an increased barrier and a decreased rate constant when hybrid functionals or LMP2 are applied.
Furthermore, the molecular-beam scattering of water on the (0001) surface was studied. In a previous work by Hass, the dissociation of molecularly adsorbed water was studied by AIMD, corresponding to an equilibrium situation; the experimental method for preparing such a situation is pinhole dosing. In contrast to this earlier work, the dissociation of heavy water brought onto the surface from a molecular-beam source, which produces a non-equilibrium situation, was modeled here by periodic ab initio molecular dynamics simulations. Calculations with different surface and beam models allow us to better understand the results of the non-equilibrium experiments: compared to the near-equilibrium situation of pinhole dosing, the molecular beam increases the dissociation probability, which could be explained and understood mechanistically through these calculations.
In this work, good progress was made in understanding the (112‾0) surface of α-Al₂O₃ in contact with water in the low-coverage regime. This surface cut is the third most stable one under UHV conditions and has not yet been studied to a great extent. After optimization of the clean, defect-free surface, the stability of different adsorbed species could be classified: one molecular minimum and several dissociated species were identified. Starting from these, reaction rates for various surface reactions were evaluated. A dissociation reaction was shown to be very fast because the molecular minimum is relatively unstable, whereas diffusion reactions cover a wider range from fast to slow. In general, the (112‾0) surface appears to be much more reactive towards water than the (0001) surface. In addition to reactivity, harmonic vibrational frequencies were determined for comparison with the findings of the experimental “Interfacial Molecular Spectroscopy” group at the Fritz Haber Institute in Berlin. In particular, the vibrational frequencies of OD species could be assigned to vibrations in experimental SFG spectra with very good agreement. Lattice vibrations were also studied in close collaboration with the experimental partners, who measured SFG spectra at very low frequencies to probe deep into the lattice-vibration region. Correspondingly, a larger slab model with greater extent perpendicular to the surface, including more bulk layers, was applied. For the lattice vibrations, too, we obtained reasonably good agreement in terms of the energy differences between the peaks.
Natural extreme events are an integral part of nature on planet Earth. Usually these events are only considered hazardous when humans are exposed to them; in that case, however, natural hazards can have devastating impacts on human societies. Hydro-meteorological hazards in particular have a high damage potential, for example in the form of riverine and pluvial floods, winter storms, hurricanes and tornadoes, which can occur all over the globe. With an increasingly warm climate, an increase in the extreme weather that potentially triggers natural hazards can be expected. Yet not only changing natural systems but also changing societal systems contribute to an increasing risk associated with these hazards, comprising increasing exposure and possibly also increasing vulnerability to the impacts of natural events. Thus, appropriate risk management is required to adapt all parts of society to existing and upcoming risks at various spatial scales. One essential part of risk management is risk assessment, including the estimation of economic impacts. However, reliable methods for estimating the economic impacts of hydro-meteorological hazards are still missing. This thesis therefore deals with the question of how the reliability of hazard damage estimates can be improved, represented and propagated across all spatial scales, investigated using the specific example of economic impacts to companies as a result of riverine floods in Germany.
Flood damage models aim to describe the damage processes during a given flood event; in other words, they describe the vulnerability of a specific object to a flood. The models can be based on empirical data sets collected after flood events. In this thesis, tree-based models trained with survey data are used to estimate direct economic flood impacts at the object level. It is found that these machine learning models, in conjunction with increasing sizes of the data sets used to derive them, outperform state-of-the-art damage models. However, despite the performance improvements gained by using multiple variables and more data points, large prediction errors remain at the object level. The occurrence of these high errors was explained by a further investigation using distributions derived from the tree-based models, which showed that direct economic impacts to individual objects cannot be modeled by a normal distribution. Yet most state-of-the-art approaches assume a normal distribution and take mean values as point estimators; the predictions are then unlikely values within the distributions, resulting in high errors. At larger spatial scales, more objects are considered in the damage estimation, which leads to a better fit of the damage estimates to a normal distribution. Consequently, the performance of the point estimators also improves, although large errors can still occur due to the variance of the normal distribution. It is recommended to use distributions instead of point estimates in order to represent the reliability of damage estimates.
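The idea of reading a predictive distribution off a tree ensemble, rather than its mean, can be sketched as follows. The synthetic data, the variable names and the use of scikit-learn's RandomForestRegressor are illustrative assumptions, not the thesis's actual model or survey data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for post-event survey data:
# columns = water depth [m], building footprint [m^2]; target = damage [EUR]
X = rng.uniform([0.1, 50.0], [3.0, 500.0], size=(500, 2))
y = 1000.0 * X[:, 0] * np.sqrt(X[:, 1]) * rng.lognormal(0.0, 0.5, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# For a new object, collect the prediction of every tree instead of the
# ensemble mean -> an empirical predictive distribution.
x_new = np.array([[1.5, 200.0]])
per_tree = np.array([tree.predict(x_new)[0] for tree in model.estimators_])

# Report quantiles of the distribution rather than a single point estimate.
print(np.percentile(per_tree, [5, 50, 95]))
```

The spread of the per-tree predictions makes the skewness of object-level damage visible, which a single mean value would hide.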
In addition, current approaches mostly ignore the uncertainty associated with the characteristics of the hazard and the exposed objects. For a given flood event, for example, the estimation of the water level at a certain building is prone to uncertainties. Current approaches define exposed objects mostly via land use data sets, which often show inconsistencies and thereby introduce additional uncertainties. Furthermore, state-of-the-art approaches lack consistency when predicting damage at different spatial scales, because different types of exposure data sets are used for model derivation and application. To address these issues, a novel object-based method was developed in this thesis. The method enables a seamless estimation of hydro-meteorological hazard damage across spatial scales, including uncertainty quantification. The application and validation of the method resulted in plausible estimates at all spatial scales without overestimating the uncertainty.
The method is made possible mainly by newly available data sets containing individual buildings, which allow flood-affected objects to be identified by overlaying the data sets with water masks. However, identifying affected objects with two different water masks revealed large differences in the number of identified objects. More effort is therefore needed for their identification, since the number of affected objects determines the order of magnitude of the economic flood impacts to a large extent.
In general, the method represents the uncertainties associated with the three components of risk, namely hazard, exposure and vulnerability, in the form of probability distributions. The object-based approach enables a consistent propagation of these uncertainties in space. Aside from the propagation of damage estimates and their uncertainties across spatial scales, a propagation between models estimating direct and indirect economic impacts was demonstrated. This enables the uncertainties associated with the direct economic impacts to be included in the estimation of the indirect economic impacts. Consequently, the modeling procedure facilitates the representation of the reliability of the estimated total economic impacts. Representing the estimates' reliability prevents reasoning based on the false certainty that point estimates might convey. The developed approach therefore facilitates meaningful flood risk management and adaptation planning.
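Propagating hazard, exposure and vulnerability uncertainties as probability distributions can be illustrated with a small Monte Carlo sketch. All distributions and numbers below are invented for illustration and do not come from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000  # Monte Carlo samples

# Hazard: uncertain water depth at the objects [m]
depth = np.clip(rng.normal(1.2, 0.3, n), 0.0, None)
# Exposure: uncertain number of affected buildings
n_affected = rng.binomial(150, 0.8, n)
# Vulnerability: depth-damage relation with multiplicative uncertainty,
# capped at total loss (relative loss in [0, 1])
rel_loss = np.clip(0.25 * np.sqrt(depth) * rng.lognormal(0.0, 0.3, n), 0.0, 1.0)

value = 250_000.0  # assumed mean building value [EUR]
total_damage = n_affected * rel_loss * value

# The result is a damage distribution, not a point estimate:
print(np.percentile(total_damage, [5, 50, 95]))
```

Because each risk component enters as a sample rather than a fixed number, the quantiles of `total_damage` carry the combined uncertainty of all three components through to the aggregate estimate.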
The successful post-event application and the representation of the uncertainties also qualify the method for use in future risk assessments. The developed method thus enables the representation of the assumptions made in future risk assessments, which is crucial information for future risk management. This is an important step forward, since the representation of the reliability associated with all components of risk is currently lacking in all state-of-the-art methods assessing future risk.
In conclusion, the use of object-based methods that give results in the form of distributions instead of point estimates is recommended. Improving model performance by means of multi-variable models and additional data points is possible, but the gains are small. Uncertainties associated with all components of damage estimation should be included and represented in the results. Furthermore, the findings of the thesis suggest that, at larger scales, the influence of the uncertainty associated with the vulnerability is smaller than that associated with the hazard and exposure. This leads to the conclusion that, for an increased reliability of flood damage estimations and risk assessments, the improvement and active inclusion of hazard and exposure, including their uncertainties, is needed in addition to improvements of the models describing the vulnerability of the objects.
Landslides are frequent natural hazards in rugged terrain, occurring when the resisting frictional force along the surface of rupture yields to the gravitational force. These forces are functions of geological and morphological factors, such as the angle of internal friction, local slope gradient or curvature, which remain static over hundreds of years, whereas more dynamic triggering events, such as rainfall and earthquakes, compromise the force balance by temporarily reducing resisting forces or adding transient loads. This thesis investigates landslide distribution and orientation in response to landslide triggers (e.g. rainfall) at different scales (6 to 4∙10^5 km^2) and aims to link rainfall movement with the landslide distribution. It additionally explores the local impacts of extreme rainstorms on landsliding and the role of precursory stability conditions that could be induced by an earlier trigger, such as an earthquake.
Extreme rainfall is a common landslide trigger. Although several studies assessed rainfall intensity and duration to study the distribution of the landslides thus triggered, only a few case studies quantified spatial rainfall patterns (i.e. the orographic effect). Quantifying the regional trajectories of extreme rainfall could aid in predicting landslide-prone regions in Japan. To this end, I combined a non-linear correlation metric, namely event synchronization, with radial statistics to assess the general pattern of extreme rainfall tracks over distances of hundreds of kilometers using satellite-based rainfall estimates. The results showed that, although increases in rainfall intensity and duration correlate positively with landslide occurrence, the trajectories of typhoons and frontal storms were insufficient to explain the landslide distribution in Japan. Extreme rainfall trajectories inclined northwestwards and were concentrated at certain locations, such as the coastlines of southern Japan, a pattern not reflected in the distribution of about 5000 rainfall-triggered landslides. These landslides instead seemed to respond to mean annual rainfall rates.
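Event synchronization quantifies how often two event series (here, extreme-rainfall occurrences at two locations) produce quasi-simultaneous events within an adaptive time lag. A minimal symmetric variant (after Quian Quiroga et al.) might look like the following; the event times are hypothetical and the thesis's actual implementation may differ:

```python
import numpy as np

def event_sync(t1, t2):
    """Symmetric event synchronization between two sorted event-time
    series: count pairs closer than half the smallest neighbouring
    inter-event interval, normalized by the geometric mean of the
    numbers of interior events."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    count = 0
    for i in range(1, len(t1) - 1):
        for j in range(1, len(t2) - 1):
            # Adaptive lag: half the smallest adjacent inter-event gap
            tau = 0.5 * min(t1[i + 1] - t1[i], t1[i] - t1[i - 1],
                            t2[j + 1] - t2[j], t2[j] - t2[j - 1])
            if abs(t1[i] - t2[j]) < tau:
                count += 1
    return count / np.sqrt((len(t1) - 2) * (len(t2) - 2))

# Two hypothetical series of extreme-rainfall event days at nearby stations:
a = [1, 10, 20, 30, 40, 50]
b = [1.2, 10.3, 20.1, 31.0, 41.0, 49.0]
print(event_sync(a, b))  # -> 1.0 (fully synchronized interior events)
```

The adaptive lag makes the measure robust to unevenly spaced events, which is why it suits irregular extreme-rainfall records better than a fixed-window coincidence count.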
These findings motivate further investigation at a more local scale to better understand the mechanistic response of the landscape to extreme rainfall in terms of landslides. In May 2016, intense rainfall struck southern Germany, triggering high waters and landslides. The highest damage was reported in Braunsbach, which is located on the tributary-mouth fan formed by the Orlacher Bach, a ~3 km long creek that drains a catchment of about 6 km^2. I visited this catchment in June 2016 and mapped 48 landslides along the creek. Such high landslide activity was not reported in the nearby catchments within ~3300 km^2, despite similar rainfall intensity and duration based on weather-radar estimates. My hypothesis was that several landslides were triggered by rainfall-induced flash floods that undercut hillslope toes along the Orlacher Bach. I found that morphometric features such as slope and curvature play an important role in the landslide distribution at this micro-scale study site (<10 km^2). In addition, the high number of landslides along the Orlacher Bach could have been boosted by damage accumulated on hillslopes through karst weathering over longer time scales.
Precursory damage on hillslopes could also be induced by past triggering events that affect landscape evolution, but this interaction is hard to assess independently of the latest trigger. For example, an earthquake might influence the evolution of a landscape for decades, beyond its direct impacts such as the landslides that immediately follow it. Here I studied the consequences of the 2016 Kumamoto Earthquake (MW 7.1), which triggered some 1500 landslides in an area of ~4000 km^2 in central Kyushu, Japan. Topography, i.e. local slope and curvature, both amplified and attenuated the seismic waves, thereby controlling the failure mechanism of those landslides (e.g. progressive failure). I found that topography fails to explain the distribution and preferred orientation of the landslides after the earthquake; instead, the landslides were concentrated northeast of the rupture area and faced mostly normal to the rupture plane. This preferred location was dominated mainly by the directivity effect of the strike-slip earthquake, i.e. the propagation of wave energy along the fault in the rupture direction, whereas amplitude variations of the seismic radiation altered the preferred orientation. I suspect that the earthquake directivity and the asymmetry of the seismic radiation damaged hillslopes at those preferred locations, increasing landslide susceptibility. Hence a future weak triggering event, e.g. scattered rainfall, could trigger further landslides on those damaged hillslopes.
Shaping via binding
(2018)