For autistic learners, institutional education is associated with manifold and specific obstacles. This applies in particular in the context of inclusion, whose relevance derives not least from the United Nations Convention on the Rights of Persons with Disabilities.
This thesis discusses numerous learning-relevant particularities in the context of autism and reveals discrepancies with institutional teaching concepts, which are not always sufficiently appropriate. A central thesis is that the unusually intense attention autistic people pay to their special interests can be harnessed to facilitate learning with externally set content. Building on this, possible solutions are discussed, which result in a novel concept for a digital multi-device learning game.
A key challenge in designing game-based learning is the adequate embedding of learning content in a captivating narrative context. Using the example of exercises on the emotional interpretation of facial expressions, which are used for learning socio-emotional skills especially within therapy concepts for autism, an appropriate narrative is presented that enables these very specific learning contents to be embedded with minimal disruption.
The effects of the individual design elements are examined by means of a prototypically developed learning game. Building on this, a quantitative study shows the game's good acceptance and usability and, above all, confirms the comprehensibility of the narrative and the game elements. A further focus is the minimally invasive investigation of possible disruptions of the game experience caused by switching between different devices, for which an innovative measurement procedure was developed.
In conclusion, this thesis highlights the significance and the limits of game-based approaches for autistic learners. A large part of the presented concepts can be transferred to other kinds of learning scenarios. The technical framework developed here for realising narrative learning paths is likewise prepared for use in further learning scenarios, particularly in institutional contexts.
Noise is ubiquitous in nature and usually results in rich dynamics in stochastic systems such as oscillatory systems, which exist in fields as varied as physics, biology and complex networks. The correlation and synchronization of two or many oscillators have been widely studied in recent years.
In this thesis, we mainly investigate two problems: the stochastic bursting phenomenon in noisy excitable systems, and synchronization in a three-dimensional Kuramoto model with noise. Stochastic bursting here refers to a coherent sequence of spikes in which each spike has a random number of followers due to the combined effects of time delay and noise. Synchronization, as a universal phenomenon in nonlinear dynamical systems, is well illustrated by the Kuramoto model, a prominent model for the description of collective motion.
In the first part of this thesis, an idealized point process, valid if the characteristic timescales in the problem are well separated, is used to describe statistical properties such as the power spectral density and the interspike interval distribution. We show how the main parameters of the point process, the spontaneous excitation rate and the probability of inducing a spike during the delayed action, can be calculated from the solutions of a stationary and a forced Fokker-Planck equation. We extend this approach to the delay-coupled case and derive analytically the statistics of the spikes in each neuron, the pairwise correlations between any two neurons, and the spectrum of the total output from the network.
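As a hedged illustration of such an idealized point process, the following minimal sketch (all parameter values are assumptions, not thesis results) generates a spike train in which each spike recruits a geometric number of delayed follower spikes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative parameters, not the values used in the thesis:
lam = 0.05    # spontaneous excitation rate
p = 0.6       # probability that a spike induces a follower after the delay
tau = 10.0    # delay time
T = 2e5       # total simulation time

spikes = []
t = 0.0
while t < T:
    t += rng.exponential(1.0 / lam)   # waiting time to the next spontaneous spike
    spikes.append(t)
    while rng.random() < p:           # geometric number of delay-induced followers
        t += tau
        spikes.append(t)

isi = np.diff(spikes)
print(f"{len(spikes)} spikes, mean ISI = {isi.mean():.2f}")
# The ISI histogram shows a sharp peak at the delay tau on top of an
# exponential background from the spontaneous excitations.
```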
In the second part, we investigate the three-dimensional noisy Kuramoto model, which can be used to describe synchronization in a swarming model with helical trajectories. In the case without natural frequencies, the Kuramoto model can be connected to the Vicsek model, which is widely studied in the context of collective motion and swarming of active matter. We analyze the linear stability of the incoherent state and derive the critical coupling strength above which the incoherent state loses stability. In the limit of no natural frequencies, an exact self-consistent equation for the mean field is derived and extended straightforwardly to arbitrary higher-dimensional cases.
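A hedged numerical sketch of the three-dimensional noisy Kuramoto model without natural frequencies (the Euler scheme and all parameters are illustrative choices, not those of the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters:
N, K, D = 500, 2.0, 0.5        # oscillators, coupling strength, noise intensity
dt, steps = 0.01, 5000

# Unit vectors on the sphere, initially random
x = rng.normal(size=(N, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)

for _ in range(steps):
    m = x.mean(axis=0)                        # mean field
    drift = K * (m - (x @ m)[:, None] * x)    # tangential component of K*m
    noise = rng.normal(size=(N, 3)) * np.sqrt(2 * D * dt)
    noise -= np.sum(x * noise, axis=1, keepdims=True) * x   # project to tangent space
    x += drift * dt + noise
    x /= np.linalg.norm(x, axis=1, keepdims=True)           # back onto the sphere

# |m| -> 0 in the incoherent state, |m| > 0 above the critical coupling
print("order parameter |m| =", np.linalg.norm(x.mean(axis=0)))
```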
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood–Salsburg equations, the Dobrushin contraction principle and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed-point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which such a problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
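As a concrete finite-volume illustration of the hard-core setting (a sketch under assumed parameters, not code from the thesis), a standard birth-death Metropolis-Hastings sampler for a hard-core Gibbs point process reads:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters: box size, activity, hard-core radius
L, z, r = 10.0, 1.0, 0.5
steps = 20000
area = L * L

pts = np.empty((0, 2))

def has_overlap(p, pts, r):
    """True if point p violates the hard core against existing points."""
    return pts.size > 0 and (np.linalg.norm(pts - p, axis=1) < r).any()

for _ in range(steps):
    n = len(pts)
    if rng.random() < 0.5:                       # propose a birth
        p = rng.uniform(0, L, size=2)
        # acceptance ratio z*|box|/(n+1) for an allowed configuration
        if not has_overlap(p, pts, r) and rng.random() < z * area / (n + 1):
            pts = np.vstack([pts, p])
    elif n > 0:                                  # propose a death
        i = rng.integers(n)
        if rng.random() < n / (z * area):
            pts = np.delete(pts, i, axis=0)

print(f"sampled {len(pts)} points at activity z={z}")
```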
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
With ongoing anthropogenic global warming, some of the most vulnerable components of the Earth system might become unstable and undergo a critical transition. These subsystems are the so-called tipping elements. They are believed to exhibit threshold behaviour and would, if triggered, result in severe consequences for the biosphere and human societies. Furthermore, it has been shown that climate tipping elements are not isolated entities, but interact across the entire Earth system. Therefore, this thesis aims at mapping out the potential for tipping events and feedbacks in the Earth system mainly by the use of complex dynamical systems and network science approaches, but partially also by more detailed process-based models of the Earth system.
In the first part of this thesis, the theoretical foundations are laid by investigating networks of interacting tipping elements. For this purpose, the conditions for the emergence of global cascades are analysed with respect to the structure of paradigmatic network types such as Erdős–Rényi, Barabási–Albert, Watts–Strogatz and explicitly spatially embedded networks. Furthermore, micro-scale structures are detected that are decisive for the transition from local to global cascades. These so-called motifs link the micro- to the macro-scale in the network of tipping elements. Alongside a model description paper, all these results have been incorporated into the Python software package PyCascades, which is publicly available on GitHub.
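A hedged conceptual sketch of how a tipping cascade arises in such a network, using the cusp normal form commonly employed for interacting tipping elements (this is not the PyCascades API; all parameter values are illustrative assumptions):

```python
import numpy as np

def rhs(x, c, A):
    # Cusp normal form dx/dt = -x^3 + x + c plus linear coupling.
    # A[i, j]: strength with which a tipped element j pushes element i;
    # (x + 1)/2 maps the untipped state (-1) to 0 and the tipped state (+1) to 1.
    return -x**3 + x + c + A @ ((x + 1.0) / 2.0)

x = np.array([-1.0, -1.0])          # both elements start untipped
c = np.array([0.5, 0.0])            # element 0 is forced past its fold (~0.385)
A = np.array([[0.0, 0.0],
              [0.4, 0.0]])          # element 0 drives element 1

dt = 0.01
for _ in range(40000):
    x = x + dt * rhs(x, c, A)

print("final states:", x)           # both near +1: the local tipping cascaded
```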
In the second part of this dissertation, the tipping element framework is first applied to components of the Earth system such as the cryosphere and parts of the biosphere. Afterwards, it is applied to a set of interacting climate tipping elements on a global scale. Using the Earth system Model of Intermediate Complexity (EMIC) CLIMBER-2, the temperature feedbacks that would arise if some of the large cryosphere elements disintegrated over a long span of time are quantified. The cryosphere components investigated are the Arctic summer sea ice, the mountain glaciers, and the Greenland and West Antarctic Ice Sheets. The committed temperature increase, in case these ice masses disintegrate, is on the order of an additional half degree in the global average (0.39-0.46 °C), while local to regional additional temperature increases can exceed 5 °C. This means that, once tipping has begun, additional reinforcing feedbacks can increase global warming and, with that, the risk of further tipping events.
This is also the case in the Amazon rainforest, whose parts depend on each other via the so-called moisture-recycling feedback. In this thesis, the importance of drought-induced tipping events in the Amazon rainforest is investigated in detail. Although the Amazon rainforest is assumed to be adapted to past environmental conditions, it is found that tipping events increase sharply if drought conditions become too intense within too short a period, outpacing the adaptive capacity of the rainforest. In these cases, the frequency of tipping cascades also increases to 50% (or more) of all tipping events. In the model developed in this study, the southeastern region of the Amazon basin is hit hardest by the simulated drought patterns. This is also the region that already suffers heavily from extensive human-induced changes due to large-scale deforestation, cattle ranching and infrastructure projects.
Moreover, on the larger, Earth-system-wide scale, a network of conceptualised climate tipping elements is constructed in this dissertation, making use of an extensive literature review, expert knowledge and topological properties of the tipping elements. Tipping cascades are detected even under modest scenarios of climate change that limit global warming to 2 °C above pre-industrial levels. In addition, the structural roles of the climate tipping elements in the network are revealed: while the large ice sheets on Greenland and Antarctica are the initiators of tipping cascades, the Atlantic Meridional Overturning Circulation (AMOC) acts as the transmitter of cascades. Furthermore, in our conceptual climate tipping element model, the ice sheets are found to be of particular importance for the stability of the entire system of investigated climate tipping elements.
In the last part of this thesis, the results from the temperature feedback study with the EMIC CLIMBER-2 are combined with the conceptual model of climate tipping elements. There, it is observed that the likelihood of further tipping events slightly increases due to the temperature feedbacks even if no further CO2 were added to the atmosphere.
Although the developed network model is of a conceptual nature, this work makes it possible for the first time to quantify the risk of tipping events between interacting components of the Earth system under global warming scenarios while allowing for dynamic temperature feedbacks.
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating on 3D point clouds without intermediate representation.
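A hedged sketch of how per-point metrics such as planarity and verticality are commonly derived from the eigenvalues of the local neighbourhood covariance (the thesis' exact definitions may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def geometric_features(points, k=16):
    """Per-point planarity and verticality for an (N, 3) point cloud."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 2))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        w, v = np.linalg.eigh(cov)            # ascending eigenvalues l1<=l2<=l3
        l1, l2, l3 = w
        planarity = (l2 - l1) / l3            # ~1 for locally planar patches
        normal = v[:, 0]                      # eigenvector of smallest eigenvalue
        verticality = 1.0 - abs(normal[2])    # ~1 for vertical surfaces
        feats[i] = planarity, verticality
    return feats

pts = np.random.rand(1000, 3)                 # stand-in for a real point cloud
print(geometric_features(pts)[:3])
```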
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases allow taking advantage of established computer vision methods to efficiently analyze images rendered from mobile mapping data. The two presented semantic classification methods working directly on 3D point clouds are use-case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine-learning-based method supports arbitrary semantic classes but requires training the network with ground-truth data. Both methods can be used in combination to gradually build this ground truth with manual corrections via a respective annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
Teachers' content knowledge is highly important for developing pedagogical content expertise. However, it is still largely unclear which characteristics university courses should have in order to impart profession-specific content knowledge to student teachers.
Within the PSI-Potsdam project, the cross-disciplinary model of expanded content knowledge for the school context was developed on a theoretical basis. As an approach to improving the biology teacher training programme, this model served as the conceptual basis for an additional course. The course offers learning opportunities to apply content knowledge about cell biology acquired at university to school contexts, e.g. through the deconstruction and subsequent reconstruction of school learning texts. The effect of the seminar was investigated over several cycles using the research format of didactical design research (Fachdidaktische Entwicklungsforschung). One of the central research questions is: How can a learning opportunity for biology student teachers be designed to foster expanded content knowledge for the school context in the cell-biology topic area "structure and function of the biomembrane"?
Cross-case analyses (n = 29) in the empirical part show which attitudes towards the teacher training programme exist in the sample. An important result is that the students' subject interest differs strikingly between content taught at school and content taught at university, with considerably higher interest in school knowledge. Students frequently judge the professional relevance of subject content by its relation to school knowledge.
In-depth single-case analyses (n = 6) use learning pathways to show how subject-matter concepts developed across several design experiments. The description focuses primarily on key moments and hurdles in the learning process. Based on these results, the iterations carried out in the individual cycles are described, which are also presented in terms of the iterative development of the design principles.
It was shown that the key moments emerge very individually, depending on the content each student focuses on. Mostly, however, they occur in connection with linking different subject-matter concepts or through the cooperative unpacking of concepts. Subject-matter hurdles, by contrast, could be identified across cases in the form of scientifically inadequate conceptions. Among other things, this concerns the conception of the biomembrane as a wall, which goes along with conceptions of a protective function and a shape-giving function of the biomembrane.
Furthermore, it is examined how the expanded content knowledge for the school context was applied to work on the learning tasks. It became apparent that certain learning opportunities are suited to fostering certain facets of the expanded content knowledge.
Overall, the model of expanded content knowledge for the school context appears to be highly suitable for designing learning opportunities, or design principles for them, on the basis of its facets and their descriptions. For the teaching-learning arrangement investigated, minor adaptations of the model proved useful. With regard to methodology, implications could be derived for applying didactical design research to additional subject-matter courses of this kind.
In order to improve the professional relevance of the subject-matter components of teacher training programmes, the further integration of expanded content knowledge for the school context into these components is highly desirable.
Supernova remnants (SNRs) are discussed as the most promising sources of galactic cosmic rays (CRs). Diffusive shock acceleration (DSA) theory predicts particle spectra in rough agreement with observations. Upon closer inspection, however, the photon spectra of observed SNRs indicate that the particle spectra produced at SNR shocks deviate from the standard expectation. This work suggests a viable explanation for the softening of particle spectra in SNRs: the re-acceleration of particles in the turbulent region immediately downstream of the shock. This thesis shows that the re-acceleration of particles by fast-mode waves in the downstream region can be efficient enough to impact particle spectra over several decades in energy. To demonstrate this, a generic SNR model is presented in which the evolution of particles is described by the reduced transport equation for CRs. It is shown that the resulting particle and synchrotron spectra are significantly softer than in the standard case.

Next, this work outlines RATPaC, a code developed to model particle acceleration and the corresponding photon emission in SNRs. RATPaC solves the particle transport equation in the test-particle limit using hydrodynamic simulations of the SNR plasma flow. The background magnetic field can either be computed from the induction equation or follow analytic profiles. This work presents an extended version of RATPaC that accounts for stochastic re-acceleration by fast-mode waves, which provide diffusion of particles in momentum space. This version is then applied to model the young historical SNR Tycho. According to radio observations, Tycho's SNR features a radio spectral index of approximately −0.65. Previous modeling approaches attributed this to strongly pronounced Alfvénic drift, assumed to operate in the shock vicinity. In this work, the problems and inconsistencies of that scenario are discussed, and stochastic re-acceleration of electrons in the immediate downstream region of Tycho's SNR is suggested instead as the cause of the soft radio spectrum. Furthermore, this work investigates two different scenarios for the magnetic-field distribution inside Tycho's SNR and concludes that magnetic-field damping is needed to account for the observed filaments in the radio range. Two models are presented for Tycho's SNR, both featuring a strong hadronic contribution; a purely leptonic model is thus considered very unlikely.

In addition to the detailed modeling of Tycho's SNR, this dissertation presents a relatively simple one-zone model for the young SNR Cassiopeia A and an interpretation of the recently analyzed VERITAS and Fermi-LAT data. It shows that the γ-ray emission of Cassiopeia A cannot be explained without a hadronic contribution and that the remnant accelerates protons up to TeV energies. Thus, Cassiopeia A is unlikely to be a PeVatron.
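For reference, a standard form of the cosmic-ray transport equation with a momentum-diffusion term, of the kind solved by such codes (a hedged sketch: the exact reduced equation used in the thesis may differ in detail):

```latex
\frac{\partial N}{\partial t}
  = \nabla \cdot \bigl( D_r \nabla N - \mathbf{u}\, N \bigr)
  - \frac{\partial}{\partial p} \Bigl[ \dot{p}\, N
      - \frac{\nabla \cdot \mathbf{u}}{3}\, p\, N \Bigr]
  + \frac{\partial}{\partial p} \Bigl[ p^{2} D_p
      \frac{\partial}{\partial p} \frac{N}{p^{2}} \Bigr]
  + Q
```

Here N is the differential number density of cosmic rays, D_r the spatial diffusion coefficient, u the plasma flow velocity, ṗ the momentum-loss rate and Q the injection term; the term with the momentum-diffusion coefficient D_p encodes the stochastic re-acceleration by fast-mode waves discussed above.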
In the GDR, geography was one of the school subjects most heavily loaded with political topics in the spirit of Marxism-Leninism. Another aspect is the socialist educational objectives, which ranked highly in GDR schooling; the focus here was on raising children to become socialist personalities. This thesis attempts to take a clear look at this situation in order to find out what was demanded of teachers and how it was to be implemented in school.
With the fall of the Berlin Wall, a restructuring of the education system in the East naturally became inevitable. Here the thesis aims to provide insights into how geography teachers supported and implemented this transformation. Which traits from their socialisation in the GDR persisted in how they designed their lessons and aligned them with the new educational objectives?
To this end, geography teachers who had taught both in the GDR and in unified Germany were interviewed. The questions related primarily to the way they taught before, during, and after the Wende and the resulting transformation of the system.
The interviews lead to the conclusion that geography lessons in the GDR did not differ much thematically from those in the FRG; hence no extensive change to the content of geography teaching was required. Even in GDR times, teachers apparently often expanded ideology-free physical-geography topics on their own authority in order to reduce the subject's ideological load. Most teachers therefore found it relatively easy to adapt their teaching to the West German system. The humanistically shaped value education of the GDR education system was likewise continued, with the socialist aspect left aside, since here too there were many parallels to the West German system. What becomes clear is that the East German teachers characterise the subject as a natural science, although it is assigned to the social sciences in schools and also had a strong economic-geography orientation in the GDR.
With the end of the GDR, teachers were released from the responsibility of raising socialist personalities, and the interview excerpts presented in this thesis leave no doubt that most of the interviewees did not regret this, although to this day they still draw on the value orientation of GDR times.
Geochemical processes such as mineral dissolution and precipitation alter the microstructure of rocks and thereby affect their hydraulic and mechanical behaviour. Quantifying these property changes and considering them in reservoir simulations is essential for a sustainable utilisation of the geological subsurface. Due to the lack of alternatives, analytical methods and empirical relations are currently applied to estimate how hydraulic and mechanical rock properties evolve in response to chemical reactions. However, the predictive capabilities of analytical approaches remain limited, since they assume idealised microstructures and are thus unable to reflect property evolution for dynamic processes. Hence, the aim of the present thesis is to improve the prediction of permeability and stiffness changes resulting from pore-space alterations of reservoir sandstones.
A detailed representation of rock microstructure, including the morphology and connectivity of pores, is essential to accurately determine physical rock properties. For that purpose, three-dimensional pore-scale models of typical reservoir sandstones, obtained from highly resolved micro-computed tomography (micro-CT), are used to numerically calculate permeability and stiffness. In order to adequately depict characteristic distributions of secondary minerals, the virtual samples are systematically altered and the resulting trends among the geometric, hydraulic, and mechanical rock properties are quantified. It is demonstrated that the geochemical reaction regime controls the location of mineral precipitation within the pore space and thereby crucially affects the permeability evolution. This emphasises the need to determine distinctive porosity-permeability relationships by means of digital pore-scale models. By contrast, a substantial impact of spatial alteration patterns on the stiffness evolution of reservoir sandstones is only observed for certain microstructures, such as highly porous granular rocks or sandstones comprising framework-supporting cementations. In order to construct synthetic granular samples, a process-based approach is proposed that includes grain deposition and diagenetic cementation. It is demonstrated that the generated samples reliably represent the microstructural complexity of natural sandstones. Thereby, general limitations of imaging techniques can be overcome and various realisations of granular rocks can be flexibly produced. These can be further altered in virtual experiments, offering a fast and cost-effective way to examine the impact of precipitation, dissolution or fracturing on various petrophysical correlations.
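For context, typical analytical porosity-permeability relations of the kind such digital pore-scale relationships are benchmarked against can be sketched as follows (a hedged illustration; k0, phi0 and the exponent n are assumed fitting parameters, not thesis values):

```python
import numpy as np

def power_law(k0, phi, phi0, n=3.0):
    """Simple power-law relation k/k0 = (phi/phi0)**n."""
    return k0 * (phi / phi0) ** n

def kozeny_carman(k0, phi, phi0):
    """Kozeny-Carman form, sensitive to the remaining connected porosity."""
    return k0 * ((1 - phi0) / (1 - phi)) ** 2 * (phi / phi0) ** 3

phi = np.linspace(0.05, 0.25, 5)             # porosity after alteration
print(power_law(1e-12, phi, 0.25))           # permeability in m^2, k0 assumed
print(kozeny_carman(1e-12, phi, 0.25))
```

Both forms predict the same monotonic trend regardless of where minerals precipitate, which is precisely the limitation the pore-scale approach addresses.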
The presented research work provides methodological principles to quantify trends in permeability and stiffness resulting from geochemical processes. The calculated physical property relations are directly linked to pore-scale alterations, and thus have a higher accuracy than commonly applied analytical approaches. This will considerably improve the predictive capabilities of reservoir models, and is further relevant to assess and reduce potential risks, such as productivity or injectivity losses as well as reservoir compaction or fault reactivation. Hence, the proposed method is of paramount importance for a wide range of natural and engineered subsurface applications, including geothermal energy systems, hydrocarbon reservoirs, CO2 and energy storage as well as hydrothermal deposit exploration.
The continuous advancement of VR systems offers new possibilities for interacting with virtual objects in three-dimensional space, but also confronts developers of VR applications with new challenges. Selection and manipulation techniques must be chosen with regard to the application scenario, the target group, and the available input and output devices. This thesis contributes to supporting the choice of suitable interaction techniques. To this end, a representative set of selection and manipulation techniques was examined and, taking existing classification systems into account, a taxonomy was developed that allows the techniques to be analysed with respect to interaction-relevant properties. Based on this taxonomy, techniques were selected and compared in an exploratory study in order to draw conclusions about the dimensions of the taxonomy and to generate new evidence on the advantages and disadvantages of the techniques in specific application scenarios. The results of the thesis culminate in a web application that specifically supports developers of VR applications in selecting suitable selection and manipulation techniques for an application scenario, by allowing techniques to be filtered on the basis of the taxonomy and sorted using the results of the study.
Energy is at the heart of the climate crisis, but also at the heart of any efforts for climate change mitigation. Energy consumption is responsible for approximately three quarters of global anthropogenic greenhouse gas (GHG) emissions. Therefore, central to any serious plan to stave off a climate catastrophe is a major transformation of the world's energy system, moving society away from fossil fuels and towards a net-zero energy future. Considering that fossil fuels are also a major source of air pollutant emissions, the energy transition has important implications for air quality as well, and thus also for human and environmental health. Both Europe and Germany have set the goal of becoming GHG neutral by 2050 and have demonstrated their deep commitment to a comprehensive energy transition. Two of the most significant developments in energy policy over the past decade have been the interest in the expansion of shale gas and of hydrogen, which have accordingly garnered great interest and debate among public, private and political actors.
In this context, sound scientific information can play an important role by informing stakeholder dialogue and future research investments, and by supporting evidence-based decision-making. This thesis examines anticipated environmental impacts from possible, relevant changes in the European energy system, in order to impart valuable insight and fill critical gaps in knowledge. Specifically, it investigates possible future shale gas development in Germany and the United Kingdom (UK), as well as a hypothetical, complete transition to hydrogen mobility in Germany. Moreover, it assesses the impacts on GHG and air pollutant emissions, and on tropospheric ozone (O3) air quality. The analysis is facilitated by constructing emission scenarios and performing air quality modeling via the Weather Research and Forecasting model coupled with chemistry (WRF-Chem). The work of this thesis is presented in three research papers.
The first paper finds that methane (CH4) leakage rates from upstream shale gas development in Germany and the UK would range between 0.35% and 1.36% in a realistic, business-as-usual case, while they would be significantly lower (between 0.08% and 0.15%) in an optimistic, strict regulation and high compliance case, thus demonstrating the value and potential of measures to substantially reduce emissions. Yet, while the optimistic case is technically feasible, it is unlikely that the practices and technologies assumed would be applied and accomplished on a systematic, regular basis, owing to economics and limited monitoring resources. The realistic CH4 leakage rates estimated in this study are comparable to values reported by studies carried out in the US and elsewhere. In contrast, the optimistic rates are similar to official CH4 leakage data from upstream gas production in Germany and in the UK. Considering that there is a lack of systematic, transparent and independent reports supporting the official values, this study further highlights the need for more research efforts in this direction. Compared with national energy-sector emissions, this study suggests that shale gas emissions of volatile organic compounds (VOCs) could be significant, whereas emissions of other air pollutants would be relatively insignificant. Similar to CH4, mitigation measures could be effective for reducing VOC emissions.
The second paper shows that VOC and nitrogen oxides (NOx) emissions from a future shale gas industry in Germany and the UK have potentially harmful consequences for European O3 air quality on both the local and regional scale. The results indicate a peak increase in maximum daily 8-hour average O3 (MDA8) ranging from 3.7 µg m-3 to 28.3 µg m-3. Findings suggest that shale gas activities could result in additional exceedances of MDA8 at a substantial percentage of regulatory measurement stations both locally and in neighboring and distant countries, with up to circa one third of stations in the UK and one fifth of stations in Germany experiencing additional exceedances. Moreover, the results reveal that the shale gas impact on the cumulative health-related metric SOMO35 (annual Sum of Ozone Means Over 35 ppb) could be substantial, with a maximum increase of circa 28%. Overall, the findings suggest that shale gas VOC emissions could play a critical role in O3 enhancement, while NOx emissions would contribute to a lesser extent. Thus, the results indicate that stringent regulation of VOC emissions would be important in the event of future European shale gas development to minimize deleterious health outcomes.
The third paper demonstrates that a hypothetical, complete transition of the German vehicle fleet to hydrogen fuel-cell technology could contribute substantially to Germany's climate and air quality goals. The results indicate that if the hydrogen were produced via renewable-powered water electrolysis (green hydrogen), German carbon dioxide equivalent (CO2eq) emissions would decrease by 179 MtCO2eq annually, whereas if electrolysis were powered by the current electricity mix, emissions would instead increase by 95 MtCO2eq annually. The findings generally reveal a notable anticipated decrease in German energy emissions of regulated air pollutants. The results suggest that vehicular hydrogen demand would be 1000 PJ annually, which would require between 446 TWh and 525 TWh for electrolysis, hydrogen transport and storage. When only the heavy-duty vehicle (HDV) segment is shifted to green hydrogen, the results of this thesis show that vehicular hydrogen demand drops to 371 PJ while a deep emissions cut is still realized (-57 MtCO2eq), suggesting that HDVs are a low-hanging fruit for contributing to the decarbonization of the German road transport sector with hydrogen energy.
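A hedged back-of-the-envelope check of the unit conversion behind these figures (the efficiency values are illustrative assumptions, not thesis inputs):

```python
# 1 TWh = 3.6 PJ, so 1000 PJ of hydrogen is ~278 TWh of hydrogen energy.
demand_pj = 1000
demand_twh = demand_pj / 3.6

for eta in (0.75, 0.60):             # assumed electrolysis-plus-delivery efficiencies
    print(f"eta={eta:.2f}: {demand_twh / eta:.0f} TWh electricity")
# ~370-463 TWh, the same order as the 446-525 TWh reported above once
# transport and storage losses are included.
```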
In this work I explore the impact of magnetic order on the laser-induced ultrafast strain response of metals. Few experiments with femto- or picosecond time-resolution have so far investigated magnetic stresses. This is contrasted by the industrial usage of magnetic invar materials or magnetostrictive transducers for ultrasound generation, which already utilize magnetostrictive stresses in the low frequency regime.
In the reported experiments I investigate how the energy deposited by the absorption of femtosecond laser pulses in thin metal films leads to ultrafast stress generation. I exploit the fact that this stress drives an expansion that emits nanoscopic strain pulses, so-called hypersound, into adjacent layers. Both the expansion and the strain pulses change the average inter-atomic distance in the sample, which can be tracked with sub-picosecond time resolution using an X-ray diffraction setup at a laser-driven plasma X-ray source. Ultrafast X-ray diffraction can also probe buried layers within heterostructures that cannot be accessed by optical methods, which have only limited penetration into metals. The reconstruction of the initial energy-transfer processes from the shape of the strain pulse in buried detection layers represents a contribution of this work to the field of picosecond ultrasonics.
A central point in the analysis of the experiments is the direct link between the deposited energy density in the nanostructures and the resulting stress on the crystal lattice. The underlying thermodynamic concept of a Grüneisen parameter provides the theoretical framework for my work. I demonstrate how the Grüneisen principle can be used to interpret the strain response on ultrafast timescales in various materials and how it can be extended to describe magnetic stresses. The class of heavy rare-earth elements exhibits especially large magnetostriction effects, which can even lead to an unconventional contraction of the laser-excited transducer material. Such a dominant contribution of the magnetic stress to the motion of atoms had not been demonstrated previously. The observed rise time of the magnetic stress contribution in dysprosium is identical to the decrease in the helical spin order that has been found previously using time-resolved resonant X-ray diffraction. This indicates that the strength of the magnetic stress can be used as a proxy of the underlying magnetic order. Such magnetostriction measurements are applicable even in the case of antiparallel or non-collinear alignment of the magnetic moments and a vanishing magnetization.
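A hedged formal sketch of the Grüneisen concept invoked here: each excited subsystem r (electrons, phonons, magnons) contributes a stress proportional to the energy density it holds,

```latex
\sigma(z,t) \;=\; \sum_{r \in \{\mathrm{el},\,\mathrm{ph},\,\mathrm{mag}\}} \Gamma_r \,\rho_r^{E}(z,t)
```

where Γ_r is the (assumed constant) Grüneisen parameter of subsystem r and ρ_r^E(z,t) the local energy density; a negative magnetic Γ_mag then yields the contractive magnetic stress observed in the heavy rare earths.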
The strain response of metal films is usually determined by the pressure of electrons and lattice vibrations. I have developed a versatile two-pulse excitation routine that can be used to extract the magnetic contribution to the strain response even if systematic measurements above and below the magnetic ordering temperature are not feasible. A first laser pulse leads to a partial ultrafast demagnetization, so that the amplitude and shape of the strain response triggered by the second pulse depend on the remaining magnetic order. With this method I identified a strongly anisotropic magnetic stress contribution in the magnetic data-storage material iron-platinum and tracked the recovery of the magnetic order by varying the pulse-to-pulse delay. The stark contrast between the expansion of iron-platinum nanograins and that of thin films shows that the different constraints on the in-plane expansion strongly influence the out-of-plane expansion via the Poisson effect. I show how such transverse strain contributions need to be accounted for when interpreting the ultrafast out-of-plane strain response using thermal expansion coefficients obtained under near-equilibrium conditions.
This work contributes an investigation of magnetostriction on ultrafast timescales to the literature on magnetic effects in materials. It develops a method to extract spatially and temporally varying stress contributions based on a model for the amplitude and shape of the emitted strain pulses. Energy-transfer processes result in a change of the stress profile with respect to the initial absorption of the laser pulses. One interesting example occurs in nanoscopic gold-nickel heterostructures, where excited electrons rapidly transport energy into a distant nickel layer, which takes up much more energy and expands faster and more strongly than the laser-excited gold capping layer. Magnetic excitations in rare-earth materials represent a large energy reservoir that delays the energy transfer into adjacent layers. Such magneto-caloric effects are known in thermodynamics but are not extensively covered on ultrafast timescales. The combination of ultrafast X-ray diffraction and time-resolved techniques with direct access to the magnetization has a large potential to uncover and quantify such energy-transfer processes.
Anthropogenic climate change alters the hydrological cycle. While certain areas experience more intense precipitation events, others will experience droughts and increased evaporation, affecting water storage in long-term reservoirs, groundwater, snow, and glaciers. High elevation environments are especially vulnerable to climate change, which will impact the water supply for people living downstream. The Himalaya has been identified as a particularly vulnerable system, with nearly one billion people depending on the runoff in this system as their main water resource. As such, a more refined understanding of spatial and temporal changes in the water cycle in high altitude systems is essential to assess variations in water budgets under different climate change scenarios.
Anthropogenic influences are not the only driver, however: changes in the hydrological cycle also occur over geological timescales, connected to the interplay between orogenic uplift and climate change, although their temporal evolution and causes are often difficult to constrain. Using proxies that reflect hydrological changes with increasing elevation, we can unravel the history of orogenic uplift in mountain ranges and its effect on the climate.
In this thesis, stable isotope ratios (expressed as δ2H and δ18O values) of meteoric waters and organic material, as tracers of atmospheric and hydrologic processes, are combined with remote sensing products to better understand water sources in the Himalayas. In addition, the record of modern climatological conditions based on the compound-specific stable isotopes of leaf waxes (δ2Hwax) and of brGDGTs (branched glycerol dialkyl glycerol tetraethers) in modern soils in four Himalayan river catchments was assessed to evaluate these compounds as proxies of paleoclimate and (paleo-)elevation. Ultimately, hydrological variations over geological timescales were examined using δ13C and δ18O values of soil carbonates and bulk organic matter from sedimentological sections of the pre-Siwalik and Siwalik groups to track the response of vegetation and of monsoon intensity and seasonality on a timescale of 20 Myr.
I find that Rayleigh distillation, with an Indian summer monsoon (ISM) moisture source, mainly controls the isotopic composition of surface waters in the studied Himalayan catchments. An increase in d-excess in spring, verified by remote sensing data products, shows the significant impact of runoff from snow-covered and glaciated areas on the surface-water isotopic values in the time series.
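As a hedged illustration of the Rayleigh distillation control identified here, with textbook-style fractionation factors rather than thesis values:

```python
import numpy as np

def rayleigh(delta0, f, alpha):
    """Isotopic composition (permil) of vapour after a fraction f remains."""
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

f = np.linspace(1.0, 0.2, 5)                 # remaining vapour fraction
d18o = rayleigh(-12.0, f, 1.0098)            # assumed alpha for 18O/16O
d2h = rayleigh(-86.0, f, 1.084)              # assumed alpha for 2H/1H
d_excess = d2h - 8.0 * d18o                  # deuterium-excess definition
print(np.round(d18o, 1), np.round(d2h, 1), np.round(d_excess, 1))
# Progressive rainout depletes the remaining vapour (and thus downstream
# precipitation) in heavy isotopes, mimicking the altitude effect.
```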
In addition, I show that biomarker records such as brGDGTs and δ2Hwax have the potential to record (paleo-)elevation, as they correlate significantly with temperature and surface-water δ2H values, respectively, as well as with elevation. Comparing the elevations inferred from brGDGTs and δ2Hwax, large differences were found in arid sections of the elevation transects due to an additional effect of evapotranspiration on δ2Hwax. A combined study of these proxies can improve paleoelevation estimates, and recommendations are provided based on the results of this study.
Ultimately, I infer from the stable isotopic signatures of the two sedimentary sections in the Himalaya (east and west) that the expansion of C4 vegetation between 20 and 1 Myr did not depend solely on atmospheric pCO2, but also on regional changes in aridity and seasonality.
This thesis shows that the stable isotope chemistry of surface waters can be applied as a tool to monitor the changing Himalayan water budget under projected increasing temperatures. The uncertainties associated with paleo-elevation reconstructions were reduced by combining organic proxies (δ2Hwax and brGDGTs) in Himalayan soils. Stable isotope ratios in bulk soil and soil carbonates traced the evolution of vegetation under the influence of the monsoon during the late Miocene, showing that these proxies can be used to record monsoon intensity, seasonality, and the response of vegetation. In conclusion, the use of organic proxies and stable isotope chemistry in the Himalayas has proven to successfully record changes in climate with increasing elevation. The combination of δ2Hwax and brGDGTs as a new proxy provides a more refined understanding of (paleo-)elevation and the influence of climate.
This thesis examines the historical development of the praetorian prefecture in the third century AD and assesses its function within the imperial order of rule. Owing to the military and political crises of the third century and the strategies of rule adapted to them, the praetorian prefects received comprehensive responsibilities. The disparate state of the sources and of scholarship, however, describes the prefects' growth in power and the upgrading of their function in this important phase very differently. Moreover, proceeding from the late antique accounts, the majority of scholars assume a loss of power of the praetorian prefects under Constantine, to whom a reform of the prefecture is attributed. This loss of power, however, cannot be securely dated or functionally defined. In scholarship, this functional decline is often explained by the Constantinian demilitarisation and regionalisation of the praetorian prefecture. What has been missing so far is an up-to-date comprehensive account that assesses and categorises the praetorian prefecture within the order of rule of the third century, in order to distinguish it functionally from the classical praetorian prefecture and from the regional prefecture of the fourth century.
For this functional demarcation, this thesis abstracts the functional characteristics and historical contexts of the praetorian prefecture in the third century and derives from them the ideal type of an "imperial magistracy". The results of this abstraction show the third-century praetorian prefecture as a communicative interface between the emperor and the leading offices of the central and provincial administration. The prefecture assumed a leading staff function which, in connection with the highest jurisdiction without appeal, constituted the second tier of office-holders after the emperor. The praetorian prefects exercised this function without territorial ties until the end of the Tetrarchy and the early reign of Constantine, respectively.
Generative adversarial networks (GANs) have been broadly applied to a wide range of application domains since their proposal. In this thesis, we propose several methods that aim to tackle different existing problems in GANs. In particular, even though GANs are generally able to generate high-quality samples, the diversity of the generated set is often sub-optimal. Moreover, the common increase in the number of models in the original GAN framework, as well as in their architectural sizes, introduces additional costs. Additionally, the proper evaluation of a generated set, though challenging, is an important direction towards ultimately improving the generation process in GANs.

We start by introducing two diversification methods that extend the original GAN framework to multiple adversaries to stimulate sample diversity in a generated set. Then, we introduce a new post-training compression method based on Monte Carlo methods and importance sampling to quantize and prune the weights and activations of pre-trained neural networks without any additional training. This method may be used to reduce the memory and computational costs introduced by increasing the number of models in the original GAN framework. Moreover, we use a similar procedure to quantize and prune gradients during training, which also reduces the communication costs between different workers in a distributed training setting.

We introduce several topology-based evaluation methods to assess data generation in different settings, namely image generation and language generation. Our methods retrieve both single-valued and double-valued metrics which, given a real set, may be used to broadly assess a generated set or to separately evaluate sample quality and sample diversity, respectively. Moreover, two of our metrics use locality-sensitive hashing to accurately assess the generated sets of highly compressed GANs. The analysis of the compression effects in GANs paves the way for their efficient employment in real-world applications. Given their general applicability, the methods proposed in this thesis may be extended beyond the context of GANs. Hence, they may be generally applied to enhance existing neural networks and, in particular, generative frameworks.
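A hedged sketch of the Monte Carlo idea behind such post-training compression: sampling weights by importance and reweighting them so the layer stays unbiased in expectation (an illustration of the general principle, not the exact method proposed in the thesis):

```python
import numpy as np

def importance_sample_weights(w, n_samples):
    """Sparsify a weight matrix by importance sampling with replacement."""
    p = np.abs(w).ravel()
    p = p / p.sum()                          # sampling distribution over weights
    idx = np.random.choice(p.size, size=n_samples, replace=True, p=p)
    sparse = np.zeros(p.size)
    # Accumulate 1/(n*p_i) per draw: E[sparse_i] = 1, so E[sparse*w] = w.
    np.add.at(sparse, idx, 1.0 / (n_samples * p[idx]))
    return (sparse * w.ravel()).reshape(w.shape)

w = np.random.randn(64, 64)
w_pruned = importance_sample_weights(w, n_samples=1024)
print("nonzero fraction:", np.count_nonzero(w_pruned) / w.size)
```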
Massive Open Online Courses (MOOCs) open up new opportunities to learn a wide variety of skills online and are thus well suited for individual education, especially where proficient teachers are not available locally. At the same time, modern society is undergoing a digital transformation, requiring the training of large numbers of current and future employees. Abstract thinking, logical reasoning, and the need to formulate instructions for computers are becoming increasingly relevant. A holistic way to train these skills is to learn how to program. Programming, in addition to being a mental discipline, is also considered a craft, and practical training is required to achieve mastery. In order to effectively convey programming skills in MOOCs, practical exercises are incorporated into the course curriculum to offer students the necessary hands-on experience to reach an in-depth understanding of the programming concepts presented. Our preliminary analysis showed that while being an integral and rewarding part of courses, practical exercises bear the risk of overburdening students who are struggling with conceptual misunderstandings and unknown syntax. In this thesis, we develop, implement, and evaluate different interventions with the aim to improve the learning experience, sustainability, and success of online programming courses. Data from four programming MOOCs, with a total of over 60,000 participants, are employed to determine criteria for practical programming exercises best suited for a given audience.
Based on over five million executions and scoring runs from students' task submissions, we deduce exercise difficulties, students' patterns in approaching the exercises, and potential flaws in exercise descriptions as well as preparatory videos. The primary issue in online learning is that students face a social gap caused by their isolated physical situation. Each individual student usually learns alone in front of a computer and suffers from the absence of a pre-determined time structure as provided in traditional school classes. Furthermore, online learning usually presses students into a one-size-fits-all curriculum, which presents the same content to all students, regardless of their individual needs and learning styles. Any means of a personalization of content or individual feedback regarding problems they encounter are mostly ruled out by the discrepancy between the number of learners and the number of instructors. This results in a high demand for self-motivation and determination of MOOC participants. Social distance exists between individual students as well as between students and course instructors. It decreases engagement and poses a threat to learning success. Within this research, we approach the identified issues within MOOCs and suggest scalable technical solutions, improving social interaction and balancing content difficulty.
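As a hedged illustration of deducing exercise difficulty from scoring runs, a minimal sketch with an assumed log schema (not the actual course data model):

```python
import pandas as pd

# Hypothetical submission log; column names are invented for illustration.
runs = pd.DataFrame({
    "exercise": ["ex1", "ex1", "ex2", "ex2", "ex2"],
    "student":  ["a", "b", "a", "b", "c"],
    "score":    [1.0, 0.5, 0.2, 0.4, 0.0],   # normalized scoring-run result
})

# Best score each student reached per exercise, then difficulty as the
# complement of the mean best score: higher value = harder exercise.
best = runs.groupby(["exercise", "student"])["score"].max()
difficulty = 1.0 - best.groupby("exercise").mean()
print(difficulty)
```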
Our contributions include situational interventions, approaches for personalizing educational content as well as concepts for fostering collaborative problem-solving. With these approaches, we reduce counterproductive struggles and create a universal improvement for future programming MOOCs. We evaluate our approaches and methods in detail to improve programming courses for students as well as instructors and to advance the state of knowledge in online education.
Data gathered from our experiments show that receiving peer feedback on one's programming problems improves overall course scores by up to 17%. Merely the act of phrasing a question about one's problem improved overall scores by about 14%. The rate of students reaching out for help was significantly improved by situational just-in-time interventions. Request for Comment interventions increased the share of students asking for help by up to 158%. Data from our four MOOCs further provide detailed insight into the learning behavior of students. We outline additional significant findings with regard to student behavior and demographic factors. Our approaches, the technical infrastructure, the numerous educational resources developed, and the data collected provide a solid foundation for future research.
Partially synchronous states exist in systems of coupled oscillators between full synchrony and asynchrony. They are an important research topic because of the variety of dynamical states they comprise. Frequently, they are studied using phase dynamics. This is a caveat, as phase dynamics are generally obtained in the weak-coupling limit, as a first-order approximation in the coupling strength; the generalization to higher orders in the coupling strength is an open problem. Of particular interest in the research of partial synchrony are systems containing both attractive and repulsive coupling between the units. Such a mix of couplings yields very specific dynamical states that may help understand the transition between full synchrony and asynchrony.

This thesis investigates partially synchronous states in mixed-coupling systems. First, a method for higher-order phase reduction is introduced to capture interactions beyond the pairwise ones of the first-order phase description, in the hope that these may apply to mixed-coupling systems. This new method, for coupled systems with known phase dynamics of the units, gives correct results but, like most comparable methods, is computationally expensive. It is applied to three Stuart-Landau oscillators coupled in a line with uniform coupling strength, and a numerical method is derived to verify the analytical results. These results are interesting but lend importance to simpler phase models that still exhibit exotic states.

Such simple, yet rarely considered, models are Kuramoto oscillators with attractive and repulsive interactions. Depending on how the units are coupled and on the frequency difference between them, many different states can be achieved. Rich synchronization dynamics, such as a Bellerophon state, are observed in a Kuramoto model with attractive interactions within two subpopulations (groups) and repulsive interactions between the groups. In two groups of identical oscillators with a frequency difference, one attractive and one repulsive, an interesting solitary state appears directly between full and partial synchrony. This system can be described very well analytically.
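A minimal hedged sketch of such a two-group Kuramoto system in mean-field form, attractive within groups and repulsive between them (all coupling signs and parameter values are illustrative assumptions, not those of the thesis):

```python
import numpy as np

rng = np.random.default_rng(3)

N = 200                                   # oscillators per group
omega = np.array([0.0, 0.3])              # frequency difference between groups
K = np.array([[1.0, -0.5],                # K[g, h]: coupling of group h onto group g
              [-0.5, 1.0]])               # attractive within, repulsive between

phi = rng.uniform(0, 2 * np.pi, size=(2, N))
dt = 0.01
for _ in range(20000):
    Z = np.exp(1j * phi).mean(axis=1)     # complex order parameter of each group
    dphi = omega[:, None] + sum(
        K[:, h, None] * np.abs(Z[h]) * np.sin(np.angle(Z[h]) - phi)
        for h in range(2)
    )
    phi += dt * dphi

print("group order parameters:", np.abs(np.exp(1j * phi).mean(axis=1)))
```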
The spread of shrubs in Namibian savannas raises questions about the resilience of these ecosystems to global change. This makes it necessary to understand the past dynamics of the vegetation, since there is no consensus on whether shrub encroachment is a new phenomenon, nor on its main drivers. However, a lack of long-term vegetation datasets for the region and the scarcity of suitable palaeoecological archives make reconstructing the past vegetation and land cover of these savannas a challenge.
To help meet this challenge, this study addresses three main research questions: 1) Is pollen analysis a suitable tool to reflect the vegetation change associated with shrub encroachment in savanna environments? 2) Does the current encroached landscape correspond to an alternative stable state of savanna vegetation? 3) To what extent do pollen-based quantitative vegetation reconstructions reflect changes in past land cover?
The research focuses on north-central Namibia, the region most affected by shrub encroachment, particularly since the beginning of the 21st century, yet one for which little is known about the dynamics of this phenomenon.
Field-based vegetation data were compared with modern pollen data to assess their correspondence in terms of composition and diversity along precipitation and grazing-intensity gradients. In addition, two sediment cores from Lake Otjikoto were analysed to reveal changes in vegetation composition over the past 170 years and their possible drivers. For this, a multiproxy approach (fossil pollen, sedimentary ancient DNA (sedaDNA), biomarkers, compound-specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) was applied at high taxonomic and temporal resolution. REVEALS modelling of the fossil pollen record from Lake Otjikoto was applied to quantitatively reconstruct past vegetation cover. To this end, we first derived pollen productivity estimates (PPEs) of the most relevant savanna taxa in the region using the extended R-value model and two pollen dispersal options (a Gaussian plume model and a Lagrangian stochastic model). The REVEALS-based vegetation reconstruction was then validated against remote-sensing-based regional vegetation data.
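The core of the REVEALS estimator is simple: pollen counts are deflated by taxon-specific pollen productivity and dispersal-deposition factors and then renormalized. The following is a hedged sketch of that computation (after Sugita's REVEALS formulation); the taxa and all numbers are hypothetical, and the study's actual implementation may differ:

```python
import numpy as np

# Hedged sketch of the REVEALS estimator: regional cover V_i from pollen
# counts n_i, pollen productivity estimates (PPE) alpha_i, and
# dispersal-deposition factors K_i. All numbers below are hypothetical.
def reveals_cover(counts, ppe, K):
    adjusted = np.asarray(counts, float) / (np.asarray(ppe) * np.asarray(K))
    return adjusted / adjusted.sum()     # normalized cover fractions

counts = [500, 120, 80]   # fossil pollen counts for three example taxa
ppe    = [1.0, 2.5, 1.8]  # PPEs relative to a reference taxon
K      = [0.9, 0.6, 0.7]  # e.g. from a Lagrangian stochastic dispersal model
print(reveals_cover(counts, ppe, K))
```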
The results show that modern pollen reflects the composition of the vegetation well, but diversity less well. Notably, precipitation and grazing explain a significant amount of the compositional change in the pollen and vegetation spectra. The multiproxy record shows that a state change from open Combretum woodland to encroached Terminalia shrubland can occur within a century, and that the transition between states spans around 80 years and is characterized by a unique vegetation composition. This transition was driven by gradual environmental changes induced by management (i.e. broad-scale logging for the mining industry, selective grazing, and reduced fire activity associated with intensified farming) and related land-use change. The resulting environmental changes (i.e. reduced soil moisture, reduced grass cover, changes in species composition and competitiveness, reduced fire intensity) may have eroded the resilience of open Combretum woodlands, making them more susceptible to shifting to an encroached state through stochastic events, such as consecutive years of high precipitation or drought, and through elevated pCO2 concentrations. We assume that the resulting encroached state was further stabilized by feedback mechanisms that favour the establishment and competitiveness of woody vegetation.
The REVEALS-based quantitative estimates of plant taxa indicate the predominance of a semi-open landscape throughout the 20th century and a reduction in grass cover below 50% since the beginning of the 21st century, associated with the spread of encroacher woody taxa. Cover estimates show a close match with regional vegetation data, supporting the vegetation dynamics inferred from the multiproxy analyses. Reasonable PPEs were obtained for all woody taxa, but not for Poaceae.
In conclusion, pollen analysis is a suitable tool to reconstruct past vegetation dynamics in savannas. However, because pollen cannot identify grasses beyond the family level, a multiproxy approach, particularly the use of sedaDNA, is required. I was able to separate stable encroached states from mere woodland phases, identify drivers, and speculate about related feedbacks. In addition, the REVEALS-based quantitative vegetation reconstruction clearly reflects the magnitude of the changes in vegetation cover that occurred during the last 130 years, despite the limitations of some PPEs.
This research provides new insights into pollen-vegetation relationships in savannas and highlights the importance of multiproxy approaches when reconstructing past vegetation dynamics in semi-arid environments. It also provides the first time series with sufficient taxonomic resolution to show changes in vegetation composition during shrub encroachment, as well as the first quantitative reconstruction of past land cover in the region. These results help to identify the different stages in savanna dynamics and can be used to calibrate predictive models of vegetation change, which are highly relevant to land management.
Carbonatite magmatism is a highly efficient transport mechanism from Earth's mantle to the crust, thus providing insights into the chemistry and dynamics of the Earth's mantle. One evolving and promising tool for tracing magma interaction is the stable iron isotope system, particularly because iron isotope fractionation is controlled by oxidation state and bonding environment. A large data set on iron isotope fractionation in igneous rocks now exists, comprising bulk rock compositions and fractionation between mineral groups. Iron isotope data from natural carbonatite rocks are extremely light and remarkably variable. This resembles iron isotope data from mantle xenoliths, which are characterized by a variability in δ56Fe spanning three times the range found in basalts, and by the extremely light values of some whole-rock samples, reaching δ56Fe as low as -0.69 ‰ in a spinel lherzolite. This large range of variation may be caused by metasomatic processes involving metasomatic agents such as volatile-bearing, highly alkaline silicate melts or carbonate melts. The expected effects of metasomatism on iron isotope fractionation vary with parameters such as the melt/rock ratio, reaction time, and the nature of the metasomatic agents and mineral reactions involved. An alternative or additional way to enrich light isotopes in the mantle could be multiple phases of melt extraction. To interpret the existing data sets, more knowledge on iron isotope fractionation factors is needed.
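For reference, the δ and Δ notations used throughout follow the standard convention for iron isotopes (a fact about the notation, not a result of the thesis), with ratios reported relative to the IRMM-014 reference material:

```latex
\delta^{56}\mathrm{Fe} =
\left[ \frac{\left(^{56}\mathrm{Fe}/^{54}\mathrm{Fe}\right)_{\mathrm{sample}}}
            {\left(^{56}\mathrm{Fe}/^{54}\mathrm{Fe}\right)_{\mathrm{IRMM\text{-}014}}}
 - 1 \right] \times 1000\,\text{‰},
\qquad
\Delta^{56}\mathrm{Fe}_{A\text{-}B} = \delta^{56}\mathrm{Fe}_{A} - \delta^{56}\mathrm{Fe}_{B}
```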
To investigate the behavior of iron isotopes in carbonatite systems, kinetic and equilibration experiments between immiscible silicate and carbonate melts were performed in natro-carbonatite systems in an internally heated gas pressure vessel at intrinsic redox conditions, at temperatures between 900 and 1200 °C and pressures of 0.5 and 0.7 GPa. The iron isotope compositions of the coexisting silicate and carbonate melts were analyzed by solution MC-ICP-MS. The kinetic experiments, employing a 58Fe-spiked starting material, show that isotopic equilibrium is attained after 48 hours. The equilibrium experiments show that the light isotopes are enriched in the carbonatite melt. The highest mean Δ56Fesil.melt-carb.melt of 0.13 ‰ was determined in a system with a strongly peralkaline silicate melt composition (ASI ≥ 0.21, Na/Al ≤ 2.7). In three systems with extremely peralkaline silicate melt compositions (ASI between 0.11 and 0.14), iron isotope fractionation could not be resolved analytically. The lowest mean Δ56Fesil.melt-carb.melt of 0.02 ‰ was determined in a system with an extremely peralkaline silicate melt composition (ASI ≤ 0.11, Na/Al ≥ 6.1). The observed iron isotope fractionation is most likely governed by the redox conditions of the system. Yet, in the systems where no fractionation occurred, structural changes induced by compositional changes possibly override the influence of redox conditions. This interpretation implies that the iron isotope system holds the potential to be useful not only for exploring redox conditions in magmatic systems, but also for detecting structural changes in a melt.
In situ iron isotope analyses by femtosecond laser ablation coupled to MC-ICP-MS were performed on magnetite and olivine grains to reveal variations in iron isotope composition on the micro scale. The investigated sample is a melilitite bomb from the Salt Lake Crater group at Honolulu (Oahu, Hawaii) showing strong evidence for interaction with a carbonatite melt. While the magnetite grains are rather homogeneous in their iron isotope compositions, the olivine grains span a far larger range in iron isotope ratios. The variability of δ56Fe in magnetite is limited, from -0.17 ‰ (± 0.11 ‰, 2SE) to +0.08 ‰ (± 0.09 ‰, 2SE), whereas δ56Fe in olivine ranges from -0.66 ‰ (± 0.11 ‰, 2SE) to +0.10 ‰ (± 0.13 ‰, 2SE). Olivine and magnetite grains record different information regarding kinetic and equilibrium fractionation due to their different Fe diffusion coefficients. The observations from the experiments and the in situ analyses suggest that the extremely light iron isotope signatures found in carbonatites are generated by several steps of isotope fractionation during carbonatite genesis, potentially involving both equilibrium and kinetic fractionation. Since iron isotopic signatures in natural systems are generated by a combination of multiple factors (pressure, temperature, redox conditions, phase composition and structure, time scale), multi-tracer approaches are needed to explain the signatures found in natural rocks.
One of the key challenges in modern Facility Management (FM) is to digitally reflect the current state of the built environment, referred to as the as-is or as-built representation, as opposed to the as-designed representation. While the use of Building Information Modeling (BIM) can address the issue of digital representation, the generation and maintenance of BIM data require a considerable amount of manual work and domain expertise. Another key challenge is monitoring the current state of the built environment, which is used to provide feedback and enhance decision making. The need for an integrated solution for all data associated with the operational life cycle of a building is becoming more pronounced as practices from Industry 4.0 are being evaluated and adopted for FM use.

This research presents an approach for the digital representation of indoor environments in their current state within the life cycle of a given building. Such an approach requires the fusion of various sources of digital data. The key to solving this complex issue of digital data integration, processing, and representation lies in the use of a Digital Twin (DT). A DT is a digital duplicate of the physical environment, its states, and its processes. It fuses as-designed and as-built digital representations of the built environment, typically in the form of floorplans, point clouds, and BIMs, with as-is data and additional information layers pertaining to the current and predicted states of an indoor environment or a complete building (e.g., sensor data). The design, implementation, and initial testing of prototypical DT software services for indoor environments are presented and described. These DT software services are implemented within a service-oriented paradigm, and their feasibility is demonstrated through functioning and tested key software components within prototypical Service-Oriented System (SOS) implementations.

The main outcome of this research is that key data related to the built environment can be semantically enriched and combined to enable digital representations of indoor environments based on the concept of a DT. Furthermore, the outcomes show that digital data related to FM and Architecture, Engineering, Construction, Owner and Occupant (AECOO) activity can be combined, analyzed, and visualized in real time using a service-oriented approach. This has great potential to benefit decision making related to Operation and Maintenance (O&M) procedures within the post-construction life cycle stages of typical office buildings.
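To make the idea concrete, here is a deliberately minimal, hypothetical sketch (all class and field names are invented, not from the thesis) of the core DT pattern described above: a service that anchors live sensor readings to as-built BIM elements, yielding a queryable as-is layer:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: a minimal service-oriented Digital Twin component
# that fuses static (as-built) BIM elements with live sensor readings.
@dataclass
class SensorReading:
    sensor_id: str
    quantity: str          # e.g. "temperature"
    value: float

@dataclass
class Room:
    guid: str              # e.g. the IFC GlobalId of the as-built BIM element
    name: str
    readings: list = field(default_factory=list)

class DigitalTwinService:
    def __init__(self):
        self.rooms = {}                    # as-designed/as-built layer

    def register_room(self, room: Room):
        self.rooms[room.guid] = room

    def ingest(self, guid: str, reading: SensorReading):
        # as-is layer: attach current-state data to the BIM element
        self.rooms[guid].readings.append(reading)

    def current_state(self, guid: str, quantity: str):
        values = [r.value for r in self.rooms[guid].readings
                  if r.quantity == quantity]
        return sum(values) / len(values) if values else None

# usage: register a room, stream readings in, query the as-is state
dt = DigitalTwinService()
dt.register_room(Room(guid="2O2Fr$t4X7Zf8", name="Office 101"))
dt.ingest("2O2Fr$t4X7Zf8", SensorReading("t-01", "temperature", 21.4))
print(dt.current_state("2O2Fr$t4X7Zf8", "temperature"))
```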
Anthropogenic activities such as continuous landscape changes threaten biodiversity at both local and regional scales. Metacommunity models attempt to combine these two scales and continuously contribute to a better mechanistic understanding of how spatial processes and constraints, such as fragmentation, affect biodiversity. There is a strong consensus that such structural changes of the landscape tend to negatively affect the stability of metacommunities. However, the interplay of complex trophic communities and landscape structure in particular is not yet fully understood.
In the present dissertation, a metacommunity approach is used, based on a dynamic and spatially explicit model that integrates population dynamics at the local scale and dispersal dynamics at the regional scale. This approach allows assessing the effects of complex spatial landscape components, such as habitat clustering, on complex species communities, as well as analyzing the population dynamics of a single species. In addition to the impact of a fixed landscape structure, periodic environmental disturbances are also considered, in which a periodic change in habitat availability temporarily alters the landscape structure, as in the seasonal drying of a water body.
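The two coupled scales can be illustrated with a toy simulation (a hedged sketch with invented parameters, far simpler than the trophic model used in the thesis): logistic growth on each patch plus diffusive dispersal along the links of a random patch network:

```python
import numpy as np

# Toy sketch of the two coupled scales: local logistic growth plus
# dispersal over a random patch network. Parameters are illustrative.
rng = np.random.default_rng(1)
P = 8                                   # number of habitat patches
A = rng.random((P, P)) < 0.3            # random links between patches
A = np.triu(A, 1)
A = (A | A.T).astype(float)             # symmetric adjacency, no self-links
N = rng.random(P)                       # initial population densities
r, K, d, dt = 1.0, 1.0, 0.05, 0.1       # growth, capacity, dispersal, step

for _ in range(5000):
    growth = r * N * (1.0 - N / K)      # local scale: logistic growth
    emigration = d * N * A.sum(axis=1)  # regional scale: leaving via links
    immigration = d * (A @ N)           # arriving from connected patches
    N += dt * (growth - emigration + immigration)

print(N.round(3))                       # equilibrium densities per patch
```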
On the local scale, the model results suggest that large-bodied animal species, such as predator species at high trophic positions, are more prone to extinction in a state of large patch isolation than smaller species at lower trophic levels.
Increased metabolic losses for species with lower body mass lead to increased energy limitation for species at higher trophic levels and serve as an explanation for the predominant loss of these species. This effect is particularly pronounced in food webs whose species are more sensitive to increased metabolic losses through dispersal and changes in landscape structure.
In addition to the impact of the species composition of a food web on diversity, the strength of local foraging interactions likewise affects the synchronization of population dynamics. Reduced predation pressure leads to more asynchronous population dynamics, which benefits the stability of population dynamics as it reduces the risk of correlated extinction events among habitats. On the regional scale, two landscape aspects, the mean patch isolation and the formation of local clusters of two patches, promote an increase in β-diversity. Yet the individual composition and robustness of the local species community equally explain a large proportion of the observed diversity patterns.
A combination of periodic environmental disturbance and patch isolation has a particular impact on the population dynamics of a species. The periodic disturbance has a synchronizing effect; it can even override emerging asynchronous dynamics under high patch isolation and unifies synchronization trends across different species communities.
In summary, the findings underline the large local impact of species composition and interactions on the local diversity patterns of a metacommunity. In comparison, landscape structures such as fragmentation have a negligible effect on local diversity patterns but a growing impact on regional diversity patterns. At the level of population dynamics, in contrast, regional characteristics such as periodic environmental disturbance and patch isolation have a particularly strong impact and contribute substantially to understanding the stability of population dynamics in a metacommunity. These studies demonstrate once again the complexity of our ecosystems and the need for further analysis toward a better understanding of our environment and a more targeted conservation of biodiversity.
Kenya and Uganda are among the countries that, for different historical, political, and economic reasons, have embarked on law reform processes with regard to citizenship. In 2009, Uganda made provisions in its laws to allow citizens to hold dual citizenship, and Kenya's 2010 constitution similarly introduced it, lifting the general prohibition on dual citizenship while retaining a ban on state officers, including the President and Deputy President, being dual nationals (Manby, 2018).
Against this background, I analysed why these countries, which previously held stringent laws and policies against dual citizenship, made this shift in close temporal proximity. Given their geo-political roles, location, and regional, continental, and international obligations, I conducted a comparative study of the processes, actors, impacts, and effects. The period researched spans 2000 to 2010, from the emergence of the law reform debates through the implementation of the reform processes, covering the actors involved and the implications.
According to Rubenstein (2000, p. 520), citizenship is observed in terms of “political institutions” that are free to act according to the will of, in the interests of, or with authority over, their citizenry. Institutions are emergent national or international, higher-order factors above the individual spectrum, carrying the interests and political involvement of their actors without requiring recurring collective mobilisation or imposing intervention to realise these regularities. Transnational institutions are organisations with authority beyond single governments. Given their international obligations, I analysed the role of the UN, AU, and EAC in influencing the citizenship debates and reforms in Kenya and Uganda. Furthermore, non-state actors, such as civil society, were considered.
Veblen (1899) describes institutions as a set of settled habits of thought common to the generality of men. Institutions function only because the rules involved are rooted in shared habits of thought and behaviour, although there is some ambiguity in the definition of the term “habit”. Whereas abstractions and definitions depend on different analytical procedures, institutions restrain some forms of action and facilitate others; transnational institutions likewise both restrict and aid behaviour. The famous “invisible hand” is nothing else but transnational institutions. Transnational theories, as applied to politics, posit two distinct forms of influence over policy and political action (Veblen, 1899). The influence and durability of institutions is “a function of the degree to which they are instilled in political actors at the individual or organisational level, and the extent to which they thereby ‘tie up’ material resources and networks”. Against this background, transnational networks with connections to Kenya and Uganda were considered, alongside the diaspora from these two countries and their role in the debate and reforms on dual citizenship.
Sterian (2013, p. 310) notes that nation states may be vulnerable to institutional influence, and this vulnerability can pose a threat to a nation's autonomy, political legitimacy, and democratic public law. Transnational institutions sometimes “collide with the sovereignty of the state when they create new structures for regulating cross-border relationships”. Griffin (2003), however, disputes the notion that transnational institutional behaviour is premised on the principles of neutrality, impartiality, and independence. Transnational institutions have become a main target of lobby groups and civil society, consequently leading to excessive politicisation. Kenya and Uganda are member states not only of the broader African Union but also of the EAC, which has adopted elements of socio-economic uniformity. Therefore, in the comparative analysis, I examine the role of the East African Community and its partners in the dual citizenship debate in the two countries.
I argue in the analysis that it is important not only to be a citizen within Kenya or Uganda but also to examine how the issue of dual citizenship is legally interpreted within the borders of each individual nation-state. In light of this discussion, I agree with Mamdani's definition of the nation-state as a unique form of power introduced in Africa by colonial powers between 1880 and 1940, whose outcomes can be viewed as “debris of a modernist postcolonial project, an attempt to create a centralised modern state as the bearer of Westphalia sovereignty against the background of indirect rule” (Mamdani, 1996, p. xxii). I argue that this project has impacted the citizenship debate through the adopted legal framework of postcolonialism, built partly on a class system, ethnic definitions, and political affiliation. I insist, however, that the nation-state should remain a vital custodian of the citizenship debate, without in any way denying the individual the rights to identity and belonging. The question that then arises is: which type of nation-state? Mamdani (1996, p. 298) asserts that the core agenda African states faced at independence was threefold: deracialising civil society, detribalising the native authority, and developing the economy in the context of unequal international relations. Post-independence governments grappled with overcoming the citizen-subject dichotomy by either preserving the customary in the name of “defending tradition against alien encroachment or abolishing it in the name of overcoming backwardness and embracing triumphant modernism”. Kenya and Uganda are among the countries that have reformed their citizenship laws, attesting to Mamdani's latter assertion.
Mamdani's (1996) assertions on how African states continue to deal with the issue of citizenship, through either the defence of tradition or its abolition in the name of overcoming backwardness and accepting triumphant modernism, are based on colonial legal theory and the citizen-subject dichotomy within African communities. To widen the perspective on legal theory, I argue that these assertions point to the historical divergence between the republican model of citizenship, which emphasises political agency as envisioned in Rousseau's social contract, and the liberal model of citizenship, which stresses legal status and protection (Pocock, 1995).
I therefore compare the contexts of Kenya and Uganda, the actors, and the implications of transnationalism and post-nationalism for the citizens, the nation-state, and the region. I conclude by highlighting the shortcomings in the law reforms that allowed for dual citizenship, demonstrating an urgent need to address issues such as child statelessness, gendered nationality laws, and the rights of dual citizens. Ethnicity, a weak nation state, and inconsistent citizenship law reforms are closely linked to the historical trajectories of both countries. I further indicate the economic and political incentives that influenced the reforms.
Keywords: Citizenship, dual citizenship, nation state, republicanism, liberalism, transnationalism, post-nationalism
Formed by the collision between the Adriatic and European plates, the Alpine orogen exhibits significant lithospheric heterogeneity due to the long history of interplay between these plates, other continental and oceanic blocks in the region, and features inherited from preceding orogenies. This implies that the thermal and rheological configuration of the lithosphere also varies significantly throughout the region. Lithology and temperature/pressure conditions exert a first-order control on rock strength, principally via thermally activated creep deformation, and on the depth distribution of the brittle-ductile transition zone, which can be regarded as the lower bound of the seismogenic zone. They therefore influence the spatial distribution of seismicity within a lithospheric plate. In light of this, accurately constrained geophysical models of the heterogeneous Alpine lithospheric configuration are crucial for describing regional deformation patterns. However, despite the amount of research focussing on the area, different hypotheses still exist regarding the present-day lithospheric state and how it might relate to the present-day seismicity distribution.
This dissertation seeks to constrain the Alpine lithospheric configuration through a fully 3D integrated modelling workflow that utilises multiple geophysical techniques and integrates all available data sources. The aim is to shed light on how lithospheric heterogeneity may influence the heterogeneous patterns of seismicity distribution observed within the region. This was accomplished through the generation of: (i) 3D seismically constrained structural and density models of the lithosphere, adjusted to match the observed gravity field; (ii) 3D models of the lithospheric steady-state thermal field, adjusted to match observed wellbore temperatures; and (iii) 3D rheological models of long-term lithospheric strength, with the results of each step used as input for the following steps.
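Step (iii) rests on the standard yield-strength-envelope construction: at each depth, strength is the minimum of a frictional (brittle) law and a thermally activated creep (ductile) law. A hedged one-dimensional sketch with illustrative parameter values (not the calibrated values of this work) looks as follows:

```python
import numpy as np

# Hedged 1D yield-strength-envelope sketch; all parameter values are
# illustrative placeholders, not the thesis' calibrated values.
z = np.linspace(0.0, 40e3, 400)            # depth [m]
rho, g, f = 2800.0, 9.81, 0.6              # density, gravity, friction
brittle = f * rho * g * z                  # frictional (Byerlee-type) strength [Pa]

# dislocation creep: sigma = (edot/A)**(1/n) * exp(Q/(n*R*T))
edot, A, n, Q, R = 1e-15, 1e-25, 3.0, 2.5e5, 8.314
T = 283.0 + 0.02 * z                       # linear geotherm, 20 K/km
ductile = (edot / A) ** (1.0 / n) * np.exp(Q / (n * R * T))

strength = np.minimum(brittle, ductile)    # the weaker mechanism controls
bdt = z[np.argmax(ductile < brittle)]      # brittle-ductile transition depth
print(f"BDT at ~{bdt / 1e3:.1f} km, T(BDT) = {283.0 + 0.02 * bdt:.0f} K")
```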
Results indicate that the highest strengths within the crust (~1 GPa) and upper mantle (>2 GPa) occur at temperatures characteristic of specific phase transitions (more felsic crust: 200-400 °C; more mafic crust and upper lithospheric mantle: ~600 °C), with almost all seismicity occurring in these regions. However, inherited lithospheric heterogeneity significantly modulates this pattern, with seismicity in the thinner and more mafic Adriatic crust (~22.5 km, 2800 kg m−3, 1.3E-06 W m−3) occurring up to higher temperatures (~600 °C) than in the thicker and more felsic European crust (~27.5 km, 2750 kg m−3, 1.3-2.6E-06 W m−3, ~450 °C). Correlations between seismicity in the orogen forelands and lithospheric strength also show different trends, reflecting the different tectonic settings. Events in the plate-boundary setting of the southern foreland correlate with the integrated lithospheric strength, occurring mainly in the weaker lithosphere surrounding the strong Adriatic indenter. Events in the intraplate setting of the northern foreland instead correlate with crustal strength, occurring mainly in the weaker and warmer crust beneath the Upper Rhine Graben.
The findings presented in this work therefore represent not only a state-of-the-art understanding of the lithospheric configuration beneath the Alps and their forelands, but also a significant improvement in our knowledge of the features that influence the occurrence of seismicity within the region. This highlights the importance of considering the lithospheric state when explaining observed patterns of deformation.
The ubiquitin-proteasome system (UPS) is a cellular cascade that ubiquitinates proteins in three enzymatic steps, targeting them to the 26S proteasome for proteolytic degradation. Several components of the UPS have been shown to be central to the regulation of defense responses during infections with phytopathogenic bacteria. Upon recognition of the pathogen, local defense is induced, which also primes the plant to acquire systemic resistance (SAR) for enhanced immune responses upon challenging infections. Here, ubiquitinated proteins were shown to accumulate locally and systemically during infections with Psm and after treatment with the SAR-inducing metabolites salicylic acid (SA) and pipecolic acid (Pip). The role of the 26S proteasome in local defense has been described in several studies, but its potential role during SAR remains elusive and was therefore investigated in this project by characterizing the Arabidopsis proteasome mutants rpt2a-2 and rpn12a-1 during priming and infections with Pseudomonas. Bacterial replication assays reveal decreased basal and systemic immunity in both mutants, which was verified at the molecular level by the impaired activation of defense and SAR genes. rpt2a-2 and rpn12a-1 accumulate wild-type-like levels of camalexin but less SA. Exogenous SA treatment restores local PR gene expression but does not rescue the SAR phenotype. An RNAseq experiment comparing Col-0 and rpt2a-2 reveals weak or absent induction of defense genes in the proteasome mutant during priming. Thus, a functional 26S proteasome was found to be required for the induction of SAR, while compensatory mechanisms can still be initiated.
E3 ubiquitin ligases conduct the last step of substrate ubiquitination and thereby confer specificity on proteasomal protein turnover. Using RNAseq, 11 E3 ligases were found to be differentially expressed during priming in Col-0, of which plant U-box 54 (PUB54) and ariadne 12 (ARI12) were further investigated to gain a deeper understanding of their potential role during priming.
PUB54 was shown to be expressed during priming and/or triggering with virulent Pseudomonas. pub54-I and pub54-II mutants display local and systemic defense comparable to Col-0. The heavy-metal-associated protein 35 (HMP35) was identified as a potential substrate of PUB54 in yeast, an interaction that was verified in vitro and in vivo. PUB54 was shown to be an active E3 ligase exhibiting auto-ubiquitination activity and performing ubiquitination of HMP35. Proteasomal turnover of HMP35 was observed, indicating that PUB54 targets HMP35 for ubiquitination and subsequent proteasomal degradation. Furthermore, hmp35-I shows increased resistance in bacterial replication assays. Thus, HMP35 is potentially a negative regulator of defense that is targeted and ubiquitinated by PUB54 to regulate downstream defense signaling. ARI12 is transcriptionally activated during priming or triggering and hyperinduced during combined priming and triggering. Its gene expression is not inducible by SA and is dampened in npr1 and fmo1 mutants, thus depending on functional SA and Pip pathways, respectively. ARI12 accumulates systemically after priming with SA, Pip, or Pseudomonas. ari12 mutants are not altered in resistance, but stable overexpression leads to increased resistance in local and systemic tissue. During priming and triggering, unbalanced ARI12 levels (i.e. knock-out or overexpression) lead to enhanced FMO1 activation, indicating a role of ARI12 in Pip-mediated SAR. ARI12 was shown to be an active E3 ligase with auto-ubiquitination activity, likely required for its activation, with an identified ubiquitination site at K474. Potential substrates identified by mass spectrometry have not yet been verified by additional experiments but suggest an involvement of ARI12 in the regulation of ROS, in turn regulating Pip-dependent SAR pathways.
Thus, the data from this project provide strong indications of an involvement of the 26S proteasome in SAR and identify the two hitherto barely described E3 ubiquitin ligases PUB54 and ARI12 as novel components of plant defense.
Background: A growing body of research has documented negative effects of sexualization in the media on individuals' self-objectification. This research is predominantly built on studies examining traditional media, such as magazines and television, and young female samples. Furthermore, longitudinal studies are scarce, and studies examining mediators of the relationship are missing. The first aim of the present PhD thesis was to investigate the relations between the use of sexualized interactive media and social media and self-objectification. The second aim was to examine the presumed processes within understudied samples, such as males and females beyond college age, thus investigating the moderating roles of age and gender. The third aim was to shed light on possible mediators of the relation between sexualized media and self-objectification.
Method: The research aims were addressed within the scope of four studies. In an experiment, women's self-objectification and body satisfaction were measured after playing a video game with a sexualized vs. a nonsexualized character that was either personalized or generic. The second study investigated the cross-sectional link between sexualized television use and self-objectification and the consideration of cosmetic surgery in a sample of women across a broad age spectrum, examining the role of age in these relations. The third study examined the cross-sectional associations between sexualized male and female images on Instagram and self-objectification in a sample of male and female adolescents. Using a two-wave longitudinal design, the fourth study examined sexualized video game and Instagram use as predictors of adolescents' self-objectification. Path models were conceptualized for the second, third, and fourth study, in which media use predicted body surveillance via appearance comparisons (Study 4), thin-ideal internalization (Studies 2, 3, 4), muscular-ideal internalization (Studies 3, 4), and valuing appearance (all studies).
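The logic of such a path model reduces to estimating an indirect effect as the product of two regression paths. A hedged sketch on simulated data (variable names and effect sizes invented, not the thesis data) illustrates the computation:

```python
import numpy as np

# Minimal mediation sketch (hypothetical data, not the thesis data):
# media use -> internalization (mediator) -> body surveillance.
rng = np.random.default_rng(0)
n = 300
media_use = rng.normal(size=n)
internalization = 0.4 * media_use + rng.normal(size=n)            # a-path
surveillance = 0.5 * internalization + 0.1 * media_use + rng.normal(size=n)

def ols_slopes(y, X):
    # ordinary least squares with intercept; returns slope coefficients
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_slopes(internalization, media_use)[0]                     # a-path
b, c_prime = ols_slopes(surveillance,
                        np.column_stack([internalization, media_use]))
print(f"indirect effect a*b = {a * b:.2f}, direct effect c' = {c_prime:.2f}")
```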
Results: The experimental study revealed no effect of sexualized video game characters on women's self-objectification and body satisfaction, and no moderating effect of personalization emerged. Sexualized television use was associated with the consideration of cosmetic surgery via body surveillance and valuing appearance for women of all ages in Study 2, while no moderating effect of age was found. Study 3 revealed that seeing sexualized male images on Instagram was indirectly associated with higher body surveillance via muscular-ideal internalization for boys and girls. Sexualized female images were indirectly linked to higher body surveillance via thin-ideal internalization and valuing appearance over competence only for girls. The longitudinal analysis of Study 4 showed no moderating effect of gender: for boys and girls, sexualized video game use at T1 predicted body surveillance at T2 via appearance comparisons, thin-ideal internalization, and valuing appearance over competence. Furthermore, the use of sexualized Instagram images at T1 predicted body surveillance at T2 via valuing appearance.
Conclusion: The findings show that sexualization in the media is linked to self-objectification across a variety of media formats and within diverse groups of people. While the longitudinal study indicates that sexualized media predict self-objectification over time, the experimental null findings warrant caution regarding this temporal order. The results demonstrate that several mediating variables may be involved in this link. Possible implications for research and practice, such as intervention programs and policy-making, are discussed.
Media artists have been struggling for financial survival ever since media art came into being. The non-material value of the artwork, the movement's provocative attitude towards the traditional art world, and its originally anti-capitalist mindset make it particularly difficult to provide a constructive solution. However, a cultural entrepreneurial approach can be used to build a framework for finding a balance between culture and business while ensuring that the cultural mission remains the top priority.
We investigate models for incremental binary classification, an example of supervised online learning. Our starting point is a model for human and machine learning suggested by E. M. Gold.
In the first part, we consider incremental learning algorithms that use all of the available binary labeled training data to compute the current hypothesis. For this model, we observe that the algorithm can be assumed to always terminate and that the distribution of the training data does not influence learnability. This remains true if we pose additional delayable requirements, i.e., requirements that remain valid when a hypothesis output is delayed in time. Additionally, we consider the non-delayable requirement of consistent learning. Our corresponding results underpin the claim that delayability is a suitable structural property for describing and collectively investigating a major part of learning success criteria. Our first theorem states the pairwise implications or incomparabilities between an established collection of delayable learning success criteria, the so-called complete map. In particular, the learning algorithm can be assumed to change its last hypothesis only if it is inconsistent with the current training data; such a learning behaviour is called conservative.
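For intuition, the following toy sketch (an invented concept class, not from the thesis) shows a conservative learner from binary labeled data: it keeps its hypothesis until the data force a change:

```python
# Toy sketch of a conservative Gold-style learner from binary labeled data.
# Hypothetical concept class: initial segments {0, ..., k} of the naturals.
def consistent(k, data):
    # hypothesis {0,...,k} must agree with every labeled example seen so far
    return all((x <= k) == label for x, label in data)

def conservative_learner(stream):
    data, k = [], 0                  # start with hypothesis {0}
    for datum in stream:
        data.append(datum)
        if not consistent(k, data):  # conservative: change only when forced
            k = max(x for x, label in data if label)
        yield k                      # current hypothesis index

# informant for the concept {0,...,3}: pairs (x, x in concept)
stream = [(5, False), (2, True), (3, True), (7, False), (1, True)]
print(list(conservative_learner(stream)))   # [0, 2, 3, 3, 3] -> converges to 3
```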
By referring to learning functions, we obtain a hierarchy of approximative learning success criteria, in which we allow the hypothesized concept to differ from the concept to be learned by an increasing finite number of errors. Moreover, we observe a duality depending on whether vacillations between infinitely many different correct hypotheses are still considered successful learning behaviour. This contrasts with the vacillatory hierarchy for learning from solely positive information.
We also consider a hypothesis space located between the two most common hypothesis-space types in the relevant literature and provide the complete map for it.
In the second part, we model more efficient learning algorithms, which update their hypothesis based on the current datum without direct access to past training data. We focus on iterative (hypothesis-based) and BMS (state-based) learning algorithms. Iterative learning algorithms use the last hypothesis and the current datum to infer the new hypothesis.
Past research analyzed, for example, the above-mentioned pairwise relations between delayable learning success criteria when learning from purely positive training data. We compare delayable learning success criteria with respect to iterative learning algorithms, for learning from either exclusively positive or binary labeled data. The existence of concept classes that can be learned by an iterative learning algorithm but not in a conservative way had already been observed, showing that conservativeness is restrictive. An additional requirement, arising from cognitive science research, concerns U-shaped learning, in which the learning algorithm temporarily diverges from a correct hypothesis. We show that forbidding U-shapes also restricts iterative learners from binary labeled data.
To compute the next hypothesis, BMS learning algorithms refer to the currently observed datum and the current state of the learning algorithm. For learning algorithms equipped with an infinite amount of states, we provide the complete map. A learning success criterion is semantic if it still holds when the learning algorithm outputs different parameters standing for the same classifier. Syntactic (non-semantic) learning success criteria, for example conservativeness and syntactic non-U-shapedness, restrict BMS learning algorithms. To prove the equivalence of the syntactic requirements, we refer to witness-based learning processes, in which every change of the hypothesis is justified by a witness from the training data that is classified correctly from then on. Moreover, for every semantic delayable learning requirement, iterative and BMS learning algorithms are equivalent. In case the considered learning success criterion incorporates syntactic non-U-shapedness, BMS learning algorithms can learn more concept classes than iterative learning algorithms.
The proofs are combinatorial, inspired by the investigation of formal languages, or employ results from computability theory, such as infinite recursion theorems (fixed-point theorems).
During sentence reading, the eyes quickly jump from word to word to sample visual information with the high acuity of the fovea. Lexical properties of the currently fixated word are known to affect the duration of the fixation, reflecting an interaction of word processing with oculomotor planning. While low-level properties of words in the parafovea can likewise affect the current fixation duration, results concerning the influence of lexical properties have been ambiguous (Drieghe, Rayner, & Pollatsek, 2008; Kliegl, Nuthmann, & Engbert, 2006). Experimental investigations of such lexical parafoveal-on-foveal effects using the boundary paradigm have instead shown that lexical properties of parafoveal previews affect fixation durations on the upcoming target words (Risse & Kliegl, 2014). However, these results were potentially confounded with effects of preview validity.
The notion of parafoveal processing of lexical information challenges extant models of eye movements during reading. Models containing serial word processing assumptions have trouble explaining such effects, as they usually couple successful word processing to saccade planning, resulting in skipping of the parafoveal word. Although models with parallel word processing are less restricted, in the SWIFT model (Engbert, Longtin, & Kliegl, 2002) only processing of the foveal word can directly influence the saccade latency.
Here we combine the results of a boundary experiment (Chapter 2) with a predictive modeling approach using the SWIFT model, in which we explore mechanisms of parafoveal inhibition in a simulation study (Chapter 4). We construct a likelihood function for the SWIFT model (Chapter 3) and utilize the experimental data in a Bayesian approach to parameter estimation (Chapters 3 and 4).
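The estimation principle can be illustrated independently of SWIFT. The following hedged sketch (a deliberately simplified toy model, not the thesis' likelihood) fits the mean of Gamma-distributed fixation durations with a random-walk Metropolis-Hastings sampler:

```python
import numpy as np

# Toy Bayesian parameter estimation (not the SWIFT likelihood): fixation
# durations modelled as Gamma(shape=9, scale=mu/9); we sample mu with a
# random-walk Metropolis-Hastings scheme under a flat prior.
rng = np.random.default_rng(2)
durations = rng.gamma(9.0, 200.0 / 9.0, size=500)   # synthetic data, mu = 200 ms

def log_lik(mu):
    if mu <= 0:
        return -np.inf
    shape, scale = 9.0, mu / 9.0
    # Gamma log-likelihood up to an additive constant (shape is fixed)
    return np.sum((shape - 1) * np.log(durations) - durations / scale
                  - shape * np.log(scale))

mu, chain = 150.0, []
for _ in range(5000):
    proposal = mu + rng.normal(0.0, 5.0)
    if np.log(rng.random()) < log_lik(proposal) - log_lik(mu):
        mu = proposal                                 # accept the proposal
    chain.append(mu)

print("posterior mean of mu:", round(float(np.mean(chain[1000:])), 1))
```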
The experimental results show a substantial effect of parafoveal preview frequency on fixation durations on the target word, which can be clearly distinguished from the effect of preview validity. Using the participants' eye movement data, we demonstrate the feasibility of the Bayesian approach, even for a small set of estimated parameters, by comparing summary statistics of experimental and simulated data. Finally, we show that the SWIFT model can account for the lexical preview effects when a mechanism for parafoveal inhibition is added. The effects of preview validity were modeled best when processing-dependent saccade cancellation was added for invalid trials. In the simulation study, only the control condition of the experiment was used for parameter estimation, allowing for cross-validation, while the number of free parameters was simultaneously increased. High correlations of summary statistics demonstrate the capabilities of the parameter estimation approach. Taken together, the results advocate a better integration of experimental data into computational modeling via parameter estimation.
To achieve a sustainable energy economy, it is necessary to turn away from the combustion of fossil fuels as a means of energy production and switch to renewable sources. However, the temporal availability of renewables does not match societal consumption needs, meaning that renewably generated energy must be stored at times of peak generation and delivered during periods of peak consumption. Electrochemical energy storage (EES) in general is well suited for this due to its infrastructural independence and scalability. The lithium-ion battery (LIB) takes a special place among EES systems due to its energy density and efficiency, but the scarcity and uneven geological occurrence of minerals and ores vital for many cell components, and hence the high and fluctuating costs, will decelerate its further spread.
The sodium-ion battery (SIB) is a promising successor to LIB technology, as the fundamental setup and cell chemistry are similar in the two systems. Yet the most widespread negative electrode material in LIBs, graphite, cannot be used in SIBs, as it cannot store sufficient amounts of sodium at reasonable potentials. Hence, another carbon allotrope, non-graphitizing or hard carbon (HC), is used in SIBs. This material consists of turbostratically disordered, curved graphene layers, forming regions of graphitic stacking and zones of deviating layers, so-called internal or closed pores.
The structural features of HC have a substantial impact on the charge-potential curve exhibited by the carbon when it is used as the negative electrode in an SIB. At defects and edges, an adsorption-like mechanism of sodium storage prevails, causing a sloping voltage curve that is ill-suited for practical application in SIBs. Immediately after the sloping region, a constant voltage plateau of relatively high capacity is found, which recent research attributes to the deposition of quasimetallic sodium into the closed pores of the HC.
Literature on the general mechanism of sodium storage in HCs, and especially on the role of the closed pores, is abundant, but research on the influence of the pore geometry and the chemical nature of the HC on the low-potential sodium deposition is still at an early stage. The scope of this thesis is therefore to investigate these relationships using suitable synthetic and characterization methods. Materials of precisely known morphology, porosity, and chemical structure, in clear distinction to commonly obtained ones, are prepared, and their impact on the sodium storage characteristics is observed. Electrochemical impedance spectroscopy, in combination with distribution of relaxation times analysis, is further established as a technique to study the sodium storage process, in addition to classical direct-current techniques, and an equivalent circuit model is proposed to qualitatively describe the HC sodiation mechanism based on the recorded data. The knowledge obtained is used to develop a method for preparing closed-porous and non-porous materials from open-porous ones, not only proving the necessity of closed pores for efficient sodium storage, but also providing a method for effective pore closure and hence for increasing the sodium storage capacity and efficiency of carbon materials.
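As a hedged illustration of the equivalent-circuit idea (a generic R0 + (R1 parallel to a constant-phase element) building block, not the specific model proposed in the thesis), the complex impedance of such a circuit can be computed directly:

```python
import numpy as np

# Illustrative sketch: impedance spectrum of a simple R0 + (R1 || CPE)
# equivalent circuit, a common building block when fitting EIS spectra
# prior to a DRT analysis. All parameter values are invented.
f = np.logspace(-2, 5, 200)             # frequency range [Hz]
omega = 2 * np.pi * f

R0, R1 = 5.0, 20.0                      # ohmic and charge-transfer resistance [Ohm]
Q, alpha = 1e-3, 0.9                    # CPE parameters (alpha = 1 -> ideal capacitor)
Z_cpe = 1.0 / (Q * (1j * omega) ** alpha)
Z = R0 + (R1 * Z_cpe) / (R1 + Z_cpe)    # series element plus parallel combination

# Nyquist representation: Re(Z) against -Im(Z)
print(Z.real[:3].round(2), (-Z.imag[:3]).round(2))
```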
The insights obtained and the methods developed within this work hence not only contribute to a better understanding of the sodium storage mechanism in the carbon materials of SIBs, but can also serve as guidance for the design of efficient electrode materials.
The construction of educational buildings is a topic of current debates in urban development and urban planning as well as in pedagogy. Many experts investigate questions of good and successful school construction in dedicated studies. Society's demands on educational buildings change when all-day school formats are to provide not only instruction but also leisure-time care for the pupils. At the same time, the school is expected to be a place of encounter and communication, of social learning and cooperation. Schools are in motion in many respects. To keep pace with these changes and demands, the construction of educational buildings faces continual challenges: on the one hand, lighthouse projects are being created; on the other hand, educational buildings are still being built that fail to meet current requirements and future developments.
This is where the present work comes in. It does not propose new norms for good school construction but asks, in a qualitative empirical study, about the pedagogical conceptions of those involved in the construction of educational buildings and about typical developments in the planning process. The case study is based on the documentary method as its analytical procedure. The objects of the investigation were two educational buildings within a large construction project. In the course of the analysis, the project structures and the interpretive patterns of the interviewed actors were examined, culminating in a combined presentation of the results in the form of an action-structure framework.
Insights are provided into the interrelations between the actions of those involved and the project structures, and into how they influence each other or change over the course of the process. The analysis shows that transfer problems between research and practice persist. Financial, temporal, and architectural structures carry particular weight in planning decisions; only a few pedagogical conceptions and interpretive patterns are able to come into play.
Grant-funded start-up support services were an important element of university start-up support in the state of Brandenburg during the EU funding periods 2007-2013 and 2014-2020. Owing to the state's positive economic development, however, the funding volume decreased steadily over the same period, and a further reduction is already certain for the EU funding period 2021-2027. As a consequence, without adjustments to the established funding structures, start-up support services at Brandenburg universities will be further reduced or will erode. This thesis therefore addresses, among other things, the question of how a theoretical reference model for grant-funded university start-up counselling can be designed to cope with reduced funding rates while maintaining the diversity of services offered.
To answer this question, the funding project BIEM Startup Navigator serves as the object of investigation. This start-up counselling project was carried out at six Brandenburg universities from 2010 to 2014. Using the models and premises of principal-agent theory, a theoretical framework is first established as the basis for the empirical investigation. Principal-agent theory is used to identify the organisations, individuals, and institutions involved, and its main problem areas and solution approaches are discussed with respect to the investigation of the BIEM Startup Navigator.
In the course of the investigation, the implementation concepts of the funding project at the six university locations and the data of 610 participants and 288 start-ups are analysed in order to identify and describe logical relationships and interdependencies. Theoretical assumptions concerning project effectiveness and efficiency, cost distribution, and conceptual design are formulated as 24 working hypotheses and applied to the investigation. The hypotheses are verified or falsified on the basis of the combined findings from the literature review and the results of the empirical investigation.
In the course of the thesis, the agency costs described by principal-agent theory are demonstrated using the example of the BIEM Startup Navigator, and ex-post inefficiencies in the screening and signalling processes carried out are identified.
The theoretical reference model for grant-funded start-up counselling at Brandenburg universities developed in this thesis is intended to make it possible to cope with decreasing EU funding without a simultaneous reduction of start-up support services at the universities. To this end, the model shows how the results of the empirical investigation can be used to reduce the agency costs of grant-funded start-up counselling.
The present dissertation is an investigative research study dealing with the dynamically changing phenomenon of HipHop. The author explains the enduring attractiveness of the cultural phenomenon of HipHop and seeks to account for its constant reproducibility. He therefore begins with a historical discourse analysis of HipHop culture, analysing the forms, protagonists, and discourses of HipHop in order to understand it better. By working out HipHop's genuine property of multiple codability, common explanatory patterns from academia and the media are relativised and criticised. In his study, the author combines literature from cultural studies and educational science with various current and historical representations and images. Above all, image-based self-presentations of HipHop artists and personal testimonies from narrative interviews that he himself conducted with various HipHop artists in Germany are evaluated. Alongside the narrative interviews, image interpretation according to Bohnsack serves as the main source for forming the thesis of multiple codability. Two images of the HipHop artists Lady Bitch Ray and Kollegah are interpreted following Bohnsack (2014), showing how HipHop is staged and produced not only lyrically and sonically but also visually. From this it is concluded that HipHop makes it possible to represent and convey contrary viewpoints while simultaneously employing typical cultural practices such as boasting. HipHop's constant openness becomes evident in practices such as sampling or the battle, and the author explains that these techniques produce the generative property of multiple codability. He thus advocates a kind of construction-kit theory, which holds that, in principle, anyone can draw on the HipHop kit according to preference, interest, and affinity. The variety of opinions on HipHop that the author obtained by coding the narrative interviews illustrates this thesis and makes clear that HipHop is more than a fashion. Owing to its inherent openness, HipHop has the fundamental capacity to continually reinvent itself and thus to grow in popularity. The present work thereby extends the ever-growing body of research in HipHop studies and sets important accents for further research and for making HipHop more comprehensible.
Despite their great importance for innovation policy, non-university research institutions (AUF) have rarely been the subject of empirical studies. None of the existing works focuses on the collaboration of scientists in research teams, although scientific collaboration is a largely unexplored field. This is surprising, since innovative and complex tasks, such as those in research, require both the creative potential of individuals and well-functioning cooperation among them. Collaboration among scientists at AUF takes place in a competitive environment. On the one hand, the AUF compete with one another at the organisational level for research funds and scientific personnel. On the other hand, the competitive acquisition of third-party funding is essential for scientists in order to produce achievements, measured by high-ranking publications and third-party funding quotas, for their own careers. An increasing share of third-party funding in the institutions also affects personnel policy and the number of fixed-term employment contracts. At the same time, research funding is frequently tied to collaborations among scientists, and studies show that publications and research results are predominantly the product of several persons. This tension between collaboration and competition is intensified by the lack of opportunities for early-career researchers to remain in science. Even though the federal government is responding to these challenges, each individual must find his or her own way between collaboration and competition.
The objective of this thesis is to answer the following research questions:
1. How can natural science research teams at AUF be characterised?
2. How does the individual researcher act in the field of tension between cooperation and competition?
3. Which potentials and obstacles for the successful work of research teams at AUF can be identified at the individual, team, and environmental levels?
To answer these research questions, an empirical investigation in a mixed-methods design was conducted, consisting of a Germany-wide online survey of 574 natural scientists at AUF and qualitative interviews with 122 team members from 20 natural science research teams at AUF.
The results show that the teams can better be described as working groups, since, especially in basic research, there is no common goal but rather a common thematic framework within which the researchers pursue their individual goals. Teamwork is predominantly described as positive and cooperative and is characterised above all by mutual support with problems rather than by a shared thematic process of scientific discovery. The latter takes place in small subgroups within the working group and, above all, in close coordination with the team leader (TL). Organisational conditions, such as fixed-term contracts and the bottleneck of permanent positions, are cited as factors intensifying competition.
The TL occupies the central role in the team, bears the scientific, financial, and personnel responsibility, and must meet the demands of the organisation. Doctoral candidates concentrate almost exclusively on their qualification work. For postdocs, a field of tension is apparent, as they pursue their own projects and goals alongside the demands of the TL. The TL's gatekeeper function is strengthened by their role in passing on career-relevant information within the team, e.g. on upcoming conferences. They hold the important contacts, ensure the team's networking, and are responsible for maintaining the network. Early-career researchers rely heavily on the TL's support in their tasks and in career-relevant matters. Non-scientific staff should be given greater consideration, both in their function within the teams and in the organisation as a whole: they are the central contact persons for the scientific staff and ensure continuity in the storage and transfer of knowledge. For the organisations, the task is to create supportive framework, working, and task conditions for the TL and to support early-career researchers in taking on early responsibility for scientific and career-relevant tasks. This requires improved personnel development concepts and offerings. In addition, opportunities for cooperation within the institution and between groups should be created, e.g. through open spaces and networking opportunities, and innovative working environments should be promoted in order to establish new forms of an innovation-friendly scientific culture.
Detecting and categorizing particular entities in the environment are important visual tasks that humans have had to solve at various points in our evolutionary history. The question arises whether characteristics of entities that were of ecological significance for humans play a particular role during the development of visual categorization.
The current project addressed this question by investigating the effects of developing visual abilities, visual properties and ecological significance on categorization early in life. Our stimuli were monochromatic photographs of structure-like assemblies and surfaces taken from three categories: vegetation, non-living natural elements, and artifacts. A set of computational and rated visual properties was assessed for these stimuli. Three empirical studies applied coherent research concepts and methods in young children and adults, comprising (a) two card-sorting tasks with preschool children (age: 4.1-6.1 years) and adults (age: 18-50 years) which assessed classification and similarity judgments, and (b) a gaze-contingent eye-tracking search task which investigated the impact of visual properties and category membership on 8-month-olds' ability to segregate visual structure. Because eye-tracking with infants still poses challenges, a methodological study (c) assessed the effect of infant eye-tracking procedures on data quality with 8- to 12-month-old infants and adults.
In the categorization tasks, we found that category membership and visual properties impacted the performance of all participant groups. Sensitivity to the respective categories varied between tasks and across age groups. For example, artifact images hindered infants' visual search but were classified best by adults, whereas sensitivity to vegetation was highest during similarity judgments. Overall, preschool children relied less on visual properties than adults, but some properties (e.g., rated depth, shading) were drawn upon similarly strongly. In children and infants, depth predicted task performance more strongly than shape-related properties did. Moreover, children and infants were sensitive to variations in the complexity of low-level visual statistics. These results suggest that the classification of visual structures and the attention to particular visual properties are affected by the functional or ecological significance these categories and properties may have for each of the respective age groups.
Based on this, the project highlights the importance of further developmental research on visual categorization with naturalistic, structure-like stimuli. As intended with the current work, this would allow establishing important links between developmental and adult research.
Botulinum neurotoxin (BoNT) is produced by the anaerobic bacterium Clostridium botulinum. It is one of the most potent toxins found in nature and can enter motor neurons (MN) to cleave proteins necessary for neurotransmission, resulting in flaccid paralysis. The toxin has applications in both traditional and esthetic medicine. Since BoNT activity varies between batches despite identical protein concentrations, the activity of each lot must be assessed. The gold standard method is the mouse lethality assay, in which mice are injected with a BoNT dilution series to determine the dose at which half of the animals suffer death from peripheral asphyxia. Ethical concerns surrounding the use of animals in toxicity testing necessitate the creation of alternative model systems to measure the potency of BoNT.
Prerequisites of a successful model are that it is human-specific, that it monitors the complete toxic pathway of BoNT, and that it is highly sensitive, at least in the range of the mouse lethality assay. One model system was developed by our group, in which human SIMA neuroblastoma cells were genetically modified to express a reporter protein (GLuc), which is packaged into neurosecretory vesicles and which, upon cellular depolarization, can be released, or inhibited by BoNT, simultaneously with neurotransmitters. This assay has great potential but carries the inherent disadvantages that the GLuc sequence was randomly inserted into the genome and that the tumor cells have only limited sensitivity and specificity to BoNT. This project aims to remedy these deficits: induced pluripotent stem cells (iPSCs) were genetically modified by the CRISPR/Cas9 method to insert the GLuc sequence into the AAVS1 genomic safe harbor locus, precluding genetic disruption through non-specific integrations. Furthermore, GLuc was modified to associate with signal peptides that direct it to the lumen of both large dense core vesicles (LDCV), which transport neuropeptides, and synaptic vesicles (SV), which package neurotransmitters. Finally, the modified iPSCs were differentiated into motor neurons (MNs), the true physiological target of BoNT and hypothetically the most sensitive and specific cells available for the MoN-Light BoNT assay.
iPSCs were transfected to incorporate one of three constructs directing GLuc into LDCVs, one construct directing GLuc into SVs, or one "no-tag" GLuc control construct. The LDCV constructs fused GLuc with the signal peptides of proopiomelanocortin (hPOMC-GLuc), chromogranin A (CgA-GLuc), and secretogranin II (SgII-GLuc), all proteins found in the LDCV lumen. The SV construct comprises a VAMP2-GLuc fusion sequence, exploiting the SV membrane-associated protein synaptobrevin (VAMP2). The no-tag construct expresses GLuc non-specifically throughout the cell and was created as a comparison for the localization of vesicle-directed GLuc.
The clones were characterized to ensure that the GLuc sequence was incorporated only into the AAVS1 safe harbor locus and that the signal peptides directed GLuc to the correct vesicles. The accurate insertion of GLuc was confirmed by PCR with primers flanking the AAVS1 safe harbor locus, capable of simultaneously amplifying wildtype and modified alleles. The PCR amplicons, along with an insert-specific amplicon from candidate clones, were Sanger sequenced to confirm the correct genomic region and the sequence of the inserted DNA. Off-target integrations were analyzed with the newly developed dc-qcnPCR method, whereby the insert DNA is quantified by qPCR against autosomal and sex-chromosome-encoded genes. While the majority of clones had off-target inserts, at least one on-target clone was identified for each construct.
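The thesis does not spell out the dc-qcnPCR arithmetic here, but qPCR copy-number comparisons of this kind typically rest on the Ct difference between the insert and a reference gene of known copy number. A minimal sketch of that standard calculation, with illustrative Ct values and an assumed diploid reference, might look as follows:

```python
# Hypothetical sketch of the relative copy-number arithmetic a qPCR-based
# comparison like dc-qcnPCR can build on; Ct values are illustrative.

def copy_number(ct_insert: float, ct_reference: float,
                reference_copies: int = 2, efficiency: float = 2.0) -> float:
    """Estimate insert copies per genome from a Ct difference.

    Assumes equal amplification efficiency for both targets and a diploid
    autosomal reference gene present at two copies per genome.
    """
    delta_ct = ct_insert - ct_reference
    return reference_copies * efficiency ** (-delta_ct)

# An insert amplifying one cycle later than the two-copy reference
# corresponds to roughly one copy per genome:
print(round(copy_number(ct_insert=24.0, ct_reference=23.0), 2))  # 1.0
```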
Finally, immunofluorescence was used to localize GLuc in the selected clones. In iPSCs, the vesicle-directed GLuc should travel through the Golgi apparatus along the neurosecretory pathway, while the no-tag GLuc should not follow this pathway. Initial analyses excluded the CgA-GLuc and SgII-GLuc clones due to poor-quality protein visualization. The colocalization of GLuc with the Golgi was analyzed by confocal microscopy and quantified. GLuc was strongly colocalized with the Golgi in the hPOMC-GLuc clone (r = 0.85±0.09), moderately in the VAMP2-GLuc clone (r = 0.65±0.01), and, as expected, only weakly in the no-tag GLuc clone (r = 0.44±0.10). Confocal microscopy of differentiated MNs was used to analyze the colocalization of GLuc with proteins associated with LDCVs and SVs: SgII in the hPOMC-GLuc clone (r = 0.85±0.08) and synaptophysin in the VAMP2-GLuc clone (r = 0.65±0.07). GLuc was also expressed in the same cells as the MN-associated protein Islet1.
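The colocalization coefficients reported above are Pearson correlations of pixel intensities across two fluorescence channels. A minimal sketch of this measure, using synthetic arrays in place of the actual micrographs, could look like this:

```python
# Pearson colocalization of two image channels; the arrays and background
# threshold are synthetic stand-ins for the confocal data.
import numpy as np

def pearson_colocalization(ch1: np.ndarray, ch2: np.ndarray,
                           threshold: float = 0.0) -> float:
    """Pearson r over pixels above background in either channel."""
    mask = (ch1 > threshold) | (ch2 > threshold)
    return float(np.corrcoef(ch1[mask].ravel(), ch2[mask].ravel())[0, 1])

rng = np.random.default_rng(0)
golgi = rng.random((256, 256))                      # marker channel
gluc = 0.9 * golgi + 0.1 * rng.random((256, 256))   # strongly colocalized
print(f"r = {pearson_colocalization(gluc, golgi):.2f}")  # close to 1
```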
A significant portion of GLuc was found in the correct cell type and compartment. However, in the MoN-Light BoNT assay, the hPOMC-GLuc clone could not be provoked to reliably release GLuc upon cellular depolarization; the depolarization protocol for hPOMC-GLuc must be further optimized to produce reliable and specific release of GLuc upon exposure to a stimulus. The VAMP2-GLuc clone, on the other hand, could be provoked to release GLuc upon exposure to the muscarinic and nicotinic agonist carbachol. Furthermore, upon simultaneous exposure to the calcium chelator EGTA, the carbachol-provoked release of GLuc could be significantly repressed, indicating that the detected GLuc was likely associated with vesicular fusion at the presynaptic terminal. The application of the VAMP2-GLuc clone in the MoN-Light BoNT assay must still be verified, but the results thus far indicate that this clone could be appropriate for BoNT toxicity assessment.
Due to global climate change, providing food security for an increasing world population is a major challenge. Abiotic stressors in particular have a strong negative effect on crop yield. To develop climate-adapted crops, a comprehensive understanding of the molecular alterations in the response to varying levels of environmental stress is required. High-throughput or 'omics' technologies can help to identify key regulators and pathways of abiotic stress responses. Beyond obtaining omics data, tools and statistical analyses need to be designed and evaluated to obtain reliable biological results.
To address these issues, I have conducted three different studies covering two omics technologies. In the first study, I used transcriptomic data from two polymorphic Arabidopsis thaliana accessions, namely Col-0 and N14, to evaluate seven computational tools for their ability to map and quantify Illumina single-end reads. Between 92% and 99% of the reads were mapped against the reference sequence. The raw count distributions obtained from the different tools were highly correlated. Performing a differential gene expression analysis between plants exposed to 20 °C or 4 °C (cold acclimation), a large pairwise overlap between the mappers was obtained. In the second study, I obtained transcript data from ten different Oryza sativa (rice) cultivars by PacBio isoform sequencing, which can capture full-length transcripts. De novo reference transcriptomes were reconstructed, resulting in 38,900 to 54,500 high-quality isoforms per cultivar. Isoforms were collapsed to reduce sequence redundancy and evaluated, e.g. for protein completeness (BUSCO), transcript length, and the number of unique transcripts per gene locus. For the heat- and drought-tolerant aus cultivar N22, I identified around 650 unique and novel transcripts, of which 56 were significantly differentially expressed in developing seeds during combined drought and heat stress. In the last study, I measured and analyzed changes in the metabolite profiles of eight rice cultivars exposed to high night temperature (HNT) stress and grown during the dry and the wet season in the field in the Philippines. Season-specific changes in metabolite levels as well as in agronomic parameters were identified, and metabolic pathways causing yield decline under HNT conditions were suggested.
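To illustrate the mapper comparison in the first study, correlating the raw count distributions of two tools amounts to a rank correlation over shared genes; the counts below are synthetic stand-ins, not the Col-0/N14 data:

```python
# Sketch: how strongly do raw counts from two read-mapping tools agree?
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
counts_tool_a = rng.negative_binomial(n=5, p=0.05, size=20_000)
counts_tool_b = counts_tool_a + rng.poisson(2, size=20_000)  # near-identical

rho, _ = spearmanr(counts_tool_a, counts_tool_b)
print(f"Spearman rho between mappers: {rho:.3f}")  # highly correlated
```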
In conclusion, the comparison of mapper performance can help plant scientists decide on the right tool for their data. The de novo reconstruction of transcriptomes for rice cultivars without a genome sequence provides a targeted, cost-efficient approach to identify novel stress-responsive genes in any organism. With the metabolomics approach for HNT stress in rice, I identified stress- and season-specific metabolites which might be used as molecular markers for crop improvement in the future.
Polymeric films and coatings derived from semi-crystalline oligomers are of relevance for medical and pharmaceutical applications. In this context, the material surface is of particular importance, as it mediates the interaction with the biological system. Two-dimensional (2D) systems and ultrathin films are used to model this interface. However, conventional techniques for their preparation, such as spin coating or dip coating, have disadvantages, since the morphology and chain packing of the generated films can only be controlled to a limited extent, and adsorption on the substrate affects the behavior of the films. Detaching and transferring films prepared by such techniques requires additional sacrificial or supporting layers, and free-standing or self-supporting domains are usually of very limited lateral extension. The aim of this thesis is to study and modulate crystallization, melting, degradation and chemical reactions in ultrathin films of oligo(ε-caprolactone)s (OCL)s with different end-groups under ambient conditions. Here, oligomeric ultrathin films are assembled at the air-water interface using the Langmuir technique. The water surface allows lateral movement and aggregation of the oligomers, which, unlike solid substrates, enables dynamic physical and chemical interaction of the molecules. Parameters like surface pressure (π), temperature and mean molecular area (MMA) allow controlled assembly and manipulation of the oligomer molecules with the Langmuir technique. The π-MMA isotherms, Brewster angle microscopy (BAM), and interfacial infrared spectroscopy assist in detecting morphological and physicochemical changes in the film. Ultrathin films can easily be transferred to a solid silicon surface via the Langmuir-Schaefer (LS) method (horizontal substrate dipping). Here, the films transferred to silicon are investigated using atomic force microscopy (AFM) and optical microscopy and are compared to the films on the water surface.
The semi-crystalline morphology (lamellar thickness, crystal number density, and lateral crystal dimensions) is tuned by the chemical structure of the OCL end-groups (hydroxy or methacrylate) and by the crystallization temperature (Tc; 12 or 21 °C) or the MMA. Compression to a low MMA of ~2 Å² results in the formation of a highly crystalline film consisting of tightly packed single crystals. The preparation of tightly packed single crystals on a cm² scale is not possible by conventional techniques. Upon transfer to a solid surface, these films retain their crystalline morphology, whereas amorphous films undergo dewetting.
The melting temperature (Tm) of OCL single crystals at the water and the solid surface is found to be proportional to the inverse crystal thickness and is generally lower than the Tm of bulk PCL. The impact of the OCL end-groups on the melting behavior is most noticeable at the air-solid interface, where the methacrylate end-capped OCL (OCDME) melted at lower temperatures than the hydroxy end-capped OCL (OCDOL). Comparing the underlying substrates, melting/recrystallization of OCL ultrathin films occurs at lower temperatures at the air-water interface than at the air-solid interface, where recrystallization is not visible. Recrystallization at the air-water interface usually occurs at a higher temperature than the initial Tc.
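The reported proportionality of Tm to the inverse crystal thickness is the behavior commonly captured by the Gibbs-Thomson relation; a generic textbook form (not quoted from the thesis) reads:

```latex
% Gibbs-Thomson melting-point depression for a lamellar crystal of thickness l:
%   T_m^0       bulk equilibrium melting temperature
%   \sigma_e    end-surface free energy
%   \Delta h_f  melting enthalpy per unit volume
T_m(l) = T_m^0 \left( 1 - \frac{2\,\sigma_e}{\Delta h_f \, l} \right)
```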
Controlled degradation is crucial for the predictable performance of degradable polymeric biomaterials. Degradation of the ultrathin films was carried out under acidic (pH ~ 1) or enzymatic catalysis (lipase from Pseudomonas cepacia) on the water surface or on a silicon surface as transferred films. High crystallinity strongly reduces the hydrolytic but not the enzymatic degradation rate. As an effect of the end-groups, the methacrylate end-capped linear oligomer OCDME (~85 ± 2 % end-group functionalization) degrades hydrolytically faster than the hydroxy end-capped linear oligomer OCDOL (~95 ± 3 % end-group functionalization) at different temperatures. Differences in the acceleration of hydrolytic degradation of semi-crystalline films were observed upon complete melting, partial melting of the crystals, or heating to temperatures close to Tm. Films of densely packed single crystals are therefore suitable as barrier layers with thermally switchable degradation rates.
Chemical modification in ultrathin films is an intricate process applicable to connecting functionalized molecules, imparting stability or creating stimuli-sensitive cross-links. The reaction of end-groups was explored for transferred single crystals on a solid surface and for amorphous monolayers at the air-water interface. Bulky methacrylate end-groups are expelled to the crystal surface during chain-folded crystallization. The density of end-groups is inversely proportional to molecular weight and hence very pronounced for oligomers. The methacrylate end-groups at the crystal surface, present at high concentration, can be used for further chemical functionalization, as demonstrated by fluorescence microscopy after reaction with fluorescein dimethacrylate. The thermoswitching behavior (melting and recrystallization) of fluorescein-functionalized single crystals shows the temperature-dependent distribution of the chemically linked fluorescein moieties, which are accumulated on the crystal surfaces and homogeneously dispersed when the crystals are molten. In amorphous monolayers at the air-water interface, reversible cross-linking of hydroxy-terminated oligo(ε-caprolactone) monolayers with a dialdehyde (glyoxal) led to the formation of 2D networks. A pronounced contraction in area occurred for the 2D OCL films in dependence on surface pressure and time, indicating the reaction progress. Cross-linking inhibited crystallization and retarded enzymatic degradation of the OCL film. Lowering the subphase pH to ~2 led to cleavage of the covalent acetal cross-links. Beyond serving as model systems, these reversibly cross-linked films are applicable to drug delivery systems or as cell substrates modulating adhesion at biointerfaces.
This thesis presents the first systematic investigation of ethyl vinylsulfonate (1a), phenyl vinyl sulfone (1b) and N-benzyl-N-methylethenesulfonamide (1c) in the Fujiwara-Moritani reaction (also referred to as DHR). In this transition-metal-catalyzed reaction, a new C-C bond is formed by the twofold activation of C-H bonds, allowing an atom-economical construction of molecules since no salt by-products are formed. Acetanilides (2) were used as the aromatic reactant so that the catalyst-directing acetamide group (CDG) ensures regiospecific coupling. For the Pd-catalyzed DHR, an extensive optimization was performed, after which nine differently substituted 2 could be functionalized with 1a and seven differently substituted 2 with 1b. Since no reaction occurred with 1c, a switch was made to a Ru-catalyzed method for the DHR. With this method, 1c could be coupled with acetanilides, and the scope of the employed 2 could be extended to deactivating substituents.
The sulfalkenylated acetanilides were then examined in follow-up reactions. A reaction sequence consisting of deacetylation, diazotization and a coupling reaction was used to convert the acetamide group into a leaving group, which was then coupled in a Matsuda-Heck reaction. This method afforded several 1,2-dialkenylbenzenes and allowed the CDG to be used a second time. Besides converting the CDG into a leaving group, it could also be integrated into the synthesis of various heterocycles. For this purpose, a 1,3-cycloaddition of deprotonated tosylmethyl isocyanide onto the electron-poor sulfalkenyl group first gave pyrroles. Subsequent coupling of the pyrrole function with the CDG by cyclocondensation furnished quinolines. These syntheses provided sulfur analogues of the natural product marinoquinoline A.
A further transition-metal-catalyzed C-H activation reaction, the Matsuda-Heck reaction, was used to arylate 1b with variously substituted diazonium salts, giving numerous styrenyl sulfones. The successful use of the vinylsulfonyl compounds in cross metathesis could not be achieved within this work. Therefore, various dialkenylated sulfonamides were synthesized, with the chain length of the alkenyl group varied between two and three carbon atoms at the sulfur and between three and four at the nitrogen. The dialkenylated sulfonamides were then employed in the C-H activation methods investigated before.
N-Allyl-N-phenylethenesulfonamide (3) was successfully functionalized in the DHR and the Heck reaction. A method-specific coupling took place depending on the electron density of the respective alkenyl group: the DHR led to selective arylation of the vinyl group and the Heck reaction to arylation of the allyl group; mixed products were not obtained. For the other diolefins, complex product mixtures were obtained. Furthermore, the diolefins were examined in ring-closing metathesis, and the corresponding sultams were obtained in very good yields. The use of the sultams in C-H activation was unsuccessful; it is assumed that the existing reaction conditions would have to be optimized for these disubstituted sulfonamides.
Finally, various enantiopure olefins were prepared starting from levoglucosenone. For this, levoglucosenone was first reacted with an allyl and a 3-butenyl Grignard reagent; the corresponding products were obtained in moderate yields. A further route began with the reduction of levoglucosenone to levoglucosenol. This alcohol was successfully etherified with allyl bromide. In addition to the studies on ether synthesis, levoglucosenol was esterified with various sulfonyl chlorides to give the corresponding sulfonic acid esters. These olefins were examined in a domino metathesis reaction; starting from the allyl levoglucosenyl ether, a dihydrofuran was prepared.
The present work focuses on minimising the use of toxic chemicals by integrating biobased monomers, derived from fatty acid esters, into photopolymerization processes, which are considered environmentally friendly. The internal double bond of oleic acid was converted to a more reactive (meth)acrylate or epoxy group. The biobased starting materials, functionalized with different pendant groups, were used in photopolymerizable formulations to design new polymeric structures using an ultraviolet light-emitting diode (UV-LED, 395 nm) via free radical or cationic polymerization.
New (meth)acrylates (2, 3 and 4), each consisting of two isomers, methyl 9-((meth)acryloyloxy)-10-hydroxyoctadecanoate / methyl 9-hydroxy-10-((meth)acryloyloxy)octadecanoate (2 and 3) and methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4), derived from an oleic acid mixture, as well as ionic liquid monomers (1a and 1b) bearing long alkyl chains, were polymerized photochemically. The new (meth)acrylates are based on vegetable oil, and the ionic liquids (ILs) are non-volatile; both monomer types therefore follow a green approach. The photoinitiated polymerization of the new (meth)acrylates and ionic liquids was investigated in the presence of ethyl (2,4,6-trimethylbenzoyl) phenylphosphinate (Irgacure® TPO-L) or di(4-methoxybenzoyl)diethylgermane (Ivocerin®) as photoinitiator (PI). Additionally, the results were compared with those obtained from commercial 1,6-hexanediol di(meth)acrylates (5 and 6) to probe the biobased monomers' potential for substituting petroleum-derived materials with renewable resources in possible coating applications. The kinetic study shows that methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4) and the ionic liquids (1a and 1b) reach quantitative conversion after irradiation, which is important for practical applications. On the other hand, heat generation extends over a longer time during the polymerization of the biobased systems and ILs.
Poly(meth)acrylates derived from (meth)acrylated fatty acid methyl ester monomers generally show a low glass transition temperature because of the long aliphatic chains in the polymer structure, whereas poly(meth)acrylates containing aromatic groups have higher glass transition temperatures. Therefore, the new monomer 4-(4-methacryloyloxyphenyl)-butan-2-one (7) was synthesized as a promising candidate for green techniques such as light-induced polymerization. The photokinetics of 7 were investigated using Irgacure® TPO-L or Ivocerin® as photoinitiator, and its reactivity was compared to that of commercial 2-phenoxyethyl methacrylate (8) and phenyl methacrylate (9) on the basis of the differences in monomer structure. The photopolymer of 7 might be an interesting candidate for coating applications, combining quantitative conversion, high molecular weight and a higher glass transition temperature.
In addition to the linear systems based on renewable materials, new crosslinked polymers were also designed in this thesis. For this purpose, an isomer mixture consisting of ethane-1,2-diyl bis(9-methacryloyloxy-10-hydroxyoctadecanoate), ethane-1,2-diyl 9-hydroxy-10-methacryloyloxy-9'-methacryloyloxy-10'-hydroxyoctadecanoate and ethane-1,2-diyl bis(9-hydroxy-10-methacryloyloxyoctadecanoate) (10), not previously described in the literature, was synthesized by derivatization of oleic acid. Crosslinked material based on this biobased monomer was produced by photoinitiated free radical polymerization using Irgacure® TPO-L or Ivocerin® as photoinitiator. Furthermore, the material properties were diversified by copolymerization of 10 with 4-(4-methacryloyloxyphenyl)-butan-2-one (7) or methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4). The influence of comonomers with different chemical structure on the network was investigated by analysis of the thermo-mechanical properties, the crosslink density and the molecular weight between two crosslink junctions. An increase in the glass transition temperature caused by copolymerization of the biobased monomer 10 with an excess of 7 was confirmed by both differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA). On the other hand, the crosslink density decreased as a result of the copolymerization reactions due to the reduction in the mean functionality of the system. Furthermore, the surfaces were characterized by contact angle measurements using solvents of different polarity.
This work also contributes to the limited data on the cationic photopolymerization of epoxidized vegetable oils, which contrasts with the widely investigated thermal curing of biorenewable epoxy monomers. In addition to 9,10-epoxystearic acid methyl ester (11), the new monomer bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) was synthesized from oleic acid. These two biobased epoxies were polymerized via cationic photoinitiated polymerization in the presence of bis(t-butyl)-iodonium-tetrakis(perfluoro-t-butoxy)aluminate ([Al(O-t-C4F9)4]-) and isopropylthioxanthone (ITX) as the photoinitiating system. The polymerization kinetics of 11 and 12 were investigated and compared with those of the commercial monomers 3,4-epoxycyclohexylmethyl-3',4'-epoxycyclohexane carboxylate (13), 1,4-butanediol diglycidyl ether (14), and the diglycidyl ether of bisphenol A (15). Both biobased epoxies (11 and 12) showed higher conversion than the cycloaliphatic epoxy (13) and lower reactivity than 1,4-butanediol diglycidyl ether (14). Additional network systems were designed by copolymerization of 12 and 15 in different molar ratios (1:1; 1:5; 1:9). The results indicate that the final conversion depends on the polymerization rate as well as on physical processes such as vitrification during polymerization. Moreover, the low glass transition temperature of the homopolymer derived from 12 was successfully increased by copolymerization with 15. On the other hand, the surface produced from 12 is hydrophobic, and a higher concentration of the biobased diepoxy (12) in the copolymerizing mixture decreases the surface free energy. The network systems were also examined according to the rubber elasticity theory: the crosslinked polymer derived from the 1:5 mixture of 12 and 15 exhibits an almost ideal polymer network.
The Internet of Things (IoT) is a system of physical objects that can be discovered, monitored, controlled, or interacted with by electronic devices that communicate over various networking interfaces and can eventually be connected to the wider Internet [Guinard and Trifa, 2016]. IoT devices are equipped with sensors and/or actuators and may be constrained in terms of memory, computational power, network bandwidth, and energy. Interoperability, the ability of different types of systems to work together smoothly, can help to manage such heterogeneous devices. There are four levels of interoperability: physical, network and transport, integration, and data. Data interoperability is further subdivided into syntactic and semantic interoperability. Semantic data describes the meaning of data and establishes a common understanding of vocabulary, e.g. with the help of dictionaries, taxonomies, or ontologies. To achieve full interoperability, semantic interoperability is necessary.
Many organizations and companies are working on standards and solutions for interoperability in the IoT. However, commercial solutions produce a vendor lock-in and focus on centralized approaches such as cloud-based solutions. This thesis proposes a decentralized approach, namely edge computing, which is based on the concepts of mesh networking and distributed processing. This approach has the advantage that information collection and processing are placed closer to the sources of this information. The goals are to reduce traffic and latency and to be robust against a lossy or failed Internet connection.
We view the management of IoT devices from the perspective of network configuration management. This thesis proposes a framework for network configuration management of heterogeneous, constrained IoT devices that uses semantic descriptions for interoperability. The MYNO framework is an acronym for MQTT, YANG, NETCONF and Ontology. The NETCONF protocol is the IETF standard for network configuration management, and the MQTT protocol is the de-facto standard in the IoT. We picked up the idea of the NETCONF-MQTT bridge, originally proposed by Scheffler and Bonneß [2017], and extended it with semantic device descriptions. These descriptions specify the device capabilities; they are based on the oneM2M Base Ontology and formalized with Semantic Web standards.
The novel approach uses an ontology-based device description directly on a constrained device in combination with the MQTT protocol, and the bridge was extended to query such descriptions. Through semantic annotation, the device capabilities become self-descriptive, machine-readable and reusable.
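To illustrate how a bridge can query such an ontology-based description, the following sketch parses a tiny Turtle document with RDFLib and lists the advertised services; the namespace and triples are invented for the example and do not reproduce the actual oneM2M-based descriptions:

```python
# Illustrative query of a semantic device description with RDFLib.
from rdflib import Graph

DESCRIPTION = """
@prefix ex: <http://example.org/myno#> .
ex:TempSensor a ex:Device ;
    ex:hasService ex:ReadTemperature .
ex:ReadTemperature ex:hasName "get-temperature" .
"""

g = Graph()
g.parse(data=DESCRIPTION, format="turtle")

# List every service name the device advertises.
QUERY = """
PREFIX ex: <http://example.org/myno#>
SELECT ?name WHERE {
    ?device ex:hasService ?service .
    ?service ex:hasName ?name .
}
"""
for row in g.query(QUERY):
    print(row.name)  # -> get-temperature
```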
The concept of a Virtual Device was introduced and implemented based on the semantic device descriptions. A Virtual Device aggregates the capabilities of all devices at the edge network and therefore contributes to scalability: all devices can be controlled via a single RPC call.
The model-driven NETCONF Web-Client is generated automatically from a YANG model, which in turn is generated by the bridge from the semantic device description. The Web-Client provides a user-friendly interface, offers RPC calls and displays sensor values. We demonstrate the feasibility of this approach in different use cases: sensor and actuator scenarios as well as event configuration and triggering.
The semantic approach results in increased memory overhead. Therefore, we evaluated CBOR and RDF HDT for optimizing ontology-based device descriptions for use on constrained devices. The evaluation shows that CBOR is not suitable for long strings and that RDF HDT is a promising candidate but is still a W3C Member Submission. Finally, we used an optimized JSON-LD format for the syntax of the device descriptions.
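To give an impression of a compact JSON-LD device description, consider the following sketch; the context IRI and keys are assumptions for illustration, not the format actually shipped with MYNO:

```python
# Illustrative compacted JSON-LD device description; on constrained
# devices, the serialized size of such a document matters.
import json

device_description = {
    "@context": {"ex": "http://example.org/myno#"},
    "@id": "ex:TempSensor",
    "@type": "ex:Device",
    "ex:hasService": {
        "@id": "ex:ReadTemperature",
        "ex:hasName": "get-temperature",
    },
}

payload = json.dumps(device_description, separators=(",", ":"))
print(len(payload), "bytes")
```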
One of the security tasks of network management is the distribution of firmware updates. The MYNO Update Protocol (MUP) was developed and evaluated on constrained CC2538dk devices in a 6LoWPAN network. The MYNO update process focuses on freshness and authenticity of the firmware. The evaluation shows that it is challenging but feasible to bring firmware updates to constrained devices using MQTT. As a new requirement for the next MQTT version, we propose a slicing feature for better support of constrained devices: the MQTT broker should slice data to the maximum packet size specified by the device and transfer it slice-by-slice.
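The proposed slicing behavior can be sketched as follows. The broker host, topic layout and the 128-byte packet limit are illustrative assumptions, and the slicing, which the proposal places in the broker, is only approximated on the client side here:

```python
# Sketch: publish a firmware image slice-by-slice at the device's
# maximum packet size, with sequence numbers for reassembly.
import paho.mqtt.client as mqtt

MAX_PACKET = 128  # bytes the constrained device can accept (assumed)

def publish_sliced(client: mqtt.Client, topic: str, firmware: bytes) -> None:
    total = -(-len(firmware) // MAX_PACKET)  # ceiling division
    for i in range(total):
        chunk = firmware[i * MAX_PACKET:(i + 1) * MAX_PACKET]
        # Slice index and total let the device reassemble and detect gaps.
        client.publish(f"{topic}/{i}/{total}", chunk, qos=1)

client = mqtt.Client()  # paho-mqtt 1.x constructor; 2.x adds a version arg
client.connect("broker.example.org")
publish_sliced(client, "myno/firmware/node42", b"\x00" * 1000)
```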
For the performance and scalability evaluation of the MYNO framework, we set up a High Precision Agriculture demonstrator with 10 ESP-32 NodeMCU boards at the edge of the network. The ESP-32 NodeMCU boards, connected via WLAN, were equipped with six sensors and two actuators. The performance evaluation shows that processing ontology-based descriptions on a Raspberry Pi 3B with RDFLib is a challenging task in terms of computational power. Nevertheless, it is feasible because it must be done only once per device, during the discovery process.
The MYNO framework was tested with heterogeneous devices such as CC2538dk from Texas Instruments, Arduino Yún Rev 3, and ESP-32 NodeMCU, and IP-based networks such as 6LoWPAN and WLAN.
In summary, the MYNO framework shows that the semantic approach on constrained devices is feasible in the IoT.
Permafrost is warming globally, which leads to widespread permafrost thaw and impacts the surrounding landscapes, ecosystems and infrastructure. Especially ice-rich permafrost is vulnerable to rapid and abrupt thaw resulting from the melting of excess ground ice. Local remote sensing studies have detected increasing rates of abrupt permafrost disturbances, such as thermokarst lake change and drainage, coastal erosion and retrogressive thaw slumps (RTS), in the last two decades, all of which indicates an acceleration of permafrost degradation.
RTS in particular are abrupt disturbances that expand by up to several meters each year; they alter local and regional topographic gradients and hydrological pathways, mobilise sediment and nutrients into aquatic systems, and increase the mobilisation of permafrost carbon. The feedback between abrupt permafrost thaw and the carbon cycle is a crucial component of the Earth system and a relevant driver in global climate models. However, an assessment of RTS at high temporal resolution to determine the dynamic thaw processes and identify the main thaw drivers, as well as a continental-scale assessment across diverse permafrost regions, are still lacking.
In northern high latitudes, optical remote sensing is restricted by environmental factors and frequent cloud cover. This decreases image availability and thus constrains the application of automated algorithms for time-series detection of large-scale abrupt permafrost disturbances at high temporal resolution. Since models and observations suggest that abrupt permafrost disturbances will intensify, disturbance products at continental scale are required that allow for meaningful integration into Earth system models.
The main aim of this dissertation is therefore to enhance our knowledge of the spatial extent and temporal dynamics of abrupt permafrost disturbances in a large-scale assessment. To address this, three research objectives were posed:
1. Assess the comparability and compatibility of Landsat-8 and Sentinel-2 data for a combined use in multi-spectral analysis in northern high latitudes.
2. Adapt an image mosaicking method for Landsat and Sentinel-2 data to create combined mosaics of high quality as input for high temporal disturbance assessments in northern high latitudes.
3. Automatically map retrogressive thaw slumps on the landscape-scale and assess their high temporal thaw dynamics.
We assessed the comparability of Landsat-8 and Sentinel-2 imagery by spectral comparison of corresponding bands. Based on overlapping same-day acquisitions of Landsat-8 and Sentinel-2, we derived spectral bandpass adjustment coefficients for North Siberia to adjust Sentinel-2 reflectance values to resemble Landsat-8 and harmonise the two data sets. Furthermore, we adapted a workflow to combine Landsat and Sentinel-2 images into homogeneous and gap-free annual mosaics. We determined the number of images and cloud-free pixels, the spatial coverage and the quality of the mosaics with spectral comparisons to demonstrate the relevance of the Landsat+Sentinel-2 mosaics. Lastly, we adapted the automatic disturbance detection algorithm LandTrendr for large-scale RTS identification and mapping at high temporal resolution. For this, we modified the temporal segmentation algorithm for annual gradual and abrupt disturbance detection to incorporate the annual Landsat+Sentinel-2 mosaics. We further parametrised the temporal segmentation and spectral filtering for optimised RTS detection, conducted further spatial masking and filtering, and implemented a binary object classification with machine learning to derive RTS from the LandTrendr disturbance output. We applied the algorithm to North Siberia, covering an area of 8.1 × 10⁶ km².
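As an illustration of the bandpass adjustment step, deriving per-band coefficients from same-day acquisitions amounts to an ordinary least-squares fit of Landsat-8 on Sentinel-2 reflectance; the values below are synthetic stand-ins for the actual image pairs:

```python
# Sketch: fit linear bandpass adjustment coefficients for one band.
import numpy as np

rng = np.random.default_rng(42)
s2_red = rng.uniform(0.0, 0.4, 10_000)                          # Sentinel-2
l8_red = 0.98 * s2_red + 0.005 + rng.normal(0, 0.005, 10_000)   # Landsat-8

slope, intercept = np.polyfit(s2_red, l8_red, deg=1)
s2_adjusted = slope * s2_red + intercept  # Sentinel-2 adjusted to Landsat-8
print(f"red band: L8 = {slope:.3f} * S2 + {intercept:.4f}")
```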
The spectral band comparison between same-day Landsat-8 and Sentinel-2 acquisitions already showed an overall good fit between the two satellite products. Applying the acquired spectral bandpass coefficients to adjust Sentinel-2 reflectance values, however, resulted in a near-perfect alignment between the same-day images. It can therefore be concluded that the spectral band adjustment succeeds in adjusting Sentinel-2 spectral values to those of Landsat-8 in North Siberia.
The number of available cloud-free images increased steadily between 1999 and 2019, intensifying after 2016 with the addition of Sentinel-2 images; this signifies a much-improved input database for the mosaicking workflow. In a comparison of annual mosaics, the Landsat+Sentinel-2 mosaics always fully covered the study areas, while Landsat-only mosaics contained data gaps for the same years. The spectral comparison of input images and Landsat+Sentinel-2 mosaics showed a high correlation between the input images and the mosaic bands, attesting to the high quality of the mosaicking results. Our results show that mosaic coverage for northern, coastal areas in particular was substantially improved with the Landsat+Sentinel-2 mosaics. By combining data from both Landsat and Sentinel-2 sensors, we reliably created input mosaics at high spatial resolution for comprehensive time-series analyses.
This research presents the first automatically derived assessment of RTS distribution and temporal dynamics at continental scale. In total, we identified 50,895 RTS, primarily located in ice-rich permafrost regions, as well as a steady increase in RTS-affected area between 2001 and 2019 across North Siberia. From 2016 onward, the RTS area increased more abruptly, indicating heightened thaw slump dynamics in this period. Overall, the RTS-affected area increased by 331 % within the observation period. In contrast, five focus sites show spatiotemporal variability in their annual RTS dynamics, alternating between periods of increased and decreased RTS development, which suggests a close relationship to varying thaw drivers. The majority of identified RTS were already active from 2000 onward, and only a small proportion initiated during the assessment period. This highlights that the increase in RTS-affected area was mainly caused by the enlargement of existing RTS and not by newly initiated RTS.
Overall, this research showed the advantages of combining Landsat and Sentinel-2 data in northern high latitudes and the improvements in spatial and temporal coverage of the combined annual mosaics. The mosaics build the database for automated disturbance detection to reliably map RTS and other abrupt permafrost disturbances at continental scale. The assessment at high temporal resolution further testifies to the increasing impact of abrupt permafrost disturbances and likewise emphasises the spatio-temporal variability of thaw dynamics across landscapes. Obtaining such consistent disturbance products is necessary to parametrise regional and global climate models and to enable an improved representation of the permafrost thaw feedback.
This dissertation was carried out as part of the international and interdisciplinary graduate school StRATEGy, which has set itself the goal of investigating geological processes that take place on different temporal and spatial scales and have shaped the southern Central Andes. This study focuses on claystones and carbonates of the Yacoraite Fm. that were deposited between the Maastrichtian and the Danian in the Cretaceous Salta Rift Basin. The former rift basin is located in northwest Argentina and is divided into the Tres Cruces, Metán-Alemanía and Lomas de Olmedo sub-basins. The overall motivation for this study was to gain new insight into the evolution of marine and lacustrine conditions during deposition of the Yacoraite Fm. in the Tres Cruces and Metán-Alemanía sub-basins. Other important aspects examined within the scope of this dissertation are the conversion of the organic matter of the Yacoraite Fm. into oil and its genetic relationship to selected produced oils and natural oil seeps. The results of my study show that deposition of the Yacoraite Fm. began under marine conditions and that a lacustrine environment developed by the end of deposition in the Tres Cruces and Metán-Alemanía basins. In general, the kerogen of the Yacoraite Fm. consists mainly of kerogen types II, III and mixed II/III. Type III kerogen is mainly found in samples from the Yacoraite Fm. with low TOC values; owing to the adsorption of hydrocarbons on mineral surfaces (mineral matrix effect), the type III kerogen content determined by Rock-Eval pyrolysis may be overestimated in these samples. Organic petrography shows that the organic particles of the Yacoraite Fm. consist mainly of alginites and some vitrinite-like particles. Pyrolysis-GC of the rock samples showed that the Yacoraite Fm. generates low-sulfur oils with a predominantly low-wax, paraffinic-naphthenic-aromatic composition as well as paraffinic wax-rich oils. Small proportions of paraffinic, low-wax oils and a gas-condensate-generating facies are also predicted. Here, too, mineral matrix effects were taken into account, which can lead to a quantitative overestimation of the gas-forming character.
The results of an additional 1D basin modeling show that the onset (10 % TR) of oil generation occurred between ≈10 Ma and ≈4 Ma. Most of the oil (≈50 % to 65 %) was generated prior to the development of the structural traps formed during the Plio-Pleistocene Diaguita deformation phase; only ≈10 % of the total oil generated was formed, and potentially trapped, after the formation of the structural traps. Important factors in the risk assessment of this petroleum system, which may explain the small amounts of generated and migrated oil, are the generally low TOC contents and the variable thickness of the Yacoraite Fm. Additional risks are associated with the low density of information about potentially existing reservoir structures and the quality of the overburden.
River flooding poses a threat to numerous cities and communities all over the world. The detection, quantification and attribution of changes in flood characteristics are key to assessing changes in flood hazard and help affected societies to mitigate and adapt to emerging risks in time. The Rhine River is one of the major European rivers, and numerous large cities lie on its banks. Runoff from several large tributaries superimposes in the main channel, shaping a complex flow regime in which rainfall, snowmelt and ice melt are important runoff components. The main objective of this thesis is the investigation of a possible transient merging of nival and pluvial Rhine flood regimes under global warming. Rising temperatures cause snowmelt to occur earlier in the year and rainfall to be more intense. The superposition of snowmelt-induced floods originating from the Alps with more intense rainfall-induced runoff from pluvial-type tributaries might create a new flood type with potentially disastrous consequences.
To introduce the topic of changing hydrological flow regimes, an interactive web application is presented that enables the investigation of runoff timing and runoff seasonality observed at river gauges all over the world. The exploration and comparison of a great diversity of river gauges in the Rhine River Basin and beyond indicates that river systems around the world are undergoing fundamental changes. In hazard and risk research, providing background as well as real-time information to residents and decision-makers in an easily accessible way is of great importance. Future studies need to further harness the potential of scientifically engineered online tools to improve the communication of information related to hazards and risks.
A next step is the development of a cascading sequence of analytical tools to investigate long-term changes in hydro-climatic time series. The combination of quantile sampling with moving-average trend statistics and empirical mode decomposition allows for the extraction of high-resolution signals and the identification of mechanisms driving changes in river runoff. Results indicate that the construction and operation of large reservoirs in the Alps is an important factor redistributing runoff from summer to winter, and hint at more (intense) rainfall in recent decades, particularly during winter, in turn increasing high runoff quantiles. The development and application of this analytical sequence represents a further step in the scientific quest to disentangle natural variability, climate change signals and direct human impacts.
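The quantile-sampling idea behind this analytical sequence can be sketched as tracking how a high runoff quantile evolves within a moving window; the series length, window and quantile level below are illustrative, not the gauge data analysed in the thesis:

```python
# Sketch: 30-year moving 95% quantile of a synthetic daily runoff series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
days = pd.date_range("1950-01-01", "2019-12-31", freq="D")
trend = np.linspace(0, 50, len(days))  # slow increase in runoff
runoff = rng.gamma(shape=2.0, scale=100.0, size=len(days)) + trend

q95 = (pd.Series(runoff, index=days)
         .rolling(window=365 * 30, min_periods=365 * 10)
         .quantile(0.95))
print(q95.dropna().iloc[[0, -1]])  # the high quantile rises over time
```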
The in-depth analysis of in situ snow measurements and simulations of the Alpine snow cover with a physically based snow model enable the quantification of changes in snowmelt in the sub-basin upstream of gauge Basel. Results confirm previous investigations indicating that rising temperatures result in a decrease in maximum melt rates. Extending these findings to a catchment perspective, a threefold effect of rising temperatures can be identified: snowmelt becomes weaker, occurs earlier and forms at higher elevations. Furthermore, results indicate that, due to the wide range of elevations in the basin, snowmelt does not occur simultaneously at all elevations; rather, elevation bands melt together in blocks. The beginning and end of the release of meltwater seem to be determined by the passage of warm air masses, and the respective elevation range affected by the accompanying temperatures and snow availability. Following these findings, a hypothesis describing elevation-dependent compensation effects in snowmelt is introduced: in a warmer world with similar sequences of weather conditions, snowmelt moves upward to higher elevations, i.e., the block of elevation bands providing most water to snowmelt-induced runoff is located at higher elevations. This upward movement makes snowmelt in individual elevation bands occur earlier; the timing of snowmelt-induced runoff, however, stays the same, as meltwater from higher elevations at least partly replaces meltwater from elevations below.
The insights on past and present changes in river runoff, snow cover and the underlying mechanisms form the basis for investigating potential future changes in Rhine River runoff. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios, is used to analyse future changes in streamflow, snowmelt, precipitation and evapotranspiration at 1.5, 2.0 and 3.0 °C global warming. Simulation results suggest that future changes in flood characteristics in the Rhine River Basin are controlled by increased precipitation amounts on the one hand, and reduced snowmelt on the other hand. Rising temperatures deplete seasonal snowpacks. At no time during the year does a warming climate result in an increased risk of snowmelt-driven flooding. Counterbalancing effects between snowmelt and precipitation often result in only small and transient changes in streamflow peaks. Although investigations point to changes in both rainfall- and snowmelt-driven runoff, there are no indications of a transient merging of nival and pluvial Rhine flood regimes due to climate warming. Flooding in the main tributaries of the Rhine, such as the Moselle River, as well as in the High Rhine, is controlled by both precipitation and snowmelt. Caution has to be exercised in labelling sub-basins such as the Moselle catchment as purely pluvial-type or the Rhine River Basin at Basel as purely nival-type; results indicate that such (over)simplifications can entail misleading assumptions with regard to flood-generating mechanisms and changes in flood hazard. In the framework of this thesis, progress has been made in detecting, quantifying and attributing past, present and future changes in Rhine flow and flood characteristics. However, further studies are necessary to pin down future changes in the genesis of Rhine floods, particularly very rare events.
Digital technologies are paving the way for innovative educational approaches. The learning format of Massive Open Online Courses (MOOCs) provides a highly accessible path to lifelong learning while being more affordable and flexible than face-to-face courses. Thousands of learners can enroll in courses, mostly without admission restrictions, but this also raises challenges: individual supervision by teachers is barely feasible, and learning persistence and success depend on students' self-regulatory skills. Here, technology provides the means for support. The use of data for decision-making is already transforming many fields, whereas in education it is still a young research discipline. Learning Analytics (LA) is defined as the measurement, collection, analysis, and reporting of data about learners and their learning contexts with the purpose of understanding and improving learning and learning environments. The vast amount of data that MOOCs produce on the learning behavior and success of thousands of students provides the opportunity to study human learning and to develop approaches addressing the demands of learners and teachers.
The overall purpose of this dissertation is to investigate the implementation of LA at the scale of MOOCs and to explore how data-driven technology can support learning and teaching in this context. To this end, several research prototypes were iteratively developed for the HPI MOOC Platform and thus tested and evaluated in an authentic real-world learning environment. Most of the results can also be applied on a conceptual level to other MOOC platforms. The research contribution of this thesis thus provides practical insights beyond purely theoretical considerations. In total, four system components were developed and extended:
(1) The Learning Analytics Architecture: A technical infrastructure to collect, process, and analyze event-driven learning data based on schema-agnostic pipelining in a service-oriented MOOC platform (a sketch of such an event envelope follows after this list).
(2) The Learning Analytics Dashboard for Learners: A tool for data-driven support of self-regulated learning, in particular to enable learners to evaluate and plan their learning activities, progress, and success by themselves.
(3) Personalized Learning Objectives: A set of features to better connect learners' success to their personal intentions based on selected learning objectives, offering guidance and aligning the provided data-driven insights about their learning progress.
(4) The Learning Analytics Dashboard for Teachers: A tool supporting teachers with data-driven insights to enable the monitoring of their courses with thousands of learners, identify potential issues, and take informed action.
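To illustrate what schema-agnostic event collection as in component (1) can mean in practice, the following sketch wraps arbitrary verb-specific payloads in a small fixed envelope; all field names are invented for the example and are not the platform's actual schema:

```python
# Sketch: a fixed event envelope with a free-form, verb-specific payload.
import json
from datetime import datetime, timezone

def make_event(verb: str, user_id: str, resource: str, **payload) -> str:
    return json.dumps({
        "verb": verb,                # e.g. "visited", "submitted"
        "user": user_id,
        "resource": resource,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "context": payload,          # arbitrary fields, no fixed schema
    })

print(make_event("visited", "u123", "video:intro", position_sec=42))
```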
For all aspects examined in this dissertation, related research is presented, development processes and implementation concepts are explained, and evaluations are conducted in case studies. Among other findings, the usage of the learner dashboard in combination with personalized learning objectives was associated with improved certification rates of 11.62% to 12.63%. Furthermore, the teacher dashboard was observed to be a key tool and an integral part of teaching in MOOCs. In addition to the results and contributions, general limitations of the work are discussed, which altogether provides a solid foundation for practical implications and future research.
Magmatic continental rifts often constitute the earliest stage of nascent plate boundaries. These extensional tectonic provinces are characterized by ubiquitous normal faulting and volcanic activity; the spatial pattern, geometry, and age of these normal faults can help to unravel the spatiotemporal relationships between extensional deformation, magmatism, and long-wavelength crustal deformation of continental rift provinces. This study addresses the active faulting in the Kenya Rift of the Cenozoic East African Rift System (EARS), concentrating on the mid-Pleistocene to the present day.
To examine the early stages of continental break-up in the EARS, this thesis presents a time-averaged minimum extension rate for the inner graben of the Northern Kenya Rift (NKR) for the last 0.5 m.y. Using the TanDEM-X digital elevation model, fault-scarp geometries and associated throws are determined across the volcano-tectonic axis of the inner graben of the NKR. By integrating existing geochronology of faulted units with new ⁴⁰Ar/³⁹Ar radioisotopic dates, time-averaged extension rates are calculated. This study reveals that in the inner graben of the NKR, the long-term extension rate based on mid-Pleistocene to recent brittle deformation has minimum values of 1.0 to 1.6 mm yr⁻¹, locally with values up to 2.0 mm yr⁻¹. In light of virtually inactive border faults of the NKR, we show that extension is focused in the region of the active volcano-tectonic axis in the inner graben, thus highlighting the maturing of continental rifting in the NKR.
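The geometry behind such a rate estimate is simple: summed fault throws are converted to horizontal heave via an assumed fault dip and divided by the age of the faulted unit. A back-of-the-envelope sketch with purely illustrative numbers (not the measured transect values) follows:

```python
# Sketch: minimum extension rate from fault throws, dip and surface age.
import math

throws_m = [12.0, 8.5, 20.0, 15.5]  # scarp throws along one transect
dip_deg = 60.0                      # assumed normal-fault dip
age_yr = 0.5e6                      # age of the faulted surface

heave_m = sum(t / math.tan(math.radians(dip_deg)) for t in throws_m)
rate_mm_yr = heave_m / age_yr * 1e3
print(f"minimum extension rate: {rate_mm_yr:.3f} mm/yr")
```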
The phenomenon of focused extension is further investigated with a structural analysis of the youngest volcanic manifestations of the Kenya Rift, their relationship with extensional structures, and their overprint by Holocene faulting. In this context, I analyzed the fault characteristics at the ~36 ka old Menengai Caldera and adjacent areas in the Central Kenya Rift using detailed field mapping and a structure-from-motion-based DEM generated from UAV data. In general, the Holocene intra-rift normal faults are dip-slip faults which strike NNE and thus reflect the present-day tectonic stress field; however, inside Menengai Caldera, persistent magmatic activity and magmatic resurgence significantly overprint these young structures. The caldera is located at the center of an actively extending rift segment, and this and the other volcanic edifices of the Kenya Rift may constitute nucleation points of faulting and magmatic extensional processes that ultimately lead to a future stage of magma-assisted rifting.
When viewed at the scale of the entire Kenya Rift, the protracted normal faulting in this region compartmentalizes the larger rift depressions and influences the sedimentology and hydrology of the intra-rift basins at scales of less than 100 km. At present, most of the fault-bounded sub-basins of the Kenya Rift are hydrologically isolated, because the combination of faulting and magmatic activity has generated efficient hydrological barriers that maintain these basins as semi-independent geomorphic entities. This isolation, however, was overcome during wetter climatic conditions in the past, when the basins were transiently connected. I therefore also investigated the hydrological connectivity of the rift basins during the African Humid Period of the early Holocene, when the climate was wetter. With the help of DEM analysis, lake-highstand indicators, radiocarbon dating, and a review of the fossil record, two lake-river cascades could be identified: one directed southward and one directed northward. Both cascades connected presently isolated rift basins during the early Holocene via lake spillovers and incised river gorges. This hydrological connection fostered the dispersal of aquatic faunas along the rift; in addition, the water divide between the two river systems represented the only terrestrial dispersal corridor across the Kenya Rift. The reconstruction explains the isolated distributions of Nilotic fish species in Kenya Rift lakes and of Guineo-Congolian mammal species in forests east of the Kenya Rift. On longer timescales, repeated episodes of connectivity and isolation must have occurred. To address this problem, I participated in research analyzing a sediment drill core from the Koora Basin of the Southern Kenya Rift, which provides a paleo-environmental record of the last 1 Ma. Based on this record, it can be concluded that at ~400 ka relatively stable environmental conditions were disrupted by tectonic, hydrological, and ecological changes, resulting in increasingly large and frequent fluctuations in water availability, grassland communities, and woody plant cover. The major environmental shifts reflected in the drill-core data coincide with phases in which volcano-tectonic activity affected the basin. This thesis therefore shows how protracted extensional tectonic processes and the resulting geomorphologic conditions can affect the hydrology, the paleo-environment and the biodiversity of extensional zones in Kenya and elsewhere.
Historiography has so far dated the end of German Zionism to the Nazi ban on the Zionist Federation of Germany (Zionistische Vereinigung für Deutschland) in the wake of the November pogrom of 1938. By that time, however, it had already become detached from its geographical context and had put down new roots in Erez Israel. Zionists from Germany now set out, with their specific horizon of experience, their standards of value and the ideological equipment they had brought with them, to help shape the development of the Jewish national home and to pave the way for a comprehensive economic, cultural and political acculturation of the German Alijah. Contrary to all Zionist theory, they founded the self-help organization Hitachduth Olej Germania in 1932 on the basis of a countrymen's association, and, during the Second World War, the party Alija Chadascha.
The dissertation offers a comprehensive survey of German Zionism in its final phase, from 1932 to 1948; at the same time it illuminates the history of the roughly 60,000 Jews from Germany who immigrated to Palestine during the period relevant to this study. The first part presents, in chronological order, the final gathering and reorganization of German Zionism, beginning in 1932, in its new-old homeland. These were, so to speak, the formative years in personal, organizational and ideological-political terms, which, after the almost complete failure of the political integration of the German Alijah, ultimately concluded with the founding of the Alija Chadascha, a step which in retrospect appears almost inevitable. The second part presents the positions of the German Zionists on the existential questions facing the Jewish community in Palestine, called the Jischuw in Hebrew, during the period under consideration. Specifically, these were, first, the question of immigration, which was inseparably linked to the demand, indispensable in Zionist theory, for the attainment of a Jewish majority in Palestine; second, the question of the constitutional shape of the future Jewish polity; and third, the question of the Jischuw's adequate response to the Shoah. The question of the desired relationship with the British Mandatory power is woven into these thematic complexes, each of which is treated in a separate chapter. It was on these questions that the German Zionists had to put the intellectual and ideological equipment they had brought with them to a practical test and search for answers grounded in realpolitik.
The meteoric rise of the Alija Chadascha, which remained shaped by its character as a countrymen's association, was followed in the first post-war years by an equally rapid decline. A few months after the founding of the State of Israel it quietly dissolved, and the bulk of its activists integrated into the party system of the new state. German Zionism as a political movement had now truly come to an end. This study thus traces, on the one hand, the struggle of the German Alijah for social recognition and political participation in the Jischuw and, on the other, undertakes an intellectual and ideological positioning of German Zionism in its final phase, revealing tendencies of ideological reorientation. In addition, commonplaces found in the historiography, such as the almost universally accepted thesis of the failure of the German Zionists in their new homeland, are subjected to scrutiny. The last remaining gap in the scholarly canon on the more than fifty-year history of German Zionism is thereby closed.
By regulating the concentration of carbon in our atmosphere, the global carbon cycle drives changes in our planet’s climate and habitability. Earth surface processes play a central, yet insufficiently constrained role in regulating fluxes of carbon between terrestrial reservoirs and the atmosphere. River systems drive global biogeochemical cycles by redistributing significant masses of carbon across the landscape. During fluvial transit, the balance between carbon oxidation and preservation determines whether this mass redistribution is a net atmospheric CO2 source or sink. Existing models for fluvial carbon transport fail to integrate the effects of sediment routing processes, resulting in large uncertainties in fluvial carbon fluxes to the oceans.
In this Ph.D. dissertation, I address this knowledge gap through three studies that focus on the timescale and routing pathways of fluvial mass transfer and show their effect on the composition and fluxes of organic carbon exported by rivers. The hypotheses posed in these three studies were tested in an analog lowland alluvial river system – the Rio Bermejo in Argentina. The Rio Bermejo annually exports more than 100 Mt of sediment and organic matter from the central Andes, and transports this material nearly 1300 km downstream across the lowland basin without influence from tributaries, allowing me to isolate the effects of geomorphic processes on fluvial organic carbon cycling. These studies focus primarily on the geochemical composition of suspended sediment collected from river depth profiles along the length of the Rio Bermejo.
In Chapter 3, I aimed to determine the mean fluvial sediment transit time for the Rio Bermejo and evaluate the geomorphic processes that regulate the rate of downstream sediment transfer. I developed a framework to use meteoric cosmogenic 10Be (10Bem) as a chronometer to track the duration of sediment transit from the mountain front downstream along the ~1300 km channel of the Rio Bermejo. I measured 10Bem concentrations in suspended sediment sampled from depth profiles, and found a 230% increase along the fluvial transit pathway. I applied a simple model for the time-dependent accumulation of 10Bem on the floodplain to estimate a mean sediment transit time of 8.5±2.2 kyr. Furthermore, I show that sediment transit velocity is influenced by lateral migration rate and channel morphodynamics. This approach to measuring sediment transit time is much more precise than other methods previously used and shows promise for future applications.
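The logic of such an accumulation chronometer can be sketched in a few lines. The following minimal calculation assumes a constant meteoric 10Be delivery flux mixed into a fixed floodplain depth; the model form and all parameter values are illustrative placeholders rather than the calibrated inputs of the thesis:

```python
# Minimal sketch of a time-dependent meteoric 10Be accumulation chronometer.
# All parameter values are illustrative placeholders, not thesis inputs.

F_met = 1.0e6    # assumed meteoric 10Be delivery flux (atoms cm^-2 yr^-1)
rho   = 1.5      # assumed floodplain bulk density (g cm^-3)
d_mix = 100.0    # assumed depth over which the flux is mixed (cm)

# Concentrations at the mountain front and near the outlet (illustrative
# numbers reproducing a ~230% downstream increase):
C_upstream   = 2.0e7   # atoms g^-1
C_downstream = 6.6e7   # atoms g^-1

# Sediment in storage gains 10Be at rate F/(rho*d); the mean transit
# (storage) time follows from the measured concentration gain:
accumulation_rate = F_met / (rho * d_mix)          # atoms g^-1 yr^-1
transit_time = (C_downstream - C_upstream) / accumulation_rate

print(f"mean sediment transit time ~ {transit_time / 1e3:.1f} kyr")
```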
In Chapter 4, I aimed to quantify the effects of hydrodynamic sorting on the composition and quantity of particulate organic carbon (POC) transported by lowland rivers. I first used scanning electron microscopy (SEM) coupled with nanoscale secondary ion mass spectrometry (NanoSIMS) analyses to show that the Bermejo transports two principal types of POC: 1) mineral-bound organic carbon associated with <4 µm, platy grains, and 2) coarse discrete organic particles. Using n-alkane stable isotope data and particle shape analysis, I showed that these two carbon pools are vertically sorted in the water column due to differences in particle settling velocity. This vertical sorting may allow modern POC to be transported efficiently from source to sink, driving efficient CO2 drawdown. Simultaneously, vertical sorting may cause degraded, mineral-bound POC to be deposited overbank and stored on the floodplain for centuries to millennia, resulting in enhanced POC remineralization. In the Rio Bermejo, selective deposition of coarse material causes the proportion of mineral-bound POC to increase with distance downstream, but the majority of exported POC consists of discrete organic particles, suggesting that the river is a net carbon sink. In summary, this study shows that selective deposition and hydraulic sorting control the composition and fate of POC during fluvial transit.
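The settling-velocity contrast that drives this vertical sorting can be illustrated with Stokes' law, which is strictly valid only for small spheres at low Reynolds number and is therefore a rough approximation for platy grains; the particle densities and the coarse-particle size below are assumed for illustration:

```python
# Stokes settling velocity w = (rho_p - rho_f) * g * d^2 / (18 * mu).
# Densities and the coarse-particle size are assumptions for illustration.

g     = 9.81      # gravitational acceleration (m s^-2)
mu_w  = 1.0e-3    # dynamic viscosity of water (Pa s)
rho_f = 1000.0    # water density (kg m^-3)

def stokes_velocity(d, rho_p):
    """Settling velocity (m/s) of a sphere of diameter d (m), density rho_p."""
    return (rho_p - rho_f) * g * d**2 / (18.0 * mu_w)

w_mineral = stokes_velocity(4e-6, 2650.0)    # platy, mineral-bound OC carrier
w_organic = stokes_velocity(100e-6, 1300.0)  # coarse discrete organic particle

print(f"mineral-bound: {w_mineral*1e6:.1f} um/s, "
      f"discrete organic: {w_organic*1e3:.1f} mm/s")
```

The roughly two-orders-of-magnitude difference is the kind of contrast that lets the coarse organic fraction stay concentrated near the bed while fine mineral-bound material rides higher in the water column.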
In Chapter 5, I characterized and quantified POC transformation and oxidation during fluvial transit. I analyzed the radiocarbon content and stable carbon isotopic composition of Rio Bermejo suspended sediment and found that POC ages during fluvial transit, but is also degraded and oxidized during transient floodplain storage. Using these data, I developed a conceptual model for fluvial POC cycling that allows the estimation of POC oxidation relative to POC export, and ultimately reveals whether a river is a net source or sink of CO2 to the atmosphere. Through this study, I found that the Rio Bermejo annually exports more POC than is oxidized during transit, largely due to high rates of lateral migration that cause erosion of floodplain vegetation and soil into the river. These results imply that human engineering of rivers could alter the fluvial carbon balance, by reducing lateral POC inputs and increasing the mean sediment transit time.
Together, these three studies quantitatively link geomorphic processes to rates of POC transport and degradation across sub-annual to millennial time scales and nanoscale to 10³ km spatial scales, laying the groundwork for a global-scale fluvial organic carbon cycling model.
Adapted pathogens possess a range of virulence mechanisms to suppress plant immune responses below a threshold of effective resistance. This enables them to multiply and cause disease on a particular host. An essential virulence strategy of Gram-negative bacteria is the translocation of so-called type III effector proteins (T3Es) directly into the host cell, where they disturb the host's immune response or promote the establishment of an environment favorable to the pathogen. A critical component of plant immunity against invading pathogens is the rapid transcriptional reprogramming of the attacked cell. Many adapted bacterial plant pathogens use T3Es to interfere with the induction of defense-associated genes. Elucidating effector functions and identifying their plant target proteins are essential for understanding bacterial pathogenesis. The aim of this work was the functional characterization of the type III effector protein XopS from Xanthomonas campestris pv. vesicatoria (Xcv). A particular focus lay on the interaction between XopS and its plant interaction partner WRKY40, a transcriptional regulator of defense-associated gene expression identified in preliminary work. XopS was shown to be an essential virulence factor of the phytopathogen Xcv during the pre-invasive immune response: xopS-deficient Xcv bacteria showed markedly reduced virulence compared to wild-type Xcv when inoculated onto the leaf surface of susceptible pepper plants. Translocation of XopS by Xcv, as well as ectopic expression of XopS in Arabidopsis or N. benthamiana, prevented stomatal closure in response to bacteria or a pathogen-associated stimulus, and this was shown to occur in a WRKY40-dependent manner. XopS was further shown to be able to manipulate the expression of defense-associated genes, indicating that XopS interferes with both pre-invasive and post-invasive, apoplastic defense. Phytohormone signaling networks play an important role in mounting an efficient plant immune response, and XopS appears to interfere with precisely these networks. Ectopic expression of the effector in Arabidopsis, for example, led to a significant induction of the phytohormone jasmonic acid (JA), while infection of susceptible pepper plants with a xopS-deficient Xcv strain led to a likewise significant accumulation of salicylic acid (SA).
At this point it can therefore be assumed that XopS promotes the virulence of Xcv by inducing JA-dependent signaling pathways while simultaneously suppressing SA-dependent signaling. Virus-induced gene silencing of the XopS interaction partner WRKY40a in pepper increased the plant's tolerance to Xcv infection, suggesting that this protein is a transcriptional repressor of plant immune responses. The hypothesis that WRKY40 represses defense-associated gene expression was corroborated here by several experimental approaches. For example, the expression of various defense genes, including the SA-dependent gene PR1 and JAZ8, a negative regulator of JA signaling, was shown to be repressed by WRKY40. To ensure defense-associated gene expression upon pathogen attack, WRKY40, as a negative regulator, must be degraded. Preliminary work showed that WRKY40 is degraded via the 26S proteasome. The present study further confirmed that the T3E XopS stabilizes the WRKY40 protein by preventing, in an as yet unresolved manner, its degradation via the 26S proteasome. The results of this work suggest that the stabilization of the immune-response negative regulator WRKY40 by XopS leads to a manipulation of defense-associated gene expression and a redirection of phytohormonal interactions, which promotes the spread of Xcv on susceptible pepper plants. A further aim of this work was to identify additional potential in planta interaction partners of XopS that could be relevant to its interaction with WRKY40 or to deciphering its mode of action. The deubiquitinase UBP12 was identified as a further plant interaction partner of both XopS and WRKY40. This enzyme is able to modify the ubiquitination of substrate proteins, and its function could thus be a link between XopS and its interference with the proteasomal degradation of WRKY40. During a compatible Xcv-host interaction, virus-induced gene silencing of UBP12 led to reduced plant resistance to the pathogen Xcv, indicating a positive regulatory role of UBP12 in the immune response. In addition, Western blot analyses showed that the WRKY40 protein accumulates when UBP12 is downregulated, and that this accumulation is further enhanced by the presence of the T3E XopS. Further analyses for the biochemical characterization of the XopS/WRKY40/UBP12 interaction should be carried out in the future to further decipher the exact mode of action of the XopS T3E.
Boon and bane
(2021)
Semi-natural habitats (SNHs) in agricultural landscapes represent important refugia for biodiversity, including organisms providing ecosystem services. Their spill-over into agricultural fields may lead to the provision of regulating ecosystem services such as biological pest control, ultimately affecting agricultural yield. Still, it remains largely unexplored how different habitat types and their distributions in the surrounding landscape shape this provision of ecosystem services within arable fields. Hence, in this thesis I investigated the effect of SNHs on biodiversity-driven ecosystem services and disservices affecting wheat production, with an emphasis on the role and interplay of habitat type, distance to the habitat and landscape complexity.
I established transects from the field border into the wheat field, starting either from a field-to-field border, a hedgerow, or a kettle hole, and assessed beneficial and detrimental organisms and their ecosystem functions as well as wheat yield at several in-field distances. Using this study design, I conducted three studies where I aimed to relate the impacts of SNHs at the field and at the landscape scale on ecosystem service providers to crop production.
In the first study, I observed yield losses close to SNHs for all transect types. Woody habitats such as hedgerows reduced yields more strongly than kettle holes did, most likely due to shading by the tall vegetation. In order to find the biotic drivers of these yield losses close to SNHs, I measured infestation by selected wheat pests as potential ecosystem disservices to crop production in the second study. Besides relating their damage rates to the wheat yield of experimental plots, I studied the effect of SNHs on these pest rates at the field and at the landscape scale. Only weed cover could be associated with yield losses, with the strongest impact on wheat yield close to the SNH. While fungal seed infection rates did not respond to SNHs, fungal leaf infection and herbivory rates of cereal leaf beetle larvae were positively influenced by kettle holes. The latter even increased at kettle holes with increasing landscape complexity, suggesting a release from natural enemies in isolated habitats within the field interior.
In the third study, I found that ecosystem service providers also benefit from the presence of kettle holes. The distance to a SNH decreased species richness of ecosystem service providers, whereby the spatial range depended on species mobility: arable weeds diminished rapidly, while carabids were less affected by the distance to a SNH. Conversely, weed seed predation increased with distance, suggesting that higher food availability at field borders might have diluted predation on the experimental seeds. Intriguingly, responses to landscape complexity were mixed: while weed species richness generally increased with landscape complexity, carabids followed a hump-shaped curve with the highest species numbers and activity-density in simple landscapes. This may hint that carabids profit from a minimum endowment of SNHs, while a further increase impedes their mobility. Weed seed predation was affected differently by landscape complexity depending on the weed species offered. However, in habitat-rich landscapes seed predation of the different weed species converged to similar rates, emphasizing that landscape complexity can stabilize the provision of ecosystem services. Lastly, I could relate higher weed seed predation to an increase in wheat yield, even though seed predation did not diminish weed cover. The exact mechanisms by which weed control contributes to crop production remain to be investigated in future studies.
In conclusion, I found habitat-specific responses of ecosystem (dis)service providers and their functions, emphasizing the need to evaluate the effect of different habitat types on the provision of ecosystem services not only at the field scale but also at the landscape scale. My findings confirm that, besides identifying the species richness of ecosystem (dis)service providers, the assessment of their functions is indispensable to relate the actual delivery of ecosystem (dis)services to crop production.
Halide perovskites are a class of novel photovoltaic materials that have recently attracted much attention in the photovoltaics research community due to their highly promising optoelectronic properties, including large absorption coefficients and long carrier lifetimes. The charge carrier mobility of halide perovskites is investigated in this thesis by THz spectroscopy, a contact-free technique that yields the intra-grain sum mobility of electrons and holes in a thin film.
The polycrystalline halide perovskite thin films, provided by Potsdam University, show moderate mobilities in the range from 21.5 to 33.5 cm² V⁻¹ s⁻¹. It is shown in this work that the room-temperature mobility is limited by charge carrier scattering at polar optical phonons. The mobility at low temperature is likely limited by scattering at charged and neutral impurities at impurity concentrations N = 10¹⁷-10¹⁸ cm⁻³. Furthermore, it is shown that exciton formation may decrease the mobility at low temperatures. Scattering at acoustic phonons can be neglected at both low and room temperature. The analysis of mobility spectra over a broad range of temperatures for perovskites with various cation compositions shows that the cations have a minor impact on charge carrier mobility.
The low-dimensional thin films of quasi-2D perovskites with different numbers of [PbI₆]⁴⁻ sheets (n = 2-4) alternating with long organic spacer molecules were provided by S. Zhang from Potsdam University. They exhibit mobilities in the range from 3.7 to 8 cm² V⁻¹ s⁻¹. A clear decrease of mobility is observed with decreasing number of metal-halide sheets n, which likely arises from charge carrier confinement within the metal-halide layers. Modelling the measured THz mobility with the modified Drude-Smith model yields localization lengths from 0.9 to 3.7 nm, in good agreement with the thicknesses of the metal-halide layers. Additionally, the mobilities are found to depend on the orientation of the layers. The charge carrier dynamics also depend on the number of metal-halide sheets n. For the thin films with n = 3-4 the dynamics are similar to those of the 3D MHPs. However, the thin film with n = 2 shows clearly different dynamics, where signs of exciton formation are observed within a 390 fs timeframe after photoexcitation.
Also, the charge carrier dynamics of CsPbI3 perovskite nanocrystals were investigated, in particular the effect of post-treatments on charge carrier transport.
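For orientation, the modified Drude-Smith conductivity used in such THz analyses has a compact closed form. The sketch below implements the widely used expression for the complex mobility spectrum and its DC limit; it is a generic illustration with placeholder parameters, not the fitting code or fitted values of the thesis:

```python
import numpy as np

e_charge = 1.602e-19   # elementary charge (C)
m_e      = 9.109e-31   # electron mass (kg)

def drude_smith_mobility(omega, tau, c, m_eff):
    """Complex mobility spectrum of the (modified) Drude-Smith model:
    mu(w) = (e*tau/m*) / (1 - i*w*tau) * (1 + c/(1 - i*w*tau)),
    where c in [-1, 0] parameterizes carrier backscattering/localization
    (c = 0 recovers the free-carrier Drude model)."""
    x = 1.0 - 1j * omega * tau
    return (e_charge * tau / m_eff) / x * (1.0 + c / x)

# Illustrative placeholder parameters (not fitted thesis values):
tau   = 10e-15        # scattering time: 10 fs
c     = -0.6          # backscattering parameter
m_eff = 0.2 * m_e     # effective carrier mass

freqs = np.linspace(0.2e12, 2.5e12, 200)            # 0.2-2.5 THz window
mu = drude_smith_mobility(2 * np.pi * freqs, tau, c, m_eff)

# Re(mu) and Im(mu) are what one fits to the measured THz spectrum;
# the model's DC mobility is mu_DC = e*tau*(1 + c)/m*:
mu_dc = e_charge * tau * (1.0 + c) / m_eff * 1e4    # cm^2 V^-1 s^-1
print(f"model DC mobility ~ {mu_dc:.1f} cm^2 V^-1 s^-1")
print(f"Re(mu) at 1 THz ~ "
      f"{mu[np.argmin(np.abs(freqs - 1e12))].real * 1e4:.1f} cm^2 V^-1 s^-1")
```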
Centroid moment tensor inversion can provide insight into ongoing tectonic processes and active faults. In the Alpine mountains of central Europe, challenges arise from the low signal-to-noise ratios of earthquakes with small to moderate magnitudes and from complex wave propagation effects in the heterogeneous crustal structure of the mountain belt. In this thesis, I make use of the temporary installation of the dense AlpArray seismic network (AASN) to establish a workflow for studying seismic source processes and to enhance knowledge of Alpine seismicity. The cumulative thesis comprises four publications on the topics of large seismic networks, seismic source processes in the Alps, their link to tectonics and the stress field, and the inclusion of small-magnitude earthquakes in studies of active faults.
Dealing with hundreds of stations of the dense AASN requires the automated assessment of data and metadata quality. I developed the open source toolbox AutoStatsQ to perform an automated data quality control. Its first application to the AlpArray seismic network has revealed significant errors of amplitude gains and sensor orientations. A second application of the orientation test to the Turkish KOERI network, based on Rayleigh wave polarization, further illustrated the potential in comparison to a P wave polarization method. Taking advantage of the gain and orientation results of the AASN, I tested different inversion settings and input data types to approach the specific challenges of centroid moment tensor (CMT) inversions in the Alps. A comparative study was carried out to define the best fitting procedures.
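The idea behind a Rayleigh-wave polarization orientation test can be sketched as follows: for a Rayleigh wave the vertical and radial components are in quadrature, so the Hilbert-transformed vertical correlates with the true radial, and scanning a rotation angle over the horizontals recovers the sensor misorientation. This is a conceptual sketch on synthetic data, with sign and rotation conventions chosen for the example; it is not AutoStatsQ's actual implementation:

```python
import numpy as np
from scipy.signal import hilbert

def estimate_correction(z, n, e, baz_deg):
    """Grid search for the horizontal-rotation correction that maximizes
    the correlation between the Hilbert-transformed vertical (the
    expected Rayleigh-wave radial motion) and the rotated radial."""
    ref = np.imag(hilbert(z))                  # 90-deg-shifted vertical
    angles = np.arange(-90.0, 90.0, 0.5)
    def cc(corr):
        ang = np.deg2rad(baz_deg + corr)
        radial = -n * np.cos(ang) - e * np.sin(ang)
        return np.corrcoef(ref, radial)[0, 1]
    ccs = [cc(a) for a in angles]
    return angles[int(np.argmax(ccs))]

# Synthetic test: a Rayleigh wavelet from backazimuth 60 deg, recorded on
# a sensor whose horizontals are rotated by 12 deg.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 120.0, 4001)
z = np.sin(2*np.pi*0.05*t) * np.exp(-((t - 60.0) / 15.0)**2)
radial = np.imag(hilbert(z))                   # retrograde particle motion
ang = np.deg2rad(60.0 + 12.0)                  # apparent arrival azimuth
n = -radial*np.cos(ang) + 0.05*rng.standard_normal(t.size)
e = -radial*np.sin(ang) + 0.05*rng.standard_normal(t.size)

print("recovered correction:", estimate_correction(z, n, e, 60.0), "deg")  # ~12
```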
The application to 4 years of seismicity in the Alps (2016-2019) substantially increased the number of moment tensor solutions in the region. We provide a list of moment tensor solutions down to magnitude Mw 3.1. Spatial patterns of typical focal mechanisms were analyzed in their seismotectonic context by comparing them to long-term seismicity, historical earthquakes and observations of strain rates. Additionally, we use our MT solutions to investigate stress regimes and orientations along the Alpine chain. Finally, I addressed the challenge of including smaller-magnitude events in the study of active faults and source processes. The open-source toolbox Clusty was developed for clustering earthquakes based on waveforms recorded across a network of seismic stations. The similarity of waveforms reflects both the location and the similarity of the source mechanisms. The clustering therefore offers the opportunity to identify earthquakes of similar faulting style even when centroid moment tensor inversion is not possible due to low signal-to-noise ratios of surface waves or oversimplified velocity models. The toolbox is described through an application to the 2018 Zakynthos aftershock sequence, and I subsequently discuss its potential application to weak earthquakes (Mw < 3.1) in the Alps.
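The waveform-similarity clustering underlying such a toolbox can be illustrated generically: compute a station-averaged cross-correlation matrix between event waveforms, convert it to distances, and apply a density-based clusterer. The sketch below is a minimal stand-in for that workflow, not Clusty's actual code:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def max_norm_xcorr(a, b):
    """Maximum of the normalized cross-correlation between two traces,
    allowing small time shifts between the events."""
    a = (a - a.mean()) / (a.std() * len(a))
    b = (b - b.mean()) / b.std()
    return float(np.max(np.correlate(a, b, mode="full")))

def cluster_events(waveforms, eps=0.3, min_samples=2):
    """waveforms: array of shape (n_events, n_stations, n_samples).
    Builds a station-averaged similarity matrix and clusters the derived
    distance matrix with DBSCAN; label -1 marks unclustered events."""
    n_ev, n_st, _ = waveforms.shape
    cc = np.ones((n_ev, n_ev))
    for i in range(n_ev):
        for j in range(i + 1, n_ev):
            sims = [max_norm_xcorr(waveforms[i, s], waveforms[j, s])
                    for s in range(n_st)]
            cc[i, j] = cc[j, i] = np.mean(sims)
    dist = 1.0 - np.clip(cc, 0.0, 1.0)
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric="precomputed").fit_predict(dist)

# Toy demo: three noisy copies of one wavelet plus one different event,
# each "recorded" at two stations.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)
w1 = np.sin(2*np.pi*1.0*t) * np.exp(-(t - 5.0)**2)
w2 = np.sin(2*np.pi*2.3*t) * np.exp(-(t - 4.0)**2)
wf = np.array([[w + 0.05*rng.standard_normal(t.size) for _ in range(2)]
               for w in (w1, w1, w1, w2)])
print(cluster_events(wf))   # three similar events share a label; w2 gets -1
```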
Lie group methods in combination with the Magnus expansion are utilized to develop a universal method applicable to solving a Sturm–Liouville problem (SLP) of any order with arbitrary boundary conditions. It is shown that the method is able to solve direct regular and some singular SLPs of even order (tested up to order eight), with a mix of boundary conditions (including non-separable conditions and finite singular endpoints), accurately and efficiently.
The technique is then successfully applied to overcome the difficulties in finding suitable sets of eigenvalues so that the inverse SLP can be effectively solved.
Next, a concrete implementation of the inverse Sturm–Liouville algorithm proposed by Barcilon (1974) is provided, and the computational feasibility and applicability of this algorithm to solving inverse SLPs of order n = 2, 4 is verified. The method is observed to succeed even in the presence of significant noise, provided that the assumptions of the algorithm are satisfied.
In conclusion, this work provides methods that can be successfully adapted to solve a direct (regular or singular) or inverse SLP of arbitrary order with arbitrary boundary conditions.
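For the classical second-order case -y'' + q(x)y = λy with Dirichlet conditions, the core of such a Lie-group/Magnus solver reduces to propagating a transfer matrix by exponentials of the coefficient matrix evaluated at interval midpoints (the second-order Magnus scheme) and locating the zeros of the boundary mismatch. A minimal sketch of that idea, not the thesis implementation (which handles higher orders and general boundary conditions):

```python
import numpy as np
from scipy.linalg import expm

def transfer_matrix(q, lam, a=0.0, b=np.pi, steps=400):
    """Propagate Y' = A(x) Y with A = [[0, 1], [q(x) - lam, 0]] using the
    exponential-midpoint (second-order Magnus) integrator."""
    h = (b - a) / steps
    M = np.eye(2)
    for k in range(steps):
        xm = a + (k + 0.5) * h
        A = np.array([[0.0, 1.0], [q(xm) - lam, 0.0]])
        M = expm(h * A) @ M
    return M

def dirichlet_mismatch(q, lam):
    """With y(a) = 0, y'(a) = 1, lam is an eigenvalue iff y(b) = M[0,1] = 0."""
    return transfer_matrix(q, lam)[0, 1]

def eigenvalue_bisection(q, lo, hi, tol=1e-10):
    """Bisection on the boundary mismatch over a sign-changing bracket."""
    flo = dirichlet_mismatch(q, lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * dirichlet_mismatch(q, mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, dirichlet_mismatch(q, mid)
    return 0.5 * (lo + hi)

# Sanity check on -y'' = lam*y over (0, pi): exact eigenvalues are 1, 4, 9, ...
q0 = lambda x: 0.0
print(eigenvalue_bisection(q0, 0.5, 2.0))   # -> 1.0
print(eigenvalue_bisection(q0, 3.0, 5.0))   # -> 4.0
```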
The aim of the doctoral project was to answer the question of whether structural word-initial noun capitalization, which apart from German is otherwise found only in Luxembourgish, has a function that benefits the reader. The overriding hypothesis was that an advantage arises because the parafoveal perception of the capital letter activates a syntactic category, namely the head of a noun phrase. This perception from the corner of the eye should make it possible to preprocess the following noun. As a result, sentence processing should be facilitated, which should ultimately be reflected in overall faster reading times and fixation durations.
The structure of the project comprises three studies, some of which included different participant groups:
Study 1:
- Study design: semantic priming using garden-path sentences, intended to bring out the functionality of noun capitalization for the reader
- Participant group: German natives reading German
Study 2:
- Study design: same design as Study 1, but in English
- Participant groups:
  - English natives without any knowledge of German, reading English
  - English natives who regularly read German, reading English
  - Germans with high proficiency in English, reading English
Study 3:
- Study design: influence of noun frequency on potential preprocessing, using the boundary paradigm; study languages: German and English
- Participant groups:
  - German natives reading German
  - English natives without any knowledge of German, reading English
  - Germans with high proficiency in English, reading English
Brief summary: noun capitalization clearly has an impact on sentence processing in both German and English; a substantial, decisive advantage, however, could not be confirmed.
Compound values are not universally supported in virtual machine (VM)-based programming systems and languages. However, providing data structures with value characteristics can be beneficial. On the one hand, programming systems and languages can adequately represent physical quantities with compound values and avoid inconsistencies, for example in the representation of large numbers. On the other hand, just-in-time (JIT) compilers, which are often found in VMs, can rely on the fact that compound values are immutable, an important property for program optimization. Compound values therefore have an optimization potential that can be exploited by implementing them in VMs in a way that is efficient in memory usage and execution time. Yet optimized compound values in VMs face certain challenges: to maintain consistency, it should not be observable by the program whether compound values are represented in an optimized way by the VM; an optimization should take into account that the usage of compound values can exhibit certain patterns at run-time; and value-incompatible properties that are necessary due to implementation restrictions should be kept to a minimum.
We propose a technique to detect and compress common patterns of compound value usage at run-time to improve memory usage and execution speed. Our approach identifies patterns of frequent compound value references and introduces abbreviated forms for them. Thus, it is possible to store multiple inter-referenced compound values in an inlined memory representation, reducing the overhead of metadata and object references. We extend our approach by a notion of limited mutability, using cells that act as barriers for our approach and provide a location for shared, mutable access with the possibility of type specialization. We devise an extension to our approach that allows us to express automatic unboxing of boxed primitive data types in terms of our initial technique. We show that our approach is versatile enough to express another optimization technique that relies on values, such as Booleans, that are unique throughout a programming system. Furthermore, we demonstrate how to re-use learned usage patterns and optimizations across program runs, thus reducing the performance impact of pattern recognition.
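The pattern-compression idea can be made concrete with a toy model: compound values whose reference patterns recur are detected at run-time and stored as one flat record without inner objects. The following sketch is a conceptual illustration in ordinary Python, far removed from the VM-level machinery evaluated in the thesis; names like PatternStore are invented for the example:

```python
from collections import Counter

class Compound:
    """A compound value: a tag plus a fixed tuple of fields
    (treated as immutable by convention in this sketch)."""
    __slots__ = ("tag", "fields")
    def __init__(self, tag, *fields):
        self.tag = tag
        self.fields = tuple(fields)
    def __repr__(self):
        return f"{self.tag}{self.fields}"

def shape_of(value):
    """The reference pattern of a value: its tag plus its children's shapes."""
    if not isinstance(value, Compound):
        return type(value).__name__
    return (value.tag, tuple(shape_of(f) for f in value.fields))

class PatternStore:
    """Counts observed shapes; once a shape is frequent enough, values of
    that shape are stored as one flat record of leaf fields (no inner
    objects, no per-object headers or references)."""
    def __init__(self, threshold=3):
        self.counts = Counter()
        self.threshold = threshold
        self.inlined = {}                     # shape -> list of flat records

    def intern(self, value):
        shape = shape_of(value)
        self.counts[shape] += 1
        if self.counts[shape] >= self.threshold:
            record = tuple(self._leaves(value))
            bucket = self.inlined.setdefault(shape, [])
            bucket.append(record)
            return ("inlined", shape, len(bucket) - 1)   # abbreviated handle
        return value                          # rare patterns stay boxed

    def _leaves(self, value):
        if not isinstance(value, Compound):
            yield value
        else:
            for f in value.fields:
                yield from self._leaves(f)

# A rectangle referencing two points: after enough sightings the whole
# nested structure is stored as one flat (x0, y0, x1, y1) record.
store = PatternStore()
for i in range(4):
    rect = Compound("Rect", Compound("Point", i, 0), Compound("Point", i, 1))
    print(store.intern(rect))
```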
We show in a best-case prototype that the implementation of our approach is feasible and can also be applied to general purpose programming systems, namely implementations of the Racket language and Squeak/Smalltalk. In several micro-benchmarks, we found that our approach can effectively reduce memory consumption and improve execution speed.
The present study deals with how actors plan and carry out their learning process, with the main focus on the use of learning strategies. The question is which strategies professional learners employ to achieve the textual fluency required for their profession, not how to optimize learning success.
The literature review made clear that current studies on adult learning are mostly situated in occupation-specific contexts and concern the acquisition of competencies, problem-solving strategies and social participation. Actors' learning, by contrast, is not based on any intention of behavioral change or concrete gain in knowledge.
For actors, performing is part of their professional culture. Given that precise factual knowledge is of decisive importance as the basis for competent, convincing presentation, the results of the study are also relevant for professional groups that must appear in public, such as priests, lawyers and teachers. The same applies to pupils and students who have to give talks and/or present papers.
For the empirical investigation, twelve renowned actors were interviewed using problem-centered interviews, followed by a qualitative content analysis.
The analysis of the data demonstrates a clear connection between the body and speaking practice. It likewise shows how important movement is for the learning process. Results were generated with respect to cognitive, metacognitive and resource-oriented strategies, with the learning environment and learning with colleagues being of decisive importance.
The propagation of test fields, such as electromagnetic fields, Dirac fields, or linearized gravity, on a fixed spacetime manifold is often studied using the geometrical optics approximation. In the limit of infinitely high frequencies, the geometrical optics approximation provides a conceptual transition between the test field and an effective point-particle description. The corresponding point particles, or wave rays, coincide with the geodesics of the underlying spacetime. For most astrophysical applications of interest, such as the observation of celestial bodies, gravitational lensing, or the detection of cosmic rays, the geometrical optics approximation and the effective point-particle description represent a satisfactory theoretical model. However, the geometrical optics approximation gradually breaks down as test fields of finite frequency are considered.
In this thesis, we consider the propagation of test fields on spacetime, beyond the leading-order geometrical optics approximation. By performing a covariant Wentzel-Kramers-Brillouin analysis for test fields, we show how higher-order corrections to the geometrical optics approximation can be considered. The higher-order corrections are related to the dynamics of the spin internal degree of freedom of the considered test field. We obtain an effective point-particle description, which contains spin-dependent corrections to the geodesic motion obtained using geometrical optics. This represents a covariant generalization of the well-known spin Hall effect, usually encountered in condensed matter physics and in optics. Our analysis is applied to electromagnetic and massive Dirac test fields, but it can easily be extended to other fields, such as linearized gravity. In the electromagnetic case, we present several examples where the gravitational spin Hall effect of light plays an important role. These include the propagation of polarized light rays on black hole spacetimes and cosmological spacetimes, as well as polarization-dependent effects on the shape of black hole shadows. Furthermore, we show that our effective point-particle equations for polarized light rays reproduce well-known results, such as the spin Hall effect of light in an inhomogeneous medium, and the relativistic Hall effect of polarized electromagnetic wave packets encountered in Minkowski spacetime.
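For readers unfamiliar with the expansion, the generic shape of a covariant WKB ansatz and its leading-order consequence can be written schematically; the thesis treats the full polarized field equations, so this block only illustrates the bookkeeping in the small parameter ε:

```latex
% Schematic covariant WKB ansatz for a field \Phi on a fixed spacetime,
% with small parameter \varepsilon (inverse frequency):
\Phi(x) = \left( A_0(x) + \varepsilon A_1(x) + \mathcal{O}(\varepsilon^2) \right)
          e^{\, i S(x) / \varepsilon}, \qquad k_a = \nabla_a S .
% At lowest order the field equations reduce to the eikonal relation
%   g^{ab} k_a k_b = 0,
% whose characteristics are the null geodesics of geometrical optics;
% the spin-dependent (spin Hall) corrections to the ray equations enter
% at the next order in \varepsilon, via the transport of the amplitude
% and polarization.
```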
Today, the Mekong Delta in southern Vietnam is home to 18 million people. The delta also accounts for more than half of the country's food production and 80% of its exported rice. Due to its low elevation, it is highly susceptible to fluvial and coastal flooding. Although extreme floods often result in excessive damage and economic losses, the annual flood pulse of the Mekong is vital to sustaining agricultural cultivation and the livelihoods of millions of delta inhabitants.
Delta-wide risk management and adaptation strategies are required to mitigate the adverse impacts of extreme events while capitalizing on the benefits of floods. However, proper flood risk management has not been implemented in the Vietnamese Mekong Delta (VMD), because the quantification of flood damage is often overlooked and the risks are thus not quantified. So far, flood management has focused exclusively on engineering measures, i.e. high- and low-dyke systems aiming at flood-free conditions or partial inundation control, without any consideration of the actual risks or a cost-benefit analysis. An analysis of future delta flood dynamics driven by these stressors is therefore valuable to facilitate the transition from sole hazard control towards a risk management approach, which is more cost-effective and also robust against future changes in risk.
Building on these research gaps, this thesis investigates the current state and future projections of flood hazard, damage and risk to rice cultivation, the most important economic activity in the VMD. The study quantifies the changes in risk and hazard brought about by the development of delta-based flood control measures in recent decades, and analyses the expected changes in risk driven by the changing climate, rising sea level, deltaic land subsidence, and the development of hydropower projects in the Mekong Basin. For this purpose, flood trend analyses and comprehensive hydraulic modelling were performed, together with the development of a concept to quantify flood damage and risk to rice cultivation.
The analysis of observed flood levels revealed strong and robust increasing trends in flood peak and duration downstream of the high-dyke areas, with a step change in 2000/2001, i.e. after the disastrous flood that initiated the high-dyke development. These changes contrast with the negative trends detected upstream, suggesting that high-dyke development has shifted flood hazard downstream. The findings of the trend analysis were later confirmed by hydraulic simulations of the two recent extreme floods of 2000 and 2011, in which the hydrological boundaries and dyke system settings were interchanged.
However, the high-dyke system was not the only, and often not the main, cause of the shift in flood hazard, as a comparative analysis of these two extreme floods showed. The high-dyke development was responsible for 20–90% of the observed changes in flood level between 2000 and 2011, with large spatial variance. The particular flood hydrographs of the two events had the highest contribution in the northern part of the delta, while the tidal level had a 2–3 times higher influence than the high-dyke development in the lower-central and coastal areas downstream of the high-dyke areas. The impact of the high-dyke development was highest in the areas immediately downstream, just south of the Cambodia-Vietnam border. The hydraulic simulations also confirmed that the concurrence of the flood peak with spring tides, i.e. high sea level along the coast, substantially amplified flood levels and inundation in the central and coastal regions.
The risk assessment quantified the economic losses to rice cultivation at USD 25.0 million and USD 115 million (0.02–0.1% of the total GDP of Vietnam in 2011) for the 10-year and the 100-year floods, respectively, with an expected annual damage of about USD 4.5 million. A particular finding is that flood damage was highly sensitive to flood timing: a 10-year event with an early peak, i.e. in late August-September, could cause as much damage as a 100-year event peaking in October. This finding underlines the importance of reliable early flood warning, which could substantially reduce the damage to rice crops and thus the risk.
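The expected-annual-damage figure is, in essence, the integral of the damage-probability curve. The sketch below shows the standard trapezoidal approximation using only the two return periods quoted above plus an assumed no-damage threshold at the 5-year flood; because the thesis integrates over a fuller set of scenarios, the illustrative number differs from the reported USD 4.5 million:

```python
# Expected annual damage: EAD = integral of D(p) over the annual exceedance
# probability p = 1/T, approximated with the trapezoidal rule.

# (return period T [yr], damage D [million USD]); zero damage below the
# 5-year flood is an assumption made only for this illustration.
scenarios = [(5, 0.0), (10, 25.0), (100, 115.0)]

def expected_annual_damage(scenarios):
    # sort by exceedance probability, most frequent event first
    pts = sorted(((1.0 / T, D) for T, D in scenarios), reverse=True)
    ead = 0.0
    for (p1, d1), (p2, d2) in zip(pts, pts[1:]):
        ead += 0.5 * (d1 + d2) * (p1 - p2)   # trapezoid between scenarios
    return ead

print(f"EAD ~ USD {expected_annual_damage(scenarios):.1f} million per year")
```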
The developed risk assessment concept was furthermore applied to investigate two high-dyke development alternatives currently under discussion among the administrative bodies in Vietnam, and also in public. The first option, favouring the use of the current high-dyke compartments as flood retention areas instead of for rice cropping during the flood season, could reduce flood hazard and expected losses by 5–40%, depending on the region of the delta. By contrast, the second option, promoting the further extension of the areas protected by high dykes to facilitate third rice crop planting on a larger area, would triple the current expected annual flood damage. This finding challenges the expected economic benefit of triple rice cultivation, in addition to the already known reduction of nutrient supply by floodplain sedimentation and the resulting higher costs for fertilizers.
The economic benefits of the high-dyke and triple-rice-cropping system are further challenged by the changes in flood dynamics expected in the future. For the middle of the 21st century (2036-2065), effective sea-level rise was projected to increase the inundation extent by 20–27%. This corresponds to an increase in flood damage to rice crops of USD 26.0, 40.0 and 82.0 million in dry, normal and wet years, respectively, compared to the baseline period 1971-2000.
Hydraulic simulations indicated that the planned massive development of hydropower dams in the Mekong Basin could potentially compensate the increase in flood hazard and agricultural losses stemming from climate change. However, the benefits of dams for the mitigation of flood losses are highly uncertain, because a) the actual development of the dams is highly disputed, b) the operation of the dams primarily targets power generation, not flood control, and c) flood control would require international agreements and cooperation, which are difficult to achieve in South-East Asia. The theoretical flood mitigation benefit is additionally challenged by a number of negative impacts of dam development, e.g. the disruption of floodplain inundation in normal, non-extreme flood years. Together with the certain reduction of sediment and nutrient load to the floodplains, hydropower dams would drastically impair rice and agricultural production, the basis of the livelihoods of millions of delta inhabitants.
In conclusion, the VMD is expected to face increasing threats from tidally induced floods in the coming decades. Protecting the entire delta coastline solely with "hard" engineering flood protection structures is neither technically nor economically feasible; adaptation and mitigation actions are urgently required. Better control and reduction of groundwater abstraction is therefore strongly recommended as an immediate, high-priority action to reduce land subsidence and thus tidal flooding and salinity intrusion in the delta. Hydropower development in the Mekong Basin might offer some theoretical flood protection for the Mekong Delta, but owing to uncertainties in dam operation and a number of negative effects, dam development cannot be recommended as a flood management strategy. The Vietnamese authorities are advised to properly maintain the existing flood protection structures and to develop flexible, risk-based flood management plans. In this context, the study showed that the high-dyke compartments can be utilized for emergency flood management during extreme events. For this purpose, reliable flood forecasting is essential, and the action plan should be laid down in official documents and legislation to ensure commitment and consistency in implementation and operation.
Over the last decades, the rate of near-surface warming in the Arctic has been at least twice that elsewhere on our planet (Arctic amplification). However, the relative contribution of different feedback processes to Arctic amplification is a topic of ongoing research, including the role of aerosol and clouds. Lidar systems are well suited for the investigation of aerosol and optically thin clouds, as they provide vertically resolved information on fine temporal scales. Global aerosol models fail to converge even on the sign of the Arctic aerosol radiative effect (ARE). In the first part of this work, the optical and microphysical properties of Arctic aerosol were characterized at case-study level in order to assess the short-wave (SW) ARE. A long-range transport episode was investigated first. Geometrically similar aerosol layers were captured over three locations. Although the aerosol size distribution differed between Fram Strait (bi-modal) and Ny-Ålesund (fine mono-modal), the atmospheric column ARE was similar, which was related to the dominance of accumulation-mode aerosol. Over both locations, top-of-the-atmosphere (TOA) warming was accompanied by surface cooling.
Subsequently, the sensitivity of the ARE was investigated with respect to different aerosol and spring-time ambient conditions. A 10% change in the single-scattering albedo (SSA) induced higher ARE perturbations than a 30% change in the aerosol extinction coefficient. With respect to ambient conditions, the TOA ARE was more sensitive to solar elevation changes than the surface ARE. Over dark surfaces the ARE profile was exclusively negative, while over bright surfaces a negative-to-positive shift occurred above the aerosol layers. Consequently, the sign of the ARE can be highly sensitive in spring, since this season is characterized by transitional surface-albedo conditions.
As the inversion of the aerosol microphysics is an ill-posed problem, the inferred aerosol size distribution of a low-tropospheric event was compared to the in-situ measured distribution. Both techniques revealed a bi-modal distribution, with good agreement in the total volume concentration. However, in terms of SSA a disagreement was found, with the lidar inversion indicating highly scattering particles and the in-situ measurements pointing to absorbing particles. The discrepancies could stem from assumptions in the inversion (e.g. wavelength-independent refractive index) and errors in the conversion of the in-situ measured light attenuation into absorption. Another source of discrepancy might be related to an incomplete capture of fine particles in the in-situ sensors. The disagreement in the most critical parameter for the Arctic ARE necessitates further exploration in the frame of aerosol closure experiments. Care must be taken in ARE modelling studies, which may use either the in-situ or lidar-derived SSA as input.
Reliable characterization of cirrus geometrical and optical properties is necessary for improving estimates of their radiative effect. In this respect, the detection of sub-visible cirrus is of special importance: the total cloud radiative effect (CRE) can be negatively biased if only the optically thin and opaque cirrus contributions are considered. To this end, a cirrus retrieval scheme was developed, aiming at increased sensitivity to thin clouds. Cirrus detection was based on the wavelet covariance transform (WCT) method, extended by dynamic thresholds. The dynamic WCT exhibited high sensitivity to faint and geometrically thin cirrus layers (less than 200 m) that were partly or completely missed by the existing static method. The optical characterization scheme extended the Klett–Fernald retrieval by an iterative lidar ratio (LR) determination (constrained Klett). The iterative process was constrained by a reference value indicating the aerosol concentration beneath the cirrus cloud. Contrary to existing approaches, an aerosol-free atmosphere was not assumed; instead, the aerosol conditions were approximated by an initial guess. The inherent uncertainties of the constrained Klett were higher for optically thinner cirrus, but an overall good agreement was found with two established retrievals. Additionally, existing approaches that rely on aerosol-free assumptions showed increased accuracy when the proposed reference value was adopted. The constrained Klett reliably retrieved the optical properties in all cirrus regimes, including upper sub-visible cirrus with COD down to 0.02.
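The Haar wavelet covariance transform at the heart of such layer detection is straightforward to write down. In the sketch below, the "dynamic threshold" is represented by a simple noise-scaled criterion, which is a placeholder standing in for the scheme actually developed in the thesis:

```python
import numpy as np

def haar_wct(profile, z, dilation):
    """Haar wavelet covariance transform of a lidar profile:
    W(a, b) = (1/a) * integral of f(z) * h((z - b)/a) dz, with h = +1 in
    the half-window below b and -1 above it. With this sign convention,
    a sharp increase of backscatter with height (a layer base) produces
    a pronounced negative peak of W."""
    dz = z[1] - z[0]
    half = max(1, int(round(dilation / (2.0 * dz))))  # half-window (samples)
    w = np.full(profile.shape, np.nan)
    for i in range(half, len(profile) - half):
        below = profile[i - half:i].sum()
        above = profile[i:i + half].sum()
        w[i] = (below - above) * dz / dilation
    return w

def detect_layer_bases(profile, z, dilation=200.0, k=4.0):
    """Flag candidate layer bases where -W exceeds k times a noise level
    estimated from the uppermost range gates (a noise-scaled stand-in for
    the dynamic thresholds; assumes that range is layer-free)."""
    w = haar_wct(profile, z, dilation)
    noise = np.nanstd(w[-len(w) // 4:])
    return z[np.flatnonzero(-w > k * noise)]

# Toy demo: noisy profile with a thin enhanced-backscatter layer at 8-8.2 km.
rng = np.random.default_rng(0)
z = np.arange(0.0, 12000.0, 15.0)                  # range gates (m)
profile = 1.0 + 0.02 * rng.standard_normal(z.size)
profile[(z >= 8000.0) & (z <= 8200.0)] += 1.0
print(detect_layer_bases(profile, z)[:3])          # heights near the layer base
```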
Cirrus is the only cloud type capable of inducing either TOA cooling or heating during daytime. Over the Arctic, however, the properties and CRE of cirrus are under-explored. In the final part of this work, long-term cirrus geometrical and optical properties were investigated for the first time over an Arctic site (Ny-Ålesund), employing the newly developed retrieval scheme. Cirrus layers over Ny-Ålesund appear to be more absorbing in the visible spectral region than at lower latitudes and to comprise relatively more spherical ice particles. Such meridional differences could be related to differences in absolute humidity and ice nucleation mechanisms. The COD tended to decline for less spherical and smaller ice particles, probably due to reduced water vapor deposition on the particle surface. The cirrus optical properties showed only a weak dependence on ambient temperature and wind conditions.
Over the 10 years of the analysis, no clear temporal trend was found and the seasonal cycle was not pronounced. However, winter cirrus appeared under colder conditions and stronger winds; moreover, these clouds were optically thicker, less absorbing and consisted of relatively more spherical ice particles. A positive net CRE was revealed for a broad range of representative cloud properties and ambient conditions. Only for high COD (above 10) and over tundra was a negative net CRE estimated, and this did not hold over snow/ice surfaces. Consequently, the COD in combination with the surface albedo seems to play the most critical role in determining the sign of the CRE over the high European Arctic.
The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can promote the development of resistant bacteria in stables. Transmission through direct contact with animals and contamination of food has already been proven. The animals' excrement, combined with a binding material, opens a further potential dispersal path into the environment if it is used as organic fertilizer on agricultural land. As most airborne bacteria are attached to particulate matter, this work focuses on atmospheric dispersal via the dust fraction.
Field measurements on arable lands in Brandenburg, Germany and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric dust-associated spread of antibiotic-resistant bacteria from poultry manure fertilized agricultural soils. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion.
PM10 (PM, particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as by the spreading intensity of the manure spreader. During poultry manure application, PM10 emissions ranged between 0.05 kg ha-1 and 8.37 kg ha-1. For comparison, the subsequent land preparation contributed 0.35–1.15 kg ha-1 of PM10 emissions. Manure particles were still part of the dust emissions, but they accounted for less than 1% of total PM10 emissions owing to the dilution of poultry manure in the soil after incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent incorporation, although PM10 emissions from manure incorporation were larger than those from manure application for the non-dried manure variants.
Wind erosion leads to preferential detachment of manure particles from sandy soils when poultry manure has recently been incorporated. Sorting effects between the low-density organic particles of manure origin and the mineral soil particles were observed just above the threshold wind speed of 7 m s-1. Depending on the wind speed, potential erosion rates between 101 and 854 kg ha-1 were identified when 6 t ha-1 of poultry manure had been applied. Microbial investigation showed that manure bacteria were detached more easily from the soil surface during wind erosion, owing to their attachment to manure particles.
Although antibiotic-resistant bacteria (ESBL-producing E. coli) were found in the poultry barns, no contamination with them could be detected in the manure, the fertilized soils, or the dust generated by manure application, land preparation or wind erosion. Parallel studies within this project showed that storing poultry manure for a few days (36–72 h) is sufficient to inactivate ESBL-producing E. coli. Other antibiotic-resistant bacteria, i.e. MRSA and VRE, were found only sporadically in the stables and not at all in the dust. Based on the results of this work, the risk of infection by dust-associated antibiotic-resistant bacteria can therefore be considered low.
Deoxyribonucleic acid (DNA) nanostructures enable the attachment of functional molecules to nearly any unique location on their underlying structure. Due to their single-base-pair structural resolution, several ligands can be spatially arranged and closely controlled according to the geometry of their desired target, resulting in optimized binding and/or signaling interactions.
This dissertation covers three main projects. All of them use variations of functionalized DNA nanostructures that act as platforms for the oligovalent presentation of ligands. The purpose of this work was to evaluate the ability of DNA nanostructures to precisely display different types of functional molecules and to consequently enhance their efficacy according to the concept of multivalency. Moreover, functionalized DNA structures were examined for their suitability in functional screening assays. The developed DNA-based compound ligands were used to target structures in different biological systems.
One part of this dissertation attempted to bind pathogens with small modified DNA nanostructures. Pathogens such as viruses and bacteria are known for their multivalent attachment to host cell membranes. By blocking, in an oligovalent manner, the receptors they use for recognition and/or fusion with their targeted host, the objective was to impede their ability to adhere to and invade cells. For influenza A, only enhanced binding of oligovalent peptide-DNA constructs compared to the monovalent peptide could be observed, whereas in the case of respiratory syncytial virus (RSV), binding as well as blocking of the target receptors led to increased inhibition of infection in vitro.
In the final part, the ability of chimeric DNA-peptide constructs to bind to and activate signaling receptors on the surface of cells was investigated. Specific binding of DNA trimers, conjugated with up to three peptides, to EphA2 receptor expressing cells was evaluated in flow cytometry experiments. Subsequently, their ability to activate these receptors via phosphorylation was assessed. EphA2 phosphorylation was significantly increased by DNA trimers carrying three peptides compared to monovalent peptide. As a result of activation, cells underwent characteristic morphological changes, where they "round up" and retract their periphery.
The results obtained in this work comprehensively show the capability of DNA nanostructures to serve as stable, biocompatible, controllable platforms for the oligovalent presentation of functional ligands. Functionalized DNA nanostructures were used to enhance biological effects and as a tool for the functional screening of bio-activity. This work demonstrates that modified DNA structures have the potential to improve drug development and to unravel the activation of signaling pathways.
Elucidating the molecular basis of enhanced growth in the Arabidopsis thaliana accession Bur-0
(2021)
The life cycle of flowering plants is a dynamic process that involves successful passage through several developmental phases, and tremendous progress has been made in revealing the cellular and molecular regulatory mechanisms underlying these phases, morphogenesis, and growth. Although several key regulators of plant growth or developmental phase transitions have been identified in Arabidopsis, little is known about factors that become active during embryogenesis and seed development as well as during further postembryonic growth, and much less is known about accession-specific factors that determine plant architecture and organ size. Bur-0 has been reported as a natural Arabidopsis thaliana accession with exceptionally big seeds and a large rosette; its phenotype makes it an interesting candidate for studying growth and developmental aspects in plants, yet the molecular basis underlying this big phenotype remains to be elucidated. Thus, the general aim of this PhD project was to investigate and unravel the molecular mechanisms underlying the big phenotype of Bur-0.
Several natural Arabidopsis accessions and late flowering mutant lines were analysed in this study, including Bur-0. Phenotypes were characterized by determining rosette size, seed size, flowering time, SAM size and growth in different photoperiods, during embryonic and postembryonic development. Our results demonstrate that Bur-0 stands out as an interesting accession with simultaneously larger rosettes, larger SAM, later flowering phenotype and larger seeds, but also larger embryos. Interestingly, inter-accession crosses (F1) resulted in bigger seeds than the parental self-crossed accessions, particularly when Bur-0 was used as the female parental genotype, suggesting parental effects on seed size that might be maternally controlled. Furthermore, developmental stage-based comparisons revealed that the large embryo size of Bur-0 is achieved during late embryogenesis and the large rosette size is achieved during late postembryonic growth. Interestingly, developmental phase progression analyses revealed that from germination onwards, the length of developmental phases during postembryonic growth is delayed in Bur-0, suggesting that in general, the mechanisms that regulate developmental phase progression are shared across developmental phases.
On the other hand, a detailed physiological characterization of different tissues at different developmental stages revealed accession-specific physiological and metabolic traits that underlie accession-specific phenotypes; in particular, more carbon resources were found in Bur-0 during embryonic and postembryonic development, suggesting an important role of carbohydrates in the determination of the bigger Bur-0 phenotype. Additionally, cellular organization, nuclear DNA content and ploidy level were analyzed in different tissues/cell types, and we found that the large organ size of Bur-0 can be attributed mainly to its larger cells and to higher cell proliferation in the SAM, but not to a different ploidy level.
Furthermore, RNA-seq analysis of embryos at the torpedo and mature stages, as well as of SAMs at the vegetative and floral transition stages of Bur-0 and Col-0, was conducted to identify accession-specific genetic determinants of plant phenotypes that are shared across tissues and developmental stages during embryonic and postembryonic growth. Potential candidate genes were identified. Further validation of the transcriptome data by expression analyses of candidate genes, as well as of known key regulators of organ size and growth during embryonic and postembryonic development, confirmed that the high-confidence transcriptome datasets generated in this study are reliable for elucidating the molecular mechanisms regulating plant growth and accession-specific phenotypes in Arabidopsis.
Taken together, this PhD project contributes to the field of plant development research by providing a detailed analysis of the mechanisms underlying plant growth and development at different levels of biological organization, focusing on Arabidopsis accessions with remarkable phenotypic differences. For this, the natural accession Bur-0 was an ideal outlier candidate, and different mechanisms at the organ and tissue, cellular, metabolic, and transcriptional levels were identified, providing a better understanding of the factors involved in plant growth regulation and of the mechanisms underlying different growth patterns in nature.
Bottom-up synthetic biology seeks to understand how a cell works by developing techniques to produce lipid-based vesicular structures as cellular mimics. The most common techniques used to produce cellular mimics or synthetic cells are electroformation and swelling methods. However, these techniques cannot efficiently encapsulate macromolecules such as proteins, enzymes, DNA, or even liposomes as synthetic organelles. This creates the need for new techniques that circumvent this issue and make the artificial cell a reality, in which a eukaryotic cell can be imitated by encapsulating macromolecules. In this thesis, the aim was to construct a cell system using giant unilamellar vesicles (GUVs) to reconstitute the mitochondrial molybdenum cofactor (Moco) biosynthetic pathway. This pathway is highly conserved among all life forms and is known for its biological significance, since its malfunction induces severe disorders. Furthermore, the pathway itself is a multi-step enzymatic reaction that takes place in different compartments. Initially, GTP in the mitochondrial matrix is converted to cPMP in the presence of cPMP synthase. The produced cPMP is then transported across the membrane to the cytosol, where it is converted by MPT synthase into MPT. This pathway thus provides an opportunity to address the general challenges faced in the development of a synthetic cell: encapsulating large biomolecules with good efficiency and greater control, and evaluating the enzymatic reactions involved in the process.
For this purpose, an emulsion-based technique was developed and optimised to allow rapid production of GUVs (~18 min) with high encapsulation efficiency (80%). This was made possible by optimising various parameters such as density, type of oil, the impact of centrifugation speed/time, lipid concentration, pH, temperature, and emulsion droplet volume. Furthermore, the method was optimised in microtiter plates for direct experimentation and visualization after GUV formation. Using this technique, the two steps – the formation of cPMP from GTP and the formation of MPT from cPMP – were encapsulated in different sets of GUVs to mimic the two compartments. Two independent fluorescence-based detection systems were established to confirm the successful encapsulation and conversion of the reactants. Additionally, the enzymes were produced by bacterial expression and their activity was measured. Following the successful encapsulation and evaluation of the enzymatic reactions, cPMP transport across the mitochondrial membrane was mimicked using GUVs with a complex mitochondrial lipid composition. It was found that the interaction of cPMP with the lipid bilayer results in transient pore formation and leakage of internal contents.
Overall, it can be concluded that in this thesis a novel technique has been optimised for the fast production of functional synthetic cells. The individual enzymatic steps of the Moco biosynthetic pathway have been successfully implemented and quantified within these cellular mimics.
On the influence of adaptivity on the perception of complexity in human-technology interaction
(2021)
We live in a society shaped by a constant desire for innovation and progress. Consequences of this desire are the ever-advancing digitalization and computational interconnection of all areas of life, leading to increasingly complex socio-technical systems. The goals of these systems include supporting people, improving their living conditions or quality of life, and extending human capabilities. Yet new complex technical systems do not only have positive social and societal effects. There are often undesired side effects that only become visible in use, and both designers and users of complex networked technologies often feel disoriented. The consequences can range from declining acceptance to a complete loss of trust in networked software systems. Since complex applications, and with them increasingly complex human-technology interactions, are gaining ever more relevance, it is all the more important to regain orientation. To do so, we must first identify those elements that contribute to complexity in the interaction with networked socio-technical systems and thus create a need for orientation.
This thesis aims to contribute to enabling structured reflection on the complexity of networked socio-technical systems throughout the entire design process. To this end, a definition of complexity and complex systems is first developed that goes beyond the computer science understanding of complexity (i.e., the complicatedness of problems, algorithms, or data). The focus is instead on socio-technical interaction with and within complex networked systems. Based on this definition, an analysis tool is then developed that makes the complexity in the interaction with socio-technical systems visible and describable.
One area in which networked socio-technical systems are increasingly being adopted is that of digital educational technologies. Adaptive educational technologies in particular have been attributed great potential in recent decades. Two adaptive teaching and training systems are therefore examined as examples with the analysis tool developed in this thesis. Special attention is paid to the influence of adaptivity on the complexity of human-technology interaction situations. Empirical studies examine the experiences of designers and users of those adaptive systems in order to identify the decisive criteria for complexity. In this way, recurring questions of orientation in the development of adaptive educational technologies can be uncovered, and interaction situations perceived as complex are identified. These situations show where, owing to the complexity of the system, users' established everyday routines no longer suffice to fully grasp the consequences of interacting with the system. This knowledge can help both designers and users to deal better with the inherent complexity of modern educational technologies in the future.
Monoclonal antibodies are essential tools in modern laboratory analytics as well as in medical therapy and diagnostics. The production of monoclonal antibodies is a time-consuming and labor-intensive process. Conventional methods rely on the immunization of laboratory animals, which can take several months. The antibody-producing B lymphocytes, or their antibody genes, are then isolated and examined in screening procedures to identify suitable binders.
Transferring the humoral immune response into an in vitro environment shortens the process and circumvents the need for in vivo immunization. However, reproducing the complex interplay of all involved immune cells in vitro proves difficult. The focus of this work was therefore the realization of a simplified in vitro immunization that concentrates on the protagonists of antibody production: the B lymphocytes. In addition, a permanent cell line was to be established that could be used for antibody production and would replace the use of primary cells.
In the first part of this work, a protocol for the in vitro immunization of murine B lymphocytes was established. In preliminary experiments, the optimal conditions for the antigen-specific activation of purified splenic B lymphocytes from non-immunized mice were determined. For this purpose, the influence of various stimuli on the production of unspecific and specific antibodies was examined. A combination of the model antigen VP1 (hamster polyomavirus coat protein 1), an anti-CD40 antibody, interleukin 4 (IL-4), and lipopolysaccharide (LPS) or IL-7 demonstrably induced an antigen-specific antibody response in vitro. Rapid proliferation and the expression of characteristic activation markers on the cell surface were detected as indicators of successful B lymphocyte activation following the in vitro stimulation. In a time series over ten days, the comparatively highest concentration of antigen-specific IgG antibodies in the culture supernatant of the stimulated cells was detected on day ten of the in vitro immunization.
As a next step, a permanent cell line was to be generated that could be used for the previously established in vitro immunization instead of primary B lymphocytes. For this purpose, retroviral vectors were produced that are intended to manipulate the proliferation behavior of murine B lymphocytes, or their precursor cells, by transferring various oncogenes. Retroviruses with doxycycline-inducible expression cassettes containing the oncogenes cmyc, Bcl2, and BclxL and the fusion gene NUP98-HOXB4 were generated. A test cell line was successfully transduced with the produced retroviruses, and the functionality of the viruses was demonstrated in various assays. The transferred genes were detected in the test cell line at the DNA level, or the overexpression of the corresponding proteins was detected by Western blot. Finally, B lymphocytes, or their immature precursor cells, were transduced with the generated retroviruses and co-cultivated with bone marrow-like stromal cells. So far, no cell line or long-term culture could be established from any of the transduced approaches.
In the last part of this work, the effectiveness and transferability of the previously established protocol for the in vitro immunization of murine B lymphocytes was demonstrated with various antigens. Specific IgG responses were induced in vitro against VP1, Legionella pneumophila, and the protein Mip, a peptide of which was integrated into the VP1 used for immunization. The stimulated B lymphocytes were transformed into permanent antibody-producing cell lines by fusion with myeloma cells.
Several hybridoma cell lines were generated that produce specific IgG antibodies against VP1 or Mip. The generated antibodies specifically bound the corresponding antigen both in Western blot and in ELISA (enzyme-linked immunosorbent assay).
The in vitro immunization established here offers an effective alternative to existing methods for producing specific antibodies. It replaces the immunization of laboratory animals and considerably reduces the time required. In combination with hybridoma technology, the in vitro immunized cells can be used, as demonstrated here, to generate hybridoma cell lines and to produce monoclonal antibodies. To replace the use of laboratory animals in this method with an adequate permanent cell line, the genetic modification of B lymphocytes and immature hematopoietic cells must be optimized. The results provide a basis for a universal, species-independent methodology for antibody production and for the establishment of an ideal, animal-free in vitro immunization.
Flooding is a widespread problem in many parts of the world, including Europe. It occurs mainly due to extreme weather conditions (e.g. heavy rainfall and snowmelt), and the consequences of flood events can be devastating. Flood risk is commonly defined as a combination of the probability of an event and its potential adverse impacts. It therefore covers three major dynamic components: hazard (the physical characteristics of a flood event), exposure (the people and physical environment exposed to flooding), and vulnerability (the susceptibility of the elements at risk). Floods are natural phenomena and cannot be fully prevented, but their risk can be managed and mitigated. Sound flood risk management and mitigation require a proper risk assessment, which in turn requires a clear understanding of flood risk dynamics. For instance, human activity may contribute to an increase in flood risk. Anthropogenic climate change causes higher rainfall intensity and sea level rise, and therefore an increase in the scale and frequency of flood events. Furthermore, inappropriate risk management and structural protection measures may not be effective for risk reduction, and risk increases as the number of assets and people in flood-prone areas grows. To address these issues, the first objective of this thesis is to perform a sensitivity analysis to understand the impacts of changes in each flood risk component on overall risk, and further their mutual interactions. A multitude of changes along the risk chain are simulated by a regional flood model (RFM) in which all processes, from the atmosphere through the catchment and river system to damage mechanisms, are taken into consideration. The impacts of changes in risk components are explored using plausible change scenarios for the mesoscale Mulde catchment (a sub-basin of the Elbe) in Germany.
A proper risk assessment requires a reasonable representation of real-world flood events. Traditionally, flood risk is assessed by assuming homogeneous return periods of flood peaks throughout the considered catchment. In reality, however, flood events are spatially heterogeneous, and the traditional assumption therefore misestimates flood risk, especially for large regions. In this thesis, two studies investigate the importance of spatial dependence in large-scale flood risk assessment at different spatial scales. In the first, the "real" spatial dependence of the return periods of flood damages is represented by a continuous risk modelling approach in which spatially coherent patterns of hydrological and meteorological controls (i.e. soil moisture and weather patterns) are included. The risk estimates under this modelled dependence assumption are then compared with those under two other assumptions on the spatial dependence of the return periods of flood damages: complete dependence (homogeneous return periods) and independence (randomly generated heterogeneous return periods), for the Elbe catchment in Germany. The second study represents the "real" spatial dependence by multivariate dependence models. Similar to the first study, the three assumptions on the spatial dependence of the return periods of flood damages are compared, but at national (United Kingdom and Germany) and continental (Europe) scales. Furthermore, the impacts of the different models, of tail dependence, and of the structural flood protection level on flood risk under the different spatial dependence assumptions are investigated.
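To make the contrast between these dependence assumptions concrete, the following minimal Monte Carlo sketch compares an aggregate damage quantile under independence, complete dependence, and an intermediate Gaussian-copula dependence. The two-region setup, the damage curve, and the correlation value are illustrative assumptions, not the models used in this thesis.

```python
# Toy Monte Carlo contrast of spatial dependence assumptions for aggregate
# flood damage in two regions; the Gaussian copula and damage curve are
# illustrative stand-ins, not the models used in the thesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000

def damage(u):
    """Map a return-period quantile u in (0,1) to damage (heavy-tailed toy curve)."""
    return (1.0 / (1.0 - u)) ** 0.5   # grows with the local return period

# Independence: each region draws its own quantile.
u_ind = rng.uniform(size=(n, 2))

# Complete dependence: one quantile shared by both regions
# (homogeneous return periods across the catchment).
u_dep = np.repeat(rng.uniform(size=(n, 1)), 2, axis=1)

# "Modelled" dependence: a Gaussian copula with intermediate correlation.
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n)
u_cop = stats.norm.cdf(z)

for name, u in [("independent", u_ind), ("complete", u_dep), ("copula", u_cop)]:
    total = damage(u).sum(axis=1)
    print(name, "99.5% aggregate damage:", np.quantile(total, 0.995).round(2))
```

In such toy settings the complete dependence case yields the largest tail quantile, mirroring the overestimation reported in the results below.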
The outcomes of the sensitivity analysis framework suggest that flood risk can vary dramatically as a result of possible change scenarios. Risk components that have received little attention (e.g. changes in dike systems and in vulnerability) may mask the influence of climate change, which is the most frequently investigated component.
The results of the spatial dependence research in this thesis further show that, for events with return periods greater than approximately 200 years in the Elbe catchment, the damage under the false assumption of complete dependence is 100 % larger than under the modelled dependence assumption. The complete dependence assumption overestimates the 200-year flood damage, a benchmark indicator for the insurance industry, by 139 %, 188 % and 246 % for the UK, Germany and Europe, respectively. The misestimation of risk under the different assumptions can vary from upstream to downstream within a catchment. Moreover, tail dependence in the model and the flood protection level in the catchments affect the risk estimates and the differences between the spatial dependence assumptions.
In conclusion, a broader consideration of the risk components that can affect flood risk, together with the consideration of the spatial dependence of flood return periods, is strongly recommended for a better understanding of flood risk and, consequently, for sound flood risk management and mitigation.
The incorporation of proteins into artificial materials such as membranes offers great opportunities to exploit the manifold qualities of proteins and enzymes perfected by nature over millions of years. One way to leverage proteins is modification with artificial polymers. To obtain such protein-polymer conjugates, either a polymer can be grown from the protein surface (grafting-from) or a pre-synthesized polymer can be attached to the protein (grafting-to). Both techniques were used in this thesis to synthesize conjugates of different proteins with thermo-responsive polymers.
First, conjugates were analyzed by protein NMR spectroscopy. Typical characterization techniques for conjugates can verify successful conjugation and give hints about the secondary structure of the protein. However, the three-dimensional structure, which is highly important for protein function, cannot be probed by standard techniques. NMR spectroscopy is a unique method that allows even small alterations in the protein structure to be followed. A mutant of the carbohydrate binding module 3b (CBM3bN126W) was used as a model protein and functionalized with poly(N-isopropylacrylamide) (PNIPAm). Analysis of conjugates prepared by grafting-to or grafting-from revealed a strong impact of the conjugation type on protein folding. Whereas grafting a pre-formed polymer to the protein completely preserved protein folding, grafting the polymer from the protein surface led to (partial) disruption of the protein structure.
Next, conjugates of bovine serum albumin (BSA), a cheap and easily accessible protein, were synthesized with PNIPAm and different oligo(ethylene glycol) (meth)acrylates. The obtained protein-polymer conjugates were analyzed by an in-line combination of size exclusion chromatography and multi-angle laser light scattering (SEC-MALS). This technique is particularly advantageous for determining molar masses, as no external calibration of the system is needed. Different SEC column materials and operating conditions were tested to evaluate the applicability of this system for determining absolute molar masses and hydrodynamic properties of heterogeneous conjugates prepared by grafting-from and grafting-to. Hydrophobic and non-covalent interactions of the conjugates led to error-prone values not in accordance with the molar masses expected from conversions and extents of modification.
As an alternative to this method, conjugates were analyzed by sedimentation velocity analytical ultracentrifugation (SV-AUC) to gain insights into their hydrodynamic properties and how these change after conjugation. Within a centrifugal field, a sample moves and fractionates according to the mass, density, and shape of its individual components. Conjugates of BSA with PNIPAm were analyzed below and above the cloud point temperature of the thermo-responsive polymer component. It was found that the polymer characteristics were transferred to the conjugate molecule, which then showed a decreased ideality – defined as an increased deviation from a perfect sphere model – below, and an increased ideality above, the cloud point temperature. This effect can be attributed to the polymer chain either pointing towards the solvent (expanded state) or snuggling around the protein surface, depending on the applied temperature.
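For orientation, the quantities probed by SV-AUC are connected by the Svedberg framework; in a standard textbook form (not specific to this thesis),

\[
s = \frac{M\,(1-\bar{v}\rho)}{N_A\, f},
\qquad
\frac{f}{f_0} = \frac{f}{6\pi\eta\left(\dfrac{3M\bar{v}}{4\pi N_A}\right)^{1/3}},
\]

where \(s\) is the sedimentation coefficient, \(M\) the molar mass, \(\bar{v}\) the partial specific volume, \(\rho\) the solvent density, \(N_A\) Avogadro's number, \(\eta\) the solvent viscosity, and \(f\) the frictional coefficient. The frictional ratio \(f/f_0\) compares \(f\) to that of a compact sphere of equal mass and density and thus quantifies the deviation from the perfect sphere model referred to above.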
The last project dealt with the synthesis of conjugates of the ferric hydroxamate uptake protein component A (FhuA) with polymers as building blocks for novel membrane materials. The shape of FhuA can be described as a barrel, and removal of a cork domain inside the protein results in a passive channel intended to be utilized as a pore in the membrane system. The polymer matrix surrounding the membrane protein is composed of a thermo-responsive and a UV-crosslinkable part, thereby incorporating an external trigger for the covalent immobilization of these building blocks in the membrane and for switching the membrane between different states. The overall performance of membranes prepared by a drying-mediated self-assembly approach was evaluated by permeability and size exclusion experiments. The obtained membranes displayed insufficient interchain crosslinking and therefore lacked performance. Furthermore, the intended switch of the polymer matrix between a hydrophilic and a hydrophobic state did not occur. Correspondingly, size exclusion experiments did not result in retention of analytes larger than the pores defined by the dimensions of the FhuA variant used.
Overall, different paths to generate protein-polymer conjugates by either grafting from or grafting to the protein surface were presented, paving the way to the generation of new hybrid materials. Different analytical methods were utilized to describe the folding and hydrodynamic properties of the conjugates, providing deeper insight into the overall characteristics of these promising building blocks.
Virtualizing physical space
(2021)
The true cost of virtual reality is not the hardware, but the physical space it requires, as a one-to-one mapping of physical to virtual space allows for the most immersive way of navigating in virtual reality. Such "real-walking" requires the physical space to be of the same size and shape as the virtual world it represents. This generally prevents real-walking applications from running in any space they were not designed for.
To reduce virtual reality’s demand for physical space, creators of such applications let users navigate virtual space by means of a treadmill, altered mappings of physical to virtual space, hand-held controllers, or gesture-based techniques. While all of these solutions succeed at reducing virtual reality’s demand for physical space, none of them reach the same level of immersion that real-walking provides.
Our approach is to virtualize physical space: instead of accessing physical space directly, we allow applications to express their need for space in an abstract way, which our software systems then map to the physical space available. We allow real-walking applications to run in spaces of different size, different shape, and in spaces containing different physical objects. We also allow users immersed in different virtual environments to share the same space.
Our systems achieve this by using a tracking-volume-independent representation of real-walking experiences: a graph structure that expresses the spatial and logical relationships between virtual locations, the virtual elements contained within those locations, and user interactions with those elements. When run in a specific physical space, this graph representation is used to define a custom mapping between the elements of the virtual reality application and the physical space by parsing the graph with a constraint solver. To re-use space, our system splits virtual scenes and overlaps virtual geometry. The system derives this split by hierarchically clustering the virtual objects, which form the nodes of a bi-partite directed graph representing the logical ordering of the events of the experience. We let applications express their demands for physical space and use pre-emptive scheduling between applications to have them share space. We present several application examples enabled by our system; all of them enable real-walking, despite being mapped to physical spaces of different size and shape, containing different physical objects or other users.
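As a rough illustration of such a representation, the following sketch encodes virtual locations with abstract space demands and assigns them to physical zones. All names are hypothetical, and the greedy assignment merely stands in for the constraint solver described above; it is not the thesis's actual system.

```python
# Hypothetical sketch of a tracking-volume-independent scene graph; all
# names are illustrative, not the authors' actual API.
from dataclasses import dataclass, field

@dataclass
class VirtualLocation:
    name: str
    required_area_m2: float          # abstract space demand, not a fixed layout
    elements: list = field(default_factory=list)  # props/interactions inside

@dataclass
class SceneGraph:
    locations: dict = field(default_factory=dict)
    transitions: list = field(default_factory=list)  # (src, dst) walk edges

    def add_location(self, loc): self.locations[loc.name] = loc
    def connect(self, src, dst): self.transitions.append((src, dst))

def map_to_physical(graph, zones):
    """Greedy stand-in for the constraint solver: assign each virtual
    location to the smallest physical zone that satisfies its area demand.
    Zones may be reused, which corresponds to overlapping virtual geometry."""
    assignment = {}
    for loc in sorted(graph.locations.values(),
                      key=lambda l: -l.required_area_m2):
        fitting = [z for z in zones if z[1] >= loc.required_area_m2]
        if not fitting:
            raise ValueError(f"no zone large enough for {loc.name}")
        assignment[loc.name] = min(fitting, key=lambda z: z[1])[0]
    return assignment

graph = SceneGraph()
graph.add_location(VirtualLocation("lobby", 6.0))
graph.add_location(VirtualLocation("corridor", 3.0))
graph.connect("lobby", "corridor")
print(map_to_physical(graph, [("zone_A", 8.0), ("zone_B", 4.0)]))
```

The key design point mirrored here is that the application never names physical coordinates; it states abstract demands, and the mapping to the concrete tracking volume is computed at run time.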
We see substantial real-world impact in our systems. Today's commercial virtual reality applications are generally designed to be navigated using less immersive solutions, as this allows them to be operated on any tracking volume. While this is a commercial necessity for the developers, it misses out on the higher immersion offered by real-walking. We let developers overcome this hurdle by allowing experiences to bring real-walking to any tracking volume, thus potentially bringing real-walking to consumers.
While patients are known to respond differently to drug therapies, current clinical practice often still follows a standardized dosage regimen for all patients. For drugs with a narrow range of effective and safe concentrations, this approach may lead to a high incidence of adverse events or subtherapeutic dosing in the presence of high inter-patient variability. Model-informed precision dosing (MIPD) is a quantitative approach to dose individualization based on mathematical modeling of dose-response relationships that integrates therapeutic drug/biomarker monitoring (TDM) data. MIPD may considerably improve the efficacy and safety of many drug therapies. Current MIPD approaches, however, rely either on pre-calculated dosing tables or on simple point predictions of the therapy outcome. These approaches lack a quantification of uncertainties and the ability to account for delayed effects. In addition, the underlying models are not improved while being applied to patient data. Current approaches are therefore not well suited for informed clinical decision-making based on a differentiated understanding of the individually predicted therapy outcome.
The objective of this thesis is to develop mathematical approaches for MIPD, which (i) provide efficient fully Bayesian forecasting of the individual therapy outcome including associated uncertainties, (ii) integrate Markov decision processes via reinforcement learning (RL) for a comprehensive decision framework for dose individualization, (iii) allow for continuous learning across patients and hospitals. Cytotoxic anticancer chemotherapy with its major dose-limiting toxicity, neutropenia, serves as a therapeutically relevant application example.
For more comprehensive therapy forecasting, we apply Bayesian data assimilation (DA) approaches, integrating patient-specific TDM data into mathematical models of chemotherapy-induced neutropenia that build on prior population analyses. The value of uncertainty quantification is demonstrated, as it allows reliable computation of patient-specific probabilities of relevant clinical quantities, e.g., the neutropenia grade. In view of novel home monitoring devices that increase the amount of available TDM data, the data processing of sequential DA methods proves to be more efficient and facilitates the handling of variability between dosing events.
By transferring concepts from DA and RL, we develop novel approaches for MIPD. While DA-guided dosing integrates individualized uncertainties into dose selection, RL-guided dosing provides a framework to consider the delayed effects of dose selections. The combined DA-RL approach takes both aspects into account simultaneously and thus represents a holistic approach to MIPD. Additionally, we show that RL can be used to gain insights into the patient characteristics that are important for dose selection. In a simulation study based on a recent clinical study (CEPAC-TDM trial), the novel dosing strategies substantially reduce the occurrence of both subtherapeutic and life-threatening neutropenia grades compared to currently used MIPD approaches.
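The following sketch illustrates the flavor of DA-guided dosing with a simple particle filter: a toy one-parameter dose-response model, a reweighting step for one TDM observation, and a dose rule bounding the probability of severe neutropenia. The model, numbers, and thresholds are illustrative assumptions, not the thesis's pharmacometric model.

```python
# Illustrative sketch of DA-guided dosing with a particle filter, under
# strongly simplified assumptions (a one-parameter response model with
# log-normal observation noise); not the thesis's actual neutropenia model.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Prior over an individual sensitivity parameter (from a population analysis).
particles = rng.lognormal(mean=0.0, sigma=0.5, size=n)
weights = np.ones(n) / n

def predict_nadir(theta, dose):
    """Toy dose-response: higher sensitivity -> lower neutrophil nadir."""
    baseline = 5.0                      # 10^9 cells/L
    return baseline * np.exp(-theta * dose / 100.0)

def assimilate(observed_nadir, dose, sigma_obs=0.2):
    """Reweight particles by the likelihood of the observed TDM value."""
    global weights
    pred = predict_nadir(particles, dose)
    loglik = -0.5 * ((np.log(observed_nadir) - np.log(pred)) / sigma_obs) ** 2
    weights = weights * np.exp(loglik - loglik.max())
    weights /= weights.sum()

def choose_dose(candidates=np.arange(50, 201, 10)):
    """Pick the highest candidate dose whose predicted probability of a
    very low nadir (< 0.5, grade-4-like) stays below 10%."""
    best = candidates[0]
    for d in candidates:
        nadirs = predict_nadir(particles, d)
        p_severe = np.sum(weights * (nadirs < 0.5))
        if p_severe < 0.10:
            best = d
    return best

assimilate(observed_nadir=1.8, dose=100)   # one TDM measurement after cycle 1
print("next dose:", choose_dose())
```

The essential point carried over from the text is that the full posterior (here, the weighted particles), rather than a point prediction, drives the dose decision.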
If MIPD is to be implemented in routine clinical practice, a certain bias of the underlying model is inevitable, as models are typically based on data from comparably small clinical trials that reflect the diversity of real-world patient populations only to a limited extent. We propose a sequential hierarchical Bayesian inference framework that enables continuous learning across patients of the model parameters of the target patient population. Importantly, the approach requires only summary information of the individual patient data to update the model. This separation of individual inference from population inference enables implementation across different centers of care.
The proposed approaches substantially improve current MIPD approaches, taking into account new trends in health care and aspects of practical applicability. They enable progress towards more informed clinical decision-making, ultimately increasing patient benefits beyond the current practice.
Spatiotemporal variations of key air pollutants and greenhouse gases in the Himalayan foothills
(2021)
South Asia is a rapidly developing, densely populated and highly polluted region that is facing the impacts of increasing air pollution and climate change, and yet it remains one of the scientifically least studied regions of the world. In recognition of this situation, this thesis focuses on (i) the spatial and temporal variation of key greenhouse gases (CO2 and CH4) and air pollutants (CO and O3) and (ii) the vertical distribution of air pollutants (PM, BC) in the foothills of the Himalaya. Five sites were selected in the Kathmandu Valley, the capital region of Nepal, along with two sites outside of the valley in the Makawanpur and Kaski districts, and measurements were conducted during 2013-2014 and 2016. These measurements are analyzed in this thesis.
The CO measurements at multiple sites in the Kathmandu Valley showed a clear diurnal cycle: morning and evening levels were high, with an afternoon dip. The diurnal cycles of CO2 and CH4 differ slightly, with their mixing ratios increasing after the afternoon dip until the morning peak of the next day. The mixing layer height (MLH) of the nocturnal stable layer is relatively constant (~200 m) during the night, after which it transitions to a convective mixing layer during the day, and the MLH increases up to 1200 m in the afternoon. Pollutants are thus largely trapped in the valley from the evening until sunrise the following day, and their concentrations increase due to emissions during the night. In the afternoon, the pollutants are diluted by the circulation of the valley winds after the break-up of the mixing layer. The major emission sources of GHGs and air pollutants in the valley are the transport sector, residential cooking, brick kilns, trash burning, and agro-residue burning. Brick industries are influential in the winter and pre-monsoon seasons, and the contribution of regional forest fires and agro-residue burning is seen during the pre-monsoon season. In addition, relatively higher CO values were observed at the valley outskirts (Bhimdhunga and Naikhandi), which indicates the contribution of regional emission sources. This is also supported by the higher concentrations of O3 observed during the pre-monsoon season.
The mixing ratios of CO2 (419.3 ± 6.0 ppm) and CH4 (2.192 ± 0.066 ppm) in the valley were much higher than at background sites, including the Mauna Loa observatory (CO2: 396.8 ± 2.0 ppm, CH4: 1.831 ± 0.110 ppm) and Waliguan, China (CO2: 397.7 ± 3.6 ppm, CH4: 1.879 ± 0.009 ppm), as well as at the urban site Shadnagar (CH4: 1.92 ± 0.07 ppm) in India.
The daily maximum 8-hour average O3 in the Kathmandu Valley exceeded the WHO recommended value on more than 80% of days during the pre-monsoon period, which represents a significant risk for human health and ecosystems in the region. Moreover, the measurements of the vertical distribution of particulate matter, which were made using an ultralight aircraft and are the first of their kind in the region, detected an elevated polluted layer at ca. 3000 m a.s.l. over the Pokhara Valley. This layer could be associated with large-scale regional transport of pollution. These contributions towards understanding the distributions of key air pollutants and their main sources provide helpful information for developing management plans and policies to help reduce the risks for the millions of people living in the region.
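The exceedance metric used here, the daily maximum 8-hour average (often called MDA8), can be computed along the following lines; the synthetic data and the guideline value (the WHO 8-hour guideline of 100 µg/m³) merely illustrate the calculation and are not the thesis's dataset.

```python
# Sketch: computing the daily maximum 8-hour running-mean O3 (MDA8) and the
# fraction of days above a guideline value; the data here are synthetic.
import pandas as pd
import numpy as np

def mda8(o3_hourly: pd.Series) -> pd.Series:
    """o3_hourly: hourly O3 (ug/m3) with a DatetimeIndex.
    Returns one MDA8 value per calendar day."""
    roll8 = o3_hourly.rolling(window=8, min_periods=6).mean()
    return roll8.resample("D").max()

# Synthetic example: two days with a pronounced afternoon O3 peak.
idx = pd.date_range("2014-04-01", periods=48, freq="h")
o3 = pd.Series(60 + 50 * np.sin(np.pi * (idx.hour - 6) / 12).clip(0), index=idx)

daily = mda8(o3)
guideline = 100.0  # WHO 8-hour guideline, ug/m3
print(daily)
print("exceedance fraction:", (daily > guideline).mean())
```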
The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes located in the Khomas Highland of Namibia. H.E.S.S. operates in a wide energy range from several tens of GeV to several tens of TeV, reaching its best sensitivity around 1 TeV and below. However, there are many important topics – such as the search for Galactic PeVatrons, the study of gamma-ray production scenarios for sources (hadronic vs. leptonic), and EBL absorption studies – that require good sensitivity at energies above 10 TeV. This work aims at improving the sensitivity of H.E.S.S. and increasing the gamma-ray statistics at high energies. The study investigates an enlargement of the effective field of view of H.E.S.S. by using events with larger offset angles in the analysis. The greatest challenges in the analysis of large-offset events are the degradation of the reconstruction accuracy and the rise of the background rate as the offset angle increases. To overcome these issues, a more sophisticated direction reconstruction method (DISP) and improvements to the standard background rejection technique are implemented, which by themselves are effective ways to increase the gamma-ray statistics and improve the sensitivity of the analysis. As a result, the angular resolution at the preselection level is improved by 5 - 10% for events at 0.5° offset angle and by 20 - 30% for events at 2° offset angle. The background rate at large offset angles is decreased to nearly the level typical for offset angles below 2.5°. Thereby, sensitivity improvements of 10 - 20% are achieved for the proposed analysis compared to the standard analysis at small offset angles. The developed analysis also allows the use of events at offset angles up to approximately 4°, which was not possible before. This analysis method is applied to the analysis of Galactic plane data above 10 TeV. As a result, 40 of the 78 sources presented in the H.E.S.S. Galactic plane survey (HGPS) are detected above 10 TeV. Among them are representatives of all source classes present in the HGPS catalogue, namely binary systems, supernova remnants, pulsar wind nebulae, and composite objects. The potential of the improved analysis method is demonstrated by investigating the emission above 10 TeV for two objects: the region associated with the shell-type SNR HESS J1731−347 and the PWN candidate associated with PSR J0855−4644 that is coincident with Vela Junior (HESS J0852−463).
Modern knowledge bases contain and organize knowledge from many different topic areas. Apart from specific entity information, they also store information about the relationships between entities. Combining this information results in a knowledge graph, which can be particularly helpful in cases where relationships are of central importance. Among other applications, modern risk assessment in the financial sector can benefit from the inherent network structure of such knowledge graphs by assessing the consequences and risks of certain events, such as corporate insolvencies or fraudulent behavior, based on the underlying network structure. As public knowledge bases often do not contain the necessary information for the analysis of such scenarios, the need arises to create and maintain dedicated domain-specific knowledge bases.
This thesis investigates the process of creating domain-specific knowledge bases from structured and unstructured data sources. In particular, it addresses the topics of named entity recognition (NER), duplicate detection, and knowledge validation, which represent essential steps in the construction of knowledge bases.
As such, we present a novel method for duplicate detection based on a Siamese neural network that is able to learn a dataset-specific similarity measure which is used to identify duplicates. Using the specialized network architecture, we design and implement a knowledge transfer between two deduplication networks, which leads to significant performance improvements and a reduction of required training data.
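As a minimal sketch of the general technique, the following pair-wise encoder with a contrastive loss shows how a dataset-specific similarity measure can be learned; the toy record features, dimensions, and training loop are illustrative stand-ins, not the thesis's architecture or its knowledge-transfer setup.

```python
# Minimal sketch of a Siamese network for duplicate detection with a
# contrastive loss; features and data are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    def __init__(self, in_dim=64, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, a, b):
        # Both record representations pass through the SAME weights.
        return self.net(a), self.net(b)

def contrastive_loss(ea, eb, is_dup, margin=1.0):
    """Pull duplicate pairs together, push non-duplicates beyond the margin."""
    d = F.pairwise_distance(ea, eb)
    return (is_dup * d.pow(2)
            + (1 - is_dup) * F.relu(margin - d).pow(2)).mean()

model = SiameseEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy batch: random feature vectors for record pairs plus duplicate labels.
a, b = torch.randn(16, 64), torch.randn(16, 64)
is_dup = torch.randint(0, 2, (16,)).float()

ea, eb = model(a, b)
loss = contrastive_loss(ea, eb, is_dup)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

Because both inputs share one encoder, the learned embedding distance acts as the dataset-specific similarity measure; a distance threshold then flags duplicates.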
Furthermore, we propose a named entity recognition approach that is able to identify company names by integrating external knowledge in the form of dictionaries into the training process of a conditional random field classifier. In this context, we study the effects of different dictionaries on the performance of the NER classifier. We show that both the inclusion of domain knowledge and the generation and use of alias names result in significant performance improvements.
For the validation of knowledge represented in a knowledge base, we introduce Colt, a framework for knowledge validation based on the interactive quality assessment of logical rules. In its most expressive implementation, we combine Gaussian processes with neural networks to create Colt-GP, an interactive algorithm for learning rule models. Unlike other approaches, Colt-GP uses knowledge graph embeddings and user feedback to cope with data quality issues of knowledge bases. The learned rule model can be used to conditionally apply a rule and assess its quality.
Finally, we present CurEx, a prototypical system for building domain-specific knowledge bases from structured and unstructured data sources. Its modular design is based on scalable technologies, which, in addition to processing large datasets, ensures that the modules can be easily exchanged or extended. CurEx offers multiple user interfaces, each tailored to the individual needs of a specific user group and is fully compatible with the Colt framework, which can be used as part of the system.
We conduct a wide range of experiments with different datasets to determine the strengths and weaknesses of the proposed methods. To ensure the validity of our results, we compare the proposed methods with competing approaches.
Natural products have proved to be a major resource in the discovery and development of many pharmaceuticals in use today. A wide variety of biologically active natural products contain conjugated polyenes or benzofuran structures. Therefore, new synthetic methods for the construction of such building blocks are of great interest to synthetic chemists. The recently developed one-pot tethered ring-closing metathesis approach allows the formation of Z,E-dienoates with high stereoselectivity. Extending this method with a Julia-Kocienski olefination protocol allows conjugated trienes to be formed in a stereoselective manner. This strategy was applied in the total synthesis of the conjugated-triene-containing (+)-bretonin B. Additionally, cross metathesis using methyl-substituted olefins was investigated. This methodology was applied, as a one-pot cross metathesis/ring-closing metathesis sequence, in the total synthesis of the benzofuran-containing 7-methoxywutaifuranal. Finally, the design and synthesis of a catalyst for stereoretentive metathesis in aqueous media was investigated.
The survey of the prevalence of chronic ankle instability in elite Taiwanese basketball athletes
(2021)
BACKGROUND: Ankle sprains are common in basketball. They can develop into chronic ankle instability (CAI), causing decreased quality of life, impaired functional performance, early osteoarthritis, and an increased risk of other injuries. To develop a strategy for CAI prevention, localized epidemiological data and a valid and reliable assessment tool are essential. However, the epidemiological data on CAI from previous studies are not conclusive, and the prevalence of CAI in Taiwanese basketball athletes is not clear. In addition, a valid and reliable Taiwan-Chinese instrument to evaluate ankle instability has been missing.
PURPOSE: The aims were to provide an overview of the prevalence of CAI in sports populations through a systematic review, to develop a valid and reliable cross-culturally adapted Taiwan-Chinese version of the Cumberland Ankle Instability Tool (CAIT-TW), and to survey the prevalence of CAI in elite basketball athletes in Taiwan using the CAIT-TW.
METHODS: First, a systematic search was conducted; research articles applying CAI-related questionnaires to survey the prevalence of CAI were included in the review. Second, the English version of the CAIT was translated and cross-culturally adapted into the CAIT-TW. The construct validity, test-retest reliability, internal consistency, and cutoff score of the CAIT-TW were evaluated in an athletic population (N=135). Finally, cross-sectional data on the prevalence of CAI in 388 elite Taiwanese basketball athletes were collected. Demographics, the presence of CAI, and differences in prevalence between genders, competitive levels, and playing positions were evaluated.
RESULTS: The prevalence of CAI was 25%, ranging between 7% and 53% across studies. The prevalence of CAI among participants with a history of ankle sprains was 46%, ranging between 9% and 76%. The cross-culturally adapted CAIT-TW showed moderate to strong construct validity, excellent test-retest reliability, good internal consistency, and a cutoff score of 21.5 for the Taiwanese athletic population. Finally, 26% of Taiwanese basketball athletes had unilateral CAI and 50% had bilateral CAI. Women athletes in the investigated cohort had a higher prevalence of CAI than men. There was no difference in prevalence between competitive levels or among playing positions.
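A cutoff such as the 21.5 reported above is typically derived from ROC analysis; the following sketch shows the generic Youden-index procedure on synthetic scores and is not the study's actual analysis.

```python
# Sketch: deriving a questionnaire cutoff score with the Youden index from
# labeled data (stable vs. unstable ankles); the data here are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
# Synthetic CAIT-like scores (0-30): unstable ankles tend to score lower.
scores_stable = rng.normal(26, 3, 200).clip(0, 30)
scores_unstable = rng.normal(18, 4, 120).clip(0, 30)

y = np.r_[np.zeros(200), np.ones(120)]          # 1 = unstable
s = np.r_[scores_stable, scores_unstable]

# Low scores indicate instability, so use the negated score as the "risk".
fpr, tpr, thresholds = roc_curve(y, -s)
youden = tpr - fpr                               # sensitivity + specificity - 1
best = np.argmax(youden)
print("optimal cutoff: score <=", -thresholds[best])
```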
CONCLUSION: The systematic review shows that the prevalence of CAI varies widely among the included studies. This could be due to differing exclusion criteria, age, sports disciplines, or other factors. Future studies require standardized criteria to investigate the epidemiology of CAI; such studies should be prospective, and factors affecting the prevalence of CAI should be investigated and described. The translated CAIT-TW is a valid and reliable tool to differentiate between stable and unstable ankles in athletes and may be applied in research or daily practice in Taiwan. In the Taiwanese basketball population, CAI is highly prevalent, which might relate to the research method, preexisting ankle instability, and training-related issues. Women showed a higher prevalence of CAI than men; gender should therefore be taken into consideration when applying preventive measures.
As society paves its way towards device miniaturization and precision medicine, micro-scale actuation and guided transport are becoming increasingly prominent research fields, with high potential impact in both technological and clinical contexts. A promising strategy to accomplish directed motion of micron-sized objects, such as biosensors and drug-releasing microparticles, towards specific target sites is the use of living cells as smart, biochemically powered carriers, forming so-called bio-hybrid systems. Inspired by leukocytes, native cells of living organisms that efficiently migrate to critical targets such as tumor tissue, an emerging concept is to exploit the amoeboid crawling motility of such cells as a means of transport for drug delivery applications.
In the research work described in this thesis, I synergistically applied experimental, computational, and theoretical modeling approaches to investigate the behaviour and transport mechanism of a novel kind of bio-hybrid system for active transport at the micro-scale, referred to as a cellular truck. This system consists of an amoeboid crawling cell, the carrier, attached to a microparticle, the cargo, which may ideally be drug-loaded for specific therapeutic treatments.
For the experimental investigation, I employed the amoeba Dictyostelium discoideum as the crawling cellular carrier, it being a renowned model organism for leukocyte migration and, more generally, for eukaryotic cell motility. The experiments revealed a complex recurrent cell-cargo relative motion, together with an intermittent motility of the cellular truck as a whole. The evidence suggests that the presence of a cargo on an amoeboid cell acts as a mechanical stimulus leading to cell polarization, thus promoting cell motility and giving rise to the observed intermittent dynamics of the truck. In particular, bursts of cytoskeletal polarity along the cell-cargo axis were found to occur at a rate dependent on geometrical features of the cargo, such as particle diameter. Overall, the collected experimental evidence points to a pivotal role of cell-cargo interactions in the emergent motion dynamics of the cellular truck. In particular, they can determine the transport capabilities of amoeboid cells, as the cargo size significantly impacts the cytoskeletal activity and the repolarization dynamics along the cell-cargo axis, the latter being responsible for truck displacement and reorientation.
Furthermore, I developed a modeling framework, built upon the experimental evidence on cellular truck behaviour, that connects the relative dynamics and interactions arising at the truck scale with the actual particle transport dynamics. In fact, numerical simulations of the proposed model successfully reproduced the phenomenology of the cell-cargo system, while enabling the prediction of the transport properties of cellular trucks over larger spatial and temporal scales. The theoretical analysis provided a deeper understanding of the role of cell-cargo interaction on mass transport, unveiling in particular how the long-time transport efficiency is governed by the interplay between the persistence time of cell polarity and time scales of the relative dynamics stemming from cell-cargo interaction. Interestingly, the model predicts the existence of an optimal cargo size, enhancing the diffusivity of cellular trucks; this is in line with previous independent experimental data, which appeared rather counterintuitive and had no explanation prior to this study.
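To illustrate the kind of model described here, the following sketch simulates a persistent random walk whose polarity axis is occasionally reset at a cargo-dependent rate, and estimates the long-time diffusion coefficient from the mean-squared displacement. All parameters are illustrative, not fitted to the thesis data, and the dynamics are a simplified stand-in for the actual modeling framework.

```python
# Sketch: a persistent random walk as a toy model of a "cellular truck",
# with repolarization events whose rate depends on a cargo-size parameter.
import numpy as np

def simulate(v=0.1, tau_p=60.0, repol_rate=0.02, T=20000, dt=1.0, seed=0):
    """v: speed (um/s); tau_p: polarity persistence time (s);
    repol_rate: cargo-dependent rate of repolarization bursts (1/s)."""
    rng = np.random.default_rng(seed)
    theta, pos = 0.0, np.zeros((int(T / dt), 2))
    for i in range(1, len(pos)):
        # Slow angular diffusion of the polarity axis...
        theta += np.sqrt(2 * dt / tau_p) * rng.normal()
        # ...plus occasional cargo-induced repolarization (random new axis).
        if rng.random() < repol_rate * dt:
            theta = rng.uniform(0, 2 * np.pi)
        pos[i] = pos[i - 1] + v * dt * np.array([np.cos(theta), np.sin(theta)])
    return pos

pos = simulate()
# Long-time diffusion coefficient from the mean-squared displacement:
# MSD(t) ~ 4 D t in two dimensions.
lag = 2000
msd = np.mean(np.sum((pos[lag:] - pos[:-lag]) ** 2, axis=1))
print("effective D (um^2/s):", msd / (4 * lag * 1.0))
```

Varying the repolarization rate as a proxy for cargo size in such a model reproduces the qualitative interplay described above: diffusivity is governed by the competition between polarity persistence and the time scales of cargo-induced reorientation.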
In conclusion, my research work sheds light on the importance of cargo-carrier interactions in the context of crawling-cell-mediated particle transport, and it provides a prototypical, multifaceted framework for the analysis and modelling of such complex bio-hybrid systems and their prospective optimization.
The presented study investigated the influence of microbial and biogeochemical processes on the physical transport-related properties and the fate of microplastics in freshwater reservoirs. The overarching goal was to elucidate the mechanisms leading to the sedimentation and deposition of microplastics in such environments. This is important, as large amounts of initially buoyant microplastics are found in reservoir sediments worldwide, yet the transport processes that lead to microplastic accumulation in sediments have so far been understudied.
The impact of biofilm formation on the density and subsequent sedimentation of microplastics was investigated in the eutrophic Bautzen reservoir (Chapter 2). Biofilms are complex microbial communities fixed to submerged surfaces through a slimy organic film. The mineral calcite was detected in the biofilms, which led to the sinking of the overgrown microplastic particles. The calcite was of biogenic origin, most likely precipitated by sessile cyanobacteria within the biofilms.
Biofilm formation was also studied in the mesotrophic Malter reservoir. Unlike in Bautzen reservoir, biofilm formation did not govern the sedimentation of different microplastics in Malter reservoir (Chapter 3). Instead, autumnal lake mixing led to the formation of sinking aggregates of microplastics and iron colloids. Such colloids form when anoxic, iron-rich water from the hypolimnion mixes with the oxygenated epilimnetic waters. The colloids bind organic material from the lake water, which leads to the formation of large, sinking iron-organo flocs.
Hence, iron-organo floc formation and its influence on the buoyancy of microplastics and their burial in sediments of Bautzen reservoir were studied in laboratory experiments (Chapter 4). Microplastics of different shapes (fibers, fragments, spheres) and sizes were readily incorporated into sinking iron-organo flocs. In this way, initially buoyant polyethylene microplastics were transported onto sediments from Bautzen reservoir. Shortly after deposition, the microplastic-bearing flocs started to subside and transported the pollutants into deeper sediment layers. The microplastics were not released from the sediments within two months of laboratory incubation.
The stability of floc-bound microplastic deposition was further investigated in experiments with the iron-reducing model organism Shewanella oneidensis (Chapter 5). It was shown that reduction or re-mineralization of the iron minerals did not affect the integrity of the iron-organo flocs. The organic matrix was stable under iron-reducing conditions; hence, no incorporated microplastics were released from the flocs. As similar processes are likely to take place in natural sediments, this might explain the low microplastic release from sediments described above.
This thesis introduced different mechanisms leading to the sedimentation of initially buoyant microplastics and to their subsequent deposition in freshwater reservoirs. Novel processes such as aggregation with iron-organo flocs were identified, and the understudied issue of biofilm densification through biogenic mineral formation was investigated further. The findings might have implications for the fate of microplastics within river-reservoir systems and outline the role of freshwater reservoirs as important accumulation zones for microplastics: microplastics deposited in reservoir sediments might not be transported further by the through-flowing river. This study may thus contribute to better risk assessments and transport budgets of these anthropogenic contaminants.
As part of our everyday life we consume breaking news and interpret it based on our own viewpoints and beliefs. We have easy access to online social networking platforms and news media websites, where we inform ourselves about current affairs and often post about our own views, such as in news comments or social media posts. The media ecosystem enables opinions and facts to travel from news sources to news readers, from news article commenters to other readers, from social network users to their followers, etc. The views of the world many of us have depend on the information we receive via online news and social media. Hence, it is essential to maintain accurate, reliable and objective online content to ensure democracy and verity on the Web. To this end, we contribute to a trustworthy media ecosystem by analyzing news and social media in the context of politics to ensure that media serves the public interest. In this thesis, we use text mining, natural language processing and machine learning techniques to reveal underlying patterns in political news articles and political discourse in social networks.
Mainstream news sources typically cover a great number of the same news stories every day, but they often place them in a different context or report them from different perspectives. In this thesis, we are interested in how distinct and predictable newspaper journalists are in the way they report the news, as a means to understand and identify their different political beliefs. To this end, we propose two models that classify text from news articles, i.e., reported speech and news comments, to their respective original news source. Our goal is to capture systematic quoting and commenting patterns by journalists and news commenters, respectively, which can lead us to the newspaper where the quotes and comments were originally published. Predicting news sources can help us understand the potentially subjective nature of news storytelling and the magnitude of this phenomenon. Revealing this hidden knowledge can restore our trust in media by advancing transparency and diversity in the news.
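A minimal baseline for this kind of source attribution is a bag-of-ngrams classifier; the following sketch uses a TF-IDF representation with logistic regression on a toy corpus, which stands in for the reported-speech data described above rather than reproducing the thesis's models.

```python
# Sketch: attributing quoted text to its news source with a simple
# TF-IDF + logistic-regression baseline; corpus and labels are toy stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

quotes = [
    "the minister said the reform was long overdue",
    "critics argue the bill endangers press freedom",
    "officials insisted the economy remains robust",
    "the opposition warned of rising inequality",
]
sources = ["outlet_A", "outlet_B", "outlet_A", "outlet_B"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(quotes, sources)
print(clf.predict(["the minister warned of inequality"]))
```

The point such a baseline makes explicit is that systematic lexical quoting patterns alone can carry a detectable source signal, which more sophisticated models then exploit further.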
Media bias can be expressed in various subtle ways in the text, and it is often challenging to identify these bias manifestations correctly, even for humans. However, media experts, e.g., journalists, are a powerful resource that can help us overcome the vague definition of political media bias, and they can also assist automatic learners in finding the hidden bias in the text. Given the enormous technological advances in artificial intelligence, we hypothesize that identifying political bias in the news can be achieved through the combination of sophisticated deep learning models and domain expertise. Therefore, our second contribution is a high-quality and reliable news dataset annotated by journalists for political bias, and a state-of-the-art solution for this task based on curriculum learning. Our aim is to discover whether domain expertise is necessary for this task and to provide an automatic solution for this traditionally manually solved problem.

User-generated content is fundamentally different from news articles: messages are shorter, they are often personal and opinionated, and they refer to specific topics and persons. Regarding political and socio-economic news, individuals in online communities use social networks to keep their peers up-to-date and to share their own views on ongoing affairs. We believe that social media is as powerful an instrument for information flow as news sources are, and we use its unique characteristic of rapid news coverage for two applications. We analyze Twitter messages and debate transcripts during live presidential debates to automatically predict the topics that Twitter users discuss. Our goal is to discover the favoured topics in online communities on the dates of political events, as a way to understand the political subjects of public interest. With the up-to-dateness of microblogs, an additional opportunity emerges, namely to use social media posts and leverage the crowd's real-time knowledge about discussed individuals to find their locations.
That is, given a person of interest who is mentioned in online discussions, we use the wisdom of the crowd to automatically track their physical locations over time. We evaluate our approach in the context of politics, i.e., we predict the locations of US politicians as a proof of concept for important use cases, such as tracking people who pose national risks, e.g., warlords and wanted criminals.
The goal of this dissertation is to empirically evaluate the predictions of two classes of models applied to language processing: the similarity-based interference models (Lewis & Vasishth, 2005; McElree, 2000) and the group of smaller-scale accounts that we will refer to as faulty encoding accounts (Eberhard, Cutting, & Bock, 2005; Bock & Eberhard, 1993). Both types of accounts make predictions with regard to processing the same class of structures: sentences containing a non-subject (interfering) noun in addition to a subject noun and a verb. Both accounts make the same predictions for processing ungrammatical sentences with a number-mismatching interfering noun, and this prediction finds consistent support in the data. However, the similarity-based interference accounts predict similar effects not only for morphosyntactic, but also for the semantic level of language organization. We verified this prediction in three single-trial online experiments, where we found consistent support for the predictions of the similarity-based interference account. In addition, we report computational simulations further supporting the similarity-based interference accounts. The combined evidence suggests that the faulty encoding accounts are not required to explain comprehension of ill-formed sentences.
For the processing of grammatical sentences, the accounts make conflicting predictions, and neither the slowdown predicted by the similarity-based interference account, nor the complementary slowdown predicted by the faulty encoding accounts were systematically observed. The majority of studies found no difference between the compared configurations. We tested one possible explanation for the lack of predicted difference, namely, that both slowdowns are present simultaneously and thus conceal each other. We decreased the amount of similarity-based interference: if the effects were concealing each other, decreasing one of them should allow the other to surface. Surprisingly, throughout three larger-sample single-trial online experiments, we consistently found the slowdown predicted by the faulty encoding accounts, but no effects consistent with the presence of inhibitory interference.
The overall pattern of the results observed across all the experiments reported in this dissertation is consistent with previous findings: predictions of the interference accounts for the processing of ungrammatical sentences receive consistent support, but the predictions for the processing of grammatical sentences are not always met. Recent proposals by Nicenboim et al. (2016) and Mertzen et al. (2020) suggest that interference might arise only in people with high working memory capacity or under deep processing mode. Following these proposals, we tested whether interference effects might depend on the depth of processing: we manipulated the complexity of the training materials preceding the grammatical experimental sentences while making no changes to the experimental materials themselves. We found that the slowdown predicted by the faulty encoding accounts disappears in the deep processing mode, but the effects consistent with the predictions of the similarity-based interference account do not arise.
Independently of whether similarity-based interference arises under deep processing mode or not, our results suggest that the faulty encoding accounts cannot be dismissed since they make unique predictions with regard to processing grammatical sentences, which are supported by data. At the same time, the support is not unequivocal: the slowdowns are present only in the superficial processing mode, which is not predicted by the faulty encoding accounts. Our results might therefore favor a much simpler system that superficially tracks number features and is distracted by every plural feature.
Smart contracts promise to reform the legal domain by automating clerical and procedural work, and minimizing the risk of fraud and manipulation. Their core idea is to draft contract documents in a way which allows machines to process them, to grasp the operational and non-operational parts of the underlying legal agreements, and to use tamper-proof code execution alongside established judicial systems to enforce their terms. The implementation of smart contracts has been largely limited by the lack of an adequate technological foundation which does not place an undue amount of trust in any contract party or external entity. Only recently did the emergence of Decentralized Applications (DApps) change this: Stored and executed via transactions on novel distributed ledger and blockchain networks, powered by complex integrity and consensus protocols, DApps grant secure computation and immutable data storage while at the same time eliminating virtually all assumptions of trust.
However, research on how to effectively capture, deploy, and most of all enforce smart contracts with DApps in mind is still in its infancy. Starting from the initial expression of a smart contract's intent and logic, to the operation of concrete instances in practical environments, to the limits of automatic enforcement---many challenges remain to be solved before a widespread use and acceptance of smart contracts can be achieved.
This thesis proposes a model-driven smart contract management approach to tackle some of these issues. A metamodel and semantics of smart contracts are presented, containing concepts such as legal relations, autonomous and non-autonomous actions, and their interplay. Guided by the metamodel, the notion and a system architecture of a Smart Contract Management System (SCMS) is introduced, which facilitates smart contracts in all phases of their lifecycle. Relying on DApps in heterogeneous multi-chain environments, the SCMS approach is evaluated by a proof-of-concept implementation showing both its feasibility and its limitations.
Further, two specific enforceability issues are explored in detail: The performance of fully autonomous tamper-proof behavior with external off-chain dependencies and the evaluation of temporal constraints within DApps, both of which are essential for smart contracts but challenging to support in the restricted transaction-driven and closed environment of blockchain networks. Various strategies of implementing or emulating these capabilities, which are ultimately applicable to all kinds of DApp projects independent of smart contracts, are presented and evaluated.
Conceptual knowledge about objects, people and events in the world is central to human cognition, underlying core cognitive abilities such as object recognition and use, and word comprehension. Previous research indicates that concepts consist of perceptual and motor features represented in modality-specific perceptual-motor brain regions. In addition, cross-modal convergence zones integrate modality-specific features into more abstract conceptual representations.
However, several questions remain open: First, to what extent does the retrieval of perceptual-motor features depend on the concurrent task? Second, how do modality-specific and cross-modal regions interact during conceptual knowledge retrieval? Third, which brain regions are causally relevant for conceptually-guided behavior? This thesis addresses these three key issues using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in the healthy human brain.
Study 1 - an fMRI activation study - tested to what extent the retrieval of sound and action features of concepts, and the resulting engagement of auditory and somatomotor brain regions depend on the concurrent task. 40 healthy human participants performed three different tasks - lexical decision, sound judgment, and action judgment - on words with a high or low association to sounds and actions. We found that modality-specific regions selectively respond to task-relevant features: Auditory regions selectively responded to sound features during sound judgments, and somatomotor regions selectively responded to action features during action judgments. Unexpectedly, several regions (e.g. the left posterior parietal cortex; PPC) exhibited a task-dependent response to both sound and action features. We propose these regions to be "multimodal", and not "amodal", convergence zones which retain modality-specific information.
Study 2 - an fMRI connectivity study - investigated the functional interaction between modality-specific and multimodal areas during conceptual knowledge retrieval. Using the above fMRI data, we asked (1) whether modality-specific and multimodal regions are functionally coupled during sound and action feature retrieval, (2) whether their coupling depends on the task, (3) whether information flows bottom-up, top-down, or bidirectionally, and (4) whether their coupling is behaviorally relevant. We found that functional coupling between multimodal and modality-specific areas is task-dependent, bidirectional, and relevant for conceptually-guided behavior. Left PPC acted as a connectivity "switchboard" that flexibly adapted its coupling to task-relevant modality-specific nodes.
Hence, neuroimaging studies 1 and 2 suggested a key role of left PPC as a multimodal convergence zone for conceptual knowledge. However, as neuroimaging is correlational, it remained unknown whether left PPC plays a causal role as a multimodal conceptual hub. Therefore, study 3 - a TMS study - tested the causal relevance of left PPC for sound and action feature retrieval. We found that TMS over left PPC selectively impaired action judgments on low sound-low action words, as compared to sham stimulation. Computational simulations of the TMS-induced electrical field revealed that stronger stimulation of left PPC was associated with worse performance on action, but not sound, judgments. These results indicate that left PPC causally supports conceptual processing when action knowledge is task-relevant and cannot be compensated by sound knowledge. Our findings suggest that left PPC is specialized for action knowledge, challenging the view of left PPC as a multimodal conceptual hub.
Overall, our studies support "hybrid theories" which posit that conceptual processing involves both modality-specific perceptual-motor regions and cross-modal convergence zones. In our new model of the conceptual system, we propose conceptual processing to rely on a representational hierarchy from modality-specific to multimodal up to amodal brain regions. Crucially, this hierarchical system is flexible, with different regions and connections being engaged in a task-dependent fashion. Our model not only reconciles the seemingly opposing grounded cognition and amodal theories, it also incorporates task dependency of conceptually-related brain activity and connectivity, thereby resolving several current issues on the neural basis of conceptual knowledge retrieval.
In our daily life, recurrence plays an important role on many spatial and temporal scales and in different contexts. It is the foundation of learning, be it in an evolutionary or in a neural context. It therefore seems natural that recurrence is also a fundamental concept in theoretical dynamical systems science. The way in which states of a system recur or develop in a similar way from similar initial states makes it possible to infer information about the underlying dynamics of the system. The mathematical space in which we define the state of a system (state space) is often high dimensional, especially in complex systems that can also exhibit chaotic dynamics. The recurrence plot (RP) enables us to visualize the recurrences of any high-dimensional systems in a two-dimensional, binary representation. Certain patterns in RPs can be related to physical properties of the underlying system, making the qualitative and quantitative analysis of RPs an integral part of nonlinear systems science. The presented work has a methodological focus and further develops recurrence analysis (RA) by addressing current research questions related to an increasing amount of available data and advances in machine learning techniques. By automatizing a central step in RA, namely the reconstruction of the state space from measured experimental time series, and by investigating the impact of important free parameters this thesis aims to make RA more accessible to researchers outside of physics.
The first part of this dissertation is concerned with the reconstruction of the state space from time series. To this end, a novel idea is proposed which automates the reconstruction problem in the sense that there is no need to preprocess the data or estimate parameters a priori. The key idea is that the goodness of a reconstruction can be evaluated by a suitable objective function, and that this function is minimized in the embedding process. In addition, the new method can process multivariate time series as input. This is particularly important because multi-channel, sensor-based observations are ubiquitous in many research areas and continue to increase. Building on this, the described minimization problem of the objective function is then solved using a machine learning approach.
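As a toy illustration of this idea: candidate time-delay embeddings are constructed and the one minimizing an objective function is kept. The embedding construction below is standard; the objective is a deliberately simple placeholder, not the cost function proposed in the thesis.

```python
import numpy as np

def delay_embedding(x, dim, tau):
    """Time-delay embedding of a univariate series x with dimension dim and delay tau."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def objective(Y):
    """Placeholder cost penalizing redundancy between embedding coordinates.
    The thesis proposes its own dedicated objective function, not this one."""
    c = np.corrcoef(Y.T)
    return np.abs(c[np.triu_indices_from(c, k=1)]).mean()

x = np.sin(0.1 * np.arange(2000)) + 0.1 * np.random.randn(2000)  # toy signal
candidates = [(d, t) for d in (2, 3, 4) for t in (1, 5, 10, 20)]
best = min(candidates, key=lambda p: objective(delay_embedding(x, *p)))
print("selected (dim, tau):", best)
```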
In the second part, technical and methodological aspects of RA are discussed. First, we mathematically justify the idea of setting the most influential free parameter in RA, the recurrence threshold ε, in relation to the distribution of all pairwise distances in the data. This is especially important when comparing different RPs and their quantification statistics, and is fundamental to any comparative study. Second, some aspects of recurrence quantification analysis (RQA) are examined. As correction schemes for biased RQA statistics based on diagonal lines, we propose a simple method for dealing with border effects of an RP in RQA and a skeletonization algorithm for RPs. This results in less biased (diagonal-line-based) RQA statistics for flow-like data. Third, a novel type of RQA characteristic is developed, which can be viewed as a generalized nonlinear power spectrum of high-dimensional systems. The spike power spectrum transforms a spike-train-like signal into the frequency domain. When the diagonal-line-dependent recurrence rate (τ-RR) of an RP is transformed in this way, characteristic periods that can be seen in the state space representation of the system can be unraveled. This is not the case when τ-RR is Fourier transformed directly.
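To make the threshold convention above concrete, here is a minimal sketch (not the thesis code): ε is fixed as a quantile of the distribution of all pairwise distances, so that RPs of different systems have a comparable recurrence rate. The quantile value is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def recurrence_plot(Y, quantile=0.08):
    """Binary RP with threshold eps chosen as a quantile of all pairwise distances."""
    d = pdist(Y)                    # all pairwise distances in state space
    eps = np.quantile(d, quantile)  # data-adaptive recurrence threshold
    return (squareform(d) <= eps).astype(np.uint8)  # diagonal (LOI) is 1 by construction

Y = np.random.randn(500, 3)  # toy trajectory; use an embedded series in practice
R = recurrence_plot(Y)
print("recurrence rate:", R.mean())
```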
Finally, RA and RQA are applied to climate science in the third part and to neuroscience in the fourth part. To the best of our knowledge, this is the first time RPs and RQA have been used to analyze lake sediment data in a paleoclimate context. We therefore first elaborate on the basic formalism and the interpretation of visible patterns in RPs in relation to the underlying proxy data. We show that these patterns can be used to classify certain types of variability and transitions in the potassium record from six short (< 17 m) sediment cores collected during the Chew Bahir Drilling Project. Building on this, the long composite core from the same site is analyzed, and two types of variability and transitions are identified and compared with a wetness index from an ODP site in the eastern Mediterranean. The first type of variability likely reflects the influence of precessional forcing in the lower latitudes at times of maximum values of the long eccentricity cycle of the Earth's orbit around the sun, with a tendency towards extreme events. The second type of variability appears to be related to the minimum values of this cycle and corresponds to fairly rapid transitions between relatively dry and relatively wet conditions.
In contrast, RQA has been applied in the neuroscientific context for almost two decades. In the final part, RQA statistics are used to quantify the complexity in a specific frequency band of multivariate EEG (electroencephalography) data. By analyzing experimental data, it can be shown that the complexity of the signal measured in this way across the sensorimotor cortex decreases as motor tasks are performed. The results are consistent with and complement the well-known concepts of motor-related brain processes. We assume that the features of neuronal dynamics in the sensorimotor cortex discovered in this way, together with the robust RQA methods for identifying and classifying them, contribute to the non-invasive, EEG-based development of brain-computer interfaces (BCIs) for motor control and rehabilitation.
The present work is an important step towards a robust analysis of complex systems based on recurrence.
Identification of chemical mediators that regulate the specialized metabolism in Nostoc punctiforme
Specialized metabolites, so-called natural products, are produced by a variety of different organisms, including bacteria and fungi. Due to their wide range of different biological activities, including pharmaceutical relevant properties, microbial natural products are an important source for drug development. They are encoded by biosynthetic gene clusters (BGCs), which are a group of locally clustered genes. By screening genomic data for genes encoding typical core biosynthetic enzymes, modern bioinformatical approaches are able to predict a wide range of BGCs. To date, only a small fraction of the predicted BGCs have their associated products identified.
The phylum of cyanobacteria has been shown to be a prolific but largely untapped source of natural products. Especially multicellular cyanobacterial genera, like Nostoc, harbor a large number of BGCs in their genomes.
A main goal of this study was to develop new concepts for the discovery of natural products in cyanobacteria. Due to its diverse set of orphan BGCs and its amenability to genetic manipulation, Nostoc punctiforme PCC 73102 (N. punctiforme) appeared to be a promising candidate for establishment as a model organism for natural product discovery in cyanobacteria. By utilizing a combination of genome mining, bioactivity screening, variations of culture conditions, and metabolic engineering, not only were two new polyketides discovered, but first-time insights into the regulation of the specialized metabolism in N. punctiforme were also gained during this study.
The cultivation of N. punctiforme to very high densities, using increasing light intensities and CO2 levels, led to an enhanced metabolite production and thus to rather complex metabolite extracts. Using a library of CFP reporter mutant strains, each reporting for one of the predicted BGCs, it was shown that eight out of 15 BGCs were upregulated under high-density (HD) cultivation conditions. Furthermore, it could be demonstrated that the supernatant of an HD culture can increase the expression of four of the affected BGCs even under conventional cultivation conditions. This led to the hypothesis that a chemical mediator encoded by one of the affected BGCs accumulates in the HD supernatant and is able to increase the expression of other BGCs as part of a cell-density-dependent regulatory circuit. To identify which of the BGCs could be a main trigger of the presumed regulatory circuit, we attempted to selectively activate four BGCs (pks1, pks2, ripp3, ripp4) by overexpressing putative pathway-specific regulatory genes found inside the gene clusters. Transcriptional analysis of the mutants revealed that only the mutant strain targeting the pks1 BGC, called AraC_PKS1, was able to upregulate the expression of its associated BGC. An RNA sequencing study of the AraC_PKS1 mutant strain showed that, besides pks1, the orphan BGCs ripp3 and ripp4 were also upregulated in the mutant strain. Furthermore, it was observed that secondary metabolite production in the AraC_PKS1 mutant strain is further enhanced under high-light and high-CO2 cultivation conditions. The increased production of the pks1 regulator NvlA also had an impact on other regulatory factors, including sigma factors and the RNA chaperone Hfq. Analysis of the AraC_PKS1 cell and supernatant extracts led to the discovery of two novel polyketides, nostoclide and nostovalerolactone, both encoded by the pks1 BGC. Addition of the polyketides to N. punctiforme WT demonstrated that the pks1-derived compounds can partly reproduce the effects on secondary metabolite production found in the AraC_PKS1 mutant strain. This indicates that both compounds act as extracellular signaling factors within a regulatory network. Since not all transcriptional effects found in the AraC_PKS1 mutant strain could be reproduced by the pks1 products, it can be assumed that the regulator NvlA has a global effect and is not exclusively specific to the pks1 pathway.
This study was the first to use a putative pathway-specific regulator for the targeted activation of BGC expression in cyanobacteria. This strategy not only led to the detection of two novel polyketides, it also gave first-time insights into the regulatory mechanisms of the specialized metabolism in N. punctiforme. The study illustrates that understanding regulatory pathways can aid the discovery of novel natural products. Its findings can guide the design of new screening strategies for bioactive compounds in cyanobacteria and help to develop high-titer production platforms for cyanobacterial natural products.
Silicate melts are major components of the Earth's interior and as such make an essential contribution to igneous processes, to the dynamics of the solid Earth, and to the chemical evolution of the entire Earth. Macroscopic physical and chemical properties such as density, compressibility, viscosity and degree of polymerization are determined by the atomic structure of the melt. Depending on pressure, but also on temperature and chemical composition, silicate melts show different structural properties. These properties are best described by the local coordination environment, i.e. the symmetry and number of neighbors (coordination number) of an atom, as well as the distance between the central atom and its neighbors (interatomic distance). With increasing pressure and temperature, i.e. with increasing depth in the Earth, the density of the melt increases, which can lead to changes in coordination numbers and distances. If the coordination number remains the same, the distance usually decreases; if the coordination number increases, the distance can increase. These general trends can, however, vary greatly, which can be attributed in particular to the chemical composition.
Because natural melts of the deep Earth are not accessible to direct investigation, extensive experimental and theoretical studies have been carried out to understand their properties under the relevant conditions. These have often used amorphous samples of the end-members SiO2 and GeO2, with the latter serving as a structural and chemical analog of SiO2. Commonly, the experiments were carried out at high pressure and room temperature. Natural melts, however, are chemically much more complex than the simple end-members SiO2 and GeO2, so that observations made on the latter may lead to incorrect compression models. Furthermore, investigations on glasses at room temperature may deviate strongly from the properties of melts under natural thermodynamic conditions.
The aim of this thesis was to clarify the influence of composition and temperature on the structural properties of melts at high pressures. To this end, we studied complex alumino-germanate and alumino-silicate glasses; more precisely, synthetic glasses with the composition of the mineral albite and of an albite-diopside mixture at the eutectic point. The albite glass is structurally similar to a simplified granitic melt, while the albite-diopside glass simulates a simplified basaltic melt. To study the local coordination environment of the elements, we used X-ray absorption spectroscopy in combination with a diamond anvil cell. Because the diamonds strongly absorb X-rays with energies below 10 keV, the direct investigation of geologically relevant elements such as Si, Al, Ca or Mg with this spectroscopic probe in combination with a diamond anvil cell is not possible. The glasses were therefore doped with Ge and Sr, which serve partially or fully as substitutes for important major elements: Ge substitutes for Si and other network formers, while Sr replaces network modifiers such as Ca, Na and Mg, as well as other cations with a large ionic radius.
In a first step, we studied the Ge K-edge in Ge-albite glass, NaAlGe3O8, at room temperature up to 131 GPa. This glass has a higher chemical complexity than SiO2 and GeO2, but is still fully polymerized; the differences in the compression mechanism between this glass and the simple oxides can therefore clearly be attributed to the higher chemical complexity. The albite and albite-diopside compositions, partially doped with Ge and Sr, were probed at room temperature up to 164 GPa for Ge and up to 42 GPa for Sr. While the albite glass is nominally fully polymerized like NaAlGe3O8, the albite-diopside glass is partially depolymerized. The results show that structural changes take place in all three glasses within the first 25 to at most 30 GPa, with Ge and Sr reaching maximum coordination numbers of 6 and ∼9, respectively. At higher pressures, only isostructural shrinkage of the coordination polyhedra takes place in the glasses. The most important finding of the high-pressure studies on the alumino-silicate and alumino-germanate glasses is that in these complex glasses the polyhedra show a much higher compressibility than observed in the end-members. This is shown in particular by the strong shortening of the Ge-O distances in amorphous NaAlGe3O8 and in the albite-diopside glass at pressures above 30 GPa.
In addition to the effects of composition on the compaction process, we investigated the influence of temperature on the structural changes. To do this, we probed the albite-diopside glass, as it is chemically most similar to the melts of the lower mantle. We studied the Ge K-edge of the sample with a resistively heated and a laser-heated diamond anvil cell, for pressures up to 48 GPa and temperatures up to 5000 K. High temperatures at which the sample is liquid, and which are relevant for the Earth's mantle, have a significant impact on the structural transformation, shifting it by approximately 30% to lower pressures compared to the glasses at room temperature and below 1000 K.
The results of this thesis represent an important contribution to the understanding of the properties of melts under lower-mantle conditions. In the context of the discussion about the existence and origin of ultra-dense silicate melts at the core-mantle boundary, these investigations show that their higher density compared to the surrounding material cannot be explained by structural features alone, but requires a distinct chemical composition. The results also suggest that only very low solubilities of noble gases are to be expected for melts in the lower mantle, so that the structural properties clearly influence the overall budget and transport of noble gases in the Earth's mantle.
In Systems Medicine, in addition to high-throughput molecular data (*omics), the wealth of clinical characterization plays a major role in the overall understanding of a disease. Unique problems and challenges arise from the heterogeneity of the data and require new software solutions and analysis methods. The SMART and EurValve studies establish a Systems Medicine approach to valvular heart disease, the primary cause of subsequent heart failure.
With the aim of ascertaining a holistic understanding, different *omics as well as the clinical picture of patients with aortic stenosis (AS) and mitral regurgitation (MR) are collected. Our task within the SMART consortium was to develop an IT platform for Systems Medicine as a basis for data storage, processing, and analysis, and as a prerequisite for collaborative research. Based on this platform, this thesis deals on the one hand with the transfer of established Systems Biology methods to the Systems Medicine context, and on the other hand with the clinical and biomolecular differences between the two heart valve diseases. To advance differential expression/abundance (DE/DA) analysis software for use in Systems Medicine, we state 21 general requirements and features of automated DE/DA software, including a novel concept for the simple formulation of experimental designs that can represent complex hypotheses, such as the comparison of multiple experimental groups, and we demonstrate our handling of the wealth of clinical data in two research applications, DEAME and Eatomics. In user interviews, we show that novice users are empowered to formulate and test multiple DE hypotheses based on clinical phenotype. Furthermore, we describe insights into users' general impression and expectation of the software's performance and show their intention to continue using the software for their work in the future. Both research applications cover most of the features of existing tools or even extend them, especially with respect to complex experimental designs. Eatomics is freely available to the research community as a user-friendly R Shiny application.
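As a deliberately simplified illustration of the kind of DE/DA test behind such tools (a generic two-group comparison with multiple-testing correction; this is not Eatomics' actual implementation, which is an R Shiny application supporting far richer designs):

```python
# Generic DE/DA sketch: per-protein t-tests with Benjamini-Hochberg correction.
# Data below are simulated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, size=(200, 8))  # 200 proteins x 8 samples
group_b = rng.normal(0.3, 1.0, size=(200, 8))

t, p = stats.ttest_ind(group_a, group_b, axis=1)

# Benjamini-Hochberg FDR: scale sorted p-values, enforce monotonicity, map back
order = np.argsort(p)
ranked = p[order] * len(p) / (np.arange(len(p)) + 1)
q_sorted = np.minimum.accumulate(ranked[::-1])[::-1]
q = q_sorted[np.argsort(order)]
print("proteins with q < 0.05:", (q < 0.05).sum())
```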
Eatomics subsequently helped drive the collaborative analysis and interpretation of the proteomic profile of 75 human left-myocardial tissue samples from the SMART and EurValve studies. Here, we investigate molecular changes within the two most common types of valvular heart disease: aortic valve stenosis (AS) and mitral valve regurgitation (MR). Through DE/DA analyses, we explore shared and disease-specific protein alterations, particularly signatures that could only be found in the sex-stratified analysis. In addition, we relate changes in the myocardial proteome to parameters from clinical imaging. We find comparable cardiac hypertrophy but differences in ventricular size, the extent of fibrosis, and cardiac function. AS and MR share many remodeling effects, the most prominent of which is an increase in the extracellular matrix and a decrease in metabolism; both effects are stronger in AS. In muscle and cytoskeletal adaptations, we see a greater increase in mechanotransduction in AS and an increase in the cortical cytoskeleton in MR. The decrease in proteostasis proteins is mainly attributable to the signature of female patients with AS. We also identify relevant therapeutic targets.
In addition to the new findings, our work confirms several concepts from animal and heart failure studies by providing the largest collection to date of human tissue from in vivo collected biopsies. Our dataset contributes a resource for isoform-specific protein expression in two of the most common valvular heart diseases. Beyond the general proteomic landscape, we demonstrate the added value of the dataset by providing proteomic and transcriptomic evidence for increased expression of the SARS-CoV-2 receptor under pressure load, but not under volume load, in the left ventricle, and we also provide the basis of a newly developed metabolic model of the heart.
Over the past decades, natural hazards, many of which are aggravated by climate change and show an increasing trend in frequency and intensity, have caused significant human and economic losses and pose a considerable obstacle to sustainable development. Hence, dedicated action toward disaster risk reduction is needed to understand the underlying drivers and create efficient risk mitigation plans. Such action is requested by the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR), a global agreement launched in 2015 that states priorities for action, e.g. an improved understanding of disaster risk. Turkey is one of the SFDRR contracting countries and has been severely affected by many natural hazards, in particular earthquakes and floods. However, disproportionately little is known about flood hazards and risks in Turkey. This thesis therefore aims to carry out, for the first time, a comprehensive analysis of flood hazards in Turkey, from triggering drivers to impacts. It is intended to contribute to a better understanding of flood risks, to improvements in flood risk mitigation, and to facilitated monitoring of progress and achievements in implementing the SFDRR.
In order to investigate the occurrence and severity of flooding in comparison to other natural hazards in Turkey, and to provide an overview of the temporal and spatial distribution of flood losses, the Turkey Disaster Database (TABB) was examined for the years 1960-2014. The TABB database was reviewed through comparison with the Emergency Events Database (EM-DAT), the Dartmouth Flood Observatory database, the scientific literature and news archives. In addition, data on the most severe flood events between 1960 and 2014 were retrieved. These served as a basis for analyzing triggering mechanisms (i.e. atmospheric circulation and precipitation amounts) and aggravating pathways (i.e. topographic features, catchment size, land use types and soil properties). For this, a new approach was developed and the events were classified using hierarchical cluster analyses, in order to identify the main influencing factor per event and provide additional information about the dominant flood pathways for severe floods. The main idea of the study was to start from the event impacts in a bottom-up approach and identify the causes that created damaging events, instead of applying a model chain with long-term series as input and searching for potentially impacting events as model outcomes. However, the frequency analysis of the flood-triggering circulation pattern types revealed that some heavy precipitation events were not included in the list of most severe floods, i.e. their impacts were not recorded in national and international loss databases but were mentioned in news archives and reported by the Turkish State Meteorological Service. This finding challenges bottom-up modelling approaches and underlines the urgent need for consistent event and loss documentation. As a next step, the aim was therefore to enhance flood loss documentation by calibrating, validating and applying the United Nations Office for Disaster Risk Reduction (UNDRR) loss estimation method to the recent severe flood events (2015-2020). This provided a consistent flood loss estimation model for Turkey, allowing governments to estimate losses as quickly as possible after events, e.g. to better coordinate financial aid.
This thesis reveals that, after earthquakes, floods have the second most destructive effects in Turkey in terms of human and economic impacts, with over 800 fatalities and US$ 885.7 million in economic losses between 1960 and 2020, and that they deserve more attention on the national scale. The clustering results of the dominant flood-producing mechanisms (e.g. circulation pattern types, extreme rainfall, sudden snowmelt) provide crucial information for source and pathway identification, which can be used as base information for hazard identification in the preliminary risk assessment process. The implementation of the UNDRR loss estimation model shows that the model, with country-specific parameters, calibrated damage ratios and sufficient event documentation (i.e. physically damaged units), can be recommended for providing first estimates of the magnitude of direct economic losses, even shortly after events have occurred, since it performed well when estimates were compared to documented losses.
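At its core, such a unit-based loss estimate sums, over all damaged asset classes, the number of physically damaged units times a damage ratio times a replacement value. A minimal sketch follows; all figures are hypothetical placeholders, not the calibrated, country-specific parameters used in this thesis.

```python
# Direct economic loss as a sum over physically damaged units:
#   loss = count * damage_ratio * unit_replacement_value
# All values below are hypothetical placeholders for illustration.
damaged_units = {
    "houses":   {"count": 120, "damage_ratio": 0.4, "unit_value": 50_000},
    "roads_km": {"count": 15,  "damage_ratio": 0.6, "unit_value": 200_000},
}

total_loss = sum(
    u["count"] * u["damage_ratio"] * u["unit_value"] for u in damaged_units.values()
)
print(f"estimated direct loss: US$ {total_loss:,.0f}")
```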
The presented results can contribute to improving the national disaster loss database in Turkey and thus enable a better monitoring of the national progress and achievements with regard to the targets stated by the SFDRR. In addition, the outcomes can be used to better characterize and classify flood events. Information on the main underlying factors and aggravating flood pathways further supports the selection of suitable risk reduction policies.
All input variables used in this thesis were obtained from publicly available data. The results are openly accessible and can be used for further research.
As an overall conclusion, it can be stated that consistent loss data collection and better event documentation should gain more attention for a reliable monitoring of the implementation of the SFDRR. Better event documentation should be established according to a globally accepted standard for disaster classification and loss estimation in Turkey. Ultimately, this enables stakeholders to create better risk mitigation actions based on clear hazard definitions, flood event classification and consistent loss estimations.
The development and optimization of carbonaceous materials is of great interest for several applications, including gas sorption, electrochemical storage and conversion, and heterogeneous catalysis. This thesis presents the exploration and optimization of nitrogen-containing carbonaceous materials by direct condensation of judiciously chosen molecular precursors. As suggested by the concept of noble carbons, the choice of a stable, nitrogen-containing precursor leads to an even more stable, nitrogen-doped carbonaceous material with controlled structure and electronic properties. Molecules fulfilling this requirement are, for example, nucleobases. The direct condensation of nucleobases yields carbonaceous materials with very high nitrogen content without any further pre- or post-treatment. By salt melt templating, the pore structure can be adjusted without the use of hazardous or toxic reagents, and the template can be reused.
Using these simple tools, the synergistic effect of the pore structure and nitrogen content of the materials can be explored. Within this thesis, the influence of the condensation parameters is correlated with the structure and performance of the materials. First, the influence of the condensation temperature on the porosity and nitrogen content of guanine-derived materials is discussed, and the exploration of highly CO2-selective structural pores in C1N1 materials is shown. Further tuning of the pore structure by salt melt templating is then explored; the potential of the prepared materials as heterogeneous catalysts and their basic catalytic strength are correlated with their nitrogen content and pore morphology. A similar approach is used to explore the water sorption behavior of uric acid derived carbonaceous materials as potential sorbents for heat transformation applications. Changes in maximum water uptake and hydrophilicity of the prepared materials are correlated with the nitrogen content and pore architecture. Owing to the high thermal stability, porosity, and nitrogen content of ionic liquid derived nitrogen-doped carbonaceous materials, a simple impregnation and calcination route can be used to obtain copper-nanocluster-decorated nitrogen-doped carbonaceous materials. The activity of the obtained materials as catalysts for the oxygen reduction reaction is shown, and structure-performance relations are discussed.
In conclusion, the versatility of nitrogen-doped carbonaceous materials with a nitrogen-to-carbon ratio of up to one is shown. The possibility of tuning both the pore structure and the nitrogen content by a simple procedure involving salt melt templating and molecular precursors, as well as the effect of these parameters on performance, is discussed.
This work develops hybrid methods of imaging spectroscopy for open pit mining and examines their feasibility compared with the state of the art. The material distribution within a mine face varies on small scales and within daily assigned extraction segments. These changes can be relevant to subsequent processing steps but are not always visually identifiable prior to extraction. Misclassifications that cause false allocations of extracted material need to be minimized in order to reduce energy-intensive material re-handling. The use of imaging spectroscopy aims at allocating relevant deposit-specific materials before extraction, allowing for efficient material handling afterwards. The goals of this work are the parameterization of imaging spectroscopy for pit mining applications and the development and evaluation of a workflow for ground-based spectral characterization of a mine face. An application-based sensor adaptation is proposed: the sensor complexity is reduced by downsampling the spectral resolution of the system based on the samples' spectral characteristics. This was achieved by evaluating existing hyperspectral outcrop analysis approaches on laboratory sample scans from the iron quadrangle in Minas Gerais, Brazil, and by developing a spectral mine face monitoring workflow that was tested in both an operating and an inactive open pit copper mine in the Republic of Cyprus.
The workflow presented here is applied to three regional data sets: 1) iron ore samples from Brazil (laboratory); 2) samples and hyperspectral mine face imagery from the copper-gold-pyrite mine Apliki, Republic of Cyprus (laboratory and mine face data); and 3) samples and hyperspectral mine face imagery from the copper-gold-pyrite deposit Three Hills, Republic of Cyprus (laboratory and mine face data).

The hyperspectral laboratory dataset of fifteen Brazilian iron ore samples was used to evaluate different analysis methods and different sensor models. Nineteen commonly used methods for analyzing and mapping hyperspectral data were compared with respect to their resulting data products, mapping accuracy and computation time. Four of the evaluated methods were selected as the best-performing algorithms for subsequent analyses: the spectral angle mapper (SAM), a support vector machine algorithm (SVM), the binary feature fitting algorithm (BFF) and the EnMAP geological mapper (EnGeoMap). Next, commercially available imaging spectroscopy sensors were evaluated for their usability under open pit mining conditions. Step-wise downsampling of the data, i.e. reducing the number of bands while increasing each band's bandwidth, was performed to investigate a possible simplification and ruggedization of a sensor without a quality fall-off of the mapping results. The impact of the atmosphere, visible in the spectrum between 1300 and 2010 nm, was reduced by excluding this spectral range from the mapping, which tested the feasibility of the method under realistic open pit data conditions. Thirteen datasets based on the different downsampled sensors were analyzed with the four predetermined methods. The optimum sensor for spectral mine face material distinction was determined to be a VNIR-SWIR sensor with 40 nm bandwidths in the VNIR and 15 nm bandwidths in the SWIR spectral range, excluding the atmospherically impacted bands.

The Apliki mine sample dataset was used for the application of the selected analyses and sensors. Thirty-six samples were analyzed geochemically and mineralogically. The sample spectra were compiled into two spectral libraries, both distinguishing seven different geochemical-spectral clusters. The reflectance dataset was downsampled to five different sensors, and the five resulting datasets were mapped with the SAM, BFF and SVM methods, achieving mapping accuracies of 85-72%, 85-76% and 57-46%, respectively. One mine face scan of Apliki was used for the application of the developed workflow. The mapping results were validated against the geochemistry and mineralogy of thirty-six documented field sampling points and against a zonation map of the mine face based on sixty-six samples and field mapping. The mine face was analyzed with SAM and BFF, and the analysis maps were visualized on top of a Structure-from-Motion-derived 3D model of the open pit. The mapped geological units and zones correlate well with the expected zonation of the mine face.

The third set of hyperspectral imagery, from Three Hills, was available for applying the fully developed workflow. Geochemical sample analyses and laboratory spectral data of fifteen different samples from the Three Hills mine, Republic of Cyprus, were used to analyze a downsampled mine face scan of the open pit. Here, areas of low, medium and high ore content were identified.
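To make one of the four selected methods concrete: the spectral angle mapper assigns each pixel spectrum to the library endmember with the smallest spectral angle. A minimal sketch follows; the library spectra and class names are toy placeholders, not the thesis data.

```python
import numpy as np

def spectral_angle(a, b):
    """Angle (radians) between two reflectance spectra; small angle = similar material."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(pixel, library):
    """Return the library class whose endmember spectrum has the smallest angle."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

library = {  # hypothetical endmember spectra over a few bands
    "gossan": np.array([0.10, 0.25, 0.40, 0.35]),
    "basalt": np.array([0.08, 0.09, 0.11, 0.12]),
}
print(sam_classify(np.array([0.09, 0.22, 0.37, 0.33]), library))
```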
The developed workflow was successfully applied to the open pit mines Apliki and Three Hills, and the spectral maps reflect the prevailing geological conditions. This work guides through the acquisition, preparation and processing of imaging spectroscopy data, the optimal choice of analysis methodology, and the use of simplified, robust sensors that meet the requirements of open pit mining conditions. It accentuates the importance of site- and deposit-specific spectral libraries for mine face analysis and underlines the need for geological and spectral analysis experts to successfully implement imaging spectroscopy in open pit mining.
This thesis addresses the preparation and characterization of mixed-matrix membranes (MMMs) for gas separation. Various fillers were combined with the membrane material polysulfone to produce MMMs: three active and two passive fillers. The active fillers possessed pore openings capable of separating gases according to molecular size, which results in a higher ideal separation factor for certain gas pairs than in polysulfone itself. The permanent channels formed by the pores of the active fillers also enable faster gas transport (higher permeability) than in polysulfone. The active fillers were the zeolite SAPO-34 and two batches of the zeolitic imidazolate framework (ZIF) ZIF-8. The two ZIF-8 batches differed in their specific surface area, so that this influence could be specifically included in the gas transport investigations. The passive fillers were an amino-functionalized silica gel and non-porous (dense) glass beads. The silica gel had pores that were too large to separate gases effectively, and the glass beads could not separate gases at all, as they had no pores.
It is known from the literature that the embedding of fillers often leads to defects in MMMs. One aim of this work was therefore to optimize the embedding. Furthermore, gas transport in the MMMs of this work was to be compared with that in an unfilled polysulfone membrane. Owing to the more selective separation behavior of the active fillers compared to the membrane material, embedding active fillers was expected to improve the separation performance of the MMMs progressively with increasing filler loading.
To investigate the properties of the MMMs, they were characterized by scanning electron microscopy (SEM), gas permeation (GP) measurements and thermogravimetric analysis coupled with mass spectrometry (TGA-MS).
SEM investigations showed improved embedding when a polymeric adhesion promoter was used. The optimized embedding was compared with embedding without adhesion promoter and with results from the literature describing the use of various silanes as adhesion promoters. Despite the improved embedding, only a slight increase in the ideal separation factor of the MMMs compared to the unfilled polysulfone membranes was observed, and only at low filler loadings (10 and 20 wt% relative to the membrane material). At higher filler loadings (30, 40 and 50 wt%), a marked increase in permeability accompanied by a strongly decreasing ideal separation factor was observed. TGA-MS measurements further revealed that the pore openings of the zeolite SAPO-34 were blocked by water molecules. This prevented gas transport within the filler, so that its separation performance could not be exploited. The fillers ZIF-8 (both batches) and amino-functionalized silica gel showed no blocked pores; nevertheless, these MMMs showed no improvement in gas separation or gas transport properties either. MMMs with dense glass beads as filler exhibited the same gas separation and gas transport behavior as all MMMs with the aforementioned fillers.
In this work, despite optimized embedding of inorganic fillers, no improvement of the gas separation or gas transport properties of MMMs could be demonstrated. Rather, an influence of the filler amount on the gas transport properties of the MMMs was found. The changes of the MMMs relative to polysulfone stem from the consequences of embedding fillers into the matrix polymer: the embedding alters the properties of the matrix polymer, which in turn affects gas transport. Furthermore, it was documented that the resulting membrane structure is influenced by the filler loading, independently of the filler type. A correlation between filler amount and altered membrane structure was found.
Insulin resistance is a central component of the metabolic syndrome and contributes substantially to the development of type 2 diabetes. One possible cause of insulin resistance is a chronic low-grade inflammation originating in the adipose tissue of overweight individuals. Infiltrating macrophages produce increased amounts of pro-inflammatory mediators, such as cytokines and prostaglandins, raising the concentrations of these substances both locally and systemically. In addition, overweight individuals exhibit a disturbed fatty acid metabolism and an increased intestinal permeability. An increased flux of free fatty acids from adipose tissue into other organs leads to locally elevated concentrations in these organs, and an increased intestinal permeability facilitates the entry of pathogens and other foreign substances into the body.
The aim of this work was to investigate whether high concentrations of insulin, the bacterial component lipopolysaccharide (LPS) or the free fatty acid palmitate can trigger or amplify an inflammatory response in macrophages, and whether this inflammatory response can contribute to the development of insulin resistance. Furthermore, it was to be examined whether metabolites and signaling substances whose concentrations are elevated in the metabolic syndrome can promote the production of prostaglandin (PG) E2, and whether PGE2 in turn can regulate the inflammatory response and its own production in macrophages. To investigate the influence of these factors on the production of pro-inflammatory mediators in macrophages, monocyte-like cell lines and primary human monocytes isolated from the blood of healthy donors were differentiated into macrophages and incubated with insulin, LPS, palmitate and/or PGE2. In addition, primary rat hepatocytes were isolated and incubated with supernatants of insulin-stimulated macrophages, in order to examine whether the inflammatory response of macrophages contributes to the development of insulin resistance in hepatocytes.
Insulin induced the expression of pro-inflammatory cytokines in macrophage-like cell lines, most likely primarily via the phosphoinositide 3-kinase (PI3K)-Akt pathway with subsequent activation of the transcription factor NF-κB (nuclear factor 'kappa-light-chain-enhancer' of activated B cells). The cytokines released in this process inhibited the insulin-induced expression of glucokinase in primary rat hepatocytes incubated with supernatants of insulin-stimulated macrophages.
LPS and palmitate, whose local concentrations are elevated in the metabolic syndrome, were also able to stimulate the expression of pro-inflammatory cytokines in macrophage-like cell lines. While LPS, according to the literature, undisputedly acts via activation of toll-like receptor (TLR) 4, palmitate appears to act largely independently of TLR4; rather, de novo ceramide synthesis seemed to play a decisive role. Moreover, insulin amplified both the LPS-induced and the palmitate-induced inflammatory response in both cell lines. The results obtained in cell lines were largely confirmed in primary human macrophages.
Furthermore, insulin as well as LPS and palmitate induced the production of PGE2 in the macrophages investigated. The data suggest that this is due to an increased expression of PGE2-synthesizing enzymes.
PGE2, on the one hand, inhibited the stimulus-dependent expression of the pro-inflammatory cytokine tumor necrosis factor (TNF) α in U937 macrophages. On the other hand, it amplified the expression of the pro-inflammatory cytokines interleukin (IL-) 1β and IL-8. In addition, it enhanced the expression of IL-6-type cytokines, which can act both pro- and anti-inflammatorily, and it increased the expression of PGE2-synthesizing enzymes. PGE2 therefore appears to be able to amplify its own synthesis.
In summary, the release of pro-inflammatory mediators from macrophages in the course of hyperinsulinemia can promote the development of insulin resistance. Insulin is thus able to set in motion a vicious circle of ever-increasing insulin resistance.
Metabolites and signaling substances whose concentrations are elevated in the metabolic syndrome (for example LPS, free fatty acids and PGE2) also triggered inflammatory responses in macrophages. The mutual interplay of insulin with these metabolites and signaling substances triggered a stronger inflammatory response in macrophages than any of the individual components alone. The cytokines released in this way could contribute to the manifestation of insulin resistance and the metabolic syndrome.
Membrane contact sites are of particular interest in the fields of synthetic biology and biophysics. They are involved in a great variety of cellular functions: they form between two cellular organelles, or between an organelle and the plasma membrane, in order to establish a communication path for molecule transport or signal transmission.
The goal of this research study was the development of an artificial membrane system that can mimic membrane contact sites using bottom-up synthetic biology. For this, a multi-compartmentalised giant unilamellar vesicle (GUV) system was created, with the membrane of the outer vesicle mimicking the plasma membrane and the inner GUVs acting as cellular organelles.
In the following steps, three different strategies were pursued to achieve internal membrane-membrane adhesion.
Within the context of religious studies, the present study explores the modification and reorientation of a single Christian pictorial motif whose pictorial formula has persisted to the present day.
In contemporary art, the pictorial motif of the Pietà is increasingly used as an innovative pictorial formula in political or social contexts to articulate existential experiences as well as socio-critical and political accusations. It is experiencing a relaunch in media coverage, art, film and everyday culture. Artists and photojournalists increasingly give their works the title Pietà, or the title is attributed to them from outside. The semantics of this specific pictorial motif evidently strike a chord and can evoke an emotional attunement in viewers. Of interest for this study is the system of norms and values together with the underlying process of transmission and transformation. So far, there is no monograph analyzing the connections between the revival of a primarily Christian pictorial motif and its contemporary references to violence, death, fear, transience, aging or loss.
Im Vordergrund steht die Frage nach einer Modifikation bzw. Neuinterpretation dieser Ikonik. Das Aufzeigen eines möglichen dynamischen Entwicklungspro-zesses des Bildmotivs soll klären, welche veränderten Funktionen dem Pietà-Motiv in der Gegenwartskunst zugeschrieben werden. Über ein Set international renomierter, zeitgenössischer Künstler_innen werden eventuelle Veränderun-gen und ein damit verbundener gesellschaftlicher Bedeutungswandel seit dem 21. Jahrhundert analysiert.
Vor diesem Hintergrund ist die Frage nach einer religionsübergreifenden Wirk-mächtigkeit ikonischer Präsenz eines religiösen Bildmotivs in der Kunst und den Bildmedien von aktueller Relevanz. Diese Studie leistet einen exemplarischen Beitrag für die Affektforschung, die sich in den vergangenen Jahren vermehrt mit der Emotionsdarstellung und der Emotionsvermittlung in den audiovisuellen Medien befasst.
The present work addresses the synthesis and characterisation of new functionalised ionic liquids and their polymerisation. The ionic liquids were prepared with both polymerisable cations and polymerisable anions. Azobis(isobutyronitrile) (AIBN) was used as the radical initiator for thermally initiated polymerisations, while bis-4-(methoxybenzoyl)diethylgermanium (Ivocerin®) served as the initiator for photochemically initiated polymerisations.
The homopolymer poly(dimethylaminoethyl methacrylate) was analysed by gel permeation chromatography (GPC) and only afterwards modified in a polymer-analogous fashion. After quaternisation and subsequent anion metathesis, the intrinsic viscosities of these polymers were determined and compared with the intrinsic viscosities of the directly polymerised ionic liquids. For the directly polymerised poly(N-[2-(methacryloyloxy)ethyl]-N-butyl-N,N-dimethylammonium bis(trifluoromethylsulfonyl)imide), [η_Huggins] was 100 mL/g, whereas the polymer prepared by polymer-analogous modification gave [η_Huggins] = 40 mL/g.
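For context, intrinsic viscosities such as [η_Huggins] are conventionally obtained by Huggins extrapolation of dilute-solution viscometry data to zero concentration; the standard textbook relation (a generic form, not quoted from the thesis) is

\[ \frac{\eta_{sp}}{c} = [\eta] + k_H\,[\eta]^2\,c \]

where η_sp is the specific viscosity, c the polymer concentration and k_H the Huggins constant; [η] follows from the intercept at c → 0.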
The ionic liquids with polymerisable functional groups were investigated by photo-DSC with respect to the maximum rate of polymerisation (Rpmax), the time at which this maximum was reached (tmax), their glass transition temperature (Tg) and the conversion of vinyl protons. In these measurements, both the influence of different alkyl chain lengths on the ammonium ion and the influence of different anions with an unchanged cation structure were analysed. The ethyl-substituted cation polymerised most slowly, with a tmax of 21 seconds and a maximum rate of polymerisation (Rpmax) of 3.3·10⁻² s⁻¹. The tmax values of the other alkyl-substituted ionic liquids with one polymerisable functional group, by contrast, lay between 10 and 15 seconds. The glass transition temperatures of the polymers prepared by photoinduced polymerisation were close together at 44 to 55 °C. All monomers showed a high conversion of vinyl protons, between 93 and 100%.
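In photo-DSC experiments, the rate of polymerisation and the conversion are commonly derived from the measured heat flow by normalising to the theoretical polymerisation enthalpy; a minimal sketch of this standard evaluation (generic form, assumed rather than quoted from the thesis) is

\[ R_p(t) = \frac{d\alpha}{dt} = \frac{1}{\Delta H_{theor}}\,\frac{dH(t)}{dt}, \qquad \alpha(t) = \frac{1}{\Delta H_{theor}} \int_0^t \frac{dH}{dt'}\,dt' \]

which yields Rp as a fractional conversion rate in s⁻¹, consistent with the Rpmax values quoted above.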
Polymer films could be produced using a conveyor-belt unit equipped with an LED (λ = 395 nm). The conversion of double-bond equivalents in these films was determined by 1H NMR spectroscopy. In dynamic mechanical analysis, the polymer films were subjected to periodically alternating loads at a constant heating rate and frequency in order to determine the glass transition temperatures. The lowest Tg, 26 °C, was found for the butyl-substituted N-[2-(methacryloyloxy)ethyl]-N-butyl-N,N-dimethylammonium bis(trifluoromethylsulfonyl)imide prepared as a polymer film with Ivocerin® as initiator, whereas the highest Tg, 51 °C, was found for the same polymer prepared directly by free radical bulk polymerisation of the ionic liquid with AIBN. In addition, the topography of the films was examined with an atomic force microscope, which revealed a domain structure for the polymer N-[2-(methacryloyloxy)ethyl]-N-butyl-N,N-dimethylammonium tris(pentafluoroethyl)trifluorophosphate.
Active Galactic Nuclei (AGN) are considered to be the main powering source of active galaxies, where central Super Massive Black Holes (SMBHs) with masses between 10⁶ and 10⁹ M⊙ gravitationally pull in the surrounding material via accretion. The AGN phenomenon extends over a very wide range of luminosities, from the most luminous high-redshift quasars (QSOs) to the local Low-Luminosity AGN (LLAGN) with significantly weaker luminosities. While "typical" luminous AGN distinguish themselves by their characteristic blue featureless continuum, by Broad Emission Lines (BELs) with Full Widths at Half Maximum (FWHM) of the order of a few thousand km s⁻¹ arising from the so-called Broad Line Region (BLR), and by strong radio and/or X-ray emission, the detection of LLAGN is quite challenging due to their extremely weak emission lines and the absence of the power-law continuum. In order to fully understand AGN evolution and their duty cycles across cosmic history, we need a proper knowledge of the AGN phenomenon at all luminosities and redshifts, as well as perspectives from different wavelength bands.
In this thesis I present a search for AGN signatures in the central spectra of 542 local (0.005 < z < 0.03) galaxies from the Calar Alto Legacy Integral Field Area (CALIFA) survey. The adopted aperture of 3″ × 3″ corresponds to the central ∼ 100-500 pc for the redshift range of CALIFA. Using the standard emission-line ratio diagnostic diagrams, we initially classified all CALIFA emission-line galaxies (526) into star-forming, LINER-like, Seyfert 2 and intermediate types. We further detected signatures of a broad Hα component in 89 spectra from the sample, of which more than 60% are found in the central spectra of LINER-like galaxies. These BELs are very weak, with luminosities in the range 10³⁸-10⁴¹ erg s⁻¹, but with FWHMs between 1000 km s⁻¹ and 6000 km s⁻¹, comparable to those of luminous high-z AGN. This result implies that type 1 AGN are in fact quite frequent in the local Universe. We also identified an additional 29 Seyfert 2 galaxies using the emission-line ratio diagnostic diagrams.
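The emission-line ratio diagnostics mentioned here are commonly implemented with the standard BPT demarcation curves; the following minimal Python sketch uses the widely cited Kewley et al. (2001) and Kauffmann et al. (2003) lines purely as an illustration (the exact curves and class boundaries adopted in the thesis are not specified here):

import numpy as np

def kewley_2001(x):
    # Theoretical "maximum starburst" line, log([OIII]/Hb) vs. log([NII]/Ha)
    return 0.61 / (x - 0.47) + 1.19

def kauffmann_2003(x):
    # Empirical boundary of the pure star-forming sequence
    return 0.61 / (x - 0.05) + 1.30

def classify(nii_ha, oiii_hb):
    """Classify one spectrum from its [NII]/Halpha and [OIII]/Hbeta flux ratios."""
    x, y = np.log10(nii_ha), np.log10(oiii_hb)
    if x < 0.05 and y < kauffmann_2003(x):
        return "star-forming"
    if x < 0.47 and y < kewley_2001(x):
        return "intermediate"   # composite region between the two curves
    return "AGN-like"           # separating Seyfert 2 from LINER needs a further cut

print(classify(1.0, 10.0))      # ratios above both curves -> "AGN-like"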
Using the M_BH − σ∗ correlation, we estimated the black hole masses of 55 type 1 AGN from CALIFA, the sub-sample for which we had estimates of the bulge stellar velocity dispersion σ∗. We compared these masses to the ones estimated from the virial method and found large discrepancies. We analyzed the validity of both methods for black hole mass estimation of local LLAGN and concluded that virial scaling relations most likely can no longer be applied as a valid M_BH estimator in such a low-luminosity regime. These black holes accrete at very low rates, with Eddington ratios in the range 4.1 × 10⁻⁵ − 2.4 × 10⁻³. The detection of BELs with such low luminosities and at such low Eddington ratios implies that these LLAGN are still able to form the BLR, although probably with a modified structure of the central engine.
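The two mass estimators compared here have the following generic forms (calibration constants differ between published versions and are not quoted from the thesis): the virial estimate combines the BLR radius R_BLR, usually taken from a radius-luminosity relation, with the line width, while the M_BH − σ∗ relation is a power law in the bulge velocity dispersion:

\[ M_{BH}^{\mathrm{vir}} = f\,\frac{R_{BLR}\,\mathrm{FWHM}^2}{G}, \qquad \log\frac{M_{BH}}{M_\odot} = \alpha + \beta\,\log\frac{\sigma_*}{200\ \mathrm{km\,s^{-1}}} \]

where f is a virial factor of order unity encoding the unknown BLR geometry.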
In order to obtain a full picture of black hole growth across cosmic time, it is essential that we study black holes in the different stages of their activity. For that purpose, we estimated the broad AGN Luminosity Function (AGNLF) of our entire type 1 AGN sample using the 1/Vmax method. The shape of the AGNLF indicates an apparent flattening below luminosities of LHα ∼ 10³⁹ erg s⁻¹. Correspondingly, we estimated the active Black Hole Mass Function (BHMF) and the Eddington Ratio Distribution Function (ERDF) for the sub-sample of type 1 AGN for which we have M_BH and λ estimates. The flattening is also present in both the BHMF and the ERDF, around log(M_BH) ∼ 7.7 and log(λ) ∼ −3, respectively. We estimated the fraction of active SMBHs in CALIFA by comparing our active BHMF to that of the local quiescent SMBHs. The shape of the active fraction, which decreases with increasing M_BH, as well as the flattening of the AGNLF, BHMF and ERDF, is consistent with the scenario of AGN cosmic downsizing.
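The 1/Vmax method used here is the standard non-parametric luminosity function estimator (Schmidt 1968): each object contributes the inverse of the maximum comoving volume within which it would still pass the survey selection,

\[ \Phi(L)\,\Delta \log L = \sum_{i=1}^{N} \frac{1}{V_{\max,i}}, \]

summed over the N objects in each luminosity bin.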
To complete the AGN census in the CALIFA galaxy sample, it is necessary to search for AGN in various wavelength bands. For this purpose, we cross-correlated all 542 CALIFA galaxies with multiwavelength surveys: the Swift-BAT 105-month catalogue (in the hard 15-195 keV X-ray band) and the NRAO VLA Sky Survey (NVSS, in the 1.4 GHz radio domain). This added 1 new AGN candidate in the X-ray and 7 in the radio wavelength band to our local LLAGN count.
AGN emission signatures can also be detected 10-20 kpc outside of the central galactic regions. This may happen when the central AGN has recently switched off and the photoionized material spread across the galaxy is still visible within the light-travel time, or when photoionized material is blown away from the nucleus by outflows. In order to detect such extended AGN regions, we constructed spatially resolved emission-line ratio diagnostic diagrams of all emission-line galaxies from CALIFA and found 1 new object that had not previously been identified as an AGN.
Obtaining the complete AGN census in CALIFA, with five different AGN types, showed that LLAGN make up a significant fraction, 24%, of the emission-line galaxies in the CALIFA sample. This result implies that AGN are quite common in the local Universe and that, although in a very low activity stage, they account for a large fraction of all local SMBHs. Within this thesis we approached the upper limit of the AGN fraction in the local Universe and gained a deeper understanding of the LLAGN phenomenon.
Summary of the dissertation "Neuartige DBD-Fluoreszenzfarbstoffe: Synthese, Untersuchungen und Anwendungen" (Novel DBD fluorescent dyes: synthesis, investigations and applications) by Leonard John
In this work, two new concepts for the preparation of unsymmetrically functionalised DBD fluorophores were developed on the basis of the established [1,3]-dioxolo[4,5-f][1,3]benzodioxole (DBD) fluorescent dyes. Varying the electron-withdrawing substituents extended the colour spectrum of DBD fluorophores, while all other spectroscopic parameters (fluorescence lifetime, fluorescence quantum yield and STOKES shift) retained their high values. In addition to the variation of the electron-withdrawing substituents, the π-system of the DBD dye was enlarged by introducing stilbene and tolane derivatives. Stilbene derivatives showed spectroscopic properties similarly good to those of the established DBD dyes.
Fluorophores with long-wavelength emission are of particular interest for biological applications because of their large tissue penetration depth. Since the longest-wavelength representative of the O4-DBD dyes is poorly soluble in polar media, a route for introducing solubilising groups was sought. A carboxylic acid group was chosen to increase the hydrophilicity. One of four investigated methods proved successful, so that the desired molecule could be isolated. However, no increased water solubility was observed.
Fluorescence-labelled lipids are needed for research into lipid metabolism disorders such as ALZHEIMER's disease. In order to probe different regions of a membrane, the aim was to place the fluorophore at different positions within the fatty acid. The total chain length of the DBD lipid was to correspond to a C18 chain, analogous to stearic acid. Through the stepwise introduction of the substituents, three DBD lipids were prepared with the fluorophore located at different positions within the chain. The photophysical properties of the lipids deviate only marginally from those of the pure fluorophores. Incorporation into giant unilamellar vesicles (GUVs) was observed for two derivatives, although neither was domain-specific.
A further aim of this work was to replace the four oxygen atoms in the DBD core stepwise with sulfur atoms and to vary the ring sizes of the DBD fluorophore. With regard to ring size, the 1,2-S2-DBD with two five-membered rings showed the best spectroscopic properties. By synthesising two further sulfur-containing DBD cores (S1- and 1,4-S2-DBD), a total of three new dye classes were made accessible. For all new chromophores, electron-withdrawing substituents (aldehyde, acyl, ester, carboxy) were introduced and the respective derivatives were characterised spectroscopically. With an increasing number of sulfur atoms in the core, the emission shifts bathochromically, while the fluorescence lifetime and quantum yield decrease. The optimal combination of long-wavelength emission, high fluorescence lifetime and high quantum yield is shown by the 1,4-S2 dialdehyde derivative. For the S1 and 1,2-S2 dialdehyde derivatives, concepts were developed to introduce bioreactive groups (alkyne, HOSu, maleimide) so that the fluorophores can be used in biological systems.
Major challenges during geothermal exploration and exploitation include the structural-geological characterization of the geothermal system and the application of sustainable monitoring concepts to explain changes in a geothermal reservoir during production and/or reinjection of fluids. In the absence of sufficiently permeable reservoir rocks, faults and fracture networks are preferred drilling targets because they can facilitate the migration of hot and/or cold fluids. In volcanic-geothermal systems, considerable amounts of gas can be released at the earth's surface, often along these fluid-releasing structures.
In this thesis, I developed and evaluated different methodological approaches and measurement concepts to determine the spatial and temporal variation of several soil gas parameters and to understand the structural control on fluid flow. In order to validate their potential as innovative geothermal exploration and monitoring tools, these approaches were applied to three different volcanic-geothermal systems. At each site an individual survey design was developed according to the site-specific questions.
The first study presents results of the combined measurement of CO2 flux, ground temperatures, and the analysis of isotope ratios (δ13C-CO2, 3He/4He) across the main production area of the Los Humeros geothermal field, to identify locations with a connection to its supercritical (T > 374 °C and P > 221 bar) geothermal reservoir. The systematic and large-scale (25 x 200 m) CO2 flux scouting survey proved to be a fast and flexible way to identify areas of anomalous degassing. Subsequent high-resolution surveys revealed the actual extent and heterogeneous pattern of the anomalous degassing areas. These were related to the internal fault hydraulic architecture and allowed favourable structural settings for fluid flow, such as fault intersections, to be assessed. Finally, areas of previously unknown structurally controlled permeability with a connection to the superhot geothermal reservoir were identified, which represent promising targets for future geothermal exploration and development.
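Surveys of this type are typically based on the accumulation-chamber method, in which the flux follows from the initial rate of CO2 concentration increase inside a chamber of volume V and footprint area A; a generic form of the evaluation (instrument specifics are not taken from the thesis) is

\[ F_{CO_2} = \frac{P\,V}{R\,T\,A}\,\left.\frac{dC}{dt}\right|_{t=0} \]

with C the CO2 molar fraction, P the ambient pressure and T the chamber temperature, giving the flux in mol m⁻² s⁻¹.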
In the second study, I introduce a novel monitoring approach that uses the variation of CO2 flux to monitor changes in the reservoir induced by fluid reinjection. For this purpose, an automated multi-chamber CO2 flux system was deployed across the damage zone of a major normal fault crossing the Los Humeros geothermal field. Based on the results of the CO2 flux scouting survey, a suitable site was selected that had a connection to the geothermal reservoir, as indicated by hydrothermal CO2 degassing and hot ground temperatures (> 50 °C). The results revealed a response of gas emissions to changes in reinjection rates within 24 h, proving active hydraulic communication between the geothermal reservoir and the earth's surface. This is a promising monitoring strategy that provides nearly real-time in-situ data about changes in the reservoir and allows a timely reaction to unwanted changes (e.g., pressure decline, seismicity).
The third study presents results from the Aluto geothermal field in Ethiopia, where an area-wide, multi-parameter analysis, consisting of measurements of CO2 flux, 222Rn and 220Rn activity concentrations and ground temperatures, was conducted to detect hidden permeable structures. 222Rn and 220Rn activity concentrations are evaluated as soil gas parameters complementary to the CO2 flux, to investigate their potential for understanding tectono-volcanic degassing. The combined measurement of all parameters enabled the development of soil gas fingerprints, a novel visualization approach. Depending on the magnitude of the gas emissions and their migration velocities, the study area was divided into volcanically (heat), tectonically (structures) and volcano-tectonically dominated areas. Based on these concepts, the volcano-tectonically dominated areas, where hot hydrothermal fluids migrate along permeable faults, represent the most promising targets for future geothermal exploration and development in this field. Two such areas, which have not yet been targeted for geothermal exploitation, were identified in the south and south-east. Furthermore, two previously unknown areas of structurally related permeability could be identified from the 222Rn and 220Rn activity concentrations.
Finally, the fourth study presents a novel measurement approach to detect structurally controlled CO2 degassing in the Ngapouri geothermal area, New Zealand. For the first time, the tunable diode laser (TDL) method was applied in a low-degassing geothermal area to evaluate its potential as a geothermal exploration method. Although the sampling approach is based on profile measurements, which leads to low spatial resolution, the results showed a link between known/inferred faults and increased CO2 concentrations. The TDL method thus proved successful in determining structurally related permeability, even in areas where no obvious geothermal activity is present. Once an area of anomalous CO2 concentrations has been identified, the survey can easily be complemented by CO2 flux grid measurements to determine the extent and orientation of the degassing segment.
With the results of this work, I was able to demonstrate the applicability of systematic and area-wide soil gas measurements for geothermal exploration and monitoring purposes. In particular, the combination of different soil gases measured with different measurement networks enables the identification and characterization of fluid-bearing structures, an approach that has not yet been used and/or tested as standard practice. The individual studies present efficient and cost-effective workflows and demonstrate a hands-on approach to the successful and sustainable exploration and monitoring of geothermal resources, which minimizes the resource risk during geothermal project development. Finally, to advance the understanding of the complex structure and dynamics of geothermal systems, a combination of comprehensive and cutting-edge geological, geochemical, and geophysical exploration methods is essential.
Soft actuators have drawn significant attention due to their relevance for applications such as artificial muscles in devices developed for medicine and robotics. Tuning their performance and expanding their functionality are frequently done by means of chemical modification. The introduction of structural elements that make non-synthetic modification of the performance possible, provide control over the physical appearance and facilitate recycling is a subject of great interest in the field of smart materials. The primary aim of this thesis was to create a shape-memory polymeric actuator in which the capability for non-synthetic tuning of the actuation performance is combined with reprocessability. Physically cross-linked polymeric matrices provide a solid material platform in which in situ processing methods can be employed to modify composition and morphology, resulting in fine tuning of the related mechanical properties and of the shape-memory actuation capability.
The morphological features required for shape-memory polymeric actuators, namely two crystallisable domains and anchoring points for physical cross-links, were embedded into a multiblock copolymer with poly(ε-caprolactone) and poly(L-lactide) segments (PLLA-PCL). Here, the melting transition of PCL was bisected into actuating and skeleton-forming units, while cross-linking was introduced via PLA stereocomplexation in blends with oligomeric poly(D-lactide) (ODLA). A PLLA segment number-average length of 12-15 repeating units was experimentally determined to be capable of PLA stereocomplex formation, but not sufficient for isotactic crystallisation. The multiblock structure and phase dilution broaden the PCL melting transition, facilitating its separation into two conditionally independent crystalline domains. The low molar mass of the PLA stereocomplex components and the multiblock structure enable processing and reprocessing of the PLLA-PCL / ODLA blends with common non-destructive techniques. The modularity of the PLLA-PCL structure and of the synthetic approach allows for independent tuning of the properties of its components. The designed material establishes a solid platform for the non-synthetic tuning of the thermomechanical and structural properties of thermoplastic elastomers.
To evaluate the thermomechanical stability of the formed physical network, three criteria were appraised: as physical cross-links, the PLA stereocomplexes have to be evenly distributed within the material matrix; their melting temperature must not overlap with the thermal transitions of the PCL domains; and they have to maintain structural integrity within the strain (ε) ranges applied in the subsequent shape-memory actuation experiments. Assigning PCL the function of the skeleton-forming and actuating units, and the PLA stereocomplexes the role of physical netpoints, shape-memory actuation was realised in the PLLA-PCL / ODLA blends. The reversible strain of shape-memory actuation was found to be a function of the PLA stereocomplex crystallinity, i.e. of the physical cross-linking density, with a maximum of 13.4 ± 1.5% at a PLA stereocomplex content of 3.1 ± 0.3 wt%. In this way, shape-memory actuation can be tuned by adjusting the composition of the PLLA-PCL / ODLA blend. This makes the developed material a valuable asset in the production of cost-effective, tunable soft polymeric actuators for applications in medicine and soft robotics.
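The reversible strain quoted above is conventionally defined from the strains of the two switching states of the actuation cycle; a common definition in the shape-memory actuator literature (assumed here, not quoted from the thesis) is

\[ \varepsilon'_{rev} = \frac{\varepsilon_A - \varepsilon_B}{1 + \varepsilon_B} \times 100\% \]

where ε_A and ε_B denote the strains in the elongated and the contracted state, respectively.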
The Central Andes region in South America is characterized by a complex and heterogeneous deformation system. Recorded seismic activity and mapped neotectonic structures indicate that most of the intraplate deformation is located along the margins of the orogen, in the transitions to the foreland and the forearc. Furthermore, the actively deforming provinces of the foreland exhibit distinct deformation styles that vary along strike, as well as characteristic distributions of seismicity with depth. The style of deformation transitions from thin-skinned in the north to thick-skinned in the south, and the thickness of the seismogenic layer increases to the south. Based on geological/geophysical observations and numerical modelling, the most commonly invoked causes for the observed heterogeneity are variations in sediment thickness and composition, the presence of inherited structures, and changes in the dip of the subducting Nazca plate. However, there are still no comprehensive investigations of the relationship between the lithospheric composition of the Central Andes, its rheological state and the observed deformation processes. The central aim of this dissertation is therefore to explore the link between the nature of the lithosphere in the region and the location of active deformation. The study of the lithospheric composition by means of independent data integration establishes a strong basis for assessing the thermal and rheological state of the Central Andes and its adjacent lowlands, which in turn provides new foundations for understanding the complex deformation of the region. Along these lines, the general workflow of the dissertation consists of the construction of a 3D data-derived and gravity-constrained density model of the Central Andean lithosphere, followed by the simulation of the steady-state conductive thermal field and the calculation of the strength distribution. Additionally, the dynamic response of the orogen-foreland system to intraplate compression is evaluated by means of 3D geodynamic modelling.
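The steady-state conductive thermal field referred to here is the stationary solution of the heat conduction equation with internal heat production, and lithospheric strength is commonly taken as the smaller of a frictional (brittle) and a power-law creep (ductile) limit at each depth; in generic form (parameter values are not taken from the dissertation):

\[ \nabla \cdot (\lambda\,\nabla T) + S = 0, \qquad \sigma_{ductile} = \left(\frac{\dot{\varepsilon}}{A}\right)^{1/n} \exp\!\left(\frac{Q}{nRT}\right) \]

with thermal conductivity λ, radiogenic heat production S, strain rate ε̇ and creep parameters A, n and Q, while the brittle limit grows roughly linearly with depth following a Byerlee-type friction law.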
The results of the modelling approach suggest that the inherited heterogeneous composition of the lithosphere controls the present-day thermal and rheological state of the Central Andes, which in turn influences the location and depth of active deformation processes. Most of the seismic activity and neotectonic structures are spatially correlated with regions of modelled high strength gradients, in the transition from the felsic, hot and weak orogenic lithosphere to the more mafic, cooler and stronger lithosphere beneath the forearc and the foreland. Moreover, the results of the dynamic simulation show a strong localization of the second invariant of the deviatoric strain rate in the same region, suggesting that shortening is accommodated at the transition zones between weak and strong domains. The vertical distribution of seismic activity appears to be influenced by the rheological state of the lithosphere as well. The depth at which the frequency distribution of hypocenters starts to decrease in the different morphotectonic units correlates with the position of the modelled brittle-ductile transitions; accordingly, a fraction of the seismic activity is located within the ductile part of the crust. An exhaustive analysis shows that practically all the seismicity in the region is restricted to above the 600°C isotherm, in coincidence with the upper temperature limit for brittle behavior of olivine. Therefore, the occurrence of earthquakes below the modelled brittle-ductile transition could be explained by the presence of strong residual mafic rocks from past tectonic events. Another potential cause of deep earthquakes is the existence of inherited shear zones in which brittle behavior is favored through a decrease in the friction coefficient. This hypothesis is particularly suitable for the broken foreland provinces of the Santa Barbara System and the Pampean Ranges, where geological studies indicate successive reactivation of structures through time. In the Santa Barbara System in particular, the results indicate that both mafic rocks and a reduction in friction are required to account for the observed deep seismic events.
At the centre of this dissertation is the rediscovery, analysis and educational-historical contextualisation of the progressive-education school project of Eugenie SCHWARZWALD (1872-1940) in Vienna in the first third of the 20th century. The genesis of the school's development reveals the progressive-education entanglements of a school project of supra-regional importance, which decisively shaped the profile as well as the content-related and didactic-methodical design of the school, of school life and of teaching. The introduction (Ch. 1) sets out the research interest, the central questions, the evaluated source holdings and the methodological approach of the study as a historical-critical analysis of the sources consulted. The systematic development of the topic proceeds along three central chapters. The analysis focuses on the social and educational-historical contextualisation of the school project within the world of ideas and the socio-structural reality of Vienna (Ch. 2), and on biographical approaches to the school's founder; the founding, genesis, shaping and termination of the school project; its structural and pedagogical characteristics; and its progressive-education features in the first third of the 20th century (Ch. 3). At the same time, exemplary entanglements with contemporary progressive-education currents are made visible, as are the associated impulses that the SCHWARZWALD school project gave to the school system of Vienna and Austria. A focal point of the study is the analysis of the manifold networks of the SCHWARZWALD school with regard to the artistic avant-garde (Ch. 4). The thesis-style summary (Ch. 5) acknowledges SCHWARZWALD's achievements for the Austrian school and education system, including higher education for girls. Finally, the study asks about the reach of the progressive-education impulses associated with the school project and systematises the conditions for success and failure of the school reform process. With a view to questions of transfer, this makes the study relevant to current issues of school development.
Digitalisation enables us to interact with partners (e.g. companies, institutions) in an IT-supported environment and to carry out activities that were previously done manually. One goal of digitalisation is to combine services from different professional domains into processes and to make them available to many user groups according to their needs. To this end, providers supply technical services that can be integrated into different applications.
Digitalisation confronts application development with new challenges. One aspect is connecting users to services according to their needs. For human users to interact with the services, user interfaces are required that are tailored to their needs. This calls for variants for specific user groups (functional variants) and for varying environments (technical variants). Increasingly, it must be possible to combine these with services of other providers in order to link processes across domains into applications with added value for the end user (e.g. a flight booking with an optional travel insurance).
The diversity of variants makes the creation of user interfaces appear complex and the results highly individual. In practice, the variants are therefore created predominantly by hand. This leads to the parallel development of a multitude of very similar applications with little potential for reuse, and consequently to high development and maintenance costs. As a result, support for small user groups with special requirements (e.g. people with physical impairments) is often dispensed with, leaving these users excluded from digitalisation.
This work presents a consistent solution to these new challenges by means of model-driven development. It presents an approach for modelling user interfaces, variants and compositions, and for generating them automatically for digital services in a distributed environment. The work provides a solution for the reuse and shared use of user interfaces across provider boundaries. It leads to an infrastructure in which a multitude of providers can contribute their expertise to collaborative applications.
The individual contributions consist of concepts and metamodels for modelling user interfaces, variants and compositions, together with a procedure for their fully automated transformation into functional user interfaces. To enable shared use, these are complemented by a universal representation of the models, a methodology for connecting different service providers, and an architecture for the distributed use of the artefacts and procedures in a service-oriented environment.
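As a purely illustrative sketch of the model-to-interface transformation idea (the metamodels, transformation language and all names below are hypothetical and far simpler than those of the thesis), a declarative UI model can be turned into concrete variants by a small generator:

# Toy "UI model" and generator; all names are hypothetical.
UI_MODEL = {
    "service": "FlightBooking",
    "fields": [
        {"name": "origin",      "type": "text",     "label": {"de": "Von", "en": "From"}},
        {"name": "destination", "type": "text",     "label": {"de": "Nach", "en": "To"}},
        {"name": "insurance",   "type": "checkbox", "label": {"de": "Reiseversicherung", "en": "Travel insurance"}},
    ],
}

def render_field(field, lang, css):
    kind = "checkbox" if field["type"] == "checkbox" else "text"
    return f'<label{css}>{field["label"][lang]}: <input type="{kind}" name="{field["name"]}"></label>'

def generate_html(model, lang="en", large_controls=False):
    """Transform the abstract UI model into one concrete interface variant."""
    css = ' style="font-size:150%"' if large_controls else ""
    rows = "".join(render_field(f, lang, css) for f in model["fields"])
    return f'<form id="{model["service"]}">{rows}</form>'

# Two variants from the same model: an English default and a German
# accessibility variant with enlarged controls.
print(generate_html(UI_MODEL, lang="en"))
print(generate_html(UI_MODEL, lang="de", large_controls=True))

In the approach of the thesis, such generation is fully automated and driven by metamodels rather than by ad-hoc dictionaries as in this sketch.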
The approach offers the opportunity to let the most diverse people participate in digitalisation according to their needs. The work thereby provides impulses for future methods of application development in an increasingly diverse environment.
The dissertation focuses on synchronic and diachronic variation in the use of the French causal conjunction parce que and on its interaction with the extralinguistic variables age and socio-professional category. Building on previous macro-diachronic studies, which provide evidence that the conjunction has undergone and continues to undergo a process of pragmaticalisation, a study corpus of 56 interviews was extracted from the diachronically distinct corpora ESLO1, ESLO2 and LangAge. This study corpus served as the basis for panel studies and trend studies designed to verify the pragmaticalisation of parce que from a micro-diachronic point of view. In addition to the diachronic perspective, a synchronic perspective was adopted in order to attribute the variation in the use of the conjunction to a diachronic phenomenon such as age grading or apparent time. Drawing on construction grammar, constructions containing parce que were annotated bottom-up and categorised into five degrees of pragmaticality (pra0-pra4). These were then quantified and analysed as a function of the year of birth and the socio-professional category of the (male) speakers using several R models such as ctree, tree, lm, hclust and kmeans.
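As an illustrative analogue of the clustering step mentioned above (written in Python rather than the R used in the dissertation, and with invented numbers), speakers can be grouped by the relative frequencies of the five degrees of pragmaticality:

# Illustrative analogue only; the dissertation used R (hclust, kmeans, etc.).
import numpy as np
from sklearn.cluster import KMeans

# Rows: speakers; columns: relative frequencies of the degrees of
# pragmaticality pra0..pra4 in each speaker's interview (hypothetical values).
freqs = np.array([
    [0.42, 0.31, 0.15, 0.08, 0.04],
    [0.38, 0.33, 0.17, 0.07, 0.05],
    [0.21, 0.24, 0.30, 0.15, 0.10],
    [0.18, 0.22, 0.33, 0.16, 0.11],
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(freqs)
print(km.labels_)  # e.g. [0 0 1 1]: speakers grouped by usage profile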
The frequency development of the degrees of pragmaticality confirmed the pragmaticalisation hypothesis within a micro-diachronic framework. In addition, a quantitative decline was observed in the use of constructions at the non- or less pragmaticalised pole (pra0, pra1), while uses of higher degrees of pragmaticalisation (pra2-pra4) remained comparatively stable over 40 years.
Although no significant change emerged for pra2, its development among middle-aged speakers, as well as the synchronic pattern as a function of age (or year of birth) and of socio-professional category, nevertheless pointed towards underlying diachronic variation. This could be interpreted as an age-grading phenomenon catalysed by the social transformations of the 1960s and 1970s. For the uses situated closer to the pragmatic pole (pra3 and pra4), no clear tendency could be determined.
The results challenge diachronic concepts such as age grading and apparent time by calling into question the simplicity of the underlying mechanisms as well as the common methods of identifying them.
The dissertation pursues the fundamental research question of how the Liberal Democratic Party of Germany (LDPD) filled the role ascribed to it in everyday politics at the local level, how it related to the system of the GDR, and which scopes of action existed and were used. Its local party work from the building of the Berlin Wall into the 1980s has so far remained largely unexamined by research, as interest has concentrated on the ruling SED or on the LDPD's rebellious tendencies in the 1940s and late 1980s. The present study takes a first step towards examining the liberal party at the district and local level and thus contributes to closing these gaps. Using the case studies of Gotha, Erfurt (city) and Eisenach, the dissertation examines the internal party organisation, the behaviour and motivations of the members and, drawing on network-theoretical approaches, the interconnections of the local party officials who involved themselves in municipal work on the ground. Information and situation reports as well as correspondence and organisational documents provided insight into self-images, activities, topics and aspects of communication. What becomes clear are the strict control mechanisms within the party and the tension between clear support for SED policy and individually self-willed ("eigen-sinnig") behaviour.
Using the analytical category of "Eigen-Sinn" (self-will), understood as a form of multi-layered appropriation of structures of rule and distinct from the concepts of opposition and resistance, it is shown that the LDPD members in the districts examined did take liberties in voicing criticism and largely determined the degree of their activity themselves, but did not touch the fundamental questions of the system. The actors inhabited many different lifeworlds, depending on field of activity, motivation and environment, which led to different tactics and manifestations of Eigen-Sinn among rank-and-file members and local officials. Through their municipal involvement, however, the Liberal Democrats in the municipalities attended to the most pressing supply problems and, by actively recruiting their members for work programmes and competitions, ensured the LDPD's participation in remedying the worst deficiencies in public space. In doing so, they contributed to dampening general dissatisfaction and indirectly strengthened the GDR system. In return, the SED granted them limited and clearly defined scopes of action. Since most active Liberal Democrats were professionally anchored in the economic sphere, a great deal of practical knowledge could be developed, with which the LDPD associations examined intervened quite self-confidently in municipal processes within the latitude they were granted. They thus played an important role in stabilising the system over the long period between the building and the fall of the Berlin Wall.
The mixture of distancing, acceptance, contradiction and obedience makes the party base, and also the active party officials at the lower level, a very interesting field of study, one that is far from exhausted.