Institutional education confronts autistic learners with diverse and specific obstacles. This is especially true in the context of inclusion, whose relevance is established not least by the United Nations Convention on the Rights of Persons with Disabilities.
This thesis discusses numerous learning-relevant characteristics in the context of autism and points out discrepancies with institutional teaching concepts, which are not always adequately suited to them. A central thesis is that the unusually intense attention autistic people devote to their special interests can be harnessed to facilitate learning with externally prescribed content. Building on this, solution approaches are discussed, culminating in a novel concept for a digital multi-device learning game.
A key challenge in designing game-based learning lies in adequately embedding learning content in a captivating narrative context. Using the example of exercises on the emotional interpretation of facial expressions, which are used for learning socio-emotional skills particularly within therapy concepts for autism, an appropriate narrative is presented that allows these highly specific learning contents to be integrated with minimal disruption.
The effects of the individual design elements are examined using a prototypically developed learning game. Building on this, a quantitative study shows the game's good acceptance and usability, and in particular confirms the comprehensibility of the narrative and the game elements. A further focus lies on the minimally invasive investigation of possible disruptions of the game experience caused by switching between different devices, for which an innovative measurement method was developed.
In conclusion, this thesis highlights the significance and the limits of game-based approaches for autistic learners. A large part of the presented concepts can be transferred to other learning scenarios. The technical framework developed for realising narrative learning paths is likewise prepared for use in further learning scenarios, especially in institutional contexts.
Noise is ubiquitous in nature and typically gives rise to rich dynamics in stochastic systems such as oscillatory systems, which appear in fields as diverse as physics, biology and complex networks. The correlation and synchronization of two or many oscillators have been widely studied in recent years.
In this thesis, we mainly investigate two problems: the stochastic bursting phenomenon in noisy excitable systems, and synchronization in a three-dimensional Kuramoto model with noise. Stochastic bursting here refers to a sequence of coherent spike trains in which each spike has a random number of followers, due to the combined effects of time delay and noise. Synchronization, a universal phenomenon in nonlinear dynamical systems, is well illustrated by the Kuramoto model, a prominent model for the description of collective motion.
In the first part of this thesis, an idealized point process, valid if the characteristic timescales in the problem are well separated, is used to describe statistical properties such as the power spectral density and the interspike interval distribution. We show how the main parameters of the point process, the spontaneous excitation rate and the probability of inducing a spike during the delayed action, can be calculated from the solutions of a stationary and a forced Fokker-Planck equation. We extend this approach to the delay-coupled case and derive analytically the statistics of the spikes in each neuron, the pairwise correlations between any two neurons, and the spectrum of the total output of the network.
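The structure of such a point process can be illustrated with a minimal simulation sketch. The parameter names and values below are illustrative assumptions, not taken from the thesis: spontaneous spikes arrive at a Poisson rate, and each spike induces a delayed follower with a fixed probability, producing geometric burst chains.

```python
import random

def burst_train(rate, p_induce, delay, t_max, seed=0):
    """Idealized point process for stochastic bursting (sketch, valid
    when timescales are well separated): spontaneous spikes arrive at
    Poisson rate `rate`, and each spike induces a follower after
    `delay` with probability `p_induce`."""
    rng = random.Random(seed)
    spikes, t = [], 0.0
    while t < t_max:
        t += rng.expovariate(rate)       # next spontaneous excitation
        s = t
        spikes.append(s)
        while rng.random() < p_induce:   # geometric chain of followers
            s += delay
            spikes.append(s)
    spikes.sort()
    return spikes

spikes = burst_train(rate=0.1, p_induce=0.6, delay=1.0, t_max=10_000)
isi = [b - a for a, b in zip(spikes, spikes[1:])]   # interspike intervals
# Mean spikes per burst for a geometric chain: 1 / (1 - p_induce) = 2.5
```

From `spikes` one can then estimate the interspike interval distribution and, via a Fourier transform of the binned train, the power spectral density discussed above.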
In the second part, we investigate the three-dimensional noisy Kuramoto model, which can be used to describe synchronization in a swarming model with helical trajectories. In the case without natural frequencies, the Kuramoto model can be connected to the Vicsek model, which is widely studied in the context of collective motion and swarming of active matter. We analyze the linear stability of the incoherent state and derive the critical coupling strength above which the incoherent state loses stability. In the limit of no natural frequencies, an exact self-consistent equation for the mean field is derived and extended straightforwardly to higher-dimensional cases.
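The synchronization transition described here can be sketched with a minimal Euler-Maruyama simulation of the classical one-dimensional noisy Kuramoto model with identical oscillators (the thesis treats the three-dimensional generalisation; all parameter values below are assumptions for the sketch). For identical noisy oscillators the incoherent state loses stability at the critical coupling K_c = 2D:

```python
import numpy as np

def order_parameter(K, N=2000, D=0.5, dt=0.01, steps=4000, seed=1):
    """Euler-Maruyama integration of the 1-D noisy Kuramoto model with
    identical oscillators. Returns the time-averaged order parameter
    r = |<exp(i*theta)>|."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    r_sum, n_avg = 0.0, 0
    for step in range(steps):
        z = np.exp(1j * theta).mean()                    # complex mean field
        drift = K * np.abs(z) * np.sin(np.angle(z) - theta)
        theta += drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(N)
        if step >= steps // 2:                           # discard transient
            r_sum += np.abs(z)
            n_avg += 1
    return r_sum / n_avg

# Incoherent below the critical coupling K_c = 2D = 1.0, synchronized above:
r_low = order_parameter(K=0.5)    # K < K_c  ->  r near 0
r_high = order_parameter(K=3.0)   # K > K_c  ->  finite mean field
```

The mean-field form of the drift, K r sin(psi - theta), is equivalent to the pairwise coupling (K/N) sum sin(theta_j - theta_i), which keeps the update vectorised.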
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood--Salsburg equations, the Dobrushin contraction principle and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed-point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which this problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
With ongoing anthropogenic global warming, some of the most vulnerable components of the Earth system might become unstable and undergo a critical transition. These subsystems are the so-called tipping elements. They are believed to exhibit threshold behaviour and would, if triggered, result in severe consequences for the biosphere and human societies. Furthermore, it has been shown that climate tipping elements are not isolated entities, but interact across the entire Earth system. Therefore, this thesis aims at mapping out the potential for tipping events and feedbacks in the Earth system mainly by the use of complex dynamical systems and network science approaches, but partially also by more detailed process-based models of the Earth system.
In the first part of this thesis, the theoretical foundations are laid by investigating networks of interacting tipping elements. For this purpose, the conditions for the emergence of global cascades are analysed against the structure of paradigmatic network types such as Erdős-Rényi, Barabási-Albert, Watts-Strogatz and explicitly spatially embedded networks. Furthermore, micro-scale structures are identified that are decisive for the transition from local to global cascades. These so-called motifs link the micro- to the macro-scale in the network of tipping elements. Alongside a model description paper, all these results have been incorporated into the Python software package PyCascades, which is publicly available on GitHub.
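The local-to-global transition of cascades can be illustrated with a minimal spreading model on an Erdős-Rényi graph. This is not the PyCascades package itself but a simplified sketch with illustrative parameters: one node tips initially, and each tipped node tips every untipped neighbour independently with a fixed probability.

```python
import random

def cascade_fraction(n, p_edge, p_spread, seed=0):
    """Fraction of nodes tipped by a cascade started at node 0 on an
    Erdős-Rényi graph G(n, p_edge); each tipped node tips each untipped
    neighbour independently with probability p_spread."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                adj[i].add(j)
                adj[j].add(i)
    tipped, frontier = {0}, [0]
    while frontier:
        node = frontier.pop()
        for nb in adj[node]:
            if nb not in tipped and rng.random() < p_spread:
                tipped.add(nb)
                frontier.append(nb)
    return len(tipped) / n

# Mean degree ~4: cascades stay local when (degree * p_spread) < 1 and
# become global above that threshold (averaged over a few realisations):
small = sum(cascade_fraction(500, 4 / 500, 0.1, seed=s) for s in range(5)) / 5
large = sum(cascade_fraction(500, 4 / 500, 0.9, seed=s) for s in range(5)) / 5
```

The cascade is equivalent to the connected cluster of the seed node under bond percolation, which is why the product of mean degree and spreading probability sets the critical point.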
In the second part of this dissertation, the tipping element framework is first applied to components of the Earth system such as the cryosphere and parts of the biosphere, and afterwards to a set of interacting climate tipping elements on a global scale. Using the Earth system Model of Intermediate Complexity (EMIC) CLIMBER-2, the temperature feedbacks that would arise if some of the large cryosphere elements disintegrated over a long span of time are quantified. The cryosphere components investigated are the Arctic summer sea ice, mountain glaciers, and the Greenland and West Antarctic Ice Sheets. The committed temperature increase, should these ice masses disintegrate, is on the order of an additional half degree in the global average (0.39-0.46 °C), while additional local to regional temperature increases can exceed 5 °C. This means that, once tipping has begun, additional reinforcing feedbacks can increase global warming and, with it, the risk of further tipping events.
This is also the case in the Amazon rainforest, whose parts depend on each other via the so-called moisture-recycling feedback. In this thesis, the importance of drought-induced tipping events in the Amazon rainforest is investigated in detail. Although the Amazon rainforest is assumed to be adapted to past environmental conditions, tipping events are found to increase sharply if drought conditions become too intense over too short a period, outpacing the forest's adaptive capacity. In these cases, the frequency of tipping cascades also rises to 50% or more of all tipping events. In the model developed in this study, the southeastern region of the Amazon basin is hit hardest by the simulated drought patterns. This is also the region that already suffers heavily from extensive human-induced changes due to large-scale deforestation, cattle ranching and infrastructure projects.
Moreover, on the Earth-system-wide scale, a network of conceptualised climate tipping elements is constructed in this dissertation, drawing on an extensive literature review, expert knowledge and topological properties of the tipping elements. Tipping cascades are detected even under modest global warming scenarios that limit warming to 2 °C above pre-industrial levels. In addition, the structural roles of the climate tipping elements in the network are revealed: while the large ice sheets on Greenland and Antarctica are the initiators of tipping cascades, the Atlantic Meridional Overturning Circulation (AMOC) acts as the transmitter of cascades. Furthermore, in our conceptual climate tipping element model, the ice sheets are found to be of particular importance for the stability of the entire system of investigated climate tipping elements.
In the last part of this thesis, the results of the temperature feedback study with the EMIC CLIMBER-2 are combined with the conceptual model of climate tipping elements. It is observed that the likelihood of further tipping events increases slightly due to the temperature feedbacks, even if no further CO2 were added to the atmosphere.
Although the developed network model is conceptual in nature, this work makes it possible for the first time to quantify the risk of tipping events between interacting components of the Earth system under global warming scenarios while allowing for dynamic temperature feedbacks.
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating on 3D point clouds without intermediate representation.
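One common way to obtain such per-point metrics, sketched here as an assumption about the implementation rather than the thesis's exact method, is via the eigenvalues of the covariance matrix of a local point neighbourhood:

```python
import numpy as np

def pca_features(neighborhood):
    """Planarity and verticality from the covariance eigenvalues of a
    local point neighbourhood (one common definition, assumed here)."""
    cov = np.cov(neighborhood.T)               # 3x3 covariance of the points
    eigval, eigvec = np.linalg.eigh(cov)       # ascending eigenvalues
    l1, l2, l3 = eigval[::-1]                  # l1 >= l2 >= l3
    planarity = (l2 - l3) / l1                 # ~1 for flat patches
    normal = eigvec[:, 0]                      # direction of least variance
    verticality = 1.0 - abs(normal[2])         # 0 for horizontal surfaces
    return planarity, verticality

# Synthetic horizontal patch: high planarity, near-zero verticality.
rng = np.random.default_rng(0)
patch = np.column_stack([rng.uniform(0, 1, 200),
                         rng.uniform(0, 1, 200),
                         rng.normal(0, 1e-3, 200)])
p, v = pca_features(patch)
```

In practice the neighbourhood would come from a k-nearest-neighbour or radius query per point; the feature values are then stored as per-point attributes for the later classification stages.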
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases allow taking advantage of established computer vision methods to efficiently analyze images rendered from mobile mapping data. The two presented semantic classification methods that work directly on 3D point clouds are use-case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine learning-based method supports arbitrary semantic classes but requires training the network with ground truth data. Both methods can be used in combination to gradually build this ground truth through manual corrections in a dedicated annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
The content knowledge of teachers is highly important for the development of pedagogical content expertise. However, it remains largely unclear which features university courses should have in order to impart profession-specific content knowledge to student teachers.
Within the PSI-Potsdam project, the cross-disciplinary model of extended content knowledge for the school context was developed on a theoretical basis. As an approach to improving the biology teacher-training programme, this model served as the conceptual basis for an additional course. The course offers learning opportunities to apply content knowledge about cell biology acquired at university to school contexts, e.g. through the deconstruction and subsequent reconstruction of school learning texts. The effect of the seminar was investigated over several cycles within the research format of didactical design research. One of the central research questions is: How can a learning opportunity for biology student teachers be designed to foster extended content knowledge for the school context in the cell-biology topic area "structure and function of the biomembrane"?
Based on cross-case analyses (n = 29), the empirical part shows which attitudes towards the teacher-training programme exist in the sample. An important result is that the subject interest of the students examined differs strikingly between content taught at school and content taught at university, with school knowledge attracting markedly higher interest. Students frequently judge the professional relevance of subject content by whether it is school knowledge.
In individual case analyses (n = 6), learning pathways are used to show how subject-matter concepts developed across several design experiments. The description focuses primarily on key moments and hurdles in the learning process. Based on these results, the iterations undertaken for the individual cycles are described, which are also presented in terms of the iterative development of the design principles.
It could be shown that the key moments emerge very individually, depending on the content each student focuses on. Most often, however, they occur in connection with linking different subject-matter concepts or through the cooperative unpacking of concepts. Subject-matter hurdles, by contrast, could be identified across cases in the form of scientifically inappropriate conceptions. These include the conception of the biomembrane as a wall, which goes along with conceptions of a protective function and a shape-giving function of the biomembrane.
Furthermore, it is examined how the extended content knowledge for the school context was applied in working on the learning tasks. It became apparent that certain learning opportunities are suited to fostering certain facets of the extended content knowledge.
Overall, the model of extended content knowledge for the school context appears highly suitable for designing learning opportunities, or design principles for them, on the basis of its facets and their descriptions. For the teaching-learning arrangement examined, minor adaptations of the model proved useful. Regarding methodology, implications could be derived for applying didactical design research to additional subject-matter courses of this kind.
To improve the professional relevance of the subject-matter components of teacher-training programmes, further integration of the extended content knowledge for the school context into these components is highly desirable.
Supernova remnants (SNRs) are discussed as the most promising sources of galactic cosmic rays (CRs). Diffusive shock acceleration (DSA) theory predicts particle spectra in rough agreement with observations. Upon closer inspection, however, the photon spectra of observed SNRs indicate that the particle spectra produced at SNR shocks deviate from the standard expectation. This work suggests a viable explanation for the softening of particle spectra in SNRs. The basic idea is the re-acceleration of particles in the turbulent region immediately downstream of the shock. This thesis shows that re-acceleration of particles by fast-mode waves in the downstream region can be efficient enough to affect particle spectra over several decades in energy. To demonstrate this, a generic SNR model is presented in which the evolution of particles is described by the reduced CR transport equation. It is shown that the resulting particle spectra, and the corresponding synchrotron spectra, are significantly softer than in the standard case. Next, this work outlines RATPaC, a code developed to model particle acceleration and the corresponding photon emission in SNRs. RATPaC solves the particle transport equation in test-particle mode using hydrodynamic simulations of the SNR plasma flow; the background magnetic field is either computed from the induction equation or follows analytic profiles. This work presents an extended version of RATPaC that accounts for stochastic re-acceleration by fast-mode waves, which provide diffusion of particles in momentum space. This version is then applied to model the young historical SNR Tycho. According to radio observations, Tycho's SNR features a radio spectral index of approximately −0.65. Previous modeling approaches have attributed this to a pronounced Alfvénic drift assumed to operate in the shock vicinity; the problems and inconsistencies of this scenario are discussed in this work.
Instead, stochastic re-acceleration of electrons in the immediate downstream region of Tycho's SNR is suggested as the cause of the soft radio spectrum. Furthermore, this work investigates two different scenarios for the magnetic-field distribution inside Tycho's SNR and concludes that magnetic-field damping is needed to account for the observed filaments in the radio range. Two models are presented for Tycho's SNR, both featuring a strong hadronic contribution; a purely leptonic model is therefore considered very unlikely. In addition to the detailed modeling of Tycho's SNR, this dissertation presents a relatively simple one-zone model for the young SNR Cassiopeia A and an interpretation of the recently analyzed VERITAS and Fermi-LAT data. It shows that the γ-ray emission of Cassiopeia A cannot be explained without a hadronic contribution and that the remnant accelerates protons up to TeV energies. Thus, Cassiopeia A is unlikely to be a PeVatron.
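The standard DSA expectation against which this softness is measured can be stated explicitly. In the test-particle limit, a shock with compression ratio r produces a power-law particle spectrum whose index fixes the radio spectral index:

```latex
N(E) \propto E^{-s}, \qquad s = \frac{r+2}{r-1}, \qquad \alpha = -\frac{s-1}{2}
```

A strong shock (r = 4) gives s = 2 and hence α = −0.5, whereas the observed α ≈ −0.65 corresponds to s ≈ 2.3, i.e. a particle spectrum distinctly softer than the standard prediction.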
In the GDR, geography was one of the school subjects most heavily loaded with political topics in the spirit of Marxism-Leninism. Another aspect is the socialist educational goals that ranked highly in GDR schooling, the focus being on raising children to become socialist personalities. This thesis attempts to take a clear look at this situation in order to find out what was demanded of teachers and how it was to be implemented in school.
With the fall of the Berlin Wall, a restructuring of the education system in the East was, of course, inevitable. Here the thesis aims to provide insights into how geography teachers supported and implemented this transformation. Which traits from their socialisation in the GDR persisted in how they designed their lessons and aligned them with the new educational goals?
To this end, geography teachers were interviewed who had taught both in the GDR and in reunified Germany. The questions focused primarily on how they taught before, during and after the Wende and the system transformation that followed.
The interviews conclude that geography teaching in the GDR did not differ thematically very much from that in the FRG, so no extensive change in the content of geography teaching was required. Apparently, even in GDR times, teachers often extended ideology-free physical-geography topics on their own authority in order to reduce the subject's ideological load. Most teachers therefore found it relatively easy to adapt their teaching to the West German system. The humanistic values education of the GDR system was likewise continued, with the socialist aspect set aside, since here too there were many parallels to the West German system. The East German teachers clearly characterise the subject as a natural science, although in schools it is assigned to the social sciences and in the GDR it also had a strong economic-geography orientation.
With the end of the GDR, teachers were released from the responsibility of raising socialist personalities, and the interview excerpts cited in this thesis leave no doubt that most respondents did not regret this, while still orienting themselves to the values of the GDR era to this day.
Geochemical processes such as mineral dissolution and precipitation alter the microstructure of rocks and thereby affect their hydraulic and mechanical behaviour. Quantifying these property changes and considering them in reservoir simulations is essential for a sustainable utilisation of the geological subsurface. For lack of alternatives, analytical methods and empirical relations are currently applied to estimate the evolving hydraulic and mechanical rock properties associated with chemical reactions. However, the predictive capabilities of analytical approaches remain limited, since they assume idealised microstructures and thus cannot reflect property evolution for dynamic processes. Hence, the aim of the present thesis is to improve the prediction of permeability and stiffness changes resulting from pore space alterations of reservoir sandstones.
A detailed representation of rock microstructure, including the morphology and connectivity of pores, is essential to accurately determine physical rock properties. For that purpose, three-dimensional pore-scale models of typical reservoir sandstones, obtained from highly resolved micro-computed tomography (micro-CT), are used to numerically calculate permeability and stiffness. In order to adequately depict characteristic distributions of secondary minerals, the virtual samples are systematically altered and the resulting trends among the geometric, hydraulic, and mechanical rock properties are quantified. It is demonstrated that the geochemical reaction regime controls the location of mineral precipitation within the pore space, and thereby crucially affects the permeability evolution. This emphasises the need to determine distinctive porosity-permeability relationships by means of digital pore-scale models. By contrast, a substantial impact of spatial alteration patterns on the stiffness evolution of reservoir sandstones is only observed for certain microstructures, such as highly porous granular rocks or sandstones comprising framework-supporting cementations. In order to construct synthetic granular samples, a process-based approach is proposed that includes grain deposition and diagenetic cementation. It is demonstrated that the generated samples reliably represent the microstructural complexity of natural sandstones. Thereby, general limitations of imaging techniques can be overcome and various realisations of granular rocks can be produced flexibly. These can be further altered in virtual experiments, offering a fast and cost-effective way to examine the impact of precipitation, dissolution or fracturing on various petrophysical correlations.
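For comparison, the analytical baselines mentioned above are typically of Kozeny-Carman form, which maps porosity to permeability for an idealised granular pack; a minimal sketch (the grain size is an illustrative assumption):

```python
def kozeny_carman(porosity, grain_diameter):
    """Classical Kozeny-Carman permeability estimate (in m^2) for an
    idealised pack of spheres; the kind of empirical relation that
    digital pore-scale models are benchmarked against."""
    phi = porosity
    return (grain_diameter ** 2 / 180.0) * phi ** 3 / (1.0 - phi) ** 2

# Cementation reducing porosity from 25% to 15% cuts permeability ~6-fold:
k_clean = kozeny_carman(0.25, 2e-4)     # ~200 um grains
k_cemented = kozeny_carman(0.15, 2e-4)
```

Because this relation assumes one idealised microstructure, it yields a single porosity-permeability curve regardless of where minerals precipitate, which is exactly the limitation the pore-scale approach addresses.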
The presented research work provides methodological principles to quantify trends in permeability and stiffness resulting from geochemical processes. The calculated physical property relations are directly linked to pore-scale alterations, and thus have a higher accuracy than commonly applied analytical approaches. This will considerably improve the predictive capabilities of reservoir models, and is further relevant to assess and reduce potential risks, such as productivity or injectivity losses as well as reservoir compaction or fault reactivation. Hence, the proposed method is of paramount importance for a wide range of natural and engineered subsurface applications, including geothermal energy systems, hydrocarbon reservoirs, CO2 and energy storage as well as hydrothermal deposit exploration.
The continuous advancement of VR systems offers new possibilities for interacting with virtual objects in three-dimensional space, but also confronts developers of VR applications with new challenges. Selection and manipulation techniques have to be chosen with the application scenario, the target group and the available input and output devices in mind. This thesis contributes to supporting the choice of suitable interaction techniques. To this end, a representative set of selection and manipulation techniques was examined and, taking existing classification systems into account, a taxonomy was developed that allows the techniques to be analysed with respect to interaction-relevant properties. Based on this taxonomy, techniques were selected and compared in an exploratory study in order to draw conclusions about the dimensions of the taxonomy and to generate new evidence on the advantages and disadvantages of the techniques in specific application scenarios. The results of this work culminate in a web application that supports developers of VR applications in choosing suitable selection and manipulation techniques for an application scenario by allowing techniques to be filtered on the basis of the taxonomy and sorted using the results of the study.
Energy is at the heart of the climate crisis, but also at the heart of any effort towards climate change mitigation: energy consumption is responsible for approximately three quarters of global anthropogenic greenhouse gas (GHG) emissions. Therefore, central to any serious plan to stave off a climate catastrophe is a major transformation of the world's energy system, moving society away from fossil fuels and towards a net-zero energy future. Since fossil fuels are also a major source of air pollutant emissions, the energy transition has important implications for air quality as well, and thus for human and environmental health. Both Europe and Germany have set the goal of becoming GHG neutral by 2050 and have demonstrated a deep commitment to a comprehensive energy transition. Two of the most significant developments in energy policy over the past decade have been the interest in the expansion of shale gas and of hydrogen, which have accordingly attracted great interest and debate among public, private and political actors.
In this context, sound scientific information can play an important role by informing stakeholder dialogue and future research investments, and by supporting evidence-based decision-making. This thesis examines anticipated environmental impacts from possible, relevant changes in the European energy system, in order to impart valuable insight and fill critical gaps in knowledge. Specifically, it investigates possible future shale gas development in Germany and the United Kingdom (UK), as well as a hypothetical, complete transition to hydrogen mobility in Germany. Moreover, it assesses the impacts on GHG and air pollutant emissions, and on tropospheric ozone (O3) air quality. The analysis is facilitated by constructing emission scenarios and performing air quality modeling via the Weather Research and Forecasting model coupled with chemistry (WRF-Chem). The work of this thesis is presented in three research papers.
The first paper finds that methane (CH4) leakage rates from upstream shale gas development in Germany and the UK would range between 0.35% and 1.36% in a realistic, business-as-usual case, while they would be significantly lower, between 0.08% and 0.15%, in an optimistic, strict regulation and high compliance case, thus demonstrating the value and potential of measures to substantially reduce emissions. Yet, while the optimistic case is technically feasible, it is unlikely that the practices and technologies assumed would be applied and accomplished on a systematic, regular basis, owing to economics and limited monitoring resources. The realistic CH4 leakage rates estimated in this study are comparable to values reported by studies carried out in the US and elsewhere. In contrast, the optimistic rates are similar to official CH4 leakage data from upstream gas production in Germany and in the UK. Considering that there is a lack of systematic, transparent and independent reports supporting the official values, this study further highlights the need for more research efforts in this direction. Compared with national energy sector emissions, this study suggests that shale gas emissions of volatile organic compounds (VOCs) could be significant, whereas emissions of other air pollutants would be relatively insignificant. As with CH4, mitigation measures could be effective in reducing VOC emissions.
The second paper shows that VOC and nitrogen oxides (NOx) emissions from a future shale gas industry in Germany and the UK have potentially harmful consequences for European O3 air quality on both the local and regional scale. The results indicate a peak increase in the maximum daily 8-hour average O3 (MDA8) ranging from 3.7 µg m⁻³ to 28.3 µg m⁻³. Findings suggest that shale gas activities could result in additional exceedances of MDA8 at a substantial percentage of regulatory measurement stations both locally and in neighboring and distant countries, with up to circa one third of stations in the UK and one fifth of stations in Germany experiencing additional exceedances. Moreover, the results reveal that the shale gas impact on the cumulative health-related metric SOMO35 (annual Sum of Ozone Means Over 35 ppb) could be substantial, with a maximum increase of circa 28%. Overall, the findings suggest that shale gas VOC emissions could play a critical role in O3 enhancement, while NOx emissions would contribute to a lesser extent. Thus, the results indicate that stringent regulation of VOC emissions would be important in the event of future European shale gas development to minimize deleterious health outcomes.
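The SOMO35 metric used above has a simple standard definition: the annual sum, over all days, of the daily maximum 8-hour mean ozone (MDA8) in excess of 35 ppb, with days below the cut-off contributing zero. A minimal sketch with illustrative values (not data from the study):

```python
# SOMO35: annual Sum of (daily maximum 8-hour) Ozone Means Over 35 ppb.
# Days whose MDA8 lies below the 35 ppb cut-off contribute zero.

def somo35(mda8_ppb):
    """Sum of daily MDA8 excesses over 35 ppb (units: ppb * days)."""
    return sum(max(0.0, v - 35.0) for v in mda8_ppb)

# Illustrative three-day series (a real input would hold 365 daily values).
example = somo35([30.0, 40.0, 50.0])  # 0 + 5 + 15 = 20.0
```

A shale-gas-driven rise in MDA8 on many days therefore accumulates directly into a higher annual SOMO35, which is why the metric is sensitive to the O3 enhancements described above.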
The third paper demonstrates that a hypothetical, complete transition of the German vehicle fleet to hydrogen fuel cell technology could contribute substantially to Germany's climate and air quality goals. The results indicate that if the hydrogen were produced via renewable-powered water electrolysis (green hydrogen), German carbon dioxide equivalent (CO2eq) emissions would decrease by 179 MtCO2eq annually, though if electrolysis were powered by the current electricity mix, emissions would instead increase by 95 MtCO2eq annually. The findings generally reveal a notable anticipated decrease in German energy emissions of regulated air pollutants. The results suggest that vehicular hydrogen demand would be 1000 PJ annually, which would require between 446 TWh and 525 TWh of electricity for electrolysis, hydrogen transport and storage. When only the heavy-duty vehicle (HDV) segment is shifted to green hydrogen, the results of this thesis show that vehicular hydrogen demand drops to 371 PJ while a deep emissions cut is still realized (-57 MtCO2eq), suggesting that HDVs are a low-hanging fruit for contributing to the decarbonization of the German road transport sector with hydrogen energy.
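The relation between the 1000 PJ hydrogen demand and the 446–525 TWh electricity requirement quoted above is a unit conversion plus an overall supply-chain efficiency. A back-of-envelope sketch, where the chain efficiencies are my illustrative assumptions chosen to bracket the thesis range, not figures from the thesis:

```python
# 1 TWh = 3.6e15 J = 3.6 PJ (exact unit conversion).
PJ_PER_TWH = 3.6

def electricity_demand_twh(h2_demand_pj: float, chain_efficiency: float) -> float:
    """Electricity needed to deliver h2_demand_pj of hydrogen energy, given
    an assumed overall efficiency of electrolysis + transport + storage."""
    return h2_demand_pj / PJ_PER_TWH / chain_efficiency

# Illustrative chain efficiencies of 62% and 53% roughly reproduce the
# 446-525 TWh range for a 1000 PJ annual hydrogen demand.
low = electricity_demand_twh(1000.0, chain_efficiency=0.62)   # ~448 TWh
high = electricity_demand_twh(1000.0, chain_efficiency=0.53)  # ~524 TWh
```

The same arithmetic applied to the 371 PJ HDV-only scenario yields a proportionally smaller electricity requirement.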
Anthropogenic climate change alters the hydrological cycle. While certain areas will experience more intense precipitation events, others will face droughts and increased evaporation, affecting water storage in long-term reservoirs, groundwater, snow, and glaciers. High elevation environments are especially vulnerable to climate change, which will impact the water supply for people living downstream. The Himalaya has been identified as a particularly vulnerable system, with nearly one billion people depending on its runoff as their main water resource. As such, a more refined understanding of spatial and temporal changes in the water cycle of high altitude systems is essential to assess variations in water budgets under different climate change scenarios.
Anthropogenic influences are not the only driver of the hydrological cycle, however: changes also occur over geological timescales, connected to the interplay between orogenic uplift and climate change. The temporal evolution and causes of such changes are often difficult to constrain. Using proxies that reflect hydrological changes with increasing elevation, we can unravel the history of orogenic uplift in mountain ranges and its effect on the climate.
In this thesis, stable isotope ratios (expressed as δ2H and δ18O values) of meteoric waters and organic material are combined, as tracers of atmospheric and hydrologic processes, with remote sensing products to better understand water sources in the Himalayas. In addition, compound-specific stable isotopes of leaf waxes (δ2Hwax) and brGDGTs (branched glycerol dialkyl glycerol tetraethers) in modern soils from four Himalayan river catchments were assessed as recorders of modern climatological conditions and thus as proxies of paleoclimate and (paleo-)elevation. Ultimately, hydrological variations over geological timescales were examined using δ13C and δ18O values of soil carbonates and bulk organic matter from sedimentological sections of the pre-Siwalik and Siwalik groups, tracking the response of vegetation and of monsoon intensity and seasonality over a timescale of 20 Myr.
I find that Rayleigh distillation, with an Indian Summer Monsoon (ISM) moisture source, mainly controls the isotopic composition of surface waters in the studied Himalayan catchments. An increase in d-excess in spring, verified by remote sensing data products, shows the significant impact of runoff from snow-covered and glaciated areas on surface water isotopic values in the time series.
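Both quantities invoked here have standard textbook definitions: Rayleigh distillation progressively depletes the remaining vapour in heavy isotopes as rain falls out, and deuterium excess is defined as d = δ2H − 8·δ18O. A minimal sketch with illustrative numbers (not values from the thesis):

```python
# Standard isotope-hydrology relations; all delta values in per-mil (‰).

def rayleigh_delta(delta0: float, f: float, alpha: float) -> float:
    """Delta value of the remaining vapour after Rayleigh distillation.

    delta0: initial delta value (‰); f: remaining vapour fraction (0, 1];
    alpha: equilibrium fractionation factor (liquid/vapour, > 1).
    """
    return (delta0 + 1000.0) * f ** (alpha - 1.0) - 1000.0

def d_excess(delta_2h: float, delta_18o: float) -> float:
    """Deuterium excess, d = delta2H - 8 * delta18O."""
    return delta_2h - 8.0 * delta_18o

# Illustrative: vapour starting at delta18O = -12 ‰, 60% rained out,
# with a typical 18O fractionation factor near 25 degrees C (~1.0094).
d18o = rayleigh_delta(-12.0, f=0.4, alpha=1.0094)  # more depleted than -12 ‰
dex = d_excess(-90.0, -12.0)  # = 6.0 ‰
```

Runoff from snow and ice carries a distinct d-excess signature, which is why this quantity can flag meltwater contributions in the surface water time series.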
In addition, I show that biomarker records such as brGDGTs and δ2Hwax have the potential to record (paleo-)elevation, yielding significant correlations with temperature and surface water δ2H values, respectively, as well as with elevation. Comparing elevations inferred from brGDGTs and δ2Hwax revealed large differences in arid sections of the elevation transects, owing to an additional effect of evapotranspiration on δ2Hwax. A combined study of these proxies can therefore improve paleoelevation estimates, and recommendations are provided based on the results of this study.
Ultimately, I infer from the stable isotopic signatures of the two sedimentary sections in the Himalaya (east and west) that the expansion of C4 vegetation between 20 and 1 Myr was not solely dependent on atmospheric pCO2, but also on regional changes in aridity and seasonality.
This thesis shows that the stable isotope chemistry of surface waters can be applied as a tool to monitor the changing Himalayan water budget under projected increasing temperatures. Uncertainties associated with paleo-elevation reconstructions were minimized by combining organic proxies (δ2Hwax and brGDGTs) in Himalayan soils. Stable isotope ratios in bulk soil and soil carbonates traced the evolution of vegetation under monsoon influence during the late Miocene, demonstrating that these proxies can be used to record monsoon intensity, seasonality, and the response of vegetation. In conclusion, the use of organic proxies and stable isotope chemistry in the Himalayas has proven successful in recording changes in climate with increasing elevation. The combination of δ2Hwax and brGDGTs as a new proxy provides a more refined understanding of (paleo-)elevation and the influence of climate.
This study examines the historical development of the praetorian prefecture in the 3rd century and assesses its function within the imperial order of rule. Owing to the military and political crises of the 3rd century and the ruling strategies adapted to them, the praetorian prefects were given comprehensive responsibilities. The disparate state of the sources and of scholarship, however, describes the growth in power and the functional upgrading of the praetorian prefects in this important phase very differently. On the basis of late antique accounts, the majority of scholars further assume a loss of power of the praetorian prefects under Constantine, to whom a reform of the praetorian prefecture is attributed. This loss of power, however, cannot be securely determined in chronological or functional terms. In the scholarship, this functional decline is often explained by the Constantinian demilitarization and regionalization of the praetorian prefecture. Until now, there has been no current comprehensive account that assesses and categorizes the praetorian prefecture within the order of rule of the 3rd century, in order to distinguish it functionally from the classical praetorian prefecture and from the regional prefecture of the 4th century.
For this functional differentiation, the present study abstracts the functional characteristics and historical contexts of the praetorian prefecture in the 3rd century and derives from them the ideal type of an "imperial magistracy" ("Kaiserliche Magistratur"). The results of this abstraction show the praetorian prefecture of the 3rd century as a communicative interface between the emperor and the leading offices of the central and provincial administration. In this role, the praetorian prefecture assumed a leading staff function which, in connection with the highest jurisdiction without appeal, formed the second tier of office-holders after the emperor. The praetorian prefects exercised this function without territorial attachment until the end of the Tetrarchy, or the early reign of Constantine.
Generative adversarial networks (GANs) have been broadly applied to a wide range of application domains since their proposal. In this thesis, we propose several methods that aim to tackle different existing problems in GANs. In particular, even though GANs are generally able to generate high-quality samples, the diversity of the generated set is often sub-optimal. Moreover, the common practice of increasing the number of models in the original GANs framework, as well as their architectural sizes, introduces additional costs. Additionally, the proper evaluation of a generated set, though challenging, is an important direction for ultimately improving the generation process in GANs. We start by introducing two diversification methods that extend the original GANs framework to multiple adversaries to stimulate sample diversity in a generated set. Then, we introduce a new post-training compression method based on Monte Carlo methods and importance sampling to quantize and prune the weights and activations of pre-trained neural networks without any additional training. This method may be used to reduce the memory and computational costs introduced by increasing the number of models in the original GANs framework. Moreover, we use a similar procedure to quantize and prune gradients during training, which also reduces the communication costs between different workers in a distributed training setting. We introduce several topology-based evaluation methods to assess data generation in different settings, namely image generation and language generation. Our methods retrieve both single-valued and double-valued metrics which, given a real set, may be used to broadly assess a generated set or to separately evaluate sample quality and sample diversity, respectively. Moreover, two of our metrics use locality-sensitive hashing to accurately assess the generated sets of highly compressed GANs.
The analysis of the compression effects in GANs paves the way for their efficient employment in real-world applications. Given their general applicability, the methods proposed in this thesis may be extended beyond the context of GANs. Hence, they may be generally applied to enhance existing neural networks and, in particular, generative frameworks.
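The importance-sampling idea behind post-training compression can be illustrated in a generic form: sample weight entries with probability proportional to their magnitude and reweight the survivors so the result stays unbiased in expectation. The sketch below shows only this general technique with hypothetical parameters; it is not the thesis's actual method, which also handles activations and quantization:

```python
import numpy as np

def importance_sample_prune(w: np.ndarray, n_samples: int, seed: int = 0) -> np.ndarray:
    """Prune a weight array by importance sampling with replacement.

    Entries are drawn with probability proportional to |w|; each kept
    entry w_i is rescaled by c_i / (n * p_i), where c_i is its draw
    count, so the estimator is unbiased: E[output_i] = w_i.
    """
    rng = np.random.default_rng(seed)
    p = np.abs(w).ravel() / np.abs(w).sum()          # sampling distribution
    idx = rng.choice(w.size, size=n_samples, p=p, replace=True)
    counts = np.bincount(idx, minlength=w.size)      # draws per entry
    sparse = np.zeros(w.size)
    keep = counts > 0
    sparse[keep] = w.ravel()[keep] * counts[keep] / (n_samples * p[keep])
    return sparse.reshape(w.shape)

# Illustrative 10x10 weight matrix pruned to at most 30 surviving entries.
w = np.arange(1.0, 101.0).reshape(10, 10)
w_pruned = importance_sample_prune(w, n_samples=30)
```

Because only sampled entries survive, the output is sparse; for all-positive weights the rescaling even preserves the total weight sum exactly, since w_i / p_i is constant.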
Massive Open Online Courses (MOOCs) open up new opportunities to learn a wide variety of skills online and are thus well suited for individual education, especially where proficient teachers are not available locally. At the same time, modern society is undergoing a digital transformation, requiring the training of large numbers of current and future employees. Abstract thinking, logical reasoning, and the need to formulate instructions for computers are becoming increasingly relevant. A holistic way to train these skills is to learn how to program. Programming, in addition to being a mental discipline, is also considered a craft, and practical training is required to achieve mastery. In order to effectively convey programming skills in MOOCs, practical exercises are incorporated into the course curriculum to offer students the necessary hands-on experience to reach an in-depth understanding of the programming concepts presented. Our preliminary analysis showed that, while being an integral and rewarding part of courses, practical exercises bear the risk of overburdening students who are struggling with conceptual misunderstandings and unknown syntax. In this thesis, we develop, implement, and evaluate different interventions with the aim of improving the learning experience, sustainability, and success of online programming courses. Data from four programming MOOCs, with a total of over 60,000 participants, are employed to determine criteria for practical programming exercises best suited for a given audience.
Based on over five million executions and scoring runs of students' task submissions, we deduce exercise difficulties, students' patterns in approaching the exercises, and potential flaws in exercise descriptions as well as in preparatory videos. The primary issue in online learning is that students face a social gap caused by their isolated physical situation. Each individual student usually learns alone in front of a computer and suffers from the absence of the pre-determined time structure provided in traditional school classes. Furthermore, online learning usually presses students into a one-size-fits-all curriculum, which presents the same content to all students, regardless of their individual needs and learning styles. Personalization of content and individual feedback on the problems students encounter are mostly ruled out by the discrepancy between the number of learners and the number of instructors. This results in a high demand for self-motivation and determination among MOOC participants. Social distance exists between individual students as well as between students and course instructors. It decreases engagement and poses a threat to learning success. Within this research, we approach the issues identified within MOOCs and suggest scalable technical solutions that improve social interaction and balance content difficulty.
Our contributions include situational interventions, approaches for personalizing educational content as well as concepts for fostering collaborative problem-solving. With these approaches, we reduce counterproductive struggles and create a universal improvement for future programming MOOCs. We evaluate our approaches and methods in detail to improve programming courses for students as well as instructors and to advance the state of knowledge in online education.
Data gathered from our experiments show that receiving peer feedback on one's programming problems improves overall course scores by up to 17%. Merely the act of phrasing a question about one's problem improved overall scores by about 14%. The rate of students reaching out for help was significantly improved by situational just-in-time interventions. Request for Comment interventions increased the share of students asking for help by up to 158%. Data from our four MOOCs further provide detailed insight into the learning behavior of students. We outline additional significant findings with regard to student behavior and demographic factors. Our approaches, the technical infrastructure, the numerous educational resources developed, and the data collected provide a solid foundation for future research.
Partially synchronous states exist in systems of coupled oscillators between full synchrony and asynchrony. They are an important research topic because of the variety of dynamical states they exhibit. Frequently, they are studied using phase dynamics. This is a limitation, as phase dynamics are generally obtained in the weak coupling limit as a first-order approximation in the coupling strength. The generalization to higher orders in the coupling strength is an open problem. Of particular interest in the research of partial synchrony are systems containing both attractive and repulsive coupling between the units. Such a mix of coupling yields very specific dynamical states that may help understand the transition between full synchrony and asynchrony. This thesis investigates partially synchronous states in mixed-coupling systems. First, a method for higher-order phase reduction is introduced to capture interactions beyond the pairwise ones of the first-order phase description, in the hope that these may apply to mixed-coupling systems. This new method, for coupled systems with known phase dynamics of the units, gives correct results but, like most comparable methods, is computationally expensive. It is applied to three Stuart-Landau oscillators coupled in a line with a uniform coupling strength. A numerical method is derived to verify the analytical results. These results are interesting, but they also point to the importance of simpler phase models that still exhibit exotic states. One such simple, rarely considered model is the Kuramoto model with attractive and repulsive interactions. Depending on how the units are coupled and on the frequency difference between them, many different states can be achieved. Rich synchronization dynamics, such as a Bellerophon state, are observed when considering a Kuramoto model with attractive interactions within two subpopulations (groups) and repulsive interactions between the groups.
In two groups, one attractive and one repulsive, of identical oscillators with a frequency difference, an interesting solitary state appears directly between full and partial synchrony. This system can be described very well analytically.
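A minimal numerical sketch of such a mixed-coupling setup (illustrative parameters and initial conditions, not the configurations studied in the thesis): two groups of Kuramoto phase oscillators with attractive coupling within each group, repulsive coupling between groups, and a small frequency difference between the groups, integrated with explicit Euler steps.

```python
import numpy as np

def simulate(n=50, k_in=1.0, k_out=-0.5, dw=0.2, dt=0.05, steps=4000, seed=1):
    """Two groups of n Kuramoto oscillators; k_in couples oscillators
    within a group (attractive), k_out couples across groups (repulsive).
    Returns the order parameter r of each group (r = 1: full synchrony)."""
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
    # natural frequencies: the two groups differ by dw
    omega = np.concatenate([np.full(n, -dw / 2), np.full(n, dw / 2)])
    group = np.repeat([0, 1], n)
    # coupling matrix, normalized by the total number of oscillators
    K = np.where(group[:, None] == group[None, :], k_in, k_out) / (2 * n)
    for _ in range(steps):
        # d(theta_i)/dt = omega_i + sum_j K_ij sin(theta_j - theta_i)
        coupling = (K * np.sin(phases[None, :] - phases[:, None])).sum(axis=1)
        phases = (phases + dt * (omega + coupling)) % (2.0 * np.pi)
    return tuple(abs(np.exp(1j * phases[group == g]).mean()) for g in (0, 1))

r0, r1 = simulate()
```

Sweeping k_out and dw in such a sketch is one way to explore the transition between fully synchronous, partially synchronous, and solitary regimes described above.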
The spread of shrubs in Namibian savannas raises questions about the resilience of these ecosystems to global change. This makes it necessary to understand the past dynamics of the vegetation, since there is no consensus on whether shrub encroachment is a new phenomenon, nor on its main drivers. However, the lack of long-term vegetation datasets for the region and the scarcity of suitable palaeoecological archives make reconstructing the past vegetation and land cover of these savannas a challenge.
To help meet this challenge, this study addresses three main research questions: 1) Is pollen analysis a suitable tool to reflect the vegetation change associated with shrub encroachment in savanna environments? 2) Does the current encroached landscape correspond to an alternative stable state of savanna vegetation? 3) To what extent do pollen-based quantitative vegetation reconstructions reflect changes in past land cover?
The research focuses on north-central Namibia which, despite being the region most affected by shrub encroachment, particularly since the beginning of the 21st century, remains poorly understood with respect to the dynamics of this phenomenon.
Field-based vegetation data were compared with modern pollen data to assess their correspondence in terms of composition and diversity along precipitation and grazing intensity gradients. In addition, two sediment cores from Lake Otjikoto were analysed to reveal changes in vegetation composition that have occurred in the region over the past 170 years, and their possible drivers. For this, a multiproxy approach (fossil pollen, sedimentary ancient DNA (sedaDNA), biomarkers, compound-specific carbon (δ13C) and deuterium (δD) isotopes, bulk carbon isotopes (δ13Corg), grain size, geochemical properties) was applied at high taxonomic and temporal resolution. REVEALS modelling was applied to the fossil pollen record from Lake Otjikoto to quantitatively reconstruct past vegetation cover. For this, we first derived pollen productivity estimates (PPEs) of the most relevant savanna taxa in the region, using the extended R-value model and two pollen dispersal options (a Gaussian plume model and a Lagrangian stochastic model). The REVEALS-based vegetation reconstruction was then validated using remote sensing-based regional vegetation data.
The results show that modern pollen reflects the composition of the vegetation well, but diversity less well. Interestingly, precipitation and grazing explain a significant amount of the compositional change in the pollen and vegetation spectra. The multiproxy record shows that a state change from open Combretum woodland to encroached Terminalia shrubland can occur within a century, and that the transition between states spans around 80 years and is characterized by a unique vegetation composition. This transition was supported by gradual environmental changes induced by management (i.e. broad-scale logging for the mining industry, selective grazing and reduced fire activity associated with intensified farming) and related land-use change. The derived environmental changes (i.e. reduced soil moisture, reduced grass cover, changes in species composition and competitiveness, reduced fire intensity) may have affected the resilience of open Combretum woodlands, making them more susceptible to change to an encroached state through stochastic events, such as consecutive years of high precipitation and drought, and through high pCO2 concentrations. We assume that the resulting encroached state was further stabilized by feedback mechanisms that favour the establishment and competitiveness of woody vegetation.
The REVEALS-based quantitative estimates of plant taxa indicate the predominance of a semi-open landscape throughout the 20th century, and a reduction in grass cover below 50% since the beginning of the 21st century, associated with the spread of encroacher woody taxa. Cover estimates show a close match with regional vegetation data, providing support for the vegetation dynamics inferred from the multiproxy analyses. Reasonable PPEs were obtained for all woody taxa, but not for Poaceae.
In conclusion, pollen analysis is a suitable tool to reconstruct past vegetation dynamics in savannas. However, because pollen cannot identify grasses beyond family level, a multiproxy approach, particularly the use of sedaDNA, is required. I was able to separate stable encroached states from mere woodland phases, and could identify drivers and speculate about related feedbacks. In addition, the REVEALS-based quantitative vegetation reconstruction clearly reflects the magnitude of the changes in the vegetation cover that occurred during the last 130 years, despite the limitations of some PPEs.
This research provides new insights into pollen-vegetation relationships in savannas and highlights the importance of multiproxy approaches when reconstructing past vegetation dynamics in semi-arid environments. It also provides the first time series with sufficient taxonomic resolution to show changes in vegetation composition during shrub encroachment, as well as the first quantitative reconstruction of past land cover in the region. These results help to identify the different stages in savanna dynamics and can be used to calibrate predictive models of vegetation change, which are highly relevant to land management.
Carbonatite magmatism is a highly efficient transport mechanism from Earth's mantle to the crust, thus providing insights into the chemistry and dynamics of the Earth's mantle. One evolving and promising tool for tracing magma interaction is the use of stable iron isotopes, particularly because iron isotope fractionation is controlled by oxidation state and bonding environment. Meanwhile, a large data set on iron isotope fractionation in igneous rocks exists, comprising bulk rock compositions and fractionation between mineral groups. Iron isotope data from natural carbonatite rocks are extremely light and of remarkably high variability. This resembles iron isotope data from mantle xenoliths, which are characterized by a variability in δ56Fe spanning three times the range found in basalts, and by the extremely light values of some whole rock samples, reaching δ56Fe as low as -0.69 ‰ in a spinel lherzolite. This large range of variation may be caused by metasomatic processes involving metasomatic agents such as volatile-bearing, highly alkaline silicate melts or carbonate melts. The expected effects of metasomatism on iron isotope fractionation vary with parameters such as the melt/rock ratio, the reaction time, and the nature of the metasomatic agents and mineral reactions involved. An alternative or additional way to enrich light isotopes in the mantle could be multiple phases of melt extraction. To interpret the existing data sets, more knowledge of iron isotope fractionation factors is needed.
To investigate the behavior of iron isotopes in carbonatite systems, kinetic and equilibration experiments between immiscible silicate and carbonate melts in natro-carbonatite systems were performed in an internally heated gas pressure vessel at intrinsic redox conditions, at temperatures between 900 and 1200 °C and pressures of 0.5 and 0.7 GPa. The iron isotope compositions of coexisting silicate melt and carbonate melt were analyzed by solution MC-ICP-MS. The kinetic experiments, employing an Fe-58-spiked starting material, show that isotopic equilibrium is obtained after 48 hours. The experimental studies of equilibrium iron isotope fractionation between immiscible silicate and carbonate melts have shown that light isotopes are enriched in the carbonatite melt. The highest Δ56Fesil.m.-carb.melt (mean) of 0.13 ‰ was determined in a system with a strongly peralkaline silicate melt composition (ASI ≥ 0.21, Na/Al ≤ 2.7). In three systems with extremely peralkaline silicate melt compositions (ASI between 0.11 and 0.14), iron isotope fractionation could not be resolved analytically. The lowest Δ56Fesil.m.-carb.melt (mean) of 0.02 ‰ was determined in a system with an extremely peralkaline silicate melt composition (ASI ≤ 0.11, Na/Al ≥ 6.1). The observed iron isotope fractionation is most likely governed by the redox conditions of the system. Yet, in the systems where no fractionation occurred, structural changes induced by compositional changes possibly overrule the influence of redox conditions. This interpretation implies that the iron isotope system holds the potential to be useful not only for exploring redox conditions in magmatic systems, but also for detecting structural changes in a melt.
In situ iron isotope analyses by femtosecond laser ablation coupled to MC-ICP-MS were performed on magnetite and olivine grains to reveal variations in iron isotope composition on the micro scale. The investigated sample is a melilitite bomb from the Salt Lake Crater group at Honolulu (Oahu, Hawaii), showing strong evidence for interaction with a carbonatite melt. While magnetite grains are rather homogeneous in their iron isotope compositions, olivine grains span a far larger range in iron isotope ratios. The variability of δ56Fe in magnetite is limited, ranging from -0.17 ‰ (± 0.11 ‰, 2SE) to +0.08 ‰ (± 0.09 ‰, 2SE). δ56Fe values in olivine range from -0.66 ‰ (± 0.11 ‰, 2SE) to +0.10 ‰ (± 0.13 ‰, 2SE). Olivine and magnetite grains hold different information regarding kinetic and equilibrium fractionation owing to their different Fe diffusion coefficients. The observations made in the experiments and in the in situ iron isotope analyses suggest that the extremely light iron isotope signatures found in carbonatites are generated by several steps of isotope fractionation during carbonatite genesis. These may involve equilibrium and kinetic fractionation. Since iron isotopic signatures in natural systems are generated by a combination of multiple factors (pressure, temperature, redox conditions, phase composition and structure, time scale), multi-tracer approaches are needed to explain the signatures found in natural rocks.
One of the key challenges in modern Facility Management (FM) is to digitally reflect the current state of the built environment, referred to as the as-is or as-built (versus as-designed) representation. While the use of Building Information Modeling (BIM) can address the issue of digital representation, the generation and maintenance of BIM data require a considerable amount of manual work and domain expertise. Another key challenge is monitoring the current state of the built environment, which is used to provide feedback and enhance decision making. The need for an integrated solution for all data associated with the operational life cycle of a building is becoming more pronounced as practices from Industry 4.0 are currently being evaluated and adopted for FM use. This research presents an approach for the digital representation of indoor environments in their current state within the life cycle of a given building. Such an approach requires the fusion of various sources of digital data. The key to solving such a complex issue of digital data integration, processing and representation lies in the use of a Digital Twin (DT). A DT is a digital duplicate of the physical environment, its states, and its processes. A DT fuses as-designed and as-built digital representations of the built environment, typically in the form of floorplans, point clouds and BIMs, with as-is data and additional information layers pertaining to the current and predicted states of an indoor environment or a complete building (e.g., sensor data). The design, implementation and initial testing of prototypical DT software services for indoor environments are presented and described. These DT software services are implemented within a service-oriented paradigm, and their feasibility is demonstrated through functioning and tested key software components within prototypical Service-Oriented System (SOS) implementations.
The main outcome of this research shows that key data related to the built environment can be semantically enriched and combined to enable digital representations of indoor environments, based on the concept of a DT. Furthermore, the outcomes of this research show that digital data, related to FM and Architecture, Construction, Engineering, Owner and Occupant (AECOO) activity, can be combined, analyzed and visualized in real-time using a service-oriented approach. This has great potential to benefit decision making related to Operation and Maintenance (O&M) procedures within the scope of the post-construction life cycle stages of typical office buildings.
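The core data-fusion idea can be sketched generically: a static as-designed layer (e.g. room attributes from a BIM) is combined with a dynamic as-is layer of timestamped sensor readings into one queryable twin state. All names below are hypothetical illustrations, not the thesis's actual services or schema:

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    """One spatial unit: static BIM-style attributes plus live readings."""
    room_id: str
    area_m2: float                               # static, as-designed layer
    sensors: dict = field(default_factory=dict)  # dynamic, as-is layer

    def update(self, sensor: str, value: float, timestamp: str) -> None:
        self.sensors[sensor] = {"value": value, "timestamp": timestamp}

@dataclass
class DigitalTwin:
    """Fuses static and dynamic layers into one queryable state."""
    rooms: dict = field(default_factory=dict)

    def register(self, room: Room) -> None:
        self.rooms[room.room_id] = room

    def state(self, room_id: str) -> dict:
        r = self.rooms[room_id]
        return {"room_id": r.room_id, "area_m2": r.area_m2, **r.sensors}

# Hypothetical usage: register a room, ingest a reading, query the fused state.
dt = DigitalTwin()
dt.register(Room("R101", area_m2=24.5))
dt.rooms["R101"].update("temperature_C", 21.3, "2021-06-01T12:00:00Z")
snapshot = dt.state("R101")
```

In a service-oriented deployment, `register`, `update` and `state` would correspond to separate service endpoints rather than in-process method calls.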
Anthropogenic activities such as continuous landscape changes threaten biodiversity at both local and regional scales. Metacommunity models attempt to combine these two scales and continuously contribute to a better mechanistic understanding of how spatial processes and constraints, such as fragmentation, affect biodiversity. There is a strong consensus that such structural changes of the landscape tend to negatively affect the stability of metacommunities. However, the interplay of complex trophic communities and landscape structure in particular is not yet fully understood.
In this dissertation, a metacommunity approach is used, based on a dynamic and spatially explicit model that integrates population dynamics at the local scale and dispersal dynamics at the regional scale. This approach allows the assessment of complex spatial landscape components, such as habitat clustering, on complex species communities, as well as the analysis of the population dynamics of a single species. In addition to the impact of a fixed landscape structure, periodic environmental disturbances are also considered, in which a periodic change in habitat availability temporally alters landscape structure, as in the seasonal drying of a water body.
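The coupling of local population dynamics and regional dispersal can be illustrated with a deliberately simple stand-in: a two-patch Rosenzweig-MacArthur predator-prey system in which both species disperse between patches at rate d. This is an illustrative sketch with assumed parameters, not the thesis's food-web model:

```python
import numpy as np

def step(state, dt=0.01, r=1.0, K=1.0, a=1.5, h=0.5, e=0.5, m=0.3, d=0.05):
    """One Euler step for two patches; state[p] = (prey N, predator P).

    Local dynamics: logistic prey growth, Holling type II predation,
    predator mortality. Regional dynamics: diffusive dispersal at rate d."""
    out = state.copy()
    for p in (0, 1):
        N, P = state[p]
        q = 1 - p                                   # the other patch
        f = a * N / (1 + a * h * N)                 # per-predator intake
        out[p, 0] = N + dt * (r * N * (1 - N / K) - f * P + d * (state[q, 0] - N))
        out[p, 1] = P + dt * (e * f * P - m * P + d * (state[q, 1] - P))
    return out

# Two patches with different initial densities, integrated to t = 50.
state = np.array([[0.8, 0.2], [0.3, 0.1]])
for _ in range(5000):
    state = step(state)
```

In such a sketch, varying d mimics patch isolation, and modulating d or K periodically in time mimics the seasonal habitat disturbance considered above.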
On the local scale, the model results suggest that large-bodied animal species, such as predators at high trophic positions, are more prone to extinction under large patch isolation than smaller species at lower trophic levels.
Increased metabolic losses for species with lower body mass lead to increased energy limitation for species on higher trophic levels and serve as an explanation for the predominant loss of these species. This effect is particularly pronounced for food webs in which species are more sensitive to increased metabolic losses through dispersal and changes in landscape structure.
In addition to the impact of the species composition of a food web on diversity, the strength of local foraging interactions likewise affects the synchronization of population dynamics. Reduced predation pressure leads to more asynchronous population dynamics, which benefits the stability of population dynamics as it reduces the risk of correlated extinction events among habitats. On the regional scale, two landscape aspects, mean patch isolation and the formation of local clusters of two patches, promote an increase in $\beta$-diversity. Yet the individual composition and robustness of the local species community equally explain a large proportion of the observed diversity patterns.
A combination of periodic environmental disturbance and patch isolation has a particular impact on the population dynamics of a species. The periodic disturbance has a synchronizing effect that can even override emerging asynchronous dynamics under large patch isolation, and it unifies trends in synchronization between different species communities.
In summary, the findings underline the large local impact of species composition and interactions on the local diversity patterns of a metacommunity. In comparison, landscape structures such as fragmentation have a negligible effect on local diversity patterns but a stronger impact on regional diversity patterns. In contrast, at the level of population dynamics, regional characteristics such as periodic environmental disturbance and patch isolation have a particularly strong impact and contribute substantially to the understanding of the stability of population dynamics in a metacommunity. These studies demonstrate once again the complexity of our ecosystems and the need for further analysis to better understand our surrounding environment and to conserve biodiversity in a more targeted way.
Kenya and Uganda are amongst the countries that, for different historical, political, and economic reasons, have embarked on law reform processes with regard to citizenship. In 2009, Uganda made provisions in its laws to allow citizens to hold dual citizenship, while Kenya’s 2010 constitution similarly introduced it, lifting the general prohibition on dual citizenship except for a ban on state officers, including the President and Deputy President, being dual nationals (Manby, 2018).
Against this background, I analysed why these countries, which previously held stringent laws and policies against dual citizenship, made this shift in close temporal proximity to each other. Given their geo-political roles, location, and regional, continental, and international obligations, I conducted a comparative study of the processes, actors, impacts, and effects. The period researched was 2000 to 2010, that is, from when the debates on law reform emerged through the implementation of the reform processes, covering the actors involved and the implications.
According to Rubenstein (2000, p. 520), citizenship is observed in terms of “political institutions” that are free to act according to the will of, in the interests of, or with authority over, their citizenry. Institutions are emergent national or international, higher-order factors above the individual level, embodying the interests and political involvement of their actors without requiring recurring collective mobilisation or imposing intervention to realise these regularities. Transnational institutions are organisations with authority beyond single governments. Given their international obligations, I analysed the role of the UN, AU, and EAC in influencing the citizenship debates and reforms in Kenya and Uganda. Further, non-state actors, such as civil society, were considered.
Veblen (1899) describes institutions as a set of settled habits of thought common to the generality of men. Institutions function only because the rules involved are rooted in shared habits of thought and behaviour, although there is some ambiguity in the definition of the term “habit”. Whereas abstractions and definitions depend on different analytical procedures, institutions restrain some forms of action and facilitate others. Transnational institutions both restrict and aid behaviour. The famous “invisible hand” is nothing else but transnational institutions. Transnational theories, as applied to politics, posit two distinct forms of influence over policy and political action (Veblen, 1899). The influence and durability of institutions is “a function of the degree to which they are instilled in political actors at the individual or organisational level, and the extent to which they thereby ‘tie up’ material resources and networks”. Against this background, transnational networks with connections to Kenya and Uganda were considered alongside the diaspora from these two countries and their role in the debate and reforms on dual citizenship.
Sterian (2013, p. 310) notes that nation states may be vulnerable to institutional influence, and this vulnerability can pose a threat to a nation’s autonomy, political legitimacy, and democratic public law. Transnational institutions sometimes “collide with the sovereignty of the state when they create new structures for regulating cross-border relationships”. However, Griffin (2003) disputes that transnational institutional behaviour is premised on the principles of neutrality, impartiality, and independence. Transnational institutions have become the main target of lobby groups and civil society, consequently leading to excessive politicisation. Kenya and Uganda are member states not only of the broader African Union but also of the EAC, which has adopted elements of socio-economic uniformity. Therefore, in the comparative analysis, I examine the role of the East African Community and its partners in the dual citizenship debate in the two countries.
I argue in the analysis that it is not only important to be a citizen within Kenya or Uganda, but also important to discover how the issue of dual citizenship is legally interpreted within the borders of each individual nation-state. In light of this discussion, I agree with Mamdani’s definition of the nation-state as a unique form of power introduced in Africa by colonial powers between 1880 and 1940, whose outcomes can be viewed as “debris of a modernist postcolonial project, an attempt to create a centralised modern state as the bearer of Westphalia sovereignty against the background of indirect rule” (Mamdani, 1996, p. xxii). I argue that this project has impacted the citizenship debate through the adopted legal framework of post-colonialism, built partly on a class system, ethnic definitions, and political affiliation. I insist, however, that the nation-state should still be a vital custodian of the citizenship debate, in no way denying the individual the rights to identity and belonging. The question that then arises is: which type of nation-state? Mamdani (1996, p. 298) asserts that the core agenda African states faced at independence was threefold: deracialising civil society, detribalising the native authority, and developing the economy in the context of unequal international relations. Post-independence governments grappled with overcoming the citizen-subject dichotomy by either preserving the customary in the name of “defending tradition against alien encroachment or abolishing it in the name of overcoming backwardness and embracing triumphant modernism”. Kenya and Uganda are among the countries that have reformed their citizenship laws, attesting to Mamdani’s latter assertion.
Mamdani’s (1996) assertions on how African states continue to deal with the issue of citizenship, through either the defence of tradition against subjects or its abolition in the name of overcoming backwardness and accepting triumphant modernism, are based on colonial legal theory and the citizen-subject dichotomy within African communities. To create a wider perspective on legal theory, I argue that the assertions above point to the historical divergence between the republican model of citizenship, which places emphasis on political agency as envisioned in Rousseau’s social contract, and the liberal model of citizenship, which stresses legal status and protection (Pocock, 1995).
I therefore compare the contexts of both Kenya and Uganda, the actors, and the implications of transnationalism and post-nationalism for the citizens, the nation-state, and the region. I conclude by highlighting the shortcomings in the law reforms that allowed dual citizenship, further demonstrating an urgent need to address issues such as child statelessness, gendered nationality laws, and the rights of dual citizens. Ethnicity, a weak nation state, and inconsistent citizenship legal reforms are closely linked to the historical factors of both countries. I further indicate the economic and political incentives that influenced the reforms.
Keywords: Citizenship, dual citizenship, nation state, republicanism, liberalism, transnationalism, post-nationalism
Forming as a result of the collision between the Adriatic and European plates, the Alpine orogen exhibits significant lithospheric heterogeneity due to the long history of interplay between these plates, other continental and oceanic blocks in the region, and inherited features from preceding orogenies. This implies that the thermal and rheological configuration of the lithosphere also varies significantly throughout the region. Lithology and temperature/pressure conditions exert a first-order control on rock strength, principally via thermally activated creep deformation and via the distribution at depth of the brittle-ductile transition zone, which can be regarded as the lower bound of the seismogenic zone. Therefore, they influence the spatial distribution of seismicity within a lithospheric plate. In light of this, accurately constrained geophysical models of the heterogeneous Alpine lithospheric configuration are crucial for describing regional deformation patterns. However, despite the amount of research focussing on the area, different hypotheses still exist regarding the present-day lithospheric state and how it might relate to the present-day seismicity distribution.
This dissertation seeks to constrain the Alpine lithospheric configuration through a fully 3D integrated modelling workflow that utilises multiple geophysical techniques and integrates all available data sources. The aim is to shed light on how lithospheric heterogeneity may influence the heterogeneous patterns of seismicity distribution observed within the region. This was accomplished through the generation of: (i) 3D seismically constrained structural and density models of the lithosphere, adjusted to match the observed gravity field; (ii) 3D models of the lithospheric steady-state thermal field, adjusted to match observed wellbore temperatures; and (iii) 3D rheological models of long-term lithospheric strength, with the results of each step used as input for the following steps.
Results indicate that the highest strengths within the crust (~1 GPa) and upper mantle (>2 GPa) occur at temperatures characteristic of specific phase transitions (more felsic crust: 200–400 °C; more mafic crust and upper lithospheric mantle: ~600 °C), with almost all seismicity occurring in these regions. However, inherited lithospheric heterogeneity was found to significantly modulate this pattern, with seismicity in the thinner and more mafic Adriatic crust (~22.5 km, 2800 kg m−3, 1.30E-06 W m−3) occurring up to higher temperatures (~600 °C) than in the thicker and more felsic European crust (~27.5 km, 2750 kg m−3, 1.3–2.6E-06 W m−3, ~450 °C). Correlations between seismicity in the orogen forelands and lithospheric strength also show different trends, reflecting their different tectonic settings. As such, events in the plate-boundary setting of the southern foreland correlate with the integrated lithospheric strength, occurring mainly in the weaker lithosphere surrounding the strong Adriatic indenter. Events in the intraplate setting of the northern foreland instead correlate with crustal strength, occurring mainly in the weaker and warmer crust beneath the Upper Rhine Graben.
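The link between temperature and strength arises from taking, at each depth, the weaker of frictional (brittle) strength and thermally activated creep (ductile) strength. A minimal yield-strength-envelope sketch with a linear geotherm and generic wet-quartzite creep parameters (illustrative values, not the calibrated ones used in this work) locates the brittle-ductile transition as the depth of peak strength:

```python
import numpy as np

def strength_profile(z_km, T_surf=10.0, grad=25.0, rho=2750.0,
                     strain_rate=1e-15, A=1e-28, n=4.0, Q=2.23e5):
    """Differential stress (Pa) vs depth as min(brittle, ductile) strength.

    Byerlee-type frictional strength grows linearly with lithostatic
    pressure; dislocation-creep strength decays with temperature
    (A in Pa^-n s^-1, Q in J/mol -- generic wet-quartzite values).
    """
    g, R_gas = 9.81, 8.314
    z = z_km * 1e3
    T = 273.15 + T_surf + grad * z_km                 # linear geotherm (K)
    brittle = 0.6 * rho * g * z                        # frictional strength
    ductile = (strain_rate / A) ** (1 / n) * np.exp(Q / (n * R_gas * T))
    return np.minimum(brittle, ductile)

z = np.linspace(0.1, 40, 400)
sigma = strength_profile(z)
bdt_km = z[np.argmax(sigma)]   # strength peaks at the brittle-ductile transition
print(f"BDT at ~{bdt_km:.1f} km, peak strength {sigma.max()/1e9:.2f} GPa")
```

Stacking such profiles for laterally varying geotherms and lithologies, and integrating strength over depth, yields the kind of crustal and integrated lithospheric strength maps discussed above.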
The findings presented in this work therefore not only represent a state-of-the-art understanding of the lithospheric configuration beneath the Alps and their forelands, but also significantly improve knowledge of the features that influence the occurrence of seismicity within the region. This highlights the importance of considering the lithospheric state when explaining observed patterns of deformation.
The ubiquitin-proteasome system (UPS) is a cellular cascade involving three enzymatic steps of protein ubiquitination that target proteins to the 26S proteasome for proteolytic degradation. Several components of the UPS have been shown to be central for the regulation of defense responses during infections with phytopathogenic bacteria. Upon recognition of the pathogen, local defense is induced, which also primes the plant to acquire systemic acquired resistance (SAR) for enhanced immune responses upon challenging infections. Here, ubiquitinated proteins were shown to accumulate locally and systemically during infections with Psm and after treatment with the SAR-inducing metabolites salicylic acid (SA) and pipecolic acid (Pip). The role of the 26S proteasome in local defense has been described in several studies, but its potential role during SAR remains elusive and was therefore investigated in this project by characterizing the Arabidopsis proteasome mutants rpt2a-2 and rpn12a-1 during priming and infections with Pseudomonas. Bacterial replication assays revealed decreased basal and systemic immunity in both mutants, which was verified on the molecular level by impaired activation of defense and SAR genes. rpt2a-2 and rpn12a-1 accumulate wild-type-like levels of camalexin but less SA. Exogenous SA treatment restores local PR gene expression but does not rescue the SAR phenotype. An RNAseq experiment with Col-0 and rpt2a-2 revealed weak or absent induction of defense genes in the proteasome mutant during priming. Thus, a functional 26S proteasome was found to be required for the induction of SAR, while compensatory mechanisms can still be initiated.
E3-ubiquitin ligases conduct the last step of substrate ubiquitination and thereby convey specificity to proteasomal protein turnover. Using RNAseq, 11 E3-ligases were found to be differentially expressed during priming in Col-0, of which plant U-box 54 (PUB54) and ariadne 12 (ARI12) were further investigated to gain a deeper understanding of their potential role during priming.
PUB54 was shown to be expressed during priming and/or triggering with virulent Pseudomonas. pub54-I and pub54-II mutants display local and systemic defense comparable to Col-0. The heavy-metal-associated protein 35 (HMP35) was identified as a potential substrate of PUB54 in yeast, which was verified in vitro and in vivo. PUB54 was shown to be an active E3-ligase exhibiting auto-ubiquitination activity and performing ubiquitination of HMP35. Proteasomal turnover of HMP35 was observed, indicating that PUB54 targets HMP35 for ubiquitination and subsequent proteasomal degradation. Furthermore, hmp35-I shows increased resistance in bacterial replication assays. Thus, HMP35 is potentially a negative regulator of defense that is targeted and ubiquitinated by PUB54 to regulate downstream defense signaling. ARI12 is transcriptionally activated during priming or triggering and hyperinduced during priming plus triggering. Its expression is not inducible by the defense-related hormone SA and is dampened in npr1 and fmo1 mutants, consequently depending on functional SA and Pip pathways, respectively. ARI12 accumulates systemically after priming with SA, Pip or Pseudomonas. ari12 mutants are not altered in resistance, but stable overexpression leads to increased resistance in local and systemic tissue. During priming and triggering, unbalanced ARI12 levels (i.e., knock-out or overexpression) lead to enhanced FMO1 activation, indicating a role of ARI12 in Pip-mediated SAR. ARI12 was shown to be an active E3-ligase with auto-ubiquitination activity, likely required for activation, with an identified ubiquitination site at K474. Potential substrates identified by mass spectrometry have not yet been verified by additional experiments but suggest an involvement of ARI12 in the regulation of ROS, in turn regulating Pip-dependent SAR pathways.
Thus, the data from this project provide strong indications of the involvement of the 26S proteasome in SAR and identify a central role of the two hitherto barely described E3-ubiquitin ligases PUB54 and ARI12 as novel components of plant defense.
Background: A growing body of research has documented negative effects of sexualization in the media on individuals’ self-objectification. This research is predominantly built on studies examining traditional media, such as magazines and television, and young female samples. Furthermore, longitudinal studies are scarce, and research on mediators of the relationship is missing. The first aim of the present PhD thesis was to investigate the relations between the use of sexualized interactive media and social media and self-objectification. The second aim of this work was to examine the presumed processes within understudied samples, such as males and females beyond college age, thus investigating the moderating roles of age and gender. The third aim was to shed light on possible mediators of the relation between sexualized media and self-objectification.
Method: The research aims were addressed within the scope of four studies. In an experiment, women’s self-objectification and body satisfaction were measured after playing a video game with a sexualized vs. a nonsexualized character that was either personalized or generic. The second study investigated the cross-sectional link between sexualized television use and self-objectification and consideration of cosmetic surgery in a sample of women across a broad age spectrum, examining the role of age in these relations. The third study looked at the cross-sectional link between male and female sexualized images on Instagram and their associations with self-objectification among a sample of male and female adolescents. Using a two-wave longitudinal design, the fourth study examined sexualized video game and Instagram use as predictors of adolescents’ self-objectification. Path models were conceptualized for the second, third and fourth studies, in which media use predicted body surveillance via appearance comparisons (Study 4), thin-ideal internalization (Studies 2, 3, 4), muscular-ideal internalization (Studies 3, 4), and valuing appearance (all studies).
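The mediation logic of such path models (media use predicting body surveillance via an internalization variable) can be sketched on synthetic data; the variable names and effect sizes below are invented purely for illustration, not taken from the studies.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# synthetic mediation data: media use -> internalization -> surveillance
media = rng.normal(size=n)
internalization = 0.4 * media + rng.normal(size=n)                       # a-path
surveillance = 0.5 * internalization + 0.1 * media + rng.normal(size=n)  # b- and c'-paths

def ols_slope(x, y):
    """Slope of y regressed on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(x, m, y):
    """a*b estimate: (x -> m slope) times (m -> y slope controlling for x)."""
    a = ols_slope(x, m)
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

est = indirect_effect(media, internalization, surveillance)
# percentile bootstrap confidence interval for the indirect effect
boot = [indirect_effect(*(arr[idx] for arr in (media, internalization, surveillance)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~{est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The true indirect effect here is 0.4 × 0.5 = 0.2; the bootstrap interval is the standard way to test such indirect paths in mediation analysis.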
Results: The results of the experimental study revealed no effect of sexualized video game characters on women’s self-objectification and body satisfaction. No moderating effect of personalization emerged. Sexualized television use was associated with consideration of cosmetic surgery via body surveillance and valuing appearance for women of all ages in Study 2, while no moderating effect of age was found. Study 3 revealed that seeing sexualized male images on Instagram was indirectly associated with higher body surveillance via muscular-ideal internalization for boys and girls. Sexualized female images were indirectly linked to higher body surveillance via thin-ideal internalization and valuing appearance over competence only for girls. The longitudinal analysis of Study 4 showed no moderating effect of gender: for boys and girls, sexualized video game use at T1 predicted body surveillance at T2 via appearance comparisons, thin-ideal internalization and valuing appearance over competence. Furthermore, the use of sexualized Instagram images at T1 predicted body surveillance at T2 via valuing appearance.
Conclusion: The findings show that sexualization in the media is linked to self-objectification across a variety of media formats and within diverse groups of people. While the longitudinal study indicates that sexualized media predict self-objectification over time, the experimental null findings warrant caution regarding this temporal order. The results demonstrate that several mediating variables might be involved in this link. Possible implications for research and practice, such as intervention programs and policy-making, are discussed.
Media artists have been struggling for financial survival ever since media art came into being. The non-material value of the artwork, a provocative attitude towards the traditional art world, and the originally anti-capitalist mindset of the movement make it particularly difficult to provide a constructive solution. However, a cultural entrepreneurial approach can be used to build a framework to find a balance between culture and business while ensuring that the cultural mission remains the top priority.
We investigate models for incremental binary classification, an example of supervised online learning. Our starting point is a model for human and machine learning suggested by E. M. Gold.
In the first part, we consider incremental learning algorithms that use all of the available binary labeled training data to compute the current hypothesis. For this model, we observe that the algorithm can be assumed to always terminate and that the distribution of the training data does not influence learnability. This remains true if we pose additional delayable requirements, i.e., requirements that remain valid despite a hypothesis output being delayed in time. Additionally, we consider the non-delayable requirement of consistent learning. Our corresponding results underpin the claim that delayability is a suitable structural property for describing and collectively investigating a major part of learning success criteria. Our first theorem states the pairwise implications or incomparabilities between an established collection of delayable learning success criteria, the so-called complete map. In particular, the learning algorithm can be assumed to change its last hypothesis only if it is inconsistent with the current training data. Such learning behaviour is called conservative.
By referring to learning functions, we obtain a hierarchy of approximative learning success criteria. Here we allow an increasing finite number of errors of the hypothesized concept, as output by the learning algorithm, compared with the concept to be learned. Moreover, we observe a duality depending on whether vacillations between infinitely many different correct hypotheses are still considered successful learning behaviour. This contrasts with the vacillatory hierarchy for learning from solely positive information.
We also consider a hypothesis space located between the two most common hypothesis space types in the nearby relevant literature and provide the complete map.
In the second part, we model more efficient learning algorithms. These update their hypothesis based on the current datum and without direct access to past training data. We focus on iterative (hypothesis-based) and BMS (state-based) learning algorithms. Iterative learning algorithms use the last hypothesis and the current datum to infer the new hypothesis.
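A toy example of a conservative iterative learner: the hypothesis class consists of the hypothetical threshold concepts {x : x ≤ t} over the integers, learned from a stream of binary labeled data. The learner keeps only its last hypothesis and the current datum, and changes the hypothesis only when the datum is misclassified (conservativeness).

```python
def update(t, datum):
    """Conservative iterative update for threshold concepts {x : x <= t}.

    Uses only the last hypothesis t and the current labeled datum;
    the hypothesis changes only if it misclassifies the datum.
    """
    x, label = datum
    if label == 1 and x > t:      # positive example outside the hypothesis
        return x
    if label == 0 and x <= t:     # negative example inside the hypothesis
        return x - 1
    return t                      # datum consistent: keep hypothesis

# target concept: {x : x <= 5}
stream = [(2, 1), (9, 0), (5, 1), (7, 0), (6, 0), (3, 1), (5, 1)]
t = -1  # initial hypothesis: the empty concept
for datum in stream:
    t = update(t, datum)
print(t)  # settles on 5 for this stream
```

Once the hypothesis equals the target threshold, no further datum triggers a change, so on any fair presentation of the labeled data the learner converges; richer concept classes are exactly where the thesis' separation results come into play.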
Past research analyzed, for example, the above-mentioned pairwise relations between delayable learning success criteria when learning from purely positive training data. We compare delayable learning success criteria with respect to iterative learning algorithms, for learning from either exclusively positive or binary labeled data. The existence of concept classes that can be learned by an iterative learning algorithm but not in a conservative way had already been observed, showing that conservativeness is restrictive. An additional requirement arising from cognitive science research is non-U-shapedness, stating that the learning algorithm never abandons a correct hypothesis. We show that forbidding U-shapes also restricts iterative learners from binary labeled data.
To compute the next hypothesis, BMS learning algorithms refer to the currently observed datum and the current state of the learning algorithm. For learning algorithms equipped with an infinite number of states, we provide the complete map. A learning success criterion is semantic if it still holds when the learning algorithm outputs other parameters standing for the same classifier. Syntactic (non-semantic) learning success criteria, for example conservativeness and syntactic non-U-shapedness, restrict BMS learning algorithms. For proving the equivalence of the syntactic requirements, we refer to witness-based learning processes. In these, every change of the hypothesis is justified by a witness from the training data that is correctly classified later on. Moreover, for every semantic delayable learning requirement, iterative and BMS learning algorithms are equivalent. In case the considered learning success criterion incorporates syntactic non-U-shapedness, BMS learning algorithms can learn more concept classes than iterative learning algorithms.
The proofs are combinatorial, inspired by investigations of formal languages, or employ results from computability theory, such as infinite recursion theorems (fixed point theorems).
During sentence reading the eyes quickly jump from word to word to sample visual information with the high acuity of the fovea. Lexical properties of the currently fixated word are known to affect the duration of the fixation, reflecting an interaction of word processing with oculomotor planning. While low-level properties of words in the parafovea can likewise affect the current fixation duration, results concerning the influence of lexical properties have been ambiguous (Drieghe, Rayner, & Pollatsek, 2008; Kliegl, Nuthmann, & Engbert, 2006). Experimental investigations of such lexical parafoveal-on-foveal effects using the boundary paradigm have instead shown that lexical properties of parafoveal previews affect fixation durations on the upcoming target words (Risse & Kliegl, 2014). However, the results were potentially confounded with effects of preview validity.
The notion of parafoveal processing of lexical information challenges extant models of eye movements during reading. Models containing serial word processing assumptions have trouble explaining such effects, as they usually couple successful word processing to saccade planning, resulting in skipping of the parafoveal word. Although models with parallel word processing are less restricted, in the SWIFT model (Engbert, Longtin, & Kliegl, 2002) only processing of the foveal word can directly influence the saccade latency.
Here we combine the results of a boundary experiment (Chapter 2) with a predictive modeling approach using the SWIFT model, where we explore mechanisms of parafoveal inhibition in a simulation study (Chapter 4). We construct a likelihood function for the SWIFT model (Chapter 3) and utilize the experimental data in a Bayesian approach to parameter estimation (Chapter 3 & 4).
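The likelihood-based Bayesian estimation idea can be illustrated on a toy duration model: gamma-distributed "fixation durations" whose scale parameter is recovered with a random-walk Metropolis sampler. This is a stand-in sketch only, not the SWIFT likelihood itself; all parameter values are invented.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)
# toy "fixation durations" (ms) generated from a gamma model
true_shape, true_scale = 8.0, 25.0
data = rng.gamma(true_shape, true_scale, size=300)

def log_likelihood(scale, shape=8.0):
    """Gamma log-likelihood of the durations for a given scale parameter."""
    if scale <= 0:
        return -np.inf
    return np.sum((shape - 1) * np.log(data) - data / scale) \
        - len(data) * (shape * np.log(scale) + lgamma(shape))

# random-walk Metropolis sampling of the posterior (flat prior on scale)
samples, scale = [], 20.0
ll = log_likelihood(scale)
for _ in range(5000):
    prop = scale + rng.normal(0, 1.0)          # propose a nearby scale
    ll_prop = log_likelihood(prop)
    if np.log(rng.random()) < ll_prop - ll:    # Metropolis acceptance step
        scale, ll = prop, ll_prop
    samples.append(scale)
post = np.array(samples[1000:])                # discard burn-in
print(f"posterior mean scale ~{post.mean():.1f} (true {true_scale})")
```

For SWIFT the likelihood is far more involved (it must score sequences of fixation positions and durations under the model's dynamics), but the estimation loop follows the same pattern: propose parameters, evaluate the likelihood of the observed eye movement data, accept or reject.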
The experimental results show a substantial effect of parafoveal preview frequency on fixation durations on the target word, which can be clearly distinguished from the effect of preview validity. Using the eye movement data from the participants, we demonstrate the feasibility of the Bayesian approach even for a small set of estimated parameters, by comparing summary statistics of experimental and simulated data. Finally, we show that the SWIFT model can account for the lexical preview effects when a mechanism for parafoveal inhibition is added. The effects of preview validity were modeled best when processing-dependent saccade cancellation was added for invalid trials. In the simulation study, only the control condition of the experiment was used for parameter estimation, allowing for cross-validation. Simultaneously, the number of free parameters was increased. High correlations of summary statistics demonstrate the capabilities of the parameter estimation approach. Taken together, the results advocate for a better integration of experimental data into computational modeling via parameter estimation.
To achieve a sustainable energy economy, it is necessary to turn away from the combustion of fossil fuels as a means of energy production and switch to renewable sources. However, their temporal availability does not match societal consumption needs, meaning that renewably generated energy must be stored at its main generation times and dispatched during peak consumption periods. Electrochemical energy storage (EES) in general is well suited due to its infrastructural independence and scalability. The lithium-ion battery (LIB) takes a special place among EES systems due to its energy density and efficiency, but the scarcity and uneven geological occurrence of minerals and ores vital for many cell components, and hence the high and fluctuating costs, will decelerate its further distribution.
The sodium-ion battery (SIB) is a promising successor to LIB technology, as the fundamental setup and cell chemistry are similar in the two systems. Yet the most widespread negative electrode material in LIBs, graphite, cannot be used in SIBs, as it cannot store sufficient amounts of sodium at reasonable potentials. Hence, another carbon allotrope, non-graphitizing or hard carbon (HC), is used in SIBs. This material consists of turbostratically disordered, curved graphene layers, forming regions of graphitic stacking and zones of deviating layers, so-called internal or closed pores.
The structural features of HC have a substantial impact on the charge-potential curve exhibited by the carbon when it is used as the negative electrode in an SIB. At defects and edges, an adsorption-like mechanism of sodium storage is prevalent, causing a sloping voltage curve ill-suited for practical application in SIBs, whereas a constant voltage plateau of relatively high capacity is found immediately after the sloping region, which recent research has attributed to the deposition of quasimetallic sodium into the closed pores of HC.
Literature on the general mechanism of sodium storage in HCs, and especially on the role of the closed pores, is abundant, but research into the influence of the pore geometry and chemical nature of the HC on low-potential sodium deposition is still at an early stage. Therefore, the scope of this thesis is to investigate these relationships using suitable synthetic and characterization methods. Materials of precisely known morphology, porosity, and chemical structure are prepared, in clear distinction to commonly obtained ones, and their impact on the sodium storage characteristics is observed. Electrochemical impedance spectroscopy in combination with distribution of relaxation times analysis is further established as a technique to study the sodium storage process, in addition to classical direct-current techniques, and an equivalent circuit model is proposed to qualitatively describe the HC sodiation mechanism based on the recorded data. The knowledge obtained is used to develop a method for the preparation of closed-porous and non-porous materials from open-porous ones, proving not only the necessity of closed pores for efficient sodium storage, but also providing a method for effective pore closure and hence an increase in the sodium storage capacity and efficiency of carbon materials.
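The equivalent-circuit idea can be sketched with a generic Randles-type circuit: a series resistance plus a charge-transfer resistance in parallel with a constant-phase element. The circuit topology and parameter values below are illustrative only, not the model proposed in the thesis for HC sodiation.

```python
import numpy as np

def randles_impedance(freq, R_s=10.0, R_ct=200.0, Q=1e-4, alpha=0.9):
    """Complex impedance (ohm) of R_s in series with (R_ct || CPE).

    The constant-phase element Z_CPE = 1 / (Q * (j*omega)^alpha) models
    a non-ideal (distributed) capacitance at the electrode interface.
    """
    omega = 2 * np.pi * np.asarray(freq)
    z_cpe = 1.0 / (Q * (1j * omega) ** alpha)
    return R_s + (R_ct * z_cpe) / (R_ct + z_cpe)

freq = np.logspace(5, -2, 50)   # sweep 100 kHz down to 10 mHz
Z = randles_impedance(freq)
# high-frequency limit -> R_s; low-frequency limit -> R_s + R_ct
print(Z[0].real, Z[-1].real)
```

Plotting -Z.imag against Z.real gives the familiar depressed semicircle of a Nyquist plot; fitting such a model to measured spectra (and analyzing the distribution of relaxation times) is what separates the individual processes contributing to the sodiation impedance.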
The insights obtained and methods developed within this work hence not only contribute to a better understanding of the sodium storage mechanism in the carbon materials of SIBs, but can also serve as guidance for the design of efficient electrode materials.
The construction of educational buildings is a topic of current debate in urban development and urban planning as well as in education. Many experts examine questions of good and successful school construction in studies. Society's demands on educational buildings change when all-day school formats are expected to provide not only instruction but also leisure-time care for pupils. At the same time, school is supposed to be a place of encounter and communication, of social learning and cooperation. School is in motion in many respects. To keep pace with these changes and demands, educational building construction repeatedly faces challenges. On the one hand, lighthouse projects are created; on the other, educational buildings continue to be built that do not meet current requirements and future developments.
This is where the present thesis comes in: it does not propose new norms for good school construction but asks, in a qualitative empirical study, about the pedagogical conceptions of those involved in educational building construction and about typical developments in the planning process. The present case study used the documentary method as its analytical procedure. The objects of investigation were two educational buildings of a large construction project. In the course of the analysis, the project structures and the interpretive patterns of the interviewed actors were examined, culminating in a combined presentation of results in the form of an action-structure framework.
Insights are given into the interrelations between the actions of those involved and the project structures, and into how they influence each other or change over the course of the process. The analysis shows that transfer problems between research and practice persist. Financial, temporal, and architectural structures carry particular weight in planning decisions. Only a few pedagogical conceptions or interpretive patterns are able to come into play.
Grant-funded start-up support services were an important element of university-based start-up promotion in the state of Brandenburg during the EU funding periods 2007-2013 and 2014-2020. Owing to the state's positive economic development, however, the funding volume steadily decreased over the same period, and a further reduction has already been decided for the EU funding period 2021-2027. As a consequence, without adjustments to the established funding structures, start-up support services at Brandenburg universities will be further reduced or eroded. This thesis therefore addresses, among other things, the question of how a theoretical reference model for grant-funded university start-up counseling can be designed to cope with reduced funding rates while maintaining the diversity of services.
To answer this question, the funding project BIEM Startup Navigator is used as the object of investigation. The start-up counseling project BIEM Startup Navigator was carried out at six Brandenburg universities from 2010 to 2014. With the help of the models and premises of principal-agent theory, a theoretical framework is first established, on the basis of which the empirical investigation is conducted. Principal-agent theory is used to identify the organizations, individuals, and institutions involved. Furthermore, the main problem areas and solution approaches of principal-agent theory are discussed for the investigation of the BIEM Startup Navigator.
In the course of the investigation, the concepts for implementing the funding project at six university locations and the data of 610 participants and 288 start-ups are analyzed in order to identify and describe logical relationships and interdependencies. Various theoretical assumptions on project effectiveness and efficiency, cost distribution, and conceptual design are formulated as 24 working hypotheses and applied to the investigation. The hypotheses are verified or falsified on the basis of the combined findings from the literature review and the results of the empirical investigation.
In the course of the thesis, the agency costs described by principal-agent theory are identified in the example of the BIEM Startup Navigator, and ex-post inefficiencies in the screening and signaling processes carried out are demonstrated.
The theoretical reference model for grant-funded start-up counseling at Brandenburg universities developed in this thesis is intended to make it possible to cope with decreasing EU funding without a simultaneous reduction of start-up support services at the universities. To this end, the theoretical reference model shows how the results of the empirical investigation can be used to reduce the agency costs of grant-funded start-up counseling.
What is HipHop?
(2021)
This dissertation is an investigative research study dealing with the dynamically changing phenomenon of HipHop. The author explains the continuing attractiveness of the cultural phenomenon of HipHop and seeks to account more precisely for its constant reproducibility. He therefore begins with a historical discourse analysis of HipHop culture, analyzing the forms, the protagonists, and the discourses of HipHop in order to understand it better. By working out the genuine property of HipHop's multiple codability, common explanatory patterns from academia and the media are qualified and criticized. In his study, the author combines literature from cultural studies and educational science with various current and historical depictions and images. Above all, image-based self-stagings of HipHop artists and self-testimonies from narrative interviews that he himself conducted with various HipHop artists in Germany are evaluated. Besides the narrative interviews, image interpretation following Bohnsack serves as the principal source for developing the thesis of multiple codability. Two images of the HipHop artists Lady Bitch Ray and Kollegah are interpreted following Bohnsack (2014), showing how HipHop, in addition to its lyrical and sonic components, is also staged and produced visually. From this it is concluded that HipHop makes it possible to present and convey contrary viewpoints while simultaneously applying typical cultural practices such as boasting. The constant openness of HipHop becomes evident in practices such as sampling or the battle, and the author explains that these techniques produce the generative property of multiple codability.
He thus advocates a kind of construction-kit theory, which holds that, in principle, anyone can draw from the HipHop kit according to preference, interest, and affinity. The variety of opinions on HipHop that the author obtains by coding the narrative interviews illustrates this thesis and makes clear that HipHop is more than just a fashion. Thanks to the openness it carries within itself, HipHop has the fundamental capacity to constantly reinvent itself and thus to grow in popularity. The present work thereby extends the ever-growing field of HipHop studies and sets important accents for further research and for making HipHop more comprehensible.
Despite their great importance for innovation policy, non-university research institutions (AUF) have rarely been the subject of empirical studies. None of the available works focuses on the collaboration of scientists in research teams, even though scientific collaboration is a largely unexplored field. This is surprising, since innovative and complex tasks such as those in research require both the creative potential of individuals and well-functioning cooperation between them. The collaboration of scientists at AUF takes place in a competitive environment. On the one hand, the AUF compete with each other at the organizational level for research funds and scientific personnel. On the other hand, the competitive acquisition of third-party funding is essential for scientists in order to deliver the achievements, measured by high-ranking publications and third-party funding rates, required for their own careers. A growing share of third-party funding in the institutions also affects personnel policy and the number of fixed-term employment contracts. At the same time, research funding is frequently tied to collaborations between scientists, and studies show that publications and research results are predominantly produced by several people. This tension between collaboration and competition is intensified by the lack of opportunities for early-career researchers to remain in academia. Even though the federal government is responding to these challenges, each individual must find his or her own way between collaboration and competition.
The objective of this thesis is to answer the following research questions:
1. How can natural-science research teams at AUF be characterized?
2. How does the individual researcher act in the tension between cooperation and competition?
3. What potentials and obstacles for the successful work of research teams at AUF can be identified at the individual, team, and environmental levels?
To answer these research questions, an empirical mixed-methods study was conducted, consisting of a Germany-wide online survey of 574 natural scientists at AUF and qualitative interviews with 122 team members from 20 natural-science research teams at AUF.
The results show that the teams can rather be described as working groups, since, especially in basic research, there is no common goal but rather a common thematic framework within which the researchers pursue their individual goals. Work in the team is predominantly described as positive and cooperative and is characterized above all by mutual support with problems rather than by a joint thematic process of scientific discovery; the latter takes place in small subgroups within the working group and, above all, in close coordination with the team leader (TL). Organizational conditions such as fixed-term contracts and the career bottleneck are cited as intensifying competition.
The TL occupies the central role in the team, bears scientific, financial, and personnel responsibility, and must meet the demands of the organization. Doctoral researchers concentrate almost exclusively on their qualification work. Among postdocs, a tension is apparent, since they pursue their own projects and goals alongside the demands of the TL. The gatekeeper function of the TL is reinforced by her or his role in passing on career-relevant information within the team, e.g. about upcoming conferences. The TL has the important contacts, connects the team, and is responsible for maintaining the network. Early-career researchers rely heavily on the TL's support for their tasks and for career-relevant factors. Non-scientific staff should be given greater consideration, both in their function within the teams and in the organization as a whole: they are the central contact persons for the scientific personnel and ensure continuity in the storage and transfer of knowledge. The organizations, in turn, need to create supportive framework, working, and task conditions for the TL and to support early-career researchers in taking early responsibility for scientific and career-relevant tasks. This requires improved personnel development concepts and offerings. In addition, opportunities for cooperation within the institution and between the groups should be created, e.g. through open spaces and networking opportunities, and innovative working environments should be promoted in order to establish new forms of an innovation-friendly scientific culture.
Detecting and categorizing particular entities in the environment are important visual tasks that humans have had to solve at various points in our evolutionary history. The question arises whether characteristics of entities that were of ecological significance for humans play a particular role during the development of visual categorization.
The current project addressed this question by investigating the effects of developing visual abilities, visual properties, and ecological significance on categorization early in life. Our stimuli were monochromatic photographs of structure-like assemblies and surfaces taken from three categories: vegetation, non-living natural elements, and artifacts. A set of computational and rated visual properties was assessed for these stimuli. Three empirical studies applied coherent research concepts and methods in young children and adults, comprising (a) two card-sorting tasks with preschool children (age: 4.1-6.1 years) and adults (age: 18-50 years), which assessed classification and similarity judgments, and (b) a gaze-contingent eye-tracking search task, which investigated the impact of visual properties and category membership on 8-month-olds' ability to segregate visual structure. Because eye tracking with infants still poses challenges, a methodological study (c) assessed the effect of infant eye-tracking procedures on data quality with 8- to 12-month-old infants and adults.
In the categorization tasks we found that category membership and visual properties impacted the performance of all participant groups. Sensitivity to the respective categories varied between tasks and across the age groups. For example, artifact images hindered infants' visual search but were classified best by adults, whereas sensitivity to vegetation was highest during similarity judgments. Overall, preschool children relied less on visual properties than adults, but some properties (e.g., rated depth, shading) were drawn upon similarly strongly. In children and infants, depth predicted task performance more strongly than shape-related properties did. Moreover, children and infants were sensitive to variations in the complexity of low-level visual statistics. These results suggest that the classification of visual structures, and attention to particular visual properties, is affected by the functional or ecological significance these categories and properties may have for each of the respective age groups.
Based on this, the project highlights the importance of further developmental research on visual categorization with naturalistic, structure-like stimuli. As intended with the current work, this would allow forging important links between developmental and adult research.
Botulinum neurotoxin (BoNT) is produced by the anaerobic bacterium Clostridium botulinum. It is one of the most potent toxins found in nature and can enter motor neurons (MN) to cleave proteins necessary for neurotransmission, resulting in flaccid paralysis. The toxin has applications in both traditional and esthetic medicine. Since BoNT activity varies between batches despite identical protein concentrations, the activity of each lot must be assessed. The gold-standard method is the mouse lethality assay, in which mice are injected with a BoNT dilution series to determine the dose at which half of the animals die of peripheral asphyxia. Ethical concerns surrounding the use of animals in toxicity testing necessitate the creation of alternative model systems to measure the potency of BoNT.
Prerequisites for a successful model are that it is human-specific, that it monitors the complete toxic pathway of BoNT, and that it is highly sensitive, at least in the range of the mouse lethality assay. One model system was developed by our group, in which human SIMA neuroblastoma cells were genetically modified to express a reporter protein (GLuc), which is packaged into neurosecretory vesicles and which, upon cellular depolarization, can be released, or inhibited by BoNT, simultaneously with neurotransmitters. This assay has great potential, but carries the inherent disadvantages that the GLuc sequence was randomly inserted into the genome and that the tumor cells have only limited sensitivity and specificity to BoNT. This project aims to remedy these deficits: induced pluripotent stem cells (iPSCs) were genetically modified by the CRISPR/Cas9 method to insert the GLuc sequence into the AAVS1 genomic safe harbor locus, precluding genetic disruption through non-specific integrations. Furthermore, GLuc was modified to associate with signal peptides that direct it to the lumen of both large dense core vesicles (LDCV), which transport neuropeptides, and synaptic vesicles (SV), which package neurotransmitters. Finally, the modified iPSCs were differentiated into motor neurons (MNs), the true physiological target of BoNT and hypothetically the most sensitive and specific cells available for the MoN-Light BoNT assay.
iPSCs were transfected to incorporate one of three constructs to direct GLuc into LDCVs, one construct to direct GLuc into SVs, and one “no tag” GLuc control construct. The LDCV constructs fused GLuc with the signal peptides for proopiomelanocortin (hPOMC-GLuc), chromogranin-A (CgA-GLuc), and secretogranin II (SgII-GLuc), which are all proteins found in the LDCV lumen. The SV construct comprises a VAMP2-GLuc fusion sequence, exploiting the SV membrane-associated protein synaptobrevin (VAMP2). The no tag GLuc expresses GLuc non-specifically throughout the cell and was created to compare the localization of vesicle-directed GLuc.
The clones were characterized to ensure that the GLuc sequence was only incorporated into the AAVS1 safe harbor locus and that the signal peptides directed GLuc to the correct vesicles. The accurate insertion of GLuc was confirmed by PCR with primers flanking the AAVS1 safe harbor locus, capable of simultaneously amplifying wildtype and modified alleles. The PCR amplicons, along with an insert-specific amplicon from candidate clones were Sanger sequenced to confirm the correct genomic region and sequence of the inserted DNA. Off-target integrations were analyzed with the newly developed dc-qcnPCR method, whereby the insert DNA was quantified by qPCR against autosomal and sex-chromosome encoded genes. While the majority of clones had off-target inserts, at least one on-target clone was identified for each construct.
Finally, immunofluorescence was used to localize GLuc in the selected clones. In iPSCs, the vesicle-directed GLuc should travel through the Golgi apparatus along the neurosecretory pathway, while the no tag GLuc should not follow this pathway. Initial analyses excluded the CgA-GLuc and SgII-GLuc clones due to poor-quality protein visualization. The colocalization of GLuc with the Golgi was analyzed by confocal microscopy and quantified. GLuc was strongly colocalized with the Golgi in the hPOMC-GLuc clone (r = 0.85±0.09), moderately in the VAMP2-GLuc clone (r = 0.65±0.01), and, as expected, only weakly in the no tag GLuc clone (r = 0.44±0.10). Confocal microscopy of differentiated MNs was used to analyze the colocalization of GLuc with proteins associated with LDCVs and SVs, SgII in the hPOMC-GLuc clone (r = 0.85±0.08) and synaptophysin in the VAMP2-GLuc clone (r = 0.65±0.07). GLuc was also expressed in the same cells as the MN-associated protein, Islet1.
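The colocalization coefficients r reported above are Pearson correlations of pixel intensities between two fluorescence channels. A minimal sketch of that computation, with made-up toy intensities rather than the actual microscopy data:

```python
import math

def pearson_colocalization(channel_a, channel_b):
    """Pearson correlation coefficient between two fluorescence
    channels, given as flattened lists of pixel intensities.
    Values near 1 indicate strong colocalization, near 0 none,
    and negative values mutual exclusion."""
    n = len(channel_a)
    mean_a = sum(channel_a) / n
    mean_b = sum(channel_b) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(channel_a, channel_b))
    var_a = sum((a - mean_a) ** 2 for a in channel_a)
    var_b = sum((b - mean_b) ** 2 for b in channel_b)
    return cov / math.sqrt(var_a * var_b)

# Toy example: channel B closely tracks channel A (colocalized signal)
a = [10, 50, 200, 30, 120, 80]
b = [12, 55, 190, 35, 115, 78]
r = pearson_colocalization(a, b)   # close to 1 for colocalized signals
```

In practice, dedicated image-analysis software computes this over whole image stacks, often together with Manders' coefficients; the sketch only shows the statistic itself.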
A significant portion of GLuc was found in the correct cell type and compartment. However, in the MoN-Light BoNT assay, the hPOMC-GLuc clone could not be provoked to reliably release GLuc upon cellular depolarization. The depolarization protocol for hPOMC-GLuc must be further optimized to produce reliable and specific release of GLuc upon exposure to a stimulus. On the other hand, the VAMP2-GLuc clone could be provoked to release GLuc upon exposure to the muscarinic and nicotinic agonist carbachol. Furthermore, upon simultaneous exposure to the calcium chelator EGTA, the carbachol-provoked release of GLuc could be significantly repressed, indicating the detection of GLuc was likely associated with vesicular fusion at the presynaptic terminal. The application of the VAMP2-GLuc clone in the MoN-Light BoNT assay must still be verified, but the results thus far indicate that this clone could be appropriate for the application of BoNT toxicity assessment.
Due to global climate change, providing food security for a growing world population is a major challenge. Abiotic stressors in particular have a strong negative effect on crop yield. Developing climate-adapted crops requires a comprehensive understanding of the molecular alterations in the response to varying levels of environmental stress. High-throughput, or 'omics', technologies can help to identify key regulators and pathways of abiotic stress responses. Besides obtaining omics data, tools and statistical analyses also need to be designed and evaluated to obtain reliable biological results.
To address these issues, I conducted three different studies covering two omics technologies. In the first study, I used transcriptomic data from two polymorphic Arabidopsis thaliana accessions, Col-0 and N14, to evaluate seven computational tools for their ability to map and quantify Illumina single-end reads. Between 92% and 99% of the reads were mapped against the reference sequence. The raw count distributions obtained from the different tools were highly correlated, and a differential gene expression analysis between plants exposed to 20 °C or 4 °C (cold acclimation) yielded a large pairwise overlap between the mappers. In the second study, I obtained transcript data from ten different Oryza sativa (rice) cultivars by PacBio isoform sequencing, which can capture full-length transcripts. De novo reference transcriptomes were reconstructed, resulting in 38,900 to 54,500 high-quality isoforms per cultivar. Isoforms were collapsed to reduce sequence redundancy and evaluated, e.g. for protein completeness (BUSCO), transcript length, and the number of unique transcripts per gene locus. For the heat- and drought-tolerant aus cultivar N22, I identified around 650 unique and novel transcripts, of which 56 were significantly differentially expressed in developing seeds during combined drought and heat stress. In the last study, I measured and analyzed the changes in the metabolite profiles of eight rice cultivars exposed to high night temperature (HNT) stress and grown during the dry and wet seasons in the field in the Philippines. Season-specific changes in metabolite levels as well as in agronomic parameters were identified, and metabolic pathways causing the yield decline under HNT conditions were suggested.
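At its core, the differential-expression comparison described above reduces to normalized per-gene fold changes between conditions. The sketch below illustrates that quantification step under the assumption of simple counts-per-million normalization, with invented gene names and counts; real analyses use dedicated tools such as DESeq2 or edgeR with proper statistical testing.

```python
import math

def log2_fold_changes(counts_a, counts_b, pseudocount=1.0):
    """Per-gene log2 fold change of condition A over condition B
    after library-size (counts-per-million) normalization.
    A pseudocount avoids division by zero for unexpressed genes.
    Gene names and counts in the example are hypothetical."""
    lib_a = sum(counts_a.values())
    lib_b = sum(counts_b.values())
    lfc = {}
    for gene in counts_a:
        cpm_a = counts_a[gene] / lib_a * 1e6
        cpm_b = counts_b[gene] / lib_b * 1e6
        lfc[gene] = math.log2((cpm_a + pseudocount) / (cpm_b + pseudocount))
    return lfc

# Hypothetical raw counts: cold-acclimated (4 °C) vs control (20 °C)
cold = {"CBF1": 800, "ACT2": 500, "OTHER": 8700}
warm = {"CBF1": 100, "ACT2": 500, "OTHER": 9400}
changes = log2_fold_changes(cold, warm)
# The cold-induced gene shows a positive log2 fold change,
# the housekeeping gene stays near zero.
```

A mapper comparison like the one in the first study would then ask how stable such fold-change lists remain when the upstream count table comes from different alignment tools.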
In conclusion, the comparison of mapper performances can help plant scientists decide on the right tool for their data. The de novo reconstruction of transcriptomes for rice cultivars without a genome sequence provides a targeted, cost-efficient approach to identifying novel stress-responsive genes that is applicable to any organism. With the metabolomics approach for HNT stress in rice, I identified stress- and season-specific metabolites that might be used as molecular markers for crop improvement in the future.
Polymeric films and coatings derived from semi-crystalline oligomers are of relevance for medical and pharmaceutical applications. In this context, the material surface is of particular importance, as it mediates the interaction with the biological system. Two-dimensional (2D) systems and ultrathin films are used to model this interface. However, conventional techniques for their preparation, such as spin coating or dip coating, have disadvantages, since the morphology and chain packing of the generated films can be controlled only to a limited extent, and adsorption on the substrate used affects the behavior of the films. Detaching and transferring films prepared by such techniques requires additional sacrificial or supporting layers, and free-standing or self-supporting domains are usually of very limited lateral extension. The aim of this thesis is to study and modulate crystallization, melting, degradation, and chemical reactions in ultrathin films of oligo(ε-caprolactone)s (OCLs) with different end-groups under ambient conditions. Here, oligomeric ultrathin films are assembled at the air-water interface using the Langmuir technique. The water surface allows lateral movement and aggregation of the oligomers, which, unlike solid substrates, enables dynamic physical and chemical interaction of the molecules. Parameters like surface pressure (π), temperature, and mean molecular area (MMA) allow controlled assembly and manipulation of oligomer molecules when using the Langmuir technique. The π-MMA isotherms, Brewster angle microscopy (BAM), and interfacial infrared spectroscopy assist in detecting morphological and physicochemical changes in the film. Ultrathin films can easily be transferred to a solid silicon surface via the Langmuir-Schaefer (LS) method (horizontal substrate dipping). Here, the films transferred onto silicon are investigated using atomic force microscopy (AFM) and optical microscopy and are compared to the films on the water surface.
The semi-crystalline morphology (lamellar thickness, crystal number density, and lateral crystal dimensions) is tuned by the chemical structure of the OCL end-groups (hydroxy or methacrylate) and by the crystallization temperature (Tc; 12 or 21 °C) or the MMA. Compression to a low MMA of ~2 Å² results in the formation of a highly crystalline film consisting of tightly packed single crystals. The preparation of tightly packed single crystals on a cm² scale is not possible by conventional techniques. Upon transfer to a solid surface, these films retain their crystalline morphology, whereas amorphous films undergo dewetting.
The melting temperature (Tm) of OCL single crystals at the water and at the solid surface is found to be proportional to the inverse crystal thickness and is generally lower than the Tm of bulk PCL. The impact of the OCL end-groups on the melting behavior is most noticeable at the air-solid interface, where the methacrylate-end-capped OCL (OCDME) melted at lower temperatures than the hydroxy-end-capped OCL (OCDOL). Comparing the underlying substrates, melting/recrystallization of OCL ultrathin films is possible at lower temperatures at the air-water interface than at the air-solid interface, where recrystallization is not observed. Recrystallization at the air-water interface usually occurs at a higher temperature than the initial Tc.
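The proportionality of Tm to the inverse crystal thickness noted above is commonly rationalized by the Gibbs-Thomson relation for lamellar polymer crystals; a standard textbook form (symbols chosen here for illustration, not taken from the source) is:

```latex
T_m(l) \;=\; T_m^0 \left( 1 - \frac{2\,\sigma_e}{\Delta H_f\, \rho_c\, l} \right)
```

where \(l\) is the lamellar thickness, \(T_m^0\) the equilibrium melting temperature of an infinitely thick crystal, \(\sigma_e\) the fold-surface free energy, \(\Delta H_f\) the specific melting enthalpy, and \(\rho_c\) the crystal density. Thinner lamellae thus melt at lower temperatures, consistent with the depressed Tm of the ultrathin OCL films relative to bulk PCL.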
Controlled degradation is crucial for the predictable performance of degradable polymeric biomaterials. Degradation of the ultrathin films was carried out under acidic (pH ~1) or enzymatic catalysis (lipase from Pseudomonas cepacia) on the water surface or on a silicon surface as transferred films. A high crystallinity strongly reduces the hydrolytic but not the enzymatic degradation rate. Regarding the influence of the end-groups, the methacrylate-end-capped linear oligomer OCDME (~85 ± 2 % end-group functionalization) degrades hydrolytically faster than the hydroxy-end-capped linear oligomer OCDOL (~95 ± 3 % end-group functionalization) at different temperatures. Differences in the acceleration of the hydrolytic degradation of semi-crystalline films were observed upon complete melting, partial melting of the crystals, or heating to temperatures close to Tm. Films of densely packed single crystals are therefore suitable as barrier layers with thermally switchable degradation rates.
Chemical modification in ultrathin films is an intricate process applicable to connecting functionalized molecules, imparting stability, or creating stimuli-sensitive cross-links. The reaction of the end-groups was explored for transferred single crystals on a solid surface and for amorphous monolayers at the air-water interface. Bulky methacrylate end-groups are expelled to the crystal surface during chain-folded crystallization. The density of end-groups is inversely proportional to the molecular weight and hence very pronounced for oligomers. The methacrylate end-groups at the crystal surface, present at high concentration, can be used for further chemical functionalization, as demonstrated by fluorescence microscopy after reaction with fluorescein dimethacrylate. The thermoswitching behavior (melting and recrystallization) of fluorescein-functionalized single crystals shows the temperature-dependent distribution of the chemically linked fluorescein moieties, which accumulate on the crystal surfaces and disperse homogeneously when the crystals are molten. In amorphous monolayers at the air-water interface, reversible cross-linking of hydroxy-terminated oligo(ε-caprolactone) monolayers with a dialdehyde (glyoxal) led to the formation of 2D networks. A pronounced contraction in area occurred for the 2D OCL films depending on surface pressure and time, indicating the reaction progress. Cross-linking inhibited crystallization and retarded the enzymatic degradation of the OCL film. Altering the subphase pH to ~2 led to cleavage of the covalent acetal cross-links. Besides serving as model systems, these reversibly cross-linked films are applicable to drug delivery systems or cell substrates modulating adhesion at biointerfaces.
Innerhalb dieser Arbeit erfolgte die erstmalige systematische Untersuchung von Vinylsulfonsäureethylester (1a), Phenylvinylsulfon (1b), N-Benzyl-N-methylethensulfonamid (1c) in der FUJIWARA-MORITANI Reaktion (alternativ als DHR bezeichnet). Bei dieser übergangsmetallkatalysierten Reaktion erfolgt der Aufbau einer neuen C-C-Bindung unter der doppelten Aktivierung einer C-H-Bindung. Somit kann ein atomökonomischer Aufbau von Molekülen realisiert werden, da keine Beiprodukte in Form von Salzen entstehen. Als aromatischer Reaktant wurden Acetanilide (2) verwendet, damit eine regiospezifische Kupplung durch die katalysatordirigierende Acetamid-Gruppe (CDG) erfolgt. Für die Pd-katalysierte DHR wurde eine umfangreiche Optimierung durchgeführt und anschließend konnten neun verschieden, substituierte 2 mit 1a und sieben verschieden, substituierte 2 mit 1b funktionalisiert werden. Da eine Reaktion mit 1c ausblieb, erfolgte ein Wechsel auf eine Ru-katalysierte Methode für die DHR. Mit dieser Methode konnte 1c mit Acetaniliden funktionalisiert werden und das Spektrum der verwendeten 2, in Form von deaktivierenden Substituenten erweitert werden.
Im Anschluss wurden die sulfalkenylierten Acetanilide in weiterführenden Reaktionen untersucht. Hierfür wurde eine Reaktionssequenz bestehend aus einer DeacetylierungDiazotierung-Kupplungsreaktion verwendet, um die Acetamid-Gruppe in eine Abgangsgruppe zu überführen und danach in einer MATSUDA-HECK Reaktion zu kuppeln. Mit dieser Methode konnten mehrere 1,2-Dialkenylbenzole erhalten werden und die CDG ein weiteres Mal genutzt werden. Neben der Überführung der CDG in eine Abgangsgruppe konnte diese auch in die Synthese verschiedener Heterozyklen integriert werden. Dafür erfolgte zunächst eine 1,3-Zykloaddition durch deprotonierten Tosylmethylisocanid an der elektronenarmen Sulfalkenylgruppe zur Synthese von Pyrrolen. Anschließend erfolgte eine Kupplung der PyrrolFunktion und der CDG durch Zyklokondensation, wodurch Quinoline dargestellt wurden. Durch diese Synthesen konnten Schwefelanaloga des Naturstoffes Marinoquionolin A erhalten werden.
Another transition-metal-catalyzed C-H activation reaction, the MATSUDA-HECK reaction, was used to arylate 1b with variously substituted diazonium salts, and numerous styrenyl sulfones were obtained. The successful use of the vinylsulfonyl compounds in cross metathesis could not be achieved within this work. Therefore, various dialkenylated sulfonamides were synthesized, with the chain length of the alkenyl group varied between 2-3 carbons at the sulfur and 3-4 carbons at the nitrogen. The dialkenylated sulfonamides were then employed in the previously investigated C-H activation methods.
N-Allyl-N-phenylethenesulfonamide (3) was successfully functionalized in the DHR and the HECK reaction. A method-specific coupling occurred depending on the electron density of the respective alkenyl group: the DHR led to selective arylation of the vinyl group, whereas the HECK reaction arylated the allyl group. Mixed products were not obtained. For the other diolefins, complex product mixtures were obtained. Furthermore, the diolefins were examined in ring-closing metathesis, and the corresponding sultams were obtained in very good yields. The use of the sultams in C-H activation was unsuccessful; it is assumed that the existing reaction conditions would have to be optimized for these doubly substituted sulfonamides.
Finally, various enantiomerically pure olefins were prepared starting from levoglucosenone. For this purpose, levoglucosenone was first reacted with an allyl and a 3-butenyl Grignard reagent; the corresponding products were obtained in moderate yields. A further route began with the reduction of levoglucosenone to levoglucosenol. This alcohol was successfully etherified with allyl bromide. In addition to the studies on ether synthesis, levoglucosenol was esterified with various sulfonyl chlorides to give the corresponding sulfonic acid esters. These olefins were examined in a domino metathesis reaction. Starting from the allyl levoglucosenyl ether, a dihydrofuran was prepared.
The present work focuses on minimising the use of toxic chemicals by integrating biobased monomers, derived from fatty acid esters, into photopolymerization processes, which are considered environmentally friendly. The internal double bond of oleic acid was converted into a more reactive (meth)acrylate or epoxy group. The biobased starting materials, functionalized with different pendant groups, were used in photopolymerizable formulations to design new polymeric structures under an ultraviolet light emitting diode (UV-LED, 395 nm) via free radical or cationic polymerization.
New (meth)acrylates (2, 3 and 4), each consisting of two isomers, methyl 9-((meth)acryloyloxy)-10-hydroxyoctadecanoate / methyl 9-hydroxy-10-((meth)acryloyloxy)octadecanoate (2 and 3) and methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4), derived from an oleic acid mixture, and ionic liquid monomers (1a and 1b) bearing a long alkyl chain were polymerized photochemically. The new (meth)acrylates are based on vegetable oil, and the ionic liquids (ILs) are nonvolatile; both monomer types therefore follow a green approach. Photoinitiated polymerization of the new (meth)acrylates and ionic liquids was investigated in the presence of ethyl (2,4,6-trimethylbenzoyl) phenylphosphinate (Irgacure® TPO−L) or di(4-methoxybenzoyl)diethylgermane (Ivocerin®) as photoinitiator (PI). Additionally, the results were compared with those obtained for commercial 1,6-hexanediol di(meth)acrylate (5 and 6) to assess the potential of the biobased monomers to substitute petroleum-derived materials with renewable resources in possible coating applications. The kinetic study shows that methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4) and the ionic liquids (1a and 1b) reach quantitative conversion upon irradiation, which is important for practical applications. On the other hand, heat generation extends over a longer time during the polymerization of the biobased systems and ILs.
The poly(meth)acrylates derived from (meth)acrylated fatty acid methyl ester monomers generally show a low glass transition temperature because of the long aliphatic chains in the polymer structure, whereas poly(meth)acrylates containing aromatic groups have higher glass transition temperatures. Therefore, the new monomer 4-(4-methacryloyloxyphenyl)-butan-2-one (7) was synthesized as a promising candidate for green techniques such as light-induced polymerization. The photokinetics of 4-(4-methacryloyloxyphenyl)-butan-2-one (7) were investigated using Irgacure® TPO−L or Ivocerin® as photoinitiator, and its reactivity was compared to commercial 2-phenoxyethyl methacrylate (8) and phenyl methacrylate (9) on the basis of the differences in monomer structure. With quantitative conversion, high molecular weight, and a higher glass transition temperature, the photopolymer of 4-(4-methacryloyloxyphenyl)-butan-2-one (7) might be an interesting candidate for coating applications.
In addition to the linear systems based on renewable materials, new crosslinked polymers were also designed in this thesis. For this purpose, an isomer mixture consisting of ethane-1,2-diyl bis(9-methacryloyloxy-10-hydroxy octadecanoate), ethane-1,2-diyl 9-hydroxy-10-methacryloyloxy-9’-methacryloyloxy-10’-hydroxy octadecanoate and ethane-1,2-diyl bis(9-hydroxy-10-methacryloyloxy octadecanoate) (10), not previously described in the literature, was synthesized by derivatization of oleic acid. Crosslinked material based on this biobased monomer was produced by photoinitiated free radical polymerization using Irgacure® TPO−L or Ivocerin® as photoinitiator. Furthermore, the material properties were diversified by copolymerization of 10 with 4-(4-methacryloyloxyphenyl)-butan-2-one (7) or methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4). The influence of comonomers with different chemical structures on the network system was investigated by analysis of the thermo-mechanical properties, the crosslink density, and the molecular weight between two crosslink junctions. An increase in the glass transition temperature caused by copolymerization of the biobased monomer 10 with an excess of 4-(4-methacryloyloxyphenyl)-butan-2-one (7) was confirmed by both differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA). On the other hand, the crosslink density decreased as a result of the copolymerization reactions due to the reduction in the mean functionality of the system. Furthermore, the surfaces were characterized by contact angle measurements using solvents of different polarity.
This work also contributes to the limited data reported on the cationic photopolymerization of epoxidized vegetable oils, which contrasts with the extensively investigated thermal curing of biorenewable epoxy monomers. In addition to 9,10-epoxystearic acid methyl ester (11), the new monomer bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) was synthesized from oleic acid. These two biobased epoxies were polymerized via cationic photoinitiated polymerization in the presence of bis(t-butyl)-iodonium-tetrakis(perfluoro-t-butoxy)aluminate ([Al(O-t-C4F9)4]-) and isopropylthioxanthone (ITX) as the photoinitiating system. The polymerization kinetics of 11 and 12 were investigated and compared with those of the commercial monomers 3,4-epoxycyclohexylmethyl-3’,4’-epoxycyclohexane carboxylate (13), 1,4-butanediol diglycidyl ether (14), and the diglycidyl ether of bisphenol-A (15). Both biobased epoxies (11 and 12) showed higher conversion than the cycloaliphatic epoxy (13) and lower reactivity than 1,4-butanediol diglycidyl ether (14). Additional network systems were designed by copolymerization of 12 and 15 in different molar ratios (1:1; 1:5; 1:9). The results indicate that the final conversion depends on the polymerization rate as well as on physical processes such as vitrification during polymerization. Moreover, the low glass transition temperature of the homopolymer derived from 12 was successfully increased by copolymerization with 15. On the other hand, the surface produced from 12 is hydrophobic, and a higher concentration of the biobased diepoxy (12) in the copolymerizing mixture decreases the surface free energy.
Network systems were also investigated according to the rubber elasticity theory. The crosslinked polymer derived from the mixture of bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) and the diglycidyl ether of bisphenol-A (15) (molar ratio 1:5) exhibits an almost ideal polymer network.
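A rubber-elasticity estimate of this kind can be sketched numerically with the standard affine relation ν = E'/(3RT). The modulus, temperature, and density values below are illustrative assumptions for demonstration, not figures from the thesis.

```python
# Back-of-the-envelope rubber elasticity: estimate crosslink density from
# the rubbery-plateau storage modulus via nu = E' / (3*R*T).
# E', T, and the polymer density are hypothetical example values.
R = 8.314          # gas constant, J/(mol*K)
E_rubbery = 3.0e6  # Pa, assumed storage modulus in the rubbery plateau
T = 298.15 + 60    # K, assumed measurement temperature well above Tg

nu = E_rubbery / (3 * R * T)   # crosslink density, mol/m^3
rho = 1100                     # assumed polymer density, kg/m^3
Mc = rho / nu                  # molecular weight between crosslinks, kg/mol

print(round(nu, 1), round(Mc, 3))
```

A denser network (larger ν, smaller Mc) generally corresponds to a stiffer material and a higher glass transition temperature.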
The Internet of Things (IoT) is a system of physical objects that can be discovered, monitored, controlled, or interacted with by electronic devices that communicate over various networking interfaces and can eventually be connected to the wider Internet [Guinard and Trifa, 2016]. IoT devices are equipped with sensors and/or actuators and may be constrained in terms of memory, computational power, network bandwidth, and energy. Interoperability, the ability of different types of systems to work together smoothly, helps to manage such heterogeneous devices. There are four levels of interoperability: physical, network and transport, integration, and data. Data interoperability is subdivided into syntactic and semantic interoperability. Semantic interoperability concerns the meaning of data and a common understanding of the vocabulary, e.g. with the help of dictionaries, taxonomies, or ontologies, and is necessary to achieve overall interoperability.
Many organizations and companies are working on standards and solutions for interoperability in the IoT. However, commercial solutions produce a vendor lock-in and focus on centralized approaches such as cloud-based solutions. This thesis proposes a decentralized approach, namely Edge Computing. Edge Computing is based on the concepts of mesh networking and distributed processing and has the advantage that information collection and processing are placed closer to the sources of this information. The goals are to reduce traffic and latency, and to be robust against a lossy or failed Internet connection.
We see the management of IoT devices from the network configuration management perspective. This thesis proposes a framework for network configuration management of heterogeneous, constrained IoT devices that uses semantic descriptions for interoperability. The MYNO framework is an acronym for MQTT, YANG, NETCONF and Ontology. The NETCONF protocol is the IETF standard for network configuration management, and the MQTT protocol is the de-facto standard in the IoT. We picked up the idea of the NETCONF-MQTT bridge, originally proposed by Scheffler and Bonneß [2017], and extended it with semantic device descriptions. These device descriptions describe the device capabilities; they are based on the oneM2M Base Ontology and formalized using the Semantic Web standards.
The novel approach uses an ontology-based device description directly on a constrained device in combination with the MQTT protocol. The bridge was extended in order to query such descriptions. With semantic annotation, the device capabilities become self-descriptive, machine-readable, and reusable.
The concept of a Virtual Device was introduced and implemented based on the semantic device descriptions. A Virtual Device aggregates the capabilities of all devices in the edge network and therefore contributes to scalability: it is possible to control all devices via a single RPC call.
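The aggregation behind a Virtual Device can be sketched as a simple capability registry that routes each RPC to the device advertising it. This is a hypothetical illustration: the class name, device identifiers, and dispatch format are invented, not the actual MYNO implementation.

```python
# Sketch: a Virtual Device as a capability registry plus RPC dispatcher.
# Device names and capability strings are invented for illustration.

class VirtualDevice:
    """Aggregates the capabilities of edge devices behind one RPC surface."""

    def __init__(self):
        self.capabilities = {}  # rpc name -> device id

    def register(self, device_id, capabilities):
        """Merge one device's advertised capabilities into the aggregate."""
        for rpc in capabilities:
            self.capabilities[rpc] = device_id

    def call(self, rpc, *args):
        """Route a single RPC call to the device that advertised it."""
        device_id = self.capabilities.get(rpc)
        if device_id is None:
            raise KeyError(f"no device advertises {rpc!r}")
        return f"forwarded {rpc}{args} to {device_id}"


vd = VirtualDevice()
vd.register("esp32-1", ["read-temperature", "read-humidity"])
vd.register("cc2538-1", ["toggle-led"])
print(vd.call("toggle-led", "on"))
```

In this sketch the caller never needs to know which physical device implements a capability, which is the scalability argument made above.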
The model-driven NETCONF Web-Client is generated automatically from the YANG model, which in turn is generated by the bridge from the semantic device descriptions. The Web-Client provides a user-friendly interface, offers RPC calls, and displays sensor values. We demonstrate the feasibility of this approach in different use cases: sensor and actuator scenarios as well as event configuration and triggering.
The semantic approach results in increased memory overhead. Therefore, we evaluated CBOR and RDF HDT for optimization of ontology-based device descriptions for use on constrained devices. The evaluation shows that CBOR is not suitable for long strings and RDF HDT is a promising candidate but is still a W3C Member Submission. Finally, we used an optimized JSON-LD format for the syntax of the device descriptions.
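What a compacted, JSON-LD-style device description might look like, and why serialization size matters on constrained devices, can be sketched as follows. The vocabulary, namespace URI, and field names are assumptions for illustration, not the actual oneM2M/MYNO terms.

```python
import json

# Illustrative JSON-LD-style device description; the "m2m" prefix, the
# namespace URI, and all field names are assumed for this sketch.
description = {
    "@context": {"m2m": "http://www.onem2m.org/ontology/Base_Ontology#"},
    "@id": "urn:dev:esp32-node-1",
    "@type": "m2m:Device",
    "m2m:hasService": [
        {"@type": "m2m:Service", "m2m:name": "read-temperature"},
        {"@type": "m2m:Service", "m2m:name": "toggle-led"},
    ],
}

# On constrained devices, payload size matters: compare a pretty-printed
# serialization with a compact one of the same description.
pretty = json.dumps(description, indent=2).encode()
compact = json.dumps(description, separators=(",", ":")).encode()
print(len(pretty), len(compact))
```

Compaction of this kind reduces the memory overhead discussed above without changing the semantic content of the description.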
One of the security tasks of network management is the distribution of firmware updates. The MYNO Update Protocol (MUP) was developed and evaluated on the constrained device CC2538dk over 6LoWPAN. The MYNO update process focuses on the freshness and authenticity of the firmware. The evaluation shows that it is challenging but feasible to deliver firmware updates to constrained devices using MQTT. As a new requirement for the next MQTT version, we propose a slicing feature for better support of constrained devices: the MQTT broker should slice data to the maximum packet size specified by the device and transfer it slice by slice.
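The proposed slicing feature can be sketched as plain functions for broker-side chunking and device-side reassembly. The packet size, the dummy firmware image, and the integrity check are illustrative assumptions, not the MUP wire format.

```python
import hashlib

# Sketch of broker-side slicing and device-side reassembly of a firmware
# image. Sizes and the hash check are assumptions for illustration.

def slice_payload(payload: bytes, max_packet: int):
    """Split a firmware image into slices no larger than max_packet."""
    return [payload[i:i + max_packet] for i in range(0, len(payload), max_packet)]

def reassemble(slices):
    """Device side: concatenate received slices back into the full image."""
    return b"".join(slices)

firmware = bytes(range(256)) * 40                 # 10,240-byte dummy image
digest = hashlib.sha256(firmware).hexdigest()     # published alongside the image

slices = slice_payload(firmware, max_packet=127)  # e.g. a 6LoWPAN-friendly size
image = reassemble(slices)

# The device verifies integrity of the reassembled image against the digest.
assert hashlib.sha256(image).hexdigest() == digest
print(len(slices), "slices")
```

In an actual protocol, each slice would additionally carry a sequence number so lost or reordered MQTT packets can be detected.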
For the performance and scalability evaluation of the MYNO framework, we set up a High Precision Agriculture demonstrator with 10 ESP-32 NodeMCU boards at the edge of the network. The ESP-32 NodeMCU boards, connected by WLAN, were equipped with six sensors and two actuators. The performance evaluation shows that the processing of ontology-based descriptions on a Raspberry Pi 3B with RDFLib is a challenging task in terms of computational power. Nevertheless, it is feasible because it must be done only once per device, during the discovery process.
The MYNO framework was tested with heterogeneous devices such as CC2538dk from Texas Instruments, Arduino Yún Rev 3, and ESP-32 NodeMCU, and IP-based networks such as 6LoWPAN and WLAN.
Summarizing, with the MYNO framework we could show that the semantic approach on constrained devices is feasible in the IoT.
Permafrost is warming globally, which leads to widespread permafrost thaw and impacts the surrounding landscapes, ecosystems and infrastructure. Especially ice-rich permafrost is vulnerable to rapid and abrupt thaw resulting from the melting of excess ground ice. Local remote sensing studies have detected increasing rates of abrupt permafrost disturbances, such as thermokarst lake change and drainage, coastal erosion and retrogressive thaw slumps (RTS), in the last two decades, all of which indicate an acceleration of permafrost degradation.
Retrogressive thaw slumps (RTS) in particular are abrupt disturbances that expand by up to several meters each year, impact local and regional topographic gradients, hydrological pathways, and sediment and nutrient mobilisation into aquatic systems, and increase permafrost carbon mobilisation. The feedback between abrupt permafrost thaw and the carbon cycle is a crucial component of the Earth system and a relevant driver in global climate models. However, an assessment of RTS at high temporal resolution to determine the dynamic thaw processes and identify the main thaw drivers, as well as a continental-scale assessment across diverse permafrost regions, are still lacking.
In northern high latitudes, optical remote sensing is restricted by environmental factors and frequent cloud coverage. This decreases image availability and thus constrains the application of automated algorithms for time series detection of large-scale abrupt permafrost disturbances at high temporal resolution. Since models and observations suggest that abrupt permafrost disturbances will intensify, we require continental-scale disturbance products that allow for meaningful integration into Earth system models.
The main aim of this dissertation, therefore, is to enhance our knowledge of the spatial extent and temporal dynamics of abrupt permafrost disturbances in a large-scale assessment. To address this, three research objectives were posed:
1. Assess the comparability and compatibility of Landsat-8 and Sentinel-2 data for a combined use in multi-spectral analysis in northern high latitudes.
2. Adapt an image mosaicking method for Landsat and Sentinel-2 data to create combined mosaics of high quality as input for high temporal disturbance assessments in northern high latitudes.
3. Automatically map retrogressive thaw slumps on the landscape-scale and assess their high temporal thaw dynamics.
We assessed the comparability of Landsat-8 and Sentinel-2 imagery by spectral comparison of corresponding bands. Based on overlapping same-day acquisitions of Landsat-8 and Sentinel-2 we derived spectral bandpass adjustment coefficients for North Siberia to adjust Sentinel-2 reflectance values to resemble Landsat-8 and harmonise the two data sets. Furthermore, we adapted a workflow to combine Landsat and Sentinel-2 images to create homogeneous and gap-free annual mosaics. We determined the number of images and cloud-free pixels, the spatial coverage and the quality of the mosaic with spectral comparisons to demonstrate the relevance of the Landsat+Sentinel-2 mosaics. Lastly, we adapted the automatic disturbance detection algorithm LandTrendr for large-scale RTS identification and mapping at high temporal resolution. For this, we modified the temporal segmentation algorithm for annual gradual and abrupt disturbance detection to incorporate the annual Landsat+Sentinel-2 mosaics. We further parametrised the temporal segmentation and spectral filtering for optimised RTS detection, conducted further spatial masking and filtering, and implemented a binary object classification algorithm with machine-learning to derive RTS from the LandTrendr disturbance output. We applied the algorithm to North Siberia, covering an area of 8.1 × 10⁶ km².
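The abrupt-disturbance idea behind the temporal segmentation can be illustrated with a toy example that flags the strongest single-year drop in an annual spectral index. LandTrendr itself fits full piecewise segments rather than single differences, and the index values here are invented.

```python
# Toy sketch of abrupt-disturbance detection on an annual index series:
# flag the year with the strongest year-to-year decrease. Values invented.

def largest_drop(years, index):
    """Return (year, magnitude) of the strongest year-to-year decrease."""
    drops = [(index[i] - index[i + 1], years[i + 1])
             for i in range(len(years) - 1)]
    magnitude, year = max(drops)
    return year, magnitude

years = list(range(2001, 2011))
ndvi  = [0.61, 0.62, 0.60, 0.61, 0.59, 0.35, 0.33, 0.36, 0.38, 0.40]

year, magnitude = largest_drop(years, ndvi)
print(year, round(magnitude, 2))  # a candidate disturbance year
```

A persistent index decrease of this kind is the kind of signal the segmentation isolates before spatial filtering and object classification decide whether it is an RTS.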
The spectral band comparison between same-day Landsat-8 and Sentinel-2 acquisitions already showed an overall good fit between both satellite products. However, applying the acquired spectral bandpass coefficients to adjust the Sentinel-2 reflectance values resulted in a near-perfect alignment between the same-day images. It can therefore be concluded that the spectral band adjustment succeeds in adjusting Sentinel-2 spectral values to those of Landsat-8 in North Siberia.
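A per-band bandpass adjustment of this kind can be sketched as a linear model L8 ≈ a + b * S2 fitted on overlapping same-day reflectance pairs. The paired reflectance values below are made up for illustration; the actual coefficients are derived per band from the North Siberian acquisitions.

```python
from statistics import mean

# Sketch of one band's bandpass adjustment: fit L8 ≈ a + b * S2 by
# ordinary least squares, then apply it to new Sentinel-2 values.

def fit_linear(s2, l8):
    """OLS slope and intercept for one spectral band."""
    mx, my = mean(s2), mean(l8)
    b = sum((x - mx) * (y - my) for x, y in zip(s2, l8)) / \
        sum((x - mx) ** 2 for x in s2)
    a = my - b * mx
    return a, b

def adjust(s2_values, a, b):
    """Adjust Sentinel-2 reflectance to resemble Landsat-8."""
    return [a + b * x for x in s2_values]

# Hypothetical paired same-day surface reflectances (one band).
s2 = [0.05, 0.10, 0.20, 0.30, 0.40]
l8 = [0.06, 0.11, 0.20, 0.29, 0.38]

a, b = fit_linear(s2, l8)
print(adjust([0.15, 0.25], a, b))
```

In practice one such coefficient pair is fitted per band, and the adjusted scenes then feed directly into the combined annual mosaics.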
The number of available cloud-free images increased steadily between 1999 and 2019, especially after 2016 with the addition of Sentinel-2 images, signifying a highly improved input database for the mosaicking workflow. In a comparison of annual mosaics, the Landsat+Sentinel-2 mosaics always fully covered the study areas, while Landsat-only mosaics contained data gaps for the same years. The spectral comparison of the input images and the Landsat+Sentinel-2 mosaic showed a high correlation between the input images and the mosaic bands, attesting to the high quality of the mosaicking results. Our results show that especially the mosaic coverage of northern, coastal areas was substantially improved with the Landsat+Sentinel-2 mosaics. By combining data from both Landsat and Sentinel-2 sensors, we reliably created input mosaics at high spatial resolution for comprehensive time series analyses.
This research presents the first automatically derived assessment of RTS distribution and temporal dynamics at continental scale. In total, we identified 50,895 RTS, primarily located in ice-rich permafrost regions, as well as a steady increase in RTS-affected areas between 2001 and 2019 across North Siberia. From 2016 onward, the RTS area increased more abruptly, indicating heightened thaw slump dynamics in this period. Overall, the RTS-affected area increased by 331 % within the observation period. In contrast, five focus sites show spatiotemporal variability in their annual RTS dynamics, alternating between periods of increased and decreased RTS development, which suggests a close relationship to varying thaw drivers. The majority of the identified RTS were active from 2000 onward, and only a small proportion initiated during the assessment period. This highlights that the increase in RTS-affected area was mainly caused by the enlargement of existing RTS and not by newly initiated RTS.
Overall, this research showed the advantages of combining Landsat and Sentinel-2 data in northern high latitudes and the improvements in spatial and temporal coverage of combined annual mosaics. The mosaics form the database for automated disturbance detection to reliably map RTS and other abrupt permafrost disturbances at continental scale. The assessment at high temporal resolution further testifies to the increasing impact of abrupt permafrost disturbances and likewise emphasises the spatio-temporal variability of thaw dynamics across landscapes. Obtaining such consistent disturbance products is necessary to parametrise regional and global climate change models, enabling an improved representation of the permafrost thaw feedback.
This dissertation was carried out as part of the international and interdisciplinary graduate school StRATEGy, whose goal is to investigate geological processes that take place on different temporal and spatial scales and have shaped the southern Central Andes. This study focuses on claystones and carbonates of the Yacoraite Fm. that were deposited between the Maastrichtian and the Danian in the Cretaceous Salta Rift Basin. The former rift basin is located in northwest Argentina and is divided into the sub-basins Tres Cruces, Metán-Alemanía and Lomas de Olmedo. The overall motivation for this study was to gain new knowledge about the evolution of marine and lacustrine conditions during the deposition of the Yacoraite Fm. in the Tres Cruces and Metán-Alemanía sub-basins. Other important aspects examined within the scope of this dissertation are the conversion of the organic matter of the Yacoraite Fm. into oil and its genetic relationship to selected produced oils and natural oil seeps. The results of my study show that deposition of the Yacoraite Fm. began under marine conditions and that a lacustrine environment had developed by the end of deposition in the Tres Cruces and Metán-Alemanía basins. In general, the kerogen of the Yacoraite Fm. consists mainly of kerogen types II, III and II/III mixtures. Type III kerogen is mainly found in samples from the Yacoraite Fm. with low TOC values. Due to the adsorption of hydrocarbons on mineral surfaces (mineral matrix effect), the type III kerogen content determined by Rock-Eval pyrolysis may be overestimated in these samples. Organic petrography shows that the organic particles of the Yacoraite Fm. consist mainly of alginites and some vitrinite-like particles. Pyrolysis-GC of the rock samples showed that the Yacoraite Fm. generates low-sulfur oils with a predominantly low-wax, paraffinic-naphthenic-aromatic composition as well as paraffinic wax-rich oils.
Small proportions of paraffinic, low-wax oils and a gas condensate-generating facies are also predicted. Here, too, mineral matrix effects were taken into account, which can lead to a quantitative overestimation of the gas-forming character.
The results of an additional 1D basin modeling study show that the onset (10 % TR) of oil generation occurred between ≈10 Ma and ≈4 Ma. Most of the oil (≈50 % to 65 %) was generated prior to the development of the structural traps formed during the Plio-Pleistocene Diaguita deformation phase. Only ≈10 % of the total oil generated was formed, and potentially trapped, after the formation of the structural traps. Important factors in the risk assessment of this petroleum system, which can explain the small amounts of generated and migrated oil, are the generally low TOC contents and the variable thickness of the Yacoraite Fm. Additional risks are associated with the low density of information about potentially existing reservoir structures and the quality of the seal rocks.
River flooding poses a threat to numerous cities and communities all over the world. The detection, quantification and attribution of changes in flood characteristics are key to assessing changes in flood hazard and help affected societies to mitigate and adapt to emerging risks in time. The Rhine River is one of the major European rivers, and numerous large cities lie on its shores. Runoff from several large tributaries superimposes in the main channel, shaping the complex flow regime; rainfall, snowmelt and ice-melt are important runoff components. The main objective of this thesis is the investigation of a possible transient merging of nival and pluvial Rhine flood regimes under global warming. Rising temperatures cause snowmelt to occur earlier in the year and rainfall to be more intense. The superposition of snowmelt-induced floods originating from the Alps with more intense rainfall-induced runoff from pluvial-type tributaries might create a new flood type with potentially disastrous consequences.
To introduce the topic of changing hydrological flow regimes, an interactive web application is presented that enables the investigation of runoff timing and runoff seasonality observed at river gauges all over the world. The exploration and comparison of a great diversity of river gauges in the Rhine River Basin and beyond indicate that river systems around the world undergo fundamental changes. In hazard and risk research, providing background as well as real-time information to residents and decision-makers in an easily accessible way is of great importance. Future studies need to further harness the potential of scientifically engineered online tools to improve the communication of information related to hazards and risks.
A next step is the development of a cascading sequence of analytical tools to investigate long-term changes in hydro-climatic time series. The combination of quantile sampling with moving average trend statistics and empirical mode decomposition allows for the extraction of high resolution signals and the identification of mechanisms driving changes in river runoff. Results indicate that the construction and operation of large reservoirs in the Alps is an important factor redistributing runoff from summer to winter, and hint at more (intense) rainfall in recent decades, particularly during winter, in turn increasing high runoff quantiles. The development and application of the analytical sequence represents a further step in the scientific quest to disentangle natural variability, climate change signals and direct human impacts.
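The combination of quantile sampling with moving-average smoothing can be sketched as follows. The runoff values are synthetic, and the empirical mode decomposition step used in the thesis is omitted.

```python
from statistics import quantiles

# Sketch: sample an annual high-runoff quantile (Q90), then smooth the
# annual series with a moving average to expose long-term change.

def annual_q90(daily_runoff_by_year):
    """90th-percentile runoff for each year."""
    return [quantiles(v, n=10)[-1] for v in daily_runoff_by_year]

def moving_average(series, window=3):
    """Simple moving average; edge years without a full window are skipped."""
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Three synthetic "years" of daily runoff with a rising upper tail.
years = [
    [10, 12, 11, 30, 14, 13, 12, 11, 10, 35],
    [11, 13, 12, 33, 15, 14, 13, 12, 11, 40],
    [12, 14, 13, 36, 16, 15, 14, 13, 12, 45],
]

q90 = annual_q90(years)
print(moving_average(q90, window=3))
```

On real gauge data the smoothed quantile series makes gradual shifts in high runoff visible that daily values obscure.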
The in-depth analysis of in situ snow measurements and simulations of the Alpine snow cover using a physically-based snow model enable the quantification of changes in snowmelt in the sub-basin upstream of gauge Basel. Results confirm previous investigations indicating that rising temperatures result in a decrease in maximum melt rates. Extending these findings to a catchment perspective, a threefold effect of rising temperatures can be identified: snowmelt becomes weaker, occurs earlier and forms at higher elevations. Furthermore, results indicate that due to the wide range of elevations in the basin, snowmelt does not occur simultaneously at all elevations; instead, elevation bands melt together in blocks. The beginning and end of the release of meltwater seem to be determined by the passage of warm air masses, and the respective elevation range affected by the accompanying temperatures and snow availability. Following these findings, a hypothesis describing elevation-dependent compensation effects in snowmelt is introduced: in a warmer world with similar sequences of weather conditions, snowmelt is moved upward to higher elevations, i.e., the block of elevation bands providing most water to the snowmelt-induced runoff is located at higher elevations. This upward shift makes snowmelt in individual elevation bands occur earlier; the timing of the snowmelt-induced runoff, however, stays the same, as meltwater from higher elevations at least partly replaces meltwater from elevations below.
The insights on past and present changes in river runoff, snow cover and the underlying mechanisms form the basis for investigations of potential future changes in Rhine River runoff. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios, is used to analyse future changes in streamflow, snowmelt, precipitation and evapotranspiration at 1.5, 2.0 and 3.0 °C global warming. Simulation results suggest that future changes in flood characteristics in the Rhine River Basin are controlled by increased precipitation amounts on the one hand and reduced snowmelt on the other. Rising temperatures deplete seasonal snowpacks. At no time during the year does a warming climate result in an increased risk of snowmelt-driven flooding. Counterbalancing effects between snowmelt and precipitation often result in only small and transient changes in streamflow peaks. Although the investigations point at changes in both rainfall- and snowmelt-driven runoff, there are no indications of a transient merging of nival and pluvial Rhine flood regimes due to climate warming. Flooding in the main tributaries of the Rhine, such as the Moselle River, as well as in the High Rhine is controlled by both precipitation and snowmelt. Caution has to be exercised when labelling sub-basins such as the Moselle catchment as purely pluvial-type or the Rhine River Basin at Basel as purely nival-type; results indicate that these (over-)simplifications can entail misleading assumptions with regard to flood-generating mechanisms and changes in flood hazard. In the framework of this thesis, some progress has been made in detecting, quantifying and attributing past, present and future changes in Rhine flow and flood characteristics. However, further studies are necessary to pin down future changes in the genesis of Rhine floods, particularly very rare events.
Learning analytics at scale (2021)
Digital technologies are paving the way for innovative educational approaches. The learning format of Massive Open Online Courses (MOOCs) provides a highly accessible path to lifelong learning while being more affordable and flexible than face-to-face courses. Thousands of learners can enroll in courses, mostly without admission restrictions, but this also raises challenges. Individual supervision by teachers is barely feasible, and learning persistence and success depend on students' self-regulatory skills. Here, technology provides the means for support. The use of data for decision-making is already transforming many fields, whereas in education it is still a young research discipline. Learning Analytics (LA) is defined as the measurement, collection, analysis, and reporting of data about learners and their learning contexts with the purpose of understanding and improving learning and learning environments. The vast amount of data that MOOCs produce on the learning behavior and success of thousands of students provides the opportunity to study human learning and develop approaches addressing the demands of learners and teachers.
The overall purpose of this dissertation is to investigate the implementation of LA at the scale of MOOCs and to explore how data-driven technology can support learning and teaching in this context. To this end, several research prototypes have been iteratively developed for the HPI MOOC Platform and were tested and evaluated in an authentic real-world learning environment. Most of the results can also be applied on a conceptual level to other MOOC platforms. The research contribution of this thesis thus provides practical insights beyond purely theoretical considerations. In total, four system components were developed and extended:
(1) The Learning Analytics Architecture: A technical infrastructure to collect, process, and analyze event-driven learning data based on schema-agnostic pipelining in a service-oriented MOOC platform. (2) The Learning Analytics Dashboard for Learners: A tool for data-driven support of self-regulated learning, in particular to enable learners to evaluate and plan their learning activities, progress, and success by themselves. (3) Personalized Learning Objectives: A set of features to better connect learners' success to their personal intentions based on selected learning objectives to offer guidance and align the provided data-driven insights about their learning progress. (4) The Learning Analytics Dashboard for Teachers: A tool supporting teachers with data-driven insights to enable the monitoring of their courses with thousands of learners, identify potential issues, and take informed action.
For all aspects examined in this dissertation, related research is presented, development processes and implementation concepts are explained, and evaluations are conducted in case studies. Among other findings, the usage of the learner dashboard in combination with personalized learning objectives demonstrated improved certification rates of 11.62% to 12.63%. Furthermore, it was observed that the teacher dashboard is a key tool and an integral part for teaching in MOOCs. In addition to the results and contributions, general limitations of the work are discussed—which altogether provide a solid foundation for practical implications and future research.
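The "schema-agnostic pipelining" named in component (1) can be illustrated with a minimal sketch. The function and field names below are hypothetical and do not reflect the HPI platform's actual API; the point is only that structurally different events pass through the same collection stage, with just a thin envelope normalized.

```python
import json
from datetime import datetime, timezone

def collect_event(raw: str) -> dict:
    """Accept any JSON learning event without enforcing a fixed schema
    (schema-agnostic): only an envelope is normalized, while the
    payload is passed through untouched for later analysis stages."""
    event = json.loads(raw)
    return {
        "received_at": datetime.now(timezone.utc).isoformat(),
        "verb": event.get("verb", "unknown"),  # e.g. "video.play"
        "user": event.get("user"),
        "payload": event,                      # original event, unmodified
    }

# Two structurally different events flow through the same stage:
e1 = collect_event('{"verb": "video.play", "user": 7, "position": 42}')
e2 = collect_event('{"verb": "quiz.submit", "user": 7, "answers": [1, 3]}')
```

Because no schema is enforced at collection time, new event types added to the platform can be recorded immediately; interpretation is deferred to downstream analytics services.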
Magmatic continental rifts often constitute the earliest stage of nascent plate boundaries. These extensional tectonic provinces are characterized by ubiquitous normal faulting and volcanic activity; the spatial pattern, the geometry, and the age of these normal faults can help to unravel the spatiotemporal relationships between extensional deformation, magmatism, and long-wavelength crustal deformation of continental rift provinces. This study focuses on active faulting in the Kenya Rift of the Cenozoic East African Rift System (EARS), from the mid-Pleistocene to the present day.
To examine the early stages of continental break-up in the EARS, this thesis presents a time-averaged minimum extension rate for the inner graben of the Northern Kenya Rift (NKR) for the last 0.5 m.y. Using the TanDEM-X digital elevation model, fault-scarp geometries and associated throws are determined across the volcano-tectonic axis of the inner graben of the NKR. By integrating existing geochronology of faulted units with new ⁴⁰Ar/³⁹Ar radioisotopic dates, time-averaged extension rates are calculated. This study reveals that in the inner graben of the NKR, the long-term extension rate based on mid-Pleistocene to recent brittle deformation has minimum values of 1.0 to 1.6 mm yr⁻¹, locally with values up to 2.0 mm yr⁻¹. In light of virtually inactive border faults of the NKR, we show that extension is focused in the region of the active volcano-tectonic axis in the inner graben, thus highlighting the maturing of continental rifting in the NKR.
The phenomenon of focused extension is further investigated with a structural analysis of the youngest volcanic manifestations of the Kenya Rift, their relationship with extensional structures, and their overprint by Holocene faulting. In this context I analyzed the fault characteristics at the ~36 ka old Menengai Caldera and adjacent areas in the Central Kenya Rift using detailed field mapping and a structure-from-motion-based DEM generated from UAV data. In general, the Holocene intra-rift normal faults are dip-slip faults which strike NNE and thus reflect the present-day tectonic stress field; inside the Menengai Caldera, however, persistent magmatic activity and magmatic resurgence significantly overprint these young structures. The caldera is located at the center of an actively extending rift segment, and this and the other volcanic edifices of the Kenya Rift may constitute nucleation points of faulting and magmatic extensional processes that ultimately lead to a future stage of magma-assisted rifting.
When viewed at the scale of the entire Kenya Rift, the protracted normal faulting in this region compartmentalizes the larger rift depressions and influences the sedimentology and the hydrology of the intra-rift basins at a scale of less than 100 km. In the present day, most of the fault-bounded sub-basins of the Kenya Rift are hydrologically isolated due to this combination of faulting and magmatic activity, which has generated efficient hydrological barriers that maintain these basins as semi-independent geomorphic entities. This isolation, however, was overcome under wetter climatic conditions in the past, when the basins were transiently connected. I therefore also investigated the hydrological connectivity of the rift basins during the African Humid Period of the early Holocene, when climate was wetter. With the help of DEM analysis, lake-highstand indicators, radiocarbon dating, and a review of the fossil record, two lake-river cascades could be identified: one directed southward, and one directed northward. Both cascades connected presently isolated rift basins during the early Holocene via spillovers of lakes and incised river gorges. This hydrological connection fostered the dispersal of aquatic faunas along the rift, and in addition, the water divide between the two river systems represented the only terrestrial dispersal corridor across the Kenya Rift. The reconstruction explains isolated distributions of Nilotic fish species in Kenya Rift lakes and of Guineo-Congolian mammal species in forests east of the Kenya Rift. On longer timescales, repeated episodes of connectivity and isolation must have occurred. To address this problem I participated in research to analyze a sediment drill core from the Koora basin of the Southern Kenya Rift, which provides a paleo-environmental record of the last 1 Ma.
Based on this record it can be concluded that at ~400 ka relatively stable environmental conditions were disrupted by tectonic, hydrological, and ecological changes, resulting in increasingly large and frequent fluctuations in water availability, grassland communities, and woody plant cover. The major environmental shifts reflected in the drill core data coincide with phases where volcano-tectonic activity affected the basin. This thesis therefore shows how protracted extensional tectonic processes and the resulting geomorphologic conditions can affect the hydrology, the paleo-environment and the biodiversity of extensional zones in Kenya and elsewhere.
Historiography has so far dated the end of German Zionism to the Nazi ban on the Zionistische Vereinigung für Deutschland in the wake of the November pogrom of 1938. By that time, however, it had already detached itself from its geographical context and put down new roots in Erez Israel. Zionists from Germany now set out, with their specific horizon of experience, their standards of value, and the ideological toolkit they had brought with them, to help shape the development of the Jewish national home and to pave the way for a comprehensive economic, cultural, and political acculturation of the German Aliyah. Contrary to all Zionist theory, they founded the self-help organization Hitachduth Olej Germania on a landsmannschaft basis in 1932 and, during the World War, the party Alija Chadascha.
This dissertation offers a comprehensive account of German Zionism in its final phase, from 1932 to 1948; at the same time, it illuminates the history of the roughly 60,000 Jews from Germany who immigrated to Palestine during the period relevant to this study. The first part presents, in chronological order, the final regrouping and reorganization of German Zionism in its new-old homeland beginning in 1932: its formative years, so to speak, in personal, organizational, and ideological-political terms, which, after the almost complete failure of the political integration of the German Aliyah, culminated in the founding of the Alija Chadascha, a step that in retrospect appears almost inevitable. The second part presents the positions of the German Zionists on the existential questions facing the Jewish community in Palestine, known in Hebrew as the Yishuv, during the period under consideration. Specifically, these were, first, the question of immigration, inseparably linked to the demand, indispensable in Zionist theory, for a Jewish majority in Palestine; second, the question of the political shape of the future Jewish polity; and third, the question of the Yishuv's adequate response to the Shoah. The question of the desired relationship with the British Mandatory power runs through each of these thematic complexes, which are treated in separate chapters. It was here that the German Zionists had to put the intellectual and ideological toolkit they had brought with them to a practical test and search for realpolitik answers.
The meteoric rise of the Alija Chadascha, which remained shaped by its landsmannschaft character, was followed in the first post-war years by an equally rapid decline. A few months after the founding of the State of Israel, it quietly dissolved, and the bulk of its activists integrated into the party system of the new state. German Zionism as a political movement had now truly come to an end. This study thus traces, on the one hand, the struggle of the German Aliyah for social recognition and political participation in the Yishuv; on the other hand, it situates German Zionism in its final phase intellectually and ideologically and reveals tendencies of ideological reorientation. In addition, commonplaces found in the historiography, such as the almost universally accepted thesis of the German Zionists' failure in their new homeland, are subjected to scrutiny. The last remaining gap in the scholarly canon on the more than fifty-year history of German Zionism is thereby closed.
Magnetic strain contributions in laser-excited metals studied by time-resolved X-ray diffraction
(2021)
In this work I explore the impact of magnetic order on the laser-induced ultrafast strain response of metals. Few experiments with femto- or picosecond time-resolution have so far investigated magnetic stresses. This is contrasted by the industrial usage of magnetic invar materials or magnetostrictive transducers for ultrasound generation, which already utilize magnetostrictive stresses in the low frequency regime.
In the reported experiments I investigate how the energy deposition by the absorption of femtosecond laser pulses in thin metal films leads to ultrafast stress generation. I utilize the fact that this stress drives an expansion that emits nanoscopic strain pulses, so-called hypersound, into adjacent layers. Both the expansion and the strain pulses change the average inter-atomic distance in the sample, which can be tracked with sub-picosecond time resolution using an X-ray diffraction setup at a laser-driven plasma X-ray source. Ultrafast X-ray diffraction can also be applied to buried layers within heterostructures that cannot be accessed by optical methods, which exhibit a limited penetration into metals. The reconstruction of the initial energy transfer processes from the shape of the strain pulse in buried detection layers represents a contribution of this work to the field of picosecond ultrasonics.
A central point for the analysis of the experiments is the direct link between the deposited energy density in the nanostructures and the resulting stress on the crystal lattice. The underlying thermodynamic concept of a Grüneisen parameter provides the theoretical framework for my work. I demonstrate how the Grüneisen principle can be used to interpret the strain response on ultrafast timescales in various materials and how it can be extended to describe magnetic stresses. The class of heavy rare-earth elements exhibits especially large magnetostriction effects, which can even lead to an unconventional contraction of the laser-excited transducer material. Such a dominant contribution of the magnetic stress to the motion of atoms has not been demonstrated previously. The observed rise time of the magnetic stress contribution in dysprosium is identical to the decrease in helical spin order found previously using time-resolved resonant X-ray diffraction. This indicates that the strength of the magnetic stress can be used as a proxy of the underlying magnetic order. Such magnetostriction measurements are applicable even in the case of antiparallel or non-collinear alignment of the magnetic moments and a vanishing magnetization.
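The Grüneisen concept described above can be sketched numerically: the total laser-induced stress is the sum of the energy densities of the excited subsystems, each weighted by its own Grüneisen parameter. All numbers below are illustrative placeholders, not fitted values from the thesis.

```python
def gruneisen_stress(energy_densities, gammas):
    """Total laser-induced stress as a sum over subsystems
    (electrons, phonons, magnetic excitations):
    sigma = sum_r Gamma_r * rho_r; rho_r in J/m^3 gives sigma in Pa."""
    return sum(g * rho for g, rho in zip(gammas, energy_densities))

# Illustrative, not fitted, values: a negative magnetic Grueneisen
# parameter lets spin excitations contribute a contractive stress.
rho   = [2.0e8, 5.0e8, 3.0e8]  # J/m^3: electrons, phonons, spins
gamma = [1.5,   2.0,  -2.5]    # dimensionless Grueneisen parameters
sigma = gruneisen_stress(rho, gamma)  # net stress in Pa
```

With a sufficiently large negative magnetic Grüneisen parameter, the magnetic term can outweigh the electron and phonon terms, which is how a laser-excited transducer can contract rather than expand.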
The strain response of metal films is usually determined by the pressure of electrons and lattice vibrations. I have developed a versatile two-pulse excitation routine that can be used to extract the magnetic contribution to the strain response even if systematic measurements above and below the magnetic ordering temperature are not feasible. A first laser pulse leads to a partial ultrafast demagnetization, so that the amplitude and shape of the strain response triggered by the second pulse depend on the remaining magnetic order. With this method I could identify a strongly anisotropic magnetic stress contribution in the magnetic data-storage material iron-platinum and trace the recovery of the magnetic order by varying the pulse-to-pulse delay. The stark contrast between the expansion of iron-platinum nanograins and thin films shows that the different constraints on the in-plane expansion have a strong influence on the out-of-plane expansion due to the Poisson effect. I show how such transverse strain contributions need to be accounted for when interpreting the ultrafast out-of-plane strain response using thermal expansion coefficients obtained under near-equilibrium conditions.
This work contributes an investigation of magnetostriction on ultrafast timescales to the literature on magnetic effects in materials. It develops a method to extract spatially and temporally varying stress contributions based on a model for the amplitude and shape of the emitted strain pulses. Energy transfer processes result in a change of the stress profile with respect to the initial absorption of the laser pulses. One interesting example occurs in nanoscopic gold-nickel heterostructures, where excited electrons rapidly transport energy into a distant nickel layer, which takes up much more energy and expands faster and more strongly than the laser-excited gold capping layer. Magnetic excitations in rare-earth materials represent a large energy reservoir that delays the energy transfer into adjacent layers. Such magneto-caloric effects are known in thermodynamics but not extensively covered on ultrafast timescales. The combination of ultrafast X-ray diffraction and time-resolved techniques with direct access to the magnetization has a large potential to uncover and quantify such energy transfer processes.
By regulating the concentration of carbon in our atmosphere, the global carbon cycle drives changes in our planet’s climate and habitability. Earth surface processes play a central, yet insufficiently constrained role in regulating fluxes of carbon between terrestrial reservoirs and the atmosphere. River systems drive global biogeochemical cycles by redistributing significant masses of carbon across the landscape. During fluvial transit, the balance between carbon oxidation and preservation determines whether this mass redistribution is a net atmospheric CO2 source or sink. Existing models for fluvial carbon transport fail to integrate the effects of sediment routing processes, resulting in large uncertainties in fluvial carbon fluxes to the oceans.
In this Ph.D. dissertation, I address this knowledge gap through three studies that focus on the timescale and routing pathways of fluvial mass transfer and show their effect on the composition and fluxes of organic carbon exported by rivers. The hypotheses posed in these three studies were tested in an analog lowland alluvial river system – the Rio Bermejo in Argentina. The Rio Bermejo annually exports more than 100 Mt of sediment and organic matter from the central Andes, and transports this material nearly 1300 km downstream across the lowland basin without influence from tributaries, allowing me to isolate the effects of geomorphic processes on fluvial organic carbon cycling. These studies focus primarily on the geochemical composition of suspended sediment collected from river depth profiles along the length of the Rio Bermejo.
In Chapter 3, I aimed to determine the mean fluvial sediment transit time for the Rio Bermejo and evaluate the geomorphic processes that regulate the rate of downstream sediment transfer. I developed a framework to use meteoric cosmogenic ¹⁰Be (¹⁰Beₘ) as a chronometer to track the duration of sediment transit from the mountain front downstream along the ~1300 km channel of the Rio Bermejo. I measured ¹⁰Beₘ concentrations in suspended sediment sampled from depth profiles, and found a 230% increase along the fluvial transit pathway. I applied a simple model for the time-dependent accumulation of ¹⁰Beₘ on the floodplain to estimate a mean sediment transit time of 8.5±2.2 kyr. Furthermore, I show that sediment transit velocity is influenced by lateral migration rate and channel morphodynamics. This approach to measuring sediment transit time is much more precise than methods used previously and shows promise for future applications.
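The logic of such a time-dependent accumulation model can be sketched in a few lines. This is a minimal illustration assuming linear accumulation at a constant meteoric flux into a well-mixed sediment column; the flux and mixing-depth values are illustrative, not the calibrated parameters of the thesis.

```python
def transit_time_kyr(n_source, n_outlet, flux, mass_depth):
    """Mean sediment transit time from the downstream enrichment of
    meteoric 10Be, assuming linear accumulation during floodplain
    storage: dN/dt = flux / mass_depth.
    n_* in atoms/g, flux in atoms/cm^2/yr, mass_depth in g/cm^2."""
    rate = flux / mass_depth                   # atoms per g per yr
    return (n_outlet - n_source) / rate / 1e3  # convert yr -> kyr

# Illustrative numbers only (not the thesis' calibrated values):
t = transit_time_kyr(n_source=1.0e8,   # concentration at mountain front
                     n_outlet=3.3e8,   # ~230% downstream increase
                     flux=1.3e6, mass_depth=50.0)
```

The key idea is that the concentration gain between the mountain front and the outlet, divided by the accumulation rate in storage, yields a time-averaged transit time.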
In Chapter 4, I aimed to quantify the effects of hydrodynamic sorting on the composition and quantity of particulate organic carbon (POC) transported and exported by lowland rivers. I first used scanning electron microscopy (SEM) coupled with nanoscale secondary ion mass spectrometry (NanoSIMS) analyses to show that the Bermejo transports two principal types of POC: 1) mineral-bound organic carbon associated with <4 µm, platy grains, and 2) coarse discrete organic particles. Using n-alkane stable isotope data and particle shape analysis, I showed that these two carbon pools are vertically sorted in the water column due to differences in particle settling velocity. This vertical sorting may allow modern POC to be transported efficiently from source to sink, enabling efficient CO2 drawdown. Simultaneously, it may cause degraded, mineral-bound POC to be deposited overbank and stored on the floodplain for centuries to millennia, resulting in enhanced POC remineralization. In the Rio Bermejo, selective deposition of coarse material causes the proportion of mineral-bound POC to increase with distance downstream, but the majority of exported POC is composed of discrete organic particles, suggesting that the river is a net carbon sink. In summary, this study shows that selective deposition and hydraulic sorting control the composition and fate of POC during fluvial transit.
In Chapter 5, I characterized and quantified POC transformation and oxidation during fluvial transit. I analyzed the radiocarbon content and stable carbon isotopic composition of Rio Bermejo suspended sediment and found that POC ages during fluvial transit, but is also degraded and oxidized during transient floodplain storage. Using these data, I developed a conceptual model for fluvial POC cycling that allows the estimation of POC oxidation relative to POC export, and ultimately reveals whether a river is a net source or sink of CO2 to the atmosphere. Through this study, I found that the Rio Bermejo annually exports more POC than is oxidized during transit, largely due to high rates of lateral migration that cause erosion of floodplain vegetation and soil into the river. These results imply that human engineering of rivers could alter the fluvial carbon balance, by reducing lateral POC inputs and increasing the mean sediment transit time.
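The source-versus-sink test underlying the conceptual model can be reduced to a simple mass balance: POC entering the channel either survives to export or is oxidized during transit and storage. The sketch below is a schematic illustration with made-up fluxes, not the thesis' actual model or data.

```python
def net_carbon_balance(input_flux, export_flux):
    """Schematic POC mass balance (fluxes in Mt C/yr):
    oxidation = input - export; the river acts as a net CO2 sink
    relative to its inputs when export exceeds oxidation."""
    oxidized = input_flux - export_flux
    return {"oxidized": oxidized, "net_sink": export_flux > oxidized}

# Illustrative fluxes only, not values from the thesis:
balance = net_carbon_balance(input_flux=1.0, export_flux=0.75)
# here 0.25 Mt C/yr is oxidized, so export dominates -> net sink
```

In this framing, engineering interventions that suppress lateral migration would reduce `input_flux` while lengthening storage, shifting the balance toward oxidation.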
Together, these three studies quantitatively link geomorphic processes to rates of POC transport and degradation across sub-annual to millennial time scales and nanoscale to 10³ km spatial scales, laying the groundwork for a global-scale fluvial organic carbon cycling model.
Adapted pathogens possess a range of virulence mechanisms to suppress plant immune responses below the threshold of effective resistance. This enables them to proliferate and cause disease on a particular host. An essential virulence strategy of Gram-negative bacteria is the translocation of so-called type III effector proteins (T3Es) directly into the host cell, where they disturb the host's immune response or promote the establishment of an environment favorable to the pathogen. A critical component of plant immunity against invading pathogens is the rapid transcriptional reprogramming of the attacked cell. Many adapted bacterial plant pathogens use T3Es to interfere with the induction of defense-associated genes. Elucidating effector functions and identifying their plant target proteins are essential for understanding bacterial pathogenesis. The aim of this work was the functional characterization of the type III effector protein XopS from Xanthomonas campestris pv. vesicatoria (Xcv). A particular focus was placed on the interaction between XopS and its plant interaction partner WRKY40, a transcriptional regulator of defense-associated gene expression identified in preliminary work. XopS was shown to be an essential virulence factor of the phytopathogen Xcv during the pre-invasive immune response: upon inoculation of the leaf surface of susceptible pepper plants, xopS-deficient Xcv bacteria displayed markedly reduced virulence compared with wild-type Xcv. Translocation of XopS by Xcv, as well as ectopic expression of XopS in Arabidopsis or N. benthamiana, prevented stomatal closure in response to bacteria or a pathogen-associated stimulus, and this was shown to occur in a WRKY40-dependent manner.
It was further shown that XopS is able to manipulate the expression of defense-associated genes, indicating that XopS interferes with both pre-invasive and post-invasive apoplastic defense. Phytohormone signaling networks play an important role in mounting an efficient plant immune response, and XopS appears to interfere with precisely these networks. Ectopic expression of the effector in Arabidopsis, for example, led to a significant induction of the phytohormone jasmonic acid (JA), while infection of susceptible pepper plants with a xopS-deficient Xcv strain likewise led to a significant accumulation of salicylic acid (SA).
At this point it can therefore be assumed that XopS promotes the virulence of Xcv by inducing JA-dependent signaling pathways while simultaneously suppressing SA-dependent signaling. Virus-induced gene silencing of the XopS interaction partner WRKY40a in pepper increased the plant's tolerance to Xcv infection, suggesting that this protein is a transcriptional repressor of plant immune responses. The hypothesis that WRKY40 represses defense-associated gene expression was corroborated here by several experimental approaches. For example, it was shown that WRKY40 suppresses the expression of various defense genes, including the SA-dependent gene PR1 and JAZ8, a negative regulator of JA signaling. To ensure defense-associated gene expression upon pathogen attack, WRKY40, as a negative regulator, must be degraded. Preliminary work showed that WRKY40 is degraded via the 26S proteasome. The present study further confirmed that the T3E XopS stabilizes the WRKY40 protein by preventing its degradation via the 26S proteasome in a manner yet to be clarified. The results of this work suggest that the stabilization of WRKY40, a negative regulator of the immune response, by XopS enables a manipulation of defense-associated gene expression and a rerouting of phytohormonal interactions that promote the spread of Xcv on susceptible pepper plants. A further aim of this work was to identify additional potential in planta interaction partners of XopS that could be relevant for its interaction with WRKY40 or for deciphering its mode of action. The deubiquitinase UBP12 was identified as a further plant interaction partner of both XopS and WRKY40.
This enzyme is able to modify the ubiquitination of substrate proteins, and its function could thus be a link between XopS and its interference with the proteasomal degradation of WRKY40. During a compatible Xcv-host interaction, virus-induced gene silencing of UBP12 led to reduced plant resistance to the pathogen Xcv, pointing to a positive regulatory role of UBP12 during the immune response. Western blot analyses also showed that the WRKY40 protein accumulates when UBP12 is downregulated and that this accumulation is further enhanced by the presence of the T3E XopS. Further analyses on the biochemical characterization of the XopS/WRKY40/UBP12 interaction should be carried out in the future in order to further elucidate the exact mode of action of the XopS T3E.
Boon and bane
(2021)
Semi-natural habitats (SNHs) in agricultural landscapes represent important refugia for biodiversity, including organisms providing ecosystem services. Their spill-over into agricultural fields may lead to the provision of regulating ecosystem services such as biological pest control, ultimately affecting agricultural yield. Still, it remains largely unexplored how different habitat types and their distribution in the surrounding landscape shape this provision of ecosystem services within arable fields. Hence, in this thesis I investigated the effect of SNHs on biodiversity-driven ecosystem services and disservices affecting wheat production, with an emphasis on the role and interplay of habitat type, distance to the habitat, and landscape complexity.
I established transects from the field border into the wheat field, starting either from a field-to-field border, a hedgerow, or a kettle hole, and assessed beneficial and detrimental organisms and their ecosystem functions as well as wheat yield at several in-field distances. Using this study design, I conducted three studies where I aimed to relate the impacts of SNHs at the field and at the landscape scale on ecosystem service providers to crop production.
In the first study, I observed yield losses close to SNHs for all transect types. Woody habitats, such as hedgerows, reduced yields more strongly than kettle holes, most likely due to shading by the tall vegetation structure. To find the biotic drivers of these yield losses close to SNHs, in the second study I measured infestation by selected wheat pests as potential ecosystem disservices to crop production. Besides relating their damage rates to the wheat yield of experimental plots, I studied the effect of SNHs on these pest rates at the field and at the landscape scale. Only weed cover could be associated with yield losses, with its strongest impact on wheat yield close to the SNH. While fungal seed infection rates did not respond to SNHs, fungal leaf infection and herbivory rates of cereal leaf beetle larvae were positively influenced by kettle holes. The latter even increased at kettle holes with increasing landscape complexity, suggesting a release from natural enemies at isolated habitats within the field interior.
In the third study, I found that ecosystem service providers also benefit from the presence of kettle holes. Distance to a SNH decreased species richness of ecosystem service providers, whereby the spatial range depended on species mobility: arable weeds diminished rapidly, while carabids were less affected by the distance to a SNH. In contrast, weed seed predation increased with distance, suggesting that a higher food availability at field borders might have diluted the predation on experimental seeds. Intriguingly, responses to landscape complexity were rather mixed: while weed species richness was generally elevated with increasing landscape complexity, carabids followed a hump-shaped curve with highest species numbers and activity-density in simple landscapes. This might indicate that carabids profit from a minimum endowment of SNHs, while a further increase impedes their mobility. Weed seed predation was affected differently by landscape complexity depending on the weed species presented. However, in habitat-rich landscapes, seed predation of the different weed species converged to similar rates, emphasising that landscape complexity can stabilize the provision of ecosystem services. Lastly, I could relate higher weed seed predation to an increase in wheat yield, even though seed predation did not diminish weed cover. The exact mechanisms by which weed control contributes to crop production remain to be investigated in future studies.
In conclusion, I found habitat-specific responses of ecosystem (dis)service providers and their functions emphasizing the need to evaluate the effect of different habitat types on the provision of ecosystem services not only at the field scale, but also at the landscape scale. My findings confirm that besides identifying species richness of ecosystem (dis)service providers the assessment of their functions is indispensable to relate the actual delivery of ecosystem (dis)services to crop production.
Halide perovskites are a class of novel photovoltaic materials that have recently attracted much attention in the photovoltaics research community due to their highly promising optoelectronic properties, including large absorption coefficients and long carrier lifetimes. In this thesis, the charge carrier mobility of halide perovskites is investigated by THz spectroscopy, a contact-free technique that yields the intra-grain sum mobility of electrons and holes in a thin film.
The polycrystalline halide perovskite thin films, provided by Potsdam University, show moderate mobilities in the range from 21.5 to 33.5 cm²V⁻¹s⁻¹. It is shown in this work that the room-temperature mobility is limited by charge carrier scattering at polar optical phonons. The mobility at low temperature is likely limited by scattering at charged and neutral impurities at impurity concentrations of N = 10¹⁷-10¹⁸ cm⁻³. Furthermore, it is shown that exciton formation may decrease the mobility at low temperatures. Scattering at acoustic phonons can be neglected at both low and room temperature. The analysis of mobility spectra over a broad range of temperatures for perovskites with various cation compositions shows that the cations have a minor impact on charge carrier mobility.
The low-dimensional thin films of quasi-2D perovskite with different numbers of [PbI₆]⁴⁻ sheets (n = 2-4), alternating with long organic spacer molecules, were provided by S. Zhang from Potsdam University. They exhibit mobilities in the range from 3.7 to 8 cm²V⁻¹s⁻¹. A clear decrease of mobility is observed with decreasing number of metal-halide sheets n, which likely arises from charge carrier confinement within the metal-halide layers. Modelling the measured THz mobility with the modified Drude-Smith model yields localization lengths from 0.9 to 3.7 nm, in good agreement with the thicknesses of the metal-halide layers. Additionally, the mobilities are found to depend on the orientation of the layers. The charge carrier dynamics also depends on the number of metal-halide sheets n. For the thin films with n = 3-4 the dynamics is similar to that of 3D metal-halide perovskites (MHPs). However, the thin film with n = 2 shows clearly different dynamics, where signs of exciton formation are observed within a 390 fs timeframe after photoexcitation.
Finally, the charge carrier dynamics of CsPbI₃ perovskite nanocrystals was investigated, in particular the effect of post-treatments on the charge carrier transport.
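The Drude-Smith analysis mentioned above can be sketched as follows. This uses the standard Drude-Smith form of the complex conductivity; the modified model used in the thesis additionally maps the backscattering parameter c to a localization length, a mapping not reproduced here. All parameter values are illustrative.

```python
E = 1.602176634e-19     # elementary charge, C
M_E = 9.1093837015e-31  # electron rest mass, kg

def drude_smith_sigma(omega, n, tau, c, m_eff):
    """Complex THz conductivity in the Drude-Smith form:
    sigma(w) = (n e^2 tau / m) / (1 - i w tau) * (1 + c / (1 - i w tau)),
    with c in [-1, 0]; c = 0 recovers the free Drude response and
    c -> -1 describes strong carrier backscattering/localization."""
    m = m_eff * M_E
    x = 1.0 - 1j * omega * tau
    return (n * E**2 * tau / m) / x * (1.0 + c / x)

def dc_mobility_cm2_per_vs(tau, c, m_eff):
    """DC mobility implied by a Drude-Smith fit: mu = (e tau / m)(1 + c),
    converted from m^2/(V s) to cm^2/(V s)."""
    return E * tau / (m_eff * M_E) * (1.0 + c) * 1e4

# Illustrative parameters (not fitted values from the thesis):
mu = dc_mobility_cm2_per_vs(tau=100e-15, c=-0.97, m_eff=0.2)
# strong backscattering (c close to -1) suppresses the DC mobility
```

The design point is that a single scattering time can coexist with a low measured DC mobility when carriers are partially localized, which is how quasi-2D films with thin metal-halide sheets yield mobilities well below those of the 3D films.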