Primary school educators take part in specific movement-oriented continuing education programmes. Numerous studies in the context of further and continuing education describe the conditions under which participation has beneficial or detrimental effects. Didactic-conceptual considerations frequently discuss how external circumstances, for instance with regard to temporal, spatial, or content-related dimensions, should be designed so that educational offerings achieve particular effects within the school system. Under which conditions, so to speak, can specific content be conveyed to teachers in a favourable way, so that a transfer effect of (system-)relevant knowledge into the school system can take place?
This research project does not foreground a discourse on conditions as a basis for discussing effective strategies for delivering educational programmes. At its centre is the question of educators' own reasons for participating and learning, and of how they relate to their continuing education. This approach shifts the perspective on the topic and allows an engagement with subjects within the framework of a discourse on reasons. In an empirical-qualitative study, narrative interviews were conducted with eleven graduates of a movement-oriented continuing education programme; the data were analysed using the documentary method. The results of the reconstruction are presented in the form of two case descriptions and four typical figures of reasoning developed in the study: the figure of learning, the figure of knowledge management, the figure of curious searching, and the figure of physical activity. Beyond the reconstruction of patterns of reasons for participation and learning, it becomes clear that participating and learning do not follow different logics of access with regard to meaning-reason relationships. Rather, both expansive and defensive reasons for learning can be identified within reasons for participation.
With the title-giving quotation "Zu Seiner Majestät allerhöchstem Interesse" (in His Majesty's most supreme interest), contemporary officials committed themselves in their oath of office to the fundamental maxim of the emerging Prussian state embodied by the king. The domain policy of Friedrich Wilhelm I, the second Prussian king, was the essential precondition for Prussia's subsequent development from a regional power within the Holy Roman Empire into a European great power. Without notable raw materials or craft traditions, agrarian in character and fully committed to the mercantilist economic model and its cameralist methods, the expansion and intensification of the sovereign's estate economy was Prussia's most promising path to generating the means that enabled it to reduce debt, build a strong military power, and accumulate a state treasury. This was achieved through rigorous control of revenues and expenditures and the creation of an efficient administrative apparatus with detailed regulations, a system that endured for almost one hundred years and yielded up to nearly 50 percent of state revenues. Frederician Prussia would not have been possible without the achievements of Friedrich Wilhelm I. The present volume substantiates this development with a wealth of source material and its analysis, as well as an extensive appendix. The 22-page index of persons alone, with more than 1,300 references, and the 103 short biographies of the councillors of the Kurmark chamber make clear who stood behind the achievements of this era.
Zionism and Cosmopolitanism
(2022)
X-rays are integral to furthering our knowledge of exoplanetary systems. In this work we discuss the use of X-ray observations to understand star-planet interactions, mass-loss rates of an exoplanet's atmosphere and the study of an exoplanet's atmospheric components using future X-ray spectroscopy.
The low-mass star GJ 1151 was reported to display variable low-frequency radio emission, which is an indication of coronal star-planet interactions with an unseen exoplanet. In chapter 5 we report the first X-ray detection of GJ 1151's corona based on XMM-Newton data. Averaged over the observation, we detect the star with a low coronal temperature of 1.6 MK and an X-ray luminosity of L_X = 5.5 × 10^26 erg/s. This is compatible with the coronal assumptions for a sub-Alfvénic star-planet interaction origin of the observed radio signals from this star.
In chapter 6, we aim to characterise the high-energy environment of known exoplanets and estimate their mass-loss rates. This work is based on the soft X-ray instrument on board the Spectrum Roentgen Gamma (SRG) mission, eROSITA, along with archival data from ROSAT, XMM-Newton, and Chandra. We use these four X-ray source catalogues to derive X-ray luminosities of exoplanet host stars in the 0.2-2 keV energy band. A catalogue of the mass-loss rates of 287 exoplanets is presented, with 96 of these planets characterised for the first time using new eROSITA detections. Of these first time detections, 14 are of transiting exoplanets that undergo irradiation from their host stars that is of a level known to cause observable evaporation signals in other systems, making them suitable for follow-up observations.
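A common way to turn host-star X-ray luminosities into planetary mass-loss estimates is the energy-limited approximation. Whether the thesis uses exactly this recipe is not stated in the abstract, so the following is a hedged sketch; the stellar luminosity, orbital distance, and heating efficiency below are illustrative assumptions, not catalogue values.

```python
import math

G = 6.674e-8  # gravitational constant in cgs units [cm^3 g^-1 s^-2]

def energy_limited_mass_loss(f_xuv, r_p, m_p, efficiency=0.15):
    """Mass-loss rate [g/s] in the energy-limited approximation.

    f_xuv      : XUV flux at the planet's orbit [erg cm^-2 s^-1]
    r_p, m_p   : planetary radius [cm] and mass [g]
    efficiency : heating efficiency (assumed; typically ~0.1-0.3)
    """
    return efficiency * math.pi * f_xuv * r_p**3 / (G * m_p)

# Illustrative hot-Jupiter numbers (assumptions, not values from the thesis):
l_x = 5e27                    # host-star X-ray luminosity [erg/s]
a = 0.05 * 1.496e13           # orbital distance: 0.05 au in cm
f_xuv = l_x / (4 * math.pi * a**2)
r_jup, m_jup = 7.15e9, 1.9e30  # Jupiter radius [cm] and mass [g]
mdot = energy_limited_mass_loss(f_xuv, r_jup, m_jup)
print(f"mass-loss rate ~ {mdot:.1e} g/s")  # of order 1e9 g/s
```

For close-in planets this simple scaling already shows why the 14 highly irradiated transiting planets mentioned above are promising evaporation targets: the rate grows linearly with the XUV flux received.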
In the next generation of space observatories, X-ray transmission spectroscopy of an exoplanet’s atmosphere will be possible, allowing for a detailed look into the atmospheric composition of these planets. In chapter 7, we model sample spectra using a toy model of an exoplanetary atmosphere to predict what exoplanet transit observations with future X-ray missions such as Athena will look like. We then estimate the observable X-ray transmission spectrum for a typical Hot Jupiter-type exoplanet, giving us insights into the advances in X-ray observations of exoplanets in the decades to come.
Writing travel, writing life
(2022)
The book compares the texts of three Swiss authors: Ella Maillart, Annemarie Schwarzenbach and Nicolas Bouvier. The focus is on the journey from Genève to Kabul that Ella Maillart and Annemarie Schwarzenbach made together in 1939/1940, and that Nicolas Bouvier made in 1953/1954 with the artist Thierry Vernet. The comparison shows the strong connection between journey and life, and between ars vivendi and travel literature.
This book also gives an overview of and organises the numerous terms, genres, and categories that already exist to describe various travel texts and proposes the new term travelling narration. The travelling narration looks at the text from a narratological perspective that distinguishes the author, narrator, and protagonist within the narration.
In the examination, ten motifs were found to characterise the travelling narration: Culture, Crossing Borders, Freedom, Time and Space, the Aesthetics of Landscapes, Writing and Reading, the Self and/as the Other, Home, Religion and Spirituality, as well as the Journey. The importance of each individual motif is not confined to the 1930s or 1950s but also yields important insights for living together today and in the future.
Among the multitude of geomorphological processes, aeolian shaping processes are of special character, because even though their immediate impact can be considered low (exceptions exist), their constant and large-scale force makes them a powerful player in the earth system. Pedogenic dust is one of the most important sources of atmospheric aerosols and is therefore regarded as a key player in atmospheric processes. Soil dust emissions, being complex in composition and properties, influence atmospheric processes and air quality and have impacts on other ecosystems. In this dissertation, we unravel a novel scientific understanding of this complex system based on a holistic dataset acquired during a series of field experiments on arable land in La Pampa, Argentina. The field experiments as well as the generated data provide information about topography, various soil parameters, the atmospheric dynamics in the very lower atmosphere (up to 4 m height), as well as measurements of aeolian particle movement across a wide range of particle size classes, from 0.2 µm up to coarse sand.
The investigations focus on three topics: (a) the effects of low-scale landscape structures on aeolian transport processes of the coarse particle fraction, (b) the horizontal and vertical fluxes of the very fine particles and (c) the impact of wind gusts on particle emissions.
Among other findings presented in this thesis, it could be shown in particular that, even though the small-scale topology has a clear impact on erosion and deposition patterns, physical soil parameters also need to be taken into account for a robust statistical model of these patterns. Furthermore, the vertical fluxes of particulate matter show different characteristics across the particle size classes. Finally, a novel statistical measure was introduced to quantify the impact of wind gusts on particle uptake and applied to the provided data set. This measure shows significantly increased particle concentrations at points in time defined as gust events.
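The thesis' exact statistical measure is not reproduced in the abstract. As one plausible sketch, a gust event can be defined as a time step where wind speed exceeds its rolling mean by k rolling standard deviations, and the gust impact quantified as the ratio of mean particle concentration during such events to the background mean. The threshold rule, parameter values, and synthetic data below are all assumptions for illustration.

```python
import random
import statistics

def gust_ratio(wind, conc, window=10, k=2.0):
    """Mean particle concentration during gust events over the background mean.

    A gust event is a time step whose wind speed exceeds the rolling mean of
    the preceding `window` steps by `k` rolling standard deviations.
    """
    gust, background = [], []
    for i in range(window, len(wind)):
        w = wind[i - window:i]
        mu, sd = statistics.mean(w), statistics.stdev(w)
        (gust if wind[i] > mu + k * sd else background).append(conc[i])
    if not gust or not background:
        return None
    return statistics.mean(gust) / statistics.mean(background)

# Synthetic demo: steady wind with three injected gusts that raise concentration.
random.seed(1)
wind = [5.0 + random.uniform(-0.2, 0.2) for _ in range(100)]
conc = [10.0] * 100
for i in (30, 60, 90):
    wind[i] += 7.0   # gust
    conc[i] = 25.0   # elevated particle uptake during the gust
ratio = gust_ratio(wind, conc)
print(f"gust/background concentration ratio = {ratio:.2f}")
```

A ratio well above 1 corresponds to the thesis' finding of significantly increased particle concentrations during gust events.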
With its holistic approach, this thesis further contributes to the fundamental understanding of how atmosphere and pedosphere are intertwined and affect each other.
Knowledge-intensive business processes are flexible and data-driven. Therefore, traditional process modeling languages do not meet their requirements: these languages focus on highly structured processes in which data plays a minor role. As a result, process-oriented information systems fail to assist knowledge workers in executing their processes. We propose a novel case management approach that combines flexible activity-centric processes with data models, and we provide a joint semantics using colored Petri nets. The approach is suited to model, verify, and enact knowledge-intensive processes and can aid the development of information systems that support knowledge work.
Knowledge-intensive processes are human-centered, multi-variant, and data-driven. Typical domains include healthcare, insurance, and law. The processes cannot be fully modeled, since the underlying knowledge is too vast and changes too quickly. Thus, models for knowledge-intensive processes are necessarily underspecified. In fact, a case emerges gradually as knowledge workers make informed decisions. Knowledge work imposes special requirements on modeling and managing the respective processes, including flexibility during design and execution, ad-hoc adaptation to unforeseen situations, and the integration of behavior and data. However, the predominantly used process modeling languages (e.g., BPMN) are unsuited for this task.
Therefore, novel modeling languages have been proposed. Many of them focus on activities' data requirements and declarative constraints rather than imperative control flow. Fragment-Based Case Management, for example, combines activity-centric imperative process fragments with declarative data requirements. At runtime, fragments can be combined dynamically, and new ones can be added. Yet, no integrated semantics for flexible activity-centric process models and data models exists.
In this thesis, Wickr, a novel case modeling approach extending fragment-based Case Management, is presented. It supports batch processing of data, sharing data among cases, and a full-fledged data model with associations and multiplicity constraints. We develop a translational semantics for Wickr targeting (colored) Petri nets. The semantics assert that a case adheres to the constraints in both the process fragments and the data models. Among other things, multiplicity constraints must not be violated. Furthermore, the semantics are extended to multiple cases that operate on shared data. Wickr shows that the data structure may reflect process behavior and vice versa. Based on its semantics, prototypes for executing and verifying case models showcase the feasibility of Wickr. Its applicability to knowledge-intensive and to data-centric processes is evaluated using well-known requirements from related work.
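The translational semantics above maps case models to (colored) Petri nets; the essence of such a target formalism can be illustrated with a minimal place/transition net interpreter. The class, place names, and the two-transition example below are invented for illustration: this is a sketch of ordinary, uncolored nets, whereas Wickr's actual semantics additionally encodes data models and multiplicity constraints.

```python
from collections import Counter

class PetriNet:
    """Minimal place/transition net: markings are multisets of tokens on places."""

    def __init__(self, transitions):
        # transitions: {name: (tokens consumed per place, tokens produced per place)}
        self.transitions = {t: (Counter(pre), Counter(post))
                            for t, (pre, post) in transitions.items()}

    def enabled(self, marking, t):
        pre, _ = self.transitions[t]
        return all(marking[p] >= n for p, n in pre.items())

    def fire(self, marking, t):
        if not self.enabled(marking, t):
            raise ValueError(f"transition {t!r} not enabled")
        pre, post = self.transitions[t]
        m = Counter(marking)
        m.subtract(pre)   # consume input tokens
        m.update(post)    # produce output tokens
        return +m         # unary + drops zero counts

# Hypothetical two-step case fragment: a case token moves start -> registered -> done.
net = PetriNet({
    "register": ({"start": 1}, {"registered": 1}),
    "decide":   ({"registered": 1}, {"done": 1}),
})
m = Counter({"start": 1})
m = net.fire(m, "register")
m = net.fire(m, "decide")
print(dict(m))  # {'done': 1}
```

Verification of case models then reduces to questions about such nets, e.g. whether a marking violating a multiplicity constraint is reachable.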
This study explores the identity of the Bene Israel caste from India and its assimilation into Israeli society. The large immigration from India to Israel started in the early 1950s and continued until the early 1970s. Initially, these immigrants struggled hard as they faced many problems such as the language barrier, cultural differences, a new climate, geographical isolation, and racial discrimination. This analysis focuses on the three major aspects of the integration process involving the Bene Israel: economic, socio-cultural and political. The study covers the period from the early fifties to the present.
I will focus on the identity of the Bene Israel as it has evolved since their immigration to Israel: from a lifestyle and customs shaped by their Hindu and Muslim surroundings, they integrated into the Jewish life of Israel. Despite its ethnographic nature, this study has theological implications, as it examines an encounter between Jewish monotheism and Indian polytheism.
All the western scholars who researched the Bene Israel community felt impelled to rely on information received from community members themselves. No written historical evidence recorded Bene Israel culture and origin. Only from the nineteenth century onwards, after the intrusion of western Jewish missionaries, were Jewish books translated into Marathi. Missionary activities among the Bene Israel served as a catalyst for the Bene Israel themselves to investigate their historical past. Haeem Samuel Kehimkar (1830-1908), a Bene Israel teacher, wrote notes on the history of the Bene Israel in India in Marathi in 1897. Brenda Ness wrote in her dissertation:
The results [of the missionary activities] are several works about the community in English and Marathi by Bene-Israel authors which have appeared during the last century. These are, for the most part, not documented; they consist of much theorizing on accepted tradition and tend to be apologetic in nature.
There can be no philosophical explanation or rational justification for an entire community to leave its motherland India and enter into a process of annihilation of its own free will. I see this as a social and cultural suicide. In craving a better future in Israel, the Indian Bene Israel community has paid an enormously heavy price as a people today discarded by the East and disowned by the West, because they chose to become something that they never were and never could be. As it is written, "know where you came from, and where you are going." A community with an ancient history from a spiritual culture has completely lost its identity and self-esteem.
In concluding this dissertation, I realize the dilemma with which I have confronted the members of the Bene Israel community, which I have reviewed after strenuous and constant self-examination. I chose to trace the younger generations' diverse urges towards acceptance, and wish to clarify my intricate analysis of this controversial community. The complexity of living in a Jewish state where citizens cannot fulfill basic desires, such as matrimony, forced an entire community to conceal their true identity and perjure themselves to blend in, for the sake of national integration. Although scholars accepted their new claims, the skepticism of the rabbinate authorities prevails, and they refuse to marry them to this day, suspecting they are an Indian caste.
Countries processing raw coffee beans are burdened with low economic incomes while fighting the serious environmental problems caused by the by-products and wastewater generated during wet-coffee processing. The aim of this work was to develop alternative methods of improving the quality of the waste by-products and thus make the process economically more attractive through valorization options that can be brought to the coffee producers.
The type of processing influences not only the constitution of green coffee but also that of by-products and wastewater. Therefore, coffee bean samples as well as by-products and wastewater collected at different production steps were analyzed. Results show that the composition of wastewater depends on how much and how often the wastewater is recycled in the processing. Considering the coffee beans, results indicate that the proteins might be affected during processing, and a positive effect of fermentation on the solubility and accessibility of proteins seems probable. The steps of coffee processing influence the different constituents of green coffee beans which, during roasting, give rise to aroma compounds and express the characteristics of roasted coffee beans. Since this group of compounds is involved in the Maillard reaction during roasting, coffee producers could exploit this to improve the quality of green coffee beans and, ultimately, the quality of the coffee cup.
The valorization of coffee wastes through modification to activated carbon has been considered as a low-cost option, creating an adsorbent with the potential to compete with commercial carbons. An activation protocol using spent coffee grounds and parchment was developed to assess their adsorption capacity for organic compounds. Spent coffee grounds and parchment proved to have adsorption efficiency similar to commercial activated carbon.
The results of this study document significant information on the processing from de-pulped to green coffee beans. Furthermore, they show that coffee parchment and spent coffee grounds can be valorized as a low-cost option to produce activated carbons. Further work needs to be directed to optimizing the activation methods to improve the quality of the materials produced, and to the viability of applying such methods in situ to bring the coffee producers further valorization opportunities with environmental benefits.
Coffee producers would profit from establishing appropriately simple technologies to improve green coffee quality, re-use coffee by-products, and valorize wastewater.
The Dictyostelium centrosome is acentriolar, measures about 500 nm, and consists of a three-layered core structure surrounded by a corona at which microtubules nucleate. In this work, the centrosomal protein Cep192 and possible interaction partners at the centrosome were investigated in detail. The initial localization study of Cep192 showed that it localizes at the spindle poles throughout mitosis and is the most strongly expressed of the structural proteins of the core structure. Permanent localization at the spindle poles during mitosis is assumed for proteins that localize in the two identically structured outer core layers, which form the mitotic centrosome. A knockdown of Cep192 led to the formation of supernumerary microtubule-organizing centers (MTOCs) and to slightly increased ploidy. A destabilization of the centrosome by the reduced Cep192 expression is therefore assumed. Two small tags, the SpotH6 and BioH6 tags, were established on Cep192 and could be labeled with small fluorescent detection conjugates. With the proteins tagged in this way, high-resolution expansion microscopy could be optimized for the centrosome, and the core structure was visualized protein-specifically by fluorescence microscopy for the first time. Cep192 localizes in the outer core layers. The combined labeling of Cep192 and the centrosomal proteins CP39 and CP91 in expansion microscopy allowed the three-layered architecture of the centrosomal core structure to be visualized, with CP39 and CP91 localizing between Cep192 in the inner core layer. The corona was also examined by expansion microscopy: the corona protein CDK5RAP2 localizes in spatial proximity to Cep192 in the inner corona.
A comparison of the corona proteins CDK5RAP2, CP148, and CP224 in expansion microscopy revealed distinguishable sublocalizations of the proteins within the corona and relative to the core structure. In biotinylation assays with the centrosomal core proteins CP39 and CP91 as well as the corona protein CDK5RAP2, Cep192 was identified as a possible interaction partner.
The results of this work demonstrate the important function of the protein Cep192 in the Dictyostelium centrosome and, through the combination of biotinylation assays and expansion microscopy of the investigated proteins, enable an improved understanding of the topology of the centrosome.
Since its founding, the Defence Committee of the German Bundestag has been engaged in rational and emotional debate with parliament and the public. In his long-term analysis, Wolfgang Geist examines the changing position of the committee within the Bundestag and vis-à-vis its parliamentary groups under shifting political and social conditions. This makes clear what role the committee played in the security policy of the Federal Republic, including in its special capacity as a committee of inquiry, and what significance its personnel composition and individual political actors had. At the same time, he questions the catchword "Parlamentsarmee" (parliamentary army).
Voilà une Parisienne!
(2022)
Stereotypes are omnipresent as culture-generating and at the same time strongly normative figurations. From a comparative perspective on literature and art, Maria Weilandt regards them not as rigid and immutable entities but as ensembles of interwoven narratives. Using the stereotype of the Parisienne in the 19th and early 20th centuries, she develops her own analytical concept. Alongside literary texts and artworks, she also examines the so-called Parisienne as an advertising figure of Parisian department-store culture and as the representation of a specific intersectional concept of nation, for instance in the context of the 1900 World's Fair.
Today's business organizations want to become more efficient and are constantly evolving to find ways to retain talent. It is well established that visionary leadership plays a vital role in organizational success and contributes to a better working environment. This study aims to determine the effect of visionary leadership on employees' perceived job satisfaction. Specifically, it investigates whether the mediators meaningfulness at work and commitment to the leader affect this relationship. I draw on job demands-resources theory to explain the overarching model used in this study and on broaden-and-build theory to motivate the use of the mediators.
To test the hypotheses, evidence was collected in a multi-source, time-lagged field study of 95 leader-follower dyads. The data was collected in a three-wave design, with the surveys one month apart: employee perceptions of visionary leadership were collected at T1, data for both mediators at T2, and employee perceptions of job satisfaction at T3. The findings show that meaningfulness at work and commitment to the leader play positive intervening roles (in the form of a chain) in the indirect influence of visionary leadership on employee perceptions of job satisfaction.
This research contributes to literature and theory by, first, broadening the existing knowledge on the effects of visionary leadership on employees. Second, it contributes to the literature on the constructs of meaningfulness at work, commitment to the leader, and job satisfaction. Third, it sheds light on the mediation mechanism linking the study variables in line with the proposed model. Fourth, it integrates two theories, job demands-resources theory and broaden-and-build theory, providing further evidence for both. Additionally, the study offers practical implications for business leaders and HR practitioners.
Overall, my study discusses the potential of visionary leadership behavior to elevate employee outcomes. The study aligns with previous research and answers several calls for further research on visionary leadership, job satisfaction, and mediation mechanisms involving meaningfulness at work and commitment to the leader.
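The serial (chain) mediation described above, visionary leadership → meaningfulness at work → commitment to the leader → job satisfaction, implies an indirect effect equal to the product of the three path coefficients. The sketch below estimates that product on synthetic data with ordinary least squares; the data-generating process and all coefficients are invented for illustration and are not the study's actual estimates.

```python
import random

def ols(y, X):
    """Ordinary least squares via normal equations and Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

random.seed(0)
n = 500
vl = [random.gauss(0, 1) for _ in range(n)]        # visionary leadership (T1)
mw = [0.5 * x + random.gauss(0, 1) for x in vl]    # meaningfulness at work (T2)
cl = [0.4 * m + random.gauss(0, 1) for m in mw]    # commitment to the leader (T2)
js = [0.6 * c + random.gauss(0, 1) for c in cl]    # job satisfaction (T3)

a = ols(mw, [[1.0, x] for x in vl])[1]                            # VL -> MW
b_ = ols(cl, [[1.0, m, x] for m, x in zip(mw, vl)])[1]            # MW -> CL, controlling VL
c_ = ols(js, [[1.0, c, m, x] for c, m, x in zip(cl, mw, vl)])[1]  # CL -> JS, controlling MW, VL
indirect = a * b_ * c_
print(f"serial indirect effect = {indirect:.3f}")  # roughly 0.5 * 0.4 * 0.6 = 0.12
```

In practice, such serial indirect effects are tested with bootstrap confidence intervals rather than a single point estimate.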
Dynamic resource management is an essential requirement for private and public cloud computing environments. With dynamic resource management, the assignment of physical resources to virtual cloud resources depends on the actual needs of the applications or running services, which improves the utilization of the cloud's physical resources and reduces the cost of the offered services. In addition, virtual resources can be moved across different physical resources in the cloud environment without an obvious impact on the running applications or services. This means that the availability of the services and applications running in the cloud is independent of hardware failures, including failures of servers, switches, and storage. This increases the reliability of cloud services compared to classical data-center environments.
In this thesis we briefly discuss dynamic resource management and then focus in depth on live migration as the core mechanism of dynamic compute-resource management. Live migration is a commonly used and essential feature in cloud and virtual data-center environments. Load balancing, power saving, and fault tolerance in cloud computing all depend on live migration to optimize the usage of virtual and physical resources. As we will discuss in this thesis, live migration brings many benefits to cloud and virtual data-center environments, but its cost cannot be ignored. Live migration cost includes the migration time, downtime, network overhead, increased power consumption, and CPU overhead.
IT admins often run live migrations of virtual machines without any estimate of the migration cost, so resource bottlenecks, high migration costs, and migration failures can occur. The first problem we discuss in this thesis is how to model the cost of virtual machine live migration. Second, we investigate how machine learning techniques can help cloud admins estimate this cost before initiating the migration of one or multiple virtual machines. We also discuss the optimal timing for live-migrating a specific virtual machine to another server. Finally, we propose practical solutions that can be integrated with cloud administration portals to answer the research questions raised above.
Our research methodology is to propose empirical models based on VMware test-beds with different benchmark tools. We then use machine learning techniques to build a prediction approach for the cost of virtual machine live migration. Timing optimization for live migration is also proposed in this thesis, based on the cost prediction and on predicting data-center network utilization. Live migration with persistent memory clusters is discussed at the end of the thesis. The cost prediction and timing optimization techniques proposed here could be practically integrated with the VMware vSphere cluster portal, so that IT admins can use the cost prediction feature and the timing optimization option before proceeding with a virtual machine live migration.
Testing results show that our proposed approach for predicting VM live migration cost achieves acceptable accuracy, with less than 20% prediction error, and can easily be implemented and integrated with VMware vSphere as an example of a commonly used resource management portal for virtual data-centers and private cloud environments. The results also show that our proposed migration timing optimization technique can save up to 51% of the migration time for memory-intensive workloads and up to 27% for network-intensive workloads. This timing optimization can help admins save migration time by migrating at times of higher available network rate and with a higher probability of success.
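The thesis' empirical and machine-learning cost models are not reproduced in the abstract. As background, the classic analytical pre-copy model already captures why migration time and downtime depend on memory size, network bandwidth, and page dirty rate; the parameters below are illustrative assumptions, not measurements from the thesis.

```python
def precopy_migration(mem_gb, bw_gb_s, dirty_gb_s, threshold_gb=0.1, max_rounds=30):
    """Estimate (total time, downtime) in seconds for pre-copy live migration.

    Round n retransmits the memory dirtied during round n-1; the VM is paused
    for a final stop-and-copy once the dirty set fits `threshold_gb` (or the
    round limit is reached).
    """
    remaining, total_time = mem_gb, 0.0
    for _ in range(max_rounds):
        t = remaining / bw_gb_s          # time to send the current dirty set
        total_time += t
        remaining = dirty_gb_s * t       # memory dirtied while sending
        if remaining <= threshold_gb:
            break
    downtime = remaining / bw_gb_s       # final stop-and-copy with VM paused
    return total_time + downtime, downtime

# Illustrative: 16 GB VM over a 10 Gbit/s link (~1.25 GB/s), 0.25 GB/s dirty rate.
total, down = precopy_migration(16, 1.25, 0.25)
print(f"total = {total:.1f} s, downtime = {down:.3f} s")
```

The model makes the timing-optimization result above plausible: migrating when more bandwidth is available (a larger `bw_gb_s` relative to the dirty rate) shrinks every round and thus the total migration time.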
At the end of this thesis, we discuss persistent memory as a new trend in server memory technology. Persistent memory modes of operation and configurations are discussed in detail to explain how live migration works between servers with different memory configurations. We then build a VMware cluster containing a server with persistent memory as well as DRAM-only servers to show the difference in live migration cost between VMs using DRAM only and VMs backed by persistent memory.
Vermögen vererben
(2022)
The political regulation of wealth transfer and individual inheritance arrangements in the second half of the 20th century.
Bequeathing wealth stabilizes the social order. Inheritance arrangements can perpetuate social inequalities into the future or lead to the disappointment of family members who were passed over. Because inheritance touches on notions of social justice and family, its regulation is politically highly contested. Although the wealth bequeathed each year has reached new record highs in recent years, the prehistory of this current development has hardly been researched so far.
Ronny Grundig examines the transformation of wealth inheritance from the end of the Second World War to the end of the 1980s. He looks at political regulation, the practices of bequeathing, and the appropriation of inheritances by the bereaved. The author analyzes tax avoidance by the wealthy as well as conflicts between heirs. He also shows how changing couple and family relationships are reflected in wills and influence the distribution of the estates left behind.
Respiratory diseases increasingly represent a relevant global problem. Expanding or modifying the routes of administration of potential drugs for targeted topical applications is of the utmost importance. Varying a known route of administration through different technological implementations can increase the range of application options as well as patient compliance. A simple and flexible procedure, rapid availability, and handy technology are important properties in the development process of a product today. Direct topical treatment of respiratory diseases at the site of action by inhalation offers many advantages over systemic therapy. The medical inhalation of active substances via the lungs is, however, a complex challenge. Inhalers are among the dosage forms that require explanation and must be designed as simply as possible to increase consistent adherence to the prescription. At the same time, approximately 68 million people worldwide own and use the technology of an inhalative applicator to deliberately damage their health in the form of an electronic cigarette. This well-known application offers the potential of an available, inexpensive, and quality-controlled health measure for the control, prevention, and cure of respiratory diseases. It generates an aerosol by electrothermally heating a so-called liquid, which reaches a heating element via the capillary forces of a carrier material and evaporates. Its popularity shows that an intended effect occurs in the airways. This effect could also be transferable to potential pharmaceutical applications. The advantages of pulmonary administration are manifold. Compared with peroral administration, the active substance reaches the site of action directly.
Wenn eine systemische Applikation zu Arzneimittelkonzentrationen unterhalb der therapeutischen Wirksamkeit in der Lunge führt, könnte eine inhalative Darreichung bereits bei niedriger Dosierung die gewünschten höheren Konzentrationen am Wirkort hervorrufen. Aufgrund der großen Resorptionsfläche der Lunge sind eine höhere Bioverfügbarkeit und ein schnellerer Wirkungseintritt infolge des fehlenden First-Pass-Effektes möglich. Es kommt ebenfalls zu minimalen systemischen Nebenwirkungen. Die elektronische Zigarette erzeugt wie die medizinischen Inhalatoren lungengängige Partikel. Die atemzuggesteuerte Technik ermöglicht eine unkomplizierte und intuitive Anwendung. Der prinzipielle Aufbau besteht aus einer elektrisch beheizten Wendel und einem Akku. Die Heizwendel ist von einem sogenannten Liquid in einem Tank umgeben und erzeugt das Aerosol. Das Liquid beinhaltet eine Basismischung bestehend aus Propylenglycol, Glycerin und reinem Wasser in unterschiedlichen prozentualen Anteilen. Es besteht die Annahme, dass das Basisliquid auch mit pharmazeutischen Wirkstoffen für die pulmonale Applikation beladen werden kann. Aufgrund der thermischen Belastung durch die e-Zigarette müssen potentielle Wirkstoffe sowie das Vehikel eine thermische Stabilität aufweisen.
Die potentielle medizinische Anwendung der Technologie einer handelsüblichen e-Zigarette wurde anhand von drei Schwerpunkten an vier Wirkstoffen untersucht. Die drei ätherischen Öle Eucalyptusöl, Minzöl und Nelkenöl wurden aufgrund ihrer leichten Flüchtigkeit und der historischen pharmazeutischen Anwendung anhand von Inhalationen bei Erkältungssymptomen bzw. im zahnmedizinischen Bereich gewählt. Das eingesetzte Cannabinoid Cannabidiol (CBD) hat einen aktuellen Bezug zu dem pharmazeutischen Markt Deutschlands zur Legalisierung von cannabishaltigen Produkten und der medizinischen Forschung zum inhalativen Konsum. Es wurden relevante wirkstoffhaltige Flüssigformulierungen entwickelt und hinsichtlich ihrer Verdampfbarkeit zu Aerosolen bewertet. In den quantitativen und qualitativen chromatographischen Untersuchungen konnten spezifische Verdampfungsprofile der Wirkstoffe erfasst und bewertet werden. Dabei stieg die verdampfte Masse der Leitsubstanzen 1,8-Cineol (Eucalyptusöl), Menthol (Minzöl) und Eugenol (Nelkenöl) zwischen 33,6 µg und 156,2 µg pro Zug proportional zur Konzentration im Liquid im Bereich zwischen 0,5% und 1,5% bei einer Leistung von 20 Watt. Die Freisetzungsrate von Cannabidiol hingegen schien unabhängig von der Konzentration im Liquid im Mittelwert bei 13,3 µg pro Zug zu liegen. Dieses konnte an fünf CBD-haltigen Liquids im Konzentrationsbereich zwischen 31 µg/g und 5120 µg/g Liquid gezeigt werden. Außerdem konnte eine Steigerung der verdampften Massen mit Zunahme der Leistung der e-Zigarette festgestellt werden. Die Interaktion der Liquids bzw. Aerosole mit den Bestandteilen des Speichels sowie weiterer gastrointestinaler Flüssigkeiten wurde über die Anwendung von zugehörigen in vitro Modellen und Einsatz von Enzymaktivitäts-Assays geprüft. In den Untersuchungen wurden Änderungen von Enzymaktivitäten anhand des oralen Schlüsselenzyms α-Amylase sowie von Proteasen ermittelt. Damit sollte exemplarisch ein möglicher Einfluss auf physiologische bzw. 
metabolische Prozesse im humanen Organismus geprüft werden. Das Bedampfen von biologischen Suspensionen führte bei niedriger Leistung der e-Zigarette (20 Watt) zu keiner bzw. einer leichten Änderung der Enzymaktivität. Die Anwendung einer hohen Leistung (80 Watt) bewirkte tendenziell das Herabsetzen der Enzymaktivitäten. Die Erhöhung der Enzymaktivitäten könnte zu einem enzymatischen Abbau von Schleimstoffen wie Mucinen führen, was wiederum die effektive, mechanische Abwehr gegenüber bakteriellen Infektionen zur Folge hätte. Da eine Anwendung der Applikation insbesondere bei bakteriellen Atemwegserkrankungen denkbar wäre, folgten abschließend Untersuchungen der antibakteriellen Eigenschaften der Liquids bzw. Aerosole in vitro. Es wurden sechs klinisch relevante bakterielle Krankheitserreger ausgewählt, die nach zwei Charakteristika gruppiert werden können. Die drei multiresistenten Bakterien Pseudomonas aeruginosa, Klebsiella pneumoniae und Methicillin-resistenter Staphylococcus aureus können mithilfe von üblichen Therapien mit Antibiotika nicht abgetötet werden und haben vor allem eine nosokomiale Relevanz. Die zweite Gruppe weist Eigenschaften auf, die vordergründig assoziiert sind mit respiratorischen Erkrankungen. Die Bakterien Streptococcus pneumoniae, Moraxella catarrhalis und Haemophilus influenzae sind repräsentativ beteiligt an Atemwegserkrankungen mit diverser Symptomatik. Die Bakterienarten wurden mit den jeweiligen Liquids behandelt bzw. bedampft und deren grundlegende Dosis-Wirkungsbeziehung charakterisiert. Dabei konnte eine antibakterielle Aktivität der Formulierungen ermittelt werden, die durch Zugabe eines Wirkstoffes die bereits antibakterielle Wirkung der Bestandteile Glycerin und Propylenglycol verstärkte. Die hygroskopischen Eigenschaften dieser Substanzen sind vermutlich für eine Wirkung in aerosolierter Form verantwortlich. Sie entziehen die Feuchtigkeit aus der Luft und haben einen austrocknenden Effekt auf die Bakterien. 
Das Bedampfen der Bakterienarten Streptococcus pneumoniae, Moraxella catarrhalis und Haemophilus influenzae hatte einen antibakteriellen Effekt, der zeitlich abhängig von der Leistung der e-Zigarette war.
Die Ergebnisse der Untersuchungen führen zu dem Schluss, dass jeder Wirkstoff bzw. jede Substanzklasse individuell zu bewerten ist und somit Inhalator und Formulierung aufeinander abgestimmt werden müssen. Der Einsatz der e-Zigarette als Medizinprodukt zur Applikation von Arzneimitteln setzt stets Prüfungen nach Europäischem Arzneibuch voraus. Durch Modifizierungen könnte eine Dosierung gut kontrollierbar gemacht werden, aber auch die Partikelgrößenverteilung kann insoweit reguliert werden, dass die Wirkstoffe je nach Partikelgröße zu einem geeigneten Applikationsort wie Mund, Rachen oder Bronchien transportiert werden. Der Vergleich mit den Eigenschaften anderer medizinischer Inhalatoren führt zu dem Schluss, dass die Technologie der e-Zigarette durchaus eine gleichartige oder bessere Performance für thermisch stabile Wirkstoffe bieten könnte. Dieses fiktive Medizinprodukt könnte aus einer hersteller-unspezifisch produzierten, wieder aufladbaren Energiequelle mit Universalgewinde zum mehrfachen Gebrauch und einer hersteller- und wirkstoffspezifisch produzierten Einheit aus Verdampfer und Arzneimittel bestehen. Das Arzneimittel, ein medizinisches Liquid (Vehikel und Wirkstoff) kann in dem Tank des Verdampfers mit konstanten, nicht variablen Parametern patientenindividuell produziert werden. Inhalative Anwendungen werden perspektivisch wohl nicht zuletzt aufgrund der aktuellen COVID-19-Pandemie eine zunehmende Rolle spielen. Der Bedarf nach alternativen Therapieoptionen wird weiter ansteigen. Diese Arbeit liefert einen Beitrag zum Einsatz der Technologie der elektronischen Zigarette als electronic nicotin delivery system (ENDS) nach Modifizierung zu einem potentiellen pulmonalen Applikationssystem als electronic drug delivery system (EDDS) von inhalativen, thermisch stabilen Arzneimitteln in Form eines Medizinproduktes.
Humankind and its environment must be protected from the harmful effects of spent nuclear fuel; disposal in deep geological formations is therefore favoured worldwide. The suitability of potential host rocks is evaluated, among other criteria, by their retention capacity with respect to radionuclides. Safety assessments are based on quantifying radionuclide migration lengths with numerical simulations, as experiments cannot cover the required temporal (1 Ma) and spatial (>100 m) scales.
The aim of the present thesis is to assess the migration of uranium, a geochemically complex radionuclide, in the potential host rock Opalinus Clay. Radionuclide migration in clay formations is governed by diffusion, due to their low permeability, and retarded by sorption. Both processes depend strongly on pore water geochemistry and mineralogy, which vary between facies. Diffusion is quantified with the single-component (SC) approach, which uses one diffusion coefficient for all species, and with the process-based multi-component (MC) option, in which each species is assigned its own diffusion coefficient and the interaction with the diffuse double layer is taken into account. Sorption is integrated via a bottom-up approach using mechanistic surface complexation models and cation exchange. Reactive transport simulations are conducted with the geochemical code PHREEQC to quantify uranium migration, i.e. diffusion and sorption, as a function of mineralogical and geochemical heterogeneities on the host rock scale.
Sorption processes are facies dependent. Migration lengths vary between the Opalinus Clay facies by up to 10 m. Here, the geochemistry of the pore water, in particular the partial pressure of carbon dioxide (pCO2), is more decisive for the sorption capacity than the clay mineral content. Nevertheless, higher clay mineral contents can compensate for geochemical variations. Consequently, sorption processes must be quantified as a function of the pore water geochemistry in contact with the mineral assemblage.
Uranium diffusion in the Opalinus Clay is facies independent. Speciation is dominated by aqueous ternary complexes of U(VI) with calcium and carbonate. Differences in migration lengths between SC and MC diffusion are negligible at ±5 m. Furthermore, the applicability of the MC approach depends strongly on the quality and availability of the underlying data. Diffusion processes can therefore be adequately quantified with the SC approach using experimentally determined diffusion coefficients.
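The SC picture sketched above — a single effective diffusion coefficient, with sorption entering as a retardation factor — can be illustrated with a minimal 1D finite-difference model. All parameter values below are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def diffuse_1d(De=1e-11, Kd=1e-3, rho=2400.0, phi=0.12,
               years=1e6, L=100.0, nx=200):
    """Explicit 1D diffusion of a sorbing tracer through a clay barrier.

    De  : effective diffusion coefficient [m^2/s] (illustrative)
    Kd  : linear sorption distribution coefficient [m^3/kg] (illustrative)
    rho : dry bulk density [kg/m^3]; phi : porosity [-]
    Returns the distance [m] at which the relative concentration
    has fallen below 1e-3 after the given time.
    """
    R = 1.0 + rho * Kd / phi          # retardation factor from linear sorption
    Da = De / (phi * R)               # apparent diffusion coefficient
    dx = L / nx
    dt = 0.4 * dx**2 / Da             # stability criterion for explicit scheme
    steps = int(years * 3.156e7 / dt)
    c = np.zeros(nx)
    c[0] = 1.0                        # constant-concentration source boundary
    for _ in range(steps):
        c[1:-1] += Da * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0] = 1.0
    return dx * np.argmax(c < 1e-3)   # first cell below the threshold
```

With these placeholder values the migration length after 1 Ma stays well below the 100 m barrier thickness, which is the kind of comparison the safety assessment in the thesis formalises with full reactive transport chemistry.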
The hydrogeological system, rather than the mineralogy, governs the pore water geochemistry within the formation. Diffusive exchange with the adjacent aquifers has established geochemical gradients over geological time scales that can enhance migration by up to 25 m. Consequently, uranium sorption processes must be quantified following the identified priority: pCO2 > hydrogeology > mineralogy.
The presented research provides a workflow and orientation for other potential disposal sites with similar pore water geochemistry due to the identified mechanisms and dependencies. With a maximum migration length of 70 m, the retention capacity of the Opalinus Clay with respect to uranium is sufficient to fulfill the German legal minimum requirement of a thickness of at least 100 m.
The Annamites mountain range of Southeast Asia, which runs along the border of Viet Nam and Laos, is an important biodiversity hotspot with high levels of endemism. However, that biodiversity is threatened by unsustainable hunting, and many protected areas across the region have been emptied of their wildlife. To better protect the unique species of the Annamites, it is crucial to gain a better understanding of their ecology and distribution. Additionally, basic genetic information is needed to provide conservation stakeholders with essential information to facilitate conservation breeding and counteract the illegal wildlife trade. To date, this baseline information has been lacking for many Annamites species.
This thesis aims to assess the effectiveness of non-invasive collection methods, i.e. camera-trap surveys and leech-derived wildlife host DNA, in improving our understanding of the ecology, distribution, and genetic diversity of the terrestrial mammals of the Annamites.
In chapter 1, we analysed data from a systematic landscape camera-trap survey using single-species occupancy models to assess the ecology and distribution of two little-known Annamite endemics, the Annamite dark muntjac (Muntiacus rooseveltorum / truongsonensis) and the Annamite striped rabbit (Nesolagus timminsi), in multiple protected areas across the Annamites. This chapter provides the first in-depth information on their ecology and on their distribution patterns at large spatial scales. Most notably, we found that the Annamite dark muntjac occurred predominantly at higher elevations, while responses to elevation varied among study areas for the Annamite striped rabbit. We estimated occupancy probabilities for both endemics from their responses to environmental and anthropogenic influences and used this information to make recommendations for targeted conservation actions. We discuss how the approach used for these two Annamite endemics can be extended to other little-known and threatened species in other tropical regions.
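Single-species occupancy models of the kind used in this chapter separate true occupancy from imperfect detection. A minimal constant-parameter sketch (no elevation or other covariates, unlike the thesis's analyses; all function names are illustrative) fits the two probabilities by maximum likelihood from repeated-visit detection histories:

```python
import numpy as np
from scipy.optimize import minimize

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def occupancy_nll(params, histories):
    """Negative log-likelihood of a constant-psi, constant-p
    single-species occupancy model.
    histories: (n_sites, n_visits) array of 0/1 detections."""
    psi, p = sigmoid(params[0]), sigmoid(params[1])
    d = histories.sum(axis=1)                 # detections per site
    K = histories.shape[1]                    # visits per site
    lik = np.where(
        d > 0,
        psi * p**d * (1 - p)**(K - d),        # detected: site must be occupied
        psi * (1 - p)**K + (1 - psi))         # never detected: missed or absent
    return -np.log(np.clip(lik, 1e-12, None)).sum()

def fit_occupancy(histories):
    """Maximum-likelihood estimates of occupancy (psi) and detection (p)."""
    res = minimize(occupancy_nll, x0=np.zeros(2), args=(histories,),
                   method="Nelder-Mead")
    return sigmoid(res.x[0]), sigmoid(res.x[1])
```

The key idea is the second branch of the likelihood: a site with no detections may be occupied-but-missed or truly unoccupied, which is what lets the model correct naive occupancy estimates for imperfect detection.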
As with ecology and distribution, very little is known about the genetic diversity of the Annamite striped rabbit and other mammals of the Annamites. This poor understanding is mainly attributable to the lack of a comprehensive DNA sample collection covering the species' entire distribution range, believed to be a consequence of the low density of the mammals or the remoteness of their habitat. To overcome the difficulties of collecting DNA samples from elusive mammals, we applied invertebrate-derived DNA (iDNA) sampling via hematophagous leeches to indirectly obtain genetic material from their terrestrial mammalian hosts.
In chapter 2, leech-derived DNA was used to study the genetic diversity of the Annamite striped rabbit population. By analysing DNA extracted from leech samples collected in multiple study areas of the central Annamites, we found genetic variation comprising five haplotypes among nine obtained sequences. Despite this diversity, we found no clear phylogeographic pattern among the lagomorph's populations in the central Annamites. The findings have direct conservation implications for the species, as local stakeholders are currently establishing a conservation rescue and breeding facility for Annamite endemic species. Our results thus suggest that Annamite striped rabbits from multiple protected areas in the central Annamites can be used as founders for the breeding programme.
In chapter 3, the genetic material of six mammals frequently found in Indochina's illegal wildlife trade was extracted from leeches collected at six study sites across the Annamites. Species-specific genetic markers were used to obtain DNA fragments that were analysed together with GenBank reference sequences from other parts of the species' distribution ranges. Our results showed that invertebrate-derived DNA can be used to fill sampling gaps and to provide the genetic reference data needed for conservation breeding programmes and for counteracting the illegal wildlife trade.
Overall, this dissertation provides the first insights into the ecology, distribution, and genetics of rare and threatened species of the Annamites by utilising camera traps and leech-derived DNA as two non-invasive collection methods. This information is essential for improving the conservation efforts of local stakeholders and managers, especially for the Annamite endemics. The results of this dissertation also show the effectiveness of both non-invasive methods for studying terrestrial mammals at the landscape level. By expanding the application of these methods to other protected areas across the Annamites, we will further our understanding of the ecology, distribution, and genetics of Annamite endemics. Such landscape-scale surveys allow us to provide stakeholders with an overview of the current status of wildlife in the Annamites, supporting efforts to protect these secretive species from illegal hunting and thus from extinction.
In this thesis, the dependencies of charge localization and itinerancy in two classes of aromatic molecules, pyridones and porphyrins, are investigated. The focus lies on the effects of isomerism, complexation, solvation, and optical excitation, which accompany the various crucial biological functions of specific members of these groups of compounds. Several porphyrins play key roles in the metabolism of plants and animals. The nucleobases, which store the genetic information in DNA and RNA, are pyridone derivatives. In addition, a number of vitamins are based on these two groups of substances.
This thesis aims to answer the question of how the electronic structure of these classes of molecules is modified, enabling the versatile natural functionality. The resulting insights into the effect of constitutional and external factors are expected to facilitate the design of new processes for medicine, light-harvesting, catalysis, and environmental remediation.
The common denominator of pyridones and porphyrins is their aromatic character. As aromaticity was an early topic in chemical physics, the overview of relevant theoretical models in this work also mirrors the development of that scientific field in the 20th century. The spectroscopic investigation of these compounds has long been centred on their global optical transitions between frontier orbitals.
The utilization and advancement of X-ray spectroscopic methods characterizing the local electronic structure of molecular samples form the core of this thesis. The element selectivity of the near-edge X-ray absorption fine structure (NEXAFS) is employed to probe the unoccupied density of states at the nitrogen site, which is key for the chemical reactivity of pyridones and porphyrins. The results contribute to the growing database of NEXAFS features and their interpretation, e.g., by advancing the debate on the porphyrin N K-edge through systematic experimental and theoretical arguments. Further, a state-of-the-art laser pump – NEXAFS probe scheme is used to characterize the relaxation pathway of a photoexcited porphyrin on the atomic level.
Resonant inelastic X-ray scattering (RIXS) provides complementary results by accessing the highest occupied valence levels, including symmetry information. It is shown that RIXS is an effective experimental tool for gaining detailed information on the charge densities of individual species in tautomeric mixtures. Additionally, the hRIXS and METRIXS high-resolution RIXS spectrometers, which were in part commissioned in the course of this thesis, will provide access to the ultrafast and thermal chemistry of pyridones, porphyrins, and many other compounds.
With respect to both classes of bio-inspired aromatic molecules, this thesis establishes that even though pyridones and porphyrins differ greatly in their optical absorption bands and hydrogen-bonding abilities, they share a global stabilization of local constitutional changes and of relevant external perturbations. It is because of this wide-ranging response that pyridones and porphyrins can be applied in a manifold of biological and technical processes.
Climate change is one of the greatest challenges to humanity in this century, and its most noticeable consequences are expected to be impacts on the water cycle, in particular on the distribution and availability of water, which is fundamental for all life on Earth. In this context, it is essential to better understand where and when water is available and what processes influence variations in water storages. While estimates of overall terrestrial water storage (TWS) variations are available from the GRACE satellites, these represent the vertically integrated signal over all water stored in ice, snow, soil moisture, groundwater, and surface water bodies. Therefore, complementary observational data and hydrological models are still required to determine the partitioning of the measured signal among the different water storages and to understand the underlying processes. However, the application of large-scale observational data is limited by their specific uncertainties and by the incapacity to measure certain water fluxes and storages. Hydrological models, on the other hand, vary widely in their structure and process representation, and rarely incorporate additional observational data to minimize uncertainties that arise from their simplified representation of the complex hydrologic cycle.
In this context, this thesis aims to contribute to an improved understanding of global water storage variability by combining simple hydrological models with a variety of complementary Earth observation-based data. To this end, a model-data integration approach is developed in which the parameters of a parsimonious hydrological model are calibrated simultaneously against several observational constraints, including GRACE TWS, while taking into account each data set's specific strengths and uncertainties. This approach is used to investigate three specific aspects that are relevant for modelling and understanding the composition of large-scale TWS variations.
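The model-data integration idea — calibrating a parsimonious model against several observational constraints at once, each weighted by its uncertainty — can be sketched with a toy single-storage water balance. The model structure, parameter, and uncertainty values below are illustrative assumptions, not the thesis's actual model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def bucket_model(k, precip, pet, s0=50.0):
    """Toy single-storage water balance with linear-reservoir runoff q = k*S."""
    S, tws, runoff = s0, [], []
    for P, E in zip(precip, pet):
        q = k * S
        S = max(S + P - E - q, 0.0)
        tws.append(S)
        runoff.append(q)
    return np.array(tws), np.array(runoff)

def multi_constraint_cost(k, precip, pet, obs_tws_anom, obs_q,
                          sig_tws=10.0, sig_q=1.0):
    """Squared errors against two data streams, each weighted by its assumed
    observational uncertainty. GRACE-like TWS is compared as anomalies,
    mirroring how the satellite product is provided."""
    tws, q = bucket_model(k, precip, pet)
    tws_anom = tws - tws.mean()
    return (((tws_anom - obs_tws_anom) / sig_tws) ** 2).sum() \
         + (((q - obs_q) / sig_q) ** 2).sum()
```

The uncertainty weights (`sig_tws`, `sig_q`) are what let one data set constrain the parameters more strongly than another, which is the mechanism behind "taking into account each data set's specific strengths and uncertainties" in the approach described above.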
The first study focuses on the Northern latitudes, where snow and cold-region processes define the hydrological cycle. While the study confirms previous findings that seasonal TWS dynamics are dominated by the cyclic accumulation and melt of snow, it reveals that inter-annual TWS variations, by contrast, are determined by variations in liquid water storages. Additionally, it is found to be important to consider the compensatory effects of spatially heterogeneous hydrological variables when aggregating the contributions of different storage components over large areas. Hence, the determinants of TWS variations are scale-dependent, and the underlying driving mechanisms cannot simply be transferred between spatial and temporal scales. These findings are supported by the second study for the global land areas beyond the Northern latitudes as well.
The second study further identifies the considerable impact that the representation of vegetation in hydrological models has on the partitioning of TWS variations. Using spatio-temporally varying fields of Earth observation-based data to parameterize vegetation activity not only significantly improves model performance but also reduces parameter equifinality and process uncertainties. Moreover, the representation of vegetation drastically changes the contributions of different water storages to overall TWS variability, emphasizing the key role of vegetation in water allocation, especially between sub-surface and delayed water storages. However, the study also identifies parameter equifinality regarding the decay of sub-surface and delayed water storages by either evapotranspiration or runoff, and thus emphasizes the need for further constraints.
The third study focuses on the role of river water storage, in particular whether it is necessary to include computationally expensive river routing when calibrating and validating a model against the integrated GRACE TWS. The results suggest that river routing is not required for model calibration in such a global model-data integration approach, owing to the larger influence of other observational constraints; the determinability of certain model parameters and associated processes is identified as an issue of greater relevance. In contrast to model calibration, considering river water storage derived from routing schemes can significantly improve modelled TWS compared with GRACE observations, and should thus be considered when evaluating models against GRACE data.
Beyond these specific findings that contribute to improved understanding and modelling of large-scale TWS variations, this thesis demonstrates the potential of combining simple modeling approaches with diverse Earth observational data to improve model simulations, overcome inconsistencies of different observational data sets, and identify areas that require further research. These findings encourage future efforts to take advantage of the increasing number of diverse global observational data.
Uncovering the interplay between nutrient availability and cellulose biosynthesis inhibitor activity
(2022)
All plant cells are surrounded by a dynamic, carbohydrate-rich extracellular matrix known as the cell wall. Nutrient availability affects cell wall composition via uncharacterized regulatory mechanisms, and cellulose-deficient mutants develop a hypersensitive root response when grown on high concentrations of nitrate. Since cell walls account for the bulk of plant biomass, it is important to understand how nutrients regulate cell walls; this could provide important knowledge for directing fertilizer treatments and for engineering plants with higher nutrient use efficiency. The direct effect of nitrate on cell wall synthesis was investigated through growth assays on varying concentrations of nitrate, by measuring the cellulose content of roots and shoots, and by assessing cellulose synthase (CESA) activity using live-cell imaging with spinning disk confocal microscopy. A forward genetic screen was developed to isolate mutants impaired in nutrient-mediated cell wall regulation, revealing that cellulose biosynthesis inhibitor (CBI) activity is modulated by nutrient availability. Various non-CESA mutants displaying CBI resistance were isolated, with the majority of mutations perturbing mitochondria-localized proteins. To investigate mitochondrial involvement, the CBI mechanism of action was examined using a reverse genetic screen, a targeted pharmacological screen, and -omics approaches. The results suggest that CBI-induced cellulose inhibition is due to off-target effects. This provides the groundwork for investigating uncharacterized processes of CESA regulation and adds valuable knowledge to the understanding of CBI activity, which could be harnessed to develop new and improved herbicides.
With the development of unmanned ships, which are monitored only by personnel in onshore control centres and otherwise operate largely autonomously, powered by electric motors and solar energy and equipped with self-learning navigation programs, international shipping hopes to cut transport costs by more than 20%. This advancing technical development will pose challenges, in particular, to international maritime law. Against this background, the work primarily examines the compatibility of such ships with the United Nations Convention on the Law of the Sea. First, a definition of "ship" for the Convention is developed and the applicability of the treaty to autonomous ships is examined. The work then addresses problem areas such as compliance with duties by these ships, the need for special protective rights, particularly regarding coercive measures by coastal states on board, and the applicability of the existing piracy provisions to such ships. It further raises the question of whether the community of states has a duty under the Convention to promote unmanned ships, especially with a view to protecting the marine environment. Finally, the cyber-security measures required for this particular type of ship are discussed. Overall, the analysis shows that the Convention on the Law of the Sea can, with manageable adjustments, be applied well to autonomous ships.
Umdeutungen des Islams
(2022)
Fanaticism, war, and terror: public interpretations of and stereotypes about Muslims in the Federal Republic of Germany.
A large part of the German population today has a negative perception of Muslims, who are sweepingly accused of a propensity for violence, religious fanaticism, extremism, and the oppression of women. These attributions did not first arise with the terrorist attacks of 11 September 2001 but had already become established over the three preceding decades.
Alexander Konrad examines the changes in West German perceptions of Muslims from the 1970s to the turn of the millennium. He takes a critical look at public statements and actions by actors from politics, the media, academia, religious communities, and civil society. Backgrounds, argumentative overlaps, and agendas are at the centre of his analysis. The author also traces contemporary efforts to develop reflective views of Muslims. With this study, Alexander Konrad makes a fundamental contribution to the historical deconstruction of ways of thinking about Islam and Muslims.
Turnschuhdiplomatie
(2022)
Although sport in the GDR, with its records and medals, was regarded as one of the country's global showpieces, a detailed study of its international sporting relations has so far been lacking. This volume fills that gap with the first continental study of foreign and sport policy, using Africa as its example, and discusses the role played by the many facets of sport in the GDR's Africa policy and the interests to which these were tied in areas including diplomacy, cultural work abroad, elite sport, and foreign trade. The 610-page study covers the period from 1955 to 1990 and draws on more than 2,200 source and literature references, some researched for the first time. Its focal points include detailed country and regional studies on North Africa (Egypt, Algeria) and West Africa (Ghana, Mali, Guinea), as well as on Ethiopia and Mozambique.
The increasing introduction of non-native plant species may pose a threat to local biodiversity. However, the basis of successful plant invasion is not conclusively understood, especially since these plant species can adapt to the new range within a short period of time despite the impoverished genetic diversity of the founding populations. In this context, DNA methylation is considered a promising candidate for explaining successful adaptation mechanisms in the new habitat. DNA methylation is a heritable variation in gene expression that does not change the underlying genetic information. It is thus considered a so-called epigenetic mechanism, but has been studied mainly in clonally reproducing plant species or genetic model plants. An understanding of this epigenetic mechanism in the context of non-native, predominantly sexually reproducing plant species might help to expand knowledge in biodiversity research on the interaction between plants and their habitats and, on this basis, may enable more precise measures in conservation biology.
For my studies, I combined chemical DNA demethylation of field-collected seed material from predominantly sexually reproducing species with rearing the offspring under common climatic conditions, in order to examine DNA methylation in an ecological-evolutionary context. The contrast between chemically treated (demethylated) plants, whose variation in DNA methylation was artificially reduced, and untreated control plants of the same species allowed me to study the impact of this mechanism on adaptive trait differentiation and local adaptation. Against this experimental background, I conducted three studies examining the effect of DNA methylation in non-native species along a climatic gradient and between climatically divergent regions.
The first study focused on adaptive trait differentiation in two invasive perennial goldenrod species, Solidago canadensis sensu lato and S. gigantea AITON, along a climatic gradient of more than 1000 km in Central Europe. I found population differences in flowering time, plant height, and biomass in the longer-established S. canadensis, but only in the number of regrowing shoots in S. gigantea. While S. canadensis did not show any population structure, I was able to identify three genetic groups along this climatic gradient in S. gigantea. Surprisingly, demethylated plants of both species showed no change in the majority of traits studied. In the subsequent second study, I focused on the longer-established goldenrod species S. canadensis and used molecular analyses to infer spatial epigenetic and genetic population differences in the same specimens as in the previous study. I found weak genetic but no epigenetic spatial variation between populations. Additionally, I identified one genetic marker and one epigenetic marker putatively under selection. However, the results of this study reconfirmed that the epigenetic mechanism of DNA methylation appears to be hardly involved in adaptive processes within the new range of S. canadensis.
Finally, I conducted a third study in which I reciprocally transplanted short-lived plant species between two climatically divergent regions in Germany to investigate local adaptation at the plant family level. For this purpose, I used four plant families (Amaranthaceae, Asteraceae, Plantaginaceae, Solanaceae) and additionally compared non-native with native plant species. Seeds were transplanted between regions more than 600 km apart with either a temperate-oceanic or a temperate-continental climate. In this study, some species were found to be maladapted to their own local conditions, in non-native and native plant species alike. In demethylated individuals of the plant species studied, DNA methylation had inconsistent but species-specific effects on survival and biomass production. The results of this study highlight that DNA methylation did not make a substantial contribution to local adaptation in either the non-native or the native species studied.
In summary, my work showed that DNA methylation plays a negligible role in both adaptive trait variation along climatic gradients and local adaptation in non-native plant species that either exhibit a high degree of genetic variation or rely mainly on sexual reproduction with low clonal propagation. I was able to show that the adaptive success of these non-native plant species can hardly be explained by DNA methylation, but may instead be a consequence of multiple introductions, dispersal corridors, and meta-population dynamics. Similarly, my results illustrate that the use of plant species that do not predominantly reproduce clonally and are not model plants is essential to characterize the effect size of epigenetic mechanisms in an ecological-evolutionary context.
Identity management is at the forefront of applications’ security posture. It separates the unauthorised user from the legitimate individual. Identity management models have evolved from the isolated to the centralised paradigm and on to identity federations. Within this advancement, the identity provider emerged as a trusted third party that holds a powerful position. Allen postulated the novel self-sovereign identity paradigm to establish a new balance. Thus, extensive research is required to comprehend its virtues and limitations. Analysing the new paradigm, we initially investigate the blockchain-based self-sovereign identity concept structurally. Moreover, we examine trust requirements in this context by reference to patterns. These patterns comprise major entities linked by a decentralised identity provider. By comparison to the traditional models, we conclude that trust in credential management and authentication is removed. Trust-enhancing attribute aggregation based on multiple attribute providers provokes a further trust shift. Subsequently, we formalise attribute assurance trust modelling by a metaframework. It encompasses the attestation and trust network as well as the trust decision process, including the trust function, as central components. A secure attribute assurance trust model depends on the security of the trust function. The trust function should consider high trust values and several attribute authorities. Furthermore, we evaluate classification, conceptual study, practical analysis and simulation as assessment strategies for trust models. For realising trust-enhancing attribute aggregation, we propose a probabilistic approach. The method builds on the principal characteristics of correctness and validity. These values are combined for one provider and subsequently for multiple issuers. We embed this trust function in a model within the self-sovereign identity ecosystem.
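Combining correctness and validity for one provider and then across multiple issuers can be illustrated with a minimal probabilistic sketch. The concrete combination rules below (a product per provider, a noisy-OR across independent providers) are assumptions chosen for illustration, not the dissertation's exact trust function:

```python
# Minimal sketch of a probabilistic attribute-assurance trust function.
# The combination rules (product per provider, noisy-OR across providers)
# are illustrative assumptions, not the dissertation's model.

def provider_trust(correctness: float, validity: float) -> float:
    """Combine one issuer's correctness and validity into a single score."""
    assert 0.0 <= correctness <= 1.0 and 0.0 <= validity <= 1.0
    return correctness * validity

def aggregate_trust(providers):
    """Aggregate several issuers: the attribute is trusted if at least one
    independent provider is both correct and valid (noisy-OR)."""
    distrust = 1.0
    for correctness, validity in providers:
        distrust *= 1.0 - provider_trust(correctness, validity)
    return 1.0 - distrust
```

Under these assumptions, adding a second issuer can only increase the aggregate trust value, which matches the intuition that attestations from several attribute authorities are trust-enhancing.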
To practically apply the trust function and solve several challenges for the service provider that arise from adopting self-sovereign identity solutions, we conceptualise and implement an identity broker. The mediator applies a component-based architecture to abstract from a single solution. Standard identity and access management protocols build the interface for applications. We can conclude that the broker’s usage at the side of the service provider does not undermine self-sovereign principles, but fosters the advancement of the ecosystem. The identity broker is applied to sample web applications with distinct attribute requirements to showcase usefulness for authentication and attribute-based access control within a case study.
Public administrations confront fundamental challenges, including globalization, digitalization, and an eroding level of trust from society. By developing joint public service delivery with other stakeholders, public administrations can respond to these challenges. This increases the importance of inter-organizational governance—a development often referred to as New Public Governance, which to date has not been realized because public administrations focus on intra-organizational practices and follow the traditional “governmental chain.”
E-government initiatives, which can lead to high levels of interconnected public services, are currently perceived as insufficient to meet this goal. They are not designed holistically and merely affect the interactions of public and non-public stakeholders. A fundamental shift toward a joint public service delivery would require scrutiny of established processes, roles, and interactions between stakeholders.
Various scientists and practitioners within the public sector assume that the use of blockchain, an institutional technology, could fundamentally change the relationship between public and non-public stakeholders. At first glance, inter-organizational, joint public service delivery could benefit from the use of blockchain. This dissertation aims to shed light on this widespread assumption. Hence, the objective of this dissertation is to substantiate the effect of blockchain on the relationship between public administrations and non-public stakeholders.
This objective is pursued by defining three major areas of interest. First, this dissertation strives to answer the question of whether or not blockchain is suited to enable New Public Governance and to identify instances where blockchain may not be the proper solution. The second area aims to understand empirically the status quo of existing blockchain implementations in the public sector and whether they comply with the major theoretical conclusions. The third area investigates the changing role of public administrations, as the blockchain ecosystem can significantly increase the number of stakeholders.
Corresponding research is conducted to provide insights into these areas, for example, combining theoretical concepts with empirical actualities, conducting interviews with subject matter experts and key stakeholders of leading blockchain implementations, and performing a comprehensive stakeholder analysis, followed by visualization of its results.
The results of this dissertation demonstrate that blockchain can support New Public Governance in many ways, while having only a minor impact on certain aspects that constitute this public service paradigm (e.g., decentralized control). Furthermore, the existing projects indicate changes to relationships between public administrations and non-public stakeholders, although not necessarily the fundamental shift proposed by New Public Governance. Lastly, the results suggest that power relations are shifting, including a decreasing influence of public administrations within the blockchain ecosystem. The results raise questions about the governance models and regulations required to support mature solutions and the further diffusion of blockchain for public service delivery.
In this thesis, I present my contributions to the field of ultrafast molecular spectroscopy. Using the molecule 2-thiouracil as an example, I use ultrashort x-ray pulses from free-electron lasers to study the relaxation dynamics of gas-phase molecular samples. Taking advantage of the element and site selectivity typical of x-rays, I investigate the charge flow and geometrical changes in the excited states of 2-thiouracil.
In order to understand the photoinduced dynamics of molecules, knowledge about the ground-state structure and the relaxation after photoexcitation is crucial. Therefore, a part of this thesis covers the electronic ground-state spectroscopy of mainly 2-thiouracil to provide the basis for the time-resolved experiments. Many of the previously published studies that focused on the gas-phase time-resolved dynamics of thionated uracils after UV excitation relied on information from solution-phase spectroscopy to determine the excitation energies. This is not an optimal strategy, as solvents alter the absorption spectrum and, hence, there is no guarantee that liquid-phase spectra resemble the gas-phase spectra. Therefore, I measured the UV-absorption spectra of all three thionated uracils to provide a gas-phase reference and, in combination with calculations, we determined the excited states involved in the transitions.
In contrast to the UV absorption, the literature on the x-ray spectroscopy of thionated uracils is sparse. Thus, we measured static photoelectron, Auger-Meitner and x-ray absorption spectra at the sulfur L-edge before, or in parallel with, the time-resolved experiments we performed at FLASH (DESY, Hamburg). In addition, (so far unpublished) measurements were performed at the synchrotron SOLEIL (France), which have been included in this thesis and show the spin-orbit splitting of the S 2p photoline and its satellite, which was not observed at the free-electron laser.
The relaxation of 2-thiouracil has been studied extensively in recent years with ultrafast visible and ultraviolet methods, revealing the ultrafast nature of the molecular processes after photoexcitation. Ultrafast spectroscopy probing the core-level electrons provides a complementary approach to common optical ultrafast techniques. The method inherits its local sensitivity from the strongly localised core electrons. The core energies and core-valence transitions are strongly affected by local valence charge and geometry changes, and past studies have utilised this sensitivity to investigate the molecular processes reflected by the ultrafast dynamics. We have built an apparatus that meets the requirements for time-resolved x-ray spectroscopy on molecules in the gas phase. With this apparatus, we performed UV-pump x-ray-probe electron spectroscopy at the S 2p edge of 2-thiouracil using the free-electron laser FLASH2. While the UV triggers the relaxation dynamics, the x-rays probe the single sulfur atom inside the molecule. I implemented photoline self-referencing for the photoelectron spectral analysis. This minimises the influence of the spectral jitter of the FEL, which is due to the underlying self-amplified spontaneous emission (SASE) process. With this approach, we were not only able to study dynamical changes in the binding energy of the electrons but also to detect an oscillatory behaviour in the shift of the observed photoline, which we associate with non-adiabatic dynamics involving several electronic states. Moreover, we were able to link the UV-induced shift in binding energy to the local charge flow at the sulfur, which is directly connected to the electronic state. Furthermore, the analysis of the Auger-Meitner electrons shows that energy shifts observed at early stages of the photoinduced relaxation are related to the geometry change in the molecule.
More specifically, the observed increase in kinetic energy of the Auger-Meitner electrons correlates with a previously predicted C=S bond stretch.
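The idea behind photoline self-referencing can be illustrated with a small numerical sketch: because the SASE photon-energy jitter shifts all photolines of a given shot together, measuring the probed line relative to a reference line in the same shot cancels the jitter. The spectra, line positions, and energy windows below are hypothetical, not the experimental values:

```python
import numpy as np

# Numerical sketch of photoline self-referencing. The per-shot FEL photon-
# energy jitter shifts all photolines of one shot together, so the distance
# between the probed line and a reference line in the same shot is
# jitter-free. All spectra, windows, and line positions are hypothetical.

def centroid(energies, counts, window):
    """First-moment position of a spectral line inside an energy window."""
    lo, hi = window
    m = (energies >= lo) & (energies <= hi)
    return np.sum(energies[m] * counts[m]) / np.sum(counts[m])

def self_referenced_shift(energies, shot_spectra, ref_window, probe_window):
    """Per-shot probe-line position measured relative to the reference line."""
    return np.array([
        centroid(energies, counts, probe_window)
        - centroid(energies, counts, ref_window)
        for counts in shot_spectra
    ])
```

In this toy setting, two Gaussian lines that jitter together by up to ±1 energy unit per shot still yield a constant self-referenced separation, so only genuine pump-induced shifts of the probed line survive the analysis.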
Seismology, like many scientific fields, e.g., music information retrieval and speech signal processing, is experiencing exponential growth in the amount of data acquired by modern seismological networks. In this thesis, I take advantage of the opportunities offered by "big data" and by the methods developed in the areas of music information retrieval and machine learning to better predict the ground motion generated by earthquakes and to study the properties of the surface layers of the Earth. In order to better predict seismic ground motions, I propose two approaches based on unsupervised deep learning methods: an autoencoder network and Generative Adversarial Networks. The autoencoder technique explores a massive amount of ground motion data, evaluates the required parameters, and generates synthetic ground motion data in the Fourier amplitude spectra (FAS) domain. This method is tested on two synthetic datasets and one real dataset. The application to the real dataset shows that the substantial information contained within the FAS data can be encoded in a four- to five-dimensional manifold. Consequently, only a few independent parameters are required for efficient ground motion prediction. I also propose a method based on Conditional Generative Adversarial Networks (CGAN) for simulating ground motion records in the time-frequency and time domains. The CGAN generates time-frequency representations conditioned on the parameters magnitude, distance, and shear wave velocity in the upper 30 m (VS30). After generating the amplitude of the time-frequency representation with the CGAN model, instead of following conventional methods that combine the amplitude spectra with a random phase spectrum, the phase is recovered by minimizing the misfit between the observed and reconstructed spectrograms. In the second part of this dissertation, I propose two methods for the monitoring and characterization of near-surface materials and site effect analyses.
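Recovering a phase spectrum by minimizing the mismatch between an observed and a reconstructed spectrogram is closely related to iterative spectrogram inversion in the style of Griffin and Lim. The sketch below illustrates that idea only; the iteration scheme and parameters are assumptions, not the thesis' exact implementation:

```python
import numpy as np
from scipy.signal import stft, istft

# Illustrative sketch of phase recovery from a spectrogram magnitude by
# iterative spectrogram inversion (Griffin-Lim style). This is an assumed
# stand-in for the minimisation step described in the text.

def recover_phase(mag, fs=100.0, nperseg=64, n_iter=60, seed=0):
    """Return a time signal whose STFT magnitude approximates `mag`."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))
    x = None
    for _ in range(n_iter):
        # Inverse-transform with the current phase estimate...
        _, x = istft(mag * phase, fs=fs, nperseg=nperseg)
        # ...re-analyse the signal, and keep only its phase.
        _, _, Z = stft(x, fs=fs, nperseg=nperseg)
        pad = mag.shape[1] - Z.shape[1]
        Z = np.pad(Z, ((0, 0), (0, max(pad, 0))))[:, : mag.shape[1]]
        phase = np.exp(1j * np.angle(Z))
    return x
```

Each iteration enforces the known magnitude and re-projects onto the set of consistent spectrograms, so the reconstructed signal's spectrogram converges toward the target amplitude without any random-phase assumption.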
I implemented an autocorrelation function and an interferometry method to monitor the velocity changes of near-surface materials resulting from the Kumamoto earthquake sequence (Japan, 2016). The observed seismic velocity changes during the strong shaking are due to the non-linear response of the near-surface materials. The results show that the velocity changes lasted for about two months after the Kumamoto mainshock. Furthermore, I used the velocity changes to evaluate the in-situ strain-stress relationship. I also propose a method for assessing the site proxy "VS30" using non-invasive analysis. In the proposed method, a dispersion curve of surface waves is inverted to estimate the shear wave velocity of the subsurface. This method is based on Dix-like linear operators, which relate the shear wave velocity to the phase velocity. The proposed method is fast, efficient, and stable. All of the methods presented in this work can be used for processing "big data" in seismology and for the analysis of weak and strong ground motion data, to predict ground shaking, and to analyze site responses by considering potential time dependencies and nonlinearities.
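Relative velocity changes of the kind monitored here are commonly quantified by the stretching technique applied to autocorrelation or cross-correlation functions. The sketch below shows one such estimator under the usual dv/v = -dt/t convention; it is a generic illustration, not necessarily the estimator used in the thesis:

```python
import numpy as np

# Generic sketch of the stretching technique for dv/v estimation.
# A homogeneous velocity change dv/v = -eps stretches all lag times by a
# factor (1 + eps); the best eps is found by grid search over the
# correlation with a reference trace. Illustrative, not the thesis code.

def stretching_dvv(ref, cur, t, eps_grid):
    """Return (dv/v, correlation) for the best-fitting stretch factor."""
    best_eps, best_cc = 0.0, -np.inf
    for eps in eps_grid:
        # Resample the current trace on stretched lag times.
        stretched = np.interp(t * (1.0 + eps), t, cur)
        cc = np.corrcoef(ref, stretched)[0, 1]
        if cc > best_cc:
            best_eps, best_cc = eps, cc
    return -best_eps, best_cc  # dv/v = -dt/t = -eps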
This cumulative doctoral thesis consists of three empirical studies that examine the role of top-level executives in shaping adverse financial reporting outcomes and other forms of corporate misconduct. The first study examines CEO effects on a wide range of offenses. Using data from enforcement actions by more than 50 U.S. federal agencies, regression re-sults show CEO effects on the likelihood, frequency, and severity of corporate misconduct. The findings hold for financial, labor-related, and environmental offenses; however, CEO effects are more pronounced for non-financial misconduct. Further results show a positive relation between CEO ability and non-financial misconduct, but no relation with financial misconduct, suggesting that higher CEO ability can have adverse consequences for employee welfare and society and public health. The second study focuses on CEO and CFO effects on financial misreporting. Using data on restatements and public enforcement actions, regression results show that the incremental effect of CFOs is economically larger than that of CEOs. This greater economic impact of CFOs is particularly pronounced for fraudulent misreporting. The findings remain consistent across different samples, methods, misreporting measures, and specification choices for the underlying conceptual mechanism, highlighting the important role of the CFO as a key player in the beyond-GAAP setting. The third study reexamines the relation between equity incentives and different reporting outcomes. The literature review reveals large variation in the empirical measures for firm size as standard control variable, equity incentives as key explanatory variables, and the reporting outcome of interest. Regres-sion results show that these design choices have a direct bearing on empirical results, with changes in t-statistics that often exceed typical thresholds for statistical significance. 
The find-ings hold for aggressive accrual management, earnings management through discretionary accruals, and material misstatements, suggesting that common design choices can have a large impact on whether equity incentives effects are considered significant or not.
Im Rahmen dieser Dissertation wurde der Sauerstoff im Grundgerüst der [1,3]-Dioxolo[4.5-f]benzodioxol-Fluoreszenzfarbstoffe (DBD-Fluoreszenzfarbstoffe) vollständig mit Schwefel ausgetauscht und daraus eine neue Klasse von Fluoreszenzfarbstoffen entwickelt, die Benzo[1,2-d:4,5-d']bis([1,3]dithiol)-Fluorophore (S4-DBD-Fluorophore). Insgesamt neun der besonders interessanten, difunktionalisierten Vertreter konnten synthetisiert werden, die sich in ihren elektronenziehenden Gruppen und in ihrer Anordnung unterschieden.
Durch den Austausch von Sauerstoff mit Schwefel kam es zu teilweise auffälligen Veränderungen in den Fluoreszenzparametern, wie eine Abnahme der Fluoreszenzquantenausbeuten und -lebenszeiten aber auch eine deutliche Rotverschiebung in den Absorptions- und Emissionswellenlängen mit großen STOKES-Verschiebungen. Damit sind die S4-DBD-Fluorophore eine wertvolle Ergänzung für die DBD-Farbstoffe.
Die Ursachen für die Abnahme der Lebenszeiten und Quantenausbeuten konnte auf eine hohe Besetzung des Triplett-Zustandes zurückgeführt werden, welcher durch die verstärkten Spin-Bahn-Kopplungen des Schwefels hervorgerufen wird. Zusammen mit dem Arbeitskreis physikalische Chemie der Universität Potsdam konnten auch die photophysikalischen Prozesse über die Transienten-Absorptionsspektroskopie (TAS) aufgeklärt werden.
Eine Strategie zur Funktionalisierung der S4-DBD-Farbstoffe am Thioacetalgerüst konnte entwickelt werden. So gelang es Alkohol-, Propargyl-, Azid-, NHS-Ester-, Carbonsäure-, Maleimid- und Tosyl-Gruppen an S4-DBD-Dialdehyden anzubringen.
Erweiternd wurden molekulare Stäbe auf Basis von Schwefel-Oligo-Spiro-Ketalen (SOSKs) untersucht, bei denen Sauerstoff durch Schwefel ersetzt wurde. Hier konnten die Synthesen der löslichkeitsvermittelnden TER-Muffe und auch des Tetrathiapentaerythritols als Grundbaustein deutlich verbessert werden. Aus diesen konnte ein einfaches SOSK-Polymer hergestellt werden. Weitere Versuche zum Aufbau eines Stabes müssen aber noch untersucht werden. Um einen S-OSK-Stab aufzubauen hat sich dabei die Dithiocarbonat-Gruppe in ersten Versuchen als potenzielle geeignete Schutzgruppe für das Tetrathiapentaerythritol herausgestellt.
The Andes are a ~7000 km long N-S trending mountain range developed along the South American western continental margin. Driven by the subduction of the oceanic Nazca plate beneath the continental South American plate, the formation of the northern and central parts of the orogen is a type case for a non-collisional orogeny. In the southern Central Andes (SCA, 29°S-39°S), the oceanic plate changes the subduction angle between 33°S and 35°S from almost horizontal (< 5° dip) in the north to a steeper angle (~30° dip) in the south. This sector of the Andes also displays remarkable along- and across- strike variations of the tectonic deformation patterns. These include a systematic decrease of topographic elevation, of crustal shortening and foreland and orogenic width, as well as an alternation of the foreland deformation style between thick-skinned and thin-skinned recorded along- and across the strike of the subduction zone. Moreover, the SCA are a very seismically active region. The continental plate is characterized by a relatively shallow seismicity (< 30 km depth) which is mainly focussed at the transition from the orogen to the lowland areas of the foreland and the forearc; in contrast, deeper seismicity occurs below the interiors of the northern foreland. Additionally, frequent seismicity is also recorded in the shallow parts of the oceanic plate and in a sector of the flat slab segment between 31°S and 33°S. The observed spatial heterogeneity in tectonic and seismic deformation in the SCA has been attributed to multiple causes, including variations in sediment thickness, the presence of inherited structures and changes in the subduction angle of the oceanic slab. However, there is no study that inquired the relationship between the long-term rheological configuration of the SCA and the spatial deformation patterns. 
Moreover, the effects of the density and thickness configuration of the continental plate and of variations in the slab dip angle in the rheological state of the lithosphere have been not thoroughly investigated yet. Since rheology depends on composition, pressure and temperature, a detailed characterization of the compositional, structural and thermal fields of the lithosphere is needed. Therefore, by using multiple geophysical approaches and data sources, I constructed the following 3D models of the SCA lithosphere: (i) a seismically-constrained structural and density model that was tested against the gravity field; (ii) a thermal model integrating the conversion of mantle shear-wave velocities to temperature with steady-state conductive calculations in the uppermost lithosphere (< 50 km depth), validated by temperature and heat-flow measurements; and (iii) a rheological model of the long-term lithospheric strength using as input the previously-generated models.
The results of this dissertation indicate that the present-day thermal and rheological fields of the SCA are controlled by different mechanisms at different depths. At shallow depths (< 50 km), the thermomechanical field is modulated by the heterogeneous composition of the continental lithosphere. The overprint of the oceanic slab is detectable where the oceanic plate is shallow (< 85 km depth) and the radiogenic crust is thin, resulting in overall lower temperatures and higher strength compared to regions where the slab is steep and the radiogenic crust is thick. At depths > 50 km, largest temperatures variations occur where the descending slab is detected, which implies that the deep thermal field is mainly affected by the slab dip geometry.
The outcomes of this thesis suggests that long-term thermomechanical state of the lithosphere influences the spatial distribution of seismic deformation. Most of the seismicity within the continental plate occurs above the modelled transition from brittle to ductile conditions. Additionally, there is a spatial correlation between the location of these events and the transition from the mechanically strong domains of the forearc and foreland to the weak domain of the orogen. In contrast, seismicity within the oceanic plate is also detected where long-term ductile conditions are expected. I therefore analysed the possible influence of additional mechanisms triggering these earthquakes, including the compaction of sediments in the subduction interface and dehydration reactions in the slab. To that aim, I carried out a qualitative analysis of the state of hydration in the mantle using the ratio between compressional- and shear-wave velocity (vp/vs ratio) from a previous seismic tomography. The results from this analysis indicate that the majority of the seismicity spatially correlates with hydrated areas of the slab and overlying continental mantle, with the exception of the cluster within the flat slab segment. In this region, earthquakes are likely triggered by flexural processes where the slab changes from a flat to a steep subduction angle.
First-order variations in the observed tectonic patterns also seem to be influenced by the thermomechanical configuration of the lithosphere. The mechanically strong domains of the forearc and foreland, due to their resistance to deformation, display smaller amounts of shortening than the relatively weak orogenic domain. In addition, the structural and thermomechanical characteristics modelled in this dissertation confirm previous analyses from geodynamic models pointing to the control of the observed heterogeneities in the orogen and foreland deformation style. These characteristics include the lithospheric and crustal thickness, the presence of weak sediments and the variations in gravitational potential energy.
Specific conditions occur in the cold and strong northern foreland, which is characterized by active seismicity and thick-skinned structures, although the modelled crustal strength exceeds the typical values of externally-applied tectonic stresses. The additional mechanisms that could explain the strain localization in a region that should resist deformation are: (i) increased tectonic forces coming from the steepening of the slab and (ii) enhanced weakening along inherited structures from pre-Andean deformation events. Finally, the thermomechanical conditions of this sector of the foreland could be a key factor influencing the preservation of the flat subduction angle at these latitudes of the SCA.
Die Arbeit untersucht bewaffnete Konfliktszenarien, in denen an multinationalen Militäroperationen beteiligte Staaten während einer Gewahrsamsoperation gegnerische Kräfte oder andere Personen in Gewahrsam nehmen und diese dann an die Kräfte eines anderen Staates, oftmals der Hostnation mit zweifelhafter Menschenrechtsreputation, überstellen. Gewahrsamspersonen laufen dann Gefahr, Opfer erheblicher Rechtsverletzungen zu werden
Hitze ist eine bedeutende klimatische Bedingung, die das Wachstum und das Überleben von Pflanzen bedroht. Extreme Temperaturereignisse in der Natur werden gravierender, häufiger, länger anhaltend, was sich nachteilig auf die landwirtschaftliche Produktion auswirkt. Daher ist es wichtig, mehr über die Mechanismen zu erfahren, die zu einer erhöhten Hitzetoleranz bei Pflanzen führen. Um auszuhalten und zu überleben, haben höhere Pflanzen komplexe Mechanismen entwickelt, um auf verschiedene Intensitäten von Hitzestress zu reagieren. Pflanzen haben eine thermische Toleranz, die es ihnen ermöglicht, schnelle und dramatische Temperaturanstiege für eine begrenzte Zeit zu überleben. Pflanzen können auch darauf vorbereitet werden, Hitzestress (HS) zu widerstehen, der ansonsten tödlich wäre, indem man sie kurzen, moderaten und nicht-tödlichen HS (als Priming-Stimulus bezeichnet) aussetzt, bevor sie hohem HS ausgesetzt werden. Eine erworbene Thermotoleranz kann bei Pflanzen unter optimalen Bedingungen lange aufrechterhalten werden, was bedeutet, dass Pflanzen während dieser Zeit Informationen speichern können. Mehrere Studien haben gezeigt, dass sich erworbene Thermotoleranz (Thermopriming) auf die erhöhte Widerstandsfähigkeit von Zellen, Geweben und Organismen gegenüber erhöhten Temperaturen nach vorheriger Hitzeeinwirkung bezieht. Die Aufrechterhaltung der erworbenen Thermotoleranz (Thermomemory) ist mit der Synthese von speziellen Stressproteinen verbunden, die am Zellschutz und der beschleunigten Gewebereparatur beteiligt sind, wie z. B. Hitzeschockproteine (HSPs). Neuere Studien haben eine Beteiligung von Hitzeschockproteinen, z.B. HSP21, in Chloroplasten an der Regulation des Thermogedächtnisses belegt. Als wichtiges Organell ist die mitochondriale Funktion entscheidend für die Reaktion von Pflanzenzellen auf Hitze. Es ist jedoch noch unbekannt, wie die molekulare und physiologische Beteiligung von HSPs an der mitochondrialen Funktion im Thermogedächtnis erfolgt. 
In unserer Studie haben wir gezeigt, dass Thermopriming Transkript- und Proteinspiegel von zwei mitochondrialen kleinen Hitzeschockproteinen, HSP23.56 (AT5G51440) und HSP23.6 (AT4G25200), induziert, die während der Thermogedächtnisphase 2-3 Tage andauern. Die morphologische Analyse von HSP23.5/6-transgenen Pflanzen zeigte eine HSP23.5/6-Funktionsredundanz bei Hitzestress. Wir zeigten, dass hsp23.5/6-Doppel-Knockout-Pflanzen Anomalien im Thermogedächtnis im Keimlingsstadium aufwiesen und dass reife hsp23.5/6-Pflanzen sowohl mit basaler Thermotoleranz als auch mit Thermogedächtnis empfindlicher sind. Die Wärmebehandlung beeinflusste die Atmungsrate von hsp23.5/6-Keimlingen im Vergleich zu WT signifikant, was auf eine mitochondriale Dysfunktion in Abhängigkeit von HSP23.5 und HSP23.6 hinweist. Darüber hinaus haben wir die Chaperon-Aktivität von HSP23.6 gegenüber dem Modellsubstratprotein Malatdehydrogenase (MDH) in vitro getestet und bestätigt, was darauf hindeutet, dass HSP23.6 möglicherweise zur Aufrechterhaltung der zellulären Lebensfähigkeit beiträgt. Darüber hinaus entdeckten wir ein neues HSP23.6-Clientprotein, CIB22, ein mitochondriales Komplex-I-Untereinheitsprotein. Nach experimentellen Daten (BiFC und Co-IP) interagieren HSP23.6 und CIB22 in Pflanzenzellen. Wir identifizierten auch einen Hitzereaktionsphänotyp in der cib22-Mutante im Vergleich zu WT sowie einen CIB22-Proteinabbau in der hsp23.5/6-Mutante, wenn sie Hitze ausgesetzt wurde. Unsere Ergebnisse legen nahe, dass die beiden mitochondrial lokalisierten
Hitzeschockproteine eine Rolle bei der Thermotoleranz spielen, vermutlich indem sie die mitochondriale Funktion und Struktur beeinflussen. Um neue genetische Komponenten zu identifizieren, die mit dem Thermogedächtnis in Pflanzen verbunden sind, haben wir weiterhin ein Proteom-Profiling von Arabidopsis WT (Col-0) -Keimlingen während des Thermogedächtnisses durchgeführt. Mehrere Zeitpunkte von Priming und Triggerung mit Kontrollen wurden gesammelt und analysiert, um dynamische Proteomänderungen während der Gedächtnisphase in
Arabidopsis-Zellen aufzudecken. Unter den Top-gedächtnis-assoziierten Proteinen entdeckten wir, dass HSP70-4 nach dem Priming signifikant hochreguliert wurde und für die nächsten vier Tage auf hohem Niveau bleibt (mindestens 2-fach erhöht). Durch Analyse ihres Hitzestressverhaltens konnten wir verifizieren, dass HSP70-4 an der 7 Reaktion von Pflanzen auf Hitzestress beteiligt ist. Interessant ist, dass HSP70-4-GFP nach dem Priming zytosolische Foci erzeugt, die für einige Tage während der Erholungsphase bestehen bleiben. Wir schlagen vor, dass der Fokus mit SGs verbunden ist, da Cycloheximid (CHX) GFP-Foci-Signale unterdrückt, wenn sie der Hitze ausgesetzt werden. Diese Ergebnisse weisen auf eine HSP70-4-vermittelte Transkriptions- und Translationssteuerungsverbindung (Modul) während der basalen Thermotoleranz und des Thermogedächtnisses sowie auf ihre potenzielle(n) Rolle(n) bei der Reaktion auf Hitzestress hin.
Zusammenfassend bietet unsere Forschung neue Einblicke in die Rolle von Hitzeschockproteinen bei der Kontrolle der Hitzestresstoleranz und des Gedächtnisses.
This paper examines the function that cross-cultural competence (3C) has for NATO in a military context while focusing on two member states and their armed forces: the United States and Germany. Three dimensions were established to analyze 3C internally and externally: dimension A, dealing with 3C within the military organization; dimension B, focusing on 3C in a coalition environment/multicultural NATO contingent, for example while on a mission/training exercise abroad; and dimension C, covering 3C and NATO missions abroad with regard to interaction with the local population.
When developing the research design, the cultural studies-based theory of hegemony constructed by Antonio Gramsci was applied to a comprehensive document analysis of 3C coursework and regulations as well as official documents in order to establish a typification for cross-cultural competence.
As a result, 3C could be categorized as Type I – Ethical 3C, Type II – Hegemonic 3C, and Type III – Dominant 3C. Attributes were assigned according to each type. To validate the established typification, qualitative surveys were conducted with NATO (ACT), the U.S. Armed Forces (USCENTCOM), and the German Armed Forces (BMVg). These interviews validated the typification and revealed a varied approach to 3C in the established dimensions. It became evident that dimensions A and B indicated a prevalence of Type III, which greatly impacts the work atmosphere and effectiveness for NATO (ACT). In contrast, dimension C revealed the use of postcolonial mechanisms by NATO forces, such as applying one’s value systems to other cultures and having the appearance of an occupying force when 3C is not applied (Type I-II). In general, the function of each 3C type in the various dimensions could be determined.
In addition, a comparative study of the document analysis and the qualitative surveys resulted in a canon of culture-general skills. Given the observed lack of coherence in 3C, which correlates with a demonstrably negative impact on effectiveness, efficiency, and interoperability, a NATO standard in the form of a standardization agreement (STANAG) was suggested based on the aforementioned findings, with a focus on: empathy, cross-cultural awareness, communication skills (including active listening), flexibility and adaptability, and interest. Moreover, tolerance of ambiguity, teachability, patience, observation skills, and perspective-taking could be considered significant. Suspending judgment and respect are also relevant skills here.
At the same time, the document analysis also revealed a lack of coherence and consistency in 3C education and interorganizational alignment. In particular, the documents examined for the U.S. Forces indicated divergent approaches. Furthermore, the interview analysis revealed, in places, a large discrepancy between doctrine and actual implementation with regard to the NATO Forces.
Why do exercises in collaborative governance often witness more impasse than advantage? This cumulative dissertation undertakes a micro-level analysis of collaborative governance to tackle this research puzzle. It situates micropolitics at the very center of analysis: a wide range of activities, interventions, and tactics used by actors – be they conveners, facilitators, or participants – to shape the collaborative exercise. It is by focusing on these daily minutiae, and on the consequences that they bring along, the study argues, that we can better understand why and how collaboration can become stuck or unproductive. To do so, the foundational part of this dissertation (Article 1) uses power as a sensitizing concept to investigate the micro-dynamics that shape collaboration. It develops an analytical approach to advance the study of collaborative governance at the empirical level under a power-sensitive and process-oriented perspective. The subsequent articles follow the dissertation's red thread of investigating the micropolitics of collaborative governance by showing facilitation artefacts' interrelatedness and contribution to the potential success or failure of collaborative arrangements (Article 2); and by examining the specialized knowledge, skills and practices mobilized when designing a collaborative process (Article 3). The work is based on an abductive research approach, tacking back and forth between empirical data and theory, and offers a repertoire of concepts – from analytical terms (designed and emerging interaction orders, flows of power, arenas for power), to facilitation practices (scripting, situating, and supervising) and types of knowledge (process expertise) – to illustrate and study the detailed and constant work (and rework) that surrounds collaborative arrangements. These concepts sharpen the way researchers can look at, observe, and understand collaborative processes at a micro level. 
The thesis thereby elucidates the subtleties of power, which may be overlooked if we focus only on outcomes rather than the processes that engender them, and supports efforts to identify potential sources of impasse.
The Antarctic ice sheet is the largest freshwater reservoir worldwide. If it were to melt completely, global sea levels would rise by about 58 m. Projecting the Antarctic contribution to sea-level rise under global warming is an ongoing effort that still yields large ranges in predictions. Among the reasons for this are uncertainties related to the physics of ice sheet modeling. These
uncertainties include two processes that could lead to runaway ice retreat: the Marine Ice Sheet Instability (MISI), which causes rapid grounding line retreat on retrograde bedrock, and the Marine Ice Cliff Instability (MICI), in which tall ice cliffs become unstable and calve off, exposing even taller ice cliffs.
In my thesis, I investigated both marine instabilities (MISI and MICI) using the Parallel Ice Sheet Model (PISM), with a focus on MICI.
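The roughly 58 m sea-level equivalent quoted above can be cross-checked with a simple back-of-envelope estimate; the ice-volume and ocean-area values below are standard literature figures assumed here for illustration, not numbers taken from this thesis:

```latex
% Illustrative estimate (assumed standard values, not from the thesis):
% Antarctic ice volume V \approx 26.5\times10^{6}\,\mathrm{km^3},
% ocean area A \approx 3.62\times10^{8}\,\mathrm{km^2},
% density ratio \rho_{\mathrm{ice}}/\rho_{\mathrm{water}} \approx 0.92.
\Delta h \approx \frac{V\,\rho_{\mathrm{ice}}/\rho_{\mathrm{water}}}{A}
         = \frac{26.5\times10^{6}\,\mathrm{km^3}\times 0.92}
                {3.62\times10^{8}\,\mathrm{km^2}}
         \approx 6.7\times10^{-2}\,\mathrm{km} \approx 67\,\mathrm{m}.
```

This naive figure overshoots the quoted ~58 m because part of the Antarctic ice sits below sea level and already displaces ocean water, so only the ice above flotation contributes to the sea-level equivalent.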
The demand for learning Design Thinking (DT) as a path towards acquiring 21st-century skills has increased globally in the last decade. Because DT education originated in the Silicon Valley context of the d.school at Stanford, it is important to evaluate how the teaching of the methodology adapts to different cultural contexts. The thesis explores the impact of the socio-cultural context on DT education.
DT institutes in Cape Town, South Africa and Kuala Lumpur, Malaysia, were visited to observe their programs and conduct 22 semi-structured interviews with local educators regarding their adaptation strategies. Grounded theory methodology was used to develop a model of Socio-Cultural Adaptation of Design Thinking Education that maps these strategies onto five dimensions: Planning, Process, People, Place, and Presentation. Based on this model, a list of recommendations is provided to help DT educators and practitioners design and deliver culturally inclusive DT education.
As climate change worsens, there is a growing urgency to promote renewable energies and improve their accessibility to society. Here, solar energy harvesting is of particular importance. Metal halide perovskite (MHP) solar cells are currently indispensable in research on future solar energy generation. MHPs are crystalline semiconductors increasingly relevant as low-cost, high-performance materials for optoelectronics. Their processing from solution at low temperature enables easy fabrication of thin film elements such as solar cells, light-emitting diodes and photodetectors. Understanding the coordination chemistry of MHPs in their precursor solution would allow control over the thin film crystallization, the material properties and the final device performance.
In this work, we elaborate on the key parameters to manipulate the precursor solution with the long-term objective of enabling systematic process control. We focus on the nanostructural characterization of the initial arrangements of MHPs in the precursor solutions. Small-angle scattering is particularly well suited for measuring nanoparticles in solution. This technique proved valuable for the direct analysis of perovskite precursor solutions at standard processing concentrations without causing radiation damage. We gain insights into the chemical nature of widely used precursor structures such as methylammonium lead iodide (MAPbI3), presenting first insights into the complex arrangements and interactions within this precursor state. Furthermore, we transfer the preceding results to other, more complex perovskite precursors. The influence of compositional engineering is investigated using the addition of alkali cations as an example. As a result, we propose a detailed working mechanism for how the alkali cations suppress the formation of intermediate phases and improve the quality of the crystalline thin film. In addition, we investigate the crystallization process of a tin-based perovskite composition (FASnI3) under the influence of fluoride chemistry. We prove that the frequently used additive, tin fluoride (SnF2), selectively binds undesired oxidized tin (Sn(IV)) in the precursor solution. This prevents its incorporation into the actual crystal structure and thus reduces the defect density of the material. Furthermore, SnF2 leads to a more homogeneous crystal growth process, which results in improved crystal quality of the thin film material.
In total, this study provides a detailed characterization of the complex system of perovskite precursor chemistry. We thereby cover relevant parameters for future MHP solar cell process control, such as (I) the environmental impact based on concentration and temperature, (II) the addition of counter ions to reduce the diffuse layer surrounding the precursor nanostructures, and (III) the targeted use of additives to eliminate unwanted components selectively and to ensure a more homogeneous crystal growth.
The Clash of the Images
(2022)
In everyday life, we take there to be ordinary objects such as persons, tables, and stones bearing certain properties such as color and shape and standing in various causal relationships to each other. Basic convictions such as these form our everyday picture of the world: the manifest image.
The scientific image, on the other hand, is a system of beliefs that is only based on scientific results. It contains many beliefs that are not contained in the manifest image. At first glance, this may not seem to be a problem. But Mulamustafić shows convincingly that this is a mistake: The world as it is in itself cannot be both the way the manifest image depicts it and the way the scientific image describes it to be.
Adem Mulamustafic studied and completed his PhD in philosophy at the University of Potsdam. His areas of specialization are metaphysics, philosophy of science, and critical thinking.
The fighter pilots of the Wehrmacht were the "pop stars" of National Socialist propaganda. But what lay behind the glossy facade? Jens Wehner casts new light, soberly and even-handedly, on their military function in aerial warfare. His study analyzes the military utility of the Messerschmitt and Focke-Wulf fighter aircraft types, their tactical employment, and their guiding doctrines. Comparisons with the state of development of the Allied adversaries question the wisdom of striving for technical superiority. What emerges is the picture of a technology that failed to meet the expectations placed in it, of a flawed and static doctrine that caused numerous setbacks, and of individualistically acting pilots who became estranged from the military hierarchy.
Poly(vinylidene fluoride) (PVDF)-based homo-, co- and ter-polymers are well-known for their ferroelectric and relaxor-ferroelectric properties. Their semi-crystalline morphology consists of crystalline and amorphous phases, plus interface regions in between, and governs the relevant electro-active properties. In this work, the influence of chemical, thermal and mechanical treatments on the structure and morphology of PVDF-based polymers and on the related ferroelectric/relaxor-ferroelectric properties is investigated. Polymer films were prepared in different ways and subjected to various treatments such as annealing, quenching and stretching. The resulting changes in the transitions and relaxations of the polymer samples were studied by means of dielectric, thermal, mechanical and optical techniques. In particular, the origin(s) behind the mysterious mid-temperature transition (T_{mid}) that is observed in all PVDF-based polymers was assessed. A new hypothesis is proposed to describe the T_{mid} transition as a result of multiple processes taking place within the temperature range of the transition. The contribution of the individual processes to the observed overall transition depends on both the chemical structure of the monomer units and the processing conditions which also affect the melting transition. Quenching results in a decrease of the overall crystallinity and in smaller crystallites. On samples quenched after annealing, notable differences in the fractions of different crystalline phases have been observed when compared to samples that had been slowly cooled. Stretching of poly(vinylidene fluoride-tetrafluoroethylene) (P(VDF-TFE)) films causes an increase in the fraction of the ferroelectric β-phase with simultaneous increments in the melting point (T_m) and the crystallinity (\chi_c) of the copolymer. 
While an increase in the stretching temperature does not have a profound effect on the amount of the ferroelectric phase, its stability appears to improve.
Measurements of the non-linear dielectric permittivity \varepsilon_2^\prime in a poly(vinylidenefluoride-trifluoroethylene-chlorofluoroethylene) (P(VDF-TrFE-CFE)) relaxor-ferroelectric (R-F) terpolymer reveal peaks at 30 and 80 °C that cannot be identified in conventional dielectric spectroscopy. The former peak is associated with T_{mid} and may help to understand the non-zero \varepsilon_2^\prime values that are found for the paraelectric terpolymer phase. The latter peak can also be observed during cooling of P(VDF-TrFE) copolymer samples at 100 °C and is due to conduction processes and space-charge polarization as a result of the accumulation of real charges at the electrode-sample interface. Annealing lowers the Curie-transition temperature of the terpolymer as a consequence of its smaller ferroelectric-phase fraction, which by default exists even in terpolymers with relatively high CFE content. Changes in the transition temperatures are in turn related to the behavior of the hysteresis curves observed on differently heat-treated samples. Upon heating, the hysteresis curves evolve from those known for a ferroelectric to those of a typical relaxor-ferroelectric material. Comparing dielectric-hysteresis loops obtained at various temperatures, we find that annealed terpolymer films show higher electric-displacement values and lower coercive fields than the non-annealed samples − irrespective of the measurement temperature − and also exhibit ideal relaxor-ferroelectric behavior at ambient temperatures, which makes them excellent candidates for related applications at or near room temperature. However, non-annealed films − by virtue of their higher ferroelectric activity − show a larger and more stable remanent polarization at room temperature, while annealed samples need to be poled below 0 °C to induce a well-defined polarization.
Overall, by modifying the three phases in PVDF-based polymers, it has been demonstrated how the preparation steps and processing conditions can be tailored to achieve the desired properties that are optimal for specific applications.
Gas hydrates are ice-like crystalline compounds made of water cavities that retain various types of guest molecules. Natural gas hydrates are CH4-rich but also contain higher hydrocarbons as well as CO2, H2S, etc. Their stability depends strongly on local pressure and temperature conditions. Given their high energy content, natural gas hydrates are artificially dissociated for the production of methane gas. They may also dissociate in response to global warming. It is therefore crucial to investigate the hydrate nucleation and growth process at the molecular level. Understanding how guest molecules in the hydrate cavities respond to a warming climate or to gas injection is also of great importance.
This thesis is concerned with a systematic investigation of simple and mixed gas hydrates at conditions relevant to the natural hydrate reservoir in Qilian Mountain permafrost, China. A high-pressure cell integrated into a confocal Raman spectroscopy setup enabled a precise and continuous characterization of the hydrate phase during formation/dissociation/transformation processes with high spatial and spectral resolution. In laboratory experiments, the formation of mixed gas hydrates containing other hydrocarbons besides methane was simulated, taking into account the effects of gas supply conditions and sediments. The results revealed a preferential enclathration of different guest molecules in hydrate cavities and further refuted the common hypothesis that coexisting hydrate phases arise from a changing feed gas phase. However, the presence of specific minerals and organic compounds in sediments may have significant impacts on the coexisting solid phases. With regard to dissociation, the formation damage caused by fines mobilization and migration during hydrate decomposition was reported for the first time, illustrating the complex interactions between fine grains and hydrate particles. Gas hydrates, from simple CH4 hydrates to binary CH4—C3H8 hydrates and multi-component mixed hydrates, were decomposed by thermal stimulation mimicking global warming. The mechanisms of guest substitution in hydrate structures were studied through the experimental data obtained from CH4—CO2, CH4—mixed gas hydrates and mixed gas hydrates—CO2 systems. For the first time, a second transformation behavior was documented during the transformation from CH4 hydrates to CO2-rich mixed hydrates: most of the crystals grew or maintained their size when exposed to CO2 gas, while some others decreased in size and even disappeared over time.
The highlight of the last two experimental simulations was to visualize and characterize hydrate crystals at different stages of structural transition. These experimental simulations enhanced our knowledge of mixed gas hydrates in natural reservoirs and improved our capability to assess their response to global warming.
Two approaches for the synthesis of prenylated isoflavones were explored: the 2,3-oxidative rearrangement/cross metathesis approach, using hypervalent iodine reagents as oxidants, and the Suzuki-Miyaura cross-coupling/cross metathesis approach. Three natural prenylated isoflavones, 5-deoxy-3′-prenylbiochanin A (59), erysubin F (61) and 7-methoxyebenosin (64), and the non-natural analogues 7,4′-dimethoxy-8,3′-diprenylisoflavone (126j) and 4′-hydroxy-7-methoxy-8,3′-diprenylisoflavone (128) were synthesized for the first time via the 2,3-oxidative rearrangement/cross metathesis approach, using mono- or diallylated flavanones as key intermediates. The reaction of flavanones with hypervalent iodine reagents afforded isoflavones via a 2,3-oxidative rearrangement and the corresponding flavone isomers via a 2,3-dehydrogenation. This also gave access to 7,4′-dimethoxy-8-prenylflavone (127g), 7,4′-dimethoxy-8,3′-diprenylflavone (127j), 7,4′-dihydroxy-8,3′-diprenylflavone (129) and 4′-hydroxy-7-methoxy-8,3′-diprenylflavone (130), the non-natural regioisomers of 7-methoxyebenosin, 126j, erysubin F and 128, respectively. Three natural prenylated isoflavones, 3′-prenylbiochanin A (58), neobavaisoflavone (66) and 7-methoxyneobavaisoflavone (137), were synthesized for the first time using the Suzuki-Miyaura cross-coupling/cross metathesis approach. The structures of 3′-prenylbiochanin A (58) and 5-deoxy-3′-prenylbiochanin A (59) were confirmed by single crystal X-ray diffraction analysis. The 2,3-oxidative rearrangement approach appears to be limited by the substitution pattern on rings A and B of the flavanone, while the Suzuki-Miyaura cross-coupling approach appears to be the most suitable for the synthesis of simple isoflavones or of prenylated isoflavones whose prenyl substituents or allyl groups (the substituents that are essential precursors for the prenyl side chains) can be regioselectively introduced after the construction of the isoflavone core.
The chalcone-flavanone hybrids 146, 147 and 148, hybrids of the naturally occurring bioactive flavanones liquiritigenin-7-methyl ether, liquiritigenin and liquiritigenin-4′-methyl ether respectively were also synthesized for the first time, using Matsuda-Heck arylation and allylic/benzylic oxidation as key steps.
The intermolecular interactions of 5-deoxy-3′-prenylbiochanin A (59) and its two closely related precursors 106a and 106b were investigated by single crystal and Hirshfeld surface analyses to understand their different physicochemical properties. The results indicate that the presence of strong intermolecular O-H···O hydrogen bonds and an increase in the number of π-stacking interactions increase the melting point and lower the solubility of isoflavone derivatives. However, the strong intermolecular O-H···O hydrogen bonds have a greater effect than the π-stacking interactions.
5-Deoxy-3′-prenylbiochanin A (59), erysubin F (61) and 7,4′-dihydroxy-8,3′-diprenylflavone (129) were tested against three bacterial strains and one fungal pathogen. All three compounds were inactive against Salmonella enterica subsp. enterica (NCTC 13349), Escherichia coli (ATCC 25922), and Candida albicans (ATCC 90028), with MIC values greater than 80.0 μM. The diprenylated isoflavone erysubin F (61) and its flavone isomer 129 showed in vitro activity against methicillin-resistant Staphylococcus aureus (MRSA, ATCC 43300) with MIC values of 15.4 and 20.5 μM, respectively. 5-Deoxy-3′-prenylbiochanin A (59) was inactive against this MRSA strain. Erysubin F (61) and its flavone isomer 129 could serve as lead compounds for the development of new alternative drugs for the treatment of MRSA infections.
Sustainable urban growth
(2022)
This dissertation explores the determinants of sustainable and socially optimal growth in a city. Two general equilibrium models establish the base for this evaluation, each adding its puzzle piece to the urban sustainability discourse and examining the role of non-market-based and market-based policies for balanced growth and welfare improvements in different theory settings. Sustainable urban growth either calls for policy action or a green energy transition. Further, R&D market failures can pose severe challenges to the sustainability of urban growth and the social optimality of decentralized allocation decisions. Still, a careful (holistic) combination of policy instruments can achieve sustainable growth and even be first best.
Traditional organizations are strongly encouraged by emerging digital customer behavior and digital competition to transform their businesses for the digital age. Incumbents are particularly exposed to the field of tension between maintaining and renewing their business model. Banking is one of the industries most affected by digitalization, with a large stream of digital innovations around Fintech. Most research contributions focus on digital innovations, such as Fintech, but there are only a few studies on the related challenges and perspectives of incumbent organizations, such as traditional banks. Against this background, this dissertation examines the specific causes, effects and solutions for traditional banks in digital transformation − an underrepresented research area so far.
The first part of the thesis examines how digitalization has changed the latent customer expectations in banking and studies the underlying technological drivers of evolving business-to-consumer (B2C) business models. Online consumer reviews are systematized to identify latent concepts of customer behavior and future decision paths as strategic digitalization effects. Furthermore, the service attribute preferences, the impact of influencing factors and the underlying customer segments are uncovered for checking accounts in a discrete choice experiment. The dissertation contributes here to customer behavior research in digital transformation, moving beyond the technology acceptance model. In addition, the dissertation systematizes value proposition types in the evolving discourse around smart products and services as key drivers of business models and market power in the platform economy.
The second part of the thesis focuses on the effects of digital transformation on the strategy development of financial service providers, which are classified according to their firm performance levels. Standard types are derived based on fuzzy-set qualitative comparative analysis (fsQCA), with facade digitalization as one typical standard type for low-performing incumbent banks that lack a holistic strategic response to digital transformation. Based on this, the contradictory impact of digitalization measures on key business figures is examined for German savings banks, confirming that the shift towards digital customer interaction was not accompanied by new revenue models, diminishing bank profitability. The dissertation further contributes to the discourse on digitalized work designs and the consequences for job perceptions in banking customer advisory. The threefold impact of the IT support perceived in customer interaction on the job satisfaction of customer advisors is disentangled.
In the third part of the dissertation, design-oriented solutions are developed for core action areas of digitalized business models, i.e., data and platforms. A consolidated taxonomy for data-driven business models and a future reference model for digital banking are developed. The impact of the platform economy is demonstrated using the example of the market entry of Bigtech. The role-based e3-value modeling is extended by meta-roles and role segments and linked to value co-creation mapping in VDML. In this way, the dissertation extends enterprise modeling research on platform ecosystems and value co-creation using the example of banking.
Struggle for existence
(2022)
In this project, I sought to understand how Palestinian claim-making in the West Bank is possible within the context of continuing Israeli occupation and repression by the Palestinian political leadership. I explored the questions of what channels non-state actors use to advance their claims, what opportunities they have for making these claims, and what challenges they face. This exploration covers the time period from the Oslo Accords in the mid-1990s to the so-called Great March of Return in 2018.
I demonstrated that Palestinians used different modes and strategies of resistance in the past century, as the area of what today is Israel/Palestine has historically been a target of foreign penetration. Yet the Oslo agreements between the Israeli government and the Palestinian leadership ended Palestinians’ decentralized and pluralist social governance, reinforced Israeli rule in the Palestinian territories, promoted continuing dispossession and segregation of Palestinians, and have further restricted their rights and their claim-making opportunities to this day. Today, therefore, Palestinian society in the West Bank is characterized by fragmentation, geographical and societal segregation, and double repression by the Israeli occupation and by Palestinian Authority (PA) policies. What is more, Palestinian claim-making is legally curtailed due to the establishment of different geographical entities in which Palestinians are subjugated to different forms of Israeli rule and regulations.
I argue that the concepts of civil society and acts of citizenship, which are often used to describe non-state actors’ rights-seeking activities, fall short of comprehensively understanding and describing Palestinian claim-making in the West Bank. By determining the boundaries of these concepts, the concept of acts of subjecthood evolved within the research process as a novel theoretical approach and as a means of describing claim-making within repressive contexts where claim makers’ rights are curtailed and opportunities for rights-seeking activities are few. This study thereby applies a new theoretical framework to the conflict in Israel/Palestine and contributes to a better understanding of rights-seeking activities within the West Bank. Further, I argue that Palestinian acts of subjecthood against hostile Israeli rule in the West Bank are embedded within the comprehensive structure of settler colonialism. As a form of colonialism that aims at replacing an indigenous population, Israeli settler colonialism in the West Bank manifests itself in restrictions of Palestinian movement, settlement constructions, home demolitions, violence, and detentions.
By using grounded theory and inductive reasoning as methodological approaches, I was able to make generalizations about the state of Palestinian claim-making. These generalizations are based on the analysis of secondary materials and data collected via face-to-face and video interviews with non-state actors in Israel/Palestine. The conducted research shows that there is not a single measure or a standalone condition that hinders Palestinian claim-making, but a complex and comprehensive structure that, on the one hand, shrinks Palestinian living space by occupation and destruction and, on the other hand, diminishes Palestinian civic space by limiting the fundamental rights to organize and build social movements to change the status Palestinians live in.
Although the concrete, tangible outcomes of Palestinian acts of subjecthood are marginal, they contribute to strengthening and perpetuating Palestinians’ long history of resistance against Israeli oppression. Given the lack of adherence to international law and the neglect of UN resolutions by the Israeli government, the continuous defeats of rights organizations in Israeli courts, and the repression of West Bank-based institutions by PA and occupation policies, Palestinian acts of subjecthood cannot overturn current power structures. Nevertheless, the ongoing persistence of non-state actors claiming rights, as well as the emergence of new initiatives and youth movements, are essential for strengthening Palestinians’ resilience and documenting current injustices. They can therefore build the pillars for social change in the future.
Das Ziel der vorliegenden Dissertation war es zu untersuchen, wie palästinensisches claim-making, also die Artikulation von Forderungen bzw. die Geltendmachung von bestimmten Rechten, vor dem Hintergrund der anhaltenden israelischen Besatzung und Repressalien durch die palästinensische politische Führung im Westjordanland durchgesetzt werden kann. Dabei soll der Frage nachgegangen werden, welche Kanäle nichtstaatliche Akteure nutzen, um ihre Ansprüche geltend zu machen, welche Möglichkeiten sich ihnen dafür bieten und vor welchen Herausforderungen sie stehen. Der Untersuchungszeitraum erstreckt sich dabei vom Osloer Friedensprozess Mitte der 1990er Jahre bis hin zum sogenannten Great March of Return im Jahr 2018.
Die im Gebiet des heutigen Israel/Palästina lebenden PalästinenserInnen bedienten sich in Zeiten ausländischer Einflussnahme, z.B. während der britischen Besatzung im vergangenen Jahrhundert, verschiedenster Widerstandsformen und -strategien. Jedoch haben die Osloer Abkommen zwischen der israelischen Regierung und der palästinensischen Führung die dezentrale und partizipative Mobilisierung der palästinensischen Gesellschaft erschwert, die andauernde Enteignung von PalästinenserInnen begünstigt und ihre Rechte bis zum heutigen Tag weiter eingeschränkt. Die heutige palästinensische Gesellschaft im Westjordanland ist daher durch Zersplitterung, geografische und gesellschaftliche Segregation und doppelte Un-terdrückung durch die israelische Besatzung sowie die Palästinensische Autonomiebehörde gekennzeichnet. Zudem führt die Etablierung verschiedener geografischer Entitäten, in denen PalästinenserInnen unterschiedlichen Formen israelischer Herrschaft, Regularien und Ein-griffsrechten unterworfen sind, dazu, dass palästinensisches claim-making auch formalrecht-lich eingeschränkt ist.
Um die Aktivitäten nichtstaatlicher Akteure in diesem Kontext beschreiben zu können, wer-den häufig das Konzept der Zivilgesellschaft oder das der acts of citizenship herangezogen. In der vorliegenden Arbeit wird jedoch argumentiert, dass diese Konzepte nur bedingt auf den Status Quo im Westjordanland anwendbar sind und palästinensisches claim-making nicht hinreichend verstehen und beschreiben können. Im Laufe des Forschungsprozesses hat sich daher das Konzept der acts of subjecthood als neuer theoretischer Ansatz herausgebildet, der claim-making in repressiven Kontexten beschreibt, in denen nichtstaatliche Akteure nur geringen Handlungsspielraum haben, ihre Forderungen durchsetzen zu können. Durch diese „Theorie-Brille“ ermöglicht meine Forschung einen neuartigen Blick auf den israelisch-palästinensischen Konflikt und trägt auf diese Weise zu einem besseren Verständnis von claim-making-Aktivitäten im Westjordanland bei. Darüber hinaus bettet die vorliegende Ar-beit acts of subjecthood in den größeren Kontext des Siedlungskolonialismus ein. Dieser beschreibt eine Form des Kolonialismus, die darauf abzielt, eine einheimische Bevölkerung durch die der Kolonialmacht zu ersetzen. Im Westjordanland manifestiert sich der israelische Siedlungskolonialismus in der Einschränkung der Bewegungsfreiheit von PalästinenserIn-nen, dem Bau von Siedlungen, der Zerstörung von Häusern, Gewalt und Inhaftierungen.
Using grounded theory and inductive reasoning as methodological approaches made it possible to derive generalisable statements about the state of Palestinian claim-making. These generalisations rest on the analysis of secondary sources and of data collected in interviews with representatives of non-state organisations in Israel/Palestine. The analysis makes clear that Palestinian claim-making is hindered not by any single measure or condition but by a complex, multi-layered, and deliberately implemented structure. This structure reduces the living space of Palestinians through occupation and destruction on the one hand, and restricts civic space by denying them basic rights and fundamental freedoms on the other.
Although the concrete effects of Palestinian acts of subjecthood are marginal, they help to strengthen and sustain resistance against political oppression. Given the Israeli government's violations of international law and disregard of numerous UN resolutions, the defeats of human rights organisations before Israeli courts, and the suppression of institutions in the West Bank by the Palestinian Authority and the occupation regime, acts of subjecthood cannot break up the current power structures. Nevertheless, the continued persistence of non-state actors in articulating demands and claiming rights, and the founding of new initiatives and organisations, are essential for strengthening societal resilience and for documenting injustices and rights violations. These actors thereby lay the groundwork for possible socio-political change in the future.
Subdividing space through interfaces leads to many space partitions that are relevant to soft matter self-assembly. Prominent examples include cellular media, e.g. soap froths, which are bubbles of air separated by interfaces of soap and water, but also more complex partitions such as bicontinuous minimal surfaces.
Using computer simulations, this thesis analyses soft matter systems in terms of the relationship between the physical forces between the system's constituents and the structure of the resulting interfaces or partitions. The focus is on two systems, copolymeric self-assembly and the so-called Quantizer problem, where the driving force of structure formation, the minimisation of the free energy, is an interplay of surface-area minimisation and stretching contributions, favouring cells of uniform thickness.
In the first part of the thesis we address copolymeric phase formation with sharp interfaces. We analyse a columnar copolymer system "forced" to assemble on a spherical surface, where the perfect solution, the hexagonal tiling, is topologically prohibited. For a system of three-armed copolymers, the resulting structure is described by solutions of the so-called Thomson problem, the search for minimum-energy configurations of repelling charges on a sphere. We find three intertwined Thomson-problem solutions on a single sphere, occurring with a probability that depends on the radius of the substrate.
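The Thomson problem mentioned above can be made concrete with a small numerical sketch (an illustrative toy, not the simulation code used in the thesis): unit charges on the unit sphere are relaxed by projected gradient descent on the pairwise Coulomb energy.

```python
import numpy as np

def thomson_energy(p):
    """Coulomb energy of unit point charges at positions p on the unit sphere."""
    diff = p[:, None, :] - p[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(len(p), k=1)  # count each pair once
    return (1.0 / dist[iu]).sum()

def relax_charges(n, steps=3000, lr=0.01, seed=0):
    """Projected gradient descent: move each charge along the net Coulomb
    force acting on it, then re-project onto the unit sphere."""
    rng = np.random.default_rng(seed)
    p = rng.normal(size=(n, 3))
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    for _ in range(steps):
        diff = p[:, None, :] - p[None, :, :]
        dist = np.sqrt((diff ** 2).sum(-1)) + np.eye(n)  # dummy diagonal avoids division by zero
        p += lr * (diff / dist[..., None] ** 3).sum(axis=1)
        p /= np.linalg.norm(p, axis=1, keepdims=True)
    return p
```

For n = 4 the relaxation recovers the regular tetrahedron, whose known minimum Coulomb energy is approximately 3.674; for larger n the landscape develops many local minima, which is what makes the problem hard.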
We then investigate the formation of amorphous and crystalline structures in the Quantizer system, a particulate model with an energy functional without surface tension that favours spherical cells of equal size. We find that quasi-static equilibrium cooling allows the Quantizer system to crystallise into a BCC ground state, whereas quenching and non-equilibrium cooling, i.e. cooling at slower rates than quenching, leads to an approximately hyperuniform, amorphous state. The assumed universality of the latter, i.e. its independence of the energy-minimisation method or the initial configuration, is strengthened by our results. We expand the Quantizer system by introducing interface tension, creating a model that we find to mimic polymeric micelle systems: an order-disorder phase transition is observed, with a stable Frank-Kasper phase.
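The Quantizer energy penalises the second moment of each Voronoi cell about its generating point. One standard relaxation scheme for such an energy (a plausible illustration, not necessarily the minimisation method used in the thesis) is Lloyd's algorithm, sketched here with Monte Carlo samples standing in for the cell integrals:

```python
import numpy as np

def quantizer_energy(points, samples):
    """Monte Carlo estimate of the Quantizer energy: mean squared distance
    of sample points to their nearest generator."""
    d2 = ((samples[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

def lloyd_step(points, samples):
    """One Lloyd iteration: assign each sample to its nearest generator,
    then move every generator to the centroid of its Voronoi cell."""
    d2 = ((samples[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    new = points.copy()
    for k in range(len(points)):
        cell = samples[labels == k]
        if len(cell):
            new[k] = cell.mean(axis=0)
    return new
```

Both sub-steps (re-assignment and centroid move) can only lower the energy, so repeated Lloyd iterations drive the generators towards locally optimal, uniformly sized cells.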
The second part considers bicontinuous partitions of space into two network-like domains and introduces an open-source tool for the identification of structures in electron-microscopy images. We expand a method, introduced by Deng and Mieczkowski (1998), of matching experimentally accessible projections with computed projections of candidate structures. The computed structures are modelled using nodal representations of constant-mean-curvature surfaces. A case study conducted on etioplast cell membranes in chloroplast precursors establishes the double-diamond surface structure as dominant in these plant cells. We automate the matching process using deep-learning methods, which identify structures with excellent accuracy.
In the automotive industry, suppliers from the consumer electronics and high-tech industry are becoming increasingly relevant, for example in the context of automated vehicles. The carmakers’ purchasing organizations need to understand the power constellation in negotiations with these new suppliers, since negotiating power is the greatest lever for influencing the outcome of negotiations. This study analyzes the importance of organizational sources of power and their interplay with the products’ degree of innovation.
Stimuli-promoted in situ formation of hydrogels with thiol/thioester containing peptide precursors
(2022)
Hydrogels are potential synthetic ECM-like substitutes since they provide functional and structural similarities to soft tissues. They can be prepared by crosslinking macromolecules or by polymerising suitable precursors. The crosslinks are not necessarily covalent bonds but can also be formed by physical interactions such as π-π interactions, hydrophobic interactions, or H-bonding. On-demand, in situ forming hydrogels have garnered increased interest over preformed gels, especially for biomedical applications, due to the relative ease of in vivo delivery and of filling cavities. The thiol-Michael addition reaction provides a straightforward and robust strategy for in situ gel formation, given its fast reaction kinetics and its ability to proceed under physiological conditions. The incorporation of a trigger function into a crosslinking system becomes even more interesting, since gelling can then be controlled with a stimulus of choice. The use of small-molar-mass crosslinker precursors with active groups orthogonal to the thiol-Michael-type electrophile provides the opportunity to implement on-demand in situ crosslinking without compromising the fast reaction kinetics.
It was postulated that short peptide sequences, owing to the broad range of structure-function relations available with the different constituent amino acids, can be exploited for the realisation of stimuli-promoted in situ covalent crosslinking and gelation applications. The advantage of this system over conventional polymer-polymer hydrogel systems is the ability to tune and predict material properties at the molecular level.
The main aim of this work was to develop a simplified and biologically friendly stimuli-promoted in situ crosslinking and hydrogelation system using peptide mimetics as latent crosslinkers. The approach aims at using a single thiodepsipeptide sequence to achieve separate pH- and enzyme-promoted gelation systems with little modification of the thiodepsipeptide sequence. The realisation of this aim required the completion of three milestones.
In the first place, after deciding on the thiol-Michael reaction as an effective in situ crosslinking strategy, a thiodepsipeptide, Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH (TDP), with an expected propensity towards pH-dependent thiol-thioester exchange (TTE) activation, was proposed as a suitable crosslinker precursor for the pH-promoted gelation system. Prior to the synthesis of the proposed peptide mimetic, knowledge of the thiol-Michael reactivity of the would-be activated thiol moiety SH-Leu, which is internally embedded in the thiodepsipeptide, was required. In line with the pKa requirements for a successful TTE, the reactivity of a more acidic thiol, SH-Phe, was also investigated to aid the selection of the best thiol to be incorporated in the thioester-bearing peptide-based crosslinker precursor. Using 'pseudo' 2D-NMR investigations, it was found that only reactions involving SH-Leu yielded the expected thiol-Michael product, an observation attributed to the steric hindrance of the bulkier SH-Phe. The fast reaction rates and complete acrylate/maleimide conversion obtained with SH-Leu at pH 7.2 and higher allowed the direct elimination of SH-Phe as a potential thiol for the synthesis of the peptide mimetic.
Based on the initial studies, the proposed Ac-Pro-Leu-Gly-SLeu-Leu-Gly-NEtSH was kept unmodified for the pH-promoted gelation system. The subtle difference in pKa values between SH-Leu (the thioester thiol) and the terminal cysteamine thiol should, on theoretical grounds, be enough to effect a 'pseudo' intramolecular TTE. In polar protic solvents and under basic aqueous conditions, TDP successfully undergoes a 'pseudo' intramolecular TTE reaction to yield an α,ω-dithiol tripeptide, HSLeu-Leu-Gly-NEtSH. The pH dependence of thiolate-ion generation by the cysteamine thiol provided the needed stimulus (pH) for the overall success of the TTE (activation step) and thiol-Michael addition (crosslinking) strategy.
Secondly, with potential biomedical applications in focus, the susceptibility of TDP, like other thioesters, to intermolecular TTE was probed with a group of thiols of varying pKa values, since biological milieus characteristically contain peptide/protein thiols. L-cysteine, a biologically relevant thiol, and a small-molecular-weight thiol, methyl thioglycolate, both with relatively similar thiol pKa values, led to an increased concentration of the dithiol crosslinker when reacted with TDP. In the presence of acidic thiols (p-NTP and 4MBA), a decrease in the dithiol concentration was observed, which can be attributed to the inability of the TTE tetrahedral intermediate to dissociate into exchange products and is in line with the pKa requirements for a successful TTE reaction. These results additionally make TDP more attractive and potentially the first crosslinker precursor for applications in biologically relevant media.
Finally, the ability of TDP to promote pH-sensitive in situ gel formation was probed with maleimide-functionalised 4-arm polyethylene glycol polymers in tris-buffered media of varying pH. When a 1:1 thiol:maleimide molar ratio was used, TDP-PEG4MAL hydrogels formed within 3, 12 and 24 hours at pH values of 8.5, 8.0 and 7.5, respectively. However, gelation times of 3, 5 and 30 min were observed for the same pH trend when the thiol:maleimide molar ratio was increased to 2:1.
A direct correlation of thiol content with the storage modulus (G') of the gels at each pH could also be drawn by comparing gels with a 1:1 thiol:maleimide ratio to those with a 2:1 ratio. This is supported by the fact that G' depends linearly on the crosslinking density of the polymer. The initial G' values of all gels ranged between 200 and 5000 Pa, which falls within the range of elasticities of certain tissue microenvironments, for example brain tissue (200 - 1000 Pa) and adipose tissue (2500 - 3500 Pa).
The knowledge gained so far from the study on the ability to design and tune the exchange reaction of thioester-containing peptide mimetics will give those working in the field further insight into the development of new sequences tailored towards specific applications.
TTE substrate design using peptide mimetics as presented in this work has revealed interesting new insights considering the state of the art. Using the results obtained as a reference, the strategy offers the possibility to extend the concept to the controlled delivery of active molecules needed for other robust and high-yielding crosslinking reactions for biomedical applications. Applications of this sequentially coupled functional system could be seen, for example, in the treatment of inflamed tissues of the urinary tract, such as bladder infections, for which pH levels above 7 have been reported. By the inclusion of cell-adhesion peptide motifs, the hydrogel network formed at this pH could act as a new support layer for the healing of damaged epithelium, as shown in interfacial gel formation experiments using TDP and PEG4MAL droplets.
The versatility of the thiodepsipeptide sequence Ac-Pro-Leu-Gly-SLeu-Leu-Gly (TDPo) was extended to the design and synthesis of an MMP-sensitive 4-arm PEG-TDPo conjugate. Cleavage of TDPo at the Gly-SLeu bond yields active thiol units for the subsequent reaction with orthogonal Michael-acceptor moieties. One of the advantages of stimuli-promoted in situ crosslinking systems using short peptides is the ease of designing the required peptide molecules, owing to the predictability of peptide function from sequence structure. Consequently, the functionalisation of a 4-arm PEG core with the collagenase-active TDPo sequence yielded an MMP-sensitive 4-arm thiodepsipeptide-PEG conjugate (PEG4TDPo) substrate.
Cleavage studies using a thiol fluorometric assay in the presence of MMP-2 and MMP-9 confirmed the susceptibility of PEG4TDPo towards these enzymes. The time-dependent increase in fluorescence intensity in the presence of the thiol assay signifies the successful cleavage of TDPo at the Gly-SLeu bond, as expected. It was observed that cleavage studies with the thiol fluorometric assay introduce a sigmoidal, non-Michaelis-Menten-type kinetic profile, making it difficult to accurately determine the enzyme kinetic parameters kcat and KM.
Gelation studies with PEG4MAL at 10 wt% concentration revealed faster gelation with MMP-2 than with MMP-9, with gelation times of 28 and 40 min, respectively. Hydrolytic cleavage of PEG4TDPo probably contributed as well, since blank PEG4MAL samples also gelled, although only after 60 minutes of reaction. From theoretical considerations, the simultaneous gelation reaction would be expected to impact the enzymatic cleavage more negatively than the hydrolytic one. Determining the exact contribution of hydrolytic cleavage of PEG4TDPo would, however, require additional studies.
In summary, this new and simplified in situ crosslinking system using peptide-based crosslinker precursors with tuneable properties exhibited gelation kinetics on a level similar to those reported for already active dithiols. The advantageous on-demand functionality associated with its pH sensitivity and physiological compatibility makes it a strong candidate for further research as far as biomedical applications in general and on-demand material synthesis in particular are concerned.
Results from the MMP-promoted gelation system unveil a simple but so far unexplored approach to the in situ synthesis of covalently crosslinked soft materials, which could lead to the development of an alternative pathway for addressing cancer metastasis by using MMP overexpression as a trigger. This goal has so far not been reached with MMP inhibitors, despite the extensive work in this regard.
In Germany, in-service teacher training constitutes a central learning opportunity for teachers' competence development within the third phase of teacher education (Avalos, 2011; Guskey & Yoon, 2009). In this phase, teachers can choose from a range of job-embedded learning opportunities aimed at adapting and further developing their professional competences. Within these professionalisation measures, teachers have the opportunity to reflect on and develop their teaching practice. In-service teacher training is therefore also significant for the development of teaching quality and for student learning (Lipowsky, 2014).
Research on uptake shows, however, that not all teachers make full use of the available training programmes and that teachers differ in the extent to which they use these professional learning opportunities (Hoffmann & Richter, 2016). As a consequence, the potential impact of the training programmes cannot be fully realised. To promote the uptake of in-service teacher training, various actors deploy different steering instruments at different levels. The question of steering possibilities within the third phase of teacher education has, however, remained largely unexplored.
This thesis builds on existing research on in-service teacher training and uses the theoretical perspective of educational governance to investigate, in four sub-studies, which steering instruments and potentials exist at the different levels of the teacher training system and how they are implemented by the various political and school actors. It also examines how effective the steering instruments used are with regard to the uptake of in-service teacher training. The overarching question is addressed against the background of a theoretical framework derived for the teacher training system in the form of a multi-level model, which serves as the basis for situating the subsequent empirical studies on training uptake and on the effectiveness of different steering instruments.
Against this background, Study I focuses on the level of political actors and asks how important the statutory training obligation is for teachers' participation in training. It examines the relationships between teachers' training participation and affiliation with federal states with and without a concrete training obligation, as well as with federal states with and without an obligation to document completed training. For this purpose, data from the IQB-Ländervergleich 2011 and 2012 and the IQB-Bildungstrend 2015 were analysed using logistic and linear regression models.
Studies II and III address the framework conditions for school-internal training. Study II first examines school-type-specific differences in the choice of training topics. Study III investigates the school-internal training programme with regard to the extent of its use and the relationship between school characteristics and the use of different training topics. In addition, the two programme formats are compared with regard to their respective shares of thematic training events. Data from the training database of the federal state of Brandenburg were analysed for this purpose.
Beyond examining training participation in relation to administrative regulations and the use of school-internal training at the school level, Study IV addressed the overarching research question of this thesis by investigating the use of professionalisation measures in the context of school-based human resource development. This qualitative study provided deeper insight into school practice, complementing the findings of the quantitative Studies I to III. In a qualitative interview study, the questions were pursued of how principals of award-winning schools understand human resource development, which sources of information they draw on, which measures they use, and, in this sense, how they employ human resource development as an instrument of organisational development.
The final chapter of this thesis summarises and discusses the central findings of the studies. Overall, the results suggest that actors at the respective levels deploy direct and indirect steering instruments with the aim of increasing the uptake of the available training programmes, but that the instruments used do not achieve the intended steering effect. Because the existing steering instruments are linked neither to professional sanctions nor to incentives, they lack enforcement power. Moreover, the actors involved do not exhaust the repertoire of possible steering instruments. The findings of this thesis thus provide a basis for follow-up research and offer impulses for practical implications in the training system and in education policy.
Variation in traits permeates and affects all levels of biological organisation, from within individuals to between species. Yet, intraspecific trait variation (ITV) is not sufficiently represented in many ecological theories. Instead, species averages are often assumed. Especially ITV in behaviour has only recently attracted more attention as its pervasiveness and magnitude became evident. The surge in interest in ITV in behaviour was accompanied by a methodological and technological leap in the field of movement ecology. Many aspects of behaviour become visible via movement, allowing us to observe inter-individual differences in fundamental processes such as foraging, mate searching, predation or migration. ITV in movement behaviour may result from within-individual variability and consistent, repeatable among-individual differences. Yet, questions on why such among-individual differences occur in the first place and how they are integrated with life-history have remained open. Furthermore, consequences of ITV, especially of among-individual differences in movement behaviour, on populations and species communities are not sufficiently understood. In my thesis, I approach timely questions on the sources and consequences of ITV, particularly, in movement behaviour. After outlining fundamental concepts and the current state of knowledge, I approach these questions by using agent-based models to integrate concepts from behavioural and movement ecology and to develop novel perspectives.
Modern coexistence theory is a central pillar of community ecology, yet it insufficiently considers ITV in behaviour. In chapter 2, I model a competitive two-species system of ground-dwelling, central-place foragers to investigate the consequences of among-individual differences in movement behaviour for species coexistence. I show that the simulated among-individual differences, which matched empirical data, reduce fitness differences between species, i.e. provide an equalising coexistence mechanism. Furthermore, I explain this result mechanistically and thus resolve an apparent ambiguity in the consequences of ITV for species coexistence described in previous studies.
In chapter 3, I turn the focus to sources of among-individual differences in movement behaviour and their potential integration with life-history. The pace-of-life syndrome (POLS) theory predicts that the covariation between among-individual differences in behaviour and life-history is mediated by a trade-off between early and late reproduction. This theory has generated attention but is also currently scrutinised. In chapter 3, I present a model which supports a recent conceptual development that suggests fluctuating density-dependent selection as a cause of the POLS. Yet, I also identified processes that may alter the association between movement behaviour and life-history across levels of biological organization.
ITV can buffer populations, i.e. reduce their extinction risk. For instance, among-individual differences can mediate portfolio effects or increase evolvability and, thereby, facilitate rapid evolution which can alleviate extinction risk. In chapter 4, I review ITV, environmental heterogeneity, and density-dependent processes which constitute local buffer mechanisms. In the light of habitat isolation, which reduces connectivity between populations, local buffer mechanisms may become more relevant compared to dispersal-related regional buffer mechanisms. In this chapter, I argue that capacities, latencies, and interactions of local buffer mechanisms should motivate more process-based and holistic integration of local buffer mechanisms in theoretical and empirical studies.
Recent perspectives propose to apply principles from movement and community ecology to study filamentous fungi. It is an open question whether and how the arrangement and geometry of microstructures select for certain movement traits, and, thus, facilitate coexistence-stabilising niche partitioning. As a coauthor of chapter 5, I developed an agent-based model of hyphal tips navigating in soil-like microstructures along a gradient of soil porosity. By measuring network properties, we identified changes in the optimal movement behaviours along the gradient. Our findings suggest that the soil architecture facilitates niche partitioning.
The core chapters are framed by a general introduction and discussion. In the general introduction, I outline fundamental concepts of movement ecology and describe theory and open questions on sources and consequences of ITV in movement behaviour. In the general discussion, I consolidate the findings of the core chapters and critically discuss their respective value and, if applicable, their impact. Furthermore, I emphasise promising avenues for further research.
Social networking site use and well-being - a nuanced understanding of a complex relationship
(2022)
Social Networking Sites (SNSs) are ubiquitous and attract an enormous share of the digital population. Their functionalities allow users to connect and interact with others and weave complex social networks in which social information is continuously disseminated between users. Besides the social value SNSs generate, they likewise attract companies and allow for new forms of marketing, thereby creating considerable economic value alike. However, as SNSs grew in popularity, so did concerns about the impact of their use on social interactions in general and the well-being of individual users in particular. While existing scientific evidence points to both risks and benefits of SNS use, research still lacks a profound understanding of which aspects of SNSs enable an impact on well-being and which psychological processes on the part of the users underlie and explain this relationship. Therefore, this thesis is dedicated to an in-depth exploration of the relationship between SNS use and well-being and aims to answer how SNS use can impact well-being. Primarily, it focuses on the unique technological features that characterise SNSs and enable potential well-being alterations, and on specific psychological processes on the part of the users that underlie and explain the relationship. For this purpose, the thesis first introduces the concept of well-being. It continues by presenting SNSs' unique technological features, divided into specifics of the content disseminated on SNSs and the network structure of SNSs. Further, the thesis introduces three classes of psychological processes assumed most relevant for the relationship between SNSs and well-being: other-focused, self-focused, and contrastive processes.
It is assumed that the course and quality of these common processes change in the SNS context and that a complex interplay between the unique features of SNSs and these processes determines how SNSs may ultimately affect users' well-being - both in positive and negative ways. The dissertation comprises seven research articles, each of which focusses on a particular set of SNS characteristics, their interplay with one or more of the proposed psychological processes, and ultimately the resulting effects on user well-being or its key resilience and risk factors. The seven articles investigate this relationship using different methodological approaches. Three articles are based on either systematic or narrative literature reviews, one applies an empirical cross-sectional research design, and three articles present an experimental investigation. Thematically, two articles revolve around SNS use's effect on self-esteem. Three articles examine the specific role of the emotion of envy and its potential to establish and perpetuate a well-being-damaging social climate on SNSs. The last two articles of this thesis revolve around the established assumption that active and passive SNS use, as different modalities of SNS use, cause differential effects on users' well-being due to the involvement of different psychological processes. The results of this thesis illustrate different ways in which SNSs can affect users' well-being. They suggest that especially contrastive processes play a decisive role in explaining potential well-being risks for SNS users. Their interplay with certain SNS features seems to foster upward social comparisons and feelings of envy, potentially leading to a complex set of deleterious effects on users' well-being. At the same time, the findings illuminate ways in which SNSs can benefit users and their self-esteem - especially when SNS use promotes self-focused and social-feedback-based other-focused processes.
The thesis and its findings illustrate that the relationship between SNSs and well-being is complex. Therefore, a nuanced perspective, taking into consideration both the technological uniqueness of SNSs and the psychological processes they enable, is crucial to understand how these technologies affect their users in beneficial and potentially harmful ways. On the one hand, the gathered insights contribute to research by providing novel insights into the complex relationship between SNS use and well-being. On the other hand, the results enable a focused and action-oriented derivation of recommendations for stakeholders such as individual users, policymakers, and platform providers. The findings of this thesis can help them to better combat SNS-related risks and ultimately ensure a healthy and sustainable environment for users - and thus also the economic value of SNSs - in the long term.
Individuals have an intrinsic need to express themselves to other humans within a given community by sharing their experiences, thoughts, actions, and opinions. As a means, they mostly prefer modern online social media platforms such as Twitter, Facebook, personal blogs, and Reddit. Users of these social networks interact by drafting their own status updates, publishing photos, and giving likes, leaving behind a considerable amount of data to be analysed. Researchers have recently started exploring shared social media data to better understand online users and predict their Big Five personality traits: agreeableness, conscientiousness, extraversion, neuroticism, and openness to experience. This thesis investigates the possible relationship between users' Big Five personality traits and the information published on their social media profiles. Public Facebook data such as linguistic status updates, metadata of liked objects, profile pictures, and records of emotions or reactions were adopted to address the proposed research questions. Several machine learning prediction models were constructed in various experiments to utilise the engineered features correlated with the Big Five personality traits. The final predictive performances improved the prediction accuracy compared to state-of-the-art approaches, and the models were evaluated against established benchmarks in the domain. The research experiments were implemented with ethical and privacy considerations in mind. Furthermore, the research aims to raise privacy awareness among social media users and to show what third parties can reveal about users' private traits from what they share and how they act on different social networking platforms.
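The pipeline described above, engineered features from profile data feeding a supervised regression onto trait scores, can be sketched as follows. The features and helper names here are toy stand-ins chosen for illustration, not the actual features or models used in the thesis.

```python
import numpy as np

def linguistic_features(status):
    """Toy feature vector for one status update: word count,
    first-person-pronoun ratio, exclamation-mark ratio
    (illustrative stand-ins for engineered personality features)."""
    words = status.lower().split()
    n = max(len(words), 1)
    first_person = sum(w.strip(".,!?") in {"i", "me", "my", "mine"} for w in words) / n
    exclamations = status.count("!") / n
    return np.array([len(words), first_person, exclamations])

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression mapping feature rows X to trait scores y."""
    X = np.column_stack([np.ones(len(X)), X])  # prepend a bias column
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def predict(w, X):
    X = np.column_stack([np.ones(len(X)), X])
    return X @ w
```

One such regressor would be fitted per trait (e.g. one for extraversion, one for neuroticism), with the regularisation strength chosen by cross-validation.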
In the second part of the thesis, the variation in personality development is studied within a cross-platform environment such as Facebook and Twitter platforms. The constructed personality profiles in these social platforms are compared to evaluate the effect of the used platforms on one user’s personality development. Likewise, personality continuity and stability analysis are performed using two social media platforms samples. The implemented experiments are based on ten-year longitudinal samples aiming to understand users’ long-term personality development and further unlock the potential of cooperation between psychologists and data scientists.
Complex networks like the Internet or social networks are fundamental parts of our everyday lives. It is essential to understand their structural properties and how these networks are formed. A game-theoretic approach to network design problems has attracted considerable interest in recent decades, because many real-world networks are the outcomes of decentralized strategic behavior of independent agents without central coordination. Fabrikant, Luthra, Maneva, Papadimitriou, and Shenker proposed a game-theoretic model aiming to explain the formation of Internet-like networks. In this model, called the Network Creation Game, agents are associated with the nodes of a network. Each agent seeks to maximize her centrality by establishing costly connections to other agents. The model is relatively simple but shows high potential for modeling complex real-world networks. In this thesis, we contribute to the line of research on variants of the Network Creation Game. Inspired by real-world networks, we propose and analyze several novel network creation models. We aim to understand the impact of certain realistic modeling assumptions on the structure of the created networks and the behavior of the involved agents.
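The agent cost in the (sum-distance) Network Creation Game just described can be sketched concretely: an agent pays a price alpha for each edge she buys, plus the sum of her shortest-path distances to all other agents. The path network and the alpha value below are invented for illustration.

```python
from collections import deque

def distances_from(v, adj, n):
    """BFS distances from v in an undirected graph given as adjacency sets."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return [dist.get(u, float("inf")) for u in range(n)]

def agent_cost(v, bought_edges, adj, n, alpha):
    """alpha per bought edge plus v's total distance to all agents."""
    return alpha * len(bought_edges[v]) + sum(distances_from(v, adj, n))

# Path 0-1-2-3, where each agent i > 0 bought the edge to agent i-1.
n, alpha = 4, 2.0
bought = {0: set(), 1: {0}, 2: {1}, 3: {2}}
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print([agent_cost(v, bought, adj, n, alpha) for v in range(n)])  # [6.0, 6.0, 6.0, 8.0]
```

A best response for an agent is then simply the edge set minimizing this cost given everyone else's edges; an equilibrium is a network in which no agent can improve.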
The first natural additional objective that we consider is the network’s robustness. We consider a game where the agents seek to maximize their centrality and, at the same time, the stability of the created network against random edge failure.
Our second point of interest is a model that incorporates an underlying geometry. We consider a network creation model where the agents correspond to points in some underlying space and where edge lengths are equal to the distances between the endpoints in that space. The geometric setting captures many physical real-world networks like transport networks and fiber-optic communication networks.
We focus on the formation of social networks and consider two models that incorporate particular realistic behavior observed in real-world networks. In the first model, we embed anti-preferential-attachment link formation: we assume that the cost of a connection is proportional to the popularity of the targeted agent. Our second model is based on the observation that the probability that two persons connect is inversely proportional to the length of their shortest chain of mutual acquaintances.
For each of the four models above, we provide a complete game-theoretic analysis. In particular, we focus on distinctive structural properties of the equilibria, the hardness of computing a best response, and the quality of equilibria in comparison to centrally designed, socially optimal networks. We also study the game dynamics, i.e., the process of sequential strategic improvements by the agents, and analyze the convergence to an equilibrium state and its properties.
Selbstwirksamkeitserwartungen von Lehramtsstudierenden im Kontext von schulpraktischen Erfahrungen
(2022)
Self-efficacy beliefs play an important role in teachers’ professional classroom behavior (Tschannen-Moran et al., 1998) as well as in students’ achievement and behavior (Mojavezi & Tamiz, 2012). Teacher self-efficacy is defined as teachers’ conviction that they are able to achieve specific goals in a given situation (Dellinger et al., 2008; Tschannen-Moran & Hoy, 2001). Given the important role of teachers in the education system and in society, it is essential to foster teachers’ well-being, productivity, and effectiveness (Kasalak & Dagyar, 2020). Empirical findings underscore the positive effects of teacher self-efficacy on teachers’ well-being (Perera & John, 2020) and on student learning and achievement (Zee & Koomen, 2016). However, there is a lack of empirical research examining the role of self-efficacy among pre-service teachers in teacher education (Yurekli et al., 2020), particularly during practical school-based training phases. Building on the importance of one’s own teaching experience, described as mastery experience, i.e., the strongest source of self-efficacy for pre-service teachers (Pfitzner-Eden, 2016b), this dissertation examines practical experience as a source of pre-service teachers’ self-efficacy and the change in their self-efficacy during teacher education. Study 1 therefore focuses on the change in pre-service teachers’ self-efficacy during short practical teaching experiences compared with online teaching without classroom experience.
Owing to inconsistent findings on the reciprocal relations between teachers’ self-efficacy and their teaching behavior (Holzberger et al., 2013; Lazarides et al., 2022), Study 2 examined the relation between pre-service teachers’ self-efficacy and their teaching behavior during teacher education. Since feedback can serve as verbal persuasion and is thus an important source of self-efficacy that strengthens the sense of competence (Pfitzner-Eden, 2016b), Study 2 focuses on the relation between the change in pre-service teachers’ self-efficacy and the perceived quality of peer feedback in the context of short practical school experiences during teacher education. Moreover, when investigating the change in pre-service teachers’ self-efficacy, it is important to examine individual personality aspects and specific conditions of the learning environment in teacher education (Bach, 2022). Based on the assumption that supporting reflection processes in teacher education (Menon & Azam, 2021) and using innovative learning settings such as VR videos (Nissim & Weissblueth, 2017) foster the development of pre-service teachers’ self-efficacy, Studies 3 and 4 examine pre-service teachers’ reflection processes regarding their own teaching experiences and the vicarious teaching experiences of others, respectively. Against the background of inconsistent findings and a lack of empirical research on the relations between pre-service teachers’ self-efficacy and various factors concerning the learning environment or personal characteristics, further empirical studies are needed that investigate different sources and correlates of pre-service teachers’ self-efficacy during teacher education.
In this context, the present dissertation addresses the question of which individual characteristics and learning environments can foster pre-service teachers’ self-efficacy, particularly during short practical phases in teacher education. The dissertation concludes by discussing the results of the four studies, taking a holistic view of the strengths and weaknesses of each study, and by considering limitations and implications for future research and practice.
The Pamir Frontal Thrust (PFT), located in the Trans Alai range in Central Asia, is the principal active fault of the intracontinental India-Eurasia convergence zone and constitutes the northernmost boundary of the Pamir orogen at the NW edge of this collision zone. Frequent seismic activity and ongoing crustal shortening reflect the northward propagation of the Pamir into the intermontane Alai Valley. Quaternary deposits are being deformed and uplifted by the advancing thrust front of the Trans Alai range. The Alai Valley separates the Pamir range front from the Tien Shan mountains in the north; it is the vestige of a formerly contiguous basin that linked the Tadjik Depression in the west with the Tarim Basin in the east. GNSS measurements across the Central Pamir document a shortening rate of ~25 mm/yr, with a dramatic decrease of ~10-15 mm/yr over a short distance across the northernmost Trans Alai range. This suggests that almost half of the shortening in the greater Pamir-Tien Shan collision zone is absorbed along the PFT. The short-term (geodetic) and long-term (geologic) shortening rates across the northern Pamir appear to be at odds, however, constituting an apparent slip-rate discrepancy along the frontal fault system of the Pamir. Moreover, present-day seismicity and historical records have not revealed great Mw > 7 earthquakes that might be expected with such significant slip accommodation. In contrast, recent and historic earthquakes exhibit complex rupture patterns within and across the seismotectonic segments bounding the Pamir mountain front, challenging our understanding of fault interaction and the seismogenic potential of this area, and leaving the relationships between seismicity and the geometry of the thrust front poorly understood.
In this dissertation I employ different approaches to assess the seismogenic behavior of the PFT. Firstly, I provide paleoseismic data from five trenches across the central PFT segment (cPFT) and compute a segment-wide earthquake chronology over the past 16 kyr. This novel dataset provides important insights into the recurrence, magnitude, and rupture extent of past earthquakes along the cPFT. I interpret five, possibly six, paleoearthquakes that have ruptured the Pamir mountain front since ∼7 ka and ∼16 ka, respectively. My results indicate that at least three major earthquakes ruptured the full segment length and possibly crossed segment boundaries, with a recurrence interval of ∼1.9 kyr and potential magnitudes of up to Mw 7.4. Importantly, I did not find evidence for great (i.e., Mw ≥ 8) earthquakes.
Secondly, I combine my paleoseismic results with morphometric analyses to establish a segment-wide distribution of the cumulative vertical separation along offset fluvial terraces, and I model a long-term slip rate for the cPFT. My investigations reveal discrepancies between the extents of slip and rupture during apparent partial segment ruptures in the western half of the cPFT. Combined with significantly higher fault-scarp offsets in this sector of the cPFT, these observations indicate a more mature fault section with a potential for future fault linkage. I estimate an average rate of horizontal motion for the cPFT of 4.1 ± 1.5 mm/yr during the past ∼5 kyr, which does not fully match the GNSS-derived present-day shortening rate of ∼10 mm/yr. This suggests a complex distribution of strain accumulation and potential slip partitioning between the cPFT and additional faults and folds within the Pamir that may be associated with a partially locked regional décollement.
The third part of the thesis provides new insights into the surface rupture of the 2008 Mw 6.6 Nura earthquake, which ruptured along the eastern PFT sector. I explore this rupture in the context of its structural complexity by combining extensive field observations with high-resolution digital surface models. I provide a map of the rupture extent, net slip measurements, and updated regional geological observations. Based on these data, I propose a tectonic model for this area involving secondary flexural-slip faulting along the steeply dipping bedding of folded Paleogene sedimentary strata, related to deformation along a deeper blind thrust. Here, strain release appears to be transferred from the PFT toward older inherited basement structures within the area of advanced Pamir-Tien Shan collision.
The extensive research of my dissertation results in a paleoseismic database spanning the past ∼16 kyr, which contributes to the understanding of the seismogenic behavior of the PFT, but also to that of segmented thrust-fault systems in active collisional settings in general. My observations underscore the importance of combining different methodological approaches in the geosciences, especially in structurally complex tectonic settings like the northern Pamir. The discrepancy between GNSS-derived present-day deformation rates and those from different geological archives in the central part, as well as the widespread distribution of deformation due to earthquake-triggered strain transfer in the eastern part, reveal the complexity of this collision zone and call for future studies involving multi-temporal and interdisciplinary approaches.
Climate change and human-driven eutrophication promote the spread of harmful cyanobacteria blooms in lakes worldwide, which affects water quality and impairs the aquatic food chain. In recent times, sedimentary ancient DNA (sedaDNA) studies have been used to probe how centuries of climate and environmental changes have affected cyanobacterial assemblages in temperate lakes. However, information is lacking on the consistency between sediment-deposited cyanobacteria communities and those of the water column, and on the individual roles of natural climatic changes versus human pressure in cyanobacteria community dynamics over multi-millennial time scales.
Therefore, this thesis uses sedimentary ancient DNA of Lake Tiefer See in northeastern Germany to trace the deposition of cyanobacteria along the water column into the sediment, and to reconstruct cyanobacteria communities spanning the last 11,000 years using a set of molecular techniques including quantitative PCR, biomarkers, metabarcoding, and metagenome sequence analyses.
The results of this thesis show that cyanobacterial composition and species richness did not significantly differ among different water depths, sediment traps, and surface sediments. This means that the cyanobacterial community composition in the sediments reflects the water column communities. However, the sediment deposition of different cyanobacteria groups is skewed because of DNA alteration and/or deterioration during transport along the water column to the sediment. Specifically, single-filament taxa, such as Planktothrix, are poorly represented in sediments despite being abundant in the water column, as shown by an additional study of the thesis on cyanobacteria seasonality. In contrast, aggregate-forming taxa, like Aphanizomenon, are relatively overrepresented in the sediment although they are not abundant in the water column. These different deposition patterns of cyanobacteria taxa should be considered in future DNA-based paleolimnological investigations. The thesis also reveals a substantial increase in total cyanobacteria abundance during the Bronze Age, which is not apparent in earlier phases of the early to middle Holocene and is suggested to be caused by human farming, deforestation, and excessive nutrient input to the lake. Not only cyanobacterial abundance was influenced by human activity; cyanobacteria community composition also differed significantly between phases of no, moderate, and intense human impact.
The data presented in this thesis are the first on sedimentary cyanobacteria DNA since the early Holocene in a temperate lake. The results bring together archaeological, historical climatic, and limnological data with deep DNA-sequencing and paleoecology to reveal a legacy impact of human pressure on lake cyanobacteria populations dating back to approximately 4000 years.
With the fast rise of cloud computing adoption in the past few years, more companies are migrating their confidential files from their private data centers to the cloud to support their digital transformation. Enterprise file synchronization and sharing (EFSS) is one of the solutions offered for enterprises to store their files in the cloud, with secure and easy file sharing and collaboration between employees. However, the rapidly increasing number of cyberattacks on the cloud means that a company's files there may be stolen or leaked to the public. It is then the responsibility of the EFSS system to ensure that the company's confidential files are accessible only to authorized employees.
CloudRAID is a secure personal cloud storage research collaboration project that provides data availability and confidentiality in the cloud. It combines erasure coding and cryptographic techniques to securely store files as multiple encrypted file chunks across various cloud service providers (CSPs). However, several aspects of CloudRAID's concept are unsuitable for a secure and scalable enterprise cloud storage solution, particularly its key management system, location-based access control, multi-cloud storage management, and cloud file access monitoring.
This Ph.D. thesis focuses on CloudRAID for Business (CfB), which resolves the four main shortcomings of CloudRAID's concept for a secure and scalable EFSS system. First, the key management system is implemented using an attribute-based encryption scheme to provide secure and scalable intra-company and inter-company file-sharing functionality. Second, an Internet-based location file access control functionality is introduced to ensure that files can only be accessed at pre-determined trusted locations. Third, a unified multi-cloud storage resource management framework is used to securely manage the cloud storage resources available at various CSPs for authorized CfB stakeholders. Lastly, a multi-cloud storage monitoring system is introduced to monitor the activities of files in the cloud using the cloud storage log files generated by multiple CSPs.
In summary, this thesis shows how the CfB system provides holistic security for a company's confidential files at the cloud, system, and file levels, ensuring that only an authorized company and its employees can access them.
Boolean Satisfiability (SAT) is one of the problems at the core of theoretical computer science. It was the first problem proven to be NP-complete by Cook and, independently, by Levin. Nowadays it is conjectured that SAT cannot be solved in sub-exponential time. Thus, it is generally assumed that SAT and its restricted version k-SAT are hard to solve. However, state-of-the-art SAT solvers can solve even huge practical instances of these problems in a reasonable amount of time.
Why is SAT hard in theory, but easy in practice? One approach to answering this question is investigating the average runtime of SAT. In order to analyze this average runtime the random k-SAT model was introduced. The model generates all k-SAT instances with n variables and m clauses with uniform probability. Researching random k-SAT led to a multitude of insights and tools for analyzing random structures in general. One major observation was the emergence of the so-called satisfiability threshold: A phase transition point in the number of clauses at which the generated formulas go from asymptotically almost surely satisfiable to asymptotically almost surely unsatisfiable. Additionally, instances around the threshold seem to be particularly hard to solve.
In this thesis we analyze a more general model of random k-SAT that we call non-uniform random k-SAT. In contrast to the classical model, each of the n Boolean variables now has a distinct probability of being drawn. For each of the m clauses we draw k variables according to the variable distribution and choose their signs uniformly at random. Non-uniform random k-SAT gives us more control over the distribution of Boolean variables in the resulting formulas. This allows us to tailor distributions to the ones observed in practice. Notably, non-uniform random k-SAT contains the previously proposed models random k-SAT, power-law random k-SAT, and geometric random k-SAT as special cases.
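The generation process just described is easy to make concrete. The sketch below is an illustrative interpretation, not the thesis's reference implementation: it draws k distinct variables per clause according to a given weight vector and assigns signs uniformly at random. The power-law-like weights are just one example distribution.

```python
import random

def non_uniform_random_ksat(n, m, k, weights, rng):
    """Return a formula as a list of clauses; literal +i / -i means
    variable i appears positively / negatively. Variables within a
    clause are drawn from the given distribution until k are distinct."""
    variables = list(range(1, n + 1))
    formula = []
    for _ in range(m):
        clause_vars = set()
        while len(clause_vars) < k:  # redraw any duplicates within the clause
            clause_vars.update(
                rng.choices(variables, weights=weights, k=k - len(clause_vars))
            )
        # signs chosen uniformly at random, independently per literal
        formula.append([v if rng.random() < 0.5 else -v for v in clause_vars])
    return formula

rng = random.Random(42)
n, m, k = 10, 5, 3
weights = [1 / (i ** 0.5) for i in range(1, n + 1)]  # power-law-like example
formula = non_uniform_random_ksat(n, m, k, weights, rng)
print(formula)
```

With uniform weights this reduces to classical random k-SAT; other weight sequences yield the power-law and geometric variants as special cases.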
We analyze the satisfiability threshold in non-uniform random k-SAT depending on the variable probability distribution. Our goal is to derive conditions on this distribution under which an equivalent of the satisfiability threshold conjecture holds. We start with the arguably simpler case of non-uniform random 2-SAT. For this model we show under which conditions a threshold exists, if it is sharp or coarse, and what the leading constant of the threshold function is. These are exactly the three ingredients one needs in order to prove or disprove the satisfiability threshold conjecture. For non-uniform random k-SAT with k=3 we only prove sufficient conditions under which a threshold exists. We also show some properties of the variable probabilities under which the threshold is sharp in this case. These are the first results on the threshold behavior of non-uniform random k-SAT.
Illegal price-fixing agreements and other cartel infringements cause damages amounting to millions every year. To claim such damages, however, companies and citizens depend on information that is regularly hidden from them. The disclosure claims created in §§ 33g, 89b f. GWB are intended to remedy this; Anja Meier-Hoffmann analyzes them.
Aldehyde oxidases (AOXs) (E.C. 1.2.3.1) are molybdo-flavoenzymes belonging to the xanthine oxidase (XO) family. Mammalian AOXs contain one molybdenum cofactor (Moco), one flavin adenine dinucleotide (FAD), and two [2Fe-2S] clusters, the presence of which is essential for the activity of the enzyme. Human aldehyde oxidase (hAOX1) is a cytosolic enzyme mainly expressed in the liver and involved in the metabolism of xenobiotics. It oxidizes aldehydes to their corresponding carboxylic acids and hydroxylates N-heterocyclic compounds. Since these functional groups are widely present in therapeutics, understanding the behaviour of hAOX1 has important implications for medicine. During the catalytic cycle of hAOX1, the substrate is oxidized at the Moco and electrons are transferred internally to the FAD via the FeS clusters. An electron acceptor juxtaposed to the FAD receives the electrons and re-oxidizes the enzyme for the next catalytic cycle. Molecular oxygen is the endogenous electron acceptor of hAOX1; in the process it is reduced, producing reactive oxygen species (ROS) including hydrogen peroxide (H2O2) and superoxide (O2·−). The production of ROS has patho-physiological importance, as ROS can have a wide range of effects on cell components, including the enzyme itself.
In this thesis, we have shown that hAOX1 loses its activity over multiple cycles of catalysis due to endogenous ROS production, and we have identified a cysteine-rich motif that protects hAOX1 from the damaging effects of ROS. We have also shown that a sulfido ligand, which is bound at the Moco and is essential for the catalytic activity of the enzyme, is vulnerable during turnover. The ROS produced during the course of the reaction are able to remove this sulfido ligand from the Moco. In addition, ROS oxidize particular cysteine residues. The combined effects of ROS on the sulfido ligand and on specific cysteine residues in the enzyme result in its inactivation. Furthermore, we report that small reducing agents containing reactive sulfhydryl groups selectively inactivate some of the mammalian AOXs by modifying the sulfido ligand at the Moco. The mechanism of ROS production by hAOX1 is another aspect investigated as part of this thesis. We have shown that the ratio of the types of ROS produced by hAOX1, i.e., hydrogen peroxide (H2O2) and superoxide (O2·−), is determined by a particular position on a flexible loop in close proximity to the FAD. The size of the cavity at the ROS-producing site, i.e., the N5 position of the FAD isoalloxazine ring, kinetically affects the amount of each type of ROS generated by hAOX1. Taken together, hAOX1 is an enzyme of emerging importance in pharmacological and medical studies, not only due to its involvement in drug metabolism, but also due to its ROS production, which has physiological and pathological implications.
Growth differentiation factor 15 (GDF15) is a stress-induced cytokine secreted into the circulation by a number of tissues under different pathological conditions, such as cardiovascular disease, cancer, or mitochondrial dysfunction. While GDF15 signaling through its recently identified hindbrain-specific receptor, GDNF family receptor alpha-like (GFRAL), has been proposed to be involved in the metabolic stress response, its endocrine role under chronic stress conditions is still poorly understood. Mitochondrial dysfunction is characterized by the impairment of oxidative phosphorylation (OXPHOS), leading to inefficient functioning of mitochondria and, consequently, to mitochondrial stress. Importantly, mitochondrial dysfunction is among the pathologies that most robustly induce GDF15 as a circulating cytokine.
The overall aim of this thesis was to elucidate the role of the GDF15-GFRAL pathway under mitochondrial stress conditions. For this purpose, a mouse model of skeletal muscle-specific mitochondrial stress achieved by ectopic expression of uncoupling protein 1 (UCP1), the HSA-Ucp1-transgenic (TG) mouse, was employed. As a consequence of mitochondrial stress, TG mice display a metabolic remodeling consisting of a lean phenotype, an improved glucose metabolism, an increased metabolic flexibility and a metabolic activation of white adipose tissue.
Making use of TG mice crossed with whole body Gdf15-knockout (GdKO) and Gfral-knockout (GfKO) mouse models, this thesis demonstrates that skeletal muscle mitochondrial stress induces the integrated stress response (ISR) and GDF15 in skeletal muscle, which is released into the circulation as a myokine (muscle-induced cytokine) in a circadian manner. Further, this work identifies GDF15-GFRAL signaling to be responsible for the systemic metabolic remodeling elicited by mitochondrial stress in TG mice. Moreover, this study reveals a daytime-restricted anorexia induced by the GDF15-GFRAL axis under muscle mitochondrial stress, which is, mechanistically, mediated through the induction of hypothalamic corticotropin releasing hormone (CRH). Finally, this work elucidates a so far unknown physiological outcome of the GDF15-GFRAL pathway: the induction of anxiety-like behavior.
In conclusion, this study uncovers a muscle-brain crosstalk under skeletal muscle mitochondrial stress conditions through the induction of GDF15 as a myokine that signals through the hindbrain-specific GFRAL receptor to elicit a stress response leading to metabolic remodeling and modulation of ingestive and anxiety-like behavior.
The morphogenesis of sessile plants is mainly driven by directional cell growth and cell division. The organization of their cytoskeleton and the mechanical properties of the cell wall greatly influence morphogenetic events in plants. It is well known that cortical microtubules (CMTs) contribute to directional growth by regulating the deposition of the cellulose microfibrils, as major cell wall fortifying elements. More recent findings demonstrate that mechanical stresses existing in cells and tissues influence microtubule organization. Also, in dividing cells, mechanical stress directions contribute to the orientation of the new cell wall. In comparison to the microtubule cytoskeleton, the role of the actin cytoskeleton in regulating shoot meristem morphogenesis has not been extensively studied.
This thesis focuses on the functional relevance of the actin cytoskeleton during cell and tissue scale morphogenesis in the shoot apical meristem (SAM) of Arabidopsis thaliana. Visualization of transcriptional reporters indicates that ACTIN2 and ACTIN7 are two highly expressed actin genes in the SAM. A link between the actin cytoskeleton and SAM development derives from the observation that the act2-1 act7-1 double mutant has abnormal cell shape and perturbed phyllotactic patterns. Live-cell imaging of the actin cytoskeleton further shows that its organization correlates with cell shape, which indicates a potential role of actin in influencing cellular morphogenesis.
In this thesis, a detailed characterization of the act2-1 act7-1 mutant reveals that perturbation of actin leads to more rectangular cellular geometries with more 90° cell internal angles and a higher incidence of four-way junctions (four cell boundaries intersecting at a single point). This observation deviates from the conventional tricellular junctions found in epidermal cells. Quantitative cellular-level growth data indicate that these differences in the act2-1 act7-1 mutant arise from reduced accuracy in the placement of the new cell wall, as well as its mechanical maturation. The changes in cellular morphology observed in the act2-1 act7-1 mutant result in cell-packing defects that subsequently compromise the flow of information among cells in the SAM.
Scope: Several studies show that excessive lipid intake can cause hepatic steatosis. To investigate lipotoxicity at the cellular level, palmitate (PA) is often used to strongly increase lipid droplet (LD) content. One way to remove LDs is autophagy, while it remains controversial whether autophagy itself is affected by PA. This study aims to investigate whether PA-induced LD accumulation can impair autophagy and whether punicalagin, a natural autophagy inducer from pomegranate, can improve it.
Methods and results: To verify the role of autophagy in LD degradation, HepG2 cells are treated with PA and analyzed for LD and perilipin 2 content in the presence of the autophagy inducer Torin 1 and the inhibitor 3-methyladenine. PA alone seems to initially induce autophagy-related proteins but impairs autophagic flux in a time-dependent manner, as seen by comparing 6 and 24 h of PA treatment. To examine whether punicalagin can prevent this autophagy impairment, cells are cotreated for 24 h with PA and punicalagin. The results show that punicalagin preserves the expression of autophagy-related proteins and autophagic flux, while simultaneously decreasing LDs and perilipin 2.
Conclusion: The data provide new insights into the effect of PA-induced excessive LD content on autophagy and suggest autophagy-inducing properties of punicalagin, indicating that punicalagin may be a health-beneficial compound for future research on lipotoxicity in the liver.
River floods are among the most devastating natural hazards worldwide. As their generation is highly dependent on climatic conditions, their magnitude and frequency are projected to be affected by future climate change. Therefore, it is crucial to study the ways in which a changing climate will, and already has, influenced flood generation, and thereby flood hazard. Additionally, it is important to understand how other human influences - specifically altered land cover - affect flood hazard at the catchment scale.
The ways in which flood generation is influenced by climatic and land cover conditions differ substantially between regions. The spatial variability of these effects needs to be taken into account by using consistent datasets across large scales as well as applying methods that can reflect this heterogeneity. Therefore, in the first study of this cumulative thesis a complex network approach is used to find 10 clusters of similar flood behavior among 4390 catchments in the conterminous United States. By using a consistent set of 31 hydro-climatological and land cover variables, and training a separate Random Forest model for each of the clusters, the regional controls on flood magnitude trends between 1960 and 2010 are detected. It is shown that changes in rainfall are the most important drivers of these trends, while they are regionally controlled by land cover conditions.
While climate change is most commonly associated with flood magnitude trends, it has been shown to also influence flood timing. This can lead to trends in the size of the area across which floods occur simultaneously, the flood synchrony scale. The second study is an analysis of data from 3872 European streamflow gauges and shows that flood synchrony scales have increased in Western Europe and decreased in Eastern Europe. These changes are attributed to changes in flood generation, especially a decreasing relevance of snowmelt. Additionally, the analysis shows that both the absolute values and the trends of flood magnitudes and flood synchrony scales are positively correlated. If these trends persist in the future and are not accounted for, the combined increases of flood magnitudes and flood synchrony scales can exceed the capacities of disaster relief organizations and insurers.
Hazard cascades are an additional way through which climate change can influence different aspects of flood hazard. The 2019/2020 wildfires in Australia, which were preceded by an unprecedented drought and extinguished by extreme rainfall that led to local flooding, present an opportunity to study the effects of multiple preceding hazards on flood hazard. All these hazards are individually affected by climate change, additionally complicating the interactions within the cascade. By estimating and analyzing the burn severity, rainfall magnitude, soil erosion and stream turbidity in differently affected tributaries of the Manning River catchment, the third study shows that even low magnitude floods can pose a substantial hazard within a cascade.
This thesis shows that humanity is affecting flood hazard in multiple ways with spatially and temporally varying consequences, many of which were previously neglected (e.g. flood synchrony scale, hazard cascades). To allow for informed decision making in risk management and climate change adaptation, it will be crucial to study these aspects across the globe and to project their trajectories into the future. The presented methods can depict the complex interactions of different flood drivers and their spatial variability, providing a basis for the assessment of future flood hazard changes. The role of land cover should receive more attention in future flood risk modelling and management studies, while holistic, transferable frameworks for hazard cascade assessment still need to be designed.
Natural hazards pose a threat to human health and life. In Germany, where the research for this thesis was conducted, numerous weather extremes occurred in the recent past that caused high numbers of fatalities and huge financial losses. This research is centred on two relevant natural hazards: heat stress and flooding. Preventing negative health impacts and deaths, as well as structural and monetary damage, is the purpose of risk management, and this requires citizens to adapt as well. Risk communication is implemented to foster people’s risk perception and motivate individual adaptation. However, methods of risk and crisis communication are often not evaluated in a structured manner. Much interdisciplinary research exists on both risk perception and adaptation; however, little is known about the connection between the two. Furthermore, the existing research on risk communication is often not theory-driven, and its impact on individual adaptation and risk perception is not thoroughly documented. This dissertation follows three research aims: (1) Compare psychological theories that contribute to natural hazard research. (2) Explore risk perception and adaptive behaviour by applying multiple methods. And (3) evaluate one risk communication method and one crisis communication method in a theory-driven manner to determine their impact on risk perception and adaptive behaviour. First, a literature review is provided on existing psychological theories that aim to explain the behaviour of individuals with regard to natural hazards. The three key theories included are the Protection Motivation Theory (PMT), the Protective Action Decision Model (PADM), and the Risk Information Seeking and Processing Model (RISP). Each of these is described and compared to the others with a focus on their explanatory power and practical significance in interdisciplinary research.
Theoretical adaptations and possible extensions for future research are proposed for the presented approaches. Second, a multimethod field study on heat stress at an open-air event is presented. Face-to-face surveys (n = 306) and behavioural observations (n = 2750) were carried out at a horticultural show in Würzburg in summer 2018. The visitors’ risk perception, adaptive behaviour, and activity level were analysed and compared between hot days, summer days, and rainy days, applying correlation analyses, ANOVA, and multiple regression analyses. Heat risk perception was generally high, but most respondents were unaware of heat warnings on the day of their visit. On hot days, the highest level of adaptation and lower activity levels were observed. Discrepancies between reported and observed adaptation emerged for different age groups. Third, a telephone and web-based household survey on heat stress was conducted in the cities of Würzburg, Potsdam, and Remscheid in 2019 (n = 1417). The PADM served as the study’s theoretical framework. In multiple regression analyses, the PADM factors of environmental and demographic context, risk communication, and psychological processes explained a substantial share of the variance of protection motivation, protective response, and emotion-focused coping. Elements of crisis communication of a heat warning were evaluated experimentally. Results showed that understanding and adaptation intention were significantly higher in individuals who had received action recommendations alongside the heat warning. Fourth, the focus is set on a risk communication method in the flood context. A series of workshops on individual flood protection was carried out in six different settings. The participants (n = 115) answered a pretest-posttest questionnaire. Mixed-model analyses revealed significant increases in self-efficacy, subjective knowledge, and protection motivation.
Stronger effects were observed in younger participants and in those with lower levels of previous knowledge on flood adaptation as well as no flood experience. The findings of this thesis help to understand individual adaptation, as well as the possible impacts of risk and crisis communication on risk perception and adaptation. The scientific background of this work is rooted in the disciplines of psychology and geosciences. The two theories PMT and PADM proved to be useful theoretical frameworks for the presented studies and for suggesting improvements in risk communication methods. A broad picture of individual adaptation is captured through a variety of self-report methods (face-to-face, telephone-based, web-based, and paper-pencil surveys) and behavioural observations, which recorded past and intended behaviour. Alongside further methodological recommendations, the theory-driven evaluations of risk and crisis communication methods can serve as best-practice examples for future evaluation studies in natural hazard research, but also in other sciences dealing with risk behaviour, to identify and improve effective risk communication pathways.
Knowledge graphs are structured repositories of knowledge that store facts about the general world or a particular domain in terms of entities and their relationships. Owing to the heterogeneity of use cases that are served by them, there arises a need for the automated construction of domain-specific knowledge graphs from texts. While there have been many research efforts towards open information extraction for automated knowledge graph construction, these techniques do not perform well in domain-specific settings. Furthermore, regardless of whether they are constructed automatically from specific texts or based on real-world facts that are constantly evolving, all knowledge graphs inherently suffer from incompleteness as well as errors in the information they hold.
This thesis investigates the challenges encountered during knowledge graph construction and proposes techniques for their curation (a.k.a. refinement), including the correction of semantic ambiguities and the completion of missing facts. Firstly, we leverage existing approaches for the automatic construction of a knowledge graph in the art domain with open information extraction techniques and analyse their limitations. In particular, we focus on the challenging task of named entity recognition for artwork titles and show empirical evidence of performance improvement with our proposed solution for the generation of annotated training data.
Towards the curation of existing knowledge graphs, we identify the issue of polysemous relations that represent different semantics based on the context. Having concrete semantics for relations is important for downstream applications (e.g. question answering) that are supported by knowledge graphs. Therefore, we define the novel task of finding fine-grained relation semantics in knowledge graphs and propose FineGReS, a data-driven technique that discovers potential sub-relations with fine-grained meaning from existing polysemous relations. We leverage knowledge representation learning methods that generate low-dimensional vectors (or embeddings) for knowledge graphs to capture their semantics and structure. The efficacy and utility of the proposed technique are demonstrated by comparing it with several baselines on the entity classification use case.
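The abstract does not specify FineGReS's internal algorithm. One simple way to realize the underlying idea of discovering sub-relations of a polysemous relation from embeddings is to group the TransE-style offset vectors (tail minus head) of all entity pairs the relation connects: pairs expressing different senses of the relation should form different offset clusters. The sketch below uses invented synthetic embeddings and a naive cosine-similarity grouping, not the actual FineGReS procedure.

```python
import numpy as np

# Synthetic TransE-style offsets t - h for a polysemous relation
# (e.g. "partOf" used both for chapter/book and city/country pairs;
# the two senses and their directions are invented for illustration).
rng = np.random.default_rng(0)
sense_a = rng.normal([1.0, 0.0], 0.05, (15, 2))  # offsets of sense A pairs
sense_b = rng.normal([0.0, 1.0], 0.05, (15, 2))  # offsets of sense B pairs
offsets = np.vstack([sense_a, sense_b])

# Naive grouping: assign each offset to the more cosine-similar of two
# seed offsets (first and last sample); each group is a candidate
# fine-grained sub-relation.
seeds = offsets[[0, -1]]
sims = offsets @ seeds.T / (
    np.linalg.norm(offsets, axis=1, keepdims=True)
    * np.linalg.norm(seeds, axis=1))
labels = sims.argmax(axis=1)
```
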
Further, we explore the semantic representations in knowledge graph embedding models. In the past decade, these models have shown state-of-the-art results for the task of link prediction in the context of knowledge graph completion. In view of the popularity and widespread application of the embedding techniques not only for link prediction but also for different semantic tasks, this thesis presents a critical analysis of the embeddings by quantitatively measuring their semantic capabilities. We investigate and discuss the reasons for the shortcomings of embeddings in terms of the characteristics of the underlying knowledge graph datasets and the training techniques used by popular models.
Following up on this, we propose ReasonKGE, a novel method for generating semantically enriched knowledge graph embeddings by taking into account the semantics of the facts that are encapsulated by an ontology accompanying the knowledge graph. With a targeted, reasoning-based method for generating negative samples during the training of the models, ReasonKGE is able to not only enhance the link prediction performance, but also reduce the number of semantically inconsistent predictions made by the resultant embeddings, thus improving the quality of knowledge graphs.
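ReasonKGE's reasoning procedure is not detailed in the abstract. As a loose, hypothetical stand-in, the sketch below uses simple domain/range type-checking as its "reasoner": tail corruptions that violate the relation's range are provably false under the ontology, so they can safely be used as negative samples during embedding training. All entity, type, and relation names are invented.

```python
# Hypothetical mini-ontology and typed entities (all names invented).
ONTOLOGY = {"paints": ("Artist", "Artwork")}   # relation -> (domain, range)
TYPES = {"daVinci": "Artist", "monet": "Artist",
         "monaLisa": "Artwork", "waterLilies": "Artwork",
         "paris": "City"}

def is_consistent(h, r, t):
    """Stand-in for an ontology reasoner: a triple is consistent only
    if head and tail satisfy the relation's domain and range types."""
    dom, rng = ONTOLOGY[r]
    return TYPES[h] == dom and TYPES[t] == rng

def reasoning_negatives(fact, entities):
    """Tail corruptions the 'reasoner' proves inconsistent are
    guaranteed-false triples, hence safe negative samples."""
    h, r, t = fact
    return [(h, r, e) for e in entities
            if e != t and not is_consistent(h, r, e)]

negs = reasoning_negatives(("daVinci", "paints", "monaLisa"), list(TYPES))
```

Note that a type-consistent corruption such as ("daVinci", "paints", "waterLilies") is deliberately not returned: the ontology cannot prove it false, so it is not a safe negative under this scheme.
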
The Arctic nearshore zone plays a key role in the carbon cycle. Organic-rich sediments are eroded from permafrost-affected coastlines and can be directly transferred to the nearshore zone. Permafrost in the Arctic stores a large amount of organic matter and is vulnerable to thermo-erosion, which is expected to increase due to climate change. This will likely result in higher sediment loads in nearshore waters and has the potential to alter local ecosystems by limiting light transmission into the water column, thus restricting primary production to its topmost part, and by increasing nutrient export from coastal erosion. Greater organic matter input could result in the release of greenhouse gases to the atmosphere. Climate change also acts upon the fluvial system, leading to greater discharge to the nearshore zone. It also leads to decreasing sea-ice cover, which will both increase wave energy and lengthen the open-water season. Yet, knowledge of these processes and their resulting impact on the nearshore zone is scarce, because access to, and instrument deployment in, the nearshore zone is challenging.
Remote sensing can alleviate these issues by providing rapid data delivery in otherwise inaccessible areas. However, the waters in the Arctic nearshore zone are optically complex, with multiple influencing factors, such as organic-rich suspended sediments, colored dissolved organic matter (cDOM), and phytoplankton. The goal of this dissertation was to use remotely sensed imagery to monitor processes related to turbidity caused by suspended sediments in the Arctic nearshore zone. In-situ measurements of water-leaving reflectance and surface water turbidity were used to calibrate a semi-empirical algorithm that retrieves turbidity from satellite imagery. Based on this algorithm and ancillary ocean and climate variables, the mechanisms underpinning nearshore turbidity in the Arctic were identified at a resolution not achieved before.
The calibration of the Arctic Nearshore Turbidity Algorithm (ANTA) was based on in-situ measurements from the coastal and inner-shelf waters around Herschel Island Qikiqtaruk (HIQ) in the western Canadian Arctic from the summer seasons of 2018 and 2019. It performed better at retrieving turbidity from remotely sensed imagery than existing algorithms developed for global applications. These existing algorithms lacked validation data from permafrost-affected waters and were thus not able to reflect the complexity of Arctic nearshore waters. The ANTA has a higher sensitivity towards the lowest turbidity values, which is an asset for identifying sediment pathways in the nearshore zone. Its transferability to areas beyond HIQ was successfully demonstrated using turbidity measurements matching satellite image recordings from Adventfjorden, Svalbard. The ANTA is a powerful tool that provides robust turbidity estimations in a variety of Arctic nearshore environments.
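The abstract does not give ANTA's functional form. Many semi-empirical turbidity retrievals use a Nechad-type relation, T = A * rho_w / (1 - rho_w / C), where rho_w is the water-leaving reflectance, C is a fixed constant and the coefficient A is calibrated against matched in-situ turbidity/reflectance pairs. The sketch below fits A by linear least squares on synthetic data, purely as an assumed illustration of such a calibration, not ANTA's actual coefficients.

```python
import numpy as np

# Assumed semi-empirical form: T = A * rho / (1 - rho / C).
# All numbers below are synthetic stand-ins for calibration pairs.
C = 0.17                                  # fixed reflectance constant
A_true = 250.0                            # "unknown" coefficient (FNU)
rho = np.linspace(0.005, 0.10, 40)        # water-leaving reflectance
turb = A_true * rho / (1 - rho / C)       # synthetic "measured" turbidity

# The model is linear in x = rho / (1 - rho / C), so the least-squares
# estimate of A has a closed form: A = sum(T*x) / sum(x*x).
x = rho / (1 - rho / C)
A_fit = float(np.sum(turb * x) / np.sum(x * x))
```
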
Drivers of nearshore turbidity in the Arctic were analyzed by combining ANTA results from the summer season 2019 at HIQ with ocean and climate variables obtained from the weather station at HIQ, the ERA5 reanalysis database, and the Mackenzie River discharge. ERA5 reanalysis data were obtained as domain averages over the Canadian Beaufort Shelf. Nearshore turbidity was linearly correlated with wind speed, significant wave height and wave period. Interestingly, nearshore turbidity was only correlated with wind speed at the shelf, but not with the in-situ measurements from the weather station at HIQ. This shows that nearshore turbidity, albeit of limited spatial extent, is influenced by the weather conditions multiple kilometers away, rather than in its direct vicinity. The large influence of wave energy on nearshore turbidity indicates that freshly eroded material off the coast is a major contributor to the nearshore sediment load. This contrasts with results from temperate and tropical oceans, where tides and currents are the major drivers of nearshore turbidity. The Mackenzie River discharge was not identified as a driver of nearshore turbidity in 2019; however, the analysis of 30 years of Landsat archive imagery from 1986 to 2016 suggests a direct link between the prevailing wind direction, which heavily influences the Mackenzie River plume extent, and nearshore turbidity around HIQ. This discrepancy could be caused by the abnormal discharge behavior of the Mackenzie River in 2019.
This dissertation has substantially advanced the understanding of suspended sediment processes in the Arctic nearshore zone and provided new monitoring tools for future studies. The presented results will help to understand the role of the Arctic nearshore zone in the carbon cycle under a changing climate.
We live in an aging society. The change in demographic structures poses a number of challenges, including an increase in age-associated diseases. Delirium, dementia, and depression are considered to be of particular interest in the field of aging and mental health. A common theory regarding healthy aging and mental health is that the highest satisfaction and best performance are achieved when a person's abilities match the demands of their environment. In this context, the person's environment includes both the physical and the social environment. Based on this assumption, this dissertation focuses on the investigation of non-pharmacological interventions that modify environmental factors in order to facilitate the prevention and treatment of mental disorders in older patients and their caregivers. The first part of this dissertation consists of two publications and deals with the prevention of postoperative delirium in elderly patients. The PAWEL study investigated the use of a multimodal, non-pharmacological intervention in the routine care of patients aged 70 years or older undergoing elective surgery. The intervention included an interdepartmental delirium prevention team, daily use of seven manualized “best practice” procedures, structured staff training on delirium, and the adaptation of the hospital environment to the patients’ needs. The second part of the dissertation used a meta-analysis to investigate whether technology-based interventions are a suitable form of support for informal caregivers of people with dementia. Subgroup analyses were conducted to examine the effect of different types of technology on caregiver burden and depressive symptoms. The following main results were found: The PAWEL study showed that the use of a multimodal, non-pharmacological intervention resulted in a significantly lower incidence rate of postoperative delirium and reduced days with delirium in the intervention group compared to the control group.
However, this difference could not be observed in the group of patients undergoing elective cardiac surgery. The results of the meta-analysis showed that technology-based interventions offer a promising alternative to traditional “face-to-face” services. Significant effect sizes could be found in relation to both the burden and the depressive symptoms of caregiving relatives. These results provide further important information on the significant impact of non-pharmacological interventions that modify environmental factors on mental health, and support the consideration of such interventions in the prevention and treatment of mental disorders in both older patients and their caregivers.
Text is a ubiquitous entity in our world and daily life. We encounter it nearly everywhere: in shops, on the street, or in our flats. Nowadays, more and more text is contained in digital images. These images are either taken using cameras, e.g., smartphone cameras, or using scanning devices such as document scanners. The sheer amount of available data, e.g., millions of images taken by Google Streetview, prohibits manual analysis and metadata extraction. Although much progress has been made in the area of optical character recognition (OCR) for printed text in documents, broad areas of OCR are still not fully explored and hold many research challenges. With the mainstream usage of machine learning and especially deep learning, one of the most pressing problems is the availability and acquisition of annotated ground truth for the training of machine learning models, because obtaining annotated training data using manual annotation mechanisms is time-consuming and costly. In this thesis, we address the question of how we can reduce the cost of acquiring ground-truth annotations for the application of state-of-the-art machine learning methods to optical character recognition pipelines. To this end, we investigate how we can reduce the annotation cost by using only a fraction of the typically required ground-truth annotations, e.g., for scene text recognition systems. We also investigate how we can use synthetic data to reduce the need for manual annotation work, e.g., in the area of document analysis for archival material. In the area of scene text recognition, we have developed a novel end-to-end scene text recognition system that can be trained using inexact supervision and shows competitive, state-of-the-art performance on standard benchmark datasets for scene text recognition. Our method consists of two independent neural networks, combined using spatial transformer networks.
Both networks learn together to perform text localization and text recognition at the same time while only using annotations for the recognition task. We apply our model to end-to-end scene text recognition (meaning localization and recognition of words) and pure scene text recognition without any changes in the network architecture.
In the second part of this thesis, we introduce novel approaches for using and generating synthetic data to analyze handwriting in archival data. First, we propose a novel preprocessing method to determine whether a given document page contains any handwriting. We propose a novel data synthesis strategy to train a classification model and show that our data synthesis strategy is viable by evaluating the trained model on real images from an archive. Second, we introduce the new analysis task of handwriting classification. Handwriting classification entails classifying a given handwritten word image into classes such as date, word, or number. Such an analysis step allows us to select the best-fitting recognition model for subsequent text recognition; it also allows us to reason about the semantic content of a given document page without the need for fine-grained text recognition and further analysis steps, such as named entity recognition. We show that our proposed approaches work well when trained on synthetic data. Further, we propose a flexible metric learning approach to allow zero-shot classification of classes unseen during the network’s training. Last, we propose a novel data synthesis algorithm to train off-the-shelf pixel-wise semantic segmentation networks for documents. Our data synthesis pipeline is based on the well-known StyleGAN architecture and can synthesize realistic document images with their corresponding segmentation annotation without the need for any annotated data.
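The metric-learning idea above can be illustrated with a nearest-prototype classifier: if a network embeds word images of the same handwriting class close together, a class never seen during training can still be recognized from a single labelled reference embedding. The embeddings and class names below are invented stand-ins, not outputs of the thesis's actual network.

```python
import numpy as np

# Hypothetical class prototypes in a learned embedding space; "word"
# plays the role of a class unseen during training, represented only
# by one labelled reference embedding.
prototypes = {"date":   np.array([1.0, 0.0]),
              "number": np.array([0.0, 1.0]),
              "word":   np.array([-1.0, 0.0])}

def classify(embedding):
    """Assign a query embedding to the nearest class prototype."""
    return min(prototypes,
               key=lambda c: np.linalg.norm(prototypes[c] - embedding))

label = classify(np.array([-0.9, 0.1]))   # nearest to the "word" prototype
```
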
Vegetation change at high latitudes is one of the central issues today with respect to ongoing climate change and the feedbacks it may trigger. In high-latitude ecosystems, the expected changes include boreal treeline advance as well as changes in composition, phenology, plant physiology, biomass (phytomass) and productivity. However, the rate and the extent of these changes under climate change are as yet poorly understood, and projections are necessary for effective adaptive strategies and the proactive minimisation of possible negative feedbacks.
The vegetation itself and the environmental conditions that play a major role in its development and distribution are diverse throughout the Subarctic and Arctic. Among the least investigated areas is central Chukotka in north-eastern Siberia, Russia. Chukotka has mountainous terrain and a wide variety of vegetation types along the gradient from treeless tundra to northern taiga forests. The treeline there, in contrast to subarctic North America and north-western and central Siberia, is formed by a deciduous conifer, Larix cajanderi Mayr. The vegetation varies from prostrate lichen Dryas octopetala L. tundra to open graminoid (hummock and non-hummock) tundra to tall Pinus pumila (Pall.) Regel shrublands to sparse and dense larch forests.
Hence, this thesis presents investigations of recent compositional and above-ground biomass (AGB) changes, as well as potential future changes in AGB, in central Chukotka. The aim is to assess how tundra-taiga vegetation develops under changing climate conditions, particularly in central Chukotka, Far East Russia. Therefore, three main research questions were considered:
1) What changes in vegetation composition have recently occurred in central Chukotka?
2) How have AGB rates and distribution changed in central Chukotka?
3) What are the spatial dynamics and rates of tree AGB change in the upcoming millennia in the northern tundra-taiga of central Chukotka?
Remote sensing provides information on the spatial and temporal variability of vegetation. I used Landsat satellite data together with field data (foliage projective cover and AGB) from two expeditions to Chukotka in 2016 and 2018 to upscale vegetation types and AGB for the study area. More specifically, I used Landsat spectral indices (Normalised Difference Vegetation Index (NDVI), Normalised Difference Water Index (NDWI) and Normalised Difference Snow Index (NDSI)) and constrained ordination (redundancy analysis, RDA) for subsequent k-means-based land-cover classification and generalised additive model (GAM)-based AGB maps for 2000/2001/2002 and 2016/2017. I also used TanDEM-X DEM data for a topographical correction of the Landsat satellite data and to derive slope, aspect and Topographical Wetness Index (TWI) data for forecasting AGB.
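The three Landsat indices named above all share the same normalized-difference form, only with different band pairs. A minimal sketch (band reflectances are illustrative values; NDWI here follows the McFeeters green/NIR formulation, which the abstract does not specify):

```python
def normalized_difference(a, b):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    return (a - b) / (a + b)

# Surface-reflectance values for one pixel (illustrative only).
nir, red, green, swir1 = 0.30, 0.05, 0.08, 0.12

ndvi = normalized_difference(nir, red)      # vegetation greenness
ndwi = normalized_difference(green, nir)    # open water (McFeeters)
ndsi = normalized_difference(green, swir1)  # snow cover
```
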
Firstly, in 2016, taxa-specific projective cover data were collected during a Russian-German expedition. I processed the field data and coupled them with Landsat spectral indices in the RDA model that was used for k-means classification. I could establish four meaningful land-cover classes: (1) larch closed-canopy forest, (2) forest tundra and shrub tundra, (3) graminoid tundra and (4) prostrate herb tundra and barren areas, and accordingly, I produced land-cover maps for 2000/2001/2002 and 2016/2017. Changes in land-cover classes between the beginning of the century (2000/2001/2002) and the present time (2016/2017) were estimated and interpreted as recent compositional changes in central Chukotka. The transition from graminoid tundra to forest tundra and shrub tundra was interpreted as shrubification and amounts to a 20% area increase in the tundra-taiga zone and a 40% area increase in the northern taiga. Major contributors to shrubification are alder, dwarf birch and some species of the heather family. Land-cover change from the forest tundra and shrub tundra class to the larch closed-canopy forest class is interpreted as tree infilling and is notable in the northern taiga. We find almost no land-cover changes in the present treeless tundra.
Secondly, total AGB state and change were investigated for the same areas. In addition to the total vegetation AGB, I provided estimations for the different taxa present at the field sites. AGB in the study region of central Chukotka ranged from 0 kg m-2 in barren areas to 16 kg m-2 in closed-canopy forests, with larch trees contributing the most. A comparison of changes in AGB within the investigated period from 2000 to 2016 shows that the greatest changes (up to 1.25 kg m-2 yr-1) occurred in the northern taiga and in areas where land cover changed to larch closed-canopy forest. Our estimations indicate a general increase in total AGB throughout the investigated tundra-taiga and northern taiga, whereas the tundra showed no evidence of change in AGB within the 15 years from 2002 to 2017.
In the third manuscript, potential future AGB changes were estimated based on simulations with the individual-based, spatially explicit vegetation model LAVESI under different climate scenarios, namely the Representative Concentration Pathways (RCPs) RCP 2.6, RCP 4.5 and RCP 8.5, with or without cooling after 2300 CE. Tree AGB was simulated from the current state until 3000 CE for the northern tundra-taiga study area for larch, because we expect the most notable changes to be associated with forest expansion in the treeline ecotone. The spatial distribution and current state of tree AGB were validated against AGB field data, AGB extracted from Landsat satellite data, and a high-spatial-resolution image with distinctive trees visible. The simulation results indicate plot-wise differences in tree AGB dynamics, depending on the distance to the current treeline. The simulated tree AGB dynamics are in concordance with fundamental ecological (migrational and successional) processes: tree stand formation in the simulated results starts with seed dispersal, followed by tree stand establishment, densification and episodic thinning. Our results suggest mostly densification of existing tree stands in the study region within the current century and a lagged forest expansion (up to 39% of the total area under RCP 8.5) under all considered climate scenarios without cooling, in different local areas depending on the proximity to the current treeline. In scenarios with cooling air temperature after 2300 CE, forests stopped expanding at 2300 CE (up to 10%, RCP 8.5) and then gradually retreated to their pre-21st-century position. The average rates of tree AGB increase are strongest in the first 300 years from the beginning of the 21st century and depend on the RCP scenario, with the highest rates, as expected, under RCP 8.5.
Overall, this interdisciplinary thesis shows a successful integration of field data, satellite data and modelling for tracking recent and predicting future vegetation changes in mountainous subarctic regions. The obtained results are unique for the focus area in central Chukotka and, more generally, for mountainous high-latitude ecosystems.
One aspect of achieving a more sustainable chemical industry is the minimization of the usage of solvents and chemicals. Thus, optimization and development of chemical processes for large-scale production is preferably performed in small batches. The critical step in this approach is upscaling the batches from the small reaction systems to the large reactors mandatory for cost-efficient production in an industrial environment. Scaling up the bulk volume always goes along with increasing the surface area where the reaction medium is in contact with the confining vessel. Since volume scales with the cube of the linear dimension while surface area scales with its square, their ratio is size-dependent. The influence of reaction vessel walls can change the reaction performance. A number of phenomena occurring at the surface-liquid interface can affect reaction rates and yields, resulting in possible difficulties in predicting and extrapolating from small production scales to large industrial processes. The application of levitated droplets as containerless reaction vessels provides a promising possibility to avoid the above-mentioned issues.
In the presented work, an efficient coupling of acoustically levitated droplets to an ion mobility (IM) spectrometer operating at ambient conditions was designed for real-time monitoring of chemical reactions. The design of the system comprises noncontact sampling and ionization of the droplet, realised by laser desorption/ionization at 2.94 µm. The scope of the work includes fundamental studies on the laser irradiation of droplets confined in an acoustic field. Understanding this phenomenon is crucial to comprehending the temporal and spatial confinement of the generated ion plume, which influences the resolution of the system.
The set-up includes an acoustic trap, laser irradiation and ion-manipulating electrostatic lenses operating at high voltage at ambient pressure. The complexity of the design needs to be fully considered for an effective ion transfer at the interface region between the levitated droplet and the IM spectrometer. For sampling and ionization, two distinct laser pulse lengths were evaluated: ns and µs. Irradiation via µs laser pulses provides several advantages: i) the droplet volume is not extensively impinged, as in the case of ns laser pulses, allowing the sampling of only a small volume of the droplet; ii) the lower fluence results in less pronounced oscillations of the droplet confined in the acoustic field, so the droplet is not dissipated out of the acoustic field, which would lead to loss of the sample; iii) the mild laser irradiation results in better spatial and temporal confinement of the ion plume, leading to better resolution of the detected ion packets. Finally, this knowledge allows the application of ion optics necessary to induce an ion flow between the droplet suspended in the acoustic field and the IM spectrometer. The ion optics, composed of two electrostatic lenses placed in the immediate vicinity of the droplet, allow effective focusing of the ion plume and its redirection directly to the IM spectrometer entrance. This novel coupling has proved successful for the detection of simple molecules ionizable at the 2.94 µm wavelength. To further demonstrate the applicability of the system, a proof-of-principle reaction was selected that fulfils the requirements of the system and was subjected to a comprehensive investigation of its performance. Herein, the reaction between N-Boc cysteine methyl ester and allyl alcohol was performed in a batch reactor and monitored on-line via 1H NMR to establish the reaction progress.
This additional assessment confirmed that the thiol-ene coupling proceeds within the first 20 minutes of irradiation with a reaction yield above 50%, proving that the reaction can serve as a case study for assessing the capabilities of the developed system.
The echo chamber model describes the development of groups in heterogeneous social networks. By a heterogeneous social network we mean a set of individuals, each of whom holds exactly one opinion; the existing relationships between individuals can then be represented by a graph. The echo chamber model is a time-discrete model which, like a board game, is played in rounds. In each round, an existing relationship is selected uniformly at random from the network and the two connected individuals interact. If the opinions of the individuals involved are sufficiently similar, they move closer together in their opinions, whereas if the opinions are too far apart, they break off their relationship and one of the individuals seeks a new relationship. In this thesis we examine the building blocks of this model. We start from the observation that changes in the structure of relationships in the network can be described by a system of interacting particles in a more abstract space.
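The round dynamics just described can be sketched as a toy simulation. This is an illustrative sketch only: the confidence threshold, the opinion-update fraction, and the uniform rewiring rule are assumptions, not the thesis's formal definitions.

```python
import random

def echo_chamber_round(opinions, edges, threshold=0.5, mu=0.25, rng=random):
    """One round of a toy echo chamber model.

    opinions: dict node -> opinion in [0, 1]
    edges: set of frozensets {u, v} (the relationship graph)
    A uniformly chosen edge is either reinforced (opinions move a
    fraction mu toward each other) or broken, with one endpoint
    rewiring to a uniformly chosen non-neighbour.
    """
    u, v = tuple(rng.choice(sorted(edges, key=sorted)))
    if abs(opinions[u] - opinions[v]) <= threshold:
        # similar opinions: both endpoints move toward each other
        shift = mu * (opinions[v] - opinions[u])
        opinions[u] += shift
        opinions[v] -= shift
    else:
        # conflicting opinions: break the edge, u seeks a new partner
        edges.discard(frozenset((u, v)))
        candidates = [w for w in opinions
                      if w not in (u, v) and frozenset((u, w)) not in edges]
        if candidates:
            edges.add(frozenset((u, rng.choice(candidates))))
    return opinions, edges
```

Iterating this round on a small graph already reproduces the qualitative behaviour: similar opinions cluster, while conflicting ties are rewired.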
These reflections lead to the definition of a new abstract graph that encompasses all possible relational configurations of the social network. This provides us with the geometric understanding necessary to analyse the dynamic components of the echo chamber model in Part III. As a first step, in Section 7, we leave aside the opinions of the individuals and assume that the position of the edges changes in each round as described above, in order to obtain a basic understanding of the underlying dynamics. Using Markov chain theory, we find upper bounds on the speed of convergence of an associated Markov chain to its unique stationary distribution and show that there are networks which, although distinguishable from one another, cannot be told apart under the dynamics analysed, in the sense that the stationary distribution of the associated Markov chain gives equal weight to these networks.
In the reversible cases, we focus in particular on the explicit form of the stationary distribution as well as on the lower bounds of the Cheeger constant to describe the convergence speed.
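For intuition, the stationary distribution and the spectral gap governing the convergence speed can be computed numerically for a small reversible chain. The 3-state transition matrix below is an illustrative stand-in, not the model's actual configuration space.

```python
import numpy as np

def stationary_and_gap(P):
    """Stationary distribution and spectral gap of a stochastic matrix P.

    pi solves pi P = pi; for a reversible, irreducible, aperiodic chain
    the gap 1 - |lambda_2| (second-largest eigenvalue modulus) controls
    the speed of convergence to pi.
    """
    evals, evecs = np.linalg.eig(P.T)
    # the left eigenvector for eigenvalue 1 is the stationary distribution
    idx = np.argmin(np.abs(evals - 1.0))
    pi = np.real(evecs[:, idx])
    pi = pi / pi.sum()
    mods = np.sort(np.abs(evals))[::-1]   # eigenvalue moduli, descending
    return pi, 1.0 - mods[1]

# toy reversible 3-state chain (a lazy random walk on a path)
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
pi, gap = stationary_and_gap(P)   # pi = [0.25, 0.5, 0.25], gap = 0.5
```

The larger the gap, the faster the chain forgets its starting network; the Cheeger constant mentioned above provides lower bounds on this gap.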
The final result of Section 8, based on absorbing Markov chains, shows that in a reduced version of the echo chamber model, a hierarchical structure of the number of conflicting relations can be identified.
We can use this structure to determine an upper bound on the expected absorption time via a quasi-stationary distribution. This hierarchical structure also provides a bridge to classical theories of pure death processes. We conclude by showing how future research can exploit this link and by discussing the importance of the results as building blocks for a full theoretical understanding of the echo chamber model. Finally, Part IV presents a published paper on the birth-death process with partial catastrophe. The paper is based on the explicit calculation of the first moment of a catastrophe. Its first part rests entirely on an analytical treatment of second-degree recurrences with linear coefficients; the convergence to 0 of the resulting sequence, as well as the speed of convergence, is proved. The second part determines upper bounds on the expected value of the population size and its variance, as well as on the difference between the determined upper bound and the actual expected value. For these results we use almost exclusively the theory of ordinary nonlinear differential equations.
Queer Curating
(2022)
This book is for all who are brave enough to disrupt themselves and the ways art is shown. Beatrice Miersch develops radically relational alternatives to contemporary exhibition practices. To confront the social responsibility that comes with exhibiting art, she tests queer-feminist, cultural-studies, and self-reflective methods in the practice and theory of curating. Moments of curatorial disruption become productive, creative moments of interruption, through which she opens up reflective, open, engaged, and vulnerable perspectives on exhibiting and breaks with established structures.
Molecules are often naturally embedded in a complex environment. As a consequence, characteristic properties of a molecular subsystem can be substantially altered or new properties emerge due to interactions between molecular and environmental degrees of freedom. The present thesis is concerned with the numerical study of quantum dynamical and stationary properties of molecular vibrational systems embedded in selected complex environments.
In the first part, we discuss "strong-coupling" model scenarios for molecular vibrations interacting with few quantized electromagnetic field modes of an optical Fabry-Pérot cavity. We thoroughly elaborate on properties of emerging "vibrational polariton" light-matter hybrid states and examine the relevance of the dipole self-energy. Further, we identify cavity-induced quantum effects and an emergent dynamical resonance in a cavity-altered thermal isomerization model, which lead to significant suppression of thermal reaction rates. Moreover, for a single rovibrating diatomic molecule in an optical cavity, we observe non-adiabatic signatures in dynamics due to "vibro-polaritonic conical intersections" and discuss spectroscopically accessible "rovibro-polaritonic" light-matter hybrid states.
In the second part, we study a weakly coupled but numerically challenging quantum mechanical adsorbate-surface model system comprising a few thousand surface modes. We introduce an efficient construction scheme for a "hierarchical effective mode" approach to reduce the number of surface modes in a controlled manner. In combination with the multilayer multiconfigurational time-dependent Hartree (ML-MCTDH) method, we examine the vibrational adsorbate relaxation dynamics from different excited adsorbate states by solving the full non-Markovian system-bath dynamics for the characteristic relaxation time scale. We examine half-lifetime scaling laws from vibrational populations and identify prominent non-Markovian signatures as deviations from Markovian reduced system density matrix theory in vibrational coherences, system-bath entanglement and energy transfer dynamics.
In the final part of this thesis, we approach the dynamics and spectroscopy of vibronic model systems at finite temperature by formulating the ML-MCTDH method in the non-stochastic framework of thermofield dynamics. We apply our method to thermally-altered ultrafast internal conversion in the well-known vibronic coupling model of pyrazine. Numerically beneficial representations of multilayer wave functions ("ML-trees") are identified for different temperature regimes, which allow us to access thermal effects on both electronic and vibrational dynamics as well as spectroscopic properties for several pyrazine models.
Präexistente Musik im Film
(2022)
From the end of the world set to Richard Wagner's "Tristan und Isolde", via the vocal performance of a Björk sentenced to death at the gallows, to Johann Sebastian Bach's organ music as an explanatory model for hypersexuality: as strange as the cinema of Lars von Trier may seem, it offers equally manifold opportunities to reflect on a musical phenomenon that shapes much of today's art and entertainment world.
In Lars von Trier's films, the music heard is mostly music that existed before the films. On the one hand, such pre-existing music has a pronounced life of its own; on the other, its cinematic appropriation creates something new. Using one of the most influential directors of our time as an example, Pascal Rudolph examines how filmmakers adapt music and how meanings and effects arise in the process. Drawing on unpublished production material and insider interviews, the book offers, for the first time, detailed insights into the work on Lars von Trier's film projects in particular, and into the working processes of film-music design in general. The study does justice to the musical diversity of the films through its multi-perspective and transdisciplinary approach. In this way, its ten chapters illuminate the interplay of music and film from a range of angles.
The main goal of this dissertation is to experimentally investigate how focus is realised, perceived, and processed by native Turkish speakers, independent of preconceived notions of positional restrictions. Crucially, there are various issues and scientific debates surrounding focus in the Turkish language in the existing literature (chapter 1). It is argued in this dissertation that two factors led to the stagnant literature on focus in Turkish: the lack of clearly defined, modern understandings of information structure and its fundamental notion of focus, and the ongoing and ill-defined debate surrounding the question of whether there is an immediately preverbal focus position in Turkish. These issues gave rise to specific research questions addressed across this dissertation. Specifically, we were interested in how the focus dimensions such as focus size (comparing narrow constituent and broad sentence focus), focus target (comparing narrow subject and narrow object focus), and focus type (comparing new-information and contrastive focus) affect Turkish focus realisation and, in turn, focus comprehension when speakers are provided syntactic freedom to position focus as they see fit.
To provide data on these core goals, we presented three behavioural experiments based on a systematic framework of information structure and its notions (chapter 2): (i) a production task with trigger wh-questions and contextual animations manipulated to elicit the focus dimensions of interest (chapter 3), (ii) a timed acceptability judgment task in listening to the recorded answers in our production task (chapter 4), and (iii) a self-paced reading task to gather on-line processing data (chapter 5).
Based on the results of the conducted experiments, several conclusions are drawn in this dissertation (chapter 6). Firstly, this dissertation demonstrated empirically that there is no focus position in Turkish, neither in the sense of a strict focus position language nor as a focally loaded position facilitating focus perception and/or processing. While focus is, in fact, syntactically variable in the Turkish preverbal area, this is a consequence of movement triggered by other IS aspects like topicalisation and backgrounding, and of the observational markedness of narrow subject focus compared to narrow object focus. As for focus type in Turkish, this dimension is not associated with word order in production, perception, or processing. Significant acoustic correlates of focus size (broad sentence focus vs narrow constituent focus) and focus target (narrow subject focus vs narrow object focus) were observed in fundamental frequency and intensity, representing focal boost, (postfocal) deaccentuation, and the presence or absence of a phrase-final rise in the prenucleus, while the perceivability of these effects remains to be investigated. In contrast, no acoustic correlates of focus type in simple, three-word transitive structures were observed, with focus types being interchangeable in mismatched question-answer pairs. Overall, the findings of this dissertation highlight the need for experimental investigations of focus in Turkish, as theoretical predictions do not necessarily align with experimental data. As such, the fallacy of inferring causation from correlation should be strictly kept in mind, especially when constructions coincide with canonical structures, such as the immediately preverbal position in narrow object foci. Finally, numerous open questions remain to be explored, especially as focus and word order in Turkish are multifaceted.
As shown, givenness is a confounding factor when investigating focus types, while thematic role assignment potentially confounds word order preferences. Further research based on established, modern information structure frameworks is needed, with chapter 5 concluding with specific recommendations for such future research.
Understanding the changes that follow UV excitation in thionucleobases is of great importance for the study of light-induced DNA lesions and, in a broader context, for their applications in medicine and biochemistry. Their ultrafast photophysical reactions can alter the chemical structure of DNA, damaging the genetic code, as proven by the increased skin cancer risk observed for patients treated with thiouracil for its immunosuppressant properties.
In this thesis, I present four research papers that result from an investigation of the ultrafast dynamics of 2-thiouracil by means of ultrafast x-ray probing combined with electron spectroscopy. A molecular jet in the gas phase is excited with a UV pulse and then ionized with x-ray radiation from a free-electron laser (FEL). The kinetic energy of the emitted electrons is measured in a magnetic bottle spectrometer. The spectra of the measured photo- and Auger electrons are used to derive a picture of the changes in the geometric and electronic configurations. The results allow us to look at the dynamical processes from a new perspective, thanks to the element and site sensitivity of x-rays. The custom-built URSA-PQ apparatus used in the experiment is described. It was commissioned and used at the FL24 beamline of the FLASH2 FEL, showing an electron kinetic energy resolution of ∆E/E ~ 40 and a pump-probe timing resolution of 190 fs. X-ray-only photoelectron and Auger spectra of 2-thiouracil are extracted from the data and used as references. Photoelectrons following the formation of a 2p core hole are identified, as are resonant and non-resonant Auger electrons. At the L1 edge, Coster-Kronig decay is observed from the 2s core hole.
The UV-induced changes in the 2p photoline allow the study of the electronic-state dynamics. Using an Excited-State Chemical Shift (ESCS) model, we observe an ultrafast ground-state relaxation within 250 fs. Furthermore, an oscillation with a 250 fs period is observed in the 2p binding energy, showing a coherent population exchange between electronic states. Auger electrons from the 2p core hole are analyzed and used to deduce an ultrafast C–S bond expansion on a sub-100 fs scale. A simple Coulomb model, coupled to quantum chemical calculations, can be used to infer the geometrical changes in the molecular structure.
This thesis deals with the synthesis of protein and composite protein-mineral microcapsules by the application of high-intensity ultrasound at the oil-water interface. While one system is stabilized by BSA molecules, the other is stabilized by different nanoparticles modified with BSA. A comprehensive study of all synthesis stages as well as of the resulting capsules was carried out, and a plausible explanation of the capsule formation mechanism was proposed. During the formation of BSA microcapsules, the protein molecules first adsorb at the O/W interface and unfold there, forming an interfacial network stabilized by hydrophobic interactions and hydrogen bonds between neighboring molecules. Simultaneously, the ultrasonic treatment causes cross-linking of the BSA molecules via the formation of intermolecular disulfide bonds. In this thesis, experimental evidence of the ultrasonically induced cross-linking of BSA in the shells of protein-based microcapsules is presented. The concept proposed many years ago by Suslick and co-workers is thereby confirmed experimentally for the first time. Moreover, a consistent mechanism for the formation of intermolecular disulfide bonds in capsule shells is proposed, based on the redistribution of thiol and disulfide groups in BSA under the action of high-energy ultrasound. The formation of composite protein-mineral microcapsules loaded with three different oils and with shells composed of nanoparticles was also successful. The nature of the loaded oil and the type of nanoparticles in the shell influenced the size and shape of the microcapsules. Examination of the composite capsules revealed that the BSA molecules adsorbed on the nanoparticle surfaces in the capsule shell are not cross-linked by intermolecular disulfide bonds; instead, a Pickering emulsion is formed.
The surface modification of composite microcapsules, both through pre-modification of the main components and through post-modification of the surface of finished composite microcapsules, was successfully demonstrated. Additionally, the mechanical properties of protein and composite protein-mineral microcapsules were compared. The results showed that the protein microcapsules are more resistant to elastic deformation.
Postcolonial surveillance
(2022)
Postcolonial Surveillance investigates the long history of the European border regime, focusing on the colonial forerunners of today’s border technologies. The book takes a longue durée perspective to uncover how Europe’s colonial history continues to shape the high-tech political present and has morphed into EU border migration policies, border security, and surveillance apparatuses. It exposes the racial hierarchies and power relations that form these systems and highlights key moments when the past and present interact and collide, such as in panoptic surveillance, biopolitical registers, biometric sorting, and deterrent media infrastructure. The technological genealogies assembled in this book reveal the unacknowledged histories that had to be rejected for the seemingly clean, unbiased, and neutral technologies to emerge as such.
Pensionskassen
(2022)
The investment of Pensionskassen (German pension fund) assets in investment funds is complicated by the fact that the supervisory and tax rules governing them contradict one another. The capital investment permitted to Pensionskassen under supervisory law is restricted by the current reading of their tax exemption. Under that reading, the tax exemption lapses insofar as the Pensionskasse earns commercial income from its capital investments, since the permanent dedication of income and assets to the purposes of the fund required by § 5 Abs. 1 Nr. 3 lit. c KStG would then not be secured. This disadvantages Pensionskassen in generating pension benefits. At the same time, this reading finds no support in the interpretation of the relevant tax-law provisions.
Elementary particle physics is a contemporary topic in science that is slowly being integrated into high-school education. These new implementations are challenging teachers' professional knowledge worldwide. Physics education research is therefore faced with two important questions: how can particle physics be integrated into high-school physics curricula, and how can teachers best be supported in enhancing their professional knowledge of particle physics? This doctoral research project set out to provide better guidelines for answering these two questions by conducting three studies on high-school particle physics education.
First, an expert concept mapping study was conducted to elicit experts’ expectations on what high-school students should learn about particle physics. Overall, 13 experts in particle physics, computing, and physics education participated in 9 concept mapping rounds. The broad knowledge base of the experts ensured that the final expert concept map covers all major particle physics aspects. Specifically, the final expert concept map includes 180 concepts and examples, connected with 266 links and crosslinks. Among them are also several links to students’ prior knowledge in topics such as mechanics and thermodynamics. The high interconnectedness of the concepts shows possible opportunities for including particle physics as a context for other curricular topics. As such, the resulting expert concept map is showcased as a well-suited tool for teachers to scaffold their instructional practice.
Second, a review of 27 high-school physics curricula was conducted. The review uncovered which concepts related to particle physics can be identified in most curricula. Each curriculum was reviewed by two reviewers who followed a codebook with 60 concepts related to particle physics. The analysis showed that most curricula mention cosmology, elementary particles, and charges, all of which are considered theoretical particle physics concepts. None of the experimental particle physics concepts appeared in more than half of the reviewed curricula. Additional analysis was done on two curricular subsets, namely curricula with and curricula without an explicit particle physics chapter. Curricula with an explicit particle physics chapter mention several additional explicit particle physics concepts, namely the Standard Model of particle physics, fundamental interactions, antimatter research, and particle accelerators; the latter is an example of an experimental particle physics concept. Additionally, the analysis revealed that most curricula include Nature of Science and the history of physics, albeit typically as context or as a teaching tool, respectively.
Third, a Delphi study was conducted to investigate stakeholders’ expectations regarding what teachers should learn in particle physics professional development programmes. Over 100 stakeholders from 41 countries represented four stakeholder groups, namely physics education researchers, research scientists, government representatives, and high-school teachers. The study resulted in a ranked list of the 13 most important topics to be included in particle physics professional development programmes. The highest-ranked topics are cosmology, the Standard Model, and real-life applications of particle physics. All stakeholder groups agreed on the overall ranking of the topics. While the highest-ranked topics are again more theoretical, stakeholders also expect teachers to learn about experimental particle physics topics, which are ranked as medium importance topics.
The three studies addressed two research aims of this doctoral project. The first research aim was to explore to what extent particle physics is featured in high-school physics curricula. The comparison of the outcomes of the curricular review and the expert concept map showed that curricula cover significantly less than what experts expect high-school students to learn about particle physics. For example, most curricula do not include concepts that could be classified as experimental particle physics. However, the strong connections between the different concepts show that experimental particle physics can be used as context for theoretical particle physics concepts, Nature of Science, and other curricular topics. In doing so, particle physics can be introduced in classrooms even though it is not (yet) explicitly mentioned in the respective curriculum.
The second research aim was to identify which aspects of content knowledge teachers are expected to learn about particle physics. The comparison of the Delphi study results to the outcomes of the curricular review and the expert concept map showed that stakeholders generally expect teachers to enhance their school knowledge as defined by the curricula. Furthermore, teachers are also expected to enhance their deeper school knowledge by learning how to connect concepts from their school knowledge to other concepts in particle physics and beyond. As such, professional development programmes that focus on enhancing teachers’ school knowledge and deeper school knowledge best support teachers in building relevant context in their instruction.
Overall, this doctoral research project reviewed the current state of high-school particle physics education and provided guidelines for future enhancements of the particle physics content in high-school student and teacher education. The outcomes of the project support further implementations of particle physics in high-school education both as explicit content and as context for other curricular topics. Furthermore, the mixed-methods approach and the outcomes of this research project lead to several implications for professional development programmes and science education research, which are discussed in the final chapters of this dissertation.
Pannexin 1
(2022)
Hypoxic pulmonary vasoconstriction (HPV) is an active physiological response to alveolar hypoxia that redirects pulmonary blood flow from poorly ventilated areas to better oxygenated lung regions in order to optimize oxygen supply. However, the signaling pathways underlying this pulmonary vascular response remain under investigation. In the present study I investigated the functional relevance of Pannexin 1 (Panx1)-mediated ATP release in hypoxic pulmonary vasoconstriction and chronic hypoxic pulmonary hypertension using murine isolated perfused lungs, chronically hypoxic mice, and pulmonary artery smooth muscle cell culture. In isolated mouse lungs, switching to a hypoxic gas mixture induced a marked increase in pulmonary artery pressure. Pharmacological inhibition of Panx1 using probenecid, the Panx1-specific inhibitory peptide (10Panx1), or spironolactone, as well as genetic deletion of Panx1 in smooth muscle cells, diminished hypoxic pulmonary vasoconstriction in isolated perfused mouse lungs. Fura-2 imaging revealed a reduced Ca2+ response to hypoxia in pulmonary artery smooth muscle cells treated with spironolactone or 10Panx1. Although these findings suggested an important role of Panx1 in HPV, neither smooth muscle cell- nor endothelial cell-specific genetic deletion of Panx1 prevented the development of pulmonary hypertension in chronically hypoxic mice. Surprisingly, hypoxia did not induce ATP release, and neither inhibition of purinergic receptors nor ATP degradation by ATPase decreased the pulmonary vasoconstriction response to hypoxia in isolated perfused mouse lungs. However, Panx1 antagonism as well as TRPV4 inhibition prevented the hypoxia-induced increase in intracellular Ca2+ concentration in pulmonary artery smooth muscle cells in an additive manner, suggesting that Panx1 might modulate intracellular Ca2+ signaling independently of the ATP-P2-TRPV4 signaling axis.
In line with this assumption, overexpression of Panx1 in HeLa cells increased intracellular Ca2+ concentrations in response to acute hypoxia. In conclusion, this study identifies Panx1 as a novel regulator of HPV. Yet, the role of Panx1 was not attributable to the release of ATP and downstream P2 signaling pathways or to activation of TRPV4, but rather relates to a role of Panx1 as a direct or indirect modulator of the Ca2+ response to hypoxia in PASMCs. Genetic deletion of Panx1 did not influence the development of chronic hypoxic pulmonary hypertension in mice.
High-mountain regions provide valuable ecosystem services, including food, water, and energy production, to more than 900 million people worldwide. Projections hold that this number will rapidly increase in the coming decades, accompanied by continued urbanisation of cities located in mountain valleys. One manifestation of this ongoing socio-economic change in mountain societies is a rise in settlement areas and transportation infrastructure, while increased power demand fuels the construction of hydropower plants along rivers in the high-mountain regions of the world. However, the physical processes governing the cryosphere of these regions are highly sensitive to changes in climate, and global warming will likely alter the conditions in the headwaters of high-mountain rivers. One potential implication of this change is an increase in the frequency and magnitude of outburst floods – highly dynamic flows capable of carrying large amounts of water and sediment. Sudden outbursts from lakes formed behind natural dams are complex geomorphological processes and are often part of a hazard cascade. In contrast to other types of natural hazards in high-alpine areas, for example landslides or avalanches, outburst floods are highly infrequent. Therefore, observations and data describing, for example, the mode of outburst or the hydraulic properties of the downstream-propagating flow are very limited, which is a major challenge in contemporary (glacial) lake outburst flood research. Although glacial lake outburst floods (GLOFs) and landslide-dammed lake outburst floods (LLOFs) are rare, a number of documented events caused high fatality counts and damage. The highest documented losses due to outburst floods since the start of the 20th century were induced by only a few high-discharge events. Thus, outburst floods can be a significant hazard to downvalley communities and infrastructure in high-mountain regions worldwide.
This thesis focuses on the Greater Himalayan region, a vast mountain belt stretching across 0.89 million km². Although potentially hundreds of outburst floods have occurred there since the beginning of the 20th century, data on these events is still scarce. Projections of cryospheric change, including glacier-mass wastage and permafrost degradation, will likely result in an overall increase of the water volume stored in meltwater lakes as well as the destabilisation of mountain slopes in the Greater Himalayan region. Thus, the potential for outburst floods to affect the increasingly more densely populated valleys of this mountain belt is also likely to increase in the future. A prime example of one of these valleys is the Pokhara valley in Nepal, which is drained by the Seti Khola, a river crossing one of the steepest topographic gradients in the Himalayas. This valley is also home to Nepal's second-largest, rapidly growing city, Pokhara, which currently has a population of more than half a million people – some of whom live in informal settlements within the floodplain of the Seti Khola. Although there is ample evidence for past outburst floods along this river in recent and historic times, these events have hardly been quantified.
The main motivation of my thesis is to address the data scarcity on past and potential future outburst floods in the Greater Himalayan region, both at a regional and at a local scale. For the former, I compiled an inventory of >3,000 moraine-dammed lakes, of which about 1% had a documented sudden failure in the past four decades. I used these data to test whether a number of predictors that have been widely applied in previous GLOF assessments are statistically relevant when estimating past GLOF susceptibility. For this, I set up four Bayesian multi-level logistic regression models, in which I explored the credibility of the predictors lake area, lake-area dynamics, lake elevation, parent-glacier mass balance, and monsoonality. By using a hierarchical approach consisting of two levels, this probabilistic framework also allowed for spatial variability in GLOF susceptibility across the vast study area, which until now had not been considered in studies of this scale. The model results suggest that in the Nyainqentanglha and Eastern Himalayas – regions with strongly negative glacier-mass balances – lakes have been more prone to release GLOFs than in regions with less negative or even stable glacier-mass balances. Similarly, larger lakes in larger catchments had, on average, a higher probability of having released a GLOF in the past four decades. Yet, the effects of monsoonality, lake elevation, and lake-area dynamics were more ambiguous. This challenges the credibility of a lake's rapid growth in surface area as an indicator of a pending outburst; a metric that has been applied in regional GLOF assessments worldwide.
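The thesis's models are Bayesian and hierarchical; as a much simpler, non-hierarchical stand-in, the sketch below fits a plain maximum-likelihood logistic regression on synthetic data. The predictor names and the simulated data are purely hypothetical.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Plain logistic regression via gradient descent (no hierarchy).

    X: (n, p) standardized predictors (e.g. log lake area, glacier-mass
    balance; hypothetical columns); y: 0/1 outburst indicator.
    Returns w with P(y=1) = sigmoid(w[0] + X @ w[1:]).
    """
    n, p = X.shape
    Xb = np.hstack([np.ones((n, 1)), X])       # prepend intercept column
    w = np.zeros(p + 1)
    for _ in range(steps):
        prob = 1.0 / (1.0 + np.exp(-Xb @ w))   # predicted P(y=1)
        w -= lr * Xb.T @ (prob - y) / n        # mean log-loss gradient
    return w

# synthetic data: outburst probability increases with one predictor
rng = np.random.default_rng(42)
x = rng.normal(size=(500, 1))
true_logit = 1.5 * x[:, 0] - 1.0
y = (rng.random(500) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)
w = fit_logistic(x, y)   # w[1] should recover a clearly positive slope
```

A hierarchical Bayesian version, as used in the thesis, would additionally let the coefficients vary by region and report full posterior distributions rather than point estimates.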
At a local scale, my thesis aims to overcome data scarcity concerning the flow characteristics of the catastrophic May 2012 flood along the Seti Khola, which caused 72 fatalities, as well as of potentially much larger predecessors, which deposited >1 km³ of sediment in the Pokhara valley between the 12th and 14th century CE. To reconstruct peak discharges, flow depths, and flow velocities of the 2012 flood, I mapped the extents of flood sediments from RapidEye satellite imagery and used these as a proxy for inundation limits. To constrain the latter for the Mediaeval events, I utilised outcrops of slackwater deposits in the fills of tributary valleys. Using steady-state hydrodynamic modelling for a wide range of plausible scenarios, from meteorological (1,000 m³ s⁻¹) to cataclysmic outburst floods (600,000 m³ s⁻¹), I assessed the likely initial discharges of the recent and the Mediaeval floods based on the lowest mismatch between sedimentary evidence and simulated flood limits. One-dimensional HEC-RAS simulations suggest that the 2012 flood most likely had a peak discharge of 3,700 m³ s⁻¹ in the upper Seti Khola and attenuated to 500 m³ s⁻¹ when arriving in Pokhara's suburbs some 15 km downstream.
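For orientation, the kind of steady-state discharge a 1D hydraulic model computes can be illustrated with Manning's equation for uniform flow. The rectangular geometry, slope, and roughness value below are illustrative assumptions, not the Seti Khola's surveyed cross-sections.

```python
def manning_discharge(width_m, depth_m, slope, n=0.035):
    """Steady uniform discharge Q (m^3/s) from Manning's equation,
    Q = (1/n) * A * R**(2/3) * S**0.5, for a rectangular channel.

    width_m, depth_m: channel geometry; slope: dimensionless bed slope;
    n: Manning roughness (0.035 is a typical gravel-bed value).
    """
    area = width_m * depth_m                    # cross-sectional area A
    wetted_perimeter = width_m + 2.0 * depth_m
    hydraulic_radius = area / wetted_perimeter  # R = A / P
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
```

Under these assumed values, a 100 m wide channel flowing 4 m deep on a 1% slope carries on the order of a few thousand m³ s⁻¹, i.e. the same order as the reconstructed 2012 peak discharge.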
Simulations of two-dimensional flow in ANUGA, with peak discharges that are orders of magnitude higher, show extensive backwater effects in the main tributary valleys. These backwater effects match the locations of slackwater deposits and hence attest to the flood character of the Mediaeval sediment pulses. This thesis provides the first quantitative evidence for the hypothesis that these pulses were linked to earthquake-triggered outbursts of large former lakes in the headwaters of the Seti Khola, producing floods with peak discharges of >50,000 m³ s⁻¹.
Building on this improved understanding of past floods along the Seti Khola, my thesis continues with an analysis of the impacts of potential future outburst floods on land cover, including built-up areas and infrastructure mapped from high-resolution satellite and OpenStreetMap data. HEC-RAS simulations of ten flood scenarios, with peak discharges ranging from 1,000 to 10,000 m³ s⁻¹, show that the relative inundation hazard is highest in Pokhara's north-western suburbs. There, hydraulic ponding upstream of narrow gorges might locally sustain higher flow depths. Moreover, along this reach, informal settlements and gravel-mining activities lie close to the active channel. By tracing the construction dynamics in two of these potentially affected informal settlements on multi-temporal RapidEye, PlanetScope, and Google Earth imagery, I found that exposure increased locally by a factor of three to twenty in just over a decade (2008 to 2021).
In conclusion, this thesis provides new quantitative insights into the past controls on the susceptibility of glacial lakes to sudden outburst at a regional scale, and into the flow dynamics of flood waves released by past events at a local scale, which can aid future hazard assessments across spatial scales in the Greater Himalayan region. My subsequent exploration of the impacts of potential future outburst floods on exposed infrastructure and (informal) settlements may provide valuable input to anticipatory assessments of multiple risks in the Pokhara valley.
The author examines the system of judicial legal protection against personnel selection decisions under civil service law with regard to its actual effectiveness in enforcing the fundamental right of equal access to public office under Art. 33(2) of the German Basic Law (GG). The benchmark for this effectiveness review is the guarantee of legal protection in Art. 19(4) sentence 1 GG. Particular attention is paid to the modifications of the established doctrine of legal protection introduced by the judgment of the Federal Administrative Court (BVerwG) of 4 November 2010. The author finds that applicants for public office are now indeed granted formally complete primary legal protection. Its practical effectiveness, however, is considerably limited by numerous procedural peculiarities and by the handling of the broad margin of assessment and discretion granted to the employer in the selection decision. The author concludes that the required effective judicial protection can only be guaranteed by designing the administrative selection procedure in a manner conducive to legal protection, and derives specific minimum organisational requirements for that procedure.
The Arctic is changing rapidly and permafrost is thawing. Especially ice-rich permafrost, such as the late Pleistocene Yedoma, is vulnerable to rapid and deep thaw processes such as surface subsidence after the melting of ground ice. As permafrost thaws, its carbon pool becomes increasingly accessible to microbes, leading to increased greenhouse gas emissions, which further amplifies climate warming.
An assessment of the molecular structure and biodegradability of permafrost organic matter (OM) is therefore urgently needed. My research revolves around the question "how does permafrost thaw affect its OM storage?" More specifically, I assessed (1) how molecular biomarkers can be applied to characterize permafrost OM, (2) greenhouse gas production rates from thawing permafrost, and (3) the quality of the OM of frozen and (previously) thawed sediments.
I studied deep (max. 55 m) Yedoma and thawed Yedoma permafrost sediments from Yakutia (Sakha Republic). I analyzed sediment cores taken below thermokarst lakes on the Bykovsky Peninsula (southeast of the Lena Delta) and in the Yukechi Alas (Central Yakutia), and headwall samples from the permafrost cliff Sobo-Sise (Lena Delta) and the retrogressive thaw slump Batagay (Yana Uplands). I measured biomarker concentrations of all sediment samples. Furthermore, I carried out incubation experiments to quantify greenhouse gas production in thawing permafrost.
I showed that biomarker proxies are useful for assessing the source of the OM and for distinguishing between OM derived from terrestrial higher plants, aquatic plants, and microbial activity. In addition, I showed that some proxies help to assess the degree of degradation of permafrost OM, especially when combined with sedimentological data in a multi-proxy approach. The OM of Yedoma is generally better preserved than that of thawed Yedoma sediments. Greenhouse gas production was highest in the permafrost sediments that thawed for the first time, meaning that the frozen Yedoma sediments contained the most labile OM. Furthermore, I showed that methanogenic communities had become established in the recently thawed sediments, but not yet in the still-frozen sediments.
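One widely used degradation proxy of this kind is the carbon preference index (CPI) of long-chain n-alkanes: fresh higher-plant waxes show a strong odd-over-even chain-length preference, which fades as OM degrades. Whether this exact formulation was among the proxies used in the thesis is an assumption; the sketch below uses the common Bray-Evans form.

```python
def carbon_preference_index(conc):
    """Carbon Preference Index (CPI) of long-chain n-alkanes.

    `conc` maps carbon-chain length -> concentration. CPI >> 1 indicates
    well-preserved higher-plant input; CPI near 1 suggests degraded OM.
    Bray-Evans form: odd C25-C33 averaged against even C24-C32 and C26-C34.
    """
    odd = sum(conc.get(i, 0.0) for i in range(25, 34, 2))
    even_low = sum(conc.get(i, 0.0) for i in range(24, 33, 2))
    even_high = sum(conc.get(i, 0.0) for i in range(26, 35, 2))
    return 0.5 * (odd / even_low + odd / even_high)
```

In a multi-proxy approach, an index like this would be interpreted alongside sedimentological data rather than in isolation.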
My research provided the first molecular biomarker distributions and organic carbon turnover data, as well as insights into the state of, and processes in, deep frozen and thawed Yedoma sediments. These findings show the relevance of studying OM in deep permafrost sediments.
Organic solar cells offer an efficient and cost-effective alternative for solar energy harvesting. This type of photovoltaic cell typically consists of a blend of two organic semiconductors, an electron-donating polymer and a low-molecular-weight electron acceptor, which together form what is known as a bulk-heterojunction (BHJ) morphology. Traditionally, fullerene-based acceptors have been used for this purpose. In recent years, the development of new acceptor molecules, so-called non-fullerene acceptors (NFAs), has breathed new life into organic solar cell research, enabling record efficiencies close to 19%. Today, NFA-based solar cells approach their inorganic competitors in terms of photocurrent generation, but lag behind in terms of open-circuit voltage (V_OC). Interestingly, the V_OC of these cells benefits from small offsets of orbital energies at the donor-NFA interface, although large energy offsets were previously considered critical for efficient charge carrier generation. In addition, several other electronic and structural features distinguish NFAs from fullerenes.
My thesis focuses on understanding the interplay between the unique attributes of NFAs and the physical processes occurring in solar cells. By combining various experimental techniques with drift-diffusion simulations, the generation of free charge carriers as well as their recombination in state-of-the-art NFA-based solar cells is characterized. For this purpose, solar cells based on the donor polymer PM6 and the NFA Y6 have been investigated. The generation of free charge carriers in PM6:Y6 is efficient and independent of electric field and excitation energy. Temperature-dependent measurements show a very low activation energy for photocurrent generation (about 6 meV), indicating barrierless charge carrier separation. Theoretical modeling suggests that Y6 molecules have large quadrupole moments, leading to band bending at the donor-acceptor interface and thereby reducing the electrostatic Coulomb dissociation barrier. In this regard, this work identifies poor extraction of free charges in competition with nongeminate recombination as a dominant loss process in PM6:Y6 devices. Subsequently, the spectral characteristics of PM6:Y6 solar cells were investigated with respect to the dominant process of charge carrier recombination. It was found that the photon emission under open-circuit conditions can be almost entirely attributed to the occupation and recombination of Y6 singlet excitons. Nevertheless, the recombination pathway via the singlet state contributes only 1% to the total recombination, which is dominated by the charge transfer state (CT-state) at the donor-acceptor interface. Further V_OC gains can therefore only be expected if the density and/or recombination rate of these CT-states can be significantly reduced. Finally, the role of energetic disorder in NFA solar cells is investigated by comparing Y6 with a structurally related derivative, named N4. 
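An activation energy like the ~6 meV quoted above is typically extracted from an Arrhenius plot of photocurrent versus temperature. A minimal sketch of that extraction (synthetic data handling, not the thesis's analysis code):

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_activation_energy(temps_k, photocurrents):
    """Activation energy (eV) from a least-squares line of ln(J) vs 1/T.

    Assumes thermally activated generation, J(T) ~ exp(-Ea / (kB*T)),
    so the slope of ln(J) against 1/T equals -Ea / kB.
    """
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(j) for j in photocurrents]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return -slope * K_B_EV
```

An extracted Ea of only a few meV, well below kT at room temperature, is what supports the interpretation of essentially barrierless charge separation.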
Layer morphology studies combined with temperature-dependent charge transport experiments show significantly lower structural and energetic disorder in the case of the PM6:Y6 blend. For both PM6:Y6 and PM6:N4, disorder determines the maximum achievable V_OC, with PM6:Y6 benefiting from improved morphological order. Overall, the obtained findings point to avenues for the realization of NFA-based solar cells with even smaller V_OC losses. Further reduction of nongeminate recombination and energetic disorder should result in organic solar cells with efficiencies above 20% in the future.
Due to the major role of greenhouse gas emissions in global climate change, the development of non-fossil energy technologies is essential. Deep geothermal energy represents such an alternative, which offers promising properties such as a high base load capability and a large untapped potential. The present work addresses barite precipitation within geothermal systems and the associated reduction in rock permeability, which is a major obstacle to maintaining high efficiency. In this context, hydro-geochemical models are essential to quantify and predict the effects of precipitation on the efficiency of a system.
The objective of the present work is to quantify the induced injectivity loss using numerical and analytical reactive transport simulations. For the calculations, the fractured-porous reservoirs of the German geothermal regions North German Basin (NGB) and Upper Rhine Graben (URG) are considered.
Similar depth-dependent precipitation potentials were determined for both investigated regions (2.8–20.2 g/m³ of fluid). However, the reservoir simulations indicate that the injectivity loss due to barite deposition in the NGB is significant (1.8%–6.4% per year) and that the longevity of the system is affected as a result; this is especially true for deeper reservoirs (3,000 m). In contrast, simulations of URG sites indicate a minor role of barite (<0.1%–1.2% injectivity loss per year). The key differences between the investigated regions are the reservoir thicknesses and the presence of fractures in the rock, as well as the ionic strength of the fluids. The URG generally has fractured-porous reservoirs of much greater thickness, resulting in a wider distribution of precipitates in the subsurface. Furthermore, ionic strengths are higher in the NGB, which accelerates barite precipitation, causing it to occur more concentrated around the wellbore. The more concentrated the precipitates are around the wellbore, the higher the injectivity loss.
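Whether barite precipitates at all is governed by the fluid's saturation state. A rough screening sketch of the saturation index (ideal-solution assumption, neglecting the activity-coefficient and temperature corrections that a full hydro-geochemical model would apply; the log Ksp value is the commonly tabulated 25 °C figure):

```python
import math

def barite_saturation_index(ba_molal, so4_molal, log_ksp=-9.97):
    """Saturation index SI = log10(IAP) - log10(Ksp) for barite (BaSO4).

    Concentrations in mol/kg water; activity coefficients are neglected,
    so this is only a first-pass screening estimate. log Ksp ~ -9.97 at
    25 °C. SI > 0: supersaturated, barite may precipitate; SI < 0: stable.
    """
    iap = ba_molal * so4_molal  # ion activity product (ideal approximation)
    return math.log10(iap) - log_ksp

# A brine with 1e-4 mol/kg each of Ba2+ and SO4(2-) is strongly supersaturated.
si = barite_saturation_index(1e-4, 1e-4)
```

The higher ionic strengths of the NGB fluids affect exactly the activity corrections omitted here, which is one reason the full coupled models are needed for quantitative injectivity-loss predictions.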
In this work, a workflow was developed within which numerical and analytical models can be used to estimate and quantify the risk of barite precipitation within the reservoir of geothermal systems. A key element is a newly developed analytical scaling score that provides a reliable estimate of induced injectivity loss. The key advantage of the presented approach compared to fully coupled reservoir simulations is its simplicity, which makes it more accessible to plant operators and decision makers. Thus, in particular, the scaling score can find wide application within geothermal energy, e.g., in the search for potential plant sites and the estimation of long-term efficiency.
The key to reducing, in a selective manner, the energy required for specific transformations is the employment of a catalyst: a very small molecular platform that directs which type of energy is used. The field of photocatalysis exploits light energy to convert one type of molecule into others that are more valuable and useful.
However, many challenges arise in this field: the catalysts employed are usually based on metal derivatives, whose abundance is limited, and they are expensive and cannot be recycled. Therefore, carbon nitride materials are used in this work to expand the horizons of photocatalysis.
Carbon nitrides are organic materials that can act as recyclable, cheap, non-toxic, heterogeneous photocatalysts. In this thesis, they have been exploited for the development of new catalytic methods and shaped to enable new types of processes.
Indeed, they enabled a new photocatalytic synthetic strategy, the dichloromethylation of enones by a dichloromethyl radical generated in situ from chloroform: a novel route to building blocks for the production of active pharmaceutical compounds.
The ductility of these materials then allowed carbon nitride to be shaped into coatings for lab vials, EPR capillaries, and the cell of a flow reactor, demonstrating the great potential of this flexible technology in photocatalysis.
Afterwards, their ability to store charges was exploited for the reduction of organic substrates under dark conditions, yielding new insights into multisite proton-coupled electron transfer processes.
Furthermore, combining carbon nitrides with flavins enabled the development of composite materials with improved photocatalytic activity in CO2 photoreduction.
In conclusion, carbon nitrides are a versatile class of photoactive materials that may help to unveil further scientific discoveries and to develop a more sustainable future.
Novel algorithms for prediction of protein complexes from protein-protein interaction networks
(2022)
Organic solar cells (OSCs) have in recent years reached high efficiencies through the development of novel non-fullerene acceptors (NFAs). Fullerene derivatives were the centerpiece of the accepting materials used throughout organic photovoltaic (OPV) research, but since 2015 novel NFAs have been a game-changer and have overtaken fullerenes. Yet the current understanding of the properties of NFAs for OPV is still relatively limited, and critical mechanisms defining the performance of OPVs remain topics of debate.
In this thesis, attention is paid to understanding reduced-Langevin recombination with respect to the device physics of fullerene and non-fullerene systems. The work comprises four closely linked studies. The first is a detailed exploration of the fill factor (FF), expressed in terms of transport and recombination properties, in a comparison of fullerene and non-fullerene acceptors. We identified the key reason behind the reduced FF in the NFA (ITIC-based) devices: faster non-geminate recombination relative to the fullerene (PCBM[70]-based) devices. This is followed by a consideration of a newly synthesized NFA Y-series derivative, which exhibited the highest power conversion efficiency for OSCs at the time. In the second study, we illustrated the role of disorder in the non-geminate recombination and charge extraction of thick NFA (Y6-based) devices. As a result, we enhanced the FF of thick PM6:Y6 devices by reducing the disorder, which suppresses non-geminate recombination toward a non-Langevin regime. In the third study, we revealed the reason behind the thickness independence of the short-circuit current of PM6:Y6 devices: the extraordinarily long diffusion length of Y6. The fourth study entails a broad comparison of a selection of fullerene and non-fullerene blends with respect to charge generation efficiency and recombination, unveiling the importance of efficient charge generation for achieving reduced recombination.
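Reduced-Langevin (non-Langevin) behaviour is quantified against the classical Langevin rate, which depends only on the carrier mobilities and the dielectric constant. A minimal sketch with illustrative parameter values (not the measured mobilities or rate constants from these studies):

```python
Q_E = 1.602e-19   # elementary charge (C)
EPS_0 = 8.854e-12  # vacuum permittivity (F/m)

def langevin_coefficient(mu_n, mu_p, eps_r=3.5):
    """Langevin recombination coefficient k_L = q*(mu_n + mu_p)/(eps0*eps_r).

    Mobilities in m²/(V·s); returns k_L in m³/s. eps_r = 3.5 is a typical
    relative permittivity for organic semiconductors.
    """
    return Q_E * (mu_n + mu_p) / (EPS_0 * eps_r)

def langevin_reduction_factor(k_measured, mu_n, mu_p, eps_r=3.5):
    """gamma = k_measured / k_L; gamma << 1 marks reduced-Langevin behaviour."""
    return k_measured / langevin_coefficient(mu_n, mu_p, eps_r)

# Illustrative balanced mobilities of 1e-8 m²/(V·s) (= 1e-4 cm²/(V·s))
# and a hypothetical measured rate constant of 1e-18 m³/s.
gamma = langevin_reduction_factor(1e-18, mu_n=1e-8, mu_p=1e-8)
```

A reduction factor of order 10⁻² or smaller, as in this illustration, is the regime in which thick-junction devices can retain a high fill factor.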
I employed transient measurements such as Time Delayed Collection Field (TDCF) and Resistance-dependent Photovoltage (RPV), and steady-state techniques such as Bias-Assisted Charge Extraction (BACE), Temperature-Dependent Space-Charge-Limited Current (T-SCLC), Capacitance-Voltage (CV), and Photo-Induced Absorption (PIA), to analyze the OSCs.
The outcomes of this thesis together draw a complex picture of the multiple factors that affect reduced-Langevin recombination and thereby the FF and overall performance. This provides a suitable platform for identifying important parameters when designing new blend systems. As a result, we succeeded in improving the overall performance by enhancing the FF of a thick NFA device through adjusting the amount of solvent additive in the active blend solution. The work also highlights potentially critical gaps in the current experimental understanding of fundamental charge interaction and recombination dynamics.
The post-antiretroviral-therapy era has transformed HIV into a chronic disease, and non-HIV comorbidities (i.e., cardiovascular and mental diseases) are more prevalent in people living with HIV (PLWH). Beyond traditional risk factors, the sources of these non-HIV comorbidities include the HIV infection itself, inflammation, distorted immune activation, the burden of chronic diseases, and unhealthy lifestyle factors such as sedentarism. Exercise is known for its beneficial effects on mental and physical health, which is why it is recommended to prevent and treat different cardiovascular and mental diseases in the general population. This cumulative thesis aimed to understand the relationship between exercise and non-HIV comorbidities in German PLWH. Four studies were conducted to 1) understand the effects of exercise on cardiorespiratory fitness and muscle strength in PLWH through a systematic review and meta-analyses, and 2) determine the likelihood of German PLWH developing non-HIV comorbidities, in a cross-sectional study. The meta-analytic examination indicates that the cardiorespiratory fitness (VO2max SMD = 0.61 ml·kg⁻¹·min⁻¹, 95% CI: 0.35–0.88, z = 4.47, p < 0.001, I² = 50%) and strength (notably lower-body strength, by 16.8 kg, 95% CI: 13–20.6, p < 0.001) of PLWH improve after an exercise intervention in comparison to a control group. Cross-sectional data suggest that exercise has a positive effect on the mental health of German PLWH (fewer anxiety and depressive symptoms) and protects against the development of anxiety (PR: 0.57, 95% CI: 0.36–0.91, p = 0.01) and depression (PR: 0.62, 95% CI: 0.41–0.94, p = 0.01). Likewise, exercise duration is related to a lower likelihood of reporting heart arrhythmias (PR: 0.20, 95% CI: 0.10–0.60, p < 0.01) and exercise frequency to a lower likelihood of reporting diabetes mellitus (PR: 0.40, 95% CI: 0.10–1, p < 0.01) in German PLWH.
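The two headline effect measures used above can be computed as follows. The numbers in the example are illustrative only, not the study data.

```python
import math

def standardized_mean_difference(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: between-group mean difference divided by the pooled SD.

    Meta-analyses often apply a small-sample (Hedges' g) correction on top
    of this; that refinement is omitted here for brevity.
    """
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def prevalence_ratio(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    """PR = prevalence in the exposed group / prevalence in the unexposed.

    PR < 1 suggests the exposure (here: exercise) is protective.
    """
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Illustrative: exercise group VO2max 35 vs control 32 (SD 5, n 20 each)
d = standardized_mean_difference(35.0, 5.0, 20, 32.0, 5.0, 20)
# Illustrative: 10/100 exercisers vs 20/100 non-exercisers report anxiety
pr = prevalence_ratio(10, 100, 20, 100)
```

Note that an SMD is dimensionless; reporting it alongside the outcome's native unit, as in the abstract, mixes the two conventions.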
A preliminary recommendation for German PLWH who want to engage in exercise is to exercise ≥1 time per week, at an intensity of 5 METs per session or >103 MET·min·day⁻¹, with a duration of ≥150 minutes per week. Nevertheless, further research is needed to understand the exercise dose-response relationship and the protective effect of exercise against cardiovascular diseases, anxiety, and depression in German PLWH.