Matthias Walden
(2020)
Matthias Walden (1927–1984) was one of the representatives of a political new beginning in German journalism after 1945. At the core of his political thought stood the defense of liberal democracy, whose ideational substance Walden saw endangered both by the continuity of National Socialist-era personnel in the Federal Republic of Germany and by the Neue Ostpolitik, which he perceived as ingratiation, as well as by the social protests of the 1960s and 1970s.
As a distinguished editorial writer, he became an intellectual driving force above all for the Axel Springer publishing house. Walden was convinced that dictatorships and totalitarian designs for society only ever looked as if they were made for eternity. During the Cold War, it was precisely the inhumanity of the communist regimes that gave him the certainty that they would one day disappear.
With his intellectual biography of Matthias Walden, Nils Lange presents the first comprehensive study of this contentious journalist. He works out both the political origins of Walden's thought and its roots in the history of ideas.
‘The Territorialities of U.S. Imperialisms’ relates U.S. imperial and Indigenous conceptions of territoriality as articulated in U.S. legal texts and Indigenous life writing in the 19th century. It analyzes the ways in which U.S. legal texts, as “legal fictions,” narratively strive to affirm the United States’ territorial sovereignty and coherence in spite of its reliance on a variety of imperial practices that flexibly disconnect and (re)connect U.S. sovereignty, jurisdiction, and territory.
At the same time, the book acknowledges Indigenous life writing as legal texts in their own right and with full juridical force, which aim to highlight the heterogeneity of U.S. national territory both from their individual perspectives and in conversation with these legal fictions. Through this, the book’s analysis contributes to a more nuanced understanding of the coloniality of U.S. legal fictions, while highlighting territoriality as a key concept in the fashioning of the narrative of U.S. imperialism.
To meet the demands of a growing world population while reducing carbon dioxide (CO2) emissions, it is necessary to capture CO2 and convert it into value-added compounds. In recent years, metabolic engineering of microbes has gained strong momentum as a strategy for the production of valuable chemicals. As common microbial feedstocks like glucose directly compete with human consumption, the one-carbon (C1) compound formate has been suggested as an alternative feedstock. Formate can be easily produced by various means, including electrochemical reduction of CO2, and could serve as a feedstock for microbial production, hence presenting a novel entry point for CO2 into the biosphere and a storage option for excess electricity. Compared to the gaseous molecule CO2, formate is a highly soluble compound that can be easily handled and stored. It can serve as a carbon and energy source for natural formatotrophs, but these microbes are difficult to cultivate and engineer. In this work, I present the results of several projects that aim to establish efficient formatotrophic growth of E. coli – which cannot naturally grow on formate – via synthetic formate assimilation pathways. In the first study, I establish a workflow for growth-coupled metabolic engineering of E. coli. I demonstrate this approach by presenting an engineering scheme for the PFL-threonine cycle, a synthetic pathway for anaerobic formate assimilation in E. coli. The described methods are intended to create a standardized toolbox for engineers who aim to establish novel metabolic routes in E. coli and related organisms. The second chapter presents a study on the catalytic efficiency of C1-oxidizing enzymes in vivo. As formatotrophic growth requires generation of both energy and biomass from formate, the engineered E. coli strains need to be equipped with a highly efficient formate dehydrogenase, which provides reduction equivalents and ATP for formate assimilation.
I engineered a strain that cannot generate reducing power and energy for cellular growth when fed on acetate. Under this condition, the strain depends on the introduction of an enzymatic system for NADH regeneration, which can further produce ATP via oxidative phosphorylation. I show that this strain presents a valuable testing platform for C1-oxidizing enzymes by testing different NAD-dependent formate and methanol dehydrogenases in the energy-auxotrophic strain. Using this platform, several candidate enzymes with high in vivo activity were identified and characterized as potential energy-generating systems for synthetic formatotrophic or methylotrophic growth in E. coli. In the third chapter, I present the establishment of the serine threonine cycle (STC) – a synthetic formate assimilation pathway – in E. coli. In this pathway, formate is assimilated via formate tetrahydrofolate ligase (FtfL) from Methylobacterium extorquens (M. extorquens). The carbon from formate is attached to glycine to produce serine, which is converted into pyruvate entering central metabolism. Via the natural threonine synthesis and cleavage route, glycine is regenerated and acetyl-CoA is produced as the pathway product. I engineered several selection strains that depend on different STC modules for growth and determined key enzymes that enable high flux through threonine synthesis and cleavage. I could show that expression of an auxiliary formate dehydrogenase was required to achieve growth via threonine synthesis and cleavage on pyruvate. Overexpressing most of the pathway enzymes from the genome and applying adaptive laboratory evolution enabled growth on glycine and formate, indicating activity of the complete cycle. The fourth chapter shows the establishment of the reductive glycine pathway (rGP) – a short, linear formate assimilation route – in E. coli. As in the STC, formate is assimilated via M. extorquens FtfL.
The C1 from formate is condensed with CO2 via the reverse reaction of the glycine cleavage system to produce glycine. Another carbon from formate is attached to glycine to form serine, which is assimilated into central metabolism via pyruvate. The engineered E. coli strain, expressing most of the pathway genes from the genome, can grow via the rGP with formate or methanol as a sole carbon and energy source.
The aim of this dissertation was to establish whether sustainability consciousness influences the consumption of luxury goods and whether various moderators affect this relationship. Based on the consciousness-for-sustainable-consumption model developed by Balderjahn et al. (2013), sustainability consciousness was represented by ecological, social, and economic sustainability, supplemented by consciousness of animal welfare and of local production. The pursuit of social recognition and prestige, materialism, hedonism, and consciousness of tradition served as moderators. A predictor analysis was conducted to uncover possible relationships between the various dimensions of sustainability and luxury consumption. Moderator analyses additionally revealed whether the various moderators influenced the individual relationships. The study showed that environmental consciousness, consciousness of frugal consumption, consciousness of debt-free consumption as part of economic sustainability, and consciousness of animal welfare each influence luxury consumption. In addition, a total of seven effects of the various moderator variables on the different relationships between the sustainability dimensions and luxury consumption were uncovered.
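Moderator analyses of the kind described in this abstract are commonly implemented as moderated regression: an ordinary least-squares model with an interaction term between predictor and moderator. The following sketch is purely illustrative, using simulated data and hypothetical variable names (predictor x, moderator m, outcome y); none of it is taken from the dissertation itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical standardized scores: predictor x (e.g. a sustainability
# dimension) and moderator m (e.g. materialism).
x = rng.standard_normal(n)
m = rng.standard_normal(n)

# Simulated outcome with a true interaction effect of 0.4: the moderator
# strengthens the x -> y relationship.
y = 0.5 * x - 0.3 * m + 0.4 * x * m + rng.standard_normal(n)

# Moderated regression: y ~ intercept + x + m + x*m.
# A nonzero x*m coefficient indicates moderation.
X = np.column_stack([np.ones(n), x, m, x * m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["const", "x", "m", "x:m"], beta.round(2))))
```

In practice, the significance of the interaction coefficient would be tested rather than merely estimated, but the recovered coefficients illustrate the basic logic.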
With populations growing worldwide and climate change threatening food production, there is an urgent need to find ways to ensure food security. Increasing the carbon fixation rate in plants is a promising approach to boost crop yields. The carbon-fixing enzyme Rubisco catalyzes, besides the carboxylation reaction, an oxygenation reaction that generates glycolate-2P, which needs to be recycled via a metabolic route termed photorespiration. Photorespiration dissipates energy and, most importantly, releases previously fixed CO2, thus significantly lowering carbon fixation rate and yield. Engineering plants to omit photorespiratory CO2 release is the goal of the FutureAgriculture consortium, and this thesis is part of this collaboration. The consortium aims to establish alternative glycolate-2P recycling routes that do not release CO2. Ultimately, these are expected to increase carbon fixation rates and crop yields. Natural and novel reactions, the latter requiring enzyme engineering, were considered in the pathway design process. Here I describe the engineering of two pathways, the arabinose-5P and the erythrulose shunt. They were designed to recycle glycolate-2P via glycolaldehyde into a sugar phosphate and thereby reassimilate glycolate-2P into the Calvin cycle. I used Escherichia coli gene deletion strains to validate and characterize the activity of both synthetic shunts. The strains' auxotrophies can be alleviated by the activity of the synthetic route, thus providing a direct way to select for pathway activity. I introduced all pathway components into these dedicated selection strains and discovered inhibitions, limitations and metabolic cross-talk interfering with pathway activity. After resolving these issues, I was able to show the in vivo activity of all pathway components and combine them into functional modules. Specifically, I demonstrate the activity of a new-to-nature module of glycolate reduction to glycolaldehyde.
Also, I successfully show a new glycolaldehyde assimilation route via arabinose-5P to ribulose-5P. In addition, all necessary enzymes for glycolaldehyde assimilation via L-erythrulose were shown to be active and an L-threitol assimilation route via L-erythrulose was established in E. coli. On their own, these findings demonstrate the power of using an easily engineerable microbe to test novel pathways; combined, they will form the basis for implementing photorespiration bypasses in plants.
The development of a theory of inclusive schooling is the central topic of this dissertation. The author uses empirical analyses of the implementation of inclusive learning, as well as data on special educational needs at inclusive primary schools, to work out the conditions and forms of an inclusive school system. Empirical data on the implementation of inclusive education are available from many German federal states, but a theory to guide research and to classify and analyze these data was lacking. Jennifer Lambrecht has developed such a theory on the basis of Luhmann's systems theory. She differentiates between school systems and situates different understandings of inclusion within them. As a result, she develops five theses on inclusion in the general school system and in the special school system. The dissertation, which addresses a highly topical subject of empirical educational research, invites reflection and generates new, interesting research questions.
Fiktion und Wirklichkeit
(2020)
TrainTrap
(2020)
The Central Asian natural landscape as it presents itself today is the result of the interplay of many different factors over millions of years. In the current context of climate change, however, it becomes apparent how strongly material fluxes can change even on short timescales, transforming the face of the landscape. The Gobi Desert in Inner Mongolia (China), as part of the arid regions of northwestern China of the same name, has moved into the focus of basic paleoclimate research owing to the character of its landscape-forming elements and its landscape dynamics, in connection with its position relative to the Tibetan Plateau. As a large long-term archive of highly diverse fluvial, lacustrine, and aeolian sediments, it is an important locality for the reconstruction of local and regional material fluxes. At the same time, the Gobi Desert is also a major source of supra-regional dust transport, since the climatic conditions expose it in particular to erosion by deflation. Against this background, several German-Chinese expeditions to the Ejina Basin (Inner Mongolia) and the Qilian Shan foreland took place between 2011 and 2014 within the BMBF joint program WTZ Central Asia – Monsoon Dynamics & Geo-Ecosystems (grant 03G0814). In the course of these expeditions, numerous surface samples were collected for the first time from the entire catchment of the Heihe (Black River) in order to determine potential sediment sources. In addition, two drillings in the interior of the Ejina Basin recovered sediment cores complementing the existing core D100 (see Wünnemann (2005)), in order to obtain far-reaching additional information on landscape history and supra-regional sediment transfer.
The subject and aim of this doctoral thesis is the sedimentological-mineralogical characterization of the study area with respect to potential sediment sources and material fluxes of the Ejina Basin, as well as the reconstruction of the depositional history of a 19 m long sediment core (GN100) drilled there. The focus is on clarifying the provenance of the sediments within the core and on identifying provenance signals, possible sediment sources, and sediment transport pathways. The methodological approach is based on a multi-proxy characterization of the clastic sediment facies using field observations, lithological-granulometric and mineralogical-geochemical analyses, and statistical methods. For the mineralogical investigation of the sediments, a new scanning-electron-microscopy method for automated particle analysis was used and compared with traditional methods. The synoptic consideration of the granulometric, geochemical, and mineralogical findings for the surface sediments yields a coherent cascade model for the study area, with recurring process domains and similar process signals. The extensive granulometric analyses indicate decreasing grain sizes with increasing distance from the Qilian Shan and allow the identification of four textural signals: fluvial sands, dune sands, still-water sediments, and dusts. These results serve as the interpretive basis for the grain-size analyses of the core. It is thus possible to reconstruct the depositional history of the core sediments and, in combination with our own and literature-based datings, to place it in an overall context. Four depositional phases are thereby identified for the study area, reaching back to the time of the Last Glacial Maximum (LGM).
During these depositional phases, the alluvial fan prograded continuously and was repeatedly reworked in the course of alternating phases of activity and stability. A particularly active phase can be identified between 8 ka and 4 ka BP, during which increasing fluvial activity appears to have led to markedly intensified alluvial-fan dynamics. In the periods before and after, it was mainly aeolian processes that reworked the fan. The mineralogical provenance signals show great variability. This reflects the enormous heterogeneity of the geology of the study area, as a result of which the spatial signals are not very pronounced. Nevertheless, three larger areas of the catchment can be designated as potential source regions. The eastern Qilian Shan foreland stands out through strongly divergent chlorite contents in the clay-mineral and bulk-mineral fractions, which marks it as the primary source of the sediments in the Ejina Basin. This is related to the greenschists, ophiolites, and serpentinites in this area. Geochemically, it is above all the Cr/Rb ratio that indicates great variability within the catchment. Here, too, it is the eastern foreland which, owing to its high proportion of mafic rocks, is rich in chromites and spinels and thus stands out from the rest of the study area. The temporal as well as the general variability of sediment provenance cannot be traced as clearly in the core sediments.
The mineralogical-sedimentological properties of the drilled clastic sediments do attest to intermittent changes in sediment provenance, but these are not as pronounced as the source signals in the surface sediments would suggest. One reason appears to be the strong mixing of highly diverse sediments during transport. The combination of the grain-size results with the bulk- and heavy-mineral findings indicates an intermittent phase dominated by aeolian processes, associated with sediment input from the western Bei Shan. Besides the increase of ultra-stable heavy minerals such as zircon and garnet and the decrease of opaque heavy minerals, the present-day conditions in particular point to this. The comparison of traditional heavy-mineral analysis with computer-controlled scanning electron microscopy (CCSEM), which permits automated particle analysis of the samples, demonstrates the clear advantage of the modern method. Besides the time saved through automated processing of the prepared samples, the considerably greater statistical significance of the results stands out. Moreover, this method can also determine chemical varieties of some heavy minerals, allowing an even finer classification and more reliable statements on possible sediment provenance, and thus also improved statements on the composition and formation processes of the deposited sediments. The study makes clear that sediment provenance within the study area, as well as the processes at work, depend in part strongly on local conditions. The heterogeneity of the geology, the size of the catchment, and the resulting complexity of sediment genesis make exact assignments to clearly defined sediment sources very difficult.
Nevertheless, the results show that sediment supply to the Ejina Basin must have occurred primarily through fluvial clastic sediments of the Heihe from the Qilian Shan. The results equally demonstrate, however, the need for complementary work in adjacent study areas, such as the Gobi Altai to the north or the Bei Shan to the west, and for denser surface sampling to resolve local sediment sources more finely.
Due to continuously intensifying human usage of the marine environment, worldwide-ranging cetaceans face an increasing number of threats. Besides whaling, overfishing and by-catch, new technical developments increase water and noise pollution, which can negatively affect marine species. Cetaceans are especially prone to these influences: being at the top of the food chain, they accumulate toxins and contaminants, and they are extremely noise-sensitive due to their highly developed hearing sense and echolocation ability. As a result, several cetacean species were driven to extinction during the last century or are now classified as critically endangered. This work focuses on two odontocetes. It applies and compares different molecular methods for inference of population status and adaptation, with implications for conservation. The globally distributed sperm whale (Physeter macrocephalus) shows a matrilineal population structure with predominantly male dispersal. A recently stranded group of male sperm whales provided a unique opportunity to investigate male grouping for the first time. Based on the mitochondrial control region, I was able to infer that male bachelor groups comprise multiple matrilines, hence derive from different social groups, and that they represent the genetic variability of the entire North Atlantic. The harbor porpoise (Phocoena phocoena) occurs only in the northern hemisphere. Being small and occurring mostly in coastal habitats, it is especially prone to human disturbance. Since some subspecies and subpopulations are critically endangered, it is important to generate and provide genetic markers with high resolution to facilitate population assignment and subsequent protection measures. Here, I provide the first harbor porpoise whole genome, in high quality and including a draft annotation.
Using it to map ddRAD-seq data, I identified genome-wide SNPs and, together with a fragment of the mitochondrial control region, inferred the population structure across its North Atlantic distribution range. The Belt Sea harbors a subpopulation distinct from the North Atlantic, with a transition zone in the Kattegat. Within the North Atlantic I could detect subtle genetic differentiation between western (Canada-Iceland) and eastern (North Sea) regions, with support for a German North Sea breeding ground around the Isle of Sylt. Further, I was able to detect six outlier loci which show isolation by distance across the investigated sampling areas. By employing different markers, I could show that single-marker systems as well as genome-wide data can unravel new information about population affinities of odontocetes. Genome-wide data can facilitate the investigation of adaptations and the evolutionary history of the species and its populations. Moreover, they facilitate population genetic investigations at high resolution, allowing for the detection of subtle population structuring, which is especially important for highly mobile cetaceans.
With the rising complexity of today's software and hardware systems and the hypothesized increase in autonomous, intelligent, and self-* systems, developing correct systems remains an important challenge. Testing, although an important part of the development and maintenance process, cannot usually establish the definite correctness of a software or hardware system – especially when systems have arbitrarily large or infinite state spaces or an infinite number of initial states. This is where formal verification comes in: given a representation of the system in question in a formal framework, verification approaches and tools can be used to establish the system's adherence to its similarly formalized specification, and to complement testing.
One such formal framework is the field of graphs and graph transformation systems. Both are powerful formalisms with well-established foundations and ongoing research that can be used to describe complex hardware or software systems with varying degrees of abstraction. Since their inception in the 1970s, graph transformation systems have continuously evolved; related research spans extensions of expressive power, graph algorithms and their implementation, application scenarios, and verification approaches, to name just a few topics.
This thesis focuses on a verification approach for graph transformation systems called k-inductive invariant checking, which is an extension of previous work on 1-inductive invariant checking. Instead of exhaustively computing a system's state space, which is a common approach in model checking, 1-inductive invariant checking symbolically analyzes graph transformation rules - i.e. system behavior - in order to draw conclusions with respect to the validity of graph constraints in the system's state space. The approach is based on an inductive argument: if a system's initial state satisfies a graph constraint and if all rules preserve that constraint's validity, we can conclude the constraint's validity in the system's entire state space - without having to compute it.
However, inductive invariant checking also comes with a specific drawback: the locality of graph transformation rules leads to a lack of context information during the symbolic analysis of potential rule applications. This thesis argues that this lack of context can be partly addressed by using k-induction instead of 1-induction. A k-inductive invariant is a graph constraint whose validity in a path of k-1 rule applications implies its validity after any subsequent rule application - as opposed to a 1-inductive invariant where only one rule application is taken into account. Considering a path of transformations then accumulates more context of the graph rules' applications.
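The inductive argument sketched above can be illustrated on a deliberately tiny finite transition system, where labeled states stand in for graphs and transitions for rule applications. This is an illustrative abstraction only: the thesis works with graph transformation rules, nested graph constraints, and symbolic analysis rather than state enumeration. The example shows a constraint that is 2-inductive but not 1-inductive, because an unreachable state (`C`) satisfies the constraint yet violates it after one step:

```python
# Toy transition system: states stand in for graphs, transitions for
# rule applications. "Bad" violates the constraint; "C" is unreachable.
transitions = {"A": ["B"], "B": ["A"], "C": ["Bad"], "Bad": []}
initial = ["A"]
constraint = lambda s: s != "Bad"

def paths(k):
    """All transition paths with exactly k steps, starting anywhere."""
    result = [[s] for s in transitions]
    for _ in range(k):
        result = [p + [t] for p in result for t in transitions[p[-1]]]
    return result

def inductive_step(k):
    """k-induction step: if the constraint holds along k consecutive
    states, it must also hold after the next rule application."""
    return not any(
        all(constraint(s) for s in p[:-1]) and not constraint(p[-1])
        for p in paths(k)
    )

def base_case(k):
    """Base case: constraint holds on all states reachable from an
    initial state in fewer than k steps."""
    seen, frontier = set(initial), set(initial)
    for _ in range(k - 1):
        frontier = {t for s in frontier for t in transitions[s]} - seen
        seen |= frontier
    return all(constraint(s) for s in seen)

# 1-induction fails: C satisfies the constraint but steps to Bad.
# 2-induction succeeds: no path of satisfying states ever reaches C,
# so the extra step of context rules the spurious counterexample out.
print(inductive_step(1), inductive_step(2), base_case(2))
```

The extra path of length k-1 plays the same role as the accumulated context of rule applications described above: it excludes "predecessor" situations that can never actually arise.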
As such, this thesis extends existing research and implementation on 1-inductive invariant checking for graph transformation systems to k-induction. In addition, it proposes a technique to perform the base case of the inductive argument in a symbolic fashion, which allows verification of systems with an infinite set of initial states. Both k-inductive invariant checking and its base case are described in formal terms. Based on that, this thesis formulates theorems and constructions to apply this general verification approach for typed graph transformation systems and nested graph constraints - and to formally prove the approach's correctness.
Since unrestricted graph constraints may lead to non-termination or impracticably high execution times given a hypothetical implementation, this thesis also presents a restricted verification approach, which limits the form of graph transformation systems and graph constraints. It is formalized, proven correct, and its procedures terminate by construction. This restricted approach has been implemented in an automated tool and has been evaluated with respect to its applicability to test cases, its performance, and its degree of completeness.
"How Wenzel and Cassie were wrong" – this was the eye-catching title of an article published by Lichao Gao and Thomas McCarthy in 2007, in which fundamental interpretations of wetting behavior were called into question. The authors initiated a discussion on a subject that had long been considered settled, and they showed that wetting phenomena were not as fully understood as imagined. Similarly, this thesis puts a focus on certain aspects of liquid wetting which so far have been widely neglected in terms of interpretation and experimental proof. While the effect of surface roughness on the macroscopically observed wetting behavior is commonly and reliably interpreted according to the well-known models of Wenzel and Cassie/Baxter, the size scale of the structures responsible for the surface's rough texture has attracted little further interest. Likewise, the limits of these models have not been described and explored. Thus, the question arises: what happens when the size of surface structures is reduced to the size of the contacting liquid molecules themselves? Are common methods still valid, or can deviations from macroscopic behavior be observed?
This thesis aims to provide a starting point regarding these questions. In order to investigate the effect of smallest-scale surface structures on liquid wetting, a suitable model system is developed by means of self-assembled monolayer (SAM) formation from (fluoro)organic thiols with differing alkyl chain lengths. Surface topographies are created which rely on size differences of several Ångströms and exhibit surprising wetting behavior depending on the choice of the individual precursor system. Contact angles are detected experimentally which deviate considerably from theoretical calculations based on the Wenzel and Cassie/Baxter models and confirm that sub-nm surface topographies affect wetting. Moreover, the experimentally determined wetting properties are found to correlate well with an assumed scale-dependent surface tension of the contacting liquid. This behavior has already been described for scattering experiments taking into account capillary waves on the liquid surface induced by temperature, and had been predicted earlier by theoretical calculations.
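For reference, the macroscopic models mentioned above predict the apparent contact angle θ* from the Young contact angle θ via the Wenzel equation, cos θ* = r·cos θ (with roughness ratio r ≥ 1), and the Cassie-Baxter equation for a composite solid/air interface, cos θ* = f·(cos θ + 1) − 1 (with wetted solid fraction f). A minimal sketch with purely illustrative parameter values, not taken from the thesis:

```python
import math

def wenzel(theta_deg, r):
    """Wenzel model: homogeneous wetting of a rough surface.
    r = true area / projected area (r >= 1) amplifies the intrinsic
    wetting tendency in either direction."""
    return math.degrees(math.acos(r * math.cos(math.radians(theta_deg))))

def cassie_baxter(theta_deg, f):
    """Cassie-Baxter model: composite wetting. f is the wetted solid
    fraction; the remaining (1 - f) of the interface is air, which
    behaves like a surface with a 180-degree contact angle."""
    cos_star = f * (math.cos(math.radians(theta_deg)) + 1) - 1
    return math.degrees(math.acos(cos_star))

# Illustrative hydrophobic coating with a Young angle of 110 degrees:
print(round(wenzel(110.0, 1.3), 1))        # roughness increases the apparent angle
print(round(cassie_baxter(110.0, 0.2), 1)) # trapped air pushes it far higher
```

The thesis's point is precisely that such predictions break down when the "roughness" is only a few Ångströms in scale.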
However, the investigation of such model surfaces requires suitable precursor molecules that are not commercially available, and thus opens up a door to the exotic chemistry of fluoro-organic materials. During the course of this work, the synthesis of long-chain precursors is examined, with a particular focus on oligomerically pure semi-fluorinated n-alkyl thiols and n-alkyl trichlorosilanes. General protocols for the syntheses of the desired compounds are developed, and product mixtures are separated into fractions of individual chain lengths by fluorous-phase high-performance liquid chromatography (F-HPLC).
The transition from model systems to technically more relevant surfaces and applications is initiated through the deposition of SAMs from long-chain fluorinated n-alkyl trichlorosilanes. Depositions are accomplished by a vapor-phase process conducted on a pilot-scale set-up, which enables exact control of the relevant process parameters. Thus, the influence of varying deposition conditions on the properties of the final coating is examined and analyzed for the most important parameters. The strongest effect is observed for the partial pressure of reactive water vapor, which directly controls the extent of precursor hydrolysis during the deposition process. The experimental results suggest that the formation of ordered monolayers relies on the amount of hydrolyzed silanol species present in the deposition system, irrespective of the exact grade of hydrolysis. However, at increased amounts of species that can form cross-linked molecules through condensation reactions, film quality deteriorates. This effect is assumed to be caused by the introduction of defects within the film and the adsorption of cross-linked agglomerates. Deposition conditions are also investigated for chain-extended precursor species and reveal distinct differences caused by chain elongation.
This thesis is concerned with Data Assimilation, the process of combining model predictions with observations. So-called filters are of special interest: one is interested in computing the probability distribution of the future state of a physical process, given (possibly) imperfect measurements. This is done using Bayes' rule. The first part focuses on hybrid filters, which bridge the two main groups of filters: ensemble Kalman filters (EnKF) and particle filters. The former are a group of very stable and computationally cheap algorithms, but they rest on certain strong assumptions. Particle filters, on the other hand, are more generally applicable but computationally expensive, and as such not always suitable for high-dimensional systems. There is therefore a need to combine both groups to benefit from the advantages of each. This can be achieved by splitting the likelihood function when assimilating a new observation and treating one part of it with an EnKF and the other part with a particle filter.
The second part of this thesis deals with the application of data assimilation to multi-scale models and the problems that arise from it. One of the main areas of application for data assimilation techniques is predicting the development of the oceans and the atmosphere. These processes involve several scales and often balance relations between the state variables. The use of data assimilation procedures usually violates relations of this kind, which eventually leads to unrealistic and non-physical predictions of the future development of the process. This work discusses the inclusion of a post-processing step after each assimilation step, in which a minimisation problem penalising the imbalance is solved. The method is tested on four different models: two Hamiltonian systems and two spatially extended models, which add even more difficulties.
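The post-processing step can be sketched as follows (a toy illustration, not the thesis's formulation: the two-component state, the imbalance measure b(x) = x[0] - x[1] and the penalty weight are all hypothetical). After an analysis x_a, one stays close to it while penalising the imbalance quadratically:

```python
import numpy as np

def rebalance(x_a, lam=10.0, steps=500, lr=0.01):
    """Minimise ||x - x_a||^2 + lam * b(x)^2 by gradient descent,
    where b(x) = x[0] - x[1] is a toy linear imbalance measure
    (standing in for, e.g., a geostrophic balance relation)."""
    x = x_a.astype(float).copy()
    for _ in range(steps):
        b = x[0] - x[1]
        grad = 2.0 * (x - x_a) + 2.0 * lam * b * np.array([1.0, -1.0])
        x = x - lr * grad
    return x

x_a = np.array([1.0, 0.2])   # analysis state violating the balance x[0] = x[1]
x_b = rebalance(x_a)         # post-processed, nearly balanced state
```

The weight lam controls the trade-off: larger values enforce the balance relation more strongly at the cost of moving further from the analysis.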
Orogenic peridotites represent portions of the upper subcontinental mantle now incorporated in mountain belts. They often contain layers, lenses and irregular bodies of pyroxenite and eclogite. The origin of this heterogeneity and the nature of these layers are still debated, but they likely involve processes such as transient melts coming from the crust or the mantle and segregating in magma conduits, crust-mantle interaction, upwelling of the asthenosphere, and metasomatism. All these processes occur in the lithospheric mantle and are often related to the subduction of crustal rocks to mantle depths. Indeed, during subduction, fluids and melts are released from the slab and can interact with the overlying mantle, making the study of deep melts in this environment crucial to understanding mantle heterogeneity and crust-mantle interaction. The aim of this thesis is precisely to better constrain how such processes take place by directly studying the melt trapped as primary inclusions in pyroxenites and eclogites. The Bohemian Massif, the crystalline core of the Variscan belt, is targeted for these purposes because it contains orogenic peridotites with layers of pyroxenite and eclogite, as well as other mafic rocks enclosed in felsic high pressure and ultra-high pressure crustal rocks. Within this massif, mafic rocks from two areas have been selected: the garnet clinopyroxenite in the orogenic peridotite of the Granulitgebirge and the ultra-high pressure eclogite in the diamond-bearing gneisses of the Erzgebirge. In both areas primary melt inclusions were recognized in the garnet, ranging in size between 2 and 25 µm and showing different degrees of crystallization, from glassy to polycrystalline.
They have been investigated with micro-Raman spectroscopy and EDS mapping. The mineral assemblage comprises kumdykolite, phlogopite, quartz, kokchetavite, a phase with a main Raman peak at 430 cm⁻¹, a phase with a main Raman peak at 412 cm⁻¹, white mica and calcite, with some variability in relative abundance depending on the case study. In the Granulitgebirge osumilite and pyroxene are also present, whereas calcite is one of the main phases in the Erzgebirge. The presence of glass and the mineral assemblage in the nanogranitoids suggest that they were former droplets of melt trapped in the garnet while it was growing. Glassy inclusions and re-homogenized nanogranitoids show a silicate melt that is granitic, hydrous, rich in alkalis and weakly peraluminous. In both case studies the melt is also enriched in Cs, Pb, Rb, U, Th, Li and B, suggesting the involvement of a crustal component, i.e. white mica (the main carrier of Cs, Pb, Rb, Li and B), and of a fluid (Cs, Th and U) in the melt-producing reaction. The whole rock in both cases consists mainly of garnet and clinopyroxene, with, in the Erzgebirge samples, the additional presence of quartz both in the matrix and as a polycrystalline inclusion in the garnet. The latter is interpreted as a quartz pseudomorph after coesite and occurs in the same microstructural position as the melt inclusions. Both rock types show a crustal and subduction-zone signature, with garnet and clinopyroxene in equilibrium. Melt was likely present during the metamorphic peak of the rock, as it occurs in garnet.
Our data suggest that the process most likely responsible for the formation of the investigated rocks in both areas is a metasomatic reaction between a melt produced in the crust and mafic layers formerly located in the mantle wedge, in the case of the Granulitgebirge, and in the subducted continental crust itself, in the case of the Erzgebirge. Metasomatism thus took place in the first case in the mantle overlying the slab, whereas in the second case it took place in continental crust that already contained mafic layers before subduction. Moreover, the presence of former coesite in the same microstructural position as the melt inclusions in the Erzgebirge garnets suggests that metasomatism took place at ultra-high pressure conditions.
Summarizing, this thesis provides new insights into the geodynamic evolution of the Bohemian Massif based on the study of melt inclusions in garnet in two different mafic rock types, combining the direct microstructural and geochemical investigation of the inclusions with whole-rock and mineral geochemistry. We report, for the first time, data directly extracted from natural rocks on the melt responsible for the metasomatism of several areas of the Bohemian Massif. Besides the two locations investigated here, which belong to the Saxothuringian Zone, a signature similar to that of the investigated melt is clearly visible in the pyroxenite and peridotite of the T-7 borehole (also in the Saxothuringian Zone) and in the durbachite suite of the Moldanubian Zone.
Abstract
On the eve of Independence Day 1967, a few weeks before the Six-Day War, Rabbi Zvi Yehuda Kook (1891-1982) cried out: "Where is our Hebron? Are you forgetting it?! And where is our Shechem? Are we forgetting it?! And where is our Transjordan? Where is every single clod of earth, every part and parcel, every four cubits of the Land of God? Is it in our power to relinquish even a millimeter of them? God forbid."
The rabbi's emphatic words, together with the conquest, during the six days of fighting in June 1967, of the territories of the West Bank, of Hebron, Nablus and Jerusalem, as well as of the Sinai Peninsula and the Gaza Strip, brought about the most powerful eruption of a sense of the Messiah's approaching footsteps within the religious Zionist public. As Rabbi Yisrael Ariel put it, the coming of the Messiah was a matter of a few hours.
Yet about ten years later, in 1978, the government of Israel signed a peace agreement with Egypt. Under the agreement the entire Sinai Peninsula was returned to the Egyptians and the settlement of the Yamit region was dismantled. The residents of the Yamit region were uprooted from the homes in which they had lived from 1971 until April 1982. The religious Zionist public, too, was forced to find explanations for the fact that the State of Israel was acting contrary to the expectations placed upon it as a milestone on the road to redemption.
The students of Rabbi Zvi Yehuda Kook, most of whom already held rabbinical posts, experienced the uprooting at the height of their activity. Rabbi Zvi Yisrael Tau, one of Rabbi Zvi Yehuda Kook's closest students, forbade his students to take part in the demonstrations. He ruled that the decision of a government elected by a Jewish majority must be respected. He held fast to the sayings of Rabbi Zvi Yehuda Kook, "the people are not with us" and "dina de-malkhuta dina" (the law of the kingdom is the law), especially where a Jewish government elected by the people is concerned.
The great rupture, however, what Motti Inbari called "the profound theological crisis," was the disengagement from the Gaza Strip in 2005. The question was whether a state that uproots Jewish settlement and hands over parts of the Land of Israel to its enemies can still be called a holy state.
Cognitive dissonance in the context of the disengagement from the Gaza Strip
To understand the crisis of the disengagement it must be divided in two. The first part is the crisis of faith toward the state: the religious Zionist public held the belief that this is the state that is "the foundation of God's throne in the world" and the state "foreseen by the prophets" (the words of Rabbi Zvi Yehuda Kook, following his disappointment at the absence of the holy places from the territory of the State of Israel before the Six-Day War).
The second is the crisis of faith toward the Divine, which "allowed" this plan to be carried out.
There is, then, the conflict between the decisions of the existing secular state and the faith-based conception that insists on seeing it as a holy state; and likewise the conflict concerning the belief they hold, according to which the people of Israel stand at an hour of redemption that keeps intensifying toward the coming of the Messiah. According to this faith-based perception of reality, the disengagement plan was not supposed to be realized at all.
And yet it was realized. How, then, did this public recover and rehabilitate its faith-based conception of the Divine? That is one question. The second question is how it rehabilitated its attitude toward the state, and whether it still sees in it "the state foreseen by the prophets."
I used the cognitive dissonance theory of Leon Festinger and his colleagues to analyze and assess the crisis of faith this public underwent. Inbari's articles and his analysis of the crisis of faith that occurred among the religious Zionist public at the close of the disengagement plan also served as tools in this study, in order to assess the further developments within this public ten years after the implementation of the disengagement, as well as the appearance of the violent and terrorist incidents known as "price tag" attacks.
According to the five principles of Festinger and his colleagues, the members of a group can be expected to intensify the fervor of their belief following its disconfirmation, that is, the "prophetic failure," provided the following five conditions hold: 1. The belief must stem from deep conviction and must be relevant to what the believer does or how he behaves. 2. The believer, the person holding this belief, must have committed himself to it; he must have taken important actions that cannot be undone. The more important the actions and the harder they are to undo, the greater the individual's commitment to the belief (for example, quitting a job or moving house). 3. The belief must be sufficiently specific and sufficiently engaged with the real world that real events can unequivocally refute it. 4. An event contradicting the belief must occur and be recognized by the individual believer. 5. The believer must have social support; it is almost impossible for an isolated believer to withstand the kind of disconfirming evidence noted above.
Given these five conditions, the individual believer, a member of a group of individuals who are convinced of their belief and can support one another, can be expected to go on holding his belief steadfastly; moreover, he and his fellows will continue to recruit new believers.
Unlike the cases examined by Festinger and his colleagues, however, the disengagement plan from the Gaza Strip was characterized by the fact that the evacuation date was set not by the leaders of the religious Zionist public but by Israel's then prime minister, Ariel Sharon. Together with the Israeli government he decided on the evacuation of the Gaza Strip's entire Jewish population on August 15, 2005 (10 Av 5765), the dismantling of the military bases, and the removal of every mark of Israeli sovereignty there. This is in effect a negative image: an extreme event with a defined target date that the religious Zionist public fought to annul.
The rabbis' responses before and after the withdrawal, examined through the theory of cognitive dissonance
In this dissertation I examined the theologies of the religious Zionist rabbis most active in this period: Rabbi Dov Lior, Rabbi Zalman Melamed, Rabbi Zvi Yisrael Tau, Rabbi Hanan Porat (of blessed memory) and Rabbi Shlomo Aviner. Their theologies were examined before and after the disengagement, as well as ten years later. Among the questions examined: How did their attitude toward mamlakhtiyut (loyalty to the state) change? Is there a duty to refuse orders? How did they explain the decree to their students? In addition, the "price tag" phenomenon was examined: I asked whether it is a result of the crisis of the disengagement from the Gaza Strip or a direct continuation of the development of the Gush Emunim movement and of the day-to-day confrontation between the actual secular state and the utopian religious state to which they aspire. I also examined the rabbis' attitudes toward the perpetrators of this terrorism, toward the Palestinians and toward the army.
Rabbi Dov Lior
Rabbi Dov Lior was the rabbi of the settlement Kiryat Arba-Hebron and head of its Nir hesder yeshiva. He was born in 1933 in Jarosław, Galicia, to a family of Belz Hasidim; in 1939, when German forces broke into Poland, his family fled to Russia. In 1948 he immigrated to the Land of Israel aboard the clandestine immigration ship "Negba," and in 1949 he moved to the "Mercaz HaRav" yeshiva in Jerusalem, where he studied with Rabbi Zvi Yehuda Kook. Rabbi Lior was the first rabbi sent by Rabbi Zvi Yehuda to serve as rabbi of the settlement Kfar Haroeh; he later moved to Kiryat Arba, where he served as the settlement's rabbi until his retirement.
His approach to the decision on the disengagement from the Gaza Strip was that "after decades of absent education" the result is "disengagement from the Land of Israel." But the religious Zionist public must "halt this wickedness. [...] If this is carried out it could harm our existence as a people and our existence as a state." The way to struggle, in his view, is "passive resistance." Although the religious Zionist public is loyal to the state, "dina de-malkhuta dina," this does not extend to anything that contradicts the Torah of Israel: if the government of Israel decides to destroy a settlement and hand over territory to the enemy, this must be regarded as a decree and must be resisted with all that this implies. [...] They may decide on matters of taxes or speed limits.
The vision, according to Rabbi Lior, is that "the day is not far off when our public will decide in all the systems of public life, in law and in the economy." In his view, "the legal system is the greatest desecration there is in the state. We must prepare for the leadership of the people of Israel, and if our public takes the leadership there will be true peace, the dread of Israel will fall upon all the terrorists, and the Torah of Israel, in its spirit and its influence, will be felt in all areas of our public life. We will pass through this difficult period, we will not fall into despair, and we will merit to see the salvation of the Lord." On the one hand Rabbi Lior called for a struggle with non-standard tools, demanding that people not go like sheep and not proclaim the Knesset's decisions "holy"; on the other hand he called for a passive struggle. From the closing of Rabbi Lior's remarks in the article above, in the final paragraph, it is evident that he had come to terms with the decree of the disengagement plan, and that all that remained was to call on the religious Zionist public to take the leadership into its own hands.
After the disengagement
About a month after the disengagement, at a conference held on 3 Elul 5765, Rabbi Lior expressed his opinion. He gave voice to the questions and anxieties present among the religious Zionist public, such as how to relate to the state, whether to disengage from it, and whether there is a need to search our deeds and repent: "And from here arise weighty questions: Did we sin? Did the people of the Gush sin? Did the people of Israel sin? We are searching for the exact point of weakness, not in order to cast blame on anyone, but in order to know how to mend."
He added: "Whoever senses the footsteps of redemption will not fall into despair and faintheartedness even in situations of a kind of retreat. We will not, because of this, lose our trust in the great divine process of the return of the people of Israel to its land." Likewise, Rabbi Lior wrote in issue 10 of the monthly "Kumi Ori," following the Second Lebanon War of 2006: "I have no doubt that the severe afflictions visiting our people in this war, which has lasted more than a month, came in the wake of the grave crime of about a year ago: the exile of the Jews from Gush Katif, the destruction of their homes and the handing over of parts of the land to an enemy."
According to Rabbi Lior there is no need for repentance, purification or a return to abandoned values, but rather for a renewed purposive explanation of the event of the "prophetic failure." From Rabbi Lior's statements before the disengagement, it is clear he assumed that the disengagement plan could be annulled, and that this depended on the struggle of those implementing the plan, that is, the soldiers and commanders: if they refused orders, the plan would be annulled.
Rabbi Zalman Baruch Melamed
Rabbi Zalman Baruch Melamed was born in Tel Aviv in 1937. He is the head of the Beit El yeshiva and one of the founders of Arutz Sheva and of the Yeshiva website, the first website for disseminating Torah lessons by religious Zionist rabbis. He studied at the "Mercaz HaRav" yeshiva under Rabbi Zvi Yehuda Kook from 1954. After about a decade as his close student he was appointed a ra"m (rabbi and teacher) at the yeshiva. In 1978 he was sent, with the support of Rabbi Zvi Yehuda Kook, to found a yeshiva at the army base near Beit El. The settlement of Beit El was later established at the site.
In his view, the struggle over Gush Katif was a hidden struggle between two parts of the people: "There are no political differences here of policy assessments, whether this step is right or wrong. That is not the dispute; the dispute is over what essence the state will have: a state of all its citizens lacking identity, or a Jewish state with Jewish content." He noted that this is a "crisis of Zionist identity," a crisis affecting part of the public, and that a struggle is now being waged between these two parts of it.
His attitude toward refusal of orders was clear: "And I say to the army and the police: if you fail and cannot uproot the settlements, you have succeeded. You can say already now that this is an impossible mission, and already now succeed. The IDF does not need to defeat the Jews; the IDF needs to defeat the enemy." Rabbi Melamed called on police and soldiers to refuse orders so as to do everything possible to thwart the disengagement plan.
According to his theology: "These blows that we are receiving come because the worldview of the secular left is collapsing. There is no existence for the people of Israel without faith, and amid its death throes the left strikes with its last strength. Afterwards a believing Jewish leadership will arise that will lead the state toward redemption."
Rabbi Hanan Porat (1943-2011)
Hanan Porat was one of the founders of the Gush Emunim movement. He arrived at Kfar Etzion with his family in 1943, at the age of six months. The family was evacuated from Kfar Etzion when he was a young child, at the start of the War of Independence in 1948. Hanan Porat later studied for several years at the "Kerem B'Yavneh" yeshiva, before moving together with the Gachelet group to the "Mercaz HaRav" yeshiva, where he studied with Rabbi Zvi Yehuda Kook. In 2000 Rabbi Porat founded the weekly Torah-portion leaflet "Me'at Min Ha'Or," in which he published Torah novellae. The leaflet was distributed free of charge in synagogues at the start of the Sabbath and appeared until 2014 (about three years after his death).
His response before the disengagement: "We call on all who cleave to the Land of Israel, and on all who believe in the Everlasting One and sow: do not be dragged after this media campaign whose entire aim is to magnify the anticipation of the 'day of reckoning' and to fix it as an accomplished fact already now. [...] We continue day by day to plant and to sow, and by their merit the day of reckoning will, with God's help, turn from a day of 'Job's tidings' into a day of 'tidings of redemption.'"
Rabbi Porat called for continuing to act as usual and ignoring the evacuation directives in all their clauses (applying to the Disengagement Administration to arrange an alternative home, or having property appraised for the purpose of receiving compensation payments). He opposed violence, explaining that violent actions stem from anger, which has no place: "The rule of anger over a person is, heaven forbid, in the nature of the foreign rule of an alien god."
In the first issue of "Me'at Min Ha'Or" to appear shortly after the disengagement, the sharp response of mourning over the loss of the settlement of Gush Katif found expression, but alongside it also a prophecy of consolation: "We have done what You decreed upon us; do You what is Yours to do!" "By the grace of the Lord we shall yet return to Gush Katif, to build and be built in it, despite the sufferings we have seen at human hands; and when we return again... 'I will turn their mourning into joy and will comfort them from their sorrow'" (Jeremiah 31:7-12).
His disappointment with the institutions of the state in the wake of the expulsion found expression in the leaflet that appeared about a month after the disengagement: "This rot, which has spread through all the tissues of government, obliges us to a deep and searching self-reckoning concerning our relation to the entire leadership of the state, and not only to the one who stands at its head."
Five years after the disengagement he addressed the failure of the struggle in a conversation held in December 2009 at "Machon Meir" between Rabbi Porat and Hagai Londin. In the conversation he returned repeatedly to the verse "and the children shall return to their own border," part of the prophet Jeremiah's prophecy of the return of the people of Israel to its land: "the realization of the dream of the children of Kfar Etzion, who knew that a day would come and we would return." "The uprooting of the settlements is an open, bleeding wound. It is worth remembering that beyond the violation of human rights stands the fact that it was as if Rachel, to whom God promised in prophecy 'and the children shall return to their own border,' had been slapped in the face." For Porat, secular Zionism was in the nature of insolence toward heaven, a declaration that "we shall uproot children from their border."
Rabbi Shlomo Haim Aviner
Head of the Ateret Yerushalayim yeshiva in the Jewish Quarter and also rabbi of the settlement Beit El A, he was born in France in 1943 and immigrated to Israel in 1966. After his military service he joined the "Mercaz HaRav" yeshiva. After the Six-Day War he joined the group that stayed at the Park Hotel with the aim of renewing the Jewish settlement in Hebron (April 1968). He later served as rabbi of the settlement Keshet in the Golan Heights, and since 1981 he has been the rabbi of Beit El A.
For Rabbi Aviner, the realization of earthly redemption took place through the founding of the State of Israel and its institutions on the soil of the Land of Israel. He emphasized, and still emphasizes, the "sanctity of mamlakhtiyut," which finds expression in his attitude toward the army, the government's decisions and its laws: for him these belong to the realm of the sacred, which may not be challenged. On the other hand, he called for civil refusal: no civilian cooperation with the Disengagement Administration in any stage of the evacuation, and he even called for a consumer boycott of those citizens who cooperated with it. In the dissertation chapter on Rabbi Aviner I also addressed the subject of mesirut nefesh (self-sacrifice), a concept that was filled with new content, first by Rabbi Zvi Yehuda Kook and his students.
Although Rabbi Aviner saw himself as a rabbi of mamlakhti orientation and did not rule out the call for refusal, he was among the forces actively assisting in calming the evacuated public, his role being to prevent resistance to the evacuation. In addition, Rabbi Aviner rent the garments of the evacuees as a sign of mourning.
Rabbi Zvi Yisrael Tau
According to Rabbi Tau's theology, the root cause of the decree of the disengagement is the disengagement from the secular Jews, whom he calls "those cut off from the sacred": people who in his view belong to the people of Israel but are empty of content, so that the obsession they displayed toward the disengagement plan served them as a substitute for religious obsession: "The elevation of spirit that came in the wake of the 'Six-Day War' quickly subsided, and its place was taken by skepticism and uncertainty in everything concerning our national cause. [...] How did we fall into so deep an ideological crisis as to reach declared 'post-Zionism' and, in our days, the 'disengagement plan'?" According to Rabbi Tau it is of great importance to understand that the world stands in a perpetual dialogue conducted face to face. That is, the moment there is a disconnection within the people from its true purpose, an external disengagement plan threatens; and this is meant to rouse the whole nation to renewed reflection, to a return to principles, and from there to action.
"Therefore it is not by blocking roads and by violent, forceful demonstrations that we shall be workers with God for the salvation of our people; such action addresses the symptoms and the external phenomena and not their roots and causes, and is like the dog biting the stick." He advised his students to keep away from rage (demonstrations and road blocking) and from despair: "Being filled with rage and despair harms the whole general situation and adds fall upon fall."
His attitude toward refusal of orders:
Rabbi Tau opposed refusal of orders, since in his view one must not bite the striking stick but struggle with the root of the problem, which is the disconnection within the people. Yet as the disengagement plan drew near its planned date, a change occurred in his outlook. The researcher Yair Sheleg addressed this change in an article he published in Haaretz, "Rabbi Tau's gray refusal," according to which "explicit refusal is indeed forbidden, but the students must make clear to their commanders that they are 'incapable' of carrying out such an order."
Rabbi Tau called on his believers, students and supporters not to engage in outright refusal of orders but to refrain from carrying out the order. And from statements published about ten years after the disengagement, in which the rabbi said "we should not have been in the disengagement," it appears that Rabbi Tau too believed that the disengagement had a good chance of being annulled if the root and basis of its existence were annulled. That is, he believed in the ability of the individual within the group to work for the failure of the disengagement plan. Rabbi Tau's call for "face to face" activity in order to draw the people of Israel closer to his ideology can be seen as a call to recruit believers, but this call remained within the walls of the study hall.
About ten years after the disengagement from the Gaza Strip
From the summary of the findings of this study it should be noted that both the rabbis who called for refusal of orders and the rabbis who called for respecting the government's decision ultimately fell into line, and they regard the disengagement plan as a local crisis. Almost all the rabbis, apart from Rabbi Tau, maintain that one must look at the picture as a whole, which is on the whole positive: Jewish settlement continues to expand, as do the world of Torah and the world of the yeshivot.
Rabbi Hanan Porat, of blessed memory, believed that the people of Israel would yet return to the places from which it had been expelled, as had happened in the past with his family, which was expelled from Kfar Etzion.
Rabbi Zalman Melamed said, in several formulations: "There is trouble here and there, but our bond with the Land of Israel has not stopped moving; it keeps growing stronger. The land is being built in the Galilee and the Negev, in the Sharon and in Samaria and throughout the Land of Israel. There are places over which there is a struggle, but on the whole the expansion and the progress have not stopped."
Rabbi Dov Lior: "We will not, because of this, lose our trust in the great divine process of the return of the people of Israel to its land."
Rabbi Aviner: "But, as said, we do not despair; we engage neither in self-blame nor in blaming others with regard to the past, but pull forward toward the future with even greater longing."
Rabbi Tau: "Continuing to build: but we understand that Jerusalem and the redemption of Israel cannot be built in a single day. Therefore your fathers went to continue the settlement enterprise on the very night of the evacuation, and in the morning they already went up onto the land in the Halutza region. They did not break. Certainly they grieved over the loss."
The "price tag" incidents
Alongside this optimistic picture, however, there is a further development that is not a direct continuation of the students of Rabbi Zvi Kook and the "Mercaz HaRav" yeshiva: the development of the violent and terrorist incidents known as "price tag" attacks.
It took the religious Zionist public about three years to recover from the crisis of the disengagement and to plan a fitting response to its deep disappointment with the state. Yet already at the evacuation of Amona on February 1, 2006, a few months after the disengagement, a sharp turn occurred in the attitude toward the sanctity of the state's decisions, when the young people of the religious Zionist public acted with all their strength to prevent the evacuation of the Amona settlement, an evacuation carried out with violence by the security forces. This stood in blatant contrast to the struggle against the disengagement, which had for the most part been free of violence.
"Price tag" operations began in the course of 2008, following the evacuation of structures at the Yitzhar outpost. A report covering the event stated that the method is "to exact a high 'price tag' for every action of this kind by the army or the police." In this first "price tag" operation: "The Shilo junction, the T junction, the Rechelim junction, Huwara, the Shomron Brigade junction and others were blocked to traffic. In Yitzhar they relate that the IDF announced that it was unable to send forces to all the locations in order to stop the protest, so the method proved itself. Severe clashes developed there between Jews and Arabs, and large areas of pasture and olive groves were burned. In Asira al-Qibliya too a large confrontation developed, in the course of which a house was burned, while the army announced over the radio that 'it has no forces to send to the scene.'"
These incidents can be examined both as a direct result of the disengagement plan and as a phenomenon in their own right:
• "Price tag" as the effect of the disengagement plan on religious Zionism: the activists in these operations belong to the religious Zionist public and to the very families that were expelled from the Gush Katif region and experienced the crisis personally, not only politically and religiously. Hence the possible connection: the "price tag" incidents are in fact a direct product of the crisis of faith this public underwent as a result of the disengagement from the Gaza Strip. Through them the activists express their lack of trust in the state and in its ability to take responsibility for their future. In this way they also express the disappointment they felt toward the mamlakhti religious Zionist rabbis, who in their view fell into line with the state's decision and no longer protest against it as they did in the period of the disengagement; this is the rabbis' lack of authenticity, as I wrote in the concluding chapter of the dissertation.
As reinforcement of this, see Anat Roth's article "Religious Zionism at the test of mamlakhtiyut: from Kfar Maimon to Amona," in which she describes the change that took place in the religious Zionist public that supported the state, a public that refrained from violence out of ideology, and its awakening to reality on the day after, an awakening that led to the violence that had been foreseen: "Once it had supposedly been proven that the settlers can be defeated and settlements evacuated easily, quickly and without violence, the appetite of those [who favor evacuation] will grow, and the road to the next disengagement will be paved."
• The "price tag" incidents as a result of the development of political radicalism in Israel: the "price tag" incidents might have developed without any connection to the disengagement from the Gaza Strip, as appears in Eliezer Don-Yehiya's article of 2003. In the article he noted that the opposition to the secular Jewish state is in fact the "cognitive dissonance" that the religious Zionist public tries, without much success, to resolve within itself. He details how the system of national yeshivot contributed greatly to the political radicalization of the religious Zionist public, before there was any talk of the disengagement plan or the evacuation of outposts: "In the fundamentalist expansion in the manner of Rabbi Kook there lay from the outset the potential of national-political radicalism, for the expansion of the domain of religious sanctity in this approach, and its application to the values of modern nationalism, may lead to an uncompromising struggle for the national goals, which are perceived as an inseparable part of the sanctified whole of the religious world of values." In these words Don-Yehiya described the radicalism to be expected in a situation in which a collision may occur between the national-religious world of values and the decisions of the state. According to Don-Yehiya, these actions are the result of the difficulty of accommodating the secularity of the state's institutions, a difficulty hard to resolve, which therefore erupts in the form of acts of terrorism; the activists try, as it were, to resolve the dissonance within themselves by way of "price tag" actions. His words were written before the decision on the disengagement from the Gaza Strip.
• "Price tag" as a result of a shift from an immigrant consciousness to a colonialist consciousness: a further way of explaining the "price tag" phenomenon is the categorical shift from a state of immigrant consciousness to a colonialist consciousness. In November 1995 Yigal Amir murdered the then prime minister Yitzhak Rabin. The entire religious Zionist public became, in the street as in the media, persecuted and accused of this murder. The non-violent struggle against the disengagement plan was an opportunity to show the secular public and the shapers of public opinion that precisely the public considered murderous and violent, which had been pushed aside in disgrace, refrained from violence entirely. The assumption was that if they passed the test they would be accepted into the elite group. In other words, keeping away from violence was an instrument for a goal higher than thwarting the disengagement plan.
A few years later, the sons of the families evacuated from Gush Katif and the Gaza Strip feel confident in the justice of their path, set an agenda suited to their principles and, above all, do not try to find favor. Among the hilltop youth, that is, a process of behavioral convergence with the secular Zionist tribe has taken place: neither of them apologizes, and both radiate confidence and arrogance, in the image of Ben-Gurion's "new Jew." These two groups, the secular Zionists and the hilltop youth, both call for establishing a Zionist entity on land conquered from the local inhabitants.
According to the New Historians, the Zionist enterprise is a colonialist enterprise whose principal resource is the land; beyond that there was no desire to assimilate into the natural local society, exactly like the French and British colonies across Africa, which were isolated and fed on the cheap labor of the native inhabitants while exploiting the local resources.
The group of white immigrants, the Boers, who took root in the natural space of South Africa and became rulers over its native population while cleaving to the Bible and to the land, offers an attempt to understand the possible future of the hilltop youth and the "price tag" actions. An examination of this group can also explain the hilltop youth's devotion to the teaching of Rabbi Ginsburgh; for Rabbi Ginsburgh is by definition an immigrant, born in the United States, who immigrated to Israel. This, however, did not prevent him from feeling that he belongs to a place in which he was not born, and moreover from regarding that place as naturally his own rather than belonging to the population that was born there and had conducted its life there for many generations.
Thus, just as the Boers in South Africa wandered to the center of the continent in order to flee confrontation with the British conquerors and in the end decided to cooperate with them in order to advance their principles, so the people of religious Zionism moved to the occupied territories in order to live according to the laws of their faith. The crisis of the disengagement made clear to them that they must take part in Israel's secular government and influence its decisions.
The religious Zionist rabbis' attitude toward the "price tag" actions
Rabbi Zvi Yisrael Tau
A response to the "price tag" actions is found in an article in Haaretz by Dr. Gadi Gvaryahu, chairman of the "Tag Meir" forum, a forum that acts against "price tag" operations: "We must repeat and rehearse the things Rabbi Zvi Tau, the leader of the 'yeshivot of the line,' said about the members of the first Jewish Underground" (the Jewish Underground operated in the early 1980s and was caught on April 27, 1984): "We are dealing with a messianic sect that wants to bring redemption to the people of Israel with a weapon in hand; [...] this is a conception of shallow, petty students of Kabbalah, and they thereby cause destruction and ruin" (Hagai Segal, "Dear Brothers"). According to this source, Rabbi Tau, assuming he has remained faithful to his views since then, saw in the "price tag" activists a messianic sect seeking to bring redemption to the people of Israel by means of violence and attacks on Islam or on innocents; it is they who damage the process of redemption and bring about its ruin.
Rabbi Shlomo Aviner
On the religious Zionist website "Kipa," Uri Polak published Rabbi Shlomo Aviner's response to the "price tag" actions under the headline "It is forbidden to harm Arab property": "Our dispute with the Arabs is over the question of whose land this is, but that does not permit us to insult them, to steal from them or to harass them."
Rabbi Zalman Melamed
After several "price tag" operations he addressed them in his weekly lesson and answered that, first of all, before condemning an action one must check whether the perpetrator is a Jew, and if so, clarify where he is from, that is, from which circles the perpetrators come. He added that the attacks do not help the religious Zionist public in its aim, and that "there is a chance that this is harmful." The rabbi's response was not decisive, and it even sought to assign these actions to the margins of the religious Zionist public.
Rabbi Dov Lior
Rabbi Lior and his attitude toward the "price tag" actions: no precise statement by Rabbi Lior on these actions can be found. It is important to note, however, that he was arrested on suspicion of incitement after giving his approbation and recommendation to the book Torat Hamelech, which describes the halakhic manner in which it is permitted to harm non-Jews.
The rise of Rabbi Yitzchak Ginsburgh
Rabbi Ginsburgh espouses two principles: the Greater Land of Israel and messianism. He regards the idea of the "Greater Land of Israel" not as a goal in itself but as a path, the only path that enables the Jew to reach essence. To quote him: "But why is the whole land needed? Whoever says this does not understand what the Land of Israel really is, 'a land which the Lord your God cares for; the eyes of the Lord your God are always upon it,' which was given in its entirety only to the people of Israel; and we are not permitted to give the gentiles even a small part of it."
Rabbi Ginsburgh does not belong to the religious Zionist ideology of Rabbi Kook's teaching. From the beginning of his religious path he was considered extreme in his views, which he did not fear to voice, even publishing a pamphlet in which he praised an act of extreme violence under the blessing "Baruch HaGever" ("blessed is the man"). Rabbi Ginsburgh's message calls for a return to nature, a return to the pure original feeling, the same feeling the hilltop youth adopted in choosing to engage in agriculture, the herding of animals and immediate closeness to the land.
Rabbi Ginsburgh's response to the "price tag" actions: "It happened in Yitzhar during the intermediate days of Passover. Rabbi Yitzchak Ginsburgh convened a gathering under the heading 'In the Land of Israel, free men.'" Yehuda Yifrach added: "It seems this is the first time that a thinker of Rabbi Yitzchak Ginsburgh's stature has related directly to the specific phenomenon of 'price tag' acts and embraced it. So there really is no vacuum. Behind the events that are driving the state mad stands a coherent worldview with a direction and a goal." Rabbi Ginsburgh sees in these actions a necessary part of the development of the Jewish soul and its liberation from foreign rule.
The crisis of faith according to Inbari
Inbari examined the crisis of faith among the people of religious Zionism after the disengagement from the Gaza Strip and wrote in his article: "The disengagement process constitutes a test case for examining how the religious Zionist public as a whole coped with the crisis of faith, and in particular how this public's halakhic authorities, those who seek to shape its patterns of religious conduct, coped with it." He further emphasized "that an examination of the positions of the activist wing of the Gush Emunim rabbis, identified with the Mercaz HaRav school, cannot reflect the positions of the settlement movement as a whole." According to Inbari, these are two schools of thought that split off from religious Zionist ideology, and there is no essential difference in the ideology of the various factions of religious Zionism, only differences over the mode of action: "even though in retrospect the gaps between the two currents have already narrowed. From the responses of the two sides it appears that the two opposing camps aspire in essence to the establishment of a Torah state that will replace the secular state lacking a sacred purpose, but the polemic between them concerns mainly the correct mode of action."
Inbari concluded his article with the question of what can be forecast for the future, how and in what direction religious Zionist ideology will develop: "The question arises where the system of religious Zionism and Gush Emunim is tending, and which of the tendencies I estimate will prevail. For this purpose it should first be said that an honest examination of the conduct of the Gush Emunim public on the eve of the disengagement plan shows that only a minority of it took part in the demonstrations against the plan." In 2012, however, in an interview with Tomer Persico published on the "Lula'at HaEl" website, seven years after the disengagement, Inbari emphasized the radicalization that had occurred in its wake: "I see the strengthening of the minority positions, and wonder where this movement is turning. I see two potential tendencies; one is that the disappointment and the failure will lead to a violent and forceful eruption."
Conclusion
I have presented the rabbis' responses to the disengagement from the Gaza Strip and analyzed them according to the theory of cognitive dissonance. The "price tag" incidents were included in this study for the reasons stated above; there is, however, no necessary connection between the "price tag" incidents and the rabbis whose views were examined in this study. They do not support these actions, nor did these actions emerge from their study halls. This development, namely the "price tag" incidents, reflects the distancing of the next generation from the path of the religious Zionist rabbis. In essence the dissertation offers a panoramic view of a process of changing of the guard, that is, a change of views and a replacement of leaders, a process driven from the grassroots.
Hence it may be assumed that the role of these rabbis, the students of Rabbi Zvi Yehuda Kook, has come to an end. The era of the covenant between the secular State of Israel and the religious Zionist public is over. It may be that we are now experiencing an era in which the new leaders of religious Zionism, like its young public, have demands that can no longer be suppressed for the sake of the great goal (the establishment of the State of Israel), as was done in the past. Today the young generation demands its place in the leadership of the state and in the shaping of its near and distant future. And here are the words of the leader of the "Jewish Home" party, Naftali Bennett, at a conference marking a decade since the disengagement: "The aim of the operation: the death of the settlement enterprise and the breaking of the spirit of the right. The aim of the disengagement was to stop the rise of elites that for years had accumulated influence that was legitimate but, in the eyes of the people of the left, dangerous. They felt that someone had changed the rules and had not bothered to tell them." In his view, the disengagement was the way to prevent the Judaization of Israeli society, as well as a struggle between old and new elites.
Naftali Bennett presented the matter aptly: this is indeed not a disagreement over the return of territories or peace with the Palestinians, but a dispute over the essence of the state and where it is headed. The strengthening of the religious Zionist camp under Naftali Bennett's leadership shows that he was indeed right.
The question relevant to my research, however, was whether this radicalization is a further step whose beginning lies in the roots of Gush Emunim, or whether the radicalization stemmed from the crisis of faith that followed the disengagement.
On the intensification of the violence Inbari remarks that since there is no condemnation from the leaders of religious Zionism, at least no decisive one, it appears that it will increase: "In a situation in which there is no condemnation of violence by the central authorities, in which the political rival is described in demonological terms (erev rav, klipa and the like), and in which an ideology justifying spontaneous violence is gaining momentum among radical circles, it is no wonder that violence becomes an inseparable part of the settlement enterprise."
What was the main aim of preventing violence during the disengagement? From Anat Roth's article one gains the impression that it was important to the religious Zionist public to be part of the democratic culture and to keep away from the bad name religious Zionism had acquired after the Rabin assassination: "The 'settlers' repeatedly argued that although they regard the disengagement as an 'anti-Zionist, immoral and undemocratic' move, they are committed to the rules of democracy and do not intend to resist it with violence."
It appears that these arguments, Inbari's on the one hand and Roth's on the other, are actors in the overall map of the transformation that has taken place in religious Zionism and its advance toward positions of power in which its religious and political conceptions will find expression. These lines are being written on the eve of the elections to the twenty-first Knesset, and they will serve as evidence for some of the assumptions laid out in this conclusion.
Single-column data profiling
(2020)
The research area of data profiling consists of a large set of methods and processes to examine a given dataset and determine metadata about it. Typically, different data profiling tasks address different kinds of metadata, comprising either various statistics about individual columns (Single-column Analysis) or relationships among them (Dependency Discovery). Among the basic statistics about a column are data type, header, the number of unique values (the column's cardinality), maximum and minimum values, the number of null values, and the value distribution. Dependencies involve, for instance, functional dependencies (FDs), inclusion dependencies (INDs), and their approximate versions.
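As a minimal illustration of the single-column statistics listed above (a sketch for orientation, not the thesis's implementation), the basic metadata of one column can be computed as:

```python
def profile_column(values):
    """Minimal single-column profile: null count, cardinality, min, max.

    `values` is one column as a Python list, with None standing in for NULL.
    Illustrative sketch only; real profilers also infer data type, header,
    and the full value distribution."""
    non_null = [v for v in values if v is not None]
    return {
        "num_nulls": len(values) - len(non_null),
        "cardinality": len(set(non_null)),  # number of distinct non-null values
        "min": min(non_null) if non_null else None,
        "max": max(non_null) if non_null else None,
    }
```

For example, `profile_column([3, 1, None, 3])` reports one null, cardinality 2, minimum 1 and maximum 3.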
Data profiling has a wide range of conventional use cases, namely data exploration, cleansing, and integration. The produced metadata is also useful for database management and schema reverse engineering. Data profiling also has more novel use cases, such as big data analytics. The generated metadata describes the structure of the data at hand, how to import it, what it is about, and how much of it there is. Thus, data profiling can be considered an important preparatory task for many data analysis and mining scenarios, helping to assess which data might be useful and to reveal and understand a new dataset's characteristics.
In this thesis, the main focus is on the single-column analysis class of data profiling tasks. We study the impact and the extraction of three of the most important metadata about a column, namely the cardinality, the header, and the number of null values.
First, we present a detailed experimental study of twelve cardinality estimation algorithms. We classify the algorithms and analyze their efficiency, scaling far beyond the original experiments and testing theoretical guarantees. Our results highlight their trade-offs and point out the possibility to create a parallel or a distributed version of these algorithms to cope with the growing size of modern datasets.
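None of the twelve benchmarked algorithms is reproduced here, but the flavour of probabilistic cardinality estimation can be conveyed with a simple K-Minimum-Values (KMV) sketch; the function name and parameters below are illustrative:

```python
import hashlib

def kmv_estimate(values, k=64):
    """K-Minimum-Values (KMV) cardinality estimator.

    Illustrative sketch only, not one of the thesis's twelve algorithms.
    Each value is hashed to [0, 1); the k-th smallest hash h yields the
    estimate (k - 1) / h. With fewer than k distinct hashes the exact
    distinct count is returned. Relative error scales like 1/sqrt(k)."""
    hashes = set()
    for v in values:
        digest = hashlib.sha1(str(v).encode("utf-8")).hexdigest()
        hashes.add(int(digest, 16) / 2.0 ** 160)  # map hash to [0, 1)
    if len(hashes) < k:
        return len(hashes)
    kth_smallest = sorted(hashes)[k - 1]
    return int((k - 1) / kth_smallest)
```

With k = 64, the expected relative error is roughly 12%, while the memory footprint stays fixed at k hash values, which is the trade-off these algorithms exploit.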
Then, we present a fully automated, multi-phase system to discover human-understandable, representative, and consistent headers for a target table in cases where headers are missing, meaningless, or unrepresentative for the column values. Our evaluation on Wikipedia tables shows that 60% of the automatically discovered schemata are exact and complete. Considering more schema candidates, top-5 for example, increases this percentage to 72%.
Finally, we formally and experimentally show the phenomenon of ghost and fake FDs caused by FD discovery over datasets with missing values. We propose two efficient scores, probabilistic and likelihood-based, for estimating the genuineness of a discovered FD. Our extensive set of experiments on real-world and semi-synthetic datasets shows the effectiveness and efficiency of these scores.
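The root of the problem can be illustrated with a deliberately naive pairwise check (not the thesis's probabilistic or likelihood-based scores): the verdict on a candidate FD flips depending on how NULLs are compared.

```python
from itertools import combinations

def fd_violations(rows, lhs, rhs, null_is_wildcard):
    """Count row pairs violating the candidate FD lhs -> rhs.

    Naive sketch for illustration. With null_is_wildcard=True, a NULL
    matches any value, so an FD can appear to hold only thanks to the
    missing values; with False, NULL is an ordinary distinct value."""
    def equal(a, b):
        if null_is_wildcard and (a is None or b is None):
            return True
        return a == b

    violations = 0
    for r1, r2 in combinations(rows, 2):
        # A violation: same LHS value but different RHS values.
        if equal(r1[lhs], r2[lhs]) and not equal(r1[rhs], r2[rhs]):
            violations += 1
    return violations
```

On `rows = [{"A": 1, "B": "x"}, {"A": 1, "B": None}, {"A": 2, "B": "y"}]`, the FD A → B is violation-free under wildcard NULL semantics but violated once when NULL counts as its own value, which is exactly the ambiguity a genuineness score must quantify.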
Lately, the integration of upconverting nanoparticles (UCNP) into industrial, biomedical and scientific applications has been accelerating, owing to the exceptional photophysical properties that UCNP offer. Some of the most promising applications lie in the field of medicine and bioimaging, owing to advantages such as deeper tissue penetration, reduced optical background, the possibility of multicolor imaging, and lower toxicity compared to many known luminophores. However, some questions remain unanswered, regarding not only the fundamental photophysical processes but also the interaction of UCNP with other luminescent reporters frequently used for bioimaging and their interaction with biological media. These issues were the primary motivation for the presented work.
This PhD thesis investigated several aspects of various properties and possibilities for bioapplications of Yb3+,Tm3+-doped NaYF4 upconverting nanoparticles. First, the effect of Gd3+ doping on the structure and upconverting behaviour of the nanocrystals was assessed. The ageing process of the UCNP in cyclohexane was studied over 24 months on the samples with different Gd3+ doping concentrations. Structural information was gathered by means of X-ray diffraction (XRD), transmission electron microscopy (TEM), dynamic light scattering (DLS), and discussed in relation to spectroscopic results, obtained through multiparameter upconversion luminescence studies at various temperatures (from 4 K to 295 K). Time-resolved and steady-state emission spectra recorded over this ample temperature range allowed for a deeper understanding of photophysical processes and their dependence on structural changes of UCNP.
A new protocol using a commercially available high boiling solvent allowed for faster and more controlled production of very small and homogeneous UCNP with better photophysical properties, and the advantages of a passivating NaYF4 shell were shown.
Förster resonance energy transfer (FRET) between four different species of NaYF4: Yb3+, Tm3+ UCNP (synthesized using the improved protocol) and a small organic dye was studied. The influence of UCNP composition and the proximity of Tm3+ ions (donors in the process of FRET) to acceptor dye molecules have been assessed. The brightest upconversion luminescence was observed in the UCNP with a protective inert shell. UCNP with Tm3+ ions only in the shell were the least bright, but showed the most efficient energy transfer.
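The distance dependence underlying these observations can be made concrete with the standard Förster relation, a textbook formula added here for context rather than a result of this thesis:

```python
def fret_efficiency(r, r0):
    """Standard Foerster relation E = 1 / (1 + (r / r0) ** 6).

    r  : donor-acceptor distance (same unit as r0, e.g. nm)
    r0 : Foerster radius of the dye pair, the distance at which E = 0.5
    Textbook physics for context only, not the thesis's analysis."""
    return 1.0 / (1.0 + (r / r0) ** 6)
```

At r = R0 the efficiency is exactly 50%, and it falls off with the sixth power of distance, which is consistent with the observation above that the proximity of the Tm3+ donors to the acceptor dye governs how efficient the transfer is.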
In the final part, two surface modification strategies were applied to make UCNP soluble in water, which simultaneously allowed for linking them via a non-toxic copper-free click reaction to the liposomes, which served as models for further cell experiments. The results were assessed on a confocal microscope system, which was made possible by lesser known downshifting properties of Yb3+, Tm3+-doped UCNP. Preliminary antibody-staining tests using two primary and one dye-labelled secondary antibodies were performed on MDCK-II cells.
Over the last decades, the Arctic regions of the earth have warmed at a rate 2–3 times faster than the global average, a phenomenon called Arctic Amplification. A complex, non-linear interplay of physical processes and unique peculiarities of the Arctic climate system is responsible for this, but the relative role of individual processes remains debated. This thesis focuses on climate change and related processes on Svalbard, an archipelago in the North Atlantic sector of the Arctic, which is shown to be a "hotspot" for the amplified recent warming during winter. In this highly dynamical region, both oceanic and atmospheric large-scale transports of heat and moisture interact with spatially inhomogeneous surface conditions, and the corresponding energy exchange strongly shapes the atmospheric boundary layer. In the first part, Pan-Svalbard gradients in the surface air temperature (SAT) and sea ice extent (SIE) in the fjords are quantified and characterized. This analysis is based on observational data from meteorological stations, operational sea ice charts, and hydrographic observations from the adjacent ocean, covering the period 1980–2016. It is revealed that typical estimates of SIE during late winter range from 40–50% (80–90%) in the western (eastern) parts of Svalbard. However, strong SAT warming during winter of the order of 2–3 K per decade drives excessive ice loss, leaving fjords in the western parts essentially ice-free in recent winters. It is further demonstrated that warm water currents on the west coast of Svalbard, as well as meridional winds, contribute to regional differences in the SIE evolution. In particular, the proximity to warm water masses of the West Spitsbergen Current can explain 20–37% of SIE variability in fjords on west Svalbard, while meridional winds and the associated ice drift may regionally explain 20–50% of SIE variability in the north and northeast. Strong SAT warming has overruled these impacts in recent years, though.
In the next part of the analysis, the contribution of large-scale atmospheric circulation changes to the Svalbard temperature development over the last 20 years is investigated. A study employing kinematic backward air trajectories for Ny-Ålesund reveals a shift in the source regions of lower-tropospheric air over time for both the winter and the summer season. In winter, air in the recent decade is more often of lower-latitude Atlantic origin and less often of Arctic origin. This affects heat and moisture advection towards Svalbard, potentially modifying clouds and downward longwave radiation in that region. A closer investigation indicates that this shift during winter is associated with a strengthened Ural blocking high and Icelandic low, and contributes about 25% to the observed winter warming on Svalbard over the last 20 years. Conversely, circulation changes during summer include a strengthened Greenland blocking high, which leads to more frequent cold air advection from the central Arctic towards Svalbard and less frequent air mass origins in the lower latitudes of the North Atlantic. Hence, circulation changes during winter are shown to have an amplifying effect on the recent warming on Svalbard, while summer circulation changes tend to mask warming.
An observational case study using upper air soundings from the AWIPEV research station in Ny-Ålesund during May–June 2017 underlines that such circulation changes during summer are associated with tropospheric anomalies in temperature, humidity and boundary layer height.
In the last part of the analysis, the regional representativeness of the changes described above around Svalbard for the broader Arctic is investigated. To this end, the terms of the diagnostic temperature equation in the Arctic-wide lower troposphere are examined in the ERA-Interim atmospheric reanalysis product. Significant positive trends in diabatic heating rates, consistent with latent heat transfer to the atmosphere over regions of increasing ice melt, are found for all seasons over the Barents/Kara Seas, and in individual months in the vicinity of Svalbard. The warm (cold) advection trends during winter (summer) on Svalbard introduced above are successfully reproduced. Regarding winter, they are regionally confined to the Barents Sea and Fram Strait, between 70°–80°N, representing a unique feature within the whole Arctic. Summer cold advection trends are confined to the area between eastern Greenland and Franz Josef Land, enclosing Svalbard.
Cleft exhaustivity
(2020)
In this dissertation, a series of experimental studies is presented which demonstrate that the exhaustive inference of focus-background it-clefts in English and their cross-linguistic counterparts in Akan, French, and German is neither robust nor systematic. The inter-speaker and cross-linguistic variability is accounted for with a discourse-pragmatic approach to cleft exhaustivity in which, following Pollard & Yasavul (2016), the exhaustive inference is derived from an interaction with another layer of meaning, namely the existence presupposition encoded in clefts.
To investigate the reliability and stability of spherical harmonic models based on archeo- and paleomagnetic data, 2,000 geomagnetic models were calculated. All models are based on the same data set, but with randomized uncertainties. Comparison of these models to the geomagnetic field model gufm1 showed that large-scale magnetic field structures up to spherical harmonic degree 4 are stable across all models. By ranking all models according to how closely their dipole coefficients match gufm1, uncertainty estimates were derived that are more realistic than those provided by the original authors of the data.
The derived uncertainty estimates were used in further modelling, which combines archeo- and paleomagnetic data with historical data. The huge difference in data count, accuracy and coverage between these two very different data sources made it necessary to introduce a time-dependent spatial damping, constructed to constrain the spatial complexity of the model. Finally, 501 models were calculated by considering each data point to be a Gaussian random variable whose mean is the original value and whose standard deviation is its uncertainty. The final model, arhimag1k, is calculated by taking the mean of the 501 sets of Gauss coefficients. arhimag1k fits different dependent and independent data sets well. It shows an early reverse flux patch at the core-mantle boundary between 1000 AD and 1200 AD at the present-day location of the South Atlantic Anomaly. Another interesting feature is a high-latitude flux patch over Greenland between 1200 and 1400 AD. The dipole moment shows a constant behaviour between 1600 and 1840 AD.
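The randomized-uncertainty ensemble described above can be sketched as follows; the function names, the scalar coefficient, and the simple linear fit are illustrative stand-ins, since the actual models are spherical harmonic expansions fitted to geomagnetic data:

```python
import random
import statistics

def slope_fit(points):
    # Ordinary least-squares slope; an illustrative stand-in for the
    # Gauss coefficients of the real spherical harmonic models.
    n = len(points)
    xm = sum(x for x, _ in points) / n
    ym = sum(y for _, y in points) / n
    num = sum((x - xm) * (y - ym) for x, y in points)
    den = sum((x - xm) ** 2 for x, _ in points)
    return num / den

def ensemble_coefficient(data, sigmas, fit=slope_fit, n_models=501, seed=1):
    """Treat every datum as a Gaussian random variable (mean = observed
    value, std = its stated uncertainty), refit the model for each
    random draw, and return the ensemble mean and spread of the fitted
    coefficient, mirroring the 501-model procedure described above."""
    rng = random.Random(seed)
    coeffs = []
    for _ in range(n_models):
        perturbed = [(x, y + rng.gauss(0.0, s)) for (x, y), s in zip(data, sigmas)]
        coeffs.append(fit(perturbed))
    return statistics.mean(coeffs), statistics.stdev(coeffs)
```

The ensemble mean plays the role of the final model, while the spread of the coefficients gives the uncertainty estimate that a single best-fit model would not provide.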
In the second part of the thesis, four new paleointensities from four different flows on the island of Fogo, Cape Verde, are presented. The data are fitted well by arhimag1k, with the exception of the value of 28.3 microtesla at 1663, which is approximately 10 microtesla lower than the model suggests.
The goal of this thesis was to thoroughly investigate the behavior of multimode fibres to aid the development of modern and forthcoming fibre-fed spectrograph systems. Based on the Eigenmode Expansion Method, a field propagation model was created that can emulate effects in fibres relevant for astronomical spectroscopy, such as modal noise, scrambling, and focal ratio degradation. These effects are of major concern for any fibre-coupled spectrograph used in astronomical research. Changes in the focal ratio, modal distribution of light or non-perfect scrambling limit the accuracy of measurements, e.g. the flux determination of the astronomical object, the sky-background subtraction and detection limit for faint galaxies, or the spectral line position accuracy used for the detection of extra-solar planets.
Usually, fibres used for astronomical instrumentation are characterized empirically through tests. The results of this work make it possible to predict the fibre behaviour under various conditions, using sophisticated software tools to simulate the waveguide behaviour and mode transport of fibres.
The simulation environment works with two software interfaces. The first is the mode solver module FemSIM from Rsoft. It is used to calculate all the propagation modes and effective refractive indices of a given system. The second interface consists of Python scripts which enable the simulation of the near- and far-field outputs of a given fibre. The characteristics of the input field can be manipulated to emulate real conditions. Focus variations, spatial translations, angular fluctuations, and disturbances through the mode coupling factor can also be simulated.
To date, complete coherent propagation or complete incoherent propagation can be simulated. Partial coherence was not addressed in this work. Another limitation of the simulations is that they work exclusively for the monochromatic case and that the loss coefficient of the fibres is not considered. Nevertheless, the simulations were able to match the results of realistic measurements.
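The distinction between the two supported regimes can be illustrated with a toy superposition rule (an illustrative sketch, not the actual FemSIM/Python pipeline): coherent propagation sums complex mode fields before taking the intensity, so modes can interfere, while incoherent propagation sums the intensities of the modes, which washes out interference.

```python
def superpose(mode_amplitudes, coherent=True):
    """Intensity at one point from several propagation modes.

    coherent=True : |sum of complex field amplitudes|^2 (interference)
    coherent=False: sum of |amplitude|^2 per mode (no interference)
    Toy model for illustration; the thesis's simulations propagate
    full two-dimensional mode fields computed by a mode solver."""
    if coherent:
        return abs(sum(mode_amplitudes)) ** 2
    return sum(abs(a) ** 2 for a in mode_amplitudes)
```

Two equal modes in antiphase give zero coherent intensity but a finite incoherent one; this point-by-point difference is the origin of the speckle contrast that modal scrambling suppresses.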
To test the validity of the simulations, real fibre measurements were used for comparison. Two fibres with different cross-sections were characterized. The first fibre had a circular cross-section, and the second one had an octagonal cross-section. The utilized test-bench was originally developed for the prototype fibres of the 4MOST fibre feed characterization. It allowed for parallel laser beam measurements, light cone measurements, and scrambling measurements. Through the appropriate configuration, the acquisition of the near- and/or far-field was feasible.
By means of modal noise analysis, it was possible to compare the near-field speckle patterns of simulations and measurements as a function of the input angle. The spatial frequencies that originate from the modal interference could be analyzed by using the power spectral density analysis. Measurements and simulations yielded similar results. Measurements with induced modal scrambling were compared to simulations using incoherent propagation and once again similar results were achieved. Through both measurements and simulations, the enlargement of the near-field distribution could be observed and analyzed. The simulations made it possible to explain incoherent intensity fluctuations that appear in real measurements due to the field distribution of the active propagation modes.
By using the Voigt analysis in the far-field distribution, it was possible to separate the modal diffusion component in order to compare it with the simulations. Through an appropriate assessment, the modal diffusion component as a function of the input angle could be translated into angular divergence. The simulations gave the minimal angular divergence of the system. From the mean difference between simulations and measurements, a figure of merit is derived which can be used to characterize the angular divergence of real fibres using the simulations. Furthermore, it was possible to simulate light cone measurements. Given the overall consistent results, it can be stated that the simulations represent a good tool to assist the fibre characterization process for fibre-fed spectrograph systems.
This work was possible through the BMBF Grant 05A14BA1 which was part of the phase A study of the fibre system for MOSAIC, a multi-object spectrograph for the Extremely Large Telescope (ELT-MOS).
This work presents a new design for programming environments that promote the exploration of domain-specific software artifacts and the construction of graphical tools for such program comprehension tasks. In complex software projects, tool building is essential because domain- or task-specific tools can support decision making by representing concerns concisely with low cognitive effort. In contrast, generic tools can only support anticipated scenarios, which usually align with programming language concepts or well-known project domains.
However, the creation and modification of interactive tools is expensive because the glue that connects data to graphics is hard to find, change, and test. Even if valuable data is available in a common format and even if promising visualizations could be populated, programmers have to invest many resources to make changes in the programming environment. Consequently, only ideas of predictably high value will be implemented. In the non-graphical, command-line world, the situation looks different and inspiring: programmers can easily build their own tools as shell scripts by configuring and combining filter programs to process data.
We propose a new perspective on graphical tools and provide a concept to build and modify such tools with a focus on high quality, low effort, and continuous adaptability. That is, (1) we propose an object-oriented, data-driven, declarative scripting language that reduces the amount of and governs the effects of glue code for view-model specifications, and (2) we propose a scalable UI-design language that promotes short feedback loops in an interactive, graphical environment such as Morphic known from Self or Squeak/Smalltalk systems.
We implemented our concept as a tool building environment, which we call VIVIDE, on top of Squeak/Smalltalk and Morphic. We replaced existing code browsing and debugging tools to iterate within our solution more quickly. In several case studies with undergraduate and graduate students, we observed that VIVIDE can be applied to many domains such as live language development, source-code versioning, modular code browsing, and multi-language debugging. Then, we designed a controlled experiment to measure the effect on the time to build tools. Several pilot runs showed that training is crucial and, presumably, takes days or weeks, which implies a need for further research.
As a result, programmers as users can directly work with tangible representations of their software artifacts in the VIVIDE environment. Tool builders can write domain-specific scripts to populate views to approach comprehension tasks from different angles. Our novel perspective on graphical tools can inspire the creation of new trade-offs in modularity for both data providers and view designers.
This dissertation aims to deliver a transcendental interpretation of Immanuel Kant's Kritik der Urteilskraft, considering both its coherence with the other critical works and the internal coherence of the work itself. This interpretation is called transcendental insofar as special emphasis is placed on the newly introduced cognitive power, namely the reflective power of judgement, guided by the a priori principle of purposiveness. In this way, the seeming manifold of themes, ranging from judgements of taste through culture to teleological judgements about natural purposes, is discussed exclusively with regard to its dependence on this faculty and its transcendental principle. In contrast to contemporary scholarship, where the book is often treated as a fragmented work consisting of different independent parts, my focus lies on the continuity constituted primarily by the activity of the power of judgement.
Going back to certain central yet silently presupposed concepts adopted from the previous critical works, the main contribution of this study is to integrate the KU within the overarching critical project. More specifically, I argue that the need for this presupposition by the reflective power of judgement follows from the peculiar character of our sense-dependent discursive mind. Because we are sense-dependent discursive minds, we do not and cannot have immediate insight into all of nature's features. The particular constitution of our mind rather demands conceptually informed representations which refer to objects only mediately.
Accordingly, the principle of purposiveness, namely the presupposition that nature is organized in concert with the particular constitution of our mind, is a necessary condition for the possibility of reflection on nature's empirical features. Reflection, on my account, refers to a process of selecting features so as to allow a classification, including reflection on the method, means and selection criteria. Rather than contributing directly to cognition, as the categories do, reflective judgements thus express our ignorance concerning the motivation behind nature's design, and this is most forcefully expressed by judgements of taste and teleological judgements about organized matter. In this way, reflection, regardless of whether it is manifested in concept acquisition, scientific systematization, judgements of taste or judgements about organized matter, relies on a principle of the power of judgement which is revealed and justified in this transcendental inquiry.
The development of bioinspired self-assembling materials, such as hydrogels, with promising applications in cell culture, tissue engineering and drug delivery is a current focus in material science. Biogenic or bioinspired proteins and peptides are frequently used as versatile building blocks for extracellular matrix (ECM) mimicking hydrogels. However, precisely controlling and reversibly tuning the properties of these building blocks and the resulting hydrogels remains challenging. Precise control over the viscoelastic properties and self-healing abilities of hydrogels are key factors for developing intelligent materials to investigate cell matrix interactions. Thus, there is a need to develop building blocks that are self-healing, tunable and self-reporting. This thesis aims at the development of α-helical peptide building blocks, called coiled coils (CCs), which integrate these desired properties. Self-healing is a direct result of the fast self-assembly of these building blocks when used as material cross-links. Tunability is realized by means of reversible histidine (His)-metal coordination bonds. Lastly, implementing a fluorescent readout, which indicates the CC assembly state, self-reporting hydrogels are obtained.
Coiled coils are protein folding motifs abundant in Nature, which often have a mechanical function, such as in myosin or fibrin. Coiled coils are superhelices made up of two or more α-helices wound around each other. The assembly of CCs is based on their repetitive sequence of seven amino acids, the so-called heptads (abcdefg). Hydrophobic amino acids in the a and d positions of each heptad form the core of the CC, while charged amino acids in the e and g positions form ionic interactions. The solvent-exposed positions b, c and f are excellent targets for modifications since they are more variable. His-metal coordination bonds are strong yet reversible interactions formed between the amino acid histidine and transition metal ions (e.g. Ni2+, Cu2+ or Zn2+). His-metal coordination bonds contribute essentially to the mechanical stability of various high-performance proteinaceous materials, such as spider fangs, Nereis worm jaws and mussel byssal threads. Therefore, I bioengineered reversible His-metal coordination sites into a well-characterized heterodimeric CC that served as a tunable material cross-link. Specifically, I took two distinct approaches, facilitating intramolecular (Chapter 4.2) and/or intermolecular (Chapter 4.3) His-metal coordination.
Previous research suggested that force-induced CC unfolding in shear geometry starts from the points of force application. In order to tune the stability of a heterodimeric CC in shear geometry, I inserted His in the b and f position at the termini of force application (Chapter 4.2). The spacing of His is such that intra-CC His-metal coordination bonds can form to bridge one helical turn within the same helix, but also inter-CC coordination bonds are not generally excluded. Starting with Ni2+ ions, Raman spectroscopy showed that the CC maintained its helical structure and the His residues were able to coordinate Ni2+. Circular dichroism (CD) spectroscopy revealed that the melting temperature of the CC increased by 4 °C in the presence of Ni2+. Using atomic force microscope (AFM)-based single molecule force spectroscopy, the energy landscape parameters of the CC were characterized in the absence and the presence of Ni2+. His-Ni2+ coordination increased the rupture force by ~10 pN, accompanied by a decrease of the dissociation rate constant. To test if this stabilizing effect can be transferred from the single molecule level to the bulk viscoelastic material properties, the CC building block was used as a non-covalent cross-link for star-shaped poly(ethylene glycol) (star-PEG) hydrogels. Shear rheology revealed a 3-fold higher relaxation time in His-Ni2+ coordinating hydrogels compared to the hydrogel without metal ions. This stabilizing effect was fully reversible when using an excess of the metal chelator ethylenediaminetetraacetate (EDTA). The hydrogel properties were further investigated using different metal ions, i.e. Cu2+, Co2+ and Zn2+. Overall, these results suggest that Ni2+, Cu2+ and Co2+ primarily form intra-CC coordination bonds while Zn2+ also participates in inter-CC coordination bonds. This may be a direct result of its different coordination geometry.
Intermolecular His-metal coordination bonds in the terminal regions of the protein building blocks of mussel byssal threads are primarily formed by Zn2+ and were found to be intimately linked to higher-order assembly and self-healing of the thread. In the above example, the contributions of intra-CC and inter-CC His-Zn2+ coordination cannot be disentangled. In Chapter 4.3, I redesigned the CC to prohibit the formation of intra-CC His-Zn2+ coordination bonds, focusing only on inter-CC interactions. Specifically, I inserted His in the solvent-exposed f positions of the CC to focus on the effect of metal-induced higher-order assembly of CC cross-links. Raman and CD spectroscopy revealed that this CC building block forms α-helical Zn2+ cross-linked aggregates. Using this CC as a cross-link for star-PEG hydrogels, I showed that the material properties can be switched from viscoelastic in the absence of Zn2+ to elastic-like in the presence of Zn2+. Moreover, the relaxation time of the hydrogel was tunable over three orders of magnitude when using different Zn2+:His ratios. This tunability is attributed to a progressive transformation of single CC cross-links into His-Zn2+ cross-linked aggregates, with inter-CC His-Zn2+ coordination bonds serving as an additional cross-linking mode.
Rheological characterization of the hydrogels with inter-CC His-Zn2+ coordination raised the question of whether only the His-Zn2+ coordination bonds between CCs rupture when shear strain is applied, or whether the CCs themselves also rupture. In general, the number of CC cross-links initially formed in the hydrogel, as well as the number of CC cross-links breaking under force, remains to be elucidated. In order to probe these questions more deeply and monitor the state of the CC cross-links when force is applied, a fluorescent reporter system based on Förster resonance energy transfer (FRET) was introduced into the CC (Chapter 4.4). For this purpose, the donor-acceptor pair carboxyfluorescein and tetramethylrhodamine was used. The resulting self-reporting CC showed a FRET efficiency of 77% in solution. Using this fluorescently labeled CC as a self-reporting, reversible cross-link in an otherwise covalently cross-linked star-PEG hydrogel enabled the detection of the FRET efficiency change under compression force. This proof-of-principle result sets the stage for implementing the fluorescently labeled CCs as molecular force sensors in non-covalently cross-linked hydrogels.
In summary, this thesis highlights that rationally designed CCs are excellent reversibly tunable, self-healing and self-reporting hydrogel cross-links with high application potential in bioengineering and biomedicine. For the first time, I demonstrated that His-metal coordination-based stabilization can be transferred from the single-CC level to the bulk material, with clear viscoelastic consequences. Insertion of His at specific sequence positions was used to implement a second non-covalent cross-linking mode via intermolecular His-metal coordination. This His-metal-binding-induced aggregation of the CCs enabled reversible tuning of the hydrogel properties from viscoelastic to elastic-like. As a proof of principle for establishing self-reporting CCs as material cross-links, I labeled a CC with a FRET pair. The fluorescently labelled CC acts as a molecular force sensor, and first preliminary results suggest that it enables the detection of hydrogel cross-link failure under compression force. In the future, fluorescently labeled CC force sensors will likely be used as intelligent cross-links not only to study the failure of hydrogels but also to investigate cell-matrix interactions in 3D down to the single-molecule level.
Escaping the plant cell
(2020)
One result of the intercultural relations in Southeast Asia are the still-existing Portuguese-based creole languages Papia Kristang and Macaísta, which have become the mother tongues of generations of people in Malacca and Macau. Which factors drive the language change of these idioms, and how can it be recognized? This volume deals not only with the language dynamics of the Portuguese-based creole languages of Southeast Asia, but also with other central questions of variationist linguistics. It is based on the results of an empirical data collection that documents, in particular, changes in language use. In addition, the author presents new findings on language identification that are relevant not only to creole studies but also, across disciplines, to general linguistics.
Subsea permafrost is perennially cryotic earth material that lies offshore. Most submarine permafrost is relict terrestrial permafrost beneath the Arctic shelf seas that was inundated after the last glaciation and has been warming and thawing ever since. It is a reservoir and confining layer for gas hydrates and has the potential to release greenhouse gases and affect global climate change. Furthermore, subsea permafrost thaw destabilizes coastal infrastructure. While numerous studies focus on its distribution and rate of thaw over glacial timescales, these studies have not been brought together and examined in their entirety to assess rates of thaw beneath the Arctic Ocean. In addition, there is still a large gap in our understanding of sub-aquatic permafrost processes on finer spatial and temporal scales. The degradation rate of subsea permafrost is influenced by the initial conditions upon submergence: terrestrial permafrost that has already undergone warming, partial thawing or loss of ground ice may react differently to inundation by seawater than previously undisturbed, ice-rich permafrost. Heat conduction models are sufficient to model the thaw of thick subsea permafrost from the bottom, but few studies have included salt diffusion for top-down chemical degradation in shallow waters characterized by mean annual cryotic conditions at the seabed. Simulating salt transport is critical for assessing degradation rates of recently inundated permafrost, which may accelerate in response to warming shelf waters, a lengthening open-water season, and faster coastal erosion rates. In the nearshore zone, degradation rates are also controlled by seasonal processes such as bedfast ice, brine injection, seasonal freezing under floating-ice conditions and warm freshwater discharge from large rivers. The interplay of all these variables is complex and needs further research.
To fill this knowledge gap, this thesis investigates sub-aquatic permafrost along the southern coast of the Bykovsky Peninsula in eastern Siberia. Sediment cores and ground temperature profiles were collected at a freshwater thermokarst lake and two thermokarst lagoons in 2017. At this site, the coastline is retreating, and seawater is inundating various types of permafrost: sections of ice-rich Pleistocene permafrost (Yedoma) cliffs at the coastline alternate with lagoons and lower elevation previously thawed and refrozen permafrost basins (Alases). Electrical resistivity surveys with floating electrodes were carried out to map ice-bearing permafrost and taliks (unfrozen zones in the permafrost, usually formed beneath lakes) along the diverse coastline and in the lagoons. Combined with the borehole data, the electrical resistivity results permit estimation of contemporary ice-bearing permafrost characteristics, distribution, and occasionally, thickness. To conceptualize possible geomorphological and marine evolutionary pathways to the formation of the observed layering, numerical models were applied. The developed model incorporates salt diffusion and seasonal dynamics at the seabed, including bedfast ice. Even along coastlines with mean annual non-cryotic boundary conditions like the Bykovsky Peninsula, the modelling results show that salt diffusion minimizes seasonal freezing of the seabed, leading to faster degradation rates compared to models without salt diffusion. Seasonal processes are also important for thermokarst lake to lagoon transitions because lagoons can generate cold hypersaline conditions underneath the ice cover. My research suggests that ice-bearing permafrost can form in a coastal lagoon environment, even under floating ice. Alas basins, however, may degrade more than twice as fast as Yedoma permafrost in the first several decades of inundation. 
In addition to a lower ice content compared to Yedoma permafrost, Alas basins may be pre-conditioned with salt from adjacent lagoons. Considering the widespread distribution of thermokarst in the Arctic, its integration into geophysical models and offshore surveys is important to quantify and understand subsea permafrost degradation and aggradation. Through numerical modelling, fieldwork, and a circum-Arctic review of subsea permafrost literature, this thesis provides new insights into sub-aquatic permafrost evolution in saline coastal environments.
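The numerical core of the coupled heat-conduction and salt-diffusion modelling described above can be sketched with a single explicit finite-difference diffusion step applied to both fields with very different diffusivities. All parameter values below are illustrative assumptions, not those of the thesis model.

```python
import numpy as np

# Hedged sketch of coupled heat conduction and salt diffusion in seabed
# sediment. Forward-Euler, fixed (Dirichlet) boundaries; illustrative values.

def diffuse(u: np.ndarray, d: float, dz: float, dt: float) -> np.ndarray:
    """One explicit step of du/dt = d * d2u/dz2 with fixed boundary values."""
    assert d * dt / dz**2 <= 0.5, "explicit scheme unstable"
    out = u.copy()
    out[1:-1] = u[1:-1] + d * dt / dz**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return out

def freezing_point(salinity: float) -> float:
    """Approximate freezing-point depression of seawater, deg C per g/kg."""
    return -0.054 * salinity

# Salt diffuses roughly 1000x more slowly than heat, so brine intruding at the
# seabed depresses the local freezing point long before temperatures equilibrate.
z = np.linspace(0.0, 10.0, 101)                  # depth grid, m
temp = np.full_like(z, -1.5); temp[0] = -1.0     # seabed boundary temp, deg C
salt = np.zeros_like(z); salt[0] = 30.0          # seawater salinity, g/kg
for _ in range(1000):
    temp = diffuse(temp, d=1e-6, dz=0.1, dt=1000.0)
    salt = diffuse(salt, d=1e-9, dz=0.1, dt=1000.0)
thawed = temp > freezing_point(salt)   # cryotic yet unfrozen where salt intruded
```

The key consequence mirrored from the thesis: near the seabed the material can be "thawed" while still below 0 °C, which is why purely thermal models without salt transport underestimate top-down degradation in shallow water.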
Satisfaction analyses based on patient surveys, in this case the newly developed and tested questionnaire (HNO-PROM), rest on three pillars: stronger patient loyalty can be built; quality can be measured, compared and optimized; and a staff guideline in the sense of a corporate identity can be created that contains concrete management implications in the form of recommendations for action. The guiding principle of quality management is patient orientation in the sense of patient-centered medicine. The aim is not only to fulfill patients' wishes and needs, but above all to measure and plan their satisfaction. At the same time, the treatment of patients must be understood as a service, with the greatest possible patient satisfaction as the primary goal. This builds patient loyalty, because patients can expect both consistent quality and soft factors that meet their wishes. A corporate identity aims to make the institution stand uniformly for its values and thus for its quality, allowing well-being to begin in the patients' minds and thereby creating trust. All three pillars aim not only at patient satisfaction but equally at positioning an institution in the healthcare market and thus at improving its cost-benefit balance through a positive outcome. Satisfaction analyses therefore not only strengthen the economic position of a department but equally keep the ethical aspects of the doctor-patient relationship in view.
NADPH is an essential cofactor that drives biosynthetic reactions in all living organisms. It is a reducing agent and thus the electron donor of anabolic reactions that produce major cellular components as well as many products in biotechnology. Indeed, the engineering of metabolic pathways for the production of many products is often limited by the availability of NADPH. One common strategy to address this issue is to swap the cofactor specificity of enzymes from NADH to NADPH. However, this process is time-consuming and challenging because multiple parameters need to be engineered in parallel. Therefore, the first aim of this project was to establish an efficient metabolic biosensor to select enzymes that can reduce NADP+. An NADPH-auxotroph strain was constructed by deleting the major reactions involved in NADPH biosynthesis in E. coli's central carbon metabolism, with the exception of 6-phosphogluconate dehydrogenase. To validate this strain, two enzymes were tested in the presence of several carbon sources: a dihydrolipoamide dehydrogenase variant of E. coli harboring seven mutations and a formate dehydrogenase (FDH) from Mycobacterium vaccae N10 harboring four mutations were found to support NADPH biosynthesis and growth. The strain was subjected to adaptive laboratory evolution with the goal of testing its robustness under different carbon sources. Our evolution experiment resulted in the random mutagenesis of the malic enzyme (maeA), enabling it to produce NADPH. The additional deletion of maeA rendered a more robust second-generation biosensor strain for NADP+ reduction. We devised a structure-guided directed evolution approach to change the cofactor specificity of Pseudomonas sp. 101 FDH. To this end, a library of >10⁶ variants was tested using in vivo selection.
Compared to the best engineered enzymes reported, our best variant carrying five mutations shows 5-fold higher catalytic efficiency and 13-fold higher specificity towards NADP+, as well as 2-fold higher affinity towards formate. In conclusion, we demonstrate the potential of in vivo selection and evolution-guided approaches to develop better NADPH biosensors and to engineer cofactor specificity by the simultaneous improvement of multiple parameters (kinetic efficiency with NADP+, specificity towards NADP+, and affinity towards formate), which is a major challenge in protein engineering due to the existence of tradeoffs and epistasis.
The idea of critical childhood studies is a relatively young disciplinary undertaking in eastern Africa, and many inquiries have yet to be carried out. The field is a potentially important socio-political marker, among others, of narratives that have emerged out of eastern Africa. To this end, my research seeks out an archaeology of childhood in eastern Africa. A monochromatic hue has often painted the eastern African childhood: this broad stroke portrays childhood as characterized by want, composing its image in terms of the war child, poverty, disease, and begging for aid. The pitfall of this consciousness is that it erases the differentiated and pluralist nature of the eastern African childhood. I therefore hypothesise that childhood is a discourse in which institutional vectors become conduits of certain statement-making, both process-wise and content-wise. As such, a critical childhood study is a theatre for staging and unearthing its joys, tribulations, cultural constructions, and even political interventions. To this end, childhood and its literatures not only reflect but also contribute to meaning-making and the worldliness thereof. As an attempt to move away from an un-nuanced, often monodirectional depiction, I seek to present a chronologically synchronic and diachronic analysis of childhood in eastern Africa. Accordingly, I excavate a chronological construction of childhood within this geopolitical region. The main conceptual anchorage is Francis Nyamnjoh, who tells of the African occupying a life on convivial frontiers. He theorises an Africa that is involved in technologies of self-definition that privilege conversations, fluidity of being and relational connections on a globalised scale. I also appropriate the notion of Bula Matadi from the Congo as a decolonialist epistemological exercise to break apart polarising representations and practices of childhood in eastern Africa.
This opens a space for an unbounded reconfiguration of childhood in eastern Africa. This book works on and with archival matter in a cross-disciplinary manner and ranges from pre-colonial to post-colonial eastern Africa. It is an exploration of the trajectory of the discourse of childhood in eastern Africa, eclectically investigating its fictional and non-fictional representations.
As one of the leading representatives of the History of Religions School (Religionsgeschichtliche Schule), Hugo Greßmann (1877-1927) brought the history-of-religions method to prominence. This biographical study in the history of scholarship presents Greßmann's history-of-religions program, places it in its scholarly-historical context, and demonstrates its significance for the subsequent history of the discipline.
Wege zur Gesangskarriere
(2020)
Early numeracy is one of the strongest predictors of later success in school mathematics (e.g., Duncan et al., 2007). The main goal of first-grade mathematics teachers should therefore be to provide learning opportunities that enable all students to develop sound early numeracy skills. Developmental models, or learning progressions, can describe how early numerical understanding typically develops. Assessments that are aligned to empirically validated learning progressions can support teachers in understanding their students' learning better and targeting instruction accordingly. To date, no progression-based instruments have been made available for German teachers to monitor their students' progress in the domain of early numeracy. This dissertation contributes to the design of such an instrument. The first study analysed the suitability of early numeracy assessments currently used in German primary schools at school entry to identify students' individual starting points for subsequent progress monitoring. The second study described the development of progression-based items and investigated the items with regard to the main test quality criteria, such as reliability, validity, and test fairness, to find a suitable item pool from which to build targeted tests. The third study described the construction of the progress monitoring measure, referred to as the learning progress assessment (LPA), and investigated the extent to which the LPA was able to monitor students' individual learning progress in early numeracy over time. The results of the first study indicated that current school entry assessments were not able to provide meaningful information about the students' initial learning status. Thus, the MARKO-D test (Ricken, Fritz, & Balzer, 2013) was used to determine the students' initial numerical understanding in the other two studies, because it has been shown to be an effective measure of conceptual numerical understanding (Fritz, Ehlert, & Leutner, 2018).
Both studies provided promising evidence for the quality of the LPA and its ability to detect changes in numerical understanding over the course of first grade. The studies of this dissertation can be considered an important step in the process of designing an empirically validated instrument that supports teachers to monitor their students’ early numeracy development and to adjust their teaching accordingly to enhance school achievement.
Cells and tissues are sensitive to mechanical forces applied to them. In particular, bone forming cells and connective tissues, composed of cells embedded in fibrous extracellular matrix (ECM), are continuously remodeled in response to the loads they bear. The mechanoresponses of cells embedded in tissue include proliferation, differentiation, apoptosis, internal signaling between cells, and formation and resorption of tissue.
Experimental in-vitro systems of various designs have demonstrated that forces affect tissue growth, maturation and mineralization. However, the results depended on different parameters such as the type and magnitude of the force applied in each study. Some experiments demonstrated that applied forces increase cell proliferation and inhibit cell maturation rate, while other studies found the opposite effect. When the effect of different magnitudes of forces was compared, some studies showed that higher forces resulted in a cell proliferation increase or differentiation decrease, while other studies observed the opposite trend or no trend at all.
In this study, MC3T3-E1 cells, a cell line of pre-osteoblasts (bone-forming cells), were used. In this cell line, cell differentiation is known to accelerate after the cells stop proliferating, typically at confluency. This makes the cell line an interesting subject for studying the influence of forces on the switch between the proliferation stage of the precursor cell and differentiation into mature osteoblasts.
A new experimental system was designed to perform systematic investigations of the influence of the type and magnitude of forces on tissue growth. A single well plate contained an array of 80 rectangular pores, each seeded with MC3T3-E1 cells. The culture medium contained magnetic beads (MBs) of 4.5 μm in diameter that were incorporated into the pre-osteoblast cells. Using an N52 neodymium magnet, forces ranging over three orders of magnitude were applied to MBs incorporated in cells at 10 different distances from the magnet. The amount of formed tissue was assessed after 24 days of culture. The experimental design made it possible to obtain data on (i) the influence of the type of force (static, oscillating, no force) on tissue growth; (ii) the influence of the magnitude of force (pN-nN range); and (iii) the effect of functionalizing the magnetic beads with the tripeptide Arg-Gly-Asp (RGD). To assess the cell differentiation state at the end of the tissue growth experiments, the expression of alkaline phosphatase (ALP), a well-known marker of osteoblast differentiation, was analyzed.
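For orientation on why such bead-and-magnet setups reach the pN-nN range, the sketch below evaluates the standard magnetophoretic force on a small magnetizable sphere in the linear regime, F = V Δχ B ∇B / μ₀. The field, gradient, and susceptibility-contrast values are illustrative assumptions, not the calibration used in the study.

```python
import numpy as np

# Hedged sketch: magnetophoretic force on a superparamagnetic bead,
# F = V * dchi * B * gradB / mu0 (linear regime). Illustrative values only.

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def bead_force(radius_m: float, dchi: float,
               b_tesla: float, grad_b_t_per_m: float) -> float:
    """Force magnitude (N) on a small magnetizable sphere."""
    volume = 4.0 / 3.0 * np.pi * radius_m**3
    return volume * dchi * b_tesla * grad_b_t_per_m / MU0

# A 4.5 um bead (r = 2.25 um) in a strong field gradient lands in the pN range;
# because B and gradB fall off with distance from the magnet, placing pores at
# 10 distances naturally spans orders of magnitude in force.
f = bead_force(2.25e-6, 0.2, 0.1, 10.0)   # on the order of a few pN
```

The cubic dependence on bead radius and the product B·∇B explain why both bead size and magnet distance are effective levers for scanning the pN-nN range.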
The experiments showed that the application of static magnetic forces increased tissue growth compared to control, while oscillating forces resulted in tissue growth reduction. A statistically significant positive correlation was found between the amount of tissue grown and the magnitude of the oscillating magnetic force. A positive but non-significant correlation of the amount of tissue with the magnitude of forces was obtained when static forces were applied. Functionalizing the MBs with RGD peptides and applying oscillating forces resulted in an increase of tissue growth relative to tissues incubated with “plain” epoxy MBs. ALP expression decreased as a function of the magnitude of force both when static and oscillating forces were applied. ALP stain intensity was reduced relative to control when oscillating forces were applied and was not significantly different than control for static forces.
The suggested interpretation of the experimental findings is that larger mechanical forces delay cell maturation and keep the pre-osteoblasts in a more proliferative stage characterized by more tissue formed and lower expression of ALP. While the influence of the force magnitude can be well explained by an effect of the force on the switch between proliferation and differentiation, the influence of force type (static or oscillating) is less clear. In particular, it is challenging to reconcile the reduction of tissue formed under oscillating forces as compared to controls with the simultaneous reduction of ALP expression. To better understand this, it may be necessary to refine the staining protocol of the scaffolds and to include the amount and structure of ECM as well as other factors that were not monitored in the experiment and which may influence tissue growth and maturation.
The developed experimental system proved well suited for a systematic and efficient study of the mechanoresponsiveness of tissue growth: it allowed the dependence of tissue growth on force magnitudes ranging over three orders of magnitude to be studied, and the effects of static and oscillating forces to be compared. Future experiments can explore the multiple parameters that affect tissue growth as a function of the magnitude of the force: by applying different time-dependent forces, by extending the force range studied, or by using different cell lines and manipulating the mechanotransduction in the cells biochemically.
Some of the most frequent questions surrounding business negotiations address not only the nature of such negotiations, but also how they should be conducted. The answers given by business people from different cultural backgrounds to these questions are likely to differ from the standard answers found in business manuals.
In her book, Milene Mendes de Oliveira investigates how Brazilian and German business people conceptualize and act out business negotiations using English as a Lingua Franca. The frameworks of Cultural Linguistics, English as a Lingua Franca, World Englishes, and Business Discourse offer the theoretical and methodological grounding for the analysis of interviews with high-ranking Brazilian and German business people. Moreover, a side study on e-mail exchanges between Brazilian and German employees of a healthcare company serves as a test case for the results arising from the interviews, and helps understand other facets of authentic intercultural business communication.
Offering new insights on English as a Lingua Franca in international business contexts, Business Negotiations in ELF from a Cultural Linguistic Perspective simultaneously provides a detailed cultural-conceptual account of business negotiations from the viewpoint of Brazilian and German business people and a secondary analysis of their pragmatic aspects.
The Bavarian-League war commissariat controlled the largely autonomous mercenary army during the Thirty Years' War. It is therefore considered an exemplary object of research on princely power in this period. While previous research was limited to the normative level, this study takes a multi-perspective approach to the topic by evaluating field records and private correspondence and by applying the methods of prosopography and network analysis. It brings to light the entanglements of the various competencies and functions of the social network in the exercise of the war commissariat's office. The study thus contributes to capturing the diversity of rule in the early modern period.
Socializing Development
(2020)
The cooperation between schools and child and youth welfare services is currently in flux. Since the early 2000s at the latest, policymakers in particular have called for closer collaboration between the two institutions. The 2000 PISA study highlighted the inequality of opportunity in the German education system and thus brought the topic back into focus. More recently, the inclusion debate and the flows of refugees reaching Europe since 2015, along with the right-wing populist tendencies in Germany that followed, have reinforced the need to connect the educational worlds of youth welfare and school more closely. Young people from the most diverse groups are to be integrated better into the education system, and anti-democratic thinking is to be countered.
Cooperation between youth welfare and schools is thus in greater demand than ever. It is regarded as a complex problem-solving strategy and is associated with a host of positive expectations: it is meant to answer questions of educational and social policy, strengthen democratic ideals, and prepare young people for rapid technological and media change in everyday working life. In theory, the cooperation between youth welfare and school is touted as a panacea, but what does practice look like?
This study pursues that question by examining, as a case study, the cooperation at the educational center "Kurt Löwenstein" in Werneuchen, Brandenburg, from the perspective of teachers. The author systematically analyzes semi-structured interviews and arrives at surprising results. The study lays out success factors and obstacles to cooperation and delivers a critical and differentiated analysis of the status quo.
The present work consists of three subprojects: the realization of a multiparameter sensor (temperature, pH value, and oxygen concentration), the design and investigation of an optical breath-gas sensor, and studies on applying the concept of oxygen quenching in immunotechnology. To realize the multiparameter sensor, the individual sensor dyes were synthesized where necessary and then characterized individually under laboratory conditions. A setup was subsequently designed that makes it possible to excite all sensor dyes used with a single excitation source. The parameters temperature and oxygen concentration were detected by phase-modulation spectroscopy, and the pH value by steady-state fluorescence spectroscopy. In this way, a multiparameter sensor was devised that detects all three parameters simultaneously, in real time, and without an external temperature measurement. In the course of developing the optical breath-gas sensor, a new sensor design was first developed. This new design, which is characterized by very short response times, makes it possible to record the oxygen content of exhaled air in great detail. Voluntary self-experiments with the breath-gas sensor established a correlation with an established examination method. In the studies on applying oxygen quenching in immunotechnology, a model was first developed that describes the interaction between an antibody and a synthesized dye acting as the antigen. After an antibody-antigen interaction had been demonstrated in simple media such as PBS buffer solution, this was also achieved in complex media such as bovine serum, cow's milk, and saliva.
A system was thus developed that makes it possible to follow antibody-antigen interactions in complex biological media.
One of the tremendous discoveries of the Cassini spacecraft has been the detection of propeller structures in Saturn's A ring. Although the generating moonlet is too small to be resolved by the cameras aboard Cassini, the density structure its gravity produces within the rings can be observed well. The largest observed propeller, called Blériot, has an azimuthal extent of several thousand kilometers. Thanks to its large size, Blériot could be identified in different images over a time span of more than 10 years, allowing the reconstruction of its orbital evolution. It turns out that Blériot deviates considerably from its expected Keplerian orbit, by several thousand kilometers in the azimuthal direction. This excess motion can be reconstructed well by a superposition of three harmonics and therefore resembles the typical fingerprint of a resonantly perturbed body. This PhD thesis addresses the excess motion of Blériot. Resonant perturbations are known for some of the outer satellites of Saturn. Thus, in the first part of this thesis, we search for suitable resonance candidates near the propeller that might explain the observed periods and amplitudes. In numerical simulations, we show that resonances with Prometheus, Pandora and Mimas can indeed explain the libration periods in good agreement, but not the amplitudes. The amplitude problem is solved by the introduction of a propeller-moonlet interaction model, in which we assume a broken symmetry of the propeller caused by a small displacement of the moonlet. The resulting non-vanishing accelerations produce a librating motion of the moonlet around the propeller's symmetry center. The retarded reaction of the propeller structure to the motion of the moonlet causes the propeller to become asymmetric. Hydrodynamic simulations performed to test our analytical model confirm our predictions.
In the second part of this thesis, we consider stochastic migration of the moonlet as an alternative hypothesis to explain the observed excess motion of Blériot. The mean longitude is a time-integrated quantity and thus introduces a correlation between the independent kicks of a random walk, smoothing the noise and making the residual look similar to the one observed for Blériot. We apply a diagonalization test to decorrelate the observed residuals for the propellers Blériot and Earhart and the ring-moon Daphnis. It turns out that the decorrelated distributions do not strictly follow the expected Gaussian distribution. Since the decorrelation method fails to distinguish a correlated random walk from a noisy libration, we provide an alternative study. Assuming the three-harmonic fit to be a valid representation of the excess motion of Blériot, independent of its origin, we test the likelihood that this excess motion can be created by a random walk. It turns out that neither an uncorrelated nor a correlated random walk is likely to explain the observed excess motion.
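The smoothing effect of time integration described above can be illustrated in a few lines: independent kicks to the mean motion, once integrated into the mean longitude, yield a strongly correlated, libration-like residual. The noise amplitude, time step, and number of epochs below are arbitrary illustrative choices, not fitted to Blériot.

```python
import numpy as np

# Hedged sketch of the random-walk hypothesis: white-noise kicks to the mean
# motion become a random walk after one integration, and a smooth, correlated
# longitude residual after a second. Illustrative parameters only.

rng = np.random.default_rng(0)
n_obs, dt = 500, 1.0                     # observation epochs, arbitrary time unit
kicks = rng.normal(0.0, 1e-3, n_obs)     # independent kicks to the mean motion
mean_motion_residual = np.cumsum(kicks)                     # random walk in n
longitude_residual = np.cumsum(mean_motion_residual) * dt   # time-integrated

def lag1_autocorr(x: np.ndarray) -> float:
    """Lag-1 autocorrelation; near 0 for white noise, near 1 for smooth series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

print(lag1_autocorr(kicks))               # near 0: kicks are independent
print(lag1_autocorr(longitude_residual))  # near 1: integration correlates samples
```

This correlation is exactly what makes a raw residual insufficient to discriminate the two hypotheses, motivating the decorrelation (diagonalization) test described in the abstract.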
The present study analyzed the direct relationship between a work-related social group work program and the outcome of occupational reintegration among rehabilitation patients facing particular occupational problems. It was funded by the German Pension Insurance (Deutsche Rentenversicherung Bund) as a research project from 1 January 2013 to 31 December 2015 and carried out at the Chair of Rehabilitation Sciences at the University of Potsdam.
The research question was: Can an intensive social-work group intervention within inpatient medical rehabilitation strengthen patients' social competencies and social support to such an extent that long-term improvements in occupational reintegration result compared with conventional treatment?
The study comprised a qualitative and a quantitative survey with an intervention in between. It included 352 patients aged between 18 and 65 with cardiovascular diagnoses, whose conditions are frequently accompanied by complex problems and a poor socio-medical prognosis.
The group intervention was evaluated in a cluster-randomized controlled study design to provide empirical evidence of whether the intervention achieves greater effects than regular social-work treatment. The intervention groups took part in the group program; the control groups received the regular social-work treatment.
In this sample, no evidence could be found that participation in the social-work group program improved occupational reintegration, health-related work ability, quality of life, or social support. The return-to-work rate was 43.7%; a quarter of the study group was unemployed after one year. The group intervention must be regarded as equivalent to the conventional setting of social work.
The study therefore points to the need for social-work support of occupational reintegration over a longer period after a cardiovascular illness, in particular through local services offered at a later point in time when health is more stable. The surveys suggested possible gains from closer cooperation between social work and psychology. There were also indications of the influential role of relatives, who could support the reintegration process if involved in social counselling. The fit of the investigated social-work group interventions should be improved through targeted social diagnostics.
Chloroplasts are the photosynthetic organelles in plant and algal cells that enable photoautotrophic growth. Due to their prokaryotic origin, modern-day chloroplast genomes harbor 100 to 200 genes. These genes encode core components of the photosynthetic complexes and of the chloroplast gene expression machinery, making most of them essential for the viability of the organism. Their expression is regulated predominantly through translational adjustments. The powerful technique of ribosome profiling has been used successfully to generate highly resolved pictures of the translational landscape of the Arabidopsis thaliana cytosol, identifying translation of upstream open reading frames and of long non-coding transcripts. Differences in plastidial translation and ribosomal pausing sites have also been addressed with this method. However, a highly resolved picture of the chloroplast translatome is still missing. Here, using chloroplast isolation and targeted ribosome affinity purification, I generated highly enriched ribosome profiling datasets of the chloroplast translatome of Nicotiana tabacum in the dark and in the light. Chloroplast isolation proved unsuitable for an unbiased analysis of translation in the chloroplast but adequate for identifying potential co-translational import. Affinity purification was performed independently for the small and the large ribosomal subunit. The enriched datasets mirrored the results obtained from whole-cell ribosome profiling. Enhanced translational activity was detected for psbA in the light. No alternative translation initiation mechanism was identified by selective enrichment of small-ribosomal-subunit footprints. In sum, this is the first study to use enrichment strategies to obtain high-depth ribosome profiling datasets of chloroplasts in order to study ribosome subunit distribution and chloroplast-associated translation.
Ever-changing light intensities challenge the photosynthetic capacity of photosynthetic organisms. Increased light intensities may lead to over-excitation of the photosynthetic reaction centers, resulting in damage to the photosystem core subunits. In addition to an expensive repair mechanism for the photosystem II core protein D1, photosynthetic organisms have developed various features to reduce or prevent photodamage. In the long term, the contents of the photosynthetic complexes are adjusted for efficient use of the experienced irradiation. However, the contribution of chloroplast gene expression to this acclimation process has remained largely unknown. Here, comparative transcriptome and ribosome profiling was performed on a genome-wide scale for the early time points of high-light acclimation in Nicotiana tabacum chloroplasts. The time-course data revealed stable transcript levels and only minor changes in the translational activity of specific chloroplast genes during high-light acclimation. Yet, psbA translation was increased two-fold in high light from shortly after the shift until the end of the experiment. A stress-inducing shift from low to high light likewise increased the translation only of psbA. This study indicates that acclimation does not begin within the observed time frame and that only short-term responses reducing photoinhibition occur.
Remembering the dismembered
(2020)
This thesis – written in co-authorship with Tanzanian activist Mnyaka Sururu Mboro – examines different cases of repatriation of ancestral remains to African countries and communities through the prism of postcolonial memory studies. It follows the theft and displacement of prominent ancestors from East and Southern Africa (Sarah Baartman, Dawid Stuurman, Mtwa Mkwawa, Songea Mbano, King Hintsa and the victims of the Ovaherero and Nama genocides) and argues that efforts made for the repatriation of their remains have contributed to a transnational remembrance of colonial violence.
Drawing on cultural studies theories such as "multidirectional memory", "rehumanisation" and "necropolitics", the thesis argues for a new conceptualisation of repatriation as "re-membrance", through processes of reunion, empowerment, story-telling and belonging. Moreover, the afterlives of the dead ancestors, who stand at the centre of political debates on justice and reparations, serve as reminders of their past struggles against colonial oppression. They are therefore "memento vita", fostering counter-discourses that recognize them as people and stories.
This manuscript is accompanied by a “(web)site of memory” where some of the research findings are made available to a wider audience. This blog also hosts important sound material which appears in the thesis as interventions by external contributors. Through QR codes, both the written and the digital version are linked with each other to problematize the idea of a written monograph and bring a polyphonic perspective to those diverse, yet connected, histories.
Towards seasonal prediction: stratosphere-troposphere coupling in the atmospheric model ICON-NWP
(2020)
Stratospheric variability is one of the main potential sources for sub-seasonal to seasonal predictability in mid-latitudes in winter. Stratospheric pathways play an important role for long-range teleconnections between tropical phenomena, such as the quasi-biennial oscillation (QBO) and El Niño-Southern Oscillation (ENSO), and the mid-latitudes on the one hand, and linkages between Arctic climate change and the mid-latitudes on the other hand. In order to move forward in the field of extratropical seasonal predictions, it is essential that an atmospheric model is able to realistically simulate the stratospheric circulation and variability. The numerical weather prediction (NWP) configuration of the ICOsahedral Non-hydrostatic atmosphere model ICON is currently being used by the German Meteorological Service for the regular weather forecast, and is intended to produce seasonal predictions in future. This thesis represents the first extensive evaluation of Northern Hemisphere stratospheric winter circulation in ICON-NWP by analysing a large set of seasonal ensemble experiments.
An ICON control climatology simulated with a default setup is able to reproduce the basic behaviour of the stratospheric polar vortex. However, stratospheric westerlies are significantly too weak and major stratospheric warmings too frequent, especially in January. The weak stratospheric polar vortex in ICON is furthermore connected to a mean sea level pressure (MSLP) bias pattern resembling the negative phase of the Arctic Oscillation (AO). Since a good representation of the drag exerted by gravity waves is crucial for a realistic simulation of the stratosphere, three sensitivity experiments with reduced gravity wave drag are performed. Reductions of the non-orographic and of the orographic gravity wave drag each lead to a strengthening of the stratospheric vortex and thus to a bias reduction in winter, in particular in January. However, the effect of the non-orographic gravity wave drag on the stratosphere is stronger. A third experiment, combining a reduced orographic and non-orographic drag, exhibits the largest stratospheric bias reductions. The analysis of stratosphere-troposphere coupling based on an index of the Northern Annular Mode demonstrates that ICON realistically represents downward coupling. This coupling is intensified and more realistic in experiments with a reduced gravity wave drag, in particular with reduced non-orographic drag. Tropospheric circulation is also affected by the reduced gravity wave drag, especially in January, when the strongly improved stratospheric circulation reduces biases in the MSLP patterns. Moreover, a retuning of the subgrid-scale orography parameterisations leads to a significant error reduction in the MSLP in all months. In conclusion, the combination of these adjusted parameterisations is recommended as the current optimal setup for seasonal simulations with ICON.
Additionally, this thesis discusses further possible influences on the stratospheric polar vortex, including the influence of tropical phenomena, such as the QBO and ENSO, as well as the influence of a rapidly warming Arctic. ICON does not simulate the quasi-oscillatory behaviour of the QBO and favours weak easterlies in the tropical stratosphere. A comparison with a reanalysis composite of the easterly QBO phase reveals that the shift towards the easterly QBO in ICON further weakens the stratospheric polar vortex. On the other hand, the stratospheric reaction to ENSO events in ICON is realistic. ICON and the reanalysis exhibit a weakened stratospheric vortex in warm ENSO years. Furthermore, in particular in winter, warm ENSO events favour the negative phase of the Arctic Oscillation, whereas cold events favour the positive phase. The ICON simulations also suggest a significant effect of ENSO on the Atlantic-European sector in late winter. To investigate the influence of Arctic climate change on mid-latitude circulation changes, two differing approaches with transient and fixed sea ice conditions are chosen. Neither ICON approach exhibits the mid-latitude tropospheric negative Arctic Oscillation circulation response to amplified Arctic warming that is discussed on the basis of observational evidence. Nevertheless, adding a new model to the current and active discussion on Arctic-midlatitude linkages further contributes to understanding the divergent conclusions between model and observational studies.
Successfully completing any data science project demands careful consideration across its whole process. Although the focus is often put on the later phases of the process, in practice experts spend more time in the earlier phases, preparing data to make them consistent with the systems' requirements or to improve their models' accuracies. Duplicate detection is typically applied during the data cleaning phase, which is dedicated to removing data inconsistencies and improving the overall quality and usability of data. While data cleaning involves a plethora of approaches to perform specific operations, such as schema alignment and data normalization, the task of detecting and removing duplicate records is particularly challenging. Duplicates arise when multiple records representing the same entity exist in a database, for numerous reasons ranging from simple typographical errors to the differing schemas and formats of integrated databases. Keeping a database free of duplicates is crucial for most use cases, as their existence causes false negatives and false positives when matching queries against it. These two data quality issues have negative implications for tasks such as hotel booking, where users may erroneously select a wrong hotel, or parcel delivery, where a parcel can get delivered to the wrong address. Identifying the variety of possible data issues in order to eliminate duplicates demands sophisticated approaches.
While research in duplicate detection is well established and covers different aspects of both efficiency and effectiveness, our work in this thesis focuses on the latter. We propose novel approaches to improve data quality before duplicate detection takes place and to apply duplicate detection to datasets even when prior labeling is not available. Our experiments show that improving data quality upfront can increase duplicate classification results by up to 19%. To this end, we propose two novel pipelines that select and apply generic as well as address-specific data preparation steps with the purpose of maximizing the success of duplicate detection. Generic data preparation, such as the removal of special characters, can be applied to any relation with alphanumeric attributes. When applied, data preparation steps are selected only for attributes where they have positive effects on pair similarities, which indirectly affect classification, or on classification directly. Our work on addresses is twofold: first, we consider more domain-specific approaches to improve the quality of values, and, second, we experiment with known and modified versions of similarity measures to select the most appropriate per address attribute, e.g., city or country.
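The attribute-wise selection of preparation steps can be illustrated with a minimal sketch (data, names and the similarity measure are invented for illustration; the thesis pipelines are more elaborate): a generic step such as special-character removal is kept only if it raises the similarity of known duplicate pairs.

```python
# Hypothetical sketch: keep a generic preparation step (special-character
# removal) only if it increases the mean similarity of known duplicate pairs.
import re
from difflib import SequenceMatcher

def prepare(value: str) -> str:
    """Generic preparation: strip special characters, collapse whitespace, lowercase."""
    cleaned = re.sub(r"[^0-9a-zA-Z ]+", " ", value)
    return re.sub(r"\s+", " ", cleaned).strip().lower()

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def keep_step(pairs) -> bool:
    """Select the step only if it raises mean similarity of the duplicate pairs."""
    before = sum(similarity(a, b) for a, b in pairs) / len(pairs)
    after = sum(similarity(prepare(a), prepare(b)) for a, b in pairs) / len(pairs)
    return after > before

dups = [("Müller, Anna-Lena", "muller anna lena"), ("J.R.R. Tolkien", "JRR Tolkien")]
print(keep_step(dups))  # the cleaned values are more alike, so the step is kept
```

The same comparison could be made against classification quality directly instead of pair similarities, mirroring the two selection criteria described above.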
To facilitate duplicate detection in applications where gold standard annotations are not available and obtaining them is not possible or too expensive, we propose MDedup. MDedup is a novel, rule-based, and fully automatic duplicate detection approach that is based on matching dependencies. These dependencies can be used to detect duplicates and can be discovered efficiently by state-of-the-art algorithms without any prior labeling. MDedup uses two pipelines to first train on datasets with known labels, learning to identify useful matching dependencies, and then to be applied to unseen datasets, regardless of any existing gold standard. Finally, our work is accompanied by open-source code to enable the repeatability of our research results and the application of our approaches to other datasets.
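How a matching dependency can flag duplicates may be sketched as follows (an illustrative toy rule, not the MDedup implementation; attribute names and thresholds are invented):

```python
# Toy matching dependency (MD): two records that agree closely enough on a
# set of attributes are declared duplicates. Thresholds here are invented.
from difflib import SequenceMatcher

def sim(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# MD: name-similarity >= 0.9 AND city-similarity >= 0.8  ->  duplicate
md = {"name": 0.9, "city": 0.8}

def md_match(r1: dict, r2: dict, md: dict) -> bool:
    """Apply one matching dependency to a record pair."""
    return all(sim(r1[attr], r2[attr]) >= thr for attr, thr in md.items())

r1 = {"name": "Jon Smith", "city": "Berlin"}
r2 = {"name": "John Smith", "city": "berlin"}
print(md_match(r1, r2, md))  # True
```

Discovering which such rules hold on a relation, rather than writing them by hand, is what the state-of-the-art discovery algorithms mentioned above automate.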
In contrast to its predominantly terrestrial character today, Central Asia also witnessed marine environments and conditions in its geologic past. A vast, shallow sea, known as the proto-Paratethys, extended across Eurasia from the Mediterranean Tethys to the Tarim Basin in western China during Cretaceous to Paleogene times. This sea formed about 160 million years ago (during Jurassic times) when the waters of the Tethys Ocean flooded into Eurasia. It drastically retreated to the west and became isolated as the Paratethys during the Late Eocene-Oligocene (ca. 34 Ma).
Having well-constrained timing and paleogeography for the Cretaceous-Paleogene proto-Paratethys sea incursions in Central Asia is essential to properly understand and distinguish the controlling mechanisms and their link to Asian paleoenvironmental and paleoclimatic change. The Cretaceous-Paleogene tectonic evolution of the Pamir and Tibet and their far-field effects play a significant role in the sedimentological and structural evolution of the Central Asian basins and in the evolution of the proto-Paratethys sea fluctuations. Comparing the records of the sea incursions to tectonic and eustatic events is of paramount importance for revealing the controlling mechanisms behind the sea incursions. However, due to inaccuracies in the dating of rocks (mostly continental rocks and marine rocks with benthic microfossils providing low-resolution biostratigraphic constraints) and conflicting results, there has been no consensus on the timing of the sea incursions, and the interpretation of their records has been in question. Here, we present a new chronostratigraphic framework based on biostratigraphy and magnetostratigraphy as well as a detailed paleoenvironmental analysis for the Cretaceous and Paleogene proto-Paratethys Sea incursions in the Tajik and Tarim basins in Central Asia. This enables us to identify the major drivers of marine fluctuations and their potential consequences for regional and global climate, particularly Asian aridification and global carbon cycle perturbations such as the Paleocene-Eocene Thermal Maximum (PETM). To estimate the paleogeographic evolution of the proto-Paratethys Sea, the refined age constraints and detailed paleoenvironmental interpretations are combined with successive paleogeographic maps. Regional coastlines and depositional environments during the Cretaceous-Paleogene sea advances and retreats were drawn based on the results of this thesis and integrated with the existing literature to generate new paleogeographic maps.
Before its final westward retreat in the Eocene, a total of six Cretaceous and Paleogene major sea incursions have been distinguished from the sedimentary records of the Tajik and Tarim basins in Central Asia. All have been studied and documented here.
We identify the presence of marine conditions already in the Early Cretaceous in the western Tajik Basin, followed by the Cenomanian (ca. 100 Ma) and Santonian (ca. 86 Ma) major marine incursions far into the eastern Tajik and Tarim basins separated by a Turonian-Coniacian (ca. 92-86 Ma) regression. Basin-wide tectonic subsidence analyses imply that the Early Cretaceous invasion of the sea into the Tajik Basin is related to increased Pamir tectonism (at ca. 130 – 90 Ma) in a retro-arc basin setting inferred to be linked to collision and subduction. This tectonic event mainly governed the Cenomanian (ca. 100 Ma) sea incursion in conjunction with a coeval global eustatic high resulting in the maximum geographic extent of the sea. The following Turonian-Coniacian (ca. 92-86 Ma) major regression, driven by eustasy, coincides with a sharp slowdown in tectonic subsidence related to a regime change in Pamir tectonism from compression to extension. The Santonian (ca. 86 Ma) major sea incursion was more likely controlled dominantly by eustasy as also evidenced by the coeval fluctuations in the west Siberian Basin. During the early Maastrichtian, the global Late Cretaceous cooling is inferred from the disappearance of mollusk-rich limestones and the dominance of bryozoan-rich and echinoderm-rich limestones in the Tajik Basin documenting the first evidence for the Late Cretaceous cooling event in Central Asia.
Following the last Cretaceous sea incursion, a major regional restriction event, marked by the exceptionally thick (≤ 400 m) shelf evaporites is assigned a Danian-Selandian age (ca. 63-59 Ma). This is followed by the largest recorded proto-Paratethys sea incursion with a transgression estimated as early Thanetian (ca. 59-57 Ma) and a regression within the Ypresian (ca. 53-52 Ma). The transgression of the next incursion is now constrained as early Lutetian (ca. 47-46 Ma), whereas its regression is constrained as late Lutetian (ca. 41 Ma) and is associated with a drastic increase in both tectonic subsidence and basin infilling. The age of the final and least pronounced sea incursion restricted to the westernmost margin of the Tarim Basin is assigned as Bartonian–Priabonian (ca. 39.7-36.7 Ma). We interpret the long-term westward retreat of the proto-Paratethys Sea starting at ca. 41 Ma to be associated with far-field tectonic effects of the Indo-Asia collision and Pamir/Tibetan plateau uplift. Short-term eustatic sea level transgressions are superimposed on this long-term regression and seem coeval with the transgression events in the other northern Peri-Tethyan sedimentary provinces for the 1st and 2nd Paleogene sea incursions. However, the last Paleogene sea incursion is interpreted as related to tectonism. The transgressive and regressive intervals of the proto-Paratethys Sea correlate well with the reported humid and arid phases, respectively in the Qaidam and Xining basins, thus demonstrating the role of the proto-Paratethys Sea as an important moisture source for the Asian interior and its regression as a contributor to Asian aridification.
We lastly study the mechanics, relative contribution and preservation efficiency of ancient epicontinental seas as carbon sinks with new and existing data, using organic-rich (sapropel) deposits dated to the PETM from the extensive epicontinental proto-Paratethys and West Siberian seas. We estimate ca. 1390±230 Gt of organic C burial, a substantial amount compared to the previously estimated global total excess organic C burial (ca. 1700-2900 Gt), focused in the proto-Paratethys and West Siberian seas alone. We also speculate that enhanced organic carbon burial over much of the proto-Paratethys (and later Paratethys) basin, during the deposition of the Kuma Formation and the Maikop series, respectively, may have contributed substantially to the drawdown of atmospheric carbon dioxide before and during the EOT cooling and the glaciation of Antarctica. For past periods with smaller epicontinental seas, the effectiveness of this negative carbon-cycle feedback was arguably diminished, and the same likely applies to the present day.
With his September 2015 speech “Breaking the tragedy of the horizon”, the Governor of the Bank of England, Mark Carney, put climate change on the agenda of financial market regulators. Until then, climate change had been framed mainly as a problem of negative externalities leading to long-term economic costs, which resulted in countries trying to keep the short-term costs of climate action to a minimum. Carney argued that climate change, as well as climate policy, can also lead to short-term financial risks, potentially causing strong adjustments in asset prices. Analysing the effect of a sustainability transition on the financial sector challenges traditional economic and financial analysis and requires a much deeper understanding of the interrelations between climate policy and financial markets.
This dissertation thus investigates the implications of climate policy for financial markets as well as the role of financial markets in a transition to a sustainable economy. The approach combines insights from macroeconomic and financial risk analysis. Following an introduction and classification in Chapter 1, Chapter 2 presents a macroeconomic analysis that combines ambitious climate targets (negative externality) with technological innovation (positive externality), adaptive expectations and an investment program, resulting in overall positive macroeconomic outcomes. The analysis also reveals the limitations of climate economic models in their representation of financial markets. Therefore, the subsequent part of this dissertation is concerned with the link between climate policies and financial markets. In Chapter 3, an empirical analysis of stock-market responses to the announcement of climate policy targets is performed to investigate the impacts of climate policy on financial markets. Results show that 1) international climate negotiations have an effect on asset prices and 2) investors increasingly recognize transition risks in carbon-intensive investments. In Chapter 4, an analysis of equity markets and the interbank market shows that transition risks can potentially affect a large part of the equity market and that financial interconnections can amplify negative shocks. In Chapter 5, an analysis of mortgage loans shows how information on climate policy and the energy performance of buildings can be integrated into risk management and reflected in interest rates.
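Stock-market responses to announcements of the kind analysed in Chapter 3 are typically measured with an event study; the following sketch applies the standard market model to synthetic data (window lengths, the simulated shock and all parameters are illustrative, not the thesis's actual setup):

```python
# Event-study sketch under the market model: abnormal returns are stock
# returns minus the market-model prediction, cumulated around the event.
import numpy as np

rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.01, 120)                  # daily market returns
stock = 1.2 * market + rng.normal(0.0, 0.005, 120)   # a carbon-exposed stock
stock[100] -= 0.08                                   # price drop on the announcement day

est, evt = slice(0, 90), slice(95, 106)              # estimation and event windows
beta = np.cov(stock[est], market[est], ddof=1)[0, 1] / np.var(market[est], ddof=1)
alpha = stock[est].mean() - beta * market[est].mean()
abnormal = stock[evt] - (alpha + beta * market[evt]) # market-model abnormal returns
car = abnormal.sum()                                 # cumulative abnormal return
print(round(car, 4))
```

A negative cumulative abnormal return around the announcement would be consistent with investors repricing transition risk in carbon-intensive assets.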
While costs of climate action have been explored at great depth, this dissertation offers two main contributions. First, it highlights the importance of a green investment program to strengthen the macroeconomic benefits of climate action. Second, it shows different approaches on how to integrate transition risks and opportunities into financial market analysis. Anticipating potential losses and gains in the value of financial assets as early as possible can make the financial system more resilient to transition risks and can stimulate investments into the decarbonization of the economy.
Gold at the nanoscale
(2020)
In this cumulative dissertation, I present my contributions to the field of plasmonic nanoparticle science. Plasmonic nanoparticles are characterised by resonances of the free electron gas around the spectral range of visible light. In recent years, they have evolved into promising components for light-based nanocircuits, light harvesting, nanosensors, cancer therapies, and many more applications.
This work presents the articles I authored or co-authored during my time as a PhD student at the University of Potsdam. The main focus lies on the coupling between localised plasmons and excitons in organic dyes. Plasmon–exciton coupling brings light–matter coupling to the nanoscale. This size reduction is accompanied by strong enhancements of the light field which can, among others, be utilised to enhance the spectroscopic footprint of molecules down to single-molecule detection, improve the efficiency of solar cells, or establish lasing on the nanoscale. When the coupling exceeds all decay channels, the system enters the strong coupling regime. In this case, hybrid light–matter modes emerge, utilisable as optical switches, in quantum networks, or as thresholdless lasers. The present work investigates plasmon–exciton coupling in gold–dye core–shell geometries and contains both fundamental insights and technical novelties. It presents a technique which reveals the anticrossing in coupled systems without manipulating the particles themselves. The method is used to investigate the relation between coupling strength and particle size. Additionally, the work demonstrates that pure extinction measurements can be insufficient when trying to assess the coupling regime. Moreover, the fundamental quantum electrodynamic effect of vacuum-induced saturation is introduced. This effect causes the vacuum fluctuations to diminish the polarisability of molecules and has not yet been considered in the plasmonic context.
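The anticrossing of hybrid light–matter modes mentioned above is commonly described by a coupled-oscillator model; the following sketch (a generic textbook model with invented parameter values, not the specific model of the articles) computes the upper and lower polariton branches and recovers the Rabi splitting 2g at zero detuning:

```python
# Coupled-oscillator model of plasmon-exciton hybridisation: diagonalising
# the 2x2 Hamiltonian [[E_pl, g], [g, E_ex]] gives the polariton branches.
import numpy as np

def polariton_energies(E_pl: float, E_ex: float, g: float):
    """Lower and upper polariton energies (same units as the inputs)."""
    mean, det = (E_pl + E_ex) / 2, (E_pl - E_ex) / 2
    split = np.sqrt(g**2 + det**2)
    return mean - split, mean + split

E_ex, g = 2.1, 0.05                      # exciton energy and coupling (eV), invented
detunings = np.linspace(-0.3, 0.3, 7)    # tune the plasmon across the exciton
branches = [polariton_energies(E_ex + d, E_ex, g) for d in detunings]
gaps = [up - lo for lo, up in branches]
print(min(gaps))  # smallest gap = 2g = 0.1 eV, at zero detuning
```

Plotting the two branches against detuning yields the characteristic avoided crossing; strong coupling requires this minimum splitting to exceed the plasmon and exciton linewidths.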
The work additionally discusses the reaction of gold nanoparticles to optical heating. Such knowledge is of great importance for all potential optical applications utilising plasmonic nanoparticles since optical excitation always generates heat. This heat can induce a change in the optical properties, but also mechanical changes up to melting can occur. Here, the change of spectra in coupled plasmon–exciton particles is discussed and explained with a precise model. Moreover, the work discusses the behaviour of gold nanotriangles exposed to optical heating. In a pump–probe measurement, X-ray probe pulses directly monitored the particles’ breathing modes. In another experiment, the triangles were exposed to cw laser radiation with varying intensities and illumination areas. X-ray diffraction directly measured the particles’ temperature. Particle melting was investigated with surface enhanced Raman spectroscopy and SEM imaging demonstrating that larger illumination areas can cause melting at lower intensities. An elaborate methodological and theoretical introduction precedes the articles. This way, also readers without specialist’s knowledge get a concise and detailed overview of the theory and methods used in the articles. I introduce localised plasmons in metal nanoparticles of different shapes. For this work, the plasmons were mostly coupled to excitons in J-aggregates. Therefore, I discuss these aggregates of organic dyes with sharp and intense resonances and establish an understanding of the coupling between the two systems. For ab initio simulations of the coupled systems, models for the systems’ permittivites are presented, too. Moreover, the route to the sample fabrication – the dye coating of gold nanoparticles, their subsequent deposition on substrates, and the covering with polyelectrolytes – is presented together with the measurement methods that were used for the articles.
The field of gamma-ray astronomy opened a new window into the non-thermal universe that allows studying the acceleration sites of cosmic rays and the role of cosmic rays in evolutionary processes in galaxies. The detection of almost one hundred Galactic very-high-energy (VHE: 0.1-100 TeV) gamma-ray sources in the Milky Way demonstrates that particle acceleration up to tens of TeV energies is a common phenomenon. Furthermore, the detection of VHE gamma rays from other galaxies has confirmed that cosmic rays are not exclusively accelerated in the Milky Way. The rapid development of gamma-ray astronomy in the past two decades has led to a transition from the detection and study of individual sources to source population studies. To answer the question of whether the VHE gamma-ray source population of the Milky Way is unique, observations of galaxies for which individual sources can be resolved are required. Such galaxies are the Magellanic Clouds, two satellite galaxies of the Milky Way, which have been surveyed by the H.E.S.S. experiment in the last decade. In this thesis, data from a total of 450 hours of H.E.S.S. observations towards the Large Magellanic Cloud (LMC) and the Small Magellanic Cloud (SMC) are presented. During the analysis of the data sets, special emphasis is put on the evaluation of the systematic uncertainties of the experiment in order to assure an unbiased flux estimation of the potential VHE gamma-ray sources of the Magellanic Clouds. A detailed analysis of the survey data revealed the detection of the gamma-ray binary LMC P3, the most powerful gamma-ray binary known so far, which is located in the LMC and thus increases the number of known VHE gamma-ray sources in the LMC to four. No other VHE gamma-ray source is detected in the Magellanic Clouds, and integral flux upper limits are estimated. These flux upper limits are used to perform a source population study based on known VHE source classes and existing multi-wavelength catalogues.
A comparison of the source populations of the Magellanic Clouds and the Milky Way revealed that no other source in the Magellanic Clouds is as bright as the most luminous VHE gamma-ray source in the LMC, the pulsar wind nebula N 157B, and that one-third of the source population of the Magellanic Clouds is less luminous than the other known VHE gamma-ray sources in the LMC. Only for a few sources do the flux upper limits constrain luminosities at the level of Galactic VHE sources, which are more than one order of magnitude fainter than the detected sources in the LMC. Based on the flux upper limits, differences between the TeV source populations of the Magellanic Clouds and the Milky Way, as well as the importance of the source environments, are discussed.
The separation of advertising and programming is considered the "Magna Carta" of media law. In media practice, however, different rules seem to apply: there, advertising is to be embedded into editorial programming as inconspicuously as possible in order to communicate successfully with potential buyers. Ever new programme-integrated advertising formats are emerging that design programmes to showcase branded products. This contradiction between law and practice forms the background of this thesis. Its focus is an examination of the legal concept of surreptitious advertising (Schleichwerbung) in national media law. The Interstate Broadcasting Treaty (Rundfunkstaatsvertrag) constitutes the decisive legal basis for defining this prohibited programme-integrated form of advertising on television and the internet. Surreptitious advertising plays a particularly important role in connection with influencer marketing, and the thesis therefore clarifies what advertisers must observe in social media. Furthermore, the currency and appropriateness of today's rules are analysed. The thesis was awarded the Wolf-Rüdiger-Bub Prize of the Verein der Freunde und Förderer der Juristischen Fakultät der Universität Potsdam.
The development of methods such as super-resolution microscopy (Nobel Prize in Chemistry, 2014) and multi-scale computer modelling (Nobel Prize in Chemistry, 2013) has provided scientists with powerful tools to study microscopic systems. Sub-micron particles or even fluorescently labelled single molecules can now be tracked for long times in a variety of systems, such as living cells, biological membranes and colloidal solutions, at spatial and temporal resolutions previously inaccessible. Parallel to such single-particle tracking experiments, super-computing techniques enable simulations of large atomistic or coarse-grained systems, such as biologically relevant membranes or proteins, from picoseconds to seconds, generating large volumes of data. These have led to an unprecedented rise in the number of reported cases of anomalous diffusion, wherein the characteristic features of Brownian motion, namely the linear growth of the mean squared displacement with time and the Gaussian form of the probability density function (PDF) to find a particle at a given position at some fixed time, are routinely violated. This presents a big challenge in identifying the underlying stochastic process and in estimating the corresponding parameters of the process to completely describe the observed behaviour. Finding the correct physical mechanism which leads to the observed dynamics is of paramount importance, for example, to understand the first-arrival time of transcription factors which govern gene regulation, or the survival probability of a pathogen in a biological cell after drug administration. Statistical physics provides useful methods that can be applied to extract such vital information. This cumulative dissertation, based on five publications, focuses on the development, implementation and application of such tools, with special emphasis on Bayesian inference and large deviation theory.
Together with the implementation of Bayesian model comparison and parameter estimation methods for models of diffusion, complementary tools based on different observables and on large deviation theory are developed to classify stochastic processes and gather pivotal information. Bayesian analysis of data on micron-sized particles traced in mucin hydrogels at different pH conditions unveiled several interesting features; for example, we gained insight into how, in going from basic to acidic pH, the hydrogel becomes more heterogeneous and phase separation can set in, leading to the observed non-ergodicity (non-equivalence of time and ensemble averages) and non-Gaussian PDF. With the analysis based on large deviation theory we could detect, for instance, non-Gaussianity in the seemingly Brownian diffusion of beads in aqueous solution, anisotropic motion of the beads in mucin at neutral pH conditions, and short-time correlations in climate data. Thus, through the application of the developed methods to biological and meteorological datasets, crucial information is garnered about the underlying stochastic processes, and significant insights are obtained into the physical nature of these systems.
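The non-ergodicity mentioned above, i.e. the non-equivalence of time and ensemble averages, can be made concrete with a minimal, illustrative sketch (simulated data, not the thesis's actual analysis pipeline): for ordinary Brownian motion the time-averaged and the ensemble-averaged mean squared displacement agree, both converging to 2·D·t, so a systematic disagreement between the two estimators flags non-ergodic dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def brownian_trajectories(n_traj, n_steps, D=1.0, dt=1.0):
    """Simulate 1D Brownian trajectories: Gaussian steps with variance 2*D*dt."""
    steps = rng.normal(0.0, np.sqrt(2.0 * D * dt), size=(n_traj, n_steps))
    return np.cumsum(steps, axis=1)

def time_averaged_msd(x, lag):
    """Time-averaged MSD of a single trajectory at a given lag (in steps)."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

trajs = brownian_trajectories(n_traj=200, n_steps=10_000)
lag = 10  # lag time in units of dt

# Time-averaged MSD, additionally averaged over the ensemble of trajectories
ta_msd = np.mean([time_averaged_msd(x, lag) for x in trajs])

# Ensemble-averaged MSD at the same absolute time t = lag * dt
# (trajs[:, lag - 1] is the position after `lag` steps)
ea_msd = np.mean(trajs[:, lag - 1] ** 2)

# For ergodic Brownian motion both estimators converge to 2 * D * t = 20
print(f"time-averaged MSD: {ta_msd:.2f}, ensemble-averaged MSD: {ea_msd:.2f}")
```

For an ergodic process the two printed values coincide up to statistical error; in the mucin experiments described above, a persistent mismatch between the two estimators is precisely the signature of non-ergodicity.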
Large-scale patterns of global land use change are very frequently accompanied by the loss of natural habitat. Assessing the consequences of habitat loss for the remaining natural and semi-natural biotopes requires the inclusion of cumulative effects at the landscape level. The interdisciplinary concept of vulnerability constitutes an appropriate assessment framework at the landscape level, though there are few examples of its application in ecological assessments. A comprehensive biotope vulnerability analysis allows the identification of areas that are most affected by landscape change and at the same time have the lowest chances of regeneration.
To this end, a series of ecological indicators were reviewed and developed. They measured spatial attributes of individual biotopes as well as some ecological and conservation characteristics of the respective resident species community. The final vulnerability index combined seven largely independent indicators, which covered exposure, sensitivity and adaptive capacity of biotopes to landscape changes. Results for biotope vulnerability were provided at the regional level. This seems to be an appropriate extent with relevance for spatial planning and designing the distribution of nature reserves.
Using the vulnerability scores calculated for the German federal state of Brandenburg, hot spots and clusters within and across the distinguished types of biotopes were analysed. Biotope types with high dependence on water availability, as well as biotopes of the open landscape containing woody plants (e.g., orchard meadows) are particularly vulnerable to landscape changes. In contrast, the majority of forest biotopes appear to be less vulnerable. Despite the appeal of such generalised statements for some biotope types, the distribution of values suggests that conservation measures for the majority of biotopes should be designed specifically for individual sites. Taken together, size, shape and spatial context of individual biotopes often had a dominant influence on the vulnerability score.
The implementation of the biotope vulnerability analysis at the regional level showed that large biotope datasets can be evaluated with a high level of detail using geoinformatics. Drawing on previous work in landscape spatial analysis, the reproducible approach relies on transparent calculations of quantitative and qualitative indicators. At the same time, it provides a synoptic overview of, and information on, the individual biotopes. It is expected to be most useful for nature conservation in combination with an understanding of the population, species and community attributes known for specific sites. The biotope vulnerability analysis facilitates a foresighted assessment of different land uses, helping to identify options for slowing habitat loss to sustainable levels. It can also be incorporated into the planning of restoration measures, guiding efforts to remedy ecological damage. Restoring any specific site could yield synergies with the conservation objectives of other sites, by enhancing the habitat network or buffering against future landscape change.
Biotope vulnerability analysis could be developed in line with other important ecological concepts, such as resilience and adaptability, further extending the broad thematic scope of the vulnerability concept. Vulnerability can increasingly serve as a common framework for the interdisciplinary research necessary to solve major societal challenges.
Unter Verschluss
(2020)
Cardiac valves are essential for the continuous and unidirectional flow of blood throughout the body. During embryonic development, their formation is strictly connected to the mechanical forces exerted by blood flow. The endocardium that lines the interior of the heart is a specialized endothelial tissue and is highly sensitive to fluid shear stress. Endocardial cells harbor a signal transduction machinery required for the translation of these forces into biochemical signaling, which strongly impacts cardiac morphogenesis and physiology. To date, we lack a solid understanding of the mechanisms by which endocardial cells sense dynamic mechanical stimuli and how they trigger different cellular responses. In the zebrafish embryo, endocardial cells at the atrioventricular canal respond to blood flow by rearranging from a monolayer to a double layer, composed of a luminal cell population subjected to blood flow and an abluminal one that is not exposed to it. These early morphological changes lead to the formation of an immature valve leaflet. While previous studies mainly focused on genes that are positively regulated by shear stress, the mechanisms regulating cell behaviors and fates in cells that lack the stimulus of blood flow are largely unknown. One key discovery of my work is that the flow-sensitive Notch receptor and Krüppel-like factor (Klf) 2, one of the best-characterized flow-regulated transcription factors, are activated by shear stress but function in two parallel signal transduction pathways. Each of these two pathways is essential for the rearrangement of atrioventricular cells into an immature double-layered valve leaflet. A second key discovery of my study is that both Notch and Klf2 signaling negatively regulate the expression of the angiogenesis receptor Vegfr3/Flt4, which becomes restricted to the abluminal endocardial cells of the valve leaflet.
Within these cells, Flt4 downregulates the expression of the cell adhesion proteins Alcam and VE-cadherin. Loss of Flt4 causes abluminal endocardial cells to ectopically express Notch, which is normally restricted to luminal cells, and impairs valve morphology. My study suggests that abluminal endocardial cells that do not experience mechanical stimuli lose Notch expression, and that this triggers expression of Flt4. In turn, Flt4 negatively regulates Notch on the abluminal side of the valve leaflet. These antagonistic signaling activities and fine-tuned gene regulatory mechanisms ultimately shape cardiac valve leaflets by inducing unique differences in the fates of endocardial cells.
The hepatokine FGF21 and the adipokine chemerin have been implicated as metabolic regulators and mediators of inter-tissue crosstalk. While FGF21 is associated with beneficial metabolic effects and is currently being tested as an emerging therapeutic for obesity and diabetes, chemerin is linked to inflammation-mediated insulin resistance. However, the dietary regulation of both organokines and their role in tissue interaction require further investigation.
The LEMBAS nutritional intervention study investigated the effects of two diets differing in their protein content in obese human subjects with non-alcoholic fatty liver disease (NAFLD). The study participants consumed hypocaloric diets containing either low (LP: 10 EN%, n = 10) or high (HP: 30 EN%, n = 9) dietary protein 3 weeks prior to bariatric surgery. Before and after the intervention the participants were anthropometrically assessed, blood samples were drawn, and hepatic fat content was determined by MRS. During bariatric surgery, paired subcutaneous and visceral adipose tissue biopsies as well as liver biopsies were collected. The aim of this thesis was to investigate circulating levels and tissue-specific regulation of (1) FGF21 and (2) chemerin in the LEMBAS cohort. The results were compared to data obtained in 92 metabolically healthy subjects with normal glucose tolerance and normal liver fat content.
(1) Serum FGF21 concentrations were elevated in the obese subjects and strongly associated with intrahepatic lipids (IHL). Accordingly, FGF21 serum concentrations increased with the severity of NAFLD as determined histologically in the liver biopsies. Though both diets were successful in reducing IHL, the effect was more pronounced in the HP group. FGF21 serum concentrations and mRNA expression were bi-directionally regulated by dietary protein, independently of metabolic improvements. Accordingly, in the healthy study subjects, serum FGF21 concentrations dropped by more than 60% in response to the HP diet. A short-term HP intervention confirmed the acute downregulation of FGF21 within 24 hours. Lastly, experiments in HepG2 cell cultures and primary murine hepatocytes showed that nitrogen metabolites (NH4Cl and glutamine) dose-dependently suppress FGF21 expression.
(2) Circulating chemerin concentrations were considerably elevated in the obese versus the lean study participants and were differently associated with markers of obesity and NAFLD in the two cohorts. The adipokine decreased in response to the hypocaloric interventions, whereas an unhealthy high-fat diet induced a rise in chemerin serum levels. In the lean subjects, mRNA expression of RARRES2, which encodes chemerin, was strongly and positively correlated with the expression of several cytokines, including MCP1, TNFα, and IL6, as well as with markers of macrophage infiltration in the subcutaneous fat depot. In the obese subjects, however, RARRES2 was not associated with any of the cytokines assessed, and the data indicated an involvement of chemerin not only in the onset but also in the resolution of inflammation. Analyses of the tissue biopsies and experiments in human primary adipocytes point towards a role of chemerin in adipogenesis, although discrepancies between the in vivo and in vitro data were detected.
Taken together, the results of this thesis demonstrate that circulating FGF21 and chemerin levels are considerably elevated in obesity and responsive to dietary interventions. FGF21 was acutely and bi-directionally regulated by dietary protein in a hepatocyte-autonomous manner. Given that both a lack of essential amino acids and an excessive nitrogen intake exert metabolic stress, FGF21 may serve as an endocrine signal of dietary protein balance. Lastly, the data revealed that chemerin is derailed in obesity and associated with obesity-related inflammation. However, future studies on chemerin should consider functional and regulatory differences between secreted and tissue-specific isoforms.
The practical driving test serves to assess and evaluate the driving competence of driving licence applicants. The conclusions drawn from this test about an applicant's level of driving competence are, in particular, also meant to support the applicant's further development. So far, applicants receive a list of the most important errors that led to failure only if they do not pass the practical driving test. For targeted further learning, however, the results of the performance assessment and evaluation need to be fed back to all novice drivers (regardless of the test result) in a pedagogically sophisticated way, in line with principles of assessment didactics.
The aim of the present work is to develop the design principles for, and a proposed implementation of, a competence-based, learning-enhancing feedback system for the practical driving test. This feedback system is to be tested in practice. In addition, an applicant survey on user satisfaction is to provide insights for further development. The development and trial process of the optimised feedback system can be divided into three project phases:
1. In the course of the optimisation work on the practical driving test, a new feedback system was developed in the first project phase, consisting of a competence-based oral debriefing and supplementary written feedback, including further learning guidance, for all applicants. On the one hand, this feedback system is intended to help novice drivers better understand the content of the performance evaluation and to enable targeted further learning. On the other hand, it is intended to motivate applicants to continue working on the identified competence deficits and thereby foster learning gains.
2. In the second project phase, the feedback system was trialled in several model regions of Germany in around 9,000 real practical driving tests. Applicants who took an optimised practical driving test in the model regions, and thus received written feedback according to the optimised specifications or an individual access code for the download area, were invited to take part in a survey. The survey primarily captured aspects of acceptance and learning effectiveness from the applicants' point of view. The aim was to examine the quality of the traffic-pedagogical design of the feedback system and its usefulness, in order to further develop the trialled feedback. The applicant survey was conducted online with a standardised questionnaire.
3. In the third project phase, the trial and survey results served to derive conclusions for the further development of the feedback system. The results of the field trial indicate that the provision of detailed written feedback on performance in the practical driving test is, overall, regarded as useful and beneficial. However, it also became clear that there is still room for improvement in the implementation. Following the trial, the written feedback was therefore comprehensively revised on the basis of the user experience gathered during the field trial, and a revised version was presented.
The outcome of this work is a feedback system developed in several steps, empirically grounded and field-tested, that enables differentiated competence feedback. In future, this comprehensive feedback will, on the one hand, provide an improved starting point for a possible repeat test; on the other hand, the strengths and weaknesses it highlights will allow applicants to use the feedback for further learning even after passing the test.
Even though most individuals know that exercising is healthy, a high percentage struggle to achieve the recommended amount of exercise. The (social-cognitive) theories commonly applied to explain exercise motivation rest on the assumption that people base their decisions mainly on rational reasoning. However, behavior is not bound to reflection alone. In recent years, the role of automaticity and affect in exercise motivation has been increasingly discussed. In this dissertation, central assumptions of the affective–reflective theory of physical inactivity and exercise (ART; Brand & Ekkekakis, 2018), an exercise-specific dual-process theory that emphasizes the role of a momentary automatic affective reaction in exercise decisions, were examined. The central aim of this dissertation was to investigate exercisers' and non-exercisers' automatic affective reactions to exercise-related stimuli (i.e., the type-1 process). In particular, the two components of the ART's type-1 process, namely automatic associations with exercise and the automatic affective valuation of exercise, were under study.
In the first publication (Schinkoeth & Antoniewicz, 2017), research on automatic (evaluative) associations with exercise was summarized and evaluated in a systematic review. The results indicated that automatic associations with exercise are relevant predictors of exercise behavior and other exercise-related variables, providing evidence for a central assumption of the ART's type-1 process. Furthermore, indirect methods seem to be suitable for assessing automatic associations. The aim of the second publication (Schinkoeth, Weymar, & Brand, 2019) was to approach the somato-affective core of the automatic valuation of exercise by analyzing the reactivity of vagal HRV while participants viewed exercise-related pictures. Results revealed that differences in exercise volume could be regressed on HRV reactivity. In light of the ART, these findings were interpreted as evidence of an inter-individual affective reaction elicited at the thought of exercise and triggered by exercise stimuli. The third publication (Schinkoeth & Brand, 2019, subm.) sought to disentangle the two components of the ART's type-1 process, automatic associations and the affective valuation of exercise, and to relate them to each other. Automatic associations with exercise were assessed with a recoding-free variant of an implicit association test (IAT). Analysis of HRV reactivity was applied to approach a somatic component of the affective valuation, and facial reactions in a facial expression (FE) task served as indicators of the valence of the automatic affective reaction. Exercise behavior was assessed via self-report. The measurement of the valuation's valence with the FE task did not work well in this study. HRV reactivity was predicted by the IAT score and also statistically predicted exercise behavior. These results thus confirm and expand upon the results of the second publication and provide empirical evidence for the type-1 process as defined in the ART.
This dissertation advances the field of exercise psychology concerning the influence of automaticity and affect on exercise motivation. Moreover, both methodical implications and theoretical extensions for the ART can be derived from the results.
Sie senden den Wandel
(2020)
It is well known what an important role media play in the consolidation, or indeed the transformation, of a society. But what happens when media operate from below, and do so in large numbers, involving many societal actors and addressing a broad audience? In Argentina, a fascinating radio landscape has formed that works collectively, participatively and progressively: the community radios. Viviana Uriona takes us on an ethnographic journey through the history of these stations, analyses how they work and searches for the reasons for their success. By the end of the book, one question no longer remains open: could what happened there succeed here in the same way?
The current thesis is focused on the properties of graphene supported by metallic substrates and specifically on the behaviour of electrons in such systems. Methods of scanning tunneling microscopy, electron diffraction and photoemission spectroscopy were applied to study the structural and electronic properties of graphene. The purpose of the first part of this work is to introduce the most relevant aspects of graphene physics and the methodical background of experimental techniques used in the current thesis.
The scientific part of this work starts with an extensive scanning tunneling microscopy study of the nanostructures that appear in Au-intercalated graphene on Ni(111). This study aimed to explore possible structural explanations for the Rashba-type spin splitting of ~100 meV experimentally observed in this system, which is much larger than predicted by theory. It was demonstrated that gold can be intercalated under graphene not only as a dense monolayer, but also in the form of well-ordered periodic arrays of nanoclusters, a structure not previously reported. Such nanocluster arrays are able to decouple graphene from the strongly interacting Ni substrate and render it quasi-free-standing, as demonstrated by our DFT study. At the same time, the calculations confirm a strong enhancement of the proximity-induced SOI in graphene supported by such nanoclusters in comparison to monolayer gold. This effect, attributed to the reduced graphene-Au distance in the case of clusters, provides a large Rashba-type spin splitting of ~60 meV.
The obtained results not only provide a possible mechanism of SOI enhancement in this particular system, but they can be also generalized for graphene on other strongly interacting substrates intercalated by nanostructures of heavy noble d metals.
Even more intriguing is the proximity of graphene to heavy sp-metals, which have been predicted to induce an intrinsic SOI and realize a spin Hall effect in graphene. Bismuth is the heaviest stable sp-metal, and its compounds demonstrate a plethora of exciting physical phenomena. This was the motivation behind the next part of the thesis, in which the structural and electronic properties of a previously unreported phase of Bi-intercalated graphene on Ir(111) were studied by means of scanning tunneling microscopy, spin- and angle-resolved photoemission spectroscopy and electron diffraction. Photoemission experiments revealed a remarkable, nearly ideal graphene band structure with strongly suppressed signatures of interaction between graphene and the Ir(111) substrate. Moreover, the characteristic moiré pattern observed for graphene on Ir(111) by electron diffraction and scanning tunneling microscopy was strongly suppressed after intercalation. The full set of experimental data shows that Bi forms a dense intercalated layer that efficiently decouples graphene from the substrate. The interaction manifests itself only in n-type charge doping (~0.4 eV) and a relatively small band gap at the Dirac point (~190 meV). The origin of this small band gap is quite intriguing, and in this work it was possible to exclude a wide range of mechanisms that could be responsible for it, such as induced intrinsic spin-orbit interaction, hybridization with substrate states and corrugation of the graphene lattice. The main origin of the band gap was attributed to A-B sublattice symmetry breaking, a conclusion supported by a careful analysis of interference effects in photoemission, which provided a band gap estimate of ~140 meV.
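The band gap discussed above can be related to the standard low-energy description of graphene with broken A-B sublattice symmetry: a mass term \(\Delta\) opens a gap in the otherwise linear Dirac dispersion (a textbook relation, quoted here only to fix notation):

```latex
E_\pm(k) = \pm \sqrt{ \left( \hbar v_F k \right)^2 + \left( \frac{\Delta}{2} \right)^2 },
```

where \(v_F\) is the Fermi velocity and \(k\) is measured from the K point. In the Bi-intercalated system, \(\Delta \approx 190\) meV from the measured dispersion (or \(\approx 140\) meV from the interference analysis), with the gapped Dirac point shifted ~0.4 eV below the Fermi level by the n-type doping.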
While the previous chapters focused on adjusting the properties of graphene through proximity to heavy metals, graphene in its own right is an excellent system in which to study various physical effects at crystal surfaces. The final part of this work is devoted to a study of surface scattering resonances by means of photoemission spectroscopy, where this effect manifests itself as a distinct modulation of the photoemission intensity. Though scattering resonances were widely studied in the past by means of electron diffraction, studies of their observation in photoemission experiments have begun to appear only recently and remain scarce.
For a comprehensive study of scattering resonances, graphene was selected as a versatile model system with adjustable properties. A theoretical and historical introduction to the topic of scattering resonances is followed by a detailed description of the unusual features observed in the photoemission spectra obtained in this work; finally, the equivalence between these features and scattering resonances is demonstrated. The photoemission results are in good qualitative agreement with the existing theory, as verified by our calculations in the framework of the interference model. This simple model gives a suitable explanation for the general experimental observations.
The possibilities of engineering the scattering resonances were also explored. A systematic study of graphene on a wide range of substrates revealed that the energy position of the resonances is directly related to the magnitude of charge transfer between graphene and the substrate. Moreover, it was demonstrated that the scattering resonances in graphene on Ir(111) can be suppressed by nanopatterning, either with a superlattice of Ir nanoclusters or with atomic hydrogen. These effects were attributed to strong local variations of the work function and/or the destruction of the long-range order of the graphene lattice. The tunability of scattering resonances could be exploited in graphene-based optoelectronic devices. Moreover, the results of this study expand the general understanding of the phenomenon of scattering resonances and are applicable to many other materials besides graphene.
What happens when distinct linguistic consciousnesses, besides being separated by era, by geographical area of origin or by social differentiation, and by different linguistic dimensions, also belong to different semiotic domains? This is what happens every time we communicate online: digital interaction is the hybrid communicative setting par excellence, in which the mixing of different languages is overlaid by the mixing of different codes. Starting from the premise that it is new expressive needs and new communicative situations that drive linguistic innovation, it seems worthwhile to take into account the prominence of the visual, and more generally multimodal, repertoire in the spontaneous use of new media, and to observe that the particular meaning-making strategies currently at work can no longer do without these further dimensions. Their weight in the digital use of language should be kept in mind in order to approach all the related novelties without prejudice. A centrally important role in approaching verbal language on the Internet is played by the indexical function of language which, combined with a shared reference archive of world knowledge, triggers a new kind of inference in the recipient. Conversation through social networks in fact allows actions that are not necessarily present in face-to-face exchange but are peculiar to Facebook, Twitter, G+, Instagram, Flickr and social networks in general: sharing multimedia material of various kinds, the option of retrieving the messages on a specific topic, and the possibility of annotating it. Multimedia material thus becomes, at one and the same time, an integral part of communication and a mode of expression, the focus of discourse and a shared metaphorical language.
This research investigates how different, and apparently distant, fields of research can interact productively with the scientific landscape of the sciences of language, image and communication, arriving at the formulation of an updated model of the linguistic hybridisation that characterises online communication.
In recent years, a substantial number of psycholinguistic studies and of studies on acquired language impairments have investigated the case of morphologically complex words. These have provided evidence for what is known as ‘morphological decomposition’, i.e. a mechanism that decomposes complex words into their constituent morphemes during online processing. This is believed to be a fundamental, possibly universal mechanism of morphological processing, operating irrespective of a word’s specific properties.
However, current accounts of morphological decomposition are mostly based on evidence from suffixed and compound words, while prefixed words have been comparatively neglected. At the same time, it has been consistently observed that, across languages, prefixed words are less widespread than suffixed words. This cross-linguistic preference for suffixing morphology has been claimed to be grounded in language processing and language learning mechanisms. This would predict differences in how prefixed words are processed, and therefore also in how they are affected in language impairments, challenging the predictions of the major accounts of morphological decomposition.
Against this background, the present thesis aims at reducing the gap between accounts of morphological decomposition and accounts of the suffixing preference by providing a thorough empirical investigation of prefixed words. Prefixed words are examined in three different domains: (i) visual word processing in native speakers; (ii) visual word processing in non-native speakers; (iii) acquired morphological impairments. The processing studies employ the masked priming paradigm, tapping into early stages of visual word recognition. The studies on morphological impairments, by contrast, investigate the errors produced in reading-aloud tasks.
As for native processing, the present work first focuses on derivation (Publication I), specifically investigating whether German prefixed derived words, both lexically restricted (e.g. inaktiv ‘inactive’) and unrestricted (e.g. unsauber ‘unclean’) can be efficiently decomposed. I then present a second study (Publication II) on a Bantu language, Setswana, which offers the unique opportunity of testing inflectional prefixes, and directly comparing priming with prefixed inflected primes (e.g. dikgeleke ‘experts’) to priming with prefixed derived primes (e.g. bokgeleke ‘talent’). With regard to non-native processing (Publication I), the priming effects obtained from the lexically restricted and unrestricted prefixed derivations in native speakers are additionally compared to the priming effects obtained in a group of non-native speakers of German. Finally, in the two studies on acquired morphological impairments, the thesis investigates whether prefixed derived words yield different error patterns than suffixed derived words (Publication III and IV).
For native speakers, the results show evidence for morphological decomposition of both types of prefixed words, i.e. lexically unrestricted and restricted derivations, as well as of prefixed inflected words. Furthermore, non-native speakers are also found to efficiently decompose prefixed derived words, with results parallel to those of the native speakers. I therefore conclude that, for the early stages of visual word recognition, the relative position of stem and affix in prefixed versus suffixed words does not affect how efficiently complex words are decomposed, in either native or non-native processing. In the studies on acquired language impairments, by contrast, prefixes are consistently found to be more impaired than suffixes. This is explained in terms of a learnability disadvantage for prefixed words, which may cause weaker representations of the information encoded in affixes when these precede the stem (prefixes) as compared to when they follow it (suffixes). Based on the impairment profiles of the individual participants and on the nature of the task, this dissociation is assumed to emerge from later processing stages than those tapped into by masked priming. I therefore conclude that the different characteristics of prefixed and suffixed words do come into play at later processing stages, during which the lexical-semantic information contained in the different constituent morphemes is processed.
The findings presented in the four manuscripts significantly contribute to our current understanding of the mechanisms involved in processing prefixed words. Crucially, the thesis constrains the processing disadvantage for prefixed words to later processing stages, thereby suggesting that theories trying to establish links between language universals and processing mechanisms should more carefully consider the different stages involved in language processing and what factors are relevant for each specific stage.
After endosymbiosis, chloroplasts lost most of their genome. Many former endosymbiotic genes are now nucleus-encoded and their products are re-imported post-translationally. Consequently, photosynthetic complexes are built of nucleus- and plastid-encoded subunits in a well-defined stoichiometry. In Chlamydomonas, the translation of chloroplast-encoded photosynthetic core subunits is feedback-regulated by the assembly state of the complexes they reside in. This process is called Control by Epistasy of Synthesis (CES) and enables the efficient production of photosynthetic core subunits in stoichiometric amounts. In chloroplasts of embryophytes, only Rubisco subunits have been shown to be feedback-regulated, which raises the question of whether there is additional CES regulation in embryophytes. To broadly answer this question, I analyzed chloroplast gene expression in tobacco and Arabidopsis mutants with assembly defects for each photosynthetic complex. My results (i) confirmed CES within Rubisco and hint at potential translational feedback regulation in the synthesis of (ii) cytochrome b6f (Cyt b6f) and (iii) photosystem II (PSII) subunits. This work suggests a CES network in PSII that links psbD, psbA, psbB, psbE, and potentially psbH expression by a feedback mechanism that at least partially differs from that described in Chlamydomonas. Intriguingly, in the Cyt b6f complex, a positive feedback regulation that coordinates the synthesis of PetA and PetB was observed, which had not previously been reported in Chlamydomonas. No evidence for CES interactions was found in the expression of NDH and ATP synthase subunits of embryophytes. Altogether, this work provides solid evidence for novel assembly-dependent feedback regulation mechanisms controlling the expression of photosynthetic genes in chloroplasts of embryophytes.
In order to obtain a comprehensive inventory of the rbcL and psbA RNA-binding proteomes (including factors that regulate their expression, especially factors involved in CES), an aptamer-based affinity purification method was adapted and refined for the specific purification of these transcripts from tobacco chloroplasts. To this end, three different aptamers (MS2, Sephadex, and streptavidin binding) were stably introduced into the 3’ UTRs of psbA and rbcL by chloroplast transformation. RNA aptamer-based purification and subsequent chip analysis (RAP Chip) demonstrated a strong enrichment of psbA and rbcL transcripts, and ongoing mass spectrometry analyses are expected to reveal potential regulatory factors. Furthermore, the suborganellar localization of MS2-tagged psbA and rbcL transcripts was analyzed by a combined affinity, immunology, and electron microscopy approach, demonstrating the potential of aptamer tags for examining the spatial distribution of chloroplast transcripts.
Using biochemical and biotechnological approaches, the aim of this work was to understand the mechanism of protein-glucan interactions in the regulation and control of starch degradation. Although starch degradation starts with a phosphorylation process, the mechanisms by which this process controls and adjusts starch degradation are not yet fully understood. Phosphorylation is carried out by two dikinases, α-glucan, water dikinase (GWD) and phosphoglucan, water dikinase (PWD). GWD and PWD phosphorylate the starch granule surface, thereby stimulating starch degradation by hydrolytic enzymes. Despite these important roles of GWD and PWD, the biochemical processes by which these enzymes regulate and adjust the rate of phosphate incorporation into starch during degradation have so far not been understood. Recently, some proteins were found to be associated with the starch granule. Two of these proteins are Early Starvation Protein 1 (ESV1) and its homologue Like-Early Starvation Protein 1 (LESV). Both were proposed to be involved in the control of starch degradation, but their function has remained unclear until now. To understand how ESV1- and LESV-glucan interactions are regulated and affect starch breakdown, the influence of the ESV1 and LESV proteins on the phosphorylating enzymes GWD and PWD and on the hydrolyzing enzymes ISA, BAM, and AMY was analyzed. In addition, the analyses localized LESV and ESV1 to the chloroplast stroma of Arabidopsis. Mass spectrometry data identified the ESV1 and LESV proteins as products of the At1g42430 and At3g55760 genes, with predicted masses of ~50 kDa and ~66 kDa, respectively. The ChloroP program predicted that ESV1 lacks a chloroplast transit peptide, whereas for LESV it predicted the first 56 amino acids of the N-terminal region as a chloroplast transit peptide. Usually, the transit peptide is processed during transport of the proteins into plastids.
Given that this processing is critical, two forms each of ESV1 and LESV were generated and purified: a full-length form and a truncated form lacking the transit peptide, namely ESV1 and tESV1, and LESV and tLESV, respectively. Both protein forms were included in the analysis assays, but only slight differences in glucan binding and protein action were observed between ESV1 and tESV1, while no differences in glucan binding or in the effect on GWD and PWD action were observed between LESV and tLESV. The results revealed that the presence of the N-terminus does not substantially alter the action of ESV1 or LESV. Therefore, only the data for the ESV1 and tLESV forms were used to explain the function of the two proteins.
The analyses revealed that the LESV and ESV1 proteins bind strongly to the starch granule surface. Furthermore, after incubation with starches, neither protein was completely released by washing the granules with 2% [w/v] SDS, indicating binding to deeper layers of the granule surface. This finding is supported by the observation that both proteins still bound to starches after the free glucan chains had been removed from the surface by the action of ISA and BAM. Although both proteins are capable of binding to the starch structure, only LESV showed binding to amylose, whereas no such binding was observed for ESV1. The alteration of glucan structures at the starch granule surface is essential for the incorporation of phosphate into the starch granule, as phosphorylation of starch by GWD and PWD increased after removal of the free glucan chains by ISA. Furthermore, PWD proved capable of phosphorylating starch without prior phosphorylation by GWD.
Biochemical studies of the protein-glucan interactions of LESV or ESV1 with different types of starch revealed a potentially important mechanism for regulating and adjusting the phosphorylation process: the binding of LESV and ESV1 alters the glucan structures of starches and thereby modulates the action of the dikinases (GWD and PWD), making them better able to control the rate of starch degradation. ESV1 showed an antagonistic effect on PWD action, as PWD action decreased without prior phosphorylation by GWD and increased after prior phosphorylation by GWD (Chapter 4); however, PWD showed a significant reduction in its action, with or without prior phosphorylation by GWD, in the presence of ESV1, whether alone or together with LESV (Chapter 5). The presence of LESV and ESV1 together had the same effect on the phosphorylation process as each protein alone, making it difficult to distinguish their specific functions. Moreover, no interactions were detected between LESV and ESV1, between either of them and GWD or PWD, or between GWD and PWD, indicating that these proteins act independently. It was also observed that the alteration of the starch structure by LESV and ESV1 plays a role in adjusting starch degradation rates not only by affecting the dikinases but also by affecting some of the hydrolyzing enzymes, since the presence of LESV and ESV1 was found to reduce, but not abolish, the action of BAM.
In 1960, Yamabe claimed to have proven the following statement: On every compact Riemannian manifold (M,g) of dimension n ≥ 3 there exists a metric, conformally equivalent to g, with constant scalar curvature. This statement is equivalent to the existence of a solution of a certain semilinear elliptic differential equation, the Yamabe equation. In 1968, Trudinger found an error in the proof, and as a consequence many mathematicians worked on what became known as the Yamabe problem. In the 1980s, the works of Trudinger, Aubin, and Schoen showed that the statement is indeed true. This has many advantages; for example, when analyzing conformally invariant partial differential equations on compact Riemannian manifolds, the scalar curvature may be assumed to be constant.
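For reference, the Yamabe equation mentioned here has the standard form: writing the conformally equivalent metric as $\tilde g = u^{4/(n-2)}\, g$ for a positive smooth function $u$, constant scalar curvature for $\tilde g$ is equivalent to

```latex
-\frac{4(n-1)}{n-2}\,\Delta_g u + \operatorname{scal}_g\, u
  \;=\; \operatorname{scal}_{\tilde g}\, u^{\frac{n+2}{n-2}},
  \qquad u > 0,
```

where $\operatorname{scal}_{\tilde g}$ is the prescribed constant. On a Lorentzian manifold, the Laplace operator $\Delta_g$ is replaced by the wave operator $\Box_g$, which is why the Lorentzian analogue of the problem leads to a semilinear wave equation.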
The question now arises whether the corresponding statement also holds on Lorentzian manifolds. The Lorentzian Yamabe problem thus reads: Given a spatially compact globally hyperbolic Lorentzian manifold (M,g), does there exist a metric, conformally equivalent to g, with constant scalar curvature? The aim of this thesis is to investigate this problem.
The Yamabe equation arising from this question is a semilinear wave equation whose solution is a positive smooth function from which the conformal factor is obtained. In order to keep the foundations needed for the treatment of the Yamabe problem as general as possible, the first part of this thesis develops the local existence theory for arbitrary semilinear wave equations for sections of vector bundles in the framework of a Cauchy problem. To this end, the inverse function theorem for Banach spaces is applied in order to derive existence results for semilinear wave equations from already existing existence results for linear wave equations. It is proven that, if the nonlinearity satisfies certain conditions, an almost global-in-time solution of the Cauchy problem exists for small initial data, as well as a local-in-time solution for arbitrary initial data.
The second part of the thesis deals with the Yamabe equation on globally hyperbolic Lorentzian manifolds. First, it is shown that the nonlinearity of the Yamabe equation satisfies the conditions required in the first part, so that, if the scalar curvature of the given metric is close to a constant, small initial data exist for which the Yamabe equation has an almost global-in-time solution. Using energy estimates, it is then shown for 4-dimensional globally hyperbolic Lorentzian manifolds that, under the assumption that the constant scalar curvature of the conformally equivalent metric is nonpositive, a global-in-time solution of the Yamabe equation exists, which is, however, not necessarily positive. Moreover, it is shown that, if the H2-norm of the scalar curvature with respect to the given metric is bounded in a certain way on a compact time interval, the solution is positive on this time interval; here it is again assumed that the constant scalar curvature of the conformally equivalent metric is nonpositive. If, in addition, the scalar curvature with respect to the given metric is negative and the metric satisfies certain conditions, then the solution is positive for all times in a compact time interval on which the gradient of the scalar curvature is bounded in a certain way. In both cases, under the stated conditions, the existence of a global-in-time positive solution follows if M = I x Σ for a bounded open interval I. Finally, for M = R x Σ, an example of the nonexistence of a global positive solution is given.
Ferruginous conditions were a prominent feature of the oceans throughout the Precambrian Eons and thus throughout much of Earth’s history. Organic matter mineralization and diagenesis within the ferruginous sediments deposited from Earth’s early oceans likely played a key role in global biogeochemical cycling. Knowledge of organic matter mineralization in ferruginous sediments, however, remains almost entirely conceptual, as modern analogue environments are extremely rare and, to date, largely unstudied. Lake Towuti on the island of Sulawesi, Indonesia, is such an analogue environment, and the purpose of this PhD project was to investigate the rates and pathways of organic matter mineralization in its ferruginous sediments.
Lake Towuti is the largest tectonic lake in Southeast Asia and is hosted in the mafic and ultramafic rocks of the East Sulawesi Ophiolite. It has a maximum water depth of 203 m and is weakly thermally stratified. A well-oxygenated surface layer extends to 70 m depth, while waters below 130 m are persistently anoxic. Intensive weathering of the ultramafic catchment feeds the lake with large amounts of iron (oxy)hydroxides, while the runoff contains little sulfate, leading to sulfate-poor (< 20 µM) lake water and anoxic ferruginous conditions below 130 m. Such conditions are analogous to the ferruginous water columns that persisted throughout much of the Archean and Proterozoic eons. Short (< 35 cm) sediment cores were collected from different water depths corresponding to different bottom water redox conditions. In addition, a drilling campaign of the International Continental Scientific Drilling Program (ICDP) retrieved a 114 m long sediment core dedicated to geomicrobiological investigations from a water depth of 153 m, well below the depth of oxygen penetration at the time of sampling. Samples collected from these sediment cores form the foundation of this thesis and were used to perform a suite of biogeochemical and microbiological analyses.
Geomicrobiological investigations depend on uncontaminated samples. However, exploration of subsurface environments relies on drilling, which requires the use of a drilling fluid, and infiltration of drilling fluid during drilling cannot be avoided. Thus, in order to trace contamination of the sediment core and to identify uncontaminated samples for further analyses, a simple and inexpensive technique for assessing contamination during drilling operations was developed and applied during the ICDP drilling campaign. This approach uses an aqueous fluorescent pigment dispersion, commonly used in the paint industry, as a particulate tracer. It has the same physical properties as conventionally used particulate tracers, but its price is nearly four orders of magnitude lower, solving the main problem of particulate tracer approaches. The approach requires only a minimum of equipment and allows for rapid contamination assessment, potentially even directly on site, while its sensitivity is in the range of already established approaches. Contaminated samples in the drill core were identified and excluded from further geomicrobiological investigations.
Biogeochemical analyses of short sediment cores showed that Lake Towuti’s sediments are strongly depleted in electron acceptors commonly used in microbial organic matter mineralization (i.e. oxygen, nitrate, sulfate). Still, the sediments harbor high microbial cell densities, which are a function of the redox conditions of Lake Towuti’s bottom water. At shallow water depths, bottom water oxygenation leads to a higher input of labile organic matter and electron acceptors like sulfate and iron, which promotes a higher microbial abundance. Microbial analyses showed that Lake Towuti’s surface sediments are inhabited by a versatile microbial community with the potential to perform metabolisms related to iron and sulfate reduction, fermentation, and methanogenesis.
Biogeochemical investigations of the upper 12 m of the 114 m sediment core showed that Lake Towuti’s sediment is extremely rich in iron, with total concentrations of up to 2500 µmol cm-3 (20 wt. %), which makes it the natural sedimentary environment with the highest total iron concentrations studied to date. In the complete or near absence of oxygen, nitrate and sulfate, organic matter mineralization in ferruginous sediments would be expected to proceed anaerobically via the energetically most favorable terminal electron acceptor available - in this case ferric iron. Astonishingly, however, methanogenesis is the dominant (>85 %) organic matter mineralization process in Lake Towuti’s sediment. Reactive ferric iron, known to be available for microbial iron reduction, is highly abundant throughout the upper 12 m and has thus remained stable for at least 60,000 years. The methane produced is not oxidized anaerobically and diffuses out of the sediment into the water column. The proclivity towards methanogenesis in these very iron-rich modern sediments implies that methanogenesis may have played a more important role in organic matter mineralization throughout the Precambrian than previously thought and thus could have been a key contributor to Earth’s early climate dynamics.
Over the whole sequence of the 114 m long sediment core, siderites were identified and characterized using high-resolution microscopic and spectroscopic imaging together with microchemical and geochemical analyses. The data show early diagenetic growth of siderite crystals in response to sedimentary organic matter mineralization. Microchemical zoning was identified in all siderite crystals. Siderite thus likely forms during diagenesis through growth on pre-existing primary phases, and the mineralogical and chemical features of these siderites are a function of changes in the redox conditions of the pore water and sediment over time. Identification of microchemical zoning in ancient siderites deposited in the Precambrian may thus also be used to infer siderite growth histories in ancient sedimentary rocks, including sedimentary iron formations.
The healthcare sector in Germany faces numerous changes in its environment. At the same time, the hospital landscape lacks physicians: physician hours in particular are in short supply while the demand for medical services is rising. Demographic change, emigration abroad (e.g. to Switzerland), and extrapolations such as the feminization of the medical profession as well as changing values and findings from generational research reinforce this development. In addition, the hospital as a workplace is increasingly perceived as unattractive by future employee cohorts. Junior physicians increasingly decide, already during their studies, against curative medicine or against clinical medical work. A virulent recruitment problem is emerging that needs to be solved.
The research objective is to develop a holistic strategic approach with a market-oriented accentuation for the hospital as an employer. The hospital is defined as a knowledge- and competence-intensive service organization. Employer branding constitutes the frame of reference of market-oriented human resource management. A research-theoretical foundation is integrated via strategic management approaches: employer branding bridges the market-based view and the competence-based view and is brand management in the context of strategic human resource management. This thesis presents a holistic frame of reference that depicts the interdependencies of employer branding. Its centerpiece is the employer value proposition, which is based on the brand identity of the organization. Among other things, the employer branding approach aims to ensure a preference-generating effect in the process of workplace choice.
The objectives and research interests require a broad research approach that combines qualitative and quantitative methods. The aim of the guideline-based in-depth interviews (exploratory study design) is to identify existing and yet-to-be-developed competence strengths of hospital organizations. The sample is a typical case sampling (N=12). Deficient findings on which values and attractiveness factors are pronounced among prospective physicians are confirmed. Across the hospital landscape, a fragmentary and reactive approach is identified, which hampers the successful recruitment of qualified hospital staff.
Through quantitative market research, the demands of the market – that is, future medical hospital staff – are analyzed on a factor-analytic basis. The sample (N=475) is isomorphic. The process of attitude formation is placed within the neo-behaviorist explanatory model of buyer behavior. The compatibility of work, career, and family as well as workplace childcare are identified as key components.
The most important work values with respect to an attractive employer are reliability, responsibility, and respect. These components have communicative value and differentiating power. Finally, applicants evaluate a hospital more positively in the process of workplace choice the more the values of the potential employee match the value profile of the organization (person-organization fit).
Hospital organizations that implement the employer branding approach and embrace it as an opportunity to define their strengths, their advantages, and their offerings as an employer equip themselves for the intensifying competition for young physicians. After all, employer branding as a market-oriented strategic approach releases forces, both internally and externally, that yield differentiation advantages over other employers. In doing so, employer branding has positive effects on the human resources area along the value chain and mitigates the overall economic problem.
In nature, bacteria are found to reside in multicellular communities encased in self-produced extracellular matrices. Indeed, biofilms are the default lifestyle of the bacteria that cause persistent infections in humans. The biofilm assembly protects bacterial cells from desiccation and limits the effectiveness of antimicrobial treatments. A myriad of biomolecules in the extracellular matrix, including proteins, exopolysaccharides, lipids, extracellular DNA and others, form a dense and viscoelastic three-dimensional network. Many studies have emphasized that destabilizing the mechanical integrity of biofilm architectures potentially eliminates this protective shield and renders bacteria more susceptible to the immune system and antibiotics. Pantoea stewartii is a plant pathogen which infects monocotyledons such as maize and sweet corn. These bacteria produce dense biofilms in the xylem of infected plants, which cause wilting of plants and crops. Stewartan is an exopolysaccharide which is produced by Pantoea stewartii and secreted as the major component of the extracellular matrix. It consists of heptasaccharide repeating units with a high degree of polymerization (2-4 MDa). In this work, the physicochemical properties of stewartan were investigated to understand the contributions of this exopolysaccharide to the mechanical integrity and cohesiveness of Pantoea stewartii biofilms. To this end, a coarse-grained model of stewartan was developed with computational techniques to capture its three-dimensional structural features. Coarse-grained molecular dynamics simulations revealed that the exopolysaccharide forms a hydrogel in which the exopolysaccharide chains arrange into a three-dimensional mesh-like network. Simulations at different concentrations were used to investigate the influence of the water content on network formation.
Stewartan was further purified from 72 h grown Pantoea stewartii biofilms, and the diffusion of bacteriophages and differently-sized nanoparticles (ranging from 1.1 to 193 nm in diameter) was analyzed in reconstituted stewartan solutions. Fluorescence correlation spectroscopy and single-particle tracking revealed that the stewartan network impeded the mobility of a set of differently-sized fluorescent particles in a size-dependent manner. Diffusion of these particles became more anomalous with increasing stewartan concentration, as characterized by fitting the diffusion data to an anomalous diffusion model. Further bulk and microrheological experiments were used to analyze the transitions in stewartan fluid behavior, and stewartan chain entanglements were described. Moreover, it was noticed that a small fraction of bacteriophage particles was trapped in small pores, deviating from classical random walks, which highlighted the structural heterogeneity of the stewartan network. Additionally, the mobility of fluorescent particles
also depended on the charge of the stewartan exopolysaccharide, and a molecular-sieve model of the stewartan network was proposed. The structural features of the stewartan polymers reported here were used to provide a detailed description of the mechanical properties of typical glycan-based biofilms such as that of Pantoea stewartii.
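The anomalous diffusion analysis mentioned above rests on fitting mean-squared displacements to a power law, MSD(t) = 4·K·t^α for 2D tracks, where α < 1 signals the subdiffusive behavior observed at higher stewartan concentrations. A minimal sketch of such a fit (function name and synthetic data are illustrative, not taken from the thesis):

```python
import numpy as np

def fit_anomalous_diffusion(t, msd):
    """Fit MSD(t) = 4 * K * t**alpha by linear regression in log-log space.

    For 2D single-particle tracks, alpha = 1 corresponds to normal Brownian
    diffusion and alpha < 1 to subdiffusion. Returns (K, alpha).
    """
    # log(MSD) = log(4K) + alpha * log(t): a straight line in log-log space
    slope, intercept = np.polyfit(np.log(t), np.log(msd), 1)
    return np.exp(intercept) / 4.0, slope

# Illustration with synthetic subdiffusive data (K = 0.5, alpha = 0.7,
# chosen for the example only)
t = np.linspace(0.1, 10.0, 50)
msd = 4 * 0.5 * t**0.7
K, alpha = fit_anomalous_diffusion(t, msd)
print(K, alpha)  # recovers K ≈ 0.5, alpha ≈ 0.7
```

In practice, the fit would be applied to time-averaged MSD curves from each tracer size and stewartan concentration, so that the decrease of α with concentration quantifies how anomalous the transport becomes.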
In addition, the mechanical properties of the biofilm architecture are permanently sensed by the embedded bacteria, and enzymatic modifications of the extracellular matrix take place in response to environmental cues. Hence, in this work the influence of enzymatic degradation of the stewartan exopolysaccharide on the overall exopolysaccharide network structure was analyzed to describe relevant physiological processes in Pantoea stewartii biofilms. The stewartan hydrolysis kinetics of the tailspike protein from the ΦEa1h bacteriophage, which naturally infects Pantoea stewartii cells, was compared to that of WceF. The latter protein is expressed from the Pantoea stewartii stewartan biosynthesis gene cluster wce I-III. Degradation of stewartan by the ΦEa1h tailspike protein was shown to be much faster than hydrolysis by WceF, although both enzymes cleave the β-D-GalIII(1→3)-α-D-GalI glycosidic linkage of the stewartan backbone. Oligosaccharide fragments produced during stewartan cleavage were analyzed by size-exclusion chromatography and capillary electrophoresis. Bioinformatic studies and the analysis of a WceF crystal structure revealed a remarkably high structural similarity between the two proteins, unveiling WceF as a bacterial tailspike-like protein. As a consequence, WceF might play a role in stewartan chain length control in Pantoea stewartii biofilms.
Secondary plant compounds and their health-promoting properties have been studied intensively from a nutritional-physiological perspective over the past two decades, and specific positive effects in the human organism have in some cases been described in great detail. Belonging to the carotenoids, the secondary plant compound lutein has moved into the focus of research, particularly for the prevention of ophthalmological diseases. This xanthophyll, synthesized exclusively by plants and some algae, is taken up into the human organism via plant foods, in particular green leafy vegetables. There it accumulates preferentially in the macular pigment of the retina of the human eye and is important for maintaining the functionality of the photoreceptor cells. With aging, a decrease in the density of the macular pigment and a degradation of lutein can be observed. The resulting destabilization of the photoreceptor cells, in connection with an altered metabolic state in the aging organism, can lead to the development of age-related macular degeneration (AMD). The pathological symptoms of this eye disease range from loss of visual acuity to irreversible blindness. Since therapeutic agents can only prevent progression, research efforts aim at finding preventive measures. The supplementation of lutein-containing preparations offers one starting point. Dietary supplements (German: Nahrungsergänzungsmittel, NEM) with lutein in various forms of application are already on the market. Limiting factors are the stability and bioavailability of lutein, which is in part expensive to purchase and of unknown purity. For this reason, the use of lutein esters, the plant storage form of lutein, in a dietary supplement would be advantageous. In addition to their naturally higher stability, lutein esters can be used sustainably and cost-effectively.
In this work, physicochemical and nutritionally relevant aspects of the product development process of a dietary supplement with lutein esters in a colloidal formulation were investigated. The so far unique application of lutein esters in a mouth spray is intended to facilitate and improve the uptake of the active compound, especially for older people. Taking the results and the nutritional-physiological evaluation into account, recommendations were to be derived, among other things, for the recipe composition of a miniemulsion (an emulsion with particle sizes <1.0 µm). An assessment of the bioavailability of the lutein esters from the developed colloidal formulations was made possible by in vitro studies on resorption and absorption availability.
In physical investigations, the basic components of the formulations were first specified. In initial active-compound-free model emulsions, selected oils as carrier phase as well as emulsifiers and solubilizers (peptizers) were physically tested for their suitability to yield a miniemulsion. The best stability and optimal miniemulsion properties were obtained using MCT oil (medium chain triglyceride) or rapeseed oil in the carrier phase together with the emulsifier Tween® 80 (Tween 80), alone or in combination with the whey protein hydrolysate Biozate® 1 (Biozate 1).
The physical investigations of the model emulsions yielded the pre-emulsions as prototypes. These contained the active compound lutein in various forms: pre-emulsions with lutein, with lutein esters, and with both lutein and lutein esters were designed, containing the emulsifier Tween 80 or its combination with Biozate 1. In the preparation of the pre-emulsions, applying the emulsification techniques of ultrasound followed by high-pressure homogenization led to the desired miniemulsions. Both emulsifiers provided optimal stabilization effects. Subsequently, the active compounds were characterized physicochemically. In particular, lutein esters from oleoresin proved stable under various storage conditions. Likewise, lutein and lutein esters were shown to be stable during short-term treatment under specific mechanical, thermal, acidic, and basic conditions; the addition of Biozate 1 provided additional protection only for lutein. Under prolonged physicochemical treatment, the active compounds incorporated into the miniemulsions underwent moderate degradation, with a striking sensitivity to the basic milieu. Within the recipe development of the dietary supplement, the recommendation was therefore to design a miniemulsion with a slightly acidic pH milieu to protect the active compound through the controlled addition of further ingredients.
In the further development process of the dietary supplement, final formulations with lutein esters as the active compound were set up. The sole use of the emulsifier Biozate 1 proved unsuitable. The remaining final formulations contained, in the oil phase, the active compound together with MCT oil or rapeseed oil as well as α-tocopherol for stabilization. The water phase consisted of the emulsifier Tween 80 or a combination of Tween 80 and Biozate 1. Additives were ascorbic acid and potassium sorbate for microbiological protection, as well as xylitol and orange flavor for sensory effects. The composition of the base recipe and the applied emulsification process yielded stable miniemulsions. Furthermore, long-term storage trials with the final formulations at 4°C showed that the required amount of lutein esters in the product was maintained. Analogous investigations of a lutein-containing, commercially available preparation, by contrast, confirmed an instability of lutein occurring even during short-term storage.
Finally, the bioavailability of lutein esters was examined through in vitro resorption and absorption studies with the pre-emulsions and final formulations. After treatment in an established in vitro digestion model, only a slight resorption availability of the lutein esters could be determined. Micellarization of the active ingredient from the designed formulations was limited, and enzymatic cleavage of the lutein esters to free lutein was observed only to a small extent; the specificity and activity of the corresponding hydrolytic lipases toward lutein esters must be rated as extremely low. Subsequent cell culture experiments with the Caco-2 cell line showed no cytotoxic effects of the relevant ingredients in the pre-emulsions. By contrast, a sensitivity toward the final formulations was observed, which should be considered in connection with irritation of the mucous membranes of the gastrointestinal tract; a less complex formulation might minimize these limitations. Final absorption studies showed that, in principle, a small uptake of primarily lutein, but also lutein monoesters, into enterocytes from miniemulsions can occur. Neither Tween 80 nor Biozate 1 had a beneficial influence on the absorption rate of lutein or lutein esters. Metabolization of the active ingredients by prior in vitro digestion increased the cellular uptake from formulations with lutein and with lutein esters alike. The observed uptake of lutein and lutein monoesters into enterocytes appears to occur via passive diffusion, although active transport cannot be excluded. Lutein diesters, by contrast, cannot enter enterocytes via micellarization and simple diffusion because of their molecular size.
Their uptake into the epithelial cells of the small intestine requires prior hydrolytic cleavage by specific lipases. This step in turn limits the effective cellular uptake of the lutein esters and represents a constraint on their bioavailability compared with free lutein.
In summary, the physicochemically stable lutein esters showed low bioavailability from colloidal formulations. Their use as a source of the secondary plant compound lutein in a dietary supplement is nevertheless recommended: combined with the intake of lutein-rich plant foods, the supplement can contribute to improving lutein status despite the expectedly low bioavailability of the lutein esters. Relevant publications have shown clear correlations between the intake of lutein ester-containing preparations and an increase in serum lutein concentration and macular pigment density in vivo. The slightly better bioavailability of free lutein must be weighed critically against its instability and high cost. As a result of this work, the commercial product Vita Culus® was designed. Looking ahead, human intervention studies with the supplement should enable a final assessment of the bioavailability of lutein esters from the preparation.
Urbanization and agricultural land use are two of the main drivers of global change, with effects on ecosystem functions and human wellbeing. Green infrastructure is a new approach in spatial planning that contributes to sustainable urban development and addresses urban challenges such as biodiversity conservation, climate change adaptation, green economy development, and social cohesion. Because research has focused mainly on open green space structures such as parks, urban forests, green buildings, and street greenery, while neglecting the spatial and functional potentials of utilizable agricultural land, this thesis aims to fill this gap.
This cumulative thesis addresses how agricultural land in urban and peri-urban landscapes can contribute to the development of urban green infrastructure as a strategy to promote sustainable urban development. To this end, several research approaches were applied. First, a quantitative, GIS-based modeling approach examined spatial potentials, addressing the heterogeneity of the peri-urban landscape that defines agricultural potentials and constraints. Second, a participatory approach was applied, drawing on stakeholder opinions to evaluate multiple urban functions and benefits. Finally, an evidence synthesis was conducted to assess the current state of evidence in support of future policy making at different levels.
The results contribute to the conceptual understanding of urban green infrastructure as a strategic spatial planning approach that incorporates inner-urban utilizable agricultural land and the agriculturally dominated landscape at the outer urban fringe. They support the proposition that linking peri-urban farmland with the green infrastructure concept can contribute to a network of multifunctional green spaces that provides multiple benefits to the urban system and successfully addresses urban challenges. Four strategies are introduced for spatial planning with the contribution of peri-urban farmland to a strategically planned multifunctional network, namely the connecting, the productive, the integrated, and the adapted way. Finally, this thesis sheds light on the opportunities that arise from integrating peri-urban farmland into the green infrastructure concept to support the transformation towards more sustainable urban development. In particular, the inherent core planning principle of multifunctionality endorses the idea of co-benefits, which are considered crucial to trigger transformative processes.
This work concludes that the linkage of peri-urban farmland with the green infrastructure concept is a promising action field for the development of new pathways for urban transformation towards sustainable urban development. Along with these outcomes, attention is drawn to limitations that remain to be addressed by future research, especially the identification of further mechanisms required to support policy integration at all levels.
The East Asian monsoons characterize the modern-day Asian climate, yet their geological history and driving mechanisms remain controversial. The southeasterly summer monsoon provides moisture, whereas the northwesterly winter monsoon sweeps up dust from the arid Asian interior to form the Chinese Loess Plateau. The onset of this loess accumulation, and therefore of the monsoons, was thought to date to 8 million years ago (Ma). In recent years, however, these loess records have been extended further back in time to the Eocene (56-34 Ma), a period characterized by significant changes in both the regional geography and the global climate. Yet the extent to which these reconfigurations drove atmospheric circulation, and whether the loess-like deposits are monsoonal, remains debated. In this thesis, I study the terrestrial deposits of the Xining Basin, previously identified as Eocene loess, to derive the paleoenvironmental evolution of the region and identify the geological processes that have shaped the Asian climate.
I review dust deposits in the geological record and conclude that these are commonly represented by a mix of both windblown and water-laid sediments, in contrast to the pure windblown material known as loess. Yet by using a combination of quartz surface morphologies, provenance characteristics and distinguishing grain-size distributions, windblown dust can be identified and quantified in a variety of settings. This has important implications for tracking aridification and dust-fluxes throughout the geological record.
Past reversals of Earth’s magnetic field are recorded in the deposits of the Xining Basin and I use these together with a dated volcanic ash layer to accurately constrain the age to the Eocene period. A combination of pollen assemblages, low dust abundances and other geochemical data indicates that the early Eocene was relatively humid suggesting an intensified summer monsoon due to the warmer greenhouse climate at this time. A subsequent shift from predominantly freshwater to salt lakes reflects a long-term aridification trend possibly driven by global cooling and the continuous uplift of the Tibetan Plateau. Superimposed on this aridification are wetter intervals reflected in more abundant lake deposits which correlate with highstands of the inland proto-Paratethys Sea. This sea covered the Eurasian continent and thereby provided additional moisture to the winter-time westerlies during the middle to late Eocene.
The long-term aridification culminated in an abrupt shift at 40 Ma, reflected by the onset of windblown dust, an increase in steppe-desert pollen, the occurrence of high-latitude orbital cycles, and northwesterly winds identified in deflated salt deposits. Together, these indicate the onset of a Siberian high atmospheric pressure system driving the East Asian winter monsoon and its dust storms, triggered by a major sea retreat from the Asian interior. These results therefore show that the proto-Paratethys Sea, though less well recognized than the Tibetan Plateau and global climate, has been a major driver in setting up the modern-day climate of Asia.
Redox signalling in plants
(2020)
Once proteins are synthesized, they can additionally be modified by post-translational modifications (PTMs). Proteins containing reactive cysteine thiols, stabilized in their deprotonated form as thiolates (RS-) by their local environment, serve as redox sensors by undergoing a multitude of oxidative PTMs (Ox-PTMs). Ox-PTMs such as S-nitrosylation or the formation of inter- or intramolecular disulfide bridges induce functional changes in these proteins. Proteins containing cysteines whose thiol oxidation state regulates their function belong to the so-called redoxome. Such Ox-PTMs are controlled by site-specific cellular events that play a crucial role in protein regulation, affecting enzyme catalytic sites, ligand binding affinity, protein-protein interactions, or protein stability. Reversible protein thiol oxidation is an essential regulatory mechanism of photosynthesis, metabolism, and gene expression in all photosynthetic organisms. Studying PTMs therefore remains crucial for understanding plant adaptation to external stimuli such as fluctuating light conditions, and optimizing methods suitable for studying plant Ox-PTMs is highly important for elucidating the plant redoxome. This study focuses on thiol modifications occurring in plants and provides novel insight into the in vivo redoxome of Arabidopsis thaliana in response to light versus dark, achieved by means of a resin-assisted thiol enrichment approach. Candidates were then confirmed at the single-protein level by a differential labelling approach: thiols and disulfides were differentially labelled, and protein levels were detected by immunoblot analysis. Further analysis focused on light-reduced proteins. The enrichment approach identified many well-studied redox-regulated proteins.
Among these were fructose-1,6-bisphosphatase (FBPase) and sedoheptulose-1,7-bisphosphatase (SBPase), which have previously been described as targets of the thioredoxin system. The redox-regulated proteins identified in the current study were compared with several published, independent datasets of redox-regulated proteins in Arabidopsis leaves, roots, and mitochondria, and specifically of S-nitrosylated proteins. These proteins were excluded as potential new candidates but serve as a proof of concept that the enrichment experiments were effective. Additionally, the CSP41A and CSP41B proteins, which emerged from this study as potential targets of redox regulation, were analyzed by Ribo-Seq. The active translatome of the csp41a mutant versus wild type showed most of the significant changes at the end of the night, similar to csp41b, yet in both mutants only a few chloroplast-encoded genes were altered. Further studies of the CSP41A and CSP41B proteins are needed to reveal their functions and to elucidate the role of their redox regulation.
Water quality in river systems is of growing concern due to rising anthropogenic pressures and climate change. In recent decades, mitigation efforts have been guided by different governance frameworks (e.g., the Water Framework Directive in Europe). Despite significant improvement through relatively straightforward measures, the environmental status has likely reached a plateau. A higher spatiotemporal accuracy of catchment nitrate modeling is therefore needed to identify critical source areas of diffuse nutrient pollution (especially nitrate) and to further guide the implementation of spatially differentiated, cost-effective mitigation measures. At the same time, emerging high-frequency sensor monitoring brings the monitoring resolution to the time scales of biogeochemical processes and enables more flexible monitoring deployments under varying conditions. The newly available information offers new prospects for understanding nitrate spatiotemporal dynamics. Formulating such advanced process understanding into catchment models is critical for further model development and environmental status evaluation. This dissertation targets a comprehensive analysis of catchment and in-stream nitrate dynamics and aims to derive new insights into their spatial and temporal variability through the development of a new fully distributed model and new high-frequency data.
Firstly, a new fully distributed, process-based catchment nitrate model (the mHM-Nitrate model) is developed on the basis of the mesoscale Hydrological Model (mHM) platform. Nitrate process descriptions are adopted from the Hydrological Predictions for the Environment (HYPE) model, with considerably improved implementations. With its multiscale grid-based discretization, mHM-Nitrate balances spatial representation and modeling complexity. The model has been thoroughly evaluated in the Selke catchment (456 km²), central Germany, which is characterized by heterogeneous physiographic conditions. Results show that the model captures the long-term discharge and nitrate dynamics at three nested gauging stations well. Using daily nitrate-N observations, the model is also validated in capturing short-term fluctuations due to changes in runoff partitioning and spatial contributions during flooding events. Comparing the model simulations with values reported in the literature shows that the model provides detailed and reliable spatial information on nitrate concentrations and fluxes. The model can therefore be taken as a promising tool for environmental scientists advancing environmental modeling research, as well as for stakeholders supporting their decision-making, especially regarding spatially differentiated mitigation measures.
Secondly, a parsimonious approach for regionalizing in-stream autotrophic nitrate uptake is proposed using high-frequency data and integrated into the new mHM-Nitrate model. The regionalization approach considers a potential uptake rate (as a general parameter) and the effects of above-canopy light and riparian shading (represented by global radiation and leaf area index data, respectively). Multi-parameter sensors were continuously deployed in a forested upstream reach and an agricultural downstream reach of the Selke River. Using the continuous high-frequency data from both streams, daily autotrophic uptake rates (2011-2015) are calculated and used to validate the regionalization approach. The performance and spatial transferability of the approach are confirmed by how well it captures the distinct seasonal patterns and value ranges in both the forest and the agricultural stream. Integrating the approach into the mHM-Nitrate model allows the spatiotemporal variability of in-stream nitrate transport and uptake to be investigated throughout the river network.
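The regionalization described above can be sketched as a simple multiplicative model. The functional forms and parameter names below (a saturating light response driven by global radiation and an exponential Beer-Lambert-type shading term driven by leaf area index) are illustrative assumptions for the sketch, not the dissertation's actual equations:

```python
import math

def autotrophic_uptake(u_pot, radiation, lai, r_half=100.0, k_shade=0.5):
    """Daily autotrophic nitrate uptake rate (hypothetical sketch).

    u_pot     -- potential uptake rate (general parameter, e.g. mg N m^-2 d^-1)
    radiation -- above-canopy global radiation (e.g. W m^-2)
    lai       -- riparian leaf area index (dimensionless)
    r_half    -- assumed half-saturation radiation (illustrative value)
    k_shade   -- assumed light-extinction coefficient (illustrative value)
    """
    light_factor = radiation / (radiation + r_half)  # saturating light response
    shading_factor = math.exp(-k_shade * lai)        # riparian shading
    return u_pot * light_factor * shading_factor

# Under the same radiation and potential rate, a shaded forest reach
# takes up less nitrate than an open agricultural reach:
forest = autotrophic_uptake(u_pot=50.0, radiation=200.0, lai=4.0)
openag = autotrophic_uptake(u_pot=50.0, radiation=200.0, lai=0.5)
```

Because only the potential rate is calibrated while radiation and leaf area index come from data, such a formulation stays parsimonious and transferable between reaches.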
Thirdly, to further assess the spatial variability of catchment nitrate dynamics, the fully distributed parameterization is, for the first time, investigated through sensitivity analysis. The results show that the parameters of soil denitrification, in-stream denitrification, and in-stream uptake are the most sensitive throughout the Selke catchment, while all exhibit high spatial variability, with explicitly identifiable hot-spots of parameter sensitivity. Spearman rank correlations between the sensitivity indices and multiple catchment factors further show that the controlling factors vary spatially, reflecting heterogeneous catchment responses in the Selke catchment. These insights can therefore inform future parameter regionalization schemes for catchment water quality models. In addition, the spatial distributions of parameter sensitivity are also influenced by the gauging information used for the sensitivity evaluation, so an appropriate monitoring scheme is highly recommended to truly reflect the catchment responses.
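The rank-correlation step can be illustrated with a minimal, dependency-free Spearman implementation (valid for tie-free samples). The sensitivity indices and catchment attribute values below are made-up numbers for illustration, not results from the Selke catchment:

```python
def ranks(values):
    """1-based rank positions of tie-free values."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for pos, idx in enumerate(order):
        r[idx] = pos + 1
    return r

def spearman(x, y):
    """Spearman rank correlation for tie-free samples:
    rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical example: sensitivity index of an in-stream parameter
# vs. share of agricultural land per subcatchment.
sensitivity = [0.12, 0.35, 0.28, 0.51, 0.44]
agri_share  = [0.20, 0.55, 0.40, 0.80, 0.70]
rho = spearman(sensitivity, agri_share)  # strongly positive for these data
```

A rank correlation is preferred here over Pearson's r because it captures monotonic (not necessarily linear) relationships and is robust to the skewed distributions typical of catchment attributes.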
Error correction in coding theory deals with detecting and correcting errors in the transmission and storage of messages.
To this end, the message is encoded into a codeword by adding redundant information.
Coding schemes are subject to different requirements, such as the maximum number of correctable errors and the speed of correction.
A common scheme is the BCH code, which is used industrially for codes correcting up to four errors. A drawback of these codes is that the latency of computing the error positions grows with the code length.
This dissertation presents a new coding scheme in which a long code is generated by a special arrangement of shorter BCH codes. This arrangement is governed by a further special code, an LDPC code, which is designed for fast error detection.
For this purpose, a new construction method is presented that yields a code of arbitrary length with an arbitrary, prescribable number of correctable errors. In addition to fast error detection, the construction also provides an easily and quickly derived procedure for encoding the message into a codeword. To date, this is unique in the literature on LDPC codes.
Building on the construction of an LDPC code, a method is presented for combining it with a BCH code, producing an arrangement of the BCH code in blocks. Besides the general description of this code, a concrete code for 2-bit error correction is described. It consists of two parts, which are described and compared in several variants. For certain lengths of the BCH code, a correction problem is identified that follows an algebraic rule.
The BCH code is described very generally, but under certain preconditions there exists a BCH code in the narrow sense, which defines the standard. In this dissertation, the narrow-sense BCH code is modified so that the algebraic problem in 2-bit error correction no longer arises in combination with the LDPC code. It is shown that after the modification the new code is still a BCH code in the general sense, able to correct 2-bit errors and detect 3-bit errors. Regarding the hardware implementation of the error correction, it is further shown that the latency of the modified code is lower than that of the original BCH code and offers further potential for improvement.
The final chapter shows that this modified code of arbitrary length is suitable for combination with the LDPC code, so that the scheme is not only usable over a wider range of lengths but, thanks to the faster decoding, also has further advantages over a narrow-sense BCH code.
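To make the correction capability concrete, the following toy sketch encodes with the narrow-sense BCH(15,7) code (generator g(x) = x^8 + x^7 + x^6 + x^4 + 1, minimum distance 5) and corrects up to 2-bit errors by brute-force nearest-codeword search. It only illustrates what "2-bit error correction" means for a BCH component code; it is not the dissertation's fast block/LDPC construction:

```python
G = 0b111010001  # g(x) = x^8 + x^7 + x^6 + x^4 + 1; bit i = coefficient of x^i

def gf2_mul(a, b):
    """Carry-less (GF(2)) multiplication of bit-packed polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# All 2^7 codewords of the (15, 7) code: c(x) = m(x) * g(x)
CODEWORDS = {gf2_mul(m, G): m for m in range(1 << 7)}

def encode(msg):
    """Encode a 7-bit message into a 15-bit codeword."""
    return gf2_mul(msg, G)

def decode(received):
    """Brute-force nearest-codeword decoding. The result is unique for
    up to 2 bit errors because the minimum distance is 5."""
    for cw, msg in CODEWORDS.items():
        if bin(cw ^ received).count("1") <= 2:
            return msg
    return None  # more than 2 errors: detectable here, but not correctable

msg = 0b1011001
corrupted = encode(msg) ^ (1 << 3) ^ (1 << 11)  # flip two bits
recovered = decode(corrupted)                    # recovers the original message
```

Scanning 128 codewords is trivial at this scale; the dissertation's point is precisely that decoding cost grows with code length, which motivates assembling long codes from short BCH blocks arranged by an LDPC code.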
Addressing both scholars of international law and political science and decision makers involved in cybersecurity policy, the book tackles the most important and intricate legal issues that a state faces when considering a reaction to a malicious cyber operation conducted by an adversarial state. While often invoked in political debates and widely analysed in international legal scholarship, self-defence and countermeasures will often remain unavailable to states in situations of cyber emergency due to the pervasive problem of reliable and timely attribution of cyber operations to state actors. Analysing the legal questions surrounding attribution in detail, the book presents the necessity defence as an evidently available alternative. However, the shortcomings of the doctrine, as grounded in customary international law, that render it problematic as a remedy for states are also examined in depth. In light of this, the book concludes by outlining a special emergency regime for cyberspace.
The steadily rising number of investor-State arbitration proceedings within the EU has triggered an extensive backlash and an increased questioning of the international investment law regime by different Member States as well as the EU Commission. This has resulted in the EU's assertion of control over the intra-EU investment regime by promoting the termination of bilateral intra-EU investment treaties (intra-EU BITs) and by opposing the jurisdiction of arbitral tribunals in intra-EU investor-State arbitration proceedings. Against the backdrop of the landmark Achmea decision of the European Court of Justice, the book offers an in-depth analysis of the interplay of international investment law and the law of the European Union with regard to intra-EU investments, i.e. investments undertaken by an investor from one EU Member State within the territory of another EU Member State. It specifically analyses the conflict between the two investment protection regimes applicable within the EU, with a particular emphasis on the compatibility of the international legal instruments with the law of the European Union. The book thereby addresses the more general question of the relationship between EU law and international law and offers a conceptual framework of intra-European investment protection based on the analysis of all intra-EU BITs, the Energy Charter Treaty and EU law, as well as the arbitral practice in over 180 intra-EU investor-State arbitration proceedings. Finally, the book develops possible solutions to reconcile the international legal standards of protection with the regionalized transnational law of the European Union.
The current thesis contains the results from two experimental studies and one modelling study focused on the topic of ductile strain localization in the presence of material heterogeneities. Localization of strain in the high-temperature regime is a well-known feature of rock deformation, occurring in nature at different scales and in a variety of lithologies. Large-scale shear zones at the roots of major crustal fault zones are considered responsible for the activity of plate tectonics on our planet. A large number of mechanisms are suggested to be associated with strain softening and the nucleation of localization. Among these, the presence of material heterogeneities within homogeneous host rocks is frequently observed in field examples to trigger shear zone development. Despite a number of studies conducted on the topic, the mechanisms controlling the initiation and evolution of localization are not yet fully understood. We investigated, experimentally and by means of numerical modelling, phenomenological and microphysical aspects of high-temperature strain localization in a homogeneous body containing single and paired inclusions of weaker material. A monomineralic carbonate system composed of Carrara marble (homogeneous, strong matrix) and Solnhofen limestone (weak planar inclusions) was selected for our studies based on its versatility as an experimental material and on the frequent occurrence of carbonate rocks at the core of natural shear zones.
To explore the influence of different loading conditions on heterogeneity-induced high-temperature shear zones, we conducted torsion experiments under constant twist (deformation) rate and constant torque (stress) conditions in a Paterson-type deformation apparatus on hollow cylinders of marble containing single planar inclusions of limestone. At the imposed experimental conditions (900 ◦C temperature and 400 MPa confining pressure) both materials deform plastically and the marble is ≈ 9 times stronger than the limestone. The viscosity contrast between the two materials induces a perturbation of the stress field within the marble matrix at the tip of the planar inclusion. Early along the deformation path (at bulk shear strains ≈ 0.3), a heterogeneous distribution of strain can be observed under both loading conditions, and a small area of incipient strain localization forms at the tip of the weak limestone inclusion. Strongly deformed grains, incipient dynamic recrystallization, and a weak crystallographic preferred orientation characterize the marble within an area extending a few mm in front of the inclusion. As the bulk strain is increased (up to γ ≈ 1), the area of microstructural modification expands along the inclusion plane, the texture strengthens, and grain size refinement by dynamic recrystallization becomes pervasive. Locally, evidence for coexisting brittle deformation is also observed, regardless of the imposed loading conditions. A shear zone is effectively formed within the deforming Carrara marble, its geometry controlled by the plane containing the thin plate of limestone. Thorough microstructural and textural analyses, however, do not reveal substantial differences in the mechanisms or magnitude of strain localization under the different loading conditions.
We conclude that, in the presence of material heterogeneities capable of inducing strain softening, the imposed loading conditions do not affect ductile localization in its nucleating and transient stages.
As the ultimate goal of experimental rock deformation is the extrapolation of results to geologically relevant time and space scales, we developed 2D numerical models reproducing (and benchmarked against) our experimental results. Our cm-scale models implement a first-order, strain-dependent softening law to reproduce the effect of rheological weakening in the deforming material. We successfully reproduced the local stress concentration at the inclusion tips and the strain localization initiated in the marble matrix. The heterogeneous distribution of strain and its evolution with imposed bulk deformation (i.e. the shape and extent of the nucleating shear zone) are observed to depend on the degree of softening imposed on the deforming matrix. When a second (artificial) softening step is introduced at elevated bulk strains in the model, a secondary high-strain layer forms at the core of the initial shear zone, analogous to the development of ultramylonite bands in high-strain natural shear zones. Our results not only reproduce the nucleation and transient evolution of a heterogeneity-induced high-temperature shear zone with high accuracy, but also confirm the importance of introducing reliable softening laws capable of mimicking strain weakening into numerical models of crustal-scale ductile processes.
Material heterogeneities that induce strain localization in the field often consist of brittle precursors (joints and fractures). More generally, the interaction of brittle and ductile deformation mechanisms and its effect on strain localization have long been a key topic in the structural geology community. The positive feedback between (micro)fracturing and ductile strain localization is well recognized in a number of field examples. We experimentally investigated the influence of brittle deformation on the initiation and evolution of high-temperature shear zones in a strong matrix containing pairs of weak material heterogeneities. Our Carrara marble-Solnhofen limestone inclusion system was tested in triaxial compression under constant strain rate and high-temperature (900 ◦C) conditions in a Paterson deformation apparatus. The inclusion pairs were arranged in non-overlapping step-over geometries of either compressional or extensional nature. Experimental runs were conducted at different confining pressures (30, 50, 100 and 300 MPa) to induce various amounts of brittle deformation within the marble matrix. At low confinement (30 and 50 MPa) abundant brittle deformation is observed in all configurations, but the spatial distribution of cracks depends on the kinematics of the step-over region: concentrated along the shearing plane between the inclusions in the extensional samples, or broadly distributed around the inclusions but outside the step-over region in the compressional configuration. Accordingly, brittle-assisted ductile processes tend to localize deformation along the inclusion plane in the extensional geometry, or to distribute it widely across large areas of the matrix in the compressional step-over. At pressures of 100 and 300 MPa fracturing is mostly suppressed in both configurations and strain is accommodated almost entirely by viscous creep.
In extensional samples this leads to progressive de-localization with increasing confinement. Our results show that, while ductile strain localization is indeed more efficient where assisted by brittle processes, the latter are only effective if they are themselves heterogeneously distributed, which is ultimately a function of the local stress perturbations.
Geomechanical and petrological characterisation of exposed slip zones, Alpine Fault, New Zealand
(2020)
The Alpine Fault is a large, plate-bounding, strike-slip fault extending along the north-western edge of the Southern Alps, South Island, New Zealand. It regularly accommodates large (MW > 8) earthquakes and has a high statistical probability of failure in the near future, i.e., is late in its seismic cycle. This pending earthquake and associated co-seismic landslides are expected to cause severe infrastructural damage that would affect thousands of people, so it presents a substantial geohazard. The interdisciplinary study presented here aims to characterise the fault zone’s 4D (space and time) architecture, because this provides information about its rheological properties that will enable better assessment of the hazard the fault poses.
The studies undertaken include field investigations of principal slip zone fault gouges exposed along strike of the fault, and subsequent laboratory analyses of these outcrop and additional borehole samples. These observations have provided new information on (I) characteristic microstructures down to the nanoscale that indicate which deformation mechanisms operated within the rocks, (II) mineralogical information that constrains the fault’s geomechanical behaviour, and (III) geochemical compositional information that allows the influence of fluid-related alteration processes on material properties to be unraveled.
Results show that along-strike variations of fault rock properties such as microstructures and mineralogical composition are minor and/or do not substantially influence fault zone architecture. They furthermore provide evidence that the architecture of the fault zone, particularly its fault core, is more complex than previously considered, and also more complex than expected for this sort of mature fault cutting quartzofeldspathic rocks. In particular, our results strongly suggest that the fault has more than one principal slip zone, and that these form an anastomosing network extending into the basement below the cover of Quaternary sediments.
The observations detailed in this thesis highlight that two major processes, (I) cataclasis and (II) authigenic mineral formation, are the major controls on the rheology of the Alpine Fault. The velocity-weakening behaviour of its fault gouge is favoured by abundant nanoparticles
promoting powder lubrication and grain rolling rather than frictional sliding. Wall-rock fragmentation is accompanied by co-seismic, fluid-assisted dilatancy that is recorded by calcite cementation. This mineralisation, along with authigenic formation of phyllosilicates, quickly alters the petrophysical fault zone properties after each rupture, restoring fault competency. Dense networks of anastomosing and mutually cross-cutting calcite veins and intensively reworked gouge matrix demonstrate that strain repeatedly localised within the narrow fault gouge. Abundantly undeformed euhedral chlorite crystallites and calcite veins cross-cutting both fault gouge and gravels that overlie basement on the fault’s footwall provide evidence that the processes of authigenic phyllosilicate growth, fluid-assisted dilatancy and associated fault healing are processes active particularly close to the Earth’s surface in this fault zone.
Exposed Alpine Fault rocks are subject to intense weathering as a direct consequence of the abundant orogenic rainfall associated with the fault’s location at the base of the Southern Alps. Furthermore, fault rock rheology is substantially affected by shallow-depth conditions such as the juxtaposition of competent hanging wall fault rocks on poorly consolidated footwall sediments. This means the microstructural, mineralogical and geochemical properties of the exposed fault rocks may differ substantially from those at deeper levels, and thus are not characteristic of the majority of the fault rocks’ history. Examples are (I) frictionally weak smectites found within the fault gouges, which are artefacts formed at near-surface temperature conditions and impart petrophysical properties that are not typical of most fault rocks of the Alpine Fault, (II) grain-scale dissolution resulting from subaerial weathering rather than from deformation by pressure-solution processes and (III) fault gouge geometries that are more complex than expected for their deeper counterparts.
The methodological approaches deployed in analyses of this and other fault zones, together with the major results of this study, are finally discussed in order to contextualize slip zone investigations of fault zones and landslides. Like faults, landslides are major geohazards, which highlights the importance of characterising their geomechanical properties. Faults, especially those exposed to subaerial processes, and landslides share similarities in mineralogical composition and geomechanical behaviour: in both, failure occurs predominantly by cataclastic processes, although aseismic creep promoted by weak phyllosilicates is not uncommon. Consequently, the multidisciplinary approach commonly used to investigate fault zones may help improve the understanding of landslide failure processes and the assessment of their hazard potential.
Lava domes are severely hazardous, mound-shaped extrusions of highly viscous lava that commonly form at active stratovolcanoes around the world. Due to gradual growth and flank oversteepening, such lava domes regularly experience partial or full collapses, resulting in destructive and far-reaching pyroclastic density currents. They are also associated with cyclic explosive activity, as the complex interplay of cooling, degassing, and solidification of dome lavas regularly causes gas pressurization within the dome or the underlying volcanic conduit. Lava dome extrusions can last from days to decades, further highlighting the need for accurate and reliable monitoring data.
This thesis aims to improve our understanding of lava dome processes and to contribute to the monitoring and prediction of hazards posed by these domes. The recent rise and sophistication of photogrammetric techniques allows for the extraction of observational data in unprecedented detail and creates ideal tools for accomplishing this purpose. Here, I study natural lava dome extrusions as well as laboratory-based analogue models of lava dome extrusions, employing photogrammetric monitoring by Structure-from-Motion (SfM) and Particle-Image-Velocimetry (PIV) techniques. I primarily use photographic data obtained by helicopter, airplane, Unoccupied Aircraft Systems (UAS) or ground-based time-lapse cameras. Firstly, by combining a long time series of overflight data at Volcán de Colima, México, with seismic and satellite radar data, I construct a detailed timeline of lava dome and crater evolution. Using a numerical model, I further evaluate the impact of the extrusion on dome morphology and loading stress and identify an effect on the growth direction, which bears important implications for the location of collapse hazards. Secondly, sequential overflight surveys at the Santiaguito lava dome, Guatemala, reveal surface motion data in high detail. I quantify the growth of the lava dome and the movement of a lava flow, showing complex motions that occur on different timescales, and I provide insight into rock properties relevant for hazard assessment, inferred purely from photogrammetric processing of remote sensing data. Lastly, I recreate artificial lava dome and spine growth using analogue modelling under controlled conditions, providing new insights into lava extrusion processes and structures as well as the conditions in which they form.
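The core step of the PIV technique mentioned above is estimating the displacement of an image patch between two successive frames via cross-correlation. The following is a minimal, self-contained sketch of that idea (not the thesis’s actual processing chain, which relies on dedicated photogrammetric software); `piv_displacement` is an illustrative helper name.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement of win_b relative to win_a
    via FFT-based cross-correlation -- the core operation of PIV."""
    # Subtract means so the correlation responds to texture, not brightness.
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Circular cross-correlation computed in the Fourier domain.
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    # The correlation peak marks the most likely shift.
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped (circular) indices to signed displacements.
    shift = [int(p) if p <= s // 2 else int(p - s)
             for p, s in zip(peak, corr.shape)]
    return tuple(shift)  # (dy, dx) in pixels

# Synthetic check: shift a random texture by (3, -2) pixels and recover it.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
shifted = np.roll(frame, (3, -2), axis=(0, 1))
print(piv_displacement(frame, shifted))  # (3, -2)
```

Real PIV workflows repeat this per interrogation window across the image, add sub-pixel peak fitting, and validate outliers, but the displacement-from-correlation-peak principle is the same.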
These findings demonstrate the capabilities of photogrammetric data analyses to successfully monitor lava dome growth and evolution while highlighting the advantages of complementary modelling methods to explain the observed phenomena. The results presented herein further bear important new insights and implications for the hazards posed by lava domes.
Carbonates play a key role in the chemistry and dynamics of our planet. They are directly connected to the CO2 budget of our atmosphere and have a great impact on the deep carbon cycle. Moreover, recent studies have shown that carbonates are stable along the geothermal gradient down to conditions of Earth's lower mantle, changing their crystal structure and related properties. Subducted carbonates may also react with silicates to form new phases. These reactions redistribute elements such as calcium (Ca), magnesium (Mg), iron (Fe) and carbon in the form of carbon dioxide (CO2), as well as trace elements carried by the carbonates. The trace elements of most interest are strontium (Sr) and the rare earth elements (REE), which have been found to be important constituents in the composition of the primitive lower mantle and in mineral inclusions found in super-deep diamonds. However, the stability of carbonates in the presence of mantle silicates at relevant temperatures is far from well understood. Related to this, very little is known about the processes that distribute trace elements between carbonates and mantle silicates. To shed light on these processes, we studied reactions between Sr- and REE-containing CaCO3 and Mg/Fe-bearing silicates of the system (Mg,Fe)2SiO4 - (Mg,Fe)SiO3 at high pressure and high temperature using synchrotron-radiation-based μ-X-ray diffraction (μ-XRD) and μ-X-ray fluorescence (μ-XRF) with μm resolution in a laser-heated diamond anvil cell. X-ray diffraction is used to derive the structural changes during the phase reactions, whereas X-ray fluorescence gives information on the chemical changes in the sample. In-situ experiments at high pressure and high temperature were performed at beamline P02.2 at PETRA III (Hamburg, Germany) and at beamline ID27 at ESRF (Grenoble, France).
In addition to μ-XRD and μ-XRF, ex-situ measurements were made on the recovered sample material using transmission electron microscopy (TEM) and provided further insights into the reaction kinetics of carbonate-silicate reactions.
Our investigations show that CaCO3 is unstable in the presence of mantle silicates above 1700 K, where a reaction takes place in which magnesite and CaSiO3-perovskite are formed. In addition, we observed that a high iron content in the carbonate-silicate system favours dolomite formation during the reaction. The subduction of natural carbonates with significant amounts of Sr motivated a comprehensive investigation of the stability not only of CaCO3 phases in contact with mantle silicates but also of SrCO3 (and of Sr-bearing CaCO3). We found that SrCO3 reacts with (Mg,Fe)SiO3-perovskite to form magnesite, and we gained evidence for the formation of SrSiO3-perovskite.
To complement our study on the stability of SrCO3 at conditions of the Earth's lower mantle, we performed powder X-ray diffraction and single-crystal X-ray diffraction experiments at ambient temperature and up to 49 GPa. We observed a transformation from SrCO3-I into a new high-pressure phase, SrCO3-II, at around 26 GPa, with a Pmmn crystal structure and a bulk modulus of 103(10) GPa. This information is essential to fully understand the phase behaviour and stability of carbonates in the Earth's lower mantle and to elucidate the possibility of introducing Sr into mantle silicates by carbonate-silicate reactions.
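Bulk moduli of this kind are typically obtained by fitting pressure-volume compression data to an equation of state; a standard choice for diamond-anvil-cell data (whether this exact form was used here is an assumption) is the third-order Birch-Murnaghan relation:

```latex
% Third-order Birch-Murnaghan equation of state:
% K_0 = isothermal bulk modulus, K_0' = its pressure derivative,
% V_0 = zero-pressure volume (all evaluated at ambient pressure).
P(V) = \frac{3K_{0}}{2}
\left[ \left(\frac{V_{0}}{V}\right)^{7/3} - \left(\frac{V_{0}}{V}\right)^{5/3} \right]
\left\{ 1 + \frac{3}{4}\left(K_{0}' - 4\right)
\left[ \left(\frac{V_{0}}{V}\right)^{2/3} - 1 \right] \right\}
```

A fitted K0 of 103(10) GPa then summarizes the compressibility of SrCO3-II over the measured pressure range.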
Simultaneous recording of μ-XRD and μ-XRF at μm resolution over the heated areas provides spatial information not only about the phase reactions but also about the elemental redistribution during these reactions. A comparison of the spatial intensity distribution of the XRF signal before and after heating indicates a change in the elemental distribution of Sr: an increase in Sr concentration was found around the newly formed SrSiO3-perovskite. With the help of additional TEM analyses of the quenched sample material, the elemental redistribution was studied at the sub-micrometer scale. Contrary to expectations from the combined μ-XRD and μ-XRF measurements, we found that La and Eu were not incorporated into the silicate phases; instead, they tend to form either isolated oxide phases (e.g. Eu2O3, La2O3) or hydroxyl-bastnäsite (La(CO3)(OH)). In addition, we observed the transformation from (Mg,Fe)SiO3-perovskite to low-pressure clinoenstatite during pressure release. The monoclinic structure (P21/c) of this phase allows the incorporation of Ca and, to a minor extent, Sr, as shown by additional EDX analyses.
Based on our experiments, we conclude that detecting the trace elements in situ at high pressure and high temperature remains challenging. However, our first findings imply that silicates may incorporate the trace elements provided by the carbonates and indicate that carbonates may have a major effect on the trace element contents of mantle phases.
In the present study, we employ angle-resolved photoemission spectroscopy (ARPES) to study the electronic structure of topological states of matter, in particular the topological crystalline insulators (TCIs) Pb1-xSnxSe and Pb1-xSnxTe and the Mn-doped Z2 topological insulators (TIs) Bi2Te3 and Bi2Se3. The Z2 class of strong topological insulators is protected by time-reversal symmetry and is characterized by an odd number of metallic Dirac-type surface states in the surface Brillouin zone. Topological crystalline insulators, on the other hand, are protected by individual crystal symmetries and exhibit an even number of Dirac cones.
The topological properties of the lead tin chalcogenide topological crystalline insulators can be tuned by temperature and composition. Here, we demonstrate that Bi-doping of Pb1-xSnxSe(111) epilayers induces a quantum phase transition from a topological crystalline insulator to a Z2 topological insulator. This occurs because Bi-doping lifts the fourfold valley degeneracy in the bulk. As a consequence, a gap appears at Γ̄, while the three Dirac cones at the M̅ points of the surface Brillouin zone remain intact. We interpret this new phase transition as being caused by a lattice distortion. Our findings extend the topological phase diagram enormously and make strong topological insulators switchable by distortions or electric fields. In contrast, bulk Bi doping of epitaxial Pb1-xSnxTe(111) films induces a giant Rashba splitting at the surface that can be tuned by the doping level. Tight-binding calculations identify its origin as Fermi-level pinning by trap states at the surface.
Magnetically doped topological insulators enable the quantum anomalous Hall effect (QAHE), which provides quantized edge states for lossless charge transport applications. The edge states are hosted by a magnetic energy gap at the Dirac point, which had not been experimentally observed to date. Our low-temperature ARPES studies unambiguously reveal the magnetic gap of Mn-doped Bi2Te3. Our analysis shows a gap size below the Curie temperature TC five times larger than theoretically predicted. We assign this enhancement to a remarkable structural modification induced by Mn doping. Instead of a disordered impurity system, a self-organized alternating sequence of MnBi2Te4 septuple and Bi2Te3 quintuple layers is formed. This enhances the wave-function overlap and gives rise to a large magnetic gap. Mn-doped Bi2Se3 forms a similar heterostructure, but only a nonmagnetic gap is observed in this system. This correlates with the difference in magnetic anisotropy due to the much larger spin-orbit interaction in Bi2Te3 compared to Bi2Se3. These findings provide crucial insights for pushing lossless transport in topological insulators towards room-temperature applications.
The Willmore functional maps an immersed surface to the integral of the square of its mean curvature. Finding closed surfaces that minimize the Willmore energy, or more generally finding critical surfaces, is a classic problem of differential geometry.
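For concreteness, a common convention (normalizations differ by constant factors across the literature) writes the Willmore energy of an immersed closed surface Σ as:

```latex
% Willmore energy of an immersed closed surface \Sigma,
% with H the (scalar) mean curvature, \kappa_1, \kappa_2 the
% principal curvatures, and d\mu the induced area measure.
\mathcal{W}(\Sigma) = \int_{\Sigma} H^{2} \, d\mu,
\qquad H = \tfrac{1}{2}\left(\kappa_{1} + \kappa_{2}\right).
```

Round spheres are the global minimizers among closed surfaces in Euclidean space, which is one reason critical surfaces of such functionals are natural generalizations of spheres.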
In this thesis we will develop the concept of generalized Willmore functionals for surfaces in Riemannian manifolds. We are guided by models in mathematical physics, such as the Hawking energy of general relativity and the bending energies for thin membranes.
We prove the existence of minimizers under area constraint for these generalized Willmore functionals in a suitable class of generalized surfaces. In particular, we construct minimizers of the bending energy mentioned above for prescribed area and enclosed volume.
Furthermore, we prove that critical surfaces of generalized Willmore functionals with prescribed area are smooth away from finitely many points. These results, and those that follow, build on the existing theory for the Willmore functional.
This general discussion is succeeded by a detailed analysis of the Hawking energy. In the context of general relativity the surrounding manifold describes the space at a given time, hence we strive to understand the interplay between the Hawking energy and the ambient space. We characterize points in the surrounding manifold for which there are small critical spheres with prescribed area in any neighborhood. These points are interpreted as concentration points of the Hawking energy.
Additionally, we calculate an expansion of the Hawking energy on small, round spheres. This allows us to identify a kind of energy density of the Hawking energy.
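For reference, in the time-symmetric (Riemannian) setting the Hawking energy of a closed surface Σ is commonly defined, up to convention, as:

```latex
% Hawking energy of a closed surface \Sigma in a Riemannian 3-manifold,
% with |\Sigma| the area, H the mean curvature, d\mu the area measure.
m_{H}(\Sigma) = \sqrt{\frac{|\Sigma|}{16\pi}}
\left( 1 - \frac{1}{16\pi} \int_{\Sigma} H^{2} \, d\mu \right).
```

The Willmore-type term ∫ H² dμ is what links the Hawking energy to the generalized Willmore functionals studied in this thesis.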
It should be noted that our results contrast with previous expansions of the Hawking energy. However, those expansions are obtained on spheres along the light cone at a given point. At present it is not clear how to explain the discrepancy.
Finally, we consider asymptotically Schwarzschild manifolds. They are a special case of asymptotically flat manifolds, which serve as models for isolated systems. The Schwarzschild spacetime itself is a classical solution to the Einstein equations and yields a simple description of a black hole.
In these asymptotically Schwarzschild manifolds we construct a foliation of the exterior region by critical spheres of the Hawking energy with prescribed large area. This foliation can be seen as a generalized notion of the center of mass of the isolated system. Additionally, the Hawking energy grows along the foliation as the area of the surfaces grows.