Research on novel and advanced biomaterials is an indispensable step towards their application in fields such as tissue engineering, regenerative medicine, cell culture, and biotechnology. The work presented here focuses on one such promising material: polyelectrolyte multilayers (PEMs) composed of hyaluronic acid (HA) and poly(L-lysine) (PLL). This gel-like polymer surface coating is able to accumulate (bio-)molecules such as proteins or drugs and release them in a controlled manner. It mimics the extracellular matrix (ECM) in composition and intrinsic properties. These qualities make HA/PLL multilayers a promising candidate for multiple bio-applications such as those mentioned above. The work presented aims at the development of a straightforward approach for assessing multi-fractional diffusion in multilayers (first part) and at controlling local molecular transport into or from the multilayers with a laser light trigger (second part).
The mechanism of loading and release is governed by the interaction of bioactives with the multilayer constituents and, more generally, by diffusion. The diffusion of a molecule in HA/PLL multilayers exhibits multiple fractions with different diffusion rates. Approaches that can assess the mobility of molecules in such a complex system are limited. This shortcoming motivated the design of the novel evaluation tool presented here.
The tool employs a simulation-based approach for evaluating data acquired by the fluorescence recovery after photobleaching (FRAP) method. In this approach, possible fluorescence recovery scenarios are first simulated and then compared with the acquired data, while the parameters of a model are optimized until a sufficient match is achieved. Fluorescent latex particles of different sizes and fluorescein in an aqueous medium are used as test samples to validate the analysis results. The diffusion of the protein cytochrome c in HA/PLL multilayers is evaluated as well.
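To illustrate the principle behind such an evaluation, the sketch below fits a two-fraction recovery curve; it substitutes the closed-form Soumpasis solution for a uniform circular bleach spot in place of the full simulation, and all names and parameter values are illustrative assumptions, not the thesis implementation:

```python
# Minimal sketch of multi-fraction FRAP fitting (illustrative only; the
# thesis compares fully simulated recovery scenarios with the data).
import numpy as np
from scipy.special import ive
from scipy.optimize import curve_fit

def soumpasis(t, tau):
    # Single-fraction recovery for a uniform circular bleach spot,
    # tau = w**2 / (4*D). ive is the exponentially scaled Bessel function,
    # so ive(0, x) + ive(1, x) equals exp(-x)*(I0(x) + I1(x)) in stable form.
    x = 2.0 * tau / t
    return ive(0, x) + ive(1, x)

def recovery(t, mobile, f_fast, tau_fast, tau_slow):
    # Two mobile fractions (fast/slow) plus an immobile remainder.
    return mobile * (f_fast * soumpasis(t, tau_fast)
                     + (1 - f_fast) * soumpasis(t, tau_slow))

t = np.linspace(0.1, 60, 300)                    # seconds
data = recovery(t, 0.9, 0.6, 0.5, 8.0)           # synthetic "measurement"
data += np.random.default_rng(0).normal(0, 0.01, t.size)

p, _ = curve_fit(recovery, t, data, p0=[0.8, 0.5, 1.0, 10.0],
                 bounds=([0, 0, 0, 0], [1, 1, np.inf, np.inf]))
print(p)  # recovered mobile fraction, fast share, and both time constants
```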
This tool significantly broadens the possibilities for analyzing spatiotemporal FRAP data originating from multi-fractional diffusion, while striving to be widely applicable. It has the potential to elucidate the mechanisms of molecular transport and to empower the rational engineering of drug release systems.
The second part of the work focuses on the fabrication of such a spatiotemporally controlled drug release system employing the HA/PLL multilayer. This release system comprises layers of various functionalities that together form a sandwich structure. The bottom layer, which serves as a reservoir, is formed by an HA/PLL PEM deposited on a planar glass substrate. On top of the PEM, a layer of so-called hybrids is deposited. The hybrids consist of thermoresponsive poly(N-isopropylacrylamide) (PNIPAM)-based hydrogel microparticles with surface-attached gold nanorods. The layer of hybrids is intended to serve as a gate that controls local molecular transport through the PEM–solution interface. The possibility of stimulating this molecular transport by near-infrared (NIR) laser irradiation is explored.
Of several tested approaches for depositing hybrids onto the PEM surface, a drying-based approach was identified as optimal. Experiments examining the functionality of the fabricated sandwich at elevated temperature document the reversible volume phase transition of the PEM-attached hybrids while the sandwich remains stable. Further, the gold nanorods were shown to effectively absorb light in the tissue- and cell-friendly NIR spectral region while transducing the energy of light into heat. Rapid and reversible shrinkage of the PEM-attached hybrids was thereby achieved. Finally, dextran was employed as a model transport molecule. It loads into the PEM reservoir within a few seconds with a partition constant of 2.4, while it is released spontaneously in a slower, sustained manner. Local laser irradiation of the sandwich containing fluorescein isothiocyanate-tagged dextran leads to a gradual reduction of fluorescence intensity in the irradiated region.
The fabricated release system employs the well-established photoresponsivity of the hybrids in an innovative setting. The results of this research are a step towards a spatially controlled, on-demand drug release system and pave the way to spatiotemporally controlled drug release.
The approaches developed in this work have the potential to elucidate molecular dynamics in the ECM and to foster the engineering of multilayers with properties tuned to mimic the ECM. The work aims at spatiotemporal control over the diffusion of bioactives and their presentation to cells.
While estimated numbers of past and future climate migrants are alarming, the growing empirical evidence suggests that the association between adverse climate-related events and migration is not universally positive. This dissertation seeks to advance our understanding of when and how climate migration emerges by analyzing heterogeneous climatic influences on migration in low- and middle-income countries. To this end, it draws on established economic theories of migration, datasets from the physical and social sciences, causal inference techniques, and approaches from systematic literature review. In three of its five chapters, I estimate causal effects of processes of climate change on inequality and migration in India and Sub-Saharan Africa. By employing interaction terms and by analyzing sub-samples of the data, I explore how these relationships differ across segments of the population. In the remaining two chapters, I present two systematic literature reviews. First, I undertake a comprehensive meta-regression analysis of the econometric climate migration literature to summarize general climate migration patterns and explain the conflicting findings. Second, motivated by the broad range of approaches in the field, I examine the literature from a methodological perspective to provide best-practice guidelines for studying climate migration empirically. Overall, the evidence from this dissertation shows that climatic influences on human migration are highly heterogeneous. Whether adverse climate-related impacts translate into migration depends on the socio-economic characteristics of individual households, such as wealth, level of education, agricultural dependence, or access to adaptation technologies and insurance. For instance, I show that while adverse climatic shocks are generally associated with an increase in migration in rural India, they reduce migration in the agricultural context of Sub-Saharan Africa, where average wealth levels are much lower, so that households largely cannot afford the upfront costs of moving. I find that unlike local climatic shocks, which primarily enhance internal migration to cities and hence accelerate urbanization, shocks transmitted via agricultural producer prices increase migration to neighboring countries, likely due to the simultaneous decrease in real income in nearby urban areas. These findings advance our current understanding by showing when and how economic agents respond to climatic events, thus providing explicit contexts and mechanisms of climate change effects on migration in the future. The resulting collection of findings can guide policy interventions to avoid or mitigate present and future welfare losses from climate change-related migration choices.
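As a minimal illustration of the interaction-term strategy mentioned above (a hypothetical specification on synthetic data, not the dissertation's models or variables), one might estimate how the effect of a climatic shock on migration varies with household wealth:

```python
# Hypothetical sketch: heterogeneous climate effects via an interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "shock": rng.normal(size=n),           # standardized climatic shock
    "wealth": rng.uniform(0, 1, size=n),   # household wealth index
})
# Toy data-generating process: shocks push migration, wealth dampens the effect
df["migration"] = (0.5 * df.shock - 0.6 * df.shock * df.wealth
                   + rng.normal(scale=0.5, size=n))

model = smf.ols("migration ~ shock * wealth", data=df).fit()
print(model.params)  # the shock:wealth coefficient captures the heterogeneity
```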
This thesis examines the politics of central bank independence (CBI) using Turkey as a case study. The thesis centres on theoretical and empirical questions and problems arising in connection with CBI, which are discussed with reference to Turkish monetary policy. A central aim is to examine whether and to what extent the Turkish central bank can actually be classified as independent and depoliticized after attaining de jure institutional independence. To answer this research question, the institutional conditions, the objectives, and the rules guiding Turkish monetary policy are clarified. It is then tested empirically whether the CBRT's monetary policy practice follows the officially prescribed framework of rules. The main thesis of this work is that the formal independence of the CBRT and its rule-oriented monetary policy cannot be equated with a depoliticization of monetary policy in Turkey. As an alternative, the present study proposes to examine the institutional status of the CBRT as one of relative autonomy. Even a de jure independent central bank cannot decouple itself from political intervention, as the case study of Turkey will show.
Landslides are frequent natural hazards in rugged terrain, occurring when the resisting frictional force on the surface of rupture yields to the gravitational force. These forces are functions of geological and morphological factors, such as the angle of internal friction and the local slope gradient or curvature, which remain static over hundreds of years, whereas more dynamic triggering events, such as rainfall and earthquakes, compromise the force balance by temporarily reducing resisting forces or adding transient loads. This thesis investigates landslide distribution and orientation in response to landslide triggers (e.g. rainfall) at different scales (6 km^2 to 4∙10^5 km^2) and aims to link rainfall movement with the landslide distribution. It additionally explores the local impacts of extreme rainstorms on landsliding and the role of precursory stability conditions that could be induced by an earlier trigger, such as an earthquake.
Extreme rainfall is a common landslide trigger. Although several studies have assessed rainfall intensity and duration to study the distribution of the landslides thus triggered, only a few case studies have quantified spatial rainfall patterns (i.e. the orographic effect). Quantifying the regional trajectories of extreme rainfall could aid in predicting landslide-prone regions in Japan. To this end, I combined a non-linear correlation metric, namely event synchronization, with radial statistics to assess the general pattern of extreme rainfall tracks over distances of hundreds of kilometers using satellite-based rainfall estimates. The results showed that, although increases in rainfall intensity and duration correlate positively with landslide occurrence, the trajectories of typhoons and frontal storms were insufficient to explain the landslide distribution in Japan. Extreme rainfall trajectories inclined northwestwards and were concentrated at certain locations, such as the coastlines of southern Japan, a pattern not reflected in the distribution of about 5,000 rainfall-triggered landslides. These landslides seemed to respond instead to mean annual rainfall rates.
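The event synchronization measure can be sketched as follows; this is a simplified fixed-window variant in the spirit of Quiroga et al., with toy event times, not the exact implementation of the study:

```python
# Simplified event synchronization between two extreme-rainfall event series.
import numpy as np

def event_sync(tx, ty, tau):
    # Count events in one series that have a counterpart in the other
    # within +/- tau; normalize to obtain a strength Q in [0, 1].
    def c(a, b):
        return sum(np.abs(b - t).min() <= tau for t in a)
    return (c(tx, ty) + c(ty, tx)) / (2 * np.sqrt(len(tx) * len(ty)))

tx = np.array([3.0, 10.0, 22.0, 35.0])  # event times at grid cell A (days)
ty = np.array([4.0, 11.0, 30.0])        # event times at grid cell B
print(event_sync(tx, ty, tau=1.5))      # high Q suggests a shared storm track
```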
The above findings called for investigation at a more local scale to better understand the mechanistic response of the landscape to extreme rainfall in terms of landslides. In May 2016, intense rainfall struck southern Germany, triggering high waters and landslides. The highest damage was reported in Braunsbach, which is located on the tributary-mouth fan formed by the Orlacher Bach, a ~3 km long creek draining a catchment of about 6 km^2. I visited this catchment in June 2016 and mapped 48 landslides along the creek. Such high landslide activity was not reported in the nearby catchments within ~3300 km^2, despite similar rainfall intensity and duration according to weather radar estimates. My hypothesis was that several landslides were triggered by rainfall-induced flash floods that undercut hillslope toes along the Orlacher Bach. I found that morphometric features such as slope and curvature play an important role in the landslide distribution at this micro-scale study site (<10 km^2). In addition, the high number of landslides along the Orlacher Bach could also have been boosted by damage accumulated on the hillslopes through karst weathering over longer time scales.
Precursory damage on hillslopes can also be induced by past triggering events that affect landscape evolution, but this interaction is hard to assess independently of the latest trigger. For example, an earthquake might influence the evolution of a landscape for decades, beyond its direct impacts such as the landslides that immediately follow it. Here I studied the consequences of the 2016 Kumamoto earthquake (Mw 7.1), which triggered some 1,500 landslides in an area of ~4000 km^2 in central Kyushu, Japan. Topography, i.e. local slope and curvature, both amplified and attenuated the seismic waves, thus controlling the failure mechanism of those landslides (e.g. progressive failure). I found that topography fails to explain the distribution and preferred orientation of the landslides after the earthquake; instead, the landslides were concentrated around the northeast of the rupture area and faced mostly normal to the rupture plane. This preferred location was dominated mainly by the directivity effect of the strike-slip earthquake, i.e. the propagation of wave energy along the fault in the rupture direction, whereas amplitude variations of the seismic radiation altered the preferred orientation. I suspect that the earthquake directivity and the asymmetry of the seismic radiation damaged hillslopes at those preferred locations, increasing landslide susceptibility. Hence a future weak triggering event, e.g. scattered rainfall, could trigger further landslides on those damaged hillslopes.
With the liberalization of the electricity market, uncertain prospects in climate policy, and strongly fluctuating prices for fuels, emission allowances, and power plant components, risk management has gained importance in power plant investment. This manifests itself in the increased use of probabilistic methods. For regulatory risks in particular, however, the classical, frequency-based concept of probability offers no handle for risk quantification. In this thesis, power plant investments and portfolios in Germany are valued using methods of Bayesian risk management. The Bayesian school of thought understands probability as a personal measure of uncertainty. Probabilities can thus be obtained solely from expert elicitation, without statistical data analysis. The interaction of uncertain value drivers was specified with a probabilistic DCF (discounted cash flow) model and implemented as an influence diagram with about 1,200 objects. Since the degree to which fuel and CO2 costs are passed through, and thus the level of the contribution margins earned by the power plants, is determined by competition, a purely firm-level analysis of the power plants is insufficient. Electricity prices and utilization rates are therefore determined with heuristics based on each plant's individual position in the merit order, i.e. the dispatch sequence ranked by short-run marginal costs. To this end, 113 large thermal power plants in Germany were combined into a merit order. The model yields probability distributions for key quantities such as the net present values of existing portfolios as well as the levelized costs of electricity and net present values of individual investments (hard coal and lignite plants with and without CO2 capture, as well as combined-cycle gas plants). The value of the existing portfolios of RWE, E.ON, EnBW, and Vattenfall is determined primarily by the contributions of the lignite and nuclear plants. Surprisingly, emissions trading does not translate into losses. This is due on the one hand to the additional profits of the nuclear plants, and on the other to the emission allowances allocated free of charge until 2012, which generate high windfall profits. In its concrete design, emissions trading thus turns out to be a profitable business overall. Over the remaining lifetime of the existing plants, the introduction of emissions trading results in a net present value advantage of 8.6 billion € in total from 2008 onwards. The present value advantages from the lifetime extension for nuclear plants held out by the Federal Government in 2009 are of a similar magnitude. With an eight-year lifetime extension, present value advantages of 8 to 15 billion € would arise, depending on the CO2 price level. With higher CO2 prices and lifetime extensions of up to 28 years, an additional 25 billion € or more would accrue. In the long term, it appears questionable whether the current market design still provides incentives for investment in fossil power plants. Investments in lignite and combined-cycle plants that are still profitable at the beginning of the NAP 2 period become increasingly unprofitable as the free allocation of emission allowances is phased out. Profitability is steadily undermined further by the electricity market effects of renewable energies and of retiring old gas- and oil-fired plants. Hard coal plants prove to be a risky investment even with initial free allocation.
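To illustrate the merit-order heuristic described above, a toy dispatch stack might look as follows; the plant names, costs, and capacities are invented placeholders, not the 113-plant dataset of the thesis:

```python
# Illustrative merit-order price heuristic (toy numbers only).
plants = [  # (name, marginal_cost_eur_per_mwh, capacity_mw)
    ("nuclear", 10, 20000), ("lignite", 25, 18000),
    ("hard_coal", 40, 25000), ("ccgt", 55, 15000), ("oil", 90, 5000),
]

def clearing_price(demand_mw):
    # Dispatch plants in order of ascending short-run marginal cost;
    # the last plant needed to serve demand sets the price.
    served = 0
    for name, cost, cap in sorted(plants, key=lambda p: p[1]):
        served += cap
        if served >= demand_mw:
            return cost
    raise ValueError("demand exceeds total capacity")

print(clearing_price(60000))  # -> 40 (hard coal is the marginal plant)
```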
The identified incentive problems for new investment should not, however, be attributed to emissions trading; they result from electricity prices being oriented toward marginal costs. The incentive problem is greatest at moderate CO2 prices, though. It also applies to power plants with CO2 capture: although the expected abatement costs of CCS plants relative to conventional coal plants in 2025 are estimated at 25 €/t CO2 (lignite) and 38.5 €/t CO2 (hard coal), their construction only becomes profitable at CO2 prices of 50 and 77 €/t CO2, respectively. Whether and which power plant investments pay off in the long run is, however, ultimately decided politically and is hardly predictable even under highly idealized conditions.
The Sun is a star which, due to its proximity, has a tremendous influence on Earth. Since its earliest days, mankind has tried to "understand the Sun", and especially in the 20th century science uncovered many of the Sun's secrets by means of high-resolution observations and models. The Sun is an active star, and its activity, as expressed in its magnetic cycle, is closely related to the sunspot numbers. Flares play a special role because they release large amounts of energy on very short time scales. They are correlated with enhanced electromagnetic emission all over the spectrum. Furthermore, flares are sources of energetic particles. Hard X-ray observations (e.g., by NASA's RHESSI spacecraft) reveal that a large fraction of the energy released during a flare is transferred into the kinetic energy of electrons. However, the mechanism that accelerates a large number of electrons to high energies (beyond 20 keV) within fractions of a second is not yet understood. The thesis at hand presents a model for the generation of energetic electrons during flares that explains the electron acceleration based on parameters obtained from real ground- and space-based observations. According to this model, photospheric plasma flows build up electric potentials in the active regions of the photosphere. Usually these electric potentials are associated with electric currents closed within the photosphere. As a result of magnetic reconnection, however, a magnetic connection between regions of different magnetic polarity on the photosphere can be established through the corona. Due to the significantly higher electric conductivity in the corona, the photospheric circuit can then be closed via the corona. Subsequently a high electric current forms, which leads to the generation of hard X-ray radiation in the dense chromosphere. This idea is modelled and investigated by means of electric circuits. For this, the microscopic plasma parameters, the magnetic field geometry, and hard X-ray observations are used to derive parameters for macroscopic electric components, such as resistors, which are connected with each other. The model demonstrates that such a coronal electric current is correlated with large-scale electric fields, which can quickly accelerate electrons up to relativistic energies. The results of these calculations are encouraging: the electron fluxes predicted by the model agree with the electron fluxes deduced from the measured photon fluxes. Additionally, the model developed in this thesis proposes a new way to understand the observed double-footpoint hard X-ray sources.
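The circuit idea can be reduced to an Ohm's-law toy estimate. All values below are illustrative placeholders, not the observationally derived parameters of the thesis; the point is only that a small coronal resistance in series with a photospheric source yields a large current and dissipated power:

```python
# Toy flare-circuit estimate: a photospheric voltage source driving a
# current closed via the highly conductive corona (placeholder values).
U = 1e8            # photospheric potential drop in volts (assumed)
R_photo = 0.1      # photospheric resistance in ohm (assumed)
R_corona = 1e-3    # coronal resistance in ohm (assumed, much smaller)

I = U / (R_photo + R_corona)   # circuit current, ampere
P = U * I                      # dissipated power, watt
E_e = 20e3 * 1.602e-19         # 20 keV electron energy in joule
print(f"I = {I:.2e} A, P = {P:.2e} W, "
      f"electron rate <= {P / E_e:.2e} 1/s")  # upper bound if all power
                                              # went into 20 keV electrons
```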
Electrospray ionization (ESI) is one of the most widespread ionization techniques for liquid samples in mass and ion mobility (IM) spectrometry. Owing to its soft ionization, ESI is predominantly used for sensitive, complex molecules in biology and medicine, but it is also applicable to a very broad range of substance classes. IM spectrometry was originally developed for the detection of gaseous samples, which are mainly ionized by radioactive sources. It is the only analytical method in which isomers can be separated in real time and identified directly via their characteristic IM. ESI was introduced into IM spectrometry in the 1990s by the Hill group. However, the combination has so far been used by only a few groups and therefore still has a high development potential. A promising field of application is its use in high-performance liquid chromatography (HPLC) for multidimensional separation. Today, HPLC is the standard method for separating complex samples in routine analysis. However, HPLC separations are often lengthy, and the use of different eluents, high flow rates, buffers, and eluent gradients places high demands on the detectors. ESI IM spectrometry has already been used as an HPLC detector in a few studies, but was hitherto limited to flow-rate splitting or low eluent flow rates.
In this cumulative doctoral thesis, an ESI IM spectrometer was therefore developed for the first time as an HPLC detector for the flow-rate range of 200-1500 µl/min. On the basis of five publications, (1) the suitability of the spectrometer as an HPLC detector was established through a comprehensive characterization, (2) selected complex separations were presented, (3) its application to reaction monitoring was demonstrated, and (4, 5) possible further developments were shown.
With the self-developed ESI IM spectrometer, typical HPLC conditions such as water contents in the eluent of up to 90%, buffer concentrations of up to 10 mM, and detection limits down to 50 nM were successfully achieved. Furthermore, the complex separations (24 pesticides/18 amino acids) showed that HPLC and IM spectrometry possess a high orthogonality. An effective peak capacity of 240 was thus realized. Substances co-eluting from the HPLC column could be separated by drift time and identified via their IM, so that total separation times could be reduced considerably. The applicability of the ESI IM spectrometer to monitoring chemical syntheses was demonstrated with a three-step reaction. The most important educts, intermediates, and products of all steps could be identified. Quantitative evaluation was possible both via a short HPLC pre-separation and, without HPLC, through the development of a dedicated calibration procedure that accounts for charge competition in ESI. In the second part of the thesis, two further developments of the spectrometer are presented. One option is reducing the pressure into the intermediate range (300-1000 mbar) with the aim of lowering the required voltages. Using scattered-light images and current-voltage curves, a reduced release of analyte ions from the droplets was observed at lower pressures. These losses could, however, be compensated by higher electric field strengths, so that equal detection limits were achieved at 500 mbar and at 1 bar. The second development is a novel ion gate with pulsed switching, which enabled a doubling of the resolution to R > 100 at equal sensitivity. A conceivable application in peptide analysis was demonstrated with considerable peptide resolutions of R = 90.
The present work deals with the characterization of seismicity on the basis of earthquake catalogs. New data-analysis methods are developed to shed light on whether the seismic dynamics is governed by a stochastic or a deterministic process and what follows from this for the predictability of strong earthquakes. It is shown that seismically active regions are frequently characterized by nonlinear determinism. This at least opens the possibility of short-term prediction. The occurrence of seismic quiescence is often interpreted as a precursory phenomenon of strong earthquakes. A new method is presented that enables a systematic spatiotemporal mapping of seismic quiescence phases. The statistical significance is determined using the concept of surrogate data. As a result, clear correlations between periods of seismic quiescence and strong earthquakes are obtained. Nevertheless, the significance is not high enough to enable a prediction in the sense of a statement about the location, time, and magnitude of an expected main shock.
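A schematic version of such a surrogate-data significance test, with a toy catalog and a deliberately simplified quiescence statistic, could be sketched as follows:

```python
# Hedged sketch of a surrogate-data test for seismic quiescence
# (toy catalog; simplified stand-in for the mapping method described above).
import numpy as np
rng = np.random.default_rng(0)

def rate_min(times, window, t0, t1):
    # Smallest event count over sliding windows -> candidate quiescence.
    starts = np.arange(t0, t1 - window, window / 10)
    return min(((times >= s) & (times < s + window)).sum() for s in starts)

times = np.sort(rng.uniform(0, 100, 500))        # toy catalog (years)
obs = rate_min(times, window=5.0, t0=0, t1=100)

# Surrogates: shuffle inter-event times to destroy any temporal structure.
surr = []
for _ in range(200):
    iet = rng.permutation(np.diff(times))
    surr.append(rate_min(np.cumsum(iet), 5.0, 0, 100))
p_value = np.mean([s <= obs for s in surr])      # small p: quiescence is
print(obs, p_value)                              # deeper than chance
```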
Species respond to environmental change by dynamically adjusting their geographical ranges. Robust predictions of these changes are prerequisites for informing dynamic and sustainable conservation strategies. Correlative species distribution models (SDMs) relate species' occurrence records to prevailing environmental factors to describe the environmental niche. They have been widely applied in the global change context as they have comparably low data requirements and allow rapid assessments of potential future species' distributions. However, due to their static nature, transient responses to environmental change are essentially ignored in SDMs. Furthermore, neither dispersal nor demographic processes and biotic interactions are explicitly incorporated. Therefore, it has often been suggested to link statistical and mechanistic modelling approaches in order to make more realistic predictions of species' distributions for scenarios of environmental change. In this thesis, I present two different ways of such linkage. (i) Mechanistic modelling can act as a virtual playground for testing statistical models and allows extensive exploration of specific questions. I promote this 'virtual ecologist' approach as a powerful evaluation framework for testing sampling protocols, analyses, and modelling tools. I also employ such an approach to systematically assess the effects of transient dynamics and of ecological properties and processes on the prediction accuracy of SDMs for climate change projections. That way, relevant mechanisms are identified that shape a species' response to altered environmental conditions and which should hence be considered when projecting species' distributions through time. (ii) I supplement SDM projections of potential future habitat for black grouse in Switzerland with an individual-based population model. By explicitly considering complex interactions between habitat availability and demographic processes, this allows for a more direct assessment of the expected population response to environmental change and the associated extinction risks. However, predictions were highly variable across simulations, emphasising the need for principled evaluation tools like sensitivity analysis to assess uncertainty and robustness in dynamic range predictions. Furthermore, I identify data coverage of the environmental niche as a likely cause of contrasting range predictions between SDM algorithms. SDMs may fail to make reliable predictions for truncated and edge niches, meaning that portions of the niche are not represented in the data or niche edges coincide with data limits. Overall, my thesis contributes to an improved understanding of uncertainty factors in predictions of range dynamics and presents ways to deal with these. Finally, I provide preliminary guidelines for predictive modelling of dynamic species' responses to environmental change, identify key challenges for future research, and discuss emerging developments.
This study deals with the innovative thought of R. Zadok HaKohen of Lublin. According to most scholars, R. Zadok continues the teaching of his master, R. Mordechai Yosef Leiner of Izbica, and presents an existentialist fatalism: a person has freedom against accepted law, following the divine will revealed in one's heart, even in one's desires, which may transgress the Halakha; however, a person does not determine the will revealed at the root of his soul, but merely uncovers it. R. Zadok's writings contain many expressions of this fatalism, in both content and form; at the same time, he also makes numerous statements about free human choice and its creative power to establish and determine a person's root, and to innovate within and influence the divine worlds and this world. This study focuses on these passages, whose centrality has so far been neglected in scholarship, and hence offers a renewed understanding of his thought.
R. Zadok's position can be explained by means of a paradoxical conception, which asserts the full force of both contradictory poles and even a mutual influence between them, creating a difficult yet fruitful tension: in contrast to the formal, intellectual, and a priori process of fatalism, in which divine 'knowledge' cancels free human 'choice', R. Zadok presents a fatality that identifies reality itself with the divine knowledge and will present in everything. In this fatality, knowledge does not cancel choice; on the contrary, without divine knowledge nothing would exist, and therefore only the presence of knowledge within choice makes its actual existence possible.
R. Zadok's ontological conception is present not only in the direct content of his words but also indirectly, in his manner of homiletic exposition and in the meaning he gives to the concepts he discusses; hence, this fatality is also revealed in further domains marked by the gap between the absolute and necessary dimension ('knowledge') and the contingent, transient, and relative dimension ('choice'): falsehood, imagination, evil, sin, suffering, and more are indeed contingent and relative compared with the absoluteness of truth, goodness, and so on; but according to R. Zadok, God wills them as such — that is, they possess a reality that is not absolute, but a reality precisely as contingent. Yet this reality does not affirm them as they are; rather, it creates a transformation within them. For example, evil neither becomes absolute good nor remains evil, but becomes a 'very good' that, according to R. Zadok, is greater than ordinary good.
From this also arises the reverse influence, that of choice upon knowledge. According to R. Zadok, once the contingency and relativity of 'choice' — which is in fact the essence of creation and the essence of man and his deeds — has received its reality, it even has the power to add, as it were, to the fixed divine absoluteness of 'knowledge': the 'affliction' (nega) of sin or suffering itself becomes, by a transposition of the same Hebrew letters, a 'delight' (oneg) greater than ordinary pleasure; man has the power to influence the upper worlds by issuing decrees and by annulling divine decrees; he can even influence this world, through its daily renewal by means of the innovations of the Oral Torah, and through the sanctification of the new month, which is capable of altering the movement of the heavenly constellations. On the one hand, the human creativity of the innovations of the Oral Torah is contained within the divine truth hidden in the divine Written Torah and merely uncovers it; but according to R. Zadok, on the other hand, it turns out that the source of the innovations of the Oral Torah is in fact higher than the source of the Written Torah, and it is they that create and determine it.
R. Zadok presents two central concepts for his paradox. In the ontological dimension, the 'complete unification' of God, in which the contingent duality of the 'lower unification' (between God and His creatures, and the choice it enables) exists paradoxically together with the absolute unity and divine 'knowledge' of the 'upper unification'.
In the dimension of man's existential condition, the 'hidden root': in contrast to his master R. Mordechai Yosef, R. Zadok argues that man determines his fate through his freely chosen, contingent deeds; yet at the same time, like his master, he also holds that man's fixed root is determined by divine 'knowledge', which determines his deeds for good or otherwise. Above both of these, however, R. Zadok presents an additional, 'hidden' root, higher than the fixed root: it is indeed an absolute root ('knowledge'), yet it is determined and constituted by man's freely chosen deeds, analogously to the divine creation ex nihilo.
With recent advances in the area of information extraction, automatically extracting structured information from vast amounts of unstructured textual data has become an important task, since it is infeasible for humans to capture all of this information manually. Named entities (e.g., persons, organizations, and locations), which are crucial components of texts, are usually the subjects of structured information extracted from textual documents. Therefore, the task of named entity mining receives much attention. It consists of three major subtasks: named entity recognition, named entity linking, and relation extraction.
These three tasks build up the entire pipeline of a named entity mining system, where each of them has its own challenges and can be employed for further applications. As a fundamental task in the natural language processing domain, studies on named entity recognition have a long history, and many existing approaches produce reliable results. The task aims at extracting mentions of named entities in text and identifying their types. Named entity linking has recently received much attention with the development of knowledge bases that contain rich information about entities. The goal is to disambiguate mentions of named entities and to link them to the corresponding entries in a knowledge base. Relation extraction, as the final step of named entity mining, is a highly challenging task that extracts semantic relations between named entities, e.g., the ownership relation between two companies.
In this thesis, we review the state of the art of the named entity mining domain in detail, including valuable features, techniques, evaluation methodologies, and so on. Furthermore, we present two of our own approaches, which focus on the named entity linking and relation extraction tasks, respectively.
To solve the named entity linking task, we propose the entity linking technique BEL, which operates on a textual range of relevant terms and aggregates decisions from an ensemble of simple classifiers. Each of the classifiers operates on a randomly sampled subset of the above range. In extensive experiments on hand-labeled and benchmark datasets, our approach outperformed state-of-the-art entity linking techniques in terms of both quality and efficiency.
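The ensemble idea can be caricatured in a few lines of Python; the overlap-based scoring below is a toy stand-in, not BEL's actual classifiers or features:

```python
# Toy sketch of ensemble entity linking: many weak context classifiers,
# each seeing a random subset of the surrounding terms, vote on candidates.
import random
from collections import Counter

def link(mention_context, candidates, n_voters=15, subset=5):
    votes = Counter()
    for _ in range(n_voters):
        terms = random.sample(mention_context, min(subset, len(mention_context)))
        # toy classifier: pick the candidate whose term profile overlaps most
        best = max(candidates, key=lambda c: len(set(terms) & candidates[c]))
        votes[best] += 1
    return votes.most_common(1)[0][0]

candidates = {
    "Apple_Inc.": {"iphone", "company", "cupertino", "technology"},
    "Apple_(fruit)": {"tree", "fruit", "pie", "orchard"},
}
print(link(["iphone", "company", "stock", "technology", "ceo"], candidates))
```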
For the task of relation extraction, we focus on extracting a particularly difficult group of relation types: business relations between companies. These relations can be used to gain valuable insight into the interactions between companies and to perform complex analytics, such as predicting risk or valuing companies. Our semi-supervised strategy can extract business relations between companies based on only a few user-provided seed company pairs. In doing so, we also provide a solution to the problem of determining the direction of asymmetric relations, such as the ownership_of relation. We improve the reliability of the extraction process with a holistic pattern identification method that classifies the generated extraction patterns. Our experiments show that we can accurately and reliably extract new entity pairs occurring in the target relation using as few as five labeled seed pairs.
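A stripped-down sketch of such seed-based bootstrapping (omitting the holistic pattern classification that the thesis uses to improve reliability) might look like this:

```python
# Hedged sketch of seed-based pattern bootstrapping for relation extraction.
def bootstrap(corpus, seeds, rounds=3):
    pairs, patterns = set(seeds), set()
    for _ in range(rounds):
        # 1. harvest textual contexts in which known pairs co-occur
        for a, b, ctx in corpus:
            if (a, b) in pairs:
                patterns.add(ctx)
        # 2. use harvested patterns to accept new ordered pairs;
        #    the pair order encodes the direction of the relation
        for a, b, ctx in corpus:
            if ctx in patterns:
                pairs.add((a, b))
    return pairs

corpus = [("AlphaCo", "BetaCo", "acquired"),         # toy sentences reduced
          ("GammaCo", "DeltaCo", "acquired"),        # to (subj, obj, context)
          ("AlphaCo", "BetaCo", "bought a stake in")]
print(bootstrap(corpus, seeds={("AlphaCo", "BetaCo")}))
```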
Throughout the last ~3 million years, the Earth's climate system was characterised by cycles of glacial and interglacial periods. The current warm period, the Holocene, is comparably stable and stands out from this long-term cyclicality. However, since the industrial revolution, the climate has been increasingly affected by a human-induced increase in greenhouse gas concentrations. While instrumental observations are used to describe changes over the past ~200 years, indirect observations via proxy data are the main source of information beyond this instrumental era. These data are indicators of past climatic conditions, stored in palaeoclimate archives around the Earth. The proxy signal is, however, also affected by processes independent of the prevailing climatic conditions. In particular, in sedimentary archives such as marine sediments and polar ice sheets, material may be redistributed during or after the initial deposition and the subsequent formation of the archive. This leads to noise in the records, challenging reliable reconstructions on local or short time scales. This dissertation characterises the initial deposition of the climatic signal and quantifies the resulting archive-internal heterogeneity and its influence on the observed proxy signal, in order to improve the representativity and interpretation of climate reconstructions from marine sediments and ice cores.
To this end, the horizontal and vertical variation in the radiocarbon content of a box core from the South China Sea is investigated. The three-dimensional resolution is used to quantify the true uncertainty in radiocarbon age estimates from planktonic foraminifera with an extensive sampling scheme, including different sample volumes and replicated measurements of batches of small and large numbers of specimens. An assessment of the variability stemming from sediment mixing by benthic organisms reveals strong internal heterogeneity. Hence, sediment mixing leads to substantial time uncertainty in proxy-based reconstructions, with error terms two to five times larger than previously assumed.
A second three-dimensional analysis, of the upper snowpack, provides insights into the heterogeneous signal deposition and imprint in snow and firn. A new study design combining a structure-from-motion photogrammetry approach with two-dimensional isotopic data is applied at a study site in the accumulation zone of the Greenland Ice Sheet. The photogrammetry method reveals the intermittent character of snowfall and a layer-wise snow deposition, with substantial contributions of wind-driven erosion and redistribution to the final, spatially variable accumulation, and illustrates the evolution of stratigraphic noise at the surface. The isotopic data show the preservation of stratigraphic noise within the upper firn column, leading to a spatially variable climate signal imprint and heterogeneous layer thicknesses. Additional post-depositional modifications due to snow-air exchange are also investigated, but without a conclusive quantification of their contribution to the final isotopic signature.
Finally, this characterisation and quantification of the complex signal formation in marine sediments and polar ice contributes to a better understanding of the signal content of proxy data, which is needed to assess the natural climate variability during the Holocene.
In this work, new results for exploiting the recurrence properties of quasiperiodic dynamical systems are presented by means of a two-dimensional visualization technique, recurrence plots (RPs). Quasiperiodicity is the simplest form of dynamics exhibiting nontrivial recurrences, which are common in many nonlinear systems. The concept of recurrence was introduced to study the restricted three-body problem, and it is very useful for the characterization of nonlinear systems. I have analyzed the recurrence patterns of systems with quasiperiodic dynamics in detail, both analytically and numerically. Based on a theoretical analysis, I propose a new procedure to distinguish quasiperiodic dynamics from chaos. This algorithm is particularly useful in the analysis of short time series. Furthermore, this approach proves efficient in recognizing regular and chaotic trajectories of dynamical systems with mixed phase space. Regarding applications to real situations, I have shown the capability and validity of this method by analyzing time series from fluid experiments.
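For reference, the recurrence plot underlying this analysis is simple to compute; the sketch below uses a scalar quasiperiodic signal with two incommensurate frequencies (all choices illustrative):

```python
# Minimal recurrence plot: R[i, j] = 1 where states recur within eps.
import numpy as np

def recurrence_plot(x, eps):
    d = np.abs(x[:, None] - x[None, :])   # pairwise distances (scalar series)
    return (d <= eps).astype(int)

t = np.linspace(0, 40 * np.pi, 2000)
x = np.sin(t) + np.sin(np.sqrt(2) * t)   # quasiperiodic: incommensurate tones
R = recurrence_plot(x, eps=0.1)
# Quasiperiodic dynamics shows up as uninterrupted diagonal lines in R
# with non-equidistant spacing, unlike the short, broken diagonals of chaos.
```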
Lava domes are severely hazardous, mound-shaped extrusions of highly viscous lava and commonly erupt at many active stratovolcanoes around the world. Due to gradual growth and flank oversteepening, such lava domes regularly experience partial or full collapses, resulting in destructive and far-reaching pyroclastic density currents. They are also associated with cyclic explosive activity, as the complex interplay of cooling, degassing, and solidification of dome lavas regularly causes gas pressurization in the dome or the underlying volcanic conduit. Lava dome extrusions can last from days to decades, further highlighting the need for accurate and reliable monitoring data.
This thesis aims to improve our understanding of lava dome processes and to contribute to the monitoring and prediction of hazards posed by these domes. The recent rise and sophistication of photogrammetric techniques allows for the extraction of observational data in unprecedented detail and creates ideal tools for accomplishing this purpose. Here, I study natural lava dome extrusions as well as laboratory-based analogue models of lava dome extrusions, and employ photogrammetric monitoring by Structure-from-Motion (SfM) and Particle-Image-Velocimetry (PIV) techniques. I primarily use aerial photography data obtained by helicopter, airplanes, Unoccupied Aircraft Systems (UAS), or ground-based timelapse cameras. Firstly, by combining a long time series of overflight data at Volcán de Colima, México, with seismic and satellite radar data, I construct a detailed timeline of lava dome and crater evolution. Using a numerical model, the impact of the extrusion on dome morphology and loading stress is further evaluated, and an impact on the growth direction is identified, bearing important implications for the location of collapse hazards. Secondly, sequential overflight surveys at the Santiaguito lava dome, Guatemala, reveal surface motion data in high detail. I quantify the growth of the lava dome and the movement of a lava flow, showing complex motions that occur on different timescales, and I provide insight into rock properties relevant for hazard assessment inferred purely from photogrammetric processing of remote sensing data. Lastly, I recreate artificial lava dome and spine growth using analogue modelling under controlled conditions, providing new insights into lava extrusion processes and structures as well as the conditions in which they form.
These findings demonstrate the capability of photogrammetric data analyses to successfully monitor lava dome growth and evolution, while highlighting the advantages of complementary modelling methods for explaining the observed phenomena. The results presented herein further yield important new insights into, and implications for, the hazards posed by lava domes.
This dissertation examines the integration of incongruent visual-scene and morphological-case information (“cues”) in building thematic-role representations of spoken relative clauses in German.
Addressing the mutual influence of visual and linguistic processing, the Coordinated Interplay Account (CIA) describes a two-step mechanism supporting visuo-linguistic integration (Knoeferle & Crocker, 2006, Cog Sci). However, the outcomes and dynamics of integrating incongruent thematic-role representations from distinct sources have scarcely been investigated. Further, there is evidence that both second-language (L2) and older speakers may rely on non-syntactic cues relatively more than first-language (L1)/young speakers. Yet, the role of visual information in thematic-role comprehension has not been measured in L2 speakers, and only to a limited extent across the adult lifespan.
Thematically unambiguous canonically ordered (subject-extracted) and noncanonically ordered (object-extracted) spoken relative clauses in German (see 1a-b) were presented in isolation and alongside visual scenes conveying either the same (congruent) or the opposite (incongruent) thematic relations as the sentence did.
(1a) Das ist der Koch, der die Braut verfolgt.
This is the.NOM cook who.NOM the.ACC bride follows
'This is the cook who is following the bride.'
(1b) Das ist der Koch, den die Braut verfolgt.
This is the.NOM cook whom.ACC the.NOM bride follows
'This is the cook whom the bride is following.'
The relative contribution of each cue to thematic-role representations was assessed with agent identification. Accuracy and latency data were collected post-sentence from a sample of L1 and L2 speakers (Zona & Felser, 2023), and from a sample of L1 speakers from across the adult lifespan (Zona & Reifegerste, under review). In addition, the moment-by-moment dynamics of thematic-role assignment were investigated with mouse tracking in a young L1 sample (Zona, under review).
The following questions were addressed: (1) How do visual scenes influence thematic-role representations of canonical and noncanonical sentences? (2) How does reliance on visual-scene, case, and word-order cues vary in L1 and L2 speakers? (3) How does reliance on visual-scene, case, and word-order cues change across the lifespan?
The results showed reliable effects of the incongruence between visually and linguistically conveyed thematic relations on thematic-role representations. Incongruent (vs. congruent) scenes yielded slower and less accurate responses to agent-identification probes presented post-sentence. The recently inspected agent was considered the most likely agent from ~300 ms after trial onset, and the convergence of visual scenes and word order enabled comprehenders to assign thematic roles predictively.
L2 (vs. L1) participants relied more on word order overall. In response to noncanonical clauses presented with incongruent visual scenes, sensitivity to case predicted the size of incongruence effects better than L1-L2 grouping. These results suggest that the individual’s ability to exploit specific cues might predict their weighting.
Sensitivity to case was stable throughout the lifespan, while visual effects increased with increasing age and were modulated by individual interference-inhibition levels. Thus, age-related changes in comprehension may stem from stronger reliance on visually (vs. linguistically) conveyed meaning.
These patterns represent evidence for a recent-role preference – i.e., a tendency to re-assign visually conveyed thematic roles to the same referents in temporally coordinated utterances. The findings (i) extend the generalizability of CIA predictions across stimuli, tasks, populations, and measures of interest, (ii) contribute to specifying the outcomes and mechanisms of detecting and indexing incongruent representations within the CIA, and (iii) speak to current efforts to understand the sources of variability in sentence comprehension.
Institutional education presents autistic learners with manifold and specific obstacles. This holds particularly in the context of inclusion, whose relevance is established not least by the United Nations Convention on the Rights of Persons with Disabilities.
This thesis discusses numerous particularities relevant to learning in the context of autism and shows discrepancies with institutional teaching concepts, which are not always sufficiently appropriate. A central thesis is that the unusually intense attention that autistic people devote to their special interests can be harnessed to facilitate learning with externally assigned content. Building on this, possible solutions are discussed, resulting in a novel concept for a digital, multi-device learning game.
A key challenge in designing game-based learning lies in adequately embedding the learning content in a captivating narrative context. Using the example of exercises in the emotional interpretation of facial expressions, which are used for learning socio-emotional skills particularly within therapy concepts for autism, an appropriate narrative is presented that allows this very specific learning content to be integrated with minimal disruption.
The effects of the individual design elements are examined by means of a prototype learning game. Building on this, a quantitative study shows the game's good acceptance and usability and, above all, confirms the comprehensibility of the narrative and the game elements. A further focus lies on the minimally invasive investigation of possible disruptions of the gaming experience caused by switching between different devices, for which an innovative measurement method was developed.
In sum, this thesis illuminates the significance and the limits of game-based approaches for autistic learners. A large part of the concepts presented can be transferred to other kinds of learning scenarios. The technical framework developed for realizing narrative learning paths is likewise prepared for use in further learning scenarios, particularly in institutional contexts.
This research addressed the question of whether it is possible to simplify current microcontact printing systems for the production of anisotropic building blocks or patchy particles by using common chemicals, while still maintaining reproducibility, high precision, and tunability of the Janus balance.
Chapter 2 introduced the microcontact printing materials as well as their defined electrostatic interactions. In particular, polydimethylsiloxane stamps, silica particles, and high-molecular-weight polyethylenimine ink were mainly used in this research. All of these components are commercially available in large quantities and affordable, which gives this approach huge potential for further up-scaling developments. The benefits of polymeric over molecular inks were described, including their flexible influence on the printing pressure. With this alteration of the µCP concept, a new solvent-assisted particle release mechanism enabled the switch from two-dimensional surface modification to three-dimensional structure printing on colloidal silica particles, without changing printing parameters or starting materials. This effect opened the way to use the internal volume of the obtained patches for the incorporation of nano-additives, introducing additional physical properties into the patches without altering the surface chemistry.
The scope and achievable range of this system were further investigated in chapter 3, which gives detailed information about patch geometry parameters including diameter, thickness, and yield. For this purpose, silica particles in a size range between 1 µm and 5 µm were printed with different ink concentrations to vary the Janus balance of these single-patched particles. A necessary intermediate step for the production of trivalent particles using "sandwich" printing, consisting of air-plasma treatment, was discovered, and comparative studies concerning the patch geometry of single- and double-patched particles were conducted. Additionally, the use of structured PDMS stamps during printing was described. These results demonstrate the excellent precision of this approach and open the pathway to even greater accuracy, as further parameters, e.g. humidity and temperature during stamp loading, can be finely tuned and investigated.
The performance of the synthesized anisotropic colloids was further investigated in chapter 4, starting with behaviour studies in alcoholic and aqueous dispersions. Here, the stability of the applied patches was studied over a broad pH range, revealing a release mechanism based on disabling the electrostatic bonding between the particle surface and the polyelectrolyte ink. Furthermore, the absence of strong attractive forces between divalent particles in water was investigated using XPS measurements. These results led to the conclusion that the transfer of small PDMS oligomers onto the patch surface shields charges and thereby prevents colloidal agglomeration. Building on this knowledge, further patch modifications for particle self-assembly were introduced, including physical approaches using magnetic nano-additives, chemical patch functionalization with avidin-biotin or the light-responsive cyclodextrin-arylazopyrazole coupling, as well as particle surface modification for the synthesis of highly amphiphilic colloids. The successful coupling and its efficiency, stability, and behaviour in different solvents were evaluated to find a suitable coupling system for future assembly experiments. These results open up the possibility of more sophisticated structures through colloidal self-assembly.
Certain findings needed further analysis to understand their underlying mechanics, including the relatively broad patch-diameter distribution and the decreasing patch thickness for smaller silica particles. Mathematical treatments of both effects are introduced in chapter 5. First, they demonstrate the connection between the naturally occurring particle size distribution and the broadening of the patch diameter, indicating an even higher precision of this µCP approach. Second, they explain the increase in contact area between particle and ink surface due to denser particle packing, which leads to a decrease in printing pressure for smaller particles.
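One plausible way to formalize the first connection, stated here as an illustrative spherical-cap geometry rather than necessarily the derivation of chapter 5, treats the printed patch as the contact circle of a sphere of radius R immersed to depth h in the ink film:

```latex
% Illustrative spherical-cap contact geometry (assumption):
% a: patch radius, R: particle radius, h: immersion depth into the ink film
a = \sqrt{R^{2}-(R-h)^{2}} = \sqrt{2Rh-h^{2}} \approx \sqrt{2Rh}
    \quad (h \ll R),
\qquad
\frac{\delta a}{a} \approx \frac{1}{2}\,\frac{\delta R}{R}
```

At a fixed immersion depth, a spread δR in particle radius thus maps onto a proportional broadening of the patch radius, consistent with the connection stated above.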
These calculations ultimately led to the development of a new mechanical microcontact printing approach that uses centrifugal forces for precise pressure control and excellent parallel alignment of the printing substrates. First results with this device, and their comparison with the previous by-hand experiments, conclude this research. They furthermore display the advantages of such a device for future applications using a mechanical printing approach, especially for accessing even smaller nanoparticles with great precision and excellent yield.
In conclusion, this work demonstrates the successful adaptation of the µCP approach, using commercially available and affordable silica particles and polyelectrolytes, for high flexibility, reduced costs, and greater scale-up potential. Furthermore, it was possible to increase the modification potential by introducing three-dimensional patches that provide additional functionalization volume. While maintaining high colloidal stability, the different coupling systems demonstrated the self-assembly capabilities of this toolbox for anisotropic particles.
As land-cover conversion continues to expand into ever more remote areas of the humid tropics, montane rainforests are increasingly threatened. In the south Ecuadorian Andes, they are subject not only to man-made disturbances but also to naturally occurring landslides. I was interested in the impact of these ecosystem dynamics on a key parameter of the hydrologic cycle, the soil saturated hydraulic conductivity (synonym: permeability; Ks from here on), because it is a sensitive indicator of soil disturbance. My general objective was to quantify the effects of the regional natural and human disturbances on the saturated hydraulic conductivity and to describe the resulting spatial-temporal patterns. The main hypotheses were: 1) disturbances cause an apparent displacement of the less permeable soil layer towards the surface, either due to a loss of the permeable surface soil after landsliding or as a consequence of surface soil compaction under cattle pastures; 2) 'recovery' from disturbance, either through landslide re-vegetation or through secondary succession after pasture abandonment, involves an apparent displacement of the less permeable layer back towards its original depth; and 3) disturbances cause a simplification of the Ks spatial structure, i.e. the spatially dependent random variation diminishes, while the subsequent recovery entails the re-establishment of the original structure. In my first study, I developed a synthesis of recent geostatistical research regarding its applicability to soil hydraulic data, including exploratory data analysis and variogram estimation techniques; I subsequently evaluated the results in terms of spatial prediction uncertainty. Concerning the exploratory data analysis, my main results were: 1) Gaussian uni- and bivariate distributions of the log-transformed data; 2) the existence of significant local trends; 3) no need for robust estimation; and 4) no anisotropic variation. I found in part considerable differences in the covariance parameters resulting from different variogram estimation techniques, which, in the framework of spatial prediction, were mainly reflected in the spatial connectivity of the Ks field. Ignoring the trend component or arbitrarily using robust estimators, however, would have the most severe consequences in this respect. Regarding variogram modeling, I advocated restricted maximum likelihood estimation because of its accuracy and its independence from the lags selected for experimental variograms. The second study dealt with the Ks spatial-temporal pattern in the sequences of natural and man-made disturbances characteristic of the montane rainforest study area. To investigate the disturbance effects on both the global means and the spatial structure of Ks, a combined design- and model-based sampling approach was used for field measurements at soil depths of 12.5, 20, and 50 cm (n=30-150 per depth) under landslides of different ages (2 and 8 years), under actively grazed pasture, under fallows following pasture abandonment (2 to 25 years of age), and under natural forest. Concerning global means, our main findings were: 1) global means of the soil permeability generally decrease with increasing soil depth; 2) no significant Ks differences can be observed among landslides or in comparison to the natural forest; 3) a distinct permeability decrease of two orders of magnitude occurs at shallow soil depths after forest conversion to pasture; and 4) the slow regeneration process after pasture abandonment requires at least one decade.
Regarding the Ks spatial structure, we found that 1) disturbances affect the Ks spatial structure in the topsoil, and 2) the largest differences in spatial patterns are associated with the subsoil permeability. In summary, the regional landslide activity seems to affect soil hydrology to a marginal extent only, which is in contrast to the pronounced drop of Ks after forest conversion. In the third study, we used this spatial-temporal information combined with local rain intensities to assess the partitioning of rainfall into vertical and lateral flowpaths under undisturbed, disturbed, and regenerating land-cover types. It turned out that 1) the montane rainforest is characterized by prevailing vertical flowpaths in the topsoil, which can switch to lateral directions below 20 cm depth for a small number of rain events, which may, however, transport a high portion of the annual runoff; 2) similar hydrological flowpaths occur under the landslides, except for a somewhat higher probability of impermeable-layer formation in the topsoil of a young landslide; and 3) pronounced differences in runoff components can be observed for the human disturbance sequence, involving the development of near-surface impeding layers for 24, 44, and 8 % of rain events for pasture, a two-year-old fallow, and a ten-year-old fallow, respectively.
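The partitioning logic can be pictured with a minimal sketch (all values invented for illustration; this is not the thesis data or analysis code): an event whose rainfall intensity exceeds Ks at a given depth cannot infiltrate further there and is deflected laterally above that layer.

```python
import numpy as np

# Illustrative only: compare event rain intensities with a Ks depth profile.
ks_profile = {12.5: 120.0, 20: 40.0, 50: 5.0}       # Ks in mm/h at depth (cm)
rain_intensity = np.array([2.0, 15.0, 48.0, 90.0])  # event intensities, mm/h

for depth, ks in sorted(ks_profile.items()):
    lateral = rain_intensity > ks                   # boolean mask per event
    print(f"depth {depth} cm: {lateral.mean():.0%} of events deflected laterally")
```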
Motivations and research objectives: During the passage of rain water through a forest canopy, two main processes take place. First, water is redistributed; second, its chemical properties change substantially. The rain water redistribution and the brief contact with plant surfaces result in a large variability of both throughfall and its chemical composition. Since throughfall and its chemistry influence a range of physical, chemical and biological processes at or below the forest floor, understanding throughfall variability and predicting throughfall patterns can potentially improve the understanding of near-surface processes in forest ecosystems. This thesis comprises three main research objectives. The first objective is to determine the variability of throughfall and its chemistry, and to investigate some of the controlling factors. Second, I explored throughfall spatial patterns. Finally, I attempted to assess the temporal persistence of throughfall and its chemical composition. Research sites and methods: The thesis is based on investigations in a tropical montane rain forest in Ecuador, and in lowland rain forest ecosystems in Brazil and Panama. The first two studies investigate both throughfall and throughfall chemistry following a deterministic approach. The third study investigates throughfall patterns with geostatistical methods and hence relies on a stochastic approach. Results and conclusions: Throughfall is highly variable. The variability of throughfall in tropical forests seems to exceed that of many temperate forests. These differences, however, do not solely reflect ecosystem-inherent characteristics; rather, they also mirror management practices. Apart from biotic factors that influence throughfall variability, rainfall magnitude is an important control. Throughfall solute concentrations and solute deposition are even more variable than throughfall. In contrast to throughfall volumes, the variability of solute deposition shows no clear differences between tropical and temperate forests; hence, biodiversity is not a strong predictor of solute deposition heterogeneity. Many other factors control solute deposition patterns, for instance, solute concentration in rainfall and the antecedent dry period. The temporal variability of the latter factors partly accounts for the low temporal persistence of solute deposition. In contrast, measurements of throughfall volume are quite stable over time. Results from the Panamanian research site indicate that wet and dry areas outlast consecutive wet seasons. At this research site, throughfall exhibited only weak or pure-nugget autocorrelation structures over the studied lag distances. A close look at the geostatistical tools at hand provided evidence that throughfall datasets, in particular those of large events, require robust variogram estimation if one wants to avoid outlier removal. This finding is important because all geostatistical throughfall studies published so far have analyzed their data using the classical, non-robust variogram estimator.
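The robustness issue behind the last finding can be made concrete by contrasting the classical Matheron estimator with the robust Cressie-Hawkins (1980) estimator for a single lag bin (a sketch on synthetic increments, not throughfall data):

```python
import numpy as np

# `d` holds pairwise increments z(x) - z(x+h) for one lag bin.
def matheron(d):
    """Classical estimator: half the mean squared increment."""
    return 0.5 * np.mean(d ** 2)

def cressie_hawkins(d):
    """Robust estimator of Cressie & Hawkins (1980); damps outliers."""
    n = d.size
    return np.mean(np.sqrt(np.abs(d))) ** 4 / (2.0 * (0.457 + 0.494 / n))

rng = np.random.default_rng(1)
d_clean = rng.normal(0.0, 1.0, 500)       # well-behaved increments
d_outlier = np.append(d_clean, 25.0)      # one gross outlier added

for est in (matheron, cressie_hawkins):
    print(f"{est.__name__:15s} clean={est(d_clean):.2f} outlier={est(d_outlier):.2f}")
# The classical estimate jumps after contamination; the robust one barely
# moves -- the behaviour that matters for large throughfall events.
```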
This study aims to explore cultural and religious aspects of the renewal of Jewish life in Berlin since 1989. The developments within the capital's Jewish community since the fall of the Wall and the collapse of the Soviet Union have led part of the Jewish population in Germany to re-engage with its own culture, religion and history. In the process, the plurality of cultural, literary and religious expressions of Jewish identities comes to light. The study traces this cultural and religious 'renaissance' that began in Berlin after 1989. Four key points characterize Jewish life in Berlin after 1989. First, since reunification Germany has taken on a new role as a possible country of immigration for Jews. Above all through the large-scale Jewish immigration from the states of the former Soviet Union since the 1990s, Germany has gradually come to be recognized as an important centre of the European diaspora. Second, although the Shoah remains deeply anchored in the memory of the Jewish community, most children and grandchildren of Shoah survivors refuse to define their Jewish identity exclusively through the Shoah. To rediscover and reclaim their cultural, religious and historical heritage, they found Jewish groups and institutions in Berlin which in most cases emerge as alternatives to the official Jewish Community (Jüdische Gemeinde): artists' groups, Jewish cultural associations, conferences and panel discussions, religious congregations and houses of study. Third, while the official Jewish Community thereby loses its standing as the sole representative of Berlin's Jewish community, this cultural and religious 'renaissance' outside the official structures of the Gemeinde also means a growing plurality and diversification of the Jewish community in Berlin. Fourth, Berlin plays the leading role in this process. Today many formerly Jewish places are being revived: synagogues are rediscovered and renovated, memorials are built, city tours on the trail of 'Jewish Berlin' are organized, rabbinical seminaries are newly founded. Berlin's topography also forms a source of inspiration for Jewish (and non-Jewish) writers and artists. The analysis of the religious initiatives, literary works and cultural productions that emerged after 1989 serves to illuminate aspects of this cultural and religious 'renaissance' in Berlin.
This work incorporates three treatises which are commonly concerned with a stochastic theory of the Lyapunov exponents. With the help of this theory, universal scaling laws are investigated which appear in coupled chaotic and disordered systems. First, two continuous-time stochastic models for weakly coupled chaotic systems are introduced to study the scaling of the Lyapunov exponents with the coupling strength (coupling sensitivity of chaos). By means of the Fokker-Planck formalism, scaling relations are derived, which are confirmed by results of numerical simulations. Next, coupling sensitivity is shown to exist for coupled disordered chains, where it appears as a singular increase of the localization length. Numerical findings for coupled Anderson models are confirmed by analytic results for coupled continuous-space Schrödinger equations. The resulting scaling relation of the localization length resembles the scaling of the Lyapunov exponent of coupled chaotic systems. Finally, the statistics of the exponential growth rate of the linear oscillator with parametric noise are studied. It is shown that the distribution of the finite-time Lyapunov exponent deviates from a Gaussian one. By means of the generalized Lyapunov exponents, the parameter range is determined where the non-Gaussian part of the distribution is significant and multiscaling becomes essential.
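For intuition, coupling sensitivity can be probed numerically in a discrete-time analogue (two symmetrically coupled logistic maps with the Benettin tangent-space method; an illustrative sketch, not the continuous-time stochastic models of the thesis). The prediction is a singular change of the exponents, roughly proportional to 1/|ln ε|, as the coupling ε tends to zero.

```python
import numpy as np

def lyap_max(eps, n=100_000, burn=1_000):
    """Largest Lyapunov exponent of two coupled logistic maps (r = 4)."""
    f = lambda u: 4.0 * u * (1.0 - u)
    df = lambda u: 4.0 - 8.0 * u
    x, y = 0.3, 0.7
    v = np.array([1.0, 0.0])                       # tangent vector
    acc = 0.0
    for i in range(n + burn):
        jac = np.array([[(1 - eps) * df(x), eps * df(y)],
                        [eps * df(x), (1 - eps) * df(y)]])
        x, y = (1 - eps) * f(x) + eps * f(y), eps * f(x) + (1 - eps) * f(y)
        v = jac @ v
        norm = np.linalg.norm(v)
        v /= norm                                  # renormalize, log growth
        if i >= burn:
            acc += np.log(norm)
    return acc / n

# Compare with the uncoupled value ln(2) of the logistic map at r = 4.
for eps in (1e-6, 1e-4, 1e-2):
    print(f"eps={eps:.0e}: lambda_max - ln2 = {lyap_max(eps) - np.log(2):+.4f}")
```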
Geometric electroelasticity
(2014)
In this work a differential geometric formulation of the theory of electroelasticity is developed which also includes thermal and magnetic influences. We study the motion of bodies consisting of an elastic material that are deformed by the influence of mechanical forces, heat and an external electromagnetic field. To this end, physical balance laws (conservation of mass, balance of momentum, angular momentum and energy) are established. These provide an equation that describes the motion of the body during the deformation. Here the body and the surrounding space are modeled as Riemannian manifolds, and we allow the body to have a lower dimension than the surrounding space. In this way one is not (as usual) restricted to describing the deformation of three-dimensional bodies in a three-dimensional space, but can also describe the deformation of membranes and deformations in a curved space. Moreover, we formulate so-called constitutive relations that encode the properties of the material used. Balance of energy, as a scalar law, can easily be formulated on a Riemannian manifold. The remaining balance laws are then obtained by demanding that balance of energy be invariant under the action of arbitrary diffeomorphisms on the surrounding space. This generalizes a result by Marsden and Hughes that pertains to bodies that have the same dimension as the surrounding space and does not allow the presence of electromagnetic fields. Usually, in works on electroelasticity the entropy inequality is used to decide which otherwise allowed deformations are physically admissible and which are not. It is also employed to derive restrictions on the possible forms of constitutive relations describing the material. Unfortunately, the opinions on the physically correct statement of the entropy inequality diverge when electromagnetic fields are present. Moreover, it is unclear how to formulate the entropy inequality in the case of a membrane that is subjected to an electromagnetic field. Thus, we show that one can replace the use of the entropy inequality by the demand that, for a given process, balance of energy be invariant under the action of arbitrary diffeomorphisms on the surrounding space and under linear rescalings of the temperature. On the one hand, this demand also yields the desired restrictions on the form of the constitutive relations. On the other hand, it needs much weaker assumptions than the arguments in the physics literature that employ the entropy inequality. Again, our result generalizes a theorem of Marsden and Hughes. This time, our result is, like theirs, only valid for bodies that have the same dimension as the surrounding space.
The enormous structural changes in the health care sector, those that have already taken place in recent years and those still to come, force organizations to create the preconditions for continuous adaptation to the new circumstances through planned and managed change processes, and thereby to secure their future viability. Against this background, the study shows how the EFQM Excellence Model can be used as an instrument for change processes and how it is suited to defining change objectives and evaluating their achievement. The reference object of the case study analysis, which covers a period of 13 years, is the Medizinischer Dienst der Krankenversicherung (Medical Review Board of the Statutory Health Insurance Funds) of Rhineland-Palatinate. In addition to analyzing and presenting the theoretical foundations, the study shows, using a health care organization as an example, how implementation can proceed in practice with the EFQM model. Since the EFQM model and its methodology can be applied independently of the type of organization, structured learning and transfer are possible. The study demonstrates that the EFQM model can be used as a universal method within a management and quality-steering framework, provided that management has the competence to adapt it to the specific sector. On the way to organizational excellence, the study draws on planning and forecasting techniques of strategic management (SWOT, scenario analysis, portfolio analysis) and refers to the VRIO paradigm of the resource-based view. The EFQM model is subjected to the stress test of the resource-based strategic approach in order to show that the EFQM model can constitute a unique, hard-to-imitate, scarcely substitutable, organizationally embedded success potential that creates customer value. The study provides impulses and concrete suggestions that can lead to a substantial gain in practical management knowledge for the successful handling of the EFQM model and the use of quality management systems.
The main aim of this dissertation is to launch a draft of a practical aesthetics situated at the interface between philosophical aesthetics and art, more precisely performance art, under the guiding notion of vulnerability. In recent approaches to aesthetics, a view has crystallized that reflects not about art but with it. The point of this 'with' is that such aesthetics do not explain art, determine it and thereby fix its meaning; rather, along with art, they mark the ruptures, resistances and caesuras between perceiving and thinking and assess them as productive. This reading establishes a mode of thought that does not look at something from a distance (theoria) but thinks aesthetically-reflectively (turning back, also self-critically) with art. The discipline of aesthetics, as aisthesis, the doctrine of sensory perception, occupies a special position within philosophy because it points to precisely this difference and therefore strengthens sensory and not merely logical-argumentative figures of thought. As one way of strengthening the gap, the irretrievable, the brittle inadequacy of conceptual thought in the face of aesthetic experience, I propose the guiding notion of vulnerability. Such an aesthetics consists in creating vulnerable places, which are approached in two ways: on the one hand, from artistic practice, by way of the aesthetic figure of the vulnerable body as it appears in contemporary performance; on the other hand, as a creating of concepts in awareness of their vulnerability. The points of departure are the conceptions of Gilles Deleuze and Hans Blumenberg: Deleuze's aesthetics outlines a concrete possibility of overlap between art and philosophy from which my thesis of thinking-with-art can be developed. It can be grounded in the basic premise of Deleuze's thought that not only art but also philosophy is a creative activity. Both disciplines rest on the principle of creatio continua, through which art creates sensations and philosophy creates concepts; it is precisely this creative process that brings art and philosophy into a productive relation with one another. How Deleuze develops his conceptual work along artistic practice is examined through an analysis of the still little-received text Ein Manifest weniger (One Less Manifesto) on the theatre of Carmelo Bene. A quite different approach to a practical aesthetics is offered by Hans Blumenberg, who holds out the prospect of a theory of nonconceptuality. Following his demand that metaphor be reintegrated more strongly into philosophical practice, he radicalizes this demand to include the non-intuitive as well, placing the entirely nonconceptual alongside the conceptual. Definitional weakness reveals itself as a genuine strength that reaches its zenith in nonconceptuality. I understand shipwreck as the central metaphor, the metaphor of metaphor as it were, which illustrates the running aground of the all-knowing. In shipwreck, the productive collision of theory and practice becomes evident.
Via 'creatio continua' and 'nonconceptuality', Deleuze and Blumenberg show the limits of comprehension by stressing that aesthetics does not merely refer to artistic experiences but is itself involved in making experiences present. It follows that aesthetic reflection need not operate conceptually alone. Practical aesthetics encourages us to recognize other forms of presentation (images, sounds, bodies) as distinct and equal reflexive modes, and to place them alongside language as formats that render vulnerable. This reading emphasizes the formative, creative aspect of aesthetics itself. To illustrate this gap between (body) image and concept, the film Augen blickeN, which I co-created, is included as a chapter of the dissertation. The film shows performers who have consciously decided to present their 'deviant' bodies on stage. The word vulnerability points to the paradoxical situation of making something fragile load-bearing, and thereby also to a particular form of relationship and to an existential mutual dependence among human beings. Vulnerability concerns everyone and therefore founds a commonality of a special kind. In this sense, vulnerable places are not only aesthetic but also ethical places, which underscores the political dimension of the project.
Distances affect economic decision-making in numerous situations. The time at which we make a decision about future consumption has an impact on our consumption behavior. The spatial distance to employer, school or university influences where we live, and vice versa. The emotional closeness to other individuals influences our willingness to give money to them. This cumulative thesis aims to enrich the literature on the role of distance in economic decision-making. Each of my research projects sheds light on the impact of one kind of distance on efficient decision-making.
This thesis is concerned with the solution of the blind source separation problem (BSS). The BSS problem occurs frequently in various scientific and technical applications. In essence, it consists in separating meaningful underlying components out of a mixture of a multitude of superimposed signals. In the recent research literature there are two related approaches to the BSS problem: The first is known as Independent Component Analysis (ICA), where the goal is to transform the data such that the components become as independent as possible. The second is based on the notion of diagonality of certain characteristic matrices derived from the data. Here the goal is to transform the matrices such that they become as diagonal as possible. In this thesis we study the latter method of approximate joint diagonalization (AJD) to achieve a solution of the BSS problem. After an introduction to the general setting, the thesis provides an overview of particular choices for the set of target matrices that can be used for BSS by joint diagonalization. As the main contribution of the thesis, new algorithms for approximate joint diagonalization of several matrices with non-orthogonal transformations are developed. These newly developed algorithms are tested on synthetic benchmark datasets and compared to previous diagonalization algorithms. Applications of the BSS methods to biomedical signal processing are discussed and exemplified with real-life data sets of multi-channel biomagnetic recordings.
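For orientation, the special case that AJD generalizes can be stated compactly: for exactly two symmetric matrices, one of them positive definite, an exact non-orthogonal joint diagonalizer is given by the generalized eigenvectors. The sketch below illustrates this textbook fact; it is not one of the thesis's new algorithms, which address sets of more than two matrices where only approximate diagonalization is possible in general.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
mix = rng.normal(size=(4, 4))                    # unknown mixing matrix
A = mix @ np.diag(rng.uniform(1, 3, 4)) @ mix.T  # two "characteristic"
B = mix @ np.diag(rng.uniform(1, 3, 4)) @ mix.T  # matrices sharing `mix`

_, V = eigh(A, B)                                # generalized eigenproblem A v = lambda B v
for M in (A, B):
    Mt = V.T @ M @ V                             # congruence transform
    off = Mt - np.diag(np.diag(Mt))
    print("off-diagonal norm:", np.linalg.norm(off))  # ~1e-15, i.e. diagonal
```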
Information on the contemporary in-situ stress state of the earth's crust is essential for geotechnical applications and physics-based seismic hazard assessment. Yet, stress data records for a data point are incomplete, and their availability is usually not dense enough to allow conclusive statements. This demands a thorough examination of the in-situ stress field, which is achieved by 3D geomechanical-numerical models. However, the models' spatial resolution is limited, and the resulting local stress state is subject to large uncertainties that confine the significance of the findings. In addition, temporal variations of the in-situ stress field are naturally or anthropogenically induced. In my thesis I address these challenges in three manuscripts that investigate (1) the current crustal stress field orientation, (2) the 3D geomechanical-numerical modelling of the in-situ stress state, and (3) the phenomenon of injection-induced temporal stress tensor rotations. In the first manuscript I present the first comprehensive stress data compilation for Iceland, with 495 data records. To this end, I analysed image logs from 57 boreholes in Iceland for indicators of the orientation of the maximum horizontal stress component. The study is the first stress survey based on different kinds of stress indicators in a geologically very young and tectonically active area of an onshore spreading ridge. It reveals a distinct stress field with a depth-independent stress orientation, even very close to the spreading centre. In the second manuscript I present a calibrated 3D geomechanical-numerical modelling approach for the in-situ stress state of the Bavarian Molasse Basin that investigates the regional (70 x 70 x 10 km³) and local (10 x 10 x 10 km³) stress state. To link these two models I develop a multi-stage modelling approach that provides a reliable and efficient method to derive initial and boundary conditions for the smaller-scale model from the larger-scale model. Furthermore, I quantify the uncertainties in the model results which are inherent to geomechanical-numerical modelling in general and to the multi-stage approach in particular. I show that the significance of the model results is mainly reduced by the uncertainties in the material properties and the low number of stress magnitude data records available for calibration. In the third manuscript I investigate the phenomenon of injection-induced temporal stress tensor rotation and its controlling factors. I conduct a sensitivity study with a 3D generic thermo-hydro-mechanical model. I show that the key controlling factors for the stress tensor rotation are the permeability as the decisive factor, the injection rate, and the initial differential stress. In particular, for enhanced geothermal systems with low permeability, large rotations of the stress tensor are indicated. According to these findings, the estimation of the initial differential stress in a reservoir is possible provided the permeability is known and the angle of stress rotation is observed. I propose that stress tensor rotations can be a key factor for the potential of induced seismicity on pre-existing faults, because the reorientation of the stress field changes the optimal orientation of faults.
Self-adaptive data quality
(2017)
Carrying out business processes successfully is closely linked to the quality of the data inventory in an organization. Deficiencies in data quality lead to problems: Incorrect address data prevents (timely) shipments to customers. Erroneous orders lead to returns and thus to unnecessary effort. Incorrect pricing forces companies to miss out on revenues or impairs customer satisfaction. If orders or customer records cannot be retrieved, complaint management takes longer. Due to erroneous inventories, too few or too many supplies might be reordered.
A special problem with data quality, and the reason for many of the issues mentioned above, are duplicates in databases. Duplicates are different representations of the same real-world objects in a dataset. These representations differ from each other and are for that reason hard for a computer to match. Moreover, the number of required comparisons to find those duplicates grows with the square of the dataset size. To cleanse the data, these duplicates must be detected and removed. Duplicate detection is a very laborious process. To achieve satisfactory results, appropriate software must be created and configured (similarity measures, partitioning keys, thresholds, etc.). Both require much manual effort and experience.
This thesis addresses automation of parameter selection for duplicate detection and presents several novel approaches that eliminate the need for human experience in parts of the duplicate detection process.
A pre-processing step is introduced that analyzes the datasets in question and classifies their attributes semantically. Not only do these annotations help in understanding the respective datasets, but they also facilitate subsequent steps, for example, by selecting appropriate similarity measures or normalizing the data upfront. This approach works without schema information.
Following that, we show a partitioning technique that strongly reduces the number of pair comparisons for the duplicate detection process. The approach automatically finds particularly suitable partitioning keys that simultaneously allow for effective and efficient duplicate retrieval. By means of a user study, we demonstrate that this technique finds partitioning keys that outperform expert suggestions and additionally does not need manual configuration. Furthermore, this approach can be applied independently of the attribute types.
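The effect of partitioning on the quadratic comparison cost can be illustrated with a generic blocking sketch (hypothetical records and a hand-picked key; the thesis's contribution is precisely to find suitable keys automatically):

```python
from itertools import combinations

records = [
    {"id": 1, "name": "Jane Smith", "zip": "14482"},
    {"id": 2, "name": "Jane Smyth", "zip": "14482"},
    {"id": 3, "name": "Bob Jones",  "zip": "10115"},
    {"id": 4, "name": "Bob Jonas",  "zip": "10115"},
]

def partition_key(r):
    # A hand-picked key: zip code plus the name's first letter.
    return r["zip"] + r["name"][0]

blocks = {}
for r in records:
    blocks.setdefault(partition_key(r), []).append(r)

# Compare only pairs within the same block instead of all n*(n-1)/2 pairs.
candidate_pairs = [p for block in blocks.values()
                   for p in combinations(block, 2)]
print(len(candidate_pairs), "comparisons instead of",
      len(records) * (len(records) - 1) // 2)
```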
To measure the success of a duplicate detection process and to execute the described partitioning approach, a gold standard is required that provides information about the actual duplicates in a training dataset. This thesis presents a technique that uses existing duplicate detection results and crowdsourcing to create a near gold standard that can be used for the purposes above. Another part of the thesis describes and evaluates strategies for reducing these crowdsourcing costs and achieving a consensus with less effort.
Biofilms are complex living materials that form as bacteria get embedded in a matrix of self-produced protein and polysaccharide fibres. The formation of a network of extracellular biopolymer fibres contributes to the cohesion of the biofilm by promoting cell-cell attachment and by mediating biofilm-substrate interactions. This sessile mode of bacterial growth has been well studied by microbiologists with the aim of preventing the detrimental effects of biofilms in medical and industrial settings. Indeed, biofilms are associated with increased antibiotic resistance in bacterial infections, and they can also cause clogging of pipelines or promote bio-corrosion. However, biofilms have also gained interest from biophysics due to their ability to form complex morphological patterns during growth. Recently, the emerging field of engineered living materials has begun to investigate biofilm mechanical properties at multiple length scales and to leverage the tools of synthetic biology to tune the functions of their constitutive biopolymers.
This doctoral thesis aims at clarifying how the morphogenesis of Escherichia coli (E. coli) biofilms is influenced by their growth dynamics and mechanical properties. To address this question, I used methods from cell mechanics and materials science. I first studied how biological activity in biofilms gives rise to non-uniform growth patterns. In a second study, I investigated how E. coli biofilm morphogenesis and its mechanical properties adapt to an environmental stimulus, namely the water content of their substrate. Finally, I estimated how the mechanical properties of E. coli biofilms are altered when the bacteria express different extracellular biopolymers.
On nutritive hydrogels, micron-sized E. coli cells can build centimetre-large biofilms. During this process, bacterial proliferation and matrix production introduce mechanical stresses in the biofilm, which are released through the formation of macroscopic wrinkles and delaminated buckles. To relate these biological and mechanical phenomena, I used time-lapse fluorescence imaging to track cell and matrix surface densities through the early and late stages of E. coli biofilm growth. Colocalization of high cell and matrix densities at the periphery precedes the onset of mechanical instabilities in this annular region. Early growth, analysed by adding fluorescent microspheres to the bacterial inoculum, is detected at this outer annulus. But only when high rates of matrix production are present in the biofilm centre does overall biofilm spreading initiate along the solid-air interface. By tracking larger fluorescent particles over a long time, I could distinguish several kinematic stages of E. coli biofilm expansion and observed a transition from non-linear to linear velocity profiles, which precedes the emergence of wrinkles at the biofilm periphery. Decomposing particle velocities into their radial and circumferential components revealed a last kinematic stage, in which biofilm movement is mostly directed towards the radial delaminated buckles, which verticalize. The resulting compressive strains computed in these regions were observed to substantially deform the underlying agar substrates. The colocalization of higher cell and matrix densities in an annular region and the succession of several kinematic stages are thus expected to promote the emergence of mechanical instabilities at the biofilm periphery. These experimental findings are expected to inform future modelling approaches of biofilm morphogenesis.
E. coli biofilm morphogenesis is further anticipated to depend on external stimuli from the environment. To clarify how water could be used to tune biofilm material properties, we quantified E. coli biofilm growth, wrinkling dynamics and rigidity as a function of the water content of the nutritive substrates. Time-lapse microscopy and computational image analysis revealed that substrates with high water content promote biofilm spreading kinetics, while substrates with low water content promote biofilm wrinkling. The wrinkles observed on biofilm cross-sections appeared more bent on substrates with high water content, while they tended to be more vertical on substrates with low water content. Both wet and dry biomass, accumulated over 4 days of culture, were larger in biofilms cultured on substrates with high water content, despite extra porosity within the matrix layer. Finally, micro-indentation analysis revealed that substrates with low water content supported the formation of stiffer biofilms. This study shows that E. coli biofilms respond to the water content of their substrate, which might be used to tune their material properties in view of further applications.
Biofilm material properties further depend on the composition and structure of the matrix of extracellular proteins and polysaccharides. In particular, E. coli biofilms have been suggested to present tissue-like elasticity due to a dense fibre network consisting of amyloid curli and phosphoethanolamine-modified cellulose. To understand the contribution of these components to the emergent mechanical properties of E. coli biofilms, we performed micro-indentation on biofilms grown from bacteria of several strains. Besides showing higher dry masses, larger spreading diameters and slightly reduced water contents, biofilms expressing both main matrix components also presented high rigidities in the range of several hundred kPa, similar to biofilms containing only curli fibres. In contrast, a lack of amyloid curli fibres leads to much higher adhesive energies and a more viscoelastic, fluid-like material behaviour. Therefore, the combination of amyloid curli and phosphoethanolamine-modified cellulose fibres implies the formation of a composite material in which the amyloid curli fibres provide rigidity to E. coli biofilms, whereas the phosphoethanolamine-modified cellulose rather acts as a glue. These findings motivate further studies involving purified versions of these protein and polysaccharide components to better understand how their interactions benefit biofilm functions.
All three studies depict different aspects of biofilm morphogenesis, which are interrelated. The first work reveals the correlation between non-uniform biological activities and the emergence of mechanical instabilities in the biofilm. The second work acknowledges the adaptive nature of E. coli biofilm morphogenesis and its mechanical properties to an environmental stimulus, namely water. Finally, the last study reveals the complementary role of the individual matrix components in the formation of a stable biofilm material, which not only forms complex morphologies but also functions as a protective shield for the bacteria it contains. Our experimental findings on E. coli biofilm morphogenesis and their mechanical properties can have further implications for fundamental and applied biofilm research fields.
Concerns have been raised that anthropogenic climate change could lead to large-scale singular climate events, i.e., abrupt nonlinear climate changes with repercussions on regional to global scales. One central goal of this thesis is the development of models of two representative components of the climate system that could exhibit singular behavior: the Atlantic thermohaline circulation (THC) and the Indian monsoon. These models are conceived so as to fulfill the main requirements of integrated assessment modeling, i.e., reliability, computational efficiency, transparency and flexibility. The model of the THC is an interhemispheric four-box model calibrated against data generated with a coupled climate model of intermediate complexity. It is designed to be driven by global mean temperature change, which is translated into regional fluxes of heat and freshwater through a linear down-scaling procedure. Results of a large number of transient climate change simulations indicate that the reduced-form THC model is able to emulate key features of the behavior of comprehensive climate models, such as the sensitivity of the THC to the amount, regional distribution and rate of change of the heat and freshwater fluxes. The Indian monsoon is described by a novel one-dimensional box model of the tropical atmosphere. It includes representations of the radiative and surface fluxes, the hydrological cycle and surface hydrology. Despite its high degree of idealization, the model satisfactorily captures relevant aspects of the observed monsoon dynamics, such as the annual course of precipitation and the onset and withdrawal of the summer monsoon. Also, the model exhibits the sensitivity to changes in greenhouse gas and sulfate aerosol concentrations that is known from comprehensive models. A simplified version of the monsoon model is employed for the identification of changes in the qualitative system behavior under changes in boundary conditions. The most notable result is that under summer conditions a saddle-node bifurcation occurs at critical values of the planetary albedo or insolation. Furthermore, the system exhibits two stable equilibria: besides the wet summer monsoon, a stable state exists which is characterized by a weak hydrological cycle. These results are remarkable insofar as they indicate that anthropogenic perturbations of the planetary albedo, such as sulfur emissions and/or land-use changes, could destabilize the Indian summer monsoon. The reduced-form THC model is employed in an exemplary integrated assessment application. Drawing on the conceptual and methodological framework of the tolerable windows approach, emissions corridors (i.e., admissible ranges of CO2 emissions) are derived that limit the risk of a THC collapse while considering expectations about the socio-economically acceptable pace of emissions reductions. Results indicate, for example, a large dependency of the width of the emissions corridor on climate and hydrological sensitivity: for low values of climate and/or hydrological sensitivity, the corridor boundaries are far from being transgressed by any plausible emissions scenario for the 21st century. In contrast, for high values of both quantities, even low non-intervention scenarios leave the corridor already in the early decades of the 21st century. This implies that, if the risk of a THC collapse is to be kept low, business-as-usual paths would need to be abandoned within the next two decades.
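The saddle-node structure found in the monsoon model can be pictured with the one-dimensional normal form dx/dt = mu - x² (a generic illustration only, not the monsoon model itself): a stable and an unstable equilibrium coexist for mu > 0 and merge and vanish at the fold point mu = 0.

```python
import numpy as np

# Equilibria of dx/dt = mu - x**2 are x* = +/- sqrt(mu); the derivative
# -2x makes the positive root stable and the negative root unstable.
for mu in (0.5, 0.1, -0.1):
    if mu > 0:
        print(f"mu={mu:+.1f}: stable x*={np.sqrt(mu):+.3f}, "
              f"unstable x*={-np.sqrt(mu):+.3f}")
    else:
        print(f"mu={mu:+.1f}: no equilibrium -- the stable branch has vanished")
```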
All in all, this thesis highlights the value of reduced-form modeling by presenting a number of applications of this class of models, ranging from sensitivity and bifurcation analysis to integrated assessment. The results achieved and conclusions drawn provide a useful contribution to the scientific and policy debate about the consequences of anthropogenic climate change and the long-term goals of climate protection. --- Note: The author was awarded the Michelson Prize of the Faculty of Mathematics and Natural Sciences of the University of Potsdam for the best dissertation of the year 2003/2004.
The European Water Framework Directive (WFD) has identified river morphological alteration and diffuse pollution as the two main pressures affecting water bodies in Europe at the catchment scale. Consequently, river restoration has become a priority to achieve the WFD's objective of good ecological status. However, little is known about the effects of stream morphological changes, such as re-meandering, on in-stream nitrate retention at the river network scale. Therefore, catchment nitrate modeling is necessary to guide the implementation of spatially targeted and cost-effective mitigation measures. Meanwhile, Germany, like many other regions in central Europe, has experienced consecutive summer droughts from 2015-2018, resulting in significant changes in river nitrate concentrations in various catchments. However, the mechanistic exploration of catchment nitrate responses to changing weather conditions is still lacking.
Firstly, a fully distributed, process-based catchment nitrate model (mHM-Nitrate) was used, which was properly calibrated and comprehensively evaluated at numerous spatially distributed nitrate sampling locations. Three calibration schemes were designed, taking into account land use, stream order, and mean nitrate concentrations; they varied in spatial coverage but used data from the same period (2011-2019). The model performance for discharge was similar among the three schemes, with Nash-Sutcliffe Efficiency (NSE) scores ranging from 0.88 to 0.92. However, for nitrate concentrations, scheme 2 outperformed schemes 1 and 3 when compared to observed data from eight gauging stations. This was likely because scheme 2 incorporated a diverse range of data, including low discharge values and nitrate concentrations, and thus provided a better representation of within-catchment heterogeneity. Therefore, the study suggests that strategically selecting gauging stations that reflect the full range of within-catchment heterogeneity is more important for calibration than simply increasing the number of stations.
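For reference, the two efficiency metrics quoted here and in the following studies follow their standard definitions (a sketch with toy numbers, not the mHM-Nitrate evaluation code):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect, 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency (Gupta et al., 2009): combines correlation,
    variability ratio and bias ratio; 1 is perfect."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

q_obs = [3.1, 2.8, 5.9, 12.4, 7.0]   # invented observations
q_sim = [2.9, 3.0, 5.5, 11.8, 7.6]   # invented simulations
print(f"NSE={nse(q_obs, q_sim):.2f}, KGE={kge(q_obs, q_sim):.2f}")
```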
Secondly, the mHM-Nitrate model was used to reveal the causal relations between sequential droughts and nitrate concentration in the Bode catchment (3200 km2) in central Germany, where stream nitrate concentrations exhibited contrasting trends from upstream to downstream reaches. The model was evaluated using data from six gauging stations, reflecting different levels of runoff components and their associated nitrate-mixing from upstream to downstream. Results indicated that the mHM-Nitrate model reproduced dynamics of daily discharge and nitrate concentration well, with Nash-Sutcliffe Efficiency ≥ 0.73 for discharge and Kling-Gupta Efficiency ≥ 0.50 for nitrate concentration at most stations. Particularly, the spatially contrasting trends of nitrate concentration were successfully captured by the model. The decrease of nitrate concentration in the lowland area in drought years (2015-2018) was presumably due to (1) limited terrestrial export loading (ca. 40% lower than that of normal years 2004-2014), and (2) increased in-stream retention efficiency (20% higher in summer within the whole river network). From a mechanistic modelling perspective, this study provided insights into spatially heterogeneous flow and nitrate dynamics and effects of sequential droughts, which shed light on water-quality responses to future climate change, as droughts are projected to be more frequent.
Thirdly, this study investigated the effects of stream restoration via re-meandering on in-stream nitrate retention at the network scale in the well-monitored Bode catchment. The mHM-Nitrate model showed good performance in reproducing daily discharge and nitrate concentrations, with median Kling-Gupta values of 0.78 and 0.74, respectively. The mean and standard deviation of gross nitrate retention efficiency, which accounted for both denitrification and assimilatory uptake, were 5.1 ± 0.61% in winter and 74.7 ± 23.2% in summer within the stream network. The study found that in summer, denitrification rates were about two times higher in lowland sub-catchments dominated by agricultural land than in mountainous sub-catchments dominated by forest, with median ± SD of 204 ± 22.6 and 102 ± 22.1 mg N m-2 d-1, respectively. Similarly, assimilatory uptake rates were approximately five times higher in streams surrounded by lowland agricultural areas than in those in higher-elevation, forested areas, with median ± SD of 200 ± 27.1 and 39.1 ± 8.7 mg N m-2 d-1, respectively. Therefore, restoration strategies targeting lowland agricultural areas may have greater potential for increasing nitrate retention. The study also found that restoring stream sinuosity could increase net nitrate retention efficiency by up to 25.4 ± 5.3%, with greater effects seen in small streams. These results suggest that restoration efforts should consider augmenting stream sinuosity to increase nitrate retention and decrease nitrate concentrations at the catchment scale.
To what extent cities can be made sustainable under the mega-trends of urbanization and climate change remains a matter of unresolved scientific debate. Our inability to answer this question lies partly in deficient knowledge regarding pivotal human-environment interactions. Regarded as the best documented anthropogenic climate modification, the urban heat island (UHI) effect – the warmth of urban areas relative to the rural hinterland – has raised great public health concerns globally. Worse still, heat waves are being observed and are projected to increase in both frequency and intensity, which further impairs the well-being of urban dwellers. Despite a substantial increase in the number of publications on the UHI in recent decades, the diverse urban-rural definitions applied in previous studies have considerably hampered the comparability of the results achieved. In addition, few studies have attempted to synergize land use data and thermal remote sensing to systematically assess the UHI and its contributing factors.
Given these research gaps, this work presents a general framework to systematically quantify the UHI effect based on an automated algorithm, whereby cities are defined as clusters of maximum spatial continuity on the basis of land use data, with their rural hinterland being defined analogously. By combining land use data with spatially explicit surface skin temperatures from satellites, the surface UHI intensity can be calculated in a consistent and robust manner. This facilitates monitoring, benchmarking, and categorizing UHI intensities for cities across scales. In light of this innovation, the relationship between city size and UHI intensity has been investigated, as well as the contributions of urban form indicators to the UHI intensity.
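The core of the intensity calculation can be sketched as follows (synthetic rasters and a square "cluster" for illustration; the actual algorithm delineates clusters of maximum spatial continuity from land use data): the surface UHI intensity is the mean land surface temperature over the urban cluster minus that over its rural reference area.

```python
import numpy as np

rng = np.random.default_rng(0)
lst = 20.0 + rng.normal(0.0, 1.0, (100, 100))   # land surface temperature, degC
urban = np.zeros_like(lst, dtype=bool)
urban[40:60, 40:60] = True                      # city footprint from land use
lst[urban] += 3.0                               # impose a synthetic heat island

rural = ~urban                                  # rural reference, here crudely
rural[:20, :] = False                           # clipped to the cluster's
rural[80:, :] = False                           # immediate surroundings
rural[:, :20] = False
rural[:, 80:] = False

suhi = lst[urban].mean() - lst[rural].mean()
print(f"surface UHI intensity: {suhi:.2f} K")
```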
This work delivers manifold contributions to the understanding of the UHI, which complement and advance a number of previous studies. Firstly, a log-linear relationship between surface UHI intensity and city size has been confirmed among 5,000 European cities. The relationship can be extended to a log-logistic one when taking a wider range of small-sized cities into account. Secondly, this work reveals a complex interplay between UHI intensity and urban form. City size is found to have the strongest influence on the UHI intensity, followed by fractality and anisometry. However, their relative contributions to the surface UHI intensity exhibit a pronounced regional heterogeneity, indicating the importance of considering spatial patterns of the UHI when implementing UHI adaptation measures.
Lastly, this work presents a novel seasonality of the UHI intensity for individual clusters in the form of hysteresis-like curves, implying a phase shift between the time series of UHI intensity and background temperatures. Combining satellite observation and urban boundary layer simulation, the seasonal variations of UHI are assessed from both screen and skin levels. Taking London as an example, this work ascribes the discrepancies between the seasonality observed at different levels mainly to the peculiarities of surface skin temperatures associated with the incoming solar radiation. In addition, the efforts in classifying cities according to their UHI characteristics highlight the important role of regional climates in determining the UHI.
This work serves as one of the first studies conducted to systematically and statistically scrutinize the UHI. The outcomes of this work are of particular relevance for the overall spatial planning and regulation at meso- and macro levels in order to harness the benefits of rapid urbanization, while proactively minimizing its ensuing thermal stress.
Noise is ubiquitous in nature and usually results in rich dynamics in stochastic systems such as oscillatory systems, which exist in fields as varied as physics, biology and complex networks. The correlation and synchronization of two or many oscillators have been widely studied topics in recent years.
In this thesis, we mainly investigate two problems: the stochastic bursting phenomenon in noisy excitable systems, and synchronization in a three-dimensional Kuramoto model with noise. Stochastic bursting here refers to a coherent spike train in which each spike has a random number of followers due to the combined effects of time delay and noise. Synchronization, as a universal phenomenon in nonlinear dynamical systems, is well illustrated by the Kuramoto model, a prominent model for the description of collective motion.
In the first part of this thesis, an idealized point process, valid if the characteristic timescales in the problem are well separated, is used to describe statistical properties such as the power spectral density and the interspike interval distribution. We show how the main parameters of the point process, the spontaneous excitation rate and the probability of inducing a spike during the delay action, can be calculated from the solutions of a stationary and a forced Fokker-Planck equation. We extend this to the delay-coupled case and derive analytically the statistics of the spikes in each neuron, the pairwise correlations between any two neurons, and the spectrum of the total output from the network.
In the second part, we investigate the three-dimensional noisy Kuramoto model, which can be used to describe synchronization in a swarming model with helical trajectories. In the case without natural frequencies, the Kuramoto model can be connected with the Vicsek model, which is widely studied in the context of collective motion and swarming of active matter. We analyze the linear stability of the incoherent state and derive the critical coupling strength above which the incoherent state loses stability. In the limit of no natural frequencies, an exact self-consistent equation for the mean field is derived and extended straightforwardly to any higher-dimensional case.
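For context, the classical noisy Kuramoto model that the three-dimensional version generalizes can be simulated in a few lines (illustrative parameters and Euler-Maruyama integration; not the thesis's 3D model):

```python
import numpy as np

# dtheta_i = [omega_i + K*|z|*sin(arg(z) - theta_i)] dt + sigma dW_i,
# where z = (1/N) sum_j exp(i theta_j) is the complex order parameter.
rng = np.random.default_rng(2)
N, K, sigma, dt, steps = 500, 2.0, 0.5, 0.01, 20_000
omega = rng.normal(0, 1, N)                 # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(steps):
    z = np.exp(1j * theta).mean()           # mean-field order parameter
    coupling = K * np.abs(z) * np.sin(np.angle(z) - theta)
    theta += (omega + coupling) * dt + sigma * np.sqrt(dt) * rng.normal(size=N)

# |z| near 0 means incoherence; |z| -> 1 means synchronization. Above the
# critical coupling the incoherent state loses stability and |z| grows.
print("order parameter |z| =", np.abs(np.exp(1j * theta).mean()))
```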
The plasmasphere is a dynamic region of cold, dense plasma surrounding the Earth. Its shape and size are highly susceptible to variations in solar and geomagnetic conditions. Having an accurate model of plasma density in the plasmasphere is important for GNSS navigation and for predicting hazardous effects of radiation in space on spacecraft. The distribution of cold plasma and its dynamic dependence on solar wind and geomagnetic conditions remain, however, poorly quantified. Existing empirical models of plasma density tend to be oversimplified as they are based on statistical averages over static parameters. Understanding the global dynamics of the plasmasphere using observations from space remains a challenge, as existing density measurements are sparse and limited to locations where satellites can provide in-situ observations. In this dissertation, we demonstrate how such sparse electron density measurements can be used to reconstruct the global electron density distribution in the plasmasphere and capture its dynamic dependence on solar wind and geomagnetic conditions.
First, we develop an automated algorithm to determine the electron density from in-situ measurements of the electric field on the Van Allen Probes spacecraft. In particular, we design a neural network to infer the upper hybrid resonance frequency from the dynamic spectrograms obtained with the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) instrumentation suite, which is then used to calculate the electron number density. The developed Neural-network-based Upper hybrid Resonance Determination (NURD) algorithm is applied to more than four years of EMFISIS measurements to produce the publicly available electron density data set.
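Once the upper hybrid frequency is known, the conversion to electron density follows a textbook relation: f_uh² = f_pe² + f_ce², with f_pe[Hz] ≈ 8980·sqrt(n_e[cm⁻³]) and f_ce[Hz] ≈ 28·B[nT]. A minimal sketch with illustrative values (not EMFISIS data):

```python
def electron_density(f_uh_hz, b_nt):
    """Electron number density (cm^-3) from upper hybrid frequency (Hz)
    and magnetic field strength (nT), using the standard relations."""
    f_ce = 28.0 * b_nt                                 # electron gyrofrequency
    return (f_uh_hz ** 2 - f_ce ** 2) / 8980.0 ** 2    # n_e in cm^-3

print(f"{electron_density(f_uh_hz=3.0e5, b_nt=200.0):.0f} cm^-3")  # ~1116
```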
We utilize the obtained electron density data set to develop a new global model of plasma density by employing a neural network-based modeling approach. The model takes the location and the time history of geomagnetic indices as inputs, and produces the electron density in the equatorial plane as an output. It is extensively validated using in-situ density measurements from the Van Allen Probes mission, and also by comparing the predicted global evolution of the plasmasphere with the global IMAGE EUV images of He+ distribution. The model successfully reproduces erosion of the plasmasphere on the night side as well as plume formation and evolution, and agrees well with the data.
The performance of neural networks strongly depends on the availability of training data, which is limited during intervals of high geomagnetic activity. In order to provide reliable density predictions during such intervals, we can employ physics-based modeling. We develop a new approach for optimally combining the neural network- and physics-based models of the plasmasphere by means of data assimilation. The developed approach utilizes advantages of both neural network- and physics-based modeling and produces reliable global plasma density reconstructions for quiet, disturbed, and extreme geomagnetic conditions.
Finally, we extend the developed machine learning-based tools and apply them to another important problem in the field of space weather, the prediction of the geomagnetic index Kp. The Kp index is one of the most widely used indicators for space weather alerts and serves as input to various models, such as those for the thermosphere, the radiation belts and the plasmasphere. It is therefore crucial to predict the Kp index accurately. Previous work in this area has mostly employed artificial neural networks to nowcast and make short-term predictions of Kp, basing its inferences on the recent history of Kp and solar wind measurements at L1. We analyze how the performance of neural networks compares to that of other machine learning algorithms for nowcasting and forecasting Kp up to 12 hours ahead. Additionally, we investigate several machine learning and information theory methods for selecting the optimal inputs to a predictive model of Kp. The developed tools for feature selection can also be applied to other problems in space physics in order to reduce the input dimensionality and identify the most important drivers.
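One of the information-theoretic selection methods mentioned above, mutual information ranking, can be sketched with scikit-learn on synthetic stand-ins for solar wind inputs (illustrative only; the actual study uses measured drivers and several further selection methods):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(3)
n = 2_000
speed = rng.normal(450, 100, n)        # solar wind speed, km/s (synthetic)
bz = rng.normal(0, 3, n)               # IMF Bz, nT (synthetic)
density = rng.normal(5, 2, n)          # proton density, cm^-3 (synthetic)
kp = 0.01 * speed - 0.5 * bz + rng.normal(0, 1, n)   # toy target, not real Kp

X = np.column_stack([speed, bz, density])
mi = mutual_info_regression(X, kp, random_state=0)
for name, score in zip(["speed", "Bz", "density"], mi):
    print(f"{name:8s} MI = {score:.3f}")   # density should rank lowest here
```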
Research outlined in this dissertation clearly demonstrates that machine learning tools can be used to develop empirical models from sparse data and also can be used to understand the underlying physical processes. Combining machine learning, physics-based modeling and data assimilation allows us to develop novel methods benefiting from these different approaches.
Evaluation of nitrogen dynamics in high-order streams and rivers based on high-frequency monitoring
(2023)
Nutrient storage, transformation and transport are important processes for achieving environmental and ecological health, as well as for water management planning. Nitrogen is one of the most closely watched elements due to its role in the severe consequences of eutrophication in aquatic systems. Among all nitrogen components, research on nitrate is flourishing because of the widespread deployment of in-situ high-frequency sensors. Monitoring and studying nitrate can become a paradigm for any other reactive substance that may damage environmental conditions and cause economic losses.
Identifying nitrate storage and its transport within a catchment can inform the management of agricultural activities and municipal planning. Storm events are periods when hydrological dynamics activate the exchange between nitrate storage and flow pathways. In this dissertation, long-term high-frequency monitoring data at three gauging stations in the Selke river were used to quantify event-scale nitrate concentration-discharge (C-Q) hysteretic relationships. The Selke catchment is divided into three nested subcatchments with heterogeneous physiographic conditions and land use. With quantified hysteresis indices, the impacts of seasonality and landscape gradients on C-Q relationships are explored. For example, arable areas hold a deep nitrate legacy that can be activated by high-intensity precipitation during wetting/wet periods (i.e., under strong hydrological connectivity). Hence, specific shapes of C-Q relationships in river networks can identify target locations and periods for agricultural management actions within the catchment to decrease nitrate export into downstream aquatic systems, ultimately the ocean.
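One common formulation of an event-scale hysteresis index (e.g., the index of Lloyd et al., 2016; whether the thesis uses exactly this variant is an assumption) compares the normalized rising and falling limbs of the C-Q loop at fixed discharge levels:

```python
import numpy as np

def hysteresis_index(q, c):
    """Mean difference between normalized rising- and falling-limb
    concentrations at fixed normalized discharge levels."""
    q, c = np.asarray(q, float), np.asarray(c, float)
    qn = (q - q.min()) / (q.max() - q.min())
    cn = (c - c.min()) / (c.max() - c.min())
    peak = np.argmax(q)
    levels = np.linspace(0.1, 0.9, 9)
    c_rise = np.interp(levels, qn[:peak + 1], cn[:peak + 1])
    # The falling limb runs from high to low Q; reverse it for interpolation.
    c_fall = np.interp(levels, qn[peak:][::-1], cn[peak:][::-1])
    return np.mean(c_rise - c_fall)   # >0: clockwise loop, <0: anticlockwise

q = [1, 2, 4, 8, 6, 3, 1.5]           # synthetic storm hydrograph, m^3/s
c = [5, 6, 7, 6, 4, 3, 2.5]           # synthetic nitrate concentration, mg/L
print(hysteresis_index(q, c))         # positive -> clockwise hysteresis
```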
The capacity of streams to remove nitrate is of both scientific and social interest, which motivates its quantification. Although measurement techniques for nitrate dynamics are advanced compared to those for other substances, methodologies to directly quantify nitrate uptake pathways are still limited in space and time. The major problem is the complex convolution of hydrological and biogeochemical processes, which usually restricts in-situ measurements (e.g., isotope addition) to small streams with steady flow conditions. This makes the extrapolation of nitrate dynamics to large streams highly uncertain. Hence, an understanding of in-stream nitrate dynamics in large rivers is still necessary. High-frequency monitoring of the nitrate mass balance between upstream and downstream measurement sites can quantitatively disentangle multi-path nitrate uptake dynamics at the reach scale (3-8 km). In this dissertation, we applied this approach to large stream reaches with varying hydro-morphological and environmental conditions over several periods, confirming its success in disentangling nitrate uptake pathways and their temporal dynamics. Net nitrate uptake, autotrophic assimilation and heterotrophic uptake were disentangled, along with their various diel and seasonal patterns. Natural streams can generally remove more nitrate under similar environmental conditions, and heterotrophic uptake becomes dominant during post-wet seasons. Such two-station monitoring provided novel insights into reach-scale nitrate uptake processes in large streams.
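The two-station mass balance rests on a simple idea: the nitrate load entering a reach, lagged by the water travel time, minus the load leaving it, normalized by the benthic area, gives the net uptake. A conceptual sketch with invented numbers (not the thesis data):

```python
import numpy as np

dt_h = 0.25                            # 15-min sensor resolution, hours
lag = int(2.0 / dt_h)                  # 2 h water travel time -> 8 samples

q = np.full(200, 3.0)                  # discharge, m^3/s (constant here)
c_up = 4.0 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, 200))  # mg N/L, diel cycle
c_dn = 0.98 * c_up                     # assume 2% of the load removed

load_up = q[:-lag] * c_up[:-lag]       # m^3/s * mg/L = g N/s
load_dn = q[lag:] * c_dn[lag:]         # downstream, shifted by travel time
benthic_area = 5_000.0 * 8.0           # 5 km reach, 8 m wetted width, m^2

uptake = (load_up - load_dn).mean() / benthic_area * 86_400 * 1_000
print(f"net uptake ~ {uptake:.0f} mg N m^-2 d^-1")   # a few hundred
```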
Long-term in-stream nitrate dynamics can also be evaluated with a water quality model. This is among the first applications of a data-model fusion approach to upscale the two-station methodology to large streams with complex flow dynamics under long-term high-frequency monitoring, assessing in-stream nitrate retention and its response to drought disturbances from seasonal to sub-daily scales. Nitrate retention (both net uptake and net release) exhibited substantial seasonality, which also differed between the investigated normal and drought years. In the normal years, winter and early spring exhibited extensive net release; general net uptake then occurred after the annual high-flow season, in late spring and early summer with autotrophic processes dominating, and during the later summer-autumn low-flow periods with heterotrophic characteristics predominating. Net nitrate release occurred from late autumn until the next early spring. In the drought years, the late-autumn net releases did not persist as consistently as in the normal years, and autotrophic processes predominated across seasons. These comprehensive results on stream-scale nitrate dynamics facilitate the understanding of in-stream processes and underline the importance of scientific monitoring schemes for hydrological and water quality parameters.
Completely water-based systems are of interest for the development of novel materials for various reasons: on the one hand, they provide a benign environment for biological systems; on the other hand, they facilitate effective molecular transport in a membrane-free environment. In order to investigate the general potential of aqueous two-phase systems (ATPSs) for biomaterials and compartmentalized systems, various solid particles were applied to stabilize all-aqueous emulsion droplets. The ATPSs investigated here are prepared by mixing two aqueous solutions of water-soluble polymers, which turn biphasic when a critical polymer concentration is exceeded. Hydrophilic polymers with a wide range of molar masses, such as dextran/poly(ethylene glycol) (PEG), can therefore be applied. Solid particles adsorbed at the interfaces can be exceptionally efficient stabilizers, forming so-called Pickering emulsions; nanoparticles can bridge the correlation length of the polymer solutions and are therefore the best option for water-in-water emulsions.
The first approach towards the investigation of ATPSs was conducted with all-aqueous dextran-PEG emulsions in the presence of poly(dopamine) particles (PDP) in Chapter 4. The water-in-water emulsions were formed with a PEG/dextran system utilizing PDP as stabilizers. The formed emulsions were studied via confocal laser scanning microscopy (CLSM), optical microscopy (OM), cryo-scanning electron microscopy (cryo-SEM) and tensiometry. The emulsions, stable for at least 16 weeks, were easily demulsified via dilution or surfactant addition. Furthermore, the solid PDP at the water-water interface were crosslinked in order to inhibit demulsification of the Pickering emulsion. Transmission electron microscopy (TEM) and scanning electron microscopy (SEM) were used to visualize the morphology of PDP before and after crosslinking. PDP-stabilized water-in-water emulsions were utilized in the following Chapter 5 to form supramolecular compartmentalized hydrogels. Here, hydrogels were prepared in pre-formed water-in-water emulsions and gelled via α-cyclodextrin-PEG (α-CD-PEG) inclusion complex formation. The formed complexes were studied via X-ray powder diffraction (XRD), and the mechanical properties of the hydrogels were measured with oscillatory shear rheology. In order to verify the compartmentalized state and its triggered decomposition, hydrogels and emulsions were assessed via OM, SEM and CLSM. The last chapter broadens the investigations from the previous two systems by utilizing various carbon nitrides (CN) as different stabilizers in ATPSs. CN introduces another way to trigger demulsification, namely irradiation with visible light. Therefore, emulsification and demulsification with various triggers were probed. The investigated all-aqueous multi-phase systems will act as models for the future fabrication of biocompatible materials, cell micropatterning, as well as the separation of compartmentalized systems.
Microfabricated solid-state surfaces, also called 'atom chips', have become a well-established technique to trap and manipulate atoms. This has simplified applications in atom interferometry, quantum information processing, and studies of many-body systems. Magnetic trapping potentials with arbitrary geometries are generated with atom chips by miniaturized current-carrying conductors integrated on a solid substrate. Atoms can be trapped and cooled to microkelvin and even nanokelvin temperatures in such microchip traps. However, cold atoms can be significantly perturbed by the chip surface, typically held at room temperature. The magnetic field fluctuations generated by thermal currents in the chip elements may induce spin flips of atoms and result in loss, heating and decoherence. In this thesis, we extend previous work on spin flip rates induced by magnetic noise and consider the more complex geometries that are typically encountered in atom chips: layered structures and metallic wires of finite cross-section. We also discuss a few aspects of atom-chip traps built with superconducting structures, which have been suggested as a means to suppress magnetic field fluctuations. The thesis describes calculations of spin flip rates based on magnetic Green functions that are computed analytically and numerically. For a chip with a top metallic layer, the magnetic noise depends essentially on the thickness of that layer, as long as the layers below have a much smaller conductivity. Based on this result, scaling laws for loss rates above a thin metallic layer are derived. Good agreement with experiments is obtained in the regime where the atom-surface distance is comparable to the skin depth of the metal. Since in experiments metallic layers are always etched to separate wires carrying different currents, the impact of the finite lateral wire size on the magnetic noise has been taken into account. The local spectrum of the magnetic field near a metallic microstructure has been investigated numerically with the help of boundary integral equations. Above flat wires of finite lateral width, the magnetic noise depends significantly on polarization, in stark contrast to an infinitely wide wire. Correlations between multiple wires are also taken into account. In the last part, superconducting atom chips are considered. Magnetic traps generated by superconducting wires in the Meissner state and the mixed state are studied analytically by a conformal mapping method and also numerically. The properties of the traps created by superconducting wires are investigated and compared to normal conducting wires: they behave qualitatively quite similarly and open a route to further trap miniaturization, owing to the advantage of low magnetic noise. We discuss critical currents and fields for several geometries.
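For reference, the skin depth entering this comparison is the standard expression for a normal metal of conductivity σ:

    \delta(\omega) = \sqrt{\frac{2}{\mu_0 \sigma \omega}}

evaluated at the atomic spin flip (Larmor) frequency ω; the regime where the atom-surface distance is comparable to δ is precisely where the good agreement with experiments is reported.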
This thesis aimed to investigate several fundamental and perplexing questions relating to the phloem loading and transport mechanisms of Cucurbita maxima by combining metabolomic analysis with cell biological techniques. This putative symplastic loading species has long been used for experiments on phloem anatomy, phloem biochemistry, phloem transport physiology and phloem signalling. Symplastic loading species have been proposed to use a polymer trapping mechanism to accumulate RFO (raffinose family oligosaccharide) sugars to build up high osmotic pressure in minor veins, which sustains a concentration gradient that drives mass flow. However, extensive evidence indicating a low sugar concentration in their phloem exudates is a long-known problem that conflicts with this hypothesis. Previous metabolomic analysis shows that the concentration of many small molecules in phloem exudates is higher than that in leaf tissues, which indicates an active apoplastic loading step. Therefore, from the perspective of the phloem metabolome, a symplastic loading mechanism cannot explain how small molecules other than RFO sugars are loaded into the phloem. Most studies of phloem physiology using cucurbits have neglected the possible functions of vascular architecture in phloem transport. It is well known that there are two phloem systems in cucurbits with distinctly different anatomical features: central phloem and extrafascicular phloem. However, mistaken conclusions on the sources of cucurbit phloem exudation in previous reports have hindered consideration of the idea that there may be important differences between these two phloem systems. The major results are summarized as follows: 1) O-linked glycans in C. maxima were structurally identified as β-1,3-linked glucose polymers, and the composition of glycans in cucurbits was found to be species-specific. Inter-species grafting experiments proved that these glycans are phloem-mobile and transported unidirectionally from scion to stock. 2) As indicated by stable isotope labelling experiments, a considerable amount of carbon is incorporated into small metabolites in phloem exudates. However, the incorporation of carbon into RFO sugars is much faster than into other metabolites. 3) Both CO2 labelling experiments and comparative metabolomic analysis of phloem exudates and leaf tissues indicated that metabolic processes other than RFO sugar metabolism play an important role in cucurbit phloem physiology. 4) The underlying assumption that the central phloem of cucurbits continuously releases exudates after physical incision was proved wrong by rigorous experiments, including direct observation by normal microscopy and combined multiple-microscopy methods. Errors in previous experimental confirmations of phloem exudation in cucurbits are critically discussed. 5) The extrafascicular phloem was proved to be functional, as indicated by phloem-mobile carboxyfluorescein tracer studies. Commissural sieve tubes interconnect the phloem bundles into a complete super-symplastic network. 6) The extrafascicular phloem represents the main source of exudates following physical incision. The major metabolites transported by this extrafascicular phloem are non-sugar compounds, including amino acids, O-glycans and amines. 7) The central phloem contains almost exclusively RFO sugars, at estimated concentrations of up to 1-2 molar. The major RFO sugar present in the central phloem is stachyose.
8) Cucurbits utilize two structurally different phloem systems for transporting different groups of metabolites (RFO sugars and non-RFO compounds). This implies that cucurbits may use spatially separated loading mechanisms (apoplastic loading for the extrafascicular phloem and symplastic loading for the central phloem) for the supply of nutrients to sinks. 9) Along the transport systems, RFO sugars were mainly distributed within central phloem tissues. Only small amounts of RFO sugars were present in xylem tissues (millimolar range), and trace amounts in cortex and pith. The composition of small molecules in the external central phloem is very different from that in the internal central phloem. 10) Aggregated P-proteins were manually dissected from the central phloem and analysed by both SDS-PAGE and mass spectrometry. Partial peptide sequences were obtained by QTOF de novo sequencing from trypsin digests of three SDS-PAGE bands. None of these partial sequences shows significant homology to known cucurbit phloem proteins or other plant proteins. This proves that these central phloem proteins are a completely new group of proteins, different from those in the extrafascicular phloem. The extensively analysed P-proteins reported in the literature to date are therefore now shown to arise from the extrafascicular phloem and not the central phloem, and thus do not appear to be involved in occlusion processes in the central phloem.
In the present thesis I investigate the lattice dynamics of thin-film heterostructures of magnetically ordered materials upon femtosecond laser excitation, as a probing and manipulation scheme for the spin system. The quantitative assessment of laser-induced thermal dynamics, as well as of generated picosecond acoustic pulses and their respective impact on the magnetization dynamics of thin films, is a challenging endeavor. This makes the development and implementation of effective experimental tools and comprehensive models all the more paramount to propel future academic and technological progress.
In all experiments in the scope of this cumulative dissertation, I examine the crystal lattice of nanoscale thin films upon excitation with femtosecond laser pulses. The relative change of the lattice constant due to thermal expansion or picosecond strain pulses is directly monitored by an ultrafast X-ray diffraction (UXRD) setup with a femtosecond laser-driven plasma X-ray source (PXS). Phonons and spins alike exert stress on the lattice, which responds according to the elastic properties of the material, rendering the lattice a versatile sensor for all sorts of ultrafast interactions. On the one hand, I investigate materials with strong magneto-elastic properties: the highly magnetostrictive rare-earth compound TbFe2, elemental dysprosium, and the technologically relevant Invar material FePt. On the other hand, I conduct a comprehensive study of the lattice dynamics of Bi1Y2Fe5O12 (Bi:YIG), which according to the literature exhibits high-frequency coherent spin dynamics upon femtosecond laser excitation. Higher-order standing spin waves (SSWs) are triggered by coherent and incoherent motion of atoms, in other words phonons, which I quantify with UXRD. We are able to unite the experimental observations of the lattice and magnetization dynamics qualitatively and quantitatively. This is done with a combination of multi-temperature, elastic, magneto-elastic, anisotropy and micro-magnetic modeling.
The collective data from UXRD, probing the lattice, and time-resolved magneto-optical Kerr effect (tr-MOKE) measurements, monitoring the magnetization, were previously collected at different experimental setups. To improve the precision of the quantitative assessment of lattice and magnetization dynamics alike, our group combined UXRD and tr-MOKE in a single experimental setup, which is, to my knowledge, the first of its kind. I helped with the conception and commissioning of this novel experimental station, which allows the simultaneous observation of lattice and magnetization dynamics on an ultrafast timescale under identical excitation conditions. Furthermore, I developed a new X-ray diffraction measurement routine which reduces the measurement time of UXRD experiments by up to an order of magnitude. It is called reciprocal space slicing (RSS) and utilizes an area detector to monitor the angular motion of X-ray diffraction peaks, which is associated with lattice constant changes, without a time-consuming scan of the diffraction angles with the goniometer. RSS is particularly useful for ultrafast diffraction experiments, since measurement time at large-scale facilities like synchrotrons and free-electron lasers is a scarce and expensive resource. However, RSS is not limited to ultrafast experiments and can even be extended to other diffraction techniques with neutrons or electrons.
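The conversion underlying RSS follows from differentiating Bragg's law (a standard result; detector-geometry factors are omitted here):

    \lambda = 2 d \sin\theta \quad \Rightarrow \quad \frac{\Delta d}{d} = -\cot\theta_B \, \Delta\theta

so a transient lattice strain Δd/d appears as an angular shift Δθ of the diffraction peak around the Bragg angle θ_B, which the area detector records as a displacement of the peak without any goniometer motion.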
The aim of this thesis is the quantum dynamical study of two examples of scanning tunneling microscope (STM)-controllable, Si(100)(2x1) surface-mounted switches of atomic and molecular scale. The first example considers the switching of single H atoms between two dangling-bond chemisorption sites on a Si dimer of the Si(100) surface (Grey et al., 1996). The second system examines the conformational switching of single 1,5-cyclooctadiene molecules chemisorbed on the Si(100) surface (Nacci et al., 2008). The temporal dynamics are obtained by propagating the density matrix in time via a corresponding set of equations of motion (EQM). The latter are based on open-system density matrix theory in Lindblad form. First-order perturbation theory is used to evaluate the transition rates between vibrational levels of the system part. In order to account for interactions with the surface phonons, two different dissipative models are used, namely the bilinear, harmonic model and the Ohmic bath model. Vibrational transitions in the system induced by inelastic electron tunneling (IET) are due to the dipole and the resonance mechanism. A single-surface approach is used to study the influence of dipole scattering and resonance scattering in the below-threshold regime. Further, a second electronic surface was included to study the resonance-induced switching in the above-threshold regime. Static properties of the adsorbate, e.g., potentials and dipole functions, are obtained from quantum chemistry and used within the established quantum dynamical models.
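The Lindblad-form equations of motion mentioned above have the generic structure (standard open-system form; the system Hamiltonian H and the Lindblad operators L_k specific to this thesis are not given in the abstract):

    \dot{\rho} = -\frac{i}{\hbar} [H, \rho] + \sum_k \left( L_k \rho L_k^\dagger - \tfrac{1}{2} \{ L_k^\dagger L_k, \rho \} \right)

where the L_k encode the dissipative coupling to the surface phonons and the IET-induced vibrational transitions.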
The functional characterization of therapeutically relevant proteins can be limited simply by the provision of the target protein in adequate amounts. This applies particularly to membrane proteins, which, owing to cytotoxic effects on the production cell line and a tendency to form aggregates, can result in low yields of active protein. The living organism can be bypassed by using translationally active cell lysates, the basis of cell-free protein synthesis. At the beginning of this work, the ATP-dependent translation of a lysate based on cultured insect cells (Sf21) was analyzed. For this purpose, an ATP-binding aptamer was employed through which the translation of nanoluciferase could be regulated. Through the demonstrated application of aptamers, they could in future be used in cell-free systems to visualize transcription and translation, allowing, for example, complex processes to be validated.
Beyond protein production itself, factors such as post-translational modifications and integration into a lipid membrane can be essential for the functionality of a membrane protein. In the second part, both integration into the endogenous endoplasmic reticulum-derived membrane structures and glycosylation were identified in the cell-free Sf21 system for the G protein-coupled receptor endothelin B.
Building on the successful synthesis of the ET-B receptor, various methods for fluorescence labeling of the adenosine A2a receptor (Adora2a) were applied and optimized. In the third part, Adora2a was labeled in the cell-free Chinese hamster ovary (CHO) system with the help of a precharged tRNA coupled to a fluorescent amino acid. In addition, using a modified tRNA/aminoacyl-tRNA synthetase pair, a non-canonical amino acid was incorporated into the polypeptide chain at the position of an integrated amber stop codon, and the functional group was subsequently coupled to a fluorescent dye. Owing to their open nature, cell-free protein synthesis systems are particularly well suited to integrating exogenous components into the translation process. Using the fluorescence label, a ligand-mediated conformational change in Adora2a was detected via bioluminescence resonance energy transfer. Through the establishment of amber suppression, the hormone erythropoietin was furthermore PEGylated, which altered protein properties such as stability and half-life.
Finally, a new tRNA/aminoacyl-tRNA synthetase pair based on the Methanosarcina mazei pyrrolysine synthetase was established in order to expand the repertoire of non-canonical amino acids and the coupling reactions associated with them. In summary, the potential of cell-free systems for the production of complex membrane proteins and for their characterization through site-specific fluorescence labeling was demonstrated, opening up new possibilities for the analysis and functionalization of complex proteins.
The mammalian brain, with its numerous neural elements and structured complex connectivity, is one of the most complex systems in nature. Recently, large-scale corticocortical connectivities, both structural and functional, have received a great deal of research attention, especially using the approach of complex networks. Here, we try to shed some light on the relationship between structural and functional connectivities by studying synchronization dynamics in a realistic anatomical network of cat cortical connectivity. We model the cortical areas by a subnetwork of interacting excitable neurons (multilevel model) and by a neural mass model (population model). With weak couplings, the multilevel model displays biologically plausible dynamics, and the synchronization patterns reveal a hierarchical cluster organization in the network structure. We can identify a group of brain areas involved in multifunctional tasks by comparing the dynamical clusters to the topological communities of the network. With strong couplings in the multilevel model, and when using the neural mass model, the dynamics are characterized by well-defined oscillations. The synchronization patterns are mainly determined by the node intensity (the total input strength of a node); the detailed network topology is of secondary importance. The biologically improved multilevel model exhibits similar dynamical patterns in the two regimes. Thus, the study of synchronization in a multilevel complex network model of cortex can provide insights into the relationship between network topology and functional organization of complex brain networks.
The Greenland Ice Sheet is the second-largest mass of ice on Earth. Being almost 2000 km long, more than 700 km wide, and more than 3 km thick at the summit, it holds enough ice to raise global sea levels by 7 m if melted completely. Despite its massive size, it is particularly vulnerable to anthropogenic climate change: temperatures over the Greenland Ice Sheet have increased by more than 2.7 °C in the past 30 years, twice as much as the global mean temperature. Consequently, the ice sheet has been significantly losing mass since the 1980s and the rate of loss has increased sixfold since then. Moreover, it is one of the potential tipping elements of the Earth System, which might undergo irreversible change once a warming threshold is exceeded. This thesis aims at extending the understanding of the resilience of the Greenland Ice Sheet against global warming by analyzing processes and feedbacks relevant to its centennial to multi-millennial stability using ice sheet modeling.
One of these feedbacks, the melt-elevation feedback, is driven by the rise of air temperature with decreasing altitude: as the ice sheet melts, its thickness and surface elevation decrease, exposing the ice surface to warmer air and thus increasing the melt rates even further. Glacial isostatic adjustment (GIA) can partly mitigate this melt-elevation feedback, as the bedrock lifts in response to a decreasing ice load, forming the negative GIA feedback. In my thesis, I show that the interaction between these two competing feedbacks can lead to qualitatively different dynamical responses of the Greenland Ice Sheet to warming, from permanent loss to incomplete recovery, depending on the feedback parameters. My research shows that the interaction of those feedbacks can initiate self-sustained oscillations of the ice volume while the climate forcing remains constant.
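The competition between these two feedbacks can be caricatured in a few lines of Python; this is a conceptual sketch only (not PISM and not the thesis model), and all parameter values are illustrative assumptions:

    # Conceptual sketch of the melt-elevation vs. GIA feedback competition.
    # Not PISM and not the thesis model; all parameters are illustrative.
    dt = 10.0                   # time step [yr]
    lapse = 0.006               # atmospheric lapse rate [K/m]
    melt_per_K = 0.5            # melt sensitivity [m ice / yr / K]
    acc = 0.3                   # constant accumulation [m ice / yr]
    tau_gia = 3000.0            # bedrock relaxation time [yr]
    rho_ratio = 910.0 / 3300.0  # ice/mantle density ratio
    h, b = 3000.0, 0.0          # ice thickness, bedrock anomaly [m]
    h0 = h                      # reference thickness for the initial load
    T_warm = 12.0               # step warming applied at t = 0 [K]

    for step in range(5000):    # 50,000 years
        surface = b + h
        T = -10.0 + T_warm + lapse * (h0 - surface)  # melt-elevation feedback
        melt = max(melt_per_K * T, 0.0)
        # bedrock relaxes toward isostatic equilibrium for the current load (GIA)
        b += (-rho_ratio * (h - h0) - b) / tau_gia * dt
        h = max(h + (acc - melt) * dt, 0.0)

    print(f"after 50 kyr: ice thickness {h:.0f} m, bedrock uplift {b:.0f} m")

Depending on the chosen feedback strengths and the relaxation time, such a toy system can recover, collapse, or oscillate, which is the qualitative behavior described above.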
Furthermore, increased surface melt changes the optical properties of the snow or ice surface, e.g. by lowering its albedo, which in turn enhances melt rates, a process known as the melt-albedo feedback. Process-based ice sheet models often neglect this melt-albedo feedback. To close this gap, I implemented a simplified version of the diurnal Energy Balance Model, a computationally efficient approach that can capture the first-order effects of the melt-albedo feedback, into the Parallel Ice Sheet Model (PISM). Using the coupled model, I show in warming experiments that the melt-albedo feedback almost doubles the ice loss until the year 2300 under the low greenhouse gas emission scenario RCP2.6, compared to simulations where the melt-albedo feedback is neglected,
and adds up to 58% additional ice loss under the high emission scenario RCP8.5. Moreover, I find that the melt-albedo feedback dominates the ice loss until 2300, compared to the melt-elevation feedback.
Another process that could influence the resilience of the Greenland Ice Sheet is the warming-induced softening of the ice and the resulting increase in flow. In my thesis, I show with PISM how the uncertainty in Glen's flow law impacts the simulated response to warming. In a flow-line setup at fixed climatic mass balance, the uncertainty in the flow parameters leads to a range of ice loss comparable to the range caused by different warming levels.
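Glen's flow law referred to here relates the strain rate to the deviatoric stress (standard form; the thesis explores the uncertainty in its parameters):

    \dot{\varepsilon} = A \, \tau^{n}

with stress exponent n ≈ 3 and a softness factor A that increases with ice temperature, which is why warming-induced softening accelerates the flow.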
While I focus on fundamental processes, feedbacks, and their interactions in the first three projects of my thesis, I also explore the impact of specific climate scenarios on the sea level rise contribution of the Greenland Ice Sheet. To increase the carbon budget flexibility, some warming scenarios, while still staying within the limits of the Paris Agreement, include a temporal overshoot of global warming. I show that an overshoot by 0.4 °C increases the short-term and long-term ice loss from Greenland by several centimeters. The long-term increase is driven by the warming at high latitudes, which persists even when global warming is reversed. This leads to a substantial long-term commitment of the sea level rise contribution from the Greenland Ice Sheet.
Overall, in my thesis I show that the melt-albedo feedback is most relevant for the ice loss of the Greenland Ice Sheet on centennial timescales. In contrast, the melt-elevation feedback and its interplay with the GIA feedback become increasingly relevant on millennial timescales. All of these feedbacks influence the resilience of the Greenland Ice Sheet against global warming, both in the near future and over the long term.
This study investigated the relationships between the sexual experiences of young women and men, their personality traits, and their attitudes toward sexual morality on the one hand, and their assessment of their sexual agency on the other. The model of sexual agency was based on the ideas of the working group around Matthias Grundmann (Grundmann et al. 2006) and of Emirbayer and Mische (1998). The model of sexual agency developed in this study is a multidimensional construct composed of the components "sexual communication", "sexual satisfaction", "sexual reciprocity" and "sexual self-responsibility". "Sexual communication" encompasses the ability to express one's sexual desires. "Sexual satisfaction" describes the degree of satisfaction with one's own sex life. "Sexual reciprocity" refers to the ability both to accept and to give sexual attention. "Sexual self-responsibility", finally, emphasizes the assessment of the extent to which one's own sexuality can be shaped in a self-determined way. Following Emirbayer and Mische, the sexual experiences of the women and men are regarded as correlates of the assessment of the dimensions of sexual agency. Following Grundmann et al., various personality traits and attitudes toward sexual morality likewise permit statements about sexual agency. To examine the topic of sexual agency empirically, 695 young people from Potsdam aged 19 to 21 were surveyed in 2006 about their sexual and relationship experiences in a standardized survey. The empirical analyses support a co-constructive view of the development of sexual agency: it does not arise in the individual alone, but within the processes of interaction and negotiation between the individual and the others of his or her social and sexual environment. Both the experiences of one's sexual biography and the personality traits of each individual prove to be important, whereas the attitudes toward sexual morality surveyed appear to be of only minor importance.
Seismological and seismotectonic analysis of the northwestern Argentine Central Andean foreland
(2020)
After a severe Mw 5.7 earthquake on October 17, 2015 in El Galpón, in the province of Salta, NW Argentina, I installed a local seismological network around the estimated epicenter. The network covered an area characterized by inherited Cretaceous normal faults and neotectonic faults with unknown recurrence intervals, some of which may be reactivated normal faults. The 13 three-component seismic stations recorded data continuously for 15 months.
The 2015 earthquake took place in the Santa Bárbara System of the Andean foreland, at about 17 km depth. This region is the easternmost morphostructural region of the central Andes. As part of the broken foreland, it is bounded to the north by the Subandean fold-and-thrust belt and to the south by the Sierras Pampeanas; to the east lies the Chaco-Paraná basin.
A multi-stage morphotectonic evolution with thick-skinned basement uplift and coeval thin-skinned deformation in the intermontane basins is suggested for the study area. The release of stresses associated with the foreland deformation can result in strong earthquakes, and the study area is known for recurrent, historically destructive earthquakes. The historical record reaches back to 1692, when the strongest documented event (magnitude 7, or intensity IX) destroyed the city of Esteco. Destructive earthquakes and surface deformation are thus a hallmark of this part of the Andean foreland.
With state-of-the-art Python packages (e.g. pyrocko, ObsPy), a semi-automatic approach is followed to analyze the continuous data collected by the seismological network. The resulting 1435 hypocenter locations fall into three different groups: 1) local crustal earthquakes (nearly half of the events belong to this group), 2) events at regional distances within the subducting Nazca plate, and 3) very deep earthquakes at about 600 km depth. My main interest focused on the first event class. These crustal events are partly aftershocks of the El Galpón earthquake and of a second earthquake in the south of the same fault. Further events can be considered background seismicity of other faults within the study area. Strikingly, the seismogenic zone encompasses the whole crust, with brittle deformation propagating down close to the Moho.
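The following lines sketch the kind of detection step used in such semi-automatic workflows; the file name, filter band and trigger thresholds are illustrative assumptions, not the parameters of this thesis:

    # Sketch of an STA/LTA detection pass with ObsPy (illustrative parameters).
    from obspy import read
    from obspy.signal.trigger import classic_sta_lta, trigger_onset

    tr = read("station.mseed")[0]        # hypothetical file name
    tr.detrend("demean")
    tr.filter("bandpass", freqmin=2.0, freqmax=20.0)

    df = tr.stats.sampling_rate
    cft = classic_sta_lta(tr.data, int(1 * df), int(20 * df))  # 1 s STA, 20 s LTA
    for on, off in trigger_onset(cft, 3.0, 1.0):               # on/off thresholds
        print(tr.stats.starttime + on / df)                    # candidate onsets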
From the collected seismological data, a local seismic velocity model is estimated using VELEST. After various stability tests, the robust minimum 1D velocity model provides guiding values for the composition of the local subsurface structure of the crust. Subsequent hypocenter relocation enables the assignment of individual earthquakes to aftershock clusters or extended seismotectonic structures, which allows the mapping of previously unknown seismogenic faults.
Finally, focal mechanisms are modeled for events with accurately located hypocenters, using the newly derived local velocity model. A compressive regime is attested by the majority of focal mechanisms, while the strike direction of the individual seismogenic structures is in agreement with the overall north-south orientation of the Central Andes, its mountain front, and individual mountain ranges in the southern Santa Bárbara System.
This work describes the realization of physically crosslinked networks based on gelatin by the introduction of functional groups enabling specific supramolecular interactions. Molecular models were developed in order to predict the material properties and to establish a knowledge-based approach to material design. The effect of additional supramolecular interactions with hydroxyapatite was then studied in composite materials. The calculated properties are compared to experimental results to validate the models, which are then further used for the study of physically crosslinked networks. Gelatin was functionalized with desaminotyrosine (DAT) and desaminotyrosyl-tyrosine (DATT) side groups, derived from the natural amino acid tyrosine. These groups can potentially undergo π-π and hydrogen-bonding interactions, also under physiological conditions. Molecular dynamics (MD) simulations were performed on models with 0.8 wt.-% or 25 wt.-% water content, using the second-generation force field CFF91. The models were validated by comparison with specific experimental data such as density, peptide conformational angles and X-ray scattering spectra. The models were then used to predict the supramolecular organization of the polymer chains, analyze the formation of physical netpoints and calculate the mechanical properties. An important finding of the simulations was that the number of observed physical netpoints increased with the number of aromatic groups: the number of relatively stable physical netpoints, on average 0 for natural gelatin, increased to 1 and 6 for DAT- and DATT-functionalized gelatins, respectively. A comparison with the Flory-Rehner model suggested a sixfold reduction of the equilibrium swelling of the DATT-functionalized materials in water. The functionalized gelatins could be synthesized by chemoselective coupling of the free carboxylic acid groups of DAT and DATT to the free amino groups of gelatin. At 25 wt.-% water content, the simulated and experimentally determined elastic mechanical properties (e.g. Young's modulus) were both on the order of GPa and were not influenced by the degree of aromatic modification. The experimental equilibrium degree of swelling in water decreased with increasing number of inserted aromatic functions (from 2800 vol.-% for pure gelatin to 300 vol.-% for the DATT-modified gelatin); at the same time, Young's modulus, elongation at break, and maximum tensile strength increased. It could be shown that the functionalization with DAT and DATT, together with controlled drying conditions, influences the chain organization of gelatin-based materials. Functionalization with DAT and DATT led to a drastic reduction of helical renaturation, which could be controlled more finely by the applied drying conditions. The properties of the materials could thus be influenced by two independent methods. Composite materials of DAT- and DATT-functionalized gelatins with hydroxyapatite (HAp) show a drastic reduction of the degree of swelling. In tensile tests and rheological measurements, the composites equilibrated in water had increased Young's moduli (from 200 kPa up to 2 MPa) and tensile strength (from 57 kPa up to 1.1 MPa) compared to the natural polymer matrix, without affecting the elongation at break. Furthermore, an increase of the thermal stability of the networks from 40 °C to 85 °C could be demonstrated.
The differences in behaviour between the functionalized gelatins and pure gelatin as matrix suggest additional stabilizing bonds between the incorporated aromatic groups and the hydroxyapatite.
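For context, the Flory-Rehner relation used for the swelling comparison balances mixing and elastic contributions at equilibrium; in one common (affine) form, with polymer volume fraction φ, interaction parameter χ, solvent molar volume V_1 and effective crosslink density n_e (generic textbook notation, not the parametrization of this work):

    \ln(1 - \varphi) + \varphi + \chi \varphi^2 + V_1 n_e \left( \varphi^{1/3} - \tfrac{\varphi}{2} \right) = 0

so a higher density of (physical) netpoints n_e predicts a lower equilibrium swelling, consistent with the simulated sixfold reduction.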
This thesis focuses on the study of marked Gibbs point processes, in particular presenting some results on their existence and uniqueness, with ideas and techniques drawn from different areas of statistical mechanics: the entropy method from large deviations theory, cluster expansion and the Kirkwood-Salsburg equations, the Dobrushin contraction principle, and disagreement percolation.
We first present an existence result for infinite-volume marked Gibbs point processes. More precisely, we use the so-called entropy method (and large-deviation tools) to construct marked Gibbs point processes in R^d under quite general assumptions. In particular, the random marks belong to a general normed space S and are not bounded. Moreover, we allow for interaction functionals that may be unbounded and whose range is finite but random. The entropy method relies on showing that a family of finite-volume Gibbs point processes belongs to sequentially compact entropy level sets, and is therefore tight.
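In schematic form, the finite-volume Gibbs point processes entering this construction are absolutely continuous with respect to a reference (marked) Poisson point process π_Λ (generic textbook formulation; the mark space and the interaction H are as specified in the thesis):

    \frac{\mathrm{d}P_\Lambda}{\mathrm{d}\pi_\Lambda}(\omega) = \frac{z^{N(\omega)}}{Z_\Lambda} \, e^{-\beta H(\omega)}

with activity z, inverse temperature β, number of points N(ω), interaction energy H(ω), and normalizing partition function Z_Λ.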
We then present infinite-dimensional Langevin diffusions, which we put in interaction via a Gibbsian description. In this setting, we are able to adapt the general result above to show the existence of the associated infinite-volume measure. We also study its correlation functions via cluster expansion techniques, and obtain the uniqueness of the Gibbs process for all inverse temperatures β and activities z below a certain threshold. This method relies on first showing that the correlation functions of the process satisfy a so-called Ruelle bound, and then using it to solve a fixed point problem in an appropriate Banach space. The uniqueness domain we obtain then consists of the model parameters z and β for which this problem has exactly one solution.
Finally, we explore further the question of uniqueness of infinite-volume Gibbs point processes on R^d, in the unmarked setting. We present, in the context of repulsive interactions with a hard-core component, a novel approach to uniqueness by applying the discrete Dobrushin criterion to the continuum framework. We first fix a discretisation parameter a>0 and then study the behaviour of the uniqueness domain as a goes to 0. With this technique we are able to obtain explicit thresholds for the parameters z and β, which we then compare to existing results coming from the different methods of cluster expansion and disagreement percolation.
Throughout this thesis, we illustrate our theoretical results with various examples both from classical statistical mechanics and stochastic geometry.
Taking advantage of ATRP and using functionalized initiators, different functionalities were introduced at both the α and ω chain ends of synthetic polymers. These functionalized polymers could then undergo modular synthetic pathways such as click cycloaddition (copper-catalyzed or copper-free) or amidation to couple synthetic polymers to other synthetic polymers, biomolecules or silica monoliths. Using this general strategy and designing these co/polymers so that they are thermoresponsive, yet bioinert and biocompatible, with adjustable cloud point values (as is the case in the present thesis), the whole generated system becomes "smart" and potentially applicable in different fields. The applications considered in the present thesis were polymer post-functionalization (in situ functionalization of micellar aggregates with low and high molecular weight molecules), hydrophilic/hydrophobic tuning, chromatography and bioconjugation (enzyme thermoprecipitation and recovery, improvement of enzyme activity). Different α-functionalized co/polymers containing a cholesterol moiety, aldehyde, t-Boc-protected amine, TMS-protected alkyne or NHS-activated ester were designed and synthesized in this work.
Interactions and feedbacks between tectonics, climate, and upper-plate architecture control basin geometry, relief, and depositional systems. The Andes are part of a long-lived continental margin characterized by multiple tectonic cycles which have strongly modified the Andean upper-plate architecture. In the Andean retroarc, spatiotemporal variations in the structure of the upper plate and in tectonic regimes have resulted in marked along-strike variations in basin geometry, stratigraphy, deformational style, and mountain belt morphology. These along-strike variations include high-elevation plateaus (Altiplano and Puna) associated with a thin-skinned fold-and-thrust belt, and thick-skinned deformation in broken foreland basins such as the Santa Barbara system and the Sierras Pampeanas. At the confluence of the Puna Plateau, the Santa Barbara system and the Sierras Pampeanas, major along-strike changes in upper-plate architecture, mountain belt morphology, basement exhumation, and deformation style can be recognized. I have used a source-to-sink approach to unravel the spatiotemporal tectonic evolution of the Andean retroarc between 26 and 28°S. I obtained a large low-temperature thermochronology data set from basement units, which includes apatite fission track, apatite U-Th-Sm/He, and zircon U-Th/He (ZHe) cooling ages. Stratigraphic descriptions of Miocene units were temporally constrained by U-Pb LA-ICP-MS zircon ages from interbedded pyroclastic material.
Modeled ZHe ages suggest that the basement of the study area was exhumed during the Famatinian orogeny (550-450 Ma), followed by a period of relative tectonic quiescence during the Paleozoic and the Triassic. The basement experienced horst exhumation during the Cretaceous development of the Salta rift. After initial exhumation, deposition of thick Cretaceous syn-rift strata caused reheating of several basement blocks within the Santa Barbara system. During the Eocene-Oligocene, the Andean compressional setting was responsible for the exhumation of several disconnected basement blocks. These exhumed blocks were separated by areas of low relief, in which a humid climate and low erosion rates facilitated the development of etchplains on the crystalline basement. The exhumed basement blocks formed an Eocene to Oligocene broken foreland basin in the back-bulge depozone of the Andean foreland. During the Early Miocene, foreland basin strata filled up the preexisting Paleogene topography. The basement blocks in lower-relief positions were reheated; the associated geothermal gradients were higher than 25°C/km. Miocene volcanism was responsible for lateral variations in the amount of reheating along the Campo-Arenal basin. Around 12 Ma, a new deformational phase modified the drainage network and fragmented the lacustrine system. As deformation and rock uplift continued, the easily eroded sedimentary cover was efficiently removed and reworked by an ephemeral fluvial system, preventing the development of significant relief. After ~6 Ma, the low erodibility of the newly exposed basement blocks caused an increase in relief, leading to the development of stable fluvial systems. Progressive relief development modified atmospheric circulation, creating a rainfall gradient. After 3 Ma, orographic rainfall and high relief led to the development of proximal fluvial-gravitational depositional systems in the surrounding basins.
The recent discovery of an intricate and nontrivial interaction topology among the elements of a wide range of natural systems has altered the way we understand complexity. For example, the axonal fibres transmitting electrical information between cortical regions form a network which is neither regular nor completely random. Their structure seems to follow functional principles balancing segregation (functional specialisation) and integration. Cortical regions are clustered into modules specialised in processing different kinds of information, e.g. visual or auditory. However, in order to generate a global perception of the real world, the brain needs to integrate the distinct types of information. Where this integration happens, nobody knows. We have performed an extensive and detailed graph theoretical analysis of the cortico-cortical organisation in the brain of cats, trying to relate the individual and collective topological properties of the cortical areas to their function. We conclude that the cortex possesses a very rich communication structure, composed of a mixture of parallel and serial processing paths capable of accommodating dynamical processes with a wide variety of time scales. The communication paths between the sensory systems are not random, but largely mediated by a small set of areas. Far from acting as mere transmitters of information, these central areas are densely connected to each other, strongly indicating their functional role as integrators of multisensory information. In the quest to uncover the structure-function relationship of cortical networks, the peculiarities of this network have led us to continuously reconsider established graph measures. For example, a normalised formalism to identify the "functional roles" of vertices in networks with community structure is proposed. The tools developed for this purpose open the door to novel community detection techniques which may also characterise the overlap between modules. The concept of integration has been revisited and adapted to the necessities of the network under study. Additionally, analytical and numerical methods have been introduced to facilitate understanding of the complicated statistical interrelations between the distinct network measures. These methods are helpful for constructing new significance tests which may discriminate the relevant properties of real networks from side effects of evolutionary growth processes.
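As an illustration of the "functional roles" idea, the participation coefficient of a vertex measures how evenly its links are spread over the modules; the sketch below uses standard networkx tools and a stand-in graph, not the thesis' normalised formalism or the cat cortical data:

    # Participation coefficient P_i = 1 - sum_s (k_is / k_i)^2 over modules s.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.karate_club_graph()  # stand-in for a cortical network
    module_of = {v: i for i, c in enumerate(greedy_modularity_communities(G))
                 for v in c}

    def participation(v):
        k = G.degree(v)
        counts = {}
        for u in G.neighbors(v):
            counts[module_of[u]] = counts.get(module_of[u], 0) + 1
        return 1.0 - sum((ks / k) ** 2 for ks in counts.values()) if k else 0.0

    # vertices with high participation link many modules: candidate integrators
    print(sorted(G.nodes, key=participation, reverse=True)[:5])

Vertices scoring high connect many modules at once, which is the network signature of the integrator areas described above.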
Modern anthropogenic forcing of atmospheric chemistry poses the question of how the Earth System will respond as thousands of gigatons of greenhouse gas are rapidly added to the atmosphere. A similar, albeit non-anthropogenic, situation occurred during the early Paleogene, when the catastrophic release of carbon to the atmosphere triggered an abrupt increase in global temperatures. The best documented of these events is the Paleocene-Eocene Thermal Maximum (PETM, ~55 Ma), when the magnitude of carbon addition to the oceans and atmosphere was similar to that expected for the future. This event initiated global warming, changes in hydrological cycles, biotic extinctions and migrations. A recently proposed hypothesis concerning changes in marine ecosystems suggests that this global warming strongly influenced the shallow-water biosphere, triggering extinctions and turnover in the Larger Foraminifera (LF) community and the demise of corals. The successions of the Adriatic Carbonate Platform (SW Slovenia) represent an ideal location to test the hypothesis of a possible causal link between the PETM and the evolution of shallow-water organisms, because they record continuous sedimentation from the Late Paleocene to the Early Eocene and are characterized by a rich biota, especially LF, fundamental for detailed biostratigraphic studies. In order to reconstruct the paleoenvironmental conditions during deposition, I focused on sedimentological analysis and a paleoecological study of the benthic assemblages. During the Late Paleocene to earliest Eocene, sedimentation occurred on a shallow-water carbonate ramp system characterized by enhanced nutrient levels. LF represent the common constituent of the benthic assemblages that thrived in this setting throughout the Late Paleocene to the Early Eocene. With detailed biostratigraphic and chemostratigraphic analyses documenting the most complete record available to date for the PETM event in a shallow-water marine environment, I correlated chemostratigraphically for the first time the evolution of LF with the δ¹³C curves. This correlation demonstrated that no major turnover in the LF communities occurred synchronously with the PETM; thus the evolution of LF was mainly controlled by endogenous biotic forces. The study of Late Thanetian metre-scale microbialite-coral mounds, which developed in the middle part of the ramp, documented the first Cenozoic occurrence of microbially cemented mounds. The development of these mounds, with temporary dominance of microbial communities over corals, suggests environmentally triggered "phase shifts" related to frequent fluctuations of nutrient/turbidity levels during recurrent wet phases which preceded the extreme greenhouse conditions of the PETM. The paleoecological study of the coral community in the microbialite-coral mounds, the study of corals from an Early Eocene platform in SW France, and a critical, extensive literature review of Late Paleocene to Early Eocene coral occurrences from the Tethyan, Atlantic and Caribbean realms suggested that these coral types, even if not forming extensive reefs, are common in the biofacies as small isolated colonies, piles of rubble or small patch reefs. These corals might have developed 'alternative' life strategies to cope with harsh conditions (high/fluctuating nutrients/turbidity, extreme temperatures, perturbation of the aragonite saturation state) during the greenhouse times of the early Paleogene, representing a good fossil analogue to modern corals thriving close to their thresholds for survival.
These results demonstrate the complexity of the biological responses to extreme conditions, not only in terms of temperature but also of nutrient supply, physical disturbance, and their temporal variability and oscillating character.
Volcanoes are among the Earth's most dynamic zones and responsible for many changes on our planet. Volcano seismology aims to provide an understanding of the physical processes in volcanic systems and to anticipate the style and timing of eruptions by analyzing seismic records. Volcanic tremor signals are usually observed in the seismic records before or during volcanic eruptions. Their analysis contributes to evaluating the evolving volcanic activity and potentially to predicting eruptions. Years of continuous seismic monitoring now provide useful information for operational eruption forecasting. The continuously growing amount of seismic recordings, however, poses a challenge for analysis, information extraction, and interpretation in support of timely decision making during volcanic crises. Furthermore, the complexity of eruption processes and precursory activities makes the analysis challenging.
A challenge in studying seismic signals of volcanic origin is the coexistence of transient signal swarms and long-lasting volcanic tremor signals. Separating transient events from volcanic tremors can therefore contribute to improving our understanding of the underlying physical processes. Similar issues (data reduction, source separation, extraction, and classification) are addressed in the context of music information retrieval (MIR), and the signal characteristics of acoustic and seismic recordings share a number of similarities. This thesis goes beyond the classical signal analysis techniques usually employed in seismology by exploiting these similarities and building the information retrieval strategy on the expertise developed in the field of MIR.
First, inspired by the idea of harmonic-percussive separation (HPS) in musical signal processing, I have developed a method to extract harmonic volcanic tremor signals and to detect transient events in seismic recordings. This provides a clean tremor signal suitable for tremor investigation, along with a characteristic function suitable for earthquake detection. Second, using HPS algorithms, I have developed a noise reduction technique for seismic signals. This method is especially useful for denoising ocean-bottom seismometer data, which are highly contaminated by noise. The advantage of this method compared to other denoising techniques is that it does not introduce distortion into broadband earthquake waveforms, which makes it reliable for different applications in passive seismological analysis. Third, to address the challenge of extracting information from high-dimensional data and investigating complex eruptive phases, I have developed an advanced machine learning model that results in a comprehensive signal processing scheme for volcanic tremors. Using this method, seismic signatures of major eruptive phases can be detected automatically, which helps to establish a chronology of the volcanic system. The model is also capable of detecting weak precursory volcanic tremors prior to an eruption, which could serve as an indicator of imminent eruptive activity. The extracted patterns of seismicity and their temporal variations finally provide an explanation for the transition mechanism between eruptive phases.
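A minimal sketch of the HPS idea transferred to a seismic trace, using the median-filtering implementation available in librosa (illustrative only; the thesis' own algorithm, parameters and data are not given in this abstract):

    # Median-filtering HPS on a spectrogram: the harmonic part collects
    # tremor-like horizontal structures, the percussive part the transients.
    import numpy as np
    import librosa

    x = np.load("tremor_trace.npy").astype(float)  # hypothetical trace
    S = librosa.stft(x, n_fft=1024, hop_length=256)
    H, P = librosa.decompose.hpss(S, kernel_size=31)
    tremor = librosa.istft(H, hop_length=256, length=len(x))
    transients = librosa.istft(P, hop_length=256, length=len(x))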
In this work the first observation of a new type of liquid crystal is presented: ionic self-assembly (ISA) liquid crystals, formed by introducing oppositely charged ions between different low-molecular tectonic units. As practically all conventional liquid crystals consist of a rigid core and alkyl chains, attention is focused on the simplest case, where oppositely charged ions are placed between a rigid core and alkyl tails. The aim of this work is to investigate and understand the liquid crystalline and alignment properties of these materials. It was found that ionic interactions within the complexes play the main role. The presence of these interactions suppresses the transition to the isotropic phase. In addition, these interactions hold the system together (like a network), allowing crystallization into a single domain from the aligned LC state. Alignment of these simple ISA complexes was spontaneous on a glass substrate. To demonstrate the application potential, perylenediimide- and azobenzene-containing ISA complexes were investigated for correlations between phase behavior and alignment properties. The best macroscopic alignment of perylenediimide-based ISA complexes was obtained by the zone-casting method. In the aligned films, the columns of the complex align perpendicular to the phase-transition front. The obtained anisotropy (DR = 18) is thermally stable. The investigated photosensitive (azobenzene-based) ISA complexes show the formation of columnar LC phases. It was demonstrated that photo-alignment of such complexes is very effective (DR = 50 was obtained) and that photo-reorientation in the photosensitive ISA complexes is a cooperative process. The size of the domains has a direct influence on the efficiency of the photo-reorientation process: in the case of small domains, photo-alignment is most effective. Under irradiation with linearly polarized light, the domains reorient in the plane of the film, leading to macroscopic alignment of the columns parallel to the light polarization and the merging of small domains into large ones. Finally, additional distinguishing properties of the ISA liquid crystalline complexes should be noted: (I) the complexes do not dissolve in water but readily dissolve in organic solvents; (II) the complexes have good film-forming properties when cast or spin-coated from organic solvents; (III) the alignment of the complexes depends on their structure and on secondary interactions between the tectonic units.
Analysis and modeling of transient earthquake patterns and their dependence on local stress regimes
(2015)
Investigations of earthquake triggering and associated interactions, including aftershock triggering as well as induced seismicity, are important for seismic hazard assessment due to the destructive power of earthquakes. One approach to studying earthquake triggering and interactions is the use of statistical earthquake models, which are based on knowledge of the basic seismicity properties, in particular the magnitude distribution and the spatiotemporal properties of the triggered events.
In my PhD thesis I focus on some specific aspects of aftershock properties, namely the relative seismic moment release of aftershocks with respect to their mainshocks, the spatial correlation between aftershock occurrence and fault deformation, and the influence of aseismic transients on aftershock parameter estimation. For the analysis of aftershock sequences I choose a statistical approach, in particular the well-known Epidemic Type Aftershock Sequence (ETAS) model, which accounts for both background and triggered seismicity. For my specific purposes, I develop two ETAS model modifications in collaboration with Sebastian Hainzl. By means of this approach, I estimate the statistical aftershock parameters and perform simulations of aftershock sequences.
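In its standard temporal form, the ETAS conditional intensity superposes a background rate and Omori-Utsu aftershock triggering (textbook formulation; the thesis modifications alter individual ingredients of this expression):

    \lambda(t \mid H_t) = \mu + \sum_{i:\, t_i < t} K \, e^{\alpha (M_i - M_c)} \, (t - t_i + c)^{-p}

where each past event of magnitude M_i above the completeness magnitude M_c contributes a decaying aftershock rate with productivity parameters K and α and Omori parameters c and p; μ is the background rate, which one of the modifications discussed below allows to vary in time.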
In the case of the seismic moment release of aftershocks, I focus on the ratio of the cumulative seismic moment release of aftershocks to that of their mainshocks. Specifically, I investigate this ratio with respect to the focal mechanism of the mainshock and estimate an effective magnitude representing the cumulative aftershock energy (similar to Båth's law, which defines the average difference between the mainshock and the largest aftershock magnitude). Furthermore, I compare the observed seismic moment ratios with the results of ETAS simulations. In particular, I test a restricted ETAS (RETAS) model, which is based on results of a clock-advance model and static stress triggering.
In my second approach, to analyze spatial variations of the triggering parameters, I focus on the aftershock occurrence triggered by large mainshocks and study the distribution of aftershock parameters and their spatial correlation with coseismic/postseismic slip and interseismic locking. To invert for the aftershock parameters I improve the modified ETAS (m-ETAS) model, which is able to take the spatial extent of the mainshock rupture into account. I compare the results obtained by the classical approach with the output of the m-ETAS model.
My third approach is concerned with the temporal clustering of seismicity, which might not only be related to earthquake-earthquake interactions, but also to a time-dependent background rate, potentially biasing the parameter estimation. Thus, my coauthors and I also applied a modification of the ETAS model which takes time-dependent background activity into account. It is applicable in two different cases: when an aftershock catalog is temporally incomplete, or when the background seismicity rate changes with time due to the presence of aseismic forcing.
An essential part of any research is the testing of the developed models using observational data sets appropriate for the particular study case. Therefore, in the case of seismic moment release I use a global seismicity catalog. For the spatial distribution of the triggering parameters I exploit the aftershock sequences of the Mw 8.8 2010 Maule (Chile) and Mw 9.0 2011 Tohoku (Japan) mainshocks, together with published geodetic slip models of different authors. To test our ability to detect aseismic transients, my coauthors and I use data sets from Western Bohemia (Central Europe) and California.
Our results indicate that:
(1) The seismic moment of aftershocks relative to their mainshocks depends on the static stress changes and is maximal for normal, intermediate for thrust, and minimal for strike-slip stress regimes; the RETAS model shows a good correspondence with these results;
(2) The spatial distribution of aftershock parameters obtained by the m-ETAS model shows anomalous values in areas of reactivated crustal fault systems. In addition, the aftershock density is found to be correlated with the coseismic slip gradient, afterslip, interseismic coupling, and b-values. The aftershock seismic moment is positively correlated with areas of maximum coseismic slip and with interseismically locked areas. These correlations might be related to the stress level or to spatial variations in material properties;
(3) Ignoring aseismic transient forcing or temporal catalog incompleteness can lead to significant under- or overestimation of the underlying triggering parameters. When a catalog is complete, the method can instead be used to identify aseismic sources.
Background: Individuals with aphasia after stroke (IWA) often present with working memory (WM) deficits. Research investigating the relationship between WM and language abilities has led to the promising hypothesis that treatment of WM could lead to improvements in language, a phenomenon known as transfer. Although recent treatment protocols have been successful in improving WM, the evidence to date is scarce, and the extent to which improvements in trained WM tasks transfer to untrained memory tasks, spoken sentence comprehension, and functional communication is still poorly understood.
Aims: We aimed at (a) investigating whether WM can be improved through an adaptive n-back training in IWA (Studies 1–3); (b) testing whether WM training leads to near transfer to unpracticed WM tasks (Studies 1–3), and far transfer to spoken sentence comprehension (Studies 1–3), functional communication (Studies 2–3), and memory in daily life in IWA (Studies 2–3); and (c) evaluating the methodological quality of existing WM treatments in IWA (Study 3). To address these goals, we conducted two empirical studies – a case-control study with Hungarian-speaking IWA (Study 1) and a multiple-baseline study with German-speaking IWA (Study 2) – and a systematic review (Study 3).
Methods: In Studies 1 and 2, participants with chronic post-stroke aphasia performed an adaptive, computerized n-back training. ‘Adaptivity’ was implemented by adjusting the task’s difficulty level according to the participants’ performance, ensuring that they always practiced at an optimal level of difficulty. To assess the specificity of transfer effects and to better understand the underlying mechanisms of transfer to spoken sentence comprehension, we included an outcome measure testing specific syntactic structures that have been proposed to involve WM processes (e.g., non-canonical structures of varying complexity).
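As an illustration only (this is not the software used in the studies), the adaptivity rule described above can be sketched as a simple staircase in which the level n increases after high-accuracy blocks and decreases after low-accuracy ones; the thresholds and bounds below are hypothetical placeholders.

def adapt_n_back_level(n, accuracy, up=0.90, down=0.70, n_min=1, n_max=7):
    """Staircase update for an adaptive n-back task.

    n        -- current n-back level
    accuracy -- proportion of correct responses in the last block (0..1)
    up, down -- hypothetical accuracy thresholds for stepping up or down
    """
    if accuracy >= up and n < n_max:
        return n + 1  # task too easy: increase memory load
    if accuracy <= down and n > n_min:
        return n - 1  # task too hard: decrease memory load
    return n          # performance in the target zone: keep the level

# Example: a participant scoring 95% on a 2-back block moves to 3-back.
assert adapt_n_back_level(2, 0.95) == 3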
Results: We detected a mixed pattern of training and transfer effects across individuals: five participants out of six improved significantly in the n-back training. Our most important finding is that all six participants improved significantly in spoken sentence comprehension (i.e., a far transfer effect). In addition, we found far transfer to functional communication (in two participants out of three in Study 2) and to everyday memory functioning (in all three participants in Study 2), and near transfer to unpracticed n-back tasks (in four participants out of six). Pooled data analysis of Studies 1 and 2 showed a significant negative relationship between initial spoken sentence comprehension and the amount of improvement in this ability: the more severe the participants’ spoken sentence comprehension deficit was at the beginning of training, the more they improved after training. Taken together, we detected both near and far transfer effects in our studies, but the effects varied across participants. The systematic review evaluating the methodological quality of existing WM treatments in stroke IWA (Study 3) showed poor internal and external validity across the 17 included studies. Poor internal validity was mainly due to the use of inappropriate designs, lack of randomization of study phases, lack of blinding of participants and/or assessors, and insufficient sampling. Low external validity was mainly related to incomplete information on the setting, lack of appropriate analysis or of justification for the suitability of the analysis procedure used, and lack of replication across participants and/or behaviors. Results in terms of WM, spoken sentence comprehension, and reading are promising, but further studies with more rigorous methodology and stronger experimental control are needed to determine the beneficial effects of WM intervention.
Conclusions: The results of the empirical studies suggest that WM can be improved with a computerized and adaptive WM training, and that improvements can transfer to spoken sentence comprehension and functional communication in some individuals with chronic post-stroke aphasia. The fact that improvements in spoken sentence comprehension were not specific to certain syntactic structures (i.e., non-canonical complex sentences) suggests that WM is not involved in the online, automatic processing of syntactic information (i.e., parsing and interpretation), but plays a more general role in the later stage of spoken sentence comprehension (i.e., post-interpretive comprehension). The individual differences in treatment outcomes call for future research to clarify how far these results generalize to the population of IWA. Future studies are needed to identify mechanisms that may generalize to at least a subpopulation of IWA, and to investigate which baseline non-linguistic cognitive and language abilities play a role in transfer effects and their maintenance. This may require larger yet homogeneous samples.
Optical frequency combs (OFC) constitute an array of phase-correlated, equidistant spectral lines with nearly equal intensities over a broad spectral range. Adaptations of combs generated in mode-locked lasers have proved highly efficient for the calibration of high-resolution (resolving power > 50000) astronomical spectrographs. Observations of different galaxy structures and studies of the Milky Way, however, are carried out with instruments in the low- and medium-resolution range. Such instruments include, for instance, the Multi Unit Spectroscopic Explorer (MUSE) developed for the Very Large Telescope (VLT) of the European Southern Observatory (ESO) and the 4-metre Multi-Object Spectroscopic Telescope (4MOST) under development for the ESO VISTA 4.1 m Telescope. The existing adaptations of OFC from mode-locked lasers are not resolvable by these instruments.
Within this work, a fibre-based approach for the generation of OFC specifically in the low- and medium-resolution range is studied numerically. This approach consists of three optical fibres that are fed by two equally intense continuous-wave (CW) lasers. The first fibre is a conventional single-mode fibre, the second one is a suitably pumped amplifying Erbium-doped fibre with anomalous dispersion, and the third one is a low-dispersion highly nonlinear optical fibre. The evolution of a frequency comb in this system is governed by the following processes: as the two initial CW-laser waves with different frequencies propagate through the first fibre, they generate an initial comb via a cascade of four-wave mixing processes. The frequency components of the comb are phase-correlated with the original laser lines and have a frequency spacing that is equal to the initial laser frequency separation (LFS), i.e. the difference of the laser frequencies. In the time domain, a train of pre-compressed pulses with widths of a few picoseconds arises out of the initial bichromatic, deeply modulated cosine wave. These pulses undergo strong compression in the subsequent amplifying Erbium-doped fibre: sub-100 fs pulses with broad OFC spectra are formed. In the following low-dispersion highly nonlinear fibre, the OFC experience further broadening and the intensities of the comb lines are largely equalised. This approach was mathematically modelled by means of a Generalised Nonlinear Schrödinger Equation (GNLS) that contains terms describing the nonlinear optical Kerr effect, the delayed Raman response, the pulse self-steepening, and the linear optical losses, as well as the wavelength-dependent Erbium gain profile for the second fibre. The initial condition, a deeply modulated cosine wave, mimics the radiation of the two initial CW lasers. The numerical studies are performed with Matlab scripts that were specifically developed for the integration of the GNLS with this initial condition according to the proposed approach for OFC generation. The scripts are based on the fourth-order Runge-Kutta in the Interaction Picture (RK4IP) method in combination with the local error method.
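The thesis integrates the full GNLS with the RK4IP scheme in Matlab; as a much simplified, illustrative stand-in, the following Python sketch propagates the basic nonlinear Schrödinger equation (second-order dispersion plus Kerr nonlinearity only, without Raman response, self-steepening, loss, or gain) with a symmetric split-step Fourier method, starting from the bichromatic, deeply modulated cosine-wave initial condition. All parameter values are placeholders, not the values used in the thesis.

import numpy as np

n_t, t_win = 2**12, 100e-12                   # samples, time window [s]
t = np.linspace(-t_win/2, t_win/2, n_t, endpoint=False)
w = 2*np.pi*np.fft.fftfreq(n_t, d=t_win/n_t)  # angular frequency grid

beta2 = -20e-27    # anomalous group-velocity dispersion [s^2/m]
gamma = 2e-3       # Kerr nonlinearity coefficient [1/(W m)]
dz, n_steps = 1.0, 500                        # step size [m], number of steps

lfs, p0 = 100e9, 0.1                          # laser frequency separation [Hz], power per line [W]
a = 2*np.sqrt(p0)*np.cos(np.pi*lfs*t)         # two equal CW lines -> modulated cosine

half_disp = np.exp(0.25j*beta2*w**2*dz)       # half-step linear (dispersion) operator
for _ in range(n_steps):                      # symmetric split-step loop
    a = np.fft.ifft(half_disp*np.fft.fft(a))      # D/2
    a = a*np.exp(1j*gamma*np.abs(a)**2*dz)        # full nonlinear step N
    a = np.fft.ifft(half_disp*np.fft.fft(a))      # D/2

comb = np.abs(np.fft.fftshift(np.fft.fft(a)))**2  # resulting comb power spectrum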
This work includes studies and results on the length optimisation of the first and second fibres for different values of the group-velocity dispersion of the first fibre. Such length optimisation is necessary because the OFC attain their maximum bandwidth and exhibit a low noise level precisely at the optimum lengths. Further, the optical pulse build-up in the first and second fibres was studied by means of the numerical technique called Soliton Radiation Beat Analysis (SRBA). It was shown that a common soliton crystal state is formed in the first fibre for low laser input powers. The soliton crystal continuously dissolves into separate optical solitons as the input power increases. The pulse formation in the second fibre depends critically on the features of the pulses formed in the first fibre. I showed that, for low input powers, adiabatic soliton compression delivering low-noise OFC occurs in the second fibre. At high input powers, the pulses in the first fibre have more complicated structures, which leads to pulse break-up in the second fibre with a subsequent degradation of the OFC noise performance. The pulse intensity noise studies performed within the framework of this thesis allow statements to be made about the noise performance of an OFC. They showed that the intensity noise of the whole system decreases with increasing LFS.
At present, carbon sequestration in terrestrial ecosystems slows the growth rate of atmospheric CO2 concentrations, and thereby reduces the impact of anthropogenic fossil fuel emissions on the climate system. Changes in climate and land use affect terrestrial biosphere structure and functioning at present, and will likely impact the terrestrial carbon balance during the coming decades - potentially providing a positive feedback to the climate system due to soil carbon releases under a warmer climate. Quantifying changes, and the associated uncertainties, in regional terrestrial carbon budgets resulting from these effects is relevant for the scientific understanding of the Earth system and for long-term climate mitigation strategies. A model describing the relevant processes that govern the terrestrial carbon cycle is a necessary tool to project regional carbon budgets into the future. This study (1) provides an extensive evaluation of the parameter-based uncertainty in model results of a leading terrestrial biosphere model, the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJ-DGVM), against a range of observations and under climate change, thereby complementing existing studies on other aspects of model uncertainty; (2) evaluates different hypotheses to explain the age-related decline in forest growth, based on both theoretical and experimental evidence, and introduces the most promising hypothesis into the model; (3) demonstrates how forest statistics can be successfully integrated with process-based modelling to provide long-term constraints on regional-scale forest carbon budget estimates for a European forest case study; and (4) elucidates the combined effects of land-use and climate changes on the present-day and future terrestrial carbon balance over Europe for four illustrative scenarios - implemented by four general circulation models - using a comprehensive description of different land-use types within the framework of LPJ-DGVM. This study presents a way to assess and reduce uncertainty in process-based terrestrial carbon estimates on a regional scale. The results demonstrate that simulated present-day land-atmosphere carbon fluxes are relatively well constrained, despite considerable uncertainty in modelled net primary production. Process-based terrestrial modelling and forest statistics are successfully combined to improve model-based estimates of vegetation carbon stocks and their change over time. Application of the advanced model for 77 European provinces shows that model-based estimates of biomass development with stand age compare favourably with forest inventory-based estimates for different tree species. Driven by historic changes in climate, atmospheric CO2 concentration, forest area and wood demand between 1948 and 2000, the model predicts a European-scale, present-day age structure of forests, ratio of biomass removals to increment, and vegetation carbon sequestration rate that are consistent with inventory-based estimates. Alternative scenarios of climate and land-use change in the 21st century suggest that carbon sequestration in the European terrestrial biosphere during the coming decades will likely be of a magnitude relevant to climate mitigation strategies. However, the uptake rates are small in comparison to the European emissions from fossil fuel combustion, and will likely decline towards the end of the century.
Uncertainty in climate change projections is a key driver for uncertainty in simulated land-atmosphere carbon fluxes and needs to be accounted for in mitigation studies of the terrestrial biosphere.
World energy consumption has increased steadily with economic development and population growth. This inevitably causes vast CO2 emissions, and the CO2 concentration in the atmosphere keeps rising accordingly. Various methods have been developed to reduce CO2 emissions, but many bottlenecks remain. Solvents that readily absorb CO2, such as monoethanolamine (MEA) and diethanolamine, suffer from solvent loss, amine degradation, vulnerability to heat, toxicity, and the high regeneration cost that is inherent to the chemisorption process. Although some of these drawbacks can be compensated for by physisorption with zeolites and metal-organic frameworks (MOFs), which display significant adsorption selectivity and capacity even at ambient conditions, these materials have limitations of their own: zeolites demand relatively high regeneration energy and have limited adsorption kinetics due to their exceptionally narrow pore structure, while MOFs have low stability against heat and moisture and high manufacturing cost.
Nanoporous carbons have recently received attention as attractive functional porous materials due to their unique properties. These materials are crucial in many applications of modern science and industry, such as water and air purification, catalysis, gas separation, and energy storage/conversion, due to their high chemical and thermal stability and, in particular, their electronic conductivity in combination with high specific surface areas. Nanoporous carbons can be used to adsorb environmental pollutants or small gas molecules such as CO2, and to power electrochemical energy storage devices such as batteries and fuel cells. In all these fields, their pore structure and electrical properties can be tailored to the intended purpose.
This thesis provides an in-depth look at novel nanoporous carbons from both the synthetic and the application point of view. The interplay between the pore structure, the atomic construction, and the adsorption properties of nanoporous carbon materials is investigated. Novel nanoporous carbon materials are synthesized from simple precursor molecules containing heteroatoms through a facile templating method. The affinity, and in turn the adsorption capacity, of the carbon materials toward polar gas molecules (CO2 and H2O) is enhanced by modification of their chemical construction. It is also shown that these properties are important in electrochemical energy storage, especially for supercapacitors with aqueous electrolytes, which are based on the physisorption of ions on carbon surfaces. This shows that nanoporous carbons can be “functional” materials with specific physical or chemical interactions with guest species, just like zeolites and MOFs.
The synthesis of sp2-conjugated materials with high heteroatom content from a mixture of citrazinic acid and melamine, in which the heteroatoms are already bonded in specific motifs, is illustrated. By controlling the removal procedure of the salt template and the condensation temperature, the role of the salts in the formation of porosity and as coordination sites for the stabilization of heteroatoms is demonstrated. A high nitrogen content of up to 20 wt.%, oxygen contents of up to 19 wt.%, and a high CO2/N2 selectivity with a maximum CO2 uptake at 273 K of 5.31 mmol g−1 are achieved. In addition, the further controlled thermal condensation of the precursor molecules and the advanced functional properties of the synthesized porous carbons in applications are described. The materials have different porosities and atomic constructions, exhibiting a high nitrogen content of up to 25 wt.%, a high porosity with a specific surface area of more than 1800 m2 g−1, and a high CO2/N2 adsorption selectivity of 62.7. The pore structure and surface properties also affect water adsorption, with a remarkably high Qst of over 100 kJ mol−1, even higher than that of well-known adsorbents such as zeolites or CaCl2. In addition, the evolution of the pore structure of HAT-CN-derived carbon materials during condensation in vacuum is fundamentally elucidated, which is essential to maximize the utilization of the pore system: the materials show a significant difference in pore volume, 0.5 cm3 g−1 without and 0.25 cm3 g−1 with vacuum, respectively.
Molecular designs of heteroatom-containing porous carbons derived from abundant and simple molecules are introduced in the present thesis. Abundant precursors that already contain a high amount of nitrogen or oxygen are beneficial for achieving enhanced interaction with adsorptives. The physical and chemical properties of these heteroatom-doped porous carbons are governed mainly by two parameters: the porosity arising from the pore structure and the polarity arising from the atomic composition of the surface. In other words, controlling the porosity as well as the polarity of the carbon materials is studied to understand their interactions with different guest species, which provides fundamental knowledge for their utilization in various applications.
The Milky Way is a spiral galaxy consisting of a disc of gas, dust and stars embedded in a halo of dark matter. Within this dark matter halo there is also a diffuse population of stars called the stellar halo, which has been accreting stars for billions of years from smaller galaxies that are pulled in and disrupted by the large gravitational potential of the Milky Way. As they are disrupted, these galaxies leave behind long streams of stars that can take billions of years to mix with the rest of the stars in the halo. Furthermore, the amount of heavy elements (metallicity) of the stars in these galaxies reflects the rate of chemical enrichment that occurred in them, since the Universe has been slowly enriched in heavy elements (e.g. iron) through successive generations of stars, which produce them in their cores and in supernova explosions. Therefore, stars that contain small amounts of heavy elements (metal-poor stars) either formed at early times, before the Universe was significantly enriched, or in isolated environments. The aim of this thesis is to develop a better understanding of the substructure content and chemistry of the Galactic stellar halo, in order to gain further insight into the formation and evolution of the Milky Way.
The Pristine survey uses a narrow-band filter which specifically targets the Ca II H & K spectral absorption lines to provide photometric metallicities for a large number of stars down to the extremely metal-poor (EMP) regime, making it a very powerful data set for Galactic archaeology studies. In Chapter 2, we quantify the efficiency of the survey using a preliminary spectroscopic follow-up sample of ~ 200 stars. We also use this sample to establish a set of selection criteria to improve the success rate of selecting EMP candidates for follow-up spectroscopy. In Chapter 3, we extend this work and present the full catalogue of ~ 1000 stars from a three year long medium resolution spectroscopic follow-up effort conducted as part of the Pristine survey. From this sample, we compute success rates of 56% and 23% for recovering stars with [Fe/H] < -2.5 and [Fe/H] < -3.0, respectively. This demonstrates a high efficiency for finding EMP stars as compared to previous searches with success rates of 3-4%.
In Chapter 4, we select a sample of ~ 80000 halo stars using colour and magnitude cuts designed to select a main-sequence turnoff population in the distance range 6 < dʘ < 20 kpc. We then use the spectroscopic follow-up sample presented in Chapter 3 to statistically rescale the Pristine photometric metallicities of this sample, and present the resulting corrected metallicity distribution function (MDF) of the halo. The slope at the metal-poor end is significantly shallower than previous spectroscopic efforts have shown, suggesting that there may be more metal-poor stars with [Fe/H] < -2.5 in the halo than previously thought. This sample also shows evidence that the MDF of the halo may not be bimodal as proposed by previous works, and that the lack of very metal-poor globular clusters in the Milky Way may be the result of a physical truncation of the MDF rather than just statistical under-sampling.
Chapter 5 showcases the unexpected capability of the Pristine filter for separating blue horizontal branch (BHB) stars from Blue Straggler (BS) stars. We demonstrate a purity of 93% and completeness of 91% for identifying BHB stars, a substantial improvement over previous works. We then use this highly pure and complete sample of BHB stars to trace the halo density profile out to d > 100 kpc, and the Sagittarius stream substructure out to ~ 130 kpc.
In Chapter 6 we use the photometric metallicities from the Pristine survey to perform a clustering analysis of the halo as a function of metallicity. Separating the Pristine sample into four metallicity bins of [Fe/H] < -2, -2 < [Fe/H] < -1.5, -1.5 < [Fe/H] < -1 and -0.9 < [Fe/H] < -0.8, we compute the two-point correlation function to measure the amount of clustering on scales of < 5 deg. As a smooth comparison sample, we generate a mock Pristine data set using the Galaxia code, based on the Besançon model of the Galaxy. We find enhanced clustering on small scales (< 0.5 deg) in some regions of the Galaxy for the most metal-poor bin ([Fe/H] < -2), while in others we see large-scale signals that correspond to known substructures in those directions. This confirms that the substructure content of the halo is highly anisotropic and diverse in different Galactic environments. We discuss the difficulties of removing systematic clustering signals from the data and the limitations of disentangling weak clustering signals from real substructures and residual systematic structure in the data.
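The two-point statistic used here can be illustrated with a brute-force angular estimator; the sketch below (not the thesis's pipeline) uses the Landy-Szalay form w(θ) = (DD − 2DR + RR)/RR against a random comparison catalogue, whereas the thesis uses a Galaxia/Besançon mock as the smooth reference. The toy coordinates, bin edges, and catalogue sizes are placeholders, and the flat 10x10 degree patch ignores spherical-geometry subtleties.

import numpy as np

def ang_sep(ra1, dec1, ra2, dec2):
    """Pairwise angular separations [deg] between two coordinate sets."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cosd = (np.sin(dec1[:, None])*np.sin(dec2[None, :]) +
            np.cos(dec1[:, None])*np.cos(dec2[None, :]) *
            np.cos(ra1[:, None] - ra2[None, :]))
    return np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))

def pair_counts(sep, bins, auto):
    """Histogram of separations; auto-pairs are counted once each."""
    if auto:
        sep = sep[np.triu_indices_from(sep, k=1)]
    return np.histogram(sep.ravel(), bins=bins)[0].astype(float)

def landy_szalay(ra_d, dec_d, ra_r, dec_r, bins):
    """w(theta) = (DD - 2*DR + RR) / RR with normalized pair counts."""
    nd, nr = len(ra_d), len(ra_r)
    dd = pair_counts(ang_sep(ra_d, dec_d, ra_d, dec_d), bins, auto=True)
    rr = pair_counts(ang_sep(ra_r, dec_r, ra_r, dec_r), bins, auto=True)
    dr = pair_counts(ang_sep(ra_d, dec_d, ra_r, dec_r), bins, auto=False)
    dd /= nd*(nd - 1)/2; rr /= nr*(nr - 1)/2; dr /= nd*nr
    return (dd - 2*dr + rr)/rr

rng = np.random.default_rng(0)               # toy data: 500 "stars", 2000 randoms
bins = np.logspace(np.log10(0.05), np.log10(5.0), 12)   # 0.05 to 5 deg
w_theta = landy_szalay(rng.uniform(0, 10, 500), rng.uniform(0, 10, 500),
                       rng.uniform(0, 10, 2000), rng.uniform(0, 10, 2000), bins)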
Taken together, the work presented in this thesis approaches the problem of better understanding the halo of our Galaxy from multiple angles: firstly, by presenting a sizeable sample of EMP stars and improving the selection efficiency of EMP candidates for the Pristine survey, paving the way for the further discovery of metal-poor stars to be used as probes of early chemical evolution; secondly, by improving the selection of BHB distance tracers to map out the halo to large distances; and finally, by using the large samples of metal-poor stars to derive the MDF of the inner halo and to analyse the substructure content at different metallicities. The results of this thesis therefore expand our understanding of the physical and chemical properties of the Milky Way stellar halo, and provide insight into the processes involved in its formation and evolution.
CHAMP (CHAllenging Minisatellite Payload) is a German small-satellite mission to study the Earth's gravity field, magnetic field and upper atmosphere. Thanks to the good condition of the satellite so far, the planned 5-year mission has been extended to the year 2009. The satellite continuously provides a large quantity of measurement data for the study of the Earth. The measurements of the magnetic field are undertaken by two Fluxgate Magnetometers (FGM, vector magnetometer) and one Overhauser Magnetometer (OVM, scalar magnetometer) flown on CHAMP. In order to ensure the quality of the data during the whole mission, the calibration of the magnetometers has to be performed routinely in orbit. The scalar magnetometer serves as the magnetic reference, and its readings are compared with the readings of the vector magnetometer. The readings of the vector magnetometer are corrected by the parameters derived from this comparison, which is called the scalar calibration. In the routine processing, these calibration parameters are updated every 15 days by means of scalar calibration. There are also magnetic effects originating from the satellite itself which disturb the measurements. Most of them were characterized during tests before launch; among them are the remanent magnetization of the spacecraft and fields generated by currents. They are all considered to be constant over the mission life. The 8 years of operational experience allow us to investigate the long-term behaviour of the magnetometers and the satellite systems. This investigation showed, for example, that the scale factors of the FGM exhibit obvious long-term changes, which can be described by logarithmic functions, while the other parameters (offsets and angles between the three components) can be considered constant. If these continuous parameters are applied in the FGM data processing, the disagreement between the OVM and FGM readings is limited to ±1 nT over the whole mission. This demonstrates that the magnetometers on CHAMP exhibit very good stability. However, a daily correction of the FGM Z-component offset improves the agreement between the magnetometers markedly. The Z-component offset plays a very important role for the data quality: it exhibits a linear relationship with the standard deviation of the disagreement between the OVM and FGM readings. After the Z-offset correction, the errors are limited to ±0.5 nT (equivalent to a standard deviation of 0.2 nT). We improved the corrections for the spacecraft field that are not taken into account in the routine processing. Such disturbance fields, e.g. from the power supply system of the satellite, introduce systematic errors into the FGM data and are misinterpreted in the 9-parameter calibration, which produces false local-time-related variations of the calibration parameters. These corrections are made by applying a mathematical model to the measured currents; this non-linear model is derived from an inversion technique. If the disturbance fields of the satellite body are fully corrected, the standard deviation of the scalar error ΔB remains about 0.1 nT. Additionally, in order to keep the OVM readings a reliable standard, the imperfect coefficients of the torquer current correction for the OVM are redetermined by solving a minimization problem. The temporal variation of the spacecraft remanent field is investigated. It was found that the average magnetic moment of the magneto-torquers reflects the magnetic moment of the satellite well.
This allows for a continuous correction of the spacecraft field. The reasons for possible unknown systematic errors are discussed in this thesis; in particular, both temperature uncertainties and timing errors influence the FGM data. Based on the results of this thesis, the data processing of future magnetic missions can be designed in an improved way. In particular, the upcoming ESA mission Swarm can take advantage of our findings and provide all the auxiliary measurements needed for a proper recovery of the ambient magnetic field.
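The idea of the scalar calibration can be made concrete with a minimal sketch that fits only three offsets and three scale factors by least squares, comparing the modulus of the corrected vector readings with the scalar reference; the routine 9-parameter calibration additionally fits the three angles between the components. All data and parameter values below are synthetic placeholders.

import numpy as np
from scipy.optimize import least_squares

def residuals(p, b_fgm, b_ovm):
    """Scalar residuals |S*(B_fgm - o)| - B_ovm for offsets o, scales S."""
    o, s = p[:3], p[3:6]
    return np.linalg.norm(s*(b_fgm - o), axis=1) - b_ovm

rng = np.random.default_rng(1)
b_true = rng.normal(0.0, 30000.0, (500, 3))            # ambient field samples [nT]
o_true = np.array([5.0, -3.0, 8.0])                    # synthetic offsets [nT]
s_true = np.array([1.001, 0.999, 1.002])               # synthetic scale factors
b_fgm = b_true/s_true + o_true                         # simulated raw vector readings
b_ovm = np.linalg.norm(b_true, axis=1) + rng.normal(0.0, 0.2, 500)  # scalar reference

fit = least_squares(residuals, x0=np.r_[np.zeros(3), np.ones(3)],
                    args=(b_fgm, b_ovm))
offsets, scales = fit.x[:3], fit.x[3:6]                # recovered parameters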
The aim of this thesis is to achieve a deep understanding of the working mechanism of polymer-based solar cells and to improve the device performance. Two types of polymer-based solar cells are studied here: all-polymer solar cells comprising macromolecular donors and acceptors based on poly(p-phenylene vinylene) (PPV), and hybrid cells comprising a PPV copolymer in combination with a novel small-molecule electron acceptor. To understand the interplay between morphology and photovoltaic properties in all-polymer devices, I compared the photocurrent characteristics and excited-state properties of bilayer and blend devices with different nano-morphology, which was fine-tuned by using solvents with different boiling points. The main conclusion from these complementary measurements was that the performance-limiting step is the field-dependent generation of free charge carriers, while bimolecular recombination and charge extraction do not compromise device performance. These findings imply that the proper design of the donor-acceptor heterojunction is of major importance for achieving high photovoltaic efficiencies. Regarding polymer-small-molecule hybrid solar cells, I combined the hole-transporting polymer M3EH-PPV with a novel Vinazene-based electron acceptor. This molecule can be deposited either from solution or by thermal evaporation, allowing a large variety of layer architectures to be realized. I then demonstrated that the layer architecture has a large influence on the photovoltaic properties. Solar cells with very high fill factors of up to 57 % and an open-circuit voltage of 1 V could be achieved by realizing a sharp and well-defined donor-acceptor heterojunction. In the past, fill factors exceeding 50 % had only been observed for polymers in combination with soluble fullerene derivatives or nanocrystalline inorganic semiconductors as the electron-accepting component. The finding that proper processing of polymer-Vinazene devices leads to similarly high values is a major step towards the design of efficient polymer-based solar cells.
In this work, the role of the TusA protein in cell functionality and FtsZ ring assembly in Escherichia coli was investigated. TusA is the tRNA-2-thiouridine synthase that acts as a sulfur transferase in tRNA thiolation for the formation of 2-thiouridine at position 34 (the wobble base) of tRNALys, tRNAGlu and tRNAGln. It binds the persulfide form of sulfur and transfers it to further proteins during mnm5s2U tRNA modification at the wobble position and for Moco biosynthesis. This thiomodification of tRNA makes ribosome binding more efficient and averts frameshifting during protein translation. Previous studies revealed an essential role of TusA in bacterial cell physiology, since deletion of the tusA gene resulted in retarded growth and filamentous cells during the exponential growth phase in rich medium, a phenotype that suddenly disappeared during the stationary phase. This indicates a problem in the cell division process. Therefore, the focus of this work was to investigate the role of TusA in cell functionality and FtsZ ring formation, and thus in cell separation.
The reason behind the filamentous growth of the tusA mutant strain was investigated by growth and morphological analyses. ΔtusA cells showed retarded growth during the exponential phase compared to the WT strain, and morphological analysis of ΔtusA cells confirmed the filamentous cell shape. The growth and cell division defects in ΔtusA pointed to a defect involving the FtsZ protein, a key player in cell division. Microscopic investigation revealed that filamentous ΔtusA cells possessed multiple DNA regions arranged next to each other. This suggested that, although DNA replication occurred correctly, there was a defect in the step where FtsZ should act: probably FtsZ is unable to assemble into the ring structure, or the assembled ring is not able to constrict. All tested mutant strains of the mnm5s2U34 tRNA modification pathway (ΔtusD, ΔtusE and ΔmnmA) shared the retarded growth and filamentous cell shape of the ΔtusA strain. Thus, the cell division defect arises from a defect in mnm5s2U34 tRNA thiolation.
Since the FtsZ ring formation was suspected to be defective in the filaments, a possible intracellular interaction of TusA and FtsZ was examined by expression of fluorescent (EGFP and mCherry) fusion proteins and FRET. FtsZ-expressing tusA mutant (DE3) cells showed a red mCherry signal at the cell poles, indicating that FtsZ was still in the assembling phase. Interestingly, the cellular localization of the EGFP-TusA fusion protein expressed in ΔtusA (DE3) was conspicuous: the EGFP signal was spread throughout the whole cell and, in addition, a slight accumulation of the EGFP-TusA fluorescence was detectable at the cell poles, the same part of the cell as for mCherry-FtsZ. This strongly suggests an interaction of TusA and FtsZ.
Furthermore, the cellular FtsZ and Fis concentrations, and their changes during the different growth phases, were determined via immunoblotting. All tested deletion strains of the mnm5s2U34 tRNA modification pathway show high cellular FtsZ and Fis levels that are shifted from the exponential phase to the later growth phases. This shift reflects the retarded growth, whereby the deletion strains reach the exponential phase later. In conclusion, the growth and cell division defect, and thus the formation of filaments, is most likely caused by changes in the cellular FtsZ and Fis concentrations.
Finally, the translation efficiencies of certain proteins (RpoS, Fur, Fis and mFis) in the tusA mutant and in additional gene deletion strains were studied to determine whether they are affected by the use of unmodified U34 tRNAs for Lys, Glu and Gln. Translation efficiency is indeed decreased in the mnm5s2U34 tRNA modification-impaired strains, in addition to their existing growth and cell division defects, as demonstrated by comparison with constructs in which these three amino acids were eliminated. These results confirm and reinforce the importance of Lys, Glu and Gln and of mnm5s2U34 tRNA thiolation for efficient protein translation, and verify that the translation of fur, fis and rpoS is regulated by mnm5s2U34 tRNA modifications in a growth phase-dependent manner.
In total, this work demonstrated the importance of TusA for bacterial cell functionality and physiology. The deletion of the tusA gene disrupts a complex regulatory network within the cell that is most strongly influenced by the decreased translation of Fis and RpoS, caused by the absence of mnm5s2U34 tRNA modifications. The disruption of the RpoS and Fis cellular network in turn influences the cellular FtsZ level in the early exponential phase. Finally, the reduced FtsZ concentration leads to elongated, filamentous E. coli cells, which are unable to divide.
Synchronization is a fundamental phenomenon in nature. It can be considered a general property of self-sustained oscillators to adjust their rhythm in the presence of an interaction.
In this work we investigate complex regimes of synchronization phenomena by means of theoretical analysis, numerical modeling, as well as practical analysis of experimental data.
As the subject of our investigation we consider the chimera state, in which spontaneous symmetry breaking splits an initially homogeneous oscillator lattice into two parts with different dynamics. The chimera state was first found as a new synchronization phenomenon in systems of non-locally coupled oscillators and has attracted a lot of attention in the last decade. Recent studies indicate, however, that this state is also possible in globally coupled systems. In the first part of this work, we show under which conditions a chimera-like state appears in a system of globally coupled identical oscillators with intrinsic delayed feedback. The results explain how initially monostable oscillators become effectively bistable in the presence of the coupling and create a mean field that sustains the coexistence of synchronized and desynchronized states. We also discuss other examples where a chimera-like state appears due to the frequency dependence of the phase shift in bistable systems.
In the second part, we investigate this topic further by modeling the influence of an external periodic force on an oscillator with intrinsic delayed feedback. We perform a stability analysis of the synchronized state and construct Arnold tongues. The results explain the formation of the chimera-like state and the hysteretic behavior of the synchronization region. We also consider two parameter sets of the oscillator, with symmetric and asymmetric Arnold tongues, corresponding to the monostable and bistable regimes of the oscillator.
In the third part, we present the results of work done in collaboration with our colleagues from the Department of Psychology of the University of Potsdam. The project aimed to study the effect of the cardiac rhythm on human time perception using synchronization analysis. For our part, we performed a statistical analysis of the data obtained from an experiment on a free time-interval reproduction task. We examined how one's heartbeat influences time perception and searched for possible phase synchronization between cardiac cycles and time reproduction responses. The findings support the prediction that cardiac cycles can serve as input signals that are used for the reproduction of time intervals in the range of several seconds.
Water quality in river systems is of growing concern due to rising anthropogenic pressures and climate change. Over recent decades, mitigation efforts have been placed under the guidelines of different governance conventions (e.g., the Water Framework Directive in Europe). Despite significant improvement through relatively straightforward measures, the environmental status has likely reached a plateau. A higher spatiotemporal accuracy of catchment nitrate modeling is therefore needed to identify critical source areas of diffuse nutrient pollution (especially of nitrate) and to further guide the implementation of spatially differentiated, cost-effective mitigation measures. At the same time, emerging high-frequency sensor monitoring brings the monitoring resolution up to the time scales of biogeochemical processes and enables more flexible monitoring deployments under varying conditions. The newly available information offers new prospects for understanding nitrate spatiotemporal dynamics. Formulating such advanced process understanding into catchment models is critical for further model development and for environmental status evaluation. This dissertation targets a comprehensive analysis of catchment and in-stream nitrate dynamics and aims to derive new insights into their spatial and temporal variability through the development of a new fully distributed model and the use of new high-frequency data.
Firstly, a new fully distributed, process-based catchment nitrate model (the mHM-Nitrate model) is developed on the basis of the mesoscale Hydrological Model (mHM) platform. Nitrate process descriptions are adopted from the Hydrological Predictions for the Environment (HYPE) model, with considerably improved implementations. With its multiscale grid-based discretization, mHM-Nitrate balances spatial representation and modeling complexity. The model has been thoroughly evaluated in the Selke catchment (456 km2) in central Germany, which is characterized by heterogeneous physiographic conditions. Results show that the model captures the long-term discharge and nitrate dynamics at three nested gauging stations well. Using daily nitrate-N observations, the model is also validated in capturing short-term fluctuations due to changes in runoff partitioning and spatial contributions during flooding events. Comparison of the model simulations with values reported in the literature shows that the model is capable of providing detailed and reliable spatial information on nitrate concentrations and fluxes. The model can therefore serve as a promising tool for environmental scientists in advancing environmental modeling research, as well as for stakeholders in supporting their decision-making, especially regarding spatially differentiated mitigation measures.
Secondly, a parsimonious approach for regionalizing the in-stream autotrophic nitrate uptake is proposed using high-frequency data and is integrated into the new mHM-Nitrate model. The regionalization approach considers a potential uptake rate (as a general parameter) together with the effects of above-canopy light and riparian shading (represented by global radiation and leaf area index data, respectively). Multi-parameter sensors were continuously deployed in a forested upstream reach and an agricultural downstream reach of the Selke River. Using the continuous high-frequency data from both streams, daily autotrophic uptake rates (2011-2015) are calculated and used to validate the regionalization approach. The performance and spatial transferability of the approach are validated by its ability to capture the distinct seasonal patterns and value ranges in both the forested and the agricultural stream. Integrating the approach into the mHM-Nitrate model allows the spatiotemporal variability of in-stream nitrate transport and uptake to be investigated throughout the river network.
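To illustrate the structure of such a regionalization (the exact functional forms used in the thesis are not reproduced here), one can scale a potential uptake rate by a normalized light term and a Beer-Lambert-type riparian shading term; the function below and its extinction coefficient are assumptions for illustration only.

import numpy as np

def autotrophic_uptake(u_pot, radiation, lai, k_ext=0.5):
    """Hypothetical regionalized daily autotrophic nitrate uptake.

    u_pot     -- potential uptake rate (general, fitted parameter)
    radiation -- above-canopy global radiation, normalized to [0, 1]
    lai       -- riparian leaf area index (shading proxy)
    k_ext     -- assumed Beer-Lambert extinction coefficient
    """
    shading = np.exp(-k_ext*np.asarray(lai))   # fraction of light reaching the stream
    return u_pot*np.asarray(radiation)*shading

# e.g. dense summer canopy (LAI 5) vs. leaf-off conditions (LAI 0.5)
print(autotrophic_uptake(1.0, 0.8, [5.0, 0.5]))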
Thirdly, to further assess the spatial variability of catchment nitrate dynamics, the fully distributed parameterization is for the first time investigated through sensitivity analysis. The results show that the parameters of soil denitrification, in-stream denitrification and in-stream uptake are the most sensitive throughout the Selke catchment, while all of them show high spatial variability, with hot-spots of parameter sensitivity that can be explicitly identified. The Spearman rank correlation between the sensitivity indices and multiple catchment factors is further analyzed and shows that the controlling factors vary spatially, reflecting heterogeneous catchment responses in the Selke catchment. These insights can therefore inform future parameter regionalization schemes for catchment water quality models. In addition, the spatial distribution of parameter sensitivity is also influenced by the gauging information used for the sensitivity evaluation; an appropriate monitoring scheme is therefore highly recommended to truly reflect the catchment responses.
Supercapacitors are electrochemical energy storage devices with rapid charge/discharge rates and long cycle life. Their biggest challenge is their inferior energy density compared to other electrochemical energy storage devices such as batteries. As the most widespread type of supercapacitor, electrochemical double-layer capacitors (EDLCs) store energy by electrosorption of electrolyte ions on the surface of charged electrodes. As a more recent development, Na-ion capacitors (NICs) are expected to be a promising route to tackling the inferior energy density, owing to their higher-capacity electrodes and larger operating voltage. Charge is stored simultaneously by ion adsorption on the surface of the capacitive-type cathode and via a faradaic process in the battery-type anode. Porous carbon electrodes are of great importance in these devices, but the paramount problems are the lack of facile synthetic routes to high-performance carbons and the lack of fundamental understanding of the energy storage mechanisms. Therefore, the aim of the present dissertation is to develop novel synthetic methods for (nitrogen-doped) porous carbon materials with superior performance, and to reach a deeper understanding of the energy storage mechanisms of EDLCs and NICs.
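The leverage of the larger operating voltage follows directly from the ideal capacitor energy relation

E = \frac{1}{2} C V^2,

so that, at equal capacitance, widening the voltage window from an illustrative 2.7 V to 3.8 V already roughly doubles the stored energy, since (3.8/2.7)^2 ≈ 2.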
The first part introduces a novel synthetic method for hierarchical ordered meso-microporous carbon electrode materials for EDLCs. The abundant micropores and the highly ordered mesopores provide, respectively, numerous sites for charge storage and pathways for efficient electrolyte transport, giving rise to superior EDLC performance in different electrolytes. More importantly, the controversial energy storage mechanism of EDLCs employing ionic liquid (IL) electrolytes is investigated by employing a series of porous model carbons as electrodes. The results not only allow conclusions on the relation between porosity and ion transport dynamics, but also deliver deeper insights into the energy storage mechanism of IL-based EDLCs, which differs from the mechanism usually dominating in solvent-based electrolytes that leads to compression double-layers.
The other part focuses on the anodes of NICs, for which the synthesis of novel nitrogen-rich porous carbon electrodes and their sodium storage mechanism are investigated. Free-standing fibrous nitrogen-doped carbon materials are synthesized by electrospinning, using the nitrogen-rich monomer hexaazatriphenylene-hexacarbonitrile (C18N12) as the precursor, followed by condensation at high temperature. These fibers provide superior capacity and desirable charge/discharge rates for sodium storage. This work also yields insights into the sodium storage mechanism in nitrogen-doped carbons. Based on this mechanism, further optimization is achieved by designing a composite material composed of nitrogen-rich carbon nanoparticles embedded in a conductive carbon matrix for better charge/discharge rates. The energy density of the assembled NICs significantly exceeds that of common EDLCs, while the high power density and long cycle life are maintained.
When azobenzene-modified photosensitive polymer films are irradiated with light interference patterns, topographic variations develop in the film that follow the electric field vector distribution, resulting in the formation of a surface relief grating (SRG). The exact correspondence between the electric field vector orientation in the interference pattern and the local topographic minima or maxima of the SRG is in general difficult to determine. In this thesis, we established a systematic procedure to correlate different interference patterns with the topography of the SRG. For this, we devised a new setup combining an atomic force microscope and a two-beam interferometer (IIAFM). With this setup, it is possible to track the topography change in situ, while at the same time changing the polarization and phase of the impinging interference pattern. To validate our results, we compared two photosensitive materials, referred to in short as PAZO and trimer. This is the first time that an absolute correspondence between the local distribution of electric field vectors of an interference pattern and the local topography of the relief grating could be established exhaustively. In addition, using our IIAFM we found that for a certain combination of two orthogonally polarized interfering beams, namely the SP (↕, ↔) interference pattern, the topography forms an SRG with only half the period of the interference pattern. Exploiting this phenomenon, we are able to fabricate surface relief structures below the diffraction limit, with characteristic features measuring only 140 nm, using far-field optics at a wavelength of 491 nm. We also probed the stresses induced during the polymer mass transport by placing an ultra-thin gold film (5–30 nm) on top. During irradiation, the metal film not only deforms along with the SRG formation, but also ruptures in a regular and complex manner. The morphology of the cracks differs strongly depending on the electric field distribution in the interference pattern, even when the magnitude and the kinetics of the strain are kept constant. This implies a complex local distribution of the opto-mechanical stress along the topography grating. Neutron reflectivity measurements of the metal/polymer interface indicate penetration of the metal layer into the polymer, resulting in the formation of a bonding layer that confirms the transduction of light-induced stresses from the polymer layer to the metal film.
Successful sentence comprehension requires the comprehender to correctly figure out who did what to whom. For example, in the sentence John kicked the ball, the comprehender has to figure out who did the kicking and what was kicked. This process of identifying and connecting the syntactically related words in a sentence is called dependency completion. What are the cognitive constraints that determine dependency completion? A widely accepted theory is cue-based retrieval. The theory maintains that dependency completion is driven by a content-addressable search for the co-dependents in memory. Cue-based retrieval explains a wide range of empirical data from several constructions, including subject-verb agreement, subject-verb non-agreement, plausibility mismatch configurations, and negative polarity items.
However, there are two major empirical challenges to the theory: (i) Grammatical sentences’ data from subject-verb number agreement dependencies, where the theory predicts a slowdown at the verb in sentences like the key to the cabinet was rusty compared to the key to the cabinets was rusty, but the data are inconsistent with this prediction; and, (ii) Data from antecedent-reflexive dependencies, where a facilitation in reading times is predicted at the reflexive in the bodybuilder who worked with the trainers injured themselves vs. the bodybuilder who worked with the trainer injured themselves, but the data do not show a facilitatory effect.
The work presented in this dissertation is dedicated to building a more general theory of dependency completion that can account for the above two datasets without losing the original empirical coverage of the cue-based retrieval assumption. In two journal articles, I present computational modeling work that addresses the above two empirical challenges.
To explain the grammatical sentences’ data from subject-verb number agreement dependencies, I propose a new model that assumes that cue-based retrieval operates on a probabilistically distorted representation of nouns in memory (Article I). This hybrid distortion-plus-retrieval model was compared against the existing candidate models using data from 17 studies on subject-verb number agreement in 4 languages. I find that the hybrid model outperforms the existing models of number agreement processing, suggesting that the cue-based retrieval theory must incorporate a feature distortion assumption.
To account for the absence of a facilitatory effect in antecedent-reflexive dependencies, I propose an individual-differences model, built within the cue-based retrieval framework (Article II). The model assumes that individuals may differ in how strongly they weigh a syntactic cue over a number cue. The model was fitted to data from two studies on antecedent-reflexive dependencies, and the participant-level cue weighting was estimated. We find that one-fourth of the participants, in both studies, weigh the syntactic cue higher than the number cue in processing reflexive dependencies, while the remaining participants weigh the two cues equally. The result indicates that the absence of the predicted facilitatory effect at the level of grouped data is driven by some, but not all, participants who weigh the syntactic cue higher than the number cue. More generally, the result demonstrates that the assumption of differential cue weighting is important for a theory of dependency completion. This differential cue-weighting idea was independently supported by a modeling study on subject-verb non-agreement dependencies (Article III).
Overall, the cue-based retrieval, which is a general theory of dependency completion, needs to incorporate two new assumptions: (i) the nouns stored in memory can undergo probabilistic feature distortion, and (ii) the linguistic cues used for retrieval can be weighted differentially. This is the cumulative result of the modeling work presented in this dissertation.
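To make these two assumptions concrete, the following schematic (not the actual implementation of Articles I-III) scores candidate nouns by a weighted cue match, after optionally distorting the stored number feature with some probability; all names and numbers are hypothetical.

import random

def maybe_distort(features, p_distort=0.1):
    """Probabilistic feature distortion: flip the number feature with prob. p."""
    f = dict(features)
    if random.random() < p_distort:
        f["number"] = "pl" if f["number"] == "sg" else "sg"
    return f

def match_score(item, cues, weights):
    """Weighted cue match: sum the weights of the cues the item satisfies."""
    return sum(w for (feat, val), w in zip(cues, weights) if item.get(feat) == val)

# Retrieval cues at the reflexive: a syntactic (subjecthood) cue and a number cue.
cues = [("role", "subject"), ("number", "pl")]
weights = [0.75, 0.25]   # a participant weighing syntax over number (hypothetical)

subject = maybe_distort({"role": "subject", "number": "sg"})
distractor = {"role": "oblique", "number": "pl"}
winner = max([subject, distractor], key=lambda it: match_score(it, cues, weights))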
The dissertation makes an important theoretical contribution: sentence comprehension in humans is driven by a mechanism that assumes cue-based retrieval, probabilistic feature distortion, and differential cue weighting. This insight is theoretically important because there is some independent support for these three assumptions in sentence processing and in the broader memory literature. The modeling work presented here is also methodologically important because, for the first time, it demonstrates (i) how complex models of sentence processing can be evaluated using data from multiple studies simultaneously, without oversimplifying the models, and (ii) how inferences drawn from individual-level behavior can be used in theory development.
The correlations between the chemical structures of 2,5-diphenyl-1,3,4-oxadiazole compounds and the structures of their vapour-deposited films on Si/SiO2 were systematically investigated with AFM, XSR and IR for the first time. The results show that the film structure depends strongly on the substrate temperature (Ts). For the compounds with an ether bridge group, the film periodicity depends linearly on the length of the aliphatic chain. The films based on these oxadiazoles have an ordered structure over the investigated substrate temperature region, while the amide-bridged compounds form ordered films only at high Ts, due to the formation of intermolecular H-bonds. The tilt angle of most molecules is determined by the pi-pi complexes between the molecules. The intermolecular interaction between head groups leads to a structural transformation during thermal treatment after deposition. All ether-bridged oxadiazoles form films with a bilayer structure, while amide-bridged oxadiazoles form a bilayer structure only when the molecule has a head group.
Time-dependent, correlation-function-based methods for studying optical spectroscopy involving electronic transitions can be traced back to the work of Heller and coworkers. This intuitive methodology can be expected to be computationally efficient, and it is applied in the current work to study the vibronic absorption, emission, and resonance Raman spectra of selected organic molecules. In addition, the "non-standard" application of this approach to photoionization processes is explored. The application section consists of four chapters, as described below.
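The backbone of this time-dependent picture is Heller's expression for the absorption cross-section as the Fourier transform of a wave-packet autocorrelation function, which in a common form (up to prefactors) reads

\sigma_{abs}(\omega) \propto \omega \int_{-\infty}^{\infty} dt \, e^{i(\omega + E_0/\hbar)t} \, \langle \varphi | \varphi(t) \rangle, \qquad |\varphi(t)\rangle = e^{-i \hat{H}_e t/\hbar} \, \hat{\mu} \, |\chi_0\rangle,

where |\chi_0\rangle is the initial vibrational state on the ground-state surface, E_0 its energy, \hat{\mu} the transition dipole operator, and \hat{H}_e the excited-state Hamiltonian; emission and resonance Raman intensities follow from closely related correlation functions.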
In Chapter 4, the molar absorptivities and vibronic absorption/emission spectra of perylene and several of its N-substituted derivatives are investigated. By systematically varying the number and position of N atoms, it is shown that the presence of nitrogen heteroatoms has a negligible effect on the molecular structure and geometric distortions upon electronic transitions, while the spectral properties are more sensitive: in particular, the number of N atoms is important, while their position is less decisive. Thus, N-substitution can be used to fine-tune the optical properties of perylene-based molecules.
In Chapter 5, the same methods are applied to study the vibronic absorption/emission and resonance Raman spectra of a newly synthesized donor-acceptor type molecule. The simulated absorption/emission spectra agree fairly well with experimental data, with discrepancies being attributed to solvent effects. Possible modes which may dominate the fine-structure in the vibronic spectra are proposed by analyzing the correlation function with the aid of Raman and resonance Raman spectra.
In the next two chapters, besides the above types of spectra, the methods are extended to study photoelectron spectra of several small diamondoid-related systems (molecules, radicals, and cations). Comparison of the photoelectron spectra with available experimental data suggests that the correlation function based approach can describe ionization processes reasonably well. Some of these systems, cationic species in particular, exhibit somewhat peculiar optical behavior, which presents them as possible candidates for functional devices.
Correlation-function-based methods in a more general sense can be very versatile. In fact, besides the above radiative processes, formulas for non-radiative processes such as internal conversion have been derived in the literature. Further implementation of the available methods is among our next goals.
The present thesis focuses on the synthesis of nanostructured iron-based compounds by using β-FeOOH nanospindles and poly(ionic liquid)s (PILs) vesicles as hard and soft templates, respectively, to suppress the shuttle effect of lithium polysulfides (LiPSs) in Li-S batteries. Three types of composites with different nanostructures (mesoporous nanospindle, yolk-shell nanospindle, and nanocapsule) have been synthesized and applied as sulfur host material for Li-S batteries. Their interactions with LiPSs and effects on the electrochemical performance of Li-S batteries have been systematically studied.
In the first part of the thesis, carbon-coated mesoporous Fe3O4 (C@M-Fe3O4) nanospindles have been synthesized to suppress the shuttle effect of LiPSs. First, β-FeOOH nanospindles have been synthesized via the hydrolysis of iron(III) chloride in aqueous solution; after silica coating and subsequent calcination, mesoporous Fe2O3 (M-Fe2O3) has been obtained inside the confining silica layer through pyrolysis of the β-FeOOH. After removal of the silica layer, electron tomography (ET) has been applied to reconstruct the 3D structure of the M-Fe2O3 nanospindles. After coating with a thin layer of polydopamine (PDA) as carbon source, the PDA-coated M-Fe2O3 particles have been calcined to yield C@M-Fe3O4 nanospindles. With the chemisorption of Fe3O4 and the confinement of the mesoporous structure anchoring LiPSs, the composite C@M-Fe3O4/S electrode delivers a remaining capacity of 507.7 mAh g-1 at 1 C after 600 cycles.
In the second part of the thesis, a series of iron-based compounds (Fe3O4, FeS2, and FeS) with the same yolk-shell nanospindle morphology have been synthesized, which allows for the direct comparison of the effects of compositions on the electrochemical performance of Li-S batteries. The Fe3O4-carbon yolk-shell nanospindles have been synthesized by using the β-FeOOH nanospindles as hard template. Afterwards, Fe3O4-carbon yolk-shell nanospindles have been used as precursors to obtain iron sulfides (FeS and FeS2)-carbon yolk-shell nanospindles through sulfidation at different temperatures. Using the three types of yolk-shell nanospindles as sulfur host, the effects of compositions on interactions with LiPSs and electrochemical performance in Li-S batteries have been systematically investigated and compared. Benefiting from the chemisorption and catalytic effect of FeS2 particles and the physical confinement of the carbon shell, the FeS2-C/S electrode exhibits the best electrochemical performance with an initial specific discharge capacity of 877.6 mAh g-1 at 0.5 C and a retention ratio of 86.7% after 350 cycles.
In the third part, PILs vesicles have been used as soft template to synthesize carbon nanocapsules embedded with iron nitride particles to immobilize and catalyze LiPSs in Li-S batteries. First, 3-n-decyl-1-vinylimidazolium bromide has been used as monomer to synthesize PILs nanovesicles by free radical polymerization. Assisted by a PDA coating route and ion exchange, PIL nanovesicles have been successfully applied as soft templates in a morphology-maintaining carbonization to prepare carbon nanocapsules embedded with iron nitride nanoparticles (FexN@C). The well-dispersed iron nitride nanoparticles effectively catalyze the conversion of LiPSs to Li2S, owing to their high electrical conductivity and strong chemical binding to LiPSs. The constructed FexN@C/S cathode demonstrates a high initial discharge capacity of 1085.0 mAh g-1 at 0.5 C with a remaining value of 930.0 mAh g-1 after 200 cycles.
The results in the present thesis demonstrate facile synthetic routes to nanostructured iron-based compounds with controllable morphologies and compositions using soft and hard colloidal templates, which can be applied as sulfur hosts to suppress the shuttle behavior of LiPSs. The synthesis approaches developed in this thesis are also applicable to fabricating other transition metal-based compounds with porous nanostructures for other applications.
The present work consists of two sub-studies. Sub-study 1 examines, for person-oriented service occupations, the stability of a general model of the relationships between overload and underload characteristics, social stressors, task demands and organizational resources on the one hand, and the strain outcomes emotional exhaustion and client aversion or distancing tendencies on the other. Physicians, nurses, paramedical staff and teachers were included. The clearest positive associations emerge between the stressors and emotional exhaustion, with quantitative overload yielding the most stable results. Beyond emotional exhaustion, the stressors show significant associations with aversive feelings towards clients. Regarding the model-implied assumptions of positive associations between the stressors and distancing, significant results are found in this study, but no associations that are stable across samples. The assumption of a negative relationship between demands/resources and distancing can be confirmed only for the demand characteristics. Sub-study 2, building on the working model developed in sub-study 1, takes a closer look at the teaching profession, incorporating different levels of the school system and distinguishing different types of tasks. The results show that, at the organizational level, effort-reward imbalance and a lack of collective self-efficacy expectations can be relevant to strain. Particularly clear associations with strain and distancing emerge for stressors related to the primary tasks. Reciprocity appraisals regarding students and parents show similar, though less pronounced, associations. The results on personal characteristics suggest that the role of the person in the development of burnout should not be underestimated. As practical implications of the findings, suggestions are made for strengthening instruction-related and cross-class cooperation, for optimizing the organizational structure and "professionalizing" the organization, and for the further development of school-specific concepts and guidelines. The question is raised whether teaching should reasonably be understood as a lifelong career. Finally, the importance of teachers' ability to distance themselves and of their self-efficacy expectations is pointed out.
Recently, due to increasing demands on functionality and flexibility, formerly isolated systems have become interconnected, yielding powerful adaptive System-of-Systems (SoS) solutions with an overall robust, flexible and emergent behavior. Such an adaptive SoS comprises a variety of different system types, ranging from small embedded systems to adaptive cyber-physical systems. On the one hand, each system is independent, follows a local strategy and optimizes its behavior to reach its goals. On the other hand, systems must cooperate with each other to enrich the overall functionality and jointly perform on the SoS level, reaching global goals that cannot be satisfied by any one system alone. Because local and global behavior optimizations can be at odds, conflicts may arise between systems that the adaptive SoS has to resolve.
This thesis proposes a modeling language that facilitates the description of an adaptive SoS by treating adaptation capabilities, in the form of feedback loops, as first-class entities. Moreover, this thesis adopts the Models@runtime approach to integrate the knowledge available in the systems, as runtime models, into the modeled adaptation logic. Furthermore, the modeling language focuses on the description of system interactions within the adaptive SoS, in order to reason about individual system functionality and how it emerges via collaborations into an overall joint SoS behavior. The modeling language approach therefore enables the specification of local adaptive system behavior, the integration of knowledge in the form of runtime models, and joint interactions via collaborations, placing the available adaptive behavior in an overall layered, adaptive SoS architecture.
Besides the modeling language, this thesis proposes analysis rules for investigating the modeled adaptive SoS, which enable the detection of architectural patterns as well as design flaws and pinpoint possible system threats. Moreover, a simulation framework is presented that allows the direct execution of the modeled SoS architecture. The analysis rules and the simulation framework can therefore be used to verify the interplay between systems as well as the modeled adaptation effects within the SoS. This thesis realizes the proposed concepts of the modeling language by mapping them to a state-of-the-art standard from the automotive domain, thus showing their applicability to actual systems. Finally, the modeling language approach is evaluated by remodeling current research scenarios from different domains, demonstrating that the modeling language concepts are powerful enough to cope with a broad range of existing research problems.
This ethnographic-sociological case study reconstructs the history of Chinese immigration to Bucharest after 1990, both in its particularity and in its general structures. This is done through a dual perspective on the case: the reconstruction of the discursive (re)presentation of the case, as a methodological-analytical preliminary step, was followed by a historical reconstruction of the migrants' lived history. The subsequent contrasting of the results from both analytical steps served to work out previously invisible interdependencies and relations between these two levels. In other words, the discursive level and the historical or lived level together, and in their entanglement, form the constitutive structure(s) of the case. First, there is the particularity of post-1989 migration from China to Romania in the context of transformations. It is decisive for the structure and course of the case that several processes of change occurred simultaneously in both the context of origin and the context of immigration, including globalization in general as well as the processes referred to as transformations in China since 1978 and in Romania since 1989. At the national and local level, as well as at the level of the everyday social reality of the urban population and the immigrants in Bucharest, both were and are confronted with rapid changes. In the early 1990s, a few pioneers arrived in Bucharest with suitcases full of Chinese goods and sold them on the city's countless small street markets. By 2007, a large area on the outskirts of Bucharest housed a building complex of eight large shopping malls. Chinese entrepreneurs now invest in large-scale projects such as telecommunications in Romania. The recruitment of Chinese textile workers by Romanian entrepreneurs is also new, a development connected with the current shortage of Romanian labour caused by the waves of emigration from Romania. It is characteristic, however, that these profound social changes, which have not only the economic consequences named here but also political and social ones, stand in stark contrast to the fact that the migrants' social reality has remained unnoticed and unknown in research, politics and the public sphere. A second thematic field is the tension between local and global processes in the migrants' history, where the significance of "emplacement" on the one hand and of transnationality on the other is structure-forming for the case history. The engagement with the scholarly concept of transnationalism played a special role here: I treated it as a theoretical discourse, to be viewed critically at first. The question of whether and how the Chinese community in Bucharest lives transnationally, and what transnational ways of life might be, was answered empirically. Third, drawing on the concept of social patterns of interpretation, I showed that informality as a social construct is case-determining at different levels. Processes of informalization and the associated illegalization and criminalization of migrants in Europe thus form a further thematic field of the case structure.
Finally, the social construction of cultural patterns of interpretation about Chinese migrants constitutes a fourth central theme of the case structure. The Chinese migrants in Bucharest live in the crossfire of different ascriptions. In their country of origin they are celebrated as patriotic capitalists who bring modernity to the country. In Europe they are assigned to the international wave of "illegal migration" and reduced to criminals and/or victims. In Bucharest they live with, and in competition with, the valuations and devaluations of minority groups such as the Roma minority. These valuations and ascriptions have several consequences. One of them is that Chinese immigrants in Bucharest have learned to deal with them actively. It became clear that cultural patterns of interpretation are not only habitually anchored or lived as biographical capital, but are also used in a situationally appropriate manner, or even partly staged, with the aim of finding a way into society and attaining a respected position in it as a minority group.
With ongoing anthropogenic global warming, some of the most vulnerable components of the Earth system might become unstable and undergo a critical transition. These subsystems are the so-called tipping elements. They are believed to exhibit threshold behaviour and would, if triggered, result in severe consequences for the biosphere and human societies. Furthermore, it has been shown that climate tipping elements are not isolated entities, but interact across the entire Earth system. Therefore, this thesis aims at mapping out the potential for tipping events and feedbacks in the Earth system mainly by the use of complex dynamical systems and network science approaches, but partially also by more detailed process-based models of the Earth system.
In the first part of this thesis, the theoretical foundations are laid by the investigation of networks of interacting tipping elements. For this purpose, the conditions for the emergence of global cascades are analysed with respect to the structure of paradigmatic network types such as Erdős–Rényi, Barabási–Albert, Watts–Strogatz and explicitly spatially embedded networks. Furthermore, micro-scale structures are identified that are decisive for the transition from local to global cascades. These so-called motifs link the micro- to the macro-scale in the network of tipping elements. Alongside a model description paper, all these results have been incorporated into the Python software package PyCascades, which is publicly available on GitHub.
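Conceptually, each tipping element is often represented by a normal-form "fold" dynamics, and links let one element's state shift another's critical threshold. The following minimal Python sketch is illustrative only; the names and parameters are ours, and it is not the PyCascades API:

# Minimal sketch (not the PyCascades API): two coupled "fold" tipping
# elements dx_i/dt = -x_i^3 + x_i + c_i + coupling. Element 0 is pushed
# past its fold point; the coupling strength d decides whether element 1
# is dragged along (a cascade) or stays in its untipped state.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, c, d, A):
    # A[i, j] = 1 if element j feeds into element i
    coupling = d * A @ (x + 1.0)   # shifted so an untipped state (~ -1) exerts no push
    return -x**3 + x + c + coupling

A = np.array([[0, 0], [1, 0]])     # unidirectional link 0 -> 1
c = np.array([0.5, 0.0])           # c > 2/(3*sqrt(3)) ~ 0.38 tips element 0
for d in (0.05, 0.4):
    sol = solve_ivp(rhs, (0, 100), [-1.0, -1.0], args=(c, d, A), rtol=1e-8)
    print(f"d = {d}: final states = {sol.y[:, -1].round(2)}")

With the weak coupling only the directly forced element tips; with the strong coupling the tipped state propagates along the link, which is precisely the cascade mechanism studied on the larger network topologies.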
In the second part of this dissertation, the tipping element framework is first applied to components of the Earth system such as the cryosphere and to parts of the biosphere, and afterwards to a set of interacting climate tipping elements on a global scale. Using the Earth system Model of Intermediate Complexity (EMIC) CLIMBER-2, the temperature feedbacks are quantified that would arise if some of the large cryosphere elements disintegrated over a long span of time. The cryosphere components investigated are the Arctic summer sea ice, the mountain glaciers, and the Greenland and West Antarctic Ice Sheets. The committed temperature increase, in case the ice masses disintegrate, is on the order of an additional half degree in the global average (0.39-0.46 °C), while local to regional additional temperature increases can exceed 5 °C. This means that, once tipping has begun, additional reinforcing feedbacks are able to increase global warming and with that the risk of further tipping events.
This is also the case in the Amazon rainforest, whose parts are dependent on each other via the so-called moisture-recycling feedback. In this thesis, the importance of drought-induced tipping events in the Amazon rainforest is investigated in detail. Although the Amazon rainforest is assumed to be adapted to past environmental conditions, it is found that tipping events increase sharply if drought conditions become too intense within too short a time, outpacing the adaptive capacity of the forest. In these cases, the frequency of tipping cascades also increases to 50% (or more) of all tipping events. In the model developed in this study, the southeastern region of the Amazon basin is hit hardest by the simulated drought patterns. This is also the region that already suffers heavily from extensive human-induced changes due to large-scale deforestation, cattle ranching and infrastructure projects.
Moreover, on the larger, Earth-system-wide scale, a network of conceptualised climate tipping elements is constructed in this dissertation, making use of a large literature review, expert knowledge and topological properties of the tipping elements. Tipping cascades are detected even under modest global warming scenarios that limit warming to 2 °C above pre-industrial levels. In addition, the structural roles of the climate tipping elements in the network are revealed: while the large ice sheets on Greenland and Antarctica are the initiators of tipping cascades, the Atlantic Meridional Overturning Circulation (AMOC) acts as the transmitter of cascades. Furthermore, in our conceptual climate tipping element model, the ice sheets are found to be of particular importance for the stability of the entire system of investigated climate tipping elements.
In the last part of this thesis, the results from the temperature feedback study with the EMIC CLIMBER-2 are combined with the conceptual model of climate tipping elements. There, it is observed that the likelihood of further tipping events increases slightly due to the temperature feedbacks, even if no further CO2 were added to the atmosphere.
Although the developed network model is conceptual in nature, this work makes it possible for the first time to quantify the risk of tipping events among interacting components of the Earth system under global warming scenarios while simultaneously allowing for dynamic temperature feedbacks.
Anti-doping institutes worldwide strive to convict athletes who use prohibited substances or methods. The test systems required for this are continuously refined, and new methods are established in response to new active substances from the pharmaceutical industry. The subject of this work was the development of a parallel multi-component analysis based on antigen-antibody reactions, aimed primarily at reducing the required sample volume and assay time compared with a standard detection procedure. Besides the use of a multiplex approach and microarray technology, challenges included the accuracy of all measurement parameters, the stability of the experimental set-up, and the performance in a single-blind study. The requirement that the multiplex approach produce no false signals despite similar structures was met through the targeted combination of specific antibodies. To this end, Western blot experiments were successfully carried out in parallel with cross-reactivity tests on the microarray. Those antibodies that met the set requirements in these experiments were used to determine the lowest detectable concentration. By optimizing the experimental conditions, the use of Tween in the washing solution reduced the background fluorescence on both glass and plastic and thus increased the signal-to-background ratio. In the experiments to determine the limit of quantification, a concentration of 10 mU/ml was determined for human chorionic gonadotropin (hCG-i), 3.6 mU/ml for its beta subunit (hCG-beta), and 10 mU/ml for luteinizing hormone (LH). The value determined for hCG-i in serum corresponds to the value of 5 mU/ml in urine required by the World Anti-Doping Agency (WADA). In addition to determining the limits of quantification, these were measured with regard to matrix effects occurring in serum and blood. As the cross-reactivity experiments on the microarray show, LH, hCG-i and hCG-beta can also be measured in serum and blood. A performance analysis via a single-blind study with 130 serum samples was also realized with this system. The evaluated samples were subsequently analysed using a receiver operating characteristic curve, and the diagnostic specificity was determined. For the LH measurements, a sensitivity and specificity of 100% were achieved; accordingly, all negative and positive samples were interpreted unambiguously. For hCG-beta, a specificity of 100% and a sensitivity of 97% were likewise achieved. The hCG-i samples were measured with a specificity of 100% and a sensitivity of 97.5%. To demonstrate that this set-up delivers stable signals over several weeks when measuring identical samples, a twelve-week stability test was successfully carried out for all parameters in serum and blood. In summary, this work successfully developed a multi-component analysis as a multiplex approach on a microarray. The performance analysis and the stability test already indicate the potential applicability of this test in the context of doping analysis.
This dissertation consists of four self-contained papers that deal with the implications of financial market imperfections and heterogeneity. The analysis mainly relates to the class of incomplete-markets models but covers different research topics.
The first paper deals with the distributional effects of financial integration for developing countries. Based on a simple heterogeneous-agent approach, it is shown that capital owners experience large welfare losses while workers gain only moderately due to higher wages. The large welfare losses for capital owners contrast with the small average welfare gains from representative-agent economies and indicate that strong opposition against capital market opening has to be expected.
The second paper considers the puzzling observation of capital flows from poor to rich countries and the accompanying changes in domestic economic development. Motivated by the mixed results from the literature, we employ an incomplete-markets model with different types of idiosyncratic risk and borrowing constraints. Based on different scenarios, we analyze under what conditions the presence of financial market imperfections contributes to explain the empirical findings and how the conditions may change with different model assumptions.
The third paper deals with the interplay of incomplete information and financial market imperfections in an incomplete-markets economy. In particular, it analyzes the impact of incomplete information about idiosyncratic income shocks on aggregate saving. The results show that the effect of incomplete information is not only quantitatively substantial but also qualitatively ambiguous and varies with the influence of the income risk and the borrowing constraint.
Finally, the fourth paper analyzes the influence of different types of fiscal rules on the response of key macroeconomic variables to a government spending shock. We find that a strong temporary increase in public debt contributes to stabilizing consumption and leisure in the first periods following the change in government spending, whereas a non-debt-intensive fiscal rule leads to a faster recovery of consumption, leisure, capital and output in later periods. Regarding optimal debt policy, we find that a debt-intensive fiscal rule leads to the largest aggregate welfare benefit and that the individual welfare gain is particularly high for wealth-poor agents.
Rainfall, snow melt, and glacial melt throughout the Himalaya control river discharge, which is vital for maintaining agriculture, drinking water supply and hydropower generation. However, the spatiotemporal contribution of these discharge components to Himalayan rivers is not well understood, mainly because of the scarcity of ground-based observations. Consequently, little is known about the triggers and sources of peak sediment flux events, which account for extensive hydropower reservoir filling and turbine abrasion. We therefore lack basic information on the distribution of water resources and on the controls of erosion processes. In this thesis, I employ various methods to assess and quantify general characteristics of, and links between, precipitation, river discharge, and sediment flux in the Sutlej Valley. First, I analyze daily precipitation data (1998-2007) from 80 weather stations in the western Himalaya to decipher the distribution of rain- and snowfall. Rainfall magnitude-frequency analyses indicate that 40% of the summer rainfall budget is attributable to monsoonal rainstorms, which show higher variability in the orogenic interior than in frontal regions. Combined analysis of rainstorms and sediment flux data of a major Sutlej River tributary indicates that monsoonal rainfall exerts a first-order control on erosion processes in the orogenic interior, despite the dominance of snowfall in this region. Second, I examine the contribution of rainfall, snow and glacial melt to river discharge in the Sutlej Valley (~55,000 km²), based on a distributed hydrological model covering the period 2000-2008. To achieve high spatial and daily resolution despite limited ground-based observations, the hydrological model is forced by daily remote sensing data, which I adjusted and calibrated with ground station data. The calibration shows that the Tropical Rainfall Measuring Mission (TRMM) 3B42 rainfall product systematically overestimates rainfall in semi-arid and arid regions, increasingly so with aridity. The model results indicate that snowmelt-derived discharge (74%) is most important during the pre-monsoon season (April to June), whereas rainfall (56%) and glacial melt (17%) dominate the monsoon season (July to September). Climate change is therefore most likely to cause a reduction in river discharge during the pre-monsoon season, which especially affects the orogenic interior. Third, I investigate the controls on suspended sediment flux in different parts of the Sutlej catchment, based on daily gauging data from the past decade. In conjunction with meteorological data, earthquake records, and rock strength measurements, I find that rainstorms are the most frequent trigger of high-discharge events with peaks in suspended sediment concentrations (SSC) that account for the bulk of the suspended sediment flux. The suspended sediment flux increases downstream, mainly due to increases in runoff. Pronounced erosion along the Himalayan Front occurs throughout the monsoon season, whereas efficient erosion of the orogenic interior is confined to single extreme events. The results of this thesis highlight the importance of snow- and glacially-derived melt waters in the western Himalaya, where extensive regions receive only limited amounts of monsoonal rainfall. These regions are therefore particularly susceptible to global warming, with major implications for the hydrological cycle.
However, the sediment discharge data show that infrequent monsoonal rainstorms that cross the orographic barrier of the Higher Himalaya are still the primary trigger of the highest-impact erosion events, even though they are subordinate to snow- and glacially-derived discharge. These findings may help to predict peak sediment flux events and could underpin the strategic development of preventive measures for hydropower infrastructure.
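The thesis text does not spell out the melt scheme of the hydrological model at this point; a standard building block of such distributed models is a degree-day (temperature-index) approach, sketched below purely for illustration (function and parameter names are ours):

# Hedged illustration (the melt scheme is assumed, not quoted from the thesis):
# a classic degree-day model converts daily mean temperature into melt.
import numpy as np

def degree_day_melt(t_mean_c, ddf_mm_per_c_day=4.0, t_melt_c=0.0):
    """Daily melt (mm water equivalent) from mean air temperature (deg C)."""
    return ddf_mm_per_c_day * np.maximum(t_mean_c - t_melt_c, 0.0)

week_of_temps = np.array([-2.0, 0.5, 3.0, 6.5, 4.0, 1.0, -1.0])
print(degree_day_melt(week_of_temps))   # melt only on days above the threshold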
The availability of large data sets has allowed researchers to uncover complex properties in complex systems, such as complex networks and human dynamics. A vast number of systems, from the Internet to the brain, power grids and ecosystems, can be represented as large complex networks, and the dynamics on and of complex networks has attracted growing research interest. In this thesis, first, I introduce a simple but effective dynamical optimization coupling scheme that can realize complete synchronization in networks with undelayed and delayed couplings and enhance the synchronizability of small-world and scale-free networks. Second, I show that the robustness of scale-free networks with community structure is enhanced by the existence of communities, and that some of the response patterns coincide with topological communities. These results provide insights into the relationship between network topology and functional organization in complex networks from another viewpoint. Third, since humans are an important kind of node in complex networks, detailed human correspondence dynamics is studied through both data and a model. A new and general type of human correspondence pattern is found, and an interacting priority-queues model is introduced to explain it; the model also embraces a range of realistic social interacting systems such as email and letter communication. These findings provide insight into various human activities at both the individual and the network level. Fourth, I present clear new evidence that human comment behavior in on-line social systems, a different type of interacting human dynamics, is non-Poissonian, and a model based on personal attraction is introduced to explain it. These results are helpful for discovering regular patterns of human behavior in on-line society and for understanding the evolution of public opinion in virtual as well as real society. The thesis closes with conclusions and an outlook on human dynamics and complex networks.
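For orientation, the single-queue baseline that the interacting priority-queues model generalizes (a Barabási-type queueing model; the sketch below is ours, with made-up parameters) already produces the heavy-tailed waiting times characteristic of human correspondence:

# Minimal single-queue baseline behind interacting priority-queues models:
# keep L tasks with random priorities; with probability p serve the
# highest-priority task, otherwise a random one. For p -> 1 the
# waiting-time distribution develops a heavy tail.
import random
from collections import Counter

def waiting_times(p=0.99999, L=2, steps=200_000, seed=1):
    rng = random.Random(seed)
    queue = [(rng.random(), 0) for _ in range(L)]   # (priority, arrival step)
    waits = []
    for t in range(1, steps + 1):
        idx = (max(range(L), key=lambda i: queue[i][0])
               if rng.random() < p else rng.randrange(L))
        waits.append(t - queue[idx][1])             # time the task sat in the queue
        queue[idx] = (rng.random(), t)              # replace with a fresh task
    return waits

hist = Counter(waiting_times())
print(sorted(hist.items())[:5])   # most mass at short waits, but a long tail emerges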
People engage in a multitude of different relationships. Relatives, spouses, and friends are modestly to moderately similar in various characteristics, e.g., personality characteristics, interests, and appearance. The role of psychological (e.g., skills, global appraisal) and social (e.g., gender, familial status) similarities in personal relationships and their association with relationship quality (emotional closeness and reciprocity of support) were examined in four independent studies. Young adults (N = 456; M = 27 years) and middle-aged couples from four different family types (N = 171 couples, M = 38 years) answered a computer-aided questionnaire regarding their ego-centered networks. A subsample of 175 middle-aged adults (77 couples and 21 individuals) participated in a one-year follow-up survey. Two experimental studies (N = 470; N = 802), both including two assessments with an interval of five weeks, were conducted to examine causal relationships among similarity, closeness, and reciprocity expectations. The results underline the role of psychological and social similarities as covariates of emotional closeness and reciprocity of support on the between-relationship level, but indicate a relatively weak effect within established relationships. In specific relationships, such as parent-child relationships and friendships, psychological similarity partly alleviates the effects of missing genetic relatedness. Individual differences moderate these between-relationship effects. In all, the results combine evolutionary and social psychological perspectives on similarity in personal relationships and extend previous findings by means of a network approach and an experimental manipulation of existing relationships. The findings further show that psychological and social similarity have different implications for the study of personal relationships depending on the phase in the developmental process of relationships.
Chemical transformations and hydraulic processes in soil and groundwater often lead to an apparent retention of nitrate in lowland catchments. Models are needed to evaluate the interaction of these processes in space and time. The objectives of this study are i) to develop a specific modelling approach by combining selected modelling tools that simulate N-transport and turnover in soils and groundwater of lowland catchments, and ii) to study interactions between catchment properties and nitrogen transport. Special attention was paid to potential N-loads to surface waters. The modelling approach combines various submodels for water flow and solute transport in soil and groundwater: the soil-water and nitrogen model mRISK-N, the groundwater flow model MODFLOW and the solute transport model RT3D. In order to investigate interactions between N-transport and catchment characteristics, the distribution and availability of reaction partners have to be taken into account. Therefore, a special reaction module was developed, which simulates various chemical processes in groundwater, such as the degradation of organic matter by oxygen, nitrate or sulphate, and pyrite oxidation by oxygen and nitrate. The modelling approach is applied in different simulation studies, each focusing on specific submodels. All simulation studies are based on field data from the Schaugraben catchment, a Pleistocene catchment of approximately 25 km² close to Osterburg (Altmark) in the north of Saxony-Anhalt. The following modelling studies have been carried out: i) evaluation of the soil-water and nitrogen model based on lysimeter data, ii) modelling of a field-scale tracer experiment on nitrate transport and turnover in the groundwater as a first application of the reaction module, iii) evaluation of interactions between hydraulic and chemical aquifer properties in a two-dimensional groundwater transect, iv) modelling of distributed groundwater recharge and soil nitrogen leaching in the study area, to be used as input data for subsequent groundwater simulations, and v) study of the groundwater nitrate distribution and nitrate breakthrough to the surface water system in the Schaugraben catchment and a subcatchment, using three-dimensional modelling of reactive groundwater transport. The various model applications prove the model capable of simulating interactions between transport, turnover, and hydraulic and chemical catchment properties. The distribution of nitrate in the sediment and the resulting loads to surface waters are strongly affected by the amount of reactive substances and by the residence time within the aquifer. In the Schaugraben catchment simulations, a period of 70 years is needed to raise the average seepage concentrations of nitrate to a level corresponding to the given input situation if no reactions are considered. Under reactive transport conditions, nitrate concentrations are reduced effectively. Simulation results show that groundwater exfiltration does not contribute considerably to the nitrate pollution of surface waters, as most nitrate entering soils and groundwater is lost by denitrification. Additional sources, such as direct inputs or tile drains, have to be taken into account to explain surface water loads. The prognostic value of the models for the study site is limited by uncertainties in the input data and in the estimation of model parameters.
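To illustrate the kind of chemistry the reaction module encodes (a hedged sketch with invented rate constants, not the module's actual code): aerobic degradation consumes oxygen first, and denitrification of nitrate effectively switches on only once oxygen is depleted.

# Hedged sketch of the reaction module's core idea (rate constants are
# made up): organic-carbon degradation consumes O2 first; denitrification
# of NO3 takes over once oxygen is (nearly) gone.
import numpy as np
from scipy.integrate import solve_ivp

K_O2 = 0.5       # mg/l, half-saturation constant for aerobic respiration
K_INHIB = 0.5    # mg/l, O2 level below which denitrification switches on

def rhs(t, y, k_ox=0.2, k_den=0.1):
    o2, no3 = y
    aerobic = k_ox * o2 / (K_O2 + o2)                   # Monod-type O2 consumption
    denitrif = k_den * no3 * K_INHIB / (K_INHIB + o2)   # inhibited while O2 is present
    return [-aerobic, -denitrif]

sol = solve_ivp(rhs, (0, 100), [8.0, 25.0], dense_output=True)
for day in (0, 20, 50, 100):
    o2, no3 = sol.sol(day)
    print(f"day {day:3d}: O2 = {o2:5.2f} mg/l, NO3 = {no3:5.2f} mg/l")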
Nevertheless, the modelling approach is a useful aid for identifying source and sink areas of nitrate pollution, as well as for investigating the system response to management measures or land-use changes with scenario simulations. The modelling approach also assists in the interpretation of observed data, as it allows local observations to be integrated into a spatial and temporal framework.
Modern health care systems are characterized by pronounced prevention and cost-optimized treatments. This dissertation offers novel empirical evidence on how useful such measures can be. The first chapter analyzes how radiation, a main pollutant in health care, can negatively affect cognitive health. The second chapter focuses on the effect of Low Emission Zones on public health, as air quality is the major external source of health problems. Both chapters point out potentials for preventive measures. Finally, chapter three studies how changes in treatment prices affect the reallocation of hospital resources. In the following, I briefly summarize each chapter and discuss implications for health care systems as well as other policy areas. Based on the National Educational Panel Study linked to data on radiation, chapter one shows that radiation can have negative long-term effects on cognitive skills, even at subclinical doses. Exploiting arguably exogenous variation in soil contamination in Germany due to the Chernobyl disaster in 1986, the findings show that people exposed to higher radiation perform significantly worse in cognitive tests 25 years later. Identification is ensured by abnormal rainfall within a critical period of ten days. The results show that the effect is stronger among older cohorts than younger cohorts, which is consistent with radiation accelerating cognitive decline as people get older. On average, a one-standard-deviation increase in the initial level of Cs-137 (around 30 chest x-rays) is associated with a decrease in cognitive skills by 4.1 percent of a standard deviation (around 0.05 school years). Chapter one shows that subclinical levels of radiation can have negative consequences even after early childhood. This is of particular importance because most of the literature focuses on exposure very early in life, often during pregnancy; however, the population exposed after birth is over 100 times larger. These results point to substantial external human capital costs of radiation, which can be reduced by the choice of medical procedures. There is large potential for reductions, because about one-third of all CT scans are assumed to be not medically justified (Brenner and Hall, 2007). If people receive unnecessary CT scans because of economic incentives, this chapter points to additional external costs of health care policies. Furthermore, the results can inform the cost-benefit trade-off for medically indicated procedures. Chapter two provides evidence on the effectiveness of Low Emission Zones. Low Emission Zones are typically justified by improvements in population health, yet there is little evidence on the potential health benefits of policy interventions aimed at improving air quality in inner cities. The chapter asks how the coverage of Low Emission Zones affects air pollution and hospitalization, exploiting variation in the roll-out of Low Emission Zones in Germany. It combines information on the geographic coverage of Low Emission Zones with rich panel data on the universe of German hospitals over the period from 2006 to 2016, with precise information on hospital locations and the annual frequency of detailed diagnoses.
In order to establish that our estimates of Low Emission Zones' health impacts can indeed be attributed to improvements in local air quality, we use data from Germany's official air pollution monitoring system, assign monitor locations to Low Emission Zones, and test whether measures of air pollution are affected by the coverage of a Low Emission Zone. The results in chapter two confirm earlier findings showing that the introduction of Low Emission Zones improved air quality significantly by reducing NO2 and PM10 concentrations. Furthermore, the chapter shows that hospitals whose catchment areas are covered by a Low Emission Zone diagnose significantly fewer air-pollution-related diseases, in particular fewer incidents of chronic diseases of the circulatory and respiratory systems. The effect is stronger before 2012, which is consistent with a general improvement in the vehicle fleet's emission standards. Depending on the disease, a one-standard-deviation increase in the share of a hospital's catchment area covered by a Low Emission Zone reduces the yearly number of diagnoses by up to 5 percent. These findings have strong implications for policy makers. In 2015, overall costs for health care in Germany were around 340 billion euros, of which 46 billion euros were for diseases of the circulatory system, making it the most expensive type of disease, with 2.9 million cases (Statistisches Bundesamt, 2017b). Hence, reductions in the incidence of diseases of the circulatory system may directly reduce society's health care costs. Whereas chapters one and two study the demand side of health care markets and thus preventive potential, chapter three analyzes the supply side. Exploiting the same hospital panel data set as in chapter two, chapter three studies the effect of treatment price shocks on the reallocation of hospital resources in Germany. Starting in 2005, the implementation of the German DRG system led to general idiosyncratic treatment price shocks for individual hospitals. Thus far there is little evidence on the impact of general price shocks on the reallocation of hospital resources. Additionally, I add to the existing literature by showing that price shocks can have persistent effects on hospital resources even when these shocks vanish. Simple OLS regressions, however, would underestimate the true effect due to endogenous treatment price shocks. I implement a novel instrumental-variable strategy that exploits exogenous variation in the number of days of snow in hospital catchment areas. A peculiarity of the reform allowed variation in days of snow to have a persistent impact on treatment prices. I find that treatment price increases lead to increases in input factors such as nursing staff, physicians and the range of treatments offered, but to decreases in treatment volume, which indicates supplier-induced demand. Furthermore, the probability of hospital mergers and privatization decreases. Structural differences in pre-treatment characteristics between hospitals enhance these effects; for instance, private and larger hospitals are more affected. IV estimates reveal that OLS results are biased towards zero in almost all dimensions, because structural hospital differences are correlated with the reallocation of hospital resources. These results are important for several reasons. The G-DRG reform led to a persistent polarization of hospital resources, as some hospitals were exposed to treatment price increases while others experienced reductions.
If hospitals respond to price reductions by increasing treatment volume through unnecessary therapies, this has a negative impact on population wellbeing and public spending. However, the results show a decrease in the range of treatments when prices decrease: hospitals might specialize more and thus attract more patients. From a policy perspective it is important to evaluate whether such changes in the range of treatments jeopardize an adequate nationwide provision of treatments. Furthermore, the results show a decrease in the number of nurses and physicians when prices decrease, which could partly explain the nursing crisis in German hospitals. However, since hospitals specialize more, they might be able to realize efficiency gains that justify reductions in input factors without losses in quality. Further research is necessary to provide evidence on the impact of the G-DRG reform on health care quality. Another important aspect is the change in organizational structure. Many public hospitals have been privatized or merged, and the findings show that this is at least partly driven by the G-DRG reform. This can again lead to a lack of services offered in some regions if merged hospitals specialize more or if hospitals are taken over by ecclesiastical organizations that do not provide all treatments due to moral conviction. Overall, this dissertation reveals large potential for preventive health care measures and helps to explain reallocation processes in the hospital sector when treatment prices change. Furthermore, its findings have potentially relevant implications for other areas of public policy. Chapter one identifies an effect of low-dose radiation on cognitive health. As mankind searches for new energy sources, nuclear power is becoming popular again; the results of chapter one, however, point to substantial costs of nuclear energy that have not yet been accounted for. Chapter two finds strong evidence that air quality improvements through Low Emission Zones translate into health improvements, even at relatively low levels of air pollution. These findings may, for instance, be relevant for the design of further policies targeting air pollution, such as diesel bans. As pointed out in chapter three, the implementation of DRG systems may have unintended side effects on the reallocation of hospital resources. This may also apply to other providers in the health care sector, such as resident doctors.
Throughout its empirical history, eye movement research has been aware of the differences in reading behavior induced by individual differences and task demands. This work introduces a novel, comprehensive concept of reading strategy, comprising individual differences in reading style and reading skill as well as reader goals. In a series of sentence reading experiments recording eye movements, the influence of reading strategies on reader- and word-level effects, assuming distributed processing, was investigated. The results provide evidence for strategic, top-down influences on eye movement control that extend our understanding of eye guidance in reading.
Food Neophilia
(2023)
Despite the clear benefits of a balanced diet, many people worldwide do not adhere to the corresponding dietary guidelines. To develop appropriate strategies supporting health-promoting nutrition, an understanding of the underlying factors is essential. Older adults in particular represent an important target group for nutrition-related prevention and intervention approaches. One of the many factors discussed as determinants of a health-promoting diet is food neophilia, i.e., the willingness to try new and unfamiliar foods. Current research suggests that food neophilia is positively associated with a health-promoting diet, but research in this area has so far been extremely limited. The aim of this dissertation was to investigate the construct of food neophilia and its relation to health-promoting dietary behaviour in older adulthood, in order to better understand the potential of food neophilia for the health promotion of older adults. The first publication examined how the construct of food neophilia can be measured reliably and validly, to enable further investigation. The psychometric validation of the German version of the Variety Seeking Tendency Scale (VARSEEK) was based on two independent samples with a total of N = 1000 participants and confirmed that the scale is a reliable and valid instrument for measuring food neophilia. Building on this, the second publication analysed the relationship between food neophilia and diet quality over time. The prospective investigation of N = 960 older participants (M = 63.4 years) using a cross-lagged panel analysis revealed high temporal stability of both food neophilia and diet quality over a period of three years. A positive cross-sectional association between food neophilia and diet quality also emerged, but food neophilia was not identified as a significant determinant of diet quality over time. Finally, the third publication considered not only the individual effects of food neophilia on diet quality but also potential dynamic interactions within partnerships. Using an actor-partner interdependence model, potential intra- and interpersonal influences of food neophilia on diet quality were differentiated. The dyadic analysis of N = 390 heterosexual couples in older adulthood (M = 64.0 years) revealed a dominance pattern: while women's food neophilia was positively related to their own diet quality and that of their partners, men's food neophilia was not associated with the couple's diet quality. Overall, the present dissertation makes a valuable contribution to a comprehensive understanding of food neophilia and its role in the context of the nutritional health of older adults.
Despite its lack of predictive power over time, the positive association between food neophilia and diet quality suggests that focusing on a positive and curious attitude towards food could offer an innovative perspective for prevention and intervention approaches supporting health-promoting nutrition in older adults.
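In its simplest two-wave form, the cross-lagged panel model underlying the second publication can be sketched as follows (the variable labels are ours, not the dissertation's):

  \mathrm{Diet}_{t_2} = \alpha_1\,\mathrm{Diet}_{t_1} + \gamma_1\,\mathrm{Neophilia}_{t_1} + \varepsilon_1
  \mathrm{Neophilia}_{t_2} = \alpha_2\,\mathrm{Neophilia}_{t_1} + \gamma_2\,\mathrm{Diet}_{t_1} + \varepsilon_2

where the \alpha terms are the stability paths and the \gamma terms the cross-lagged paths; the reported pattern corresponds to high \alpha values together with a non-significant \gamma_1.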
The intergalactic medium is kept highly photoionised by the intergalactic UV background radiation field generated by the overall population of quasars and galaxies. In the vicinity of sources of UV photons, such as luminous high-redshift quasars, the UV radiation field is enhanced by the local source contribution. The higher degree of ionisation is visible as a reduced line density or, more generally, as a decreased level of absorption in the Lyman alpha forest of neutral hydrogen. This so-called proximity effect has been detected with high statistical significance towards luminous quasars. If quasars radiate rather isotropically, background quasar sightlines located near foreground quasars should show a region of decreased Lyman alpha absorption close to the foreground quasar. Despite considerable effort, such a transverse proximity effect has been detected in only a few cases. So far, studies of the transverse proximity effect have mostly been limited by the small number of suitable projected pairs or groups of high-redshift quasars. With the aim of substantially increasing the number of quasar groups in the vicinity of bright quasars, we conducted a targeted survey for faint quasars around 18 well-studied quasars, employing slitless spectroscopy. Among the reduced and calibrated slitless spectra of 29000 objects on a total area of 4.39 square degrees, we discovered a total of 169 previously unknown quasar candidates based on their prominent emission lines. 81 potential z>1.7 quasars were selected for confirmation by slit spectroscopy at the Very Large Telescope (VLT), of which we were able to confirm 80; 64 of the newly discovered quasars reside at z>1.7. The high success rate of the follow-up observations implies that the majority of the remaining candidates are quasars as well. In 16 of these groups we search for a transverse proximity effect as a systematic underdensity in the HI Lyman alpha absorption, employing a novel technique to characterise the random absorption fluctuations in the forest in order to estimate the significance of the effect. Neither low-resolution nor high-resolution spectra of the background quasars in our groups present evidence for a transverse proximity effect. However, Monte Carlo simulations show that the effect should be detectable only at the 1-2σ level near three of the foreground quasars, so we cannot distinguish between the presence and the absence of a weak signature of the transverse proximity effect. The systematic effects of quasar variability, quasar anisotropy and intrinsic overdensities near quasars likely explain the apparent lack of the transverse proximity effect. Even in the absence of these systematic effects, we show that a statistically significant detection of the transverse proximity effect requires at least 5 medium-resolution spectra of background quasars near foreground quasars whose UV flux exceeds the UV background by a factor of 3. Statistical studies of the transverse proximity effect therefore require large numbers of suitable pairs. Two sightlines towards the central quasars of our survey fields show intergalactic HeII Lyman alpha absorption. A comparison of the HeII absorption to the corresponding HI absorption yields an estimate of the spectral shape of the intergalactic UV radiation field, typically parameterised by the HeII/HI column density ratio eta. We analyse the fluctuating UV spectral shape on both lines of sight and correlate it with seven foreground quasars.
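For orientation (this is the standard photoionisation description in the Bajtlik-Duncan-Ostriker spirit, sketched here for context rather than quoted from the thesis), the Lyman alpha line density is expected to be suppressed near a quasar as

  \frac{\mathrm{d}N}{\mathrm{d}z} \propto \left(1 + \omega(r)\right)^{1-\beta}, \qquad \omega(r) = \frac{\Gamma_{\mathrm{QSO}}(r)}{\Gamma_{\mathrm{bg}}},

where \Gamma_{\mathrm{QSO}} and \Gamma_{\mathrm{bg}} are the HI photoionisation rates due to the quasar and the UV background, respectively, and \beta \approx 1.5 is the slope of the column density distribution; the factor-3 flux criterion mentioned below corresponds to \omega \gtrsim 3.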
On the line of sight towards Q0302-003 we find a harder radiation field near 4 foreground quasars: in the direct vicinity of the quasars, eta is consistent with values of 25-100, whereas at large distances from the quasars eta>200 is required. The second line of sight, towards HE2347-4342, probes lower redshifts, where eta is directly measurable in the resolved HeII forest. Again we find that the radiation field near the 3 foreground quasars is significantly harder than in general. While eta still shows large fluctuations near the quasars, probably due to radiative transfer, the radiation field is on average harder near the quasars than far away from them. We interpret these discoveries as the first detections of the transverse proximity effect as a local hardness fluctuation in the UV spectral shape. No significant HI proximity effect is predicted for the 7 foreground quasars. In fact, the HI absorption near the quasars is close to or slightly above the average, suggesting that the weak signature of the transverse proximity effect is masked by intrinsic overdensities. However, we show that the UV spectral shape traces the transverse proximity effect even in overdense regions or at large distances. The spectral hardness is therefore a sensitive physical measure of the transverse proximity effect, able to break the density degeneracy affecting traditional searches.
Proteins are amphiphilic and adsorb at liquid interfaces. Therefore, they can be efficient stabilizers of foams and emulsions. β-lactoglobulin (BLG) is one of the most widely studied proteins due to its major industrial applications, in particular in food technology.
In the present work, the influence of bulk concentration, solution pH and ionic strength on the dynamic and equilibrium interfacial pressures of BLG adsorption layers at the solution/tetradecane (W/TD) interface has been investigated. The dynamic interfacial pressure (Π) and the interfacial dilational elastic modulus (E') of BLG solutions were measured with the profile analysis tensiometer PAT-1 (SINTERFACE Technologies, Germany) for various concentrations at three pH values (3, 5 and 7) at a fixed ionic strength of 10 mM, and for one selected concentration at three ionic strengths (1 mM, 10 mM and 100 mM). A quantitative data analysis requires additional consideration of depletion due to BLG adsorption at the interface at low protein bulk concentrations; this makes experiments more efficient when oil drops are studied in the aqueous protein solution rather than solution drops formed in oil. On the basis of the experimental data, the concentration dependencies and the effect of solution pH on the protein's surface activity were analysed qualitatively. In the presence of 10 mM buffer, the adsorbed amount generally increases with increasing BLG bulk concentration for all three pH values. The adsorption kinetics at pH 5 result in the highest Π values at any time of adsorption, whereas the protein is least surface active at pH 3.
The experimental data do not agree well with the classical diffusion-controlled model, because the protein molecules undergo conformational changes on contact with the hydrophobic oil phase in order to adapt to the interfacial environment. A new theoretical model is therefore proposed here: the classical diffusion model, modified by assuming an additional change in the surface activity of BLG molecules upon adsorption at the interface. This effect can be expressed through the adsorption activity constant in the corresponding equation of state. The dilational visco-elasticity of the BLG adsorption layers is determined from dynamic interfacial tensions measured during sinusoidal drop area variations. The interfacial tension responses to these harmonic drop oscillations are interpreted with the same thermodynamic model that is used for the corresponding adsorption isotherm.
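For orientation (this is the textbook reference point, not a result of the thesis), the classical diffusion-controlled adsorption kinetics referred to above are usually described by the Ward-Tordai equation,

  \Gamma(t) = 2\sqrt{\frac{D}{\pi}}\left[c_0\sqrt{t} - \int_0^{\sqrt{t}} c_s(t-\tau)\,\mathrm{d}\sqrt{\tau}\right],

where \Gamma is the adsorbed amount, D the diffusion coefficient, c_0 the bulk concentration and c_s the subsurface concentration; at short times it reduces to \Gamma(t) \approx 2 c_0 \sqrt{Dt/\pi}. Deviations from this behaviour are what motivate the additional, time-dependent adsorption activity in the modified model.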
At a selected BLG concentration of 2×10⁻⁶ mol/l, the influence of ionic strength on the interfacial pressure was studied using buffer concentrations of 1, 10 and 100 mM. The interfacial pressure is only weakly affected at pH 5, whereas increasing the buffer concentration has a strong impact at pH 3 and 7. In conclusion, the structure formation of adsorbed BLG layers in the early stage of adsorption at the W/TD interface is similar to that at the solution/air (W/A) surface. However, the equation of state at the W/TD interface yields an adsorption activity constant that is almost two orders of magnitude higher than that for the solution/air surface.
At the end of this work, a new experimental tool, the Drop and Bubble Micro Manipulator DBMM (SINTERFACE Technologies, Germany), is introduced to study the stability of protein-covered bubbles against coalescence. Among the available protocols, the lifetime between the moment of contact and the coalescence of two contacting bubbles is determined for different BLG concentrations. The adsorbed amount of BLG is determined as a function of time and concentration and is correlated with the observed coalescence behaviour of the contacting bubbles.
This thesis presents methods, techniques and tools for developing three-dimensional representations of tactical intelligence assessments. Techniques from GIScience are combined with crime mapping methods. The range of methods applied in this study spans spatio-temporal GIS analysis, 3D geovisualisation and GIS programming. The work presents methods to enhance digital three-dimensional city models with application-specific thematic information. This information facilitates further geovisual analysis, for instance estimations of urban risk exposure. Specific methods and workflows are developed to facilitate the integration of spatio-temporal crime scene analysis results into 3D tactical intelligence assessments. The analysis comprises hotspot identification with kernel density estimation (KDE) techniques, LISA-based verification of KDE hotspots, geospatial hotspot area characterisation and repeat victimisation analysis. To visualise the findings of such extensive geospatial analysis, three-dimensional geovirtual environments are created. Workflows are developed to integrate analysis results into these environments and to combine them with additional geospatial data. The resulting 3D visualisations allow for efficient communication of complex findings of geospatial crime scene analysis.
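As a generic illustration of the KDE step (a sketch using SciPy on synthetic coordinates, not the GIS toolchain employed in the thesis), a density surface is estimated from point data and thresholded to delineate candidate hotspots, which could then be verified with LISA statistics:

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic crime-scene coordinates: diffuse background plus one cluster.
xy = np.vstack([rng.normal(0, 5, (200, 2)),
                rng.normal([3, 3], 0.5, (50, 2))]).T

kde = gaussian_kde(xy)                       # bandwidth via Scott's rule
gx, gy = np.mgrid[-10:10:100j, -10:10:100j]  # evaluation grid
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

threshold = np.quantile(density, 0.95)       # top 5% of the density surface
print("hotspot cells:", int((density > threshold).sum()))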
The impact of global warming on human water resources is attracting increasing attention. No other region in the world is as strongly affected by changes in water supply as the tropics. Especially in Africa, the availability of and access to water is more crucial to existence (basic livelihoods and economic growth) than anywhere else on Earth. In East Africa, rainfall is mainly influenced by the migration of the Inter-Tropical Convergence Zone (ITCZ) and by the El Niño Southern Oscillation (ENSO), with more rain and floods during El Niño and severe droughts during La Niña. Forecasting East African rainfall in a warming world requires a better understanding of the response of ENSO-driven variability to the mean climate. Unfortunately, existing meteorological data sets are too short or incomplete to establish a precise evaluation of future climate. From Lake Challa near Mount Kilimanjaro, we report records from a laminated lake sediment core spanning the last 25,000 years. Analysis of a sediment trap emptied monthly confirms the annual origin of the laminations and demonstrates that the varve thicknesses are strongly linked to the duration and strength of the windy season. Given the modern control of seasonal ITCZ location on wind and rain in this region and the inverse relation between the two, thicker varves represent windier and thus drier years. El Niño (La Niña) events are associated with wetter (drier) conditions in East Africa and decreased (increased) surface wind speeds. On this basis, varve thickness can be used as a tool to reconstruct a) annual rainfall, b) wind season strength, and c) ENSO variability. Within this thesis, I found evidence for centennial-scale changes in ENSO-related rainfall variability during the last three millennia, abrupt changes in variability during the Medieval Climate Anomaly and the Little Ice Age, and an overall reduction in East African rainfall and its variability during the Last Glacial period. Climate model simulations support forward extrapolation from these lake-sediment data, indicating that a future Indian Ocean warming will enhance East Africa's hydrological cycle and its interannual variability in rainfall. Furthermore, I compared geochemical analyses from the sediment trap samples with a broad range of limnological, meteorological, and geological parameters to characterize the sedimentation processes from the in-situ rocks to the deposited sediments. As a result, an excellent calibration for the existing μXRF data from Lake Challa over the entire 25,000-year profile was established. The climate development during the last 25,000 years as reconstructed from the Lake Challa sediments is in good agreement with other studies and highlights the complex interactions between long-term orbital forcing, atmosphere, ocean and land surface conditions. My findings help to understand how abrupt climate changes occur and how these changes correlate with climate changes elsewhere on Earth.
Perovskite solar cells have become one of the most studied systems in the quest for new, cheap and efficient solar cell materials. Within a decade, device efficiencies have risen to >25% in single-junction devices and >29% in tandem devices on top of silicon. This rapid improvement was in many ways fortunate, as, e.g., the energy levels of commonly used halide perovskites are compatible with already existing materials from other photovoltaic technologies such as dye-sensitized or organic solar cells. Despite this rapid success, the fundamental working principles must be understood to allow concerted further improvements. This thesis focuses on a comprehensive understanding of recombination processes in functioning devices.
First, the impact of the energy level alignment between the perovskite and fullerene-based electron transport layers is investigated. This controversial topic is comprehensively addressed, and recombination is mitigated by reducing the energy difference between the perovskite conduction band minimum and the LUMO of the fullerene. Additionally, an insulating blocking layer is introduced, which is even more effective in reducing this recombination without compromising carrier collection and thus efficiency. Despite the rapid efficiency development (certified efficiencies have broken through the 20% ceiling) and thousands of researchers working on perovskite-based optoelectronic devices, reliable protocols on how to reach these efficiencies are lacking. Having established robust methods for >20% devices, while keeping track of possible pitfalls, a detailed description of the fabrication of perovskite solar cells at the highest efficiency level (>20%) is provided. The fabrication of low-temperature p-i-n structured devices is described, commenting on important factors such as practical experience, processing atmosphere and temperature, material purity and solution age. Analogous to reliable fabrication methods, a method to identify recombination losses is needed to further improve efficiencies. Thus, absolute photoluminescence is identified as a direct way to quantify the quasi-Fermi level splitting of the perovskite absorber (1.21 eV) and the interfacial recombination losses imposed by the transport layers, which reduce the latter to ~1.1 eV. By implementing very thin interlayers at both the p- and n-interface (PFN-P2 and LiF, respectively), these losses are suppressed, enabling an open-circuit voltage (VOC) of up to 1.17 V. Optimizing the device dimensions and the bandgap, 20% efficient devices with 1 cm² active area are demonstrated. Another important consideration is the solar cells' stability when subjected to field-relevant stressors during operation, in particular heat, light, bias or a combination thereof. Perovskite layers, especially those incorporating organic cations, have been shown to degrade under these stressors. Keeping in mind that several interlayers have been successfully used to mitigate recombination losses, a family of perfluorinated self-assembled monolayers (X-PFCn, where X denotes I/Br and n = 7-12) is introduced as interlayers at the n-interface. Indeed, they reduce interfacial recombination losses, enabling device efficiencies of up to 21.3%. Even more importantly, they improve the stability of the devices: solar cells with IPFC10 are stable over 3000 h of ambient storage and withstand a harsh 250 h of maximum power point operation at 85 °C without appreciable efficiency losses. To advance further and improve device efficiencies, a sound understanding of the photophysics of a device is imperative. Many experimental observations in recent years have, however, drawn an inconclusive picture, often suffering from technical or physical impediments that disguise, e.g., capacitive discharge as recombination dynamics. To circumvent these obstacles, fully operational, highly efficient perovskite solar cells are investigated by a combination of multiple optical and optoelectronic probes, allowing a conclusive picture of the recombination dynamics in operation to be drawn.
Supported by drift-diffusion simulations, the device recombination dynamics can be fully described by a combination of first-, second- and third-order recombination, and JV curves as well as luminescence efficiencies over multiple illumination intensities are well described within the model. On this basis, steady-state carrier densities, effective recombination constants, densities of states and effective masses are calculated, putting the devices at the brink of the radiative regime. Moreover, a comprehensive review of recombination in state-of-the-art devices is given, highlighting the importance of interfaces in nonradiative recombination. Different strategies to assess these losses are discussed, before emphasizing successful strategies to reduce interfacial recombination and pointing towards the steps necessary to further improve device efficiency and stability. Overall, the main findings represent an advancement in understanding loss mechanisms in highly efficient solar cells. Several reliable optoelectronic techniques are used, and interfacial losses are found to be of grave importance for both efficiency and stability. Addressing the interfaces, several interlayers are introduced that mitigate recombination losses and degradation.
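As an illustration of the rate-equation picture described above, the following sketch integrates dn/dt = G - k1*n - k2*n^2 - k3*n^3 to its steady state; the rate constants are generic literature-style placeholders, not the fitted parameters of the thesis:

import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 1e6, 5e-11, 1e-28   # [1/s], [cm^3/s], [cm^6/s] (assumed values)
G = 3e21                          # generation rate [cm^-3 s^-1], roughly 1 sun

def rate(t, n):
    # First-, second- and third-order recombination balanced against generation.
    return G - k1 * n - k2 * n**2 - k3 * n**3

sol = solve_ivp(rate, (0.0, 1e-5), [0.0], method="LSODA", rtol=1e-8)
print(f"steady-state carrier density: {sol.y[0, -1]:.2e} cm^-3")
# Setting G = 0 after reaching steady state yields the decay transients probed
# by time-resolved photoluminescence and related optoelectronic techniques.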
The aim of this work is to investigate structures in the Earth's outer core and to draw conclusions about their consequences for geodynamic models. The first part addresses the core-phase caustic B by means of a cumulative amplitude-distance curve. For this purpose, the absolute amplitudes of the PKP phases are determined in the distance range from 142° to 147° and compared with the amplitudes of synthetic seismograms. The data consist of broadband recordings from the German Regional Seismic Network (GRSN) and the Gräfenberg array (GRF). The waveforms are filtered in the WWSSN-SP frequency band. The database comprises four deep-focus earthquakes from the New Hebrides subduction zone (Vanuatu) and four nuclear explosions at the Mururoa and Fangataufa atolls in the South Pacific. Seen from the regional network, both regions lie at an epicentral distance of about 145°. The use of a homogeneously instrumented network of detectors and the application of station and magnitude corrections remove most of the scatter in the amplitude values, also in comparison with studies of long-period amplitudes near the core-phase caustic (Häge, 1981). A further reason for the small scatter is the exclusive use of events with short, impulsive source time functions. Only this small scatter makes an interpretation of the data possible. The theoretical amplitude curves of the Earth models under investigation show a similar shape near caustic B. All calculations use a common model for the quality factor of P and S waves, composed of the Q values of the models CIT112 and PREM. The amplitudes computed with this Q model lie slightly above the measured amplitudes. This need not be taken into account, because the cumulative amplitude-distance curve is evaluated from the position of its maximum on the distance axis; consequently, no alternative Q model is developed. With respect to the position of the caustic maximum, the Earth models fall into two categories. One group consists of the models IASP91 and 1066B, whose maxima lie at 144.6° and 144.7°. The second group comprises AK135, PREM and SP6, with maxima at 145.1° and 145.2° (SP6). The measured amplitude curve has its maximum at 145°. All distances refer to a source depth of 200 km; the caustic distance for a surface source is 0.454° larger than the values given. The maxima of the models AK135 and PREM thus lie only 0.1° away from that of the measured cumulative amplitude curve. Therefore, no new model is constructed, as it would yield only a marginally improved amplitude curve. The result of this study is a measured cumulative amplitude-distance curve for caustic B. The curve fixes the position of caustic B for short-period data to ±0.15° and thus determines which Earth models are particularly suited to describe the amplitudes in the distance range of caustic B. The Earth models AK135 and PREM, complemented by a common Q model, reproduce the amplitude behaviour best.
Since the amplitude curves of both models lie close together, they can be regarded as equivalent. The second part of this work investigates the structure of the transition zone to the inner core using the spectral decay of the phase PKP(BC)diff at point C of the travel-time curve. The physical process of diffraction is responsible for the strong decrease of the amplitudes of this phase. Diffraction affects the decay behaviour of the different frequency components of the seismic signal in different ways. An interpretation of this behaviour requires the calculation of decay spectra: the attenuation of the PKP(BC)diff signal is determined for eight frequencies between 6.4 s and 1.25 Hz and displayed as a spectrum. The shape of the decay spectrum is characteristic of the velocity structure directly above the inner-core boundary (ICB). The earthquakes whose core phases are recorded by the regional network as diffracted core phases BCdiff lie at distances beyond 150°. At this distance lie the hypocentres of the Tonga-Fiji subduction zone, whose broadband recordings are used. The evaluation of uncorrected waveforms yields decay spectra that cannot be reconciled with plausible Earth models. For this reason, the data are subjected to a spectral station correction determined specifically for this purpose. The analysis begins with a test of known Earth models with different velocity structures above the ICB, among them PREM, IASP91, AK135Q, PREM2, SP6, OICM2 and a variant of PREM. It shows that models with a reduced velocity gradient above the ICB agree better with the measured data than models without this transition zone. To verify this hypothesis, an Earth model without a reduced gradient above the ICB (PREM) is complemented by a series of different velocity profiles in this region, and the corresponding synthetic seismograms are computed. The result is two variants of PREM whose frequency analysis agrees well with the data. The decay spectrum of the Earth model PD47, which has a negative gradient within a 380 km thick layer, closely resembles the measured spectra. Nevertheless, it cannot be regarded as a realistic model, because its point C lies at too large a distance. Moreover, its too-short differential travel time between PKP(AB) and PKP(DF), i.e. PKIKP, would have to be compensated by a larger change of the velocity structure in the inner core. The model PD27a, which does not have these drawbacks, is therefore favoured. PD27a has a 150 km thick layer of constant velocity above the ICB. This type of velocity profile is consistent with the geodynamic picture in which light elements are enriched above the ICB, which is regarded as the driver of convection in the outer core.
Partial melting is a first order process for the chemical differentiation of the crust (Vielzeuf et al., 1990). Redistribution of chemical elements during melt generation crucially influences the composition of the lower and upper crust and provides a mechanism to concentrate and transport chemical elements that may also be of economic interest. Understanding of the diverse processes and their controlling factors is therefore not only of scientific interest but also of high economic importance to cover the demand for rare metals.
The redistribution of major and trace elements during partial melting is a central step towards understanding how granite-bound mineralization develops (Hedenquist and Lowenstern, 1994). Partial melt generation and the mobilization of ore elements (e.g. Sn, W, Nb, Ta) into the melt depend on the composition of the sedimentary source and on the melting conditions. Distinct source rocks have different compositions reflecting their deposition and alteration histories. This specific chemical "memory" results in different mineral assemblages and melting reactions for different protolith compositions during prograde metamorphism (Brown and Fyfe, 1970; Thompson, 1982; Vielzeuf and Holloway, 1988). These factors not only exert an important influence on the distribution of chemical elements during melt generation; they also influence the volume of melt that is produced, the extraction of the melt from its source, and its ascent through the crust (Le Breton and Thompson, 1988). On a larger scale, protolith distribution and chemical alteration (weathering), prograde metamorphism with partial melting, melt extraction, and granite emplacement ultimately depend on (plate-)tectonic controls (Romer and Kroner, 2016). Comprehension of the individual stages and their interaction is crucial for understanding how granite-related mineralization forms, thereby allowing estimation of the mineralization potential of certain areas. Partial melting also influences the isotope systematics of melt and restite. Radiogenic and stable isotopes of magmatic rocks are commonly used to trace the source of intrusions or to quantify the mixing of magmas from different sources with distinct isotopic signatures (DePaolo and Wasserburg, 1979; Lesher, 1990; Chappell, 1996). These applications rest on the fundamental requirement that the isotopic signature of the melt reflects that of the bulk source from which it is derived. Different minerals in a protolith may, however, have radiogenic isotope compositions that deviate from the whole-rock signature (Ayres and Harris, 1997; Knesel and Davidson, 2002). In particular, old minerals with a distinct parent-to-daughter (P/D) ratio are expected to have a specific radiogenic isotope signature. As the partial melting reaction involves only selected phases of a protolith, the isotopic signature of the melt reflects that of the minerals involved in the melting reaction and should therefore differ from the bulk source signature. Similar considerations hold true for stable isotopes.
Over the last two decades, secondary plant metabolites and their health-promoting properties have been studied extensively from a nutritional-physiological perspective, and specific positive effects in the human organism have in part been described in great detail. Among the carotenoids, the secondary plant metabolite lutein has moved into the focus of research, particularly with regard to the prevention of ophthalmological diseases. This xanthophyll, synthesized exclusively by plants and some algae, is taken up into the human organism with plant foods, in particular green leafy vegetables. There it accumulates preferentially in the macular pigment of the retina of the human eye and plays an important role in maintaining the functionality of the photoreceptor cells. With ageing, a decrease in macular pigment density and a degradation of lutein can be observed. The resulting destabilization of the photoreceptor cells, in combination with an altered metabolic state in the ageing organism, can lead to age-related macular degeneration (AMD). The pathological symptoms of this eye disease range from loss of visual acuity to irreversible blindness. Since therapeutic agents can only prevent progression, research efforts aim at finding preventive measures. The supplementation of lutein-containing preparations offers one starting point. Dietary supplements with lutein in various forms of application are already on the market. Limiting factors are the stability and bioavailability of lutein, which is in part expensive to purchase and of unknown purity. For this reason, the use of lutein esters, the plant storage form of lutein, in a dietary supplement would be advantageous. In addition to their naturally higher stability, lutein esters are sustainable and inexpensive to use.
In this work, physicochemical and nutritionally relevant aspects of the product development process of a dietary supplement with lutein esters in a colloidal formulation were investigated. The so far unique application of lutein esters in an oral spray is intended to facilitate and improve the uptake of the active ingredient, particularly for elderly people. Based on the results and their nutritional evaluation, recommendations were to be given, among other things, for the recipe composition of a miniemulsion (an emulsion with particle sizes <1.0 µm). The bioavailability of lutein esters from the developed colloidal formulations was assessed by means of in vitro studies of resorption and absorption availability.
In physical investigations, the basic constituents of the formulations were first specified. In initial active-ingredient-free model emulsions, selected oils as carrier phase as well as emulsifiers and solubilizers (peptizers) were physically tested for their suitability to provide a miniemulsion. The best stability and optimal miniemulsion properties were obtained using MCT oil (medium-chain triglycerides) or rapeseed oil in the carrier phase, together with the emulsifier Tween® 80 (Tween 80), alone or in combination with the whey protein hydrolysate Biozate® 1 (Biozate 1).
The physical investigations of the model emulsions yielded the pre-emulsions as prototypes. These contained the active ingredient lutein in different forms: pre-emulsions with lutein, with lutein esters, and with lutein plus lutein esters were designed, containing either the emulsifier Tween 80 or its combination with Biozate 1. In the preparation of the pre-emulsions, ultrasonic emulsification followed by high-pressure homogenization produced the desired miniemulsions. Both emulsifiers provided optimal stabilization. The active ingredients were then characterized physicochemically. Lutein esters from oleoresin in particular proved stable under various storage conditions. Likewise, lutein and lutein esters were shown to be stable during short-term treatment under specific mechanical, thermal, acidic and basic conditions; the addition of Biozate 1 provided additional protection only for lutein. During prolonged physicochemical treatment, the active ingredients incorporated in the miniemulsions underwent moderate degradation, with a marked sensitivity towards basic conditions. For the recipe development of the dietary supplement, the recommendation was therefore to design a miniemulsion with a slightly acidic pH to protect the active ingredient through the controlled addition of further ingredients.
In the further development process of the supplement, final formulations with lutein esters as the active ingredient were established. The sole use of the emulsifier Biozate 1 proved unsuitable. The remaining final formulations contained, in the oil phase, the active ingredient together with MCT oil or rapeseed oil as well as α-tocopherol for stabilization. The aqueous phase consisted of the emulsifier Tween 80 or a combination of Tween 80 and Biozate 1. Additives were ascorbic acid and potassium sorbate for microbiological protection, and xylitol and orange flavour for sensory purposes. The composition of the base recipe and the emulsification procedure employed yielded stable miniemulsions. Long-term storage experiments with the final formulations at 4 °C further showed that the required amount of lutein esters in the product was maintained. Analogous investigations of a lutein-containing commercial preparation, in contrast, revealed an instability of lutein already during short-term storage.
Finally, the bioavailability of lutein esters was examined in vitro in resorption and absorption studies with the pre-emulsions and final formulations. After treatment in an established in vitro digestion model, only a minor resorption availability of the lutein esters could be established. Micellarization of the active ingredient from the designed formulations was limited, and enzymatic cleavage of the lutein esters to free lutein was observed only to a limited extent; the specificity and activity of the corresponding hydrolytic lipases towards lutein esters must be rated as extremely low. In subsequent cell culture experiments with the Caco-2 cell line, no cytotoxic effects of the relevant ingredients in the pre-emulsions were observed. In contrast, a sensitivity towards the final formulations was seen, which should be considered in the context of irritation of the mucous membranes of the gastrointestinal tract; a less complex recipe might minimize the observed limitations. Final absorption studies showed that, in principle, a small uptake of primarily lutein, but also of lutein monoesters, into enterocytes from miniemulsions can occur. Neither Tween 80 nor Biozate 1 had a beneficial influence on the absorption rate of lutein or lutein esters. Metabolization of the active ingredients by prior in vitro digestion increased the cellular uptake from formulations with lutein and with lutein esters alike. The observed uptake of lutein and lutein monoesters into enterocytes appears to proceed via passive diffusion, although active transport cannot be ruled out. Lutein diesters, by contrast, cannot enter the enterocytes via micellarization and simple diffusion because of their molecular size; their uptake into the small intestinal epithelial cells requires prior hydrolytic cleavage by specific lipases. This step in turn limits the effective cellular uptake of the lutein esters and constitutes a limitation of their bioavailability compared with free lutein.
In summary, a low bioavailability from colloidal formulations was demonstrated for the physicochemically stable lutein esters. Nevertheless, their use as a source of the secondary plant metabolite lutein in a dietary supplement can be recommended. In combination with the consumption of lutein-rich plant foods, a contribution to improving lutein status can be achieved despite the expected low bioavailability of the lutein esters from the supplement. Corresponding publications have shown clear correlations between the intake of lutein-ester-containing preparations and an increase in serum lutein concentration and macular pigment density in vivo. The slightly better bioavailability of free lutein must be weighed against its instability and high cost. As an outcome of this work, the commercial product Vita Culus® was designed. In the future, human intervention studies with the supplement should allow a final assessment of the bioavailability of lutein esters from the preparation.
Non-destructive testing of structures by means of ultrasonic measurement techniques has gained importance in recent years. Ultrasonic measurements can determine the geometry of structural elements and detect flaws that are invisible from the outside, such as delaminations and honeycombing.
With novel ultrasonic transducers embedded in the concrete element, structures are now to be monitored permanently for changes. For this purpose, ultrasonic signals are generated directly inside a structural element, which substantially extends the possibilities of conventional structural monitoring methods. An ultrasonic technique with embedded transducers could monitor a concrete element continuously and integrally and thus also register steadily progressing changes in the material fabric, such as microcracks.
Safety-relevant elements that are inaccessible for measurements after installation, or that cannot be tested with ultrasound from the surface, for example because of additional surface coatings, can be monitored with embedded transducers. In existing structures, the ultrasonic transducers can also be integrated retroactively using boreholes and special grouting mortar. For precast elements, embedded transducers lend themselves to production control and to monitoring of the construction process as a quality assurance tool. Rapid damage assessment of a structure after natural disasters, such as an earthquake or a flood, is also conceivable.
Thanks to their good coupling, these novel transducers enable the use of sensitive evaluation methods for signal analysis, such as cross-correlation, coda wave interferometry, and amplitude evaluation. With regular measurements, incipient damage to a structure can thus be detected at an early stage.
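As an illustration of such a sensitive evaluation method, the following sketch applies the stretching variant of coda wave interferometry to synthetic traces: a relative velocity change dv/v appears as a time stretching u1(t) = u0(t*(1 - dv/v)), and the stretch factor maximizing the correlation with a reference coda is found by grid search. This is illustrative only, not the measurement code of this work:

import numpy as np

fs = 500_000.0                          # sampling rate [Hz] (assumed)
t = np.arange(0, 0.01, 1 / fs)          # 10 ms coda window
rng = np.random.default_rng(1)
u0 = rng.standard_normal(t.size) * np.exp(-t / 3e-3)  # synthetic reference coda
true_dv = 5e-4                          # 0.05% velocity change (assumed)
u1 = np.interp(t * (1 - true_dv), t, u0)              # stretched repeat trace

def correlation(eps):
    # Correlate a trial stretching of the reference with the repeat trace.
    trial = np.interp(t * (1 - eps), t, u0)
    return np.corrcoef(trial, u1)[0, 1]

eps_grid = np.linspace(-2e-3, 2e-3, 801)
dv_est = eps_grid[np.argmax([correlation(e) for e in eps_grid])]
print(f"estimated dv/v = {dv_est:.2e}")  # recovers ~5e-4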
Since the damage state of a structure is not a directly measurable quantity, unambiguous damage detection generally requires the measurement of several physical quantities that are combined in a suitable way. Such quantities include the ultrasonic travel time, the amplitude of the ultrasonic signal, and the ambient temperature. To this end, correlations between the state of the structure, the ambient conditions, and the parameters of the measured ultrasonic signal must be investigated.
In this work, the novel transducers are presented. It is described how they can be installed both in existing concrete structures and in structures under construction. Experiments show that the transducers can be embedded at several levels, since their radiation pattern in concrete is nearly omnidirectional. The centre frequency of about 62 kHz allows distances of at least 3 m between transducers operating as transmitter and receiver, depending on the type of concrete and the signal-to-noise ratio. The sensitivity of the embedded transducers to changes in the concrete is demonstrated in two laboratory experiments: a three-point bending test and a freeze-thaw damage test. The results are compared with other non-destructive testing methods. It is shown that, through the use of sensitive evaluation methods, the transducers detect cracks forming in the concrete before they pose a danger to the structure. Finally, examples of installations of the novel ultrasonic transducers in real structural elements, two bridges and a foundation, are presented, and a concept for implementing long-term monitoring is derived from the first experience gained there.
3D point clouds are a universal and discrete digital representation of three-dimensional objects and environments. For geospatial applications, 3D point clouds have become a fundamental type of raw data acquired and generated using various methods and techniques. In particular, 3D point clouds serve as raw data for creating digital twins of the built environment.
This thesis concentrates on the research and development of concepts, methods, and techniques for preprocessing, semantically enriching, analyzing, and visualizing 3D point clouds for applications around transport infrastructure. It introduces a collection of preprocessing techniques that aim to harmonize raw 3D point cloud data, such as point density reduction and scan profile detection. Per-point metrics such as local density, verticality, and planarity are calculated for later use. One of the key contributions tackles the problem of analyzing and deriving semantic information in 3D point clouds. Three different approaches are investigated: a geometric analysis, a machine learning approach operating on synthetically generated 2D images, and a machine learning approach operating on 3D point clouds without intermediate representation.
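Such per-point metrics are commonly derived from the eigenvalues of the covariance matrix of each point's local neighborhood. The following Python sketch illustrates this standard formulation (the thesis itself implements metric computation in a C++ framework); neighborhood size and data are assumptions:

import numpy as np
from scipy.spatial import cKDTree

def point_metrics(points, k=16):
    """Per-point linearity, planarity and verticality from local covariance."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    metrics = []
    for neighbors in points[idx]:
        cov = np.cov(neighbors.T)
        w, v = np.linalg.eigh(cov)       # eigenvalues ascending: l3 <= l2 <= l1
        l3, l2, l1 = w
        linearity = (l1 - l2) / l1
        planarity = (l2 - l3) / l1
        # Normal = eigenvector of the smallest eigenvalue; verticality = 1 - |n_z|.
        verticality = 1.0 - abs(v[:, 0][2])
        metrics.append((linearity, planarity, verticality))
    return np.array(metrics)

pts = np.random.default_rng(2).uniform(size=(1000, 3))
print(point_metrics(pts)[:3])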
In the first application case, 2D image classification is applied and evaluated for mobile mapping data focusing on road networks to derive road marking vector data. The second application case investigates how 3D point clouds can be merged with ground-penetrating radar data for a combined visualization and to automatically identify atypical areas in the data. For example, the approach detects pavement regions with developing potholes. The third application case explores the combination of a 3D environment based on 3D point clouds with panoramic imagery to improve visual representation and the detection of 3D objects such as traffic signs.
The presented methods were implemented and tested based on software frameworks for 3D point clouds and 3D visualization. In particular, modules for metric computation, classification procedures, and visualization techniques were integrated into a modular pipeline-based C++ research framework for geospatial data processing, extended by Python machine learning scripts. All visualization and analysis techniques scale to large real-world datasets such as road networks of entire cities or railroad networks.
The thesis shows that some use cases allow taking advantage of established computer vision methods to efficiently analyze images rendered from mobile mapping data. The two presented semantic classification methods working directly on 3D point clouds are use-case independent and show similar overall accuracy when compared to each other. While the geometry-based method requires less computation time, the machine-learning-based method supports arbitrary semantic classes but requires training the network with ground truth data. Both methods can be used in combination to gradually build this ground truth, with manual corrections via a respective annotation tool.
This thesis contributes results for IT system engineering of applications, systems, and services that require spatial digital twins of transport infrastructure such as road networks and railroad networks based on 3D point clouds as raw data. It demonstrates the feasibility of fully automated data flows that map captured 3D point clouds to semantically classified models. This provides a key component for seamlessly integrated spatial digital twins in IT solutions that require up-to-date, object-based, and semantically enriched information about the built environment.
Owing to their large surface-to-volume ratio, nanoparticles show interesting size-dependent properties that are not observed in the bulk solid. They are therefore of great scientific and technological interest, and the preparation of the smallest possible particles is highly desirable. This goal can be achieved by using microemulsions as template phases for nanoparticle synthesis. Microemulsions are thermodynamically stable, transparent and isotropic mixtures of water and oil stabilized by an emulsifier. They can form a multitude of different microstructures. Knowledge of the structure and dynamics underlying a microemulsion is therefore of paramount importance for the potential use of a given system as a template phase for nanoparticle preparation. In the present work, complex multicomponent systems based on a naturally occurring soybean lecithin mixture, a purified lecithin, and a sulfobetaine as emulsifiers were investigated by diffusion-weighted 1H NMR spectroscopy using pulsed field gradients (PFG), as a function of the addition of the polycation poly(diallyldimethylammonium chloride) (PDADMAC). The central objective of these investigations was the structural and dynamic characterization of the microemulsions with respect to their potential applicability as template phases for the preparation of the smallest possible nanoparticles. Concentration- and time-dependent NMR diffusion measurements proved to be an excellently suited and accurate method for studying the microstructure and dynamics of these systems. The observed closed water-in-oil (W/O) microstructure of the microemulsions clearly demonstrates their potential applicability in nanoparticle synthesis. The overall diffusion behaviour of the surfactant is determined by varying contributions from the displacement of entire aggregates, monomer diffusion in the medium, and medium-mediated surface diffusion; in some cases this resulted in anomalous diffusion characteristics. Hydrodynamic and direct interactions between the surfactant aggregates are present in all systems. The addition of PDADMAC to the microemulsions stabilizes the liquid interface of the surfactant aggregates through adsorption of the polycation onto the oppositely charged surfactant film and can potentially lead to nanoparticles with smaller dimensions and narrower size distributions.
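In PFG-NMR diffusion measurements of this kind, the self-diffusion coefficient is typically extracted from the Stejskal-Tanner decay of the echo signal, I/I0 = exp(-(gamma*g*delta)^2 * D * (Delta - delta/3)). A minimal fitting sketch with illustrative pulse parameters and synthetic data, not the experimental settings of this work:

import numpy as np
from scipy.optimize import curve_fit

GAMMA_H = 2.675e8            # 1H gyromagnetic ratio [rad s^-1 T^-1]
delta, Delta = 4e-3, 50e-3   # gradient pulse length and diffusion time [s] (assumed)

def stejskal_tanner(g, D):
    # Normalized echo attenuation as a function of gradient strength g [T/m].
    b = (GAMMA_H * g * delta) ** 2 * (Delta - delta / 3.0)
    return np.exp(-b * D)

g = np.linspace(0.0, 0.5, 16)   # gradient strengths [T/m]
I = stejskal_tanner(g, 5e-11) * (1 + 0.01 * np.random.default_rng(3).standard_normal(g.size))
(D_fit,), _ = curve_fit(stejskal_tanner, g, I, p0=[1e-10])
print(f"D = {D_fit:.2e} m^2/s")  # recovers ~5e-11 m^2/s
# Multi-fractional diffusion would appear as a multi-exponential decay,
# fitted with a sum of such terms.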
Quantified self, the proactive self-tracking of individuals, has developed from a niche application into a mass phenomenon in recent years. Users today have a wide range of technical aids at their disposal, for example smartphones, fitness trackers and health apps, which allow an almost seamless monitoring of different contextual factors of an individual's everyday life.
Consequently, this thesis addresses, among other things, the question of the extent to which this intensive, self-initiated engagement, particularly with health-related data that are widely regarded as objectified and therefore reliable, can increase the health literacy of such active individuals. In addition, it examines to what extent the new technologies are capable of deepening specific medical insights and, as a consequence, changing the resulting treatment processes.
While quantified self originated in the second healthcare market (the privately paid sector), this thesis investigates which structural, personnel-related and procedural points of contact will exist in the first healthcare market (the statutory, publicly financed sector) when a potential patient, in a more emancipated manner, wishes or demands to integrate his or her collected health data as comprehensively as possible into medical treatment.
On the one hand, current developments in the second healthcare market are examined, which are characterized by high dynamism and considerable opacity. On the other hand stands the first healthcare market, regarded as heavily regulated and little digitalized, with its long development cycles and the pronounced particular interests of its various stakeholders.
In this context, current developments of the underlying legal framework are examined, especially with regard to more patient-centred and digitalized norms, with the German Digital Healthcare Act (Digitale-Versorgung-Gesetz) playing a particularly important role.
The aim of this thesis is a deeper understanding of the interactions at the interface between the two healthcare markets with respect to the use of self-tracking technologies, in order to identify future business potential for existing service providers and new market entrants.
The central method is a Delphi study that, in an interprofessional approach, attempts to outline a picture of these currently still very young developments for the year 2030. The results are embedded in an examination of the general societal acceptance of the changes outlined.
Human activities modify nature worldwide via changes in the environment, biodiversity and the functioning of ecosystems, which in turn disrupt ecosystem services and feed back negatively on humans. A pressing challenge is thus to limit our impact on nature, and this requires detailed understanding of the interconnections between the environment, biodiversity and ecosystem functioning. These three components of ecosystems each include multiple dimensions, which interact with each other in different ways, but we lack a comprehensive picture of their interconnections and underlying mechanisms. Notably, diversity is often viewed as a single facet, namely species diversity, while many more facets exist at different levels of biological organisation (e.g. genetic, phenotypic, functional, multitrophic diversity), and multiple diversity facets together constitute the raw material for adaptation to environmental changes and shape ecosystem functioning. Consequently, investigating the multidimensionality of ecosystems, and in particular the links between multifaceted diversity, environmental changes and ecosystem functions, is crucial for ecological research, management and conservation. This thesis aims to explore several aspects of this question theoretically.
I investigate three broad topics in this thesis. First, I focus on how food webs with varying levels of functional diversity across three trophic levels buffer environmental changes, such as a sudden addition of nutrients or long-term changes (e.g. warming or eutrophication). Functional diversity generally enhanced ecological stability (i.e. the buffering capacity of the food web) by increasing trophic coupling. More precisely, two aspects of ecological stability (resistance and resilience) increased, even though a third aspect (the inverse of the time required for the system to reach its post-perturbation state) decreased with increasing functional diversity. Second, I explore how several diversity facets serve as raw material for different sources of adaptation and how these sources affect multiple ecosystem functions across two trophic levels. Considering several sources of adaptation enabled an interplay between ecological and evolutionary processes, which affected trophic coupling and thereby ecosystem functioning. Third, I reflect further on the multifaceted nature of diversity by developing an index K that quantifies the facet of functional diversity, which is itself multifaceted. K can provide a comprehensive picture of functional diversity and is a rather good predictor of ecosystem functioning. Finally, I synthesise the interdependent mechanisms (complementarity and selection effects, trophic coupling and adaptation) underlying the relationships between multifaceted diversity, ecosystem functioning and the environment, and discuss the generalisation of my findings across ecosystems as well as further perspectives towards an operational biodiversity-ecosystem functioning framework for research and conservation.
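For orientation, one standard way to quantify a single facet of functional diversity is Rao's quadratic entropy, Q = sum_ij d_ij * p_i * p_j, the expected trait distance between two randomly drawn individuals. The sketch below illustrates the idea with assumed trait and abundance values; it is not the multifaceted index K developed in the thesis:

import numpy as np
from scipy.spatial.distance import pdist, squareform

traits = np.array([[0.2, 1.0],    # species x trait matrix (assumed values)
                   [0.4, 0.8],
                   [0.9, 0.1]])
p = np.array([0.5, 0.3, 0.2])     # relative abundances, summing to 1

d = squareform(pdist(traits))     # pairwise functional distances
rao_q = p @ d @ p                 # expected trait distance of two random picks
print(f"Rao's Q = {rao_q:.3f}")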
The underlying motivation for the work carried out for this thesis was the growing need for more sustainable technologies. The aim was to synthesize a "palette" of functional nanomaterials using the established technique of hydrothermal carbonization (HTC). The incredible diversity of HTC was demonstrated together with small but steady advances in how HTC can be manipulated to tailor material properties for specific applications. Two main strategies were used to modify the materials obtained by HTC of glucose, a model precursor representing biomass. The first approach was the introduction of heteroatoms, or "doping", of the carbon framework. Sulfur was for the first time introduced as a dopant in hydrothermal carbon. The synthesis of sulfur- and sulfur/nitrogen-doped microspheres was presented, whereby it was shown that the binding state of sulfur could be influenced by varying the type of sulfur source. Pyrolysis may additionally be used to tune the heteroatom binding states, which move to more stable motifs with increasing pyrolysis temperature. Importantly, the presence of aromatic binding states in the as-synthesized hydrothermal carbon allows for higher heteroatom retention after pyrolysis and hence a more efficient use of dopant sources. In this regard, HTC may be considered an "intermediate" step in the formation of conductive heteroatom-doped carbon. To assess the novel hydrothermal carbons in terms of their potential for electrochemical applications, materials with defined nano-architectures and high surface areas were synthesized via templated as well as template-free routes. Sulfur- and/or nitrogen-doped carbon hollow spheres (CHS) were synthesized using a polystyrene hard-templating approach, and doped carbon aerogels (CA) were synthesized using either the albumin-directed or the borax-mediated hydrothermal carbonization of glucose. Electrochemical testing showed that S/N dual-doped CHS and aerogels derived via the albumin approach exhibited superior catalytic performance compared to solely nitrogen- or sulfur-doped counterparts in the oxygen reduction reaction (ORR) relevant to fuel cells. Using the borax-mediated aerogel formation, nitrogen content and surface area could be tuned, and a carbon aerogel was engineered to maximize electrochemical performance. The obtained sample exhibited drastically improved current densities compared to a platinum catalyst (but a lower onset potential), as well as excellent long-term stability. In the second approach, HTC was carried out at elevated temperature (550 °C) and pressure (50 bar), corresponding to the superheated vapor (SHV) regime (htHTC). It was demonstrated that the carbon materials obtained via htHTC are distinct from those obtained via low-temperature HTC (ltHTC) and subsequent pyrolysis at 550 °C. No difference in htHTC-derived material properties could be observed between pentoses and hexoses. The material obtained from a polysaccharide exhibited a slightly lower degree of carbonization but was otherwise similar to the monosaccharide-derived samples. It was shown that, in addition to thermally induced carbonization at 550 °C, the SHV environment exerts a catalytic effect on the carbonization process. The resulting materials are chemically inert (i.e. they contain a negligible amount of reactive functional groups) and possess low surface area and electronic conductivity, which distinguishes them from carbon obtained by pyrolysis.
Compared to the materials presented in the previous chapters on chemical modifications of hydrothermal carbon, this makes them ill-suited candidates for electronic applications like lithium ion batteries or electrocatalysts. However, htHTC derived materials could be interesting for applications that require chemical inertness but do not require specific electronic properties. The final section of this thesis therefore revisited the latex hard templating approach to synthesize carbon hollow spheres using htHTC. However, by using htHTC it was possible to carry out template removal in situ because the second heating step at 550 °C was above the polystyrene latex decomposition temperature. Preliminary tests showed that the CHS could be dispersed in an aqueous polystyrene latex without monomer penetrating into the hollow sphere voids. This leaves the stagnant air inside the CHS intact which in turn is promising for their application in heat and sound insulating coatings. Overall the work carried out in this thesis represents a noteworthy development in demonstrating the great potential of sustainable carbon materials.
This thesis deals with the financing models of public-private partnership (PPP) projects and their refinancing by the capital providers.
Two central questions are addressed. First: do PPP projects lead to public debt, and should they accordingly be included in the calculation of the convergence criteria and of the debt and net-borrowing ratios? The working hypothesis to be tested assumes that PPP projects do create public debt. Second: an important role of PPP in infrastructure financing is assumed, and with a view to increasing efficiency, the fit and consistency of budgetary regulations with the regulatory requirements for the capital providers of PPP projects are analysed. This interface, together with the state guarantees required to obtain favourable ("municipal-like") financing conditions for PPP, virtually calls for a regulatory comparison of approaches and projects in the field of PPP, also in cash-flow terms.
With a certain macroeconomic focus on PPP, the thesis leads deep into the analysis of the capital market and of banking regulation. Covered refinancing instruments for PPP that are secured by receivables (asset-backed securities) are compared with those secured, for example, by claims against the public sector (covered bonds); the latter can also be secured by mortgages. This is where the author later develops his sketch of an "infrastructure covered bond" for financing necessary infrastructure measures, not only in Germany, a security that would be issued exclusively to finance infrastructure on the basis of a corresponding, newly created cover register.
Teachers' content knowledge is of great importance for the development of pedagogical content expertise. However, it is still largely unclear which characteristics university courses should have in order to impart profession-specific content knowledge to student teachers.
Within the PSI-Potsdam project, the cross-disciplinary model of extended content knowledge for the school context was developed on a theoretical basis. As an approach to improving biology teacher education, this model served as the conceptual basis for an additional course. The course offers learning opportunities to apply content knowledge about cell biology acquired at university to school contexts, for example through the deconstruction and subsequent reconstruction of school learning texts. The effect of the seminar was studied over several cycles within the research format of didactic design research. One of the central research questions is: how can a learning opportunity for biology student teachers be designed to foster extended content knowledge for the school context in the cell-biology topic area "structure and function of the biomembrane"?
Cross-case analyses (n = 29) in the empirical part show which attitudes towards the teacher-education programme exist in the sample. An important result is that the students' subject interest differs strikingly between content taught at school and content taught at university, with markedly higher interest in school knowledge. The students frequently judge the professional relevance of subject content by its relation to school knowledge.
In individual case analyses (n = 6), learning pathways are used to show how subject-matter concepts developed over several design experiments. The description focuses in particular on key moments and hurdles in the learning process. Based on these results, the iterations carried out in the individual cycles are described, which are also presented along the iterative development of the design principles.
It could be shown that the key moments emerge very individually, depending on the subjectively focused content. Mostly, however, they occur in connection with the linking of different subject-matter concepts or through the cooperative unpacking of concepts. Subject-matter hurdles, by contrast, could be identified across cases in the form of scientifically inappropriate conceptions. These include the conception of the biomembrane as a wall, which goes along with the notions of a protective function and a shape-giving function of the biomembrane.
It is further examined how the extended content knowledge for the school context was applied in working on the learning tasks. It became apparent that particular learning opportunities are suited to fostering particular facets of the extended content knowledge.
Overall, the model of extended content knowledge for the school context appears highly suitable for designing learning opportunities, or design principles for them, on the basis of its facets and their descriptions. For the teaching-learning arrangement investigated, minor adaptations of the model proved useful. With respect to methodology, conclusions could be drawn for the application of didactic design research to additional subject-matter courses of this kind.
To improve the professional relevance of the subject-matter components of teacher education, the further integration of extended content knowledge for the school context into the subject-matter parts of the degree programme is highly desirable.
This dissertation uses a common grammatical phenomenon, light verb constructions (LVCs) in English and German, to investigate how syntax-semantics mapping defaults influence the relationships between language processing, representation and conceptualization. LVCs are analyzed as a phenomenon of mismatch in the argument structure. The processing implications of this mismatch are experimentally investigated using ERPs and a dual task; data from these experiments point to an increase in working memory load. Representational questions are investigated using structural priming. Data from this study suggest that while the syntax of LVCs does not differ from that of other structures, their semantics and mapping are represented differently. This hypothesis is tested with a new categorization paradigm, which reveals that the conceptual structures that LVCs evoke differ in interesting, and predictable, ways from those of non-mismatching structures.