This work presents the first systematic investigation of ethyl vinylsulfonate (1a), phenyl vinyl sulfone (1b), and N-benzyl-N-methylethenesulfonamide (1c) in the Fujiwara-Moritani reaction (alternatively referred to as the dehydrogenative Heck reaction, DHR). In this transition-metal-catalyzed reaction, a new C–C bond is formed through the twofold activation of C–H bonds. This enables an atom-economical construction of molecules, since no by-products in the form of salts are generated. Acetanilides (2) were used as the aromatic reactant so that the catalyst-directing acetamide group (CDG) ensures regiospecific coupling. The Pd-catalyzed DHR was extensively optimized; subsequently, nine differently substituted acetanilides 2 could be functionalized with 1a and seven with 1b. Since 1c did not react under these conditions, a Ru-catalyzed DHR protocol was adopted instead. With this method, 1c could be coupled with acetanilides, and the scope of 2 could be extended to substrates bearing deactivating substituents.
The sulfalkenylated acetanilides were then investigated in follow-up reactions. A reaction sequence consisting of deacetylation, diazotization, and coupling was used to convert the acetamide group into a leaving group, which was then coupled in a Matsuda-Heck reaction. Several 1,2-dialkenylbenzenes were obtained in this way, exploiting the CDG a second time. Besides its conversion into a leaving group, the CDG could also be integrated into the synthesis of various heterocycles. To this end, a 1,3-cycloaddition of deprotonated tosylmethyl isocyanide onto the electron-poor sulfalkenyl group first furnished pyrroles. Subsequent cyclocondensation coupled the pyrrole moiety with the CDG, yielding quinolines. These syntheses gave access to sulfur analogues of the natural product marinoquinoline A.
A further transition-metal-catalyzed C–H activation reaction, the Matsuda-Heck reaction, was used to arylate 1b with variously substituted diazonium salts, affording numerous styrenyl sulfones. A successful application of the vinylsulfonyl compounds in cross metathesis could not be achieved within this work. Therefore, various dialkenylated sulfonamides were synthesized, varying the chain length of the alkenyl group between two and three carbons at sulfur and between three and four carbons at nitrogen. The dialkenylated sulfonamides were then subjected to the C–H activation methods studied before.
N-Allyl-N-phenylethenesulfonamide (3) was successfully functionalized in both the DHR and the Heck reaction. A method-specific coupling occurred depending on the electron density of the respective alkenyl group: the DHR led to selective arylation of the vinyl group, whereas the Heck reaction arylated the allyl group. Mixed products were not obtained. The other diolefins gave complex product mixtures. Furthermore, the diolefins were investigated in ring-closing metathesis, and the corresponding sultams were obtained in very good yields. Attempts to use the sultams in C–H activation were unsuccessful; it is assumed that the existing reaction conditions would have to be optimized for these disubstituted sulfonamides.
Finally, various enantiomerically pure olefins were prepared starting from levoglucosenone. Levoglucosenone was first reacted with allyl and 3-butenyl Grignard reagents; the corresponding products were obtained in moderate yields. A further route started with the reduction of levoglucosenone to levoglucosenol. This alcohol was successfully etherified with allyl bromide. In addition to the studies on ether synthesis, levoglucosenol was esterified with various sulfonyl chlorides to give the corresponding sulfonate esters. These olefins were investigated in a domino metathesis reaction: starting from allyl levoglucosenyl ether, a dihydrofuran was prepared.
Despite their high importance for innovation policy, non-university research institutions (AUF) have rarely been the subject of empirical studies. None of the existing works focuses on the collaboration of scientists in research teams, even though scientific collaboration remains a largely unexplored field. This is surprising, since innovative and complex tasks such as those found in research require both the creative potential of individuals and well-functioning cooperation among them. Collaboration between scientists at AUF takes place in a competitive environment. On the one hand, the institutions compete with each other at the organizational level for research funding and scientific personnel. On the other hand, the competitive acquisition of third-party funds is essential for individual scientists in order to produce the achievements, measured in high-ranking publications and funding quotas, that their careers demand. A growing share of third-party funding also affects personnel policy and the number of fixed-term employment contracts. At the same time, research funding is frequently tied to collaborations between scientists, and studies show that publications and research results are predominantly produced by multiple authors. This tension between collaboration and competition is intensified by the lack of opportunities for early-career researchers to remain in academia. Even though the federal government is responding to these challenges, each individual must find their own path between collaboration and competition.
The aim of this thesis is to answer the following research questions:
1. How can research teams in the natural sciences at AUF be characterized?
2. How do individual researchers act within the tension between cooperation and competition?
3. Which potentials and obstacles to the successful work of research teams at AUF can be identified at the individual, team, and environmental levels?
To answer these questions, an empirical mixed-methods study was conducted, consisting of a Germany-wide online survey of 574 natural scientists at AUF and qualitative interviews with 122 members of 20 natural-science research teams at AUF.
The results show that the teams are better described as working groups: especially in basic research there is no shared goal but rather a common thematic framework within which the researchers pursue their individual objectives. Work in the team is mostly described as positive and cooperative and is characterized primarily by mutual support with problems rather than by a joint process of scientific discovery; the latter takes place in small subgroups within the working group and, above all, in close coordination with the team leader (TL). Organizational conditions such as fixed-term contracts and the career bottleneck are identified as the main factors intensifying competition.
The TL occupies the central role in the team, bears the scientific, financial, and personnel responsibility, and must meet the demands of the organization. Doctoral candidates concentrate almost exclusively on their qualification work. For postdocs a tension is apparent, since they pursue their own projects and goals alongside the requirements of the TL. The TL's gatekeeper function is reinforced by their role in passing on career-relevant information within the team, e.g. about upcoming conferences. The TL holds the important contacts, ensures the team's networking, and maintains its network. Early-career researchers rely heavily on the TL's support both for their tasks and for career-relevant factors. Non-scientific staff deserve greater consideration, both in their function within the teams and in the organization as a whole: they are the central contact persons for the scientific personnel and ensure continuity in the storage and transfer of knowledge. The organizations, in turn, need to create supportive framework, working, and task conditions for the TLs and to support early-career researchers in taking on responsibility for scientific and career-relevant tasks at an early stage. This requires improved personnel-development concepts and offerings. Furthermore, opportunities for cooperation within the institution and between groups should be created, e.g. through open spaces and networking opportunities, and innovative working environments should be promoted in order to establish new forms of an innovation-friendly scientific culture.
The present study deals with how actors plan and carry out their learning process, with a focus on the use of learning strategies. The question is which strategies professional learners employ to achieve the command of their lines that their profession requires, not how to optimize learning success.
The literature review made clear that current studies on adult learning are situated primarily in job-specific contexts and concern the acquisition of competencies, problem-solving strategies, and social participation. Actors' learning, by contrast, is not driven by an intention to change behavior or to gain specific knowledge.
For actors, performing is part of their professional culture. Given that precise factual knowledge is crucial as the basis for competent, convincing presentation, the results of the study are also relevant for other professions that must appear in public, such as priests, lawyers, and teachers, as well as for pupils and students who have to give talks or present their work.
For the empirical study, twelve renowned actors were interviewed using problem-centered interviews, followed by a qualitative content analysis.
The analysis of the data demonstrates a clear connection between the body and speech practice, and it shows how important movement is for the learning process. Results were obtained concerning cognitive, metacognitive, and resource-oriented strategies, with the learning environment and learning with colleagues proving decisive.
On the Influence of Adaptivity on the Perception of Complexity in Human-Technology Interaction
(2021)
We live in a society shaped by a constant desire for innovation and progress. Among the consequences of this desire are the ever-advancing digitalization and informational networking of all areas of life, leading to increasingly complex socio-technical systems. These systems aim, among other things, to support people, to improve their living conditions or quality of life, or to extend human capabilities. Yet new complex technical systems do not only have positive social and societal effects. There are often undesired side effects that become visible only in use, and both designers and users of complex networked technologies often feel disoriented. The consequences can range from declining acceptance to a complete loss of trust in networked software systems. As complex applications, and with them increasingly complex human-technology interactions, continue to gain relevance, it is all the more important to regain orientation. To do so, we must first identify the elements that contribute to complexity in the interaction with networked socio-technical systems and thus create a need for orientation.
This thesis aims to contribute to a structured reflection on the complexity of networked socio-technical systems throughout the entire design process. To this end, a definition of complexity and of complex systems is first developed that goes beyond the computer-science understanding of complexity (i.e., the complicatedness of problems, algorithms, or data), focusing instead on socio-technical interaction with and within complex networked systems. Based on this definition, an analysis tool is then developed that makes the complexity of interactions with socio-technical systems visible and describable.
One area in which networked socio-technical systems are increasingly being adopted is that of digital educational technologies. Adaptive educational technologies in particular have been credited with great potential over the past decades. Two adaptive teaching and training systems are therefore examined as examples with the analysis tool developed in this thesis, with special attention to the influence of adaptivity on the complexity of human-technology interaction situations. Empirical studies examine the experiences of designers and users of these adaptive systems in order to determine the decisive criteria for complexity. In this way, recurring questions of orientation in the development of adaptive educational technologies can be uncovered, and interaction situations perceived as complex can be identified. These situations show where, owing to the complexity of the system, users' established everyday routines no longer suffice to fully grasp the consequences of interacting with the system. This knowledge can help both designers and users to deal better with the inherent complexity of modern educational technologies in the future.
The present work deals with the variation in the linearisation of German infinitival complements from a diachronic perspective. Based on the observation that in present-day German the position of infinitival complements is restricted by properties of the matrix verb (Haider, 2010; Wurmbrand, 2001), whereas it appears much more liberal in older stages of German (Demske, 2008; Maché and Abraham, 2011; Demske, 2015), this dissertation investigates the emergence of those restrictions and the factors that have led to a reduced, yet still existing, variability. The study contrasts infinitival complements of two types of matrix verbs, namely raising and control verbs. In present-day German, these show different syntactic behaviour and opposite preferences as far as the position of the infinitive is concerned: while infinitival complements of raising verbs form a single clausal domain with the matrix verb and occur obligatorily intraposed, infinitival complements of control verbs can form clausal constituents and occur predominantly extraposed. This correlation is not attested in older stages of German, at least not until Early New High German.
Drawing on diachronic corpus data, the present work provides a description of the changes in the linearisation of infinitival complements from Early New High German to present-day German which aims at finding out when the correlation between infinitive type and word order emerged and further examines their possible causes. The study shows that word order change in German infinitival complements is not a case of syntactic change in the narrow sense, but that the diachronic variation results from the interaction of different language-internal and language-external factors and that it reflects, on the one hand, the influence of language modality on the emerging standard language and, on the other hand, a process of specialisation.
In the GDR, geography was one of the school subjects most heavily loaded with political content in the spirit of Marxism-Leninism. Another aspect is the socialist educational goals that ranked high in GDR schooling, centered on raising children to become socialist personalities. This thesis seeks to take a clear look at this situation in order to learn what was demanded of teachers and how it was to be implemented in school.
With the fall of the Berlin Wall, a restructuring of the education system in the East naturally became inevitable. Here the thesis offers insights into how geography teachers supported and implemented this transformation. Which traits from their socialization in the GDR persisted in how they designed their teaching and oriented it toward the new educational goals?
To this end, geography teachers were interviewed who had taught both in the GDR and in unified Germany. The questions primarily concerned the way they taught before, during, and after the Wende and the resulting transformation of the system.
The interviews lead to the conclusion that geography teaching in the GDR did not differ greatly in subject matter from that in the FRG, so no extensive change to its content was required. Even in GDR times, teachers apparently expanded ideology-free physical-geography topics on their own authority in order to reduce the subject's ideological load. Most of them therefore found it relatively easy to adapt their teaching to the West German system. The humanistic values education of the GDR system was likewise continued, minus its socialist aspect, as there were many parallels to the West German system here as well. The East German teachers clearly characterize the subject as a natural science, even though it is assigned to the social sciences in schools and had a strong economic-geography orientation in the GDR as well.
With the end of the GDR, teachers were released from the responsibility of raising socialist personalities, and the interview excerpts presented in this thesis leave no doubt that most of the interviewees did not regret this, while still orienting themselves to this day by the values of the GDR era.
What is HipHop?
(2021)
This dissertation is an investigative research study of the dynamically changing phenomenon of HipHop. The author explains the enduring attractiveness of the cultural phenomenon and seeks to account more precisely for HipHop's constant reproducibility. He therefore begins with a historical discourse analysis of HipHop culture, analyzing its forms, protagonists, and discourses in order to understand it better. By working out HipHop's genuine property of multiple codability, common explanatory patterns from academia and the media are relativized and criticized. In his study the author combines literature from cultural studies and educational science with various current and historical accounts and images. Above all, image-based self-stagings of HipHop artists and self-testimonies from narrative interviews he conducted with various HipHop artists in Germany are analyzed. Alongside the narrative interviews, image interpretation following Bohnsack serves as the main source for the thesis of multiple codability: two images of the HipHop artists Lady Bitch Ray and Kollegah are interpreted following Bohnsack (2014), showing how HipHop is staged and produced not only lyrically and sonically but also visually. From this it is concluded that HipHop makes it possible to present and convey contrary viewpoints while simultaneously employing typical cultural practices such as boasting. HipHop's constant openness becomes evident in practices such as sampling and the battle, and the author explains that these techniques produce the generative property of multiple codability.
He thus advocates a kind of modular-kit theory, according to which anyone can, in principle, help themselves from the HipHop kit according to preference, interest, and affinity. The variety of opinions on HipHop that the author obtained by coding the narrative interviews illustrates this thesis and makes clear that HipHop is more than a fashion. Through the openness it carries within itself, HipHop has the inherent ability to constantly reinvent itself and thereby grow in popularity. The present work thus extends the ever-growing field of HipHop studies and sets important accents for further research and for making HipHop better understood.
Virtualizing physical space
(2021)
The true cost of virtual reality is not the hardware but the physical space it requires, as a one-to-one mapping of physical space to virtual space allows for the most immersive way of navigating in virtual reality. Such "real-walking" requires the physical space to have the same size and shape as the virtual world represented. This generally prevents real-walking applications from running in any space they were not designed for.
To reduce virtual reality’s demand for physical space, creators of such applications let users navigate virtual space by means of a treadmill, altered mappings of physical to virtual space, hand-held controllers, or gesture-based techniques. While all of these solutions succeed at reducing virtual reality’s demand for physical space, none of them reach the same level of immersion that real-walking provides.
Our approach is to virtualize physical space: instead of accessing physical space directly, we allow applications to express their need for space in an abstract way, which our software systems then map to the physical space available. We allow real-walking applications to run in spaces of different size, different shape, and in spaces containing different physical objects. We also allow users immersed in different virtual environments to share the same space.
Our systems achieve this by using a tracking-volume-independent representation of real-walking experiences: a graph structure that expresses the spatial and logical relationships between virtual locations, the virtual elements contained within those locations, and user interactions with those elements. When an application runs in a specific physical space, this graph representation is parsed by a constraint solver to define a custom mapping between the elements of the virtual reality application and the physical space. To re-use space, our system splits virtual scenes and overlaps virtual geometry. The system derives this split by hierarchically clustering the virtual objects, which form the nodes of a bipartite directed graph representing the logical ordering of events in the experience. We let applications express their demands for physical space and use pre-emptive scheduling between applications to have them share space. We present several application examples enabled by our system. They all enable real-walking, despite being mapped to physical spaces of different size and shape, containing different physical objects or other users.
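To make the mapping idea concrete, here is a minimal, hypothetical sketch of the approach described above, not the authors' actual system or API: virtual locations and their logical connections form a graph, and a tiny backtracking constraint solver assigns each location a physical zone that is large enough, keeping logically connected locations on neighbouring zones while re-using zones for unconnected ones. All names (VirtualLocation, PhysicalZone, map_experience) are illustrative.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VirtualLocation:
    name: str
    min_area: float  # square metres the virtual scene needs

@dataclass(frozen=True)
class PhysicalZone:
    name: str
    area: float      # square metres actually available

def map_experience(locations, zones, adjacency):
    """Assign each virtual location a physical zone such that (a) the zone
    is large enough and (b) locations connected in the experience graph land
    on neighbouring zones (here simply: adjacent by index). Zones may be
    re-used by unconnected locations, modelling scene overlap."""
    assignment = {}

    def ok(loc, zone_idx):
        if zones[zone_idx].area < loc.min_area:
            return False
        for other, idx in assignment.items():
            connected = ((loc.name, other.name) in adjacency
                         or (other.name, loc.name) in adjacency)
            if connected and abs(idx - zone_idx) > 1:
                return False
        return True

    def solve(i):
        if i == len(locations):
            return True
        for z in range(len(zones)):
            if ok(locations[i], z):
                assignment[locations[i]] = z
                if solve(i + 1):
                    return True
                del assignment[locations[i]]
        return False

    if not solve(0):
        return None  # experience does not fit this physical space
    return {loc.name: zones[z].name for loc, z in assignment.items()}
```

For example, two small connected scenes can share one 5 m² zone while a large unconnected scene is placed in a 10 m² zone; if no valid placement exists, the solver reports failure rather than producing a broken mapping.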
We see substantial real-world impact in our systems. Today's commercial virtual reality applications are generally designed to be navigated using less immersive solutions, as this allows them to be operated on any tracking volume. While this is a commercial necessity for developers, it misses out on the higher immersion offered by real-walking. We let developers overcome this hurdle by allowing experiences to bring real-walking to any tracking volume, thus potentially bringing real-walking to consumers.
Rheology describes the flow of matter under the influence of stress and, applied to solids, investigates how they deform when subjected to stress. As the deformation of the Earth's outer layers, the lithosphere and the crust, is a major focus of rheological studies, rheology in the geosciences describes how strain evolves in rocks of variable composition and temperature under tectonic stresses. It is here that deformation processes shape ocean basins and mountain belts, which ultimately result from the complex interplay between lithospheric plate motion and the susceptibility of rocks to plate-tectonic forces. A rigorous study of the strength of the lithosphere and of deformation phenomena thus requires in-depth analysis of the rheological characteristics of the materials involved and of the temporal framework of deformation processes.
This dissertation aims at analyzing the influence of the physical configuration of the lithosphere on the present-day thermal field and the overall rheological characteristics of the lithosphere to better understand variable expressions in the formation of passive continental margins and the behavior of strike-slip fault zones. The main methodological approach chosen is to estimate the present-day thermal field and the strength of the lithosphere by 3-D numerical modeling. The distribution of rock properties is provided by 3-D structural models, which are used as the basis for the thermal and rheological modeling. The structural models are based on geophysical and geological data integration, additionally constrained by 3-D density modeling. More specifically, to decipher the thermal and rheological characteristics of the lithosphere in both oceanic and continental domains, sedimentary basins in the Sea of Marmara (continental transform setting), the SW African passive margin (old oceanic crust), and the Norwegian passive margin (young oceanic crust) were selected for this study.
The Sea of Marmara, in northwestern Turkey, is located where the dextral North Anatolian Fault zone (NAFZ) accommodates the westward escape of the Anatolian Plate toward the Aegean. Geophysical observations indicate that the crust is heterogeneous beneath the Marmara basin, but a detailed characterization of the lateral crustal heterogeneities is presented for the first time in this study. Here, I use different gravity datasets and the general non-uniqueness in potential field modeling, to propose three possible end-member scenarios of crustal configuration. The models suggest that pronounced gravitational anomalies in the basin originate from significant density heterogeneities within the crust. The rheological modeling reveals that associated variations in lithospheric strength control the mechanical segmentation of the NAFZ. Importantly, a strong crust that is mechanically coupled to the upper mantle spatially correlates with aseismic patches where the fault bends and changes its strike in response to the presence of high-density lower crustal bodies. Between the bends, mechanically weaker crustal domains that are decoupled from the mantle are characterized by creep.
For the passive margins of SW Africa and Norway, two previously published 3-D conductive and lithospheric-scale thermal models were analyzed. These 3-D models differentiate various sedimentary, crustal, and mantle units and integrate different geophysical data, such as seismic observations and the gravity field. Here, the rheological modeling suggests that the present-day lithospheric strength across the oceanic domain is ultimately affected by the age and past thermal and tectonic processes as well as the depth of the thermal lithosphere-asthenosphere boundary, while the configuration of the crystalline crust dominantly controls the rheological behavior of the lithosphere beneath the continental domains of both passive margins.
The thermal and rheological models show that the variations in lithospheric strength are fundamentally influenced by the temperature distribution within the lithosphere. Moreover, since the composition of the lithosphere significantly influences the present-day thermal field, it also affects the rheological characteristics of the lithosphere. Overall, my studies add to our understanding of regional tectonic deformation processes and the long-term behavior of sedimentary basins; they confirm other analyses which have pointed out that crustal heterogeneities in the continents result in diverse lithospheric thermal characteristics, which in turn result in greater complexity and variation of rheological behavior compared to oceanic domains with their thinner, more homogeneous crust.
This dissertation addresses three main topics. The results section focuses on the chemical synthesis of so-called (1,7)-naphthalenophanes, which belong to the cyclophane class of compounds. While numerous synthetic methods pursue strategies for constructing ring systems (such as naphthalenophanes) that are part of an already existing aromatic structure of the starting compound, only a few approaches use reactions that establish the ring closure to the desired product in the course of the synthesis itself. A benzannulation that has received particular attention in our research group is the dehydro-DIELS-ALDER reaction (DDA reaction). Within this work, it was shown that twelve selected (1,7)-naphthalenophanes, some of them ring-strained and macrocyclic, can be made accessible via a photochemical variant of the DDA reaction (PDDA reaction). Attempts to prepare (1,7)-naphthalenophanes by the thermal route (TDDA reaction) failed. The exceptional reactivity of the photoreactants could be explained, with the aid of quantum chemical calculations, by a folded ground-state geometry. In addition, ring strains and structural strain indicators of the relevant photoproducts were determined, and trends as a function of linker length in the NMR spectra of the target compounds were identified and discussed. Furthermore, varying the chromophore of the photoreactants (acyl, carboxylic acid, and carboxylic ester) gave comparable photokinetics and photoreactivity upon irradiation in dichloromethane. The second section of this dissertation is devoted to the design and development of two photoreactors for UV applications in continuous flow, since photochemical transformations are known to be limited in their scalability.
With the first prototype, product quantities of up to n = 188 mmol were achieved for a selected test case by efficient parallel operation of up to three UV lamps (λ = 254, 310, and 355 nm). In the structurally greatly simplified second photoreactor, all quartz components were replaced by cheaper PLEXIGLAS®, resulting in identical space-time yields for the previously chosen synthesis example. Continuous-flow UV photochemistry thus offers advantages over traditional irradiation in an immersion-well reactor; with respect to reaction time, product yields, and solvent consumption, it is synthetically far superior. In the last section of the work, these insights were used to prepare biomedically and pharmacologically promising 1-arylnaphthalene lignans via an intramolecular PDDA reaction (IMPDDA reaction) as the key step. For this purpose, three concepts were developed and realized in the total synthesis of three selected target structures based on the 1-arylnaphthalene core.
The Earth's electron radiation belts exhibit a two-zone structure, with the outer belt being highly dynamic due to the constant competition between a number of physical processes, including acceleration, loss, and transport. The flux of electrons in the outer belt can vary over several orders of magnitude, reaching levels that may disrupt satellite operations. Therefore, understanding the mechanisms that drive these variations is of high interest to the scientific community.
In particular, the important role played by loss mechanisms in controlling relativistic electron dynamics has become increasingly clear in recent years. It is now widely accepted that radiation belt electrons can be lost either by precipitation into the atmosphere or by transport across the magnetopause, called magnetopause shadowing. Precipitation of electrons occurs due to pitch-angle scattering by resonant interaction with various types of waves, including whistler-mode chorus, plasmaspheric hiss, and electromagnetic ion cyclotron (EMIC) waves. In addition, the compression of the magnetopause due to increases in solar wind dynamic pressure can substantially deplete electrons at high L shells, where they find themselves on open drift paths, whereas electrons at low L shells can be lost through outward radial diffusion. Nevertheless, the role played by each physical process during electron flux dropouts remains a fundamental puzzle.
Differentiation between these processes and quantification of their relative contributions to the evolution of radiation belt electrons require high-resolution profiles of phase space density (PSD). However, such PSD profiles are difficult to obtain, since spacecraft observations are restricted to a single measurement point in space and time, which is further compounded by instrument inaccuracies. Data assimilation techniques aim to blend incomplete and inaccurate spaceborne data with physics-based models in an optimal way. In the Earth's radiation belts, data assimilation is used to reconstruct the entire radial profile of electron PSD, and it has become an increasingly important tool for validating our current understanding of radiation belt dynamics, identifying new physical processes, and predicting the hazardous near-Earth radiation environment.
In this study, sparse measurements from Van Allen Probes A and B and Geostationary Operational Environmental Satellites (GOES) 13 and 15 are assimilated into the three-dimensional Versatile Electron Radiation Belt (VERB-3D) diffusion model, by means of a split-operator Kalman filter over a four-year period from 01 October 2012 to 01 October 2016. In comparison to previous works, the 3D model accounts for more physical processes, namely mixed pitch angle-energy diffusion, scattering by EMIC waves, and magnetopause shadowing. It is shown how data assimilation, by means of the innovation vector (the residual between observations and model forecast), can be used to account for missing physics in the model. This method is used to identify the radial distances from the Earth and the geomagnetic conditions where the model is inconsistent with the measured PSD for different values of the adiabatic invariants mu and K. As a result, the Kalman filter adjusts the predictions in order to match the observations, and this is interpreted as evidence of where and when additional source or loss processes are active.
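The role of the innovation vector in this scheme can be illustrated with a minimal, generic Kalman analysis step. This is a toy one-observation sketch, not the VERB-3D split-operator implementation; the state values, covariances, and observation operator below are all hypothetical:

```python
import numpy as np

def kalman_update(x_f, P_f, y, H, R):
    """One Kalman analysis step.

    x_f : forecast state (n,)
    P_f : forecast error covariance (n, n)
    y   : observations (m,)
    H   : observation operator (m, n)
    R   : observation error covariance (m, m)
    """
    d = y - H @ x_f                    # innovation vector: data minus forecast
    S = H @ P_f @ H.T + R              # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_a = x_f + K @ d                  # analysis state
    P_a = (np.eye(len(x_f)) - K @ H) @ P_f
    return x_a, P_a, d

# Toy example: a 3-point radial PSD profile, with one satellite
# sampling only the middle point.
x_f = np.array([1.0, 2.0, 3.0])
P_f = 0.5 * np.eye(3)
H = np.array([[0.0, 1.0, 0.0]])
R = np.array([[0.1]])
y = np.array([2.6])                    # observed PSD exceeds the forecast

x_a, P_a, d = kalman_update(x_f, P_f, y, H, R)
# A persistently positive innovation d at some L shell would hint at a
# source process missing from the model, as described in the text.
```

The adjustment is confined to the observed component and weighted by the relative uncertainties of model and data, which is why the sign and magnitude of `d` can be read as evidence of missing physics.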
Furthermore, two distinct loss mechanisms responsible for the rapid dropouts of radiation belt electrons are investigated: EMIC wave-induced scattering and magnetopause shadowing. The innovation vector is inspected for values of the invariant mu ranging from 300 to 3000 MeV/G, and a statistical analysis is performed to quantitatively assess the effect of both processes as a function of various geomagnetic indices, solar wind parameters, and radial distance from the Earth. The results of this work agree with previous studies that demonstrated the energy dependence of these two mechanisms. EMIC wave scattering dominates loss at lower L shells, where it may amount to 10%/hr to 30%/hr of the maximum value of PSD over all L shells for fixed first and second adiabatic invariants. Magnetopause shadowing, on the other hand, is found to deplete electrons across all energies, mostly at higher L shells, resulting in losses of 50%/hr to 70%/hr of the maximum PSD. During times of enhanced geomagnetic activity, however, both processes can operate beyond these locations and encompass the entire outer radiation belt.
The results of this study are twofold. First, it demonstrates that the 3D data-assimilative code provides a comprehensive picture of the radiation belts and is an important step toward performing reanalysis using observations from current and future missions. Second, it provides a better understanding of, and critical clues to, the dominant loss mechanisms responsible for the rapid dropouts of electrons at different locations across the outer radiation belt.
In a world fighting dramatic global warming caused by human activities, research on the development of renewable energies plays a crucial role. Solar energy is one of the most important clean energy sources, and its role in meeting the global energy demand is set to increase. In this context, a particular class of materials has captured the attention of the scientific community for its attractive properties: halide perovskites. Devices with perovskite as the light absorber have developed impressively within the last decade, nowadays reaching efficiencies comparable to those of mature photovoltaic technologies such as silicon solar cells. Yet several roadblocks remain before widespread commercialization of such devices becomes possible. One of the critical points lies at the interfaces: perovskite solar cells (PSCs) are made of several layers with different chemical and physical features, and for the device to function properly, these properties have to be well matched.
This dissertation deals with some of the challenges related to interfaces in PSCs, focusing on the interface between the perovskite material itself and the subsequent charge transport layer. In particular, molecular assemblies with specific properties are deposited on the perovskite surface to functionalize it. The functionalization results in adjusted energy level alignment, reduced interfacial losses, and improved stability.
First, a strategy to tune the perovskite's energy levels is introduced: self-assembled monolayers of dipolar molecules are used to functionalize the surface, simultaneously shifting the vacuum level position and saturating the dangling bonds at the surface. A shift in the vacuum level corresponds to an equal change in work function, ionization energy, and electron affinity. The direction of the shift depends on the direction of the collective interfacial dipole, and its magnitude can be tailored by controlling the deposition parameters, such as the concentration of the solution used for the deposition. The shift for different molecules is characterized by several non-invasive techniques, in particular Kelvin probe measurements. Overall, it is shown that the perovskite energy levels can be shifted in both directions by several hundred meV. Moreover, interesting insights into the deposition dynamics of the molecules are revealed.
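The magnitude of such a dipole-induced vacuum-level shift is commonly estimated with the Helmholtz equation; this relation is standard electrostatics for an ordered dipole layer and is added here for illustration, not taken from the abstract itself:

```latex
\Delta E_{\mathrm{vac}} = \frac{e\, N\, \mu_{\perp}}{\varepsilon_{0}\, \varepsilon_{r}}
```

where N is the areal density of adsorbed molecules, μ⊥ the component of the molecular dipole moment normal to the surface, and ε_r the effective dielectric constant of the monolayer. A higher solution concentration plausibly increases the surface coverage N, which is consistent with the concentration-dependent tunability described above.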
Secondly, the application of this strategy in perovskite solar cells is explored. Devices with different perovskite compositions ("triple-cation perovskite" and MAPbBr3) are prepared. The two resulting model systems present different energetic offsets at the perovskite/hole-transport-layer interface. Upon tailored perovskite surface functionalization, devices with MAPbBr3 show a stabilized open-circuit voltage (Voc) enhancement of approximately 60 mV on average, while the impact on triple-cation solar cells is limited. This suggests that the proposed energy level tuning method is valid, but that its effectiveness depends on factors such as the magnitude of the energetic offset compared to the other losses in the devices.
Finally, the above method is developed further by incorporating the ability to interact with the perovskite surface directly into a novel hole-transport material (HTM), named PFI. This HTM can anchor to the perovskite halide ions via halogen bonding (XB). Its behaviour is compared to that of another HTM (PF) with the same chemical structure and properties except for the ability to form XB. The interaction of the perovskite with PFI and PF is characterized through UV-Vis spectroscopy, atomic force microscopy, and Kelvin probe measurements combined with simulations. Compared to PF, PFI exhibits enhanced resilience against solvent exposure and improved energy level alignment with the perovskite layer. As a consequence, devices comprising PFI show enhanced Voc and improved operational stability during maximum-power-point tracking, in addition to reduced hysteresis. XB promotes the formation of a high-quality interface by anchoring to the halide ions and forming a stable and ordered interfacial layer, making it a particularly interesting candidate for the development of tailored charge transport materials for PSCs.
Overall, the results presented in this dissertation introduce and discuss a versatile tool to functionalize the perovskite surface and tune its energy levels. The application of this method in devices is explored, and insights into its challenges and advantages are given. Within this frame, the results shed light on XB as an ideal interaction for enhancing stability and efficiency in perovskite-based devices.
Transient permeability in porous and fractured sandstones mediated by fluid-rock interactions
(2021)
Understanding the fluid transport properties of subsurface rocks is essential for a large number of geotechnical applications, such as hydrocarbon (oil/gas) exploitation, geological storage (CO2/fluids), and geothermal reservoir utilization. To date, the hydromechanically dependent fluid flow patterns in porous media and single macroscopic rock fractures have been investigated extensively and are relatively well understood. In contrast, fluid-rock interactions, which may permanently affect rock permeability by reshaping the structure and changing the connectivity of pore throats or fracture apertures, need further elaboration. This is of significant importance for improving our knowledge of the long-term evolution of rock transport properties and for evaluating a reservoir's sustainability. The thesis focuses on geothermal energy utilization, e.g., seasonal heat storage in aquifers and enhanced geothermal systems, where single-phase fluid flow in porous rocks and rock fracture networks under various pressure and temperature conditions dominates.
In this experimental study, outcrop samples (i.e., Flechtinger sandstone, an illite-bearing Lower Permian rock, and Fontainebleau sandstone, consisting of pure quartz) were used for flow-through experiments under simulated hydrothermal conditions. The themes of the thesis are (1) the investigation of clay particle migration in intact Flechtinger sandstone and the coincident permeability damage upon cyclic temperature and fluid salinity variations; (2) the determination of hydro-mechanical properties of self-propping fractures in Flechtinger and Fontainebleau sandstones with different fracture features and contrasting mechanical properties; and (3) the investigation of the time-dependent fracture aperture evolution of Fontainebleau sandstone induced by fluid-rock interactions (i.e., predominantly pressure solution). Overall, the thesis aims to unravel the mechanisms of the instantaneous reduction (i.e., direct responses to thermo-hydro-mechanical-chemical (THMC) conditions) and progressively-cumulative changes (i.e., time-dependence) of rock transport properties.
Permeability of intact Flechtinger sandstone samples was measured under each constant condition, with temperature (room temperature up to 145 °C) and fluid salinity (NaCl: 0–2 mol/l) changed stepwise. Mercury intrusion porosimetry (MIP), electron microprobe analysis (EMPA), and scanning electron microscopy (SEM) were performed to investigate the changes in local porosity, microstructures, and clay element contents before and after the experiments. The results indicate that the permeability of illite-bearing Flechtinger sandstone is impaired by heating and by exposure to low-salinity pore fluids. The chemically induced permeability variations prove to be path-dependent with respect to the applied succession of fluid salinity changes. The permeability decay induced by a temperature increase and that induced by a fluid salinity reduction operate by relatively independent mechanisms, i.e., thermo-mechanical and thermo-chemical effects, respectively.
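In flow-through experiments of this kind, permeability is conventionally obtained from Darcy's law. The following sketch states that standard relation; the flow rate, sample geometry, and pressures are illustrative values, not data from the thesis:

```python
def darcy_permeability(Q, mu, L, A, dP):
    """Permeability k [m^2] from Darcy's law: Q = k * A * dP / (mu * L)."""
    return Q * mu * L / (A * dP)

# Hypothetical core-flooding values (not from the thesis):
Q = 1.0e-9      # volumetric flow rate, m^3/s
mu = 5.0e-4     # water viscosity at elevated temperature, Pa*s
L = 0.05        # sample length, m
A = 1.1e-3      # cross-sectional area of the core, m^2
dP = 1.0e5      # differential pressure across the sample, Pa

k = darcy_permeability(Q, mu, L, A, dP)   # ~2.3e-16 m^2 (~0.23 mD)
```

Tracking `k` through stepwise changes of temperature or salinity is how the path-dependent permeability decay described above becomes measurable.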
Further, the hydro-mechanical investigations of single macroscopic fractures (aligned and mismatched tensile fractures, and smooth saw-cut fractures) illustrate that a relative offset of the fracture walls can significantly increase fracture aperture and permeability, although the degree of increase depends on the fracture surface roughness. X-ray computed tomography (CT) demonstrates that the contact area ratio after the pressure cycles is inversely correlated with the fracture offset. Moreover, the rock mechanical properties, which determine the strength of the contact asperities, are crucial: the relatively harder rock (i.e., Fontainebleau sandstone) has a higher self-propping potential for sustaining permeability during pressurization. This implies that self-propping rough fractures with a sufficient displacement are efficient pathways for fluid flow if the rock matrix is mechanically strong.
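The baseline against which such self-propped fractures are usually compared is the parallel-plate (cubic law) model, in which fracture transmissivity scales with the cube of the hydraulic aperture. The sketch below states this standard relation with an illustrative aperture, not a value from the study:

```python
def cubic_law_transmissivity(b):
    """Parallel-plate (cubic law) transmissivity per unit width: T = b^3 / 12."""
    return b ** 3 / 12.0

def fracture_permeability(b):
    """Equivalent fracture permeability k_f = b^2 / 12 for hydraulic aperture b [m]."""
    return b ** 2 / 12.0

b = 50e-6                       # hypothetical hydraulic aperture of 50 micrometres
k_f = fracture_permeability(b)  # ~2.1e-10 m^2
```

Because of the cubic dependence, even a modest aperture gain from a fracture-wall offset translates into a large permeability increase, which is why the self-propping effect described above matters so much.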
Finally, two long-term flow-through experiments on Fontainebleau sandstone samples containing single fractures were conducted, with intermittent flow (~140 days) and continuous flow (~120 days), respectively. Permeability and fluid element concentrations were measured throughout the experiments. Permeability reduction occurred in the initial stage when the stress was applied, whereas it converged at later stages, even under stressed conditions. Fluid chemistry and microstructural observations demonstrate that pressure solution governs the long-term fracture aperture deformation, with marked effects of the pore fluid (Si) concentration and the structure of the contact grain boundaries. The retardation and cessation of rock fracture deformation are mainly induced by a decrease in contact stress due to contact area enlargement and by dissolved mass accumulation within the contact boundaries. This work implies that fracture closure under constant (pressure/stress and temperature) conditions is likely a spontaneous process, especially in the initial stage after pressurization when the contact area is relatively small. In contrast, contact area growth changes the fracture closure behavior due to the evolution of the contact boundaries and concurrent changes in their diffusive properties. Fracture aperture, and thus permeability, will likely be sustainable in the long term if no other processes (e.g., mineral precipitation in the open void space) occur.
River flooding poses a threat to numerous cities and communities all over the world. The detection, quantification, and attribution of changes in flood characteristics are key to assessing changes in flood hazard and to helping affected societies mitigate and adapt to emerging risks in time. The Rhine River is one of the major European rivers, and numerous large cities lie on its shores. Runoff from several large tributaries superimposes in the main channel, shaping the complex flow regime; rainfall, snowmelt, and ice melt are all important runoff components. The main objective of this thesis is the investigation of a possible transient merging of the nival and pluvial Rhine flood regimes under global warming. Rising temperatures cause snowmelt to occur earlier in the year and rainfall to be more intense. The superposition of snowmelt-induced floods originating from the Alps with more intense rainfall-induced runoff from pluvial-type tributaries might create a new flood type with potentially disastrous consequences.
To introduce the topic of changing hydrological flow regimes, an interactive web application is presented that enables the investigation of runoff timing and runoff seasonality observed at river gauges all over the world. The exploration and comparison of a great diversity of river gauges in the Rhine River Basin and beyond indicates that river systems around the world are undergoing fundamental changes. In hazard and risk research, providing background as well as real-time information to residents and decision-makers in an easily accessible way is of great importance. Future studies need to further harness the potential of scientifically engineered online tools to improve the communication of information related to hazards and risks.
The next step is the development of a cascading sequence of analytical tools to investigate long-term changes in hydro-climatic time series. The combination of quantile sampling with moving-average trend statistics and empirical mode decomposition allows the extraction of high-resolution signals and the identification of mechanisms driving changes in river runoff. Results indicate that the construction and operation of large reservoirs in the Alps is an important factor redistributing runoff from summer to winter, and hint at more (intense) rainfall in recent decades, particularly during winter, in turn increasing high runoff quantiles. The development and application of the analytical sequence represents a further step toward disentangling natural variability, climate change signals, and direct human impacts.
The in-depth analysis of in situ snow measurements and simulations of the Alpine snow cover using a physically based snow model enable the quantification of changes in snowmelt in the sub-basin upstream of gauge Basel. Results confirm previous investigations indicating that rising temperatures result in a decrease in maximum melt rates. Extending these findings to a catchment perspective, a threefold effect of rising temperatures can be identified: snowmelt becomes weaker, occurs earlier, and forms at higher elevations. Furthermore, results indicate that, due to the wide range of elevations in the basin, snowmelt does not occur simultaneously at all elevations; rather, elevation bands melt together in blocks. The beginning and end of the release of meltwater seem to be determined by the passage of warm air masses, and the respective elevation range affected by the accompanying temperatures and snow availability. Following these findings, a hypothesis describing elevation-dependent compensation effects in snowmelt is introduced: in a warmer world with similar sequences of weather conditions, snowmelt moves upward to higher elevations, i.e., the block of elevation bands providing most of the water to the snowmelt-induced runoff is located at higher elevations. This upward movement makes snowmelt in individual elevation bands occur earlier; the timing of the snowmelt-induced runoff, however, stays the same, as meltwater from higher elevations at least partly replaces meltwater from the elevations below.
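The compensation hypothesis can be illustrated with a toy degree-day melt model over elevation bands. The lapse rate, degree-day factor, and temperatures below are generic textbook values, not the calibrated physically based model of the thesis:

```python
import numpy as np

def melt_rates(t_air_lowland, elevations, lapse_rate=-0.0065, ddf=4.0):
    """Daily melt [mm/day] per elevation band from a simple degree-day model.

    t_air_lowland : air temperature at reference elevation 0 m [deg C]
    elevations    : band elevations [m]
    lapse_rate    : temperature lapse rate [deg C per m]
    ddf           : degree-day factor [mm / (deg C * day)]
    """
    t_band = t_air_lowland + lapse_rate * elevations
    return ddf * np.maximum(t_band, 0.0)

bands = np.arange(500, 3501, 100)   # 100 m elevation bands
today = melt_rates(10.0, bands)     # melting block reaches up to ~1500 m
warmer = melt_rates(12.0, bands)    # +2 K: the melting block extends higher
```

In the warmer scenario, the highest melting band moves upward, so snow at higher elevations contributes melt that partly replaces the contribution from depleted lower bands — the compensation effect hypothesized above.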
The insights on past and present changes in river runoff, snow cover, and the underlying mechanisms form the basis for investigations of potential future changes in Rhine River runoff. The mesoscale Hydrological Model (mHM), forced with an ensemble of climate projection scenarios, is used to analyse future changes in streamflow, snowmelt, precipitation, and evapotranspiration at 1.5, 2.0, and 3.0 °C global warming. Simulation results suggest that future changes in flood characteristics in the Rhine River Basin are controlled by increased precipitation amounts on the one hand and reduced snowmelt on the other. Rising temperatures deplete seasonal snowpacks. At no time during the year does a warming climate result in an increased risk of snowmelt-driven flooding. Counterbalancing effects between snowmelt and precipitation often result in only small and transient changes in streamflow peaks. Although investigations point to changes in both rainfall- and snowmelt-driven runoff, there are no indications of a transient merging of the nival and pluvial Rhine flood regimes due to climate warming. Flooding in the main tributaries of the Rhine, such as the Moselle River, as well as in the High Rhine, is controlled by both precipitation and snowmelt. Caution has to be exercised in labelling sub-basins such as the Moselle catchment as purely pluvial-type, or the Rhine River Basin at Basel as purely nival-type; results indicate that such (over-)simplifications can entail misleading assumptions with regard to flood-generating mechanisms and changes in flood hazard. In the framework of this thesis, progress has been made in detecting, quantifying, and attributing past, present, and future changes in Rhine flow and flood characteristics. However, further studies are necessary to pin down future changes in the genesis of Rhine floods, particularly of very rare events.
The optical properties of chromophores, especially organic dyes and optically active inorganic molecules, are determined by their chemical structures, surrounding media, and excited-state behavior. The classical go-to techniques for spectroscopic investigations are absorption and luminescence spectroscopy. While both are powerful and easy-to-apply methods, the limited time resolution of luminescence spectroscopy and its reliance on luminescent properties can make its application in certain cases complex or even impossible. This can be the case when the investigated molecules no longer luminesce due to quenching effects, or when they were never luminescent in the first place. In those cases, transient absorption spectroscopy is an excellent and much more sophisticated technique for investigating such systems. This pump-probe laser-spectroscopic method is well suited for mechanistic investigations of luminescence quenching phenomena and photoreactions, owing to its extremely high time resolution in the femto- and picosecond ranges, where many intermediate or transient species of a reaction can be identified and their kinetic evolution can be observed. Furthermore, it does not rely on the samples being luminescent, because the sample is actively probed after excitation. In this work, it is shown that transient absorption spectroscopy made it possible to identify the luminescence quenching mechanisms, and thus the luminescence quantum yield losses, of the organic dye classes O4-DBD, S4-DBD, and pyridylanthracenes. The population of their triplet states could be identified as the mechanism competing with their luminescence. While the good luminophores O4-DBD showed minor losses, the luminescence of the S4-DBD dyes was almost entirely quenched by this process. For the pyridylanthracenes, this phenomenon is present in both the protonated and unprotonated forms and moderately affects the luminescence quantum yield.
Moreover, the majority of the quenching losses in the protonated forms are caused by additional non-radiative processes introduced by the protonation of the pyridyl rings. Furthermore, transient absorption spectroscopy was applied to investigate the quenching mechanisms of uranyl(VI) luminescence by chloride and bromide. The reduction of the halides by excited uranyl(VI) leads to the formation of dihalide radical anions X2·−. This excited-state redox process is thus identified as the quenching mechanism for both halides. Being diffusion-limited, it can be suppressed by cryogenically freezing the samples or by observing these interactions in media with a lower dielectric constant, such as ACN and acetone.
In our daily life, recurrence plays an important role on many spatial and temporal scales and in different contexts. It is the foundation of learning, be it in an evolutionary or in a neural context. It therefore seems natural that recurrence is also a fundamental concept in theoretical dynamical systems science. The way in which states of a system recur, or develop in a similar way from similar initial states, makes it possible to infer information about the underlying dynamics of the system. The mathematical space in which we define the state of a system (the state space) is often high-dimensional, especially in complex systems that can also exhibit chaotic dynamics. The recurrence plot (RP) enables us to visualize the recurrences of any high-dimensional system in a two-dimensional, binary representation. Certain patterns in RPs can be related to physical properties of the underlying system, making the qualitative and quantitative analysis of RPs an integral part of nonlinear systems science. The presented work has a methodological focus and develops recurrence analysis (RA) further by addressing current research questions related to the increasing amount of available data and advances in machine learning techniques. By automating a central step in RA, namely the reconstruction of the state space from measured experimental time series, and by investigating the impact of important free parameters, this thesis aims to make RA more accessible to researchers outside of physics.
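A recurrence plot is straightforward to compute. The following minimal sketch builds the binary matrix R_ij = Θ(ε − ||x_i − x_j||), with the threshold ε chosen as a quantile of all pairwise distances (one common convention, which fixes the global recurrence rate); the sine trajectory is just example input:

```python
import numpy as np

def recurrence_plot(x, eps_quantile=0.1):
    """Binary recurrence matrix R_ij = Theta(eps - ||x_i - x_j||).

    x            : trajectory, shape (N, d)
    eps_quantile : eps is set to this quantile of all pairwise
                   distances, fixing the global recurrence rate.
    """
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    eps = np.quantile(d, eps_quantile)
    return (d <= eps).astype(int)

# A periodic trajectory produces the diagonal-line structure
# that diagonal-line-based RQA measures quantify.
t = np.linspace(0, 8 * np.pi, 400)
x = np.column_stack([np.sin(t), np.cos(t)])
R = recurrence_plot(x, eps_quantile=0.1)
```

The long diagonal lines in `R` for this periodic input are exactly the patterns that the quantitative analysis described below turns into statistics.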
The first part of this dissertation is concerned with the reconstruction of the state space from time series. To this end, a novel idea is proposed which automates the reconstruction problem in the sense that there is no need to preprocess the data or estimate parameters a priori. The key idea is that the goodness of a reconstruction can be evaluated by a suitable objective function, which is minimized in the embedding process. In addition, the new method can process multivariate time series as input. This is particularly important because multi-channel, sensor-based observations are ubiquitous in many research areas and continue to increase. Building on this, the described minimization problem of the objective function is then solved using a machine learning approach.
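For orientation, the classical Takens delay embedding that such automated methods build upon can be sketched as follows. Here the embedding dimension m and delay τ are fixed by hand, which is precisely the manual step the thesis's objective-function approach replaces:

```python
import numpy as np

def delay_embed(series, m, tau):
    """Takens time-delay embedding of a scalar time series.

    Returns an array of shape (N - (m-1)*tau, m) whose rows are the
    reconstructed state vectors [x(t), x(t+tau), ..., x(t+(m-1)*tau)].
    """
    series = np.asarray(series)
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

s = np.sin(np.linspace(0, 20 * np.pi, 1000))
X = delay_embed(s, m=2, tau=12)   # m and tau fixed by hand here; the thesis
                                  # automates this choice via an objective function
```

With m = 2 and a delay near a quarter period, the scalar sine unfolds into a circle in the reconstructed state space, recovering the geometry of the underlying oscillation.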
In the second part, technical and methodological aspects of RA are discussed. First, we mathematically justify the idea of setting the most influential free parameter in RA, the recurrence threshold ε, in relation to the distribution of all pairwise distances in the data. This is especially important when comparing different RPs and their quantification statistics, and it is fundamental to any comparative study. Second, some aspects of recurrence quantification analysis (RQA) are examined. As correction schemes for biased RQA statistics based on diagonal lines, we propose a simple method for dealing with border effects of an RP in RQA and a skeletonization algorithm for RPs. This results in less biased (diagonal-line-based) RQA statistics for flow-like data. Third, a novel type of RQA characteristic is developed, which can be viewed as a generalized nonlinear power spectrum of high-dimensional systems. The spike power spectrum transforms a spike-train-like signal into the frequency domain. When the diagonal-line-dependent recurrence rate (τ-RR) of an RP is transformed in this way, characteristic periods that are visible in the state space representation of the system can be unraveled; this is not the case when τ-RR is Fourier transformed directly.
Finally, RA and RQA are applied to climate science in the third part and to neuroscience in the fourth part. To the best of our knowledge, this is the first time RPs and RQA have been used to analyze lake sediment data in a paleoclimate context. We therefore first elaborate on the basic formalism and the interpretation of visually recognizable patterns in RPs in relation to the underlying proxy data. We show that these patterns can be used to classify certain types of variability and transitions in the potassium record from six short (< 17 m) sediment cores collected during the Chew Bahir Drilling Project. Building on this, the long core (∼ m composite) from the same site is analyzed, and two types of variability and transitions are identified and compared with the ODP Site wetness index from the eastern Mediterranean. One type of variability likely reflects the influence of precessional forcing in the lower latitudes at times of maximum values of the long eccentricity cycle ( kyr) of the Earth's orbit around the Sun, with a tendency towards extreme events. The other type appears to be related to the minimum values of this cycle and corresponds to fairly rapid transitions between relatively dry and relatively wet conditions.
In contrast, RQA has been applied in the neuroscientific context for almost two decades. In the final part, RQA statistics are used to quantify the complexity of a specific frequency band of multivariate EEG (electroencephalography) data. By analyzing experimental data, it can be shown that the complexity of the signal measured in this way across the sensorimotor cortex decreases as motor tasks are performed. The results are consistent with and complement the well-known concepts of motor-related brain processes. We assume that the features of neuronal dynamics in the sensorimotor cortex discovered in this way, together with the robust RQA methods for identifying and classifying them, contribute to the non-invasive EEG-based development of brain-computer interfaces (BCI) for motor control and rehabilitation.
The present work is an important step towards a robust analysis of complex systems based on recurrence.
The Central Andes region in South America is characterized by a complex and heterogeneous deformation system. Recorded seismic activity and mapped neotectonic structures indicate that most of the intraplate deformation is located along the margins of the orogen, in the transitions to the foreland and the forearc. Furthermore, the actively deforming provinces of the foreland exhibit distinct deformation styles that vary along strike, as well as characteristic distributions of seismicity with depth. The style of deformation transitions from thin-skinned in the north to thick-skinned in the south, and the thickness of the seismogenic layer increases to the south. Based on geological and geophysical observations and numerical modelling, the most commonly invoked causes for the observed heterogeneity are variations in sediment thickness and composition, the presence of inherited structures, and changes in the dip of the subducting Nazca plate. However, there are still no comprehensive investigations of the relationship between the lithospheric composition of the Central Andes, its rheological state, and the observed deformation processes. The central aim of this dissertation is therefore to explore the link between the nature of the lithosphere in the region and the location of active deformation. The study of the lithospheric composition by means of independent-data integration establishes a strong basis for assessing the thermal and rheological state of the Central Andes and its adjacent lowlands, which in turn provides a new foundation for understanding the complex deformation of the region. Along these lines, the general workflow of the dissertation consists of the construction of a 3D data-derived and gravity-constrained density model of the Central Andean lithosphere, followed by the simulation of the steady-state conductive thermal field and the calculation of the strength distribution.
Additionally, the dynamic response of the orogen-foreland system to intraplate compression is evaluated by means of 3D geodynamic modelling.
The results of the modelling approach suggest that the inherited heterogeneous composition of the lithosphere controls the present-day thermal and rheological state of the Central Andes, which in turn influences the location and depth of active deformation processes. Most of the seismic activity and neotectonic structures are spatially correlated with regions of modelled high strength gradients, in the transition from the felsic, hot and weak orogenic lithosphere to the more mafic, cooler and stronger lithosphere beneath the forearc and the foreland. Moreover, the results of the dynamic simulation show a strong localization of the second invariant of the deviatoric strain rate in the same region, suggesting that shortening is accommodated at the transition zones between weak and strong domains. The vertical distribution of seismic activity appears to be influenced by the rheological state of the lithosphere as well. The depth at which the frequency distribution of hypocenters starts to decrease in the different morphotectonic units correlates with the position of the modelled brittle-ductile transitions; accordingly, a fraction of the seismic activity is located within the ductile part of the crust. An exhaustive analysis shows that practically all the seismicity in the region is confined above the 600 °C isotherm, coinciding with the upper temperature limit for brittle behavior of olivine. Therefore, the occurrence of earthquakes below the modelled brittle-ductile transition could be explained by the presence of strong residual mafic rocks from past tectonic events. Another potential cause of deep earthquakes is the existence of inherited shear zones in which brittle behavior is favored through a decrease in the friction coefficient. This hypothesis is particularly suitable for the broken foreland provinces of the Santa Barbara System and the Pampean Ranges, where geological studies indicate successive reactivation of structures through time. Particularly in the Santa Barbara System, the results indicate that both mafic rocks and a reduction in friction are required to account for the observed deep seismic events.
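The notion of a modelled brittle-ductile transition underlying this interpretation can be sketched with a minimal yield-strength envelope. All numbers below (friction coefficient, quartz-type creep parameters, linear geotherm, strain rate) are generic textbook-style assumptions for illustration, not the values of the rheological model used in this work:

```python
import numpy as np

R_GAS = 8.314                 # gas constant, J/(mol K)
rho, g = 2800.0, 9.81         # crustal density (kg/m^3), gravity (m/s^2) -- assumed
A, n, Q = 1e-28, 4.0, 223e3   # power-law creep parameters (Pa^-n s^-1, -, J/mol) -- assumed
strain_rate = 1e-15           # tectonic strain rate, 1/s -- assumed
T0, dTdz = 293.0, 25e-3       # surface temperature (K), geothermal gradient (K/m) -- assumed

z = np.linspace(1e3, 40e3, 400)                      # depth, m
T = T0 + dTdz * z                                    # linear conductive geotherm
brittle = 0.6 * rho * g * z                          # frictional (Byerlee-type) strength, Pa
ductile = (strain_rate / A) ** (1 / n) * np.exp(Q / (n * R_GAS * T))  # creep strength, Pa

# brittle-ductile transition: shallowest depth where creep becomes the weaker mechanism
bdt_depth = z[np.argmax(ductile < brittle)]
print(f"brittle-ductile transition at ~{bdt_depth / 1e3:.1f} km")
```

Earthquakes occurring below such a modelled transition are then the anomaly to explain, e.g. by stronger residual mafic rocks that locally shift the envelope.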
The survey of the prevalence of chronic ankle instability in elite Taiwanese basketball athletes
(2021)
BACKGROUND: Ankle sprains are common in basketball. They can develop into chronic ankle instability (CAI), causing decreased quality of life, reduced functional performance, early osteoarthritis, and an increased risk of other injuries. To develop a strategy for CAI prevention, localized epidemiological data and a valid, reliable assessment tool are essential. However, the epidemiological data on CAI from previous studies are inconclusive, and the prevalence of CAI in Taiwanese basketball athletes is not clear. In addition, a valid and reliable Taiwan-Chinese instrument for evaluating ankle instability has been missing.
PURPOSE: The aims were to provide an overview of the prevalence of CAI in sports populations using a systematic review, to develop a valid and reliable cross-culturally adapted Taiwan-Chinese version of the Cumberland Ankle Instability Tool (CAIT-TW), and to survey the prevalence of CAI in elite basketball athletes in Taiwan using the CAIT-TW.
METHODS: Firstly, a systematic search was conducted. Research articles applying CAI-related questionnaires to survey the prevalence of CAI were included in the review. Second, the English version of the CAIT was translated and cross-culturally adapted into the CAIT-TW. The construct validity, test-retest reliability, internal consistency, and cutoff score of the CAIT-TW were evaluated in an athletic population (N=135). Finally, cross-sectional data on the prevalence of CAI in 388 elite Taiwanese basketball athletes are presented. Demographics, the presence of CAI, and differences in prevalence between genders, competitive levels, and playing positions were evaluated.
RESULTS: The prevalence of CAI was 25%, ranging between 7% and 53% across studies. Among participants with a history of ankle sprains, the prevalence was 46%, ranging between 9% and 76%. In addition, the cross-culturally adapted CAIT-TW showed moderate to strong construct validity, excellent test-retest reliability, good internal consistency, and a cutoff score of 21.5 for the Taiwanese athletic population. Finally, 26% of Taiwanese basketball athletes had unilateral CAI and 50% had bilateral CAI. Women athletes in the investigated cohort had a higher prevalence of CAI than men. There was no difference in prevalence between competitive levels or among playing positions.
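The abstract does not spell out how the cutoff of 21.5 was obtained; a common choice in questionnaire validation is the cutoff maximizing Youden's J (sensitivity + specificity - 1) in an ROC analysis. The sketch below assumes that approach and uses invented toy score samples, not the study data:

```python
import numpy as np

def youden_cutoff(unstable, stable):
    """Cutoff maximizing Youden's J; lower scores indicate instability,
    so a 'positive' test is score <= cutoff."""
    scores = np.unique(np.concatenate([unstable, stable]))
    cuts = (scores[:-1] + scores[1:]) / 2          # midpoints between observed scores
    best_cut, best_j = cuts[0], -1.0
    for c in cuts:
        sens = np.mean(unstable <= c)              # unstable ankles correctly flagged
        spec = np.mean(stable > c)                 # stable ankles correctly passed
        if sens + spec - 1 > best_j:
            best_cut, best_j = c, sens + spec - 1
    return best_cut, best_j

# Invented score samples on the 0-30 CAIT scale (illustration only)
rng = np.random.default_rng(1)
unstable = np.clip(rng.normal(17, 4, 60), 0, 30).round()
stable = np.clip(rng.normal(26, 3, 75), 0, 30).round()
cut, j = youden_cutoff(unstable, stable)
print(cut, round(j, 2))
```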
CONCLUSION: The systematic review shows that the prevalence of CAI varies widely among the included studies. This could be due to differences in exclusion criteria, age, sports discipline, or other factors. Future studies require standardized criteria for investigating the epidemiology of CAI; such studies should be prospective, and factors affecting the prevalence of CAI should be investigated and described. The translated CAIT-TW is a valid and reliable tool to differentiate between stable and unstable ankles in athletes and may further be applied in research or daily practice in Taiwan. In the Taiwanese basketball population, CAI is highly prevalent. This might relate to the research method, preexisting ankle instability, and training-related issues. Women showed a higher prevalence of CAI than men. Gender should therefore be taken into consideration when applying preventive measures.
Flooding is a widespread problem in many parts of the world, including Europe. It occurs mainly due to extreme weather conditions (e.g. heavy rainfall and snowmelt), and the consequences of flood events can be devastating. Flood risk is commonly defined as a combination of the probability of an event and its potential adverse impacts. It therefore covers three major dynamic components: hazard (the physical characteristics of a flood event), exposure (the people and physical assets located in flood-prone areas), and vulnerability (the susceptibility of the exposed elements to damage). Floods are natural phenomena and cannot be fully prevented, but their risk can be managed and mitigated. Sound flood risk management and mitigation require a proper risk assessment, which in turn requires a clear understanding of flood risk dynamics. For instance, human activity may contribute to an increase in flood risk. Anthropogenic climate change causes higher rainfall intensities and sea level rise, and therefore an increase in the scale and frequency of flood events. On the other hand, inappropriate risk management and structural protection measures may not be very effective for risk reduction. Additionally, risk increases due to the growing number of assets and people within flood-prone areas. To address these issues, the first objective of this thesis is to perform a sensitivity analysis to understand the impact of changes in each flood risk component on overall risk, and further their mutual interactions. A multitude of changes along the risk chain are simulated with a regional flood model (RFM) in which all processes, from the atmosphere through the catchment and river system to the damage mechanisms, are taken into consideration. The impacts of changes in the risk components are explored through plausible change scenarios for the mesoscale Mulde catchment (a sub-basin of the Elbe) in Germany.
A proper risk assessment requires a reasonable representation of real-world flood events. Traditionally, flood risk is assessed by assuming homogeneous return periods of flood peaks throughout the considered catchment. In reality, however, flood events are spatially heterogeneous, and the traditional assumption therefore misestimates flood risk, especially for large regions. In this thesis, two studies investigate the importance of spatial dependence in large-scale flood risk assessment at different spatial scales. In the first, the "real" spatial dependence of the return periods of flood damages is represented by a continuous risk modelling approach in which spatially coherent patterns of hydrological and meteorological controls (i.e. soil moisture and weather patterns) are included. The risk estimates under this modelled dependence assumption are then compared with those under two other assumptions on the spatial dependence of the return periods of flood damages: complete dependence (homogeneous return periods) and independence (randomly generated heterogeneous return periods), for the Elbe catchment in Germany. The second study represents the "real" spatial dependence by multivariate dependence models. Similar to the first study, the three assumptions on the spatial dependence of the return periods of flood damages are compared, but at national (United Kingdom and Germany) and continental (Europe) scales. Furthermore, the impacts of the choice of model, of tail dependence, and of the structural flood protection level on the flood risk under the different spatial dependence assumptions are investigated.
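The contrast between the complete dependence and independence assumptions can be made concrete with a toy Monte Carlo sketch. The damage curve, region count, and quantile level are assumptions chosen purely for illustration; only the qualitative effect, that assuming homogeneous return periods inflates rare aggregated damages, mirrors the comparison described above:

```python
import numpy as np

rng = np.random.default_rng(42)
n_regions, n_years = 10, 20000

def damage(u):
    """Toy regional damage curve, increasing in the annual quantile u in (0, 1)."""
    return (1.0 / (1.0 - u)) ** 0.4        # heavy-tailed, purely hypothetical

# complete dependence: every region experiences the same return period each year
u_dep = rng.uniform(size=(n_years, 1)) * np.ones((1, n_regions))
# independence: every region draws its own return period
u_ind = rng.uniform(size=(n_years, n_regions))

total_dep = damage(u_dep).sum(axis=1)      # aggregated damage per year
total_ind = damage(u_ind).sum(axis=1)

q = 0.995                                  # roughly a 200-year annual event
ratio = np.quantile(total_dep, q) / np.quantile(total_ind, q)
print(round(ratio, 2))                     # complete dependence inflates the tail
```

The mean annual damage is the same under both assumptions; only the tail quantiles differ, which is why the misestimation shows up for rare events such as the 200-year damage.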
The outcomes of the sensitivity analysis framework suggest that flood risk can vary dramatically as a result of possible change scenarios. Risk components that have received little attention so far (e.g. changes in dike systems and in vulnerability) may mask the influence of climate change, which is the most frequently investigated component.
The results of the spatial dependence research in this thesis further show that, for events with return periods greater than approximately 200 years in the Elbe catchment, the damage under the false assumption of complete dependence is 100 % larger than the damage under the modelled dependence assumption. The complete dependence assumption overestimates the 200-year flood damage, a benchmark indicator for the insurance industry, by 139 %, 188 % and 246 % for the UK, Germany and Europe, respectively. The misestimation of risk under the different assumptions can vary from upstream to downstream within a catchment. In addition, the tail dependence in the model and the flood protection level in the catchments affect the risk estimates and the differences between the spatial dependence assumptions.
In conclusion, a broader consideration of all risk components that can affect flood risk, together with the consideration of the spatial dependence of flood return periods, is strongly recommended for a better understanding of flood risk and, consequently, for sound flood risk management and mitigation.
Proteins of halophilic organisms, which accumulate molar concentrations of KCl in their cytoplasm, have a much higher content of acidic amino acids than proteins of mesophilic organisms. It has been proposed that this excess is necessary to keep proteins hydrated in an environment with low water activity: either via direct interactions between water and the carboxylate groups of the acidic amino acids, or via cooperative interactions between acidic amino acids and hydrated cations, which would stabilize the folded protein. In the course of this Ph.D. study, we investigated these possibilities using atomistic molecular dynamics simulations and classical force fields. High-quality parameters describing the interaction between K+ and the carboxylate groups of acidic amino acids are indispensable for this study. We first evaluated the quality of the default parameters for these ions within the widely used AMBER ff14SB force field for proteins and found that they perform poorly. We propose new parameters, which reproduce the solution activity derivatives of potassium acetate solutions up to 2 mol/kg as well as the distances between potassium ions and carboxylate groups observed in X-ray structures of proteins. To understand the role of acidic amino acids in protein hydration, we investigated this aspect for five halophilic proteins in comparison with five mesophilic ones. Our results do not support the necessity of acidic amino acids for keeping folded proteins hydrated. Proteins with a larger fraction of acidic amino acids do have higher hydration levels; however, the hydration level of each protein is identical at low (b_KCl = 0.15 mol/kg) and high (b_KCl = 2 mol/kg) KCl concentration. It has also been proposed that cooperative interactions between acidic amino acids and nearby hydrated cations stabilize the folded protein and slow down the dynamics of its solvation shell; according to this theory, the cations would be preferentially excluded from the unfolded structure.
We investigated this possibility through extensive free energy calculations. We find that cooperative interactions between neighboring acidic amino acids exist and are mediated by the ions in solution, but they are present in both the folded and the unfolded structures of halophilic proteins. The translational dynamics of the solvation shell is barely distinguishable between halophilic and mesophilic proteins; such a cooperative effect therefore does not result in unusually slow solvent dynamics, as had been suggested.
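In AMBER-type force fields, the K+ to carboxylate-oxygen interaction is built from atomic Lennard-Jones parameters via Lorentz-Berthelot mixing, and revised ion-pair parameters are typically introduced as a pair-specific override of the mixed values. The sketch below illustrates that mechanism; all numerical values, and the 5% scaling, are hypothetical and are not the parameters proposed in this work:

```python
import math

def lorentz_berthelot(sig_i, eps_i, sig_j, eps_j):
    """Default mixing rules: arithmetic mean for sigma, geometric mean for epsilon."""
    return 0.5 * (sig_i + sig_j), math.sqrt(eps_i * eps_j)

def lj(r, sigma, epsilon):
    """12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# Hypothetical atomic LJ parameters (nm, kJ/mol) for K+ and a carboxylate oxygen
sig_k, eps_k = 0.304, 0.81
sig_o, eps_o = 0.296, 0.88

sigma, epsilon = lorentz_berthelot(sig_k, eps_k, sig_o, eps_o)
# Pair-specific override (NBfix-style): scale the mixed sigma so that the
# potential minimum, i.e. the preferred K+...O distance, shifts outward.
sigma_fix = 1.05 * sigma
r_min = 2 ** (1 / 6) * sigma_fix       # location of the pair potential minimum
print(round(r_min, 4))
```

Targets such as solution activity derivatives or ion-oxygen distances from X-ray structures then constrain how far the pair parameters must be shifted.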
The spread of antibiotic-resistant bacteria poses a globally increasing threat to public health care. The excessive use of antibiotics in animal husbandry can promote the development of resistance in stables. Transmission through direct contact with animals and through contamination of food has already been proven. The animals' excrements, combined with a binding material, enable a further potential path of spread into the environment if they are used as organic manure on agricultural land. As most airborne bacteria are attached to particulate matter, the focus of this work is the atmospheric dispersal via the dust fraction.
Field measurements on arable lands in Brandenburg, Germany and wind erosion studies in a wind tunnel were conducted to investigate the risk of a potential atmospheric dust-associated spread of antibiotic-resistant bacteria from poultry manure fertilized agricultural soils. The focus was to (i) characterize the conditions for aerosolization and (ii) qualify and quantify dust emissions during agricultural operations and wind erosion.
PM10 (PM: particulate matter with an aerodynamic diameter smaller than 10 µm) emission factors and bacterial fluxes for poultry manure application and incorporation have not been reported before. The contribution to dust emissions depends on the water content of the manure, which is affected by the manure pretreatment (fresh, composted, stored, dried), as well as on the spreading intensity of the manure spreader. During poultry manure application, PM10 emissions ranged between 0.05 kg ha-1 and 8.37 kg ha-1. For comparison, the subsequent land preparation contributed 0.35 – 1.15 kg ha-1 of PM10 emissions. Manure particles were still part of the dust emissions, but they were estimated to account for less than 1% of total PM10 emissions due to the dilution of the poultry manure in the soil after incorporation. Bacterial emissions of fecal origin were more relevant during manure application than during the subsequent incorporation, although for the non-dried manure variants the PM10 emissions of manure incorporation were larger than those of manure application.
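An emission factor of this kind relates an emitted PM10 mass to the treated area. The toy mass-balance sketch below, with invented concentrations and plume geometry, only illustrates the unit logic behind a figure such as kg PM10 per hectare; it does not reproduce the measurement and modelling protocol of this work:

```python
# All values are invented for illustration (not measured data from this work).
c_down, c_up = 180e-9, 20e-9            # PM10 concentration down-/upwind, kg/m^3
wind = 4.0                              # mean wind speed through the plume, m/s
plume_height, plume_width = 3.0, 50.0   # assumed plume cross-section, m
duration = 1800.0                       # duration of the spreading operation, s
area_ha = 2.0                           # treated field area, ha

# net mass flux through the downwind cross-section, integrated over time
mass = (c_down - c_up) * wind * plume_height * plume_width * duration   # kg
ef = mass / area_ha                     # emission factor, kg PM10 per hectare
print(round(ef, 3))
```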
Wind erosion leads to preferential detachment of manure particles from sandy soils when poultry manure has recently been incorporated. Sorting effects between the low-density organic particles of manure origin and the mineral soil particles were observed just above the threshold wind speed of 7 m s-1. Depending on the wind speed, potential erosion rates between 101 and 854 kg ha-1 were identified when 6 t ha-1 of poultry manure were applied. Microbial investigations showed that manure bacteria were detached more easily from the soil surface during wind erosion, owing to their attachment to manure particles.
Although antibiotic-resistant bacteria (ESBL-producing E. coli) were found in the poultry barns, no contamination with them could be detected in the manure, in the fertilized soils, or in the dust generated by manure application, land preparation, or wind erosion. Parallel studies within this project showed that storing poultry manure for a few days (36 – 72 h) is sufficient to inactivate ESBL-producing E. coli. Other antibiotic-resistant bacteria, i.e. MRSA and VRE, were found only sporadically in the stables and not at all in the dust. Based on the results of this work, the risk of a potential infection by dust-associated antibiotic-resistant bacteria can therefore be considered low.
Conceptual knowledge about objects, people and events in the world is central to human cognition, underlying core cognitive abilities such as object recognition and use, and word comprehension. Previous research indicates that concepts consist of perceptual and motor features represented in modality-specific perceptual-motor brain regions. In addition, cross-modal convergence zones integrate modality-specific features into more abstract conceptual representations.
However, several questions remain open: First, to what extent does the retrieval of perceptual-motor features depend on the concurrent task? Second, how do modality-specific and cross-modal regions interact during conceptual knowledge retrieval? Third, which brain regions are causally relevant for conceptually-guided behavior? This thesis addresses these three key issues using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in the healthy human brain.
Study 1 - an fMRI activation study - tested to what extent the retrieval of sound and action features of concepts, and the resulting engagement of auditory and somatomotor brain regions, depend on the concurrent task. Forty healthy human participants performed three different tasks - lexical decision, sound judgment, and action judgment - on words with a high or low association to sounds and actions. We found that modality-specific regions selectively respond to task-relevant features: auditory regions selectively responded to sound features during sound judgments, and somatomotor regions selectively responded to action features during action judgments. Unexpectedly, several regions (e.g. the left posterior parietal cortex; PPC) exhibited a task-dependent response to both sound and action features. We propose that these regions are "multimodal", not "amodal", convergence zones which retain modality-specific information.
Study 2 - an fMRI connectivity study - investigated the functional interaction between modality-specific and multimodal areas during conceptual knowledge retrieval. Using the above fMRI data, we asked (1) whether modality-specific and multimodal regions are functionally coupled during sound and action feature retrieval, (2) whether their coupling depends on the task, (3) whether information flows bottom-up, top-down, or bidirectionally, and (4) whether their coupling is behaviorally relevant. We found that functional coupling between multimodal and modality-specific areas is task-dependent, bidirectional, and relevant for conceptually-guided behavior. Left PPC acted as a connectivity "switchboard" that flexibly adapted its coupling to task-relevant modality-specific nodes.
Hence, neuroimaging studies 1 and 2 suggested a key role of left PPC as a multimodal convergence zone for conceptual knowledge. However, as neuroimaging is correlational, it remained unknown whether left PPC plays a causal role as a multimodal conceptual hub. Therefore, study 3 - a TMS study - tested the causal relevance of left PPC for sound and action feature retrieval. We found that TMS over left PPC selectively impaired action judgments on low sound-low action words, as compared to sham stimulation. Computational simulations of the TMS-induced electrical field revealed that stronger stimulation of left PPC was associated with worse performance on action, but not sound, judgments. These results indicate that left PPC causally supports conceptual processing when action knowledge is task-relevant and cannot be compensated by sound knowledge. Our findings suggest that left PPC is specialized for action knowledge, challenging the view of left PPC as a multimodal conceptual hub.
Overall, our studies support "hybrid theories" which posit that conceptual processing involves both modality-specific perceptual-motor regions and cross-modal convergence zones. In our new model of the conceptual system, we propose that conceptual processing relies on a representational hierarchy from modality-specific to multimodal up to amodal brain regions. Crucially, this hierarchical system is flexible, with different regions and connections being engaged in a task-dependent fashion. Our model not only reconciles the seemingly opposing grounded cognition and amodal theories, but also incorporates the task dependency of conceptually related brain activity and connectivity, thereby resolving several open issues on the neural basis of conceptual knowledge retrieval.
The High Energy Stereoscopic System (H.E.S.S.) is an array of five imaging atmospheric Cherenkov telescopes located in the Khomas Highland of Namibia. H.E.S.S. operates in a wide energy range from several tens of GeV to several tens of TeV, reaching its best sensitivity at around 1 TeV or somewhat below. However, there are many important topics – such as the search for Galactic PeVatrons, the study of gamma-ray production scenarios for sources (hadronic vs. leptonic), and EBL absorption studies – which require good sensitivity at energies above 10 TeV. This work aims at improving the sensitivity of H.E.S.S. and increasing the gamma-ray statistics at high energies. The study investigates an enlargement of the effective H.E.S.S. field of view by using events with larger offset angles in the analysis. The greatest challenges in the analysis of large-offset events are the degradation of the reconstruction accuracy and the rise of the background rate as the offset angle increases. To overcome these issues, a more sophisticated direction reconstruction method (DISP) and improvements to the standard background rejection technique are implemented, which by themselves are effective ways to increase the gamma-ray statistics and improve the sensitivity of the analysis. As a result, the angular resolution at the preselection level is improved by 5 - 10% for events at 0.5° offset angle and by 20 - 30% for events at 2° offset angle. The background rate at large offset angles is decreased to nearly the level typical for offset angles below 2.5°. Thereby, sensitivity improvements of 10 - 20% are achieved for the proposed analysis compared to the standard analysis at small offset angles. The developed analysis also allows the use of events at offset angles up to approximately 4°, which was not possible before. This analysis method is applied to Galactic plane data above 10 TeV. As a result, 40 of the 78 sources presented in the H.E.S.S. Galactic plane survey (HGPS) are detected above 10 TeV. Among them are representatives of all source classes present in the HGPS catalogue, namely binary systems, supernova remnants, pulsar wind nebulae and composite objects. The potential of the improved analysis method is demonstrated by investigating the emission above 10 TeV for two objects: the region associated with the shell-type SNR HESS J1731−347 and the PWN candidate associated with PSR J0855−4644 that is coincident with Vela Junior (HESS J0852−463).
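Detections such as these are conventionally quantified with the significance formula of Li & Ma (1983, Eq. 17) computed from on-source and off-source counts. The sketch below uses invented count numbers to show how lowering the background rate at a fixed number of on-counts raises the significance, which is the effect improved background rejection aims at:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Detection significance after Li & Ma (1983), Eq. 17;
    alpha is the on/off exposure ratio."""
    term_on = n_on * math.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * math.log((1 + alpha) * n_off / (n_on + n_off))
    return math.copysign(math.sqrt(2.0 * (term_on + term_off)), n_on - alpha * n_off)

# Invented counts: same on-counts, reduced background after stricter rejection
s_before = li_ma_significance(130, 1000, 0.1)
s_after = li_ma_significance(130, 600, 0.1)
print(round(s_before, 2), round(s_after, 2))
```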
The aim of the doctoral project was to answer the question of whether the systematic word-initial capitalization of nouns, which apart from German is otherwise found only in Luxembourgish, has a function that is advantageous for the reader. The overriding hypothesis was that an advantage arises because the parafoveal perception of the capital letter activates a syntactic category, namely the head of a noun phrase. This perception from the corner of the eye should make it possible to preprocess the following noun. As a result, sentence processing should be facilitated, which should ultimately be reflected in overall faster reading times and fixation durations.
The structure of the project includes three studies, some of which included different participant groups:
Study 1:
Study design: semantic priming using garden-path sentences was intended to reveal the functionality of noun capitalization for the reader
Participant groups: German natives reading German
Study 2:
Study design: same design as study 1, but in English
Participant groups:
English natives without any knowledge of German reading English
English natives who regularly read German reading English
German natives with high proficiency in English reading English
Study 3:
Study design:
Influence of the noun frequency on a potential preprocessing using the boundary paradigm; Study languages: German and English
Participant groups:
German natives reading German
English natives without any knowledge of German reading English
German natives with high proficiency in English reading English
Brief summary: Noun capitalization clearly has an impact on sentence processing in both German and English. However, it could not be confirmed that this impact constitutes a substantial, decisive advantage.
Membrane contact sites are of particular interest in the fields of synthetic biology and biophysics. They are involved in a great variety of cellular functions. They form between two cellular organelles, or between an organelle and the plasma membrane, in order to establish a communication path for molecule transport or signal transmission.
The goal of this research study was to develop an artificial membrane system that can mimic membrane contact sites using bottom-up synthetic biology. For this, a multi-compartmentalised giant unilamellar vesicle (GUV) system was created, with the membrane of the outer vesicle mimicking the plasma membrane and the inner GUVs serving as cellular organelles.
In the following steps, three different strategies were used to achieve an internal membrane-membrane adhesion.
The majority of baryons in the Universe is believed to reside in the intergalactic medium (IGM). This makes the IGM an important component in understanding cosmological structure formation. It is expected to trace the same dark matter distribution as galaxies, forming structures like filaments and clusters. However, whereas galaxies can be observed to be arranged along these large-scale structures, the spatial distribution of the diffuse IGM is not as easily unveiled. Absorption line studies of quasar (QSO) spectra can help with mapping the IGM, as well as the boundary layer between the IGM and galaxies: the circumgalactic medium (CGM). By studying gas in the Local Group as well as in the IGM, this study aims at a better understanding of how the gas is linked to the large-scale structure of the local Universe and to the galaxies residing in that structure.
Chapter 1 gives an introduction to the CGM and IGM, while the methods used in this study are explained in Chapter 2. Chapter 3 starts on a relatively small cosmological scale, namely that of our Local Group (LG), which includes, among others, the Milky Way (MW) and M31. Within the CGM of the MW there exist denser clouds, some of which are infalling while others are moving away from the Galactic disc. To study these high-velocity clouds (HVCs), 29 QSO spectra obtained with the Cosmic Origins Spectrograph (COS) aboard the Hubble Space Telescope (HST) were analysed. Abundances of Si II, Si III, Si IV, C II, and C IV were measured for 69 HVCs belonging to two samples: one in the direction of the LG's barycentre and the other in the anti-barycentre direction. Their velocities range from vLSR = -100 to -400 km/s for the barycentre sample and from vLSR = +100 to +300 km/s for the anti-barycentre sample. Using Cloudy models, these data could then be used to derive gas volume densities for the HVCs. Because of the relationship between the density of a cloud and the pressure of the ambient medium, which is in turn determined by the Galactic radiation field, the distances of the HVCs could be estimated. From this, a subsample of absorbers located in the direction of M31 was found to lie outside of the MW's virial radius, their low densities (log nH ≤ -3.54) making it likely that they are part of the gas between the MW and M31. No such low-density absorbers were found in the anti-barycentre sample. Our results thus hint at gas following the dark matter potential, which would be deeper between the MW and M31, as they are by far the most massive members of the LG.
From this bridge of gas in the LG, this study zooms out to the large-scale structure of the local Universe (z ~ 0) in Chapter 4. Galaxy data from the V8k catalogue and QSO spectra from COS were used to study the relation between the galaxies tracing large-scale filaments and the gas existing outside of those galaxies. This study used the filaments defined in Courtois et al. (2013). A total of 587 Lyman α (Lyα) absorbers were found in the 302 QSO spectra in the velocity range 1070 - 6700 km/s. After selecting sightlines passing through or close to these filaments, model spectra were made for 91 sightlines and 215 (227) Lyα absorbers (components) were measured in this sample. The velocity gradient along each filament was calculated and 74 absorbers were found within 1000 km/s of the nearest filament segment.
In order to determine whether the absorbers are more closely tied to galaxies or to the large-scale structure, equivalent widths of the Lyα absorbers were plotted against both galaxy and filament impact parameters. While stronger absorbers do tend to lie closer to either galaxies or filaments, there is a large scatter in this relation. Despite this large scatter, this study found that the absorbers do not follow a random distribution either: they cluster less strongly around filaments than around galaxies, but more strongly than a random distribution would, as confirmed by a Kolmogorov-Smirnov test.
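The randomness check can be sketched with a two-sample Kolmogorov-Smirnov test. The data below are synthetic stand-ins; the distributions, scales, and sample sizes are assumptions for illustration, not the thesis' measurements:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Hypothetical impact parameters (Mpc): the absorber sample clusters near
# the filaments, the comparison sample is drawn uniformly at random.
absorber_rho = np.abs(rng.normal(loc=0.0, scale=1.5, size=215))
random_rho = rng.uniform(0.0, 10.0, size=215)

# Two-sample Kolmogorov-Smirnov test: a small p-value rejects the null
# hypothesis that both samples come from the same distribution.
stat, p_value = ks_2samp(absorber_rho, random_rho)
print(f"KS statistic = {stat:.3f}, p = {p_value:.2e}")
```

A p-value well below 0.05 would indicate, as in the study, that the absorbers are not distributed randomly with respect to the filaments.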
Furthermore, the column density distribution function found in this study has a slope of −β = −1.63 ± 0.12 for the total sample and −β = −1.47 ± 0.24 for the absorbers within 1000 km/s of a filament. The shallower slope for the latter subsample could indicate an excess of denser absorbers within the filaments, but the two values are consistent within errors. They also agree with the values found by, e.g., Lehner et al. (2007) and Danforth et al. (2016).
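How a slope such as β = 1.63 is recovered from binned column densities can be illustrated with synthetic data, drawn by inverse-transform sampling from an assumed power law; all numbers here are illustrative, not the thesis' data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw hypothetical column densities from f(N) ∝ N^(-beta) with
# beta = 1.63 between N_min and N_max (inverse-transform sampling).
beta, n_min, n_max = 1.63, 1e13, 1e16
a = 1.0 - beta
u = rng.uniform(size=5000)
N = (n_min**a + u * (n_max**a - n_min**a)) ** (1.0 / a)

# Recover the slope: the differential counts dn/dN follow N^(-beta),
# i.e. a straight line of slope -beta in log-log space.
edges = np.logspace(13, 16, 16)
counts, _ = np.histogram(N, bins=edges)
centers = np.sqrt(edges[:-1] * edges[1:])
dN = np.diff(edges)
mask = counts > 0
slope, _ = np.polyfit(np.log10(centers[mask]),
                      np.log10(counts[mask] / dN[mask]), 1)
print(f"recovered slope = {slope:.2f}")  # close to -1.63
```

The fitted slope scatters around −1.63 within the Poisson noise of the binned counts, mirroring how the quoted uncertainties arise.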
The picture that emerges from this study regarding the relation between the IGM and the large-scale structure in the local Universe fits with what is found in other studies: while at least part of the gas traces the same filamentary structure as galaxies, the relation is complex. This study has shown that by taking a large sample of sightlines and comparing the data gathered from those with galaxy data, it is possible to study the gaseous large-scale structure. This approach can be used in the future together with simulations to get a better understanding of structure formation and evolution in the Universe.
Mental health problems are highly prevalent worldwide. Fortunately, psychotherapy has proven highly effective in the treatment of a number of mental health issues, such as depression and anxiety disorders. In contrast, psychotherapy training as currently practised cannot be considered evidence-based, so there is much room for improvement. The integration of simulated patients (SPs) into psychotherapy training and research is on the rise. SPs originate from medical education and have, in a number of studies, been demonstrated to contribute to effective learning environments. Nevertheless, criticism has been voiced regarding the authenticity of SP portrayals, yet few studies have examined this to date.
Based on these considerations, this dissertation explores SPs’ authenticity while portraying a mental disorder, depression. Altogether, the present cumulative dissertation consists of three empirical papers. At the time of printing, Paper I and Paper III have been accepted for publication, and Paper II is under review after a minor revision.
First, Paper I develops and validates an observer-based rating scale to assess SP authenticity in psychotherapeutic contexts. Based on these preliminary findings, it can be concluded that the Authenticity of Patient Demonstrations scale is a reliable and valid tool that can be used for recruiting, training, and evaluating the authenticity of SPs.
Second, Paper II tests whether student SPs are perceived as more authentic after they receive an in-depth role-script compared to those SPs who only receive basic information on the patient case. To test this assumption, a randomised controlled study design was implemented and the hypothesis could be confirmed. As a consequence, when engaging SPs, an in-depth role-script with details, e.g. on nonverbal behaviour and feelings of the patient, should be provided.
Third, Paper III demonstrates that psychotherapy trainees cannot distinguish between trained SPs and real patients and therefore suggests that, with proper training, SPs are a promising training method for psychotherapy.
Altogether, the dissertation shows that SPs can be trained to portray a depressive patient authentically and thus delivers promising evidence for the further dissemination of SPs.
The central element of this work is the synthesis and characterization of practically usable ionogels. The polymer ionogels are based on the model polymer poly(methyl methacrylate). Ionic liquids derived from the widely used imidazolium cation serve as additives. The properties of the embedded ionic liquids provide the ionogels with their functionality. The functionality of the respective gels, and thus the transfer of the properties of the ionic liquids to the ionogels, was examined and confirmed in this work using numerous characterization techniques. Through ionogel formation, macroscopic ionogel objects in the form of films and nonwoven mats were produced. Film casting and electrospinning were employed as methods to produce these films and mats, each resulting in a model system. The work is therefore divided into the thematic areas of "electrically semiconducting ionogel films" and "antimicrobially active ionogel mats". The use of triiodide-containing ionic liquids and a polymer matrix in a discontinuous casting process yields electrically semiconducting ionogel films. These flexible and transparent films may become the focus of numerous new fields of application in flexible electronics. Electrospinning poly(methyl methacrylate) with an ionic liquid produced a homogeneous ionogel mat, which serves as a model for transferring the antimicrobial properties of ionic liquids to porous structures for filtration. At the same time, it is the first example of a copper chloride-containing ionogel. Ionogels are attractive materials with numerous potential applications. This work extends the spectrum of ionogels by an electrically semiconducting and an antimicrobially active ionogel.
At the same time, this work adds to the class of ionic liquids three examples of electrically semiconducting ionic liquids as well as numerous copper(II) chloride-based ionic liquids.
Supernova remnants (SNRs) are discussed as the most promising sources of galactic cosmic rays (CRs). Diffusive shock acceleration (DSA) theory predicts particle spectra in rough agreement with observations. Upon closer inspection, however, the photon spectra of observed SNRs indicate that the particle spectra produced at SNR shocks deviate from the standard expectation. This work suggests a viable explanation for a softening of the particle spectra in SNRs. The basic idea is the re-acceleration of particles in the turbulent region immediately downstream of the shock. This thesis shows that the re-acceleration of particles by fast-mode waves in the downstream region can be efficient enough to impact particle spectra over several decades in energy. To demonstrate this, a generic SNR model is presented in which the evolution of particles is described by the reduced transport equation for CRs. It is shown that the resulting particle spectra and the corresponding synchrotron spectra are significantly softer than in the standard case. Next, this work outlines RATPaC, a code developed to model particle acceleration and the corresponding photon emission in SNRs. RATPaC solves the particle transport equation in test-particle mode using hydrodynamic simulations of the SNR plasma flow. The background magnetic field can either be computed from the induction equation or follow analytic profiles. This work presents an extended version of RATPaC that accounts for stochastic re-acceleration by fast-mode waves, which provide diffusion of particles in momentum space. This version is then applied to model the young historical SNR Tycho. According to radio observations, Tycho's SNR features a radio spectral index of approximately −0.65. In previous modeling approaches, this has been attributed to pronounced Alfvénic drift, which is assumed to operate in the shock vicinity. In this work, the problems and inconsistencies of this scenario are discussed.
Instead, stochastic re-acceleration of electrons in the immediate downstream region of Tycho's SNR is suggested as the cause of the soft radio spectrum. Furthermore, this work investigates two different scenarios for the magnetic-field distribution inside Tycho's SNR. It is concluded that magnetic-field damping is needed to account for the filaments observed in the radio range. Two models are presented for Tycho's SNR, both of which feature a strong hadronic contribution; a purely leptonic model is thus considered very unlikely. In addition to the detailed modeling of Tycho's SNR, this dissertation presents a relatively simple one-zone model for the young SNR Cassiopeia A and an interpretation of the recently analyzed VERITAS and Fermi-LAT data. It shows that the γ-ray emission of Cassiopeia A cannot be explained without a hadronic contribution and that the remnant accelerates protons up to TeV energies. Cassiopeia A is thus unlikely to be a PeVatron.
Natural products have proved to be a major resource in the discovery and development of many pharmaceuticals that are in use today. A wide variety of biologically active natural products contain conjugated polyenes or benzofuran structures. Therefore, new synthetic methods for the construction of such building blocks are of great interest to synthetic chemists. The recently developed one-pot tethered ring-closing metathesis approach allows for the formation of Z,E-dienoates with high stereoselectivity. The extension of this method with a Julia-Kocienski olefination protocol would allow for the formation of conjugated trienes in a stereoselective manner. This strategy was applied in the total synthesis of the conjugated-triene-containing (+)-bretonin B. Additionally, investigations of cross metathesis using methyl-substituted olefins were pursued. This methodology was applied, as a one-pot cross metathesis/ring-closing metathesis sequence, in the total synthesis of the benzofuran-containing 7-methoxywutaifuranal. Finally, the design and synthesis of a catalyst for stereoretentive metathesis in aqueous media was investigated.
An ever-increasing number of prediction models is published every year in different medical specialties. Prognostic or diagnostic in nature, these models support medical decision making by utilizing one or more items of patient data to predict outcomes of interest, such as mortality or disease progression. While different computer tools exist that support clinical predictive modeling, I observed that the state of the art falls short of addressing the needs of research clinicians. When it comes to model development, current support tools either 1) target specialist data engineers, requiring advanced coding skills, or 2) cater to a general-purpose audience and therefore do not address the specific needs of clinical researchers. Furthermore, barriers to data access across institutional silos, cumbersome model reproducibility and extended experiment-to-result times significantly hamper the validation of existing models. Similarly, without access to interpretable explanations, which allow a given model to be fully scrutinized, acceptance of machine learning approaches will remain limited. Adequate tool support, i.e., a software artifact more targeted at the needs of clinical modeling, can help mitigate the challenges identified with respect to model development, validation and interpretation. To this end, I conducted interviews with modeling practitioners in health care to better understand the modeling process itself and to ascertain in which aspects adequate tool support could advance the state of the art. The functional and non-functional requirements identified served as the foundation for a software artifact that can be used for modeling outcome and risk prediction in health research. To establish the appropriateness of this approach, I implemented a use-case study in the nephrology domain for acute kidney injury, which was validated in two different hospitals.
Furthermore, I conducted a user evaluation to ascertain whether such an approach provides benefits compared to the state of the art and the extent to which clinical practitioners could benefit from it. Finally, when updating models for external validation, practitioners need to apply feature selection approaches to pinpoint the most relevant features, since electronic health records tend to contain many candidate predictors. Building upon interpretability methods, I developed an explanation-driven recursive feature elimination approach. This method was comprehensively evaluated against state-of-the-art feature selection methods. This thesis' main contributions are therefore threefold: 1) designing and developing a software artifact tailored to the specific needs of the clinical modeling domain, 2) demonstrating its application in a concrete case in the nephrology context and 3) developing and evaluating a new feature selection approach, applicable in a validation context, that builds upon interpretability methods. In conclusion, I argue that appropriate tooling, which relies on standardization and parametrization, can support rapid model prototyping and collaboration between clinicians and data scientists in clinical predictive modeling.
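The general idea of explanation-driven recursive feature elimination can be sketched as follows. Permutation importance stands in here for the interpretability method; the estimator, the importance measure, the stopping rule, and the synthetic data are all assumptions for illustration, not the thesis' actual approach:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an electronic-health-record feature matrix.
X, y = make_classification(n_samples=400, n_features=12, n_informative=4,
                           random_state=0)
features = list(range(X.shape[1]))
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Explanation-driven recursive elimination: refit, explain on held-out
# data, drop the least important feature, repeat until the target
# number of features remains.
while len(features) > 4:
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr[:, features], y_tr)
    imp = permutation_importance(model, X_val[:, features], y_val,
                                 n_repeats=5, random_state=0)
    features.pop(int(np.argmin(imp.importances_mean)))

print("retained features:", features)
```

Computing the importances on a validation split, rather than on the training data, keeps the elimination order aligned with out-of-sample behaviour, which matters in the external-validation setting described above.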
In the light of climate change, rising demand for agricultural products and the intensification and specialization of agricultural systems, ensuring an adequate and reliable supply of food is fundamental for food security. Maintaining diversity and redundancy has been postulated as one generic principle to increase the resilience of agricultural production and other ecosystem services. For example, if one crop fails due to climate instability and extreme events, others can compensate for the losses. Crop diversity might be particularly important if different crops show asynchronous production trends. Furthermore, spatial heterogeneity has been suggested to increase stability at larger scales, as production losses in some areas can be buffered by surpluses in undisturbed ones. Besides systematically investigating the mechanisms underlying stability, it is important to identify transformative pathways that foster them.
In my thesis, I aim at answering the following questions: (i) How does yield stability differ between nations, regions and farms, and what is the effect of crop diversity on yield stability in relation to agricultural inputs, climate heterogeneity, climate instability and time at the national, regional or farm level? (ii) Is asynchrony between crops a better predictor of production stability than crop diversity? (iii) What is the effect of asynchrony between and within crops on stability and how is it related to crop diversity and space, respectively? (iv) What is the state of the art and what are knowledge gaps in exploring resilience and its multidimensionality in ecological and social-ecological systems with agent-based models and what are potential ways forward?
In the first chapter, I provide the theoretical background for the subsequent analyses. I stress the need to better understand the resilience of social-ecological systems and particularly the stability of agricultural production. Moreover, I introduce diversity and spatial heterogeneity as two prominently discussed resilience mechanisms and describe approaches to assess resilience.
In the second chapter, I combined agricultural and climate data at three levels of organization and spatial extents, using statistical analyses to investigate yield stability patterns and their relation to crop diversity, fertilizer, irrigation, climate heterogeneity, climate instability and time for nations globally, regions in Europe and farms in Germany. Yield stability decreased from the national to the farm level. Several nations and regions contributed substantially to larger-scale stability. Crop diversity was positively associated with yield stability across all three levels of organization. This effect was typically more pronounced at smaller scales and in variable climates. In addition to crop diversity, climate heterogeneity was an important stabilizing mechanism, especially at larger scales. These results confirm the stabilizing effect of crop diversity and spatial heterogeneity, yet their importance depends on the scale and on agricultural management.
Building on the findings of the second chapter, in the third chapter I deepened my research on the effect of crop diversity at the national level. In particular, I tested whether asynchrony between crops, i.e. between the temporal production patterns of different crops, predicts agricultural production stability better than crop diversity does. The stabilizing effect of asynchrony was several times larger than that of crop diversity; asynchrony is thus one important property that can explain why higher diversity supports the stability of national food production. Therefore, strategies to stabilize agricultural production through crop diversification also need to account for the asynchrony of the crops considered.
The previous chapters suggest that both asynchrony between crops and spatial heterogeneity are important stabilizing mechanisms. In the fourth chapter, I therefore aimed at better understanding the relative importance of asynchrony between and within crops, i.e. between the temporal production patterns of different crops and between the temporal production patterns of different cultivation areas of the same crop. Understanding their relative importance is essential to inform agricultural management decisions, but so far it has hardly been assessed. To address this, I used crop production data to study the effect of asynchrony between and within crops on the stability of agricultural production in regions in Germany and nations in Europe. Both asynchrony between and within crops consistently stabilized agricultural production. Adding crops increased asynchrony between crops, yet this effect levelled off after eight crops in regions in Germany and after four crops in nations in Europe. Combining as few as ten farms within a region already led to high asynchrony within crops, indicating distinct production patterns, while this effect was weaker when combining multiple regions within a nation. The results suggest that both mechanisms need to be considered in agricultural management strategies that strive for more resilient farming systems.
The analyses in the foregoing chapters focused on different levels of organization, scales and factors potentially influencing agricultural stability. However, these statistical analyses are restricted by data availability and investigate correlative relationships; they therefore cannot provide a mechanistic understanding of the actual processes underlying resilience. In this regard, agent-based models (ABMs) are a promising tool. Besides their ability to measure different properties and to integrate multiple situations through extensive manipulation in a fully controlled system, they can capture the emergence of system resilience from individual interactions and feedbacks across different levels of organization. In the fifth chapter, I therefore reviewed the state of the art and potential knowledge gaps in exploring resilience and its multidimensionality in ecological and social-ecological systems with ABMs. Next, I derived recommendations for a more effective use of ABMs in resilience research. The review suggests that the potential of ABMs is not utilized in most models, as they typically focus on a single dimension of resilience and are mostly limited to one reference state, disturbance type and scale. Moreover, only few studies explicitly test the ability of different mechanisms to support resilience. To solve real-world problems related to the resilience of complex systems, ABMs need to assess multiple stability properties for different situations and under consideration of the mechanisms that are hypothesized to render a system resilient.
In the sixth chapter, I discuss the major conclusions that can be drawn from the previous chapters. Moreover, I showcase the use of simulation models to identify management strategies to enhance asynchrony and thus stability, and the potential of ABMs to identify pathways to implement such strategies.
The results of my thesis confirm the stabilizing effect of crop diversity, yet its importance depends on the scale, agricultural management and climate. Moreover, strategies to stabilize agricultural production through crop diversification also need to account for the asynchrony of the crops considered. As spatial heterogeneity and particularly asynchrony within crops strongly enhances stability, integrated management approaches are needed that simultaneously address multiple resilience mechanisms at different levels of organization, scales and time horizons. For example, the simulation suggests that only increasing the number of crops at both the pixel and landscape level avoids trade-offs between asynchrony between and within crops. If their potential is better exploited, agent-based models have the capacity to systematically assess resilience and to identify comprehensive pathways towards resilient farming systems.
The propagation of test fields, such as electromagnetic, Dirac or linearized gravity, on a fixed spacetime manifold is often studied by using the geometrical optics approximation. In the limit of infinitely high frequencies, the geometrical optics approximation provides a conceptual transition between the test field and an effective point-particle description. The corresponding point-particles, or wave rays, coincide with the geodesics of the underlying spacetime. For most astrophysical applications of interest, such as the observation of celestial bodies, gravitational lensing, or the observation of cosmic rays, the geometrical optics approximation and the effective point-particle description represent a satisfactory theoretical model. However, the geometrical optics approximation gradually breaks down as test fields of finite frequency are considered.
In this thesis, we consider the propagation of test fields on spacetime, beyond the leading-order geometrical optics approximation. By performing a covariant Wentzel-Kramers-Brillouin analysis for test fields, we show how higher-order corrections to the geometrical optics approximation can be considered. The higher-order corrections are related to the dynamics of the spin internal degree of freedom of the considered test field. We obtain an effective point-particle description, which contains spin-dependent corrections to the geodesic motion obtained using geometrical optics. This represents a covariant generalization of the well-known spin Hall effect, usually encountered in condensed matter physics and in optics. Our analysis is applied to electromagnetic and massive Dirac test fields, but it can easily be extended to other fields, such as linearized gravity. In the electromagnetic case, we present several examples where the gravitational spin Hall effect of light plays an important role. These include the propagation of polarized light rays on black hole spacetimes and cosmological spacetimes, as well as polarization-dependent effects on the shape of black hole shadows. Furthermore, we show that our effective point-particle equations for polarized light rays reproduce well-known results, such as the spin Hall effect of light in an inhomogeneous medium, and the relativistic Hall effect of polarized electromagnetic wave packets encountered in Minkowski spacetime.
Spatiotemporal variations of key air pollutants and greenhouse gases in the Himalayan foothills
(2021)
South Asia is a rapidly developing, densely populated and highly polluted region that is facing the impacts of increasing air pollution and climate change, and yet it remains one of the least studied regions of the world scientifically. In recognition of this situation, this thesis focuses on studying (i) the spatial and temporal variation of key greenhouse gases (CO2 and CH4) and air pollutants (CO and O3) and (ii) the vertical distribution of air pollutants (PM, BC) in the foothills of the Himalaya. Five sites were selected in the Kathmandu Valley, the capital region of Nepal, along with two sites outside of the valley in the Makawanpur and Kaski districts, and measurements were conducted there during 2013-2014 and 2016. These measurements are analyzed in this thesis.
The CO measurements at multiple sites in the Kathmandu Valley showed a clear diurnal cycle: morning and evening levels were high, with an afternoon dip. There are slight differences in the diurnal cycles of CO2 and CH4, with the CO2 and CH4 mixing ratios increasing after the afternoon dip until the morning peak the next day. The mixing layer height (MLH) of the nocturnal stable layer is relatively constant (~200 m) during the night, after which it transitions to a convective mixing layer during the day, and the MLH increases up to 1200 m in the afternoon. Pollutants are thus largely trapped in the valley from the evening until sunrise the following day, and the concentration of pollutants increases due to emissions during the night. During the afternoon, the pollutants are diluted through circulation by the valley winds after the break-up of the mixing layer. The major emission sources of GHGs and air pollutants in the valley are the transport sector, residential cooking, brick kilns, trash burning, and agro-residue burning. Brick industries are influential in the winter and pre-monsoon seasons. The contribution of regional forest fires and agro-residue burning is seen during the pre-monsoon season. In addition, relatively high CO values were also observed at the valley outskirts (Bhimdhunga and Naikhandi), which indicates the contribution of regional emission sources. This was also supported by the presence of higher concentrations of O3 during the pre-monsoon season.
The mixing ratios of CO2 (419.3 ± 6.0 ppm) and CH4 (2.192 ± 0.066 ppm) in the valley were much higher than at background sites, including the Mauna Loa observatory (CO2: 396.8 ± 2.0 ppm, CH4: 1.831 ± 0.110 ppm) and Waliguan (CO2: 397.7 ± 3.6 ppm, CH4: 1.879 ± 0.009 ppm), China, as well as at the urban site Shadnagar (CH4: 1.92 ± 0.07 ppm) in India.
The daily 8-hour maximum O3 average in the Kathmandu Valley exceeded the WHO-recommended value on more than 80% of the days during the pre-monsoon period, which represents a significant risk for human health and ecosystems in the region. Moreover, in the measurements of the vertical distribution of particulate matter, which were made using an ultralight aircraft and are the first of their kind in the region, an elevated polluted layer at around 3000 m a.s.l. was detected over the Pokhara Valley. This layer could be associated with large-scale regional transport of pollution. These contributions towards understanding the distributions of key air pollutants and their main sources provide helpful information for developing management plans and policies to help reduce the risks for the millions of people living in the region.
Centroid moment tensor inversion can provide insight into ongoing tectonic processes and active faults. In the Alpine mountains (central Europe), challenges arise from the low signal-to-noise ratios of earthquakes with small to moderate magnitudes and from complex wave propagation effects through the heterogeneous crustal structure of the mountain belt. In this thesis, I make use of the temporary installation of the dense AlpArray seismic network (AASN) to establish a workflow for studying seismic source processes and to enhance the knowledge of Alpine seismicity. The cumulative thesis comprises four publications on the topics of large seismic networks, seismic source processes in the Alps, their link to tectonics and the stress field, and the inclusion of small-magnitude earthquakes in studies of active faults.
Dealing with hundreds of stations of the dense AASN requires an automated assessment of data and metadata quality. I developed the open-source toolbox AutoStatsQ to perform an automated data quality control. Its first application, to the AlpArray seismic network, revealed significant errors in amplitude gains and sensor orientations. A second application of the orientation test, based on Rayleigh-wave polarization, to the Turkish KOERI network further illustrated its potential in comparison to a P-wave polarization method. Taking advantage of the gain and orientation results for the AASN, I tested different inversion settings and input data types to approach the specific challenges of centroid moment tensor (CMT) inversions in the Alps. A comparative study was carried out to define the best-fitting procedures.
The application to four years of seismicity in the Alps (2016-2019) substantially increased the number of moment tensor solutions in the region. We provide a list of moment tensor solutions down to magnitude Mw 3.1. Spatial patterns of typical focal mechanisms were analyzed in the seismotectonic context by comparing them to long-term seismicity, historical earthquakes and observations of strain rates. Additionally, we use our MT solutions to investigate stress regimes and orientations along the Alpine chain. Finally, I addressed the challenge of including smaller-magnitude events in the study of active faults and source processes. The open-source toolbox Clusty was developed for the clustering of earthquakes based on waveforms recorded across a network of seismic stations. The similarity of waveforms reflects both the location and the similarity of the source mechanisms. The clustering therefore offers the opportunity to identify earthquakes of similar faulting styles, even when centroid moment tensor inversion is not possible due to low signal-to-noise ratios of surface waves or oversimplified velocity models. The toolbox is described through an application to the Zakynthos 2018 aftershock sequence, and I subsequently discuss its potential application to weak earthquakes (Mw < 3.1) in the Alps.
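Waveform-based clustering of the kind Clusty performs can be illustrated in a deliberately minimal form: similarity via normalized cross-correlation at zero lag, then hierarchical clustering on the resulting distances. The synthetic single-station waveforms, the templates, and the linkage settings below are assumptions; the actual toolbox works with network-wide cross-correlations and offers further clustering options:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)

# Hypothetical waveforms: two groups of events share a template each
# (standing in for similar location and faulting style), plus noise.
t = np.linspace(0, 1, 200)
templates = [np.sin(12 * np.pi * t) * np.exp(-3 * t),
             np.sin(7 * np.pi * t) * np.exp(-5 * t)]
waveforms = np.array([templates[i % 2] + 0.2 * rng.normal(size=t.size)
                      for i in range(10)])

# Similarity = normalized cross-correlation at zero lag; cluster on the
# distance 1 - cc with average linkage and a fixed distance threshold.
norm = waveforms / np.linalg.norm(waveforms, axis=1, keepdims=True)
cc = norm @ norm.T
dist = 1.0 - cc[np.triu_indices(10, k=1)]
labels = fcluster(linkage(dist, method="average"), t=0.5,
                  criterion="distance")
print(labels)
```

Events sharing a template end up in the same flat cluster, which is the mechanism by which waveform similarity can group events of similar faulting style without requiring a moment tensor inversion.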
The Lie group method in combination with the Magnus expansion is utilized to develop a universal method applicable to solving a Sturm–Liouville problem (SLP) of any order with arbitrary boundary conditions. It is shown that the method has the ability to solve direct regular and some singular SLPs of even order (tested up to order eight), with a mix of boundary conditions (including non-separable and finite singular endpoints), accurately and efficiently.
The present technique is successfully applied to overcome the difficulties in finding suitable sets of eigenvalues so that the inverse SLP problem can be effectively solved.
Next, a concrete implementation of the inverse Sturm–Liouville algorithm proposed by Barcilon (1974) is provided. Furthermore, the computational feasibility and applicability of this algorithm to solving inverse Sturm–Liouville problems of orders n = 2, 4 is verified successfully. It is observed that the method is successful even in the presence of significant noise, provided that the assumptions of the algorithm are satisfied.
In conclusion, this work provides methods that can be adapted successfully for solving a direct (regular/singular) or inverse SLP of an arbitrary order with arbitrary boundary conditions.
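For orientation, the simplest direct SLP of order two, -u'' + q(x)u = λu with Dirichlet boundary conditions, can be solved numerically with a plain finite-difference discretisation. The sketch below is a minimal baseline for comparison, not the Lie group / Magnus-expansion scheme developed in the thesis; grid size and truncation are assumptions.

```python
import numpy as np

def slp_eigenvalues(q, a, b, n=1000, k=5):
    """First k eigenvalues of -u'' + q(x) u = lambda * u on [a, b] with
    Dirichlet boundary conditions, via central finite differences."""
    x = np.linspace(a, b, n + 2)[1:-1]      # interior grid points
    h = (b - a) / (n + 1)
    main = 2.0 / h**2 + q(x)                # tridiagonal main diagonal
    off = -np.ones(n - 1) / h**2            # off-diagonals of -u''
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(A)[:k]        # ascending eigenvalues
```

For q = 0 on [0, π] the exact eigenvalues are m² (m = 1, 2, 3, ...), which the discretisation reproduces to within O(h²).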
Major challenges during geothermal exploration and exploitation include the structural-geological characterization of the geothermal system and the application of sustainable monitoring concepts to explain changes in a geothermal reservoir during production and/or reinjection of fluids. In the absence of sufficiently permeable reservoir rocks, faults and fracture networks are preferred drilling targets because they can facilitate the migration of hot and/or cold fluids. In volcanic-geothermal systems, considerable amounts of gas can be released at the Earth's surface, often along such fluid-releasing structures.
In this thesis, I developed and evaluated different methodological approaches and measurement concepts to determine the spatial and temporal variation of several soil gas parameters and to understand the structural control on fluid flow. In order to validate their potential as innovative geothermal exploration and monitoring tools, these approaches were applied to three different volcanic-geothermal systems. At each site, an individual survey design was developed to address the site-specific questions.
The first study presents results of the combined measurement of CO2 flux, ground temperatures, and isotope ratios (δ13CCO2, 3He/4He) across the main production area of the Los Humeros geothermal field, to identify locations with a connection to its supercritical (T > 374 °C and P > 221 bar) geothermal reservoir. The systematic and large-scale (25 x 200 m) CO2 flux scouting survey proved to be a fast and flexible way to identify areas of anomalous degassing. Subsequent high-resolution surveys revealed the actual extent and heterogeneous pattern of the anomalous degassing areas. These patterns were related to the internal hydraulic architecture of the faults and made it possible to assess structural settings favourable for fluid flow, such as fault intersections. Finally, areas of previously unknown, structurally controlled permeability with a connection to the superhot geothermal reservoir were identified, which represent promising targets for future geothermal exploration and development.
In the second study, I introduce a novel monitoring approach that uses variations in CO2 flux to track reservoir changes induced by fluid reinjection. For this purpose, an automated, multi-chamber CO2 flux system was deployed across the damage zone of a major normal fault crossing the Los Humeros geothermal field. Based on the results of the CO2 flux scouting survey, a suitable site with a connection to the geothermal reservoir was selected, as identified by hydrothermal CO2 degassing and hot ground temperatures (> 50 °C). The results revealed a response of gas emissions to changes in reinjection rates within 24 h, demonstrating active hydraulic communication between the geothermal reservoir and the Earth's surface. This is a promising monitoring strategy that provides nearly real-time, in-situ data about changes in the reservoir and allows a timely reaction to unwanted changes (e.g., pressure decline, seismicity).
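The response-delay analysis behind such a monitoring setup can be sketched as a lagged correlation between the reinjection-rate and surface CO2-flux time series. This is an illustrative simplification, not the study's actual processing chain; the hourly sampling and the synthetic step signals in the usage example are assumptions.

```python
import numpy as np

def response_lag(injection, flux, max_lag):
    """Lag (in samples, e.g. hours) at which the CO2 flux series
    correlates best with the reinjection-rate series, plus that
    correlation value."""
    inj = (injection - injection.mean()) / injection.std()
    fl = (flux - flux.mean()) / flux.std()
    corrs = []
    for lag in range(max_lag + 1):
        if lag == 0:
            corrs.append(np.mean(inj * fl))
        else:
            # flux shifted backwards by `lag` samples against injection
            corrs.append(np.mean(inj[:-lag] * fl[lag:]))
    best = int(np.argmax(np.abs(corrs)))
    return best, float(corrs[best])
```

Applied to a reinjection-rate step that reappears in the flux record 24 samples later, the function recovers a lag of about 24, matching the roughly 24 h response reported above.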
The third study presents results from the Aluto geothermal field in Ethiopia, where an area-wide, multi-parameter analysis consisting of measurements of CO2 flux, 222Rn and 220Rn activity concentrations, and ground temperatures was conducted to detect hidden permeable structures. 222Rn and 220Rn activity concentrations were evaluated as soil gas parameters complementary to CO2 flux, to investigate their potential for understanding tectono-volcanic degassing. The combined measurement of all parameters enabled the development of soil gas fingerprints, a novel visualization approach. Depending on the magnitude of gas emissions and their migration velocities, the study area was divided into volcanic (heat), tectonic (structures), and volcano-tectonic dominated areas. Based on these concepts, the volcano-tectonic dominated areas, where hot hydrothermal fluids migrate along permeable faults, present the most promising targets for future geothermal exploration and development in this field. Two such areas were identified in the south and south-east, which have not yet been targeted for geothermal exploitation. Furthermore, two previously unknown areas of structurally controlled permeability could be identified from the 222Rn and 220Rn activity concentrations.
Finally, the fourth study presents a novel measurement approach to detect structurally controlled CO2 degassing in the Ngapouri geothermal area, New Zealand. For the first time, the tunable diode laser (TDL) method was applied in a low-degassing geothermal area to evaluate its potential as a geothermal exploration method. Although the sampling approach is based on profile measurements, which leads to low spatial resolution, the results showed a link between known or inferred faults and increased CO2 concentrations. Thus, the TDL method proved successful in determining structurally controlled permeability, even in areas where no obvious geothermal activity is present. Once an area of anomalous CO2 concentrations has been identified, it can easily be complemented by CO2 flux grid measurements to determine the extent and orientation of the degassing segment.
With the results of this work, I was able to demonstrate the applicability of systematic and area-wide soil gas measurements for geothermal exploration and monitoring purposes. In particular, the combination of different soil gases measured across different networks enables the identification and characterization of fluid-bearing structures, yet has not been used or tested as standard practice. The individual studies present efficient, cost-effective workflows and demonstrate a hands-on approach to the successful and sustainable exploration and monitoring of geothermal resources, minimizing the resource risk during geothermal project development. Finally, to advance the understanding of the complex structure and dynamics of geothermal systems, a combination of comprehensive, cutting-edge geological, geochemical, and geophysical exploration methods is essential.
Massive Open Online Courses (MOOCs) open up new opportunities to learn a wide variety of skills online and are thus well suited for individual education, especially where proficient teachers are not available locally. At the same time, modern society is undergoing a digital transformation, requiring the training of large numbers of current and future employees. Abstract thinking, logical reasoning, and the need to formulate instructions for computers are becoming increasingly relevant. A holistic way to train these skills is to learn how to program. Programming, in addition to being a mental discipline, is also considered a craft, and practical training is required to achieve mastery. In order to effectively convey programming skills in MOOCs, practical exercises are incorporated into the course curriculum to offer students the necessary hands-on experience to reach an in-depth understanding of the programming concepts presented. Our preliminary analysis showed that while being an integral and rewarding part of courses, practical exercises bear the risk of overburdening students who are struggling with conceptual misunderstandings and unfamiliar syntax. In this thesis, we develop, implement, and evaluate different interventions with the aim to improve the learning experience, sustainability, and success of online programming courses. Data from four programming MOOCs, with a total of over 60,000 participants, are employed to determine criteria for practical programming exercises best suited for a given audience.
Based on over five million executions and scoring runs from students' task submissions, we deduce exercise difficulties, students' patterns in approaching the exercises, and potential flaws in exercise descriptions as well as preparatory videos. The primary issue in online learning is that students face a social gap caused by their isolated physical situation. Each individual student usually learns alone in front of a computer and suffers from the absence of a pre-determined time structure as provided in traditional school classes. Furthermore, online learning usually presses students into a one-size-fits-all curriculum, which presents the same content to all students, regardless of their individual needs and learning styles. Any personalization of content, or individual feedback on the problems students encounter, is mostly ruled out by the discrepancy between the number of learners and the number of instructors. This places high demands on the self-motivation and determination of MOOC participants. Social distance exists between individual students as well as between students and course instructors. It decreases engagement and poses a threat to learning success. Within this research, we approach the identified issues within MOOCs and suggest scalable technical solutions, improving social interaction and balancing content difficulty.
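Deducing an empirical exercise difficulty from scoring runs can be sketched under a simplifying assumption: difficulty is one minus the mean of each student's best achieved score. The record format and the function name are illustrative, not the thesis's actual pipeline.

```python
from collections import defaultdict

def exercise_difficulty(submissions):
    """Empirical difficulty per exercise from scoring runs.

    `submissions` is an iterable of (student_id, exercise_id, score)
    tuples with scores normalized to [0, 1]; repeated attempts by the
    same student count via their best score only."""
    best = defaultdict(dict)                 # exercise -> student -> best
    for student, exercise, score in submissions:
        prev = best[exercise].get(student, 0.0)
        best[exercise][student] = max(prev, score)
    return {ex: 1.0 - sum(s.values()) / len(s) for ex, s in best.items()}
```

Exercises that most students eventually solve fully get a difficulty near 0; exercises where even best attempts score low approach 1, flagging candidates for overburdening content or flawed descriptions.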
Our contributions include situational interventions, approaches for personalizing educational content, and concepts for fostering collaborative problem-solving. With these approaches, we reduce counterproductive struggle and provide improvements applicable to future programming MOOCs. We evaluate our approaches and methods in detail to improve programming courses for students as well as instructors and to advance the state of knowledge in online education.
Data gathered from our experiments show that receiving peer feedback on one's programming problems improves overall course scores by up to 17%. Merely the act of phrasing a question about one's problem improved overall scores by about 14%. The rate of students reaching out for help was significantly improved by situational just-in-time interventions. Request for Comment interventions increased the share of students asking for help by up to 158%. Data from our four MOOCs further provide detailed insight into the learning behavior of students. We outline additional significant findings with regard to student behavior and demographic factors. Our approaches, the technical infrastructure, the numerous educational resources developed, and the data collected provide a solid foundation for future research.
The goal of this dissertation is to empirically evaluate the predictions of two classes of models applied to language processing: the similarity-based interference models (Lewis & Vasishth, 2005; McElree, 2000) and the group of smaller-scale accounts that we will refer to as faulty encoding accounts (Eberhard, Cutting, & Bock, 2005; Bock & Eberhard, 1993). Both types of accounts make predictions with regard to processing the same class of structures: sentences containing a non-subject (interfering) noun in addition to a subject noun and a verb. Both accounts make the same predictions for processing ungrammatical sentences with a number-mismatching interfering noun, and this prediction finds consistent support in the data. However, the similarity-based interference accounts predict similar effects not only for morphosyntactic, but also for the semantic level of language organization. We verified this prediction in three single-trial online experiments, where we found consistent support for the predictions of the similarity-based interference account. In addition, we report computational simulations further supporting the similarity-based interference accounts. The combined evidence suggests that the faulty encoding accounts are not required to explain comprehension of ill-formed sentences.
For the processing of grammatical sentences, the accounts make conflicting predictions, and neither the slowdown predicted by the similarity-based interference account, nor the complementary slowdown predicted by the faulty encoding accounts were systematically observed. The majority of studies found no difference between the compared configurations. We tested one possible explanation for the lack of predicted difference, namely, that both slowdowns are present simultaneously and thus conceal each other. We decreased the amount of similarity-based interference: if the effects were concealing each other, decreasing one of them should allow the other to surface. Surprisingly, throughout three larger-sample single-trial online experiments, we consistently found the slowdown predicted by the faulty encoding accounts, but no effects consistent with the presence of inhibitory interference.
The overall pattern of the results observed across all the experiments reported in this dissertation is consistent with previous findings: predictions of the interference accounts for the processing of ungrammatical sentences receive consistent support, but the predictions for the processing of grammatical sentences are not always met. Recent proposals by Nicenboim et al. (2016) and Mertzen et al. (2020) suggest that interference might arise only in people with high working memory capacity or under deep processing mode. Following these proposals, we tested whether interference effects might depend on the depth of processing: we manipulated the complexity of the training materials preceding the grammatical experimental sentences while making no changes to the experimental materials themselves. We found that the slowdown predicted by the faulty encoding accounts disappears in the deep processing mode, but the effects consistent with the predictions of the similarity-based interference account do not arise.
Independently of whether similarity-based interference arises under deep processing mode or not, our results suggest that the faulty encoding accounts cannot be dismissed since they make unique predictions with regard to processing grammatical sentences, which are supported by data. At the same time, the support is not unequivocal: the slowdowns are present only in the superficial processing mode, which is not predicted by the faulty encoding accounts. Our results might therefore favor a much simpler system that superficially tracks number features and is distracted by every plural feature.
While the last few decades have seen impressive improvements in several areas in Natural Language Processing, asking a computer to make sense of the discourse of utterances in a text remains challenging. There are several different theories that aim to describe and analyse the coherent structure that a well-written text exhibits. These theories have varying degrees of applicability and feasibility for practical use. Arguably the most data-driven of these theories is the paradigm that comes with the Penn Discourse TreeBank, a corpus annotated for discourse relations containing over 1 million words. Any language other than English, however, can be considered a low-resource language when it comes to discourse processing.
This dissertation is about shallow discourse parsing (discourse parsing following the paradigm of the Penn Discourse TreeBank) for German. The limited availability of annotated data for German means the potential of modern, deep-learning based methods relying on such data is also limited. This dissertation explores to what extent machine-learning and more recent deep-learning based methods can be combined with traditional, linguistic feature engineering to improve performance for the discourse parsing task. A pivotal role is played by connective lexicons that exhaustively list the discourse connectives of a particular language along with some of their core properties.
To facilitate training and evaluation of the methods proposed in this dissertation, an existing corpus (the Potsdam Commentary Corpus) has been extended and additional data has been annotated from scratch. The approach to end-to-end shallow discourse parsing for German adopts a pipeline architecture and, for the individual sub-tasks of the discourse parsing task (in processing order: connective identification, argument extraction and sense classification), either presents the first results for German or improves over the state of the art. The end-to-end shallow discourse parser for German that has been developed for the purpose of this dissertation is open-source and available online.
In the course of writing this dissertation, work has been carried out on several connective lexicons in different languages. Due to their central role and demonstrated usefulness for the methods proposed in this dissertation, strategies for creating or further developing such lexicons for a particular language are discussed, along with suggestions on how to further increase their usefulness for shallow discourse parsing.
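The lexicon-driven first stage of such a pipeline, flagging candidate connectives in a token stream, can be sketched as a simple multi-word lookup. This is an illustrative simplification: a real connective-identification component disambiguates each candidate (connective vs. non-connective reading) with a trained classifier, and the toy lexicon entries below are assumptions rather than actual lexicon content.

```python
def find_connective_candidates(tokens, lexicon):
    """Return (start, end, entry) spans where a lexicon entry matches
    the token stream; entries may span multiple tokens ("auch wenn")."""
    hits = []
    lowered = [t.lower() for t in tokens]
    for entry in lexicon:
        parts = entry.lower().split()
        n = len(parts)
        for i in range(len(lowered) - n + 1):
            if lowered[i:i + n] == parts:
                hits.append((i, i + n, entry))
    return hits
```

For the German sentence "Er kam, weil es regnete" with a toy lexicon {"weil", "aber", "auch wenn"}, the function flags the subordinating connective "weil"; a downstream classifier would then extract its two arguments and classify the sense.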
Background: A growing body of research has documented negative effects of sexualization in the media on individuals’ self-objectification. This research is predominantly built on studies examining traditional media, such as magazines and television, and young female samples. Furthermore, longitudinal studies are scarce, and few studies have examined mediators of the relationship. The first aim of the present PhD thesis was to investigate the relations between the use of sexualized interactive media and social media and self-objectification. The second aim of this work was to examine the presumed processes within understudied samples, such as males and females beyond college age, thus investigating the moderating roles of age and gender. The third aim was to shed light on possible mediators of the relation between sexualized media and self-objectification.
Method: The research aims were addressed within the scope of four studies. In an experiment, women’s self-objectification and body satisfaction were measured after playing a video game with a sexualized vs. a nonsexualized character that was either personalized or generic. The second study investigated the cross-sectional link between sexualized television use and self-objectification and consideration of cosmetic surgery in a sample of women across a broad age spectrum, examining the role of age in the relations. The third study looked at the cross-sectional link between male and female sexualized images on Instagram and their associations with self-objectification among a sample of male and female adolescents. Using a two-wave longitudinal design, the fourth study examined sexualized video game and Instagram use as predictors of adolescents’ self-objectification. Path models were conceptualized for the second, third and fourth study, in which media use predicted body surveillance via appearance comparisons (Study 4), thin-ideal internalization (Study 2, 3, 4), muscular-ideal internalization (Study 3, 4), and valuing appearance (all studies).
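The kind of indirect effect these path models estimate (media use predicting body surveillance via a mediator) can be illustrated with a minimal product-of-coefficients sketch on simulated data. This is not the studies' actual models, which involve multiple mediators and latent variables; the simulation parameters and variable names are assumptions.

```python
import numpy as np

def ols(y, X):
    """OLS coefficient estimates with an intercept column prepended."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

def indirect_effect(media_use, mediator, outcome):
    """Product-of-coefficients indirect effect a*b for a single-mediator
    path model: media use -> mediator -> body surveillance.
    a: effect of media use on the mediator;
    b: effect of the mediator on the outcome, controlling for media use."""
    a = ols(mediator, media_use.reshape(-1, 1))[1]
    b = ols(outcome, np.column_stack([media_use, mediator]))[2]
    return float(a * b)
```

With simulated data where the mediator carries the full effect (a = 0.5, b = 0.8), the estimate recovers an indirect effect near 0.4; in practice its uncertainty would be assessed, e.g., with bootstrap confidence intervals.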
Results: The results of the experimental study revealed no effect of sexualized video game characters on women’s self-objectification and body satisfaction. No moderating effect of personalization emerged. Sexualized television use was associated with consideration of cosmetic surgery via body surveillance and valuing appearance for women of all ages in Study 2, while no moderating effect of age was found. Study 3 revealed that seeing sexualized male images on Instagram was indirectly associated with higher body surveillance via muscular-ideal internalization for boys and girls. Sexualized female images were indirectly linked to higher body surveillance via thin-ideal internalization and valuing appearance over competence only for girls. The longitudinal analysis of Study 4 showed no moderating effect of gender: For boys and girls, sexualized video game use at T1 predicted body surveillance at T2 via appearance comparisons, thin-ideal internalization and valuing appearance over competence. Furthermore, the use of sexualized Instagram images at T1 predicted body surveillance at T2 via valuing appearance.
Conclusion: The findings show that sexualization in the media is linked to self-objectification among a variety of media formats and within diverse groups of people. While the longitudinal study indicates that sexualized media predict self-objectification over time, the experimental null findings warrant caution regarding this temporal order. The results demonstrate that several mediating variables might be involved in this link. Possible implications for research and practice, such as intervention programs and policy-making, are discussed.
Role of dietary sulfonates in the stimulation of gut bacteria promoting intestinal inflammation
(2021)
The interplay between intestinal microbiota and host has increasingly been recognized as a major factor impacting health. Studies indicate that diet is the most influential determinant affecting the gut microbiota. A diet rich in saturated fat was shown to stimulate the growth of the colitogenic bacterium Bilophila wadsworthia by enhancing the secretion of the bile acid taurocholate (TC). The sulfonated taurine moiety of TC is utilized as a substrate by B. wadsworthia. The resulting overgrowth of B. wadsworthia was accompanied by an increased incidence and severity of colitis in interleukin (IL)-10-deficient mice, which are genetically prone to develop inflammation.
Based on these findings, the question arose whether the intake of dietary sulfonates also stimulates the growth of B. wadsworthia and thereby promotes intestinal inflammation in genetically susceptible mice. Dietary sources of sulfonates include green vegetables and cyanobacteria, which contain the sulfolipids sulfoquinovosyl diacylglycerols (SQDG) in considerable amounts. Based on literature reports, the gut commensal Escherichia coli is able to release sulfoquinovose (SQ) from SQDG and, in further steps, convert SQ to 2,3-dihydroxypropane-1-sulfonate (DHPS) and dihydroxyacetone phosphate. DHPS may then be utilized as a growth substrate by B. wadsworthia, which results in the formation of sulfide. Both sulfide formation and a high abundance of B. wadsworthia have been associated with intestinal inflammation.
In the present study, conventional IL-10-deficient mice were fed either a diet supplemented with the SQDG-rich cyanobacterium Spirulina (20%, SD) or a control diet. In addition, SQ, TC, or water was administered orally to conventional or gnotobiotic IL-10-deficient mice. The gnotobiotic mice harbored a simplified human intestinal microbiota (SIHUMI), either with or without B. wadsworthia. During the intervention period, the body weight of the mice was monitored, colon permeability was assessed and fecal samples were collected. After the three-week intervention, the animals were examined with regard to inflammatory parameters, microbiota composition and sulfonate concentrations at different intestinal sites.
None of the mice treated with the above-mentioned sulfonates showed weight loss or intestinal inflammation. Only mice fed SD or gavaged with TC displayed a slight immune response. These mice also displayed an altered microbiota composition, which was not observed in mice gavaged with SQ. The abundance of B. wadsworthia was strongly reduced in mice fed SD, whereas it was slightly increased in some mice treated with SQ or TC. The intestinal SQ concentration was elevated in mice orally treated with SD or SQ, whereas neither TC nor taurine concentrations were consistently elevated in mice gavaged with TC. Additional colonization of SIHUMI mice with B. wadsworthia resulted in a mild inflammatory response, but only in mice treated with TC. In general, TC-mediated effects on the immune system and on the abundance of B. wadsworthia were not as strong as described in the literature.
In summary, neither the tested dietary sulfonates nor TC led to bacteria-induced intestinal inflammation in the IL-10-deficient mouse model, which was consistently observed in both conventional and gnotobiotic mice. For humans, this means that foods containing SQDG, such as spinach or Spirulina, do not increase the risk of intestinal inflammation.
As part of our everyday life, we consume breaking news and interpret it based on our own viewpoints and beliefs. We have easy access to online social networking platforms and news media websites, where we inform ourselves about current affairs and often post about our own views, such as in news comments or social media posts. The media ecosystem enables opinions and facts to travel from news sources to news readers, from news article commenters to other readers, from social network users to their followers, etc. The views of the world many of us have depend on the information we receive via online news and social media. Hence, it is essential to maintain accurate, reliable and objective online content to ensure democracy and veracity on the Web. To this end, we contribute to a trustworthy media ecosystem by analyzing news and social media in the context of politics to ensure that media serves the public interest. In this thesis, we use text mining, natural language processing and machine learning techniques to reveal underlying patterns in political news articles and political discourse in social networks.
Mainstream news sources typically cover a great amount of the same news stories every day, but they often place them in a different context or report them from different perspectives. In this thesis, we are interested in how distinct and predictable newspaper journalists are, in the way they report the news, as a means to understand and identify their different political beliefs. To this end, we propose two models that classify text from news articles to their respective original news source, i.e., reported speech and also news comments. Our goal is to capture systematic quoting and commenting patterns by journalists and news commenters respectively, which can lead us to the newspaper where the quotes and comments are originally published. Predicting news sources can help us understand the potential subjective nature behind news storytelling and the magnitude of this phenomenon. Revealing this hidden knowledge can restore our trust in media by advancing transparency and diversity in the news.
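The idea of attributing a piece of reported speech to its original outlet can be illustrated with a small bag-of-words naive Bayes classifier. This sketch is not the thesis's actual models, and the toy training sentences and outlet names are assumptions; it merely shows how systematic wording patterns can point back to a source.

```python
import math
from collections import Counter, defaultdict

class QuoteSourceNB:
    """Multinomial naive Bayes over bag-of-words features, sketching
    the attribution of quoted or commented text to a news source."""

    def train(self, texts, sources):
        self.word_counts = defaultdict(Counter)   # source -> word counts
        self.class_counts = Counter(sources)      # source -> #documents
        self.vocab = set()
        for text, src in zip(texts, sources):
            words = text.lower().split()
            self.word_counts[src].update(words)
            self.vocab.update(words)

    def predict(self, text):
        words = text.lower().split()
        total = sum(self.class_counts.values())
        best, best_lp = None, -math.inf
        for src, n in self.class_counts.items():
            lp = math.log(n / total)              # log prior
            wc = self.word_counts[src]
            denom = sum(wc.values()) + len(self.vocab)
            for w in words:                       # add-one smoothing
                lp += math.log((wc[w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = src, lp
        return best
```

Trained on a handful of outlet-specific snippets, the classifier routes a new quote to the outlet whose characteristic vocabulary it best matches; real systems would use richer features and far more data.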
Media bias can be expressed in various subtle ways in the text and it is often challenging to identify these bias manifestations correctly, even for humans. However, media experts, e.g., journalists, are a powerful resource that can help us overcome the vague definition of political media bias and they can also assist automatic learners to find the hidden bias in the text. Due to the enormous technological advances in artificial intelligence, we hypothesize that identifying political bias in the news could be achieved through the combination of sophisticated deep learning models and domain expertise. Therefore, our second contribution is a high-quality and reliable news dataset annotated by journalists for political bias and a state-of-the-art solution for this task based on curriculum learning. Our aim is to discover whether domain expertise is necessary for this task and to provide an automatic solution for this traditionally manually-solved problem.

User-generated content is fundamentally different from news articles, e.g., messages are shorter, they are often personal and opinionated, they refer to specific topics and persons, etc. Regarding political and socio-economic news, individuals in online communities make use of social networks to keep their peers up-to-date and to share their own views on ongoing affairs. We believe that social media is as powerful an instrument for information flow as news sources are, and we use its unique characteristic of rapid news coverage for two applications. We analyze Twitter messages and debate transcripts during live political presidential debates to automatically predict the topics that Twitter users discuss. Our goal is to discover the favoured topics in online communities on the dates of political events as a way to understand the political subjects of public interest.
With the up-to-dateness of microblogs, an additional opportunity emerges, namely to use social media posts and leverage real-time crowd knowledge about discussed individuals to find their locations. That is, given a person of interest who is mentioned in online discussions, we use the wisdom of the crowd to automatically track her physical locations over time. We evaluate our approach in the context of politics, i.e., we predict the locations of US politicians as a proof of concept for important use cases, such as tracking people who are national risks, e.g., warlords and wanted criminals.
Soft actuators have drawn significant attention due to their relevance for applications such as artificial muscles in devices developed for medicine and robotics. Tuning their performance and expanding their functionality are frequently done by means of chemical modification. The introduction of structural elements that make non-synthetic modification of performance possible, allow control over physical appearance, and facilitate recycling is a subject of great interest in the field of smart materials. The primary aim of this thesis was to create a shape-memory polymeric actuator in which the capability for non-synthetic tuning of the actuation performance is combined with reprocessability. Physically cross-linked polymeric matrices provide a solid material platform, where in situ processing methods can be employed to modify the composition and morphology, resulting in fine tuning of the related mechanical properties and shape-memory actuation capability.
The morphological features required for shape-memory polymeric actuators, namely two crystallisable domains and anchoring points for physical cross-links, were embedded into a multiblock copolymer with poly(ε-caprolactone) and poly(L-lactide) segments (PLLA-PCL). Here, the melting transition of PCL was bisected into the actuating and skeleton-forming units, while cross-linking was introduced via PLA stereocomplexation in blends with oligomeric poly(D-lactide) (ODLA). A PLLA segment number-average length of 12-15 repeat units was experimentally determined to enable PLA stereocomplex formation while remaining insufficient for isotactic crystallisation. The multiblock structure and phase dilution broaden the PCL melting transition, facilitating its separation into two conditionally independent crystalline domains. The low molar mass of the PLA stereocomplex components and the multiblock structure enable processing and reprocessing of the PLLA-PCL / ODLA blends with common non-destructive techniques. The modularity of the PLLA-PCL structure and synthetic approach allows for independent tuning of the properties of its components. The designed material establishes a solid platform for non-synthetic tuning of the thermomechanical and structural properties of thermoplastic elastomers.
To evaluate the thermomechanical stability of the formed physical network, three criteria were assessed. As physical cross-links, the PLA stereocomplexes have to be evenly distributed within the material matrix, their melting temperature must not overlap with the thermal transitions of the PCL domains, and they have to maintain structural integrity within the strain (ε) ranges applied in the subsequent shape-memory actuation experiments. Assigning PCL the function of the skeleton-forming and actuating units, and the PLA stereocomplexes the role of physical netpoints, shape-memory actuation was realised in the PLLA-PCL / ODLA blends. The reversible strain of shape-memory actuation was found to be a function of PLA stereocomplex crystallinity, i.e. physical cross-linking density, with a maximum of 13.4 ± 1.5% at a PLA stereocomplex content of 3.1 ± 0.3 wt%. In this way, shape-memory actuation can be tuned by adjusting the composition of the PLLA-PCL / ODLA blend. This makes the developed material a valuable asset in the production of cost-effective, tunable soft polymeric actuators for applications in medicine and soft robotics.
Rehabilitationspädagogik
(2021)
Rehabilitation pedagogy (Rehabilitationspädagogik) is a younger, independent hybrid discipline within the human sciences. Its theory-building starts, in the sense of Book Nine of the German Social Code (SGB IX), from the longer-term consequences of an illness or a biological impairment. Conceptually, it is oriented, for example, towards the UN Convention on the Rights of Persons with Disabilities (UN-CRPD) and the International Classification of Functioning, Disability and Health (ICF), and further towards the concepts of human ontogenetics by K.-F. Wessel, in particular: the whole human being, the hierarchy of competences, sensitive phases, and sovereignty.
Rehabilitation pedagogy is part of comprehensive health rehabilitation and a daughter discipline of general pedagogy. The guiding aim of the rehabilitation-pedagogical process is to support a person's full participation in individual areas of life through rehabilitation-pedagogical means, methods, and forms of organisation.
Using hermeneutic methods, the dissertation engages critically and constructively with the GDR rehabilitation pedagogy of K.-P. Becker and his team of authors. It presents an up-to-date, continuing theory of rehabilitation pedagogy that takes the UN-CRPD, the ICF, and SGB IX into account, and offers a new view of rehabilitation pedagogy from both a historical and a contemporary perspective.
Polymeric semiconductors are strong contenders for replacing traditional inorganic semiconductors in electronic applications requiring low power, low cost and flexibility, such as biosensors, flexible solar cells and electronic displays. Molecular doping has the potential to enable this revolution by improving the conductivity and charge transport properties of this class of materials. Despite decades of research in this field, gaps in our understanding of the nature of dopant–polymer interactions have resulted in limited commercialization of this technology. This work aims at providing a deeper insight into the underlying mechanisms of molecular p-doping of semiconducting polymers in the solution and solid state, and thereby brings the scientific community closer to realizing the dream of making organic semiconductors commonplace in the electronics industry. The roles of 1) dopant size/shape, 2) polymer chain aggregation and 3) charge delocalization in the doping mechanism and efficiency are addressed using optical (UV-Vis-NIR) and electron paramagnetic resonance (EPR) spectroscopies. By conducting a comprehensive study of the nature and concentration of the doping-induced species in solutions of the polymer poly(3-hexylthiophene) (P3HT) with three different dopants, we identify the unique optical signatures of the delocalized polaron, localized polaron and charge-transfer complex, and report their extinction coefficient values. Furthermore, with X-ray diffraction, atomic force microscopy and electrical conductivity measurements, we study the impact of processing technique and doping mechanism on the morphology and, thereby, on charge transport through the doped films.
This work demonstrates that the doping mechanism and type of doping-induced species formed are strongly influenced by the polymer backbone arrangement rather than dopant shape/size. The ability of the polymer chain to aggregate is found to be crucial for efficient charge transfer (ionization) and polaron delocalization. At the same time, our results suggest that the high ionization efficiency of a dopant–polymer system in solution may subsequently hinder efficient charge transport in the solid-state due to the reduction in the fraction of tie chains, which enable charges to move efficiently between aggregated domains in the films. This study demonstrates the complex multifaceted nature of polymer doping while providing important hints for the future design of dopant-host systems and film fabrication techniques.
Bottom-up synthetic biology seeks to understand how a cell works by developing techniques to produce lipid-based vesicular structures as cellular mimics. The most common techniques for producing such cellular mimics or synthetic cells are electroformation and swelling. However, these techniques cannot efficiently encapsulate macromolecules such as proteins, enzymes, DNA, or even liposomes as synthetic organelles. This motivates the development of new techniques that circumvent this limitation and bring the artificial cell closer to reality, making it possible to imitate a eukaryotic cell by encapsulating macromolecules. The aim of this thesis is to construct a cell system using giant unilamellar vesicles (GUVs) to reconstitute the mitochondrial molybdenum cofactor (Moco) biosynthetic pathway. This pathway is highly conserved across all life forms and is known for its biological significance in disorders caused by its malfunction. Moreover, the pathway is a multi-step enzymatic reaction that takes place in different compartments: first, GTP in the mitochondrial matrix is converted to cPMP by cPMP synthase; the cPMP produced is then transported across the membrane to the cytosol, where it is converted into MPT by MPT synthase. The pathway thus offers an opportunity to address the general challenges faced in developing a synthetic cell: encapsulating large biomolecules with good efficiency and greater control, and evaluating the enzymatic reactions involved in the process.
For this purpose, an emulsion-based technique was developed and optimised to allow rapid production of GUVs (~18 min) with high encapsulation efficiency (80%). This was made possible by optimising various parameters such as density, type of oil, centrifugation speed and time, lipid concentration, pH, temperature, and emulsion droplet volume. Furthermore, the method was adapted to microtiter plates for direct experimentation and visualisation after GUV formation. Using this technique, the two steps (formation of cPMP from GTP and formation of MPT from cPMP) were encapsulated in different sets of GUVs to mimic the two compartments. Two independent fluorescence-based detection systems were established to confirm the successful encapsulation and conversion of the reactants. Alternatively, the enzymes were produced by bacterial expression and their activities measured. Following the successful encapsulation and evaluation of the enzymatic reactions, cPMP transport across the mitochondrial membrane was mimicked with GUVs of a complex mitochondrial lipid composition. It was found that the interaction of cPMP with the lipid bilayer results in transient pore formation and leakage of internal contents.
Overall, it can be concluded that in this thesis a novel technique has been optimised for the fast production of functional synthetic cells. The individual enzymatic steps of the Moco biosynthetic pathway have been successfully implemented and quantified within these cellular mimics.
Ionizing radiation is used in cancer radiation therapy to damage the DNA of tumors effectively, leading to cell death and reduction of the tumor tissue. The main damage is due to the generation of highly reactive secondary species, such as low-energy electrons (LEEs) with a most probable energy around 10 eV, through the ionization of water molecules in the cells. A simulation of the dose distribution in the patient is required to optimize the irradiation modality in cancer radiation therapy, and this simulation must be based on the fundamental physical processes of high-energy radiation interacting with tissue. In the present work, DNA radiation damage is accurately quantified in the form of absolute cross sections for LEE-induced DNA strand breaks (SBs) between 5 and 20 eV using the DNA origami technique. This method is based on the analysis of well-defined DNA target sequences attached to DNA origami triangles by atomic force microscopy (AFM) at the single-molecule level. The present work focuses on poly-adenine sequences (5'-d(A4), 5'-d(A8), 5'-d(A12), 5'-d(A16), and 5'-d(A20)) irradiated with 5.0, 7.0, 8.4, and 10 eV electrons. Independent of the DNA length, the strand-break cross section shows a maximum around 7.0 eV electron energy for all investigated oligonucleotides, confirming that strand breakage occurs through the initial formation of negative ion resonances. Additionally, DNA double strand breaks from the DNA hairpin 5'-d(CAC)4T(Bt-dT)T2(GTG)4 are examined for the first time and are compared with those of the DNA single strands 5'-d(CAC)4 and 5'-d(GTG)4. Irradiation is performed in the most relevant energy range of 5 to 20 eV, with an anionic resonance maximum around 10 eV independent of the DNA sequence. There is a clear difference between σSSB and σDSB of DNA single and double strands: the strand-break cross sections for ssDNA are higher than those for dsDNA by a factor of 3 at all electron energies.
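The conversion from AFM-counted broken strands to an absolute cross section can be sketched with a single-hit exposure model, in which the surviving fraction decays exponentially with electron fluence. The numbers below are purely illustrative assumptions, not values from this work.

```python
import math

def strand_break_cross_section(broken_fraction, fluence):
    """Single-hit model: surviving fraction S = exp(-sigma * F),
    hence sigma = -ln(1 - f) / F (cm^2 per strand), where f is the
    fraction of target strands found broken after fluence F."""
    return -math.log(1.0 - broken_fraction) / fluence

# illustrative only: 30% of target strands broken after 2e13 electrons/cm^2
sigma = strand_break_cross_section(0.30, 2.0e13)
```

In this model a larger broken fraction at the same fluence maps to a larger cross section, which is why counting intact versus damaged sequences on the origami triangles suffices to obtain absolute values.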
A further part of this work deals with the characterization and analysis of new types of radiosensitizers used in chemoradiotherapy, which selectively increase DNA damage upon irradiation. Fluorinated DNA sequences with 2'-fluoro-2'-deoxycytidine (dFC) show increased sensitivity at 7 and 10 eV compared with the unmodified DNA sequences, with an enhancement factor between 2.1 and 2.5. In addition, light-induced oxidative damage of 5'-d(GTG)4- and 5'-d((CAC)4T(Bt-dT)T2(GTG)4)-modified DNA origami triangles by singlet oxygen (1O2), generated from the three photoexcited DNA groove binders [ANT994], [ANT1083] and [Cr(ddpd)2][BF4]3 illuminated in different experiments with UV-Vis light at 430, 435 and 530 nm, is demonstrated. The singlet-oxygen-induced DNA damage could be detected in both aqueous and dry environments for [ANT1083] and [Cr(ddpd)2][BF4]3.
The construction of educational buildings is a topic of current debate in urban development and urban planning as well as in pedagogy. Many experts investigate questions of good and successful school construction in dedicated studies. Society's demands on educational buildings change when all-day school models are meant to host not only lessons but also after-school care for pupils. At the same time, schools are expected to be places of encounter and communication, of social learning and cooperation. School is in motion in many respects. To keep pace with these changes and demands, educational building construction repeatedly faces challenges: on the one hand, flagship projects are created; on the other hand, educational buildings are still being built that fail to meet current requirements and future developments.
This is where the present thesis comes in. It does not propose new standards for good school construction; instead, a qualitative empirical study asks about the pedagogical conceptions of the actors involved in educational building construction and about typical developments in the planning process. The case study is based on the documentary method as its analytical procedure. The objects of investigation were two educational buildings within one large construction project. The analysis comprised an examination of the project structures and of the interpretive patterns of the interviewed actors, culminating in a combined presentation of results in the form of an action-structure framework.
The thesis offers insights into the interrelations between the actions of those involved and the project structures: how they influence each other and how they change over the course of the process. The analysis shows that transfer problems between research and practice persist. Financial, temporal, and architectural structures carry particular weight in planning decisions; only few pedagogical conceptions or interpretive patterns are able to come into play.
The ubiquitin-proteasome system (UPS) is a cellular cascade involving three enzymatic steps of protein ubiquitination that target proteins to the 26S proteasome for proteolytic degradation. Several components of the UPS have been shown to be central to the regulation of defense responses during infections with phytopathogenic bacteria. Upon recognition of the pathogen, local defense is induced, which also primes the plant to acquire systemic resistance (SAR) for enhanced immune responses upon challenging infections. Here, ubiquitinated proteins were shown to accumulate locally and systemically during infections with Psm and after treatment with the SAR-inducing metabolites salicylic acid (SA) and pipecolic acid (Pip). The role of the 26S proteasome in local defense has been described in several studies, but its potential role during SAR remains elusive and was therefore investigated in this project by characterizing the Arabidopsis proteasome mutants rpt2a-2 and rpn12a-1 during priming and infections with Pseudomonas. Bacterial replication assays reveal decreased basal and systemic immunity in both mutants, which was verified at the molecular level by the impaired activation of defense and SAR genes. rpt2a-2 and rpn12a-1 accumulate wild-type-like levels of camalexin but less SA. Exogenous SA treatment restores local PR gene expression but does not rescue the SAR phenotype. An RNAseq experiment on Col-0 and rpt2a-2 reveals weak or absent induction of defense genes in the proteasome mutant during priming. Thus, a functional 26S proteasome was found to be required for the induction of SAR, while compensatory mechanisms can still be initiated.
E3-ubiquitin ligases conduct the last step of substrate ubiquitination and thereby convey specificity to proteasomal protein turnover. Using RNAseq, 11 E3-ligases were found to be differentially expressed during priming in Col-0 of which plant U-box 54 (PUB54) and ariadne 12 (ARI12) were further investigated to gain deeper understanding of their potential role during priming.
PUB54 was shown to be expressed during priming and/or triggering with virulent Pseudomonas. pub54-I and pub54-II mutants display local and systemic defense comparable to Col-0. The heavy-metal-associated protein 35 (HMP35) was identified as a potential substrate of PUB54 in yeast, which was verified in vitro and in vivo. PUB54 was shown to be an active E3 ligase exhibiting auto-ubiquitination activity and performing ubiquitination of HMP35. Proteasomal turnover of HMP35 was observed, indicating that PUB54 targets HMP35 for ubiquitination and subsequent proteasomal degradation. Furthermore, hmp35-I shows increased resistance in bacterial replication assays. Thus, HMP35 is potentially a negative regulator of defense that is targeted and ubiquitinated by PUB54 to regulate downstream defense signaling. ARI12 is transcriptionally activated during priming or triggering and hyperinduced during combined priming and triggering. Its gene expression is not inducible by the defense-related hormone salicylic acid (SA) and is dampened in npr1 and fmo1 mutants, consequently depending on functional SA and Pip pathways, respectively. ARI12 accumulates systemically after priming with SA, Pip or Pseudomonas. ari12 mutants are not altered in resistance, but stable overexpression leads to increased resistance in local and systemic tissue. During priming and triggering, unbalanced ARI12 levels (i.e. knockout or overexpression) lead to enhanced FMO1 activation, indicating a role of ARI12 in Pip-mediated SAR. ARI12 was shown to be an active E3 ligase with auto-ubiquitination activity, likely required for activation, with an identified ubiquitination site at K474. Potential substrates identified by mass spectrometry have not yet been verified by additional experiments but suggest an involvement of ARI12 in the regulation of ROS, which in turn regulates Pip-dependent SAR pathways.
Thus, the data from this project provide strong indications of the involvement of the 26S proteasome in SAR and identify a central role of the two so far barely described E3-ubiquitin ligases PUB54 and ARI12 as novel components of plant defense.
Over the last decades, the rate of near-surface warming in the Arctic has been at least double that elsewhere on our planet (Arctic amplification). However, the relative contribution of different feedback processes to Arctic amplification is a topic of ongoing research, including the role of aerosol and clouds. Lidar systems are well suited for the investigation of aerosol and optically thin clouds as they provide vertically resolved information on fine temporal scales. Global aerosol models fail to converge on the sign of the Arctic aerosol radiative effect (ARE). In the first part of this work, the optical and microphysical properties of Arctic aerosol were characterized at the case-study level in order to assess the short-wave (SW) ARE. A long-range transport episode was investigated first. Geometrically similar aerosol layers were captured over three locations. Although the aerosol size distribution differed between Fram Strait (bi-modal) and Ny-Ålesund (fine mono-modal), the atmospheric column ARE was similar. The latter was related to the domination of accumulation-mode aerosol. Over both locations, top-of-the-atmosphere (TOA) warming was accompanied by surface cooling.
Subsequently, the sensitivity of the ARE was investigated with respect to different aerosol and spring-time ambient conditions. A 10% change in the single-scattering albedo (SSA) induced higher ARE perturbations than a 30% change in the aerosol extinction coefficient. With respect to ambient conditions, the ARE at the TOA was more sensitive to changes in solar elevation than the ARE at the surface. Over dark surfaces the ARE profile was exclusively negative, while over bright surfaces a negative-to-positive shift occurred above the aerosol layers. Consequently, the sign of the ARE can be highly sensitive in spring, since this season is characterized by transitional surface-albedo conditions.
As the inversion of the aerosol microphysics is an ill-posed problem, the inferred aerosol size distribution of a low-tropospheric event was compared to the in-situ measured distribution. Both techniques revealed a bi-modal distribution, with good agreement in the total volume concentration. However, in terms of SSA a disagreement was found, with the lidar inversion indicating highly scattering particles and the in-situ measurements pointing to absorbing particles. The discrepancies could stem from assumptions in the inversion (e.g. wavelength-independent refractive index) and errors in the conversion of the in-situ measured light attenuation into absorption. Another source of discrepancy might be related to an incomplete capture of fine particles in the in-situ sensors. The disagreement in the most critical parameter for the Arctic ARE necessitates further exploration in the frame of aerosol closure experiments. Care must be taken in ARE modelling studies, which may use either the in-situ or lidar-derived SSA as input.
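The total volume concentration compared above follows from integrating the volume-weighted number size distribution over the modes. The sketch below uses a generic (bi-)modal lognormal parameterisation with hypothetical mode parameters, not the values retrieved in this work.

```python
import numpy as np

def lognormal_mode(d, n_tot, d_med, sigma_g):
    """Number size distribution dN/dlnD of one lognormal mode with
    total number n_tot, median diameter d_med and geometric std sigma_g."""
    return (n_tot / (np.sqrt(2 * np.pi) * np.log(sigma_g))
            * np.exp(-np.log(d / d_med) ** 2 / (2 * np.log(sigma_g) ** 2)))

def total_volume(d, dn_dlnd):
    """Volume concentration: integrate (pi/6) D^3 * dN/dlnD over lnD
    (simple Riemann sum on a log-spaced diameter grid)."""
    lnd = np.log(d)
    return np.sum(np.pi / 6 * d ** 3 * dn_dlnd) * (lnd[1] - lnd[0])

# hypothetical bi-modal distribution (diameters in micrometres)
d = np.logspace(-3, 1, 2000)
dist = lognormal_mode(d, 150.0, 0.08, 1.5) + lognormal_mode(d, 5.0, 0.6, 1.7)
v_tot = total_volume(d, dist)
```

For a single mode the integral has the closed form V = N (pi/6) d_med^3 exp(4.5 ln^2 sigma_g), which provides a convenient consistency check on the numerics.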
Reliable characterization of cirrus geometrical and optical properties is necessary for improving their radiative estimates. In this respect, the detection of sub-visible cirrus is of special importance: the total cloud radiative effect (CRE) can be negatively biased if only the optically thin and opaque cirrus contributions are considered. To this end, a cirrus retrieval scheme was developed, aiming at increased sensitivity to thin clouds. The cirrus detection was based on the wavelet covariance transform (WCT) method, extended by dynamic thresholds. The dynamic WCT exhibited high sensitivity to faint and thin cirrus layers (less than 200 m) that were partly or completely undetected by the existing static method. The optical characterization scheme extended the Klett–Fernald retrieval by an iterative lidar ratio (LR) determination (constrained Klett). The iterative process was constrained by a reference value, which indicated the aerosol concentration beneath the cirrus cloud. Contrary to existing approaches, the aerosol-free assumption was not adopted; instead, the aerosol conditions were approximated by an initial guess. The inherent uncertainties of the constrained Klett were higher for optically thinner cirrus, but an overall good agreement was found with two established retrievals. Additionally, existing approaches that rely on aerosol-free assumptions showed increased accuracy when the proposed reference value was adopted. The constrained Klett reliably retrieved the optical properties in all cirrus regimes, including upper sub-visible cirrus with a COD down to 0.02.
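The core of the WCT-based layer detection can be sketched with a Haar wavelet: extrema of the transform mark sharp gradients in the backscatter profile, i.e. candidate layer edges. The snippet below is a minimal illustration with an assumed synthetic profile and dilation; the dynamic-threshold logic of the actual scheme is more involved.

```python
import numpy as np

def haar_wct(profile, dz, a):
    """Wavelet covariance transform with a Haar wavelet of dilation a:
    W(b) = (1/a) * [sum of profile below b - sum above b] * dz.
    Extrema of |W| mark sharp vertical gradients (layer edges)."""
    n = int(round(a / (2 * dz)))              # half-width in range bins
    w = np.zeros_like(profile, dtype=float)
    for i in range(n, len(profile) - n):
        below = profile[i - n:i].sum()
        above = profile[i:i + n].sum()
        w[i] = (below - above) * dz / a
    return w

# assumed synthetic backscatter profile with a layer base at 5 km
z = np.arange(0.0, 10000.0, 10.0)             # range bins, 10 m resolution
profile = np.where(z >= 5000.0, 2.0, 1.0)
w = haar_wct(profile, 10.0, 100.0)
edge = z[np.argmax(np.abs(w))]                # detected edge near 5000 m
```

A dynamic threshold would then compare |W| against a level derived from the local signal and noise rather than a fixed constant, which is what allows faint layers to be retained.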
Cirrus is the only cloud type capable of inducing TOA cooling or heating at daytime. Over the Arctic, however, the properties and CRE of cirrus are under-explored. In the final part of this work, long-term cirrus geometrical and optical properties were investigated for the first time over an Arctic site (Ny-Ålesund). To this end, the newly developed retrieval scheme was employed. Cirrus layers over Ny-Ålesund seemed to be more absorbing in the visible spectral region compared to lower latitudes and comprise relatively more spherical ice particles. Such meridional differences could be related to discrepancies in absolute humidity and ice nucleation mechanisms. The COD tended to decline for less spherical and smaller ice particles probably due to reduced water vapor deposition on the particle surface. The cirrus optical properties presented weak dependence on ambient temperature and wind conditions.
Over the 10 years of the analysis, no clear temporal trend was found and the seasonal cycle was not pronounced. However, winter cirrus appeared under colder conditions and stronger winds. Moreover, they were optically-thicker, less absorbing and consisted of relatively more spherical ice particles. A positive CREnet was primarily revealed for a broad range of representative cloud properties and ambient conditions. Only for high COD (above 10) and over tundra a negative CREnet was estimated, which did not hold true over snow/ice surfaces. Consequently, the COD in combination with the surface albedo seem to play the most critical role in determining the CRE sign over the high European Arctic.
Background and objectives: The intricate interdependencies between the musculoskeletal and neural systems build the foundation for postural control in humans, which is a prerequisite for successful performance of daily and sports-specific activities. Balance training (BT) is a well-established training method to improve postural control and its components (i.e., static/dynamic steady-state, reactive, proactive balance). The effects of BT have been studied in adult and youth populations, but were systematically and comprehensively assessed only in young and old adults. Additionally, when taking a closer look at established recommendations for BT modalities (e.g., training period, frequency, volume), standardized means to assess and control the progressive increase in exercise intensity are missing. Considering that postural control is primarily neuronally driven, intensity is not easy to quantify. In this context, a measure of balance task difficulty (BTD) appears to be a promising alternative as a training modality to monitor BT and control training progression. However, it remains unclear how a systematic increase in BTD affects balance performance and neurophysiological outcomes. Therefore, the primary objectives of the present thesis were to systematically and comprehensively assess the effects of BT on balance performance in healthy youth and establish dose-response relationships for an adolescent population. Additionally, this thesis aimed to investigate the effects of a graded increase in BTD on balance performance (i.e., postural sway) and neurophysiological outcomes (i.e., leg muscle activity, leg muscle coactivation, cortical activity) in adolescents.
Methods: Initially, a systematic review and meta-analysis on the effects of BT on balance performance in youth was conducted per the Preferred Reporting Items for Systematic Reviews and Meta-Analysis statement guidelines. Following this complementary analysis, thirteen healthy adolescents (3 female/ 10 male) aged 16-17 years were enrolled for two cross-sectional studies. The participants executed bipedal balance tasks on a multidirectional balance board that allowed six gradually increasing levels of BTD by narrowing the balance boards’ base of support. During task performance, two pressure sensitive mats fixed on the balance board recorded postural sway. Leg muscle activity and leg muscle coactivation were assessed via electromyography while electroencephalography was used to monitor cortical activity.
Results: Findings from the systematic review and meta-analysis indicated moderate-to-large effects of BT on static and dynamic balance performance in youth (static: weighted mean standardized mean differences [SMDwm] = 0.71; dynamic: SMDwm = 1.03). In adolescents, training-induced effects were moderate and large for static (SMDwm = 0.61) and dynamic (SMDwm = 0.86) balance performance, respectively. Independently (i.e. modality-specific) calculated dose-response relationships identified a training period of 12 weeks, a frequency of two training sessions per week, a total of 24-36 sessions, a duration of 4-15 minutes, and a total duration of 31-60 minutes as the training modalities with the largest effect on overall balance performance in adolescents. However, the implemented meta-regression indicated that none of these training modalities (R² = 0%) could predict the observed performance-increasing effects of BT.
Results from the first cross-sectional study revealed that a gradually increasing level of BTD caused increases in postural sway (p < 0.001; d = 6.36), higher leg muscle activity (p < 0.001; 2.19 < d < 4.88), and higher leg muscle coactivation (p < 0.001; 1.32 < d < 1.41). Increases in postural sway and leg muscle activity were mainly observed during low and high levels of task difficulty during continuous performance of the respective balance task. Results from the second cross-sectional study indicated frequency-specific increases/decreases in cortical activity of different brain areas (p < 0.005; 0.92 < d < 1.80) as a function of BTD. Higher cortical activity within the theta frequency band in the frontal and central right brain areas was observed with increasing postural demands. Concomitantly, activity in the alpha-2 frequency band was attenuated in parietal brain areas.
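The frequency-band measures reported above (theta, alpha-2) boil down to integrating the EEG power spectrum over fixed bands. A minimal periodogram-based sketch follows; the sampling rate, band edges and test signal are assumptions for illustration, not the recording parameters of this study.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean periodogram power of `signal` within [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

# assumed example: 4 s of a dominant 6 Hz (theta) oscillation at 250 Hz
fs = 250.0
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 6.0 * t)
theta = band_power(eeg, fs, 4.0, 7.0)     # large: oscillation lies in-band
alpha2 = band_power(eeg, fs, 10.0, 12.0)  # near zero for this signal
```

In practice, averaging over multiple windows (Welch's method) and per-electrode baseline normalisation would precede any band comparison across task-difficulty levels.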
Conclusion: BT is an effective method to increase static and dynamic balance performance and, thus, improve postural control in healthy youth populations. However, none of the reported training modalities (i.e., training period, frequency, volume) could explain the effects on balance performance. Furthermore, a gradually increasing level of task difficulty resulted in increases in postural sway, leg muscle activity, and coactivation. Frequency and brain area-specific increases/decreases in cortical activity emphasize the involvement of frontoparietal brain areas in regulatory processes of postural control dependent on BTD. Overall, it appears that increasing BTD can be easily accomplished by narrowing the base of support. Since valid methods to assess and quantify BT intensity do not exist, increasing BTD appears to be a very useful candidate to implement and monitor progression in BT programs in healthy adolescents.
Geochemical processes such as mineral dissolution and precipitation alter the microstructure of rocks, and thereby affect their hydraulic and mechanical behaviour. Quantifying these property changes and considering them in reservoir simulations is essential for a sustainable utilisation of the geological subsurface. Due to the lack of alternatives, analytical methods and empirical relations are currently applied to estimate evolving hydraulic and mechanical rock properties associated with chemical reactions. However, the predictive capabilities of analytical approaches remain limited, since they assume idealised microstructures, and thus are not able to reflect property evolution for dynamic processes. Hence, the aim of the present thesis is to improve the prediction of permeability and stiffness changes resulting from pore-space alterations of reservoir sandstones.
A detailed representation of the rock microstructure, including the morphology and connectivity of pores, is essential to accurately determine physical rock properties. For that purpose, three-dimensional pore-scale models of typical reservoir sandstones, obtained from highly resolved micro-computed tomography (micro-CT), are used to numerically calculate permeability and stiffness. In order to adequately depict characteristic distributions of secondary minerals, the virtual samples are systematically altered and the resulting trends among the geometric, hydraulic, and mechanical rock properties are quantified. It is demonstrated that the geochemical reaction regime controls the location of mineral precipitation within the pore space, and thereby crucially affects the permeability evolution. This emphasises the requirement of determining distinctive porosity-permeability relationships by means of digital pore-scale models. By contrast, a substantial impact of spatial alteration patterns on the stiffness evolution of reservoir sandstones is only observed for certain microstructures, such as highly porous granular rocks or sandstones comprising framework-supporting cementations. In order to construct synthetic granular samples, a process-based approach is proposed that includes grain deposition and diagenetic cementation. It is demonstrated that the generated samples reliably represent the microstructural complexity of natural sandstones. Thereby, general limitations of imaging techniques can be overcome and various realisations of granular rocks can be flexibly produced. These can be further altered by virtual experiments, offering a fast and cost-effective way to examine the impact of precipitation, dissolution or fracturing on various petrophysical correlations.
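For context on the analytical relations that the digital pore-scale approach is compared against, a commonly applied porosity-permeability law of the Kozeny-Carman type can be sketched as below. The reference porosity and permeability are assumed values for illustration; a single fixed exponent is exactly the idealisation that cannot distinguish between different precipitation patterns.

```python
def kozeny_carman(phi, phi0, k0):
    """Kozeny-Carman-type scaling of permeability k with porosity phi:
    k/k0 = (phi/phi0)^3 * ((1 - phi0)/(1 - phi))^2,
    relative to an assumed reference state (phi0, k0)."""
    return k0 * (phi / phi0) ** 3 * ((1 - phi0) / (1 - phi)) ** 2

# assumed reference state: 20% porosity, k0 ~ 1e-13 m^2 (about 100 mD)
k_ref = 1.0e-13
for phi in (0.20, 0.18, 0.15, 0.10):  # progressive pore-filling cementation
    print(f"phi = {phi:.2f}  ->  k = {kozeny_carman(phi, 0.20, k_ref):.2e} m^2")
```

Digital pore-scale models replace this single curve by reaction-regime-specific relationships, since pore-throat clogging and uniform coating reduce permeability very differently at the same porosity loss.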
The presented research work provides methodological principles to quantify trends in permeability and stiffness resulting from geochemical processes. The calculated physical property relations are directly linked to pore-scale alterations, and thus have a higher accuracy than commonly applied analytical approaches. This will considerably improve the predictive capabilities of reservoir models, and is further relevant to assess and reduce potential risks, such as productivity or injectivity losses as well as reservoir compaction or fault reactivation. Hence, the proposed method is of paramount importance for a wide range of natural and engineered subsurface applications, including geothermal energy systems, hydrocarbon reservoirs, CO2 and energy storage as well as hydrothermal deposit exploration.
Filaments are omnipresent features in the solar chromosphere, one of the atmospheric layers of the Sun, located above the photosphere, the visible surface of the Sun. They are clouds of plasma reaching from the photosphere into the chromosphere, and even into the outermost atmospheric layer, the corona. They are stabilized by the magnetic field. If the magnetic field is disturbed, filaments can erupt as coronal mass ejections (CMEs), releasing plasma into space, which can also hit the Earth. A special type of filament is the polar crown filament, which forms at the interface of the unipolar field of the poles and flux of opposite magnetic polarity that has been transported towards the poles. This flux transport is related to the global dynamo of the Sun and can therefore be analyzed indirectly with polar crown filaments. The main objective of this thesis is to better understand the physical properties and environment of high-latitude and polar crown filaments, which is approached from two perspectives: (1) analyzing the large-scale properties of high-latitude and polar crown filaments with full-disk Hα observations from the Chromospheric Telescope (ChroTel) and (2) determining the relation of polar crown and high-latitude filaments from the chromosphere to the lower-lying photosphere with high-spatial-resolution observations of the Vacuum Tower Telescope (VTT), which reveal the smallest details.
The Chromospheric Telescope (ChroTel) is a small 10-cm robotic telescope at Observatorio del Teide on Tenerife (Spain), which observes the entire Sun in Hα, Ca II K, and He I 10830 Å. We present a new calibration method that includes limb-darkening correction, removal of non-uniform filter transmission, and determination of He I Doppler velocities. Chromospheric full-disk filtergrams are often obtained with Lyot filters, which may display non-uniform transmission causing large-scale intensity variations across the solar disk. Removal of a 2D symmetric limb-darkening function from full-disk images results in a flat background. However, transmission artifacts remain and are even more distinct in these contrast-enhanced images. Zernike polynomials are uniquely appropriate to fit these large-scale intensity variations of the background. The Zernike coefficients show a distinct temporal evolution for ChroTel data, which is likely related to the telescope’s alt-azimuth mount that introduces image rotation. In addition, applying this calibration to sets of seven filtergrams that cover the He I triplet facilitates determining chromospheric Doppler velocities. To validate the method, we use three datasets with varying levels of solar activity. The Doppler velocities are benchmarked against co-temporal high-resolution spectroscopic data of the GREGOR Infrared Spectrograph (GRIS). Furthermore, this technique can be applied to ChroTel Hα and Ca II K data. The calibration method for ChroTel filtergrams can be easily adapted to other full-disk data exhibiting unwanted large-scale variations. The spectral region of the He I triplet is a primary choice for high-resolution near-infrared spectropolarimetry. Here, the improved calibration of ChroTel data will provide valuable context data.
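The background-fitting step can be illustrated with a minimal sketch: a synthetic disk image carrying a smooth large-scale background is fitted with the first four Zernike modes (piston, two tilts, defocus) by least squares and then flattened. This is a toy reconstruction of the idea only, not the ChroTel pipeline; the grid size, mode set, and coefficients are arbitrary illustrative choices.

```python
import math

# Pixel grid covering the unit disk (the solar disk after limb-darkening
# removal); resolution is kept small for the illustration.
N = 64
coords = [(2*i/(N - 1) - 1, 2*j/(N - 1) - 1)
          for i in range(N) for j in range(N)
          if (2*i/(N - 1) - 1)**2 + (2*j/(N - 1) - 1)**2 <= 1.0]

def zmodes(x, y):
    """First four Zernike polynomials: piston, x-tilt, y-tilt, defocus."""
    r2 = x*x + y*y
    return [1.0, 2*x, 2*y, math.sqrt(3.0)*(2*r2 - 1)]

Z = [zmodes(x, y) for x, y in coords]
true_c = [10.0, 0.8, -0.5, 0.3]            # synthetic background coefficients
img = [sum(c*zi for c, zi in zip(true_c, z)) for z in Z]

# Least-squares fit of the four modes via the normal equations (4x4 solve).
k = 4
A = [[sum(z[a]*z[b] for z in Z) for b in range(k)] for a in range(k)]
rhs = [sum(v*z[a] for v, z in zip(img, Z)) for a in range(k)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c]*x[c] for c in range(r + 1, n))) / M[r][r]
    return x

fit_c = solve(A, rhs)
# Subtracting the fitted modes flattens the background.
residual = max(abs(v - sum(c*zi for c, zi in zip(fit_c, z)))
               for v, z in zip(img, Z))
print([round(c, 3) for c in fit_c], residual < 1e-9)
```

In the real calibration the fit would be applied to the contrast-enhanced residual image and with more Zernike modes; the least-squares machinery is the same.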
Polar crown filaments form above the polarity inversion line between the old magnetic flux of the previous cycle and the new magnetic flux of the current cycle. Studying their appearance and properties can lead to a better understanding of the solar cycle. We use full-disk data of ChroTel at Observatorio del Teide, Tenerife, Spain, taken in three different chromospheric absorption lines (Hα 6563 Å, Ca II K 3933 Å, and He I 10830 Å), and create synoptic maps. In addition, the spectroscopic He I data allow us to compute Doppler velocities and to create synoptic Doppler maps. ChroTel data cover the rising and decaying phases of Solar Cycle 24 on about 1000 days between 2012 and 2018. Based on these data, we automatically extract polar crown filaments with image-processing tools and study their properties. We compare contrast maps of polar crown filaments with those of quiet-Sun filaments. Furthermore, we present a super-synoptic map summarizing the entire ChroTel database. In summary, we provide statistical properties, i.e., the number and location of filaments, their area, and their tilt angle, for both the maximum and the declining phase of Solar Cycle 24. This demonstrates that ChroTel provides a promising dataset to study the solar cycle.
The cyclic behavior of polar crown filaments can be monitored by regular full-disk Hα observations. ChroTel provides such regular observations of the Sun in three chromospheric wavelengths. To analyze the cyclic behavior and the statistical properties of polar crown filaments, we have to extract the filaments from the images. Manual extraction is tedious, and extraction with morphological image-processing tools produces a large number of false-positive detections, whose manual removal takes too much time. Reliable automatic object detection and extraction allows us to process more data in a shorter time. We present an overview of the ChroTel database and a proof of concept of a machine-learning application that enables a unified extraction of, for example, filaments from ChroTel data.
The chromospheric Hα spectral line dominates the spectrum of the Sun and other stars. In the stellar regime, this spectral line is already used as a powerful tracer of magnetic activity. For the Sun, other tracers are typically used to monitor solar activity. Nonetheless, the Sun is observed constantly in Hα with globally distributed ground-based full-disk imagers. The aim of this study is to introduce Hα as a tracer of solar activity and compare it to other established indicators. We discuss the newly created imaging Hα excess with a view to possible applications in the modelling of stellar atmospheres. In particular, we try to determine how constant the mean intensity of the Hα excess and the number density of low-activity regions are between solar maximum and minimum. Furthermore, we investigate whether the active-region coverage fraction or the changing emission strength in the active regions dominates the time variability in solar Hα observations. We use ChroTel observations of full-disk Hα filtergrams and morphological image-processing techniques to extract the positive and negative imaging Hα excess, for bright features (plage regions) and dark absorption features (filaments and sunspots), respectively. We describe the evolution of the Hα excess during Solar Cycle 24 and compare it to other well-established tracers: the relative sunspot number, the F10.7 cm radio flux, and the Mg II index. Moreover, we discuss possible applications of the Hα excess for stellar activity diagnostics and for the contamination of exoplanet transmission spectra. The positive and negative Hα excess follow the behavior of solar activity over the course of the cycle. The positive Hα excess is closely correlated with the chromospheric Mg II index. The negative Hα excess, created from dark features like filaments and sunspots, is introduced as a tracer of solar activity for the first time.
We investigated the mean intensity distribution of active regions for solar minimum and maximum and found that the shapes of both distributions are very similar but with different amplitudes. This might be related to the relatively stable coronal temperature component during the solar cycle. Furthermore, we found that the coverage fraction of the Hα excess and the Hα excess of bright features are strongly correlated, which will influence the modelling of stellar and exoplanet atmospheres.
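As a rough illustration of the positive/negative excess idea, the sketch below uses a simple threshold-based definition, which is an assumption made for illustration only (the study itself uses morphological image processing on calibrated filtergrams): pixels significantly brighter than the disk mean contribute to the positive excess (plage), significantly darker pixels to the negative excess (filaments, sunspots).

```python
# Simplified, assumed definition of an imaging H-alpha excess: summed
# deviation from the mean for pixels beyond an n-sigma threshold, split
# into a positive (bright-feature) and a negative (dark-feature) part.
def h_alpha_excess(image, n_sigma=2.0):
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    sigma = (sum((v - mean)**2 for v in flat) / len(flat)) ** 0.5
    pos = sum(v - mean for v in flat if v > mean + n_sigma*sigma)
    neg = sum(v - mean for v in flat if v < mean - n_sigma*sigma)
    return pos, neg

# Toy 4x4 "filtergram": quiet background at 1.0 with one bright (plage-like)
# and one dark (filament-like) pixel.
toy = [[1.0, 1.0, 1.0, 1.0],
       [1.0, 2.0, 1.0, 1.0],
       [1.0, 1.0, 0.2, 1.0],
       [1.0, 1.0, 1.0, 1.0]]
pos, neg = h_alpha_excess(toy)
print(pos > 0.0, neg < 0.0)
```

The real extraction additionally uses morphological operators to separate compact sunspots from elongated filaments before summing the excess.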
High-resolution observations of polar crown and high-latitude filaments are scarce. We present a unique sample of such filaments observed in high-resolution Hα narrow-band filtergrams and broad-band images, which were obtained with a new fast camera system at the VTT. ChroTel provided full-disk context observations in Hα, Ca II K, and He I 10830 Å. The Helioseismic and Magnetic Imager (HMI) and the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO) provided line-of-sight magnetograms and ultraviolet (UV) 1700 Å filtergrams, respectively. We study filigree in the vicinity of polar crown and high-latitude filaments and relate their locations to magnetic concentrations at the filaments’ footpoints. Bright points are a well-studied phenomenon in the photosphere at low latitudes, but they have not yet been studied in the quiet network close to the poles. We examine the size, area, and eccentricity of bright points and find that their morphology is very similar to that of their counterparts at lower latitudes, but their sizes and areas are larger. Bright points at the footpoints of polar crown filaments are preferentially located at stronger magnetic flux concentrations, which are related to bright regions at the border of supergranules as observed in UV filtergrams. Examining the evolution of bright points on three consecutive days reveals that their number increases while the filament decays, which indicates that they affect the equilibrium of the cool plasma contained in filaments.
"If you can't measure it, you can't manage it." This slogan, attributed variously to Peter Drucker, Henry Deming, or Robert Kaplan and David Norton, expresses a deep conviction in the necessity and usefulness of performance management, an approach that has also reached and shaped public administration. At the same time, it implies a decisive role for performance information. This dissertation places that critical element, performance information, at the center of its research interest, more precisely the use of performance indicators.
The starting point is the scholarly observation that performance indicators are not always and automatically used in the way that theory requires and predicts. Poor implementation of the management approach or flaws in its theoretical foundation are possible explanations. The review of the state of research made clear that explanations are sought primarily in the organizational setting and in factors related to performance management itself, a hallmark of a rather technocratic, implementation-focused perspective on the problem of use. The intrapersonal level, which is important from a neuroscientific point of view, plays a subordinate role.
Against this background, an empirical study grounded in neuroscientific findings examined the effect of experience-related variables on usage behavior. It analyzed how experience arises at the organizational level and how, in detail, it affects usage behavior. Police executives served as the research population. The data were collected online at the end of 2016 and the beginning of 2017.
The data analysis and discussion of the findings yielded the following key insights:
(1) Experience influences the use of performance information. The type of experience with performance indicators acts as a mediator variable. Organizational factors in particular, such as the maturity of the performance management system, affect usage behavior via the experience factor.
(2) It is also worth noting that engaging with performance indicators has a positive effect on both the stock of experience and the use of indicators. Overall, the neuroscience-inspired variables proved to be promising explanatory factors.
(3) Furthermore, the study corroborated existing findings, above all the effect of the aforementioned maturity level. However, differences also emerged: for example, the transformational leadership style, in combination with the type of experience, loses its positive effect on indicator use.
(4) The results of the laboratory and quasi-experiment are also noteworthy. For the first time, non-purposeful types of use were observed experimentally. In addition, neuroeconomic and behavioral-economic explanations were identified and discussed, enriching the research discourse. They offer a new perspective on usage behavior and provide impulses for further research.
These findings weigh heavily for New Public Management (NPM), in whose toolbox this management approach plays a key role. Without functioning performance management, the important reform goal of outcome orientation cannot be achieved. NPM thus runs the risk of developing dysfunctions of its own.
Overall, it seems advisable to place a stronger focus on intrapersonal factors when studying management systems. Behavioral anomalies in the context of management and their implications should also be examined more closely. It is further evident that a purely technocratic view of performance management is not expedient. Consequently, performance management needs to be developed further, both theoretically and conceptually.
The research thus provides important new insights into the use of performance information and the understanding of performance management. Above all, it broadens the research discourse by demonstrating the explanatory power of intrapersonal factors and by opening up new perspectives on the problem of use, methodologically through a mixed-methods (multimethod) design and theoretically through neuroeconomics and behavioral economics.
Previous behavioral studies showed that perceptual changes in infancy can be observed in multiple patterns, namely decline (e.g., Mattock et al., 2008; Yeung et al., 2013), maintenance (e.g., Chen & Kager, 2016) and U-shaped development (Liu & Kager, 2014).
This dissertation contributes further to the understanding of the developmental trajectory of phonological acquisition in infancy. The dissertation addresses the questions of how the perceptual sensitivity of lexical tones and vowels changes in infancy and how different experimental procedures contribute to our understanding. We used three experimental procedures to investigate German-learning infants’ discrimination abilities. In Studies 1 and 3 (Chapters 5 and 7) we used behavioral methods (habituation and familiarization procedures) and in Study 2 (Chapter 6) we measured neural correlates.
Study 1 showed a U-shaped developmental pattern: 6- and 18-month-olds discriminated a lexical tone contrast, but 9-month-olds did not. In addition, we found an effect of experimental procedure: infants discriminated the tone contrast at 6 months in a habituation but not in a familiarization procedure. In Study 2, we observed mismatch responses (MMR) to a non-native tone contrast and a native-like vowel in 6- and 9-month-olds. In 6-month-olds, both contrasts elicited positive MMRs. At 9 months, the vowel contrast elicited an adult-like negative MMR, while the tone contrast elicited a positive MMR. Study 3 demonstrated a change in perceptual sensitivity to a vowel contrast between 6 and 9 months. In contrast to the 6-month-old infants, the 9-month-old infants discriminated the tested vowel contrast asymmetrically.
We suggest that the shifts in perceptual sensitivity between 6 and 9 months are functional rather than perceptual. In the case of lexical tone discrimination, infants may have already learned by 9 months of age that pitch is not relevant at the lexical level in German, since the infants in Study 1 showed no perceptual sensitivity to the contrast tested. Nevertheless, the brain responded to the contrast, especially since pitch differences are also part of the German intonation system (Gussenhoven, 2004). The role of the intonation system in pitch discrimination could be supported by the recovery of behavioral discrimination at 18 months of age, as well as behavioral and neural discrimination in German-speaking adults.
Partially synchronous states exist in systems of coupled oscillators between full synchrony and asynchrony. They are an important research topic because of the variety of dynamical states they comprise. Frequently, they are studied using phase dynamics. This comes with a caveat, as phase dynamics are generally obtained in the weak-coupling limit as a first-order approximation in the coupling strength; the generalization to higher orders in the coupling strength is an open problem. Of particular interest in the research on partial synchrony are systems containing both attractive and repulsive coupling between the units. Such a mix of coupling yields very specific dynamical states that may help understand the transition between full synchrony and asynchrony. This thesis investigates partially synchronous states in mixed-coupling systems. First, a method for higher-order phase reduction is introduced to capture interactions beyond the pairwise ones of the first-order phase description, in the hope that these may apply to mixed-coupling systems. This new method for coupled systems with known phase dynamics of the units gives correct results but, like most comparable methods, is computationally expensive. It is applied to three Stuart-Landau oscillators coupled in a line with a uniform coupling strength, and a numerical method is derived to verify the analytical results. These results are interesting but lend importance to simpler phase models that still exhibit exotic states. Such simple, rarely considered models are Kuramoto oscillators with attractive and repulsive interactions. Depending on how the units are coupled and on the frequency difference between them, many different states can be achieved. Rich synchronization dynamics, such as a Bellerophon state, are observed in a Kuramoto model with attractive interactions within two subpopulations (groups) and repulsive interactions between the groups.
In two groups of identical oscillators with a frequency difference, one coupled attractively and one repulsively, an interesting solitary state appears directly between full and partial synchrony. This system can be described very well analytically.
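A minimal numerical sketch of such a two-group Kuramoto system can make the setup concrete; the parameters and the Euler integration scheme below are illustrative choices, not those used in the thesis. Each group is internally attractive, the cross-group coupling is repulsive, and the groups differ in natural frequency; the per-group Kuramoto order parameter measures synchrony.

```python
import math, random

def simulate(n=50, k_in=1.0, k_out=-0.5, dw=0.1, dt=0.05, steps=4000):
    """Euler integration of a two-group Kuramoto model: attractive coupling
    inside each group (k_in > 0), repulsive coupling between the groups
    (k_out < 0), and a frequency difference dw between the groups."""
    random.seed(1)
    phases = [[random.uniform(0.0, 2.0*math.pi) for _ in range(n)]
              for _ in range(2)]
    omega = [0.0, dw]                    # oscillators identical within a group
    for _ in range(steps):
        # Complex mean field of each group, computed once per step.
        mf = []
        for g in range(2):
            c = sum(math.cos(p) for p in phases[g]) / n
            s = sum(math.sin(p) for p in phases[g]) / n
            mf.append((c, s))
        new = [[0.0] * n for _ in range(2)]
        for g in range(2):
            for i, th in enumerate(phases[g]):
                cth, sth = math.cos(th), math.sin(th)
                drive = 0.0
                for h in range(2):
                    k = k_in if h == g else k_out
                    c, s = mf[h]
                    # (1/n) * sum_j sin(theta_j - theta_i) via the mean field
                    drive += k * (s*cth - c*sth)
                new[g][i] = th + dt * (omega[g] + drive)
        phases = new
    return phases

def order_parameter(group):
    """Kuramoto order parameter r in [0, 1]; r = 1 means full synchrony."""
    c = sum(math.cos(p) for p in group)
    s = sum(math.sin(p) for p in group)
    return math.hypot(c, s) / len(group)

r1, r2 = (order_parameter(g) for g in simulate())
print(round(r1, 2), round(r2, 2))  # each group locks internally: r close to 1
```

With attractive intra-group coupling, each group synchronizes internally while the repulsive inter-group coupling and the frequency difference control the relative motion of the two clusters; richer states such as solitary or Bellerophon states appear for other parameter combinations.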
During sentence reading the eyes quickly jump from word to word to sample visual information with the high acuity of the fovea. Lexical properties of the currently fixated word are known to affect the duration of the fixation, reflecting an interaction of word processing with oculomotor planning. While low-level properties of words in the parafovea can likewise affect the current fixation duration, results concerning the influence of lexical properties have been ambiguous (Drieghe, Rayner, & Pollatsek, 2008; Kliegl, Nuthmann, & Engbert, 2006). Experimental investigations of such lexical parafoveal-on-foveal effects using the boundary paradigm have instead shown that lexical properties of parafoveal previews affect fixation durations on the upcoming target words (Risse & Kliegl, 2014). However, the results were potentially confounded with effects of preview validity.
The notion of parafoveal processing of lexical information challenges extant models of eye movements during reading. Models containing serial word processing assumptions have trouble explaining such effects, as they usually couple successful word processing to saccade planning, resulting in skipping of the parafoveal word. Although models with parallel word processing are less restricted, in the SWIFT model (Engbert, Longtin, & Kliegl, 2002) only processing of the foveal word can directly influence the saccade latency.
Here we combine the results of a boundary experiment (Chapter 2) with a predictive modeling approach using the SWIFT model, where we explore mechanisms of parafoveal inhibition in a simulation study (Chapter 4). We construct a likelihood function for the SWIFT model (Chapter 3) and utilize the experimental data in a Bayesian approach to parameter estimation (Chapter 3 & 4).
The experimental results show a substantial effect of parafoveal preview frequency on fixation durations on the target word, which can be clearly distinguished from the effect of preview validity. Using the eye movement data from the participants, we demonstrate the feasibility of the Bayesian approach even for a small set of estimated parameters by comparing summary statistics of experimental and simulated data. Finally, we show that the SWIFT model can account for the lexical preview effects when a mechanism for parafoveal inhibition is added. The effects of preview validity were modeled best when processing-dependent saccade cancellation was added for invalid trials. In the simulation study, only the control condition of the experiment was used for parameter estimation, allowing for cross-validation. Simultaneously, the number of free parameters was increased. High correlations of summary statistics demonstrate the capabilities of the parameter estimation approach. Taken together, the results advocate for a better integration of experimental data into computational modeling via parameter estimation.
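The likelihood-based estimation strategy can be illustrated with a deliberately simplified stand-in; the actual SWIFT likelihood is model-specific and far richer. Here fixation durations are simply assumed to be Gamma-distributed, and the posterior of the scale parameter is evaluated on a grid under a flat prior.

```python
import math, random

# Toy illustration of Bayesian parameter estimation from fixation durations
# (an assumed Gamma model, NOT the SWIFT likelihood).
random.seed(0)
shape, true_scale = 9.0, 25.0            # mean duration = 9 * 25 = 225 ms
data = [random.gammavariate(shape, true_scale) for _ in range(400)]

def log_lik(scale):
    """Gamma log-likelihood of the sample for a fixed shape parameter."""
    return sum((shape - 1.0) * math.log(d) - d / scale
               - shape * math.log(scale) - math.lgamma(shape) for d in data)

grid = [s / 2.0 for s in range(30, 81)]  # candidate scales: 15 .. 40 ms
ll = [log_lik(s) for s in grid]
m = max(ll)                              # subtract the max before exponentiating
post = [math.exp(v - m) for v in ll]     # flat prior: posterior ∝ likelihood
z = sum(post)
post_mean = sum(s * p for s, p in zip(grid, post)) / z
print(round(post_mean, 1))               # close to the true scale of 25
```

In the actual study the grid is replaced by Markov chain Monte Carlo sampling and the likelihood couples fixation durations and saccade targets through the SWIFT dynamics, but the logic of weighting parameter values by the data likelihood is the same.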
This work presents a multidisciplinary investigation combining methods of tectonic geomorphology with geophysical and structural studies, focused mainly on the neotectonic characterization of both flanks of the Sierra de La Candelaria and the southern end of the Metán Basin. The study area is located in the border region between the provinces of Salta and Tucumán and belongs to the Santa Bárbara System geological province.
The main objective was to contextualize the evidence of Quaternary tectonic activity in the region by proposing a novel structural model, with the aim of expanding the available information on neotectonic structures and their seismogenic potential. To this end, several techniques were applied and integrated, including the interpretation of seismic reflection lines, the construction of balanced structural cross-sections, and shallow geophysical methods, in order to verify the behavior at depth of both the geological structures identified at the surface and the possible crustal blind faults involved.
First, a regional survey of the study area was carried out using LANDSAT and SENTINEL-2 multispectral satellite images, which allowed different levels of Quaternary alluvial fans and fluvial terraces to be recognized. By determining various morphometric indicators on digital elevation models (DEM), together with field observations, it was possible to identify evidence of deformation on these Quaternary levels, genetically related to four neotectonic faults. Three of them (the Arias, El Quemado, and Copo Quile faults) were selected for more detailed studies using shallow geophysical methods (electrical resistivity tomography (ERT) and seismic refraction tomography (SRT)), which made it possible to corroborate their existence at depth, draw geometric and kinematic inferences, and estimate the magnitude of recent deformation. The Arias and El Quemado faults were interpreted as reverse faults related to interstratal flexural slip, whereas the Copo Quile fault was interpreted as a low-angle blind reverse fault. A joint interpretation of seismic reflection lines and exploration wells from hydrocarbon areas of the Choromoro and Metán basins was also carried out in order to place the main recognized structures in the regional stratigraphic and tectonic framework. All the information was integrated into a balanced structural cross-section using kinematic modeling techniques. This model suggests that the recognized Quaternary deformation is related to displacement of the basement along a blind thrust responsible for the uplift of the Sierra de La Candelaria and Cerro Cantero. The kinematic model also allows the approximate location of the main detachment levels controlling the deformation style to be interpreted.
The shallowest detachment level, which controls the deformation of the sedimentary cover, lies at a depth of 4 km; another subhorizontal shear zone within the basement is estimated at 21 km.
Finally, by integrating all the results obtained, the seismogenic potential of the faults in the study area was evaluated. The first-order faults that control deformation in the area are responsible for large earthquakes, whereas the Quaternary flexural-slip and reverse faults affect only the sedimentary cover; they are interpreted as second-order structures that accommodate deformation and were activated during the Quaternary by aseismic movements and/or earthquakes of very low magnitude.
These results suggest that the La Candelaria thrust constitutes an important potential seismogenic source for the region, which hosts numerous towns and major civil works. Moreover, the balanced structural cross-section implies the presence of other blind faults of different orders of magnitude that could be additional deep seismogenic sources, underscoring the need to continue this type of study in this tectonically active region.
The eastern flank of the Central Andes in northwestern Argentina is characterized by ranges bounded by reverse faults that form an active thick-skinned orogen with a non-systematic spatio-temporal pattern of contractional deformation. This pattern is reflected both in the dispersion of crustal seismic activity and in the location of Quaternary structures across the Eastern Cordillera and the Santa Bárbara System, configuring a diffuse orogenic front more than 200 km wide. The study of neotectonic activity in this region has gained relevance in recent years through the application of a variety of tools, including tectonic geomorphology techniques, remote sensing, geodesy, and conventional field studies. Lacustrine deposits have proven, in numerous examples, to be excellent markers of tectonic activity, given the original horizontality of their layers and their susceptibility to environmental changes. For this reason, this work analyzes the lacustrine deposits exposed in the central sector of the Calchaquí valleys (Cafayate region) in order to understand how Quaternary deformation is accommodated in one of the intermontane basins of the active orogenic wedge.
The strike of the Quaternary structures in the study area is subparallel to that of the faults that exhume the surrounding mountain ranges. Based on the stratigraphic, morphotectonic, and structural study of the lacustrine deposits, a minimum of five deformation episodes affecting the Quaternary stratigraphic column were identified. By integrating balanced structural cross-sections with ages obtained in this work and compiled from the literature, minimum and maximum shortening rates of 0.19–2.80 and 0.21–4.47 mm/a, respectively, were calculated for the middle-late Pleistocene. To compare these results with measurements of active tectonics at the regional scale, data from geodetic stations in northwestern Argentina were compiled and used to construct a horizontal velocity profile. The resulting profile shows a gradual eastward decrease of the velocity vectors, indicating internal activity of the orogen, consistent with the records of seismic activity and the regional compilation of Quaternary structures.
In addition to the neotectonic characterization of this sector of the Eastern Cordillera, the stratigraphic analysis of the lacustrine deposits has refined the geological evolution of the central sector of the Calchaquí valleys during the Quaternary. At least seven lacustrine flooding episodes were identified, related to the disconnection of the fluvial system from its base level, giving rise to successive events of aggradation and erosion. The maximum elevations reached by the paleolakes, together with a previously published hydrological model for this region, also allowed a comparison with the regional paleoclimatic record.
The results of this thesis represent a significant contribution to the knowledge of the tectonic and stratigraphic evolution of the central sector of the Calchaquí valleys during the Quaternary. Moreover, their integration at the regional scale contributes to a better understanding of deformation dynamics in the thick-skinned orogenic wedge of northwestern Argentina.
This dissertation was carried out as part of the international and interdisciplinary research training group StRATEGy, whose goal is to investigate geological processes that act on different temporal and spatial scales and have shaped the southern Central Andes. This study focuses on claystones and carbonates of the Yacoraite Fm. that were deposited between the Maastrichtian and the Danian in the Cretaceous Salta Rift Basin. The former rift basin is located in northwestern Argentina and is divided into the Tres Cruces, Metán-Alemanía, and Lomas de Olmedo sub-basins. The overall motivation for this study was to gain new insights into the evolution of marine and lacustrine conditions during deposition of the Yacoraite Fm. in the Tres Cruces and Metán-Alemanía sub-basins. Further important aspects examined within the scope of this dissertation are the conversion of organic matter of the Yacoraite Fm. into oil and its genetic relationship to selected produced oils and natural oil seeps. The results of my study show that deposition of the Yacoraite Fm. began under marine conditions and that a lacustrine environment had developed by the end of deposition in the Tres Cruces and Metán-Alemanía basins. In general, the kerogen of the Yacoraite Fm. consists mainly of kerogen types II, III, and II/III mixtures. Kerogen type III is mainly found in samples of the Yacoraite Fm. with low TOC values. Due to the adsorption of hydrocarbons on mineral surfaces (mineral matrix effect), the content of type III kerogen determined by Rock-Eval pyrolysis in these samples could be overestimated. Organic petrography shows that the organic particles of the Yacoraite Fm. consist mainly of alginites and some vitrinite-like particles. Pyrolysis gas chromatography of the rock samples showed that the Yacoraite Fm. generates low-sulfur oils with a predominantly low-wax, paraffinic-naphthenic-aromatic composition as well as paraffinic, wax-rich oils.
Small proportions of paraffinic, low-wax oils and a gas-condensate-generating facies are also predicted. Here, too, mineral matrix effects were taken into account, which can lead to a quantitative overestimation of the gas-generating character.
The results of an additional 1D basin modeling study show that the onset (10% TR) of oil generation occurred between ≈10 Ma and ≈4 Ma. Most of the oil (≈50% to 65%) was generated prior to the development of the structural traps formed during the Plio-Pleistocene Diaguita deformation phase. Only ≈10% of the total oil generated was formed, and potentially trapped, after the formation of structural traps. Important factors in the risk assessment of this petroleum system, which may account for the small amounts of generated and migrated oil, are the generally low TOC contents and the variable thickness of the Yacoraite Fm. Additional risks are associated with the low density of information about potentially existing reservoir structures and the quality of the overlying seal rocks.
This paper-based dissertation aims to contribute to the open innovation (OI) and technology management (TM) research fields by investigating their mechanisms and potential at the operational level. The dissertation connects the well-known concept of technology management with OI formats and applies them to specific manufacturing technologies within a clearly defined setting.
Technological breakthroughs force firms to continuously adapt and reinvent themselves. The pace of technological innovation and its impact on firms are constantly increasing due to more connected infrastructure and accessible resources (i.e., data and knowledge). Especially in the manufacturing sector, leveraging new technologies is a key element of staying competitive. These technological shifts call for new management practices.
TM supports firms with various tools to manage these shifts at different levels of the firm. It is a multifunctional and multidisciplinary field, as it deals with all aspects of integrating technological issues into business decision-making and is directly relevant to a number of core business processes. It therefore makes sense to use this theory and its practices as the foundation of this dissertation. However, considering the increasing complexity and number of technologies, it is no longer sufficient for firms to rely only on established internal R&D and managerial practices. OI can expand these practices by involving distributed innovation processes and accessing additional external knowledge sources. This expansion can increase innovation performance and thereby accelerate the time-to-market of technologies.
Research in this dissertation was based on the expectation that OI formats support the R&D activities for manufacturing technologies at the operational level by providing access to resources, knowledge, and leading-edge technology. The dissertation is distinctive for its rich practical data sets (observations, internal documents, project reviews) drawn from a very large German high-tech firm; the researcher was embedded in an R&D unit within the operational TM department for manufacturing technologies. The analyses include 1.) an exploratory in-depth analysis of a crowdsourcing (CS) initiative to elaborate its impact on specific manufacturing technologies, 2.) a deductive approach to developing a technology evaluation score model that creates a common understanding of the value of selected manufacturing technologies at the operational level, and 3.) an abductive reasoning approach in the form of a longitudinal case study to derive important indicators for the in-process activities of a science-based university-industry collaboration format. The dissertation thereby contributes to research and practice 1.) linkages of TM and OI practices for assimilating technologies at the operational level, 2.) insights into the impact of CS on manufacturing technologies and a related guideline for executing CS initiatives in this specific environment, 3.) the introduction of manufacturing readiness levels and further criteria into the TM and OI research fields to support decision-makers in gaining a common understanding of the maturity of manufacturing technologies, and 4.) context-specific indicators for science-based university-industry collaboration projects and a holistic framework connecting TM with the university-industry collaboration approach.
The findings of this dissertation illustrate that OI formats can accelerate the time-to-market of manufacturing technologies and further improve the technical requirements of the product by leveraging external capabilities. The conclusions and implications are intended to foster further research and improve managerial practices so that TM evolves into an open, collaborative context with interconnections between all involved internal and external technologies, individuals, and organizational levels.
This publication of the doctoral thesis "Nutzungsfokussierte Evaluation in der Lehrkräftefortbildung Belcantare Brandenburg für musikunterrichtende Grundschullehrer*innen im ländlichen Raum" (utilization-focused evaluation of the teacher training programme Belcantare Brandenburg for primary school teachers of music in rural areas) presents an actor-oriented, exploratory evaluation. Since 2011, the Landesmusikrat Brandenburg e.V., in cooperation with several institutions, has been conducting this two-year training programme across the regions of the state of Brandenburg for trained music teachers as well as teachers teaching music out of field, focusing on the competence areas of singing and song didactics.
The underlying evaluation approach places the interests of the cooperating partners, who intend to draw practical consequences from the evaluation results, at the centre of the research; it is thus commissioned research. The evaluation serves to assure and optimize the quality of the training programme's content, to expand knowledge about the design of subject-didactic coaching, to make the research results visible for purposes of legitimation and participation, and to provide analytical decision support for continuing Belcantare Brandenburg beyond 2022.
The research concerns brought to the author by the actors were condensed into four questions:
1. How satisfied are the participants with the series of events?
2. Which professional, didactic, and personal developments do the participating teachers perceive over the course of the training period?
3. How do those involved in the coaching assess the opportunities and limits of music-didactic coaching as a form of professional development?
4. What conclusions about professional teacher training can be drawn from comparing the empirical findings with those of theory?
These research questions were answered in two research phases:
1. The empirical data corpus was compiled between 2011 and 2015. During this period, research questions 1, 2, and 3 were particularly relevant for the project-accompanying quality assurance and continuation of the pilot and follow-up cohorts of Belcantare Brandenburg. The evaluation study is exploratory in design: the variables for research questions 1 and 2 were derived successively through document analyses and interviews with the project management and participating teachers. Likewise, the semi-closed questionnaires, as the central survey instruments for research questions 1 and 2, reflect the exploratory character and ensured that the participants (N=40) were given the opportunity to contribute their own perspectives. With an overall grade of "very good" (1.39) from the surveyed teachers, the design of the event series counts as a best-practice example: for the teachers, the essential criteria for making use of such a professionalization measure are the action-oriented development of teaching content, learning objects, and matching materials that suit their pupils, fit thematically, and can be used immediately or practiced repeatedly. The teacher development in both cohorts studied shows that out-of-field teachers perceive greater growth after the end of the project than trained subject teachers. At the same time, the self-assessed subject competence of the out-of-field teachers at the end of the training remains below that of the trained teachers.
Research question 3 is based on an exclusively qualitative design (N=16). As a result, the open form of subject-didactic coaching could be defined, its parameters described, and essential properties of coach constellations for internally differentiated coaching in teacher training identified.
2. In May 2019, against the background of the worsening shortage of qualified teachers in Brandenburg, the cooperation partners resolved to continue the teacher training programme beyond 2022 as a quality-assurance measure. This situation led in 2019 to the addition of research question 4, which implied a comprehensive and updated analysis of the theoretical and educational-policy background of the intervention, with the aim of deepening the evaluation's findings for a renewed recommendation. Addressing and designing self-directed learning processes in professionalizing teacher training emerged here as a central feature of an innovative learning culture.
The publication is divided into four parts: Part I presents the state of research on professionalizing teacher training from the perspectives of educational science and music pedagogy. Part II establishes the complex context of justification for the object of evaluation. Part III contains the evaluation study. Its inductively derived findings are compared with the state of research on professionalizing teacher training in Part IV.
With ongoing anthropogenic global warming, some of the most vulnerable components of the Earth system might become unstable and undergo a critical transition. These subsystems are the so-called tipping elements. They are believed to exhibit threshold behaviour and would, if triggered, result in severe consequences for the biosphere and human societies. Furthermore, it has been shown that climate tipping elements are not isolated entities, but interact across the entire Earth system. Therefore, this thesis aims at mapping out the potential for tipping events and feedbacks in the Earth system mainly by the use of complex dynamical systems and network science approaches, but partially also by more detailed process-based models of the Earth system.
In the first part of this thesis, the theoretical foundations are laid by investigating networks of interacting tipping elements. For this purpose, the conditions for the emergence of global cascades are analysed for paradigmatic network types such as Erdős-Rényi, Barabási-Albert, Watts-Strogatz, and explicitly spatially embedded networks. Furthermore, micro-scale structures are detected that are decisive for the transition from local to global cascades. These so-called motifs link the micro- to the macro-scale in the network of tipping elements. Alongside a model description paper, all these results were incorporated into the Python software package PyCascades, which is publicly available on GitHub.
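The cascade mechanism described here can be illustrated with a minimal threshold model. This is a hypothetical sketch for intuition only, not the PyCascades API: each node tips once the forcing from already-tipped neighbours, assumed here to be a uniform coupling strength per edge, reaches its individual threshold.

```python
from collections import deque

def simulate_cascade(neighbors, thresholds, coupling, seeds):
    """Propagate tipping through a directed network.

    A node tips once the accumulated forcing from its tipped
    in-neighbours (uniform `coupling` per edge) reaches its threshold.
    Returns the set of tipped nodes.
    """
    tipped = set(seeds)
    queue = deque(seeds)
    forcing = {n: 0.0 for n in thresholds}  # forcing from tipped neighbours
    while queue:
        node = queue.popleft()
        for nb in neighbors.get(node, ()):
            if nb in tipped:
                continue
            forcing[nb] += coupling
            if forcing[nb] >= thresholds[nb]:
                tipped.add(nb)
                queue.append(nb)
    return tipped

# Toy network: A and B both feed into C, which needs two tipped neighbours;
# C then feeds into D.
net = {"A": ["C"], "B": ["C"], "C": ["D"], "D": []}
th = {"A": 0.5, "B": 0.5, "C": 1.0, "D": 0.5}
print(sorted(simulate_cascade(net, th, coupling=0.6, seeds=["A", "B"])))
# ['A', 'B', 'C', 'D']
```

Tipping only A leaves C below its threshold (0.6 < 1.0), so no global cascade emerges; tipping A and B together triggers the full cascade, which is the kind of micro-scale motif effect studied in the thesis.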
In the second part of this dissertation, the tipping element framework is first applied to components of the Earth system such as the cryosphere and parts of the biosphere, and then to a set of interacting climate tipping elements on a global scale. Using the Earth system Model of Intermediate Complexity (EMIC) CLIMBER-2, the temperature feedbacks are quantified that would arise if some of the large cryosphere elements disintegrated over a long span of time. The cryosphere components investigated are the Arctic summer sea ice, the mountain glaciers, and the Greenland and West Antarctic Ice Sheets. The committed temperature increase, should these ice masses disintegrate, is of the order of an additional half degree on a global average (0.39-0.46 °C), while local to regional additional temperature increases can exceed 5 °C. This means that, once tipping has begun, additional reinforcing feedbacks are able to increase global warming and, with it, the risk of further tipping events.
This is also the case in the Amazon rainforest, whose parts depend on each other via the so-called moisture-recycling feedback. In this thesis, the importance of drought-induced tipping events in the Amazon rainforest is investigated in detail. Although the Amazon rainforest is assumed to be adapted to past environmental conditions, tipping events are found to increase sharply if drought conditions become too intense within too short a time, outpacing the forest's adaptive capacity. In these cases, the frequency of tipping cascades also rises to 50% (or more) of all tipping events. In the model developed in this study, the southeastern region of the Amazon basin is hit hardest by the simulated drought patterns. This is also the region that already suffers heavily from extensive human-induced changes due to large-scale deforestation, cattle ranching, and infrastructure projects.
Moreover, at the larger, Earth-system-wide scale, a network of conceptualised climate tipping elements is constructed in this dissertation, drawing on an extensive literature review, expert knowledge, and topological properties of the tipping elements. Tipping cascades are detected even under modest scenarios of climate change that limit global warming to 2 °C above pre-industrial levels. In addition, the structural roles of the climate tipping elements in the network are revealed: while the large ice sheets on Greenland and Antarctica are the initiators of tipping cascades, the Atlantic Meridional Overturning Circulation (AMOC) acts as the transmitter of cascades. Furthermore, in our conceptual climate tipping element model, the ice sheets are found to be of particular importance for the stability of the entire system of investigated climate tipping elements.
In the last part of this thesis, the results from the temperature feedback study with the EMIC CLIMBER-2 are combined with the conceptual model of climate tipping elements. It is observed that the likelihood of further tipping events increases slightly due to the temperature feedbacks, even if no further CO$_2$ were added to the atmosphere.
Although the developed network model is conceptual in nature, this work makes it possible for the first time to quantify the risk of tipping events between interacting components of the Earth system under global warming scenarios while simultaneously allowing for dynamic temperature feedbacks.
The self-assembly of amphiphilic polymers in aqueous systems is important for a plethora of applications, in particular in the fields of cosmetics and detergents. When thermoresponsive blocks are introduced, the aggregation behavior of these polymers can be controlled by changing the temperature. While long confined to simple diblock copolymer systems, the complexity, and thus the versatility, of such smart systems can be greatly enlarged once designed monomers, specific block sizes, different architectures, or additional functional groups such as hydrophobic stickers are implemented. In this work, the structure-property relationship of such thermoresponsive amphiphilic block copolymers was investigated by varying their structure systematically. The block copolymers were generally composed of a permanently hydrophobic sticker group, a permanently hydrophilic block, and a thermoresponsive block exhibiting Lower Critical Solution Temperature (LCST) behavior. While the hydrophilic block consisted of N,N-dimethylacrylamide (DMAm), different monomers were used for the thermoresponsive block, such as N-n-propylacrylamide (NPAm), N-iso-propylacrylamide (NiPAm), N,N-diethylacrylamide (DEAm), N,N-bis(2-methoxyethyl)acrylamide (bMOEAm), and N-acryloylpyrrolidine (NAP), with reported LCSTs of 25, 32, 33, 42, and 56 °C, respectively. The block copolymers were synthesized by successive reversible addition-fragmentation chain transfer (RAFT) polymerization. For the polymers with the basic linear, the twinned hydrophobic, and the symmetrical quasi-miktoarm architectures, this yielded well-defined block sizes and end groups as well as narrow molar mass distributions (Ɖ ≤ 1.3). More complex architectures, such as the twinned thermoresponsive and the non-symmetrical quasi-miktoarm ones, were achieved by combining RAFT polymerization with a second technique, namely atom transfer radical polymerization (ATRP) or single unit monomer insertion (SUMI), respectively.
The obtained block copolymers showed well-defined block sizes, but due to the complexity of these reaction paths, the dispersities were generally higher (Ɖ ≤ 1.8) and some end groups were lost.
The thermoresponsive behavior of the block copolymers was investigated by turbidimetry and dynamic light scattering (DLS). Below the phase transition temperature, the polymers were soluble in water and small micellar structures were visible. Above the phase transition temperature, however, the aggregation behavior depended strongly on the architecture and on the chemical structure of the thermoresponsive block. Thermoresponsive blocks comprising PNAP and PbMOEAm with DPn = 40 showed no cloud point (CP), since their already high LCSTs were further increased by the attached hydrophilic block. Depending on the architecture as well as on the block size, block copolymers with PNiPAm, PDEAm, and PNPAm showed different CPs. Large aggregates were visible for block copolymers with PNiPAm and PDEAm above their CP. For PNPAm-containing block copolymers, the phase transition was very sensitive to the architecture, resulting in either small or large aggregates.
In addition, fluorescence studies were performed using PDMAm and PNiPAm homo- and block copolymers with linear architecture, functionalized with complementary fluorescence dyes introduced at opposite chain ends. The thermoresponsive behavior was studied in pure aqueous solution as well as in an oil-in-water (o/w) microemulsion. The findings indicate that at low temperatures the block copolymer behaves as a polymeric surfactant, with one relatively small hydrophobic end group and an extended hydrophilic chain forming 'hairy micelles' similar to the other synthesized architectures. Above the phase transition temperature of the PNiPAm block, however, the copolymer behaves as an associative telechelic polymer with two non-symmetrical hydrophobic end groups, which do not mix. Thus, instead of a network of bridged 'flower micelles', large dynamic aggregates are formed, connected alternatingly by the original micellar cores and by clusters of the collapsed PNiPAm blocks. This type of bridged micelle is even more favored in the o/w microemulsion than in pure aqueous solution.
The multifaceted nature of the angle concept is as fascinating as it is challenging with regard to its introduction in school mathematics. Starting from various conceptions of angles, this thesis develops a teaching sequence for conveying the angle concept and ultimately translates it into concrete implementations for classroom use.
First, a subject-didactic analysis of the angle concept is carried out, accompanied by an information-theoretic definition of the angle. Here, a definition of the angle concept is developed around the question of which information about an angle is needed in order to describe it. In this way, the angle conceptions found in the didactic literature can be re-derived and validated from a mathematical perspective. In parallel, a procedure is described for processing angles computationally, including dynamic aspects, so that consequences of the information-theoretic angle definition become available, for example, in dynamic geometry systems.
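As a minimal illustration of this information-theoretic view (a sketch for intuition, not the procedure developed in the thesis), a static angle can be fully encoded by its vertex together with the directions of its two legs, from which its measure follows:

```python
import math

def angle_measure(vertex, p1, p2):
    """Measure (in degrees, folded to [0, 180]) of the angle at `vertex`
    spanned by the rays towards p1 and p2.

    Illustrates that a vertex plus two leg directions is sufficient
    information to describe a static angle.
    """
    ax, ay = p1[0] - vertex[0], p1[1] - vertex[1]
    bx, by = p2[0] - vertex[0], p2[1] - vertex[1]
    # difference of the two leg directions, reduced to the smaller angle
    ang = math.degrees(math.atan2(by, bx) - math.atan2(ay, ax)) % 360.0
    return min(ang, 360.0 - ang)

print(angle_measure((0, 0), (1, 0), (0, 1)))
# 90.0
```

A dynamic geometry system would update the three points interactively and recompute the measure from exactly the same pieces of information.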
With regard to how the angle concept can be abstracted in mathematics lessons, the notion of Grundvorstellungen (basic mental models) and the teaching strategy of ascending from the abstract to the concrete are related to each other. From the combination of the two theories, a general path is derived for building, within the teaching strategy, an initial abstraction of individual angle aspects; this is intended to enable the generation of basic mental models of the components of the respective angle aspect and of operating with these components. For this purpose, the teaching strategy is adapted, in particular to realize the transition from angle situations to angle contexts. Explicitly for the aspect of the angle field, learning actions and requirements for a learning model that support pupils in acquiring the concept are described on the basis of investigating the fields of vision of animals.
Activity theory, to which the above teaching strategy belongs, runs as a common thread through the remainder of the thesis as design principles are generated on a theoretical basis and lead to the development of an interactive learning environment. Among other things, the model of Artifact-Centric Activity Theory is used, which describes the structure of relations between pupils, the mathematical object, and an app to be developed as a mediating medium; the use of the app in the classroom context and its rule-guided development are part of the model. Following the approach of didactical design research, the learning environment is then tested, evaluated, and revised in several cycles. A qualitative setting is applied that draws on semiotic mediation and examines to what extent the quality of the learning actions shown by the pupils can be explained by the design principles and their implementation. The thesis concludes with a final version of the design principles and a resulting learning environment for introducing the concept of the angle field in the fourth grade.
This thesis deals with the synthesis and characterization of new functionalized ionic liquids and their polymerization. The ionic liquids were prepared with polymerizable cations as well as polymerizable anions. Azobis(isobutyronitrile) (AIBN) was used as the radical initiator for thermally initiated polymerizations, while bis-4-(methoxybenzoyl)diethylgermanium (Ivocerin®) served as the initiator for photochemically initiated polymerizations.
The homopolymer poly(dimethylaminoethyl methacrylate) was examined by gel permeation chromatography (GPC) and only afterwards modified in a polymer-analogous fashion. After quaternization and subsequent anion metathesis, the intrinsic viscosities of these polymers were determined and compared with those of the directly polymerized ionic liquids. For the direct polymerization of poly(N-[2-(methacryloyloxy)ethyl]-N-butyl-N,N-dimethylammonium bis(trifluoromethylsulfonyl)imide), [η_Huggins] was 100 mL/g, whereas the polymer prepared by polymer-analogous modification gave [η_Huggins] = 40 mL/g.
The ionic liquids with polymerizable functional groups were examined by photo-DSC with respect to the maximum polymerization rate (Rpmax), the time at which this maximum was reached (tmax), the glass transition temperature (Tg), and the conversion of vinyl protons. These measurements analysed both the influence of different alkyl chain lengths at the ammonium ion and the influence of different anions with a constant cation structure. The ethyl-substituted cation polymerized most slowly, with a tmax of 21 seconds; its maximum polymerization rate (Rpmax) was 3.3·10⁻² s⁻¹. The tmax values of the other alkyl-substituted ionic liquids with one polymerizable functional group lay between 10 and 15 seconds. The glass transition temperatures of the polymers prepared by photoinduced polymerization were close together, at 44 to 55 °C. All monomers showed a high conversion of vinyl protons, between 93 and 100%.
Polymer films were produced on a conveyor unit equipped with an LED (λ = 395 nm). The conversion of double-bond equivalents in these films was determined by ¹H NMR spectroscopy. In dynamic mechanical analysis, the polymer films were subjected to periodically alternating loads at a constant heating rate and frequency in order to determine the glass transition temperatures. The lowest Tg, 26 °C, was found for the butyl-substituted N-[2-(methacryloyloxy)ethyl]-N-butyl-N,N-dimethylammonium bis(trifluoromethylsulfonyl)imide prepared as a polymer film with Ivocerin® as initiator, whereas the highest Tg, 51 °C, was found for the same polymer prepared directly by free radical bulk polymerization of the ionic liquid with AIBN. In addition, the topography of the films was examined with an atomic force microscope, which revealed a domain structure for the polymer N-[2-(methacryloyloxy)ethyl]-N-butyl-N,N-dimethylammonium tris(pentafluoroethyl)trifluorophosphate.
Summary of the dissertation "Neuartige DBD-Fluoreszenzfarbstoffe: Synthese, Untersuchungen und Anwendungen" (Novel DBD fluorescent dyes: synthesis, investigations, and applications) by Leonard John
In this work, two new concepts for the preparation of unsymmetrically functionalized DBD fluorophores were developed on the basis of the established [1,3]-dioxolo[4,5-f][1,3]benzodioxole (DBD) fluorescent dyes. Varying the electron-withdrawing groups extended the colour spectrum of DBD fluorophores, while all other spectroscopic parameters (fluorescence lifetime, fluorescence quantum yield, and STOKES shift) retained their high values. In addition to varying the electron-withdrawing groups, the π system of the DBD dye was enlarged by introducing stilbene and tolane derivatives. The stilbene derivatives showed spectroscopic properties similarly good to those of the already established DBD dyes.
Fluorophores with long-wavelength emission are particularly interesting for biological applications because of their large tissue penetration depth. Since the longest-wavelength representative of the O4-DBD dyes is poorly soluble in polar media, a route for introducing solubilizing groups was sought. A carboxylic acid group was chosen to increase hydrophilicity. One of four investigated methods proved successful, so that the desired molecule could be isolated. Increased water solubility was, however, not observed.
Fluorescence-labelled lipids are needed for research into lipid metabolism disorders such as ALZHEIMER's disease. In order to probe different regions of a membrane, the aim was to place the fluorophore at different positions within the fatty acid, with the total chain length of the DBD lipid corresponding to a C18 chain, analogous to stearic acid. Through stepwise introduction of the substituents, three DBD lipids were prepared with the fluorophore located at different positions within the chain. The photophysical properties of the lipids deviate only marginally from those of the pure fluorophores. Incorporation into giant unilamellar vesicles (GUVs) was observed for two derivatives, although neither was domain-specific.
A further aim of this work was to replace the four oxygen atoms in the DBD core stepwise with sulfur atoms and to vary the ring sizes of the DBD fluorophore. Regarding ring size, the 1,2-S2-DBD with two five-membered rings showed the best spectroscopic properties. By synthesizing two further sulfur-containing DBD cores (S1- and 1,4-S2-DBD), a total of three new dye classes became accessible. Electron-withdrawing groups (aldehyde, acyl, ester, carboxy) were introduced for all new chromophores, and the respective derivatives were characterized spectroscopically. With an increasing number of sulfur atoms in the core, a bathochromic shift of the emission is observed, while the values for fluorescence lifetime and quantum yield decrease. The best combination of long-wavelength emission, high fluorescence lifetime, and high quantum yield is shown by the 1,4-S2-dialdehyde derivative. For the S1- and 1,2-S2-dialdehyde derivatives, concepts were developed to introduce bioreactive groups (alkyne, HOSu, maleimide) and thus enable the application of the fluorophores in biological systems.
The Internet of Things (IoT) is a system of physical objects that can be discovered, monitored, controlled, or interacted with by electronic devices that communicate over various networking interfaces and can eventually be connected to the wider Internet [Guinard and Trifa, 2016]. IoT devices are equipped with sensors and/or actuators and may be constrained in terms of memory, computational power, network bandwidth, and energy. Interoperability, the ability of different types of systems to work together smoothly, can help to manage such heterogeneous devices. There are four levels of interoperability: physical, network and transport, integration, and data. Data interoperability is further subdivided into syntactic and semantic interoperability. Semantic data describes the meaning of data and a common understanding of vocabulary, e.g., with the help of dictionaries, taxonomies, and ontologies. To achieve interoperability overall, semantic interoperability is necessary.
Many organizations and companies are working on standards and solutions for interoperability in the IoT. However, commercial solutions produce vendor lock-in and focus on centralized approaches such as cloud-based solutions. This thesis proposes a decentralized approach, namely edge computing, which is based on the concepts of mesh networking and distributed processing. Its advantage is that information collection and processing are placed closer to the sources of that information. The goals are to reduce traffic and latency, and to be robust against a lossy or failed Internet connection.
We view the management of IoT devices from the perspective of network configuration management. This thesis proposes a framework for the network configuration management of heterogeneous, constrained IoT devices that uses semantic descriptions for interoperability. MYNO is an acronym for MQTT, YANG, NETCONF, and Ontology. The NETCONF protocol is the IETF standard for network configuration management, and the MQTT protocol is the de facto standard in the IoT. We picked up the idea of the NETCONF-MQTT bridge, originally proposed by Scheffler and Bonneß [2017], and extended it with semantic device descriptions. These descriptions capture the device capabilities; they are based on the oneM2M Base Ontology and formalized using Semantic Web standards.
The novel approach uses an ontology-based device description directly on a constrained device in combination with the MQTT protocol. The bridge was extended to query such descriptions. Through semantic annotation, the device capabilities become self-descriptive, machine-readable, and reusable.
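The idea of a self-descriptive, machine-readable device description can be sketched as follows. This is a hypothetical, heavily simplified JSON-LD-style document; the identifiers and property names (`hasService`, `measures`, and the `urn:dev:...` ids) are illustrative inventions, not the actual oneM2M Base Ontology vocabulary used in MYNO:

```python
import json

# Hypothetical, simplified device description in a JSON-LD-like style.
DEVICE_DESCRIPTION = json.dumps({
    "@id": "urn:dev:esp32-node-1",
    "@type": "Device",
    "hasService": [
        {"@id": "urn:dev:esp32-node-1/temp",
         "@type": "Sensor", "measures": "temperature", "unit": "Cel"},
        {"@id": "urn:dev:esp32-node-1/relay",
         "@type": "Actuator", "operation": "toggle"},
    ],
})

def capabilities(description, service_type):
    """Return the ids of all described services of the given type."""
    doc = json.loads(description)
    return [s["@id"] for s in doc.get("hasService", [])
            if s.get("@type") == service_type]

print(capabilities(DEVICE_DESCRIPTION, "Sensor"))
# ['urn:dev:esp32-node-1/temp']
```

A bridge receiving such a description over MQTT can discover the device's sensors and actuators without any device-specific code, which is the reusability property the thesis aims for.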
The concept of a Virtual Device was introduced and implemented based on the semantic device descriptions. A Virtual Device aggregates the capabilities of all devices in the edge network and thereby contributes to scalability: all devices can be controlled via a single RPC call.
The model-driven NETCONF web client is generated automatically from the YANG model, which in turn is generated by the bridge from the semantic device descriptions. The web client provides a user-friendly interface, offers RPC calls, and displays sensor values. We demonstrate the feasibility of this approach in different use cases: sensor and actuator scenarios as well as event configuration and triggering.
The semantic approach results in increased memory overhead. We therefore evaluated CBOR and RDF HDT for optimizing ontology-based device descriptions for use on constrained devices. The evaluation shows that CBOR is not suitable for long strings and that RDF HDT is a promising candidate but is still only a W3C Member Submission. Finally, we used an optimized JSON-LD format for the syntax of the device descriptions.
One of the security tasks of network management is the distribution of firmware updates. The MYNO Update Protocol (MUP) was developed and evaluated on the constrained CC2538dk platform over 6LoWPAN. The MYNO update process focuses on the freshness and authenticity of the firmware. The evaluation shows that it is challenging but feasible to bring firmware updates to constrained devices using MQTT. As a new requirement for the next MQTT version, we propose adding a slicing feature for better support of constrained devices: the MQTT broker should slice data to the maximum packet size specified by the device and transfer it slice by slice.
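The proposed broker-side slicing behaviour can be sketched in a few lines. This is a hypothetical illustration of the feature being requested, not an existing MQTT capability; the function name and parameters are invented for the example:

```python
def slice_payload(payload: bytes, max_size: int):
    """Split a firmware image into slices of at most max_size bytes,
    as the proposed broker-side slicing feature would do before
    publishing the slices one by one to a constrained subscriber."""
    if max_size <= 0:
        raise ValueError("max_size must be positive")
    return [payload[i:i + max_size] for i in range(0, len(payload), max_size)]

firmware = bytes(range(10))          # stand-in for a firmware image
slices = slice_payload(firmware, 4)  # device advertises 4-byte packets
print([len(s) for s in slices])
# [4, 4, 2]
```

The device would announce its maximum packet size during discovery, and the broker would deliver the slices in order so the device can reassemble and verify the firmware.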
For the performance and scalability evaluation of the MYNO framework, we set up the High Precision Agriculture demonstrator with 10 ESP-32 NodeMCU boards at the edge of the network. The boards, connected via WLAN, were equipped with six sensors and two actuators. The performance evaluation shows that processing ontology-based descriptions on a Raspberry Pi 3B with RDFLib is challenging in terms of computational power. Nevertheless, it is feasible, because it must be done only once per device during the discovery process.
The MYNO framework was tested with heterogeneous devices such as the CC2538dk from Texas Instruments, the Arduino Yún Rev 3, and the ESP-32 NodeMCU, and with IP-based networks such as 6LoWPAN and WLAN.
In summary, the MYNO framework demonstrates that the semantic approach on constrained devices is feasible in the IoT.
Inquiry-based learning and the digital transformation are two of the most important influences on the development of higher-education didactics in the German-speaking world. While inquiry-based learning, as a normative theory, describes what should be done, digital tools, old and new alike, determine what can be done in many areas.
This thesis proposes a process model that attempts to systematize research-based learning with respect to interactive, group-based processes. Based on the developed model, a software prototype was implemented that can accompany the entire research process. Group formation, feedback and reflection processes, and peer assessment are supported with educational technologies. These developments were deployed in a qualitative experiment in order to gain systemic knowledge about the possibilities and limits of digital support for research-based learning.
Permafrost is warming globally, which leads to widespread permafrost thaw and impacts the surrounding landscapes, ecosystems and infrastructure. Especially ice-rich permafrost is vulnerable to rapid and abrupt thaw resulting from the melting of excess ground ice. Local remote sensing studies have detected increasing rates of abrupt permafrost disturbances, such as thermokarst lake change and drainage, coastal erosion and retrogressive thaw slumps, in the last two decades, all of which indicate an acceleration of permafrost degradation.
In particular, retrogressive thaw slumps (RTS) are abrupt disturbances that expand by up to several meters each year, alter local and regional topographic gradients, hydrological pathways, and sediment and nutrient mobilisation into aquatic systems, and increase permafrost carbon mobilisation. The feedback between abrupt permafrost thaw and the carbon cycle is a crucial component of the Earth system and a relevant driver in global climate models. However, an assessment of RTS at high temporal resolution to determine the dynamic thaw processes and identify the main thaw drivers, as well as a continental-scale assessment across diverse permafrost regions, are still lacking.
In northern high latitudes, optical remote sensing is restricted by environmental factors and frequent cloud cover. This decreases image availability and thus constrains the application of automated algorithms for time-series detection of large-scale abrupt permafrost disturbances at high temporal resolution. Since models and observations suggest that abrupt permafrost disturbances will intensify, we require disturbance products at continental scale that allow for meaningful integration into Earth system models.
The main aim of this dissertation, therefore, is to enhance our knowledge of the spatial extent and temporal dynamics of abrupt permafrost disturbances in a large-scale assessment. To address this, three research objectives were posed:
1. Assess the comparability and compatibility of Landsat-8 and Sentinel-2 data for a combined use in multi-spectral analysis in northern high latitudes.
2. Adapt an image mosaicking method for Landsat and Sentinel-2 data to create combined mosaics of high quality as input for high temporal disturbance assessments in northern high latitudes.
3. Automatically map retrogressive thaw slumps on the landscape-scale and assess their high temporal thaw dynamics.
We assessed the comparability of Landsat-8 and Sentinel-2 imagery by spectral comparison of corresponding bands. Based on overlapping same-day acquisitions of Landsat-8 and Sentinel-2, we derived spectral bandpass adjustment coefficients for North Siberia to adjust Sentinel-2 reflectance values to resemble Landsat-8 and harmonise the two data sets. Furthermore, we adapted a workflow to combine Landsat and Sentinel-2 images to create homogeneous and gap-free annual mosaics. We determined the number of images and cloud-free pixels, the spatial coverage and the quality of the mosaic with spectral comparisons to demonstrate the relevance of the Landsat+Sentinel-2 mosaics. Lastly, we adapted the automatic disturbance detection algorithm LandTrendr for large-scale RTS identification and mapping at high temporal resolution. For this, we modified the temporal segmentation algorithm for annual gradual and abrupt disturbance detection to incorporate the annual Landsat+Sentinel-2 mosaics. We further parametrised the temporal segmentation and spectral filtering for optimised RTS detection, conducted further spatial masking and filtering, and implemented a binary object classification algorithm with machine learning to derive RTS from the LandTrendr disturbance output. We applied the algorithm to North Siberia, covering an area of 8.1 × 10⁶ km².
The spectral band comparison between same-day Landsat-8 and Sentinel-2 acquisitions already showed an overall good fit between both satellite products. However, applying the acquired spectral bandpass coefficients to adjust the Sentinel-2 reflectance values resulted in a near-perfect alignment between the same-day images. It can therefore be concluded that the spectral band adjustment succeeds in adjusting Sentinel-2 spectral values to those of Landsat-8 in North Siberia.
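Such a bandpass adjustment is typically a per-band linear transform of the form adjusted = slope × reflectance + intercept. The following sketch illustrates the idea; the coefficient values and names are illustrative assumptions, not the coefficients derived in this work:

```python
# Minimal sketch of a per-band linear bandpass adjustment. The slope and
# intercept values below are hypothetical placeholders, not the
# coefficients derived for North Siberia in this work.
COEFFS = {
    "red": (0.98, 0.004),
    "nir": (1.02, -0.003),
}

def adjust_band(reflectances, band):
    """Adjust Sentinel-2 reflectance values to resemble Landsat-8."""
    slope, intercept = COEFFS[band]
    return [slope * r + intercept for r in reflectances]

s2_red = [0.10, 0.20, 0.30]          # Sentinel-2 surface reflectance samples
l8_like = adjust_band(s2_red, "red")  # harmonised, Landsat-8-like values
assert abs(l8_like[0] - 0.102) < 1e-12
```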
The number of available cloud-free images increased steadily between 1999 and 2019, and especially intensified after 2016 with the addition of Sentinel-2 images. This signifies a highly improved input database for the mosaicking workflow. In a comparison of annual mosaics, the Landsat+Sentinel-2 mosaics always fully covered the study areas, while Landsat-only mosaics contained data gaps for the same years. The spectral comparison of input images and the Landsat+Sentinel-2 mosaic showed a high correlation between the input images and the mosaic bands, attesting to high-quality mosaicking results. Our results show that especially the mosaic coverage of northern, coastal areas was substantially improved with the Landsat+Sentinel-2 mosaics. By combining data from both Landsat and Sentinel-2 sensors, we reliably created input mosaics at high spatial resolution for comprehensive time series analyses.
This research presents the first automatically derived assessment of RTS distribution and temporal dynamics at continental scale. In total, we identified 50,895 RTS, primarily located in ice-rich permafrost regions, as well as a steady increase in RTS-affected areas between 2001 and 2019 across North Siberia. From 2016 onward, the RTS area increased more abruptly, indicating heightened thaw slump dynamics in this period. Overall, the RTS-affected area increased by 331 % within the observation period. In contrast, five focus sites showed spatiotemporal variability in their annual RTS dynamics, alternating between periods of increased and decreased RTS development. This suggests a close relationship to varying thaw drivers. The majority of identified RTS were active from 2000 onward and only a small proportion initiated during the assessment period. This highlights that the increase in RTS-affected area was mainly caused by the enlargement of existing RTS rather than by newly initiated RTS.
Overall, this research showed the advantages of combining Landsat and Sentinel-2 data in northern high latitudes and the improvements in spatial and temporal coverage of combined annual mosaics. The mosaics form the database for automated disturbance detection to reliably map RTS and other abrupt permafrost disturbances at continental scale. The assessment at high temporal resolution further attests to the increasing impact of abrupt permafrost disturbances and likewise emphasises the spatio-temporal variability of thaw dynamics across landscapes. Obtaining such consistent disturbance products is necessary to parametrise regional and global climate change models, enabling an improved representation of the permafrost thaw feedback.
In the last five years, gravitational-wave astronomy has gone from a purely theoretical field to a thriving experimental science. Several gravitational-wave signals, emitted by stellar-mass binary black holes and binary neutron stars, have been detected, and many more are expected in the future as a consequence of the planned upgrades to the gravitational-wave detectors. The observation of the gravitational-wave signals from these systems, and the characterization of their sources, heavily relies on precise models for the emitted gravitational waveforms. To take full advantage of the increased detector sensitivity, it is then necessary to also improve the accuracy of the gravitational-waveform models.
In this work, I present an updated version of the waveform models for spinning binary black holes within the effective-one-body formalism. This formalism is based on the notion that the solution to the relativistic two-body problem varies smoothly with the mass ratio of the binary system, from the equal-mass regime to the test-particle limit. For this reason, it provides an elegant method to combine, under a unique framework, the solutions to the relativistic two-body problem in different regimes. The two main regimes combined under the effective-one-body formalism are the slow-motion, weak-field limit (accessible through post-Newtonian theory) and the extreme mass-ratio regime (described using black-hole perturbation theory). This formalism is nevertheless flexible enough to integrate information about the solution to the relativistic two-body problem obtained using other techniques, such as numerical relativity.
The novelty of the waveform models presented in this work is the inclusion of beyond-quadrupolar terms in the waveforms emitted by spinning binary black holes. In fact, while the time variation of the source quadrupole moment is the leading contribution to the waveforms emitted by binary black holes observable by the LIGO and Virgo detectors, beyond-quadrupolar terms can be important for binary systems with asymmetric masses, large total mass, or observed at a large inclination angle with respect to the orbital angular momentum of the binary. For this purpose, I combine the approximate analytic expressions of these beyond-quadrupolar terms with their calculations from numerical relativity to develop an accurate waveform model including inspiral, merger and ringdown for spinning binary black holes. I first construct this model in the simplified case of black holes with spins aligned with the orbital angular momentum of the binary, then I extend it to the case of generic spin orientations. Finally, I test the accuracy of both these models against a large number of waveforms obtained from numerical relativity. The waveform models I present in this work are the state of the art for spinning binary black holes, without restrictions on the allowed values for the masses and the spins of the system.
The measurement of the source properties of a binary system emitting gravitational waves requires computing O(10⁷−10⁹) different waveforms. Since the waveform models mentioned before can require O(1−10) s to generate a single waveform, they can be difficult to use in data-analysis studies given the increasing number of sources observed by the LIGO and Virgo detectors. To overcome this obstacle, I use the reduced-order-modeling technique to develop a faster version of the waveform model for black holes with spins aligned with the orbital angular momentum of the binary. This version of the model is as accurate as the original and reduces the time for evaluating a waveform by two orders of magnitude.
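The core idea behind reduced-order modeling can be sketched in a strongly simplified form: each waveform is approximated as a short linear combination of precomputed basis waveforms, so evaluating the model reduces to computing a few coefficients instead of solving the full dynamics. The following toy illustration assumes an orthonormal basis and is not the actual surrogate construction used in the thesis:

```python
# Toy sketch of the projection step in reduced-order modeling: represent a
# waveform (here a short real vector) by its coefficients in a small
# orthonormal basis and reconstruct it from those coefficients.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project_coeffs(waveform, basis):
    """Coefficients of `waveform` in an (assumed orthonormal) basis."""
    return [dot(waveform, e) for e in basis]

def reconstruct(coeffs, basis):
    """Linear combination of basis vectors with the given coefficients."""
    n = len(basis[0])
    return [sum(c * e[i] for c, e in zip(coeffs, basis)) for i in range(n)]

# Two-element orthonormal basis in R^4; w lies in its span, so the
# reduced representation reproduces it exactly.
basis = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
w = [0.5, -0.2, 0.0, 0.0]
coeffs = project_coeffs(w, basis)
approx = reconstruct(coeffs, basis)
assert all(abs(a - b) < 1e-12 for a, b in zip(w, approx))
```

The speed-up comes from the reduced basis being small: storing and combining a handful of basis waveforms is far cheaper than generating a full waveform from scratch.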
The waveform models developed in this thesis have been used by the LIGO and Virgo collaborations in the inference of the source parameters of the gravitational-wave signals detected during the second observing run (O2) and the first half of the third observing run (O3a) of the LIGO and Virgo detectors. Here, I present a study on the source properties of the signals GW170729 and GW190412, for which I have been directly involved in the analysis. In addition, these models have been used by the LIGO and Virgo collaborations to perform tests of General Relativity employing the gravitational-wave signals detected during O3a, and to analyze the population of the observed binary black holes.
The present work focuses on minimising the use of toxic chemicals by integrating biobased monomers, derived from fatty acid esters, into photopolymerization processes, which are known to be environmentally friendly. The internal double bond present in oleic acid was converted to a more reactive (meth)acrylate or epoxy group. Biobased starting materials, functionalized with different pendant groups, were used in photopolymerizing formulations to design new polymeric structures using an ultraviolet light-emitting diode (UV-LED, 395 nm) via free radical polymerization or cationic polymerization.
The new (meth)acrylates (2, 3 and 4), each consisting of two isomers, methyl 9-((meth)acryloyloxy)-10-hydroxyoctadecanoate / methyl 9-hydroxy-10-((meth)acryloyloxy)octadecanoate (2 and 3) and methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4), modified from an oleic acid mix, and ionic liquid monomers (1a and 1b) bearing a long alkyl chain were polymerized photochemically. The new (meth)acrylates are based on vegetable oil, and the ionic liquids (ILs) are nonvolatile; therefore, both monomer types follow a green approach. The photoinitiated polymerization of the new (meth)acrylates and ionic liquids was investigated in the presence of ethyl (2,4,6-trimethylbenzoyl) phenylphosphinate (Irgacure® TPO−L) or di(4-methoxybenzoyl)diethylgermane (Ivocerin®) as photoinitiator (PI). Additionally, the results were compared with those obtained from commercial 1,6-hexanediol di(meth)acrylate (5 and 6) for a deeper investigation of the biobased monomers’ potential to substitute petroleum-derived materials with renewable resources for possible coating applications. The kinetic study shows that methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4) and the ionic liquids (1a and 1b) reach quantitative conversion after the irradiation process, which is important for practical applications. On the other hand, heat generation occurs over a longer time during the polymerization of the biobased systems or ILs.
The poly(meth)acrylates modified from (meth)acrylated fatty acid methyl ester monomers generally show a low glass transition temperature because of the long aliphatic chain present in the polymer structure. However, poly(meth)acrylates containing aromatic groups have higher glass transition temperatures. Therefore, the new 4-(4-methacryloyloxyphenyl)-butan-2-one (7) was synthesized, which can be a promising candidate for green techniques such as light-induced polymerization. The photokinetic investigation of the new monomer, 4-(4-methacryloyloxyphenyl)-butan-2-one (7), was discussed using Irgacure® TPO−L or Ivocerin® as photoinitiator. The reactivity of this monomer was compared to commercial 2-phenoxyethyl methacrylate (8) and phenyl methacrylate (9) on the basis of the differences in monomer structure. The photopolymer of 4-(4-methacryloyloxyphenyl)-butan-2-one (7) might be an interesting candidate for coating applications owing to its quantitative conversion and high molecular weight. It also shows a higher glass transition temperature.
In addition to the linear systems based on renewable materials, new crosslinked polymers were also designed in this thesis. To this end, an isomer mixture consisting of ethane-1,2-diyl bis(9-methacryloyloxy-10-hydroxy octadecanoate), ethane-1,2-diyl 9-hydroxy-10-methacryloyloxy-9’-methacryloyloxy-10’-hydroxy octadecanoate and ethane-1,2-diyl bis(9-hydroxy-10-methacryloyloxy octadecanoate) (10), which has not previously been described in the literature, was synthesized by derivatization of oleic acid. A crosslinked material based on this biobased monomer was produced by photoinitiated free radical polymerization using Irgacure® TPO−L or Ivocerin® as photoinitiator. Furthermore, the material properties were diversified by copolymerization of 10 with 4-(4-methacryloyloxyphenyl)-butan-2-one (7) or methyl 9-(1H-imidazol-1-yl)-10-(methacryloyloxy)octadecanoate / methyl 9-(methacryloyloxy)-10-(1H-imidazol-1-yl)octadecanoate (4). In addition, the influence of comonomers with different chemical structures on the network system was investigated by analysis of the thermo-mechanical properties, the crosslink density and the molecular weight between two crosslink junctions. An increase in the glass transition temperature caused by copolymerization of the biobased monomer 10 with an excess amount of 4-(4-methacryloyloxyphenyl)-butan-2-one (7) was confirmed by both differential scanning calorimetry (DSC) and dynamic mechanical analysis (DMA). On the other hand, the crosslink density decreased as a result of the copolymerization reactions due to the reduction in the mean functionality of the system. Furthermore, the surfaces were characterized by contact angle measurements using solvents of different polarity.
This work also contributes to the limited data reported on the cationic photopolymerization of epoxidized vegetable oils, which contrasts with the widely investigated thermal curing of biorenewable epoxy monomers. In addition to 9,10-epoxystearic acid methyl ester (11), the new monomer bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) was synthesized from oleic acid. These two biobased epoxies were polymerized via cationic photoinitiated polymerization in the presence of bis(t-butyl)-iodonium-tetrakis(perfluoro-t-butoxy)aluminate ([Al(O-t-C4F9)4]-) and isopropylthioxanthone (ITX) as photoinitiating system. The polymerization kinetics of 9,10-epoxystearic acid methyl ester (11) and bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) were investigated and compared with the kinetics of the commercial monomers 3,4-epoxycyclohexylmethyl-3’,4’-epoxycyclohexane carboxylate (13), 1,4-butanediol diglycidyl ether (14), and diglycidyl ether of bisphenol-A (15). Both biobased epoxies (11 and 12) showed higher conversion than the cycloaliphatic epoxy (13) and lower reactivity than 1,4-butanediol diglycidyl ether (14). Additional network systems were designed by copolymerization of bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) and diglycidyl ether of bisphenol-A (15) in different molar ratios (1:1; 1:5; 1:9). This shows that the final conversion depends on the polymerization rate as well as on physical processes such as vitrification during polymerization. Moreover, the low glass transition temperature of the homopolymer derived from bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) was successfully increased by copolymerization with diglycidyl ether of bisphenol-A (15). On the other hand, the surface produced from bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) shows hydrophobic character. A higher concentration of the biobased diepoxy (12) in the copolymerizing mixture decreases the surface free energy.
Network systems were also investigated according to the rubber elasticity theory. The crosslinked polymer derived from the mixture of bis-(9,10-epoxystearic acid) 1,2-ethanediyl ester (12) and diglycidyl ether of bisphenol-A (15) (molar ratio = 1:5) exhibits an almost ideal polymer network.
Polymeric films and coatings derived from semi-crystalline oligomers are of relevance for medical and pharmaceutical applications. In this context, the material surface is of particular importance, as it mediates the interaction with the biological system. Two-dimensional (2D) systems and ultrathin films are used to model this interface. However, conventional techniques for their preparation, such as spin coating or dip coating, have disadvantages, since the morphology and chain packing of the generated films can only be controlled to a limited extent and adsorption on the substrate used affects the behavior of the films. Detaching and transferring films prepared by such techniques requires additional sacrificial or supporting layers, and free-standing or self-supporting domains are usually of very limited lateral extension. The aim of this thesis is to study and modulate crystallization, melting, degradation and chemical reactions in ultrathin films of oligo(ε-caprolactone)s (OCLs) with different end-groups under ambient conditions. Here, oligomeric ultrathin films are assembled at the air-water interface using the Langmuir technique. The water surface allows lateral movement and aggregation of the oligomers, which, unlike solid substrates, enables dynamic physical and chemical interaction of the molecules. Parameters like surface pressure (π), temperature and mean molecular area (MMA) allow controlled assembly and manipulation of the oligomer molecules when using the Langmuir technique. The π-MMA isotherms, Brewster angle microscopy (BAM), and interfacial infrared spectroscopy assist in detecting morphological and physicochemical changes in the film. Ultrathin films can easily be transferred to a solid silicon surface via the Langmuir-Schaefer (LS) method (horizontal substrate dipping). Here, the films transferred to silicon are investigated using atomic force microscopy (AFM) and optical microscopy and are compared to the films on the water surface.
The semi-crystalline morphology (lamellar thickness, crystal number density, and lateral crystal dimensions) is tuned by the chemical structure of the OCL end-groups (hydroxy or methacrylate) and by the crystallization temperature (Tc; 12 or 21 °C) or the MMA. Compression to a low MMA of ~2 Å² results in the formation of a highly crystalline film, which consists of tightly packed single crystals. The preparation of tightly packed single crystals on a cm² scale is not possible by conventional techniques. Upon transfer to a solid surface, these films retain their crystalline morphology, whereas amorphous films undergo dewetting.
The melting temperature (Tm) of OCL single crystals at the water and the solid surface is found to be proportional to the inverse crystal thickness and is generally lower than the Tm of bulk PCL. The impact of the OCL end-groups on the melting behavior is most noticeable at the air-solid interface, where the methacrylate end-capped OCL (OCDME) melted at lower temperatures than the hydroxy end-capped OCL (OCDOL). When comparing the underlying substrates, melting/recrystallization of OCL ultrathin films is possible at lower temperatures at the air-water interface than at the air-solid interface, where recrystallization is not visible. Recrystallization at the air-water interface usually occurs at a higher temperature than the initial Tc.
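The observed inverse-thickness dependence of the melting temperature is consistent with the standard Gibbs-Thomson relation for lamellar crystals, quoted here in a common textbook form as context, not as a result derived in this work:

```latex
T_m(d_c) = T_m^0 \left( 1 - \frac{2\,\sigma_e}{\Delta h_f \, d_c} \right)
```

where $d_c$ is the crystal (lamellar) thickness, $T_m^0$ the equilibrium melting temperature of an infinitely thick crystal, $\sigma_e$ the fold-surface free energy, and $\Delta h_f$ the heat of fusion per unit volume; thinner crystals thus melt at lower temperatures.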
Controlled degradation is crucial for the predictable performance of degradable polymeric biomaterials. Degradation of the ultrathin films was carried out under acidic (pH ~ 1) or enzymatic catalysis (lipase from Pseudomonas cepacia) on the water surface or on a silicon surface as transferred films. A high crystallinity strongly reduces the hydrolytic but not the enzymatic degradation rate. Regarding the influence of the end-groups, the methacrylate end-capped linear oligomer OCDME (~85 ± 2 % end-group functionalization) hydrolytically degrades faster than the hydroxy end-capped linear oligomer OCDOL (~95 ± 3 % end-group functionalization) at different temperatures. Differences in the acceleration of hydrolytic degradation of semi-crystalline films were observed upon complete melting, partial melting of the crystals, or heating to temperatures close to Tm. Therefore, films of densely packed single crystals are suitable as barrier layers with thermally switchable degradation rates.
Chemical modification in ultrathin films is an intricate process applicable to connecting functionalized molecules, imparting stability or creating stimuli-sensitive cross-links. The reaction of end-groups was explored for transferred single crystals on a solid surface and for amorphous monolayers at the air-water interface. Bulky methacrylate end-groups are expelled to the crystal surface during chain-folded crystallization. The density of end-groups is inversely proportional to molecular weight and hence very pronounced for oligomers. The methacrylate end-groups at the crystal surface, which are present at high concentration, can be used for further chemical functionalization. This is demonstrated by fluorescence microscopy after reaction with fluorescein dimethacrylate. The thermoswitching behavior (melting and recrystallization) of fluorescein-functionalized single crystals shows the temperature-dependent distribution of the chemically linked fluorescein moieties, which are accumulated on the surfaces of crystals and homogeneously dispersed when the crystals are molten. In amorphous monolayers at the air-water interface, reversible cross-linking of hydroxy-terminated oligo(ε-caprolactone) monolayers using a dialdehyde (glyoxal) led to the formation of 2D networks. A pronounced contraction in area occurred for 2D OCL films in dependence of surface pressure and time, indicating the reaction progress. Cross-linking inhibited crystallization and retarded enzymatic degradation of the OCL film. Altering the subphase pH to ~2 led to cleavage of the covalent acetal cross-links. Besides serving as model systems, these reversibly cross-linked films are applicable for drug delivery systems or as cell substrates modulating adhesion at biointerfaces.
Deoxyribonucleic acid (DNA) nanostructures enable the attachment of functional molecules to nearly any unique location on their underlying structure. Due to their single-base-pair structural resolution, several ligands can be spatially arranged and closely controlled according to the geometry of their desired target, resulting in optimized binding and/or signaling interactions.
This dissertation covers three main projects. All of them use variations of functionalized DNA nanostructures that act as platforms for the oligovalent presentation of ligands. The purpose of this work was to evaluate the ability of DNA nanostructures to precisely display different types of functional molecules and to consequently enhance their efficacy according to the concept of multivalency. Moreover, functionalized DNA structures were examined for their suitability in functional screening assays. The developed DNA-based compound ligands were used to target structures in different biological systems.
One part of this dissertation attempted to bind pathogens with small modified DNA nanostructures. Pathogens like viruses and bacteria are known for their multivalent attachment to host cell membranes. By blocking their receptors for recognition and/or fusion with the targeted host in an oligovalent manner, the objective was to impede their ability to adhere to and invade cells. For influenza A, only enhanced binding of oligovalent peptide-DNA constructs compared to the monovalent peptide could be observed, whereas in the case of respiratory syncytial virus (RSV), binding as well as blocking of the target receptors led to an increased inhibition of infection in vitro.
In the final part, the ability of chimeric DNA-peptide constructs to bind to and activate signaling receptors on the surface of cells was investigated. Specific binding of DNA trimers, conjugated with up to three peptides, to EphA2 receptor expressing cells was evaluated in flow cytometry experiments. Subsequently, their ability to activate these receptors via phosphorylation was assessed. EphA2 phosphorylation was significantly increased by DNA trimers carrying three peptides compared to monovalent peptide. As a result of activation, cells underwent characteristic morphological changes, where they "round up" and retract their periphery.
The results obtained in this work comprehensively prove the capability of DNA nanostructures to serve as stable, biocompatible, controllable platforms for the oligovalent presentation of functional ligands. Functionalized DNA nanostructures were used to enhance biological effects and as tool for functional screening of bio-activity. This work demonstrates that modified DNA structures have the potential to improve drug development and to unravel the activation of signaling pathways.
We investigate models for incremental binary classification, an example of supervised online learning. Our starting point is a model for human and machine learning suggested by E. M. Gold.
In the first part, we consider incremental learning algorithms that use all of the available binary labeled training data in order to compute the current hypothesis. For this model, we observe that the algorithm can be assumed to always terminate and that the distribution of the training data does not influence learnability. This is still true if we pose additional delayable requirements that remain valid despite a hypothesis output delayed in time. Additionally, we consider the non-delayable requirement of consistent learning. Our corresponding results underpin the claim that delayability is a suitable structural property to describe and collectively investigate a major part of learning success criteria. Our first theorem states the pairwise implications or incomparabilities between an established collection of delayable learning success criteria, the so-called complete map. In particular, the learning algorithm can be assumed to only change its last hypothesis in case it is inconsistent with the current training data. Such a learning behaviour is called conservative.
By referring to learning functions, we obtain a hierarchy of approximative learning success criteria. Here, we allow an increasing finite number of errors of the hypothesized concept by the learning algorithm compared with the concept to be learned. Moreover, we observe a duality depending on whether vacillations between infinitely many different correct hypotheses are still considered a successful learning behaviour. This contrasts with the vacillatory hierarchy for learning from solely positive information.
We also consider a hypothesis space located between the two most common hypothesis space types in the relevant literature and provide the complete map.
In the second part, we model more efficient learning algorithms. These update their hypothesis referring to the current datum and without direct regress to past training data. We focus on iterative (hypothesis based) and BMS (state based) learning algorithms. Iterative learning algorithms use the last hypothesis and the current datum in order to infer the new hypothesis.
Past research analyzed, for example, the above-mentioned pairwise relations between delayable learning success criteria when learning from purely positive training data. We compare delayable learning success criteria with respect to iterative learning algorithms, as well as learning from either exclusively positive or binary labeled data. The existence of concept classes that can be learned by an iterative learning algorithm but not in a conservative way had already been observed, showing that conservativeness is restrictive. An additional phenomenon arising from cognitive science research is U-shapedness, in which the learning algorithm temporarily diverges from a correct hypothesis. We show that forbidding U-shapes also restricts iterative learners from binary labeled data.
In order to compute the next hypothesis, BMS learning algorithms refer to the currently observed datum and the current state of the learning algorithm. For learning algorithms equipped with an infinite number of states, we provide the complete map. A learning success criterion is semantic if it still holds when the learning algorithm outputs other parameters standing for the same classifier. Syntactic (non-semantic) learning success criteria, for example conservativeness and syntactic non-U-shapedness, restrict BMS learning algorithms. For proving the equivalence of the syntactic requirements, we refer to witness-based learning processes. In these, every change of the hypothesis is justified by a witness from the training data that is correctly classified later on. Moreover, for every semantic delayable learning requirement, iterative and BMS learning algorithms are equivalent. In case the considered learning success criterion incorporates syntactic non-U-shapedness, BMS learning algorithms can learn more concept classes than iterative learning algorithms.
The proofs are combinatorial, inspired by the investigation of formal languages, or employ results from computability theory, such as infinite recursion theorems (fixed-point theorems).
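To make the notion of an iterative learner concrete, here is a toy sketch (my own illustration, not taken from the thesis): a conservative, iterative learner for the class of finite languages over the natural numbers, learning from positive data. The update function sees only the last hypothesis and the current datum, never the full history.

```python
# Toy iterative learner (illustrative sketch, not the thesis's construction).
# The hypothesis names a finite language; None models a pause ('#') in the text.
# The learner is conservative: it changes its hypothesis only when the current
# datum is not yet covered by it.

def iterative_update(hypothesis, datum):
    """Next hypothesis from last hypothesis and current datum only."""
    if datum is None or datum in hypothesis:
        return hypothesis          # no mind change on pauses or explained data
    return hypothesis | {datum}    # minimal consistent extension

def run(text):
    """Feed a text (sequence of positive examples, possibly with pauses)."""
    hypothesis = frozenset()
    for datum in text:
        hypothesis = iterative_update(hypothesis, datum)
    return hypothesis

print(sorted(run([3, 1, None, 3, 2])))  # [1, 2, 3]
```

On any text for a finite language, this learner converges to the language itself; it is also non-U-shaped, since a correct hypothesis is never abandoned.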
Digitalization enables us to interact with partners (e.g., companies, institutions) in an IT-supported environment and to carry out activities that were previously performed manually. One goal of digitalization is to combine services from different professional domains into processes and to make them accessible to many user groups according to their needs. To this end, providers offer technical services that can be integrated into different applications.
Digitalization poses new challenges for application development. One aspect is connecting users to services according to their needs. For human users to interact with the services, user interfaces are required that are tailored to their needs. This calls for variants for specific user groups (functional variants) and for varying environments (technical variants). Increasingly, these must be combinable with services from other providers in order to link cross-domain processes into applications with increased value for the end user (e.g., a flight booking with an optional travel insurance).
The diversity of variants makes the creation of user interfaces appear complex and the results highly individual. In practice, the variants are therefore predominantly created manually. This leads to the parallel development of a multitude of very similar applications with little potential for reuse, resulting in high development and maintenance costs. As a consequence, support for small user groups with special requirements (e.g., people with physical impairments) is often dropped, so that these groups remain excluded from digitalization.
This thesis presents a consistent solution to these new challenges using the means of model-driven development. It introduces an approach for modeling user interfaces, variants, and compositions, and for generating them automatically for digital services in a distributed environment. The thesis provides a solution for the reuse and shared use of user interfaces across provider boundaries. It leads to an infrastructure in which a multitude of providers can contribute their expertise to shared applications.
The individual contributions are concepts and metamodels for modeling user interfaces, variants, and compositions, as well as a method for their fully automated transformation into functional user interfaces. To enable shared use, these are complemented by a universal representation of the models, a methodology for connecting different service providers, and an architecture for the distributed use of the artifacts and methods in a service-oriented environment.
The approach offers the chance to let the most diverse people participate in digitalization according to their needs. The thesis thus provides impulses for future methods of application development in an increasingly diverse environment.
The evolution of life on Earth has been driven by disturbances of different types and magnitudes over the 4.6 billion years of Earth's history (Raup, 1994, Alroy, 2008). One example of such disturbances are mass extinctions, which are characterized by an exceptional increase in the extinction rate affecting a great number of taxa in a short interval of geologic time (Sepkoski, 1986). During the 541 million years of the Phanerozoic, life on Earth suffered five exceptionally severe mass extinctions named the "Big Five Extinctions". Many mass extinctions are linked to changes in climate (Feulner, 2009). Hence, the study of past mass extinctions is not only intriguing, but can also provide insights into the complex nature of the Earth system. This thesis aims at deepening our understanding of the triggers of mass extinctions and how they affected life. To accomplish this, I investigate changes in climate during two of the Big Five extinctions using a coupled climate model.
During the Devonian (419.2–358.9 million years ago) the first vascular plants and vertebrates evolved on land while extinction events occurred in the ocean (Algeo et al., 1995). The causes of these formative changes, their interactions and their links to changes in climate are still poorly understood. Therefore, we explore the sensitivity of the Devonian climate to various boundary conditions using an intermediate-complexity climate model (Brugger et al., 2019). In contrast to Le Hir et al. (2011), we find only a minor biogeophysical effect of changes in vegetation cover due to unrealistically high soil albedo values used in the earlier study. In addition, our results cannot support the strong influence of orbital parameters on the Devonian climate, as simulated with a climate model with a strongly simplified ocean model (De Vleeschouwer et al., 2013, 2014, 2017). We can only reproduce the changes in Devonian climate suggested by proxy data by decreasing atmospheric CO2. Still, finding agreement between the evolution of sea surface temperatures reconstructed from proxy data (Joachimski et al., 2009) and our simulations remains challenging and suggests a lower δ18O ratio of Devonian seawater. Furthermore, our study of the sensitivity of the Devonian climate reveals a prevailing mode of climate variability on a timescale of decades to centuries. The quasi-periodic ocean temperature fluctuations are linked to a physical mechanism of changing sea-ice cover, ocean convection and overturning in high northern latitudes.
In the second study of this thesis (Dahl et al., under review), a new reconstruction of atmospheric CO2 for the Devonian, based on CO2-sensitive carbon isotope fractionation in the earliest vascular plant fossils, suggests a much earlier drop of the atmospheric CO2 concentration than previously reconstructed, followed by nearly constant CO2 concentrations during the Middle and Late Devonian. Our simulations for the Early Devonian with identical boundary conditions as in our Devonian sensitivity study (Brugger et al., 2019), but with a low atmospheric CO2 concentration of 500 ppm, show no direct conflict with available proxy and paleobotanical data and confirm that under the simulated climatic conditions carbon isotope fractionation represents a robust proxy for atmospheric CO2. To explain the earlier CO2 drop, we suggest that early forms of vascular land plants had already strongly influenced weathering. This new perspective on the Devonian questions previous ideas about the climatic conditions and earlier explanations for the Devonian mass extinctions.
The second mass extinction investigated in this thesis is the end-Cretaceous mass extinction (66 million years ago) which differs from the Devonian mass extinctions in terms of the processes involved and the timescale on which the extinctions occurred. In the two studies presented here (Brugger et al., 2017, 2021), we model the climatic effects of the Chicxulub impact, one of the proposed causes of the end-Cretaceous extinction, for the first millennium after the impact. The light-dimming effect of stratospheric sulfate aerosols causes severe cooling, with a decrease of global annual mean surface air temperature of at least 26 °C and a recovery to pre-impact temperatures after more than 30 years. The sudden surface cooling of the ocean induces deep convection which brings nutrients from the deep ocean via upwelling to the surface ocean. Using an ocean biogeochemistry model we explore the combined effect of ocean mixing and iron-rich dust originating from the impactor on the marine biosphere. As soon as light levels have recovered, we find a short, but prominent peak in marine net primary productivity. This newly discovered mechanism could result in toxic effects for marine near-surface ecosystems. Comparison of our model results to proxy data (Vellekoop et al., 2014, 2016, Hull et al., 2020) suggests that carbon release from the terrestrial biosphere is required in addition to the carbon dioxide which can be attributed to the target material. Surface ocean acidification caused by the addition of carbon dioxide and sulfur is only moderate. Taken together, the results indicate a significant contribution of the Chicxulub impact to the end-Cretaceous mass extinction by triggering multiple stressors for the Earth system.
Although the sixth extinction we face today is characterized by human intervention in nature, this thesis shows that we can gain many insights into future extinctions from studying past mass extinctions, such as the importance of the rate of change (Rothman, 2017), the interplay of multiple stressors (Gunderson et al., 2016), and changes in the carbon cycle (Rothman, 2017, Tierney et al., 2020).
Insulin resistance is a central component of the metabolic syndrome and contributes substantially to the development of type 2 diabetes. One possible cause of insulin resistance is a chronic low-grade inflammation originating in the adipose tissue of obese individuals. Infiltrating macrophages produce increased amounts of pro-inflammatory mediators, such as cytokines and prostaglandins, raising the concentrations of these substances both locally and systemically. In addition, obese individuals exhibit a disturbed fatty acid metabolism and an increased intestinal permeability. An increased flux of free fatty acids from adipose tissue into other organs leads to a local rise of their concentration in these organs. An increased intestinal permeability facilitates the entry of pathogens and other foreign substances into the body.
The aim of this work was to investigate whether high concentrations of insulin, of the bacterial component lipopolysaccharide (LPS), or of the free fatty acid palmitate can trigger or amplify an inflammatory response in macrophages, and whether this inflammatory response can contribute to the development of insulin resistance. Furthermore, it was to be examined whether metabolites and signaling substances whose concentrations are elevated in the metabolic syndrome can promote the production of prostaglandin (PG) E2, and whether PGE2 in turn can regulate the inflammatory response and its own production in macrophages. To study the influence of these factors on the production of pro-inflammatory mediators in macrophages, monocyte-like cell lines and primary human monocytes isolated from the blood of healthy donors were differentiated into macrophages and incubated with insulin, LPS, palmitate, and/or PGE2. In addition, primary rat hepatocytes were isolated and incubated with supernatants of insulin-stimulated macrophages to investigate whether the inflammatory response in macrophages contributes to the development of insulin resistance in hepatocytes.
Insulin induced the expression of pro-inflammatory cytokines in macrophage-like cell lines, most likely primarily via the phosphoinositide 3-kinase (PI3K)-Akt pathway with subsequent activation of the transcription factor NF-κB (nuclear factor 'kappa-light-chain-enhancer' of activated B cells). The cytokines released in this way, contained in the supernatants of insulin-stimulated macrophages, inhibited the insulin-induced expression of glucokinase in primary rat hepatocytes.
LPS and palmitate, whose local concentrations are elevated in the metabolic syndrome, were also able to stimulate the expression of pro-inflammatory cytokines in macrophage-like cell lines. While LPS, according to the literature, undisputedly acts via activation of Toll-like receptor (TLR) 4, palmitate appears to act largely independently of TLR4. Instead, de novo ceramide synthesis seemed to play a decisive role. Moreover, insulin amplified both the LPS- and the palmitate-induced inflammatory response in both cell lines. The results obtained in cell lines were largely confirmed in primary human macrophages.
Furthermore, insulin as well as LPS and palmitate induced the production of PGE2 in the macrophages studied. The data suggest that this is due to an increased expression of PGE2-synthesizing enzymes.
PGE2, in turn, on the one hand inhibited the stimulus-dependent expression of the pro-inflammatory cytokine tumor necrosis factor (TNF) α in U937 macrophages. On the other hand, it enhanced the expression of the pro-inflammatory cytokines interleukin (IL-) 1β and IL-8. It furthermore enhanced the expression of IL-6-type cytokines, which can act both pro- and anti-inflammatorily. In addition, PGE2 enhanced the expression of PGE2-synthesizing enzymes; it therefore appears to be able to amplify its own synthesis.
In summary, the release of pro-inflammatory mediators from macrophages in the course of hyperinsulinemia can promote the development of insulin resistance. Insulin is therefore able to set in motion a vicious circle of ever-increasing insulin resistance.
Metabolites and signaling substances whose concentrations are elevated in the metabolic syndrome (for example LPS, free fatty acids, and PGE2) also triggered inflammatory responses in macrophages. The mutual interplay of insulin with these metabolites and signaling substances triggered a stronger inflammatory response in macrophages than any of the individual components alone. The cytokines released in this way could contribute to the manifestation of insulin resistance and the metabolic syndrome.
Magnetic strain contributions in laser-excited metals studied by time-resolved X-ray diffraction
(2021)
In this work I explore the impact of magnetic order on the laser-induced ultrafast strain response of metals. Few experiments with femto- or picosecond time resolution have so far investigated magnetic stresses. This contrasts with the industrial use of magnetic Invar materials or magnetostrictive transducers for ultrasound generation, which already exploit magnetostrictive stresses in the low-frequency regime.
In the reported experiments I investigate how the energy deposited by the absorption of femtosecond laser pulses in thin metal films leads to ultrafast stress generation. I exploit the fact that this stress drives an expansion that emits nanoscopic strain pulses, so-called hypersound, into adjacent layers. Both the expansion and the strain pulses change the average inter-atomic distance in the sample, which can be tracked with sub-picosecond time resolution using an X-ray diffraction setup at a laser-driven plasma X-ray source. Ultrafast X-ray diffraction can also probe buried layers within heterostructures that cannot be accessed by optical methods, which exhibit only a limited penetration into metals. The reconstruction of the initial energy-transfer processes from the shape of the strain pulse in buried detection layers represents a contribution of this work to the field of picosecond ultrasonics.
A central point for the analysis of the experiments is the direct link between the deposited energy density in the nanostructures and the resulting stress on the crystal lattice. The underlying thermodynamic concept of a Grüneisen parameter provides the theoretical framework for my work. I demonstrate how the Grüneisen principle can be used for the interpretation of the strain response on ultrafast timescales in various materials and how it can be extended to describe magnetic stresses. The class of heavy rare-earth elements exhibits especially large magnetostriction effects, which can even lead to an unconventional contraction of the laser-excited transducer material. Such a dominant contribution of the magnetic stress to the motion of atoms had not been demonstrated previously. The observed rise time of the magnetic stress contribution in dysprosium is identical to the decrease in the helical spin order that had previously been found using time-resolved resonant X-ray diffraction. This indicates that the strength of the magnetic stress can be used as a proxy of the underlying magnetic order. Such magnetostriction measurements are applicable even in case of antiparallel or non-collinear alignment of the magnetic moments and a vanishing magnetization.
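The Grüneisen principle described above can be written compactly. The following is a generic sketch in notation of my own choosing (the subsystem labels and symbols are assumptions, not the thesis's exact formulation): each excited subsystem contributes a stress proportional to its energy density.

```latex
% Grüneisen ansatz (generic sketch): the out-of-plane stress sigma at depth z
% and time t is the sum over subsystems r (electrons e, phonons p, magnetic
% excitations m), each with its own Grüneisen parameter Gamma_r weighting the
% subsystem's energy density rho_r deposited by the laser pulse.
\sigma(z,t) \;=\; \sum_{r \in \{e,\,p,\,m\}} \Gamma_r \, \rho_r(z,t)
```

In this picture, a negative effective magnetic Grüneisen parameter would account for the unconventional contraction upon laser excitation mentioned above.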
The strain response of metal films is usually determined by the pressure of electrons and lattice vibrations. I have developed a versatile two-pulse excitation routine that can extract the magnetic contribution to the strain response even when systematic measurements above and below the magnetic ordering temperature are not feasible. A first laser pulse leads to partial ultrafast demagnetization, so that the amplitude and shape of the strain response triggered by the second pulse depend on the remaining magnetic order. With this method I could identify a strongly anisotropic magnetic stress contribution in the magnetic data-storage material iron-platinum and track the recovery of the magnetic order by varying the pulse-to-pulse delay. The stark contrast between the expansion of iron-platinum nanograins and thin films shows that the different constraints on the in-plane expansion have a strong influence on the out-of-plane expansion, due to the Poisson effect. I show how such transverse strain contributions need to be accounted for when interpreting the ultrafast out-of-plane strain response using thermal expansion coefficients obtained under near-equilibrium conditions.
This work contributes an investigation of magnetostriction on ultrafast timescales to the literature of magnetic effects in materials. It develops a method to extract spatially and temporally varying stress contributions based on a model for the amplitude and shape of the emitted strain pulses. Energy transfer processes result in a change of the stress profile with respect to the initial absorption of the laser pulses. One interesting example occurs in nanoscopic gold-nickel heterostructures, where excited electrons rapidly transport energy into a distant nickel layer, which takes up much more energy and expands faster and more strongly than the laser-excited gold capping layer. Magnetic excitations in rare-earth materials represent a large energy reservoir that delays the energy transfer into adjacent layers. Such magneto-caloric effects are known in thermodynamics but not extensively covered on ultrafast timescales. The combination of ultrafast X-ray diffraction and time-resolved techniques with direct access to the magnetization has a large potential to uncover and quantify such energy transfer processes.
Silicate melts are major components of the Earth's interior, and as such they play an essential role in igneous processes, in the dynamics of the solid Earth, and in the chemical evolution of the entire Earth. Macroscopic physical and chemical properties such as density, compressibility, viscosity, degree of polymerization etc. are determined by the atomic structure of the melt. Depending on the pressure, but also on the temperature and the chemical composition, silicate melts show different structural properties. These properties are best described by the local coordination environment, i.e. the symmetry and number of neighbors (coordination number) of an atom, as well as the distance between the central atom and its neighbors (inter-atomic distance). With increasing pressure and temperature, i.e. with increasing depth in the Earth, the density of the melt increases, which can lead to changes in coordination number and distances. If the coordination number remains the same, the distance usually decreases. If the coordination number increases, the distance can increase. These general trends can, however, vary greatly, which can be attributed in particular to the chemical composition.
Because natural melts of the deep Earth are not accessible to direct investigation, extensive experimental and theoretical studies have been carried out to understand their properties under the relevant conditions. These have often used amorphous samples of the end-members SiO2 and GeO2, with the latter serving as a structural and chemical analog of SiO2. Commonly, the experiments were carried out at high pressure and room temperature. Natural melts are chemically much more complex than the simple end-members SiO2 and GeO2, so observations made on the end-members may lead to incorrect compression models. Furthermore, investigations on glasses at room temperature can deviate strongly from the properties of melts under natural thermodynamic conditions.
The aim of this thesis was to explain the influence of composition and temperature on the structural properties of melts at high pressures. To understand this, we studied complex alumino-germanate and alumino-silicate glasses. More precisely, we studied synthetic glasses with a composition like the mineral albite and like a mixture of albite-diopside at the eutectic point. The albite glass is structurally similar to a simplified granitic melt, while the albite-diopside glass simulates a simplified basaltic melt. To study the local coordination environment of the elements, we used X-ray absorption spectroscopy in combination with a diamond anvil cell. Because the diamonds strongly absorb X-rays with energies below 10 keV, the direct investigation of geologically relevant elements such as Si, Al, Ca, Mg etc. with this spectroscopic probe in combination with a diamond anvil cell is not possible. Therefore the glasses were doped with Ge and Sr. These elements serve partially or fully as substitutes for important major elements: Ge serves as a substitute for Si and other network formers, while Sr replaces network modifiers such as Ca, Na, Mg etc., as well as other cations with a large ionic radius.
In the first step we studied the Ge K-edge in Ge-albite glass, NaAlGe3O8, at room temperature up to 131 GPa. This glass has a higher chemical complexity than SiO2 and GeO2, but it is still fully polymerized. The differences in the compression mechanism between this glass and the simple oxides can clearly be attributed to the higher chemical complexity. The albite and albite-diopside compositions partially doped with Ge and Sr were probed at room temperature up to 164 GPa for Ge and up to 42 GPa for Sr. While the albite glass is nominally fully polymerized like NaAlGe3O8, the albite-diopside glass is partially depolymerized. The results show that structural changes take place in all three glasses within the first 25 to at most 30 GPa, with Ge and Sr reaching maximum coordination numbers of 6 and ∼9, respectively. At higher pressures, only isostructural shrinkage of the coordination polyhedra takes place in the glasses. The most important finding of the high-pressure studies on the alumino-silicate and alumino-germanate glasses is that in these complex glasses the polyhedra show a much higher compressibility than is observed in the end-members. This is shown in particular by the strong shortening of the Ge-O distances in the amorphous NaAlGe3O8 and albite-diopside glasses at pressures above 30 GPa.
In addition to the effects of the composition on the compaction process, we investigated the influence of temperature on the structural changes. To do this, we probed the albite-diopside glass, as it is chemically most similar to the melts in the lower mantle. We studied the Ge K-edge of the sample with a resistively heated and a laser-heated diamond anvil cell, for pressures up to 48 GPa and temperatures up to 5000 K. High temperatures, at which the sample is liquid and which are relevant for the Earth's mantle, have a significant impact on the structural transformation, shifting it by approx. 30% to significantly lower pressures compared to the glasses at room temperature and below 1000 K.
The results of this thesis represent an important contribution to the understanding of the properties of melts under lower-mantle conditions. In the context of the discussion about the existence and origin of ultra-dense silicate melts at the core-mantle boundary, these investigations show that their higher density compared to the surrounding material cannot be explained by structural features alone, but requires a distinct chemical composition. The results also suggest that only very low solubilities of noble gases are to be expected for melts in the lower mantle, so that the structural properties clearly influence the overall budget and transport of noble gases in the Earth's mantle.
Learning analytics at scale
(2021)
Digital technologies are paving the way for innovative educational approaches. The learning format of Massive Open Online Courses (MOOCs) provides a highly accessible path to lifelong learning while being more affordable and flexible than face-to-face courses. Thousands of learners can enroll in courses, mostly without admission restrictions, but this also raises challenges. Individual supervision by teachers is barely feasible, and learning persistence and success depend on students' self-regulatory skills. Here, technology provides the means for support. The use of data for decision-making is already transforming many fields, whereas in education it is still a young research discipline. Learning Analytics (LA) is defined as the measurement, collection, analysis, and reporting of data about learners and their learning contexts with the purpose of understanding and improving learning and learning environments. The vast amount of data that MOOCs produce on the learning behavior and success of thousands of students provides the opportunity to study human learning and to develop approaches addressing the demands of learners and teachers.
The overall purpose of this dissertation is to investigate the implementation of LA at the scale of MOOCs and to explore how data-driven technology can support learning and teaching in this context. To this end, several research prototypes have been iteratively developed for the HPI MOOC Platform. Hence, they were tested and evaluated in an authentic real-world learning environment. Most of the results can be applied on a conceptual level to other MOOC platforms as well. The research contribution of this thesis thus provides practical insights beyond what is theoretically possible. In total, four system components were developed and extended:
(1) The Learning Analytics Architecture: A technical infrastructure to collect, process, and analyze event-driven learning data based on schema-agnostic pipelining in a service-oriented MOOC platform. (2) The Learning Analytics Dashboard for Learners: A tool for data-driven support of self-regulated learning, in particular to enable learners to evaluate and plan their learning activities, progress, and success by themselves. (3) Personalized Learning Objectives: A set of features to better connect learners' success to their personal intentions based on selected learning objectives to offer guidance and align the provided data-driven insights about their learning progress. (4) The Learning Analytics Dashboard for Teachers: A tool supporting teachers with data-driven insights to enable the monitoring of their courses with thousands of learners, identify potential issues, and take informed action.
For all aspects examined in this dissertation, related research is presented, development processes and implementation concepts are explained, and evaluations are conducted in case studies. Among other findings, the usage of the learner dashboard in combination with personalized learning objectives demonstrated improvements in certification rates of 11.62% to 12.63%. Furthermore, it was observed that the teacher dashboard is a key tool and an integral part of teaching in MOOCs. In addition to the results and contributions, general limitations of the work are discussed, which altogether provides a solid foundation for practical implications and future research.
Modern knowledge bases contain and organize knowledge from many different topic areas. Apart from specific entity information, they also store information about their relationships amongst each other. Combining this information results in a knowledge graph that can be particularly helpful in cases where relationships are of central importance. Among other applications, modern risk assessment in the financial sector can benefit from the inherent network structure of such knowledge graphs by assessing the consequences and risks of certain events, such as corporate insolvencies or fraudulent behavior, based on the underlying network structure. As public knowledge bases often do not contain the necessary information for the analysis of such scenarios, the need arises to create and maintain dedicated domain-specific knowledge bases.
This thesis investigates the process of creating domain-specific knowledge bases from structured and unstructured data sources. In particular, it addresses the topics of named entity recognition (NER), duplicate detection, and knowledge validation, which represent essential steps in the construction of knowledge bases.
As such, we present a novel method for duplicate detection based on a Siamese neural network that is able to learn a dataset-specific similarity measure which is used to identify duplicates. Using the specialized network architecture, we design and implement a knowledge transfer between two deduplication networks, which leads to significant performance improvements and a reduction of required training data.
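The core idea of the Siamese approach can be sketched in a few lines. The following is a minimal illustration of weight sharing and a learned similarity (my own sketch with invented shapes, not the thesis's actual architecture): one shared embedding network encodes both records, and a distance over the two embeddings yields the similarity score.

```python
import numpy as np

# Minimal Siamese sketch (illustrative; layer sizes and the L1 distance are
# assumptions for the example, not the thesis's network).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))  # shared weights: one network, applied to both inputs

def embed(x):
    """Shared embedding branch (a single tanh layer here)."""
    return np.tanh(W @ x)

def similarity(a, b):
    """Similarity from the L1 distance between the two embeddings."""
    d = np.abs(embed(a) - embed(b)).sum()
    return 1.0 / (1.0 + d)  # 1.0 for identical records, toward 0 for distant ones

x = rng.normal(size=8)
y = rng.normal(size=8)
assert similarity(x, x) == 1.0               # identical records: perfect duplicate score
assert similarity(x, y) < similarity(x, x)   # distinct records score lower
```

In training, `W` would be fitted on labeled duplicate/non-duplicate pairs (e.g., with a contrastive loss), which is what makes the resulting similarity measure dataset-specific; the knowledge transfer mentioned above would correspond to initializing one such network from another.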
Furthermore, we propose a named entity recognition approach that is able to identify company names by integrating external knowledge in the form of dictionaries into the training process of a conditional random field classifier. In this context, we study the effects of different dictionaries on the performance of the NER classifier. We show that both the inclusion of domain knowledge as well as the generation and use of alias names results in significant performance improvements.
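The dictionary integration can be illustrated with a per-token feature function of the kind CRF classifiers consume. Everything concrete below (the dictionary entries, suffix list, and feature names) is a hypothetical example of the technique, not the thesis's actual feature set.

```python
# Sketch: injecting external dictionary (gazetteer) knowledge into the
# per-token features of a CRF-style NER classifier for company names.
COMPANY_DICT = {"siemens", "bayer", "basf"}    # hypothetical company dictionary
LEGAL_SUFFIXES = {"ag", "gmbh", "se", "kg"}    # hypothetical legal-form cues

def token_features(tokens, i):
    """Feature dict for token i, mixing surface features with dictionary lookups."""
    tok = tokens[i]
    return {
        "word.lower": tok.lower(),
        "word.istitle": tok.istitle(),
        # external knowledge: is the token a known company name?
        "in.company.dict": tok.lower() in COMPANY_DICT,
        # alias-style cue: is the next token a legal-form suffix?
        "next.is.legal.suffix": i + 1 < len(tokens)
            and tokens[i + 1].lower().strip(".") in LEGAL_SUFFIXES,
    }

sent = ["Siemens", "AG", "expanded", "."]
assert token_features(sent, 0)["in.company.dict"] is True
assert token_features(sent, 0)["next.is.legal.suffix"] is True
```

Libraries such as sklearn-crfsuite accept exactly such per-token feature dicts, so dictionary membership enters the model as just another weighted feature rather than a hard rule.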
For the validation of knowledge represented in a knowledge base, we introduce Colt, a framework for knowledge validation based on the interactive quality assessment of logical rules. In its most expressive implementation, we combine Gaussian processes with neural networks to create Colt-GP, an interactive algorithm for learning rule models. Unlike other approaches, Colt-GP uses knowledge graph embeddings and user feedback to cope with data quality issues of knowledge bases. The learned rule model can be used to conditionally apply a rule and assess its quality.
Finally, we present CurEx, a prototypical system for building domain-specific knowledge bases from structured and unstructured data sources. Its modular design is based on scalable technologies, which, in addition to processing large datasets, ensures that the modules can be easily exchanged or extended. CurEx offers multiple user interfaces, each tailored to the needs of a specific user group, and is fully compatible with the Colt framework, which can be used as part of the system.
We conduct a wide range of experiments with different datasets to determine the strengths and weaknesses of the proposed methods. To ensure the validity of our results, we compare the proposed methods with competing approaches.
Ausgangspunkt der Dissertation ist die Fragestellung, warum es relativ wenige weibliche Wirtschaftsprüfer/innen in Deutschland gibt. Laut Mitgliederstatistik der Wirtschaftsprüferkammer vom 1. Januar 2020 liegt der Frauenanteil im Berufs-stand bei rund 17 %. Einschlägige Literatur zeigt, dass auf Ebene der Berufseinstei-ger/innen im Segment der zehn größten Wirtschaftsprüfungsgesellschaften das Ge-schlechterverhältnis recht ausgewogen ist. Jedoch liegt der Frauenanteil auf der Hierarchieebene „Manager“, für die üblicherweise ein bestandenes Berufsexamen Voraussetzung ist, bereits deutlich niedriger und sinkt mit jeder weiteren Hierar-chiestufe. Die Zielstellung der Dissertation wurde somit dahingehend spezifiziert, diejenigen Faktoren zu analysieren, die dazu beitragen können, dass die relative Repräsentation von Frauen im Segment der zehn größten Wirtschaftsprüfungsge-sellschaften Deutschlands ab der Manager-Ebene (d. h. üblicherweise ab der Schwelle der examinierten Wirtschaftsprüfer/innen) sinkt. Der Fokus der Analyse liegt daher auf Ebene der erfahrenen Prüfungsassistenten und Prüfungsassistentin-nen (Senior), um diese Schwelle unmittelbar vor der Manager-Ebene detailliert zu beleuchten.
In addition to evaluating findings from international auditing research, an empirical study was conducted among the seniors of six of the ten largest audit firms in Germany. The empirical results were evaluated by means of descriptive data analysis and examined to determine for which of the previously defined aspects significant gender-specific differences can be observed. For selected aspects, it was also analyzed whether there are differences between female/male seniors with and without children. Overall, gender-specific differences and differences between seniors with and without children were found for numerous aspects. It also becomes apparent that, in addition to the professional situation, individual characteristics and the private environment are important. Within the professional situation, both the perception of the current professional situation and, among other things, the seniors' expectations of a possible future manager position, of the auditing examination, and of further career prospects play a role.
Carbonatite magmatism is a highly efficient transport mechanism from Earth's mantle to the crust, thus providing insights into the chemistry and dynamics of the Earth's mantle. One evolving and promising tool for tracing magma interaction is the stable iron isotope system, particularly because iron isotope fractionation is controlled by oxidation state and bonding environment. Meanwhile, a large data set on iron isotope fractionation in igneous rocks exists, comprising bulk rock compositions and fractionation between mineral groups. Iron isotope data from natural carbonatite rocks are extremely light and of remarkably high variability. This resembles iron isotope data from mantle xenoliths, which are characterized by a variability in δ56Fe spanning three times the range found in basalts, and by the extremely light values of some whole-rock samples, reaching δ56Fe as low as -0.69 ‰ in a spinel lherzolite. This large range of variation may be caused by metasomatic processes involving metasomatic agents such as volatile-bearing, highly alkaline silicate melts or carbonate melts. The expected effects of metasomatism on iron isotope fractionation vary with parameters like the melt/rock ratio, reaction time, and the nature of the metasomatic agents and mineral reactions involved. An alternative or additional way to enrich light isotopes in the mantle could be multiple phases of melt extraction. To interpret the existing data sets, more knowledge of iron isotope fractionation factors is needed.
To investigate the behavior of iron isotopes in carbonatite systems, kinetic and equilibration experiments in natro-carbonatite systems between immiscible silicate and carbonate melts were performed in an internally heated gas pressure vessel at intrinsic redox conditions, at temperatures between 900 and 1200 °C and pressures of 0.5 and 0.7 GPa. The iron isotope compositions of coexisting silicate and carbonate melts were analyzed by solution MC-ICP-MS. The kinetic experiments employing a Fe-58 spiked starting material show that isotopic equilibrium is attained after 48 hours. The experimental studies of equilibrium iron isotope fractionation between immiscible silicate and carbonate melts have shown that light isotopes are enriched in the carbonatite melt. The highest mean Δ56Fesil.melt-carb.melt of 0.13 ‰ was determined in a system with a strongly peralkaline silicate melt composition (ASI ≥ 0.21, Na/Al ≤ 2.7). In three systems with extremely peralkaline silicate melt compositions (ASI between 0.11 and 0.14), iron isotope fractionation could not be resolved analytically. The lowest mean Δ56Fesil.melt-carb.melt of 0.02 ‰ was determined in a system with an extremely peralkaline silicate melt composition (ASI ≤ 0.11, Na/Al ≥ 6.1). The observed iron isotope fractionation is most likely governed by the redox conditions of the system. Yet, in the systems where no fractionation occurred, structural changes induced by compositional changes possibly overrule the influence of redox conditions. This interpretation implies that the iron isotope system holds the potential to be useful not only for exploring redox conditions in magmatic systems but also for detecting structural changes in a melt.
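The δ56Fe and Δ56Fe values used above follow the standard delta notation: the per-mil deviation of the 56Fe/54Fe ratio from a reference material such as IRMM-014, and the difference of two delta values between coexisting phases. A minimal sketch with illustrative input values (not data from this work):

```python
# Delta notation for iron isotopes: deviation of the 56Fe/54Fe ratio
# of a sample from a reference ratio, expressed in per mil.
def delta56fe(r_sample: float, r_reference: float) -> float:
    """delta56Fe in per mil relative to the reference ratio."""
    return (r_sample / r_reference - 1.0) * 1000.0

def cap_delta(delta_a: float, delta_b: float) -> float:
    """Fractionation between phases A and B: Delta = delta_A - delta_B."""
    return delta_a - delta_b

# Illustrative example: a silicate melt at +0.10 per mil and a carbonate
# melt at -0.03 per mil give Delta(sil-carb) = 0.13 per mil.
print(round(cap_delta(0.10, -0.03), 2))  # -> 0.13
```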
In situ iron isotope analyses by femtosecond laser ablation coupled to MC-ICP-MS were performed on magnetite and olivine grains to reveal variations in iron isotope composition on the microscale. The investigated sample is a melilitite bomb from the Salt Lake Crater group at Honolulu (Oahu, Hawaii), showing strong evidence for interaction with a carbonatite melt. While magnetite grains are rather homogeneous in their iron isotope compositions, olivine grains span a far larger range in iron isotope ratios. The variability of δ56Fe in magnetite is limited, ranging from -0.17 ‰ (± 0.11 ‰, 2SE) to +0.08 ‰ (± 0.09 ‰, 2SE). δ56Fe in olivine ranges from -0.66 ‰ (± 0.11 ‰, 2SE) to +0.10 ‰ (± 0.13 ‰, 2SE). Olivine and magnetite grains hold different information regarding kinetic and equilibrium fractionation due to their different Fe diffusion coefficients. The observations made in the experiments and in the in situ iron isotope analyses suggest that the extremely light iron isotope signatures found in carbonatites are generated by several steps of isotope fractionation during carbonatite genesis, which may involve both equilibrium and kinetic fractionation. Since iron isotopic signatures in natural systems are generated by a combination of multiple factors (pressure, temperature, redox conditions, phase composition and structure, time scale), multi-tracer approaches are needed to explain the signatures found in natural rocks.
Halide perovskites are a class of novel photovoltaic materials that have recently attracted much attention in the photovoltaics research community due to their highly promising optoelectronic properties, including large absorption coefficients and long carrier lifetimes. In this thesis, the charge carrier mobility of halide perovskites is investigated by THz spectroscopy, a contact-free technique that yields the intra-grain sum mobility of electrons and holes in a thin film.
The polycrystalline halide perovskite thin films, provided by Potsdam University, show moderate mobilities in the range from 21.5 to 33.5 cm²V⁻¹s⁻¹. It is shown in this work that the room-temperature mobility is limited by charge carrier scattering at polar optical phonons. The mobility at low temperature is likely limited by scattering at charged and neutral impurities at impurity concentrations N = 10¹⁷-10¹⁸ cm⁻³. Furthermore, it is shown that exciton formation may decrease the mobility at low temperatures. Scattering at acoustic phonons can be neglected at both low and room temperature. The analysis of mobility spectra over a broad range of temperatures for perovskites with various cation compositions shows that the cations have a minor impact on the charge carrier mobility.
The low-dimensional thin films of quasi-2D perovskites with different numbers of [PbI6]4− sheets (n = 2-4) alternating with long organic spacer molecules were provided by S. Zhang from Potsdam University. They exhibit mobilities in the range from 3.7 to 8 cm²V⁻¹s⁻¹. A clear decrease of the mobility is observed with decreasing number of metal-halide sheets n, which likely arises from charge carrier confinement within the metal-halide layers. Modelling the measured THz mobility with the modified Drude-Smith model yields localization lengths from 0.9 to 3.7 nm, in good agreement with the thicknesses of the metal-halide layers. Additionally, the mobilities are found to depend on the orientation of the layers. The charge carrier dynamics also depend on the number of metal-halide sheets n. For the thin films with n = 3-4, the dynamics are similar to those of the 3D MHPs. However, the thin film with n = 2 shows clearly different dynamics, where signs of exciton formation are observed within a 390 fs timeframe after photoexcitation.
Finally, the charge carrier dynamics of CsPbI3 perovskite nanocrystals were investigated, in particular the effect of post-treatments on the charge carrier transport.
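The Drude-Smith analysis referred to above can be illustrated with the classic single-backscattering form of the model; the thesis uses a modified variant, and all parameter values below are illustrative assumptions, not fitted results from the measurements.

```python
E = 1.602176634e-19      # elementary charge (C)
M_E = 9.1093837015e-31   # electron rest mass (kg)

def drude_smith_sigma(omega, n, tau, c, m_eff):
    """Classic Drude-Smith AC conductivity with one backscattering term.

    omega: angular frequency (rad/s), n: carrier density (m^-3),
    tau: scattering time (s), c in [-1, 0] (c = 0 recovers pure Drude),
    m_eff: effective mass (kg).
    """
    x = 1.0 - 1j * omega * tau
    return (n * E**2 * tau / m_eff) / x * (1.0 + c / x)

def dc_mobility_cm2(tau, c, m_eff):
    """DC mobility mu = e*tau*(1 + c)/m_eff, in cm^2 V^-1 s^-1."""
    return E * tau * (1.0 + c) / m_eff * 1e4

# Illustrative numbers: tau = 3 fs, free carriers (c = 0), m* = 0.2 m_e
# give a mobility of a few tens of cm^2 V^-1 s^-1, i.e. the order of
# magnitude reported for the polycrystalline films above.
mu = dc_mobility_cm2(3e-15, 0.0, 0.2 * M_E)
```

In the Drude-Smith picture, c < 0 suppresses the DC mobility, which is one common way to parameterize the carrier localization seen in the quasi-2D films.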
The mitochondrial chaperone complex HSP60/HSP10 facilitates mitochondrial protein homeostasis by folding more than 300 mitochondrial matrix proteins. It has been shown previously that HSP60 is downregulated in the brains of type 2 diabetic (T2D) mice and patients, causing mitochondrial dysfunction and insulin resistance. As HSP60 is also decreased in peripheral tissues of T2D animals, this thesis investigated the effect of an overall reduction of HSP60 on the development of obesity and associated co-morbidities.
To this end, both female and male C57Bl/6N control (i.e. without further alterations in their genome, Ctrl) and heterozygous whole-body Hsp60 knock-out (Hsp60+/-) mice, which exhibit a 50 % reduction of HSP60 in all tissues, were fed a normal chow diet (NCD) or a high-fat diet (HFD, 60 % calories from fat) for 16 weeks and were subjected to extensive metabolic phenotyping including indirect calorimetry, NMR spectroscopy, insulin, glucose and pyruvate tolerance tests, and vena cava insulin injections, as well as histological and molecular analyses.
Interestingly, NCD feeding did not result in any striking phenotype, only a mild increase in energy expenditure in Hsp60+/- mice. Exposing mice to a HFD, however, revealed an increased body weight due to higher muscle mass in female Hsp60+/- mice, with a simultaneous decrease in energy expenditure. Additionally, these mice displayed decreased fasting glycemia. Conversely, male Hsp60+/- mice showed lower body weight gain than controls due to decreased fat mass and an increased energy expenditure, strikingly independent of lean mass. Further, only male Hsp60+/- mice displayed improved HOMA-IR and Matsuda insulin sensitivity indices.
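The HOMA-IR index mentioned above is computed from fasting glucose and insulin with a fixed normalization constant; a minimal sketch (the input values are illustrative, not data from this study):

```python
def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
    """HOMA-IR = fasting glucose (mmol/L) * fasting insulin (uU/mL) / 22.5.

    Lower values indicate higher insulin sensitivity; the constant 22.5
    normalizes a healthy reference subject to a value of about 1.
    """
    return glucose_mmol_l * insulin_uU_ml / 22.5

# Illustrative values only: 5.0 mmol/L fasting glucose, 9.0 uU/mL insulin.
print(round(homa_ir(5.0, 9.0), 1))  # -> 2.0
```

The Matsuda index, by contrast, additionally uses the glucose and insulin values sampled during an oral glucose tolerance test and is not reproduced here.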
Despite the opposite phenotypes with regard to body weight development, Hsp60+/- mice of both sexes show a significantly higher cell number, as well as a reduction in adipocyte size, in the subcutaneous and gonadal white adipose tissue (sc/gWAT). Curiously, this adipocyte hyperplasia, usually associated with positive aspects of WAT function, is disconnected from metabolic improvements, as the gWAT of male Hsp60+/- mice shows mitochondrial dysfunction, oxidative stress, and insulin resistance. Transcriptomic analysis of gWAT shows an upregulation of genes involved in macroautophagy. Consistent with this, expression of microtubule-associated protein 1A/1B light chain 3B (LC3), a protein marker of autophagy, is increased and directly measured lysosomal activity is elevated in the gWAT of male Hsp60+/- mice.
In summary, this thesis revealed a novel gene-nutrient interaction. The reduction of the crucial chaperone HSP60 did not have large effects in mice fed a NCD, but impacted metabolism during diet-induced obesity in a sex-specific manner: despite opposing body weight and body composition phenotypes, both female and male Hsp60+/- mice show signs of protection from high-fat-diet-induced systemic insulin resistance.
This thesis addresses the preparation and characterization of mixed-matrix membranes (MMMs) for gas separation. Various fillers were combined with the membrane material polysulfone to produce MMMs. Three active and two passive fillers were used. The active fillers possessed pore openings capable of separating gases according to molecular size, resulting in a higher ideal separation factor for certain gas pairs than in polysulfone itself. Owing to the permanent channels formed by the pores in the active fillers, gas transport (permeability) is faster than in polysulfone. The active fillers were the zeolite SAPO-34 and two batches of a zeolitic imidazolate framework (ZIF), ZIF-8. The two ZIF-8 batches differed in their specific surface area, so that this influence could be specifically included in the gas transport studies. The passive fillers were an amino-functionalized silica gel and nonporous (dense) glass beads. The silica gel possessed pores that were too large to separate gases effectively; the glass beads could not separate gases at all, as they had no pores.
It is known from the literature that the embedding of fillers often leads to defects in MMMs. One aim of this work was therefore to optimize the embedding. Furthermore, gas transport in the MMMs of this work was to be compared with that in an unloaded polysulfone membrane. Because the active fillers separate more selectively than the membrane material, embedding them was expected to improve the separation performance of the MMMs progressively with increasing filler loading.
To investigate their properties, the MMMs were characterized by scanning electron microscopy (SEM), gas permeation measurements (GP), and thermogravimetric analysis coupled with mass spectrometry (TGA-MS).
SEM investigations showed improved embedding when a polymeric adhesion promoter was used. The optimized embedding was compared with embedding without an adhesion promoter and with literature results describing the use of various silanes as adhesion promoters. Despite the improved embedding, only a small increase in the ideal separation factor of the MMMs over the unloaded polysulfone membranes was observed, and only at low filler loadings (10 and 20 wt% relative to the membrane material). At higher filler loadings (30, 40, and 50 wt%), a marked increase in permeability accompanied by a sharply decreasing ideal separation factor was observed. TGA-MS measurements further revealed that the pore openings of the zeolite SAPO-34 were blocked by water molecules. This prevented gas transport within the filler, so that its separation capability could not be exploited. The ZIF-8 fillers (regardless of batch) and the amino-functionalized silica gel showed no blocked pores; nevertheless, these MMMs showed no improvement in gas separation or gas transport properties either. MMMs with dense glass beads as filler showed the same gas separation and gas transport behavior as all MMMs with the aforementioned fillers.
In this work, despite optimized embedding of inorganic fillers, no improvement in the gas separation or gas transport properties of MMMs could be demonstrated. Rather, an influence of the filler amount on the gas transport properties of MMMs was found. The changes relative to polysulfone stem from the consequences of embedding fillers in the matrix polymer: embedding alters the properties of the matrix polymer, which in turn affects gas transport. Furthermore, the resulting membrane structure was documented to depend on the filler loading, independently of the filler type. A correlation between filler amount and altered membrane structure was found.
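The ideal separation factor used as the figure of merit above is simply the ratio of two single-gas permeabilities; a minimal sketch with illustrative numbers (not measured values from this work):

```python
def ideal_selectivity(perm_a: float, perm_b: float) -> float:
    """Ideal separation factor alpha(A/B) = P_A / P_B, computed from
    single-gas permeabilities (e.g. in Barrer) of the same membrane."""
    return perm_a / perm_b

# Illustrative single-gas permeabilities in Barrer for a hypothetical
# gas pair A/B; a higher alpha means a more selective membrane.
alpha = ideal_selectivity(5.6, 0.25)
```

Because both permeabilities are measured on pure gases, the ideal factor ignores competitive sorption and is generally an upper bound on mixed-gas selectivity.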
Influenza A virus (IAV) is a pathogen responsible for severe seasonal epidemics threatening human and animal populations every year. During the viral assembly process in infected cells, the plasma membrane (PM) has to bend in localized regions into a vesicle towards the extracellular side. Studies in cellular models have proposed that different viral proteins, including M1, might be responsible for inducing membrane curvature in this context, but a clear consensus has not been reached. M1 is the most abundant protein in IAV particles. It plays an important role in virus assembly and budding at the PM. M1 is recruited to the host cell membrane, where it associates with lipids and other viral proteins. However, the details of M1 interactions with the cellular PM, as well as M1-mediated membrane bending at the budozone, have not been clarified.
In this work, we used several experimental approaches to analyze M1-lipid and M1-M1 interactions. By performing surface plasmon resonance (SPR) analysis, we quantified membrane association for full-length M1 and different genetically engineered M1 constructs (i.e., N- and C-terminally truncated constructs and a mutant of the polybasic region). This allowed us to obtain novel information on the protein regions mediating M1 binding to membranes. By using fluorescence microscopy, cryogenic transmission electron microscopy (cryo-TEM), and three-dimensional (3D) tomography (cryo-ET), we showed that M1 is indeed able to cause membrane deformation on vesicles containing negatively charged lipids, in the absence of other viral components. Further, sFCS analysis showed that simple protein binding is not sufficient to induce membrane restructuring. Rather, it appears that stable M1-M1 interactions and multimer formation are required to alter the three-dimensional structure of the bilayer through the formation of a protein scaffold.
Finally, to mimic the budding mechanism in cells, which arises from the lateral organization of the viral membrane components on lipid raft domains, we created vesicles with lipid domains. Our results showed that local binding of M1 to spatially confined acidic lipids within membrane domains of vesicles led to local M1-induced inward curvature.
The prevalence of diseases associated with misfolded proteins increases with age. When cellular defense mechanisms become limited, misfolded proteins form aggregates and may also develop more stable cross-β structures, ultimately forming amyloid aggregates. Amyloid aggregates are associated with neurodegenerative diseases such as Alzheimer's disease and Huntington's disease. The formation of amyloid deposits, their toxicity, and cellular defense mechanisms have been intensively studied. However, surprisingly little is known about the effects of protein aggregates on cellular signal transduction. It is also not understood whether the presence of aggregation-prone but still soluble proteins affects signal transduction.
In this study, the still soluble, aggregation-prone HttExon1Q74 and its amyloid aggregates were used to analyze the effect of amyloid aggregates on the internalization and receptor activation of G protein-coupled receptors (GPCRs), the largest protein family of mammalian cell surface receptors involved in signal transduction. The aggregated HttExon1Q74, but not its soluble form, inhibited ligand-induced clathrin-mediated endocytosis (CME) of various GPCRs. Most likely, this inhibitory effect is based on a terminal sequestration to the aggregates of the HSC70 chaperone, which is necessary for CME. Using the vasopressin V1a receptor (V1aR) and the corticotropin-releasing factor receptor 1 (CRF1R) as models, it could be shown that the presence of HttExon1Q74 aggregates and the inhibition of ligand-induced CME lead to an accumulation of desensitized receptors at the plasma membrane. In turn, this disrupts Gq-mediated Ca2+ signaling and Gs-mediated cAMP signaling of the V1aR and the CRF1R, respectively. In contrast to HttExon1Q74 amyloid aggregates, soluble HttExon1Q74 as well as amorphous aggregates did not inhibit GPCR internalization and signaling, demonstrating that cellular signal transduction mechanisms are specifically impaired in response to the formation of amyloid aggregates.
In addition, preliminary experiments showed that HttExon1Q74 aggregates provoke an increase in the membrane expression of a protein from a structurally and functionally unrelated membrane protein family, namely the serotonin transporter SERT. As SERT is the main pharmacological target in the treatment of depression, this could shed light on this commonly occurring comorbidity in neurodegenerative diseases, in particular in early disease states.
This dissertation centers on the rediscovery, analysis, and educational-historical contextualization of the progressive-education school project of Eugenie SCHWARZWALD (1872-1940) in Vienna in the first third of the 20th century. The genesis of the school's development reveals the reform-pedagogical interconnections of a school project of supra-regional significance, which decisively shaped the profile as well as the content-related and didactic-methodological design of the school, school life, and teaching. The introduction (Chap. 1) sets out the research interest, the central questions, the evaluated source holdings, and the methodological approach of the study as a historical-critical analysis of the sources consulted. The systematic development of the topic proceeds along three central chapters. These focus on the societal and educational-historical contextualization of the school project within the world of ideas and the socio-structural reality of Vienna (Chap. 2), as well as on biographical approaches to the school's founder, the founding, genesis, shaping, and termination of the school project, its structural and pedagogical characteristics, and its reform-pedagogical features in the first third of the 20th century (Chap. 3). At the same time, exemplary connections to contemporary reform-pedagogical currents are made visible, as is the associated stimulus the SCHWARZWALD school project gave to the school system of Vienna and Austria. One focus of the study is the analysis of the manifold networks of the SCHWARZWALD school with regard to the artistic avant-garde (Chap. 4). The thesis-style summary (Chap. 5) acknowledges SCHWARZWALD's achievements for the Austrian school and education system, including higher education for girls.
Finally, the study asks about the reach of the reform-pedagogical impulses associated with the school project and systematizes conditions for the success and failure of the school reform process. With a view to transfer considerations, this makes the study relevant to current questions of school development.
Anthropogenic activities such as continuous landscape changes threaten biodiversity at both local and regional scales. Metacommunity models attempt to combine these two scales and contribute to a better mechanistic understanding of how spatial processes and constraints, such as fragmentation, affect biodiversity. There is a strong consensus that such structural changes of the landscape tend to negatively affect the stability of metacommunities. However, the interplay of complex trophic communities and landscape structure in particular is not yet fully understood.
In this dissertation, a metacommunity approach is used based on a dynamic and spatially explicit model that integrates population dynamics at the local scale and dispersal dynamics at the regional scale. This approach allows assessing the effect of complex spatial landscape components, such as habitat clustering, on complex species communities, as well as analyzing the population dynamics of a single species. In addition to the impact of a fixed landscape structure, periodic environmental disturbances are also considered, in which a periodic change in habitat availability temporarily alters the landscape structure, as in the seasonal drying of a water body.
On the local scale, the model results suggest that large-bodied animal species, such as predator species at high trophic positions, are more prone to extinction in a state of large patch isolation than smaller species at lower trophic levels.
Increased metabolic losses for species with lower body mass lead to increased energy limitation for species at higher trophic levels and serve as an explanation for a predominant loss of these species. This effect is particularly pronounced for food webs in which species are more sensitive to increased metabolic losses through dispersal and a change in landscape structure.
In addition to the impact of the species composition of a food web on diversity, the strength of local foraging interactions likewise affects the synchronization of population dynamics. Reduced predation pressure leads to more asynchronous population dynamics, which is beneficial for stability as it reduces the risk of correlated extinction events among habitats. On the regional scale, two landscape aspects, namely the mean patch isolation and the formation of local clusters of two patches, promote an increase in β-diversity. Yet the individual composition and robustness of the local species community equally explain a large proportion of the observed diversity patterns.
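Whittaker's multiplicative definition is one common way to quantify the β-diversity discussed above: regional (γ) richness divided by mean local (α) richness. A minimal sketch with hypothetical patch communities:

```python
def whittaker_beta(communities: list[set[str]]) -> float:
    """Whittaker's multiplicative beta-diversity: beta = gamma / mean(alpha),
    where gamma is the regional richness (union of all local communities)
    and alpha is the local species richness of each patch."""
    gamma = len(set().union(*communities))
    mean_alpha = sum(len(c) for c in communities) / len(communities)
    return gamma / mean_alpha

# Hypothetical example: two patches of three species each, sharing one
# species; beta = 5 / 3, i.e. patches differ substantially in composition.
patches = [{"a", "b", "c"}, {"c", "d", "e"}]
print(round(whittaker_beta(patches), 2))  # -> 1.67
```

Identical patches give β = 1; fully distinct patches give β equal to the number of patches, so larger values indicate stronger between-patch turnover.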
A combination of periodic environmental disturbance and patch isolation has a particular impact on the population dynamics of a species. While the periodic disturbance has a synchronizing effect, it can even override emerging asynchronous dynamics in a state of large patch isolation and unify trends in synchronization between different species communities.
In summary, the findings underline a large local impact of species composition and interactions on the local diversity patterns of a metacommunity. In comparison, landscape structures such as fragmentation have a negligible effect on local diversity patterns but a greater impact on regional diversity patterns. In contrast, at the level of population dynamics, regional characteristics such as periodic environmental disturbance and patch isolation have a particularly strong impact and contribute substantially to the understanding of the stability of population dynamics in a metacommunity. These studies demonstrate once again the complexity of ecosystems and the need for further analysis for a better understanding of our surrounding environment and a more targeted conservation of biodiversity.
Cyanobacteria are an abundant bacterial group and are found in a variety of ecological niches all around the globe. When they form blooms, they can pose a real threat to fish and mammals and restrict the use of lakes or rivers for recreational purposes or as a source of drinking water. One of the most abundant bloom-forming cyanobacteria is Microcystis aeruginosa.
In the first part of the study, the role and possible dynamics of RubisCO in M. aeruginosa during high-light irradiation were examined. Its response was analyzed at the protein and peptide level via immunoblotting, immunofluorescence microscopy, and high-performance liquid chromatography (HPLC). It was revealed that large amounts of RubisCO were located outside of carboxysomes under the applied high-light stress. RubisCO aggregated mainly underneath the cytoplasmic membrane, where it forms a putative Calvin-Benson-Bassham (CBB) super complex together with other enzymes of photosynthesis. This complex could be part of an alternative carbon-concentrating mechanism (CCM) in M. aeruginosa, which enables a faster and more energy-saving adaptation of the whole bloom to high-light stress.
Furthermore, the re-localization of RubisCO was delayed in the microcystin-deficient mutant ΔmcyB, and RubisCO was more evenly distributed over the cell in comparison to the wild type. Since the growth of ΔmcyB is not impaired, other cyanopeptides produced by the strain, such as aeruginosin or cyanopeptolin, possibly also play a role in the stabilization of RubisCO and the putative CBB complex, especially in the microcystin-free mutant.
In the second part of this work, the possible role of microcystin as an extracellular signaling peptide during the diurnal cycle was studied. HPLC analysis showed a strong increase of extracellular microcystin in the wild type when the population entered nighttime, which continued into the next day. Together with the increase of extracellular microcystin, a strong decrease of protein-bound intracellular microcystin was observed via immunoblot analysis. Interestingly, the signal of the large subunit of RubisCO (RbcL) also diminished when high amounts of microcystin were present in the surrounding medium. Experiments in which microcystin was added to M. aeruginosa WT and ΔmcyB cultures support this observation, since the immunoblot signals of both subunits of RubisCO and of CcmK, a shell protein of carboxysomes, diminished after the addition of microcystin. In addition, the fluctuation of cyanopeptolin during the diurnal cycle indicates a more prominent role of other cyanopeptides besides microcystin as signaling peptides, both intracellularly and extracellularly.
The life cycle of higher plants is based on recurring phases of growth and development, built on repetitive sequences of cell division, cell expansion, and cell differentiation. This dissertation comprises two projects, each investigating a different topic related to cell expansion. The first project examines an Arabidopsis thaliana mutant exhibiting overall cell enlargement, and the second analyzes two naturally occurring floral morphs of Amsinckia spectabilis (Boraginaceae) differing (amongst others) in style length and anther height due to differences in longitudinal cell elongation. The EMS mutant eop1 was shown to exhibit a petal size increase of 26% caused by cell enlargement. Further phenotypes were detected, such as increased cotyledon size (based on larger cells) as well as increased carpel, sepal, leaf, and pollen sizes. Plant height was increased, and more highly branched trichomes explained the hairy eop1 phenotype. Fine mapping revealed the causal SNP to be a C-to-T transition at the last nucleotide of intron 7 of the INCURVATA11 (ICU11) gene, encoding a 2-oxoglutarate/Fe(II)-dependent dioxygenase, thus causing missplicing of the mRNA. Two T-DNA insertion lines (icu11-2 & icu11-4) confirmed ICU11 as the causal gene by exhibiting increased petal size. A comparison of three icu11 alleles carrying different mutation-related changes, either overexpressing ICU11 or producing modified mRNAs, formed the basis for investigating the molecular mechanism underlying the observed phenotype. Different approaches yielded contradictory results regarding ICU11 protein functionality in the icu11 mutants. A complementation assay proved the three mutants to be exchangeable, and ICU11 overexpression in the wild type led to an icu11-like phenotype, arguing for all three icu11 mutants being GOF mutants. Contradicting this conclusion, the icu11-4 line could be rescued by a genomic ICU11 transgene.
A model was proposed based on the assumption that overexpression of ICU11 inhibits the function of the protein and thus causes the same effect as a LOF protein. Further, icu11-3 (eop1) mutants were shown to have an increased resistance towards paclobutrazol, a gibberellin (GA) inhibitor, and an upregulation of AtGA20ox2, a main GA biosynthesis gene. Additionally, the subcellular localization of ICU11 was found to be cytoplasmic, supporting the assumption that ICU11 affects GA biosynthesis and the overall GA level, possibly explaining the observed (GA-overdose) phenotype.
The second project aimed to identify the genetic basis of the S-locus in Amsinckia spectabilis, as the Amsinckia genus exhibits characteristics untypical of a heterostylous species, such as no obvious self-incompatibility (SI) and the repeated transition towards homostylous and fully selfing variants. The work was based on three Amsinckia spectabilis forms: a heterostylous form, consisting of two floral morphs with reciprocal positioning of the sexual organs (S-morph: high anthers and a short style; L-morph: low anthers and a long style), and two homostylous forms, one large-flowered and partially selfing and the other small-flowered and fully selfing. The maintenance of the two floral morphs is genetically based on the S-locus region, containing genes that encode the morph-specific traits, which are marked by a tight linkage due to suppressed recombination. Natural populations are found to possess a 1:1 S:L morph ratio, which can be explained by predominant disassortative mating of the two morphs, causing the dominant S-allele to occur only in the heterozygous state (heterozygous (Ss) for the S-morph and homozygous recessive (ss) for the L-morph). Investigation of morph-specific phenotypes detected L-morph styles elongated by 56% and S-morph anthers positioned 58% higher. Approximately 50% of the observed size differences were explained by an increase in cell elongation. Moreover, additional phenotypes were found, such as S-morph pollen enlarged by 21% and no obvious SI, confirmed by seed counts after hand pollination, in vivo pollen tube growth, and the development of homozygous dominant SS individuals via selfing. The Amsinckia spec. S-locus was assumed to consist of at least the G- (style length), A- (anther height), and P- (pollen size) loci.
Comparative transcriptomics of the two morphs revealed 22 differentially expressed markers located within two contigs of a PacBio genome assembly of an SS individual, delimiting the S-locus to a region of approximately 23 Mb. In contrast to S-loci characterized elsewhere in the plant kingdom, no strong evidence that a hemizygous region causes the suppressed recombination of the S-locus was found, so an inversion was assumed to be the causal mechanism.
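The 1:1 morph ratio maintained by disassortative mating follows directly from the Ss × ss cross described above: every mating is effectively a test cross. A minimal simulation sketch (illustrative only, not part of the thesis):

```python
import random

def disassortative_offspring(n, seed=1):
    """Simulate n offspring of obligate S-morph (Ss) x L-morph (ss) matings.

    Each offspring receives one random allele from the Ss parent and
    always 's' from the ss parent, so only Ss and ss genotypes arise.
    """
    rng = random.Random(seed)
    offspring = []
    for _ in range(n):
        from_s_morph = rng.choice("Ss")  # the Ss parent segregates S:s at 1:1
        offspring.append("".join(sorted(from_s_morph + "s")))
    return offspring

progeny = disassortative_offspring(100_000)
print(progeny.count("Ss") / len(progeny))  # ~0.5: the 1:1 S:L ratio is restored each generation
```

Note that under this mating scheme SS genotypes never arise, which is why the homozygous dominant SS individuals mentioned above could only be obtained via selfing.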
Identification of chemical mediators that regulate the specialized metabolism in Nostoc punctiforme
(2021)
Specialized metabolites, so-called natural products, are produced by a variety of different organisms, including bacteria and fungi. Due to their wide range of biological activities, including pharmaceutically relevant properties, microbial natural products are an important source for drug development. They are encoded by biosynthetic gene clusters (BGCs), groups of locally clustered genes. By screening genomic data for genes encoding typical core biosynthetic enzymes, modern bioinformatic approaches are able to predict a wide range of BGCs. To date, however, the associated products have been identified for only a small fraction of the predicted BGCs.
The phylum of the cyanobacteria has been shown to be a prolific but largely untapped source of natural products. Multicellular cyanobacterial genera in particular, like Nostoc, harbor a large number of BGCs in their genomes.
A main goal of this study was to develop new concepts for the discovery of natural products in cyanobacteria. Due to its diverse set of orphan BGCs and its amenability to genetic manipulation, Nostoc punctiforme PCC 73102 (N. punctiforme) appeared to be a promising candidate to be established as a model organism for natural product discovery in cyanobacteria. By utilizing a combination of genome mining, bioactivity screening, variations of culture conditions, and metabolic engineering, not only were two new polyketides discovered, but first insights into the regulation of the specialized metabolism in N. punctiforme were also gained during this study.
The cultivation of N. punctiforme to very high densities, achieved by increasing light intensities and CO2 levels, led to enhanced metabolite production and thus to rather complex metabolite extracts. Using a library of CFP reporter mutant strains, each reporting for one of the predicted BGCs, it was shown that eight out of 15 BGCs were upregulated under high-density (HD) cultivation conditions. Furthermore, it could be demonstrated that the supernatant of an HD culture can increase the expression of four of the affected BGCs even under conventional cultivation conditions. This led to the hypothesis that a chemical mediator encoded by one of the affected BGCs accumulates in the HD supernatant and increases the expression of other BGCs as part of a cell-density-dependent regulatory circuit. To identify which of the BGCs could be a main trigger of the presumed regulatory circuit, four BGCs (pks1, pks2, ripp3, ripp4) were targeted for selective activation by overexpressing putative pathway-specific regulatory genes found inside the gene clusters. Transcriptional analysis of the mutants revealed that only the mutant strain targeting the pks1 BGC, called AraC_PKS1, upregulated the expression of its associated BGC. An RNA sequencing study of the AraC_PKS1 mutant strain showed that, besides pks1, the orphan BGCs ripp3 and ripp4 were also upregulated. Furthermore, secondary metabolite production in the AraC_PKS1 mutant strain was further enhanced under high-light and high-CO2 cultivation conditions. The increased production of the pks1 regulator NvlA also had an impact on other regulatory factors, including sigma factors and the RNA chaperone Hfq. Analysis of the AraC_PKS1 cell and supernatant extracts led to the discovery of two novel polyketides, nostoclide and nostovalerolactone, both encoded by the pks1 BGC. Addition of the polyketides to the N. punctiforme wild type demonstrated that the pks1-derived compounds can partly reproduce the effects on secondary metabolite production observed in the AraC_PKS1 mutant strain. This indicates that both compounds act as extracellular signaling factors within a regulatory network. Since not all transcriptional effects found in the AraC_PKS1 mutant strain could be reproduced by the pks1 products, it can be assumed that the regulator NvlA has a global effect and is not exclusively specific to the pks1 pathway.
This study was the first to use a putative pathway-specific regulator for the targeted activation of BGC expression in cyanobacteria. This strategy not only led to the detection of two novel polyketides but also gave first insights into the regulatory mechanisms of the specialized metabolism in N. punctiforme. The study illustrates that understanding regulatory pathways can aid the discovery of novel natural products. Its findings can guide the design of new screening strategies for bioactive compounds in cyanobacteria and help to develop high-titer production platforms for cyanobacterial natural products.
This work develops hybrid methods of imaging spectroscopy for open-pit mining and examines their feasibility compared with the state of the art. The material distribution within a mine face differs on a small scale and within daily assigned extraction segments. These changes can be relevant to subsequent processing steps but are not always visually identifiable prior to extraction. Misclassifications that cause false allocations of extracted material need to be minimized in order to reduce energy-intensive material re-handling. Imaging spectroscopy aims to allocate the relevant deposit-specific materials before extraction and thus allows for efficient material handling after extraction. The aim of this work is the parameterization of imaging spectroscopy for open-pit mining applications and the development and evaluation of a workflow for ground-based spectral characterization of a mine face. An application-based sensor adaptation is proposed: the sensor complexity is reduced by down-sampling the spectral resolution of the system based on the samples' spectral characteristics. This was achieved by evaluating existing hyperspectral outcrop analysis approaches on laboratory sample scans from the Iron Quadrangle in Minas Gerais, Brazil, and by developing a spectral mine-face monitoring workflow that was tested for both an operating and an inactive open-pit copper mine in the Republic of Cyprus.
The workflow presented here is applied to three regional data sets: 1) iron ore samples from Brazil (laboratory); 2) samples and hyperspectral mine-face imagery from the copper-gold-pyrite mine Apliki, Republic of Cyprus (laboratory and mine-face data); and 3) samples and hyperspectral mine-face imagery from the copper-gold-pyrite deposit Three Hills, Republic of Cyprus (laboratory and mine-face data). The hyperspectral laboratory dataset of fifteen Brazilian iron ore samples was used to evaluate different analysis methods and sensor models. Nineteen commonly used methods for analyzing and mapping hyperspectral data were compared with regard to their resulting data products, mapping accuracy, and computation time. Four of the evaluated methods were selected as the best-performing algorithms for the subsequent analyses: the spectral angle mapper (SAM), a support vector machine algorithm (SVM), the binary feature fitting algorithm (BFF), and the EnMAP geological mapper (EnGeoMap). Next, commercially available imaging spectroscopy sensors were evaluated for their usability under open-pit mining conditions. Step-wise downsampling of the data - reducing the number of bands while increasing each band's bandwidth - was performed to investigate whether a sensor could be simplified and ruggedized without a quality fall-off in the mapping results. The impact of the atmosphere, visible in the spectrum between 1300-2010 nm, was reduced by excluding this spectral range from the mapping; this tested the feasibility of the method under realistic open-pit data conditions. Thirteen datasets based on the different downsampled sensors were analyzed with the four predetermined methods. The optimum sensor for spectral mine-face material distinction was determined to be a VNIR-SWIR sensor with 40 nm bandwidths in the VNIR and 15 nm bandwidths in the SWIR spectral range, excluding the atmospherically impacted bands.
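Of the four selected methods, the spectral angle mapper is the simplest: it compares each pixel spectrum with library endmembers via the angle between them in band space, which makes it insensitive to purely brightness-related (illumination) differences. A minimal generic sketch (the two endmembers and all values are invented for illustration; the thesis used full site-specific spectral libraries):

```python
import numpy as np

def spectral_angle(pixel, endmember):
    """Angle (radians) between a pixel spectrum and a reference endmember."""
    cos = np.dot(pixel, endmember) / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(cube, library, max_angle=0.1):
    """Assign each pixel the library class with the smallest spectral angle.

    cube: (rows, cols, bands) array; library: dict name -> (bands,) spectrum.
    Pixels whose best angle exceeds max_angle remain unclassified (None).
    """
    names = list(library)
    refs = np.stack([library[n] for n in names])
    out = np.empty(cube.shape[:2], dtype=object)
    for r in range(cube.shape[0]):
        for c in range(cube.shape[1]):
            angles = [spectral_angle(cube[r, c], ref) for ref in refs]
            best = int(np.argmin(angles))
            out[r, c] = names[best] if angles[best] <= max_angle else None
    return out

# Toy example: brightness-scaled copies of an endmember map to the same class.
library = {"goethite": np.array([0.20, 0.35, 0.50, 0.45]),
           "hematite": np.array([0.10, 0.15, 0.40, 0.60])}
cube = np.array([[0.5 * library["goethite"], 2.0 * library["hematite"]]])
print(sam_classify(cube, library))  # [['goethite' 'hematite']]
```

Because the angle ignores vector length, a shadowed or over-illuminated pixel of the same material still maps correctly, which is one reason SAM is popular for outdoor mine-face scans.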
The Apliki mine sample dataset was used to apply the optimal analysis methods and sensor models identified before. Thirty-six samples were analyzed geochemically and mineralogically. The sample spectra were compiled into two spectral libraries, both distinguishing seven different geochemical-spectral clusters. The reflectance dataset was downsampled to five different sensors, and the five resulting datasets were mapped with the SAM, BFF and SVM methods, achieving mapping accuracies of 85-72%, 85-76% and 57-46%, respectively. One mine-face scan of Apliki was used to apply the developed workflow. The mapping results were validated against the geochemistry and mineralogy of thirty-six documented field sampling points and against a zonation map of the mine face based on sixty-six samples and field mapping. The mine face was analyzed with SAM and BFF, and the analysis maps were visualized on top of a Structure-from-Motion-derived 3D model of the open pit. The mapped geological units and zones correlate well with the expected zonation of the mine face. The third set of hyperspectral imagery, from Three Hills, was available for applying the fully developed workflow. Geochemical sample analyses and laboratory spectral data of fifteen samples from the Three Hills mine, Republic of Cyprus, were used to analyze a downsampled mine-face scan of the open pit. Here, areas of low, medium and high ore content were identified.
The developed workflow is successfully applied to the open-pit mines Apliki and Three Hills, and the spectral maps reflect the prevailing geological conditions. This work guides through the acquisition, preparation and processing of imaging spectroscopy data, the optimal choice of analysis methodology, and the use of simplified, robust sensors that meet the requirements of open-pit mining conditions. It accentuates the importance of a site- and deposit-specific spectral library for mine-face analysis and underlines the need for geological and spectral analysis expertise to successfully implement imaging spectroscopy in the field of open-pit mining.
Formed by the collision between the Adriatic and European plates, the Alpine orogen exhibits significant lithospheric heterogeneity due to the long history of interplay between these plates, other continental and oceanic blocks in the region, and features inherited from preceding orogenies. This implies that the thermal and rheological configuration of the lithosphere also varies significantly throughout the region. Lithology and temperature/pressure conditions exert a first-order control on rock strength, principally via thermally activated creep deformation, and on the depth distribution of the brittle-ductile transition zone, which can be regarded as the lower bound of the seismogenic zone. They therefore influence the spatial distribution of seismicity within a lithospheric plate. In light of this, accurately constrained geophysical models of the heterogeneous Alpine lithospheric configuration are crucial for describing regional deformation patterns. However, despite the amount of research focused on the area, different hypotheses still exist regarding the present-day lithospheric state and how it might relate to the present-day seismicity distribution.
This dissertation seeks to constrain the Alpine lithospheric configuration through a fully 3D integrated modelling workflow that utilises multiple geophysical techniques and integrates all available data sources. The aim is to shed light on how lithospheric heterogeneity may influence the heterogeneous patterns of seismicity distribution observed within the region. This was accomplished through the generation of: (i) 3D seismically constrained structural and density models of the lithosphere, adjusted to match the observed gravity field; (ii) 3D models of the lithospheric steady-state thermal field, adjusted to match observed wellbore temperatures; and (iii) 3D rheological models of long-term lithospheric strength, with the results of each step used as input for the following steps.
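Rheological models like those in step (iii) conventionally take the long-term strength at each depth as the minimum of the frictional (brittle, Byerlee-type) limit and the thermally activated dislocation-creep strength; the depth where creep becomes the weaker mechanism marks the brittle-ductile transition. A minimal 1D sketch with generic, purely illustrative parameters (not the calibrated values used in the dissertation):

```python
import math

def brittle_strength(depth_m, density=2750.0, friction_coeff=0.75, g=9.81):
    """Frictional differential stress limit in Pa (pore-fluid pressure neglected)."""
    return friction_coeff * density * g * depth_m

def creep_strength(temp_k, strain_rate=1e-15, A=1e-25, n=3.0, Q=2.23e5, R=8.314):
    """Dislocation-creep stress in Pa: sigma = (edot / A)**(1/n) * exp(Q / (n * R * T))."""
    return (strain_rate / A) ** (1.0 / n) * math.exp(Q / (n * R * temp_k))

def yield_strength(depth_m, temp_k):
    """Effective strength = min(brittle, ductile); the crossover is the brittle-ductile transition."""
    return min(brittle_strength(depth_m), creep_strength(temp_k))

# With a 20 K/km geotherm, strength first rises frictionally with depth, then
# drops sharply once thermally activated creep takes over.
for depth_km in (5, 15, 30):
    temp = 288.0 + 20.0 * depth_km
    print(depth_km, "km:", round(yield_strength(depth_km * 1e3, temp) / 1e6, 1), "MPa")
```

The same construction, stacked over a 3D temperature field and laterally varying lithologies, yields the crustal and integrated lithospheric strengths against which the seismicity distribution is compared below.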
Results show that the highest strengths within the crust (~1 GPa) and upper mantle (>2 GPa) occur at temperatures characteristic of specific phase transitions (more felsic crust: 200-400 °C; more mafic crust and upper lithospheric mantle: ~600 °C), with almost all seismicity occurring in these regions. However, inherited lithospheric heterogeneity significantly modulates this pattern: seismicity in the thinner and more mafic Adriatic crust (~22.5 km, 2800 kg m−3, 1.3E-06 W m−3) occurs up to higher temperatures (~600 °C) than in the thicker and more felsic European crust (~27.5 km, 2750 kg m−3, 1.3-2.6E-06 W m−3, ~450 °C). Correlations between seismicity in the orogen forelands and lithospheric strength also show different trends, reflecting their different tectonic settings. Events in the plate-boundary setting of the southern foreland correlate with the integrated lithospheric strength, occurring mainly in the weaker lithosphere surrounding the strong Adriatic indenter. Events in the intraplate setting of the northern foreland instead correlate with crustal strength, occurring mainly in the weaker and warmer crust beneath the Upper Rhine Graben.
The findings presented in this work therefore not only represent a state-of-the-art understanding of the lithospheric configuration beneath the Alps and their forelands, but also significantly improve our knowledge of the features that influence the occurrence of seismicity within the region. This highlights the importance of considering the lithospheric state when explaining observed patterns of deformation.
Grant-funded start-up support services were an important element of university-based entrepreneurship support in the state of Brandenburg during the EU funding periods 2007-2013 and 2014-2020. Due to the state's positive economic development, however, the funding volume decreased steadily over the same period, and a further reduction has already been decided for the EU funding period 2021-2027. As a consequence, without adjustments to the established funding structures, start-up support services at Brandenburg universities will be further reduced or will erode. This thesis therefore addresses, among other things, the question of how a theoretical reference model for grant-funded university start-up counseling can be designed to cope with reduced funding rates while maintaining the diversity of services offered.
To answer this question, the funded project BIEM Startup Navigator serves as the object of investigation. This start-up counseling project was carried out at six Brandenburg universities from 2010 to 2014. Using the models and premises of principal-agent theory, a theoretical framework is first established, on the basis of which the empirical investigation is conducted. Principal-agent theory is used to identify the organizations, individuals and institutions involved; furthermore, its main problem areas and solution approaches are discussed with respect to the investigation of the BIEM Startup Navigator.
In the course of the investigation, the implementation concepts of the funded project at six university locations and the data of 610 participants and 288 start-ups, among other things, are analyzed in order to identify and describe logical relationships and interactions. Various theoretical assumptions regarding project effectiveness and efficiency, cost distribution, and conceptual design are formulated as 24 working hypotheses and applied to the investigation. The hypotheses are verified or falsified on the basis of the combined findings from literature reviews and the results of the empirical investigation.
In the course of the thesis, the agency costs arising in principal-agent theory are also described using the example of the BIEM Startup Navigator, and ex post inefficiencies in the screening and signaling processes carried out are identified.
The theoretical reference model for grant-funded start-up counseling at Brandenburg universities developed in this thesis is intended to cope with declining EU funding without a simultaneous reduction of start-up support services at the universities. To this end, the reference model shows how the results of the empirical investigation can be used to reduce the agency costs of grant-funded start-up counseling.
This thesis deals with start-ups founded by academics with a migration background. It examines, above all, the relationship of these start-ups to the environment in which they take place, the entrepreneurial ecosystem, as well as their mutual interactions. The object of research is the intersection of entrepreneurship, migration and high qualification. The focus on the very specific target group of start-ups by academics with a migration background fills an important gap in previous research.
Methodologically, this thesis employs a theoretical frame of reference consisting of neo-institutionalist organization theory (Meyer & Rowan 1977), the resource dependence approach (Pfeffer & Salancik 1978), and the six-dimensional model of the entrepreneurial ecosystem (Isenberg 2011). Start-ups by academics with a migration background must adapt their internal design to the requirements of the institutional environment in order to secure the necessary legitimacy; as a result, isomorphic organizational structures can emerge across different start-ups. In addition, academic founders with a migration background can enable or facilitate access to non-substitutable resources for founding and business development through inter-organizational activities. The combination of both theories and this explanatory approach therefore constitutes an effective and fitting analytical tool for the present research and provides the reader with a complete overall picture at both the micro and macro levels.
This thesis contains not only data from secondary sources and existing quantitative studies in its descriptive part, but also first-hand information from its own qualitative investigation in the empirical part. For this purpose, a total of 23 semi-structured expert interviews were conducted. Through content analysis following Mayring (2014), several categories were extracted, including, for example, environmental factors influencing legitimacy and non-substitutable resources for start-ups by academics. In addition, the empirical findings yielded several hypotheses for future quantitative research and concrete recommendations for practice.
Detecting and categorizing particular entities in the environment are important visual tasks that humans have had to solve at various points in our evolutionary history. The question arises whether characteristics of entities that were of ecological significance for humans play a particular role during the development of visual categorization.
The current project addressed this question by investigating the effects of developing visual abilities, visual properties and ecological significance on categorization early in life. Our stimuli were monochromatic photographs of structure-like assemblies and surfaces from three categories: vegetation, non-living natural elements, and artifacts. A set of computational and rated visual properties was assessed for these stimuli. Three empirical studies applied coherent research concepts and methods to young children and adults, comprising (a) two card-sorting tasks with preschool children (age: 4.1-6.1 years) and adults (age: 18-50 years), which assessed classification and similarity judgments, and (b) a gaze-contingent eye-tracking search task, which investigated the impact of visual properties and category membership on 8-month-olds' ability to segregate visual structure. Because eye-tracking with infants still poses challenges, a methodological study (c) assessed the effect of infant eye-tracking procedures on data quality with 8- to 12-month-old infants and adults.
In the categorization tasks we found that category membership and visual properties affected the performance of all participant groups. Sensitivity to the respective categories varied between tasks and across the age groups. For example, artifact images hindered infants' visual search but were classified best by adults, whereas sensitivity to vegetation was highest during similarity judgments. Overall, preschool children relied less on visual properties than adults, but some properties (e.g., rated depth, shading) were drawn upon similarly strongly. In children and infants, depth predicted task performance more strongly than shape-related properties did. Moreover, children and infants were sensitive to variations in the complexity of low-level visual statistics. These results suggest that the classification of visual structures, and attention to particular visual properties, is affected by the functional or ecological significance these categories and properties may have for each of the respective age groups.
Based on this, the project highlights the importance of further developmental research on visual categorization with naturalistic, structure-like stimuli. As intended with the current work, this would allow important links to be drawn between developmental and adult research.